Dataset schema. One field per line, with its type and the observed range: "string, length X to Y" gives the minimum and maximum string length, "string, N distinct values" gives the number of unique values, "int64, X to Y" gives the minimum and maximum value, and "sequence" fields are lists.

title: string, length 4 to 295
pmid: string, length 8 to 8
background_abstract: string, length 12 to 1.65k
background_abstract_label: string, 12 distinct values
methods_abstract: string, length 39 to 1.48k
methods_abstract_label: string, length 6 to 31
results_abstract: string, length 65 to 1.93k
results_abstract_label: string, 10 distinct values
conclusions_abstract: string, length 57 to 1.02k
conclusions_abstract_label: string, 22 distinct values
mesh_descriptor_names: sequence
pmcid: string, length 6 to 8
background_title: string, length 10 to 86
background_text: string, length 215 to 23.3k
methods_title: string, length 6 to 74
methods_text: string, length 99 to 42.9k
results_title: string, length 6 to 172
results_text: string, length 141 to 62.9k
conclusions_title: string, length 9 to 44
conclusions_text: string, length 5 to 13.6k
other_sections_titles: sequence
other_sections_texts: sequence
other_sections_sec_types: sequence
all_sections_titles: sequence
all_sections_texts: sequence
all_sections_sec_types: sequence
keywords: sequence
whole_article_text: string, length 6.93k to 126k
whole_article_abstract: string, length 936 to 2.95k
background_conclusion_text: string, length 587 to 24.7k
background_conclusion_abstract: string, length 936 to 2.83k
whole_article_text_length: int64, 1.3k to 22.5k
whole_article_abstract_length: int64, 183 to 490
other_sections_lengths: sequence
num_sections: int64, 3 to 28
most_frequent_words: sequence
keybert_topics: sequence
annotated_base_background_abstract_prompt: string, 1 distinct value
annotated_base_methods_abstract_prompt: string, 1 distinct value
annotated_base_results_abstract_prompt: string, 1 distinct value
annotated_base_conclusions_abstract_prompt: string, 1 distinct value
annotated_base_whole_article_abstract_prompt: string, 1 distinct value
annotated_base_background_conclusion_abstract_prompt: string, 1 distinct value
annotated_keywords_background_abstract_prompt: string, length 28 to 460
annotated_keywords_methods_abstract_prompt: string, length 28 to 701
annotated_keywords_results_abstract_prompt: string, length 28 to 701
annotated_keywords_conclusions_abstract_prompt: string, length 28 to 428
annotated_keywords_whole_article_abstract_prompt: string, length 28 to 701
annotated_keywords_background_conclusion_abstract_prompt: string, length 28 to 428
annotated_mesh_background_abstract_prompt: string, length 53 to 701
annotated_mesh_methods_abstract_prompt: string, length 53 to 701
annotated_mesh_results_abstract_prompt: string, length 53 to 692
annotated_mesh_conclusions_abstract_prompt: string, length 54 to 701
annotated_mesh_whole_article_abstract_prompt: string, length 53 to 701
annotated_mesh_background_conclusion_abstract_prompt: string, length 54 to 701
annotated_keybert_background_abstract_prompt: string, length 100 to 219
annotated_keybert_methods_abstract_prompt: string, length 100 to 219
annotated_keybert_results_abstract_prompt: string, length 101 to 219
annotated_keybert_conclusions_abstract_prompt: string, length 100 to 240
annotated_keybert_whole_article_abstract_prompt: string, length 100 to 240
annotated_keybert_background_conclusion_abstract_prompt: string, length 100 to 211
annotated_most_frequent_background_abstract_prompt: string, length 67 to 217
annotated_most_frequent_methods_abstract_prompt: string, length 67 to 217
annotated_most_frequent_results_abstract_prompt: string, length 67 to 217
annotated_most_frequent_conclusions_abstract_prompt: string, length 71 to 217
annotated_most_frequent_whole_article_abstract_prompt: string, length 67 to 217
annotated_most_frequent_background_conclusion_abstract_prompt: string, length 71 to 217
annotated_tf_idf_background_abstract_prompt: string, length 74 to 283
annotated_tf_idf_methods_abstract_prompt: string, length 67 to 325
annotated_tf_idf_results_abstract_prompt: string, length 69 to 340
annotated_tf_idf_conclusions_abstract_prompt: string, length 83 to 403
annotated_tf_idf_whole_article_abstract_prompt: string, length 70 to 254
annotated_tf_idf_background_conclusion_abstract_prompt: string, length 71 to 254
annotated_entity_plan_background_abstract_prompt: string, length 20 to 313
annotated_entity_plan_methods_abstract_prompt: string, length 20 to 452
annotated_entity_plan_results_abstract_prompt: string, length 20 to 596
annotated_entity_plan_conclusions_abstract_prompt: string, length 20 to 150
annotated_entity_plan_whole_article_abstract_prompt: string, length 50 to 758
annotated_entity_plan_background_conclusion_abstract_prompt: string, length 50 to 758
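As a quick sanity check of the schema, the sketch below shows how a dataset with this layout could be loaded and one record inspected with the Hugging Face datasets library. The repository name "user/pubmed-structured-abstracts" is a placeholder, not the real identifier, and the field accesses simply assume the column names listed above.

```python
# Minimal sketch, assuming a Hugging Face dataset with the columns listed above.
# "user/pubmed-structured-abstracts" is a placeholder repository name.
from datasets import load_dataset

ds = load_dataset("user/pubmed-structured-abstracts", split="train")
record = ds[0]

print(record["title"])                  # article title
print(record["pmid"], record["pmcid"])  # PubMed / PubMed Central identifiers
print(record["num_sections"])           # number of full-text sections

# Each *_text column holds a full-text section and each *_abstract column holds
# the matching abstract portion, so a section-level summarization pair is simply:
source = record["results_text"]
target = record["results_abstract"]
print(len(source), len(target))
```

The record below is a single row of the dataset, with the field values shown in the same order as the schema.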
Trends in academic productivity in the COVID-19 era: analysis of neurosurgical, stroke neurology, and neurointerventional literature.
32998982
Academic physicians aim to provide clinical and surgical care to their patients while actively contributing to a growing body of scientific literature. The coronavirus disease 2019 (COVID-19) pandemic has resulted in procedural-based specialties across the United States witnessing a sharp decline in their clinical volume and surgical cases.
BACKGROUND
The study compared neurosurgical, stroke neurology, and neurointerventional academic output during the pandemic lockdown with output during the same time period in previous years. Editors from a sample of neurosurgical, stroke neurology, and neurointerventional journals provided the total number of original manuscript submissions, broken down by month, from 2016 to 2020. Manuscript submission was used as a surrogate metric for academic productivity.
METHODS
Eight journals were represented. Aggregated across all eight journals, original submissions for 2020 showed a combined average increase of 42.3%. Because the average yearly increase calculated from the 2016-2019 data was 11.2%, the 2020 rise was nearly fourfold the historical rate (42.3%/11.2% ≈ 3.8). For the same journals over the same time period, COVID-19 related publications averaged 6.87% of publications from January to June of 2020.
RESULTS
There was a momentous increase in the number of original submissions for 2020, and the effect was experienced uniformly across all of the represented journals.
CONCLUSION
[ "COVID-19", "Coronavirus Infections", "Efficiency", "Humans", "Neurology", "Neurosurgery", "Pandemics", "Periodicals as Topic", "Pneumonia, Viral", "Publishing", "Quarantine", "Research", "Stroke", "Universities" ]
7528313
Introduction
On January 20, 2020, the first case of the novel coronavirus disease 2019 (COVID-19) in the United States was confirmed.1 The World Health Organization subsequently declared COVID-19 a global pandemic, and on March 13, 2020 the American College of Surgeons recommended the cessation of elective surgeries and the triaging of remaining cases by the level of acuity.2–4 The recommendation was cemented across the United States when surgical departments were required to cancel all elective surgeries by executive order5 to help to reduce the burden on the healthcare system during the pandemic.6 Surveys of physicians across varied specialties have unanimously demonstrated marked disruptions in clinical practice due to the pandemic.7–10 Interestingly, these disruptions encompass not only elective procedural volumes but also reductions in urgent and emergent procedures. Studies have reported decreased admissions for ischemic stroke,11 a substantial reduction in mechanical thrombectomy volumes,12 and a sharp decline in the number of stroke imaging evaluations at the height of the pandemic.13 In a nationwide survey of US neurointerventionalists, over three-quarters of respondents similarly reported greater than 25% reductions in emergent procedures during the pandemic, with over two-thirds of respondents reporting greater than 50% reductions in overall procedural volumes.14 Additionally, many institutions and departments have attempted to mitigate the risk of exposure to physicians by using telehealth for clinic visits, postponing non-urgent clinic visits, and reducing the number of physicians carrying out rounds or examining patients in hospital facilities. These factors combined have created an unprecedented situation, where many physicians have faced dramatically reduced clinical and surgical responsibilities. The present study explored whether the restrictions of the pandemic offered academic neurosurgeons, stroke neurologists, and neurointerventionalists opportunities for increased participation in academic activity during the pandemic lockdown period, as reflected by submission of manuscripts.
Methods
Editors who serve as representatives of a spectrum of clinical journals in neurosurgery, stroke neurology, and neurointerventional surgery were invited to participate voluntarily in this study. Manuscript submission was used as a surrogate metric for academic productivity. Respondents provided the total number of original manuscript submissions, organized monthly from January to June, from the year 2016 to 2020. The pandemic lockdown period was defined as March to May 2020, when a significant surge in infections was encountered in the northeast and west of the United States. A total of eight neurosurgical, stroke, and neurointerventional journals were represented: Neurosurgery, Journal of Neurosurgery, Journal of Neurosurgery: Spine, Journal of Neurosurgery: Pediatrics, American Journal of Neuroradiology, Journal of NeuroInterventional Surgery, Stroke, and World Neurosurgery. For the American Journal of Neuroradiology, Journal of NeuroInterventional Surgery, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics, the total number of rejections by month for the same time period were also provided. Total number of submissions by month were plotted on a line graph, and the results were separated by year. Statistical analysis using a linear regression model was performed using R (version 3.4.3, Vienna, Austria), in order to obtain the projected submissions during the pandemic lockdown based on previous submissions from 2016 to 2019. Rejection rates were analyzed using two-sample Student’s t-test, with significance set at p<0.05. To shed light on how much of a given surge was due to topics related to COVID-19, an online search was performed using each journal’s respective website with attention paid to topic and the date of online publication.
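The analysis described above was run in R 3.4.3; as a rough illustration, an analogous computation in Python (projecting 2020 submissions from a 2016-2019 linear trend and comparing rejection rates with a two-sample t-test) might look like the sketch below. All submission counts and rejection rates in it are made-up placeholders, not values from the study.

```python
# Rough Python analogue of the analysis described above (the study itself used R 3.4.3).
# Every number below is an illustrative placeholder, not data from the paper.
import numpy as np
from scipy import stats

# Yearly original-submission counts for one journal, 2016-2019 (placeholders).
years = np.array([2016, 2017, 2018, 2019], dtype=float)
submissions = np.array([800.0, 880.0, 950.0, 1040.0])

# Fit a linear trend and project the expected 2020 submission count.
fit = stats.linregress(years, submissions)
projected_2020 = fit.slope * 2020 + fit.intercept
observed_2020 = 1500.0  # placeholder observed count
print(f"projected: {projected_2020:.0f}, observed: {observed_2020:.0f}, "
      f"ratio: {observed_2020 / projected_2020:.2f}")

# Two-sample Student's t-test on rejection rates (placeholder percentages),
# pooled 2016-2019 vs 2020, significance threshold p < 0.05.
rates_2016_2019 = np.array([61.0, 63.5, 58.9, 62.2, 60.4, 64.1])
rates_2020 = np.array([63.5, 61.5, 59.5, 66.0, 62.8, 64.9])
t_stat, p_value = stats.ttest_ind(rates_2016_2019, rates_2020)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p >= 0.05 -> no significant change
```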
Results
Eight academic clinical journals to which neurosurgeons, stroke neurologists, and neurointerventionalists tend to submit material were represented. Our study included each of the top six neurosurgical, stroke neurology, and neurointerventional journals as defined by Google Scholar Metrics, which uses the h5-index for articles published in the last five complete years. All of the journals exhibited a marked increase in the number of original submissions for 2020, particularly during the pandemic lockdown period from March to May (figure 1). The average yearly percent increase using the 2016–2019 data for each journal was calculated and compared with the percent increase for 2020 (table 1; figure 2).
Table 1 caption: Data for percent 2020 increase, average yearly 2016–2019 increase, proportion of observed over expected, and percent of COVID-19 related articles by journal. Percentages denote the observed percent increase from 2019 to 2020, the average 2016–2019 yearly percent increase, and the proportion of COVID-19 related articles to the total count of published articles from January 1 to June 30, 2020 for each journal. Ratio denotes the 2020 increase over the average yearly 2016–2019 increase.
Figure 1 caption: Trends in the number of original submissions by month and year from January to June of 2016–2020. Combined submissions data from seven neurosurgery, stroke neurology, and neurointerventional journals displayed as a graph for the first half of the year for the past 5 years. Limited data were provided for Stroke.
Figure 2 caption: Actual and projected number of original submissions for the first half of 2020 based on previous submissions data from 2016 to 2019. The total count of original submissions was predicted with a linear regression model using the submission counts from the previous years since 2016.
Submission rejection rate
Additionally, the total number of rejections for the same time period was provided by five journals. For the Journal of NeuroInterventional Surgery, the rejection rate for 2020 was 61.5%, similar to the 58.5% rejection rate for 2019 submissions (figure 3). For the American Journal of Neuroradiology, the rejection rate in original submissions for 2020 was 59.5%, lower than the average yearly rejection rate of 67.9%. For the Journal of Neurosurgery, the 2020 rejection rate was 73.5%, higher than the yearly rejection rate of 62.5%. For JNS: Spine and JNS: Pediatrics, the 2020 rejection rates were 69.0% and 54.0%, respectively, while their historical average rejection rates were 66.1% and 43.2%. The combined average rejection rate for 2020 was 63.5%, just 2.1 percentage points higher than the 2019 combined average of 61.4%. These journals did not exhibit a statistically significant change in their rejection rates for 2020 compared with 2016–2019 (p=0.387).
Figure 3 caption: Rejection rate of original submissions by month and year from January to June of 2016–2020. Journals represented are the Journal of NeuroInterventional Surgery, American Journal of Neuroradiology, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics.
All journals aggregated
Looking at the aggregated data from all eight journals as a whole, the combined average increase of 42.3% in original submissions for 2020 was markedly higher than the average increase of 11.2% exhibited in previous years. While the projections from the 2016–2019 data for 2020 submissions were modeled graphically, the stark difference between the forecast and the actual values shows that submissions for 2020 did indeed increase at a greater rate than predicted (figure 2). These findings are statistically significant, manifested by the high reproducibility of this trend across all eight journals. Finally, for the same journals in the same time period, a search was performed to analyze the number of COVID-19 related publications compared with the total number of publications. The analysis showed that the average percent of COVID-19 related publications from January to June of 2020 was 6.87% (table 1). Individual trends for each journal are shown in the online supplemental data.
Conclusion
There was an unprecedented increase in article submissions to eight major neurosurgical, stroke neurology, and neurointerventional peer-reviewed journals during the pandemic, with a combined average increase of 42% for 2020 compared with the average expected increase of 11% found during 2016–2019. COVID-19 related articles comprised just under 7% of the total submissions from January to June of 2020. These findings suggest that reductions in clinical and surgical workload during the pandemic may have translated to increased academic productivity among neurosurgical, stroke, and neurointerventional physicians.
[ "Submission rejection rate", "All journals aggregated" ]
[ "Additionally, the total number of rejections for the same time period were provided by five journals. For the Journal of NeuroInterventional Surgery, the rejection rate for 2020 was 61.5%, similar to the 58.5% rejection rate for 2019 submissions (figure 3). For the American Journal of Neuroradiology, the rejection rate in original submissions for 2020 was 59.5%, lower than the average yearly rejection rate of 67.9%. For the Journal of Neurosurgery, the 2020 rejection rate was 73.5%, higher than the yearly rejection rate of 62.5%. For JNS: Spine and JNS: Pediatrics, the 2020 rejection rates were 69.0% and 54.0%, respectively, while their historical average rejection rates were 66.1% and 43.2%, respectively. The combined average rejection rate for 2020 was 63.5%, just 2.1% higher than the 2019 combined average rejection rate of 61.4%. These journals did not exhibit a statistically significant change in their rejection rates for 2020 compared with their previous years of 2016–2019 (p=0.387).\nRejection rate of original submissions by month and year from January to June of 2016–2020. Journals represented are Journal of NeuroInterventional Surgery, American Journal of Neuroradiology, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics.", "Looking at the aggregated data from all eight journals as a whole, the combined average increase of 42.3% in their original submissions for 2020 was markedly higher than the average increase of 11.2% exhibited in their previous years. While the projections from the 2016–2019 data for 2020 submissions were modeled graphically, the stark difference between the forecast and actual value from the graph shows that the submissions for 2020 did indeed increase at a greater rate than the predicted values (figure 2). These findings are statistically significant, manifested by the high reproducibility of this trend in all eight of our journals. Finally, for the same journals in the same time period, a search was performed to analyze the number of COVID-19 related publications compared with the total number of publications. The analysis showed that the average percent of COVID-19 related publications from January to June of 2020 was 6.87% (table 1). Individual trends for each journal are shown in the (online supplemental data).\n\n\n" ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Submission rejection rate", "All journals aggregated", "Discussion", "Conclusion" ]
[ "On January 20, 2020, the first case of the novel coronavirus disease 2019 (COVID-19) in the United States was confirmed.1 The World Health Organization subsequently declared COVID-19 a global pandemic, and on March 13, 2020 the American College of Surgeons recommended the cessation of elective surgeries and the triaging of remaining cases by the level of acuity.2–4 The recommendation was cemented across the United States when surgical departments were required to cancel all elective surgeries by executive order5 to help to reduce the burden on the healthcare system during the pandemic.6\nSurveys of physicians across varied specialties have unanimously demonstrated marked disruptions in clinical practice due to the pandemic.7–10 Interestingly, these disruptions encompass not only elective procedural volumes but also reductions in urgent and emergent procedures. Studies have reported decreased admissions for ischemic stroke,11 a substantial reduction in mechanical thrombectomy volumes,12 and a sharp decline in the number of stroke imaging evaluations at the height of the pandemic.13 In a nationwide survey of US neurointerventionalists, over three-quarters of respondents similarly reported greater than 25% reductions in emergent procedures during the pandemic, with over two-thirds of respondents reporting greater than 50% reductions in overall procedural volumes.14 Additionally, many institutions and departments have attempted to mitigate the risk of exposure to physicians by using telehealth for clinic visits, postponing non-urgent clinic visits, and reducing the number of physicians carrying out rounds or examining patients in hospital facilities. These factors combined have created an unprecedented situation, where many physicians have faced dramatically reduced clinical and surgical responsibilities. The present study explored whether the restrictions of the pandemic offered academic neurosurgeons, stroke neurologists, and neurointerventionalists opportunities for increased participation in academic activity during the pandemic lockdown period, as reflected by submission of manuscripts.", "Editors who serve as representatives of a spectrum of clinical journals in neurosurgery, stroke neurology, and neurointerventional surgery were invited to participate voluntarily in this study. Manuscript submission was used as a surrogate metric for academic productivity. Respondents provided the total number of original manuscript submissions, organized monthly from January to June, from the year 2016 to 2020. The pandemic lockdown period was defined as March to May 2020, when a significant surge in infections was encountered in the northeast and west of the United States. A total of eight neurosurgical, stroke, and neurointerventional journals were represented: Neurosurgery, Journal of Neurosurgery, Journal of Neurosurgery: Spine, Journal of Neurosurgery: Pediatrics, American Journal of Neuroradiology, Journal of NeuroInterventional Surgery, Stroke, and World Neurosurgery. For the American Journal of Neuroradiology, Journal of NeuroInterventional Surgery, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics, the total number of rejections by month for the same time period were also provided. Total number of submissions by month were plotted on a line graph, and the results were separated by year. 
Statistical analysis using a linear regression model was performed using R (version 3.4.3, Vienna, Austria), in order to obtain the projected submissions during the pandemic lockdown based on previous submissions from 2016 to 2019. Rejection rates were analyzed using two-sample Student’s t-test, with significance set at p<0.05. To shed light on how much of a given surge was due to topics related to COVID-19, an online search was performed using each journal’s respective website with attention paid to topic and the date of online publication.", "Eight academic clinical journals to which neurosurgeons, stroke neurologists, and neurointerventionalists tend to submit material were represented. Our study included each of the top six neurosurgical, stroke neurology, and neurointerventional journals as defined by Google Scholar Metrics which uses h5-index for articles published in the last five complete years. All of the journals exhibited a marked increase in the number of original submissions for the year of 2020, particularly during the pandemic lockdown period from March to May (figure 1). The average yearly percent increase using the 2016–2019 data for each journal was calculated and compared with the percent increase for 2020 (table 1; figure 2).\nData for percent 2020 increase, average yearly 2016–2019 increase, proportion of observed over expected, and percent of COVID-19 related articles by journal\nTrends in the number of original submissions by month and year from January to June of 2016–2020. Combined submissions data from seven neurosurgery, stroke neurology, and neurointerventional journals displayed as a graph for the first half of the year for the past 5 years. Limited data were provided for stroke.\nActual and projected number of original submissions for first half of 2020 based on previous submissions data from 2016 to 2019. The total count of original submissions was predicted based on a linear regression model using the submission counts from the previous years since 2016.\nPercentages denote the observed percent increase from 2019 to 2020, the average 2016–2019 yearly percent increase, and the proportion of COVID-19 related articles to the total count of published articles from January 1 to June 30, 2020 for each journal. Ratio denotes the 2020 increase over the average yearly 2016–2019 increase.\n Submission rejection rate Additionally, the total number of rejections for the same time period were provided by five journals. For the Journal of NeuroInterventional Surgery, the rejection rate for 2020 was 61.5%, similar to the 58.5% rejection rate for 2019 submissions (figure 3). For the American Journal of Neuroradiology, the rejection rate in original submissions for 2020 was 59.5%, lower than the average yearly rejection rate of 67.9%. For the Journal of Neurosurgery, the 2020 rejection rate was 73.5%, higher than the yearly rejection rate of 62.5%. For JNS: Spine and JNS: Pediatrics, the 2020 rejection rates were 69.0% and 54.0%, respectively, while their historical average rejection rates were 66.1% and 43.2%, respectively. The combined average rejection rate for 2020 was 63.5%, just 2.1% higher than the 2019 combined average rejection rate of 61.4%. These journals did not exhibit a statistically significant change in their rejection rates for 2020 compared with their previous years of 2016–2019 (p=0.387).\nRejection rate of original submissions by month and year from January to June of 2016–2020. 
Journals represented are Journal of NeuroInterventional Surgery, American Journal of Neuroradiology, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics.\nAdditionally, the total number of rejections for the same time period were provided by five journals. For the Journal of NeuroInterventional Surgery, the rejection rate for 2020 was 61.5%, similar to the 58.5% rejection rate for 2019 submissions (figure 3). For the American Journal of Neuroradiology, the rejection rate in original submissions for 2020 was 59.5%, lower than the average yearly rejection rate of 67.9%. For the Journal of Neurosurgery, the 2020 rejection rate was 73.5%, higher than the yearly rejection rate of 62.5%. For JNS: Spine and JNS: Pediatrics, the 2020 rejection rates were 69.0% and 54.0%, respectively, while their historical average rejection rates were 66.1% and 43.2%, respectively. The combined average rejection rate for 2020 was 63.5%, just 2.1% higher than the 2019 combined average rejection rate of 61.4%. These journals did not exhibit a statistically significant change in their rejection rates for 2020 compared with their previous years of 2016–2019 (p=0.387).\nRejection rate of original submissions by month and year from January to June of 2016–2020. Journals represented are Journal of NeuroInterventional Surgery, American Journal of Neuroradiology, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics.\n All journals aggregated Looking at the aggregated data from all eight journals as a whole, the combined average increase of 42.3% in their original submissions for 2020 was markedly higher than the average increase of 11.2% exhibited in their previous years. While the projections from the 2016–2019 data for 2020 submissions were modeled graphically, the stark difference between the forecast and actual value from the graph shows that the submissions for 2020 did indeed increase at a greater rate than the predicted values (figure 2). These findings are statistically significant, manifested by the high reproducibility of this trend in all eight of our journals. Finally, for the same journals in the same time period, a search was performed to analyze the number of COVID-19 related publications compared with the total number of publications. The analysis showed that the average percent of COVID-19 related publications from January to June of 2020 was 6.87% (table 1). Individual trends for each journal are shown in the (online supplemental data).\n\n\n\nLooking at the aggregated data from all eight journals as a whole, the combined average increase of 42.3% in their original submissions for 2020 was markedly higher than the average increase of 11.2% exhibited in their previous years. While the projections from the 2016–2019 data for 2020 submissions were modeled graphically, the stark difference between the forecast and actual value from the graph shows that the submissions for 2020 did indeed increase at a greater rate than the predicted values (figure 2). These findings are statistically significant, manifested by the high reproducibility of this trend in all eight of our journals. Finally, for the same journals in the same time period, a search was performed to analyze the number of COVID-19 related publications compared with the total number of publications. The analysis showed that the average percent of COVID-19 related publications from January to June of 2020 was 6.87% (table 1). 
Individual trends for each journal are shown in the (online supplemental data).\n\n\n", "Additionally, the total number of rejections for the same time period were provided by five journals. For the Journal of NeuroInterventional Surgery, the rejection rate for 2020 was 61.5%, similar to the 58.5% rejection rate for 2019 submissions (figure 3). For the American Journal of Neuroradiology, the rejection rate in original submissions for 2020 was 59.5%, lower than the average yearly rejection rate of 67.9%. For the Journal of Neurosurgery, the 2020 rejection rate was 73.5%, higher than the yearly rejection rate of 62.5%. For JNS: Spine and JNS: Pediatrics, the 2020 rejection rates were 69.0% and 54.0%, respectively, while their historical average rejection rates were 66.1% and 43.2%, respectively. The combined average rejection rate for 2020 was 63.5%, just 2.1% higher than the 2019 combined average rejection rate of 61.4%. These journals did not exhibit a statistically significant change in their rejection rates for 2020 compared with their previous years of 2016–2019 (p=0.387).\nRejection rate of original submissions by month and year from January to June of 2016–2020. Journals represented are Journal of NeuroInterventional Surgery, American Journal of Neuroradiology, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics.", "Looking at the aggregated data from all eight journals as a whole, the combined average increase of 42.3% in their original submissions for 2020 was markedly higher than the average increase of 11.2% exhibited in their previous years. While the projections from the 2016–2019 data for 2020 submissions were modeled graphically, the stark difference between the forecast and actual value from the graph shows that the submissions for 2020 did indeed increase at a greater rate than the predicted values (figure 2). These findings are statistically significant, manifested by the high reproducibility of this trend in all eight of our journals. Finally, for the same journals in the same time period, a search was performed to analyze the number of COVID-19 related publications compared with the total number of publications. The analysis showed that the average percent of COVID-19 related publications from January to June of 2020 was 6.87% (table 1). Individual trends for each journal are shown in the (online supplemental data).\n\n\n", "This study sought to quantitatively analyze whether the reduced time spent in surgical practice translated into an increase in scientific and clinical academic productivity, specifically by looking at neurosurgery, stroke neurology, and neurointerventional surgery academic output during the period of lockdown.\nThe data clearly demonstrate that there was a momentous increase in the number of original submissions for the year 2020, which was largely beyond the predicted value for the year using a linear regression model. The effects were experienced across all of our represented journals. World Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics, in particular, have faced dramatic increases in total submissions of over 50% compared with the previous year which cannot be accounted for by the historical yearly average percent increases. 
Neurosurgery, Journal of Neurosurgery, Stroke, American Journal of Neuroradiology, and Journal of NeuroInterventional Surgery have all shown increases for 2020 of over 25%, with their total submission counts similarly well above their respective predicted values.\nWhile the restrictions of the pandemic created increased time for clinical research, it has also created substantial challenges to ongoing research efforts. A survey study of neurointerventional research centers revealed widespread disruptions in aneurysm and stroke clinical trials due to missed enrollments and protocol deviations from missed clinical or imaging follow-up.15 Similar reductions in oncologic clinical trial enrollment have been reported.16\nGiven the dramatic increase in submissions for 2020 in the setting of widespread disruptions in ongoing prospective studies, one could argue that the quality of submissions may have decreased during the pandemic lockdown period. We sought to investigate this relationship by using rejection rate as a surrogate from the five journals that were able to provide these data. The rejection rates for the journals that could supply this information did not appear to differ significantly from the rates from previous years (p=0.387), suggesting that the quality of the submissions may have been maintained during the pandemic lockdown despite a large surge in the number of submissions. It is noted that the rate of rejections may be driven in certain journals by the number of issues printed per year as well as the allocated number of pages dictated by the publisher.\nWhen the published articles from the same journals within the same time frame were categorized into COVID-19 pandemic related and non-related, just under 7% of all new article submissions during the pandemic time period were COVID-19 related. There are two ways in which the sharp increases in original submissions can be interpreted: that this is a natural consequence of an increased amount of untapped topics that have now been made available to explore in the wake of COVID-19,17 or that this is a consequence of the unprecedentedly increased time neurosurgeons, stroke neurologists, neurointerventionalists, and trainees have had due to the reductions in the clinical and surgical workload. Both may, in part, help to explain the 2020 surge, yet the relatively small percentage of COVID-19 related articles in comparison with the total number of articles published seems to lend more credence to the latter interpretation that academic physicians across the globe have used this unstructured time to advance scientific knowledge.\nAlthough similar quantitative studies of COVID-19 and its effect on neurosurgical academic output have not been published, it is possible to compare the findings of this study with those of recent studies that used self-reported surveys. Pelargos et al described findings from a survey distributed to neurosurgery residents across the United States and Canada assessing changes to clinical and educational workload. 
More than 91% of residents reported that their clinical responsibilities have been reduced, and 65.2% stated that they have been spending increased time on clinical research.6 Zoia et al similarly conducted a survey of neurosurgery residents in Italy, and observed that participants homogeneously reported an increase in educational and scientific endeavors, with 55.7% reporting an increase in the production of scientific papers and research projects.18 Both findings are similar to our results, and serve to show the consequences of such increased attention directed towards academic engagement.\nThis study has important limitations. First, owing to the observational nature of our study, one cannot draw conclusions about the variables being studied and can only gather correlational relationships. The influence of confounders that could have affected the observed increase in journal submissions, separate from, or in coordination, with the pandemic, cannot fully be evaluated by the present study out of respect for the privacy of investigators. Second, there is a lag time between submission and publication, and we cannot account for COVID-related studies that are still in progress, leading to potential underestimation of the contribution of COVID-related articles to the increase. This was, however, offset by the fact that many COVID-related articles were fast tracked to publication during that period. Third, the study mainly contained neurosurgical journals, and looked only at clinical journals and not neuroscience journals. Clinical research was thought to be a better surrogate of academic productivity as the vast majority of laboratories were closed during the lockdown period. Lastly, there is no straightforward or uniform way in which to report on other potential submission quality metrics across the spectrum of included journals, such as study design or level of evidence, that would allow for standardization of article quality among the eight diverse journals.", "There was an unprecedented increase in article submissions to eight major neurosurgical, stroke neurology, and neurointerventional peer-reviewed journals during the pandemic, with a combined average increase of 42% for 2020 compared with the average expected increase of 11% found during 2016–2019. COVID-19 related articles comprised just under 7% of the total submissions from January to June of 2020. These findings suggest that reductions in clinical and surgical workload during the pandemic may have translated to increased academic productivity among neurosurgical, stroke, and neurointerventional physicians." ]
[ "intro", "methods", "results", null, null, "discussion", "conclusions" ]
[ "brain", "stroke", "statistics" ]
Introduction: On January 20, 2020, the first case of the novel coronavirus disease 2019 (COVID-19) in the United States was confirmed.1 The World Health Organization subsequently declared COVID-19 a global pandemic, and on March 13, 2020 the American College of Surgeons recommended the cessation of elective surgeries and the triaging of remaining cases by the level of acuity.2–4 The recommendation was cemented across the United States when surgical departments were required to cancel all elective surgeries by executive order5 to help to reduce the burden on the healthcare system during the pandemic.6 Surveys of physicians across varied specialties have unanimously demonstrated marked disruptions in clinical practice due to the pandemic.7–10 Interestingly, these disruptions encompass not only elective procedural volumes but also reductions in urgent and emergent procedures. Studies have reported decreased admissions for ischemic stroke,11 a substantial reduction in mechanical thrombectomy volumes,12 and a sharp decline in the number of stroke imaging evaluations at the height of the pandemic.13 In a nationwide survey of US neurointerventionalists, over three-quarters of respondents similarly reported greater than 25% reductions in emergent procedures during the pandemic, with over two-thirds of respondents reporting greater than 50% reductions in overall procedural volumes.14 Additionally, many institutions and departments have attempted to mitigate the risk of exposure to physicians by using telehealth for clinic visits, postponing non-urgent clinic visits, and reducing the number of physicians carrying out rounds or examining patients in hospital facilities. These factors combined have created an unprecedented situation, where many physicians have faced dramatically reduced clinical and surgical responsibilities. The present study explored whether the restrictions of the pandemic offered academic neurosurgeons, stroke neurologists, and neurointerventionalists opportunities for increased participation in academic activity during the pandemic lockdown period, as reflected by submission of manuscripts. Methods: Editors who serve as representatives of a spectrum of clinical journals in neurosurgery, stroke neurology, and neurointerventional surgery were invited to participate voluntarily in this study. Manuscript submission was used as a surrogate metric for academic productivity. Respondents provided the total number of original manuscript submissions, organized monthly from January to June, from the year 2016 to 2020. The pandemic lockdown period was defined as March to May 2020, when a significant surge in infections was encountered in the northeast and west of the United States. A total of eight neurosurgical, stroke, and neurointerventional journals were represented: Neurosurgery, Journal of Neurosurgery, Journal of Neurosurgery: Spine, Journal of Neurosurgery: Pediatrics, American Journal of Neuroradiology, Journal of NeuroInterventional Surgery, Stroke, and World Neurosurgery. For the American Journal of Neuroradiology, Journal of NeuroInterventional Surgery, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics, the total number of rejections by month for the same time period were also provided. Total number of submissions by month were plotted on a line graph, and the results were separated by year. 
Statistical analysis using a linear regression model was performed using R (version 3.4.3, Vienna, Austria), in order to obtain the projected submissions during the pandemic lockdown based on previous submissions from 2016 to 2019. Rejection rates were analyzed using two-sample Student’s t-test, with significance set at p<0.05. To shed light on how much of a given surge was due to topics related to COVID-19, an online search was performed using each journal’s respective website with attention paid to topic and the date of online publication. Results: Eight academic clinical journals to which neurosurgeons, stroke neurologists, and neurointerventionalists tend to submit material were represented. Our study included each of the top six neurosurgical, stroke neurology, and neurointerventional journals as defined by Google Scholar Metrics which uses h5-index for articles published in the last five complete years. All of the journals exhibited a marked increase in the number of original submissions for the year of 2020, particularly during the pandemic lockdown period from March to May (figure 1). The average yearly percent increase using the 2016–2019 data for each journal was calculated and compared with the percent increase for 2020 (table 1; figure 2). Data for percent 2020 increase, average yearly 2016–2019 increase, proportion of observed over expected, and percent of COVID-19 related articles by journal Trends in the number of original submissions by month and year from January to June of 2016–2020. Combined submissions data from seven neurosurgery, stroke neurology, and neurointerventional journals displayed as a graph for the first half of the year for the past 5 years. Limited data were provided for stroke. Actual and projected number of original submissions for first half of 2020 based on previous submissions data from 2016 to 2019. The total count of original submissions was predicted based on a linear regression model using the submission counts from the previous years since 2016. Percentages denote the observed percent increase from 2019 to 2020, the average 2016–2019 yearly percent increase, and the proportion of COVID-19 related articles to the total count of published articles from January 1 to June 30, 2020 for each journal. Ratio denotes the 2020 increase over the average yearly 2016–2019 increase. Submission rejection rate Additionally, the total number of rejections for the same time period were provided by five journals. For the Journal of NeuroInterventional Surgery, the rejection rate for 2020 was 61.5%, similar to the 58.5% rejection rate for 2019 submissions (figure 3). For the American Journal of Neuroradiology, the rejection rate in original submissions for 2020 was 59.5%, lower than the average yearly rejection rate of 67.9%. For the Journal of Neurosurgery, the 2020 rejection rate was 73.5%, higher than the yearly rejection rate of 62.5%. For JNS: Spine and JNS: Pediatrics, the 2020 rejection rates were 69.0% and 54.0%, respectively, while their historical average rejection rates were 66.1% and 43.2%, respectively. The combined average rejection rate for 2020 was 63.5%, just 2.1% higher than the 2019 combined average rejection rate of 61.4%. These journals did not exhibit a statistically significant change in their rejection rates for 2020 compared with their previous years of 2016–2019 (p=0.387). Rejection rate of original submissions by month and year from January to June of 2016–2020. 
Journals represented are Journal of NeuroInterventional Surgery, American Journal of Neuroradiology, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics. Additionally, the total number of rejections for the same time period were provided by five journals. For the Journal of NeuroInterventional Surgery, the rejection rate for 2020 was 61.5%, similar to the 58.5% rejection rate for 2019 submissions (figure 3). For the American Journal of Neuroradiology, the rejection rate in original submissions for 2020 was 59.5%, lower than the average yearly rejection rate of 67.9%. For the Journal of Neurosurgery, the 2020 rejection rate was 73.5%, higher than the yearly rejection rate of 62.5%. For JNS: Spine and JNS: Pediatrics, the 2020 rejection rates were 69.0% and 54.0%, respectively, while their historical average rejection rates were 66.1% and 43.2%, respectively. The combined average rejection rate for 2020 was 63.5%, just 2.1% higher than the 2019 combined average rejection rate of 61.4%. These journals did not exhibit a statistically significant change in their rejection rates for 2020 compared with their previous years of 2016–2019 (p=0.387). Rejection rate of original submissions by month and year from January to June of 2016–2020. Journals represented are Journal of NeuroInterventional Surgery, American Journal of Neuroradiology, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics. All journals aggregated Looking at the aggregated data from all eight journals as a whole, the combined average increase of 42.3% in their original submissions for 2020 was markedly higher than the average increase of 11.2% exhibited in their previous years. While the projections from the 2016–2019 data for 2020 submissions were modeled graphically, the stark difference between the forecast and actual value from the graph shows that the submissions for 2020 did indeed increase at a greater rate than the predicted values (figure 2). These findings are statistically significant, manifested by the high reproducibility of this trend in all eight of our journals. Finally, for the same journals in the same time period, a search was performed to analyze the number of COVID-19 related publications compared with the total number of publications. The analysis showed that the average percent of COVID-19 related publications from January to June of 2020 was 6.87% (table 1). Individual trends for each journal are shown in the (online supplemental data). Looking at the aggregated data from all eight journals as a whole, the combined average increase of 42.3% in their original submissions for 2020 was markedly higher than the average increase of 11.2% exhibited in their previous years. While the projections from the 2016–2019 data for 2020 submissions were modeled graphically, the stark difference between the forecast and actual value from the graph shows that the submissions for 2020 did indeed increase at a greater rate than the predicted values (figure 2). These findings are statistically significant, manifested by the high reproducibility of this trend in all eight of our journals. Finally, for the same journals in the same time period, a search was performed to analyze the number of COVID-19 related publications compared with the total number of publications. The analysis showed that the average percent of COVID-19 related publications from January to June of 2020 was 6.87% (table 1). 
Individual trends for each journal are shown in the (online supplemental data). Submission rejection rate: Additionally, the total number of rejections for the same time period were provided by five journals. For the Journal of NeuroInterventional Surgery, the rejection rate for 2020 was 61.5%, similar to the 58.5% rejection rate for 2019 submissions (figure 3). For the American Journal of Neuroradiology, the rejection rate in original submissions for 2020 was 59.5%, lower than the average yearly rejection rate of 67.9%. For the Journal of Neurosurgery, the 2020 rejection rate was 73.5%, higher than the yearly rejection rate of 62.5%. For JNS: Spine and JNS: Pediatrics, the 2020 rejection rates were 69.0% and 54.0%, respectively, while their historical average rejection rates were 66.1% and 43.2%, respectively. The combined average rejection rate for 2020 was 63.5%, just 2.1% higher than the 2019 combined average rejection rate of 61.4%. These journals did not exhibit a statistically significant change in their rejection rates for 2020 compared with their previous years of 2016–2019 (p=0.387). Rejection rate of original submissions by month and year from January to June of 2016–2020. Journals represented are Journal of NeuroInterventional Surgery, American Journal of Neuroradiology, Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics. All journals aggregated: Looking at the aggregated data from all eight journals as a whole, the combined average increase of 42.3% in their original submissions for 2020 was markedly higher than the average increase of 11.2% exhibited in their previous years. While the projections from the 2016–2019 data for 2020 submissions were modeled graphically, the stark difference between the forecast and actual value from the graph shows that the submissions for 2020 did indeed increase at a greater rate than the predicted values (figure 2). These findings are statistically significant, manifested by the high reproducibility of this trend in all eight of our journals. Finally, for the same journals in the same time period, a search was performed to analyze the number of COVID-19 related publications compared with the total number of publications. The analysis showed that the average percent of COVID-19 related publications from January to June of 2020 was 6.87% (table 1). Individual trends for each journal are shown in the (online supplemental data). Discussion: This study sought to quantitatively analyze whether the reduced time spent in surgical practice translated into an increase in scientific and clinical academic productivity, specifically by looking at neurosurgery, stroke neurology, and neurointerventional surgery academic output during the period of lockdown. The data clearly demonstrate that there was a momentous increase in the number of original submissions for the year 2020, which was largely beyond the predicted value for the year using a linear regression model. The effects were experienced across all of our represented journals. World Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics, in particular, have faced dramatic increases in total submissions of over 50% compared with the previous year which cannot be accounted for by the historical yearly average percent increases. 
Neurosurgery, Journal of Neurosurgery, Stroke, American Journal of Neuroradiology, and Journal of NeuroInterventional Surgery have all shown increases for 2020 of over 25%, with their total submission counts similarly well above their respective predicted values. While the restrictions of the pandemic created increased time for clinical research, it has also created substantial challenges to ongoing research efforts. A survey study of neurointerventional research centers revealed widespread disruptions in aneurysm and stroke clinical trials due to missed enrollments and protocol deviations from missed clinical or imaging follow-up.15 Similar reductions in oncologic clinical trial enrollment have been reported.16 Given the dramatic increase in submissions for 2020 in the setting of widespread disruptions in ongoing prospective studies, one could argue that the quality of submissions may have decreased during the pandemic lockdown period. We sought to investigate this relationship by using rejection rate as a surrogate from the five journals that were able to provide these data. The rejection rates for the journals that could supply this information did not appear to differ significantly from the rates from previous years (p=0.387), suggesting that the quality of the submissions may have been maintained during the pandemic lockdown despite a large surge in the number of submissions. It is noted that the rate of rejections may be driven in certain journals by the number of issues printed per year as well as the allocated number of pages dictated by the publisher. When the published articles from the same journals within the same time frame were categorized into COVID-19 pandemic related and non-related, just under 7% of all new article submissions during the pandemic time period were COVID-19 related. There are two ways in which the sharp increases in original submissions can be interpreted: that this is a natural consequence of an increased amount of untapped topics that have now been made available to explore in the wake of COVID-19,17 or that this is a consequence of the unprecedentedly increased time neurosurgeons, stroke neurologists, neurointerventionalists, and trainees have had due to the reductions in the clinical and surgical workload. Both may, in part, help to explain the 2020 surge, yet the relatively small percentage of COVID-19 related articles in comparison with the total number of articles published seems to lend more credence to the latter interpretation that academic physicians across the globe have used this unstructured time to advance scientific knowledge. Although similar quantitative studies of COVID-19 and its effect on neurosurgical academic output have not been published, it is possible to compare the findings of this study with those of recent studies that used self-reported surveys. Pelargos et al described findings from a survey distributed to neurosurgery residents across the United States and Canada assessing changes to clinical and educational workload. 
More than 91% of residents reported that their clinical responsibilities have been reduced, and 65.2% stated that they have been spending increased time on clinical research.6 Zoia et al similarly conducted a survey of neurosurgery residents in Italy, and observed that participants homogeneously reported an increase in educational and scientific endeavors, with 55.7% reporting an increase in the production of scientific papers and research projects.18 Both findings are similar to our results, and serve to show the consequences of such increased attention directed towards academic engagement. This study has important limitations. First, owing to the observational nature of our study, one cannot draw conclusions about the variables being studied and can only gather correlational relationships. The influence of confounders that could have affected the observed increase in journal submissions, separate from, or in coordination, with the pandemic, cannot fully be evaluated by the present study out of respect for the privacy of investigators. Second, there is a lag time between submission and publication, and we cannot account for COVID-related studies that are still in progress, leading to potential underestimation of the contribution of COVID-related articles to the increase. This was, however, offset by the fact that many COVID-related articles were fast tracked to publication during that period. Third, the study mainly contained neurosurgical journals, and looked only at clinical journals and not neuroscience journals. Clinical research was thought to be a better surrogate of academic productivity as the vast majority of laboratories were closed during the lockdown period. Lastly, there is no straightforward or uniform way in which to report on other potential submission quality metrics across the spectrum of included journals, such as study design or level of evidence, that would allow for standardization of article quality among the eight diverse journals. Conclusion: There was an unprecedented increase in article submissions to eight major neurosurgical, stroke neurology, and neurointerventional peer-reviewed journals during the pandemic, with a combined average increase of 42% for 2020 compared with the average expected increase of 11% found during 2016–2019. COVID-19 related articles comprised just under 7% of the total submissions from January to June of 2020. These findings suggest that reductions in clinical and surgical workload during the pandemic may have translated to increased academic productivity among neurosurgical, stroke, and neurointerventional physicians.
Background: Academic physicians aim to provide clinical and surgical care to their patients while actively contributing to a growing body of scientific literature. The coronavirus disease 2019 (COVID-19) pandemic has resulted in procedural-based specialties across the United States witnessing a sharp decline in their clinical volume and surgical cases. Methods: The study compared the neurosurgical, stroke neurology, and neurointerventional academic output during the pandemic lockdown with the same time period in previous years. Editors from a sample of neurosurgical, stroke neurology, and neurointerventional journals provided the total number of original manuscript submissions, broken down by months, from the year 2016 to 2020. Manuscript submission was used as a surrogate metric for academic productivity. Results: 8 journals were represented. The aggregated data from all eight journals showed a combined average increase of 42.3% in original submissions for 2020. As the average yearly percent increase calculated from the 2016-2019 data for each journal was 11.2%, the yearly increase for 2020 was, in comparison, nearly fourfold. For the same journals in the same time period, the average percent of COVID-19 related publications from January to June of 2020 was 6.87%. Conclusions: There was a momentous increase in the number of original submissions for the year 2020, and its effects were uniformly experienced across all of our represented journals.
Introduction: On January 20, 2020, the first case of the novel coronavirus disease 2019 (COVID-19) in the United States was confirmed.1 The World Health Organization subsequently declared COVID-19 a global pandemic, and on March 13, 2020 the American College of Surgeons recommended the cessation of elective surgeries and the triaging of remaining cases by the level of acuity.2–4 The recommendation was cemented across the United States when surgical departments were required to cancel all elective surgeries by executive order5 to help to reduce the burden on the healthcare system during the pandemic.6 Surveys of physicians across varied specialties have unanimously demonstrated marked disruptions in clinical practice due to the pandemic.7–10 Interestingly, these disruptions encompass not only elective procedural volumes but also reductions in urgent and emergent procedures. Studies have reported decreased admissions for ischemic stroke,11 a substantial reduction in mechanical thrombectomy volumes,12 and a sharp decline in the number of stroke imaging evaluations at the height of the pandemic.13 In a nationwide survey of US neurointerventionalists, over three-quarters of respondents similarly reported greater than 25% reductions in emergent procedures during the pandemic, with over two-thirds of respondents reporting greater than 50% reductions in overall procedural volumes.14 Additionally, many institutions and departments have attempted to mitigate the risk of exposure to physicians by using telehealth for clinic visits, postponing non-urgent clinic visits, and reducing the number of physicians carrying out rounds or examining patients in hospital facilities. These factors combined have created an unprecedented situation, where many physicians have faced dramatically reduced clinical and surgical responsibilities. The present study explored whether the restrictions of the pandemic offered academic neurosurgeons, stroke neurologists, and neurointerventionalists opportunities for increased participation in academic activity during the pandemic lockdown period, as reflected by submission of manuscripts. Conclusion: There was an unprecedented increase in article submissions to eight major neurosurgical, stroke neurology, and neurointerventional peer-reviewed journals during the pandemic, with a combined average increase of 42% for 2020 compared with the average expected increase of 11% found during 2016–2019. COVID-19 related articles comprised just under 7% of the total submissions from January to June of 2020. These findings suggest that reductions in clinical and surgical workload during the pandemic may have translated to increased academic productivity among neurosurgical, stroke, and neurointerventional physicians.
Background: Academic physicians aim to provide clinical and surgical care to their patients while actively contributing to a growing body of scientific literature. The coronavirus disease 2019 (COVID-19) pandemic has resulted in procedural-based specialties across the United States witnessing a sharp decline in their clinical volume and surgical cases. Methods: The study compared the neurosurgical, stroke neurology, and neurointerventional academic output during the pandemic lockdown with the same time period in previous years. Editors from a sample of neurosurgical, stroke neurology, and neurointerventional journals provided the total number of original manuscript submissions, broken down by months, from the year 2016 to 2020. Manuscript submission was used as a surrogate metric for academic productivity. Results: 8 journals were represented. The aggregated data from all eight journals showed a combined average increase of 42.3% in original submissions for 2020. As the average yearly percent increase calculated from the 2016-2019 data for each journal was 11.2%, the yearly increase for 2020 was, in comparison, nearly fourfold. For the same journals in the same time period, the average percent of COVID-19 related publications from January to June of 2020 was 6.87%. Conclusions: There was a momentous increase in the number of original submissions for the year 2020, and its effects were uniformly experienced across all of our represented journals.
3,321
267
[ 239, 184 ]
7
[ "2020", "journal", "rejection", "submissions", "journals", "rate", "rejection rate", "neurosurgery", "average", "increase" ]
[ "pandemic fully evaluated", "emergent procedures pandemic", "neurointerventional surgery stroke", "neurosurgery stroke american", "pandemic surveys physicians" ]
[CONTENT] brain | stroke | statistics [SUMMARY]
[CONTENT] brain | stroke | statistics [SUMMARY]
[CONTENT] brain | stroke | statistics [SUMMARY]
[CONTENT] brain | stroke | statistics [SUMMARY]
[CONTENT] brain | stroke | statistics [SUMMARY]
[CONTENT] brain | stroke | statistics [SUMMARY]
[CONTENT] COVID-19 | Coronavirus Infections | Efficiency | Humans | Neurology | Neurosurgery | Pandemics | Periodicals as Topic | Pneumonia, Viral | Publishing | Quarantine | Research | Stroke | Universities [SUMMARY]
[CONTENT] COVID-19 | Coronavirus Infections | Efficiency | Humans | Neurology | Neurosurgery | Pandemics | Periodicals as Topic | Pneumonia, Viral | Publishing | Quarantine | Research | Stroke | Universities [SUMMARY]
[CONTENT] COVID-19 | Coronavirus Infections | Efficiency | Humans | Neurology | Neurosurgery | Pandemics | Periodicals as Topic | Pneumonia, Viral | Publishing | Quarantine | Research | Stroke | Universities [SUMMARY]
[CONTENT] COVID-19 | Coronavirus Infections | Efficiency | Humans | Neurology | Neurosurgery | Pandemics | Periodicals as Topic | Pneumonia, Viral | Publishing | Quarantine | Research | Stroke | Universities [SUMMARY]
[CONTENT] COVID-19 | Coronavirus Infections | Efficiency | Humans | Neurology | Neurosurgery | Pandemics | Periodicals as Topic | Pneumonia, Viral | Publishing | Quarantine | Research | Stroke | Universities [SUMMARY]
[CONTENT] COVID-19 | Coronavirus Infections | Efficiency | Humans | Neurology | Neurosurgery | Pandemics | Periodicals as Topic | Pneumonia, Viral | Publishing | Quarantine | Research | Stroke | Universities [SUMMARY]
[CONTENT] pandemic fully evaluated | emergent procedures pandemic | neurointerventional surgery stroke | neurosurgery stroke american | pandemic surveys physicians [SUMMARY]
[CONTENT] pandemic fully evaluated | emergent procedures pandemic | neurointerventional surgery stroke | neurosurgery stroke american | pandemic surveys physicians [SUMMARY]
[CONTENT] pandemic fully evaluated | emergent procedures pandemic | neurointerventional surgery stroke | neurosurgery stroke american | pandemic surveys physicians [SUMMARY]
[CONTENT] pandemic fully evaluated | emergent procedures pandemic | neurointerventional surgery stroke | neurosurgery stroke american | pandemic surveys physicians [SUMMARY]
[CONTENT] pandemic fully evaluated | emergent procedures pandemic | neurointerventional surgery stroke | neurosurgery stroke american | pandemic surveys physicians [SUMMARY]
[CONTENT] pandemic fully evaluated | emergent procedures pandemic | neurointerventional surgery stroke | neurosurgery stroke american | pandemic surveys physicians [SUMMARY]
[CONTENT] 2020 | journal | rejection | submissions | journals | rate | rejection rate | neurosurgery | average | increase [SUMMARY]
[CONTENT] 2020 | journal | rejection | submissions | journals | rate | rejection rate | neurosurgery | average | increase [SUMMARY]
[CONTENT] 2020 | journal | rejection | submissions | journals | rate | rejection rate | neurosurgery | average | increase [SUMMARY]
[CONTENT] 2020 | journal | rejection | submissions | journals | rate | rejection rate | neurosurgery | average | increase [SUMMARY]
[CONTENT] 2020 | journal | rejection | submissions | journals | rate | rejection rate | neurosurgery | average | increase [SUMMARY]
[CONTENT] 2020 | journal | rejection | submissions | journals | rate | rejection rate | neurosurgery | average | increase [SUMMARY]
[CONTENT] pandemic | elective | volumes | physicians | reductions | 13 | elective surgeries | clinic visits | clinic | urgent [SUMMARY]
[CONTENT] journal | neurosurgery | journal neurosurgery | neurointerventional | provided total | provided total number | manuscript | total | submissions | surgery [SUMMARY]
[CONTENT] rejection | rejection rate | rate | 2020 | journal | average | increase | submissions | journals | data [SUMMARY]
[CONTENT] increase | neurosurgical stroke | neurosurgical | neurointerventional | stroke | average | pandemic | pandemic combined | compared average expected increase | journals pandemic [SUMMARY]
[CONTENT] journal | rejection | 2020 | submissions | neurosurgery | rate | rejection rate | increase | journals | average [SUMMARY]
[CONTENT] journal | rejection | 2020 | submissions | neurosurgery | rate | rejection rate | increase | journals | average [SUMMARY]
[CONTENT] ||| 2019 | COVID-19 | the United States [SUMMARY]
[CONTENT] previous years ||| months | the year 2016 to 2020 ||| [SUMMARY]
[CONTENT] 8 ||| all eight | 42.3% | 2020 ||| 2016-2019 | 11.2% | yearly | 2020 ||| COVID-19 | January to June of 2020 | 6.87% [SUMMARY]
[CONTENT] the year 2020 [SUMMARY]
[CONTENT] ||| 2019 | COVID-19 | the United States ||| previous years ||| months | the year 2016 to 2020 ||| ||| 8 ||| all eight | 42.3% | 2020 ||| 2016-2019 | 11.2% | yearly | 2020 ||| COVID-19 | January to June of 2020 | 6.87% ||| the year 2020 [SUMMARY]
[CONTENT] ||| 2019 | COVID-19 | the United States ||| previous years ||| months | the year 2016 to 2020 ||| ||| 8 ||| all eight | 42.3% | 2020 ||| 2016-2019 | 11.2% | yearly | 2020 ||| COVID-19 | January to June of 2020 | 6.87% ||| the year 2020 [SUMMARY]
Results of a randomized phase III/IV trial comparing intermittent bolus versus continuous infusion of antihaemophilic factor (recombinant) in adults with severe or moderately severe haemophilia A undergoing major orthopaedic surgery.
33772963
In patients with haemophilia A undergoing surgery, factor VIII (FVIII) replacement therapy by continuous infusion (CI) may offer an alternative to bolus infusion (BI).
INTRODUCTION
In this multicentre, phase III/IV, controlled study (NCT00357656), 60 previously treated adult patients with severe or moderately severe disease undergoing elective unilateral major orthopaedic surgery (knee replacement, n = 48; hip surgery, n = 4; other, n = 8) requiring drain placement were randomized to receive antihaemophilic factor (recombinant) CI (n = 29) or BI (n = 31) through postoperative day 7. Primary outcome measure was cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid within 24 h after surgery, used to establish non-inferiority of CI to BI.
METHODS
CI:BI ratio of cumulative PRBC volume in the 24-h drainage fluid was 0.92 (p-value <.001 for non-inferiority; 95% confidence interval, 0.82-1.05). Total antihaemophilic factor (recombinant) dose per kg body weight received in the combined trans- and postoperative periods was similar with CI and BI to maintain targeted FVIII levels during/after surgery. Treatment-related adverse events (AEs) were reported in five patients treated by CI (eight events) and five treated by BI (six events), including two serious AEs in each arm.
RESULTS
CI administration of antihaemophilic factor (recombinant) is a viable alternative to BI in patients with haemophilia A undergoing major orthopaedic surgery, providing comparable efficacy and safety.
CONCLUSION
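As a rough illustration of the primary non-inferiority comparison reported in the abstract above, the sketch below forms a CI:BI ratio from per-arm drainage PRBC volumes and checks the upper limit of a log-scale confidence interval against the 200% margin. The per-patient values are hypothetical, and the normal approximation on geometric means is a simplification; the trial's actual statistical analysis is not reproduced here.

```python
# Illustrative non-inferiority check of CI vs BI on 24-h drainage PRBC volume.
# Hypothetical per-patient values (10^12 RBC/L); not trial data.
import math
import statistics as st

ci_arm = [3.1, 3.6, 2.9, 3.8, 3.4, 3.2, 3.7, 3.0]
bi_arm = [3.5, 3.3, 3.9, 3.6, 3.2, 3.8, 3.4, 3.7]

# Work on the log scale so the ratio of (geometric) means gets a symmetric CI.
log_ci = [math.log(x) for x in ci_arm]
log_bi = [math.log(x) for x in bi_arm]
diff = st.mean(log_ci) - st.mean(log_bi)
se = math.sqrt(st.variance(log_ci) / len(log_ci) + st.variance(log_bi) / len(log_bi))

ratio = math.exp(diff)
upper_95 = math.exp(diff + 1.96 * se)  # normal approximation, for illustration only

print(f"CI:BI ratio ~ {ratio:.2f}, upper 95% limit ~ {upper_95:.2f}")
print("Non-inferior at the 200% margin" if upper_95 < 2.0 else "Non-inferiority not shown")
```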
[ "Adult", "Blood Coagulation Tests", "Factor VIII", "Hemophilia A", "Hemostasis", "Humans", "Orthopedic Procedures", "Recombinant Proteins" ]
8252548
INTRODUCTION
Antihaemophilic factor (recombinant), plasma/albumin‐free method (ADVATE®; Baxalta US Inc., a Takeda company, Lexington, MA, USA), is a recombinant human coagulation factor VIII (FVIII) indicated for the treatment and prophylaxis of bleeding, including perioperative management, in patients of all ages with haemophilia A.1,2 When used perioperatively, antihaemophilic factor (recombinant) is typically administered by bolus infusion (BI) at time points dictated by its pharmacokinetic (PK) profile. Continuous infusion (CI) was developed to reduce the wide variations in plasma FVIII levels that usually accompany BI and decrease the quantity of infused FVIII concentrate.3–9 CI during and/or after surgery1,10 may stabilize FVIII levels, eliminating the deep troughs characteristic of BI that may increase bleeding risk. Several cohort and non‐controlled studies have indicated that FVIII CI is well tolerated and efficacious for providing perioperative haemostasis for patients with haemophilia A; some studies have suggested that CI may also reduce FVIII consumption compared with BI.3,7,8,11–13 Continuous infusion and BI in the same type of intervention have not been compared in a prospective, controlled setting. The objective of this prospective, randomized phase III/IV study in patients with severe or moderately severe haemophilia A was to assess the perioperative haemostatic efficacy and safety of antihaemophilic factor (recombinant) administered via CI and intermittent BI.
null
null
RESULTS
Patients The study started on 29 May 2006 and was completed on 9 December 2015. Of 85 patients enrolled at 22 sites (in the United States, European Union, Norway and Russia), 72 received the infusion of antihaemophilic factor (recombinant) in the preoperative period for PK determination. Of these, 63 met the criteria for perioperative treatment and were randomized to receive CI (n = 32) or BI (n = 31). Of the patients who received CI, 23 had severe haemophilia A (baseline FVIII level <1 IU/dl) and six had moderately severe haemophilia A (baseline FVIII level 1 to <2 IU/dl). Of the patients who received BI, 26 had severe haemophilia A and five had moderately severe haemophilia A. Three patients randomized to CI did not undergo surgery and were not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (CI, n = 29; BI, n = 31). The safety analysis set included all 72 patients who received at least one dose of antihaemophilic factor (recombinant). Patient disposition is summarized in Figure 1. Each patient underwent one procedure: unilateral knee replacement surgery (n = 48; 24 CI, 24 BI), hip surgery (n = 4; two CI, two BI) or shoulder/elbow/ankle/knee surgery (n = 8; three CI, five BI). All patients were male. Demographic and clinical characteristics were similar between groups; the medians and ranges of age were nearly identical (Table 2). Four patients received enoxaparin or nadroparin as thrombosis prophylaxis. Disposition of patients. †Nine patients were not randomized for the following reasons: excluded because of pharmacokinetics (n = 4), screen failures (n = 2), physician decision (n = 1), sponsor decision (n = 1) and death (n = 1).‡Three patients randomized to receive continuous infusion did not undergo surgery and were therefore not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (continuous infusion, n = 29; bolus infusion, n = 31) Patient demographic and clinical characteristics in the per‐protocol analysis set
Efficacy and exploratory outcomes Information on drainage fluid PRBC was not available for six patients (three CI, three BI) at 24 h after surgery, but the cumulative PRBC/blood volume in the drainage fluid (ie RBC, MCV and haematocrit) during the first 24 h following surgery was comparable between the CI and BI groups (Table 3; Table 4 [by type of surgery]), with a ratio of 0.92 (95% confidence interval for mean, 0.82–1.05; median, 3.4 × 1012 RBCs/l for CI and 3.5 × 1012 RBCs/l for BI). The one‐sided p‐value against the null hypothesis of ratio ≥200% was <.001, confirming the non‐inferiority of CI to BI with a 5% type I error (as the upper confidence limit did not exceed 200%). Key efficacy parameters in the per‐protocol analysis set a Abbreviations: BI, bolus infusion; CI, continuous infusion; PRBC, packed red blood cell; SD, standard deviation. Data shown are from patients in the per‐protocol analysis set, unless otherwise specified. The patient numbers shown for the different parameters vary because some patients did not have all data available. The one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was <.001. The one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was .041. Not statistically significant by a post hoc Pearson's chi‐squared test (p two‐sided = .612, using Monte Carlo simulation with 106 replicates). Not statistically significant by a post hoc Pearson's chi‐squared test (p two‐sided = .132, using Monte Carlo simulation with 106 replicates). Trans‐ and postoperatively combined. Cumulative PRBC volume in drainage fluid at 24 h (1012 RBCs/l) by type of surgery Abbreviations: NA, not applicable; SD, standard deviation. Total blood loss until drain removal, adjusted for expected blood loss, was slightly higher in the CI group than the BI group (Table 3). The mean (95% confidence interval) ratio of blood loss volume for CI versus BI was estimated to be 1.3 (0.8–2.1), and the one‐sided p‐value against the null hypothesis of ratio ≥200% was .041. Most bleeding episodes occurred in patients receiving BI; of four reported bleeding episodes (one episode per patient), three occurred in patients receiving BI and one in a patient receiving CI (Table 3). None of the bleeding episodes was considered by the investigator to be ‘the result of inadequate therapeutic response in the face of proper dosing, necessitating a change in therapeutic regimen'. Patients receiving CI were given more PRBC transfusions than patients receiving BI. PRBC transfusions were required in 18/29 and 13/31 patients receiving CI and BI, respectively, with a mean (range) of 1.3 (1–5) units in patients receiving CI and a mean (range) of 0.9 (1–3) units in patients receiving BI. The total amount of haemoglobin in the cumulative drainage fluid during the first 24 h after surgery and until drain removal (if drainage continued) was comparable between patients who received CI and BI.
In the stratum of patients who underwent unilateral knee replacement (CI, n = 24; BI, n = 24), the point estimate for the mean was 98.21 g/l (95% confidence interval for mean, 89.68–107.56 g/l) for CI and 97.63 g/l (86.07–110.74 g/l) for BI. The ratio of CI to BI was estimated to be 1.01. Clinically relevant postoperative haematomas were observed in two patients receiving CI and two receiving BI (three haematomas per group, for a total of six). Two patients (one CI, one BI) had undergone knee replacement and two patients (one CI, one BI) had undergone hip surgery. The total antihaemophilic factor (recombinant) dose (per kg body weight) administered in the combined transoperative and postoperative periods was similar in the CI and BI groups (Table 3). The global haemostatic efficacy of antihaemophilic factor (recombinant) administered by CI was assessed to be at least as good as that administered by BI. As shown in Table 5, scores of ‘excellent’ were evenly distributed among patients who received CI and patients who received BI. All patients receiving CI had a score of ‘excellent’ or ‘good’. GHEA score Abbreviation: GHEA, Global Hemostatic Efficacy Assessment. Values represent numbers of patients with reported GHEA scores. The GHEA score at postoperative day 8 was missing for six patients. Incremental recovery over time could be analysed only for the BI arm, as BIs in the CI arm were rare. Compared to the value at the loading dose on day 0, the median IR decreased by ~20% after the first week following surgery (day 7), with high variability across individual patients. During the second week, many patients were discharged from hospital and not enough samples were available for analysis. CL could be analysed for the CI arm, but not the BI arm because of insufficient data. The determination of CL used the observed FVIII level as the steady‐state level. This assumption was questionable for days 1 and 4 due to the additional postsurgical BIs and the reduction in infusion rate scheduled at day 3. For days 2, 3 and 5, an increase of ~20% in median CL was observed compared with that from the presurgical full PK analysis. Only at days 6 and 7 was the median CL below the initial value, but high variability in individual patient values was seen throughout the first week. Second‐week data were insufficient for analysis.
Safety outcomes Adverse events observed are summarized in Table 6. In the safety population (N = 72), 230 AEs were reported in 51 patients (70.8%). A total of 14 treatment‐related AEs were reported in 10 patients: five patients treated by CI had eight AEs and five patients treated by BI had six AEs. Ten of the 14 treatment‐related AEs were classified as non‐serious (reported in six patients): anaemia (n = 5), headache (n = 2), allergic dermatitis (n = 1), pruritus (n = 1) and pyrexia (n = 1). Summary of AEs in the safety population a and by treatment arm Abbreviations: AE, adverse event; SAE, serious adverse event. The safety population included nine patients who were not randomized to receive continuous or bolus infusion; no AEs were recorded in these patients. All treatment‐related SAEs were development of FVIII inhibitors. One SAE unrelated to treatment occurred in a patient who died before randomization. Ten SAEs were reported in ten patients. Of these, four SAEs of FVIII inhibitor development (two patients in each group) were considered related to treatment; all four patients had severe haemophilia A, and none required treatment with a bypassing agent. The two patients receiving CI developed high‐titre inhibitors (up to 20.8 and 10.7 BU, respectively, on study days 63 and 57), which later decreased to the low‐titre range in both patients (1.0 and 1.7 BU, respectively). One patient was a 35‐year‐old male who had undergone hip surgery; he had received plasma‐derived FVIII products and had weakly positive Lupus anticoagulants of unknown clinical relevance. The other was a 30‐year‐old male who had undergone left knee replacement. Of the two patients receiving BI who developed FVIII inhibitors, one was a 35‐year‐old male who had undergone left primary total knee replacement and developed a low‐titre inhibitor (transient; maximum 0.89 BU that later decreased to 0.17 BU) on study day 30. The other was a 50‐year‐old male who developed a low‐titre inhibitor on study day 36 with a maximum titre of 3 BU that later decreased to 2.4 BU. The other six SAEs (febrile infection, joint swelling, haemarthrosis, pseudomembranous colitis, multiorgan failure and muscle haemorrhage) were considered to be unrelated to treatment. One patient died during the study. Death was due to multiorgan failure attributed to codeine toxicity. The patient had received antihaemophilic factor (recombinant) only for PK evaluation and was not randomized to treatment by CI or BI and therefore did not undergo surgery.
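The post hoc comparisons above are described as Pearson's chi-squared tests with Monte Carlo simulation. The sketch below shows one way such a simulated p-value could be obtained for the transfusion comparison (18/29 CI vs 13/31 BI); it is illustrative only, uses far fewer replicates than the study, and is not the trial's actual analysis code.

```python
# Monte Carlo (permutation) version of a 2x2 Pearson chi-squared comparison.
import random

def chi2_stat(a, b, n1, n2):
    """Pearson chi-squared statistic for a 2x2 table with a/n1 and b/n2 events."""
    total, events = n1 + n2, a + b
    stat = 0.0
    for obs, n in ((a, n1), (b, n2)):
        for o, e in ((obs, n * events / total), (n - obs, n * (total - events) / total)):
            stat += (o - e) ** 2 / e
    return stat

def monte_carlo_p(a, b, n1, n2, reps=10_000, seed=1):
    """Share of label-shuffled tables whose statistic is at least the observed one."""
    random.seed(seed)
    observed = chi2_stat(a, b, n1, n2)
    labels = [1] * (a + b) + [0] * (n1 + n2 - a - b)  # pooled event indicators
    hits = 0
    for _ in range(reps):
        random.shuffle(labels)
        sim_a = sum(labels[:n1])  # events re-dealt to the first arm
        if chi2_stat(sim_a, a + b - sim_a, n1, n2) >= observed:
            hits += 1
    return hits / reps

# Patients requiring PRBC transfusion: 18 of 29 (CI) vs 13 of 31 (BI).
print(f"Simulated two-sided p ~ {monte_carlo_p(18, 13, 29, 31):.3f}")
```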
CONCLUSION
The administration of antihaemophilic factor (recombinant) by CI resulted in comparable efficacy and safety outcomes and is a viable alternative to intermittent BI in the perioperative haemostatic management of patients with haemophilia A undergoing major orthopaedic surgery. Taking into account the complexity of CI versus BI, it is useful to know that these types of FVIII administration showed non‐inferiority, such that treatment may be optimized for individual patients. These findings may help inform perioperative haemostatic management of these patients, with the goal of maintaining stable FVIII levels during and after surgery, whether by the use of CI or BI regimens.
[ "INTRODUCTION", "Patients", "Study design", "Primary and secondary efficacy outcome measures", "Safety outcome measures", "Additional exploratory outcome measures", "Statistics", "Patients", "Efficacy and exploratory outcomes", "Safety outcomes", "AUTHOR CONTRIBUTIONS" ]
[ "Antihaemophilic factor (recombinant), plasma/albumin‐free method (ADVATE®; Baxalta US Inc., a Takeda company, Lexington, MA, USA), is a recombinant human coagulation factor VIII (FVIII) indicated for the treatment and prophylaxis of bleeding, including perioperative management, in patients of all ages with haemophilia A.\n1\n, \n2\n When used perioperatively, antihaemophilic factor (recombinant) is typically administered by bolus infusion (BI) at time points dictated by its pharmacokinetic (PK) profile.\nContinuous infusion (CI) was developed to reduce the wide variations in plasma FVIII levels that usually accompany BI and decrease the quantity of infused FVIII concentrate.\n3\n, \n4\n, \n5\n, \n6\n, \n7\n, \n8\n, \n9\n CI during and/or after surgery \n1\n, \n10\n may stabilize FVIII levels, eliminating the deep troughs characteristic of BI that may increase bleeding risk. Several cohort and non‐controlled studies have indicated that FVIII CI is well tolerated and efficacious for providing perioperative haemostasis for patients with haemophilia A; some studies have suggested that CI may also reduce FVIII consumption compared with BI.\n3\n, \n7\n, \n8\n, \n11\n, \n12\n, \n13\n\n\nContinuous infusion and BI in the same type of intervention have not been compared in a prospective, controlled setting. The objective of this prospective, randomized phase III/IV study in patients with severe or moderately severe haemophilia A was to assess the perioperative haemostatic efficacy and safety of antihaemophilic factor (recombinant) administered via CI and intermittent BI.", "Eligible previously treated patients (18–70 years of age) had severe or moderately severe haemophilia A at screening (FVIII level ≤2 IU/dl) and were scheduled for elective unilateral major orthopaedic surgery requiring drain placement. Protocol amendments raised the maximum age (previously 65 years) and expanded the type of surgery (previously unilateral primary total knee replacement only). Major orthopaedic surgery was defined as requiring moderate or deep sedation, general anaesthesia, or major nerve conduction blockade and had a significant risk of large‐volume blood loss or blood loss into confined anatomical space. Patients provided written informed consent and were required to have had prior exposure to FVIII concentrates for ≥150 days. Patients were excluded if they met any of the following criteria: detectable FVIII inhibitors at screening (by the central laboratory), history of inhibitors (>0.4 Bethesda units [BU] by Nijmegen modification of the Bethesda assay), scheduled for any other minor or major surgery, laboratory evidence of abnormal haemostasis from causes other than haemophilia A, and current or planned receipt of an immunomodulatory drug other than antiretroviral therapy.", "This phase III/IV, prospective, multicentre, randomized, controlled study was divided into three periods: (1) a preoperative period, including a PK evaluation; (2) an intraoperative and postoperative period, from loading dose to postoperative day 7; and (3) a safety follow‐up period, from postoperative day 8 to the end‐of‐study visit (6 weeks following surgery). The study was conducted in accordance with the principles of the Declaration of Helsinki, and the protocol, informed consent form and all amendments were approved by the ethics committee at each study site. 
The trial is registered at ClinicalTrials.gov (NCT00357656).\nPharmacokinetic analysis was performed before surgery to establish individual FVIII recovery and clearance (CL) values. In the preoperative period (≤60 days before the surgery), patients who had completed a washout of 72 h and were not actively bleeding were infused with antihaemophilic factor (recombinant) 50 ± 5 IU/kg of body weight and had 10 postinfusion samples taken within 48 h. If PK data suggested the presence of subclinical inhibitors (FVIII half‐life <8 h, incremental recovery (IR) <1.5 (IU/dl)/(IU/kg), or CL >5.0 ml/(kg × h), patients were excluded from further participation in the study.\nPatients who completed the preoperative PK phase were randomized 1:1 to treatment by CI or BI through postoperative day 7. Patients were stratified by type of surgery (unilateral knee replacement, hip surgery and shoulder/elbow/ankle/knee [except knee replacement] surgery). Randomization was separate and independent for each stratum. Randomization lists were prepared in blocks with a block size >2 using the random number generator algorithm of Wichmann and Hill \n14\n as modified by McLeod.\n15\n Patients in each group had blood drawn once every 24 h for measurement of FVIII activity. Patients stayed in hospital until postoperative day 7 to receive study treatment per protocol. Patients were discharged from hospital according to site practice.\nPatients randomized to CI could continue on CI or switch to intermittent BI starting on postoperative day 8, at the discretion of the investigator. During the period from postoperative day 8 until 6 weeks ±3 days following surgery, treatment (including mechanical or pharmacologic thrombosis prophylaxis) was also at the discretion of the investigator, depending on current practice of the site during physical rehabilitation.\nAntihaemophilic factor (recombinant) doses during the perioperative period (until postoperative day 7) were based on the PK profile determined before surgery. Within 60 min before surgery, patients received a loading dose of antihaemophilic factor (recombinant) to maintain a minimum FVIII level of at least 80% of normal. The formulas for determining the initial weight‐adjusted loading dose differed, depending on whether the patient was randomized to receive antihaemophilic factor (recombinant) as BI or CI for subsequent management:\n\nBI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/\nt − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]).\n\nBI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/\nt − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);\nCI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]).\nThe loading dose was given intravenously over up to 5 min at a maximum infusion rate of 10 ml/min. If the blood sample drawn ~15 min after infusion showed that the desired FVIII level had not been reached or the activated partial thromboplastin time (aPTT) had not normalized, a supplemental loading dose could be given. 
Surgery could be started only after normalization of the aPTT. To compensate for intraoperative blood loss and increased FVIII consumption, patients received an additional bolus of study drug in the recovery room sufficient to raise the FVIII level by 50 IU/dl. CI was to be started before surgery, immediately following the loading dose; study product was administered at a rate (IU/[kg × h]) calculated according to the following formula: CL × target FVIII level. The infusion was administered using a syringe pump according to this regimen, but always ≥0.4 ml/h. BI treatment started with the loading dose; treatment frequency varied according to the patient's PK profile, but typically included three infusions per 24 h during the first 72 h following the loading dose, infusions every 12 h from postoperative day 3 to 7, and daily infusions from postoperative day 8 to 14. For both CI and BI, FVIII trough levels were to be maintained above 80 IU/dl for the initial 72 h, at 50–100 IU/dl through postoperative day 7 and >30 IU/dl for postoperative days 8–14.", "The primary study objective was to compare the perioperative haemostatic efficacy of antihaemophilic factor (recombinant) CI versus intermittent BI. The primary efficacy outcome measure was the cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid (based on haematocrit values) assessed during the first 24 h after surgery determined from drainage fluid samples using the red blood cell counting method used at the local laboratory. Secondary efficacy outcome measures included postoperative blood loss until drain removal, number of bleeding episodes through postoperative day 7 and number of units of PRBCs transfused. The trigger for initiating a blood transfusion was determined by each clinician for each patient.", "Safety was a secondary objective. The numbers of adverse events (AEs) and serious AEs (SAEs) were assessed, as well as their relationship to treatment and the incidence of FVIII antibody formation. AEs were grouped by the Medical Dictionary for Regulatory Activities system organ class and classified by severity (mild, moderate, severe). Inhibitor testing was performed at screening, at the rescreen visit after the pretreatment phase (if applicable) and at the end‐of‐study visit according to the Nijmegen modification of the Bethesda assay in the central laboratory. FVIII inhibitors could be determined at the local laboratory and verified by the central laboratory.", "Exploratory outcome measures included total weight‐adjusted antihaemophilic factor (recombinant) dose through postoperative day 7, PK assessments (IR, CL) through postoperative day 7, total haemoglobin in cumulative drainage fluid during the first 24 h after surgery and until drain removal, rate of clinically relevant postoperative haematomas, and Global Hemostatic Efficacy Assessment (GHEA).\nThe GHEA was based on three categories (Table 1), added to form the GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). For a score of 7 to be rated ‘excellent’, each individual assessment score had to be ≥2; otherwise, a score of 7 was to be rated ‘good’.\nGHEA scoring categories\na\n\n\nAbbreviation: GHEA, Global Hemostatic Efficacy Assessment.\nCategories 1 and 2 determined by the operating surgeon; category 3 determined by the investigator. 
The scores from the three categories were added to form the total GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). Scores of 7 were rated as ‘excellent’, if each individual assessment score was ≥2; otherwise, a score of 7 was rated as ‘good’.", "Descriptive statistics were provided for baseline characteristics and summarized by treatment regimen. A sample size of 60 divided equally between the CI and BI arms was selected for the study. To establish non‐inferiority of CI to BI, the ratio of the mean PRBC volumes of the drainage fluids in the CI arm to the BI arm was compared to a non‐inferiority margin of 200%. This was equivalent to the upper 95% confidence limit for the ratio being below 200%. The sample size requirements for establishing non‐inferiority by t‐test at a non‐inferiority limit of 200% were calculated and a sample size of 50–60 was determined to provide adequate power. In addition, at least 15 patients in each treatment group were required to have baseline FVIII levels <1%. Pearson's chi‐squared test with Monte Carlo simulation was used for comparison of patients with bleeding episodes and for patients who required transfusions.", "The study started on 29 May 2006 and was completed on 9 December 2015. Of 85 patients enrolled at 22 sites (in the United States, European Union, Norway and Russia), 72 received the infusion of antihaemophilic factor (recombinant) in the preoperative period for PK determination. Of these, 63 met the criteria for perioperative treatment and were randomized to receive CI (n = 32) or BI (n = 31). Of the patients who received CI, 23 had severe haemophilia A (baseline FVIII level <1 IU/dl) and six had moderately severe haemophilia A (baseline FVIII level 1 to <2 IU/dl). Of the patients who received BI, 26 had severe haemophilia A and five had moderately severe haemophilia A. Three patients randomized to CI did not undergo surgery and were not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (CI, n = 29; BI, n = 31). The safety analysis set included all 72 patients who received at least one dose of antihaemophilic factor (recombinant). Patient disposition is summarized in Figure 1. Each patient underwent one procedure: unilateral knee replacement surgery (n = 48; 24 CI, 24 BI), hip surgery (n = 4; two CI, two BI) or shoulder/elbow/ankle/knee surgery (n = 8; three CI, five BI). All patients were male. Demographic and clinical characteristics were similar between groups; the medians and ranges of age were nearly identical (Table 2). Four patients received enoxaparin or nadroparin as thrombosis prophylaxis.\nDisposition of patients. †Nine patients were not randomized for the following reasons: excluded because of pharmacokinetics (n = 4), screen failures (n = 2), physician decision (n = 1), sponsor decision (n = 1) and death (n = 1).‡Three patients randomized to receive continuous infusion did not undergo surgery and were therefore not treated. 
Thus, 60 patients received treatment and comprised the per‐protocol analysis set (continuous infusion, n = 29; bolus infusion, n = 31)\nPatient demographic and clinical characteristics in the per‐protocol analysis set", "Information on drainage fluid PRBC was not available for six patients (three CI, three BI) at 24 h after surgery, but the cumulative PRBC/blood volume in the drainage fluid (ie RBC, MCV and haematocrit) during the first 24 h following surgery was comparable between the CI and BI groups (Table 3; Table 4 [by type of surgery]), with a ratio of 0.92 (95% confidence interval for mean, 0.82–1.05; median, 3.4 × 1012 RBCs/l for CI and 3.5 × 1012 RBCs/l for BI). The one‐sided p‐value against the null hypothesis of ratio ≥200% was <.001, confirming the non‐inferiority of CI to BI with a 5% type I error (as the upper confidence limit did not exceed 200%).\nKey efficacy parameters in the per‐protocol analysis set\na\n\n\nAbbreviations: BI, bolus infusion; CI, continuous infusion; PRBC, packed red blood cell; SD, standard deviation.\nData shown are from patients in the per‐protocol analysis set, unless otherwise specified. The patient numbers shown for the different parameters vary because some patients did not have all data available.\nThe one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was <.001.\nThe one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was .041.\nNot statistically significant by a post hoc Pearson's chi‐squared test (p\ntwo‐sided = .612, using Monte Carlo simulation with 106 replicates).\nNot statistically significant by a post hoc Pearson's chi‐squared test (p\ntwo‐sided = .132, using Monte Carlo simulation with 106 replicates).\nTrans‐ and postoperatively combined.\nCumulative PRBC volume in drainage fluid at 24 h (1012 RBCs/l) by type of surgery\nAbbreviations: NA, not applicable; SD, standard deviation.\nTotal blood loss until drain removal, adjusted for expected blood loss, was slightly higher in the CI group than the BI group (Table 3). The mean (95% confidence interval) ratio of blood loss volume for CI versus BI was estimated to be 1.3 (0.8–2.1), and the one‐sided p‐value against the null hypothesis of ratio ≥200% was .041. Most bleeding episodes occurred in patients receiving BI; of four reported bleeding episodes (one episode per patient), three occurred in patients receiving BI and one in a patient receiving CI (Table 3). None of the bleeding episodes was considered by the investigator to be ‘the result of inadequate therapeutic response in the face of proper dosing, necessitating a change in therapeutic regimen'. Patients receiving CI were given more PRBC transfusions than patients receiving BI. PRBC transfusions were required in 18/29 and 13/31 patients receiving CI and BI, respectively, with a mean (range) of 1.3 (1–5) units in patients receiving CI and a mean (range) of 0.9 (1–3) units in patients receiving BI.\nThe total amount of haemoglobin in the cumulative drainage fluid during the first 24 h after surgery and until drain removal (if drainage continued) was comparable between patients who received CI and BI. In the stratum of patients who underwent unilateral knee replacement (CI, n = 24; BI, n = 24), the point estimate for the mean was 98.21 g/l (95% confidence interval for mean, 89.68–107.56 g/l) for CI and 97.63 g/l (86.07–110.74 g/l) for BI. 
The ratio of CI to BI was estimated to be 1.01.\nClinically relevant postoperative haematomas were observed in two patients receiving CI and two receiving BI (three haematomas per group, for a total of six). Two patients (one CI, one BI) had undergone knee replacement and two patients (one CI, one BI) had undergone hip surgery.\nThe total antihaemophilic factor (recombinant) dose (per kg body weight) administered in the combined transoperative and postoperative periods was similar in the CI and BI groups (Table 3). The global haemostatic efficacy of antihaemophilic factor (recombinant) administered by CI was assessed to be at least as good as that administered by BI. As shown in Table 5, scores of ‘excellent’ were evenly distributed among patients who received CI and patients who received BI. All patients receiving CI had a score of ‘excellent’ or ‘good’.\nGHEA score\nAbbreviation: GHEA, Global Hemostatic Efficacy Assessment.\nValues represent numbers of patients with reported GHEA scores.\nThe GHEA score at postoperative day 8 was missing for six patients.\nIncremental recovery over time could be analysed only for the BI arm, as BIs in the CI arm were rare. Compared to the value at the loading dose on day 0, the median IR decreased by ~20% after the first week following surgery (day 7), with high variability across individual patients. During the second week, many patients were discharged from hospital and not enough samples were available for analysis. CL could be analysed for the CI arm, but not the BI arm because of insufficient data. The determination of CL used the observed FVIII level as the steady‐state level. This assumption was questionable for days 1 and 4 due to the additional postsurgical BIs and the reduction in infusion rate scheduled at day 3. For days 2, 3 and 5, an increase of ~20% in median CL was observed compared with that from the presurgical full PK analysis. Only at days 6 and 7 was the median CL below the initial value, but high variability in individual patient values was seen throughout the first week. Second‐week data were insufficient for analysis.", "Adverse events observed are summarized in Table 6. In the safety population (N = 72), 230 AEs were reported in 51 patients (70.8%). A total of 14 treatment‐related AEs were reported in 10 patients: five patients treated by CI had eight AEs and five patients treated by BI had six AEs. Ten of the 14 treatment‐related AEs were classified as non‐serious (reported in six patients): anaemia (n = 5), headache (n = 2), allergic dermatitis (n = 1), pruritus (n = 1) and pyrexia (n = 1).\nSummary of AEs in the safety population\na\n and by treatment arm\nAbbreviations: AE, adverse event; SAE, serious adverse event.\nThe safety population included nine patients who were not randomized to receive continuous or bolus infusion; no AEs were recorded in these patients.\nAll treatment‐related SAEs were development of FVIII inhibitors.\nOne SAE unrelated to treatment occurred in a patient who died before randomization.\nTen SAEs were reported in ten patients. Of these, four SAEs of FVIII inhibitor development (two patients in each group) were considered related to treatment; all four patients had severe haemophilia A, and none required treatment with a bypassing agent. The two patients receiving CI developed high‐titre inhibitors (up to 20.8 and 10.7 BU, respectively, on study days 63 and 57), which later decreased to the low‐titre range in both patients (1.0 and 1.7 BU, respectively). 
One patient was a 35‐year‐old male who had undergone hip surgery; he had received plasma‐derived FVIII products and had weakly positive Lupus anticoagulants of unknown clinical relevance. The other was a 30‐year‐old male who had undergone left knee replacement. Of the two patients receiving BI who developed FVIII inhibitors, one was a 35‐year‐old male who had undergone left primary total knee replacement and developed a low‐titre inhibitor (transient; maximum 0.89 BU that later decreased to 0.17 BU) on study day 30. The other was a 50‐year‐old male who developed a low‐titre inhibitor on study day 36 with a maximum titre of 3 BU that later decreased to 2.4 BU. The other six SAEs (febrile infection, joint swelling, haemarthrosis, pseudomembranous colitis, multiorgan failure and muscle haemorrhage) were considered to be unrelated to treatment.\nOne patient died during the study. Death was due to multiorgan failure attributed to codeine toxicity. The patient had received antihaemophilic factor (recombinant) only for PK evaluation and was not randomized to treatment by CI or BI and therefore did not undergo surgery.", "All authors reviewed/revised the manuscript critically for intellectual content and gave their final approval for it to be published. Ingrid Pabinger was a study investigator and contributed to the design of the study and interpretation of the data and study content. Vasily Mamonov and Jerzy Windyga were study investigators and contributed to the interpretation of the data and study content. None of the authors received honoraria for the writing of this manuscript. Werner Engl and Bruce Ewenstein contributed to the conception, design, analysis and interpretation of the clinical trial. Jennifer Doralt, Srilatha Tangada and Gerald Spotts contributed to the analysis and interpretation of the data, and study conduct.\nAll authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved." ]
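The study-design text above quotes the perioperative dosing formulas: the BI and CI loading doses and the CI infusion rate (CL × target FVIII level). A minimal sketch of those formulas follows; the PK inputs are hypothetical examples rather than trial data, and the millilitre-to-decilitre conversion in the infusion-rate function is our assumption, since the quoted formula does not spell out units.

```python
# Sketch of the perioperative dosing formulas quoted in the study design.
def bi_loading_dose(target, preinfusion, ir, half_life, interval):
    """Bolus dose (IU/kg) = (target IU/dl * 2^(interval/half-life) - preinfusion IU/dl) / IR."""
    return (target * 2 ** (interval / half_life) - preinfusion) / ir

def ci_loading_dose(target, preinfusion, ir):
    """Continuous-infusion loading dose (IU/kg) = (target - preinfusion) / IR."""
    return (target - preinfusion) / ir

def ci_rate(clearance_ml_per_kg_h, target_iu_per_dl):
    """CI rate (IU/(kg*h)) = CL * target FVIII level, with CL converted from ml to dl (assumed)."""
    return clearance_ml_per_kg_h * (target_iu_per_dl / 100)

# Hypothetical patient: IR 2.0 (IU/dl)/(IU/kg), CL 3.0 ml/(kg*h),
# half-life 12 h, baseline FVIII 1 IU/dl, target 80 IU/dl, 8-h bolus interval.
print(f"BI loading dose ~ {bi_loading_dose(80, 1, 2.0, 12, 8):.1f} IU/kg")
print(f"CI loading dose ~ {ci_loading_dose(80, 1, 2.0):.1f} IU/kg")
print(f"CI infusion rate ~ {ci_rate(3.0, 80):.1f} IU/(kg*h)")
```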
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients", "Study design", "Primary and secondary efficacy outcome measures", "Safety outcome measures", "Additional exploratory outcome measures", "Statistics", "RESULTS", "Patients", "Efficacy and exploratory outcomes", "Safety outcomes", "DISCUSSION", "CONCLUSION", "DISCLOSURES", "AUTHOR CONTRIBUTIONS" ]
[ "Antihaemophilic factor (recombinant), plasma/albumin‐free method (ADVATE®; Baxalta US Inc., a Takeda company, Lexington, MA, USA), is a recombinant human coagulation factor VIII (FVIII) indicated for the treatment and prophylaxis of bleeding, including perioperative management, in patients of all ages with haemophilia A.\n1\n, \n2\n When used perioperatively, antihaemophilic factor (recombinant) is typically administered by bolus infusion (BI) at time points dictated by its pharmacokinetic (PK) profile.\nContinuous infusion (CI) was developed to reduce the wide variations in plasma FVIII levels that usually accompany BI and decrease the quantity of infused FVIII concentrate.\n3\n, \n4\n, \n5\n, \n6\n, \n7\n, \n8\n, \n9\n CI during and/or after surgery \n1\n, \n10\n may stabilize FVIII levels, eliminating the deep troughs characteristic of BI that may increase bleeding risk. Several cohort and non‐controlled studies have indicated that FVIII CI is well tolerated and efficacious for providing perioperative haemostasis for patients with haemophilia A; some studies have suggested that CI may also reduce FVIII consumption compared with BI.\n3\n, \n7\n, \n8\n, \n11\n, \n12\n, \n13\n\n\nContinuous infusion and BI in the same type of intervention have not been compared in a prospective, controlled setting. The objective of this prospective, randomized phase III/IV study in patients with severe or moderately severe haemophilia A was to assess the perioperative haemostatic efficacy and safety of antihaemophilic factor (recombinant) administered via CI and intermittent BI.", "Patients Eligible previously treated patients (18–70 years of age) had severe or moderately severe haemophilia A at screening (FVIII level ≤2 IU/dl) and were scheduled for elective unilateral major orthopaedic surgery requiring drain placement. Protocol amendments raised the maximum age (previously 65 years) and expanded the type of surgery (previously unilateral primary total knee replacement only). Major orthopaedic surgery was defined as requiring moderate or deep sedation, general anaesthesia, or major nerve conduction blockade and had a significant risk of large‐volume blood loss or blood loss into confined anatomical space. Patients provided written informed consent and were required to have had prior exposure to FVIII concentrates for ≥150 days. Patients were excluded if they met any of the following criteria: detectable FVIII inhibitors at screening (by the central laboratory), history of inhibitors (>0.4 Bethesda units [BU] by Nijmegen modification of the Bethesda assay), scheduled for any other minor or major surgery, laboratory evidence of abnormal haemostasis from causes other than haemophilia A, and current or planned receipt of an immunomodulatory drug other than antiretroviral therapy.\nEligible previously treated patients (18–70 years of age) had severe or moderately severe haemophilia A at screening (FVIII level ≤2 IU/dl) and were scheduled for elective unilateral major orthopaedic surgery requiring drain placement. Protocol amendments raised the maximum age (previously 65 years) and expanded the type of surgery (previously unilateral primary total knee replacement only). Major orthopaedic surgery was defined as requiring moderate or deep sedation, general anaesthesia, or major nerve conduction blockade and had a significant risk of large‐volume blood loss or blood loss into confined anatomical space. 
Patients provided written informed consent and were required to have had prior exposure to FVIII concentrates for ≥150 days. Patients were excluded if they met any of the following criteria: detectable FVIII inhibitors at screening (by the central laboratory), history of inhibitors (>0.4 Bethesda units [BU] by Nijmegen modification of the Bethesda assay), scheduled for any other minor or major surgery, laboratory evidence of abnormal haemostasis from causes other than haemophilia A, and current or planned receipt of an immunomodulatory drug other than antiretroviral therapy.\nStudy design This phase III/IV, prospective, multicentre, randomized, controlled study was divided into three periods: (1) a preoperative period, including a PK evaluation; (2) an intraoperative and postoperative period, from loading dose to postoperative day 7; and (3) a safety follow‐up period, from postoperative day 8 to the end‐of‐study visit (6 weeks following surgery). The study was conducted in accordance with the principles of the Declaration of Helsinki, and the protocol, informed consent form and all amendments were approved by the ethics committee at each study site. The trial is registered at ClinicalTrials.gov (NCT00357656).\nPharmacokinetic analysis was performed before surgery to establish individual FVIII recovery and clearance (CL) values. In the preoperative period (≤60 days before the surgery), patients who had completed a washout of 72 h and were not actively bleeding were infused with antihaemophilic factor (recombinant) 50 ± 5 IU/kg of body weight and had 10 postinfusion samples taken within 48 h. If PK data suggested the presence of subclinical inhibitors (FVIII half‐life <8 h, incremental recovery (IR) <1.5 (IU/dl)/(IU/kg), or CL >5.0 ml/(kg × h), patients were excluded from further participation in the study.\nPatients who completed the preoperative PK phase were randomized 1:1 to treatment by CI or BI through postoperative day 7. Patients were stratified by type of surgery (unilateral knee replacement, hip surgery and shoulder/elbow/ankle/knee [except knee replacement] surgery). Randomization was separate and independent for each stratum. Randomization lists were prepared in blocks with a block size >2 using the random number generator algorithm of Wichmann and Hill \n14\n as modified by McLeod.\n15\n Patients in each group had blood drawn once every 24 h for measurement of FVIII activity. Patients stayed in hospital until postoperative day 7 to receive study treatment per protocol. Patients were discharged from hospital according to site practice.\nPatients randomized to CI could continue on CI or switch to intermittent BI starting on postoperative day 8, at the discretion of the investigator. During the period from postoperative day 8 until 6 weeks ±3 days following surgery, treatment (including mechanical or pharmacologic thrombosis prophylaxis) was also at the discretion of the investigator, depending on current practice of the site during physical rehabilitation.\nAntihaemophilic factor (recombinant) doses during the perioperative period (until postoperative day 7) were based on the PK profile determined before surgery. Within 60 min before surgery, patients received a loading dose of antihaemophilic factor (recombinant) to maintain a minimum FVIII level of at least 80% of normal. 
The formulas for determining the initial weight‐adjusted loading dose differed, depending on whether the patient was randomized to receive antihaemophilic factor (recombinant) as BI or CI for subsequent management:\n\nBI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2^(I/t) − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);\nCI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]).\nThe loading dose was given intravenously over up to 5 min at a maximum infusion rate of 10 ml/min. If the blood sample drawn ~15 min after infusion showed that the desired FVIII level had not been reached or the activated partial thromboplastin time (aPTT) had not normalized, a supplemental loading dose could be given. Surgery could be started only after normalization of the aPTT. To compensate for intraoperative blood loss and increased FVIII consumption, patients received an additional bolus of study drug in the recovery room sufficient to raise the FVIII level by 50 IU/dl. CI was to be started before surgery, immediately following the loading dose; study product was administered at a rate (IU/[kg × h]) calculated according to the following formula: CL × target FVIII level. The infusion was administered using a syringe pump according to this regimen, but always ≥0.4 ml/h. BI treatment started with the loading dose; treatment frequency varied according to the patient's PK profile, but typically included three infusions per 24 h during the first 72 h following the loading dose, infusions every 12 h from postoperative day 3 to 7, and daily infusions from postoperative day 8 to 14. For both CI and BI, FVIII trough levels were to be maintained above 80 IU/dl for the initial 72 h, at 50–100 IU/dl through postoperative day 7 and >30 IU/dl for postoperative days 8–14.\nThis phase III/IV, prospective, multicentre, randomized, controlled study was divided into three periods: (1) a preoperative period, including a PK evaluation; (2) an intraoperative and postoperative period, from loading dose to postoperative day 7; and (3) a safety follow‐up period, from postoperative day 8 to the end‐of‐study visit (6 weeks following surgery). The study was conducted in accordance with the principles of the Declaration of Helsinki, and the protocol, informed consent form and all amendments were approved by the ethics committee at each study site. The trial is registered at ClinicalTrials.gov (NCT00357656).\nPharmacokinetic analysis was performed before surgery to establish individual FVIII recovery and clearance (CL) values. In the preoperative period (≤60 days before the surgery), patients who had completed a washout of 72 h and were not actively bleeding were infused with antihaemophilic factor (recombinant) 50 ± 5 IU/kg of body weight and had 10 postinfusion samples taken within 48 h.
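The loading‐dose and infusion‐rate formulas above can be restated compactly in code. The following Python sketch does only that; all patient values in the example (IR, half‐life, clearance, target level) are hypothetical, and the conversion of clearance from ml/(kg × h) to dl/(kg × h) in the CI rate calculation is an assumption made here for unit consistency rather than a detail stated in the protocol.

```python
# A minimal sketch of the protocol's weight-adjusted dosing formulas.
# All numeric values in the example are hypothetical.

def bi_loading_dose(target, pre_level, ir, interval_h, half_life_h):
    """Bolus loading dose (IU/kg): (target * 2**(I/t) - pre-infusion level) / IR.

    target and pre_level in IU/dl; ir in (IU/dl)/(IU/kg); interval_h = dosing
    interval I; half_life_h = estimated FVIII half-life t. The 2**(I/t) factor
    raises the post-infusion peak so that the level has decayed back down to
    the target trough by the end of the dosing interval.
    """
    return (target * 2 ** (interval_h / half_life_h) - pre_level) / ir


def ci_loading_dose(target, pre_level, ir):
    """Continuous-infusion loading dose (IU/kg): (target - pre-infusion level) / IR."""
    return (target - pre_level) / ir


def ci_infusion_rate(cl_ml_per_kg_h, target):
    """CI maintenance rate (IU/(kg*h)) = CL * target FVIII level.

    CL is reported in ml/(kg*h) and the target in IU/dl, so CL is divided by
    100 here (1 dl = 100 ml) to make the units consistent -- an assumption,
    as the protocol states the formula without units.
    """
    return cl_ml_per_kg_h / 100 * target


if __name__ == "__main__":
    # Hypothetical patient: IR 2.0 (IU/dl)/(IU/kg), half-life 12 h, CL 3.0 ml/(kg*h)
    print(round(bi_loading_dose(target=80, pre_level=1, ir=2.0,
                                interval_h=8, half_life_h=12), 1), "IU/kg")
    print(round(ci_loading_dose(target=80, pre_level=1, ir=2.0), 1), "IU/kg")
    print(round(ci_infusion_rate(cl_ml_per_kg_h=3.0, target=80), 1), "IU/(kg*h)")
```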
If PK data suggested the presence of subclinical inhibitors (FVIII half‐life <8 h, incremental recovery (IR) <1.5 (IU/dl)/(IU/kg), or CL >5.0 ml/(kg × h), patients were excluded from further participation in the study.\nPatients who completed the preoperative PK phase were randomized 1:1 to treatment by CI or BI through postoperative day 7. Patients were stratified by type of surgery (unilateral knee replacement, hip surgery and shoulder/elbow/ankle/knee [except knee replacement] surgery). Randomization was separate and independent for each stratum. Randomization lists were prepared in blocks with a block size >2 using the random number generator algorithm of Wichmann and Hill \n14\n as modified by McLeod.\n15\n Patients in each group had blood drawn once every 24 h for measurement of FVIII activity. Patients stayed in hospital until postoperative day 7 to receive study treatment per protocol. Patients were discharged from hospital according to site practice.\nPatients randomized to CI could continue on CI or switch to intermittent BI starting on postoperative day 8, at the discretion of the investigator. During the period from postoperative day 8 until 6 weeks ±3 days following surgery, treatment (including mechanical or pharmacologic thrombosis prophylaxis) was also at the discretion of the investigator, depending on current practice of the site during physical rehabilitation.\nAntihaemophilic factor (recombinant) doses during the perioperative period (until postoperative day 7) were based on the PK profile determined before surgery. Within 60 min before surgery, patients received a loading dose of antihaemophilic factor (recombinant) to maintain a minimum FVIII level of at least 80% of normal. The formulas for determining the initial weight‐adjusted loading dose differed, depending on whether the patient was randomized to receive antihaemophilic factor (recombinant) as BI or CI for subsequent management:\n\nBI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/\nt − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]).\n\nBI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/\nt − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);\nCI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]).\nThe loading dose was given intravenously over up to 5 min at a maximum infusion rate of 10 ml/min. If the blood sample drawn ~15 min after infusion showed that the desired FVIII level had not been reached or the activated partial thromboplastin time (aPTT) had not normalized, a supplemental loading dose could be given. Surgery could be started only after normalization of the aPTT. To compensate for intraoperative blood loss and increased FVIII consumption, patients received an additional bolus of study drug in the recovery room sufficient to raise the FVIII level by 50 IU/dl. CI was to be started before surgery, immediately following the loading dose; study product was administered at a rate (IU/[kg × h]) calculated according to the following formula: CL × target FVIII level. 
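As a rough illustration of how the preoperative PK evaluation might be processed, the sketch below estimates incremental recovery, terminal half‐life and clearance from a set of post‐infusion FVIII levels and screens them against the protocol's exclusion thresholds. The sampling times, levels and the simple log‐linear/trapezoidal calculations are illustrative assumptions, not the study's actual PK methodology; a small helper is also included for the steady‐state clearance estimate (infusion rate divided by the observed FVIII level) used in the exploratory analysis of the CI arm described in the results.

```python
# Illustrative post-processing of the preoperative PK evaluation: estimate
# incremental recovery (IR), terminal half-life and clearance (CL) from the
# post-infusion FVIII samples, then apply the protocol's exclusion thresholds.
# Sampling times/levels and the simplified calculations are assumptions.
import math


def log_linear_slope(times, levels):
    """Least-squares slope of ln(level) versus time (terminal phase)."""
    ys = [math.log(c) for c in levels]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(times, ys))
    den = sum((x - mx) ** 2 for x in times)
    return num / den


def pk_parameters(dose_iu_per_kg, baseline, times_h, levels_iu_dl, terminal_points=4):
    ir = (max(levels_iu_dl) - baseline) / dose_iu_per_kg            # (IU/dl)/(IU/kg)
    slope = log_linear_slope(times_h[-terminal_points:], levels_iu_dl[-terminal_points:])
    half_life = math.log(2) / -slope                                # hours
    auc = sum((t2 - t1) * (c1 + c2) / 2                             # linear trapezoid
              for t1, t2, c1, c2 in zip(times_h, times_h[1:],
                                        levels_iu_dl, levels_iu_dl[1:]))
    cl_ml_per_kg_h = dose_iu_per_kg / auc * 100                     # dl -> ml conversion
    return ir, half_life, cl_ml_per_kg_h


def excluded_for_subclinical_inhibitor(ir, half_life, cl):
    """Protocol thresholds: half-life <8 h, IR <1.5 (IU/dl)/(IU/kg), CL >5.0 ml/(kg*h)."""
    return half_life < 8 or ir < 1.5 or cl > 5.0


def cl_at_steady_state(rate_iu_per_kg_h, fviii_iu_dl):
    """Postoperative CL during CI, taking the observed level as steady state:
    CL = infusion rate / level (x100 to express the result in ml/(kg*h))."""
    return rate_iu_per_kg_h / fviii_iu_dl * 100


if __name__ == "__main__":
    times = [0.25, 0.5, 1, 3, 6, 9, 12, 24, 32, 48]        # hypothetical hours post infusion
    levels = [105, 102, 96, 85, 72, 61, 52, 26, 17, 7]     # hypothetical FVIII levels, IU/dl
    ir, t_half, cl = pk_parameters(50, baseline=1, times_h=times, levels_iu_dl=levels)
    print(round(ir, 2), round(t_half, 1), round(cl, 2),
          excluded_for_subclinical_inhibitor(ir, t_half, cl))
```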
The infusion was administered using a syringe pump according to this regimen, but always ≥0.4 ml/h. BI treatment started with the loading dose; treatment frequency varied according to the patient's PK profile, but typically included three infusions per 24 h during the first 72 h following the loading dose, infusions every 12 h from postoperative day 3 to 7, and daily infusions from postoperative day 8 to 14. For both CI and BI, FVIII trough levels were to be maintained above 80 IU/dl for the initial 72 h, at 50–100 IU/dl through postoperative day 7 and >30 IU/dl for postoperative days 8–14.\nPrimary and secondary efficacy outcome measures The primary study objective was to compare the perioperative haemostatic efficacy of antihaemophilic factor (recombinant) CI versus intermittent BI. The primary efficacy outcome measure was the cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid (based on haematocrit values) assessed during the first 24 h after surgery determined from drainage fluid samples using the red blood cell counting method used at the local laboratory. Secondary efficacy outcome measures included postoperative blood loss until drain removal, number of bleeding episodes through postoperative day 7 and number of units of PRBCs transfused. The trigger for initiating a blood transfusion was determined by each clinician for each patient.\nThe primary study objective was to compare the perioperative haemostatic efficacy of antihaemophilic factor (recombinant) CI versus intermittent BI. The primary efficacy outcome measure was the cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid (based on haematocrit values) assessed during the first 24 h after surgery determined from drainage fluid samples using the red blood cell counting method used at the local laboratory. Secondary efficacy outcome measures included postoperative blood loss until drain removal, number of bleeding episodes through postoperative day 7 and number of units of PRBCs transfused. The trigger for initiating a blood transfusion was determined by each clinician for each patient.\nSafety outcome measures Safety was a secondary objective. The numbers of adverse events (AEs) and serious AEs (SAEs) were assessed, as well as their relationship to treatment and the incidence of FVIII antibody formation. AEs were grouped by the Medical Dictionary for Regulatory Activities system organ class and classified by severity (mild, moderate, severe). Inhibitor testing was performed at screening, at the rescreen visit after the pretreatment phase (if applicable) and at the end‐of‐study visit according to the Nijmegen modification of the Bethesda assay in the central laboratory. FVIII inhibitors could be determined at the local laboratory and verified by the central laboratory.\nSafety was a secondary objective. The numbers of adverse events (AEs) and serious AEs (SAEs) were assessed, as well as their relationship to treatment and the incidence of FVIII antibody formation. AEs were grouped by the Medical Dictionary for Regulatory Activities system organ class and classified by severity (mild, moderate, severe). Inhibitor testing was performed at screening, at the rescreen visit after the pretreatment phase (if applicable) and at the end‐of‐study visit according to the Nijmegen modification of the Bethesda assay in the central laboratory. 
FVIII inhibitors could be determined at the local laboratory and verified by the central laboratory.\nAdditional exploratory outcome measures Exploratory outcome measures included total weight‐adjusted antihaemophilic factor (recombinant) dose through postoperative day 7, PK assessments (IR, CL) through postoperative day 7, total haemoglobin in cumulative drainage fluid during the first 24 h after surgery and until drain removal, rate of clinically relevant postoperative haematomas, and Global Hemostatic Efficacy Assessment (GHEA).\nThe GHEA was based on three categories (Table 1), added to form the GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). For a score of 7 to be rated ‘excellent’, each individual assessment score had to be ≥2; otherwise, a score of 7 was to be rated ‘good’.\nGHEA scoring categories\na\n\n\nAbbreviation: GHEA, Global Hemostatic Efficacy Assessment.\nCategories 1 and 2 determined by the operating surgeon; category 3 determined by the investigator. The scores from the three categories were added to form the total GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). Scores of 7 were rated as ‘excellent’, if each individual assessment score was ≥2; otherwise, a score of 7 was rated as ‘good’.\nExploratory outcome measures included total weight‐adjusted antihaemophilic factor (recombinant) dose through postoperative day 7, PK assessments (IR, CL) through postoperative day 7, total haemoglobin in cumulative drainage fluid during the first 24 h after surgery and until drain removal, rate of clinically relevant postoperative haematomas, and Global Hemostatic Efficacy Assessment (GHEA).\nThe GHEA was based on three categories (Table 1), added to form the GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). For a score of 7 to be rated ‘excellent’, each individual assessment score had to be ≥2; otherwise, a score of 7 was to be rated ‘good’.\nGHEA scoring categories\na\n\n\nAbbreviation: GHEA, Global Hemostatic Efficacy Assessment.\nCategories 1 and 2 determined by the operating surgeon; category 3 determined by the investigator. The scores from the three categories were added to form the total GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). Scores of 7 were rated as ‘excellent’, if each individual assessment score was ≥2; otherwise, a score of 7 was rated as ‘good’.\nStatistics Descriptive statistics were provided for baseline characteristics and summarized by treatment regimen. A sample size of 60 divided equally between the CI and BI arms was selected for the study. To establish non‐inferiority of CI to BI, the ratio of the mean PRBC volumes of the drainage fluids in the CI arm to the BI arm was compared to a non‐inferiority margin of 200%. This was equivalent to the upper 95% confidence limit for the ratio being below 200%. The sample size requirements for establishing non‐inferiority by t‐test at a non‐inferiority limit of 200% were calculated and a sample size of 50–60 was determined to provide adequate power. In addition, at least 15 patients in each treatment group were required to have baseline FVIII levels <1%. 
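One common way to operationalize the primary non‐inferiority criterion just described (the upper one‐sided 95% confidence limit for the CI:BI ratio of mean drainage PRBC volumes remaining below 200%) is a two‐sample t comparison on the log scale, sketched below with invented data. The study's exact computation is not specified here, so this is only an illustration of the decision rule, not the trial's analysis code.

```python
# Sketch of a ratio-based non-inferiority check: upper one-sided 95% confidence
# limit of the CI:BI ratio of means, computed on the log scale, must be < 2.0.
# Input data are invented; the study's actual computation may have differed.
import math
from scipy import stats


def ratio_and_upper_cl(ci_values, bi_values, alpha=0.05):
    log_ci = [math.log(v) for v in ci_values]
    log_bi = [math.log(v) for v in bi_values]
    n1, n2 = len(log_ci), len(log_bi)
    m1, m2 = sum(log_ci) / n1, sum(log_bi) / n2
    v1 = sum((x - m1) ** 2 for x in log_ci) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in log_bi) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)   # pooled variance
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))
    diff = m1 - m2                                             # log of the CI:BI ratio
    t_crit = stats.t.ppf(1 - alpha, df=n1 + n2 - 2)
    return math.exp(diff), math.exp(diff + t_crit * se)


if __name__ == "__main__":
    ci = [3.1, 2.8, 4.0, 3.6, 3.3]        # hypothetical PRBC volumes, CI arm
    bi = [3.4, 3.0, 3.9, 3.7, 3.5]        # hypothetical PRBC volumes, BI arm
    ratio, upper = ratio_and_upper_cl(ci, bi)
    print(f"ratio = {ratio:.2f}, upper 95% CL = {upper:.2f}, non-inferior: {upper < 2.0}")
```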
Pearson's chi‐squared test with Monte Carlo simulation was used for comparison of patients with bleeding episodes and for patients who required transfusions.\nDescriptive statistics were provided for baseline characteristics and summarized by treatment regimen. A sample size of 60 divided equally between the CI and BI arms was selected for the study. To establish non‐inferiority of CI to BI, the ratio of the mean PRBC volumes of the drainage fluids in the CI arm to the BI arm was compared to a non‐inferiority margin of 200%. This was equivalent to the upper 95% confidence limit for the ratio being below 200%. The sample size requirements for establishing non‐inferiority by t‐test at a non‐inferiority limit of 200% were calculated and a sample size of 50–60 was determined to provide adequate power. In addition, at least 15 patients in each treatment group were required to have baseline FVIII levels <1%. Pearson's chi‐squared test with Monte Carlo simulation was used for comparison of patients with bleeding episodes and for patients who required transfusions.", "Eligible previously treated patients (18–70 years of age) had severe or moderately severe haemophilia A at screening (FVIII level ≤2 IU/dl) and were scheduled for elective unilateral major orthopaedic surgery requiring drain placement. Protocol amendments raised the maximum age (previously 65 years) and expanded the type of surgery (previously unilateral primary total knee replacement only). Major orthopaedic surgery was defined as requiring moderate or deep sedation, general anaesthesia, or major nerve conduction blockade and had a significant risk of large‐volume blood loss or blood loss into confined anatomical space. Patients provided written informed consent and were required to have had prior exposure to FVIII concentrates for ≥150 days. Patients were excluded if they met any of the following criteria: detectable FVIII inhibitors at screening (by the central laboratory), history of inhibitors (>0.4 Bethesda units [BU] by Nijmegen modification of the Bethesda assay), scheduled for any other minor or major surgery, laboratory evidence of abnormal haemostasis from causes other than haemophilia A, and current or planned receipt of an immunomodulatory drug other than antiretroviral therapy.", "This phase III/IV, prospective, multicentre, randomized, controlled study was divided into three periods: (1) a preoperative period, including a PK evaluation; (2) an intraoperative and postoperative period, from loading dose to postoperative day 7; and (3) a safety follow‐up period, from postoperative day 8 to the end‐of‐study visit (6 weeks following surgery). The study was conducted in accordance with the principles of the Declaration of Helsinki, and the protocol, informed consent form and all amendments were approved by the ethics committee at each study site. The trial is registered at ClinicalTrials.gov (NCT00357656).\nPharmacokinetic analysis was performed before surgery to establish individual FVIII recovery and clearance (CL) values. In the preoperative period (≤60 days before the surgery), patients who had completed a washout of 72 h and were not actively bleeding were infused with antihaemophilic factor (recombinant) 50 ± 5 IU/kg of body weight and had 10 postinfusion samples taken within 48 h. 
If PK data suggested the presence of subclinical inhibitors (FVIII half‐life <8 h, incremental recovery (IR) <1.5 (IU/dl)/(IU/kg), or CL >5.0 ml/(kg × h), patients were excluded from further participation in the study.\nPatients who completed the preoperative PK phase were randomized 1:1 to treatment by CI or BI through postoperative day 7. Patients were stratified by type of surgery (unilateral knee replacement, hip surgery and shoulder/elbow/ankle/knee [except knee replacement] surgery). Randomization was separate and independent for each stratum. Randomization lists were prepared in blocks with a block size >2 using the random number generator algorithm of Wichmann and Hill \n14\n as modified by McLeod.\n15\n Patients in each group had blood drawn once every 24 h for measurement of FVIII activity. Patients stayed in hospital until postoperative day 7 to receive study treatment per protocol. Patients were discharged from hospital according to site practice.\nPatients randomized to CI could continue on CI or switch to intermittent BI starting on postoperative day 8, at the discretion of the investigator. During the period from postoperative day 8 until 6 weeks ±3 days following surgery, treatment (including mechanical or pharmacologic thrombosis prophylaxis) was also at the discretion of the investigator, depending on current practice of the site during physical rehabilitation.\nAntihaemophilic factor (recombinant) doses during the perioperative period (until postoperative day 7) were based on the PK profile determined before surgery. Within 60 min before surgery, patients received a loading dose of antihaemophilic factor (recombinant) to maintain a minimum FVIII level of at least 80% of normal. The formulas for determining the initial weight‐adjusted loading dose differed, depending on whether the patient was randomized to receive antihaemophilic factor (recombinant) as BI or CI for subsequent management:\n\nBI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/\nt − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]).\n\nBI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/\nt − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);\nCI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]).\nThe loading dose was given intravenously over up to 5 min at a maximum infusion rate of 10 ml/min. If the blood sample drawn ~15 min after infusion showed that the desired FVIII level had not been reached or the activated partial thromboplastin time (aPTT) had not normalized, a supplemental loading dose could be given. Surgery could be started only after normalization of the aPTT. To compensate for intraoperative blood loss and increased FVIII consumption, patients received an additional bolus of study drug in the recovery room sufficient to raise the FVIII level by 50 IU/dl. CI was to be started before surgery, immediately following the loading dose; study product was administered at a rate (IU/[kg × h]) calculated according to the following formula: CL × target FVIII level. 
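For a concrete picture of the stratified block randomization described above (separate randomization lists per surgery stratum, block size >2), the sketch below generates such lists. It uses Python's default random generator rather than the Wichmann and Hill algorithm as modified by McLeod cited in the protocol, and the block size of 4 is an illustrative choice, so this is a structural sketch only.

```python
# Structural sketch of stratified block randomization: one list per surgery
# stratum, built from balanced blocks. Not the study's actual generator.
import random


def block_randomization_list(n_patients, block_size=4, arms=("CI", "BI"), seed=None):
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        block = list(arms) * (block_size // len(arms))   # equal numbers of each arm per block
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]


if __name__ == "__main__":
    # Stratum sizes roughly matching the procedures performed in the study
    strata = {"knee replacement": 48, "hip": 4, "shoulder/elbow/ankle/knee": 8}
    for i, (stratum, n) in enumerate(strata.items()):
        lst = block_randomization_list(n, seed=i)
        print(stratum, "CI:", lst.count("CI"), "BI:", lst.count("BI"))
```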
The infusion was administered using a syringe pump according to this regimen, but always ≥0.4 ml/h. BI treatment started with the loading dose; treatment frequency varied according to the patient's PK profile, but typically included three infusions per 24 h during the first 72 h following the loading dose, infusions every 12 h from postoperative day 3 to 7, and daily infusions from postoperative day 8 to 14. For both CI and BI, FVIII trough levels were to be maintained above 80 IU/dl for the initial 72 h, at 50–100 IU/dl through postoperative day 7 and >30 IU/dl for postoperative days 8–14.", "The primary study objective was to compare the perioperative haemostatic efficacy of antihaemophilic factor (recombinant) CI versus intermittent BI. The primary efficacy outcome measure was the cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid (based on haematocrit values) assessed during the first 24 h after surgery determined from drainage fluid samples using the red blood cell counting method used at the local laboratory. Secondary efficacy outcome measures included postoperative blood loss until drain removal, number of bleeding episodes through postoperative day 7 and number of units of PRBCs transfused. The trigger for initiating a blood transfusion was determined by each clinician for each patient.", "Safety was a secondary objective. The numbers of adverse events (AEs) and serious AEs (SAEs) were assessed, as well as their relationship to treatment and the incidence of FVIII antibody formation. AEs were grouped by the Medical Dictionary for Regulatory Activities system organ class and classified by severity (mild, moderate, severe). Inhibitor testing was performed at screening, at the rescreen visit after the pretreatment phase (if applicable) and at the end‐of‐study visit according to the Nijmegen modification of the Bethesda assay in the central laboratory. FVIII inhibitors could be determined at the local laboratory and verified by the central laboratory.", "Exploratory outcome measures included total weight‐adjusted antihaemophilic factor (recombinant) dose through postoperative day 7, PK assessments (IR, CL) through postoperative day 7, total haemoglobin in cumulative drainage fluid during the first 24 h after surgery and until drain removal, rate of clinically relevant postoperative haematomas, and Global Hemostatic Efficacy Assessment (GHEA).\nThe GHEA was based on three categories (Table 1), added to form the GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). For a score of 7 to be rated ‘excellent’, each individual assessment score had to be ≥2; otherwise, a score of 7 was to be rated ‘good’.\nGHEA scoring categories\na\n\n\nAbbreviation: GHEA, Global Hemostatic Efficacy Assessment.\nCategories 1 and 2 determined by the operating surgeon; category 3 determined by the investigator. The scores from the three categories were added to form the total GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). Scores of 7 were rated as ‘excellent’, if each individual assessment score was ≥2; otherwise, a score of 7 was rated as ‘good’.", "Descriptive statistics were provided for baseline characteristics and summarized by treatment regimen. A sample size of 60 divided equally between the CI and BI arms was selected for the study. 
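The GHEA scoring rule described above (three category scores summed, with the special handling of a total of 7) can be written as a small function. The 0–3 range per category is inferred from the maximum total of 9 and is otherwise an assumption.

```python
# Sketch of the GHEA total-score rule: sum the three category scores; a total
# of 7 counts as 'excellent' only if every category is >= 2, otherwise 'good'.
def ghea_rating(cat1, cat2, cat3):
    scores = (cat1, cat2, cat3)
    total = sum(scores)
    if 7 <= total <= 9 and min(scores) >= 2:
        return "excellent"
    if 5 <= total <= 7 and min(scores) >= 1:
        return "good"
    if 2 <= total <= 4 and min(scores) >= 1:
        return "fair"
    return "none"


assert ghea_rating(3, 3, 3) == "excellent"
assert ghea_rating(3, 3, 1) == "good"   # total of 7, but one category below 2
assert ghea_rating(1, 1, 1) == "fair"
assert ghea_rating(1, 0, 0) == "none"
```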
To establish non‐inferiority of CI to BI, the ratio of the mean PRBC volumes of the drainage fluids in the CI arm to the BI arm was compared to a non‐inferiority margin of 200%. This was equivalent to the upper 95% confidence limit for the ratio being below 200%. The sample size requirements for establishing non‐inferiority by t‐test at a non‐inferiority limit of 200% were calculated and a sample size of 50–60 was determined to provide adequate power. In addition, at least 15 patients in each treatment group were required to have baseline FVIII levels <1%. Pearson's chi‐squared test with Monte Carlo simulation was used for comparison of patients with bleeding episodes and for patients who required transfusions.", "Patients The study started on 29 May 2006 and was completed on 9 December 2015. Of 85 patients enrolled at 22 sites (in the United States, European Union, Norway and Russia), 72 received the infusion of antihaemophilic factor (recombinant) in the preoperative period for PK determination. Of these, 63 met the criteria for perioperative treatment and were randomized to receive CI (n = 32) or BI (n = 31). Of the patients who received CI, 23 had severe haemophilia A (baseline FVIII level <1 IU/dl) and six had moderately severe haemophilia A (baseline FVIII level 1 to <2 IU/dl). Of the patients who received BI, 26 had severe haemophilia A and five had moderately severe haemophilia A. Three patients randomized to CI did not undergo surgery and were not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (CI, n = 29; BI, n = 31). The safety analysis set included all 72 patients who received at least one dose of antihaemophilic factor (recombinant). Patient disposition is summarized in Figure 1. Each patient underwent one procedure: unilateral knee replacement surgery (n = 48; 24 CI, 24 BI), hip surgery (n = 4; two CI, two BI) or shoulder/elbow/ankle/knee surgery (n = 8; three CI, five BI). All patients were male. Demographic and clinical characteristics were similar between groups; the medians and ranges of age were nearly identical (Table 2). Four patients received enoxaparin or nadroparin as thrombosis prophylaxis.\nDisposition of patients. †Nine patients were not randomized for the following reasons: excluded because of pharmacokinetics (n = 4), screen failures (n = 2), physician decision (n = 1), sponsor decision (n = 1) and death (n = 1).‡Three patients randomized to receive continuous infusion did not undergo surgery and were therefore not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (continuous infusion, n = 29; bolus infusion, n = 31)\nPatient demographic and clinical characteristics in the per‐protocol analysis set\nThe study started on 29 May 2006 and was completed on 9 December 2015. Of 85 patients enrolled at 22 sites (in the United States, European Union, Norway and Russia), 72 received the infusion of antihaemophilic factor (recombinant) in the preoperative period for PK determination. Of these, 63 met the criteria for perioperative treatment and were randomized to receive CI (n = 32) or BI (n = 31). Of the patients who received CI, 23 had severe haemophilia A (baseline FVIII level <1 IU/dl) and six had moderately severe haemophilia A (baseline FVIII level 1 to <2 IU/dl). Of the patients who received BI, 26 had severe haemophilia A and five had moderately severe haemophilia A. Three patients randomized to CI did not undergo surgery and were not treated. 
Thus, 60 patients received treatment and comprised the per‐protocol analysis set (CI, n = 29; BI, n = 31). The safety analysis set included all 72 patients who received at least one dose of antihaemophilic factor (recombinant). Patient disposition is summarized in Figure 1. Each patient underwent one procedure: unilateral knee replacement surgery (n = 48; 24 CI, 24 BI), hip surgery (n = 4; two CI, two BI) or shoulder/elbow/ankle/knee surgery (n = 8; three CI, five BI). All patients were male. Demographic and clinical characteristics were similar between groups; the medians and ranges of age were nearly identical (Table 2). Four patients received enoxaparin or nadroparin as thrombosis prophylaxis.\nDisposition of patients. †Nine patients were not randomized for the following reasons: excluded because of pharmacokinetics (n = 4), screen failures (n = 2), physician decision (n = 1), sponsor decision (n = 1) and death (n = 1).‡Three patients randomized to receive continuous infusion did not undergo surgery and were therefore not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (continuous infusion, n = 29; bolus infusion, n = 31)\nPatient demographic and clinical characteristics in the per‐protocol analysis set\nEfficacy and exploratory outcomes Information on drainage fluid PRBC was not available for six patients (three CI, three BI) at 24 h after surgery, but the cumulative PRBC/blood volume in the drainage fluid (ie RBC, MCV and haematocrit) during the first 24 h following surgery was comparable between the CI and BI groups (Table 3; Table 4 [by type of surgery]), with a ratio of 0.92 (95% confidence interval for mean, 0.82–1.05; median, 3.4 × 1012 RBCs/l for CI and 3.5 × 1012 RBCs/l for BI). The one‐sided p‐value against the null hypothesis of ratio ≥200% was <.001, confirming the non‐inferiority of CI to BI with a 5% type I error (as the upper confidence limit did not exceed 200%).\nKey efficacy parameters in the per‐protocol analysis set\na\n\n\nAbbreviations: BI, bolus infusion; CI, continuous infusion; PRBC, packed red blood cell; SD, standard deviation.\nData shown are from patients in the per‐protocol analysis set, unless otherwise specified. The patient numbers shown for the different parameters vary because some patients did not have all data available.\nThe one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was <.001.\nThe one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was .041.\nNot statistically significant by a post hoc Pearson's chi‐squared test (p\ntwo‐sided = .612, using Monte Carlo simulation with 106 replicates).\nNot statistically significant by a post hoc Pearson's chi‐squared test (p\ntwo‐sided = .132, using Monte Carlo simulation with 106 replicates).\nTrans‐ and postoperatively combined.\nCumulative PRBC volume in drainage fluid at 24 h (1012 RBCs/l) by type of surgery\nAbbreviations: NA, not applicable; SD, standard deviation.\nTotal blood loss until drain removal, adjusted for expected blood loss, was slightly higher in the CI group than the BI group (Table 3). The mean (95% confidence interval) ratio of blood loss volume for CI versus BI was estimated to be 1.3 (0.8–2.1), and the one‐sided p‐value against the null hypothesis of ratio ≥200% was .041. Most bleeding episodes occurred in patients receiving BI; of four reported bleeding episodes (one episode per patient), three occurred in patients receiving BI and one in a patient receiving CI (Table 3). 
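The post hoc Pearson's chi‐squared comparisons with a Monte Carlo simulated p‐value referred to in the table footnotes can be illustrated with a simple permutation version of the test, shown below for the bleeding‐episode counts just reported (one of 29 CI patients versus three of 31 BI patients). The permutation scheme and the replicate count used here are assumptions for illustration; the published post hoc analyses used 10^6 replicates, and their exact simulation procedure is not described in this report.

```python
# Illustration of a Pearson chi-squared comparison with a simulated (Monte
# Carlo) p-value, implemented as a permutation of group labels for a 2x2
# table. Fewer replicates than the published 10^6 are used here for speed.
import random


def chi2_stat(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    total = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2), (c, row2, col1), (d, row2, col2)):
        expected = r * col / n
        total += (obs - expected) ** 2 / expected
    return total


def simulated_p_value(a, b, c, d, replicates=20_000, seed=0):
    rng = random.Random(seed)
    observed = chi2_stat(a, b, c, d)
    outcomes = [1] * (a + c) + [0] * (b + d)     # pooled events (1) and non-events (0)
    n1 = a + b                                   # size of the first group
    hits = 0
    for _ in range(replicates):
        rng.shuffle(outcomes)
        a_sim = sum(outcomes[:n1])
        b_sim = n1 - a_sim
        c_sim = (a + c) - a_sim
        d_sim = (b + d) - b_sim
        if chi2_stat(a_sim, b_sim, c_sim, d_sim) >= observed:
            hits += 1
    return (hits + 1) / (replicates + 1)


if __name__ == "__main__":
    # Bleeding episodes: 1 of 29 CI patients vs 3 of 31 BI patients
    print(round(simulated_p_value(1, 28, 3, 28), 3))
```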
None of the bleeding episodes was considered by the investigator to be ‘the result of inadequate therapeutic response in the face of proper dosing, necessitating a change in therapeutic regimen'. Patients receiving CI were given more PRBC transfusions than patients receiving BI. PRBC transfusions were required in 18/29 and 13/31 patients receiving CI and BI, respectively, with a mean (range) of 1.3 (1–5) units in patients receiving CI and a mean (range) of 0.9 (1–3) units in patients receiving BI.\nThe total amount of haemoglobin in the cumulative drainage fluid during the first 24 h after surgery and until drain removal (if drainage continued) was comparable between patients who received CI and BI. In the stratum of patients who underwent unilateral knee replacement (CI, n = 24; BI, n = 24), the point estimate for the mean was 98.21 g/l (95% confidence interval for mean, 89.68–107.56 g/l) for CI and 97.63 g/l (86.07–110.74 g/l) for BI. The ratio of CI to BI was estimated to be 1.01.\nClinically relevant postoperative haematomas were observed in two patients receiving CI and two receiving BI (three haematomas per group, for a total of six). Two patients (one CI, one BI) had undergone knee replacement and two patients (one CI, one BI) had undergone hip surgery.\nThe total antihaemophilic factor (recombinant) dose (per kg body weight) administered in the combined transoperative and postoperative periods was similar in the CI and BI groups (Table 3). The global haemostatic efficacy of antihaemophilic factor (recombinant) administered by CI was assessed to be at least as good as that administered by BI. As shown in Table 5, scores of ‘excellent’ were evenly distributed among patients who received CI and patients who received BI. All patients receiving CI had a score of ‘excellent’ or ‘good’.\nGHEA score\nAbbreviation: GHEA, Global Hemostatic Efficacy Assessment.\nValues represent numbers of patients with reported GHEA scores.\nThe GHEA score at postoperative day 8 was missing for six patients.\nIncremental recovery over time could be analysed only for the BI arm, as BIs in the CI arm were rare. Compared to the value at the loading dose on day 0, the median IR decreased by ~20% after the first week following surgery (day 7), with high variability across individual patients. During the second week, many patients were discharged from hospital and not enough samples were available for analysis. CL could be analysed for the CI arm, but not the BI arm because of insufficient data. The determination of CL used the observed FVIII level as the steady‐state level. This assumption was questionable for days 1 and 4 due to the additional postsurgical BIs and the reduction in infusion rate scheduled at day 3. For days 2, 3 and 5, an increase of ~20% in median CL was observed compared with that from the presurgical full PK analysis. Only at days 6 and 7 was the median CL below the initial value, but high variability in individual patient values was seen throughout the first week. Second‐week data were insufficient for analysis.\nInformation on drainage fluid PRBC was not available for six patients (three CI, three BI) at 24 h after surgery, but the cumulative PRBC/blood volume in the drainage fluid (ie RBC, MCV and haematocrit) during the first 24 h following surgery was comparable between the CI and BI groups (Table 3; Table 4 [by type of surgery]), with a ratio of 0.92 (95% confidence interval for mean, 0.82–1.05; median, 3.4 × 1012 RBCs/l for CI and 3.5 × 1012 RBCs/l for BI). 
The one‐sided p‐value against the null hypothesis of ratio ≥200% was <.001, confirming the non‐inferiority of CI to BI with a 5% type I error (as the upper confidence limit did not exceed 200%).\nKey efficacy parameters in the per‐protocol analysis set\na\n\n\nAbbreviations: BI, bolus infusion; CI, continuous infusion; PRBC, packed red blood cell; SD, standard deviation.\nData shown are from patients in the per‐protocol analysis set, unless otherwise specified. The patient numbers shown for the different parameters vary because some patients did not have all data available.\nThe one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was <.001.\nThe one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was .041.\nNot statistically significant by a post hoc Pearson's chi‐squared test (p\ntwo‐sided = .612, using Monte Carlo simulation with 106 replicates).\nNot statistically significant by a post hoc Pearson's chi‐squared test (p\ntwo‐sided = .132, using Monte Carlo simulation with 106 replicates).\nTrans‐ and postoperatively combined.\nCumulative PRBC volume in drainage fluid at 24 h (1012 RBCs/l) by type of surgery\nAbbreviations: NA, not applicable; SD, standard deviation.\nTotal blood loss until drain removal, adjusted for expected blood loss, was slightly higher in the CI group than the BI group (Table 3). The mean (95% confidence interval) ratio of blood loss volume for CI versus BI was estimated to be 1.3 (0.8–2.1), and the one‐sided p‐value against the null hypothesis of ratio ≥200% was .041. Most bleeding episodes occurred in patients receiving BI; of four reported bleeding episodes (one episode per patient), three occurred in patients receiving BI and one in a patient receiving CI (Table 3). None of the bleeding episodes was considered by the investigator to be ‘the result of inadequate therapeutic response in the face of proper dosing, necessitating a change in therapeutic regimen'. Patients receiving CI were given more PRBC transfusions than patients receiving BI. PRBC transfusions were required in 18/29 and 13/31 patients receiving CI and BI, respectively, with a mean (range) of 1.3 (1–5) units in patients receiving CI and a mean (range) of 0.9 (1–3) units in patients receiving BI.\nThe total amount of haemoglobin in the cumulative drainage fluid during the first 24 h after surgery and until drain removal (if drainage continued) was comparable between patients who received CI and BI. In the stratum of patients who underwent unilateral knee replacement (CI, n = 24; BI, n = 24), the point estimate for the mean was 98.21 g/l (95% confidence interval for mean, 89.68–107.56 g/l) for CI and 97.63 g/l (86.07–110.74 g/l) for BI. The ratio of CI to BI was estimated to be 1.01.\nClinically relevant postoperative haematomas were observed in two patients receiving CI and two receiving BI (three haematomas per group, for a total of six). Two patients (one CI, one BI) had undergone knee replacement and two patients (one CI, one BI) had undergone hip surgery.\nThe total antihaemophilic factor (recombinant) dose (per kg body weight) administered in the combined transoperative and postoperative periods was similar in the CI and BI groups (Table 3). The global haemostatic efficacy of antihaemophilic factor (recombinant) administered by CI was assessed to be at least as good as that administered by BI. As shown in Table 5, scores of ‘excellent’ were evenly distributed among patients who received CI and patients who received BI. 
All patients receiving CI had a score of ‘excellent’ or ‘good’.\nGHEA score\nAbbreviation: GHEA, Global Hemostatic Efficacy Assessment.\nValues represent numbers of patients with reported GHEA scores.\nThe GHEA score at postoperative day 8 was missing for six patients.\nIncremental recovery over time could be analysed only for the BI arm, as BIs in the CI arm were rare. Compared to the value at the loading dose on day 0, the median IR decreased by ~20% after the first week following surgery (day 7), with high variability across individual patients. During the second week, many patients were discharged from hospital and not enough samples were available for analysis. CL could be analysed for the CI arm, but not the BI arm because of insufficient data. The determination of CL used the observed FVIII level as the steady‐state level. This assumption was questionable for days 1 and 4 due to the additional postsurgical BIs and the reduction in infusion rate scheduled at day 3. For days 2, 3 and 5, an increase of ~20% in median CL was observed compared with that from the presurgical full PK analysis. Only at days 6 and 7 was the median CL below the initial value, but high variability in individual patient values was seen throughout the first week. Second‐week data were insufficient for analysis.\nSafety outcomes Adverse events observed are summarized in Table 6. In the safety population (N = 72), 230 AEs were reported in 51 patients (70.8%). A total of 14 treatment‐related AEs were reported in 10 patients: five patients treated by CI had eight AEs and five patients treated by BI had six AEs. Ten of the 14 treatment‐related AEs were classified as non‐serious (reported in six patients): anaemia (n = 5), headache (n = 2), allergic dermatitis (n = 1), pruritus (n = 1) and pyrexia (n = 1).\nSummary of AEs in the safety population\na\n and by treatment arm\nAbbreviations: AE, adverse event; SAE, serious adverse event.\nThe safety population included nine patients who were not randomized to receive continuous or bolus infusion; no AEs were recorded in these patients.\nAll treatment‐related SAEs were development of FVIII inhibitors.\nOne SAE unrelated to treatment occurred in a patient who died before randomization.\nTen SAEs were reported in ten patients. Of these, four SAEs of FVIII inhibitor development (two patients in each group) were considered related to treatment; all four patients had severe haemophilia A, and none required treatment with a bypassing agent. The two patients receiving CI developed high‐titre inhibitors (up to 20.8 and 10.7 BU, respectively, on study days 63 and 57), which later decreased to the low‐titre range in both patients (1.0 and 1.7 BU, respectively). One patient was a 35‐year‐old male who had undergone hip surgery; he had received plasma‐derived FVIII products and had weakly positive Lupus anticoagulants of unknown clinical relevance. The other was a 30‐year‐old male who had undergone left knee replacement. Of the two patients receiving BI who developed FVIII inhibitors, one was a 35‐year‐old male who had undergone left primary total knee replacement and developed a low‐titre inhibitor (transient; maximum 0.89 BU that later decreased to 0.17 BU) on study day 30. The other was a 50‐year‐old male who developed a low‐titre inhibitor on study day 36 with a maximum titre of 3 BU that later decreased to 2.4 BU. 
The other six SAEs (febrile infection, joint swelling, haemarthrosis, pseudomembranous colitis, multiorgan failure and muscle haemorrhage) were considered to be unrelated to treatment.\nOne patient died during the study. Death was due to multiorgan failure attributed to codeine toxicity. The patient had received antihaemophilic factor (recombinant) only for PK evaluation and was not randomized to treatment by CI or BI and therefore did not undergo surgery.\nAdverse events observed are summarized in Table 6. In the safety population (N = 72), 230 AEs were reported in 51 patients (70.8%). A total of 14 treatment‐related AEs were reported in 10 patients: five patients treated by CI had eight AEs and five patients treated by BI had six AEs. Ten of the 14 treatment‐related AEs were classified as non‐serious (reported in six patients): anaemia (n = 5), headache (n = 2), allergic dermatitis (n = 1), pruritus (n = 1) and pyrexia (n = 1).\nSummary of AEs in the safety population\na\n and by treatment arm\nAbbreviations: AE, adverse event; SAE, serious adverse event.\nThe safety population included nine patients who were not randomized to receive continuous or bolus infusion; no AEs were recorded in these patients.\nAll treatment‐related SAEs were development of FVIII inhibitors.\nOne SAE unrelated to treatment occurred in a patient who died before randomization.\nTen SAEs were reported in ten patients. Of these, four SAEs of FVIII inhibitor development (two patients in each group) were considered related to treatment; all four patients had severe haemophilia A, and none required treatment with a bypassing agent. The two patients receiving CI developed high‐titre inhibitors (up to 20.8 and 10.7 BU, respectively, on study days 63 and 57), which later decreased to the low‐titre range in both patients (1.0 and 1.7 BU, respectively). One patient was a 35‐year‐old male who had undergone hip surgery; he had received plasma‐derived FVIII products and had weakly positive Lupus anticoagulants of unknown clinical relevance. The other was a 30‐year‐old male who had undergone left knee replacement. Of the two patients receiving BI who developed FVIII inhibitors, one was a 35‐year‐old male who had undergone left primary total knee replacement and developed a low‐titre inhibitor (transient; maximum 0.89 BU that later decreased to 0.17 BU) on study day 30. The other was a 50‐year‐old male who developed a low‐titre inhibitor on study day 36 with a maximum titre of 3 BU that later decreased to 2.4 BU. The other six SAEs (febrile infection, joint swelling, haemarthrosis, pseudomembranous colitis, multiorgan failure and muscle haemorrhage) were considered to be unrelated to treatment.\nOne patient died during the study. Death was due to multiorgan failure attributed to codeine toxicity. The patient had received antihaemophilic factor (recombinant) only for PK evaluation and was not randomized to treatment by CI or BI and therefore did not undergo surgery.", "The study started on 29 May 2006 and was completed on 9 December 2015. Of 85 patients enrolled at 22 sites (in the United States, European Union, Norway and Russia), 72 received the infusion of antihaemophilic factor (recombinant) in the preoperative period for PK determination. Of these, 63 met the criteria for perioperative treatment and were randomized to receive CI (n = 32) or BI (n = 31). 
Of the patients who received CI, 23 had severe haemophilia A (baseline FVIII level <1 IU/dl) and six had moderately severe haemophilia A (baseline FVIII level 1 to <2 IU/dl). Of the patients who received BI, 26 had severe haemophilia A and five had moderately severe haemophilia A. Three patients randomized to CI did not undergo surgery and were not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (CI, n = 29; BI, n = 31). The safety analysis set included all 72 patients who received at least one dose of antihaemophilic factor (recombinant). Patient disposition is summarized in Figure 1. Each patient underwent one procedure: unilateral knee replacement surgery (n = 48; 24 CI, 24 BI), hip surgery (n = 4; two CI, two BI) or shoulder/elbow/ankle/knee surgery (n = 8; three CI, five BI). All patients were male. Demographic and clinical characteristics were similar between groups; the medians and ranges of age were nearly identical (Table 2). Four patients received enoxaparin or nadroparin as thrombosis prophylaxis.\nDisposition of patients. †Nine patients were not randomized for the following reasons: excluded because of pharmacokinetics (n = 4), screen failures (n = 2), physician decision (n = 1), sponsor decision (n = 1) and death (n = 1).‡Three patients randomized to receive continuous infusion did not undergo surgery and were therefore not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (continuous infusion, n = 29; bolus infusion, n = 31)\nPatient demographic and clinical characteristics in the per‐protocol analysis set", "Information on drainage fluid PRBC was not available for six patients (three CI, three BI) at 24 h after surgery, but the cumulative PRBC/blood volume in the drainage fluid (ie RBC, MCV and haematocrit) during the first 24 h following surgery was comparable between the CI and BI groups (Table 3; Table 4 [by type of surgery]), with a ratio of 0.92 (95% confidence interval for mean, 0.82–1.05; median, 3.4 × 1012 RBCs/l for CI and 3.5 × 1012 RBCs/l for BI). The one‐sided p‐value against the null hypothesis of ratio ≥200% was <.001, confirming the non‐inferiority of CI to BI with a 5% type I error (as the upper confidence limit did not exceed 200%).\nKey efficacy parameters in the per‐protocol analysis set\na\n\n\nAbbreviations: BI, bolus infusion; CI, continuous infusion; PRBC, packed red blood cell; SD, standard deviation.\nData shown are from patients in the per‐protocol analysis set, unless otherwise specified. The patient numbers shown for the different parameters vary because some patients did not have all data available.\nThe one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was <.001.\nThe one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was .041.\nNot statistically significant by a post hoc Pearson's chi‐squared test (p\ntwo‐sided = .612, using Monte Carlo simulation with 106 replicates).\nNot statistically significant by a post hoc Pearson's chi‐squared test (p\ntwo‐sided = .132, using Monte Carlo simulation with 106 replicates).\nTrans‐ and postoperatively combined.\nCumulative PRBC volume in drainage fluid at 24 h (1012 RBCs/l) by type of surgery\nAbbreviations: NA, not applicable; SD, standard deviation.\nTotal blood loss until drain removal, adjusted for expected blood loss, was slightly higher in the CI group than the BI group (Table 3). 
The mean (95% confidence interval) ratio of blood loss volume for CI versus BI was estimated to be 1.3 (0.8–2.1), and the one‐sided p‐value against the null hypothesis of ratio ≥200% was .041. Most bleeding episodes occurred in patients receiving BI; of four reported bleeding episodes (one episode per patient), three occurred in patients receiving BI and one in a patient receiving CI (Table 3). None of the bleeding episodes was considered by the investigator to be ‘the result of inadequate therapeutic response in the face of proper dosing, necessitating a change in therapeutic regimen'. Patients receiving CI were given more PRBC transfusions than patients receiving BI. PRBC transfusions were required in 18/29 and 13/31 patients receiving CI and BI, respectively, with a mean (range) of 1.3 (1–5) units in patients receiving CI and a mean (range) of 0.9 (1–3) units in patients receiving BI.\nThe total amount of haemoglobin in the cumulative drainage fluid during the first 24 h after surgery and until drain removal (if drainage continued) was comparable between patients who received CI and BI. In the stratum of patients who underwent unilateral knee replacement (CI, n = 24; BI, n = 24), the point estimate for the mean was 98.21 g/l (95% confidence interval for mean, 89.68–107.56 g/l) for CI and 97.63 g/l (86.07–110.74 g/l) for BI. The ratio of CI to BI was estimated to be 1.01.\nClinically relevant postoperative haematomas were observed in two patients receiving CI and two receiving BI (three haematomas per group, for a total of six). Two patients (one CI, one BI) had undergone knee replacement and two patients (one CI, one BI) had undergone hip surgery.\nThe total antihaemophilic factor (recombinant) dose (per kg body weight) administered in the combined transoperative and postoperative periods was similar in the CI and BI groups (Table 3). The global haemostatic efficacy of antihaemophilic factor (recombinant) administered by CI was assessed to be at least as good as that administered by BI. As shown in Table 5, scores of ‘excellent’ were evenly distributed among patients who received CI and patients who received BI. All patients receiving CI had a score of ‘excellent’ or ‘good’.\nGHEA score\nAbbreviation: GHEA, Global Hemostatic Efficacy Assessment.\nValues represent numbers of patients with reported GHEA scores.\nThe GHEA score at postoperative day 8 was missing for six patients.\nIncremental recovery over time could be analysed only for the BI arm, as BIs in the CI arm were rare. Compared to the value at the loading dose on day 0, the median IR decreased by ~20% after the first week following surgery (day 7), with high variability across individual patients. During the second week, many patients were discharged from hospital and not enough samples were available for analysis. CL could be analysed for the CI arm, but not the BI arm because of insufficient data. The determination of CL used the observed FVIII level as the steady‐state level. This assumption was questionable for days 1 and 4 due to the additional postsurgical BIs and the reduction in infusion rate scheduled at day 3. For days 2, 3 and 5, an increase of ~20% in median CL was observed compared with that from the presurgical full PK analysis. Only at days 6 and 7 was the median CL below the initial value, but high variability in individual patient values was seen throughout the first week. Second‐week data were insufficient for analysis.", "Adverse events observed are summarized in Table 6. 
In the safety population (N = 72), 230 AEs were reported in 51 patients (70.8%). A total of 14 treatment‐related AEs were reported in 10 patients: five patients treated by CI had eight AEs and five patients treated by BI had six AEs. Ten of the 14 treatment‐related AEs were classified as non‐serious (reported in six patients): anaemia (n = 5), headache (n = 2), allergic dermatitis (n = 1), pruritus (n = 1) and pyrexia (n = 1).\nSummary of AEs in the safety population\na\n and by treatment arm\nAbbreviations: AE, adverse event; SAE, serious adverse event.\nThe safety population included nine patients who were not randomized to receive continuous or bolus infusion; no AEs were recorded in these patients.\nAll treatment‐related SAEs were development of FVIII inhibitors.\nOne SAE unrelated to treatment occurred in a patient who died before randomization.\nTen SAEs were reported in ten patients. Of these, four SAEs of FVIII inhibitor development (two patients in each group) were considered related to treatment; all four patients had severe haemophilia A, and none required treatment with a bypassing agent. The two patients receiving CI developed high‐titre inhibitors (up to 20.8 and 10.7 BU, respectively, on study days 63 and 57), which later decreased to the low‐titre range in both patients (1.0 and 1.7 BU, respectively). One patient was a 35‐year‐old male who had undergone hip surgery; he had received plasma‐derived FVIII products and had weakly positive Lupus anticoagulants of unknown clinical relevance. The other was a 30‐year‐old male who had undergone left knee replacement. Of the two patients receiving BI who developed FVIII inhibitors, one was a 35‐year‐old male who had undergone left primary total knee replacement and developed a low‐titre inhibitor (transient; maximum 0.89 BU that later decreased to 0.17 BU) on study day 30. The other was a 50‐year‐old male who developed a low‐titre inhibitor on study day 36 with a maximum titre of 3 BU that later decreased to 2.4 BU. The other six SAEs (febrile infection, joint swelling, haemarthrosis, pseudomembranous colitis, multiorgan failure and muscle haemorrhage) were considered to be unrelated to treatment.\nOne patient died during the study. Death was due to multiorgan failure attributed to codeine toxicity. The patient had received antihaemophilic factor (recombinant) only for PK evaluation and was not randomized to treatment by CI or BI and therefore did not undergo surgery.", "These results demonstrate that treatment by CI was non‐inferior to treatment by BI in terms of haemostatic efficacy and safety in patients undergoing elective unilateral major orthopaedic surgery that required drain placement. Although prior studies have evaluated CI of FVIII in patients with haemophilia A undergoing surgical procedures,\n3\n, \n8\n, \n11\n, \n12\n, \n13\n this was the first controlled trial to compare CI and BI of FVIII in the studied population.\nThe same concentrate used in the present study was also used in surgical patients in the pivotal study reported by Négrier et al.\n13\n In that prospective, open‐label, uncontrolled clinical trial, the efficacy and safety of CI and BI of antihaemophilic factor (recombinant) was examined in 58 patients undergoing 65 surgical procedures, of which 22 were associated with major haemorrhagic risk.\n13\n CI (with or without supplemental BIs) was used in 18 procedures and BI alone was used in 47 procedures. 
Intraoperative haemostatic efficacy, as well as postoperative haemostatic efficacy rated at the time of discharge, was assessed as ‘excellent’ or ‘good’ for all procedures; treatments were well tolerated, and no development of FVIII inhibitors was reported.\n13\n The median (range) total FVIII consumption during hospitalization for all major surgeries was 822 (401–2014) IU/kg per surgery with CI (including any supplemental BI) and 910 (228–1825) IU/kg per surgery with BI alone. Among those undergoing orthopaedic procedures, the median daily FVIII consumption during the first seven postoperative days was similar with CI (66.2 IU/kg/day) and BI (65.2 IU/kg/day).\n13\n Similarly to the current study, Négrier et al. concluded that antihaemophilic factor (recombinant) administered via CI or BI was effective and safe for perioperative haemostasis, although that study was not designed to compare CI with BI.\nBased on the findings of earlier studies, which reported good haemostatic efficacy and total FVIII doses 19%–36% lower with CI versus BI,\n3\n, \n7\n, \n8\n, \n11\n, \n12\n it was hypothesized that CI might be similarly or even more effective for preventing postoperative bleeding but with a reduced consumption of antihaemophilic factor (recombinant) compared with BI. However, in our study, the use of antihaemophilic factor (recombinant) was higher for CI versus BI on postoperative days 1–14 and higher for BI versus CI intraoperatively, on postoperative day 0, and from postoperative day 15 to study end. These findings may be due in part to the study protocol and design, which specified dosing, permitted patients to switch from CI to BI from day 8 onwards and focused on reducing variations in plasma factor levels rather than reducing the amount of product used. When transoperative and postoperative usage was combined, the total factor consumption was similar in the CI and BI groups, which mirrors the findings of Négrier et al.\n13\n\n\nThis study provided the unique opportunity to compare the safety of CI versus BI. Although administration by CI provides the advantage of achieving more stable FVIII levels without the troughs that usually accompany BI and place the patient at risk of bleeding, the use of CI during and after surgery has raised concerns about an increased risk of inhibitor formation, particularly in patients with mild or moderate haemophilia.\n16\n, \n17\n, \n18\n However, no increase in inhibitor risk was seen in a large retrospective survey of 1079 procedures\n19\n and no inhibitors were detected in a prospective study of CI use in 46 previously treated patients with severe haemophilia A.\n20\n The development of inhibitors is a multifactorial process associated with a variety of genetic and environmental risk factors.\n21\n In the current study of patients with severe or moderately severe haemophilia A (baseline FVIII ≤2 IU/dl) and prior FVIII exposure of ≥150 days, 4 of 60 (6.7%) patients developed FVIII inhibitors, with no difference in frequency between the two groups (observed in two patients receiving CI and two receiving BI). Both patients with CI developed high‐titre inhibitors that evolved to low titre, whereas the two patients with inhibitors during BI had low‐titre inhibitors. 
Due to the limited number of affected patients, it is not possible to determine whether this difference was related to the study drug, method of administration, or potential confounding factors (eg the presence of high‐risk FVIII mutations and other genetic risk factors, which were not assessed in this study, or variability in tissue damage related to the surgical procedure). Although patients had been previously exposed to cryoprecipitates, fresh frozen plasma and/or plasma‐derived or recombinant FVIII concentrates, we cannot comment on how tolerant they were to FVIII due to a lack of an accurate record of prestudy FVIII usage. In the separate study of the present study drug (Négrier et al. mentioned above), surgical patients previously treated with antihaemophilic factor (recombinant) did not develop FVIII inhibitors.\n13\n In addition, the risk of inhibitors was not found to be elevated in a postmarketing surveillance study of patients previously treated with antihaemophilic factor (recombinant).\n22\n\n\nDuring the first week after surgery, a decrease in IR and an increase in CL were observed in the current study, although the limited data available prevent meaningful analysis and interpretation of these results, which should be considered in the context of varying results reported in the literature. Although Batorova and Martinowitz saw a significant decrease in CL during the 1–2 weeks following surgery,\n11\n others have described variable changes in CL following major surgical procedures,\n23\n and recent reports indicate substantial intraindividual variation in IR and poor reproducibility of CL, with numerous factors affecting IR and CL.\n23\n, \n24\n\n\nLimitations of this study include the necessity to enrol patients undergoing orthopaedic surgeries other than unilateral knee replacement, difficulty in estimating PRBC volumes in drainage fluid, lack of information on the drainage fluid PRBC for six patients, and variability in surgical techniques and practices at the participating sites, which could only be partially addressed per study protocol. Another limitation inherent to the study design is the use of PK assessment before surgery and central dosing recommendations, which differ from conditions in real‐world clinical practice. On the other hand, this study has inherent strengths as a multicentre randomized study with a large number of patients with balanced surgical procedures in the two arms.", "The administration of antihaemophilic factor (recombinant) by CI resulted in comparable efficacy and safety outcomes and is a viable alternative to intermittent BI in the perioperative haemostatic management of patients with haemophilia A undergoing major orthopaedic surgery. Taking into account the complexity of CI versus BI, it is useful to know that these types of FVIII administration showed non‐inferiority, such that treatment may be optimized for individual patients. These findings may help inform perioperative haemostatic management of these patients, with the goal of maintaining stable FVIII levels during and after surgery, whether by the use of CI or BI regimens.", "IP has received occasional honoraria for lectures and advisory board meetings from Bayer, CSL Behring, Novo Nordisk, Pfizer, Sobi and Shire*; unrestricted research grants from CSL Behring, Novo Nordisk and Shire*. VM has no conflict of interest to declare. 
JW has received research grants/honoraria for lectures from Baxalta, Bayer, CSL Behring, Ferring Pharmaceuticals, Novo Nordisk, Octapharma, Pfizer, Roche, Siemens, Shire*, Sobi and Werfen. WE is an employee of Baxalta Innovations GmbH, a Takeda company, and a Takeda stock owner. JD is an employee of Baxalta Innovations GmbH, a Takeda company. ST and BE are employees of Baxalta US Inc., a Takeda company, and Takeda stock owners. GS was an employee of Baxalta US Inc., a Takeda company, at the time of the current study.\n*A member of the Takeda group of companies.", "All authors reviewed/revised the manuscript critically for intellectual content and gave their final approval for it to be published. Ingrid Pabinger was a study investigator and contributed to the design of the study and interpretation of the data and study content. Vasily Mamonov and Jerzy Windyga were study investigators and contributed to the interpretation of the data and study content. None of the authors received honoraria for the writing of this manuscript. Werner Engl and Bruce Ewenstein contributed to the conception, design, analysis and interpretation of the clinical trial. Jennifer Doralt, Srilatha Tangada and Gerald Spotts contributed to the analysis and interpretation of the data, and study conduct.\nAll authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved." ]
[ null, "materials-and-methods", null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusions", "COI-statement", null ]
[ "clinical trial", "haemophilia A", "intravenous infusion", "orthopaedic surgery", "recombinant factor VIII" ]
INTRODUCTION: Antihaemophilic factor (recombinant), plasma/albumin‐free method (ADVATE®; Baxalta US Inc., a Takeda company, Lexington, MA, USA), is a recombinant human coagulation factor VIII (FVIII) indicated for the treatment and prophylaxis of bleeding, including perioperative management, in patients of all ages with haemophilia A. 1 , 2 When used perioperatively, antihaemophilic factor (recombinant) is typically administered by bolus infusion (BI) at time points dictated by its pharmacokinetic (PK) profile. Continuous infusion (CI) was developed to reduce the wide variations in plasma FVIII levels that usually accompany BI and decrease the quantity of infused FVIII concentrate. 3 , 4 , 5 , 6 , 7 , 8 , 9 CI during and/or after surgery 1 , 10 may stabilize FVIII levels, eliminating the deep troughs characteristic of BI that may increase bleeding risk. Several cohort and non‐controlled studies have indicated that FVIII CI is well tolerated and efficacious for providing perioperative haemostasis for patients with haemophilia A; some studies have suggested that CI may also reduce FVIII consumption compared with BI. 3 , 7 , 8 , 11 , 12 , 13 Continuous infusion and BI in the same type of intervention have not been compared in a prospective, controlled setting. The objective of this prospective, randomized phase III/IV study in patients with severe or moderately severe haemophilia A was to assess the perioperative haemostatic efficacy and safety of antihaemophilic factor (recombinant) administered via CI and intermittent BI. MATERIALS AND METHODS: Patients Eligible previously treated patients (18–70 years of age) had severe or moderately severe haemophilia A at screening (FVIII level ≤2 IU/dl) and were scheduled for elective unilateral major orthopaedic surgery requiring drain placement. Protocol amendments raised the maximum age (previously 65 years) and expanded the type of surgery (previously unilateral primary total knee replacement only). Major orthopaedic surgery was defined as requiring moderate or deep sedation, general anaesthesia, or major nerve conduction blockade and had a significant risk of large‐volume blood loss or blood loss into confined anatomical space. Patients provided written informed consent and were required to have had prior exposure to FVIII concentrates for ≥150 days. Patients were excluded if they met any of the following criteria: detectable FVIII inhibitors at screening (by the central laboratory), history of inhibitors (>0.4 Bethesda units [BU] by Nijmegen modification of the Bethesda assay), scheduled for any other minor or major surgery, laboratory evidence of abnormal haemostasis from causes other than haemophilia A, and current or planned receipt of an immunomodulatory drug other than antiretroviral therapy. Eligible previously treated patients (18–70 years of age) had severe or moderately severe haemophilia A at screening (FVIII level ≤2 IU/dl) and were scheduled for elective unilateral major orthopaedic surgery requiring drain placement. Protocol amendments raised the maximum age (previously 65 years) and expanded the type of surgery (previously unilateral primary total knee replacement only). Major orthopaedic surgery was defined as requiring moderate or deep sedation, general anaesthesia, or major nerve conduction blockade and had a significant risk of large‐volume blood loss or blood loss into confined anatomical space. Patients provided written informed consent and were required to have had prior exposure to FVIII concentrates for ≥150 days. 
Patients were excluded if they met any of the following criteria: detectable FVIII inhibitors at screening (by the central laboratory), history of inhibitors (>0.4 Bethesda units [BU] by Nijmegen modification of the Bethesda assay), scheduled for any other minor or major surgery, laboratory evidence of abnormal haemostasis from causes other than haemophilia A, and current or planned receipt of an immunomodulatory drug other than antiretroviral therapy. Study design This phase III/IV, prospective, multicentre, randomized, controlled study was divided into three periods: (1) a preoperative period, including a PK evaluation; (2) an intraoperative and postoperative period, from loading dose to postoperative day 7; and (3) a safety follow‐up period, from postoperative day 8 to the end‐of‐study visit (6 weeks following surgery). The study was conducted in accordance with the principles of the Declaration of Helsinki, and the protocol, informed consent form and all amendments were approved by the ethics committee at each study site. The trial is registered at ClinicalTrials.gov (NCT00357656). Pharmacokinetic analysis was performed before surgery to establish individual FVIII recovery and clearance (CL) values. In the preoperative period (≤60 days before the surgery), patients who had completed a washout of 72 h and were not actively bleeding were infused with antihaemophilic factor (recombinant) 50 ± 5 IU/kg of body weight and had 10 postinfusion samples taken within 48 h. If PK data suggested the presence of subclinical inhibitors (FVIII half‐life <8 h, incremental recovery (IR) <1.5 (IU/dl)/(IU/kg), or CL >5.0 ml/(kg × h), patients were excluded from further participation in the study. Patients who completed the preoperative PK phase were randomized 1:1 to treatment by CI or BI through postoperative day 7. Patients were stratified by type of surgery (unilateral knee replacement, hip surgery and shoulder/elbow/ankle/knee [except knee replacement] surgery). Randomization was separate and independent for each stratum. Randomization lists were prepared in blocks with a block size >2 using the random number generator algorithm of Wichmann and Hill 14 as modified by McLeod. 15 Patients in each group had blood drawn once every 24 h for measurement of FVIII activity. Patients stayed in hospital until postoperative day 7 to receive study treatment per protocol. Patients were discharged from hospital according to site practice. Patients randomized to CI could continue on CI or switch to intermittent BI starting on postoperative day 8, at the discretion of the investigator. During the period from postoperative day 8 until 6 weeks ±3 days following surgery, treatment (including mechanical or pharmacologic thrombosis prophylaxis) was also at the discretion of the investigator, depending on current practice of the site during physical rehabilitation. Antihaemophilic factor (recombinant) doses during the perioperative period (until postoperative day 7) were based on the PK profile determined before surgery. Within 60 min before surgery, patients received a loading dose of antihaemophilic factor (recombinant) to maintain a minimum FVIII level of at least 80% of normal. 
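To make the preoperative PK screen described above concrete, the following is a minimal sketch of how incremental recovery, terminal half-life and clearance might be derived from the 10 postinfusion samples and checked against the protocol's subclinical-inhibitor thresholds (half-life <8 h, IR <1.5 (IU/dl)/(IU/kg), CL >5.0 ml/(kg × h)). It is not the study's analysis code; the one-compartment, log-linear assumptions and the function and variable names are illustrative only.

```python
import math

def screen_preoperative_pk(times_h, levels_iu_dl, dose_iu_per_kg, pre_level_iu_dl=0.0):
    """Illustrative preoperative PK screen (not the study's analysis code).

    times_h / levels_iu_dl: postinfusion sampling times (h) and FVIII levels (IU/dl)
    from the 50 +/- 5 IU/kg infusion; returns incremental recovery (IR), terminal
    half-life and clearance (CL), plus the protocol's exclusion flag.
    """
    # Incremental recovery: observed rise divided by the dose per kg body weight.
    ir = (max(levels_iu_dl) - pre_level_iu_dl) / dose_iu_per_kg     # (IU/dl)/(IU/kg)

    # Terminal half-life from a log-linear fit over the last sampling points.
    tail_t, tail_ln = times_h[-4:], [math.log(c) for c in levels_iu_dl[-4:]]
    mean_t, mean_ln = sum(tail_t) / len(tail_t), sum(tail_ln) / len(tail_ln)
    sxy = sum((t - mean_t) * (y - mean_ln) for t, y in zip(tail_t, tail_ln))
    sxx = sum((t - mean_t) ** 2 for t in tail_t)
    ke = -sxy / sxx                                                 # 1/h
    t_half = math.log(2) / ke                                       # h

    # Clearance from dose / AUC (trapezoids plus log-linear tail extrapolation).
    auc = sum((levels_iu_dl[i] + levels_iu_dl[i + 1]) / 2 * (times_h[i + 1] - times_h[i])
              for i in range(len(times_h) - 1)) + levels_iu_dl[-1] / ke   # (IU/dl)*h
    cl = dose_iu_per_kg / auc * 100                                 # ml/(kg*h)

    excluded = t_half < 8 or ir < 1.5 or cl > 5.0
    return {"IR": ir, "t_half_h": t_half, "CL_ml_kg_h": cl, "excluded": excluded}
```

Under this sketch, a patient with, say, IR 2.0, half-life 11 h and CL 3 ml/(kg × h) would pass the screen and proceed to randomization.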
The formulas for determining the initial weight‐adjusted loading dose differed, depending on whether the patient was randomized to receive antihaemophilic factor (recombinant) as BI or CI for subsequent management: BI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/ t − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]). BI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/ t − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours); CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]). The loading dose was given intravenously over up to 5 min at a maximum infusion rate of 10 ml/min. If the blood sample drawn ~15 min after infusion showed that the desired FVIII level had not been reached or the activated partial thromboplastin time (aPTT) had not normalized, a supplemental loading dose could be given. Surgery could be started only after normalization of the aPTT. To compensate for intraoperative blood loss and increased FVIII consumption, patients received an additional bolus of study drug in the recovery room sufficient to raise the FVIII level by 50 IU/dl. CI was to be started before surgery, immediately following the loading dose; study product was administered at a rate (IU/[kg × h]) calculated according to the following formula: CL × target FVIII level. The infusion was administered using a syringe pump according to this regimen, but always ≥0.4 ml/h. BI treatment started with the loading dose; treatment frequency varied according to the patient's PK profile, but typically included three infusions per 24 h during the first 72 h following the loading dose, infusions every 12 h from postoperative day 3 to 7, and daily infusions from postoperative day 8 to 14. For both CI and BI, FVIII trough levels were to be maintained above 80 IU/dl for the initial 72 h, at 50–100 IU/dl through postoperative day 7 and >30 IU/dl for postoperative days 8–14. This phase III/IV, prospective, multicentre, randomized, controlled study was divided into three periods: (1) a preoperative period, including a PK evaluation; (2) an intraoperative and postoperative period, from loading dose to postoperative day 7; and (3) a safety follow‐up period, from postoperative day 8 to the end‐of‐study visit (6 weeks following surgery). The study was conducted in accordance with the principles of the Declaration of Helsinki, and the protocol, informed consent form and all amendments were approved by the ethics committee at each study site. The trial is registered at ClinicalTrials.gov (NCT00357656). Pharmacokinetic analysis was performed before surgery to establish individual FVIII recovery and clearance (CL) values. In the preoperative period (≤60 days before the surgery), patients who had completed a washout of 72 h and were not actively bleeding were infused with antihaemophilic factor (recombinant) 50 ± 5 IU/kg of body weight and had 10 postinfusion samples taken within 48 h. 
If PK data suggested the presence of subclinical inhibitors (FVIII half‐life <8 h, incremental recovery (IR) <1.5 (IU/dl)/(IU/kg), or CL >5.0 ml/(kg × h), patients were excluded from further participation in the study. Patients who completed the preoperative PK phase were randomized 1:1 to treatment by CI or BI through postoperative day 7. Patients were stratified by type of surgery (unilateral knee replacement, hip surgery and shoulder/elbow/ankle/knee [except knee replacement] surgery). Randomization was separate and independent for each stratum. Randomization lists were prepared in blocks with a block size >2 using the random number generator algorithm of Wichmann and Hill 14 as modified by McLeod. 15 Patients in each group had blood drawn once every 24 h for measurement of FVIII activity. Patients stayed in hospital until postoperative day 7 to receive study treatment per protocol. Patients were discharged from hospital according to site practice. Patients randomized to CI could continue on CI or switch to intermittent BI starting on postoperative day 8, at the discretion of the investigator. During the period from postoperative day 8 until 6 weeks ±3 days following surgery, treatment (including mechanical or pharmacologic thrombosis prophylaxis) was also at the discretion of the investigator, depending on current practice of the site during physical rehabilitation. Antihaemophilic factor (recombinant) doses during the perioperative period (until postoperative day 7) were based on the PK profile determined before surgery. Within 60 min before surgery, patients received a loading dose of antihaemophilic factor (recombinant) to maintain a minimum FVIII level of at least 80% of normal. The formulas for determining the initial weight‐adjusted loading dose differed, depending on whether the patient was randomized to receive antihaemophilic factor (recombinant) as BI or CI for subsequent management: BI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/ t − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]). BI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/ t − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours); CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]). The loading dose was given intravenously over up to 5 min at a maximum infusion rate of 10 ml/min. If the blood sample drawn ~15 min after infusion showed that the desired FVIII level had not been reached or the activated partial thromboplastin time (aPTT) had not normalized, a supplemental loading dose could be given. Surgery could be started only after normalization of the aPTT. To compensate for intraoperative blood loss and increased FVIII consumption, patients received an additional bolus of study drug in the recovery room sufficient to raise the FVIII level by 50 IU/dl. CI was to be started before surgery, immediately following the loading dose; study product was administered at a rate (IU/[kg × h]) calculated according to the following formula: CL × target FVIII level. 
The infusion was administered using a syringe pump according to this regimen, but always ≥0.4 ml/h. BI treatment started with the loading dose; treatment frequency varied according to the patient's PK profile, but typically included three infusions per 24 h during the first 72 h following the loading dose, infusions every 12 h from postoperative day 3 to 7, and daily infusions from postoperative day 8 to 14. For both CI and BI, FVIII trough levels were to be maintained above 80 IU/dl for the initial 72 h, at 50–100 IU/dl through postoperative day 7 and >30 IU/dl for postoperative days 8–14. Primary and secondary efficacy outcome measures The primary study objective was to compare the perioperative haemostatic efficacy of antihaemophilic factor (recombinant) CI versus intermittent BI. The primary efficacy outcome measure was the cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid (based on haematocrit values) assessed during the first 24 h after surgery determined from drainage fluid samples using the red blood cell counting method used at the local laboratory. Secondary efficacy outcome measures included postoperative blood loss until drain removal, number of bleeding episodes through postoperative day 7 and number of units of PRBCs transfused. The trigger for initiating a blood transfusion was determined by each clinician for each patient. The primary study objective was to compare the perioperative haemostatic efficacy of antihaemophilic factor (recombinant) CI versus intermittent BI. The primary efficacy outcome measure was the cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid (based on haematocrit values) assessed during the first 24 h after surgery determined from drainage fluid samples using the red blood cell counting method used at the local laboratory. Secondary efficacy outcome measures included postoperative blood loss until drain removal, number of bleeding episodes through postoperative day 7 and number of units of PRBCs transfused. The trigger for initiating a blood transfusion was determined by each clinician for each patient. Safety outcome measures Safety was a secondary objective. The numbers of adverse events (AEs) and serious AEs (SAEs) were assessed, as well as their relationship to treatment and the incidence of FVIII antibody formation. AEs were grouped by the Medical Dictionary for Regulatory Activities system organ class and classified by severity (mild, moderate, severe). Inhibitor testing was performed at screening, at the rescreen visit after the pretreatment phase (if applicable) and at the end‐of‐study visit according to the Nijmegen modification of the Bethesda assay in the central laboratory. FVIII inhibitors could be determined at the local laboratory and verified by the central laboratory. Safety was a secondary objective. The numbers of adverse events (AEs) and serious AEs (SAEs) were assessed, as well as their relationship to treatment and the incidence of FVIII antibody formation. AEs were grouped by the Medical Dictionary for Regulatory Activities system organ class and classified by severity (mild, moderate, severe). Inhibitor testing was performed at screening, at the rescreen visit after the pretreatment phase (if applicable) and at the end‐of‐study visit according to the Nijmegen modification of the Bethesda assay in the central laboratory. FVIII inhibitors could be determined at the local laboratory and verified by the central laboratory. 
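As context for the primary outcome definition above, the short sketch below shows how an RBC count, MCV and haematocrit relate to a packed red cell volume in a drainage-fluid sample. The text does not specify the local laboratories' exact calculation, so this is a generic illustration only, with hypothetical example numbers.

```python
def packed_cell_volume_ml(drainage_volume_ml, rbc_count_e12_per_l, mcv_fl):
    """Generic illustration: haematocrit from RBC count and MCV, then the
    packed red cell volume contained in a drainage-fluid sample."""
    haematocrit = rbc_count_e12_per_l * 1e12 * mcv_fl * 1e-15   # volume fraction
    return drainage_volume_ml * haematocrit

# e.g. 400 ml of drainage fluid at 3.4 x 10^12 RBCs/l with an MCV of 90 fl
# contains roughly 400 * 0.306 = 122 ml of packed red cells.
```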
Additional exploratory outcome measures Exploratory outcome measures included total weight‐adjusted antihaemophilic factor (recombinant) dose through postoperative day 7, PK assessments (IR, CL) through postoperative day 7, total haemoglobin in cumulative drainage fluid during the first 24 h after surgery and until drain removal, rate of clinically relevant postoperative haematomas, and Global Hemostatic Efficacy Assessment (GHEA). The GHEA was based on three categories (Table 1), added to form the GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). For a score of 7 to be rated ‘excellent’, each individual assessment score had to be ≥2; otherwise, a score of 7 was to be rated ‘good’. GHEA scoring categories a Abbreviation: GHEA, Global Hemostatic Efficacy Assessment. Categories 1 and 2 determined by the operating surgeon; category 3 determined by the investigator. The scores from the three categories were added to form the total GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). Scores of 7 were rated as ‘excellent’, if each individual assessment score was ≥2; otherwise, a score of 7 was rated as ‘good’. Exploratory outcome measures included total weight‐adjusted antihaemophilic factor (recombinant) dose through postoperative day 7, PK assessments (IR, CL) through postoperative day 7, total haemoglobin in cumulative drainage fluid during the first 24 h after surgery and until drain removal, rate of clinically relevant postoperative haematomas, and Global Hemostatic Efficacy Assessment (GHEA). The GHEA was based on three categories (Table 1), added to form the GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). For a score of 7 to be rated ‘excellent’, each individual assessment score had to be ≥2; otherwise, a score of 7 was to be rated ‘good’. GHEA scoring categories a Abbreviation: GHEA, Global Hemostatic Efficacy Assessment. Categories 1 and 2 determined by the operating surgeon; category 3 determined by the investigator. The scores from the three categories were added to form the total GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). Scores of 7 were rated as ‘excellent’, if each individual assessment score was ≥2; otherwise, a score of 7 was rated as ‘good’. Statistics Descriptive statistics were provided for baseline characteristics and summarized by treatment regimen. A sample size of 60 divided equally between the CI and BI arms was selected for the study. To establish non‐inferiority of CI to BI, the ratio of the mean PRBC volumes of the drainage fluids in the CI arm to the BI arm was compared to a non‐inferiority margin of 200%. This was equivalent to the upper 95% confidence limit for the ratio being below 200%. The sample size requirements for establishing non‐inferiority by t‐test at a non‐inferiority limit of 200% were calculated and a sample size of 50–60 was determined to provide adequate power. In addition, at least 15 patients in each treatment group were required to have baseline FVIII levels <1%. Pearson's chi‐squared test with Monte Carlo simulation was used for comparison of patients with bleeding episodes and for patients who required transfusions. Descriptive statistics were provided for baseline characteristics and summarized by treatment regimen. 
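The non-inferiority criterion above (upper 95% confidence limit of the CI:BI ratio of mean PRBC volumes below 200%, with a one-sided test against the margin) could be implemented in several ways, and the published text does not give the exact computation. The sketch below uses one common approach, a Welch t-interval on log-transformed values back-transformed to a ratio; the log-scale assumption and all names are illustrative, not the study's statistical code.

```python
import numpy as np
from scipy import stats

def noninferiority_ratio_test(ci_values, bi_values, margin=2.0, alpha=0.05):
    """One possible implementation of the ratio-based criterion described above:
    Welch t-interval on log-transformed values, back-transformed to a ratio.
    H0: ratio of means >= margin (200%); H1: ratio < margin."""
    x = np.log(np.asarray(ci_values, dtype=float))
    y = np.log(np.asarray(bi_values, dtype=float))
    vx, vy = x.var(ddof=1) / x.size, y.var(ddof=1) / y.size
    diff = x.mean() - y.mean()                    # log of the CI:BI ratio
    se = np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (x.size - 1) + vy ** 2 / (y.size - 1))
    upper_limit = np.exp(diff + stats.t.ppf(1 - alpha, df) * se)
    p_one_sided = stats.t.cdf((diff - np.log(margin)) / se, df)
    return np.exp(diff), upper_limit, p_one_sided
```

In this sketch, non-inferiority is declared when the returned upper limit stays below 2.0 (200%), mirroring the criterion stated above.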
A sample size of 60 divided equally between the CI and BI arms was selected for the study. To establish non‐inferiority of CI to BI, the ratio of the mean PRBC volumes of the drainage fluids in the CI arm to the BI arm was compared to a non‐inferiority margin of 200%. This was equivalent to the upper 95% confidence limit for the ratio being below 200%. The sample size requirements for establishing non‐inferiority by t‐test at a non‐inferiority limit of 200% were calculated and a sample size of 50–60 was determined to provide adequate power. In addition, at least 15 patients in each treatment group were required to have baseline FVIII levels <1%. Pearson's chi‐squared test with Monte Carlo simulation was used for comparison of patients with bleeding episodes and for patients who required transfusions. Patients: Eligible previously treated patients (18–70 years of age) had severe or moderately severe haemophilia A at screening (FVIII level ≤2 IU/dl) and were scheduled for elective unilateral major orthopaedic surgery requiring drain placement. Protocol amendments raised the maximum age (previously 65 years) and expanded the type of surgery (previously unilateral primary total knee replacement only). Major orthopaedic surgery was defined as requiring moderate or deep sedation, general anaesthesia, or major nerve conduction blockade and had a significant risk of large‐volume blood loss or blood loss into confined anatomical space. Patients provided written informed consent and were required to have had prior exposure to FVIII concentrates for ≥150 days. Patients were excluded if they met any of the following criteria: detectable FVIII inhibitors at screening (by the central laboratory), history of inhibitors (>0.4 Bethesda units [BU] by Nijmegen modification of the Bethesda assay), scheduled for any other minor or major surgery, laboratory evidence of abnormal haemostasis from causes other than haemophilia A, and current or planned receipt of an immunomodulatory drug other than antiretroviral therapy. Study design: This phase III/IV, prospective, multicentre, randomized, controlled study was divided into three periods: (1) a preoperative period, including a PK evaluation; (2) an intraoperative and postoperative period, from loading dose to postoperative day 7; and (3) a safety follow‐up period, from postoperative day 8 to the end‐of‐study visit (6 weeks following surgery). The study was conducted in accordance with the principles of the Declaration of Helsinki, and the protocol, informed consent form and all amendments were approved by the ethics committee at each study site. The trial is registered at ClinicalTrials.gov (NCT00357656). Pharmacokinetic analysis was performed before surgery to establish individual FVIII recovery and clearance (CL) values. In the preoperative period (≤60 days before the surgery), patients who had completed a washout of 72 h and were not actively bleeding were infused with antihaemophilic factor (recombinant) 50 ± 5 IU/kg of body weight and had 10 postinfusion samples taken within 48 h. If PK data suggested the presence of subclinical inhibitors (FVIII half‐life <8 h, incremental recovery (IR) <1.5 (IU/dl)/(IU/kg), or CL >5.0 ml/(kg × h), patients were excluded from further participation in the study. Patients who completed the preoperative PK phase were randomized 1:1 to treatment by CI or BI through postoperative day 7. 
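For illustration of the randomization machinery referred to above, the sketch below pairs the classic AS 183 Wichmann–Hill generator with a simple permuted-block list for one surgery stratum. McLeod's modification, the study's actual seeds and its block size (stated only to be >2) are not reproduced; a block size of 4 is assumed here for illustration.

```python
class WichmannHill:
    """Classic AS 183 Wichmann-Hill generator, shown for illustration; the
    trial used a version modified by McLeod, which is not reproduced here."""
    def __init__(self, s1, s2, s3):
        self.s1, self.s2, self.s3 = s1, s2, s3

    def random(self):
        self.s1 = (171 * self.s1) % 30269
        self.s2 = (172 * self.s2) % 30307
        self.s3 = (170 * self.s3) % 30323
        return (self.s1 / 30269 + self.s2 / 30307 + self.s3 / 30323) % 1.0

def permuted_block_list(rng, n_blocks, block_size=4):
    """1:1 CI/BI assignments in permuted blocks for one surgery stratum
    (a block size of 4 is an assumption; the protocol states only >2)."""
    assignments = []
    for _ in range(n_blocks):
        block = ["CI"] * (block_size // 2) + ["BI"] * (block_size // 2)
        for i in range(len(block) - 1, 0, -1):      # Fisher-Yates shuffle
            j = int(rng.random() * (i + 1))
            block[i], block[j] = block[j], block[i]
        assignments.extend(block)
    return assignments

# e.g. permuted_block_list(WichmannHill(12, 34, 56), n_blocks=8)
```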
Patients were stratified by type of surgery (unilateral knee replacement, hip surgery and shoulder/elbow/ankle/knee [except knee replacement] surgery). Randomization was separate and independent for each stratum. Randomization lists were prepared in blocks with a block size >2 using the random number generator algorithm of Wichmann and Hill 14 as modified by McLeod. 15 Patients in each group had blood drawn once every 24 h for measurement of FVIII activity. Patients stayed in hospital until postoperative day 7 to receive study treatment per protocol. Patients were discharged from hospital according to site practice. Patients randomized to CI could continue on CI or switch to intermittent BI starting on postoperative day 8, at the discretion of the investigator. During the period from postoperative day 8 until 6 weeks ±3 days following surgery, treatment (including mechanical or pharmacologic thrombosis prophylaxis) was also at the discretion of the investigator, depending on current practice of the site during physical rehabilitation. Antihaemophilic factor (recombinant) doses during the perioperative period (until postoperative day 7) were based on the PK profile determined before surgery. Within 60 min before surgery, patients received a loading dose of antihaemophilic factor (recombinant) to maintain a minimum FVIII level of at least 80% of normal. The formulas for determining the initial weight‐adjusted loading dose differed, depending on whether the patient was randomized to receive antihaemophilic factor (recombinant) as BI or CI for subsequent management: BI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/ t − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours);CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]). BI loading dose and subsequent BIs (IU/kg) = ([target FVIII level {IU/dl} × 2I/ t − preinfusion level {IU/dl}]/IR [{IU/dl}/{IU/kg}]), where ‘I’ is the infusion interval (in hours) at which the target FVIII level shall be maintained and ‘t’ is the estimated FVIII half‐life (in hours); CI loading dose (IU/kg) = ([target FVIII level {IU/dl} − preinfusion level {IU/dl}] / IR [{IU/dl} / {IU/kg}]). The loading dose was given intravenously over up to 5 min at a maximum infusion rate of 10 ml/min. If the blood sample drawn ~15 min after infusion showed that the desired FVIII level had not been reached or the activated partial thromboplastin time (aPTT) had not normalized, a supplemental loading dose could be given. Surgery could be started only after normalization of the aPTT. To compensate for intraoperative blood loss and increased FVIII consumption, patients received an additional bolus of study drug in the recovery room sufficient to raise the FVIII level by 50 IU/dl. CI was to be started before surgery, immediately following the loading dose; study product was administered at a rate (IU/[kg × h]) calculated according to the following formula: CL × target FVIII level. The infusion was administered using a syringe pump according to this regimen, but always ≥0.4 ml/h. 
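A worked illustration of the dosing rules above may help. The BI multiplier is read as 2^(I/t), i.e. the peak must exceed the target by the amount that decays away over one dosing interval I at half-life t, and the clearance is converted from ml/(kg × h) to dl/(kg × h) so that its product with a target level in IU/dl yields a rate in IU/(kg × h). Both readings, and the example numbers, are interpretations for illustration rather than protocol text.

```python
def bi_loading_dose(target_iu_dl, preinfusion_iu_dl, ir, interval_h, half_life_h):
    """BI loading dose and subsequent boluses (IU/kg): the peak must exceed the
    target by 2**(I/t) so the level decays back to the target after one interval."""
    return (target_iu_dl * 2 ** (interval_h / half_life_h) - preinfusion_iu_dl) / ir

def ci_loading_dose(target_iu_dl, preinfusion_iu_dl, ir):
    """CI loading dose (IU/kg): raise the level to the target once; the
    continuous infusion then holds it there."""
    return (target_iu_dl - preinfusion_iu_dl) / ir

def ci_infusion_rate(clearance_ml_kg_h, target_iu_dl):
    """CI maintenance rate (IU/(kg*h)) = CL x target level; the factor 0.01
    converts clearance from ml/(kg*h) to dl/(kg*h) to match a level in IU/dl."""
    return clearance_ml_kg_h * 0.01 * target_iu_dl

# Illustrative numbers only: target 100 IU/dl, baseline 1 IU/dl, IR 2.0 (IU/dl)/(IU/kg),
# 8-h dosing interval, 12-h half-life, CL 3 ml/(kg*h):
#   bi_loading_dose(100, 1, 2.0, 8, 12)  -> ~79 IU/kg
#   ci_loading_dose(100, 1, 2.0)         -> ~50 IU/kg
#   ci_infusion_rate(3, 100)             -> 3 IU/(kg*h)
```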
BI treatment started with the loading dose; treatment frequency varied according to the patient's PK profile, but typically included three infusions per 24 h during the first 72 h following the loading dose, infusions every 12 h from postoperative day 3 to 7, and daily infusions from postoperative day 8 to 14. For both CI and BI, FVIII trough levels were to be maintained above 80 IU/dl for the initial 72 h, at 50–100 IU/dl through postoperative day 7 and >30 IU/dl for postoperative days 8–14. Primary and secondary efficacy outcome measures: The primary study objective was to compare the perioperative haemostatic efficacy of antihaemophilic factor (recombinant) CI versus intermittent BI. The primary efficacy outcome measure was the cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid (based on haematocrit values) assessed during the first 24 h after surgery determined from drainage fluid samples using the red blood cell counting method used at the local laboratory. Secondary efficacy outcome measures included postoperative blood loss until drain removal, number of bleeding episodes through postoperative day 7 and number of units of PRBCs transfused. The trigger for initiating a blood transfusion was determined by each clinician for each patient. Safety outcome measures: Safety was a secondary objective. The numbers of adverse events (AEs) and serious AEs (SAEs) were assessed, as well as their relationship to treatment and the incidence of FVIII antibody formation. AEs were grouped by the Medical Dictionary for Regulatory Activities system organ class and classified by severity (mild, moderate, severe). Inhibitor testing was performed at screening, at the rescreen visit after the pretreatment phase (if applicable) and at the end‐of‐study visit according to the Nijmegen modification of the Bethesda assay in the central laboratory. FVIII inhibitors could be determined at the local laboratory and verified by the central laboratory. Additional exploratory outcome measures: Exploratory outcome measures included total weight‐adjusted antihaemophilic factor (recombinant) dose through postoperative day 7, PK assessments (IR, CL) through postoperative day 7, total haemoglobin in cumulative drainage fluid during the first 24 h after surgery and until drain removal, rate of clinically relevant postoperative haematomas, and Global Hemostatic Efficacy Assessment (GHEA). The GHEA was based on three categories (Table 1), added to form the GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). For a score of 7 to be rated ‘excellent’, each individual assessment score had to be ≥2; otherwise, a score of 7 was to be rated ‘good’. GHEA scoring categories a Abbreviation: GHEA, Global Hemostatic Efficacy Assessment. Categories 1 and 2 determined by the operating surgeon; category 3 determined by the investigator. The scores from the three categories were added to form the total GHEA score (excellent, 7–9 [with no category <2]; good, 5–7 [with no category <1]; fair, 2–4 [with no category <1]; none, 0–1). Scores of 7 were rated as ‘excellent’, if each individual assessment score was ≥2; otherwise, a score of 7 was rated as ‘good’. Statistics: Descriptive statistics were provided for baseline characteristics and summarized by treatment regimen. A sample size of 60 divided equally between the CI and BI arms was selected for the study. 
To establish non‐inferiority of CI to BI, the ratio of the mean PRBC volumes of the drainage fluids in the CI arm to the BI arm was compared to a non‐inferiority margin of 200%. This was equivalent to the upper 95% confidence limit for the ratio being below 200%. The sample size requirements for establishing non‐inferiority by t‐test at a non‐inferiority limit of 200% were calculated and a sample size of 50–60 was determined to provide adequate power. In addition, at least 15 patients in each treatment group were required to have baseline FVIII levels <1%. Pearson's chi‐squared test with Monte Carlo simulation was used for comparison of patients with bleeding episodes and for patients who required transfusions. RESULTS: Patients The study started on 29 May 2006 and was completed on 9 December 2015. Of 85 patients enrolled at 22 sites (in the United States, European Union, Norway and Russia), 72 received the infusion of antihaemophilic factor (recombinant) in the preoperative period for PK determination. Of these, 63 met the criteria for perioperative treatment and were randomized to receive CI (n = 32) or BI (n = 31). Of the patients who received CI, 23 had severe haemophilia A (baseline FVIII level <1 IU/dl) and six had moderately severe haemophilia A (baseline FVIII level 1 to <2 IU/dl). Of the patients who received BI, 26 had severe haemophilia A and five had moderately severe haemophilia A. Three patients randomized to CI did not undergo surgery and were not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (CI, n = 29; BI, n = 31). The safety analysis set included all 72 patients who received at least one dose of antihaemophilic factor (recombinant). Patient disposition is summarized in Figure 1. Each patient underwent one procedure: unilateral knee replacement surgery (n = 48; 24 CI, 24 BI), hip surgery (n = 4; two CI, two BI) or shoulder/elbow/ankle/knee surgery (n = 8; three CI, five BI). All patients were male. Demographic and clinical characteristics were similar between groups; the medians and ranges of age were nearly identical (Table 2). Four patients received enoxaparin or nadroparin as thrombosis prophylaxis. Disposition of patients. †Nine patients were not randomized for the following reasons: excluded because of pharmacokinetics (n = 4), screen failures (n = 2), physician decision (n = 1), sponsor decision (n = 1) and death (n = 1).‡Three patients randomized to receive continuous infusion did not undergo surgery and were therefore not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (continuous infusion, n = 29; bolus infusion, n = 31) Patient demographic and clinical characteristics in the per‐protocol analysis set The study started on 29 May 2006 and was completed on 9 December 2015. Of 85 patients enrolled at 22 sites (in the United States, European Union, Norway and Russia), 72 received the infusion of antihaemophilic factor (recombinant) in the preoperative period for PK determination. Of these, 63 met the criteria for perioperative treatment and were randomized to receive CI (n = 32) or BI (n = 31). Of the patients who received CI, 23 had severe haemophilia A (baseline FVIII level <1 IU/dl) and six had moderately severe haemophilia A (baseline FVIII level 1 to <2 IU/dl). Of the patients who received BI, 26 had severe haemophilia A and five had moderately severe haemophilia A. Three patients randomized to CI did not undergo surgery and were not treated. 
Thus, 60 patients received treatment and comprised the per‐protocol analysis set (CI, n = 29; BI, n = 31). The safety analysis set included all 72 patients who received at least one dose of antihaemophilic factor (recombinant). Patient disposition is summarized in Figure 1. Each patient underwent one procedure: unilateral knee replacement surgery (n = 48; 24 CI, 24 BI), hip surgery (n = 4; two CI, two BI) or shoulder/elbow/ankle/knee surgery (n = 8; three CI, five BI). All patients were male. Demographic and clinical characteristics were similar between groups; the medians and ranges of age were nearly identical (Table 2). Four patients received enoxaparin or nadroparin as thrombosis prophylaxis. Disposition of patients. †Nine patients were not randomized for the following reasons: excluded because of pharmacokinetics (n = 4), screen failures (n = 2), physician decision (n = 1), sponsor decision (n = 1) and death (n = 1).‡Three patients randomized to receive continuous infusion did not undergo surgery and were therefore not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (continuous infusion, n = 29; bolus infusion, n = 31) Patient demographic and clinical characteristics in the per‐protocol analysis set Efficacy and exploratory outcomes Information on drainage fluid PRBC was not available for six patients (three CI, three BI) at 24 h after surgery, but the cumulative PRBC/blood volume in the drainage fluid (ie RBC, MCV and haematocrit) during the first 24 h following surgery was comparable between the CI and BI groups (Table 3; Table 4 [by type of surgery]), with a ratio of 0.92 (95% confidence interval for mean, 0.82–1.05; median, 3.4 × 1012 RBCs/l for CI and 3.5 × 1012 RBCs/l for BI). The one‐sided p‐value against the null hypothesis of ratio ≥200% was <.001, confirming the non‐inferiority of CI to BI with a 5% type I error (as the upper confidence limit did not exceed 200%). Key efficacy parameters in the per‐protocol analysis set a Abbreviations: BI, bolus infusion; CI, continuous infusion; PRBC, packed red blood cell; SD, standard deviation. Data shown are from patients in the per‐protocol analysis set, unless otherwise specified. The patient numbers shown for the different parameters vary because some patients did not have all data available. The one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was <.001. The one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was .041. Not statistically significant by a post hoc Pearson's chi‐squared test (p two‐sided = .612, using Monte Carlo simulation with 106 replicates). Not statistically significant by a post hoc Pearson's chi‐squared test (p two‐sided = .132, using Monte Carlo simulation with 106 replicates). Trans‐ and postoperatively combined. Cumulative PRBC volume in drainage fluid at 24 h (1012 RBCs/l) by type of surgery Abbreviations: NA, not applicable; SD, standard deviation. Total blood loss until drain removal, adjusted for expected blood loss, was slightly higher in the CI group than the BI group (Table 3). The mean (95% confidence interval) ratio of blood loss volume for CI versus BI was estimated to be 1.3 (0.8–2.1), and the one‐sided p‐value against the null hypothesis of ratio ≥200% was .041. Most bleeding episodes occurred in patients receiving BI; of four reported bleeding episodes (one episode per patient), three occurred in patients receiving BI and one in a patient receiving CI (Table 3). 
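The post hoc comparisons noted in the table footnotes (Pearson's chi-squared with a Monte Carlo p-value based on 10^6 replicates) can be sketched as follows for a 2 × 2 table, using the transfusion counts reported above (18/29 CI vs 13/31 BI) purely as a worked example. This is not the study's code, and fewer replicates are used here for speed.

```python
import numpy as np

def chi2_stat(table):
    """Pearson's chi-squared statistic for a 2 x 2 table."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def monte_carlo_chi2_p(table, replicates=100_000, seed=1):
    """Simulated two-sided p-value with both margins held fixed (in the spirit of
    R's chisq.test with simulate.p.value = TRUE); illustrative, not the study code."""
    rng = np.random.default_rng(seed)
    (a, b), (c, d) = table
    r1, c1, n = a + b, a + c, a + b + c + d
    observed = chi2_stat(table)
    # Draw the top-left cell from a hypergeometric distribution, rebuild each table.
    sims = rng.hypergeometric(c1, n - c1, r1, size=replicates)
    stats_sim = np.array([chi2_stat([[x, r1 - x], [c1 - x, n - r1 - c1 + x]])
                          for x in sims])
    return (1 + np.sum(stats_sim >= observed)) / (replicates + 1)

# Worked example with the transfusion counts reported above
# (CI: 18 of 29 patients transfused; BI: 13 of 31):
#   monte_carlo_chi2_p([[18, 11], [13, 18]])   # ~0.1-0.2, cf. the reported p = .132
```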
None of the bleeding episodes was considered by the investigator to be ‘the result of inadequate therapeutic response in the face of proper dosing, necessitating a change in therapeutic regimen'. Patients receiving CI were given more PRBC transfusions than patients receiving BI. PRBC transfusions were required in 18/29 and 13/31 patients receiving CI and BI, respectively, with a mean (range) of 1.3 (1–5) units in patients receiving CI and a mean (range) of 0.9 (1–3) units in patients receiving BI. The total amount of haemoglobin in the cumulative drainage fluid during the first 24 h after surgery and until drain removal (if drainage continued) was comparable between patients who received CI and BI. In the stratum of patients who underwent unilateral knee replacement (CI, n = 24; BI, n = 24), the point estimate for the mean was 98.21 g/l (95% confidence interval for mean, 89.68–107.56 g/l) for CI and 97.63 g/l (86.07–110.74 g/l) for BI. The ratio of CI to BI was estimated to be 1.01. Clinically relevant postoperative haematomas were observed in two patients receiving CI and two receiving BI (three haematomas per group, for a total of six). Two patients (one CI, one BI) had undergone knee replacement and two patients (one CI, one BI) had undergone hip surgery. The total antihaemophilic factor (recombinant) dose (per kg body weight) administered in the combined transoperative and postoperative periods was similar in the CI and BI groups (Table 3). The global haemostatic efficacy of antihaemophilic factor (recombinant) administered by CI was assessed to be at least as good as that administered by BI. As shown in Table 5, scores of ‘excellent’ were evenly distributed among patients who received CI and patients who received BI. All patients receiving CI had a score of ‘excellent’ or ‘good’. GHEA score Abbreviation: GHEA, Global Hemostatic Efficacy Assessment. Values represent numbers of patients with reported GHEA scores. The GHEA score at postoperative day 8 was missing for six patients. Incremental recovery over time could be analysed only for the BI arm, as BIs in the CI arm were rare. Compared to the value at the loading dose on day 0, the median IR decreased by ~20% after the first week following surgery (day 7), with high variability across individual patients. During the second week, many patients were discharged from hospital and not enough samples were available for analysis. CL could be analysed for the CI arm, but not the BI arm because of insufficient data. The determination of CL used the observed FVIII level as the steady‐state level. This assumption was questionable for days 1 and 4 due to the additional postsurgical BIs and the reduction in infusion rate scheduled at day 3. For days 2, 3 and 5, an increase of ~20% in median CL was observed compared with that from the presurgical full PK analysis. Only at days 6 and 7 was the median CL below the initial value, but high variability in individual patient values was seen throughout the first week. Second‐week data were insufficient for analysis. Information on drainage fluid PRBC was not available for six patients (three CI, three BI) at 24 h after surgery, but the cumulative PRBC/blood volume in the drainage fluid (ie RBC, MCV and haematocrit) during the first 24 h following surgery was comparable between the CI and BI groups (Table 3; Table 4 [by type of surgery]), with a ratio of 0.92 (95% confidence interval for mean, 0.82–1.05; median, 3.4 × 1012 RBCs/l for CI and 3.5 × 1012 RBCs/l for BI). 
The one‐sided p‐value against the null hypothesis of ratio ≥200% was <.001, confirming the non‐inferiority of CI to BI with a 5% type I error (as the upper confidence limit did not exceed 200%). Key efficacy parameters in the per‐protocol analysis set a Abbreviations: BI, bolus infusion; CI, continuous infusion; PRBC, packed red blood cell; SD, standard deviation. Data shown are from patients in the per‐protocol analysis set, unless otherwise specified. The patient numbers shown for the different parameters vary because some patients did not have all data available. The one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was <.001. The one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was .041. Not statistically significant by a post hoc Pearson's chi‐squared test (p two‐sided = .612, using Monte Carlo simulation with 106 replicates). Not statistically significant by a post hoc Pearson's chi‐squared test (p two‐sided = .132, using Monte Carlo simulation with 106 replicates). Trans‐ and postoperatively combined. Cumulative PRBC volume in drainage fluid at 24 h (1012 RBCs/l) by type of surgery Abbreviations: NA, not applicable; SD, standard deviation. Total blood loss until drain removal, adjusted for expected blood loss, was slightly higher in the CI group than the BI group (Table 3). The mean (95% confidence interval) ratio of blood loss volume for CI versus BI was estimated to be 1.3 (0.8–2.1), and the one‐sided p‐value against the null hypothesis of ratio ≥200% was .041. Most bleeding episodes occurred in patients receiving BI; of four reported bleeding episodes (one episode per patient), three occurred in patients receiving BI and one in a patient receiving CI (Table 3). None of the bleeding episodes was considered by the investigator to be ‘the result of inadequate therapeutic response in the face of proper dosing, necessitating a change in therapeutic regimen'. Patients receiving CI were given more PRBC transfusions than patients receiving BI. PRBC transfusions were required in 18/29 and 13/31 patients receiving CI and BI, respectively, with a mean (range) of 1.3 (1–5) units in patients receiving CI and a mean (range) of 0.9 (1–3) units in patients receiving BI. The total amount of haemoglobin in the cumulative drainage fluid during the first 24 h after surgery and until drain removal (if drainage continued) was comparable between patients who received CI and BI. In the stratum of patients who underwent unilateral knee replacement (CI, n = 24; BI, n = 24), the point estimate for the mean was 98.21 g/l (95% confidence interval for mean, 89.68–107.56 g/l) for CI and 97.63 g/l (86.07–110.74 g/l) for BI. The ratio of CI to BI was estimated to be 1.01. Clinically relevant postoperative haematomas were observed in two patients receiving CI and two receiving BI (three haematomas per group, for a total of six). Two patients (one CI, one BI) had undergone knee replacement and two patients (one CI, one BI) had undergone hip surgery. The total antihaemophilic factor (recombinant) dose (per kg body weight) administered in the combined transoperative and postoperative periods was similar in the CI and BI groups (Table 3). The global haemostatic efficacy of antihaemophilic factor (recombinant) administered by CI was assessed to be at least as good as that administered by BI. As shown in Table 5, scores of ‘excellent’ were evenly distributed among patients who received CI and patients who received BI. 
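The GHEA ratings tabulated above follow the mapping given in the methods: three category scores (summing to a 0–9 total) are combined, with 'excellent' for 7–9 and no category <2, 'good' for 5–7 and no category <1, 'fair' for 2–4 and no category <1, 'none' for 0–1, and a total of 7 counted as 'excellent' only when every category is ≥2. A minimal sketch of that mapping; score combinations the text does not cover are flagged rather than guessed.

```python
def ghea_rating(scores):
    """Overall GHEA rating from the three category scores, following the mapping
    quoted in the methods; combinations the text does not cover are flagged."""
    total, lowest = sum(scores), min(scores)
    if 7 <= total <= 9 and lowest >= 2:
        return "excellent"
    if 5 <= total <= 7 and lowest >= 1:
        return "good"        # includes a total of 7 with any category below 2
    if 2 <= total <= 4 and lowest >= 1:
        return "fair"
    if total <= 1:
        return "none"
    return "not classifiable"

# e.g. ghea_rating((3, 3, 3)) -> 'excellent'; ghea_rating((3, 3, 1)) -> 'good'
```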
All patients receiving CI had a score of ‘excellent’ or ‘good’. GHEA score Abbreviation: GHEA, Global Hemostatic Efficacy Assessment. Values represent numbers of patients with reported GHEA scores. The GHEA score at postoperative day 8 was missing for six patients. Incremental recovery over time could be analysed only for the BI arm, as BIs in the CI arm were rare. Compared to the value at the loading dose on day 0, the median IR decreased by ~20% after the first week following surgery (day 7), with high variability across individual patients. During the second week, many patients were discharged from hospital and not enough samples were available for analysis. CL could be analysed for the CI arm, but not the BI arm because of insufficient data. The determination of CL used the observed FVIII level as the steady‐state level. This assumption was questionable for days 1 and 4 due to the additional postsurgical BIs and the reduction in infusion rate scheduled at day 3. For days 2, 3 and 5, an increase of ~20% in median CL was observed compared with that from the presurgical full PK analysis. Only at days 6 and 7 was the median CL below the initial value, but high variability in individual patient values was seen throughout the first week. Second‐week data were insufficient for analysis. Safety outcomes Adverse events observed are summarized in Table 6. In the safety population (N = 72), 230 AEs were reported in 51 patients (70.8%). A total of 14 treatment‐related AEs were reported in 10 patients: five patients treated by CI had eight AEs and five patients treated by BI had six AEs. Ten of the 14 treatment‐related AEs were classified as non‐serious (reported in six patients): anaemia (n = 5), headache (n = 2), allergic dermatitis (n = 1), pruritus (n = 1) and pyrexia (n = 1). Summary of AEs in the safety population a and by treatment arm Abbreviations: AE, adverse event; SAE, serious adverse event. The safety population included nine patients who were not randomized to receive continuous or bolus infusion; no AEs were recorded in these patients. All treatment‐related SAEs were development of FVIII inhibitors. One SAE unrelated to treatment occurred in a patient who died before randomization. Ten SAEs were reported in ten patients. Of these, four SAEs of FVIII inhibitor development (two patients in each group) were considered related to treatment; all four patients had severe haemophilia A, and none required treatment with a bypassing agent. The two patients receiving CI developed high‐titre inhibitors (up to 20.8 and 10.7 BU, respectively, on study days 63 and 57), which later decreased to the low‐titre range in both patients (1.0 and 1.7 BU, respectively). One patient was a 35‐year‐old male who had undergone hip surgery; he had received plasma‐derived FVIII products and had weakly positive Lupus anticoagulants of unknown clinical relevance. The other was a 30‐year‐old male who had undergone left knee replacement. Of the two patients receiving BI who developed FVIII inhibitors, one was a 35‐year‐old male who had undergone left primary total knee replacement and developed a low‐titre inhibitor (transient; maximum 0.89 BU that later decreased to 0.17 BU) on study day 30. The other was a 50‐year‐old male who developed a low‐titre inhibitor on study day 36 with a maximum titre of 3 BU that later decreased to 2.4 BU. The other six SAEs (febrile infection, joint swelling, haemarthrosis, pseudomembranous colitis, multiorgan failure and muscle haemorrhage) were considered to be unrelated to treatment. 
One patient died during the study. Death was due to multiorgan failure attributed to codeine toxicity. The patient had received antihaemophilic factor (recombinant) only for PK evaluation and was not randomized to treatment by CI or BI and therefore did not undergo surgery. Adverse events observed are summarized in Table 6. In the safety population (N = 72), 230 AEs were reported in 51 patients (70.8%). A total of 14 treatment‐related AEs were reported in 10 patients: five patients treated by CI had eight AEs and five patients treated by BI had six AEs. Ten of the 14 treatment‐related AEs were classified as non‐serious (reported in six patients): anaemia (n = 5), headache (n = 2), allergic dermatitis (n = 1), pruritus (n = 1) and pyrexia (n = 1). Summary of AEs in the safety population a and by treatment arm Abbreviations: AE, adverse event; SAE, serious adverse event. The safety population included nine patients who were not randomized to receive continuous or bolus infusion; no AEs were recorded in these patients. All treatment‐related SAEs were development of FVIII inhibitors. One SAE unrelated to treatment occurred in a patient who died before randomization. Ten SAEs were reported in ten patients. Of these, four SAEs of FVIII inhibitor development (two patients in each group) were considered related to treatment; all four patients had severe haemophilia A, and none required treatment with a bypassing agent. The two patients receiving CI developed high‐titre inhibitors (up to 20.8 and 10.7 BU, respectively, on study days 63 and 57), which later decreased to the low‐titre range in both patients (1.0 and 1.7 BU, respectively). One patient was a 35‐year‐old male who had undergone hip surgery; he had received plasma‐derived FVIII products and had weakly positive Lupus anticoagulants of unknown clinical relevance. The other was a 30‐year‐old male who had undergone left knee replacement. Of the two patients receiving BI who developed FVIII inhibitors, one was a 35‐year‐old male who had undergone left primary total knee replacement and developed a low‐titre inhibitor (transient; maximum 0.89 BU that later decreased to 0.17 BU) on study day 30. The other was a 50‐year‐old male who developed a low‐titre inhibitor on study day 36 with a maximum titre of 3 BU that later decreased to 2.4 BU. The other six SAEs (febrile infection, joint swelling, haemarthrosis, pseudomembranous colitis, multiorgan failure and muscle haemorrhage) were considered to be unrelated to treatment. One patient died during the study. Death was due to multiorgan failure attributed to codeine toxicity. The patient had received antihaemophilic factor (recombinant) only for PK evaluation and was not randomized to treatment by CI or BI and therefore did not undergo surgery. Patients: The study started on 29 May 2006 and was completed on 9 December 2015. Of 85 patients enrolled at 22 sites (in the United States, European Union, Norway and Russia), 72 received the infusion of antihaemophilic factor (recombinant) in the preoperative period for PK determination. Of these, 63 met the criteria for perioperative treatment and were randomized to receive CI (n = 32) or BI (n = 31). Of the patients who received CI, 23 had severe haemophilia A (baseline FVIII level <1 IU/dl) and six had moderately severe haemophilia A (baseline FVIII level 1 to <2 IU/dl). Of the patients who received BI, 26 had severe haemophilia A and five had moderately severe haemophilia A. Three patients randomized to CI did not undergo surgery and were not treated. 
Thus, 60 patients received treatment and comprised the per‐protocol analysis set (CI, n = 29; BI, n = 31). The safety analysis set included all 72 patients who received at least one dose of antihaemophilic factor (recombinant). Patient disposition is summarized in Figure 1. Each patient underwent one procedure: unilateral knee replacement surgery (n = 48; 24 CI, 24 BI), hip surgery (n = 4; two CI, two BI) or shoulder/elbow/ankle/knee surgery (n = 8; three CI, five BI). All patients were male. Demographic and clinical characteristics were similar between groups; the medians and ranges of age were nearly identical (Table 2). Four patients received enoxaparin or nadroparin as thrombosis prophylaxis. Disposition of patients. †Nine patients were not randomized for the following reasons: excluded because of pharmacokinetics (n = 4), screen failures (n = 2), physician decision (n = 1), sponsor decision (n = 1) and death (n = 1).‡Three patients randomized to receive continuous infusion did not undergo surgery and were therefore not treated. Thus, 60 patients received treatment and comprised the per‐protocol analysis set (continuous infusion, n = 29; bolus infusion, n = 31) Patient demographic and clinical characteristics in the per‐protocol analysis set Efficacy and exploratory outcomes: Information on drainage fluid PRBC was not available for six patients (three CI, three BI) at 24 h after surgery, but the cumulative PRBC/blood volume in the drainage fluid (ie RBC, MCV and haematocrit) during the first 24 h following surgery was comparable between the CI and BI groups (Table 3; Table 4 [by type of surgery]), with a ratio of 0.92 (95% confidence interval for mean, 0.82–1.05; median, 3.4 × 1012 RBCs/l for CI and 3.5 × 1012 RBCs/l for BI). The one‐sided p‐value against the null hypothesis of ratio ≥200% was <.001, confirming the non‐inferiority of CI to BI with a 5% type I error (as the upper confidence limit did not exceed 200%). Key efficacy parameters in the per‐protocol analysis set a Abbreviations: BI, bolus infusion; CI, continuous infusion; PRBC, packed red blood cell; SD, standard deviation. Data shown are from patients in the per‐protocol analysis set, unless otherwise specified. The patient numbers shown for the different parameters vary because some patients did not have all data available. The one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was <.001. The one‐sided p‐value (against the null hypothesis of the CI:BI ratio ≥200%) was .041. Not statistically significant by a post hoc Pearson's chi‐squared test (p two‐sided = .612, using Monte Carlo simulation with 106 replicates). Not statistically significant by a post hoc Pearson's chi‐squared test (p two‐sided = .132, using Monte Carlo simulation with 106 replicates). Trans‐ and postoperatively combined. Cumulative PRBC volume in drainage fluid at 24 h (1012 RBCs/l) by type of surgery Abbreviations: NA, not applicable; SD, standard deviation. Total blood loss until drain removal, adjusted for expected blood loss, was slightly higher in the CI group than the BI group (Table 3). The mean (95% confidence interval) ratio of blood loss volume for CI versus BI was estimated to be 1.3 (0.8–2.1), and the one‐sided p‐value against the null hypothesis of ratio ≥200% was .041. Most bleeding episodes occurred in patients receiving BI; of four reported bleeding episodes (one episode per patient), three occurred in patients receiving BI and one in a patient receiving CI (Table 3). 
None of the bleeding episodes was considered by the investigator to be ‘the result of inadequate therapeutic response in the face of proper dosing, necessitating a change in therapeutic regimen'. Patients receiving CI were given more PRBC transfusions than patients receiving BI. PRBC transfusions were required in 18/29 and 13/31 patients receiving CI and BI, respectively, with a mean (range) of 1.3 (1–5) units in patients receiving CI and a mean (range) of 0.9 (1–3) units in patients receiving BI. The total amount of haemoglobin in the cumulative drainage fluid during the first 24 h after surgery and until drain removal (if drainage continued) was comparable between patients who received CI and BI. In the stratum of patients who underwent unilateral knee replacement (CI, n = 24; BI, n = 24), the point estimate for the mean was 98.21 g/l (95% confidence interval for mean, 89.68–107.56 g/l) for CI and 97.63 g/l (86.07–110.74 g/l) for BI. The ratio of CI to BI was estimated to be 1.01. Clinically relevant postoperative haematomas were observed in two patients receiving CI and two receiving BI (three haematomas per group, for a total of six). Two patients (one CI, one BI) had undergone knee replacement and two patients (one CI, one BI) had undergone hip surgery. The total antihaemophilic factor (recombinant) dose (per kg body weight) administered in the combined transoperative and postoperative periods was similar in the CI and BI groups (Table 3). The global haemostatic efficacy of antihaemophilic factor (recombinant) administered by CI was assessed to be at least as good as that administered by BI. As shown in Table 5, scores of ‘excellent’ were evenly distributed among patients who received CI and patients who received BI. All patients receiving CI had a score of ‘excellent’ or ‘good’. GHEA score Abbreviation: GHEA, Global Hemostatic Efficacy Assessment. Values represent numbers of patients with reported GHEA scores. The GHEA score at postoperative day 8 was missing for six patients. Incremental recovery over time could be analysed only for the BI arm, as BIs in the CI arm were rare. Compared to the value at the loading dose on day 0, the median IR decreased by ~20% after the first week following surgery (day 7), with high variability across individual patients. During the second week, many patients were discharged from hospital and not enough samples were available for analysis. CL could be analysed for the CI arm, but not the BI arm because of insufficient data. The determination of CL used the observed FVIII level as the steady‐state level. This assumption was questionable for days 1 and 4 due to the additional postsurgical BIs and the reduction in infusion rate scheduled at day 3. For days 2, 3 and 5, an increase of ~20% in median CL was observed compared with that from the presurgical full PK analysis. Only at days 6 and 7 was the median CL below the initial value, but high variability in individual patient values was seen throughout the first week. Second‐week data were insufficient for analysis. Safety outcomes: Adverse events observed are summarized in Table 6. In the safety population (N = 72), 230 AEs were reported in 51 patients (70.8%). A total of 14 treatment‐related AEs were reported in 10 patients: five patients treated by CI had eight AEs and five patients treated by BI had six AEs. 
Ten of the 14 treatment‐related AEs were classified as non‐serious (reported in six patients): anaemia (n = 5), headache (n = 2), allergic dermatitis (n = 1), pruritus (n = 1) and pyrexia (n = 1). Summary of AEs in the safety population a and by treatment arm Abbreviations: AE, adverse event; SAE, serious adverse event. The safety population included nine patients who were not randomized to receive continuous or bolus infusion; no AEs were recorded in these patients. All treatment‐related SAEs were development of FVIII inhibitors. One SAE unrelated to treatment occurred in a patient who died before randomization. Ten SAEs were reported in ten patients. Of these, four SAEs of FVIII inhibitor development (two patients in each group) were considered related to treatment; all four patients had severe haemophilia A, and none required treatment with a bypassing agent. The two patients receiving CI developed high‐titre inhibitors (up to 20.8 and 10.7 BU, respectively, on study days 63 and 57), which later decreased to the low‐titre range in both patients (1.0 and 1.7 BU, respectively). One patient was a 35‐year‐old male who had undergone hip surgery; he had received plasma‐derived FVIII products and had weakly positive Lupus anticoagulants of unknown clinical relevance. The other was a 30‐year‐old male who had undergone left knee replacement. Of the two patients receiving BI who developed FVIII inhibitors, one was a 35‐year‐old male who had undergone left primary total knee replacement and developed a low‐titre inhibitor (transient; maximum 0.89 BU that later decreased to 0.17 BU) on study day 30. The other was a 50‐year‐old male who developed a low‐titre inhibitor on study day 36 with a maximum titre of 3 BU that later decreased to 2.4 BU. The other six SAEs (febrile infection, joint swelling, haemarthrosis, pseudomembranous colitis, multiorgan failure and muscle haemorrhage) were considered to be unrelated to treatment. One patient died during the study. Death was due to multiorgan failure attributed to codeine toxicity. The patient had received antihaemophilic factor (recombinant) only for PK evaluation and was not randomized to treatment by CI or BI and therefore did not undergo surgery. DISCUSSION: These results demonstrate that treatment by CI was non‐inferior to treatment by BI in terms of haemostatic efficacy and safety in patients undergoing elective unilateral major orthopaedic surgery that required drain placement. Although prior studies have evaluated CI of FVIII in patients with haemophilia A undergoing surgical procedures, 3 , 8 , 11 , 12 , 13 this was the first controlled trial to compare CI and BI of FVIII in the studied population. The same concentrate used in the present study was also used in surgical patients in the pivotal study reported by Négrier et al. 13 In that prospective, open‐label, uncontrolled clinical trial, the efficacy and safety of CI and BI of antihaemophilic factor (recombinant) was examined in 58 patients undergoing 65 surgical procedures, of which 22 were associated with major haemorrhagic risk. 13 CI (with or without supplemental BIs) was used in 18 procedures and BI alone was used in 47 procedures. Intraoperative haemostatic efficacy, as well as postoperative haemostatic efficacy rated at the time of discharge, was assessed as ‘excellent’ or ‘good’ for all procedures; treatments were well tolerated, and no development of FVIII inhibitors was reported. 
13 The median (range) total FVIII consumption during hospitalization for all major surgeries was 822 (401–2014) IU/kg per surgery with CI (including any supplemental BI) and 910 (228–1825) IU/kg per surgery with BI alone. Among those undergoing orthopaedic procedures, the median daily FVIII consumption during the first seven postoperative days was similar with CI (66.2 IU/kg/day) and BI (65.2 IU/kg/day). 13 Similarly to the current study, Négrier et al. concluded that antihaemophilic factor (recombinant) administered via CI or BI was effective and safe for perioperative haemostasis, although that study was not designed to compare CI with BI. Based on the findings of earlier studies, which reported good haemostatic efficacy and total FVIII doses 19%–36% lower with CI versus BI, 3 , 7 , 8 , 11 , 12 it was hypothesized that CI might be similarly or even more effective for preventing postoperative bleeding but with a reduced consumption of antihaemophilic factor (recombinant) compared with BI. However, in our study, the use of antihaemophilic factor (recombinant) was higher for CI versus BI on postoperative days 1–14 and higher for BI versus CI intraoperatively, on postoperative day 0, and from postoperative day 15 to study end. These findings may be due in part to the study protocol and design, which specified dosing, permitted patients to switch from CI to BI from day 8 onwards and focused on reducing variations in plasma factor levels rather than reducing the amount of product used. When transoperative and postoperative usage was combined, the total factor consumption was similar in the CI and BI groups, which mirrors the findings of Négrier et al. 13 This study provided the unique opportunity to compare the safety of CI versus BI. Although administration by CI provides the advantage of achieving more stable FVIII levels without the troughs that usually accompany BI and place the patient at risk of bleeding, the use of CI during and after surgery has raised concerns about an increased risk of inhibitor formation, particularly in patients with mild or moderate haemophilia. 16 , 17 , 18 However, no increase in inhibitor risk was seen in a large retrospective survey of 1079 procedures 19 and no inhibitors were detected in a prospective study of CI use in 46 previously treated patients with severe haemophilia A. 20 The development of inhibitors is a multifactorial process associated with a variety of genetic and environmental risk factors. 21 In the current study of patients with severe or moderately severe haemophilia A (baseline FVIII ≤2 IU/dl) and prior FVIII exposure of ≥150 days, 4 of 60 (6.7%) patients developed FVIII inhibitors, with no difference in frequency between the two groups (observed in two patients receiving CI and two receiving BI). Both patients with CI developed high‐titre inhibitors that evolved to low titre, whereas the two patients with inhibitors during BI had low‐titre inhibitors. Due to the limited number of affected patients, it is not possible to determine whether this difference was related to the study drug, method of administration, or potential confounding factors (eg the presence of high‐risk FVIII mutations and other genetic risk factors, which were not assessed in this study, or variability in tissue damage related to the surgical procedure). 
Although patients had been previously exposed to cryoprecipitates, fresh frozen plasma and/or plasma‐derived or recombinant FVIII concentrates, we cannot comment on how tolerant they were to FVIII due to a lack of an accurate record of prestudy FVIII usage. In the separate study of the present study drug (Négrier et al. mentioned above), surgical patients previously treated with antihaemophilic factor (recombinant) did not develop FVIII inhibitors. 13 In addition, the risk of inhibitors was not found to be elevated in a postmarketing surveillance study of patients previously treated with antihaemophilic factor (recombinant). 22 During the first week after surgery, a decrease in IR and an increase in CL were observed in the current study, although the limited data available prevent meaningful analysis and interpretation of these results, which should be considered in the context of varying results reported in the literature. Although Batorova and Martinowitz saw a significant decrease in CL during the 1–2 weeks following surgery, 11 others have described variable changes in CL following major surgical procedures, 23 and recent reports indicate substantial intraindividual variation in IR and poor reproducibility of CL, with numerous factors affecting IR and CL. 23 , 24 Limitations of this study include the necessity to enrol patients undergoing orthopaedic surgeries other than unilateral knee replacement, difficulty in estimating PRBC volumes in drainage fluid, lack of information on the drainage fluid PRBC for six patients, and variability in surgical techniques and practices at the participating sites, which could only be partially addressed per study protocol. Another limitation inherent to the study design is the use of PK assessment before surgery and central dosing recommendations, which differ from conditions in real‐world clinical practice. On the other hand, this study has inherent strengths as a multicentre randomized study with a large number of patients with balanced surgical procedures in the two arms. CONCLUSION: The administration of antihaemophilic factor (recombinant) by CI resulted in comparable efficacy and safety outcomes and is a viable alternative to intermittent BI in the perioperative haemostatic management of patients with haemophilia A undergoing major orthopaedic surgery. Taking into account the complexity of CI versus BI, it is useful to know that these types of FVIII administration showed non‐inferiority, such that treatment may be optimized for individual patients. These findings may help inform perioperative haemostatic management of these patients, with the goal of maintaining stable FVIII levels during and after surgery, whether by the use of CI or BI regimens. DISCLOSURES: IP has received occasional honoraria for lectures and advisory board meetings from Bayer, CSL Behring, Novo Nordisk, Pfizer, Sobi and Shire*; unrestricted research grants from CSL Behring, Novo Nordisk and Shire*. VM has no conflict of interest to declare. JW has received research grants/honoraria for lectures from Baxalta, Bayer, CSL Behring, Ferring Pharmaceuticals, Novo Nordisk, Octapharma, Pfizer, Roche, Siemens, Shire*, Sobi and Werfen. WE is an employee of Baxalta Innovations GmbH, a Takeda company, and a Takeda stock owner. JD is an employee of Baxalta Innovations GmbH, a Takeda company. ST and BE are employees of Baxalta US Inc., a Takeda company, and Takeda stock owners. GS was an employee of Baxalta US Inc., a Takeda company, at the time of the current study. 
*A member of the Takeda group of companies. AUTHOR CONTRIBUTIONS: All authors reviewed/revised the manuscript critically for intellectual content and gave their final approval for it to be published. Ingrid Pabinger was a study investigator and contributed to the design of the study and interpretation of the data and study content. Vasily Mamonov and Jerzy Windyga were study investigators and contributed to the interpretation of the data and study content. None of the authors received honoraria for the writing of this manuscript. Werner Engl and Bruce Ewenstein contributed to the conception, design, analysis and interpretation of the clinical trial. Jennifer Doralt, Srilatha Tangada and Gerald Spotts contributed to the analysis and interpretation of the data, and study conduct. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Background: In patients with haemophilia A undergoing surgery, factor VIII (FVIII) replacement therapy by continuous infusion (CI) may offer an alternative to bolus infusion (BI). Methods: In this multicentre, phase III/IV, controlled study (NCT00357656), 60 previously treated adult patients with severe or moderately severe disease undergoing elective unilateral major orthopaedic surgery (knee replacement, n = 48; hip surgery, n = 4; other, n = 8) requiring drain placement were randomized to receive antihaemophilic factor (recombinant) CI (n = 29) or BI (n = 31) through postoperative day 7. Primary outcome measure was cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid within 24 h after surgery, used to establish non-inferiority of CI to BI. Results: CI:BI ratio of cumulative PRBC volume in the 24-h drainage fluid was 0.92 (p-value <.001 for non-inferiority; 95% confidence interval, 0.82-1.05). Total antihaemophilic factor (recombinant) dose per kg body weight received in the combined trans- and postoperative periods was similar with CI and BI to maintain targeted FVIII levels during/after surgery. Treatment-related adverse events (AEs) were reported in five patients treated by CI (eight events) and five treated by BI (six events), including two serious AEs in each arm. Conclusions: CI administration of antihaemophilic factor (recombinant) is a viable alternative to BI in patients with haemophilia A undergoing major orthopaedic surgery, providing comparable efficacy and safety.
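The non-inferiority conclusion in the abstract above hinges on the upper limit of the 95% confidence interval for the CI:BI ratio (1.05) staying below the prespecified 200% margin. As an illustrative check only — the trial's actual statistical model is not reproduced here — the one-sided p-value can be approximated on the log scale from the reported point estimate and confidence interval, back-calculating the standard error from the interval width:

```python
# Approximate one-sided non-inferiority check for the CI:BI ratio of cumulative
# 24-h drainage PRBC volume, using only the reported summary statistics
# (point estimate 0.92, 95% CI 0.82-1.05, margin 200%). Illustrative only.
import math
from statistics import NormalDist

ratio, lo, hi = 0.92, 0.82, 1.05   # reported CI:BI ratio and its 95% CI
margin = 2.0                       # non-inferiority margin (200%)

# On the log scale the ratio is approximately normal; recover its SE from the CI.
se_log = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# H0: ratio >= margin  vs  H1: ratio < margin (one-sided, alpha = 0.05)
z = (math.log(ratio) - math.log(margin)) / se_log
p_one_sided = NormalDist().cdf(z)

print(f"upper 95% limit {hi} < margin {margin}: {hi < margin}")
print(f"approximate one-sided p-value: {p_one_sided:.1e}")
```

With these numbers the approximate p-value is far below .05, consistent with the reported p < .001 for non-inferiority.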
INTRODUCTION: Antihaemophilic factor (recombinant), plasma/albumin‐free method (ADVATE®; Baxalta US Inc., a Takeda company, Lexington, MA, USA), is a recombinant human coagulation factor VIII (FVIII) indicated for the treatment and prophylaxis of bleeding, including perioperative management, in patients of all ages with haemophilia A. 1 , 2 When used perioperatively, antihaemophilic factor (recombinant) is typically administered by bolus infusion (BI) at time points dictated by its pharmacokinetic (PK) profile. Continuous infusion (CI) was developed to reduce the wide variations in plasma FVIII levels that usually accompany BI and decrease the quantity of infused FVIII concentrate. 3 , 4 , 5 , 6 , 7 , 8 , 9 CI during and/or after surgery 1 , 10 may stabilize FVIII levels, eliminating the deep troughs characteristic of BI that may increase bleeding risk. Several cohort and non‐controlled studies have indicated that FVIII CI is well tolerated and efficacious for providing perioperative haemostasis for patients with haemophilia A; some studies have suggested that CI may also reduce FVIII consumption compared with BI. 3 , 7 , 8 , 11 , 12 , 13 Continuous infusion and BI in the same type of intervention have not been compared in a prospective, controlled setting. The objective of this prospective, randomized phase III/IV study in patients with severe or moderately severe haemophilia A was to assess the perioperative haemostatic efficacy and safety of antihaemophilic factor (recombinant) administered via CI and intermittent BI. CONCLUSION: The administration of antihaemophilic factor (recombinant) by CI resulted in comparable efficacy and safety outcomes and is a viable alternative to intermittent BI in the perioperative haemostatic management of patients with haemophilia A undergoing major orthopaedic surgery. Taking into account the complexity of CI versus BI, it is useful to know that these types of FVIII administration showed non‐inferiority, such that treatment may be optimized for individual patients. These findings may help inform perioperative haemostatic management of these patients, with the goal of maintaining stable FVIII levels during and after surgery, whether by the use of CI or BI regimens.
Background: In patients with haemophilia A undergoing surgery, factor VIII (FVIII) replacement therapy by continuous infusion (CI) may offer an alternative to bolus infusion (BI). Methods: In this multicentre, phase III/IV, controlled study (NCT00357656), 60 previously treated adult patients with severe or moderately severe disease undergoing elective unilateral major orthopaedic surgery (knee replacement, n = 48; hip surgery, n = 4; other, n = 8) requiring drain placement were randomized to receive antihaemophilic factor (recombinant) CI (n = 29) or BI (n = 31) through postoperative day 7. Primary outcome measure was cumulative packed red blood cell (PRBC)/blood volume in the drainage fluid within 24 h after surgery, used to establish non-inferiority of CI to BI. Results: CI:BI ratio of cumulative PRBC volume in the 24-h drainage fluid was 0.92 (p-value <.001 for non-inferiority; 95% confidence interval, 0.82-1.05). Total antihaemophilic factor (recombinant) dose per kg body weight received in the combined trans- and postoperative periods was similar with CI and BI to maintain targeted FVIII levels during/after surgery. Treatment-related adverse events (AEs) were reported in five patients treated by CI (eight events) and five treated by BI (six events), including two serious AEs in each arm. Conclusions: CI administration of antihaemophilic factor (recombinant) is a viable alternative to BI in patients with haemophilia A undergoing major orthopaedic surgery, providing comparable efficacy and safety.
14,209
314
[ 311, 208, 1107, 118, 117, 276, 164, 447, 1106, 496, 155 ]
16
[ "patients", "ci", "bi", "fviii", "surgery", "iu", "study", "postoperative", "treatment", "day" ]
[ "perioperative haemostatic efficacy", "haemophilia perioperatively antihaemophilic", "fviii level infusion", "coagulation factor viii", "administered bolus infusion" ]
null
[CONTENT] clinical trial | haemophilia A | intravenous infusion | orthopaedic surgery | recombinant factor VIII [SUMMARY]
null
[CONTENT] clinical trial | haemophilia A | intravenous infusion | orthopaedic surgery | recombinant factor VIII [SUMMARY]
[CONTENT] clinical trial | haemophilia A | intravenous infusion | orthopaedic surgery | recombinant factor VIII [SUMMARY]
[CONTENT] clinical trial | haemophilia A | intravenous infusion | orthopaedic surgery | recombinant factor VIII [SUMMARY]
[CONTENT] clinical trial | haemophilia A | intravenous infusion | orthopaedic surgery | recombinant factor VIII [SUMMARY]
[CONTENT] Adult | Blood Coagulation Tests | Factor VIII | Hemophilia A | Hemostasis | Humans | Orthopedic Procedures | Recombinant Proteins [SUMMARY]
null
[CONTENT] Adult | Blood Coagulation Tests | Factor VIII | Hemophilia A | Hemostasis | Humans | Orthopedic Procedures | Recombinant Proteins [SUMMARY]
[CONTENT] Adult | Blood Coagulation Tests | Factor VIII | Hemophilia A | Hemostasis | Humans | Orthopedic Procedures | Recombinant Proteins [SUMMARY]
[CONTENT] Adult | Blood Coagulation Tests | Factor VIII | Hemophilia A | Hemostasis | Humans | Orthopedic Procedures | Recombinant Proteins [SUMMARY]
[CONTENT] Adult | Blood Coagulation Tests | Factor VIII | Hemophilia A | Hemostasis | Humans | Orthopedic Procedures | Recombinant Proteins [SUMMARY]
[CONTENT] perioperative haemostatic efficacy | haemophilia perioperatively antihaemophilic | fviii level infusion | coagulation factor viii | administered bolus infusion [SUMMARY]
null
[CONTENT] perioperative haemostatic efficacy | haemophilia perioperatively antihaemophilic | fviii level infusion | coagulation factor viii | administered bolus infusion [SUMMARY]
[CONTENT] perioperative haemostatic efficacy | haemophilia perioperatively antihaemophilic | fviii level infusion | coagulation factor viii | administered bolus infusion [SUMMARY]
[CONTENT] perioperative haemostatic efficacy | haemophilia perioperatively antihaemophilic | fviii level infusion | coagulation factor viii | administered bolus infusion [SUMMARY]
[CONTENT] perioperative haemostatic efficacy | haemophilia perioperatively antihaemophilic | fviii level infusion | coagulation factor viii | administered bolus infusion [SUMMARY]
[CONTENT] patients | ci | bi | fviii | surgery | iu | study | postoperative | treatment | day [SUMMARY]
null
[CONTENT] patients | ci | bi | fviii | surgery | iu | study | postoperative | treatment | day [SUMMARY]
[CONTENT] patients | ci | bi | fviii | surgery | iu | study | postoperative | treatment | day [SUMMARY]
[CONTENT] patients | ci | bi | fviii | surgery | iu | study | postoperative | treatment | day [SUMMARY]
[CONTENT] patients | ci | bi | fviii | surgery | iu | study | postoperative | treatment | day [SUMMARY]
[CONTENT] bi | fviii | ci | infusion bi | reduce | indicated | studies | recombinant | factor | infusion [SUMMARY]
null
[CONTENT] patients | ci | bi | receiving | patients receiving | ci bi | received | patients received | patient | surgery [SUMMARY]
[CONTENT] haemostatic management patients | perioperative haemostatic management | haemostatic management | perioperative haemostatic management patients | administration | management patients | perioperative haemostatic | management | bi | ci [SUMMARY]
[CONTENT] patients | ci | bi | fviii | iu | surgery | study | postoperative | treatment | blood [SUMMARY]
[CONTENT] patients | ci | bi | fviii | iu | surgery | study | postoperative | treatment | blood [SUMMARY]
[CONTENT] CI | BI [SUMMARY]
null
[CONTENT] CI | BI | PRBC | 24 | 0.92 | 95% | 0.82-1.05 ||| CI | BI ||| five | CI | eight | five | BI | six | two [SUMMARY]
[CONTENT] CI | BI [SUMMARY]
[CONTENT] CI | BI ||| NCT00357656 | 60 | 48 | 4 | 8) | CI | 29 | BI | postoperative day 7 ||| 24 | CI | BI ||| CI ||| BI | PRBC | 24 | 0.92 | 95% | 0.82-1.05 ||| CI | BI ||| five | CI | eight | five | BI | six | two ||| CI | BI [SUMMARY]
[CONTENT] CI | BI ||| NCT00357656 | 60 | 48 | 4 | 8) | CI | 29 | BI | postoperative day 7 ||| 24 | CI | BI ||| CI ||| BI | PRBC | 24 | 0.92 | 95% | 0.82-1.05 ||| CI | BI ||| five | CI | eight | five | BI | six | two ||| CI | BI [SUMMARY]
Blood pressure and fasting lipid changes after 24 weeks' treatment with vildagliptin: a pooled analysis in >2,000 previously drug-naïve patients with type 2 diabetes mellitus.
27574437
We have previously shown modest weight loss with vildagliptin treatment. Since changes in body weight are associated with changes in blood pressure (BP) and fasting lipids, we assessed these parameters following vildagliptin treatment.
INTRODUCTION
Data were pooled from all double-blind, randomized, controlled, vildagliptin mono-therapy trials on previously drug-naïve patients with type 2 diabetes mellitus who received vildagliptin 50 mg once daily (qd) or twice daily (bid; n=2,108) and wherein BP and fasting lipid data were obtained.
METHODS
Data from patients receiving vildagliptin 50 mg qd or bid showed reductions from baseline to week 24 in systolic BP (from 132.5±0.32 to 129.8±0.34 mmHg; P<0.0001), diastolic BP (from 81.2±0.18 to 79.6±0.19 mmHg; P<0.0001), fasting triglycerides (from 2.00±0.02 to 1.80±0.02 mmol/L; P<0.0001), very low density lipoprotein cholesterol (from 0.90±0.01 to 0.83±0.01 mmol/L; P<0.0001), and low density lipoprotein cholesterol (from 3.17±0.02 to 3.04±0.02 mmol/L; P<0.0001), whereas high density lipoprotein cholesterol increased (from 1.19±0.01 to 1.22±0.01 mmol/L; P<0.001). Weight decreased by 0.48±0.08 kg (P<0.001).
RESULTS
This large pooled analysis demonstrated that vildagliptin treatment produces a significant reduction in BP and a favorable fasting lipid profile, both associated with modest weight loss.
CONCLUSION
[ "Adamantane", "Biomarkers", "Blood Pressure", "Diabetes Mellitus, Type 2", "Dipeptidyl Peptidase 4", "Dipeptidyl-Peptidase IV Inhibitors", "Fasting", "Female", "Humans", "Linear Models", "Lipids", "Male", "Middle Aged", "Nitriles", "Pyrrolidines", "Randomized Controlled Trials as Topic", "Treatment Outcome", "Vildagliptin", "Weight Loss" ]
4994796
Introduction
We have previously pooled data on vildagliptin monotherapy to compare body weight changes as a function of glycemic control at baseline and showed that, on an average, weight loss is observed with vildagliptin at the glycemic levels at which treatment is often initiated.1 Weight loss is associated with changes in blood pressure (BP) and fasting lipids. Data regarding BP and fasting lipids from the studies on dipeptidyl peptidase-4 (DPP-4) inhibitors are limited and may be confounded by variations among these studies in terms of the variable effects of other oral antidiabetic drugs, baseline adiposity, and insulin resistance that are also associated with changes in BP and fasting lipids.2–6 Here, we aimed to assess the impact of vildagliptin treatment on BP and fasting lipids and correlated these changes with changes in weight, body mass index (BMI), and homeostatic model assessment insulin resistance (HOMA-IR). To address these questions, we employed a large (>2,000) vildagliptin pooled monotherapy database.
Data analysis
A linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR.
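The sentence above (the methods_text field) specifies a linear regression of each change-from-baseline parameter on the change in body weight, with and without adjustment for BMI and HOMA-IR. A minimal sketch of such a model, with hypothetical column names rather than the study's actual analysis code, is:

```python
# Sketch of the described analysis: regress the 24-week change in an outcome
# (e.g. systolic BP) on the change in body weight, optionally adjusting for
# baseline BMI and HOMA-IR. The data frame and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_change_model(df: pd.DataFrame, outcome: str, adjust: bool = False):
    """df holds one row per patient: changes from baseline plus covariates."""
    formula = f"{outcome} ~ weight_change"
    if adjust:
        formula += " + baseline_bmi + homa_ir"
    fit = smf.ols(formula, data=df).fit()
    # The paper reports the weight-change slope, r-squared and p-value.
    return fit.params["weight_change"], fit.rsquared, fit.pvalues["weight_change"]

# Usage (hypothetical data frame `changes`):
# slope, r2, p = fit_change_model(changes, "sbp_change", adjust=True)
```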
Results
This pooled analysis included 2,108 patients (54.6% male) with mean (± standard error) age 54.8±0.3 years, BMI 30.9±0.1 kg/m2, type 2 diabetes mellitus duration 2.1±0.1 years, glycated hemoglobin (HbA1c) 8.4%±0.0%, and fasting plasma glucose 9.9±0.1 mmol/L. Vildagliptin treatment (50 mg qd or bid) for 24 weeks resulted in modest but highly significant reductions in both systolic (2.70 mmHg) and diastolic (1.64 mmHg) BP. Fasting triglycerides (TG), very low density lipoprotein (VLDL) cholesterol, and low density lipoprotein (LDL) cholesterol decreased by 0.2, 0.07, and 0.13 mmol/L, respectively. Similarly, total cholesterol and non-high density lipoprotein (non-HDL) cholesterol decreased by 0.18 and 0.21 mmol/L from baseline, respectively. HDL cholesterol increased by 0.03 mmol/L and weight decreased by 0.48 kg (Table 1). A linear regression model was used to analyze the relationship between the aforementioned BP and fasting lipid levels with weight changes, degree of adiposity as assessed by baseline BMI, and degree of insulin resistance as assessed by HOMA-IR. The changes in systolic (Figure 1A) and diastolic (Figure 1B) BP from baseline to 24 weeks relative to changes in weight showed positive slopes of 0.71 (r2=0.04; P<0.001) and 0.40 (r2=0.03; P<0.001), respectively. These values did not improve by adjusting for BMI and HOMA-IR. Positive slopes of 0.023 (r2=0.01; P<0.0001) and 0.010 (r2=0.01; P<0.0001) were observed for changes in TG (Figure 1C) and VLDL cholesterol (Figure 1D) levels from baseline to 24 weeks relative to changes in weight. These values improved to 0.028 (r2=0.03; P<0.0001) and 0.013 (r2=0.04; P<0.0001), respectively, by adjusting for BMI and HOMA-IR. Total cholesterol, LDL, and non-HDL cholesterol did not correlate with weight change, with or without adjusting for BMI or HOMA-IR. Changes in HDL cholesterol (Figure 1E) at 24 weeks relative to changes in weight showed a negative slope of 0.005 (r2=0.009; P<0.0001), and this value did not improve by adjusting for BMI and HOMA-IR.
null
null
[ "Methods", "Assessments", "Ethics and good clinical practice" ]
[ " Patients and study design Data were pooled from eight, double-blind, randomized, controlled, vildagliptin monotherapy trials, including 2,108 previously drug-naïve patients with type 2 diabetes mellitus, who received vildagliptin 50 mg once daily (qd) (n=329) or twice daily (bid) (n=1,779) as a monotherapy and underwent an actual as well as a prespecified study visit, wherein weight, BP, and fasting lipids were assessed at week 24 (studies 1–3, 5, 9, and 12–14 enumerated in Table 1 from a review by Dejager et al7). The data resides in the Novartis vildagliptin database and was extracted and analyzed by Novartis database associates, as directed by the authors.\nData were pooled from eight, double-blind, randomized, controlled, vildagliptin monotherapy trials, including 2,108 previously drug-naïve patients with type 2 diabetes mellitus, who received vildagliptin 50 mg once daily (qd) (n=329) or twice daily (bid) (n=1,779) as a monotherapy and underwent an actual as well as a prespecified study visit, wherein weight, BP, and fasting lipids were assessed at week 24 (studies 1–3, 5, 9, and 12–14 enumerated in Table 1 from a review by Dejager et al7). The data resides in the Novartis vildagliptin database and was extracted and analyzed by Novartis database associates, as directed by the authors.", "Laboratory parameters were assessed by central laboratories: Bioanalytical Research Corporation-EU (Ghent, Belgium), Diabetes Diagnostics Laboratory (Columbia, MO, USA), and Covance (Geneva, Switzerland; Singapore; or Indianapolis, IN, USA). HOMA-IR was calculated based on the following formula: HOMA-IR = (fasting insulin [uU/mL]) × (fasting glucose [mmol/L])/22.5.\n Data analysis A linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR.\nA linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR.\n Ethics and good clinical practice All study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki.\nAll study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki.", "All study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki." ]
[ "methods", null, null ]
[ "Introduction", "Methods", "Patients and study design", "Assessments", "Data analysis", "Ethics and good clinical practice", "Results", "Discussion" ]
[ "We have previously pooled data on vildagliptin monotherapy to compare body weight changes as a function of glycemic control at baseline and showed that, on an average, weight loss is observed with vildagliptin at the glycemic levels at which treatment is often initiated.1 Weight loss is associated with changes in blood pressure (BP) and fasting lipids. Data regarding BP and fasting lipids from the studies on dipeptidyl peptidase-4 (DPP-4) inhibitors are limited and may be confounded by variations among these studies in terms of the variable effects of other oral antidiabetic drugs, baseline adiposity, and insulin resistance that are also associated with changes in BP and fasting lipids.2–6\nHere, we aimed to assess the impact of vildagliptin treatment on BP and fasting lipids and correlated these changes with changes in weight, body mass index (BMI), and homeostatic model assessment insulin resistance (HOMA-IR). To address these questions, we employed a large (>2,000) vildagliptin pooled monotherapy database.", " Patients and study design Data were pooled from eight, double-blind, randomized, controlled, vildagliptin monotherapy trials, including 2,108 previously drug-naïve patients with type 2 diabetes mellitus, who received vildagliptin 50 mg once daily (qd) (n=329) or twice daily (bid) (n=1,779) as a monotherapy and underwent an actual as well as a prespecified study visit, wherein weight, BP, and fasting lipids were assessed at week 24 (studies 1–3, 5, 9, and 12–14 enumerated in Table 1 from a review by Dejager et al7). The data resides in the Novartis vildagliptin database and was extracted and analyzed by Novartis database associates, as directed by the authors.\nData were pooled from eight, double-blind, randomized, controlled, vildagliptin monotherapy trials, including 2,108 previously drug-naïve patients with type 2 diabetes mellitus, who received vildagliptin 50 mg once daily (qd) (n=329) or twice daily (bid) (n=1,779) as a monotherapy and underwent an actual as well as a prespecified study visit, wherein weight, BP, and fasting lipids were assessed at week 24 (studies 1–3, 5, 9, and 12–14 enumerated in Table 1 from a review by Dejager et al7). The data resides in the Novartis vildagliptin database and was extracted and analyzed by Novartis database associates, as directed by the authors.", "Data were pooled from eight, double-blind, randomized, controlled, vildagliptin monotherapy trials, including 2,108 previously drug-naïve patients with type 2 diabetes mellitus, who received vildagliptin 50 mg once daily (qd) (n=329) or twice daily (bid) (n=1,779) as a monotherapy and underwent an actual as well as a prespecified study visit, wherein weight, BP, and fasting lipids were assessed at week 24 (studies 1–3, 5, 9, and 12–14 enumerated in Table 1 from a review by Dejager et al7). The data resides in the Novartis vildagliptin database and was extracted and analyzed by Novartis database associates, as directed by the authors.", "Laboratory parameters were assessed by central laboratories: Bioanalytical Research Corporation-EU (Ghent, Belgium), Diabetes Diagnostics Laboratory (Columbia, MO, USA), and Covance (Geneva, Switzerland; Singapore; or Indianapolis, IN, USA). 
HOMA-IR was calculated based on the following formula: HOMA-IR = (fasting insulin [uU/mL]) × (fasting glucose [mmol/L])/22.5.\n Data analysis A linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR.\nA linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR.\n Ethics and good clinical practice All study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki.\nAll study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki.", "A linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR.", "All study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki.", "This pooled analysis included 2,108 patients (54.6% male) with mean (± standard error) age 54.8±0.3 years, BMI 30.9±0.1 kg/m2, type 2 diabetes mellitus duration 2.1±0.1 years, glycated hemoglobin (HbA1c) 8.4%±0.0%, and fasting plasma glucose 9.9±0.1 mmol/L. Vildagliptin treatment (50 mg qd or bid) for 24 weeks resulted in modest but highly significant reductions in both systolic (2.70 mmHg) and diastolic (1.64 mmHg) BP. Fasting triglycerides (TG), very low density lipoprotein (VLDL) cholesterol, and low density lipoprotein (LDL) cholesterol decreased by 0.2, 0.07, and 0.13 mmol/L, respectively. Similarly, total cholesterol and non-high density lipoprotein (non-HDL) cholesterol decreased by 0.18 and 0.21 mmol/L from baseline, respectively. HDL cholesterol increased by 0.03 mmol/L and weight decreased by 0.48 kg (Table 1).\nA linear regression model was used to analyze the relationship between the aforementioned BP and fasting lipid levels with weight changes, degree of adiposity as assessed by baseline BMI, and degree of insulin resistance as assessed by HOMA-IR. The changes in systolic (Figure 1A) and diastolic (Figure 1B) BP from baseline to 24 weeks relative to changes in weight showed positive slopes of 0.71 (r2=0.04; P<0.001) and 0.40 (r2=0.03; P<0.001), respectively. These values did not improve by adjusting for BMI and HOMA-IR. Positive slopes of 0.023 (r2=0.01; P<0.0001) and 0.010 (r2=0.01; P<0.0001) were observed for changes in TG (Figure 1C) and VLDL cholesterol (Figure 1D) levels from baseline to 24 weeks relative to changes in weight. These values improved to 0.028 (r2=0.03; P<0.0001) and 0.013 (r2=0.04; P<0.0001), respectively, by adjusting for BMI and HOMA-IR. Total cholesterol, LDL, and non-HDL cholesterol did not correlate with weight change, with or without adjusting for BMI or HOMA-IR. 
Changes in HDL cholesterol (Figure 1E) at 24 weeks relative to changes in weight showed a negative slope of 0.005 (r2=0.009; P<0.0001), and this value did not improve by adjusting for BMI and HOMA-IR.", "We have previously shown that, on an average, weight neutrality is essentially observed at higher baselines with vildagliptin, while a weight loss of ~1 kg was observed in patients with baseline HbA1c <8% (64 mmol/mol).1 The current analysis also demonstrated significant reductions in BP with a favorable fasting lipid profile. The slopes of these parameters versus weight change suggest that these changes are at least partially explained by weight reduction. Because weight reduction is greatest at glycemic levels where patients are most often treated,1 BP and fasting lipid benefits may actually be therapeutically greater than those indicated by the averages in this pooled analysis. There is no mechanistic basis for predicting that similar results would not be seen with other DPP-4 inhibitors.\nBoth vildagliptin 50 mg qd or bid are indicated as mono-therapy in many countries and clinical trials were carried out with both dosing frequencies. We have previously shown that when baseline HbA1c levels were below 8% there was no additional HbA1c reduction benefit to the twice daily dosing frequency with monotherapy.7 Sixteen percent of the patients in the current analysis were on the 50 mg qd dose; notably the 50 mg qd group was 4 years older and had baseline HbA1c levels 1.2% lower than the 50 mg bid group. Weight loss was greater in the 50 mg qd group (−1.2 kg in the 50 mg qd group versus −0.4 kg in the 50 mg bid group) as predicted previously by the lower baseline HbA1c and presumably lower degree of glycosuria1 in the 50 mg qd group; the reduction in diastolic BP was ~50% greater and the TG reduction was ~50% lower (from a lower TG baseline of 1.8 versus 2.0 mmol/L) in the 50 mg qd group versus the 50 mg bid group; the correlations were not driven by inclusion of the 50 mg qd dose (data not shown). None of the differences were considered notable enough to justify reporting the data in independent pools and thus we have chosen to report the BP and fasting lipids results in a single pool as we did previously with the weight.1\nThe BP and fasting lipids results are associated with some caveats. The average weight change is small (approximately −0.5 kg), and the measurements of BP and weight in the large clinical trials utilized for the current pool are expected to be associated with a high degree of variability, which presumably explains the reason for low correlation coefficient (r2). In contrast, the n value is very high, hence yielding significant slopes. Partial correlation analysis suggests that the changes in TG and VLDL cholesterol are also partially explained by the degree of adiposity and insulin resistance. These effects of adiposity and insulin resistance or the lack thereof must be interpreted with some caution due to the limitations of the BMI and HOMA-IR surrogates of adiposity and insulin resistance measured in these clinical trials. Furthermore, the studies pooled for this analysis were designed to assess BP and fasting lipids as secondary endpoints, and no data on confounding BP and lipid medications were collected.\nThus, therapeutically, DPP-4 inhibitors yield modest benefits of unknown clinical importance on BP and fasting lipids. 
Although these BP and fasting lipid benefits of DPP-4 inhibitors have not translated into cardiovascular risk reduction over 3 years,8 it is possible that a primary prevention trial of longer duration might contribute to a cardiovascular benefit after a longer follow-up. Notably, hypotension has not been identified as a side effect with vildagliptin.9" ]
[ "intro", "methods", "methods|subjects", null, "methods", null, "results", "discussion" ]
[ "TG", "HDL", "LDL", "body weight DPP-4 inhibitor", "GLP-1" ]
Introduction: We have previously pooled data on vildagliptin monotherapy to compare body weight changes as a function of glycemic control at baseline and showed that, on an average, weight loss is observed with vildagliptin at the glycemic levels at which treatment is often initiated.1 Weight loss is associated with changes in blood pressure (BP) and fasting lipids. Data regarding BP and fasting lipids from the studies on dipeptidyl peptidase-4 (DPP-4) inhibitors are limited and may be confounded by variations among these studies in terms of the variable effects of other oral antidiabetic drugs, baseline adiposity, and insulin resistance that are also associated with changes in BP and fasting lipids.2–6 Here, we aimed to assess the impact of vildagliptin treatment on BP and fasting lipids and correlated these changes with changes in weight, body mass index (BMI), and homeostatic model assessment insulin resistance (HOMA-IR). To address these questions, we employed a large (>2,000) vildagliptin pooled monotherapy database. Methods: Patients and study design Data were pooled from eight, double-blind, randomized, controlled, vildagliptin monotherapy trials, including 2,108 previously drug-naïve patients with type 2 diabetes mellitus, who received vildagliptin 50 mg once daily (qd) (n=329) or twice daily (bid) (n=1,779) as a monotherapy and underwent an actual as well as a prespecified study visit, wherein weight, BP, and fasting lipids were assessed at week 24 (studies 1–3, 5, 9, and 12–14 enumerated in Table 1 from a review by Dejager et al7). The data resides in the Novartis vildagliptin database and was extracted and analyzed by Novartis database associates, as directed by the authors. Data were pooled from eight, double-blind, randomized, controlled, vildagliptin monotherapy trials, including 2,108 previously drug-naïve patients with type 2 diabetes mellitus, who received vildagliptin 50 mg once daily (qd) (n=329) or twice daily (bid) (n=1,779) as a monotherapy and underwent an actual as well as a prespecified study visit, wherein weight, BP, and fasting lipids were assessed at week 24 (studies 1–3, 5, 9, and 12–14 enumerated in Table 1 from a review by Dejager et al7). The data resides in the Novartis vildagliptin database and was extracted and analyzed by Novartis database associates, as directed by the authors. Patients and study design: Data were pooled from eight, double-blind, randomized, controlled, vildagliptin monotherapy trials, including 2,108 previously drug-naïve patients with type 2 diabetes mellitus, who received vildagliptin 50 mg once daily (qd) (n=329) or twice daily (bid) (n=1,779) as a monotherapy and underwent an actual as well as a prespecified study visit, wherein weight, BP, and fasting lipids were assessed at week 24 (studies 1–3, 5, 9, and 12–14 enumerated in Table 1 from a review by Dejager et al7). The data resides in the Novartis vildagliptin database and was extracted and analyzed by Novartis database associates, as directed by the authors. Assessments: Laboratory parameters were assessed by central laboratories: Bioanalytical Research Corporation-EU (Ghent, Belgium), Diabetes Diagnostics Laboratory (Columbia, MO, USA), and Covance (Geneva, Switzerland; Singapore; or Indianapolis, IN, USA). HOMA-IR was calculated based on the following formula: HOMA-IR = (fasting insulin [uU/mL]) × (fasting glucose [mmol/L])/22.5. 
Data analysis A linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR. A linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR. Ethics and good clinical practice All study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki. All study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki. Data analysis: A linear regression model was applied to analyze the changes in BP and fasting lipid parameters relative to body weight changes, with/without adjusting for BMI and HOMA-IR. Ethics and good clinical practice: All study participants provided written informed consent. All protocols were approved by the independent ethics committee/institutional review board at each study site or country. All studies were conducted using Good Clinical Practice and in accordance with the Declaration of Helsinki. Results: This pooled analysis included 2,108 patients (54.6% male) with mean (± standard error) age 54.8±0.3 years, BMI 30.9±0.1 kg/m2, type 2 diabetes mellitus duration 2.1±0.1 years, glycated hemoglobin (HbA1c) 8.4%±0.0%, and fasting plasma glucose 9.9±0.1 mmol/L. Vildagliptin treatment (50 mg qd or bid) for 24 weeks resulted in modest but highly significant reductions in both systolic (2.70 mmHg) and diastolic (1.64 mmHg) BP. Fasting triglycerides (TG), very low density lipoprotein (VLDL) cholesterol, and low density lipoprotein (LDL) cholesterol decreased by 0.2, 0.07, and 0.13 mmol/L, respectively. Similarly, total cholesterol and non-high density lipoprotein (non-HDL) cholesterol decreased by 0.18 and 0.21 mmol/L from baseline, respectively. HDL cholesterol increased by 0.03 mmol/L and weight decreased by 0.48 kg (Table 1). A linear regression model was used to analyze the relationship between the aforementioned BP and fasting lipid levels with weight changes, degree of adiposity as assessed by baseline BMI, and degree of insulin resistance as assessed by HOMA-IR. The changes in systolic (Figure 1A) and diastolic (Figure 1B) BP from baseline to 24 weeks relative to changes in weight showed positive slopes of 0.71 (r2=0.04; P<0.001) and 0.40 (r2=0.03; P<0.001), respectively. These values did not improve by adjusting for BMI and HOMA-IR. Positive slopes of 0.023 (r2=0.01; P<0.0001) and 0.010 (r2=0.01; P<0.0001) were observed for changes in TG (Figure 1C) and VLDL cholesterol (Figure 1D) levels from baseline to 24 weeks relative to changes in weight. These values improved to 0.028 (r2=0.03; P<0.0001) and 0.013 (r2=0.04; P<0.0001), respectively, by adjusting for BMI and HOMA-IR. Total cholesterol, LDL, and non-HDL cholesterol did not correlate with weight change, with or without adjusting for BMI or HOMA-IR. Changes in HDL cholesterol (Figure 1E) at 24 weeks relative to changes in weight showed a negative slope of 0.005 (r2=0.009; P<0.0001), and this value did not improve by adjusting for BMI and HOMA-IR. 
Discussion: We have previously shown that, on average, weight neutrality is essentially observed at higher baseline HbA1c levels with vildagliptin, whereas a weight loss of ~1 kg was observed in patients with baseline HbA1c <8% (64 mmol/mol).1 The current analysis also demonstrated significant reductions in BP together with a favorable fasting lipid profile. The slopes of these parameters versus weight change suggest that these changes are at least partially explained by weight reduction. Because weight reduction is greatest at the glycemic levels at which patients are most often treated,1 the BP and fasting lipid benefits may actually be therapeutically greater than the averages in this pooled analysis indicate. There is no mechanistic basis for predicting that similar results would not be seen with other DPP-4 inhibitors. Both vildagliptin 50 mg qd and 50 mg bid are indicated as monotherapy in many countries, and clinical trials were carried out with both dosing frequencies. We have previously shown that, when baseline HbA1c levels were below 8%, there was no additional HbA1c reduction benefit from twice-daily dosing with monotherapy.7 Sixteen percent of the patients in the current analysis were on the 50 mg qd dose; notably, the 50 mg qd group was 4 years older and had baseline HbA1c levels 1.2% lower than the 50 mg bid group. Weight loss was greater in the 50 mg qd group (−1.2 kg versus −0.4 kg in the 50 mg bid group), as predicted previously by the lower baseline HbA1c and presumably lower degree of glycosuria1 in the 50 mg qd group. The reduction in diastolic BP was ~50% greater and the TG reduction was ~50% smaller (from a lower TG baseline of 1.8 versus 2.0 mmol/L) in the 50 mg qd group than in the 50 mg bid group, and the correlations were not driven by inclusion of the 50 mg qd dose (data not shown). None of these differences were considered notable enough to justify reporting the data in independent pools, so we report the BP and fasting lipid results in a single pool, as we did previously for weight.1 The BP and fasting lipid results carry some caveats. The average weight change is small (approximately −0.5 kg), and the measurements of BP and weight in the large clinical trials used for the current pool are expected to show a high degree of variability, which presumably explains the low correlation coefficients (r2); in contrast, the n is very high, yielding significant slopes. Partial correlation analysis suggests that the changes in TG and VLDL cholesterol are also partially explained by the degree of adiposity and insulin resistance. These effects of adiposity and insulin resistance, or the lack thereof, must be interpreted with some caution given the limitations of BMI and HOMA-IR as surrogates of adiposity and insulin resistance in these clinical trials. Furthermore, the studies pooled for this analysis assessed BP and fasting lipids only as secondary endpoints, and no data on confounding BP and lipid medications were collected. Thus, therapeutically, DPP-4 inhibitors yield modest benefits of unknown clinical importance on BP and fasting lipids. Although these BP and fasting lipid benefits of DPP-4 inhibitors have not translated into cardiovascular risk reduction over 3 years,8 it is possible that a primary prevention trial of longer duration might demonstrate a cardiovascular benefit after longer follow-up.
Notably, hypotension has not been identified as a side effect with vildagliptin.9
Background: We have previously shown modest weight loss with vildagliptin treatment. Since changes in body weight are associated with changes in blood pressure (BP) and fasting lipids, we assessed these parameters following vildagliptin treatment. Methods: Data were pooled from all double-blind, randomized, controlled vildagliptin monotherapy trials in previously drug-naïve patients with type 2 diabetes mellitus who received vildagliptin 50 mg once daily (qd) or twice daily (bid) (total n=2,108) and in which BP and fasting lipid data were obtained. Results: Data from patients receiving vildagliptin 50 mg qd or bid showed reductions from baseline to week 24 in systolic BP (from 132.5±0.32 to 129.8±0.34 mmHg; P<0.0001), diastolic BP (from 81.2±0.18 to 79.6±0.19 mmHg; P<0.0001), fasting triglycerides (from 2.00±0.02 to 1.80±0.02 mmol/L; P<0.0001), very low density lipoprotein cholesterol (from 0.90±0.01 to 0.83±0.01 mmol/L; P<0.0001), and low density lipoprotein cholesterol (from 3.17±0.02 to 3.04±0.02 mmol/L; P<0.0001), whereas high density lipoprotein cholesterol increased (from 1.19±0.01 to 1.22±0.01 mmol/L; P<0.001). Weight decreased by 0.48±0.08 kg (P<0.001). Conclusions: This large pooled analysis demonstrated that vildagliptin treatment produces a significant reduction in BP and a favorable fasting lipid profile, both associated with modest weight loss.
null
null
1,997
254
[ 263, 249, 45 ]
8
[ "weight", "bp", "fasting", "changes", "bp fasting", "50", "vildagliptin", "mg", "50 mg", "data" ]
[ "vildagliptin monotherapy compare", "vildagliptin monotherapy trials", "fasting lipids assessed", "observed vildagliptin glycemic", "baselines vildagliptin weight" ]
null
null
[CONTENT] TG | HDL | LDL | body weight DPP-4 inhibitor | GLP-1 [SUMMARY]
[CONTENT] TG | HDL | LDL | body weight DPP-4 inhibitor | GLP-1 [SUMMARY]
[CONTENT] TG | HDL | LDL | body weight DPP-4 inhibitor | GLP-1 [SUMMARY]
null
[CONTENT] TG | HDL | LDL | body weight DPP-4 inhibitor | GLP-1 [SUMMARY]
null
[CONTENT] Adamantane | Biomarkers | Blood Pressure | Diabetes Mellitus, Type 2 | Dipeptidyl Peptidase 4 | Dipeptidyl-Peptidase IV Inhibitors | Fasting | Female | Humans | Linear Models | Lipids | Male | Middle Aged | Nitriles | Pyrrolidines | Randomized Controlled Trials as Topic | Treatment Outcome | Vildagliptin | Weight Loss [SUMMARY]
[CONTENT] Adamantane | Biomarkers | Blood Pressure | Diabetes Mellitus, Type 2 | Dipeptidyl Peptidase 4 | Dipeptidyl-Peptidase IV Inhibitors | Fasting | Female | Humans | Linear Models | Lipids | Male | Middle Aged | Nitriles | Pyrrolidines | Randomized Controlled Trials as Topic | Treatment Outcome | Vildagliptin | Weight Loss [SUMMARY]
[CONTENT] Adamantane | Biomarkers | Blood Pressure | Diabetes Mellitus, Type 2 | Dipeptidyl Peptidase 4 | Dipeptidyl-Peptidase IV Inhibitors | Fasting | Female | Humans | Linear Models | Lipids | Male | Middle Aged | Nitriles | Pyrrolidines | Randomized Controlled Trials as Topic | Treatment Outcome | Vildagliptin | Weight Loss [SUMMARY]
null
[CONTENT] Adamantane | Biomarkers | Blood Pressure | Diabetes Mellitus, Type 2 | Dipeptidyl Peptidase 4 | Dipeptidyl-Peptidase IV Inhibitors | Fasting | Female | Humans | Linear Models | Lipids | Male | Middle Aged | Nitriles | Pyrrolidines | Randomized Controlled Trials as Topic | Treatment Outcome | Vildagliptin | Weight Loss [SUMMARY]
null
[CONTENT] vildagliptin monotherapy compare | vildagliptin monotherapy trials | fasting lipids assessed | observed vildagliptin glycemic | baselines vildagliptin weight [SUMMARY]
[CONTENT] vildagliptin monotherapy compare | vildagliptin monotherapy trials | fasting lipids assessed | observed vildagliptin glycemic | baselines vildagliptin weight [SUMMARY]
[CONTENT] vildagliptin monotherapy compare | vildagliptin monotherapy trials | fasting lipids assessed | observed vildagliptin glycemic | baselines vildagliptin weight [SUMMARY]
null
[CONTENT] vildagliptin monotherapy compare | vildagliptin monotherapy trials | fasting lipids assessed | observed vildagliptin glycemic | baselines vildagliptin weight [SUMMARY]
null
[CONTENT] weight | bp | fasting | changes | bp fasting | 50 | vildagliptin | mg | 50 mg | data [SUMMARY]
[CONTENT] weight | bp | fasting | changes | bp fasting | 50 | vildagliptin | mg | 50 mg | data [SUMMARY]
[CONTENT] weight | bp | fasting | changes | bp fasting | 50 | vildagliptin | mg | 50 mg | data [SUMMARY]
null
[CONTENT] weight | bp | fasting | changes | bp fasting | 50 | vildagliptin | mg | 50 mg | data [SUMMARY]
null
[CONTENT] changes | bp fasting lipids | lipids | fasting lipids | vildagliptin | associated changes | bp fasting | weight | fasting | bp [SUMMARY]
[CONTENT] changes | bp fasting lipid parameters | changes bp fasting lipid | parameters relative body weight | changes adjusting | changes adjusting bmi | changes adjusting bmi homa | model applied analyze changes | model applied analyze | model applied [SUMMARY]
[CONTENT] cholesterol | r2 | figure | 0001 | hdl | weeks | respectively | 24 weeks | hdl cholesterol | changes [SUMMARY]
null
[CONTENT] changes | weight | vildagliptin | fasting | bp | study | bp fasting | 50 | homa ir | homa [SUMMARY]
null
[CONTENT] ||| BP [SUMMARY]
[CONTENT] 2 | 50 | n=2,108 | BP [SUMMARY]
[CONTENT] week 24 | 132.5±0.32 to 129.8±0.34 | 81.2±0.18 | 79.6±0.19 | 2.00±0.02 | 1.80±0.02 | 3.17±0.02 | 1.22±0.01 ||| 0.48±0.08 kg [SUMMARY]
null
[CONTENT] ||| BP ||| 2 | 50 | n=2,108 | BP ||| week 24 | 132.5±0.32 to 129.8±0.34 | 81.2±0.18 | 79.6±0.19 | 2.00±0.02 | 1.80±0.02 | 3.17±0.02 | 1.22±0.01 ||| 0.48±0.08 kg ||| BP [SUMMARY]
null
High prevalence of biochemical disturbances of chronic kidney disease - mineral and bone disorders (CKD-MBD) in a nation-wide peritoneal dialysis cohort: are guideline goals too hard to achieve?
33538758
Chronic kidney disease - mineral and bone disorders (CKD-MBD) are common in dialysis patients. Definitions of targets for calcium (Ca), phosphorus (P), parathormone (iPTH), and alkaline phosphatase (ALP), together with treatment recommendations, are provided by international guidelines. There are few studies analyzing CKD-MBD in peritoneal dialysis (PD) patients and the impact of guidelines on mineral metabolism control. The aim of our study was to describe the prevalence of biomarkers for CKD-MBD in a large cohort of PD patients in Brazil.
INTRODUCTION
Data from the nation-wide prospective observational cohort BRAZPD II was used. Incident patients were followed between December 2004 and January 2011. According to KDOQI recommendations, reference ranges for total Ca were 8.4 to 9.5 mg/dL, for P, 3.5 to 5.5 mg/dL, for iPTH, 150-300 pg/mL, and for ALP, 120 U/L.
METHODS
Mean age was 59.8 ± 16 years, 48% were male, and 43% had diabetes. At baseline, Ca was 8.9 ± 0.9 mg/dL and 48.3% of patients were within the KDOQI target. After 1 year, Ca increased to 9.1 ± 0.9 mg/dL and 50.4% were in the KDOQI preferred range. P at baseline was 5.2 ± 1.6 mg/dL, with 52.8% on target, declining to 4.9 ± 1.5 mg/dL after one year, when 54.7% were on target. Median iPTH at baseline was 238 (P25% 110 - P75% 426) pg/mL and it remained stable throughout the first year; the proportion of patients within target ranged from 26 to 28.5%. At the end of the study, 80% of patients were on a 3.5 mEq/L Ca dialysate concentration, 66.9% were taking a phosphate binder, and 25% were taking activated vitamin D.
RESULTS
We observed a significant prevalence of biochemical disorders related to CKD-MBD in this dialysis population.
CONCLUSIONS
[ "Adult", "Aged", "Calcium", "Chronic Kidney Disease-Mineral and Bone Disorder", "Female", "Goals", "Humans", "Male", "Middle Aged", "Minerals", "Parathyroid Hormone", "Peritoneal Dialysis", "Prevalence", "Renal Dialysis", "Renal Insufficiency, Chronic" ]
8257285
Introduction
Chronic kidney disease-mineral and bone disorders (CKD-MBD) are considered some of the most common complications in dialysis patients, with an important impact on patient morbidity and mortality1-3. Management of CKD-MBD, particularly the definition of targets for biochemical parameters, namely calcium, phosphorus, parathormone, and alkaline phosphatase, and their treatment recommendations, is supported by current guidelines4,5. The majority of studies focusing on CKD-MBD in dialysis patients have involved patients on hemodialysis. However, studies with patients on chronic peritoneal dialysis (PD) provided strong evidence that abnormalities of mineral metabolism are also associated with all-cause, cardiovascular6, and infection-related mortality7. Another large national population-based longitudinal study found that, in a Chinese PD patient population, both hyper- and hypophosphatemia and elevated alkaline phosphatase were associated with increased mortality8. Full compliance with every recommendation for CKD-MBD among dialysis patients is not always feasible. For example, when two of the most commonly prescribed drugs to control mineral and bone disorders (calcitriol and calcium-based phosphate binders) are used to reduce iPTH and control phosphate, a single patient may experience hypercalcemia and/or hyperphosphatemia and move out of the guidelines' recommended range. The National Kidney Foundation Kidney Disease Outcomes Quality Initiative (NKF-KDOQI) guideline for bone metabolism in CKD recommends that serum phosphorus of dialysis patients be maintained between 3.5 and 5.5 mg/dL. For total serum calcium, the recommendation is to keep the value preferentially between 8.4 and 9.5 mg/dL9-11. Similarly, the KDIGO (Kidney Disease: Improving Global Outcomes) guidelines suggest lowering elevated phosphorus levels toward the normal range and avoiding hypercalcemia5. The background for such recommendations is clear, as both calcium and phosphorus abnormalities in CKD patients are strongly associated with vascular calcification and with cardiovascular and overall mortality1,12. Interestingly, the literature lacks information on whether the publication of these guidelines was effective in reducing the prevalence of hyper- and hypophosphatemia and hyper- and hypocalcemia in the dialysis population, and on the impact of those guidelines on peritoneal dialysis patients. Indeed, adherence to all the recommended targets and the application of appropriate pharmacological strategies may result in biochemical abnormalities, as can occur in patients who receive calcitriol to treat secondary hyperparathyroidism but develop hypercalcemia and/or hyperphosphatemia. For intact parathormone (iPTH), KDIGO guidelines suggest maintaining levels between 2 and 9 times the upper normal limit, and KDOQI guidelines suggest levels between 150 and 300 pg/mL. Regarding alkaline phosphatase (ALP), there are no suggested values, only the information that altered levels are related to remodeling disturbances and that levels should be monitored5,10. The aim of our study was to describe the prevalence of traditional biochemical parameters of bone-mineral disorders in PD patients, based on the values proposed by the KDOQI guideline, along the first year of therapy, in a large cohort of advanced CKD patients in Brazil.
Methods
This is a nation-wide prospective observational cohort study that used data from the Brazilian Peritoneal Dialysis Study II (BRAZPD II). Socio-demographic, clinical, and laboratory characteristics of the population were previously published13. The ethical committees of all participating centers approved the study. In summary, our database contains clinical and laboratory information from 122 dialysis centers in all five geographic regions of Brazil, corresponding to 65 to 70% of all prevalent PD patients in the country during the study period. Patients were included in this study and followed up between December 2004 and January 2011. In addition to the general demographic and clinical characteristics, we also reported the Davies score for the population. This is a traditional score used in PD studies and is simple to calculate: it considers the presence of up to 11 comorbidities, each one accounting for 1 point. These comorbidities are malignancy, ischemic heart disease, peripheral vascular disease, left ventricular dysfunction, diabetes, systemic collagen vascular disease, chronic obstructive lung disease, pulmonary fibrosis, active pulmonary tuberculosis, asthma, and cirrhosis14,15. The main goal of our study was to describe the prevalence of patients meeting the CKD-MBD KDOQI preferential ranges of biochemical variables, because this guideline was current at the time, especially for calcium and phosphorus targets, after one year of chronic PD. For this study, we included all incident patients (those who started PD during the study) who remained at least 90 days on therapy. Calcium and phosphorus levels were measured and recorded monthly following local regulatory rules and proper laboratory methodologies. Patients were stratified into groups according to serum levels of calcium, phosphorus, and iPTH following the KDOQI recommendation: the reference range for total calcium was 8.4 to 9.5 mg/dL and for phosphorus, 3.5 to 5.5 mg/dL. We also explored the results of iPTH and alkaline phosphatase, although iPTH was measured only every 6 months. For iPTH, we considered the target proposed by the guideline available at the time (150-300 pg/mL), and for ALP, the value of 120 U/L, which is reported in other references. Information on patients' prescriptions was also collected. All the biochemical variables related to mineral and bone disorders were obtained at baseline, 6 months, and 12 months after PD initiation. Statistical analysis: Continuous variables are reported as mean ± SD or median and range, while categorical variables (e.g., gender, race) are reported as frequencies or percentages. Continuous variables were compared using the paired T-test and categorical variables using the chi-square test. It is important to note that, given the large sample size of the BRAZPD II, differences between variables usually reach statistical significance, and clinical relevance should be discussed with this in mind. Analysis was performed using STATA 14 and figures were generated in Excel.
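A minimal sketch of the comparisons described above, assuming paired serum calcium measurements at baseline and 12 months and a hypothetical 2x2 table of on-target versus off-target counts; all values are illustrative and are not taken from the BRAZPD II database.

```python
import numpy as np
from scipy import stats

# Hypothetical paired serum calcium measurements (mg/dL) at baseline and 12 months.
ca_baseline = np.array([8.1, 9.0, 9.7, 8.8, 10.2, 8.5])
ca_12m      = np.array([8.6, 9.2, 9.9, 9.0, 10.0, 8.9])

# Paired T-test for the change in a continuous variable.
t_stat, p_paired = stats.ttest_rel(ca_baseline, ca_12m)

# Chi-square test for a categorical comparison, e.g. on-target versus off-target
# counts at baseline and at 12 months (hypothetical contingency table).
table = np.array([[483, 517],    # baseline: on target, off target
                  [504, 496]])   # 12 months: on target, off target
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
print(f"chi-square:    chi2 = {chi2:.2f}, p = {p_chi2:.3f}")
```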
Results
Baseline characteristics: The mean age of the study population was 59.8 ± 16 years, 48% were male, 71% had a history of hypertension, and diabetes was present in 43% of the patients. Thirty-seven percent had a history of previous hemodialysis, 49% received pre-dialysis care, and 64% were Caucasian. Baseline characteristics of the study population, including the Davies comorbidity score, which was previously described (14), are presented in Table 1 (BMI: body mass index; MW: minimal wage in Brazil). Calcium: The mean total serum calcium at baseline was 8.98 ± 0.97 mg/dL; it presented a small increase to 9.08 ± 0.93 mg/dL after 6 months and continued rising to 9.14 ± 0.94 mg/dL after one year of follow-up. At the beginning of the study, 48.3% of our population was within the recommended target for serum calcium. This prevalence presented a modest increase to 50.9% at 6 months and remained stable thereafter, at 50.4% at the first year of therapy. Figure 1 summarizes mean serum calcium levels, the distribution of patients into 3 groups according to the KDOQI preferential range, and the prevalence of the use of calcium-based phosphate binders (Figure 1: serum calcium levels (mg/dL) along the first year of dialysis and the use of calcium-based phosphate binders (percentage); error bars represent standard deviation). Phosphorus: The mean serum phosphate at baseline was 5.20 ± 1.65 mg/dL, presenting a small decrease to 4.92 ± 1.55 mg/dL after 6 months and remaining stable at the end of the first year of follow-up, at 4.95 ± 1.55 mg/dL. At the beginning of the study, 52.8% were within the recommended target for serum phosphate, slightly increasing to 56.7% at 6 months; at the end of the first year on dialysis, 54.7% of the patients were within the target. Figure 2 summarizes mean serum phosphate levels, the distribution of patients into 3 groups according to the KDOQI range, and the prevalence of the use of any phosphate binder (Figure 2: serum phosphate levels (mg/dL) along the first year of dialysis and the use of any phosphate binder; error bars represent standard deviation). PTH and alkaline phosphatase: The median iPTH serum level at baseline was 238 (P25% 110 - P75% 426) pg/mL and it remained relatively stable throughout the first year of dialysis, as depicted in Figure 3. The percentage of patients within the KDOQI recommended range was fairly constant, from 26% at baseline to a maximum of 28.5% in the third quarter after initiation of PD. For ALP, the median at baseline was 98 (IQR 71-154) IU/L and it did not change along the first year of dialysis (Figure 3: serum iPTH levels (pg/mL) along the first year of dialysis; error bars represent interquartile range). Prescription of phosphate binders and calcitriol: The prevalence of patients prescribed calcium-based phosphate binders was 34.1% at baseline and increased to 40.8% after 1 year. More than 80% of patients were receiving a 3.5 mEq/L calcium peritoneal dialysis solution. The prevalence of patients prescribed any phosphate binder was 51.7% at baseline, increasing to 66.9% after 1 year. The registry of calcitriol use in the BRAZPD started in 2008; at the end of the study, 25% of the patients were taking oral activated vitamin D.
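Figures 1 and 2 group patients as below, within, or above the KDOQI preferential range. The sketch below shows how such a three-way classification and the corresponding percentages could be computed; the serum phosphate values are hypothetical and the code is an illustration, not the study's actual analysis.

```python
import numpy as np
import pandas as pd

# Hypothetical serum phosphate values (mg/dL); 3.5-5.5 mg/dL is the KDOQI range.
phosphate = pd.Series([2.9, 4.1, 5.0, 6.3, 7.2, 4.8])

lo, hi = 3.5, 5.5
groups = pd.Series(
    np.select(
        [phosphate < lo, phosphate > hi],
        ["below target", "above target"],
        default="within target",
    ),
    index=phosphate.index,
)

# Percentage of patients in each group, analogous to the distributions in Figures 1 and 2.
print(groups.value_counts(normalize=True).mul(100).round(1))
```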
null
null
[ "Statistical analysis", "Baseline characteristics", "Calcium", "Phosphorus", "PTH and alkaline phosphatase", "Prescription of phosphate binders and calcitriol" ]
[ "Continuous variables are reported as mean ± SD or median and range, while\ncategorical variables (e.g., gender, race, etc.) are reported as frequencies or\npercentages. The comparison between continuous variables was performed using the\npaired T-test and for categorical values, the chi-square test. It is important\nto mention beforehand that given the large sample size of the BRAZPD II,\ndifferences between variables normally reach statistical significance and\nclinical relevance should be discussed with this view in mind. Analysis was\nperformed using STATA 14 and figures were generated in the Excel program.", "The mean age of the study population was 59.8 ± 16 years, 48% were male, 71% had\nhistory of hypertension, and diabetes was present in 43% of the patients.\nThirty-seven percent had history of previous hemodialysis, 49% received\npre-dialysis care, and 64% were caucasians. Baseline characteristics of the\nstudy population, including comorbidity Davies score, which was previously\ndescribed (14), are presented in Table\n1.\nBMI: body mass index; MW: minimal wage in Brazil.", "The mean total serum calcium at baseline was 8.98 ± 0.97 mg/dL, it presented a\nsmall increase to 9.08 ± 0.93 mg/dL after 6 months, and continued rising to 9.14\n± 0.94 mg/dL after one year of follow-up. At the beginning of the study, 48.3%\nof our population was within the recommended target for serum calcium level.\nThis prevalence presented a modest increase to 50.9% at 6 months and remained\nstable thereafter with 50.4% at the first year of therapy. Figure 1 summarizes mean serum calcium levels, the\ndistribution of patients into 3 groups divided according to the KDOQI\npreferential range, and the prevalence of the use of calcium-based phosphate\nbinders.\n\nFigure 1Serum calcium levels (mg/dL) along the first year of dialysis and\nthe use of calcium-based phosphate binders (percentage). Error bars\nrepresent standard deviation.\n", "The mean serum phosphate at baseline was 5.20 ± 1.65 mg/dL, presenting a small\ndecrease to 4.92 ± 1.55 mg/dL after 6 months, and remaining stable at the end of\nthe first year of follow-up, with 4.95 ± 1.55 mg/dL. At the beginning of the\nstudy, 52.8% were within the recommended target for phosphate serum levels,\nslightly increasing to 56.7% at 6 months, and at the end of the first year on\ndialysis, 54.7% of the patients were within the target. Figure 2 summarizes mean serum phosphate levels, the\ndistribution of patients into 3 groups divided according to the KDOQI, and the\nprevalence of the use of any phosphate binder.\n\nFigure 2Serum phosphate levels (mg/dL) along the first year of dialysis\nand the use of any phosphate binder. Error bars represent standard\ndeviation.\n", "The median iPTH serum level at baseline was 238 (P25% 110 - P75% 426) pg/mL and\nit remained relatively stable throughout the first year of dialysis, as depicted\nin Figure 3. The percentage of patients\nwithin the KDOQI recommended range was constant, from 26% at the baseline to a\nmaximum of 28.5% in the third quarter after initiation of PD. For ALP, the\nmedian at baseline was 98 (IQR 71-154) UI/L and it did not change along the\nfirst year of dialysis.\n\nFigure 3Serum iPTH levels (pg/mL) along the first year of dialysis. Error\nbars represent interquartile range.\n", "The prevalence of patients prescribed with calcium-based phosphate binders at\nbaseline was 34.1% and it increased to 40.8% after 1 year. 
More than 80% of\npatients were receiving 3.5 meq/L calcium on peritoneal dialysis solution. The\nprevalence of patients prescribed with any phosphate binder at baseline was\n51.7% increasing to 66.9% after 1 year. The registry of the use of calcitriol in\nthe BRAZPD started in 2008 and, at the end of the study, 25% of the patients\nwere taking oral activated vitamin D." ]
[ null, null, null, null, null, null ]
[ "Introduction", "Methods", "Statistical analysis", "Results", "Baseline characteristics", "Calcium", "Phosphorus", "PTH and alkaline phosphatase", "Prescription of phosphate binders and calcitriol", "Discussion" ]
[ "Chronic kidney disease-mineral and bone disorders (CKD-MBD) are considered some of\nthe most common complications in dialysis patients, with important impact on patient\nmorbidity and mortality1\n-\n3. Management of CKD-MBD, particularly\n(especially) the definition of targets for biochemical parameters, namely calcium,\nphosphorus, parathormone, alkaline phosphatase and their treatment recommendations,\nare supported by current guidelines4\n,\n5.\nThe majority of studies focusing on CKD-MDB in dialysis patients have involved\npatients on hemodialysis. However, studies with patients on chronic peritoneal\ndialysis (PD) showed strong evidence that abnormalities of mineral metabolism are\nalso associated with all-cause, cardiovascular6, and infection-related mortality7. Another large national population-based longitudinal study found that\nin PD Chinese patients population, both hyper and hypophosphatemia and elevated\nalkaline phosphatase were associated with increase mortality8.\nFull compliance to every recommendation for CKD-MBD among dialysis patients is not\nalways feasible. For example, when two of the most prescribed drugs to control\nmineral and bone disorders are used (calcitriol and calcium-based phosphate binders)\naiming at reduction of iPTH and phosphate control, a single patient may experience\nhypercalcemia and/or hyperphosphatemia and move out from guidelines’ recommended\nrange.\nThe National Kidney Foundation Kidney Diseases Outcomes Quality Initiative\n(NKF-KDOQI) guideline for bone metabolism in CKD recommends that serum levels of\nphosphorus of dialysis patients should be maintained between 3.5 and 5.5 mg/dL. For\ntotal serum calcium levels, the recommendation is to keep the value preferentially\nbetween 8.4 to 9.5 mg/dL9\n-\n11. Similarly, the KDIGO (Kidney Disease\nImproving Global Outcomes) guidelines suggest lowering elevated phosphorus levels\ntoward the normal range and avoiding hypercalcemia5.\nThe background for such recommendations is therefore clear, being both calcium and\nphosphorus abnormalities in CKD patients strongly associated with vascular\ncalcification and cardiovascular and overall mortality1\n,\n12. Interestingly, the literature lacks\ninformation about whether the publication of these guidelines was effective to\nreduce the prevalence of hyper and hypophosphatemia, and hyper and hypocalcemia in\ndialysis population, and the impact of those guidelines in peritoneal dialysis\npatients. Indeed, adherence to all the recommended targets and the application of\nappropriate pharmacological strategies may result in biochemical abnormalities, as\ncan occur in patients that receive calcitriol to treat secondary hyperparathyroidism\nbut develop hypercalcemia and/or hyperphosphatemia.\nFor intact parathormone (iPTH), KDIGO guidelines suggest maintaining levels between 2\nto 9 times the upper limit, and KDOQI guidelines suggest levels between 150 to 300\npg/mL. 
Regarding alkaline phosphatase (ALP), there are no suggested values, only the\ninformation that altered levels are related to remodeling disturbances and that\nlevels should be monitored5\n,\n10.\nThe aim of our study was to describe the prevalence of traditional biochemical\nparameters of bone-mineral disorders in PD patients, based on the values proposed by\nthe KDOQI guideline, along the first year of therapy, in a large cohort of advanced\nCKD patients in Brazil.", "This is a nation-wide prospective observational cohort study that used data from the\nBrazilian Peritoneal Dialysis Study II (BRAZPD II). Socio-demographic, clinical, and\nlaboratory characteristics of the population were previously published13. The ethical committees of all participating\ncenters approved the study. In summary, our database contains clinical and\nlaboratory information from 122 dialysis centers of all five geographic regions of\nBrazil, corresponding to 65 to 70% of all prevalent PD patients in the country\nduring the study period. Patients were included in this study and followed-up\nbetween December 2004 and January 2011.\nIn addition to the general demographic and clinical characteristics we also reported\nthe Davies score for the population. This is a traditional score used on PD studies\nand is simple to calculate. The score considers the presence of up to 11\ncomorbidities, each one accounting for 1 point. These comorbidities are malignancy,\nischemic heart disease, peripheral vascular disease, left ventricular dysfunction,\ndiabetes, systemic collagen vascular disease, chronic obstructive lung disease,\npulmonary fibrosis, active pulmonary tuberculosis, asthma, and cirrhosis14\n,\n15.\nThe main goal of our study was to describe the prevalence of patients meeting the\nCKD-MBD KDOQI preferential range of biochemical variables, because this guideline\nwas current at that time, especially for calcium and phosphorus targets, for\npatients after one year of initiation of chronic PD. For this study, we included all\nincident patients (those who started PD during the study) that remained at least 90\ndays in therapy. Calcium and phosphorus levels were measured and recorded monthly\nfollowing local regulatory rules and proper laboratory methodologies. Patients were\nstratified in groups according to serum levels of calcium, phosphorus, and iPTH\naccording to KDOQI recommendation: the reference value for total calcium was 8.4 to\n9.5 mg/dL, and for phosphorus, 3.5 to 5.5 mg/dL. We also explored the results of\niPTH and alkaline phosphatase, although the frequency of measurement of iPTH was\nonly every 6 months. For iPTH, we considered the target proposed by the guideline\navailable at the time (150-300 pg/mL) and for ALP, the value of 120 U/L, which is\nreported in other references. Information on patient’s prescriptions was also\ncollected. All the biochemical variables related to mineral and bone disorders were\nobtained at baseline, 6 months, and 12 months after PD initiation.\n Statistical analysis Continuous variables are reported as mean ± SD or median and range, while\ncategorical variables (e.g., gender, race, etc.) are reported as frequencies or\npercentages. The comparison between continuous variables was performed using the\npaired T-test and for categorical values, the chi-square test. 
It is important\nto mention beforehand that given the large sample size of the BRAZPD II,\ndifferences between variables normally reach statistical significance and\nclinical relevance should be discussed with this view in mind. Analysis was\nperformed using STATA 14 and figures were generated in the Excel program.\nContinuous variables are reported as mean ± SD or median and range, while\ncategorical variables (e.g., gender, race, etc.) are reported as frequencies or\npercentages. The comparison between continuous variables was performed using the\npaired T-test and for categorical values, the chi-square test. It is important\nto mention beforehand that given the large sample size of the BRAZPD II,\ndifferences between variables normally reach statistical significance and\nclinical relevance should be discussed with this view in mind. Analysis was\nperformed using STATA 14 and figures were generated in the Excel program.", "Continuous variables are reported as mean ± SD or median and range, while\ncategorical variables (e.g., gender, race, etc.) are reported as frequencies or\npercentages. The comparison between continuous variables was performed using the\npaired T-test and for categorical values, the chi-square test. It is important\nto mention beforehand that given the large sample size of the BRAZPD II,\ndifferences between variables normally reach statistical significance and\nclinical relevance should be discussed with this view in mind. Analysis was\nperformed using STATA 14 and figures were generated in the Excel program.", " Baseline characteristics The mean age of the study population was 59.8 ± 16 years, 48% were male, 71% had\nhistory of hypertension, and diabetes was present in 43% of the patients.\nThirty-seven percent had history of previous hemodialysis, 49% received\npre-dialysis care, and 64% were caucasians. Baseline characteristics of the\nstudy population, including comorbidity Davies score, which was previously\ndescribed (14), are presented in Table\n1.\nBMI: body mass index; MW: minimal wage in Brazil.\nThe mean age of the study population was 59.8 ± 16 years, 48% were male, 71% had\nhistory of hypertension, and diabetes was present in 43% of the patients.\nThirty-seven percent had history of previous hemodialysis, 49% received\npre-dialysis care, and 64% were caucasians. Baseline characteristics of the\nstudy population, including comorbidity Davies score, which was previously\ndescribed (14), are presented in Table\n1.\nBMI: body mass index; MW: minimal wage in Brazil.\n Calcium The mean total serum calcium at baseline was 8.98 ± 0.97 mg/dL, it presented a\nsmall increase to 9.08 ± 0.93 mg/dL after 6 months, and continued rising to 9.14\n± 0.94 mg/dL after one year of follow-up. At the beginning of the study, 48.3%\nof our population was within the recommended target for serum calcium level.\nThis prevalence presented a modest increase to 50.9% at 6 months and remained\nstable thereafter with 50.4% at the first year of therapy. Figure 1 summarizes mean serum calcium levels, the\ndistribution of patients into 3 groups divided according to the KDOQI\npreferential range, and the prevalence of the use of calcium-based phosphate\nbinders.\n\nFigure 1Serum calcium levels (mg/dL) along the first year of dialysis and\nthe use of calcium-based phosphate binders (percentage). 
Error bars\nrepresent standard deviation.\n\nThe mean total serum calcium at baseline was 8.98 ± 0.97 mg/dL, it presented a\nsmall increase to 9.08 ± 0.93 mg/dL after 6 months, and continued rising to 9.14\n± 0.94 mg/dL after one year of follow-up. At the beginning of the study, 48.3%\nof our population was within the recommended target for serum calcium level.\nThis prevalence presented a modest increase to 50.9% at 6 months and remained\nstable thereafter with 50.4% at the first year of therapy. Figure 1 summarizes mean serum calcium levels, the\ndistribution of patients into 3 groups divided according to the KDOQI\npreferential range, and the prevalence of the use of calcium-based phosphate\nbinders.\n\nFigure 1Serum calcium levels (mg/dL) along the first year of dialysis and\nthe use of calcium-based phosphate binders (percentage). Error bars\nrepresent standard deviation.\n\n Phosphorus The mean serum phosphate at baseline was 5.20 ± 1.65 mg/dL, presenting a small\ndecrease to 4.92 ± 1.55 mg/dL after 6 months, and remaining stable at the end of\nthe first year of follow-up, with 4.95 ± 1.55 mg/dL. At the beginning of the\nstudy, 52.8% were within the recommended target for phosphate serum levels,\nslightly increasing to 56.7% at 6 months, and at the end of the first year on\ndialysis, 54.7% of the patients were within the target. Figure 2 summarizes mean serum phosphate levels, the\ndistribution of patients into 3 groups divided according to the KDOQI, and the\nprevalence of the use of any phosphate binder.\n\nFigure 2Serum phosphate levels (mg/dL) along the first year of dialysis\nand the use of any phosphate binder. Error bars represent standard\ndeviation.\n\nThe mean serum phosphate at baseline was 5.20 ± 1.65 mg/dL, presenting a small\ndecrease to 4.92 ± 1.55 mg/dL after 6 months, and remaining stable at the end of\nthe first year of follow-up, with 4.95 ± 1.55 mg/dL. At the beginning of the\nstudy, 52.8% were within the recommended target for phosphate serum levels,\nslightly increasing to 56.7% at 6 months, and at the end of the first year on\ndialysis, 54.7% of the patients were within the target. Figure 2 summarizes mean serum phosphate levels, the\ndistribution of patients into 3 groups divided according to the KDOQI, and the\nprevalence of the use of any phosphate binder.\n\nFigure 2Serum phosphate levels (mg/dL) along the first year of dialysis\nand the use of any phosphate binder. Error bars represent standard\ndeviation.\n\n PTH and alkaline phosphatase The median iPTH serum level at baseline was 238 (P25% 110 - P75% 426) pg/mL and\nit remained relatively stable throughout the first year of dialysis, as depicted\nin Figure 3. The percentage of patients\nwithin the KDOQI recommended range was constant, from 26% at the baseline to a\nmaximum of 28.5% in the third quarter after initiation of PD. For ALP, the\nmedian at baseline was 98 (IQR 71-154) UI/L and it did not change along the\nfirst year of dialysis.\n\nFigure 3Serum iPTH levels (pg/mL) along the first year of dialysis. Error\nbars represent interquartile range.\n\nThe median iPTH serum level at baseline was 238 (P25% 110 - P75% 426) pg/mL and\nit remained relatively stable throughout the first year of dialysis, as depicted\nin Figure 3. The percentage of patients\nwithin the KDOQI recommended range was constant, from 26% at the baseline to a\nmaximum of 28.5% in the third quarter after initiation of PD. 
For ALP, the\nmedian at baseline was 98 (IQR 71-154) UI/L and it did not change along the\nfirst year of dialysis.\n\nFigure 3Serum iPTH levels (pg/mL) along the first year of dialysis. Error\nbars represent interquartile range.\n\n Prescription of phosphate binders and calcitriol The prevalence of patients prescribed with calcium-based phosphate binders at\nbaseline was 34.1% and it increased to 40.8% after 1 year. More than 80% of\npatients were receiving 3.5 meq/L calcium on peritoneal dialysis solution. The\nprevalence of patients prescribed with any phosphate binder at baseline was\n51.7% increasing to 66.9% after 1 year. The registry of the use of calcitriol in\nthe BRAZPD started in 2008 and, at the end of the study, 25% of the patients\nwere taking oral activated vitamin D.\nThe prevalence of patients prescribed with calcium-based phosphate binders at\nbaseline was 34.1% and it increased to 40.8% after 1 year. More than 80% of\npatients were receiving 3.5 meq/L calcium on peritoneal dialysis solution. The\nprevalence of patients prescribed with any phosphate binder at baseline was\n51.7% increasing to 66.9% after 1 year. The registry of the use of calcitriol in\nthe BRAZPD started in 2008 and, at the end of the study, 25% of the patients\nwere taking oral activated vitamin D.", "The mean age of the study population was 59.8 ± 16 years, 48% were male, 71% had\nhistory of hypertension, and diabetes was present in 43% of the patients.\nThirty-seven percent had history of previous hemodialysis, 49% received\npre-dialysis care, and 64% were caucasians. Baseline characteristics of the\nstudy population, including comorbidity Davies score, which was previously\ndescribed (14), are presented in Table\n1.\nBMI: body mass index; MW: minimal wage in Brazil.", "The mean total serum calcium at baseline was 8.98 ± 0.97 mg/dL, it presented a\nsmall increase to 9.08 ± 0.93 mg/dL after 6 months, and continued rising to 9.14\n± 0.94 mg/dL after one year of follow-up. At the beginning of the study, 48.3%\nof our population was within the recommended target for serum calcium level.\nThis prevalence presented a modest increase to 50.9% at 6 months and remained\nstable thereafter with 50.4% at the first year of therapy. Figure 1 summarizes mean serum calcium levels, the\ndistribution of patients into 3 groups divided according to the KDOQI\npreferential range, and the prevalence of the use of calcium-based phosphate\nbinders.\n\nFigure 1Serum calcium levels (mg/dL) along the first year of dialysis and\nthe use of calcium-based phosphate binders (percentage). Error bars\nrepresent standard deviation.\n", "The mean serum phosphate at baseline was 5.20 ± 1.65 mg/dL, presenting a small\ndecrease to 4.92 ± 1.55 mg/dL after 6 months, and remaining stable at the end of\nthe first year of follow-up, with 4.95 ± 1.55 mg/dL. At the beginning of the\nstudy, 52.8% were within the recommended target for phosphate serum levels,\nslightly increasing to 56.7% at 6 months, and at the end of the first year on\ndialysis, 54.7% of the patients were within the target. Figure 2 summarizes mean serum phosphate levels, the\ndistribution of patients into 3 groups divided according to the KDOQI, and the\nprevalence of the use of any phosphate binder.\n\nFigure 2Serum phosphate levels (mg/dL) along the first year of dialysis\nand the use of any phosphate binder. 
Error bars represent standard\ndeviation.\n", "The median iPTH serum level at baseline was 238 (P25% 110 - P75% 426) pg/mL and\nit remained relatively stable throughout the first year of dialysis, as depicted\nin Figure 3. The percentage of patients\nwithin the KDOQI recommended range was constant, from 26% at the baseline to a\nmaximum of 28.5% in the third quarter after initiation of PD. For ALP, the\nmedian at baseline was 98 (IQR 71-154) UI/L and it did not change along the\nfirst year of dialysis.\n\nFigure 3Serum iPTH levels (pg/mL) along the first year of dialysis. Error\nbars represent interquartile range.\n", "The prevalence of patients prescribed with calcium-based phosphate binders at\nbaseline was 34.1% and it increased to 40.8% after 1 year. More than 80% of\npatients were receiving 3.5 meq/L calcium on peritoneal dialysis solution. The\nprevalence of patients prescribed with any phosphate binder at baseline was\n51.7% increasing to 66.9% after 1 year. The registry of the use of calcitriol in\nthe BRAZPD started in 2008 and, at the end of the study, 25% of the patients\nwere taking oral activated vitamin D.", "In this large national cohort, we observed the difficulties of PD patients in\nachieving the CKD-MBD KODQI guideline recommended range10. At the end of the first year of therapy, only 50.4, 54.7,\nand 26.7% of patients were in the suggested range for total calcium, phosphate, and\niPTH levels, respectively. Another study from Canada demonstrated the same\ndifficulties in the management of traditional biochemical mineral and bone\nvariables. Only 64.5% of patients had serum phosphate levels within KDOQI targets,\n44.5% were within calcium target levels, 28.4% were within PTH suggested range, and\n9.4% of PD patients met all 3 targets16. We\nshowed in our study that, at the end of the first year on PD, only half of the\npatients were in the recommended range for serum calcium and phosphorus levels,\ndespite an increase in the prescription of calcium and non-calcium phosphate\nbinders.\nSome studies have evaluated the impact of CKD-MBD biochemical abnormalities on\nmortality in PD patients8\n,\n17\n-\n19. Avram et al.17 observed that lower PTH values were associated with\nincreased mortality, while Rhee et al.19,\nstudying 9.244 PD patients in a retrospective cohort study, demonstrated that PTH\nhad a U-shaped association with mortality, with values of 200-700 pg/mL exhibiting\nthe lowest mortality and concentrations < 100 pg/mL, the highest one.\nAdditionally, Liu et al.8 demonstrated that\nthe effects of ALP levels may operate as a more consistent predictor of mortality\nthan the traditional calcium, phosphate, and PTH levels, in a large cohort of PD\npatients in Taiwan. Noordzij et al.18, in a\nprospective cohort study with 586 PD patients, demonstrated that hyperphosphatemia,\nbut not abnormal levels of calcium or iPTH, were associated with increased\nmortality. Finally, Stevens et al.20, in\nanother prospective cohort study with 158 PD patients, observed that only serum\nphosphate showed significant association with mortality.\nSerum calcium and phosphate levels are important biomarkers for the evaluation of\nCKD-MBD. All guidelines for CKD-MBD recommend special attention to the control of\nhyper/hypophosphatemia and hyper/hypocalcemia5\n,\n10. Disorders of these biomarkers are\nconsidered significant risk factors for overall and cardiovascular mortality1 in the\ndialysis population. During most part of our study, the current guideline was the\nCKD-MBD KDOQI. 
Published in 2003 and updated in 2009, this guideline recommended a\ntarget for calcium serum between 8.4 and 9.5 mg/dL and phosphate between 3.5 and 5.5\nmg/dL. Few studies on dialysis patients showed a small, if any, impact of the KDOQI\nguideline on the prevalence of calcium and phosphate disorders17. We then decided to look at the behavior of these\nelectrolytes along the first year of therapy in a large PD cohort.\nBased on the KDOQI guideline, the prevalence of hyperphosphatemia and\nhypophosphatemia in our cohort at baseline was similar to previous reports from\ndifferent regions of the world21\n-\n23. Importantly, the prevalence of patients\non the proposed target for phosphate barely changed along the first year of\ndialysis, despite an important increase in the proportion of patients taking\nphosphate binders. Some reasons may have contributed to the difficulty in\ncontrolling phosphate serum levels, including a low patient adherence to diet and\ndrug prescription, and a loss of residual renal function. An increase in iPTH with\ntime could also have contributed due to its action on bone resorption. However, iPTH\nlevels remained stable along the first year of dialysis therapy, and probably did\nnot influenced the results.\nThe prevalence of hyper and hypocalcemia in our cohort was similar to other reports,\nwith a small predominance of hypercalcemia over hypocalcemia 24\n,\n25. These disorders have also been associated\nwith increased mortality rates, although less frequently in the setting of PD25\n,\n26. The increase of almost 7% in the number\nof patients with hypercalcemia is likely related to the use of 3.5 mEq/L calcium PD\nsolutions and to the use of calcium-based phosphate binders. Although available to\nall patients in the country, the 2.5 mEq/L calcium PD solution is not frequently\nprescribed. Additionally, our group had previously demonstrated that in PD patients\nwith PTH < 150 pg/mL, conversion to low calcium solutions (2.5 mEq/L) appears to\nbe a simple and effective strategy to bring iPTH levels to the range determined by\ncurrent guidelines5 when compared with 3.5\nmEq/L calcium PD solutions27.\nDespite the increase in prevalence of hypercalcemia during the observation period of\nour study, the percentage of patients taking calcium-based phosphate binders also\nincreased. One possible explanation is related to the bureaucracy involved to get\nsevelamer hydrochloride from the public health system in some regions of Brazil,\nwhere a proof of high serum calcium is required before getting the non-calcium based\nphosphate binder. Unfortunately, there is no data on this in the BRAZPD database.\nChanges in the membrane profile may also have contributed to the greater number of\npatients with hypercalcemia. Exposure to bio-incompatible PD solutions is a factor\nthat may affect the peritoneal membrane and lead to a progressive increase in the\ntransport status. The higher the transport status, the higher the calcium absorbed\nfrom the peritoneal cavity23.\nIn our study, only 26.7% of patients had iPTH levels within the range suggested by\ninternational guidelines10 during the study\nfollow-up. 
However, the lack of absolute information about the use of calcitriol,\nnutritional forms of vitamin D, and vitamin D analogs limits the definition of\nwhether further improvement in reaching clinical targets would have been\npossible.\nOur study has some limitations including all those normally related to any\nobservational study such as lack of longitudinal data on residual renal function,\nlack of data on peritoneal membrane status, no information about the doses and\nfrequency of the phosphate binders prescribed, missing data on iPTH and ALP, and\nlack of control of patient adherence to medication and diet. Strengths of our study\ninclude the large sample size with an excellent external validity, laboratory values\nof calcium and phosphorus collected monthly, and longitudinal data on the use of\nphosphate binders.\nIn conclusion, we observed a high prevalence of biochemical disturbances of CKD-MBD\nmarkers in this nation-wide PD cohort. Additionally, initiation of PD was not enough\nto reduce the high prevalence of calcium and phosphorus disturbances in a public\nhealth system that provides free access to dialysis, low-Ca dialysate, calcitriol,\nand phosphate binders. Further studies are needed to identify the causes behind the\ndifficulties of PD centers in achieving the current recommended targets for serum\nlevels of calcium and phosphorus." ]
[ "intro", "methods", null, "results", null, null, null, null, null, "discussion" ]
[ "Phosphorus", "Calcium", "Renal Insufficiency, Chronic", "Fósforo", "Cálcio", "Insuficiência Renal Crônica" ]
Introduction: Chronic kidney disease-mineral and bone disorders (CKD-MBD) are considered some of the most common complications in dialysis patients, with important impact on patient morbidity and mortality1 - 3. Management of CKD-MBD, particularly (especially) the definition of targets for biochemical parameters, namely calcium, phosphorus, parathormone, alkaline phosphatase and their treatment recommendations, are supported by current guidelines4 , 5. The majority of studies focusing on CKD-MDB in dialysis patients have involved patients on hemodialysis. However, studies with patients on chronic peritoneal dialysis (PD) showed strong evidence that abnormalities of mineral metabolism are also associated with all-cause, cardiovascular6, and infection-related mortality7. Another large national population-based longitudinal study found that in PD Chinese patients population, both hyper and hypophosphatemia and elevated alkaline phosphatase were associated with increase mortality8. Full compliance to every recommendation for CKD-MBD among dialysis patients is not always feasible. For example, when two of the most prescribed drugs to control mineral and bone disorders are used (calcitriol and calcium-based phosphate binders) aiming at reduction of iPTH and phosphate control, a single patient may experience hypercalcemia and/or hyperphosphatemia and move out from guidelines’ recommended range. The National Kidney Foundation Kidney Diseases Outcomes Quality Initiative (NKF-KDOQI) guideline for bone metabolism in CKD recommends that serum levels of phosphorus of dialysis patients should be maintained between 3.5 and 5.5 mg/dL. For total serum calcium levels, the recommendation is to keep the value preferentially between 8.4 to 9.5 mg/dL9 - 11. Similarly, the KDIGO (Kidney Disease Improving Global Outcomes) guidelines suggest lowering elevated phosphorus levels toward the normal range and avoiding hypercalcemia5. The background for such recommendations is therefore clear, being both calcium and phosphorus abnormalities in CKD patients strongly associated with vascular calcification and cardiovascular and overall mortality1 , 12. Interestingly, the literature lacks information about whether the publication of these guidelines was effective to reduce the prevalence of hyper and hypophosphatemia, and hyper and hypocalcemia in dialysis population, and the impact of those guidelines in peritoneal dialysis patients. Indeed, adherence to all the recommended targets and the application of appropriate pharmacological strategies may result in biochemical abnormalities, as can occur in patients that receive calcitriol to treat secondary hyperparathyroidism but develop hypercalcemia and/or hyperphosphatemia. For intact parathormone (iPTH), KDIGO guidelines suggest maintaining levels between 2 to 9 times the upper limit, and KDOQI guidelines suggest levels between 150 to 300 pg/mL. Regarding alkaline phosphatase (ALP), there are no suggested values, only the information that altered levels are related to remodeling disturbances and that levels should be monitored5 , 10. The aim of our study was to describe the prevalence of traditional biochemical parameters of bone-mineral disorders in PD patients, based on the values proposed by the KDOQI guideline, along the first year of therapy, in a large cohort of advanced CKD patients in Brazil. Methods: This is a nation-wide prospective observational cohort study that used data from the Brazilian Peritoneal Dialysis Study II (BRAZPD II). 
Socio-demographic, clinical, and laboratory characteristics of the population were previously published13. The ethical committees of all participating centers approved the study. In summary, our database contains clinical and laboratory information from 122 dialysis centers in all five geographic regions of Brazil, corresponding to 65 to 70% of all prevalent PD patients in the country during the study period. Patients were included in this study and followed up between December 2004 and January 2011. In addition to the general demographic and clinical characteristics, we also reported the Davies score for the population. This is a traditional score used in PD studies and is simple to calculate. The score considers the presence of up to 11 comorbidities, each one accounting for 1 point. These comorbidities are malignancy, ischemic heart disease, peripheral vascular disease, left ventricular dysfunction, diabetes, systemic collagen vascular disease, chronic obstructive lung disease, pulmonary fibrosis, active pulmonary tuberculosis, asthma, and cirrhosis14 , 15. The main goal of our study was to describe the prevalence of patients meeting the CKD-MBD KDOQI preferential ranges of biochemical variables, because this guideline was current at that time, especially the calcium and phosphorus targets, after one year of chronic PD. For this study, we included all incident patients (those who started PD during the study) who remained at least 90 days on therapy. Calcium and phosphorus levels were measured and recorded monthly following local regulatory rules and proper laboratory methodologies. Patients were stratified into groups according to serum levels of calcium, phosphorus, and iPTH, following the KDOQI recommendation: the reference value for total calcium was 8.4 to 9.5 mg/dL, and for phosphorus, 3.5 to 5.5 mg/dL. We also explored the results of iPTH and alkaline phosphatase, although iPTH was measured only every 6 months. For iPTH, we considered the target proposed by the guideline available at the time (150-300 pg/mL) and, for ALP, the value of 120 U/L, which is reported in other references. Information on patients' prescriptions was also collected. All the biochemical variables related to mineral and bone disorders were obtained at baseline, 6 months, and 12 months after PD initiation. Statistical analysis: Continuous variables are reported as mean ± SD or median and range, while categorical variables (e.g., gender, race, etc.) are reported as frequencies or percentages. Continuous variables were compared using the paired t-test and categorical variables using the chi-square test. It is important to mention beforehand that, given the large sample size of the BRAZPD II, differences between variables normally reach statistical significance, and clinical relevance should be discussed with this in mind. Analysis was performed using STATA 14 and figures were generated in Excel. 
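Purely as an illustrative aside (not part of the original analysis plan, and using synthetic numbers rather than BRAZPD II data), the paired t-test and chi-square comparisons described above could be sketched in Python with scipy as follows:

import numpy as np
from scipy import stats

# Synthetic example: baseline vs. 12-month serum calcium (mg/dL) in the same patients.
rng = np.random.default_rng(0)
baseline_ca = rng.normal(loc=8.98, scale=0.97, size=200)
month12_ca = baseline_ca + rng.normal(loc=0.16, scale=0.50, size=200)

# Paired t-test for repeated continuous measurements, as stated in the Methods.
t_stat, p_paired = stats.ttest_rel(baseline_ca, month12_ca)
print(f"paired t = {t_stat:.2f}, p = {p_paired:.3g}")

# Chi-square test for a categorical comparison, e.g. illustrative counts of
# patients on/off the calcium target at baseline vs. 12 months.
counts = np.array([[483, 517],
                   [504, 496]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(counts)
print(f"chi-square = {chi2:.2f} (dof = {dof}), p = {p_chi2:.3g}")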
Results: Baseline characteristics: The mean age of the study population was 59.8 ± 16 years, 48% were male, 71% had a history of hypertension, and diabetes was present in 43% of the patients. Thirty-seven percent had a history of previous hemodialysis, 49% received pre-dialysis care, and 64% were Caucasian. Baseline characteristics of the study population, including the Davies comorbidity score, which was previously described (14), are presented in Table 1 (BMI: body mass index; MW: minimum wage in Brazil). Calcium: The mean total serum calcium at baseline was 8.98 ± 0.97 mg/dL; it presented a small increase to 9.08 ± 0.93 mg/dL after 6 months and continued rising to 9.14 ± 0.94 mg/dL after one year of follow-up. At the beginning of the study, 48.3% of our population was within the recommended target for serum calcium. This prevalence presented a modest increase to 50.9% at 6 months and remained stable thereafter, at 50.4% in the first year of therapy. Figure 1 summarizes mean serum calcium levels, the distribution of patients into 3 groups according to the KDOQI preferential range, and the prevalence of the use of calcium-based phosphate binders. Figure 1: Serum calcium levels (mg/dL) along the first year of dialysis and the use of calcium-based phosphate binders (percentage). Error bars represent standard deviation. 
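To make the on-target classification concrete, the sketch below (illustrative only; the column names are hypothetical and the values invented, not taken from the study database) labels serum calcium and phosphorus measurements against the KDOQI preferential ranges cited above and computes the share of patients within target:

import pandas as pd

# KDOQI preferential ranges cited in the text.
RANGES = {
    "calcium_mg_dl": (8.4, 9.5),
    "phosphorus_mg_dl": (3.5, 5.5),
}

def status(value, low, high):
    # Returns 'below', 'on target', or 'above' for one measurement.
    if pd.isna(value):
        return "missing"
    if value < low:
        return "below"
    if value > high:
        return "above"
    return "on target"

# Hypothetical measurements for five patients at one visit.
visits = pd.DataFrame({
    "calcium_mg_dl": [8.1, 9.0, 10.2, 8.7, 9.6],
    "phosphorus_mg_dl": [5.9, 4.8, 3.1, 5.2, 6.4],
})

for column, (low, high) in RANGES.items():
    labels = visits[column].apply(status, args=(low, high))
    on_target = (labels == "on target").mean() * 100
    print(column, dict(labels.value_counts()), f"on target: {on_target:.0f}%")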
Phosphorus: The mean serum phosphate at baseline was 5.20 ± 1.65 mg/dL, presenting a small decrease to 4.92 ± 1.55 mg/dL after 6 months and remaining stable at the end of the first year of follow-up, at 4.95 ± 1.55 mg/dL. At the beginning of the study, 52.8% of patients were within the recommended target for serum phosphate, increasing slightly to 56.7% at 6 months; at the end of the first year on dialysis, 54.7% of the patients were within the target. Figure 2 summarizes mean serum phosphate levels, the distribution of patients into 3 groups according to the KDOQI range, and the prevalence of the use of any phosphate binder. Figure 2: Serum phosphate levels (mg/dL) along the first year of dialysis and the use of any phosphate binder. Error bars represent standard deviation. PTH and alkaline phosphatase: The median iPTH serum level at baseline was 238 (P25 110 - P75 426) pg/mL and it remained relatively stable throughout the first year of dialysis, as depicted in Figure 3. The percentage of patients within the KDOQI recommended range was fairly constant, from 26% at baseline to a maximum of 28.5% in the third quarter after initiation of PD. For ALP, the median at baseline was 98 (IQR 71-154) IU/L and it did not change along the first year of dialysis. Figure 3: Serum iPTH levels (pg/mL) along the first year of dialysis. Error bars represent interquartile range. Prescription of phosphate binders and calcitriol: The prevalence of patients prescribed calcium-based phosphate binders at baseline was 34.1% and it increased to 40.8% after 1 year. More than 80% of patients were receiving a 3.5 mEq/L calcium peritoneal dialysis solution. The prevalence of patients prescribed any phosphate binder was 51.7% at baseline, increasing to 66.9% after 1 year. The registry of calcitriol use in the BRAZPD started in 2008 and, at the end of the study, 25% of the patients were taking oral activated vitamin D. 
Discussion: In this large national cohort, we observed the difficulties of PD patients in achieving the CKD-MBD KDOQI guideline recommended ranges10. At the end of the first year of therapy, only 50.4, 54.7, and 26.7% of patients were in the suggested range for total calcium, phosphate, and iPTH levels, respectively. Another study, from Canada, demonstrated the same difficulties in the management of traditional biochemical mineral and bone variables: only 64.5% of patients had serum phosphate levels within KDOQI targets, 44.5% were within calcium target levels, 28.4% were within the suggested PTH range, and 9.4% of PD patients met all 3 targets16. We showed in our study that, at the end of the first year on PD, only half of the patients were in the recommended range for serum calcium and phosphorus levels, despite an increase in the prescription of calcium-based and non-calcium phosphate binders. Some studies have evaluated the impact of CKD-MBD biochemical abnormalities on mortality in PD patients8 , 17 - 19. Avram et al.17 observed that lower PTH values were associated with increased mortality, while Rhee et al.19, studying 9,244 PD patients in a retrospective cohort study, demonstrated that PTH had a U-shaped association with mortality, with values of 200-700 pg/mL exhibiting the lowest mortality and concentrations < 100 pg/mL, the highest. Additionally, Liu et al.8 demonstrated, in a large cohort of PD patients in Taiwan, that ALP levels may be a more consistent predictor of mortality than the traditional calcium, phosphate, and PTH levels. Noordzij et al.18, in a prospective cohort study with 586 PD patients, demonstrated that hyperphosphatemia, but not abnormal levels of calcium or iPTH, was associated with increased mortality. Finally, Stevens et al.20, in another prospective cohort study with 158 PD patients, observed that only serum phosphate showed a significant association with mortality. Serum calcium and phosphate levels are important biomarkers for the evaluation of CKD-MBD. All guidelines for CKD-MBD recommend special attention to the control of hyper/hypophosphatemia and hyper/hypocalcemia5 , 10. Disorders of these biomarkers are considered significant risk factors for overall and cardiovascular mortality1 in the dialysis population. During most of our study period, the current guideline was the CKD-MBD KDOQI guideline. Published in 2003 and updated in 2009, this guideline recommended a target for serum calcium between 8.4 and 9.5 mg/dL and for phosphate between 3.5 and 5.5 mg/dL. Few studies on dialysis patients showed a small, if any, impact of the KDOQI guideline on the prevalence of calcium and phosphate disorders17. We therefore decided to look at the behavior of these electrolytes along the first year of therapy in a large PD cohort. Based on the KDOQI guideline, the prevalence of hyperphosphatemia and hypophosphatemia in our cohort at baseline was similar to previous reports from different regions of the world21 - 23. Importantly, the prevalence of patients within the proposed target for phosphate barely changed along the first year of dialysis, despite an important increase in the proportion of patients taking phosphate binders. Several factors may have contributed to the difficulty in controlling serum phosphate levels, including low patient adherence to diet and drug prescriptions and loss of residual renal function. An increase in iPTH over time could also have contributed, due to its action on bone resorption. 
However, iPTH levels remained stable along the first year of dialysis therapy and probably did not influence the results. The prevalence of hyper- and hypocalcemia in our cohort was similar to other reports, with a small predominance of hypercalcemia over hypocalcemia24 , 25. These disorders have also been associated with increased mortality rates, although less frequently in the setting of PD25 , 26. The increase of almost 7% in the number of patients with hypercalcemia is likely related to the use of 3.5 mEq/L calcium PD solutions and to the use of calcium-based phosphate binders. Although available to all patients in the country, the 2.5 mEq/L calcium PD solution is not frequently prescribed. Additionally, our group had previously demonstrated that, in PD patients with PTH < 150 pg/mL, conversion to low-calcium solutions (2.5 mEq/L) appears to be a simple and effective strategy to bring iPTH levels to the range determined by current guidelines5, when compared with 3.5 mEq/L calcium PD solutions27. Despite the increase in the prevalence of hypercalcemia during the observation period of our study, the percentage of patients taking calcium-based phosphate binders also increased. One possible explanation is the bureaucracy involved in obtaining sevelamer hydrochloride from the public health system in some regions of Brazil, where proof of high serum calcium is required before the non-calcium-based phosphate binder can be dispensed. Unfortunately, there are no data on this in the BRAZPD database. Changes in the membrane profile may also have contributed to the greater number of patients with hypercalcemia. Exposure to bio-incompatible PD solutions is a factor that may affect the peritoneal membrane and lead to a progressive increase in transport status; the higher the transport status, the more calcium is absorbed from the peritoneal cavity23. In our study, only 26.7% of patients had iPTH levels within the range suggested by international guidelines10 during the study follow-up. However, the lack of complete information about the use of calcitriol, nutritional forms of vitamin D, and vitamin D analogs limits our ability to determine whether further improvement in reaching clinical targets would have been possible. Our study has some limitations, including those normally related to any observational study, such as lack of longitudinal data on residual renal function, lack of data on peritoneal membrane status, no information about the doses and frequency of the phosphate binders prescribed, missing data on iPTH and ALP, and lack of control of patient adherence to medication and diet. Strengths of our study include the large sample size with excellent external validity, laboratory values of calcium and phosphorus collected monthly, and longitudinal data on the use of phosphate binders. In conclusion, we observed a high prevalence of biochemical disturbances of CKD-MBD markers in this nation-wide PD cohort. Additionally, initiation of PD was not enough to reduce the high prevalence of calcium and phosphorus disturbances, even in a public health system that provides free access to dialysis, low-Ca dialysate, calcitriol, and phosphate binders. Further studies are needed to identify the causes behind the difficulties of PD centers in achieving the currently recommended targets for serum levels of calcium and phosphorus.
Background: Chronic kidney disease-mineral and bone disorders (CKD-MBD) are common in dialysis patients. Definitions of targets for calcium (Ca), phosphorus (P), parathormone (iPTH), and alkaline phosphatase (ALP), and their treatment recommendations, are provided by international guidelines. There are few studies analyzing CKD-MBD in peritoneal dialysis (PD) patients and the impact of guidelines on mineral metabolism control. The aim of our study was to describe the prevalence of biomarkers of CKD-MBD in a large cohort of PD patients in Brazil. Methods: Data from the nation-wide prospective observational cohort BRAZPD II were used. Incident patients were followed between December 2004 and January 2011. According to KDOQI recommendations, the reference range for total Ca was 8.4 to 9.5 mg/dL; for P, 3.5 to 5.5 mg/dL; for iPTH, 150-300 pg/mL; and for ALP, 120 U/L. Results: Mean age was 59.8 ± 16 years, 48% were male, and 43% had diabetes. At baseline, Ca was 8.9 ± 0.9 mg/dL and 48.3% of patients were within the KDOQI target. After 1 year, Ca increased to 9.1 ± 0.9 mg/dL and 50.4% were in the KDOQI preferred range. P at baseline was 5.2 ± 1.6 mg/dL, with 52.8% on target, declining to 4.9 ± 1.5 mg/dL after one year, when 54.7% were on target. Median iPTH at baseline was 238 (P25 110 - P75 426) pg/mL and it remained stable throughout the first year; the proportion of patients within target ranged from 26 to 28.5%. At the end of the study, 80% of patients were on a 3.5 mEq/L Ca dialysate, 66.9% were taking a phosphate binder, and 25% were taking activated vitamin D. Conclusions: We observed a significant prevalence of biochemical disorders related to CKD-MBD in this dialysis population.
null
null
4,910
384
[ 115, 106, 179, 172, 133, 105 ]
10
[ "patients", "calcium", "phosphate", "year", "levels", "dialysis", "study", "serum", "mg", "mg dl" ]
[ "levels phosphorus dialysis", "patients prescribed phosphate", "ckd mbd dialysis", "calcium peritoneal dialysis", "kidney disease mineral" ]
null
null
[CONTENT] Phosphorus | Calcium | Renal Insufficiency, Chronic | Fósforo | Cálcio | Insuficiência Renal Crônica [SUMMARY]
[CONTENT] Phosphorus | Calcium | Renal Insufficiency, Chronic | Fósforo | Cálcio | Insuficiência Renal Crônica [SUMMARY]
[CONTENT] Phosphorus | Calcium | Renal Insufficiency, Chronic | Fósforo | Cálcio | Insuficiência Renal Crônica [SUMMARY]
null
[CONTENT] Phosphorus | Calcium | Renal Insufficiency, Chronic | Fósforo | Cálcio | Insuficiência Renal Crônica [SUMMARY]
null
[CONTENT] Adult | Aged | Calcium | Chronic Kidney Disease-Mineral and Bone Disorder | Female | Goals | Humans | Male | Middle Aged | Minerals | Parathyroid Hormone | Peritoneal Dialysis | Prevalence | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
[CONTENT] Adult | Aged | Calcium | Chronic Kidney Disease-Mineral and Bone Disorder | Female | Goals | Humans | Male | Middle Aged | Minerals | Parathyroid Hormone | Peritoneal Dialysis | Prevalence | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
[CONTENT] Adult | Aged | Calcium | Chronic Kidney Disease-Mineral and Bone Disorder | Female | Goals | Humans | Male | Middle Aged | Minerals | Parathyroid Hormone | Peritoneal Dialysis | Prevalence | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
null
[CONTENT] Adult | Aged | Calcium | Chronic Kidney Disease-Mineral and Bone Disorder | Female | Goals | Humans | Male | Middle Aged | Minerals | Parathyroid Hormone | Peritoneal Dialysis | Prevalence | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
null
[CONTENT] levels phosphorus dialysis | patients prescribed phosphate | ckd mbd dialysis | calcium peritoneal dialysis | kidney disease mineral [SUMMARY]
[CONTENT] levels phosphorus dialysis | patients prescribed phosphate | ckd mbd dialysis | calcium peritoneal dialysis | kidney disease mineral [SUMMARY]
[CONTENT] levels phosphorus dialysis | patients prescribed phosphate | ckd mbd dialysis | calcium peritoneal dialysis | kidney disease mineral [SUMMARY]
null
[CONTENT] levels phosphorus dialysis | patients prescribed phosphate | ckd mbd dialysis | calcium peritoneal dialysis | kidney disease mineral [SUMMARY]
null
[CONTENT] patients | calcium | phosphate | year | levels | dialysis | study | serum | mg | mg dl [SUMMARY]
[CONTENT] patients | calcium | phosphate | year | levels | dialysis | study | serum | mg | mg dl [SUMMARY]
[CONTENT] patients | calcium | phosphate | year | levels | dialysis | study | serum | mg | mg dl [SUMMARY]
null
[CONTENT] patients | calcium | phosphate | year | levels | dialysis | study | serum | mg | mg dl [SUMMARY]
null
[CONTENT] ckd | guidelines | patients | dialysis patients | kidney | levels | guidelines suggest | suggest | bone | mineral [SUMMARY]
[CONTENT] variables | reported | clinical | study | test | categorical | disease | performed | ii | continuous [SUMMARY]
[CONTENT] phosphate | year | calcium | mg dl | mg | dl | figure | baseline | patients | year dialysis [SUMMARY]
null
[CONTENT] calcium | phosphate | patients | year | levels | mg | dialysis | mg dl | dl | study [SUMMARY]
null
[CONTENT] ||| ALP ||| ||| PD | Brazil [SUMMARY]
[CONTENT] BRAZPD II ||| between December 2004 and January 2011 ||| KDOQI | Ca | 8.4 to | 9.5 mg/dL | P | 3.5 | 5.5 mg | 150-300 | ALP | 120 [SUMMARY]
[CONTENT] 59.8 | 16 years | 48% | 43% ||| Ca | 8.9 | 0.9 | 48.3% | KODQI ||| 1 year | 9.1 | 0.9 | 50.4% | KDOQI ||| 5.2 ± | 52.8% | 4.9 ± | 1.5 | one year | 54.7% ||| 238 | P25 | 110 | 426 | the first year | 26 | 28.5% ||| 80% | 3.5 | 66.9% | 25% | D. [SUMMARY]
null
[CONTENT] ||| ALP ||| ||| PD | Brazil ||| BRAZPD II ||| between December 2004 and January 2011 ||| KDOQI | Ca | 8.4 to | 9.5 mg/dL | P | 3.5 | 5.5 mg | 150-300 | ALP | 120 | 59.8 | 16 years | 48% | 43% ||| Ca | 8.9 | 0.9 | 48.3% | KODQI ||| 1 year | 9.1 | 0.9 | 50.4% | KDOQI ||| 5.2 ± | 52.8% | 4.9 ± | 1.5 | one year | 54.7% ||| 238 | P25 | 110 | 426 | the first year | 26 | 28.5% ||| 80% | 3.5 | 66.9% | 25% | D. [SUMMARY]
null
Clinicopathological significance of non-small cell lung cancer with high prevalence of Oct-4 tumor cells.
22300949
Expression of the stem cell marker octamer 4 (Oct-4) in various neoplasms has been previously reported, but very little is currently known about the potential function of Oct-4 in this setting. The purpose of this study was to assess the prognostic value of Oct-4 expression after surgery in primary non-small cell lung cancer (NSCLC) and investigate its possible molecular mechanism.
BACKGROUND
We measured Oct-4 expression in 113 NSCLC tissue samples and three cell lines by immunohistochemical staining and RT-PCR. The association of Oct-4 expression with demographic characteristics, proliferative marker Ki67, microvessel density (MVD), and expression of vascular endothelial growth factor (VEGF) were assessed.
METHODS
Oct-4 expression was detected in 90.3% of samples and was positively correlated with poor differentiation and adenocarcinoma histology, and Oct-4 mRNA was found in each of the cell lines examined. Overexpression of Oct-4 had a strong association with cell proliferation in all cases and in the MVD-negative and VEGF-negative subsets. A Kaplan-Meier analysis showed that overexpression of Oct-4 was associated with shorter overall survival in all cases and in the adenocarcinoma, squamous cell carcinoma, MVD-negative, and VEGF-negative subsets. A multivariate analysis demonstrated that the Oct-4 level in tumor tissue was an independent prognostic factor for overall survival in all cases and in the MVD-negative and VEGF-negative subsets.
RESULTS
Our findings suggest that, even in the context of vulnerable MVD status and VEGF expression, overexpression of Oct-4 in tumor tissue represents a prognostic factor in primary NSCLC patients. Oct-4 may maintain NSCLC cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation.
CONCLUSION
[ "Adenocarcinoma", "Adult", "Aged", "Carcinoma, Non-Small-Cell Lung", "Carcinoma, Squamous Cell", "Cell Line, Tumor", "Cell Proliferation", "Female", "Gene Expression Regulation, Neoplastic", "Humans", "Kaplan-Meier Estimate", "Ki-67 Antigen", "Male", "Middle Aged", "Neoplasm Staging", "Neovascularization, Pathologic", "Octamer Transcription Factor-3", "Prognosis", "Vascular Endothelial Growth Factor A" ]
3287152
Background
Despite recent progress in treatment, lung cancer remains the leading cause of cancer deaths in both women and men throughout the world [1]. Not all patients with lung cancer benefit from routine surgery and chemotherapy. This is especially true for those with primary non-small cell lung cancer (NSCLC), the most common malignancy in the thoracic field, where such therapies have been tried with limited efficacy [2]. To improve patient survival rate, researchers have increasingly focused on understanding specific characteristics of NSCLCs as a means to elucidate the mechanism of tumor development and develop possible targeted therapeutic approaches. Octamer 4 (Oct-4), a member of the POU-domain transcription factor family, is normally expressed in both adult and embryonic stem cells [3,4]. Recent reports have demonstrated that Oct-4 is not only involved in controlling the maintenance of stem cell pluripotency, but is also specifically responsible for the unlimited proliferative potential of stem cells, suggesting that Oct-4 functions as a master switch during differentiation of human somatic cell [5-7]. Interestingly, Oct-4 is also re-expressed in germ cell tumors [8], breast cancer [9], bladder cancer [10], prostate cancer and hepatomas [11,12], but very little is known about its potential function in malignant disease [13]. Moreover, overexpression of Oct-4 increases the malignant potential of tumors, and downregulation of Oct-4 in tumor cells inhibits tumor growth, suggesting that Oct-4 might play a key role in maintaining the survival of cancer cells [13,14]. Although its asymmetric expression may indicate that Oct-4 is a suitable target for therapeutic intervention in adenocarcinoma and bronchioloalveolar carcinoma [15], the role of Oct-4 expression in primary NSCLC has remained ill defined. To address this potential role, we assessed Oct-4 expression in cancer specimens from 113 patients with primary NSCLC by immunohistochemical staining. We further investigated the association of Oct-4 expression in NSCLC tumor cells with some important clinical pathological indices. In addition, we examined the involvement of Oct-4 in tumor cell proliferation and tumor-induced angiogenesis in NSCLC by relating Oct-4 expression with microvessel density (MVD), and expression of Ki-67 and vascular endothelial growth factor (VEGF), proliferative and the vascular markers, respectively. On the basis of previous reports that a subset of NSCLC tumors do not induce angiogenesis but instead co-opt the normal vasculature for further growth [16,17], we also evaluated associations of Oct-4 expression with tumor cell proliferation and prognosis in subsets of patients with weak VEGF-mediated angiogenesis (disregarding the nonangiogenic subsets of NSCLC in the analysis, which would tend to obscure the role of Oct-4 expression in primary NSCLC). Our results provide the first demonstration that expression of the stem cell marker Oct-4 maintains tumor cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation. Moreover, even in the context of vulnerable MVD status and VEGF expression, Oct-4 plays an important role in tumor cell proliferation and contributes to poor prognosis in human NSCLC.
Methods
Patients and tissue specimens: Cancer tissue and corresponding adjacent normal tissue (within 1-2 cm of the tumor edge) from 113 primary NSCLC cases were randomly selected from our tissue database. Patients had been treated in the Department of Thoracic Surgery of the First Affiliated Hospital of Sun Yat-sen University from Jan 2003 to July 2004. None of the patients had received neoadjuvant chemotherapy or radiotherapy. Clinical information was obtained by reviewing the perioperative medical records, or by telephone or written correspondence. Cases were staged according to the tumor-node-metastases (TNM) classification of the International Union Against Cancer, revised in 2002 [18]. The study was approved by the Medical Ethical Committee of the First Affiliated Hospital, Sun Yat-sen University. Paraffin-embedded specimens of each case were sectioned and fixed on siliconized slides. Histological typing was determined according to World Health Organization classifications [19]. Tumor size and metastatic lymph node number and locations were obtained from pathology reports. Cell lines: The primary NSCLC cell lines, A549, H460 and H1299, obtained from the Cell Bank of the Chinese Academy of Science (Shanghai, China), were cultured in RPMI 1640 medium (Gibco/Invitrogen, Camarillo, CA, USA) supplemented with 10% fetal bovine serum (Hyclone, Logan, UT, USA). Immunohistochemical staining and evaluation: The primary antibodies used in this study were as follows: anti-Oct-4 (sc-5279, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA), anti-Ki-67 (ab92742, dilution 1:200; Abcam, Cambridge, UK), and anti-VEGF (sc-7269, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Immunohistochemical staining was carried out using the streptavidin-peroxidase method. Cells with nuclear staining for Oct-4 and Ki-67, and cytoplasmic staining for VEGF, were scored as positive for the respective marker. The intensity of Oct-4, Ki-67, and VEGF staining was scored on a 0-to-3 scale: 0, negative; 1, light; 2, moderate; and 3, intense. 
The percentage of the tumor area stained for each marker at each intensity was calculated by dividing the number of tumor cells positive for the marker at each intensity by the total number of tumor cells. Areas that were negative were given a value of 0. A total of 10-12 discrete foci in each section were examined microscopically (400× magnification) to generate an average staining intensity and percentage of the surface area covered. The final histoscore was calculated using the formula: [(1 × percentage of weakly positive tumor cells) + (2 × percentage of moderately positive tumor cells) + (3 × percentage of intensely positive tumor cells)]. The median values of Oct-4, Ki-67, and VEGF histoscores were used to classify samples as positive (above the median) or negative (below the median) for each marker. Evaluation of MVD: Immunohistochemical staining for CD34 (MS-363, dilution 1:50; Lab Vision, Fremont, CA; Clone QBEnd/10) was analyzed. After identifying the three most vascularized areas within a tumor ("hot spots") at low magnification (×40), vessels in three representative fields in each of these areas were counted at high magnification (×400; 0.152 mm2; 0.44 mm diameter). The high-magnification fields were then marked for subsequent image cell counting analysis. Single immunoreactive endothelial cells or endothelial cell clusters separated from other microvessels were counted as individual microvessels. Endothelial staining in large vessels with tunica media and nonspecific staining of non-endothelial structures were excluded from microvessel counts. The mean visual microvessel density for CD34 was calculated as the average of six counts (three hot spots and three microscopic fields). Microvessel counts greater than the median were taken as MVD-positive, and microvessel counts lower than the median were taken as MVD-negative. 
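As an illustration only (all numbers below are invented, not study measurements), the histoscore formula given above and the subsequent median split into marker-positive and marker-negative cases can be expressed as:

from statistics import median

def histoscore(pct_weak, pct_moderate, pct_intense):
    # Weighted sum of staining percentages (0-100) across all tumor cells,
    # following the formula stated in the text.
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_intense

# Hypothetical (pct_weak, pct_moderate, pct_intense) triplets for four tumors.
samples = {
    "case_01": (10.0, 5.0, 0.0),
    "case_02": (20.0, 15.0, 5.0),
    "case_03": (5.0, 2.0, 1.0),
    "case_04": (30.0, 20.0, 10.0),
}

scores = {case: histoscore(*p) for case, p in samples.items()}
cutoff = median(scores.values())  # the same median split is applied to MVD counts

for case, score in sorted(scores.items()):
    label = "positive" if score > cutoff else "negative"
    print(f"{case}: histoscore = {score:.1f} -> {label}")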
Reverse transcription-polymerase chain reaction (RT-PCR): Total RNA was extracted from cultured cells using the TRIzol reagent (Invitrogen, Grand Island, NY, USA), according to the manufacturer's instructions. Extracted RNA was treated with DNase (Fermentas, Vilnius, Lithuania) to remove DNA contamination. For cDNA synthesis, 1 μg of total RNA was reverse transcribed using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). PCR was performed with ExTaq (TaKaRa, Japan). The primer sequences and sizes of amplified products were as follows: Oct-4, 5'-GAC AGG GGG AGG GGA GGA GCT AGG-3' and 5'-CTT CCC TCC AAC CAG TTG CCC CAA AC-3' (142 bp); β-actin (internal control), 5'-GTG GGG CGC CCC AGG CAC CA-3' and 5'-CTC CTT AAT GTC ACG CAC GAT TTC-3' (540 bp). Statistical analysis: All calculations were done using SPSS V.14.0 software (Chicago, IL, USA). Spearman's coefficient of correlation, Chi-squared tests, and Mann-Whitney tests were used as appropriate. A multivariate model was used to evaluate statistical associations among variables. A Cox regression model was used to relate potential prognostic factors with survival.
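The survival analysis itself was run in SPSS; purely as a stand-in sketch (synthetic data, and the lifelines Python package instead of SPSS), a median-split Kaplan-Meier comparison and a Cox model of the kind described above could look like:

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Synthetic cohort: follow-up time in months, death indicator, and two
# illustrative covariates (Oct-4 status from a median split, differentiation).
rng = np.random.default_rng(1)
n = 113
df = pd.DataFrame({
    "months": np.round(rng.exponential(scale=22.0, size=n) + 1.0, 1),
    "died": rng.integers(0, 2, size=n),
    "oct4_positive": rng.integers(0, 2, size=n),
    "poor_differentiation": rng.integers(0, 2, size=n),
})

# Kaplan-Meier estimate within each Oct-4 group.
kmf = KaplanMeierFitter()
for value, group in df.groupby("oct4_positive"):
    kmf.fit(group["months"], event_observed=group["died"], label=f"Oct-4 positive = {value}")
    print(f"Oct-4 positive = {value}: median survival =", kmf.median_survival_time_)

# Cox proportional hazards model including both covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()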
null
null
Conclusion
Grant support: This work was supported by grants from the National Basic Research Program of China (973 Program, No. 2008CB517406), the National Natural Science Foundation of China (No. 30671023, 30971675, 30900729), and the Key Scientific and Technological Projects of Guangdong Province (No. 2007A032100003).
[ "Background", "Patients and tissue specimens", "Cell lines", "Immunohistochemical staining and evaluation", "Evaluation of MVD", "Reverse transcription-polymerase chain reaction (RT-PCR)", "Statistical analysis", "Results", "Basic clinical information and tumor characteristics", "Association of Oct-4 expression with clinicopathological characteristics of NSCLC patients", "Oct-4 expression in NSCLC cell lines", "Association of Oct-4 expression with malignant proliferation according to differences in VEGF-mediated angiogenesis", "Association of Oct-4 expression with survival in all cases and in subsets of cases: univariate and multivariate analyses", "Discussion", "Conclusion" ]
[ "Despite recent progress in treatment, lung cancer remains the leading cause of cancer deaths in both women and men throughout the world [1]. Not all patients with lung cancer benefit from routine surgery and chemotherapy. This is especially true for those with primary non-small cell lung cancer (NSCLC), the most common malignancy in the thoracic field, where such therapies have been tried with limited efficacy [2]. To improve patient survival rate, researchers have increasingly focused on understanding specific characteristics of NSCLCs as a means to elucidate the mechanism of tumor development and develop possible targeted therapeutic approaches.\nOctamer 4 (Oct-4), a member of the POU-domain transcription factor family, is normally expressed in both adult and embryonic stem cells [3,4]. Recent reports have demonstrated that Oct-4 is not only involved in controlling the maintenance of stem cell pluripotency, but is also specifically responsible for the unlimited proliferative potential of stem cells, suggesting that Oct-4 functions as a master switch during differentiation of human somatic cell [5-7]. Interestingly, Oct-4 is also re-expressed in germ cell tumors [8], breast cancer [9], bladder cancer [10], prostate cancer and hepatomas [11,12], but very little is known about its potential function in malignant disease [13]. Moreover, overexpression of Oct-4 increases the malignant potential of tumors, and downregulation of Oct-4 in tumor cells inhibits tumor growth, suggesting that Oct-4 might play a key role in maintaining the survival of cancer cells [13,14]. Although its asymmetric expression may indicate that Oct-4 is a suitable target for therapeutic intervention in adenocarcinoma and bronchioloalveolar carcinoma [15], the role of Oct-4 expression in primary NSCLC has remained ill defined.\nTo address this potential role, we assessed Oct-4 expression in cancer specimens from 113 patients with primary NSCLC by immunohistochemical staining. We further investigated the association of Oct-4 expression in NSCLC tumor cells with some important clinical pathological indices. In addition, we examined the involvement of Oct-4 in tumor cell proliferation and tumor-induced angiogenesis in NSCLC by relating Oct-4 expression with microvessel density (MVD), and expression of Ki-67 and vascular endothelial growth factor (VEGF), proliferative and the vascular markers, respectively. On the basis of previous reports that a subset of NSCLC tumors do not induce angiogenesis but instead co-opt the normal vasculature for further growth [16,17], we also evaluated associations of Oct-4 expression with tumor cell proliferation and prognosis in subsets of patients with weak VEGF-mediated angiogenesis (disregarding the nonangiogenic subsets of NSCLC in the analysis, which would tend to obscure the role of Oct-4 expression in primary NSCLC).\nOur results provide the first demonstration that expression of the stem cell marker Oct-4 maintains tumor cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation. Moreover, even in the context of vulnerable MVD status and VEGF expression, Oct-4 plays an important role in tumor cell proliferation and contributes to poor prognosis in human NSCLC.", "Cancer tissue and corresponding adjacent normal tissue (within 1-2 cm of the tumor edge) from 113 primary NSCLC cases were randomly selected from our tissue database. 
Patients had been treated in the Department of Thoracic Surgery of the First Affiliated Hospital of Sun Yat-sen University from Jan 2003 to July 2004. None of the patients had received neoadjuvant chemotherapy or radiotherapy. Clinical information was obtained by reviewing the perioperative medical records, or by telephone or written correspondence. Cases were staged according to the tumor-node-metastases (TNM) classification of the International Union Against Cancer, revised in 2002 [18]. The study was approved by the Medical Ethical Committee of the First Affiliated Hospital, Sun Yat-sen University. Paraffin-embedded specimens of each case were sectioned and fixed on siliconized slides. Histological typing was determined according to World Health Organization classifications [19]. Tumor size and metastatic lymph node number and locations were obtained from pathology reports.", "The primary NSCLC cell lines, A549, H460 and H1299, obtained from the Cell Bank of the Chinese Academy of Science (Shanghai, China), were cultured in RPMI 1640 medium (Gibco/Invitrogen, Camarillo, CA, USA) supplemented with 10% fetal bovine serum (Hyclone, Logan, UT, USA).", "The primary antibodies used in this study were as follow: anti-Oct-4 (sc-5279, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA), anti-Ki-67 (ab92742, dilution 1:200; Abcam, Cambridge, UK), and anti-VEGF (sc-7269, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Immunohistochemical staining was carried out using the streptavidin-peroxidase method. Cells with nuclear staining for Oct-4 and Ki-67, and cytoplasmic staining for VEGF, were scored as positive for the respective marker. The intensity of Oct-4, Ki-67, and VEGF staining was scored on a 0-to-3 scale: 0, negative; 1, light; 2, moderate; and 3, intense. The percentage of the tumor area stained for each marker at each intensity was calculated by dividing the number of tumor cells positive for the marker at each intensity by the total number of tumor cells. Areas that were negative were given a value of 0. A total of 10-12 discrete foci in each section were examined microscopically (400× magnification) to generate an average staining intensity and percentage of the surface area covered. The final histoscore was calculated using the formula: [(1 × percentage of weakly positive tumor cells) + (2 × percentage of moderately positive tumor cells) + (3 × percentage of intense positive tumor cells)]. The median values of Oct-4, Ki-67, and VEGF histoscores were used to classify samples as positive (above the median) or negative (below the median) for each marker.", "Immunohistochemical staining for CD34 (MS-363, dilution 1:50; Lab Vision, Fremont, CA; Clone QBEnd/10) was analyzed. After identifying the three most vascularized areas within a tumor (\"hot spots\") at low magnification (×40), vessels in three representative fields in each of these areas were counted at high magnification (×400; 0.152 mm2; 0.44 mm diameter). The high-magnification fields were then marked for subsequent image cell counting analysis. Single immunoreactive endothelial cells or endothelial cell clusters separated from other microvessels were counted as individual microvessels. Endothelial staining in large vessels with tunica media and nonspecific staining of non-endothelial structures were excluded from microvessel counts. The mean visual microvessel density for CD34 was calculated as the average of six counts (three hot spots and three microscopic fields). 
Microvessel counts greater than the median counts were taken as MVD-positive, and microvessel counts lower than the median were taken as MVD-negative.", "Total RNA was extracted from cultured cells using the TRIzol reagent (Invitrogen, Grand Island, NY, USA), according to the manufacturer's instructions. Extracted RNA was treated with DNase (Fermentas, Vilnius, Lithuania) to remove DNA contamination. For cDNA synthesis, 1 μg of total RNA was reverse transcribed using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). PCR was performed with ExTaq (TaKaRa, Japan). The primer sequences and sizes of amplified products were as follows: Oct-4, 5'-GAC AGG GGG AGG GGA GGA GCT AGG-3' and 5'-CTT CCC TCC AAC CAG TTG CCC CAA AC-3' (142 bp); β-actin (internal control), 5'-GTG GGG CGC CCC AGG CAC CA-3' and 5'-CTC CTT AAT GTC ACG CAC GAT TTC-3' (540 bp).", "All calculations were done using SPSS V.14.0 software (Chicago, IL, USA). Spearman's coefficient of correlation, Chi-squared tests, and Mann-Whitney tests were used as appropriate. A multivariate model was used to evaluate statistical associations among variables. A Cox regression model was used to relate potential prognostic factors with survival.", "Basic clinical information and tumor characteristics: A total of 113 NSCLC patients (82 male and 31 female) were enrolled in the study; the mean age of study participants was 57.2 ± 10.0 years (range, 35-78 years). There were 58 cases of lung adenocarcinoma, 52 cases of squamous cell carcinoma, and three cases of large cell carcinoma. Twenty-seven cases were well differentiated, 34 cases were moderately differentiated, and 52 cases were poorly differentiated. The cases were classified as stage I (n = 30), stage II (n = 48), stage III (n = 18), and stage IV (n = 17). Of the 113 cases, 67 had lymph node metastasis, according to surgery and pathology reports. Analyses of patient data after a 5-year follow-up showed that 77 patients had died; median survival was 21.0 months. As expected, median survival was longer for stage I-II patients (22.0 mo) than stage III-IV patients (13.0 mo; P = 0.001). There were no significant differences in survival according to gender, smoking history, histology, or grading. The clinical characteristics of study samples are shown in Table 1.\nAssociation of Oct-4 Expression with clinical features in NSCLC\naPatients were divided according to the median values of immunohistochemical histoscores\nbPatients were divided according to median age\ncBronchioloalveolar carcinoma was included in well differentiated\ndLarge cell carcinoma was included in poorly differentiated
Association of Oct-4 expression with clinicopathological characteristics of NSCLC patients: Immunohistochemical analyses demonstrated that Oct-4 was expressed in 90.3% of samples (102/113 cases), with clear staining observed mostly in the nuclei of tumor cells; alveolar and bronchial epithelial cells in tumor-adjacent tissues were negative for Oct-4 staining (Figure 1). The histoscores of Oct-4 expression were variable among individual tumor samples. The mean Oct-4 histoscore was 31.32 ± 5.99 and the median histoscore was 25.80; this latter value was selected to categorize patients into Oct-4-positive (above the median) and -negative (below the median) groups. Among the 56 Oct-4-negative cases, 11 samples exhibited no Oct-4 staining. The associations of Oct-4-positive and -negative status with various clinical and pathological characteristics of NSCLC are shown in Table 1. Regarding the histoscores of Oct-4 staining, there was a prominent discrepancy between adenocarcinoma and squamous cell carcinoma (39.40 ± 3.59 and 21.64 ± 2.47, p = 0.008). 
 Oct-4 expression in NSCLC cell lines To better understand the expression status of Oct-4 in NSCLC, we examined the expression of Oct-4 in the NSCLC cell lines A549, H460, and H1299. Oct-4 mRNA was detected in each of these cell lines (Figure 1G).\n Association of Oct-4 expression with malignant proliferation according to differences in VEGF-mediated angiogenesis Intratumoral Ki-67 expression, a marker of malignant proliferation, varied according to Oct-4 phenotype in the population under study, with high Ki-67 expression showing a significant association with positive Oct-4 staining (Table 1). Quantification of staining revealed that this association differed markedly depending on Oct-4 histoscores (Figure 2A, p = 0.001) and showed that these two markers were positively correlated (Figure 2B). In MVD-negative and VEGF-negative subsets, intratumoral Ki-67 expression varied significantly according to Oct-4 phenotype (Figure 2A); Ki-67 (Figure 2C) and Oct-4 (Figure 2E) expression were also positively correlated in these subsets. These results suggest a prominent association of Oct-4 expression with malignant proliferation in NSCLC, especially in cases with weak VEGF-mediated angiogenesis.\nKi-67 expression histoscores differed significantly (ANOVA) according to Oct-4 status in all cases, and in subsets of MVD-negative, MVD-positive, VEGF-negative, and VEGF-positive cases (A). All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. The association of Oct-4 staining with Ki-67 expression was positive in all cases (B), and in subsets of MVD-negative (C), MVD-positive (D), VEGF-negative (E), and VEGF-positive (F) cases. Statistical differences were calculated using Pearson and Spearman correlation analysis.
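The Pearson and Spearman correlations reported above can be reproduced in outline with a short script; the following sketch uses scipy and assumes a hypothetical per-patient table with oct4_histoscore, ki67_histoscore, and mvd_count columns (none of these names come from the original analysis, which was performed in SPSS).

```python
# Illustrative sketch of the correlation analysis described above: Pearson and
# Spearman correlations between Oct-4 and Ki-67 histoscores, computed for all
# cases and within MVD-defined subsets. Column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.read_csv("nsclc_cohort.csv")
df["mvd_positive"] = (df["mvd_count"] > df["mvd_count"].median()).astype(int)

def correlate(frame: pd.DataFrame) -> dict:
    r, p_r = pearsonr(frame["oct4_histoscore"], frame["ki67_histoscore"])
    rho, p_rho = spearmanr(frame["oct4_histoscore"], frame["ki67_histoscore"])
    return {"pearson_r": r, "pearson_p": p_r, "spearman_rho": rho, "spearman_p": p_rho}

print("all cases:    ", correlate(df))
print("MVD-negative: ", correlate(df[df["mvd_positive"] == 0]))
print("MVD-positive: ", correlate(df[df["mvd_positive"] == 1]))
```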
 Association of Oct-4 expression with survival in all cases and in subsets of cases: univariate and multivariate analyses The strength of associations between each individual predictor and overall survival was shown by univariate and multivariate analyses (Table 2). Oct-4 expression in tumor tissue and differentiation of tumor cells were strongly associated with cancer-associated death. Notably, an Oct-4 expression level less than the median histoscore (25.80) was associated with improved survival (HR, 1.011), whereas elevated Oct-4 expression was associated with shorter cumulative survival (p = 0.009). A Kaplan-Meier plot showed a prominent difference in survival estimates for patients with high versus low Oct-4 expression in tumor tissue; this difference corresponded to a median survival of 18.2 ± 6.0 months for patients with high Oct-4 expression compared with a median survival of more than 24.7 ± 9.1 months for patients with low Oct-4 expression (Figure 3A). More importantly, significant differences were also found in the adenocarcinoma subset (17.7 ± 9.1 vs. 27.3 ± 9.6 months; Figure 3B) and the squamous cell carcinoma subset (20.7 ± 9.5 vs. 23.2 ± 10.8 months; Figure 3C). When all predictors were included in a Cox model, Oct-4 expression retained its prognostic significance for overall survival. Hence, a low level of Oct-4 expression in tumor tissue predicted improved survival in NSCLC patients.\nUnivariate and multivariate analyses of individual variables for correlations with overall survival: Cox proportional hazards model\nVariable 1, Oct-4 expression was an independent prognostic factor, adjusted by histological differentiation, in all cases\nVariable 2, Oct-4 expression was an independent factor in MVD-negative cases\nVariable 3, Oct-4 expression was an independent factor in VEGF-negative cases\nAbbreviations: HR, hazard ratio; CI, confidence interval\nCumulative Kaplan-Meier survival curves based on the median values of Oct-4 immunohistochemical histoscores in NSCLC tissues are shown for all cases (A), and for adenocarcinoma (B), squamous cell carcinoma (C), MVD-negative (D), MVD-positive (E), VEGF-negative (F), and VEGF-positive (G) cases. All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. Oct-4-positivity was associated with decreased overall survival in all subsets.
Statistical differences were calculated using log-rank comparisons.\nTo assess the contribution of Oct-4 to overall survival in patients in whom VEGF-mediated angiogenesis was weak, we also performed univariate and multivariate analyses in the MVD-negative and VEGF-negative subsets (Table 2). Notably, an Oct-4 expression level less than the median histoscore was associated with improved survival, whereas elevated Oct-4 expression was associated with shorter cumulative survival in both the MVD-negative subset (HR, 1.024, p = 0.005) and the VEGF-negative subset (HR, 1.011, p = 0.042). Further, a Kaplan-Meier plot showed a prominent difference in survival estimates for patients in the MVD-negative subset, where the median survival for patients with high Oct-4 expression was 18.5 ± 7.6 months compared with a median survival of more than 24.3 ± 8.3 months for patients with low Oct-4 expression (Figure 3D). Similar differences were found for patients in the VEGF-negative subset; here the median survival for patients with high Oct-4 expression was 17.5 ± 6.1 months compared with a median survival of more than 21.9 ± 7.5 months for patients with low Oct-4 expression (Figure 3F). Hence, Oct-4 expression retained its prognostic significance for overall survival in NSCLC patients with weak VEGF-mediated angiogenesis.
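For the subset analyses above, the Kaplan-Meier estimates and log-rank comparisons could be sketched as follows; again this is an illustrative Python/lifelines version of the approach rather than the authors' SPSS workflow, and the column names are assumptions carried over from the earlier sketches.

```python
# Illustrative sketch of the subset analysis above: Kaplan-Meier curves and a
# log-rank comparison for Oct-4-high vs. Oct-4-low cases within the MVD-negative
# subset. File and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("nsclc_cohort.csv")
df["oct4_positive"] = (df["oct4_histoscore"] > df["oct4_histoscore"].median()).astype(int)
df["mvd_positive"] = (df["mvd_count"] > df["mvd_count"].median()).astype(int)

subset = df[df["mvd_positive"] == 0]              # MVD-negative cases
high = subset[subset["oct4_positive"] == 1]
low = subset[subset["oct4_positive"] == 0]

# Kaplan-Meier survival curves for the two Oct-4 groups
kmf = KaplanMeierFitter()
ax = kmf.fit(high["survival_months"], high["died"], label="Oct-4 high").plot_survival_function()
kmf.fit(low["survival_months"], low["died"], label="Oct-4 low").plot_survival_function(ax=ax)

# Log-rank test between the groups
result = logrank_test(high["survival_months"], low["survival_months"],
                      event_observed_A=high["died"], event_observed_B=low["died"])
print(result.p_value)
```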
Although Oct-4 has been detected in various carcinomas, including breast cancer [9], bladder cancer [10], prostate cancer [11] and lung adenocarcinoma [20], the precise role of this stem cell marker in maintaining the survival of cancer cells is unclear.
Sustained expression of Oct-4 in epithelial tissues has been shown to lead to dysplastic changes through inhibition of cellular differentiation, similar to its action in some progenitor cells, suggesting that Oct-4 may play an important role in the genesis of tumors [21]. However, the mechanisms by which Oct-4 acts during tumor progression have remained poorly understood. Accordingly, we examined the behavior of Oct-4 in primary NSCLC tissues, focusing on the associations of Oct-4 expression with clinicopathological features and markers of tumor-induced angiogenesis. Important in this context is the observation that a subset of NSCLC tumors does not induce angiogenesis but instead co-opts the normal vasculature for further growth; unless such nonangiogenic cases are analyzed separately, they tend to obscure the association of Oct-4 with tumor angiogenesis.\nOn the basis of the previous finding that Oct-4 may be a major contributor to the maintenance of self-renewal in embryonic stem cells, we investigated the association of Oct-4 expression with self-renewal of NSCLC cells. The immunohistochemical analyses presented here showed clear Oct-4 staining in most sections, and RT-PCR showed Oct-4 mRNA in all NSCLC cell lines. Our data extend the previous report of Oct-4 overexpression in lung adenocarcinoma [20], providing the first demonstration that Oct-4 is also present in lung squamous cell carcinoma specimens, with an apparent difference in the degree of expression among the sections analyzed. One possible explanation for these findings is that the genesis of lung adenocarcinoma and squamous cell carcinoma may be different: the former arises from mucous glands or the cells of the bronchoalveolar duct junction, whereas the latter grows most commonly in or around major bronchi. Further studies designed to address the relationship between Oct-4 expression in endothelial precursors and the sites of origin of adenocarcinoma and squamous cell carcinoma are required to confirm this. Our data also showed that the degree of immunohistochemical staining was positively correlated with poor differentiation of tumor cells and with Ki-67 expression; this latter marker provides an opportunity to analyze the proliferative cell fraction in preserved tumor specimens. High levels of Oct-4 have been shown to increase the malignant potential of tumors, whereas inactivation of Oct-4 induces a regression of the malignant component [22]; moreover, knockdown of Oct-4 expression in lung cancer cells has been shown to facilitate differentiation of CD133-positive cells into CD133-negative cells [23]. These findings, taken together with our data, indicate that overexpression of Oct-4 in NSCLC tissues may maintain the poorly differentiated state by contributing to tumor cell proliferation. On the other hand, down-regulation of Oct-4 expression has been shown to induce apoptosis of tumor-initiating-cell-like cells through an Oct-4/Tcl1/Akt1 pathway, implying that Oct-4 might maintain the survival of tumor-initiating cells, at least in part, by inhibiting apoptosis [13]. Whether an Oct-4-dependent pathway modulates apoptosis in clinical NSCLC samples or NSCLC cell lines has not yet been tested.\nPrevious reports have indicated that tumor-induced angiogenesis is important in maintaining the poorly differentiated state and promoting metastasis in NSCLC [23,24].
In our study, we observed an association of Oct-4 expression in NSCLC specimens with some features of tumor-induced angiogenesis, but the investigation revealed no prominent linkage between Oct-4 expression and neovascularization (defined by CD34 and VEGF-A expression). However, Passalidou and Pezzella have previously described a subset of NSCLC without morphological evidence of neo-angiogenesis. In these tumors, alveoli are filled with neoplastic cells and the only vessels present appeared to belong to the trapped alveolar septa; moreover, tumors with normal vessels and no neo-angiogenesis seemed resistant to some anti-angiogenic therapies [16,17]. In this context, we observed an association of Oct-4 expression with tumor cell proliferation in patients with weak VEGF-mediated angiogenesis, including MVD-negative and VEGF-negative subsets, indicating that Oct-4 still plays an important role in cell proliferation in NSCLC tumors, even those with weak MVD or VEGF status. Whether Oct-4 expression contributes to resistance to anti-angiogenic therapy thus warrants additional research attention.\nAlthough recent reports have also shown that Oct-4 is re-expressed in different human carcinomas, implicating Oct-4 as a potential diagnostic marker in malignancy [25,26], whether Oct-4 expression can be used as a diagnostic tool to monitor the clinical prognosis of NSCLC patients has not been previously substantiated. An analysis of our follow-up data designed to definitively assess the effect of Oct-4 immunohistochemical expression on the prognosis of NSCLC patients showed that the post-operative survival duration of patients with high Oct-4 expression was notably shorter than that of patients with low expression. These results indicate that overexpression of Oct-4 has a detrimental effect on prognosis, and further demonstrates that Oct-4 expression may be correlated with the malignant behavior of tumors during NSCLC progression. A combined genomic analysis of the Oct-4/SOX2/NANOG pathway has recently demonstrated high prognostic accuracy in studies of patients with multiple tumor types [27]. Similarly, multivariate analyses of the data presented here demonstrated that Oct-4 expression is an independent factor whose expression might indicate poor prognosis of patients with NSCLC, generally, as well in NSCLC patient subsets, especially those with weak or no neovascularization. A detailed investigation of the association of Oct-4 expression with treatment response, particularly a characterization of the molecular phenotype of tumors following downregulation of Oct-4, would provide further support for this interpretation.", "In summary, a multivariate analysis demonstrated that Oct-4 expression was an independent predictor of overall survival, suggesting that Oct-4 may be useful as a molecular marker to assess the prognosis of patients with primary NSCLC, especially those without prominent neovascularization. In patients without prominent tumor-induced angiogenesis, Oct-4-overexpressing cells in primary NSCLC tissue represent a reservoir of tumor cells with differentiation potential; moreover, Oct-4 may maintain tumor cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation. The molecular mechanisms by which Oct-4 sustains the self-renewal capacity of tumor cells, especially those with poor neovascularization status, are poorly understood and are the focus of our future studies. 
Developing strategies to inhibit Oct-4 during tumor progression may have positive prognostic implications in primary NSCLC patients." ]
[ "Background", "Methods", "Patients and tissue specimens", "Cell lines", "Immunohistochemical staining and evaluation", "Evaluation of MVD", "Reverse transcription-polymerase chain reaction (RT-PCR)", "Statistical analysis", "Results", "Basic clinical information and tumor characteristics", "Association of Oct-4 expression with clinicopathological characteristics of NSCLC patients", "Oct-4 expression in NSCLC cell lines", "Association of Oct-4 expression with malignant proliferation according to differences in VEGF-mediated angiogenesis", "Association of Oct-4 expression with survival in all cases and in subsets of cases: univariate and multivariate analyses", "Discussion", "Conclusion" ]
[ "Despite recent progress in treatment, lung cancer remains the leading cause of cancer deaths in both women and men throughout the world [1]. Not all patients with lung cancer benefit from routine surgery and chemotherapy. This is especially true for those with primary non-small cell lung cancer (NSCLC), the most common malignancy in the thoracic field, where such therapies have been tried with limited efficacy [2]. To improve patient survival rate, researchers have increasingly focused on understanding specific characteristics of NSCLCs as a means to elucidate the mechanism of tumor development and develop possible targeted therapeutic approaches.\nOctamer 4 (Oct-4), a member of the POU-domain transcription factor family, is normally expressed in both adult and embryonic stem cells [3,4]. Recent reports have demonstrated that Oct-4 is not only involved in controlling the maintenance of stem cell pluripotency, but is also specifically responsible for the unlimited proliferative potential of stem cells, suggesting that Oct-4 functions as a master switch during differentiation of human somatic cell [5-7]. Interestingly, Oct-4 is also re-expressed in germ cell tumors [8], breast cancer [9], bladder cancer [10], prostate cancer and hepatomas [11,12], but very little is known about its potential function in malignant disease [13]. Moreover, overexpression of Oct-4 increases the malignant potential of tumors, and downregulation of Oct-4 in tumor cells inhibits tumor growth, suggesting that Oct-4 might play a key role in maintaining the survival of cancer cells [13,14]. Although its asymmetric expression may indicate that Oct-4 is a suitable target for therapeutic intervention in adenocarcinoma and bronchioloalveolar carcinoma [15], the role of Oct-4 expression in primary NSCLC has remained ill defined.\nTo address this potential role, we assessed Oct-4 expression in cancer specimens from 113 patients with primary NSCLC by immunohistochemical staining. We further investigated the association of Oct-4 expression in NSCLC tumor cells with some important clinical pathological indices. In addition, we examined the involvement of Oct-4 in tumor cell proliferation and tumor-induced angiogenesis in NSCLC by relating Oct-4 expression with microvessel density (MVD), and expression of Ki-67 and vascular endothelial growth factor (VEGF), proliferative and the vascular markers, respectively. On the basis of previous reports that a subset of NSCLC tumors do not induce angiogenesis but instead co-opt the normal vasculature for further growth [16,17], we also evaluated associations of Oct-4 expression with tumor cell proliferation and prognosis in subsets of patients with weak VEGF-mediated angiogenesis (disregarding the nonangiogenic subsets of NSCLC in the analysis, which would tend to obscure the role of Oct-4 expression in primary NSCLC).\nOur results provide the first demonstration that expression of the stem cell marker Oct-4 maintains tumor cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation. Moreover, even in the context of vulnerable MVD status and VEGF expression, Oct-4 plays an important role in tumor cell proliferation and contributes to poor prognosis in human NSCLC.", " Patients and tissue specimens Cancer tissue and corresponding adjacent normal tissue (within 1-2 cm of the tumor edge) from 113 primary NSCLC cases were randomly selected from our tissue database. 
Patients had been treated in the Department of Thoracic Surgery of the First Affiliated Hospital of Sun Yat-sen University from Jan 2003 to July 2004. None of the patients had received neoadjuvant chemotherapy or radiotherapy. Clinical information was obtained by reviewing the perioperative medical records, or by telephone or written correspondence. Cases were staged according to the tumor-node-metastases (TNM) classification of the International Union Against Cancer, revised in 2002 [18]. The study was approved by the Medical Ethical Committee of the First Affiliated Hospital, Sun Yat-sen University. Paraffin-embedded specimens of each case were sectioned and fixed on siliconized slides. Histological typing was determined according to World Health Organization classifications [19]. Tumor size and metastatic lymph node number and locations were obtained from pathology reports.\nCancer tissue and corresponding adjacent normal tissue (within 1-2 cm of the tumor edge) from 113 primary NSCLC cases were randomly selected from our tissue database. Patients had been treated in the Department of Thoracic Surgery of the First Affiliated Hospital of Sun Yat-sen University from Jan 2003 to July 2004. None of the patients had received neoadjuvant chemotherapy or radiotherapy. Clinical information was obtained by reviewing the perioperative medical records, or by telephone or written correspondence. Cases were staged according to the tumor-node-metastases (TNM) classification of the International Union Against Cancer, revised in 2002 [18]. The study was approved by the Medical Ethical Committee of the First Affiliated Hospital, Sun Yat-sen University. Paraffin-embedded specimens of each case were sectioned and fixed on siliconized slides. Histological typing was determined according to World Health Organization classifications [19]. Tumor size and metastatic lymph node number and locations were obtained from pathology reports.\n Cell lines The primary NSCLC cell lines, A549, H460 and H1299, obtained from the Cell Bank of the Chinese Academy of Science (Shanghai, China), were cultured in RPMI 1640 medium (Gibco/Invitrogen, Camarillo, CA, USA) supplemented with 10% fetal bovine serum (Hyclone, Logan, UT, USA).\nThe primary NSCLC cell lines, A549, H460 and H1299, obtained from the Cell Bank of the Chinese Academy of Science (Shanghai, China), were cultured in RPMI 1640 medium (Gibco/Invitrogen, Camarillo, CA, USA) supplemented with 10% fetal bovine serum (Hyclone, Logan, UT, USA).\n Immunohistochemical staining and evaluation The primary antibodies used in this study were as follow: anti-Oct-4 (sc-5279, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA), anti-Ki-67 (ab92742, dilution 1:200; Abcam, Cambridge, UK), and anti-VEGF (sc-7269, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Immunohistochemical staining was carried out using the streptavidin-peroxidase method. Cells with nuclear staining for Oct-4 and Ki-67, and cytoplasmic staining for VEGF, were scored as positive for the respective marker. The intensity of Oct-4, Ki-67, and VEGF staining was scored on a 0-to-3 scale: 0, negative; 1, light; 2, moderate; and 3, intense. The percentage of the tumor area stained for each marker at each intensity was calculated by dividing the number of tumor cells positive for the marker at each intensity by the total number of tumor cells. Areas that were negative were given a value of 0. 
A total of 10-12 discrete foci in each section were examined microscopically (400× magnification) to generate an average staining intensity and percentage of the surface area covered. The final histoscore was calculated using the formula: [(1 × percentage of weakly positive tumor cells) + (2 × percentage of moderately positive tumor cells) + (3 × percentage of intense positive tumor cells)]. The median values of Oct-4, Ki-67, and VEGF histoscores were used to classify samples as positive (above the median) or negative (below the median) for each marker.\nThe primary antibodies used in this study were as follow: anti-Oct-4 (sc-5279, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA), anti-Ki-67 (ab92742, dilution 1:200; Abcam, Cambridge, UK), and anti-VEGF (sc-7269, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Immunohistochemical staining was carried out using the streptavidin-peroxidase method. Cells with nuclear staining for Oct-4 and Ki-67, and cytoplasmic staining for VEGF, were scored as positive for the respective marker. The intensity of Oct-4, Ki-67, and VEGF staining was scored on a 0-to-3 scale: 0, negative; 1, light; 2, moderate; and 3, intense. The percentage of the tumor area stained for each marker at each intensity was calculated by dividing the number of tumor cells positive for the marker at each intensity by the total number of tumor cells. Areas that were negative were given a value of 0. A total of 10-12 discrete foci in each section were examined microscopically (400× magnification) to generate an average staining intensity and percentage of the surface area covered. The final histoscore was calculated using the formula: [(1 × percentage of weakly positive tumor cells) + (2 × percentage of moderately positive tumor cells) + (3 × percentage of intense positive tumor cells)]. The median values of Oct-4, Ki-67, and VEGF histoscores were used to classify samples as positive (above the median) or negative (below the median) for each marker.\n Evaluation of MVD Immunohistochemical staining for CD34 (MS-363, dilution 1:50; Lab Vision, Fremont, CA; Clone QBEnd/10) was analyzed. After identifying the three most vascularized areas within a tumor (\"hot spots\") at low magnification (×40), vessels in three representative fields in each of these areas were counted at high magnification (×400; 0.152 mm2; 0.44 mm diameter). The high-magnification fields were then marked for subsequent image cell counting analysis. Single immunoreactive endothelial cells or endothelial cell clusters separated from other microvessels were counted as individual microvessels. Endothelial staining in large vessels with tunica media and nonspecific staining of non-endothelial structures were excluded from microvessel counts. The mean visual microvessel density for CD34 was calculated as the average of six counts (three hot spots and three microscopic fields). Microvessel counts greater than the median counts were taken as MVD-positive, and microvessel counts lower than the median were taken as MVD-negative.\nImmunohistochemical staining for CD34 (MS-363, dilution 1:50; Lab Vision, Fremont, CA; Clone QBEnd/10) was analyzed. After identifying the three most vascularized areas within a tumor (\"hot spots\") at low magnification (×40), vessels in three representative fields in each of these areas were counted at high magnification (×400; 0.152 mm2; 0.44 mm diameter). The high-magnification fields were then marked for subsequent image cell counting analysis. 
Single immunoreactive endothelial cells or endothelial cell clusters separated from other microvessels were counted as individual microvessels. Endothelial staining in large vessels with tunica media and nonspecific staining of non-endothelial structures were excluded from microvessel counts. The mean visual microvessel density for CD34 was calculated as the average of six counts (three hot spots and three microscopic fields). Microvessel counts greater than the median counts were taken as MVD-positive, and microvessel counts lower than the median were taken as MVD-negative.\n Reverse transcription-polymerase chain reaction (RT-PCR) Total RNA was extracted from cultured cells using the TRIzol reagent (Invitrogen, Grand Island NY, USA), according to the manufacturer's instructions. Extracted RNA was treated with DNase (Fermentas, Vilnius, Lithuania) to remove DNA contamination. For cDNA synthesis, 1 μg of total RNA was reverse transcribed using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). PCR was performed with ExTaq (TaKaRa, Japan). The primer sequences and sizes of amplified products were as follows: Oct-4, 5'-GAC AGG GGG AGG GGA GGA GCT AGG-3' and 5'-CTT CCC TCC AAC CAG TTG CCC CAA AC-3' (142 bp); β-actin (internal control), 5'-GTG GGG CGC CCC AGG CAC CA-3' and 5'-CTC CTT AAT GTC ACG CAC GAT TTC-3' (540 bp).\nTotal RNA was extracted from cultured cells using the TRIzol reagent (Invitrogen, Grand Island NY, USA), according to the manufacturer's instructions. Extracted RNA was treated with DNase (Fermentas, Vilnius, Lithuania) to remove DNA contamination. For cDNA synthesis, 1 μg of total RNA was reverse transcribed using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). PCR was performed with ExTaq (TaKaRa, Japan). The primer sequences and sizes of amplified products were as follows: Oct-4, 5'-GAC AGG GGG AGG GGA GGA GCT AGG-3' and 5'-CTT CCC TCC AAC CAG TTG CCC CAA AC-3' (142 bp); β-actin (internal control), 5'-GTG GGG CGC CCC AGG CAC CA-3' and 5'-CTC CTT AAT GTC ACG CAC GAT TTC-3' (540 bp).\n Statistical analysis All calculations were done using SPSS V.14.0 software (Chicago, IL, USA). Spearman's coefficient of correlation, Chi-squared tests, and Mann-Whitney tests were used as appropriate. A multivariate model was used to evaluate statistical associations among variables. A Cox regression model was used to relate potential prognostic factors with survival.\nAll calculations were done using SPSS V.14.0 software (Chicago, IL, USA). Spearman's coefficient of correlation, Chi-squared tests, and Mann-Whitney tests were used as appropriate. A multivariate model was used to evaluate statistical associations among variables. A Cox regression model was used to relate potential prognostic factors with survival.", "Cancer tissue and corresponding adjacent normal tissue (within 1-2 cm of the tumor edge) from 113 primary NSCLC cases were randomly selected from our tissue database. Patients had been treated in the Department of Thoracic Surgery of the First Affiliated Hospital of Sun Yat-sen University from Jan 2003 to July 2004. None of the patients had received neoadjuvant chemotherapy or radiotherapy. Clinical information was obtained by reviewing the perioperative medical records, or by telephone or written correspondence. Cases were staged according to the tumor-node-metastases (TNM) classification of the International Union Against Cancer, revised in 2002 [18]. The study was approved by the Medical Ethical Committee of the First Affiliated Hospital, Sun Yat-sen University. 
Paraffin-embedded specimens of each case were sectioned and fixed on siliconized slides. Histological typing was determined according to World Health Organization classifications [19]. Tumor size and metastatic lymph node number and locations were obtained from pathology reports.", "The primary NSCLC cell lines, A549, H460 and H1299, obtained from the Cell Bank of the Chinese Academy of Science (Shanghai, China), were cultured in RPMI 1640 medium (Gibco/Invitrogen, Camarillo, CA, USA) supplemented with 10% fetal bovine serum (Hyclone, Logan, UT, USA).", "The primary antibodies used in this study were as follow: anti-Oct-4 (sc-5279, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA), anti-Ki-67 (ab92742, dilution 1:200; Abcam, Cambridge, UK), and anti-VEGF (sc-7269, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Immunohistochemical staining was carried out using the streptavidin-peroxidase method. Cells with nuclear staining for Oct-4 and Ki-67, and cytoplasmic staining for VEGF, were scored as positive for the respective marker. The intensity of Oct-4, Ki-67, and VEGF staining was scored on a 0-to-3 scale: 0, negative; 1, light; 2, moderate; and 3, intense. The percentage of the tumor area stained for each marker at each intensity was calculated by dividing the number of tumor cells positive for the marker at each intensity by the total number of tumor cells. Areas that were negative were given a value of 0. A total of 10-12 discrete foci in each section were examined microscopically (400× magnification) to generate an average staining intensity and percentage of the surface area covered. The final histoscore was calculated using the formula: [(1 × percentage of weakly positive tumor cells) + (2 × percentage of moderately positive tumor cells) + (3 × percentage of intense positive tumor cells)]. The median values of Oct-4, Ki-67, and VEGF histoscores were used to classify samples as positive (above the median) or negative (below the median) for each marker.", "Immunohistochemical staining for CD34 (MS-363, dilution 1:50; Lab Vision, Fremont, CA; Clone QBEnd/10) was analyzed. After identifying the three most vascularized areas within a tumor (\"hot spots\") at low magnification (×40), vessels in three representative fields in each of these areas were counted at high magnification (×400; 0.152 mm2; 0.44 mm diameter). The high-magnification fields were then marked for subsequent image cell counting analysis. Single immunoreactive endothelial cells or endothelial cell clusters separated from other microvessels were counted as individual microvessels. Endothelial staining in large vessels with tunica media and nonspecific staining of non-endothelial structures were excluded from microvessel counts. The mean visual microvessel density for CD34 was calculated as the average of six counts (three hot spots and three microscopic fields). Microvessel counts greater than the median counts were taken as MVD-positive, and microvessel counts lower than the median were taken as MVD-negative.", "Total RNA was extracted from cultured cells using the TRIzol reagent (Invitrogen, Grand Island NY, USA), according to the manufacturer's instructions. Extracted RNA was treated with DNase (Fermentas, Vilnius, Lithuania) to remove DNA contamination. For cDNA synthesis, 1 μg of total RNA was reverse transcribed using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). PCR was performed with ExTaq (TaKaRa, Japan). 
The primer sequences and sizes of amplified products were as follows: Oct-4, 5'-GAC AGG GGG AGG GGA GGA GCT AGG-3' and 5'-CTT CCC TCC AAC CAG TTG CCC CAA AC-3' (142 bp); β-actin (internal control), 5'-GTG GGG CGC CCC AGG CAC CA-3' and 5'-CTC CTT AAT GTC ACG CAC GAT TTC-3' (540 bp).", "All calculations were done using SPSS V.14.0 software (Chicago, IL, USA). Spearman's coefficient of correlation, Chi-squared tests, and Mann-Whitney tests were used as appropriate. A multivariate model was used to evaluate statistical associations among variables. A Cox regression model was used to relate potential prognostic factors with survival.", " Basic clinical information and tumor characteristics A total of 113 NSCLC patients (82 male and 31 female) were enrolled in the study; the mean age of study participants was 57.2 ± 10.0 years (range, 35-78 years). There were 58 cases of lung adenocarcinoma, 52 cases of squamous cell carcinoma, and three cases of large cell carcinoma. Twenty-seven cases were well differentiated, 34 cases were moderately differentiated, and 52 cases were poorly differentiated. The cases were classified as stage I (n = 30), stage II (n = 48), stage III (n = 18), and stage IV (n = 17). Of the 113 cases, 67 had lymph node metastasis, according to surgery and pathology reports. Analyses of patient data after a 5-year follow-up showed that 77 patients had died; median survival was 21.0 months. As expected, median survival was longer for stage I-II patients (22.0 mo) than stage III-IV patients (13.0 mo; P = 0.001). There were no significant differences in survival according to gender, smoking history, histology, or grading. The clinical characteristics of study samples are shown in Table 1.\nAssociation of Oct-4 Expression with clinical features in NSCLC\naPatients were divided according to the median values of immunohistochemical histoscores\nbPatients were divided according to median age\ncBronchioloalveolar carcinoma was included in well differentiated\ndLarge cell carcinoma was included in poorly differentiated\nA total of 113 NSCLC patients (82 male and 31 female) were enrolled in the study; the mean age of study participants was 57.2 ± 10.0 years (range, 35-78 years). There were 58 cases of lung adenocarcinoma, 52 cases of squamous cell carcinoma, and three cases of large cell carcinoma. Twenty-seven cases were well differentiated, 34 cases were moderately differentiated, and 52 cases were poorly differentiated. The cases were classified as stage I (n = 30), stage II (n = 48), stage III (n = 18), and stage IV (n = 17). Of the 113 cases, 67 had lymph node metastasis, according to surgery and pathology reports. Analyses of patient data after a 5-year follow-up showed that 77 patients had died; median survival was 21.0 months. As expected, median survival was longer for stage I-II patients (22.0 mo) than stage III-IV patients (13.0 mo; P = 0.001). There were no significant differences in survival according to gender, smoking history, histology, or grading. 
The clinical characteristics of study samples are shown in Table 1.\nAssociation of Oct-4 Expression with clinical features in NSCLC\naPatients were divided according to the median values of immunohistochemical histoscores\nbPatients were divided according to median age\ncBronchioloalveolar carcinoma was included in well differentiated\ndLarge cell carcinoma was included in poorly differentiated\n Association of Oct-4 expression with clinicopathological characteristics of NSCLC patients Immunohistochemical analyses demonstrated that Oct-4 was expressed in 90.3% of samples (102/113 cases), with clear staining observed mostly in the nuclei of tumor cells; alveolar and bronchial epithelial cells in tumor-adjacent tissues were negative for Oct-4 staining (Figure 1). The histoscores of Oct-4 expression were variable among individual tumor samples. The mean Oct-4 histoscore was 31.32 ± 5.99 and the median histoscore was 25.80; this latter value was selected to categorize patients into Oct-4-positive (above the median) and -negative (below the median) groups. Among the 56 Oct-4-negative cases, 11 samples exhibited no Oct-4 staining. The associations of Oct-4-positive and -negative status with various clinical and pathological characteristics of NSCLC are shown in Table 1. Regarding to histoscores of Oct-4 staining, there was prominent discrepancy between adenocarcinoma and squamous cell carcinoma (39.40 ± 3.59 and 21.64 ± 2.47, p = 0.008). There was significant association of Oct-4 histoscores among well, moderated, and poor differentiation of tumor (15.69 ± 3.70, 24.27 ± 2.73, and 43.80 ± 3.49, p = 0.039), and quantification of staining also revealed that these associations differed markedly in adenocarcinoma or squamous cell carcinoma population (Figure 1H). There were no associations between Oct-4 expression and malignant local advance, lymph node metastasis, or TNM stage of disease (Figure 1I).\nOct-4 expression in tissues of well-differentiated adenocarcinoma (A), well-differentiated squamous cell carcinoma (B), poorly differentiated adenocarcinoma (C), and poorly differentiated squamous cell carcinoma (D), as well as VEGF staining (E) and MVD staining (F) were demonstrated immunohistologically. Quantification of Oct-4 expression (Oct-4 histoscore) with respect to differentiation status or tumor histology (G) and local advance or lymph nodes metastasis (H) is shown; 95% CIs are indicated.\nImmunohistochemical analyses demonstrated that Oct-4 was expressed in 90.3% of samples (102/113 cases), with clear staining observed mostly in the nuclei of tumor cells; alveolar and bronchial epithelial cells in tumor-adjacent tissues were negative for Oct-4 staining (Figure 1). The histoscores of Oct-4 expression were variable among individual tumor samples. The mean Oct-4 histoscore was 31.32 ± 5.99 and the median histoscore was 25.80; this latter value was selected to categorize patients into Oct-4-positive (above the median) and -negative (below the median) groups. Among the 56 Oct-4-negative cases, 11 samples exhibited no Oct-4 staining. The associations of Oct-4-positive and -negative status with various clinical and pathological characteristics of NSCLC are shown in Table 1. Regarding to histoscores of Oct-4 staining, there was prominent discrepancy between adenocarcinoma and squamous cell carcinoma (39.40 ± 3.59 and 21.64 ± 2.47, p = 0.008). 
There was significant association of Oct-4 histoscores among well, moderated, and poor differentiation of tumor (15.69 ± 3.70, 24.27 ± 2.73, and 43.80 ± 3.49, p = 0.039), and quantification of staining also revealed that these associations differed markedly in adenocarcinoma or squamous cell carcinoma population (Figure 1H). There were no associations between Oct-4 expression and malignant local advance, lymph node metastasis, or TNM stage of disease (Figure 1I).\nOct-4 expression in tissues of well-differentiated adenocarcinoma (A), well-differentiated squamous cell carcinoma (B), poorly differentiated adenocarcinoma (C), and poorly differentiated squamous cell carcinoma (D), as well as VEGF staining (E) and MVD staining (F) were demonstrated immunohistologically. Quantification of Oct-4 expression (Oct-4 histoscore) with respect to differentiation status or tumor histology (G) and local advance or lymph nodes metastasis (H) is shown; 95% CIs are indicated.\n Oct-4 expression in NSCLC cell lines To better understand the expression status of Oct-4 in NSCLC, we examined the expression of Oct-4 in the NSCLC cell lines, A549, H460, and H1299. Oct-4 mRNA was detected in each of these cell lines (Figure 1G).\nTo better understand the expression status of Oct-4 in NSCLC, we examined the expression of Oct-4 in the NSCLC cell lines, A549, H460, and H1299. Oct-4 mRNA was detected in each of these cell lines (Figure 1G).\n Association of Oct-4 expression with malignant proliferation according to differences in VEGF-mediated angiogenesis Intratumoral Ki-67 expression, a marker of malignant proliferation, varied according to Oct-4 phenotype in the population under study, with high Ki-67 expression showing a significant association with positive Oct-4 staining (Table 1). Quantification of staining revealed that this association differed markedly depending on Oct-4 histoscores (Figure 2A, p = 0.001) and showed that these two markers were positively correlated (Figure 2B). In MVD-negative and VEGF-negative subsets, intratumoral Ki-67 expression varied significantly according to Oct-4 phenotype (Figure 2A); Ki-67 (Figure 2C) and Oct-4 (Figure 2E) expression were also positively correlated in these subsets. These results suggest a prominent association of Oct-4 expression with malignant proliferation in NSCLC, especially in cases with weak VEGF-mediated angiogenesis.\nKi-67 expression histoscores were significantly different (ANOVA) according to different Oct-4 status in all cases, and in subsets of MVD-negative, MVD-positive, VEGF-negative, and VEGF-positive cases (A). All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. The association of Oct-4 staining with Ki-67 expression was positive in all cases (B), and in subsets of MVD-negative (C), MVD-positive (D), VEGF-negative (E), and VEGF-positive (F) cases. Statistical differences were calculated using Pearson and Spearman correlation analysis.\nIntratumoral Ki-67 expression, a marker of malignant proliferation, varied according to Oct-4 phenotype in the population under study, with high Ki-67 expression showing a significant association with positive Oct-4 staining (Table 1). Quantification of staining revealed that this association differed markedly depending on Oct-4 histoscores (Figure 2A, p = 0.001) and showed that these two markers were positively correlated (Figure 2B). 
In MVD-negative and VEGF-negative subsets, intratumoral Ki-67 expression varied significantly according to Oct-4 phenotype (Figure 2A); Ki-67 (Figure 2C) and Oct-4 (Figure 2E) expression were also positively correlated in these subsets. These results suggest a prominent association of Oct-4 expression with malignant proliferation in NSCLC, especially in cases with weak VEGF-mediated angiogenesis.\nKi-67 expression histoscores were significantly different (ANOVA) according to different Oct-4 status in all cases, and in subsets of MVD-negative, MVD-positive, VEGF-negative, and VEGF-positive cases (A). All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. The association of Oct-4 staining with Ki-67 expression was positive in all cases (B), and in subsets of MVD-negative (C), MVD-positive (D), VEGF-negative (E), and VEGF-positive (F) cases. Statistical differences were calculated using Pearson and Spearman correlation analysis.\n Association of Oct-4 expression with survival in all cases and in subsets of cases: univariate and multivariate analyses The strength of associations between each individual predictor and overall survival was shown by univariate and multivariate analyses (Table 2). Oct-4 expression in tumor tissue and differentiation of tumor cells were strongly associated with cancer-associated death. Notably, an Oct-4 expression level less than the median histoscore (25.80) was associated with improved survival (HR, 1.011), whereas elevated Oct-4 expression was associated with shorter cumulative survival (p = 0.009). A Kaplan-Meier plot showed a prominent difference in survival estimates for patients with high versus low Oct-4 expression in tumor tissue; this difference corresponded to a median survival of 18.2 ± 6.0 months for patients with high Oct-4 expression compared with a median survival of more than 24.7 ± 9.1 months for patients with low Oct-4 expression (Figure 3A). More importantly, significant differences were also found in the adenocarcinoma subset (17.7 ± 9.1 vs. 27.3 ± 9.6 months; Figure 3B) and the squamous cell carcinoma subset (20.7 ± 9.5 vs. 23.2 ± 10.8 months; Figure 3C). When all predictors were included in a Cox model, Oct-4 expression retained its prognostic significance for overall survival. Hence, a low level of Oct-4 expression in tumor tissue predicted improved survival in NSCLC patients.\nUnivariate and multivariate analyses of individual variables for correlations with overall survival: cox proportional hazards model\nVariable 1, Oct-4 expression was an independent prognostic factor, adjusted by histological differentiation, in all cases\nVariable 2, Oct-4 expression was an independent factor in MVD-negative cases\nVariable 3, Oct-4 expression was an independent factor in VEGF-negative cases\nAbbreviations: HR, hazard ratio; CI, confidence interval\nCumulative Kaplan-Meier survival curves based on the median values of Oct-4 immunochemical histoscores in NSCLC tissues are showed for all cases (A), and for adenocarcinoma (B), squamous cell carcinoma (C), MVD-negative (D), MVD-positive (E), VEGF-negative (F), and VEGF-positive (G) cases. All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. Oct-4-positivity was associated with decreased overall survival in all subset. 
Statistical differences were calculated using log-rank comparisons.\nIn order to assess the contribution of Oct-4 to overall survival in patients in whom VEGF-mediated angiogenesis was weak, we also performed univariate and multivariate analyses in the MVD-negative and VEGF-negative subsets (Table 2). Notably, an Oct-4 expression level below the median histoscore was associated with improved survival, whereas elevated Oct-4 expression was associated with shorter cumulative survival in both the MVD-negative subset (HR, 1.024, p = 0.005) and the VEGF-negative subset (HR, 1.011, p = 0.042). Further, a Kaplan-Meier plot showed a prominent difference in survival estimates for patients in the MVD-negative subset, where the median survival for patients with high Oct-4 expression was 18.5 ± 7.6 months compared with more than 24.3 ± 8.3 months for patients with low Oct-4 expression (Figure 3D). Similar differences were found for patients in the VEGF-negative subset; here the median survival for patients with high Oct-4 expression was 17.5 ± 6.1 months compared with more than 21.9 ± 7.5 months for patients with low Oct-4 expression (Figure 3F). Hence, Oct-4 expression retained its prognostic significance for overall survival in NSCLC patients with weak VEGF-mediated angiogenesis.", "A total of 113 NSCLC patients (82 male and 31 female) were enrolled in the study; the mean age of study participants was 57.2 ± 10.0 years (range, 35-78 years). There were 58 cases of lung adenocarcinoma, 52 cases of squamous cell carcinoma, and three cases of large cell carcinoma. Twenty-seven cases were well differentiated, 34 cases were moderately differentiated, and 52 cases were poorly differentiated. The cases were classified as stage I (n = 30), stage II (n = 48), stage III (n = 18), and stage IV (n = 17). Of the 113 cases, 67 had lymph node metastasis, according to surgery and pathology reports. Analyses of patient data after a 5-year follow-up showed that 77 patients had died; median survival was 21.0 months. As expected, median survival was longer for stage I-II patients (22.0 mo) than for stage III-IV patients (13.0 mo; P = 0.001). There were no significant differences in survival according to gender, smoking history, histology, or grading. The clinical characteristics of the study samples are shown in Table 1.\nAssociation of Oct-4 Expression with clinical features in NSCLC\naPatients were divided according to the median values of immunohistochemical histoscores\nbPatients were divided according to median age\ncBronchioloalveolar carcinoma was included in the well-differentiated group\ndLarge cell carcinoma was included in the poorly differentiated group", "Immunohistochemical analyses demonstrated that Oct-4 was expressed in 90.3% of samples (102/113 cases), with clear staining observed mostly in the nuclei of tumor cells; alveolar and bronchial epithelial cells in tumor-adjacent tissues were negative for Oct-4 staining (Figure 1). The histoscores of Oct-4 expression were variable among individual tumor samples. The mean Oct-4 histoscore was 31.32 ± 5.99 and the median histoscore was 25.80; this latter value was selected to categorize patients into Oct-4-positive (above the median) and -negative (below the median) groups. Among the 56 Oct-4-negative cases, 11 samples exhibited no Oct-4 staining. 
The associations of Oct-4-positive and -negative status with various clinical and pathological characteristics of NSCLC are shown in Table 1. With regard to Oct-4 staining histoscores, there was a prominent difference between adenocarcinoma and squamous cell carcinoma (39.40 ± 3.59 and 21.64 ± 2.47, p = 0.008). Oct-4 histoscores also differed significantly among well, moderately, and poorly differentiated tumors (15.69 ± 3.70, 24.27 ± 2.73, and 43.80 ± 3.49, respectively; p = 0.039), and quantification of staining revealed that these associations differed markedly between the adenocarcinoma and squamous cell carcinoma populations (Figure 1H). There were no associations between Oct-4 expression and local tumor advance, lymph node metastasis, or TNM stage of disease (Figure 1I).\nOct-4 expression in tissues of well-differentiated adenocarcinoma (A), well-differentiated squamous cell carcinoma (B), poorly differentiated adenocarcinoma (C), and poorly differentiated squamous cell carcinoma (D), as well as VEGF staining (E) and MVD staining (F), is demonstrated immunohistochemically. Quantification of Oct-4 expression (Oct-4 histoscore) with respect to differentiation status or tumor histology (G) and local advance or lymph node metastasis (H) is shown; 95% CIs are indicated.", "To better understand the expression status of Oct-4 in NSCLC, we examined the expression of Oct-4 in the NSCLC cell lines A549, H460, and H1299. Oct-4 mRNA was detected in each of these cell lines (Figure 1G).", "Intratumoral Ki-67 expression, a marker of malignant proliferation, varied according to Oct-4 phenotype in the population under study, with high Ki-67 expression showing a significant association with positive Oct-4 staining (Table 1). Quantification of staining revealed that this association differed markedly depending on Oct-4 histoscores (Figure 2A, p = 0.001) and showed that these two markers were positively correlated (Figure 2B). In MVD-negative and VEGF-negative subsets, intratumoral Ki-67 expression varied significantly according to Oct-4 phenotype (Figure 2A); Ki-67 (Figure 2C) and Oct-4 (Figure 2E) expression were also positively correlated in these subsets. These results suggest a prominent association of Oct-4 expression with malignant proliferation in NSCLC, especially in cases with weak VEGF-mediated angiogenesis.\nKi-67 expression histoscores were significantly different (ANOVA) according to Oct-4 status in all cases, and in subsets of MVD-negative, MVD-positive, VEGF-negative, and VEGF-positive cases (A). All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. The association of Oct-4 staining with Ki-67 expression was positive in all cases (B), and in subsets of MVD-negative (C), MVD-positive (D), VEGF-negative (E), and VEGF-positive (F) cases. Statistical differences were calculated using Pearson and Spearman correlation analyses.", "The strength of associations between each individual predictor and overall survival was assessed by univariate and multivariate analyses (Table 2). Oct-4 expression in tumor tissue and differentiation of tumor cells were strongly associated with cancer-associated death. Notably, an Oct-4 expression level below the median histoscore (25.80) was associated with improved survival (HR, 1.011), whereas elevated Oct-4 expression was associated with shorter cumulative survival (p = 0.009). 
A Kaplan-Meier plot showed a prominent difference in survival estimates for patients with high versus low Oct-4 expression in tumor tissue; this difference corresponded to a median survival of 18.2 ± 6.0 months for patients with high Oct-4 expression compared with more than 24.7 ± 9.1 months for patients with low Oct-4 expression (Figure 3A). More importantly, significant differences were also found in the adenocarcinoma subset (17.7 ± 9.1 vs. 27.3 ± 9.6 months; Figure 3B) and the squamous cell carcinoma subset (20.7 ± 9.5 vs. 23.2 ± 10.8 months; Figure 3C). When all predictors were included in a Cox model, Oct-4 expression retained its prognostic significance for overall survival. Hence, a low level of Oct-4 expression in tumor tissue predicted improved survival in NSCLC patients.\nUnivariate and multivariate analyses of individual variables for correlations with overall survival: Cox proportional hazards model\nVariable 1, Oct-4 expression was an independent prognostic factor, adjusted for histological differentiation, in all cases\nVariable 2, Oct-4 expression was an independent factor in MVD-negative cases\nVariable 3, Oct-4 expression was an independent factor in VEGF-negative cases\nAbbreviations: HR, hazard ratio; CI, confidence interval\nCumulative Kaplan-Meier survival curves based on the median values of Oct-4 immunohistochemical histoscores in NSCLC tissues are shown for all cases (A), and for adenocarcinoma (B), squamous cell carcinoma (C), MVD-negative (D), MVD-positive (E), VEGF-negative (F), and VEGF-positive (G) cases. All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. Oct-4-positivity was associated with decreased overall survival in all subsets. Statistical differences were calculated using log-rank comparisons.\nIn order to assess the contribution of Oct-4 to overall survival in patients in whom VEGF-mediated angiogenesis was weak, we also performed univariate and multivariate analyses in the MVD-negative and VEGF-negative subsets (Table 2). Notably, an Oct-4 expression level below the median histoscore was associated with improved survival, whereas elevated Oct-4 expression was associated with shorter cumulative survival in both the MVD-negative subset (HR, 1.024, p = 0.005) and the VEGF-negative subset (HR, 1.011, p = 0.042). Further, a Kaplan-Meier plot showed a prominent difference in survival estimates for patients in the MVD-negative subset, where the median survival for patients with high Oct-4 expression was 18.5 ± 7.6 months compared with more than 24.3 ± 8.3 months for patients with low Oct-4 expression (Figure 3D). Similar differences were found for patients in the VEGF-negative subset; here the median survival for patients with high Oct-4 expression was 17.5 ± 6.1 months compared with more than 21.9 ± 7.5 months for patients with low Oct-4 expression (Figure 3F). Hence, Oct-4 expression retained its prognostic significance for overall survival in NSCLC patients with weak VEGF-mediated angiogenesis.", "Although Oct-4 has been detected in various carcinomas, including breast cancer [9], bladder cancer [10], prostate cancer [11], and lung adenocarcinoma [20], the precise role of this stem cell marker in maintaining the survival of cancer cells is unclear. 
Sustained expression of Oct-4 in epithelial tissues has been shown to lead to dysplastic changes through inhibition of cellular differentiation, similar to its action in some progenitor cells, suggesting that Oct-4 may play an important role in the genesis of tumors [21]. However, the mechanisms by which Oct-4 acts during tumor progression have remained poorly understood. Accordingly, we examined the behavior of Oct-4 in primary NSCLC tissues, focusing on the associations of Oct-4 expression with clinicopathological features and markers of tumor-induced angiogenesis. Important in this context is the observation that a subset of NSCLC tumors does not induce angiogenesis but instead co-opts the normal vasculature for further growth; such nonangiogenic subsets tend to obscure the association of Oct-4 with tumor angiogenesis and were therefore analyzed separately.\nOn the basis of the previous finding that Oct-4 may be a major contributor to the maintenance of self-renewal in embryonic stem cells, we investigated the association of Oct-4 expression with self-renewal of NSCLC cells. The immunohistochemical analyses presented here showed clear Oct-4 staining in most sections, and RT-PCR showed Oct-4 mRNA in all NSCLC cell lines. Our data extend the previous report of Oct-4 overexpression in lung adenocarcinoma [20], providing the first demonstration that Oct-4 is also present in lung squamous cell carcinoma specimens, exhibiting an apparent difference in the degree of expression among sections analyzed. One possible explanation for these findings is that the genesis of lung adenocarcinoma and squamous cell carcinoma may be different. The former arises from mucous glands or the cells of the bronchoalveolar duct junction, and the latter grows most commonly in or around major bronchi. Further studies designed to address the relationship between Oct-4 expression in endothelial precursors and the sites of origin of adenocarcinoma and squamous cell carcinoma are required to confirm this. Our data also showed that the degree of immunohistochemical staining was positively correlated with poor differentiation of tumor cells and Ki-67 expression; this latter marker provides an opportunity to analyze the proliferative cell fraction in preserved tumor specimens. High levels of Oct-4 have been shown to increase the malignant potential of tumors, whereas inactivation of Oct-4 induces a regression of the malignant component [22]; moreover, knockdown of Oct-4 expression in lung cancer cells has been shown to facilitate differentiation of CD133-positive cells into CD133-negative cells [23]. These findings, taken together with our data, indicate that overexpression of Oct-4 in NSCLC tissues may maintain the poorly differentiated state by contributing to tumor cell proliferation. On the other hand, down-regulation of Oct-4 expression has been shown to induce apoptosis of tumor-initiating-cell-like cells through an Oct-4/Tcl1/Akt1 pathway, implying that Oct-4 might maintain the survival of tumor-initiating cells, at least in part, by inhibiting apoptosis [13]. Whether an Oct-4-dependent pathway modulates apoptosis in clinical NSCLC samples or NSCLC cell lines has not yet been tested.\nPrevious reports have indicated that tumor-induced angiogenesis is important in maintaining the poorly differentiated state and promoting metastasis in NSCLC [23,24]. 
In our study, we observed an association of Oct-4 expression in NSCLC specimens with some features of tumor-induced angiogenesis, but the investigation revealed no prominent linkage between Oct-4 expression and neovascularization (defined by CD34 and VEGF-A expression). However, Passalidou and Pezzella have previously described a subset of NSCLC without morphological evidence of neo-angiogenesis. In these tumors, alveoli are filled with neoplastic cells and the only vessels present appeared to belong to the trapped alveolar septa; moreover, tumors with normal vessels and no neo-angiogenesis seemed resistant to some anti-angiogenic therapies [16,17]. In this context, we observed an association of Oct-4 expression with tumor cell proliferation in patients with weak VEGF-mediated angiogenesis, including MVD-negative and VEGF-negative subsets, indicating that Oct-4 still plays an important role in cell proliferation in NSCLC tumors, even those with weak MVD or VEGF status. Whether Oct-4 expression contributes to resistance to anti-angiogenic therapy thus warrants additional research attention.\nAlthough recent reports have also shown that Oct-4 is re-expressed in different human carcinomas, implicating Oct-4 as a potential diagnostic marker in malignancy [25,26], whether Oct-4 expression can be used as a diagnostic tool to monitor the clinical prognosis of NSCLC patients has not been previously substantiated. An analysis of our follow-up data designed to definitively assess the effect of Oct-4 immunohistochemical expression on the prognosis of NSCLC patients showed that the post-operative survival duration of patients with high Oct-4 expression was notably shorter than that of patients with low expression. These results indicate that overexpression of Oct-4 has a detrimental effect on prognosis, and further demonstrates that Oct-4 expression may be correlated with the malignant behavior of tumors during NSCLC progression. A combined genomic analysis of the Oct-4/SOX2/NANOG pathway has recently demonstrated high prognostic accuracy in studies of patients with multiple tumor types [27]. Similarly, multivariate analyses of the data presented here demonstrated that Oct-4 expression is an independent factor whose expression might indicate poor prognosis of patients with NSCLC, generally, as well in NSCLC patient subsets, especially those with weak or no neovascularization. A detailed investigation of the association of Oct-4 expression with treatment response, particularly a characterization of the molecular phenotype of tumors following downregulation of Oct-4, would provide further support for this interpretation.", "In summary, a multivariate analysis demonstrated that Oct-4 expression was an independent predictor of overall survival, suggesting that Oct-4 may be useful as a molecular marker to assess the prognosis of patients with primary NSCLC, especially those without prominent neovascularization. In patients without prominent tumor-induced angiogenesis, Oct-4-overexpressing cells in primary NSCLC tissue represent a reservoir of tumor cells with differentiation potential; moreover, Oct-4 may maintain tumor cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation. The molecular mechanisms by which Oct-4 sustains the self-renewal capacity of tumor cells, especially those with poor neovascularization status, are poorly understood and are the focus of our future studies. 
Developing strategies to inhibit Oct-4 during tumor progression may have positive prognostic implications in primary NSCLC patients." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Oct-4", "Non-small cell lung cancer", "Prognosis", "Proliferation", "Angiogenesis" ]
Background: Despite recent progress in treatment, lung cancer remains the leading cause of cancer deaths in both women and men throughout the world [1]. Not all patients with lung cancer benefit from routine surgery and chemotherapy. This is especially true for those with primary non-small cell lung cancer (NSCLC), the most common malignancy in the thoracic field, where such therapies have been tried with limited efficacy [2]. To improve patient survival rate, researchers have increasingly focused on understanding specific characteristics of NSCLCs as a means to elucidate the mechanism of tumor development and develop possible targeted therapeutic approaches. Octamer 4 (Oct-4), a member of the POU-domain transcription factor family, is normally expressed in both adult and embryonic stem cells [3,4]. Recent reports have demonstrated that Oct-4 is not only involved in controlling the maintenance of stem cell pluripotency, but is also specifically responsible for the unlimited proliferative potential of stem cells, suggesting that Oct-4 functions as a master switch during differentiation of human somatic cell [5-7]. Interestingly, Oct-4 is also re-expressed in germ cell tumors [8], breast cancer [9], bladder cancer [10], prostate cancer and hepatomas [11,12], but very little is known about its potential function in malignant disease [13]. Moreover, overexpression of Oct-4 increases the malignant potential of tumors, and downregulation of Oct-4 in tumor cells inhibits tumor growth, suggesting that Oct-4 might play a key role in maintaining the survival of cancer cells [13,14]. Although its asymmetric expression may indicate that Oct-4 is a suitable target for therapeutic intervention in adenocarcinoma and bronchioloalveolar carcinoma [15], the role of Oct-4 expression in primary NSCLC has remained ill defined. To address this potential role, we assessed Oct-4 expression in cancer specimens from 113 patients with primary NSCLC by immunohistochemical staining. We further investigated the association of Oct-4 expression in NSCLC tumor cells with some important clinical pathological indices. In addition, we examined the involvement of Oct-4 in tumor cell proliferation and tumor-induced angiogenesis in NSCLC by relating Oct-4 expression with microvessel density (MVD), and expression of Ki-67 and vascular endothelial growth factor (VEGF), proliferative and the vascular markers, respectively. On the basis of previous reports that a subset of NSCLC tumors do not induce angiogenesis but instead co-opt the normal vasculature for further growth [16,17], we also evaluated associations of Oct-4 expression with tumor cell proliferation and prognosis in subsets of patients with weak VEGF-mediated angiogenesis (disregarding the nonangiogenic subsets of NSCLC in the analysis, which would tend to obscure the role of Oct-4 expression in primary NSCLC). Our results provide the first demonstration that expression of the stem cell marker Oct-4 maintains tumor cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation. Moreover, even in the context of vulnerable MVD status and VEGF expression, Oct-4 plays an important role in tumor cell proliferation and contributes to poor prognosis in human NSCLC. Methods: Patients and tissue specimens Cancer tissue and corresponding adjacent normal tissue (within 1-2 cm of the tumor edge) from 113 primary NSCLC cases were randomly selected from our tissue database. 
Patients had been treated in the Department of Thoracic Surgery of the First Affiliated Hospital of Sun Yat-sen University from Jan 2003 to July 2004. None of the patients had received neoadjuvant chemotherapy or radiotherapy. Clinical information was obtained by reviewing the perioperative medical records, or by telephone or written correspondence. Cases were staged according to the tumor-node-metastases (TNM) classification of the International Union Against Cancer, revised in 2002 [18]. The study was approved by the Medical Ethical Committee of the First Affiliated Hospital, Sun Yat-sen University. Paraffin-embedded specimens of each case were sectioned and fixed on siliconized slides. Histological typing was determined according to World Health Organization classifications [19]. Tumor size and metastatic lymph node number and locations were obtained from pathology reports. Cancer tissue and corresponding adjacent normal tissue (within 1-2 cm of the tumor edge) from 113 primary NSCLC cases were randomly selected from our tissue database. Patients had been treated in the Department of Thoracic Surgery of the First Affiliated Hospital of Sun Yat-sen University from Jan 2003 to July 2004. None of the patients had received neoadjuvant chemotherapy or radiotherapy. Clinical information was obtained by reviewing the perioperative medical records, or by telephone or written correspondence. Cases were staged according to the tumor-node-metastases (TNM) classification of the International Union Against Cancer, revised in 2002 [18]. The study was approved by the Medical Ethical Committee of the First Affiliated Hospital, Sun Yat-sen University. Paraffin-embedded specimens of each case were sectioned and fixed on siliconized slides. Histological typing was determined according to World Health Organization classifications [19]. Tumor size and metastatic lymph node number and locations were obtained from pathology reports. Cell lines The primary NSCLC cell lines, A549, H460 and H1299, obtained from the Cell Bank of the Chinese Academy of Science (Shanghai, China), were cultured in RPMI 1640 medium (Gibco/Invitrogen, Camarillo, CA, USA) supplemented with 10% fetal bovine serum (Hyclone, Logan, UT, USA). The primary NSCLC cell lines, A549, H460 and H1299, obtained from the Cell Bank of the Chinese Academy of Science (Shanghai, China), were cultured in RPMI 1640 medium (Gibco/Invitrogen, Camarillo, CA, USA) supplemented with 10% fetal bovine serum (Hyclone, Logan, UT, USA). Immunohistochemical staining and evaluation The primary antibodies used in this study were as follow: anti-Oct-4 (sc-5279, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA), anti-Ki-67 (ab92742, dilution 1:200; Abcam, Cambridge, UK), and anti-VEGF (sc-7269, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Immunohistochemical staining was carried out using the streptavidin-peroxidase method. Cells with nuclear staining for Oct-4 and Ki-67, and cytoplasmic staining for VEGF, were scored as positive for the respective marker. The intensity of Oct-4, Ki-67, and VEGF staining was scored on a 0-to-3 scale: 0, negative; 1, light; 2, moderate; and 3, intense. The percentage of the tumor area stained for each marker at each intensity was calculated by dividing the number of tumor cells positive for the marker at each intensity by the total number of tumor cells. Areas that were negative were given a value of 0. 
A total of 10-12 discrete foci in each section were examined microscopically (400× magnification) to generate an average staining intensity and percentage of the surface area covered. The final histoscore was calculated using the formula: [(1 × percentage of weakly positive tumor cells) + (2 × percentage of moderately positive tumor cells) + (3 × percentage of intense positive tumor cells)]. The median values of Oct-4, Ki-67, and VEGF histoscores were used to classify samples as positive (above the median) or negative (below the median) for each marker. The primary antibodies used in this study were as follow: anti-Oct-4 (sc-5279, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA), anti-Ki-67 (ab92742, dilution 1:200; Abcam, Cambridge, UK), and anti-VEGF (sc-7269, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Immunohistochemical staining was carried out using the streptavidin-peroxidase method. Cells with nuclear staining for Oct-4 and Ki-67, and cytoplasmic staining for VEGF, were scored as positive for the respective marker. The intensity of Oct-4, Ki-67, and VEGF staining was scored on a 0-to-3 scale: 0, negative; 1, light; 2, moderate; and 3, intense. The percentage of the tumor area stained for each marker at each intensity was calculated by dividing the number of tumor cells positive for the marker at each intensity by the total number of tumor cells. Areas that were negative were given a value of 0. A total of 10-12 discrete foci in each section were examined microscopically (400× magnification) to generate an average staining intensity and percentage of the surface area covered. The final histoscore was calculated using the formula: [(1 × percentage of weakly positive tumor cells) + (2 × percentage of moderately positive tumor cells) + (3 × percentage of intense positive tumor cells)]. The median values of Oct-4, Ki-67, and VEGF histoscores were used to classify samples as positive (above the median) or negative (below the median) for each marker. Evaluation of MVD Immunohistochemical staining for CD34 (MS-363, dilution 1:50; Lab Vision, Fremont, CA; Clone QBEnd/10) was analyzed. After identifying the three most vascularized areas within a tumor ("hot spots") at low magnification (×40), vessels in three representative fields in each of these areas were counted at high magnification (×400; 0.152 mm2; 0.44 mm diameter). The high-magnification fields were then marked for subsequent image cell counting analysis. Single immunoreactive endothelial cells or endothelial cell clusters separated from other microvessels were counted as individual microvessels. Endothelial staining in large vessels with tunica media and nonspecific staining of non-endothelial structures were excluded from microvessel counts. The mean visual microvessel density for CD34 was calculated as the average of six counts (three hot spots and three microscopic fields). Microvessel counts greater than the median counts were taken as MVD-positive, and microvessel counts lower than the median were taken as MVD-negative. Immunohistochemical staining for CD34 (MS-363, dilution 1:50; Lab Vision, Fremont, CA; Clone QBEnd/10) was analyzed. After identifying the three most vascularized areas within a tumor ("hot spots") at low magnification (×40), vessels in three representative fields in each of these areas were counted at high magnification (×400; 0.152 mm2; 0.44 mm diameter). The high-magnification fields were then marked for subsequent image cell counting analysis. 
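The histoscore formula given above reduces to a weighted sum of the percentages of weakly, moderately, and intensely stained tumor cells, averaged over the examined fields and then dichotomized at the cohort median. A minimal illustrative sketch of that calculation is given below; all field percentages, case identifiers, and cohort values are hypothetical, and this is not the scoring code actually used in the study.

```python
from statistics import median

# Illustrative histoscore calculation following the formula in the Methods:
# histoscore = 1*(% weakly positive) + 2*(% moderately positive)
#              + 3*(% intensely positive) tumor cells,
# averaged over the 10-12 microscopic fields scored per section.
# All numeric values below are hypothetical.

def field_histoscore(pct_weak, pct_moderate, pct_intense):
    """Weighted histoscore for a single 400x field (percentages of tumor cells)."""
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_intense

def section_histoscore(fields):
    """Average of the per-field histoscores for one tumor section."""
    return sum(field_histoscore(*f) for f in fields) / len(fields)

# Three hypothetical fields, each given as (% weak, % moderate, % intense).
example_fields = [(10.0, 5.0, 2.0), (12.0, 8.0, 1.0), (9.0, 4.0, 3.0)]
print(round(section_histoscore(example_fields), 2))  # 27.67

# Dichotomize a (hypothetical) cohort at the median histoscore, as done
# for the Oct-4, Ki-67, and VEGF markers in the study.
cohort = {"case1": 18.3, "case2": 40.1, "case3": 25.8, "case4": 31.0}
cutoff = median(cohort.values())
status = {case: ("positive" if score > cutoff else "negative")
          for case, score in cohort.items()}
print(status)
```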
Single immunoreactive endothelial cells or endothelial cell clusters separated from other microvessels were counted as individual microvessels. Endothelial staining in large vessels with tunica media and nonspecific staining of non-endothelial structures were excluded from microvessel counts. The mean visual microvessel density for CD34 was calculated as the average of six counts (three hot spots and three microscopic fields). Microvessel counts greater than the median counts were taken as MVD-positive, and microvessel counts lower than the median were taken as MVD-negative. Reverse transcription-polymerase chain reaction (RT-PCR) Total RNA was extracted from cultured cells using the TRIzol reagent (Invitrogen, Grand Island NY, USA), according to the manufacturer's instructions. Extracted RNA was treated with DNase (Fermentas, Vilnius, Lithuania) to remove DNA contamination. For cDNA synthesis, 1 μg of total RNA was reverse transcribed using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). PCR was performed with ExTaq (TaKaRa, Japan). The primer sequences and sizes of amplified products were as follows: Oct-4, 5'-GAC AGG GGG AGG GGA GGA GCT AGG-3' and 5'-CTT CCC TCC AAC CAG TTG CCC CAA AC-3' (142 bp); β-actin (internal control), 5'-GTG GGG CGC CCC AGG CAC CA-3' and 5'-CTC CTT AAT GTC ACG CAC GAT TTC-3' (540 bp). Total RNA was extracted from cultured cells using the TRIzol reagent (Invitrogen, Grand Island NY, USA), according to the manufacturer's instructions. Extracted RNA was treated with DNase (Fermentas, Vilnius, Lithuania) to remove DNA contamination. For cDNA synthesis, 1 μg of total RNA was reverse transcribed using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). PCR was performed with ExTaq (TaKaRa, Japan). The primer sequences and sizes of amplified products were as follows: Oct-4, 5'-GAC AGG GGG AGG GGA GGA GCT AGG-3' and 5'-CTT CCC TCC AAC CAG TTG CCC CAA AC-3' (142 bp); β-actin (internal control), 5'-GTG GGG CGC CCC AGG CAC CA-3' and 5'-CTC CTT AAT GTC ACG CAC GAT TTC-3' (540 bp). Statistical analysis All calculations were done using SPSS V.14.0 software (Chicago, IL, USA). Spearman's coefficient of correlation, Chi-squared tests, and Mann-Whitney tests were used as appropriate. A multivariate model was used to evaluate statistical associations among variables. A Cox regression model was used to relate potential prognostic factors with survival. All calculations were done using SPSS V.14.0 software (Chicago, IL, USA). Spearman's coefficient of correlation, Chi-squared tests, and Mann-Whitney tests were used as appropriate. A multivariate model was used to evaluate statistical associations among variables. A Cox regression model was used to relate potential prognostic factors with survival. Patients and tissue specimens: Cancer tissue and corresponding adjacent normal tissue (within 1-2 cm of the tumor edge) from 113 primary NSCLC cases were randomly selected from our tissue database. Patients had been treated in the Department of Thoracic Surgery of the First Affiliated Hospital of Sun Yat-sen University from Jan 2003 to July 2004. None of the patients had received neoadjuvant chemotherapy or radiotherapy. Clinical information was obtained by reviewing the perioperative medical records, or by telephone or written correspondence. Cases were staged according to the tumor-node-metastases (TNM) classification of the International Union Against Cancer, revised in 2002 [18]. 
The study was approved by the Medical Ethical Committee of the First Affiliated Hospital, Sun Yat-sen University. Paraffin-embedded specimens of each case were sectioned and fixed on siliconized slides. Histological typing was determined according to World Health Organization classifications [19]. Tumor size and metastatic lymph node number and locations were obtained from pathology reports. Cell lines: The primary NSCLC cell lines, A549, H460 and H1299, obtained from the Cell Bank of the Chinese Academy of Science (Shanghai, China), were cultured in RPMI 1640 medium (Gibco/Invitrogen, Camarillo, CA, USA) supplemented with 10% fetal bovine serum (Hyclone, Logan, UT, USA). Immunohistochemical staining and evaluation: The primary antibodies used in this study were as follow: anti-Oct-4 (sc-5279, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA), anti-Ki-67 (ab92742, dilution 1:200; Abcam, Cambridge, UK), and anti-VEGF (sc-7269, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Immunohistochemical staining was carried out using the streptavidin-peroxidase method. Cells with nuclear staining for Oct-4 and Ki-67, and cytoplasmic staining for VEGF, were scored as positive for the respective marker. The intensity of Oct-4, Ki-67, and VEGF staining was scored on a 0-to-3 scale: 0, negative; 1, light; 2, moderate; and 3, intense. The percentage of the tumor area stained for each marker at each intensity was calculated by dividing the number of tumor cells positive for the marker at each intensity by the total number of tumor cells. Areas that were negative were given a value of 0. A total of 10-12 discrete foci in each section were examined microscopically (400× magnification) to generate an average staining intensity and percentage of the surface area covered. The final histoscore was calculated using the formula: [(1 × percentage of weakly positive tumor cells) + (2 × percentage of moderately positive tumor cells) + (3 × percentage of intense positive tumor cells)]. The median values of Oct-4, Ki-67, and VEGF histoscores were used to classify samples as positive (above the median) or negative (below the median) for each marker. Evaluation of MVD: Immunohistochemical staining for CD34 (MS-363, dilution 1:50; Lab Vision, Fremont, CA; Clone QBEnd/10) was analyzed. After identifying the three most vascularized areas within a tumor ("hot spots") at low magnification (×40), vessels in three representative fields in each of these areas were counted at high magnification (×400; 0.152 mm2; 0.44 mm diameter). The high-magnification fields were then marked for subsequent image cell counting analysis. Single immunoreactive endothelial cells or endothelial cell clusters separated from other microvessels were counted as individual microvessels. Endothelial staining in large vessels with tunica media and nonspecific staining of non-endothelial structures were excluded from microvessel counts. The mean visual microvessel density for CD34 was calculated as the average of six counts (three hot spots and three microscopic fields). Microvessel counts greater than the median counts were taken as MVD-positive, and microvessel counts lower than the median were taken as MVD-negative. Reverse transcription-polymerase chain reaction (RT-PCR): Total RNA was extracted from cultured cells using the TRIzol reagent (Invitrogen, Grand Island NY, USA), according to the manufacturer's instructions. Extracted RNA was treated with DNase (Fermentas, Vilnius, Lithuania) to remove DNA contamination. 
For cDNA synthesis, 1 μg of total RNA was reverse transcribed using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). PCR was performed with ExTaq (TaKaRa, Japan). The primer sequences and sizes of amplified products were as follows: Oct-4, 5'-GAC AGG GGG AGG GGA GGA GCT AGG-3' and 5'-CTT CCC TCC AAC CAG TTG CCC CAA AC-3' (142 bp); β-actin (internal control), 5'-GTG GGG CGC CCC AGG CAC CA-3' and 5'-CTC CTT AAT GTC ACG CAC GAT TTC-3' (540 bp). Statistical analysis: All calculations were done using SPSS V.14.0 software (Chicago, IL, USA). Spearman's coefficient of correlation, Chi-squared tests, and Mann-Whitney tests were used as appropriate. A multivariate model was used to evaluate statistical associations among variables. A Cox regression model was used to relate potential prognostic factors with survival. Results: Basic clinical information and tumor characteristics A total of 113 NSCLC patients (82 male and 31 female) were enrolled in the study; the mean age of study participants was 57.2 ± 10.0 years (range, 35-78 years). There were 58 cases of lung adenocarcinoma, 52 cases of squamous cell carcinoma, and three cases of large cell carcinoma. Twenty-seven cases were well differentiated, 34 cases were moderately differentiated, and 52 cases were poorly differentiated. The cases were classified as stage I (n = 30), stage II (n = 48), stage III (n = 18), and stage IV (n = 17). Of the 113 cases, 67 had lymph node metastasis, according to surgery and pathology reports. Analyses of patient data after a 5-year follow-up showed that 77 patients had died; median survival was 21.0 months. As expected, median survival was longer for stage I-II patients (22.0 mo) than stage III-IV patients (13.0 mo; P = 0.001). There were no significant differences in survival according to gender, smoking history, histology, or grading. The clinical characteristics of study samples are shown in Table 1. Association of Oct-4 Expression with clinical features in NSCLC aPatients were divided according to the median values of immunohistochemical histoscores bPatients were divided according to median age cBronchioloalveolar carcinoma was included in well differentiated dLarge cell carcinoma was included in poorly differentiated A total of 113 NSCLC patients (82 male and 31 female) were enrolled in the study; the mean age of study participants was 57.2 ± 10.0 years (range, 35-78 years). There were 58 cases of lung adenocarcinoma, 52 cases of squamous cell carcinoma, and three cases of large cell carcinoma. Twenty-seven cases were well differentiated, 34 cases were moderately differentiated, and 52 cases were poorly differentiated. The cases were classified as stage I (n = 30), stage II (n = 48), stage III (n = 18), and stage IV (n = 17). Of the 113 cases, 67 had lymph node metastasis, according to surgery and pathology reports. Analyses of patient data after a 5-year follow-up showed that 77 patients had died; median survival was 21.0 months. As expected, median survival was longer for stage I-II patients (22.0 mo) than stage III-IV patients (13.0 mo; P = 0.001). There were no significant differences in survival according to gender, smoking history, histology, or grading. The clinical characteristics of study samples are shown in Table 1. 
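The statistical workflow described above (Pearson and Spearman correlation of histoscores, Kaplan-Meier estimation with log-rank comparisons, and Cox proportional hazards modeling) was carried out in SPSS. As a rough open-source equivalent, the same analysis pattern could be sketched in Python with pandas, SciPy, and the lifelines package; the column names and all data values below are hypothetical, and the snippet only illustrates the pattern rather than reproducing the authors' analysis.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-patient table; column names and values are illustrative only.
df = pd.DataFrame({
    "months":     [18, 30, 12, 45, 22, 60, 9, 27],                  # follow-up time
    "died":       [1, 0, 1, 0, 1, 0, 1, 1],                         # 1 = cancer-associated death
    "oct4_score": [40.1, 12.3, 55.0, 8.9, 31.0, 20.5, 15.2, 28.4],  # Oct-4 histoscores
    "ki67_score": [35.0, 10.0, 48.0, 12.0, 30.0, 18.0, 14.0, 25.0], # Ki-67 histoscores
})
df["oct4_positive"] = df["oct4_score"] > df["oct4_score"].median()

# Pearson and Spearman correlation of Oct-4 and Ki-67 histoscores.
print(pearsonr(df["oct4_score"], df["ki67_score"]))
print(spearmanr(df["oct4_score"], df["ki67_score"]))

# Kaplan-Meier estimates for Oct-4-positive versus Oct-4-negative cases.
kmf = KaplanMeierFitter()
for positive, group in df.groupby("oct4_positive"):
    kmf.fit(group["months"], group["died"], label=f"Oct-4 positive = {positive}")
    print(positive, kmf.median_survival_time_)

# Log-rank comparison of the two survival curves.
pos, neg = df[df["oct4_positive"]], df[~df["oct4_positive"]]
result = logrank_test(pos["months"], neg["months"],
                      event_observed_A=pos["died"], event_observed_B=neg["died"])
print(result.p_value)

# Cox proportional hazards model with the continuous Oct-4 histoscore.
cph = CoxPHFitter()
cph.fit(df[["months", "died", "oct4_score"]], duration_col="months", event_col="died")
cph.print_summary()
```

In the study itself, the Kaplan-Meier and log-rank comparisons used the median-dichotomized Oct-4 groups, and the Cox models reported in Table 2 treated Oct-4 expression adjusted for histological differentiation; the sketch above only mirrors that general structure.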
Association of Oct-4 Expression with clinical features in NSCLC aPatients were divided according to the median values of immunohistochemical histoscores bPatients were divided according to median age cBronchioloalveolar carcinoma was included in well differentiated dLarge cell carcinoma was included in poorly differentiated Association of Oct-4 expression with clinicopathological characteristics of NSCLC patients Immunohistochemical analyses demonstrated that Oct-4 was expressed in 90.3% of samples (102/113 cases), with clear staining observed mostly in the nuclei of tumor cells; alveolar and bronchial epithelial cells in tumor-adjacent tissues were negative for Oct-4 staining (Figure 1). The histoscores of Oct-4 expression were variable among individual tumor samples. The mean Oct-4 histoscore was 31.32 ± 5.99 and the median histoscore was 25.80; this latter value was selected to categorize patients into Oct-4-positive (above the median) and -negative (below the median) groups. Among the 56 Oct-4-negative cases, 11 samples exhibited no Oct-4 staining. The associations of Oct-4-positive and -negative status with various clinical and pathological characteristics of NSCLC are shown in Table 1. Regarding to histoscores of Oct-4 staining, there was prominent discrepancy between adenocarcinoma and squamous cell carcinoma (39.40 ± 3.59 and 21.64 ± 2.47, p = 0.008). There was significant association of Oct-4 histoscores among well, moderated, and poor differentiation of tumor (15.69 ± 3.70, 24.27 ± 2.73, and 43.80 ± 3.49, p = 0.039), and quantification of staining also revealed that these associations differed markedly in adenocarcinoma or squamous cell carcinoma population (Figure 1H). There were no associations between Oct-4 expression and malignant local advance, lymph node metastasis, or TNM stage of disease (Figure 1I). Oct-4 expression in tissues of well-differentiated adenocarcinoma (A), well-differentiated squamous cell carcinoma (B), poorly differentiated adenocarcinoma (C), and poorly differentiated squamous cell carcinoma (D), as well as VEGF staining (E) and MVD staining (F) were demonstrated immunohistologically. Quantification of Oct-4 expression (Oct-4 histoscore) with respect to differentiation status or tumor histology (G) and local advance or lymph nodes metastasis (H) is shown; 95% CIs are indicated. Immunohistochemical analyses demonstrated that Oct-4 was expressed in 90.3% of samples (102/113 cases), with clear staining observed mostly in the nuclei of tumor cells; alveolar and bronchial epithelial cells in tumor-adjacent tissues were negative for Oct-4 staining (Figure 1). The histoscores of Oct-4 expression were variable among individual tumor samples. The mean Oct-4 histoscore was 31.32 ± 5.99 and the median histoscore was 25.80; this latter value was selected to categorize patients into Oct-4-positive (above the median) and -negative (below the median) groups. Among the 56 Oct-4-negative cases, 11 samples exhibited no Oct-4 staining. The associations of Oct-4-positive and -negative status with various clinical and pathological characteristics of NSCLC are shown in Table 1. Regarding to histoscores of Oct-4 staining, there was prominent discrepancy between adenocarcinoma and squamous cell carcinoma (39.40 ± 3.59 and 21.64 ± 2.47, p = 0.008). 
There was significant association of Oct-4 histoscores among well, moderated, and poor differentiation of tumor (15.69 ± 3.70, 24.27 ± 2.73, and 43.80 ± 3.49, p = 0.039), and quantification of staining also revealed that these associations differed markedly in adenocarcinoma or squamous cell carcinoma population (Figure 1H). There were no associations between Oct-4 expression and malignant local advance, lymph node metastasis, or TNM stage of disease (Figure 1I). Oct-4 expression in tissues of well-differentiated adenocarcinoma (A), well-differentiated squamous cell carcinoma (B), poorly differentiated adenocarcinoma (C), and poorly differentiated squamous cell carcinoma (D), as well as VEGF staining (E) and MVD staining (F) were demonstrated immunohistologically. Quantification of Oct-4 expression (Oct-4 histoscore) with respect to differentiation status or tumor histology (G) and local advance or lymph nodes metastasis (H) is shown; 95% CIs are indicated. Oct-4 expression in NSCLC cell lines To better understand the expression status of Oct-4 in NSCLC, we examined the expression of Oct-4 in the NSCLC cell lines, A549, H460, and H1299. Oct-4 mRNA was detected in each of these cell lines (Figure 1G). To better understand the expression status of Oct-4 in NSCLC, we examined the expression of Oct-4 in the NSCLC cell lines, A549, H460, and H1299. Oct-4 mRNA was detected in each of these cell lines (Figure 1G). Association of Oct-4 expression with malignant proliferation according to differences in VEGF-mediated angiogenesis Intratumoral Ki-67 expression, a marker of malignant proliferation, varied according to Oct-4 phenotype in the population under study, with high Ki-67 expression showing a significant association with positive Oct-4 staining (Table 1). Quantification of staining revealed that this association differed markedly depending on Oct-4 histoscores (Figure 2A, p = 0.001) and showed that these two markers were positively correlated (Figure 2B). In MVD-negative and VEGF-negative subsets, intratumoral Ki-67 expression varied significantly according to Oct-4 phenotype (Figure 2A); Ki-67 (Figure 2C) and Oct-4 (Figure 2E) expression were also positively correlated in these subsets. These results suggest a prominent association of Oct-4 expression with malignant proliferation in NSCLC, especially in cases with weak VEGF-mediated angiogenesis. Ki-67 expression histoscores were significantly different (ANOVA) according to different Oct-4 status in all cases, and in subsets of MVD-negative, MVD-positive, VEGF-negative, and VEGF-positive cases (A). All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. The association of Oct-4 staining with Ki-67 expression was positive in all cases (B), and in subsets of MVD-negative (C), MVD-positive (D), VEGF-negative (E), and VEGF-positive (F) cases. Statistical differences were calculated using Pearson and Spearman correlation analysis. Intratumoral Ki-67 expression, a marker of malignant proliferation, varied according to Oct-4 phenotype in the population under study, with high Ki-67 expression showing a significant association with positive Oct-4 staining (Table 1). Quantification of staining revealed that this association differed markedly depending on Oct-4 histoscores (Figure 2A, p = 0.001) and showed that these two markers were positively correlated (Figure 2B). 
In MVD-negative and VEGF-negative subsets, intratumoral Ki-67 expression varied significantly according to Oct-4 phenotype (Figure 2A); Ki-67 (Figure 2C) and Oct-4 (Figure 2E) expression were also positively correlated in these subsets. These results suggest a prominent association of Oct-4 expression with malignant proliferation in NSCLC, especially in cases with weak VEGF-mediated angiogenesis. Ki-67 expression histoscores were significantly different (ANOVA) according to different Oct-4 status in all cases, and in subsets of MVD-negative, MVD-positive, VEGF-negative, and VEGF-positive cases (A). All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. The association of Oct-4 staining with Ki-67 expression was positive in all cases (B), and in subsets of MVD-negative (C), MVD-positive (D), VEGF-negative (E), and VEGF-positive (F) cases. Statistical differences were calculated using Pearson and Spearman correlation analysis. Association of Oct-4 expression with survival in all cases and in subsets of cases: univariate and multivariate analyses The strength of associations between each individual predictor and overall survival was shown by univariate and multivariate analyses (Table 2). Oct-4 expression in tumor tissue and differentiation of tumor cells were strongly associated with cancer-associated death. Notably, an Oct-4 expression level less than the median histoscore (25.80) was associated with improved survival (HR, 1.011), whereas elevated Oct-4 expression was associated with shorter cumulative survival (p = 0.009). A Kaplan-Meier plot showed a prominent difference in survival estimates for patients with high versus low Oct-4 expression in tumor tissue; this difference corresponded to a median survival of 18.2 ± 6.0 months for patients with high Oct-4 expression compared with a median survival of more than 24.7 ± 9.1 months for patients with low Oct-4 expression (Figure 3A). More importantly, significant differences were also found in the adenocarcinoma subset (17.7 ± 9.1 vs. 27.3 ± 9.6 months; Figure 3B) and the squamous cell carcinoma subset (20.7 ± 9.5 vs. 23.2 ± 10.8 months; Figure 3C). When all predictors were included in a Cox model, Oct-4 expression retained its prognostic significance for overall survival. Hence, a low level of Oct-4 expression in tumor tissue predicted improved survival in NSCLC patients. Univariate and multivariate analyses of individual variables for correlations with overall survival: cox proportional hazards model Variable 1, Oct-4 expression was an independent prognostic factor, adjusted by histological differentiation, in all cases Variable 2, Oct-4 expression was an independent factor in MVD-negative cases Variable 3, Oct-4 expression was an independent factor in VEGF-negative cases Abbreviations: HR, hazard ratio; CI, confidence interval Cumulative Kaplan-Meier survival curves based on the median values of Oct-4 immunochemical histoscores in NSCLC tissues are showed for all cases (A), and for adenocarcinoma (B), squamous cell carcinoma (C), MVD-negative (D), MVD-positive (E), VEGF-negative (F), and VEGF-positive (G) cases. All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. Oct-4-positivity was associated with decreased overall survival in all subset. Statistical differences were calculated using log-rank comparisons. 
In order to observe the contribution of Oct-4 to overall survival in patients in which VEGF-mediated angiogenesis was disabled, we also performed univariate and multivariate analyses in MVD-negative and VEGF-negative subsets (Table 2). Notably, an Oct-4 expression level less than the median histoscore was associated with improved survival, whereas elevated Oct-4 expression was associated with shorter cumulative survival in both the MVD-negative subset (HR, 1.024, p = 0.005) and the VEGF-negative subset (HR, 1.011, p = 0.042). Further, a Kaplan-Meier plot showed a prominent difference in survival estimates for patients in the MVD-negative subset, where the median survival for patients with high Oct-4 expression was 18.5 ± 7.6 months compared with a median survival of more than 24.3 ± 8.3 months for patients with low Oct-4 expression (Figure 3D). Similar differences were found for patients in the VEGF-negative subset; here the median survival for patients with high Oct-4 expression was 17.5 ± 6.1 months compared with a median survival of more than 21.9 ± 7.5 months for patients with low Oct-4 expression (Figure 3F). Hence, Oct-4 expression retained its prognostic significance for overall survival in NSCLC patients with weak VEGF-mediated angiogenesis. The strength of associations between each individual predictor and overall survival was shown by univariate and multivariate analyses (Table 2). Oct-4 expression in tumor tissue and differentiation of tumor cells were strongly associated with cancer-associated death. Notably, an Oct-4 expression level less than the median histoscore (25.80) was associated with improved survival (HR, 1.011), whereas elevated Oct-4 expression was associated with shorter cumulative survival (p = 0.009). A Kaplan-Meier plot showed a prominent difference in survival estimates for patients with high versus low Oct-4 expression in tumor tissue; this difference corresponded to a median survival of 18.2 ± 6.0 months for patients with high Oct-4 expression compared with a median survival of more than 24.7 ± 9.1 months for patients with low Oct-4 expression (Figure 3A). More importantly, significant differences were also found in the adenocarcinoma subset (17.7 ± 9.1 vs. 27.3 ± 9.6 months; Figure 3B) and the squamous cell carcinoma subset (20.7 ± 9.5 vs. 23.2 ± 10.8 months; Figure 3C). When all predictors were included in a Cox model, Oct-4 expression retained its prognostic significance for overall survival. Hence, a low level of Oct-4 expression in tumor tissue predicted improved survival in NSCLC patients. Univariate and multivariate analyses of individual variables for correlations with overall survival: cox proportional hazards model Variable 1, Oct-4 expression was an independent prognostic factor, adjusted by histological differentiation, in all cases Variable 2, Oct-4 expression was an independent factor in MVD-negative cases Variable 3, Oct-4 expression was an independent factor in VEGF-negative cases Abbreviations: HR, hazard ratio; CI, confidence interval Cumulative Kaplan-Meier survival curves based on the median values of Oct-4 immunochemical histoscores in NSCLC tissues are showed for all cases (A), and for adenocarcinoma (B), squamous cell carcinoma (C), MVD-negative (D), MVD-positive (E), VEGF-negative (F), and VEGF-positive (G) cases. All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. Oct-4-positivity was associated with decreased overall survival in all subset. 
Statistical differences were calculated using log-rank comparisons. In order to observe the contribution of Oct-4 to overall survival in patients in which VEGF-mediated angiogenesis was disabled, we also performed univariate and multivariate analyses in MVD-negative and VEGF-negative subsets (Table 2). Notably, an Oct-4 expression level less than the median histoscore was associated with improved survival, whereas elevated Oct-4 expression was associated with shorter cumulative survival in both the MVD-negative subset (HR, 1.024, p = 0.005) and the VEGF-negative subset (HR, 1.011, p = 0.042). Further, a Kaplan-Meier plot showed a prominent difference in survival estimates for patients in the MVD-negative subset, where the median survival for patients with high Oct-4 expression was 18.5 ± 7.6 months compared with a median survival of more than 24.3 ± 8.3 months for patients with low Oct-4 expression (Figure 3D). Similar differences were found for patients in the VEGF-negative subset; here the median survival for patients with high Oct-4 expression was 17.5 ± 6.1 months compared with a median survival of more than 21.9 ± 7.5 months for patients with low Oct-4 expression (Figure 3F). Hence, Oct-4 expression retained its prognostic significance for overall survival in NSCLC patients with weak VEGF-mediated angiogenesis. Basic clinical information and tumor characteristics: A total of 113 NSCLC patients (82 male and 31 female) were enrolled in the study; the mean age of study participants was 57.2 ± 10.0 years (range, 35-78 years). There were 58 cases of lung adenocarcinoma, 52 cases of squamous cell carcinoma, and three cases of large cell carcinoma. Twenty-seven cases were well differentiated, 34 cases were moderately differentiated, and 52 cases were poorly differentiated. The cases were classified as stage I (n = 30), stage II (n = 48), stage III (n = 18), and stage IV (n = 17). Of the 113 cases, 67 had lymph node metastasis, according to surgery and pathology reports. Analyses of patient data after a 5-year follow-up showed that 77 patients had died; median survival was 21.0 months. As expected, median survival was longer for stage I-II patients (22.0 mo) than stage III-IV patients (13.0 mo; P = 0.001). There were no significant differences in survival according to gender, smoking history, histology, or grading. The clinical characteristics of study samples are shown in Table 1. Association of Oct-4 Expression with clinical features in NSCLC aPatients were divided according to the median values of immunohistochemical histoscores bPatients were divided according to median age cBronchioloalveolar carcinoma was included in well differentiated dLarge cell carcinoma was included in poorly differentiated Association of Oct-4 expression with clinicopathological characteristics of NSCLC patients: Immunohistochemical analyses demonstrated that Oct-4 was expressed in 90.3% of samples (102/113 cases), with clear staining observed mostly in the nuclei of tumor cells; alveolar and bronchial epithelial cells in tumor-adjacent tissues were negative for Oct-4 staining (Figure 1). The histoscores of Oct-4 expression were variable among individual tumor samples. The mean Oct-4 histoscore was 31.32 ± 5.99 and the median histoscore was 25.80; this latter value was selected to categorize patients into Oct-4-positive (above the median) and -negative (below the median) groups. Among the 56 Oct-4-negative cases, 11 samples exhibited no Oct-4 staining. 
The associations of Oct-4-positive and -negative status with various clinical and pathological characteristics of NSCLC are shown in Table 1. With respect to Oct-4 staining histoscores, there was a prominent difference between adenocarcinoma and squamous cell carcinoma (39.40 ± 3.59 vs. 21.64 ± 2.47, p = 0.008). Oct-4 histoscores were also significantly associated with tumor differentiation, with values of 15.69 ± 3.70, 24.27 ± 2.73, and 43.80 ± 3.49 for well, moderately, and poorly differentiated tumors, respectively (p = 0.039); quantification of staining further revealed that these associations differed markedly between the adenocarcinoma and squamous cell carcinoma populations (Figure 1H). There were no associations between Oct-4 expression and local tumor advancement, lymph node metastasis, or TNM stage of disease (Figure 1I). Oct-4 expression in tissues of well-differentiated adenocarcinoma (A), well-differentiated squamous cell carcinoma (B), poorly differentiated adenocarcinoma (C), and poorly differentiated squamous cell carcinoma (D), as well as VEGF staining (E) and MVD staining (F), is demonstrated immunohistologically. Quantification of Oct-4 expression (Oct-4 histoscore) with respect to differentiation status or tumor histology (G) and local advancement or lymph node metastasis (H) is shown; 95% CIs are indicated. Oct-4 expression in NSCLC cell lines: To better understand the expression status of Oct-4 in NSCLC, we examined the expression of Oct-4 in the NSCLC cell lines A549, H460, and H1299. Oct-4 mRNA was detected in each of these cell lines (Figure 1G). Association of Oct-4 expression with malignant proliferation according to differences in VEGF-mediated angiogenesis: Intratumoral Ki-67 expression, a marker of malignant proliferation, varied according to Oct-4 phenotype in the population under study, with high Ki-67 expression showing a significant association with positive Oct-4 staining (Table 1). Quantification of staining revealed that this association differed markedly depending on Oct-4 histoscores (Figure 2A, p = 0.001) and showed that these two markers were positively correlated (Figure 2B). In MVD-negative and VEGF-negative subsets, intratumoral Ki-67 expression varied significantly according to Oct-4 phenotype (Figure 2A); Ki-67 and Oct-4 expression were also positively correlated in the MVD-negative (Figure 2C) and VEGF-negative (Figure 2E) subsets. These results suggest a prominent association of Oct-4 expression with malignant proliferation in NSCLC, especially in cases with weak VEGF-mediated angiogenesis. Ki-67 expression histoscores were significantly different (ANOVA) according to Oct-4 status in all cases, and in subsets of MVD-negative, MVD-positive, VEGF-negative, and VEGF-positive cases (A). All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. The association of Oct-4 staining with Ki-67 expression was positive in all cases (B), and in subsets of MVD-negative (C), MVD-positive (D), VEGF-negative (E), and VEGF-positive (F) cases. Statistical differences were calculated using Pearson and Spearman correlation analysis. Association of Oct-4 expression with survival in all cases and in subsets of cases: univariate and multivariate analyses: The strength of associations between each individual predictor and overall survival was shown by univariate and multivariate analyses (Table 2). Oct-4 expression in tumor tissue and differentiation of tumor cells were strongly associated with cancer-associated death. 
Notably, an Oct-4 expression level less than the median histoscore (25.80) was associated with improved survival (HR, 1.011), whereas elevated Oct-4 expression was associated with shorter cumulative survival (p = 0.009). A Kaplan-Meier plot showed a prominent difference in survival estimates for patients with high versus low Oct-4 expression in tumor tissue; this difference corresponded to a median survival of 18.2 ± 6.0 months for patients with high Oct-4 expression compared with a median survival of more than 24.7 ± 9.1 months for patients with low Oct-4 expression (Figure 3A). More importantly, significant differences were also found in the adenocarcinoma subset (17.7 ± 9.1 vs. 27.3 ± 9.6 months; Figure 3B) and the squamous cell carcinoma subset (20.7 ± 9.5 vs. 23.2 ± 10.8 months; Figure 3C). When all predictors were included in a Cox model, Oct-4 expression retained its prognostic significance for overall survival. Hence, a low level of Oct-4 expression in tumor tissue predicted improved survival in NSCLC patients. Univariate and multivariate analyses of individual variables for correlations with overall survival: Cox proportional hazards model. Variable 1: Oct-4 expression was an independent prognostic factor, adjusted by histological differentiation, in all cases. Variable 2: Oct-4 expression was an independent factor in MVD-negative cases. Variable 3: Oct-4 expression was an independent factor in VEGF-negative cases. Abbreviations: HR, hazard ratio; CI, confidence interval. Cumulative Kaplan-Meier survival curves based on the median values of Oct-4 immunohistochemical histoscores in NSCLC tissues are shown for all cases (A), and for adenocarcinoma (B), squamous cell carcinoma (C), MVD-negative (D), MVD-positive (E), VEGF-negative (F), and VEGF-positive (G) cases. All cases were divided into positive (above the median histoscore) and negative (below the median histoscore) groups. Oct-4 positivity was associated with decreased overall survival in all subsets. Statistical differences were calculated using log-rank comparisons. In order to observe the contribution of Oct-4 to overall survival in patients in whom VEGF-mediated angiogenesis was disabled, we also performed univariate and multivariate analyses in MVD-negative and VEGF-negative subsets (Table 2). Notably, an Oct-4 expression level less than the median histoscore was associated with improved survival, whereas elevated Oct-4 expression was associated with shorter cumulative survival in both the MVD-negative subset (HR, 1.024, p = 0.005) and the VEGF-negative subset (HR, 1.011, p = 0.042). Further, a Kaplan-Meier plot showed a prominent difference in survival estimates for patients in the MVD-negative subset, where the median survival for patients with high Oct-4 expression was 18.5 ± 7.6 months compared with a median survival of more than 24.3 ± 8.3 months for patients with low Oct-4 expression (Figure 3D). Similar differences were found for patients in the VEGF-negative subset; here the median survival for patients with high Oct-4 expression was 17.5 ± 6.1 months compared with a median survival of more than 21.9 ± 7.5 months for patients with low Oct-4 expression (Figure 3F). Hence, Oct-4 expression retained its prognostic significance for overall survival in NSCLC patients with weak VEGF-mediated angiogenesis. 
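The survival comparisons described above (Kaplan-Meier estimates compared by log-rank tests, followed by a Cox proportional hazards model that includes the other predictors) can be outlined with the lifelines library. This is a minimal sketch under assumed column names (`months`, `death`, `oct4_high`, `oct4_histoscore`, `differentiation`); it is not the authors' analysis code.

```python
# Minimal sketch of the survival analyses described above: Kaplan-Meier curves
# with a log-rank comparison, then a Cox proportional hazards model.
# Column names and the input file are assumptions, not the authors' data.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("nsclc_cohort.csv")       # hypothetical follow-up data
high = df[df["oct4_high"] == 1]            # above the median histoscore
low = df[df["oct4_high"] == 0]             # at or below the median histoscore

# Kaplan-Meier curves for the two Oct-4 groups
kmf = KaplanMeierFitter()
kmf.fit(high["months"], event_observed=high["death"], label="Oct-4 high")
ax = kmf.plot_survival_function()
kmf.fit(low["months"], event_observed=low["death"], label="Oct-4 low")
kmf.plot_survival_function(ax=ax)

# Log-rank comparison of the two curves
lr = logrank_test(high["months"], low["months"],
                  event_observed_A=high["death"], event_observed_B=low["death"])
print(f"log-rank p = {lr.p_value:.3f}")

# Cox model with Oct-4 plus another predictor (numerically coded differentiation)
cph = CoxPHFitter()
cph.fit(df[["months", "death", "oct4_histoscore", "differentiation"]],
        duration_col="months", event_col="death")
cph.print_summary()                        # hazard ratios with 95% CIs, as in Table 2
```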
Discussion: Although Oct-4 has been detected in various carcinomas, including breast cancer [9], bladder cancer [10], prostate cancer [11] and lung adenocarcinoma [20], the precise role of this stem cell marker in maintaining the survival of cancer cells is unclear. Sustained expression of Oct-4 in epithelial tissues has been shown to lead to dysplastic changes through inhibition of cellular differentiation, similar to its action in some progenitor cells, suggesting that Oct-4 may play an important role in the genesis of tumors [21]. However, the mechanisms by which Oct-4 acts during tumor progression have remained poorly understood. Accordingly, we examined the behavior of Oct-4 in primary NSCLC tissues, focusing on the associations of Oct-4 expression with clinicopathological features and markers of tumor-induced angiogenesis. Important in this context is the observation that, after disregarding nonangiogenic subsets of NSCLC (which tend to obscure the association of Oct-4 with tumor angiogenesis), a subset of NSCLC tumors does not induce angiogenesis, but instead co-opts the normal vasculature for further growth. On the basis of the previous finding that Oct-4 may be a major contributor to the maintenance of self-renewal in embryonic stem cells, we investigated the association of Oct-4 expression with self-renewal of NSCLC cells. The immunohistochemical analyses presented here showed clear Oct-4 staining in most sections, and RT-PCR showed Oct-4 mRNA in all NSCLC cell lines. Our data extend the previous report of Oct-4 overexpression in lung adenocarcinoma [20], providing the first demonstration that Oct-4 is also present in lung squamous cell carcinoma specimens, exhibiting an apparent difference in the degree of expression among sections analyzed. One possible explanation for these findings is that the genesis of lung adenocarcinoma and squamous cell carcinoma may be different. The former arises from mucous glands or the cells of bronchoalveolar duct junction and the latter grows most commonly in or around major bronchi. Further studies designed to address the relationship between Oct-4 expression in endothelial precursors and the sites of origin of adenocarcinoma and squamous cell carcinoma are required to confirm this. Our data also showed that the degree of immunohistochemical staining was positively correlated with poor differentiation of tumor cells and Ki-67 expression; this latter marker provides an opportunity to analyze the proliferative cell fraction in preserved tumor specimens. High levels of Oct-4 have been shown to increase the malignant potential of tumors, whereas inactivation of Oct-4 induces a regression of the malignant component [22]; moreover, knockdown of Oct-4 expression in lung cancer cells has been shown to facilitate differentiation of CD133-positive cells into CD133-negative cells [23]. These findings, taken together with our data, indicate that overexpression of Oct-4 in NSCLC tissues may maintain the poorly differentiated state by contributing to tumor cell proliferation. On the other hand, down-regulation of Oct-4 expression has been shown to induce apoptosis of tumor-initiating-cell-like cells through an Oct-4/Tcl1/Akt1 pathway, implying that Oct-4 might maintain the survival of tumor-initiating cells, at least in part, by inhibiting apoptosis [13]. Whether an Oct-4-dependent pathway modulates apoptosis in clinical NSCLC samples or NSCLC cell lines has not yet been tested. 
Previous reports have indicated that tumor-induced angiogenesis is important in maintaining the poorly differentiated state and promoting metastasis in NSCLC [23,24]. In our study, we observed an association of Oct-4 expression in NSCLC specimens with some features of tumor-induced angiogenesis, but the investigation revealed no prominent linkage between Oct-4 expression and neovascularization (defined by CD34 and VEGF-A expression). However, Passalidou and Pezzella have previously described a subset of NSCLC without morphological evidence of neo-angiogenesis. In these tumors, alveoli are filled with neoplastic cells and the only vessels present appeared to belong to the trapped alveolar septa; moreover, tumors with normal vessels and no neo-angiogenesis seemed resistant to some anti-angiogenic therapies [16,17]. In this context, we observed an association of Oct-4 expression with tumor cell proliferation in patients with weak VEGF-mediated angiogenesis, including MVD-negative and VEGF-negative subsets, indicating that Oct-4 still plays an important role in cell proliferation in NSCLC tumors, even those with weak MVD or VEGF status. Whether Oct-4 expression contributes to resistance to anti-angiogenic therapy thus warrants additional research attention. Although recent reports have also shown that Oct-4 is re-expressed in different human carcinomas, implicating Oct-4 as a potential diagnostic marker in malignancy [25,26], whether Oct-4 expression can be used as a diagnostic tool to monitor the clinical prognosis of NSCLC patients has not been previously substantiated. An analysis of our follow-up data designed to definitively assess the effect of Oct-4 immunohistochemical expression on the prognosis of NSCLC patients showed that the post-operative survival duration of patients with high Oct-4 expression was notably shorter than that of patients with low expression. These results indicate that overexpression of Oct-4 has a detrimental effect on prognosis, and further demonstrates that Oct-4 expression may be correlated with the malignant behavior of tumors during NSCLC progression. A combined genomic analysis of the Oct-4/SOX2/NANOG pathway has recently demonstrated high prognostic accuracy in studies of patients with multiple tumor types [27]. Similarly, multivariate analyses of the data presented here demonstrated that Oct-4 expression is an independent factor whose expression might indicate poor prognosis of patients with NSCLC, generally, as well in NSCLC patient subsets, especially those with weak or no neovascularization. A detailed investigation of the association of Oct-4 expression with treatment response, particularly a characterization of the molecular phenotype of tumors following downregulation of Oct-4, would provide further support for this interpretation. Conclusion: In summary, a multivariate analysis demonstrated that Oct-4 expression was an independent predictor of overall survival, suggesting that Oct-4 may be useful as a molecular marker to assess the prognosis of patients with primary NSCLC, especially those without prominent neovascularization. In patients without prominent tumor-induced angiogenesis, Oct-4-overexpressing cells in primary NSCLC tissue represent a reservoir of tumor cells with differentiation potential; moreover, Oct-4 may maintain tumor cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation. 
The molecular mechanisms by which Oct-4 sustains the self-renewal capacity of tumor cells, especially those with poor neovascularization status, are poorly understood and are the focus of our future studies. Developing strategies to inhibit Oct-4 during tumor progression may have positive prognostic implications in primary NSCLC patients.
Background: Expression of the stem cell marker octamer 4 (Oct-4) in various neoplasms has been previously reported, but very little is currently known about the potential function of Oct-4 in this setting. The purpose of this study was to assess the prognostic value of Oct-4 expression after surgery in primary non-small cell lung cancer (NSCLC) and to investigate its possible molecular mechanism. Methods: We measured Oct-4 expression in 113 NSCLC tissue samples and three cell lines by immunohistochemical staining and RT-PCR. The associations of Oct-4 expression with demographic characteristics, the proliferative marker Ki-67, microvessel density (MVD), and expression of vascular endothelial growth factor (VEGF) were assessed. Results: Oct-4 expression was detected in 90.3% of samples and was positively correlated with poor differentiation and adenocarcinoma histology, and Oct-4 mRNA was detected in each of the cell lines examined. Overexpression of Oct-4 was strongly associated with cell proliferation in all cases and in the MVD-negative and VEGF-negative subsets. A Kaplan-Meier analysis showed that overexpression of Oct-4 was associated with shorter overall survival in all cases and in the adenocarcinoma, squamous cell carcinoma, MVD-negative, and VEGF-negative subsets. A multivariate analysis demonstrated that the Oct-4 level in tumor tissue was an independent prognostic factor for overall survival in all cases and in the MVD-negative and VEGF-negative subsets. Conclusions: Our findings suggest that, even in the context of vulnerable MVD status and VEGF expression, overexpression of Oct-4 in tumor tissue represents a prognostic factor in primary NSCLC patients. Oct-4 may maintain NSCLC cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation.
Background: Despite recent progress in treatment, lung cancer remains the leading cause of cancer deaths in both women and men throughout the world [1]. Not all patients with lung cancer benefit from routine surgery and chemotherapy. This is especially true for those with primary non-small cell lung cancer (NSCLC), the most common malignancy in the thoracic field, where such therapies have been tried with limited efficacy [2]. To improve patient survival rate, researchers have increasingly focused on understanding specific characteristics of NSCLCs as a means to elucidate the mechanism of tumor development and develop possible targeted therapeutic approaches. Octamer 4 (Oct-4), a member of the POU-domain transcription factor family, is normally expressed in both adult and embryonic stem cells [3,4]. Recent reports have demonstrated that Oct-4 is not only involved in controlling the maintenance of stem cell pluripotency, but is also specifically responsible for the unlimited proliferative potential of stem cells, suggesting that Oct-4 functions as a master switch during differentiation of human somatic cell [5-7]. Interestingly, Oct-4 is also re-expressed in germ cell tumors [8], breast cancer [9], bladder cancer [10], prostate cancer and hepatomas [11,12], but very little is known about its potential function in malignant disease [13]. Moreover, overexpression of Oct-4 increases the malignant potential of tumors, and downregulation of Oct-4 in tumor cells inhibits tumor growth, suggesting that Oct-4 might play a key role in maintaining the survival of cancer cells [13,14]. Although its asymmetric expression may indicate that Oct-4 is a suitable target for therapeutic intervention in adenocarcinoma and bronchioloalveolar carcinoma [15], the role of Oct-4 expression in primary NSCLC has remained ill defined. To address this potential role, we assessed Oct-4 expression in cancer specimens from 113 patients with primary NSCLC by immunohistochemical staining. We further investigated the association of Oct-4 expression in NSCLC tumor cells with some important clinical pathological indices. In addition, we examined the involvement of Oct-4 in tumor cell proliferation and tumor-induced angiogenesis in NSCLC by relating Oct-4 expression with microvessel density (MVD), and expression of Ki-67 and vascular endothelial growth factor (VEGF), proliferative and the vascular markers, respectively. On the basis of previous reports that a subset of NSCLC tumors do not induce angiogenesis but instead co-opt the normal vasculature for further growth [16,17], we also evaluated associations of Oct-4 expression with tumor cell proliferation and prognosis in subsets of patients with weak VEGF-mediated angiogenesis (disregarding the nonangiogenic subsets of NSCLC in the analysis, which would tend to obscure the role of Oct-4 expression in primary NSCLC). Our results provide the first demonstration that expression of the stem cell marker Oct-4 maintains tumor cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation. Moreover, even in the context of vulnerable MVD status and VEGF expression, Oct-4 plays an important role in tumor cell proliferation and contributes to poor prognosis in human NSCLC. Conclusion: Grant support: This work was supported by grants from the National Basic Research Program of China (973 Program, No. 2008CB517406), the National Natural Science Foundation of China (No. 
30671023, 30971675, 30900729), and the Key Scientific and Technological Projects of Guangdong Province (No. 2007A032100003).
Background: Expression of the stem cell marker octamer 4 (Oct-4) in various neoplasms has been previously reported, but very little is currently known about the potential function of Oct-4 in this setting. The purpose of this study was to assess the prognostic value of Oct-4 expression after surgery in primary non-small cell lung cancer (NSCLC) and to investigate its possible molecular mechanism. Methods: We measured Oct-4 expression in 113 NSCLC tissue samples and three cell lines by immunohistochemical staining and RT-PCR. The associations of Oct-4 expression with demographic characteristics, the proliferative marker Ki-67, microvessel density (MVD), and expression of vascular endothelial growth factor (VEGF) were assessed. Results: Oct-4 expression was detected in 90.3% of samples and was positively correlated with poor differentiation and adenocarcinoma histology, and Oct-4 mRNA was detected in each of the cell lines examined. Overexpression of Oct-4 was strongly associated with cell proliferation in all cases and in the MVD-negative and VEGF-negative subsets. A Kaplan-Meier analysis showed that overexpression of Oct-4 was associated with shorter overall survival in all cases and in the adenocarcinoma, squamous cell carcinoma, MVD-negative, and VEGF-negative subsets. A multivariate analysis demonstrated that the Oct-4 level in tumor tissue was an independent prognostic factor for overall survival in all cases and in the MVD-negative and VEGF-negative subsets. Conclusions: Our findings suggest that, even in the context of vulnerable MVD status and VEGF expression, overexpression of Oct-4 in tumor tissue represents a prognostic factor in primary NSCLC patients. Oct-4 may maintain NSCLC cells in a poorly differentiated state through a mechanism that depends on promoting cell proliferation.
9,660
312
[ 569, 182, 63, 300, 180, 151, 63, 3285, 274, 353, 46, 270, 664, 1060, 146 ]
16
[ "oct", "expression", "oct expression", "tumor", "negative", "survival", "cases", "median", "patients", "cell" ]
[ "oct tumor cells", "lung cancer benefit", "tumors downregulation oct", "therapeutic approaches octamer", "expression lung cancer" ]
null
[CONTENT] Oct-4 | Non-small cell lung cancer | Prognosis | Proliferation | Angiogenesis [SUMMARY]
[CONTENT] Oct-4 | Non-small cell lung cancer | Prognosis | Proliferation | Angiogenesis [SUMMARY]
null
[CONTENT] Oct-4 | Non-small cell lung cancer | Prognosis | Proliferation | Angiogenesis [SUMMARY]
[CONTENT] Oct-4 | Non-small cell lung cancer | Prognosis | Proliferation | Angiogenesis [SUMMARY]
[CONTENT] Oct-4 | Non-small cell lung cancer | Prognosis | Proliferation | Angiogenesis [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Cell Line, Tumor | Cell Proliferation | Female | Gene Expression Regulation, Neoplastic | Humans | Kaplan-Meier Estimate | Ki-67 Antigen | Male | Middle Aged | Neoplasm Staging | Neovascularization, Pathologic | Octamer Transcription Factor-3 | Prognosis | Vascular Endothelial Growth Factor A [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Cell Line, Tumor | Cell Proliferation | Female | Gene Expression Regulation, Neoplastic | Humans | Kaplan-Meier Estimate | Ki-67 Antigen | Male | Middle Aged | Neoplasm Staging | Neovascularization, Pathologic | Octamer Transcription Factor-3 | Prognosis | Vascular Endothelial Growth Factor A [SUMMARY]
null
[CONTENT] Adenocarcinoma | Adult | Aged | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Cell Line, Tumor | Cell Proliferation | Female | Gene Expression Regulation, Neoplastic | Humans | Kaplan-Meier Estimate | Ki-67 Antigen | Male | Middle Aged | Neoplasm Staging | Neovascularization, Pathologic | Octamer Transcription Factor-3 | Prognosis | Vascular Endothelial Growth Factor A [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Cell Line, Tumor | Cell Proliferation | Female | Gene Expression Regulation, Neoplastic | Humans | Kaplan-Meier Estimate | Ki-67 Antigen | Male | Middle Aged | Neoplasm Staging | Neovascularization, Pathologic | Octamer Transcription Factor-3 | Prognosis | Vascular Endothelial Growth Factor A [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Cell Line, Tumor | Cell Proliferation | Female | Gene Expression Regulation, Neoplastic | Humans | Kaplan-Meier Estimate | Ki-67 Antigen | Male | Middle Aged | Neoplasm Staging | Neovascularization, Pathologic | Octamer Transcription Factor-3 | Prognosis | Vascular Endothelial Growth Factor A [SUMMARY]
[CONTENT] oct tumor cells | lung cancer benefit | tumors downregulation oct | therapeutic approaches octamer | expression lung cancer [SUMMARY]
[CONTENT] oct tumor cells | lung cancer benefit | tumors downregulation oct | therapeutic approaches octamer | expression lung cancer [SUMMARY]
null
[CONTENT] oct tumor cells | lung cancer benefit | tumors downregulation oct | therapeutic approaches octamer | expression lung cancer [SUMMARY]
[CONTENT] oct tumor cells | lung cancer benefit | tumors downregulation oct | therapeutic approaches octamer | expression lung cancer [SUMMARY]
[CONTENT] oct tumor cells | lung cancer benefit | tumors downregulation oct | therapeutic approaches octamer | expression lung cancer [SUMMARY]
[CONTENT] oct | expression | oct expression | tumor | negative | survival | cases | median | patients | cell [SUMMARY]
[CONTENT] oct | expression | oct expression | tumor | negative | survival | cases | median | patients | cell [SUMMARY]
null
[CONTENT] oct | expression | oct expression | tumor | negative | survival | cases | median | patients | cell [SUMMARY]
[CONTENT] oct | expression | oct expression | tumor | negative | survival | cases | median | patients | cell [SUMMARY]
[CONTENT] oct | expression | oct expression | tumor | negative | survival | cases | median | patients | cell [SUMMARY]
[CONTENT] oct | cancer | expression | role | tumor | cell | nsclc | stem | oct expression | cell proliferation [SUMMARY]
[CONTENT] tumor | staining | percentage | counts | usa | cells | cruz | agg | santa | intensity [SUMMARY]
null
[CONTENT] oct | tumor | cells | primary nsclc | neovascularization | molecular | primary | tumor cells | patients | especially [SUMMARY]
[CONTENT] oct | expression | tumor | oct expression | cell | cases | negative | median | nsclc | patients [SUMMARY]
[CONTENT] oct | expression | tumor | oct expression | cell | cases | negative | median | nsclc | patients [SUMMARY]
[CONTENT] 4 | Oct-4 | Oct-4 ||| Oct-4 | NSCLC [SUMMARY]
[CONTENT] Oct-4 | 113 | three | RT-PCR ||| MVD | VEGF [SUMMARY]
null
[CONTENT] MVD | VEGF | Oct-4 | NSCLC ||| Oct-4 | NSCLC [SUMMARY]
[CONTENT] 4 | Oct-4 | Oct-4 ||| Oct-4 | NSCLC ||| Oct-4 | 113 | three | RT-PCR ||| MVD | VEGF ||| ||| Oct-4 | 90.3% ||| MVD | VEGF ||| Oct-4 | MVD | VEGF ||| MVD | VEGF ||| MVD | VEGF | Oct-4 | NSCLC ||| Oct-4 | NSCLC [SUMMARY]
[CONTENT] 4 | Oct-4 | Oct-4 ||| Oct-4 | NSCLC ||| Oct-4 | 113 | three | RT-PCR ||| MVD | VEGF ||| ||| Oct-4 | 90.3% ||| MVD | VEGF ||| Oct-4 | MVD | VEGF ||| MVD | VEGF ||| MVD | VEGF | Oct-4 | NSCLC ||| Oct-4 | NSCLC [SUMMARY]
Breakthrough COVID-19 Infection During the Delta Variant Dominant Period: Individualized Care Based on Vaccination Status Is Needed.
35971766
The clinical features of coronavirus disease 2019 (COVID-19) patients in the COVID-19 vaccination era need to be clarified because breakthrough infection after vaccination is not uncommon.
BACKGROUND
We retrospectively analyzed hospitalized COVID-19 patients during a delta variant-dominant period 6 months after the national COVID-19 vaccination rollout. The clinical characteristics and risk factors for severe progression were assessed and subclassified according to vaccination status.
METHODS
A total of 438 COVID-19 patients were included; the numbers of patients in the unvaccinated, partially vaccinated and fully vaccinated groups were 188 (42.9%), 117 (26.7%) and 133 (30.4%), respectively. The vaccinated group was older, less symptomatic and had a higher Charlson comorbidity index at presentation. The proportions of patients who experienced severe progression in the unvaccinated and fully vaccinated groups were 20.3% (31/153) and 10.8% (13/120), respectively. Older age, diabetes mellitus, solid cancer, elevated levels of lactate dehydrogenase and chest X-ray abnormalities were associated with severe progression, and vaccination at least once was the only protective factor against severe progression. Chest X-ray abnormalities at presentation were the only predictor of severe progression among fully vaccinated patients.
RESULTS
In the hospitalized setting, vaccinated and unvaccinated COVID-19 patients showed different clinical features and risk of oxygen demand despite a relatively high proportion of patients in the two groups. Vaccination needs to be assessed as an initial checkpoint, and chest X-ray may be helpful for predicting severe progression in vaccinated patients.
CONCLUSION
[ "COVID-19", "COVID-19 Vaccines", "Humans", "Retrospective Studies", "SARS-CoV-2", "Vaccination" ]
9424692
INTRODUCTION
Vaccination has been one of the key ways in which the 2-year-long coronavirus disease 2019 (COVID-19) pandemic has been addressed. Several types of COVID-19 vaccines were rapidly rolled out and have shown efficacy ranging from 74% to 95% in preventing COVID-19, depending on the vaccine platform [1,2,3]. Severe aggravation was also shown to be significantly prevented by the available vaccines [2,3]. The imperfect efficacy of vaccination has resulted in breakthrough COVID-19 infections and severe progression of COVID-19 regardless of vaccination status, and the factors related to those events are of both scientific and clinical interest. Although the worldwide pandemic situation is largely easing, there is regional imbalance in the severity of the pandemic and in vaccine coverage, especially in countries with poor resources. Even in high-income countries, the proportion of unvaccinated individuals is not low. As the world approaches a strategy of living with COVID-19, physicians will frequently find themselves caring for vaccinated and unvaccinated COVID-19 patients and need to be educated about the clinical features of those patients. Since the first rollout of the AZD1222 (ChAdOx1 nCoV-19) vaccine on February 26, 2021, national vaccine coverage in Korea has been favorable, but patients with breakthrough infections have been reported [4,5]. Although population-based studies estimating the effectiveness of COVID-19 vaccines have been reported [6,7,8], information about the clinical features and risk factors for aggravation of vaccinated and unvaccinated COVID-19 patients in a real-world setting would be very helpful to physicians. The Korean national COVID-19 policy provided an opportunity to investigate vaccinated and unvaccinated COVID-19 patients needing hospitalization. In this study, we aimed to describe how the clinical manifestations of COVID-19 patients present in a real-world in-hospital care setting during the COVID-19 vaccination era and to determine the clinical implications of vaccination status in individual patient care.
METHODS
Patients and study setting All hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group. All hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group. Data collection and definitions Data were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes. Initial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated. The use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. 
Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists. Data were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes. Initial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated. The use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists. Statistical analysis Student’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA). Student’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. 
To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA). Ethics statement This study was approved by the Institutional Review Board (IRB) of Boramae Medical Center (No. 20-2021-54). The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. This study was conducted in compliance with the Declaration of Helsinki. This study was approved by the Institutional Review Board (IRB) of Boramae Medical Center (No. 20-2021-54). The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. This study was conducted in compliance with the Declaration of Helsinki.
RESULTS
Clinical features of unvaccinated and vaccinated COVID-19 patients A total of 438 patients were included in the analysis; 188 (42.9%) and 250 (57.1%) patients were included in the unvaccinated and vaccinated groups, respectively. Of the 250 vaccinated patients, 133 (53.2%) were fully vaccinated, and 117 (46.8%) were partially vaccinated (Fig. 1). Analysis of the baseline characteristics of the patients showed that the vaccinated group was older, had a shorter time lag from diagnosis to admission or from onset of illness to admission, and had a lower initial NEWS-2 than the unvaccinated group (Table 1). The vaccinated group also had a higher CCI (median 2.0 vs. 0.0, P < 0.001) and a higher incidence of other underlying diseases (hypertension, diabetes mellitus, and chronic kidney disease) than the unvaccinated group. Patients without any underlying diseases were more frequent in the unvaccinated group. At admission, dyspnea and fever exceeding 38.0°C were more frequent, and white blood cell and platelet counts were lower, but lactate dehydrogenase (LDH) level and the proportion of chest X-ray abnormalities was higher (70.7% vs. 47.2%, P < 0.001) in the unvaccinated group than in the vaccinated group (Table 1). Values are presented as number (%) or median (interquartile range). BMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. The fully vaccinated group was older and had a shorter time lag from diagnosis to admission or onset of illness to admission than the partially vaccinated group. In the partially vaccinated group, 70.9% of patients had been vaccinated with BNT162b2 (Table 2). The fully vaccinated group had higher median CCI (3.0 vs. 1.0, P < 0.001) and higher proportion of individuals with hypertension and chronic kidney disease than the partially vaccinated group. At presentation, cough was more frequent and the level of LDH was higher in the partially vaccinated group than in the fully vaccinated group (Table 2). Values are presented as number (%) or median (interquartile range). BMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. The unvaccinated group required more oxygen therapy during the clinical course of the disease than the vaccinated group (nasal prong, 30.8% vs. 16.8%; facial mask, 0.0% vs. 0.4%; high flow, 2.7% vs. 2.8%; intubation, 1.6% vs. 1.2%; P = 0.008) (Table 3). Thirty-five patients in the unvaccinated group required oxygen therapy on the day of admission; of the 153 patients in that group without initial oxygen demand, 31 (20.3%) experienced clinical progression to severe disease. Thirteen patients in the fully vaccinated group required oxygen therapy initially; of the 120 patients in that group without initial oxygen demand, 13 (10.8%) required oxygen therapy later (Fig. 2). Consistent with their greater need for oxygen therapy, the unvaccinated group received more remdesivir and corticosteroids. Antibacterial agents were used more frequently in the unvaccinated group than in the vaccinated group (38.3% vs. 21.2%, P < 0.001) (Table 3). 
The fully vaccinated group remained in the hospital longer (median 10.0 days vs. 8.0 days, P < 0.001) than did the partially vaccinated group, but the proportion of patients needing oxygen therapy did not differ between the fully and partially vaccinated groups (Supplementary Table 1). Values are presented as number (%) or median (interquartile range). Monoclonal antibody: only regdanvimab (CT-P59; Celltrion Inc., Incheon, Korea) was used in Korea. UV = unvaccinated, FV = fully vaccinated. A total of 438 patients were included in the analysis; 188 (42.9%) and 250 (57.1%) patients were included in the unvaccinated and vaccinated groups, respectively. Of the 250 vaccinated patients, 133 (53.2%) were fully vaccinated, and 117 (46.8%) were partially vaccinated (Fig. 1). Analysis of the baseline characteristics of the patients showed that the vaccinated group was older, had a shorter time lag from diagnosis to admission or from onset of illness to admission, and had a lower initial NEWS-2 than the unvaccinated group (Table 1). The vaccinated group also had a higher CCI (median 2.0 vs. 0.0, P < 0.001) and a higher incidence of other underlying diseases (hypertension, diabetes mellitus, and chronic kidney disease) than the unvaccinated group. Patients without any underlying diseases were more frequent in the unvaccinated group. At admission, dyspnea and fever exceeding 38.0°C were more frequent, and white blood cell and platelet counts were lower, but lactate dehydrogenase (LDH) level and the proportion of chest X-ray abnormalities was higher (70.7% vs. 47.2%, P < 0.001) in the unvaccinated group than in the vaccinated group (Table 1). Values are presented as number (%) or median (interquartile range). BMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. The fully vaccinated group was older and had a shorter time lag from diagnosis to admission or onset of illness to admission than the partially vaccinated group. In the partially vaccinated group, 70.9% of patients had been vaccinated with BNT162b2 (Table 2). The fully vaccinated group had higher median CCI (3.0 vs. 1.0, P < 0.001) and higher proportion of individuals with hypertension and chronic kidney disease than the partially vaccinated group. At presentation, cough was more frequent and the level of LDH was higher in the partially vaccinated group than in the fully vaccinated group (Table 2). Values are presented as number (%) or median (interquartile range). BMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. The unvaccinated group required more oxygen therapy during the clinical course of the disease than the vaccinated group (nasal prong, 30.8% vs. 16.8%; facial mask, 0.0% vs. 0.4%; high flow, 2.7% vs. 2.8%; intubation, 1.6% vs. 1.2%; P = 0.008) (Table 3). Thirty-five patients in the unvaccinated group required oxygen therapy on the day of admission; of the 153 patients in that group without initial oxygen demand, 31 (20.3%) experienced clinical progression to severe disease. 
Thirteen patients in the fully vaccinated group required oxygen therapy initially; of the 120 patients in that group without initial oxygen demand, 13 (10.8%) required oxygen therapy later (Fig. 2). Consistent with their greater need for oxygen therapy, the unvaccinated group received more remdesivir and corticosteroids. Antibacterial agents were used more frequently in the unvaccinated group than in the vaccinated group (38.3% vs. 21.2%, P < 0.001) (Table 3). The fully vaccinated group remained in the hospital longer (median 10.0 days vs. 8.0 days, P < 0.001) than did the partially vaccinated group, but the proportion of patients needing oxygen therapy did not differ between the fully and partially vaccinated groups (Supplementary Table 1). Values are presented as number (%) or median (interquartile range). Monoclonal antibody: only regdanvimab (CT-P59; Celltrion Inc., Incheon, Korea) was used in Korea. UV = unvaccinated, FV = fully vaccinated. Risk factors for severe progression in COVID-19 patients and vaccination status The risk factors for severe progression during the clinical course among patients who did not initially require oxygen therapy were identified by multivariable analysis. Older age (adjusted odds ratio [aOR], 1.04; 95% confidence interval [CI], 1.02–1.07; P = 0.003), diabetes mellitus (aOR, 2.31; 95% CI, 1.00–5.33; P = 0.049), solid cancer (aOR, 3.18; 95% CI, 1.14–8.85; P = 0.027), elevated level of LDH (aOR, 2.99; 95% CI, 1.34–6.65; P = 0.007), and chest X-ray abnormality (aOR, 3.41; 95% CI, 1.31–8.89; P = 0.012) were significantly associated with severe progression. Vaccination at least once was associated with reduced risk of severe progression (aOR, 0.35; 95% CI, 0.15–0.79; P = 0.011) (Table 4). In the fully vaccinated patients without initial oxygen demand, chest X-ray abnormalities were the only risk factor for severe progression (aOR, 20.16; 95% CI, 2.43–167.66; P = 0.005) by multivariable analysis. Elevated level of C-reactive protein (CRP) was excluded from the multivariable model because the P value of the Hosmer-Lemeshow test was < 0.05 in the model including elevated level of CRP (Table 5). OR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. OR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. The risk factors for severe progression during the clinical course among patients who did not initially require oxygen therapy were identified by multivariable analysis. Older age (adjusted odds ratio [aOR], 1.04; 95% confidence interval [CI], 1.02–1.07; P = 0.003), diabetes mellitus (aOR, 2.31; 95% CI, 1.00–5.33; P = 0.049), solid cancer (aOR, 3.18; 95% CI, 1.14–8.85; P = 0.027), elevated level of LDH (aOR, 2.99; 95% CI, 1.34–6.65; P = 0.007), and chest X-ray abnormality (aOR, 3.41; 95% CI, 1.31–8.89; P = 0.012) were significantly associated with severe progression. 
Vaccination at least once was associated with reduced risk of severe progression (aOR, 0.35; 95% CI, 0.15–0.79; P = 0.011) (Table 4). In the fully vaccinated patients without initial oxygen demand, chest X-ray abnormalities were the only risk factor for severe progression (aOR, 20.16; 95% CI, 2.43–167.66; P = 0.005) by multivariable analysis. Elevated level of C-reactive protein (CRP) was excluded from the multivariable model because the P value of the Hosmer-Lemeshow test was < 0.05 in the model including elevated level of CRP (Table 5). OR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. OR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.
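Adjusted odds ratios and 95% confidence intervals such as those quoted above are obtained by exponentiating the fitted log-odds coefficients and their confidence bounds. A self-contained sketch with synthetic data (not the study data) shows the conversion; the variable names are hypothetical.

```python
# Converting fitted logistic-regression coefficients into adjusted odds ratios
# (aOR) with 95% CIs, in the style of Tables 4 and 5. Synthetic data are used
# only so the snippet runs on its own; it does not reproduce the study results.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(35, 90, 300),
    "cxr_abnormal": rng.integers(0, 2, 300),
})
logit = 0.03 * (df["age"] - 60) + 1.2 * df["cxr_abnormal"] - 2.0
df["severe"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age", "cxr_abnormal"]])
fit = sm.Logit(df["severe"], X).fit(disp=False)

ci = fit.conf_int()                              # log-odds 2.5% / 97.5% bounds
table = pd.DataFrame({
    "aOR": np.exp(fit.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p": fit.pvalues,
}).drop(index="const")                           # the intercept is not an odds ratio
print(table.round(3))
```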
null
null
[ "Patients and study setting", "Data collection and definitions", "Statistical analysis", "Clinical features of unvaccinated and vaccinated COVID-19 patients", "Risk factors for severe progression in COVID-19 patients and vaccination status" ]
[ "All hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group.", "Data were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes.\nInitial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated.\nThe use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists.", "Student’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. 
P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA).", "A total of 438 patients were included in the analysis; 188 (42.9%) and 250 (57.1%) patients were included in the unvaccinated and vaccinated groups, respectively. Of the 250 vaccinated patients, 133 (53.2%) were fully vaccinated, and 117 (46.8%) were partially vaccinated (Fig. 1).\nAnalysis of the baseline characteristics of the patients showed that the vaccinated group was older, had a shorter time lag from diagnosis to admission or from onset of illness to admission, and had a lower initial NEWS-2 than the unvaccinated group (Table 1). The vaccinated group also had a higher CCI (median 2.0 vs. 0.0, P < 0.001) and a higher incidence of other underlying diseases (hypertension, diabetes mellitus, and chronic kidney disease) than the unvaccinated group. Patients without any underlying diseases were more frequent in the unvaccinated group. At admission, dyspnea and fever exceeding 38.0°C were more frequent, and white blood cell and platelet counts were lower, but lactate dehydrogenase (LDH) level and the proportion of chest X-ray abnormalities was higher (70.7% vs. 47.2%, P < 0.001) in the unvaccinated group than in the vaccinated group (Table 1).\nValues are presented as number (%) or median (interquartile range).\nBMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe fully vaccinated group was older and had a shorter time lag from diagnosis to admission or onset of illness to admission than the partially vaccinated group. In the partially vaccinated group, 70.9% of patients had been vaccinated with BNT162b2 (Table 2). The fully vaccinated group had higher median CCI (3.0 vs. 1.0, P < 0.001) and higher proportion of individuals with hypertension and chronic kidney disease than the partially vaccinated group. At presentation, cough was more frequent and the level of LDH was higher in the partially vaccinated group than in the fully vaccinated group (Table 2).\nValues are presented as number (%) or median (interquartile range).\nBMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe unvaccinated group required more oxygen therapy during the clinical course of the disease than the vaccinated group (nasal prong, 30.8% vs. 16.8%; facial mask, 0.0% vs. 0.4%; high flow, 2.7% vs. 2.8%; intubation, 1.6% vs. 1.2%; P = 0.008) (Table 3). Thirty-five patients in the unvaccinated group required oxygen therapy on the day of admission; of the 153 patients in that group without initial oxygen demand, 31 (20.3%) experienced clinical progression to severe disease. Thirteen patients in the fully vaccinated group required oxygen therapy initially; of the 120 patients in that group without initial oxygen demand, 13 (10.8%) required oxygen therapy later (Fig. 2). Consistent with their greater need for oxygen therapy, the unvaccinated group received more remdesivir and corticosteroids. 
Antibacterial agents were used more frequently in the unvaccinated group than in the vaccinated group (38.3% vs. 21.2%, P < 0.001) (Table 3). The fully vaccinated group remained in the hospital longer (median 10.0 days vs. 8.0 days, P < 0.001) than did the partially vaccinated group, but the proportion of patients needing oxygen therapy did not differ between the fully and partially vaccinated groups (Supplementary Table 1).\nValues are presented as number (%) or median (interquartile range). Monoclonal antibody: only regdanvimab (CT-P59; Celltrion Inc., Incheon, Korea) was used in Korea.\nUV = unvaccinated, FV = fully vaccinated.", "The risk factors for severe progression during the clinical course among patients who did not initially require oxygen therapy were identified by multivariable analysis. Older age (adjusted odds ratio [aOR], 1.04; 95% confidence interval [CI], 1.02–1.07; P = 0.003), diabetes mellitus (aOR, 2.31; 95% CI, 1.00–5.33; P = 0.049), solid cancer (aOR, 3.18; 95% CI, 1.14–8.85; P = 0.027), elevated level of LDH (aOR, 2.99; 95% CI, 1.34–6.65; P = 0.007), and chest X-ray abnormality (aOR, 3.41; 95% CI, 1.31–8.89; P = 0.012) were significantly associated with severe progression. Vaccination with at least one dose was associated with a reduced risk of severe progression (aOR, 0.35; 95% CI, 0.15–0.79; P = 0.011) (Table 4). In the fully vaccinated patients without initial oxygen demand, chest X-ray abnormality was the only risk factor for severe progression (aOR, 20.16; 95% CI, 2.43–167.66; P = 0.005) by multivariable analysis. Elevated level of C-reactive protein (CRP) was excluded from the multivariable model because the P value of the Hosmer-Lemeshow test was < 0.05 when that variable was included (Table 5).\nOR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cycle threshold, RdRp = RNA-dependent RNA polymerase.\nOR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cycle threshold, RdRp = RNA-dependent RNA polymerase." ]
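The vaccination-status and severe-progression definitions given in the methods text above reduce to simple classification rules. The following is a minimal sketch of those stated definitions; the field names (doses_received, last_dose_date, continuous_oxygen_hours, and so on) are hypothetical and do not come from the study's data dictionary.

```python
from datetime import date, timedelta

# Minimal sketch of the study's definitions; field names are illustrative only.

def vaccination_status(doses_received: int, doses_required: int,
                       last_dose_date: date, onset_or_diagnosis_date: date) -> str:
    """Classify a patient as unvaccinated, partially vaccinated, or fully vaccinated.

    Full vaccination: symptom onset or diagnosis more than 14 days after
    completing the recommended regimen; any other vaccinated patient counts
    as partially vaccinated.
    """
    if doses_received == 0:
        return "unvaccinated"
    regimen_completed = doses_received >= doses_required
    if regimen_completed and (onset_or_diagnosis_date - last_dose_date) > timedelta(days=14):
        return "fully vaccinated"
    return "partially vaccinated"


def severe_progression(oxygen_on_admission: bool, continuous_oxygen_hours: list) -> bool:
    """Severe progression: new oxygen demand during the clinical course, where
    oxygen demand means needing oxygen for more than 24 consecutive hours.
    Patients already on oxygen at admission were analysed separately in the study."""
    if oxygen_on_admission:
        return False
    return any(hours > 24 for hours in continuous_oxygen_hours)


# Example usage with made-up values
print(vaccination_status(2, 2, date(2021, 7, 1), date(2021, 9, 20)))  # fully vaccinated
print(severe_progression(False, [30.0]))                              # True
```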
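The statistical workflow described above (univariable screening at P < 0.10, multivariable logistic regression, adjusted odds ratios with 95% confidence intervals, and a Hosmer-Lemeshow goodness-of-fit check) can be illustrated in Python. This is a generic sketch on synthetic data, not the authors' SPSS analysis; the variable names, the data-generating step, and the decile-based Hosmer-Lemeshow implementation are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-in for the analysis data set; names and effect sizes are illustrative.
df = pd.DataFrame({
    "age": rng.integers(19, 90, n),
    "diabetes": rng.integers(0, 2, n),
    "vaccinated_at_least_once": rng.integers(0, 2, n),
    "cxr_abnormal": rng.integers(0, 2, n),
})
true_logit = (-5 + 0.05 * df["age"] + 0.8 * df["diabetes"]
              - 1.0 * df["vaccinated_at_least_once"] + 1.2 * df["cxr_abnormal"])
df["severe_progression"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Multivariable logistic regression (the univariable screen at P < 0.10 is omitted for brevity).
X = sm.add_constant(df[["age", "diabetes", "vaccinated_at_least_once", "cxr_abnormal"]])
model = sm.Logit(df["severe_progression"], X).fit(disp=False)

# Adjusted odds ratios with 95% CIs are the exponentiated coefficients.
ci = model.conf_int()
report = pd.DataFrame({
    "aOR": np.exp(model.params),
    "95% CI low": np.exp(ci[0]),
    "95% CI high": np.exp(ci[1]),
    "P": model.pvalues,
}).drop(index="const")
print(report)

def hosmer_lemeshow(y, p_hat, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit test using deciles of predicted risk."""
    groups = pd.qcut(p_hat, n_groups, duplicates="drop")
    observed = pd.Series(y).groupby(groups, observed=True).sum()
    expected = pd.Series(p_hat).groupby(groups, observed=True).sum()
    sizes = pd.Series(y).groupby(groups, observed=True).count()
    statistic = ((observed - expected) ** 2 / (expected * (1 - expected / sizes))).sum()
    dof = len(sizes) - 2
    return statistic, chi2.sf(statistic, dof)

hl_stat, hl_p = hosmer_lemeshow(df["severe_progression"].to_numpy(),
                                model.predict(X).to_numpy())
print(f"Hosmer-Lemeshow chi2 = {hl_stat:.2f}, P = {hl_p:.3f}")  # P > 0.05 suggests adequate fit
```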
[ null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Patients and study setting", "Data collection and definitions", "Statistical analysis", "Ethics statement", "RESULTS", "Clinical features of unvaccinated and vaccinated COVID-19 patients", "Risk factors for severe progression in COVID-19 patients and vaccination status", "DISCUSSION" ]
[ "Vaccination has been one of the key ways in which the 2-year long coronavirus disease 2019 (COVID-19) pandemic has been addressed. Several types of COVID-19 vaccines were rapidly rolled out and have shown efficacy ranging from 74% to 95% in preventing COVID-19 depending on the vaccine platform.123\nSevere aggravation was also shown to be significantly prevented by the available vaccines.23 The imperfect efficacy of vaccination has resulted in breakthrough COVID-19 infections and severe progression of COVID-19 regardless of vaccination status, and the factors related to those events are of both scientific and clinical interest. Although the worldwide pandemic situation is largely easing, there is regional imbalance in the severity of the pandemic and in the coverage by vaccines, especially in countries with poor resources. Even in high-income countries, the proportion of unvaccinated individuals is not low. As the world approaches a strategy of living with COVID-19, physicians will frequently find themselves caring for vaccinated and unvaccinated COVID-19 patients and need to be educated for the clinical features of those patients.\nSince the first rollout of the AZD1222 (ChAdOx1 nCoV-19) vaccine on February 26, 2021, national vaccine coverage in Korea has been favorable, but patients with breakthrough infections have been reported.45 Although population-based studies estimating the effectiveness of COVID-19 vaccines have been reported,678 information about the clinical features and risk factors for aggravation of vaccinated and unvaccinated COVID-19 patients in a real-world setting would be very helpful to physicians. The Korean national COVID-19 policy provided an opportunity to investigate vaccinated and unvaccinated COVID-19 patients needing hospitalization.\nIn this study, we aimed to describe how the clinical manifestations of COVID-19 patients present in a real-world in-hospital care setting during the COVID-19 vaccination era and to determine the clinical implications of vaccination status in the individual patient care.", " Patients and study setting All hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group.\nAll hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). 
During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group.\n Data collection and definitions Data were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes.\nInitial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated.\nThe use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists.\nData were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes.\nInitial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. 
Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated.\nThe use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists.\n Statistical analysis Student’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA).\nStudent’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA).\n Ethics statement This study was approved by the Institutional Review Board (IRB) of Boramae Medical Center (No. 20-2021-54). The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. This study was conducted in compliance with the Declaration of Helsinki.\nThis study was approved by the Institutional Review Board (IRB) of Boramae Medical Center (No. 20-2021-54). The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. This study was conducted in compliance with the Declaration of Helsinki.", "All hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. 
Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group.", "Data were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes.\nInitial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated.\nThe use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists.", "Student’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA).", "This study was approved by the Institutional Review Board (IRB) of Boramae Medical Center (No. 20-2021-54). 
The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. This study was conducted in compliance with the Declaration of Helsinki.", " Clinical features of unvaccinated and vaccinated COVID-19 patients A total of 438 patients were included in the analysis; 188 (42.9%) and 250 (57.1%) patients were included in the unvaccinated and vaccinated groups, respectively. Of the 250 vaccinated patients, 133 (53.2%) were fully vaccinated, and 117 (46.8%) were partially vaccinated (Fig. 1).\nAnalysis of the baseline characteristics of the patients showed that the vaccinated group was older, had a shorter time lag from diagnosis to admission or from onset of illness to admission, and had a lower initial NEWS-2 than the unvaccinated group (Table 1). The vaccinated group also had a higher CCI (median 2.0 vs. 0.0, P < 0.001) and a higher incidence of other underlying diseases (hypertension, diabetes mellitus, and chronic kidney disease) than the unvaccinated group. Patients without any underlying diseases were more frequent in the unvaccinated group. At admission, dyspnea and fever exceeding 38.0°C were more frequent, and white blood cell and platelet counts were lower, but lactate dehydrogenase (LDH) level and the proportion of chest X-ray abnormalities was higher (70.7% vs. 47.2%, P < 0.001) in the unvaccinated group than in the vaccinated group (Table 1).\nValues are presented as number (%) or median (interquartile range).\nBMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe fully vaccinated group was older and had a shorter time lag from diagnosis to admission or onset of illness to admission than the partially vaccinated group. In the partially vaccinated group, 70.9% of patients had been vaccinated with BNT162b2 (Table 2). The fully vaccinated group had higher median CCI (3.0 vs. 1.0, P < 0.001) and higher proportion of individuals with hypertension and chronic kidney disease than the partially vaccinated group. At presentation, cough was more frequent and the level of LDH was higher in the partially vaccinated group than in the fully vaccinated group (Table 2).\nValues are presented as number (%) or median (interquartile range).\nBMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe unvaccinated group required more oxygen therapy during the clinical course of the disease than the vaccinated group (nasal prong, 30.8% vs. 16.8%; facial mask, 0.0% vs. 0.4%; high flow, 2.7% vs. 2.8%; intubation, 1.6% vs. 1.2%; P = 0.008) (Table 3). Thirty-five patients in the unvaccinated group required oxygen therapy on the day of admission; of the 153 patients in that group without initial oxygen demand, 31 (20.3%) experienced clinical progression to severe disease. Thirteen patients in the fully vaccinated group required oxygen therapy initially; of the 120 patients in that group without initial oxygen demand, 13 (10.8%) required oxygen therapy later (Fig. 2). 
Consistent with their greater need for oxygen therapy, the unvaccinated group received more remdesivir and corticosteroids. Antibacterial agents were used more frequently in the unvaccinated group than in the vaccinated group (38.3% vs. 21.2%, P < 0.001) (Table 3). The fully vaccinated group remained in the hospital longer (median 10.0 days vs. 8.0 days, P < 0.001) than did the partially vaccinated group, but the proportion of patients needing oxygen therapy did not differ between the fully and partially vaccinated groups (Supplementary Table 1).\nValues are presented as number (%) or median (interquartile range). Monoclonal antibody: only regdanvimab (CT-P59; Celltrion Inc., Incheon, Korea) was used in Korea.\nUV = unvaccinated, FV = fully vaccinated.\nA total of 438 patients were included in the analysis; 188 (42.9%) and 250 (57.1%) patients were included in the unvaccinated and vaccinated groups, respectively. Of the 250 vaccinated patients, 133 (53.2%) were fully vaccinated, and 117 (46.8%) were partially vaccinated (Fig. 1).\nAnalysis of the baseline characteristics of the patients showed that the vaccinated group was older, had a shorter time lag from diagnosis to admission or from onset of illness to admission, and had a lower initial NEWS-2 than the unvaccinated group (Table 1). The vaccinated group also had a higher CCI (median 2.0 vs. 0.0, P < 0.001) and a higher incidence of other underlying diseases (hypertension, diabetes mellitus, and chronic kidney disease) than the unvaccinated group. Patients without any underlying diseases were more frequent in the unvaccinated group. At admission, dyspnea and fever exceeding 38.0°C were more frequent, and white blood cell and platelet counts were lower, but lactate dehydrogenase (LDH) level and the proportion of chest X-ray abnormalities was higher (70.7% vs. 47.2%, P < 0.001) in the unvaccinated group than in the vaccinated group (Table 1).\nValues are presented as number (%) or median (interquartile range).\nBMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe fully vaccinated group was older and had a shorter time lag from diagnosis to admission or onset of illness to admission than the partially vaccinated group. In the partially vaccinated group, 70.9% of patients had been vaccinated with BNT162b2 (Table 2). The fully vaccinated group had higher median CCI (3.0 vs. 1.0, P < 0.001) and higher proportion of individuals with hypertension and chronic kidney disease than the partially vaccinated group. At presentation, cough was more frequent and the level of LDH was higher in the partially vaccinated group than in the fully vaccinated group (Table 2).\nValues are presented as number (%) or median (interquartile range).\nBMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe unvaccinated group required more oxygen therapy during the clinical course of the disease than the vaccinated group (nasal prong, 30.8% vs. 16.8%; facial mask, 0.0% vs. 0.4%; high flow, 2.7% vs. 2.8%; intubation, 1.6% vs. 1.2%; P = 0.008) (Table 3). 
Thirty-five patients in the unvaccinated group required oxygen therapy on the day of admission; of the 153 patients in that group without initial oxygen demand, 31 (20.3%) experienced clinical progression to severe disease. Thirteen patients in the fully vaccinated group required oxygen therapy initially; of the 120 patients in that group without initial oxygen demand, 13 (10.8%) required oxygen therapy later (Fig. 2). Consistent with their greater need for oxygen therapy, the unvaccinated group received more remdesivir and corticosteroids. Antibacterial agents were used more frequently in the unvaccinated group than in the vaccinated group (38.3% vs. 21.2%, P < 0.001) (Table 3). The fully vaccinated group remained in the hospital longer (median 10.0 days vs. 8.0 days, P < 0.001) than did the partially vaccinated group, but the proportion of patients needing oxygen therapy did not differ between the fully and partially vaccinated groups (Supplementary Table 1).\nValues are presented as number (%) or median (interquartile range). Monoclonal antibody: only regdanvimab (CT-P59; Celltrion Inc., Incheon, Korea) was used in Korea.\nUV = unvaccinated, FV = fully vaccinated.\n Risk factors for severe progression in COVID-19 patients and vaccination status The risk factors for severe progression during the clinical course among patients who did not initially require oxygen therapy were identified by multivariable analysis. Older age (adjusted odds ratio [aOR], 1.04; 95% confidence interval [CI], 1.02–1.07; P = 0.003), diabetes mellitus (aOR, 2.31; 95% CI, 1.00–5.33; P = 0.049), solid cancer (aOR, 3.18; 95% CI, 1.14–8.85; P = 0.027), elevated level of LDH (aOR, 2.99; 95% CI, 1.34–6.65; P = 0.007), and chest X-ray abnormality (aOR, 3.41; 95% CI, 1.31–8.89; P = 0.012) were significantly associated with severe progression. Vaccination at least once was associated with reduced risk of severe progression (aOR, 0.35; 95% CI, 0.15–0.79; P = 0.011) (Table 4). In the fully vaccinated patients without initial oxygen demand, chest X-ray abnormalities were the only risk factor for severe progression (aOR, 20.16; 95% CI, 2.43–167.66; P = 0.005) by multivariable analysis. Elevated level of C-reactive protein (CRP) was excluded from the multivariable model because the P value of the Hosmer-Lemeshow test was < 0.05 in the model including elevated level of CRP (Table 5).\nOR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nOR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe risk factors for severe progression during the clinical course among patients who did not initially require oxygen therapy were identified by multivariable analysis. 
Older age (adjusted odds ratio [aOR], 1.04; 95% confidence interval [CI], 1.02–1.07; P = 0.003), diabetes mellitus (aOR, 2.31; 95% CI, 1.00–5.33; P = 0.049), solid cancer (aOR, 3.18; 95% CI, 1.14–8.85; P = 0.027), elevated level of LDH (aOR, 2.99; 95% CI, 1.34–6.65; P = 0.007), and chest X-ray abnormality (aOR, 3.41; 95% CI, 1.31–8.89; P = 0.012) were significantly associated with severe progression. Vaccination at least once was associated with reduced risk of severe progression (aOR, 0.35; 95% CI, 0.15–0.79; P = 0.011) (Table 4). In the fully vaccinated patients without initial oxygen demand, chest X-ray abnormalities were the only risk factor for severe progression (aOR, 20.16; 95% CI, 2.43–167.66; P = 0.005) by multivariable analysis. Elevated level of C-reactive protein (CRP) was excluded from the multivariable model because the P value of the Hosmer-Lemeshow test was < 0.05 in the model including elevated level of CRP (Table 5).\nOR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nOR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.", "A total of 438 patients were included in the analysis; 188 (42.9%) and 250 (57.1%) patients were included in the unvaccinated and vaccinated groups, respectively. Of the 250 vaccinated patients, 133 (53.2%) were fully vaccinated, and 117 (46.8%) were partially vaccinated (Fig. 1).\nAnalysis of the baseline characteristics of the patients showed that the vaccinated group was older, had a shorter time lag from diagnosis to admission or from onset of illness to admission, and had a lower initial NEWS-2 than the unvaccinated group (Table 1). The vaccinated group also had a higher CCI (median 2.0 vs. 0.0, P < 0.001) and a higher incidence of other underlying diseases (hypertension, diabetes mellitus, and chronic kidney disease) than the unvaccinated group. Patients without any underlying diseases were more frequent in the unvaccinated group. At admission, dyspnea and fever exceeding 38.0°C were more frequent, and white blood cell and platelet counts were lower, but lactate dehydrogenase (LDH) level and the proportion of chest X-ray abnormalities was higher (70.7% vs. 47.2%, P < 0.001) in the unvaccinated group than in the vaccinated group (Table 1).\nValues are presented as number (%) or median (interquartile range).\nBMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe fully vaccinated group was older and had a shorter time lag from diagnosis to admission or onset of illness to admission than the partially vaccinated group. In the partially vaccinated group, 70.9% of patients had been vaccinated with BNT162b2 (Table 2). 
The fully vaccinated group had higher median CCI (3.0 vs. 1.0, P < 0.001) and higher proportion of individuals with hypertension and chronic kidney disease than the partially vaccinated group. At presentation, cough was more frequent and the level of LDH was higher in the partially vaccinated group than in the fully vaccinated group (Table 2).\nValues are presented as number (%) or median (interquartile range).\nBMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nThe unvaccinated group required more oxygen therapy during the clinical course of the disease than the vaccinated group (nasal prong, 30.8% vs. 16.8%; facial mask, 0.0% vs. 0.4%; high flow, 2.7% vs. 2.8%; intubation, 1.6% vs. 1.2%; P = 0.008) (Table 3). Thirty-five patients in the unvaccinated group required oxygen therapy on the day of admission; of the 153 patients in that group without initial oxygen demand, 31 (20.3%) experienced clinical progression to severe disease. Thirteen patients in the fully vaccinated group required oxygen therapy initially; of the 120 patients in that group without initial oxygen demand, 13 (10.8%) required oxygen therapy later (Fig. 2). Consistent with their greater need for oxygen therapy, the unvaccinated group received more remdesivir and corticosteroids. Antibacterial agents were used more frequently in the unvaccinated group than in the vaccinated group (38.3% vs. 21.2%, P < 0.001) (Table 3). The fully vaccinated group remained in the hospital longer (median 10.0 days vs. 8.0 days, P < 0.001) than did the partially vaccinated group, but the proportion of patients needing oxygen therapy did not differ between the fully and partially vaccinated groups (Supplementary Table 1).\nValues are presented as number (%) or median (interquartile range). Monoclonal antibody: only regdanvimab (CT-P59; Celltrion Inc., Incheon, Korea) was used in Korea.\nUV = unvaccinated, FV = fully vaccinated.", "The risk factors for severe progression during the clinical course among patients who did not initially require oxygen therapy were identified by multivariable analysis. Older age (adjusted odds ratio [aOR], 1.04; 95% confidence interval [CI], 1.02–1.07; P = 0.003), diabetes mellitus (aOR, 2.31; 95% CI, 1.00–5.33; P = 0.049), solid cancer (aOR, 3.18; 95% CI, 1.14–8.85; P = 0.027), elevated level of LDH (aOR, 2.99; 95% CI, 1.34–6.65; P = 0.007), and chest X-ray abnormality (aOR, 3.41; 95% CI, 1.31–8.89; P = 0.012) were significantly associated with severe progression. Vaccination at least once was associated with reduced risk of severe progression (aOR, 0.35; 95% CI, 0.15–0.79; P = 0.011) (Table 4). In the fully vaccinated patients without initial oxygen demand, chest X-ray abnormalities were the only risk factor for severe progression (aOR, 20.16; 95% CI, 2.43–167.66; P = 0.005) by multivariable analysis. 
Elevated level of C-reactive protein (CRP) was excluded from the multivariable model because the P value of the Hosmer-Lemeshow test was < 0.05 in the model including elevated level of CRP (Table 5).\nOR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.\nOR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase.", "During the study period, the national policy of Korea was to isolate most COVID-19 patients in residential treatment centers that mainly provided daily living support with minimum medical care for 10 days. Patients with high risk factors for severe progression or complication, or symptomatic patients who required further medical care were admitted or transferred to dedicated hospitals such as our study hospital. In addition, COVID-19 vaccination in Korea was first made available to the high-risk group, which included elderly individuals. Although the percentage of fully vaccinated individuals in the national population ≥ 18 years of age was 54.2%, it differed markedly in different age groups. The percentages of fully vaccinated individuals over 80, 70–79, and 60–69 years of age were 79.8%, 89.6%, and 87.6%, respectively, but among individuals 50–59, 40-49, 30–39, and 18–29 years of age, the percentages were 58.4%, 32.2%, 36.1% and 32.5%, as of 28 September 2021.\nOur results showed that the hospitalized COVID-19 patients in our study had characteristics in accordance with the national admission guidelines for COVID-19 patients. The vaccinated patients were older, had more comorbidities and were admitted to the hospital as early as possible in the fear that their condition might worsen. This might result in lower clinical severity at admission among these patients. In contrast, the unvaccinated patients were largely younger, had fewer comorbidities and were admitted to the hospital later in the clinical course of the disease due to severe manifestations that were not prevented by vaccination. We did not intend to compare the individual study subjects, vaccinated and unvaccinated directly; instead, we wanted to describe the type of COVID-19 patients we would expect to see in the hospital after the peak of the pandemic eased down. Although the composition of study subjects can be directly attributed to the national policy for allocation of COVID-19 patients, the scenario may be similar in real-world practice in the near future or might exist in other regions of the globe due to the existence of different policies for pandemic control in different countries.\nThe proportion of patients who required supplemental oxygen was higher in the unvaccinated group than in the vaccinated group. Considering the older age of the vaccinated group and the higher proportion of comorbidities in that group, the fact that a lower proportion of patients in the vaccinated group needed oxygen therapy might be due to the effect of vaccination. 
Among unvaccinated and fully vaccinated patients without initial oxygen demand, the proportions of patients with severe progression were 20.3% (31/153) and 10.8% (13/120), respectively. One study conducted in Korea during the early COVID-19 pandemic showed that 11.7% (16/136) of unvaccinated patients experienced clinical aggravation with a need for oxygen treatment during the clinical course of the disease.11 As the delta variant was the predominant strain at the time of our study (September–October 2021),1213 the higher rate of progression observed in our study might be partially due to the virulence of the delta variant. Compared to the fully vaccinated patients, the proportion of unvaccinated patients who experienced severe progression was higher. Previous studies have shown that COVID-19 vaccination significantly prevents severe outcomes and reduces the length of hospitalization in high-risk groups.1415 The results of our study suggest that a considerable proportion of younger COVID-19 patients with few comorbidities experience high degree of clinical severity and have a need for oxygen therapy in real practice. Fortunately, the oxygen therapy rarely requires invasive ventilation. However, in resource limited settings, even the failure of simple oxygen therapy may lead to a fatal outcome.\nTreatment with remdesivir or dexamethasone was associated with severe cases of the disease. Interestingly, the use of antibacterial agents was also higher in the unvaccinated group, despite the assumption that the older group with multiple comorbidities might require more antibiotics to cope with complications of the disease. This somewhat unexpected result might be due to the higher clinical severity of the unvaccinated group and may indirectly reflect the tendency to inappropriately prescribe antibiotics for patients with COVID-19 who show severe symptoms, such as high fever, dyspnea, chest X-ray abnormalities and hypoxemia, not exactly based on the bacterial complications.1617\nRegarding the laboratory findings, an elevated level of LDH at presentation was a predictor of severe progression. One previous study showed that elevated LDH levels during 6–10 days after the onset of illness were associated with clinical aggravation.11 Abnormal baseline chest X-ray finding was also reported to be a risk factor for oxygen requirement.18 In this study, chest X-ray abnormalities at the initial presentation were a risk factor for severe progression among unvaccinated and vaccinated patients, and also in fully vaccinated patients. Therefore, several important predictors of severe progression that were identified in unvaccinated patients during the early part of the COVID-19 pandemic still appear to be risk factors for severe progression, even under the influence of vaccination and change of dominant variant strain.\nAlthough a previous study reported the occurrence of breakthrough infections in fully vaccinated patients,19 the clinical characteristics and outcomes of vaccine breakthrough infections have not been systematically compared with those of infections in unvaccinated patients. Breakthrough COVID-19 infection in vaccinated people is also important from both the clinical and the pathophysiological perspectives. As expected based on studies that showed vaccine efficacy in preventing severe COVID-19,23 vaccination was the only protective factor for severe progression among patients with breakthrough infection and unvaccinated patients. 
As this study was conducted among hospitalized patients, the results may help clinicians manage patients at risk of severe progression in hospital settings. Because chest X-ray abnormality was the only risk factor for clinical progression among fully vaccinated patients, chest X-rays should be checked early during hospitalization, and patients with initial chest X-ray abnormalities should be monitored more closely for clinical progression.\nThis study has several limitations. First, because it was conducted retrospectively at a single center, the generalizability of the results may be limited. Second, the number of fully vaccinated patients with severe progression was relatively small. Third, we did not perform virologic analysis of concurrent SARS-CoV-2 variants. However, the likelihood of bias due to variant differences is probably minimal because the delta variant was the dominant strain in Korea during the study period, so most study patients were presumably infected with it. Although our results may not apply precisely to other variants, the overall clinical framework is likely to be similar.\nIn conclusion, we described the clinical features of unvaccinated and vaccinated COVID-19 patients in a hospitalized setting. Vaccination with at least one dose was the only protective factor against severe progression. Among fully vaccinated patients, chest X-ray abnormality was a predictor of severe progression. In the care of hospitalized COVID-19 patients, vaccination status should be an initial checkpoint for triage, and the chest X-ray may be helpful in predicting progression." ]
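The between-group comparisons reported in the results sections above follow the test-selection rule stated in the statistical-analysis text (Student's t-test or Mann-Whitney U test for continuous variables; χ2 or Fisher's exact test for categorical variables). A minimal SciPy sketch of that rule, using synthetic values rather than the study data, is shown below; the group names and counts are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins for two groups (e.g., unvaccinated vs. vaccinated);
# the values are illustrative, not the study data.
ldh_group_a = rng.normal(300, 60, 150)   # continuous variable, group A
ldh_group_b = rng.normal(260, 55, 220)   # continuous variable, group B

# Continuous variables: Student's t-test when roughly normal, otherwise Mann-Whitney U.
t_stat, t_p = stats.ttest_ind(ldh_group_a, ldh_group_b, equal_var=False)
u_stat, u_p = stats.mannwhitneyu(ldh_group_a, ldh_group_b)

# Categorical variables (e.g., chest X-ray abnormality yes/no by group):
# chi-square test, falling back to Fisher's exact test when expected counts are small.
contingency = np.array([[106, 118],   # abnormal chest X-ray in group A, group B
                        [44, 102]])   # normal chest X-ray in group A, group B
chi2_stat, chi2_p, dof, expected = stats.chi2_contingency(contingency)
if (expected < 5).any():
    _, chi2_p = stats.fisher_exact(contingency)

print(f"t-test P = {t_p:.3f}, Mann-Whitney P = {u_p:.3f}, categorical P = {chi2_p:.4f}")
```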
[ "intro", "methods", null, null, null, "ethics-statement", "results", null, null, "discussion" ]
[ "Breakthrough Infection", "SARS-CoV-2", "COVID-19", "Severe Progression", "Oxygen Therapy" ]
INTRODUCTION: Vaccination has been one of the key ways in which the 2-year long coronavirus disease 2019 (COVID-19) pandemic has been addressed. Several types of COVID-19 vaccines were rapidly rolled out and have shown efficacy ranging from 74% to 95% in preventing COVID-19 depending on the vaccine platform.123 Severe aggravation was also shown to be significantly prevented by the available vaccines.23 The imperfect efficacy of vaccination has resulted in breakthrough COVID-19 infections and severe progression of COVID-19 regardless of vaccination status, and the factors related to those events are of both scientific and clinical interest. Although the worldwide pandemic situation is largely easing, there is regional imbalance in the severity of the pandemic and in the coverage by vaccines, especially in countries with poor resources. Even in high-income countries, the proportion of unvaccinated individuals is not low. As the world approaches a strategy of living with COVID-19, physicians will frequently find themselves caring for vaccinated and unvaccinated COVID-19 patients and need to be educated for the clinical features of those patients. Since the first rollout of the AZD1222 (ChAdOx1 nCoV-19) vaccine on February 26, 2021, national vaccine coverage in Korea has been favorable, but patients with breakthrough infections have been reported.45 Although population-based studies estimating the effectiveness of COVID-19 vaccines have been reported,678 information about the clinical features and risk factors for aggravation of vaccinated and unvaccinated COVID-19 patients in a real-world setting would be very helpful to physicians. The Korean national COVID-19 policy provided an opportunity to investigate vaccinated and unvaccinated COVID-19 patients needing hospitalization. In this study, we aimed to describe how the clinical manifestations of COVID-19 patients present in a real-world in-hospital care setting during the COVID-19 vaccination era and to determine the clinical implications of vaccination status in the individual patient care. METHODS: Patients and study setting All hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group. All hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). 
During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group. Data collection and definitions Data were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes. Initial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated. The use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists. Data were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes. Initial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. 
Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated. The use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists. Statistical analysis Student’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA). Student’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA). Ethics statement This study was approved by the Institutional Review Board (IRB) of Boramae Medical Center (No. 20-2021-54). The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. This study was conducted in compliance with the Declaration of Helsinki. This study was approved by the Institutional Review Board (IRB) of Boramae Medical Center (No. 20-2021-54). The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. This study was conducted in compliance with the Declaration of Helsinki. Patients and study setting: All hospitalized COVID-19 patients ≥ 19 years of age in whom COVID-19 infection was confirmed by real-time reverse transcription polymerase chain reaction were enrolled from September to October 2021 at Boramae Medical Center, a university affiliated hospital designated for the care of hospitalized COVID-19 patients. 
Patients for whom information on documented vaccination status was not available were excluded (Fig. 1). During the study period, COVID-19 patients classified as belonging to a high-risk group or a symptomatic low-risk group were allowed to be admitted to the hospital. Most other stable COVID-19 patients were isolated in residential treatment centers. The patients were grouped into vaccinated and unvaccinated groups, and their presenting clinical features and the overall risk factors at admission for severe progression were characterized. In the subgroup analysis, clinical predictors for severe progression were assessed in the fully vaccinated group. Data collection and definitions: Data were retrospectively collected on demographics, vaccination status, clinical variables at presentation (clinical severity, symptoms and signs, laboratory findings and chest radiography), specific treatments, and clinical outcomes. COVID-19-specific treatments, including the use of remdesivir, monoclonal antibodies, or corticosteroids, and the use of antibacterial agents for periods ≥ 3 days were recorded. The level of oxygen supplementation, the in-hospital mortality, and the duration of hospitalization were used to assess the clinical outcomes. Initial clinical severity was assessed based on the National Early Warning Score (NEWS)-2 and the Sequential Organ Failure Assessment (SOFA) score,9 and the severity of underlying comorbidity was estimated according to the Charlson comorbidity index (CCI).10 Patients with COVID-19 who had received full vaccination were defined as those who developed symptoms or were diagnosed with COVID-19 more than 14 days after completion the recommended vaccine regimen. Those who were vaccinated at least once but did not meet the criteria for full vaccination were defined as partially vaccinated. The use of immunosuppressants was defined as having received any anticancer chemotherapy or any immunosuppressive agents, including steroids, within 30 days. The worst results of body temperature on the day of admission and laboratory findings within 24 hours after admission were used. Severe progression was defined as the occurrence of new oxygen demand during the clinical course of the disease, and oxygen demand was defined as needing oxygen for a period of more than 24 consecutive hours. Chest X-ray abnormalities were defined as the presence of lung infiltration suggestive of pneumonia on simple chest X-ray according to the independent formal report of the reviewing radiologists. Statistical analysis: Student’s t-test or the Mann-Whitney U test was used to compare continuous variables, and the χ2 test or Fisher’s exact test was used to compare categorical variables. To determine the risk factors for severe progression, multivariable logistic regression was performed using variables for which P < 0.10 in the univariate analysis. To avoid multicollinearity, the CCI, variable of no underlying disease and no initial symptoms were not included in the multivariable model. The SOFA score was excluded from the multivariable model to avoid multicollinearity with the NEWS-2 and thrombocytopenia. To determine the goodness of fit for logistic regression models, the Hosmer-Lemeshow test was used. P values < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp., Armonk, NY, USA). Ethics statement: This study was approved by the Institutional Review Board (IRB) of Boramae Medical Center (No. 20-2021-54). 
The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. This study was conducted in compliance with the Declaration of Helsinki. RESULTS: Clinical features of unvaccinated and vaccinated COVID-19 patients A total of 438 patients were included in the analysis; 188 (42.9%) and 250 (57.1%) patients were included in the unvaccinated and vaccinated groups, respectively. Of the 250 vaccinated patients, 133 (53.2%) were fully vaccinated, and 117 (46.8%) were partially vaccinated (Fig. 1). Analysis of the baseline characteristics of the patients showed that the vaccinated group was older, had a shorter time lag from diagnosis to admission or from onset of illness to admission, and had a lower initial NEWS-2 than the unvaccinated group (Table 1). The vaccinated group also had a higher CCI (median 2.0 vs. 0.0, P < 0.001) and a higher incidence of other underlying diseases (hypertension, diabetes mellitus, and chronic kidney disease) than the unvaccinated group. Patients without any underlying diseases were more frequent in the unvaccinated group. At admission, dyspnea and fever exceeding 38.0°C were more frequent, and white blood cell and platelet counts were lower, but lactate dehydrogenase (LDH) level and the proportion of chest X-ray abnormalities was higher (70.7% vs. 47.2%, P < 0.001) in the unvaccinated group than in the vaccinated group (Table 1). Values are presented as number (%) or median (interquartile range). BMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. The fully vaccinated group was older and had a shorter time lag from diagnosis to admission or onset of illness to admission than the partially vaccinated group. In the partially vaccinated group, 70.9% of patients had been vaccinated with BNT162b2 (Table 2). The fully vaccinated group had higher median CCI (3.0 vs. 1.0, P < 0.001) and higher proportion of individuals with hypertension and chronic kidney disease than the partially vaccinated group. At presentation, cough was more frequent and the level of LDH was higher in the partially vaccinated group than in the fully vaccinated group (Table 2). Values are presented as number (%) or median (interquartile range). BMI = body mass index, NIH = National Institutes of Health, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, CRP = C-reactive protein, Ct = cyclic threshold, RdRp = RNA dependent RNA polymerase. The unvaccinated group required more oxygen therapy during the clinical course of the disease than the vaccinated group (nasal prong, 30.8% vs. 16.8%; facial mask, 0.0% vs. 0.4%; high flow, 2.7% vs. 2.8%; intubation, 1.6% vs. 1.2%; P = 0.008) (Table 3). Thirty-five patients in the unvaccinated group required oxygen therapy on the day of admission; of the 153 patients in that group without initial oxygen demand, 31 (20.3%) experienced clinical progression to severe disease. Thirteen patients in the fully vaccinated group required oxygen therapy initially; of the 120 patients in that group without initial oxygen demand, 13 (10.8%) required oxygen therapy later (Fig. 2). 
Consistent with their greater need for oxygen therapy, the unvaccinated group received more remdesivir and corticosteroids. Antibacterial agents were used more frequently in the unvaccinated group than in the vaccinated group (38.3% vs. 21.2%, P < 0.001) (Table 3). The fully vaccinated group remained in the hospital longer (median 10.0 days vs. 8.0 days, P < 0.001) than did the partially vaccinated group, but the proportion of patients needing oxygen therapy did not differ between the fully and partially vaccinated groups (Supplementary Table 1). Values are presented as number (%) or median (interquartile range). Monoclonal antibody: only regdanvimab (CT-P59; Celltrion Inc., Incheon, Korea) was used in Korea. UV = unvaccinated, FV = fully vaccinated.
Risk factors for severe progression in COVID-19 patients and vaccination status

The risk factors for severe progression during the clinical course among patients who did not initially require oxygen therapy were identified by multivariable analysis. Older age (adjusted odds ratio [aOR], 1.04; 95% confidence interval [CI], 1.02–1.07; P = 0.003), diabetes mellitus (aOR, 2.31; 95% CI, 1.00–5.33; P = 0.049), solid cancer (aOR, 3.18; 95% CI, 1.14–8.85; P = 0.027), elevated level of LDH (aOR, 2.99; 95% CI, 1.34–6.65; P = 0.007), and chest X-ray abnormality (aOR, 3.41; 95% CI, 1.31–8.89; P = 0.012) were significantly associated with severe progression. Vaccination at least once was associated with reduced risk of severe progression (aOR, 0.35; 95% CI, 0.15–0.79; P = 0.011) (Table 4). In the fully vaccinated patients without initial oxygen demand, chest X-ray abnormalities were the only risk factor for severe progression (aOR, 20.16; 95% CI, 2.43–167.66; P = 0.005) by multivariable analysis. Elevated level of C-reactive protein (CRP) was excluded from the multivariable model because the P value of the Hosmer-Lemeshow test was < 0.05 in the model including elevated level of CRP (Table 5). OR = odds ratio, CI = confidence interval, aOR = adjusted odds ratio, BMI = body mass index, NEWS = National Early Warning Score, SOFA = Sequential Organ Failure Assessment, CCI = Charlson comorbidity index, COPD = chronic obstructive pulmonary disease, WBC = white blood cell, LDH = lactate dehydrogenase, CRP = C-reactive protein, Ct = cycle threshold, RdRp = RNA dependent RNA polymerase.
DISCUSSION

During the study period, the national policy of Korea was to isolate most COVID-19 patients in residential treatment centers that mainly provided daily living support with minimum medical care for 10 days. Patients with high risk factors for severe progression or complication, or symptomatic patients who required further medical care were admitted or transferred to dedicated hospitals such as our study hospital. In addition, COVID-19 vaccination in Korea was first made available to the high-risk group, which included elderly individuals. Although the percentage of fully vaccinated individuals in the national population ≥ 18 years of age was 54.2%, it differed markedly in different age groups. The percentages of fully vaccinated individuals over 80, 70–79, and 60–69 years of age were 79.8%, 89.6%, and 87.6%, respectively, but among individuals 50–59, 40–49, 30–39, and 18–29 years of age, the percentages were 58.4%, 32.2%, 36.1% and 32.5%, as of 28 September 2021. Our results showed that the hospitalized COVID-19 patients in our study had characteristics in accordance with the national admission guidelines for COVID-19 patients. The vaccinated patients were older, had more comorbidities and were admitted to the hospital as early as possible in the fear that their condition might worsen. This might result in lower clinical severity at admission among these patients. In contrast, the unvaccinated patients were largely younger, had fewer comorbidities and were admitted to the hospital later in the clinical course of the disease due to severe manifestations that were not prevented by vaccination. We did not intend to compare the individual study subjects, vaccinated and unvaccinated directly; instead, we wanted to describe the type of COVID-19 patients we would expect to see in the hospital after the peak of the pandemic eased down. Although the composition of study subjects can be directly attributed to the national policy for allocation of COVID-19 patients, the scenario may be similar in real-world practice in the near future or might exist in other regions of the globe due to the existence of different policies for pandemic control in different countries. The proportion of patients who required supplemental oxygen was higher in the unvaccinated group than in the vaccinated group. Considering the older age of the vaccinated group and the higher proportion of comorbidities in that group, the fact that a lower proportion of patients in the vaccinated group needed oxygen therapy might be due to the effect of vaccination. Among unvaccinated and fully vaccinated patients without initial oxygen demand, the proportions of patients with severe progression were 20.3% (31/153) and 10.8% (13/120), respectively.
One study conducted in Korea during the early COVID-19 pandemic showed that 11.7% (16/136) of unvaccinated patients experienced clinical aggravation with a need for oxygen treatment during the clinical course of the disease.11 As the delta variant was the predominant strain at the time of our study (September–October 2021),1213 the higher rate of progression observed in our study might be partially due to the virulence of the delta variant. Compared to the fully vaccinated patients, a higher proportion of unvaccinated patients experienced severe progression. Previous studies have shown that COVID-19 vaccination significantly prevents severe outcomes and reduces the length of hospitalization in high-risk groups.1415 The results of our study suggest that a considerable proportion of younger COVID-19 patients with few comorbidities experience a high degree of clinical severity and need oxygen therapy in real-world practice. Fortunately, such oxygen therapy rarely needs to be escalated to invasive ventilation. However, in resource-limited settings, even the failure of simple oxygen therapy may lead to a fatal outcome. Treatment with remdesivir or dexamethasone was associated with severe cases of the disease. Interestingly, the use of antibacterial agents was also higher in the unvaccinated group, despite the assumption that the older group with multiple comorbidities might require more antibiotics to cope with complications of the disease. This somewhat unexpected result might be due to the higher clinical severity of the unvaccinated group and may indirectly reflect a tendency to prescribe antibiotics inappropriately for patients with COVID-19 who show severe manifestations, such as high fever, dyspnea, chest X-ray abnormalities and hypoxemia, rather than on the basis of actual bacterial complications.1617 Regarding the laboratory findings, an elevated level of LDH at presentation was a predictor of severe progression. One previous study showed that elevated LDH levels during 6–10 days after the onset of illness were associated with clinical aggravation.11 Abnormal baseline chest X-ray findings were also reported to be a risk factor for oxygen requirement.18 In this study, chest X-ray abnormalities at the initial presentation were a risk factor for severe progression among unvaccinated and vaccinated patients, and also in fully vaccinated patients. Therefore, several important predictors of severe progression that were identified in unvaccinated patients during the early part of the COVID-19 pandemic still appear to be risk factors for severe progression, even under the influence of vaccination and a change in the dominant variant strain. Although a previous study reported the occurrence of breakthrough infections in fully vaccinated patients,19 the clinical characteristics and outcomes of vaccine breakthrough infections have not been systematically compared with those of infections in unvaccinated patients. Breakthrough COVID-19 infection in vaccinated people is also important from both the clinical and the pathophysiological perspectives. As expected based on studies that showed vaccine efficacy in preventing severe COVID-19,23 vaccination was the only protective factor against severe progression among patients with breakthrough infection and unvaccinated patients. As this study was conducted among hospitalized patients, the results may help clinicians manage patients with the potential for severe progression in hospital settings.
As chest X-ray abnormality was the only risk factor for clinical progression among fully vaccinated patients, it would be advisable to obtain chest X-rays early during hospitalization and to monitor patients with initial chest X-ray abnormalities more closely for clinical progression. This study has several limitations. First, because it was conducted retrospectively at a single center, the generalizability of the results may be limited. Second, the number of fully vaccinated patients with severe progression was relatively small. Third, we did not perform virologic analysis of concurrent SARS-CoV-2 variants. However, the likelihood of bias due to the variants may be minimal because most of the study patients were believed to be infected with the delta variant, given that the dominant viral population in Korea during the study period was the delta variant. Although our results may not be precisely applicable to other variants, the clinical framework could be extrapolated similarly. In conclusion, we described the clinical features of unvaccinated and vaccinated COVID-19 patients in a hospitalized setting. The only protective factor against severe progression was vaccination at least once. Among fully vaccinated patients, chest X-ray abnormality was a predictor of severe progression. In hospital-based COVID-19 practice, vaccination status should be an initial checkpoint for triage, and chest X-ray may be helpful in predicting progression.
Background: The clinical features of coronavirus disease 2019 (COVID-19) patients in the COVID-19 vaccination era need to be clarified because breakthrough infection after vaccination is not uncommon. Methods: We retrospectively analyzed hospitalized COVID-19 patients during a delta variant-dominant period 6 months after the national COVID-19 vaccination rollout. The clinical characteristics and risk factors for severe progression were assessed and subclassified according to vaccination status. Results: A total of 438 COVID-19 patients were included; the numbers of patients in the unvaccinated, partially vaccinated and fully vaccinated groups were 188 (42.9%), 117 (26.7%) and 133 (30.4%), respectively. The vaccinated group was older, less symptomatic and had a higher Charlson comorbidity index at presentation. The proportions of patients who experienced severe progression in the unvaccinated and fully vaccinated groups were 20.3% (31/153) and 10.8% (13/120), respectively. Older age, diabetes mellitus, solid cancer, elevated levels of lactate dehydrogenase and chest X-ray abnormalities were associated with severe progression, and the vaccination at least once was the only protective factor for severe progression. Chest X-ray abnormalities at presentation were the only predictor for severe progression among fully vaccinated patients. Conclusions: In the hospitalized setting, vaccinated and unvaccinated COVID-19 patients showed different clinical features and risk of oxygen demand despite a relatively high proportion of patients in the two groups. Vaccination needs to be assessed as an initial checkpoint, and chest X-ray may be helpful for predicting severe progression in vaccinated patients.
null
null
7,358
293
[ 158, 303, 153, 793, 412 ]
10
[ "patients", "vaccinated", "group", "19", "covid", "covid 19", "clinical", "unvaccinated", "vaccinated group", "oxygen" ]
[ "coronavirus disease 2019", "19 vaccination significantly", "covid 19 pandemic", "effectiveness covid 19", "unvaccinated covid 19" ]
null
null
[CONTENT] Breakthrough Infection | SARS-CoV-2 | COVID-19 | Severe Progression | Oxygen Therapy [SUMMARY]
[CONTENT] Breakthrough Infection | SARS-CoV-2 | COVID-19 | Severe Progression | Oxygen Therapy [SUMMARY]
[CONTENT] Breakthrough Infection | SARS-CoV-2 | COVID-19 | Severe Progression | Oxygen Therapy [SUMMARY]
null
[CONTENT] Breakthrough Infection | SARS-CoV-2 | COVID-19 | Severe Progression | Oxygen Therapy [SUMMARY]
null
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | Retrospective Studies | SARS-CoV-2 | Vaccination [SUMMARY]
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | Retrospective Studies | SARS-CoV-2 | Vaccination [SUMMARY]
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | Retrospective Studies | SARS-CoV-2 | Vaccination [SUMMARY]
null
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | Retrospective Studies | SARS-CoV-2 | Vaccination [SUMMARY]
null
[CONTENT] coronavirus disease 2019 | 19 vaccination significantly | covid 19 pandemic | effectiveness covid 19 | unvaccinated covid 19 [SUMMARY]
[CONTENT] coronavirus disease 2019 | 19 vaccination significantly | covid 19 pandemic | effectiveness covid 19 | unvaccinated covid 19 [SUMMARY]
[CONTENT] coronavirus disease 2019 | 19 vaccination significantly | covid 19 pandemic | effectiveness covid 19 | unvaccinated covid 19 [SUMMARY]
null
[CONTENT] coronavirus disease 2019 | 19 vaccination significantly | covid 19 pandemic | effectiveness covid 19 | unvaccinated covid 19 [SUMMARY]
null
[CONTENT] patients | vaccinated | group | 19 | covid | covid 19 | clinical | unvaccinated | vaccinated group | oxygen [SUMMARY]
[CONTENT] patients | vaccinated | group | 19 | covid | covid 19 | clinical | unvaccinated | vaccinated group | oxygen [SUMMARY]
[CONTENT] patients | vaccinated | group | 19 | covid | covid 19 | clinical | unvaccinated | vaccinated group | oxygen [SUMMARY]
null
[CONTENT] patients | vaccinated | group | 19 | covid | covid 19 | clinical | unvaccinated | vaccinated group | oxygen [SUMMARY]
null
[CONTENT] 19 | covid 19 | covid | vaccines | unvaccinated covid 19 patients | vaccinated unvaccinated covid | unvaccinated covid 19 | vaccinated unvaccinated covid 19 | unvaccinated covid | patients [SUMMARY]
[CONTENT] defined | 19 | covid | covid 19 | clinical | patients | test | variables | study | 19 patients [SUMMARY]
[CONTENT] group | vaccinated | vaccinated group | vs | ci | aor | table | patients | rna | unvaccinated group [SUMMARY]
null
[CONTENT] patients | 19 | group | covid 19 | covid | vaccinated | clinical | study | oxygen | unvaccinated [SUMMARY]
null
[CONTENT] 2019 | COVID-19 | COVID-19 [SUMMARY]
[CONTENT] COVID-19 | 6 months | COVID-19 ||| [SUMMARY]
[CONTENT] 438 | 188 | 42.9% | 117 | 26.7% | 133 | 30.4% ||| Charlson ||| 20.3% | 31/153 | 10.8% ||| ||| [SUMMARY]
null
[CONTENT] 2019 | COVID-19 | COVID-19 ||| COVID-19 | 6 months | COVID-19 ||| ||| 438 | 188 | 42.9% | 117 | 26.7% | 133 | 30.4% ||| Charlson ||| 20.3% | 31/153 | 10.8% ||| ||| ||| COVID-19 | two ||| [SUMMARY]
null
Gastric mucin phenotype indicates aggressive biological behaviour in early differentiated gastric adenocarcinomas following endoscopic treatment.
34256780
The distribution of mucin phenotypes and their relationship with clinicopathological features in early differentiated gastric adenocarcinomas in a Chinese cohort are unknown. We aimed to investigate mucin phenotypes and analyse the relationship between mucin phenotypes and clinicopathological features, especially biological behaviours, in early differentiated gastric adenocarcinomas from endoscopic specimens in a Chinese cohort.
BACKGROUND
Immunohistochemical staining of CD10, MUC2, MUC5AC, and MUC6 was performed in 257 tissue samples from patients with early differentiated gastric adenocarcinomas. The tumour location, gross type, tumour size, histological type, depth of invasion, lymphovascular invasion, mucosal background and other clinicopathological parameters were evaluated. The relationship between mucin phenotypes and clinicopathological features was analysed with the chi-square test.
METHODS
The incidences of the gastric, gastrointestinal, intestinal and null phenotypes were 21 %, 56 %, 20 % and 3 %, respectively. The mucin phenotypes were related to histological classification (P < 0.05). The proportion of the gastric phenotype became greater during the transition from differentiated to undifferentiated histology (P < 0.05). Complete intestinal metaplasia was more frequent in the gastric and intestinal phenotypes than in the gastrointestinal phenotype (P < 0.05). Tumours containing a poorly differentiated adenocarcinoma component were mainly of the gastric phenotype, a proportion significantly higher than that in purely differentiated tubular adenocarcinomas (P < 0.05), and the depth of invasion was greater in the mixed type (P < 0.05). Neither recurrence nor metastasis was detected.
RESULTS
The mucin phenotype of early-differentiated gastric adenocarcinoma has clinical implications, and the gastric phenotype has aggressive biological behaviour in early differentiated gastric cancers, especially in those with poorly differentiated adenocarcinoma or papillary adenocarcinoma components.
CONCLUSIONS
[ "Adenocarcinoma", "Adult", "Aged", "Cell Differentiation", "Female", "Gastric Mucins", "Gastric Mucosa", "Humans", "Immunohistochemistry", "Male", "Middle Aged", "Phenotype", "Stomach Neoplasms" ]
8276406
Background
Gastric cancer (GC), one of the most common human cancers worldwide, is a disease with multiple pathogenic factors, various prognoses and different responses to treatments. Thus, properly distinguishing those with worse prognoses from those with better prognoses appears to be significantly important. Four different morphology-based classification systems exist: the World Health Organization (WHO/2019) [1], the Japanese Gastric Cancer Association (JGCA/2017) [1], Laurén [2] and Nakamura [3]. According to the WHO classification, GCs are subclassified into papillary, tubular, poorly cohesive, mucinous and mixed types. In the JGCA classification, the subtypes are papillary (pap), tubular (tub), poorly differentiated (por), signet-ring cell (sig), and mucinous (muc), which are similar to the subtypes used by the WHO. GCs are divided into intestinal and diffuse types using Laurén’s classification or into differentiated and undifferentiated types based on Nakamura’s classification [2–4]. The differentiated type contains pap, tub1, and tub2 according to the JGCA classification and papillary and well/moderately differentiated adenocarcinoma according to the WHO classification. These different histological types exhibit distinct biological behaviours. The mucus produced by cancers is one of the factors determining the nature of biological behaviour. The main component of mucus is a high-molecular-weight glycoprotein called mucin [5]. As cancer progresses, the nature of the mucus changes relative to the degree of biological malignancy. In the 1990s, with the progress of structural analysis of mucin and the widespread use of monoclonal antibodies to the core protein of mucin, a mucin phenotype subclassification emerged. Mucin phenotype subclassification is based entirely on the mucin expression pattern, independent of histological features. Thus, GCs are classified into gastric, intestinal, gastrointestinal and null mucin phenotypes [4–6]. Previous studies have reported that the gastric phenotype has a higher potential for invasion and metastasis than the intestinal type, which results in a worse prognosis of GCs [7–11]. However, studies have mostly focused on advanced gastric cancers, and early gastric cancers are rarely investigated. Early gastric cancer (EGC) is defined as tumour invasion confined to the mucosa and submucosa, irrespective of regional lymph node metastasis [12]. Endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD) are used as treatments for some intramucosal carcinomas and submucosal lesions, which have a very low probability of lymph node metastasis [13, 14]. To our knowledge, there has been no research by Chinese investigators exploring the biological role of mucin phenotypes in EGCs treated with EMR/ESD. Little information is available on the effects of mucin phenotypes on the clinicopathological features of EGCs in a Chinese cohort. Accordingly, we examined mucin expression and mucin phenotypes and explored their clinicopathological characteristics and biological behaviour.
null
null
Results
Expression of mucin markers and mucin phenotype in early gastric cancers

The expression percentages of CD10, MUC2, MUC5AC and MUC6 in all EGCs were 43.58 % (112/257), 63.81 % (164/257), 64.98 % (167/257) and 72.76 % (187/257), respectively (Fig. 1).

Fig. 1. The proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype.

Two hundred fifty-seven EGCs were classified as G-type (21 %, 54/257), GI-type (56 %, 144/257), I-type (20 %, 51/257) and N-type (3 %, 8/257). The GI-type contained the GI-G type (72 %, 103/144) and the GI-I type (28 %, 41/144). There were more cases of G-type and GI-G type (61 %, 157/257) than of I-type and GI-I type (36 %, 95/257) (Fig. 2).

Fig. 2. IHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) positivity and negative staining for MUC5AC (g) and MUC6 (h). GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j), MUC5AC (k) and MUC6 (l). (Scale bar = 200 μm, HE and IHC ×10)

Relationship between mucin phenotype and clinicopathological features

The relationship between mucin phenotype and clinicopathological features is summarized in Table 1.
The mucin phenotypes were significantly related to the JGCA and WHO classifications (P < 0.05), but the parameters of sex, age, margin, colour, tumour size, gross type, depth of invasion, and lymphovascular invasion did not significantly differ among the mucin phenotypes (P > 0.05). The I-type had the highest proportion of differentiated EGCs among the four mucin phenotypes (100.0 % vs. 79.7 % vs. 83.1 % vs. 87.5 %, P = 0.027). The G-type group had a higher proportion of tub-por/sig and pap/tub-pap cases than the GI-, I- and N-type groups (20.4 % vs. 7.0 % vs. 0.0 % vs. 12.5 %, P = 0.027) according to the JGCA classification. According to the WHO classification, the G-type had more mixed histological components (18.5 % vs. 5.6 % vs. 0.0 % vs. 12.5 %, P = 0.006) than the other mucin phenotypes.

Table 1. Relationship between mucin phenotypes and clinicopathological features

                          G-type         GI-type        I-type         N-type        χ2       P value
Sex                                                                                  0.671    0.906
  Male                    40 (74.1 %)    99 (68.8 %)    37 (72.5 %)    6 (75.0 %)
  Female                  14 (25.9 %)    45 (31.2 %)    14 (27.5 %)    2 (25.0 %)
Age                                                                                  0.412    0.945
  ≤ 64 years              28 (51.9 %)    75 (52.1 %)    29 (56.9 %)    4 (50.0 %)
  > 64 years              26 (48.1 %)    69 (47.9 %)    22 (43.1 %)    4 (50.0 %)
Tumor size                                                                           2.356    0.502
  ≤ 2 cm                  34 (63.0 %)    99 (68.8 %)    32 (62.7 %)    7 (87.5 %)
  > 2 cm                  20 (37.0 %)    45 (31.2 %)    19 (37.3 %)    1 (12.5 %)
Margin                                                                               4.269    0.224
  Distinct                35 (64.8 %)    111 (77.1 %)   40 (78.4 %)    5 (62.5 %)
  Indistinct              19 (35.2 %)    33 (22.9 %)    11 (21.6 %)    3 (37.5 %)
Color                                                                                3.423    0.337
  Darken                  31 (57.4 %)    75 (52.1 %)    34 (66.7 %)    5 (62.5 %)
  Faded                   23 (42.6 %)    69 (47.9 %)    17 (33.3 %)    3 (37.5 %)
Tumor location (thirds)                                                              5.819    0.407
  Upper                   4 (7.4 %)      15 (10.4 %)    8 (15.7 %)     0 (0.0 %)
  Middle                  12 (22.2 %)    21 (14.6 %)    12 (23.5 %)    1 (12.5 %)
  Low                     38 (70.4 %)    108 (75.0 %)   31 (60.8 %)    7 (87.5 %)
Tumor location (EGJ)                                                                 0.787    0.842
  EGJ                     4 (7.4 %)      7 (4.9 %)      3 (5.9 %)      0 (0.0 %)
  Non-EGJ                 50 (92.6 %)    137 (95.1 %)   48 (94.1 %)    8 (100.0 %)
Gross type                                                                           8.943    0.162
  Protruding              10 (18.5 %)    32 (22.2 %)    10 (19.6 %)    0 (0.0 %)
  Depressed               16 (29.6 %)    57 (39.6 %)    26 (51.0 %)    3 (37.5 %)
  Protruding-depressed    28 (51.9 %)    55 (38.2 %)    15 (29.4 %)    5 (62.5 %)
JGCA classification                                                                  16.870   0.027
  tub1                    34 (62.9 %)    112 (77.8 %)   45 (88.2 %)    7 (87.5 %)
  tub2                    9 (16.7 %)     22 (15.2 %)    6 (11.8 %)     0 (0.0 %)
  tub-por/sig             7 (13.0 %)     7 (4.9 %)      0 (0.0 %)      1 (12.5 %)
  pap/tub-pap             4 (7.4 %)      3 (2.1 %)      0 (0.0 %)      0 (0.0 %)
WHO classification                                                                   18.052   0.017
  Tubular, well-diff      34 (62.9 %)    112 (77.8 %)   45 (88.2 %)    7 (87.5 %)
  Tubular, mod-diff       9 (16.7 %)     22 (15.2 %)    6 (11.8 %)     0 (0.0 %)
  Papillary               1 (1.9 %)      2 (1.4 %)      0 (0.0 %)      0 (0.0 %)
  Mixed                   10 (18.5 %)    8 (7.0 %)      0 (0.0 %)      1 (12.5 %)
Depth of invasion                                                                    1.360    0.681
  M                       47 (87.0 %)    132 (91.7 %)   46 (90.2 %)    8 (100.0 %)
  SM                      7 (13.0 %)     12 (8.3 %)     5 (9.8 %)      0 (0.0 %)
Lymphovascular invasion                                                              2.541    0.687
  (+)                     1 (1.9 %)      1 (0.7 %)      0 (0.0 %)      0 (0.0 %)
  (-)                     53 (98.1 %)    143 (99.3 %)   51 (100.0 %)   8 (100.0 %)
+, present; -, absent. EGJ = esophagogastric junction; well-diff = well differentiated; mod-diff = moderately differentiated; M = mucosa; SM = submucosa.

Relationship between mucin phenotypes and background mucosa

Intestinal metaplasia (IM) of the background mucosa was observed in 199 of 249 (79.9 %) cases (G-, GI- and I-type), including 38 cases of incomplete IM and 161 cases of complete IM. The presence of IM did not significantly differ among the mucin phenotypes (P > 0.05). However, the presence of incomplete and complete IM differed significantly among the mucin phenotypes (P = 0.004 and P = 0.018, respectively). The presence of incomplete IM in GI-type EGCs was higher than that in G-type and I-type EGCs (21.5 % vs. 9.3 % vs. 3.9 %). In contrast, 77.8 % (42/54) of G-type EGCs and 70.6 % (36/51) of I-type EGCs exhibited complete IM, which was higher than the 57.6 % (83/144) of GI-type EGCs. The IM status of the background mucosa and its relationship with mucin phenotypes are shown in Table 2.
Table 2. Relationship between mucin phenotypes and background mucosa

                                     G-type         GI-type         I-type         χ2       P value
Intestinal metaplasia                                                               2.685    0.265
  (+)                                47 (87.1 %)    114 (79.2 %)    38 (74.5 %)
  (-)                                7 (12.9 %)     30 (20.8 %)     13 (25.5 %)
Incomplete intestinal metaplasia                                                    10.948   0.004
  (+)                                5 (9.3 %)      31 (21.5 %)     2 (3.9 %)
  (-)                                49 (90.7 %)    113 (78.5 %)    49 (96.1 %)
Complete intestinal metaplasia                                                      7.957    0.018
  (+)                                42 (77.8 %)    83 (57.6 %)     36 (70.6 %)
  (-)                                12 (22.2 %)    61 (42.4 %)     15 (29.4 %)
+, present; -, absent.

Biological behaviour of mucin phenotypes

In addition to tubular adenocarcinoma components, 22 of the 257 cases also contained components of papillary adenocarcinoma, poorly differentiated carcinoma, or signet ring cell carcinoma. Fifteen cases (68.18 %) contained por/sig components, and the other 7 (31.82 %) contained pap components. Eleven cases showed the G-type, 10 cases showed the GI-type, only one case showed the N-type, and none of them showed the I-type. Among the 10 GI-type cases, 9 showed the GI-G type and one showed the GI-I type. Almost all of the 22 patients showed the G- or GI-G type, significantly more than showed the I-type (P = 0.011). In addition, the 22 patients had a higher proportion of infiltration into the submucosa (P < 0.001) (Table 3; Figs. 3 and 4).

Table 3. Relationship between biological behavior and mucin phenotypes

                      tub             tub-por/sig    pap/tub-pap    χ2       P value
Depth of invasion                                                   15.824   0.000
  M                   219 (93.2 %)    9 (60.0 %)     5 (71.4 %)
  SM                  16 (6.8 %)      6 (40.0 %)     2 (28.6 %)
Mucin phenotype                                                     14.743   0.011
  G-type              43 (18.3 %)     7 (46.7 %)     4 (57.1 %)
  GI-type             134 (57.0 %)    7 (46.7 %)     3 (42.9 %)
  I-type              51 (21.7 %)     0 (0.0 %)      0 (0.0 %)
  N-type              7 (3.0 %)       1 (6.6 %)      0 (0.0 %)
M = mucosa; SM = submucosa.

Fig. 3. Tubular adenocarcinoma mixed with papillary adenocarcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing a papillary structure on the surface and tubular adenocarcinoma in the submucosa; the invasive depth is 300 μm (b). Focal positive staining for MUC5AC (c) and strongly positive staining for MUC6 (d); the invasive tubular adenocarcinoma is MUC5AC negative (c) and MUC6 positive (d). (Scale bar = 400 μm, HE and IHC ×10)

Fig. 4. Tubular adenocarcinoma mixed with poorly differentiated carcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing poorly differentiated carcinoma in the mucosa and tubular adenocarcinoma in the submucosa; the invasive depth is 2500 μm (b). Positive staining for MUC5AC (c) and MUC6 (d); the invasive tubular adenocarcinoma is MUC5AC (c) and MUC6 (d) positive. (Scale bar = 400 μm, HE and IHC ×10)

Follow-up

Six patients underwent additional gastrectomy, and there was no residual tumour or lymph node metastasis. All patients were under close follow-up, and neither recurrence nor metastasis was detected.
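The χ2 and P values reported in Tables 1-3 come from standard contingency-table tests of independence between mucin phenotype and each clinicopathological variable. The sketch below is purely illustrative: it rebuilds one block of Table 3 as a contingency table and runs the same kind of test in Python. The original analysis was performed in SPSS, so test options (e.g., continuity correction, exact methods, cell grouping) may differ and the exact statistics need not match the published values; the 2×2 grouping in the second example is our own illustrative choice, not a comparison reported in the paper.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Mucin phenotype (rows) by histological group (columns: tub, tub-por/sig, pap/tub-pap),
# counts copied from Table 3.
table3 = np.array([
    [43, 7, 4],    # G-type
    [134, 7, 3],   # GI-type
    [51, 0, 0],    # I-type
    [7, 1, 0],     # N-type
])

chi2_stat, p_value, dof, expected = chi2_contingency(table3, correction=False)
print(f"chi-square = {chi2_stat:.3f}, dof = {dof}, P = {p_value:.4f}")

# With several small expected counts, an exact test is often preferred. As an
# illustrative 2x2 grouping built from Table 3: depth of invasion (M vs. SM)
# in pure tubular vs. mixed-histology tumours.
depth_2x2 = [[219, 16],        # tub:                       M, SM
             [9 + 5, 6 + 2]]   # tub-por/sig + pap/tub-pap: M, SM
odds_ratio, p_exact = fisher_exact(depth_2x2)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, P = {p_exact:.4f}")
```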
Conclusions
Our study reports the expression of mucin markers (MUC5AC, MUC6, MUC2 and CD10) and mucin phenotypes in differentiated EGC samples from ESD/EMR in the Chinese population. Mucin phenotypes of early differentiated gastric cancer are of clinical significance, and G-type GC exhibits aggressive biological behaviour in early differentiated GCs, especially in those with poorly differentiated adenocarcinoma or papillary adenocarcinoma components.
[ "Background", "Methods", "Patients and tissue specimens", "Immunohistochemistry", "Classification of mucin phenotype", "Statistical analysis", "Expression of mucin markers and mucin phenotype in early gastric cancers", "Relationship between mucin phenotype and clinicopathological features", "Relationship between mucin phenotypes and background mucosa", "Biological behaviour of mucin phenotypes", "Follow-up" ]
[ "Gastric cancer (GC), one of the most common human cancers worldwide, is a disease with multiple pathogenic factors, various prognoses and different responses to treatments. Thus, properly distinguishing those with worse prognoses from those with better prognoses appears to be significantly important. Four different morphology -based classification systems exist, the World Health Organization (WHO/2019) [1], the Japanese Gastric Cancer Association (JGCA/2017) [1], Laurén [2] and Nakamura [3]. According to the WHO classification, GCs are subclassified into papillary, tubular, poorly cohesive, mucinous and mixed types. In the JGCA classification, the subtypes are papillary (pap), tubular (tub), poorly differentiated (por), signet-ring cell (sig), and mucinous (muc), which are similar to the subtypes used by the WHO. GCs are divided into intestinal and diffuse types using Laurén’s classification or into differentiated and undifferentiated types based on Nakamura’s classification [2–4]. The differentiated type contains pap, tub1, and tub2 according to the JGCA classification and papillary and well/moderately differentiated adenocarcinoma according to the WHO classification. These different histological types exhibit distinct biological behaviours.\nThe mucous produced by cancers is one of the factors determining the nature of biological behaviour. The main component of mucous is a high-molecular-weight glycoprotein called mucin [5]. As cancer progresses, the nature of the mucous changes relative to the degree of biological malignancy. In the 1990 s, with the progress of structural analysis of mucin and the widespread use of monoclonal antibodies to the core protein of mucin, a mucin phenotype subclassification emerged. Mucin phenotype subclassification was entirely based on the mucin expression pattern, independent of histological features. Thus, GCs are classified into gastric, intestinal, gastrointestinal and null mucin phenotypes [4–6]. Previous studies have reported that the gastric phenotype has a higher potential for invasion and metastasis than the intestinal type, which results in a worse prognosis of GCs [7–11]. However, studies have mostly focused on advanced gastric cancers, and early gastric cancers are rarely investigated.\nEarly gastric cancer (EGC) is defined as tumour invasion confined to the mucosa and submucosa, irrespective of regional lymph node metastasis [12]. Endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD) are used as treatments for some intramucosal carcinomas and submucosal lesions, which have a very low probability of lymph node metastasis [13, 14].\nTo our knowledge, there is no research exploring biological role of mucin phenotypes in EGCs using EMR/ESD by Chinese investigators. Little information is available on the effects of mucin phenotypes on the clinicopathological features of EGCs in a Chinese cohort. Accordingly, we examined mucin expression and mucin phenotypes and explored mucin phenotype clinicopathological characteristics and biological behaviour.", "Patients and tissue specimens Our study consisted of 257 consecutive patients who underwent EMR/ESD for differentiated EGCs between January 2012 and June 2018 at The Second Affiliated Hospital of Zhejiang University, China. The group comprised 182 men and 75 women with an age range of 29 to 87 (mean 64) years old.\nThe location of each lesion was classified in terms of the upper (27 cases), middle (46 cases) and lower (184 cases) thirds of the stomach. 
The size of each lesion was measured by the maximum diameter, which ranged from 0.1 to 6.5 (mean 1.5) centimetres. The protruding category (52 cases) included type 0-I and 0-IIa, the depressed (102 cases) category contained type 0-IIc and III, and all the other cases were considered under the protruding and depressed category (103 cases). Based on the WHO and JGCA classification, the EGCs were subclassified into well differentiated tubular adenocarcinoma (well-diff/tub 1, 198 cases), moderately differentiated tubular adenocarcinoma (mod-diff/tub 2, 37 cases), papillary adenocarcinoma (3 cases) and mixed (tub-pap/sig/por, 19 cases). In the mixed cases, the undifferentiated components (sig/por) were less than 50 %.\nImmunohistochemistry All specimens were fixed with 10 % buffered formalin, embedded in paraffin, cut into 4-µm-thick sections, and subjected to haematoxylin and eosin (HE) staining. MUC2 was detected by mAb Ccp58 (Zsbio, 1:100), MUC5AC by mAb MRQ-19 (Zsbio, 1:100), MUC6 by mAb MRQ-20 (Zsbio, 1:100), and CD10 (Zsbio, 1:100) by immunohistochemistry (IHC). IHC was performed by using the Ventana NexES Staining System (Roche, Benchmark®XT). The marker CD10 exhibited both cytoplasmic and glandular luminal reactivity, whereas MUC5AC, MUC6 and MUC2 exhibited only cytoplasmic reactivity.
The staining results were categorized as positive when at least one single cell among the carcinoma cells was stained and negative when none of the carcinoma cells were stained [6].\nClassification of mucin phenotype Based on MUC5AC, MUC6, MUC2 and CD10, EGCs can be classified into the gastric phenotype (G-type), gastrointestinal phenotype (GI-type), intestinal phenotype (I-type) and null phenotype (N-type) [4, 6, 15–17]. The following criteria were used for the classification of mucin phenotypes: ① the G-type shows positive staining for at least one of the MUC5AC and MUC6, while CD10 and MUC2 are both negative; ② the I-type shows positive staining for CD10 and/or MUC2, while MUC5AC and MUC6 are both negative; ③ the GI-type shows positive staining for marker CD10 and/or MUC2 associated with one of the markers MUC5AC and MUC6 positive; and④ if none of the four markers are positive, the phenotype is classified as N-type.\nIn addition, among the GI-types, we labelled one expressing more MUC5AC and/or MUC6 than MUC2 and CD10 as a gastric-predominant GI phenotype (GI-G type); otherwise, we labelled it as an intestinal-predominant phenotype (GI-I type).\nBased on MUC5AC, MUC6, MUC2 and CD10, EGCs can be classified into the gastric phenotype (G-type), gastrointestinal phenotype (GI-type), intestinal phenotype (I-type) and null phenotype (N-type) [4, 6, 15–17]. The following criteria were used for the classification of mucin phenotypes: ① the G-type shows positive staining for at least one of the MUC5AC and MUC6, while CD10 and MUC2 are both negative; ② the I-type shows positive staining for CD10 and/or MUC2, while MUC5AC and MUC6 are both negative; ③ the GI-type shows positive staining for marker CD10 and/or MUC2 associated with one of the markers MUC5AC and MUC6 positive; and④ if none of the four markers are positive, the phenotype is classified as N-type.\nIn addition, among the GI-types, we labelled one expressing more MUC5AC and/or MUC6 than MUC2 and CD10 as a gastric-predominant GI phenotype (GI-G type); otherwise, we labelled it as an intestinal-predominant phenotype (GI-I type).\nStatistical analysis Associations between mucin expression profiles and clinicopathological parameters were examined by the chi-square test or Fisher’s exact test. Statistical significance was established to be P < 0.05. Statistical calculations were performed with IBM SPSS Statistics (version 23.0).\nAssociations between mucin expression profiles and clinicopathological parameters were examined by the chi-square test or Fisher’s exact test. Statistical significance was established to be P < 0.05. Statistical calculations were performed with IBM SPSS Statistics (version 23.0).", "Our study consisted of 257 consecutive patients who underwent EMR/ESD for differentiated EGCs between January 2012 and June 2018 at The Second Affiliated Hospital of Zhejiang University, China. The group comprised 182 men and 75 women with an age range of 29 to 87 (mean 64) years old.\nThe location of each lesion was classified in terms of the upper (27 cases), middle (46 cases) and lower (184 cases) thirds of the stomach. The size of each lesion was measured by the maximum diameter, which ranged from 0.1 to 6.5 (mean 1.5) centimetres. The protruding category (52 cases) included type 0-I and 0-IIa, the depressed (102 cases) category contained type 0-IIc and III, and all the other cases were considered under the protruding and depressed category (103 cases). 
Based on the WHO and JGCA classification, the EGCs were subclassified into well differentiated tubular adenocarcinoma (well-diff/tub 1, 198 cases), moderately differentiated tubular adenocarcinoma (mod-diff/tub 2, 37 cases), papillary adenocarcinoma (3 cases) and mixed (tub-pap/sig/por, 19 cases). In the mixed cases, the undifferentiated components (sig/por) were less than 50 %.", "All specimens were fixed with 10 % buffered formalin, embedded in paraffin, cut into 4-µm-thick sections, and subjected to haematoxylin and eosin (HE) staining. MUC2 was detected by mAb Ccp58 (Zsbio, 1:100), MUC5AC by mAb MRQ-19 (Zsbio, 1:100), MUC6 by mAb MRQ-20 (Zsbio, 1:100), and CD10 (Zsbio, 1:100) by immunohistochemistry (IHC). IHC was performed by using the Ventana NexES Staining System (Roche, Benchmark®XT). The marker CD10 exhibited both cytoplasmic and glandular luminal reactivity, whereas MUC5AC, MUC6 and MUC2 exhibited only cytoplasmic reactivity. The staining results were categorized as positive when at least one single cell among the carcinoma cells was stained and negative when none of the carcinoma cells were stained [6].", "Based on MUC5AC, MUC6, MUC2 and CD10, EGCs can be classified into the gastric phenotype (G-type), gastrointestinal phenotype (GI-type), intestinal phenotype (I-type) and null phenotype (N-type) [4, 6, 15–17]. The following criteria were used for the classification of mucin phenotypes: ① the G-type shows positive staining for at least one of the MUC5AC and MUC6, while CD10 and MUC2 are both negative; ② the I-type shows positive staining for CD10 and/or MUC2, while MUC5AC and MUC6 are both negative; ③ the GI-type shows positive staining for marker CD10 and/or MUC2 associated with one of the markers MUC5AC and MUC6 positive; and④ if none of the four markers are positive, the phenotype is classified as N-type.\nIn addition, among the GI-types, we labelled one expressing more MUC5AC and/or MUC6 than MUC2 and CD10 as a gastric-predominant GI phenotype (GI-G type); otherwise, we labelled it as an intestinal-predominant phenotype (GI-I type).", "Associations between mucin expression profiles and clinicopathological parameters were examined by the chi-square test or Fisher’s exact test. Statistical significance was established to be P < 0.05. Statistical calculations were performed with IBM SPSS Statistics (version 23.0).", "The expression percentages of CD10, MUC2, MUC5AC and MUC6 in all EGCs were 43.58 % (112/257), 63.81 % (164/257), 64.98 % (167/257) and 72.76 % (187/257) respectively (Fig. 1).\nFig. 1The proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype\nThe proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype\nTwo hundred fifty-seven EGCs were classified as G-type (21 %, 54/257), GI-type (56 %, 144/257), I-type (20 %, 51/257) and N-type (3 %, 8/257). The GI-type contained the GI-G type (72 %, 103/144) and GI-I type (28 %, 41/144). There were more cases of G-type and GI-G type (61 %, 157/257) than I-type and GI-I type (36 %, 95/257) (Fig. 2).\nFig. 2IHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) marker positivity and negative staining for MUC5AC (g) and MUC6 (h) negative. GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j),MUC5AC (k) and MUC6 (l). 
(Scale bar =200 μm, HE and IHC ×10)\nIHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) marker positivity and negative staining for MUC5AC (g) and MUC6 (h) negative. GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j),MUC5AC (k) and MUC6 (l). (Scale bar =200 μm, HE and IHC ×10)", "The relationship between mucin phenotype and clinicopathological features is summarized in Table 1. The mucin phenotypes were significantly related to the JGCA and WHO classifications (P < 0.05), but the parameters of sex, age, margin, colour, tumour size, gross type, depth of invasion, and lymphovascular invasion did not significantly differ among those mucin phenotypes (P > 0.05). The I-type had the highest proportion of differentiated EGCs among the four mucin phenotypes (100.0 % vs. 79.7 % vs. 83.1 % vs. 87.5 %, P = 0.027). The G-type group had a higher proportion of tub-por/sig and pap/tub-pap cases than the I-, GI- and N-type groups (20.4 % vs. 7.0 % vs. 0.0 % vs. 12.5 %, P = 0.027) according to the JGCA classification. According to the WHO classification, the G-type had more mixed histological components (18.5 % vs. 5.6 % vs. 0.0 % vs. 12.5 %, P = 0.006) than the other mucin phenotypes.\nTable 1Relationship between Mucin Phenotypes and Clinicopathological FeaturesG-typeGI-typeI-typeN-typeχ2 valueP valueSex0.6710.906 Male40(74.1 %)99(68.8 %)37(72.5 %)6(75.0 %) Female14(25.9 %)45(31.2 %)14(27.5 %)2(25.0 %)Age0.4120.945 ≤ 64 years28(51.9 %)75(52.1 %)29(56.9 %)4(50.0 %) >64 years26(48.1 %)69(47.9 %)22(43.1 %)4(50.0 %)Tumor Size2.3560.502 ≤ 2 cm34(63.0 %)99(68.8 %)32(62.7 %)7(87.5 %) >2 cm20(37.0 %)45(31.2 %)19(37.3 %)1(12.5 %)Margin4.2690.224 Distinct35(64.8 %)111(77.1 %)40(78.4 %)5(62.5 %) Indistinct19(35.2 %)33(22.9 %)11(21.6 %)3(37.5 %)Color3.4230.337 Darken31(57.4 %)75(52.1 %)34(66.7 %)5(62.5 %) Faded23(42.6 %)69(47.9 %)17(33.3 %)3(37.5 %)Tumor Location5.8190.407 Upper4(7.4 %)15(10.4 %)8(15.7 %)0(0.0 %) Middle12(22.2 %)21(14.6 %)12(23.5 %)1(12.5 %) Low38(70.4 %)108(75.0 %)31(60.8 %)7(87.5 %)Tumor Location0.7870.842 EGJ4(7.4 %)7(4.9 %)3(5.9 %)0(0.0 %) No-EGJ50(92.6 %)137(95.1 %)48(94.1 %)8(100.0 %)Gross type8.9430.162 Protruding10(18.5 %)32(22.2 %)10(19.6 %)0(0.0 %) Depressed16(29.6 %)57(39.6 %)26(51.0 %)3(37.5 %) Protruding-Depressed28(51.9 %)55(38.2 %)15(29.4 %)5(62.5 %)JGCA Classification16.8700.027 tub134(62.9 %)112(77.8 %)45(88.2 %)7(87.5 %) tub29(16.7 %)22(15.2 %)6(11.8 %)0(0.0 %) tub-por/sig7(13.0 %)7(4.9 %)0(0.0 %)1(12.5 %) pap/tub-pap4(7.4 %)3(2.1 %)0(0.0 %)0(0.0 %)WHO Classification18.0520.017 Tubular, well-diff34(62.9 %)112(77.8 %)45(88.2 %)7(87.5 %) Tubular, mod-diff9(16.7 %)22(15.2 %)6(11.8 %)0(0.0 %) Papillary1(1.9 %)2(1.4 %)0(0.0 %)0(0.0 %) Mixed10(18.5 %)8(7.0 %)0(0.0 %)1(12.5 %)Depth of Invasion1.3600.681 M47(87.0 %)132(91.7 %)46(90.2 %)8(100.0 %) SM7(13.0 %)12(8.3 %)5(9.8 %)0(0.0 %)Lymphovascular invasion2.5410.687 (+)1(1.9 %)1(0.7 %)0(0.0 %)0(0.0 %) (-)53(98.1 %)143(99.3 %)51(100.0 %)8(100.0 %)+, present; - absentEGJ esophagogastric junction, well-diff well differentiated, mod-diff moderately differentiated, M mucosa, SM submucosa\nRelationship between Mucin Phenotypes and Clinicopathological Features\n+, present; - absent\nEGJ esophagogastric junction, well-diff well differentiated, mod-diff moderately differentiated, M mucosa, SM submucosa", "Intestinal metaplasia (IM) of 
background mucosa was observed in 199 of 249 (79.9 %) cases (G-, GI- and I-type), including 38 cases of incomplete IM and 161 cases of complete IM. IM did not significantly differ among mucin phenotypes (P > 0.05). However, the presence of incomplete and complete IM was significantly different in distinct mucin phenotypes (P = 0.004, P = 0.018). The presence of incomplete IM in GI-type EGCs was higher than that in the G-type and I-type EGCs (21.5 % vs. 9.3 % vs. 3.9 %). In contrast, 77.8 % (42/54) of G-type EGCs and 70.6 % (36/51) of I-type EGCs exhibited complete IM, which was higher than the 57.6 % (83/114) of GI-type EGCs. The IM status of the background mucosa and the relationship with mucin phenotypes are shown in Table 2.\nTable 2Relationship between mucin phenotypes and background mucosaG-typeGI-typeI-typeχ2 valueP valueIntestinal metaplasia2.6850.265  (+)47(87.1 %)114(79.2 %)38(74.5 %)  (-)7(12.9 %)30(20.8 %)13(25.5 %)Incomplete intestinal metaplasia10.9480.004  (+)5(9.3 %)31(21.5 %)2(3.9 %)  (-)49(90.7 %)113(78.5 %)49(96.1 %)Complete intestinal metaplasia7.9570.018  (+)42(77.8 %)83(57.6 %)36(70.6 %)  (-)12(22.2 %)61(42.4 %)15(29.4 %)+, present; - absent\nRelationship between mucin phenotypes and background mucosa\n+, present; - absent", "In addition to tubular adenocarcinoma components, 22 of the 257 cases also contained components of papillary adenocarcinoma, poorly differentiated carcinoma, or signet ring cell carcinoma. Fifteen cases (68.18 %) contained por/sig components, and the other 7 (31.82 %) contained pap components. Eleven cases showed the G-type, 10 cases showed the GI-type, only one case showed the N-type, and none of them showed the I-type. Among the 10 GI-type cases, 9 cases showed the GI-G type, and one showed the GI-I type. Almost all of the 22 patients showed the G- and GI-G types, a proportion significantly higher than that of the I-type (P = 0.011). In addition, the 22 patients had a higher proportion of infiltration into the submucosa (P < 0.001) (Table 3; Figs. 3 and 4).\nTable 3Relationship between Biological Behavior and Mucin Phenotypestubtub-por/sigpap/tub-papχ2 valueP valueDepth of Invasion15.8240.000 M219(93.2 %)9(60.0 %)5(71.4 %) SM16(6.8 %)6(40.0 %)2(28.6 %)Mucin Phenotype14.7430.011 G-type43(18.3 %)7(46.7 %)4 (57.1 %) GI-type134(57.0 %)7(46.7 %)3(42.9 %) I-type51(21.7 %)0(0.0 %)0(0.0 %) N-type7(3.0 %)1(6.6 %)0(0.0 %)M mucosa, SM submucosaFig. 3Tubular adenocarcinoma mixed with papillary adenocarcinoma showing the gastric mucin phenotype. Reconstructive map (a). 
H&E staining showing papillary structure on the surface and tubular adenocarcinoma in the submucosa; invasive depth is 300 μm (b). Focal positive staining for MUC5AC (c) and strongly positive staining for MUC6 (d), invasive tubular adenocarcinoma showing MUC5AC negative (c) and MUC6 positive (d) status. (Scale bar =400 μm, HE and IHC ×10)\nTubular adenocarcinoma mixed with poorly differentiated carcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing poorly differentiated carcinoma in the mucosa and tubular adenocarcinoma in the submucosa; the invasive depth is 2500 μm (b). Positive staining for MUC5AC (c) and MUC6 (d), and invasive tubular adenocarcinoma showing MUC5AC (c) and MUC6 (d) positivity. (Scale bar =400 μm, HE and IHC x10)", "Six patients underwent additional gastrectomy, and there was no residual tumour or lymph node metastasis. All patients were under close follow-up, and neither recurrence nor metastasis was detected." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients and tissue specimens", "Immunohistochemistry", "Classification of mucin phenotype", "Statistical analysis", "Results", "Expression of mucin markers and mucin phenotype in early gastric cancers", "Relationship between mucin phenotype and clinicopathological features", "Relationship between mucin phenotypes and background mucosa", "Biological behaviour of mucin phenotypes", "Follow-up", "Discussion", "Conclusions" ]
[ "Gastric cancer (GC), one of the most common human cancers worldwide, is a disease with multiple pathogenic factors, various prognoses and different responses to treatments. Thus, properly distinguishing those with worse prognoses from those with better prognoses appears to be significantly important. Four different morphology -based classification systems exist, the World Health Organization (WHO/2019) [1], the Japanese Gastric Cancer Association (JGCA/2017) [1], Laurén [2] and Nakamura [3]. According to the WHO classification, GCs are subclassified into papillary, tubular, poorly cohesive, mucinous and mixed types. In the JGCA classification, the subtypes are papillary (pap), tubular (tub), poorly differentiated (por), signet-ring cell (sig), and mucinous (muc), which are similar to the subtypes used by the WHO. GCs are divided into intestinal and diffuse types using Laurén’s classification or into differentiated and undifferentiated types based on Nakamura’s classification [2–4]. The differentiated type contains pap, tub1, and tub2 according to the JGCA classification and papillary and well/moderately differentiated adenocarcinoma according to the WHO classification. These different histological types exhibit distinct biological behaviours.\nThe mucous produced by cancers is one of the factors determining the nature of biological behaviour. The main component of mucous is a high-molecular-weight glycoprotein called mucin [5]. As cancer progresses, the nature of the mucous changes relative to the degree of biological malignancy. In the 1990 s, with the progress of structural analysis of mucin and the widespread use of monoclonal antibodies to the core protein of mucin, a mucin phenotype subclassification emerged. Mucin phenotype subclassification was entirely based on the mucin expression pattern, independent of histological features. Thus, GCs are classified into gastric, intestinal, gastrointestinal and null mucin phenotypes [4–6]. Previous studies have reported that the gastric phenotype has a higher potential for invasion and metastasis than the intestinal type, which results in a worse prognosis of GCs [7–11]. However, studies have mostly focused on advanced gastric cancers, and early gastric cancers are rarely investigated.\nEarly gastric cancer (EGC) is defined as tumour invasion confined to the mucosa and submucosa, irrespective of regional lymph node metastasis [12]. Endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD) are used as treatments for some intramucosal carcinomas and submucosal lesions, which have a very low probability of lymph node metastasis [13, 14].\nTo our knowledge, there is no research exploring biological role of mucin phenotypes in EGCs using EMR/ESD by Chinese investigators. Little information is available on the effects of mucin phenotypes on the clinicopathological features of EGCs in a Chinese cohort. Accordingly, we examined mucin expression and mucin phenotypes and explored mucin phenotype clinicopathological characteristics and biological behaviour.", "Patients and tissue specimens Our study consisted of 257 consecutive patients who underwent EMR/ESD for differentiated EGCs between January 2012 and June 2018 at The Second Affiliated Hospital of Zhejiang University, China. The group comprised 182 men and 75 women with an age range of 29 to 87 (mean 64) years old.\nThe location of each lesion was classified in terms of the upper (27 cases), middle (46 cases) and lower (184 cases) thirds of the stomach. 
The size of each lesion was measured by the maximum diameter, which ranged from 0.1 to 6.5 (mean 1.5) centimetres. The protruding category (52 cases) included type 0-I and 0-IIa, the depressed (102 cases) category contained type 0-IIc and III, and all the other cases were considered under the protruding and depressed category (103 cases). Based on the WHO and JGCA classification, the EGCs were subclassified into well differentiated tubular adenocarcinoma (well-diff/tub 1, 198 cases), moderately differentiated tubular adenocarcinoma (mod-diff/tub 2, 37 cases), papillary adenocarcinoma (3 cases) and mixed (tub-pap/sig/por, 19 cases). In the mixed cases, the undifferentiated components (sig/por) were less than 50 %.\nOur study consisted of 257 consecutive patients who underwent EMR/ESD for differentiated EGCs between January 2012 and June 2018 at The Second Affiliated Hospital of Zhejiang University, China. The group comprised 182 men and 75 women with an age range of 29 to 87 (mean 64) years old.\nThe location of each lesion was classified in terms of the upper (27 cases), middle (46 cases) and lower (184 cases) thirds of the stomach. The size of each lesion was measured by the maximum diameter, which ranged from 0.1 to 6.5 (mean 1.5) centimetres. The protruding category (52 cases) included type 0-I and 0-IIa, the depressed (102 cases) category contained type 0-IIc and III, and all the other cases were considered under the protruding and depressed category (103 cases). Based on the WHO and JGCA classification, the EGCs were subclassified into well differentiated tubular adenocarcinoma (well-diff/tub 1, 198 cases), moderately differentiated tubular adenocarcinoma (mod-diff/tub 2, 37 cases), papillary adenocarcinoma (3 cases) and mixed (tub-pap/sig/por, 19 cases). In the mixed cases, the undifferentiated components (sig/por) were less than 50 %.\nImmunohistochemistry All specimens were fixed with 10 % buffered formalin, embedded in paraffin, cut into 4-µm-thick sections, and subjected to haematoxylin and eosin (HE) staining. MUC2 was detected by mAb Ccp58 (Zsbio, 1:100), MUC5AC by mAb MRQ-19 (Zsbio, 1:100), MUC6 by mAb MRQ-20 (Zsbio, 1:100), and CD10 (Zsbio, 1:100) by immunohistochemistry (IHC). IHC was performed by using the Ventana NexES Staining System (Roche, Benchmark®XT). The marker CD10 exhibited both cytoplasmic and glandular luminal reactivity, whereas MUC5AC, MUC6 and MUC2 exhibited only cytoplasmic reactivity. The staining results were categorized as positive when at least one single cell among the carcinoma cells was stained and negative when none of the carcinoma cells were stained [6].\nAll specimens were fixed with 10 % buffered formalin, embedded in paraffin, cut into 4-µm-thick sections, and subjected to haematoxylin and eosin (HE) staining. MUC2 was detected by mAb Ccp58 (Zsbio, 1:100), MUC5AC by mAb MRQ-19 (Zsbio, 1:100), MUC6 by mAb MRQ-20 (Zsbio, 1:100), and CD10 (Zsbio, 1:100) by immunohistochemistry (IHC). IHC was performed by using the Ventana NexES Staining System (Roche, Benchmark®XT). The marker CD10 exhibited both cytoplasmic and glandular luminal reactivity, whereas MUC5AC, MUC6 and MUC2 exhibited only cytoplasmic reactivity. 
The staining results were categorized as positive when at least one single cell among the carcinoma cells was stained and negative when none of the carcinoma cells were stained [6].\nClassification of mucin phenotype Based on MUC5AC, MUC6, MUC2 and CD10, EGCs can be classified into the gastric phenotype (G-type), gastrointestinal phenotype (GI-type), intestinal phenotype (I-type) and null phenotype (N-type) [4, 6, 15–17]. The following criteria were used for the classification of mucin phenotypes: ① the G-type shows positive staining for at least one of the MUC5AC and MUC6, while CD10 and MUC2 are both negative; ② the I-type shows positive staining for CD10 and/or MUC2, while MUC5AC and MUC6 are both negative; ③ the GI-type shows positive staining for marker CD10 and/or MUC2 associated with one of the markers MUC5AC and MUC6 positive; and④ if none of the four markers are positive, the phenotype is classified as N-type.\nIn addition, among the GI-types, we labelled one expressing more MUC5AC and/or MUC6 than MUC2 and CD10 as a gastric-predominant GI phenotype (GI-G type); otherwise, we labelled it as an intestinal-predominant phenotype (GI-I type).\nBased on MUC5AC, MUC6, MUC2 and CD10, EGCs can be classified into the gastric phenotype (G-type), gastrointestinal phenotype (GI-type), intestinal phenotype (I-type) and null phenotype (N-type) [4, 6, 15–17]. The following criteria were used for the classification of mucin phenotypes: ① the G-type shows positive staining for at least one of the MUC5AC and MUC6, while CD10 and MUC2 are both negative; ② the I-type shows positive staining for CD10 and/or MUC2, while MUC5AC and MUC6 are both negative; ③ the GI-type shows positive staining for marker CD10 and/or MUC2 associated with one of the markers MUC5AC and MUC6 positive; and④ if none of the four markers are positive, the phenotype is classified as N-type.\nIn addition, among the GI-types, we labelled one expressing more MUC5AC and/or MUC6 than MUC2 and CD10 as a gastric-predominant GI phenotype (GI-G type); otherwise, we labelled it as an intestinal-predominant phenotype (GI-I type).\nStatistical analysis Associations between mucin expression profiles and clinicopathological parameters were examined by the chi-square test or Fisher’s exact test. Statistical significance was established to be P < 0.05. Statistical calculations were performed with IBM SPSS Statistics (version 23.0).\nAssociations between mucin expression profiles and clinicopathological parameters were examined by the chi-square test or Fisher’s exact test. Statistical significance was established to be P < 0.05. Statistical calculations were performed with IBM SPSS Statistics (version 23.0).", "Our study consisted of 257 consecutive patients who underwent EMR/ESD for differentiated EGCs between January 2012 and June 2018 at The Second Affiliated Hospital of Zhejiang University, China. The group comprised 182 men and 75 women with an age range of 29 to 87 (mean 64) years old.\nThe location of each lesion was classified in terms of the upper (27 cases), middle (46 cases) and lower (184 cases) thirds of the stomach. The size of each lesion was measured by the maximum diameter, which ranged from 0.1 to 6.5 (mean 1.5) centimetres. The protruding category (52 cases) included type 0-I and 0-IIa, the depressed (102 cases) category contained type 0-IIc and III, and all the other cases were considered under the protruding and depressed category (103 cases). 
Based on the WHO and JGCA classification, the EGCs were subclassified into well differentiated tubular adenocarcinoma (well-diff/tub 1, 198 cases), moderately differentiated tubular adenocarcinoma (mod-diff/tub 2, 37 cases), papillary adenocarcinoma (3 cases) and mixed (tub-pap/sig/por, 19 cases). In the mixed cases, the undifferentiated components (sig/por) were less than 50 %.", "All specimens were fixed with 10 % buffered formalin, embedded in paraffin, cut into 4-µm-thick sections, and subjected to haematoxylin and eosin (HE) staining. MUC2 was detected by mAb Ccp58 (Zsbio, 1:100), MUC5AC by mAb MRQ-19 (Zsbio, 1:100), MUC6 by mAb MRQ-20 (Zsbio, 1:100), and CD10 (Zsbio, 1:100) by immunohistochemistry (IHC). IHC was performed by using the Ventana NexES Staining System (Roche, Benchmark®XT). The marker CD10 exhibited both cytoplasmic and glandular luminal reactivity, whereas MUC5AC, MUC6 and MUC2 exhibited only cytoplasmic reactivity. The staining results were categorized as positive when at least one single cell among the carcinoma cells was stained and negative when none of the carcinoma cells were stained [6].", "Based on MUC5AC, MUC6, MUC2 and CD10, EGCs can be classified into the gastric phenotype (G-type), gastrointestinal phenotype (GI-type), intestinal phenotype (I-type) and null phenotype (N-type) [4, 6, 15–17]. The following criteria were used for the classification of mucin phenotypes: ① the G-type shows positive staining for at least one of the MUC5AC and MUC6, while CD10 and MUC2 are both negative; ② the I-type shows positive staining for CD10 and/or MUC2, while MUC5AC and MUC6 are both negative; ③ the GI-type shows positive staining for marker CD10 and/or MUC2 associated with one of the markers MUC5AC and MUC6 positive; and④ if none of the four markers are positive, the phenotype is classified as N-type.\nIn addition, among the GI-types, we labelled one expressing more MUC5AC and/or MUC6 than MUC2 and CD10 as a gastric-predominant GI phenotype (GI-G type); otherwise, we labelled it as an intestinal-predominant phenotype (GI-I type).", "Associations between mucin expression profiles and clinicopathological parameters were examined by the chi-square test or Fisher’s exact test. Statistical significance was established to be P < 0.05. Statistical calculations were performed with IBM SPSS Statistics (version 23.0).", "Expression of mucin markers and mucin phenotype in early gastric cancers The expression percentages of CD10, MUC2, MUC5AC and MUC6 in all EGCs were 43.58 % (112/257), 63.81 % (164/257), 64.98 % (167/257) and 72.76 % (187/257) respectively (Fig. 1).\nFig. 1The proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype\nThe proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype\nTwo hundred fifty-seven EGCs were classified as G-type (21 %, 54/257), GI-type (56 %, 144/257), I-type (20 %, 51/257) and N-type (3 %, 8/257). The GI-type contained the GI-G type (72 %, 103/144) and GI-I type (28 %, 41/144). There were more cases of G-type and GI-G type (61 %, 157/257) than I-type and GI-I type (36 %, 95/257) (Fig. 2).\nFig. 2IHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) marker positivity and negative staining for MUC5AC (g) and MUC6 (h) negative. GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j),MUC5AC (k) and MUC6 (l). 
(Scale bar =200 μm, HE and IHC ×10)\nIHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) marker positivity and negative staining for MUC5AC (g) and MUC6 (h) negative. GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j),MUC5AC (k) and MUC6 (l). (Scale bar =200 μm, HE and IHC ×10)\nThe expression percentages of CD10, MUC2, MUC5AC and MUC6 in all EGCs were 43.58 % (112/257), 63.81 % (164/257), 64.98 % (167/257) and 72.76 % (187/257) respectively (Fig. 1).\nFig. 1The proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype\nThe proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype\nTwo hundred fifty-seven EGCs were classified as G-type (21 %, 54/257), GI-type (56 %, 144/257), I-type (20 %, 51/257) and N-type (3 %, 8/257). The GI-type contained the GI-G type (72 %, 103/144) and GI-I type (28 %, 41/144). There were more cases of G-type and GI-G type (61 %, 157/257) than I-type and GI-I type (36 %, 95/257) (Fig. 2).\nFig. 2IHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) marker positivity and negative staining for MUC5AC (g) and MUC6 (h) negative. GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j),MUC5AC (k) and MUC6 (l). (Scale bar =200 μm, HE and IHC ×10)\nIHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) marker positivity and negative staining for MUC5AC (g) and MUC6 (h) negative. GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j),MUC5AC (k) and MUC6 (l). (Scale bar =200 μm, HE and IHC ×10)\nRelationship between mucin phenotype and clinicopathological features The relationship between mucin phenotype and clinicopathological features is summarized in Table 1. The mucin phenotypes were significantly related to the JGCA and WHO classifications (P < 0.05), but the parameters of sex, age, margin, colour, tumour size, gross type, depth of invasion, and lymphovascular invasion did not significantly differ among those mucin phenotypes (P > 0.05). The I-type had the highest proportion of differentiated EGCs among the four mucin phenotypes (100.0 % vs. 79.7 % vs. 83.1 % vs. 87.5 %, P = 0.027). The G-type group had a higher proportion of tub-por/sig and pap/tub-pap cases than the I-, GI- and N-type groups (20.4 % vs. 7.0 % vs. 0.0 % vs. 12.5 %, P = 0.027) according to the JGCA classification. According to the WHO classification, the G-type had more mixed histological components (18.5 % vs. 5.6 % vs. 0.0 % vs. 
12.5 %, P = 0.006) than the other mucin phenotypes.\nTable 1Relationship between Mucin Phenotypes and Clinicopathological FeaturesG-typeGI-typeI-typeN-typeχ2 valueP valueSex0.6710.906 Male40(74.1 %)99(68.8 %)37(72.5 %)6(75.0 %) Female14(25.9 %)45(31.2 %)14(27.5 %)2(25.0 %)Age0.4120.945 ≤ 64 years28(51.9 %)75(52.1 %)29(56.9 %)4(50.0 %) >64 years26(48.1 %)69(47.9 %)22(43.1 %)4(50.0 %)Tumor Size2.3560.502 ≤ 2 cm34(63.0 %)99(68.8 %)32(62.7 %)7(87.5 %) >2 cm20(37.0 %)45(31.2 %)19(37.3 %)1(12.5 %)Margin4.2690.224 Distinct35(64.8 %)111(77.1 %)40(78.4 %)5(62.5 %) Indistinct19(35.2 %)33(22.9 %)11(21.6 %)3(37.5 %)Color3.4230.337 Darken31(57.4 %)75(52.1 %)34(66.7 %)5(62.5 %) Faded23(42.6 %)69(47.9 %)17(33.3 %)3(37.5 %)Tumor Location5.8190.407 Upper4(7.4 %)15(10.4 %)8(15.7 %)0(0.0 %) Middle12(22.2 %)21(14.6 %)12(23.5 %)1(12.5 %) Low38(70.4 %)108(75.0 %)31(60.8 %)7(87.5 %)Tumor Location0.7870.842 EGJ4(7.4 %)7(4.9 %)3(5.9 %)0(0.0 %) No-EGJ50(92.6 %)137(95.1 %)48(94.1 %)8(100.0 %)Gross type8.9430.162 Protruding10(18.5 %)32(22.2 %)10(19.6 %)0(0.0 %) Depressed16(29.6 %)57(39.6 %)26(51.0 %)3(37.5 %) Protruding-Depressed28(51.9 %)55(38.2 %)15(29.4 %)5(62.5 %)JGCA Classification16.8700.027 tub134(62.9 %)112(77.8 %)45(88.2 %)7(87.5 %) tub29(16.7 %)22(15.2 %)6(11.8 %)0(0.0 %) tub-por/sig7(13.0 %)7(4.9 %)0(0.0 %)1(12.5 %) pap/tub-pap4(7.4 %)3(2.1 %)0(0.0 %)0(0.0 %)WHO Classification18.0520.017 Tubular, well-diff34(62.9 %)112(77.8 %)45(88.2 %)7(87.5 %) Tubular, mod-diff9(16.7 %)22(15.2 %)6(11.8 %)0(0.0 %) Papillary1(1.9 %)2(1.4 %)0(0.0 %)0(0.0 %) Mixed10(18.5 %)8(7.0 %)0(0.0 %)1(12.5 %)Depth of Invasion1.3600.681 M47(87.0 %)132(91.7 %)46(90.2 %)8(100.0 %) SM7(13.0 %)12(8.3 %)5(9.8 %)0(0.0 %)Lymphovascular invasion2.5410.687 (+)1(1.9 %)1(0.7 %)0(0.0 %)0(0.0 %) (-)53(98.1 %)143(99.3 %)51(100.0 %)8(100.0 %)+, present; - absentEGJ esophagogastric junction, well-diff well differentiated, mod-diff moderately differentiated, M mucosa, SM submucosa\nRelationship between Mucin Phenotypes and Clinicopathological Features\n+, present; - absent\nEGJ esophagogastric junction, well-diff well differentiated, mod-diff moderately differentiated, M mucosa, SM submucosa\nThe relationship between mucin phenotype and clinicopathological features is summarized in Table 1. The mucin phenotypes were significantly related to the JGCA and WHO classifications (P < 0.05), but the parameters of sex, age, margin, colour, tumour size, gross type, depth of invasion, and lymphovascular invasion did not significantly differ among those mucin phenotypes (P > 0.05). The I-type had the highest proportion of differentiated EGCs among the four mucin phenotypes (100.0 % vs. 79.7 % vs. 83.1 % vs. 87.5 %, P = 0.027). The G-type group had a higher proportion of tub-por/sig and pap/tub-pap cases than the I-, GI- and N-type groups (20.4 % vs. 7.0 % vs. 0.0 % vs. 12.5 %, P = 0.027) according to the JGCA classification. According to the WHO classification, the G-type had more mixed histological components (18.5 % vs. 5.6 % vs. 0.0 % vs. 
12.5 %, P = 0.006) than the other mucin phenotypes.\nTable 1Relationship between Mucin Phenotypes and Clinicopathological FeaturesG-typeGI-typeI-typeN-typeχ2 valueP valueSex0.6710.906 Male40(74.1 %)99(68.8 %)37(72.5 %)6(75.0 %) Female14(25.9 %)45(31.2 %)14(27.5 %)2(25.0 %)Age0.4120.945 ≤ 64 years28(51.9 %)75(52.1 %)29(56.9 %)4(50.0 %) >64 years26(48.1 %)69(47.9 %)22(43.1 %)4(50.0 %)Tumor Size2.3560.502 ≤ 2 cm34(63.0 %)99(68.8 %)32(62.7 %)7(87.5 %) >2 cm20(37.0 %)45(31.2 %)19(37.3 %)1(12.5 %)Margin4.2690.224 Distinct35(64.8 %)111(77.1 %)40(78.4 %)5(62.5 %) Indistinct19(35.2 %)33(22.9 %)11(21.6 %)3(37.5 %)Color3.4230.337 Darken31(57.4 %)75(52.1 %)34(66.7 %)5(62.5 %) Faded23(42.6 %)69(47.9 %)17(33.3 %)3(37.5 %)Tumor Location5.8190.407 Upper4(7.4 %)15(10.4 %)8(15.7 %)0(0.0 %) Middle12(22.2 %)21(14.6 %)12(23.5 %)1(12.5 %) Low38(70.4 %)108(75.0 %)31(60.8 %)7(87.5 %)Tumor Location0.7870.842 EGJ4(7.4 %)7(4.9 %)3(5.9 %)0(0.0 %) No-EGJ50(92.6 %)137(95.1 %)48(94.1 %)8(100.0 %)Gross type8.9430.162 Protruding10(18.5 %)32(22.2 %)10(19.6 %)0(0.0 %) Depressed16(29.6 %)57(39.6 %)26(51.0 %)3(37.5 %) Protruding-Depressed28(51.9 %)55(38.2 %)15(29.4 %)5(62.5 %)JGCA Classification16.8700.027 tub134(62.9 %)112(77.8 %)45(88.2 %)7(87.5 %) tub29(16.7 %)22(15.2 %)6(11.8 %)0(0.0 %) tub-por/sig7(13.0 %)7(4.9 %)0(0.0 %)1(12.5 %) pap/tub-pap4(7.4 %)3(2.1 %)0(0.0 %)0(0.0 %)WHO Classification18.0520.017 Tubular, well-diff34(62.9 %)112(77.8 %)45(88.2 %)7(87.5 %) Tubular, mod-diff9(16.7 %)22(15.2 %)6(11.8 %)0(0.0 %) Papillary1(1.9 %)2(1.4 %)0(0.0 %)0(0.0 %) Mixed10(18.5 %)8(7.0 %)0(0.0 %)1(12.5 %)Depth of Invasion1.3600.681 M47(87.0 %)132(91.7 %)46(90.2 %)8(100.0 %) SM7(13.0 %)12(8.3 %)5(9.8 %)0(0.0 %)Lymphovascular invasion2.5410.687 (+)1(1.9 %)1(0.7 %)0(0.0 %)0(0.0 %) (-)53(98.1 %)143(99.3 %)51(100.0 %)8(100.0 %)+, present; - absentEGJ esophagogastric junction, well-diff well differentiated, mod-diff moderately differentiated, M mucosa, SM submucosa\nRelationship between Mucin Phenotypes and Clinicopathological Features\n+, present; - absent\nEGJ esophagogastric junction, well-diff well differentiated, mod-diff moderately differentiated, M mucosa, SM submucosa\nRelationship between mucin phenotypes and background mucosa Intestinal metaplasia (IM) of background mucosa was observed in 199 of 249 (79.9 %) cases (G-, GI- and GI-type), including 38 cases of incomplete IM and 161 cases of complete IM. IM did not significantly differ among mucin phenotypes (P > 0.05). However, the presence of incomplete and complete IM was significantly different in distinct mucin phenotypes (P = 0.004, P = 0.018). The presence of incomplete IM in GI-type EGCs was higher than that in the G-type and I-type EGCs (21.5 % vs. 9.3 % vs. 3.9 %). In the contrast, 77.8 % (42/54) of G-type EGCs and 70.6 % (36/51) of I-type EGCs exhibited complete IM, which was higher than the 57.6 % (83/114) of GI-type EGCs. 
The IM status of the background mucosa and the relationship with mucin phenotypes are shown in Table 2.\nTable 2Relationship between mucin phenotypes and background mucosaG-typeGI-typeI-typeχ2 valueP valueIntestinal metaplasia2.6850.265  (+)47(87.1 %)114(79.2 %)38(74.5 %)  (-)7(12.9 %)30(20.8 %)13(25.5 %)Incomplete intestinal metaplasia10.9480.004  (+)5(9.3 %)31(21.5 %)2(3.9 %)  (-)49(90.7 %)113(78.5 %)49(96.1 %)Complete intestinal metaplasia7.9570.018  (+)42(77.8 %)83(57.6 %)36(70.6 %)  (-)12(22.2 %)61(42.4 %)15(29.4 %)+, present; - absent\nRelationship between mucin phenotypes and background mucosa\n+, present; - absent\nIntestinal metaplasia (IM) of background mucosa was observed in 199 of 249 (79.9 %) cases (G-, GI- and GI-type), including 38 cases of incomplete IM and 161 cases of complete IM. IM did not significantly differ among mucin phenotypes (P > 0.05). However, the presence of incomplete and complete IM was significantly different in distinct mucin phenotypes (P = 0.004, P = 0.018). The presence of incomplete IM in GI-type EGCs was higher than that in the G-type and I-type EGCs (21.5 % vs. 9.3 % vs. 3.9 %). In the contrast, 77.8 % (42/54) of G-type EGCs and 70.6 % (36/51) of I-type EGCs exhibited complete IM, which was higher than the 57.6 % (83/114) of GI-type EGCs. The IM status of the background mucosa and the relationship with mucin phenotypes are shown in Table 2.\nTable 2Relationship between mucin phenotypes and background mucosaG-typeGI-typeI-typeχ2 valueP valueIntestinal metaplasia2.6850.265  (+)47(87.1 %)114(79.2 %)38(74.5 %)  (-)7(12.9 %)30(20.8 %)13(25.5 %)Incomplete intestinal metaplasia10.9480.004  (+)5(9.3 %)31(21.5 %)2(3.9 %)  (-)49(90.7 %)113(78.5 %)49(96.1 %)Complete intestinal metaplasia7.9570.018  (+)42(77.8 %)83(57.6 %)36(70.6 %)  (-)12(22.2 %)61(42.4 %)15(29.4 %)+, present; - absent\nRelationship between mucin phenotypes and background mucosa\n+, present; - absent\nBiological behaviour of mucin phenotypes In addition to tubular adenocarcinoma components, 22 of the 257 cases also contained components of papillary adenocarcinoma, poorly differentiated carcinoma, or signet ring cell carcinoma. Fifteen cases (68.18 %) contained por/sig components, and the other 7 (31.82 %) contained pap components. Eleven cases showed the G-type, 10 cases showed the GI-type, only one case showed the N-type, and none of them showed the I-type. Among the 10 GI-type cases, 9 cases showed the GI-G type, and one showed the GI-I type. Almost all the 22 patients showed the G- and GI-G types, which was significantly higher than the number of I-type patients (P = 0.011). In addition, the 22 patients had a higher proportion of infiltration into the submucosa (P < 0.001). (Table 3; Figs. 3 and 4).\nTable 3Relationship between Biological Behavior and Mucin Phenotypestubtub-por/sigpap/tub-papχ2 valueP valueDepth of Invasion15.8240.000 M219(93.2 %)9(60.0 %)5(71.4 %) SM16(6.8 %)6(40.0 %)2(28.6 %)Mucin Phenotype14.7430.011 G-type43(18.3 %)7(46.7 %)4 (57.1 %) GI-type134(57.0 %)7(46.7 %)3(42.9 %) I-type51(21.7 %)0(0.0 %)0(0.0 %) N-type7(3.0 %)1(6.6 %)0(0.0 %)M mucosa, SM submucosaFig. 3Tubular adenocarcinoma mixed with papillary adenocarcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing papillary structure on the surface and tubular adenocarcinoma in the submucosa; invasive depth is 300 μm (b). 
Focal positive staining for MUC5AC (c) and strongly positive staining for MUC6 (d), invasive tubular adenocarcinoma showing MUC5AC negative (c) and MUC6 positive (d) status. (Scale bar =400 μm, HE and IHC ×10)Fig. 4Tubular adenocarcinoma mixed with poorly differentiated carcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing poorly differentiated carcinoma in the mucosa and tubular adenocarcinoma in the submucosa; the invasive depth is 2500 μm (b). Positive staining for MUC5AC (c) and MUC6 (d), and invasive tubular adenocarcinoma showing MUC5AC (c) and MUC6 (d) positivity. (Scale bar =400 μm, HE and IHC x10)\nRelationship between Biological Behavior and Mucin Phenotypes\nM mucosa, SM submucosa\nTubular adenocarcinoma mixed with papillary adenocarcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing papillary structure on the surface and tubular adenocarcinoma in the submucosa; invasive depth is 300 μm (b). Focal positive staining for MUC5AC (c) and strongly positive staining for MUC6 (d), invasive tubular adenocarcinoma showing MUC5AC negative (c) and MUC6 positive (d) status. (Scale bar =400 μm, HE and IHC ×10)\nTubular adenocarcinoma mixed with poorly differentiated carcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing poorly differentiated carcinoma in the mucosa and tubular adenocarcinoma in the submucosa; the invasive depth is 2500 μm (b). Positive staining for MUC5AC (c) and MUC6 (d), and invasive tubular adenocarcinoma showing MUC5AC (c) and MUC6 (d) positivity. (Scale bar =400 μm, HE and IHC x10)\nIn addition to tubular adenocarcinoma components, 22 of the 257 cases also contained components of papillary adenocarcinoma, poorly differentiated carcinoma, or signet ring cell carcinoma. Fifteen cases (68.18 %) contained por/sig components, and the other 7 (31.82 %) contained pap components. Eleven cases showed the G-type, 10 cases showed the GI-type, only one case showed the N-type, and none of them showed the I-type. Among the 10 GI-type cases, 9 cases showed the GI-G type, and one showed the GI-I type. Almost all the 22 patients showed the G- and GI-G types, which was significantly higher than the number of I-type patients (P = 0.011). In addition, the 22 patients had a higher proportion of infiltration into the submucosa (P < 0.001). (Table 3; Figs. 3 and 4).\nTable 3Relationship between Biological Behavior and Mucin Phenotypestubtub-por/sigpap/tub-papχ2 valueP valueDepth of Invasion15.8240.000 M219(93.2 %)9(60.0 %)5(71.4 %) SM16(6.8 %)6(40.0 %)2(28.6 %)Mucin Phenotype14.7430.011 G-type43(18.3 %)7(46.7 %)4 (57.1 %) GI-type134(57.0 %)7(46.7 %)3(42.9 %) I-type51(21.7 %)0(0.0 %)0(0.0 %) N-type7(3.0 %)1(6.6 %)0(0.0 %)M mucosa, SM submucosaFig. 3Tubular adenocarcinoma mixed with papillary adenocarcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing papillary structure on the surface and tubular adenocarcinoma in the submucosa; invasive depth is 300 μm (b). Focal positive staining for MUC5AC (c) and strongly positive staining for MUC6 (d), invasive tubular adenocarcinoma showing MUC5AC negative (c) and MUC6 positive (d) status. (Scale bar =400 μm, HE and IHC ×10)Fig. 4Tubular adenocarcinoma mixed with poorly differentiated carcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing poorly differentiated carcinoma in the mucosa and tubular adenocarcinoma in the submucosa; the invasive depth is 2500 μm (b). 
Positive staining for MUC5AC (c) and MUC6 (d), and invasive tubular adenocarcinoma showing MUC5AC (c) and MUC6 (d) positivity. (Scale bar =400 μm, HE and IHC x10)\nRelationship between Biological Behavior and Mucin Phenotypes\nM mucosa, SM submucosa\nTubular adenocarcinoma mixed with papillary adenocarcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing papillary structure on the surface and tubular adenocarcinoma in the submucosa; invasive depth is 300 μm (b). Focal positive staining for MUC5AC (c) and strongly positive staining for MUC6 (d), invasive tubular adenocarcinoma showing MUC5AC negative (c) and MUC6 positive (d) status. (Scale bar =400 μm, HE and IHC ×10)\nTubular adenocarcinoma mixed with poorly differentiated carcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing poorly differentiated carcinoma in the mucosa and tubular adenocarcinoma in the submucosa; the invasive depth is 2500 μm (b). Positive staining for MUC5AC (c) and MUC6 (d), and invasive tubular adenocarcinoma showing MUC5AC (c) and MUC6 (d) positivity. (Scale bar =400 μm, HE and IHC x10)\nFollow-up Six patients underwent additional gastrectomy, and there was no residual tumour or lymph node metastasis. All patients were under close follow-up, and neither recurrence nor metastasis was detected.\nSix patients underwent additional gastrectomy, and there was no residual tumour or lymph node metastasis. All patients were under close follow-up, and neither recurrence nor metastasis was detected.", "The expression percentages of CD10, MUC2, MUC5AC and MUC6 in all EGCs were 43.58 % (112/257), 63.81 % (164/257), 64.98 % (167/257) and 72.76 % (187/257) respectively (Fig. 1).\nFig. 1The proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype\nThe proportion of CD10, MUC2, MUC5AC, and MUC6 in each mucin phenotype\nTwo hundred fifty-seven EGCs were classified as G-type (21 %, 54/257), GI-type (56 %, 144/257), I-type (20 %, 51/257) and N-type (3 %, 8/257). The GI-type contained the GI-G type (72 %, 103/144) and GI-I type (28 %, 41/144). There were more cases of G-type and GI-G type (61 %, 157/257) than I-type and GI-I type (36 %, 95/257) (Fig. 2).\nFig. 2IHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) marker positivity and negative staining for MUC5AC (g) and MUC6 (h) negative. GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j),MUC5AC (k) and MUC6 (l). (Scale bar =200 μm, HE and IHC ×10)\nIHC staining of the gastric phenotype, intestinal phenotype and gastrointestinal phenotype. G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, I-type (e-h) shows CD10 (e) and MUC2 (f) marker positivity and negative staining for MUC5AC (g) and MUC6 (h) negative. GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j),MUC5AC (k) and MUC6 (l). (Scale bar =200 μm, HE and IHC ×10)", "The relationship between mucin phenotype and clinicopathological features is summarized in Table 1. The mucin phenotypes were significantly related to the JGCA and WHO classifications (P < 0.05), but the parameters of sex, age, margin, colour, tumour size, gross type, depth of invasion, and lymphovascular invasion did not significantly differ among those mucin phenotypes (P > 0.05). 
The I-type had the highest proportion of differentiated EGCs among the four mucin phenotypes (100.0 % vs. 79.7 % vs. 83.1 % vs. 87.5 %, P = 0.027). The G-type group had a higher proportion of tub-por/sig and pap/tub-pap cases than the I-, GI- and N-type groups (20.4 % vs. 7.0 % vs. 0.0 % vs. 12.5 %, P = 0.027) according to the JGCA classification. According to the WHO classification, the G-type had more mixed histological components (18.5 % vs. 5.6 % vs. 0.0 % vs. 12.5 %, P = 0.006) than the other mucin phenotypes.\nTable 1Relationship between Mucin Phenotypes and Clinicopathological FeaturesG-typeGI-typeI-typeN-typeχ2 valueP valueSex0.6710.906 Male40(74.1 %)99(68.8 %)37(72.5 %)6(75.0 %) Female14(25.9 %)45(31.2 %)14(27.5 %)2(25.0 %)Age0.4120.945 ≤ 64 years28(51.9 %)75(52.1 %)29(56.9 %)4(50.0 %) >64 years26(48.1 %)69(47.9 %)22(43.1 %)4(50.0 %)Tumor Size2.3560.502 ≤ 2 cm34(63.0 %)99(68.8 %)32(62.7 %)7(87.5 %) >2 cm20(37.0 %)45(31.2 %)19(37.3 %)1(12.5 %)Margin4.2690.224 Distinct35(64.8 %)111(77.1 %)40(78.4 %)5(62.5 %) Indistinct19(35.2 %)33(22.9 %)11(21.6 %)3(37.5 %)Color3.4230.337 Darken31(57.4 %)75(52.1 %)34(66.7 %)5(62.5 %) Faded23(42.6 %)69(47.9 %)17(33.3 %)3(37.5 %)Tumor Location5.8190.407 Upper4(7.4 %)15(10.4 %)8(15.7 %)0(0.0 %) Middle12(22.2 %)21(14.6 %)12(23.5 %)1(12.5 %) Low38(70.4 %)108(75.0 %)31(60.8 %)7(87.5 %)Tumor Location0.7870.842 EGJ4(7.4 %)7(4.9 %)3(5.9 %)0(0.0 %) No-EGJ50(92.6 %)137(95.1 %)48(94.1 %)8(100.0 %)Gross type8.9430.162 Protruding10(18.5 %)32(22.2 %)10(19.6 %)0(0.0 %) Depressed16(29.6 %)57(39.6 %)26(51.0 %)3(37.5 %) Protruding-Depressed28(51.9 %)55(38.2 %)15(29.4 %)5(62.5 %)JGCA Classification16.8700.027 tub134(62.9 %)112(77.8 %)45(88.2 %)7(87.5 %) tub29(16.7 %)22(15.2 %)6(11.8 %)0(0.0 %) tub-por/sig7(13.0 %)7(4.9 %)0(0.0 %)1(12.5 %) pap/tub-pap4(7.4 %)3(2.1 %)0(0.0 %)0(0.0 %)WHO Classification18.0520.017 Tubular, well-diff34(62.9 %)112(77.8 %)45(88.2 %)7(87.5 %) Tubular, mod-diff9(16.7 %)22(15.2 %)6(11.8 %)0(0.0 %) Papillary1(1.9 %)2(1.4 %)0(0.0 %)0(0.0 %) Mixed10(18.5 %)8(7.0 %)0(0.0 %)1(12.5 %)Depth of Invasion1.3600.681 M47(87.0 %)132(91.7 %)46(90.2 %)8(100.0 %) SM7(13.0 %)12(8.3 %)5(9.8 %)0(0.0 %)Lymphovascular invasion2.5410.687 (+)1(1.9 %)1(0.7 %)0(0.0 %)0(0.0 %) (-)53(98.1 %)143(99.3 %)51(100.0 %)8(100.0 %)+, present; - absentEGJ esophagogastric junction, well-diff well differentiated, mod-diff moderately differentiated, M mucosa, SM submucosa\nRelationship between Mucin Phenotypes and Clinicopathological Features\n+, present; - absent\nEGJ esophagogastric junction, well-diff well differentiated, mod-diff moderately differentiated, M mucosa, SM submucosa", "Intestinal metaplasia (IM) of background mucosa was observed in 199 of 249 (79.9 %) cases (G-, GI- and GI-type), including 38 cases of incomplete IM and 161 cases of complete IM. IM did not significantly differ among mucin phenotypes (P > 0.05). However, the presence of incomplete and complete IM was significantly different in distinct mucin phenotypes (P = 0.004, P = 0.018). The presence of incomplete IM in GI-type EGCs was higher than that in the G-type and I-type EGCs (21.5 % vs. 9.3 % vs. 3.9 %). In the contrast, 77.8 % (42/54) of G-type EGCs and 70.6 % (36/51) of I-type EGCs exhibited complete IM, which was higher than the 57.6 % (83/114) of GI-type EGCs. 
The IM status of the background mucosa and the relationship with mucin phenotypes are shown in Table 2.\nTable 2Relationship between mucin phenotypes and background mucosaG-typeGI-typeI-typeχ2 valueP valueIntestinal metaplasia2.6850.265  (+)47(87.1 %)114(79.2 %)38(74.5 %)  (-)7(12.9 %)30(20.8 %)13(25.5 %)Incomplete intestinal metaplasia10.9480.004  (+)5(9.3 %)31(21.5 %)2(3.9 %)  (-)49(90.7 %)113(78.5 %)49(96.1 %)Complete intestinal metaplasia7.9570.018  (+)42(77.8 %)83(57.6 %)36(70.6 %)  (-)12(22.2 %)61(42.4 %)15(29.4 %)+, present; - absent\nRelationship between mucin phenotypes and background mucosa\n+, present; - absent", "In addition to tubular adenocarcinoma components, 22 of the 257 cases also contained components of papillary adenocarcinoma, poorly differentiated carcinoma, or signet ring cell carcinoma. Fifteen cases (68.18 %) contained por/sig components, and the other 7 (31.82 %) contained pap components. Eleven cases showed the G-type, 10 cases showed the GI-type, only one case showed the N-type, and none of them showed the I-type. Among the 10 GI-type cases, 9 cases showed the GI-G type, and one showed the GI-I type. Almost all the 22 patients showed the G- and GI-G types, which was significantly higher than the number of I-type patients (P = 0.011). In addition, the 22 patients had a higher proportion of infiltration into the submucosa (P < 0.001). (Table 3; Figs. 3 and 4).\nTable 3Relationship between Biological Behavior and Mucin Phenotypestubtub-por/sigpap/tub-papχ2 valueP valueDepth of Invasion15.8240.000 M219(93.2 %)9(60.0 %)5(71.4 %) SM16(6.8 %)6(40.0 %)2(28.6 %)Mucin Phenotype14.7430.011 G-type43(18.3 %)7(46.7 %)4 (57.1 %) GI-type134(57.0 %)7(46.7 %)3(42.9 %) I-type51(21.7 %)0(0.0 %)0(0.0 %) N-type7(3.0 %)1(6.6 %)0(0.0 %)M mucosa, SM submucosaFig. 3Tubular adenocarcinoma mixed with papillary adenocarcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing papillary structure on the surface and tubular adenocarcinoma in the submucosa; invasive depth is 300 μm (b). Focal positive staining for MUC5AC (c) and strongly positive staining for MUC6 (d), invasive tubular adenocarcinoma showing MUC5AC negative (c) and MUC6 positive (d) status. (Scale bar =400 μm, HE and IHC ×10)Fig. 4Tubular adenocarcinoma mixed with poorly differentiated carcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing poorly differentiated carcinoma in the mucosa and tubular adenocarcinoma in the submucosa; the invasive depth is 2500 μm (b). Positive staining for MUC5AC (c) and MUC6 (d), and invasive tubular adenocarcinoma showing MUC5AC (c) and MUC6 (d) positivity. (Scale bar =400 μm, HE and IHC x10)\nRelationship between Biological Behavior and Mucin Phenotypes\nM mucosa, SM submucosa\nTubular adenocarcinoma mixed with papillary adenocarcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing papillary structure on the surface and tubular adenocarcinoma in the submucosa; invasive depth is 300 μm (b). Focal positive staining for MUC5AC (c) and strongly positive staining for MUC6 (d), invasive tubular adenocarcinoma showing MUC5AC negative (c) and MUC6 positive (d) status. (Scale bar =400 μm, HE and IHC ×10)\nTubular adenocarcinoma mixed with poorly differentiated carcinoma showing the gastric mucin phenotype. Reconstructive map (a). H&E staining showing poorly differentiated carcinoma in the mucosa and tubular adenocarcinoma in the submucosa; the invasive depth is 2500 μm (b). 
Positive staining for MUC5AC (c) and MUC6 (d), and invasive tubular adenocarcinoma showing MUC5AC (c) and MUC6 (d) positivity. (Scale bar =400 μm, HE and IHC ×10)", "Six patients underwent additional gastrectomy, and there was no residual tumour or lymph node metastasis. All patients were under close follow-up, and neither recurrence nor metastasis was detected.", "The mucin phenotype classification is based on the mucin marker expression profile. After the year 2000, the gastric and intestinal mucin phenotypes were analysed by IHC [15]. The mucin markers MUC5AC, MUC6, MUC2 and CD10 were considered necessary, although there is no consensus on the number of markers that should be used to define a mucin phenotype or the percentage of tumour cells that must be stained [6–8, 11, 15, 18]. MUC5AC is a secreted mucin expressed in the surface mucous epithelium of normal gastric mucosa. High expression of MUC6 is observed in fundic mucous neck cells and pyloric glands of gastric mucosa. CD10 is a marker for the brush border on the luminal surface of the small intestine. In the normal adult intestine, MUC2 expression is observed in the perinuclear areas of goblet cells.\nWe showed that the expression of MUC5AC, MUC6, MUC2 and CD10 was detected in 167 (64.98 %), 187 (72.76 %), 164 (63.81 %), and 112 (43.58 %) of the 257 EGCs, respectively. In previous studies, the expression percentages of MUC5AC, MUC6, MUC2 and CD10 in GCs were 55.1-67.5 %, 44.9-64 %, 35.4-49.3 % and 20.6-20.9 %, respectively [19, 20], and for EGCs, the expression of each mucin marker was 68.75-96.8 %, 19.6-71.58 %, 25-62.10 %, and 0-79 %, respectively [6, 11, 21].\nBased on the combinations of expression of these markers, the 257 EGCs were classified into the G-type (21 %, 54/257), GI-type (56 %, 144/257), I-type (20 %, 51/257) and N-type (3 %, 8/257); in previous reports, the incidence percentages of each of these mucin phenotypes were found to be 15-41.1 %, 20.3-60.1 %, 18.5-46.6 %, and 3.7-31.6 %, respectively, in advanced GCs [3, 13, 14], and 7.9-36.8 %, 18.8-41.2 %, 15.4-55.56 %, and 0-11.1 %, respectively, in early-stage GCs [7, 11, 19–25]. Our results were consistent with these studies. The reported expression ranges vary greatly among different investigators, and different markers, antibodies and case groups may account for this discrepancy. Koyama et al. reported that the incidence of G-type was 19.3 % [26], which was similar to that found in the present study (21 %); however, in their report, the incidence of I-type was much higher than that of G-type (43.8 % vs. 19.3 %), as was also reported by Fabio et al. [11]. In contrast, Tajima et al. [22] reported the opposite result: in their study, the incidence of the G-type was much higher than that of the I-type (36.8 % vs. 15.4 %). In our study, the incidence of G-type was almost the same as that of I-type (21 % vs. 20 %). Overall, based on our data, well over half of these cases were classified as G- and GI-G type GC (61.09 %, 157/257), which is much higher than the incidence of I- and GI-I type GC (36.96 %, 95/257). A previous report revealed that almost all intramucosal GC cases exhibited the gastric phenotype, including the GI phenotype [15].\nThe relationship between mucin phenotypes and clinicopathological features was investigated. We found that histological classification (both the JGCA and WHO classifications) was closely related to the mucin phenotype. The incidence of I-type was greater than those of the G-, GI- and N-type (100.0 % vs. 
In differentiated tubular adenocarcinoma, the incidence of the I-type was higher than those of the G-, GI- and N-types (100.0 % vs. 79.7 %, 93.1 % and 87.5 %). The G-type was histologically significantly correlated with the mixed type (tubular adenocarcinoma with poorly differentiated or papillary components). Our data showed that the proportion of G-type carcinoma increased during the transition from purely differentiated to mixed-type carcinoma. Mixed-type early-stage carcinoma more frequently expressed G-type mucin, and G-type tumours were associated with a higher rate of undifferentiated-type tumours than I-type tumours [7, 22].

There were no significant differences between mucin phenotypes and the other parameters, including sex, age, margin, colour, tumour size, gross type, depth of invasion and lymphovascular invasion (P > 0.05). These results are consistent with other studies in the literature [22, 24, 27], which found no clear correlation between phenotype and clinicopathological characteristics such as sex, age, tumour size, location, macroscopic features, lymphatic or venous invasion, or lymph node metastasis in differentiated-type tumours [22, 24, 27]. By contrast, Koseki et al. [7] and Oya et al. [28] reported that the incidence of lymphatic invasion, venous invasion and lymph node metastasis in gastric-phenotype carcinomas was significantly higher than in intestinal-phenotype carcinomas. In addition, G-type EGCs have been correlated with some distinct macroscopic features, namely a smaller tumour diameter [15], a discoloured surface and non-wavy tumour margins [23, 29]. G-type differentiated adenocarcinomas showed a depressed gross type, indistinct margins and a monotonous colour tone across the mucosal layer, whereas I-type adenocarcinomas tended to be elevated, with a distinct margin and a red mucosa [3, 4, 30]. The discrepancy among these results may be due to heterogeneous components containing poorly differentiated adenocarcinoma [27].

Intestinal metaplasia is frequently observed surrounding GC, especially differentiated adenocarcinomas. IM has malignant potential and has been regarded as a precursor of gastric neoplasms. According to Laurén, intestinal-type adenocarcinoma is preceded by metaplastic changes, whereas diffuse-type adenocarcinoma arises in non-IM gastric mucosa [2]. In the current study, background mucosal IM was observed in 79.9 % of the G-, GI- and I-type EGCs and in 87.0 % of the G-type EGCs, while about 25 % of I-type cases arose from mucosa without IM. IM overall did not differ significantly among mucin phenotypes (P > 0.05); however, incomplete and complete IM did (P = 0.004 and P = 0.018, respectively). A total of 77.78 % (42/54) of G-type and 70.6 % (36/51) of I-type cases had complete IM, higher than the rate among GI-type cases (57.6 %, 83/114). Conversely, incomplete IM was more frequent in GI-type EGCs than in G- and I-type EGCs (21.5 % vs. 9.3 % vs. 3.9 %). These results demonstrate a clear relationship between mucin phenotype and the background mucosa. Similar results have been reported by Kabashima et al. [31] and Matsuoka [23]. The mucin phenotype of a carcinoma can be independent of phenotypic changes in the surrounding mucosa, and the carcinoma may undergo intestinalization of its own. The G-type may imitate the surrounding mucosa, and both the carcinoma and the background mucosa can be in an unstable state, commonly possessing a hybrid phenotype of the stomach and the small intestine [23, 31].

Mucin phenotypes can indicate biological behaviour in GCs.
G-type GCs have an increased potential for invasion and metastasis, infiltrating deeper layers or more surrounding structures, with a higher rate of lymph node metastasis and a poorer prognosis [3, 12, 18, 21]. Even differentiated adenocarcinomas of the G-type have been reported to have prognoses similar to those of undifferentiated adenocarcinomas [7–10]. In our series, six patients underwent additional gastrectomy, and no residual tumour or lymph node metastasis was found; all patients remain under close follow-up, and neither recurrence nor metastasis has been detected. The mixed type (tubular adenocarcinoma mixed with poorly differentiated or papillary adenocarcinoma) was mainly of the G-type, at a significantly higher proportion than in purely differentiated tubular adenocarcinoma (P < 0.05), and its depth of infiltration was greater (P < 0.05). The G-type group had the highest proportion of cases with poorly differentiated/undifferentiated components (11/54, 20.37 %), and almost all of the mixed cases (19/22, 86.36 %) expressed the G- or GI-G type. The mixed type may represent a progressive loss of glandular structure as the cancer advances from the mucosa, and submucosal invasion is a risk factor for lymph node metastasis [7, 22, 32]. Differentiated EGCs of the G-type frequently change histologically into signet ring cell carcinoma or poorly differentiated adenocarcinoma. These findings imply more aggressive biological behaviour and a poorer prognosis.

Conclusions
Our study reports the expression of mucin markers (MUC5AC, MUC6, MUC2 and CD10) and mucin phenotypes in differentiated EGC samples obtained by ESD/EMR in a Chinese population. Mucin phenotypes of early differentiated gastric cancer are of clinical significance, and G-type GC exhibits aggressive biological behaviour in early differentiated GCs, especially in those with poorly differentiated adenocarcinoma or papillary adenocarcinoma components.
Keywords: Gastric cancer; Early stage; Mucin core protein; Gastric mucin phenotype; Biological behaviour
Background
Gastric cancer (GC), one of the most common human cancers worldwide, is a disease with multiple pathogenic factors, various prognoses and different responses to treatment. Properly distinguishing patients with worse prognoses from those with better prognoses is therefore important. Four morphology-based classification systems exist: the World Health Organization (WHO, 2019) [1], the Japanese Gastric Cancer Association (JGCA, 2017) [1], Laurén [2] and Nakamura [3]. According to the WHO classification, GCs are subclassified into papillary, tubular, poorly cohesive, mucinous and mixed types. In the JGCA classification, the subtypes are papillary (pap), tubular (tub), poorly differentiated (por), signet-ring cell (sig) and mucinous (muc), which are similar to the WHO subtypes. GCs are divided into intestinal and diffuse types by Laurén's classification, or into differentiated and undifferentiated types by Nakamura's classification [2–4]. The differentiated type comprises pap, tub1 and tub2 in the JGCA classification, and papillary and well/moderately differentiated adenocarcinoma in the WHO classification. These histological types exhibit distinct biological behaviours. The mucous produced by a cancer is one of the factors determining its biological behaviour; its main component is a high-molecular-weight glycoprotein called mucin [5]. As a cancer progresses, the nature of the mucous changes with the degree of biological malignancy. In the 1990s, with progress in the structural analysis of mucin and the widespread use of monoclonal antibodies against mucin core proteins, a mucin phenotype subclassification emerged. This subclassification is based entirely on the mucin expression pattern, independent of histological features; GCs are thus classified into gastric, intestinal, gastrointestinal and null mucin phenotypes [4–6]. Previous studies have reported that the gastric phenotype has a higher potential for invasion and metastasis than the intestinal type, resulting in a worse prognosis [7–11]. However, these studies have mostly focused on advanced gastric cancers, and early gastric cancers have rarely been investigated. Early gastric cancer (EGC) is defined as tumour invasion confined to the mucosa and submucosa, irrespective of regional lymph node metastasis [12]. Endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD) are used as treatments for some intramucosal carcinomas and submucosal lesions, which have a very low probability of lymph node metastasis [13, 14]. To our knowledge, no study has explored the biological role of mucin phenotypes in EGCs resected by EMR/ESD in a Chinese cohort, and little information is available on the effects of mucin phenotypes on the clinicopathological features of EGCs in this population. Accordingly, we examined mucin expression and mucin phenotypes and explored their relationships with clinicopathological characteristics and biological behaviour.

Methods
Patients and tissue specimens
Our study consisted of 257 consecutive patients who underwent EMR/ESD for differentiated EGCs between January 2012 and June 2018 at The Second Affiliated Hospital of Zhejiang University, China. The group comprised 182 men and 75 women with an age range of 29 to 87 (mean 64) years. The location of each lesion was classified as the upper (27 cases), middle (46 cases) or lower (184 cases) third of the stomach. The size of each lesion was measured by the maximum diameter, which ranged from 0.1 to 6.5 (mean 1.5) centimetres. The protruding category (52 cases) included types 0-I and 0-IIa, the depressed category (102 cases) included types 0-IIc and III, and all other cases were assigned to the combined protruding and depressed category (103 cases). Based on the WHO and JGCA classifications, the EGCs were subclassified into well differentiated tubular adenocarcinoma (well-diff/tub1, 198 cases), moderately differentiated tubular adenocarcinoma (mod-diff/tub2, 37 cases), papillary adenocarcinoma (3 cases) and mixed (tub-pap/sig/por, 19 cases). In the mixed cases, the undifferentiated components (sig/por) made up less than 50 %.

Immunohistochemistry
All specimens were fixed in 10 % buffered formalin, embedded in paraffin, cut into 4-µm-thick sections and subjected to haematoxylin and eosin (HE) staining. MUC2 was detected with mAb Ccp58 (Zsbio, 1:100), MUC5AC with mAb MRQ-19 (Zsbio, 1:100), MUC6 with mAb MRQ-20 (Zsbio, 1:100) and CD10 (Zsbio, 1:100) by immunohistochemistry (IHC). IHC was performed on the Ventana NexES staining system (Roche, Benchmark XT). CD10 exhibited both cytoplasmic and glandular luminal reactivity, whereas MUC5AC, MUC6 and MUC2 exhibited only cytoplasmic reactivity. Staining was categorized as positive when at least one carcinoma cell was stained and negative when none of the carcinoma cells were stained [6].
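For readers who want to encode the staining readout, the antibody panel and the positivity rule above map naturally onto a small data structure. The sketch below is illustrative only: the clones and dilutions are those listed in the Methods (no clone is specified for CD10, so none is invented), and the threshold of at least one stained carcinoma cell follows the criterion used in this study; the example counts are hypothetical.

```python
# Antibody panel and positivity rule for the four mucin markers (illustrative sketch).

PANEL = {
    "MUC2":   {"clone": "Ccp58",  "vendor": "Zsbio", "dilution": "1:100"},
    "MUC5AC": {"clone": "MRQ-19", "vendor": "Zsbio", "dilution": "1:100"},
    "MUC6":   {"clone": "MRQ-20", "vendor": "Zsbio", "dilution": "1:100"},
    "CD10":   {"clone": None,     "vendor": "Zsbio", "dilution": "1:100"},
}

def marker_positive(stained_carcinoma_cells: int) -> bool:
    """A marker is scored positive when at least one carcinoma cell is stained."""
    return stained_carcinoma_cells >= 1

# Hypothetical raw counts of stained carcinoma cells for one case
counts = {"MUC2": 0, "MUC5AC": 42, "MUC6": 310, "CD10": 0}
profile = {marker: marker_positive(n) for marker, n in counts.items()}
print(profile)  # {'MUC2': False, 'MUC5AC': True, 'MUC6': True, 'CD10': False}
```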
Classification of mucin phenotype
Based on MUC5AC, MUC6, MUC2 and CD10, EGCs can be classified into the gastric phenotype (G-type), gastrointestinal phenotype (GI-type), intestinal phenotype (I-type) and null phenotype (N-type) [4, 6, 15–17]. The following criteria were used: ① the G-type shows positive staining for at least one of MUC5AC and MUC6, while CD10 and MUC2 are both negative; ② the I-type shows positive staining for CD10 and/or MUC2, while MUC5AC and MUC6 are both negative; ③ the GI-type shows positive staining for CD10 and/or MUC2 together with at least one of MUC5AC and MUC6; and ④ if none of the four markers is positive, the phenotype is classified as N-type. In addition, among the GI-type cases, those expressing more MUC5AC and/or MUC6 than MUC2 and CD10 were labelled the gastric-predominant GI phenotype (GI-G type); otherwise, they were labelled the intestinal-predominant GI phenotype (GI-I type).

Statistical analysis
Associations between mucin expression profiles and clinicopathological parameters were examined with the chi-square test or Fisher's exact test. Statistical significance was set at P < 0.05. Statistical calculations were performed with IBM SPSS Statistics (version 23.0).
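The four-marker decision rule described under "Classification of mucin phenotype" is straightforward to express in code. The sketch below is a minimal, hypothetical implementation of that rule; the GI-G/GI-I split is shown as a comparison of per-marker staining extents (for example, the fraction of tumour cells stained), because the study describes the gastric/intestinal predominance qualitatively rather than as an exact cut-off, so that comparison is an assumption, not the authors' exact method.

```python
# Minimal sketch of the four-marker mucin phenotype rule.
# A marker counts as positive when its staining extent is > 0
# (i.e. at least one carcinoma cell stained).

def mucin_phenotype(muc5ac: float, muc6: float, muc2: float, cd10: float) -> str:
    """Classify one tumour from per-marker staining extents in [0, 1]."""
    gastric = muc5ac > 0 or muc6 > 0
    intestinal = muc2 > 0 or cd10 > 0

    if gastric and not intestinal:
        return "G-type"
    if intestinal and not gastric:
        return "I-type"
    if gastric and intestinal:
        # Gastric- vs intestinal-predominant GI phenotype (assumed extent comparison).
        return "GI-G type" if (muc5ac + muc6) > (muc2 + cd10) else "GI-I type"
    return "N-type"

# Examples (hypothetical extents)
print(mucin_phenotype(muc5ac=0.6, muc6=0.8, muc2=0.0, cd10=0.0))  # G-type
print(mucin_phenotype(muc5ac=0.0, muc6=0.0, muc2=0.3, cd10=0.1))  # I-type
print(mucin_phenotype(muc5ac=0.7, muc6=0.2, muc2=0.1, cd10=0.0))  # GI-G type
print(mucin_phenotype(muc5ac=0.0, muc6=0.0, muc2=0.0, cd10=0.0))  # N-type
```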
Results
Expression of mucin markers and mucin phenotype in early gastric cancers
The expression percentages of CD10, MUC2, MUC5AC and MUC6 in all EGCs were 43.58 % (112/257), 63.81 % (164/257), 64.98 % (167/257) and 72.76 % (187/257), respectively (Fig. 1).

Fig. 1 The proportion of CD10, MUC2, MUC5AC and MUC6 in each mucin phenotype

The 257 EGCs were classified as G-type (21 %, 54/257), GI-type (56 %, 144/257), I-type (20 %, 51/257) and N-type (3 %, 8/257). The GI-type comprised the GI-G type (72 %, 103/144) and the GI-I type (28 %, 41/144). There were more cases of the G- and GI-G types (61 %, 157/257) than of the I- and GI-I types (36 %, 95/257) (Fig. 2).

Fig. 2 IHC staining of the gastric, intestinal and gastrointestinal phenotypes. The G-type (a-d) shows negative staining for CD10 (a) and MUC2 (b) and positive staining for MUC5AC (c) and MUC6 (d). In contrast, the I-type (e-h) shows positive staining for CD10 (e) and MUC2 (f) and negative staining for MUC5AC (g) and MUC6 (h). The GI-type (i-l) shows positive staining for CD10 (i), MUC2 (j), MUC5AC (k) and MUC6 (l). (Scale bar = 200 μm, HE and IHC ×10)
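As a small arithmetic check, the percentages quoted above follow directly from the raw counts over the 257 cases; the few lines of Python below (illustrative only, using the counts given in the text) reproduce them.

```python
# Recompute the marker-expression and phenotype percentages from the reported counts.
N = 257

marker_counts = {"CD10": 112, "MUC2": 164, "MUC5AC": 167, "MUC6": 187}
phenotype_counts = {"G-type": 54, "GI-type": 144, "I-type": 51, "N-type": 8}

assert sum(phenotype_counts.values()) == N  # the four phenotypes partition the cohort

for name, count in {**marker_counts, **phenotype_counts}.items():
    print(f"{name}: {count}/{N} = {100 * count / N:.2f} %")
# e.g. CD10: 112/257 = 43.58 %, G-type: 54/257 = 21.01 %, N-type: 8/257 = 3.11 %
```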
Relationship between mucin phenotype and clinicopathological features
The relationship between mucin phenotype and clinicopathological features is summarized in Table 1. The mucin phenotypes were significantly related to the JGCA and WHO classifications (P < 0.05), but sex, age, margin, colour, tumour size, gross type, depth of invasion and lymphovascular invasion did not differ significantly among the mucin phenotypes (P > 0.05). The I-type had the highest proportion of differentiated tubular EGCs among the four mucin phenotypes (100.0 % vs. 79.7 % vs. 93.1 % vs. 87.5 %, P = 0.027). The G-type group had a higher proportion of tub-por/sig and pap/tub-pap cases than the GI-, I- and N-type groups (20.4 % vs. 7.0 % vs. 0.0 % vs. 12.5 %, P = 0.027) according to the JGCA classification. According to the WHO classification, the G-type had more mixed histological components (18.5 % vs. 5.6 % vs. 0.0 % vs. 12.5 %, P = 0.006) than the other mucin phenotypes.
Table 1. Relationship between mucin phenotypes and clinicopathological features
                              G-type         GI-type         I-type          N-type         χ2       P value
Sex                                                                                         0.671    0.906
  Male                        40 (74.1 %)    99 (68.8 %)     37 (72.5 %)     6 (75.0 %)
  Female                      14 (25.9 %)    45 (31.2 %)     14 (27.5 %)     2 (25.0 %)
Age                                                                                         0.412    0.945
  ≤ 64 years                  28 (51.9 %)    75 (52.1 %)     29 (56.9 %)     4 (50.0 %)
  > 64 years                  26 (48.1 %)    69 (47.9 %)     22 (43.1 %)     4 (50.0 %)
Tumour size                                                                                 2.356    0.502
  ≤ 2 cm                      34 (63.0 %)    99 (68.8 %)     32 (62.7 %)     7 (87.5 %)
  > 2 cm                      20 (37.0 %)    45 (31.2 %)     19 (37.3 %)     1 (12.5 %)
Margin                                                                                      4.269    0.224
  Distinct                    35 (64.8 %)    111 (77.1 %)    40 (78.4 %)     5 (62.5 %)
  Indistinct                  19 (35.2 %)    33 (22.9 %)     11 (21.6 %)     3 (37.5 %)
Colour                                                                                      3.423    0.337
  Darkened                    31 (57.4 %)    75 (52.1 %)     34 (66.7 %)     5 (62.5 %)
  Faded                       23 (42.6 %)    69 (47.9 %)     17 (33.3 %)     3 (37.5 %)
Tumour location (third)                                                                     5.819    0.407
  Upper                       4 (7.4 %)      15 (10.4 %)     8 (15.7 %)      0 (0.0 %)
  Middle                      12 (22.2 %)    21 (14.6 %)     12 (23.5 %)     1 (12.5 %)
  Lower                       38 (70.4 %)    108 (75.0 %)    31 (60.8 %)     7 (87.5 %)
Tumour location (EGJ)                                                                       0.787    0.842
  EGJ                         4 (7.4 %)      7 (4.9 %)       3 (5.9 %)       0 (0.0 %)
  Non-EGJ                     50 (92.6 %)    137 (95.1 %)    48 (94.1 %)     8 (100.0 %)
Gross type                                                                                  8.943    0.162
  Protruding                  10 (18.5 %)    32 (22.2 %)     10 (19.6 %)     0 (0.0 %)
  Depressed                   16 (29.6 %)    57 (39.6 %)     26 (51.0 %)     3 (37.5 %)
  Protruding-depressed        28 (51.9 %)    55 (38.2 %)     15 (29.4 %)     5 (62.5 %)
JGCA classification                                                                         16.870   0.027
  tub1                        34 (62.9 %)    112 (77.8 %)    45 (88.2 %)     7 (87.5 %)
  tub2                        9 (16.7 %)     22 (15.2 %)     6 (11.8 %)      0 (0.0 %)
  tub-por/sig                 7 (13.0 %)     7 (4.9 %)       0 (0.0 %)       1 (12.5 %)
  pap/tub-pap                 4 (7.4 %)      3 (2.1 %)       0 (0.0 %)       0 (0.0 %)
WHO classification                                                                          18.052   0.017
  Tubular, well-diff          34 (62.9 %)    112 (77.8 %)    45 (88.2 %)     7 (87.5 %)
  Tubular, mod-diff           9 (16.7 %)     22 (15.2 %)     6 (11.8 %)      0 (0.0 %)
  Papillary                   1 (1.9 %)      2 (1.4 %)       0 (0.0 %)       0 (0.0 %)
  Mixed                       10 (18.5 %)    8 (7.0 %)       0 (0.0 %)       1 (12.5 %)
Depth of invasion                                                                           1.360    0.681
  M                           47 (87.0 %)    132 (91.7 %)    46 (90.2 %)     8 (100.0 %)
  SM                          7 (13.0 %)     12 (8.3 %)      5 (9.8 %)       0 (0.0 %)
Lymphovascular invasion                                                                     2.541    0.687
  (+)                         1 (1.9 %)      1 (0.7 %)       0 (0.0 %)       0 (0.0 %)
  (-)                         53 (98.1 %)    143 (99.3 %)    51 (100.0 %)    8 (100.0 %)
+, present; -, absent. EGJ, esophagogastric junction; well-diff, well differentiated; mod-diff, moderately differentiated; M, mucosa; SM, submucosa
Abstract
Background: The distribution of mucin phenotypes and their relationship with clinicopathological features in early differentiated gastric adenocarcinomas in a Chinese cohort are unknown. We aimed to investigate mucin phenotypes and to analyse their relationship with clinicopathological features, especially biological behaviour, in early differentiated gastric adenocarcinomas from endoscopic specimens in a Chinese cohort.
Methods: Immunohistochemical staining of CD10, MUC2, MUC5AC and MUC6 was performed on 257 tissue samples from patients with early differentiated gastric adenocarcinomas. Tumour location, gross type, tumour size, histological type, depth of invasion, lymphovascular invasion, mucosal background and other clinicopathological parameters were evaluated. The relationship between mucin phenotypes and clinicopathological features was analysed with the chi-square test.
Results: The incidences of the gastric, gastrointestinal, intestinal and null phenotypes were 21 %, 56 %, 20 % and 3 %, respectively. The mucin phenotypes were related to histological classification (P < 0.05). The proportion of the gastric phenotype increased during the transition from differentiated to undifferentiated histology (P < 0.05). Complete intestinal metaplasia was more frequent in the gastric and intestinal phenotypes than in the gastrointestinal phenotype (P < 0.05). Tumours containing poorly differentiated components were mainly of the gastric phenotype, at a significantly higher proportion than in purely differentiated tubular adenocarcinoma (P < 0.05), and the depth of invasion in the mixed type was greater (P < 0.05). Neither recurrence nor metastasis was detected.
Conclusions: The mucin phenotype of early differentiated gastric adenocarcinoma has clinical implications, and the gastric phenotype shows aggressive biological behaviour in early differentiated gastric cancers, especially in those with poorly differentiated adenocarcinoma or papillary adenocarcinoma components.
Background: Gastric cancer (GC), one of the most common human cancers worldwide, is a disease with multiple pathogenic factors, varied prognoses and different responses to treatment. Thus, properly distinguishing patients with worse prognoses from those with better prognoses is of considerable importance. Four different morphology-based classification systems exist: the World Health Organization (WHO/2019) [1], the Japanese Gastric Cancer Association (JGCA/2017) [1], Laurén [2] and Nakamura [3]. According to the WHO classification, GCs are subclassified into papillary, tubular, poorly cohesive, mucinous and mixed types. In the JGCA classification, the subtypes are papillary (pap), tubular (tub), poorly differentiated (por), signet-ring cell (sig) and mucinous (muc), which are similar to the subtypes used by the WHO. GCs are divided into intestinal and diffuse types using Laurén's classification, or into differentiated and undifferentiated types based on Nakamura's classification [2–4]. The differentiated type contains pap, tub1 and tub2 according to the JGCA classification, and papillary and well/moderately differentiated adenocarcinoma according to the WHO classification. These different histological types exhibit distinct biological behaviours. The mucus produced by cancers is one of the factors determining their biological behaviour. The main component of mucus is a high-molecular-weight glycoprotein called mucin [5]. As a cancer progresses, the nature of the mucus changes with the degree of biological malignancy. In the 1990s, with progress in the structural analysis of mucin and the widespread use of monoclonal antibodies against mucin core proteins, a mucin phenotype subclassification emerged. This subclassification is based entirely on the mucin expression pattern, independent of histological features. Thus, GCs are classified into gastric, intestinal, gastrointestinal and null mucin phenotypes [4–6]. Previous studies have reported that the gastric phenotype has a higher potential for invasion and metastasis than the intestinal type, which results in a worse prognosis [7–11]. However, most studies have focused on advanced gastric cancers, and early gastric cancers have rarely been investigated. Early gastric cancer (EGC) is defined as tumour invasion confined to the mucosa and submucosa, irrespective of regional lymph node metastasis [12]. Endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD) are used as treatments for some intramucosal carcinomas and submucosal lesions that have a very low probability of lymph node metastasis [13, 14]. To our knowledge, no study by Chinese investigators has explored the biological role of mucin phenotypes in EGCs treated by EMR/ESD, and little information is available on the effects of mucin phenotypes on the clinicopathological features of EGCs in a Chinese cohort. Accordingly, we examined mucin expression and mucin phenotypes and explored their relationship with clinicopathological characteristics and biological behaviour. Conclusions: Our study reports the expression of mucin markers (MUC5AC, MUC6, MUC2 and CD10) and mucin phenotypes in differentiated EGC samples obtained by ESD/EMR in a Chinese population. Mucin phenotypes of early differentiated gastric cancer are of clinical significance, and the G-type exhibits aggressive biological behaviour in early differentiated GCs, especially in those with poorly differentiated adenocarcinoma or papillary adenocarcinoma components.
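The background above describes classifying GCs into gastric (G), intestinal (I), gastrointestinal (GI) and null (N) phenotypes purely from the mucin marker expression pattern. The following is a minimal, hypothetical sketch of such a rule, assuming the common convention that MUC5AC/MUC6 indicate gastric differentiation and MUC2/CD10 indicate intestinal differentiation; the positivity cut-offs actually used in the study are not stated here, so the simple boolean threshold in the code is an assumption, not the authors' scoring scheme.

```python
# Hypothetical sketch: assign a mucin phenotype from immunohistochemical marker
# positivity, assuming MUC5AC/MUC6 mark gastric differentiation and MUC2/CD10
# mark intestinal differentiation. The "any positive marker" threshold is an
# assumption for illustration only.

def mucin_phenotype(muc5ac: bool, muc6: bool, muc2: bool, cd10: bool) -> str:
    gastric = muc5ac or muc6      # any gastric-lineage marker expressed
    intestinal = muc2 or cd10     # any intestinal-lineage marker expressed
    if gastric and intestinal:
        return "GI"   # gastrointestinal phenotype
    if gastric:
        return "G"    # gastric phenotype
    if intestinal:
        return "I"    # intestinal phenotype
    return "N"        # null phenotype


if __name__ == "__main__":
    # Example: a tumour expressing MUC5AC and MUC2 would be called GI-type.
    print(mucin_phenotype(muc5ac=True, muc6=False, muc2=True, cd10=False))  # GI
```

In practice such schemes usually grade each marker by the proportion of stained tumour cells rather than a plain boolean, but the branching logic that yields the four phenotypes is the same.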
Background: The distribution of mucin phenotypes and their relationship with clinicopathological features in early differentiated gastric adenocarcinomas in a Chinese cohort are unknown. We aimed to investigate mucin phenotypes and to analyse their relationship with clinicopathological features, especially biological behaviour, in early differentiated gastric adenocarcinomas from endoscopic specimens in a Chinese cohort. Methods: Immunohistochemical staining of CD10, MUC2, MUC5AC, and MUC6 was performed in 257 tissue samples from patients with early differentiated gastric adenocarcinomas. Tumour location, gross type, tumour size, histological type, depth of invasion, lymphovascular invasion, mucosal background and other clinicopathological parameters were evaluated. The relationship between mucin phenotypes and clinicopathological features was analysed with the chi-square test. Results: The incidences of the gastric, gastrointestinal, intestinal and null phenotypes were 21 %, 56 %, 20 % and 3 %, respectively. The mucin phenotypes were related to histological classification (P < 0.05). The proportion of the gastric phenotype increased during the transition from differentiated to undifferentiated histology (P < 0.05). The rate of complete intestinal metaplasia was higher in the gastric and intestinal phenotypes than in the gastrointestinal phenotype (P < 0.05). Tumours with poorly differentiated adenocarcinoma components were mainly of the gastric phenotype, a proportion significantly higher than that in purely differentiated tubular adenocarcinomas (P < 0.05), and the depth of invasion in the mixed type was greater (P < 0.05). Neither recurrence nor metastasis was detected. Conclusions: The mucin phenotype of early differentiated gastric adenocarcinoma has clinical implications, and the gastric phenotype shows aggressive biological behaviour in early differentiated gastric cancers, especially in those with poorly differentiated adenocarcinoma or papillary adenocarcinoma components.
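The Methods above state that associations between mucin phenotypes and clinicopathological features were analysed with the chi-square test. Purely as an illustration, a contingency-table chi-square test of independence can be run as below; the counts are invented for the example and are not the study's data.

```python
# Hypothetical sketch: chi-square test of independence between mucin phenotype
# and a clinicopathological feature. The counts below are INVENTED for
# illustration only; they are not taken from the study.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: phenotypes (G, GI, I, N); columns: pure tubular vs. mixed histology.
table = np.array([
    [30, 24],   # G
    [100, 44],  # GI
    [45, 7],    # I
    [6, 1],     # N
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```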
11,699
330
[ 539, 1334, 249, 151, 211, 47, 473, 877, 330, 685, 34 ]
14
[ "type", "mucin", "gi", "phenotype", "staining", "muc5ac", "muc6", "cases", "gi type", "adenocarcinoma" ]
[ "histology classification jgca", "gastrointestinal phenotype gi", "jgca classification papillary", "gastric cancer gc", "differentiated gastric cancer" ]
null
[CONTENT] Gastric cancer | Early stage | Mucin core protein | Gastric Mucin phenotype | Biological behaviour [SUMMARY]
null
[CONTENT] Gastric cancer | Early stage | Mucin core protein | Gastric Mucin phenotype | Biological behaviour [SUMMARY]
[CONTENT] Gastric cancer | Early stage | Mucin core protein | Gastric Mucin phenotype | Biological behaviour [SUMMARY]
[CONTENT] Gastric cancer | Early stage | Mucin core protein | Gastric Mucin phenotype | Biological behaviour [SUMMARY]
[CONTENT] Gastric cancer | Early stage | Mucin core protein | Gastric Mucin phenotype | Biological behaviour [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Cell Differentiation | Female | Gastric Mucins | Gastric Mucosa | Humans | Immunohistochemistry | Male | Middle Aged | Phenotype | Stomach Neoplasms [SUMMARY]
null
[CONTENT] Adenocarcinoma | Adult | Aged | Cell Differentiation | Female | Gastric Mucins | Gastric Mucosa | Humans | Immunohistochemistry | Male | Middle Aged | Phenotype | Stomach Neoplasms [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Cell Differentiation | Female | Gastric Mucins | Gastric Mucosa | Humans | Immunohistochemistry | Male | Middle Aged | Phenotype | Stomach Neoplasms [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Cell Differentiation | Female | Gastric Mucins | Gastric Mucosa | Humans | Immunohistochemistry | Male | Middle Aged | Phenotype | Stomach Neoplasms [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Cell Differentiation | Female | Gastric Mucins | Gastric Mucosa | Humans | Immunohistochemistry | Male | Middle Aged | Phenotype | Stomach Neoplasms [SUMMARY]
[CONTENT] histology classification jgca | gastrointestinal phenotype gi | jgca classification papillary | gastric cancer gc | differentiated gastric cancer [SUMMARY]
null
[CONTENT] histology classification jgca | gastrointestinal phenotype gi | jgca classification papillary | gastric cancer gc | differentiated gastric cancer [SUMMARY]
[CONTENT] histology classification jgca | gastrointestinal phenotype gi | jgca classification papillary | gastric cancer gc | differentiated gastric cancer [SUMMARY]
[CONTENT] histology classification jgca | gastrointestinal phenotype gi | jgca classification papillary | gastric cancer gc | differentiated gastric cancer [SUMMARY]
[CONTENT] histology classification jgca | gastrointestinal phenotype gi | jgca classification papillary | gastric cancer gc | differentiated gastric cancer [SUMMARY]
[CONTENT] type | mucin | gi | phenotype | staining | muc5ac | muc6 | cases | gi type | adenocarcinoma [SUMMARY]
null
[CONTENT] type | mucin | gi | phenotype | staining | muc5ac | muc6 | cases | gi type | adenocarcinoma [SUMMARY]
[CONTENT] type | mucin | gi | phenotype | staining | muc5ac | muc6 | cases | gi type | adenocarcinoma [SUMMARY]
[CONTENT] type | mucin | gi | phenotype | staining | muc5ac | muc6 | cases | gi type | adenocarcinoma [SUMMARY]
[CONTENT] type | mucin | gi | phenotype | staining | muc5ac | muc6 | cases | gi type | adenocarcinoma [SUMMARY]
[CONTENT] mucin | classification | gastric | cancers | biological | gcs | cancer | gastric cancer | mucous | prognoses [SUMMARY]
null
[CONTENT] type | staining | showing | mucin | gi | adenocarcinoma | muc6 | muc5ac | gi type | μm [SUMMARY]
[CONTENT] differentiated | early differentiated | early | mucin | adenocarcinoma | significance type gc | significance type gc exhibits | behaviour early differentiated gcs | behaviour early differentiated | behaviour early [SUMMARY]
[CONTENT] type | mucin | staining | cases | gi | muc6 | muc5ac | phenotype | differentiated | muc2 [SUMMARY]
[CONTENT] type | mucin | staining | cases | gi | muc6 | muc5ac | phenotype | differentiated | muc2 [SUMMARY]
[CONTENT] Chinese ||| gastric adenocarcinomas | Chinese [SUMMARY]
null
[CONTENT] 21 % | 56 % | 20 | 3 % ||| 0.05 ||| 0.05 ||| metaplasia | 0.05 ||| 0.05 | 0.05 ||| [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] Chinese ||| gastric adenocarcinomas | Chinese ||| 257 | gastric adenocarcinomas ||| ||| ||| 21 % | 56 % | 20 | 3 % ||| 0.05 ||| 0.05 ||| metaplasia | 0.05 ||| 0.05 | 0.05 ||| ||| [SUMMARY]
[CONTENT] Chinese ||| gastric adenocarcinomas | Chinese ||| 257 | gastric adenocarcinomas ||| ||| ||| 21 % | 56 % | 20 | 3 % ||| 0.05 ||| 0.05 ||| metaplasia | 0.05 ||| 0.05 | 0.05 ||| ||| [SUMMARY]
Geo-mapping of caries risk in children and adolescents - a novel approach for allocation of preventive care.
21943023
Dental caries in children is unevenly distributed within populations with a higher burden in low socio-economy groups. Thus, tools are needed to allocate resources and establish evidence-based programs that meet the needs of those at risk. The aim of the study was to apply a novel concept for presenting epidemiological data based on caries risk in the region of Halland in southwest Sweden, using geo-maps.
BACKGROUND
The study population consisted of 46,536 individuals between 3-19 years of age (75% of the eligible population) from whom caries data were reported in 2010. Reported dmfs/DMFS>0 for an individual was considered as the primary caries outcome. Each study individual was geo-coded with respect to his/her residence parish. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS>0 was obtained from the age- and sex-specific caries (dmfs/DMFS>0) rates for the total study population. Smoothed caries risk geo-maps, along with corresponding statistical certainty geo-maps, were produced by using the free software Rapid Inquiry Facility and the ESRI® ArcGIS system.
METHODS
The geo-maps of preschool children (3-6 years), schoolchildren (7-11 years) and adolescents (12-19 years) displayed obvious geographical variations in caries risk, albeit most marked among the preschoolers. Among the preschool children the smoothed relative risk (SmRR) varied from 0.33 to 2.37 in different parishes. With increasing age, the contrasts seemed to diminish although the gross geographical risk pattern persisted also among the adolescents (SmRR range 0.75-1.20).
RESULTS
Geo-maps based on caries risk may provide a novel option to allocate resources and tailor supportive and preventive measures within regions with sections of the population with relatively high caries rates.
CONCLUSION
[ "Adolescent", "Age Factors", "Bayes Theorem", "Child", "Child, Preschool", "DMF Index", "Data Collection", "Dental Caries", "Dental Caries Susceptibility", "Educational Status", "Female", "Geographic Information Systems", "Humans", "Male", "Resource Allocation", "Risk", "Risk Assessment", "Sex Factors", "Social Class", "Sweden", "Tooth, Deciduous", "Young Adult" ]
3198761
Background
In spite of the global decline in childhood caries, widening inequalities in oral health exist between social classes and certain minority ethnic groups [1,2]. In the United Kingdom, this has primarily been observed among preschool children and schoolchildren [3,4]. Also in Scandinavia, where almost all children and adolescents attend the prevention-oriented free public dental service, a social gradient in dental health is evident [5-7]. To reduce these gaps, various oral health promotion activities have been suggested and, preferably, integrated with general health education, since oral diseases and chronic systemic diseases share many common risk factors [8,9]. The challenge and crucial decision for policymakers and professionals is to allocate resources and establish evidence-based programs that meet the needs of vulnerable children at risk for caries. Allocation has traditionally been based on conventional caries epidemiological data, even though the children concerned are by then already diseased. Oral health programs based on caries risk would be more proactive, since a comprehensive risk assessment is an essential component of the decision-making process for the prevention and management of dental caries [10]. Thus, the aim of the present communication was to suggest a novel approach for presenting epidemiological data based on caries risk in a population. We introduce the use of geo-maps and apply the concept in the southwest Swedish region of Halland.
Methods
Study population: The Halland region has approximately 70,000 inhabitants below the age of 20 years, and the vast majority are listed as regular patients at the Public Dental Service, which provides free dental care between 1 and 19 years of age with recall intervals varying from 3 to 24 months depending on individual need. Data on the experience of manifest (dentin) caries are registered according to the WHO criteria [11] and annually reported to the community dentistry unit. The present study population included 46,536 individuals for whom caries data were reported in 2010; they were between 3 and 19 years old when examined. The overall coverage was 75% of the total 3-19-year population of the region. The remaining children were not recalled for a regular check-up that year or visited a private dentist outside the region. In Halland, the fluoride concentration in the piped water supply is low (<0.3 ppm) except in the northern part (the municipality of Kungsbacka), where the natural fluoride content is approximately 1.0 ppm. There is also geographical variation in the socio-economic characteristics of the population in the Halland region; for example, the proportion of residents with post-secondary education varies between 10 and 48% across different parishes (data from 2010 provided by Statistics Sweden). The study was approved by the Halland Hospital Ethical committee as well as the Swedish Data Inspection Board.
Geographical information system (GIS) methods: The Halland region consists of six municipalities that are subdivided into 66 parishes. Geo-maps were produced using the ESRI® ArcGIS system (Environmental Systems Research Institute, Inc., USA). Each study individual was geo-coded with respect to his/her residence area (parish). Figure 1 shows the number of study persons in each parish (figure legend: Distribution of participants. Geo-map of the Halland region (southwest Sweden) showing the number of study persons between 3 and 19 years of age in each of the 66 residential parishes; the thicker borderlines delimit the six municipalities of Halland). The coverage (i.e., the proportion of the eligible population) varied between 61 and 89% in the different parishes, except in one parish (red background) where the coverage was only 12%.
Epidemiological and statistical methods: Reported dmfs/DMFS >0 for an individual was considered the primary caries outcome. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS >0 was obtained from the age- and sex-specific caries (dmfs/DMFS >0) rates for the whole region of Halland or, more precisely, for the total study population. The following age strata were used: 3-6, 7-11, 12-18, and 19 years. Thus, the expected number for a parish equals the sum of the products ni×ri across the age- and sex-strata i (3-6-year-old girls; 3-6-year-old boys; 7-11-year-old girls; etc.), where ni denotes the stratum-specific number of study individuals residing in the parish and ri denotes the corresponding caries rate observed in the total study population. The computations of the RRs were performed using the free software Rapid Inquiry Facility [12], which provides an extension to ESRI® ArcGIS functions [13]. The Rapid Inquiry Facility (RIF), along with the free software for Bayesian data analyses, WinBUGS [14], provides a powerful tool for geo-mapping based on epidemiological data. The caries risk maps show the smoothed RRs (SmRR) for each parish, which were obtained by running the Bayesian hierarchical mapping model in RIF/WinBUGS. We underline that such Bayesian smoothing yields pronounced downward adjustment of a (conventional) RR for a parish with few study persons, estimated with relatively high uncertainty, if that RR turns out notably elevated. Hence, by presenting smoothed caries risk geo-maps, rational adjustments of the conventional (parish-specific) RRs are taken into account [15,16]. We present separate caries risk geo-maps for preschool children (3-6 years), schoolchildren (7-11 years) and adolescents (12-19 years; based on age-stratified analysis of the 12-18- and 19-year groups). Along with each caries risk geo-map, we provide the corresponding statistical certainty geo-map. A posterior probability of a parish-specific relative risk above one given the data, denoted Pr(RR>1|data), was obtained by the Bayesian approach. A parish with data yielding strong statistical evidence of an elevated caries risk, more precisely Pr(RR>1|data) > 0.95, was colored red in the certainty geo-map. By contrast, a parish with evidently low caries risk, Pr(RR<1|data) > 0.95, was colored green. Analogously, parish-specific 90% credibility intervals for the relative risk were obtained, and each parish whose 90% credibility interval covered 1 was colored yellow in the certainty geo-map, indicating weaker statistical evidence for a high or low relative risk. We addressed geographical co-variation between caries risk and residents' level of education by calculating Spearman's correlations (rS) between the SmRRs and the proportions of residents with post-secondary education (considered a group-level indicator of socio-economy) across the 66 parishes.
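The epidemiological methods above define the parish-specific RR as an observed-to-expected ratio with the expected count obtained by indirect standardization (the sum of ni×ri over age- and sex-strata), and the certainty maps colour parishes by the posterior probability Pr(RR>1|data). The Bayesian smoothing itself was done in RIF/WinBUGS and is not reproduced here; the sketch below, with invented column names, only computes the unsmoothed RRs and the colour rule, as a sketch under those assumptions rather than the authors' actual pipeline.

```python
# Hypothetical sketch (not the authors' pipeline): unsmoothed parish-specific
# relative risks by indirect standardization, RR = observed / expected, where
# expected = sum over age- and sex-strata of n_i * r_i and r_i is the
# region-wide caries rate (dmfs/DMFS > 0) in stratum i. Column names are
# invented; the RIF/WinBUGS smoothing that yields the SmRRs is omitted.
import pandas as pd


def parish_relative_risk(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per study person with columns
    #   parish, age_group ("3-6", "7-11", "12-18", "19"), sex,
    #   caries (1 if dmfs/DMFS > 0, else 0)
    rates = (
        df.groupby(["age_group", "sex"], as_index=False)["caries"]
        .mean()
        .rename(columns={"caries": "rate"})            # r_i per stratum
    )
    strata = (
        df.groupby(["parish", "age_group", "sex"], as_index=False)
        .agg(n=("caries", "size"), observed=("caries", "sum"))  # n_i, observed cases
        .merge(rates, on=["age_group", "sex"])
    )
    strata["expected"] = strata["n"] * strata["rate"]            # n_i * r_i
    parish = strata.groupby("parish", as_index=False)[["observed", "expected"]].sum()
    parish["RR"] = parish["observed"] / parish["expected"]
    return parish


def certainty_colour(pr_rr_gt_1: float) -> str:
    # Colour rule of the certainty geo-maps: red if Pr(RR > 1 | data) > 0.95,
    # green if Pr(RR < 1 | data) > 0.95, yellow otherwise (for an equal-tailed
    # interval this matches the 90% credibility interval covering RR = 1).
    if pr_rr_gt_1 > 0.95:
        return "red"
    if pr_rr_gt_1 < 0.05:
        return "green"
    return "yellow"
```

The across-parish correlation with education reported in the text corresponds to applying scipy.stats.spearmanr to the parish SmRRs and the parish proportions of residents with post-secondary education.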
null
null
Conclusion
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6831/11/26/prepub
[ "Background", "Study population", "Geographical information system (GIS) methods", "Epidemiological and statistical methods", "Results", "Discussion", "Conclusion" ]
[ "In spite of the global decline in childhood caries, widening inequalities in oral health exist between social classes and certain minority ethnic groups [1,2]. In the United Kingdom, this has primarily been observed among preschool- and schoolchildren [3,4]. Also in Scandinavia, where almost all children and adolescents attend the prevention-oriented free public dental service, a social gradient for dental health is evident [5-7]. In order to reduce these gaps, various oral health promotion activities has been suggested and, preferable, integrated with general health education since oral diseases and chronic systemic diseases share many common risk factors [8,9]. The challenge and crucial decision for policymakers and professionals are to allocate resources and establish evidence-based programs that meet the needs of the vulnerable children at risk for caries. The allocation is traditionally based on conventional caries epidemiological data in spite of the fact that the children by then already are diseased. Oral health programs based on caries risk would be more proactive since a comprehensive risk assessment is an essential component in the decision-making process for the prevention and management of dental caries [10]. Thus, the aims of the present communication were to suggest a novel approach to present epidemiological data based on caries risk in a population. We launch the use of geo-maps and apply the concept in the southwest Swedish region of Halland.", "The Halland region has approximately 70,000 inhabitants below the age of 20 years and the vast majority is listed as regular patients at the Public Dental Service that provides free dental care between 1 and 19 years with recall intervals varying from 3 to 24 months depending on the individual need. Data on the experience of manifest (dentin) caries is registered according to the WHO-criteria [11] and annually reported to the community dentistry unit. The present study population included 46,536 individuals for whom caries data were reported in year 2010; they were between 3-19 years old when examined. The overall coverage was 75% of the total 3-19-year population of the region. The remaining children were not recalled for a regular check-up that year or visited a private dentist outside the region. In Halland, the fluoride concentration in piped water supply is low (<0.3 ppm) except in the northern part (the municipality of Kungsbacka) where the natural fluoride content is approximately 1.0 ppm. There is also a geographical variation in socio-economic characteristics of the population in the Halland region. For example, the proportion with post-secondary education among all residents varies between 10 to 48% across different parishes (data from year 2010 provided from Statistics Sweden). The study was approved by the Halland Hospital Ethical committee as well as The Swedish Data Inspection Board.", "The Halland region consists of six municipalities that are subdivided into 66 parishes. Geo-maps were produced by using the ESRI® ArcGIS system (Environmental Systems Research Institute, Inc., USA). Each study individual was geo-coded with respect to his/her residence area (parish). Figure 1 shows the number of study persons in each parish\nDistribution of participants. Geo-map of the Halland region (southwest Sweden) showing the number of study person between 3-19 years of age in each of the 66 residential parishes. The thicker borderlines delimit the six municipalities of Halland. 
The coverage (i.e., proportion of the eligible population) varied between 61-89% in the different parishes, except in one parish (red background) where the coverage was only 12%.", "Reported dmfs/DMFS >0 for an individual was considered as the primary caries outcome. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS >0 was obtained from the age- and sex-specific caries (dmfs/DMFS >0) rates for the whole region of Halland or, more precisely, for the total study population. The following age strata were used: 3-6, 7-11,12-18, and 19 years. Thus, the expected number for a parish equals the sum of the products ni×ri across the age- and sex-strata i(3-6 year old girls; 3-6 year old boys; 7-11 year old girls; etc.), where ni denotes the stratum-specific number of study individuals residing in the parish and ri denotes the corresponding caries rate observed in the total study population. The computations of the RRs were performed using the free software Rapid Inquiry Facility [12], which provides an extension to ESRI® ArcGIS functions [13]. The Rapid Inquiry Facility (RIF) along with free software for Bayesian data analyses, WinBUGS [14], provides a powerful tool for geo-mapping based on epidemiological data. The caries risk maps show the smoothed RRs (SmRR) for each parish, which were obtained by running the Bayesian hierarchical mapping model in RIF/WinBUGS. We underline that such Bayesian smoothing yields pronounced downward adjustment of a (conventional) RR for a parish with few study persons, estimated with relatively high uncertainty, if that RR turns out notably elevated. Hence, by presenting smoothed caries risk geo-maps, rational adjustments of the conventional (parish-specific) RRs are taken into account [15,16].\nWe present separate caries risk geo-maps for preschool children (3-6 years), schoolchildren (7-11 years) and adolescents [12-19 years; based on age-stratified (12-18 and 19 years, respectively) analysis]. Along with each caries risk geo-map, we provide the corresponding statistical certainty geo-map. A posterior probability of a parish-specific relative risk above one given the data, denoted Pr(RR>1|data), was obtained by the Bayesian approach. A parish with data yielding strong statistical evidence of an elevated caries risk, more precisely Pr(RR>1|data) > 0.95, was colored red in the certainty geo-map. By contrast, a parish with evidently low caries risk, Pr(RR<1|data) > 0.95, was colored green. Analogously, parish-specific 90% credibility intervals for the relative risk were obtained; and each parish with a 90% credibility interval that covers 1 was colored yellow in the certainty geo-map indicating a weaker statistical evidence for a high or low relative risk in such parishes.\nWe addressed geographical co-variations between caries risk and residents'' level of education by calculating Spearman''s correlations (rS) between the SmRRs and the proportions with post-secondary educational level among the residents (considered as a group-level indicator of socio-economy) across the 66 parishes.", "The proportion of children with no obvious decay at various ages and the cumulative burden of caries in the region, expressed as mean dmfs/DMFS, are shown in Table 1. The geo-maps of caries risk for preschool children, schoolchildren and adolescents are presented in Figures 2, 3 and 4. The geographical variation in caries risk was obvious. 
Among the preschool children, the smoothed relative risk (SmRR) varied from 0.33 to 2.37 in different parishes. With increasing age, the contrasts seemed to diminish although the gross geographical risk pattern persisted also among the adolescents (SmRR range 0.75-1.20). As expected, the lowest caries risk (dark green) was seen for all ages in the northern area with the elevated natural fluoride content in water supply. The across-parish-correlation between the SmRRs and the proportions with post-secondary educational level among the residents was highly significant for preschool children (rS = -0.59; p < 0.01), but not for schoolchildren (rS = -0.21; p = 0.10) and adolescents (rS = -0.12; p = 0.24).\nPrevalence and experience of manifest (dentin) caries on surface level in the Region of Halland 2010\nChildren with no obvious decay was defined as dmfs/DMFS = 0\n*primary teeth only\n§permanent teeth only\nGeo-map of caries risk in preschool children. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.33-2.37) of caries (dmfs/DMFS >0) among preschoolers (3-6 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk].\nGeo-map of caries risk in schoolchildren. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.36-1.47) of caries (dmfs/DMFS >0) among schoolchildren (7-11 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk].\nGeo-map of caries risk in adolescents. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.75-1.20) of caries (dmfs/DMFS >0) among adolescents (12-19 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk].", "It is generally recognized that GIS may play an essential role in helping public health organizations understand population health and make decisions [17]. The methods we applied have previously been utilized, e.g. 
within general medicine for risk mapping of common chronic diseases [18]. However, dental caries is also a common chronic disease and it is evident that severe caries in childhood affects the quality of life. It is also clear that the skew burden of the disease calls for allocation of resources and manpower for preventive activities among those with the highest need. To our knowledge, the GIS concept has not previously been used within dentistry in the present manner. We have noticed close similarities with geo-maps based on some other diseases and medical conditions with life-style and behavior-related determinants, which is interesting. Thus, by addressing the common risk factor approach (high-sugar diet, fats, smoking, alcohol, lack of control, etc), dental professionals will not only prevent dental diseases but will also contribute to preventing obesity, heart disease and diabetes [19]. The co-morbidity of the chronic dental and medical conditions is strong argument for the allocation of monetary and human resources according to the \"directed vulnerable population strategy\" (DVPS) that may be cost saving in the long perspective [20].\nFrom an international perspective, it should be pointed out that the overall cumulative caries burden in the present study population was very low (Table 1) [21]. Although the social and fluoride gradients on caries risk were quite expected, the produced geo-maps certainly provided some interesting and useful information for oral health planners. Educational level was considered a valid, objective socioeconomic indicator in this study area; it is also a parish-level measure that has been stable during recent years. In general, the educational level is higher in the urban residential areas and lower in the rural areas. Within the urban areas (located along coastline in the west) the social gradient could be explained by other factors such as family income and proportion of immigrants. The social gradient could be elaborated more in detail. Additional data on contextual/geographic and individual variables that are related to the caries outcome could allow for more extensive multilevel modeling. Nevertheless, investigators should consider adjustments for contextual and individual predictors with caution, depending on their purpose being to disclose vulnerable groups or provide further insights regarding the underlying predictors. Indeed, a quick glance at the present geo-maps could assist in making policies for reducing dental caries in vulnerable groups. For example, in the centre of the low-risk high-fluoride Kungsbacka municipality in the northern part of the region, an urban parish (1,334 study persons) with a notably higher caries risk among pre- and schoolchildren was identified (Figures 2 and 3). Schools and nurseries are excellent arenas to promote a healthy lifestyle and self-care practices in children [19]. Therefore, a local school-based fluoride and healthy-habit activity could be implemented in such a parish; fluorides have been proven to be effective in controlling and preventing caries and are a vital component of all caries prevention programs [22-24].\nAnother interesting example was the demonstrated age-related impact of the smoothed relative risk which may imply that a relatively larger proportion of the preventive efforts should be steered towards the youngest age groups. Among the preschoolers, the SmRRs captured the geographical risk pattern of early childhood caries. 
There are several documented examples of successful programs for infant feeding and early fluoride exposure that have reduced inequalities in oral health in preschool children in a cost effective way [25]. In the light of the examples given above, the suggested geo-maps may be an additional tool for allocation of necessary resources to dental practitioners to act as health advocates and monitor out-reach interventions performed by auxiliary staff. Among the adolescents, who have pronounced cumulative caries burden (Table 1), complementary measures to the SmRR (considering also mean dmfs/DMS) might be taken into account as well, in order to better capturing the geographical contrasts in caries burden.", "In summary, geo-maps based on caries risk may provide a novel option to allocate resources and tailor supportive and preventive measures within regions with sections of the population with relatively high caries rates." ]
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Geographical information system (GIS) methods", "Epidemiological and statistical methods", "Results", "Discussion", "Conclusion" ]
[ "In spite of the global decline in childhood caries, widening inequalities in oral health exist between social classes and certain minority ethnic groups [1,2]. In the United Kingdom, this has primarily been observed among preschool- and schoolchildren [3,4]. Also in Scandinavia, where almost all children and adolescents attend the prevention-oriented free public dental service, a social gradient for dental health is evident [5-7]. In order to reduce these gaps, various oral health promotion activities has been suggested and, preferable, integrated with general health education since oral diseases and chronic systemic diseases share many common risk factors [8,9]. The challenge and crucial decision for policymakers and professionals are to allocate resources and establish evidence-based programs that meet the needs of the vulnerable children at risk for caries. The allocation is traditionally based on conventional caries epidemiological data in spite of the fact that the children by then already are diseased. Oral health programs based on caries risk would be more proactive since a comprehensive risk assessment is an essential component in the decision-making process for the prevention and management of dental caries [10]. Thus, the aims of the present communication were to suggest a novel approach to present epidemiological data based on caries risk in a population. We launch the use of geo-maps and apply the concept in the southwest Swedish region of Halland.", " Study population The Halland region has approximately 70,000 inhabitants below the age of 20 years and the vast majority is listed as regular patients at the Public Dental Service that provides free dental care between 1 and 19 years with recall intervals varying from 3 to 24 months depending on the individual need. Data on the experience of manifest (dentin) caries is registered according to the WHO-criteria [11] and annually reported to the community dentistry unit. The present study population included 46,536 individuals for whom caries data were reported in year 2010; they were between 3-19 years old when examined. The overall coverage was 75% of the total 3-19-year population of the region. The remaining children were not recalled for a regular check-up that year or visited a private dentist outside the region. In Halland, the fluoride concentration in piped water supply is low (<0.3 ppm) except in the northern part (the municipality of Kungsbacka) where the natural fluoride content is approximately 1.0 ppm. There is also a geographical variation in socio-economic characteristics of the population in the Halland region. For example, the proportion with post-secondary education among all residents varies between 10 to 48% across different parishes (data from year 2010 provided from Statistics Sweden). The study was approved by the Halland Hospital Ethical committee as well as The Swedish Data Inspection Board.\nThe Halland region has approximately 70,000 inhabitants below the age of 20 years and the vast majority is listed as regular patients at the Public Dental Service that provides free dental care between 1 and 19 years with recall intervals varying from 3 to 24 months depending on the individual need. Data on the experience of manifest (dentin) caries is registered according to the WHO-criteria [11] and annually reported to the community dentistry unit. The present study population included 46,536 individuals for whom caries data were reported in year 2010; they were between 3-19 years old when examined. 
The overall coverage was 75% of the total 3-19-year population of the region. The remaining children were not recalled for a regular check-up that year or visited a private dentist outside the region. In Halland, the fluoride concentration in piped water supply is low (<0.3 ppm) except in the northern part (the municipality of Kungsbacka) where the natural fluoride content is approximately 1.0 ppm. There is also a geographical variation in socio-economic characteristics of the population in the Halland region. For example, the proportion with post-secondary education among all residents varies between 10 to 48% across different parishes (data from year 2010 provided from Statistics Sweden). The study was approved by the Halland Hospital Ethical committee as well as The Swedish Data Inspection Board.\n Geographical information system (GIS) methods The Halland region consists of six municipalities that are subdivided into 66 parishes. Geo-maps were produced by using the ESRI® ArcGIS system (Environmental Systems Research Institute, Inc., USA). Each study individual was geo-coded with respect to his/her residence area (parish). Figure 1 shows the number of study persons in each parish\nDistribution of participants. Geo-map of the Halland region (southwest Sweden) showing the number of study person between 3-19 years of age in each of the 66 residential parishes. The thicker borderlines delimit the six municipalities of Halland. The coverage (i.e., proportion of the eligible population) varied between 61-89% in the different parishes, except in one parish (red background) where the coverage was only 12%.\nThe Halland region consists of six municipalities that are subdivided into 66 parishes. Geo-maps were produced by using the ESRI® ArcGIS system (Environmental Systems Research Institute, Inc., USA). Each study individual was geo-coded with respect to his/her residence area (parish). Figure 1 shows the number of study persons in each parish\nDistribution of participants. Geo-map of the Halland region (southwest Sweden) showing the number of study person between 3-19 years of age in each of the 66 residential parishes. The thicker borderlines delimit the six municipalities of Halland. The coverage (i.e., proportion of the eligible population) varied between 61-89% in the different parishes, except in one parish (red background) where the coverage was only 12%.\n Epidemiological and statistical methods Reported dmfs/DMFS >0 for an individual was considered as the primary caries outcome. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS >0 was obtained from the age- and sex-specific caries (dmfs/DMFS >0) rates for the whole region of Halland or, more precisely, for the total study population. The following age strata were used: 3-6, 7-11,12-18, and 19 years. Thus, the expected number for a parish equals the sum of the products ni×ri across the age- and sex-strata i(3-6 year old girls; 3-6 year old boys; 7-11 year old girls; etc.), where ni denotes the stratum-specific number of study individuals residing in the parish and ri denotes the corresponding caries rate observed in the total study population. The computations of the RRs were performed using the free software Rapid Inquiry Facility [12], which provides an extension to ESRI® ArcGIS functions [13]. 
The Rapid Inquiry Facility (RIF) along with free software for Bayesian data analyses, WinBUGS [14], provides a powerful tool for geo-mapping based on epidemiological data. The caries risk maps show the smoothed RRs (SmRR) for each parish, which were obtained by running the Bayesian hierarchical mapping model in RIF/WinBUGS. We underline that such Bayesian smoothing yields pronounced downward adjustment of a (conventional) RR for a parish with few study persons, estimated with relatively high uncertainty, if that RR turns out notably elevated. Hence, by presenting smoothed caries risk geo-maps, rational adjustments of the conventional (parish-specific) RRs are taken into account [15,16].\nWe present separate caries risk geo-maps for preschool children (3-6 years), schoolchildren (7-11 years) and adolescents [12-19 years; based on age-stratified (12-18 and 19 years, respectively) analysis]. Along with each caries risk geo-map, we provide the corresponding statistical certainty geo-map. A posterior probability of a parish-specific relative risk above one given the data, denoted Pr(RR>1|data), was obtained by the Bayesian approach. A parish with data yielding strong statistical evidence of an elevated caries risk, more precisely Pr(RR>1|data) > 0.95, was colored red in the certainty geo-map. By contrast, a parish with evidently low caries risk, Pr(RR<1|data) > 0.95, was colored green. Analogously, parish-specific 90% credibility intervals for the relative risk were obtained; and each parish with a 90% credibility interval that covers 1 was colored yellow in the certainty geo-map indicating a weaker statistical evidence for a high or low relative risk in such parishes.\nWe addressed geographical co-variations between caries risk and residents'' level of education by calculating Spearman''s correlations (rS) between the SmRRs and the proportions with post-secondary educational level among the residents (considered as a group-level indicator of socio-economy) across the 66 parishes.\nReported dmfs/DMFS >0 for an individual was considered as the primary caries outcome. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS >0 was obtained from the age- and sex-specific caries (dmfs/DMFS >0) rates for the whole region of Halland or, more precisely, for the total study population. The following age strata were used: 3-6, 7-11,12-18, and 19 years. Thus, the expected number for a parish equals the sum of the products ni×ri across the age- and sex-strata i(3-6 year old girls; 3-6 year old boys; 7-11 year old girls; etc.), where ni denotes the stratum-specific number of study individuals residing in the parish and ri denotes the corresponding caries rate observed in the total study population. The computations of the RRs were performed using the free software Rapid Inquiry Facility [12], which provides an extension to ESRI® ArcGIS functions [13]. The Rapid Inquiry Facility (RIF) along with free software for Bayesian data analyses, WinBUGS [14], provides a powerful tool for geo-mapping based on epidemiological data. The caries risk maps show the smoothed RRs (SmRR) for each parish, which were obtained by running the Bayesian hierarchical mapping model in RIF/WinBUGS. We underline that such Bayesian smoothing yields pronounced downward adjustment of a (conventional) RR for a parish with few study persons, estimated with relatively high uncertainty, if that RR turns out notably elevated. 
Hence, by presenting smoothed caries risk geo-maps, rational adjustments of the conventional (parish-specific) RRs are taken into account [15,16].\nWe present separate caries risk geo-maps for preschool children (3-6 years), schoolchildren (7-11 years) and adolescents [12-19 years; based on age-stratified (12-18 and 19 years, respectively) analysis]. Along with each caries risk geo-map, we provide the corresponding statistical certainty geo-map. A posterior probability of a parish-specific relative risk above one given the data, denoted Pr(RR>1|data), was obtained by the Bayesian approach. A parish with data yielding strong statistical evidence of an elevated caries risk, more precisely Pr(RR>1|data) > 0.95, was colored red in the certainty geo-map. By contrast, a parish with evidently low caries risk, Pr(RR<1|data) > 0.95, was colored green. Analogously, parish-specific 90% credibility intervals for the relative risk were obtained; and each parish with a 90% credibility interval that covers 1 was colored yellow in the certainty geo-map indicating a weaker statistical evidence for a high or low relative risk in such parishes.\nWe addressed geographical co-variations between caries risk and residents'' level of education by calculating Spearman''s correlations (rS) between the SmRRs and the proportions with post-secondary educational level among the residents (considered as a group-level indicator of socio-economy) across the 66 parishes.", "The Halland region has approximately 70,000 inhabitants below the age of 20 years and the vast majority is listed as regular patients at the Public Dental Service that provides free dental care between 1 and 19 years with recall intervals varying from 3 to 24 months depending on the individual need. Data on the experience of manifest (dentin) caries is registered according to the WHO-criteria [11] and annually reported to the community dentistry unit. The present study population included 46,536 individuals for whom caries data were reported in year 2010; they were between 3-19 years old when examined. The overall coverage was 75% of the total 3-19-year population of the region. The remaining children were not recalled for a regular check-up that year or visited a private dentist outside the region. In Halland, the fluoride concentration in piped water supply is low (<0.3 ppm) except in the northern part (the municipality of Kungsbacka) where the natural fluoride content is approximately 1.0 ppm. There is also a geographical variation in socio-economic characteristics of the population in the Halland region. For example, the proportion with post-secondary education among all residents varies between 10 to 48% across different parishes (data from year 2010 provided from Statistics Sweden). The study was approved by the Halland Hospital Ethical committee as well as The Swedish Data Inspection Board.", "The Halland region consists of six municipalities that are subdivided into 66 parishes. Geo-maps were produced by using the ESRI® ArcGIS system (Environmental Systems Research Institute, Inc., USA). Each study individual was geo-coded with respect to his/her residence area (parish). Figure 1 shows the number of study persons in each parish\nDistribution of participants. Geo-map of the Halland region (southwest Sweden) showing the number of study person between 3-19 years of age in each of the 66 residential parishes. The thicker borderlines delimit the six municipalities of Halland. 
The coverage (i.e., proportion of the eligible population) varied between 61-89% in the different parishes, except in one parish (red background) where the coverage was only 12%.", "Reported dmfs/DMFS >0 for an individual was considered as the primary caries outcome. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS >0 was obtained from the age- and sex-specific caries (dmfs/DMFS >0) rates for the whole region of Halland or, more precisely, for the total study population. The following age strata were used: 3-6, 7-11,12-18, and 19 years. Thus, the expected number for a parish equals the sum of the products ni×ri across the age- and sex-strata i(3-6 year old girls; 3-6 year old boys; 7-11 year old girls; etc.), where ni denotes the stratum-specific number of study individuals residing in the parish and ri denotes the corresponding caries rate observed in the total study population. The computations of the RRs were performed using the free software Rapid Inquiry Facility [12], which provides an extension to ESRI® ArcGIS functions [13]. The Rapid Inquiry Facility (RIF) along with free software for Bayesian data analyses, WinBUGS [14], provides a powerful tool for geo-mapping based on epidemiological data. The caries risk maps show the smoothed RRs (SmRR) for each parish, which were obtained by running the Bayesian hierarchical mapping model in RIF/WinBUGS. We underline that such Bayesian smoothing yields pronounced downward adjustment of a (conventional) RR for a parish with few study persons, estimated with relatively high uncertainty, if that RR turns out notably elevated. Hence, by presenting smoothed caries risk geo-maps, rational adjustments of the conventional (parish-specific) RRs are taken into account [15,16].\nWe present separate caries risk geo-maps for preschool children (3-6 years), schoolchildren (7-11 years) and adolescents [12-19 years; based on age-stratified (12-18 and 19 years, respectively) analysis]. Along with each caries risk geo-map, we provide the corresponding statistical certainty geo-map. A posterior probability of a parish-specific relative risk above one given the data, denoted Pr(RR>1|data), was obtained by the Bayesian approach. A parish with data yielding strong statistical evidence of an elevated caries risk, more precisely Pr(RR>1|data) > 0.95, was colored red in the certainty geo-map. By contrast, a parish with evidently low caries risk, Pr(RR<1|data) > 0.95, was colored green. Analogously, parish-specific 90% credibility intervals for the relative risk were obtained; and each parish with a 90% credibility interval that covers 1 was colored yellow in the certainty geo-map indicating a weaker statistical evidence for a high or low relative risk in such parishes.\nWe addressed geographical co-variations between caries risk and residents'' level of education by calculating Spearman''s correlations (rS) between the SmRRs and the proportions with post-secondary educational level among the residents (considered as a group-level indicator of socio-economy) across the 66 parishes.", "The proportion of children with no obvious decay at various ages and the cumulative burden of caries in the region, expressed as mean dmfs/DMFS, are shown in Table 1. The geo-maps of caries risk for preschool children, schoolchildren and adolescents are presented in Figures 2, 3 and 4. The geographical variation in caries risk was obvious. 
Among the preschool children, the smoothed relative risk (SmRR) varied from 0.33 to 2.37 in different parishes. With increasing age, the contrasts seemed to diminish although the gross geographical risk pattern persisted also among the adolescents (SmRR range 0.75-1.20). As expected, the lowest caries risk (dark green) was seen for all ages in the northern area with the elevated natural fluoride content in water supply. The across-parish-correlation between the SmRRs and the proportions with post-secondary educational level among the residents was highly significant for preschool children (rS = -0.59; p < 0.01), but not for schoolchildren (rS = -0.21; p = 0.10) and adolescents (rS = -0.12; p = 0.24).\nPrevalence and experience of manifest (dentin) caries on surface level in the Region of Halland 2010\nChildren with no obvious decay was defined as dmfs/DMFS = 0\n*primary teeth only\n§permanent teeth only\nGeo-map of caries risk in preschool children. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.33-2.37) of caries (dmfs/DMFS >0) among preschoolers (3-6 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk].\nGeo-map of caries risk in schoolchildren. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.36-1.47) of caries (dmfs/DMFS >0) among schoolchildren (7-11 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk].\nGeo-map of caries risk in adolescents. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.75-1.20) of caries (dmfs/DMFS >0) among adolescents (12-19 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk].", "It is generally recognized that GIS may play an essential role in helping public health organizations understand population health and make decisions [17]. The methods we applied have previously been utilized, e.g. 
within general medicine for risk mapping of common chronic diseases [18]. However, dental caries is also a common chronic disease and it is evident that severe caries in childhood affects the quality of life. It is also clear that the skew burden of the disease calls for allocation of resources and manpower for preventive activities among those with the highest need. To our knowledge, the GIS concept has not previously been used within dentistry in the present manner. We have noticed close similarities with geo-maps based on some other diseases and medical conditions with life-style and behavior-related determinants, which is interesting. Thus, by addressing the common risk factor approach (high-sugar diet, fats, smoking, alcohol, lack of control, etc), dental professionals will not only prevent dental diseases but will also contribute to preventing obesity, heart disease and diabetes [19]. The co-morbidity of the chronic dental and medical conditions is strong argument for the allocation of monetary and human resources according to the \"directed vulnerable population strategy\" (DVPS) that may be cost saving in the long perspective [20].\nFrom an international perspective, it should be pointed out that the overall cumulative caries burden in the present study population was very low (Table 1) [21]. Although the social and fluoride gradients on caries risk were quite expected, the produced geo-maps certainly provided some interesting and useful information for oral health planners. Educational level was considered a valid, objective socioeconomic indicator in this study area; it is also a parish-level measure that has been stable during recent years. In general, the educational level is higher in the urban residential areas and lower in the rural areas. Within the urban areas (located along coastline in the west) the social gradient could be explained by other factors such as family income and proportion of immigrants. The social gradient could be elaborated more in detail. Additional data on contextual/geographic and individual variables that are related to the caries outcome could allow for more extensive multilevel modeling. Nevertheless, investigators should consider adjustments for contextual and individual predictors with caution, depending on their purpose being to disclose vulnerable groups or provide further insights regarding the underlying predictors. Indeed, a quick glance at the present geo-maps could assist in making policies for reducing dental caries in vulnerable groups. For example, in the centre of the low-risk high-fluoride Kungsbacka municipality in the northern part of the region, an urban parish (1,334 study persons) with a notably higher caries risk among pre- and schoolchildren was identified (Figures 2 and 3). Schools and nurseries are excellent arenas to promote a healthy lifestyle and self-care practices in children [19]. Therefore, a local school-based fluoride and healthy-habit activity could be implemented in such a parish; fluorides have been proven to be effective in controlling and preventing caries and are a vital component of all caries prevention programs [22-24].\nAnother interesting example was the demonstrated age-related impact of the smoothed relative risk which may imply that a relatively larger proportion of the preventive efforts should be steered towards the youngest age groups. Among the preschoolers, the SmRRs captured the geographical risk pattern of early childhood caries. 
There are several documented examples of successful programs for infant feeding and early fluoride exposure that have reduced inequalities in oral health in preschool children in a cost-effective way [25]. In the light of the examples given above, the suggested geo-maps may be an additional tool for allocating the necessary resources to dental practitioners to act as health advocates and monitor outreach interventions performed by auxiliary staff. Among the adolescents, who have a pronounced cumulative caries burden (Table 1), complementary measures to the SmRR (considering also the mean dmfs/DMFS) might be taken into account as well, in order to better capture the geographical contrasts in caries burden.", "In summary, geo-maps based on caries risk may provide a novel option to allocate resources and tailor supportive and preventive measures within regions with sections of the population with relatively high caries rates." ]
[ null, "methods", null, null, null, null, null, null ]
[ "caries", "children", "prevention", "geo-mapping" ]
Background: In spite of the global decline in childhood caries, widening inequalities in oral health exist between social classes and certain minority ethnic groups [1,2]. In the United Kingdom, this has primarily been observed among preschool- and schoolchildren [3,4]. Also in Scandinavia, where almost all children and adolescents attend the prevention-oriented free public dental service, a social gradient for dental health is evident [5-7]. In order to reduce these gaps, various oral health promotion activities has been suggested and, preferable, integrated with general health education since oral diseases and chronic systemic diseases share many common risk factors [8,9]. The challenge and crucial decision for policymakers and professionals are to allocate resources and establish evidence-based programs that meet the needs of the vulnerable children at risk for caries. The allocation is traditionally based on conventional caries epidemiological data in spite of the fact that the children by then already are diseased. Oral health programs based on caries risk would be more proactive since a comprehensive risk assessment is an essential component in the decision-making process for the prevention and management of dental caries [10]. Thus, the aims of the present communication were to suggest a novel approach to present epidemiological data based on caries risk in a population. We launch the use of geo-maps and apply the concept in the southwest Swedish region of Halland. Methods: Study population The Halland region has approximately 70,000 inhabitants below the age of 20 years and the vast majority is listed as regular patients at the Public Dental Service that provides free dental care between 1 and 19 years with recall intervals varying from 3 to 24 months depending on the individual need. Data on the experience of manifest (dentin) caries is registered according to the WHO-criteria [11] and annually reported to the community dentistry unit. The present study population included 46,536 individuals for whom caries data were reported in year 2010; they were between 3-19 years old when examined. The overall coverage was 75% of the total 3-19-year population of the region. The remaining children were not recalled for a regular check-up that year or visited a private dentist outside the region. In Halland, the fluoride concentration in piped water supply is low (<0.3 ppm) except in the northern part (the municipality of Kungsbacka) where the natural fluoride content is approximately 1.0 ppm. There is also a geographical variation in socio-economic characteristics of the population in the Halland region. For example, the proportion with post-secondary education among all residents varies between 10 to 48% across different parishes (data from year 2010 provided from Statistics Sweden). The study was approved by the Halland Hospital Ethical committee as well as The Swedish Data Inspection Board. The Halland region has approximately 70,000 inhabitants below the age of 20 years and the vast majority is listed as regular patients at the Public Dental Service that provides free dental care between 1 and 19 years with recall intervals varying from 3 to 24 months depending on the individual need. Data on the experience of manifest (dentin) caries is registered according to the WHO-criteria [11] and annually reported to the community dentistry unit. The present study population included 46,536 individuals for whom caries data were reported in year 2010; they were between 3-19 years old when examined. 
The overall coverage was 75% of the total 3-19-year population of the region. The remaining children were not recalled for a regular check-up that year or visited a private dentist outside the region. In Halland, the fluoride concentration in piped water supply is low (<0.3 ppm) except in the northern part (the municipality of Kungsbacka) where the natural fluoride content is approximately 1.0 ppm. There is also a geographical variation in socio-economic characteristics of the population in the Halland region. For example, the proportion with post-secondary education among all residents varies between 10 to 48% across different parishes (data from year 2010 provided from Statistics Sweden). The study was approved by the Halland Hospital Ethical committee as well as The Swedish Data Inspection Board. Geographical information system (GIS) methods The Halland region consists of six municipalities that are subdivided into 66 parishes. Geo-maps were produced by using the ESRI® ArcGIS system (Environmental Systems Research Institute, Inc., USA). Each study individual was geo-coded with respect to his/her residence area (parish). Figure 1 shows the number of study persons in each parish Distribution of participants. Geo-map of the Halland region (southwest Sweden) showing the number of study person between 3-19 years of age in each of the 66 residential parishes. The thicker borderlines delimit the six municipalities of Halland. The coverage (i.e., proportion of the eligible population) varied between 61-89% in the different parishes, except in one parish (red background) where the coverage was only 12%. The Halland region consists of six municipalities that are subdivided into 66 parishes. Geo-maps were produced by using the ESRI® ArcGIS system (Environmental Systems Research Institute, Inc., USA). Each study individual was geo-coded with respect to his/her residence area (parish). Figure 1 shows the number of study persons in each parish Distribution of participants. Geo-map of the Halland region (southwest Sweden) showing the number of study person between 3-19 years of age in each of the 66 residential parishes. The thicker borderlines delimit the six municipalities of Halland. The coverage (i.e., proportion of the eligible population) varied between 61-89% in the different parishes, except in one parish (red background) where the coverage was only 12%. Epidemiological and statistical methods Reported dmfs/DMFS >0 for an individual was considered as the primary caries outcome. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS >0 was obtained from the age- and sex-specific caries (dmfs/DMFS >0) rates for the whole region of Halland or, more precisely, for the total study population. The following age strata were used: 3-6, 7-11,12-18, and 19 years. Thus, the expected number for a parish equals the sum of the products ni×ri across the age- and sex-strata i(3-6 year old girls; 3-6 year old boys; 7-11 year old girls; etc.), where ni denotes the stratum-specific number of study individuals residing in the parish and ri denotes the corresponding caries rate observed in the total study population. The computations of the RRs were performed using the free software Rapid Inquiry Facility [12], which provides an extension to ESRI® ArcGIS functions [13]. 
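The observed-to-expected computation described above can be reproduced outside RIF; the following Python sketch, using hypothetical stratum counts and rates rather than the study data, standardizes a parish's expected caseload by the regional age- and sex-specific caries rates and forms the conventional RR.

# Parish-specific relative risk (RR) = observed / expected, with
# expected = sum over strata of n_i * r_i. All numbers are hypothetical.

# Regional caries rates (proportion with dmfs/DMFS > 0) per age- and sex-stratum.
regional_rates = {
    ("3-6", "girls"): 0.12, ("3-6", "boys"): 0.14,
    ("7-11", "girls"): 0.30, ("7-11", "boys"): 0.32,
    ("12-18", "girls"): 0.55, ("12-18", "boys"): 0.57,
    ("19", "girls"): 0.70, ("19", "boys"): 0.72,
}

# Number of study persons per stratum residing in one hypothetical parish.
parish_counts = {
    ("3-6", "girls"): 40, ("3-6", "boys"): 45,
    ("7-11", "girls"): 60, ("7-11", "boys"): 55,
    ("12-18", "girls"): 80, ("12-18", "boys"): 85,
    ("19", "girls"): 10, ("19", "boys"): 12,
}

observed_cases = 140  # hypothetical number of parish residents with dmfs/DMFS > 0

expected_cases = sum(parish_counts[s] * regional_rates[s] for s in parish_counts)
relative_risk = observed_cases / expected_cases
print(f"expected = {expected_cases:.1f}, RR = {relative_risk:.2f}")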
The Rapid Inquiry Facility (RIF) along with free software for Bayesian data analyses, WinBUGS [14], provides a powerful tool for geo-mapping based on epidemiological data. The caries risk maps show the smoothed RRs (SmRR) for each parish, which were obtained by running the Bayesian hierarchical mapping model in RIF/WinBUGS. We underline that such Bayesian smoothing yields pronounced downward adjustment of a (conventional) RR for a parish with few study persons, estimated with relatively high uncertainty, if that RR turns out notably elevated. Hence, by presenting smoothed caries risk geo-maps, rational adjustments of the conventional (parish-specific) RRs are taken into account [15,16]. We present separate caries risk geo-maps for preschool children (3-6 years), schoolchildren (7-11 years) and adolescents [12-19 years; based on age-stratified (12-18 and 19 years, respectively) analysis]. Along with each caries risk geo-map, we provide the corresponding statistical certainty geo-map. A posterior probability of a parish-specific relative risk above one given the data, denoted Pr(RR>1|data), was obtained by the Bayesian approach. A parish with data yielding strong statistical evidence of an elevated caries risk, more precisely Pr(RR>1|data) > 0.95, was colored red in the certainty geo-map. By contrast, a parish with evidently low caries risk, Pr(RR<1|data) > 0.95, was colored green. Analogously, parish-specific 90% credibility intervals for the relative risk were obtained; and each parish with a 90% credibility interval that covers 1 was colored yellow in the certainty geo-map indicating a weaker statistical evidence for a high or low relative risk in such parishes. We addressed geographical co-variations between caries risk and residents'' level of education by calculating Spearman''s correlations (rS) between the SmRRs and the proportions with post-secondary educational level among the residents (considered as a group-level indicator of socio-economy) across the 66 parishes. Reported dmfs/DMFS >0 for an individual was considered as the primary caries outcome. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS >0 was obtained from the age- and sex-specific caries (dmfs/DMFS >0) rates for the whole region of Halland or, more precisely, for the total study population. The following age strata were used: 3-6, 7-11,12-18, and 19 years. Thus, the expected number for a parish equals the sum of the products ni×ri across the age- and sex-strata i(3-6 year old girls; 3-6 year old boys; 7-11 year old girls; etc.), where ni denotes the stratum-specific number of study individuals residing in the parish and ri denotes the corresponding caries rate observed in the total study population. The computations of the RRs were performed using the free software Rapid Inquiry Facility [12], which provides an extension to ESRI® ArcGIS functions [13]. The Rapid Inquiry Facility (RIF) along with free software for Bayesian data analyses, WinBUGS [14], provides a powerful tool for geo-mapping based on epidemiological data. The caries risk maps show the smoothed RRs (SmRR) for each parish, which were obtained by running the Bayesian hierarchical mapping model in RIF/WinBUGS. We underline that such Bayesian smoothing yields pronounced downward adjustment of a (conventional) RR for a parish with few study persons, estimated with relatively high uncertainty, if that RR turns out notably elevated. 
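For readers without access to RIF/WinBUGS, a deliberately simplified, non-spatial analogue of such a hierarchical smoothing model can be sketched in PyMC (an assumption of convenience; the RIF/WinBUGS mapping model also includes spatially structured random effects, and the counts below are hypothetical):

# Minimal, non-spatial sketch of Bayesian smoothing of parish RRs.
import numpy as np
import pymc as pm

observed = np.array([12, 3, 45, 7, 30])             # cases with dmfs/DMFS > 0 per parish
expected = np.array([10.2, 1.1, 41.8, 9.5, 24.0])   # expected counts from regional stratum rates

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)          # regional log-relative-risk level
    sigma = pm.Exponential("sigma", 1.0)             # between-parish spread
    log_rr = pm.Normal("log_rr", mu=mu, sigma=sigma, shape=len(observed))
    pm.Poisson("cases", mu=expected * pm.math.exp(log_rr), observed=observed)
    trace = pm.sample(2000, tune=1000, chains=2, random_seed=1)

smrr = np.exp(trace.posterior["log_rr"].mean(dim=("chain", "draw")).values)
print(smrr)  # smoothed relative risks, shrunk toward the regional mean

The posterior means are shrunk toward the regional level, most strongly for parishes with few study persons, and the same posterior draws directly yield Pr(RR>1|data) and the 90% credibility intervals used for the certainty maps.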
Hence, by presenting smoothed caries risk geo-maps, rational adjustments of the conventional (parish-specific) RRs are taken into account [15,16]. We present separate caries risk geo-maps for preschool children (3-6 years), schoolchildren (7-11 years) and adolescents [12-19 years; based on age-stratified (12-18 and 19 years, respectively) analysis]. Along with each caries risk geo-map, we provide the corresponding statistical certainty geo-map. A posterior probability of a parish-specific relative risk above one given the data, denoted Pr(RR>1|data), was obtained by the Bayesian approach. A parish with data yielding strong statistical evidence of an elevated caries risk, more precisely Pr(RR>1|data) > 0.95, was colored red in the certainty geo-map. By contrast, a parish with evidently low caries risk, Pr(RR<1|data) > 0.95, was colored green. Analogously, parish-specific 90% credibility intervals for the relative risk were obtained; and each parish with a 90% credibility interval that covers 1 was colored yellow in the certainty geo-map indicating a weaker statistical evidence for a high or low relative risk in such parishes. We addressed geographical co-variations between caries risk and residents'' level of education by calculating Spearman''s correlations (rS) between the SmRRs and the proportions with post-secondary educational level among the residents (considered as a group-level indicator of socio-economy) across the 66 parishes. Study population: The Halland region has approximately 70,000 inhabitants below the age of 20 years and the vast majority is listed as regular patients at the Public Dental Service that provides free dental care between 1 and 19 years with recall intervals varying from 3 to 24 months depending on the individual need. Data on the experience of manifest (dentin) caries is registered according to the WHO-criteria [11] and annually reported to the community dentistry unit. The present study population included 46,536 individuals for whom caries data were reported in year 2010; they were between 3-19 years old when examined. The overall coverage was 75% of the total 3-19-year population of the region. The remaining children were not recalled for a regular check-up that year or visited a private dentist outside the region. In Halland, the fluoride concentration in piped water supply is low (<0.3 ppm) except in the northern part (the municipality of Kungsbacka) where the natural fluoride content is approximately 1.0 ppm. There is also a geographical variation in socio-economic characteristics of the population in the Halland region. For example, the proportion with post-secondary education among all residents varies between 10 to 48% across different parishes (data from year 2010 provided from Statistics Sweden). The study was approved by the Halland Hospital Ethical committee as well as The Swedish Data Inspection Board. Geographical information system (GIS) methods: The Halland region consists of six municipalities that are subdivided into 66 parishes. Geo-maps were produced by using the ESRI® ArcGIS system (Environmental Systems Research Institute, Inc., USA). Each study individual was geo-coded with respect to his/her residence area (parish). Figure 1 shows the number of study persons in each parish Distribution of participants. Geo-map of the Halland region (southwest Sweden) showing the number of study person between 3-19 years of age in each of the 66 residential parishes. The thicker borderlines delimit the six municipalities of Halland. 
The coverage (i.e., proportion of the eligible population) varied between 61-89% in the different parishes, except in one parish (red background) where the coverage was only 12%. Epidemiological and statistical methods: Reported dmfs/DMFS >0 for an individual was considered as the primary caries outcome. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS >0 was obtained from the age- and sex-specific caries (dmfs/DMFS >0) rates for the whole region of Halland or, more precisely, for the total study population. The following age strata were used: 3-6, 7-11,12-18, and 19 years. Thus, the expected number for a parish equals the sum of the products ni×ri across the age- and sex-strata i(3-6 year old girls; 3-6 year old boys; 7-11 year old girls; etc.), where ni denotes the stratum-specific number of study individuals residing in the parish and ri denotes the corresponding caries rate observed in the total study population. The computations of the RRs were performed using the free software Rapid Inquiry Facility [12], which provides an extension to ESRI® ArcGIS functions [13]. The Rapid Inquiry Facility (RIF) along with free software for Bayesian data analyses, WinBUGS [14], provides a powerful tool for geo-mapping based on epidemiological data. The caries risk maps show the smoothed RRs (SmRR) for each parish, which were obtained by running the Bayesian hierarchical mapping model in RIF/WinBUGS. We underline that such Bayesian smoothing yields pronounced downward adjustment of a (conventional) RR for a parish with few study persons, estimated with relatively high uncertainty, if that RR turns out notably elevated. Hence, by presenting smoothed caries risk geo-maps, rational adjustments of the conventional (parish-specific) RRs are taken into account [15,16]. We present separate caries risk geo-maps for preschool children (3-6 years), schoolchildren (7-11 years) and adolescents [12-19 years; based on age-stratified (12-18 and 19 years, respectively) analysis]. Along with each caries risk geo-map, we provide the corresponding statistical certainty geo-map. A posterior probability of a parish-specific relative risk above one given the data, denoted Pr(RR>1|data), was obtained by the Bayesian approach. A parish with data yielding strong statistical evidence of an elevated caries risk, more precisely Pr(RR>1|data) > 0.95, was colored red in the certainty geo-map. By contrast, a parish with evidently low caries risk, Pr(RR<1|data) > 0.95, was colored green. Analogously, parish-specific 90% credibility intervals for the relative risk were obtained; and each parish with a 90% credibility interval that covers 1 was colored yellow in the certainty geo-map indicating a weaker statistical evidence for a high or low relative risk in such parishes. We addressed geographical co-variations between caries risk and residents'' level of education by calculating Spearman''s correlations (rS) between the SmRRs and the proportions with post-secondary educational level among the residents (considered as a group-level indicator of socio-economy) across the 66 parishes. Results: The proportion of children with no obvious decay at various ages and the cumulative burden of caries in the region, expressed as mean dmfs/DMFS, are shown in Table 1. The geo-maps of caries risk for preschool children, schoolchildren and adolescents are presented in Figures 2, 3 and 4. The geographical variation in caries risk was obvious. 
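The ecological correlation described in the methods above amounts to a rank correlation over the 66 parish-level pairs of SmRR and educational attainment; a minimal sketch with hypothetical parish values rather than the study data:

# Spearman correlation between parish SmRRs and the proportion of residents
# with post-secondary education (values below are hypothetical).
from scipy.stats import spearmanr

smrr = [2.1, 1.4, 1.0, 0.8, 0.6, 0.4]
post_secondary = [0.12, 0.18, 0.25, 0.31, 0.40, 0.47]

rho, p_value = spearmanr(smrr, post_secondary)
print(f"rS = {rho:.2f}, p = {p_value:.3f}")  # a negative rS indicates higher risk where education is lower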
Among the preschool children, the smoothed relative risk (SmRR) varied from 0.33 to 2.37 in different parishes. With increasing age, the contrasts seemed to diminish although the gross geographical risk pattern persisted also among the adolescents (SmRR range 0.75-1.20). As expected, the lowest caries risk (dark green) was seen for all ages in the northern area with the elevated natural fluoride content in water supply. The across-parish-correlation between the SmRRs and the proportions with post-secondary educational level among the residents was highly significant for preschool children (rS = -0.59; p < 0.01), but not for schoolchildren (rS = -0.21; p = 0.10) and adolescents (rS = -0.12; p = 0.24). Prevalence and experience of manifest (dentin) caries on surface level in the Region of Halland 2010 Children with no obvious decay was defined as dmfs/DMFS = 0 *primary teeth only §permanent teeth only Geo-map of caries risk in preschool children. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.33-2.37) of caries (dmfs/DMFS >0) among preschoolers (3-6 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk]. Geo-map of caries risk in schoolchildren. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.36-1.47) of caries (dmfs/DMFS >0) among schoolchildren (7-11 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk]. Geo-map of caries risk in adolescents. Caries risk geo-map of the Halland region (southwest Sweden) displaying, for each of the 66 residential parishes, the smoothed relative risk (SmRR, range between 0.75-1.20) of caries (dmfs/DMFS >0) among adolescents (12-19 years). The thicker borderlines delimit the six municipalities of Halland. The corresponding statistical certainty geo-map is also shown [red color, Pr(RR>1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of an elevated caries risk; green color, Pr(RR<1|data) > 0.95, i.e. a parish with data yielding strong statistical evidence of a low caries risk; and yellow color, the 90% credibility interval covers RR = 1, i.e. a parish with data yielding weaker statistical evidence for a high or low relative risk]. Discussion: It is generally recognized that GIS may play an essential role in helping public health organizations understand population health and make decisions [17]. The methods we applied have previously been utilized, e.g. 
within general medicine for risk mapping of common chronic diseases [18]. However, dental caries is also a common chronic disease and it is evident that severe caries in childhood affects the quality of life. It is also clear that the skew burden of the disease calls for allocation of resources and manpower for preventive activities among those with the highest need. To our knowledge, the GIS concept has not previously been used within dentistry in the present manner. We have noticed close similarities with geo-maps based on some other diseases and medical conditions with life-style and behavior-related determinants, which is interesting. Thus, by addressing the common risk factor approach (high-sugar diet, fats, smoking, alcohol, lack of control, etc), dental professionals will not only prevent dental diseases but will also contribute to preventing obesity, heart disease and diabetes [19]. The co-morbidity of the chronic dental and medical conditions is strong argument for the allocation of monetary and human resources according to the "directed vulnerable population strategy" (DVPS) that may be cost saving in the long perspective [20]. From an international perspective, it should be pointed out that the overall cumulative caries burden in the present study population was very low (Table 1) [21]. Although the social and fluoride gradients on caries risk were quite expected, the produced geo-maps certainly provided some interesting and useful information for oral health planners. Educational level was considered a valid, objective socioeconomic indicator in this study area; it is also a parish-level measure that has been stable during recent years. In general, the educational level is higher in the urban residential areas and lower in the rural areas. Within the urban areas (located along coastline in the west) the social gradient could be explained by other factors such as family income and proportion of immigrants. The social gradient could be elaborated more in detail. Additional data on contextual/geographic and individual variables that are related to the caries outcome could allow for more extensive multilevel modeling. Nevertheless, investigators should consider adjustments for contextual and individual predictors with caution, depending on their purpose being to disclose vulnerable groups or provide further insights regarding the underlying predictors. Indeed, a quick glance at the present geo-maps could assist in making policies for reducing dental caries in vulnerable groups. For example, in the centre of the low-risk high-fluoride Kungsbacka municipality in the northern part of the region, an urban parish (1,334 study persons) with a notably higher caries risk among pre- and schoolchildren was identified (Figures 2 and 3). Schools and nurseries are excellent arenas to promote a healthy lifestyle and self-care practices in children [19]. Therefore, a local school-based fluoride and healthy-habit activity could be implemented in such a parish; fluorides have been proven to be effective in controlling and preventing caries and are a vital component of all caries prevention programs [22-24]. Another interesting example was the demonstrated age-related impact of the smoothed relative risk which may imply that a relatively larger proportion of the preventive efforts should be steered towards the youngest age groups. Among the preschoolers, the SmRRs captured the geographical risk pattern of early childhood caries. 
There are several documented examples of successful programs for infant feeding and early fluoride exposure that have reduced inequalities in oral health in preschool children in a cost-effective way [25]. In the light of the examples given above, the suggested geo-maps may be an additional tool for allocating the necessary resources to dental practitioners to act as health advocates and monitor outreach interventions performed by auxiliary staff. Among the adolescents, who have a pronounced cumulative caries burden (Table 1), complementary measures to the SmRR (considering also the mean dmfs/DMFS) might be taken into account as well, in order to better capture the geographical contrasts in caries burden. Conclusion: In summary, geo-maps based on caries risk may provide a novel option to allocate resources and tailor supportive and preventive measures within regions with sections of the population with relatively high caries rates.
Background: Dental caries in children is unevenly distributed within populations with a higher burden in low socio-economy groups. Thus, tools are needed to allocate resources and establish evidence-based programs that meet the needs of those at risk. The aim of the study was to apply a novel concept for presenting epidemiological data based on caries risk in the region of Halland in southwest Sweden, using geo-maps. Methods: The study population consisted of 46,536 individuals between 3-19 years of age (75% of the eligible population) from whom caries data were reported in 2010. Reported dmfs/DMFS>0 for an individual was considered as the primary caries outcome. Each study individual was geo-coded with respect to his/her residence parish. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS>0 was obtained from the age- and sex-specific caries (dmfs/DMFS>0) rates for the total study population. Smoothed caries risk geo-maps, along with corresponding statistical certainty geo-maps, were produced by using the free software Rapid Inquiry Facility and the ESRI® ArcGIS system. Results: The geo-maps of preschool children (3-6 years), schoolchildren (7-11 years) and adolescents (12-19 years) displayed obvious geographical variations in caries risk, albeit most marked among the preschoolers. Among the preschool children the smoothed relative risk (SmRR) varied from 0.33 to 2.37 in different parishes. With increasing age, the contrasts seemed to diminish although the gross geographical risk pattern persisted also among the adolescents (SmRR range 0.75-1.20). Conclusions: Geo-maps based on caries risk may provide a novel option to allocate resources and tailor supportive and preventive measures within regions with sections of the population with relatively high caries rates.
Background: In spite of the global decline in childhood caries, widening inequalities in oral health exist between social classes and certain minority ethnic groups [1,2]. In the United Kingdom, this has primarily been observed among preschool- and schoolchildren [3,4]. Also in Scandinavia, where almost all children and adolescents attend the prevention-oriented free public dental service, a social gradient for dental health is evident [5-7]. In order to reduce these gaps, various oral health promotion activities have been suggested and, preferably, integrated with general health education, since oral diseases and chronic systemic diseases share many common risk factors [8,9]. The challenge and crucial decision for policymakers and professionals are to allocate resources and establish evidence-based programs that meet the needs of the vulnerable children at risk for caries. The allocation is traditionally based on conventional caries epidemiological data, in spite of the fact that the children are by then already diseased. Oral health programs based on caries risk would be more proactive, since a comprehensive risk assessment is an essential component in the decision-making process for the prevention and management of dental caries [10]. Thus, the aim of the present communication was to suggest a novel approach to presenting epidemiological data based on caries risk in a population. We launch the use of geo-maps and apply the concept in the southwest Swedish region of Halland. Conclusion: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6831/11/26/prepub
Background: Dental caries in children is unevenly distributed within populations with a higher burden in low socio-economy groups. Thus, tools are needed to allocate resources and establish evidence-based programs that meet the needs of those at risk. The aim of the study was to apply a novel concept for presenting epidemiological data based on caries risk in the region of Halland in southwest Sweden, using geo-maps. Methods: The study population consisted of 46,536 individuals between 3-19 years of age (75% of the eligible population) from whom caries data were reported in 2010. Reported dmfs/DMFS>0 for an individual was considered as the primary caries outcome. Each study individual was geo-coded with respect to his/her residence parish. A parish-specific relative risk (RR) was calculated as the observed-to-expected ratio, where the expected number of individuals with dmfs/DMFS>0 was obtained from the age- and sex-specific caries (dmfs/DMFS>0) rates for the total study population. Smoothed caries risk geo-maps, along with corresponding statistical certainty geo-maps, were produced by using the free software Rapid Inquiry Facility and the ESRI® ArcGIS system. Results: The geo-maps of preschool children (3-6 years), schoolchildren (7-11 years) and adolescents (12-19 years) displayed obvious geographical variations in caries risk, albeit most marked among the preschoolers. Among the preschool children the smoothed relative risk (SmRR) varied from 0.33 to 2.37 in different parishes. With increasing age, the contrasts seemed to diminish although the gross geographical risk pattern persisted also among the adolescents (SmRR range 0.75-1.20). Conclusions: Geo-maps based on caries risk may provide a novel option to allocate resources and tailor supportive and preventive measures within regions with sections of the population with relatively high caries rates.
4,938
364
[ 260, 261, 152, 604, 741, 786, 37 ]
8
[ "caries", "risk", "parish", "data", "geo", "caries risk", "halland", "study", "years", "dmfs" ]
[ "adolescents caries risk", "caries prevention programs", "children risk caries", "risk schoolchildren caries", "caries risk preschool" ]
null
[CONTENT] caries | children | prevention | geo-mapping [SUMMARY]
[CONTENT] caries | children | prevention | geo-mapping [SUMMARY]
null
[CONTENT] caries | children | prevention | geo-mapping [SUMMARY]
[CONTENT] caries | children | prevention | geo-mapping [SUMMARY]
[CONTENT] caries | children | prevention | geo-mapping [SUMMARY]
[CONTENT] Adolescent | Age Factors | Bayes Theorem | Child | Child, Preschool | DMF Index | Data Collection | Dental Caries | Dental Caries Susceptibility | Educational Status | Female | Geographic Information Systems | Humans | Male | Resource Allocation | Risk | Risk Assessment | Sex Factors | Social Class | Sweden | Tooth, Deciduous | Young Adult [SUMMARY]
[CONTENT] Adolescent | Age Factors | Bayes Theorem | Child | Child, Preschool | DMF Index | Data Collection | Dental Caries | Dental Caries Susceptibility | Educational Status | Female | Geographic Information Systems | Humans | Male | Resource Allocation | Risk | Risk Assessment | Sex Factors | Social Class | Sweden | Tooth, Deciduous | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Age Factors | Bayes Theorem | Child | Child, Preschool | DMF Index | Data Collection | Dental Caries | Dental Caries Susceptibility | Educational Status | Female | Geographic Information Systems | Humans | Male | Resource Allocation | Risk | Risk Assessment | Sex Factors | Social Class | Sweden | Tooth, Deciduous | Young Adult [SUMMARY]
[CONTENT] Adolescent | Age Factors | Bayes Theorem | Child | Child, Preschool | DMF Index | Data Collection | Dental Caries | Dental Caries Susceptibility | Educational Status | Female | Geographic Information Systems | Humans | Male | Resource Allocation | Risk | Risk Assessment | Sex Factors | Social Class | Sweden | Tooth, Deciduous | Young Adult [SUMMARY]
[CONTENT] Adolescent | Age Factors | Bayes Theorem | Child | Child, Preschool | DMF Index | Data Collection | Dental Caries | Dental Caries Susceptibility | Educational Status | Female | Geographic Information Systems | Humans | Male | Resource Allocation | Risk | Risk Assessment | Sex Factors | Social Class | Sweden | Tooth, Deciduous | Young Adult [SUMMARY]
[CONTENT] adolescents caries risk | caries prevention programs | children risk caries | risk schoolchildren caries | caries risk preschool [SUMMARY]
[CONTENT] adolescents caries risk | caries prevention programs | children risk caries | risk schoolchildren caries | caries risk preschool [SUMMARY]
null
[CONTENT] adolescents caries risk | caries prevention programs | children risk caries | risk schoolchildren caries | caries risk preschool [SUMMARY]
[CONTENT] adolescents caries risk | caries prevention programs | children risk caries | risk schoolchildren caries | caries risk preschool [SUMMARY]
[CONTENT] adolescents caries risk | caries prevention programs | children risk caries | risk schoolchildren caries | caries risk preschool [SUMMARY]
[CONTENT] caries | risk | parish | data | geo | caries risk | halland | study | years | dmfs [SUMMARY]
[CONTENT] caries | risk | parish | data | geo | caries risk | halland | study | years | dmfs [SUMMARY]
null
[CONTENT] caries | risk | parish | data | geo | caries risk | halland | study | years | dmfs [SUMMARY]
[CONTENT] caries | risk | parish | data | geo | caries risk | halland | study | years | dmfs [SUMMARY]
[CONTENT] caries | risk | parish | data | geo | caries risk | halland | study | years | dmfs [SUMMARY]
[CONTENT] health | oral | caries | oral health | risk | based | decision | spite | dental | programs [SUMMARY]
[CONTENT] parish | data | risk | caries | study | year | specific | years | geo | rr [SUMMARY]
null
[CONTENT] risk provide novel option | resources tailor supportive preventive | regions sections population | tailor | tailor supportive | provide novel option allocate | provide novel option | provide novel | regions sections | tailor supportive preventive [SUMMARY]
[CONTENT] caries | risk | parish | data | geo | caries risk | study | halland | rr | year [SUMMARY]
[CONTENT] caries | risk | parish | data | geo | caries risk | study | halland | rr | year [SUMMARY]
[CONTENT] ||| ||| Halland | Sweden [SUMMARY]
[CONTENT] 46,536 | 3-19 years of age | 75% | 2010 ||| ||| ||| ||| certainty geo-maps | Rapid Inquiry Facility | ESRI [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| Halland | Sweden ||| 46,536 | 3-19 years of age | 75% | 2010 ||| ||| ||| ||| certainty geo-maps | Rapid Inquiry Facility | ESRI ||| 3-6 years | 7-11 years | 12-19 years ||| SmRR | 0.33 | 2.37 ||| SmRR | 0.75-1.20 ||| [SUMMARY]
[CONTENT] ||| ||| Halland | Sweden ||| 46,536 | 3-19 years of age | 75% | 2010 ||| ||| ||| ||| certainty geo-maps | Rapid Inquiry Facility | ESRI ||| 3-6 years | 7-11 years | 12-19 years ||| SmRR | 0.33 | 2.37 ||| SmRR | 0.75-1.20 ||| [SUMMARY]
Prevalence of Human T-lymphotropic virus type 1 and 2 among blood donors in Manaus, Amazonas State, Brazil.
29267588
Human T-lymphotropic virus type 1 and 2 (HTLV-1/2) is endemic in Brazil, but few studies have investigated the seroprevalence of HTLV and its subtypes among blood donors in the capital city Manaus, Amazonas State, Brazil.
INTRODUCTION
Blood donors (2001-2003) were screened for HTLV-1/2 antibodies by ELISA. Positive results were confirmed and subtyped by Western blot assays. Prevalence rates were calculated and compared with demographic data.
MATERIALS AND METHODS
Among the 87,402 individuals screened, 116 (0.13%) were seropositive for HTLV-1/2. A second sample (76/116) was collected and retested by HTLV-1/2 ELISA, of which only 41/76 were positive. Western blot confirmed HTLV infection in 24/41 retested blood donors [HTLV-1 (n=16), HTLV-2 (n=5) and HTLV-untypable (n=3)].
RESULTS
HTLV-1 and HTLV-2 are prevalent among blood donors in Manaus. However, additional studies are needed to comprehend the epidemiology of HTLV-1/2 in Amazonas, not only to understand the pathophysiology of the disease and provide adequate medical assistance, but also to reduce or block virus transmission.
DISCUSSION
[ "Adolescent", "Adult", "Blood Donors", "Blotting, Western", "Brazil", "Cross-Sectional Studies", "Enzyme-Linked Immunosorbent Assay", "Female", "HTLV-I Infections", "HTLV-II Infections", "Human T-lymphotropic virus 1", "Human T-lymphotropic virus 2", "Humans", "Male", "Middle Aged", "Prevalence", "Seroepidemiologic Studies", "Young Adult" ]
5738765
INTRODUCTION
Human T-lymphotropic virus 1 and 2 (HTLV-1 and HTLV-2) are retroviruses discovered in the 1980s [1,2]. It is estimated that at least 10 million people worldwide are infected with HTLV-1, with endemic foci in Japan, the Caribbean, South America, and Central Africa [3]. HTLV transmission occurs primarily via the following three routes: 1) vertically from mother to child, predominantly through breastfeeding [4,5]; 2) between sexual partners through unprotected intercourse [6,7]; and 3) blood transfusion from an HTLV-positive donor, or the sharing or re-use of needles and syringes to inject drugs [8,9]. A number of diseases have been associated with HTLV-1, including adult T-cell leukemia–lymphoma (ATLL), HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP) [10-13], uveitis, infective dermatitis, and other inflammatory disorders [14], although most virus carriers remain asymptomatic throughout their lives. Brazil is one of the largest endemic areas for HTLV-1 and -2 infections [3]. Seroprevalence rates of HTLV-1/2 in Brazilian blood donors range from 0.04% to 1.0% depending on the geographic region [15]. In Manaus, the capital of Amazonas State, the prevalence of HTLV-1 in first-time blood donors was reported as 0.14% (2008-2009) [16]. However, few studies have tried to identify HTLV subtypes in blood donors or to estimate the prevalence of HTLV in this population. The aim of this study was to estimate the seroprevalence of HTLV and to identify the subtypes among blood donors in Manaus.
null
null
RESULTS
A total of 87,402 blood donors were screened by HTLV-1/2 ELISA between August 2001 and August 2003. Of these, 116 individuals tested positive, giving a seroprevalence of 0.13% in the primary screening (Figure 1). However, only 76/116 (65.5%) volunteered to participate in the confirmatory testing. On retesting, only 41/76 individuals were found to be positive by HTLV-1/2 ELISA; these positive samples were then tested by Western blot (WB) to confirm the diagnosis and to identify the HTLV subtype. According to the WB assay, three samples were indeterminate, 14 samples were negative, and 24 were positive for HTLV. Among the 24 positive samples, 16 were HTLV-1 positive, five were HTLV-2 positive, and three were HTLV positive but untypable (Figure 1). Demographic and epidemiologic data of the HTLV-confirmed samples are shown in Table 1. The average age of the positive blood donors was 35.3 ± 11.5 years, and 41.6% (n=10) of the positive individuals were women. Moreover, 83.33% of the positive individuals were first-time blood donors and 62.5% identified themselves as married. Among the HTLV-confirmed individuals, 16.6% (n=4) reported a history of transfusion and 54.2% described having had multiple sex partners during their lives (data not shown).
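For orientation, the screening seroprevalence and an exact (Clopper-Pearson) 95% confidence interval can be reproduced from the counts above; the interval itself is an illustrative addition and is not reported in the study.

# Seroprevalence in the primary ELISA screening with an exact
# Clopper-Pearson 95% confidence interval (the interval is illustrative;
# only the point prevalence is reported above).
from scipy.stats import beta

positives, screened = 116, 87402
prevalence = positives / screened

alpha = 0.05
ci_low = beta.ppf(alpha / 2, positives, screened - positives + 1)
ci_high = beta.ppf(1 - alpha / 2, positives + 1, screened - positives)

print(f"prevalence = {100 * prevalence:.2f}%")                  # ~0.13%
print(f"95% CI = {100 * ci_low:.2f}% to {100 * ci_high:.2f}%")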
null
null
[ "Ethical considerations", "Patient information", "Serological analysis", "Statistics" ]
[ "This study was approved by the Research Ethics Committee of Fundação Hospitalar de Hematologia e Hemoterapia do Amazonas (FHEMOAM), in accordance with the Brazilian law, which complied with the Declaration of Helsinki. All the study participants signed an informed consent prior to enrolment.", "Individuals donating blood at FHEMOAM, Manaus, Amazonas, were recruited for the study between August 2001 and August 2003. A questionnaire was used to collect individual information (age, gender, marital status, educational level, first time or regular donor).", "Blood donors were screened for antibodies against HTLV-1/2 by using an enzyme-linked immunosorbent assay (ELISA-Murex HTLV 1, GE 80/81, + 2, Murex Diagnostics). Positive samples were then confirmed by Western blot (HTLV BLOT 2.4, Genelabs Diagnostics®, Singapore), following the manufacturer's instructions.", "Statistical analysis used the Chi-square or the Fisher exact test if the expected frequencies were less than five. A p-value of less than 0.05 was considered statistically significant. Analyses were performed using the Epi info 6 software." ]
[ null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Ethical considerations", "Patient information", "Serological analysis", "Statistics", "RESULTS", "DISCUSSION" ]
[ "Human T-lymphotropic virus 1 and 2 (HTLV-1 and HTLV-2) are retroviruses discovered in the 1980s\n1\n\n,\n\n2\n. It is estimated that at least 10 million people worldwide are infected with HTLV-1, with endemic foci in Japan, the Caribbean, South America, and Central Africa\n3\n. HTLV transmission occurs primarily via the following three routes: 1) vertically from mother to child, predominantly through breastfeeding\n4\n\n,\n\n5\n; 2) between sexual partners through unprotected intercourse\n6\n\n,\n\n7\n; and 3) blood transfusion from a HTLV positive donor or sharing or re-use of needles and syringes to inject drugs\n8\n\n,\n\n9\n. A number of diseases have been associated with HTLV-1, including adult T-cell leukemia–lymphoma (ATLL), myelopathy/tropical spastic paraparesis (HAM/TSP)\n10\n\n-\n\n13\n, uveitis, infective dermatitis, and other inflammatory disorders\n14\n, although most virus carriers remain asymptomatic throughout their lives. Brazil is one of the largest endemic areas for HTLV-1 and -2 infections\n3\n. Seroprevalence rates of HTLV-1/2 in Brazilian blood donors range from 0.04% to 1.0% depending on the geographic region\n15\n. In Manaus capital of the Amazonas State, the prevalence of HTLV-1 in first-time blood donors was reported as 0.14% (2008-2009)\n16\n. However, few studies have tried to identify HTLV subtypes in blood donors and the prevalence of HTLV in this population. The aim of this study was to estimate the seroprevalence of HTLV and to identify the subtypes among blood donors in Manaus.", " Ethical considerations This study was approved by the Research Ethics Committee of Fundação Hospitalar de Hematologia e Hemoterapia do Amazonas (FHEMOAM), in accordance with the Brazilian law, which complied with the Declaration of Helsinki. All the study participants signed an informed consent prior to enrolment.\nThis study was approved by the Research Ethics Committee of Fundação Hospitalar de Hematologia e Hemoterapia do Amazonas (FHEMOAM), in accordance with the Brazilian law, which complied with the Declaration of Helsinki. All the study participants signed an informed consent prior to enrolment.\n Patient information Individuals donating blood at FHEMOAM, Manaus, Amazonas, were recruited for the study between August 2001 and August 2003. A questionnaire was used to collect individual information (age, gender, marital status, educational level, first time or regular donor).\nIndividuals donating blood at FHEMOAM, Manaus, Amazonas, were recruited for the study between August 2001 and August 2003. A questionnaire was used to collect individual information (age, gender, marital status, educational level, first time or regular donor).\n Serological analysis Blood donors were screened for antibodies against HTLV-1/2 by using an enzyme-linked immunosorbent assay (ELISA-Murex HTLV 1, GE 80/81, + 2, Murex Diagnostics). Positive samples were then confirmed by Western blot (HTLV BLOT 2.4, Genelabs Diagnostics®, Singapore), following the manufacturer's instructions.\nBlood donors were screened for antibodies against HTLV-1/2 by using an enzyme-linked immunosorbent assay (ELISA-Murex HTLV 1, GE 80/81, + 2, Murex Diagnostics). Positive samples were then confirmed by Western blot (HTLV BLOT 2.4, Genelabs Diagnostics®, Singapore), following the manufacturer's instructions.\n Statistics Statistical analysis used the Chi-square or the Fisher exact test if the expected frequencies were less than five. A p-value of less than 0.05 was considered statistically significant. 
Analyses were performed using the Epi info 6 software.\nStatistical analysis used the Chi-square or the Fisher exact test if the expected frequencies were less than five. A p-value of less than 0.05 was considered statistically significant. Analyses were performed using the Epi info 6 software.", "This study was approved by the Research Ethics Committee of Fundação Hospitalar de Hematologia e Hemoterapia do Amazonas (FHEMOAM), in accordance with the Brazilian law, which complied with the Declaration of Helsinki. All the study participants signed an informed consent prior to enrolment.", "Individuals donating blood at FHEMOAM, Manaus, Amazonas, were recruited for the study between August 2001 and August 2003. A questionnaire was used to collect individual information (age, gender, marital status, educational level, first time or regular donor).", "Blood donors were screened for antibodies against HTLV-1/2 by using an enzyme-linked immunosorbent assay (ELISA-Murex HTLV 1, GE 80/81, + 2, Murex Diagnostics). Positive samples were then confirmed by Western blot (HTLV BLOT 2.4, Genelabs Diagnostics®, Singapore), following the manufacturer's instructions.", "Statistical analysis used the Chi-square or the Fisher exact test if the expected frequencies were less than five. A p-value of less than 0.05 was considered statistically significant. Analyses were performed using the Epi info 6 software.", "A total of 87,402 blood donors were screened by HTLV-1/2 ELISA between August 2001 and August 2003. A total of 116 individuals were tested positive with a seroprevalence of 0.13% in the primary screening (Figure 1). However, only 76/116 (65.5%) volunteered to participate in the confirmatory test. On retesting, only 41/76 individuals were found to be positive by HTLV-1/2 ELISA, positive samples were then tested by Western blot (WB) to confirm diagnoses and to identify the HTLV subtype. According to the WB assay, three samples were indeterminate; 14 samples were negative and 24 were positive for HTLV. Among the 24 positive samples, 16 were HTLV-1, five were HTLV-2 positive and three samples were HTLV positive, but untypable (Figure 1).\nDemographic and epidemiologic data of HTLVconfirmed samples are shown in Table 1. Average age of positive blood donors was 35.3 ± 11.5 years; 41.6% (n=10) of positive individuals were women. Moreover, 83.33% of positive individuals were first-time blood donors and 62.5% identified themselves as married. Among the HTLVconfirmed individuals, 16.6% (n=4) reported transfusion history and 54.2% described having had multiple sex partners along their entire lives (data not shown).", "This is the first cross-sectional study to demonstrate the presence of HTLV-1 and HTLV-2 among blood donors in Manaus, Amazonas. Previous studies have only detected HTLV-1 in blood donors from Manaus\n16\n\n,\n\n17\n. In this study, the overall seroprevalence of HTLV 1/2 was 0.13% (2001-2003). In 1993, HTLV-1 prevalence in 1,200 blood donors from Manaus was 0.08%\n17\n. In addition, Catalan-Soares et al.\n15\n reported a HTLV 1/2 seroprevalence of 0.53% between 1995 and 2000, whereas, prevalence of HTLV-1 was estimated to be 0.14% in first-time blood donors (2008 to 2009)\n16\n. On other hand, HTLV infection was absent in pregnant women (n=674)\n18\n and individuals with dermatological disease (n=1,200)\n16\n in Manaus. 
Our results are in accordance with the above studies, as we found a low HTLV-1/2 seroprevalence among blood donors in Manaus.\nIn addition, the seroprevalence of HTLV-1/2 in blood donors has been among the highest in other northern states of Brazil, such as Amapá (0.71%) and Pará (0.91%), compared to Amazonas [15], which might be explained by the phenomenon of clustering of HTLV infection in some populations [3,19].\nThe presence of a considerable number of HTLV-2-positive individuals in our study can be explained by the fact that a significant number of the HTLV-positive individuals were of indigenous descent. Several researchers have shown that HTLV-2 is predominant among Brazilian indigenous groups, with an area of high endemicity in the Amazon region [20,21]. Moreover, HTLV-2 infections in non-indigenous populations have been documented among blood donors in urban areas of the Amazon region of Brazil [22,23].\nFurthermore, in this study, the prevalence of HTLV infection was higher in married individuals (62.5%) than in single individuals (37.5%), which needs to be confirmed in a larger population. For transmission of HTLV by the sexual route, a higher frequency of exposure has been described, which may be a reason for the higher transmission rates among married individuals [7]. In addition, first-time blood donors had a higher prevalence of HTLV infection than regular blood donors, which supports the perception that regular blood donors are less risky, or safer, than first-time donors. A higher prevalence of HTLV was observed in women; however, we did not observe any statistically significant association between age or sex and HTLV-1/2 infection. Nevertheless, in most endemic areas, HTLV-1 prevalence has been shown to increase with age and to be higher in females [24]. The absence of a molecular test to identify and confirm the circulating strains of HTLV-1 and HTLV-2, due to the unavailability of samples, has been one of the major limitations of this study. Nevertheless, using serological tools, we conclusively demonstrated the circulation of HTLV-1 and HTLV-2 in blood donors in Amazonas.\nIn response to the epidemiological situation, one of the major recommendations by the Global Virus Network's Task Force on HTLV is an improved understanding of HTLV epidemiology in diverse populations, which will not only stimulate basic research in identifying disease biomarkers, but will also unravel mechanisms of viral infectivity, persistence, replication and pathogenesis, opening insights into novel treatments [25]. In this context, further studies are needed to understand the epidemiology of HTLV-1/2 in Amazonas, not only to estimate the disease burden, but also to create a mechanism for continued follow-up and to reduce or block intra-familial transmission." ]
[ "intro", "materials|methods", null, null, null, null, "results", "discussion" ]
[ "HTLV", "Blood donors", "HTLV subtyping", "HTLV1", "HTLV2", "T-lymphotropic virus type 1 and 2", "Epidemiology of HTLV infection", "Prevalence of HTLV" ]
INTRODUCTION: Human T-lymphotropic virus 1 and 2 (HTLV-1 and HTLV-2) are retroviruses discovered in the 1980s 1 , 2 . It is estimated that at least 10 million people worldwide are infected with HTLV-1, with endemic foci in Japan, the Caribbean, South America, and Central Africa 3 . HTLV transmission occurs primarily via the following three routes: 1) vertically from mother to child, predominantly through breastfeeding 4 , 5 ; 2) between sexual partners through unprotected intercourse 6 , 7 ; and 3) blood transfusion from a HTLV positive donor or sharing or re-use of needles and syringes to inject drugs 8 , 9 . A number of diseases have been associated with HTLV-1, including adult T-cell leukemia–lymphoma (ATLL), myelopathy/tropical spastic paraparesis (HAM/TSP) 10 - 13 , uveitis, infective dermatitis, and other inflammatory disorders 14 , although most virus carriers remain asymptomatic throughout their lives. Brazil is one of the largest endemic areas for HTLV-1 and -2 infections 3 . Seroprevalence rates of HTLV-1/2 in Brazilian blood donors range from 0.04% to 1.0% depending on the geographic region 15 . In Manaus capital of the Amazonas State, the prevalence of HTLV-1 in first-time blood donors was reported as 0.14% (2008-2009) 16 . However, few studies have tried to identify HTLV subtypes in blood donors and the prevalence of HTLV in this population. The aim of this study was to estimate the seroprevalence of HTLV and to identify the subtypes among blood donors in Manaus. MATERIALS AND METHODS: Ethical considerations This study was approved by the Research Ethics Committee of Fundação Hospitalar de Hematologia e Hemoterapia do Amazonas (FHEMOAM), in accordance with the Brazilian law, which complied with the Declaration of Helsinki. All the study participants signed an informed consent prior to enrolment. This study was approved by the Research Ethics Committee of Fundação Hospitalar de Hematologia e Hemoterapia do Amazonas (FHEMOAM), in accordance with the Brazilian law, which complied with the Declaration of Helsinki. All the study participants signed an informed consent prior to enrolment. Patient information Individuals donating blood at FHEMOAM, Manaus, Amazonas, were recruited for the study between August 2001 and August 2003. A questionnaire was used to collect individual information (age, gender, marital status, educational level, first time or regular donor). Individuals donating blood at FHEMOAM, Manaus, Amazonas, were recruited for the study between August 2001 and August 2003. A questionnaire was used to collect individual information (age, gender, marital status, educational level, first time or regular donor). Serological analysis Blood donors were screened for antibodies against HTLV-1/2 by using an enzyme-linked immunosorbent assay (ELISA-Murex HTLV 1, GE 80/81, + 2, Murex Diagnostics). Positive samples were then confirmed by Western blot (HTLV BLOT 2.4, Genelabs Diagnostics®, Singapore), following the manufacturer's instructions. Blood donors were screened for antibodies against HTLV-1/2 by using an enzyme-linked immunosorbent assay (ELISA-Murex HTLV 1, GE 80/81, + 2, Murex Diagnostics). Positive samples were then confirmed by Western blot (HTLV BLOT 2.4, Genelabs Diagnostics®, Singapore), following the manufacturer's instructions. Statistics Statistical analysis used the Chi-square or the Fisher exact test if the expected frequencies were less than five. A p-value of less than 0.05 was considered statistically significant. 
Analyses were performed using the Epi info 6 software. Statistical analysis used the Chi-square or the Fisher exact test if the expected frequencies were less than five. A p-value of less than 0.05 was considered statistically significant. Analyses were performed using the Epi info 6 software. Ethical considerations: This study was approved by the Research Ethics Committee of Fundação Hospitalar de Hematologia e Hemoterapia do Amazonas (FHEMOAM), in accordance with the Brazilian law, which complied with the Declaration of Helsinki. All the study participants signed an informed consent prior to enrolment. Patient information: Individuals donating blood at FHEMOAM, Manaus, Amazonas, were recruited for the study between August 2001 and August 2003. A questionnaire was used to collect individual information (age, gender, marital status, educational level, first time or regular donor). Serological analysis: Blood donors were screened for antibodies against HTLV-1/2 by using an enzyme-linked immunosorbent assay (ELISA-Murex HTLV 1, GE 80/81, + 2, Murex Diagnostics). Positive samples were then confirmed by Western blot (HTLV BLOT 2.4, Genelabs Diagnostics®, Singapore), following the manufacturer's instructions. Statistics: Statistical analysis used the Chi-square or the Fisher exact test if the expected frequencies were less than five. A p-value of less than 0.05 was considered statistically significant. Analyses were performed using the Epi info 6 software. RESULTS: A total of 87,402 blood donors were screened by HTLV-1/2 ELISA between August 2001 and August 2003. A total of 116 individuals were tested positive with a seroprevalence of 0.13% in the primary screening (Figure 1). However, only 76/116 (65.5%) volunteered to participate in the confirmatory test. On retesting, only 41/76 individuals were found to be positive by HTLV-1/2 ELISA, positive samples were then tested by Western blot (WB) to confirm diagnoses and to identify the HTLV subtype. According to the WB assay, three samples were indeterminate; 14 samples were negative and 24 were positive for HTLV. Among the 24 positive samples, 16 were HTLV-1, five were HTLV-2 positive and three samples were HTLV positive, but untypable (Figure 1). Demographic and epidemiologic data of HTLVconfirmed samples are shown in Table 1. Average age of positive blood donors was 35.3 ± 11.5 years; 41.6% (n=10) of positive individuals were women. Moreover, 83.33% of positive individuals were first-time blood donors and 62.5% identified themselves as married. Among the HTLVconfirmed individuals, 16.6% (n=4) reported transfusion history and 54.2% described having had multiple sex partners along their entire lives (data not shown). DISCUSSION: This is the first cross-sectional study to demonstrate the presence of HTLV-1 and HTLV-2 among blood donors in Manaus, Amazonas. Previous studies have only detected HTLV-1 in blood donors from Manaus 16 , 17 . In this study, the overall seroprevalence of HTLV 1/2 was 0.13% (2001-2003). In 1993, HTLV-1 prevalence in 1,200 blood donors from Manaus was 0.08% 17 . In addition, Catalan-Soares et al. 15 reported a HTLV 1/2 seroprevalence of 0.53% between 1995 and 2000, whereas, prevalence of HTLV-1 was estimated to be 0.14% in first-time blood donors (2008 to 2009) 16 . On other hand, HTLV infection was absent in pregnant women (n=674) 18 and individuals with dermatological disease (n=1,200) 16 in Manaus. Our results are in accordance with the above studies as we found a low HTLV-1/2 seroprevalence among blood donors in Manaus. 
Besides, seroprevalence of HTLV-1/2 in blood donors has been among the highest in other Northern States of Brazil like Amapá (0.71%) and Pará (0.91%) compared to Amazonas 15 , which might be explained by the phenomenon of clustering of HTLV infection in some populations 3 , 19 . Presence of a considerable number of HTLV-2 positive individuals in our study can be explained by the fact that a significant number of HTLV positive individuals were indigenous descendants. Several researchers have shown that HTLV-2 is predominant among Brazilian indigenous groups, with an area of high endemicity in the Amazon region 20 , 21 . Moreover, HTLV-2 infections in non-indigenous populations have been documented among blood donors in urban areas of the Amazon region of Brazil 22 , 23 . Furthermore, in this study, the prevalence of HTLV infection in married individuals was higher (62.5%) compared to single individuals (37.5%), which needs to be confirmed in a larger population. For transmission of HTLV by sexual route, a higher frequency of exposure has been described, which may be a reason for the higher transmission rates between married individuals 7 . In addition, first-time blood donors had a higher prevalence of HTLV infection compared to regular blood donors. This confirms that regular blood donors are perceived to be less risky or safer than first-time blood donors. A higher prevalence of HTLV was observed in women, however, we did not observe any statistically significant association between age and sex in HTLV1/2 infection. Nevertheless, in most endemic areas, HTLV-1 prevalence has been shown to increase with age and to be higher in females 24 . Besides, absence of a molecular test to identify and confirm the circulating strains of HTLV-1 and HTLV-2 has been one of the greater limitations of this study due to unavailability of samples. Nevertheless, using serological tools, we conclusively demonstrated circulation of HTLV-1 and HTLV-2 in blood donors in Amazonas. In response to the epidemiological situation, one of the major recommendations by the Global Virus Network's Task Force on HTLV is an improved understanding of HTLV epidemiology in diverse populations, which not only will stimulate basic research in identifying disease biomarkers, but will also unravel mechanisms of viral infectivity, persistence, replication and pathogenesis to open insights into novel treatments 25 . In this context, further studies are needed to understand the epidemiology of HTLV-1/2 in Amazonas not only to estimate the disease burden, but also to create a mechanism for continued followup and to reduce or block intra-familial transmission.
Background: Human T-lymphotropic virus type 1 and 2 (HTLV-1/2) is endemic in Brazil, but few studies have investigated the seroprevalence of HTLV and its subtypes among blood donors in the capital city Manaus, Amazonas State, Brazil. Methods: Blood donors (2001-2003) were screened for HTLV-1/2 antibodies by ELISA. Positive results were confirmed and subtyped by Western blot assays. Prevalence rates were calculated and compared with demographic data. Results: Among the 87,402 individuals screened, 116 (0.13%) were seropositive for HTLV-1/2. A second sample (76/116) was collected and retested by HTLV-1/2 ELISA, of which only 41/76 were positive. Western blot confirmed HTLV infection in 24/41 retested blood donors [HTLV-1 (n=16), HTLV-2 (n=5) and HTLV-untypable (n=3)]. Conclusions: HTLV-1 and HTLV-2 are prevalent among blood donors in Manaus. However, additional studies are needed to comprehend the epidemiology of HTLV-1/2 in Amazonas not only to understand the pathophysiology of the disease providing adequate medical assistance, but also to reduce or block virus transmission.
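The screening figures in the abstract (116 seropositive among 87,402 donors, reported as 0.13%) amount to a simple proportion. The minimal sketch below reproduces that arithmetic and adds an approximate 95% Wald confidence interval; the interval is an illustration and is not reported in the article.

```python
# Minimal sketch: reproduce the reported screening seroprevalence (116/87,402 ~ 0.13%)
# and attach an approximate 95% Wald confidence interval (not reported in the article).
import math

positive, screened = 116, 87_402
p = positive / screened                      # proportion seropositive on ELISA screening
se = math.sqrt(p * (1 - p) / screened)       # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se        # normal-approximation 95% CI

print(f"Seroprevalence: {p:.4%}")            # ~0.13%
print(f"Approx. 95% CI: {lo:.4%} - {hi:.4%}")
print(f"Per 100,000 donors: {p * 100_000:.1f}")
```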
null
null
1,883
208
[ 49, 48, 59, 44 ]
8
[ "htlv", "blood", "donors", "blood donors", "positive", "study", "individuals", "amazonas", "manaus", "samples" ]
[ "htlv infection compared", "htlv infection populations", "epidemiology htlv", "htlv1 infection endemic", "worldwide infected htlv" ]
null
null
null
[CONTENT] HTLV | Blood donors | HTLV subtyping | HTLV1 | HTLV2 | T-lymphotropic virus type 1 and 2 | Epidemiology of HTLV infection | Prevalence of HTLV [SUMMARY]
null
[CONTENT] HTLV | Blood donors | HTLV subtyping | HTLV1 | HTLV2 | T-lymphotropic virus type 1 and 2 | Epidemiology of HTLV infection | Prevalence of HTLV [SUMMARY]
null
[CONTENT] HTLV | Blood donors | HTLV subtyping | HTLV1 | HTLV2 | T-lymphotropic virus type 1 and 2 | Epidemiology of HTLV infection | Prevalence of HTLV [SUMMARY]
null
[CONTENT] Adolescent | Adult | Blood Donors | Blotting, Western | Brazil | Cross-Sectional Studies | Enzyme-Linked Immunosorbent Assay | Female | HTLV-I Infections | HTLV-II Infections | Human T-lymphotropic virus 1 | Human T-lymphotropic virus 2 | Humans | Male | Middle Aged | Prevalence | Seroepidemiologic Studies | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Blood Donors | Blotting, Western | Brazil | Cross-Sectional Studies | Enzyme-Linked Immunosorbent Assay | Female | HTLV-I Infections | HTLV-II Infections | Human T-lymphotropic virus 1 | Human T-lymphotropic virus 2 | Humans | Male | Middle Aged | Prevalence | Seroepidemiologic Studies | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Blood Donors | Blotting, Western | Brazil | Cross-Sectional Studies | Enzyme-Linked Immunosorbent Assay | Female | HTLV-I Infections | HTLV-II Infections | Human T-lymphotropic virus 1 | Human T-lymphotropic virus 2 | Humans | Male | Middle Aged | Prevalence | Seroepidemiologic Studies | Young Adult [SUMMARY]
null
[CONTENT] htlv infection compared | htlv infection populations | epidemiology htlv | htlv1 infection endemic | worldwide infected htlv [SUMMARY]
null
[CONTENT] htlv infection compared | htlv infection populations | epidemiology htlv | htlv1 infection endemic | worldwide infected htlv [SUMMARY]
null
[CONTENT] htlv infection compared | htlv infection populations | epidemiology htlv | htlv1 infection endemic | worldwide infected htlv [SUMMARY]
null
[CONTENT] htlv | blood | donors | blood donors | positive | study | individuals | amazonas | manaus | samples [SUMMARY]
null
[CONTENT] htlv | blood | donors | blood donors | positive | study | individuals | amazonas | manaus | samples [SUMMARY]
null
[CONTENT] htlv | blood | donors | blood donors | positive | study | individuals | amazonas | manaus | samples [SUMMARY]
null
[CONTENT] htlv | blood | donors | blood donors | subtypes | subtypes blood | subtypes blood donors | prevalence htlv | virus | 10 [SUMMARY]
null
[CONTENT] positive | htlv | samples | individuals | positive samples | 41 | figure | wb | total | 76 [SUMMARY]
null
[CONTENT] htlv | blood | blood donors | donors | study | positive | individuals | august | samples | diagnostics [SUMMARY]
null
[CONTENT] 1 | Brazil | HTLV | Manaus | Amazonas State | Brazil [SUMMARY]
null
[CONTENT] 87,402 | 116 | 0.13% ||| second | ELISA | only 41/76 ||| HTLV | 24/41 [SUMMARY]
null
[CONTENT] 1 | Brazil | HTLV | Manaus | Amazonas State | Brazil ||| 2001-2003 | ELISA ||| ||| ||| ||| 87,402 | 116 | 0.13% ||| second | ELISA | only 41/76 ||| HTLV | 24/41 ||| Manaus ||| Amazonas [SUMMARY]
null
Effect of combined colistin and meropenem against meropenem-resistant
34169905
Meropenem-resistant Acinetobacter baumannii and Pseudomonas aeruginosa are the two most common nosocomial pathogens causing ventilator-associated pneumonia. To combat this resistance, different combinations of antibiotics have been evaluated for their efficacy in laboratories as well as in clinical situations.
BACKGROUND
Fifty meropenem-resistant isolates of A. baumannii (n = 25) and P. aeruginosa (n = 25) from endotracheal aspirates were studied. The MIC of colistin and meropenem was found using the microbroth dilution method. The fractional inhibitory concentration was calculated for the combination of antibiotics by checkerboard assay and the antibiotic interactions were assessed. Fisher's exact test was carried out for statistical comparison of categorical variables.
MATERIALS AND METHODS
A synergistic effect between colistin and meropenem was observed in 18/25 (72%) and 6/25 (24%) isolates of Acinetobacter baumannii and P. aeruginosa, respectively, with fractional inhibitory concentration indices of ≤0.5. None of the tested isolates exhibited antagonism.
RESULTS
Our results showed that combinations of colistin and meropenem are associated with improvement in minimum inhibitory concentration and may be a promising strategy in treating meropenem-resistant A. baumannii respiratory tract infections.
CONCLUSION
[ "Acinetobacter baumannii", "Anti-Bacterial Agents", "Colistin", "Cross-Sectional Studies", "Drug Combinations", "Drug Resistance, Multiple, Bacterial", "Drug Synergism", "Humans", "Meropenem", "Microbial Sensitivity Tests", "Pseudomonas aeruginosa" ]
8262418
Introduction
In recent times, testing for antimicrobial interactions has become essential as there is an increase in the proportion of drug-resistant microorganisms and due to restricted choices for the management of the infections caused by those organisms.[1] This is particularly obvious when the infections are caused by nonfermenters.[23] Numerous combinations of antibiotics have been assessed to determine synergistic activity for various organisms. Favorable outcome was achieved for combination of different antibiotics with colistin such as meropenem, fluoroquinolones, and rifampicin.[345] However, a study conducted by Soudeiha et al. in 2017 showed that there was only additive effect and there was no synergism when evaluating combination of colistin and meropenem in Acinetobacter baumannii.[6] Although combination of antimicrobial therapy is known to broaden the spectrum and increase the bactericidal activity, its use still remains debatable in certain strains such as Pseudomonas aeruginosa.[7] A meta-analysis conducted by Paul et al. found that there was no change in the mortality rate when combination therapy was used over monotherapy.[8] Of the different methods available for testing synergistic activity, time-kill assay (TKA) gives consistent and reliable results, however, as this method is too cumbersome, checkerboard assay (CB) and E-test have been extensively followed.[2] The most common organisms causing ventilator-associated pneumonia (VAP) in our center are A. baumannii and P. aeruginosa. Since the rate of meropenem resistance is increasing in these two organisms, we aimed to evaluate the effect of combined colistin and meropenem against meropenem-resistant isolates of A. baumannii and P. aeruginosa by the checkerboard method.
null
null
Results
All isolates were resistant to meropenem as per the MIC results, while 6 (24%) isolates of A. baumannii and 5 (20%) isolates of P. aeruginosa were resistant to colistin. The FIC indexes were calculated and their interactions are depicted in Tables 1 and 2. Of the 25 A. baumannii isolates which were subjected to synergy testing, 18 (72%) isolates showed synergism, while 7 (28%) isolates demonstrated additive effect and out of the 25 P. aeruginosa isolates, only 6 (24%) isolates showed synergism, 4 (16%) showed indifference, and the rest 15 (60%) isolates showed an additive effect. Neither A. baumannii nor P. aeruginosa demonstrated antagonism with this combination [Tables 1,2 and Figure 1]. Synergy was demonstrated in all six colistin-resistant isolates of A. baumannii (100%), whereas it was demonstrated in only 3 (3/5) colistin-resistant strains of P. aeruginosa (60%). When synergism was compared between colistin resistant and colistin susceptible isolates of A. baumannii (P = 0.137) and P. aeruginosa (P = 0.115), it was observed that most of the resistant isolates showed synergism, however, this difference was not statistically significant. This could be due to the fact that our study was underpowered to pick a statistical difference between them (back calculated power = 41.76%) due to inadequate sample size. Effect of combined colistin and meropenem against Acinetobacter baumannii isolates (n=25) MIC=Minimum inhibitory concentration, FIC=Fractional inhibitory concentration Effect of combined colistin and meropenem against Pseudomonas aeruginosa isolates (n=25) MIC=Minimum inhibitory concentration, FIC=Fractional inhibitory concentration Results of in vitro synergy testing using checkerboard method for Acinetobacter baumannii and Pseudomonas aeruginosa
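The comparison of synergism between colistin-resistant and colistin-susceptible A. baumannii reported above (P = 0.137) can be reconstructed as a 2x2 table from the stated counts: all 6 colistin-resistant isolates showed synergy, and 12 of the remaining 19 susceptible isolates did (18 of 25 overall). The sketch below runs Fisher's exact test on that inferred table; the table is derived from the text rather than copied from a published table, and software options may shift the p-value slightly.

```python
# Sketch: Fisher's exact test on the 2x2 table implied by the reported A. baumannii
# counts (6/6 colistin-resistant isolates synergistic vs. 12/19 colistin-susceptible).
from scipy.stats import fisher_exact

#                 synergy  no synergy
table = [[ 6,  0],   # colistin-resistant A. baumannii (n=6)
         [12,  7]]   # colistin-susceptible A. baumannii (n=19)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"two-sided Fisher's exact p = {p_value:.3f}")  # close to the reported P = 0.137
```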
Conclusion
In the current study using the checkerboard method, when meropenem was combined with colistin, we had better synergistic activity against multidrug-resistant (MDR) A. baumannii isolates when compared to MDR P. aeruginosa isolates. However, more clinical trials are required to ascertain their efficacy and to explore its therapeutic potential, as in vivo settings cannot be entirely simulated in vitro. Limitations A single synergy testing method was used in our study (checkerboard method) and second, our sample size was small and the underlying mechanisms of resistance among the isolates were not determined. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
[ "Statistical analysis", "Limitations", "Financial support and sponsorship" ]
[ "Epidata v 3.01 software (Epidata association, Odense Denmark, 1999) was used for entering the data and the analysis was done using SPSS version 19.0( Armonk, NY:IBM Corp). Statistical analysis of categorical variables was carried out using Fisher's exact test and P < 0.05 was taken as significant.", "A single synergy testing method was used in our study (checkerboard method) and second, our sample size was small and the underlying mechanisms of resistance among the isolates were not determined.", "Nil." ]
[ null, null, null ]
[ "Introduction", "Materials and Methods", "Statistical analysis", "Results", "Discussion", "Conclusion", "Limitations", "Financial support and sponsorship", "Conflicts of interest" ]
[ "In recent times, testing for antimicrobial interactions has become very essential as there is an increase in the proportion of drug-resistant microorganisms and due to restricted choices for the management of the infections caused by those organisms.[1] This is particularly obvious when the infections are caused by nonfermenters.[23] Numerous combinations of antibiotics have been assessed to determine synergistic activity for various organisms. Favorable outcome was achieved for combination of different antibiotics with colistin such as meropenem, fluoroquinolones, and rifampicin.[345] However, a study conducted by Soudeiha et al. in 2017 showed that there was only additive effect and there was no synergism when evaluating combination of colistin and meropenem in Acinetobacter baumanni.[6] Although combination of antimicrobial therapy is known to broaden the spectrum and increase the bactericidal activity, its use still remains debatable in certain strains such as Pseudomonas aeruginosa.[7] A meta-analysis conducted by Paul et al. found that there was no change in the mortality rate when combination therapy was used over monotherapy.[8]\nOf the different methods available for testing synergistic activity, time-kill assay (TKA) gives consistent and reliable results, however, as this method is too cumbersome, checkerboard assay (CB) and E-test has been extensively followed.[2] The most common organisms causing ventilator-associated pneumonia (VAP) in our center are A. baumannii and P. aeruginosa. Since the rate of meropenem resistance is increasing in these two organisms, we aimed to evaluate the effect of combined colistin and meropenem against meropenem-resistant isolates of A. baumannii and P.eruginosa by checkerboard method.", "The approval for the study was obtained from the Institute Human ethics committee (JIP/IEC/2018/0142) and was carried out in Department of Microbiology. Study isolates comprised meropenem-resistant A. baumanni and P. aeruginosa (25 each) from endotracheal aspirates of patients with suspected VAP. Broth microdilution was used to determine the MIC for meropenem and colistin using pure powders of the drugs (Sigma-Aldrich, India). Serial doubling dilution was performed for both the antibiotics to cover 4–5 dilutions below and above the MIC. Four times the desired concentration of these antibiotics was prepared, as antibiotic gets diluted by four times in the wells of the microliter plate. Checkerboard arrays were created by dispensing 25 μl of these titrations in the appropriate wells of the microliter plate. Meropenem was dispersed in the columns (1–11) and colistin was dispensed in rows (A-G). The final row (row H) and final column (column 12) contained only meropenem and colistin, respectively.\nTo prepare the inoculum, two to three isolated colonies from overnight incubated plates were taken and suspended in cation-adjusted Mueller–Hinton broth to match the turbidity of 0.5 McFarland (108 CFU/mL). This was followed by dilution to reach the final concentration of 5 × 105 CFU/mL. Fifty microliter of this inoculum was dispensed in all the wells except the drug control wells. Then, the microliter plates were incubated at 37°C for 18–24 h. 
Readings were taken following incubation and fractional inhibitory concentration (FIC) was calculated as follows:\nFIC = MIC of colistin in combination /MIC of colistin alone + MIC of meropenem in combination / MIC of meropenem alone\nAn FIC ≤0.5 indicated synergy, 0.5 to ≤2 indicated additive effect, and combination was said to be antagonistic if FIC >4.[6]\nStatistical analysis Epidata v 3.01 software (Epidata association, Odense Denmark, 1999) was used for entering the data and the analysis was done using SPSS version 19.0( Armonk, NY:IBM Corp). Statistical analysis of categorical variables was carried out using Fisher's exact test and P < 0.05 was taken as significant.\nEpidata v 3.01 software (Epidata association, Odense Denmark, 1999) was used for entering the data and the analysis was done using SPSS version 19.0( Armonk, NY:IBM Corp). Statistical analysis of categorical variables was carried out using Fisher's exact test and P < 0.05 was taken as significant.", "Epidata v 3.01 software (Epidata association, Odense Denmark, 1999) was used for entering the data and the analysis was done using SPSS version 19.0( Armonk, NY:IBM Corp). Statistical analysis of categorical variables was carried out using Fisher's exact test and P < 0.05 was taken as significant.", "All isolates were resistant to meropenem as per the MIC results, while 6 (24%) isolates of A. baumannii and 5 (20%) isolates of P. aeruginosa were resistant to colistin. The FIC indexes were calculated and their interactions are depicted in Tables 1 and 2. Of the 25 A. baumannii isolates which were subjected to synergy testing, 18 (72%) isolates showed synergism, while 7 (28%) isolates demonstrated additive effect and out of the 25 P. aeruginosa isolates, only 6 (24%) isolates showed synergism, 4 (16%) showed indifference, and the rest 15 (60%) isolates showed an additive effect. Neither A. baumannii nor P. aeruginosa demonstrated antagonism with this combination [Tables 1,2 and Figure 1]. Synergy was demonstrated in all six colistin-resistant isolates of A. baumannii (100%), whereas it was demonstrated in only 3 (3/5) colistin-resistant strains of P. aeruginosa (60%). When synergism was compared between colistin resistant and colistin susceptible isolates of A. baumannii (P = 0.137) and P. aeruginosa (P = 0.115), it was observed that most of the resistant isolates showed synergism, however, this difference was not statistically significant. This could be due to the fact that our study was underpowered to pick a statistical difference between them (back calculated power = 41.76%) due to inadequate sample size.\nEffect of combined colistin and meropenem against Acinetobacter baumannii isolates (n=25)\nMIC=Minimum inhibitory concentration, FIC=Fractional inhibitory concentration\nEffect of combined colistin and meropenem against Pseudomonas aeruginosa isolates (n=25)\nMIC=Minimum inhibitory concentration, FIC=Fractional inhibitory concentration\nResults of in vitro synergy testing using checkerboard method for Acinetobacter baumannii and Pseudomonas aeruginosa", "A. baumannii and P. 
aeruginosa are the emerging nosocomial pathogens, which are now becoming resistant to commonly used antibiotics such as aminoglycosides, beta-lactams, beta-lactam + beta-lactamase inhibitors in combination and meropenem, so colistin is considered as the last line treatment option.[910] Of note, it has been known that biofilm formation accounts for approximately 80% of the human bacterial infection, and meropenem is said to have a poor anti-biofilm activity when administered alone.[1112] Colistin, a polypeptide antibiotic, has a wide range of activity against Gram-negative organisms and acts mainly by disruption of lipopolysaccharide layer by interacting with the outer membrane of the Gram-negative bacteria. When this drug is given in combination with meropenem, synergism is thought to occur by favoring better access of meropenem to its target site, thereby leading to cell death. In addition, anti-biofilm activity of the combination may also contribute to synergism. A review and meta-analysis done by Jiang et al. showed that in time-kill assay (TKD), a lower synergy rate was shown by colistin (polymyxin E) when compared to polymyxin B, whereas an opposite finding was seen in the checkerboard synergy method.[13]\nReview of literature revealed that different studies have quoted different FIC interpretative results, for example, Liu et al. considered FIC indices (FICI) ≤0.5 as synergy, 0.5 < FICI ≤ 4 as additive and indifference, and FICI >4 as antagonism; van Belkum et al. considered FICI ≤0.5 to be synergistic, FICI 0.5 to ≤1.0 to be additive, FICI 1.0 to ≤2 as indifferent, and, finally FICI ≥3 as antagonistic, whereas Le Minh et al. considered as follows: synergistic when FICI ≤0.5, indifferent 0.5 < FICI <4, and antagonistic when FICI ≥4.[141516] However, we followed the interpretation of Soudeiha et al. who had done a similar study in 2017 by using the same combination of antibiotics (colistin and meropenem) on the same organism (A. baumanni).[6]\nIn our study, we found that synergistic activity was not influenced by the MIC of meropenem (i.e., both low and high meropenem MIC showed similar activity), whereas a study conducted by Fan et al. in murine thigh infection model showed that strains with low meropenem MIC (≤32 mg/L) exhibited more synergistic activity than strains having higher MIC (≥64 mg/L).[17]\nOut of the 25 A. baumannii isolates tested, 18 (72%) isolates showed synergy (FIC ≤0.5) and 7 (28%) showed additive effect (0.5 < FIC ≤2). Our findings were in agreement with those that Le Minh et al. who found that 68% of A. baumannii isolates showed synergistic activity between meropenem and colistin and they also noted that the synergistic activity was more in carbapenem-resistant A. baumannii isolates than in carbapenem-susceptible isolates.[14] Similar studies by Liu et al.[15] and Yavaş et al.[18] showed that there existed 100% synergy among A. baumannii isolates when combining colistin with meropenem. In contrast to our finding, a recent study by Kheshti et al. showed that there was only 10% synergy when combining colistin and meropenem among A. baumannii isolates.[19]\nOur study results showed that for P. aeruginosa (n = 25) when combining colistin and meropenem using checkerboard method, there was a high rate of additive effect – 60% (15/25) and synergistic effect was seen in only 24% (6/25) and 16% (4/25) of isolates showed indifference. Our study supports the finding of Daoud et al. 
who observed that 63.6% (7/11) had additive effect and 27.2% (3/11) had synergy when combining colistin and meropenem using the checkerboard method for P. aeruginosa.[20] In contrast, Ramadan et al. found that 63.6% showed synergistic effect and 36.4% showed the additive effect.[21] No antagonistic effect was seen in the current study both for A. baumannii and P. aeruginosa.\nIn our study, 76% of the A. baumannni isolates and 80% of P. aeruginosa isolates were susceptible to colistin by microbroth dilution method. Even though monotherapy rather than in combination would decrease the development of adverse side effects caused by the use of additional antibiotics, it is not recommended with colistin. A study by Li et al., Lee et al., Cetin et al., and Bergen et al. showed that colistin when administrated alone leads to the development of heteroresistant Acinetobacter and Pseudomonas strains and can even cause colistin resistance among other isolates and can lead to proliferation of organisms which are innately resistant to colistin leading to secondary bacterial infection leaving the clinicians with no other treatment options.[22232425]\nThe rationale behind doing synergy testing is to find out whether the combination of antibiotics can improve the clinical outcomes in patients for whom in vitro synergy testing of antibiotics showed synergism to widen the empiric therapy and to delay the resistance development during antimicrobial treatment. A recent meta-analysis by Vardakas et al. showed that synergy-guided antibiotic combination therapy led to improved clinical outcome and lower mortality.[26] However, Nutman et al. found no significant improvement in outcomes in patients infected with organisms for which in vitro synergy testing showed synergism, they also found that patients infected with isolates that showed antagonism, in vitro synergy testing did not show worse outcome when a combination of antibiotics was tried.[27]\nThere are a few reasons for the discrepancy of experimental results with in vivo response. First, in vitro synergy testing does not consider the host immune system and the pathogen interaction. Next, the antimicrobial drug concentration achieved at the infection site varies from those needed for synergism in vitro, and although the desired synergistic concentration are achieved in vivo, the effect may not last.[2829]\nAlthough different methods are available for testing synergism, the checkerboard synergy method has the advantage of manipulating the antibiotic concentration used, unlike E-test strip where fixed concentrations of antibiotics are available. Time-kill test gives information on the rapidity of synergistic activity, yet it is unreasonable when testing many isolates, as it is tedious and consumes a lot of time. E-test is the least complex method, however, it is less standardized and constrained by fixed concentration of antibiotic on the strips.[2] Various studies have shown wide-ranging concordance of 33%–100% between CB and TKA, with lower rates of synergism by the checkerboard method.[28] A study by Ni et al. showed that increased rates of synergism are seen with TKD when compared to checkerboard method and E-test; therefore, standardization of the various existing methods for evaluating synergy has to be done to address these limitations.[30]", "In the current study using the checkerboard method, when meropenem was combined with colistin, we had better synergistic activity against multidrug-resistant (MDR) A. baumannii isolates when compared to MDR P. 
aeruginosa isolates. However, more clinical trials are required to ascertain their efficacy and to explore its therapeutic potential, as in vivo settings cannot be entirely simulated in vitro.\nLimitations A single synergy testing method was used in our study (checkerboard method) and second, our sample size was small and the underlying mechanisms of resistance among the isolates were not determined.\nFinancial support and sponsorship Nil.\nConflicts of interest There are no conflicts of interest.", "A single synergy testing method was used in our study (checkerboard method) and second, our sample size was small and the underlying mechanisms of resistance among the isolates were not determined.", "Nil.", "There are no conflicts of interest." ]
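The methods text above defines the fractional inhibitory concentration index as the sum of each drug's MIC in combination divided by its MIC alone, with FICI ≤0.5 read as synergy, 0.5 to ≤2 as additive, and >4 as antagonism (following Soudeiha et al.). A minimal sketch of that calculation follows; the MIC values in the example call are placeholders, and the "indifferent" band between 2 and 4 is inferred from the results rather than spelled out in the methods.

```python
# Minimal sketch of the fractional inhibitory concentration index (FICI) calculation
# described in the methods, with the cut-offs used in this study (after Soudeiha et al.):
# FICI <= 0.5 synergy, 0.5 < FICI <= 2 additive, FICI > 4 antagonism.
def fici(mic_colistin_combo, mic_colistin_alone, mic_mero_combo, mic_mero_alone):
    return (mic_colistin_combo / mic_colistin_alone
            + mic_mero_combo / mic_mero_alone)

def interpret(index):
    if index <= 0.5:
        return "synergy"
    if index <= 2:
        return "additive"
    if index <= 4:
        return "indifferent"   # band between additive and antagonism (assumed, not stated)
    return "antagonism"

# Placeholder MICs in mg/L (illustration only, not study data):
index = fici(mic_colistin_combo=0.25, mic_colistin_alone=2,
             mic_mero_combo=8, mic_mero_alone=64)
print(round(index, 3), interpret(index))   # 0.25 -> synergy
```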
[ "intro", "materials|methods", null, "results", "discussion", "conclusions", null, null, "COI-statement" ]
[ "Checkerboard assay", "Clinical Laboratory Standards Institute", "fractional inhibitory concentration indices", "minimum inhibitory concentration", "ventilator-associated pneumonia" ]
Introduction: In recent times, testing for antimicrobial interactions has become very essential as there is an increase in the proportion of drug-resistant microorganisms and due to restricted choices for the management of the infections caused by those organisms.[1] This is particularly obvious when the infections are caused by nonfermenters.[23] Numerous combinations of antibiotics have been assessed to determine synergistic activity for various organisms. Favorable outcome was achieved for combination of different antibiotics with colistin such as meropenem, fluoroquinolones, and rifampicin.[345] However, a study conducted by Soudeiha et al. in 2017 showed that there was only additive effect and there was no synergism when evaluating combination of colistin and meropenem in Acinetobacter baumanni.[6] Although combination of antimicrobial therapy is known to broaden the spectrum and increase the bactericidal activity, its use still remains debatable in certain strains such as Pseudomonas aeruginosa.[7] A meta-analysis conducted by Paul et al. found that there was no change in the mortality rate when combination therapy was used over monotherapy.[8] Of the different methods available for testing synergistic activity, time-kill assay (TKA) gives consistent and reliable results, however, as this method is too cumbersome, checkerboard assay (CB) and E-test has been extensively followed.[2] The most common organisms causing ventilator-associated pneumonia (VAP) in our center are A. baumannii and P. aeruginosa. Since the rate of meropenem resistance is increasing in these two organisms, we aimed to evaluate the effect of combined colistin and meropenem against meropenem-resistant isolates of A. baumannii and P.eruginosa by checkerboard method. Materials and Methods: The approval for the study was obtained from the Institute Human ethics committee (JIP/IEC/2018/0142) and was carried out in Department of Microbiology. Study isolates comprised meropenem-resistant A. baumanni and P. aeruginosa (25 each) from endotracheal aspirates of patients with suspected VAP. Broth microdilution was used to determine the MIC for meropenem and colistin using pure powders of the drugs (Sigma-Aldrich, India). Serial doubling dilution was performed for both the antibiotics to cover 4–5 dilutions below and above the MIC. Four times the desired concentration of these antibiotics was prepared, as antibiotic gets diluted by four times in the wells of the microliter plate. Checkerboard arrays were created by dispensing 25 μl of these titrations in the appropriate wells of the microliter plate. Meropenem was dispersed in the columns (1–11) and colistin was dispensed in rows (A-G). The final row (row H) and final column (column 12) contained only meropenem and colistin, respectively. To prepare the inoculum, two to three isolated colonies from overnight incubated plates were taken and suspended in cation-adjusted Mueller–Hinton broth to match the turbidity of 0.5 McFarland (108 CFU/mL). This was followed by dilution to reach the final concentration of 5 × 105 CFU/mL. Fifty microliter of this inoculum was dispensed in all the wells except the drug control wells. Then, the microliter plates were incubated at 37°C for 18–24 h. 
Readings were taken following incubation and fractional inhibitory concentration (FIC) was calculated as follows: FIC = MIC of colistin in combination /MIC of colistin alone + MIC of meropenem in combination / MIC of meropenem alone An FIC ≤0.5 indicated synergy, 0.5 to ≤2 indicated additive effect, and combination was said to be antagonistic if FIC >4.[6] Statistical analysis Epidata v 3.01 software (Epidata association, Odense Denmark, 1999) was used for entering the data and the analysis was done using SPSS version 19.0( Armonk, NY:IBM Corp). Statistical analysis of categorical variables was carried out using Fisher's exact test and P < 0.05 was taken as significant. Epidata v 3.01 software (Epidata association, Odense Denmark, 1999) was used for entering the data and the analysis was done using SPSS version 19.0( Armonk, NY:IBM Corp). Statistical analysis of categorical variables was carried out using Fisher's exact test and P < 0.05 was taken as significant. Statistical analysis: Epidata v 3.01 software (Epidata association, Odense Denmark, 1999) was used for entering the data and the analysis was done using SPSS version 19.0( Armonk, NY:IBM Corp). Statistical analysis of categorical variables was carried out using Fisher's exact test and P < 0.05 was taken as significant. Results: All isolates were resistant to meropenem as per the MIC results, while 6 (24%) isolates of A. baumannii and 5 (20%) isolates of P. aeruginosa were resistant to colistin. The FIC indexes were calculated and their interactions are depicted in Tables 1 and 2. Of the 25 A. baumannii isolates which were subjected to synergy testing, 18 (72%) isolates showed synergism, while 7 (28%) isolates demonstrated additive effect and out of the 25 P. aeruginosa isolates, only 6 (24%) isolates showed synergism, 4 (16%) showed indifference, and the rest 15 (60%) isolates showed an additive effect. Neither A. baumannii nor P. aeruginosa demonstrated antagonism with this combination [Tables 1,2 and Figure 1]. Synergy was demonstrated in all six colistin-resistant isolates of A. baumannii (100%), whereas it was demonstrated in only 3 (3/5) colistin-resistant strains of P. aeruginosa (60%). When synergism was compared between colistin resistant and colistin susceptible isolates of A. baumannii (P = 0.137) and P. aeruginosa (P = 0.115), it was observed that most of the resistant isolates showed synergism, however, this difference was not statistically significant. This could be due to the fact that our study was underpowered to pick a statistical difference between them (back calculated power = 41.76%) due to inadequate sample size. Effect of combined colistin and meropenem against Acinetobacter baumannii isolates (n=25) MIC=Minimum inhibitory concentration, FIC=Fractional inhibitory concentration Effect of combined colistin and meropenem against Pseudomonas aeruginosa isolates (n=25) MIC=Minimum inhibitory concentration, FIC=Fractional inhibitory concentration Results of in vitro synergy testing using checkerboard method for Acinetobacter baumannii and Pseudomonas aeruginosa Discussion: A. baumannii and P. 
aeruginosa are the emerging nosocomial pathogens, which are now becoming resistant to commonly used antibiotics such as aminoglycosides, beta-lactams, beta-lactam + beta-lactamase inhibitors in combination and meropenem, so colistin is considered as the last line treatment option.[910] Of note, it has been known that biofilm formation accounts for approximately 80% of the human bacterial infection, and meropenem is said to have a poor anti-biofilm activity when administered alone.[1112] Colistin, a polypeptide antibiotic, has a wide range of activity against Gram-negative organisms and acts mainly by disruption of lipopolysaccharide layer by interacting with the outer membrane of the Gram-negative bacteria. When this drug is given in combination with meropenem, synergism is thought to occur by favoring better access of meropenem to its target site, thereby leading to cell death. In addition, anti-biofilm activity of the combination may also contribute to synergism. A review and meta-analysis done by Jiang et al. showed that in time-kill assay (TKD), a lower synergy rate was shown by colistin (polymyxin E) when compared to polymyxin B, whereas an opposite finding was seen in the checkerboard synergy method.[13] Review of literature revealed that different studies have quoted different FIC interpretative results, for example, Liu et al. considered FIC indices (FICI) ≤0.5 as synergy, 0.5 < FICI ≤ 4 as additive and indifference, and FICI >4 as antagonism; van Belkum et al. considered FICI ≤0.5 to be synergistic, FICI 0.5 to ≤1.0 to be additive, FICI 1.0 to ≤2 as indifferent, and, finally FICI ≥3 as antagonistic, whereas Le Minh et al. considered as follows: synergistic when FICI ≤0.5, indifferent 0.5 < FICI <4, and antagonistic when FICI ≥4.[141516] However, we followed the interpretation of Soudeiha et al. who had done a similar study in 2017 by using the same combination of antibiotics (colistin and meropenem) on the same organism (A. baumanni).[6] In our study, we found that synergistic activity was not influenced by the MIC of meropenem (i.e., both low and high meropenem MIC showed similar activity), whereas a study conducted by Fan et al. in murine thigh infection model showed that strains with low meropenem MIC (≤32 mg/L) exhibited more synergistic activity than strains having higher MIC (≥64 mg/L).[17] Out of the 25 A. baumannii isolates tested, 18 (72%) isolates showed synergy (FIC ≤0.5) and 7 (28%) showed additive effect (0.5 < FIC ≤2). Our findings were in agreement with those that Le Minh et al. who found that 68% of A. baumannii isolates showed synergistic activity between meropenem and colistin and they also noted that the synergistic activity was more in carbapenem-resistant A. baumannii isolates than in carbapenem-susceptible isolates.[14] Similar studies by Liu et al.[15] and Yavaş et al.[18] showed that there existed 100% synergy among A. baumannii isolates when combining colistin with meropenem. In contrast to our finding, a recent study by Kheshti et al. showed that there was only 10% synergy when combining colistin and meropenem among A. baumannii isolates.[19] Our study results showed that for P. aeruginosa (n = 25) when combining colistin and meropenem using checkerboard method, there was a high rate of additive effect – 60% (15/25) and synergistic effect was seen in only 24% (6/25) and 16% (4/25) of isolates showed indifference. Our study supports the finding of Daoud et al. 
who observed that 63.6% (7/11) had additive effect and 27.2% (3/11) had synergy when combining colistin and meropenem using the checkerboard method for P. aeruginosa.[20] In contrast, Ramadan et al. found that 63.6% showed synergistic effect and 36.4% showed the additive effect.[21] No antagonistic effect was seen in the current study both for A. baumannii and P. aeruginosa. In our study, 76% of the A. baumannni isolates and 80% of P. aeruginosa isolates were susceptible to colistin by microbroth dilution method. Even though monotherapy rather than in combination would decrease the development of adverse side effects caused by the use of additional antibiotics, it is not recommended with colistin. A study by Li et al., Lee et al., Cetin et al., and Bergen et al. showed that colistin when administrated alone leads to the development of heteroresistant Acinetobacter and Pseudomonas strains and can even cause colistin resistance among other isolates and can lead to proliferation of organisms which are innately resistant to colistin leading to secondary bacterial infection leaving the clinicians with no other treatment options.[22232425] The rationale behind doing synergy testing is to find out whether the combination of antibiotics can improve the clinical outcomes in patients for whom in vitro synergy testing of antibiotics showed synergism to widen the empiric therapy and to delay the resistance development during antimicrobial treatment. A recent meta-analysis by Vardakas et al. showed that synergy-guided antibiotic combination therapy led to improved clinical outcome and lower mortality.[26] However, Nutman et al. found no significant improvement in outcomes in patients infected with organisms for which in vitro synergy testing showed synergism, they also found that patients infected with isolates that showed antagonism, in vitro synergy testing did not show worse outcome when a combination of antibiotics was tried.[27] There are a few reasons for the discrepancy of experimental results with in vivo response. First, in vitro synergy testing does not consider the host immune system and the pathogen interaction. Next, the antimicrobial drug concentration achieved at the infection site varies from those needed for synergism in vitro, and although the desired synergistic concentration are achieved in vivo, the effect may not last.[2829] Although different methods are available for testing synergism, the checkerboard synergy method has the advantage of manipulating the antibiotic concentration used, unlike E-test strip where fixed concentrations of antibiotics are available. Time-kill test gives information on the rapidity of synergistic activity, yet it is unreasonable when testing many isolates, as it is tedious and consumes a lot of time. E-test is the least complex method, however, it is less standardized and constrained by fixed concentration of antibiotic on the strips.[2] Various studies have shown wide-ranging concordance of 33%–100% between CB and TKA, with lower rates of synergism by the checkerboard method.[28] A study by Ni et al. showed that increased rates of synergism are seen with TKD when compared to checkerboard method and E-test; therefore, standardization of the various existing methods for evaluating synergy has to be done to address these limitations.[30] Conclusion: In the current study using the checkerboard method, when meropenem was combined with colistin, we had better synergistic activity against multidrug-resistant (MDR) A. 
baumannii isolates when compared to MDR P. aeruginosa isolates. However, more clinical trials are required to ascertain their efficacy and to explore its therapeutic potential, as in vivo settings cannot be entirely simulated in vitro. Limitations A single synergy testing method was used in our study (checkerboard method) and second, our sample size was small and the underlying mechanisms of resistance among the isolates were not determined. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest. Limitations: A single synergy testing method was used in our study (checkerboard method) and second, our sample size was small and the underlying mechanisms of resistance among the isolates were not determined. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
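The checkerboard set-up described in the methods above uses serial doubling dilutions spanning a few steps below and above the MIC, with working stocks prepared at four times the target concentration because 25 µL of each antibiotic plus 50 µL of inoculum dilutes each drug four-fold in the well. The sketch below generates such a series; the reference MIC in the example is a placeholder, not a study value.

```python
# Sketch of the doubling-dilution series used for the checkerboard plate: concentrations
# spanning a few two-fold steps below and above a reference MIC, with 4x working stocks
# because 25 uL of each drug plus 50 uL inoculum dilutes each antibiotic four-fold.
def doubling_series(reference_mic, steps_below=4, steps_above=4):
    """Final in-well concentrations (mg/L), lowest to highest."""
    return [reference_mic * 2.0 ** k for k in range(-steps_below, steps_above + 1)]

def working_stocks(final_concentrations, dilution_in_well=4):
    """Stock concentrations to dispense so wells end up at the final concentrations."""
    return [c * dilution_in_well for c in final_concentrations]

# Placeholder reference MIC of 2 mg/L (illustration only):
finals = doubling_series(2)
print("in-well :", finals)            # 0.125 ... 32 mg/L
print("4x stock:", working_stocks(finals))
```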
Background: Meropenem-resistant Acinetobacter baumannii and Pseudomonas aeruginosa are the two most common nosocomial pathogens causing ventilator-associated pneumonia. To combat this resistance, different combinations of antibiotics have been evaluated for their efficacy in laboratories as well as in clinical situations. Methods: Fifty meropenem-resistant isolates of A. baumannii (n = 25) and P. aeruginosa (n = 25) from endotracheal aspirates were studied. The MIC of colistin and meropenem was found using the microbroth dilution method. The fractional inhibitory concentration was calculated for the combination of antibiotics by checkerboard assay and the antibiotic interactions were assessed. Fisher's exact test was carried out for statistical comparison of categorical variables. Results: A synergistic effect between colistin and meropenem was observed in 18/25 (72%) and 6/25 (24%) isolates of Acinetobacter baumannii and P. aeruginosa, respectively, with fractional inhibitory concentration indices of ≤0.5. None of the tested isolates exhibited antagonism. Conclusions: Our results showed that combinations of colistin and meropenem are associated with improvement in minimum inhibitory concentration and may be a promising strategy in treating meropenem-resistant A. baumannii respiratory tract infections.
Introduction: In recent times, testing for antimicrobial interactions has become very essential as there is an increase in the proportion of drug-resistant microorganisms and due to restricted choices for the management of the infections caused by those organisms.[1] This is particularly obvious when the infections are caused by nonfermenters.[23] Numerous combinations of antibiotics have been assessed to determine synergistic activity for various organisms. Favorable outcome was achieved for combination of different antibiotics with colistin such as meropenem, fluoroquinolones, and rifampicin.[345] However, a study conducted by Soudeiha et al. in 2017 showed that there was only additive effect and there was no synergism when evaluating combination of colistin and meropenem in Acinetobacter baumanni.[6] Although combination of antimicrobial therapy is known to broaden the spectrum and increase the bactericidal activity, its use still remains debatable in certain strains such as Pseudomonas aeruginosa.[7] A meta-analysis conducted by Paul et al. found that there was no change in the mortality rate when combination therapy was used over monotherapy.[8] Of the different methods available for testing synergistic activity, time-kill assay (TKA) gives consistent and reliable results, however, as this method is too cumbersome, checkerboard assay (CB) and E-test has been extensively followed.[2] The most common organisms causing ventilator-associated pneumonia (VAP) in our center are A. baumannii and P. aeruginosa. Since the rate of meropenem resistance is increasing in these two organisms, we aimed to evaluate the effect of combined colistin and meropenem against meropenem-resistant isolates of A. baumannii and P.eruginosa by checkerboard method. Conclusion: In the current study using the checkerboard method, when meropenem was combined with colistin, we had better synergistic activity against multidrug-resistant (MDR) A. baumannii isolates when compared to MDR P. aeruginosa isolates. However, more clinical trials are required to ascertain their efficacy and to explore its therapeutic potential, as in vivo settings cannot be entirely simulated in vitro. Limitations A single synergy testing method was used in our study (checkerboard method) and second, our sample size was small and the underlying mechanisms of resistance among the isolates were not determined. A single synergy testing method was used in our study (checkerboard method) and second, our sample size was small and the underlying mechanisms of resistance among the isolates were not determined. Financial support and sponsorship Nil. Nil. Conflicts of interest There are no conflicts of interest. There are no conflicts of interest.
Background: Meropenem-resistant Acinetobacter baumannii and Pseudomonas aeruginosa are the two most common nosocomial pathogens causing ventilator-associated pneumonia. To combat this resistance, different combinations of antibiotics have been evaluated for their efficacy in laboratories as well as in clinical situations. Methods: Fifty meropenem-resistant isolates of A. baumannii (n = 25) and P. aeruginosa (n = 25) from endotracheal aspirates were studied. The MIC of colistin and meropenem was found using the microbroth dilution method. The fractional inhibitory concentration was calculated for the combination of antibiotics by checkerboard assay and the antibiotic interactions were assessed. Fisher's exact test was carried out for statistical comparison of categorical variables. Results: A synergistic effect between colistin and meropenem was observed in 18/25 (72%) and 6/25 (24%) isolates of Acinetobacter baumannii and P. aeruginosa, respectively, with fractional inhibitory concentration indices of ≤0.5. None of the tested isolates exhibited antagonism. Conclusions: Our results showed that combinations of colistin and meropenem are associated with improvement in minimum inhibitory concentration and may be a promising strategy in treating meropenem-resistant A. baumannii respiratory tract infections.
2,653
217
[ 59, 35, 2 ]
9
[ "isolates", "colistin", "meropenem", "showed", "synergy", "study", "method", "baumannii", "aeruginosa", "combination" ]
[ "antibiotics recommended colistin", "baumanni combination antimicrobial", "combinations antibiotics assessed", "antibiotics colistin meropenem", "combination antimicrobial" ]
null
[CONTENT] Checkerboard assay | Clinical Laboratory Standards Institute | fractional inhibitory concentration indices | minimum inhibitory concentration | ventilator-associated pneumonia [SUMMARY]
null
[CONTENT] Checkerboard assay | Clinical Laboratory Standards Institute | fractional inhibitory concentration indices | minimum inhibitory concentration | ventilator-associated pneumonia [SUMMARY]
[CONTENT] Checkerboard assay | Clinical Laboratory Standards Institute | fractional inhibitory concentration indices | minimum inhibitory concentration | ventilator-associated pneumonia [SUMMARY]
[CONTENT] Checkerboard assay | Clinical Laboratory Standards Institute | fractional inhibitory concentration indices | minimum inhibitory concentration | ventilator-associated pneumonia [SUMMARY]
[CONTENT] Checkerboard assay | Clinical Laboratory Standards Institute | fractional inhibitory concentration indices | minimum inhibitory concentration | ventilator-associated pneumonia [SUMMARY]
[CONTENT] Acinetobacter baumannii | Anti-Bacterial Agents | Colistin | Cross-Sectional Studies | Drug Combinations | Drug Resistance, Multiple, Bacterial | Drug Synergism | Humans | Meropenem | Microbial Sensitivity Tests | Pseudomonas aeruginosa [SUMMARY]
null
[CONTENT] Acinetobacter baumannii | Anti-Bacterial Agents | Colistin | Cross-Sectional Studies | Drug Combinations | Drug Resistance, Multiple, Bacterial | Drug Synergism | Humans | Meropenem | Microbial Sensitivity Tests | Pseudomonas aeruginosa [SUMMARY]
[CONTENT] Acinetobacter baumannii | Anti-Bacterial Agents | Colistin | Cross-Sectional Studies | Drug Combinations | Drug Resistance, Multiple, Bacterial | Drug Synergism | Humans | Meropenem | Microbial Sensitivity Tests | Pseudomonas aeruginosa [SUMMARY]
[CONTENT] Acinetobacter baumannii | Anti-Bacterial Agents | Colistin | Cross-Sectional Studies | Drug Combinations | Drug Resistance, Multiple, Bacterial | Drug Synergism | Humans | Meropenem | Microbial Sensitivity Tests | Pseudomonas aeruginosa [SUMMARY]
[CONTENT] Acinetobacter baumannii | Anti-Bacterial Agents | Colistin | Cross-Sectional Studies | Drug Combinations | Drug Resistance, Multiple, Bacterial | Drug Synergism | Humans | Meropenem | Microbial Sensitivity Tests | Pseudomonas aeruginosa [SUMMARY]
[CONTENT] antibiotics recommended colistin | baumanni combination antimicrobial | combinations antibiotics assessed | antibiotics colistin meropenem | combination antimicrobial [SUMMARY]
null
[CONTENT] antibiotics recommended colistin | baumanni combination antimicrobial | combinations antibiotics assessed | antibiotics colistin meropenem | combination antimicrobial [SUMMARY]
[CONTENT] antibiotics recommended colistin | baumanni combination antimicrobial | combinations antibiotics assessed | antibiotics colistin meropenem | combination antimicrobial [SUMMARY]
[CONTENT] antibiotics recommended colistin | baumanni combination antimicrobial | combinations antibiotics assessed | antibiotics colistin meropenem | combination antimicrobial [SUMMARY]
[CONTENT] antibiotics recommended colistin | baumanni combination antimicrobial | combinations antibiotics assessed | antibiotics colistin meropenem | combination antimicrobial [SUMMARY]
[CONTENT] isolates | colistin | meropenem | showed | synergy | study | method | baumannii | aeruginosa | combination [SUMMARY]
null
[CONTENT] isolates | colistin | meropenem | showed | synergy | study | method | baumannii | aeruginosa | combination [SUMMARY]
[CONTENT] isolates | colistin | meropenem | showed | synergy | study | method | baumannii | aeruginosa | combination [SUMMARY]
[CONTENT] isolates | colistin | meropenem | showed | synergy | study | method | baumannii | aeruginosa | combination [SUMMARY]
[CONTENT] isolates | colistin | meropenem | showed | synergy | study | method | baumannii | aeruginosa | combination [SUMMARY]
[CONTENT] organisms | meropenem | combination | activity | colistin meropenem | infections | infections caused | increase | colistin | conducted [SUMMARY]
null
[CONTENT] isolates | baumannii | colistin | aeruginosa | demonstrated | showed | resistant | inhibitory concentration | inhibitory | isolates showed [SUMMARY]
[CONTENT] method | study checkerboard | conflicts | conflicts interest | study checkerboard method | interest | isolates | mdr | interest conflicts interest | conflicts interest conflicts interest [SUMMARY]
[CONTENT] nil | conflicts interest | interest | conflicts | isolates | method | colistin | meropenem | showed | synergy [SUMMARY]
[CONTENT] nil | conflicts interest | interest | conflicts | isolates | method | colistin | meropenem | showed | synergy [SUMMARY]
[CONTENT] Meropenem | Acinetobacter | Pseudomonas | two ||| [SUMMARY]
null
[CONTENT] colistin | meropenem | 18/25 | 72% | 6/25 | 24% | P. Aeruginosa ||| [SUMMARY]
[CONTENT] colistin | meropenem [SUMMARY]
[CONTENT] Meropenem | Acinetobacter | Pseudomonas | two ||| ||| Fifty | 25 | P. aeruginosa | 25 ||| MIC | colistin | meropenem ||| ||| Fisher ||| ||| colistin | meropenem | 18/25 | 72% | 6/25 | 24% | P. Aeruginosa ||| ||| colistin | meropenem [SUMMARY]
[CONTENT] Meropenem | Acinetobacter | Pseudomonas | two ||| ||| Fifty | 25 | P. aeruginosa | 25 ||| MIC | colistin | meropenem ||| ||| Fisher ||| ||| colistin | meropenem | 18/25 | 72% | 6/25 | 24% | P. Aeruginosa ||| ||| colistin | meropenem [SUMMARY]
The expression profile of Dopamine D2 receptor, MGMT and VEGF in different histological subtypes of pituitary adenomas: a study of 197 cases and indications for the medical therapy.
25027022
To study the expression of D2R, MGMT and VEGF and their clinical significance in pituitary adenomas, and to predict the potential of dopamine agonists, temozolomide and bevacizumab as curative medical therapy for pituitary adenomas.
BACKGROUND
Immunohistochemistry and western blot were performed to detect the expression of D2R, MGMT and VEGF in pituitary adenoma tissue samples. The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared using chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate.
METHODS
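The statistical workflow described in the Methods (chi-squared tests across adenoma subtypes, Spearman rank correlation between markers, and Fisher's exact test for sparse tables) was run in SPSS in the study; the sketch below shows the same three comparisons in Python with made-up counts and scores, assuming scipy and numpy are available.

```python
# Minimal sketch (hypothetical data): the three tests named in the Methods.
import numpy as np
from scipy.stats import chi2_contingency, spearmanr, fisher_exact

# Chi-squared test: high- vs low-expression counts across six adenoma subtypes
# (rows = expression level, columns = subtype; counts are illustrative only).
table = np.array([[26, 18, 18, 10, 26, 26],
                  [ 2,  2,  9,  5, 11, 44]])
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Spearman rank correlation between two sets of IHC staining scores (0-7).
mgmt_scores = [2, 3, 5, 4, 6, 2, 3, 7, 5, 4]
d2r_scores  = [3, 3, 6, 4, 5, 2, 4, 6, 6, 5]
rho, p_rho = spearmanr(mgmt_scores, d2r_scores)

# Fisher's exact test for a sparse 2x2 table (e.g., expression level vs recurrence).
odds, p_fisher = fisher_exact([[10, 6], [118, 63]])

print(round(chi2, 2), round(p_chi2, 3), round(rho, 2), round(p_fisher, 3))
```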
The data showed that among 197 pituitary adenomas (PAs) of different histological subtypes, 64.9% showed D2R high expression, 86.3% showed MGMT low expression and 58.9% showed VEGF high expression. D2R high expression was more frequent in PRL- and GH-secreting PAs. MGMT low expression was present in all PA subtypes. VEGF high expression was more frequent in PRL-, ACTH- and FSH-secreting and non-functioning PAs. Western blot data supported these results. Spearman's rank correlation analysis showed that expression of MGMT was positively associated with D2R (r = 0.154, P = 0.031) and VEGF (r = 0.161, P = 0.024) in PAs, but no correlation was found between D2R and VEGF expression (r = -0.025, P = 0.725). The association between their expression and clinical parameters, analyzed using a chi-squared test or Fisher's exact probability test when appropriate, showed no significant association.
RESULTS
PRL- and GH-secreting PAs exhibit high expression of D2R and are expected to respond to dopamine agonists. Most PAs exhibit low expression of MGMT and high expression of VEGF, so TMZ or bevacizumab treatment could be applied when indicated.
CONCLUSIONS
[ "Adenoma", "Antibodies, Monoclonal, Humanized", "Bevacizumab", "Blotting, Western", "DNA Modification Methylases", "DNA Repair Enzymes", "Dacarbazine", "Female", "Humans", "Immunohistochemistry", "Male", "Pituitary Neoplasms", "Receptors, Dopamine D2", "Temozolomide", "Tumor Suppressor Proteins", "Vascular Endothelial Growth Factor A" ]
4223393
Background
Pituitary adenomas (PAs) account for about 15% of intracranial tumors. Although PAs are mostly benign lesions, about 30-55% of them are confirmed to be locally invasive, and some of them, which infiltrate dura, bone and sinuses, are designated highly aggressive [[1],[2]]. The conventional treatment of large pituitary adenomas consists of surgery, with radiotherapy when it is hard to achieve total resection. The use of additional radiotherapy is limited by the risk of radiation necrosis of surrounding structures. Thus, medication, although unlikely to be curative immediately, might produce a clinically meaningful therapeutic effect as a useful supplement [[3]]. Currently, first-line clinical medication for PAs generally consists of dopamine agonists (DAs), somatostatin analogs (SSAs) or combinations [[4]]. Recently, some routine chemotherapeutics such as temozolomide (TMZ) and bevacizumab have been carefully studied for treating PAs and are considered potential medical therapies for aggressive PAs [[5]-[8]]. DAs are widely used for the treatment of prolactinomas and some somatotropinomas, and the responsiveness depends on the expression of dopamine D2 receptors (D2R) on tumor cells. Abnormal expression of D2R in prolactinoma is considered to confer resistance to DA treatment. Fadul et al. [[7]] first reported two cases of pituitary carcinoma that received TMZ treatment, concluding that TMZ may be effective in treating pituitary carcinomas. Since then, more studies have demonstrated an encouraging therapeutic effect of TMZ on pituitary carcinomas and aggressive PAs. As a DNA repair enzyme, O6-methylguanine DNA methyltransferase (MGMT) confers chemoresistance to TMZ [[9]]; thus, tumors with low expression of MGMT are usually sensitive to TMZ. Bevacizumab is a monoclonal antibody that has been approved by the US FDA to treat colorectal cancer, non-small-cell lung carcinoma, breast cancer, renal carcinoma and recurrent glioma [[10]]. It blocks vascular endothelial growth factor (VEGF) binding to its receptor [[11]]. Experimental and clinical studies have demonstrated that anti-VEGF therapy may be effective in pituitary carcinoma and aggressive PAs. To investigate the D2R, MGMT and VEGF expression profile in PAs, and to evaluate the status of the drug targets of DAs, TMZ and bevacizumab for PA medical therapy, we performed immunohistochemical staining in 197 cases of different subtypes of PAs.
Methods
Patients and tissues One hundred and ninety-seven pituitary adenomas (PAs) of different histological subtypes were selected randomly from patients operated on between 2009 and 2011 in the Department of Neurosurgery, Jinling Hospital, School of Medicine, Nanjing University. All PA tumor tissues were resected, formalin-fixed, paraffin-embedded and then pathologically diagnosed, including 28 PRL-secreting adenomas, 20 GH-secreting adenomas, 27 ACTH-secreting adenomas, 15 TSH-secreting adenomas, 37 FSH-secreting adenomas and 70 non-functioning adenomas. Immunohistochemical staining A streptavidin-peroxidase (SP) method was used for immunostaining. Briefly, slides were deparaffinized with xylene three times (each for 5–10 min), dehydrated three times in a gradient series of ethanol (100%, 95%, and 75%), and rinsed with PBS. Each slide was treated with 3% H2O2 for 15 min to quench endogenous peroxidase activity. Nonspecific binding was blocked by treating slides with normal goat serum for 20 min. Slides were first incubated with rabbit polyclonal anti-D2R (Abcam, Shanghai, China; 1:50), mouse monoclonal anti-MGMT (Abcam, Shanghai, China; 1:50) or mouse monoclonal anti-VEGF (Abcam, Shanghai, China; 1:50) overnight at 4°C, and then rinsed twice with PBS. Slides were then incubated with a secondary antibody for 15 min at 37°C followed by treatment with streptavidin–peroxidase reagent for 15 min, and rinsed twice with PBS. The slides were visualized with 3,3'-diaminobenzidine (DAB) for 3 min, counterstained with haematoxylin, and mounted for microscopy. Evaluation of staining The slides were evaluated by two separate investigators under a light microscope (Dr. Wanchun Li and Dr. Zhenfeng Lu). Staining intensity was scored as 0 (negative), 1 (weak), 2 (medium), and 3 (strong). Extent of staining was scored as 0 (0%), 1 (1–25%), 2 (26-50%), 3 (51-75%), and 4 (76-100%) according to the percentage of positively stained area in relation to the whole carcinoma area. The sum of the intensity score and extent score was used as the final staining score (0–7). Tumors with a final staining score of >2 were considered positive; scores of 2–3 were considered low expression and scores of >3 high expression. Western blot For western blot analysis, the lysates were separated by SDS-PAGE and transferred to an Immobilon-P Transfer membrane (Millipore Corporation, Bedford, MA, USA). Membranes were probed with primary antibodies followed by incubation with a secondary antibody. Proteins were visualized with chemiluminescence luminol reagents (Beyotime Institute of Biotechnology, Shanghai, China). Statistical analysis Statistical analysis was performed using SPSS 16.0 (SPSS, Chicago, IL, USA). The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared using chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. P < 0.05 was considered statistically significant.
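The scoring rule in "Evaluation of staining" is easy to encode; the sketch below is illustrative only (the helper names are not from the paper). Intensity (0–3) plus extent (0–4) gives a final score of 0–7; scores of 2–3 are read as low expression and scores >3 as high expression, with lower scores treated here as negative.

```python
# Sketch of the IHC scoring rule described above (illustrative, not study code).

def extent_score(percent_positive):
    # 0%, 1-25%, 26-50%, 51-75%, 76-100% map to extent scores 0-4.
    if percent_positive == 0:
        return 0
    if percent_positive <= 25:
        return 1
    if percent_positive <= 50:
        return 2
    if percent_positive <= 75:
        return 3
    return 4

def classify(intensity, percent_positive):
    # Final score = intensity (0-3) + extent (0-4); 2-3 = low, >3 = high.
    final = intensity + extent_score(percent_positive)
    if final <= 1:
        return final, "negative"
    if final <= 3:
        return final, "low expression"
    return final, "high expression"

print(classify(intensity=2, percent_positive=40))  # (4, 'high expression')
print(classify(intensity=1, percent_positive=20))  # (2, 'low expression')
```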
Results
Expression of D2R, MGMT or VEGF in PA tissues The location of D2R and VEGF in the nuclei and cytoplasm, and of MGMT in the nuclei, was considered for scoring (Figure 1A–F). Positive expression of D2R was detected in 194 tissues, of MGMT in all tissues and of VEGF in 190 tissues. The proportions of cases showing low (score of ≤3) or high (score of >3) expression levels for D2R, MGMT and VEGF in different subtypes of PA are shown in Table 1. Of the 197 PAs, 64.9% showed D2R high expression, 86.3% showed MGMT low expression and 58.9% showed VEGF high expression. The ratio of high expression of D2R or MGMT differed significantly between PA subtypes (for D2R: χ2 = 44.844, P < 0.001; for MGMT: χ2 = 13.210, P = 0.021), but for VEGF there was no significant difference (χ2 = 9.003, P = 0.109). D2R high expression was more frequent in PRL-, GH-, ACTH-, TSH- and FSH-secreting PAs. MGMT low expression was present in all PA subtypes. VEGF high expression was more frequent in PRL-, ACTH- and FSH-secreting and non-functioning PAs. The western blot data supported and confirmed these results (Figure 2). Expression of D2R, MGMT and VEGF in PAs. (A, B): D2R low (A) and high (B) expression. (C, D): MGMT low (C) and high (D) expression. (E, F): VEGF low (E) and high (F) expression. Bar = 50 μm. Expression profile of D2R, MGMT and VEGF in different subtypes of PA NF, Non-functioning; Low, low expression (score of ≤3); High, high expression (score of >3). The expression of D2R, MGMT and VEGF in different PA subtypes as detected by western blot. PRL: PRL-secreting PAs; GH: GH-secreting PAs; ACTH: ACTH-secreting PAs; TSH: TSH-secreting PAs; FSH: FSH-secreting PAs; NF: Non-functioning PAs. GAPDH served as loading control. S1 = Sample 1; S2 = Sample 2. Relationships between D2R, MGMT and VEGF expression in correlation analysis Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R expression (r = 0.154, P = 0.031) and with VEGF expression (r = 0.161, P = 0.024) in PAs, but D2R expression did not show a correlation with VEGF expression (r = −0.025, P = 0.725). Association of D2R, MGMT and VEGF expression with clinical features of PAs In these 197 cases, 106 were male and 91 were female; 64 were defined as invasive PAs and the others were non-invasive (according to Knosp's classification [[12]]); 16 were recurrent PAs and the others were primary; 16 were microadenomas (diameter ≤ 10 mm) and the others were macroadenomas (diameter > 10 mm); 159 of the PAs were tender in tumor tissue texture and the others were tenacious; only 8 patients had taken bromocriptine orally. The associations between clinical variables and D2R, MGMT and VEGF expression are shown in Table 2. There was no significant association between D2R, MGMT or VEGF expression and clinical features, including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application (P > 0.05). This indicated that, despite the variety of PA clinical features, the expression of D2R, MGMT and VEGF was not affected by them. Association of D2R, MGMT and VEGF expression with clinicopathological characteristics from patients with PA Low, low expression (score of ≤3); High, high expression (score of >3).
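For orientation, the overall percentages reported above convert back to approximate case counts as follows; this is simple arithmetic on the stated figures, not additional data from the paper.

```python
# Illustrative back-calculation of case counts from the reported percentages (n = 197).
n = 197
for marker, pct in [("D2R high expression", 64.9),
                    ("MGMT low expression", 86.3),
                    ("VEGF high expression", 58.9)]:
    print(f"{marker}: ~{round(n * pct / 100)}/{n} cases")
# -> roughly 128/197, 170/197 and 116/197 cases, respectively.
```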
null
null
[ "Background", "Patients and tissues", "Immunohistochemical staining", "Evaluation of staining", "Western blot", "Statistical analysis", "Expression of D2R, MGMT or VEGF in PA tissues", "Relationships between D2R, MGMT and VEGF expression in correlation analysis", "Association of D2R, MGMT and VEGF expression with clinical features of PAs", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Pituitary adenomas (PAs) account for about 15% of intracranial tumors. Although PAs are mostly benign lesions, about 30-55% of them are confirmed to locally invasive, and some of them infiltrate dura, bone and sinuses, are designated highly aggressive [[1],[2]]. The conventional treatment of large pituitary adenomas consists of surgery, and radiotherapy when it is hard to achieve total resection. The use of additional radiotherapy is limited by the risk of radiation necrosis of surrounding structures. Thus, medication treatment, although unlikely to be curative immediately, might lead to certain clinically therapeutic effect, as a useful supplement [[3]].\nCurrently, first-line clinical medication for PAs generally consists of dopamine agonists (DAs), somatostatin analogs (SSAs) or combinations [[4]]. Recently, some routine chemotherapeutics such as Temozolomide (TMZ) and Bevacizumab have been carefully studied to treat PAs and considered to be potential for aggressive PAs’ medical therapy [[5]-[8]]. DAs were widely used for the treatment of prolactinomas and some somatotropinomas, and the responsiveness depends on the expression of dopamine D2 receptors (D2R) on tumor cells. Abnormal expression of D2R in prolactinoma was considered to confer resistance to DA treatment. Fadul et al. [[7]] first reported two cases of pituitary carcinoma received TMZ treatment, concluding that TMZ may be effective in treating pituitary carcinomas. After that, more and more studies demonstrated the inspiring therapeutic effect of TMZ on pituitary carcinomas and aggressive PAs. As a DNA repairase, O6-methylguanine DNA methyltransferase (MGMT) confers chemoresistance to TMZ [[9]]. Thus, tumors with low expression of MGMT are usually sensitive to TMZ. Bevacizumab is a monoclonal antibody which has been approved by USA FDA to treat colorectal cancer, non-small-cell lung carcinoma, breast cancer, renal carcinoma and recurrent glioma [[10]]. It blocks vascular endothelial growth factor (VEGF) binding to its receptor [[11]]. Experimental and clinical studies have demonstrated that anti-VEGF therapy may be effective in pituitary carcinoma and aggressive PAs.\nTo investigate D2R, MGMT and VEGF expression profile in PAs, and to evaluate the status of the drug targets of DAs, TMZ and Bevacizumab for PA medical therapy, herein, we performed the immunohistochemical staining in 197 cases of different subtypes of PAs.", "One hundred and ninety seven pituitary adenomas (PAs) of different histological subtypes were selected randomly from patients operated between 2009 and 2011 in the Department of neurosurgery, Jinling Hospital, School of Medicine, Nanjing University. All PA tumor tissues were formalin-fixed and paraffinembedded resected and then pathologically diagnosed, including 28 PRL-secreting adenomas, 20 GH-secreting adenomas, 27 ACTH-secreting adenomas, 15 TSH-secreting adenomas, 37 FSH-secreting adenomas and 70 non-functioning adenomas.", "A streptavidin-peroxidase (SP) method was used for immunostaining. Briefly, slides were deparaffinized with xylene three times (each for 5–10 min), dehydrated three times in a gradient series of ethanol (100%, 95%, and 75%), and rinsed with PBS. Each slide was treated with 3% H2O2 for 15 min to quench endogenous peroxidase activity. Nonspecific bindings were blocked by treating slides with normal goat serum for 20 min. 
Slides were first incubated with rabbit polyclonal anti-D2R (Abcam, Shanghai, China; 1:50), mouse monoclonal anti-MGMT (Abcam, Shanghai, China; 1:50) or mouse monoclonal anti-VEGF (Abcam, Shanghai, China; 1:50) overnight at 4°C, and then rinsed twice with PBS. Slides were then incubated with a secondary antibody for 15 min at 37°C followed by treatment with streptavidin–peroxidase reagent for 15 min, and rinsed twice with PBS. The slides were visualized with 3,3’-diaminobenzidine (DAB) for 3 min, counterstained with haematoxylin, and mounted for microscopy.", "The slides were evaluated by two separate investigators under a light microscope (Dr. Wanchun Li and Dr. Zhenfeng Lu). Staining intensity was scored as 0 (negative), 1 (weak), 2 (medium), and 3 (strong). Extent of staining was scored as 0 (0%), 1 (1–25%), 2 (26-50%), 3 (51-75%), and 4 (76-100%) according to the percentages of the positive staining areas in relation to the whole carcinoma area. The sum of the intensity score and extent score was used as the final staining score (0–7). Tumors having a final staining score of >2 were considered to be positive, score of 2–3 were considered as low expression and score of >3 were high expression.", "For western blot analysis, the lysates were separated by SDS-PAGE followed by transferring to an Immobilon-P Transfer membrane (Millipore Corporation, Bedford, MA, USA). Membranes were probed with primary antibodies followed by incubation with secondary antibody. Proteins were visualized with chemiluminescence luminol reagents (Beyotime Institute of Biotechnology, Shanghai, China).", "Statistical analysis was performed using SPSS 16.0 (SPSS Chicago, IL, USA). The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared by the use of chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. P < 0.05 was considered to be statistically significant.", "The location of D2R and VEGF in the nuclei and cytoplasm, and of MGMT in the nuclei was considered for scoring (Figure 1A–F). The positive expression of D2R was detected in 194 tissues, of MGMT was in all tissues and of VEGF was in 190 tissues. The proportions of cases showing low (score of ≤3) or high (score of >3) expression levels for D2R, MGMT and VEGF in different subtypes of PA were shown in Table 1. 64.9% of 197 PAs were D2R high expression, 86.3% of them were MGMT low expression and 58.9% of them were VEGF high expression. The ratio of high expression of D2R or MGMT is significantly different in PA subtypes (For D2R: χ2 = 44.844, P < 0.001; For MGMT: χ2 = 13.210, P = 0.021), but for VEGF, there is no significance (χ2 = 9.003, P = 0.109). D2R high expression existed more frequently in PRL, GH, ACTH, TSH and FSH secreting PAs. MGMT low expression existed in all PA subtypes. VEGF high expression existed more frequently in PRL, ACTH, FSH secreting and non-functioning PA. The data of western blot supported and confirmed these results (Figure 2).\nExpression of D2R, MGMT and VEGF in PAs. (A, B): D2R low (A) and high (B) expression. (C, D): MGMT low (C) and high (D) expression. (E, F): VEGF low (E) and high (F) expression. 
Bar = 50 μm.\nExpression profile of D2R, MGMT and VEGF in different subtypes of PA\nNF, Non-functioning; Low, low expression (score of ≤3); High, high expression (score of >3).\nThe expression of D2R, MGMT and VEGF in different PAs subtypes by detected using western blot. PRL: PRL-secreting PAs; GH: GH-secreting PAs; ACTH: ACTH-secreting PAs; TSH: TSH-secreting PAs; FSH: FSH-secreting PAs; NF: Non-functioning PAs. GAPDH served as loading control. S1 = Sample 1; S2 = Sample 2.", "Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R expression (r = 0.154, P = 0.031) and with VEGF expression (r = 0.161, P = 0.024) in PA, but D2R expression did not show a correlation with VEGF expression (r = −0.025, P = 0.725 > 0.05).", "In these 197 cases, 106 of them were male and 91 were female; 64 of them were defined as invasive PAs, and others were non-invasive (according to Knosp’s classification [[12]]); 16 of them were recurrent PA, and the others were primary; 16 of them were microadenoma (diameter ≤ 10 mm), and the others were macroadenoma (diameter > 10 mm); 159 of the PAs were tender in tumor tissues, and the others were tenacious; Only 8 patients have taken bromocriptine orally. The associations between clinical variables and D2R, MGMT and VEGF expression are shown in Table 2. However, there was no significant association between D2R, MGMT or VEGF expression and clinical features, including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application (P > 0.05). This indicated that despite the variety of PA clinical features, the expression of D2R, MGMT and VEGF are definite in PAs.\nAssociation of D2R, MGMT and VEGF expression with clinicopathological characteristics from patients with PA\nLow, low expression (score of ≤3); High, high expression (score of >3).", "PA: Pituitary adenoma\nD2R: Dopamine D2 receptors\nDA: Dopamine agonist\nMGMT: O6-methylguanine DNA methyltransferase\nTMZ: Temozolomide\nVEGF: Vascular endothelial growth factor\nPRL: Prolactin\nGH: Growth hormone\nACTH: Adrenocorticotropic hormone\nTSH: Thyroid stimulating hormone\nFSH: Follicle-stimulating hormone\nNF: Non-functioning", "The authors declare that they have no competing of interests.", "YW, JL and CM designed the research; YW, JL, YH, MT, SW, WL and ZL performed the research; WL and ZL evaluated the pathological sections and scored the extent of staining; JL and YW analyzed the data; JL, YW and CM wrote the paper, CM revised the paper. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients and tissues", "Immunohistochemical staining", "Evaluation of staining", "Western blot", "Statistical analysis", "Results", "Expression of D2R, MGMT or VEGF in PA tissues", "Relationships between D2R, MGMT and VEGF expression in correlation analysis", "Association of D2R, MGMT and VEGF expression with clinical features of PAs", "Discussion", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Pituitary adenomas (PAs) account for about 15% of intracranial tumors. Although PAs are mostly benign lesions, about 30-55% of them are confirmed to locally invasive, and some of them infiltrate dura, bone and sinuses, are designated highly aggressive [[1],[2]]. The conventional treatment of large pituitary adenomas consists of surgery, and radiotherapy when it is hard to achieve total resection. The use of additional radiotherapy is limited by the risk of radiation necrosis of surrounding structures. Thus, medication treatment, although unlikely to be curative immediately, might lead to certain clinically therapeutic effect, as a useful supplement [[3]].\nCurrently, first-line clinical medication for PAs generally consists of dopamine agonists (DAs), somatostatin analogs (SSAs) or combinations [[4]]. Recently, some routine chemotherapeutics such as Temozolomide (TMZ) and Bevacizumab have been carefully studied to treat PAs and considered to be potential for aggressive PAs’ medical therapy [[5]-[8]]. DAs were widely used for the treatment of prolactinomas and some somatotropinomas, and the responsiveness depends on the expression of dopamine D2 receptors (D2R) on tumor cells. Abnormal expression of D2R in prolactinoma was considered to confer resistance to DA treatment. Fadul et al. [[7]] first reported two cases of pituitary carcinoma received TMZ treatment, concluding that TMZ may be effective in treating pituitary carcinomas. After that, more and more studies demonstrated the inspiring therapeutic effect of TMZ on pituitary carcinomas and aggressive PAs. As a DNA repairase, O6-methylguanine DNA methyltransferase (MGMT) confers chemoresistance to TMZ [[9]]. Thus, tumors with low expression of MGMT are usually sensitive to TMZ. Bevacizumab is a monoclonal antibody which has been approved by USA FDA to treat colorectal cancer, non-small-cell lung carcinoma, breast cancer, renal carcinoma and recurrent glioma [[10]]. It blocks vascular endothelial growth factor (VEGF) binding to its receptor [[11]]. Experimental and clinical studies have demonstrated that anti-VEGF therapy may be effective in pituitary carcinoma and aggressive PAs.\nTo investigate D2R, MGMT and VEGF expression profile in PAs, and to evaluate the status of the drug targets of DAs, TMZ and Bevacizumab for PA medical therapy, herein, we performed the immunohistochemical staining in 197 cases of different subtypes of PAs.", " Patients and tissues One hundred and ninety seven pituitary adenomas (PAs) of different histological subtypes were selected randomly from patients operated between 2009 and 2011 in the Department of neurosurgery, Jinling Hospital, School of Medicine, Nanjing University. All PA tumor tissues were formalin-fixed and paraffinembedded resected and then pathologically diagnosed, including 28 PRL-secreting adenomas, 20 GH-secreting adenomas, 27 ACTH-secreting adenomas, 15 TSH-secreting adenomas, 37 FSH-secreting adenomas and 70 non-functioning adenomas.\nOne hundred and ninety seven pituitary adenomas (PAs) of different histological subtypes were selected randomly from patients operated between 2009 and 2011 in the Department of neurosurgery, Jinling Hospital, School of Medicine, Nanjing University. 
All PA tumor tissues were formalin-fixed and paraffinembedded resected and then pathologically diagnosed, including 28 PRL-secreting adenomas, 20 GH-secreting adenomas, 27 ACTH-secreting adenomas, 15 TSH-secreting adenomas, 37 FSH-secreting adenomas and 70 non-functioning adenomas.\n Immunohistochemical staining A streptavidin-peroxidase (SP) method was used for immunostaining. Briefly, slides were deparaffinized with xylene three times (each for 5–10 min), dehydrated three times in a gradient series of ethanol (100%, 95%, and 75%), and rinsed with PBS. Each slide was treated with 3% H2O2 for 15 min to quench endogenous peroxidase activity. Nonspecific bindings were blocked by treating slides with normal goat serum for 20 min. Slides were first incubated with rabbit polyclonal anti-D2R (Abcam, Shanghai, China; 1:50), mouse monoclonal anti-MGMT (Abcam, Shanghai, China; 1:50) or mouse monoclonal anti-VEGF (Abcam, Shanghai, China; 1:50) overnight at 4°C, and then rinsed twice with PBS. Slides were then incubated with a secondary antibody for 15 min at 37°C followed by treatment with streptavidin–peroxidase reagent for 15 min, and rinsed twice with PBS. The slides were visualized with 3,3’-diaminobenzidine (DAB) for 3 min, counterstained with haematoxylin, and mounted for microscopy.\nA streptavidin-peroxidase (SP) method was used for immunostaining. Briefly, slides were deparaffinized with xylene three times (each for 5–10 min), dehydrated three times in a gradient series of ethanol (100%, 95%, and 75%), and rinsed with PBS. Each slide was treated with 3% H2O2 for 15 min to quench endogenous peroxidase activity. Nonspecific bindings were blocked by treating slides with normal goat serum for 20 min. Slides were first incubated with rabbit polyclonal anti-D2R (Abcam, Shanghai, China; 1:50), mouse monoclonal anti-MGMT (Abcam, Shanghai, China; 1:50) or mouse monoclonal anti-VEGF (Abcam, Shanghai, China; 1:50) overnight at 4°C, and then rinsed twice with PBS. Slides were then incubated with a secondary antibody for 15 min at 37°C followed by treatment with streptavidin–peroxidase reagent for 15 min, and rinsed twice with PBS. The slides were visualized with 3,3’-diaminobenzidine (DAB) for 3 min, counterstained with haematoxylin, and mounted for microscopy.\n Evaluation of staining The slides were evaluated by two separate investigators under a light microscope (Dr. Wanchun Li and Dr. Zhenfeng Lu). Staining intensity was scored as 0 (negative), 1 (weak), 2 (medium), and 3 (strong). Extent of staining was scored as 0 (0%), 1 (1–25%), 2 (26-50%), 3 (51-75%), and 4 (76-100%) according to the percentages of the positive staining areas in relation to the whole carcinoma area. The sum of the intensity score and extent score was used as the final staining score (0–7). Tumors having a final staining score of >2 were considered to be positive, score of 2–3 were considered as low expression and score of >3 were high expression.\nThe slides were evaluated by two separate investigators under a light microscope (Dr. Wanchun Li and Dr. Zhenfeng Lu). Staining intensity was scored as 0 (negative), 1 (weak), 2 (medium), and 3 (strong). Extent of staining was scored as 0 (0%), 1 (1–25%), 2 (26-50%), 3 (51-75%), and 4 (76-100%) according to the percentages of the positive staining areas in relation to the whole carcinoma area. The sum of the intensity score and extent score was used as the final staining score (0–7). 
Tumors having a final staining score of >2 were considered to be positive, score of 2–3 were considered as low expression and score of >3 were high expression.\n Western blot For western blot analysis, the lysates were separated by SDS-PAGE followed by transferring to an Immobilon-P Transfer membrane (Millipore Corporation, Bedford, MA, USA). Membranes were probed with primary antibodies followed by incubation with secondary antibody. Proteins were visualized with chemiluminescence luminol reagents (Beyotime Institute of Biotechnology, Shanghai, China).\nFor western blot analysis, the lysates were separated by SDS-PAGE followed by transferring to an Immobilon-P Transfer membrane (Millipore Corporation, Bedford, MA, USA). Membranes were probed with primary antibodies followed by incubation with secondary antibody. Proteins were visualized with chemiluminescence luminol reagents (Beyotime Institute of Biotechnology, Shanghai, China).\n Statistical analysis Statistical analysis was performed using SPSS 16.0 (SPSS Chicago, IL, USA). The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared by the use of chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. P < 0.05 was considered to be statistically significant.\nStatistical analysis was performed using SPSS 16.0 (SPSS Chicago, IL, USA). The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared by the use of chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. P < 0.05 was considered to be statistically significant.", "One hundred and ninety seven pituitary adenomas (PAs) of different histological subtypes were selected randomly from patients operated between 2009 and 2011 in the Department of neurosurgery, Jinling Hospital, School of Medicine, Nanjing University. All PA tumor tissues were formalin-fixed and paraffinembedded resected and then pathologically diagnosed, including 28 PRL-secreting adenomas, 20 GH-secreting adenomas, 27 ACTH-secreting adenomas, 15 TSH-secreting adenomas, 37 FSH-secreting adenomas and 70 non-functioning adenomas.", "A streptavidin-peroxidase (SP) method was used for immunostaining. Briefly, slides were deparaffinized with xylene three times (each for 5–10 min), dehydrated three times in a gradient series of ethanol (100%, 95%, and 75%), and rinsed with PBS. Each slide was treated with 3% H2O2 for 15 min to quench endogenous peroxidase activity. Nonspecific bindings were blocked by treating slides with normal goat serum for 20 min. Slides were first incubated with rabbit polyclonal anti-D2R (Abcam, Shanghai, China; 1:50), mouse monoclonal anti-MGMT (Abcam, Shanghai, China; 1:50) or mouse monoclonal anti-VEGF (Abcam, Shanghai, China; 1:50) overnight at 4°C, and then rinsed twice with PBS. Slides were then incubated with a secondary antibody for 15 min at 37°C followed by treatment with streptavidin–peroxidase reagent for 15 min, and rinsed twice with PBS. 
The slides were visualized with 3,3’-diaminobenzidine (DAB) for 3 min, counterstained with haematoxylin, and mounted for microscopy.", "The slides were evaluated by two separate investigators under a light microscope (Dr. Wanchun Li and Dr. Zhenfeng Lu). Staining intensity was scored as 0 (negative), 1 (weak), 2 (medium), and 3 (strong). Extent of staining was scored as 0 (0%), 1 (1–25%), 2 (26-50%), 3 (51-75%), and 4 (76-100%) according to the percentages of the positive staining areas in relation to the whole carcinoma area. The sum of the intensity score and extent score was used as the final staining score (0–7). Tumors having a final staining score of >2 were considered to be positive, score of 2–3 were considered as low expression and score of >3 were high expression.", "For western blot analysis, the lysates were separated by SDS-PAGE followed by transferring to an Immobilon-P Transfer membrane (Millipore Corporation, Bedford, MA, USA). Membranes were probed with primary antibodies followed by incubation with secondary antibody. Proteins were visualized with chemiluminescence luminol reagents (Beyotime Institute of Biotechnology, Shanghai, China).", "Statistical analysis was performed using SPSS 16.0 (SPSS Chicago, IL, USA). The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared by the use of chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. P < 0.05 was considered to be statistically significant.", " Expression of D2R, MGMT or VEGF in PA tissues The location of D2R and VEGF in the nuclei and cytoplasm, and of MGMT in the nuclei was considered for scoring (Figure 1A–F). The positive expression of D2R was detected in 194 tissues, of MGMT was in all tissues and of VEGF was in 190 tissues. The proportions of cases showing low (score of ≤3) or high (score of >3) expression levels for D2R, MGMT and VEGF in different subtypes of PA were shown in Table 1. 64.9% of 197 PAs were D2R high expression, 86.3% of them were MGMT low expression and 58.9% of them were VEGF high expression. The ratio of high expression of D2R or MGMT is significantly different in PA subtypes (For D2R: χ2 = 44.844, P < 0.001; For MGMT: χ2 = 13.210, P = 0.021), but for VEGF, there is no significance (χ2 = 9.003, P = 0.109). D2R high expression existed more frequently in PRL, GH, ACTH, TSH and FSH secreting PAs. MGMT low expression existed in all PA subtypes. VEGF high expression existed more frequently in PRL, ACTH, FSH secreting and non-functioning PA. The data of western blot supported and confirmed these results (Figure 2).\nExpression of D2R, MGMT and VEGF in PAs. (A, B): D2R low (A) and high (B) expression. (C, D): MGMT low (C) and high (D) expression. (E, F): VEGF low (E) and high (F) expression. Bar = 50 μm.\nExpression profile of D2R, MGMT and VEGF in different subtypes of PA\nNF, Non-functioning; Low, low expression (score of ≤3); High, high expression (score of >3).\nThe expression of D2R, MGMT and VEGF in different PAs subtypes by detected using western blot. PRL: PRL-secreting PAs; GH: GH-secreting PAs; ACTH: ACTH-secreting PAs; TSH: TSH-secreting PAs; FSH: FSH-secreting PAs; NF: Non-functioning PAs. GAPDH served as loading control. 
S1 = Sample 1; S2 = Sample 2.\nThe location of D2R and VEGF in the nuclei and cytoplasm, and of MGMT in the nuclei was considered for scoring (Figure 1A–F). The positive expression of D2R was detected in 194 tissues, of MGMT was in all tissues and of VEGF was in 190 tissues. The proportions of cases showing low (score of ≤3) or high (score of >3) expression levels for D2R, MGMT and VEGF in different subtypes of PA were shown in Table 1. 64.9% of 197 PAs were D2R high expression, 86.3% of them were MGMT low expression and 58.9% of them were VEGF high expression. The ratio of high expression of D2R or MGMT is significantly different in PA subtypes (For D2R: χ2 = 44.844, P < 0.001; For MGMT: χ2 = 13.210, P = 0.021), but for VEGF, there is no significance (χ2 = 9.003, P = 0.109). D2R high expression existed more frequently in PRL, GH, ACTH, TSH and FSH secreting PAs. MGMT low expression existed in all PA subtypes. VEGF high expression existed more frequently in PRL, ACTH, FSH secreting and non-functioning PA. The data of western blot supported and confirmed these results (Figure 2).\nExpression of D2R, MGMT and VEGF in PAs. (A, B): D2R low (A) and high (B) expression. (C, D): MGMT low (C) and high (D) expression. (E, F): VEGF low (E) and high (F) expression. Bar = 50 μm.\nExpression profile of D2R, MGMT and VEGF in different subtypes of PA\nNF, Non-functioning; Low, low expression (score of ≤3); High, high expression (score of >3).\nThe expression of D2R, MGMT and VEGF in different PAs subtypes by detected using western blot. PRL: PRL-secreting PAs; GH: GH-secreting PAs; ACTH: ACTH-secreting PAs; TSH: TSH-secreting PAs; FSH: FSH-secreting PAs; NF: Non-functioning PAs. GAPDH served as loading control. S1 = Sample 1; S2 = Sample 2.\n Relationships between D2R, MGMT and VEGF expression in correlation analysis Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R expression (r = 0.154, P = 0.031) and with VEGF expression (r = 0.161, P = 0.024) in PA, but D2R expression did not show a correlation with VEGF expression (r = −0.025, P = 0.725 > 0.05).\nSpearman's rank correlation analysis showed that MGMT expression was positively associated with D2R expression (r = 0.154, P = 0.031) and with VEGF expression (r = 0.161, P = 0.024) in PA, but D2R expression did not show a correlation with VEGF expression (r = −0.025, P = 0.725 > 0.05).\n Association of D2R, MGMT and VEGF expression with clinical features of PAs In these 197 cases, 106 of them were male and 91 were female; 64 of them were defined as invasive PAs, and others were non-invasive (according to Knosp’s classification [[12]]); 16 of them were recurrent PA, and the others were primary; 16 of them were microadenoma (diameter ≤ 10 mm), and the others were macroadenoma (diameter > 10 mm); 159 of the PAs were tender in tumor tissues, and the others were tenacious; Only 8 patients have taken bromocriptine orally. The associations between clinical variables and D2R, MGMT and VEGF expression are shown in Table 2. However, there was no significant association between D2R, MGMT or VEGF expression and clinical features, including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application (P > 0.05). 
This indicated that despite the variety of PA clinical features, the expression of D2R, MGMT and VEGF are definite in PAs.\nAssociation of D2R, MGMT and VEGF expression with clinicopathological characteristics from patients with PA\nLow, low expression (score of ≤3); High, high expression (score of >3).\nIn these 197 cases, 106 of them were male and 91 were female; 64 of them were defined as invasive PAs, and others were non-invasive (according to Knosp’s classification [[12]]); 16 of them were recurrent PA, and the others were primary; 16 of them were microadenoma (diameter ≤ 10 mm), and the others were macroadenoma (diameter > 10 mm); 159 of the PAs were tender in tumor tissues, and the others were tenacious; Only 8 patients have taken bromocriptine orally. The associations between clinical variables and D2R, MGMT and VEGF expression are shown in Table 2. However, there was no significant association between D2R, MGMT or VEGF expression and clinical features, including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application (P > 0.05). This indicated that despite the variety of PA clinical features, the expression of D2R, MGMT and VEGF are definite in PAs.\nAssociation of D2R, MGMT and VEGF expression with clinicopathological characteristics from patients with PA\nLow, low expression (score of ≤3); High, high expression (score of >3).", "The location of D2R and VEGF in the nuclei and cytoplasm, and of MGMT in the nuclei was considered for scoring (Figure 1A–F). The positive expression of D2R was detected in 194 tissues, of MGMT was in all tissues and of VEGF was in 190 tissues. The proportions of cases showing low (score of ≤3) or high (score of >3) expression levels for D2R, MGMT and VEGF in different subtypes of PA were shown in Table 1. 64.9% of 197 PAs were D2R high expression, 86.3% of them were MGMT low expression and 58.9% of them were VEGF high expression. The ratio of high expression of D2R or MGMT is significantly different in PA subtypes (For D2R: χ2 = 44.844, P < 0.001; For MGMT: χ2 = 13.210, P = 0.021), but for VEGF, there is no significance (χ2 = 9.003, P = 0.109). D2R high expression existed more frequently in PRL, GH, ACTH, TSH and FSH secreting PAs. MGMT low expression existed in all PA subtypes. VEGF high expression existed more frequently in PRL, ACTH, FSH secreting and non-functioning PA. The data of western blot supported and confirmed these results (Figure 2).\nExpression of D2R, MGMT and VEGF in PAs. (A, B): D2R low (A) and high (B) expression. (C, D): MGMT low (C) and high (D) expression. (E, F): VEGF low (E) and high (F) expression. Bar = 50 μm.\nExpression profile of D2R, MGMT and VEGF in different subtypes of PA\nNF, Non-functioning; Low, low expression (score of ≤3); High, high expression (score of >3).\nThe expression of D2R, MGMT and VEGF in different PAs subtypes by detected using western blot. PRL: PRL-secreting PAs; GH: GH-secreting PAs; ACTH: ACTH-secreting PAs; TSH: TSH-secreting PAs; FSH: FSH-secreting PAs; NF: Non-functioning PAs. GAPDH served as loading control. 
S1 = Sample 1; S2 = Sample 2.", "Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R expression (r = 0.154, P = 0.031) and with VEGF expression (r = 0.161, P = 0.024) in PA, but D2R expression did not show a correlation with VEGF expression (r = −0.025, P = 0.725 > 0.05).", "In these 197 cases, 106 of them were male and 91 were female; 64 of them were defined as invasive PAs, and others were non-invasive (according to Knosp’s classification [[12]]); 16 of them were recurrent PA, and the others were primary; 16 of them were microadenoma (diameter ≤ 10 mm), and the others were macroadenoma (diameter > 10 mm); 159 of the PAs were tender in tumor tissues, and the others were tenacious; Only 8 patients have taken bromocriptine orally. The associations between clinical variables and D2R, MGMT and VEGF expression are shown in Table 2. However, there was no significant association between D2R, MGMT or VEGF expression and clinical features, including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application (P > 0.05). This indicated that despite the variety of PA clinical features, the expression of D2R, MGMT and VEGF are definite in PAs.\nAssociation of D2R, MGMT and VEGF expression with clinicopathological characteristics from patients with PA\nLow, low expression (score of ≤3); High, high expression (score of >3).", "Dopamine D2 receptor is expressed in the anterior and intermediate lobes of the pituitary gland. The response to dopamine agonists is related to the activity of the D2 receptor which belongs to the family of G proteincoupled receptors and acts through AMP cyclase enzyme inhibition [[13]]. de Bruin et al. demonstrated that D2 receptor expressed in more than 75% of the cell population in normal human pituitary, indicating that D2 receptors are not expressed only in lactotrophs and melanotrophs, which represent no more than 30% of the entire cell population of the normal pituitary gland [[14]]. In PRL secreting pituitary tumors, the high espression level of D2 receptor explains the good therapeutic response to dopamine agonists, which induces tumor shrinkage. In present study, we investigated the expression of D2R in 197 cases of PAs and found that approximately 92.9% of prolactinomas and 90% of somatotropinomas are high expression of D2R, indicating potential good drug-sensitivity for dopamine agonists (DAs). Previous clinical studies revealed that cabergoline and bromocriptine can normalize serum PRL levels in more than 80% of prolactinomas patients [[15],[16]] and have a good effect in somatotropinoma patients [[17]], which consistent with our data from immunostaining analysis. Our data also showed 83.8% of FSH-secreting PAs and 66.7% of ACTH-secreting PAs are high expression of D2R, which is supported by several other reported studies, although clinical studies showed a long-term cure of 48% in cabergoline treated ACTH-secreting PAs [[18]-[20]]. Only 37.1% of non-functioning (NF) PAs highly expressed D2R according to our data, consistenting with the report by Colao et al. that the cumulative evidence for NF PAs shrinkage after DA therapy is 27.6% [[21]].\nMGMT is a DNA repair protein that counteracts the effect of TMZ which is used for malignant glioma standard treatment. Recently, more and more studies revealed the therapeutic effect of TMZ on PAs, especially on aggressive PAs and pituitary carcinomas. 
MGMT expression as assessed by immunohistochemistry may predict response to temozolomide therapy in patients with aggressive pituitary tumors [[7],[22]]. McCormack group demonstrated that low MGMT expression and MGMT promoter methylation were found in the pituitary tumor of the patient who responded to TMZ, high MGMT expression was seen in the patient demonstrating a poor response to TMZ [[23]]. They reported the results that eleven out of 88 PA samples (13%) had low MGMT expression, and that prolactinomas were more likely to have low MGMT expression compared with other pituitary tumor subtypes. Herein, in this study we detected 170 out of 197 PAs (86.3%) existing MGMT expression lower than 50% (<50%) which was considered to be low MGMT expression. This data was higher than that form reported clinical studies in TMZ treated functioning PA, non-functioning PA and pituitary carcinoma with the remission rate of 75%, 55% and 72% respectively, which can be explained by Bush’s study that not all MGMT low expression PA respond to TMZ although medical therapy with TMZ can be helpful in the management of life-threatening PAs that have failed to respond to conventional treatments [[24]]. Our results showed low MGMT expression (<50%) in 85.7% of PRL-secreting PAs, 90% of GH-secreting PAs, 81.5% of ACTH-secreting PAs, 93.3% of TSH-secreting PAs, 70.3% of FSH-secreting PAs and 94.3% of non-functioning PAs, predicting almost all subtypes of PAs are suitable for TMZ therapy, although only fewer curative cases were separately reported [[25],[26]]. Further large scale clinical trials are necessary.\nVEGF is a key mediator of endothelial cell proliferation, angiogenesis and vascular permeability. It plays a pivotal role in the genesis and progression of solid tumors. Onofri et al. analyzed VEGF protein expression in 39 cases of PAs, found only 5 cases (13%) were VEGF negative [[8]]. Lloyd et al. examined 148 human pituitary adenomas for VEGF protein expression by immunohistochemistry, and showed positive staining in all groups with stronger staining in GH, ACTH, TSH, and gonadotroph adenomas and in pituitary carcinomas [[27]]. Our study detected 190 positive VEGF expression cases in 197 PAs and 58.9% of them are in high expression level, including 60.7% of PRL-secreting PAs, 78.4% FSH-secreting PAs, 51.9% ACTH-secreting PAs and 57.1% non-functioning PAs. Niveiro et al. investigated VEGF expression in 60 human pituitary adenomas, and found that low expression of VEGF was seen predominantly in prolactin cell adenomas, and high in non-functioning adenomas, which is different from our data that 60.7% of prolactin cell adenomas verses 57.1% non-functioning adenomas [[11]]. Moreover, VEGF was considered also involved in conventional medical therapy for PAs. Octreotide was reported to down-regulate VEGF expression to achieve antiangiogenic effects on PAs [[28]]. Gagliano et al. demonstrated that cabergoline reduces cell viability in non-functioning pituitary adenomas by inhibiting VEGF secretion, of which the modulation might mediate the effects of DA agonists on cell proliferation in non-functioning adenoma [[29]]. Interestingly, in present study, we did spearman’s rank correlation analysis and found that D2R expression did not show a correlation with VEGF expression. Although it is prospective to treat PAs by anti-VEGF, up to now, only one case of PA has been reported to be cured by bevacizumab [[6]]. The mechanisms of VEGF in PA genesis and progression are still unclear. 
More studies are needed to investigate the effects of anti-VEGF therapy in PA patients.\nTo confirm the results, we also examined D2R, MGMT and VEGF expression by western blot, with two samples selected for each PA subtype. The western blot data supported the immunohistochemical results: positive bands confirmed the validity of the immunostaining, and differences in band intensity reflected differences in expression level between samples.\nMoreover, Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R and VEGF expression in PAs. To our knowledge, this is the first report of a positive association between D2R and MGMT expression. Only one report, by Moshkin et al., has mentioned an association between MGMT and VEGF expression in PA; they described progressive regrowth and malignant transformation of a silent subtype 2 pituitary corticotroph adenoma with significant VEGF and MGMT immunopositivity [[30]]. The associations of MGMT expression with VEGF and with D2R in PAs require further investigation.\nIn addition, we analyzed the association of D2R, MGMT and VEGF expression with the clinical features of PAs, but found none. This indicates that their expression is not affected by differences in clinical features and that medical therapy can be considered for any patient in need.\nIn conclusion, in this study we characterized the expression of D2R, MGMT and VEGF in 197 pituitary adenomas of different histological subtypes, analyzed the relationships between D2R, MGMT and VEGF expression, and assessed the association of their expression with PA clinical features including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine use. Our data revealed that PRL- and GH-secreting PAs show high D2R expression and are expected to respond to dopamine agonists; most PAs show low MGMT expression and high VEGF expression, so TMZ or bevacizumab treatment could be applied when indicated.", "PA: Pituitary adenoma\nD2R: Dopamine D2 receptors\nDA: Dopamine agonist\nMGMT: O6-methylguanine DNA methyltransferase\nTMZ: Temozolomide\nVEGF: Vascular endothelial growth factor\nPRL: Prolactin\nGH: Growth hormone\nACTH: Adrenocorticotropic hormone\nTSH: Thyroid stimulating hormone\nFSH: Follicle-stimulating hormone\nNF: Non-functioning", "The authors declare that they have no competing interests.", "YW, JL and CM designed the research; YW, JL, YH, MT, SW, WL and ZL performed the research; WL and ZL evaluated the pathological sections and scored the extent of staining; JL and YW analyzed the data; JL, YW and CM wrote the paper; CM revised the paper. All authors read and approved the final manuscript." ]
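As a minimal illustration of the scoring scheme described in the Methods above (final staining score = intensity score 0-3 plus extent score 0-4; score >2 positive, with tables dichotomizing at ≤3 low versus >3 high), the following Python sketch re-implements the arithmetic. It is not the authors' analysis code (the study used SPSS), and the function names are hypothetical:

```python
# Illustrative sketch only: reproduces the staining-score arithmetic described in
# the Methods. Function names are hypothetical; the study itself published no code.

def staining_score(intensity: int, extent: int) -> int:
    """Final staining score = intensity (0-3) + extent (0-4), giving a 0-7 range."""
    if intensity not in range(0, 4) or extent not in range(0, 5):
        raise ValueError("intensity must be 0-3 and extent must be 0-4")
    return intensity + extent

def classify(score: int) -> str:
    """Apply the cut-offs used in the study: >2 positive; expression is
    dichotomized in the tables as low (<=3) versus high (>3)."""
    positive = "positive" if score > 2 else "negative"
    level = "high" if score > 3 else "low"
    return f"{positive}, {level} expression"

# Example: medium intensity (2) over 51-75% of the tumor area (extent 3) -> score 5
score = staining_score(2, 3)
print(score, classify(score))
```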
[ null, "methods", null, null, null, null, null, "results", null, null, null, "discussion", null, null, null ]
[ "Dopamine D2 receptors", "MGMT", "VEGF", "Dopamine agonists", "Temozolomide", "Bevacizumab" ]
Background: Pituitary adenomas (PAs) account for about 15% of intracranial tumors. Although PAs are mostly benign lesions, about 30-55% of them are locally invasive, and some that infiltrate the dura, bone and sinuses are designated highly aggressive [[1],[2]]. The conventional treatment of large pituitary adenomas consists of surgery, with radiotherapy when total resection is hard to achieve. The use of additional radiotherapy is limited by the risk of radiation necrosis of surrounding structures. Thus, medical treatment, although unlikely to be immediately curative, might provide a clinically meaningful therapeutic effect as a useful supplement [[3]]. Currently, first-line clinical medication for PAs generally consists of dopamine agonists (DAs), somatostatin analogs (SSAs) or their combination [[4]]. Recently, routine chemotherapeutics such as temozolomide (TMZ) and bevacizumab have been studied for the treatment of PAs and are considered potential options for the medical therapy of aggressive PAs [[5]-[8]]. DAs are widely used for the treatment of prolactinomas and some somatotropinomas, and responsiveness depends on the expression of dopamine D2 receptors (D2R) on tumor cells. Abnormal D2R expression in prolactinomas is considered to confer resistance to DA treatment. Fadul et al. [[7]] first reported two cases of pituitary carcinoma treated with TMZ, concluding that TMZ may be effective in treating pituitary carcinomas. Since then, more and more studies have demonstrated an encouraging therapeutic effect of TMZ on pituitary carcinomas and aggressive PAs. As a DNA repair enzyme, O6-methylguanine DNA methyltransferase (MGMT) confers chemoresistance to TMZ [[9]]. Thus, tumors with low MGMT expression are usually sensitive to TMZ. Bevacizumab is a monoclonal antibody that has been approved by the US FDA to treat colorectal cancer, non-small-cell lung carcinoma, breast cancer, renal carcinoma and recurrent glioma [[10]]. It blocks the binding of vascular endothelial growth factor (VEGF) to its receptor [[11]]. Experimental and clinical studies have demonstrated that anti-VEGF therapy may be effective in pituitary carcinoma and aggressive PAs. To investigate the D2R, MGMT and VEGF expression profiles in PAs and to evaluate the status of the drug targets of DAs, TMZ and bevacizumab for PA medical therapy, we performed immunohistochemical staining in 197 cases of different PA subtypes. Methods: Patients and tissues One hundred and ninety-seven pituitary adenomas (PAs) of different histological subtypes were selected randomly from patients operated on between 2009 and 2011 in the Department of Neurosurgery, Jinling Hospital, School of Medicine, Nanjing University. All resected PA tumor tissues were formalin-fixed, paraffin-embedded and pathologically diagnosed, including 28 PRL-secreting adenomas, 20 GH-secreting adenomas, 27 ACTH-secreting adenomas, 15 TSH-secreting adenomas, 37 FSH-secreting adenomas and 70 non-functioning adenomas. 
Immunohistochemical staining A streptavidin-peroxidase (SP) method was used for immunostaining. Briefly, slides were deparaffinized with xylene three times (each for 5–10 min), dehydrated three times in a gradient series of ethanol (100%, 95%, and 75%), and rinsed with PBS. Each slide was treated with 3% H2O2 for 15 min to quench endogenous peroxidase activity. Nonspecific bindings were blocked by treating slides with normal goat serum for 20 min. Slides were first incubated with rabbit polyclonal anti-D2R (Abcam, Shanghai, China; 1:50), mouse monoclonal anti-MGMT (Abcam, Shanghai, China; 1:50) or mouse monoclonal anti-VEGF (Abcam, Shanghai, China; 1:50) overnight at 4°C, and then rinsed twice with PBS. Slides were then incubated with a secondary antibody for 15 min at 37°C followed by treatment with streptavidin–peroxidase reagent for 15 min, and rinsed twice with PBS. The slides were visualized with 3,3’-diaminobenzidine (DAB) for 3 min, counterstained with haematoxylin, and mounted for microscopy. Evaluation of staining The slides were evaluated by two separate investigators under a light microscope (Dr. Wanchun Li and Dr. Zhenfeng Lu). Staining intensity was scored as 0 (negative), 1 (weak), 2 (medium), and 3 (strong). Extent of staining was scored as 0 (0%), 1 (1–25%), 2 (26-50%), 3 (51-75%), and 4 (76-100%) according to the percentages of the positive staining areas in relation to the whole carcinoma area. The sum of the intensity score and extent score was used as the final staining score (0–7). Tumors having a final staining score of >2 were considered to be positive, score of 2–3 were considered as low expression and score of >3 were high expression. Western blot For western blot analysis, the lysates were separated by SDS-PAGE followed by transferring to an Immobilon-P Transfer membrane (Millipore Corporation, Bedford, MA, USA). 
Membranes were probed with primary antibodies followed by incubation with secondary antibody. Proteins were visualized with chemiluminescence luminol reagents (Beyotime Institute of Biotechnology, Shanghai, China). For western blot analysis, the lysates were separated by SDS-PAGE followed by transferring to an Immobilon-P Transfer membrane (Millipore Corporation, Bedford, MA, USA). Membranes were probed with primary antibodies followed by incubation with secondary antibody. Proteins were visualized with chemiluminescence luminol reagents (Beyotime Institute of Biotechnology, Shanghai, China). Statistical analysis Statistical analysis was performed using SPSS 16.0 (SPSS Chicago, IL, USA). The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared by the use of chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. P < 0.05 was considered to be statistically significant. Statistical analysis was performed using SPSS 16.0 (SPSS Chicago, IL, USA). The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared by the use of chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. P < 0.05 was considered to be statistically significant. Patients and tissues: One hundred and ninety seven pituitary adenomas (PAs) of different histological subtypes were selected randomly from patients operated between 2009 and 2011 in the Department of neurosurgery, Jinling Hospital, School of Medicine, Nanjing University. All PA tumor tissues were formalin-fixed and paraffinembedded resected and then pathologically diagnosed, including 28 PRL-secreting adenomas, 20 GH-secreting adenomas, 27 ACTH-secreting adenomas, 15 TSH-secreting adenomas, 37 FSH-secreting adenomas and 70 non-functioning adenomas. Immunohistochemical staining: A streptavidin-peroxidase (SP) method was used for immunostaining. Briefly, slides were deparaffinized with xylene three times (each for 5–10 min), dehydrated three times in a gradient series of ethanol (100%, 95%, and 75%), and rinsed with PBS. Each slide was treated with 3% H2O2 for 15 min to quench endogenous peroxidase activity. Nonspecific bindings were blocked by treating slides with normal goat serum for 20 min. Slides were first incubated with rabbit polyclonal anti-D2R (Abcam, Shanghai, China; 1:50), mouse monoclonal anti-MGMT (Abcam, Shanghai, China; 1:50) or mouse monoclonal anti-VEGF (Abcam, Shanghai, China; 1:50) overnight at 4°C, and then rinsed twice with PBS. Slides were then incubated with a secondary antibody for 15 min at 37°C followed by treatment with streptavidin–peroxidase reagent for 15 min, and rinsed twice with PBS. The slides were visualized with 3,3’-diaminobenzidine (DAB) for 3 min, counterstained with haematoxylin, and mounted for microscopy. Evaluation of staining: The slides were evaluated by two separate investigators under a light microscope (Dr. Wanchun Li and Dr. Zhenfeng Lu). Staining intensity was scored as 0 (negative), 1 (weak), 2 (medium), and 3 (strong). 
Extent of staining was scored as 0 (0%), 1 (1–25%), 2 (26-50%), 3 (51-75%), and 4 (76-100%) according to the percentages of the positive staining areas in relation to the whole carcinoma area. The sum of the intensity score and extent score was used as the final staining score (0–7). Tumors having a final staining score of >2 were considered to be positive, score of 2–3 were considered as low expression and score of >3 were high expression. Western blot: For western blot analysis, the lysates were separated by SDS-PAGE followed by transferring to an Immobilon-P Transfer membrane (Millipore Corporation, Bedford, MA, USA). Membranes were probed with primary antibodies followed by incubation with secondary antibody. Proteins were visualized with chemiluminescence luminol reagents (Beyotime Institute of Biotechnology, Shanghai, China). Statistical analysis: Statistical analysis was performed using SPSS 16.0 (SPSS Chicago, IL, USA). The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared by the use of chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. P < 0.05 was considered to be statistically significant. Results: Expression of D2R, MGMT or VEGF in PA tissues The location of D2R and VEGF in the nuclei and cytoplasm, and of MGMT in the nuclei was considered for scoring (Figure 1A–F). The positive expression of D2R was detected in 194 tissues, of MGMT was in all tissues and of VEGF was in 190 tissues. The proportions of cases showing low (score of ≤3) or high (score of >3) expression levels for D2R, MGMT and VEGF in different subtypes of PA were shown in Table 1. 64.9% of 197 PAs were D2R high expression, 86.3% of them were MGMT low expression and 58.9% of them were VEGF high expression. The ratio of high expression of D2R or MGMT is significantly different in PA subtypes (For D2R: χ2 = 44.844, P < 0.001; For MGMT: χ2 = 13.210, P = 0.021), but for VEGF, there is no significance (χ2 = 9.003, P = 0.109). D2R high expression existed more frequently in PRL, GH, ACTH, TSH and FSH secreting PAs. MGMT low expression existed in all PA subtypes. VEGF high expression existed more frequently in PRL, ACTH, FSH secreting and non-functioning PA. The data of western blot supported and confirmed these results (Figure 2). Expression of D2R, MGMT and VEGF in PAs. (A, B): D2R low (A) and high (B) expression. (C, D): MGMT low (C) and high (D) expression. (E, F): VEGF low (E) and high (F) expression. Bar = 50 μm. Expression profile of D2R, MGMT and VEGF in different subtypes of PA NF, Non-functioning; Low, low expression (score of ≤3); High, high expression (score of >3). The expression of D2R, MGMT and VEGF in different PAs subtypes by detected using western blot. PRL: PRL-secreting PAs; GH: GH-secreting PAs; ACTH: ACTH-secreting PAs; TSH: TSH-secreting PAs; FSH: FSH-secreting PAs; NF: Non-functioning PAs. GAPDH served as loading control. S1 = Sample 1; S2 = Sample 2. The location of D2R and VEGF in the nuclei and cytoplasm, and of MGMT in the nuclei was considered for scoring (Figure 1A–F). The positive expression of D2R was detected in 194 tissues, of MGMT was in all tissues and of VEGF was in 190 tissues. 
The proportions of cases showing low (score of ≤3) or high (score of >3) expression levels for D2R, MGMT and VEGF in different subtypes of PA were shown in Table 1. 64.9% of 197 PAs were D2R high expression, 86.3% of them were MGMT low expression and 58.9% of them were VEGF high expression. The ratio of high expression of D2R or MGMT is significantly different in PA subtypes (For D2R: χ2 = 44.844, P < 0.001; For MGMT: χ2 = 13.210, P = 0.021), but for VEGF, there is no significance (χ2 = 9.003, P = 0.109). D2R high expression existed more frequently in PRL, GH, ACTH, TSH and FSH secreting PAs. MGMT low expression existed in all PA subtypes. VEGF high expression existed more frequently in PRL, ACTH, FSH secreting and non-functioning PA. The data of western blot supported and confirmed these results (Figure 2). Expression of D2R, MGMT and VEGF in PAs. (A, B): D2R low (A) and high (B) expression. (C, D): MGMT low (C) and high (D) expression. (E, F): VEGF low (E) and high (F) expression. Bar = 50 μm. Expression profile of D2R, MGMT and VEGF in different subtypes of PA NF, Non-functioning; Low, low expression (score of ≤3); High, high expression (score of >3). The expression of D2R, MGMT and VEGF in different PAs subtypes by detected using western blot. PRL: PRL-secreting PAs; GH: GH-secreting PAs; ACTH: ACTH-secreting PAs; TSH: TSH-secreting PAs; FSH: FSH-secreting PAs; NF: Non-functioning PAs. GAPDH served as loading control. S1 = Sample 1; S2 = Sample 2. Relationships between D2R, MGMT and VEGF expression in correlation analysis Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R expression (r = 0.154, P = 0.031) and with VEGF expression (r = 0.161, P = 0.024) in PA, but D2R expression did not show a correlation with VEGF expression (r = −0.025, P = 0.725 > 0.05). Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R expression (r = 0.154, P = 0.031) and with VEGF expression (r = 0.161, P = 0.024) in PA, but D2R expression did not show a correlation with VEGF expression (r = −0.025, P = 0.725 > 0.05). Association of D2R, MGMT and VEGF expression with clinical features of PAs In these 197 cases, 106 of them were male and 91 were female; 64 of them were defined as invasive PAs, and others were non-invasive (according to Knosp’s classification [[12]]); 16 of them were recurrent PA, and the others were primary; 16 of them were microadenoma (diameter ≤ 10 mm), and the others were macroadenoma (diameter > 10 mm); 159 of the PAs were tender in tumor tissues, and the others were tenacious; Only 8 patients have taken bromocriptine orally. The associations between clinical variables and D2R, MGMT and VEGF expression are shown in Table 2. However, there was no significant association between D2R, MGMT or VEGF expression and clinical features, including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application (P > 0.05). This indicated that despite the variety of PA clinical features, the expression of D2R, MGMT and VEGF are definite in PAs. Association of D2R, MGMT and VEGF expression with clinicopathological characteristics from patients with PA Low, low expression (score of ≤3); High, high expression (score of >3). 
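For readers who want to reproduce this kind of subtype comparison outside SPSS, the sketch below shows the same two tests (a chi-squared test of high versus low expression across PA subtypes, and Fisher's exact test for a 2x2 clinical association) in Python with SciPy. The counts are hypothetical placeholders, not the study's data:

```python
# Illustrative sketch of the study's categorical comparisons using SciPy rather
# than SPSS; all counts below are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: high vs. low expression; columns: PA subtypes (e.g., PRL, GH, ACTH, TSH, FSH, NF)
high_low_by_subtype = np.array([
    [26, 18, 18, 13, 31, 26],   # high expression (hypothetical counts)
    [ 2,  2,  9,  2,  6, 44],   # low expression  (hypothetical counts)
])
chi2, p, dof, expected = chi2_contingency(high_low_by_subtype)
print(f"chi2={chi2:.3f}, dof={dof}, p={p:.4f}")

# 2x2 association between expression level and a clinical feature (e.g., invasiveness);
# Fisher's exact test is preferred when expected cell counts are small.
table_2x2 = [[20, 44],   # invasive: high, low (hypothetical)
             [45, 88]]   # non-invasive: high, low (hypothetical)
odds_ratio, p_fisher = fisher_exact(table_2x2)
print(f"OR={odds_ratio:.2f}, Fisher p={p_fisher:.4f}")
```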
In these 197 cases, 106 of them were male and 91 were female; 64 of them were defined as invasive PAs, and others were non-invasive (according to Knosp’s classification [[12]]); 16 of them were recurrent PA, and the others were primary; 16 of them were microadenoma (diameter ≤ 10 mm), and the others were macroadenoma (diameter > 10 mm); 159 of the PAs were tender in tumor tissues, and the others were tenacious; Only 8 patients have taken bromocriptine orally. The associations between clinical variables and D2R, MGMT and VEGF expression are shown in Table 2. However, there was no significant association between D2R, MGMT or VEGF expression and clinical features, including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application (P > 0.05). This indicated that despite the variety of PA clinical features, the expression of D2R, MGMT and VEGF are definite in PAs. Association of D2R, MGMT and VEGF expression with clinicopathological characteristics from patients with PA Low, low expression (score of ≤3); High, high expression (score of >3). Expression of D2R, MGMT or VEGF in PA tissues: The location of D2R and VEGF in the nuclei and cytoplasm, and of MGMT in the nuclei was considered for scoring (Figure 1A–F). The positive expression of D2R was detected in 194 tissues, of MGMT was in all tissues and of VEGF was in 190 tissues. The proportions of cases showing low (score of ≤3) or high (score of >3) expression levels for D2R, MGMT and VEGF in different subtypes of PA were shown in Table 1. 64.9% of 197 PAs were D2R high expression, 86.3% of them were MGMT low expression and 58.9% of them were VEGF high expression. The ratio of high expression of D2R or MGMT is significantly different in PA subtypes (For D2R: χ2 = 44.844, P < 0.001; For MGMT: χ2 = 13.210, P = 0.021), but for VEGF, there is no significance (χ2 = 9.003, P = 0.109). D2R high expression existed more frequently in PRL, GH, ACTH, TSH and FSH secreting PAs. MGMT low expression existed in all PA subtypes. VEGF high expression existed more frequently in PRL, ACTH, FSH secreting and non-functioning PA. The data of western blot supported and confirmed these results (Figure 2). Expression of D2R, MGMT and VEGF in PAs. (A, B): D2R low (A) and high (B) expression. (C, D): MGMT low (C) and high (D) expression. (E, F): VEGF low (E) and high (F) expression. Bar = 50 μm. Expression profile of D2R, MGMT and VEGF in different subtypes of PA NF, Non-functioning; Low, low expression (score of ≤3); High, high expression (score of >3). The expression of D2R, MGMT and VEGF in different PAs subtypes by detected using western blot. PRL: PRL-secreting PAs; GH: GH-secreting PAs; ACTH: ACTH-secreting PAs; TSH: TSH-secreting PAs; FSH: FSH-secreting PAs; NF: Non-functioning PAs. GAPDH served as loading control. S1 = Sample 1; S2 = Sample 2. Relationships between D2R, MGMT and VEGF expression in correlation analysis: Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R expression (r = 0.154, P = 0.031) and with VEGF expression (r = 0.161, P = 0.024) in PA, but D2R expression did not show a correlation with VEGF expression (r = −0.025, P = 0.725 > 0.05). 
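A minimal sketch of the Spearman rank correlation used above, applied to per-case staining scores for two markers; the score vectors are illustrative, not the study's raw scores:

```python
# Minimal sketch: Spearman rank correlation between per-case staining scores (0-7)
# for two markers. The lists below are illustrative, not the study's raw data.
from scipy.stats import spearmanr

mgmt_scores = [2, 3, 3, 4, 5, 2, 6, 3, 4, 5, 1, 2]   # hypothetical final scores
d2r_scores  = [3, 4, 2, 5, 6, 3, 7, 2, 5, 6, 2, 3]

rho, p_value = spearmanr(mgmt_scores, d2r_scores)
print(f"Spearman rho={rho:.3f}, p={p_value:.3f}")
```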
Association of D2R, MGMT and VEGF expression with clinical features of PAs: In these 197 cases, 106 of them were male and 91 were female; 64 of them were defined as invasive PAs, and others were non-invasive (according to Knosp’s classification [[12]]); 16 of them were recurrent PA, and the others were primary; 16 of them were microadenoma (diameter ≤ 10 mm), and the others were macroadenoma (diameter > 10 mm); 159 of the PAs were tender in tumor tissues, and the others were tenacious; Only 8 patients have taken bromocriptine orally. The associations between clinical variables and D2R, MGMT and VEGF expression are shown in Table 2. However, there was no significant association between D2R, MGMT or VEGF expression and clinical features, including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application (P > 0.05). This indicated that despite the variety of PA clinical features, the expression of D2R, MGMT and VEGF are definite in PAs. Association of D2R, MGMT and VEGF expression with clinicopathological characteristics from patients with PA Low, low expression (score of ≤3); High, high expression (score of >3). Discussion: Dopamine D2 receptor is expressed in the anterior and intermediate lobes of the pituitary gland. The response to dopamine agonists is related to the activity of the D2 receptor which belongs to the family of G proteincoupled receptors and acts through AMP cyclase enzyme inhibition [[13]]. de Bruin et al. demonstrated that D2 receptor expressed in more than 75% of the cell population in normal human pituitary, indicating that D2 receptors are not expressed only in lactotrophs and melanotrophs, which represent no more than 30% of the entire cell population of the normal pituitary gland [[14]]. In PRL secreting pituitary tumors, the high espression level of D2 receptor explains the good therapeutic response to dopamine agonists, which induces tumor shrinkage. In present study, we investigated the expression of D2R in 197 cases of PAs and found that approximately 92.9% of prolactinomas and 90% of somatotropinomas are high expression of D2R, indicating potential good drug-sensitivity for dopamine agonists (DAs). Previous clinical studies revealed that cabergoline and bromocriptine can normalize serum PRL levels in more than 80% of prolactinomas patients [[15],[16]] and have a good effect in somatotropinoma patients [[17]], which consistent with our data from immunostaining analysis. Our data also showed 83.8% of FSH-secreting PAs and 66.7% of ACTH-secreting PAs are high expression of D2R, which is supported by several other reported studies, although clinical studies showed a long-term cure of 48% in cabergoline treated ACTH-secreting PAs [[18]-[20]]. Only 37.1% of non-functioning (NF) PAs highly expressed D2R according to our data, consistenting with the report by Colao et al. that the cumulative evidence for NF PAs shrinkage after DA therapy is 27.6% [[21]]. MGMT is a DNA repair protein that counteracts the effect of TMZ which is used for malignant glioma standard treatment. Recently, more and more studies revealed the therapeutic effect of TMZ on PAs, especially on aggressive PAs and pituitary carcinomas. MGMT expression as assessed by immunohistochemistry may predict response to temozolomide therapy in patients with aggressive pituitary tumors [[7],[22]]. 
McCormack group demonstrated that low MGMT expression and MGMT promoter methylation were found in the pituitary tumor of the patient who responded to TMZ, high MGMT expression was seen in the patient demonstrating a poor response to TMZ [[23]]. They reported the results that eleven out of 88 PA samples (13%) had low MGMT expression, and that prolactinomas were more likely to have low MGMT expression compared with other pituitary tumor subtypes. Herein, in this study we detected 170 out of 197 PAs (86.3%) existing MGMT expression lower than 50% (<50%) which was considered to be low MGMT expression. This data was higher than that form reported clinical studies in TMZ treated functioning PA, non-functioning PA and pituitary carcinoma with the remission rate of 75%, 55% and 72% respectively, which can be explained by Bush’s study that not all MGMT low expression PA respond to TMZ although medical therapy with TMZ can be helpful in the management of life-threatening PAs that have failed to respond to conventional treatments [[24]]. Our results showed low MGMT expression (<50%) in 85.7% of PRL-secreting PAs, 90% of GH-secreting PAs, 81.5% of ACTH-secreting PAs, 93.3% of TSH-secreting PAs, 70.3% of FSH-secreting PAs and 94.3% of non-functioning PAs, predicting almost all subtypes of PAs are suitable for TMZ therapy, although only fewer curative cases were separately reported [[25],[26]]. Further large scale clinical trials are necessary. VEGF is a key mediator of endothelial cell proliferation, angiogenesis and vascular permeability. It plays a pivotal role in the genesis and progression of solid tumors. Onofri et al. analyzed VEGF protein expression in 39 cases of PAs, found only 5 cases (13%) were VEGF negative [[8]]. Lloyd et al. examined 148 human pituitary adenomas for VEGF protein expression by immunohistochemistry, and showed positive staining in all groups with stronger staining in GH, ACTH, TSH, and gonadotroph adenomas and in pituitary carcinomas [[27]]. Our study detected 190 positive VEGF expression cases in 197 PAs and 58.9% of them are in high expression level, including 60.7% of PRL-secreting PAs, 78.4% FSH-secreting PAs, 51.9% ACTH-secreting PAs and 57.1% non-functioning PAs. Niveiro et al. investigated VEGF expression in 60 human pituitary adenomas, and found that low expression of VEGF was seen predominantly in prolactin cell adenomas, and high in non-functioning adenomas, which is different from our data that 60.7% of prolactin cell adenomas verses 57.1% non-functioning adenomas [[11]]. Moreover, VEGF was considered also involved in conventional medical therapy for PAs. Octreotide was reported to down-regulate VEGF expression to achieve antiangiogenic effects on PAs [[28]]. Gagliano et al. demonstrated that cabergoline reduces cell viability in non-functioning pituitary adenomas by inhibiting VEGF secretion, of which the modulation might mediate the effects of DA agonists on cell proliferation in non-functioning adenoma [[29]]. Interestingly, in present study, we did spearman’s rank correlation analysis and found that D2R expression did not show a correlation with VEGF expression. Although it is prospective to treat PAs by anti-VEGF, up to now, only one case of PA has been reported to be cured by bevacizumab [[6]]. The mechanisms of VEGF in PA genesis and progression are still unclear. More studies are needed to investigate the effects of anti-VEGF therapy on PA patients. To confirm the results, we also detected the expression of D2R, MGMT and VEGF by using western blot. 
The data supported the results of immunohistochemical staining. Two samples were selected for each PAs subtype. The positive expression of western blot indicated the immunohistochemical staining is available, and the thickness differences of the blot band revealed the expression level differences in separate sample. Moreover, by spearman’s rank correlation analysis, we found that MGMT expression was positively associated with D2R and VEGF expression in PAs. As far as we know, it is the first time to report the association of D2R and MGMT expression which is positive. Only one report by Moshkin et al. has ever mentioned the association of MGMT and VEGF expression in PA. They demonstrated a progressive regrowth and malignant transformation of a silent subtype 2 pituitary corticotroph adenoma, with significant VEGF and MGMT immunopositivity [[30]]. The association between VEGF and MGMT expression in PAs need further investigations, as well as D2R and MGMT expression. In addition, we analyzed the association of D2R, MGMT and VEGF expression with clinical features of PAs, but no association was found. This indicated that their expression was not affected by the differences of clinical features, and that the medical therapy can be applied in any patient in need. In conclusion, in this study we demonstrated the expression of D2R, MGMT and VEGF in 197 different histological subtypes of pituitary adenomas, and analyzed the relationships between D2R, MGMT and VEGF expression and the association of D2R, MGMT and VEGF expression with PA clinical features including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application. Our data revealed that PRL-and GH-secreting PAs exist high expression of D2R, responding to dopamine agonists; Most PAs exist low expression of MGMT and high expression of VEGF, TMZ or bevacizumab treatment could be applied under the premise of indications. Abbreviations: PA: Pituitary adenoma D2R: Dopamine D2 receptors DA: Dopamine agonist MGMT: O6-methylguanine DNA methyltransferase TMZ: Temozolomide VEGF: Vascular endothelial growth factor PRL: Prolactin GH: Growth hormone ACTH: Adrenocorticotropic hormone TSH: Thyroid stimulating hormone FSH: Follicle-stimulating hormone NF: Non-functioning Competing interests: The authors declare that they have no competing of interests. Authors’ contributions: YW, JL and CM designed the research; YW, JL, YH, MT, SW, WL and ZL performed the research; WL and ZL evaluated the pathological sections and scored the extent of staining; JL and YW analyzed the data; JL, YW and CM wrote the paper, CM revised the paper. All authors read and approved the final manuscript.
Background: To study the expression and clinical significance of D2R, MGMT and VEGF in pituitary adenomas, and to assess the potential of dopamine agonists, temozolomide and bevacizumab as medical therapy for pituitary adenomas. Methods: Immunohistochemistry and western blot were performed to detect the expression of D2R, MGMT and VEGF in pituitary adenoma tissue samples. The ratio of high expression of D2R, MGMT or VEGF in different subtypes of PA was compared using chi-squared tests. The relationships between D2R, MGMT and VEGF expression were assessed by the Spearman rank correlation test. The association between their expression and clinical parameters was analyzed using a chi-squared test, or Fisher's exact probability test when appropriate. Results: In 197 pituitary adenomas (PAs) of different histological subtypes, 64.9% showed high D2R expression, 86.3% showed low MGMT expression and 58.9% showed high VEGF expression. High D2R expression was more frequent in PRL- and GH-secreting PAs. Low MGMT expression was found in all PA subtypes. High VEGF expression was more frequent in PRL-, ACTH- and FSH-secreting and non-functioning PAs. Western blot data supported these results. Spearman's rank correlation analysis showed that MGMT expression was positively associated with D2R (r = 0.154, P = 0.031) and VEGF (r = 0.161, P = 0.024) expression in PAs, but no correlation was found between D2R and VEGF expression (r = -0.025, P = 0.725). The association between expression and clinical parameters, analyzed using a chi-squared test or Fisher's exact probability test when appropriate, showed no significant association. Conclusions: PRL- and GH-secreting PAs show high D2R expression and are expected to respond to dopamine agonists; most PAs show low MGMT expression and high VEGF expression, so TMZ or bevacizumab treatment could be applied when indicated.
null
null
6,406
374
[ 456, 93, 210, 159, 65, 99, 447, 77, 237, 67, 11, 70 ]
15
[ "expression", "vegf", "mgmt", "pas", "d2r", "high", "secreting", "d2r mgmt", "pa", "mgmt vegf" ]
[ "pituitary carcinoma remission", "pituitary adenomas analyzed", "pituitary adenomas consists", "effective treating pituitary", "therapy effective pituitary" ]
null
null
[CONTENT] Dopamine D2 receptors | MGMT | VEGF | Dopamine agonists | Temozolomide | Bevacizumab [SUMMARY]
[CONTENT] Dopamine D2 receptors | MGMT | VEGF | Dopamine agonists | Temozolomide | Bevacizumab [SUMMARY]
[CONTENT] Dopamine D2 receptors | MGMT | VEGF | Dopamine agonists | Temozolomide | Bevacizumab [SUMMARY]
null
[CONTENT] Dopamine D2 receptors | MGMT | VEGF | Dopamine agonists | Temozolomide | Bevacizumab [SUMMARY]
null
[CONTENT] Adenoma | Antibodies, Monoclonal, Humanized | Bevacizumab | Blotting, Western | DNA Modification Methylases | DNA Repair Enzymes | Dacarbazine | Female | Humans | Immunohistochemistry | Male | Pituitary Neoplasms | Receptors, Dopamine D2 | Temozolomide | Tumor Suppressor Proteins | Vascular Endothelial Growth Factor A [SUMMARY]
[CONTENT] Adenoma | Antibodies, Monoclonal, Humanized | Bevacizumab | Blotting, Western | DNA Modification Methylases | DNA Repair Enzymes | Dacarbazine | Female | Humans | Immunohistochemistry | Male | Pituitary Neoplasms | Receptors, Dopamine D2 | Temozolomide | Tumor Suppressor Proteins | Vascular Endothelial Growth Factor A [SUMMARY]
[CONTENT] Adenoma | Antibodies, Monoclonal, Humanized | Bevacizumab | Blotting, Western | DNA Modification Methylases | DNA Repair Enzymes | Dacarbazine | Female | Humans | Immunohistochemistry | Male | Pituitary Neoplasms | Receptors, Dopamine D2 | Temozolomide | Tumor Suppressor Proteins | Vascular Endothelial Growth Factor A [SUMMARY]
null
[CONTENT] Adenoma | Antibodies, Monoclonal, Humanized | Bevacizumab | Blotting, Western | DNA Modification Methylases | DNA Repair Enzymes | Dacarbazine | Female | Humans | Immunohistochemistry | Male | Pituitary Neoplasms | Receptors, Dopamine D2 | Temozolomide | Tumor Suppressor Proteins | Vascular Endothelial Growth Factor A [SUMMARY]
null
[CONTENT] pituitary carcinoma remission | pituitary adenomas analyzed | pituitary adenomas consists | effective treating pituitary | therapy effective pituitary [SUMMARY]
[CONTENT] pituitary carcinoma remission | pituitary adenomas analyzed | pituitary adenomas consists | effective treating pituitary | therapy effective pituitary [SUMMARY]
[CONTENT] pituitary carcinoma remission | pituitary adenomas analyzed | pituitary adenomas consists | effective treating pituitary | therapy effective pituitary [SUMMARY]
null
[CONTENT] pituitary carcinoma remission | pituitary adenomas analyzed | pituitary adenomas consists | effective treating pituitary | therapy effective pituitary [SUMMARY]
null
[CONTENT] expression | vegf | mgmt | pas | d2r | high | secreting | d2r mgmt | pa | mgmt vegf [SUMMARY]
[CONTENT] expression | vegf | mgmt | pas | d2r | high | secreting | d2r mgmt | pa | mgmt vegf [SUMMARY]
[CONTENT] expression | vegf | mgmt | pas | d2r | high | secreting | d2r mgmt | pa | mgmt vegf [SUMMARY]
null
[CONTENT] expression | vegf | mgmt | pas | d2r | high | secreting | d2r mgmt | pa | mgmt vegf [SUMMARY]
null
[CONTENT] tmz | pas | pituitary | treatment | aggressive | carcinoma | tmz bevacizumab | aggressive pas | therapy | das [SUMMARY]
[CONTENT] min | adenomas | slides | secreting adenomas | staining | score | secreting | shanghai china | shanghai | china [SUMMARY]
[CONTENT] expression | d2r | mgmt | vegf | pas | high | d2r mgmt | low | high expression | d2r mgmt vegf [SUMMARY]
null
[CONTENT] expression | pas | vegf | d2r | mgmt | secreting | score | adenomas | high | d2r mgmt [SUMMARY]
null
[CONTENT] D2R | MGMT | VEGF [SUMMARY]
[CONTENT] D2R | MGMT | VEGF ||| D2R | MGMT | VEGF ||| D2R | MGMT | VEGF | Spearman ||| Fisher [SUMMARY]
[CONTENT] 197 | 64.9% | 86.3% | MGMT | 58.9% | VEGF ||| GH- ||| MGMT ||| VEGF | PRL | ACTH | FSH ||| ||| Spearman | MGMT | D2R | 0.154 | 0.031 | VEGF | 0.161 | 0.024 | D2R | VEGF | 0.725 | 0.05 ||| Fisher [SUMMARY]
null
[CONTENT] D2R | MGMT | VEGF ||| D2R | MGMT | VEGF ||| D2R | MGMT | VEGF ||| D2R | MGMT | VEGF | Spearman ||| Fisher ||| ||| 197 | 64.9% | 86.3% | MGMT | 58.9% | VEGF ||| GH- ||| MGMT ||| VEGF | PRL | ACTH | FSH ||| ||| Spearman | MGMT | D2R | 0.154 | 0.031 | VEGF | 0.161 | 0.024 | D2R | VEGF | 0.725 | 0.05 ||| Fisher ||| D2R | MGMT | VEGF | TMZ [SUMMARY]
null
A Novel UC Exclusion Diet and Antibiotics for Treatment of Mild to Moderate Pediatric Ulcerative Colitis: A Prospective Open-Label Pilot Study.
34835992
As the microbiome plays an important role in instigating inflammation in ulcerative colitis (UC), strategies targeting the microbiome may offer an alternative therapeutic approach. The goal of the pilot trial was to evaluate the potential efficacy and feasibility of a novel UC exclusion diet (UCED) for clinical remission, as well as the potential of sequential antibiotics for diet-refractory patients to achieve remission without steroids.
BACKGROUND
This was a prospective, single-arm, multicenter, open-label pilot study in patients aged 8-19, with pediatric UC activity index (PUCAI) scores >10 on stable maintenance therapy. Patients failing to enter remission (PUCAI < 10) on the diet could receive a 14-day course of amoxycillin, metronidazole and doxycycline (AMD), and were re-assessed on day 21. The primary endpoint was intention-to-treat (ITT) remission at week 6, with UCED as the only intervention.
METHODS
Twenty-four UCED treatment courses were given to 23 eligible children (mean age: 15.3 ± 2.9 years). The median PUCAI decreased from 35 (30-40) at baseline to 12.5 (5-30) at week 6 (p = 0.001). Clinical remission with UCED alone was achieved in 9/24 (37.5%). The median fecal calprotectin declined from 818 (630.0-1880.0) μg/g at baseline to 592.0 (140.7-1555.0) μg/g at week 6 (p > 0.05). Eight patients received treatment with antibiotics after failing on the diet; 4/8 (50.0%) subsequently entered remission 3 weeks later.
RESULTS
The UCED appears to be effective and feasible for the induction of remission in children with mild to moderate UC. The sequential use of UCED followed by antibiotic therapy needs to be evaluated as a microbiome-targeted, steroid-sparing strategy.
CONCLUSION
[ "Adolescent", "Amoxicillin", "Anti-Bacterial Agents", "Child", "Colitis, Ulcerative", "Doxycycline", "Drug Therapy, Combination", "Eating", "Female", "Humans", "Intention to Treat Analysis", "Male", "Metronidazole", "Nutritional Status", "Patient Compliance", "Pilot Projects", "Prospective Studies", "Remission Induction", "Treatment Outcome" ]
8622458
1. Introduction
Ulcerative colitis (UC) in children is a chronic inflammatory disorder of the colon, associated with clinical symptoms including diarrhea and rectal bleeding, and has a negative impact on quality of life [1]. Epidemiologic studies have demonstrated an overall increase in the prevalence and the incidence of UC in both developed and developing countries [2,3]. The sharp change in food consumption from a non-Western to a Western diet has been suggested to contribute to this trend [3,4]. The pathogenesis of ulcerative colitis (UC) is believed to be related to dysbiosis coupled with diminished host-barrier function and unrepressed inflammation. The dysbiosis in UC is characterized by a reduction in short-chain-fatty-acid (SCFA)-producing taxa [5,6] and, in some studies, by an increase in potential pathobionts, such as Escherichia or Ruminococcus gnavus [7], or hydrogen-sulfide-reducing bacteria [6]. The manipulation of the microbiota has become one of the most intriguing targets for intervention in inflammatory bowel diseases (IBDs). Fecal microbiota transplantation (FMT), a straightforward therapy that manipulates the microbiota, has been shown to be effective in the short term in about 30% of cases [8]. The success of FMT appears to depend upon the choice of donor and their microbiota composition [9]. Recent data suggest that diet may alter intestinal microbiota, directly affect the host epithelial and goblet cells, diminish antimicrobial peptides, and influence the immune system’s responses [4,10,11,12,13,14]. However, while dietary interventions have proven to be highly effective in inducing remission in Crohn’s disease, the role of diet in UC and the potential for dietary therapies remain elusive [15,16,17]. A recent guideline from the international organization for the study of inflammatory bowel disease (IOIBD), primarily based on epidemiologic and animal models, suggested that patients with UC should reduce exposure to red or processed meat; saturated, trans and dairy fat; and certain additives [18]. However, there are no prospective randomized controlled trials with dietary interventions published to date that have demonstrated that a dietary intervention can induce remission in active UC in children. Currently, the only non-biologic medication for UC, which does not suppress the immune system, is 5-aminosalicylic acid (5ASA), which is considered to be the first-line therapy in mild to moderate cases. However, medications such as steroids, immunomodulators (IMM) and biologics have been increasingly used in the treatment of pediatric UC. The manipulation of the intestinal microbiome is an emerging new strategy for the treatment of IBD, which may reduce the need for immunosuppression. Two strategies that might alter the microbiome and could be used in conjunction are diet and antibiotics. Dietary components can modulate the composition and metabolome of the gut microbiota, as well as affect the intestinal epithelium, goblet cells and innate immune system [12]. Dietary factors present in Western diets may decrease the production of mucins or cause more permeable mucous and antimicrobial peptides, as well as reshaping the microbiota [13,19]. Turner et al. demonstrated that patients with severe UC have an increased relative abundance of Gammaproteobacteria [7]. The presence of pathobionts may make a disease amenable to antibiotic therapy. 
Antibiotic combinations have been used as a microbiota-targeted therapy, including a triple combination of amoxicillin, tetracycline and metronidazole, and have been shown to be effective in active adult UC patients [20,21]. We hypothesize that the UC disease course can be controlled either by using a novel diet, developed especially for the induction of remission in UC; by an antibiotic strategy; or by both. To test the feasibility and efficacy of the diet, we decided to examine these strategies in a pilot trial in children with mild to moderate UC.
null
null
3. Results
3.1. Study Population Thirty-two patients were screened between November 2014 and November 2020; eight patients were excluded (Supplementary Figure S1). Twenty-four UCED treatment courses were given to 23 eligible, consenting patients and were included in the final analysis. Significant delays in enrollment were encountered, as some of the collaborators left their institutions during the trial and there was significant delay in the ethical approval in other institutions. One patient received a second course of the UCED after a relapse two years later. The mean age of the included patients was 15.3 ± 2.9 years, with a mean disease duration of 1.4 ± 1.4 years. Demographic data are presented in Table 1. The majority had a moderate disease severity and had failed 5ASA, one patient was newly diagnosed with treatment-naïve UC, and one was coming off a course of steroids and flared. Thirty-two patients were screened between November 2014 and November 2020; eight patients were excluded (Supplementary Figure S1). Twenty-four UCED treatment courses were given to 23 eligible, consenting patients and were included in the final analysis. Significant delays in enrollment were encountered, as some of the collaborators left their institutions during the trial and there was significant delay in the ethical approval in other institutions. One patient received a second course of the UCED after a relapse two years later. The mean age of the included patients was 15.3 ± 2.9 years, with a mean disease duration of 1.4 ± 1.4 years. Demographic data are presented in Table 1. The majority had a moderate disease severity and had failed 5ASA, one patient was newly diagnosed with treatment-naïve UC, and one was coming off a course of steroids and flared. 3.2. Response to UCED Exclusively by Week 6 Clinical responses to UCED were achieved in 17/24 (70.8%) patients by week 6, and 9/24 (37.5%) had ITT clinical remissions at week 6 with the UCED (Figure 2). One patient entered into a second remission after receiving another course of UCED, two years after the initial response. For the 23 patients, 8/23 (34.8%) had ITT clinical remissions at week 6 with the UCED. Withdrawals in remission were imputed as non-remission in the ITT analysis. The median PUCAI decreased from 35 (30–40) at baseline to 12.5 (5–30) at week 6, and the mean PUCAI decreased from 34.0 ± 10 to 17.3 ± 16.9 (p = 0.001) according to the ITT analysis including all the patients in a LOCF analysis (Figure 3). There were no differences in the baseline median PUCAI score, baseline FC levels and disease extent between patients who entered remission at week 6 versus treatment failures. The FC level did not vary between the recruited centers. Six patients withdrew by week 3: two noncompliant patients (one response and one remission) and four patients who required additional therapy (Supplementary Figure S1). FC results were available for 18 patients. The median FC remained unchanged from week 0 to week 3 ((818 (630.0–1880.0) μg/g and 968.0 (272.0–1798.4) μg/g, respectively (p = 0.76)) and declined from week 3 to week 6 of the diet ((968.0 (272.0–1880.0) μg/g to 592.0 (140.7–1555.0) μg/g, respectively (p = 0.41)), corresponding to a 49% reduction from week 3 to week 6; the decline between week 0 and week 6 was not significant (p = 0.11). Among five patients who achieved remission at week 6 with baseline and week 6 FC, the median FC level decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14). 
Among the patients who were in clinical remission at week 6, seven were slow responders and achieved clinical remission only at week 6; the other two patients achieved remission at week 2 and week 3. Two patients achieved remission at week 2 and 3, respectively, but developed recurrence of mild symptoms by week 6 and were considered failures by ITT. One patient developed a Shigella infection during the trial that led to symptoms despite a marked reduction in FC from 1167 to 111 at week 6. Among the patients whose PUCAIs increased from baseline to week 6, it was interesting to see that two of the three patients had proctitis with moderate disease of about 1-year duration and a family history of IBD. Clinical responses to UCED were achieved in 17/24 (70.8%) patients by week 6, and 9/24 (37.5%) had ITT clinical remissions at week 6 with the UCED (Figure 2). One patient entered into a second remission after receiving another course of UCED, two years after the initial response. For the 23 patients, 8/23 (34.8%) had ITT clinical remissions at week 6 with the UCED. Withdrawals in remission were imputed as non-remission in the ITT analysis. The median PUCAI decreased from 35 (30–40) at baseline to 12.5 (5–30) at week 6, and the mean PUCAI decreased from 34.0 ± 10 to 17.3 ± 16.9 (p = 0.001) according to the ITT analysis including all the patients in a LOCF analysis (Figure 3). There were no differences in the baseline median PUCAI score, baseline FC levels and disease extent between patients who entered remission at week 6 versus treatment failures. The FC level did not vary between the recruited centers. Six patients withdrew by week 3: two noncompliant patients (one response and one remission) and four patients who required additional therapy (Supplementary Figure S1). FC results were available for 18 patients. The median FC remained unchanged from week 0 to week 3 ((818 (630.0–1880.0) μg/g and 968.0 (272.0–1798.4) μg/g, respectively (p = 0.76)) and declined from week 3 to week 6 of the diet ((968.0 (272.0–1880.0) μg/g to 592.0 (140.7–1555.0) μg/g, respectively (p = 0.41)), corresponding to a 49% reduction from week 3 to week 6; the decline between week 0 and week 6 was not significant (p = 0.11). Among five patients who achieved remission at week 6 with baseline and week 6 FC, the median FC level decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14). Among the patients who were in clinical remission at week 6, seven were slow responders and achieved clinical remission only at week 6; the other two patients achieved remission at week 2 and week 3. Two patients achieved remission at week 2 and 3, respectively, but developed recurrence of mild symptoms by week 6 and were considered failures by ITT. One patient developed a Shigella infection during the trial that led to symptoms despite a marked reduction in FC from 1167 to 111 at week 6. Among the patients whose PUCAIs increased from baseline to week 6, it was interesting to see that two of the three patients had proctitis with moderate disease of about 1-year duration and a family history of IBD. 3.3. Sustained Remission with UCED at Week 12 Six out of nine patients (66%) maintained remission through week 12 without additional therapy; thus, clinical remission was observed in 6/24 (25%) at week 12 based on ITT analysis. 
One patient withdrew despite remission and stopped the diet; two patients experienced relapses between weeks 7 and 12: one patient developed mild intermittent bleeding without other symptoms (PUCAI: 10), and one patient developed a mild relapse. The median PUCAI decreased from 35 (30–40) at baseline to 15 (5–30) at week 12 (p = 0.002) according to ITT analysis including all the patients in a LOCF analysis. Six out of nine patients (66%) maintained remission through week 12 without additional therapy; thus, clinical remission was observed in 6/24 (25%) at week 12 based on ITT analysis. One patient withdrew despite remission and stopped the diet; two patients experienced relapses between weeks 7 and 12: one patient developed mild intermittent bleeding without other symptoms (PUCAI: 10), and one patient developed a mild relapse. The median PUCAI decreased from 35 (30–40) at baseline to 15 (5–30) at week 12 (p = 0.002) according to ITT analysis including all the patients in a LOCF analysis. 3.4. Response to ADM after UCED Failure Eight patients received treatment with antibiotics after failing the diet; 4/8 (50.0%) subsequently entered remission (Figure 2). Thus, in total, 13/24 (54.2%) patients obtained remission; of those, nine patients were on the diet alone, and four, on a sequential diet and antibiotic therapy as induction therapy. Eight patients received treatment with antibiotics after failing the diet; 4/8 (50.0%) subsequently entered remission (Figure 2). Thus, in total, 13/24 (54.2%) patients obtained remission; of those, nine patients were on the diet alone, and four, on a sequential diet and antibiotic therapy as induction therapy. 3.5. Tolerance and Adherence Three patients stopped the diet (two stopped despite good responses at week 3—a PUCAI of 0 and PUCAI of 10); thus, intolerance occurred in 3/24 (12.5%). The adherence to the diet was available for 22/24 (91.7%) patients at week 3; 19 (86.4%) patients had high compliance, 2 had fair compliance (9.1%) and 1 (4.5%) had poor compliance. Data for adherence at week 6 were available for 15 patients among the 16 patients who reached this week; all were highly compliant. Three patients stopped the diet (two stopped despite good responses at week 3—a PUCAI of 0 and PUCAI of 10); thus, intolerance occurred in 3/24 (12.5%). The adherence to the diet was available for 22/24 (91.7%) patients at week 3; 19 (86.4%) patients had high compliance, 2 had fair compliance (9.1%) and 1 (4.5%) had poor compliance. Data for adherence at week 6 were available for 15 patients among the 16 patients who reached this week; all were highly compliant. 3.6. Nutritional Outcomes As the diet was designed to decrease animal saturated fat, total protein, SAAs and heme while providing fiber, we analyzed dietary intake before and after UCED. The analysis of dietary intake showed a clinically relevant decrease in total energy per day, as at baseline, the mean daily energy intake was 42.3 ± 25.2 kcal/kg/day, versus 32.7 ± 14.2 at week 6 (p = 0.06). In addition to energy intake reduction, the median weight decreased from 62 (57–65) to 59 kg (52–63) after 6 weeks of the diet (p = 0.02), with a mean weight loss of 0.4 ± 0.3 kg per week. Treatment with UCED was accompanied by a significant decrease in total protein, SAAs, saturated fat and iron, while there was a significant increase in total fiber consumption per day (Figure 4). 
As the diet was designed to decrease animal saturated fat, total protein, SAAs and heme while providing fiber, we analyzed dietary intake before and after UCED. The analysis of dietary intake showed a clinically relevant decrease in total energy per day, as at baseline, the mean daily energy intake was 42.3 ± 25.2 kcal/kg/day, versus 32.7 ± 14.2 at week 6 (p = 0.06). In addition to energy intake reduction, the median weight decreased from 62 (57–65) to 59 kg (52–63) after 6 weeks of the diet (p = 0.02), with a mean weight loss of 0.4 ± 0.3 kg per week. Treatment with UCED was accompanied by a significant decrease in total protein, SAAs, saturated fat and iron, while there was a significant increase in total fiber consumption per day (Figure 4). 3.7. Safety During the UCED treatment, eight patients had adverse events. Three patients had worsening of disease at week 3, two patients developed constipation, one patient lost weight during the first six weeks, and one patient developed a fever unrelated to the disease. Among the patients who received AMD, three patients had worsening of the disease, one patient had metronidazole intolerance with diarrhea, and one patient developed pneumonia one week after stopping the antibiotics. During the UCED treatment, eight patients had adverse events. Three patients had worsening of disease at week 3, two patients developed constipation, one patient lost weight during the first six weeks, and one patient developed a fever unrelated to the disease. Among the patients who received AMD, three patients had worsening of the disease, one patient had metronidazole intolerance with diarrhea, and one patient developed pneumonia one week after stopping the antibiotics.
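As a rough sketch of the intention-to-treat handling described in the Results above (last observation carried forward for missed PUCAI visits, with withdrawals in remission imputed as non-remission), the following Python snippet assumes a simple per-patient list of visit scores; the data and helper names are hypothetical and not taken from the trial:

```python
# Sketch of the ITT handling described above: last observation carried forward
# (LOCF) for missing PUCAI visits, with patients who withdrew counted as
# non-remission regardless of their last score. All data below are hypothetical.
from typing import List, Optional

def locf(pucai_by_week: List[Optional[int]]) -> List[Optional[int]]:
    """Carry the last non-missing PUCAI forward over the visit schedule."""
    filled, last = [], None
    for value in pucai_by_week:
        last = value if value is not None else last
        filled.append(last)
    return filled

def itt_remission(pucai_by_week: List[Optional[int]], withdrew: bool) -> bool:
    """Remission = PUCAI < 10 at the final (imputed) visit; withdrawals are failures."""
    if withdrew:
        return False
    final = locf(pucai_by_week)[-1]
    return final is not None and final < 10

patients = {
    "A": ([35, 20, 5], False),    # remission at the last visit
    "B": ([30, None, 25], False), # LOCF fills the missing mid-study visit
    "C": ([40, 5, None], True),   # withdrew while in remission -> counted as failure
}
rate = sum(itt_remission(*record) for record in patients.values()) / len(patients)
print(f"ITT remission rate: {rate:.0%}")
```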
[ "2.1. Study Population and Design This was an open-label, prospective, single-arm, multicenter pilot study involving the treatment of children with active mild to moderate UC using a novel diet with an antibiotic rescue design for dietary failures. The study population targeted patients aged 8–19, with mild to moderate active disease, defined according to the Pediatric UC Activity Index (10 ≤ PUCAI ≤ 45), at diagnosis or despite maintenance therapy with 5ASA or thiopurines, stable for at least 6 weeks. All the recruited patients meeting the inclusion and exclusion criteria (see below) were introduced to a novel dietary intervention (Supplementary Table S1) termed the Ulcerative Colitis Exclusion Diet (UCED), for the first 6 weeks, and those in remission at week 6 received a step-down diet for another 6 weeks. Patients with no improvement by week 3, who failed to achieve remission by week 6, or who deteriorated between weeks 6 and 12 could decide to receive a 14-day course of amoxicillin (50 mg/kg/day; max: 500 mg of TID), metronidazole (15 mg/kg/day; max: 250 mg of TID) and doxycycline (4 mg/kg/day; max: 100 mg of BID) (AMD). Subjects with intolerance to AMD were instructed to discontinue the metronidazole. The patients on AMD were seen again 7 days after the cessation of antibiotics (Figure 1). The patients were instructed to continue their pretreatment (5ASA or immunomodulators) without any dose change. The primary outcome was the clinical remission rate (PUCAI < 10) at week 6 for the UCED intervention. The study took place at 3 sites: the Wolfson Medical Center, Holon, Israel; the Children’s Hospital of Philadelphia, PA, USA; and the IWK Health Center, Halifax, NS, Canada. All the patients signed informed consent, and all the sites obtained ethical approval (NCT 02345733).\nThis was an open-label, prospective, single-arm, multicenter pilot study involving the treatment of children with active mild to moderate UC using a novel diet with an antibiotic rescue design for dietary failures. The study population targeted patients aged 8–19, with mild to moderate active disease, defined according to the Pediatric UC Activity Index (10 ≤ PUCAI ≤ 45), at diagnosis or despite maintenance therapy with 5ASA or thiopurines, stable for at least 6 weeks. All the recruited patients meeting the inclusion and exclusion criteria (see below) were introduced to a novel dietary intervention (Supplementary Table S1) termed the Ulcerative Colitis Exclusion Diet (UCED), for the first 6 weeks, and those in remission at week 6 received a step-down diet for another 6 weeks. Patients with no improvement by week 3, who failed to achieve remission by week 6, or who deteriorated between weeks 6 and 12 could decide to receive a 14-day course of amoxicillin (50 mg/kg/day; max: 500 mg of TID), metronidazole (15 mg/kg/day; max: 250 mg of TID) and doxycycline (4 mg/kg/day; max: 100 mg of BID) (AMD). Subjects with intolerance to AMD were instructed to discontinue the metronidazole. The patients on AMD were seen again 7 days after the cessation of antibiotics (Figure 1). The patients were instructed to continue their pretreatment (5ASA or immunomodulators) without any dose change. The primary outcome was the clinical remission rate (PUCAI < 10) at week 6 for the UCED intervention. The study took place at 3 sites: the Wolfson Medical Center, Holon, Israel; the Children’s Hospital of Philadelphia, PA, USA; and the IWK Health Center, Halifax, NS, Canada. 
All the patients signed informed consent, and all the sites obtained ethical approval (NCT 02345733).\n2.2. The Ulcerative Colitis Diet Intervention The UCED diet was designed to alter dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC [10,11]. It may be described as a low-protein, high-fiber, low-fat diet that also excludes additives. The following principles guiding food exclusion and addition included decreased exposure to sulfated amino acids (SAAs) [22,23,24], total protein [12,25,26], heme [27], animal fat [23,24], saturated and polyunsaturated fat [28], and food additives [29], with exposure to tryptophan [30,31] and natural sources of pectin and resistant starch [13,27,32,33,34]. The term “exclusion diet” was used, as the main principle of the diet is an exclusion of these foods’ components, but some other foods were added. The diet was designed with mandatory, allowed and disallowed foods. The first-phase diet is rich in fruit and vegetables, and includes mandatory foods, primarily fruits and vegetables. There are allowed foods that can be consumed without limitation, such as rice and potatoes; foods with prescribed amounts, such as chicken, eggs, yoghurt and pasta; and disallowed foods, such as red meat and processed foods. The diet also reduces sugar and fructose intake from sources other than fruits. The second phase at weeks 7–12 is more permissive, with more options of fruits and vegetables, and additions to the prescribed amounts of grain products and certain pulses. The patients were instructed on the use of the diet and were provided with recipes and a dietary support system. A UCED day sample menu is presented in Supplementary Table S1.\nThe UCED diet was designed to alter dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC [10,11]. It may be described as a low-protein, high-fiber, low-fat diet that also excludes additives. The following principles guiding food exclusion and addition included decreased exposure to sulfated amino acids (SAAs) [22,23,24], total protein [12,25,26], heme [27], animal fat [23,24], saturated and polyunsaturated fat [28], and food additives [29], with exposure to tryptophan [30,31] and natural sources of pectin and resistant starch [13,27,32,33,34]. The term “exclusion diet” was used, as the main principle of the diet is an exclusion of these foods’ components, but some other foods were added. The diet was designed with mandatory, allowed and disallowed foods. The first-phase diet is rich in fruit and vegetables, and includes mandatory foods, primarily fruits and vegetables. There are allowed foods that can be consumed without limitation, such as rice and potatoes; foods with prescribed amounts, such as chicken, eggs, yoghurt and pasta; and disallowed foods, such as red meat and processed foods. The diet also reduces sugar and fructose intake from sources other than fruits. The second phase at weeks 7–12 is more permissive, with more options of fruits and vegetables, and additions to the prescribed amounts of grain products and certain pulses. The patients were instructed on the use of the diet and were provided with recipes and a dietary support system. A UCED day sample menu is presented in Supplementary Table S1.\n2.3. 
Inclusion and Exclusion Criteria The inclusion and exclusion criteria were defined based on our initial experience, showing efficacy in cases of mild–moderate UC. Our goal was to target the appropriate UC population for a pilot study that would benefit most from a novel induction dietary therapy as an induction therapy and to test its feasibility and efficacy. The inclusion criteria included informed consent; established diagnosis of UC; age between 8 and 19 years; mild to moderate active disease defined as 10 ≤ PUCAI ≤ 45; and stable medication (IMM/5ASA) use for the past 6 weeks or no medication. The exclusion criteria included any proven current infection, such as a positive stool culture, a parasite, or positivity for Clostridioides difficile toxin; antibiotic or steroid use in the past 2 weeks, with the exception of patients stopping steroids at enrolment; current or past use of biologics; PUCAI > 45; acute severe UC (PUCAI ≥ 65) in the previous 12 months; a current extraintestinal manifestation of UC; and primary sclerosing cholangitis or liver disease, and pregnancy or an allergy to one of the antibiotics excluded patients from entering the antibiotic arm but not the diet.\nThe inclusion and exclusion criteria were defined based on our initial experience, showing efficacy in cases of mild–moderate UC. Our goal was to target the appropriate UC population for a pilot study that would benefit most from a novel induction dietary therapy as an induction therapy and to test its feasibility and efficacy. The inclusion criteria included informed consent; established diagnosis of UC; age between 8 and 19 years; mild to moderate active disease defined as 10 ≤ PUCAI ≤ 45; and stable medication (IMM/5ASA) use for the past 6 weeks or no medication. The exclusion criteria included any proven current infection, such as a positive stool culture, a parasite, or positivity for Clostridioides difficile toxin; antibiotic or steroid use in the past 2 weeks, with the exception of patients stopping steroids at enrolment; current or past use of biologics; PUCAI > 45; acute severe UC (PUCAI ≥ 65) in the previous 12 months; a current extraintestinal manifestation of UC; and primary sclerosing cholangitis or liver disease, and pregnancy or an allergy to one of the antibiotics excluded patients from entering the antibiotic arm but not the diet.\n2.4. Data Collection and Dietary Assessment The patients were seen at baseline and weeks 3, 6 and 12. At week 2, a phone call visit was performed to assess the PUCAI and dietary compliance. At each visit, a PUCAI score was recorded. A clinical response was defined as a decrease in PUCAI score of at least 10 points, and clinical remission was defined as PUCAI < 10. The patients were asked to provide stool samples for fecal calprotectin (FC) at weeks 0, 3 and 6, which were analyzed locally at each participating center. We performed 24 h recall via a dietitian at weeks 0, 3 and 6. The patients were asked to record the foods and beverages and the consumed amounts in a food diary (FD) over a 3-day period (weekend and 2 weekdays) at week 3. A modified diet-adherence questionnaire [16] was completed on weeks 3 and 6. High diet adherence was determined by finding high adherence on the questionnaire and by the dietitian’s assessment based on direct questioning. Poor compliance was defined by having low compliance in any assessment.\nThe patients were seen at baseline and weeks 3, 6 and 12. At week 2, a phone call visit was performed to assess the PUCAI and dietary compliance. 
At each visit, a PUCAI score was recorded. A clinical response was defined as a decrease in PUCAI score of at least 10 points, and clinical remission was defined as PUCAI < 10. The patients were asked to provide stool samples for fecal calprotectin (FC) at weeks 0, 3 and 6, which were analyzed locally at each participating center. We performed 24 h recall via a dietitian at weeks 0, 3 and 6. The patients were asked to record the foods and beverages and the consumed amounts in a food diary (FD) over a 3-day period (weekend and 2 weekdays) at week 3. A modified diet-adherence questionnaire [16] was completed on weeks 3 and 6. High diet adherence was determined by finding high adherence on the questionnaire and by the dietitian’s assessment based on direct questioning. Poor compliance was defined by having low compliance in any assessment.\n2.5. Statistical Analysis Continuous variables were evaluated for distribution normality and are reported as medians (interquartile ranges, IQRs) or means (standard deviations, SDs) as appropriate. Nominal variables are summarized as frequencies and are presented as n (%). The primary end point of the proportion of patients in remission at week 6 was analyzed according to the intention-to-treat (ITT) paradigm. A pairwise comparison of the PUCAI at week 0 versus week 6/week 12 was analyzed using the Wilcoxon signed rank test and used according to the last-observation-carried-forward (LOCF) approach. A pairwise comparison of the FC at week 0 versus week 6 was analyzed using the Wilcoxon signed rank test and was performed only for subjects with parameters at both time points. The food macronutrient and micronutrient daily intake was based on the food records at baseline before starting the diet and during the UCED phase 1 at week 6 or week 3, and was compared using the t-test for paired samples or the Wilcoxon signed-rank test as appropriate. Only patients with both week 0 and week 3/6 diet records were entered into the nutritional analysis. All the statistical analyses were performed using the SPSS version 27 statistical analysis software (IBM, Endicott, NY, USA). All the tests were two sided and were considered to be significant at p values < 0.05.\nContinuous variables were evaluated for distribution normality and are reported as medians (interquartile ranges, IQRs) or means (standard deviations, SDs) as appropriate. Nominal variables are summarized as frequencies and are presented as n (%). The primary end point of the proportion of patients in remission at week 6 was analyzed according to the intention-to-treat (ITT) paradigm. A pairwise comparison of the PUCAI at week 0 versus week 6/week 12 was analyzed using the Wilcoxon signed rank test and used according to the last-observation-carried-forward (LOCF) approach. A pairwise comparison of the FC at week 0 versus week 6 was analyzed using the Wilcoxon signed rank test and was performed only for subjects with parameters at both time points. The food macronutrient and micronutrient daily intake was based on the food records at baseline before starting the diet and during the UCED phase 1 at week 6 or week 3, and was compared using the t-test for paired samples or the Wilcoxon signed-rank test as appropriate. Only patients with both week 0 and week 3/6 diet records were entered into the nutritional analysis. All the statistical analyses were performed using the SPSS version 27 statistical analysis software (IBM, Endicott, NY, USA). 
All the tests were two sided and were considered to be significant at p values < 0.05.", "This was an open-label, prospective, single-arm, multicenter pilot study involving the treatment of children with active mild to moderate UC using a novel diet with an antibiotic rescue design for dietary failures. The study population targeted patients aged 8–19, with mild to moderate active disease, defined according to the Pediatric UC Activity Index (10 ≤ PUCAI ≤ 45), at diagnosis or despite maintenance therapy with 5ASA or thiopurines, stable for at least 6 weeks. All the recruited patients meeting the inclusion and exclusion criteria (see below) were introduced to a novel dietary intervention (Supplementary Table S1) termed the Ulcerative Colitis Exclusion Diet (UCED), for the first 6 weeks, and those in remission at week 6 received a step-down diet for another 6 weeks. Patients with no improvement by week 3, who failed to achieve remission by week 6, or who deteriorated between weeks 6 and 12 could decide to receive a 14-day course of amoxicillin (50 mg/kg/day; max: 500 mg of TID), metronidazole (15 mg/kg/day; max: 250 mg of TID) and doxycycline (4 mg/kg/day; max: 100 mg of BID) (AMD). Subjects with intolerance to AMD were instructed to discontinue the metronidazole. The patients on AMD were seen again 7 days after the cessation of antibiotics (Figure 1). The patients were instructed to continue their pretreatment (5ASA or immunomodulators) without any dose change. The primary outcome was the clinical remission rate (PUCAI < 10) at week 6 for the UCED intervention. The study took place at 3 sites: the Wolfson Medical Center, Holon, Israel; the Children’s Hospital of Philadelphia, PA, USA; and the IWK Health Center, Halifax, NS, Canada. All the patients signed informed consent, and all the sites obtained ethical approval (NCT 02345733).", "The UCED diet was designed to alter dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC [10,11]. It may be described as a low-protein, high-fiber, low-fat diet that also excludes additives. The following principles guiding food exclusion and addition included decreased exposure to sulfated amino acids (SAAs) [22,23,24], total protein [12,25,26], heme [27], animal fat [23,24], saturated and polyunsaturated fat [28], and food additives [29], with exposure to tryptophan [30,31] and natural sources of pectin and resistant starch [13,27,32,33,34]. The term “exclusion diet” was used, as the main principle of the diet is an exclusion of these foods’ components, but some other foods were added. The diet was designed with mandatory, allowed and disallowed foods. The first-phase diet is rich in fruit and vegetables, and includes mandatory foods, primarily fruits and vegetables. There are allowed foods that can be consumed without limitation, such as rice and potatoes; foods with prescribed amounts, such as chicken, eggs, yoghurt and pasta; and disallowed foods, such as red meat and processed foods. The diet also reduces sugar and fructose intake from sources other than fruits. The second phase at weeks 7–12 is more permissive, with more options of fruits and vegetables, and additions to the prescribed amounts of grain products and certain pulses. The patients were instructed on the use of the diet and were provided with recipes and a dietary support system. 
A UCED day sample menu is presented in Supplementary Table S1.", "The inclusion and exclusion criteria were defined based on our initial experience, showing efficacy in cases of mild–moderate UC. Our goal was to target the appropriate UC population for a pilot study that would benefit most from a novel induction dietary therapy as an induction therapy and to test its feasibility and efficacy. The inclusion criteria included informed consent; established diagnosis of UC; age between 8 and 19 years; mild to moderate active disease defined as 10 ≤ PUCAI ≤ 45; and stable medication (IMM/5ASA) use for the past 6 weeks or no medication. The exclusion criteria included any proven current infection, such as a positive stool culture, a parasite, or positivity for Clostridioides difficile toxin; antibiotic or steroid use in the past 2 weeks, with the exception of patients stopping steroids at enrolment; current or past use of biologics; PUCAI > 45; acute severe UC (PUCAI ≥ 65) in the previous 12 months; a current extraintestinal manifestation of UC; and primary sclerosing cholangitis or liver disease, and pregnancy or an allergy to one of the antibiotics excluded patients from entering the antibiotic arm but not the diet.", "The patients were seen at baseline and weeks 3, 6 and 12. At week 2, a phone call visit was performed to assess the PUCAI and dietary compliance. At each visit, a PUCAI score was recorded. A clinical response was defined as a decrease in PUCAI score of at least 10 points, and clinical remission was defined as PUCAI < 10. The patients were asked to provide stool samples for fecal calprotectin (FC) at weeks 0, 3 and 6, which were analyzed locally at each participating center. We performed 24 h recall via a dietitian at weeks 0, 3 and 6. The patients were asked to record the foods and beverages and the consumed amounts in a food diary (FD) over a 3-day period (weekend and 2 weekdays) at week 3. A modified diet-adherence questionnaire [16] was completed on weeks 3 and 6. High diet adherence was determined by finding high adherence on the questionnaire and by the dietitian’s assessment based on direct questioning. Poor compliance was defined by having low compliance in any assessment.", "Continuous variables were evaluated for distribution normality and are reported as medians (interquartile ranges, IQRs) or means (standard deviations, SDs) as appropriate. Nominal variables are summarized as frequencies and are presented as n (%). The primary end point of the proportion of patients in remission at week 6 was analyzed according to the intention-to-treat (ITT) paradigm. A pairwise comparison of the PUCAI at week 0 versus week 6/week 12 was analyzed using the Wilcoxon signed rank test and used according to the last-observation-carried-forward (LOCF) approach. A pairwise comparison of the FC at week 0 versus week 6 was analyzed using the Wilcoxon signed rank test and was performed only for subjects with parameters at both time points. The food macronutrient and micronutrient daily intake was based on the food records at baseline before starting the diet and during the UCED phase 1 at week 6 or week 3, and was compared using the t-test for paired samples or the Wilcoxon signed-rank test as appropriate. Only patients with both week 0 and week 3/6 diet records were entered into the nutritional analysis. All the statistical analyses were performed using the SPSS version 27 statistical analysis software (IBM, Endicott, NY, USA). 
All the tests were two sided and were considered to be significant at p values < 0.05.", "Thirty-two patients were screened between November 2014 and November 2020; eight patients were excluded (Supplementary Figure S1). Twenty-four UCED treatment courses were given to 23 eligible, consenting patients and were included in the final analysis. Significant delays in enrollment were encountered, as some of the collaborators left their institutions during the trial and there was significant delay in the ethical approval in other institutions. One patient received a second course of the UCED after a relapse two years later. The mean age of the included patients was 15.3 ± 2.9 years, with a mean disease duration of 1.4 ± 1.4 years. Demographic data are presented in Table 1. The majority had a moderate disease severity and had failed 5ASA, one patient was newly diagnosed with treatment-naïve UC, and one was coming off a course of steroids and flared.", "Clinical responses to UCED were achieved in 17/24 (70.8%) patients by week 6, and 9/24 (37.5%) had ITT clinical remissions at week 6 with the UCED (Figure 2). One patient entered into a second remission after receiving another course of UCED, two years after the initial response. For the 23 patients, 8/23 (34.8%) had ITT clinical remissions at week 6 with the UCED. Withdrawals in remission were imputed as non-remission in the ITT analysis. The median PUCAI decreased from 35 (30–40) at baseline to 12.5 (5–30) at week 6, and the mean PUCAI decreased from 34.0 ± 10 to 17.3 ± 16.9 (p = 0.001) according to the ITT analysis including all the patients in a LOCF analysis (Figure 3). There were no differences in the baseline median PUCAI score, baseline FC levels and disease extent between patients who entered remission at week 6 versus treatment failures. The FC level did not vary between the recruited centers. Six patients withdrew by week 3: two noncompliant patients (one response and one remission) and four patients who required additional therapy (Supplementary Figure S1). FC results were available for 18 patients. The median FC remained unchanged from week 0 to week 3 ((818 (630.0–1880.0) μg/g and 968.0 (272.0–1798.4) μg/g, respectively (p = 0.76)) and declined from week 3 to week 6 of the diet ((968.0 (272.0–1880.0) μg/g to 592.0 (140.7–1555.0) μg/g, respectively (p = 0.41)), corresponding to a 49% reduction from week 3 to week 6; the decline between week 0 and week 6 was not significant (p = 0.11). Among five patients who achieved remission at week 6 with baseline and week 6 FC, the median FC level decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14).\nAmong the patients who were in clinical remission at week 6, seven were slow responders and achieved clinical remission only at week 6; the other two patients achieved remission at week 2 and week 3. Two patients achieved remission at week 2 and 3, respectively, but developed recurrence of mild symptoms by week 6 and were considered failures by ITT. One patient developed a Shigella infection during the trial that led to symptoms despite a marked reduction in FC from 1167 to 111 at week 6. 
Among the patients whose PUCAIs increased from baseline to week 6, it was interesting to see that two of the three patients had proctitis with moderate disease of about 1-year duration and a family history of IBD.", "Six out of nine patients (66%) maintained remission through week 12 without additional therapy; thus, clinical remission was observed in 6/24 (25%) at week 12 based on ITT analysis. One patient withdrew despite remission and stopped the diet; two patients experienced relapses between weeks 7 and 12: one patient developed mild intermittent bleeding without other symptoms (PUCAI: 10), and one patient developed a mild relapse. The median PUCAI decreased from 35 (30–40) at baseline to 15 (5–30) at week 12 (p = 0.002) according to ITT analysis including all the patients in a LOCF analysis.", "Eight patients received treatment with antibiotics after failing the diet; 4/8 (50.0%) subsequently entered remission (Figure 2). Thus, in total, 13/24 (54.2%) patients obtained remission; of those, nine patients were on the diet alone, and four, on a sequential diet and antibiotic therapy as induction therapy.", "Three patients stopped the diet (two stopped despite good responses at week 3—a PUCAI of 0 and PUCAI of 10); thus, intolerance occurred in 3/24 (12.5%). The adherence to the diet was available for 22/24 (91.7%) patients at week 3; 19 (86.4%) patients had high compliance, 2 had fair compliance (9.1%) and 1 (4.5%) had poor compliance. Data for adherence at week 6 were available for 15 patients among the 16 patients who reached this week; all were highly compliant.", "As the diet was designed to decrease animal saturated fat, total protein, SAAs and heme while providing fiber, we analyzed dietary intake before and after UCED. The analysis of dietary intake showed a clinically relevant decrease in total energy per day, as at baseline, the mean daily energy intake was 42.3 ± 25.2 kcal/kg/day, versus 32.7 ± 14.2 at week 6 (p = 0.06). In addition to energy intake reduction, the median weight decreased from 62 (57–65) to 59 kg (52–63) after 6 weeks of the diet (p = 0.02), with a mean weight loss of 0.4 ± 0.3 kg per week. Treatment with UCED was accompanied by a significant decrease in total protein, SAAs, saturated fat and iron, while there was a significant increase in total fiber consumption per day (Figure 4).", "During the UCED treatment, eight patients had adverse events. Three patients had worsening of disease at week 3, two patients developed constipation, one patient lost weight during the first six weeks, and one patient developed a fever unrelated to the disease. Among the patients who received AMD, three patients had worsening of the disease, one patient had metronidazole intolerance with diarrhea, and one patient developed pneumonia one week after stopping the antibiotics." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Study Population and Design", "2.2. The Ulcerative Colitis Diet Intervention", "2.3. Inclusion and Exclusion Criteria", "2.4. Data Collection and Dietary Assessment", "2.5. Statistical Analysis", "3. Results", "3.1. Study Population", "3.2. Response to UCED Exclusively by Week 6", "3.3. Sustained Remission with UCED at Week 12", "3.4. Response to ADM after UCED Failure", "3.5. Tolerance and Adherence", "3.6. Nutritional Outcomes", "3.7. Safety", "4. Discussion" ]
[ "Ulcerative colitis (UC) in children is a chronic inflammatory disorder of the colon, associated with clinical symptoms including diarrhea and rectal bleeding, and has a negative impact on quality of life [1]. Epidemiologic studies have demonstrated an overall increase in the prevalence and the incidence of UC in both developed and developing countries [2,3]. The sharp change in food consumption from a non-Western to a Western diet has been suggested to contribute to this trend [3,4]. The pathogenesis of ulcerative colitis (UC) is believed to be related to dysbiosis coupled with diminished host-barrier function and unrepressed inflammation. The dysbiosis in UC is characterized by a reduction in short-chain-fatty-acid (SCFA)-producing taxa [5,6] and, in some studies, by an increase in potential pathobionts, such as Escherichia or Ruminococcus gnavus [7], or hydrogen-sulfide-reducing bacteria [6]. The manipulation of the microbiota has become one of the most intriguing targets for intervention in inflammatory bowel diseases (IBDs). Fecal microbiota transplantation (FMT), a straightforward therapy that manipulates the microbiota, has been shown to be effective in the short term in about 30% of cases [8]. The success of FMT appears to depend upon the choice of donor and their microbiota composition [9]. Recent data suggest that diet may alter intestinal microbiota, directly affect the host epithelial and goblet cells, diminish antimicrobial peptides, and influence the immune system’s responses [4,10,11,12,13,14]. However, while dietary interventions have proven to be highly effective in inducing remission in Crohn’s disease, the role of diet in UC and the potential for dietary therapies remain elusive [15,16,17]. A recent guideline from the international organization for the study of inflammatory bowel disease (IOIBD), primarily based on epidemiologic and animal models, suggested that patients with UC should reduce exposure to red or processed meat; saturated, trans and dairy fat; and certain additives [18]. However, there are no prospective randomized controlled trials with dietary interventions published to date that have demonstrated that a dietary intervention can induce remission in active UC in children. Currently, the only non-biologic medication for UC, which does not suppress the immune system, is 5-aminosalicylic acid (5ASA), which is considered to be the first-line therapy in mild to moderate cases. However, medications such as steroids, immunomodulators (IMM) and biologics have been increasingly used in the treatment of pediatric UC.\nThe manipulation of the intestinal microbiome is an emerging new strategy for the treatment of IBD, which may reduce the need for immunosuppression. Two strategies that might alter the microbiome and could be used in conjunction are diet and antibiotics. Dietary components can modulate the composition and metabolome of the gut microbiota, as well as affect the intestinal epithelium, goblet cells and innate immune system [12]. Dietary factors present in Western diets may decrease the production of mucins or cause more permeable mucous and antimicrobial peptides, as well as reshaping the microbiota [13,19]. Turner et al. demonstrated that patients with severe UC have an increased relative abundance of Gammaproteobacteria [7]. The presence of pathobionts may make a disease amenable to antibiotic therapy. 
Antibiotic combinations have been used as a microbiota-targeted therapy, including a triple combination of amoxicillin, tetracycline and metronidazole, and have been shown to be effective in active adult UC patients [20,21].\nWe hypothesize that the UC disease course can be controlled either by using a novel diet, developed especially for the induction of remission in UC; by an antibiotic strategy; or by both. To test the feasibility and efficacy of the diet, we decided to examine these strategies in a pilot trial in children with mild to moderate UC.", "2.1. Study Population and Design This was an open-label, prospective, single-arm, multicenter pilot study involving the treatment of children with active mild to moderate UC using a novel diet with an antibiotic rescue design for dietary failures. The study population targeted patients aged 8–19, with mild to moderate active disease, defined according to the Pediatric UC Activity Index (10 ≤ PUCAI ≤ 45), at diagnosis or despite maintenance therapy with 5ASA or thiopurines, stable for at least 6 weeks. All the recruited patients meeting the inclusion and exclusion criteria (see below) were introduced to a novel dietary intervention (Supplementary Table S1) termed the Ulcerative Colitis Exclusion Diet (UCED), for the first 6 weeks, and those in remission at week 6 received a step-down diet for another 6 weeks. Patients with no improvement by week 3, who failed to achieve remission by week 6, or who deteriorated between weeks 6 and 12 could decide to receive a 14-day course of amoxicillin (50 mg/kg/day; max: 500 mg of TID), metronidazole (15 mg/kg/day; max: 250 mg of TID) and doxycycline (4 mg/kg/day; max: 100 mg of BID) (AMD). Subjects with intolerance to AMD were instructed to discontinue the metronidazole. The patients on AMD were seen again 7 days after the cessation of antibiotics (Figure 1). The patients were instructed to continue their pretreatment (5ASA or immunomodulators) without any dose change. The primary outcome was the clinical remission rate (PUCAI < 10) at week 6 for the UCED intervention. The study took place at 3 sites: the Wolfson Medical Center, Holon, Israel; the Children’s Hospital of Philadelphia, PA, USA; and the IWK Health Center, Halifax, NS, Canada. All the patients signed informed consent, and all the sites obtained ethical approval (NCT 02345733).\nThis was an open-label, prospective, single-arm, multicenter pilot study involving the treatment of children with active mild to moderate UC using a novel diet with an antibiotic rescue design for dietary failures. The study population targeted patients aged 8–19, with mild to moderate active disease, defined according to the Pediatric UC Activity Index (10 ≤ PUCAI ≤ 45), at diagnosis or despite maintenance therapy with 5ASA or thiopurines, stable for at least 6 weeks. All the recruited patients meeting the inclusion and exclusion criteria (see below) were introduced to a novel dietary intervention (Supplementary Table S1) termed the Ulcerative Colitis Exclusion Diet (UCED), for the first 6 weeks, and those in remission at week 6 received a step-down diet for another 6 weeks. Patients with no improvement by week 3, who failed to achieve remission by week 6, or who deteriorated between weeks 6 and 12 could decide to receive a 14-day course of amoxicillin (50 mg/kg/day; max: 500 mg of TID), metronidazole (15 mg/kg/day; max: 250 mg of TID) and doxycycline (4 mg/kg/day; max: 100 mg of BID) (AMD). 
Subjects with intolerance to AMD were instructed to discontinue the metronidazole. The patients on AMD were seen again 7 days after the cessation of antibiotics (Figure 1). The patients were instructed to continue their pretreatment (5ASA or immunomodulators) without any dose change. The primary outcome was the clinical remission rate (PUCAI < 10) at week 6 for the UCED intervention. The study took place at 3 sites: the Wolfson Medical Center, Holon, Israel; the Children’s Hospital of Philadelphia, PA, USA; and the IWK Health Center, Halifax, NS, Canada. All the patients signed informed consent, and all the sites obtained ethical approval (NCT 02345733).\n2.2. The Ulcerative Colitis Diet Intervention The UCED diet was designed to alter dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC [10,11]. It may be described as a low-protein, high-fiber, low-fat diet that also excludes additives. The following principles guiding food exclusion and addition included decreased exposure to sulfated amino acids (SAAs) [22,23,24], total protein [12,25,26], heme [27], animal fat [23,24], saturated and polyunsaturated fat [28], and food additives [29], with exposure to tryptophan [30,31] and natural sources of pectin and resistant starch [13,27,32,33,34]. The term “exclusion diet” was used, as the main principle of the diet is an exclusion of these foods’ components, but some other foods were added. The diet was designed with mandatory, allowed and disallowed foods. The first-phase diet is rich in fruit and vegetables, and includes mandatory foods, primarily fruits and vegetables. There are allowed foods that can be consumed without limitation, such as rice and potatoes; foods with prescribed amounts, such as chicken, eggs, yoghurt and pasta; and disallowed foods, such as red meat and processed foods. The diet also reduces sugar and fructose intake from sources other than fruits. The second phase at weeks 7–12 is more permissive, with more options of fruits and vegetables, and additions to the prescribed amounts of grain products and certain pulses. The patients were instructed on the use of the diet and were provided with recipes and a dietary support system. A UCED day sample menu is presented in Supplementary Table S1.\nThe UCED diet was designed to alter dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC [10,11]. It may be described as a low-protein, high-fiber, low-fat diet that also excludes additives. The following principles guiding food exclusion and addition included decreased exposure to sulfated amino acids (SAAs) [22,23,24], total protein [12,25,26], heme [27], animal fat [23,24], saturated and polyunsaturated fat [28], and food additives [29], with exposure to tryptophan [30,31] and natural sources of pectin and resistant starch [13,27,32,33,34]. The term “exclusion diet” was used, as the main principle of the diet is an exclusion of these foods’ components, but some other foods were added. The diet was designed with mandatory, allowed and disallowed foods. The first-phase diet is rich in fruit and vegetables, and includes mandatory foods, primarily fruits and vegetables. There are allowed foods that can be consumed without limitation, such as rice and potatoes; foods with prescribed amounts, such as chicken, eggs, yoghurt and pasta; and disallowed foods, such as red meat and processed foods. 
The diet also reduces sugar and fructose intake from sources other than fruits. The second phase at weeks 7–12 is more permissive, with more options of fruits and vegetables, and additions to the prescribed amounts of grain products and certain pulses. The patients were instructed on the use of the diet and were provided with recipes and a dietary support system. A UCED day sample menu is presented in Supplementary Table S1.\n2.3. Inclusion and Exclusion Criteria The inclusion and exclusion criteria were defined based on our initial experience, showing efficacy in cases of mild–moderate UC. Our goal was to target the appropriate UC population for a pilot study that would benefit most from a novel induction dietary therapy as an induction therapy and to test its feasibility and efficacy. The inclusion criteria included informed consent; established diagnosis of UC; age between 8 and 19 years; mild to moderate active disease defined as 10 ≤ PUCAI ≤ 45; and stable medication (IMM/5ASA) use for the past 6 weeks or no medication. The exclusion criteria included any proven current infection, such as a positive stool culture, a parasite, or positivity for Clostridioides difficile toxin; antibiotic or steroid use in the past 2 weeks, with the exception of patients stopping steroids at enrolment; current or past use of biologics; PUCAI > 45; acute severe UC (PUCAI ≥ 65) in the previous 12 months; a current extraintestinal manifestation of UC; and primary sclerosing cholangitis or liver disease, and pregnancy or an allergy to one of the antibiotics excluded patients from entering the antibiotic arm but not the diet.\nThe inclusion and exclusion criteria were defined based on our initial experience, showing efficacy in cases of mild–moderate UC. Our goal was to target the appropriate UC population for a pilot study that would benefit most from a novel induction dietary therapy as an induction therapy and to test its feasibility and efficacy. The inclusion criteria included informed consent; established diagnosis of UC; age between 8 and 19 years; mild to moderate active disease defined as 10 ≤ PUCAI ≤ 45; and stable medication (IMM/5ASA) use for the past 6 weeks or no medication. The exclusion criteria included any proven current infection, such as a positive stool culture, a parasite, or positivity for Clostridioides difficile toxin; antibiotic or steroid use in the past 2 weeks, with the exception of patients stopping steroids at enrolment; current or past use of biologics; PUCAI > 45; acute severe UC (PUCAI ≥ 65) in the previous 12 months; a current extraintestinal manifestation of UC; and primary sclerosing cholangitis or liver disease, and pregnancy or an allergy to one of the antibiotics excluded patients from entering the antibiotic arm but not the diet.\n2.4. Data Collection and Dietary Assessment The patients were seen at baseline and weeks 3, 6 and 12. At week 2, a phone call visit was performed to assess the PUCAI and dietary compliance. At each visit, a PUCAI score was recorded. A clinical response was defined as a decrease in PUCAI score of at least 10 points, and clinical remission was defined as PUCAI < 10. The patients were asked to provide stool samples for fecal calprotectin (FC) at weeks 0, 3 and 6, which were analyzed locally at each participating center. We performed 24 h recall via a dietitian at weeks 0, 3 and 6. The patients were asked to record the foods and beverages and the consumed amounts in a food diary (FD) over a 3-day period (weekend and 2 weekdays) at week 3. 
A modified diet-adherence questionnaire [16] was completed on weeks 3 and 6. High diet adherence was determined by finding high adherence on the questionnaire and by the dietitian’s assessment based on direct questioning. Poor compliance was defined by having low compliance in any assessment.\nThe patients were seen at baseline and weeks 3, 6 and 12. At week 2, a phone call visit was performed to assess the PUCAI and dietary compliance. At each visit, a PUCAI score was recorded. A clinical response was defined as a decrease in PUCAI score of at least 10 points, and clinical remission was defined as PUCAI < 10. The patients were asked to provide stool samples for fecal calprotectin (FC) at weeks 0, 3 and 6, which were analyzed locally at each participating center. We performed 24 h recall via a dietitian at weeks 0, 3 and 6. The patients were asked to record the foods and beverages and the consumed amounts in a food diary (FD) over a 3-day period (weekend and 2 weekdays) at week 3. A modified diet-adherence questionnaire [16] was completed on weeks 3 and 6. High diet adherence was determined by finding high adherence on the questionnaire and by the dietitian’s assessment based on direct questioning. Poor compliance was defined by having low compliance in any assessment.\n2.5. Statistical Analysis Continuous variables were evaluated for distribution normality and are reported as medians (interquartile ranges, IQRs) or means (standard deviations, SDs) as appropriate. Nominal variables are summarized as frequencies and are presented as n (%). The primary end point of the proportion of patients in remission at week 6 was analyzed according to the intention-to-treat (ITT) paradigm. A pairwise comparison of the PUCAI at week 0 versus week 6/week 12 was analyzed using the Wilcoxon signed rank test and used according to the last-observation-carried-forward (LOCF) approach. A pairwise comparison of the FC at week 0 versus week 6 was analyzed using the Wilcoxon signed rank test and was performed only for subjects with parameters at both time points. The food macronutrient and micronutrient daily intake was based on the food records at baseline before starting the diet and during the UCED phase 1 at week 6 or week 3, and was compared using the t-test for paired samples or the Wilcoxon signed-rank test as appropriate. Only patients with both week 0 and week 3/6 diet records were entered into the nutritional analysis. All the statistical analyses were performed using the SPSS version 27 statistical analysis software (IBM, Endicott, NY, USA). All the tests were two sided and were considered to be significant at p values < 0.05.\nContinuous variables were evaluated for distribution normality and are reported as medians (interquartile ranges, IQRs) or means (standard deviations, SDs) as appropriate. Nominal variables are summarized as frequencies and are presented as n (%). The primary end point of the proportion of patients in remission at week 6 was analyzed according to the intention-to-treat (ITT) paradigm. A pairwise comparison of the PUCAI at week 0 versus week 6/week 12 was analyzed using the Wilcoxon signed rank test and used according to the last-observation-carried-forward (LOCF) approach. A pairwise comparison of the FC at week 0 versus week 6 was analyzed using the Wilcoxon signed rank test and was performed only for subjects with parameters at both time points. 
The food macronutrient and micronutrient daily intake was based on the food records at baseline before starting the diet and during the UCED phase 1 at week 6 or week 3, and was compared using the t-test for paired samples or the Wilcoxon signed-rank test as appropriate. Only patients with both week 0 and week 3/6 diet records were entered into the nutritional analysis. All the statistical analyses were performed using the SPSS version 27 statistical analysis software (IBM, Endicott, NY, USA). All the tests were two sided and were considered to be significant at p values < 0.05.", "This was an open-label, prospective, single-arm, multicenter pilot study involving the treatment of children with active mild to moderate UC using a novel diet with an antibiotic rescue design for dietary failures. The study population targeted patients aged 8–19, with mild to moderate active disease, defined according to the Pediatric UC Activity Index (10 ≤ PUCAI ≤ 45), at diagnosis or despite maintenance therapy with 5ASA or thiopurines, stable for at least 6 weeks. All the recruited patients meeting the inclusion and exclusion criteria (see below) were introduced to a novel dietary intervention (Supplementary Table S1) termed the Ulcerative Colitis Exclusion Diet (UCED), for the first 6 weeks, and those in remission at week 6 received a step-down diet for another 6 weeks. Patients with no improvement by week 3, who failed to achieve remission by week 6, or who deteriorated between weeks 6 and 12 could decide to receive a 14-day course of amoxicillin (50 mg/kg/day; max: 500 mg of TID), metronidazole (15 mg/kg/day; max: 250 mg of TID) and doxycycline (4 mg/kg/day; max: 100 mg of BID) (AMD). Subjects with intolerance to AMD were instructed to discontinue the metronidazole. The patients on AMD were seen again 7 days after the cessation of antibiotics (Figure 1). The patients were instructed to continue their pretreatment (5ASA or immunomodulators) without any dose change. The primary outcome was the clinical remission rate (PUCAI < 10) at week 6 for the UCED intervention. The study took place at 3 sites: the Wolfson Medical Center, Holon, Israel; the Children’s Hospital of Philadelphia, PA, USA; and the IWK Health Center, Halifax, NS, Canada. All the patients signed informed consent, and all the sites obtained ethical approval (NCT 02345733).", "The UCED diet was designed to alter dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC [10,11]. It may be described as a low-protein, high-fiber, low-fat diet that also excludes additives. The following principles guiding food exclusion and addition included decreased exposure to sulfated amino acids (SAAs) [22,23,24], total protein [12,25,26], heme [27], animal fat [23,24], saturated and polyunsaturated fat [28], and food additives [29], with exposure to tryptophan [30,31] and natural sources of pectin and resistant starch [13,27,32,33,34]. The term “exclusion diet” was used, as the main principle of the diet is an exclusion of these foods’ components, but some other foods were added. The diet was designed with mandatory, allowed and disallowed foods. The first-phase diet is rich in fruit and vegetables, and includes mandatory foods, primarily fruits and vegetables. 
There are allowed foods that can be consumed without limitation, such as rice and potatoes; foods with prescribed amounts, such as chicken, eggs, yoghurt and pasta; and disallowed foods, such as red meat and processed foods. The diet also reduces sugar and fructose intake from sources other than fruits. The second phase at weeks 7–12 is more permissive, with more options of fruits and vegetables, and additions to the prescribed amounts of grain products and certain pulses. The patients were instructed on the use of the diet and were provided with recipes and a dietary support system. A UCED day sample menu is presented in Supplementary Table S1.", "The inclusion and exclusion criteria were defined based on our initial experience, showing efficacy in cases of mild–moderate UC. Our goal was to target the appropriate UC population for a pilot study that would benefit most from a novel induction dietary therapy as an induction therapy and to test its feasibility and efficacy. The inclusion criteria included informed consent; established diagnosis of UC; age between 8 and 19 years; mild to moderate active disease defined as 10 ≤ PUCAI ≤ 45; and stable medication (IMM/5ASA) use for the past 6 weeks or no medication. The exclusion criteria included any proven current infection, such as a positive stool culture, a parasite, or positivity for Clostridioides difficile toxin; antibiotic or steroid use in the past 2 weeks, with the exception of patients stopping steroids at enrolment; current or past use of biologics; PUCAI > 45; acute severe UC (PUCAI ≥ 65) in the previous 12 months; a current extraintestinal manifestation of UC; and primary sclerosing cholangitis or liver disease, and pregnancy or an allergy to one of the antibiotics excluded patients from entering the antibiotic arm but not the diet.", "The patients were seen at baseline and weeks 3, 6 and 12. At week 2, a phone call visit was performed to assess the PUCAI and dietary compliance. At each visit, a PUCAI score was recorded. A clinical response was defined as a decrease in PUCAI score of at least 10 points, and clinical remission was defined as PUCAI < 10. The patients were asked to provide stool samples for fecal calprotectin (FC) at weeks 0, 3 and 6, which were analyzed locally at each participating center. We performed 24 h recall via a dietitian at weeks 0, 3 and 6. The patients were asked to record the foods and beverages and the consumed amounts in a food diary (FD) over a 3-day period (weekend and 2 weekdays) at week 3. A modified diet-adherence questionnaire [16] was completed on weeks 3 and 6. High diet adherence was determined by finding high adherence on the questionnaire and by the dietitian’s assessment based on direct questioning. Poor compliance was defined by having low compliance in any assessment.", "Continuous variables were evaluated for distribution normality and are reported as medians (interquartile ranges, IQRs) or means (standard deviations, SDs) as appropriate. Nominal variables are summarized as frequencies and are presented as n (%). The primary end point of the proportion of patients in remission at week 6 was analyzed according to the intention-to-treat (ITT) paradigm. A pairwise comparison of the PUCAI at week 0 versus week 6/week 12 was analyzed using the Wilcoxon signed rank test and used according to the last-observation-carried-forward (LOCF) approach. 
A pairwise comparison of the FC at week 0 versus week 6 was analyzed using the Wilcoxon signed rank test and was performed only for subjects with parameters at both time points. The food macronutrient and micronutrient daily intake was based on the food records at baseline before starting the diet and during the UCED phase 1 at week 6 or week 3, and was compared using the t-test for paired samples or the Wilcoxon signed-rank test as appropriate. Only patients with both week 0 and week 3/6 diet records were entered into the nutritional analysis. All the statistical analyses were performed using the SPSS version 27 statistical analysis software (IBM, Endicott, NY, USA). All the tests were two sided and were considered to be significant at p values < 0.05.", "3.1. Study Population Thirty-two patients were screened between November 2014 and November 2020; eight patients were excluded (Supplementary Figure S1). Twenty-four UCED treatment courses were given to 23 eligible, consenting patients and were included in the final analysis. Significant delays in enrollment were encountered, as some of the collaborators left their institutions during the trial and there was significant delay in the ethical approval in other institutions. One patient received a second course of the UCED after a relapse two years later. The mean age of the included patients was 15.3 ± 2.9 years, with a mean disease duration of 1.4 ± 1.4 years. Demographic data are presented in Table 1. The majority had a moderate disease severity and had failed 5ASA, one patient was newly diagnosed with treatment-naïve UC, and one was coming off a course of steroids and flared.\nThirty-two patients were screened between November 2014 and November 2020; eight patients were excluded (Supplementary Figure S1). Twenty-four UCED treatment courses were given to 23 eligible, consenting patients and were included in the final analysis. Significant delays in enrollment were encountered, as some of the collaborators left their institutions during the trial and there was significant delay in the ethical approval in other institutions. One patient received a second course of the UCED after a relapse two years later. The mean age of the included patients was 15.3 ± 2.9 years, with a mean disease duration of 1.4 ± 1.4 years. Demographic data are presented in Table 1. The majority had a moderate disease severity and had failed 5ASA, one patient was newly diagnosed with treatment-naïve UC, and one was coming off a course of steroids and flared.\n3.2. Response to UCED Exclusively by Week 6 Clinical responses to UCED were achieved in 17/24 (70.8%) patients by week 6, and 9/24 (37.5%) had ITT clinical remissions at week 6 with the UCED (Figure 2). One patient entered into a second remission after receiving another course of UCED, two years after the initial response. For the 23 patients, 8/23 (34.8%) had ITT clinical remissions at week 6 with the UCED. Withdrawals in remission were imputed as non-remission in the ITT analysis. The median PUCAI decreased from 35 (30–40) at baseline to 12.5 (5–30) at week 6, and the mean PUCAI decreased from 34.0 ± 10 to 17.3 ± 16.9 (p = 0.001) according to the ITT analysis including all the patients in a LOCF analysis (Figure 3). There were no differences in the baseline median PUCAI score, baseline FC levels and disease extent between patients who entered remission at week 6 versus treatment failures. The FC level did not vary between the recruited centers. 
Six patients withdrew by week 3: two noncompliant patients (one response and one remission) and four patients who required additional therapy (Supplementary Figure S1). FC results were available for 18 patients. The median FC remained unchanged from week 0 to week 3 ((818 (630.0–1880.0) μg/g and 968.0 (272.0–1798.4) μg/g, respectively (p = 0.76)) and declined from week 3 to week 6 of the diet ((968.0 (272.0–1880.0) μg/g to 592.0 (140.7–1555.0) μg/g, respectively (p = 0.41)), corresponding to a 49% reduction from week 3 to week 6; the decline between week 0 and week 6 was not significant (p = 0.11). Among five patients who achieved remission at week 6 with baseline and week 6 FC, the median FC level decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14).\nAmong the patients who were in clinical remission at week 6, seven were slow responders and achieved clinical remission only at week 6; the other two patients achieved remission at week 2 and week 3. Two patients achieved remission at week 2 and 3, respectively, but developed recurrence of mild symptoms by week 6 and were considered failures by ITT. One patient developed a Shigella infection during the trial that led to symptoms despite a marked reduction in FC from 1167 to 111 at week 6. Among the patients whose PUCAIs increased from baseline to week 6, it was interesting to see that two of the three patients had proctitis with moderate disease of about 1-year duration and a family history of IBD.\nClinical responses to UCED were achieved in 17/24 (70.8%) patients by week 6, and 9/24 (37.5%) had ITT clinical remissions at week 6 with the UCED (Figure 2). One patient entered into a second remission after receiving another course of UCED, two years after the initial response. For the 23 patients, 8/23 (34.8%) had ITT clinical remissions at week 6 with the UCED. Withdrawals in remission were imputed as non-remission in the ITT analysis. The median PUCAI decreased from 35 (30–40) at baseline to 12.5 (5–30) at week 6, and the mean PUCAI decreased from 34.0 ± 10 to 17.3 ± 16.9 (p = 0.001) according to the ITT analysis including all the patients in a LOCF analysis (Figure 3). There were no differences in the baseline median PUCAI score, baseline FC levels and disease extent between patients who entered remission at week 6 versus treatment failures. The FC level did not vary between the recruited centers. Six patients withdrew by week 3: two noncompliant patients (one response and one remission) and four patients who required additional therapy (Supplementary Figure S1). FC results were available for 18 patients. The median FC remained unchanged from week 0 to week 3 ((818 (630.0–1880.0) μg/g and 968.0 (272.0–1798.4) μg/g, respectively (p = 0.76)) and declined from week 3 to week 6 of the diet ((968.0 (272.0–1880.0) μg/g to 592.0 (140.7–1555.0) μg/g, respectively (p = 0.41)), corresponding to a 49% reduction from week 3 to week 6; the decline between week 0 and week 6 was not significant (p = 0.11). Among five patients who achieved remission at week 6 with baseline and week 6 FC, the median FC level decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14).\nAmong the patients who were in clinical remission at week 6, seven were slow responders and achieved clinical remission only at week 6; the other two patients achieved remission at week 2 and week 3. 
3.3. Sustained Remission with UCED at Week 12
Six out of nine patients (66%) maintained remission through week 12 without additional therapy; thus, clinical remission was observed in 6/24 (25%) at week 12 based on the ITT analysis. One patient withdrew despite remission and stopped the diet; two patients experienced relapses between weeks 7 and 12: one developed mild intermittent bleeding without other symptoms (PUCAI: 10), and one developed a mild relapse. The median PUCAI decreased from 35 (30–40) at baseline to 15 (5–30) at week 12 (p = 0.002) according to the ITT analysis including all patients in a LOCF analysis.

3.4. Response to AMD after UCED Failure
Eight patients received treatment with antibiotics after failing the diet; 4/8 (50.0%) subsequently entered remission (Figure 2). Thus, in total, 13/24 (54.2%) patients obtained remission; of those, nine did so on the diet alone and four after a sequential diet and antibiotic therapy as induction therapy.

3.5. Tolerance and Adherence
Three patients stopped the diet (two stopped despite good responses at week 3, with PUCAI scores of 0 and 10); thus, intolerance occurred in 3/24 (12.5%). Adherence data were available for 22/24 (91.7%) patients at week 3; 19 (86.4%) had high compliance, 2 (9.1%) had fair compliance and 1 (4.5%) had poor compliance. Adherence data at week 6 were available for 15 of the 16 patients who reached this week; all were highly compliant.
3.6. Nutritional Outcomes
As the diet was designed to decrease animal saturated fat, total protein, SAAs and heme while providing fiber, we analyzed dietary intake before and after the UCED. The analysis showed a clinically relevant decrease in total daily energy intake: the mean was 42.3 ± 25.2 kcal/kg/day at baseline versus 32.7 ± 14.2 kcal/kg/day at week 6 (p = 0.06). In addition to the reduction in energy intake, the median weight decreased from 62 (57–65) kg to 59 (52–63) kg after 6 weeks of the diet (p = 0.02), with a mean weight loss of 0.4 ± 0.3 kg per week. Treatment with the UCED was accompanied by a significant decrease in total protein, SAA, saturated fat and iron intake, and a significant increase in total daily fiber consumption (Figure 4).

3.7. Safety
During the UCED treatment, eight patients had adverse events. Three patients had worsening of disease at week 3, two developed constipation, one lost weight during the first six weeks, and one developed a fever unrelated to the disease. Among the patients who received AMD, three had worsening of the disease, one had metronidazole intolerance with diarrhea, and one developed pneumonia one week after stopping the antibiotics.
4. Discussion
In this pilot study, we evaluated two therapies targeting the microbiome sequentially. The first intervention was a novel diet targeting the intestinal epithelium, goblet cells and innate immune system, in addition to the microbiota composition. The second intervention, used only in dietary-failure patients, was an established antibiotic protocol [20,21] studied in adults but never prospectively evaluated in children. The main purpose of this study was to evaluate the feasibility of this specific dietary intervention and to improve its design and adherence before starting an interventional randomized controlled trial. In light of this study's outcomes, we will test the superiority of the UCED administered together with a 5ASA regimen, compared with 5ASA alone, in pediatric patients with mild to moderate UC in a randomized controlled trial.
We demonstrated clinical responses in 70% of the patients with the UCED at week 6 and clinical remission in 37.5% of the patients at week 6 by ITT analysis. This was accompanied by a decline in FC, primarily after week 3, which did not reach statistical significance, likely due to the small sample size. The FC at week 6 was available for 5/9 patients in remission before any change in therapy; among these patients, the median FC decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14).
Furthermore, 50% of those who failed to obtain remission with the diet entered remission after the addition of a 14-day course of AMD. Thus, over 50% of the patients obtained remission without immune suppression; of those, nine did so on the diet alone and four after sequential diet and antibiotic therapy as induction therapy. We chose this sequential design in order to gain insight into the independent effect of each treatment and to provide pilot data that would allow us to proceed to randomized controlled trials in the future.
The UCED was designed to minimize the impact of dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC.
The UCED was designed to decrease protein, SAAs and saturated fatty acids while providing fiber as a substrate for SCFA production and to prevent fiber deprivation, which may deplete the mucus layer. We were able to demonstrate that the intake of these components was, in fact, reduced among our patients, while fiber intake significantly increased (Figure 4). Van der Post et al. have shown that a permeable mucus layer may be an early event in UC [10,11]. Microbial SCFA production is essential for providing fuel for epithelial cells and affects the regulation of the immune system by inducing regulatory T cells [35]. A high-fat diet and maltodextrin have been shown to negatively affect goblet cells [24,36,37,38]. Epithelial damage is also a hallmark of UC, and high-fat and high-protein diets may negatively affect epithelial cells [37,39]. Permeable mucus has been associated with Proteobacteria expansion [40], and a high-fat diet has been shown to be associated with Proteobacteria and Enterobacteriaceae expansion [39,40].
Another factor that may affect the mucus layer is fiber deprivation [34]; certain fibers, such as pectins, might induce more viscous mucus and have an anti-inflammatory effect [13,41]. Finally, high levels of hydrogen sulfide may have a toxic effect on epithelial cells and can cause a breakdown of the mucin network [22]. Substrates for hydrogen-sulfide production are predominantly derived from SAAs [42], while fruits and vegetables provide substrates for short-chain fatty acids, which regulate the production of protein metabolites and maintain tight junctions [43,44]. We used these principles to design the diet, and the results of this pilot study have led us to launch a randomized controlled trial (NIH NCT03980405). At this juncture, we cannot be certain which included or excluded components were responsible for the clinical effect, or what the effect on the microbiome was.
A few clinical studies have suggested a link between diet and UC, but the data are conflicting. A prospective interventional crossover study in 18 adult UC patients showed that a low-fat, high-fiber diet decreased markers of inflammation and reduced intestinal dysbiosis [45]. A large prospective UC cohort followed from remission suggested that relapse was associated with the intake of the saturated fatty acid myristic acid, found primarily in grain-fed beef and dairy [28,46]. Chiba et al. demonstrated that a plant-based diet (PBD) contributed to preventing relapse at one-year follow-up in UC patients, and therapy incorporating a PBD induced remission in about one-third of patients with mild UC [47]. However, a large Swiss prospective cohort study showed that vegetarians had no advantage over omnivores with regard to disease activity, hospitalizations, complications or surgery in UC [48].
Most of the patients in our study tolerated the diet well, and only 12.5% discontinued the diet.
Interestingly, two of these patients were in remission at week 3 but had a mild increase in symptoms by week 6, and one patient who had responded very well to the diet developed a Shigella infection during the trial that led to symptoms despite a marked reduction in FC from 1167 to 111 μg/g at week 6; these four patients were considered failures in the ITT analysis.
We observed that the majority of the patients responded to the diet only between weeks 3 and 6; this is supported by the FC data, which did not show a decline between baseline and week 3 but showed a 49% decline between weeks 3 and 6. This differs from the response to exclusive enteral nutrition or the Crohn's disease exclusion diet, in which the response is rapid and the majority of patients achieve remission during the first 3 weeks of dietary therapy [49]. Despite the decline, the median FC remained high at week 6; larger studies, including colonoscopic assessment, are required to determine the impact of the diet on gut healing.
Antibiotics may be a double-edged sword in IBD: they may increase dysbiosis and bacterial translocation [50] but, on the other hand, may be effective in refractory patients [7,20,21], and diet and antibiotics interact with regard to inflammation in rodent models (a high-fat diet may increase antibiotic-induced dysbiosis) [39]. To date, the utility of the antibiotic combination used in this trial has been demonstrated prospectively only for severe or steroid-dependent adult UC [7,20,21]. Here, we demonstrate, in a small cohort of diet-refractory patients, that antibiotic therapy may be of benefit for achieving remission; a prospective randomized controlled trial is currently underway to evaluate this further.
There are several limitations to this study. This was a pilot trial conducted as a proof of concept to generate data that would allow progress to larger trials if the results were positive; thus, the sample size was limited. We only investigated children with mild to moderate disease; based on previous clinical experience, this group is the most likely to benefit from the combination of diet and antibiotics, and this combination might be used to avoid steroids and immunosuppressive therapy in the future. We were also hampered by the fact that not all the patients provided FC samples as requested. In addition, we saw a weight reduction after 6 weeks of the diet, which might indicate that the diet is not suitable for severe cases of UC with malnutrition. Another weakness of the study is that we did not perform colonoscopy to evaluate mucosal healing. However, we recently published a randomized controlled trial in adult patients with active refractory UC showing that the UCED alone appeared to achieve mucosal healing, as mucosal healing (Mayo 0) was achieved only in the group that received the UCED (3/15, 20%) versus 0/36 of the patients who received single-donor fecal transplantation with or without diet (p = 0.022) [51]. However, we emphasize that, without a placebo group, caution is needed in interpreting our results, as some of the response could be mediated by placebo effects.
The strengths of this pilot trial were its prospective nature, the use of defined criteria for inclusion and remission, and the fact that it is the first report of this novel diet.
In conclusion, the results of this pilot trial suggest that both diet and antibiotics may have a role in the induction of remission in mild to moderate UC in children. This needs to be explored further with a larger sample size. Randomized controlled trials of both therapies are now underway to provide more evidence, which could facilitate the use of microbiome-targeted therapies in conjunction with other medical therapy, or instead of immune suppression, in the future.
[ "intro", null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion" ]
[ "ulcerative colitis", "child", "diet", "antibiotics", "remission", "treatment" ]
1. Introduction
Ulcerative colitis (UC) in children is a chronic inflammatory disorder of the colon, associated with clinical symptoms including diarrhea and rectal bleeding, and has a negative impact on quality of life [1]. Epidemiologic studies have demonstrated an overall increase in the prevalence and incidence of UC in both developed and developing countries [2,3]. The sharp change in food consumption from a non-Western to a Western diet has been suggested to contribute to this trend [3,4]. The pathogenesis of UC is believed to be related to dysbiosis coupled with diminished host-barrier function and unrepressed inflammation. The dysbiosis in UC is characterized by a reduction in short-chain-fatty-acid (SCFA)-producing taxa [5,6] and, in some studies, by an increase in potential pathobionts, such as Escherichia or Ruminococcus gnavus [7], or hydrogen-sulfide-reducing bacteria [6]. The manipulation of the microbiota has become one of the most intriguing targets for intervention in inflammatory bowel diseases (IBDs). Fecal microbiota transplantation (FMT), a straightforward therapy that manipulates the microbiota, has been shown to be effective in the short term in about 30% of cases [8]. The success of FMT appears to depend upon the choice of donor and their microbiota composition [9]. Recent data suggest that diet may alter the intestinal microbiota, directly affect the host epithelial and goblet cells, diminish antimicrobial peptides, and influence the immune system's responses [4,10,11,12,13,14]. However, while dietary interventions have proven to be highly effective in inducing remission in Crohn's disease, the role of diet in UC and the potential for dietary therapies remain elusive [15,16,17]. A recent guideline from the International Organization for the Study of Inflammatory Bowel Diseases (IOIBD), based primarily on epidemiologic data and animal models, suggested that patients with UC should reduce exposure to red or processed meat; saturated, trans and dairy fat; and certain additives [18]. However, no prospective randomized controlled trials published to date have demonstrated that a dietary intervention can induce remission in active UC in children. Currently, the only non-biologic medication for UC that does not suppress the immune system is 5-aminosalicylic acid (5ASA), which is considered the first-line therapy in mild to moderate cases. However, medications such as steroids, immunomodulators (IMM) and biologics have been increasingly used in the treatment of pediatric UC. The manipulation of the intestinal microbiome is an emerging strategy for the treatment of IBD, which may reduce the need for immunosuppression. Two strategies that might alter the microbiome and could be used in conjunction are diet and antibiotics. Dietary components can modulate the composition and metabolome of the gut microbiota, as well as affect the intestinal epithelium, goblet cells and innate immune system [12]. Dietary factors present in Western diets may decrease the production of mucins and antimicrobial peptides or render the mucus layer more permeable, as well as reshaping the microbiota [13,19]. Turner et al. demonstrated that patients with severe UC have an increased relative abundance of Gammaproteobacteria [7]. The presence of pathobionts may make a disease amenable to antibiotic therapy.
Antibiotic combinations have been used as a microbiota-targeted therapy, including a triple combination of amoxicillin, tetracycline and metronidazole, and have been shown to be effective in active adult UC patients [20,21]. We hypothesized that the UC disease course can be controlled either by a novel diet developed especially for the induction of remission in UC, by an antibiotic strategy, or by both. To test the feasibility and efficacy of the diet, we examined these strategies in a pilot trial in children with mild to moderate UC.

2. Materials and Methods

2.1. Study Population and Design
This was an open-label, prospective, single-arm, multicenter pilot study involving the treatment of children with active mild to moderate UC using a novel diet, with an antibiotic rescue design for dietary failures. The study population targeted patients aged 8–19 years with mild to moderate active disease, defined according to the Pediatric UC Activity Index (10 ≤ PUCAI ≤ 45), at diagnosis or despite maintenance therapy with 5ASA or thiopurines that had been stable for at least 6 weeks. All recruited patients meeting the inclusion and exclusion criteria (see below) were introduced to a novel dietary intervention (Supplementary Table S1), termed the Ulcerative Colitis Exclusion Diet (UCED), for the first 6 weeks, and those in remission at week 6 received a step-down diet for another 6 weeks. Patients with no improvement by week 3, who failed to achieve remission by week 6, or who deteriorated between weeks 6 and 12 could decide to receive a 14-day course of amoxicillin (50 mg/kg/day; max 500 mg TID), metronidazole (15 mg/kg/day; max 250 mg TID) and doxycycline (4 mg/kg/day; max 100 mg BID) (AMD). Subjects with intolerance to AMD were instructed to discontinue the metronidazole. The patients on AMD were seen again 7 days after the cessation of antibiotics (Figure 1). The patients were instructed to continue their pretreatment (5ASA or immunomodulators) without any dose change. The primary outcome was the clinical remission rate (PUCAI < 10) at week 6 for the UCED intervention. The study took place at three sites: the Wolfson Medical Center, Holon, Israel; the Children's Hospital of Philadelphia, PA, USA; and the IWK Health Center, Halifax, NS, Canada. All patients signed informed consent, and all sites obtained ethical approval (NCT 02345733).
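The AMD course above is weight-based with per-dose caps. As a simple illustration of how such capped, weight-based dosing can be computed, the sketch below reuses the doses quoted in the protocol; it is illustrative only, not a prescribing tool, and the patient weight and rounding behaviour are assumptions for the example.

```python
# Illustrative sketch only (not a prescribing tool): weight-based daily dosing
# with a per-dose cap, using the AMD doses quoted in the protocol above.
# The example weight is hypothetical and no clinical rounding rules are applied.

def capped_dose_mg(weight_kg: float, mg_per_kg_day: float,
                   doses_per_day: int, max_per_dose_mg: float) -> float:
    """Return the per-dose amount in mg, capped at the protocol maximum."""
    per_dose = weight_kg * mg_per_kg_day / doses_per_day
    return min(per_dose, max_per_dose_mg)

weight = 45.0  # kg, hypothetical patient
regimen = {
    "amoxicillin":   dict(mg_per_kg_day=50, doses_per_day=3, max_per_dose_mg=500),
    "metronidazole": dict(mg_per_kg_day=15, doses_per_day=3, max_per_dose_mg=250),
    "doxycycline":   dict(mg_per_kg_day=4,  doses_per_day=2, max_per_dose_mg=100),
}

for drug, params in regimen.items():
    dose = capped_dose_mg(weight, **params)
    print(f"{drug}: {dose:.0f} mg per dose, {params['doses_per_day']}x/day")
```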
2.2. The Ulcerative Colitis Diet Intervention
The UCED was designed to alter dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC [10,11]. It may be described as a low-protein, high-fiber, low-fat diet that also excludes additives. The principles guiding food exclusion and addition included decreased exposure to sulfated amino acids (SAAs) [22,23,24], total protein [12,25,26], heme [27], animal fat [23,24], saturated and polyunsaturated fat [28], and food additives [29], with exposure to tryptophan [30,31] and natural sources of pectin and resistant starch [13,27,32,33,34]. The term "exclusion diet" was used because the main principle of the diet is the exclusion of these food components, although some other foods were added. The diet was designed with mandatory, allowed and disallowed foods. The first-phase diet is rich in fruits and vegetables, which make up most of the mandatory foods. There are allowed foods that can be consumed without limitation, such as rice and potatoes; foods with prescribed amounts, such as chicken, eggs, yoghurt and pasta; and disallowed foods, such as red meat and processed foods. The diet also reduces sugar and fructose intake from sources other than fruits. The second phase, at weeks 7–12, is more permissive, with more options for fruits and vegetables and additions to the prescribed amounts of grain products and certain pulses. The patients were instructed on the use of the diet and were provided with recipes and a dietary support system. A UCED one-day sample menu is presented in Supplementary Table S1.
2.3. Inclusion and Exclusion Criteria
The inclusion and exclusion criteria were defined based on our initial experience showing efficacy in mild to moderate UC. Our goal was to target the UC population most likely to benefit from a novel dietary induction therapy in a pilot study and to test its feasibility and efficacy. The inclusion criteria were informed consent; an established diagnosis of UC; age between 8 and 19 years; mild to moderate active disease, defined as 10 ≤ PUCAI ≤ 45; and stable medication (IMM/5ASA) use for the past 6 weeks or no medication. The exclusion criteria were any proven current infection, such as a positive stool culture, a parasite, or positivity for Clostridioides difficile toxin; antibiotic or steroid use in the past 2 weeks, with the exception of patients stopping steroids at enrolment; current or past use of biologics; PUCAI > 45; acute severe UC (PUCAI ≥ 65) in the previous 12 months; a current extraintestinal manifestation of UC; and primary sclerosing cholangitis or liver disease. Pregnancy or an allergy to one of the antibiotics excluded patients from entering the antibiotic arm but not from the diet.

2.4. Data Collection and Dietary Assessment
The patients were seen at baseline and at weeks 3, 6 and 12. At week 2, a phone visit was performed to assess the PUCAI and dietary compliance. At each visit, a PUCAI score was recorded. A clinical response was defined as a decrease in the PUCAI score of at least 10 points, and clinical remission was defined as a PUCAI < 10. The patients were asked to provide stool samples for fecal calprotectin (FC) at weeks 0, 3 and 6, which were analyzed locally at each participating center. A dietitian performed a 24 h dietary recall at weeks 0, 3 and 6. The patients were asked to record the foods and beverages consumed and their amounts in a food diary (FD) over a 3-day period (a weekend day and 2 weekdays) at week 3. A modified diet-adherence questionnaire [16] was completed at weeks 3 and 6. High diet adherence was determined by high adherence on the questionnaire and by the dietitian's assessment based on direct questioning. Poor compliance was defined as low compliance on any assessment.
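The PUCAI-based outcome definitions above (response = a drop of at least 10 points from baseline; remission = PUCAI < 10) are simple enough to express directly in code. The sketch below is an illustrative encoding of those two rules with hypothetical scores; it is not code used in the study, and the function name and thresholds-as-arguments are my own.

```python
# Illustrative encoding of the outcome definitions used in the protocol:
# clinical response = PUCAI drop of >= 10 points from baseline,
# clinical remission = PUCAI < 10. Not study code; names are illustrative.

def classify_outcome(baseline_pucai: int, followup_pucai: int,
                     remission_cutoff: int = 10, response_drop: int = 10) -> str:
    """Classify a follow-up visit as 'remission', 'response' or 'no response'."""
    if followup_pucai < remission_cutoff:
        return "remission"
    if baseline_pucai - followup_pucai >= response_drop:
        return "response"
    return "no response"

# Example classifications with hypothetical scores.
print(classify_outcome(35, 15))   # -> "response"
print(classify_outcome(35, 5))    # -> "remission"
print(classify_outcome(30, 25))   # -> "no response"
```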
2.5. Statistical Analysis
Continuous variables were evaluated for normality of distribution and are reported as medians (interquartile ranges, IQRs) or means (standard deviations, SDs), as appropriate. Nominal variables are summarized as frequencies and are presented as n (%). The primary end point, the proportion of patients in remission at week 6, was analyzed according to the intention-to-treat (ITT) paradigm. A pairwise comparison of the PUCAI at week 0 versus weeks 6 and 12 was analyzed using the Wilcoxon signed-rank test according to the last-observation-carried-forward (LOCF) approach. A pairwise comparison of the FC at week 0 versus week 6 was analyzed using the Wilcoxon signed-rank test and was performed only for subjects with values at both time points. The daily macronutrient and micronutrient intake was based on the food records at baseline, before starting the diet, and during UCED phase 1 at week 3 or week 6, and was compared using the t-test for paired samples or the Wilcoxon signed-rank test, as appropriate. Only patients with both week 0 and week 3/6 diet records were entered into the nutritional analysis. All statistical analyses were performed using SPSS version 27 (IBM, Endicott, NY, USA). All tests were two sided and were considered significant at p values < 0.05.
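As a rough illustration of the paired comparisons described above, the sketch below applies a paired t-test and the Wilcoxon signed-rank test from scipy to hypothetical before/after fiber intake values. It is not the study's SPSS analysis; the data and the choice of fiber as the variable are invented for the example.

```python
# Illustrative sketch of the paired comparisons described above, using
# hypothetical intake values (not the study data, and not the SPSS analysis).
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical daily fiber intake (g/day) for the same 8 patients at
# baseline and at week 6 of the diet.
fiber_week0 = np.array([12.0, 9.5, 15.0, 8.0, 11.0, 10.5, 14.0, 9.0])
fiber_week6 = np.array([22.0, 18.0, 25.0, 16.5, 20.0, 19.0, 27.0, 17.0])

# Paired t-test (appropriate when the paired differences are roughly normal)...
t_stat, t_p = ttest_rel(fiber_week0, fiber_week6)

# ...or the Wilcoxon signed-rank test as a non-parametric alternative.
w_stat, w_p = wilcoxon(fiber_week0, fiber_week6)

print(f"paired t-test: t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```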
A pairwise comparison of the FC at week 0 versus week 6 was analyzed using the Wilcoxon signed rank test and was performed only for subjects with parameters at both time points. The food macronutrient and micronutrient daily intake was based on the food records at baseline before starting the diet and during the UCED phase 1 at week 6 or week 3, and was compared using the t-test for paired samples or the Wilcoxon signed-rank test as appropriate. Only patients with both week 0 and week 3/6 diet records were entered into the nutritional analysis. All the statistical analyses were performed using the SPSS version 27 statistical analysis software (IBM, Endicott, NY, USA). All the tests were two sided and were considered to be significant at p values < 0.05. 3. Results: 3.1. Study Population Thirty-two patients were screened between November 2014 and November 2020; eight patients were excluded (Supplementary Figure S1). Twenty-four UCED treatment courses were given to 23 eligible, consenting patients and were included in the final analysis. Significant delays in enrollment were encountered, as some of the collaborators left their institutions during the trial and there was significant delay in the ethical approval in other institutions. One patient received a second course of the UCED after a relapse two years later. The mean age of the included patients was 15.3 ± 2.9 years, with a mean disease duration of 1.4 ± 1.4 years. Demographic data are presented in Table 1. The majority had a moderate disease severity and had failed 5ASA, one patient was newly diagnosed with treatment-naïve UC, and one was coming off a course of steroids and flared. Thirty-two patients were screened between November 2014 and November 2020; eight patients were excluded (Supplementary Figure S1). Twenty-four UCED treatment courses were given to 23 eligible, consenting patients and were included in the final analysis. Significant delays in enrollment were encountered, as some of the collaborators left their institutions during the trial and there was significant delay in the ethical approval in other institutions. One patient received a second course of the UCED after a relapse two years later. The mean age of the included patients was 15.3 ± 2.9 years, with a mean disease duration of 1.4 ± 1.4 years. Demographic data are presented in Table 1. The majority had a moderate disease severity and had failed 5ASA, one patient was newly diagnosed with treatment-naïve UC, and one was coming off a course of steroids and flared. 3.2. Response to UCED Exclusively by Week 6 Clinical responses to UCED were achieved in 17/24 (70.8%) patients by week 6, and 9/24 (37.5%) had ITT clinical remissions at week 6 with the UCED (Figure 2). One patient entered into a second remission after receiving another course of UCED, two years after the initial response. For the 23 patients, 8/23 (34.8%) had ITT clinical remissions at week 6 with the UCED. Withdrawals in remission were imputed as non-remission in the ITT analysis. The median PUCAI decreased from 35 (30–40) at baseline to 12.5 (5–30) at week 6, and the mean PUCAI decreased from 34.0 ± 10 to 17.3 ± 16.9 (p = 0.001) according to the ITT analysis including all the patients in a LOCF analysis (Figure 3). There were no differences in the baseline median PUCAI score, baseline FC levels and disease extent between patients who entered remission at week 6 versus treatment failures. The FC level did not vary between the recruited centers. 
Six patients withdrew by week 3: two noncompliant patients (one response and one remission) and four patients who required additional therapy (Supplementary Figure S1). FC results were available for 18 patients. The median FC remained unchanged from week 0 to week 3 ((818 (630.0–1880.0) μg/g and 968.0 (272.0–1798.4) μg/g, respectively (p = 0.76)) and declined from week 3 to week 6 of the diet ((968.0 (272.0–1880.0) μg/g to 592.0 (140.7–1555.0) μg/g, respectively (p = 0.41)), corresponding to a 49% reduction from week 3 to week 6; the decline between week 0 and week 6 was not significant (p = 0.11). Among five patients who achieved remission at week 6 with baseline and week 6 FC, the median FC level decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14). Among the patients who were in clinical remission at week 6, seven were slow responders and achieved clinical remission only at week 6; the other two patients achieved remission at week 2 and week 3. Two patients achieved remission at week 2 and 3, respectively, but developed recurrence of mild symptoms by week 6 and were considered failures by ITT. One patient developed a Shigella infection during the trial that led to symptoms despite a marked reduction in FC from 1167 to 111 at week 6. Among the patients whose PUCAIs increased from baseline to week 6, it was interesting to see that two of the three patients had proctitis with moderate disease of about 1-year duration and a family history of IBD. Clinical responses to UCED were achieved in 17/24 (70.8%) patients by week 6, and 9/24 (37.5%) had ITT clinical remissions at week 6 with the UCED (Figure 2). One patient entered into a second remission after receiving another course of UCED, two years after the initial response. For the 23 patients, 8/23 (34.8%) had ITT clinical remissions at week 6 with the UCED. Withdrawals in remission were imputed as non-remission in the ITT analysis. The median PUCAI decreased from 35 (30–40) at baseline to 12.5 (5–30) at week 6, and the mean PUCAI decreased from 34.0 ± 10 to 17.3 ± 16.9 (p = 0.001) according to the ITT analysis including all the patients in a LOCF analysis (Figure 3). There were no differences in the baseline median PUCAI score, baseline FC levels and disease extent between patients who entered remission at week 6 versus treatment failures. The FC level did not vary between the recruited centers. Six patients withdrew by week 3: two noncompliant patients (one response and one remission) and four patients who required additional therapy (Supplementary Figure S1). FC results were available for 18 patients. The median FC remained unchanged from week 0 to week 3 ((818 (630.0–1880.0) μg/g and 968.0 (272.0–1798.4) μg/g, respectively (p = 0.76)) and declined from week 3 to week 6 of the diet ((968.0 (272.0–1880.0) μg/g to 592.0 (140.7–1555.0) μg/g, respectively (p = 0.41)), corresponding to a 49% reduction from week 3 to week 6; the decline between week 0 and week 6 was not significant (p = 0.11). Among five patients who achieved remission at week 6 with baseline and week 6 FC, the median FC level decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14). Among the patients who were in clinical remission at week 6, seven were slow responders and achieved clinical remission only at week 6; the other two patients achieved remission at week 2 and week 3. 
Two patients achieved remission at week 2 and 3, respectively, but developed recurrence of mild symptoms by week 6 and were considered failures by ITT. One patient developed a Shigella infection during the trial that led to symptoms despite a marked reduction in FC from 1167 to 111 at week 6. Among the patients whose PUCAIs increased from baseline to week 6, it was interesting to see that two of the three patients had proctitis with moderate disease of about 1-year duration and a family history of IBD. 3.3. Sustained Remission with UCED at Week 12 Six out of nine patients (66%) maintained remission through week 12 without additional therapy; thus, clinical remission was observed in 6/24 (25%) at week 12 based on ITT analysis. One patient withdrew despite remission and stopped the diet; two patients experienced relapses between weeks 7 and 12: one patient developed mild intermittent bleeding without other symptoms (PUCAI: 10), and one patient developed a mild relapse. The median PUCAI decreased from 35 (30–40) at baseline to 15 (5–30) at week 12 (p = 0.002) according to ITT analysis including all the patients in a LOCF analysis. Six out of nine patients (66%) maintained remission through week 12 without additional therapy; thus, clinical remission was observed in 6/24 (25%) at week 12 based on ITT analysis. One patient withdrew despite remission and stopped the diet; two patients experienced relapses between weeks 7 and 12: one patient developed mild intermittent bleeding without other symptoms (PUCAI: 10), and one patient developed a mild relapse. The median PUCAI decreased from 35 (30–40) at baseline to 15 (5–30) at week 12 (p = 0.002) according to ITT analysis including all the patients in a LOCF analysis. 3.4. Response to ADM after UCED Failure Eight patients received treatment with antibiotics after failing the diet; 4/8 (50.0%) subsequently entered remission (Figure 2). Thus, in total, 13/24 (54.2%) patients obtained remission; of those, nine patients were on the diet alone, and four, on a sequential diet and antibiotic therapy as induction therapy. Eight patients received treatment with antibiotics after failing the diet; 4/8 (50.0%) subsequently entered remission (Figure 2). Thus, in total, 13/24 (54.2%) patients obtained remission; of those, nine patients were on the diet alone, and four, on a sequential diet and antibiotic therapy as induction therapy. 3.5. Tolerance and Adherence Three patients stopped the diet (two stopped despite good responses at week 3—a PUCAI of 0 and PUCAI of 10); thus, intolerance occurred in 3/24 (12.5%). The adherence to the diet was available for 22/24 (91.7%) patients at week 3; 19 (86.4%) patients had high compliance, 2 had fair compliance (9.1%) and 1 (4.5%) had poor compliance. Data for adherence at week 6 were available for 15 patients among the 16 patients who reached this week; all were highly compliant. Three patients stopped the diet (two stopped despite good responses at week 3—a PUCAI of 0 and PUCAI of 10); thus, intolerance occurred in 3/24 (12.5%). The adherence to the diet was available for 22/24 (91.7%) patients at week 3; 19 (86.4%) patients had high compliance, 2 had fair compliance (9.1%) and 1 (4.5%) had poor compliance. Data for adherence at week 6 were available for 15 patients among the 16 patients who reached this week; all were highly compliant. 3.6. Nutritional Outcomes As the diet was designed to decrease animal saturated fat, total protein, SAAs and heme while providing fiber, we analyzed dietary intake before and after UCED. 
The analysis of dietary intake showed a clinically relevant decrease in total energy per day, as at baseline, the mean daily energy intake was 42.3 ± 25.2 kcal/kg/day, versus 32.7 ± 14.2 at week 6 (p = 0.06). In addition to energy intake reduction, the median weight decreased from 62 (57–65) to 59 kg (52–63) after 6 weeks of the diet (p = 0.02), with a mean weight loss of 0.4 ± 0.3 kg per week. Treatment with UCED was accompanied by a significant decrease in total protein, SAAs, saturated fat and iron, while there was a significant increase in total fiber consumption per day (Figure 4). As the diet was designed to decrease animal saturated fat, total protein, SAAs and heme while providing fiber, we analyzed dietary intake before and after UCED. The analysis of dietary intake showed a clinically relevant decrease in total energy per day, as at baseline, the mean daily energy intake was 42.3 ± 25.2 kcal/kg/day, versus 32.7 ± 14.2 at week 6 (p = 0.06). In addition to energy intake reduction, the median weight decreased from 62 (57–65) to 59 kg (52–63) after 6 weeks of the diet (p = 0.02), with a mean weight loss of 0.4 ± 0.3 kg per week. Treatment with UCED was accompanied by a significant decrease in total protein, SAAs, saturated fat and iron, while there was a significant increase in total fiber consumption per day (Figure 4). 3.7. Safety During the UCED treatment, eight patients had adverse events. Three patients had worsening of disease at week 3, two patients developed constipation, one patient lost weight during the first six weeks, and one patient developed a fever unrelated to the disease. Among the patients who received AMD, three patients had worsening of the disease, one patient had metronidazole intolerance with diarrhea, and one patient developed pneumonia one week after stopping the antibiotics. During the UCED treatment, eight patients had adverse events. Three patients had worsening of disease at week 3, two patients developed constipation, one patient lost weight during the first six weeks, and one patient developed a fever unrelated to the disease. Among the patients who received AMD, three patients had worsening of the disease, one patient had metronidazole intolerance with diarrhea, and one patient developed pneumonia one week after stopping the antibiotics. 3.1. Study Population: Thirty-two patients were screened between November 2014 and November 2020; eight patients were excluded (Supplementary Figure S1). Twenty-four UCED treatment courses were given to 23 eligible, consenting patients and were included in the final analysis. Significant delays in enrollment were encountered, as some of the collaborators left their institutions during the trial and there was significant delay in the ethical approval in other institutions. One patient received a second course of the UCED after a relapse two years later. The mean age of the included patients was 15.3 ± 2.9 years, with a mean disease duration of 1.4 ± 1.4 years. Demographic data are presented in Table 1. The majority had a moderate disease severity and had failed 5ASA, one patient was newly diagnosed with treatment-naïve UC, and one was coming off a course of steroids and flared. 3.2. Response to UCED Exclusively by Week 6: Clinical responses to UCED were achieved in 17/24 (70.8%) patients by week 6, and 9/24 (37.5%) had ITT clinical remissions at week 6 with the UCED (Figure 2). One patient entered into a second remission after receiving another course of UCED, two years after the initial response. 
For the 23 patients, 8/23 (34.8%) had ITT clinical remissions at week 6 with the UCED. Withdrawals in remission were imputed as non-remission in the ITT analysis. The median PUCAI decreased from 35 (30–40) at baseline to 12.5 (5–30) at week 6, and the mean PUCAI decreased from 34.0 ± 10.0 to 17.3 ± 16.9 (p = 0.001) according to the ITT analysis with LOCF imputation for all patients (Figure 3). There were no differences in the baseline median PUCAI score, baseline FC levels or disease extent between patients who entered remission at week 6 and treatment failures. The FC level did not vary between the recruited centers. Six patients withdrew by week 3: two noncompliant patients (one response and one remission) and four patients who required additional therapy (Supplementary Figure S1). FC results were available for 18 patients. The median FC remained unchanged from week 0 to week 3 (818 (630.0–1880.0) μg/g and 968.0 (272.0–1798.4) μg/g, respectively; p = 0.76) and declined from week 3 to week 6 of the diet (968.0 (272.0–1880.0) μg/g to 592.0 (140.7–1555.0) μg/g, respectively; p = 0.41), corresponding to a 49% reduction from week 3 to week 6; the decline between week 0 and week 6 was not significant (p = 0.11). Among the five patients who achieved remission at week 6 and had both baseline and week 6 FC values, the median FC level decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14). Among the patients who were in clinical remission at week 6, seven were slow responders and achieved clinical remission only at week 6; the other two patients achieved remission at week 2 and week 3. Two patients achieved remission at weeks 2 and 3, respectively, but developed recurrence of mild symptoms by week 6 and were considered failures by ITT. One patient developed a Shigella infection during the trial that led to symptoms despite a marked reduction in FC from 1167 to 111 at week 6. Notably, among the three patients whose PUCAIs increased from baseline to week 6, two had proctitis with moderate disease of about 1-year duration and a family history of IBD. 3.3. Sustained Remission with UCED at Week 12: Six out of nine patients (66%) maintained remission through week 12 without additional therapy; thus, clinical remission was observed in 6/24 (25%) at week 12 based on ITT analysis. One patient withdrew despite remission and stopped the diet; two patients experienced relapses between weeks 7 and 12: one developed mild intermittent bleeding without other symptoms (PUCAI: 10), and one developed a mild relapse. The median PUCAI decreased from 35 (30–40) at baseline to 15 (5–30) at week 12 (p = 0.002) according to the ITT analysis with LOCF imputation for all patients. 3.4. Response to AMD after UCED Failure: Eight patients received treatment with antibiotics after failing the diet; 4/8 (50.0%) subsequently entered remission (Figure 2). Thus, in total, 13/24 (54.2%) patients obtained remission: nine with the diet alone and four with sequential diet and antibiotic therapy as induction therapy. 3.5. Tolerance and Adherence: Three patients stopped the diet (two stopped despite good responses at week 3, with PUCAIs of 0 and 10); thus, intolerance occurred in 3/24 (12.5%). Adherence data were available for 22/24 (91.7%) patients at week 3; 19 (86.4%) had high compliance, 2 (9.1%) had fair compliance and 1 (4.5%) had poor compliance.
Data for adherence at week 6 were available for 15 of the 16 patients who reached this week; all were highly compliant. 3.6. Nutritional Outcomes: As the diet was designed to decrease animal saturated fat, total protein, SAAs and heme while providing fiber, we analyzed dietary intake before and after UCED. The analysis of dietary intake showed a clinically relevant decrease in total daily energy intake: the mean was 42.3 ± 25.2 kcal/kg/day at baseline versus 32.7 ± 14.2 kcal/kg/day at week 6 (p = 0.06). In addition to the reduction in energy intake, the median weight decreased from 62 (57–65) kg to 59 (52–63) kg after 6 weeks of the diet (p = 0.02), with a mean weight loss of 0.4 ± 0.3 kg per week. Treatment with UCED was accompanied by a significant decrease in total protein, SAAs, saturated fat and iron, while there was a significant increase in total fiber consumption per day (Figure 4). 3.7. Safety: During the UCED treatment, eight patients had adverse events. Three patients had worsening of disease at week 3, two patients developed constipation, one patient lost weight during the first six weeks, and one patient developed a fever unrelated to the disease. Among the patients who received AMD, three had worsening of the disease, one had metronidazole intolerance with diarrhea, and one developed pneumonia one week after stopping the antibiotics. 4. Discussion: In this pilot study, we evaluated two therapies targeting the microbiome sequentially. The first intervention was a novel diet targeting the intestinal epithelium, goblet cells and innate immune system, in addition to the microbiota composition. The second intervention, used only in dietary-failure patients, was an established antibiotic protocol [20,21] studied in adults but never prospectively evaluated in children. The main purpose of this study was, first, to evaluate the feasibility of this specific dietary intervention, in order to improve the design and adherence, prior to starting an interventional randomized controlled trial. In light of this study’s outcomes, we will test the superiority of the UCED when administered together with a 5ASA regimen, compared to 5ASA alone, in pediatric patients with mild–moderate UC in a randomized controlled trial. We demonstrated clinical responses in 70.8% of the patients with UCED at week 6 and clinical remission in 37.5% of the patients at week 6 by ITT analysis. This was accompanied by a decline in FC, primarily after week 3, which did not reach statistical significance, likely due to the small sample size. The FC at week 6 was available for 5/9 patients in remission before any change in therapy; the median FC among these patients decreased from 630 (IQR, 332–1586) μg/g at week 0 to 230 (75–1298) μg/g at week 6 (p = 0.14). Furthermore, 50% of those who failed to obtain remission with the diet entered remission after adding a 14-day course of AMD. Thus, over 50% of the patients obtained remission without immune suppression: nine on the diet alone and four on the sequential diet and antibiotic therapy as induction therapy. We chose this sequential design in order to gain insight into the independent effect of each treatment and to provide pilot data that would allow us to proceed to randomized controlled trials in the future based on the outcomes.
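For illustration only (not part of the trial's actual analysis or data), the ITT summary with LOCF imputation described above can be reproduced with a short sketch of the following kind; the PUCAI values, patient identifiers and visit structure are hypothetical, and the trial additionally imputed withdrawals in remission as non-remission, which this sketch does not show.

    import pandas as pd

    # Hypothetical PUCAI scores: rows = enrolled patients, columns = scheduled visits;
    # NaN marks a missed visit or early withdrawal (values are illustrative only).
    pucai = pd.DataFrame(
        {"week0": [35, 40, 30, 35], "week3": [20, 35, 10, None], "week6": [5, None, None, 30]},
        index=["pt1", "pt2", "pt3", "pt4"],
    )

    # LOCF: carry each patient's last observed score forward to later visits.
    locf = pucai.ffill(axis=1)

    # ITT remission at week 6: PUCAI < 10, with every enrolled patient in the denominator.
    remission = locf["week6"] < 10
    print(f"ITT remission at week 6: {remission.sum()}/{len(locf)} = {remission.mean():.0%}")

The same LOCF table is what the baseline-versus-week 6 comparison reported above would be computed from.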
The UCED was designed to minimize the impact of dietary components that may adversely affect goblet cells, mucus permeability and microbiome composition, which were previously linked to UC. Specifically, it was designed to decrease protein, SAAs and saturated fatty acids while providing fiber as a substrate for SCFA production and to prevent fiber deprivation, which may deplete the mucus layer. We were able to demonstrate that the intake of these components was, in fact, reduced among our patients, while the fiber intake significantly increased (Figure 4). Van der Post et al. have established that a permeable mucus layer may be an early event in UC [10,11]. Microbial SCFA production is essential for providing fuel for epithelial cells and affects the regulation of the immune system by inducing regulatory T cells [35]. A high-fat diet and maltodextrin have been shown to negatively affect goblet cells [24,36,37,38]. Epithelial damage is also a hallmark of UC, and high-fat and high-protein diets may negatively affect epithelial cells [37,39]. Permeable mucus has been associated with Proteobacteria expansion [40], and a high-fat diet has been shown to be associated with Proteobacteria and Enterobacteriaceae expansion [39,40]. Another factor that may affect the mucus layer is fiber deprivation [34]; certain fibers such as pectins might induce more viscous mucus and have an anti-inflammatory effect [13,41]. Finally, high levels of hydrogen sulfide may have a toxic effect on epithelial cells and can cause a breakdown of the mucin network [22]. Substrates for hydrogen-sulfide production are predominantly derived from SAAs [42], while fruits and vegetables are sources of short-chain fatty acids, which regulate the production of protein metabolites and maintain tight junctions [43,44]. We used these principles to design this diet, and the results of this pilot study have led us to launch a randomized controlled trial (NIH NCT03980405). At this juncture, we cannot be certain which of the included or excluded components were responsible for the clinical effect or what the effect upon the microbiome was. There are a few clinical studies that have suggested a link between diet and UC, but the data are conflicting. A prospective interventional crossover study in 18 adult UC patients showed that a low-fat, high-fiber diet decreased markers of inflammation and reduced intestinal dysbiosis [45]. A large prospective UC cohort followed from remission suggested that relapse was associated with the intake of the saturated fatty acid myristic acid, found primarily in grain-fed beef and dairy [28,46]. Chiba et al. demonstrated that a plant-based diet (PBD) contributed to preventing relapse at one-year follow-up in UC patients, and therapy incorporating a PBD was shown to induce remission in about one-third of patients with mild UC [47]. However, a large Swiss prospective cohort study showed that vegetarians had no advantage over omnivores with regard to disease activity, hospitalizations, complications or surgery in UC [48]. Most of the patients in our study tolerated the diet well, and only 12% discontinued the diet.
Interestingly, two of these patients were in remission at week 3 and had a mild increase in symptoms by week six; one patient who had responded very well to the diet developed a Shigella infection during the trial that led to symptoms, despite a marked reduction in FC from 1167 to 111 at week 6; these four patients were considered failures in the ITT analysis. We observed that the majority of the patients responded to the diet only between weeks 3 and 6; this is supported by the FC data, which did not show a decline between baseline and week 3 but showed a 49% decline between weeks 3 and 6. This differs from the response to exclusive enteral nutrition or the Crohn’s disease exclusion diet, in which the response was rapid and the majority of patients achieved remission during the first 3 weeks of dietary therapy [49]. Despite the decline, the median FC remained high at week 6; larger studies are required to detect the impact of diet on gut healing, including performing colonoscopies. Antibiotics may be a double-edged sword in IBD. Antibiotics may increase dysbiosis and increase the translocation of bacteria [50] but, on the other hand, may be effective in refractory patients [7,20,21], and there is an interaction between diet and antibiotics with regard to inflammation in rodent models (a high-fat diet may increase antibiotic-induced dysbiosis) [39]. To date, the utility of the antibiotic combination we used in the trial has been demonstrated prospectively only for severe or steroid-dependent adult UC [7,20,21]. Here, we demonstrate, in a small cohort of diet-refractory patients, that antibiotic therapy may have benefit in achieving remission, and a prospective randomized controlled trial is currently underway to evaluate this further. There are several limitations of this study. This was a pilot trial used to generate data and conducted as a proof of concept, to allow progress to larger trials if the data were positive. Thus, the sample size was limited. We only investigated children with mild to moderate disease; based on previous clinical experience, this group is the most likely to benefit from the combination of diet and antibiotics, and this combination might be used to avoid steroids and immunosuppressive therapy in the future. We were also hampered by the fact that not all the patients provided FC samples as requested. In addition, we saw a weight reduction after 6 weeks of the diet, which might indicate that the diet is not suitable for severe cases of UC with malnutrition. Another weakness of the study is that we did not perform colonoscopy in order to evaluate mucosal healing. However, recently, we have published a randomized controlled trial in adult patients with active refractory UC, showing that the UCED alone appeared to achieve mucosal healing versus single-donor fecal transplantation with or without diet, as mucosal healing (Mayo 0) was achieved only in the group that received the UCED (3/15, 20%) vs. 0/36 of the patients who received fecal transplantation (p = 0.022) [51]. However, we emphasize that, without a placebo group, caution needs to be taken in interpreting our results, as some of the response could be mediated by placebo effects. The strengths of this pilot trial were the prospective nature and use of defined criteria for inclusion and remission, as well as it being the first report of this novel diet. 
In conclusion, the results of this pilot trial suggest that both diet and antibiotics may have a role for the induction of remission in mild to moderate UC in children. This needs to be explored further with a larger sample size. Randomized controlled trials are now underway with both therapies to provide more evidence for these therapies, which could facilitate the use of microbiome-targeted therapies in conjunction with other medical therapy or instead of immune suppression in the future.
Background: As the microbiome plays an important role in instigating inflammation in ulcerative colitis (UC), strategies targeting the microbiome may offer an alternative therapeutic approach. The goal of the pilot trial was to evaluate the potential efficacy and feasibility of a novel UC exclusion diet (UCED) for clinical remission, as well as the potential of sequential antibiotics for diet-refractory patients to achieve remission without steroids. Methods: This was a prospective, single-arm, multicenter, open-label pilot study in patients aged 8-19, with pediatric UC activity index (PUCAI) scores >10 on stable maintenance therapy. Patients failing to enter remission (PUCAI < 10) on the diet could receive a 14-day course of amoxycillin, metronidazole and doxycycline (AMD), and were re-assessed on day 21. The primary endpoint was intention-to-treat (ITT) remission at week 6, with UCED as the only intervention. Results: Twenty-four UCED treatment courses were given to 23 eligible children (mean age: 15.3 ± 2.9 years). The median PUCAI decreased from 35 (30-40) at baseline to 12.5 (5-30) at week 6 (p = 0.001). Clinical remission with UCED alone was achieved in 9/24 (37.5%). The median fecal calprotectin declined from 818 (630.0-1880.0) μg/g at baseline to 592.0 (140.7-1555.0) μg/g at week 6 (p > 0.05). Eight patients received treatment with antibiotics after failing on the diet; 4/8 (50.0%) subsequently entered remission 3 weeks later. Conclusions: The UCED appears to be effective and feasible for the induction of remission in children with mild to moderate UC. The sequential use of UCED followed by antibiotic therapy needs to be evaluated as a microbiome-targeted, steroid-sparing strategy.
null
null
10,377
361
[ 2761, 367, 316, 214, 205, 259, 159, 517, 118, 63, 107, 162, 82 ]
16
[ "week", "patients", "diet", "remission", "uc", "uced", "pucai", "weeks", "12", "disease" ]
[ "gut microbiota affect", "inflammation dysbiosis", "ulcerative colitis", "gut microbiota", "ulcerative colitis diet" ]
null
null
null
[CONTENT] ulcerative colitis | child | diet | antibiotics | remission | treatment [SUMMARY]
null
[CONTENT] ulcerative colitis | child | diet | antibiotics | remission | treatment [SUMMARY]
null
[CONTENT] ulcerative colitis | child | diet | antibiotics | remission | treatment [SUMMARY]
null
[CONTENT] Adolescent | Amoxicillin | Anti-Bacterial Agents | Child | Colitis, Ulcerative | Doxycycline | Drug Therapy, Combination | Eating | Female | Humans | Intention to Treat Analysis | Male | Metronidazole | Nutritional Status | Patient Compliance | Pilot Projects | Prospective Studies | Remission Induction | Treatment Outcome [SUMMARY]
null
[CONTENT] Adolescent | Amoxicillin | Anti-Bacterial Agents | Child | Colitis, Ulcerative | Doxycycline | Drug Therapy, Combination | Eating | Female | Humans | Intention to Treat Analysis | Male | Metronidazole | Nutritional Status | Patient Compliance | Pilot Projects | Prospective Studies | Remission Induction | Treatment Outcome [SUMMARY]
null
[CONTENT] Adolescent | Amoxicillin | Anti-Bacterial Agents | Child | Colitis, Ulcerative | Doxycycline | Drug Therapy, Combination | Eating | Female | Humans | Intention to Treat Analysis | Male | Metronidazole | Nutritional Status | Patient Compliance | Pilot Projects | Prospective Studies | Remission Induction | Treatment Outcome [SUMMARY]
null
[CONTENT] gut microbiota affect | inflammation dysbiosis | ulcerative colitis | gut microbiota | ulcerative colitis diet [SUMMARY]
null
[CONTENT] gut microbiota affect | inflammation dysbiosis | ulcerative colitis | gut microbiota | ulcerative colitis diet [SUMMARY]
null
[CONTENT] gut microbiota affect | inflammation dysbiosis | ulcerative colitis | gut microbiota | ulcerative colitis diet [SUMMARY]
null
[CONTENT] week | patients | diet | remission | uc | uced | pucai | weeks | 12 | disease [SUMMARY]
null
[CONTENT] week | patients | diet | remission | uc | uced | pucai | weeks | 12 | disease [SUMMARY]
null
[CONTENT] week | patients | diet | remission | uc | uced | pucai | weeks | 12 | disease [SUMMARY]
null
[CONTENT] uc | microbiota | western | dietary | demonstrated | intestinal | inflammatory | immune | effective | immune system [SUMMARY]
null
[CONTENT] week | patients | remission | patient | uced | μg | fc | itt | developed | analysis [SUMMARY]
null
[CONTENT] week | patients | diet | remission | uc | pucai | patient | weeks | uced | disease [SUMMARY]
null
[CONTENT] UC ||| UC [SUMMARY]
null
[CONTENT] Twenty-four | 23 | 15.3 | 2.9 years ||| 35 | 30-40 | 12.5 | 5-30 | week 6 | 0.001 ||| 9/24 | 37.5% ||| calprotectin | 818 | 630.0-1880.0 | 592.0 | 140.7-1555.0 | 0.05 ||| Eight | 4/8 | 50.0% | 3 weeks later [SUMMARY]
null
[CONTENT] UC ||| UC ||| 8-19 | UC | 10 ||| 14-day | amoxycillin | AMD | day 21 ||| ITT | week 6 ||| Twenty-four | 23 | 15.3 | 2.9 years ||| 35 | 30-40 | 12.5 | 5-30 | week 6 | 0.001 ||| 9/24 | 37.5% ||| calprotectin | 818 | 630.0-1880.0 | 592.0 | 140.7-1555.0 | 0.05 ||| Eight | 4/8 | 50.0% | 3 weeks later ||| UC ||| [SUMMARY]
null
Effects of polymer molecular weight on relative oral bioavailability of curcumin.
22745556
Polylactic-co-glycolic acid (PLGA) nanoparticles have been used to increase the relative oral bioavailability of hydrophobic compounds and polyphenols in recent years, but the effects of the molecular weight of PLGA on bioavailability are still unknown. This study investigated the influence of polymer molecular weight on the relative oral bioavailability of curcumin, and explored the possible mechanism accounting for the outcome.
BACKGROUND
Curcumin encapsulated in low (5000-15,000) and high (40,000-75,000) molecular weight PLGA (LMw-NPC and HMw-NPC, respectively) were prepared using an emulsification-solvent evaporation method. Curcumin alone and in the nanoformulations was administered orally to freely mobile rats, and blood samples were collected to evaluate the bioavailability of curcumin, LMw-NPC, and HMw-NPC. An ex vivo experimental gut absorption model was used to investigate the effects of different molecular weights of PLGA formulation on absorption of curcumin. High-performance liquid chromatography with diode array detection was used for quantification of curcumin in biosamples.
METHODS
There were no significant differences in particle properties between LMw-NPC and HMw-NPC, but the relative bioavailability of HMw-NPC was 1.67-fold and 40-fold higher than that of LMw-NPC and conventional curcumin, respectively. In addition, the mean peak concentration (C(max)) of conventional curcumin, LMw-NPC, and HMw-NPC was 0.028, 0.042, and 0.057 μg/mL, respectively. The gut absorption study further revealed that the HMw-PLGA formulation markedly increased the absorption rate of curcumin in the duodenum and resulted in excellent bioavailability compared with conventional curcumin and LMw-NPC.
RESULTS
Our findings demonstrate that different molecular weights of PLGA have varying bioavailability, contributing to changes in the absorption rate at the duodenum. The results of this study provide the rationale for design of a nanomedicine delivery system to enhance the bioavailability of water-insoluble pharmaceutical compounds and functional foods.
CONCLUSION
[ "Absorption", "Administration, Oral", "Animals", "Biological Availability", "Chromatography, High Pressure Liquid", "Curcumin", "Intestine, Small", "Lactic Acid", "Male", "Molecular Weight", "Nanoparticles", "Particle Size", "Polyglycolic Acid", "Polylactic Acid-Polyglycolic Acid Copolymer", "Rats, Sprague-Dawley", "Reproducibility of Results", "Tissue Distribution" ]
3384366
Characteristics of LMw-NPC and HMw-NPC
The properties of the LMw-NPC and HMw-NPC are summarized in Table 1. The zeta potential refers to the surface charge on the nanoparticles, and the polydispersity index describes the width of the size distribution, such that a smaller polydispersity index indicates more uniformly sized particles. Nanoparticles containing curcumin were prepared using a high-pressure emulsification-solvent evaporation procedure with PLGA as the encapsulating material. A negative zeta potential and a small polydispersity index were obtained (−12.2 mV and 0.076 for LMw-NPC versus −14.1 mV and 0.068 for HMw-NPC). The sizes of the LMw-NPC and HMw-NPC were 166 ± 7.4 nm and 163 ± 4.2 nm, respectively. The encapsulation efficiency of both nanoformulations was nearly 45%. According to the results, there was no statistically significant difference in particle characteristics between LMw-NPC and HMw-NPC. As in our previous studies,3,17 similar properties were found across different batches of nanoparticles when standard procedures were followed to prepare these LMw-NPC. Therefore, our study results were confirmed to be replicable and well validated. Figure 1 shows the TEM images of LMw-NPC and HMw-NPC. The morphology of the curcumin nanoparticles was either ellipsoid or spherical, and the diameter results were similar to the data observed by dynamic light scattering (below 200 nm) for both LMw-NPC and HMw-NPC. Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 with the nanoparticles appearing bright and surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were well prepared and could be used in our subsequent pharmacokinetic study.
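The entrapment-efficiency equation referred to in the methods is not reproduced in this extract; the following is a minimal sketch assuming the standard definition implied by the procedure (curcumin recovered from the dissolved nanoparticle pellet, measured by HPLC, divided by the curcumin initially added). The function and variable names are illustrative, not the authors'.

    def entrapment_efficiency_percent(curcumin_in_pellet_mg: float, curcumin_added_mg: float) -> float:
        # Percent of the added drug that ended up encapsulated in the nanoparticle pellet.
        return 100.0 * curcumin_in_pellet_mg / curcumin_added_mg

    # Example: 5 mg of curcumin added per batch with roughly 2.25 mg recovered in the
    # pellet gives ~45%, consistent with the reported encapsulation efficiency.
    print(entrapment_efficiency_percent(2.25, 5.0))  # 45.0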
Animal study
Male Sprague-Dawley rats (215 ± 10 g body weight) were obtained from the Laboratory Animal Center at National Yang-Ming University (Taipei, Taiwan). The animals were pathogen-free and allowed to adapt to an environmentally controlled habitat (24°C ± 1°C on a 12-hour light-dark cycle). Food (Laboratory Rodent Diet 5001, PMI Feeds Inc, Richmond, IN) and water were available ad libitum. All animal experiments were performed according to the National Yang-Ming University guidelines, principles, and procedures for the care and use of laboratory animals. The rats were anesthetized using pentobarbital 50 mg/kg intraperitoneally. During anesthesia, the right jugular vein was catheterized with polyethylene tubing for blood sampling. An external cannula was fastened in the dorsal neck area. The rats were returned to consciousness after one day and could move freely. Curcumin and the LMw-NPC/HMw-NPC nanoformulations were administered by gavage to the rats at doses of 1 g/kg and 50 mg/kg, respectively. After oral administration, a 300 μL blood sample was obtained from the right jugular vein into a tube rinsed with heparin at 15, 30, 45, 60, 90, 120, 150, 180, 240, 300, 360, and 480 minutes. Fecal and urine samples were collected in metabolic cages (Mini Mitter, Bend, OR) at 0–12, 12–24, 24–36, 36–48, and 48–72 hours after treatment with curcumin, LMw-NPC, or HMw-NPC.
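The pharmacokinetic parameters reported below were obtained with noncompartmental analysis in WinNonlin (see the pharmacokinetics and statistics section); purely as an illustration of the underlying calculation, a linear trapezoidal AUC can be computed from a sampled concentration-time profile as in the sketch below. The sampling times follow the protocol above, but the concentration values are hypothetical.

    import numpy as np

    # Protocol sampling times (minutes) and a hypothetical plasma profile (ug/mL).
    t = np.array([15, 30, 45, 60, 90, 120, 150, 180, 240, 300, 360, 480], dtype=float)
    c = np.array([0.020, 0.045, 0.057, 0.050, 0.041, 0.033, 0.027, 0.022, 0.015, 0.010, 0.007, 0.003])

    # Linear trapezoidal rule over the observed interval.
    auc = float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))
    cmax, tmax = float(c.max()), float(t[c.argmax()])
    print(f"AUC(15-480 min) = {auc:.1f} ug*min/mL, Cmax = {cmax:.3f} ug/mL at Tmax = {tmax:.0f} min")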
Results
Characteristics of LMw-NPC and HMw-NPC The properties of the LMw-NPC and HMw-NPC are summarized in Table 1. The zeta potential refers to the surface charge on the nanoparticles, and the polydispersity index describes the width of the size distribution, such that a smaller polydispersity index indicates more uniformly sized particles. Nanoparticles containing curcumin were prepared using a high-pressure emulsification-solvent evaporation procedure with PLGA as the encapsulating material. A negative zeta potential and a small polydispersity index were obtained (−12.2 mV and 0.076 for LMw-NPC versus −14.1 mV and 0.068 for HMw-NPC). The sizes of the LMw-NPC and HMw-NPC were 166 ± 7.4 nm and 163 ± 4.2 nm, respectively. The encapsulation efficiency of both nanoformulations was nearly 45%. According to the results, there was no statistically significant difference in particle characteristics between LMw-NPC and HMw-NPC. As in our previous studies,3,17 similar properties were found across different batches of nanoparticles when standard procedures were followed to prepare these LMw-NPC. Therefore, our study results were confirmed to be replicable and well validated. Figure 1 shows the TEM images of LMw-NPC and HMw-NPC. The morphology of the curcumin nanoparticles was either ellipsoid or spherical, and the diameter results were similar to the data observed by dynamic light scattering (below 200 nm) for both LMw-NPC and HMw-NPC. Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 with the nanoparticles appearing bright and surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were well prepared and could be used in our subsequent pharmacokinetic study.
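As a side note on the statistical comparison (the statistics section specifies a two-sample t-test at P < 0.05), the reported size summaries can be compared directly from summary statistics; the number of batches per group is not stated in this extract, so n = 3 below is an assumption made purely for illustration.

    from scipy import stats

    # Reported particle sizes (nm): LMw-NPC 166 +/- 7.4, HMw-NPC 163 +/- 4.2; n = 3 is assumed.
    t_stat, p_value = stats.ttest_ind_from_stats(
        mean1=166, std1=7.4, nobs1=3,
        mean2=163, std2=4.2, nobs2=3,
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # p well above 0.05 -> no significant size difference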
Validation of HPLC analytic method The HPLC method was developed to investigate curcumin in rat plasma samples. Figure 2A–E shows the chromatograms of blank rat matrices, and Figure 2F–J shows the chromatograms of rat biosamples containing the internal standard and curcumin after administration of HMw-NPC. The retention times were approximately 6.5 and 22 minutes for the internal standard and curcumin, respectively, with no obvious interference peak in the blank chromatograms. These results suggest that the HPLC analytical conditions provided good separation and selectivity for curcumin and the internal standard in the experimental matrices. The method was validated for linearity, limit of detection, limit of quantification, precision, accuracy, and extraction recovery. The calibration curves for plasma, feces, urine, serosal fluid, and sac tissue showed good linearity (r2 > 0.995) over the range of 0.01–2.5 μg/mL. The limits of detection and quantification for curcumin were determined to be 0.005 μg/mL and 0.01 μg/mL, respectively. Intra-assay and interassay precision (% RSD) and accuracy (% bias) of curcumin measurements in each rat matrix were within ±15%. Extraction recovery of curcumin at low, medium, and high concentrations in rat plasma, feces, urine, serosal fluid, and sac tissue was at least 90%. These results indicate that the analytical and extraction methods used for curcumin and the internal standard were reliable for the following pharmacokinetic study. Bioavailability and pharmacokinetics Mean concentrations of curcumin in rat plasma at different times after oral administration of conventional curcumin (1 mg/kg) and LMw-NPC and HMw-NPC (50 mg/kg) are shown in Figure 3. Curcumin could be detected in plasma for 6 hours after oral ingestion of conventional curcumin and of LMw-NPC, but the retention time in rat plasma was prolonged to 8 hours for the HMw-NPC formulation.
Table 2 shows significant increases in Cmax (from 0.028 for curcumin alone to 0.042 for LMw-NPC to 0.057 μg/mL for HMw-NPC) and area under the concentration-time curve (AUC) per dose (from 0.0062 to 0.15 to 0.25, respectively), indicating that the absorption of curcumin increased when it was encapsulated in a nanoformulation. In addition, the relative bioavailability of HMw-NPC was 40-fold and 1.67-fold higher than that of conventional curcumin and LMw-NPC, respectively (Table 2). This indicates that the HMw-PLGA nanoformulation achieved markedly better absorption of curcumin compared with LMw-PLGA. Furthermore, after oral administration of HMw-NPC, the Tmax of curcumin was 15 minutes shorter than that of the other curcumin formulations (see Table 2). These results indicate that the relative bioavailability of curcumin increased as Tmax shortened, suggesting that Tmax was closely related to the bioavailability of curcumin. Without nanoformulation, the accumulated percentage of unabsorbed curcumin in feces approached 90% and in urine was almost 0% (Figure 4), implying that there was low absorption of curcumin in the gastrointestinal tract. PLGA encapsulation reduced the amount of curcumin in feces (from 90% for curcumin alone to 40% for LMw-PLGA to 20% for HMw-PLGA) and increased the amount of curcumin in urine (from near 0% to 0.07% to 0.16%), indicating that the absorption of curcumin was enhanced after PLGA formulation. These excretion results confirmed the pharmacokinetic parameters of the oral study, ie, the absorption of curcumin was significantly increased by both the LMw-PLGA and HMw-PLGA formulations. Ex vivo absorption To understand better the relationship between relative bioavailability and Tmax for different molecular weights of PLGA-encapsulated curcumin and to explore the possible reasons for this, an ex vivo absorption study was undertaken. The concentration of curcumin absorbed from the incubation medium into the sac tissue and then into the serosal fluid is shown in Figure 5. When samples of the reverted sac of rat small intestine were incubated with conventional curcumin, LMw-NPC, and HMw-NPC, the amount of curcumin absorbed from the curcumin nanoformulations was significantly increased in the duodenum and the ileum, but was not significantly different in the jejunum. The curcumin content was highest in the serosal fluid of the duodenum in the HMw-NPC group, followed by the LMw-NPC and conventional curcumin groups. The curcumin concentration from HMw-NPC in the serosal fluid of the duodenum was 5.12-fold and 2.66-fold higher than from LMw-NPC and conventional curcumin, respectively (Figure 5A). The absorption of curcumin, as determined by the sum of the amounts of curcumin in the serosal fluid, sac tissue, and regions of the rat gut after incubation with curcumin and its formulations, is shown in Table 3. It can be seen that the amount of curcumin in the duodenum and ileum was significantly increased by curcumin encapsulated with PLGA. The HMw-PLGA formulation noticeably enhanced the absorption of curcumin in the duodenum compared with LMw-PLGA (from 0.63 μg to 0.89 μg) and with conventional curcumin (from 0.47 μg to 0.89 μg). The ex vivo curcumin absorption results support the findings of the oral study, ie, that curcumin formulated with PLGA increased the absorption of curcumin in the intestine. Overall, HMw-PLGA enabled considerably more curcumin absorption than LMw-PLGA.
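The relative-bioavailability equation referenced in the pharmacokinetics and statistics section is not reproduced in this extract; the sketch below simply applies the usual dose-normalized AUC ratio to the AUC-per-dose values reported in Table 2 to recover the stated 40-fold and 1.67-fold differences.

    # AUC per dose as reported in Table 2 for each formulation.
    auc_per_dose = {"conventional": 0.0062, "LMw-NPC": 0.15, "HMw-NPC": 0.25}

    def relative_bioavailability(test: str, reference: str) -> float:
        # Relative F = (AUC/dose)_test / (AUC/dose)_reference, expressed as a fold-change.
        return auc_per_dose[test] / auc_per_dose[reference]

    print(round(relative_bioavailability("HMw-NPC", "conventional"), 1))  # ~40.3
    print(round(relative_bioavailability("HMw-NPC", "LMw-NPC"), 2))       # ~1.67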
null
null
[ "Introduction", "Chemicals and reagents", "Encapsulation of curcumin in PLGA nanoparticles", "Entrapment efficiency", "Transmission electron microscopy", "HPLC system and analytical method validation", "HPLC system", "Validation of analytical method", "Sample preparation", "Ex vivo curcumin absorption using reverted rat gut sacs", "Pharmacokinetics and statistics", "Characteristics of LMw-NPC and HMw-NPC", "Validation of HPLC analytic method", "Bioavailability and pharmacokinetics", "Ex vivo absorption" ]
[ "Polyphenols exist widely in plants and plant-derived foods, including vegetables, fruit, tea, spices, wine, beverages, and nutritional supplement products, thus constitute an important part of the human diet. Previous studies have demonstrated that polyphenols have broad pharmacological activity and biological benefits, including anticancer, neuroprotection, chemoprevention, and protection against cardiovascular disease.1 These advantages are attributed to their useful antioxidant and anti-inflammatory properties that regulate cell proliferation and function to prevent onset and progression of various human diseases.2 However, the beneficial effects of many polyphenols on human health are limited due to their low oral bioavailability. This is because the polyphenols, including curcumin,3 resveratrol,4 quercetin,5 daidzein,6 silymarin,7 and other polyphenols2 are poorly absorbed in the gut and undergo fast metabolism by the liver.\nTo improve the low oral bioavailability of polyphenols, researchers have investigated their encapsulation in different nanoformulation systems, including micelles, liposomes, and nanoparticles.2,8 The mechanism by which these nanoformulations enhance the oral bioavailability of polyphenols involves increasing the surface area and interaction of the compounds, thereby raising the dissolution rate to improve their absorption.9 Nevertheless, previous studies have shown that polyester nanoparticle-based formulations are suitable for oral ingestion because they are more stable than other formulations in the gastrointestinal tract.10 Polyesters, for instance, polylactic-co-glycolic acid (PLGA), are the substances most often used in nanoformulations and have been approved by the US Food and Drug Administration as therapeutic devices owing to their biocompatibility, biodegradability, and versatile degradation kinetics.11 Recent research has shown that nanoformulations based on PLGA and nanogels have lower human serum protein binding and are more compatible than β-cyclodextrin, cellulose, and dendrimers with human erythrocytes. 12 Accordingly, PLGA nanoparticle systems have been used extensively to increase the oral bioavailability of polyphenols. Sonaje et al have demonstrated that the oral bioavailability of ellagic acid is increased when it is nanoformulated with PLGA.13 Another study has shown that quercetin encapsulated in PLGA nanoparticles has an increased ability to protect the liver and brain against oxidative damage.14 Curcumin in a PLGA nanoformulation not only has increased oral bioavailability but is also taken up in increased amounts by metastatic cancer cells.3,15 Xie et al have reported that the improved oral bioavailability of curcumin when encapsulated in PLGA nanoparticles may be attributable to enhanced water solubility, improved permeability, and inhibition of P-glycoprotein-mediated efflux in the intestine.16\nAlthough PLGA nanoformulations are effective in improving the oral bioavailability of polyphenols, many kinds of PLGA with different molecular weights are available on the market. Our knowledge about the influence of the molecular weight of PLGA on the oral bioavailability of polyphenols is limited. Hence, the aim of this work was to investigate the effects of the molecular weight of PLGA on the oral bioavailability of polyphenols. 
Curcumin, one of the biologically active polyphenols, is a candidate compound for further pharmacokinetic study.", "Polyvinyl alcohol (molecular weight 9000–10,000), low molecular weight PLGA (50:50, molecular weight 5000–15,000), high molecular weight PLGA (50:50, molecular weight 40,000–75,000), monosodium phosphate, sucrose, 2-(4′-hydroxybenzeneazo)benzoic acid (as the internal standard), and polyethylene glycol 400 were obtained from Sigma-Aldrich (St Louis, MO). Curcumin (purity ≥95%) was purchased from Fluka (Buchs, Switzerland). Acetonitrile and dichloromethane were obtained from Merck (Darmstadt, Germany). Milli-Q grade water (Millipore, Bedford, MA) was used for preparation of the mobile phase and solution.", "A high-pressure emulsification-solvent evaporation technique was used to encapsulate curcumin in PLGA nanoparticles, as reported previously.17 In brief, 50 mg of PLGA (low or high molecular weight) and 5 mg of curcumin were added to 1.25 mL of dichloromethane as the oil phase and sonicated for 5 minutes to dissolve all substances. This oil phase was added to 10 mL of aqueous phase (containing 2% polyvinyl alcohol and 20% sucrose, w/v) and then homogenized using a Polytron PT-MR 2100 (Kinematica AG, Lucerne, Switzerland) at 28,000 rpm for 10 minutes to form an emulsion. The emulsion was passed across a 0.1 μm filter twice at an operating pressure of 5 kg/cm2 using an accessory extruder (EF-C5, Avestin, Canada) to formulate low and high molecular weight PLGA nanoparticles encapsulating curcumin (LMw-NPC and HMw-NPC, respectively). The resulting nanoparticles were stirred at 500 rpm overnight and air-dried to evaporate the organic solvent.", "To evaluate the entrapment efficiency of curcumin encapsulated in PLGA nanoparticles, LMw-NPC and HMw-NPC solutions were centrifuged at 11,000 rpm for 15 minutes ( Centrifuge 5415R, Eppendorf, Germany). After centrifugation, the supernatant was removed and 1 mL of acetonitrile was then added to the nanoparticle pellet to destroy its structure and allow release of curcumin. The amount of curcumin in the nanoparticles was analyzed by high-performance liquid chromatography (HPLC). Percent entrapment efficiency is given by:", "Briefly, the LMw-NPC and HMw-NPC solutions were placed dropwise onto a 400 mesh copper grid coated with carbon. About 15 minutes after nanoparticle deposition, the grid was patted with filter paper to remove surface water and then stained using a solution of phosphotungstic acid (2%, w/v) for 20 minutes. TEM samples were obtained after the stained sample had been allowed to dry in air. Micrographs of LMw- NPC and HMw-NPC were obtained using a transmission electron microscope (TEM, JEM-2000EXII, JEOL, Tokyo, Japan).", " HPLC system The HPLC system comprised a chromatographic pump (LC-20AT, Shimadzu, Kyoto, Japan), autosampler ( SIL-20AT, Shimadzu), diode array detector (SPD-M20A, Shimadzu), and degasser (DG-240). A reversed-phase C18 column (4.6 × 150 mm, particle size 5 μm, Eclipse XDB, Agilent, Palo Alto, CA) was used for the HPLC separation. The mobile phase consisted of acetonitrile, 10 mM monosodium phosphate (pH 3.5 adjusted by phosphoric acid, 40:60, v/v) at a flow rate of 0.8 mL per minute. The HPLC run time was 27 minutes and the detection wavelength was set at 425 nm. 
The mobile phase was passed through a 0.45 μm Millipore membrane filter and then degassed by sonication (2510R-DTH, Bransonic, CT) before use.", "The HPLC system comprised a chromatographic pump (LC-20AT, Shimadzu, Kyoto, Japan), autosampler (SIL-20AT, Shimadzu), diode array detector (SPD-M20A, Shimadzu), and degasser (DG-240). A reversed-phase C18 column (4.6 × 150 mm, particle size 5 μm, Eclipse XDB, Agilent, Palo Alto, CA) was used for the HPLC separation. The mobile phase consisted of acetonitrile and 10 mM monosodium phosphate (pH 3.5 adjusted by phosphoric acid, 40:60, v/v) at a flow rate of 0.8 mL per minute. The HPLC run time was 27 minutes and the detection wavelength was set at 425 nm. The mobile phase was passed through a 0.45 μm Millipore membrane filter and then degassed by sonication (2510R-DTH, Bransonic, CT) before use.", "A stock solution of curcumin (500 μg/mL in acetonitrile) was diluted with 50% acetonitrile to make serial concentrations of a working standard solution (0.1, 0.25, 1, 2.5, 10, and 25 μg/mL). Calibration standards were prepared using 5 μL of the working standard solution spiked into 45 μL of blank plasma, feces, and urine. The sample extraction procedures are described in the sample preparation section. The calibration curves were constructed with the peak area ratio of curcumin to the internal standard spiked in blank matrix as the y axis and the concentration of curcumin as the x axis.\nThe limits of detection and quantification were defined as a signal-to-noise ratio of 3 and the lowest concentration of the linear regression, respectively. The accuracy and precision of intraday (same day) and interday (six sequential days) sampling of curcumin were assayed (six individual tests). Accuracy (% bias) was calculated as [(Cobs – Cnom)/Cnom] × 100, where Cnom represented the nominal concentration and Cobs indicated the mean value of the observed concentration. Precision was calculated as the relative standard deviation (RSD) of the observed concentrations as follows:\nThe % bias and % RSD values for the lowest acceptable reproducibility concentrations were defined as being within ±15%. Extraction recovery (%) of curcumin was assessed by comparing the peak area ratio of curcumin and the internal standard spiked into blank matrix three times (at concentrations of 0.1, 0.25, and 2.5 μg/mL) with that of reference curcumin prepared in 50% acetonitrile at the same concentrations.", "Blood samples were centrifuged at 8000 rpm for 10 minutes at 4°C for preparation of the plasma samples. The supernatant was collected and preserved at −20°C before analysis. The feces were air-dried, weighed, and then homogenized in fivefold their volume (1 g of feces in 5 mL) of 50% aqueous acetonitrile using the Polytron PT-MR 2100 homogenizer. 
The fecal samples were then centrifuged at 6000 rpm for 10 minutes at 4°C, and the supernatant was then collected and preserved at −20°C for further assay. Plasma and fecal samples (50 μL, vortex-mixed with 100 μL of internal standard solution containing 1.5 μg/mL of 2-(4′-hydroxybenzeneazo)benzoic acid dissolved in acetonitrile) were subjected to protein precipitation. After centrifugation at 12,000 rpm for 15 minutes, 20 μL of supernatant was collected and analyzed by HPLC.\nThe original volume of urine was recorded before centrifugation at 8000 rpm for 10 minutes and collection of the supernatant. Next, 50 μL of the urine supernatant was extracted using 1 mL of ethyl acetate and vortexed for one minute. The ethyl acetate phase was collected by centrifugation (13,000 rpm for one minute), and then transferred to a clear glass tube. After drying by vacuum centrifugation, the residue was reconstituted in 100 μL of 50% internal standard solution, and 20 μL of the dissolved solution was injected into the HPLC column.", "The ex vivo drug absorption study was performed as previously described.18 Male Sprague-Dawley rats (210 ± 10 g body weight) were sacrificed by intraperitoneal overdose using 1 mL/kg urethane (1 g/mL) and α-chloralose (0.1 g/mL). The entire small intestine was quickly removed and flushed several times with normal saline (0.9%, w/v) at room temperature. The intestine was put into warm (37°C) TC-199 medium (pH 7.4) and separated into the duodenum, jejunum, and ileum. The gut was then gently reverted using a glass rod (2.5 mm diameter). The large gut sac was divided into small sacs of approximately 2 cm in length, then filled with TC-199 medium and tied with a silk suture.\nEach of the small sacs was incubated in a conical flask containing 19.5 mL of TC-199 medium in a gentle shaking water bath at 37°C. Next, 0.5 mL of curcumin 100 μg/mL, LMw-NPC, and HMw-NPC were added into conical flasks. After incubating for one hour, each sac was cut open, and the serosal fluid and sac tissue collected into a clear tube; 1 mL of acetonitrile was added to the sac tissue, which was then homogenized and centrifuged at 6000 rpm for 10 minutes. A 50 μL quantity of the sac supernatant and serosal fluid was added to 100 μL of internal standard solution for protein precipitation and centrifuged at 13,000 rpm for 10 minutes. The supernatants (20 μL) were collected and analyzed by HPLC.", "Pharmacokinetic values were calculated from each individual set of data using WinNonlin, standard edition, version 1.1 (Pharsight Corporation, Mountain View, CA) and the noncompartmental method. The relative oral bioavailability of curcumin was calculated according to the equation:\nThe pharmacokinetic results are represented as the mean ± standard error. The statistical analysis was performed by t-test (SPSS version 10.0, SPSS Inc, Chicago, IL) to compare different groups. The level of statistical significance was set at P < 0.05.", "The properties of the LMw-NPC and HMw-NPC are summarized in Table 1. The zeta potential refers to the surface charge on the nanoparticles and the polydispersity index is the size distribution, indicating the degree of similarity between the nanoparticles, such that a smaller polydispersity index indicates that the particles are similar in size. Nanoparticles containing curcumin were prepared using a high-pressure emulsification-solvent evaporation procedure with PLGA as the encapsulating material. 
A negative zeta potential and smaller polydispersity index was obtained (−12.2 mV and 0.076 for LMw-NPC versus −14.1 mV and 0.068 for HMw-NPC). The sizes of the LMw-NPC and HMw-NPC were 166 ± 7.4 nm and 163 ± 4.2 nm, respectively. The encapsulation efficiency of both nanoformulations was nearly 45%. According to the results, there was no statistically significant difference in particle characteristics between LMw-NPC and HMw-NPC. As in our previous studies,3,17 similar properties of nanoparticles were found in the different batches when complied with standard procedures to prepare these LMw-NPC. Therefore, our study results were confirmed to be replicable and were well validated. Figure 1 shows the TEM images of LMw-NPC and HMw-NPC. The morphology of the curcumin nanoparticles was either ellipsoid or spherical, and the diameter results were similar to the data observed by dynamic light scattering (below 200 nm) for both LMw- NPC and HMw-NPC. Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 with regard to the nanoparticles being a bright color and surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were well prepared and could be used in our subsequent pharmacokinetic study.", "The HPLC method was developed to investigate curcumin in rat plasma samples. Figure 2A–E shows the chromatograms of blank rat matrices, and Figure 2F–J shows the chromatograms of rat biosamples containing the internal standard and curcumin after administration of HMw-NPC. The retention times were approximately 6.5 and 22 minutes for the internal standard and curcumin, respectively, with no obvious interference peak in the blank chromatograms. These results suggest that the HPLC analytical conditions provided good separation and selectivity for curcumin and the internal standard in the experimental matrices. The validation methods were for linearity, limit of detection, limit of quantification, precision, accuracy, and extraction recovery. The calibration curves for plasma, feces, urine, serosal fluid, and sac tissue showed good linearity (r2 > 0.995) over the range of 0.01–2.5 μg/mL. The limits of detection and quantification for curcumin were determined to be 0.005 μg/mL and 0.01 μg/mL, respectively. Intra-assay and interassay precision (% RSD and accuracy [% bias]) of curcumin measurements in each rat matrix were ±15%. Extraction of curcumin at low, medium, and high concentration in rat plasma, feces, urine, serosal fluid, and sac tissue was at least 90%. These results indicate that the analytical and extraction methods used for curcumin and the internal standard were reliable for the following pharmacokinetic study.", "Mean concentrations of curcumin in rat plasma at different times after oral administration of conventional curcumin (1 mg/kg) and LMw-NPC and HMw-NPC (50 mg/kg) are shown in Figure 3. Curcumin could be detected in plasma for 6 hours after oral ingestion of conventional curcumin and of LMw-NPC, but the retention time in rat plasma was prolonged to 8 hours for the HMw-NPC formulation. Table 2 shows significant increases in Cmax (from 0.028 for curcumin alone to 0.042 for LMw-NPC to 0.057 μg/mL for HMw-NPC) and area under the concentration-time curve (AUC) per dose (from 0.0062 to 0.15 to 0.25, respectively), indicating that the absorption of curcumin increased when it was encapsulated in a nanoformulation. 
In addition, the relative bioavailability of HMw-NPC was 40-fold and 1.67-fold higher than that of conventional curcumin and LMw-NPC, respectively (Table 2). This indicates that the HMw-PLGA nanoformulation achieved markedly better absorption of curcumin compared with LMw-PLGA. Furthermore, after oral administration of HMw-NPC, the Tmax of curcumin was 15 minutes shorter than that of the other curcumin formulations (see Table 2). These results indicate that the relative bioavailability of curcumin was increased at a shorter Tmax, indicating that Tmax was closely related to the bioavailability of curcumin.\nWithout nanoformulation, the accumulated percentage of unabsorbed curcumin in feces approached 90% and in urine was almost 0% (Figure 4), implying that there was low absorption of curcumin in the gastrointestinal tract. PLGA encapsulation reduced the amount of curcumin in feces (from 90% for curcumin alone to 40% for LMw-PLG to 20% for HMw-PLGA) and increased the amount of curcumin in urine (from near 0% to 0.07% to 0.16%), respectively, indicating that the absorption of curcumin was enhanced after PLGA formulation. These excretion results confirmed the pharmacokinetic parameters of the oral study, ie, the absorption of curcumin was significantly increased by both LMw-PLGA and HMw-PLGA formulation.", "To understand better the relationship between relative bioavailability and Tmax for different molecular weights of PLGA-encapsulated curcumin and to explore the possible reasons for this, an ex vivo absorption study was undertaken. The concentration of curcumin absorbed from the incubation medium into the sac tissue and then into the serosal fluid is shown in Figure 5. When samples of the reverted sac of rat small intestine were incubated with conventional curcumin, LMw-NPC, and HMw-NPC, the amount of curcumin absorbed from the curcumin nanoformulation was significantly increased in the duodenum and the ileum, but was not significantly different in the jejunum. The curcumin content was highest in the serosal fluid of the duodenum in the HMw-NPC group, followed by the LMw-NPC and conventional curcumin groups. The curcumin concentration from HMw-NPC in the serosal fluid of the duodenum was 5.12-fold and 2.66-fold higher than from LMw-NPC and conventional curcumin, respectively (Figure 5A).\nThe absorption of curcumin, as determined by the sum of the amounts of curcumin in the serosal fluid, sac tissue, and regions of the rat gut after incubation with curcumin and its formulations, is shown in Table 3. It can be seen that the amount of curcumin in the duodenum and ileum was significantly increased by curcumin encapsulated with PLGA. The HMw-PLGA formulation noticeably enhanced the absorption of curcumin in the duodenum compared with LMw-PLGA (from 0.63 μg to 0.89 μg) and with conventional curcumin (from 0.47 μg to 0.89 μg). The ex vivo curcumin absorption results support the findings of the oral study, ie, that curcumin formulated with PLGA increased the absorption of curcumin in the intestine. Overall, HMw- PLGA enabled considerably more curcumin absorption than LMw-PLGA." ]
[ null, null, null, null, null, null, null, null, null, null, null, "intro", null, null, null ]
[ "Introduction", "Materials and methods", "Chemicals and reagents", "Encapsulation of curcumin in PLGA nanoparticles", "Characteristics of LMw-NPC and HMw-NPC", "Entrapment efficiency", "Transmission electron microscopy", "HPLC system and analytical method validation", "HPLC system", "Validation of analytical method", "Animal study", "Sample preparation", "Ex vivo curcumin absorption using reverted rat gut sacs", "Pharmacokinetics and statistics", "Results", "Characteristics of LMw-NPC and HMw-NPC", "Validation of HPLC analytic method", "Bioavailability and pharmacokinetics", "Ex vivo absorption", "Discussion" ]
[ "Polyphenols exist widely in plants and plant-derived foods, including vegetables, fruit, tea, spices, wine, beverages, and nutritional supplement products, thus constitute an important part of the human diet. Previous studies have demonstrated that polyphenols have broad pharmacological activity and biological benefits, including anticancer, neuroprotection, chemoprevention, and protection against cardiovascular disease.1 These advantages are attributed to their useful antioxidant and anti-inflammatory properties that regulate cell proliferation and function to prevent onset and progression of various human diseases.2 However, the beneficial effects of many polyphenols on human health are limited due to their low oral bioavailability. This is because the polyphenols, including curcumin,3 resveratrol,4 quercetin,5 daidzein,6 silymarin,7 and other polyphenols2 are poorly absorbed in the gut and undergo fast metabolism by the liver.\nTo improve the low oral bioavailability of polyphenols, researchers have investigated their encapsulation in different nanoformulation systems, including micelles, liposomes, and nanoparticles.2,8 The mechanism by which these nanoformulations enhance the oral bioavailability of polyphenols involves increasing the surface area and interaction of the compounds, thereby raising the dissolution rate to improve their absorption.9 Nevertheless, previous studies have shown that polyester nanoparticle-based formulations are suitable for oral ingestion because they are more stable than other formulations in the gastrointestinal tract.10 Polyesters, for instance, polylactic-co-glycolic acid (PLGA), are the substances most often used in nanoformulations and have been approved by the US Food and Drug Administration as therapeutic devices owing to their biocompatibility, biodegradability, and versatile degradation kinetics.11 Recent research has shown that nanoformulations based on PLGA and nanogels have lower human serum protein binding and are more compatible than β-cyclodextrin, cellulose, and dendrimers with human erythrocytes. 12 Accordingly, PLGA nanoparticle systems have been used extensively to increase the oral bioavailability of polyphenols. Sonaje et al have demonstrated that the oral bioavailability of ellagic acid is increased when it is nanoformulated with PLGA.13 Another study has shown that quercetin encapsulated in PLGA nanoparticles has an increased ability to protect the liver and brain against oxidative damage.14 Curcumin in a PLGA nanoformulation not only has increased oral bioavailability but is also taken up in increased amounts by metastatic cancer cells.3,15 Xie et al have reported that the improved oral bioavailability of curcumin when encapsulated in PLGA nanoparticles may be attributable to enhanced water solubility, improved permeability, and inhibition of P-glycoprotein-mediated efflux in the intestine.16\nAlthough PLGA nanoformulations are effective in improving the oral bioavailability of polyphenols, many kinds of PLGA with different molecular weights are available on the market. Our knowledge about the influence of the molecular weight of PLGA on the oral bioavailability of polyphenols is limited. Hence, the aim of this work was to investigate the effects of the molecular weight of PLGA on the oral bioavailability of polyphenols. 
Curcumin, one of the biologically active polyphenols, is a candidate compound for further pharmacokinetic study.", " Chemicals and reagents Polyvinyl alcohol (molecular weight 9000–10,000), low molecular weight PLGA (50:50, molecular weight 5000–15,000), high molecular weight PLGA (50:50, molecular weight 40,000–75,000), monosodium phosphate, sucrose, 2-(4′-hydroxybenzeneazo)benzoic acid (as the internal standard), and polyethylene glycol 400 were obtained from Sigma-Aldrich (St Louis, MO). Curcumin (purity ≥95%) was purchased from Fluka (Buchs, Switzerland). Acetonitrile and dichloromethane were obtained from Merck (Darmstadt, Germany). Milli-Q grade water (Millipore, Bedford, MA) was used for preparation of the mobile phase and solution.\nPolyvinyl alcohol (molecular weight 9000–10,000), low molecular weight PLGA (50:50, molecular weight 5000–15,000), high molecular weight PLGA (50:50, molecular weight 40,000–75,000), monosodium phosphate, sucrose, 2-(4′-hydroxybenzeneazo)benzoic acid (as the internal standard), and polyethylene glycol 400 were obtained from Sigma-Aldrich (St Louis, MO). Curcumin (purity ≥95%) was purchased from Fluka (Buchs, Switzerland). Acetonitrile and dichloromethane were obtained from Merck (Darmstadt, Germany). Milli-Q grade water (Millipore, Bedford, MA) was used for preparation of the mobile phase and solution.\n Encapsulation of curcumin in PLGA nanoparticles A high-pressure emulsification-solvent evaporation technique was used to encapsulate curcumin in PLGA nanoparticles, as reported previously.17 In brief, 50 mg of PLGA (low or high molecular weight) and 5 mg of curcumin were added to 1.25 mL of dichloromethane as the oil phase and sonicated for 5 minutes to dissolve all substances. This oil phase was added to 10 mL of aqueous phase (containing 2% polyvinyl alcohol and 20% sucrose, w/v) and then homogenized using a Polytron PT-MR 2100 (Kinematica AG, Lucerne, Switzerland) at 28,000 rpm for 10 minutes to form an emulsion. The emulsion was passed across a 0.1 μm filter twice at an operating pressure of 5 kg/cm2 using an accessory extruder (EF-C5, Avestin, Canada) to formulate low and high molecular weight PLGA nanoparticles encapsulating curcumin (LMw-NPC and HMw-NPC, respectively). The resulting nanoparticles were stirred at 500 rpm overnight and air-dried to evaporate the organic solvent.\nA high-pressure emulsification-solvent evaporation technique was used to encapsulate curcumin in PLGA nanoparticles, as reported previously.17 In brief, 50 mg of PLGA (low or high molecular weight) and 5 mg of curcumin were added to 1.25 mL of dichloromethane as the oil phase and sonicated for 5 minutes to dissolve all substances. This oil phase was added to 10 mL of aqueous phase (containing 2% polyvinyl alcohol and 20% sucrose, w/v) and then homogenized using a Polytron PT-MR 2100 (Kinematica AG, Lucerne, Switzerland) at 28,000 rpm for 10 minutes to form an emulsion. The emulsion was passed across a 0.1 μm filter twice at an operating pressure of 5 kg/cm2 using an accessory extruder (EF-C5, Avestin, Canada) to formulate low and high molecular weight PLGA nanoparticles encapsulating curcumin (LMw-NPC and HMw-NPC, respectively). The resulting nanoparticles were stirred at 500 rpm overnight and air-dried to evaporate the organic solvent.\n Characteristics of LMw-NPC and HMw-NPC The particle size and polydispersity index of the LMw-NPC and HMw-NPC were measured by dynamic light scattering (90Plus, BIC, Holtsville, NY). 
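As a point of reference for the formulation amounts given above (50 mg of PLGA and 5 mg of curcumin per batch), the theoretical drug loading, and the loading implied by the roughly 45% entrapment efficiency reported in the Results, can be estimated as follows. This is an illustrative calculation, not a value reported by the authors, and it assumes complete recovery of the polymer and removal of unencapsulated curcumin:

```latex
\text{Theoretical loading} = \frac{5\ \mathrm{mg}}{50\ \mathrm{mg} + 5\ \mathrm{mg}} \times 100\% \approx 9.1\%\ (\mathrm{w/w}),
\qquad
\text{Loading at } \mathrm{EE}\approx 45\%:\;
\frac{0.45 \times 5\ \mathrm{mg}}{50\ \mathrm{mg} + 0.45 \times 5\ \mathrm{mg}} \times 100\% \approx 4.3\%\ (\mathrm{w/w}).
```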
The zeta potential of the LMw-NPC and HMw-NPC was determined using a zeta potential analyzer (90Plus, BIC).\nThe particle size and polydispersity index of the LMw-NPC and HMw-NPC were measured by dynamic light scattering (90Plus, BIC, Holtsville, NY). The zeta potential of the LMw-NPC and HMw-NPC was determined using a zeta potential analyzer (90Plus, BIC).\n Entrapment efficiency To evaluate the entrapment efficiency of curcumin encapsulated in PLGA nanoparticles, LMw-NPC and HMw-NPC solutions were centrifuged at 11,000 rpm for 15 minutes ( Centrifuge 5415R, Eppendorf, Germany). After centrifugation, the supernatant was removed and 1 mL of acetonitrile was then added to the nanoparticle pellet to destroy its structure and allow release of curcumin. The amount of curcumin in the nanoparticles was analyzed by high-performance liquid chromatography (HPLC). Percent entrapment efficiency is given by:\nTo evaluate the entrapment efficiency of curcumin encapsulated in PLGA nanoparticles, LMw-NPC and HMw-NPC solutions were centrifuged at 11,000 rpm for 15 minutes ( Centrifuge 5415R, Eppendorf, Germany). After centrifugation, the supernatant was removed and 1 mL of acetonitrile was then added to the nanoparticle pellet to destroy its structure and allow release of curcumin. The amount of curcumin in the nanoparticles was analyzed by high-performance liquid chromatography (HPLC). Percent entrapment efficiency is given by:\n Transmission electron microscopy Briefly, the LMw-NPC and HMw-NPC solutions were placed dropwise onto a 400 mesh copper grid coated with carbon. About 15 minutes after nanoparticle deposition, the grid was patted with filter paper to remove surface water and then stained using a solution of phosphotungstic acid (2%, w/v) for 20 minutes. TEM samples were obtained after the stained sample had been allowed to dry in air. Micrographs of LMw- NPC and HMw-NPC were obtained using a transmission electron microscope (TEM, JEM-2000EXII, JEOL, Tokyo, Japan).\nBriefly, the LMw-NPC and HMw-NPC solutions were placed dropwise onto a 400 mesh copper grid coated with carbon. About 15 minutes after nanoparticle deposition, the grid was patted with filter paper to remove surface water and then stained using a solution of phosphotungstic acid (2%, w/v) for 20 minutes. TEM samples were obtained after the stained sample had been allowed to dry in air. Micrographs of LMw- NPC and HMw-NPC were obtained using a transmission electron microscope (TEM, JEM-2000EXII, JEOL, Tokyo, Japan).\n HPLC system and analytical method validation HPLC system The HPLC system comprised a chromatographic pump (LC-20AT, Shimadzu, Kyoto, Japan), autosampler ( SIL-20AT, Shimadzu), diode array detector (SPD-M20A, Shimadzu), and degasser (DG-240). A reversed-phase C18 column (4.6 × 150 mm, particle size 5 μm, Eclipse XDB, Agilent, Palo Alto, CA) was used for the HPLC separation. The mobile phase consisted of acetonitrile, 10 mM monosodium phosphate (pH 3.5 adjusted by phosphoric acid, 40:60, v/v) at a flow rate of 0.8 mL per minute. The HPLC run time was 27 minutes and the detection wavelength was set at 425 nm. The mobile phase was passed through a 0.45 μm Millipore membrane filter and then degassed by sonication 2510R-DTH (Bransonic, CT) before use.\nThe HPLC system comprised a chromatographic pump (LC-20AT, Shimadzu, Kyoto, Japan), autosampler ( SIL-20AT, Shimadzu), diode array detector (SPD-M20A, Shimadzu), and degasser (DG-240). 
A reversed-phase C18 column (4.6 × 150 mm, particle size 5 μm, Eclipse XDB, Agilent, Palo Alto, CA) was used for the HPLC separation. The mobile phase consisted of acetonitrile, 10 mM monosodium phosphate (pH 3.5 adjusted by phosphoric acid, 40:60, v/v) at a flow rate of 0.8 mL per minute. The HPLC run time was 27 minutes and the detection wavelength was set at 425 nm. The mobile phase was passed through a 0.45 μm Millipore membrane filter and then degassed by sonication 2510R-DTH (Bransonic, CT) before use.\n HPLC system The HPLC system comprised a chromatographic pump (LC-20AT, Shimadzu, Kyoto, Japan), autosampler ( SIL-20AT, Shimadzu), diode array detector (SPD-M20A, Shimadzu), and degasser (DG-240). A reversed-phase C18 column (4.6 × 150 mm, particle size 5 μm, Eclipse XDB, Agilent, Palo Alto, CA) was used for the HPLC separation. The mobile phase consisted of acetonitrile, 10 mM monosodium phosphate (pH 3.5 adjusted by phosphoric acid, 40:60, v/v) at a flow rate of 0.8 mL per minute. The HPLC run time was 27 minutes and the detection wavelength was set at 425 nm. The mobile phase was passed through a 0.45 μm Millipore membrane filter and then degassed by sonication 2510R-DTH (Bransonic, CT) before use.\nThe HPLC system comprised a chromatographic pump (LC-20AT, Shimadzu, Kyoto, Japan), autosampler ( SIL-20AT, Shimadzu), diode array detector (SPD-M20A, Shimadzu), and degasser (DG-240). A reversed-phase C18 column (4.6 × 150 mm, particle size 5 μm, Eclipse XDB, Agilent, Palo Alto, CA) was used for the HPLC separation. The mobile phase consisted of acetonitrile, 10 mM monosodium phosphate (pH 3.5 adjusted by phosphoric acid, 40:60, v/v) at a flow rate of 0.8 mL per minute. The HPLC run time was 27 minutes and the detection wavelength was set at 425 nm. The mobile phase was passed through a 0.45 μm Millipore membrane filter and then degassed by sonication 2510R-DTH (Bransonic, CT) before use.\n Validation of analytical method A stock solution of curcumin was diluted in acetonitrile 500 μg/mL to make serial concentrations of a working standard solution (0.1, 0.25, 1, 2.5, 10, and 25 μg/mL) with 50% acetonitrile. Calibration standards were prepared using 5 μL of the working standard solution spiked with 45 μL of blank plasma, feces, and urine. The sample extracted procedures described follow the sample preparation section, on the next page. The calibration curves were represented by the peak area ratio of curcumin to the internal standard spiked in blank matrix as the y axis and the concentration of curcumin as the x axis.\nThe limits of detection and quantification were defined as a signal-to-noise ratio of 3 and the lowest concentration of the linear regression, respectively. The accuracy and precision of intraday (same day) and interday (six sequential days) sampling of curcumin were assayed (six individual tests). Accuracy (% bias) was calculated as [(Cobs – Cnom)/Cnom] × 100, while Cnom represented the nominal concentration and Cobs indicated the mean value of the observed concentration. Precision was calculated as the relative standard deviation (RSD) from the observed concentrations as follows:\nThe % bias and % RSD value for the lowest acceptable reproducibility concentrations were defined as being within ±15%. 
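The precision expression referred to above ("as follows:") is not reproduced in this excerpt. The definitions consistent with the surrounding text, with the % bias formula restated alongside the standard relative standard deviation, are:

```latex
\%\,\mathrm{bias} = \frac{C_{\mathrm{obs}} - C_{\mathrm{nom}}}{C_{\mathrm{nom}}} \times 100,
\qquad
\%\,\mathrm{RSD} = \frac{\mathrm{SD}(C_{\mathrm{obs}})}{\overline{C}_{\mathrm{obs}}} \times 100,
```

with both required to fall within ±15% at the lowest acceptable concentration.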
Extraction recovery (%) of curcumin was assessed by comparing the peak area ratio of curcumin and the internal standard spiked with blank matrix three times (at concentrations of 0.1, 0.25, and 2.5 μg/mL) to reference the curcumin prepared in 50% acetonitrile at the same concentrations.\nA stock solution of curcumin was diluted in acetonitrile 500 μg/mL to make serial concentrations of a working standard solution (0.1, 0.25, 1, 2.5, 10, and 25 μg/mL) with 50% acetonitrile. Calibration standards were prepared using 5 μL of the working standard solution spiked with 45 μL of blank plasma, feces, and urine. The sample extracted procedures described follow the sample preparation section, on the next page. The calibration curves were represented by the peak area ratio of curcumin to the internal standard spiked in blank matrix as the y axis and the concentration of curcumin as the x axis.\nThe limits of detection and quantification were defined as a signal-to-noise ratio of 3 and the lowest concentration of the linear regression, respectively. The accuracy and precision of intraday (same day) and interday (six sequential days) sampling of curcumin were assayed (six individual tests). Accuracy (% bias) was calculated as [(Cobs – Cnom)/Cnom] × 100, while Cnom represented the nominal concentration and Cobs indicated the mean value of the observed concentration. Precision was calculated as the relative standard deviation (RSD) from the observed concentrations as follows:\nThe % bias and % RSD value for the lowest acceptable reproducibility concentrations were defined as being within ±15%. Extraction recovery (%) of curcumin was assessed by comparing the peak area ratio of curcumin and the internal standard spiked with blank matrix three times (at concentrations of 0.1, 0.25, and 2.5 μg/mL) to reference the curcumin prepared in 50% acetonitrile at the same concentrations.\n Animal study Male Sprague-Dawley rats (215 ± 10 g body weight) were obtained from the Laboratory Animal Center at National Yang-Ming University (Taipei, Taiwan). The animals were pathogen-free and allowed to adapt to an environmentally controlled habitat (24°C ± 1°C on a 12-hour light-dark cycle). Food (Laboratory Rodent Diet 5001, PMI Feeds Inc, Richmond, IN) and water were available ad libitum. All animal experiments were performed according to the National Yang-Ming University guidelines, principles, and procedures for the care and use of laboratory animals.\nThe rats were anesthetized using pentobarbital 50 mg/kg intraperitoneally. During anesthesia, the right jugular vein was catheterized with polyethylene tubing for blood sampling. An external cannula was fastened in the dorsal neck area. The rats were returned to consciousness after one day and could move freely. Curcumin and the LMw-NPC/HMw-NPC nanoformulations were administered by gavage to the rats at doses of 1 g/kg and 50 mg/kg, respectively. After oral administration, a 300 μL blood sample was obtained from the right jugular vein into a tube rinsed with heparin at 15, 30, 45, 60, 90, 120, 150, 180, 240, 300, 360, and 480 minutes. Fecal and urine samples were collected in metabolic cages (Mini Mitter, Bend, OR) at 0–12, 12–24, 24–36, 36–48, and 48–72 hours after treatment with curcumin, LMw-NPC, or HMw-NPC.\nMale Sprague-Dawley rats (215 ± 10 g body weight) were obtained from the Laboratory Animal Center at National Yang-Ming University (Taipei, Taiwan). 
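Expressed as an equation, the extraction-recovery comparison described above amounts to the following ratio; the original expression is not shown in this excerpt, so this is the standard form implied by the description:

```latex
\mathrm{Recovery}\,(\%) =
\frac{(A_{\mathrm{curcumin}}/A_{\mathrm{IS}})_{\text{spiked matrix}}}
     {(A_{\mathrm{curcumin}}/A_{\mathrm{IS}})_{\text{50\% acetonitrile reference}}} \times 100 .
```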
The animals were pathogen-free and allowed to adapt to an environmentally controlled habitat (24°C ± 1°C on a 12-hour light-dark cycle). Food (Laboratory Rodent Diet 5001, PMI Feeds Inc, Richmond, IN) and water were available ad libitum. All animal experiments were performed according to the National Yang-Ming University guidelines, principles, and procedures for the care and use of laboratory animals.\nThe rats were anesthetized using pentobarbital 50 mg/kg intraperitoneally. During anesthesia, the right jugular vein was catheterized with polyethylene tubing for blood sampling. An external cannula was fastened in the dorsal neck area. The rats were returned to consciousness after one day and could move freely. Curcumin and the LMw-NPC/HMw-NPC nanoformulations were administered by gavage to the rats at doses of 1 g/kg and 50 mg/kg, respectively. After oral administration, a 300 μL blood sample was obtained from the right jugular vein into a tube rinsed with heparin at 15, 30, 45, 60, 90, 120, 150, 180, 240, 300, 360, and 480 minutes. Fecal and urine samples were collected in metabolic cages (Mini Mitter, Bend, OR) at 0–12, 12–24, 24–36, 36–48, and 48–72 hours after treatment with curcumin, LMw-NPC, or HMw-NPC.\n Sample preparation Blood samples were centrifuged at 8000 rpm for 10 minutes at 4°C for preparation of the plasma samples. The supernatant was collected and preserved at −20°C before analysis. The feces were air-dried, weighed, and then homogenized in five- fold of their volume (1 g feces with 5 ml) with 50% aqueous acetonitrile using the Polytron PT-MR 2100 homogenizer. The fecal samples were then centrifuged at 6000 rpm for 10 minutes at 4°C, and the supernatant was then collected and preserved at −20°C for further assay. Plasma and fecal samples (50 μL, vortex- mixed with 100 μL of internal standard solution containing 1.5 μg/mL of 2-(4′-hydroxybenzeneazo)benzoic acid dissolved in acetonitrile) were subjected to protein precipitation. After centrifugation at 12,000 rpm for 15 minutes, 20 μL of supernatant was collected and analyzed by HPLC.\nThe original volume of urine was recorded before centrifugation at 8000 rpm for 10 minutes and collection of the supernatant. Next, 50 μL of the urine supernatant was extracted using 1 mL of ethyl acetate and vortexed for one minute. The ethyl acetate phase was collected by centrifugation (13,000 rpm for one minute), and then transferred to a clear glass tube. After drying by vacuum centrifugation, the residue was reconstituted in 100 μL of 50% internal standard solution, and 20 μL of the dissolved solution was injected into the HPLC column.\nBlood samples were centrifuged at 8000 rpm for 10 minutes at 4°C for preparation of the plasma samples. The supernatant was collected and preserved at −20°C before analysis. The feces were air-dried, weighed, and then homogenized in five- fold of their volume (1 g feces with 5 ml) with 50% aqueous acetonitrile using the Polytron PT-MR 2100 homogenizer. The fecal samples were then centrifuged at 6000 rpm for 10 minutes at 4°C, and the supernatant was then collected and preserved at −20°C for further assay. Plasma and fecal samples (50 μL, vortex- mixed with 100 μL of internal standard solution containing 1.5 μg/mL of 2-(4′-hydroxybenzeneazo)benzoic acid dissolved in acetonitrile) were subjected to protein precipitation. 
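For the plasma and fecal work-ups described above, the measured HPLC concentration has to be corrected for the protein-precipitation dilution (50 μL of sample plus 100 μL of internal-standard solution) and, for feces, for the 5 mL-per-gram homogenization step, before results can be expressed per mL of plasma or per gram of feces. A minimal back-calculation sketch under the assumption that these are the only dilution steps (the authors do not spell out their correction factors, and the volume contributed by the feces themselves is ignored):

```python
def plasma_conc(measured_ug_per_ml: float) -> float:
    """Back-calculate plasma concentration (ug/mL) from the HPLC result.

    50 uL plasma + 100 uL internal-standard solution -> 3-fold dilution.
    """
    dilution = (50 + 100) / 50  # = 3
    return measured_ug_per_ml * dilution


def fecal_amount_per_gram(measured_ug_per_ml: float) -> float:
    """Back-calculate curcumin content (ug per g of dried feces).

    Feces were homogenized in 5 mL of 50% acetonitrile per gram,
    then 50 uL of homogenate + 100 uL of internal-standard solution.
    """
    dilution = (50 + 100) / 50   # protein-precipitation step
    homogenate_ml_per_g = 5.0    # 1 g feces in 5 mL solvent
    return measured_ug_per_ml * dilution * homogenate_ml_per_g


if __name__ == "__main__":
    print(plasma_conc(0.02))           # 0.02 ug/mL measured -> 0.06 ug/mL plasma
    print(fecal_amount_per_gram(0.5))  # 0.5 ug/mL measured -> 7.5 ug per g feces
```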
After centrifugation at 12,000 rpm for 15 minutes, 20 μL of supernatant was collected and analyzed by HPLC.\nThe original volume of urine was recorded before centrifugation at 8000 rpm for 10 minutes and collection of the supernatant. Next, 50 μL of the urine supernatant was extracted using 1 mL of ethyl acetate and vortexed for one minute. The ethyl acetate phase was collected by centrifugation (13,000 rpm for one minute), and then transferred to a clear glass tube. After drying by vacuum centrifugation, the residue was reconstituted in 100 μL of 50% internal standard solution, and 20 μL of the dissolved solution was injected into the HPLC column.\n Ex vivo curcumin absorption using reverted rat gut sacs The ex vivo drug absorption study was performed as previous described.18 Male Sprague-Dawley rats (210 ± 10 g body weight) were sacrificed by intraperitoneal overdose using 1 mL/kg urethane (1 g/mL) and α-chloralose (0.1 g/mL). The entire small intestine was quickly removed and douched several times with normal saline (0.9%, w/v) at room temperature. The intestine was put into warm (37°C) TC-199 medium (pH 7.4) and separated into the duodenum, jejunum, and ileum. The gut was then gently reverted using a glass rod (2.5 mm diameter). The large gut sac was divided into small sacs of approximately 2 cm in length, then filled with TC-199 medium and tied with a silk suture.\nEach of the small sacs was incubated in a conical flask containing 19.5 mL of TC-199 medium in a gentle shaking water bath at 37°C. Next, 0.5 mL of curcumin 100 μg/mL, LMw-NPC, and HMw-NPC were added into conical flasks. After incubating for one hour, each sac was cut open, and the serosal fluid and sac tissue collected into a clear tube; 1 mL of acetonitrile was added to the sac tissue, which was then homogenized and centrifuged at 6000 rpm for 10 minutes. A 50 μL quantity of the sac supernatant and serosal fluid was added to 100 μL of internal standard solution for protein precipitation and centrifuged at 13,000 rpm for 10 minutes. The supernatants (20 μL) were collected and analyzed by HPLC.\nThe ex vivo drug absorption study was performed as previous described.18 Male Sprague-Dawley rats (210 ± 10 g body weight) were sacrificed by intraperitoneal overdose using 1 mL/kg urethane (1 g/mL) and α-chloralose (0.1 g/mL). The entire small intestine was quickly removed and douched several times with normal saline (0.9%, w/v) at room temperature. The intestine was put into warm (37°C) TC-199 medium (pH 7.4) and separated into the duodenum, jejunum, and ileum. The gut was then gently reverted using a glass rod (2.5 mm diameter). The large gut sac was divided into small sacs of approximately 2 cm in length, then filled with TC-199 medium and tied with a silk suture.\nEach of the small sacs was incubated in a conical flask containing 19.5 mL of TC-199 medium in a gentle shaking water bath at 37°C. Next, 0.5 mL of curcumin 100 μg/mL, LMw-NPC, and HMw-NPC were added into conical flasks. After incubating for one hour, each sac was cut open, and the serosal fluid and sac tissue collected into a clear tube; 1 mL of acetonitrile was added to the sac tissue, which was then homogenized and centrifuged at 6000 rpm for 10 minutes. A 50 μL quantity of the sac supernatant and serosal fluid was added to 100 μL of internal standard solution for protein precipitation and centrifuged at 13,000 rpm for 10 minutes. 
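For the gut-sac incubations described above, the nominal curcumin concentration in each flask follows from the stated volumes, assuming the nanoformulations were dosed at the same 100 μg/mL curcumin-equivalent concentration as the conventional curcumin solution (the text does not state this explicitly):

```latex
C_{\mathrm{flask}} = \frac{0.5\ \mathrm{mL} \times 100\ \mu\mathrm{g/mL}}{19.5\ \mathrm{mL} + 0.5\ \mathrm{mL}}
= \frac{50\ \mu\mathrm{g}}{20\ \mathrm{mL}} = 2.5\ \mu\mathrm{g/mL}.
```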
The supernatants (20 μL) were collected and analyzed by HPLC.\n Pharmacokinetics and statistics Pharmacokinetic values were calculated from each individual set of data using WinNonlin, standard edition, version 1.1 (Pharsight Corporation, Mountain View, CA) and the noncompartmental method. The relative oral bioavailability of curcumin was calculated according to the equation:\nThe pharmacokinetic results are represented as the mean ± standard error. The statistical analysis was performed by t-test (SPSS version 10.0, SPSS Inc, Chicago, IL) to compare different groups. The level of statistical significance was set at P < 0.05.\nPharmacokinetic values were calculated from each individual set of data using WinNonlin, standard edition, version 1.1 (Pharsight Corporation, Mountain View, CA) and the noncompartmental method. The relative oral bioavailability of curcumin was calculated according to the equation:\nThe pharmacokinetic results are represented as the mean ± standard error. The statistical analysis was performed by t-test (SPSS version 10.0, SPSS Inc, Chicago, IL) to compare different groups. The level of statistical significance was set at P < 0.05.", "Polyvinyl alcohol (molecular weight 9000–10,000), low molecular weight PLGA (50:50, molecular weight 5000–15,000), high molecular weight PLGA (50:50, molecular weight 40,000–75,000), monosodium phosphate, sucrose, 2-(4′-hydroxybenzeneazo)benzoic acid (as the internal standard), and polyethylene glycol 400 were obtained from Sigma-Aldrich (St Louis, MO). Curcumin (purity ≥95%) was purchased from Fluka (Buchs, Switzerland). Acetonitrile and dichloromethane were obtained from Merck (Darmstadt, Germany). Milli-Q grade water (Millipore, Bedford, MA) was used for preparation of the mobile phase and solution.", "A high-pressure emulsification-solvent evaporation technique was used to encapsulate curcumin in PLGA nanoparticles, as reported previously.17 In brief, 50 mg of PLGA (low or high molecular weight) and 5 mg of curcumin were added to 1.25 mL of dichloromethane as the oil phase and sonicated for 5 minutes to dissolve all substances. This oil phase was added to 10 mL of aqueous phase (containing 2% polyvinyl alcohol and 20% sucrose, w/v) and then homogenized using a Polytron PT-MR 2100 (Kinematica AG, Lucerne, Switzerland) at 28,000 rpm for 10 minutes to form an emulsion. The emulsion was passed across a 0.1 μm filter twice at an operating pressure of 5 kg/cm2 using an accessory extruder (EF-C5, Avestin, Canada) to formulate low and high molecular weight PLGA nanoparticles encapsulating curcumin (LMw-NPC and HMw-NPC, respectively). The resulting nanoparticles were stirred at 500 rpm overnight and air-dried to evaporate the organic solvent.", "The particle size and polydispersity index of the LMw-NPC and HMw-NPC were measured by dynamic light scattering (90Plus, BIC, Holtsville, NY). The zeta potential of the LMw-NPC and HMw-NPC was determined using a zeta potential analyzer (90Plus, BIC).", "To evaluate the entrapment efficiency of curcumin encapsulated in PLGA nanoparticles, LMw-NPC and HMw-NPC solutions were centrifuged at 11,000 rpm for 15 minutes ( Centrifuge 5415R, Eppendorf, Germany). After centrifugation, the supernatant was removed and 1 mL of acetonitrile was then added to the nanoparticle pellet to destroy its structure and allow release of curcumin. The amount of curcumin in the nanoparticles was analyzed by high-performance liquid chromatography (HPLC). 
Percent entrapment efficiency is given by:", "Briefly, the LMw-NPC and HMw-NPC solutions were placed dropwise onto a 400 mesh copper grid coated with carbon. About 15 minutes after nanoparticle deposition, the grid was patted with filter paper to remove surface water and then stained using a solution of phosphotungstic acid (2%, w/v) for 20 minutes. TEM samples were obtained after the stained sample had been allowed to dry in air. Micrographs of LMw-NPC and HMw-NPC were obtained using a transmission electron microscope (TEM, JEM-2000EXII, JEOL, Tokyo, Japan).", " HPLC system The HPLC system comprised a chromatographic pump (LC-20AT, Shimadzu, Kyoto, Japan), autosampler (SIL-20AT, Shimadzu), diode array detector (SPD-M20A, Shimadzu), and degasser (DG-240). A reversed-phase C18 column (4.6 × 150 mm, particle size 5 μm, Eclipse XDB, Agilent, Palo Alto, CA) was used for the HPLC separation. The mobile phase consisted of acetonitrile and 10 mM monosodium phosphate (pH 3.5, adjusted with phosphoric acid; 40:60, v/v) at a flow rate of 0.8 mL per minute. The HPLC run time was 27 minutes and the detection wavelength was set at 425 nm. The mobile phase was passed through a 0.45 μm Millipore membrane filter and then degassed by sonication (2510R-DTH, Bransonic, CT) before use.", "A stock solution of curcumin (500 μg/mL in acetonitrile) was diluted with 50% acetonitrile to make serial working standard solutions (0.1, 0.25, 1, 2.5, 10, and 25 μg/mL). Calibration standards were prepared using 5 μL of the working standard solution spiked into 45 μL of blank plasma, feces, and urine. The sample extraction procedures are described in the Sample preparation section below. The calibration curves were constructed with the peak area ratio of curcumin to the internal standard spiked in blank matrix as the y axis and the concentration of curcumin as the x axis.\nThe limits of detection and quantification were defined as a signal-to-noise ratio of 3 and the lowest concentration of the linear regression, respectively. 
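The percent entrapment efficiency expression referred to earlier ("Percent entrapment efficiency is given by:") is not reproduced in this excerpt. The standard definition consistent with the centrifugation procedure (curcumin recovered from the nanoparticle pellet relative to the total curcumin added) is assumed to be:

```latex
\mathrm{EE}\,(\%) =
\frac{\text{amount of curcumin recovered from the nanoparticle pellet}}
     {\text{total amount of curcumin added to the formulation}} \times 100 .
```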
The accuracy and precision of intraday (same day) and interday (six sequential days) sampling of curcumin were assayed (six individual tests). Accuracy (% bias) was calculated as [(Cobs – Cnom)/Cnom] × 100, while Cnom represented the nominal concentration and Cobs indicated the mean value of the observed concentration. Precision was calculated as the relative standard deviation (RSD) from the observed concentrations as follows:\nThe % bias and % RSD value for the lowest acceptable reproducibility concentrations were defined as being within ±15%. Extraction recovery (%) of curcumin was assessed by comparing the peak area ratio of curcumin and the internal standard spiked with blank matrix three times (at concentrations of 0.1, 0.25, and 2.5 μg/mL) to reference the curcumin prepared in 50% acetonitrile at the same concentrations.", "Male Sprague-Dawley rats (215 ± 10 g body weight) were obtained from the Laboratory Animal Center at National Yang-Ming University (Taipei, Taiwan). The animals were pathogen-free and allowed to adapt to an environmentally controlled habitat (24°C ± 1°C on a 12-hour light-dark cycle). Food (Laboratory Rodent Diet 5001, PMI Feeds Inc, Richmond, IN) and water were available ad libitum. All animal experiments were performed according to the National Yang-Ming University guidelines, principles, and procedures for the care and use of laboratory animals.\nThe rats were anesthetized using pentobarbital 50 mg/kg intraperitoneally. During anesthesia, the right jugular vein was catheterized with polyethylene tubing for blood sampling. An external cannula was fastened in the dorsal neck area. The rats were returned to consciousness after one day and could move freely. Curcumin and the LMw-NPC/HMw-NPC nanoformulations were administered by gavage to the rats at doses of 1 g/kg and 50 mg/kg, respectively. After oral administration, a 300 μL blood sample was obtained from the right jugular vein into a tube rinsed with heparin at 15, 30, 45, 60, 90, 120, 150, 180, 240, 300, 360, and 480 minutes. Fecal and urine samples were collected in metabolic cages (Mini Mitter, Bend, OR) at 0–12, 12–24, 24–36, 36–48, and 48–72 hours after treatment with curcumin, LMw-NPC, or HMw-NPC.", "Blood samples were centrifuged at 8000 rpm for 10 minutes at 4°C for preparation of the plasma samples. The supernatant was collected and preserved at −20°C before analysis. The feces were air-dried, weighed, and then homogenized in five- fold of their volume (1 g feces with 5 ml) with 50% aqueous acetonitrile using the Polytron PT-MR 2100 homogenizer. The fecal samples were then centrifuged at 6000 rpm for 10 minutes at 4°C, and the supernatant was then collected and preserved at −20°C for further assay. Plasma and fecal samples (50 μL, vortex- mixed with 100 μL of internal standard solution containing 1.5 μg/mL of 2-(4′-hydroxybenzeneazo)benzoic acid dissolved in acetonitrile) were subjected to protein precipitation. After centrifugation at 12,000 rpm for 15 minutes, 20 μL of supernatant was collected and analyzed by HPLC.\nThe original volume of urine was recorded before centrifugation at 8000 rpm for 10 minutes and collection of the supernatant. Next, 50 μL of the urine supernatant was extracted using 1 mL of ethyl acetate and vortexed for one minute. The ethyl acetate phase was collected by centrifugation (13,000 rpm for one minute), and then transferred to a clear glass tube. 
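The calibration curves described above plot the curcumin/internal-standard peak-area ratio (y) against the spiked curcumin concentration (x); unknowns are then back-calculated by inverting the fitted line. Note that spiking 5 μL of working standard into 45 μL of blank matrix dilutes the 0.1–25 μg/mL working range tenfold, which matches the 0.01–2.5 μg/mL calibration range reported in the Results. A minimal least-squares sketch; the peak-area ratios are hypothetical illustration values (not study data), and unweighted regression is an assumption:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept


def back_calculate(area_ratio, slope, intercept):
    """Convert a measured peak-area ratio to a concentration (ug/mL)."""
    return (area_ratio - intercept) / slope


# Calibration levels in matrix after the 10-fold spike dilution (ug/mL).
conc = [0.01, 0.025, 0.1, 0.25, 1.0, 2.5]
# Hypothetical peak-area ratios for illustration only (not study data).
ratio = [0.008, 0.021, 0.082, 0.20, 0.81, 2.02]

slope, intercept = fit_line(conc, ratio)
print(round(back_calculate(0.50, slope, intercept), 3))  # back-calculate an unknown
```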
After drying by vacuum centrifugation, the residue was reconstituted in 100 μL of 50% internal standard solution, and 20 μL of the dissolved solution was injected into the HPLC column.", "The ex vivo drug absorption study was performed as previous described.18 Male Sprague-Dawley rats (210 ± 10 g body weight) were sacrificed by intraperitoneal overdose using 1 mL/kg urethane (1 g/mL) and α-chloralose (0.1 g/mL). The entire small intestine was quickly removed and douched several times with normal saline (0.9%, w/v) at room temperature. The intestine was put into warm (37°C) TC-199 medium (pH 7.4) and separated into the duodenum, jejunum, and ileum. The gut was then gently reverted using a glass rod (2.5 mm diameter). The large gut sac was divided into small sacs of approximately 2 cm in length, then filled with TC-199 medium and tied with a silk suture.\nEach of the small sacs was incubated in a conical flask containing 19.5 mL of TC-199 medium in a gentle shaking water bath at 37°C. Next, 0.5 mL of curcumin 100 μg/mL, LMw-NPC, and HMw-NPC were added into conical flasks. After incubating for one hour, each sac was cut open, and the serosal fluid and sac tissue collected into a clear tube; 1 mL of acetonitrile was added to the sac tissue, which was then homogenized and centrifuged at 6000 rpm for 10 minutes. A 50 μL quantity of the sac supernatant and serosal fluid was added to 100 μL of internal standard solution for protein precipitation and centrifuged at 13,000 rpm for 10 minutes. The supernatants (20 μL) were collected and analyzed by HPLC.", "Pharmacokinetic values were calculated from each individual set of data using WinNonlin, standard edition, version 1.1 (Pharsight Corporation, Mountain View, CA) and the noncompartmental method. The relative oral bioavailability of curcumin was calculated according to the equation:\nThe pharmacokinetic results are represented as the mean ± standard error. The statistical analysis was performed by t-test (SPSS version 10.0, SPSS Inc, Chicago, IL) to compare different groups. The level of statistical significance was set at P < 0.05.", " Characteristics of LMw-NPC and HMw-NPC The properties of the LMw-NPC and HMw-NPC are summarized in Table 1. The zeta potential refers to the surface charge on the nanoparticles and the polydispersity index is the size distribution, indicating the degree of similarity between the nanoparticles, such that a smaller polydispersity index indicates that the particles are similar in size. Nanoparticles containing curcumin were prepared using a high-pressure emulsification-solvent evaporation procedure with PLGA as the encapsulating material. A negative zeta potential and smaller polydispersity index was obtained (−12.2 mV and 0.076 for LMw-NPC versus −14.1 mV and 0.068 for HMw-NPC). The sizes of the LMw-NPC and HMw-NPC were 166 ± 7.4 nm and 163 ± 4.2 nm, respectively. The encapsulation efficiency of both nanoformulations was nearly 45%. According to the results, there was no statistically significant difference in particle characteristics between LMw-NPC and HMw-NPC. As in our previous studies,3,17 similar properties of nanoparticles were found in the different batches when complied with standard procedures to prepare these LMw-NPC. Therefore, our study results were confirmed to be replicable and were well validated. Figure 1 shows the TEM images of LMw-NPC and HMw-NPC. 
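The noncompartmental analysis described above reduces each concentration-time profile to a few parameters (Cmax, Tmax, AUC). A minimal pure-Python sketch of how these are obtained with the linear trapezoidal rule, using placeholder data rather than the study's values; WinNonlin additionally extrapolates the AUC to infinity, which is omitted here:

```python
def noncompartmental(times_min, conc_ug_ml):
    """Return Cmax, Tmax, and AUC(0-t) using the linear trapezoidal rule."""
    cmax = max(conc_ug_ml)
    tmax = times_min[conc_ug_ml.index(cmax)]
    auc = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        auc += dt * (conc_ug_ml[i] + conc_ug_ml[i - 1]) / 2.0
    return cmax, tmax, auc


# Placeholder concentration-time profile for illustration only (not study data).
t = [0, 15, 30, 45, 60, 120, 240, 360, 480]                        # minutes
c = [0.0, 0.050, 0.041, 0.035, 0.028, 0.017, 0.007, 0.003, 0.001]  # ug/mL

cmax, tmax, auc = noncompartmental(t, c)
print(cmax, tmax, round(auc, 2))  # Cmax (ug/mL), Tmax (min), AUC (ug*min/mL)
```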
The morphology of the curcumin nanoparticles was either ellipsoid or spherical, and the diameter results were similar to the data observed by dynamic light scattering (below 200 nm) for both LMw- NPC and HMw-NPC. Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 with regard to the nanoparticles being a bright color and surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were well prepared and could be used in our subsequent pharmacokinetic study.\nThe properties of the LMw-NPC and HMw-NPC are summarized in Table 1. The zeta potential refers to the surface charge on the nanoparticles and the polydispersity index is the size distribution, indicating the degree of similarity between the nanoparticles, such that a smaller polydispersity index indicates that the particles are similar in size. Nanoparticles containing curcumin were prepared using a high-pressure emulsification-solvent evaporation procedure with PLGA as the encapsulating material. A negative zeta potential and smaller polydispersity index was obtained (−12.2 mV and 0.076 for LMw-NPC versus −14.1 mV and 0.068 for HMw-NPC). The sizes of the LMw-NPC and HMw-NPC were 166 ± 7.4 nm and 163 ± 4.2 nm, respectively. The encapsulation efficiency of both nanoformulations was nearly 45%. According to the results, there was no statistically significant difference in particle characteristics between LMw-NPC and HMw-NPC. As in our previous studies,3,17 similar properties of nanoparticles were found in the different batches when complied with standard procedures to prepare these LMw-NPC. Therefore, our study results were confirmed to be replicable and were well validated. Figure 1 shows the TEM images of LMw-NPC and HMw-NPC. The morphology of the curcumin nanoparticles was either ellipsoid or spherical, and the diameter results were similar to the data observed by dynamic light scattering (below 200 nm) for both LMw- NPC and HMw-NPC. Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 with regard to the nanoparticles being a bright color and surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were well prepared and could be used in our subsequent pharmacokinetic study.\n Validation of HPLC analytic method The HPLC method was developed to investigate curcumin in rat plasma samples. Figure 2A–E shows the chromatograms of blank rat matrices, and Figure 2F–J shows the chromatograms of rat biosamples containing the internal standard and curcumin after administration of HMw-NPC. The retention times were approximately 6.5 and 22 minutes for the internal standard and curcumin, respectively, with no obvious interference peak in the blank chromatograms. These results suggest that the HPLC analytical conditions provided good separation and selectivity for curcumin and the internal standard in the experimental matrices. The validation methods were for linearity, limit of detection, limit of quantification, precision, accuracy, and extraction recovery. The calibration curves for plasma, feces, urine, serosal fluid, and sac tissue showed good linearity (r2 > 0.995) over the range of 0.01–2.5 μg/mL. The limits of detection and quantification for curcumin were determined to be 0.005 μg/mL and 0.01 μg/mL, respectively. Intra-assay and interassay precision (% RSD and accuracy [% bias]) of curcumin measurements in each rat matrix were ±15%. 
Extraction of curcumin at low, medium, and high concentration in rat plasma, feces, urine, serosal fluid, and sac tissue was at least 90%. These results indicate that the analytical and extraction methods used for curcumin and the internal standard were reliable for the following pharmacokinetic study.\nThe HPLC method was developed to investigate curcumin in rat plasma samples. Figure 2A–E shows the chromatograms of blank rat matrices, and Figure 2F–J shows the chromatograms of rat biosamples containing the internal standard and curcumin after administration of HMw-NPC. The retention times were approximately 6.5 and 22 minutes for the internal standard and curcumin, respectively, with no obvious interference peak in the blank chromatograms. These results suggest that the HPLC analytical conditions provided good separation and selectivity for curcumin and the internal standard in the experimental matrices. The validation methods were for linearity, limit of detection, limit of quantification, precision, accuracy, and extraction recovery. The calibration curves for plasma, feces, urine, serosal fluid, and sac tissue showed good linearity (r2 > 0.995) over the range of 0.01–2.5 μg/mL. The limits of detection and quantification for curcumin were determined to be 0.005 μg/mL and 0.01 μg/mL, respectively. Intra-assay and interassay precision (% RSD and accuracy [% bias]) of curcumin measurements in each rat matrix were ±15%. Extraction of curcumin at low, medium, and high concentration in rat plasma, feces, urine, serosal fluid, and sac tissue was at least 90%. These results indicate that the analytical and extraction methods used for curcumin and the internal standard were reliable for the following pharmacokinetic study.\n Bioavailability and pharmacokinetics Mean concentrations of curcumin in rat plasma at different times after oral administration of conventional curcumin (1 mg/kg) and LMw-NPC and HMw-NPC (50 mg/kg) are shown in Figure 3. Curcumin could be detected in plasma for 6 hours after oral ingestion of conventional curcumin and of LMw-NPC, but the retention time in rat plasma was prolonged to 8 hours for the HMw-NPC formulation. Table 2 shows significant increases in Cmax (from 0.028 for curcumin alone to 0.042 for LMw-NPC to 0.057 μg/mL for HMw-NPC) and area under the concentration-time curve (AUC) per dose (from 0.0062 to 0.15 to 0.25, respectively), indicating that the absorption of curcumin increased when it was encapsulated in a nanoformulation. In addition, the relative bioavailability of HMw-NPC was 40-fold and 1.67-fold higher than that of conventional curcumin and LMw-NPC, respectively (Table 2). This indicates that the HMw-PLGA nanoformulation achieved markedly better absorption of curcumin compared with LMw-PLGA. Furthermore, after oral administration of HMw-NPC, the Tmax of curcumin was 15 minutes shorter than that of the other curcumin formulations (see Table 2). These results indicate that the relative bioavailability of curcumin was increased at a shorter Tmax, indicating that Tmax was closely related to the bioavailability of curcumin.\nWithout nanoformulation, the accumulated percentage of unabsorbed curcumin in feces approached 90% and in urine was almost 0% (Figure 4), implying that there was low absorption of curcumin in the gastrointestinal tract. 
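The accumulated excretion percentages quoted above are the recovered amounts expressed as a fraction of the administered dose. Stated as an equation, using the standard mass-balance definition (assumed here, since the calculation is not spelled out in the Methods):

```latex
\mathrm{Excretion}\,(\%) =
\frac{\text{cumulative curcumin recovered in feces or urine over 0--72 h}}
     {\text{administered dose}} \times 100 .
```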
PLGA encapsulation reduced the amount of curcumin in feces (from 90% for curcumin alone to 40% for LMw-PLG to 20% for HMw-PLGA) and increased the amount of curcumin in urine (from near 0% to 0.07% to 0.16%), respectively, indicating that the absorption of curcumin was enhanced after PLGA formulation. These excretion results confirmed the pharmacokinetic parameters of the oral study, ie, the absorption of curcumin was significantly increased by both LMw-PLGA and HMw-PLGA formulation.\nMean concentrations of curcumin in rat plasma at different times after oral administration of conventional curcumin (1 mg/kg) and LMw-NPC and HMw-NPC (50 mg/kg) are shown in Figure 3. Curcumin could be detected in plasma for 6 hours after oral ingestion of conventional curcumin and of LMw-NPC, but the retention time in rat plasma was prolonged to 8 hours for the HMw-NPC formulation. Table 2 shows significant increases in Cmax (from 0.028 for curcumin alone to 0.042 for LMw-NPC to 0.057 μg/mL for HMw-NPC) and area under the concentration-time curve (AUC) per dose (from 0.0062 to 0.15 to 0.25, respectively), indicating that the absorption of curcumin increased when it was encapsulated in a nanoformulation. In addition, the relative bioavailability of HMw-NPC was 40-fold and 1.67-fold higher than that of conventional curcumin and LMw-NPC, respectively (Table 2). This indicates that the HMw-PLGA nanoformulation achieved markedly better absorption of curcumin compared with LMw-PLGA. Furthermore, after oral administration of HMw-NPC, the Tmax of curcumin was 15 minutes shorter than that of the other curcumin formulations (see Table 2). These results indicate that the relative bioavailability of curcumin was increased at a shorter Tmax, indicating that Tmax was closely related to the bioavailability of curcumin.\nWithout nanoformulation, the accumulated percentage of unabsorbed curcumin in feces approached 90% and in urine was almost 0% (Figure 4), implying that there was low absorption of curcumin in the gastrointestinal tract. PLGA encapsulation reduced the amount of curcumin in feces (from 90% for curcumin alone to 40% for LMw-PLG to 20% for HMw-PLGA) and increased the amount of curcumin in urine (from near 0% to 0.07% to 0.16%), respectively, indicating that the absorption of curcumin was enhanced after PLGA formulation. These excretion results confirmed the pharmacokinetic parameters of the oral study, ie, the absorption of curcumin was significantly increased by both LMw-PLGA and HMw-PLGA formulation.\n Ex vivo absorption To understand better the relationship between relative bioavailability and Tmax for different molecular weights of PLGA-encapsulated curcumin and to explore the possible reasons for this, an ex vivo absorption study was undertaken. The concentration of curcumin absorbed from the incubation medium into the sac tissue and then into the serosal fluid is shown in Figure 5. When samples of the reverted sac of rat small intestine were incubated with conventional curcumin, LMw-NPC, and HMw-NPC, the amount of curcumin absorbed from the curcumin nanoformulation was significantly increased in the duodenum and the ileum, but was not significantly different in the jejunum. The curcumin content was highest in the serosal fluid of the duodenum in the HMw-NPC group, followed by the LMw-NPC and conventional curcumin groups. 
The curcumin concentration from HMw-NPC in the serosal fluid of the duodenum was 5.12-fold and 2.66-fold higher than from LMw-NPC and conventional curcumin, respectively (Figure 5A).\nThe absorption of curcumin, as determined by the sum of the amounts of curcumin in the serosal fluid, sac tissue, and regions of the rat gut after incubation with curcumin and its formulations, is shown in Table 3. It can be seen that the amount of curcumin in the duodenum and ileum was significantly increased by curcumin encapsulated with PLGA. The HMw-PLGA formulation noticeably enhanced the absorption of curcumin in the duodenum compared with LMw-PLGA (from 0.63 μg to 0.89 μg) and with conventional curcumin (from 0.47 μg to 0.89 μg). The ex vivo curcumin absorption results support the findings of the oral study, ie, that curcumin formulated with PLGA increased the absorption of curcumin in the intestine. Overall, HMw- PLGA enabled considerably more curcumin absorption than LMw-PLGA.\nTo understand better the relationship between relative bioavailability and Tmax for different molecular weights of PLGA-encapsulated curcumin and to explore the possible reasons for this, an ex vivo absorption study was undertaken. The concentration of curcumin absorbed from the incubation medium into the sac tissue and then into the serosal fluid is shown in Figure 5. When samples of the reverted sac of rat small intestine were incubated with conventional curcumin, LMw-NPC, and HMw-NPC, the amount of curcumin absorbed from the curcumin nanoformulation was significantly increased in the duodenum and the ileum, but was not significantly different in the jejunum. The curcumin content was highest in the serosal fluid of the duodenum in the HMw-NPC group, followed by the LMw-NPC and conventional curcumin groups. The curcumin concentration from HMw-NPC in the serosal fluid of the duodenum was 5.12-fold and 2.66-fold higher than from LMw-NPC and conventional curcumin, respectively (Figure 5A).\nThe absorption of curcumin, as determined by the sum of the amounts of curcumin in the serosal fluid, sac tissue, and regions of the rat gut after incubation with curcumin and its formulations, is shown in Table 3. It can be seen that the amount of curcumin in the duodenum and ileum was significantly increased by curcumin encapsulated with PLGA. The HMw-PLGA formulation noticeably enhanced the absorption of curcumin in the duodenum compared with LMw-PLGA (from 0.63 μg to 0.89 μg) and with conventional curcumin (from 0.47 μg to 0.89 μg). The ex vivo curcumin absorption results support the findings of the oral study, ie, that curcumin formulated with PLGA increased the absorption of curcumin in the intestine. Overall, HMw- PLGA enabled considerably more curcumin absorption than LMw-PLGA.", "The properties of the LMw-NPC and HMw-NPC are summarized in Table 1. The zeta potential refers to the surface charge on the nanoparticles and the polydispersity index is the size distribution, indicating the degree of similarity between the nanoparticles, such that a smaller polydispersity index indicates that the particles are similar in size. Nanoparticles containing curcumin were prepared using a high-pressure emulsification-solvent evaporation procedure with PLGA as the encapsulating material. A negative zeta potential and smaller polydispersity index was obtained (−12.2 mV and 0.076 for LMw-NPC versus −14.1 mV and 0.068 for HMw-NPC). The sizes of the LMw-NPC and HMw-NPC were 166 ± 7.4 nm and 163 ± 4.2 nm, respectively. 
The encapsulation efficiency of both nanoformulations was nearly 45%. According to the results, there was no statistically significant difference in particle characteristics between LMw-NPC and HMw-NPC. As in our previous studies,3,17 similar properties of nanoparticles were found in the different batches when complied with standard procedures to prepare these LMw-NPC. Therefore, our study results were confirmed to be replicable and were well validated. Figure 1 shows the TEM images of LMw-NPC and HMw-NPC. The morphology of the curcumin nanoparticles was either ellipsoid or spherical, and the diameter results were similar to the data observed by dynamic light scattering (below 200 nm) for both LMw- NPC and HMw-NPC. Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 with regard to the nanoparticles being a bright color and surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were well prepared and could be used in our subsequent pharmacokinetic study.", "The HPLC method was developed to investigate curcumin in rat plasma samples. Figure 2A–E shows the chromatograms of blank rat matrices, and Figure 2F–J shows the chromatograms of rat biosamples containing the internal standard and curcumin after administration of HMw-NPC. The retention times were approximately 6.5 and 22 minutes for the internal standard and curcumin, respectively, with no obvious interference peak in the blank chromatograms. These results suggest that the HPLC analytical conditions provided good separation and selectivity for curcumin and the internal standard in the experimental matrices. The validation methods were for linearity, limit of detection, limit of quantification, precision, accuracy, and extraction recovery. The calibration curves for plasma, feces, urine, serosal fluid, and sac tissue showed good linearity (r2 > 0.995) over the range of 0.01–2.5 μg/mL. The limits of detection and quantification for curcumin were determined to be 0.005 μg/mL and 0.01 μg/mL, respectively. Intra-assay and interassay precision (% RSD and accuracy [% bias]) of curcumin measurements in each rat matrix were ±15%. Extraction of curcumin at low, medium, and high concentration in rat plasma, feces, urine, serosal fluid, and sac tissue was at least 90%. These results indicate that the analytical and extraction methods used for curcumin and the internal standard were reliable for the following pharmacokinetic study.", "Mean concentrations of curcumin in rat plasma at different times after oral administration of conventional curcumin (1 mg/kg) and LMw-NPC and HMw-NPC (50 mg/kg) are shown in Figure 3. Curcumin could be detected in plasma for 6 hours after oral ingestion of conventional curcumin and of LMw-NPC, but the retention time in rat plasma was prolonged to 8 hours for the HMw-NPC formulation. Table 2 shows significant increases in Cmax (from 0.028 for curcumin alone to 0.042 for LMw-NPC to 0.057 μg/mL for HMw-NPC) and area under the concentration-time curve (AUC) per dose (from 0.0062 to 0.15 to 0.25, respectively), indicating that the absorption of curcumin increased when it was encapsulated in a nanoformulation. In addition, the relative bioavailability of HMw-NPC was 40-fold and 1.67-fold higher than that of conventional curcumin and LMw-NPC, respectively (Table 2). This indicates that the HMw-PLGA nanoformulation achieved markedly better absorption of curcumin compared with LMw-PLGA. 
Furthermore, after oral administration of HMw-NPC, the Tmax of curcumin was 15 minutes shorter than that of the other curcumin formulations (see Table 2). These results indicate that the relative bioavailability of curcumin was increased at a shorter Tmax, indicating that Tmax was closely related to the bioavailability of curcumin.\nWithout nanoformulation, the accumulated percentage of unabsorbed curcumin in feces approached 90% and in urine was almost 0% (Figure 4), implying that there was low absorption of curcumin in the gastrointestinal tract. PLGA encapsulation reduced the amount of curcumin in feces (from 90% for curcumin alone to 40% for LMw-PLG to 20% for HMw-PLGA) and increased the amount of curcumin in urine (from near 0% to 0.07% to 0.16%), respectively, indicating that the absorption of curcumin was enhanced after PLGA formulation. These excretion results confirmed the pharmacokinetic parameters of the oral study, ie, the absorption of curcumin was significantly increased by both LMw-PLGA and HMw-PLGA formulation.", "To understand better the relationship between relative bioavailability and Tmax for different molecular weights of PLGA-encapsulated curcumin and to explore the possible reasons for this, an ex vivo absorption study was undertaken. The concentration of curcumin absorbed from the incubation medium into the sac tissue and then into the serosal fluid is shown in Figure 5. When samples of the reverted sac of rat small intestine were incubated with conventional curcumin, LMw-NPC, and HMw-NPC, the amount of curcumin absorbed from the curcumin nanoformulation was significantly increased in the duodenum and the ileum, but was not significantly different in the jejunum. The curcumin content was highest in the serosal fluid of the duodenum in the HMw-NPC group, followed by the LMw-NPC and conventional curcumin groups. The curcumin concentration from HMw-NPC in the serosal fluid of the duodenum was 5.12-fold and 2.66-fold higher than from LMw-NPC and conventional curcumin, respectively (Figure 5A).\nThe absorption of curcumin, as determined by the sum of the amounts of curcumin in the serosal fluid, sac tissue, and regions of the rat gut after incubation with curcumin and its formulations, is shown in Table 3. It can be seen that the amount of curcumin in the duodenum and ileum was significantly increased by curcumin encapsulated with PLGA. The HMw-PLGA formulation noticeably enhanced the absorption of curcumin in the duodenum compared with LMw-PLGA (from 0.63 μg to 0.89 μg) and with conventional curcumin (from 0.47 μg to 0.89 μg). The ex vivo curcumin absorption results support the findings of the oral study, ie, that curcumin formulated with PLGA increased the absorption of curcumin in the intestine. Overall, HMw- PLGA enabled considerably more curcumin absorption than LMw-PLGA.", "Nanoformulation technology has been extensively utilized in foods and nutritional supplements to increase the oral bioavailability of active compounds, which may be beneficial for human health. However, there has been little investigation of how the materials used in the nanoformulation process affect the oral pharmacokinetics of a product. This study found that changing the molecular weight of PLGA changes the pharmacokinetic parameters of conventional curcumin after oral administration. The present study also investigated the possible mechanism for this difference. 
The HPLC method was optimized and validated to obtain reliable pharmacokinetic data for curcumin21 (Figure 2).

The results of the animal study showed that the bioavailability of curcumin was significantly increased by nanoformulation with either low or high molecular weight PLGA. The bioavailability of a compound can be enhanced by decreasing its metabolism in the body or by increasing its absorption in the gastrointestinal tract.22 Transit time in the stomach and intestines is a major factor affecting absorption, and a longer intestinal transit time will generally enhance the absorption of a substance.23 In our study, the transit time of curcumin through the gastrointestinal tract was therefore examined to determine its effect on absorption. We undertook an excretion study using metabolic cages, and the fecal excretion data (Figure 4A) showed that the excretion rate was fastest, and the accumulated amount of curcumin in feces approached its maximum, at 18 hours, after which a plateau was maintained from 18 to 60 hours in all groups. This indicates that the time for curcumin to transit the rat gastrointestinal tract is independent of whether curcumin is formulated with LMw-PLGA, with HMw-PLGA, or not at all. This finding suggests that the increased bioavailability is not due to a prolonged transit time of curcumin through the gastrointestinal tract as a result of formulation with PLGA. On the other hand, the hydrophilic characteristics of a compound also affect its absorption in the body. Merisko-Liversidge et al reported that nanoformulations increase the surface area and surface interactions of a compound compared with conventional forms, and that such formulations also increase the rate at which compounds dissolve in the gastrointestinal tract, thereby improving the absorption of water-insoluble compounds.9 Therefore, the increased bioavailability of curcumin may be attributable to faster dissolution when it is administered as a nanoformulation. We found that the Cmax, AUC, and AUC per dose for the HMw-NPC group were significantly higher than for the LMw-NPC group (Table 2), even though the two groups had similar nanoparticle characteristics (Table 1). This may be because HMw-PLGA was more resistant than LMw-PLGA to the strongly acidic environment of gastric juice, which could otherwise reduce absorption by damaging the nanoparticle structure.24

It is noteworthy that different Tmax values were observed in the different curcumin formulation groups (Table 2). Denoting the time to maximum plasma concentration of a drug, Tmax can be used as an indicator of the drug absorption rate, with smaller values indicating more rapid absorption.25 According to our findings, the Tmax of conventional curcumin, LMw-NPC, and HMw-NPC was 45, 30, and 15 minutes, respectively. This indicates that absorption of HMw-NPC was faster than that of LMw-NPC, which in turn was faster than that of conventional curcumin. A previous study found that oral absorption of a drug is influenced by particle size, which affects the dissolution rate.26 However, the current study found no statistically significant difference in particle size or any other particle characteristic between HMw-NPC and LMw-NPC (Table 1). Thus, particle size and the other measured characteristics are unlikely to be responsible for the different bioavailability and absorption results for HMw-NPC and LMw-NPC. We hypothesized that the observed differences in bioavailability and Tmax may instead be due to different absorption rates in different portions of the gastrointestinal tract.
To verify this hypothesis, we investigated ex vivo curcumin absorption in segments of the small intestine.

It is known that the small intestine, composed of the duodenum, jejunum, and ileum, is the main site for digestion and absorption of water, nutrients, and electrolytes after a substance has been ingested orally.27 Our ex vivo absorption study demonstrated that conventional curcumin was absorbed mostly in the jejunum (Table 3). When curcumin was encapsulated with PLGA, the amount absorbed in the duodenum and ileum increased significantly, especially for HMw-NPC in the duodenum, whereas there was no significant difference in curcumin absorption in the jejunum with or without PLGA nanoformulation (Table 3).

Although DeSesso et al have reported that the bulk of absorption takes place in the duodenum and the proximal half of the jejunum,23 our results indicate that the jejunum is the main area for absorption of conventional curcumin. Encapsulating curcumin in a PLGA formulation increases the amount of curcumin that can be absorbed in the duodenum, and even more so when curcumin is encapsulated with higher molecular weight PLGA. Therefore, an increased absorption rate of curcumin in the duodenum may play an important role in the increased relative bioavailability of curcumin formulated with PLGA. Rapid absorption of HMw-NPC in the duodenum would also explain why its Tmax was the shortest among the study formulations. Previous research has shown that the duodenum is the main absorption site in the intestine.23 Our results demonstrate that the bioavailability of a compound that is otherwise poorly absorbed can be increased by high molecular weight PLGA, which is a significant and useful finding for nanoformulation. The possible transport mechanisms underlying this phenomenon will be investigated in a further study.

In conclusion, this study demonstrates that formulation with PLGA can enhance the bioavailability of a water-insoluble polyphenol by increasing its absorption rate in the duodenum. In addition, different molecular weights of PLGA affect the pharmacokinetics of curcumin, which are closely related to the rate of absorption in the duodenum. These findings provide meaningful information for PLGA nanoformulations intended to increase the bioavailability of hydrophobic compounds in the design of drug delivery systems, and can be utilized in further preclinical and clinical research.
[ null, "materials|methods", null, null, "intro", null, null, null, null, null, "methods", null, null, null, "results", "intro", null, null, null, "discussion" ]
[ "absorption", "duodenum", "molecular weight", "poly(lactic-co-glycolic acid)", "PLGA", "relative oral bioavailability" ]
Introduction: Polyphenols exist widely in plants and plant-derived foods, including vegetables, fruit, tea, spices, wine, beverages, and nutritional supplement products, and thus constitute an important part of the human diet. Previous studies have demonstrated that polyphenols have broad pharmacological activity and biological benefits, including anticancer, neuroprotective, chemopreventive, and cardiovascular-protective effects.1 These advantages are attributed to their antioxidant and anti-inflammatory properties, which regulate cell proliferation and function to prevent the onset and progression of various human diseases.2 However, the beneficial effects of many polyphenols on human health are limited by their low oral bioavailability. This is because polyphenols, including curcumin,3 resveratrol,4 quercetin,5 daidzein,6 silymarin,7 and others,2 are poorly absorbed in the gut and are rapidly metabolized by the liver. To improve the low oral bioavailability of polyphenols, researchers have investigated their encapsulation in different nanoformulation systems, including micelles, liposomes, and nanoparticles.2,8 The mechanism by which these nanoformulations enhance the oral bioavailability of polyphenols involves increasing the surface area and interactions of the compounds, thereby raising the dissolution rate and improving absorption.9 Moreover, previous studies have shown that polyester nanoparticle-based formulations are particularly suitable for oral ingestion because they are more stable than other formulations in the gastrointestinal tract.10 Polyesters such as poly(lactic-co-glycolic acid) (PLGA) are among the materials most often used in nanoformulations and have been approved by the US Food and Drug Administration for therapeutic devices owing to their biocompatibility, biodegradability, and versatile degradation kinetics.11 Recent research has shown that nanoformulations based on PLGA and nanogels have lower human serum protein binding and are more compatible with human erythrocytes than β-cyclodextrin, cellulose, and dendrimers.12 Accordingly, PLGA nanoparticle systems have been used extensively to increase the oral bioavailability of polyphenols. Sonaje et al have demonstrated that the oral bioavailability of ellagic acid is increased when it is nanoformulated with PLGA.13 Another study has shown that quercetin encapsulated in PLGA nanoparticles has an increased ability to protect the liver and brain against oxidative damage.14 Curcumin in a PLGA nanoformulation not only has increased oral bioavailability but is also taken up in increased amounts by metastatic cancer cells.3,15 Xie et al have reported that the improved oral bioavailability of curcumin when encapsulated in PLGA nanoparticles may be attributable to enhanced water solubility, improved permeability, and inhibition of P-glycoprotein-mediated efflux in the intestine.16 Although PLGA nanoformulations are effective in improving the oral bioavailability of polyphenols, many kinds of PLGA with different molecular weights are available on the market, and our knowledge of how the molecular weight of PLGA influences the oral bioavailability of polyphenols is limited. Hence, the aim of this work was to investigate the effects of the molecular weight of PLGA on the oral bioavailability of polyphenols, using curcumin, a biologically active polyphenol, as the candidate compound for the pharmacokinetic study.
Materials and methods:

Chemicals and reagents: Polyvinyl alcohol (molecular weight 9000–10,000), low molecular weight PLGA (50:50, molecular weight 5000–15,000), high molecular weight PLGA (50:50, molecular weight 40,000–75,000), monosodium phosphate, sucrose, 2-(4′-hydroxybenzeneazo)benzoic acid (used as the internal standard), and polyethylene glycol 400 were obtained from Sigma-Aldrich (St Louis, MO). Curcumin (purity ≥95%) was purchased from Fluka (Buchs, Switzerland). Acetonitrile and dichloromethane were obtained from Merck (Darmstadt, Germany). Milli-Q grade water (Millipore, Bedford, MA) was used for preparation of the mobile phase and solutions.

Encapsulation of curcumin in PLGA nanoparticles: A high-pressure emulsification-solvent evaporation technique was used to encapsulate curcumin in PLGA nanoparticles, as reported previously.17 In brief, 50 mg of PLGA (low or high molecular weight) and 5 mg of curcumin were added to 1.25 mL of dichloromethane as the oil phase and sonicated for 5 minutes to dissolve all substances. This oil phase was added to 10 mL of aqueous phase (containing 2% polyvinyl alcohol and 20% sucrose, w/v) and then homogenized using a Polytron PT-MR 2100 (Kinematica AG, Lucerne, Switzerland) at 28,000 rpm for 10 minutes to form an emulsion. The emulsion was passed twice through a 0.1 μm filter at an operating pressure of 5 kg/cm2 using an accessory extruder (EF-C5, Avestin, Canada) to form low and high molecular weight PLGA nanoparticles encapsulating curcumin (LMw-NPC and HMw-NPC, respectively). The resulting nanoparticles were stirred at 500 rpm overnight and air-dried to evaporate the organic solvent.

Characteristics of LMw-NPC and HMw-NPC: The particle size and polydispersity index of the LMw-NPC and HMw-NPC were measured by dynamic light scattering (90Plus, BIC, Holtsville, NY). The zeta potential of the LMw-NPC and HMw-NPC was determined using a zeta potential analyzer (90Plus, BIC).
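Table 1 (referred to in the Results) compares these characteristics between the two formulations, and the statistical section below specifies a t-test with P < 0.05. A minimal sketch of such a comparison is given here; the replicate size values are hypothetical placeholders, not the study's measurements.

```python
# Two-sample t-test comparing particle-size replicates of the two formulations.
# The values below are hypothetical placeholders, not the study's raw data.
from scipy import stats

lmw_size_nm = [160.1, 171.3, 166.5, 168.2, 163.9]   # hypothetical LMw-NPC replicates
hmw_size_nm = [159.8, 165.0, 162.4, 167.1, 160.7]   # hypothetical HMw-NPC replicates

t_stat, p_value = stats.ttest_ind(lmw_size_nm, hmw_size_nm)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
if p_value >= 0.05:
    print("No statistically significant difference at the P < 0.05 level")
```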
Entrapment efficiency: To evaluate the entrapment efficiency of curcumin encapsulated in PLGA nanoparticles, the LMw-NPC and HMw-NPC solutions were centrifuged at 11,000 rpm for 15 minutes (Centrifuge 5415R, Eppendorf, Germany). After centrifugation, the supernatant was removed and 1 mL of acetonitrile was added to the nanoparticle pellet to disrupt its structure and release the curcumin. The amount of curcumin in the nanoparticles was analyzed by high-performance liquid chromatography (HPLC). Percent entrapment efficiency is given by:
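The entrapment-efficiency equation itself does not survive in this text. A commonly used definition, consistent with the pellet-dissolution procedure described above and stated here as an assumption rather than as the authors' exact expression, is: entrapment efficiency (%) = (amount of curcumin measured in the nanoparticle pellet / total amount of curcumin added during preparation) × 100.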
Transmission electron microscopy: Briefly, the LMw-NPC and HMw-NPC solutions were placed dropwise onto a 400 mesh copper grid coated with carbon. About 15 minutes after nanoparticle deposition, the grid was blotted with filter paper to remove surface water and then stained with a solution of phosphotungstic acid (2%, w/v) for 20 minutes. TEM samples were obtained after the stained sample had been allowed to dry in air. Micrographs of LMw-NPC and HMw-NPC were obtained using a transmission electron microscope (TEM, JEM-2000EXII, JEOL, Tokyo, Japan).

HPLC system and analytical method validation:

HPLC system: The HPLC system comprised a chromatographic pump (LC-20AT, Shimadzu, Kyoto, Japan), an autosampler (SIL-20AT, Shimadzu), a diode array detector (SPD-M20A, Shimadzu), and a degasser (DG-240). A reversed-phase C18 column (4.6 × 150 mm, particle size 5 μm, Eclipse XDB, Agilent, Palo Alto, CA) was used for the HPLC separation. The mobile phase consisted of acetonitrile and 10 mM monosodium phosphate (pH 3.5, adjusted with phosphoric acid; 40:60, v/v) at a flow rate of 0.8 mL per minute. The HPLC run time was 27 minutes and the detection wavelength was set at 425 nm. The mobile phase was passed through a 0.45 μm Millipore membrane filter and then degassed by sonication (2510R-DTH, Bransonic, CT) before use.

Validation of analytical method: A stock solution of curcumin (500 μg/mL in acetonitrile) was diluted with 50% acetonitrile to make serial working standard solutions (0.1, 0.25, 1, 2.5, 10, and 25 μg/mL). Calibration standards were prepared by spiking 5 μL of working standard solution into 45 μL of blank plasma, feces homogenate, or urine; the extraction procedures are described in the sample preparation section below. The calibration curves were constructed by plotting the peak-area ratio of curcumin to the internal standard in spiked blank matrix (y axis) against the curcumin concentration (x axis). The limits of detection and quantification were defined as a signal-to-noise ratio of 3 and the lowest concentration of the linear regression, respectively. The accuracy and precision of intraday (same day) and interday (six sequential days) sampling of curcumin were assayed (six individual tests). Accuracy (% bias) was calculated as [(Cobs − Cnom)/Cnom] × 100, where Cnom is the nominal concentration and Cobs is the mean observed concentration. Precision was calculated as the relative standard deviation (RSD) of the observed concentrations. The acceptance criterion for % bias and % RSD was within ±15%. Extraction recovery (%) of curcumin was assessed by comparing the peak-area ratios of curcumin to the internal standard in blank matrix spiked at three concentrations (0.1, 0.25, and 2.5 μg/mL) with those of reference curcumin prepared in 50% acetonitrile at the same concentrations.
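To make the acceptance criteria above concrete, the sketch below fits a calibration line and computes % bias and % RSD for a set of replicates; all numbers are illustrative placeholders rather than study data, and the ±15% limit follows the text above.

```python
# Calibration fit plus accuracy (% bias) and precision (% RSD) checks.
# Concentrations and peak-area ratios below are illustrative, not study data.
import numpy as np

std_conc = np.array([0.01, 0.025, 0.1, 0.25, 1.0, 2.5])          # ug/mL in matrix
peak_ratio = np.array([0.012, 0.029, 0.118, 0.291, 1.17, 2.93])  # curcumin / internal standard

slope, intercept = np.polyfit(std_conc, peak_ratio, 1)
r2 = np.corrcoef(std_conc, peak_ratio)[0, 1] ** 2
print(f"slope = {slope:.3f}, intercept = {intercept:.4f}, r^2 = {r2:.4f} (criterion: r^2 > 0.995)")

nominal = 0.25                                                    # ug/mL
replicates = np.array([0.24, 0.26, 0.25, 0.23, 0.27, 0.25])       # back-calculated concentrations
bias_pct = 100 * (replicates.mean() - nominal) / nominal
rsd_pct = 100 * replicates.std(ddof=1) / replicates.mean()
print(f"bias = {bias_pct:+.1f}%, RSD = {rsd_pct:.1f}% (criterion: within +/-15%)")
```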
Animal study: Male Sprague-Dawley rats (215 ± 10 g body weight) were obtained from the Laboratory Animal Center at National Yang-Ming University (Taipei, Taiwan). The animals were pathogen-free and allowed to adapt to an environmentally controlled habitat (24°C ± 1°C on a 12-hour light-dark cycle). Food (Laboratory Rodent Diet 5001, PMI Feeds Inc, Richmond, IN) and water were available ad libitum. All animal experiments were performed according to the National Yang-Ming University guidelines, principles, and procedures for the care and use of laboratory animals. The rats were anesthetized with pentobarbital 50 mg/kg intraperitoneally. During anesthesia, the right jugular vein was catheterized with polyethylene tubing for blood sampling, and an external cannula was fastened in the dorsal neck area. The rats regained consciousness after one day and could move freely. Curcumin and the LMw-NPC/HMw-NPC nanoformulations were administered to the rats by gavage at doses of 1 g/kg and 50 mg/kg, respectively. After oral administration, 300 μL blood samples were collected from the right jugular vein into heparin-rinsed tubes at 15, 30, 45, 60, 90, 120, 150, 180, 240, 300, 360, and 480 minutes. Fecal and urine samples were collected in metabolic cages (Mini Mitter, Bend, OR) at 0–12, 12–24, 24–36, 36–48, and 48–72 hours after treatment with curcumin, LMw-NPC, or HMw-NPC.
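Because conventional curcumin and the nanoformulations were given at different doses (1 g/kg versus 50 mg/kg), later comparisons are normalized to dose. The absolute amounts administered to a rat of the stated body weight follow from simple arithmetic on the values above:

```python
# Administered amounts for a rat of the stated body weight at the stated oral doses.
body_weight_kg = 0.215                 # 215 g, as stated above

dose_conventional_mg_per_kg = 1000     # conventional curcumin, 1 g/kg
dose_nanoformulation_mg_per_kg = 50    # LMw-NPC and HMw-NPC

amount_conventional_mg = dose_conventional_mg_per_kg * body_weight_kg        # 215 mg
amount_nanoformulation_mg = dose_nanoformulation_mg_per_kg * body_weight_kg  # 10.75 mg
print(amount_conventional_mg, amount_nanoformulation_mg)
```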
Sample preparation: Blood samples were centrifuged at 8000 rpm for 10 minutes at 4°C to prepare the plasma samples, and the supernatant was collected and stored at −20°C before analysis. The feces were air-dried, weighed, and then homogenized with five volumes of 50% aqueous acetonitrile (1 g of feces with 5 mL) using the Polytron PT-MR 2100 homogenizer. The fecal homogenates were then centrifuged at 6000 rpm for 10 minutes at 4°C, and the supernatant was collected and stored at −20°C for further assay. Plasma and fecal samples (50 μL, vortex-mixed with 100 μL of internal standard solution containing 1.5 μg/mL of 2-(4′-hydroxybenzeneazo)benzoic acid dissolved in acetonitrile) were subjected to protein precipitation. After centrifugation at 12,000 rpm for 15 minutes, 20 μL of supernatant was collected and analyzed by HPLC. The original volume of urine was recorded before centrifugation at 8000 rpm for 10 minutes and collection of the supernatant. Next, 50 μL of the urine supernatant was extracted with 1 mL of ethyl acetate and vortexed for one minute. The ethyl acetate phase was collected by centrifugation (13,000 rpm for one minute) and transferred to a clean glass tube. After drying by vacuum centrifugation, the residue was reconstituted in 100 μL of 50% internal standard solution, and 20 μL of the dissolved solution was injected into the HPLC column.
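For the excretion results reported later, measured concentrations have to be converted back to total amounts using the dilution and homogenization factors given above (50 μL of sample mixed with 100 μL of internal standard solution; 1 g of feces homogenized with 5 mL of solvent). One plausible form of that bookkeeping, shown as an assumption rather than as the authors' exact procedure, is:

```python
# Back-calculating total curcumin in a fecal sample (illustrative values only).
measured_conc_ug_per_ml = 0.8        # concentration in the injected solution (hypothetical)
dilution_factor = (50 + 100) / 50    # 50 uL supernatant + 100 uL internal standard solution
homogenate_ml_per_g = 5              # 1 g feces homogenized with 5 mL of 50% acetonitrile
feces_weight_g = 4.2                 # hypothetical dried fecal weight

conc_in_homogenate = measured_conc_ug_per_ml * dilution_factor                # ug/mL
total_curcumin_ug = conc_in_homogenate * homogenate_ml_per_g * feces_weight_g
print(f"{total_curcumin_ug:.0f} ug of curcumin recovered in feces")
```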
Ex vivo curcumin absorption using reverted rat gut sacs: The ex vivo drug absorption study was performed as previously described.18 Male Sprague-Dawley rats (210 ± 10 g body weight) were sacrificed by intraperitoneal overdose of 1 mL/kg urethane (1 g/mL) and α-chloralose (0.1 g/mL). The entire small intestine was quickly removed and flushed several times with normal saline (0.9%, w/v) at room temperature. The intestine was placed in warm (37°C) TC-199 medium (pH 7.4) and separated into the duodenum, jejunum, and ileum. The gut was then gently reverted over a glass rod (2.5 mm diameter). The large gut sac was divided into small sacs approximately 2 cm in length, which were filled with TC-199 medium and tied with a silk suture. Each small sac was incubated in a conical flask containing 19.5 mL of TC-199 medium in a gently shaking water bath at 37°C. Next, 0.5 mL of curcumin 100 μg/mL, LMw-NPC, or HMw-NPC was added to the conical flasks. After incubation for one hour, each sac was cut open, and the serosal fluid and sac tissue were collected into clean tubes; 1 mL of acetonitrile was added to the sac tissue, which was then homogenized and centrifuged at 6000 rpm for 10 minutes. A 50 μL aliquot of the sac supernatant or serosal fluid was added to 100 μL of internal standard solution for protein precipitation and centrifuged at 13,000 rpm for 10 minutes. The supernatants (20 μL) were collected and analyzed by HPLC.
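Adding 0.5 mL of the 100 μg/mL curcumin preparation to 19.5 mL of medium fixes the nominal starting concentration in each flask; the quick check below is simple arithmetic on the volumes stated above, not additional protocol detail.

```python
# Nominal curcumin concentration in each incubation flask.
added_volume_ml = 0.5
added_conc_ug_per_ml = 100
medium_volume_ml = 19.5

total_volume_ml = medium_volume_ml + added_volume_ml                  # 20 mL
nominal_conc = added_volume_ml * added_conc_ug_per_ml / total_volume_ml
print(f"{nominal_conc:.1f} ug/mL")                                    # 2.5 ug/mL
```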
Pharmacokinetics and statistics: Pharmacokinetic values were calculated from each individual data set using WinNonlin, standard edition, version 1.1 (Pharsight Corporation, Mountain View, CA) and the noncompartmental method. The relative oral bioavailability of curcumin was calculated by comparing dose-normalized AUC values between formulations. The pharmacokinetic results are presented as the mean ± standard error. Statistical analysis was performed by t-test (SPSS version 10.0, SPSS Inc, Chicago, IL) to compare the different groups, and the level of statistical significance was set at P < 0.05.
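A minimal sketch of a noncompartmental calculation of this kind (linear-trapezoidal AUC followed by a dose-normalized ratio) is given below. It is illustrative only: the concentration-time values are hypothetical and the code is not the authors' own implementation.

```python
# Linear-trapezoidal AUC and dose-normalized relative bioavailability.
# The concentration-time values are hypothetical, not the study's data.

def auc_trapezoid(t_min, conc):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return sum((t_min[i + 1] - t_min[i]) * (conc[i + 1] + conc[i]) / 2
               for i in range(len(t_min) - 1))

t = [0, 15, 30, 45, 60, 120, 240, 360]                          # minutes
c_ref = [0, 0.010, 0.020, 0.028, 0.024, 0.015, 0.006, 0.002]    # reference formulation, ug/mL
c_test = [0, 0.057, 0.045, 0.038, 0.030, 0.020, 0.010, 0.004]   # test formulation, ug/mL

dose_ref_mg_per_kg, dose_test_mg_per_kg = 1000, 50               # 1 g/kg versus 50 mg/kg

relative_f = ((auc_trapezoid(t, c_test) / dose_test_mg_per_kg) /
              (auc_trapezoid(t, c_ref) / dose_ref_mg_per_kg))
print(f"Relative bioavailability (test versus reference): {relative_f:.1f}-fold")
```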
Results:

Characteristics of LMw-NPC and HMw-NPC: The properties of the LMw-NPC and HMw-NPC are summarized in Table 1. The zeta potential reflects the surface charge on the nanoparticles, and the polydispersity index describes the size distribution, such that a smaller polydispersity index indicates that the particles are more uniform in size. Nanoparticles containing curcumin were prepared using a high-pressure emulsification-solvent evaporation procedure with PLGA as the encapsulating material. A negative zeta potential and a small polydispersity index were obtained (−12.2 mV and 0.076 for LMw-NPC versus −14.1 mV and 0.068 for HMw-NPC). The sizes of the LMw-NPC and HMw-NPC were 166 ± 7.4 nm and 163 ± 4.2 nm, respectively. The encapsulation efficiency of both nanoformulations was nearly 45%. According to these results, there was no statistically significant difference in particle characteristics between LMw-NPC and HMw-NPC. As in our previous studies,3,17 similar nanoparticle properties were obtained across different batches when the standard preparation procedures were followed, confirming that the preparation was reproducible and well validated. Figure 1 shows the TEM images of LMw-NPC and HMw-NPC.
Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 with regard to the nanoparticles being a bright color and surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were well prepared and could be used in our subsequent pharmacokinetic study. The properties of the LMw-NPC and HMw-NPC are summarized in Table 1. The zeta potential refers to the surface charge on the nanoparticles and the polydispersity index is the size distribution, indicating the degree of similarity between the nanoparticles, such that a smaller polydispersity index indicates that the particles are similar in size. Nanoparticles containing curcumin were prepared using a high-pressure emulsification-solvent evaporation procedure with PLGA as the encapsulating material. A negative zeta potential and smaller polydispersity index was obtained (−12.2 mV and 0.076 for LMw-NPC versus −14.1 mV and 0.068 for HMw-NPC). The sizes of the LMw-NPC and HMw-NPC were 166 ± 7.4 nm and 163 ± 4.2 nm, respectively. The encapsulation efficiency of both nanoformulations was nearly 45%. According to the results, there was no statistically significant difference in particle characteristics between LMw-NPC and HMw-NPC. As in our previous studies,3,17 similar properties of nanoparticles were found in the different batches when complied with standard procedures to prepare these LMw-NPC. Therefore, our study results were confirmed to be replicable and were well validated. Figure 1 shows the TEM images of LMw-NPC and HMw-NPC. The morphology of the curcumin nanoparticles was either ellipsoid or spherical, and the diameter results were similar to the data observed by dynamic light scattering (below 200 nm) for both LMw- NPC and HMw-NPC. Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 with regard to the nanoparticles being a bright color and surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were well prepared and could be used in our subsequent pharmacokinetic study. Validation of HPLC analytic method The HPLC method was developed to investigate curcumin in rat plasma samples. Figure 2A–E shows the chromatograms of blank rat matrices, and Figure 2F–J shows the chromatograms of rat biosamples containing the internal standard and curcumin after administration of HMw-NPC. The retention times were approximately 6.5 and 22 minutes for the internal standard and curcumin, respectively, with no obvious interference peak in the blank chromatograms. These results suggest that the HPLC analytical conditions provided good separation and selectivity for curcumin and the internal standard in the experimental matrices. The validation methods were for linearity, limit of detection, limit of quantification, precision, accuracy, and extraction recovery. The calibration curves for plasma, feces, urine, serosal fluid, and sac tissue showed good linearity (r2 > 0.995) over the range of 0.01–2.5 μg/mL. The limits of detection and quantification for curcumin were determined to be 0.005 μg/mL and 0.01 μg/mL, respectively. Intra-assay and interassay precision (% RSD and accuracy [% bias]) of curcumin measurements in each rat matrix were ±15%. Extraction of curcumin at low, medium, and high concentration in rat plasma, feces, urine, serosal fluid, and sac tissue was at least 90%. These results indicate that the analytical and extraction methods used for curcumin and the internal standard were reliable for the following pharmacokinetic study. 
Bioavailability and pharmacokinetics: Mean concentrations of curcumin in rat plasma at different times after oral administration of conventional curcumin (1 g/kg) and of LMw-NPC and HMw-NPC (50 mg/kg) are shown in Figure 3. Curcumin could be detected in plasma for 6 hours after oral ingestion of conventional curcumin and of LMw-NPC, but detection was prolonged to 8 hours for the HMw-NPC formulation. Table 2 shows significant increases in Cmax (from 0.028 μg/mL for curcumin alone to 0.042 μg/mL for LMw-NPC and 0.057 μg/mL for HMw-NPC) and in area under the concentration-time curve (AUC) per dose (from 0.0062 to 0.15 and 0.25, respectively), indicating that the absorption of curcumin increased when it was encapsulated in a nanoformulation. In addition, the relative bioavailability of HMw-NPC was 40-fold and 1.67-fold higher than that of conventional curcumin and LMw-NPC, respectively (Table 2). This indicates that the HMw-PLGA nanoformulation achieved markedly better absorption of curcumin than LMw-PLGA. Furthermore, after oral administration of HMw-NPC, the Tmax of curcumin was 15 minutes, shorter than that of the other curcumin formulations (Table 2). These results show that the increase in relative bioavailability was accompanied by a shorter Tmax, suggesting that Tmax is closely related to the bioavailability of curcumin. Without nanoformulation, the accumulated percentage of unabsorbed curcumin in feces approached 90% and the percentage in urine was almost 0% (Figure 4), implying that there was low absorption of curcumin in the gastrointestinal tract. PLGA encapsulation reduced the amount of curcumin in feces (from 90% for curcumin alone to 40% for LMw-PLGA and 20% for HMw-PLGA) and increased the amount of curcumin in urine (from near 0% for curcumin alone to 0.07% for LMw-PLGA and 0.16% for HMw-PLGA), indicating that the absorption of curcumin was enhanced by PLGA formulation. These excretion results confirmed the pharmacokinetic parameters of the oral study, ie, that the absorption of curcumin was significantly increased by both the LMw-PLGA and HMw-PLGA formulations.
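The fold differences quoted above follow directly from the dose-normalized AUC values reported in Table 2; a quick arithmetic check using only the numbers given in the text is:

```python
# Fold differences in relative bioavailability from the reported AUC-per-dose values.
auc_per_dose = {"conventional curcumin": 0.0062, "LMw-NPC": 0.15, "HMw-NPC": 0.25}

hmw_vs_conventional = auc_per_dose["HMw-NPC"] / auc_per_dose["conventional curcumin"]
hmw_vs_lmw = auc_per_dose["HMw-NPC"] / auc_per_dose["LMw-NPC"]
print(f"HMw-NPC versus conventional curcumin: {hmw_vs_conventional:.0f}-fold")  # ~40-fold
print(f"HMw-NPC versus LMw-NPC: {hmw_vs_lmw:.2f}-fold")                         # ~1.67-fold
```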
Ex vivo absorption: To understand better the relationship between relative bioavailability and Tmax for the different molecular weights of PLGA-encapsulated curcumin, and to explore the possible reasons for it, an ex vivo absorption study was undertaken. The concentration of curcumin absorbed from the incubation medium into the sac tissue and then into the serosal fluid is shown in Figure 5. When samples of the reverted sac of rat small intestine were incubated with conventional curcumin, LMw-NPC, and HMw-NPC, the amount of curcumin absorbed from the nanoformulations was significantly increased in the duodenum and the ileum, but was not significantly different in the jejunum. The curcumin content was highest in the serosal fluid of the duodenum in the HMw-NPC group, followed by the LMw-NPC and conventional curcumin groups. The curcumin concentration from HMw-NPC in the serosal fluid of the duodenum was 5.12-fold and 2.66-fold higher than from LMw-NPC and conventional curcumin, respectively (Figure 5A). The absorption of curcumin, as determined by the sum of the amounts of curcumin in the serosal fluid, sac tissue, and regions of the rat gut after incubation with curcumin and its formulations, is shown in Table 3. The amount of curcumin absorbed in the duodenum and ileum was significantly increased when curcumin was encapsulated with PLGA.
The HMw-PLGA formulation noticeably enhanced the absorption of curcumin in the duodenum compared with LMw-PLGA (from 0.63 μg to 0.89 μg) and with conventional curcumin (from 0.47 μg to 0.89 μg). The ex vivo absorption results support the findings of the oral study, ie, that curcumin formulated with PLGA is better absorbed in the intestine. Overall, HMw-PLGA enabled considerably more curcumin absorption than LMw-PLGA.
The morphology of the curcumin nanoparticles was either ellipsoid or spherical, and the diameters were consistent with the data observed by dynamic light scattering (below 200 nm) for both LMw-NPC and HMw-NPC. Our TEM data are also consistent with those reported by Liang et al and Chang et al,19,20 in that the nanoparticles appear bright and are surrounded by a dark shadow. Our curcumin-loaded PLGA nanoparticles were therefore well prepared and could be used in our subsequent pharmacokinetic study. Validation of HPLC analytic method: The HPLC method was developed to investigate curcumin in rat plasma samples. Figure 2A–E shows the chromatograms of blank rat matrices, and Figure 2F–J shows the chromatograms of rat biosamples containing the internal standard and curcumin after administration of HMw-NPC. The retention times were approximately 6.5 and 22 minutes for the internal standard and curcumin, respectively, with no obvious interference peak in the blank chromatograms. These results suggest that the HPLC analytical conditions provided good separation and selectivity for curcumin and the internal standard in the experimental matrices. The method was validated for linearity, limit of detection, limit of quantification, precision, accuracy, and extraction recovery. The calibration curves for plasma, feces, urine, serosal fluid, and sac tissue showed good linearity (r2 > 0.995) over the range of 0.01–2.5 μg/mL. The limits of detection and quantification for curcumin were determined to be 0.005 μg/mL and 0.01 μg/mL, respectively. Intra-assay and interassay precision (% RSD) and accuracy (% bias) of curcumin measurements in each rat matrix were within ±15%. Extraction recovery of curcumin at low, medium, and high concentrations in rat plasma, feces, urine, serosal fluid, and sac tissue was at least 90%. These results indicate that the analytical and extraction methods used for curcumin and the internal standard were reliable for the following pharmacokinetic study.
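For readers who want to see what the linearity criterion above amounts to in practice, the sketch below fits a least-squares calibration line and computes r-squared for a six-point curve spanning the validated 0.01–2.5 μg/mL range. The peak-area responses are invented placeholders rather than the study's chromatographic data; only the concentration range and the acceptance criterion (r2 > 0.995) come from the text.

```python
import numpy as np

# Nominal calibration standards (ug/mL) across the validated range, with
# placeholder peak-area responses (not the study's raw HPLC data).
conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 2.5])
peak_area = np.array([0.9, 4.2, 8.1, 41.3, 82.0, 206.0])

# Least-squares calibration line and coefficient of determination.
slope, intercept = np.polyfit(conc, peak_area, 1)
predicted = slope * conc + intercept
ss_res = np.sum((peak_area - predicted) ** 2)
ss_tot = np.sum((peak_area - peak_area.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"calibration: y = {slope:.2f}x + {intercept:.2f}, r^2 = {r_squared:.4f}")
assert r_squared > 0.995, "calibration fails the linearity criterion"
```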
Discussion: Nanoformulation technology has been extensively utilized in foods and nutritional supplements to increase the oral bioavailability of active compounds, which may be beneficial for human health. However, there has been little investigation of how the materials used in the nanoformulation process affect the oral pharmacokinetics of a product. This study found that changing the molecular weight of PLGA changes the pharmacokinetic parameters of conventional curcumin after oral administration. The present study also investigated the possible mechanism for this difference. The HPLC method was optimized and validated to obtain reliable pharmacokinetic data for curcumin21 (Figure 2). The results of the animal study showed that the bioavailability of curcumin was significantly increased by nanoformulation when using low or high molecular weight PLGA.
The bioavailability of a compound in the body can be enhanced by decreasing its metabolism or by increasing its absorption in the gastrointestinal tract.22 Transit time in the stomach and intestines is a major factor affecting absorption, and a longer intestinal transit time will generally enhance the absorption of a substance.23 In our study, the transit time of curcumin in the gastrointestinal tract was investigated to determine its effect on absorption. We undertook an excretion study using a metabolic cage, and the fecal excretion data (Figure 4A) showed that the accumulated amount of curcumin in feces increased most rapidly and approached its maximum by 18 hours, after which a plateau was maintained from 18 to 60 hours in all groups. This indicates that the time for curcumin to transit the rat gastrointestinal tract is independent of whether curcumin is given alone or formulated with LMw-PLGA or HMw-PLGA. This finding suggests that the increased bioavailability is not due to a prolonged transit time of curcumin through the gastrointestinal tract as a result of formulation with PLGA. On the other hand, the hydrophilicity of a compound will affect its absorption in the body. Merisko-Liversidge et al reported that nanoformulations increase the surface area and surface interactions of a compound compared with conventional forms, and that such formulations also increase the rate at which compounds are dissolved in the gastrointestinal tract, thereby improving the absorption of water-insoluble compounds.9 Therefore, the increased bioavailability of curcumin in the body may be attributable to faster dissolution after administration via a nanoformulation. We found that the Cmax, AUC, and AUC per dose for the HMw-NPC group were significantly higher than for the LMw-NPC group (Table 2), even though the two groups had similar nanoparticle characteristics (Table 1). This may be because HMw-PLGA was more resistant than LMw-PLGA to the very acidic environment of gastric juice, which could reduce absorption by damaging the nanoparticle structure.24 It is noteworthy that different Tmax values were observed in the different curcumin formulation groups (Table 2). Denoting the time to maximum concentration of a drug in plasma, Tmax can be used as an indicator of the drug absorption rate, with smaller values indicating more rapid absorption.25 According to our findings, the Tmax of conventional curcumin, LMw-NPC, and HMw-NPC was 45, 30, and 15 minutes, respectively. This indicates that HMw-NPC was absorbed faster than LMw-NPC, which in turn was absorbed faster than conventional curcumin. A previous study found that oral absorption of a drug is influenced by particle size, which affects the dissolution rate.26 However, the current study found no statistically significant difference in particle size or any other particle characteristic between HMw-NPC and LMw-NPC (Table 1). Thus, particle size and other characteristics might not be responsible for the different bioavailability and absorption results for HMw-NPC and LMw-NPC. We hypothesized that the observed differences in bioavailability and Tmax may be due to different absorption rates in different portions of the gastrointestinal tract. To verify this hypothesis, we investigated ex vivo curcumin absorption from different portions of the small intestine.
It is known that the small intestine, composed of the duodenum, jejunum, and ileum, is the main site for digestion and absorption of water, nutrients, and electrolytes after a substance has been ingested orally.27 Our ex vivo absorption study demonstrated that conventional curcumin was absorbed mostly in the jejunum (Table 3). When encapsulated with PLGA, the amount of curcumin absorbed in the duodenum and ileum increased significantly, especially for HMw-NPC in the duodenum, but there was no significant difference in the rate of curcumin absorption in the jejunum with or without PLGA nanoformulation (Table 3). Although DeSesso et al have reported that the bulk of absorption takes place in the duodenum and proximal half of the jejunum,23 our results indicate that the jejunum is the main area for absorption of curcumin. Encapsulating curcumin in a PLGA formulation will increase the amount of curcumin that can be absorbed in the duodenum, and even more so when curcumin is encapsulated with larger molecular weight PLGA. Therefore, increasing the absorption rate of curcumin in the duodenum may play an important role in increasing the relative bioavailability of curcumin formulated with PLGA. Rapid absorption of HMw-NPC in the duodenum would have shortened the Tmax when any of our study formulations were ingested. Previous research has shown that the duodenum is the main absorption site in the intestine.23 Our results demonstrate that the bioavailability of a compound that cannot be effectively absorbed can be increased by high molecular weight PLGA, which is a significant and useful finding for nanoformulation. Possible transport mechanisms for this phenomenon will be investigated in a further study. In conclusion, this study demonstrates that formulation with PLGA could enhance the bioavailability of a water-insoluble polyphenol by increasing the absorption rate in the duodenum. In addition, different molecular weights of PLGA affect the pharmacokinetics of curcumin, which are closely related to the rate of absorption in the duodenum. These findings provide meaningful information for PLGA nanoformulations intended to increase the bioavailability of hydrophobic compounds in the design of drug delivery systems, and can be utilized in further preclinical and clinical research.
Background: Polylactic-co-glycolic acid (PLGA) nanoparticles have been used in recent years to increase the relative oral bioavailability of hydrophobic compounds and polyphenols, but the effect of the molecular weight of PLGA on bioavailability is still unknown. This study investigated the influence of polymer molecular weight on the relative oral bioavailability of curcumin, and explored the possible mechanism accounting for the outcome. Methods: Curcumin encapsulated in low (5000-15,000) and high (40,000-75,000) molecular weight PLGA (LMw-NPC and HMw-NPC, respectively) was prepared using an emulsification-solvent evaporation method. Curcumin alone and in the nanoformulations was administered orally to freely mobile rats, and blood samples were collected to evaluate the bioavailability of curcumin, LMw-NPC, and HMw-NPC. An ex vivo experimental gut absorption model was used to investigate the effects of the different molecular weights of PLGA formulation on absorption of curcumin. High-performance liquid chromatography with diode array detection was used for quantification of curcumin in biosamples. Results: There were no significant differences in particle properties between LMw-NPC and HMw-NPC, but the relative bioavailability of HMw-NPC was 1.67-fold and 40-fold higher than that of LMw-NPC and conventional curcumin, respectively. In addition, the mean peak concentration (C(max)) of conventional curcumin, LMw-NPC, and HMw-NPC was 0.028, 0.042, and 0.057 μg/mL, respectively. The gut absorption study further revealed that the HMw-PLGA formulation markedly increased the absorption rate of curcumin in the duodenum and resulted in excellent bioavailability compared with conventional curcumin and LMw-NPC. Conclusions: Our findings demonstrate that formulations based on different molecular weights of PLGA confer different bioavailability, owing to changes in the absorption rate in the duodenum. The results of this study provide the rationale for the design of a nanomedicine delivery system to enhance the bioavailability of water-insoluble pharmaceutical compounds and functional foods.
null
null
12,389
374
[ 527, 115, 190, 92, 106, 315, 155, 306, 265, 298, 96, 320, 262, 402, 333 ]
20
[ "curcumin", "npc", "hmw", "plga", "lmw", "hmw npc", "lmw npc", "ml", "absorption", "npc hmw npc" ]
[ "bioavailability polyphenols curcumin", "biologically active polyphenols", "bioavailability polyphenols limited", "polyphenols broad pharmacological", "oral bioavailability polyphenols" ]
null
null
[CONTENT] absorption | duodenum | molecular weight | poly(lactic-co-glycolic acid) | PLGA | relative oral bioavailability [SUMMARY]
[CONTENT] absorption | duodenum | molecular weight | poly(lactic-co-glycolic acid) | PLGA | relative oral bioavailability [SUMMARY]
[CONTENT] absorption | duodenum | molecular weight | poly(lactic-co-glycolic acid) | PLGA | relative oral bioavailability [SUMMARY]
null
[CONTENT] absorption | duodenum | molecular weight | poly(lactic-co-glycolic acid) | PLGA | relative oral bioavailability [SUMMARY]
null
[CONTENT] Absorption | Administration, Oral | Animals | Biological Availability | Chromatography, High Pressure Liquid | Curcumin | Intestine, Small | Lactic Acid | Male | Molecular Weight | Nanoparticles | Particle Size | Polyglycolic Acid | Polylactic Acid-Polyglycolic Acid Copolymer | Rats, Sprague-Dawley | Reproducibility of Results | Tissue Distribution [SUMMARY]
[CONTENT] Absorption | Administration, Oral | Animals | Biological Availability | Chromatography, High Pressure Liquid | Curcumin | Intestine, Small | Lactic Acid | Male | Molecular Weight | Nanoparticles | Particle Size | Polyglycolic Acid | Polylactic Acid-Polyglycolic Acid Copolymer | Rats, Sprague-Dawley | Reproducibility of Results | Tissue Distribution [SUMMARY]
[CONTENT] Absorption | Administration, Oral | Animals | Biological Availability | Chromatography, High Pressure Liquid | Curcumin | Intestine, Small | Lactic Acid | Male | Molecular Weight | Nanoparticles | Particle Size | Polyglycolic Acid | Polylactic Acid-Polyglycolic Acid Copolymer | Rats, Sprague-Dawley | Reproducibility of Results | Tissue Distribution [SUMMARY]
null
[CONTENT] Absorption | Administration, Oral | Animals | Biological Availability | Chromatography, High Pressure Liquid | Curcumin | Intestine, Small | Lactic Acid | Male | Molecular Weight | Nanoparticles | Particle Size | Polyglycolic Acid | Polylactic Acid-Polyglycolic Acid Copolymer | Rats, Sprague-Dawley | Reproducibility of Results | Tissue Distribution [SUMMARY]
null
[CONTENT] bioavailability polyphenols curcumin | biologically active polyphenols | bioavailability polyphenols limited | polyphenols broad pharmacological | oral bioavailability polyphenols [SUMMARY]
[CONTENT] bioavailability polyphenols curcumin | biologically active polyphenols | bioavailability polyphenols limited | polyphenols broad pharmacological | oral bioavailability polyphenols [SUMMARY]
[CONTENT] bioavailability polyphenols curcumin | biologically active polyphenols | bioavailability polyphenols limited | polyphenols broad pharmacological | oral bioavailability polyphenols [SUMMARY]
null
[CONTENT] bioavailability polyphenols curcumin | biologically active polyphenols | bioavailability polyphenols limited | polyphenols broad pharmacological | oral bioavailability polyphenols [SUMMARY]
null
[CONTENT] curcumin | npc | hmw | plga | lmw | hmw npc | lmw npc | ml | absorption | npc hmw npc [SUMMARY]
[CONTENT] curcumin | npc | hmw | plga | lmw | hmw npc | lmw npc | ml | absorption | npc hmw npc [SUMMARY]
[CONTENT] curcumin | npc | hmw | plga | lmw | hmw npc | lmw npc | ml | absorption | npc hmw npc [SUMMARY]
null
[CONTENT] curcumin | npc | hmw | plga | lmw | hmw npc | lmw npc | ml | absorption | npc hmw npc [SUMMARY]
null
[CONTENT] bic | 90plus bic | 90plus | npc | zeta | zeta potential | potential | lmw npc hmw npc | lmw npc | lmw [SUMMARY]
[CONTENT] rats | laboratory | 24 | kg | 12 | ming university | national | national yang | right jugular | national yang ming [SUMMARY]
[CONTENT] curcumin | npc | lmw | hmw | plga | absorption | hmw npc | lmw npc | absorption curcumin | rat [SUMMARY]
null
[CONTENT] curcumin | npc | plga | lmw | hmw | hmw npc | lmw npc | absorption | ml | nanoparticles [SUMMARY]
null
[CONTENT] recent years | PLGA ||| [SUMMARY]
[CONTENT] Curcumin | 5000-15,000 | 40,000-75,000 | PLGA | NPC ||| Curcumin ||| ||| [SUMMARY]
[CONTENT] NPC | NPC | 1.67-fold | 40-fold ||| NPC | 0.028 | 0.042 | 0.057 ||| [SUMMARY]
null
[CONTENT] recent years | PLGA ||| ||| 5000-15,000 | 40,000-75,000 | PLGA | NPC ||| Curcumin ||| ||| ||| NPC | NPC | 1.67-fold | 40-fold ||| NPC | 0.028 | 0.042 | 0.057 ||| ||| PLGA ||| [SUMMARY]
null
Molecular detection and genetic diversity of bovine papillomavirus in dairy cows in Xinjiang, China.
34170091
Bovine papillomatosis is a type of proliferative tumor disease of skin and mucosae caused by bovine papillomavirus (BPV). As a transboundary and emerging disease in cattle, it poses a potential threat to the dairy industry.
BACKGROUND
A total of 122 papilloma skin lesion samples from 8 intensive dairy farms located in different regions of Xinjiang, China were tested by polymerase chain reaction. The genetic evolutionary relationships of the various BPV types were analyzed by constructing and examining a phylogenetic tree based on the L1 gene.
METHODS
Ten genotypes of BPV (BPV1, BPV2, BPV3, BPV6, BPV7, BPV8, BPV10, BPV11, BPV13, and BPV14) were detected and identified in dairy cows. These were the first reported detections of BPV13 and BPV14 in Xinjiang. Mixed infections were detected, and there were geographical differences in the distribution of the BPV genotypes. Notably, the BPV infection rate among young cattle (< 1-year-old) derived from the same supply of frozen semen was higher than that of other young cattle raised naturally under the same environmental conditions.
RESULTS
Genotyping based on the L1 gene of BPV showed that the BPVs circulating in Xinjiang, China displayed substantial genetic diversity. This study provided valuable data at the molecular epidemiology level, which is conducive to developing deeper insights into the genetic diversity and pathogenic characteristics of BPVs in dairy cows.
CONCLUSIONS
[ "Animals", "Cattle", "Cattle Diseases", "Dairying", "Deltapapillomavirus", "Female", "Genetic Variation", "Papillomavirus Infections" ]
8318792
INTRODUCTION
Bovine papillomatosis (BP) is a chronic proliferative skin disease caused by bovine papillomavirus (BPV) [1], which results in cutaneous neoplastic lesions and reductions in animal condition within the cattle industry [2]. As one of the transboundary and emerging diseases in cattle, BP circulates in many countries [3,4,5,6,7,8]. However, to date, there is no effective vaccine to prevent the occurrence of BP [2]. As a non-enveloped double-stranded DNA virus, BPV is a member of the Papillomaviridae family that commonly infects epithelial cells of bovine skin, mucous membranes, and other tissues [29]. The genome of BPV is about 7,800 bp, and the virus capsid is mainly composed of the L1 protein [10]. So far, at least 27 genotypes of BPV (BPV1-27) have been identified and subsequently classified into the following genera: Deltapapillomavirus genus (BPV1, BPV2, BPV13, and BPV14), Xipapillomavirus genus (BPV3, BPV4, BPV6, BPV9, BPV10, BPV11, BPV12, BPV15, BPV17, and BPV20), Epsilonpapillomavirus genus (BPV5 and BPV8), Dyokappapapillomavirus genus (BPV16, BPV18, and BPV22), Dyoxipapillomavirus genus (BPV7), and unclassified genera (BPV19, BPV21, and BPV27) [8,11,12,13,14,15,16,17]. Furthermore, some unclassified papillomaviruses have been subsequently reported [13,15,18,19]. Recently, a novel type from Japan, designated BPV28, was identified and characterized [17] and classified in the Xipapillomavirus genus. Xinjiang is one of the five major cattle-farming areas in China. The dairy cattle inventory in Xinjiang province amounts to 2.8 million, and BPV-like infections frequently occur in dairy cattle; however, to date, the infection status and genotypes of BPVs in dairy cows in Xinjiang, China remain unclear. Hence, this study aims to identify and characterize the genotypes of BPVs circulating in intensive dairy farms in Xinjiang, China. The results provide new insights into the molecular characteristics of BPVs, which will be conducive to studies into the pathogenicity of BPVs and the development of an effective vaccine to prevent this disease and its related cutaneous and mucosal tumors in dairy cows.
null
null
RESULTS
Papillomas in nipple samples had flat, round, and lobed shapes, while papillomas in head and neck samples were nodular and cauliflower-like (Fig. 1). Among the 5 major cattle-farming areas, the overall BPV detection rate was 100% in the samples collected from Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases). Among the 122 samples from dairy cows with BP-like clinical symptoms, all were positive based on broad-spectrum PCR detection of papillomaviruses. Based on the sequencing results and according to the classification standard for papillomaviruses, 10 genotypes were identified (Fig. 2). The partial sequences of the L1 genes from ten BPV epidemic strains have been submitted to GenBank (accession numbers: BPV1-XJ05, MT974519; BPV2-XJ18, MT974518; BPV3-XJ21, MT974517; BPV6-XJ38, MT974516; BPV7-XJ42, MT974515; BPV8-XJ76, MT974514; BPV10-XJ91, MT974513; BPV11-XJ102, MT974512; BPV13-XJ29, MT974511; BPV14-XJ82, MT974510) (Supplementary Table 2). The detection rates of the different BPV types were significantly different. Among the 122 dairy cow samples, 98 (80.33%, 98/122) were infected by BPV2; 77 (63.11%, 77/122) were infected by BPV1; 39 (31.97%, 39/122) were infected by BPV8; 27 (22.13%, 27/122) were infected by BPV3; 21 (17.21%, 21/122) were infected by BPV7; 16 (13.11%, 16/122) were infected by BPV6; 6 (4.92%, 6/122) were infected by BPV11; 5 (4.10%, 5/122) were infected by BPV10; 3 (2.46%, 3/122) were infected by BPV13; and 1 (0.82%, 1/122) was infected by BPV14 (Fig. 2, Supplementary Table 1). These were the first reported detections of BPV13 and BPV14 in Xinjiang. The results indicate that BPV2 and BPV1 are the predominant genotypes circulating in dairy cows in Xinjiang province, China. As shown in Table 1, there were some geographical differences in the distribution of the BPV genotypes. Among them, BPV10, BPV11, and BPV14 were uniquely detected in Tacheng, Changji, and Shihezi, respectively, while BPV1, BPV2, and BPV3 were the most widely distributed in Xinjiang province (Table 1). Notably, molecular detection confirmed that BPV13 and BPV14 were present in Xinjiang, China, and mixed infections of different BPV genotypes were common in dairy cows (Supplementary Table 1), accounting for 98.08% (116/122) of all samples. By contrast, the BPV detection rate in young cattle (< 1-year-old) derived from the same supply of frozen semen (43.75%, 21/48) was significantly higher (p < 0.05) than that of young cattle bred naturally (14.29%, 6/42) at the same dairy farm located in Tacheng (Fig. 3, Supplementary Table 3; values are presented as number (%), and the asterisk indicates a significant difference; BPV, bovine papillomavirus). When compared with those of the reference strains, the L1 genes of the Xinjiang strains BPV1-XJ05, BPV2-XJ18, BPV3-XJ21, BPV6-XJ38, BPV7-XJ42, BPV8-XJ76, BPV10-XJ91, BPV11-XJ102, BPV13-XJ29, and BPV14-XJ82 shared 95.2–100% identities with the reference strains (BPV1, NC_001522; BPV2, KC256805; BPV3, AJ620207; BPV6, AJ620208; BPV7, NC_007612; BPV8, DQ098913; BPV10, AB331651; BPV11, AB543507; BPV13, JQ798171; BPV14, KR868228, respectively), indicating the presence of obvious genetic diversity.
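To make the genotyping criterion concrete: papillomavirus types are conventionally distinguished on L1 nucleotide identity, with a new type differing from its closest relative by more than 10%, a subtype by 2–10%, and a variant by less than 2% (the thresholds cited in the Materials and Methods). The sketch below is an illustrative Python implementation of that decision rule on pre-aligned sequences; the two sequence fragments are toy placeholders, not the Xinjiang L1 sequences.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two equal-length, pre-aligned sequences.
    Alignment columns with a gap in either sequence are skipped."""
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a != "-" and b != "-"]
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

def classify(identity: float) -> str:
    """Apply the L1-based type/subtype/variant thresholds."""
    difference = 100.0 - identity
    if difference > 10.0:
        return "distinct type"
    if difference >= 2.0:
        return "subtype"
    return "variant"

# Toy aligned fragments (placeholders only).
query = "ATGGCATTGTGGCAACAGAACCAA"
reference = "ATGGCATTGTGGCAGCAGAATCAA"
ident = percent_identity(query, reference)
print(f"identity = {ident:.1f}% -> {classify(ident)}")
```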
The phylogenetic tree based on the L1 gene of BPVs revealed that BPV1-XJ05, BPV2-XJ18, BPV13-XJ29, and BPV14-XJ82 were members of the Deltapapillomavirus genus, while BPV3-XJ21, BPV6-XJ38, BPV10-XJ91, and BPV11-XJ102 were classified as members of the Xipapillomavirus genus. BPV8-XJ76 was a representative of the Epsilonpapillomavirus genus, whereas BPV7-XJ42 was a member of the Dyoxipapillomavirus genus (Fig. 4). Based on genetic distances, BPV13-XJ29 was shown to be closely related to the BPVBR-UEL4 strain, while BPV14-XJ82 had a close relationship with the BPV/UFPE04BR strain.
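The tree behind Fig. 4 was built with the neighbor-joining method (in MEGA 5, with 1,000 bootstrap replicates, per the Materials and Methods). As a rough illustration of the same idea outside MEGA, the sketch below uses Biopython to derive identity-based distances from an alignment and build a neighbor-joining tree; the input file name is a placeholder, and this is not the authors' actual workflow.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical multiple alignment of partial L1 sequences (for example, one
# produced with Clustal W); "bpv_L1_aligned.fasta" is a placeholder file name.
alignment = AlignIO.read("bpv_L1_aligned.fasta", "fasta")

# Pairwise identity-based distances, then a neighbor-joining tree.
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)

# The study assessed branch support with 1,000 bootstrap replicates in MEGA 5;
# Bio.Phylo.Consensus provides comparable bootstrap utilities if needed.
Phylo.draw_ascii(tree)
```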
null
null
[ "Collection of clinical samples", "Design of primers", "Molecular detection", "Sequencing and comparison of L1 gene DNA sequences", "Phylogenetic analysis based on the BPV L1 gene", "Statistical analysis of data", "Ethics statement" ]
[ "During 2015–2019, a total of 122 clinical samples of Holstein cows with BP-like symptoms were collected from 8 intensive dairy farms located in 5 major cattle-farming areas of Xinjiang province, China (Supplementary Fig. 1), including the Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases) areas. The skin samples were collected from several parts of dairy cow bodies, including face, neck, shoulder, back, and nipple. All efforts were made to minimize animal stress. Dairy information on age, sex, pathologically damaged sites, and papillomatosis size was also recorded (Supplementary Table 1). The obtained papilloma lesion samples were stored at −80oC freezer.", "The degenerate primers FAP59-FAP64 (FAP59, 5′-TAACWGTIGGICAYCCWTATT-3′ and FAP64, 5′-CCWATATCWVHCATITCICCATC-3′) were designed as described in the study by Forslund et al. [20]. The primers were used for polymerase chain reaction (PCR) amplification of viral DNA from the skin samples.", "Briefly, 100 mg of each collected papilloma skin lesion sample were removed using a sterile knife and placed in a tissue grinder. After adding 1 mL of physiological saline, the sample was ground. The homogenate was centrifuged at 3,000 r/min for 1 min, and then 200 μL of supernatant was collected. Total DNA was isolated from the supernatant using a MiniBEST Viral DNA Extraction Kit (TaKaRa Bio, Japan) according to the manufacturer's instructions. The extracted DNA was used as a PCR template. The PCR was carried out using a Bio-Rad C1000 Touch 850W Thermal Cycler (Bio-Rad, USA). The PCR reaction mix included 21 μL of water, 1 μL (0.2 μmol/L) of each FAP59-FAP64 primer, 25 μL of 2× Premix Ex Taq (TaKaRa Bio), and 2 μL of DNA template. The PCR sequence conditions were as follows: 95°C for 5 min followed by 30 cycles of 40 sec at 94°C, 40 sec at 64°C, and 1 min at 72°C, and a final 10 min at 72°C. The PCR products were separated on 2% agarose gel by electrophoresis, and the gels were stained with ethidium bromide (Takara Bio). Gels were then visualized under UV light and photographed. The DNA bands were recovered using an agarose gel DNA fragment recovery kit (Promega, USA) according to the manufacturer's instructions. The purified DNA fragment was then cloned into pMD19-T (Takara Bio) for sequencing.", "Positive clones that had been screened and verified by PCR and restriction enzyme digestion as correct were randomly chosen to conduct DNA sequencing. When the DNA sequences of at least three clones were completely identical, they were recognized as the original sequence. Using both Clustal W (https://www.genome.jp/tools-bin/clustalw) and DNAMAN software (versions: 6.0; Lynnon Corporation, Canada), these DNA sequences were compared to L1 gene sequences of BPV1-28 that were downloaded from GenBank (https://www.ncbi.nlm.nih.gov/), and sequence identities were calculated. According to the nucleotide sequence identity of a relatively conserved region of the L1 gene, the different types of BPV share a genetic identity of more than 10%; differences between 2% and 10% identity define a subtype, while a variant is defined when the difference is < 2% [91121]. Furthermore, relationship details for the detected BPV genotypes associated with age, sex, pathologically damaged sites, and papillomatosis size were also analyzed.", "The gene sequences of various genotypes of BPVs detected in this study, and those of BPVs 1–28 are presented in Supplementary Table 1. 
A phylogenetic tree was constructed by using the neighbor-joining method included in MEGA 5 software [22]. The bootstrap repeat was 1,000. The genetic evolution relationships of various BPV types were analyzed by examining this phylogenetic tree.", "The data were analyzed using SAS software (version 9.1; SAS Institute, Inc., USA). The detection rates of the different BPV types were compared using the χ\n2 test. A value of p < 0.05 was considered statistically significant, while a p < 0.01 was considered extremely significant.", "The experiments were carried out in accordance with the guidelines issued by the Ethical Committee of Shihezi University (No. SHZUA2015-120)." ]
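The statistical comparison described above (a χ2 test with significance at p < 0.05, run in SAS 9.1) can be reproduced for any pair of detection rates from a 2×2 table of positives and negatives. The sketch below does this in Python with SciPy rather than SAS (an assumed substitute, not the authors' code), using the counts reported in the Results for frozen-semen-derived versus naturally bred young cattle (21/48 vs 6/42).

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table: BPV-positive vs BPV-negative young cattle (< 1 year),
# frozen-semen-derived vs naturally bred, counts taken from the Results.
observed = [
    [21, 48 - 21],  # frozen-semen-derived calves: positive, negative
    [6, 42 - 6],    # naturally bred calves: positive, negative
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Detection rates differ significantly (p < 0.05), consistent with the study.")
```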
[ null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Collection of clinical samples", "Design of primers", "Molecular detection", "Sequencing and comparison of L1 gene DNA sequences", "Phylogenetic analysis based on the BPV L1 gene", "Statistical analysis of data", "Ethics statement", "RESULTS", "DISCUSSION" ]
[ "Bovine papillomatosis (BP) is a chronic proliferative skin disease caused by bovine papillomavirus (BPV) [1], which results in cutaneous neoplastic lesions and reductions in animal constitution within the cattle industry [2]. As one of the transboundary and emerging diseases in cattle, BP circulates in many countries [345678]. However, to date, there is no effective vaccine to prevent the occurrence of BP [2].\nAs a non-enveloped double-stranded DNA virus, BPV is a member of the Papillomaviridae family that commonly infects epithelial cells of bovine skin, mucous membranes, and other tissues [29]. The genome DNA of BPV is about 7800 bp, and the virus capsid is mainly composed of Ll protein [10]. So far, at least 27 genotypes of BPVs (BPV1-27) have been identified and subsequently classified into the following genera: Deltapapillomavirus genus (BPV1, BPV2, BPV13, and BPV14), Xipapillomavirus genus (BPV3, BPV4, BPV6, BPV9, BPV10, BPV11, BPV12, BPV15, BPV17, and BPV20), Epsilonpapillomavirus genus (BPV5 and BPV8), Dyokappapapillomavirus genus (BPV16, BPV18, and BPV22), Dyoxipapillomavirus (BPV7) and unclassified genera (BPV19, BPV21, and BPV27) [811121314151617]. Furthermore, some unclassified papillomaviruses have been subsequently reported [13151819]. Recently, a novel type, designated as BPV28, from Japan was identified and characterized [17] and classified in the Xipapillomavirus genus.\nXinjiang is one of five major cattle-farming areas in China. The dairy cattle inventory in Xinjiang province amounts to 2.8 million, BPV-like infections frequently occur in dairy cattle; however, to date, infection status and the genotypes of BPVs in dairy cows in Xinjiang, China remain unclear. Hence, this study aims to identify and characterize the genotypes of BPVs circulating in intensive dairy farms in Xinjiang, China. The results provide new insights into the molecular characteristics of BPVs, which will be conducive to studies into the pathogenicity of BPVs and the development of an effective vaccine to prevent this disease and its related cutaneous and mucosal tumors in dairy cows.", "Collection of clinical samples During 2015–2019, a total of 122 clinical samples of Holstein cows with BP-like symptoms were collected from 8 intensive dairy farms located in 5 major cattle-farming areas of Xinjiang province, China (Supplementary Fig. 1), including the Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases) areas. The skin samples were collected from several parts of dairy cow bodies, including face, neck, shoulder, back, and nipple. All efforts were made to minimize animal stress. Dairy information on age, sex, pathologically damaged sites, and papillomatosis size was also recorded (Supplementary Table 1). The obtained papilloma lesion samples were stored at −80oC freezer.\nDuring 2015–2019, a total of 122 clinical samples of Holstein cows with BP-like symptoms were collected from 8 intensive dairy farms located in 5 major cattle-farming areas of Xinjiang province, China (Supplementary Fig. 1), including the Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases) areas. The skin samples were collected from several parts of dairy cow bodies, including face, neck, shoulder, back, and nipple. All efforts were made to minimize animal stress. Dairy information on age, sex, pathologically damaged sites, and papillomatosis size was also recorded (Supplementary Table 1). 
The obtained papilloma lesion samples were stored at −80oC freezer.\nDesign of primers The degenerate primers FAP59-FAP64 (FAP59, 5′-TAACWGTIGGICAYCCWTATT-3′ and FAP64, 5′-CCWATATCWVHCATITCICCATC-3′) were designed as described in the study by Forslund et al. [20]. The primers were used for polymerase chain reaction (PCR) amplification of viral DNA from the skin samples.\nThe degenerate primers FAP59-FAP64 (FAP59, 5′-TAACWGTIGGICAYCCWTATT-3′ and FAP64, 5′-CCWATATCWVHCATITCICCATC-3′) were designed as described in the study by Forslund et al. [20]. The primers were used for polymerase chain reaction (PCR) amplification of viral DNA from the skin samples.\nMolecular detection Briefly, 100 mg of each collected papilloma skin lesion sample were removed using a sterile knife and placed in a tissue grinder. After adding 1 mL of physiological saline, the sample was ground. The homogenate was centrifuged at 3,000 r/min for 1 min, and then 200 μL of supernatant was collected. Total DNA was isolated from the supernatant using a MiniBEST Viral DNA Extraction Kit (TaKaRa Bio, Japan) according to the manufacturer's instructions. The extracted DNA was used as a PCR template. The PCR was carried out using a Bio-Rad C1000 Touch 850W Thermal Cycler (Bio-Rad, USA). The PCR reaction mix included 21 μL of water, 1 μL (0.2 μmol/L) of each FAP59-FAP64 primer, 25 μL of 2× Premix Ex Taq (TaKaRa Bio), and 2 μL of DNA template. The PCR sequence conditions were as follows: 95°C for 5 min followed by 30 cycles of 40 sec at 94°C, 40 sec at 64°C, and 1 min at 72°C, and a final 10 min at 72°C. The PCR products were separated on 2% agarose gel by electrophoresis, and the gels were stained with ethidium bromide (Takara Bio). Gels were then visualized under UV light and photographed. The DNA bands were recovered using an agarose gel DNA fragment recovery kit (Promega, USA) according to the manufacturer's instructions. The purified DNA fragment was then cloned into pMD19-T (Takara Bio) for sequencing.\nBriefly, 100 mg of each collected papilloma skin lesion sample were removed using a sterile knife and placed in a tissue grinder. After adding 1 mL of physiological saline, the sample was ground. The homogenate was centrifuged at 3,000 r/min for 1 min, and then 200 μL of supernatant was collected. Total DNA was isolated from the supernatant using a MiniBEST Viral DNA Extraction Kit (TaKaRa Bio, Japan) according to the manufacturer's instructions. The extracted DNA was used as a PCR template. The PCR was carried out using a Bio-Rad C1000 Touch 850W Thermal Cycler (Bio-Rad, USA). The PCR reaction mix included 21 μL of water, 1 μL (0.2 μmol/L) of each FAP59-FAP64 primer, 25 μL of 2× Premix Ex Taq (TaKaRa Bio), and 2 μL of DNA template. The PCR sequence conditions were as follows: 95°C for 5 min followed by 30 cycles of 40 sec at 94°C, 40 sec at 64°C, and 1 min at 72°C, and a final 10 min at 72°C. The PCR products were separated on 2% agarose gel by electrophoresis, and the gels were stained with ethidium bromide (Takara Bio). Gels were then visualized under UV light and photographed. The DNA bands were recovered using an agarose gel DNA fragment recovery kit (Promega, USA) according to the manufacturer's instructions. The purified DNA fragment was then cloned into pMD19-T (Takara Bio) for sequencing.\nSequencing and comparison of L1 gene DNA sequences Positive clones that had been screened and verified by PCR and restriction enzyme digestion as correct were randomly chosen to conduct DNA sequencing. 
When the DNA sequences of at least three clones were completely identical, they were recognized as the original sequence. Using both Clustal W (https://www.genome.jp/tools-bin/clustalw) and DNAMAN software (versions: 6.0; Lynnon Corporation, Canada), these DNA sequences were compared to L1 gene sequences of BPV1-28 that were downloaded from GenBank (https://www.ncbi.nlm.nih.gov/), and sequence identities were calculated. According to the nucleotide sequence identity of a relatively conserved region of the L1 gene, the different types of BPV share a genetic identity of more than 10%; differences between 2% and 10% identity define a subtype, while a variant is defined when the difference is < 2% [91121]. Furthermore, relationship details for the detected BPV genotypes associated with age, sex, pathologically damaged sites, and papillomatosis size were also analyzed.\nPositive clones that had been screened and verified by PCR and restriction enzyme digestion as correct were randomly chosen to conduct DNA sequencing. When the DNA sequences of at least three clones were completely identical, they were recognized as the original sequence. Using both Clustal W (https://www.genome.jp/tools-bin/clustalw) and DNAMAN software (versions: 6.0; Lynnon Corporation, Canada), these DNA sequences were compared to L1 gene sequences of BPV1-28 that were downloaded from GenBank (https://www.ncbi.nlm.nih.gov/), and sequence identities were calculated. According to the nucleotide sequence identity of a relatively conserved region of the L1 gene, the different types of BPV share a genetic identity of more than 10%; differences between 2% and 10% identity define a subtype, while a variant is defined when the difference is < 2% [91121]. Furthermore, relationship details for the detected BPV genotypes associated with age, sex, pathologically damaged sites, and papillomatosis size were also analyzed.\nPhylogenetic analysis based on the BPV L1 gene The gene sequences of various genotypes of BPVs detected in this study, and those of BPVs 1–28 are presented in Supplementary Table 1. A phylogenetic tree was constructed by using the neighbor-joining method included in MEGA 5 software [22]. The bootstrap repeat was 1,000. The genetic evolution relationships of various BPV types were analyzed by examining this phylogenetic tree.\nThe gene sequences of various genotypes of BPVs detected in this study, and those of BPVs 1–28 are presented in Supplementary Table 1. A phylogenetic tree was constructed by using the neighbor-joining method included in MEGA 5 software [22]. The bootstrap repeat was 1,000. The genetic evolution relationships of various BPV types were analyzed by examining this phylogenetic tree.\nStatistical analysis of data The data were analyzed using SAS software (version 9.1; SAS Institute, Inc., USA). The detection rates of the different BPV types were compared using the χ\n2 test. A value of p < 0.05 was considered statistically significant, while a p < 0.01 was considered extremely significant.\nThe data were analyzed using SAS software (version 9.1; SAS Institute, Inc., USA). The detection rates of the different BPV types were compared using the χ\n2 test. A value of p < 0.05 was considered statistically significant, while a p < 0.01 was considered extremely significant.\nEthics statement The experiments were carried out in accordance with the guidelines issued by the Ethical Committee of Shihezi University (No. 
SHZUA2015-120).\nThe experiments were carried out in accordance with the guidelines issued by the Ethical Committee of Shihezi University (No. SHZUA2015-120).", "During 2015–2019, a total of 122 clinical samples of Holstein cows with BP-like symptoms were collected from 8 intensive dairy farms located in 5 major cattle-farming areas of Xinjiang province, China (Supplementary Fig. 1), including the Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases) areas. The skin samples were collected from several parts of dairy cow bodies, including face, neck, shoulder, back, and nipple. All efforts were made to minimize animal stress. Dairy information on age, sex, pathologically damaged sites, and papillomatosis size was also recorded (Supplementary Table 1). The obtained papilloma lesion samples were stored at −80oC freezer.", "The degenerate primers FAP59-FAP64 (FAP59, 5′-TAACWGTIGGICAYCCWTATT-3′ and FAP64, 5′-CCWATATCWVHCATITCICCATC-3′) were designed as described in the study by Forslund et al. [20]. The primers were used for polymerase chain reaction (PCR) amplification of viral DNA from the skin samples.", "Briefly, 100 mg of each collected papilloma skin lesion sample were removed using a sterile knife and placed in a tissue grinder. After adding 1 mL of physiological saline, the sample was ground. The homogenate was centrifuged at 3,000 r/min for 1 min, and then 200 μL of supernatant was collected. Total DNA was isolated from the supernatant using a MiniBEST Viral DNA Extraction Kit (TaKaRa Bio, Japan) according to the manufacturer's instructions. The extracted DNA was used as a PCR template. The PCR was carried out using a Bio-Rad C1000 Touch 850W Thermal Cycler (Bio-Rad, USA). The PCR reaction mix included 21 μL of water, 1 μL (0.2 μmol/L) of each FAP59-FAP64 primer, 25 μL of 2× Premix Ex Taq (TaKaRa Bio), and 2 μL of DNA template. The PCR sequence conditions were as follows: 95°C for 5 min followed by 30 cycles of 40 sec at 94°C, 40 sec at 64°C, and 1 min at 72°C, and a final 10 min at 72°C. The PCR products were separated on 2% agarose gel by electrophoresis, and the gels were stained with ethidium bromide (Takara Bio). Gels were then visualized under UV light and photographed. The DNA bands were recovered using an agarose gel DNA fragment recovery kit (Promega, USA) according to the manufacturer's instructions. The purified DNA fragment was then cloned into pMD19-T (Takara Bio) for sequencing.", "Positive clones that had been screened and verified by PCR and restriction enzyme digestion as correct were randomly chosen to conduct DNA sequencing. When the DNA sequences of at least three clones were completely identical, they were recognized as the original sequence. Using both Clustal W (https://www.genome.jp/tools-bin/clustalw) and DNAMAN software (versions: 6.0; Lynnon Corporation, Canada), these DNA sequences were compared to L1 gene sequences of BPV1-28 that were downloaded from GenBank (https://www.ncbi.nlm.nih.gov/), and sequence identities were calculated. According to the nucleotide sequence identity of a relatively conserved region of the L1 gene, the different types of BPV share a genetic identity of more than 10%; differences between 2% and 10% identity define a subtype, while a variant is defined when the difference is < 2% [91121]. 
Furthermore, relationship details for the detected BPV genotypes associated with age, sex, pathologically damaged sites, and papillomatosis size were also analyzed.", "The gene sequences of various genotypes of BPVs detected in this study, and those of BPVs 1–28 are presented in Supplementary Table 1. A phylogenetic tree was constructed by using the neighbor-joining method included in MEGA 5 software [22]. The bootstrap repeat was 1,000. The genetic evolution relationships of various BPV types were analyzed by examining this phylogenetic tree.", "The data were analyzed using SAS software (version 9.1; SAS Institute, Inc., USA). The detection rates of the different BPV types were compared using the χ\n2 test. A value of p < 0.05 was considered statistically significant, while a p < 0.01 was considered extremely significant.", "The experiments were carried out in accordance with the guidelines issued by the Ethical Committee of Shihezi University (No. SHZUA2015-120).", "Papilloma in nipple samples had flat, round, and lobed shapes, while papilloma was nodular and cauliflower-like in head and neck samples (Fig. 1). Among the 5 major cattle-farming areas, the overall BPV detection rate was 100% in the samples collected from Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases).\nAmong 122 samples from dairy cows with BP-like clinical symptoms, all were positive based on broad-spectrum PCR detection of papillomaviruses. Based on the sequencing results and according to the classification standard for papillomaviruses, 10 genotypes were identified (Fig. 2). The partial sequences of the L1 genes from ten BPV epidemic strains have been submitted to GenBank (accession numbers: BPV1-XJ05, MT974519; BPV2-XJ18, MT974518; BPV3-XJ21, MT974517; BPV6-XJ38, MT974516; BPV7-XJ42, MT974515; BPV8-XJ76, MT974514; BPV10-XJ91, MT974513; BPV11-XJ102, MT974512; BPV13-XJ29, MT974511; BPV14-XJ82, MT974510) (Supplementary Table 2).\nBPV, bovine papillomavirus.\nThe detection rates of the different BPV types were significantly different. Among the 122 dairy cow samples, 98 (80.33%) were infected by BPV2; 77 (63.11%, 77/122) were infected by BPV1; 39 (31.97%, 39/122) were infected by BPV8; 27 (22.13%, 27/122) were infected by BPV3; 21 (17.21%, 21/122) were infected by BPV7; 16 (13.11%, 16/122) were infected by BPV6; 6 (4.92%, 6/122) were infected by BPV11; 5 (4.10%, 5/122) were infected by BPV10; 3 (2.46%, 3/122) were infected by BPV13; 1 (0.82%, 1/122) was infected by BPV14 (Fig. 2, Supplementary Table 1). These were the first reported detections of BPV13 and BPV14 in Xinjiang. The results indicate that BPV2 and BPV1 are the predominant genotypes circulating in dairy cows in Xinjiang province, China.\nAs shown in Table 1, there were some geographical differences in the distribution of BPV genotypes. Among them, BPV10, BPV11, and BPV1 14 were uniquely detected in Tacheng, Changji, and Shihezi, respectively, while BPV1, BPV2, and BPV3 were the most widely distributed in Xinjiang province (Table 1). Notably, molecular detection results confirmed that BPV13 and BPV14 were present in Xinjiang, China, and mixed infections of different BPV genotypes were common in dairy cows (Supplementary Table 1), accounting for 98.08% (116/122) of all samples. 
By contrast, the BPV detection rate in young cattle (< 1-year-old) developed from the same supply of frozen sperm (43.75%, 21/48) was significantly higher (p < 0.05) than that of young cattle naturally reproduced (14.29%, 6/42) at the same dairy farm located in Tacheng environment (Fig. 3, Supplementary Table 3).\nValues are presented as number (%).\nBPV, bovine papillomavirus.\n*The asterisk indicates a significant difference.\nBPV, bovine papillomavirus.\n*The asterisk indicates a significant difference.\nWhen compared with those of the reference strains, the L1 genes of Xinjiang strains BPV1-XJ05, BPV2-XJ18, BPV3-XJ21, BPV6-XJ38, BPV7-XJ42, BPV8-XJ76, BPV10-XJ91, BPV11-XJ102, BPV13-XJ29, and BPV14-XJ82 shared 95.2–100% identities with reference strains (BPV1, NC_001522; BPV2, KC256805; BPV3, AJ620207; BPV6, AJ620208; BPV7, NC_007612; BPV8, DQ098913; BPV10, AB331651; BPV11, AB543507; BPV13, JQ798171; BPV14, KR868228, respectively), indicating the presence of obvious genetic diversity.\nThe phylogenetic tree based on L1 gene of BPVs revealed that BPV1-XJ05, BPV2-XJ18, BPV13-XJ29, and BPV14-XJ82 were members of the Deltapapillomavirus genus, while BPV3-XJ21, BPV6-XJ38, BPV10-XJ91, and BPV11-XJ102 were classified as members of the Xipapillomavirus genus. BPV8-XJ76 was a representative of the Epsilonpapillomavirus genus, whereas BPV7-XJ42 was a Dyoxipapillomavirus genus member (Fig. 4). Based on genetic distances, BPV13-XJ29 was shown to be closely related to the BPVBR-UEL4 strain, while BPV14-XJ82 had a close relationship with the BPV/UFPE04BR strain.\nBPV, bovine papillomavirus.", "The L1 protein is not only a major component of the BPV capsid but also a major protective antigen for inducing the release of neutralizing antibodies, and it is prone to mutation under immune system pressure [222324]. At present, 28 types of BPVs have been identified based on the nucleotide sequence differences of the L1 gene [11725]. However, the predominant types of BPVs vary among the different geographic regions of the world [1226]. Ogawa et al. [21] investigated 122 samples of bovine tumor skin and normal healthy skin and detected 11 BPV types (BAPV1-10 and BAPV11my 41). Lunardi et al. [12] identified BPVs 6–10 and 2 unreported putative papillomavirus types (BPV/BR-UEL6 and BPV/BR-UEL7 40) in teat lesions from cattle farms in Brazil. In this study, we detected 10 genotypes of BPVs, including BPV1, BPV2, BPV3, BPV6, BPV7, BPV10, BPV11, BPV11, BPV13, and BPV14 in dairy cattle from intensive dairy farms in Xinjiang, China. The distribution of the BPV genotypes varied within the geographical regions of Xinjiang.\nThe relationship between the genetic diversity of BPVs and their pathogenicity remains unclear [27]. Therefore, the pathogenesis and immunity of different genotypes of BPVs should be further explored in the future.\nIt is reported that BPV can infect cattle through various routes [228], among which direct contact is commonly considered responsible for transmission [1]. It was reported that viral DNA could be detected in milk, blood, urine, semen, and spermatozoa of BPV-infected animals, implying that lymphocytes, seminal fluid, and spermatozoa may have potential roles in BPV transmission [29]. In our study, it is worth noting that the BPV infection rate among young cattle (< 1-year-old) developed from the same supply of frozen sperms was significantly higher than that of other young cattle naturally raised in the same environment. 
This high infection rate may be closely related to the extensive use of frozen sperm to reproduce Holstein cows in Xinjiang, China, implying that artificial insemination with frozen semen may be an important route of BPV transmission in dairy cows.\nTo date, at least 27 BPV genotypes have been identified and characterized [131819]; however, numerous putative and new genotypes of BPVs have already been detected in dairy cows [16]. Yamashita-Kawanishi et al. [17] identified and characterized a novel BPV (named BPV28) in a facial cutaneous papilloma lesion on Holstein dairy cattle in Japan. Herein, we report the first occurrence of BPV13 and BPV14 in Xinjiang, China. Mixed infections with different BPV genotypes were common in the same papillomatosis sample, indicating the widespread presence of genetic diversity. This study provides valuable epidemiological data for BPV in Xinjiang, China, and elucidates the genetic diversity and pathogenic characteristics of BPVs in dairy cattle." ]
[ "intro", "materials|methods", null, null, null, null, null, null, null, "results", "discussion" ]
[ "bovine papillomavirus", "molecular detection", "genetic diversity", "Xinjiang, China" ]
INTRODUCTION: Bovine papillomatosis (BP) is a chronic proliferative skin disease caused by bovine papillomavirus (BPV) [1], which results in cutaneous neoplastic lesions and reductions in animal constitution within the cattle industry [2]. As one of the transboundary and emerging diseases in cattle, BP circulates in many countries [345678]. However, to date, there is no effective vaccine to prevent the occurrence of BP [2]. As a non-enveloped double-stranded DNA virus, BPV is a member of the Papillomaviridae family that commonly infects epithelial cells of bovine skin, mucous membranes, and other tissues [29]. The genome DNA of BPV is about 7800 bp, and the virus capsid is mainly composed of Ll protein [10]. So far, at least 27 genotypes of BPVs (BPV1-27) have been identified and subsequently classified into the following genera: Deltapapillomavirus genus (BPV1, BPV2, BPV13, and BPV14), Xipapillomavirus genus (BPV3, BPV4, BPV6, BPV9, BPV10, BPV11, BPV12, BPV15, BPV17, and BPV20), Epsilonpapillomavirus genus (BPV5 and BPV8), Dyokappapapillomavirus genus (BPV16, BPV18, and BPV22), Dyoxipapillomavirus (BPV7) and unclassified genera (BPV19, BPV21, and BPV27) [811121314151617]. Furthermore, some unclassified papillomaviruses have been subsequently reported [13151819]. Recently, a novel type, designated as BPV28, from Japan was identified and characterized [17] and classified in the Xipapillomavirus genus. Xinjiang is one of five major cattle-farming areas in China. The dairy cattle inventory in Xinjiang province amounts to 2.8 million, BPV-like infections frequently occur in dairy cattle; however, to date, infection status and the genotypes of BPVs in dairy cows in Xinjiang, China remain unclear. Hence, this study aims to identify and characterize the genotypes of BPVs circulating in intensive dairy farms in Xinjiang, China. The results provide new insights into the molecular characteristics of BPVs, which will be conducive to studies into the pathogenicity of BPVs and the development of an effective vaccine to prevent this disease and its related cutaneous and mucosal tumors in dairy cows. MATERIALS AND METHODS: Collection of clinical samples During 2015–2019, a total of 122 clinical samples of Holstein cows with BP-like symptoms were collected from 8 intensive dairy farms located in 5 major cattle-farming areas of Xinjiang province, China (Supplementary Fig. 1), including the Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases) areas. The skin samples were collected from several parts of dairy cow bodies, including face, neck, shoulder, back, and nipple. All efforts were made to minimize animal stress. Dairy information on age, sex, pathologically damaged sites, and papillomatosis size was also recorded (Supplementary Table 1). The obtained papilloma lesion samples were stored at −80oC freezer. During 2015–2019, a total of 122 clinical samples of Holstein cows with BP-like symptoms were collected from 8 intensive dairy farms located in 5 major cattle-farming areas of Xinjiang province, China (Supplementary Fig. 1), including the Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases) areas. The skin samples were collected from several parts of dairy cow bodies, including face, neck, shoulder, back, and nipple. All efforts were made to minimize animal stress. Dairy information on age, sex, pathologically damaged sites, and papillomatosis size was also recorded (Supplementary Table 1). 
Design of primers: The degenerate primers FAP59-FAP64 (FAP59, 5′-TAACWGTIGGICAYCCWTATT-3′ and FAP64, 5′-CCWATATCWVHCATITCICCATC-3′) were designed as described by Forslund et al. [20]. The primers were used for polymerase chain reaction (PCR) amplification of viral DNA from the skin samples. Molecular detection: Briefly, 100 mg of each collected papilloma skin lesion sample was removed using a sterile knife and placed in a tissue grinder. After adding 1 mL of physiological saline, the sample was ground. The homogenate was centrifuged at 3,000 r/min for 1 min, and 200 μL of supernatant was collected. Total DNA was isolated from the supernatant using a MiniBEST Viral DNA Extraction Kit (TaKaRa Bio, Japan) according to the manufacturer's instructions. The extracted DNA was used as the PCR template. PCR was carried out using a Bio-Rad C1000 Touch 850W Thermal Cycler (Bio-Rad, USA). The PCR reaction mix included 21 μL of water, 1 μL (0.2 μmol/L) of each FAP59-FAP64 primer, 25 μL of 2× Premix Ex Taq (TaKaRa Bio), and 2 μL of DNA template. The cycling conditions were as follows: 95°C for 5 min, followed by 30 cycles of 40 sec at 94°C, 40 sec at 64°C, and 1 min at 72°C, with a final 10 min at 72°C. The PCR products were separated on a 2% agarose gel by electrophoresis, and the gels were stained with ethidium bromide (Takara Bio). Gels were then visualized under UV light and photographed. The DNA bands were recovered using an agarose gel DNA fragment recovery kit (Promega, USA) according to the manufacturer's instructions. The purified DNA fragment was then cloned into pMD19-T (Takara Bio) for sequencing.
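The FAP59/FAP64 primers above rely on IUPAC degenerate bases (W, Y, V, H) and inosine (I) to tolerate sequence variation across papillomavirus types. As a purely illustrative aid, not part of the original study, the short Python sketch below shows one way to expand such a degenerate primer into a regular expression and check it against a candidate sequence; the target fragment used here is invented.

```python
import re

# Map IUPAC degenerate bases to regex character classes. Inosine (I) is treated
# like N here because it pairs broadly; that mapping is an assumption of this sketch.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "[AG]", "Y": "[CT]", "S": "[CG]", "W": "[AT]",
    "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
    "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]", "I": "[ACGT]",
}

def primer_to_regex(primer: str) -> re.Pattern:
    """Compile a degenerate primer sequence into a regular expression."""
    return re.compile("".join(IUPAC[base] for base in primer.upper()))

# Forward primer FAP59 as given above; FAP64 would be matched against the
# reverse complement of the target in the same way.
fap59 = primer_to_regex("TAACWGTIGGICAYCCWTATT")

# Hypothetical target fragment, used only to demonstrate the matching call.
target = "GGGTAACTGTAGGACATCCATATTACCTTAGG"
hit = fap59.search(target)
print(hit.span() if hit else "no binding site found")
```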
Sequencing and comparison of L1 gene DNA sequences: Positive clones that had been screened and verified as correct by PCR and restriction enzyme digestion were randomly chosen for DNA sequencing. When the DNA sequences of at least three clones were completely identical, they were recognized as the original sequence. Using both Clustal W (https://www.genome.jp/tools-bin/clustalw) and DNAMAN software (version 6.0; Lynnon Corporation, Canada), these DNA sequences were compared to the L1 gene sequences of BPV1-28 downloaded from GenBank (https://www.ncbi.nlm.nih.gov/), and sequence identities were calculated. Based on the nucleotide sequence of a relatively conserved region of the L1 gene, different BPV types differ by more than 10%; a difference of between 2% and 10% defines a subtype, while a variant is defined when the difference is less than 2% [9,11,21]. Furthermore, relationships between the detected BPV genotypes and age, sex, pathologically damaged sites, and papilloma size were also analyzed. Phylogenetic analysis based on the BPV L1 gene: The gene sequences of the various BPV genotypes detected in this study, together with those of BPV1-28, are listed in Supplementary Table 1. A phylogenetic tree was constructed using the neighbor-joining method in MEGA 5 software [22], with 1,000 bootstrap replicates. The genetic evolutionary relationships of the various BPV types were analyzed by examining this phylogenetic tree. Statistical analysis of data: The data were analyzed using SAS software (version 9.1; SAS Institute, Inc., USA). The detection rates of the different BPV types were compared using the χ2 test. A value of p < 0.05 was considered statistically significant, and p < 0.01 was considered highly significant. Ethics statement: The experiments were carried out in accordance with the guidelines issued by the Ethical Committee of Shihezi University (No. SHZUA2015-120).
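To make the L1-based typing rule described above concrete, the sketch below (an illustration added here, not code from the study) computes the pairwise nucleotide difference between an L1 fragment and a reference and then applies the >10% / 2–10% / <2% thresholds. It assumes pre-aligned sequences of equal length; in practice the alignment would come from Clustal W or DNAMAN, and the toy sequences are invented.

```python
def percent_difference(seq_a: str, seq_b: str) -> float:
    """Percent nucleotide difference between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a != "-" and b != "-"]          # ignore alignment gaps
    mismatches = sum(a != b for a, b in pairs)
    return 100.0 * mismatches / len(pairs)

def classify(diff_percent: float) -> str:
    """Apply the thresholds above: >10% new type, 2-10% subtype, <2% variant."""
    if diff_percent > 10.0:
        return "new type"
    if diff_percent >= 2.0:
        return "subtype"
    return "variant"

# Toy aligned L1 fragments (invented) to show the call pattern.
query = "ATGGCATTGTGGCAACAGGGC"
ref   = "ATGGCATTGTGGCAGCAGGGC"
diff = percent_difference(query, ref)
print(f"difference = {diff:.2f}%, classification: {classify(diff)}")
```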
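The phylogenetic analysis above was carried out in MEGA 5 with neighbor joining and 1,000 bootstrap replicates. As a rough, hedged equivalent (not the authors' pipeline), the sketch below builds a neighbor-joining tree from an existing L1 alignment using Biopython; the FASTA file name is hypothetical, and bootstrap support would be added in a separate step.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical FASTA alignment of L1 fragments (the Xinjiang strains plus BPV1-28 references).
alignment = AlignIO.read("bpv_L1_aligned.fasta", "fasta")

# Identity-based distance matrix followed by neighbor joining; this mirrors the
# general approach (NJ on L1 sequences) but not MEGA 5's exact substitution model.
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, "nj")
tree = constructor.build_tree(alignment)

# Bootstrap support (1,000 replicates in the study) would be computed separately.
Phylo.draw_ascii(tree)
```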
RESULTS: Papillomas on the nipples were flat, round, and lobed, whereas papillomas on the head and neck were nodular and cauliflower-like (Fig. 1). Among the 5 major cattle-farming areas, the overall BPV detection rate was 100% in the samples collected from Changji (18 cases), Shihezi (22 cases), Tacheng (37 cases), Yili (29 cases), and Aksu (16 cases); all 122 samples from dairy cows with BP-like clinical symptoms were positive by broad-spectrum PCR detection of papillomaviruses. Based on the sequencing results and the classification standard for papillomaviruses, 10 genotypes were identified (Fig. 2). The partial L1 gene sequences of the ten BPV epidemic strains have been submitted to GenBank (accession numbers: BPV1-XJ05, MT974519; BPV2-XJ18, MT974518; BPV3-XJ21, MT974517; BPV6-XJ38, MT974516; BPV7-XJ42, MT974515; BPV8-XJ76, MT974514; BPV10-XJ91, MT974513; BPV11-XJ102, MT974512; BPV13-XJ29, MT974511; BPV14-XJ82, MT974510) (Supplementary Table 2). The detection rates of the different BPV types were significantly different. Among the 122 dairy cow samples, 98 (80.33%) were infected by BPV2; 77 (63.11%) by BPV1; 39 (31.97%) by BPV8; 27 (22.13%) by BPV3; 21 (17.21%) by BPV7; 16 (13.11%) by BPV6; 6 (4.92%) by BPV11; 5 (4.10%) by BPV10; 3 (2.46%) by BPV13; and 1 (0.82%) by BPV14 (Fig. 2, Supplementary Table 1). These were the first reported detections of BPV13 and BPV14 in Xinjiang. The results indicate that BPV2 and BPV1 are the predominant genotypes circulating in dairy cows in Xinjiang province, China. As shown in Table 1, there were some geographical differences in the distribution of BPV genotypes. Among them, BPV10, BPV11, and BPV14 were detected only in Tacheng, Changji, and Shihezi, respectively, while BPV1, BPV2, and BPV3 were the most widely distributed in Xinjiang province (Table 1). Notably, molecular detection confirmed that BPV13 and BPV14 were present in Xinjiang, China, and mixed infections with different BPV genotypes were common in dairy cows (Supplementary Table 1), accounting for 95.08% (116/122) of all samples.
By contrast, the BPV detection rate in young cattle (< 1 year old) derived from the same supply of frozen semen (43.75%, 21/48) was significantly higher (p < 0.05) than that in young cattle bred naturally (14.29%, 6/42) at the same dairy farm in Tacheng (Fig. 3, Supplementary Table 3). Values are presented as number (%). BPV, bovine papillomavirus. *The asterisk indicates a significant difference. When compared with the reference strains, the L1 genes of the Xinjiang strains BPV1-XJ05, BPV2-XJ18, BPV3-XJ21, BPV6-XJ38, BPV7-XJ42, BPV8-XJ76, BPV10-XJ91, BPV11-XJ102, BPV13-XJ29, and BPV14-XJ82 shared 95.2–100% identity with the corresponding reference strains (BPV1, NC_001522; BPV2, KC256805; BPV3, AJ620207; BPV6, AJ620208; BPV7, NC_007612; BPV8, DQ098913; BPV10, AB331651; BPV11, AB543507; BPV13, JQ798171; BPV14, KR868228, respectively), indicating obvious genetic diversity. The phylogenetic tree based on the L1 gene of BPVs revealed that BPV1-XJ05, BPV2-XJ18, BPV13-XJ29, and BPV14-XJ82 were members of the Deltapapillomavirus genus, while BPV3-XJ21, BPV6-XJ38, BPV10-XJ91, and BPV11-XJ102 were classified as members of the Xipapillomavirus genus. BPV8-XJ76 was a representative of the Epsilonpapillomavirus genus, whereas BPV7-XJ42 was a Dyoxipapillomavirus genus member (Fig. 4). Based on genetic distances, BPV13-XJ29 was closely related to the BPV/BR-UEL4 strain, while BPV14-XJ82 had a close relationship with the BPV/UFPE04BR strain. DISCUSSION: The L1 protein is not only a major component of the BPV capsid but also a major protective antigen that induces neutralizing antibodies, and it is prone to mutation under immune pressure [22,23,24]. At present, 28 types of BPV have been identified based on nucleotide sequence differences in the L1 gene [1,17,25]. However, the predominant types of BPV vary among the different geographic regions of the world [12,26]. Ogawa et al. [21] investigated 122 samples of bovine tumor skin and normal healthy skin and detected 11 BPV types (BAPV1-10 and BAPV11my). Lunardi et al. [12] identified BPVs 6-10 and 2 unreported putative papillomavirus types (BPV/BR-UEL6 and BPV/BR-UEL7) in teat lesions from cattle farms in Brazil. In this study, we detected 10 genotypes of BPV (BPV1, BPV2, BPV3, BPV6, BPV7, BPV8, BPV10, BPV11, BPV13, and BPV14) in dairy cattle from intensive dairy farms in Xinjiang, China. The distribution of the BPV genotypes varied among the geographical regions of Xinjiang. The relationship between the genetic diversity of BPVs and their pathogenicity remains unclear [27]; therefore, the pathogenesis and immunity of the different BPV genotypes should be further explored in the future. It has been reported that BPV can infect cattle through various routes [2,28], among which direct contact is commonly considered responsible for transmission [1]. Viral DNA has been detected in milk, blood, urine, semen, and spermatozoa of BPV-infected animals, implying that lymphocytes, seminal fluid, and spermatozoa may have potential roles in BPV transmission [29]. In our study, it is worth noting that the BPV infection rate among young cattle (< 1 year old) derived from the same supply of frozen semen was significantly higher than that of other young cattle naturally raised in the same environment.
This high infection rate may be closely related to the extensive use of frozen semen to breed Holstein cows in Xinjiang, China, implying that artificial insemination with frozen semen may be an important route of BPV transmission in dairy cows. To date, at least 27 BPV genotypes have been identified and characterized [13,18,19]; however, numerous putative and novel BPV genotypes have already been detected in dairy cows [16]. Yamashita-Kawanishi et al. [17] identified and characterized a novel BPV (named BPV28) in a facial cutaneous papilloma lesion on Holstein dairy cattle in Japan. Herein, we report the first occurrence of BPV13 and BPV14 in Xinjiang, China. Mixed infections with different BPV genotypes were common in the same papillomatosis sample, indicating widespread genetic diversity. This study provides valuable epidemiological data for BPV in Xinjiang, China, and helps to elucidate the genetic diversity and pathogenic characteristics of BPVs in dairy cattle.
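The detection-rate comparisons above (for example, frozen-semen-derived calves at 21/48 positive versus naturally bred calves at 6/42) were made with the χ2 test named in the statistical analysis section. A minimal illustrative sketch of such a comparison, using the reported counts but not the authors' SAS code, is shown below.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = calf group, columns = (BPV-positive, BPV-negative).
# Counts are those reported above for the Tacheng farm comparison.
table = [
    [21, 48 - 21],   # calves derived from the same supply of frozen semen
    [6, 42 - 6],     # naturally bred calves on the same farm
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")   # p < 0.05, in line with the reported significance
```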
Background: Bovine papillomatosis is a proliferative tumor disease of the skin and mucosae caused by bovine papillomavirus (BPV). As a transboundary and emerging disease of cattle, it poses a potential threat to the dairy industry. Methods: A total of 122 papilloma skin lesion samples from 8 intensive dairy farms located in different regions of Xinjiang, China were tested by polymerase chain reaction. The genetic evolutionary relationships of the various BPV types were analyzed using a phylogenetic tree based on the L1 gene. Results: Ten genotypes of BPV (BPV1, BPV2, BPV3, BPV6, BPV7, BPV8, BPV10, BPV11, BPV13, and BPV14) were detected and identified in dairy cows. These were the first reported detections of BPV13 and BPV14 in Xinjiang. Mixed infections were detected, and there were geographical differences in the distribution of the BPV genotypes. Notably, the BPV infection rate among young cattle (< 1 year old) derived from the same supply of frozen semen was higher than that of other young cattle raised naturally under the same environmental conditions. Conclusions: Genotyping based on the L1 gene of BPV showed that the BPVs circulating in Xinjiang, China displayed substantial genetic diversity. This study provides valuable molecular epidemiological data that will support deeper insights into the genetic diversity and pathogenic characteristics of BPVs in dairy cows.
null
null
4,411
254
[ 146, 51, 293, 179, 68, 57, 26 ]
11
[ "bpv", "dna", "dairy", "pcr", "samples", "bpvs", "cases", "genotypes", "bio", "cattle" ]
[ "bovine papillomavirus asterisk", "deltapapillomavirus genus bpv3", "deltapapillomavirus genus bpv1", "bovine papillomavirus detection", "caused bovine papillomavirus" ]
null
null
null
[CONTENT] bovine papillomavirus | molecular detection | genetic diversity | Xinjiang, China [SUMMARY]
null
[CONTENT] bovine papillomavirus | molecular detection | genetic diversity | Xinjiang, China [SUMMARY]
null
[CONTENT] bovine papillomavirus | molecular detection | genetic diversity | Xinjiang, China [SUMMARY]
null
[CONTENT] Animals | Cattle | Cattle Diseases | Dairying | Deltapapillomavirus | Female | Genetic Variation | Papillomavirus Infections [SUMMARY]
null
[CONTENT] Animals | Cattle | Cattle Diseases | Dairying | Deltapapillomavirus | Female | Genetic Variation | Papillomavirus Infections [SUMMARY]
null
[CONTENT] Animals | Cattle | Cattle Diseases | Dairying | Deltapapillomavirus | Female | Genetic Variation | Papillomavirus Infections [SUMMARY]
null
[CONTENT] bovine papillomavirus asterisk | deltapapillomavirus genus bpv3 | deltapapillomavirus genus bpv1 | bovine papillomavirus detection | caused bovine papillomavirus [SUMMARY]
null
[CONTENT] bovine papillomavirus asterisk | deltapapillomavirus genus bpv3 | deltapapillomavirus genus bpv1 | bovine papillomavirus detection | caused bovine papillomavirus [SUMMARY]
null
[CONTENT] bovine papillomavirus asterisk | deltapapillomavirus genus bpv3 | deltapapillomavirus genus bpv1 | bovine papillomavirus detection | caused bovine papillomavirus [SUMMARY]
null
[CONTENT] bpv | dna | dairy | pcr | samples | bpvs | cases | genotypes | bio | cattle [SUMMARY]
null
[CONTENT] bpv | dna | dairy | pcr | samples | bpvs | cases | genotypes | bio | cattle [SUMMARY]
null
[CONTENT] bpv | dna | dairy | pcr | samples | bpvs | cases | genotypes | bio | cattle [SUMMARY]
null
[CONTENT] genus | bpvs | cattle | dairy | bp | xinjiang | bovine | genotypes bpvs | bpv | unclassified [SUMMARY]
null
[CONTENT] 122 infected | infected | 122 | bpv13 | bpv14 | bpv | bpv2 | bpv1 | bpv10 | bpv3 [SUMMARY]
null
[CONTENT] bpv | dna | dairy | cases | bpvs | pcr | samples | bio | cattle | genotypes [SUMMARY]
null
[CONTENT] mucosae ||| [SUMMARY]
null
[CONTENT] Ten | BPV11 ||| first | Xinjiang ||| BPV | 1-year-old [SUMMARY]
null
[CONTENT] ||| mucosae ||| ||| 122 | 8 | Xinjiang | China ||| ||| Ten | BPV11 ||| first | Xinjiang ||| BPV | 1-year-old ||| L1 | Xinjiang | China ||| [SUMMARY]
null
Viral miRNAs in plasma and urine divulge JC polyomavirus infection.
25178457
JC polyomavirus (JCPyV) is a widespread human polyomavirus that usually resides latently in its host, but can be reactivated under immune-compromised conditions potentially causing Progressive Multifocal Leukoencephalopathy (PML). JCPyV encodes its own microRNA, jcv-miR-J1.
BACKGROUND
We have investigated in 50 healthy subjects whether jcv-miR-J1-5p (and its variant jcv-miR-J1a-5p) can be detected in plasma or urine.
METHODS
We found that the overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine. Subjects were further categorized based on JCPyV VP1 serology status and viral shedding. In seronegative subjects, JCPyV miRNA was found in 86% (12/14) and 57% (8/14) of plasma and urine samples, respectively. In seropositive subjects, the detection rate was 69% (25/36) and 64% (23/36) for plasma and urine, respectively. Furthermore, in seropositive subjects shedding virus in urine, higher levels of urinary viral miRNAs were observed, compared to non-shedding seropositive subjects (P < 0.001). No correlation was observed between urinary and plasma miRNAs.
RESULTS
These data indicate that analysis of circulating viral miRNAs divulge the presence of latent JCPyV infection allowing further stratification of seropositive individuals. Also, our data indicate higher infection rates than would be expected from serology alone.
CONCLUSION
[ "Adult", "Female", "Humans", "JC Virus", "Male", "MicroRNAs", "Middle Aged", "Polyomavirus Infections", "RNA, Viral", "Sensitivity and Specificity", "Tumor Virus Infections", "Viral Load", "Virus Shedding", "Young Adult" ]
4168162
Introduction
The human JC polyomavirus (JCPyV) is the etiological agent of Progressive Multifocal Leukoencephalopathy (PML), a demyelinating disease of the brain caused by lytic infection of oligodendrocytes upon viral reactivation [1]. JCPyV is a circular double-stranded DNA virus with very restricted cellular tropism, infecting oligodendrocytes, astrocytes, kidney epithelial cells and peripheral blood cells [2, 3]. It is thought that infection usually occurs asymptomatically in childhood, after which the virus remains latent in the body [4–6]. Under certain immunocompromising conditions, such as treatment with immunomodulatory drugs (e.g. natalizumab) or infection with Human Immunodeficiency Virus (HIV), the virus can be reactivated and actively replicate into the brain, leading to PML. Current risk assessment for development of PML is mainly based on the detection of antibodies against VP1, the major capsid protein and the detection of viral DNA in urine (viruria). It has been reported that 50 to 80% of humans are seropositive for JCPyV and approximately one fifth of the population sheds JCPyV in the urine [7–13]. Detection of viral DNA in plasma (viremia) is very rare and has been shown not to be useful for predicting PML risk [14–16]. Recently it was shown, however, that viral DNA can be detected in CD34+ or CD19+ cells, with an increased detection rate in Multiple Sclerosis (MS) patients treated with natalizumab [3]. As the risk of developing PML increases upon prolonged use of natalizumab, current treatment guidelines recommend discontinuation of therapy after the second year, particularly in JCPyV seropositive patients [17]. Given the high prevalence of JCPyV antibodies, a large number of patients are advised to discontinue therapy. Although most, if not all, PML patients are seropositive or show seroconversion before diagnosis of PML, the incidence of PML in natalizumab-treated MS patients is not more than 1.1% in the highest risk subgroup, indicating that not all seropositive subjects have the same risk of developing PML [18]. Moreover, the introduction of a risk stratification algorithm, predominantly based on JCPyV serology, has not led to a reduction of PML incidence in natalizumab-treated MS patients [19]. Therefore, development of new tools for improved risk stratification is warranted, as this might justify continued therapy for many MS patients and better identify those patients who are really at risk of developing PML. MicroRNAs (miRNAs) are small, non-coding RNAs that play an important role in fine-tuning the expression of specific gene products through translational repression and/or mRNA degradation and as such are implicated in many diseases [20–22]. Cellular miRNAs can also be released in small vesicles, such as exosomes and the levels of these extracellular miRNAs in biological fluids have become very valuable markers of several diseases, such as cancer, Alzheimer’s disease and diabetes [23–25]. In the context of JCPyV, it was shown that there does not appear to be a relationship between circulating human miRNAs and the presence of anti-VP1 antibodies or urinary viral load [26]. Several viruses encode their own sets of miRNAs, which can have self-regulatory or host modulating roles [22]. Also JCPyV, as well as other polyomaviruses, encodes its own unique microRNA that is produced as part of the late transcript in an infected host cell [27, 28]. 
These microRNAs are thought to play an important role in controlling viral replication through downregulation of Large T-Antigen expression, but also in controlling NKG2D-mediated killing of virus-infected cells by natural killer (NK) cells through downregulation of the stress-induced ligand ULBP3 [28, 29]. The diagnostic potential of circulating viral miRNAs has already been investigated for Epstein-Barr virus and Kaposi’s sarcoma-associated herpesvirus (KSHV), where they might represent potential markers for virus associated malignancies [30, 31]. Also JCPyV miRNA has been shown recently to be a useful biomarker for JCPyV infection in the gastrointestinal tract [32]. In this study we have investigated whether JCPyV-encoded miRNAs could be detected in plasma or urine of healthy subjects and whether the presence of these miRNAs was related to VP1 serology or urinary viral load. We demonstrate that these viral miRNAs can indeed be detected in plasma or urine and that they might be useful markers for JCPyV infection.
null
null
Results
Assay linearity and specificity: Plasma or urine levels of JCPyV miRNA were analyzed using stem-loop RT followed by TaqMan PCR analysis [33]. Since the 3p miRNA of JCPyV is identical to the 3p miRNA of BKPyV, only the 5p miRNAs were investigated [28]. As we previously identified a variant of JCV-miR-J1-5p bearing one nucleotide difference compared to the miRNA described in miRBase (designated JCV-miR-J1a-5p), assays specifically designed for both variants were used, as well as an assay detecting the closely related BKPyV 5p miRNA (Figure 1A-C) [26]. To evaluate the specificity of the assays, standard curves were prepared for each miRNA (JCV-miR-J1-5p, JCV-miR-J1a-5p and BKV-miR-B1-5p) and analyzed using the three specific assays (Figure 1D-F). Relative detection efficiency was calculated from the difference in quantification cycle (Cq) between the specific assay and the non-specific assay, using samples containing 2 × 10^4 copies/μL of the individual miRNAs (Figure 1G). Only marginal cross-reaction was observed for most combinations, but substantial cross-reaction was observed between JCV-miR-J1a-5p and BKV-miR-B1-5p. Therefore, for every single clinical sample (plasma or urine), the contribution of non-specific amplification was calculated based on these standard curves. For JCV-miR-J1-5p and JCV-miR-J1a-5p it was confirmed that the contribution of non-specific signal was negligible compared to the specific signal (Additional file 1: Table S1 and Additional file 2: Figure S1). For the BKV-miR-B1-5p assay, however, the calculated contribution of JCV-miR-J1a-5p was in most cases similar to the measured levels, indicating that the BKV-miR-B1-5p assay was in most cases actually detecting JCV-miR-J1a-5p. Consequently, we can conclude that in most samples BKV-miR-B1-5p is absent or present at such low levels that it does not interfere with the interpretation of the JCV-miR-J1-5p and JCV-miR-J1a-5p analyses. Figure 1. Validation of the stem-loop RT-PCR miRNA assays. (A) Genomic location of the JCPyV/BKPyV-encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer. (C) Sequence comparison of the three miRNAs investigated. (D-F) Assay linearity and specificity were evaluated with dilution series of three synthetic miRNAs (miR-J1-5p, miR-J1a-5p and miR-B1-5p); each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by the miR-J1-5p assay, miR-J1a-5p by the miR-J1a-5p assay, and miR-B1-5p by the miR-B1-5p assay at concentration levels of 2 × 10^4 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.
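The cross-reactivity correction described above works from the assay standard curves: the specific assay's Cq is converted to a copy number, and that copy number is pushed through the non-specific assay's standard curve to predict the Cq it alone would generate. The sketch below illustrates that logic with invented slope/intercept data, not the study's actual calibration values.

```python
import numpy as np

def fit_standard_curve(log10_copies, cq_values):
    """Linear fit of Cq versus log10(copies/μL); returns (slope, intercept)."""
    slope, intercept = np.polyfit(log10_copies, cq_values, 1)
    return slope, intercept

def cq_to_copies(cq, slope, intercept):
    return 10 ** ((cq - intercept) / slope)

def copies_to_cq(copies, slope, intercept):
    return slope * np.log10(copies) + intercept

# Hypothetical dilution series (2e5 .. 2 copies/μL) and Cq readings.
log10_copies = np.log10([2e5, 2e4, 2e3, 2e2, 2e1, 2])
cq_specific = np.array([22.1, 25.4, 28.8, 32.2, 35.5, 38.9])   # specific assay
cq_cross = np.array([30.0, 33.4, 36.7, 40.0, 43.3, 46.7])       # non-specific assay (weaker)

spec = fit_standard_curve(log10_copies, cq_specific)
cross = fit_standard_curve(log10_copies, cq_cross)

measured_cq = 27.0                                   # Cq observed with the specific assay
copies = cq_to_copies(measured_cq, *spec)            # concentration of the specific miRNA
predicted_cross_cq = copies_to_cq(copies, *cross)    # Cq this amount alone would give in the other assay
print(f"{copies:.0f} copies/uL; predicted cross-assay Cq = {predicted_cross_cq:.1f}")
# If the predicted Cq exceeds the 40-cycle cut-off, the non-specific contribution
# is considered negligible, as described for Additional file 2.
```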
Analysis of JCPyV miRNA levels in plasma and urine: Plasma and urine samples were collected from 50 healthy subjects (HS), and JCPyV VP1 serology and both plasma and urinary viral loads were determined (Additional file 3: Table S2).
Based on these parameters, subjects were categorized as Ab-, Ab+VL- or Ab+VL+ (Table 1). Among these 50 HS, JCPyV VP1 antibody prevalence was 72% (36/50), and 36% (13/36) of this group were also shedding virus in their urine, representing 26% (13/50) of the total study population. In the JCPyV VP1 seronegative group, no urinary virus shedding was observed. We found no subjects with detectable JCPyV DNA in plasma, similar to previous observations [16].
Table 1. Overview of subjects investigated (healthy subjects, n = 50)
Gender, n (%): male 22 (44%); female 28 (56%)
Age, median (min-max): 40.5 (23-59)
Race/ethnicity, n (%): White 38 (76%); Black 3 (6%); Asian 8 (16%); other/unknown 1 (2%)
VP1 serology, n (%): positive 36 (72%); negative 14 (28%)
Viruria, n (%): positive* 13 (26%); negative 37 (74%)
Viremia, n (%): positive 0 (0%); negative 50 (100%)
*All viruric subjects were part of the VP1 seropositive subgroup.
Total RNA, including microRNAs, was isolated from both urine and plasma, and the level of JCPyV 5p miRNA was quantified in all samples (Figure 2). The overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine (Figure 3A). Further analysis of the different subgroups showed that JCPyV miRNA was detected in plasma or urine of HS from all subgroups (Ab-, Ab+VL- or Ab+VL+) at similar detection rates (P > 0.05 between the subgroups for both plasma and urine). In the seropositive group, JCPyV 5p miRNA was detected in plasma of 69% (25/36) of subjects and in urine of 64% (23/36) of subjects. Remarkably, also in the seronegative group, JCPyV 5p miRNA was detected in plasma of 86% (12/14) of subjects and in urine of 57% (8/14) of subjects. These detection rates were not statistically different from those in seropositive subjects (P > 0.05 for both plasma and urine). Figure 2. Individual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject. Figure 3. JCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In panels C and D, when both variants were detected, the sum of both levels is presented.
Quantitative analysis of JCPyV 5p miRNA indicated that plasma levels of JCPyV 5p miRNA were similar in the three subgroups (Figure 3C). In urine, significantly higher levels (P < 0.001) were observed in the subgroup shedding JCPyV in their urine compared to the subgroup not shedding JCPyV (Figure 3D). No correlation was observed between plasma and urine levels of JCPyV 5p miRNA (Figure 3B). Remarkably, while in plasma higher levels of JCV-miR-J1-5p than of JCV-miR-J1a-5p were detected, in urine both variants were detected at similar levels (Figure 2). Also, the identity of the miRNAs (J1 or J1a variant) was not correlated between urine and plasma. Comparison of JCPyV 5p miRNA levels with JCPyV VP1 serology or urinary viral load showed that, specifically in the subgroup of JCPyV shedders (Ab+VL+), a good correlation (P < 0.005, r = 0.78) exists between miRNA levels and antibody levels, as well as a moderate correlation (P = 0.07, r = 0.57) with urinary viral load (Figure 4A-B). No correlation could be observed between plasma miRNA levels and any other parameter analyzed (Figure 4C-D). Figure 4. Viral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl); shedders are indicated in red, and the P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl); shedders are indicated in red, and the P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.
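The group comparisons above follow the statistical plan described in the methods: Fisher's test for detection rates, Mann-Whitney for miRNA levels, and linear regression for correlations. The sketch below illustrates those three calls; the detection-rate counts are taken from the text, while the level and antibody vectors are invented values used only to show the call pattern, not the study's data.

```python
import numpy as np
from scipy.stats import fisher_exact, mannwhitneyu, linregress

# Detection rates in plasma: seropositive 25/36 vs seronegative 12/14 (counts from the text).
odds_ratio, p_rate = fisher_exact([[25, 36 - 25], [12, 14 - 12]])

# miRNA levels (log copies/mL) in shedders vs non-shedders (invented values for illustration).
shedders = np.array([4.1, 4.5, 3.9, 4.8, 4.2])
non_shedders = np.array([2.9, 3.1, 2.7, 3.3, 3.0])
u_stat, p_levels = mannwhitneyu(shedders, non_shedders, alternative="two-sided")

# Correlation between urinary miRNA level and antibody level (invented values).
antibody = np.array([2.0, 2.5, 3.1, 3.8, 4.4])
reg = linregress(antibody, shedders)

print(f"Fisher p = {p_rate:.3f}; Mann-Whitney p = {p_levels:.3f}; r = {reg.rvalue:.2f}")
```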
Conclusions
Based on the work described here, we conclude that JCPyV miRNAs can be detected in plasma and urine of healthy subjects, allowing further stratification of seropositive individuals. Furthermore, our data indicate higher infection rates than would be expected from serology alone.
[ "Assay linearity and specificity", "Analysis of JCPyV miRNA levels in plasma and urine", "Ethics statement", "Healthy subject samples", "JC polyomavirus viral load assay", "JC polyomavirus VP1 serology assay", "Synthetic microRNA molecules and generation of miRNA standard curves", "Analysis of viral miRNAs", "Statistical analysis", "" ]
[ "Plasma or urine levels of JCPyV miRNA were analyzed using stem–loop RT followed by TaqMan PCR analysis\n[33]. Since the 3p miRNA of JCPyV is identical to the 3p miRNA of BKPyV, only 5p miRNAs were investigated\n[28]. As we previously identified a variant of the JCV-miR-J1-5p bearing one nucleotide difference compared to the miRNA described in miRBase (designated JCV-miR-J1a-5p), assays specifically designed for both variants were used, as well as an assay detecting the closely related BKPyV 5p miRNA (Figure \n1A-C)\n[26]. To evaluate the specificity of the assays, standard curves were prepared of each miRNA (JCV-miR-J1-5p, JCV-miR-J1a-5p and BKV-miR-B1-5p) and analyzed using the three specific assays (Figure \n1D-F). Relative detection efficiency was calculated from the difference of quantification cycle (Cq) between the specific assay and the non-specific assay, using samples containing 2.104 copies/μL of the individual miRNAs (Figure \n1G). Only marginal cross reaction was observed for most combinations, but a substantial cross reaction was observed between JCV-miR-J1a-5p and BKV-miR-B1-5p. Therefore, for every single clinical sample (plasma or urine), the contribution of non-specific amplification was calculated based on these standard curves. For JCV-miR-J1-5p and JCV-miR-J1a-5p it was confirmed that the contribution of non-specific signal was negligible compared to the specific signal (Additional file\n1: Table S1 and Additional file\n2: Figure S1). For the BKV-miR-B1-5p assay, however, the calculated contribution of JCV-miR-J1a-5p in most cases was similar to the measured levels, indicating that the BKV-miR-B1-5p assay in most cases actually was detecting JCV-miR-J1a-5p. Consequently, we can conclude that in most samples BKV-miR-B1-5p is absent or present at such low levels that it does not interfere in the interpretation of the JCV-miR-J1-5p and JCV-miR-J1a-5p analyses.Figure 1\nValidation of the stem-loop RT-PCR miRNA assays.\n(A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.\n\nValidation of the stem-loop RT-PCR miRNA assays.\n(A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. 
(G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.", "Plasma and urine samples were collected from 50 healthy subjects (HS) and JCPyV VP1 serology and both plasma and urinary viral load was determined (Additional file\n3: Table S2). Based on these parameters, subjects were categorized as Ab-, Ab+VL- or Ab+VL+ (Table \n1). Among these 50 HS JCPyV VP1 antibody prevalence was 72% (36/50) and 36% (13/36) of this group were also shedding virus in their urine, representing 26% (13/50) of the total study population. In the JCPyV VP1 seronegative group, no urinary virus shedding was observed. We found no subjects with detectable JCPyV DNA in plasma, similar to previous observations\n[16].Table 1\nOverview of subjects investigated\nVariableHealthy subjects (n = 50)Gender, n (%)Male22 (44%)Female28 (56%)Age, median (Min-Max)40.5 (23-59)Race, ethnicity, n (%)White38 (76%)Black3 (6%)Asian8 (16%)Other/unknown1 (2%)VP1 serology, n (%)Positive36 (72%)Negative14 (28%)Viruria, n (%)Positive*13 (26%)Negative37 (74%)Viremia, n (%)Positive0 (0%)Negative50 (100%)*All viruric subjects were part of the VP1 seropositive subgroup.\n\nOverview of subjects investigated\n\n*All viruric subjects were part of the VP1 seropositive subgroup.\nTotal RNA, including microRNAs was isolated from both urine and plasma and the level of JCPyV 5p miRNA was quantified in all samples (Figure \n2). The overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine (Figure \n3A). Further analysis of the different subgroups shows that JCPyV miRNA was detected in plasma or urine from HS from all subgroups (Ab-, Ab+ VL- or Ab+VL+) at similar detection rates (P > 0.05 between the subgroups for both plasma and urine). In the seropositive group, JCPyV 5p miRNA was detected in plasma of 69% (25/36) of subjects and in urine of 64% (23/36) of subjects. Remarkably, also in the seronegative group, JCPyV 5p miRNA was detected in plasma of 86% (12/14) of subjects and in urine of 57% (8/14) of subjects. These detection rates were not statistically different compared to those in seropositive subjects (P > 0.05 both for plasma and urine).Figure 2\nIndividual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.Figure 3\nJCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented.\n\nIndividual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.\n\nJCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. 
(A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented.\nQuantitative analysis of JCPyV 5p miRNA indicated plasma levels of JCPyV 5p miRNA were similar in the three different subgroups (Figure \n3C). In urine, significantly higher levels (P < 0.001) were observed in the subgroup shedding JCPyV in their urine compared to the subgroup not shedding JCPyV in their urine (Figure \n3D). No correlation was observed between plasma levels and urine levels of JCPyV 5p miRNA (Figure \n3B). Remarkably, while in plasma higher levels of JCV-miR-J1-5p were detected than JCV-miR-J1a-5p, in urine both variants were detected at similar levels (Figure \n2). Also, the identity of the miRNAs (J1 or J1a variant) was not correlated between urine and plasma. Comparison of JCPyV 5p miRNA levels with JCPyV VP1 serology or urinary viral load showed that, specifically in the subgroup of JCPyV shedders (Ab+VL+) a good correlation (P < 0.005, r = 0.78) exists between miRNA levels and antibody levels, as well as a moderate correlation (P = 0.07, r = 0.57) with urinary viral load (Figure \n4A-B). No correlation could be observed between plasma miRNA levels and any other parameter analyzed (Figure \n4C-D).Figure 4\nViral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.\n\nViral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.", "The Ethics Committee [“Commissie voor Medische Ethiek - ZiekenhuisNetwerk Antwerpen (ZNA) and the Ethics committee University Hospital Antwerp] approved the Protocol, and Informed consent, which were signed by all subjects.", "A total of 50 healthy subjects (HS) were recruited in Belgium for this study\n[10, 26, 49]. The demographic description of the HS population is presented in Table \n1. 
Plasma samples and urine samples were collected from all these HS and stored at -80°C until further processing.", "Analysis of the urinary viral load was performed as described previously\n[10]. Analysis of the plasma viral load was performed similarly, with the exception that 200 μL plasma was used for DNA extraction.", "The anti-JCPyV antibody assay was performed as described earlier\n[26]. Samples were considered positive if OD values were higher than 2-fold the OD value of the blank sample (i.e. log2 test/ctrl > 1).", "Three RNase-free 5’-phosphorylated miRNA oligoribonucleotides were synthesized (Integrated DNA Technologies) for the validation of the miRNA assays, corresponding to jcv-miR-J1-5p (5’-phospho-UUCUGAGACCUGGGAAAAGCAU-OH-3’), jcv-miR-J1a-5p (5’-phospho-UUCUGAGACCUGGGAAGAGCAU-OH-3’) and bkv-miR-B1-5p (5’-phospho-AUCUGAGACUUGGGAAGAGCAU-OH-3’). Stock solutions of 100 μM synthetic oligonucleotide in RNase-free and DNase-free water were prepared according to the concentrations and sample purity quoted by the manufacturer (based on spectrophotometric analysis). These stock solutions were diluted to a concentration of approximately 3.32 pM, corresponding to 2×10⁵ copies/μL. A total of six serial 10-fold dilutions were prepared, starting from 2×10⁵ copies/μL down to 2 copies/μL and additional no template controls (NTC; zero copies) were examined. Dilution series of each of the synthetic miRNAs were made in RNase-free and DNase-free water.", "Levels of circulating viral miRNAs were determined by means of quantitative reverse transcriptase PCR (qRT-PCR) with hydrolysis probe based miRNA assays, purchased from Life Technologies. Specific assays were used for jcv-miR-J1-5p (Assay ID 007464_mat), jcv-miR-J1a-5p (Custom Assay ID CS39QVQ) and bkv-miR-B1-5p (Assay ID 007796_mat). As extraction and reverse transcription efficacy control, assays specific for human miRNAs hsa-miR-26a (assay ID 000405), hsa-miR-30b (assay ID 002129) and mmu-miR-93 (assay ID 001090) were included in the analysis.\nBriefly, RNA was isolated from 200 μL plasma or urine using the miRNeasy kit (Qiagen) according to the manufacturer’s instructions. Three μL of total RNA (representing 20% of total RNA extract) or synthetic miRNA solution was reverse transcribed using the pooled RT stem-loop primers (Life Technologies), enabling miRNA specific cDNA synthesis. Subsequently, 2.5 μL of the RT product (representing 1/6 of total RT product) was pre-amplified by means of a 12-cycle PCR reaction with a pool of miRNA specific forward primer and universal reverse primer to increase detection sensitivity. Diluted (1:8) pre-amplified miRNA cDNA was then used as input for a 45-cycle qPCR reaction with miRNA specific hydrolysis probes and primers (Life Technologies). For analysis of the viral miRNAs, 2 μL of input was used. For analysis of the human miRNAs, 2 μL of input was used for urine derived miRNAs and 0.2 μL was used for plasma derived miRNAs. All reactions were performed in duplicate on the LightCycler® 480 instrument (Roche Applied Science). Quantification cycle (Cq) values were calculated using the 2nd Derivative method with a detection cut-off of 40 cycles. Only samples with quantifiable Cq values for both duplicates were considered positive. Absolute miRNA levels in plasma or urine were calculated based on the standard curves for each specific miRNA assay. Limit of quantification (LOQ) was defined as the miRNA concentration corresponding to a Cq value of 40, based on the standard curve for the specific miRNA. 
For each sample, average Cq value of the 3 human control miRNAs was calculated and possible outliers were identified using Grubbs’ test (using a significance level of 0.05). In case outliers were detected, results of the corresponding viral miRNAs were not included for further analysis.", "Differences in relative occurrence of viral miRNAs between groups were assessed using a Fisher’s test. Differences between groups were considered statistically significant at P < 0.05. Differences in miRNA levels between groups were assessed using a Mann-Whitney test. Differences between groups were considered statistically significant at P < 0.05. Correlation between different parameters was analyzed using linear regression. P-value was calculated to determine whether slope was significantly non-zero and strength of correlation was determined using r-value. All statistical analyses were performed using GraphPad Prism v 5.04.", "Additional file 1: Table S1: Calculated contributions of non-specific detection. Based on the calibration curves for each specific assay, the concentration of the specific miRNA in every sample is calculated. Based on that concentration and the calibration curves for the non-specific assays, a calculated Cq value is then determined for each of the other assays. In case this calculated Cq value is higher than 40, it is assumed that there is no contribution of non-specific detection of that particular miRNA in that particular assay. This approach is also visually presented in Additional file\n2: Figure S1. (ZIP 28 KB)\nAdditional file 2: Figure S1: Calculation of the contribution of non-specific detection. Based on the standard curves the contribution of non-specific detection of a miRNA on the Cq value of another miRNA assay is calculated. First, based on the Cq value obtained from the specific assay and its specific standard curve, the miRNA level is quantified (indicated by “1”). Subsequently, it is calculated what the Cq value would be in the non-specific assays using extrapolation of the non-specific standard curves (indicated by “2”). (PNG 215 KB)\nAdditional file 3: Table S2: Individual results of JCPyV VP1 antibody levels, urinary viral load and plasma viral load. (ZIP 12 KB)" ]
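The absolute quantification step described above (sample Cq values mapped onto synthetic-miRNA standard curves, with the limit of quantification fixed at the Cq 40 cut-off) can be summarised in a short sketch. The snippet below is only an illustration and is not taken from the study; the Cq readings and the scale_factor parameter are hypothetical assumptions.

```python
import numpy as np

def fit_standard_curve(copies_per_ul, cq_values):
    """Fit Cq = slope * log10(copies/uL) + intercept to a synthetic-miRNA dilution series."""
    slope, intercept = np.polyfit(np.log10(copies_per_ul), cq_values, 1)
    return slope, intercept

def cq_to_copies_per_ul(cq, slope, intercept):
    """Invert the standard curve: copies/uL of extract corresponding to an observed Cq."""
    return 10.0 ** ((cq - intercept) / slope)

def copies_per_ml_fluid(copies_per_ul_extract, scale_factor):
    """Convert extract concentration to copies/mL of plasma or urine.
    scale_factor bundles the extraction, RT, pre-amplification and input-volume
    fractions; it is an assumed parameter, not a value stated in the text."""
    return copies_per_ul_extract * scale_factor

# Hypothetical dilution series (2x10^5 down to 2 copies/uL) and made-up Cq readings.
standards = np.array([2e5, 2e4, 2e3, 2e2, 2e1, 2e0])
cq_readings = np.array([22.3, 25.7, 29.1, 32.5, 35.9, 39.3])
slope, intercept = fit_standard_curve(standards, cq_readings)

loq_copies_per_ul = cq_to_copies_per_ul(40.0, slope, intercept)     # LOQ at the Cq 40 cut-off
sample_copies_per_ul = cq_to_copies_per_ul(36.2, slope, intercept)  # one hypothetical sample
```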
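The statistical workflow (Fisher's test on detection rates, Mann-Whitney tests on miRNA levels, linear regression for correlations, and Grubbs' test on the three human control miRNAs) was run in GraphPad Prism; an equivalent open-source sketch could look as follows. The 2x2 table reuses the plasma detection counts reported above, while the level and correlation arrays are made-up placeholders.

```python
import numpy as np
from scipy import stats

# Detection rates between two groups, e.g. plasma miRNA positivity in
# seropositive (25/36) versus seronegative (12/14) subjects.
odds_ratio, p_detection = stats.fisher_exact([[25, 36 - 25], [12, 14 - 12]])

# miRNA levels (log copies/mL) between two groups; these arrays are made up.
shedders = np.array([4.1, 3.8, 4.6, 5.0, 3.9])
non_shedders = np.array([2.9, 3.1, 2.7, 3.3, 3.0])
u_stat, p_levels = stats.mannwhitneyu(shedders, non_shedders, alternative="two-sided")

# Correlation between two continuous parameters, e.g. urinary miRNA level versus
# VP1 serology; linregress returns the slope, r value and the P value testing a
# non-zero slope. Values here are illustrative only.
urine_mirna = np.array([3.2, 3.9, 4.4, 4.8, 5.1])
vp1_serology = np.array([2.1, 2.9, 3.5, 4.2, 4.6])
reg = stats.linregress(urine_mirna, vp1_serology)

def grubbs_outlier(values, alpha=0.05):
    """Two-sided Grubbs' test for a single outlier; returns its index or None."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    deviations = np.abs(x - x.mean())
    g = deviations.max() / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
    return int(deviations.argmax()) if g > g_crit else None

# QC of the three human control miRNAs for one sample (hypothetical Cq values).
outlier_index = grubbs_outlier([24.8, 25.1, 31.9])
```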
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Results", "Assay linearity and specificity", "Analysis of JCPyV miRNA levels in plasma and urine", "Discussion", "Conclusions", "Materials and methods", "Ethics statement", "Healthy subject samples", "JC polyomavirus viral load assay", "JC polyomavirus VP1 serology assay", "Synthetic microRNA molecules and generation of miRNA standard curves", "Analysis of viral miRNAs", "Statistical analysis", "Electronic supplementary material", "" ]
[ "The human JC polyomavirus (JCPyV) is the etiological agent of Progressive Multifocal Leukoencephalopathy (PML), a demyelinating disease of the brain caused by lytic infection of oligodendrocytes upon viral reactivation\n[1]. JCPyV is a circular double-stranded DNA virus with very restricted cellular tropism, infecting oligodendrocytes, astrocytes, kidney epithelial cells and peripheral blood cells\n[2, 3]. It is thought that infection usually occurs asymptomatically in childhood, after which the virus remains latent in the body\n[4–6]. Under certain immunocompromising conditions, such as treatment with immunomodulatory drugs (e.g. natalizumab) or infection with Human Immunodeficiency Virus (HIV), the virus can be reactivated and actively replicate into the brain, leading to PML. Current risk assessment for development of PML is mainly based on the detection of antibodies against VP1, the major capsid protein and the detection of viral DNA in urine (viruria). It has been reported that 50 to 80% of humans are seropositive for JCPyV and approximately one fifth of the population sheds JCPyV in the urine\n[7–13]. Detection of viral DNA in plasma (viremia) is very rare and has been shown not to be useful for predicting PML risk\n[14–16]. Recently it was shown, however, that viral DNA can be detected in CD34+ or CD19+ cells, with an increased detection rate in Multiple Sclerosis (MS) patients treated with natalizumab\n[3]. As the risk of developing PML increases upon prolonged use of natalizumab, current treatment guidelines recommend discontinuation of therapy after the second year, particularly in JCPyV seropositive patients\n[17]. Given the high prevalence of JCPyV antibodies, a large number of patients are advised to discontinue therapy. Although most, if not all, PML patients are seropositive or show seroconversion before diagnosis of PML, the incidence of PML in natalizumab-treated MS patients is not more than 1.1% in the highest risk subgroup, indicating that not all seropositive subjects have the same risk of developing PML\n[18]. Moreover, the introduction of a risk stratification algorithm, predominantly based on JCPyV serology, has not led to a reduction of PML incidence in natalizumab-treated MS patients\n[19]. Therefore, development of new tools for improved risk stratification is warranted, as this might justify continued therapy for many MS patients and better identify those patients who are really at risk of developing PML.\nMicroRNAs (miRNAs) are small, non-coding RNAs that play an important role in fine-tuning the expression of specific gene products through translational repression and/or mRNA degradation and as such are implicated in many diseases\n[20–22]. Cellular miRNAs can also be released in small vesicles, such as exosomes and the levels of these extracellular miRNAs in biological fluids have become very valuable markers of several diseases, such as cancer, Alzheimer’s disease and diabetes\n[23–25]. In the context of JCPyV, it was shown that there does not appear to be a relationship between circulating human miRNAs and the presence of anti-VP1 antibodies or urinary viral load\n[26].\nSeveral viruses encode their own sets of miRNAs, which can have self-regulatory or host modulating roles\n[22]. Also JCPyV, as well as other polyomaviruses, encodes its own unique microRNA that is produced as part of the late transcript in an infected host cell\n[27, 28]. 
These microRNAs are thought to play an important role in controlling viral replication through downregulation of Large T-Antigen expression, but also in controlling NKG2D-mediated killing of virus-infected cells by natural killer (NK) cells through downregulation of the stress-induced ligand ULBP3\n[28, 29].\nThe diagnostic potential of circulating viral miRNAs has already been investigated for Epstein-Barr virus and Kaposi’s sarcoma-associated herpesvirus (KSHV), where they might represent potential markers for virus associated malignancies\n[30, 31]. Also JCPyV miRNA has been shown recently to be a useful biomarker for JCPyV infection in the gastrointestinal tract\n[32]. In this study we have investigated whether JCPyV-encoded miRNAs could be detected in plasma or urine of healthy subjects and whether the presence of these miRNAs was related to VP1 serology or urinary viral load. We demonstrate that these viral miRNAs can indeed be detected in plasma or urine and that they might be useful markers for JCPyV infection.", " Assay linearity and specificity Plasma or urine levels of JCPyV miRNA were analyzed using stem–loop RT followed by TaqMan PCR analysis\n[33]. Since the 3p miRNA of JCPyV is identical to the 3p miRNA of BKPyV, only 5p miRNAs were investigated\n[28]. As we previously identified a variant of the JCV-miR-J1-5p bearing one nucleotide difference compared to the miRNA described in miRBase (designated JCV-miR-J1a-5p), assays specifically designed for both variants were used, as well as an assay detecting the closely related BKPyV 5p miRNA (Figure \n1A-C)\n[26]. To evaluate the specificity of the assays, standard curves were prepared of each miRNA (JCV-miR-J1-5p, JCV-miR-J1a-5p and BKV-miR-B1-5p) and analyzed using the three specific assays (Figure \n1D-F). Relative detection efficiency was calculated from the difference of quantification cycle (Cq) between the specific assay and the non-specific assay, using samples containing 2.104 copies/μL of the individual miRNAs (Figure \n1G). Only marginal cross reaction was observed for most combinations, but a substantial cross reaction was observed between JCV-miR-J1a-5p and BKV-miR-B1-5p. Therefore, for every single clinical sample (plasma or urine), the contribution of non-specific amplification was calculated based on these standard curves. For JCV-miR-J1-5p and JCV-miR-J1a-5p it was confirmed that the contribution of non-specific signal was negligible compared to the specific signal (Additional file\n1: Table S1 and Additional file\n2: Figure S1). For the BKV-miR-B1-5p assay, however, the calculated contribution of JCV-miR-J1a-5p in most cases was similar to the measured levels, indicating that the BKV-miR-B1-5p assay in most cases actually was detecting JCV-miR-J1a-5p. Consequently, we can conclude that in most samples BKV-miR-B1-5p is absent or present at such low levels that it does not interfere in the interpretation of the JCV-miR-J1-5p and JCV-miR-J1a-5p analyses.Figure 1\nValidation of the stem-loop RT-PCR miRNA assays.\n(A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. 
(G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.\n\nValidation of the stem-loop RT-PCR miRNA assays.\n(A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.\nPlasma or urine levels of JCPyV miRNA were analyzed using stem–loop RT followed by TaqMan PCR analysis\n[33]. Since the 3p miRNA of JCPyV is identical to the 3p miRNA of BKPyV, only 5p miRNAs were investigated\n[28]. As we previously identified a variant of the JCV-miR-J1-5p bearing one nucleotide difference compared to the miRNA described in miRBase (designated JCV-miR-J1a-5p), assays specifically designed for both variants were used, as well as an assay detecting the closely related BKPyV 5p miRNA (Figure \n1A-C)\n[26]. To evaluate the specificity of the assays, standard curves were prepared of each miRNA (JCV-miR-J1-5p, JCV-miR-J1a-5p and BKV-miR-B1-5p) and analyzed using the three specific assays (Figure \n1D-F). Relative detection efficiency was calculated from the difference of quantification cycle (Cq) between the specific assay and the non-specific assay, using samples containing 2.104 copies/μL of the individual miRNAs (Figure \n1G). Only marginal cross reaction was observed for most combinations, but a substantial cross reaction was observed between JCV-miR-J1a-5p and BKV-miR-B1-5p. Therefore, for every single clinical sample (plasma or urine), the contribution of non-specific amplification was calculated based on these standard curves. For JCV-miR-J1-5p and JCV-miR-J1a-5p it was confirmed that the contribution of non-specific signal was negligible compared to the specific signal (Additional file\n1: Table S1 and Additional file\n2: Figure S1). For the BKV-miR-B1-5p assay, however, the calculated contribution of JCV-miR-J1a-5p in most cases was similar to the measured levels, indicating that the BKV-miR-B1-5p assay in most cases actually was detecting JCV-miR-J1a-5p. Consequently, we can conclude that in most samples BKV-miR-B1-5p is absent or present at such low levels that it does not interfere in the interpretation of the JCV-miR-J1-5p and JCV-miR-J1a-5p analyses.Figure 1\nValidation of the stem-loop RT-PCR miRNA assays.\n(A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. 
(G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.\n\nValidation of the stem-loop RT-PCR miRNA assays.\n(A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.\n Analysis of JCPyV miRNA levels in plasma and urine Plasma and urine samples were collected from 50 healthy subjects (HS) and JCPyV VP1 serology and both plasma and urinary viral load was determined (Additional file\n3: Table S2). Based on these parameters, subjects were categorized as Ab-, Ab+VL- or Ab+VL+ (Table \n1). Among these 50 HS JCPyV VP1 antibody prevalence was 72% (36/50) and 36% (13/36) of this group were also shedding virus in their urine, representing 26% (13/50) of the total study population. In the JCPyV VP1 seronegative group, no urinary virus shedding was observed. We found no subjects with detectable JCPyV DNA in plasma, similar to previous observations\n[16].Table 1\nOverview of subjects investigated\nVariableHealthy subjects (n = 50)Gender, n (%)Male22 (44%)Female28 (56%)Age, median (Min-Max)40.5 (23-59)Race, ethnicity, n (%)White38 (76%)Black3 (6%)Asian8 (16%)Other/unknown1 (2%)VP1 serology, n (%)Positive36 (72%)Negative14 (28%)Viruria, n (%)Positive*13 (26%)Negative37 (74%)Viremia, n (%)Positive0 (0%)Negative50 (100%)*All viruric subjects were part of the VP1 seropositive subgroup.\n\nOverview of subjects investigated\n\n*All viruric subjects were part of the VP1 seropositive subgroup.\nTotal RNA, including microRNAs was isolated from both urine and plasma and the level of JCPyV 5p miRNA was quantified in all samples (Figure \n2). The overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine (Figure \n3A). Further analysis of the different subgroups shows that JCPyV miRNA was detected in plasma or urine from HS from all subgroups (Ab-, Ab+ VL- or Ab+VL+) at similar detection rates (P > 0.05 between the subgroups for both plasma and urine). In the seropositive group, JCPyV 5p miRNA was detected in plasma of 69% (25/36) of subjects and in urine of 64% (23/36) of subjects. Remarkably, also in the seronegative group, JCPyV 5p miRNA was detected in plasma of 86% (12/14) of subjects and in urine of 57% (8/14) of subjects. These detection rates were not statistically different compared to those in seropositive subjects (P > 0.05 both for plasma and urine).Figure 2\nIndividual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.Figure 3\nJCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. 
(A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented.\n\nIndividual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.\n\nJCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented.\nQuantitative analysis of JCPyV 5p miRNA indicated plasma levels of JCPyV 5p miRNA were similar in the three different subgroups (Figure \n3C). In urine, significantly higher levels (P < 0.001) were observed in the subgroup shedding JCPyV in their urine compared to the subgroup not shedding JCPyV in their urine (Figure \n3D). No correlation was observed between plasma levels and urine levels of JCPyV 5p miRNA (Figure \n3B). Remarkably, while in plasma higher levels of JCV-miR-J1-5p were detected than JCV-miR-J1a-5p, in urine both variants were detected at similar levels (Figure \n2). Also, the identity of the miRNAs (J1 or J1a variant) was not correlated between urine and plasma. Comparison of JCPyV 5p miRNA levels with JCPyV VP1 serology or urinary viral load showed that, specifically in the subgroup of JCPyV shedders (Ab+VL+) a good correlation (P < 0.005, r = 0.78) exists between miRNA levels and antibody levels, as well as a moderate correlation (P = 0.07, r = 0.57) with urinary viral load (Figure \n4A-B). No correlation could be observed between plasma miRNA levels and any other parameter analyzed (Figure \n4C-D).Figure 4\nViral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.\n\nViral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. 
(B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.\nPlasma and urine samples were collected from 50 healthy subjects (HS) and JCPyV VP1 serology and both plasma and urinary viral load was determined (Additional file\n3: Table S2). Based on these parameters, subjects were categorized as Ab-, Ab+VL- or Ab+VL+ (Table \n1). Among these 50 HS JCPyV VP1 antibody prevalence was 72% (36/50) and 36% (13/36) of this group were also shedding virus in their urine, representing 26% (13/50) of the total study population. In the JCPyV VP1 seronegative group, no urinary virus shedding was observed. We found no subjects with detectable JCPyV DNA in plasma, similar to previous observations\n[16].Table 1\nOverview of subjects investigated\nVariableHealthy subjects (n = 50)Gender, n (%)Male22 (44%)Female28 (56%)Age, median (Min-Max)40.5 (23-59)Race, ethnicity, n (%)White38 (76%)Black3 (6%)Asian8 (16%)Other/unknown1 (2%)VP1 serology, n (%)Positive36 (72%)Negative14 (28%)Viruria, n (%)Positive*13 (26%)Negative37 (74%)Viremia, n (%)Positive0 (0%)Negative50 (100%)*All viruric subjects were part of the VP1 seropositive subgroup.\n\nOverview of subjects investigated\n\n*All viruric subjects were part of the VP1 seropositive subgroup.\nTotal RNA, including microRNAs was isolated from both urine and plasma and the level of JCPyV 5p miRNA was quantified in all samples (Figure \n2). The overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine (Figure \n3A). Further analysis of the different subgroups shows that JCPyV miRNA was detected in plasma or urine from HS from all subgroups (Ab-, Ab+ VL- or Ab+VL+) at similar detection rates (P > 0.05 between the subgroups for both plasma and urine). In the seropositive group, JCPyV 5p miRNA was detected in plasma of 69% (25/36) of subjects and in urine of 64% (23/36) of subjects. Remarkably, also in the seronegative group, JCPyV 5p miRNA was detected in plasma of 86% (12/14) of subjects and in urine of 57% (8/14) of subjects. These detection rates were not statistically different compared to those in seropositive subjects (P > 0.05 both for plasma and urine).Figure 2\nIndividual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.Figure 3\nJCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented.\n\nIndividual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. 
Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.\n\nJCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented.\nQuantitative analysis of JCPyV 5p miRNA indicated plasma levels of JCPyV 5p miRNA were similar in the three different subgroups (Figure \n3C). In urine, significantly higher levels (P < 0.001) were observed in the subgroup shedding JCPyV in their urine compared to the subgroup not shedding JCPyV in their urine (Figure \n3D). No correlation was observed between plasma levels and urine levels of JCPyV 5p miRNA (Figure \n3B). Remarkably, while in plasma higher levels of JCV-miR-J1-5p were detected than JCV-miR-J1a-5p, in urine both variants were detected at similar levels (Figure \n2). Also, the identity of the miRNAs (J1 or J1a variant) was not correlated between urine and plasma. Comparison of JCPyV 5p miRNA levels with JCPyV VP1 serology or urinary viral load showed that, specifically in the subgroup of JCPyV shedders (Ab+VL+) a good correlation (P < 0.005, r = 0.78) exists between miRNA levels and antibody levels, as well as a moderate correlation (P = 0.07, r = 0.57) with urinary viral load (Figure \n4A-B). No correlation could be observed between plasma miRNA levels and any other parameter analyzed (Figure \n4C-D).Figure 4\nViral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.\n\nViral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.", "Plasma or urine levels of JCPyV miRNA were analyzed using stem–loop RT followed by TaqMan PCR analysis\n[33]. Since the 3p miRNA of JCPyV is identical to the 3p miRNA of BKPyV, only 5p miRNAs were investigated\n[28]. 
As we previously identified a variant of the JCV-miR-J1-5p bearing one nucleotide difference compared to the miRNA described in miRBase (designated JCV-miR-J1a-5p), assays specifically designed for both variants were used, as well as an assay detecting the closely related BKPyV 5p miRNA (Figure \n1A-C)\n[26]. To evaluate the specificity of the assays, standard curves were prepared of each miRNA (JCV-miR-J1-5p, JCV-miR-J1a-5p and BKV-miR-B1-5p) and analyzed using the three specific assays (Figure \n1D-F). Relative detection efficiency was calculated from the difference of quantification cycle (Cq) between the specific assay and the non-specific assay, using samples containing 2.104 copies/μL of the individual miRNAs (Figure \n1G). Only marginal cross reaction was observed for most combinations, but a substantial cross reaction was observed between JCV-miR-J1a-5p and BKV-miR-B1-5p. Therefore, for every single clinical sample (plasma or urine), the contribution of non-specific amplification was calculated based on these standard curves. For JCV-miR-J1-5p and JCV-miR-J1a-5p it was confirmed that the contribution of non-specific signal was negligible compared to the specific signal (Additional file\n1: Table S1 and Additional file\n2: Figure S1). For the BKV-miR-B1-5p assay, however, the calculated contribution of JCV-miR-J1a-5p in most cases was similar to the measured levels, indicating that the BKV-miR-B1-5p assay in most cases actually was detecting JCV-miR-J1a-5p. Consequently, we can conclude that in most samples BKV-miR-B1-5p is absent or present at such low levels that it does not interfere in the interpretation of the JCV-miR-J1-5p and JCV-miR-J1a-5p analyses.Figure 1\nValidation of the stem-loop RT-PCR miRNA assays.\n(A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.\n\nValidation of the stem-loop RT-PCR miRNA assays.\n(A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity.", "Plasma and urine samples were collected from 50 healthy subjects (HS) and JCPyV VP1 serology and both plasma and urinary viral load was determined (Additional file\n3: Table S2). Based on these parameters, subjects were categorized as Ab-, Ab+VL- or Ab+VL+ (Table \n1). 
Among these 50 HS JCPyV VP1 antibody prevalence was 72% (36/50) and 36% (13/36) of this group were also shedding virus in their urine, representing 26% (13/50) of the total study population. In the JCPyV VP1 seronegative group, no urinary virus shedding was observed. We found no subjects with detectable JCPyV DNA in plasma, similar to previous observations\n[16].Table 1\nOverview of subjects investigated\nVariableHealthy subjects (n = 50)Gender, n (%)Male22 (44%)Female28 (56%)Age, median (Min-Max)40.5 (23-59)Race, ethnicity, n (%)White38 (76%)Black3 (6%)Asian8 (16%)Other/unknown1 (2%)VP1 serology, n (%)Positive36 (72%)Negative14 (28%)Viruria, n (%)Positive*13 (26%)Negative37 (74%)Viremia, n (%)Positive0 (0%)Negative50 (100%)*All viruric subjects were part of the VP1 seropositive subgroup.\n\nOverview of subjects investigated\n\n*All viruric subjects were part of the VP1 seropositive subgroup.\nTotal RNA, including microRNAs was isolated from both urine and plasma and the level of JCPyV 5p miRNA was quantified in all samples (Figure \n2). The overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine (Figure \n3A). Further analysis of the different subgroups shows that JCPyV miRNA was detected in plasma or urine from HS from all subgroups (Ab-, Ab+ VL- or Ab+VL+) at similar detection rates (P > 0.05 between the subgroups for both plasma and urine). In the seropositive group, JCPyV 5p miRNA was detected in plasma of 69% (25/36) of subjects and in urine of 64% (23/36) of subjects. Remarkably, also in the seronegative group, JCPyV 5p miRNA was detected in plasma of 86% (12/14) of subjects and in urine of 57% (8/14) of subjects. These detection rates were not statistically different compared to those in seropositive subjects (P > 0.05 both for plasma and urine).Figure 2\nIndividual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.Figure 3\nJCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented.\n\nIndividual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.\n\nJCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. 
In case both variants were detected, the sum of both levels is presented.\nQuantitative analysis of JCPyV 5p miRNA indicated plasma levels of JCPyV 5p miRNA were similar in the three different subgroups (Figure \n3C). In urine, significantly higher levels (P < 0.001) were observed in the subgroup shedding JCPyV in their urine compared to the subgroup not shedding JCPyV in their urine (Figure \n3D). No correlation was observed between plasma levels and urine levels of JCPyV 5p miRNA (Figure \n3B). Remarkably, while in plasma higher levels of JCV-miR-J1-5p were detected than JCV-miR-J1a-5p, in urine both variants were detected at similar levels (Figure \n2). Also, the identity of the miRNAs (J1 or J1a variant) was not correlated between urine and plasma. Comparison of JCPyV 5p miRNA levels with JCPyV VP1 serology or urinary viral load showed that, specifically in the subgroup of JCPyV shedders (Ab+VL+) a good correlation (P < 0.005, r = 0.78) exists between miRNA levels and antibody levels, as well as a moderate correlation (P = 0.07, r = 0.57) with urinary viral load (Figure \n4A-B). No correlation could be observed between plasma miRNA levels and any other parameter analyzed (Figure \n4C-D).Figure 4\nViral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.\n\nViral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders.", "Although JCPyV is known to be the etiological agent responsible for development of PML, it is also well-known that this polyomavirus is widely distributed in the human population without any clinical manifestation\n[1]. In this study we have analyzed the level of the viral miRNAs in plasma and urine of healthy subjects. As is the case for other polyomaviruses, JCPyV encodes a pre-miRNA that is further processed into the mature 5p and 3p miR-J1s. The pre-miRNA is encoded on the late strand of the viral genome and is shown to be expressed concurrently with the late mRNA transcript, thereby downregulating early transcript\n[27, 28]. The presence of viral miRNAs might therefore be considered as a marker for latent infection. 
Since BKPyV and JCPyV share the same 3p miRNA, only the 5p miRNAs were included in this study\n[28].\nWe showed that most of the healthy subjects have low but well-detectable levels of viral miRNAs in total RNA extracts of plasma or urine, even in subjects that are seronegative and as such considered not to be infected. This finding indicates that the analysis of antibodies against JCPyV VP1 is insufficient to identify all infected individuals. Previous studies also identified individuals that were seronegative but clearly infected, based on the fact that they were viremic in specific cell compartments (CD34+ cells) or plasma\n[3, 34–36]. Whereas in most of those cases only a limited number of seronegative subjects was found to be infected, our analysis of viral miRNAs in plasma and urine identified 12 and 8 out of 14 seronegative individuals, respectively, as infected with JCPyV. Only 1 seronegative subject was found to be negative for both plasma and urine viral miRNAs. These data suggest that JCPyV is capable of evading immune recognition by the host’s humoral immune system and residing latently in the body, with viral miRNAs leaking into the blood or urine as a silent witness of this latent activity.\nSimilar to the findings in seronegative subjects, a large proportion of seropositive subjects was also found to have viral miRNAs in plasma or urine: 69% of seropositive subjects were positive for plasma viral miRNAs and 64% were positive for urinary viral miRNAs. Given that viral miRNAs can only be produced upon ongoing viral transcription in infected host cells, whereas serology rather discloses information on exposure to the virus and, more particularly, the response of the humoral immune system to this exposure, the analysis of viral miRNAs in seropositive subjects might add another level to the risk prediction algorithms for PML development. We therefore suggest that further studies be performed to investigate the potential of viral miRNA levels in plasma or urine as a risk marker for PML development.\nPlasma levels of viral miRNAs in seropositive subjects did not differ from those in seronegative subjects, and both were close to the detection limit, indicating that in healthy subjects only low activity of JCPyV is present in the periphery, independent of their antibody status. In contrast, we found significantly increased urinary viral miRNA levels in shedders. In this group of viral shedders, there was also a strong correlation between urinary miRNA levels and anti-VP1 antibody levels or urinary viral load.\nBesides the discordance between urine and plasma miRNA levels, the identity of the detected miRNAs (J1-5p or J1a-5p variant) within individual subjects also differed between plasma and urine. The most obvious explanation for these observations would be that two independent viral propagation zones exist in one individual: one in bone marrow cells, blood cells and perhaps also brain cells (Bone-Blood-Brain) and a second one in urinary tract cells. This hypothesis is also supported by other work in which sequencing analysis of the VP1 gene and/or non-coding control region (NCCR) in samples obtained from PML patients showed a similar dichotomy between urine on the one hand and plasma and cerebrospinal fluid (CSF) on the other\n[37–39]. 
Whether crosstalk between the two viral propagation zones exists is unclear and remains to be investigated.\nWe found that increased viral multiplication in the urinary propagation zone, as determined by increased urinary viral load, is accompanied by an increased level of urinary viral miRNAs. This might at first sight appear to contradict the observation that polyomavirus miRNAs, including those of JCPyV, have been shown to downregulate expression of large T-antigen and to suppress viral replication\n[28, 40–42]. One might therefore assume that an increased level of viral miRNAs would be associated with a lower viral load. This mechanistic model does not, however, exclude that in subjects with high urinary viral load, viral miRNAs are released at increased levels as a consequence of high virion production, a process that requires transcription of the late transcript, which also encodes the miRNAs. Furthermore, several studies have shown that the infectivity of mutant viruses lacking the viral miRNAs is not impacted, again indicating that increased levels of viral miRNAs are not per se associated with decreased virion production in vivo\n[28, 43]. On the other hand, these miRNAs can also be produced and released from an infected cell without the need for active viral replication, as is also the case for Herpes Simplex Virus and Epstein-Barr Virus during latent infection\n[44–46]. This would in fact also explain why no viral DNA can be detected in plasma of healthy subjects, while viral miRNAs are detectable in a large number of these subjects. Taken together, low levels of circulating viral miRNAs would serve as a biomarker for latent JCPyV infection, and increased levels might be indicative of increased viral activity. This raises the question of whether increased viral multiplication in the bone-blood-brain propagation zone, as is the case in PML patients, is also accompanied by an increase in plasma viral miRNAs. Therefore, it would be of great interest to determine miRNA levels in plasma of PML patients, but even more so to assess in longitudinal studies whether plasma miRNA levels in MS patients treated with natalizumab increase over time and as such might serve as a monitoring tool for viral reactivation. Similar studies have been performed for the closely related BKPyV in the context of kidney transplant patients developing polyomavirus-associated nephropathy (PVAN) and found strongly increased levels of miR-B1-5p in urine from patients with PVAN\n[42]. Also, a strong correlation appears to exist between BKPyV-encoded miRNAs and BKPyV DNA in the blood of infected renal transplant patients\n[47].\nSince the three miRNAs analyzed in this study are very similar, the cross-reactivity of the different assays was evaluated using synthetic miRNAs. Although we believe that we have provided sufficient evidence that the detected levels of JCPyV miRNAs are specific for JCPyV, it cannot be excluded that the assays used are prone to some non-specific amplification of other miRNAs. However, no miRNAs with a nucleotide sequence resembling that of jcv-miR-J1-5p have been registered in miRBase 21, except for bkv-miR-B1-5p\n[48]. Nor can it be excluded that other, unknown human polyomaviruses exist that encode an identical or closely related miRNA that is detected in our assays.", "Based on the work described here, we conclude that JCPyV miRNAs can be detected in plasma and urine of healthy subjects, allowing further stratification of seropositive individuals. 
Furthermore, our data indicate higher infection rates than would be expected from serology alone.", " Ethics statement The Ethics Committee [“Commissie voor Medische Ethiek - ZiekenhuisNetwerk Antwerpen (ZNA) and the Ethics committee University Hospital Antwerp] approved the Protocol, and Informed consent, which were signed by all subjects.\nThe Ethics Committee [“Commissie voor Medische Ethiek - ZiekenhuisNetwerk Antwerpen (ZNA) and the Ethics committee University Hospital Antwerp] approved the Protocol, and Informed consent, which were signed by all subjects.\n Healthy subject samples A total of 50 healthy subjects (HS) were recruited in Belgium for this study\n[10, 26, 49]. The demographic description of the HS population is presented in Table \n1. Plasma samples and urine samples were collected from all these HS and stored at -80°C until further processing.\nA total of 50 healthy subjects (HS) were recruited in Belgium for this study\n[10, 26, 49]. The demographic description of the HS population is presented in Table \n1. Plasma samples and urine samples were collected from all these HS and stored at -80°C until further processing.\n JC polyomavirus viral load assay Analysis of the urinary viral load was performed as described previously\n[10]. Analysis of the plasma viral load was performed similarly, with the exception that 200 μL plasma was used for DNA extraction.\nAnalysis of the urinary viral load was performed as described previously\n[10]. Analysis of the plasma viral load was performed similarly, with the exception that 200 μL plasma was used for DNA extraction.\n JC polyomavirus VP1 serology assay The anti-JCPyV antibody assay was performed as described earlier\n[26]. Samples were considered positive if OD values were higher than 2-fold the OD value of the blank sample (i.e. log2 test/ctrl > 1).\nThe anti-JCPyV antibody assay was performed as described earlier\n[26]. Samples were considered positive if OD values were higher than 2-fold the OD value of the blank sample (i.e. log2 test/ctrl > 1).\n Synthetic microRNA molecules and generation of miRNA standard curves Three RNase-free 5’-phosphorylated miRNA oligoribonucleotides were synthesized (Integrated DNA Technologies) for the validation of the miRNA assays, corresponding to jcv-miR-J1-5p (5’-phospho-UUCUGAGACCUGGGAAAAGCAU-OH-3’), jcv-miR-J1a-5p (5’-phospho-UUCUGAGACCUGGGAAGAGCAU-OH-3’) and bkv-miR-B1-5p (5’-phospho-AUCUGAGACUUGGGAAGAGCAU-OH-3’). Stock solutions of 100 μM synthetic oligonucleotide in RNase-free and DNase-free water were prepared according to the concentrations and sample purity quoted by the manufacturer (based on spectrophotometric analysis). These stock solutions were diluted to a concentration of approximately 3.32 pM, corresponding to 2.105 copies/μL. A total of six serial 10-fold dilutions were prepared, starting from 2.105 copies/μL down to 2 copies/μL and additional no template controls (NTC; zero copies) were examined. Dilution series of each of the synthetic miRNAs were made in RNase-free and DNase-free water.\nThree RNase-free 5’-phosphorylated miRNA oligoribonucleotides were synthesized (Integrated DNA Technologies) for the validation of the miRNA assays, corresponding to jcv-miR-J1-5p (5’-phospho-UUCUGAGACCUGGGAAAAGCAU-OH-3’), jcv-miR-J1a-5p (5’-phospho-UUCUGAGACCUGGGAAGAGCAU-OH-3’) and bkv-miR-B1-5p (5’-phospho-AUCUGAGACUUGGGAAGAGCAU-OH-3’). 
Stock solutions of 100 μM synthetic oligonucleotide in RNase-free and DNase-free water were prepared according to the concentrations and sample purity quoted by the manufacturer (based on spectrophotometric analysis). These stock solutions were diluted to a concentration of approximately 3.32 pM, corresponding to 2.105 copies/μL. A total of six serial 10-fold dilutions were prepared, starting from 2.105 copies/μL down to 2 copies/μL and additional no template controls (NTC; zero copies) were examined. Dilution series of each of the synthetic miRNAs were made in RNase-free and DNase-free water.\n Analysis of viral miRNAs Levels of circulating viral miRNAs were determined by means of quantitative reverse transcriptase PCR (qRT-PCR) with hydrolysis probe based miRNA assays, purchased from Life Technologies. Specific assays were used for jcv-miR-J1-5p (Assay ID 007464_mat), jcv-miR-J1a-5p (Custom Assay ID CS39QVQ) and bkv-miR-B1-5p (Assay ID 007796_mat). As extraction and reverse transcription efficacy control, assays specific for human miRNAs hsa-miR-26a (assay ID 000405), hsa-miR-30b (assay ID 002129) and mmu-miR-93 (assay ID 001090) were included in the analysis.\nBriefly, RNA was isolated from 200 μL plasma or urine using the miRNeasy kit (Qiagen) according to the manufacturer’s instructions. Three μL of total RNA (representing 20% of total RNA extract) or synthetic miRNA solution was reverse transcribed using the pooled RT stem-loop primers (Life Technologies), enabling miRNA specific cDNA synthesis. Subsequently, 2.5 μL of the RT product (representing 1/6 of total RT product) was pre-amplified by means of a 12-cycle PCR reaction with a pool of miRNA specific forward primer and universal reverse primer to increase detection sensitivity. Diluted (1:8) pre-amplified miRNA cDNA was then used as input for a 45-cycle qPCR reaction with miRNA specific hydrolysis probes and primers (Life Technologies). For analysis of the viral miRNAs, 2 μL of input was used. For analysis of the human miRNAs, 2 μL of input was used for urine derived miRNAs and 0.2 μL was used for plasma derived miRNAs. All reactions were performed in duplicate on the LightCycler® 480 instrument (Roche Applied Science). Quantification cycle (Cq) values were calculated using the 2nd Derivative method with a detection cut-off of 40 cycles. Only samples with quantifiable Cq values for both duplicates were considered positive. Absolute miRNA levels in plasma or urine were calculated based on the standard curves for each specific miRNA assay. Limit of quantification (LOQ) was defined as the miRNA concentration corresponding to a Cq value of 40, based on the standard curve for the specific miRNA. For each sample, average Cq value of the 3 human control miRNAs was calculated and possible outliers were identified using Grubbs’ test (using a significance level of 0.05). In case outliers were detected, results of the corresponding viral miRNAs were not included for further analysis.\nLevels of circulating viral miRNAs were determined by means of quantitative reverse transcriptase PCR (qRT-PCR) with hydrolysis probe based miRNA assays, purchased from Life Technologies. Specific assays were used for jcv-miR-J1-5p (Assay ID 007464_mat), jcv-miR-J1a-5p (Custom Assay ID CS39QVQ) and bkv-miR-B1-5p (Assay ID 007796_mat). 
As extraction and reverse transcription efficacy control, assays specific for human miRNAs hsa-miR-26a (assay ID 000405), hsa-miR-30b (assay ID 002129) and mmu-miR-93 (assay ID 001090) were included in the analysis.\nBriefly, RNA was isolated from 200 μL plasma or urine using the miRNeasy kit (Qiagen) according to the manufacturer’s instructions. Three μL of total RNA (representing 20% of total RNA extract) or synthetic miRNA solution was reverse transcribed using the pooled RT stem-loop primers (Life Technologies), enabling miRNA specific cDNA synthesis. Subsequently, 2.5 μL of the RT product (representing 1/6 of total RT product) was pre-amplified by means of a 12-cycle PCR reaction with a pool of miRNA specific forward primer and universal reverse primer to increase detection sensitivity. Diluted (1:8) pre-amplified miRNA cDNA was then used as input for a 45-cycle qPCR reaction with miRNA specific hydrolysis probes and primers (Life Technologies). For analysis of the viral miRNAs, 2 μL of input was used. For analysis of the human miRNAs, 2 μL of input was used for urine derived miRNAs and 0.2 μL was used for plasma derived miRNAs. All reactions were performed in duplicate on the LightCycler® 480 instrument (Roche Applied Science). Quantification cycle (Cq) values were calculated using the 2nd Derivative method with a detection cut-off of 40 cycles. Only samples with quantifiable Cq values for both duplicates were considered positive. Absolute miRNA levels in plasma or urine were calculated based on the standard curves for each specific miRNA assay. Limit of quantification (LOQ) was defined as the miRNA concentration corresponding to a Cq value of 40, based on the standard curve for the specific miRNA. For each sample, average Cq value of the 3 human control miRNAs was calculated and possible outliers were identified using Grubbs’ test (using a significance level of 0.05). In case outliers were detected, results of the corresponding viral miRNAs were not included for further analysis.\n Statistical analysis Differences in relative occurrence of viral miRNAs between groups were assessed using a Fisher’s test. Differences between groups were considered statistically significant at P < 0.05. Differences in miRNA levels between groups were assessed using a Mann-Whitney test. Differences between groups were considered statistically significant at P < 0.05. Correlation between different parameters was analyzed using linear regression. P-value was calculated to determine whether slope was significantly non-zero and strength of correlation was determined using r-value. All statistical analyses were performed using GraphPad Prism v 5.04.\nDifferences in relative occurrence of viral miRNAs between groups were assessed using a Fisher’s test. Differences between groups were considered statistically significant at P < 0.05. Differences in miRNA levels between groups were assessed using a Mann-Whitney test. Differences between groups were considered statistically significant at P < 0.05. Correlation between different parameters was analyzed using linear regression. P-value was calculated to determine whether slope was significantly non-zero and strength of correlation was determined using r-value. 
Additional file 1: Table S1: Calculated contributions of non-specific detection. Based on the calibration curves for each specific assay, the concentration of the specific miRNA in every sample is calculated. Based on that concentration and the calibration curves for the non-specific assays, a calculated Cq value is then determined for each of the other assays. In case this calculated Cq value is higher than 40, it is assumed that there is no contribution of non-specific detection of that particular miRNA in that particular assay. This approach is also presented visually in Additional file 2: Figure S1. (ZIP 28 KB)
Additional file 2: Figure S1: Calculation of the contribution of non-specific detection. Based on the standard curves, the contribution of non-specific detection of a miRNA to the Cq value of another miRNA assay is calculated. First, based on the Cq value obtained from the specific assay and its specific standard curve, the miRNA level is quantified (indicated by “1”). Subsequently, it is calculated what the Cq value would be in the non-specific assays using extrapolation of the non-specific standard curves (indicated by “2”). (PNG 215 KB)
Additional file 3: Table S2: Individual results of JCPyV VP1 antibody levels, urinary viral load and plasma viral load. (ZIP 12 KB)
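A minimal sketch of the Additional file 1/2 procedure, assuming each standard curve is linear in log10 concentration; the slopes, intercepts and Cq values below are invented for illustration only. It also shows how a relative detection efficiency can be estimated from the Cq difference between a specific and a non-specific assay, as used in the cross-reactivity analysis in the Results.

```python
import math

# Illustrative standard curves: Cq = slope * log10(copies/uL) + intercept
specific_curve = {"slope": -3.4, "intercept": 38.0}       # e.g. the jcv-miR-J1a-5p assay
non_specific_curve = {"slope": -3.5, "intercept": 52.0}   # e.g. the bkv-miR-B1-5p assay

def conc_from_cq(cq, curve):
    """Invert a standard curve: Cq -> copies/uL."""
    return 10 ** ((cq - curve["intercept"]) / curve["slope"])

def cq_from_conc(conc, curve):
    """Predict the Cq a given concentration would yield in another assay."""
    return curve["slope"] * math.log10(conc) + curve["intercept"]

measured_cq = 31.5                                   # Cq observed with the specific assay
conc = conc_from_cq(measured_cq, specific_curve)     # step "1": quantify with the specific curve
predicted_cq = cq_from_conc(conc, non_specific_curve)  # step "2": extrapolate to the other assay
print(predicted_cq > 40)   # True -> the non-specific contribution is considered negligible

# Relative detection efficiency from the Cq difference between assays at the same input
delta_cq = 38.2 - 27.0     # non-specific Cq minus specific Cq (illustrative values)
print(100 * 2 ** (-delta_cq), "% relative detection")
```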
[ "intro", "results", null, null, "discussion", "conclusions", "materials|methods", null, null, null, null, null, null, null, "supplementary-material", null ]
[ "JC Polyomavirus", "Viral microRNA", "Circulating microRNA", "Progressive Multifocal Leukoencephalopathy", "Biomarker", "Viral activity" ]
Introduction: The human JC polyomavirus (JCPyV) is the etiological agent of Progressive Multifocal Leukoencephalopathy (PML), a demyelinating disease of the brain caused by lytic infection of oligodendrocytes upon viral reactivation [1]. JCPyV is a circular double-stranded DNA virus with very restricted cellular tropism, infecting oligodendrocytes, astrocytes, kidney epithelial cells and peripheral blood cells [2, 3]. It is thought that infection usually occurs asymptomatically in childhood, after which the virus remains latent in the body [4–6]. Under certain immunocompromising conditions, such as treatment with immunomodulatory drugs (e.g. natalizumab) or infection with Human Immunodeficiency Virus (HIV), the virus can be reactivated and actively replicate into the brain, leading to PML. Current risk assessment for development of PML is mainly based on the detection of antibodies against VP1, the major capsid protein and the detection of viral DNA in urine (viruria). It has been reported that 50 to 80% of humans are seropositive for JCPyV and approximately one fifth of the population sheds JCPyV in the urine [7–13]. Detection of viral DNA in plasma (viremia) is very rare and has been shown not to be useful for predicting PML risk [14–16]. Recently it was shown, however, that viral DNA can be detected in CD34+ or CD19+ cells, with an increased detection rate in Multiple Sclerosis (MS) patients treated with natalizumab [3]. As the risk of developing PML increases upon prolonged use of natalizumab, current treatment guidelines recommend discontinuation of therapy after the second year, particularly in JCPyV seropositive patients [17]. Given the high prevalence of JCPyV antibodies, a large number of patients are advised to discontinue therapy. Although most, if not all, PML patients are seropositive or show seroconversion before diagnosis of PML, the incidence of PML in natalizumab-treated MS patients is not more than 1.1% in the highest risk subgroup, indicating that not all seropositive subjects have the same risk of developing PML [18]. Moreover, the introduction of a risk stratification algorithm, predominantly based on JCPyV serology, has not led to a reduction of PML incidence in natalizumab-treated MS patients [19]. Therefore, development of new tools for improved risk stratification is warranted, as this might justify continued therapy for many MS patients and better identify those patients who are really at risk of developing PML. MicroRNAs (miRNAs) are small, non-coding RNAs that play an important role in fine-tuning the expression of specific gene products through translational repression and/or mRNA degradation and as such are implicated in many diseases [20–22]. Cellular miRNAs can also be released in small vesicles, such as exosomes and the levels of these extracellular miRNAs in biological fluids have become very valuable markers of several diseases, such as cancer, Alzheimer’s disease and diabetes [23–25]. In the context of JCPyV, it was shown that there does not appear to be a relationship between circulating human miRNAs and the presence of anti-VP1 antibodies or urinary viral load [26]. Several viruses encode their own sets of miRNAs, which can have self-regulatory or host modulating roles [22]. Also JCPyV, as well as other polyomaviruses, encodes its own unique microRNA that is produced as part of the late transcript in an infected host cell [27, 28]. 
These microRNAs are thought to play an important role in controlling viral replication through downregulation of Large T-Antigen expression, but also in controlling NKG2D-mediated killing of virus-infected cells by natural killer (NK) cells through downregulation of the stress-induced ligand ULBP3 [28, 29]. The diagnostic potential of circulating viral miRNAs has already been investigated for Epstein-Barr virus and Kaposi’s sarcoma-associated herpesvirus (KSHV), where they might represent potential markers for virus associated malignancies [30, 31]. Also JCPyV miRNA has been shown recently to be a useful biomarker for JCPyV infection in the gastrointestinal tract [32]. In this study we have investigated whether JCPyV-encoded miRNAs could be detected in plasma or urine of healthy subjects and whether the presence of these miRNAs was related to VP1 serology or urinary viral load. We demonstrate that these viral miRNAs can indeed be detected in plasma or urine and that they might be useful markers for JCPyV infection. Results: Assay linearity and specificity Plasma or urine levels of JCPyV miRNA were analyzed using stem–loop RT followed by TaqMan PCR analysis [33]. Since the 3p miRNA of JCPyV is identical to the 3p miRNA of BKPyV, only 5p miRNAs were investigated [28]. As we previously identified a variant of the JCV-miR-J1-5p bearing one nucleotide difference compared to the miRNA described in miRBase (designated JCV-miR-J1a-5p), assays specifically designed for both variants were used, as well as an assay detecting the closely related BKPyV 5p miRNA (Figure  1A-C) [26]. To evaluate the specificity of the assays, standard curves were prepared of each miRNA (JCV-miR-J1-5p, JCV-miR-J1a-5p and BKV-miR-B1-5p) and analyzed using the three specific assays (Figure  1D-F). Relative detection efficiency was calculated from the difference of quantification cycle (Cq) between the specific assay and the non-specific assay, using samples containing 2.104 copies/μL of the individual miRNAs (Figure  1G). Only marginal cross reaction was observed for most combinations, but a substantial cross reaction was observed between JCV-miR-J1a-5p and BKV-miR-B1-5p. Therefore, for every single clinical sample (plasma or urine), the contribution of non-specific amplification was calculated based on these standard curves. For JCV-miR-J1-5p and JCV-miR-J1a-5p it was confirmed that the contribution of non-specific signal was negligible compared to the specific signal (Additional file 1: Table S1 and Additional file 2: Figure S1). For the BKV-miR-B1-5p assay, however, the calculated contribution of JCV-miR-J1a-5p in most cases was similar to the measured levels, indicating that the BKV-miR-B1-5p assay in most cases actually was detecting JCV-miR-J1a-5p. Consequently, we can conclude that in most samples BKV-miR-B1-5p is absent or present at such low levels that it does not interfere in the interpretation of the JCV-miR-J1-5p and JCV-miR-J1a-5p analyses.Figure 1 Validation of the stem-loop RT-PCR miRNA assays. (A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. 
(G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity. Validation of the stem-loop RT-PCR miRNA assays. (A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity. Plasma or urine levels of JCPyV miRNA were analyzed using stem–loop RT followed by TaqMan PCR analysis [33]. Since the 3p miRNA of JCPyV is identical to the 3p miRNA of BKPyV, only 5p miRNAs were investigated [28]. As we previously identified a variant of the JCV-miR-J1-5p bearing one nucleotide difference compared to the miRNA described in miRBase (designated JCV-miR-J1a-5p), assays specifically designed for both variants were used, as well as an assay detecting the closely related BKPyV 5p miRNA (Figure  1A-C) [26]. To evaluate the specificity of the assays, standard curves were prepared of each miRNA (JCV-miR-J1-5p, JCV-miR-J1a-5p and BKV-miR-B1-5p) and analyzed using the three specific assays (Figure  1D-F). Relative detection efficiency was calculated from the difference of quantification cycle (Cq) between the specific assay and the non-specific assay, using samples containing 2.104 copies/μL of the individual miRNAs (Figure  1G). Only marginal cross reaction was observed for most combinations, but a substantial cross reaction was observed between JCV-miR-J1a-5p and BKV-miR-B1-5p. Therefore, for every single clinical sample (plasma or urine), the contribution of non-specific amplification was calculated based on these standard curves. For JCV-miR-J1-5p and JCV-miR-J1a-5p it was confirmed that the contribution of non-specific signal was negligible compared to the specific signal (Additional file 1: Table S1 and Additional file 2: Figure S1). For the BKV-miR-B1-5p assay, however, the calculated contribution of JCV-miR-J1a-5p in most cases was similar to the measured levels, indicating that the BKV-miR-B1-5p assay in most cases actually was detecting JCV-miR-J1a-5p. Consequently, we can conclude that in most samples BKV-miR-B1-5p is absent or present at such low levels that it does not interfere in the interpretation of the JCV-miR-J1-5p and JCV-miR-J1a-5p analyses.Figure 1 Validation of the stem-loop RT-PCR miRNA assays. (A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. 
(G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity. Validation of the stem-loop RT-PCR miRNA assays. (A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity. Analysis of JCPyV miRNA levels in plasma and urine Plasma and urine samples were collected from 50 healthy subjects (HS) and JCPyV VP1 serology and both plasma and urinary viral load was determined (Additional file 3: Table S2). Based on these parameters, subjects were categorized as Ab-, Ab+VL- or Ab+VL+ (Table  1). Among these 50 HS JCPyV VP1 antibody prevalence was 72% (36/50) and 36% (13/36) of this group were also shedding virus in their urine, representing 26% (13/50) of the total study population. In the JCPyV VP1 seronegative group, no urinary virus shedding was observed. We found no subjects with detectable JCPyV DNA in plasma, similar to previous observations [16].Table 1 Overview of subjects investigated VariableHealthy subjects (n = 50)Gender, n (%)Male22 (44%)Female28 (56%)Age, median (Min-Max)40.5 (23-59)Race, ethnicity, n (%)White38 (76%)Black3 (6%)Asian8 (16%)Other/unknown1 (2%)VP1 serology, n (%)Positive36 (72%)Negative14 (28%)Viruria, n (%)Positive*13 (26%)Negative37 (74%)Viremia, n (%)Positive0 (0%)Negative50 (100%)*All viruric subjects were part of the VP1 seropositive subgroup. Overview of subjects investigated *All viruric subjects were part of the VP1 seropositive subgroup. Total RNA, including microRNAs was isolated from both urine and plasma and the level of JCPyV 5p miRNA was quantified in all samples (Figure  2). The overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine (Figure  3A). Further analysis of the different subgroups shows that JCPyV miRNA was detected in plasma or urine from HS from all subgroups (Ab-, Ab+ VL- or Ab+VL+) at similar detection rates (P > 0.05 between the subgroups for both plasma and urine). In the seropositive group, JCPyV 5p miRNA was detected in plasma of 69% (25/36) of subjects and in urine of 64% (23/36) of subjects. Remarkably, also in the seronegative group, JCPyV 5p miRNA was detected in plasma of 86% (12/14) of subjects and in urine of 57% (8/14) of subjects. These detection rates were not statistically different compared to those in seropositive subjects (P > 0.05 both for plasma and urine).Figure 2 Individual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.Figure 3 JCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. 
(A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. Individual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject. JCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. Quantitative analysis of JCPyV 5p miRNA indicated plasma levels of JCPyV 5p miRNA were similar in the three different subgroups (Figure  3C). In urine, significantly higher levels (P < 0.001) were observed in the subgroup shedding JCPyV in their urine compared to the subgroup not shedding JCPyV in their urine (Figure  3D). No correlation was observed between plasma levels and urine levels of JCPyV 5p miRNA (Figure  3B). Remarkably, while in plasma higher levels of JCV-miR-J1-5p were detected than JCV-miR-J1a-5p, in urine both variants were detected at similar levels (Figure  2). Also, the identity of the miRNAs (J1 or J1a variant) was not correlated between urine and plasma. Comparison of JCPyV 5p miRNA levels with JCPyV VP1 serology or urinary viral load showed that, specifically in the subgroup of JCPyV shedders (Ab+VL+) a good correlation (P < 0.005, r = 0.78) exists between miRNA levels and antibody levels, as well as a moderate correlation (P = 0.07, r = 0.57) with urinary viral load (Figure  4A-B). No correlation could be observed between plasma miRNA levels and any other parameter analyzed (Figure  4C-D).Figure 4 Viral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders. Viral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. 
(C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders. Plasma and urine samples were collected from 50 healthy subjects (HS) and JCPyV VP1 serology and both plasma and urinary viral load was determined (Additional file 3: Table S2). Based on these parameters, subjects were categorized as Ab-, Ab+VL- or Ab+VL+ (Table  1). Among these 50 HS JCPyV VP1 antibody prevalence was 72% (36/50) and 36% (13/36) of this group were also shedding virus in their urine, representing 26% (13/50) of the total study population. In the JCPyV VP1 seronegative group, no urinary virus shedding was observed. We found no subjects with detectable JCPyV DNA in plasma, similar to previous observations [16].Table 1 Overview of subjects investigated VariableHealthy subjects (n = 50)Gender, n (%)Male22 (44%)Female28 (56%)Age, median (Min-Max)40.5 (23-59)Race, ethnicity, n (%)White38 (76%)Black3 (6%)Asian8 (16%)Other/unknown1 (2%)VP1 serology, n (%)Positive36 (72%)Negative14 (28%)Viruria, n (%)Positive*13 (26%)Negative37 (74%)Viremia, n (%)Positive0 (0%)Negative50 (100%)*All viruric subjects were part of the VP1 seropositive subgroup. Overview of subjects investigated *All viruric subjects were part of the VP1 seropositive subgroup. Total RNA, including microRNAs was isolated from both urine and plasma and the level of JCPyV 5p miRNA was quantified in all samples (Figure  2). The overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine (Figure  3A). Further analysis of the different subgroups shows that JCPyV miRNA was detected in plasma or urine from HS from all subgroups (Ab-, Ab+ VL- or Ab+VL+) at similar detection rates (P > 0.05 between the subgroups for both plasma and urine). In the seropositive group, JCPyV 5p miRNA was detected in plasma of 69% (25/36) of subjects and in urine of 64% (23/36) of subjects. Remarkably, also in the seronegative group, JCPyV 5p miRNA was detected in plasma of 86% (12/14) of subjects and in urine of 57% (8/14) of subjects. These detection rates were not statistically different compared to those in seropositive subjects (P > 0.05 both for plasma and urine).Figure 2 Individual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.Figure 3 JCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. Individual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject. 
JCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. Quantitative analysis of JCPyV 5p miRNA indicated plasma levels of JCPyV 5p miRNA were similar in the three different subgroups (Figure  3C). In urine, significantly higher levels (P < 0.001) were observed in the subgroup shedding JCPyV in their urine compared to the subgroup not shedding JCPyV in their urine (Figure  3D). No correlation was observed between plasma levels and urine levels of JCPyV 5p miRNA (Figure  3B). Remarkably, while in plasma higher levels of JCV-miR-J1-5p were detected than JCV-miR-J1a-5p, in urine both variants were detected at similar levels (Figure  2). Also, the identity of the miRNAs (J1 or J1a variant) was not correlated between urine and plasma. Comparison of JCPyV 5p miRNA levels with JCPyV VP1 serology or urinary viral load showed that, specifically in the subgroup of JCPyV shedders (Ab+VL+) a good correlation (P < 0.005, r = 0.78) exists between miRNA levels and antibody levels, as well as a moderate correlation (P = 0.07, r = 0.57) with urinary viral load (Figure  4A-B). No correlation could be observed between plasma miRNA levels and any other parameter analyzed (Figure  4C-D).Figure 4 Viral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders. Viral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders. Assay linearity and specificity: Plasma or urine levels of JCPyV miRNA were analyzed using stem–loop RT followed by TaqMan PCR analysis [33]. Since the 3p miRNA of JCPyV is identical to the 3p miRNA of BKPyV, only 5p miRNAs were investigated [28]. 
As we previously identified a variant of the JCV-miR-J1-5p bearing one nucleotide difference compared to the miRNA described in miRBase (designated JCV-miR-J1a-5p), assays specifically designed for both variants were used, as well as an assay detecting the closely related BKPyV 5p miRNA (Figure  1A-C) [26]. To evaluate the specificity of the assays, standard curves were prepared of each miRNA (JCV-miR-J1-5p, JCV-miR-J1a-5p and BKV-miR-B1-5p) and analyzed using the three specific assays (Figure  1D-F). Relative detection efficiency was calculated from the difference of quantification cycle (Cq) between the specific assay and the non-specific assay, using samples containing 2.104 copies/μL of the individual miRNAs (Figure  1G). Only marginal cross reaction was observed for most combinations, but a substantial cross reaction was observed between JCV-miR-J1a-5p and BKV-miR-B1-5p. Therefore, for every single clinical sample (plasma or urine), the contribution of non-specific amplification was calculated based on these standard curves. For JCV-miR-J1-5p and JCV-miR-J1a-5p it was confirmed that the contribution of non-specific signal was negligible compared to the specific signal (Additional file 1: Table S1 and Additional file 2: Figure S1). For the BKV-miR-B1-5p assay, however, the calculated contribution of JCV-miR-J1a-5p in most cases was similar to the measured levels, indicating that the BKV-miR-B1-5p assay in most cases actually was detecting JCV-miR-J1a-5p. Consequently, we can conclude that in most samples BKV-miR-B1-5p is absent or present at such low levels that it does not interfere in the interpretation of the JCV-miR-J1-5p and JCV-miR-J1a-5p analyses.Figure 1 Validation of the stem-loop RT-PCR miRNA assays. (A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity. Validation of the stem-loop RT-PCR miRNA assays. (A) Genomic location of the JCPyV/BKPyV encoded miRNAs. (B) Graphical representation of a pre-miRNA as precursor of the mature 5p and 3p miRNAs generated upon cleavage by Dicer (C) Sequence comparison of the three miRNAs investigated. (D-F) The assay linearity and specificity was evaluated with dilution series of three synthetic miRNAs, miR-J1-5p, miR-J1a-5p and miR-B1-5p. Each dilution series was analyzed using the three miRNA assays. (G) The assay readings of miR-J1-5p by miR-J1-5p assay, miR-J1a-5p by miR-J1a-5p assay, and miR-B1-5p by miR-B1-5p assay at concentration levels of 2.104 copies/μL extract were used as the relative standards (100%) for the analysis of assay cross-reactivity. Analysis of JCPyV miRNA levels in plasma and urine: Plasma and urine samples were collected from 50 healthy subjects (HS) and JCPyV VP1 serology and both plasma and urinary viral load was determined (Additional file 3: Table S2). Based on these parameters, subjects were categorized as Ab-, Ab+VL- or Ab+VL+ (Table  1). 
Among these 50 HS JCPyV VP1 antibody prevalence was 72% (36/50) and 36% (13/36) of this group were also shedding virus in their urine, representing 26% (13/50) of the total study population. In the JCPyV VP1 seronegative group, no urinary virus shedding was observed. We found no subjects with detectable JCPyV DNA in plasma, similar to previous observations [16].Table 1 Overview of subjects investigated VariableHealthy subjects (n = 50)Gender, n (%)Male22 (44%)Female28 (56%)Age, median (Min-Max)40.5 (23-59)Race, ethnicity, n (%)White38 (76%)Black3 (6%)Asian8 (16%)Other/unknown1 (2%)VP1 serology, n (%)Positive36 (72%)Negative14 (28%)Viruria, n (%)Positive*13 (26%)Negative37 (74%)Viremia, n (%)Positive0 (0%)Negative50 (100%)*All viruric subjects were part of the VP1 seropositive subgroup. Overview of subjects investigated *All viruric subjects were part of the VP1 seropositive subgroup. Total RNA, including microRNAs was isolated from both urine and plasma and the level of JCPyV 5p miRNA was quantified in all samples (Figure  2). The overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine (Figure  3A). Further analysis of the different subgroups shows that JCPyV miRNA was detected in plasma or urine from HS from all subgroups (Ab-, Ab+ VL- or Ab+VL+) at similar detection rates (P > 0.05 between the subgroups for both plasma and urine). In the seropositive group, JCPyV 5p miRNA was detected in plasma of 69% (25/36) of subjects and in urine of 64% (23/36) of subjects. Remarkably, also in the seronegative group, JCPyV 5p miRNA was detected in plasma of 86% (12/14) of subjects and in urine of 57% (8/14) of subjects. These detection rates were not statistically different compared to those in seropositive subjects (P > 0.05 both for plasma and urine).Figure 2 Individual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject.Figure 3 JCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. Individual levels of jcv-miR-J1-5p and jcv-miR-J1a-5p in plasma and urine. Plasma and urine levels (in log copies/mL) of the two JCPyvV miRNA variants (jcv-miR-J1-5p and jcv-miR-J1a-5p) in every individual subject. JCPyV miRNA levels detected in plasma and urine of healthy subjects, categorized based on serology and urinary viral load. (A) Number of subjects with detectable levels of JCPyV miRNAs in plasma and urine in the different groups. (B) Correlation between plasma and urine levels (in log copies/mL) of JCPyV miRNAs. (C) Plasma levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. (D) Urine levels (in log copies/mL) of JCPyV miRNAs in the different groups. In case both variants were detected, the sum of both levels is presented. 
Quantitative analysis of JCPyV 5p miRNA indicated plasma levels of JCPyV 5p miRNA were similar in the three different subgroups (Figure  3C). In urine, significantly higher levels (P < 0.001) were observed in the subgroup shedding JCPyV in their urine compared to the subgroup not shedding JCPyV in their urine (Figure  3D). No correlation was observed between plasma levels and urine levels of JCPyV 5p miRNA (Figure  3B). Remarkably, while in plasma higher levels of JCV-miR-J1-5p were detected than JCV-miR-J1a-5p, in urine both variants were detected at similar levels (Figure  2). Also, the identity of the miRNAs (J1 or J1a variant) was not correlated between urine and plasma. Comparison of JCPyV 5p miRNA levels with JCPyV VP1 serology or urinary viral load showed that, specifically in the subgroup of JCPyV shedders (Ab+VL+) a good correlation (P < 0.005, r = 0.78) exists between miRNA levels and antibody levels, as well as a moderate correlation (P = 0.07, r = 0.57) with urinary viral load (Figure  4A-B). No correlation could be observed between plasma miRNA levels and any other parameter analyzed (Figure  4C-D).Figure 4 Viral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders. Viral miRNAs related to other viral parameters. (A) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value and r value are based on this subset only. (B) Correlation between JCPyV miRNA levels (in log copies/mL) in urine and JCPyV urinary viral load (in log copies/mL) in shedders. (C) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV VP1 serology (in log2 test/ctrl). Shedders are indicated in red and P-value is based on this subset only. (D) Correlation between JCPyV miRNA levels (in log copies/mL) in plasma and JCPyV urinary viral load (in log copies/mL) in shedders. Discussion: Although JCPyV is known to be the etiological agent responsible for development of PML, it is also well-known that this polyomavirus is widely distributed in the human population without any clinical manifestation [1]. In this study we have analyzed the level of the viral miRNAs in plasma and urine of healthy subjects. As is the case for other polyomaviruses, JCPyV encodes a pre-miRNA that is further processed into the mature 5p and 3p miR-J1s. The pre-miRNA is encoded on the late strand of the viral genome and is shown to be expressed concurrently with the late mRNA transcript, thereby downregulating early transcript [27, 28]. The presence of viral miRNAs might therefore be considered as a marker for latent infection. Since BKPyV and JCPyV share the same 3p miRNA, only the 5p miRNAs were included in this study [28]. 
We showed that most of the healthy subjects have low, but well-detectable levels of viral miRNAs in total RNA extracts of plasma or urine, even in subjects that are seronegative and as such considered not to be infected. This finding indicates that the analysis of antibodies against JCPyV VP1 is insufficient to identify all infected individuals. Previous studies also identified individuals that were seronegative but clearly infected based on the fact that they were viremic in specific cell compartments (CD34+ cells) or plasma [3, 34–36]. Whereas in most cases only a limited number of seronegative subjects was found to be infected, our analysis of viral miRNAs in plasma and urine identified, respectively 12 and 8 out of 14 seronegative individuals to be infected with JCPyV. Only 1 seronegative subject was found to be negative for both plasma and urine viral miRNAs. These data suggest that JCPyV is capable of evading immune recognition by the host’s humoral immune system and residing latently in the body, with viral miRNAs leaking in the blood or urine as a silent witness of this latent activity. Similar to the findings in seronegative subjects, also a large group of seropositive subjects was found to have viral miRNAs in plasma or urine. 69% of seropositive subjects were positive for plasma viral miRNAs and 64% were found to be positive for urinary viral miRNAs. Given the fact that viral miRNAs can only be produced upon ongoing viral transcription in infected host cells, whereas serology rather discloses information on exposure to the virus and more in particular the response of the humoral immune system towards this exposure, the analysis of viral miRNAs in seropositive subjects might add another level to the risk prediction algorithms for PML development. We therefore suggest that further studies would be performed to investigate the potential of viral miRNA levels in plasma or urine as a risk marker for PML development. Plasma levels of viral miRNAs in seropositive subjects were not different compared to seronegative subjects and both were close to the detection limit, indicating that in healthy subjects only low activity of JCPyV is present in the periphery, independent of their antibody status. In contrast, we found significantly increased urinary viral miRNA levels in shedders. In this group of viral shedders, there was also a strong correlation between urinary miRNA levels and anti-VP1 antibody levels or urinary viral load. Besides the discordance between urine and plasma miRNA levels, also the identity of the detected miRNAs (J1-5p or J1a-5p variant) within individual subjects was different in plasma and urine. The most obvious explanation for these observations would be that two independent viral propagation zones exist in one individual: one in bone marrow cells, blood cells and perhaps also brain cells (Bone-Blood-Brain) and a second one in urinary tract cells. This hypothesis is also supported by other work where sequencing analysis of the VP1 gene and/or non-coding control region (NCCR) in samples obtained from PML patients showed a similar dichotomy between urine on the one hand and plasma and cerebrospinal fluid (CSF) on the other hand [37–39]. Whether crosstalk between both viral propagation zones exists, is unclear and remains to be investigated. We found that increased viral multiplication in the urinary propagation zone, as determined by increased urinary viral load, is accompanied by an increased level of urinary viral miRNAs. 
This might at first sight appear in contradiction with the observation that polyomavirus miRNAs, including JCPyV, have been shown to downregulate expression of large T-antigen and to suppress viral replication [28, 40–42]. One might therefore assume that an increased level of viral miRNAs would be associated to a lower viral load. This mechanistic model does however not exclude that in subjects with high urinary viral load, viral miRNAs are released at increased levels as a consequence of high virion production, a process that requires transcription of the late transcript, which also encodes the miRNAs. Furthermore, several studies have shown that infectivity of mutant viruses lacking the viral miRNAs are not impacted, again indicating that increased levels of viral miRNAs are not per se associated with decreased virion production in vivo [28, 43]. On the other hand, these miRNAs can also be produced and released from an infected cell without the need for active viral replication, as is also the case for Herpes Simplex Virus and Epstein-Barr Virus during latent infection [44–46]. This would in fact also explain why no viral DNA can be detected in plasma of healthy subjects, while viral miRNAs are detectable in a large number of these subjects. Taken together, low levels of circulating viral miRNAs would serve as biomarker for latent JCPyV infection and increased levels might be indicative of increased viral activity. This raises the question whether an increased viral multiplication in the bone-blood-brain propagation zone, as is the case in PML patients, also is accompanied by an increase in plasma viral miRNAs. Therefore, it would be of great interest to determine miRNA levels in plasma of PML patients, but even more to assess in longitudinal studies whether plasma miRNA levels in MS patients treated with natalizumab increase over time and as such might serve as a monitoring tool for viral reactivation. Similar studies have been performed for the closely related BKPyV in the context of kidney transplant patients developing polyomavirus-associated nephropathy (PVAN) and found strongly increased levels of miR-B1-5p in urine from patients with PVAN [42]. Also, a strong correlation appears to exist between BKPyV encoded miRNAs and BKPyV DNA in blood of infected renal transplant patients [47]. Since the three miRNAs analyzed in this study are very similar, the cross-reactivity of the different assays was evaluated using synthetic miRNAs. Although we believe that we have provided sufficient evidence that the detected levels of JCPyV miRNAs are specific for JCPyV, it cannot be excluded that the assays used are prone to some non-specific amplification of other miRNAs. However, no miRNAs with nucleotide sequence resembling the sequence of jcv-miR-J1-5p have been registered in miRBase 21, except for bkv-miR-B1-5p [48]. It can also not be excluded that other, unknown human polyomaviruses exist that encode an identical or closely related miRNA that is detected in our assays. Conclusions: Based on the work described here, we conclude that JCPyV miRNAs can be detected in plasma and urine of healthy subjects, allowing further stratification of seropositive individuals. Furthermore, our data indicate higher infection rates than would be expected from serology alone. 
Materials and methods:
Ethics statement
The Ethics Committee [“Commissie voor Medische Ethiek” - ZiekenhuisNetwerk Antwerpen (ZNA) and the Ethics committee University Hospital Antwerp] approved the Protocol and Informed consent, which were signed by all subjects.
Healthy subject samples
A total of 50 healthy subjects (HS) were recruited in Belgium for this study [10, 26, 49]. The demographic description of the HS population is presented in Table 1. Plasma samples and urine samples were collected from all these HS and stored at -80°C until further processing.
JC polyomavirus viral load assay
Analysis of the urinary viral load was performed as described previously [10]. Analysis of the plasma viral load was performed similarly, with the exception that 200 μL plasma was used for DNA extraction.
JC polyomavirus VP1 serology assay
The anti-JCPyV antibody assay was performed as described earlier [26]. Samples were considered positive if OD values were higher than 2-fold the OD value of the blank sample (i.e. log2 test/ctrl > 1).
Synthetic microRNA molecules and generation of miRNA standard curves
Three RNase-free 5’-phosphorylated miRNA oligoribonucleotides were synthesized (Integrated DNA Technologies) for the validation of the miRNA assays, corresponding to jcv-miR-J1-5p (5’-phospho-UUCUGAGACCUGGGAAAAGCAU-OH-3’), jcv-miR-J1a-5p (5’-phospho-UUCUGAGACCUGGGAAGAGCAU-OH-3’) and bkv-miR-B1-5p (5’-phospho-AUCUGAGACUUGGGAAGAGCAU-OH-3’). Stock solutions of 100 μM synthetic oligonucleotide in RNase-free and DNase-free water were prepared according to the concentrations and sample purity quoted by the manufacturer (based on spectrophotometric analysis). These stock solutions were diluted to a concentration of approximately 3.32 pM, corresponding to 2 × 10^5 copies/μL. A total of six serial 10-fold dilutions were prepared, starting from 2 × 10^5 copies/μL down to 2 copies/μL, and additional no template controls (NTC; zero copies) were examined. Dilution series of each of the synthetic miRNAs were made in RNase-free and DNase-free water.
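The conversion between a molar oligonucleotide concentration and copy numbers, and the construction of a 10-fold dilution series, are plain Avogadro arithmetic; the sketch below uses illustrative concentrations rather than the exact working solutions described above.

```python
AVOGADRO = 6.022e23  # molecules per mole

def copies_per_ul(molar):
    """Copy number per microlitre for a given molar concentration (mol/L)."""
    return molar * AVOGADRO / 1e6   # 1 L = 1e6 uL

print(f"{copies_per_ul(1e-12):.2e} copies/uL")   # ~6.0e5 copies/uL for a 1 pM example solution

# A six-point 10-fold dilution series spanning a standard-curve range down to 2 copies/uL
series = [2e5 / 10**i for i in range(6)]          # 2e5, 2e4, 2e3, 2e2, 2e1, 2 copies/uL
print(series)
```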
Analysis of viral miRNAs
Levels of circulating viral miRNAs were determined by means of quantitative reverse transcriptase PCR (qRT-PCR) with hydrolysis probe based miRNA assays, purchased from Life Technologies. Specific assays were used for jcv-miR-J1-5p (Assay ID 007464_mat), jcv-miR-J1a-5p (Custom Assay ID CS39QVQ) and bkv-miR-B1-5p (Assay ID 007796_mat). As extraction and reverse transcription efficacy control, assays specific for human miRNAs hsa-miR-26a (assay ID 000405), hsa-miR-30b (assay ID 002129) and mmu-miR-93 (assay ID 001090) were included in the analysis.
Briefly, RNA was isolated from 200 μL plasma or urine using the miRNeasy kit (Qiagen) according to the manufacturer’s instructions. Three μL of total RNA (representing 20% of total RNA extract) or synthetic miRNA solution was reverse transcribed using the pooled RT stem-loop primers (Life Technologies), enabling miRNA specific cDNA synthesis. Subsequently, 2.5 μL of the RT product (representing 1/6 of total RT product) was pre-amplified by means of a 12-cycle PCR reaction with a pool of miRNA specific forward primer and universal reverse primer to increase detection sensitivity. Diluted (1:8) pre-amplified miRNA cDNA was then used as input for a 45-cycle qPCR reaction with miRNA specific hydrolysis probes and primers (Life Technologies). For analysis of the viral miRNAs, 2 μL of input was used. For analysis of the human miRNAs, 2 μL of input was used for urine derived miRNAs and 0.2 μL was used for plasma derived miRNAs. All reactions were performed in duplicate on the LightCycler® 480 instrument (Roche Applied Science). Quantification cycle (Cq) values were calculated using the 2nd Derivative method with a detection cut-off of 40 cycles. Only samples with quantifiable Cq values for both duplicates were considered positive. Absolute miRNA levels in plasma or urine were calculated based on the standard curves for each specific miRNA assay. Limit of quantification (LOQ) was defined as the miRNA concentration corresponding to a Cq value of 40, based on the standard curve for the specific miRNA. For each sample, average Cq value of the 3 human control miRNAs was calculated and possible outliers were identified using Grubbs’ test (using a significance level of 0.05). In case outliers were detected, results of the corresponding viral miRNAs were not included for further analysis.
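How Cq values translate into the copies/mL figures reported in the Results is not spelled out beyond the volumes above, so the back-calculation below is only an assumed sketch: it inverts a made-up standard curve and chains the stated volume fractions (200 μL biofluid; 3 μL RNA representing 20% of the extract). The authors' exact scaling may differ.

```python
import math

# Illustrative standard curve fitted on the synthetic dilution series:
# Cq = SLOPE * log10(copies/uL) + INTERCEPT (values are assumptions)
SLOPE, INTERCEPT = -3.4, 37.0

def copies_per_ul_from_cq(cq):
    """Invert the standard curve to copies/uL of the material entering the workflow."""
    return 10 ** ((cq - INTERCEPT) / SLOPE)

# Assumed scaling back to the biofluid, chained from the quoted volumes
EXTRACT_VOLUME_UL = 3 / 0.20   # total RNA extract volume implied by "3 uL = 20% of extract"
FLUID_VOLUME_ML = 0.200        # plasma or urine used for extraction

def copies_per_ml_biofluid(cq):
    conc_in_extract = copies_per_ul_from_cq(cq)        # standards and samples share the workflow
    total_copies = conc_in_extract * EXTRACT_VOLUME_UL
    return total_copies / FLUID_VOLUME_ML

print(math.log10(copies_per_ml_biofluid(35.0)), "log copies/mL")
loq = copies_per_ml_biofluid(40.0)   # concentration corresponding to the Cq cut-off (the LOQ)
```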
Statistical analysis: Differences in the relative occurrence of viral miRNAs between groups were assessed using Fisher’s test and were considered statistically significant at P < 0.05. Differences in miRNA levels between groups were assessed using the Mann-Whitney test, with the same significance threshold. Correlations between parameters were analyzed using linear regression; the P value was calculated to determine whether the slope was significantly non-zero, and the strength of the correlation was assessed using the r value. All statistical analyses were performed using GraphPad Prism v 5.04.
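As a hedged illustration of the same three comparisons in an open-source environment rather than GraphPad Prism, the sketch below uses SciPy. The 2x2 detection counts are taken from the abstract, while the miRNA level vectors and regression inputs are invented placeholders, so the printed P values are not results of the study.

```python
# Illustrative sketch (not the GraphPad Prism analysis used in the study):
# the three kinds of tests described above, run with SciPy. The 2x2 counts come
# from the abstract (plasma miRNA detection in seronegative vs seropositive
# subjects); the level vectors and regression inputs are invented placeholders.

import numpy as np
from scipy import stats

# Fisher's test on relative occurrence (detected vs not detected per group).
table = [[12, 14 - 12],   # seronegative: miRNA positive, miRNA negative
         [25, 36 - 25]]   # seropositive: miRNA positive, miRNA negative
_, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact test: P = {p_fisher:.3f}")

# Mann-Whitney test on miRNA levels between two groups (placeholder data).
group_a = np.array([1.2, 3.4, 2.2, 5.1, 0.9])
group_b = np.array([6.5, 8.1, 7.4, 9.0, 5.9])
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney: U = {u_stat:.1f}, P = {p_mw:.3f}")

# Linear regression: is the slope significantly non-zero, and how strong is r?
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
res = stats.linregress(x, y)
print(f"slope = {res.slope:.2f}, r = {res.rvalue:.2f}, P = {res.pvalue:.4f}")
```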
Electronic supplementary material:
Additional file 1: Table S1. Calculated contributions of non-specific detection. Based on the calibration curves for each specific assay, the concentration of the specific miRNA in every sample is calculated. Based on that concentration and the calibration curves for the non-specific assays, a calculated Cq value is then determined for each of the other assays. If this calculated Cq value is higher than 40, it is assumed that there is no contribution of non-specific detection of that particular miRNA in that particular assay. This approach is also presented visually in Additional file 2: Figure S1. (ZIP 28 KB)
Additional file 2: Figure S1. Calculation of the contribution of non-specific detection. Based on the standard curves, the contribution of non-specific detection of a miRNA to the Cq value of another miRNA assay is calculated. First, based on the Cq value obtained from the specific assay and its specific standard curve, the miRNA level is quantified (indicated by “1”). Subsequently, the Cq value that this concentration would give in the non-specific assays is calculated by extrapolation of the non-specific standard curves (indicated by “2”). (PNG 215 KB)
Additional file 3: Table S2. Individual results of JCPyV VP1 antibody levels, urinary viral load and plasma viral load. (ZIP 12 KB)
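The cross-detection check described in Additional files 1 and 2 amounts to mapping a concentration estimated from one standard curve onto another assay's curve. A minimal sketch follows, with hypothetical curve parameters and an invented Cq value; it is our own illustration, not the authors' calculation files.

```python
# Illustrative sketch (not the authors' calculation files): the cross-detection
# check from Additional files 1-2. Curve parameters and the observed Cq are invented.

import math


def copies_from_cq(cq, slope, intercept):
    return 10 ** ((cq - intercept) / slope)


def cq_from_copies(copies, slope, intercept):
    return slope * math.log10(copies) + intercept


# Step 1: quantify the miRNA from its own (specific) standard curve.
specific_slope, specific_intercept = -3.35, 39.5        # hypothetical curve
observed_cq = 33.0
copies = copies_from_cq(observed_cq, specific_slope, specific_intercept)

# Step 2: extrapolate the Cq that this concentration would give in a
# non-specific assay, using that assay's (hypothetical, less sensitive) curve.
nonspecific_slope, nonspecific_intercept = -3.40, 47.0
calculated_cq = cq_from_copies(copies, nonspecific_slope, nonspecific_intercept)

# If the calculated Cq exceeds the 40-cycle cut-off, non-specific detection is
# assumed not to contribute in that assay.
print(f"estimated {copies:.1f} copies/uL, calculated Cq = {calculated_cq:.1f}")
print("no contribution assumed" if calculated_cq > 40 else "possible cross-detection")
```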
Background: JC polyomavirus (JCPyV) is a widespread human polyomavirus that usually resides latently in its host, but can be reactivated under immune-compromised conditions potentially causing Progressive Multifocal Leukoencephalopathy (PML). JCPyV encodes its own microRNA, jcv-miR-J1. Methods: We have investigated in 50 healthy subjects whether jcv-miR-J1-5p (and its variant jcv-miR-J1a-5p) can be detected in plasma or urine. Results: We found that the overall detection rate of JCPyV miRNA was 74% (37/50) in plasma and 62% (31/50) in urine. Subjects were further categorized based on JCPyV VP1 serology status and viral shedding. In seronegative subjects, JCPyV miRNA was found in 86% (12/14) and 57% (8/14) of plasma and urine samples, respectively. In seropositive subjects, the detection rate was 69% (25/36) and 64% (23/36) for plasma and urine, respectively. Furthermore, in seropositive subjects shedding virus in urine, higher levels of urinary viral miRNAs were observed, compared to non-shedding seropositive subjects (P < 0.001). No correlation was observed between urinary and plasma miRNAs. Conclusions: These data indicate that analysis of circulating viral miRNAs divulges the presence of latent JCPyV infection, allowing further stratification of seropositive individuals. Also, our data indicate higher infection rates than would be expected from serology alone.
Introduction: The human JC polyomavirus (JCPyV) is the etiological agent of Progressive Multifocal Leukoencephalopathy (PML), a demyelinating disease of the brain caused by lytic infection of oligodendrocytes upon viral reactivation [1]. JCPyV is a circular double-stranded DNA virus with very restricted cellular tropism, infecting oligodendrocytes, astrocytes, kidney epithelial cells and peripheral blood cells [2, 3]. It is thought that infection usually occurs asymptomatically in childhood, after which the virus remains latent in the body [4–6]. Under certain immunocompromising conditions, such as treatment with immunomodulatory drugs (e.g. natalizumab) or infection with Human Immunodeficiency Virus (HIV), the virus can be reactivated and actively replicate into the brain, leading to PML. Current risk assessment for development of PML is mainly based on the detection of antibodies against VP1, the major capsid protein and the detection of viral DNA in urine (viruria). It has been reported that 50 to 80% of humans are seropositive for JCPyV and approximately one fifth of the population sheds JCPyV in the urine [7–13]. Detection of viral DNA in plasma (viremia) is very rare and has been shown not to be useful for predicting PML risk [14–16]. Recently it was shown, however, that viral DNA can be detected in CD34+ or CD19+ cells, with an increased detection rate in Multiple Sclerosis (MS) patients treated with natalizumab [3]. As the risk of developing PML increases upon prolonged use of natalizumab, current treatment guidelines recommend discontinuation of therapy after the second year, particularly in JCPyV seropositive patients [17]. Given the high prevalence of JCPyV antibodies, a large number of patients are advised to discontinue therapy. Although most, if not all, PML patients are seropositive or show seroconversion before diagnosis of PML, the incidence of PML in natalizumab-treated MS patients is not more than 1.1% in the highest risk subgroup, indicating that not all seropositive subjects have the same risk of developing PML [18]. Moreover, the introduction of a risk stratification algorithm, predominantly based on JCPyV serology, has not led to a reduction of PML incidence in natalizumab-treated MS patients [19]. Therefore, development of new tools for improved risk stratification is warranted, as this might justify continued therapy for many MS patients and better identify those patients who are really at risk of developing PML. MicroRNAs (miRNAs) are small, non-coding RNAs that play an important role in fine-tuning the expression of specific gene products through translational repression and/or mRNA degradation and as such are implicated in many diseases [20–22]. Cellular miRNAs can also be released in small vesicles, such as exosomes and the levels of these extracellular miRNAs in biological fluids have become very valuable markers of several diseases, such as cancer, Alzheimer’s disease and diabetes [23–25]. In the context of JCPyV, it was shown that there does not appear to be a relationship between circulating human miRNAs and the presence of anti-VP1 antibodies or urinary viral load [26]. Several viruses encode their own sets of miRNAs, which can have self-regulatory or host modulating roles [22]. Also JCPyV, as well as other polyomaviruses, encodes its own unique microRNA that is produced as part of the late transcript in an infected host cell [27, 28]. 
These microRNAs are thought to play an important role in controlling viral replication through downregulation of Large T-Antigen expression, but also in controlling NKG2D-mediated killing of virus-infected cells by natural killer (NK) cells through downregulation of the stress-induced ligand ULBP3 [28, 29]. The diagnostic potential of circulating viral miRNAs has already been investigated for Epstein-Barr virus and Kaposi’s sarcoma-associated herpesvirus (KSHV), where they might represent potential markers for virus associated malignancies [30, 31]. Also JCPyV miRNA has been shown recently to be a useful biomarker for JCPyV infection in the gastrointestinal tract [32]. In this study we have investigated whether JCPyV-encoded miRNAs could be detected in plasma or urine of healthy subjects and whether the presence of these miRNAs was related to VP1 serology or urinary viral load. We demonstrate that these viral miRNAs can indeed be detected in plasma or urine and that they might be useful markers for JCPyV infection. Conclusions: Based on the work described here, we conclude that JCPyV miRNAs can be detected in plasma and urine of healthy subjects, allowing further stratification of seropositive individuals. Furthermore, our data indicate higher infection rates than would be expected from serology alone.
12,620
274
[ 787, 1411, 38, 60, 40, 48, 184, 467, 106, 262 ]
16
[ "5p", "mirna", "jcpyv", "mir", "levels", "plasma", "mirnas", "urine", "viral", "assay" ]
[ "jcpyv infection", "antibodies jcpyv vp1", "viral reactivation jcpyv", "jc polyomavirus viral", "human jc polyomavirus" ]
null
[CONTENT] JC Polyomavirus | Viral microRNA | Circulating microRNA | Progressive Multifocal Leukoencephalopathy | Biomarker | Viral activity [SUMMARY]
null
[CONTENT] JC Polyomavirus | Viral microRNA | Circulating microRNA | Progressive Multifocal Leukoencephalopathy | Biomarker | Viral activity [SUMMARY]
[CONTENT] JC Polyomavirus | Viral microRNA | Circulating microRNA | Progressive Multifocal Leukoencephalopathy | Biomarker | Viral activity [SUMMARY]
[CONTENT] JC Polyomavirus | Viral microRNA | Circulating microRNA | Progressive Multifocal Leukoencephalopathy | Biomarker | Viral activity [SUMMARY]
[CONTENT] JC Polyomavirus | Viral microRNA | Circulating microRNA | Progressive Multifocal Leukoencephalopathy | Biomarker | Viral activity [SUMMARY]
[CONTENT] Adult | Female | Humans | JC Virus | Male | MicroRNAs | Middle Aged | Polyomavirus Infections | RNA, Viral | Sensitivity and Specificity | Tumor Virus Infections | Viral Load | Virus Shedding | Young Adult [SUMMARY]
null
[CONTENT] Adult | Female | Humans | JC Virus | Male | MicroRNAs | Middle Aged | Polyomavirus Infections | RNA, Viral | Sensitivity and Specificity | Tumor Virus Infections | Viral Load | Virus Shedding | Young Adult [SUMMARY]
[CONTENT] Adult | Female | Humans | JC Virus | Male | MicroRNAs | Middle Aged | Polyomavirus Infections | RNA, Viral | Sensitivity and Specificity | Tumor Virus Infections | Viral Load | Virus Shedding | Young Adult [SUMMARY]
[CONTENT] Adult | Female | Humans | JC Virus | Male | MicroRNAs | Middle Aged | Polyomavirus Infections | RNA, Viral | Sensitivity and Specificity | Tumor Virus Infections | Viral Load | Virus Shedding | Young Adult [SUMMARY]
[CONTENT] Adult | Female | Humans | JC Virus | Male | MicroRNAs | Middle Aged | Polyomavirus Infections | RNA, Viral | Sensitivity and Specificity | Tumor Virus Infections | Viral Load | Virus Shedding | Young Adult [SUMMARY]
[CONTENT] jcpyv infection | antibodies jcpyv vp1 | viral reactivation jcpyv | jc polyomavirus viral | human jc polyomavirus [SUMMARY]
null
[CONTENT] jcpyv infection | antibodies jcpyv vp1 | viral reactivation jcpyv | jc polyomavirus viral | human jc polyomavirus [SUMMARY]
[CONTENT] jcpyv infection | antibodies jcpyv vp1 | viral reactivation jcpyv | jc polyomavirus viral | human jc polyomavirus [SUMMARY]
[CONTENT] jcpyv infection | antibodies jcpyv vp1 | viral reactivation jcpyv | jc polyomavirus viral | human jc polyomavirus [SUMMARY]
[CONTENT] jcpyv infection | antibodies jcpyv vp1 | viral reactivation jcpyv | jc polyomavirus viral | human jc polyomavirus [SUMMARY]
[CONTENT] 5p | mirna | jcpyv | mir | levels | plasma | mirnas | urine | viral | assay [SUMMARY]
null
[CONTENT] 5p | mirna | jcpyv | mir | levels | plasma | mirnas | urine | viral | assay [SUMMARY]
[CONTENT] 5p | mirna | jcpyv | mir | levels | plasma | mirnas | urine | viral | assay [SUMMARY]
[CONTENT] 5p | mirna | jcpyv | mir | levels | plasma | mirnas | urine | viral | assay [SUMMARY]
[CONTENT] 5p | mirna | jcpyv | mir | levels | plasma | mirnas | urine | viral | assay [SUMMARY]
[CONTENT] pml | patients | risk | jcpyv | virus | natalizumab | cells | mirnas | viral | infection [SUMMARY]
null
[CONTENT] 5p | mir | jcpyv | levels | copies ml | log copies ml | log copies | log | ml | urine [SUMMARY]
[CONTENT] data indicate higher infection | conclude jcpyv mirnas detected | data indicate | data indicate higher | based work described | based work | jcpyv mirnas detected | jcpyv mirnas detected plasma | allowing | allowing stratification [SUMMARY]
[CONTENT] 5p | mir | mirna | mirnas | viral | specific | jcpyv | assay | plasma | urine [SUMMARY]
[CONTENT] 5p | mir | mirna | mirnas | viral | specific | jcpyv | assay | plasma | urine [SUMMARY]
[CONTENT] Progressive Multifocal Leukoencephalopathy | PML ||| jcv-miR-J1 [SUMMARY]
null
[CONTENT] 74% | 37/50 | 62% | 31/50 ||| JCPyV VP1 serology ||| 86% | 12/14 | 57% | 8/14 ||| 69% | 25/36 | 64% | 23/36 ||| 0.001 ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Progressive Multifocal Leukoencephalopathy | PML ||| jcv-miR-J1 ||| 50 | jcv-miR-J1-5p | jcv-miR-J1a-5p ||| ||| 74% | 37/50 | 62% | 31/50 ||| JCPyV VP1 serology ||| 86% | 12/14 | 57% | 8/14 ||| 69% | 25/36 | 64% | 23/36 ||| 0.001 ||| ||| ||| [SUMMARY]
[CONTENT] Progressive Multifocal Leukoencephalopathy | PML ||| jcv-miR-J1 ||| 50 | jcv-miR-J1-5p | jcv-miR-J1a-5p ||| ||| 74% | 37/50 | 62% | 31/50 ||| JCPyV VP1 serology ||| 86% | 12/14 | 57% | 8/14 ||| 69% | 25/36 | 64% | 23/36 ||| 0.001 ||| ||| ||| [SUMMARY]
Evaluation of the impact of universal testing for gestational diabetes mellitus on maternal and neonatal health outcomes: a retrospective analysis.
25199524
Gestational diabetes (GDM) affects a substantial proportion of women in pregnancy and is associated with increased risk of adverse perinatal and long-term outcomes. Treatment seems to improve perinatal outcomes; however, the relative effectiveness of different strategies for identifying women with GDM is less clear. This paper describes an evaluation of the impact on maternal and neonatal outcomes of a change in policy from selective, risk factor-based offering to universal offering of an oral glucose tolerance test (OGTT) to identify women with GDM.
BACKGROUND
Retrospective six-year analysis of 35,674 births at the Women's and Newborn unit, Bradford Royal Infirmary, United Kingdom.
METHODS
The proportion of the whole obstetric population diagnosed with GDM increased almost fourfold following universal offering of an OGTT compared to selective offering of an OGTT; rate ratio (RR) 3.75 (95% CI 3.28 to 4.29). The proportion identified with severe hyperglycaemia doubled following the policy change; 1.96 (1.50 to 2.58). However, the case detection rate for GDM in the whole population and for severe hyperglycaemia in those with GDM reduced by 50-60%; 0.40 (0.35 to 0.46) and 0.51 (0.39 to 0.67) respectively. Universally offering an OGTT was associated with an increased induction of labour rate in the whole obstetric population and in women with GDM; 1.43 (1.35 to 1.50) and 1.21 (1.00 to 1.49) respectively. Caesarean section, macrosomia and perinatal mortality rates in the whole population were similar. For women with GDM, rates of caesarean section; 0.70 (0.57 to 0.87), macrosomia; 0.22 (0.15 to 0.34) and perinatal mortality; 0.12 (0.03 to 0.46) decreased following the policy change.
RESULTS
Universally offering an OGTT was associated with increased identification of women with GDM and severe hyperglycaemia and with neonatal benefits for those with GDM. There was no evidence of benefit or adverse effects in neonatal outcomes in the whole obstetric population.
CONCLUSIONS
[ "Cesarean Section", "Diabetes, Gestational", "Diagnostic Tests, Routine", "Female", "Fetal Macrosomia", "Glucose Tolerance Test", "Health Policy", "Humans", "Hyperglycemia", "Infant, Newborn", "Labor, Induced", "Perinatal Mortality", "Pregnancy", "Retrospective Studies", "United Kingdom" ]
4167281
Background
Gestational diabetes mellitus (GDM) affects 2-6% of pregnant women and is associated with increased risk of important adverse perinatal outcomes, including macrosomia and birth injury [1, 2]. There is also evidence of increased long term risk of type 2 diabetes [3] and consequent cardiovascular disease in the mothers [4] and possibly of increased long term risk of obesity and associated adverse cardio-metabolic risk in offspring [5–8]. Evidence is increasing that treatment of GDM improves perinatal outcomes [9] supporting the case for improved identification of women with GDM. There is debate however about the relative effectiveness of different strategies for identifying women with GDM, largely because of the lack of good quality evidence. This has led to variation in clinical guidelines and practice for detecting GDM between, and within, countries. Strategies include: case by case assessment [10], selective (75 g or 100 g) oral glucose tolerance testing of high risk women identified using specific risk factors or a 50 g glucose challenge test [11] and universal testing, i.e. offering all women an oral glucose tolerance test (OGTT) [12]. To date in the United Kingdom (UK) there has been no recommendation to offer all women an OGTT, in practice, and more recently in clinical guidelines, selective testing of high risk women has been undertaken. Prior to 2008 in the UK there was no national recommended screening strategy to identify women with GDM. Screening if it was conducted, was at the discretion of the clinician and based on variable use of risk factors [13–15]. When risk factor screening was undertaken, a two-step approach was preferred: clinicians made a clinical assessment of each woman’s risk and offered a two hour 75 g OGTT to identify GDM, with a diagnosis based on the World Health Organisation (WHO) criteria [16]. Since 2008 UK national clinical guidance has recommended that all women are screened by assessment of specific risk factors at their first pregnancy appointment. Any pregnant woman (not previously identified as having type 2 diabetes) with one or more risk factor: family history of diabetes; South Asian; black or middle eastern ethnicity; previous history of having a baby with macrosomia; or body mass index (kg/m2) (BMI) ≥30, should be offered an OGTT between 24 and 28 weeks gestation [11]. The American College of Obstetricians and Gynecologists (ACOG) also recommends a two-step approach. Women are screened for GDM at 24 to 26 weeks, either by patient history, risk factors or 50 g one hour glucose challenge test and if screen positive offered a 100 g OGTT. GDM diagnosis is made using criteria from Carpenter and Coustan or the National Diabetes Data Group [17]. By contrast the International Association of Diabetes in Pregnancy Study Group [12] (indorsed by the American Diabetes Association) recommend all women not previously identified as having type 2 diabetes (irrespective of risk factors) are offered a diagnostic 75 g OGTT at 24–28 weeks. GDM is diagnosed if any one of plasma glucose levels are >5.0 mmol/l, one hour >9.9 mmol/l or two hour >8.4 mmol/l is found [18]. Current screening recommendations are based on evidence from observational studies with no evidence that diagnosis using any of the above strategies improves perinatal or long term adverse outcomes or is cost-effective. Consequently the US Preventative Services Taskforce recommends clinicians discuss screening for GDM with each woman and case by case decisions made based on risk status [10]. 
Selectively offering women an OGTT based on risk factor assessment and universally offering an OGTT to some extent represent the extremes of possible approaches for identifying women at risk. Theoretically the former approach may have a high false negative rate and miss the opportunity to prevent adverse perinatal outcomes in women at risk, whereas the latter approach may over-diagnose [19], cause unnecessary anxiety [20] and may not be cost-effective in improving perinatal outcomes. Recent policy changes from a clinician-led, risk factor screening approach to universal offering of a diagnostic 75 g OGTT in Bradford provided us with the opportunity to compare these two approaches.
Setting: Bradford is a city in the North of England with high levels of deprivation. Approximately half of births are to women of South Asian origin and a fifth of White British pregnant women are obese (unpublished routine study hospital data from 2012). In response to the Bradford District Mortality Commission, which was undertaken in 2006 and highlighted rising infant mortality and the poor health of pregnant women in the city [21], a number of changes to clinical practice were implemented, including offering a diagnostic OGTT to all pregnant women at 26–28 weeks gestation. Prior to 2007 women were offered an OGTT at 26–28 weeks gestation following individual assessment of risk status. The aim of this study was to evaluate the policy change from case by case risk factor based assessment and selective testing to universal testing (offering a diagnostic OGTT to all women) to identify gestational diabetes.
Methods
Participants: We conducted a before and after comparison using data from all women who gave birth at the Bradford Royal Infirmary between 2004 and 2006 (before group) and 2008 and 2010 (after group). Those who gave birth in 2007 were excluded as this year contained a mixture of women selectively offered an OGTT and women universally offered an OGTT. In the UK, anonymised routinely collected data of those accessing National Health Service care may be used to evaluate service provision. Those using such data for service evaluation are not required to obtain verbal or written consent from patients or obtain ethical approval. Therefore neither patient consent nor ethical approval for this study was obtained, as routinely collected hospital data were used to evaluate service provision. Analyses were conducted on a completely anonymised dataset and none of the authors had access to any patient identifiers [22].
Oral glucose tolerance test: The 75 g OGTT was administered during the study period between 26–28 weeks gestation. Fasting plasma glucose and two hour post-load plasma glucose measurements were obtained. The criteria used to diagnose GDM were based on WHO recommendations (i.e. either fasting glucose ≥6.1 mmol/L or two-hour glucose ≥7.8 mmol/L) and did not change throughout the study period [16].
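As a small illustration of the diagnostic thresholds quoted above (together with the severe-hyperglycaemia thresholds applied later in the analysis), the following sketch classifies a single OGTT result. It is our own illustration, not code used in the study.

```python
# Illustrative sketch (our own, not code used in the study): classify a single
# OGTT result against the WHO GDM thresholds quoted above, plus the severe
# hyperglycaemia thresholds used in the statistical analysis.

def classify_ogtt(fasting_mmol_l: float, two_hour_mmol_l: float) -> str:
    if fasting_mmol_l >= 7.0 or two_hour_mmol_l >= 11.1:
        return "GDM with severe hyperglycaemia"
    if fasting_mmol_l >= 6.1 or two_hour_mmol_l >= 7.8:
        return "GDM"
    return "no GDM"


if __name__ == "__main__":
    print(classify_ogtt(5.2, 6.9))   # no GDM
    print(classify_ogtt(5.8, 8.3))   # GDM
    print(classify_ogtt(7.4, 12.0))  # GDM with severe hyperglycaemia
```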
Extraction of data: Data were extracted from hospital written and electronic medical records. Maternal risk factors (previous GDM, family history of diabetes, previous history of giving birth to a baby with macrosomia, obesity, South Asian or black ethnicity, glycosuria, raised result of any random blood glucose test and increased liquor volume) were available for women who had completed an OGTT between 2004 and 2006 (case by case assessment and selectively tested group). These risk factors were not available for women during this time period who did not complete an OGTT, nor were they available for women who were pregnant after 2007, other than ethnicity, which was available for all women. Data on whether an OGTT was performed, whether women were diagnosed with GDM, and maternal and neonatal outcomes (induction of labour, caesarean birth, macrosomia (birth weight equal to or greater than 4 kg), perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) were available for all births (before and after the policy change) and included in the analyses. Data on risk factors (where available), completion of the OGTT and OGTT results were abstracted from paper medical or electronic records; outcome data were provided by the hospital electronic records system for the period 2004 to 2006. Neonatal transitional care unit data and data for 2008 to 2010 were abstracted from paper medical or electronic records. Research midwives abstracted data from paper records using a pre-prepared coded data extraction sheet. A 5% random sample of these data were independently abstracted by a research fellow; for all field codes there was greater than 98% agreement between the research fellow and research midwife abstractions. Electronic data were transferred from the electronic medical record to a research database and merged with the abstracted paper record data. These electronic data were manually checked against stored paper records and again high levels (98%) of agreement were found. We also compared the medical records of GDM diagnoses with the biochemical laboratory records of test results and found very high levels of agreement (>99%).
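The quality-control comparison of duplicate abstractions reduces to a simple percent-agreement calculation. Below is a minimal sketch with hypothetical field names; it is not taken from the study's abstraction forms.

```python
# Minimal sketch (our own, hypothetical field names): percent agreement between
# two independent abstractions of the same records, as in the 5% quality-control
# sample described above.

def percent_agreement(abstraction_a, abstraction_b, fields):
    """Share of (record, field) pairs on which two abstractors agree."""
    comparisons = 0
    agreements = 0
    for rec_a, rec_b in zip(abstraction_a, abstraction_b):
        for field in fields:
            comparisons += 1
            agreements += int(rec_a.get(field) == rec_b.get(field))
    return 100.0 * agreements / comparisons


if __name__ == "__main__":
    a = [{"ogtt_done": True, "gdm": False}, {"ogtt_done": True, "gdm": True}]
    b = [{"ogtt_done": True, "gdm": False}, {"ogtt_done": True, "gdm": False}]
    print(f"{percent_agreement(a, b, ['ogtt_done', 'gdm']):.0f}% agreement")
```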
Analysis: All analyses were conducted using Stata version 10 [23]. We calculated the percentage of women with an identified risk factor and used the binomial distribution to calculate 95% confidence intervals for these prevalences. We used the same method to calculate the percentage and 95% confidence intervals for women who had an OGTT between 2008 and 2010 (i.e. the period during which there was a policy of universal OGTT testing). The proportion of women with GDM (either a fasting blood glucose ≥6.1 mmol/L, or a two hour blood glucose ≥7.8 mmol/L, or both) and, of these, the proportion with severe hyperglycaemia (either a fasting blood glucose ≥7 mmol/L, or a two hour blood glucose ≥11.1 mmol/L, or both) was estimated. The proportion of women/infants with each risk factor (where available) and outcome (diagnosis of GDM, induction of labour, caesarean birth, macrosomia, perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) was estimated for each time period. Rate ratios (RR) comparing outcomes after the introduction of the universal offer of a diagnostic OGTT to those before this policy, together with their 95% confidence intervals, were estimated using the epitab command in Stata [23].
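For readers without Stata, the binomial confidence intervals described above can be reproduced with an exact (Clopper-Pearson) interval. The sketch below is illustrative, uses SciPy rather than Stata, and demonstrates the calculation on the number of completed OGTTs reported in the Results (1151 of 17,160); it is not the study's code.

```python
# Illustrative sketch (not the Stata code used in the study): exact
# (Clopper-Pearson) 95% confidence interval for a binomial proportion.

from scipy import stats


def binomial_ci(successes: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lower = 0.0 if successes == 0 else stats.beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else stats.beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper


if __name__ == "__main__":
    # Women completing an OGTT in 2004-2006: 1151 of 17,160 (about 6.7%).
    low, high = binomial_ci(1151, 17160)
    print(f"proportion = {1151 / 17160:.3f}, 95% CI {low:.3f} to {high:.3f}")
```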
Results
Between 2004 and 2006 (selectively offered group), 17,160 women gave birth; 1162 (7%) were offered an OGTT following risk assessment by their clinician and 1151 completed the test (7% of all women and 99% of those offered the test). Between 2008 and 2010 (universally offered group), 18,514 women gave birth; 18,328 (99%) were offered an OGTT and 11,516 completed the test (62% of all women and 63% of those offered the test) (Table 1). For births occurring between 2004–2006, amongst those who completed the OGTT, 58% were from an ethnic group with increased risk (the most common risk factor) compared with 55% after the policy change (risk ratio 0.95, 95% CI: 0.87 to 1.03). Obesity was the second most common individual risk factor for women selectively offered an OGTT (Additional file 1: Table S1). A small proportion (1%) of the women in the universally offered group were not offered an OGTT; reasons were pre-existing diabetes or late attendance for antenatal care. Of those attending for the OGTT (after group only), 3% did not complete the test, either because they had been unable to fast overnight or were unable to drink the glucose solution due to nausea. These women were all offered a second appointment and over 99% completed the test at that second appointment.
Table 1. Association between selective and universal offer of an oral glucose tolerance test (OGTT) and the gestational diabetes and severe hyperglycaemia detection rates (number, with percentage of the whole population and percentage of the screened population in parentheses). Selective offering of an OGTT: 2004–2006; universal offering of an OGTT: 2008–2010.
Row | 2004 | 2005 | 2006 | 2004-2006 | 2008 | 2009 | 2010 | 2008-2010 | Rate ratio comparing detection rates after to before (95% CI), whole population / screened population
Whole population | 5512 | 5784 | 5864 | 17160 | 6162 | 6251 | 6101 | 18514 | -
Completed OGTT | 323 (6) | 299 (5) | 529 (9) | 1151 (7) | 3797 (62) | 3947 (63) | 3772 (62) | 11516 (62) | -
GDM(a) identified | 62 (1, 19) | 83 (1, 28) | 135 (2, 26) | 280 (2, 24) | 311 (5, 8) | 398 (6, 10) | 423 (7, 11) | 1132 (6, 10) | 3.75 (3.28-4.29) / 0.40 (0.35-0.46)
Severe hyperglycaemia(b) identified | 21 (0.4, 34) | 33 (0.6, 40) | 29 (0.5, 21) | 83 (0.5, 30) | 46 (0.7, 14) | 63 (1.0, 16) | 63 (1.0, 16) | 172 (1.0, 15) | 1.96 (1.50-2.58) / 0.51 (0.39-0.67)
OGTT = oral glucose tolerance test; GDM = gestational diabetes mellitus; CI = confidence interval. (a) GDM = either a fasting blood glucose ≥6.1 mmol/L, or a two hour blood glucose ≥7.8 mmol/L, or both. (b) Severe hyperglycaemia = either a fasting blood glucose ≥7 mmol/L, or a two hour blood glucose ≥11.1 mmol/L, or both.
The proportion diagnosed with GDM increased almost fourfold after the introduction of the policy of offering a diagnostic OGTT to all pregnant women compared to selectively offering the test (2% to 6%); rate ratio (RR) 3.75 (95% CI 3.28 to 4.29) (Table 1). The proportion identified as having severe hyperglycaemia doubled following the change in policy (0.5% to 1.0%; 1.96, 1.50 to 2.58) (Table 1).
However, the population case detection rate (for both GDM and severe hyperglycaemia in those with GDM) reduced by 50-60% following universal offering of a diagnostic OGTT, reflecting an increase in those offered the test who were not at risk (Table 1). Table 2 compares adverse maternal and neonatal outcomes between the two testing strategies. The induction of labour rate increased both in the whole obstetric population and in women with GDM after the introduction of offering universal OGTT. The caesarean section rate in the whole population was similar before and after the introduction of offering universal OGTT, but introduction of this universal offer was associated with a reduction in caesarean section rates amongst those with GDM. Similarly, offering universal OGTT was not associated with a change in the rate of macrosomia in the whole obstetric population, but was associated with a marked reduction in its rate amongst those with GDM. There was also a reduction in perinatal mortality for women with GDM after the introduction of the policy and some evidence of a weaker reduction in the whole population, but the latter rate ratio had wide confidence intervals that included the null value.
Table 2. Association between selective and universal offer of an OGTT to identify gestational diabetes and risk of adverse maternal and neonatal outcomes in the whole obstetric population and in women with gestational diabetes. Selective offer of an OGTT: 2004–2006 (whole population N = 17160; women with GDM N = 280); universal offer of an OGTT: 2008–2010 (whole population N = 18514; women with GDM N = 1132).
Outcome | Whole population, selective, N (%) | Whole population, universal, N (%) | Rate ratio after vs before (95% CI) | Women with GDM, selective, N (%) | Women with GDM, universal, N (%) | Rate ratio after vs before (95% CI)
Induction of labour | 2422 (14.1) | 3678 (20.0) | 1.43 (1.35-1.50) | 120 (42.9) | 587 (51.9) | 1.21 (1.00-1.49)
Caesarean birth | 3477 (20.3) | 3709 (20.0) | 1.00 (0.96-1.05) | 122 (43.6) | 345 (30.5) | 0.70 (0.57-0.87)
Macrosomia (birth weight ≥4 kg) | 1105 (6.4) | 1219 (6.6) | 1.04 (0.95-1.12) | 45 (16.1) | 50 (4.4) | 0.22 (0.15-0.34)
Perinatal mortality | 228 (1.3) | 212 (1.1) | 0.86 (0.71-1.04) | 4 (1.4) | 8 (0.7) | 0.12 (0.03-0.46)
Admitted to NNU | 1670 (9.7) | 1497 (8.1) | 0.83 (0.77-0.89) | 69 (24.6) | 118 (10.4) | 0.42 (0.31-0.58)
Erb's palsy | 7 (0.04) | 12 (0.06) | 1.59 (0.58-4.76) | * | * | *
Fractured clavicle | 4 (0.02) | 6 (0.03) | 1.39 (0.33-6.70) | * | * | *
OGTT = oral glucose tolerance test; NNU = neonatal unit; * insufficient numbers to perform analyses for these outcomes in those identified with gestational diabetes.
Offering universal OGTT was also associated with a reduced rate of admission to the neonatal unit for the whole obstetric population and amongst those with GDM. The rates of Erb’s palsy and fractured clavicle in the infants appeared similar before and after the policy change, but at both time points rates of these outcomes were very low and the ratio estimates have very wide confidence intervals that include any association from a marked reduction to a marked increase in risk.
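To illustrate how a rate ratio of this kind is computed, the sketch below reproduces the GDM detection comparison from Table 1 using a standard log-scale normal approximation. The study itself used Stata's epitab command, so the interval printed here may differ slightly in the last decimal from the published 95% CI.

```python
# Illustrative sketch (not the study's Stata analysis): rate ratio with a
# log-scale normal-approximation 95% CI, using the counts from Table 1.

import math


def rate_ratio(events_after: int, n_after: int, events_before: int, n_before: int):
    """Rate ratio (after vs before) with an approximate 95% CI on the log scale."""
    rr = (events_after / n_after) / (events_before / n_before)
    se_log = math.sqrt(1 / events_after + 1 / events_before)  # Poisson-type approximation
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi


if __name__ == "__main__":
    # GDM identified: 1132 of 18,514 births (2008-2010) vs 280 of 17,160 (2004-2006).
    rr, lo, hi = rate_ratio(1132, 18514, 280, 17160)
    print(f"rate ratio {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```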
Conclusion
Our results suggest that offering all women an OGTT was associated with increased identification of women with GDM and severe hyperglycaemia, and with neonatal benefits for those with GDM. There was no evidence of clear differences in neonatal outcomes in the whole obstetric population, which is perhaps not surprising as only 6% were identified as having GDM when an OGTT was offered to all women and GDM diagnosis was made using the WHO criteria. Ethics: Ethical approval was not required for this work.
[ "Background", "Setting", "Participants", "Oral glucose tolerance test", "Extraction of data", "Analysis", "Implications for research", "Strengths and limitations", "Ethics", "" ]
[ "Gestational diabetes mellitus (GDM) affects 2-6% of pregnant women and is associated with increased risk of important adverse perinatal outcomes, including macrosomia and birth injury [1, 2]. There is also evidence of increased long term risk of type 2 diabetes [3] and consequent cardiovascular disease in the mothers [4] and possibly of increased long term risk of obesity and associated adverse cardio-metabolic risk in offspring [5–8].\nEvidence is increasing that treatment of GDM improves perinatal outcomes [9] supporting the case for improved identification of women with GDM. There is debate however about the relative effectiveness of different strategies for identifying women with GDM, largely because of the lack of good quality evidence. This has led to variation in clinical guidelines and practice for detecting GDM between, and within, countries. Strategies include: case by case assessment [10], selective (75 g or 100 g) oral glucose tolerance testing of high risk women identified using specific risk factors or a 50 g glucose challenge test [11] and universal testing, i.e. offering all women an oral glucose tolerance test (OGTT) [12].\nTo date in the United Kingdom (UK) there has been no recommendation to offer all women an OGTT, in practice, and more recently in clinical guidelines, selective testing of high risk women has been undertaken. Prior to 2008 in the UK there was no national recommended screening strategy to identify women with GDM. Screening if it was conducted, was at the discretion of the clinician and based on variable use of risk factors [13–15]. When risk factor screening was undertaken, a two-step approach was preferred: clinicians made a clinical assessment of each woman’s risk and offered a two hour 75 g OGTT to identify GDM, with a diagnosis based on the World Health Organisation (WHO) criteria [16].\nSince 2008 UK national clinical guidance has recommended that all women are screened by assessment of specific risk factors at their first pregnancy appointment. Any pregnant woman (not previously identified as having type 2 diabetes) with one or more risk factor: family history of diabetes; South Asian; black or middle eastern ethnicity; previous history of having a baby with macrosomia; or body mass index (kg/m2) (BMI) ≥30, should be offered an OGTT between 24 and 28 weeks gestation [11].\nThe American College of Obstetricians and Gynecologists (ACOG) also recommends a two-step approach. Women are screened for GDM at 24 to 26 weeks, either by patient history, risk factors or 50 g one hour glucose challenge test and if screen positive offered a 100 g OGTT. GDM diagnosis is made using criteria from Carpenter and Coustan or the National Diabetes Data Group [17]. By contrast the International Association of Diabetes in Pregnancy Study Group [12] (indorsed by the American Diabetes Association) recommend all women not previously identified as having type 2 diabetes (irrespective of risk factors) are offered a diagnostic 75 g OGTT at 24–28 weeks. GDM is diagnosed if any one of plasma glucose levels are >5.0 mmol/l, one hour >9.9 mmol/l or two hour >8.4 mmol/l is found [18].\nCurrent screening recommendations are based on evidence from observational studies with no evidence that diagnosis using any of the above strategies improves perinatal or long term adverse outcomes or is cost-effective. Consequently the US Preventative Services Taskforce recommends clinicians discuss screening for GDM with each woman and case by case decisions made based on risk status [10]. 
Selectively offering women an OGTT based on risk factor assessment and universally offering an OGTT to some extent represent the extremes of possible approaches for identifying women at risk. Theoretically the former approach may have a high false negative rate and miss the opportunity to prevent adverse perinatal outcomes in women at risk, whereas the latter approach may over-diagnose [19], cause unnecessary anxiety [20] and may not be cost-effective in improving perinatal outcomes. Recent policy changes from a clinician led risk factor screening approach to universal offering a diagnostic 75 g OGTT in Bradford provided us with the opportunity to compare these two approaches.\n Setting Bradford is a city in the North of England with high levels of deprivation. Approximately half of births are to women of South Asian origin and a fifth of White British pregnant women are obese (unpublished routine study hospital data from 2012). In response to the Bradford District Mortality Commission, which was undertaken in 2006 and highlighted rising infant mortality and the poor health of pregnant women in the city [21], a number of changes to clinical practice were implemented, including offering a diagnostic OGTT to all pregnant women at 26–28 weeks gestation. Prior to 2007 women were offered an OGTT at 26–28 weeks gestation following individual assessment of risk status. The aim of this study was to evaluate the policy change from case by case risk factor based assessment and selective testing to universal testing (offering a diagnostic OGTT to all women) to identify gestational diabetes.\nBradford is a city in the North of England with high levels of deprivation. Approximately half of births are to women of South Asian origin and a fifth of White British pregnant women are obese (unpublished routine study hospital data from 2012). In response to the Bradford District Mortality Commission, which was undertaken in 2006 and highlighted rising infant mortality and the poor health of pregnant women in the city [21], a number of changes to clinical practice were implemented, including offering a diagnostic OGTT to all pregnant women at 26–28 weeks gestation. Prior to 2007 women were offered an OGTT at 26–28 weeks gestation following individual assessment of risk status. The aim of this study was to evaluate the policy change from case by case risk factor based assessment and selective testing to universal testing (offering a diagnostic OGTT to all women) to identify gestational diabetes.", "Bradford is a city in the North of England with high levels of deprivation. Approximately half of births are to women of South Asian origin and a fifth of White British pregnant women are obese (unpublished routine study hospital data from 2012). In response to the Bradford District Mortality Commission, which was undertaken in 2006 and highlighted rising infant mortality and the poor health of pregnant women in the city [21], a number of changes to clinical practice were implemented, including offering a diagnostic OGTT to all pregnant women at 26–28 weeks gestation. Prior to 2007 women were offered an OGTT at 26–28 weeks gestation following individual assessment of risk status. 
The aim of this study was to evaluate the policy change from case by case risk factor based assessment and selective testing to universal testing (offering a diagnostic OGTT to all women) to identify gestational diabetes.", "We conducted a before and after comparison using data from all women who gave birth at the Bradford Royal Infirmary between 2004 to 2006 (before group) and 2008 to 2010 (after group). Those who gave birth in 2007 were excluded as this year contained a mixture of women selectively offered an OGTT and women universally offered an OGTT. In the UK, anonymised routinely collected data of those accessing National Health Service care may be used to evaluate service provision. Those using such data for service evaluation are not required to obtain verbal or written consent from patients or obtain ethical approval. Therefore neither patient consent nor ethical approval for this study was obtained, as routinely collected hospital data were used to evaluate service provision. Analyses were conducted on a completely anonymised dataset and none of the authors had access to any patient identifiers [22].", "The 75 g OGTT was administered during the study period between 26–28 weeks gestation. Fasting plasma glucose and two hour post-load plasma glucose measurements were obtained. Criteria used to diagnose GDM was based on WHO recommendations (i.e. either fasting glucose ≥6.1 mmol/L or two-hour glucose ≥7.8 mmol/L) and did not change throughout the study period [16].", "Data were extracted from hospital written and electronic medical records. Maternal risk factors (previous GDM, family history of diabetes, previous history of giving birth to a baby with macrosomia, obesity, South Asian or black ethnicity, glycosuria, raised result of any random blood glucose test and increased liquor volume) were available for women who had completed an OGTT between 2004 to 2006 (case by case assessment and selectively tested group). These risk factors were not available for women during this time period that did not complete an OGTT, nor were they available for women who were pregnant after 2007, other than ethnicity which was available for all women. Data on whether an OGTT was performed, whether they were diagnosed with GDM and maternal and neonatal outcomes (induction of labour, caesarean birth, macrosomia (birth weight equal to or greater than 4 kg), perinatal mortality, admission to neonatal unit, Erb’s palsy, fractured clavicle, were available for all births (before and after the policy change) and included in the analyses.\nData on risk factors (where available), completion of the OGTT and OGTT results were abstracted from paper medical or electronic records, outcome data were provided by the hospital electronic records system for the period 2004 to 2006. Neonatal transitional care unit data and data for 2008 to 2010 were abstracted from paper medical or electronic records. Research midwives abstracted data from paper records using a pre-prepared coded data extraction sheet. A 5% random sample of these data were independently abstracted by a research fellow; for all field codes there was greater than 98% agreement between the research fellow and research midwife abstractions. Electronic data were transferred from the electronic medical record to a research database and merged with the abstracted paper record data. These electronic data were manually checked against stored paper records and again high levels (98%) of agreement were found. 
We also compared the medical records of GDM diagnoses with the biochemical laboratory records of test results and found very high levels of agreement (>99%).", "All analyses were conducted using Stata version 10 [23]. We calculated the percentage of women with an identified risk factor and used the binomial distribution to calculate 95% confidence intervals for these prevalence’s. We used the same method to calculate the percentage and 95% confidence intervals for women who had an OGTT between 2008 and 2010 (i.e. period during which there was a policy of universal OGTT testing).\nThe proportion of women with GDM, either a fasting blood glucose ≥6.1 mmol/L, or a two hour blood glucose ≥7.8 mmol/L or both and of these the proportion with severe hyperglycaemia, either a fasting blood glucose ≥7 mmol/L, or a two hour blood glucose ≥11.1 mmol/L or both was estimated. The proportion of women/infants with each risk factor (where available) and outcome (diagnosis of GDM, induction of labour, caesarean birth, macrosomia, perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) was estimated for each time period. Rate ratios (RR), comparing outcomes after the introduction of the universal offer of a diagnostic OGTT, to those before this policy, together with their 95% confidence intervals were estimated using the epitab command in Stata [23].", "Further research is needed to understand why, when all women were offered an OGTT only 63% attended for the test, compared with 99% when the OGTT was only offered to those identified as at risk. It is also important to investigate what the effect of increasing uptake of the test might be. Ultimately, randomised trials that compare universal testing and selective testing and assessment of a range of perinatal outcomes are required, but such trials would require a very large sample size. Continued evaluation of the Bradford population and other similar populations where there is a policy change provide some evidence of the likely impact of universal testing. It is also important to include assessment of the cost-effectiveness of strategies to identify and treat GDM, on-going work may help to answer this question [32].", "A key strength of our study is its ability to examine the association of offering universal testing for OGTT compared with a more conservative strategy of selective testing based on individual assessment of risk. We were able to include in our analyses all women giving birth in Bradford over the study period examined and therefore results are unlikely to be affected by selection bias and the numbers included in the analyses are large. This approach provides a ‘real life’ evaluation of what happens when health care policy is changed, rather than findings from randomised trials that may overestimate the effect of an intervention [33, 34]. Whilst routine clinical data may be less accurate than data collected specifically for research purposes we carefully checked the reliability of data from different sources, where possible (e.g. electronic and paper medical records and laboratory results) and completed an independent abstraction of a random sample of data from medical records to improve the validity of the data we have used here.\nThe main weakness of this study is that it is a before and after comparison and therefore we cannot assume that any associations are caused by the change in policy of offering an OGTT to all pregnant women. 
For 2006 to 2008 we only had access to aggregated data for outcomes therefore we could only examine unadjusted associations and not adjust analyses to take account of the individual level background characteristics and confounders of the population and how these might have changed over time. Although there were no other changes in policy, any other characteristic that changed over this period could explain the findings we have observed (i.e. could have confounded our assumed association of change to universal offering of an OGTT with the outcomes assessed). For example, there was an increase in induction of labour rate following the introduction of universal offer of an OGTT, both for the whole obstetric population and for those with GDM, which cannot be wholly accounted for by the increase in GDM diagnosis.\nAs noted above there was already evidence that the rate of diagnosis of GDM was increasing prior to the change in policy, but the rate of this increase was much more marked after the policy was introduced. Despite the increasing rates of diagnosis before the policy change, there were still improved outcomes for the group universally offered an OGTT compared to the group selectively offered an OGTT.\nData on detailed risk factor assessment for all women across the whole period was not available; data were only available for those in the before group who completed an OGTT. Thus, we were unable to examine how well risk factor screening was implemented before the offer of universal testing or to compare risk factors between those who completed an OGTT and those who declined the invitation after the policy change. As noted above we are unable to complete a cost-effectiveness analysis in this study. Lastly, the population of Bradford are a high risk group as half of births are to women of South Asian origin and a fifth of the non-South Asian population are obese. Thus, at least 60% of this population would have at least one risk factor deemed by the NICE guidelines introduced in 2008, to make them eligible for an OGTT [11]. We cannot conclude that the results found in this population would necessarily generalise to other populations or to populations where other strategies or criteria are used, but given the whole population coverage it is likely that they would generalise to populations with similar high levels of risk.", "Ethical approval was not required for this work.", "Additional file 1:\nProportions of pregnant women who completed an OGTT who had documented risk factors* for GDM.\n(DOC 32 KB)" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Setting", "Methods", "Participants", "Oral glucose tolerance test", "Extraction of data", "Analysis", "Results", "Discussion", "Implications for research", "Strengths and limitations", "Conclusion", "Ethics", "Electronic supplementary material", "" ]
[ "Gestational diabetes mellitus (GDM) affects 2-6% of pregnant women and is associated with increased risk of important adverse perinatal outcomes, including macrosomia and birth injury [1, 2]. There is also evidence of increased long term risk of type 2 diabetes [3] and consequent cardiovascular disease in the mothers [4] and possibly of increased long term risk of obesity and associated adverse cardio-metabolic risk in offspring [5–8].\nEvidence is increasing that treatment of GDM improves perinatal outcomes [9] supporting the case for improved identification of women with GDM. There is debate however about the relative effectiveness of different strategies for identifying women with GDM, largely because of the lack of good quality evidence. This has led to variation in clinical guidelines and practice for detecting GDM between, and within, countries. Strategies include: case by case assessment [10], selective (75 g or 100 g) oral glucose tolerance testing of high risk women identified using specific risk factors or a 50 g glucose challenge test [11] and universal testing, i.e. offering all women an oral glucose tolerance test (OGTT) [12].\nTo date in the United Kingdom (UK) there has been no recommendation to offer all women an OGTT, in practice, and more recently in clinical guidelines, selective testing of high risk women has been undertaken. Prior to 2008 in the UK there was no national recommended screening strategy to identify women with GDM. Screening if it was conducted, was at the discretion of the clinician and based on variable use of risk factors [13–15]. When risk factor screening was undertaken, a two-step approach was preferred: clinicians made a clinical assessment of each woman’s risk and offered a two hour 75 g OGTT to identify GDM, with a diagnosis based on the World Health Organisation (WHO) criteria [16].\nSince 2008 UK national clinical guidance has recommended that all women are screened by assessment of specific risk factors at their first pregnancy appointment. Any pregnant woman (not previously identified as having type 2 diabetes) with one or more risk factor: family history of diabetes; South Asian; black or middle eastern ethnicity; previous history of having a baby with macrosomia; or body mass index (kg/m2) (BMI) ≥30, should be offered an OGTT between 24 and 28 weeks gestation [11].\nThe American College of Obstetricians and Gynecologists (ACOG) also recommends a two-step approach. Women are screened for GDM at 24 to 26 weeks, either by patient history, risk factors or 50 g one hour glucose challenge test and if screen positive offered a 100 g OGTT. GDM diagnosis is made using criteria from Carpenter and Coustan or the National Diabetes Data Group [17]. By contrast the International Association of Diabetes in Pregnancy Study Group [12] (indorsed by the American Diabetes Association) recommend all women not previously identified as having type 2 diabetes (irrespective of risk factors) are offered a diagnostic 75 g OGTT at 24–28 weeks. GDM is diagnosed if any one of plasma glucose levels are >5.0 mmol/l, one hour >9.9 mmol/l or two hour >8.4 mmol/l is found [18].\nCurrent screening recommendations are based on evidence from observational studies with no evidence that diagnosis using any of the above strategies improves perinatal or long term adverse outcomes or is cost-effective. Consequently the US Preventative Services Taskforce recommends clinicians discuss screening for GDM with each woman and case by case decisions made based on risk status [10]. 
Selectively offering women an OGTT based on risk factor assessment and universally offering an OGTT to some extent represent the extremes of possible approaches for identifying women at risk. Theoretically the former approach may have a high false negative rate and miss the opportunity to prevent adverse perinatal outcomes in women at risk, whereas the latter approach may over-diagnose [19], cause unnecessary anxiety [20] and may not be cost-effective in improving perinatal outcomes. Recent policy changes from a clinician led risk factor screening approach to universal offering a diagnostic 75 g OGTT in Bradford provided us with the opportunity to compare these two approaches.\n Setting Bradford is a city in the North of England with high levels of deprivation. Approximately half of births are to women of South Asian origin and a fifth of White British pregnant women are obese (unpublished routine study hospital data from 2012). In response to the Bradford District Mortality Commission, which was undertaken in 2006 and highlighted rising infant mortality and the poor health of pregnant women in the city [21], a number of changes to clinical practice were implemented, including offering a diagnostic OGTT to all pregnant women at 26–28 weeks gestation. Prior to 2007 women were offered an OGTT at 26–28 weeks gestation following individual assessment of risk status. The aim of this study was to evaluate the policy change from case by case risk factor based assessment and selective testing to universal testing (offering a diagnostic OGTT to all women) to identify gestational diabetes.\nBradford is a city in the North of England with high levels of deprivation. Approximately half of births are to women of South Asian origin and a fifth of White British pregnant women are obese (unpublished routine study hospital data from 2012). In response to the Bradford District Mortality Commission, which was undertaken in 2006 and highlighted rising infant mortality and the poor health of pregnant women in the city [21], a number of changes to clinical practice were implemented, including offering a diagnostic OGTT to all pregnant women at 26–28 weeks gestation. Prior to 2007 women were offered an OGTT at 26–28 weeks gestation following individual assessment of risk status. The aim of this study was to evaluate the policy change from case by case risk factor based assessment and selective testing to universal testing (offering a diagnostic OGTT to all women) to identify gestational diabetes.", "Bradford is a city in the North of England with high levels of deprivation. Approximately half of births are to women of South Asian origin and a fifth of White British pregnant women are obese (unpublished routine study hospital data from 2012). In response to the Bradford District Mortality Commission, which was undertaken in 2006 and highlighted rising infant mortality and the poor health of pregnant women in the city [21], a number of changes to clinical practice were implemented, including offering a diagnostic OGTT to all pregnant women at 26–28 weeks gestation. Prior to 2007 women were offered an OGTT at 26–28 weeks gestation following individual assessment of risk status. 
The aim of this study was to evaluate the policy change from case by case risk factor based assessment and selective testing to universal testing (offering a diagnostic OGTT to all women) to identify gestational diabetes.", " Participants We conducted a before and after comparison using data from all women who gave birth at the Bradford Royal Infirmary between 2004 to 2006 (before group) and 2008 to 2010 (after group). Those who gave birth in 2007 were excluded as this year contained a mixture of women selectively offered an OGTT and women universally offered an OGTT. In the UK, anonymised routinely collected data of those accessing National Health Service care may be used to evaluate service provision. Those using such data for service evaluation are not required to obtain verbal or written consent from patients or obtain ethical approval. Therefore neither patient consent nor ethical approval for this study was obtained, as routinely collected hospital data were used to evaluate service provision. Analyses were conducted on a completely anonymised dataset and none of the authors had access to any patient identifiers [22].\nWe conducted a before and after comparison using data from all women who gave birth at the Bradford Royal Infirmary between 2004 to 2006 (before group) and 2008 to 2010 (after group). Those who gave birth in 2007 were excluded as this year contained a mixture of women selectively offered an OGTT and women universally offered an OGTT. In the UK, anonymised routinely collected data of those accessing National Health Service care may be used to evaluate service provision. Those using such data for service evaluation are not required to obtain verbal or written consent from patients or obtain ethical approval. Therefore neither patient consent nor ethical approval for this study was obtained, as routinely collected hospital data were used to evaluate service provision. Analyses were conducted on a completely anonymised dataset and none of the authors had access to any patient identifiers [22].\n Oral glucose tolerance test The 75 g OGTT was administered during the study period between 26–28 weeks gestation. Fasting plasma glucose and two hour post-load plasma glucose measurements were obtained. Criteria used to diagnose GDM was based on WHO recommendations (i.e. either fasting glucose ≥6.1 mmol/L or two-hour glucose ≥7.8 mmol/L) and did not change throughout the study period [16].\nThe 75 g OGTT was administered during the study period between 26–28 weeks gestation. Fasting plasma glucose and two hour post-load plasma glucose measurements were obtained. Criteria used to diagnose GDM was based on WHO recommendations (i.e. either fasting glucose ≥6.1 mmol/L or two-hour glucose ≥7.8 mmol/L) and did not change throughout the study period [16].\n Extraction of data Data were extracted from hospital written and electronic medical records. Maternal risk factors (previous GDM, family history of diabetes, previous history of giving birth to a baby with macrosomia, obesity, South Asian or black ethnicity, glycosuria, raised result of any random blood glucose test and increased liquor volume) were available for women who had completed an OGTT between 2004 to 2006 (case by case assessment and selectively tested group). These risk factors were not available for women during this time period that did not complete an OGTT, nor were they available for women who were pregnant after 2007, other than ethnicity which was available for all women. 
Data on whether an OGTT was performed, whether they were diagnosed with GDM and maternal and neonatal outcomes (induction of labour, caesarean birth, macrosomia (birth weight equal to or greater than 4 kg), perinatal mortality, admission to neonatal unit, Erb’s palsy, fractured clavicle, were available for all births (before and after the policy change) and included in the analyses.\nData on risk factors (where available), completion of the OGTT and OGTT results were abstracted from paper medical or electronic records, outcome data were provided by the hospital electronic records system for the period 2004 to 2006. Neonatal transitional care unit data and data for 2008 to 2010 were abstracted from paper medical or electronic records. Research midwives abstracted data from paper records using a pre-prepared coded data extraction sheet. A 5% random sample of these data were independently abstracted by a research fellow; for all field codes there was greater than 98% agreement between the research fellow and research midwife abstractions. Electronic data were transferred from the electronic medical record to a research database and merged with the abstracted paper record data. These electronic data were manually checked against stored paper records and again high levels (98%) of agreement were found. We also compared the medical records of GDM diagnoses with the biochemical laboratory records of test results and found very high levels of agreement (>99%).\nData were extracted from hospital written and electronic medical records. Maternal risk factors (previous GDM, family history of diabetes, previous history of giving birth to a baby with macrosomia, obesity, South Asian or black ethnicity, glycosuria, raised result of any random blood glucose test and increased liquor volume) were available for women who had completed an OGTT between 2004 to 2006 (case by case assessment and selectively tested group). These risk factors were not available for women during this time period that did not complete an OGTT, nor were they available for women who were pregnant after 2007, other than ethnicity which was available for all women. Data on whether an OGTT was performed, whether they were diagnosed with GDM and maternal and neonatal outcomes (induction of labour, caesarean birth, macrosomia (birth weight equal to or greater than 4 kg), perinatal mortality, admission to neonatal unit, Erb’s palsy, fractured clavicle, were available for all births (before and after the policy change) and included in the analyses.\nData on risk factors (where available), completion of the OGTT and OGTT results were abstracted from paper medical or electronic records, outcome data were provided by the hospital electronic records system for the period 2004 to 2006. Neonatal transitional care unit data and data for 2008 to 2010 were abstracted from paper medical or electronic records. Research midwives abstracted data from paper records using a pre-prepared coded data extraction sheet. A 5% random sample of these data were independently abstracted by a research fellow; for all field codes there was greater than 98% agreement between the research fellow and research midwife abstractions. Electronic data were transferred from the electronic medical record to a research database and merged with the abstracted paper record data. These electronic data were manually checked against stored paper records and again high levels (98%) of agreement were found. 
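For illustration, the double-abstraction check described above amounts to a field-by-field comparison of two independently coded copies of the same records. The sketch below is a hypothetical Python example of that kind of agreement calculation; the field names and toy data are invented and this is not the study's actual extraction or checking code.

```python
# Hypothetical sketch of a field-level agreement check between two
# independent abstractions of the same records (e.g. research midwife vs
# research fellow). Field names and data are invented for illustration.
from typing import Dict, List


def field_agreement(primary: List[Dict], recheck: List[Dict]) -> Dict[str, float]:
    """Percentage agreement per field between two abstractions of the same records."""
    agreement = {}
    for field in primary[0]:
        matches = sum(1 for a, b in zip(primary, recheck) if a[field] == b[field])
        agreement[field] = 100.0 * matches / len(primary)
    return agreement


# Toy example: two records, three abstracted fields.
midwife = [{"ogtt_done": 1, "gdm": 0, "induction": 1},
           {"ogtt_done": 1, "gdm": 1, "induction": 0}]
fellow = [{"ogtt_done": 1, "gdm": 0, "induction": 1},
          {"ogtt_done": 1, "gdm": 1, "induction": 1}]
print(field_agreement(midwife, fellow))
# {'ogtt_done': 100.0, 'gdm': 100.0, 'induction': 50.0}
```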
We also compared the medical records of GDM diagnoses with the biochemical laboratory records of test results and found very high levels of agreement (>99%).\n Analysis All analyses were conducted using Stata version 10 [23]. We calculated the percentage of women with an identified risk factor and used the binomial distribution to calculate 95% confidence intervals for these prevalence’s. We used the same method to calculate the percentage and 95% confidence intervals for women who had an OGTT between 2008 and 2010 (i.e. period during which there was a policy of universal OGTT testing).\nThe proportion of women with GDM, either a fasting blood glucose ≥6.1 mmol/L, or a two hour blood glucose ≥7.8 mmol/L or both and of these the proportion with severe hyperglycaemia, either a fasting blood glucose ≥7 mmol/L, or a two hour blood glucose ≥11.1 mmol/L or both was estimated. The proportion of women/infants with each risk factor (where available) and outcome (diagnosis of GDM, induction of labour, caesarean birth, macrosomia, perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) was estimated for each time period. Rate ratios (RR), comparing outcomes after the introduction of the universal offer of a diagnostic OGTT, to those before this policy, together with their 95% confidence intervals were estimated using the epitab command in Stata [23].\nAll analyses were conducted using Stata version 10 [23]. We calculated the percentage of women with an identified risk factor and used the binomial distribution to calculate 95% confidence intervals for these prevalence’s. We used the same method to calculate the percentage and 95% confidence intervals for women who had an OGTT between 2008 and 2010 (i.e. period during which there was a policy of universal OGTT testing).\nThe proportion of women with GDM, either a fasting blood glucose ≥6.1 mmol/L, or a two hour blood glucose ≥7.8 mmol/L or both and of these the proportion with severe hyperglycaemia, either a fasting blood glucose ≥7 mmol/L, or a two hour blood glucose ≥11.1 mmol/L or both was estimated. The proportion of women/infants with each risk factor (where available) and outcome (diagnosis of GDM, induction of labour, caesarean birth, macrosomia, perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) was estimated for each time period. Rate ratios (RR), comparing outcomes after the introduction of the universal offer of a diagnostic OGTT, to those before this policy, together with their 95% confidence intervals were estimated using the epitab command in Stata [23].", "We conducted a before and after comparison using data from all women who gave birth at the Bradford Royal Infirmary between 2004 to 2006 (before group) and 2008 to 2010 (after group). Those who gave birth in 2007 were excluded as this year contained a mixture of women selectively offered an OGTT and women universally offered an OGTT. In the UK, anonymised routinely collected data of those accessing National Health Service care may be used to evaluate service provision. Those using such data for service evaluation are not required to obtain verbal or written consent from patients or obtain ethical approval. Therefore neither patient consent nor ethical approval for this study was obtained, as routinely collected hospital data were used to evaluate service provision. 
Analyses were conducted on a completely anonymised dataset and none of the authors had access to any patient identifiers [22].", "The 75 g OGTT was administered during the study period between 26–28 weeks gestation. Fasting plasma glucose and two hour post-load plasma glucose measurements were obtained. Criteria used to diagnose GDM were based on WHO recommendations (i.e. either fasting glucose ≥6.1 mmol/L or two-hour glucose ≥7.8 mmol/L) and did not change throughout the study period [16].", "Data were extracted from hospital written and electronic medical records. Maternal risk factors (previous GDM, family history of diabetes, previous history of giving birth to a baby with macrosomia, obesity, South Asian or black ethnicity, glycosuria, raised result of any random blood glucose test and increased liquor volume) were available for women who had completed an OGTT between 2004 and 2006 (case by case assessment and selectively tested group). These risk factors were not available for women in this time period who did not complete an OGTT, nor were they available for women who were pregnant after 2007, other than ethnicity, which was available for all women. Data on whether an OGTT was performed, whether women were diagnosed with GDM, and maternal and neonatal outcomes (induction of labour, caesarean birth, macrosomia (birth weight equal to or greater than 4 kg), perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) were available for all births (before and after the policy change) and included in the analyses.\nData on risk factors (where available), completion of the OGTT and OGTT results were abstracted from paper medical or electronic records; outcome data were provided by the hospital electronic records system for the period 2004 to 2006. Neonatal transitional care unit data and data for 2008 to 2010 were abstracted from paper medical or electronic records. Research midwives abstracted data from paper records using a pre-prepared coded data extraction sheet. A 5% random sample of these data were independently abstracted by a research fellow; for all field codes there was greater than 98% agreement between the research fellow and research midwife abstractions. Electronic data were transferred from the electronic medical record to a research database and merged with the abstracted paper record data. These electronic data were manually checked against stored paper records and again high levels (98%) of agreement were found. We also compared the medical records of GDM diagnoses with the biochemical laboratory records of test results and found very high levels of agreement (>99%).", "All analyses were conducted using Stata version 10 [23]. We calculated the percentage of women with an identified risk factor and used the binomial distribution to calculate 95% confidence intervals for these prevalences. We used the same method to calculate the percentage and 95% confidence intervals for women who had an OGTT between 2008 and 2010 (i.e. the period during which there was a policy of universal OGTT testing).\nWe estimated the proportion of women with GDM (either a fasting blood glucose ≥6.1 mmol/L, a two hour blood glucose ≥7.8 mmol/L, or both) and, of these, the proportion with severe hyperglycaemia (either a fasting blood glucose ≥7 mmol/L, a two hour blood glucose ≥11.1 mmol/L, or both). 
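For concreteness, the sketch below shows how the diagnostic thresholds just described translate into a classification of a single OGTT result. It is an illustrative example only, not the authors' code; the function and argument names are hypothetical, and severe hyperglycaemia is treated as a subset of GDM, as in the analysis above.

```python
# Illustrative sketch: classify one 75 g OGTT result using the WHO-based
# thresholds applied throughout the study period (GDM: fasting >= 6.1 mmol/L
# or two-hour >= 7.8 mmol/L; severe hyperglycaemia: fasting >= 7.0 mmol/L or
# two-hour >= 11.1 mmol/L). Function and argument names are hypothetical.

def classify_ogtt(fasting_mmol_l: float, two_hour_mmol_l: float) -> str:
    """Return 'severe hyperglycaemia', 'GDM' or 'no GDM' for a single OGTT."""
    if fasting_mmol_l >= 7.0 or two_hour_mmol_l >= 11.1:
        return "severe hyperglycaemia"  # counted within GDM in this study
    if fasting_mmol_l >= 6.1 or two_hour_mmol_l >= 7.8:
        return "GDM"
    return "no GDM"


# Example: a fasting value of 6.3 mmol/L with a two-hour value of 7.1 mmol/L
# meets the fasting criterion only, so the result is classified as GDM.
print(classify_ogtt(6.3, 7.1))   # -> GDM
print(classify_ogtt(5.2, 11.4))  # -> severe hyperglycaemia
```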
The proportion of women/infants with each risk factor (where available) and outcome (diagnosis of GDM, induction of labour, caesarean birth, macrosomia, perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) was estimated for each time period. Rate ratios (RR) comparing outcomes after the introduction of the universal offer of a diagnostic OGTT with those before this policy, together with their 95% confidence intervals, were estimated using the epitab command in Stata [23].", "Between 2004 and 2006 (selectively offered group), 17,160 women gave birth; 1162 (7%) were offered an OGTT following risk assessment by their clinician and 1151 completed the test (7% of all women and 99% of those offered the test). Between 2008 and 2010 (universally offered group), 18,514 women gave birth; 18,328 (99%) were offered an OGTT and 11,516 completed the test (62% of all women and 63% of those offered the test) (Table 1). For births occurring between 2004 and 2006, 58% of those who completed the OGTT were from an ethnic group with increased risk (the most common risk factor), compared with 55% after the policy change (risk ratio 0.95, 95% CI: 0.87 to 1.03). Obesity was the second most common individual risk factor for women selectively offered an OGTT (Additional file 1: Table S1). A small proportion (1%) of the women in the universally offered group were not offered an OGTT; reasons were pre-existing diabetes or late attendance for antenatal care. Of those attending for the OGTT (after group only), 3% did not complete the test, either because they had been unable to fast overnight or because they were unable to drink the glucose solution due to nausea. These women were all offered a second appointment and over 99% completed the test at that second appointment.

Table 1. Association between selective and universal offer of an oral glucose tolerance test (OGTT) and the gestational diabetes and severe hyperglycaemia detection rates (number tested, with percentage of the whole population and percentage of the screened population in parentheses)

Detection rates | 2004 | 2005 | 2006 | 2004-2006 (selective offer) | 2008 | 2009 | 2010 | 2008-2010 (universal offer) | Rate ratio comparing detection rates after to before (95% CI): whole population; screened population
Whole population | 5512 | 5784 | 5864 | 17160 | 6162 | 6251 | 6101 | 18514 | -
Completed OGTT | 323 (6) | 299 (5) | 529 (9) | 1151 (7) | 3797 (62) | 3947 (63) | 3772 (62) | 11516 (62) | -
GDM(a) identified | 62 (1, 19) | 83 (1, 28) | 135 (2, 26) | 280 (2, 24) | 311 (5, 8) | 398 (6, 10) | 423 (7, 11) | 1132 (6, 10) | 3.75 (3.28-4.29); 0.40 (0.35-0.46)
Severe hyperglycaemia(b) identified | 21 (0.4, 34) | 33 (0.6, 40) | 29 (0.5, 21) | 83 (0.5, 30) | 46 (0.7, 14) | 63 (1.0, 16) | 63 (1.0, 16) | 172 (1.0, 15) | 1.96 (1.50-2.58); 0.51 (0.39-0.67)

OGTT = oral glucose tolerance test. GDM = gestational diabetes mellitus. CI = confidence interval.
(a) GDM = either a fasting blood glucose ≥6.1 mmol/L, or a two hour blood glucose ≥7.8 mmol/L, or both.
(b) Severe hyperglycaemia = either a fasting blood glucose ≥7 mmol/L, or a two hour blood glucose ≥11.1 mmol/L, or both.
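As an aside, the whole-population rate ratios and the binomial confidence intervals described in the Analysis section can be re-derived from the counts in Table 1. The sketch below is an illustrative re-implementation in Python (not the authors' Stata epitab output); it assumes a standard log-scale normal approximation for the ratio of two proportions and an exact (Clopper-Pearson) interval for a single proportion, which may differ slightly from the exact method used in the study.

```python
# Illustrative re-computation of Table 1 quantities; not the authors' code.
# The rate ratio CI uses the usual log-scale normal approximation for a
# ratio of two proportions; the prevalence CI is the exact Clopper-Pearson
# interval based on the beta distribution.
import math

from scipy.stats import beta


def rate_ratio(cases_after, n_after, cases_before, n_before, z=1.96):
    """Ratio of two proportions with an approximate 95% CI on the log scale."""
    rr = (cases_after / n_after) / (cases_before / n_before)
    se_log = math.sqrt(1 / cases_after - 1 / n_after
                       + 1 / cases_before - 1 / n_before)
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)


def exact_binomial_ci(k, n, alpha=0.05):
    """Clopper-Pearson (exact) confidence interval for a proportion k/n."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper


# GDM detected in the whole obstetric population (Table 1):
# 1132/18514 after the policy change vs 280/17160 before.
print(rate_ratio(1132, 18514, 280, 17160))  # approx (3.75, 3.3, 4.3)
# Uptake of the OGTT among those offered it after the policy change.
print(exact_binomial_ci(11516, 18328))      # approx (0.62, 0.64)
```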
The proportion diagnosed with GDM increased almost fourfold after the introduction of the policy of offering a diagnostic OGTT to all pregnant women compared with selectively offering the test (2% to 6%; rate ratio (RR) 3.75, 95% CI 3.28 to 4.29) (Table 1). The proportion identified as having severe hyperglycaemia doubled following the change in policy (0.5% to 1.0%; RR 1.96, 95% CI 1.50 to 2.58) (Table 1). However, the case detection rate among those tested (for both GDM and severe hyperglycaemia in those with GDM) reduced by 50-60% following universal offering of a diagnostic OGTT, reflecting an increase in those offered the test who were not at risk (Table 1).\nTable 2 compares adverse maternal and neonatal outcomes between the two testing strategies. The induction of labour rate increased both in the whole obstetric population and in women with GDM after the introduction of offering universal OGTT. The caesarean section rate in the whole population was similar before and after the introduction of offering universal OGTT, but introduction of this universal offer was associated with a reduction in caesarean section rates amongst those with GDM. Similarly, offering universal OGTT was not associated with a change in the rate of macrosomia in the whole obstetric population, but was associated with a marked reduction in its rate amongst those with GDM. There was also a reduction in perinatal mortality for women with GDM after the introduction of the policy and some evidence of a weaker reduction in the whole population, but the latter rate ratio had wide confidence intervals that included the null value.

Table 2. Association between selective and universal offer of an OGTT to identify gestational diabetes and risk of adverse maternal and neonatal outcomes in the whole obstetric population and in women with gestational diabetes

Outcome | Whole obstetric population: selective offer 2004-2006 (N = 17160), N (%) | universal offer 2008-2010 (N = 18514), N (%) | rate ratio after vs before (95% CI) | Women identified with gestational diabetes: selective offer 2004-2006 (N = 280), N (%) | universal offer 2008-2010 (N = 1132), N (%) | rate ratio after vs before (95% CI)
Induction of labour | 2422 (14.1) | 3678 (20.0) | 1.43 (1.35-1.50) | 120 (42.9) | 587 (51.9) | 1.21 (1.00-1.49)
Caesarean birth | 3477 (20.3) | 3709 (20.0) | 1.00 (0.96-1.05) | 122 (43.6) | 345 (30.5) | 0.70 (0.57-0.87)
Macrosomia (birth weight ≥4 kg) | 1105 (6.4) | 1219 (6.6) | 1.04 (0.95-1.12) | 45 (16.1) | 50 (4.4) | 0.22 (0.15-0.34)
Perinatal mortality | 228 (1.3) | 212 (1.1) | 0.86 (0.71-1.04) | 4 (1.4) | 8 (0.7) | 0.12 (0.03-0.46)
Admitted to NNU | 1670 (9.7) | 1497 (8.1) | 0.83 (0.77-0.89) | 69 (24.6) | 118 (10.4) | 0.42 (0.31-0.58)
Erb’s palsy | 7 (0.04) | 12 (0.06) | 1.59 (0.58-4.76) | * | * | *
Fractured clavicle | 4 (0.02) | 6 (0.03) | 1.39 (0.33-6.70) | * | * | *

OGTT = oral glucose tolerance test. NNU = neonatal unit. *Insufficient numbers to perform analyses for these outcomes in those identified with gestational diabetes.

Offering universal OGTT was also associated with a reduced rate of admission to the neonatal unit for the whole obstetric population and amongst those with GDM. 
The rates of Erb’s palsy and fractured clavicle in the infants appeared similar before and after the policy change, but at both time points rates of these outcomes were very low and the ratio estimates have very wide confidence intervals that include any association from a marked reduction to a marked increase in risk.", "Changing antenatal care policy for identifying women with GDM, from case by case assessment with selective diagnostic OGTT to offering a diagnostic OGTT to all pregnant women, was associated with increased rates of GDM diagnoses and severe hyperglycaemia in the whole obstetric population, but with lower detection rates in those who completed the OGTT. This suggests that universal offer of an OGTT is associated with increased identification of mild to moderate hyperglycaemia. There was also a reduction from 94% to 63% of those offered a diagnostic OGTT who accepted and completed the test. The induction of labour rate in the whole obstetric population and in those diagnosed with GDM increased when all women were offered a universal diagnostic OGTT, with no overall increase in caesarean section rate and a decrease in those with GDM. Offering all women diagnostic testing was also associated with improved neonatal outcomes in women identified with GDM, including a marked reduction in rates of macrosomia, neonatal mortality and admissions to neonatal care.\nThere was no strong evidence that offering an OGTT to all women affected most outcome rates for the whole obstetric population. There was some suggestion that perinatal mortality may have reduced in the whole obstetric population following the policy change, but here the confidence intervals were wide and just included the null value.\nThus, overall these results suggest that universal testing with OGTT in a high risk population such as that in Bradford, with a large proportion at risk because of their ethnicity or obesity, has some beneficial effects in terms of reducing adverse perinatal outcomes in those identified with GDM, without increasing caesarean section rates or adverse neonatal outcomes in the whole obstetric population. There were no changes to treatment policy during the study period that could account for the differences in outcomes demonstrated. A ‘step up’ approach was followed depending on severity of hyperglycaemia, whereby diet and exercise modification was advised and metformin and/or insulin added accordingly.\nThe reduced uptake of the invitation for a universally offered OGTT is similar to that in other studies, which have reported rates of 58% [24] and 73% [25], and may reflect a difference in attitudes to a test once it is offered to everyone. It is plausible that both health care practitioners and the women themselves see the test as less important when it is not restricted to those designated as high risk following individual assessment. For example, in the absence of risk factors health professionals may place less emphasis on the importance of the test. Unfortunately, we did not have detailed information on risk factors for those attending antenatal care both before and after the change in policy, to examine whether uptake amongst the most at risk had changed over time. It is also possible that those who declined the test were different in some way to those who accepted the test offer and that any benefits of universal testing may be greater if uptake could be improved from 63%, though we are unable to examine this here. 
Uptake of OGTT offer however may increase over time as clinicians and women become more familiar with the test and reasons for testing.\nThe potential benefits of offering universal testing and treatment of cases must be weighed against the cost of this service and any possible adverse effects to the pregnant women. Criteria for GDM diagnosis did not change across the study period, however it is notable that before the introduction of universal diagnostic testing three women were tested for each woman diagnosed with GDM, whereas once universal invitation was introduced ten women were tested for each woman diagnosed. If it were possible to increase the uptake of testing with universal invitation this difference would likely increase.\nThe small proportion at both time points of women who presented for the test, but could not complete it, because they had not fasted or were nauseous following drinking the glucose solution suggests that once a woman has decided to accept the invitation the test is feasible in the vast majority. We are unable in this study to examine cost-effectiveness as we do not have comprehensive data on the extent of risk factor screening that was completed in all women in the before group and since this is a before and after comparison we are only able to assess association rather than causation/effect.\nThere were no changes to criteria thresholds for GDM diagnosis within the study period, which would be likely to affect prevalence of GDM, [26] even so prevalence increased irrespective of strategy used. The increasing prevalence of maternal obesity and an increasing awareness amongst clinicians of the association between obesity and poor pregnancy outcomes may be responsible for the changes seen. For example, in 2004 three women were tested for GDM because of a raised BMI, in 2005, 17 women and in 2006, 225 women. Different risk factors are associated with different degrees of risk for GDM development. South Asian ethnicity and family history of diabetes carry greater risk of GDM development compared with BMI ≥30 [27]. However, we found no difference in the proportion of women who completed the OGTT before and after introduction of universal testing who belonged to an at-risk ethnic group, which is perhaps surprising as 50% of the study population are of South Asian ethnicity, this may reflect a belief that the use of ethnicity as a risk factor would result in too great a cost burden at the study hospital prior to the policy change or that the risk conveyed was not great enough to necessitate screening.\nOur findings with respect to associations in the group of women who are diagnosed with GDM are consistent with randomised trials of more intensive treatment of GDM, suggesting that the increased detection rates did result in effective treatment in those with GDM. For example, recent randomised trials suggest that intensive treatment effectively reduces macrosomia and other adverse neonatal outcomes [28, 29]. 
With respect to maternal outcomes these trials had different findings, with one showing intensive treatment resulted in an increase in induction rates (as in our study), but no effect on caesarean section rates [28] and the second, no difference in induction rate and a reduction in caesarean section rates (the latter as in our study) [29].\nThe reduction in rates of macrosomia, comparing births after the introduction of universal offering of OGTT to those before is also consistent with substantial evidence that maternal hyperglycaemia causes increased birth size overall and increased adiposity at birth [8, 30]. Macrosomia complicates the intrapartum, rather than the antenatal period, and is therefore more likely to be associated with birth trauma and arrested labour rather than perinatal mortality [31]. In our study population the numbers experiencing birth trauma (Erb’s palsy or fractured clavicle) were too few to make meaningful conclusions about the change in policy to offering universal testing.\n Implications for research Further research is needed to understand why, when all women were offered an OGTT only 63% attended for the test, compared with 99% when the OGTT was only offered to those identified as at risk. It is also important to investigate what the effect of increasing uptake of the test might be. Ultimately, randomised trials that compare universal testing and selective testing and assessment of a range of perinatal outcomes are required, but such trials would require a very large sample size. Continued evaluation of the Bradford population and other similar populations where there is a policy change provide some evidence of the likely impact of universal testing. It is also important to include assessment of the cost-effectiveness of strategies to identify and treat GDM, on-going work may help to answer this question [32].\nFurther research is needed to understand why, when all women were offered an OGTT only 63% attended for the test, compared with 99% when the OGTT was only offered to those identified as at risk. It is also important to investigate what the effect of increasing uptake of the test might be. Ultimately, randomised trials that compare universal testing and selective testing and assessment of a range of perinatal outcomes are required, but such trials would require a very large sample size. Continued evaluation of the Bradford population and other similar populations where there is a policy change provide some evidence of the likely impact of universal testing. It is also important to include assessment of the cost-effectiveness of strategies to identify and treat GDM, on-going work may help to answer this question [32].\n Strengths and limitations A key strength of our study is its ability to examine the association of offering universal testing for OGTT compared with a more conservative strategy of selective testing based on individual assessment of risk. We were able to include in our analyses all women giving birth in Bradford over the study period examined and therefore results are unlikely to be affected by selection bias and the numbers included in the analyses are large. This approach provides a ‘real life’ evaluation of what happens when health care policy is changed, rather than findings from randomised trials that may overestimate the effect of an intervention [33, 34]. 
Whilst routine clinical data may be less accurate than data collected specifically for research purposes we carefully checked the reliability of data from different sources, where possible (e.g. electronic and paper medical records and laboratory results) and completed an independent abstraction of a random sample of data from medical records to improve the validity of the data we have used here.\nThe main weakness of this study is that it is a before and after comparison and therefore we cannot assume that any associations are caused by the change in policy of offering an OGTT to all pregnant women. For 2006 to 2008 we only had access to aggregated data for outcomes therefore we could only examine unadjusted associations and not adjust analyses to take account of the individual level background characteristics and confounders of the population and how these might have changed over time. Although there were no other changes in policy, any other characteristic that changed over this period could explain the findings we have observed (i.e. could have confounded our assumed association of change to universal offering of an OGTT with the outcomes assessed). For example, there was an increase in induction of labour rate following the introduction of universal offer of an OGTT, both for the whole obstetric population and for those with GDM, which cannot be wholly accounted for by the increase in GDM diagnosis.\nAs noted above there was already evidence that the rate of diagnosis of GDM was increasing prior to the change in policy, but the rate of this increase was much more marked after the policy was introduced. Despite the increasing rates of diagnosis before the policy change, there were still improved outcomes for the group universally offered an OGTT compared to the group selectively offered an OGTT.\nData on detailed risk factor assessment for all women across the whole period was not available; data were only available for those in the before group who completed an OGTT. Thus, we were unable to examine how well risk factor screening was implemented before the offer of universal testing or to compare risk factors between those who completed an OGTT and those who declined the invitation after the policy change. As noted above we are unable to complete a cost-effectiveness analysis in this study. Lastly, the population of Bradford are a high risk group as half of births are to women of South Asian origin and a fifth of the non-South Asian population are obese. Thus, at least 60% of this population would have at least one risk factor deemed by the NICE guidelines introduced in 2008, to make them eligible for an OGTT [11]. We cannot conclude that the results found in this population would necessarily generalise to other populations or to populations where other strategies or criteria are used, but given the whole population coverage it is likely that they would generalise to populations with similar high levels of risk.\nA key strength of our study is its ability to examine the association of offering universal testing for OGTT compared with a more conservative strategy of selective testing based on individual assessment of risk. We were able to include in our analyses all women giving birth in Bradford over the study period examined and therefore results are unlikely to be affected by selection bias and the numbers included in the analyses are large. 
This approach provides a ‘real life’ evaluation of what happens when health care policy is changed, rather than findings from randomised trials that may overestimate the effect of an intervention [33, 34]. Whilst routine clinical data may be less accurate than data collected specifically for research purposes we carefully checked the reliability of data from different sources, where possible (e.g. electronic and paper medical records and laboratory results) and completed an independent abstraction of a random sample of data from medical records to improve the validity of the data we have used here.\nThe main weakness of this study is that it is a before and after comparison and therefore we cannot assume that any associations are caused by the change in policy of offering an OGTT to all pregnant women. For 2006 to 2008 we only had access to aggregated data for outcomes therefore we could only examine unadjusted associations and not adjust analyses to take account of the individual level background characteristics and confounders of the population and how these might have changed over time. Although there were no other changes in policy, any other characteristic that changed over this period could explain the findings we have observed (i.e. could have confounded our assumed association of change to universal offering of an OGTT with the outcomes assessed). For example, there was an increase in induction of labour rate following the introduction of universal offer of an OGTT, both for the whole obstetric population and for those with GDM, which cannot be wholly accounted for by the increase in GDM diagnosis.\nAs noted above there was already evidence that the rate of diagnosis of GDM was increasing prior to the change in policy, but the rate of this increase was much more marked after the policy was introduced. Despite the increasing rates of diagnosis before the policy change, there were still improved outcomes for the group universally offered an OGTT compared to the group selectively offered an OGTT.\nData on detailed risk factor assessment for all women across the whole period was not available; data were only available for those in the before group who completed an OGTT. Thus, we were unable to examine how well risk factor screening was implemented before the offer of universal testing or to compare risk factors between those who completed an OGTT and those who declined the invitation after the policy change. As noted above we are unable to complete a cost-effectiveness analysis in this study. Lastly, the population of Bradford are a high risk group as half of births are to women of South Asian origin and a fifth of the non-South Asian population are obese. Thus, at least 60% of this population would have at least one risk factor deemed by the NICE guidelines introduced in 2008, to make them eligible for an OGTT [11]. We cannot conclude that the results found in this population would necessarily generalise to other populations or to populations where other strategies or criteria are used, but given the whole population coverage it is likely that they would generalise to populations with similar high levels of risk.", "Further research is needed to understand why, when all women were offered an OGTT only 63% attended for the test, compared with 99% when the OGTT was only offered to those identified as at risk. It is also important to investigate what the effect of increasing uptake of the test might be. 
Ultimately, randomised trials that compare universal testing and selective testing and assessment of a range of perinatal outcomes are required, but such trials would require a very large sample size. Continued evaluation of the Bradford population and other similar populations where there is a policy change provide some evidence of the likely impact of universal testing. It is also important to include assessment of the cost-effectiveness of strategies to identify and treat GDM, on-going work may help to answer this question [32].", "A key strength of our study is its ability to examine the association of offering universal testing for OGTT compared with a more conservative strategy of selective testing based on individual assessment of risk. We were able to include in our analyses all women giving birth in Bradford over the study period examined and therefore results are unlikely to be affected by selection bias and the numbers included in the analyses are large. This approach provides a ‘real life’ evaluation of what happens when health care policy is changed, rather than findings from randomised trials that may overestimate the effect of an intervention [33, 34]. Whilst routine clinical data may be less accurate than data collected specifically for research purposes we carefully checked the reliability of data from different sources, where possible (e.g. electronic and paper medical records and laboratory results) and completed an independent abstraction of a random sample of data from medical records to improve the validity of the data we have used here.\nThe main weakness of this study is that it is a before and after comparison and therefore we cannot assume that any associations are caused by the change in policy of offering an OGTT to all pregnant women. For 2006 to 2008 we only had access to aggregated data for outcomes therefore we could only examine unadjusted associations and not adjust analyses to take account of the individual level background characteristics and confounders of the population and how these might have changed over time. Although there were no other changes in policy, any other characteristic that changed over this period could explain the findings we have observed (i.e. could have confounded our assumed association of change to universal offering of an OGTT with the outcomes assessed). For example, there was an increase in induction of labour rate following the introduction of universal offer of an OGTT, both for the whole obstetric population and for those with GDM, which cannot be wholly accounted for by the increase in GDM diagnosis.\nAs noted above there was already evidence that the rate of diagnosis of GDM was increasing prior to the change in policy, but the rate of this increase was much more marked after the policy was introduced. Despite the increasing rates of diagnosis before the policy change, there were still improved outcomes for the group universally offered an OGTT compared to the group selectively offered an OGTT.\nData on detailed risk factor assessment for all women across the whole period was not available; data were only available for those in the before group who completed an OGTT. Thus, we were unable to examine how well risk factor screening was implemented before the offer of universal testing or to compare risk factors between those who completed an OGTT and those who declined the invitation after the policy change. As noted above we are unable to complete a cost-effectiveness analysis in this study. 
Lastly, the population of Bradford are a high risk group as half of births are to women of South Asian origin and a fifth of the non-South Asian population are obese. Thus, at least 60% of this population would have at least one risk factor deemed by the NICE guidelines introduced in 2008, to make them eligible for an OGTT [11]. We cannot conclude that the results found in this population would necessarily generalise to other populations or to populations where other strategies or criteria are used, but given the whole population coverage it is likely that they would generalise to populations with similar high levels of risk.", "Our results suggest that offering all women an OGTT was associated with increased identification of women with GDM and severe hyperglycaemia and with neonatal benefits for those with GDM. There was no evidence of clear differences in neonatal outcomes in the whole obstetric population, which is perhaps not surprising as only 6% were identified as having GDM when an OGTT was offered to all women and GDM diagnosis made based on the WHO criteria.\n Ethics Ethical approval was not required for this work.\nEthical approval was not required for this work.", "Ethical approval was not required for this work.", " Additional file 1:\nProportions of pregnant women who completed an OGTT who had documented risk factors* for GDM.\n(DOC 32 KB)\nAdditional file 1:\nProportions of pregnant women who completed an OGTT who had documented risk factors* for GDM.\n(DOC 32 KB)", "Additional file 1:\nProportions of pregnant women who completed an OGTT who had documented risk factors* for GDM.\n(DOC 32 KB)" ]
[ null, null, "methods", null, null, null, null, "results", "discussion", null, null, "conclusions", null, "supplementary-material", null ]
[ "Gestational diabetes", "Universal and selective screening", "Risk factors", "Oral glucose tolerance test", "Perinatal outcomes" ]
Background: Gestational diabetes mellitus (GDM) affects 2-6% of pregnant women and is associated with increased risk of important adverse perinatal outcomes, including macrosomia and birth injury [1, 2]. There is also evidence of increased long term risk of type 2 diabetes [3] and consequent cardiovascular disease in the mothers [4] and possibly of increased long term risk of obesity and associated adverse cardio-metabolic risk in offspring [5–8]. Evidence is increasing that treatment of GDM improves perinatal outcomes [9] supporting the case for improved identification of women with GDM. There is debate however about the relative effectiveness of different strategies for identifying women with GDM, largely because of the lack of good quality evidence. This has led to variation in clinical guidelines and practice for detecting GDM between, and within, countries. Strategies include: case by case assessment [10], selective (75 g or 100 g) oral glucose tolerance testing of high risk women identified using specific risk factors or a 50 g glucose challenge test [11] and universal testing, i.e. offering all women an oral glucose tolerance test (OGTT) [12]. To date in the United Kingdom (UK) there has been no recommendation to offer all women an OGTT, in practice, and more recently in clinical guidelines, selective testing of high risk women has been undertaken. Prior to 2008 in the UK there was no national recommended screening strategy to identify women with GDM. Screening if it was conducted, was at the discretion of the clinician and based on variable use of risk factors [13–15]. When risk factor screening was undertaken, a two-step approach was preferred: clinicians made a clinical assessment of each woman’s risk and offered a two hour 75 g OGTT to identify GDM, with a diagnosis based on the World Health Organisation (WHO) criteria [16]. Since 2008 UK national clinical guidance has recommended that all women are screened by assessment of specific risk factors at their first pregnancy appointment. Any pregnant woman (not previously identified as having type 2 diabetes) with one or more risk factor: family history of diabetes; South Asian; black or middle eastern ethnicity; previous history of having a baby with macrosomia; or body mass index (kg/m2) (BMI) ≥30, should be offered an OGTT between 24 and 28 weeks gestation [11]. The American College of Obstetricians and Gynecologists (ACOG) also recommends a two-step approach. Women are screened for GDM at 24 to 26 weeks, either by patient history, risk factors or 50 g one hour glucose challenge test and if screen positive offered a 100 g OGTT. GDM diagnosis is made using criteria from Carpenter and Coustan or the National Diabetes Data Group [17]. By contrast the International Association of Diabetes in Pregnancy Study Group [12] (indorsed by the American Diabetes Association) recommend all women not previously identified as having type 2 diabetes (irrespective of risk factors) are offered a diagnostic 75 g OGTT at 24–28 weeks. GDM is diagnosed if any one of plasma glucose levels are >5.0 mmol/l, one hour >9.9 mmol/l or two hour >8.4 mmol/l is found [18]. Current screening recommendations are based on evidence from observational studies with no evidence that diagnosis using any of the above strategies improves perinatal or long term adverse outcomes or is cost-effective. Consequently the US Preventative Services Taskforce recommends clinicians discuss screening for GDM with each woman and case by case decisions made based on risk status [10]. 
Selectively offering women an OGTT based on risk factor assessment and universally offering an OGTT represent, to some extent, the extremes of possible approaches for identifying women at risk. Theoretically the former approach may have a high false negative rate and miss the opportunity to prevent adverse perinatal outcomes in women at risk, whereas the latter approach may over-diagnose [19], cause unnecessary anxiety [20] and may not be cost-effective in improving perinatal outcomes. A recent policy change from a clinician led risk factor screening approach to universally offering a diagnostic 75 g OGTT in Bradford provided us with the opportunity to compare these two approaches. Setting: Bradford is a city in the North of England with high levels of deprivation. Approximately half of births are to women of South Asian origin and a fifth of White British pregnant women are obese (unpublished routine study hospital data from 2012). In response to the Bradford District Mortality Commission, which was undertaken in 2006 and highlighted rising infant mortality and the poor health of pregnant women in the city [21], a number of changes to clinical practice were implemented, including offering a diagnostic OGTT to all pregnant women at 26–28 weeks gestation. Prior to 2007, women were offered an OGTT at 26–28 weeks gestation following individual assessment of risk status. The aim of this study was to evaluate the policy change from case by case risk factor based assessment and selective testing to universal testing (offering a diagnostic OGTT to all women) to identify gestational diabetes. 
Methods: Participants: We conducted a before and after comparison using data from all women who gave birth at the Bradford Royal Infirmary from 2004 to 2006 (before group) and from 2008 to 2010 (after group). Those who gave birth in 2007 were excluded, as this year contained a mixture of women selectively offered an OGTT and women universally offered an OGTT. In the UK, anonymised routinely collected data of those accessing National Health Service care may be used to evaluate service provision. Those using such data for service evaluation are not required to obtain verbal or written consent from patients or to obtain ethical approval. Therefore neither patient consent nor ethical approval for this study was obtained, as routinely collected hospital data were used to evaluate service provision. Analyses were conducted on a completely anonymised dataset and none of the authors had access to any patient identifiers [22]. Oral glucose tolerance test: The 75 g OGTT was administered at 26–28 weeks gestation throughout the study period. Fasting plasma glucose and two hour post-load plasma glucose measurements were obtained. Criteria used to diagnose GDM were based on WHO recommendations (i.e. either fasting glucose ≥6.1 mmol/L or two-hour glucose ≥7.8 mmol/L) and did not change throughout the study period [16]. 
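For clarity, the diagnostic rule above (together with the severe hyperglycaemia thresholds used in the analysis below) can be written as a simple classification. The sketch that follows is purely illustrative: the function name and labels are ours and this is not code used in the study; only the thresholds are taken from the text.

```python
# Illustrative sketch of the WHO-based OGTT classification described in the text.
# Function name and labels are hypothetical; thresholds are as stated in the article.

def classify_ogtt(fasting_mmol_l: float, two_hour_mmol_l: float) -> str:
    """Classify a 75 g OGTT result as 'severe hyperglycaemia', 'GDM' or 'normal'."""
    if fasting_mmol_l >= 7.0 or two_hour_mmol_l >= 11.1:
        # Severe hyperglycaemia is reported in the article as a subset of GDM.
        return "severe hyperglycaemia"
    if fasting_mmol_l >= 6.1 or two_hour_mmol_l >= 7.8:
        return "GDM"
    return "normal"

# Example: a fasting value of 5.2 mmol/L with a two-hour value of 8.4 mmol/L
# meets the two-hour GDM threshold but not the severe hyperglycaemia thresholds.
print(classify_ogtt(5.2, 8.4))  # prints: GDM
```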
Extraction of data: Data were extracted from hospital written and electronic medical records. Maternal risk factors (previous GDM, family history of diabetes, previous history of giving birth to a baby with macrosomia, obesity, South Asian or black ethnicity, glycosuria, a raised result on any random blood glucose test and increased liquor volume) were available for women who had completed an OGTT between 2004 and 2006 (the case by case assessment and selectively tested group). These risk factors were not available for women in this time period who did not complete an OGTT, nor were they available for women who were pregnant after 2007, other than ethnicity, which was available for all women. Data on whether an OGTT was performed, whether women were diagnosed with GDM, and maternal and neonatal outcomes (induction of labour, caesarean birth, macrosomia (birth weight equal to or greater than 4 kg), perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) were available for all births (before and after the policy change) and included in the analyses. Data on risk factors (where available), completion of the OGTT and OGTT results were abstracted from paper medical or electronic records; outcome data were provided by the hospital electronic records system for the period 2004 to 2006. Neonatal transitional care unit data and data for 2008 to 2010 were abstracted from paper medical or electronic records. Research midwives abstracted data from paper records using a pre-prepared coded data extraction sheet. A 5% random sample of these data was independently abstracted by a research fellow; for all field codes there was greater than 98% agreement between the research fellow and research midwife abstractions. Electronic data were transferred from the electronic medical record to a research database and merged with the abstracted paper record data. These electronic data were manually checked against stored paper records and again high levels (98%) of agreement were found. We also compared the medical records of GDM diagnoses with the biochemical laboratory records of test results and found very high levels of agreement (>99%). 
Analysis: All analyses were conducted using Stata version 10 [23]. We calculated the percentage of women with an identified risk factor and used the binomial distribution to calculate 95% confidence intervals for these prevalences. We used the same method to calculate the percentage and 95% confidence intervals for women who had an OGTT between 2008 and 2010 (i.e. the period during which there was a policy of universal OGTT testing). We estimated the proportion of women with GDM (either a fasting blood glucose ≥6.1 mmol/L, or a two hour blood glucose ≥7.8 mmol/L, or both) and, of these, the proportion with severe hyperglycaemia (either a fasting blood glucose ≥7 mmol/L, or a two hour blood glucose ≥11.1 mmol/L, or both). The proportion of women/infants with each risk factor (where available) and each outcome (diagnosis of GDM, induction of labour, caesarean birth, macrosomia, perinatal mortality, admission to neonatal unit, Erb’s palsy and fractured clavicle) was estimated for each time period. Rate ratios (RR) comparing outcomes after the introduction of the universal offer of a diagnostic OGTT to those before this policy, together with their 95% confidence intervals, were estimated using the epitab command in Stata [23]. 
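As a rough illustration of the quantities just described (this is not the study's Stata code, and the exact interval methods Stata applies may differ from this sketch; the function names are ours), an exact binomial confidence interval for a proportion and a log-scale Wald interval for a ratio of two proportions could be computed as follows.

```python
# Minimal sketch, assuming counts of the kind reported in Table 1 below.
# Not the study's actual analysis code; interval methods may differ from Stata's.
from math import exp, log, sqrt

from scipy.stats import beta, norm


def exact_binomial_ci(k: int, n: int, alpha: float = 0.05):
    """Clopper-Pearson (exact) confidence interval for a proportion k/n."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return k / n, lower, upper


def ratio_of_proportions_ci(a: int, n1: int, b: int, n2: int, alpha: float = 0.05):
    """(a/n1)/(b/n2) with a log-scale Wald confidence interval."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    z = norm.ppf(1 - alpha / 2)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)


# Usage with counts taken from Table 1 below: the proportion completing an OGTT
# in 2008, and GDM diagnoses in the whole population after vs before the policy.
print(exact_binomial_ci(3797, 6162))
# Roughly reproduces the published rate ratio of 3.75; the CI may differ slightly.
print(ratio_of_proportions_ci(1132, 18514, 280, 17160))
```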
Results: Between 2004 and 2006 (selectively offered group), 17,160 women gave birth; 1162 (7%) were offered an OGTT following risk assessment by their clinician and 1151 completed the test (7% of all women and 99% of those offered the test). Between 2008 and 2010 (universally offered group), 18,514 women gave birth; 18,328 (99%) were offered an OGTT and 11,516 completed the test (62% of all women and 63% of those offered the test) (Table 1). For births occurring between 2004 and 2006, amongst those who completed the OGTT 58% were from an ethnic group with increased risk (the most common risk factor), compared with 55% after the policy change (risk ratio 0.95, 95% CI: 0.87 to 1.03). Obesity was the second most common individual risk factor for women selectively offered an OGTT (Additional file 1: Table S1). A small proportion (1%) of the women in the universally offered group were not offered an OGTT; the reasons were pre-existing diabetes or late attendance for antenatal care. Of those attending for the OGTT (after group only), 3% did not complete the test, either because they had been unable to fast overnight or because they were unable to drink the glucose solution due to nausea. These women were all offered a second appointment and over 99% completed the test at that second appointment.

Table 1. Association between selective and universal offer of an oral glucose tolerance test (OGTT) and gestational diabetes and severe hyperglycaemia detection rates (numbers tested or identified, with the percentage of the whole population and the percentage of the screened population in parentheses).

| | 2004 | 2005 | 2006 | 2004-2006 (selective offer) | 2008 | 2009 | 2010 | 2008-2010 (universal offer) | Rate ratio after vs before (95% CI): whole population / screened population |
| Whole population | 5512 | 5784 | 5864 | 17160 | 6162 | 6251 | 6101 | 18514 | |
| Completed OGTT | 323 (6) | 299 (5) | 529 (9) | 1151 (7) | 3797 (62) | 3947 (63) | 3772 (62) | 11516 (62) | |
| GDM identified (a) | 62 (1, 19) | 83 (1, 28) | 135 (2, 26) | 280 (2, 24) | 311 (5, 8) | 398 (6, 10) | 423 (7, 11) | 1132 (6, 10) | 3.75 (3.28-4.29) / 0.40 (0.35-0.46) |
| Severe hyperglycaemia identified (b) | 21 (0.4, 34) | 33 (0.6, 40) | 29 (0.5, 21) | 83 (0.5, 30) | 46 (0.7, 14) | 63 (1.0, 16) | 63 (1.0, 16) | 172 (1.0, 15) | 1.96 (1.50-2.58) / 0.51 (0.39-0.67) |

OGTT = oral glucose tolerance test. GDM = gestational diabetes mellitus. CI = confidence interval. (a) GDM = either a fasting blood glucose ≥6.1 mmol/L, or a two hour blood glucose ≥7.8 mmol/L, or both. (b) Severe hyperglycaemia = either a fasting blood glucose ≥7 mmol/L, or a two hour blood glucose ≥11.1 mmol/L, or both. 
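As a check on the headline figures, the whole-population and screened-population rate ratios for GDM in Table 1 can be recovered (to rounding) directly from the tabulated counts; the confidence intervals come from the epitab analysis described in the Methods.

\[
RR_{\text{whole population}} = \frac{1132/18514}{280/17160} \approx \frac{0.0611}{0.0163} \approx 3.75,
\qquad
RR_{\text{screened}} = \frac{1132/11516}{280/1151} \approx \frac{0.0983}{0.2433} \approx 0.40.
\]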
The proportion diagnosed with GDM increased almost fourfold after the introduction of the policy of offering a diagnostic OGTT to all pregnant women compared with selectively offering the test (2% to 6%; rate ratio (RR) 3.75, 95% CI 3.28 to 4.29) (Table 1). The proportion identified as having severe hyperglycaemia doubled following the change in policy (0.5% to 1.0%; RR 1.96, 95% CI 1.50 to 2.58) (Table 1). However, the population case detection rate (for both GDM and severe hyperglycaemia in those with GDM) reduced by 50-60% following universal offering of a diagnostic OGTT, reflecting an increase in those offered the test who were not at risk (Table 1). Table 2 compares adverse maternal and neonatal outcomes between the two testing strategies. The induction of labour rate increased both in the whole obstetric population and in women with GDM after the introduction of the universal offer of an OGTT. The caesarean section rate in the whole population was similar before and after the introduction of the universal offer, but the universal offer was associated with a reduction in caesarean section rates amongst those with GDM. Similarly, offering a universal OGTT was not associated with a change in the rate of macrosomia in the whole obstetric population, but was associated with a marked reduction in its rate amongst those with GDM. There was also a reduction in perinatal mortality for women with GDM after the introduction of the policy and some evidence of a weaker reduction in the whole population, but the latter rate ratio had wide confidence intervals that included the null value.

Table 2. Association between selective and universal offer of an OGTT to identify gestational diabetes and risk of adverse maternal and neonatal outcomes in the whole obstetric population and in women with gestational diabetes.

| Outcome | Whole population: selective offer 2004-2006 (N = 17160), n (%) | Whole population: universal offer 2008-2010 (N = 18514), n (%) | Whole population: rate ratio after vs before (95% CI) | Women with GDM: selective offer 2004-2006 (N = 280), n (%) | Women with GDM: universal offer 2008-2010 (N = 1132), n (%) | Women with GDM: rate ratio after vs before (95% CI) |
| Induction of labour | 2422 (14.1) | 3678 (20.0) | 1.43 (1.35-1.50) | 120 (42.9) | 587 (51.9) | 1.21 (1.00-1.49) |
| Caesarean birth | 3477 (20.3) | 3709 (20.0) | 1.00 (0.96-1.05) | 122 (43.6) | 345 (30.5) | 0.70 (0.57-0.87) |
| Macrosomia (birth weight ≥4 kg) | 1105 (6.4) | 1219 (6.6) | 1.04 (0.95-1.12) | 45 (16.1) | 50 (4.4) | 0.22 (0.15-0.34) |
| Perinatal mortality | 228 (1.3) | 212 (1.1) | 0.86 (0.71-1.04) | 4 (1.4) | 8 (0.7) | 0.12 (0.03-0.46) |
| Admitted to NNU | 1670 (9.7) | 1497 (8.1) | 0.83 (0.77-0.89) | 69 (24.6) | 118 (10.4) | 0.42 (0.31-0.58) |
| Erb’s palsy | 7 (0.04) | 12 (0.06) | 1.59 (0.58-4.76) | * | * | * |
| Fractured clavicle | 4 (0.02) | 6 (0.03) | 1.39 (0.33-6.70) | * | * | * |

OGTT = oral glucose tolerance test. GDM = gestational diabetes mellitus. NNU = neonatal unit. *Insufficient numbers to perform analyses for these outcomes in those identified with gestational diabetes. 
Offering a universal OGTT was also associated with a reduced rate of admission to the neonatal unit, both for the whole obstetric population and amongst those with GDM. The rates of Erb’s palsy and fractured clavicle in the infants appeared similar before and after the policy change, but at both time points rates of these outcomes were very low and the ratio estimates have very wide confidence intervals, consistent with anything from a marked reduction to a marked increase in risk. Discussion: Changing antenatal care policy for identifying women with GDM, from case by case assessment and selective diagnostic OGTT to offering a diagnostic OGTT to all pregnant women, was associated with increased rates of GDM diagnoses and severe hyperglycaemia in the whole obstetric population, but with lower detection rates in those who completed the OGTT. This suggests that the universal offer of an OGTT is associated with increased identification of mild-moderate hyperglycaemia. There was also a reduction, from 94% to 63%, in the proportion of those offered a diagnostic OGTT who accepted and completed the test. The induction of labour rate in the whole obstetric population and in those diagnosed with GDM increased when all women were offered a diagnostic OGTT, with no overall increase in the caesarean section rate and a decrease in those with GDM. Offering all women diagnostic testing was also associated with improved neonatal outcomes in women identified with GDM, including a marked reduction in rates of macrosomia, perinatal mortality and admissions to neonatal care. There was no strong evidence that offering an OGTT to all women affected most outcome rates for the whole obstetric population. There was some suggestion that perinatal mortality may have reduced in the whole obstetric population following the policy change, but here the confidence intervals were wide and just included the null value. Thus, overall these results suggest that universal testing with an OGTT in a high risk population such as that in Bradford, where a large proportion are at risk because of their ethnicity or obesity, has some beneficial effects in terms of reducing adverse perinatal outcomes in those identified with GDM, without increasing caesarean section rates or adverse neonatal outcomes in the whole obstetric population. There were no changes to treatment policy during the study period that could account for the differences in outcomes demonstrated. A ‘step up’ approach was followed depending on the severity of hyperglycaemia, whereby diet and exercise modification was advised and metformin and/or insulin added accordingly. The reduced uptake of the invitation for a universally offered OGTT is similar to that in other studies, which have reported rates of 58% [24] and 73% [25], and may reflect a difference in attitudes to a test once it is offered to everyone. It is plausible that both health care practitioners and the women themselves see the test as less important when it is not restricted to those designated as high risk following individual assessment. For example, in the absence of risk factors health professionals may place less emphasis on the importance of the test. Unfortunately, we did not have detailed information on risk factors for those attending antenatal care both before and after the change in policy, so we could not examine whether uptake amongst the most at risk had changed over time. 
It is also possible that those who declined the test differed in some way from those who accepted the offer, and that any benefits of universal testing may be greater if uptake could be improved from 63%, though we are unable to examine this here. Uptake of the OGTT offer may, however, increase over time as clinicians and women become more familiar with the test and the reasons for testing. The potential benefits of offering universal testing and treating the cases identified must be weighed against the cost of this service and any possible adverse effects on the pregnant women. Criteria for GDM diagnosis did not change across the study period; however, it is notable that before the introduction of universal diagnostic testing three women were tested for each woman diagnosed with GDM, whereas once universal invitation was introduced ten women were tested for each woman diagnosed. If it were possible to increase the uptake of testing with universal invitation this difference would likely increase. The small proportion of women at both time points who presented for the test but could not complete it, because they had not fasted or were nauseated after drinking the glucose solution, suggests that once a woman has decided to accept the invitation the test is feasible for the vast majority. We are unable in this study to examine cost-effectiveness, as we do not have comprehensive data on the extent of risk factor screening completed in all women in the before group, and since this is a before and after comparison we are only able to assess association rather than causation. There were no changes to the criteria thresholds for GDM diagnosis within the study period that would be likely to affect the prevalence of GDM [26]; even so, prevalence increased irrespective of the strategy used. The increasing prevalence of maternal obesity and an increasing awareness amongst clinicians of the association between obesity and poor pregnancy outcomes may be responsible for the changes seen. For example, three women were tested for GDM because of a raised BMI in 2004, 17 in 2005 and 225 in 2006. Different risk factors are associated with different degrees of risk for developing GDM. South Asian ethnicity and a family history of diabetes carry greater risk of GDM than a BMI ≥30 [27]. However, we found no difference in the proportion of women completing the OGTT who belonged to an at-risk ethnic group before and after the introduction of universal testing. This is perhaps surprising, as 50% of the study population are of South Asian ethnicity; it may reflect a belief, prior to the policy change, that using ethnicity as a risk factor would impose too great a cost burden at the study hospital, or that the risk conveyed was not thought great enough to necessitate screening. Our findings with respect to associations in the group of women diagnosed with GDM are consistent with randomised trials of more intensive treatment of GDM, suggesting that the increased detection rates did result in effective treatment of those with GDM. For example, recent randomised trials suggest that intensive treatment effectively reduces macrosomia and other adverse neonatal outcomes [28, 29]. 
With respect to maternal outcomes these trials had different findings: one showed that intensive treatment resulted in an increase in induction rates (as in our study) but had no effect on caesarean section rates [28], while the second found no difference in induction rate and a reduction in caesarean section rates (the latter as in our study) [29]. The reduction in rates of macrosomia after the introduction of the universal offer of an OGTT, compared with births before it, is also consistent with substantial evidence that maternal hyperglycaemia causes increased birth size overall and increased adiposity at birth [8, 30]. Macrosomia complicates the intrapartum, rather than the antenatal, period and is therefore more likely to be associated with birth trauma and arrested labour than with perinatal mortality [31]. In our study population the numbers experiencing birth trauma (Erb’s palsy or fractured clavicle) were too few to allow meaningful conclusions about the change in policy to universal testing. Implications for research: Further research is needed to understand why, when all women were offered an OGTT, only 63% attended for the test, compared with 99% when the OGTT was offered only to those identified as at risk. It is also important to investigate what the effect of increasing uptake of the test might be. Ultimately, randomised trials comparing universal and selective testing, with assessment of a range of perinatal outcomes, are required, but such trials would need a very large sample size. Continued evaluation of the Bradford population, and of other similar populations where there is a policy change, provides some evidence of the likely impact of universal testing. It is also important to include assessment of the cost-effectiveness of strategies to identify and treat GDM; ongoing work may help to answer this question [32]. Strengths and limitations: A key strength of our study is its ability to examine the association of offering universal testing with an OGTT compared with a more conservative strategy of selective testing based on individual assessment of risk. We were able to include in our analyses all women giving birth in Bradford over the study period examined; the results are therefore unlikely to be affected by selection bias and the numbers included in the analyses are large. This approach provides a ‘real life’ evaluation of what happens when health care policy is changed, rather than findings from randomised trials, which may overestimate the effect of an intervention [33, 34]. 
Whilst routine clinical data may be less accurate than data collected specifically for research purposes, we carefully checked the reliability of data from different sources where possible (e.g. electronic and paper medical records and laboratory results) and completed an independent abstraction of a random sample of data from medical records to improve the validity of the data used here. The main weakness of this study is that it is a before and after comparison, and therefore we cannot assume that any associations are caused by the change in policy of offering an OGTT to all pregnant women. For 2006 to 2008 we only had access to aggregated data for outcomes; we could therefore only examine unadjusted associations and could not adjust analyses to take account of individual level background characteristics and confounders of the population and how these might have changed over time. Although there were no other changes in policy, any other characteristic that changed over this period could explain the findings we have observed (i.e. could have confounded our assumed association of the change to a universal offer of an OGTT with the outcomes assessed). For example, there was an increase in the induction of labour rate following the introduction of the universal offer of an OGTT, both for the whole obstetric population and for those with GDM, which cannot be wholly accounted for by the increase in GDM diagnosis. As noted above, there was already evidence that the rate of diagnosis of GDM was increasing prior to the change in policy, but the rate of this increase was much more marked after the policy was introduced. Despite the increasing rates of diagnosis before the policy change, there were still improved outcomes for the group universally offered an OGTT compared with the group selectively offered an OGTT. Data on detailed risk factor assessment for all women across the whole period were not available; data were only available for those in the before group who completed an OGTT. Thus, we were unable to examine how well risk factor screening was implemented before the offer of universal testing, or to compare risk factors between those who completed an OGTT and those who declined the invitation after the policy change. As noted above, we were unable to complete a cost-effectiveness analysis in this study. Lastly, the population of Bradford is a high risk group, as half of births are to women of South Asian origin and a fifth of the non-South Asian population are obese. Thus, at least 60% of this population would have at least one risk factor deemed by the NICE guidelines introduced in 2008 to make them eligible for an OGTT [11]. We cannot conclude that the results found in this population would necessarily generalise to other populations or to populations where other strategies or criteria are used, but given the whole population coverage it is likely that they would generalise to populations with similar high levels of risk. 
Conclusion: Our results suggest that offering all women an OGTT was associated with increased identification of women with GDM and severe hyperglycaemia, and with neonatal benefits for those with GDM. There was no evidence of clear differences in neonatal outcomes in the whole obstetric population, which is perhaps not surprising as only 6% were identified as having GDM when an OGTT was offered to all women and GDM diagnosis was made based on the WHO criteria. Ethics: Ethical approval was not required for this work. Electronic supplementary material: Additional file 1: Proportions of pregnant women who completed an OGTT who had documented risk factors* for GDM. (DOC 32 KB)
Background: Gestational diabetes (GDM) affects a substantial proportion of women in pregnancy and is associated with increased risk of adverse perinatal and long term outcomes. Treatment seems to improve perinatal outcomes; however, the relative effectiveness of different strategies for identifying women with GDM is less clear. This paper describes an evaluation of the impact of a change in policy from selective, risk factor based offering to universal offering of an oral glucose tolerance test (OGTT) to identify women with GDM, on maternal and neonatal outcomes. Methods: Retrospective six year analysis of 35,674 births at the Women's and Newborn unit, Bradford Royal Infirmary, United Kingdom. Results: The proportion of the whole obstetric population diagnosed with GDM increased almost fourfold following universal offering of an OGTT compared with selective offering of an OGTT (rate ratio (RR) 3.75, 95% CI 3.28 to 4.29), and the proportion identified with severe hyperglycaemia doubled following the policy change (1.96, 1.50 to 2.58). The case detection rate, however, for GDM in the whole population and for severe hyperglycaemia in those with GDM reduced by 50-60%: 0.40 (0.35 to 0.46) and 0.51 (0.39 to 0.67) respectively. Universally offering an OGTT was associated with an increased induction of labour rate in the whole obstetric population and in women with GDM: 1.43 (1.35 to 1.50) and 1.21 (1.00 to 1.49) respectively. Caesarean section, macrosomia and perinatal mortality rates in the whole population were similar. For women with GDM, rates of caesarean section (0.70, 0.57 to 0.87), macrosomia (0.22, 0.15 to 0.34) and perinatal mortality (0.12, 0.03 to 0.46) decreased following the policy change. Conclusions: Universally offering an OGTT was associated with increased identification of women with GDM and severe hyperglycaemia and with neonatal benefits for those with GDM. There was no evidence of benefit or adverse effects in neonatal outcomes in the whole obstetric population.
9,215
368
[ 1140, 161, 158, 74, 388, 243, 152, 646, 9, 28 ]
15
[ "ogtt", "women", "risk", "data", "gdm", "universal", "glucose", "policy", "population", "offered" ]
[ "gestational diabetes methods", "gestational diabetes association", "gdm maternal", "test gdm gestational", "gdm affects pregnant" ]
[CONTENT] Gestational diabetes | Universal and selective screening | Risk factors | Oral glucose tolerance test | Perinatal outcomes [SUMMARY]
[CONTENT] Gestational diabetes | Universal and selective screening | Risk factors | Oral glucose tolerance test | Perinatal outcomes [SUMMARY]
[CONTENT] Gestational diabetes | Universal and selective screening | Risk factors | Oral glucose tolerance test | Perinatal outcomes [SUMMARY]
[CONTENT] Gestational diabetes | Universal and selective screening | Risk factors | Oral glucose tolerance test | Perinatal outcomes [SUMMARY]
[CONTENT] Gestational diabetes | Universal and selective screening | Risk factors | Oral glucose tolerance test | Perinatal outcomes [SUMMARY]
[CONTENT] Gestational diabetes | Universal and selective screening | Risk factors | Oral glucose tolerance test | Perinatal outcomes [SUMMARY]
[CONTENT] Cesarean Section | Diabetes, Gestational | Diagnostic Tests, Routine | Female | Fetal Macrosomia | Glucose Tolerance Test | Health Policy | Humans | Hyperglycemia | Infant, Newborn | Labor, Induced | Perinatal Mortality | Pregnancy | Retrospective Studies | United Kingdom [SUMMARY]
[CONTENT] Cesarean Section | Diabetes, Gestational | Diagnostic Tests, Routine | Female | Fetal Macrosomia | Glucose Tolerance Test | Health Policy | Humans | Hyperglycemia | Infant, Newborn | Labor, Induced | Perinatal Mortality | Pregnancy | Retrospective Studies | United Kingdom [SUMMARY]
[CONTENT] Cesarean Section | Diabetes, Gestational | Diagnostic Tests, Routine | Female | Fetal Macrosomia | Glucose Tolerance Test | Health Policy | Humans | Hyperglycemia | Infant, Newborn | Labor, Induced | Perinatal Mortality | Pregnancy | Retrospective Studies | United Kingdom [SUMMARY]
[CONTENT] Cesarean Section | Diabetes, Gestational | Diagnostic Tests, Routine | Female | Fetal Macrosomia | Glucose Tolerance Test | Health Policy | Humans | Hyperglycemia | Infant, Newborn | Labor, Induced | Perinatal Mortality | Pregnancy | Retrospective Studies | United Kingdom [SUMMARY]
[CONTENT] Cesarean Section | Diabetes, Gestational | Diagnostic Tests, Routine | Female | Fetal Macrosomia | Glucose Tolerance Test | Health Policy | Humans | Hyperglycemia | Infant, Newborn | Labor, Induced | Perinatal Mortality | Pregnancy | Retrospective Studies | United Kingdom [SUMMARY]
[CONTENT] Cesarean Section | Diabetes, Gestational | Diagnostic Tests, Routine | Female | Fetal Macrosomia | Glucose Tolerance Test | Health Policy | Humans | Hyperglycemia | Infant, Newborn | Labor, Induced | Perinatal Mortality | Pregnancy | Retrospective Studies | United Kingdom [SUMMARY]
[CONTENT] gestational diabetes methods | gestational diabetes association | gdm maternal | test gdm gestational | gdm affects pregnant [SUMMARY]
[CONTENT] gestational diabetes methods | gestational diabetes association | gdm maternal | test gdm gestational | gdm affects pregnant [SUMMARY]
[CONTENT] gestational diabetes methods | gestational diabetes association | gdm maternal | test gdm gestational | gdm affects pregnant [SUMMARY]
[CONTENT] gestational diabetes methods | gestational diabetes association | gdm maternal | test gdm gestational | gdm affects pregnant [SUMMARY]
[CONTENT] gestational diabetes methods | gestational diabetes association | gdm maternal | test gdm gestational | gdm affects pregnant [SUMMARY]
[CONTENT] gestational diabetes methods | gestational diabetes association | gdm maternal | test gdm gestational | gdm affects pregnant [SUMMARY]
[CONTENT] ogtt | women | risk | data | gdm | universal | glucose | policy | population | offered [SUMMARY]
[CONTENT] ogtt | women | risk | data | gdm | universal | glucose | policy | population | offered [SUMMARY]
[CONTENT] ogtt | women | risk | data | gdm | universal | glucose | policy | population | offered [SUMMARY]
[CONTENT] ogtt | women | risk | data | gdm | universal | glucose | policy | population | offered [SUMMARY]
[CONTENT] ogtt | women | risk | data | gdm | universal | glucose | policy | population | offered [SUMMARY]
[CONTENT] ogtt | women | risk | data | gdm | universal | glucose | policy | population | offered [SUMMARY]
[CONTENT] risk | women | diabetes | case | ogtt | weeks | offering | screening | gdm | 28 weeks [SUMMARY]
[CONTENT] data | records | glucose | electronic | available | abstracted | women | mmol | ogtt | period [SUMMARY]
[CONTENT] gestational | population | test | ogtt | glucose | table | gestational diabetes | rate | ratio | ci [SUMMARY]
[CONTENT] required work | ethical approval required work | ethical approval required | approval required | approval required work | gdm | ethical approval | approval | work | ethical [SUMMARY]
[CONTENT] women | ogtt | data | risk | gdm | glucose | population | approval | ethical approval | ethical [SUMMARY]
[CONTENT] women | ogtt | data | risk | gdm | glucose | population | approval | ethical approval | ethical [SUMMARY]
[CONTENT] GDM ||| GDM ||| GDM [SUMMARY]
[CONTENT] six year | 35,674 | Newborn | Bradford Royal Infirmary | United Kingdom [SUMMARY]
[CONTENT] GDM | OGTT | Rate Ratio | 3.75 | 95% | CI | 3.28 | 4.29 | 1.96 | 1.50 | 2.58 ||| GDM | GDM | 50-60% | 0.40 | 0.35 | 0.46 | 0.51 | 0.39 to 0.67 ||| OGTT | GDM | 1.43 | 1.35 to 1.50 | 1.21 | 1.00 ||| ||| GDM | 0.70 | 0.57 | 0.87 | 0.22 | 0.15 to 0.34 | 0.12 | 0.03 | 0.46 [SUMMARY]
[CONTENT] OGTT | GDM | GDM ||| [SUMMARY]
[CONTENT] GDM ||| GDM ||| GDM ||| six year | 35,674 | Newborn | Bradford Royal Infirmary | United Kingdom ||| ||| GDM | OGTT | Rate Ratio | 3.75 | 95% | CI | 3.28 | 4.29 | 1.96 | 1.50 | 2.58 ||| GDM | GDM | 50-60% | 0.40 | 0.35 | 0.46 | 0.51 | 0.39 to 0.67 ||| OGTT | GDM | 1.43 | 1.35 to 1.50 | 1.21 | 1.00 ||| ||| GDM | 0.70 | 0.57 | 0.87 | 0.22 | 0.15 to 0.34 | 0.12 | 0.03 | 0.46 ||| OGTT | GDM | GDM ||| [SUMMARY]
[CONTENT] GDM ||| GDM ||| GDM ||| six year | 35,674 | Newborn | Bradford Royal Infirmary | United Kingdom ||| ||| GDM | OGTT | Rate Ratio | 3.75 | 95% | CI | 3.28 | 4.29 | 1.96 | 1.50 | 2.58 ||| GDM | GDM | 50-60% | 0.40 | 0.35 | 0.46 | 0.51 | 0.39 to 0.67 ||| OGTT | GDM | 1.43 | 1.35 to 1.50 | 1.21 | 1.00 ||| ||| GDM | 0.70 | 0.57 | 0.87 | 0.22 | 0.15 to 0.34 | 0.12 | 0.03 | 0.46 ||| OGTT | GDM | GDM ||| [SUMMARY]
Longitudinal analysis of minority women's perceptions of cohesion: the role of cooperation, communication, and competition.
24779959
Interaction in the form of cooperation, communication, and friendly competition theoretically precedes the development of group cohesion, which often precedes adherence to health promotion programs. The purpose of this manuscript was to explore longitudinal relationships among dimensions of group cohesion and group-interaction variables to inform and improve group-based strategies within programs aimed at promoting physical activity.
BACKGROUND
Ethnic minority women completed a group dynamics-based physical activity promotion intervention (N = 103; 73% African American; 27% Hispanic/Latina; Mage = 47.89 ± 8.17 years; MBMI = 34.43 ± 8.07 kg/m2) and assessments of group cohesion and group-interaction variables at baseline, 6 months (post-program), and 12 months (follow-up).
METHODS
All four dimensions of group cohesion had significant (ps < 0.01) relationships with the group-interaction variables. Competition was a consistently strong predictor of cohesion, while cooperation did not demonstrate consistent patterns of prediction.
RESULTS
Facilitating a sense of friendly competition may increase engagement in physical activity programs by bolstering group cohesion.
CONCLUSIONS
[ "Adult", "Black or African American", "Body Mass Index", "Communication", "Female", "Health Knowledge, Attitudes, Practice", "Health Promotion", "Hispanic or Latino", "Humans", "Interpersonal Relations", "Linear Models", "Longitudinal Studies", "Middle Aged", "Minority Groups", "Motor Activity", "Social Facilitation", "Surveys and Questionnaires" ]
4108125
Background
Group dynamics includes the study of the nature of groups, individual relationships within groups, and interactions with others. Over 60 years ago, Kurt Lewin’s seminal work suggested that the degree to which a group was cohesive would determine individuals’ level of success as a collective [1]. Group cohesion has a long history as an important predictor of performance and outcomes in work, military, sport, and exercise groups [2-4]. Having a strong sense of group cohesion also reflects a fundamental human need: the need to belong [5]. Group cohesion has been defined in many ways [6-8], but Carron, Brawley, & Widmeyer’s [9] definition has been used consistently in physical activity promotion and research. They define group cohesion as a dynamic process reflected in the shared pursuit of common objectives to satisfy members’ needs [9]. Group cohesion is further operationalized as individual (1) attraction to the group’s task-based activities (ATG-T), (2) attraction to the group’s social activities (ATG-S), (3) perceptions of the group’s integration around task-based activities (GI-T), and (4) perceptions of the group’s integration around social activities (GI-S). Over the previous two decades, a large body of literature has also documented the positive relationship between group cohesion and physical activity adoption and maintenance [10-14]. Participants who have strong perceptions of group cohesion attend group sessions more often, are late less often, and drop out less frequently [11]. Group cohesion also has demonstrated a consistent relationship with positive attitudes toward physical activity and enhanced perceptions of self-efficacy and personal control [4]. From a theoretical perspective, Carron and Spink [15] proposed that group-interaction variables, such as communication, cooperation, and competition, are the likely precursors to developing group cohesion. Communication is defined as the sharing of information through verbal and non-verbal means. In group dynamics-based physical activity interventions, task communication (i.e., physical activity-based) occurs through facilitated group-goal setting and peer-learning activities (c.f., Irish et al. [16], Estabrooks [17]). Cooperation is defined as sharing resources to achieve a specific outcome. In physical activity classes, a cooperative environment would be one where participants provide assistance to one another in setting up the exercise equipment, overcoming obstacles, or even doing exercises together [18]. Finally, competition in an exercise group context is defined as providing participants the motivation of being superior to other groups or group members, or the motivation to keep their own group functioning at a high level (c.f., Kerr et al. [19], Steiner [20]). The concept of friendly competition influencing physical activity outcomes has been applied across a number of populations [21-23]. Friendly competition is a sense of competition that is connected to the overall success of the group and can reflect a generalized sense of intragroup competition as well as intergroup competitions within a single intervention. With friendly competition, people are inspired to compete against each other with the recognition that even if someone else wins, it benefits the group as a whole. Further, a group may share a set of norms around fairness and reciprocity in the form of competition [24].
The motivational aspect of friendly competition has been identified across a number of studies including faith-based weight loss trials that include physical activity [25], worksite physical activity programs [26], and physical activity promotion in hard to reach audiences [27]. Gaining a better understanding of the relationships between group communication, cooperation, and competition has both theoretical and practical implications. Theoretically, to date there has been no study directed at understanding the relative contributions of communication, cooperation, and competition-based strategies to changes in group cohesion [10]. For example, it could be hypothesized that physical activity groups are more cooperative than competitive and that strategies focusing on cooperation may be superior in this context. Alternatively, communication that includes a focus on helping participants identify similarities in health aspirations could be a stronger predictor of group cohesion than cooperation or competition. However, there is a lack of longitudinal or even cross-sectional research that has analyzed the change in perceptions of group-interaction variables as it relates to the change in cohesion over time. From a practical perspective, although there is considerable research on other variables that enhance cohesion, these specific group-interaction variables have been widely used as a guide to develop strategies that are hypothesized to improve group cohesion, yet the relationship between group-interaction variables and group cohesion has not been examined within these intervention studies [18]. Understanding the mechanistic relationship between strategies that target group-interaction variables, changes in those variables, and changes in perceptions of group cohesion provides valuable information for future program development. A longitudinal study design enhances the ability to track changes in the relationships between group-interaction variables and group cohesion over time. This is significant for group-based physical activity promotion programs because it allows programs to be planned to integrate the strategies that are most likely to improve group cohesion. Understanding these underlying mechanisms also provides health educators with the information necessary to ensure that strategies that do not contribute to changes in perceptions of group cohesion are not unnecessarily applied in practice settings where resources are often limited. To date, there have been no investigations examining the relationship between physical activity group cohesion and group member perceptions of communication, cooperation, and friendly competition [10]. Understanding these relationships could aid in developing stronger strategies (e.g., appropriate facilitation of friendly competition within a group) to enhance group cohesion and provide practical information for those delivering interventions when making resource allocation decisions (i.e., what resources are needed to deliver the intervention to the desired population, ranging from time to materials).
The Health is Power (HIP) trial [28], a study testing the effectiveness of a group-based physical activity promotion program for ethnic minority women, provided an opportunity to explore the relationships among the dimensions of group cohesion and communication, cooperation, and competition over time. In the primary mediational analyses of HIP, the investigators found that all dimensions of group cohesion mediated the effect of the intervention with regard to psychosocial outcomes but not physical activity behaviors (i.e., the intervention was associated with increased cohesion but did not lead to increased physical activity) [29]. The purpose of this exploratory study was to determine the longitudinal relationship of communication, cooperation, and friendly competition to the dimensions of group cohesion. Specifically, the intention of this study was to test participants’ perceptions of the strength or presence of these variables within their group. It was hypothesized that each group-interaction variable would contribute to a large proportion of explained variance in group cohesion over time.
Hypothesis 1: Cooperation would predict the Individual’s Attraction to the Group-Task.
Hypothesis 2: Friendly competition would predict the Individual’s Attraction to the Group-Task as well as the Group’s Integration towards the Task.
Hypothesis 3: Social communication would predict aspects related to social cohesion (both the Individual’s Attraction to the Group-Socially as well as the Group’s Integration Socially).
Hypothesis 4: Task communication would predict aspects related to task cohesion (both the Individual’s Attraction to the Group-Task as well as the Group’s Integration towards the Task).
Hypothesis 5: Perceptions of group cohesion and group interaction would increase over the course of the program, but decrease from program completion to the 12-month follow-up.
Testing the hypotheses, outlined in Figure 1 below, helps address the gap in the literature about strategies that influence the perception of group cohesion.
Figure 1. Hypotheses for group-interaction variables’ prediction of cohesion over time.
Methods
African American and Hispanic or Latina women were recruited to participate in a multi-site, community-based study to test a 6-month intervention designed to promote physical activity (see Lee et al. [29] for details on recruitment). This study focused on ethnic minority women because they have particularly low rates of physical activity and disproportionately suffer from chronic diseases related to physical inactivity [30]. Further, ethnic minority women are often gatekeepers for physical activity behaviors within their families [31,32]. Women were randomly assigned to the physical activity intervention or a fruit and vegetable promotion matched contact comparison group. Only participants randomly assigned to the physical activity intervention group were included in this study. Health education intervention sessions and intervention content and materials have been previously described [28]. In Houston, there were six African-American and one Hispanic or Latina cohorts, and there were three Hispanic or Latina cohorts in Austin. All sessions were conducted in English. Intervention group sizes did not differ significantly, with a range of participants from 10 to 37. The intervention was 24 weeks in duration and included 6 sessions. Each session included group dynamics strategies and principles based on the model developed by Carron and Spink [15] and included 15 minutes of walking after the educational component. Opportunities for communication were provided in the form of small group discussions related to the session objective. For example, women discussed strategies to overcome barriers to physical activity, shared goals, and relapse prevention plans during their group sessions. Women also used time before and after the sessions, as well as during group walks at the end of the intervention sessions, to socialize. To facilitate cooperation, participants engaged in peer problem solving activities and collaborative group goal setting. Groups also engaged in friendly competition by developing small teams working to achieve a group goal for physical activity tacked on a large map; whichever team had traveled the farthest was “winning”. The facilitators were trained to foster group interaction using semi-structured scripts. For example, a semi-structured script focusing on friendly competition might include: “You are all part of one big team (the whole group) as well as a part of a smaller team. The small teams will be the folks that you set your shared goals with and we will have a friendly competition between each team. But I want you to remember, that the goal of our large group is that everyone in the program works their way up to 45 minutes of physical activity on at least 5 days a week. So, even though there may be some competition, this program is really about getting and giving support to be more physically active.” Participant perceptions of physical activity group cohesion, communication, cooperation, and competition were assessed at baseline, post intervention, and 6-months after the intervention was completed. All research study activities were approved by the University of Houston’s Committee for the Protection of Human Subjects, and all participants provided written informed consent prior to participation. 
Sample
As the focus of this study was to determine the effect of group communication, cooperation, and competition on cohesion within a sample of minority women, data from the 103 participants randomly assigned to the physical activity group who completed baseline and post-intervention measures were analyzed. Of those women, 73% identified as African American and 27% identified as Hispanic or Latina. The participants were 47.89 years of age (±8.17), with an average BMI of 34.43 kg/m2 (±8.07). Eighty-three participants (80%) completed the 12-month follow-up assessment.
Measures
Group cohesion
The Physical Activity Group Environment Questionnaire (PAGEQ) [33] is a group cohesion inventory for physical activity groups and was used in the HIP trial. The PAGEQ is a 21-item measure of the four dimensions of ATG-T, ATG-S, GI-T, GI-S with 6, 6, 5, and 4 items, respectively. All 21 items are on a 9-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ [33]. ATG-T was assessed by having participants respond to items such as ‘I like the amount of physical activity I get with this group’. ATG-S was operationalized through statements such as ‘I enjoy my social interactions with this group’. The group integration dimensions of cohesion were assessed using items such as ‘members of our group often socialize together’ (GI-S) and ‘our group is united in its beliefs about the benefits of regular physical activity’ (GI-T). This questionnaire has demonstrated content, predictive, and concurrent validity [33].
Group-interaction variables
Additional items were embedded within the PAGEQ to measure the group-interaction variables of interest, including communication, cooperation, and competition. As this was an exploratory study, we developed items that were reviewed for face validity and aligned with the definitions of each of the constructs. Like cohesion, communication was operationalized as having a task and social focus and was measured through 6 items that can be further divided into task-based communication (e.g., ‘members of our group talk about how often they should do physical activity’) and social communication (e.g., ‘people of this group talk about things that are happening in our lives’). Cooperation and competition were not conceptualized as having relevant social components and focused on task outcomes. Cooperation was measured through 3 items (e.g., ‘we all cooperate to help this group’s program run smoothly’) as was competition (e.g., ‘There is friendly competition within the members to stay as healthy as possible’). Internal consistencies for the group-interaction variables were all acceptable: task communication (α = .94), social communication (α = .65), cooperation (α = .91), and friendly competition (α = .81). The sessions were designed to enhance opportunities for each of these constructs (e.g., opportunities to cooperate before, during, and after class, facilitated social interactions) (Additional file 1).
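The internal consistencies reported for these scales are Cronbach’s alpha coefficients. As a rough illustration of how such a coefficient is computed from item-level responses, here is a minimal sketch; the helper function and the Likert scores are made-up placeholders, not PAGEQ data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 9-point Likert responses (5 respondents x 3 items),
# standing in for a short scale such as the cooperation items.
scores = np.array([
    [8, 7, 9],
    [6, 6, 7],
    [9, 8, 9],
    [5, 6, 5],
    [7, 7, 8],
])
print(round(cronbach_alpha(scores), 2))
```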
Analysis
Descriptive statistics, paired sample t-tests, and multiple regressions were conducted in IBM SPSS 19.0 with a priori significance set at p < 0.05. Within participant t-tests were conducted to determine changes in the group cohesion and interaction variables over time. Multiple linear regression was conducted to detect which group-interaction variables predicted group cohesion over the course of the program and at 12-months follow-up, accounting for age and ethnicity. Four regressions were completed, one for each dimension of cohesion as the dependent variable to test hypotheses 1-4, using the group interaction variables as independent variables. Longitudinal change scores (from baseline to post-intervention and post-intervention to follow-up) were computed for each group-interaction variable and dimension of cohesion for use in the regression models. Overall perceptions of cohesion were recorded at baseline, post-program, and follow-up to determine the trend in the perceptions of cohesion over time (hypothesis 5).
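The analysis above was run in SPSS; as a rough illustration of the same steps (a within-participant t-test followed by a change-score regression adjusting for age and ethnicity), here is a minimal Python sketch. The simulated data, variable names, and effect sizes are assumptions made purely for illustration and are not values from the HIP trial.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 103  # analysed sample size reported in the paper

# Simulated stand-in data: baseline and post-programme scores for one
# cohesion dimension (ATG-T) plus change scores for the interaction variables.
df = pd.DataFrame({
    "atgt_base": rng.normal(6.5, 1.2, n),
    "atgt_post": rng.normal(7.2, 1.2, n),
    "task_comm_change": rng.normal(0.6, 1.0, n),
    "social_comm_change": rng.normal(0.4, 1.0, n),
    "cooperation_change": rng.normal(0.5, 1.0, n),
    "competition_change": rng.normal(0.7, 1.0, n),
    "age": rng.normal(48, 8, n),
    "hispanic": rng.integers(0, 2, n),
})

# Within-participant (paired) t-test for change from baseline to post-programme
t, p = stats.ttest_rel(df["atgt_post"], df["atgt_base"])
print(f"paired t = {t:.2f}, p = {p:.3f}")

# Change-score regression: which interaction variables predict change in ATG-T,
# accounting for age and ethnicity
df["atgt_change"] = df["atgt_post"] - df["atgt_base"]
model = smf.ols(
    "atgt_change ~ task_comm_change + social_comm_change + "
    "cooperation_change + competition_change + age + hispanic",
    data=df,
).fit()
print(model.summary())
```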
Results
Table 1 includes the descriptive data across time for the group cohesion and group-interaction variables. Age and ethnicity did not contribute to a significant proportion of the variance in the models. As can be noted in the Table, all four group-interaction variables (task and social communication, cooperation, competition), as well as three dimensions of cohesion (ATG-T, ATG-S, GI-T), significantly increased from baseline to post-intervention (p < .05), and the magnitudes of the changes were moderate to large (Cohen’s d ranging from 0.5-0.89). GI-S significantly decreased from baseline to post-intervention, and the magnitude of change was moderate (Cohen’s d = 0.64). All variables had a significant decrease from post-intervention to 12-month follow-up in the range of small to moderate effect sizes (Cohen’s d ranging from 0.27-0.53).
[Table 1. Descriptive statistics of group-interaction variables over time. *Significant change (p < .05) between baseline and post-program; **significant change (p < .05) between post-program and follow-up. ATG-T = Attraction to the Group’s Task; ATG-S = Attraction to the Group Socially; GI-T = Group’s Integration towards the Task; GI-S = Group’s Integration Socially.]
The proportion of explained variance in ATG-T was approximately 30 percent for each of the longitudinal regression analyses (see Table 2). Participant perceptions of competition and task-based communication were consistent contributors to the variance explained in the longitudinal regression. The group-interaction variables seemed to explain a slightly higher amount of the variance in GI-T when considering longitudinal data (i.e., approximately 63 percent of the variance). Task-based communication and friendly competition were again significant contributors to the explained variance within the longitudinal regression.
[Table 2. Longitudinal regression results predicting each dimension of group cohesion. *p < .05. ATG-T = Attraction to the Group’s Task; ATG-S = Attraction to the Group Socially; GI-T = Group’s Integration towards the Task; GI-S = Group’s Integration Socially.]
The regression analyses used to examine ATG-S showed a significant amount of explained variance with a longitudinal approach (43 to 59 percent). Social communication and friendly competition were significant contributors to the explained variance in the longitudinal regression of ATG-S. A somewhat lower proportion of variance was explained using the group-interaction variables with GI-S (25 percent longitudinally). There were no distinct patterns across the regressions for GI-S, which was predicted by social and task-based communication (T1-T2) and by friendly competition (T2-T3).
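The effect sizes above are Cohen’s d values for within-participant change. One common convention, assumed here since the paper does not state which variant was used, divides the mean change by the standard deviation of the change scores, as in this small sketch with made-up numbers.

```python
import numpy as np

def cohens_d_paired(pre: np.ndarray, post: np.ndarray) -> float:
    """Cohen's d for paired observations, using the SD of the change scores
    (one common convention; other formulations divide by a pooled SD)."""
    change = post - pre
    return change.mean() / change.std(ddof=1)

# Hypothetical baseline and post-programme scores for one cohesion dimension
pre = np.array([6.1, 5.8, 7.0, 6.4, 5.9, 6.7, 6.2, 5.5])
post = np.array([6.9, 6.4, 7.6, 7.1, 6.3, 7.4, 6.8, 6.0])
print(round(cohens_d_paired(pre, post), 2))
```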
Conclusions
Group dynamics-based physical activity programs are successful at achieving their outcomes of interest [18]. Although previous research has shown that increased perceptions of cohesion lead to increased engagement in the program and subsequent increases in physical activity [4], less is known about what strategies lead to increased perceptions of cohesion [18]. This study presents promising data about how group-interaction variables, including communication, competition, and cooperation, may influence the perception of cohesion. Specific investigation of these variables indicated that strategies that foster friendly competition are most likely to improve participant perceptions of group cohesion.
[ "Background", "Sample", "Measures", "Group cohesion", "Group-interaction variables", "Analysis", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Group dynamics includes the study of the nature of groups, individual relationships within groups, and interactions with others. Over 60 years ago, Kurt Lewin’s seminal work suggested that the degree to which a group was cohesive would determine individuals’ level of success as a collective [1]. Group cohesion has a long history as an important predictor of performance and outcomes in work, military, sport, and exercise groups [2-4]. Having a strong sense of group cohesion also a reflects a fundamental human need—the need to belong [5].\nGroup cohesion has been defined in many ways [6-8], but Carron, Brawley, & Widmeyer’s [9] definition has been used consistently in physical activity promotion and research. They define group cohesion as a dynamic process reflected in the shared pursuit of common objectives to satisfy members’ needs [9]. Group cohesion is further operationalized as individual (1) attraction to the group’s task-based activities (ATG-T), (2) attraction to the group’s social activities (ATG-S), (3) perceptions of the group’s integration around task-based activities (GI-T), and (4) perceptions of the group’s integration around social activities (GI-S).\nOver the previous two decades, a large body of literature has also documented the positive relationship between group cohesion and physical activity adoption and maintenance [10-14]. Participants who have strong perceptions of group cohesion attend group sessions more often, are late less often, and drop out less frequently [11]. Group cohesion also has demonstrated a consistent relationship with positive attitudes toward physical activity and enhanced perceptions of self-efficacy and personal control [4].\nFrom a theoretical perspective, Carron and Spink [15] proposed that group-interaction variables, such as communication, cooperation, and competition, are the likely precursors to developing group cohesion. Communication is defined as the sharing of information through verbal and non-verbal means. In group dynamics-based physical activity interventions, task communication (i.e., physical activity-based) occurs through facilitated group-goal setting and peer-learning activities (c.f., Irish et al. [16], Estabrooks [17]). Cooperation is defined as sharing resources to achieve a specific outcome. In physical activity classes, a cooperative environment would be one where participants provide assistance to one another in setting up the exercise equipment, overcoming obstacles, or even doing exercises together [18]. Finally, competition in an exercise group context is defined as providing participants the motivation of being superior to other groups or group members, or the motivation to keep their own group functioning at a high level (c.f., Kerr et al. [19], Steiner [20]). The concept of friendly competition influencing physical activity outcomes has been applied across a number of populations [21-23]. Friendly competition is a sense of competition that is connected to the overall success of the group and can reflect a generalized sense of intragroup competition as well as intergroup competitions within a single intervention. With friendly competition, people are inspired to compete against each other with the recognition that even if someone else wins, it benefits the group as a whole. Further, a group may share a set of norms around fairness and reciprocity in the form of competition [24]. 
The motivational aspect of friendly competition has been identified across a number of studies including faith-based weight loss trials that include physical activity [25], worksite physical activity programs [26], and physical activity promotion in hard to reach audiences [27]. Gaining better understanding of the relationships between group communication, cooperation, and competition has both theoretical and practical implications.\nTheoretically, to date there has been no study directed at understanding the relative contributions of communication, cooperation, and competition-based strategies to changes in group cohesion [10]. For example, it could be hypothesized that physical activity groups are more cooperative than competitive and that strategies focusing on cooperation may be superior in this context. Alternatively, communication that includes a focus on helping participants identify similarities in health aspirations could be a stronger predictor of group cohesion than cooperation or competition. However there is a lack of longitudinal or even cross-sectional research that has analyzed the change in perceptions of group-interaction variables as it relates to the change in cohesion over time.\nFrom a practical perspective, group-interaction variables have been used as a guide to develop strategies that are hypothesized to improve group cohesion, yet the relationship between group interaction variables and group cohesion has not been examined within these intervention studies [18]. While the depth of research on other variables that enhance cohesion exists, these specific group-interaction variables have been widely used as a guide to develop strategies that are, in turn, hypothesized to improve group cohesion, yet the relationship between group interaction variables and group cohesion has not been determined [18]. Further, no study to date has analyzed the change in perceptions of group-interaction variables as it relates to the change in cohesion over time. Understanding the mechanistic relationship between strategies that target group-interaction variables, changes in those variables, and changes in perceptions of group cohesion provides valuable information for future program development. A longitudinal study design enhances the ability to track changes in the relationships between group-interaction variables and group cohesion over time. This is significant to group-based physical activity promotion programs, because it allows programs to be planned to integrate the strategies that are most likely to improve group cohesion. Understanding these underlying mechanisms also provides health educators with the information necessary to ensure that strategies that do not contribute to changes in perceptions of group cohesion are not unnecessarily applied in practice settings where resources are often limited.\nTo date, there have been no investigations examining the relationship between physical activity group cohesion and group member perceptions of communication, cooperation, and friendly competition [10]. 
Understanding these relationships could aid in developing stronger strategies (e.g., appropriate facilitation of friendly competition within a group) to enhance group cohesion and provide practical information for those delivering interventions when making resource allocation decisions (i.e., what resources are needed to deliver the intervention to the desired population, ranging from time to materials).\nThe Health is Power (HIP) trial [28], a study testing the effectiveness of a group-based physical activity promotion program for ethnic minority women, provided an opportunity to explore the relationships among the dimensions of group cohesion and communication, cooperation, and competition over time. In the primary meditational analyses of HIP, the investigators found that all dimensions of group cohesion mediated the effect of the intervention with regard to psychosocial outcomes but not physical activity behaviors (i.e., the intervention was associated with increased cohesion but did not lead to increased physical activity) [29].\nThe purpose of this exploratory study was to determine the longitudinal relationship of communication, cooperation, and friendly competition to the dimensions of group cohesion. Specifically, the intention of this study was to test participants’ perceptions of the strength or presence of these variables within their group. It was hypothesized that each group-interaction variable would contribute to a large proportion of explained variance in group cohesion over time.\nHypothesis 1: Cooperation would predict the Individual’s Attraction to the Group-Task.\nHypothesis 2: Friendly competition would predict the Individual’s Attraction to the Group-Task as well as the Group’s Integration towards the Task.\nHypothesis 3: Social communication would predict aspects related to social cohesion (both Individual’s Attraction to the Group-Socially as well as the Group’s Integration Socially).\nHypothesis 4: Task communication would predict aspects related to task cohesion: both the Individual’s Attraction to the Group-Task as well as the Group’s Integration towards the Task).\nHypothesis 5: Perceptions of group cohesion and group interaction would increase over the course of the program, but decrease from program completion to the 12-month follow-up.\nTesting the hypotheses, outlined in Figure 1 below, contributes to the gap in the literature about strategies that influence the perception of group cohesion.\nHypotheses for group-interaction variables prediction of cohesion over-time.", "As the focus of this study was to determine the effect of group communication, cooperation, and competition on cohesion within a sample of minority women, data from the 103 participants randomly assigned to the physical activity group who completed baseline and post-intervention measures were analyzed. Of those women, 73% identified as African American and 27% identified as Hispanic or Latina. The participants were 47.89 years of age (±8.17), with an average BMI of 34.43 kg/m2 (±8.07). Eighty-three participants (80%) completed the 12-month follow-up assessment.", " Group cohesion The Physical Activity Group Environment Questionnaire (PAGEQ) [33] is a group cohesion inventory for physical activity groups and was used in the HIP trial. The PAGEQ is a 21-item measure of the four dimensions of ATG-T, ATG-S, GI-T, GI-S with 6, 6, 5, and 4 items, respectively. All 21 items are on a 9-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ [33]. 
ATG-T was assessed by having participants respond to items such as ‘I like the amount of physical activity I get with this group’. ATG-S was operationalized through statements such as ‘I enjoy my social interactions with this group’. The group integration dimensions of cohesion were assessed using items such as ‘members of our group often socialize together’ (GI-S) and ‘our group is united in its beliefs about the benefits of regular physical activity’ (GI-T). This questionnaire has demonstrated content, predictive, and concurrent validity [33].\nThe Physical Activity Group Environment Questionnaire (PAGEQ) [33] is a group cohesion inventory for physical activity groups and was used in the HIP trial. The PAGEQ is a 21-item measure of the four dimensions of ATG-T, ATG-S, GI-T, GI-S with 6, 6, 5, and 4 items, respectively. All 21 items are on a 9-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ [33]. ATG-T was assessed by having participants respond to items such as ‘I like the amount of physical activity I get with this group’. ATG-S was operationalized through statements such as ‘I enjoy my social interactions with this group’. The group integration dimensions of cohesion were assessed using items such as ‘members of our group often socialize together’ (GI-S) and ‘our group is united in its beliefs about the benefits of regular physical activity’ (GI-T). This questionnaire has demonstrated content, predictive, and concurrent validity [33].\n Group-interaction variables Additional items were embedded within the PAGEQ to measure the group-interaction variables of interest, including communication, cooperation, and competition. As this was an exploratory study, we developed items that were reviewed for face validity and aligned with the definitions of each of the constructs. Like cohesion, communication was operationalized as having a task and social focus and was measured through 6 items that can be further divided into task-based communication (e.g., ‘members of our group talk about how often they should do physical activity’) and social communication (e.g., ‘people of this group talk about things that are happening in our lives’). Cooperation and competition were not conceptualized as having relevant social components and focused on task outcomes. Cooperation was measured through 3 items (e.g., ‘we all cooperate to help this group’s program run smoothly’) as was competition (e.g., ‘There is friendly competition within the members to stay as healthy as possible’). Internal consistencies for the group-interaction variables were all acceptable: task communication (α = .94), social communication (α = .65), cooperation (α = .91), and friendly competition (α = .81). The sessions were designed to enhance opportunities for each of these constructs (e.g., opportunities to cooperate before, during, and after class, facilitated social interactions) (Additional file 1).\nAdditional items were embedded within the PAGEQ to measure the group-interaction variables of interest, including communication, cooperation, and competition. As this was an exploratory study, we developed items that were reviewed for face validity and aligned with the definitions of each of the constructs. 
Like cohesion, communication was operationalized as having a task and social focus and was measured through 6 items that can be further divided into task-based communication (e.g., ‘members of our group talk about how often they should do physical activity’) and social communication (e.g., ‘people of this group talk about things that are happening in our lives’). Cooperation and competition were not conceptualized as having relevant social components and focused on task outcomes. Cooperation was measured through 3 items (e.g., ‘we all cooperate to help this group’s program run smoothly’) as was competition (e.g., ‘There is friendly competition within the members to stay as healthy as possible’). Internal consistencies for the group-interaction variables were all acceptable: task communication (α = .94), social communication (α = .65), cooperation (α = .91), and friendly competition (α = .81). The sessions were designed to enhance opportunities for each of these constructs (e.g., opportunities to cooperate before, during, and after class, facilitated social interactions) (Additional file 1).", "The Physical Activity Group Environment Questionnaire (PAGEQ) [33] is a group cohesion inventory for physical activity groups and was used in the HIP trial. The PAGEQ is a 21-item measure of the four dimensions of ATG-T, ATG-S, GI-T, GI-S with 6, 6, 5, and 4 items, respectively. All 21 items are on a 9-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ [33]. ATG-T was assessed by having participants respond to items such as ‘I like the amount of physical activity I get with this group’. ATG-S was operationalized through statements such as ‘I enjoy my social interactions with this group’. The group integration dimensions of cohesion were assessed using items such as ‘members of our group often socialize together’ (GI-S) and ‘our group is united in its beliefs about the benefits of regular physical activity’ (GI-T). This questionnaire has demonstrated content, predictive, and concurrent validity [33].", "Additional items were embedded within the PAGEQ to measure the group-interaction variables of interest, including communication, cooperation, and competition. As this was an exploratory study, we developed items that were reviewed for face validity and aligned with the definitions of each of the constructs. Like cohesion, communication was operationalized as having a task and social focus and was measured through 6 items that can be further divided into task-based communication (e.g., ‘members of our group talk about how often they should do physical activity’) and social communication (e.g., ‘people of this group talk about things that are happening in our lives’). Cooperation and competition were not conceptualized as having relevant social components and focused on task outcomes. Cooperation was measured through 3 items (e.g., ‘we all cooperate to help this group’s program run smoothly’) as was competition (e.g., ‘There is friendly competition within the members to stay as healthy as possible’). Internal consistencies for the group-interaction variables were all acceptable: task communication (α = .94), social communication (α = .65), cooperation (α = .91), and friendly competition (α = .81). 
The sessions were designed to enhance opportunities for each of these constructs (e.g., opportunities to cooperate before, during, and after class, facilitated social interactions) (Additional file 1).", "Descriptive statistics, paired sample t-tests, and multiple regressions were conducted in IBM SPSS 19.0 with a priori significance set at p < 0.05. Within participant t-tests were conducted to determine changes in the group cohesion and interaction variables over time. Multiple linear regression was conducted to detect which group-interaction variables predicted group cohesion over the course of the program and at 12-months follow-up, accounting for age and ethnicity. Four regressions were completed, one for each dimension of cohesion as the dependent variable to test hypotheses 1-4, using the group interaction variables as independent variables. Longitudinal change scores (from baseline to post-intervention and post-intervention to follow-up) were computed for each group-interaction variable and dimension of cohesion for use in the regression models. Overall perceptions of cohesion were recorded at baseline, post-program, and follow-up to determine the trend in the perceptions of cohesion over time (hypothesis 5).", "HIP: Health is power; ATG-T: Attraction to the group’s task-based activities; ATG-S: Attraction to the group’s social activities; GI-T: Group integration around task activities; GI-S: Group’s integration around social activities.", "The author declares that they have no competing interests.", "SMH ran the analyses and wrote the manuscript with the assistance of PAE, who also developed the measure being tested. REL and SKM developed and delivered the intervention and contributed to the draft of the manuscript. All authors contributed to the editing and approval of the final manuscript." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Sample", "Measures", "Group cohesion", "Group-interaction variables", "Analysis", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Supplementary Material" ]
[ "Group dynamics includes the study of the nature of groups, individual relationships within groups, and interactions with others. Over 60 years ago, Kurt Lewin’s seminal work suggested that the degree to which a group was cohesive would determine individuals’ level of success as a collective [1]. Group cohesion has a long history as an important predictor of performance and outcomes in work, military, sport, and exercise groups [2-4]. Having a strong sense of group cohesion also a reflects a fundamental human need—the need to belong [5].\nGroup cohesion has been defined in many ways [6-8], but Carron, Brawley, & Widmeyer’s [9] definition has been used consistently in physical activity promotion and research. They define group cohesion as a dynamic process reflected in the shared pursuit of common objectives to satisfy members’ needs [9]. Group cohesion is further operationalized as individual (1) attraction to the group’s task-based activities (ATG-T), (2) attraction to the group’s social activities (ATG-S), (3) perceptions of the group’s integration around task-based activities (GI-T), and (4) perceptions of the group’s integration around social activities (GI-S).\nOver the previous two decades, a large body of literature has also documented the positive relationship between group cohesion and physical activity adoption and maintenance [10-14]. Participants who have strong perceptions of group cohesion attend group sessions more often, are late less often, and drop out less frequently [11]. Group cohesion also has demonstrated a consistent relationship with positive attitudes toward physical activity and enhanced perceptions of self-efficacy and personal control [4].\nFrom a theoretical perspective, Carron and Spink [15] proposed that group-interaction variables, such as communication, cooperation, and competition, are the likely precursors to developing group cohesion. Communication is defined as the sharing of information through verbal and non-verbal means. In group dynamics-based physical activity interventions, task communication (i.e., physical activity-based) occurs through facilitated group-goal setting and peer-learning activities (c.f., Irish et al. [16], Estabrooks [17]). Cooperation is defined as sharing resources to achieve a specific outcome. In physical activity classes, a cooperative environment would be one where participants provide assistance to one another in setting up the exercise equipment, overcoming obstacles, or even doing exercises together [18]. Finally, competition in an exercise group context is defined as providing participants the motivation of being superior to other groups or group members, or the motivation to keep their own group functioning at a high level (c.f., Kerr et al. [19], Steiner [20]). The concept of friendly competition influencing physical activity outcomes has been applied across a number of populations [21-23]. Friendly competition is a sense of competition that is connected to the overall success of the group and can reflect a generalized sense of intragroup competition as well as intergroup competitions within a single intervention. With friendly competition, people are inspired to compete against each other with the recognition that even if someone else wins, it benefits the group as a whole. Further, a group may share a set of norms around fairness and reciprocity in the form of competition [24]. 
The motivational aspect of friendly competition has been identified across a number of studies including faith-based weight loss trials that include physical activity [25], worksite physical activity programs [26], and physical activity promotion in hard-to-reach audiences [27]. Gaining better understanding of the relationships between group communication, cooperation, and competition has both theoretical and practical implications.\nTheoretically, to date there has been no study directed at understanding the relative contributions of communication, cooperation, and competition-based strategies to changes in group cohesion [10]. For example, it could be hypothesized that physical activity groups are more cooperative than competitive and that strategies focusing on cooperation may be superior in this context. Alternatively, communication that includes a focus on helping participants identify similarities in health aspirations could be a stronger predictor of group cohesion than cooperation or competition. However, there is a lack of longitudinal or even cross-sectional research that has analyzed the change in perceptions of group-interaction variables as it relates to the change in cohesion over time.\nFrom a practical perspective, although research on other variables that enhance cohesion exists, these specific group-interaction variables have been widely used as a guide to develop strategies that are, in turn, hypothesized to improve group cohesion, yet the relationship between group-interaction variables and group cohesion has not been examined within these intervention studies [18]. Further, no study to date has analyzed the change in perceptions of group-interaction variables as it relates to the change in cohesion over time. Understanding the mechanistic relationship between strategies that target group-interaction variables, changes in those variables, and changes in perceptions of group cohesion provides valuable information for future program development. A longitudinal study design enhances the ability to track changes in the relationships between group-interaction variables and group cohesion over time. This is significant to group-based physical activity promotion programs, because it allows programs to be planned to integrate the strategies that are most likely to improve group cohesion. Understanding these underlying mechanisms also provides health educators with the information necessary to ensure that strategies that do not contribute to changes in perceptions of group cohesion are not unnecessarily applied in practice settings where resources are often limited.\nTo date, there have been no investigations examining the relationship between physical activity group cohesion and group member perceptions of communication, cooperation, and friendly competition [10]. 
Understanding these relationships could aid in developing stronger strategies (e.g., appropriate facilitation of friendly competition within a group) to enhance group cohesion and provide practical information for those delivering interventions when making resource allocation decisions (i.e., what resources are needed to deliver the intervention to the desired population, ranging from time to materials).\nThe Health is Power (HIP) trial [28], a study testing the effectiveness of a group-based physical activity promotion program for ethnic minority women, provided an opportunity to explore the relationships among the dimensions of group cohesion and communication, cooperation, and competition over time. In the primary mediational analyses of HIP, the investigators found that all dimensions of group cohesion mediated the effect of the intervention with regard to psychosocial outcomes but not physical activity behaviors (i.e., the intervention was associated with increased cohesion but did not lead to increased physical activity) [29].\nThe purpose of this exploratory study was to determine the longitudinal relationship of communication, cooperation, and friendly competition to the dimensions of group cohesion. Specifically, the intention of this study was to test participants’ perceptions of the strength or presence of these variables within their group. It was hypothesized that each group-interaction variable would contribute to a large proportion of explained variance in group cohesion over time.\nHypothesis 1: Cooperation would predict the Individual’s Attraction to the Group-Task.\nHypothesis 2: Friendly competition would predict the Individual’s Attraction to the Group-Task as well as the Group’s Integration towards the Task.\nHypothesis 3: Social communication would predict aspects related to social cohesion (both Individual’s Attraction to the Group-Socially as well as the Group’s Integration Socially).\nHypothesis 4: Task communication would predict aspects related to task cohesion (both the Individual’s Attraction to the Group-Task as well as the Group’s Integration towards the Task).\nHypothesis 5: Perceptions of group cohesion and group interaction would increase over the course of the program, but decrease from program completion to the 12-month follow-up.\nTesting the hypotheses, outlined in Figure 1 below, helps to address the gap in the literature about strategies that influence the perception of group cohesion.\nFigure 1. Hypotheses for group-interaction variables prediction of cohesion over-time.", "African American and Hispanic or Latina women were recruited to participate in a multi-site, community-based study to test a 6-month intervention designed to promote physical activity (see Lee et al. [29] for details on recruitment). This study focused on ethnic minority women because they have particularly low rates of physical activity and disproportionately suffer from chronic diseases related to physical inactivity [30]. Further, ethnic minority women are often gatekeepers for physical activity behaviors within their families [31,32].\nWomen were randomly assigned to the physical activity intervention or a fruit and vegetable promotion matched contact comparison group. Only participants randomly assigned to the physical activity intervention group were included in this study. Health education intervention sessions and intervention content and materials have been previously described [28]. 
In Houston, there were six African American cohorts and one Hispanic or Latina cohort, and there were three Hispanic or Latina cohorts in Austin. All sessions were conducted in English. Intervention group sizes did not differ significantly, with a range of participants from 10 to 37.\nThe intervention was 24 weeks in duration and included 6 sessions. Each session included group dynamics strategies and principles based on the model developed by Carron and Spink [15] and included 15 minutes of walking after the educational component. Opportunities for communication were provided in the form of small group discussions related to the session objective. For example, women discussed strategies to overcome barriers to physical activity, shared goals, and relapse prevention plans during their group sessions. Women also used time before and after the sessions, as well as during group walks at the end of the intervention sessions, to socialize. To facilitate cooperation, participants engaged in peer problem solving activities and collaborative group goal setting. Groups also engaged in friendly competition by developing small teams working to achieve a group goal for physical activity tacked on a large map; whichever team had traveled the farthest was “winning”. The facilitators were trained to foster group interaction using semi-structured scripts. For example, a semi-structured script focusing on friendly competition might include:\n“You are all part of one big team (the whole group) as well as a part of a smaller team. The small teams will be the folks that you set your shared goals with and we will have a friendly competition between each team. But I want you to remember that the goal of our large group is that everyone in the program works their way up to 45 minutes of physical activity on at least 5 days a week. So, even though there may be some competition, this program is really about getting and giving support to be more physically active.”\nParticipant perceptions of physical activity group cohesion, communication, cooperation, and competition were assessed at baseline, post-intervention, and 6 months after the intervention was completed. All research study activities were approved by the University of Houston’s Committee for the Protection of Human Subjects, and all participants provided written informed consent prior to participation.\n Sample As the focus of this study was to determine the effect of group communication, cooperation, and competition on cohesion within a sample of minority women, data from the 103 participants randomly assigned to the physical activity group who completed baseline and post-intervention measures were analyzed. Of those women, 73% identified as African American and 27% identified as Hispanic or Latina. The participants were 47.89 years of age (±8.17), with an average BMI of 34.43 kg/m2 (±8.07). Eighty-three participants (80%) completed the 12-month follow-up assessment.\n Measures Group cohesion The Physical Activity Group Environment Questionnaire (PAGEQ) [33] is a group cohesion inventory for physical activity groups and was used in the HIP trial. The PAGEQ is a 21-item measure of the four dimensions of ATG-T, ATG-S, GI-T, GI-S with 6, 6, 5, and 4 items, respectively. All 21 items are on a 9-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ [33]. ATG-T was assessed by having participants respond to items such as ‘I like the amount of physical activity I get with this group’. ATG-S was operationalized through statements such as ‘I enjoy my social interactions with this group’. The group integration dimensions of cohesion were assessed using items such as ‘members of our group often socialize together’ (GI-S) and ‘our group is united in its beliefs about the benefits of regular physical activity’ (GI-T). This questionnaire has demonstrated content, predictive, and concurrent validity [33].\n Group-interaction variables Additional items were embedded within the PAGEQ to measure the group-interaction variables of interest, including communication, cooperation, and competition. As this was an exploratory study, we developed items that were reviewed for face validity and aligned with the definitions of each of the constructs. Like cohesion, communication was operationalized as having a task and social focus and was measured through 6 items that can be further divided into task-based communication (e.g., ‘members of our group talk about how often they should do physical activity’) and social communication (e.g., ‘people of this group talk about things that are happening in our lives’). Cooperation and competition were not conceptualized as having relevant social components and focused on task outcomes. Cooperation was measured through 3 items (e.g., ‘we all cooperate to help this group’s program run smoothly’) as was competition (e.g., ‘There is friendly competition within the members to stay as healthy as possible’). Internal consistencies for the group-interaction variables were all acceptable: task communication (α = .94), social communication (α = .65), cooperation (α = .91), and friendly competition (α = .81). The sessions were designed to enhance opportunities for each of these constructs (e.g., opportunities to cooperate before, during, and after class, facilitated social interactions) (Additional file 1).\n Analysis Descriptive statistics, paired sample t-tests, and multiple regressions were conducted in IBM SPSS 19.0 with a priori significance set at p < 0.05. Within-participant t-tests were conducted to determine changes in the group cohesion and interaction variables over time. Multiple linear regression was conducted to detect which group-interaction variables predicted group cohesion over the course of the program and at 12-month follow-up, accounting for age and ethnicity. Four regressions were completed, one for each dimension of cohesion as the dependent variable to test hypotheses 1-4, using the group interaction variables as independent variables. Longitudinal change scores (from baseline to post-intervention and post-intervention to follow-up) were computed for each group-interaction variable and dimension of cohesion for use in the regression models. Overall perceptions of cohesion were recorded at baseline, post-program, and follow-up to determine the trend in the perceptions of cohesion over time (hypothesis 5).", "As the focus of this study was to determine the effect of group communication, cooperation, and competition on cohesion within a sample of minority women, data from the 103 participants randomly assigned to the physical activity group who completed baseline and post-intervention measures were analyzed. Of those women, 73% identified as African American and 27% identified as Hispanic or Latina. The participants were 47.89 years of age (±8.17), with an average BMI of 34.43 kg/m2 (±8.07). Eighty-three participants (80%) completed the 12-month follow-up assessment.", " Group cohesion The Physical Activity Group Environment Questionnaire (PAGEQ) [33] is a group cohesion inventory for physical activity groups and was used in the HIP trial. The PAGEQ is a 21-item measure of the four dimensions of ATG-T, ATG-S, GI-T, GI-S with 6, 6, 5, and 4 items, respectively. All 21 items are on a 9-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ [33]. ATG-T was assessed by having participants respond to items such as ‘I like the amount of physical activity I get with this group’. ATG-S was operationalized through statements such as ‘I enjoy my social interactions with this group’. The group integration dimensions of cohesion were assessed using items such as ‘members of our group often socialize together’ (GI-S) and ‘our group is united in its beliefs about the benefits of regular physical activity’ (GI-T). This questionnaire has demonstrated content, predictive, and concurrent validity [33].\n Group-interaction variables Additional items were embedded within the PAGEQ to measure the group-interaction variables of interest, including communication, cooperation, and competition. As this was an exploratory study, we developed items that were reviewed for face validity and aligned with the definitions of each of the constructs. Like cohesion, communication was operationalized as having a task and social focus and was measured through 6 items that can be further divided into task-based communication (e.g., ‘members of our group talk about how often they should do physical activity’) and social communication (e.g., ‘people of this group talk about things that are happening in our lives’). Cooperation and competition were not conceptualized as having relevant social components and focused on task outcomes. Cooperation was measured through 3 items (e.g., ‘we all cooperate to help this group’s program run smoothly’) as was competition (e.g., ‘There is friendly competition within the members to stay as healthy as possible’). Internal consistencies for the group-interaction variables were all acceptable: task communication (α = .94), social communication (α = .65), cooperation (α = .91), and friendly competition (α = .81). The sessions were designed to enhance opportunities for each of these constructs (e.g., opportunities to cooperate before, during, and after class, facilitated social interactions) (Additional file 1).", "The Physical Activity Group Environment Questionnaire (PAGEQ) [33] is a group cohesion inventory for physical activity groups and was used in the HIP trial. The PAGEQ is a 21-item measure of the four dimensions of ATG-T, ATG-S, GI-T, GI-S with 6, 6, 5, and 4 items, respectively. All 21 items are on a 9-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ [33]. ATG-T was assessed by having participants respond to items such as ‘I like the amount of physical activity I get with this group’. ATG-S was operationalized through statements such as ‘I enjoy my social interactions with this group’. The group integration dimensions of cohesion were assessed using items such as ‘members of our group often socialize together’ (GI-S) and ‘our group is united in its beliefs about the benefits of regular physical activity’ (GI-T). This questionnaire has demonstrated content, predictive, and concurrent validity [33].", "Additional items were embedded within the PAGEQ to measure the group-interaction variables of interest, including communication, cooperation, and competition. As this was an exploratory study, we developed items that were reviewed for face validity and aligned with the definitions of each of the constructs. Like cohesion, communication was operationalized as having a task and social focus and was measured through 6 items that can be further divided into task-based communication (e.g., ‘members of our group talk about how often they should do physical activity’) and social communication (e.g., ‘people of this group talk about things that are happening in our lives’). Cooperation and competition were not conceptualized as having relevant social components and focused on task outcomes. Cooperation was measured through 3 items (e.g., ‘we all cooperate to help this group’s program run smoothly’) as was competition (e.g., ‘There is friendly competition within the members to stay as healthy as possible’). Internal consistencies for the group-interaction variables were all acceptable: task communication (α = .94), social communication (α = .65), cooperation (α = .91), and friendly competition (α = .81). The sessions were designed to enhance opportunities for each of these constructs (e.g., opportunities to cooperate before, during, and after class, facilitated social interactions) (Additional file 1).", "Descriptive statistics, paired sample t-tests, and multiple regressions were conducted in IBM SPSS 19.0 with a priori significance set at p < 0.05. Within-participant t-tests were conducted to determine changes in the group cohesion and interaction variables over time. Multiple linear regression was conducted to detect which group-interaction variables predicted group cohesion over the course of the program and at 12-month follow-up, accounting for age and ethnicity. Four regressions were completed, one for each dimension of cohesion as the dependent variable to test hypotheses 1-4, using the group interaction variables as independent variables. Longitudinal change scores (from baseline to post-intervention and post-intervention to follow-up) were computed for each group-interaction variable and dimension of cohesion for use in the regression models. 
Overall perceptions of cohesion were recorded at baseline, post-program, and follow-up to determine the trend in the perceptions of cohesion over time (hypothesis 5).", "Table 1 includes the descriptive data across time for the group cohesion and group-interaction variables. Age and ethnicity did not contribute to a significant proportion of the variance in the models. As can be noted in the Table, all four group-interaction variables (task and social communication, cooperation, competition), as well as three dimensions of cohesion (ATG-T, ATG-S, GI-T), significantly increased from baseline to post-intervention (p < .05), and the magnitudes of the changes were moderate to large (Cohen’s d ranging from 0.5-0.89). GI-S significantly decreased from baseline to post-intervention, and the magnitude of change was moderate (Cohen’s d = 0.64). All variables had a significant decrease from post-intervention to 12-month follow-up in the range of small to moderate effect sizes (Cohen’s d ranging from 0.27-0.53).\nTable 1. Descriptive statistics of group-interaction variables over time. *Significant change (p < .05) between baseline and post-program. **Significant change (p < .05) between post-program and follow-up. ATG-T (Attraction to the Group’s Task); ATG-S (Attraction to the Group Socially); GI-T (Group’s Integration towards the Task); GI-S (Group’s Integration Socially).\nThe proportion of explained variance in ATG-T at each time point was approximately 30 percent for each of the longitudinal regression analyses (see Table 2). Participant perceptions of competition and task-based communication were consistent contributors to the variance explained in the longitudinal regression. The group-interaction variables seemed to explain a slightly higher amount of the variance in GI-T when considering longitudinal data (i.e., approximately 63 percent of the variance). Task-based communication and friendly competition were again significant contributors to the explained variance within the longitudinal regression.\nTable 2. Longitudinal regression results predicting each dimension of group cohesion. *(p < .05). ATG-T (Attraction to the Group’s Task); ATG-S (Attraction to the Group Socially); GI-T (Group’s Integration towards the Task); GI-S (Group’s Integration Socially).\nThe regression analyses used to examine ATG-S explained a substantial proportion of variance in the longitudinal approach (43 to 59 percent). Social communication and friendly competition were significant contributors to the explained variance in the longitudinal regression of ATG-S. A somewhat lower proportion of variance was explained using the group-interaction variables with GI-S (25 percent longitudinally). There were no distinct patterns across the regressions for GI-S, which was predicted by social and task-based communication (T1-T2) and by friendly competition (T2-T3).", "Communication, cooperation, and competition are key variables in the prediction of group cohesion [15,18]. Findings support the propositions from Carron and Spink’s model that some of these variables are significantly related to group cohesion [15]. We extended the findings of previous studies to show that friendly competition predicted nearly all of the dimensions of group cohesion at all time points using longitudinal analyses. 
We also found that social and task-based forms of communication had more consistent patterns of relationships with the social and task-based dimensions of group cohesion, supporting the hypothesis that different group-interaction variables would predict different dimensions of group cohesion. Contrary to our hypotheses, perceptions of cooperation did not demonstrate a consistent relationship with any dimension of group cohesion or across time-points.\nOne of the more interesting, and perhaps unexpected, findings was the degree to which friendly competition was consistently and positively related across group cohesion dimensions. There is evidence that when groups set a goal based upon the summing of individual progress, it results in a perceived conjunctive task, where success is based not only upon the expertise of the highest-performing members but is also limited by the progress of the lowest-performing members [34,35]. In HIP, participants set a shared goal within their teams to achieve across the duration of the study. At the end of each intervention session, progress toward the team goal was reviewed and compared to other teams. Teams that met their goal were celebrated, fostering a sense of friendly competition between teams. This type of friendly competition has been highlighted as a possible motivating feature in a number of other studies with different populations. Ingram and colleagues provided qualitative data that highlighted the motivational aspects of measuring up to the standards of others in the group [36]. Further, Buis and colleagues hypothesized that the positive relationship between competition and physical activity goal completion found in their study was due to a sense of group accountability or cohesion [26]. Finally, in a group of African American men, Hooker and colleagues used a similar small, within-group competition to successfully increase physical activity [27]. The analyses showed that friendly competition predicted cohesion at each time point, indicating that friendly competition may be a key group-interaction variable for effective group-based interventions.\nPerhaps counter-intuitively, people have been known to find greater attraction to their competitors than to noncompetitors [37]. In the same way that team members may compete with those who hold a similar position, ethnic minority women in the health education intervention may compare themselves to others in the group who share similar life-roles (e.g., group member, wife, mother, friend) [37]. Friendly competition was one group dynamics strategy in this intervention that increased a sense of belonging.\nPerhaps a less abstract explanation for the potential of friendly competition to predict group cohesion is simply that participants like to try to be the best in their own groups. For example, Green and colleagues [38] successfully harnessed this idea of friendly competition in their study of group dynamics-based physical activity promotion in worksites, where worksites had team-based competitions. Successful competition was acknowledged with group praise. This competition, feedback, and reward approach also resulted in significant increases in physical activity [38]. 
Even in the absence of specific competition-related prizes within HIP, we still observed the positive effect of competition on cohesion.\nOur findings around task-based communication and the task-based aspects of group cohesion should not be surprising given the use of a number of strategies that encouraged participants to engage in discussions about physical activity. During HIP intervention sessions, topics such as setting challenging yet attainable goals, overcoming barriers to physical activity with practical solutions, and increasing social support to achieve physical activity goals were discussed in the larger group. Women were tasked with continuing the discussion in their teams and completing a related worksheet as a team to increase task-based communication. Communication around the task at hand can be facilitated through mechanisms such as group problem-solving and has been used successfully in other studies [39,40]. Our findings contribute new information to this body of literature—that group interactions may not only result in applicable plans for participants to achieve a goal, but may also foster a sense of cohesion that can increase motivation toward achieving the goal [15].\nIt was surprising that cooperation was not a consistent predictor of group cohesion over the course of this study. Our initial belief was that cooperation would be strongly related to the task-based aspects of group cohesion because of the previous findings that control beliefs, often developed through vicarious learning and support, predicted task-related group cohesion [13]. There are a number of possible explanations for this. First, the sessions may not have included activities that the participants considered cooperative. However, this seems unlikely given the significant increase in participant perceptions of cooperation over the course of the program and because cooperative activities are a consistently reported aspect of group-based programs for physical activity [18]. Second, it could be that communication and friendly competition account for the role of cooperation within a physical activity environment. We did not propose such an indirect relationship prior to completing our analyses, but suggest this may be an interesting area of future investigation.\nThis was the first study to determine the predictive relationship of group-interaction variables to group cohesion. The measures for group-interaction variables were developed specifically for this project, and, although they demonstrated internal consistency and predictive validity, further validity and reliability testing is warranted. In addition, the analyses were limited to participants who had both the baseline and follow-up assessments for group interaction and cohesion variables, resulting in findings that cannot be generalized broadly. Nevertheless, the investigation of these relationships in a large minority population over time provided the opportunity to speak more definitively about the consistent relationships that were found. In the same vein, the ethnic and racial composition of this sample may influence the generalizability of these results to other groups (e.g., mixed-race, mixed-gender).\nThese results help to address the paucity of literature on the relationship between group-interaction variables and group cohesion. 
In a recent systematic review of group dynamics-based physical activity interventions, it was concluded that more research is needed to determine what mechanisms lead to the robust effect of these interventions [18]. Group-interaction variables are a direct way to influence the perception of cohesion. In this study, strategies that fostered friendly competition were the most likely to improve participant perceptions of group cohesion, whereas cooperation lacked a consistent pattern of prediction. Future research is also needed to expand upon this exploratory study to determine the degree to which group interaction and group cohesion mediate an increase in physical activity or program adherence.\nCompetition was a stronger and more consistent predictor of cohesion than the other group-interaction variables. These data for ethnic minority women are of particular interest as females are seen as the more cooperative and collective gender [41,42] and as ethnic minority groups in America have been known to seek group identity and shared achievement [43]. It has been documented that competition is an influential factor for African American men engaging in physical activity interventions [44]. This is the first study, to our knowledge, to find such a strong relationship between competition and a sense of group cohesion in African American and Latina women. Reasons attributed to the appeal of competition for men (e.g., showing off, sense of accomplishment) provide new insights into the assumed cooperative female gender and suggest that interpersonal relationships that support positive health behavior changes are more complex than previously suggested. Last, the effects of competition are stronger for males than females in mixed-gender environments [45]. Future research endeavors are needed to see if these findings are generalizable to other all-female physical activity groups. If so, including elements of the competitive side of physical activity as a promotion strategy might help women to engage in and sustain physical activity.", "Group dynamics-based physical activity programs are successful at achieving their outcomes of interest [18]. Although previous research has shown that increased perceptions of cohesion lead to increased engagement in the program and subsequent increases in physical activity [4], less is known about what strategies lead to increased perceptions of cohesion [18]. This study presents promising data about how group-interaction variables, including communication, competition, and cooperation, may influence the perception of cohesion. Specific investigation of these variables indicated that strategies that foster friendly competition are most likely to improve participant perceptions of group cohesion.", "HIP: Health is power; ATG-T: Attraction to the group’s task-based activities; ATG-S: Attraction to the group’s social activities; GI-T: Group integration around task activities; GI-S: Group’s integration around social activities.", "The authors declare that they have no competing interests.", "SMH ran the analyses and wrote the manuscript with the assistance of PAE, who also developed the measure being tested. REL and SKM developed and delivered the intervention and contributed to the draft of the manuscript. All authors contributed to the editing and approval of the final manuscript.", "Complete group-interaction variables on a 9-point agreement scale." ]
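A note on the internal consistencies reported in the Measures text above (e.g., α = .94 for task communication): Cronbach's alpha for a small item set can be computed as k/(k-1) multiplied by (1 minus the sum of the item variances divided by the variance of the summed score). The sketch below is illustrative only; the original analyses were run in IBM SPSS rather than Python, the column names (coop_1 to coop_3) are hypothetical stand-ins for the actual survey items, and the responses are simulated rather than taken from the HIP data.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score)
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(7.0, 1.0, size=100)  # shared signal so the three items correlate
responses = pd.DataFrame(
    {f"coop_{i}": np.clip(latent + rng.normal(0, 0.6, 100), 1, 9) for i in range(1, 4)}
)  # three hypothetical 9-point cooperation items
print(f"alpha = {cronbach_alpha(responses):.2f}")

With highly intercorrelated items, as simulated here, alpha comes out high, mirroring the pattern reported for the cooperation and task-communication items.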
[ null, "methods", null, null, null, null, null, "results", "discussion", "conclusions", null, null, null, "supplementary-material" ]
[ "Group dynamics", "Group-interaction", "Physical activity" ]
Background: Group dynamics includes the study of the nature of groups, individual relationships within groups, and interactions with others. Over 60 years ago, Kurt Lewin’s seminal work suggested that the degree to which a group was cohesive would determine individuals’ level of success as a collective [1]. Group cohesion has a long history as an important predictor of performance and outcomes in work, military, sport, and exercise groups [2-4]. Having a strong sense of group cohesion also reflects a fundamental human need—the need to belong [5]. Group cohesion has been defined in many ways [6-8], but Carron, Brawley, & Widmeyer’s [9] definition has been used consistently in physical activity promotion and research. They define group cohesion as a dynamic process reflected in the shared pursuit of common objectives to satisfy members’ needs [9]. Group cohesion is further operationalized as individual (1) attraction to the group’s task-based activities (ATG-T), (2) attraction to the group’s social activities (ATG-S), (3) perceptions of the group’s integration around task-based activities (GI-T), and (4) perceptions of the group’s integration around social activities (GI-S). Over the previous two decades, a large body of literature has also documented the positive relationship between group cohesion and physical activity adoption and maintenance [10-14]. Participants who have strong perceptions of group cohesion attend group sessions more often, are late less often, and drop out less frequently [11]. Group cohesion also has demonstrated a consistent relationship with positive attitudes toward physical activity and enhanced perceptions of self-efficacy and personal control [4]. From a theoretical perspective, Carron and Spink [15] proposed that group-interaction variables, such as communication, cooperation, and competition, are the likely precursors to developing group cohesion. Communication is defined as the sharing of information through verbal and non-verbal means. In group dynamics-based physical activity interventions, task communication (i.e., physical activity-based) occurs through facilitated group-goal setting and peer-learning activities (c.f., Irish et al. [16], Estabrooks [17]). Cooperation is defined as sharing resources to achieve a specific outcome. In physical activity classes, a cooperative environment would be one where participants provide assistance to one another in setting up the exercise equipment, overcoming obstacles, or even doing exercises together [18]. Finally, competition in an exercise group context is defined as providing participants the motivation of being superior to other groups or group members, or the motivation to keep their own group functioning at a high level (c.f., Kerr et al. [19], Steiner [20]). The concept of friendly competition influencing physical activity outcomes has been applied across a number of populations [21-23]. Friendly competition is a sense of competition that is connected to the overall success of the group and can reflect a generalized sense of intragroup competition as well as intergroup competitions within a single intervention. With friendly competition, people are inspired to compete against each other with the recognition that even if someone else wins, it benefits the group as a whole. Further, a group may share a set of norms around fairness and reciprocity in the form of competition [24]. 
The motivational aspect of friendly competition has been identified across a number of studies including faith-based weight loss trials that include physical activity [25], worksite physical activity programs [26], and physical activity promotion in hard-to-reach audiences [27]. Gaining better understanding of the relationships between group communication, cooperation, and competition has both theoretical and practical implications. Theoretically, to date there has been no study directed at understanding the relative contributions of communication, cooperation, and competition-based strategies to changes in group cohesion [10]. For example, it could be hypothesized that physical activity groups are more cooperative than competitive and that strategies focusing on cooperation may be superior in this context. Alternatively, communication that includes a focus on helping participants identify similarities in health aspirations could be a stronger predictor of group cohesion than cooperation or competition. However, there is a lack of longitudinal or even cross-sectional research that has analyzed the change in perceptions of group-interaction variables as it relates to the change in cohesion over time. From a practical perspective, although research on other variables that enhance cohesion exists, these specific group-interaction variables have been widely used as a guide to develop strategies that are, in turn, hypothesized to improve group cohesion, yet the relationship between group-interaction variables and group cohesion has not been examined within these intervention studies [18]. Further, no study to date has analyzed the change in perceptions of group-interaction variables as it relates to the change in cohesion over time. Understanding the mechanistic relationship between strategies that target group-interaction variables, changes in those variables, and changes in perceptions of group cohesion provides valuable information for future program development. A longitudinal study design enhances the ability to track changes in the relationships between group-interaction variables and group cohesion over time. This is significant to group-based physical activity promotion programs, because it allows programs to be planned to integrate the strategies that are most likely to improve group cohesion. Understanding these underlying mechanisms also provides health educators with the information necessary to ensure that strategies that do not contribute to changes in perceptions of group cohesion are not unnecessarily applied in practice settings where resources are often limited. To date, there have been no investigations examining the relationship between physical activity group cohesion and group member perceptions of communication, cooperation, and friendly competition [10]. Understanding these relationships could aid in developing stronger strategies (e.g., appropriate facilitation of friendly competition within a group) to enhance group cohesion and provide practical information for those delivering interventions when making resource allocation decisions (i.e., what resources are needed to deliver the intervention to the desired population, ranging from time to materials). 
The Health is Power (HIP) trial [28], a study testing the effectiveness of a group-based physical activity promotion program for ethnic minority women, provided an opportunity to explore the relationships among the dimensions of group cohesion and communication, cooperation, and competition over time. In the primary mediational analyses of HIP, the investigators found that all dimensions of group cohesion mediated the effect of the intervention with regard to psychosocial outcomes but not physical activity behaviors (i.e., the intervention was associated with increased cohesion but did not lead to increased physical activity) [29]. The purpose of this exploratory study was to determine the longitudinal relationship of communication, cooperation, and friendly competition to the dimensions of group cohesion. Specifically, the intention of this study was to test participants’ perceptions of the strength or presence of these variables within their group. It was hypothesized that each group-interaction variable would contribute to a large proportion of explained variance in group cohesion over time. Hypothesis 1: Cooperation would predict the Individual’s Attraction to the Group-Task. Hypothesis 2: Friendly competition would predict the Individual’s Attraction to the Group-Task as well as the Group’s Integration towards the Task. Hypothesis 3: Social communication would predict aspects related to social cohesion (both Individual’s Attraction to the Group-Socially as well as the Group’s Integration Socially). Hypothesis 4: Task communication would predict aspects related to task cohesion (both the Individual’s Attraction to the Group-Task as well as the Group’s Integration towards the Task). Hypothesis 5: Perceptions of group cohesion and group interaction would increase over the course of the program, but decrease from program completion to the 12-month follow-up. Testing the hypotheses, outlined in Figure 1 below, helps to address the gap in the literature about strategies that influence the perception of group cohesion. Figure 1. Hypotheses for group-interaction variables prediction of cohesion over-time. Methods: African American and Hispanic or Latina women were recruited to participate in a multi-site, community-based study to test a 6-month intervention designed to promote physical activity (see Lee et al. [29] for details on recruitment). This study focused on ethnic minority women because they have particularly low rates of physical activity and disproportionately suffer from chronic diseases related to physical inactivity [30]. Further, ethnic minority women are often gatekeepers for physical activity behaviors within their families [31,32]. Women were randomly assigned to the physical activity intervention or a fruit and vegetable promotion matched contact comparison group. Only participants randomly assigned to the physical activity intervention group were included in this study. Health education intervention sessions and intervention content and materials have been previously described [28]. In Houston, there were six African American cohorts and one Hispanic or Latina cohort, and there were three Hispanic or Latina cohorts in Austin. All sessions were conducted in English. Intervention group sizes did not differ significantly, with a range of participants from 10 to 37. The intervention was 24 weeks in duration and included 6 sessions. 
Each session included group dynamics strategies and principles based on the model developed by Carron and Spink [15] and included 15 minutes of walking after the educational component. Opportunities for communication were provided in the form of small group discussions related to the session objective. For example, women discussed strategies to overcome barriers to physical activity, shared goals, and relapse prevention plans during their group sessions. Women also used time before and after the sessions, as well as during group walks at the end of the intervention sessions, to socialize. To facilitate cooperation, participants engaged in peer problem solving activities and collaborative group goal setting. Groups also engaged in friendly competition by developing small teams working to achieve a group goal for physical activity tacked on a large map; whichever team had traveled the farthest was “winning”. The facilitators were trained to foster group interaction using semi-structured scripts. For example, a semi-structured script focusing on friendly competition might include: “You are all part of one big team (the whole group) as well as a part of a smaller team. The small teams will be the folks that you set your shared goals with and we will have a friendly competition between each team. But I want you to remember, that the goal of our large group is that everyone in the program works their way up to 45 minutes of physical activity on at least 5 days a week. So, even though there may be some competition, this program is really about getting and giving support to be more physically active.” Participant perceptions of physical activity group cohesion, communication, cooperation, and competition were assessed at baseline, post intervention, and 6-months after the intervention was completed. All research study activities were approved by the University of Houston’s Committee for the Protection of Human Subjects, and all participants provided written informed consent prior to participation. Sample As the focus of this study was to determine the effect of group communication, cooperation, and competition on cohesion within a sample of minority women, data from the 103 participants randomly assigned to the physical activity group who completed baseline and post-intervention measures were analyzed. Of those women, 73% identified as African American and 27% identified as Hispanic or Latina. The participants were 47.89 years of age (±8.17), with an average BMI of 34.43 kg/m2 (±8.07). Eighty-three participants (80%) completed the 12-month follow-up assessment. As the focus of this study was to determine the effect of group communication, cooperation, and competition on cohesion within a sample of minority women, data from the 103 participants randomly assigned to the physical activity group who completed baseline and post-intervention measures were analyzed. Of those women, 73% identified as African American and 27% identified as Hispanic or Latina. The participants were 47.89 years of age (±8.17), with an average BMI of 34.43 kg/m2 (±8.07). Eighty-three participants (80%) completed the 12-month follow-up assessment. Measures Group cohesion The Physical Activity Group Environment Questionnaire (PAGEQ) [33] is a group cohesion inventory for physical activity groups and was used in the HIP trial. The PAGEQ is a 21-item measure of the four dimensions of ATG-T, ATG-S, GI-T, GI-S with 6, 6, 5, and 4 items, respectively. 
All 21 items are on a 9-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ [33]. ATG-T was assessed by having participants respond to items such as ‘I like the amount of physical activity I get with this group’. ATG-S was operationalized through statements such as ‘I enjoy my social interactions with this group’. The group integration dimensions of cohesion were assessed using items such as ‘members of our group often socialize together’ (GI-S) and ‘our group is united in its beliefs about the benefits of regular physical activity’ (GI-T). This questionnaire has demonstrated content, predictive, and concurrent validity [33]. Group-interaction variables: Additional items were embedded within the PAGEQ to measure the group-interaction variables of interest, including communication, cooperation, and competition. As this was an exploratory study, we developed items that were reviewed for face validity and aligned with the definitions of each of the constructs. Like cohesion, communication was operationalized as having a task and social focus and was measured through 6 items that can be further divided into task-based communication (e.g., ‘members of our group talk about how often they should do physical activity’) and social communication (e.g., ‘people of this group talk about things that are happening in our lives’). Cooperation and competition were not conceptualized as having relevant social components and focused on task outcomes. Cooperation was measured through 3 items (e.g., ‘we all cooperate to help this group’s program run smoothly’) as was competition (e.g., ‘There is friendly competition within the members to stay as healthy as possible’). Internal consistencies for the group-interaction variables were all acceptable: task communication (α = .94), social communication (α = .65), cooperation (α = .91), and friendly competition (α = .81). The sessions were designed to enhance opportunities for each of these constructs (e.g., opportunities to cooperate before, during, and after class, facilitated social interactions) (Additional file 1).
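The internal consistencies reported above are Cronbach's alpha coefficients computed over the items of each subscale. Purely as an illustration (the original scoring was done in SPSS; the file name and item column names below are hypothetical, and whether HIP scored subscales as item means or sums is not stated here), a minimal Python sketch of scoring the 3-item cooperation subscale and computing its alpha could look like this:

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (one row per respondent)."""
    items = items.dropna()
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical item-level data: 9-point agreement ratings for the three
# cooperation items (coop_1..coop_3) collected at baseline.
df = pd.read_csv("hip_baseline_items.csv")       # hypothetical file name
coop_items = df[["coop_1", "coop_2", "coop_3"]]

df["cooperation"] = coop_items.mean(axis=1)      # one subscale score per participant
print(f"Cooperation alpha = {cronbach_alpha(coop_items):.2f}")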
Analysis: Descriptive statistics, paired sample t-tests, and multiple regressions were conducted in IBM SPSS 19.0 with a priori significance set at p < 0.05. Within-participant t-tests were conducted to determine changes in the group cohesion and interaction variables over time. Multiple linear regression was conducted to detect which group-interaction variables predicted group cohesion over the course of the program and at the 12-month follow-up, accounting for age and ethnicity. Four regressions were completed, one for each dimension of cohesion as the dependent variable to test hypotheses 1-4, using the group-interaction variables as independent variables. Longitudinal change scores (from baseline to post-intervention and post-intervention to follow-up) were computed for each group-interaction variable and dimension of cohesion for use in the regression models.
Overall perceptions of cohesion were recorded at baseline, post-program, and follow-up to determine the trend in the perceptions of cohesion over time (hypothesis 5).
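A minimal sketch of this analytic sequence is shown below, purely as an illustration: the study itself ran these analyses in IBM SPSS, and the wide-format data file and variable names used here are hypothetical. The sketch pairs a within-participant t-test (with a paired-samples Cohen's d) for the baseline-to-post change in ATG-T with a change-score regression predicting that change from the group-interaction change scores, adjusting for age and ethnicity:

import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical wide-format data: one row per participant, with subscale scores
# at baseline (t1), post-intervention (t2), and 12-month follow-up (t3).
df = pd.read_csv("hip_scores_wide.csv")

# Within-participant change from baseline to post-intervention (hypothesis 5).
diff = df["atgt_t2"] - df["atgt_t1"]
t, p = stats.ttest_rel(df["atgt_t2"], df["atgt_t1"])
d = diff.mean() / diff.std(ddof=1)   # paired-samples effect size (one common variant)
print(f"ATG-T change t1->t2: t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")

# Change scores used as dependent and independent variables (hypotheses 1-4).
for var in ["atgt", "taskcom", "soccom", "coop", "comp"]:
    df[f"{var}_d12"] = df[f"{var}_t2"] - df[f"{var}_t1"]

# One regression per cohesion dimension; the ATG-T model is shown here.
model = smf.ols(
    "atgt_d12 ~ taskcom_d12 + soccom_d12 + coop_d12 + comp_d12 + age + C(ethnicity)",
    data=df,
).fit()
print(model.summary())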
Results: Table 1 includes the descriptive data across time for the group cohesion and group-interaction variables. Age and ethnicity did not contribute to a significant proportion of the variance in the models.
As can be noted in Table 1, all four group-interaction variables (task and social communication, cooperation, competition), as well as three dimensions of cohesion (ATG-T, ATG-S, GI-T), significantly increased from baseline to post-intervention (p < .05), and the magnitudes of the changes were moderate to large (Cohen’s d ranging from 0.50 to 0.89). GI-S significantly decreased from baseline to post-intervention, and the magnitude of change was moderate (Cohen’s d = 0.64). All variables showed a significant decrease from post-intervention to 12-month follow-up, with small to moderate effect sizes (Cohen’s d ranging from 0.27 to 0.53). Table 1: Descriptive statistics of group-interaction variables over time. *Significant change (p < .05) between baseline and post-program; **significant change (p < .05) between post-program and follow-up. ATG-T (Attraction to the Group’s Task); ATG-S (Attraction to the Group Socially); GI-T (Group’s Integration towards the Task); GI-S (Group’s Integration Socially). The proportion of explained variance in ATG-T was approximately 30 percent for each of the longitudinal regression analyses (see Table 2). Participant perceptions of competition and task-based communication were consistent contributors to the variance explained in the longitudinal regression. The group-interaction variables explained a slightly higher proportion of the variance in GI-T when considering longitudinal data (approximately 63 percent of the variance). Task-based communication and friendly competition were again significant contributors to the explained variance within the longitudinal regression. Table 2: Longitudinal regression results predicting each dimension of group cohesion. *p < .05; abbreviations as in Table 1. The regression analyses examining ATG-S explained a significant amount of variance longitudinally (43 to 59 percent). Social communication and friendly competition were significant contributors to the explained variance in the longitudinal regression of ATG-S. A somewhat lower proportion of variance was explained by the group-interaction variables for GI-S (25 percent longitudinally). There were no distinct patterns across the regressions for GI-S, which was predicted by social and task-based communication (T1-T2) and by friendly competition (T2-T3). Discussion: Communication, cooperation, and competition are key variables in the prediction of group cohesion [15,18]. Findings support the propositions from Carron and Spink’s model that some of these variables are significantly related to group cohesion [15]. We extended the findings of previous studies to show that friendly competition predicted nearly all of the dimensions of group cohesion at all time points in longitudinal analyses. We also found that social and task-based forms of communication had more consistent patterns of relationships with the social and task-based dimensions of group cohesion, supporting the hypothesis that different group-interaction variables would predict different dimensions of group cohesion. Contrary to our hypotheses, perceptions of cooperation did not demonstrate a consistent relationship with any dimension of group cohesion or across time points.
One of the more interesting, and perhaps unexpected, findings was the degree to which friendly competition was consistently and positively related to the group cohesion dimensions. There is evidence that when groups set a goal based on the summing of individual progress, it results in a perceived conjunctive task, where success depends not only on the expertise of the highest performing members but is also limited by the progress of the lowest performing members [34,35]. In HIP, participants set a shared goal within their teams to achieve across the duration of the study. At the end of each intervention session, progress toward the team goal was reviewed and compared to other teams. Teams that met their goal were celebrated, fostering a sense of friendly competition between teams. This type of friendly competition has been highlighted as a possible motivating feature in a number of other studies with different populations. Ingram and colleagues provided qualitative data that highlighted the motivational aspects of measuring up to the standards of others in the group [36]. Further, Buis and colleagues hypothesized that the positive relationship between competition and physical activity goal completion found in their study was due to a sense of group accountability or cohesion [26]. Finally, in a group of African American men, Hooker and colleagues used a similar small, within-group competition to successfully increase physical activity [27]. The analyses showed that friendly competition predicted cohesion at each time point, indicating that friendly competition may be a key group-interaction variable for effective group-based interventions. Perhaps counter-intuitively, people have been known to be more attracted to their competitors than to noncompetitors [37]. In the same way that team members may compete with those who hold a similar position, ethnic minority women in the health education intervention may compare themselves to others in the group who share similar life roles (e.g., group member, wife, mother, friend) [37]. Friendly competition was one group dynamics strategy in this intervention that increased a sense of belonging. Perhaps a less abstract explanation for the potential of friendly competition to predict group cohesion is simply that participants like to try to be the best in their own groups. For example, Green and colleagues [38] successfully harnessed this idea of friendly competition in their study of group dynamics-based physical activity promotion in worksites, where worksites held team-based competitions and successful competition was recognized with group praise. This competition, feedback, and reward approach also resulted in significant increases in physical activity [38]. Even without specific competition-related prizes within HIP, we still observed a positive effect of competition on cohesion. Our findings around task-based communication and the task-based aspects of group cohesion should not be surprising given the use of a number of strategies that encouraged participants to engage in discussions about physical activity. During HIP intervention sessions, topics such as setting challenging yet attainable goals, overcoming barriers to physical activity with practical solutions, and increasing social support to achieve physical activity goals were discussed in the larger group.
Women were tasked with continuing the discussion in their teams and completing a related worksheet as a team to increase task-based communication. Communication around the task at hand can be facilitated through mechanisms such as group problem-solving and has been used successfully in other studies [39,40]. Our findings contribute new information to this body of literature: group interactions may not only result in applicable plans for participants to achieve a goal, but may also foster a sense of cohesion that can increase motivation toward achieving the goal [15]. It was surprising that cooperation was not a consistent predictor of group cohesion over the course of this study. Our initial belief was that cooperation would be strongly related to the task-based aspects of group cohesion because of previous findings that control beliefs, often developed through vicarious learning and support, predicted task-related group cohesion [13]. There are a number of possible explanations for this. First, the sessions may not have included activities that the participants considered cooperative. However, this seems unlikely given the significant increase in participant perceptions of cooperation over the course of the program and because cooperative activities are a consistently reported aspect of group-based programs for physical activity [18]. Second, it could be that communication and friendly competition account for the role of cooperation within a physical activity environment. We did not propose such an indirect relationship prior to completing our analyses, but suggest this may be an interesting area of future investigation. This was the first study to determine the predictive relationship of group-interaction variables to group cohesion. The measures for group-interaction variables were developed specifically for this project, and, although they demonstrated internal consistency and predictive validity, further validity and reliability testing is warranted. In addition, the analyses were limited to participants who had both the baseline and follow-up assessments for the group interaction and cohesion variables, resulting in findings that cannot be generalized broadly. Nevertheless, the investigation of these relationships in a large minority population over time provided the opportunity to speak more definitively about the consistent relationships that were found. In the same vein, the ethnic and racial composition of this sample may influence the generalizability of these results to other groups (e.g., mixed-race, mixed-gender). These results help to address the paucity of literature on the relationship between group-interaction variables and group cohesion. In a recent systematic review of group dynamics-based physical activity interventions, it was concluded that more research is needed to determine what mechanisms lead to the robust effect of these interventions [18]. Group-interaction variables are a direct way in which to influence the perception of cohesion. Strategies that foster friendly competition appear most likely to improve participant perceptions of group cohesion, whereas cooperation lacked a consistent pattern of prediction. Future research is also needed to expand upon this exploratory study to determine the degree to which group interaction and group cohesion mediate an increase in physical activity or program adherence. Competition was a stronger, and more consistent, predictor of cohesion than the other group-interaction variables.
These data for ethnic minority women are of particular interest, as females are seen as the more cooperative and collective gender [41,42] and as ethnic minority groups in America have been known to seek group identity and shared achievement [43]. It has been documented that competition is an influential factor for African American men engaging in physical activity interventions [44]. This is the first study, to our knowledge, to find such a strong relationship between competition and a sense of group cohesion in African American and Latina women. Reasons attributed to the appeal of competition for men (e.g., showing off, a sense of accomplishment) provide new insights into the assumption that women are the more cooperative gender and suggest that the interpersonal relationships that support positive health behavior changes are more complex than previously suggested. Last, the effects of competition are stronger for males than for females in mixed-gender environments [45]. Future research is needed to determine whether these findings generalize to other all-female physical activity groups. If so, including elements of the competitive side of physical activity as a promotion strategy might help women to engage in and sustain physical activity. Conclusions: Group dynamics-based physical activity programs are successful at achieving their outcomes of interest [18]. Although previous research has shown that increased perceptions of cohesion lead to increased engagement in the program and subsequent increases in physical activity [4], less is known about what strategies lead to increased perceptions of cohesion [18]. This study presents promising data about how group-interaction variables, including communication, competition, and cooperation, may influence the perception of cohesion. Specific investigation of these variables indicated that strategies that foster friendly competition are most likely to improve participant perceptions of group cohesion. Abbreviations: HIP: Health is Power; ATG-T: Attraction to the group’s task-based activities; ATG-S: Attraction to the group’s social activities; GI-T: Group integration around task activities; GI-S: Group’s integration around social activities. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: SMH ran the analyses and wrote the manuscript with the assistance of PAE, who also developed the measure being tested. REL and SKM developed and delivered the intervention and contributed to the draft of the manuscript. All authors contributed to the editing and approval of the final manuscript. Supplementary Material (Additional file 1): Complete group-interaction variable items on a 9-point agreement scale.
Background: Interaction in the form of cooperation, communication, and friendly competition theoretically precedes the development of group cohesion, which often precedes adherence to health promotion programs. The purpose of this manuscript was to explore longitudinal relationships among dimensions of group cohesion and group-interaction variables to inform and improve group-based strategies within programs aimed at promoting physical activity. Methods: Ethnic minority women completed a group dynamics-based physical activity promotion intervention (N = 103; 73% African American; 27% Hispanic/Latina; mean age = 47.89 ± 8.17 years; mean BMI = 34.43 ± 8.07 kg/m2) and assessments of group cohesion and group-interaction variables at baseline, 6 months (post-program), and 12 months (follow-up). Results: All four dimensions of group cohesion had significant (ps < 0.01) relationships with the group-interaction variables. Competition was a consistently strong predictor of cohesion, while cooperation did not demonstrate consistent patterns of prediction. Conclusions: Facilitating a sense of friendly competition may increase engagement in physical activity programs by bolstering group cohesion.
8,923
227
[ 1597, 114, 976, 208, 274, 188, 53, 10, 52 ]
14
[ "group", "cohesion", "competition", "physical", "physical activity", "activity", "communication", "group cohesion", "variables", "interaction" ]
[ "group cohesion inventory", "group cohesion cooperation", "cohesion physical activity", "define group cohesion", "activity group cohesion" ]
[CONTENT] Group dynamics | Group-interaction | Physical activity [SUMMARY]
[CONTENT] Adult | Black or African American | Body Mass Index | Communication | Female | Health Knowledge, Attitudes, Practice | Health Promotion | Hispanic or Latino | Humans | Interpersonal Relations | Linear Models | Longitudinal Studies | Middle Aged | Minority Groups | Motor Activity | Social Facilitation | Surveys and Questionnaires [SUMMARY]
[CONTENT] group cohesion inventory | group cohesion cooperation | cohesion physical activity | define group cohesion | activity group cohesion [SUMMARY]
[CONTENT] group | cohesion | competition | physical | physical activity | activity | communication | group cohesion | variables | interaction [SUMMARY]
[CONTENT] group | cohesion | group cohesion | competition | physical activity | activity | physical | perceptions | strategies | perceptions group [SUMMARY]
[CONTENT] group | items | physical | activity | physical activity | communication | competition | cohesion | social | cooperation [SUMMARY]
[CONTENT] variance | group | significant | gi | explained | longitudinal regression | atg | regression | longitudinal | task [SUMMARY]
[CONTENT] increased | increased perceptions | increased perceptions cohesion | perceptions | cohesion | lead increased | perceptions cohesion | lead | 18 | strategies [SUMMARY]
[CONTENT] group | cohesion | competition | items | physical | physical activity | activity | communication | task | variables [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 103 | 73% | African American | 27% | Hispanic | 47.89 +  | mBMI | 34.43 | 8.07 kg/m2 | 6 months | 12 months [SUMMARY]
[CONTENT] four | ps <  | 0.01 ||| [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| 103 | 73% | African American | 27% | Hispanic | 47.89 +  | mBMI | 34.43 | 8.07 kg/m2 | 6 months | 12 months ||| four | ps <  | 0.01 ||| ||| [SUMMARY]
Chicken CATH-2 Increases Antigen Presentation Markers on Chicken Monocytes and Macrophages.
31362652
Cathelicidins are a family of Host Defense Peptides (HDPs), that play an important role in the innate immune response. They exert both broad-spectrum antimicrobial activity against pathogens, and strong immunomodulatory functions that affect the response of innate and adaptive immune cells.
BACKGROUND
Chicken macrophages and chicken monocytes were incubated with cathelicidins. Activation of immune cells was determined by measuring the surface markers Mannose Receptor C-type 1 (MRC1) and MHC-II. Cytokine production was measured by qPCR, and nitric oxide production was determined using the Griess assay. Finally, the effect of cathelicidins on phagocytosis was measured using carboxylate-modified polystyrene latex beads.
METHODS
CATH-2 and its all-D enantiomer D-CATH-2 increased MRC1 and MHC-II expression, markers for antigen presentation, on primary chicken monocytes, whereas LL-37 did not. D-CATH-2 also increased MRC1 and MHC-II expression when a chicken macrophage cell line (HD11 cells) was used. In addition, LPS-induced NO production by HD11 cells was inhibited by CATH-2 and D-CATH-2.
RESULTS
These results are a clear indication that CATH-2 (and D-CATH-2) affect the activation state of monocytes and macrophages, which leads to optimization of the innate immune response and enhancement of the adaptive immune response.
CONCLUSION
[ "Amino Acid Sequence", "Animals", "Antigen Presentation", "Antimicrobial Cationic Peptides", "Biomarkers", "Cathelicidins", "Cell Line", "Chickens", "Cytokines", "Gene Expression Regulation", "Histocompatibility Antigens Class II", "Humans", "Immunity, Innate", "Immunomodulation", "Lectins, C-Type", "Macrophages", "Mannose Receptor", "Mannose-Binding Lectins", "Monocytes", "Receptors, Cell Surface" ]
6978643
INTRODUCTION
Cathelicidins are short cationic peptides with an important role in host defense. They have broad antimicrobial activity, as shown by their ability to kill a wide range of bacteria, viruses and fungi [1-4]. Besides direct antimicrobial activity, cathelicidins also play an immunomodulatory role in the innate immune system through, amongst other mechanisms, activation and chemotaxis of a variety of immune cells [5], and their role is still expanding [6-9]. However, it is unclear whether chicken cathelicidins share all of these activities with their mammalian counterparts. In humans, the only cathelicidin gene, hCAP-18, has been shown to yield at least three different mature peptides, of
which LL-37 is the most commonly studied [9]. This cathelicidin is produced by different cell types, including neutrophils, macrophages, and NK cells [10]. In chickens, there are four different cathelicidins: CATH-1, -2, and -3, and CATH-B1 [11-14]. Multiple reports have described immunomodulatory effects of cathelicidins in vivo. For example, LL-37 upregulated the neutrophil response and cleared an infection in a murine Pseudomonas aeruginosa lung infection model without the peptide showing direct antimicrobial effects [15]. The chicken cathelicidin CATH-1 was used in an MRSA infection model in mice, where it provided partial protection [16]. In addition, chicken CATH-2 increased the number of phagocytic cells in a zebrafish model, leading to a protective effect against bacterial infection. These studies suggested a more immuno-stimulatory mechanism of action for this peptide [17]. Finally, in ovo treatment with the all-D enantiomer of CATH-2 (D-CATH-2) showed a protective effect against a subsequent E. coli infection 7 days after hatch, indicating again that immunomodulatory effects, and not direct antimicrobial killing, were involved [18]. The involvement of immune cells, and specifically macrophages, was best shown in a study using a so-called innate defense-regulator peptide, IDR-1. This peptide, which does not have direct antimicrobial activity, could protect mice from infections, but was inactive when these mice were depleted of monocytes and macrophages [19]. In line with this, we have shown that CATH-2, D-CATH-2 and LL-37 have an effect on the mononuclear phagocyte population within chicken PBMCs [20]. In order to better understand the in vivo immunomodulatory activity of CATH-2, the current study investigated the effect of CATH-2 on primary chicken monocytes and a chicken macrophage cell line. The all-D enantiomer D-CATH-2 was included based on its stability towards proteases and its described effect in ovo [18], while LL-37 was included as a well-described immunomodulatory peptide of human origin. The study indicates that CATH-2 has clear effects on macrophage and monocyte antigen presentation markers, which could be an important feature of these peptides as part of the innate immune response of chickens towards pathogens.
null
null
RESULTS
CATH-2 and D-CATH-2 Increased Antigen Presentation Capacity of Primary Chicken Monocytes. Chicken monocytes were incubated with chicken CATH-2, D-CATH-2 and human LL-37. After 4 h of incubation, CATH-2 and D-CATH-2 dose-dependently increased the expression of MRC1 approximately 12- and 15-fold, respectively, while LL-37 had no effect on expression (Figure 1A). Similarly, CATH-2 and D-CATH-2 could induce MHC-II expression, albeit to a lower extent (1.2- and 2-fold increased expression, respectively; Figure 1B). This indicates that CATH-2 and its all-D enantiomer D-CATH-2 can affect the antigen presentation capacity of monocytes. The effects of the TLR4 agonist LPS and the TLR1/TLR2 agonist PAM3CSK4, alone or in combination with cathelicidins (2.5 µM) for 24 h, on expression of MRC1 and MHC-II in primary chicken monocytes were also studied. In the absence of peptides, no significant effects were observed for either TLR ligand (white bars, Figures 2A and 2B). Similarly, LPS or PAM3CSK4 did not lead to increased expression of MRC1 or MHC-II in the presence of 2.5 µM CATH-2, D-CATH-2 or LL-37. The only significant difference found was for MRC1 expression in all groups containing D-CATH-2 compared to samples without peptide present, confirming the results from Figure 1 where D-CATH-2 by itself increased MRC1 expression. Finally, intracellular levels of IL-1β, IL-6 and IFN-γ were determined for LPS-stimulated monocytes in the presence or absence of 10 µM of peptides using FACS analysis. Comparable to the antigen presentation markers, no significant effect of LPS and/or cathelicidins was observed (data not shown). Thus, chicken monocytes are not particularly sensitive to TLR2 or TLR4 stimulation with regard to marker expression and cytokine expression.
D-CATH-2 Increased the Antigen Presentation Capacity of HD11 Cells
To confirm the results obtained in primary chicken monocytes, the effects of CATH-2, D-CATH-2 and LL-37 were also studied in the chicken macrophage cell line HD11. As observed for primary monocytes, D-CATH-2 increased MRC1 and MHC-II expression in HD11 cells after 4 h of exposure. Surprisingly, unlike in primary monocytes, CATH-2 did not affect the expression of these cell surface markers in HD11 cells (Figures 3A and B). A longer incubation time also did not result in significant effects of CATH-2 on MRC1 and MHC-II expression (Figures 3C and D), indicating that the difference between monocytes and macrophages in their response to CATH-2 is unlikely to be due to kinetic differences. As expected, LL-37 also failed to increase MRC1 or MHC-II expression in HD11 cells.
Cathelicidins Inhibited LPS-Induced NO Production and Increased Phagocytosis in HD11 Cells
To explore possible functional consequences of incubating immune cells with CATH-2, D-CATH-2 and LL-37, their effects on LPS-induced NO production and on the phagocytic capacity of HD11 cells were investigated. LPS stimulation led to 25-fold higher NO levels than in unstimulated cells (Figure 4A). CATH-2 and D-CATH-2 significantly reduced this LPS-induced NO production (Figure 4A), with D-CATH-2 showing a strong effect at the lowest concentration tested (2.5 µM), whereas CATH-2 required higher concentrations for full activity. As a second functional assay, the effect of the cathelicidins on the phagocytic activity of macrophages was tested (Figure 4B). Interestingly, LL-37 dose-dependently enhanced the phagocytosis of latex beads, with 3- and 5-fold increases in uptake at 5 µM and 10 µM, respectively, whereas CATH-2 and D-CATH-2 had no effect at any concentration. These assays indicate that human and chicken cathelicidins affect different functions of chicken macrophages.
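NO levels in these experiments were quantified from culture supernatants with the Griess reaction and read against a standard curve (see Griess Assay). The sketch below illustrates, under the assumption of a linear standard curve, how absorbance at 550 nm could be converted to a nitrite (NO) concentration and expressed as a fold change over unstimulated cells; all concentrations and absorbance values are hypothetical placeholders rather than measurements from this study.

# Minimal sketch: converting Griess-assay absorbance (A550) to an NO (nitrite)
# concentration via a linear standard curve, then expressing LPS-induced NO as a
# fold change over unstimulated cells. All values are hypothetical placeholders.
import numpy as np

# Hypothetical standard curve: known standard concentrations (uM) and their A550
std_conc = np.array([0.0, 6.25, 12.5, 25.0, 50.0, 100.0])
std_a550 = np.array([0.04, 0.07, 0.11, 0.19, 0.35, 0.67])

# Linear fit: A550 = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_a550, deg=1)

def a550_to_conc(a550):
    # Interpolate sample absorbance onto the standard curve
    return (np.asarray(a550) - intercept) / slope

unstimulated = a550_to_conc([0.05, 0.06, 0.05])   # medium only
lps          = a550_to_conc([0.62, 0.70, 0.66])   # 100 ng/ml LPS
lps_peptide  = a550_to_conc([0.21, 0.25, 0.23])   # LPS + cathelicidin

print(f"LPS alone:     {lps.mean() / unstimulated.mean():.1f}-fold vs unstimulated")
print(f"LPS + peptide: {lps_peptide.mean() / unstimulated.mean():.1f}-fold vs unstimulated")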
CONCLUSION
Overall, this study shows that CATH-2 and D-CATH-2 increase antigen presentation markers on primary chicken monocytes, and that D-CATH-2 does so on HD11 cells as well. This upregulation could lead to greater T cell activation and thus the potential for a better adaptive immune response against infection. This novel finding adds another functionality to the chicken cathelicidin repertoire in immunity.
[ "Peptides", "Cell Surface Marker Staining", "Intracellular Staining of Cytokines", "Phagocytosis Assay", "Griess Assay", "Statistical Analysis", "CATH-2 and D-CATH-2 Increased Antigen Presentation Capacity of Primary Chicken Monocytes", "D-CATH-2 Increased the Antigen Presentation Capacity of HD11 Cells", "Cathelicidins Inhibited LPS-Induced NO Production and Increased Phagocytosis in HD11 Cells" ]
[ "CATH-2 and D-CATH-2 (amino acid sequence: RFGRFLRKIRRFRPKVTITIQGSARF-NH2) were synthesized by Fmoc-chemistry (CPC Scientific) and LL-37 was synthesized by Fmoc-chemistry at the Academic Centre for Dentistry Amsterdam (ACTA).", "Chicken macrophage HD11 cells, (gift from Prof. Jos van Putten, Utrecht University, the Netherlands) were cultured in a 24-wells plate containing 250,000 cells at 37 °C in RPMI1640 media (Lonza, Switzerland) containing 10% fetal calf serum (Bodinco, The Netherlands) and 1% penicillin/streptomycin (Life Technologies, CA, USA). For the monocyte culture, whole blood was collected from ~76 week-old healthy chickens and PBMCs were isolated using a Ficoll gradient and frozen until use. PBMCs were cultured overnight in a 24-wells plate containing 5x106 cells in RPMI1640 media containing 10% fetal calf serum, 10 U/ml penicillin and 10 mg/ml streptomycin. The next day, the wells were washed three times to remove all non-attached cells.\nHD11 cells and monocytes were incubated with different concentrations of peptide for 4 or 24 h. In some experiments, monocytes were also stimulated with 100 ng/ml LPS (E. coli 0111:B4; InvivoGen, CA, USA) or 100 ng/ml PAM3CSK4 (InvivoGen) for 24 h. Next, cells were harvested and incubated for 30 min with antibodies KUL-01-FITC (mannose receptor MRC1/CD206; clone KUL01), MHCII-PE (clone 2G11) (both Southern Biotech, Birmingham, USA) on ice, and subsequently incubated for 30 min on ice with secondary BV421-labelled antibody (Biolegend, CA, USA). Afterwards, cells were washed and analyzed using flow cytometry (FACSCanto-II, BD Biosciences, CA, USA) and FlowJo software (Ashland, OR, USA).", "Monocytes were incubated with 10 µM peptide for 16 h in the presence of 100 ng/ml LPS and 1 µg/ml GolgiPlug (BD Biosciences). After harvesting, cells were incubated with KUL-01-FITC for the cell surface staining. The intracellular staining was performed according to the manufacturer’s protocol (BD Cytofix/Cytoperm Fixation/ Permeabilization kit; BD Biosciences). In short, after the cell surface staining, cells were incubated with Fixaton/ Permeabilization solution for 20 min. Next, antibodies to IL-1β, IL-6 and IFN-γ (Bioconnect, Toronto, Canada) with secondary AF405-labelled antibody (Invitrogen, CA, USA) were used to study intracellular cytokine production. Cells were washed and analyzed using flow cytometry and FlowJo. The relative cytokine production was determined by correcting for the staining control and expressed as a fold difference from levels in control HD11 cells.", "In a 96-wells plate, 50,000 HD11 cells per well were incubated with carboxylate-modified polystyrene latex beads (Sigma-Aldrich, Missouri, USA) in a 1:10 ratio for 1 h at 
41 °C in the presence of different concentrations of peptide. Cells were washed four times with cold PBS containing 1% FSC and 0.01% NaN3 and analyzed using flow cytometry. Phagocytosis was determined by correcting the uptake of latex beads at 41 °C for the uptake at 4 °C.", "NO production was determined using the Griess Assay. In a 48-wells plate, 250,000 HD11 cells per well were incubated overnight in the presence of different concentrations of peptide and 100 ng/ml LPS. Supernatant was harvested and 50 μl of the supernatant was mixed with 50 μl 1% sulfanilamide (5% phosphoric acid) and incubated for 5 min at RT in the dark. Then 50 μl of 0.1% N-(1-naphthyl) ethylenediamine dihydrochloride was added and incubated for 5 min at RT in the dark. Absorbance was measured at 550 nm and the amount of NO production was determined by a standard of sodium nitrate (Sigma-Aldrich).", "Statistical significance was assessed with one-way ANOVA followed by the Dunnett Post-Hoc test in GraphPad Prism software, version 6.02. A p-value of <0.05 was considered statistically significant.", "Chicken monocytes were incubated with chicken CATH-2, D-CATH-2 and human LL-37. After 4 h of incubation, CATH-2 and D-CATH-2 dose-dependently increased the expression of MRC1 approximately 12- and 15-fold, respectively, while LL-37 had no effect on expression (Figure 1A). Similarly, CATH-2 and D-CATH-2 could induce MHC-II expression, albeit to a lower extend (1.2- and 2-fold increased expression, respectively: Figure 1B). This indicates that CATH-2 and its all-D enantiomer D-CATH-2 can affect the antigen presentation capacity of monocytes.\nThe effects of TLR4 agonist LPS and TLR1/TLR2 agonist PAM3CSK4 alone or in combination with cathelicidins (2.5 µM) for 24 h on expression of MRC1 and MHC-II in primary chicken monocytes was also studied. In the absence of peptides, no significant effects were observed for either TLR ligand (white bars, Figures 2A and 2B). Similarly, LPS or PAM3CSK4 did not lead to increased expression of MRC1 or MHC-II in the presence of 2.5 µM CATH-2, D-CATH-2 or LL-37. The only significant difference found was for MRC1 expression for all groups containing D-CATH-2 compared to samples without peptide present, confirming the results from Figure 1 where D-CATH-2 by itself increased MRC1 expression.\nFinally, intracellular levels of IL-1β, IL-6 and IFN-γ were determined for LPS-stimulated monocytes in the presence or absence of 10 µM of peptides using FACS analysis. Comparable to the antigen presenting markers, no significant effect of LPS and/or cathelicidins was observed (data not shown). Thus, chicken monocytes are not particularly sensitive for TLR2 or TLR4 stimulation with regards to marker expression and cytokine expression.", "In order to confirm our results obtained in primary chicken monocytes, we also studied the effects of CATH-2, D-CATH-2 and LL-37 on the chicken macrophage cell line HD11. As observed for primary monocytes D-CATH-2 induced an increase in MRC1 and MHC-II expression in HD11 cells after 4 h exposure Surprisingly, unlike in primary monocytes, CATH-2 did not affect the expression of these cell surface markers in HD11 cells (Figures 3A and B). An increased incubation time did also not result in significant effects on MRC1 and MHC-II expression for CATH-2 (Figures 3C and D), indicating that the difference observed between monocytes and macrophages for CATH-2 stimulation is not likely due to kinetic differences. 
As expected, LL-37 also failed to induce an increased expression of MRC1 or MHC-II in HD11 cells.", "To determine possible functional implications of incubating immune cells with CATH-2, D-CATH-2 and LL-37, the effect on LPS-induced NO production and phagocytic capacity of HD11 cells was investigated. LPS stimulation led to 25-fold higher NO levels compared to unstimulated cells (Figure 4A). There was a significant decrease in the LPS-induced NO production with CATH-2 and D-CATH-2 (Figure 4A), with the latter showing a strong effect at the lowest concentration tested (2.5 µM), while CATH-2 required higher concentrations for full activity. As a second functional assay the effect of cathelicidins on phagocytic activity of macrophages was tested (Figure 4B). Interestingly, LL-37 had a clear dose-dependent effect on the phagocytosis of beads showing a 3-, and 5-fold increase in uptake at 5 µM and 10 µM of LL-37. CATH-2 and D-CATH-2 had no effect on phagocytosis at any concentration. These assays show that human and chicken cathelicidins seem to affect different functions of chicken macrophages." ]
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Peptides", "Cell Surface Marker Staining", "Intracellular Staining of Cytokines", "Phagocytosis Assay", "Griess Assay", "Statistical Analysis", "RESULTS", "CATH-2 and D-CATH-2 Increased Antigen Presentation Capacity of Primary Chicken Monocytes", "D-CATH-2 Increased the Antigen Presentation Capacity of HD11 Cells", "Cathelicidins Inhibited LPS-Induced NO Production and Increased Phagocytosis in HD11 Cells", "DISCUSSION", "CONCLUSION" ]
[ "Cathelicidins are short cationic peptides with an important role in host defense. They have broad antimicrobial activity as shown by their ability to kill a wide range of bacteria, viruses and fungi [1-4]. Besides direct antimicrobial activity, cathelicidins also play a more immunomodulatory role in the innate immune system through, amongst others, activation and chemotaxis of a variety of immune cells [5] and their role is still expanding [6-9]. However, it is unclear if chicken cathelicidins share all of these activities with their mammalian counterparts.\nIn human, the only cathelicidin gene hCAP-18, has been shown to yield at least three different mature peptides, of 
which LL-37 is the most commonly studied [9]. This cathelicidin is produced by different cell types, including neutrophils, macrophages, and NK cells [10]. In chicken there are four different cathelicidins: CATH-1, -2, and -3, and CATH-B1 [11-14]. Multiple reports have described immunomodulatory effects of cathelicidins in vivo. For example, LL-37 upregulated the neutrophil response and cleared an infection in a murine Pseudomonas aeruginosa lung infection model without the peptide showing direct antimicrobial effects [15]. The chicken cathelicidin CATH-1 was used in a MRSA infection model in mice, where it provided partial protection [16]. In addition, chicken CATH-2 increased the number of phagocytic cells in a zebrafish model leading to a protective effect against bacterial infection. These studies suggested a more immuno-stimulatory mechanism of action for this peptide [17]. Finally, in ovo treatment with the all-D enantiomer of CATH-2 (D-CATH-2) showed a protective effect against a subsequent E. coli infection 7 days after hatch, indicating again that immunomodulatory effects and not direct antimicrobial killing was involved [18].\nThe involvement of immune cells and specifically macrophages was best shown in a study using a so-called innate defense-regulator peptide, IDR-1. This peptide, which does not have direct antimicrobial activity, could protect mice from infections, but was inactive when these mice were depleted of monocytes and macrophages [19]. In line with this, we have shown that CATH-2, D-CATH-2 and LL-37 have an effect on the mononuclear phagocyte population within chicken PBMCs [20].\nIn order to better understand the in vivo immunomodulatory activity of CATH-2 the current study investigated the effect of CATH-2 on primary chicken monocytes and a chicken macrophage cell line. The all D-enantiomer D-CATH-2 was included based on its stability towards proteases and its described effect in ovo [18], while LL-37 was included as a well described immunomodulatory peptide of human origin. The study indicates that CATH-2 has clear effects on macrophage and monocyte antigen presentation markers which could be an important feature of these peptides as part of the innate immune response of chickens towards pathogens.", " Peptides CATH-2 and D-CATH-2 (amino acid sequence: RFGRFLRKIRRFRPKVTITIQGSARF-NH2) were synthesized by Fmoc-chemistry (CPC Scientific) and LL-37 was synthesized by Fmoc-chemistry at the Academic Centre for Dentistry Amsterdam (ACTA).\nCATH-2 and D-CATH-2 (amino acid sequence: RFGRFLRKIRRFRPKVTITIQGSARF-NH2) were synthesized by Fmoc-chemistry (CPC Scientific) and LL-37 was synthesized by Fmoc-chemistry at the Academic Centre for Dentistry Amsterdam (ACTA).\n Cell Surface Marker Staining Chicken macrophage HD11 cells, (gift from Prof. Jos van Putten, Utrecht University, the Netherlands) were cultured in a 24-wells plate containing 250,000 cells at 37 °C in RPMI1640 media (Lonza, Switzerland) containing 10% fetal calf serum (Bodinco, The Netherlands) and 1% penicillin/streptomycin (Life Technologies, CA, USA). For the monocyte culture, whole blood was collected from ~76 week-old healthy chickens and PBMCs were isolated using a Ficoll gradient and frozen until use. PBMCs were cultured overnight in a 24-wells plate containing 5x106 cells in RPMI1640 media containing 10% fetal calf serum, 10 U/ml penicillin and 10 mg/ml streptomycin. 
The next day, the wells were washed three times to remove all non-attached cells.\nHD11 cells and monocytes were incubated with different concentrations of peptide for 4 or 24 h. In some experiments, monocytes were also stimulated with 100 ng/ml LPS (E. coli 0111:B4; InvivoGen, CA, USA) or 100 ng/ml PAM3CSK4 (InvivoGen) for 24 h. Next, cells were harvested and incubated for 30 min with antibodies KUL-01-FITC (mannose receptor MRC1/CD206; clone KUL01), MHCII-PE (clone 2G11) (both Southern Biotech, Birmingham, USA) on ice, and subsequently incubated for 30 min on ice with secondary BV421-labelled antibody (Biolegend, CA, USA). Afterwards, cells were washed and analyzed using flow cytometry (FACSCanto-II, BD Biosciences, CA, USA) and FlowJo software (Ashland, OR, USA).\nChicken macrophage HD11 cells, (gift from Prof. Jos van Putten, Utrecht University, the Netherlands) were cultured in a 24-wells plate containing 250,000 cells at 37 °C in RPMI1640 media (Lonza, Switzerland) containing 10% fetal calf serum (Bodinco, The Netherlands) and 1% penicillin/streptomycin (Life Technologies, CA, USA). For the monocyte culture, whole blood was collected from ~76 week-old healthy chickens and PBMCs were isolated using a Ficoll gradient and frozen until use. PBMCs were cultured overnight in a 24-wells plate containing 5x106 cells in RPMI1640 media containing 10% fetal calf serum, 10 U/ml penicillin and 10 mg/ml streptomycin. The next day, the wells were washed three times to remove all non-attached cells.\nHD11 cells and monocytes were incubated with different concentrations of peptide for 4 or 24 h. In some experiments, monocytes were also stimulated with 100 ng/ml LPS (E. coli 0111:B4; InvivoGen, CA, USA) or 100 ng/ml PAM3CSK4 (InvivoGen) for 24 h. Next, cells were harvested and incubated for 30 min with antibodies KUL-01-FITC (mannose receptor MRC1/CD206; clone KUL01), MHCII-PE (clone 2G11) (both Southern Biotech, Birmingham, USA) on ice, and subsequently incubated for 30 min on ice with secondary BV421-labelled antibody (Biolegend, CA, USA). Afterwards, cells were washed and analyzed using flow cytometry (FACSCanto-II, BD Biosciences, CA, USA) and FlowJo software (Ashland, OR, USA).\n Intracellular Staining of Cytokines Monocytes were incubated with 10 µM peptide for 16 h in the presence of 100 ng/ml LPS and 1 µg/ml GolgiPlug (BD Biosciences). After harvesting, cells were incubated with KUL-01-FITC for the cell surface staining. The intracellular staining was performed according to the manufacturer’s protocol (BD Cytofix/Cytoperm Fixation/ Permeabilization kit; BD Biosciences). In short, after the cell surface staining, cells were incubated with Fixaton/ Permeabilization solution for 20 min. Next, antibodies to IL-1β, IL-6 and IFN-γ (Bioconnect, Toronto, Canada) with secondary AF405-labelled antibody (Invitrogen, CA, USA) were used to study intracellular cytokine production. Cells were washed and analyzed using flow cytometry and FlowJo. The relative cytokine production was determined by correcting for the staining control and expressed as a fold difference from levels in control HD11 cells.\nMonocytes were incubated with 10 µM peptide for 16 h in the presence of 100 ng/ml LPS and 1 µg/ml GolgiPlug (BD Biosciences). After harvesting, cells were incubated with KUL-01-FITC for the cell surface staining. The intracellular staining was performed according to the manufacturer’s protocol (BD Cytofix/Cytoperm Fixation/ Permeabilization kit; BD Biosciences). 
In short, after the cell surface staining, cells were incubated with Fixaton/ Permeabilization solution for 20 min. Next, antibodies to IL-1β, IL-6 and IFN-γ (Bioconnect, Toronto, Canada) with secondary AF405-labelled antibody (Invitrogen, CA, USA) were used to study intracellular cytokine production. Cells were washed and analyzed using flow cytometry and FlowJo. The relative cytokine production was determined by correcting for the staining control and expressed as a fold difference from levels in control HD11 cells.\n Phagocytosis Assay In a 96-wells plate, 50,000 HD11 cells per well were incubated with carboxylate-modified polystyrene latex beads (Sigma-Aldrich, Missouri, USA) in a 1:10 ratio for 1 h at 
41 °C in the presence of different concentrations of peptide. Cells were washed four times with cold PBS containing 1% FSC and 0.01% NaN3 and analyzed using flow cytometry. Phagocytosis was determined by correcting the uptake of latex beads at 41 °C for the uptake at 4 °C.\nIn a 96-wells plate, 50,000 HD11 cells per well were incubated with carboxylate-modified polystyrene latex beads (Sigma-Aldrich, Missouri, USA) in a 1:10 ratio for 1 h at 
41 °C in the presence of different concentrations of peptide. Cells were washed four times with cold PBS containing 1% FSC and 0.01% NaN3 and analyzed using flow cytometry. Phagocytosis was determined by correcting the uptake of latex beads at 41 °C for the uptake at 4 °C.\n Griess Assay NO production was determined using the Griess Assay. In a 48-wells plate, 250,000 HD11 cells per well were incubated overnight in the presence of different concentrations of peptide and 100 ng/ml LPS. Supernatant was harvested and 50 μl of the supernatant was mixed with 50 μl 1% sulfanilamide (5% phosphoric acid) and incubated for 5 min at RT in the dark. Then 50 μl of 0.1% N-(1-naphthyl) ethylenediamine dihydrochloride was added and incubated for 5 min at RT in the dark. Absorbance was measured at 550 nm and the amount of NO production was determined by a standard of sodium nitrate (Sigma-Aldrich).\nNO production was determined using the Griess Assay. In a 48-wells plate, 250,000 HD11 cells per well were incubated overnight in the presence of different concentrations of peptide and 100 ng/ml LPS. Supernatant was harvested and 50 μl of the supernatant was mixed with 50 μl 1% sulfanilamide (5% phosphoric acid) and incubated for 5 min at RT in the dark. Then 50 μl of 0.1% N-(1-naphthyl) ethylenediamine dihydrochloride was added and incubated for 5 min at RT in the dark. Absorbance was measured at 550 nm and the amount of NO production was determined by a standard of sodium nitrate (Sigma-Aldrich).\n Statistical Analysis Statistical significance was assessed with one-way ANOVA followed by the Dunnett Post-Hoc test in GraphPad Prism software, version 6.02. A p-value of <0.05 was considered statistically significant.\nStatistical significance was assessed with one-way ANOVA followed by the Dunnett Post-Hoc test in GraphPad Prism software, version 6.02. A p-value of <0.05 was considered statistically significant.", "CATH-2 and D-CATH-2 (amino acid sequence: RFGRFLRKIRRFRPKVTITIQGSARF-NH2) were synthesized by Fmoc-chemistry (CPC Scientific) and LL-37 was synthesized by Fmoc-chemistry at the Academic Centre for Dentistry Amsterdam (ACTA).", "Chicken macrophage HD11 cells, (gift from Prof. Jos van Putten, Utrecht University, the Netherlands) were cultured in a 24-wells plate containing 250,000 cells at 37 °C in RPMI1640 media (Lonza, Switzerland) containing 10% fetal calf serum (Bodinco, The Netherlands) and 1% penicillin/streptomycin (Life Technologies, CA, USA). For the monocyte culture, whole blood was collected from ~76 week-old healthy chickens and PBMCs were isolated using a Ficoll gradient and frozen until use. PBMCs were cultured overnight in a 24-wells plate containing 5x106 cells in RPMI1640 media containing 10% fetal calf serum, 10 U/ml penicillin and 10 mg/ml streptomycin. The next day, the wells were washed three times to remove all non-attached cells.\nHD11 cells and monocytes were incubated with different concentrations of peptide for 4 or 24 h. In some experiments, monocytes were also stimulated with 100 ng/ml LPS (E. coli 0111:B4; InvivoGen, CA, USA) or 100 ng/ml PAM3CSK4 (InvivoGen) for 24 h. Next, cells were harvested and incubated for 30 min with antibodies KUL-01-FITC (mannose receptor MRC1/CD206; clone KUL01), MHCII-PE (clone 2G11) (both Southern Biotech, Birmingham, USA) on ice, and subsequently incubated for 30 min on ice with secondary BV421-labelled antibody (Biolegend, CA, USA). 
Afterwards, cells were washed and analyzed using flow cytometry (FACSCanto-II, BD Biosciences, CA, USA) and FlowJo software (Ashland, OR, USA).", "Monocytes were incubated with 10 µM peptide for 16 h in the presence of 100 ng/ml LPS and 1 µg/ml GolgiPlug (BD Biosciences). After harvesting, cells were incubated with KUL-01-FITC for the cell surface staining. The intracellular staining was performed according to the manufacturer’s protocol (BD Cytofix/Cytoperm Fixation/ Permeabilization kit; BD Biosciences). In short, after the cell surface staining, cells were incubated with Fixaton/ Permeabilization solution for 20 min. Next, antibodies to IL-1β, IL-6 and IFN-γ (Bioconnect, Toronto, Canada) with secondary AF405-labelled antibody (Invitrogen, CA, USA) were used to study intracellular cytokine production. Cells were washed and analyzed using flow cytometry and FlowJo. The relative cytokine production was determined by correcting for the staining control and expressed as a fold difference from levels in control HD11 cells.", "In a 96-wells plate, 50,000 HD11 cells per well were incubated with carboxylate-modified polystyrene latex beads (Sigma-Aldrich, Missouri, USA) in a 1:10 ratio for 1 h at 
41 °C in the presence of different concentrations of peptide. Cells were washed four times with cold PBS containing 1% FSC and 0.01% NaN3 and analyzed using flow cytometry. Phagocytosis was determined by correcting the uptake of latex beads at 41 °C for the uptake at 4 °C.", "NO production was determined using the Griess Assay. In a 48-wells plate, 250,000 HD11 cells per well were incubated overnight in the presence of different concentrations of peptide and 100 ng/ml LPS. Supernatant was harvested and 50 μl of the supernatant was mixed with 50 μl 1% sulfanilamide (5% phosphoric acid) and incubated for 5 min at RT in the dark. Then 50 μl of 0.1% N-(1-naphthyl) ethylenediamine dihydrochloride was added and incubated for 5 min at RT in the dark. Absorbance was measured at 550 nm and the amount of NO production was determined by a standard of sodium nitrate (Sigma-Aldrich).", "Statistical significance was assessed with one-way ANOVA followed by the Dunnett Post-Hoc test in GraphPad Prism software, version 6.02. A p-value of <0.05 was considered statistically significant.", " CATH-2 and D-CATH-2 Increased Antigen Presentation Capacity of Primary Chicken Monocytes Chicken monocytes were incubated with chicken CATH-2, D-CATH-2 and human LL-37. After 4 h of incubation, CATH-2 and D-CATH-2 dose-dependently increased the expression of MRC1 approximately 12- and 15-fold, respectively, while LL-37 had no effect on expression (Figure 1A). Similarly, CATH-2 and D-CATH-2 could induce MHC-II expression, albeit to a lower extend (1.2- and 2-fold increased expression, respectively: Figure 1B). This indicates that CATH-2 and its all-D enantiomer D-CATH-2 can affect the antigen presentation capacity of monocytes.\nThe effects of TLR4 agonist LPS and TLR1/TLR2 agonist PAM3CSK4 alone or in combination with cathelicidins (2.5 µM) for 24 h on expression of MRC1 and MHC-II in primary chicken monocytes was also studied. In the absence of peptides, no significant effects were observed for either TLR ligand (white bars, Figures 2A and 2B). Similarly, LPS or PAM3CSK4 did not lead to increased expression of MRC1 or MHC-II in the presence of 2.5 µM CATH-2, D-CATH-2 or LL-37. The only significant difference found was for MRC1 expression for all groups containing D-CATH-2 compared to samples without peptide present, confirming the results from Figure 1 where D-CATH-2 by itself increased MRC1 expression.\nFinally, intracellular levels of IL-1β, IL-6 and IFN-γ were determined for LPS-stimulated monocytes in the presence or absence of 10 µM of peptides using FACS analysis. Comparable to the antigen presenting markers, no significant effect of LPS and/or cathelicidins was observed (data not shown). Thus, chicken monocytes are not particularly sensitive for TLR2 or TLR4 stimulation with regards to marker expression and cytokine expression.\nChicken monocytes were incubated with chicken CATH-2, D-CATH-2 and human LL-37. After 4 h of incubation, CATH-2 and D-CATH-2 dose-dependently increased the expression of MRC1 approximately 12- and 15-fold, respectively, while LL-37 had no effect on expression (Figure 1A). Similarly, CATH-2 and D-CATH-2 could induce MHC-II expression, albeit to a lower extend (1.2- and 2-fold increased expression, respectively: Figure 1B). 
This indicates that CATH-2 and its all-D enantiomer D-CATH-2 can affect the antigen presentation capacity of monocytes.\nThe effects of TLR4 agonist LPS and TLR1/TLR2 agonist PAM3CSK4 alone or in combination with cathelicidins (2.5 µM) for 24 h on expression of MRC1 and MHC-II in primary chicken monocytes was also studied. In the absence of peptides, no significant effects were observed for either TLR ligand (white bars, Figures 2A and 2B). Similarly, LPS or PAM3CSK4 did not lead to increased expression of MRC1 or MHC-II in the presence of 2.5 µM CATH-2, D-CATH-2 or LL-37. The only significant difference found was for MRC1 expression for all groups containing D-CATH-2 compared to samples without peptide present, confirming the results from Figure 1 where D-CATH-2 by itself increased MRC1 expression.\nFinally, intracellular levels of IL-1β, IL-6 and IFN-γ were determined for LPS-stimulated monocytes in the presence or absence of 10 µM of peptides using FACS analysis. Comparable to the antigen presenting markers, no significant effect of LPS and/or cathelicidins was observed (data not shown). Thus, chicken monocytes are not particularly sensitive for TLR2 or TLR4 stimulation with regards to marker expression and cytokine expression.\n D-CATH-2 Increased the Antigen Presentation Capacity of HD11 Cells In order to confirm our results obtained in primary chicken monocytes, we also studied the effects of CATH-2, D-CATH-2 and LL-37 on the chicken macrophage cell line HD11. As observed for primary monocytes D-CATH-2 induced an increase in MRC1 and MHC-II expression in HD11 cells after 4 h exposure Surprisingly, unlike in primary monocytes, CATH-2 did not affect the expression of these cell surface markers in HD11 cells (Figures 3A and B). An increased incubation time did also not result in significant effects on MRC1 and MHC-II expression for CATH-2 (Figures 3C and D), indicating that the difference observed between monocytes and macrophages for CATH-2 stimulation is not likely due to kinetic differences. As expected, LL-37 also failed to induce an increased expression of MRC1 or MHC-II in HD11 cells.\nIn order to confirm our results obtained in primary chicken monocytes, we also studied the effects of CATH-2, D-CATH-2 and LL-37 on the chicken macrophage cell line HD11. As observed for primary monocytes D-CATH-2 induced an increase in MRC1 and MHC-II expression in HD11 cells after 4 h exposure Surprisingly, unlike in primary monocytes, CATH-2 did not affect the expression of these cell surface markers in HD11 cells (Figures 3A and B). An increased incubation time did also not result in significant effects on MRC1 and MHC-II expression for CATH-2 (Figures 3C and D), indicating that the difference observed between monocytes and macrophages for CATH-2 stimulation is not likely due to kinetic differences. As expected, LL-37 also failed to induce an increased expression of MRC1 or MHC-II in HD11 cells.\n Cathelicidins Inhibited LPS-Induced NO Production and Increased Phagocytosis in HD11 Cells To determine possible functional implications of incubating immune cells with CATH-2, D-CATH-2 and LL-37, the effect on LPS-induced NO production and phagocytic capacity of HD11 cells was investigated. LPS stimulation led to 25-fold higher NO levels compared to unstimulated cells (Figure 4A). 
There was a significant decrease in the LPS-induced NO production with CATH-2 and D-CATH-2 (Figure 4A), with the latter showing a strong effect at the lowest concentration tested (2.5 µM), while CATH-2 required higher concentrations for full activity. As a second functional assay the effect of cathelicidins on phagocytic activity of macrophages was tested (Figure 4B). Interestingly, LL-37 had a clear dose-dependent effect on the phagocytosis of beads showing a 3-, and 5-fold increase in uptake at 5 µM and 10 µM of LL-37. CATH-2 and D-CATH-2 had no effect on phagocytosis at any concentration. These assays show that human and chicken cathelicidins seem to affect different functions of chicken macrophages.\nTo determine possible functional implications of incubating immune cells with CATH-2, D-CATH-2 and LL-37, the effect on LPS-induced NO production and phagocytic capacity of HD11 cells was investigated. LPS stimulation led to 25-fold higher NO levels compared to unstimulated cells (Figure 4A). There was a significant decrease in the LPS-induced NO production with CATH-2 and D-CATH-2 (Figure 4A), with the latter showing a strong effect at the lowest concentration tested (2.5 µM), while CATH-2 required higher concentrations for full activity. As a second functional assay the effect of cathelicidins on phagocytic activity of macrophages was tested (Figure 4B). Interestingly, LL-37 had a clear dose-dependent effect on the phagocytosis of beads showing a 3-, and 5-fold increase in uptake at 5 µM and 10 µM of LL-37. CATH-2 and D-CATH-2 had no effect on phagocytosis at any concentration. These assays show that human and chicken cathelicidins seem to affect different functions of chicken macrophages.", "Chicken monocytes were incubated with chicken CATH-2, D-CATH-2 and human LL-37. After 4 h of incubation, CATH-2 and D-CATH-2 dose-dependently increased the expression of MRC1 approximately 12- and 15-fold, respectively, while LL-37 had no effect on expression (Figure 1A). Similarly, CATH-2 and D-CATH-2 could induce MHC-II expression, albeit to a lower extend (1.2- and 2-fold increased expression, respectively: Figure 1B). This indicates that CATH-2 and its all-D enantiomer D-CATH-2 can affect the antigen presentation capacity of monocytes.\nThe effects of TLR4 agonist LPS and TLR1/TLR2 agonist PAM3CSK4 alone or in combination with cathelicidins (2.5 µM) for 24 h on expression of MRC1 and MHC-II in primary chicken monocytes was also studied. In the absence of peptides, no significant effects were observed for either TLR ligand (white bars, Figures 2A and 2B). Similarly, LPS or PAM3CSK4 did not lead to increased expression of MRC1 or MHC-II in the presence of 2.5 µM CATH-2, D-CATH-2 or LL-37. The only significant difference found was for MRC1 expression for all groups containing D-CATH-2 compared to samples without peptide present, confirming the results from Figure 1 where D-CATH-2 by itself increased MRC1 expression.\nFinally, intracellular levels of IL-1β, IL-6 and IFN-γ were determined for LPS-stimulated monocytes in the presence or absence of 10 µM of peptides using FACS analysis. Comparable to the antigen presenting markers, no significant effect of LPS and/or cathelicidins was observed (data not shown). 
Thus, chicken monocytes are not particularly sensitive for TLR2 or TLR4 stimulation with regards to marker expression and cytokine expression.", "In order to confirm our results obtained in primary chicken monocytes, we also studied the effects of CATH-2, D-CATH-2 and LL-37 on the chicken macrophage cell line HD11. As observed for primary monocytes D-CATH-2 induced an increase in MRC1 and MHC-II expression in HD11 cells after 4 h exposure Surprisingly, unlike in primary monocytes, CATH-2 did not affect the expression of these cell surface markers in HD11 cells (Figures 3A and B). An increased incubation time did also not result in significant effects on MRC1 and MHC-II expression for CATH-2 (Figures 3C and D), indicating that the difference observed between monocytes and macrophages for CATH-2 stimulation is not likely due to kinetic differences. As expected, LL-37 also failed to induce an increased expression of MRC1 or MHC-II in HD11 cells.", "To determine possible functional implications of incubating immune cells with CATH-2, D-CATH-2 and LL-37, the effect on LPS-induced NO production and phagocytic capacity of HD11 cells was investigated. LPS stimulation led to 25-fold higher NO levels compared to unstimulated cells (Figure 4A). There was a significant decrease in the LPS-induced NO production with CATH-2 and D-CATH-2 (Figure 4A), with the latter showing a strong effect at the lowest concentration tested (2.5 µM), while CATH-2 required higher concentrations for full activity. As a second functional assay the effect of cathelicidins on phagocytic activity of macrophages was tested (Figure 4B). Interestingly, LL-37 had a clear dose-dependent effect on the phagocytosis of beads showing a 3-, and 5-fold increase in uptake at 5 µM and 10 µM of LL-37. CATH-2 and D-CATH-2 had no effect on phagocytosis at any concentration. These assays show that human and chicken cathelicidins seem to affect different functions of chicken macrophages.", "In this study, immunomodulatory effects of CATH-2, D-CATH-2 and LL-37 on chicken immune cells were investigated using chicken monocytes and HD11 cells. The antigen presentation capacity of primary chicken monocytes was increased after incubation with D-CATH-2 and CATH-2, as shown by the increased expression of the mannose receptor MRC1 and MHC-II molecule. Recently, it was demonstrated by our research group that these markers are upregulated in the mixed population of mononuclear phagocytes when looking within the entire PBMC population [20] and here we confirm that in a specific monocyte/macrophage cell population. A similar effect was described for a chicken CATH-1 derivative (fowlicidin 16-26) on mouse macrophages [21], indicating that this could be a feature of more chicken cathelicidins. No effect on antigen presentation markers on chicken monocytes or macrophages was observed for LL-37. In contrast, this human cathelicidin increases surface markers on human dendritic cells and monocytes, indicating that this could at least partially be a species-specific effect of cathelicidins [22, 23]. The increased antigen presentation capacity by CATH-2 suggests that the presence of the peptide helps to prepare the cell for an enhanced adaptive immune response leading to an optimal reaction for fighting an infection.\nPrimary monocytes were relatively insensitive towards stimulation with LPS and PAM3CSK4, and the presence of CATH-2 or LL-37 could not enhance MRC1 and MHC-II expression. 
This is in line with the observation that chickens have decreased sensitivity towards LPS compared to mammals [24, 25]. On the other hand, LPS was capable of inducing NO production in chicken macrophages, which could be neutralized by the chicken cathelicidins. The capacity to neutralize LPS has been described for more cathelicidins [26, 27] and is partly attributed to direct binding of the peptide to LPS [28, 29]. Interestingly, no effect of LPS was observed on intracellular cytokine levels of monocytes. Several studies have actually shown that LPS induced cytokine levels in chicken immune cells including macrophages and monocytes, however, these were all measured on a transcriptional level [30, 31]. The current results indicate that this might actually not be reflected on a translational level. Alternatively, the incubation time of 24 h might have resulted in tolerance to LPS and in that way early responses towards LPS might have been overlooked. However, a full kinetic study was beyond the scope of the current study and will be addressed in follow-up studies.\nThe effect of cathelicidins on the functionality of chicken macrophages indicated that phagocytosis was strongly increased by LL-37, while there was no effect observed for CATH-2 and D-CATH-2. This corresponds well to a study where a similar enhancing effect of LL-37 on phagocytic capacity of human macrophages was described [32]. However, related studies showed that LL-37 had no effect on phagocytosis of mouse bone-marrow-derived macrophages [26], and actually decreased the phagocytic activity of mouse RAW264.7 cells [33]. This suggests that the effect exerted by LL-37 on this cell function is dependent on the species and type of cells.\nInterestingly, some differences were observed in immunomodulatory activity between CATH-2 and D-CATH-2. The latter showed higher capacity to induce antigen presentation markers (Figure 3) and in LPS neutralization activity (Figure 4). This could be due to higher stability of the D-enantiomer, either towards proteases, or a more general stability during the incubation times of the experiments. However, since L- and D-CATH-2 are mirror images of each other, especially receptor based interactions, but also more general membrane interactions of CATH-2, could be affected by structural differences due to the incorporation of D-amino acids into the molecule. However, the activity-spectrum of both CATH-2 peptides was largely overlapping, while LL-37 showed an almost opposite set of activities, with no effect on antigen marker presentation, but very strong effects on phagocytosis.\nThe enhancement of antigen presentation markers adds onto a long list of immunomodulatory functions for this (and other) cathelicidins. For example, CATH-2 was shown to enhance cytokine production in HD-11 cells and using truncated and mutated versions of CATH-2, some structural features could be linked to this activity [30]. In addition CATH-2 was shown to enhance uptake of bacterial DNA and subsequent TLR21 activation [34], but also showed an inhibitory effect of the immune response towards non-viable bacteria, reducing the potential damaging effects of a pro-inflammatory response [35, 36]. Which of these activities is important for the described protective effect of CATH-2 in vivo [18] is unclear, but the observed activities in in vitro systems highlight the complexity and also the potential of CATH-2 in prevention and possibly treatment of microbial infections. 
Furthermore, a better understanding of structure-activity relationships of these peptides might help in development of optimized peptide for therapeutic use, while for prophylactic effects of peptides it is, especially in chicken, probably more cost-effective to determine ways to upregulate natural production of cathelicidins or other host defence peptides such as defensins.", "Overall, this study shows that both CATH-2 and D-CATH-2 increase antigen presentation markers on both primary monocytes and HD11 cells. These indicators of an increased inflammatory response could lead to more T cell activation and thus the potential for a better adaptive immune response against infection. This novel finding adds another functionality to the chicken cathelicidin repertoire in immunity." ]
[ "intro", "materials|methods", null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusions" ]
[ "Host defense peptide", "MRC1", "Antigen presentation", "HD11 cells", "Innate immunity", "Cathelicidins" ]
INTRODUCTION: Cathelicidins are short cationic peptides with an important role in host defense. They have broad antimicrobial activity as shown by their ability to kill a wide range of bacteria, viruses and fungi [1-4]. Besides direct antimicrobial activity, cathelicidins also play a more immunomodulatory role in the innate immune system through, amongst others, activation and chemotaxis of a variety of immune cells [5] and their role is still expanding [6-9]. However, it is unclear if chicken cathelicidins share all of these activities with their mammalian counterparts. In human, the only cathelicidin gene hCAP-18, has been shown to yield at least three different mature peptides, of 
which LL-37 is the most commonly studied [9]. This cathelicidin is produced by different cell types, including neutrophils, macrophages, and NK cells [10]. In chicken there are four different cathelicidins: CATH-1, -2, and -3, and CATH-B1 [11-14]. Multiple reports have described immunomodulatory effects of cathelicidins in vivo. For example, LL-37 upregulated the neutrophil response and cleared an infection in a murine Pseudomonas aeruginosa lung infection model without the peptide showing direct antimicrobial effects [15]. The chicken cathelicidin CATH-1 was used in a MRSA infection model in mice, where it provided partial protection [16]. In addition, chicken CATH-2 increased the number of phagocytic cells in a zebrafish model leading to a protective effect against bacterial infection. These studies suggested a more immuno-stimulatory mechanism of action for this peptide [17]. Finally, in ovo treatment with the all-D enantiomer of CATH-2 (D-CATH-2) showed a protective effect against a subsequent E. coli infection 7 days after hatch, indicating again that immunomodulatory effects and not direct antimicrobial killing was involved [18]. The involvement of immune cells and specifically macrophages was best shown in a study using a so-called innate defense-regulator peptide, IDR-1. This peptide, which does not have direct antimicrobial activity, could protect mice from infections, but was inactive when these mice were depleted of monocytes and macrophages [19]. In line with this, we have shown that CATH-2, D-CATH-2 and LL-37 have an effect on the mononuclear phagocyte population within chicken PBMCs [20]. In order to better understand the in vivo immunomodulatory activity of CATH-2 the current study investigated the effect of CATH-2 on primary chicken monocytes and a chicken macrophage cell line. The all D-enantiomer D-CATH-2 was included based on its stability towards proteases and its described effect in ovo [18], while LL-37 was included as a well described immunomodulatory peptide of human origin. The study indicates that CATH-2 has clear effects on macrophage and monocyte antigen presentation markers which could be an important feature of these peptides as part of the innate immune response of chickens towards pathogens. MATERIALS AND METHODS: Peptides CATH-2 and D-CATH-2 (amino acid sequence: RFGRFLRKIRRFRPKVTITIQGSARF-NH2) were synthesized by Fmoc-chemistry (CPC Scientific) and LL-37 was synthesized by Fmoc-chemistry at the Academic Centre for Dentistry Amsterdam (ACTA). CATH-2 and D-CATH-2 (amino acid sequence: RFGRFLRKIRRFRPKVTITIQGSARF-NH2) were synthesized by Fmoc-chemistry (CPC Scientific) and LL-37 was synthesized by Fmoc-chemistry at the Academic Centre for Dentistry Amsterdam (ACTA). Cell Surface Marker Staining Chicken macrophage HD11 cells, (gift from Prof. Jos van Putten, Utrecht University, the Netherlands) were cultured in a 24-wells plate containing 250,000 cells at 37 °C in RPMI1640 media (Lonza, Switzerland) containing 10% fetal calf serum (Bodinco, The Netherlands) and 1% penicillin/streptomycin (Life Technologies, CA, USA). For the monocyte culture, whole blood was collected from ~76 week-old healthy chickens and PBMCs were isolated using a Ficoll gradient and frozen until use. PBMCs were cultured overnight in a 24-wells plate containing 5x106 cells in RPMI1640 media containing 10% fetal calf serum, 10 U/ml penicillin and 10 mg/ml streptomycin. The next day, the wells were washed three times to remove all non-attached cells. 
HD11 cells and monocytes were incubated with different concentrations of peptide for 4 or 24 h. In some experiments, monocytes were also stimulated with 100 ng/ml LPS (E. coli 0111:B4; InvivoGen, CA, USA) or 100 ng/ml PAM3CSK4 (InvivoGen) for 24 h. Next, cells were harvested and incubated for 30 min with antibodies KUL-01-FITC (mannose receptor MRC1/CD206; clone KUL01), MHCII-PE (clone 2G11) (both Southern Biotech, Birmingham, USA) on ice, and subsequently incubated for 30 min on ice with secondary BV421-labelled antibody (Biolegend, CA, USA). Afterwards, cells were washed and analyzed using flow cytometry (FACSCanto-II, BD Biosciences, CA, USA) and FlowJo software (Ashland, OR, USA). Chicken macrophage HD11 cells, (gift from Prof. Jos van Putten, Utrecht University, the Netherlands) were cultured in a 24-wells plate containing 250,000 cells at 37 °C in RPMI1640 media (Lonza, Switzerland) containing 10% fetal calf serum (Bodinco, The Netherlands) and 1% penicillin/streptomycin (Life Technologies, CA, USA). For the monocyte culture, whole blood was collected from ~76 week-old healthy chickens and PBMCs were isolated using a Ficoll gradient and frozen until use. PBMCs were cultured overnight in a 24-wells plate containing 5x106 cells in RPMI1640 media containing 10% fetal calf serum, 10 U/ml penicillin and 10 mg/ml streptomycin. The next day, the wells were washed three times to remove all non-attached cells. HD11 cells and monocytes were incubated with different concentrations of peptide for 4 or 24 h. In some experiments, monocytes were also stimulated with 100 ng/ml LPS (E. coli 0111:B4; InvivoGen, CA, USA) or 100 ng/ml PAM3CSK4 (InvivoGen) for 24 h. Next, cells were harvested and incubated for 30 min with antibodies KUL-01-FITC (mannose receptor MRC1/CD206; clone KUL01), MHCII-PE (clone 2G11) (both Southern Biotech, Birmingham, USA) on ice, and subsequently incubated for 30 min on ice with secondary BV421-labelled antibody (Biolegend, CA, USA). Afterwards, cells were washed and analyzed using flow cytometry (FACSCanto-II, BD Biosciences, CA, USA) and FlowJo software (Ashland, OR, USA). Intracellular Staining of Cytokines Monocytes were incubated with 10 µM peptide for 16 h in the presence of 100 ng/ml LPS and 1 µg/ml GolgiPlug (BD Biosciences). After harvesting, cells were incubated with KUL-01-FITC for the cell surface staining. The intracellular staining was performed according to the manufacturer’s protocol (BD Cytofix/Cytoperm Fixation/ Permeabilization kit; BD Biosciences). In short, after the cell surface staining, cells were incubated with Fixaton/ Permeabilization solution for 20 min. Next, antibodies to IL-1β, IL-6 and IFN-γ (Bioconnect, Toronto, Canada) with secondary AF405-labelled antibody (Invitrogen, CA, USA) were used to study intracellular cytokine production. Cells were washed and analyzed using flow cytometry and FlowJo. The relative cytokine production was determined by correcting for the staining control and expressed as a fold difference from levels in control HD11 cells. Monocytes were incubated with 10 µM peptide for 16 h in the presence of 100 ng/ml LPS and 1 µg/ml GolgiPlug (BD Biosciences). After harvesting, cells were incubated with KUL-01-FITC for the cell surface staining. The intracellular staining was performed according to the manufacturer’s protocol (BD Cytofix/Cytoperm Fixation/ Permeabilization kit; BD Biosciences). In short, after the cell surface staining, cells were incubated with Fixaton/ Permeabilization solution for 20 min. 
Next, antibodies to IL-1β, IL-6 and IFN-γ (Bioconnect, Toronto, Canada) with secondary AF405-labelled antibody (Invitrogen, CA, USA) were used to study intracellular cytokine production. Cells were washed and analyzed using flow cytometry and FlowJo. The relative cytokine production was determined by correcting for the staining control and expressed as a fold difference from levels in control HD11 cells. Phagocytosis Assay In a 96-wells plate, 50,000 HD11 cells per well were incubated with carboxylate-modified polystyrene latex beads (Sigma-Aldrich, Missouri, USA) in a 1:10 ratio for 1 h at 
41 °C in the presence of different concentrations of peptide. Cells were washed four times with cold PBS containing 1% FSC and 0.01% NaN3 and analyzed using flow cytometry. Phagocytosis was determined by correcting the uptake of latex beads at 41 °C for the uptake at 4 °C. In a 96-wells plate, 50,000 HD11 cells per well were incubated with carboxylate-modified polystyrene latex beads (Sigma-Aldrich, Missouri, USA) in a 1:10 ratio for 1 h at 
41 °C in the presence of different concentrations of peptide. Cells were washed four times with cold PBS containing 1% FSC and 0.01% NaN3 and analyzed using flow cytometry. Phagocytosis was determined by correcting the uptake of latex beads at 41 °C for the uptake at 4 °C. Griess Assay NO production was determined using the Griess Assay. In a 48-wells plate, 250,000 HD11 cells per well were incubated overnight in the presence of different concentrations of peptide and 100 ng/ml LPS. Supernatant was harvested and 50 μl of the supernatant was mixed with 50 μl 1% sulfanilamide (5% phosphoric acid) and incubated for 5 min at RT in the dark. Then 50 μl of 0.1% N-(1-naphthyl) ethylenediamine dihydrochloride was added and incubated for 5 min at RT in the dark. Absorbance was measured at 550 nm and the amount of NO production was determined by a standard of sodium nitrate (Sigma-Aldrich). NO production was determined using the Griess Assay. In a 48-wells plate, 250,000 HD11 cells per well were incubated overnight in the presence of different concentrations of peptide and 100 ng/ml LPS. Supernatant was harvested and 50 μl of the supernatant was mixed with 50 μl 1% sulfanilamide (5% phosphoric acid) and incubated for 5 min at RT in the dark. Then 50 μl of 0.1% N-(1-naphthyl) ethylenediamine dihydrochloride was added and incubated for 5 min at RT in the dark. Absorbance was measured at 550 nm and the amount of NO production was determined by a standard of sodium nitrate (Sigma-Aldrich). Statistical Analysis Statistical significance was assessed with one-way ANOVA followed by the Dunnett Post-Hoc test in GraphPad Prism software, version 6.02. A p-value of <0.05 was considered statistically significant. Statistical significance was assessed with one-way ANOVA followed by the Dunnett Post-Hoc test in GraphPad Prism software, version 6.02. A p-value of <0.05 was considered statistically significant. Peptides: CATH-2 and D-CATH-2 (amino acid sequence: RFGRFLRKIRRFRPKVTITIQGSARF-NH2) were synthesized by Fmoc-chemistry (CPC Scientific) and LL-37 was synthesized by Fmoc-chemistry at the Academic Centre for Dentistry Amsterdam (ACTA). Cell Surface Marker Staining: Chicken macrophage HD11 cells, (gift from Prof. Jos van Putten, Utrecht University, the Netherlands) were cultured in a 24-wells plate containing 250,000 cells at 37 °C in RPMI1640 media (Lonza, Switzerland) containing 10% fetal calf serum (Bodinco, The Netherlands) and 1% penicillin/streptomycin (Life Technologies, CA, USA). For the monocyte culture, whole blood was collected from ~76 week-old healthy chickens and PBMCs were isolated using a Ficoll gradient and frozen until use. PBMCs were cultured overnight in a 24-wells plate containing 5x106 cells in RPMI1640 media containing 10% fetal calf serum, 10 U/ml penicillin and 10 mg/ml streptomycin. The next day, the wells were washed three times to remove all non-attached cells. HD11 cells and monocytes were incubated with different concentrations of peptide for 4 or 24 h. In some experiments, monocytes were also stimulated with 100 ng/ml LPS (E. coli 0111:B4; InvivoGen, CA, USA) or 100 ng/ml PAM3CSK4 (InvivoGen) for 24 h. Next, cells were harvested and incubated for 30 min with antibodies KUL-01-FITC (mannose receptor MRC1/CD206; clone KUL01), MHCII-PE (clone 2G11) (both Southern Biotech, Birmingham, USA) on ice, and subsequently incubated for 30 min on ice with secondary BV421-labelled antibody (Biolegend, CA, USA). 
Afterwards, cells were washed and analyzed using flow cytometry (FACSCanto-II, BD Biosciences, CA, USA) and FlowJo software (Ashland, OR, USA). Intracellular Staining of Cytokines: Monocytes were incubated with 10 µM peptide for 16 h in the presence of 100 ng/ml LPS and 1 µg/ml GolgiPlug (BD Biosciences). After harvesting, cells were incubated with KUL-01-FITC for the cell surface staining. The intracellular staining was performed according to the manufacturer’s protocol (BD Cytofix/Cytoperm Fixation/ Permeabilization kit; BD Biosciences). In short, after the cell surface staining, cells were incubated with Fixaton/ Permeabilization solution for 20 min. Next, antibodies to IL-1β, IL-6 and IFN-γ (Bioconnect, Toronto, Canada) with secondary AF405-labelled antibody (Invitrogen, CA, USA) were used to study intracellular cytokine production. Cells were washed and analyzed using flow cytometry and FlowJo. The relative cytokine production was determined by correcting for the staining control and expressed as a fold difference from levels in control HD11 cells. Phagocytosis Assay: In a 96-wells plate, 50,000 HD11 cells per well were incubated with carboxylate-modified polystyrene latex beads (Sigma-Aldrich, Missouri, USA) in a 1:10 ratio for 1 h at 
41 °C in the presence of different concentrations of peptide. Cells were washed four times with cold PBS containing 1% FSC and 0.01% NaN3 and analyzed using flow cytometry. Phagocytosis was determined by correcting the uptake of latex beads at 41 °C for the uptake at 4 °C. Griess Assay: NO production was determined using the Griess assay. In a 48-wells plate, 250,000 HD11 cells per well were incubated overnight in the presence of different concentrations of peptide and 100 ng/ml LPS. Supernatant was harvested and 50 μl of the supernatant was mixed with 50 μl 1% sulfanilamide (5% phosphoric acid) and incubated for 5 min at RT in the dark. Then 50 μl of 0.1% N-(1-naphthyl) ethylenediamine dihydrochloride was added and incubated for 5 min at RT in the dark. Absorbance was measured at 550 nm and the amount of NO production was determined by a standard of sodium nitrate (Sigma-Aldrich). Statistical Analysis: Statistical significance was assessed with one-way ANOVA followed by the Dunnett post-hoc test in GraphPad Prism software, version 6.02. A p-value of <0.05 was considered statistically significant.
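The Griess quantification described above is, in practice, a linear interpolation of sample absorbance (A550) against a standard dilution series. The following is a minimal sketch of that arithmetic, not the authors' code; the standard concentrations and absorbance values are hypothetical placeholders.

```python
# Minimal sketch of Griess-assay quantification: fit a linear standard curve
# (absorbance at 550 nm vs. standard concentration) and interpolate samples.
# All numbers below are hypothetical placeholders, not data from this study.
import numpy as np

std_conc_uM = np.array([0, 5, 10, 20, 40, 80])             # standard series (µM)
std_a550 = np.array([0.04, 0.07, 0.11, 0.19, 0.35, 0.66])  # measured A550 of standards

slope, intercept = np.polyfit(std_conc_uM, std_a550, 1)    # A550 = slope * conc + intercept

def a550_to_uM(a550):
    """Convert sample absorbance back to concentration via the standard curve."""
    return (a550 - intercept) / slope

samples_a550 = np.array([0.52, 0.21, 0.09])   # e.g. LPS only, LPS + peptide, medium only
print(np.round(a550_to_uM(samples_a550), 1))  # estimated concentrations (µM)
```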
RESULTS: CATH-2 and D-CATH-2 Increased Antigen Presentation Capacity of Primary Chicken Monocytes: Chicken monocytes were incubated with chicken CATH-2, D-CATH-2 and human LL-37. After 4 h of incubation, CATH-2 and D-CATH-2 dose-dependently increased the expression of MRC1 approximately 12- and 15-fold, respectively, while LL-37 had no effect on expression (Figure 1A). Similarly, CATH-2 and D-CATH-2 induced MHC-II expression, albeit to a lower extent (1.2- and 2-fold increased expression, respectively; Figure 1B). This indicates that CATH-2 and its all-D enantiomer D-CATH-2 can affect the antigen presentation capacity of monocytes. The effects of the TLR4 agonist LPS and the TLR1/TLR2 agonist PAM3CSK4, alone or in combination with cathelicidins (2.5 µM) for 24 h, on expression of MRC1 and MHC-II in primary chicken monocytes were also studied. In the absence of peptides, no significant effects were observed for either TLR ligand (white bars, Figures 2A and 2B). Similarly, LPS or PAM3CSK4 did not lead to increased expression of MRC1 or MHC-II in the presence of 2.5 µM CATH-2, D-CATH-2 or LL-37. The only significant difference found was for MRC1 expression in all groups containing D-CATH-2 compared to samples without peptide present, confirming the results from Figure 1 where D-CATH-2 by itself increased MRC1 expression. Finally, intracellular levels of IL-1β, IL-6 and IFN-γ were determined for LPS-stimulated monocytes in the presence or absence of 10 µM of peptides using FACS analysis. Comparable to the antigen presentation markers, no significant effect of LPS and/or cathelicidins was observed (data not shown). Thus, chicken monocytes are not particularly sensitive to TLR2 or TLR4 stimulation with regard to marker expression and cytokine expression.
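The fold increases reported above correspond to the "fold difference from levels in control" normalization described in the Methods. A minimal sketch of that calculation is shown below, assuming the readout is a median fluorescence intensity (MFI) divided by that of untreated control cells; all MFI values are hypothetical.

```python
# Minimal sketch of a fold-change calculation for surface marker expression:
# treated MFI divided by the MFI of untreated control cells.
# All MFI values are hypothetical placeholders, not data from this study.
control_mfi = {"MRC1": 850.0, "MHC-II": 1200.0}

treated_mfi = {
    ("CATH-2", "MRC1"): 10200.0,
    ("D-CATH-2", "MRC1"): 12750.0,
    ("LL-37", "MRC1"): 900.0,
    ("D-CATH-2", "MHC-II"): 2400.0,
}

for (peptide, marker), mfi in treated_mfi.items():
    fold = mfi / control_mfi[marker]
    print(f"{peptide:9s} {marker:7s}: {fold:5.1f}-fold vs. untreated control")
```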
D-CATH-2 Increased the Antigen Presentation Capacity of HD11 Cells: In order to confirm our results obtained in primary chicken monocytes, we also studied the effects of CATH-2, D-CATH-2 and LL-37 on the chicken macrophage cell line HD11. As observed for primary monocytes, D-CATH-2 induced an increase in MRC1 and MHC-II expression in HD11 cells after 4 h of exposure. Surprisingly, unlike in primary monocytes, CATH-2 did not affect the expression of these cell surface markers in HD11 cells (Figures 3A and B). An increased incubation time also did not result in significant effects on MRC1 and MHC-II expression for CATH-2 (Figures 3C and D), indicating that the difference observed between monocytes and macrophages for CATH-2 stimulation is not likely due to kinetic differences. As expected, LL-37 also failed to induce an increased expression of MRC1 or MHC-II in HD11 cells. Cathelicidins Inhibited LPS-Induced NO Production and Increased Phagocytosis in HD11 Cells: To determine possible functional implications of incubating immune cells with CATH-2, D-CATH-2 and LL-37, the effect on LPS-induced NO production and the phagocytic capacity of HD11 cells was investigated. LPS stimulation led to 25-fold higher NO levels compared to unstimulated cells (Figure 4A). There was a significant decrease in LPS-induced NO production with CATH-2 and D-CATH-2 (Figure 4A), with the latter showing a strong effect at the lowest concentration tested (2.5 µM), while CATH-2 required higher concentrations for full activity. As a second functional assay, the effect of cathelicidins on the phagocytic activity of macrophages was tested (Figure 4B). Interestingly, LL-37 had a clear dose-dependent effect on the phagocytosis of beads, showing 3- and 5-fold increases in uptake at 5 µM and 10 µM of LL-37, respectively. CATH-2 and D-CATH-2 had no effect on phagocytosis at any concentration. These assays show that human and chicken cathelicidins seem to affect different functions of chicken macrophages.
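Significance in these comparisons was assessed by one-way ANOVA followed by Dunnett's post-hoc test against the control group, as stated under Statistical Analysis. A minimal sketch of an equivalent comparison in Python is shown below (scipy.stats.dunnett requires SciPy 1.11 or later); the replicate values are hypothetical, not the study's data.

```python
# Minimal sketch of one-way ANOVA followed by Dunnett's test against a control
# group, here with SciPy rather than GraphPad Prism (requires SciPy >= 1.11).
# The replicate values below are hypothetical placeholders.
from scipy import stats

lps_only = [24.1, 26.8, 25.3]        # NO readout, LPS alone
lps_cath2 = [18.2, 17.5, 19.9]       # LPS + CATH-2
lps_dcath2 = [6.1, 7.4, 5.8]         # LPS + D-CATH-2

f_stat, p_anova = stats.f_oneway(lps_only, lps_cath2, lps_dcath2)
dunnett = stats.dunnett(lps_cath2, lps_dcath2, control=lps_only)

print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
for name, p in zip(["CATH-2", "D-CATH-2"], dunnett.pvalue):
    print(f"Dunnett vs. LPS alone, {name}: p = {p:.4f}")
```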
DISCUSSION: In this study, immunomodulatory effects of CATH-2, D-CATH-2 and LL-37 on chicken immune cells were investigated using chicken monocytes and HD11 cells. The antigen presentation capacity of primary chicken monocytes was increased after incubation with D-CATH-2 and CATH-2, as shown by the increased expression of the mannose receptor MRC1 and the MHC-II molecule. Recently, it was demonstrated by our research group that these markers are upregulated in the mixed population of mononuclear phagocytes when looking within the entire PBMC population [20], and here we confirm this in a specific monocyte/macrophage cell population. A similar effect was described for a chicken CATH-1 derivative (fowlicidin 16-26) on mouse macrophages [21], indicating that this could be a feature of more chicken cathelicidins. No effect on antigen presentation markers on chicken monocytes or macrophages was observed for LL-37. In contrast, this human cathelicidin increases surface markers on human dendritic cells and monocytes, indicating that this could at least partially be a species-specific effect of cathelicidins [22, 23]. The increased antigen presentation capacity induced by CATH-2 suggests that the presence of the peptide helps to prepare the cell for an enhanced adaptive immune response, leading to an optimal reaction for fighting an infection. Primary monocytes were relatively insensitive towards stimulation with LPS and PAM3CSK4, and the presence of CATH-2 or LL-37 could not enhance MRC1 and MHC-II expression. This is in line with the observation that chickens have decreased sensitivity towards LPS compared to mammals [24, 25]. On the other hand, LPS was capable of inducing NO production in chicken macrophages, which could be neutralized by the chicken cathelicidins.
The capacity to neutralize LPS has been described for other cathelicidins as well [26, 27] and is partly attributed to direct binding of the peptide to LPS [28, 29]. Interestingly, no effect of LPS was observed on intracellular cytokine levels of monocytes. Several studies have shown that LPS induces cytokine expression in chicken immune cells, including macrophages and monocytes; however, these responses were all measured at the transcriptional level [30, 31]. The current results indicate that this might not be reflected at the translational level. Alternatively, the incubation time of 24 h might have resulted in tolerance to LPS, so that early responses towards LPS might have been overlooked. However, a full kinetic study was beyond the scope of the current study and will be addressed in follow-up studies. The effect of cathelicidins on the functionality of chicken macrophages indicated that phagocytosis was strongly increased by LL-37, while no effect was observed for CATH-2 and D-CATH-2. This corresponds well with a study in which a similar enhancing effect of LL-37 on the phagocytic capacity of human macrophages was described [32]. However, related studies showed that LL-37 had no effect on phagocytosis by mouse bone-marrow-derived macrophages [26] and actually decreased the phagocytic activity of mouse RAW264.7 cells [33]. This suggests that the effect exerted by LL-37 on this cell function depends on the species and cell type. Interestingly, some differences in immunomodulatory activity were observed between CATH-2 and D-CATH-2. The latter showed a higher capacity to induce antigen presentation markers (Figure 3) and stronger LPS neutralization activity (Figure 4). This could be due to the higher stability of the D-enantiomer, either towards proteases or more generally during the incubation times of the experiments. In addition, since L- and D-CATH-2 are mirror images of each other, receptor-based interactions in particular, but also more general membrane interactions of CATH-2, could be affected by the structural differences resulting from the incorporation of D-amino acids into the molecule. Nevertheless, the activity spectra of both CATH-2 peptides largely overlapped, while LL-37 showed an almost opposite set of activities, with no effect on antigen presentation markers but very strong effects on phagocytosis. The enhancement of antigen presentation markers adds to a long list of immunomodulatory functions of this (and other) cathelicidins. For example, CATH-2 was shown to enhance cytokine production in HD11 cells, and using truncated and mutated versions of CATH-2, some structural features could be linked to this activity [30]. In addition, CATH-2 was shown to enhance uptake of bacterial DNA and subsequent TLR21 activation [34], but it also inhibited the immune response towards non-viable bacteria, reducing the potentially damaging effects of a pro-inflammatory response [35, 36]. Which of these activities is important for the described protective effect of CATH-2 in vivo [18] is unclear, but the observed activities in in vitro systems highlight the complexity, and also the potential, of CATH-2 in the prevention and possibly treatment of microbial infections.
Furthermore, a better understanding of the structure-activity relationships of these peptides might help in the development of optimized peptides for therapeutic use, while for prophylactic use it is, especially in chickens, probably more cost-effective to determine ways to upregulate the natural production of cathelicidins or other host defense peptides such as defensins. CONCLUSION: Overall, this study shows that both CATH-2 and D-CATH-2 increase antigen presentation markers on both primary monocytes and HD11 cells. These indicators of an increased inflammatory response could lead to more T cell activation and thus the potential for a better adaptive immune response against infection. This novel finding adds another functionality to the chicken cathelicidin repertoire in immunity.
Background: Cathelicidins are a family of Host Defense Peptides (HDPs) that play an important role in the innate immune response. They exert both broad-spectrum antimicrobial activity against pathogens and strong immunomodulatory functions that affect the response of innate and adaptive immune cells. Methods: Chicken macrophages and chicken monocytes were incubated with cathelicidins. Activation of immune cells was determined by measuring the surface markers Mannose Receptor C-type 1 (MRC1) and MHC-II. Cytokine production was measured by qPCR and nitric oxide production was determined using the Griess assay. Finally, the effect of cathelicidins on phagocytosis was measured using carboxylate-modified polystyrene latex beads. Results: CATH-2 and its all-D enantiomer D-CATH-2 increased MRC1 and MHC-II expression, markers for antigen presentation, on primary chicken monocytes, whereas LL-37 did not. D-CATH-2 also increased MRC1 and MHC-II expression when a chicken macrophage cell line (HD11 cells) was used. In addition, LPS-induced NO production by HD11 cells was inhibited by CATH-2 and D-CATH-2. Conclusions: These results are a clear indication that CATH-2 (and D-CATH-2) affect the activation state of monocytes and macrophages, which leads to optimization of the innate immune response and enhancement of the adaptive immune response.
INTRODUCTION: Cathelicidins are short cationic peptides with an important role in host defense. They have broad antimicrobial activity, as shown by their ability to kill a wide range of bacteria, viruses and fungi [1-4]. Besides direct antimicrobial activity, cathelicidins also play a more immunomodulatory role in the innate immune system through, amongst others, activation and chemotaxis of a variety of immune cells [5], and their role is still expanding [6-9]. However, it is unclear if chicken cathelicidins share all of these activities with their mammalian counterparts. In humans, the only cathelicidin gene, hCAP-18, has been shown to yield at least three different mature peptides, of
which LL-37 is the most commonly studied [9]. This cathelicidin is produced by different cell types, including neutrophils, macrophages, and NK cells [10]. In chickens, there are four different cathelicidins: CATH-1, -2, and -3, and CATH-B1 [11-14]. Multiple reports have described immunomodulatory effects of cathelicidins in vivo. For example, LL-37 upregulated the neutrophil response and cleared an infection in a murine Pseudomonas aeruginosa lung infection model without the peptide showing direct antimicrobial effects [15]. The chicken cathelicidin CATH-1 was used in an MRSA infection model in mice, where it provided partial protection [16]. In addition, chicken CATH-2 increased the number of phagocytic cells in a zebrafish model, leading to a protective effect against bacterial infection. These studies suggested a more immuno-stimulatory mechanism of action for this peptide [17]. Finally, in ovo treatment with the all-D enantiomer of CATH-2 (D-CATH-2) showed a protective effect against a subsequent E. coli infection 7 days after hatch, indicating again that immunomodulatory effects and not direct antimicrobial killing were involved [18]. The involvement of immune cells, and specifically macrophages, was best shown in a study using a so-called innate defense-regulator peptide, IDR-1. This peptide, which does not have direct antimicrobial activity, could protect mice from infections, but was inactive when these mice were depleted of monocytes and macrophages [19]. In line with this, we have shown that CATH-2, D-CATH-2 and LL-37 have an effect on the mononuclear phagocyte population within chicken PBMCs [20]. In order to better understand the in vivo immunomodulatory activity of CATH-2, the current study investigated the effect of CATH-2 on primary chicken monocytes and a chicken macrophage cell line. The all-D enantiomer D-CATH-2 was included based on its stability towards proteases and its described effect in ovo [18], while LL-37 was included as a well-described immunomodulatory peptide of human origin. The study indicates that CATH-2 has clear effects on macrophage and monocyte antigen presentation markers, which could be an important feature of these peptides as part of the innate immune response of chickens towards pathogens. CONCLUSION: Overall, this study shows that both CATH-2 and D-CATH-2 increase antigen presentation markers on both primary monocytes and HD11 cells. These indicators of an increased inflammatory response could lead to more T cell activation and thus the potential for a better adaptive immune response against infection. This novel finding adds another functionality to the chicken cathelicidin repertoire in immunity.
Background: Cathelicidins are a family of Host Defense Peptides (HDPs) that play an important role in the innate immune response. They exert both broad-spectrum antimicrobial activity against pathogens and strong immunomodulatory functions that affect the response of innate and adaptive immune cells. Methods: Chicken macrophages and chicken monocytes were incubated with cathelicidins. Activation of immune cells was determined by measuring the surface markers Mannose Receptor C-type 1 (MRC1) and MHC-II. Cytokine production was measured by qPCR and nitric oxide production was determined using the Griess assay. Finally, the effect of cathelicidins on phagocytosis was measured using carboxylate-modified polystyrene latex beads. Results: CATH-2 and its all-D enantiomer D-CATH-2 increased MRC1 and MHC-II expression, markers for antigen presentation, on primary chicken monocytes, whereas LL-37 did not. D-CATH-2 also increased MRC1 and MHC-II expression when a chicken macrophage cell line (HD11 cells) was used. In addition, LPS-induced NO production by HD11 cells was inhibited by CATH-2 and D-CATH-2. Conclusions: These results are a clear indication that CATH-2 (and D-CATH-2) affect the activation state of monocytes and macrophages, which leads to optimization of the innate immune response and enhancement of the adaptive immune response.
6,061
250
[ 43, 307, 165, 95, 124, 37, 326, 154, 190 ]
14
[ "cath", "cells", "chicken", "monocytes", "expression", "37", "lps", "ll 37", "ll", "effect" ]
[ "human chicken cathelicidins", "cathelicidins phagocytic activity", "cells cathelicidins inhibited", "chicken cathelicidin repertoire", "immunomodulatory functions cathelicidins" ]
null
[CONTENT] Host defense peptide | MRC1 | Antigen presentation | HD11 cells | Innate immunity | Cathelicidins [SUMMARY]
null
[CONTENT] Host defense peptide | MRC1 | Antigen presentation | HD11 cells | Innate immunity | Cathelicidins [SUMMARY]
[CONTENT] Host defense peptide | MRC1 | Antigen presentation | HD11 cells | Innate immunity | Cathelicidins [SUMMARY]
[CONTENT] Host defense peptide | MRC1 | Antigen presentation | HD11 cells | Innate immunity | Cathelicidins [SUMMARY]
[CONTENT] Host defense peptide | MRC1 | Antigen presentation | HD11 cells | Innate immunity | Cathelicidins [SUMMARY]
[CONTENT] Amino Acid Sequence | Animals | Antigen Presentation | Antimicrobial Cationic Peptides | Biomarkers | Cathelicidins | Cell Line | Chickens | Cytokines | Gene Expression Regulation | Histocompatibility Antigens Class II | Humans | Immunity, Innate | Immunomodulation | Lectins, C-Type | Macrophages | Mannose Receptor | Mannose-Binding Lectins | Monocytes | Receptors, Cell Surface [SUMMARY]
null
[CONTENT] Amino Acid Sequence | Animals | Antigen Presentation | Antimicrobial Cationic Peptides | Biomarkers | Cathelicidins | Cell Line | Chickens | Cytokines | Gene Expression Regulation | Histocompatibility Antigens Class II | Humans | Immunity, Innate | Immunomodulation | Lectins, C-Type | Macrophages | Mannose Receptor | Mannose-Binding Lectins | Monocytes | Receptors, Cell Surface [SUMMARY]
[CONTENT] Amino Acid Sequence | Animals | Antigen Presentation | Antimicrobial Cationic Peptides | Biomarkers | Cathelicidins | Cell Line | Chickens | Cytokines | Gene Expression Regulation | Histocompatibility Antigens Class II | Humans | Immunity, Innate | Immunomodulation | Lectins, C-Type | Macrophages | Mannose Receptor | Mannose-Binding Lectins | Monocytes | Receptors, Cell Surface [SUMMARY]
[CONTENT] Amino Acid Sequence | Animals | Antigen Presentation | Antimicrobial Cationic Peptides | Biomarkers | Cathelicidins | Cell Line | Chickens | Cytokines | Gene Expression Regulation | Histocompatibility Antigens Class II | Humans | Immunity, Innate | Immunomodulation | Lectins, C-Type | Macrophages | Mannose Receptor | Mannose-Binding Lectins | Monocytes | Receptors, Cell Surface [SUMMARY]
[CONTENT] Amino Acid Sequence | Animals | Antigen Presentation | Antimicrobial Cationic Peptides | Biomarkers | Cathelicidins | Cell Line | Chickens | Cytokines | Gene Expression Regulation | Histocompatibility Antigens Class II | Humans | Immunity, Innate | Immunomodulation | Lectins, C-Type | Macrophages | Mannose Receptor | Mannose-Binding Lectins | Monocytes | Receptors, Cell Surface [SUMMARY]
[CONTENT] human chicken cathelicidins | cathelicidins phagocytic activity | cells cathelicidins inhibited | chicken cathelicidin repertoire | immunomodulatory functions cathelicidins [SUMMARY]
null
[CONTENT] human chicken cathelicidins | cathelicidins phagocytic activity | cells cathelicidins inhibited | chicken cathelicidin repertoire | immunomodulatory functions cathelicidins [SUMMARY]
[CONTENT] human chicken cathelicidins | cathelicidins phagocytic activity | cells cathelicidins inhibited | chicken cathelicidin repertoire | immunomodulatory functions cathelicidins [SUMMARY]
[CONTENT] human chicken cathelicidins | cathelicidins phagocytic activity | cells cathelicidins inhibited | chicken cathelicidin repertoire | immunomodulatory functions cathelicidins [SUMMARY]
[CONTENT] human chicken cathelicidins | cathelicidins phagocytic activity | cells cathelicidins inhibited | chicken cathelicidin repertoire | immunomodulatory functions cathelicidins [SUMMARY]
[CONTENT] cath | cells | chicken | monocytes | expression | 37 | lps | ll 37 | ll | effect [SUMMARY]
null
[CONTENT] cath | cells | chicken | monocytes | expression | 37 | lps | ll 37 | ll | effect [SUMMARY]
[CONTENT] cath | cells | chicken | monocytes | expression | 37 | lps | ll 37 | ll | effect [SUMMARY]
[CONTENT] cath | cells | chicken | monocytes | expression | 37 | lps | ll 37 | ll | effect [SUMMARY]
[CONTENT] cath | cells | chicken | monocytes | expression | 37 | lps | ll 37 | ll | effect [SUMMARY]
[CONTENT] cath | antimicrobial | immunomodulatory | direct antimicrobial | infection | direct | chicken | cathelicidins | effect | model [SUMMARY]
null
[CONTENT] cath | expression | mrc1 | effect | monocytes | increased | mhc | mhc ii | figure | ll 37 [SUMMARY]
[CONTENT] response | infection novel | cath cath increase | repertoire immunity | inflammatory response lead cell | monocytes hd11 cells indicators | inflammatory response lead | cell activation potential better | hd11 cells indicators increased | markers primary monocytes hd11 [SUMMARY]
[CONTENT] cath | cells | expression | effect | monocytes | chicken | ll | ll 37 | 37 | incubated [SUMMARY]
[CONTENT] cath | cells | expression | effect | monocytes | chicken | ll | ll 37 | 37 | incubated [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] MRC1 | MHC-II | LL-37 ||| 2 | MRC1 | MHC-II ||| LPS [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| ||| Ctype 1 | MHC-II ||| qPCR | Griess ||| ||| MRC1 | MHC-II | LL-37 ||| 2 | MRC1 | MHC-II ||| LPS ||| [SUMMARY]
[CONTENT] ||| ||| ||| Ctype 1 | MHC-II ||| qPCR | Griess ||| ||| MRC1 | MHC-II | LL-37 ||| 2 | MRC1 | MHC-II ||| LPS ||| [SUMMARY]
Clinical outcomes in patients co-infected with COVID-19 and Staphylococcus aureus: a scoping review.
34548027
Endemic to the hospital environment, Staphylococcus aureus (S. aureus) is a leading bacterial pathogen that causes deadly infections such as bacteremia and endocarditis. In past viral pandemics, it has been the principal cause of secondary bacterial infections, significantly increasing patient mortality rates. Our world now combats the rapid spread of COVID-19, leading to a pandemic with a death toll greatly surpassing those of many past pandemics. However, the impact of co-infection with S. aureus remains unclear. Therefore, we aimed to perform a high-quality scoping review of the literature to synthesize the existing evidence on the clinical outcomes of COVID-19 and S. aureus co-infection.
BACKGROUND
A scoping review of the literature was conducted in PubMed, Scopus, Ovid MEDLINE, CINAHL, ScienceDirect, medRxiv, and the WHO COVID-19 database using a combination of terms. Articles that were in English, included patients infected with both COVID-19 and S. aureus, and provided a description of clinical outcomes for patients were eligible. From these articles, the following data were extracted: type of staphylococcal species, onset of co-infection, patient sex, age, symptoms, hospital interventions, and clinical outcomes. Quality assessments of final studies were also conducted using the Joanna Briggs Institute's critical appraisal tools.
METHODS
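As an illustration only, the PubMed arm of a multi-database search like the one described above can be scripted with Biopython's Entrez interface. The query below is a hypothetical approximation (the review states only that "a combination of terms" was used), and the email address is a placeholder required by NCBI.

```python
# Minimal sketch of a scripted PubMed search with Biopython's Entrez API.
# The query string is a hypothetical approximation of the review's search,
# not its actual term combination.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

query = ('("COVID-19" OR "SARS-CoV-2") AND '
         '("Staphylococcus aureus" OR "S. aureus") AND '
         '(coinfection OR "co-infection" OR "secondary infection")')

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f'{record["Count"]} records found; first PMIDs: {record["IdList"][:5]}')
```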
Searches generated a total of 1922 publications, and 28 articles were eligible for the final analysis. Of the 115 co-infected patients, there were a total of 71 deaths (61.7%) and 41 discharges (35.7%), with 62 patients (53.9%) requiring ICU admission. Patients were infected with methicillin-sensitive and methicillin-resistant strains of S. aureus, with the majority (76.5%) acquiring co-infection with S. aureus following hospital admission for COVID-19. Aside from antibiotics, the most commonly reported hospital interventions were intubation with mechanical ventilation (74.8%), central venous catheter (19.1%), and corticosteroids (13.0%).
RESULTS
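The outcome percentages above follow directly from the stated denominator of 115 co-infected patients; a quick arithmetic check using only the counts given in the Results:

```python
# Quick check of the reported outcome proportions against the stated total
# of 115 co-infected patients (counts taken from the Results above).
total = 115
counts = {"deaths": 71, "discharges": 41, "ICU admission": 62}

for outcome, n in counts.items():
    print(f"{outcome}: {n}/{total} = {100 * n / total:.1f}%")
# prints 61.7%, 35.7% and 53.9%, matching the reported values
```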
Given the mortality rates reported thus far for patients co-infected with S. aureus and COVID-19, COVID-19 vaccination and outpatient treatment may be key initiatives for reducing hospital admission and S. aureus co-infection risk. Physician vigilance is recommended during COVID-19 interventions that may increase the risk of bacterial co-infection with pathogens, such as S. aureus, as the medical community's understanding of these infection processes continues to evolve.
CONCLUSIONS
[ "COVID-19", "COVID-19 Vaccines", "Humans", "SARS-CoV-2", "Staphylococcal Infections", "Staphylococcus aureus" ]
8453255
Background
Upon passage of the March 11th anniversary of the official declaration of the coronavirus disease 2019 (COVID-19) pandemic [1], the causative severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pathogen has infected over 181 million individuals and resulted in more than 3.9 million deaths worldwide as of July 1, 2021 [2]. In addition to rapid spread through high transmission rates [3], infection with COVID-19 can result in severe complications such as acute respiratory distress syndrome (ARDS), thromboembolic events, septic shock, and multi-organ failure [4]. In response to this novel virus, the clinical environment has evolved to accommodate the complexities of healthcare delivery in the pandemic environment [5]. Accordingly, a particularly challenging scenario for clinicians is the management of patients with common infections that may be complicated by subsequent COVID-19 co-infection, or conversely co-infected with a pathogen following primary infection with COVID-19 [6]. Bacterial co-infection in COVID-19 patients may exacerbate the immunocompromised state caused by COVID-19, further worsening clinical prognosis [7]. Implicated as a leading bacterial pathogen in both community- and healthcare-associated infections, Staphylococcus aureus (S. aureus) is commonly feared in the hospital environment for its risk of deadly outcomes such as endocarditis, bacteremia, sepsis, and death [8]. In past viral pandemics, S. aureus has been the principal cause of secondary bacterial infections, significantly increasing patient mortality rates [9]. For viral influenza infection specifically, S. aureus co-infection and bacteremia has been associated with mortality rates of almost 50%, in contrast to the 1.4% mortality rates observed in patients infected with influenza alone [10]. Given the parallels between the clinical presentation, course, and outcomes of influenza and COVID-19 viral infection [11], mortality rates in COVID-19 patients co-infected with S. aureus may reflect those observed in influenza patients. However, while recent studies have focused on the incidence and prevalence of COVID-19 and S. aureus co-infection, the clinical outcomes of patients co-infected with these two specific pathogens remain unclear given that existing studies consolidate S. aureus patient outcomes with other bacterial pathogens [12–14]. Given that the literature informing our knowledge of COVID-19 is a dynamic and evolving entity, the purpose of this scoping review is to evaluate the current body of evidence reporting the clinical outcomes of patients co-infected with COVID-19 and S. aureus. To date, there has been no review focusing specifically on the clinical treatment courses and subsequent outcomes of COVID-19 and S. aureus co-infection. In response to the urgency of the pandemic state and high rates of COVID-19 hospital admissions, we aim to identify important areas for further research and explore potential implications for clinical practice.
null
null
Results
Our search strategy produced a total of 1922 potential publications with patients co-infected by COVID-19 and S. aureus. For transparent and reproducible methods, the PRISMA 2020 flow diagram for new systematic reviews was utilized to display the search results of our scoping review (Fig. 1). Following deduplication (n = 597) and a comprehensive screen of study titles and abstracts for irrelevant material (n = 1233), we reviewed 92 full texts for inclusion eligibility. Of these texts, 64 did not include patient outcomes for COVID-19 and S. aureus co-infected patients: 57 were incidence or prevalence studies with no patient-specific outcomes data, two included patients with COVID-19 and a history of S. aureus infection but no current COVID-19 and S. aureus co-infection, two were genome analysis studies with no patient data, and three were unavailable in English (Additional file 1: Table S5).
Fig. 1. Process of searching and selecting articles included in the scoping review based on the PRISMA 2020 flow diagram.
Publication types and geography
Following full-text review, 28 studies qualified for inclusion in our review, resulting in a total of 115 patients. Of these 28 included studies, 22 were case reports (describing single patients with individual data), two were case series (describing 7–42 patients with aggregate data), and four were cohort studies (describing 4–40 patients with aggregate data). Countries of study publication included the United States (n = 7) [7, 9, 21–25], Italy (n = 7) [26–32], Japan (n = 2) [33, 34], Iran (n = 2) [35, 36], the United Kingdom (n = 2) [37, 38], Spain (n = 2) [39, 40], Bahrain (n = 1) [41], China (n = 1) [42], France (n = 1) [43], the Philippines (n = 1) [44], Korea (n = 1) [45], and Canada (n = 1) [46], with publication dates ranging from April 15, 2020 to June 16, 2021. Table 1 describes the characteristics of these included studies and available information on their respective patient demographics in detail.
Table 1. Study and patient characteristics
First author | Country | Publication date | Study design | Quality assessment | N | Age | Male/Female | Type | Co-infection | Comorbidities | Outcome
Adachi | Japan | 05/15/20 | Case report | 8/8 | 1 | 84 | Female | MSSA | Klebsiella pneumoniae | None | Death
Bagnato | Italy | 08/05/20 | Case report | 8/8 | 1 | 62 | Female | MSSA | Candida tropicalis | Hypertension | Discharge
Chandran | United Kingdom | 05/20/21 | Case report | 7/8 | 1 | NR | Female | MSSA | None | Type 2 diabetes mellitus | Death
Chen | China | 11/01/20 | Case report | 6/8 | 1 | 29 | Male | MSSA and MRSA | Haemophilus influenzae | NR (not reported) | Discharge
Choudhury | USA | 07/04/20 | Case report | 7/8 | 1 | 73 | Male | MSSA | None | Type 2 diabetes mellitus, chronic foot osteomyelitis, aortic stenosis, prosthetic aortic valve, atrial fibrillation, prior S. aureus infection | Hospice
Cusumano | USA | 11/12/20 | Case series | 9/10 | 42 | 65.6 (mean) | Males (n = 21), Females (n = 21) | MSSA (n = 23) and MRSA (n = 19) | Enterococcus faecalis (n = 3), Candida spp. (n = 2), Klebsiella pneumoniae (n = 2), Escherichia coli (n = 1), Bacillus spp. (n = 1), Micrococcus spp. (n = 1), Staphylococcus epidermidis (n = 1), Proteus mirabilis (n = 1) | Hypertension (n = 29), diabetes mellitus (n = 21), cardiovascular disease (n = 19), lung disease (n = 7), chronic kidney disease (n = 6), malignancy (n = 5), end-stage renal disease (n = 4), organ transplant (n = 3), liver disease (n = 1) | Death at 30 days (n = 28)
De Pascale | Italy | 05/31/21 | Prospective cohort | 8/11 | 40 | 64 (mean) | Males (n = 33), Females (n = 7) | MSSA (n = 14), MRSA (n = 26) | Bacteroidetes (n = 18), Proteobacteria (n = 7), Actinobacteria (n = 3), Tenericutes (n = 2), Fusobacteria (n = 1)* | Diabetes mellitus (n = 8), cardiovascular disease (n = 7), lung disease (n = 7), immunosuppression (n = 4), neoplasm (n = 4), chronic kidney disease (n = 3) | Death (n = 26)
Duployez | France | 04/16/20 | Case report | 8/8 | 1 | 35 | Male | MSSA (PVL-secreting) | None | None | Death
Edrada | Philippines | 05/07/20 | Case report | 6/8 | 1 | 39 | Female | MSSA | Influenza B, Klebsiella pneumoniae | None | Discharge
ElSeirafi | Bahrain | 06/23/20 | Case report | 6/8 | 1 | 59 | Male | MRSA | None | NR | Death
Filocamo | Italy | 05/11/20 | Case report | 8/8 | 1 | 50 | Male | MSSA | None | None | Discharge
Hamzavi | Iran | 08/01/20 | Case report | 6/8 | 1 | 14 | Male | MSSA | None | Cerebral palsy | Death
Hoshiyama | Japan | 11/02/20 | Case series | 6/10 | 1 | 47 | Male | MSSA | None | Previous cerebral hemorrhage | Discharge
Hoshiyama | Japan | 11/02/20 | Case series | 6/10 | 1 | 39 | Male | MSSA | Group B Streptococcus | Hypertension | Discharge
Hussain | United Kingdom | 05/22/20 | Case report | 8/8 | 1 | 69 | Female | MSSA | None | Prosthetic aortic valve with reduced ejection fraction | Death
Levesque | Canada | 07/01/20 | Case report | 6/8 | 1 | 53 | Female | MSSA | None | Hypertension, diabetes mellitus, dyslipidemia | Hospital
Mirza | USA | 11/16/20 | Case report | 6/8 | 1 | 29 | Male | MRSA | Multi-drug resistant Pseudomonas | Cystic fibrosis with moderate obstructive lung disease, exocrine pancreatic insufficiency, gastroparesis, chronic S. aureus | Discharge
Patek | USA | 04/15/20 | Case report | 7/8 | 1 | 0 | Male | MSSA | Herpes simplex virus | Maternal history of oral herpetic lesions | Discharge
Posteraro | Italy | 09/06/20 | Case report | 8/8 | 1 | 79 | Male | MRSA | Morganella morganii, Candida glabrata, Acinetobacter baumannii, Proteus mirabilis, Klebsiella pneumoniae, Escherichia coli | Type 2 diabetes mellitus, ischemic heart disease, peripheral artery disease, left leg amputation | Death
Rajdev | USA | 09/10/20 | Case report | 7/8 | 1 | 32 | Male | MSSA | Klebsiella pneumoniae | Type 2 diabetes mellitus | Discharge
Rajdev | USA | 09/28/20 | Case report | 7/8 | 1 | 36 | Male | MSSA | Haemophilus influenzae | Hypertension, two renal transplants for renal dysplasia | Discharge
Ramos-Martinez | Spain | 07/30/20 | Prospective cohort | 6/11 | 1 | 60 | NR | MSSA | None | Type 2 diabetes mellitus, hypercholesterolemia, wrist arthritis, sternoclavicular arthritis | Death
Randall | USA | 12/01/20 | Case report | 7/8 | 1 | 60 | Male | MRSA | None | Chronic obstructive lung disease, coronary artery disease, hypothyroidism | Death
Randall | USA | 12/01/20 | Case report | 7/8 | 1 | 83 | Male | MRSA | None | Hypertension, atrial fibrillation | Death
Randall | USA | 12/01/20 | Case report | 7/8 | 1 | 60 | Male | MRSA | Hepatitis C | Hypertension, type 2 diabetes mellitus, cirrhosis | Death
Regazzoni | Italy | 08/07/20 | Case report | 2/8 | 1 | 70 | Male | MSSA | None | NR | Hospital
Sharifipour | Iran | 09/01/20 | Prospective cohort | 7/11 | 1 | NR | NR | MSSA | None | None | Discharge
Sharifipour | Iran | 09/01/20 | Prospective cohort | 7/11 | 1 | NR | NR | MRSA | None | Type 2 diabetes mellitus | Death
Son | Korea | 06/16/21 | Retrospective cohort | 8/11 | 4 | 79 (mean) | Male (n = 3), Female (n = 1) | MRSA | C. albicans (n = 2), vancomycin-resistant enterococci (n = 2), S. maltophilia (n = 1), carbapenem-resistant Acinetobacter baumannii (n = 1) | NR | Death (n = 3)
Spannella | Italy | 06/23/20 | Case report | 8/8 | 1 | 95 | Female | MSSA | Citrobacter werkmanii | Hypertension, chronic heart failure, paroxysmal atrial fibrillation, dyslipidemia, chronic kidney disease, vascular dementia, sacral pressure ulcers, dysphagia | Death
Spoto | Italy | 09/30/20 | Case report | 6/8 | 1 | 55 | Female | MSSA | None | Triple negative, BRCA1-related, right breast cancer with multiple bone metastasis, type 2 diabetes mellitus | Death
Valga | Spain | 06/11/20 | Case report | 6/8 | 1 | 68 | Male | MSSA | None | Hypertension, type 2 diabetes mellitus, congestive heart failure, sleep apnea, ischemic heart disease, chronic kidney disease | Discharge
* Patients were colonized with these bacterial phyla, but no distinction between colonization versus infection was reported.
Publication quality
Figure 2 represents the quality assessment scores produced by the Joanna Briggs Institute's critical appraisal tools. Scores ranged from 2 to 8 for case reports (out of 8 points total) (n = 22), 6–9 for case series (out of 10 points total) (n = 2), and 6–8 for cohort studies (out of 11 points total) (n = 4). The mean quality assessment score for these publications compared within their respective categories was 6.8 for case reports, 7.5 for case series, and 7.3 for cohort studies. In terms of most common study design limitations, the metric of patient post-intervention clinical conditions was least clearly described for case reports, neither of the case series consecutively included participants, and strategies to address incomplete follow-up were only reported for one of the four cohort studies.
Fig. 2. Quality assessment scores for included publications reported as "yes" or "no" for achieving quality metrics per the Joanna Briggs Institute's critical appraisal tools.
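For orientation, a JBI-style appraisal score is the count of checklist items answered "yes" for a study, reported out of the number of items in the tool (8 for case reports, 10 for case series, 11 for cohort studies). The sketch below tallies a hypothetical case-report appraisal; the item wording is paraphrased and the answers are not the reviewers' actual ratings.

```python
# Minimal sketch of tallying a JBI-style case report appraisal: one boolean
# per checklist item, scored as the number of "yes" answers out of 8.
# Item wording is paraphrased and the answers are hypothetical.
case_report_checklist = {
    "demographic characteristics clearly described": True,
    "history clearly described and presented as a timeline": True,
    "clinical condition on presentation clearly described": True,
    "diagnostic tests or methods and results clearly described": True,
    "intervention or treatment clearly described": True,
    "post-intervention clinical condition clearly described": False,
    "adverse or unanticipated events identified and described": True,
    "takeaway lessons provided": True,
}

score = sum(case_report_checklist.values())
print(f"JBI case report appraisal: {score}/{len(case_report_checklist)}")
```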
Patient demographics
For the 115 total patients included in our review that were co-infected with COVID-19 and S. aureus, their demographic (Table 1) and clinical data (Table 2) were described with varying completeness.
Staphylococcal species and patient outcomes are reported in both tables to enable direct comparison with patient demographics and clinical course. Across our patient sample, the mean patient age was 54.8 years (SD = 21.6); 65.3% (n = 75) of patients were male, 32.1% (n = 37) were female, and 3 patients (2.6%) did not have their gender specified. Patients presented with a range of comorbidities, of which diabetes mellitus (33.9%, n = 39), hypertension (32.2%, n = 37), and cardiovascular disease (28.7%, n = 33) were the most common. Five patients had no comorbidities, and four studies reported no information on comorbid medical history. The most common presenting symptoms at hospital admission were cough (13.9%, n = 16), fever (13.9%, n = 16), and dyspnea (13.0%, n = 15).

Table 2. Clinical characteristics (one study per line; columns separated by "|"): First Author | N | Type | Diagnosis | Co-infection onset | Presentation | Dx findings | Treatments and interventions | Complications | Length of stay (days) | ICU | Outcome
Adachi | 1 | MSSA | Sputum sample, pneumonia | Unclear (a) | Fever, diarrhea, dyspnea | Bilateral opacities on chest x-ray (CXR); ground glass opacities and lower lobe consolidation on chest computed tomography (CT) | Antibiotics, corticosteroids, lopinavir/ritonavir, morphine | ARDS | 16 | Yes | Death
Bagnato | 1 | MSSA | Blood culture, bacteremia | Hospital-onset | Fever, cough, diarrhea, myalgia | Unremarkable head CT, normal creatine kinase | Antibiotics, corticosteroids, intubation and ventilation, antifungals, lopinavir/ritonavir, hydroxychloroquine, tocilizumab, neuromuscular blocking agents, olanzapine | Psychomotor agitation and temporospatial disorientation, myopathy | 140 | Yes | Discharge
Chandran | 1 | MSSA | Blood culture and tracheal aspirate, pneumonia (ventilator-associated) and bacteremia | Hospital-onset | Dyspnea (positive COVID test) | Bilateral interstitial infiltrates (CXR) and ground glass opacities (CT) | Antibiotics, intubation and ventilation | Bilateral cavitating lung lesions, septic shock | 15 | Yes | Death
Chen | 1 | MSSA and MRSA | Sputum sample, pneumonia | Hospital-onset | Asymptomatic (positive COVID test) | Patchy consolidation and ground glass opacities in right upper lobe on CXR (day 29) | Antibiotics, corticosteroids, lopinavir/ritonavir, Abidol combined with IFN inhalant, Thymalfasin, ribavirin, loratadine | Pneumonia | 51 | No | Discharge
Choudhury | 1 | MSSA | Blood culture, endocarditis and bacteremia | Unclear (b) | Altered mental status, low back pain, urinary incontinence, right foot ulcers | Cystitis and pyelonephritis on CT, epidural abscess (L4/5) on magnetic resonance imaging (MRI) | Antibiotics, oral rifampin, hydroxychloroquine | Endocarditis, aortic root abscess | NR (not reported) | NR | Hospice
Cusumano | 42 | MSSA (n = 23) and MRSA (n = 19) | Blood culture, bacteremia (n = 42), pneumonia (n = 8), vascular (n = 3), osteomyelitis (n = 1), skin (n = 1) | Hospital-onset (n = 28), community-onset (n = 14) | NR | Abnormal CXR (n = 36), vegetation on transthoracic echo (n = 1) | Antibiotics (n = 42), intubation and ventilation (n = 31), central venous catheter (n = 19) | NR | NR | NR | Death at 30 days (n = 28)
De Pascale | 40 | MSSA (n = 14), MRSA (n = 26) | Tracheal aspirate and blood culture, pneumonia (ventilator-associated) (n = 40) and bacteremia (n = 19) | Late hospital-onset (n = 35), early hospital-onset (n = 5) | NR | NR | Antibiotics (n = 40), intubation and ventilation (n = 40) | Septic shock (n = 22), acute kidney injury (n = 4) | 11 (mean) | Yes (n = 40) | Death (n = 26)
Duployez | 1 | MSSA (PVL-secreting) | Pleural drainage sample, pneumonia | Unclear (c) | Fever, cough, bloody sputum | Consolidation of left upper lobe, left pleural effusion, right ground glass opacities, bilateral cavitary lesions on CT | Antibiotics, intubation and ventilation, extracorporeal membrane oxygenation (ECMO), anticoagulation, upper left lobectomy | Necrotizing pneumonia; deterioration of respiratory, renal, and liver functions | 17 | Yes | Death
Edrada | 1 | MSSA | Nasal and throat swab with PCR | Community-onset/carrier | Dry cough, sore throat | Unremarkable chest CT | Oseltamivir | None | 19 | No | Discharge
ElSeirafi | 1 | MRSA | Blood culture, bacteremia | Hospital-onset | Fever, dry cough, dyspnea | Bilateral pulmonary infiltrates and ARDS on CXR | Antibiotics, IFN, ribavirin, plasma therapy, tocilizumab injections | Septic shock with multi-organ dysfunction | 16 | Yes | Death
Filocamo | 1 | MSSA | Blood culture, bacteremia | Hospital-onset | Fever, dyspnea | Bilateral ground glass opacities on chest CT | Antibiotics, intubation and ventilation, lopinavir/ritonavir, hydroxychloroquine, anakinra | Progressive cholestatic liver injury | 29 | Yes | Discharge
Hamzavi | 1 | MSSA | Blood culture, bacteremia | Unclear (d) | Fever, cough, dyspnea, lethargy | Left pleural effusion on CXR | Antibiotics, intubation and ventilation | Multi-organ dysfunction | NR | Yes | Death
Hoshiyama | 1 | MSSA | Throat swab and sputum sample | Unclear (e) | Cough | Normal labs | NR | NR | NR | No | Discharge
Hoshiyama | 1 | MSSA | Throat swab and sputum sample | Unclear (e) | Cough | Normal labs | NR | NR | NR | No | Discharge
Hussain | 1 | MSSA | Blood culture, bacteremia | Community-onset | Fever, cough, dyspnea, malaise | Bilateral reticular enhancement and heavily calcified aortic valve with mass effect on left atrial wall on chest CT | Antibiotics, intubation and ventilation, esophagogastroduodenoscopy, pantoprazole, amiodarone, heparin | Bleeding Dieulafoy’s lesion, fast atrial fibrillation, acute kidney injury, multi-organ failure, intracerebral hematoma | 18 | Yes | Death
Levesque | 1 | MSSA | Sputum sample, pneumonia (ventilator-associated) | Hospital-onset | Fever, dry cough, dyspnea | Small intraventricular hemorrhage on head CT (day 39) | Antibiotics, intubation and ventilation, corticosteroids, propofol, fentanyl, neuromuscular blocking agents, heparin, continuous platelet infusion, blood transfusions, IVIG, endobronchial clot removals, romiplostim, vincristine | ARDS, ICU-acquired neuromyopathy, acute kidney injury, thrombocytopenia, intraventricular hemorrhage, ventilator-associated pneumonia | At least 39 | Yes | Hospital
Mirza | 1 | MRSA | Sputum sample | Carrier (chronic) | Chest pain, dyspnea | Bilateral upper lobe bronchial wall thickening and bronchiectasis with nodular and interstitial opacities on chest CT | Antibiotics, remdesivir | Meropenem-resistant Pseudomonas | 6 | No | Discharge
Patek | 1 | MSSA | Wound culture, cellulitis | Community-onset | Fever, erythema and erosions of right thumb and fourth digit, somnolence, decreased feeding | Elevated LFTs, bilateral perihilar streaking on CXR, neutropenia | Antibiotics, acyclovir, nasal cannula O2 | Hypoxic respiratory failure | 7 | Yes | Discharge
Posteraro | 1 | MRSA | Blood culture, bacteremia | Hospital-onset | Fever, cough, dyspnea | CXR and chest CT consistent with pneumonia | Antibiotics, antifungals, hydroxychloroquine, darunavir/ritonavir | Hypoxia, left leg re-amputation, septic shock | 53 | Yes | Death
Rajdev | 1 | MSSA | Sputum sample, pneumonia (ventilator-associated) | Community- and hospital-onset (f) | Dyspnea | Bilateral consolidations on CXR; bilateral ground glass opacities and pneumomediastinum with subcutaneous emphysema on chest CT | Intubation and ventilation, epoprostenol, hydromorphone, neuromuscular blocking agents, ECMO | Anemia, epistaxis, oropharyngeal bleeding, ARDS | 47 | Yes | Discharge
Rajdev | 1 | MSSA | Tracheal aspirate, pneumonia | Hospital-onset | Fever, cough, dyspnea, myalgias | Diffuse bilateral pulmonary opacities on CXR | Antibiotics, intubation and ventilation, corticosteroids, tacrolimus, mycophenolate, remdesivir | Hypoxic respiratory failure, Guillain-Barré syndrome | 23 | NR | Discharge
Ramos-Martinez | 1 | MSSA | Blood culture, bacteremia (central venous catheter-associated) | Hospital-onset | Fever, meningitis, right infrapopliteal deep vein thrombosis | Mild mitral insufficiency on transthoracic echo | Antibiotics, intubation and ventilation, central venous catheter, corticosteroids, tocilizumab | Native valve endocarditis, progressive sepsis | At least 20 | Yes | Death
Randall | 1 | MRSA | Blood culture, bacteremia | Hospital-onset | Fever, cough, dyspnea | NR | Intubation and ventilation, corticosteroids, central venous catheter | Respiratory distress | 3 | NR | Death
Randall | 1 | MRSA | Blood culture, bacteremia | Hospital-onset | Hypoxia (positive COVID test) | NR | Corticosteroids, remdesivir, central venous catheter | Septic shock | 14 | NR | Death
Randall | 1 | MRSA | Blood culture, bacteremia | Hospital-onset | Hypoxia (positive COVID test) | NR | Corticosteroids | Cardiac arrest | 10 | Yes | Death
Regazzoni | 1 | MSSA | Nasal swab and blood culture, bacteremia | Hospital-onset | Bilateral pneumonia (positive COVID test) | Ischemic areas with hemorrhagic transformation on head CT and MRI; large vegetations on aortic valve with regurgitation on transesophageal echo | Antibiotics, corticosteroids | Severe systemic inflammatory response | At least 10 | NR | Hospital
Sharifipour | 1 | MSSA | Tracheal aspirate, pneumonia (ventilator-associated) | Hospital-onset | Cough, dyspnea, sore throat | NR | Antibiotics, intubation and ventilation | Ventilator-associated pneumonia | 13 | Yes | Discharge
Sharifipour | 1 | MRSA | Tracheal aspirate, pneumonia (ventilator-associated) | Hospital-onset | Cough, dyspnea, sore throat | NR | Antibiotics, intubation and ventilation | Ventilator-associated pneumonia | 9 | Yes | Death
Son | 4 | MRSA | Sputum sample, pneumonia (n = 4) | Hospital-onset (n = 4) | Pneumonia (positive COVID test) | NR | Antibiotics (n = 4), corticosteroids (n = 4) | NR | 42 (mean) | Yes | Death (n = 3)
Spannella | 1 | MSSA | Bronchoalveolar lavage, pneumonia | Community-onset | Fever, cough, emesis | Bilateral ground glass opacities and multiple areas of consolidation on CXR | Antibiotics, metoprolol, amiodarone, continuous positive-pressure airway | Atrial fibrillation, respiratory failure, altered mental status, tachycardia, severe hypoxemia | 27 | Yes | Death
Spoto | 1 | MSSA | Tracheal aspirate, pneumonia | Unclear (g) | Fever, dyspnea, respiratory distress following chemoimmunotherapy | Bilateral ground glass opacities and consolidation in the middle/upper lobes on chest CT | Antibiotics, intubation and ventilation, lopinavir/ritonavir, hydroxychloroquine | ARDS | 5 | NR | Death
Valga | 1 | MSSA | Tracheal aspirate, pneumonia | Hospital-onset | Fever, dry cough | NR | Antibiotics, intubation and ventilation, corticosteroids, hydroxychloroquine, lopinavir/ritonavir, IFN beta, heparin | ARDS, multi-organ failure | 47 | Yes | Discharge

Table 2 footnotes:
(a) Positive sputum culture on day 10
(b) Patient recently treated for S. aureus prior to admission, but setting is unclear
(c) Pleural fluid tested on day 4
(d) Timeline of blood culture unclear
(e) Timeline of sputum testing unclear
(f) Positive sputum on admission, subsequent ventilator-associated infection
(g) Patient was receiving routine treatments in a healthcare setting
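Because the published table flattens a dozen heterogeneous fields into a single block of text, readers who want to reuse these case-level data may find it easier to hold each row as a structured record. The sketch below is purely illustrative and not part of the authors' workflow (the review charted data in Microsoft Excel); the class name and field names are assumptions, and the example values are transcribed from the Adachi row of Table 2.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CoinfectionCase:
    """One row of Table 2 as a structured record (field names are illustrative)."""
    first_author: str
    n: int
    staph_type: str                      # MSSA, MRSA, or both
    diagnosis: str                       # culture source and infection type
    onset: str                           # hospital-onset, community-onset, carrier, or unclear
    presentation: List[str]
    dx_findings: Optional[str]
    treatments: List[str]
    complications: Optional[str]
    length_of_stay_days: Optional[str]   # kept as text; some entries are means or "at least" values
    icu: Optional[str]                   # "Yes", "No", or None when not reported
    outcome: str                         # Death, Discharge, Hospital, or Hospice

# Example record transcribed from the Adachi row above
adachi = CoinfectionCase(
    first_author="Adachi",
    n=1,
    staph_type="MSSA",
    diagnosis="Sputum sample, pneumonia",
    onset="Unclear (positive sputum culture on day 10)",
    presentation=["fever", "diarrhea", "dyspnea"],
    dx_findings="Bilateral opacities on CXR; ground glass opacities and lower lobe consolidation on chest CT",
    treatments=["antibiotics", "corticosteroids", "lopinavir/ritonavir", "morphine"],
    complications="ARDS",
    length_of_stay_days="16",
    icu="Yes",
    outcome="Death",
)
```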
Infection characteristics
In terms of specific staphylococcal species, 51.3% (n = 59) of patients were infected with methicillin-sensitive Staphylococcus aureus (MSSA) and 49.6% (n = 57) with methicillin-resistant Staphylococcus aureus (MRSA); a single patient was co-infected with both MSSA and MRSA. One MSSA-co-infected patient had a fatal Panton-Valentine leukocidin toxin-producing strain (PVL-MSSA). In addition to COVID-19 and S. aureus co-infection, 26.1% (n = 30) of patients were co-infected with one or more additional pathogens, such as Klebsiella pneumoniae (n = 6), Candida spp. (n = 6), Enterococcus spp. (n = 5), Haemophilus influenzae (n = 2), Proteus mirabilis (n = 2), and Escherichia coli (n = 2). Comprehensive patient co-infection data are reported in Table 1.

Diagnoses and treatments
Of all 115 reported cases of co-infection with COVID-19 and S. aureus, the diagnosis of S. aureus infection was most frequently established by blood culture (64.3%, n = 74). S. aureus infections manifested predominantly as bacteremia (64.3%, n = 74) and pneumonia (55.7%, n = 64), with additional cases of endocarditis/vasculitis (3.5%, n = 4), cellulitis (1.7%, n = 2), and osteomyelitis (0.9%, n = 1). Two patients who tested positive for S. aureus with no clear infection source were suspected to be chronic carriers of the pathogen. Across these presentations, the majority (76.5%, n = 88) experienced hospital-onset S. aureus co-infection following hospitalization for an initial COVID-19 infection, while 19 patients (16.5%) presented with S. aureus infection on admission that was determined to be community-onset in etiology. Aside from a standard course of antibiotics, patients received a variety of adjuvant treatments during their hospital admission, the most common being intubation and mechanical ventilation (74.8%, n = 86), central venous catheterization (19.1%, n = 22), and corticosteroids (13.0%, n = 15). Table 2 details the clinical course following hospital admission for each patient.
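The pooled figures quoted throughout this section all use the 115 co-infected patients as the denominator. The authors report charting and counting these data descriptively in Microsoft Excel; the snippet below is only a hypothetical Python equivalent (not part of the original workflow) that recomputes a few of the reported percentages from counts transcribed from the text.

```python
# Hypothetical re-computation of several pooled frequencies reported in this review.
# Counts are transcribed from the text; the denominator is the 115 co-infected patients.
TOTAL_PATIENTS = 115

pooled_counts = {
    "bacteremia": 74,
    "pneumonia": 64,
    "hospital-onset S. aureus co-infection": 88,
    "community-onset S. aureus co-infection": 19,
    "intubation and mechanical ventilation": 86,
    "central venous catheter": 22,
    "corticosteroids": 15,
}

for label, count in pooled_counts.items():
    pct = 100 * count / TOTAL_PATIENTS
    print(f"{label}: n = {count} ({pct:.1f}%)")

# Expected output matches the reported values, e.g. 88/115 -> 76.5% and 86/115 -> 74.8%.
```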
Conclusion
Compared with patients infected with COVID-19 alone, patients co-infected with COVID-19 and S. aureus show a higher in-hospital mortality rate. S. aureus co-infection in COVID-19 patients is predominantly healthcare-associated, and common hospital interventions for severe COVID-19, such as mechanical ventilation and central venous catheterization, may increase the risk of bacterial infection. These findings underscore the importance of COVID-19 vaccination in preventing hospitalization for COVID-19 and the accompanying susceptibility to hospital-acquired S. aureus co-infection.
[ "Background", "Methods", "Search strategy and study selection", "Data extraction", "Data synthesis and analysis", "Quality assessment", "Publication types and geography", "Publication quality", "Patient demographics", "Infection characteristics", "Diagnoses and treatments", "Complications and outcomes", "" ]
[ "Upon passage of the March 11th anniversary of the official declaration of the coronavirus disease 2019 (COVID-19) pandemic [1], the causative severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pathogen has infected over 181 million individuals and resulted in more than 3.9 million deaths worldwide as of July 1, 2021 [2]. In addition to rapid spread through high transmission rates [3], infection with COVID-19 can result in severe complications such as acute respiratory distress syndrome (ARDS), thromboembolic events, septic shock, and multi-organ failure [4]. In response to this novel virus, the clinical environment has evolved to accommodate the complexities of healthcare delivery in the pandemic environment [5]. Accordingly, a particularly challenging scenario for clinicians is the management of patients with common infections that may be complicated by subsequent COVID-19 co-infection, or conversely co-infected with a pathogen following primary infection with COVID-19 [6]. Bacterial co-infection in COVID-19 patients may exacerbate the immunocompromised state caused by COVID-19, further worsening clinical prognosis [7].\nImplicated as a leading bacterial pathogen in both community- and healthcare-associated infections, Staphylococcus aureus (S. aureus) is commonly feared in the hospital environment for its risk of deadly outcomes such as endocarditis, bacteremia, sepsis, and death [8]. In past viral pandemics, S. aureus has been the principal cause of secondary bacterial infections, significantly increasing patient mortality rates [9]. For viral influenza infection specifically, S. aureus co-infection and bacteremia has been associated with mortality rates of almost 50%, in contrast to the 1.4% morality rates observed in patients infected with influenza alone [10]. Given the parallels between the clinical presentation, course, and outcomes of influenza and COVID-19 viral infection [11], mortality rates in COVID-19 patients co-infected with S. aureus may reflect those observed in influenza patients. However, while recent studies have focused on the incidence and prevalence of COVID-19 and S. aureus co-infection, the clinical outcomes of patients co-infected with these two specific pathogens remains unclear given that existing studies consolidate S. aureus patient outcomes with other bacterial pathogens [12–14].\nGiven that the literature informing our knowledge of COVID-19 is a dynamic and evolving entity, the purpose of this scoping review is to evaluate the current body of evidence reporting the clinical outcomes of patients co-infected with COVID-19 and S. aureus. To date, there has been no review focusing specifically on the clinical treatment courses and subsequent outcomes of COVID-19 and S. aureus co-infection. In response to the urgency of the pandemic state and high rates of COVID-19 hospital admissions, we aim to identify important areas for further research and explore potential implications for clinical practice.", "Search strategy and study selection To provide a scoping review of initial insight into the breadth of developing data on COVID-19 and S. aureus co-infection, we followed the five-stage methodology of scoping review practice presented by Levac, Colquhoun, and O’Brien [15]. 
In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for Scoping Reviews [16], we conducted electronic searches in PubMed, Scopus, Ovid MEDLINE, CINAHL, ScienceDirect, medRxiv (preprint), and the WHO COVID-19 database between July 3, 2021 and July 16, 2021. Search terms were combined with the use of Boolean operators and included subject headings or key terms specific to COVID-19 (i.e. severe acute respiratory syndrome coronavirus 2 OR SARS-CoV2 OR 2019 novel coronavirus OR 2019-nCoV OR coronavirus disease 2019 virus OR COVID-19 OR Wuhan coronavirus) and Staphylococcus aureus (i.e. methicillin-resistant staphylococcus aureus OR MRSA OR methicillin-susceptible Staphylococcus aureus OR MSSA OR staphylococcal infections). A comprehensive list of our scoping terms and search strategies is included in the Appendix (Ädditional file 1: Table S1). Two independent, experienced reviewers (JA and KV) screened the titles and abstracts of eligible studies and performed full-text review on qualified selections. For this review, we broadly considered articles of any design that included patients infected with both COVID-19 and S. aureus, provided a description of the timeline and ultimate clinical outcomes for these patients (i.e. death or discharge from hospital) at study completion, and were available in English. Studies were excluded if they did not report final outcomes since our scoping review purpose was to evaluate the quality of existing literature that described the clinical course and mortality rate of patients co-infected with these pathogens. We excluded duplicate records and disagreements regarding study inclusion were resolved by consensus or feedback from the senior author.\nTo provide a scoping review of initial insight into the breadth of developing data on COVID-19 and S. aureus co-infection, we followed the five-stage methodology of scoping review practice presented by Levac, Colquhoun, and O’Brien [15]. In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for Scoping Reviews [16], we conducted electronic searches in PubMed, Scopus, Ovid MEDLINE, CINAHL, ScienceDirect, medRxiv (preprint), and the WHO COVID-19 database between July 3, 2021 and July 16, 2021. Search terms were combined with the use of Boolean operators and included subject headings or key terms specific to COVID-19 (i.e. severe acute respiratory syndrome coronavirus 2 OR SARS-CoV2 OR 2019 novel coronavirus OR 2019-nCoV OR coronavirus disease 2019 virus OR COVID-19 OR Wuhan coronavirus) and Staphylococcus aureus (i.e. methicillin-resistant staphylococcus aureus OR MRSA OR methicillin-susceptible Staphylococcus aureus OR MSSA OR staphylococcal infections). A comprehensive list of our scoping terms and search strategies is included in the Appendix (Ädditional file 1: Table S1). Two independent, experienced reviewers (JA and KV) screened the titles and abstracts of eligible studies and performed full-text review on qualified selections. For this review, we broadly considered articles of any design that included patients infected with both COVID-19 and S. aureus, provided a description of the timeline and ultimate clinical outcomes for these patients (i.e. death or discharge from hospital) at study completion, and were available in English. 
Studies were excluded if they did not report final outcomes since our scoping review purpose was to evaluate the quality of existing literature that described the clinical course and mortality rate of patients co-infected with these pathogens. We excluded duplicate records and disagreements regarding study inclusion were resolved by consensus or feedback from the senior author.\nData extraction For the final articles selected, we completed data extraction in duplicate, and any discrepancies were resolved through discussion or consult with the senior author. While several studies also included reports on patients infected with COVID-19 alone or co-infected with an alternative pathogen, we extracted data solely for patients with COVID-19 and S. aureus co-infection. Our data extraction items included study methodology, author and study location, type of staphylococcal species, onset of S. aureus infection, S. aureus culture site and infection source, patient sample size, age, gender, presentation, comorbidities or additional co-infections, prior history of S. aureus infection, diagnostic findings, hospital treatments and interventions, complications, total length of hospital admission, intensive care unit transfer, and final patient mortality outcomes upon study completion.\nFor the final articles selected, we completed data extraction in duplicate, and any discrepancies were resolved through discussion or consult with the senior author. While several studies also included reports on patients infected with COVID-19 alone or co-infected with an alternative pathogen, we extracted data solely for patients with COVID-19 and S. aureus co-infection. Our data extraction items included study methodology, author and study location, type of staphylococcal species, onset of S. aureus infection, S. aureus culture site and infection source, patient sample size, age, gender, presentation, comorbidities or additional co-infections, prior history of S. aureus infection, diagnostic findings, hospital treatments and interventions, complications, total length of hospital admission, intensive care unit transfer, and final patient mortality outcomes upon study completion.\nData synthesis and analysis Microsoft Excel 2016 (Redmond, WA, USA) was used to collect and chart data extracted from the studies that met the inclusion criteria. Data was synthesized and analyzed descriptively, with frequency counts performed for individual and grouped study metrics. The purpose of synthesizing the extracted information through this method was to create an overview of existing knowledge and identify gaps in the current literature on COVID-19 and S. aureus co-infection.\nMicrosoft Excel 2016 (Redmond, WA, USA) was used to collect and chart data extracted from the studies that met the inclusion criteria. Data was synthesized and analyzed descriptively, with frequency counts performed for individual and grouped study metrics. The purpose of synthesizing the extracted information through this method was to create an overview of existing knowledge and identify gaps in the current literature on COVID-19 and S. aureus co-infection.\nQuality assessment Given that the majority of existing literature reporting outcomes data for COVID-19 and S. aureus co-infection were case reports, we utilized the Joanna Briggs Institute’s critical appraisal tools [17] to provide a metric for our scoping assessment of the methodological quality of the included studies. 
Application of these tools enabled examination of study quality in the areas of inclusion criteria, sample size, description of study participants, setting, and the appropriateness of the statistical analysis. As in previous reviews [18, 19], the tools were modified to produce a numeric score with case reports assessed based on an eight-item scale, case series on a ten-item scale, and cohort studies on an eleven-item scale. Studies were assessed with the methodological quality tool specific to their design (i.e. case report, case series, cohort) by two independent reviewers (JA and KV) and discrepancies were resolved through discussion. While debate exists regarding the minimal number of patients required for study qualification as a “case series” [20], we considered studies reporting individual patient data as “case reports” and those reporting aggregate patient data as “case series.” Our complete quality assessment, including tools and scores, is available in the Appendix (Additional file 1: Tables S2–S4).\nGiven that the majority of existing literature reporting outcomes data for COVID-19 and S. aureus co-infection were case reports, we utilized the Joanna Briggs Institute’s critical appraisal tools [17] to provide a metric for our scoping assessment of the methodological quality of the included studies. Application of these tools enabled examination of study quality in the areas of inclusion criteria, sample size, description of study participants, setting, and the appropriateness of the statistical analysis. As in previous reviews [18, 19], the tools were modified to produce a numeric score with case reports assessed based on an eight-item scale, case series on a ten-item scale, and cohort studies on an eleven-item scale. Studies were assessed with the methodological quality tool specific to their design (i.e. case report, case series, cohort) by two independent reviewers (JA and KV) and discrepancies were resolved through discussion. While debate exists regarding the minimal number of patients required for study qualification as a “case series” [20], we considered studies reporting individual patient data as “case reports” and those reporting aggregate patient data as “case series.” Our complete quality assessment, including tools and scores, is available in the Appendix (Additional file 1: Tables S2–S4).", "To provide a scoping review of initial insight into the breadth of developing data on COVID-19 and S. aureus co-infection, we followed the five-stage methodology of scoping review practice presented by Levac, Colquhoun, and O’Brien [15]. In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for Scoping Reviews [16], we conducted electronic searches in PubMed, Scopus, Ovid MEDLINE, CINAHL, ScienceDirect, medRxiv (preprint), and the WHO COVID-19 database between July 3, 2021 and July 16, 2021. Search terms were combined with the use of Boolean operators and included subject headings or key terms specific to COVID-19 (i.e. severe acute respiratory syndrome coronavirus 2 OR SARS-CoV2 OR 2019 novel coronavirus OR 2019-nCoV OR coronavirus disease 2019 virus OR COVID-19 OR Wuhan coronavirus) and Staphylococcus aureus (i.e. methicillin-resistant staphylococcus aureus OR MRSA OR methicillin-susceptible Staphylococcus aureus OR MSSA OR staphylococcal infections). A comprehensive list of our scoping terms and search strategies is included in the Appendix (Ädditional file 1: Table S1). 
Two independent, experienced reviewers (JA and KV) screened the titles and abstracts of eligible studies and performed full-text review on qualified selections. For this review, we broadly considered articles of any design that included patients infected with both COVID-19 and S. aureus, provided a description of the timeline and ultimate clinical outcomes for these patients (i.e. death or discharge from hospital) at study completion, and were available in English. Studies were excluded if they did not report final outcomes since our scoping review purpose was to evaluate the quality of existing literature that described the clinical course and mortality rate of patients co-infected with these pathogens. We excluded duplicate records and disagreements regarding study inclusion were resolved by consensus or feedback from the senior author.", "For the final articles selected, we completed data extraction in duplicate, and any discrepancies were resolved through discussion or consult with the senior author. While several studies also included reports on patients infected with COVID-19 alone or co-infected with an alternative pathogen, we extracted data solely for patients with COVID-19 and S. aureus co-infection. Our data extraction items included study methodology, author and study location, type of staphylococcal species, onset of S. aureus infection, S. aureus culture site and infection source, patient sample size, age, gender, presentation, comorbidities or additional co-infections, prior history of S. aureus infection, diagnostic findings, hospital treatments and interventions, complications, total length of hospital admission, intensive care unit transfer, and final patient mortality outcomes upon study completion.", "Microsoft Excel 2016 (Redmond, WA, USA) was used to collect and chart data extracted from the studies that met the inclusion criteria. Data was synthesized and analyzed descriptively, with frequency counts performed for individual and grouped study metrics. The purpose of synthesizing the extracted information through this method was to create an overview of existing knowledge and identify gaps in the current literature on COVID-19 and S. aureus co-infection.", "Given that the majority of existing literature reporting outcomes data for COVID-19 and S. aureus co-infection were case reports, we utilized the Joanna Briggs Institute’s critical appraisal tools [17] to provide a metric for our scoping assessment of the methodological quality of the included studies. Application of these tools enabled examination of study quality in the areas of inclusion criteria, sample size, description of study participants, setting, and the appropriateness of the statistical analysis. As in previous reviews [18, 19], the tools were modified to produce a numeric score with case reports assessed based on an eight-item scale, case series on a ten-item scale, and cohort studies on an eleven-item scale. Studies were assessed with the methodological quality tool specific to their design (i.e. case report, case series, cohort) by two independent reviewers (JA and KV) and discrepancies were resolved through discussion. 
While debate exists regarding the minimal number of patients required for study qualification as a “case series” [20], we considered studies reporting individual patient data as “case reports” and those reporting aggregate patient data as “case series.” Our complete quality assessment, including tools and scores, is available in the Appendix (Additional file 1: Tables S2–S4).", "Following full-text review, 28 studies qualified for inclusion in our review, resulting in a total of 115 patients. Of these 28 included studies, 22 were case reports (describing single patients with individual data), two were case series (describing 7–42 patients with aggregate data), and four were cohort studies (describing 4–40 patients with aggregate data). Countries of study publication included the United States (n = 7) [7, 9, 21–25], Italy (n = 7) [26–32], Japan (n = 2) [33, 34], Iran (n = 2) [35, 36], the United Kingdom (n = 2) [37, 38], Spain (n = 2) [39, 40], Bahrain (n = 1) [41], China (n = 1) [42], France (n = 1) [43], the Philippines (n = 1) [44], Korea (n = 1) [45], and Canada (n = 1) [46], with publication dates ranging from April 15, 2020 to June 16, 2021. Table 1 describes the characteristics of these included studies and available information on their respective patient demographics in detail.\n\nTable 1Study and patient characteristicsFirst AuthorCountryPublication dateStudy designQuality AssessmentNAgeMale/FemaleTypeCo-infectionComorbiditiesOutcomeAdachiJapan05/15/20Case report8/8184FemaleMSSA\nKlebsiella pneumoniae\nNoneDeathBagnatoItaly08/05/20Case report8/8162FemaleMSSA\nCandida tropicalis\nHypertensionDischargeChandranUnited Kingdom05/20/21Case report7/81NRFemaleMSSANoneType 2 diabetes mellitusDeathChenChina11/01/20Case report6/8129MaleMSSA and MRSA\nHaemophilus influenzae\nNR (not reported)DischargeChoudhuryUSA07/04/20Case report7/8173MaleMSSANoneType 2 diabetes mellitus, chronic foot osteomyelitis, aortic stenosis, prosthetic aortic valve, atrial fibrillation, prior S. aureus infectionHospiceCusumanoUSA11/12/20Case series9/104265.6 (mean)Males (n = 21), Females (n = 21)MSSA (n = 23) and MRSA (n = 19)Enterococcus faecalis (n = 3), Candida spp. (n = 2), Klebsiella pneumoniae (n = 2), Escheria coli (n = 1), Bacillus spp. (n = 1), Micrococcus spp. 
(n = 1), Staphylococcus epidermidis (n = 1), Proteus mirabilis (n = 1)Hypertension (n = 29), diabetes mellitus (n = 21), cardiovascular disease (n = 19), lung disease (n = 7), chronic kidney disease (n = 6), malignancy (n = 5), end-stage renal disease (n = 4), organ transplant (n = 3), liver disease (n = 1)Death at 30 days (n = 28)De PascaleItaly05/31/21Prospective cohort8/114064 (mean)Males (n = 33), Females (n = 7)MSSA (n = 14), MRSA (n = 26)Bacteroidetes (n = 18), Proteobacteria (n = 7), Actinobacteria (n = 3), Tenericutes (n = 2), Fusobacteria (n = 1)1Diabetes mellitus (n = 8), cardiovascular disease (n = 7), lung disease (n = 7), immunosuppression (n = 4), neoplasm (n = 4), chronic kidney disease (n = 3)Death (n = 26)DuployezFrance04/16/20Case report8/8135MaleMSSA (PVL-secreting)NoneNoneDeathEdradaPhilippines05/07/20Case report6/8139FemaleMSSAInfluenza B, Klebsiella pneumoniaeNoneDischargeElSeirafiBahrain06/23/20Case report6/8159MaleMRSANoneNRDeathFilocamoItaly05/11/20Case report8/8150MaleMSSANoneNoneDischargeHamzaviIran08/01/20Case report6/8114MaleMSSANoneCerebral palsyDeathHoshiyamaJapan11/02/20Case series6/10147MaleMSSANonePrevious cerebral hemorrhageDischarge“”“”“”“”“”139MaleMSSA\nGroup B Streptococcus\nHypertensionDischargeHussainUnited Kingdom05/22/20Case report8/8169FemaleMSSANoneProsthetic aortic valve with reduced ejection fractionDeathLevesqueCanada07/01/20Case report6/8153FemaleMSSANoneHypertension, diabetes mellitus, dyslipidemiaHospitalMirzaUSA11/16/20Case report6/8129MaleMRSAMulti-drug resistant PseudomonasCystic fibrosis with moderate obstructive lung disease, exocrine pancreatic insufficiency, gastroparesis, chronic S. aureusDischargePatekUSA04/15/20Case report7/810MaleMSSAHerpes simplex virusMaternal history of oral herpetic lesionsDischargePosteraroItaly09/06/20Case report8/8179MaleMRSAMorganella morganii, Candida glabrata, Acinetobacter baumannii, Proteus mirabilis, Klebsiella pneumoniae, Escherichia coliType 2 diabetes mellitus, ischemic heart disease, peripheral artery disease, left leg amputationDeathRajdevUSA09/10/20Case report7/8132MaleMSSA\nKlebsiella pneumoniae\nType 2 diabetes mellitusDischargeRajdevUSA09/28/20Case report7/8136MaleMSSA\nHaemophilus influenzae\nHypertension, two renal transplants for renal dysplasiaDischargeRamos-MartinezSpain07/30/20Prospective cohort6/11160NRMSSANoneType 2 diabetes mellitus, hypercholesterolemia, wrist arthritis, sternoclavicular arthritisDeathRandallUSA12/01/20Case report7/8160MaleMRSANoneChronic obstructive lung disease, coronary artery disease, hypothyroidismDeath“”“”“”“”“”183MaleMRSANoneHypertension, atrial fibrillationDeath“”“”“”“”“”160MaleMRSAHepatitis CHypertension, type 2 diabetes mellitus, cirrhosisDeathRegazzoniItaly08/07/20Case report2/8170MaleMSSANoneNRHospitalSharifipourIran09/01/20Prospective cohort7/111NRNRMSSANoneNoneDischarge“”“”“”“”1NRNRMRSANoneType 2 diabetes mellitusDeathSonKorea06/16/21Retrospective cohort8/11479 (mean)Male (n = 3), Female (n = 1)MRSAC. albicans (n = 2), Vancomycin-resistant enterococci (n = 2), S. 
maltophilia (n = 1) carbapenem-resistant Acinetobacter baumannii (n = 1),NRDeath (n = 3)SpannellaItaly06/23/20Case report8/8195FemaleMSSA\nCitrobacter werkmanii\nHypertension, chronic heart failure, paroxysmal atrial fibrillation, dyslipidemia, chronic kidney disease, vascular dementia, sacral pressure ulcers, dysphagiaDeathSpotoItaly09/30/20Case report6/8155FemaleMSSANoneTriple negative, BRCA1-related, right breast cancer with multiple bone metastasis, type 2 diabetes mellitusDeathValgaSpain06/11/20Case report6/8168MaleMSSANoneHypertension, type 2 diabetes mellitus, congestive heart failure, sleep apnea, ischemic heart disease, chronic kidney diseaseDischargePatients were colonized with these bacterial phyla, but no distinction between colonization versus infection was reported\n\nStudy and patient characteristics\nPatients were colonized with these bacterial phyla, but no distinction between colonization versus infection was reported", "Figure 2 represents the quality assessment scores produced by the Joanna Briggs Institute’s critical appraisal tools. Scores ranged from 2 to 8 for case reports (out of 8 points total) (n = 22), 6–9 for case series (out of 10 points total) (n = 2), and 6–8 for cohort studies (out of 11 points total) (n = 4). The mean quality assessment score for these publications compared within their respective categories was 6.8 for case reports, 7.5 for case series, and 7.3 for cohort studies. In terms of most common study design limitations, the metric of patient post-intervention clinical conditions was least clearly described for case reports, neither of the case series consecutively included participants, and strategies to address incomplete follow-up were only reported for one of the four cohort studies.Fig. 2Quality assessment scores for included\npublications reported as “yes” or “no” for achieving quality metrics per the\nJoanna Briggs Institute’s critical appraisal tools\nQuality assessment scores for included\npublications reported as “yes” or “no” for achieving quality metrics per the\nJoanna Briggs Institute’s critical appraisal tools", "For the 115 total patients included in our review that were co-infected with COVID-19 and S. aureus, their demographic (Table 1) and clinical data (Table 2) were described with varying completeness. Staphylococcal species and patient outcomes are reported in both tables to enable direct comparison with patient demographics and clinical course. Across our patient sample, the mean patient age was 54.8 years (SD = 21.6), 65.3% (n = 75) were male, 32.1% (n = 37) were female, and 3 patients (2.6%) did not have their gender specified in the study. Patients presented with a diversity of comorbidities with diabetes mellitus (33.9%, n = 39), hypertension (32.2%, n = 37), and cardiovascular disease (28.7%, n = 33) reported as the most common. Five patients presented with no comorbidities and four studies reported no information on patient medical history related to comorbidities. 
The most common presenting symptoms reported by patients at hospital admission included cough (13.9%, n = 16), fever (13.9%, n = 16), and dyspnea (13.0%, n = 15).\n\nTable 2Clinical characteristicsFirst AuthorNTypeDiagnosisCo-infection onsetPresentationDx findingsTreatments and InterventionsComplicationsLength of stayICUOutcomeAdachi1MSSASputum sample, pneumoniaUnclearaFever, diarrhea, dyspneaBilateral opacities on chest x-ray (CXR), ground glass opacities & lower lobe consolidation on chest computed tomography (CT)Antibiotics, corticosteroids, lopinavir/ritonavir, morphineARDS16YesDeathBagnato1MSSABlood culture, bacteremiaHospital-onsetFever, cough, diarrhea, myalgiaUnremarkable head CT, normal creatine kinaseAntibiotics, corticosteroids, intubation and ventilation, antifungals, lopinavir/ritonavir, hydroxychloroquine, tocilizumab, neuromuscular blocking agents, olanzapinePsychomotor agitation and temporospatial disorientation, myopathy140YesDischargeChandran1MSSABlood culture and tracheal aspirate, pneumonia (ventilator-associated) and bacteremiaHospital-onsetDyspnea (positive COVID test)Bilateral interstitial infiltrates (CXR) and ground glass opacities (CT)Antibiotics, intubation and ventilationBilateral cavitating lung lesions, septic shock15YesDeathChen1MSSA and MRSASputum sample, pneumoniaHospital-onsetAsymptomatic (positive COVID test)Patchy consolidation and ground glass opacities in right upper lobe on CXR (day 29)Antibiotics, corticosteroids, lopinavir/ritonavir, Abidol combined with IFN inhalant, Thymalfasin, ribavirin, loratadinePneumonia51NoDischargeChoudhury1MSSABlood culture, endocarditis and bacteremiaUnclearbAltered mental status, low back pain, urinary incontinence, right foot ulcersCystitis and pyelonephritis on CT, epidural abscess (L4/5) on magnetic resonance imaging (MRI)Antibiotics, oral rifampin, hydroxychloroquineEndocarditis, aortic root abscessNR (not reported)NRHospiceCusumano42MSSA (n = 23) and MRSA (n = 19)Blood culture, bacteremia (n = 42), pneumonia (n = 8), vascular (n = 3), osteomyelitis (n = 1), skin (n = 1)Hospital-onset (n = 28), community-onset (n = 14)Not reported (NR)Abnormal CXR (n = 36), vegetation on transthoracic echo (n = 1)Antibiotics (n = 42), intubation and ventilation (n = 31), central venous catheter (n = 19)NRNRNRDeath at 30 days (n = 28)De Pascale40MSSA (n = 14), MRSA (n = 26)Tracheal aspirate and blood culture, pneumonia (ventilator-associated) (n = 40) and bacteremia (n = 19)Late hospital-onset (n = 35), early hospital-onset (n = 5)NRNRAntibiotics (n = 40), intubation and ventilation (n = 40)Septic shock (n = 22), acute kidney injury (n = 4)11 (mean)Yes (n = 40)Death (n = 26)Duployez1MSSA (PVL-secreting)Pleural drainage sample, pneumoniaUnclearcFever, cough, bloody sputumConsolidation of left upper lobe, left pleural effusion, right ground glass opacities, bilateral cavitary lesions on CTAntibiotics, intubation and ventilation, extracorporeal membrane oxygenation (ECMO), anticoagulation, upper left lobectomyNecrotizing pneumonia, deterioration of respiratory, renal, and liver functions17YesDeathEdrada1MSSANasal and throat swab with PCRCommunity-onset/carrierDry cough, sore throatUnremarkable chest CTOseltamivirNone19NoDischargeElSeirafi1MRSABlood culture, bacteremiaHospital-onsetFever, dry cough, dyspneaBilateral pulmonary infiltrates and ARDS on CXRAntibiotics, IFN, ribavirin, plasma therapy, tocilizumab injectionsSeptic shock with multi-organ dysfunction16YesDeathFilocamo1MSSABlood culture, bacteremiaHospital-onsetFever, 
dyspneaBilateral ground glass opacities on chest CTAntibiotics, intubation and ventilation, lopinavir/ritonavir, hydroxychloroquine, anakinraProgressive cholestatic liver injury29YesDischargeHamzavi1MSSABlood culture, bacteremiaUncleardFever, cough, dyspnea, lethargyLeft pleural effusion on CXRAntibiotics, intubation and ventilationMulti-organ dysfunctionNRYesDeathHoshiyama1MSSAThroat swab and sputum sampleUncleareCoughNormal labsNRNRNRNoDischarge“”1MSSAThroat swab and sputum sampleUncleareCoughNormal labsNRNRNRNoDischargeHussain1MSSABlood culture, bacteremiaCommunity-onsetFever, cough, dyspnea, malaiseBilateral reticular enhancement and heavily calcified aortic valve with mass effect on left atrial wall on chest CTAntibiotics, intubation and ventilation, esophagogastroduodenoscopy, pantoprazole, amiodarone, heparinBleeding Dieulafoy’s lesion, fast atrial fibrillation, acute kidney injury, multi-organ failure, intracerebral hematoma18YesDeathLevesque1MSSASputum sample, pneumonia (ventilator-associated)Hospital-onsetFever, dry cough, dyspneaSmall intraventricular hemorrhage on head CT (day 39)Antibiotics, intubation and ventilation, corticosteroids, propofol, fentanyl, neuromuscular blocking agents, heparin, continuous platelet infusion, blood transfusions, IVIG, endobronchial clot removals, romiplostim, vincristineARDS, ICU-acquired neuromyopathy, acute kidney injury, thrombocytopenia, intraventricular hemorrhage, ventilator-associated pneumoniaAt least 39YesHospitalMirza1MRSASputum sampleCarrier (chronic)Chest pain, dyspneaBilateral upper lobe bronchial wall thickening and bronchiectasis with nodular and interstitial opacities on chest CTAntibiotics, remdesivirMeropenem-resistant pseudomonas6NoDischargePatek1MSSAWound culture, cellulitisCommunity-onsetFever, erythema and erosions of right thumb and fourth digit, somnolence, decreased feedingElevated LFTs, bilateral perihilar streaking on CXR, neutropeniaAntibiotics, acyclovir, nasal cannula O2Hypoxic respiratory failure7YesDischargePosteraro1MRSABlood culture, bacteremiaHospital-onsetFever, cough, dyspneaCXR and chest CT consistent with pneumoniaAntibiotics, antifungals, hydroxychloroquine, darunavir/ritonavirHypoxia, left leg re-amputation, septic shock53YesDeathRajdev1MSSASputum sample, pneumonia (ventilator-associated)Community- and hospital-onsetfDyspneaBilateral consolidations on CXR, bilateral ground glass opacities and pneumomediastinum with subcutaneous emphysema on chest CTIntubation and ventilation, epoprostenol, hydromorphone, neuromuscular blocking agents, ECMOAnemia, epistaxis, oropharyngeal bleeding, ARDS47YesDischargeRajdev1MSSATracheal aspirate, pneumoniaHospital-onsetFever, cough, dyspnea, myalgiasDiffuse bilateral pulmonary opacities on CXRAntibiotics, intubation and ventilation, corticosteroids, tacrolimus, mycophenolate, remdesivirHypoxic respiratory failure, Guillan Barré syndrome23NRDischargeRamos-Martinez1MSSABlood culture, bacteremia (central venous catheter-associated)Hospital-onsetFever, meningitis, right infrapopliteal deep vein thrombosisMild mitral insufficiency on transthoracic echoAntibiotics, intubation and ventilation, central venous catheter, corticosteroids, tocilizumabNative valve endocarditis, progressive sepsisAt least 20YesDeathRandall1MRSABlood culture, bacteremiaHospital-onsetFever, cough, dyspneaNRIntubation and ventilation, corticosteroids, central venous catheterRespiratory distress3NRDeath“”1MRSABlood culture, bacteremiaHospital-onsetHypoxia (positive COVID test)NRCorticosteroids, remdesivir, 
central venous catheterSeptic shock14NRDeath“”1MRSABlood culture, bacteremiaHospital-onsetHypoxia (positive COVID test)NRCorticosteroidsCardiac arrest10YesDeathRegazzoni1MSSANasal swab and blood culture, bacteremiaHospital-onsetBilateral pneumonia (positive COVID test)Ischemic areas with hemorrhagic transformation on head CT and MRI, large vegetations on aortic valve with regurgitation on transesophageal echoAntibiotics, corticosteroidsSevere systemic inflammatory responseAt least 10NRHospitalSharifipour1MSSATracheal aspirate, pneumonia (ventilator-associated)Hospital-onsetCough, dyspnea, sore throatNRAntibiotics, intubation and ventilationVentilator-associated pneumonia13YesDischarge“”1MRSATracheal aspirate, pneumonia (ventilator-associated)Hospital-onsetCough, dyspnea, sore throatNRAntibiotics, intubation and ventilationVentilator-associated pneumonia9YesDeathSon4MRSASputum sample, pneumonia (n = 4)Hospital-onset (n = 4)Pneumonia (positive COVID test)NRAntibiotics (n = 4), corticosteroids (n = 4)NR42 (mean)YesDeath (n = 3)Spannella1MSSABronchoalveolar lavage, pneumoniaCommunity-onsetFever, cough, emesisBilateral ground glass opacities and multiple areas of consolidation on CXRAntibiotics, metoprolol, amiodarone, continuous positive-pressure airwayAtrial fibrillation, respiratory failure, altered mental status, tachycardia, severe hypoxemia27YesDeathSpoto1MSSATracheal aspirate, pneumoniaUncleargFever, dyspnea, respiratory distress following chemoimmunotherapyBilateral ground glass opacities and consolidation in the middle/upper lobes on chest CTAntibiotics, intubation and ventilation, lopinavir-ritonavir, hydroxychloroquineARDS5NRDeathValga1MSSATracheal aspirate, pneumoniaHospital-onsetFever, dry coughNRAntibiotics, intubation and ventilation, corticosteroids, hydroxychloroquine, lopinavir/ritonavir, IFN beta, heparinARDS, multi-organ failure47YesDischargeaPositive sputum culture on day 10bPatient recently treated for S. aureus prior to admission, but setting is unclearcPleural fluid tested on day 4dTimeline of blood culture uncleareTimeline of sputum testing unclearfPositive sputum on admission, subsequent ventilator-associated infectiongPatient was receiving routine treatments in a healthcare-setting\n\nClinical characteristics\naPositive sputum culture on day 10\nbPatient recently treated for S. aureus prior to admission, but setting is unclear\ncPleural fluid tested on day 4\ndTimeline of blood culture unclear\neTimeline of sputum testing unclear\nfPositive sputum on admission, subsequent ventilator-associated infection\ngPatient was receiving routine treatments in a healthcare-setting", "In terms of specific staphylococcal species co-infection, 51.3% (n = 59) of patients were infected with methicillin-sensitive staphylococcus aureus (MSSA) and 49.6% (n = 57) were infected with methicillin-resistant staphylococcus aureus (MRSA), with a single patient co-infected with both MRSA and MSSA. One patient co-infected with MSSA had a fatal Panton-Valentine Leukocidin toxin-producing strain of MSSA (PVL-MSSA). In addition to COVID-19 and S. aureus co-infection, 26.1% (n = 30) of patients were co-infected with one or more separate pathogens such as Klebsiella pneumoniae (n = 6), Candida spp. (n = 6), Enterococcus spp. (n = 5), Haemophilus influenzae (n = 2), Proteus mirabilis (n = 2), Escherichia coli (n = 2). Comprehensive patient co-infection data are reported in Table 1.", "Of all 115 reported cases of co-infection with COVID-19 and S. aureus, diagnosis of S. 
aureus infection was most frequently established by blood culture in our patient sample (64.3%, n = 74), with S. aureus infections manifesting predominantly in patients as bacteremia (64.3%, n = 74) and pneumonia (55.7%, n = 64), accompanied by several additional endocarditis/vasculitis (3.5%, n = 4), cellulitis (1.7%, n = 2), and osteomyelitis (0.9%, n = 1) cases. Additionally, two patients that tested positive for S. aureus with no clear infection source were suspected to be chronic carriers of the bacterial pathogen. From this variety of infection presentations, the majority (76.5%, n = 88) experienced hospital-onset S. aureus co-infection following hospitalization for an initial infection with COVID-19, and 19 patients (16.5%) presented with S. aureus infection at the time of admission that was determined to be community-onset in etiology. Aside from a standard course of antibiotics, patients received a diversity of adjuvant treatments during their hospital admission, with the most common interventions including intubation and mechanical ventilation (74.8%, n = 86), a central venous catheter (19.1%, n = 22), and corticosteroids (13.0%, n = 15). Table 2 describes the clinical course following hospital admission for each patient in comprehensive detail.", "During the hospital course of the 115 co-infected patients in our review, the most common complications were sepsis or systemic inflammatory response syndrome (23.5 %, n = 27), acute kidney injury (5.2%, n = 6), acute respiratory distress syndrome (4.3%, n = 5), pneumonia (4.3%, n = 5), and multi-organ dysfunction or failure (4.3%, n = 5). Transfer to an intensive care unit during admission was clearly reported for 53.9% (n = 62) of patients, unnecessary for 4.3% (n = 5), and not reported for the remaining 41.8% (n = 48). Patients were admitted for a mean length of 26.2 days (SD = 26.7) to any type of inpatient hospital unit, with the length of hospital stay not reported in five cases. Upon analysis of the final outcomes reported for the hospital course of our co-infected COVID-19 and S. aureus patient sample, 71 (61.7%) patients died, 41 (35.7%) were discharged, two remained hospitalized and in stable condition on study conclusion, and one patient was placed in hospice care. Table 2 further details the specific complications presenting in each patient’s hospital trajectory and Table 3 reports the final pooled frequencies of patient co-infection characteristics and outcomes.\n\nTable 3Pooled frequencies of patient co-infection characteristics and outcomes (n = 115)Total (%)\nGender\n Male75 (65.3 ) Female37 (32.1) Unspecified3 (2.6)\nStaphylococcal Species\n MSSA59 (51.3) MRSA57 (49.6)\nCo-infection\n\nKlebsiella pneumoniae\n6 (5.2) Candida spp.6 (5.2) Enterococcus spp.5 (4.3)\nHemophilus influenzae\n2 (1.7)\nEscherichia coli\n2 (1.7)\nProteus mirabilis\n2 (1.7)\nAcinetobacter baumannii\n2 (1.7) Bacillus spp.1 (0.9)\nStaphylococcus epidermidis\n1 (0.9) Micrococcus spp.1 (0.9) Pseudomonas spp.1 (0.9)\nMorganella morganii\n1 (0.9)\nCitrobacter werkmanii\n1 (0.9)\nS. maltophilia\n1 (0.9) Hepatitis C1 (0.9) Herpes simplex virus1 (0.9) Group B Streptococcus1 (0.9) None83 (72.2)\nS. Aureus Diagnostic Test\n Blood culture74 (64.3) Tracheal aspirate46 (40.0) Sputum sample11 (9.5) Nasal swab2 (1.7) Lower respiratory tract sample2 (1.7) Chronic carrier2 (1.7) Wound culture1 (0.9)S. 
aureus Diagnosis\n Bacteremia74 (64.3) Pneumonia64 (55.7) Ventilator-associated44 (38.3) Endocarditis/vasculitis4 (3.5) Cellulitis2 (1.7) Chronic carrier2 (1.7) Osteomyelitis1 (0.9) Not reported2 (1.7)S. aureus\nInfection Onset Hospital88 (76.5) Community19 (16.5) Unclear7 (6.1)\nComplications\n Sepsis/Systemic Inflammatory Response Syndrome27 (23.5) Acute kidney injury6 (5.2) Acute respiratory distress syndrome5 (4.3) Pneumonia5 (4.3) Multi-organ dysfunction/failure5 (4.3) Bleeding/coagulopathy5 (4.3) Hypoxic respiratory failure3 (2.6) Myopathy/neuropathy3 (2.6) Abscess formation2 (1.7) Confusion and altered mental status2 (1.7) Atrial fibrillation2 (1.7) Endocarditis2 (1.7) Anemia1 (0.9) Cardiac arrest1 (0.9) Thrombocytopenia1 (0.9) Re-amputation1 (0.9) Cholestatic liver injury1 (0.9) Not reported3 (2.6)\nICU\n Yes62 (53.9) No5 (4.3) Not reported48 (41.8)\nOutcome\n Death71 (61.7) Discharge41 (35.7) Hospital2 (1.7) Hospice1 (0.9)\n\nPooled frequencies of patient co-infection characteristics and outcomes (n = 115)", "\nAdditional file 1: Table S1. Search strategies, conducted between July 3, 2021, and July 16, 2021. Total results = 1922. Table S2. Joanna Briggs Quality Assessment for case reports included in the review. Table S3. Joanna Briggs Quality Assessment for case-series included in the review. Table S4. Joanna Briggs Quality Assessment for cohort studies included in the review. Table S5. Excluded articles after full-text analysis, with reason (n = 64)." ]
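For readers who wish to reproduce pooled percentages of this kind, the descriptive synthesis described in the Methods (frequency counts charted in Microsoft Excel) can be mirrored in a few lines of code. The Python sketch below is illustrative only: the record structure and example values are assumptions introduced here, not the data abstracted from the 28 included studies.

```python
from collections import Counter

# Hypothetical patient-level extraction records (illustrative only; not the
# actual data abstracted from the 28 included studies).
patients = [
    {"species": "MSSA", "outcome": "Death", "icu": "Yes"},
    {"species": "MRSA", "outcome": "Death", "icu": "Not reported"},
    {"species": "MSSA", "outcome": "Discharge", "icu": "Yes"},
    {"species": "MRSA", "outcome": "Discharge", "icu": "No"},
]

def pooled_frequencies(records, field):
    """Count each value of `field` and express it as a percentage of all
    records, mirroring the descriptive frequency counts behind Table 3."""
    counts = Counter(r[field] for r in records)
    n = len(records)
    return {value: (count, round(100 * count / n, 1)) for value, count in counts.items()}

for field in ("species", "outcome", "icu"):
    print(field, pooled_frequencies(patients, field))
```

Applied to the full 115-patient extraction table, the same counting logic yields the proportions reported in Table 3 (for example, 71/115 = 61.7% deaths).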
[ "Background", "Methods", "Search strategy and study selection", "Data extraction", "Data synthesis and analysis", "Quality assessment", "Results", "Publication types and geography", "Publication quality", "Patient demographics", "Infection characteristics", "Diagnoses and treatments", "Complications and outcomes", "Discussion", "Conclusion", "Supplementary Information", "" ]
[ "Upon passage of the March 11th anniversary of the official declaration of the coronavirus disease 2019 (COVID-19) pandemic [1], the causative severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pathogen has infected over 181 million individuals and resulted in more than 3.9 million deaths worldwide as of July 1, 2021 [2]. In addition to rapid spread through high transmission rates [3], infection with COVID-19 can result in severe complications such as acute respiratory distress syndrome (ARDS), thromboembolic events, septic shock, and multi-organ failure [4]. In response to this novel virus, the clinical environment has evolved to accommodate the complexities of healthcare delivery in the pandemic environment [5]. Accordingly, a particularly challenging scenario for clinicians is the management of patients with common infections that may be complicated by subsequent COVID-19 co-infection, or conversely co-infected with a pathogen following primary infection with COVID-19 [6]. Bacterial co-infection in COVID-19 patients may exacerbate the immunocompromised state caused by COVID-19, further worsening clinical prognosis [7].\nImplicated as a leading bacterial pathogen in both community- and healthcare-associated infections, Staphylococcus aureus (S. aureus) is commonly feared in the hospital environment for its risk of deadly outcomes such as endocarditis, bacteremia, sepsis, and death [8]. In past viral pandemics, S. aureus has been the principal cause of secondary bacterial infections, significantly increasing patient mortality rates [9]. For viral influenza infection specifically, S. aureus co-infection and bacteremia has been associated with mortality rates of almost 50%, in contrast to the 1.4% morality rates observed in patients infected with influenza alone [10]. Given the parallels between the clinical presentation, course, and outcomes of influenza and COVID-19 viral infection [11], mortality rates in COVID-19 patients co-infected with S. aureus may reflect those observed in influenza patients. However, while recent studies have focused on the incidence and prevalence of COVID-19 and S. aureus co-infection, the clinical outcomes of patients co-infected with these two specific pathogens remains unclear given that existing studies consolidate S. aureus patient outcomes with other bacterial pathogens [12–14].\nGiven that the literature informing our knowledge of COVID-19 is a dynamic and evolving entity, the purpose of this scoping review is to evaluate the current body of evidence reporting the clinical outcomes of patients co-infected with COVID-19 and S. aureus. To date, there has been no review focusing specifically on the clinical treatment courses and subsequent outcomes of COVID-19 and S. aureus co-infection. In response to the urgency of the pandemic state and high rates of COVID-19 hospital admissions, we aim to identify important areas for further research and explore potential implications for clinical practice.", "Search strategy and study selection To provide a scoping review of initial insight into the breadth of developing data on COVID-19 and S. aureus co-infection, we followed the five-stage methodology of scoping review practice presented by Levac, Colquhoun, and O’Brien [15]. 
In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for Scoping Reviews [16], we conducted electronic searches in PubMed, Scopus, Ovid MEDLINE, CINAHL, ScienceDirect, medRxiv (preprint), and the WHO COVID-19 database between July 3, 2021 and July 16, 2021. Search terms were combined with the use of Boolean operators and included subject headings or key terms specific to COVID-19 (i.e. severe acute respiratory syndrome coronavirus 2 OR SARS-CoV2 OR 2019 novel coronavirus OR 2019-nCoV OR coronavirus disease 2019 virus OR COVID-19 OR Wuhan coronavirus) and Staphylococcus aureus (i.e. methicillin-resistant staphylococcus aureus OR MRSA OR methicillin-susceptible Staphylococcus aureus OR MSSA OR staphylococcal infections). A comprehensive list of our scoping terms and search strategies is included in the Appendix (Additional file 1: Table S1). Two independent, experienced reviewers (JA and KV) screened the titles and abstracts of eligible studies and performed full-text review on qualified selections. For this review, we broadly considered articles of any design that included patients infected with both COVID-19 and S. aureus, provided a description of the timeline and ultimate clinical outcomes for these patients (i.e. death or discharge from hospital) at study completion, and were available in English. Studies were excluded if they did not report final outcomes since our scoping review purpose was to evaluate the quality of existing literature that described the clinical course and mortality rate of patients co-infected with these pathogens. We excluded duplicate records and disagreements regarding study inclusion were resolved by consensus or feedback from the senior author.\nData extraction For the final articles selected, we completed data extraction in duplicate, and any discrepancies were resolved through discussion or consult with the senior author. While several studies also included reports on patients infected with COVID-19 alone or co-infected with an alternative pathogen, we extracted data solely for patients with COVID-19 and S. aureus co-infection. Our data extraction items included study methodology, author and study location, type of staphylococcal species, onset of S. aureus infection, S. aureus culture site and infection source, patient sample size, age, gender, presentation, comorbidities or additional co-infections, prior history of S. aureus infection, diagnostic findings, hospital treatments and interventions, complications, total length of hospital admission, intensive care unit transfer, and final patient mortality outcomes upon study completion.\nData synthesis and analysis Microsoft Excel 2016 (Redmond, WA, USA) was used to collect and chart data extracted from the studies that met the inclusion criteria. Data was synthesized and analyzed descriptively, with frequency counts performed for individual and grouped study metrics. The purpose of synthesizing the extracted information through this method was to create an overview of existing knowledge and identify gaps in the current literature on COVID-19 and S. aureus co-infection.\nQuality assessment Given that the majority of existing literature reporting outcomes data for COVID-19 and S. aureus co-infection were case reports, we utilized the Joanna Briggs Institute’s critical appraisal tools [17] to provide a metric for our scoping assessment of the methodological quality of the included studies. Application of these tools enabled examination of study quality in the areas of inclusion criteria, sample size, description of study participants, setting, and the appropriateness of the statistical analysis. As in previous reviews [18, 19], the tools were modified to produce a numeric score with case reports assessed based on an eight-item scale, case series on a ten-item scale, and cohort studies on an eleven-item scale. Studies were assessed with the methodological quality tool specific to their design (i.e. case report, case series, cohort) by two independent reviewers (JA and KV) and discrepancies were resolved through discussion.
While debate exists regarding the minimal number of patients required for study qualification as a “case series” [20], we considered studies reporting individual patient data as “case reports” and those reporting aggregate patient data as “case series.” Our complete quality assessment, including tools and scores, is available in the Appendix (Additional file 1: Tables S2–S4).", "Our search strategy produced a total of 1922 potential publications with patients co-infected by COVID-19 and S. aureus. For transparent and reproducible methods, the PRISMA 2020 flow diagram for new systematic reviews was utilized to display the search results of our scoping review (Fig. 1). Following deduplication (n = 597) and a comprehensive screen of study titles and abstracts for irrelevant material (n = 1233), we reviewed 92 full texts for inclusion eligibility. Of these texts, 64 did not include patient outcomes for COVID-19 and S. aureus co-infected patients: 57 were incidence or prevalence studies with no patient-specific outcomes data, two included patients with COVID-19 and a history of S. aureus infection but no current COVID-19 and S. aureus co-infection, two were genome analysis studies with no patient data, and three were unavailable in English (Additional file 1: Table S5).Fig. 1Process of searching and selecting articles\nincluded in the scoping review based on the PRISMA 2020 flow diagram\nProcess of searching and selecting articles\nincluded in the scoping review based on the PRISMA 2020 flow diagram\nPublication types and geography Following full-text review, 28 studies qualified for inclusion in our review, resulting in a total of 115 patients. Of these 28 included studies, 22 were case reports (describing single patients with individual data), two were case series (describing 7–42 patients with aggregate data), and four were cohort studies (describing 4–40 patients with aggregate data). Countries of study publication included the United States (n = 7) [7, 9, 21–25], Italy (n = 7) [26–32], Japan (n = 2) [33, 34], Iran (n = 2) [35, 36], the United Kingdom (n = 2) [37, 38], Spain (n = 2) [39, 40], Bahrain (n = 1) [41], China (n = 1) [42], France (n = 1) [43], the Philippines (n = 1) [44], Korea (n = 1) [45], and Canada (n = 1) [46], with publication dates ranging from April 15, 2020 to June 16, 2021. Table 1 describes the characteristics of these included studies and available information on their respective patient demographics in detail.\n\nTable 1Study and patient characteristicsFirst AuthorCountryPublication dateStudy designQuality AssessmentNAgeMale/FemaleTypeCo-infectionComorbiditiesOutcomeAdachiJapan05/15/20Case report8/8184FemaleMSSA\nKlebsiella pneumoniae\nNoneDeathBagnatoItaly08/05/20Case report8/8162FemaleMSSA\nCandida tropicalis\nHypertensionDischargeChandranUnited Kingdom05/20/21Case report7/81NRFemaleMSSANoneType 2 diabetes mellitusDeathChenChina11/01/20Case report6/8129MaleMSSA and MRSA\nHaemophilus influenzae\nNR (not reported)DischargeChoudhuryUSA07/04/20Case report7/8173MaleMSSANoneType 2 diabetes mellitus, chronic foot osteomyelitis, aortic stenosis, prosthetic aortic valve, atrial fibrillation, prior S. aureus infectionHospiceCusumanoUSA11/12/20Case series9/104265.6 (mean)Males (n = 21), Females (n = 21)MSSA (n = 23) and MRSA (n = 19)Enterococcus faecalis (n = 3), Candida spp. (n = 2), Klebsiella pneumoniae (n = 2), Escheria coli (n = 1), Bacillus spp. (n = 1), Micrococcus spp. 
(n = 1), Staphylococcus epidermidis (n = 1), Proteus mirabilis (n = 1)Hypertension (n = 29), diabetes mellitus (n = 21), cardiovascular disease (n = 19), lung disease (n = 7), chronic kidney disease (n = 6), malignancy (n = 5), end-stage renal disease (n = 4), organ transplant (n = 3), liver disease (n = 1)Death at 30 days (n = 28)De PascaleItaly05/31/21Prospective cohort8/114064 (mean)Males (n = 33), Females (n = 7)MSSA (n = 14), MRSA (n = 26)Bacteroidetes (n = 18), Proteobacteria (n = 7), Actinobacteria (n = 3), Tenericutes (n = 2), Fusobacteria (n = 1)1Diabetes mellitus (n = 8), cardiovascular disease (n = 7), lung disease (n = 7), immunosuppression (n = 4), neoplasm (n = 4), chronic kidney disease (n = 3)Death (n = 26)DuployezFrance04/16/20Case report8/8135MaleMSSA (PVL-secreting)NoneNoneDeathEdradaPhilippines05/07/20Case report6/8139FemaleMSSAInfluenza B, Klebsiella pneumoniaeNoneDischargeElSeirafiBahrain06/23/20Case report6/8159MaleMRSANoneNRDeathFilocamoItaly05/11/20Case report8/8150MaleMSSANoneNoneDischargeHamzaviIran08/01/20Case report6/8114MaleMSSANoneCerebral palsyDeathHoshiyamaJapan11/02/20Case series6/10147MaleMSSANonePrevious cerebral hemorrhageDischarge“”“”“”“”“”139MaleMSSA\nGroup B Streptococcus\nHypertensionDischargeHussainUnited Kingdom05/22/20Case report8/8169FemaleMSSANoneProsthetic aortic valve with reduced ejection fractionDeathLevesqueCanada07/01/20Case report6/8153FemaleMSSANoneHypertension, diabetes mellitus, dyslipidemiaHospitalMirzaUSA11/16/20Case report6/8129MaleMRSAMulti-drug resistant PseudomonasCystic fibrosis with moderate obstructive lung disease, exocrine pancreatic insufficiency, gastroparesis, chronic S. aureusDischargePatekUSA04/15/20Case report7/810MaleMSSAHerpes simplex virusMaternal history of oral herpetic lesionsDischargePosteraroItaly09/06/20Case report8/8179MaleMRSAMorganella morganii, Candida glabrata, Acinetobacter baumannii, Proteus mirabilis, Klebsiella pneumoniae, Escherichia coliType 2 diabetes mellitus, ischemic heart disease, peripheral artery disease, left leg amputationDeathRajdevUSA09/10/20Case report7/8132MaleMSSA\nKlebsiella pneumoniae\nType 2 diabetes mellitusDischargeRajdevUSA09/28/20Case report7/8136MaleMSSA\nHaemophilus influenzae\nHypertension, two renal transplants for renal dysplasiaDischargeRamos-MartinezSpain07/30/20Prospective cohort6/11160NRMSSANoneType 2 diabetes mellitus, hypercholesterolemia, wrist arthritis, sternoclavicular arthritisDeathRandallUSA12/01/20Case report7/8160MaleMRSANoneChronic obstructive lung disease, coronary artery disease, hypothyroidismDeath“”“”“”“”“”183MaleMRSANoneHypertension, atrial fibrillationDeath“”“”“”“”“”160MaleMRSAHepatitis CHypertension, type 2 diabetes mellitus, cirrhosisDeathRegazzoniItaly08/07/20Case report2/8170MaleMSSANoneNRHospitalSharifipourIran09/01/20Prospective cohort7/111NRNRMSSANoneNoneDischarge“”“”“”“”1NRNRMRSANoneType 2 diabetes mellitusDeathSonKorea06/16/21Retrospective cohort8/11479 (mean)Male (n = 3), Female (n = 1)MRSAC. albicans (n = 2), Vancomycin-resistant enterococci (n = 2), S. 
maltophilia (n = 1) carbapenem-resistant Acinetobacter baumannii (n = 1),NRDeath (n = 3)SpannellaItaly06/23/20Case report8/8195FemaleMSSA\nCitrobacter werkmanii\nHypertension, chronic heart failure, paroxysmal atrial fibrillation, dyslipidemia, chronic kidney disease, vascular dementia, sacral pressure ulcers, dysphagiaDeathSpotoItaly09/30/20Case report6/8155FemaleMSSANoneTriple negative, BRCA1-related, right breast cancer with multiple bone metastasis, type 2 diabetes mellitusDeathValgaSpain06/11/20Case report6/8168MaleMSSANoneHypertension, type 2 diabetes mellitus, congestive heart failure, sleep apnea, ischemic heart disease, chronic kidney diseaseDischargePatients were colonized with these bacterial phyla, but no distinction between colonization versus infection was reported\n\nStudy and patient characteristics\nPublication quality Figure 2 represents the quality assessment scores produced by the Joanna Briggs Institute’s critical appraisal tools. Scores ranged from 2 to 8 for case reports (out of 8 points total) (n = 22), 6–9 for case series (out of 10 points total) (n = 2), and 6–8 for cohort studies (out of 11 points total) (n = 4). The mean quality assessment score for these publications compared within their respective categories was 6.8 for case reports, 7.5 for case series, and 7.3 for cohort studies. In terms of most common study design limitations, the metric of patient post-intervention clinical conditions was least clearly described for case reports, neither of the case series consecutively included participants, and strategies to address incomplete follow-up were only reported for one of the four cohort studies.Fig. 2Quality assessment scores for included\npublications reported as “yes” or “no” for achieving quality metrics per the\nJoanna Briggs Institute’s critical appraisal tools\nPatient demographics For the 115 total patients included in our review that were co-infected with COVID-19 and S. aureus, their demographic (Table 1) and clinical data (Table 2) were described with varying completeness. 
Staphylococcal species and patient outcomes are reported in both tables to enable direct comparison with patient demographics and clinical course. Across our patient sample, the mean patient age was 54.8 years (SD = 21.6), 65.3% (n = 75) were male, 32.1% (n = 37) were female, and 3 patients (2.6%) did not have their gender specified in the study. Patients presented with a diversity of comorbidities with diabetes mellitus (33.9%, n = 39), hypertension (32.2%, n = 37), and cardiovascular disease (28.7%, n = 33) reported as the most common. Five patients presented with no comorbidities and four studies reported no information on patient medical history related to comorbidities. The most common presenting symptoms reported by patients at hospital admission included cough (13.9%, n = 16), fever (13.9%, n = 16), and dyspnea (13.0%, n = 15).\n\nTable 2Clinical characteristicsFirst AuthorNTypeDiagnosisCo-infection onsetPresentationDx findingsTreatments and InterventionsComplicationsLength of stayICUOutcomeAdachi1MSSASputum sample, pneumoniaUnclearaFever, diarrhea, dyspneaBilateral opacities on chest x-ray (CXR), ground glass opacities & lower lobe consolidation on chest computed tomography (CT)Antibiotics, corticosteroids, lopinavir/ritonavir, morphineARDS16YesDeathBagnato1MSSABlood culture, bacteremiaHospital-onsetFever, cough, diarrhea, myalgiaUnremarkable head CT, normal creatine kinaseAntibiotics, corticosteroids, intubation and ventilation, antifungals, lopinavir/ritonavir, hydroxychloroquine, tocilizumab, neuromuscular blocking agents, olanzapinePsychomotor agitation and temporospatial disorientation, myopathy140YesDischargeChandran1MSSABlood culture and tracheal aspirate, pneumonia (ventilator-associated) and bacteremiaHospital-onsetDyspnea (positive COVID test)Bilateral interstitial infiltrates (CXR) and ground glass opacities (CT)Antibiotics, intubation and ventilationBilateral cavitating lung lesions, septic shock15YesDeathChen1MSSA and MRSASputum sample, pneumoniaHospital-onsetAsymptomatic (positive COVID test)Patchy consolidation and ground glass opacities in right upper lobe on CXR (day 29)Antibiotics, corticosteroids, lopinavir/ritonavir, Abidol combined with IFN inhalant, Thymalfasin, ribavirin, loratadinePneumonia51NoDischargeChoudhury1MSSABlood culture, endocarditis and bacteremiaUnclearbAltered mental status, low back pain, urinary incontinence, right foot ulcersCystitis and pyelonephritis on CT, epidural abscess (L4/5) on magnetic resonance imaging (MRI)Antibiotics, oral rifampin, hydroxychloroquineEndocarditis, aortic root abscessNR (not reported)NRHospiceCusumano42MSSA (n = 23) and MRSA (n = 19)Blood culture, bacteremia (n = 42), pneumonia (n = 8), vascular (n = 3), osteomyelitis (n = 1), skin (n = 1)Hospital-onset (n = 28), community-onset (n = 14)Not reported (NR)Abnormal CXR (n = 36), vegetation on transthoracic echo (n = 1)Antibiotics (n = 42), intubation and ventilation (n = 31), central venous catheter (n = 19)NRNRNRDeath at 30 days (n = 28)De Pascale40MSSA (n = 14), MRSA (n = 26)Tracheal aspirate and blood culture, pneumonia (ventilator-associated) (n = 40) and bacteremia (n = 19)Late hospital-onset (n = 35), early hospital-onset (n = 5)NRNRAntibiotics (n = 40), intubation and ventilation (n = 40)Septic shock (n = 22), acute kidney injury (n = 4)11 (mean)Yes (n = 40)Death (n = 26)Duployez1MSSA (PVL-secreting)Pleural drainage sample, pneumoniaUnclearcFever, cough, bloody sputumConsolidation of left upper lobe, left pleural effusion, right ground glass opacities, 
bilateral cavitary lesions on CTAntibiotics, intubation and ventilation, extracorporeal membrane oxygenation (ECMO), anticoagulation, upper left lobectomyNecrotizing pneumonia, deterioration of respiratory, renal, and liver functions17YesDeathEdrada1MSSANasal and throat swab with PCRCommunity-onset/carrierDry cough, sore throatUnremarkable chest CTOseltamivirNone19NoDischargeElSeirafi1MRSABlood culture, bacteremiaHospital-onsetFever, dry cough, dyspneaBilateral pulmonary infiltrates and ARDS on CXRAntibiotics, IFN, ribavirin, plasma therapy, tocilizumab injectionsSeptic shock with multi-organ dysfunction16YesDeathFilocamo1MSSABlood culture, bacteremiaHospital-onsetFever, dyspneaBilateral ground glass opacities on chest CTAntibiotics, intubation and ventilation, lopinavir/ritonavir, hydroxychloroquine, anakinraProgressive cholestatic liver injury29YesDischargeHamzavi1MSSABlood culture, bacteremiaUncleardFever, cough, dyspnea, lethargyLeft pleural effusion on CXRAntibiotics, intubation and ventilationMulti-organ dysfunctionNRYesDeathHoshiyama1MSSAThroat swab and sputum sampleUncleareCoughNormal labsNRNRNRNoDischarge“”1MSSAThroat swab and sputum sampleUncleareCoughNormal labsNRNRNRNoDischargeHussain1MSSABlood culture, bacteremiaCommunity-onsetFever, cough, dyspnea, malaiseBilateral reticular enhancement and heavily calcified aortic valve with mass effect on left atrial wall on chest CTAntibiotics, intubation and ventilation, esophagogastroduodenoscopy, pantoprazole, amiodarone, heparinBleeding Dieulafoy’s lesion, fast atrial fibrillation, acute kidney injury, multi-organ failure, intracerebral hematoma18YesDeathLevesque1MSSASputum sample, pneumonia (ventilator-associated)Hospital-onsetFever, dry cough, dyspneaSmall intraventricular hemorrhage on head CT (day 39)Antibiotics, intubation and ventilation, corticosteroids, propofol, fentanyl, neuromuscular blocking agents, heparin, continuous platelet infusion, blood transfusions, IVIG, endobronchial clot removals, romiplostim, vincristineARDS, ICU-acquired neuromyopathy, acute kidney injury, thrombocytopenia, intraventricular hemorrhage, ventilator-associated pneumoniaAt least 39YesHospitalMirza1MRSASputum sampleCarrier (chronic)Chest pain, dyspneaBilateral upper lobe bronchial wall thickening and bronchiectasis with nodular and interstitial opacities on chest CTAntibiotics, remdesivirMeropenem-resistant pseudomonas6NoDischargePatek1MSSAWound culture, cellulitisCommunity-onsetFever, erythema and erosions of right thumb and fourth digit, somnolence, decreased feedingElevated LFTs, bilateral perihilar streaking on CXR, neutropeniaAntibiotics, acyclovir, nasal cannula O2Hypoxic respiratory failure7YesDischargePosteraro1MRSABlood culture, bacteremiaHospital-onsetFever, cough, dyspneaCXR and chest CT consistent with pneumoniaAntibiotics, antifungals, hydroxychloroquine, darunavir/ritonavirHypoxia, left leg re-amputation, septic shock53YesDeathRajdev1MSSASputum sample, pneumonia (ventilator-associated)Community- and hospital-onsetfDyspneaBilateral consolidations on CXR, bilateral ground glass opacities and pneumomediastinum with subcutaneous emphysema on chest CTIntubation and ventilation, epoprostenol, hydromorphone, neuromuscular blocking agents, ECMOAnemia, epistaxis, oropharyngeal bleeding, ARDS47YesDischargeRajdev1MSSATracheal aspirate, pneumoniaHospital-onsetFever, cough, dyspnea, myalgiasDiffuse bilateral pulmonary opacities on CXRAntibiotics, intubation and ventilation, corticosteroids, tacrolimus, mycophenolate, remdesivirHypoxic respiratory 
failure, Guillan Barré syndrome23NRDischargeRamos-Martinez1MSSABlood culture, bacteremia (central venous catheter-associated)Hospital-onsetFever, meningitis, right infrapopliteal deep vein thrombosisMild mitral insufficiency on transthoracic echoAntibiotics, intubation and ventilation, central venous catheter, corticosteroids, tocilizumabNative valve endocarditis, progressive sepsisAt least 20YesDeathRandall1MRSABlood culture, bacteremiaHospital-onsetFever, cough, dyspneaNRIntubation and ventilation, corticosteroids, central venous catheterRespiratory distress3NRDeath“”1MRSABlood culture, bacteremiaHospital-onsetHypoxia (positive COVID test)NRCorticosteroids, remdesivir, central venous catheterSeptic shock14NRDeath“”1MRSABlood culture, bacteremiaHospital-onsetHypoxia (positive COVID test)NRCorticosteroidsCardiac arrest10YesDeathRegazzoni1MSSANasal swab and blood culture, bacteremiaHospital-onsetBilateral pneumonia (positive COVID test)Ischemic areas with hemorrhagic transformation on head CT and MRI, large vegetations on aortic valve with regurgitation on transesophageal echoAntibiotics, corticosteroidsSevere systemic inflammatory responseAt least 10NRHospitalSharifipour1MSSATracheal aspirate, pneumonia (ventilator-associated)Hospital-onsetCough, dyspnea, sore throatNRAntibiotics, intubation and ventilationVentilator-associated pneumonia13YesDischarge“”1MRSATracheal aspirate, pneumonia (ventilator-associated)Hospital-onsetCough, dyspnea, sore throatNRAntibiotics, intubation and ventilationVentilator-associated pneumonia9YesDeathSon4MRSASputum sample, pneumonia (n = 4)Hospital-onset (n = 4)Pneumonia (positive COVID test)NRAntibiotics (n = 4), corticosteroids (n = 4)NR42 (mean)YesDeath (n = 3)Spannella1MSSABronchoalveolar lavage, pneumoniaCommunity-onsetFever, cough, emesisBilateral ground glass opacities and multiple areas of consolidation on CXRAntibiotics, metoprolol, amiodarone, continuous positive-pressure airwayAtrial fibrillation, respiratory failure, altered mental status, tachycardia, severe hypoxemia27YesDeathSpoto1MSSATracheal aspirate, pneumoniaUncleargFever, dyspnea, respiratory distress following chemoimmunotherapyBilateral ground glass opacities and consolidation in the middle/upper lobes on chest CTAntibiotics, intubation and ventilation, lopinavir-ritonavir, hydroxychloroquineARDS5NRDeathValga1MSSATracheal aspirate, pneumoniaHospital-onsetFever, dry coughNRAntibiotics, intubation and ventilation, corticosteroids, hydroxychloroquine, lopinavir/ritonavir, IFN beta, heparinARDS, multi-organ failure47YesDischargeaPositive sputum culture on day 10bPatient recently treated for S. aureus prior to admission, but setting is unclearcPleural fluid tested on day 4dTimeline of blood culture uncleareTimeline of sputum testing unclearfPositive sputum on admission, subsequent ventilator-associated infectiongPatient was receiving routine treatments in a healthcare-setting\n\nClinical characteristics\naPositive sputum culture on day 10\nbPatient recently treated for S. aureus prior to admission, but setting is unclear\ncPleural fluid tested on day 4\ndTimeline of blood culture unclear\neTimeline of sputum testing unclear\nfPositive sputum on admission, subsequent ventilator-associated infection\ngPatient was receiving routine treatments in a healthcare-setting\nFor the 115 total patients included in our review that were co-infected with COVID-19 and S. aureus, their demographic (Table 1) and clinical data (Table 2) were described with varying completeness. 
Infection characteristics In terms of specific staphylococcal species co-infection, 51.3% (n = 59) of patients were infected with methicillin-sensitive staphylococcus aureus (MSSA) and 49.6% (n = 57) were infected with methicillin-resistant staphylococcus aureus (MRSA), with a single patient co-infected with both MRSA and MSSA. One patient co-infected with MSSA had a fatal Panton-Valentine Leukocidin toxin-producing strain of MSSA (PVL-MSSA). In addition to COVID-19 and S. aureus co-infection, 26.1% (n = 30) of patients were co-infected with one or more separate pathogens such as Klebsiella pneumoniae (n = 6), Candida spp. (n = 6), Enterococcus spp. (n = 5), Haemophilus influenzae (n = 2), Proteus mirabilis (n = 2), Escherichia coli (n = 2). Comprehensive patient co-infection data are reported in Table 1.\nDiagnoses and treatments Of all 115 reported cases of co-infection with COVID-19 and S. aureus, diagnosis of S. aureus infection was most frequently established by blood culture in our patient sample (64.3%, n = 74), with S. aureus infections manifesting predominantly in patients as bacteremia (64.3%, n = 74) and pneumonia (55.7%, n = 64), accompanied by several additional endocarditis/vasculitis (3.5%, n = 4), cellulitis (1.7%, n = 2), and osteomyelitis (0.9%, n = 1) cases. Additionally, two patients that tested positive for S. aureus with no clear infection source were suspected to be chronic carriers of the bacterial pathogen. From this variety of infection presentations, the majority (76.5%, n = 88) experienced hospital-onset S. aureus co-infection following hospitalization for an initial infection with COVID-19, and 19 patients (16.5%) presented with S. aureus infection at the time of admission that was determined to be community-onset in etiology. Aside from a standard course of antibiotics, patients received a diversity of adjuvant treatments during their hospital admission, with the most common interventions including intubation and mechanical ventilation (74.8%, n = 86), a central venous catheter (19.1%, n = 22), and corticosteroids (13.0%, n = 15). Table 2 describes the clinical course following hospital admission for each patient in comprehensive detail.\nOf all 115 reported cases of co-infection with COVID-19 and S. aureus, diagnosis of S. aureus infection was most frequently established by blood culture in our patient sample (64.3%, n = 74), with S. 
aureus infections manifesting predominantly in patients as bacteremia (64.3%, n = 74) and pneumonia (55.7%, n = 64), accompanied by several additional endocarditis/vasculitis (3.5%, n = 4), cellulitis (1.7%, n = 2), and osteomyelitis (0.9%, n = 1) cases. Additionally, two patients that tested positive for S. aureus with no clear infection source were suspected to be chronic carriers of the bacterial pathogen. From this variety of infection presentations, the majority (76.5%, n = 88) experienced hospital-onset S. aureus co-infection following hospitalization for an initial infection with COVID-19, and 19 patients (16.5%) presented with S. aureus infection at the time of admission that was determined to be community-onset in etiology. Aside from a standard course of antibiotics, patients received a diversity of adjuvant treatments during their hospital admission, with the most common interventions including intubation and mechanical ventilation (74.8%, n = 86), a central venous catheter (19.1%, n = 22), and corticosteroids (13.0%, n = 15). Table 2 describes the clinical course following hospital admission for each patient in comprehensive detail.", "Following full-text review, 28 studies qualified for inclusion in our review, resulting in a total of 115 patients. Of these 28 included studies, 22 were case reports (describing single patients with individual data), two were case series (describing 7–42 patients with aggregate data), and four were cohort studies (describing 4–40 patients with aggregate data). Countries of study publication included the United States (n = 7) [7, 9, 21–25], Italy (n = 7) [26–32], Japan (n = 2) [33, 34], Iran (n = 2) [35, 36], the United Kingdom (n = 2) [37, 38], Spain (n = 2) [39, 40], Bahrain (n = 1) [41], China (n = 1) [42], France (n = 1) [43], the Philippines (n = 1) [44], Korea (n = 1) [45], and Canada (n = 1) [46], with publication dates ranging from April 15, 2020 to June 16, 2021. Table 1 describes the characteristics of these included studies and available information on their respective patient demographics in detail.\n\nTable 1Study and patient characteristicsFirst AuthorCountryPublication dateStudy designQuality AssessmentNAgeMale/FemaleTypeCo-infectionComorbiditiesOutcomeAdachiJapan05/15/20Case report8/8184FemaleMSSA\nKlebsiella pneumoniae\nNoneDeathBagnatoItaly08/05/20Case report8/8162FemaleMSSA\nCandida tropicalis\nHypertensionDischargeChandranUnited Kingdom05/20/21Case report7/81NRFemaleMSSANoneType 2 diabetes mellitusDeathChenChina11/01/20Case report6/8129MaleMSSA and MRSA\nHaemophilus influenzae\nNR (not reported)DischargeChoudhuryUSA07/04/20Case report7/8173MaleMSSANoneType 2 diabetes mellitus, chronic foot osteomyelitis, aortic stenosis, prosthetic aortic valve, atrial fibrillation, prior S. aureus infectionHospiceCusumanoUSA11/12/20Case series9/104265.6 (mean)Males (n = 21), Females (n = 21)MSSA (n = 23) and MRSA (n = 19)Enterococcus faecalis (n = 3), Candida spp. (n = 2), Klebsiella pneumoniae (n = 2), Escheria coli (n = 1), Bacillus spp. (n = 1), Micrococcus spp. 
(n = 1), Staphylococcus epidermidis (n = 1), Proteus mirabilis (n = 1)Hypertension (n = 29), diabetes mellitus (n = 21), cardiovascular disease (n = 19), lung disease (n = 7), chronic kidney disease (n = 6), malignancy (n = 5), end-stage renal disease (n = 4), organ transplant (n = 3), liver disease (n = 1)Death at 30 days (n = 28)De PascaleItaly05/31/21Prospective cohort8/114064 (mean)Males (n = 33), Females (n = 7)MSSA (n = 14), MRSA (n = 26)Bacteroidetes (n = 18), Proteobacteria (n = 7), Actinobacteria (n = 3), Tenericutes (n = 2), Fusobacteria (n = 1)1Diabetes mellitus (n = 8), cardiovascular disease (n = 7), lung disease (n = 7), immunosuppression (n = 4), neoplasm (n = 4), chronic kidney disease (n = 3)Death (n = 26)DuployezFrance04/16/20Case report8/8135MaleMSSA (PVL-secreting)NoneNoneDeathEdradaPhilippines05/07/20Case report6/8139FemaleMSSAInfluenza B, Klebsiella pneumoniaeNoneDischargeElSeirafiBahrain06/23/20Case report6/8159MaleMRSANoneNRDeathFilocamoItaly05/11/20Case report8/8150MaleMSSANoneNoneDischargeHamzaviIran08/01/20Case report6/8114MaleMSSANoneCerebral palsyDeathHoshiyamaJapan11/02/20Case series6/10147MaleMSSANonePrevious cerebral hemorrhageDischarge“”“”“”“”“”139MaleMSSA\nGroup B Streptococcus\nHypertensionDischargeHussainUnited Kingdom05/22/20Case report8/8169FemaleMSSANoneProsthetic aortic valve with reduced ejection fractionDeathLevesqueCanada07/01/20Case report6/8153FemaleMSSANoneHypertension, diabetes mellitus, dyslipidemiaHospitalMirzaUSA11/16/20Case report6/8129MaleMRSAMulti-drug resistant PseudomonasCystic fibrosis with moderate obstructive lung disease, exocrine pancreatic insufficiency, gastroparesis, chronic S. aureusDischargePatekUSA04/15/20Case report7/810MaleMSSAHerpes simplex virusMaternal history of oral herpetic lesionsDischargePosteraroItaly09/06/20Case report8/8179MaleMRSAMorganella morganii, Candida glabrata, Acinetobacter baumannii, Proteus mirabilis, Klebsiella pneumoniae, Escherichia coliType 2 diabetes mellitus, ischemic heart disease, peripheral artery disease, left leg amputationDeathRajdevUSA09/10/20Case report7/8132MaleMSSA\nKlebsiella pneumoniae\nType 2 diabetes mellitusDischargeRajdevUSA09/28/20Case report7/8136MaleMSSA\nHaemophilus influenzae\nHypertension, two renal transplants for renal dysplasiaDischargeRamos-MartinezSpain07/30/20Prospective cohort6/11160NRMSSANoneType 2 diabetes mellitus, hypercholesterolemia, wrist arthritis, sternoclavicular arthritisDeathRandallUSA12/01/20Case report7/8160MaleMRSANoneChronic obstructive lung disease, coronary artery disease, hypothyroidismDeath“”“”“”“”“”183MaleMRSANoneHypertension, atrial fibrillationDeath“”“”“”“”“”160MaleMRSAHepatitis CHypertension, type 2 diabetes mellitus, cirrhosisDeathRegazzoniItaly08/07/20Case report2/8170MaleMSSANoneNRHospitalSharifipourIran09/01/20Prospective cohort7/111NRNRMSSANoneNoneDischarge“”“”“”“”1NRNRMRSANoneType 2 diabetes mellitusDeathSonKorea06/16/21Retrospective cohort8/11479 (mean)Male (n = 3), Female (n = 1)MRSAC. albicans (n = 2), Vancomycin-resistant enterococci (n = 2), S. 
maltophilia (n = 1), carbapenem-resistant Acinetobacter baumannii (n = 1),NRDeath (n = 3)SpannellaItaly06/23/20Case report8/8195FemaleMSSA Citrobacter werkmanii Hypertension, chronic heart failure, paroxysmal atrial fibrillation, dyslipidemia, chronic kidney disease, vascular dementia, sacral pressure ulcers, dysphagiaDeathSpotoItaly09/30/20Case report6/8155FemaleMSSANoneTriple negative, BRCA1-related, right breast cancer with multiple bone metastasis, type 2 diabetes mellitusDeathValgaSpain06/11/20Case report6/8168MaleMSSANoneHypertension, type 2 diabetes mellitus, congestive heart failure, sleep apnea, ischemic heart disease, chronic kidney diseaseDischarge
Table 1 footnote: Patients were colonized with these bacterial phyla, but no distinction between colonization versus infection was reported.

Figure 2 represents the quality assessment scores produced by the Joanna Briggs Institute’s critical appraisal tools. Scores ranged from 2 to 8 for case reports (out of 8 points total) (n = 22), 6–9 for case series (out of 10 points total) (n = 2), and 6–8 for cohort studies (out of 11 points total) (n = 4). The mean quality assessment score for these publications compared within their respective categories was 6.8 for case reports, 7.5 for case series, and 7.3 for cohort studies. In terms of most common study design limitations, the metric of patient post-intervention clinical conditions was least clearly described for case reports, neither of the case series consecutively included participants, and strategies to address incomplete follow-up were only reported for one of the four cohort studies.
Fig. 2 Quality assessment scores for included publications, reported as “yes” or “no” for achieving quality metrics per the Joanna Briggs Institute’s critical appraisal tools

For the 115 total patients included in our review that were co-infected with COVID-19 and S. aureus, their demographic (Table 1) and clinical data (Table 2) were described with varying completeness. Staphylococcal species and patient outcomes are reported in both tables to enable direct comparison with patient demographics and clinical course. Across our patient sample, the mean patient age was 54.8 years (SD = 21.6), 65.3% (n = 75) were male, 32.1% (n = 37) were female, and 3 patients (2.6%) did not have their gender specified in the study. Patients presented with a diversity of comorbidities, with diabetes mellitus (33.9%, n = 39), hypertension (32.2%, n = 37), and cardiovascular disease (28.7%, n = 33) reported as the most common. Five patients presented with no comorbidities and four studies reported no information on patient medical history related to comorbidities.
The most common presenting symptoms reported by patients at hospital admission included cough (13.9%, n = 16), fever (13.9%, n = 16), and dyspnea (13.0%, n = 15).

During the hospital course of the 115 co-infected patients in our review, the most common complications were sepsis or systemic inflammatory response syndrome (23.5%, n = 27), acute kidney injury (5.2%, n = 6), acute respiratory distress syndrome (4.3%, n = 5), pneumonia (4.3%, n = 5), and multi-organ dysfunction or failure (4.3%, n = 5). Transfer to an intensive care unit during admission was clearly reported for 53.9% (n = 62) of patients, unnecessary for 4.3% (n = 5), and not reported for the remaining 41.8% (n = 48). Patients were admitted for a mean length of 26.2 days (SD = 26.7) to any type of inpatient hospital unit, with the length of hospital stay not reported in five cases. Upon analysis of the final outcomes reported for the hospital course of our co-infected COVID-19 and S. aureus patient sample, 71 (61.7%) patients died, 41 (35.7%) were discharged, two remained hospitalized and in stable condition on study conclusion, and one patient was placed in hospice care. Table 2 further details the specific complications presenting in each patient’s hospital trajectory and Table 3 reports the final pooled frequencies of patient co-infection characteristics and outcomes.

Table 3. Pooled frequencies of patient co-infection characteristics and outcomes (n = 115), reported as Total (%)
Gender: Male 75 (65.3); Female 37 (32.1); Unspecified 3 (2.6)
Staphylococcal species: MSSA 59 (51.3); MRSA 57 (49.6)
Co-infection: Klebsiella pneumoniae 6 (5.2); Candida spp. 6 (5.2); Enterococcus spp. 5 (4.3); Haemophilus influenzae 2 (1.7); Escherichia coli 2 (1.7); Proteus mirabilis 2 (1.7); Acinetobacter baumannii 2 (1.7); Bacillus spp. 1 (0.9); Staphylococcus epidermidis 1 (0.9); Micrococcus spp. 1 (0.9); Pseudomonas spp. 1 (0.9); Morganella morganii 1 (0.9); Citrobacter werkmanii 1 (0.9); S. maltophilia 1 (0.9); Hepatitis C 1 (0.9); Herpes simplex virus 1 (0.9); Group B Streptococcus 1 (0.9); None 83 (72.2)
S. aureus diagnostic test: Blood culture 74 (64.3); Tracheal aspirate 46 (40.0); Sputum sample 11 (9.5); Nasal swab 2 (1.7); Lower respiratory tract sample 2 (1.7); Chronic carrier 2 (1.7); Wound culture 1 (0.9)
S. aureus diagnosis: Bacteremia 74 (64.3); Pneumonia 64 (55.7); Ventilator-associated 44 (38.3); Endocarditis/vasculitis 4 (3.5); Cellulitis 2 (1.7); Chronic carrier 2 (1.7); Osteomyelitis 1 (0.9); Not reported 2 (1.7)
S. aureus infection onset: Hospital 88 (76.5); Community 19 (16.5); Unclear 7 (6.1)
Complications: Sepsis/systemic inflammatory response syndrome 27 (23.5); Acute kidney injury 6 (5.2); Acute respiratory distress syndrome 5 (4.3); Pneumonia 5 (4.3); Multi-organ dysfunction/failure 5 (4.3); Bleeding/coagulopathy 5 (4.3); Hypoxic respiratory failure 3 (2.6); Myopathy/neuropathy 3 (2.6); Abscess formation 2 (1.7); Confusion and altered mental status 2 (1.7); Atrial fibrillation 2 (1.7); Endocarditis 2 (1.7); Anemia 1 (0.9); Cardiac arrest 1 (0.9); Thrombocytopenia 1 (0.9); Re-amputation 1 (0.9); Cholestatic liver injury 1 (0.9); Not reported 3 (2.6)
ICU: Yes 62 (53.9); No 5 (4.3); Not reported 48 (41.8)
Outcome: Death 71 (61.7); Discharge 41 (35.7); Hospital 2 (1.7); Hospice 1 (0.9)

Discussion: As our evidence base of the outcomes of patients with COVID-19 infection continues to expand, thorough review of the various clinical scenarios and environments inherent to the treatment process of this disease is crucial for patient care management and improvement. Given that higher levels of morbidity and death have been observed in influenza patients co-infected with multiple pathogens during past pandemics [47], exploring the outcomes of co-infected COVID-19 patients may establish similar trends and reveal strategies for decreasing the morbidity and mortality of this population in our current pandemic. Our review of the available clinical data reporting the outcomes of patients co-infected with COVID-19 and the common bacterial pathogen, S. aureus, was intended to augment this knowledge base and has produced several key findings regarding mortality rate, co-infection onset, and treatment considerations for these patients.
Foremost, the mortality rate in our review for patients co-infected with COVID-19 and S. aureus was 61.7%, which depicts a significantly increased mortality rate when contrasted with patients infected solely by COVID-19 [48]. This outcome is comparable to the increased mortality rates observed in patients acquiring co-infection with S. aureus in addition to influenza [10]; however, our findings emphasize an important difference in the etiology of COVID-19 and influenza co-infection with S. aureus. For influenza specifically, co-infection with S. aureus is predominantly diagnosed upon patient presentation to a healthcare setting, indicating that the community is a frequent and supportive environment for the co-infection processes of these pathogens [9, 49]. In contrast, our findings indicate that co-infection with S. aureus predominantly occurs in the hospital environment for patients with COVID-19 infection. The terminology used to differentiate these infection etiologies is “community-associated” versus “healthcare-associated,” with delineation between these diagnoses occurring at 48 hours after admission to a hospital or healthcare facility [50]. Given that co-infection with COVID-19 and S. aureus occurred after hospital admission in 76.5% of the patients in our review, preventative measures in the community-setting or treatment in an outpatient environment may be important considerations for mortality reduction from healthcare-associated S. aureus infection.
Importantly, while the predominance of S.
aureus co-infections occurring after patient admission for COVID-19 infection is likely associated with a wide diversity of patient- and environment-specific factors, our findings suggest that this infection sequence may be partly attributed to the COVID-19 treatment course. The most common patient interventions identified in our review included intubation and mechanical ventilation, central venous catheter placement, and corticosteroids, which are each associated with increased risks of bacterial infection through introduction of a foreign body or immunosuppressive properties that dually support bacterial growth [51, 52]. Although these first-line treatments for decompensating patients that present with severe COVID-19 infection may predispose patients to S. aureus bacterial co-infection and subsequently increased mortality rates, they are often unavoidable during the patient treatment course. Vigilant management surrounding these interventions in patients with COVID-19 infection, such as timely central line or ventilator removal and prudent steroid dosing, are key quality improvement practices that warrant routine physician adherence during patient treatment processes given co-infection mortality rates.\nIn contrast to COVID-19 infection alone, the increased patient morbidity and mortality of COVID-19 and healthcare-associated S. aureus co-infection identified in our review have important implications for future research and clinical practice. While of clear and crucial public health importance, our findings further emphasize the imperative of COVID-19 vaccination to reduce both infection and symptom severity that may predispose patients to the necessity of hospital interventions and subsequent S. aureus co-infection. The effectiveness of this strategy is exemplified by the reduction in influenza and S. aureus pathology observed with increased influenza vaccination [53, 54]. As seen with influenza co-infection, vaccination may be a crucial harm reduction measure given that no S. aureus prophylaxis exists, and the incidence of S. aureus strains refractory to antibiotics is rising [55]. Additionally, the mortality trends observed in COVID-19 patients co-infected with S. aureus highlight the necessity for future reviews and clinical studies focused on the co-infection outcomes of other bacterial and viral pathogens alongside COVID-19. Further research may inform our ability to predict the trajectory of patients with various co-infections and identify infection patterns that influence treatment decisions.\nTo our knowledge, this is the first study to review and evaluate the outcomes of patients co-infected with COVID-19 and S. aureus. However, we acknowledge several limitations to this review. First, the majority of the studies included in our review were individual case reports due to the recent emergence of COVID-19 and limited literature exploring outcomes for patients co-infected with S. aureus. While these types of studies can be vital for expanding the medical knowledge base and reveal fundamental disease characteristics, it is crucial to consider the reporting bias that may exist in this study design and lack of comparison groups. Per our quality assessment, trends in study limitations for each type of publication were variable. Accordingly, our intent for this review was to pool these outcomes in order to reduce this bias and transparently report each case for appropriate assessment and application of our findings. 
In addition, Cusumano et al.’s [9] case series comprised 42 of the patients in our review and used a study end-point of death at 30 days, implying that the true mortality rate of patients with COVID-19 and S. aureus co-infection may be higher if related complications necessitate an extended hospital course. Future high-quality clinical studies examining patient outcomes are warranted and of critical importance to further expand on the findings of our systematic review.

Conclusions: In contrast to patients infected solely with COVID-19, co-infection with COVID-19 and S. aureus demonstrates a higher patient mortality rate during hospital admission. S. aureus co-infection in COVID-19 patients is predominantly healthcare-associated, and common hospital interventions for patients with severe COVID-19 infection may increase the risk for bacterial infection. Our findings emphasize the imperative of COVID-19 vaccination to prevent hospitalization for COVID-19 treatment and the subsequent susceptibility to hospital-acquired S. aureus co-infection.

Additional file 1: Table S1. Search strategies, conducted between July 3, 2021, and July 16, 2021. Total results = 1922. Table S2. Joanna Briggs Quality Assessment for case reports included in the review.
Table S3. Joanna Briggs Quality Assessment for case-series included in the review. Table S4. Joanna Briggs Quality Assessment for cohort studies included in the review. Table S5. Excluded articles after full-text analysis, with reason (n = 64).
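For readers who want to see how the numeric quality scores summarised from Tables S2–S4 are obtained, the modified Joanna Briggs scoring described in the methods reduces to counting “yes” responses on the design-specific checklist (8 items for case reports, 10 for case series, 11 for cohort studies) and averaging within each study design. The sketch below is an illustration only; the per-item True/False responses are invented placeholders (only the totals mirror scores reported in Table 1, e.g. Adachi 8/8, Regazzoni 2/8, Cusumano 9/10, De Pascale 8/11), and this is not the authors’ actual appraisal spreadsheet.

```python
# Illustrative tally of modified Joanna Briggs Institute (JBI) appraisal scores:
# each study's score is the number of checklist items answered "yes",
# and scores are averaged within each study design.
appraisals = {
    # design: {study: [True/False per checklist item]} -- placeholder item responses
    "case report": {"Adachi": [True] * 8, "Regazzoni": [True] * 2 + [False] * 6},
    "case series": {"Cusumano": [True] * 9 + [False] * 1},
    "cohort": {"De Pascale": [True] * 8 + [False] * 3},
}

for design, studies in appraisals.items():
    scores = [sum(items) for items in studies.values()]   # count of "yes" answers
    mean_score = sum(scores) / len(scores)                 # design-level mean
    print(f"{design}: scores={scores}, mean={mean_score:.1f}")
```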
[ "COVID-19", "\nStaphylococcus aureus\n", "Co-infection", "Antibiotics", "Hospitalization", "Infection" ]
Background: Upon passage of the March 11th anniversary of the official declaration of the coronavirus disease 2019 (COVID-19) pandemic [1], the causative severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pathogen has infected over 181 million individuals and resulted in more than 3.9 million deaths worldwide as of July 1, 2021 [2]. In addition to rapid spread through high transmission rates [3], infection with COVID-19 can result in severe complications such as acute respiratory distress syndrome (ARDS), thromboembolic events, septic shock, and multi-organ failure [4]. In response to this novel virus, the clinical environment has evolved to accommodate the complexities of healthcare delivery in the pandemic environment [5]. Accordingly, a particularly challenging scenario for clinicians is the management of patients with common infections that may be complicated by subsequent COVID-19 co-infection, or conversely co-infected with a pathogen following primary infection with COVID-19 [6]. Bacterial co-infection in COVID-19 patients may exacerbate the immunocompromised state caused by COVID-19, further worsening clinical prognosis [7]. Implicated as a leading bacterial pathogen in both community- and healthcare-associated infections, Staphylococcus aureus (S. aureus) is commonly feared in the hospital environment for its risk of deadly outcomes such as endocarditis, bacteremia, sepsis, and death [8]. In past viral pandemics, S. aureus has been the principal cause of secondary bacterial infections, significantly increasing patient mortality rates [9]. For viral influenza infection specifically, S. aureus co-infection and bacteremia has been associated with mortality rates of almost 50%, in contrast to the 1.4% morality rates observed in patients infected with influenza alone [10]. Given the parallels between the clinical presentation, course, and outcomes of influenza and COVID-19 viral infection [11], mortality rates in COVID-19 patients co-infected with S. aureus may reflect those observed in influenza patients. However, while recent studies have focused on the incidence and prevalence of COVID-19 and S. aureus co-infection, the clinical outcomes of patients co-infected with these two specific pathogens remains unclear given that existing studies consolidate S. aureus patient outcomes with other bacterial pathogens [12–14]. Given that the literature informing our knowledge of COVID-19 is a dynamic and evolving entity, the purpose of this scoping review is to evaluate the current body of evidence reporting the clinical outcomes of patients co-infected with COVID-19 and S. aureus. To date, there has been no review focusing specifically on the clinical treatment courses and subsequent outcomes of COVID-19 and S. aureus co-infection. In response to the urgency of the pandemic state and high rates of COVID-19 hospital admissions, we aim to identify important areas for further research and explore potential implications for clinical practice. Methods: Search strategy and study selection To provide a scoping review of initial insight into the breadth of developing data on COVID-19 and S. aureus co-infection, we followed the five-stage methodology of scoping review practice presented by Levac, Colquhoun, and O’Brien [15]. 
In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for Scoping Reviews [16], we conducted electronic searches in PubMed, Scopus, Ovid MEDLINE, CINAHL, ScienceDirect, medRxiv (preprint), and the WHO COVID-19 database between July 3, 2021 and July 16, 2021. Search terms were combined with the use of Boolean operators and included subject headings or key terms specific to COVID-19 (i.e. severe acute respiratory syndrome coronavirus 2 OR SARS-CoV2 OR 2019 novel coronavirus OR 2019-nCoV OR coronavirus disease 2019 virus OR COVID-19 OR Wuhan coronavirus) and Staphylococcus aureus (i.e. methicillin-resistant staphylococcus aureus OR MRSA OR methicillin-susceptible Staphylococcus aureus OR MSSA OR staphylococcal infections). A comprehensive list of our scoping terms and search strategies is included in the Appendix (Additional file 1: Table S1). Two independent, experienced reviewers (JA and KV) screened the titles and abstracts of eligible studies and performed full-text review on qualified selections. For this review, we broadly considered articles of any design that included patients infected with both COVID-19 and S. aureus, provided a description of the timeline and ultimate clinical outcomes for these patients (i.e. death or discharge from hospital) at study completion, and were available in English. Studies were excluded if they did not report final outcomes, since our scoping review purpose was to evaluate the quality of existing literature that described the clinical course and mortality rate of patients co-infected with these pathogens. We excluded duplicate records, and disagreements regarding study inclusion were resolved by consensus or feedback from the senior author.
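Purely as an illustration (not part of the original review protocol), the Boolean combination of the two term lists quoted above can be expressed programmatically. The sketch below assembles a generic PubMed-style query string from those terms; the term lists are copied from the text, but the `or_block` helper and the exact quoting/field syntax are assumptions rather than the authors’ actual search syntax.

```python
# Illustrative sketch: combine the COVID-19 and S. aureus term lists with Boolean
# operators into one query string. Field tags and database-specific syntax are
# deliberately omitted because they are not reported in the review.
covid_terms = [
    "severe acute respiratory syndrome coronavirus 2", "SARS-CoV2",
    "2019 novel coronavirus", "2019-nCoV", "coronavirus disease 2019 virus",
    "COVID-19", "Wuhan coronavirus",
]
staph_terms = [
    "methicillin-resistant staphylococcus aureus", "MRSA",
    "methicillin-susceptible Staphylococcus aureus", "MSSA",
    "staphylococcal infections",
]

def or_block(terms):
    """Join terms into a parenthesised OR block, quoting multi-word phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

query = f"{or_block(covid_terms)} AND {or_block(staph_terms)}"
print(query)
```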
Data extraction: For the final articles selected, we completed data extraction in duplicate, and any discrepancies were resolved through discussion or consult with the senior author. While several studies also included reports on patients infected with COVID-19 alone or co-infected with an alternative pathogen, we extracted data solely for patients with COVID-19 and S. aureus co-infection. Our data extraction items included study methodology, author and study location, type of staphylococcal species, onset of S. aureus infection, S. aureus culture site and infection source, patient sample size, age, gender, presentation, comorbidities or additional co-infections, prior history of S. aureus infection, diagnostic findings, hospital treatments and interventions, complications, total length of hospital admission, intensive care unit transfer, and final patient mortality outcomes upon study completion.

Data synthesis and analysis: Microsoft Excel 2016 (Redmond, WA, USA) was used to collect and chart data extracted from the studies that met the inclusion criteria. Data were synthesized and analyzed descriptively, with frequency counts performed for individual and grouped study metrics. The purpose of synthesizing the extracted information through this method was to create an overview of existing knowledge and identify gaps in the current literature on COVID-19 and S. aureus co-infection.
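The descriptive synthesis described above amounts to frequency counts and percentages over the pooled sample of 115 patients (performed in Microsoft Excel by the authors). A minimal Python sketch of the same tallying logic is shown below; the `patients` records and field names are hypothetical stand-ins for the extraction spreadsheet, and the percentages reported in Tables 1–3 come from the authors’ own data, not from this code.

```python
from collections import Counter

# Hypothetical stand-in for the extraction spreadsheet: one dict per patient,
# with a few of the fields charted during data extraction.
patients = [
    {"species": "MSSA", "onset": "hospital", "outcome": "death"},
    {"species": "MRSA", "onset": "community", "outcome": "discharge"},
    # ... one record per included patient (n = 115 in the review)
]

def pooled_frequencies(records, field):
    """Frequency count and percentage of each value of `field` across all records."""
    counts = Counter(r[field] for r in records)
    total = len(records)
    return {value: (n, round(100 * n / total, 1)) for value, n in counts.items()}

# Example: outcome frequencies analogous to Table 3; with the full dataset this
# would reproduce figures such as 71/115 = 61.7% deaths.
print(pooled_frequencies(patients, "outcome"))
```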
Quality assessment: Given that the majority of existing literature reporting outcomes data for COVID-19 and S. aureus co-infection were case reports, we utilized the Joanna Briggs Institute’s critical appraisal tools [17] to provide a metric for our scoping assessment of the methodological quality of the included studies. Application of these tools enabled examination of study quality in the areas of inclusion criteria, sample size, description of study participants, setting, and the appropriateness of the statistical analysis. As in previous reviews [18, 19], the tools were modified to produce a numeric score, with case reports assessed based on an eight-item scale, case series on a ten-item scale, and cohort studies on an eleven-item scale. Studies were assessed with the methodological quality tool specific to their design (i.e. case report, case series, cohort) by two independent reviewers (JA and KV) and discrepancies were resolved through discussion.
While debate exists regarding the minimal number of patients required for study qualification as a “case series” [20], we considered studies reporting individual patient data as “case reports” and those reporting aggregate patient data as “case series.” Our complete quality assessment, including tools and scores, is available in the Appendix (Additional file 1: Tables S2–S4). Results: Our search strategy produced a total of 1922 potential publications with patients co-infected by COVID-19 and S. aureus. For transparent and reproducible methods, the PRISMA 2020 flow diagram for new systematic reviews was utilized to display the search results of our scoping review (Fig. 1). Following deduplication (n = 597) and a comprehensive screen of study titles and abstracts for irrelevant material (n = 1233), we reviewed 92 full texts for inclusion eligibility. Of these texts, 64 did not include patient outcomes for COVID-19 and S. aureus co-infected patients: 57 were incidence or prevalence studies with no patient-specific outcomes data, two included patients with COVID-19 and a history of S. aureus infection but no current COVID-19 and S. aureus co-infection, two were genome analysis studies with no patient data, and three were unavailable in English (Additional file 1: Table S5).Fig. 1Process of searching and selecting articles included in the scoping review based on the PRISMA 2020 flow diagram Process of searching and selecting articles included in the scoping review based on the PRISMA 2020 flow diagram Publication types and geography Following full-text review, 28 studies qualified for inclusion in our review, resulting in a total of 115 patients. Of these 28 included studies, 22 were case reports (describing single patients with individual data), two were case series (describing 7–42 patients with aggregate data), and four were cohort studies (describing 4–40 patients with aggregate data). Countries of study publication included the United States (n = 7) [7, 9, 21–25], Italy (n = 7) [26–32], Japan (n = 2) [33, 34], Iran (n = 2) [35, 36], the United Kingdom (n = 2) [37, 38], Spain (n = 2) [39, 40], Bahrain (n = 1) [41], China (n = 1) [42], France (n = 1) [43], the Philippines (n = 1) [44], Korea (n = 1) [45], and Canada (n = 1) [46], with publication dates ranging from April 15, 2020 to June 16, 2021. Table 1 describes the characteristics of these included studies and available information on their respective patient demographics in detail. Table 1Study and patient characteristicsFirst AuthorCountryPublication dateStudy designQuality AssessmentNAgeMale/FemaleTypeCo-infectionComorbiditiesOutcomeAdachiJapan05/15/20Case report8/8184FemaleMSSA Klebsiella pneumoniae NoneDeathBagnatoItaly08/05/20Case report8/8162FemaleMSSA Candida tropicalis HypertensionDischargeChandranUnited Kingdom05/20/21Case report7/81NRFemaleMSSANoneType 2 diabetes mellitusDeathChenChina11/01/20Case report6/8129MaleMSSA and MRSA Haemophilus influenzae NR (not reported)DischargeChoudhuryUSA07/04/20Case report7/8173MaleMSSANoneType 2 diabetes mellitus, chronic foot osteomyelitis, aortic stenosis, prosthetic aortic valve, atrial fibrillation, prior S. aureus infectionHospiceCusumanoUSA11/12/20Case series9/104265.6 (mean)Males (n = 21), Females (n = 21)MSSA (n = 23) and MRSA (n = 19)Enterococcus faecalis (n = 3), Candida spp. (n = 2), Klebsiella pneumoniae (n = 2), Escheria coli (n = 1), Bacillus spp. (n = 1), Micrococcus spp. 
(n = 1), Staphylococcus epidermidis (n = 1), Proteus mirabilis (n = 1)Hypertension (n = 29), diabetes mellitus (n = 21), cardiovascular disease (n = 19), lung disease (n = 7), chronic kidney disease (n = 6), malignancy (n = 5), end-stage renal disease (n = 4), organ transplant (n = 3), liver disease (n = 1)Death at 30 days (n = 28)De PascaleItaly05/31/21Prospective cohort8/114064 (mean)Males (n = 33), Females (n = 7)MSSA (n = 14), MRSA (n = 26)Bacteroidetes (n = 18), Proteobacteria (n = 7), Actinobacteria (n = 3), Tenericutes (n = 2), Fusobacteria (n = 1)1Diabetes mellitus (n = 8), cardiovascular disease (n = 7), lung disease (n = 7), immunosuppression (n = 4), neoplasm (n = 4), chronic kidney disease (n = 3)Death (n = 26)DuployezFrance04/16/20Case report8/8135MaleMSSA (PVL-secreting)NoneNoneDeathEdradaPhilippines05/07/20Case report6/8139FemaleMSSAInfluenza B, Klebsiella pneumoniaeNoneDischargeElSeirafiBahrain06/23/20Case report6/8159MaleMRSANoneNRDeathFilocamoItaly05/11/20Case report8/8150MaleMSSANoneNoneDischargeHamzaviIran08/01/20Case report6/8114MaleMSSANoneCerebral palsyDeathHoshiyamaJapan11/02/20Case series6/10147MaleMSSANonePrevious cerebral hemorrhageDischarge“”“”“”“”“”139MaleMSSA Group B Streptococcus HypertensionDischargeHussainUnited Kingdom05/22/20Case report8/8169FemaleMSSANoneProsthetic aortic valve with reduced ejection fractionDeathLevesqueCanada07/01/20Case report6/8153FemaleMSSANoneHypertension, diabetes mellitus, dyslipidemiaHospitalMirzaUSA11/16/20Case report6/8129MaleMRSAMulti-drug resistant PseudomonasCystic fibrosis with moderate obstructive lung disease, exocrine pancreatic insufficiency, gastroparesis, chronic S. aureusDischargePatekUSA04/15/20Case report7/810MaleMSSAHerpes simplex virusMaternal history of oral herpetic lesionsDischargePosteraroItaly09/06/20Case report8/8179MaleMRSAMorganella morganii, Candida glabrata, Acinetobacter baumannii, Proteus mirabilis, Klebsiella pneumoniae, Escherichia coliType 2 diabetes mellitus, ischemic heart disease, peripheral artery disease, left leg amputationDeathRajdevUSA09/10/20Case report7/8132MaleMSSA Klebsiella pneumoniae Type 2 diabetes mellitusDischargeRajdevUSA09/28/20Case report7/8136MaleMSSA Haemophilus influenzae Hypertension, two renal transplants for renal dysplasiaDischargeRamos-MartinezSpain07/30/20Prospective cohort6/11160NRMSSANoneType 2 diabetes mellitus, hypercholesterolemia, wrist arthritis, sternoclavicular arthritisDeathRandallUSA12/01/20Case report7/8160MaleMRSANoneChronic obstructive lung disease, coronary artery disease, hypothyroidismDeath“”“”“”“”“”183MaleMRSANoneHypertension, atrial fibrillationDeath“”“”“”“”“”160MaleMRSAHepatitis CHypertension, type 2 diabetes mellitus, cirrhosisDeathRegazzoniItaly08/07/20Case report2/8170MaleMSSANoneNRHospitalSharifipourIran09/01/20Prospective cohort7/111NRNRMSSANoneNoneDischarge“”“”“”“”1NRNRMRSANoneType 2 diabetes mellitusDeathSonKorea06/16/21Retrospective cohort8/11479 (mean)Male (n = 3), Female (n = 1)MRSAC. albicans (n = 2), Vancomycin-resistant enterococci (n = 2), S. 
maltophilia (n = 1) carbapenem-resistant Acinetobacter baumannii (n = 1),NRDeath (n = 3)SpannellaItaly06/23/20Case report8/8195FemaleMSSA Citrobacter werkmanii Hypertension, chronic heart failure, paroxysmal atrial fibrillation, dyslipidemia, chronic kidney disease, vascular dementia, sacral pressure ulcers, dysphagiaDeathSpotoItaly09/30/20Case report6/8155FemaleMSSANoneTriple negative, BRCA1-related, right breast cancer with multiple bone metastasis, type 2 diabetes mellitusDeathValgaSpain06/11/20Case report6/8168MaleMSSANoneHypertension, type 2 diabetes mellitus, congestive heart failure, sleep apnea, ischemic heart disease, chronic kidney diseaseDischarge
Patients were colonized with these bacterial phyla, but no distinction between colonization versus infection was reported
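The screening flow reported at the start of the Results and the study-design counts above reduce to simple arithmetic. The following snippet is an illustrative cross-check of those counts; all numbers are taken directly from the text above, and the code is not part of the original analysis.

```python
# Illustrative cross-check of the screening and inclusion counts reported above.
records_identified = 1922
duplicates_removed = 597
excluded_title_abstract = 1233
full_texts_reviewed = records_identified - duplicates_removed - excluded_title_abstract
assert full_texts_reviewed == 92

# Exclusions at full-text review: incidence/prevalence studies, history-only cases,
# genome-only analyses, and non-English texts.
full_texts_excluded = 57 + 2 + 2 + 3
studies_included = full_texts_reviewed - full_texts_excluded
assert studies_included == 28

# Included studies by design, as listed in Table 1.
by_design = {"case report": 22, "case series": 2, "cohort": 4}
assert sum(by_design.values()) == studies_included
```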
Publication quality

Figure 2 presents the quality assessment scores produced by the Joanna Briggs Institute’s critical appraisal tools. Scores ranged from 2 to 8 for case reports (out of 8 points total) (n = 22), 6–9 for case series (out of 10 points total) (n = 2), and 6–8 for cohort studies (out of 11 points total) (n = 4). The mean quality assessment score within each category was 6.8 for case reports, 7.5 for case series, and 7.3 for cohort studies. The most common study design limitations were as follows: the metric of patient post-intervention clinical condition was the least clearly described item for case reports, neither of the case series consecutively included participants, and strategies to address incomplete follow-up were reported for only one of the four cohort studies.

Fig. 2 Quality assessment scores for included publications reported as “yes” or “no” for achieving quality metrics per the Joanna Briggs Institute’s critical appraisal tools

Patient demographics

For the 115 total patients included in our review who were co-infected with COVID-19 and S. aureus, their demographic (Table 1) and clinical data (Table 2) were described with varying completeness. 
Staphylococcal species and patient outcomes are reported in both tables to enable direct comparison with patient demographics and clinical course. Across our patient sample, the mean patient age was 54.8 years (SD = 21.6), 65.3% (n = 75) were male, 32.1% (n = 37) were female, and 3 patients (2.6%) did not have their gender specified in the study. Patients presented with a diversity of comorbidities with diabetes mellitus (33.9%, n = 39), hypertension (32.2%, n = 37), and cardiovascular disease (28.7%, n = 33) reported as the most common. Five patients presented with no comorbidities and four studies reported no information on patient medical history related to comorbidities. The most common presenting symptoms reported by patients at hospital admission included cough (13.9%, n = 16), fever (13.9%, n = 16), and dyspnea (13.0%, n = 15). Table 2Clinical characteristicsFirst AuthorNTypeDiagnosisCo-infection onsetPresentationDx findingsTreatments and InterventionsComplicationsLength of stayICUOutcomeAdachi1MSSASputum sample, pneumoniaUnclearaFever, diarrhea, dyspneaBilateral opacities on chest x-ray (CXR), ground glass opacities & lower lobe consolidation on chest computed tomography (CT)Antibiotics, corticosteroids, lopinavir/ritonavir, morphineARDS16YesDeathBagnato1MSSABlood culture, bacteremiaHospital-onsetFever, cough, diarrhea, myalgiaUnremarkable head CT, normal creatine kinaseAntibiotics, corticosteroids, intubation and ventilation, antifungals, lopinavir/ritonavir, hydroxychloroquine, tocilizumab, neuromuscular blocking agents, olanzapinePsychomotor agitation and temporospatial disorientation, myopathy140YesDischargeChandran1MSSABlood culture and tracheal aspirate, pneumonia (ventilator-associated) and bacteremiaHospital-onsetDyspnea (positive COVID test)Bilateral interstitial infiltrates (CXR) and ground glass opacities (CT)Antibiotics, intubation and ventilationBilateral cavitating lung lesions, septic shock15YesDeathChen1MSSA and MRSASputum sample, pneumoniaHospital-onsetAsymptomatic (positive COVID test)Patchy consolidation and ground glass opacities in right upper lobe on CXR (day 29)Antibiotics, corticosteroids, lopinavir/ritonavir, Abidol combined with IFN inhalant, Thymalfasin, ribavirin, loratadinePneumonia51NoDischargeChoudhury1MSSABlood culture, endocarditis and bacteremiaUnclearbAltered mental status, low back pain, urinary incontinence, right foot ulcersCystitis and pyelonephritis on CT, epidural abscess (L4/5) on magnetic resonance imaging (MRI)Antibiotics, oral rifampin, hydroxychloroquineEndocarditis, aortic root abscessNR (not reported)NRHospiceCusumano42MSSA (n = 23) and MRSA (n = 19)Blood culture, bacteremia (n = 42), pneumonia (n = 8), vascular (n = 3), osteomyelitis (n = 1), skin (n = 1)Hospital-onset (n = 28), community-onset (n = 14)Not reported (NR)Abnormal CXR (n = 36), vegetation on transthoracic echo (n = 1)Antibiotics (n = 42), intubation and ventilation (n = 31), central venous catheter (n = 19)NRNRNRDeath at 30 days (n = 28)De Pascale40MSSA (n = 14), MRSA (n = 26)Tracheal aspirate and blood culture, pneumonia (ventilator-associated) (n = 40) and bacteremia (n = 19)Late hospital-onset (n = 35), early hospital-onset (n = 5)NRNRAntibiotics (n = 40), intubation and ventilation (n = 40)Septic shock (n = 22), acute kidney injury (n = 4)11 (mean)Yes (n = 40)Death (n = 26)Duployez1MSSA (PVL-secreting)Pleural drainage sample, pneumoniaUnclearcFever, cough, bloody sputumConsolidation of left upper lobe, left pleural effusion, right ground glass opacities, bilateral 
cavitary lesions on CTAntibiotics, intubation and ventilation, extracorporeal membrane oxygenation (ECMO), anticoagulation, upper left lobectomyNecrotizing pneumonia, deterioration of respiratory, renal, and liver functions17YesDeathEdrada1MSSANasal and throat swab with PCRCommunity-onset/carrierDry cough, sore throatUnremarkable chest CTOseltamivirNone19NoDischargeElSeirafi1MRSABlood culture, bacteremiaHospital-onsetFever, dry cough, dyspneaBilateral pulmonary infiltrates and ARDS on CXRAntibiotics, IFN, ribavirin, plasma therapy, tocilizumab injectionsSeptic shock with multi-organ dysfunction16YesDeathFilocamo1MSSABlood culture, bacteremiaHospital-onsetFever, dyspneaBilateral ground glass opacities on chest CTAntibiotics, intubation and ventilation, lopinavir/ritonavir, hydroxychloroquine, anakinraProgressive cholestatic liver injury29YesDischargeHamzavi1MSSABlood culture, bacteremiaUncleardFever, cough, dyspnea, lethargyLeft pleural effusion on CXRAntibiotics, intubation and ventilationMulti-organ dysfunctionNRYesDeathHoshiyama1MSSAThroat swab and sputum sampleUncleareCoughNormal labsNRNRNRNoDischarge“”1MSSAThroat swab and sputum sampleUncleareCoughNormal labsNRNRNRNoDischargeHussain1MSSABlood culture, bacteremiaCommunity-onsetFever, cough, dyspnea, malaiseBilateral reticular enhancement and heavily calcified aortic valve with mass effect on left atrial wall on chest CTAntibiotics, intubation and ventilation, esophagogastroduodenoscopy, pantoprazole, amiodarone, heparinBleeding Dieulafoy’s lesion, fast atrial fibrillation, acute kidney injury, multi-organ failure, intracerebral hematoma18YesDeathLevesque1MSSASputum sample, pneumonia (ventilator-associated)Hospital-onsetFever, dry cough, dyspneaSmall intraventricular hemorrhage on head CT (day 39)Antibiotics, intubation and ventilation, corticosteroids, propofol, fentanyl, neuromuscular blocking agents, heparin, continuous platelet infusion, blood transfusions, IVIG, endobronchial clot removals, romiplostim, vincristineARDS, ICU-acquired neuromyopathy, acute kidney injury, thrombocytopenia, intraventricular hemorrhage, ventilator-associated pneumoniaAt least 39YesHospitalMirza1MRSASputum sampleCarrier (chronic)Chest pain, dyspneaBilateral upper lobe bronchial wall thickening and bronchiectasis with nodular and interstitial opacities on chest CTAntibiotics, remdesivirMeropenem-resistant pseudomonas6NoDischargePatek1MSSAWound culture, cellulitisCommunity-onsetFever, erythema and erosions of right thumb and fourth digit, somnolence, decreased feedingElevated LFTs, bilateral perihilar streaking on CXR, neutropeniaAntibiotics, acyclovir, nasal cannula O2Hypoxic respiratory failure7YesDischargePosteraro1MRSABlood culture, bacteremiaHospital-onsetFever, cough, dyspneaCXR and chest CT consistent with pneumoniaAntibiotics, antifungals, hydroxychloroquine, darunavir/ritonavirHypoxia, left leg re-amputation, septic shock53YesDeathRajdev1MSSASputum sample, pneumonia (ventilator-associated)Community- and hospital-onsetfDyspneaBilateral consolidations on CXR, bilateral ground glass opacities and pneumomediastinum with subcutaneous emphysema on chest CTIntubation and ventilation, epoprostenol, hydromorphone, neuromuscular blocking agents, ECMOAnemia, epistaxis, oropharyngeal bleeding, ARDS47YesDischargeRajdev1MSSATracheal aspirate, pneumoniaHospital-onsetFever, cough, dyspnea, myalgiasDiffuse bilateral pulmonary opacities on CXRAntibiotics, intubation and ventilation, corticosteroids, tacrolimus, mycophenolate, remdesivirHypoxic respiratory failure, 
Guillain-Barré syndrome23NRDischargeRamos-Martinez1MSSABlood culture, bacteremia (central venous catheter-associated)Hospital-onsetFever, meningitis, right infrapopliteal deep vein thrombosisMild mitral insufficiency on transthoracic echoAntibiotics, intubation and ventilation, central venous catheter, corticosteroids, tocilizumabNative valve endocarditis, progressive sepsisAt least 20YesDeathRandall1MRSABlood culture, bacteremiaHospital-onsetFever, cough, dyspneaNRIntubation and ventilation, corticosteroids, central venous catheterRespiratory distress3NRDeath“”1MRSABlood culture, bacteremiaHospital-onsetHypoxia (positive COVID test)NRCorticosteroids, remdesivir, central venous catheterSeptic shock14NRDeath“”1MRSABlood culture, bacteremiaHospital-onsetHypoxia (positive COVID test)NRCorticosteroidsCardiac arrest10YesDeathRegazzoni1MSSANasal swab and blood culture, bacteremiaHospital-onsetBilateral pneumonia (positive COVID test)Ischemic areas with hemorrhagic transformation on head CT and MRI, large vegetations on aortic valve with regurgitation on transesophageal echoAntibiotics, corticosteroidsSevere systemic inflammatory responseAt least 10NRHospitalSharifipour1MSSATracheal aspirate, pneumonia (ventilator-associated)Hospital-onsetCough, dyspnea, sore throatNRAntibiotics, intubation and ventilationVentilator-associated pneumonia13YesDischarge“”1MRSATracheal aspirate, pneumonia (ventilator-associated)Hospital-onsetCough, dyspnea, sore throatNRAntibiotics, intubation and ventilationVentilator-associated pneumonia9YesDeathSon4MRSASputum sample, pneumonia (n = 4)Hospital-onset (n = 4)Pneumonia (positive COVID test)NRAntibiotics (n = 4), corticosteroids (n = 4)NR42 (mean)YesDeath (n = 3)Spannella1MSSABronchoalveolar lavage, pneumoniaCommunity-onsetFever, cough, emesisBilateral ground glass opacities and multiple areas of consolidation on CXRAntibiotics, metoprolol, amiodarone, continuous positive-pressure airwayAtrial fibrillation, respiratory failure, altered mental status, tachycardia, severe hypoxemia27YesDeathSpoto1MSSATracheal aspirate, pneumoniaUncleargFever, dyspnea, respiratory distress following chemoimmunotherapyBilateral ground glass opacities and consolidation in the middle/upper lobes on chest CTAntibiotics, intubation and ventilation, lopinavir-ritonavir, hydroxychloroquineARDS5NRDeathValga1MSSATracheal aspirate, pneumoniaHospital-onsetFever, dry coughNRAntibiotics, intubation and ventilation, corticosteroids, hydroxychloroquine, lopinavir/ritonavir, IFN beta, heparinARDS, multi-organ failure47YesDischarge
a Positive sputum culture on day 10
b Patient recently treated for S. aureus prior to admission, but setting is unclear
c Pleural fluid tested on day 4
d Timeline of blood culture unclear
e Timeline of sputum testing unclear
f Positive sputum on admission, subsequent ventilator-associated infection
g Patient was receiving routine treatments in a healthcare setting
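The pooled percentages quoted in the demographics paragraph above (and throughout the Results) are simple proportions of the 115-patient sample. The sketch below illustrates that calculation using the comorbidity and symptom counts stated in the text; it is purely illustrative and not part of the original analysis.

```python
# Pooled frequencies are reported as count / total * 100, rounded to one decimal place.
TOTAL_PATIENTS = 115

counts = {
    "diabetes mellitus": 39,
    "hypertension": 37,
    "cardiovascular disease": 33,
    "cough": 16,
    "fever": 16,
    "dyspnea": 15,
}

for label, n in counts.items():
    pct = round(100 * n / TOTAL_PATIENTS, 1)
    print(f"{label}: {n} ({pct}%)")
# Output matches the percentages reported above, e.g. diabetes mellitus: 39 (33.9%),
# hypertension: 37 (32.2%), cardiovascular disease: 33 (28.7%), dyspnea: 15 (13.0%).
```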
Infection characteristics

In terms of specific staphylococcal species co-infection, 51.3% (n = 59) of patients were infected with methicillin-sensitive Staphylococcus aureus (MSSA) and 49.6% (n = 57) were infected with methicillin-resistant Staphylococcus aureus (MRSA), with a single patient co-infected with both MRSA and MSSA. One patient co-infected with MSSA had a fatal Panton-Valentine leukocidin toxin-producing strain of MSSA (PVL-MSSA). In addition to COVID-19 and S. aureus co-infection, 26.1% (n = 30) of patients were co-infected with one or more separate pathogens such as Klebsiella pneumoniae (n = 6), Candida spp. (n = 6), Enterococcus spp. (n = 5), Haemophilus influenzae (n = 2), Proteus mirabilis (n = 2), and Escherichia coli (n = 2). Comprehensive patient co-infection data are reported in Table 1.

Diagnoses and treatments

Of all 115 reported cases of co-infection with COVID-19 and S. aureus, diagnosis of S. aureus infection was most frequently established by blood culture (64.3%, n = 74). S. aureus infections manifested predominantly as bacteremia (64.3%, n = 74) and pneumonia (55.7%, n = 64), accompanied by additional cases of endocarditis/vasculitis (3.5%, n = 4), cellulitis (1.7%, n = 2), and osteomyelitis (0.9%, n = 1). Additionally, two patients who tested positive for S. aureus with no clear infection source were suspected to be chronic carriers of the bacterial pathogen. Across this variety of infection presentations, the majority (76.5%, n = 88) experienced hospital-onset S. aureus co-infection following hospitalization for an initial infection with COVID-19, and 19 patients (16.5%) presented with S. aureus infection at the time of admission that was determined to be community-onset in etiology. Aside from a standard course of antibiotics, patients received a diversity of adjuvant treatments during their hospital admission, with the most common interventions including intubation and mechanical ventilation (74.8%, n = 86), central venous catheter placement (19.1%, n = 22), and corticosteroids (13.0%, n = 15). Table 2 describes the clinical course following hospital admission for each patient in comprehensive detail.
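Hospital-onset versus community-onset co-infection, as tallied above, is conventionally delineated at 48 hours after admission (the “community-associated” versus “healthcare-associated” cut-off cited in the Discussion [50]). A minimal sketch of that classification rule follows; the function and variable names are illustrative and are not taken from the included studies.

```python
from datetime import datetime, timedelta

# Illustrative application of the 48-hour convention cited in the Discussion [50]:
# S. aureus isolated more than 48 hours after admission is treated as healthcare-associated
# (hospital-onset); otherwise it is treated as community-associated (community-onset).
def classify_onset(admission_time: datetime, first_positive_culture: datetime) -> str:
    if first_positive_culture - admission_time > timedelta(hours=48):
        return "hospital-onset (healthcare-associated)"
    return "community-onset (community-associated)"

# Example with hypothetical timestamps:
admitted = datetime(2020, 5, 1, 8, 0)
culture = datetime(2020, 5, 4, 10, 0)
print(classify_onset(admitted, culture))  # hospital-onset (healthcare-associated)
```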
Complications and outcomes

During the hospital course of the 115 co-infected patients in our review, the most common complications were sepsis or systemic inflammatory response syndrome (23.5%, n = 27), acute kidney injury (5.2%, n = 6), acute respiratory distress syndrome (4.3%, n = 5), pneumonia (4.3%, n = 5), and multi-organ dysfunction or failure (4.3%, n = 5). Transfer to an intensive care unit during admission was clearly reported for 53.9% (n = 62) of patients, unnecessary for 4.3% (n = 5), and not reported for the remaining 41.8% (n = 48). Patients were admitted for a mean length of 26.2 days (SD = 26.7) to any type of inpatient hospital unit, with the length of hospital stay not reported in five cases. Upon analysis of the final outcomes reported for the hospital course of our co-infected COVID-19 and S. aureus patient sample, 71 (61.7%) patients died, 41 (35.7%) were discharged, two remained hospitalized and in stable condition at study conclusion, and one patient was placed in hospice care. Table 2 further details the specific complications presenting in each patient’s hospital trajectory, and Table 3 reports the final pooled frequencies of patient co-infection characteristics and outcomes.

Table 3 Pooled frequencies of patient co-infection characteristics and outcomes (n = 115); values are total (%)
Gender: Male 75 (65.3); Female 37 (32.1); Unspecified 3 (2.6)
Staphylococcal species: MSSA 59 (51.3); MRSA 57 (49.6)
Co-infection: Klebsiella pneumoniae 6 (5.2); Candida spp. 6 (5.2); Enterococcus spp. 5 (4.3); Haemophilus influenzae 2 (1.7); Escherichia coli 2 (1.7); Proteus mirabilis 2 (1.7); Acinetobacter baumannii 2 (1.7); Bacillus spp. 1 (0.9); Staphylococcus epidermidis 1 (0.9); Micrococcus spp. 1 (0.9); Pseudomonas spp. 1 (0.9); Morganella morganii 1 (0.9); Citrobacter werkmanii 1 (0.9); S. maltophilia 1 (0.9); Hepatitis C 1 (0.9); Herpes simplex virus 1 (0.9); Group B Streptococcus 1 (0.9); None 83 (72.2)
S. aureus diagnostic test: Blood culture 74 (64.3); Tracheal aspirate 46 (40.0); Sputum sample 11 (9.5); Nasal swab 2 (1.7); Lower respiratory tract sample 2 (1.7); Chronic carrier 2 (1.7); Wound culture 1 (0.9)
S. aureus diagnosis: Bacteremia 74 (63.4); Pneumonia 64 (55.7); Ventilator-associated 44 (38.3); Endocarditis/vasculitis 4 (3.5); Cellulitis 2 (1.7); Chronic carrier 2 (1.7); Osteomyelitis 1 (0.9); Not reported 2 (1.7)
S. aureus infection onset: Hospital 88 (76.5); Community 19 (16.5); Unclear 7 (6.1)
Complications: Sepsis/systemic inflammatory response syndrome 27 (23.5); Acute kidney injury 6 (5.2); Acute respiratory distress syndrome 5 (4.3); Pneumonia 5 (4.3); Multi-organ dysfunction/failure 5 (4.3); Bleeding/coagulopathy 5 (4.3); Hypoxic respiratory failure 3 (2.6); Myopathy/neuropathy 3 (2.6); Abscess formation 2 (1.7); Confusion and altered mental status 2 (1.7); Atrial fibrillation 2 (1.7); Endocarditis 2 (1.7); Anemia 1 (0.9); Cardiac arrest 1 (0.9); Thrombocytopenia 1 (0.9); Re-amputation 1 (0.9); Cholestatic liver injury 1 (0.9); Not reported 3 (2.6)
ICU: Yes 62 (53.9); No 5 (4.3); Not reported 48 (41.8)
Outcome: Death 71 (61.7); Discharge 41 (35.7); Hospital 2 (1.7); Hospice 1 (0.9)
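Summary statistics such as the mean length of stay (26.2 days, SD = 26.7) were computed over the 110 patients with a reported stay. The sketch below illustrates that computation; the list of stays is a short illustrative subset of the values in Table 2, not the full dataset, so its output will not reproduce the published figures.

```python
from statistics import mean, stdev

# Illustrative subset of lengths of stay (days) drawn from Table 2;
# the published mean (26.2) and SD (26.7) were computed over all patients
# with a reported length of stay, not over this subset.
lengths_of_stay = [16, 140, 15, 51, 17, 19, 16, 29, 13, 9, 27, 5, 47]

print(f"mean = {mean(lengths_of_stay):.1f} days")
print(f"SD   = {stdev(lengths_of_stay):.1f} days")
```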
Aureus Infection Onset Hospital88 (76.5) Community19 (16.5) Unclear7 (6.1) Complications  Sepsis/Systemic Inflammatory Response Syndrome27 (23.5) Acute kidney injury6 (5.2  Acute respiratory distress syndrome5 (4.3) Pneumonia5 (4.3) Multi-organ dysfunction/failure5 (4.3) Bleeding/coagulopathy5 (4.3) Hypoxic respiratory failure3 (2.6) Myopathy/neuropathy3 (2.6) Abscess formation2 (1.7) Confusion and altered mental status2 (1.7) Atrial fibrillation2 (1.7) Endocarditis2 (1.7) Anemia1 (0.9) Cardiac arrest1 (0.9) Thrombocytopenia1 (0.9) Re-amputation1 (0.9) Cholestatic liver injury1 (0.9) Not reported3 (2.6) ICU  Yes62 (53.9) No5 (4.3) Not reported48 (41.8) Outcome  Death71 (61.7) Discharge41 (35.7) Hospital2 (1.7) Hospice1 (0.9) Pooled frequencies of patient co-infection characteristics and outcomes (n = 115) Discussion: As our evidence base of the outcomes of patients with COVID-19 infection continues to expand, thorough review of the various clinical scenarios and environments inherent to the treatment process of this disease are crucial for patient care management and improvement. Given that higher levels of morbidity and death have been observed in influenza patients co-infected with multiple pathogens during past pandemics [47], exploring the outcomes of co-infected COVID-19 patients may establish similar trends and reveal strategies for decreasing the morbidity and mortality of this population in our current pandemic. Our review of the available clinical data reporting the outcomes of patients co-infected with COVID-19 and the common bacterial pathogen, S. aureus, was purposed to augment this knowledge base and has produced several key findings regarding mortality rate, co-infection onset, and treatment considerations for these patients. Foremost, the mortality rate in our review for patients co-infected with COVID-19 and S. aureus was 61.7%, which depicts a significantly increased mortality rate when contrasted with patients infected solely by COVID-19 [48]. This outcome is comparable to the increased morality rates observed in patients acquiring co-infection with S. aureus in addition to influenza [10], however, our findings emphasize an important difference in the etiology of COVID-19 and influenza co-infection with S. aureus. For influenza specifically, co-infection with S. aureus is predominantly diagnosed upon patient presentation to a healthcare setting, indicating that the community is a frequent and supportive environment for the co-infection processes of these pathogens [9, 49]. In contrast, our findings indicate that co-infection with S. aureus predominantly occurs in the hospital environment for patients with COVID-19 infection. The terminology used to differentiate these infection etiologies is “community-associated” versus “healthcare-associated,” with delineation between these diagnoses occurring at 48-hours after admission to a hospital or healthcare facility [50]. Given that co-infection with COVID-19 and S. aureus occurred after hospital admission in 76.5% of the patients in our review, preventative measures in the community-setting or treatment in an outpatient environment may be important considerations for mortality reduction from healthcare-associated S. aureus infection. Importantly, while the predominance of S. 
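The percentages in Table 3 are simple pooled frequencies over the 115 extracted cases (count of a category divided by 115). A minimal sketch of how such a pooled-frequency summary could be recomputed from case-level records; the record layout and field names below are illustrative assumptions, not the review's actual extraction sheet:

from collections import Counter

# Illustrative case-level records; in the review these came from 28 publications.
cases = [
    {"strain": "MRSA", "onset": "Hospital", "icu": "Yes", "outcome": "Death"},
    {"strain": "MSSA", "onset": "Community", "icu": "Not reported", "outcome": "Discharge"},
    # ... one dict per co-infected patient (n = 115 in the review)
]

def pooled_frequencies(records, field):
    """Count each category of `field` and report n (%) over all records."""
    counts = Counter(r[field] for r in records)
    total = len(records)
    return {category: (n, round(100 * n / total, 1)) for category, n in counts.items()}

for field in ("strain", "onset", "icu", "outcome"):
    print(field, pooled_frequencies(cases, field))

Run over the full set of 115 extracted cases, this reproduces figures such as 71/115 = 61.7% mortality and 62/115 = 53.9% ICU admission reported above.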
Discussion: As our evidence base on the outcomes of patients with COVID-19 infection continues to expand, thorough review of the various clinical scenarios and environments inherent to the treatment process of this disease is crucial for patient care management and improvement. Given that higher levels of morbidity and death have been observed in influenza patients co-infected with multiple pathogens during past pandemics [47], exploring the outcomes of co-infected COVID-19 patients may establish similar trends and reveal strategies for decreasing the morbidity and mortality of this population in our current pandemic. Our review of the available clinical data reporting the outcomes of patients co-infected with COVID-19 and the common bacterial pathogen S. aureus was intended to augment this knowledge base and has produced several key findings regarding mortality rate, co-infection onset, and treatment considerations for these patients. Foremost, the mortality rate in our review for patients co-infected with COVID-19 and S. aureus was 61.7%, which represents a markedly increased mortality rate when contrasted with patients infected solely by COVID-19 [48]. This outcome is comparable to the increased mortality rates observed in patients acquiring co-infection with S. aureus in addition to influenza [10]; however, our findings emphasize an important difference in the etiology of COVID-19 and influenza co-infection with S. aureus. For influenza specifically, co-infection with S. aureus is predominantly diagnosed upon patient presentation to a healthcare setting, indicating that the community is a frequent and supportive environment for the co-infection processes of these pathogens [9, 49]. In contrast, our findings indicate that co-infection with S. aureus predominantly occurs in the hospital environment for patients with COVID-19 infection. The terminology used to differentiate these infection etiologies is "community-associated" versus "healthcare-associated," with delineation between these diagnoses occurring at 48 hours after admission to a hospital or healthcare facility [50]. Given that co-infection with COVID-19 and S. aureus occurred after hospital admission in 76.5% of the patients in our review, preventative measures in the community setting or treatment in an outpatient environment may be important considerations for reducing mortality from healthcare-associated S. aureus infection. Importantly, while the predominance of S. aureus co-infections occurring after patient admission for COVID-19 infection is likely associated with a wide diversity of patient- and environment-specific factors, our findings suggest that this infection sequence may be partly attributed to the COVID-19 treatment course. The most common patient interventions identified in our review included intubation and mechanical ventilation, central venous catheter placement, and corticosteroids, each of which is associated with an increased risk of bacterial infection through the introduction of a foreign body or through immunosuppressive properties that support bacterial growth [51, 52]. Although these first-line treatments for decompensating patients presenting with severe COVID-19 infection may predispose patients to S. aureus bacterial co-infection and subsequently increased mortality rates, they are often unavoidable during the patient treatment course. Vigilant management of these interventions in patients with COVID-19 infection, such as timely central line or ventilator removal and prudent steroid dosing, is a key quality improvement practice that warrants routine physician adherence given the mortality rates associated with co-infection. In contrast to COVID-19 infection alone, the increased patient morbidity and mortality of COVID-19 and healthcare-associated S. aureus co-infection identified in our review have important implications for future research and clinical practice. Our findings further emphasize the clear public health imperative of COVID-19 vaccination to reduce both infection and the symptom severity that may necessitate hospital interventions and predispose patients to subsequent S. aureus co-infection. The effectiveness of this strategy is exemplified by the reduction in influenza and S. aureus pathology observed with increased influenza vaccination [53, 54]. As seen with influenza co-infection, vaccination may be a crucial harm reduction measure given that no S. aureus prophylaxis exists and the incidence of S. aureus strains refractory to antibiotics is rising [55]. Additionally, the mortality trends observed in COVID-19 patients co-infected with S. aureus highlight the necessity for future reviews and clinical studies focused on the co-infection outcomes of other bacterial and viral pathogens alongside COVID-19. Further research may inform our ability to predict the trajectory of patients with various co-infections and identify infection patterns that influence treatment decisions. To our knowledge, this is the first study to review and evaluate the outcomes of patients co-infected with COVID-19 and S. aureus. However, we acknowledge several limitations to this review. First, the majority of the studies included in our review were individual case reports, owing to the recent emergence of COVID-19 and the limited literature exploring outcomes for patients co-infected with S. aureus. While these types of studies can be vital for expanding the medical knowledge base and revealing fundamental disease characteristics, it is crucial to consider the reporting bias inherent in this study design and the lack of comparison groups. Per our quality assessment, trends in study limitations for each type of publication were variable. Accordingly, our intent for this review was to pool these outcomes in order to reduce this bias and to transparently report each case for appropriate assessment and application of our findings.
In addition, Cusumano et al.'s [9] case series comprised 42 of the patients in our review and used a study end-point of death at 30 days, indicating that the true mortality rate of patients with COVID-19 and S. aureus co-infection may be higher if related complications necessitate an extended hospital course. Future high-quality clinical studies examining patient outcomes are warranted and of critical importance to further expand on the findings of our systematic review. Conclusion: In contrast to patients infected solely with COVID-19, co-infection with COVID-19 and S. aureus demonstrates a higher patient mortality rate during hospital admission. S. aureus co-infection in COVID-19 patients is predominantly healthcare-associated, and common hospital interventions for patients with severe COVID-19 infection may increase the risk for bacterial infection. Our findings emphasize the imperative of COVID-19 vaccination to prevent hospitalization for COVID-19 treatment and the subsequent susceptibility to hospital-acquired S. aureus co-infection. Supplementary Information: Additional file 1: Table S1. Search strategies, conducted between July 3, 2021, and July 16, 2021. Total results = 1922. Table S2. Joanna Briggs Quality Assessment for case reports included in the review. Table S3. Joanna Briggs Quality Assessment for case-series included in the review. Table S4. Joanna Briggs Quality Assessment for cohort studies included in the review. Table S5. Excluded articles after full-text analysis, with reason (n = 64).
Background: Endemic to the hospital environment, Staphylococcus aureus (S. aureus) is a leading bacterial pathogen that causes deadly infections such as bacteremia and endocarditis. In past viral pandemics, it has been the principal cause of secondary bacterial infections, significantly increasing patient mortality rates. Our world now combats the rapid spread of COVID-19, leading to a pandemic with a death toll greatly surpassing those of many past pandemics. However, the impact of co-infection with S. aureus remains unclear. Therefore, we aimed to perform a high-quality scoping review of the literature to synthesize the existing evidence on the clinical outcomes of COVID-19 and S. aureus co-infection. Methods: A scoping review of the literature was conducted in PubMed, Scopus, Ovid MEDLINE, CINAHL, ScienceDirect, medRxiv, and the WHO COVID-19 database using a combination of terms. Articles that were in English, included patients infected with both COVID-19 and S. aureus, and provided a description of clinical outcomes for patients were eligible. From these articles, the following data were extracted: type of staphylococcal species, onset of co-infection, patient sex, age, symptoms, hospital interventions, and clinical outcomes. Quality assessments of final studies were also conducted using the Joanna Briggs Institute's critical appraisal tools. Results: Searches generated a total of 1922 publications, and 28 articles were eligible for the final analysis. Of the 115 co-infected patients, there were a total of 71 deaths (61.7%) and 41 discharges (35.7%), with 62 patients (53.9%) requiring ICU admission. Patients were infected with methicillin-sensitive and methicillin-resistant strains of S. aureus, with the majority (76.5%) acquiring co-infection with S. aureus following hospital admission for COVID-19. Aside from antibiotics, the most commonly reported hospital interventions were intubation with mechanical ventilation (74.8 %), central venous catheter (19.1 %), and corticosteroids (13.0 %). Conclusions: Given the mortality rates reported thus far for patients co-infected with S. aureus and COVID-19, COVID-19 vaccination and outpatient treatment may be key initiatives for reducing hospital admission and S. aureus co-infection risk. Physician vigilance is recommended during COVID-19 interventions that may increase the risk of bacterial co-infection with pathogens, such as S. aureus, as the medical community's understanding of these infection processes continues to evolve.
Background: Upon passage of the March 11th anniversary of the official declaration of the coronavirus disease 2019 (COVID-19) pandemic [1], the causative severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pathogen has infected over 181 million individuals and resulted in more than 3.9 million deaths worldwide as of July 1, 2021 [2]. In addition to rapid spread through high transmission rates [3], infection with COVID-19 can result in severe complications such as acute respiratory distress syndrome (ARDS), thromboembolic events, septic shock, and multi-organ failure [4]. In response to this novel virus, the clinical environment has evolved to accommodate the complexities of healthcare delivery in the pandemic environment [5]. Accordingly, a particularly challenging scenario for clinicians is the management of patients with common infections that may be complicated by subsequent COVID-19 co-infection, or conversely co-infected with a pathogen following primary infection with COVID-19 [6]. Bacterial co-infection in COVID-19 patients may exacerbate the immunocompromised state caused by COVID-19, further worsening clinical prognosis [7]. Implicated as a leading bacterial pathogen in both community- and healthcare-associated infections, Staphylococcus aureus (S. aureus) is commonly feared in the hospital environment for its risk of deadly outcomes such as endocarditis, bacteremia, sepsis, and death [8]. In past viral pandemics, S. aureus has been the principal cause of secondary bacterial infections, significantly increasing patient mortality rates [9]. For viral influenza infection specifically, S. aureus co-infection and bacteremia have been associated with mortality rates of almost 50%, in contrast to the 1.4% mortality rate observed in patients infected with influenza alone [10]. Given the parallels between the clinical presentation, course, and outcomes of influenza and COVID-19 viral infection [11], mortality rates in COVID-19 patients co-infected with S. aureus may reflect those observed in influenza patients. However, while recent studies have focused on the incidence and prevalence of COVID-19 and S. aureus co-infection, the clinical outcomes of patients co-infected with these two specific pathogens remain unclear given that existing studies consolidate S. aureus patient outcomes with other bacterial pathogens [12–14]. Given that the literature informing our knowledge of COVID-19 is a dynamic and evolving entity, the purpose of this scoping review is to evaluate the current body of evidence reporting the clinical outcomes of patients co-infected with COVID-19 and S. aureus. To date, there has been no review focusing specifically on the clinical treatment courses and subsequent outcomes of COVID-19 and S. aureus co-infection. In response to the urgency of the pandemic state and high rates of COVID-19 hospital admissions, we aim to identify important areas for further research and explore potential implications for clinical practice. Conclusion: In contrast to patients infected solely with COVID-19, co-infection with COVID-19 and S. aureus demonstrates a higher patient mortality rate during hospital admission. S. aureus co-infection in COVID-19 patients is predominantly healthcare-associated, and common hospital interventions for patients with severe COVID-19 infection may increase the risk for bacterial infection.
Our findings emphasize the imperative of COVID-19 vaccination to prevent hospitalization for COVID-19 treatment and the subsequent susceptibility to hospital-acquired S. aureus co-infection.
Background: Endemic to the hospital environment, Staphylococcus aureus (S. aureus) is a leading bacterial pathogen that causes deadly infections such as bacteremia and endocarditis. In past viral pandemics, it has been the principal cause of secondary bacterial infections, significantly increasing patient mortality rates. Our world now combats the rapid spread of COVID-19, leading to a pandemic with a death toll greatly surpassing those of many past pandemics. However, the impact of co-infection with S. aureus remains unclear. Therefore, we aimed to perform a high-quality scoping review of the literature to synthesize the existing evidence on the clinical outcomes of COVID-19 and S. aureus co-infection. Methods: A scoping review of the literature was conducted in PubMed, Scopus, Ovid MEDLINE, CINAHL, ScienceDirect, medRxiv, and the WHO COVID-19 database using a combination of terms. Articles that were in English, included patients infected with both COVID-19 and S. aureus, and provided a description of clinical outcomes for patients were eligible. From these articles, the following data were extracted: type of staphylococcal species, onset of co-infection, patient sex, age, symptoms, hospital interventions, and clinical outcomes. Quality assessments of final studies were also conducted using the Joanna Briggs Institute's critical appraisal tools. Results: Searches generated a total of 1922 publications, and 28 articles were eligible for the final analysis. Of the 115 co-infected patients, there were a total of 71 deaths (61.7%) and 41 discharges (35.7%), with 62 patients (53.9%) requiring ICU admission. Patients were infected with methicillin-sensitive and methicillin-resistant strains of S. aureus, with the majority (76.5%) acquiring co-infection with S. aureus following hospital admission for COVID-19. Aside from antibiotics, the most commonly reported hospital interventions were intubation with mechanical ventilation (74.8 %), central venous catheter (19.1 %), and corticosteroids (13.0 %). Conclusions: Given the mortality rates reported thus far for patients co-infected with S. aureus and COVID-19, COVID-19 vaccination and outpatient treatment may be key initiatives for reducing hospital admission and S. aureus co-infection risk. Physician vigilance is recommended during COVID-19 interventions that may increase the risk of bacterial co-infection with pathogens, such as S. aureus, as the medical community's understanding of these infection processes continues to evolve.
15,944
459
[ 525, 1672, 350, 149, 79, 248, 1019, 229, 1629, 202, 296, 757, 188 ]
17
[ "infection", "aureus", "19", "patients", "covid", "co", "covid 19", "patient", "20case", "culture" ]
[ "covid 19 pandemic", "covid 19 severe", "covid 19 patients", "novel coronavirus 2019", "coronavirus disease 2019" ]
null
[CONTENT] COVID-19 | Staphylococcus aureus | Co-infection | Antibiotics | Hospitalization | Infection [SUMMARY]
null
[CONTENT] COVID-19 | Staphylococcus aureus | Co-infection | Antibiotics | Hospitalization | Infection [SUMMARY]
[CONTENT] COVID-19 | Staphylococcus aureus | Co-infection | Antibiotics | Hospitalization | Infection [SUMMARY]
[CONTENT] COVID-19 | Staphylococcus aureus | Co-infection | Antibiotics | Hospitalization | Infection [SUMMARY]
[CONTENT] COVID-19 | Staphylococcus aureus | Co-infection | Antibiotics | Hospitalization | Infection [SUMMARY]
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | SARS-CoV-2 | Staphylococcal Infections | Staphylococcus aureus [SUMMARY]
null
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | SARS-CoV-2 | Staphylococcal Infections | Staphylococcus aureus [SUMMARY]
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | SARS-CoV-2 | Staphylococcal Infections | Staphylococcus aureus [SUMMARY]
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | SARS-CoV-2 | Staphylococcal Infections | Staphylococcus aureus [SUMMARY]
[CONTENT] COVID-19 | COVID-19 Vaccines | Humans | SARS-CoV-2 | Staphylococcal Infections | Staphylococcus aureus [SUMMARY]
[CONTENT] covid 19 pandemic | covid 19 severe | covid 19 patients | novel coronavirus 2019 | coronavirus disease 2019 [SUMMARY]
null
[CONTENT] covid 19 pandemic | covid 19 severe | covid 19 patients | novel coronavirus 2019 | coronavirus disease 2019 [SUMMARY]
[CONTENT] covid 19 pandemic | covid 19 severe | covid 19 patients | novel coronavirus 2019 | coronavirus disease 2019 [SUMMARY]
[CONTENT] covid 19 pandemic | covid 19 severe | covid 19 patients | novel coronavirus 2019 | coronavirus disease 2019 [SUMMARY]
[CONTENT] covid 19 pandemic | covid 19 severe | covid 19 patients | novel coronavirus 2019 | coronavirus disease 2019 [SUMMARY]
[CONTENT] infection | aureus | 19 | patients | covid | co | covid 19 | patient | 20case | culture [SUMMARY]
null
[CONTENT] infection | aureus | 19 | patients | covid | co | covid 19 | patient | 20case | culture [SUMMARY]
[CONTENT] infection | aureus | 19 | patients | covid | co | covid 19 | patient | 20case | culture [SUMMARY]
[CONTENT] infection | aureus | 19 | patients | covid | co | covid 19 | patient | 20case | culture [SUMMARY]
[CONTENT] infection | aureus | 19 | patients | covid | co | covid 19 | patient | 20case | culture [SUMMARY]
[CONTENT] rates | covid | covid 19 | 19 | clinical | co | influenza | aureus | infection | outcomes [SUMMARY]
null
[CONTENT] 20case | culture | intubation | cough | onsetfever | ventilation | intubation ventilation | opacities | diabetes | disease [SUMMARY]
[CONTENT] covid | covid 19 | 19 | infection | hospital | co infection covid | infection covid | infection covid 19 | co infection covid 19 | co infection [SUMMARY]
[CONTENT] infection | aureus | 19 | covid | co | covid 19 | patients | case | quality | co infection [SUMMARY]
[CONTENT] infection | aureus | 19 | covid | co | covid 19 | patients | case | quality | co infection [SUMMARY]
[CONTENT] Staphylococcus aureus ||| secondary ||| COVID-19 ||| S. ||| COVID-19 | S. aureus [SUMMARY]
null
[CONTENT] 1922 | 28 ||| 115 | 71 | 61.7% | 41 | 35.7% | 62 | 53.9% | ICU ||| methicillin | methicillin | 76.5% | S. | COVID-19 ||| 74.8 % | 19.1 % | 13.0 % [SUMMARY]
[CONTENT] COVID-19 | COVID-19 | S. ||| COVID-19 [SUMMARY]
[CONTENT] Staphylococcus aureus ||| secondary ||| COVID-19 ||| S. ||| COVID-19 | S. aureus ||| PubMed, Scopus, | Ovid MEDLINE | ScienceDirect ||| English | COVID-19 | S. aureus ||| ||| the Joanna Briggs Institute's ||| ||| 1922 | 28 ||| 115 | 71 | 61.7% | 41 | 35.7% | 62 | 53.9% | ICU ||| methicillin | methicillin | 76.5% | S. | COVID-19 ||| 74.8 % | 19.1 % | 13.0 % ||| COVID-19 | COVID-19 | S. ||| COVID-19 [SUMMARY]
[CONTENT] Staphylococcus aureus ||| secondary ||| COVID-19 ||| S. ||| COVID-19 | S. aureus ||| PubMed, Scopus, | Ovid MEDLINE | ScienceDirect ||| English | COVID-19 | S. aureus ||| ||| the Joanna Briggs Institute's ||| ||| 1922 | 28 ||| 115 | 71 | 61.7% | 41 | 35.7% | 62 | 53.9% | ICU ||| methicillin | methicillin | 76.5% | S. | COVID-19 ||| 74.8 % | 19.1 % | 13.0 % ||| COVID-19 | COVID-19 | S. ||| COVID-19 [SUMMARY]
Ginseng extract and ginsenoside Rb1 attenuate carbon tetrachloride-induced liver fibrosis in rats.
25344394
Ginsenosides, the major bioactive compounds in ginseng root, have been found to have antioxidant, immunomodulatory and anti-inflammatory activities. This study investigated the effects of ginsenosides on carbon tetrachloride (CCl4)-induced hepatitis and liver fibrosis in rats.
BACKGROUND
Male Sprague-Dawley rats were randomly divided into four groups: control, CCl4, CCl4 + 0.5 g/kg Panax ginseng extract and CCl4 + 0.05 g/kg ginsenoside Rb1 groups. The treated groups were orally given Panax ginseng extract or ginsenoside Rb1 beginning two weeks before the induction of liver injury and continuing for 9 successive weeks. Liver injury was induced by intraperitoneal injection of 400 ml/l CCl4 at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was intraperitoneally injected with olive oil.
METHODS
The pathological results showed that ginsenoside Rb1 decreased the hepatic fat deposition (2.65 ± 0.82 vs 3.50 ± 0.75, p <0.05) and Panax ginseng extract lowered the hepatic reticular fiber accumulation (1.05 ± 0.44 vs 1.60 ± 0.39, p <0.01) that were increased by CCl4. Plasma alanine aminotransferase and aspartate aminotransferase activities were increased by CCl4 (p <0.01), and aspartate aminotransferase activity was decreased by Panax ginseng extract at week 9 (p <0.05). After exposure to CCl4 for 7 weeks, the levels of plasma and hepatic triglycerides (p <0.01), hepatic cholesterol (p <0.01), interleukin-1β (p <0.01), prostaglandin E2 (p <0.05), soluble intercellular adhesion molecule-1 (p <0.05), hydroxyproline (p <0.05), matrix metalloproteinase-2 (p <0.05) and tissue inhibitor of metalloproteinase-1 (TIMP-1) (p <0.01) were elevated, whereas the hepatic interleukin-10 level was lowered (p <0.05). Both Panax ginseng extract and ginsenoside Rb1 decreased plasma and hepatic triglyceride, hepatic prostaglandin E2, hydroxyproline and TIMP-1 levels, and Panax ginseng extract further inhibited interleukin-1β concentrations (p <0.05).
RESULTS
Panax ginseng extract and ginsenoside Rb1 attenuate plasma aminotransferase activities and liver inflammation to inhibit CCl4-induced liver fibrosis through down-regulation of hepatic prostaglandin E2 and TIMP-1.
CONCLUSIONS
[ "Animals", "Carbon Tetrachloride", "Ginsenosides", "Humans", "Intercellular Adhesion Molecule-1", "Interleukin-10", "Interleukin-1beta", "Liver Cirrhosis", "Male", "Matrix Metalloproteinase 2", "Panax", "Plant Extracts", "Rats", "Rats, Sprague-Dawley" ]
4216840
Background
Liver cirrhosis is an irreversible stage in the process of liver damage that occurs after liver fibrosis. Liver fibrosis is attributed to inflammation, excessive accumulation of extracellular matrix and tissue remodeling under wound healing [1]. Chronic hepatitis and liver cirrhosis are positively associated with the occurrence of hepatocellular carcinoma [2, 3]. Therefore, the inhibition of hepatic inflammation and fibrosis is crucial in preventing the occurrence of liver cirrhosis and hepatocellular carcinoma. Oxidative stress from reactive oxygen species plays an important role in liver fibrogenesis [4]. Carbon tetrachloride (CCl4) is considered as a toxic chemical that induces hepatotoxicity including fatty degeneration, inflammation, fibrosis, hepatocellular death and carcinogenicity [5, 6]. Trichloromethyl radical produced from the metabolism of CCl4 initiates a chain reaction to cause lipid peroxidation, membrane dysfunction and further hepatotoxic damage [6]. The toxic metabolite of CCl4 can activate Kupffer cells to secrete cytokines such as interleukin-1 (IL-1) and tumor necrosis factor-α (TNF-α), stimulate transforming growth factor-β (TGF-β) production, inhibit nitric oxide (NO) formation and induce inflammation and liver fibrosis [6–8]. Matrix metalloproteinase (MMP)-2, known as type IV collagenase and gelatinase A, acts as the regulator for the breakdown of extracellular matrix, and tissue inhibitor metalloproteinase (TIMP)-1, as the inhibitor of MMPs, exhibits anti-fibrolytic, growth-stimulated and anti-apoptotic activities [9]. Chronic exposure of CCl4 leads to liver fibrosis, which diminishes extracellular matrix degradation and increases MMP-2 secretion through the induction of tissue inhibitor TIMPs [9]. Panax ginseng (P. ginseng) root has been commonly used in oriental medicine, diet or dietary supplement. Ginsenosides, a class of steroid glycosides and triterpene saponins, are the major bioactive compounds in P. ginseng root and ginsenoside Rb1 (C54H92O23, molecular weight: 1109.3) is considered as the most abundant ginsenoside among more than 30 ginsenosides in P. ginseng [10, 11]. The previous studies have reported that P. ginseng and its active components or metabolites had antioxidant, immunomodulatory, anti-inflammatory, and lipid-lowering effects [12–15]. Many studies have shown that ginsenoside Rb1 and its metabolite compound K attenuated liver injury through inhibiting lipid peroxidation, TNF-α, NO, prostaglandin E2 (PGE2), intercellular adhesion molecule (ICAM)-1 and nuclear factor-κB (NF-κB) activation [16–19]. However, the effect of ginsenosides on liver fibrosis is not clear. Considering ginsenoside Rb1 as the most abundant ginsenoside in P. ginseng [10, 11] and its hepatoprotective activity [16–19], therefore, this study investigated the protective effects of P. ginseng extract (ginseng extract) and ginsenoside Rb1 on CCl4-induced liver inflammation and fibrosis in rats.
Methods
Animals and treatments Sprague–Dawley rats weighing 200–250 g were purchased from the National Laboratory Animal Center (Taipei, Taiwan). Rats were housed under a 12-h light–dark cycle at 22-24°C with a relative humidity of 65-70%. After one-week adaptation, rats were randomly divided into four groups (n =10 per group): control, CCl4, CCl4 + ginseng extract (GE) and CCl4 + ginsenoside Rb1 (Rb1) groups. The normal diet based on Laboratory Rodent Diet 5001 powder was purchased from PMI Nutrition International Inc. (Brentwood, MO). Ginseng extract (Ashland Inc., Covington, KY, USA) containing 800 g ginsenosides/kg extract (80%) (ginsenosides in the extract include Rb1, Rc, Rd, Rg1, Rg2, Rg3, Rh1 and Rh2) and ginsenoside Rb1 (China Chemical & Pharmaceutical Co., Ltd., Taipei, Taiwan) with 98% purity were blended with the normal diet at a dose of 0.5 g/kg and 0.05 g/kg, respectively. Ginsenoside Rb1 content was equal in the GE and Rb1 groups. Rats were fed ginseng extract or ginsenoside Rb1 two weeks before (week 0, W0) the induction of liver injury by intraperitoneal injection of 400 ml/l CCl4 in olive oil at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was injected with an equal volume of olive oil without CCl4. Food intake, water intake and body weight were recorded throughout the 9-week experimental period. This study was approved by the Institutional Animal Care and Use Committee of Taipei Medical University.
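Because ginseng extract and ginsenoside Rb1 were blended into the diet rather than gavaged, the delivered dose follows from the diet concentration, daily food intake and body weight. A small sketch of that arithmetic; the food intake and body weight values are illustrative stand-ins (the measured averages appear in the Results):

# Dose delivered via diet admixture: concentration in the diet (g compound per kg diet)
# multiplied by daily food intake, then normalized to body weight.
def daily_dose(diet_conc_g_per_kg, food_intake_g_per_day, body_weight_kg):
    # g/kg of diet equals mg/g of diet, so multiplying by intake in g gives mg/day
    intake_mg_per_day = diet_conc_g_per_kg * food_intake_g_per_day
    return intake_mg_per_day, intake_mg_per_day / body_weight_kg

# Illustrative values: 0.5 g extract/kg diet, ~25 g feed/day, ~0.37 kg rat
mg_day, mg_kg_bw = daily_dose(0.5, 25.2, 0.373)
print(f"ginseng extract: {mg_day:.1f} mg/day, {mg_kg_bw:.1f} mg/kg body weight/day")

mg_day_rb1, mg_kg_bw_rb1 = daily_dose(0.05, 25.4, 0.373)
print(f"ginsenoside Rb1: {mg_day_rb1:.1f} mg/day, {mg_kg_bw_rb1:.1f} mg/kg body weight/day")

With the reported mean food intake of roughly 25 g/day, this works out to about 12.6 mg extract/day and 1.3 mg Rb1/day, consistent with the daily intakes reported for the GE and Rb1 groups in the Results.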
Histopathological examination After 9 weeks, rats were euthanized with ether and liver samples from left lateral lobe, median lobe and right lateral lobe were collected for histopathological and biochemical analyses. Excised liver specimens from different lobes (1 cm × 1 cm) were fixed in 10% paraformaldehyde, embedded in paraffin, sectioned and stained with hematoxylin and eosin (H&E), Masson’s trichrome or silver. The specimens were coded with a single-blind method and graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation, and fibrosis under a light microscope by a pathologist.
Plasma alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activities Blood samples from rat tails were collected into heparin-containing tubes at weeks 0, 2 (CCl4 injection) and 9. Blood was centrifuged at 3000 g for 15 min at 4°C. Plasma ALT and AST activities were measured spectrophotometrically at 570 nm using a commercial kit (RM 163-K, Iatron Laboratories Inc., Tokyo, Japan).
Plasma and hepatic lipid concentrations Blood samples from rat tails were collected at weeks 0, 2 and 9, and centrifuged at 3000 g for 15 min at 4°C. Liver samples from left lateral lobe, median lobe and right lateral lobe were homogenized in chloroform/methanol (2:1) solution and extracted by chloroform/methanol/water (3:48:47) solution. Triglycerides and total cholesterol concentrations in plasma and liver were determined spectrophotometrically at 500 nm using commercial enzymatic kits (Randox® TR213 for triglycerides, Randox® CH201 for total cholesterol, Randox Laboratories Ltd., London, UK).
Hepatic inflammatory markers Liver slices (0.5 g) were homogenized in 1.5 mL of buffer solution (50 mmol/l Tris, 150 mmol/l NaCl, and 10 ml/l Triton X-100, pH 7.2) [20] and mixed with 100 μl of proteinase inhibitor cocktail solution (P8340, Sigma-Aldrich, Inc., Saint Louis, USA). Liver homogenate was centrifuged at 3000 g for 15 min at 4°C for TNF-α, IL-1β and IL-10 analysis. For PGE2 and soluble ICAM-1 (sICAM-1) analysis, liver slices (0.5 g) were mixed with 1.0 ml of homogenized buffer (0.25 mol/l sucrose, 50 mmol/l Tris–HCl, and 5 mmol/l EDTA, pH 7.5). Liver homogenate was centrifuged at 8000 g for 15 min at 4°C. Hepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels were measured spectrophotometrically using enzyme-linked immunosorbent assay (ELISA) kits (Quantikine® RTA00 for TNF-α, Quantikine® RLB00 for IL-1β, DuoSet® DY522 for IL-10, Quantikine® KGE004 for PGE2, Quantikine® RIC100 for sICAM-1, R&D Systems, Inc., Minneapolis, USA). Hepatic supernatant was separately incubated with rat anti-TNF-α, anti-IL-1β, anti-IL-10, anti-PGE2 or anti-sICAM-1, then washed with wash buffer (0.05% Tween® in phosphate buffer solution, PBS) followed by incubation with polyclonal antibody against TNF-α, IL-1β, PGE2 or sICAM-1 conjugated to horseradish peroxidase or biotinylated anti-IL-10 secondary antibody with streptavidin conjugated to horseradish peroxidase, respectively. After washing with wash buffer several times, the substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added and the reaction was terminated by adding diluted hydrochloric acid. The absorbance was determined at 450 nm. Protein concentration was measured by the method of Lowry et al. [21].
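The ELISA readouts above are absorbances at 450 nm; concentrations are normally interpolated from a standard curve run on the same plate. A minimal sketch of that back-calculation using a four-parameter logistic (4PL) fit; the standard concentrations and optical densities below are placeholders, not data from this study:

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = response at zero concentration,
    d = response at infinite concentration, c = inflection point, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Placeholder standard curve (concentration in pg/ml vs optical density at 450 nm)
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0, 2000.0])
std_od = np.array([0.12, 0.20, 0.35, 0.60, 1.00, 1.55, 2.10])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 300.0, 2.5], maxfev=10000)
a, b, c, d = params

def od_to_conc(od):
    """Invert the 4PL fit to estimate concentration from a sample OD."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

sample_od = 0.85
print(f"estimated concentration: {od_to_conc(sample_od):.1f} pg/ml")
# Dividing by the protein concentration of the homogenate would express results
# as pg/mg protein, the unit used in Table 4.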
Hepatic hydroxyproline, MMP-2 and TIMP-1 levels Hepatic hydroxyproline level was measured by colorimetric assay. Freeze-dried liver specimen (0.25 g) was homogenized with 2 ml of distilled water. Liver homogenate was hydrolyzed in alkaline solution (2 mol/L NaOH), oxidized with chloramine-T reagent, and incubated with Ehrlich’s reagent at 65°C. The chromogenic product was determined spectrophotometrically at 550 nm. The levels of MMP-2 and TIMP-1 in the liver were determined by commercial kits (Quantikine® DMP200 for MMP-2, Quantikine® RTM100 for TIMP-1, R&D Systems, Inc.) using ELISA. Liver slices were homogenized with PBS and proteinase inhibitor cocktail solution and centrifuged at 12000 g for 10 min at 4°C. The supernatant was centrifuged again and collected for further analysis. Hepatic supernatant was separately incubated with rat anti-MMP-2 or anti-TIMP-1, washed with wash buffer, and incubated with polyclonal antibody against MMP-2 or TIMP-1 conjugated to horseradish peroxidase followed by several washes with wash buffer. The substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added for the reaction and stop solution (diluted hydrochloric acid) was then added to stop the reaction. The absorbance was measured at 450 nm.
Statistical analysis All data were expressed as mean ± SD. The data were analyzed by one-way analysis of variance (ANOVA) using Statistical Analysis System (SAS version 9.1, SAS Institute Inc., Cary, NC, USA). The difference between any two groups was analyzed by Fisher’s least significant difference test. A value of p <0.05 was considered significant.
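The analysis above is a one-way ANOVA followed by Fisher's least significant difference (LSD) test, run in SAS 9.1. A minimal re-implementation sketch of the same procedure; the group values below are placeholders, not the study's measurements:

import numpy as np
from scipy import stats

# Placeholder measurements for the four groups (e.g., one hepatic marker)
groups = {
    "Control": np.array([4.1, 4.5, 3.9, 4.3, 4.0]),
    "CCl4":    np.array([6.8, 7.2, 6.5, 7.0, 6.9]),
    "GE":      np.array([5.0, 5.4, 4.9, 5.2, 5.1]),
    "Rb1":     np.array([5.3, 5.1, 5.6, 5.4, 5.0]),
}

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Fisher's LSD: pairwise comparisons using the pooled within-group variance (MSE),
# conventionally applied only when the overall ANOVA is significant.
n_total = sum(len(v) for v in groups.values())
k = len(groups)
mse = sum(((v - v.mean()) ** 2).sum() for v in groups.values()) / (n_total - k)
df_error = n_total - k

names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        a, b = groups[names[i]], groups[names[j]]
        se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        t = (a.mean() - b.mean()) / se
        p = 2 * stats.t.sf(abs(t), df_error)
        print(f"{names[i]} vs {names[j]}: t = {t:.2f}, p = {p:.4f}")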
Results
Body weight, liver weight and food intake The results of body weight, liver weight and food intake were shown in Table 1 to monitor the effects of the treatments on gross growth and liver weight. Final body weight and weight gain were significantly higher in the control group than those in the CCl4 (p <0.01), GE (p <0.05), and Rb1 (p <0.05) groups (Table 1), but not significantly different among the three CCl4 treated groups. Daily intake of ginseng extract and ginsenoside Rb1 was 12.6 ± 0.6 mg (33.8 ± 1.5 mg/kg body weight) and 1.3 ± 0.6 mg (3.3 ± 0.1 mg/kg body weight) in the GE and Rb1 groups, respectively. The relative liver weight was significantly higher in the CCl4 and GE groups than that in the control group (p <0.01). The Rb1 group significantly reduced the relative liver weight compared with the CCl4 group (p <0.05). However, total liver weight and daily food intake did not differ significantly among the four groups.Table 1 Body weight, liver weight and food intake in rats treated with ginsenosides against CCl 4 -induced liver damage ControlCCl 4 GERb1Initial body weight (g)240 ± 8a 233 ± 12a 236 ± 14a 239 ± 15a Final body weight (g)478 ± 23b 427 ± 25a 433 ± 38a 448 ± 37a Weight gain (g)238 ± 22b 194 ± 23a 208 ± 34a 209 ± 30a Total liver weight (g)15.9 ± 2.5a 17.5 ± 1.5a 17.3 ± 3.8a 16.4 ± 1.8a Relative liver weight (g/kg)33.2 ± 4.8a 41.1 ± 3.4c 39.7 ± 5.8bc 36.7 ± 3.4ab Food intake (g/d)26.5 ± 1.6a 25.3 ± 1.0a 25.2 ± 1.2a 25.4 ± 1.2a Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row. Body weight, liver weight and food intake in rats treated with ginsenosides against CCl 4 -induced liver damage Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row. The results of body weight, liver weight and food intake were shown in Table 1 to monitor the effects of the treatments on gross growth and liver weight. Final body weight and weight gain were significantly higher in the control group than those in the CCl4 (p <0.01), GE (p <0.05), and Rb1 (p <0.05) groups (Table 1), but not significantly different among the three CCl4 treated groups. Daily intake of ginseng extract and ginsenoside Rb1 was 12.6 ± 0.6 mg (33.8 ± 1.5 mg/kg body weight) and 1.3 ± 0.6 mg (3.3 ± 0.1 mg/kg body weight) in the GE and Rb1 groups, respectively. The relative liver weight was significantly higher in the CCl4 and GE groups than that in the control group (p <0.01). The Rb1 group significantly reduced the relative liver weight compared with the CCl4 group (p <0.05). However, total liver weight and daily food intake did not differ significantly among the four groups.Table 1 Body weight, liver weight and food intake in rats treated with ginsenosides against CCl 4 -induced liver damage ControlCCl 4 GERb1Initial body weight (g)240 ± 8a 233 ± 12a 236 ± 14a 239 ± 15a Final body weight (g)478 ± 23b 427 ± 25a 433 ± 38a 448 ± 37a Weight gain (g)238 ± 22b 194 ± 23a 208 ± 34a 209 ± 30a Total liver weight (g)15.9 ± 2.5a 17.5 ± 1.5a 17.3 ± 3.8a 16.4 ± 1.8a Relative liver weight (g/kg)33.2 ± 4.8a 41.1 ± 3.4c 39.7 ± 5.8bc 36.7 ± 3.4ab Food intake (g/d)26.5 ± 1.6a 25.3 ± 1.0a 25.2 ± 1.2a 25.4 ± 1.2a Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row. 
Body weight, liver weight and food intake in rats treated with ginsenosides against CCl 4 -induced liver damage Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row. Histopathological examination The results of the histopathological examination by different stains were demonstrated in Figures 1 and 2 to determine the effects of the treatments on histopathological changes in the liver, especially on liver fibrosis. The bright red color of H&E staining shown in Figure 1A could be resulted from strong eosin staining, a fluorescent red dye. The pathological sections stained by H&E showed that no fat was accumulated in the liver of the control group, whereas large fat vacuoles were observed in the liver of the CCl4 group (Figure 1A). However, the Rb1 group had significantly decreased fat vacuoles compared with the CCl4 group (2.65 ± 0.82 vs. 3.50 ± 0.75, p <0.05) (Figure 1B). The pathological scores for fat change were not significantly different between the GE and Rb1 groups. The CCl4, GE, and Rb1 groups had significantly elevated cell necrosis (p <0.05), inflammatory cells (p <0.01), and fibrosis (p <0.01) in the central veins compared with the control group. However, the pathological scores for necrosis, inflammation, and fibrosis in the liver did not significantly differ among the three CCl4 treated groups.Figure 1 The representative histological sections of rat liver specimens. A: hematoxylin and eosin stain at 20 × 10 magnification, B: semi-quantitative scores graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation and fibrosis in control, CCl4, GE and Rb1 groups. Solid and dashed arrows represent the central vein and collagen fibers. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm.Figure 2 The representative histological sections of rat liver specimens. A: Masson’s trichrome stain at 15 × 10 magnification, B: silver stain at 15 × 10 magnification; C: semi-quantitative scores graded from 0 (no collagen formation), 1 (collagen formation in the central vein area), 2 (collagen and fibrous bridge formation in different central vein areas) to 3 (cirrhosis) for fibrosis in the control, CCl4, GE and Rb1 groups. Collagen fibers stained with Masson’s trichrome appear blue. Reticular fibers stained with silver appear brown. Solid and dashed arrows represent the central vein and fiber bridging. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm. The representative histological sections of rat liver specimens. A: hematoxylin and eosin stain at 20 × 10 magnification, B: semi-quantitative scores graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation and fibrosis in control, CCl4, GE and Rb1 groups. Solid and dashed arrows represent the central vein and collagen fibers. 
Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm. The representative histological sections of rat liver specimens. A: Masson’s trichrome stain at 15 × 10 magnification, B: silver stain at 15 × 10 magnification; C: semi-quantitative scores graded from 0 (no collagen formation), 1 (collagen formation in the central vein area), 2 (collagen and fibrous bridge formation in different central vein areas) to 3 (cirrhosis) for fibrosis in the control, CCl4, GE and Rb1 groups. Collagen fibers stained with Masson’s trichrome appear blue. Reticular fibers stained with silver appear brown. Solid and dashed arrows represent the central vein and fiber bridging. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm. The pathological assessment of liver fibrosis as observed by Masson’s trichrome stain demonstrated that the formation of collagen fibers appeared blue was elevated by the exposure to CCl4 (Figure 2A). The fibrosis scores determined by Masson’s trichrome stain in the CCl4 (1.35 ± 0.34), GE (1.10 ± 0.39) and Rb1 (1.40 ± 0.39) groups were significantly higher compared with the control group (p <0.01) (Figure 2C), but the GE group had a lower liver fibrosis score compared with the Rb1 group (p <0.05). The accumulation of liver reticular fibers stained by silver and appeared brown was increased by the exposure to CCl4 (p <0.01) (Figure 2B). The GE (1.05 ± 0.44) group had significantly reduced accumulation of reticular fibers than the CCl4 (1.60 ± 0.39, p <0.01) and Rb1 (1.40 ± 0.21, p <0.05) groups (Figure 2C). The results of the histopathological examination by different stains were demonstrated in Figures 1 and 2 to determine the effects of the treatments on histopathological changes in the liver, especially on liver fibrosis. The bright red color of H&E staining shown in Figure 1A could be resulted from strong eosin staining, a fluorescent red dye. The pathological sections stained by H&E showed that no fat was accumulated in the liver of the control group, whereas large fat vacuoles were observed in the liver of the CCl4 group (Figure 1A). However, the Rb1 group had significantly decreased fat vacuoles compared with the CCl4 group (2.65 ± 0.82 vs. 3.50 ± 0.75, p <0.05) (Figure 1B). The pathological scores for fat change were not significantly different between the GE and Rb1 groups. The CCl4, GE, and Rb1 groups had significantly elevated cell necrosis (p <0.05), inflammatory cells (p <0.01), and fibrosis (p <0.01) in the central veins compared with the control group. However, the pathological scores for necrosis, inflammation, and fibrosis in the liver did not significantly differ among the three CCl4 treated groups.Figure 1 The representative histological sections of rat liver specimens. A: hematoxylin and eosin stain at 20 × 10 magnification, B: semi-quantitative scores graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation and fibrosis in control, CCl4, GE and Rb1 groups. Solid and dashed arrows represent the central vein and collagen fibers. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). 
Scale bar =50 μm.Figure 2 The representative histological sections of rat liver specimens. A: Masson’s trichrome stain at 15 × 10 magnification, B: silver stain at 15 × 10 magnification; C: semi-quantitative scores graded from 0 (no collagen formation), 1 (collagen formation in the central vein area), 2 (collagen and fibrous bridge formation in different central vein areas) to 3 (cirrhosis) for fibrosis in the control, CCl4, GE and Rb1 groups. Collagen fibers stained with Masson’s trichrome appear blue. Reticular fibers stained with silver appear brown. Solid and dashed arrows represent the central vein and fiber bridging. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm. The representative histological sections of rat liver specimens. A: hematoxylin and eosin stain at 20 × 10 magnification, B: semi-quantitative scores graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation and fibrosis in control, CCl4, GE and Rb1 groups. Solid and dashed arrows represent the central vein and collagen fibers. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm. The representative histological sections of rat liver specimens. A: Masson’s trichrome stain at 15 × 10 magnification, B: silver stain at 15 × 10 magnification; C: semi-quantitative scores graded from 0 (no collagen formation), 1 (collagen formation in the central vein area), 2 (collagen and fibrous bridge formation in different central vein areas) to 3 (cirrhosis) for fibrosis in the control, CCl4, GE and Rb1 groups. Collagen fibers stained with Masson’s trichrome appear blue. Reticular fibers stained with silver appear brown. Solid and dashed arrows represent the central vein and fiber bridging. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm. The pathological assessment of liver fibrosis as observed by Masson’s trichrome stain demonstrated that the formation of collagen fibers appeared blue was elevated by the exposure to CCl4 (Figure 2A). The fibrosis scores determined by Masson’s trichrome stain in the CCl4 (1.35 ± 0.34), GE (1.10 ± 0.39) and Rb1 (1.40 ± 0.39) groups were significantly higher compared with the control group (p <0.01) (Figure 2C), but the GE group had a lower liver fibrosis score compared with the Rb1 group (p <0.05). The accumulation of liver reticular fibers stained by silver and appeared brown was increased by the exposure to CCl4 (p <0.01) (Figure 2B). The GE (1.05 ± 0.44) group had significantly reduced accumulation of reticular fibers than the CCl4 (1.60 ± 0.39, p <0.01) and Rb1 (1.40 ± 0.21, p <0.05) groups (Figure 2C). Plasma ALT and AST activities Plasma ALT and AST activities were measured to assess the effects of the treatments on liver functions. Plasma ALT and AST activities were significantly elevated in the CCl4 group than those in the control, GE and Rb1 groups (p <0.01) after the induction of liver injury (W2) (Table 2). 
The CCl4 group still had increased plasma ALT (p <0.05) and AST (p <0.01) activities compared with the control group, whereas plasma ALT and AST activities did not differ among the control, GE and Rb1 groups at week 9. The GE group significantly decreased plasma AST activity compared with the CCl4 group at week 9 (p <0.05).Table 2 Plasma ALT and AST activities in rats treated with ginsenosides ControlCCl 4 GERb1ALT activity (IU/l)W029.6 ± 2.9a 27.6 ± 1.7a 31.2 ± 3.8a 29.2 ± 32.8a W233.2 ± 3.6a 4012.4 ± 2212.9b 469.0 ± 435.2a 1072.6 ± 618.6a W933.6 ± 4.9a 334.9 ± 379.1b 136.4 ± 167.4ab 177.5 ± 333.0ab AST activity (IU/l)W075.9 ± 6.3a 78.3 ± 4.4a 70.8 ± 23.5a 77.6 ± 5.3a W262.1 ± 20.2a 8370.4 ± 5360.8b 2155.0 ± 1973.8a 1233.6 ± 616.7a W963.3 ± 6.0a 288.4 ± 181.3b 134.9 ± 114.6a 171.3 ± 227.9a Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row. Plasma ALT and AST activities in rats treated with ginsenosides Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row. Plasma ALT and AST activities were measured to assess the effects of the treatments on liver functions. Plasma ALT and AST activities were significantly elevated in the CCl4 group than those in the control, GE and Rb1 groups (p <0.01) after the induction of liver injury (W2) (Table 2). The CCl4 group still had increased plasma ALT (p <0.05) and AST (p <0.01) activities compared with the control group, whereas plasma ALT and AST activities did not differ among the control, GE and Rb1 groups at week 9. The GE group significantly decreased plasma AST activity compared with the CCl4 group at week 9 (p <0.05).Table 2 Plasma ALT and AST activities in rats treated with ginsenosides ControlCCl 4 GERb1ALT activity (IU/l)W029.6 ± 2.9a 27.6 ± 1.7a 31.2 ± 3.8a 29.2 ± 32.8a W233.2 ± 3.6a 4012.4 ± 2212.9b 469.0 ± 435.2a 1072.6 ± 618.6a W933.6 ± 4.9a 334.9 ± 379.1b 136.4 ± 167.4ab 177.5 ± 333.0ab AST activity (IU/l)W075.9 ± 6.3a 78.3 ± 4.4a 70.8 ± 23.5a 77.6 ± 5.3a W262.1 ± 20.2a 8370.4 ± 5360.8b 2155.0 ± 1973.8a 1233.6 ± 616.7a W963.3 ± 6.0a 288.4 ± 181.3b 134.9 ± 114.6a 171.3 ± 227.9a Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row. Plasma ALT and AST activities in rats treated with ginsenosides Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row. Plasma and hepatic triglyceride and total cholesterol concentrations Plasma and hepatic lipid concentrations were determined to evaluate the effects of the treatments on lipid profiles. After the induction of liver injury (W2), plasma triglycerides were significantly increased and still maintained higher level at week 9 in the CCl4 group compared with those in the control group (p <0.01) (Table 3). 
Plasma and hepatic triglyceride and total cholesterol concentrations

Plasma and hepatic lipid concentrations were determined to evaluate the effects of the treatments on lipid profiles. After the induction of liver injury (W2), plasma triglycerides were significantly increased in the CCl4 group compared with the control group and remained higher at week 9 (p <0.01) (Table 3). Treatment with GE (p <0.01) and Rb1 (p <0.05) significantly reduced plasma triglycerides compared with the CCl4 group, to levels similar to those of the control group, at weeks 2 and 9.

Table 3. Plasma triglycerides and total cholesterol concentrations in rats treated with ginsenosides

                             Control         CCl4            GE              Rb1
Triglycerides (mmol/l)
  W0                         0.16 ± 0.04a    0.16 ± 0.04a    0.14 ± 0.03a    0.15 ± 0.02a
  W2                         0.24 ± 0.04a    0.39 ± 0.19b    0.25 ± 0.04a    0.28 ± 0.10a
  W9                         0.54 ± 0.10c    0.35 ± 0.06b    0.27 ± 0.05a    0.29 ± 0.07a
Total cholesterol (mmol/l)
  W0                         0.90 ± 0.11b    0.71 ± 0.12a    0.79 ± 0.16ab   0.75 ± 0.10a
  W2                         1.01 ± 0.14a    1.20 ± 0.18b    1.05 ± 0.21ab   0.95 ± 0.27a
  W9                         0.79 ± 0.11a    0.70 ± 0.17a    0.67 ± 0.10a    0.66 ± 0.20a

Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.

At baseline, plasma total cholesterol was significantly lower in the CCl4 and Rb1 groups than in the control group (p <0.01). After the induction of liver injury, total cholesterol was significantly higher in the CCl4 group than in the control and Rb1 groups (p <0.05) (Table 3). Plasma total cholesterol did not differ significantly among the four groups at week 9.

Hepatic triglyceride concentrations were increased by 73% in the CCl4 group compared with the control group (p <0.01), and were decreased by 56% and 60% in the GE and Rb1 groups, respectively, compared with the CCl4 group (p <0.01) (Figure 3A). Hepatic total cholesterol was significantly higher in the CCl4, GE and Rb1 groups than in the control group (p <0.01), but did not differ significantly among the CCl4-treated groups (Figure 3B).

Figure 3 Effects of ginsenosides extract and ginsenoside Rb1 on hepatic lipids in rats. A: hepatic triglycerides; B: total cholesterol concentration. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05).
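Plasma lipids in Table 3 are reported in mmol/l. For comparison with sources that report mg/dl, the conventional conversion factors (approximately 88.6 for triglycerides and 38.7 for total cholesterol, based on average molecular weights) can be applied; the short sketch below is only a convenience illustration and not part of the original analysis.

```python
# Convert plasma lipid concentrations from mmol/l to mg/dl using the
# conventional factors (triglycerides ~88.6, total cholesterol ~38.7).
TG_FACTOR = 88.6
CHOL_FACTOR = 38.7

def mmol_to_mgdl(value_mmol_per_l: float, factor: float) -> float:
    return value_mmol_per_l * factor

# Example: week-2 CCl4 group means from Table 3
print(f"Triglycerides 0.39 mmol/l ≈ {mmol_to_mgdl(0.39, TG_FACTOR):.0f} mg/dl")
print(f"Total cholesterol 1.20 mmol/l ≈ {mmol_to_mgdl(1.20, CHOL_FACTOR):.0f} mg/dl")
```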
Hepatic TNF-α, IL-1β, IL-10, PGE2, and sICAM-1 levels

Hepatic cytokine and mediator levels (Table 4) were measured to determine the effects of the treatments on hepatic mediators released under inflammatory conditions. Hepatic IL-1β (p <0.01), PGE2 (p <0.05) and sICAM-1 (p <0.05) levels were significantly elevated, whereas hepatic IL-10 was significantly decreased, in the CCl4 group compared with the control group (Table 4). Hepatic TNF-α, IL-1β and PGE2 levels were significantly reduced in the GE group compared with the CCl4 group (p <0.05). The Rb1 group had a higher hepatic TNF-α level than the GE group, but a lower hepatic PGE2 level than the CCl4 group (p <0.05).

Table 4. Hepatic TNF-α, IL-1β, IL-10, PGE2, and sICAM-1 levels in rats treated with ginsenosides

                            Control         CCl4             GE               Rb1
TNF-α (pg/mg protein)       15.8 ± 7.6ab    17.2 ± 5.1b      10.4 ± 3.3a      16.7 ± 6.9b
IL-1β (pg/mg protein)       132.5 ± 39.6a   333.0 ± 135.8c   212.0 ± 115.2ab  292.9 ± 182.9bc
IL-10 (pg/mg protein)       4461 ± 958b     3443 ± 508a      3513 ± 677a      3514 ± 600a
PGE2 (pg/mg protein)        6196 ± 1599b    9822 ± 2610c     4636 ± 1928ab    4128 ± 1480a
sICAM-1 (pg/mg protein)     2782 ± 771a     3991 ± 867b      3288 ± 567ab     3427 ± 963ab

Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.
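The values in Table 4 are normalized to the protein content of each liver homogenate (pg of analyte per mg protein), with analyte concentrations read off an ELISA standard curve and protein measured by the Lowry method. A minimal sketch of that normalization step is shown below; the point-to-point interpolation and all numbers are illustrative assumptions, not the kits’ actual calibration or the study’s data.

```python
import numpy as np

def concentration_from_standard_curve(absorbance, std_abs, std_conc):
    """Interpolate analyte concentration (pg/ml) from ELISA absorbance
    using a simple point-to-point standard curve (illustrative only)."""
    order = np.argsort(std_abs)
    return float(np.interp(absorbance,
                           np.asarray(std_abs)[order],
                           np.asarray(std_conc)[order]))

def normalize_to_protein(analyte_pg_per_ml, protein_mg_per_ml):
    """Express the ELISA result per mg of homogenate protein (pg/mg)."""
    return analyte_pg_per_ml / protein_mg_per_ml

# Hypothetical standards and sample readings (not taken from the study)
std_abs  = [0.05, 0.12, 0.25, 0.48, 0.90, 1.60]
std_conc = [0.0, 31.25, 62.5, 125.0, 250.0, 500.0]   # pg/ml

sample_abs = 0.40
analyte_pg_ml = concentration_from_standard_curve(sample_abs, std_abs, std_conc)
protein_mg_ml = 12.0   # hypothetical Lowry result for the same homogenate
print(f"Analyte ≈ {normalize_to_protein(analyte_pg_ml, protein_mg_ml):.1f} pg/mg protein")
```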
Hepatic hydroxyproline, MMP-2 and TIMP-1 levels

Hepatic hydroxyproline, MMP-2 and TIMP-1 levels (Figure 4) were measured to investigate the effects of the treatments on liver fibrogenesis and fibrolysis. Hepatic hydroxyproline (p <0.05), MMP-2 (p <0.05) and TIMP-1 (p <0.01) levels were elevated by 55%, 28% and 61%, respectively, in the CCl4 group compared with the control group (Figure 4). Ginseng extract and ginsenoside Rb1 treatments significantly reduced the hepatic hydroxyproline level by 36% and 30% (Figure 4A) and the TIMP-1 level by 27% and 27% (Figure 4C), respectively, compared with the CCl4 group (p <0.05). However, hepatic MMP-2 levels did not differ among the three CCl4-treated groups (Figure 4B).

Figure 4 Effects of ginsenosides extract and ginsenoside Rb1 on liver fibrosis markers in rats. A: hepatic hydroxyproline level; B: MMP-2 level; C: TIMP-1 level. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05).
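The percentage changes quoted above (for example, hydroxyproline elevated by 55% in the CCl4 group and reduced by 36% with ginseng extract) follow the usual relative-change arithmetic, (group mean − reference mean) / reference mean × 100, with the control group as the reference for CCl4 and the CCl4 group as the reference for the treatments. The snippet below simply illustrates that calculation with made-up means, since the underlying group means are shown only graphically in Figure 4.

```python
def percent_change(group_mean: float, reference_mean: float) -> float:
    """Relative change of a group mean versus a reference mean, in percent."""
    return (group_mean - reference_mean) / reference_mean * 100.0

# Hypothetical hydroxyproline means (arbitrary units), not digitized from Figure 4
control, ccl4, ge = 100.0, 155.0, 99.0

print(f"CCl4 vs control: {percent_change(ccl4, control):+.0f}%")   # +55%
print(f"GE vs CCl4:      {percent_change(ge, ccl4):+.0f}%")        # about -36%
```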
Conclusions
In conclusion, Panax ginseng extract (0.5 g/kg) and ginsenoside Rb1 (0.05 g/kg) decrease plasma ALT and AST activities elevated by CCl4-induced liver damage and inhibit the accumulation of triglycerides in the liver. The levels of TNF-α, PGE2, hydroxyproline and TIMP-1 in the liver are diminished by ginseng extract and ginsenoside Rb1. Therefore, ginseng extract and ginsenoside Rb1 attenuate CCl4-induced liver injury through anti-inflammatory and antifibrotic effects.
[ "Background", "Animals and treatments", "Histopathological examination", "Plasma alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activities", "Plasma and hepatic lipid concentrations", "Hepatic inflammatory markers", "Hepatic hydroxyproline, MMP-2 and TIMP-1 levels", "Statistical analysis", "Body weight, liver weight and food intake", "Histopathological examination", "Plasma ALT and AST activities", "Plasma and hepatic triglyceride and total cholesterol concentrations", "Hepatic TNF-α, IL-1β, IL-10, PGE2, and sICAM-1 levels", "Hepatic hydroxyproline, MMP-2 and TIMP-1 levels" ]
[ "Liver cirrhosis is an irreversible stage in the process of liver damage that occurs after liver fibrosis. Liver fibrosis is attributed to inflammation, excessive accumulation of extracellular matrix and tissue remodeling under wound healing [1]. Chronic hepatitis and liver cirrhosis are positively associated with the occurrence of hepatocellular carcinoma [2, 3]. Therefore, the inhibition of hepatic inflammation and fibrosis is crucial in preventing the occurrence of liver cirrhosis and hepatocellular carcinoma.\nOxidative stress from reactive oxygen species plays an important role in liver fibrogenesis [4]. Carbon tetrachloride (CCl4) is considered as a toxic chemical that induces hepatotoxicity including fatty degeneration, inflammation, fibrosis, hepatocellular death and carcinogenicity [5, 6]. Trichloromethyl radical produced from the metabolism of CCl4 initiates a chain reaction to cause lipid peroxidation, membrane dysfunction and further hepatotoxic damage [6]. The toxic metabolite of CCl4 can activate Kupffer cells to secrete cytokines such as interleukin-1 (IL-1) and tumor necrosis factor-α (TNF-α), stimulate transforming growth factor-β (TGF-β) production, inhibit nitric oxide (NO) formation and induce inflammation and liver fibrosis [6–8]. Matrix metalloproteinase (MMP)-2, known as type IV collagenase and gelatinase A, acts as the regulator for the breakdown of extracellular matrix, and tissue inhibitor metalloproteinase (TIMP)-1, as the inhibitor of MMPs, exhibits anti-fibrolytic, growth-stimulated and anti-apoptotic activities [9]. Chronic exposure of CCl4 leads to liver fibrosis, which diminishes extracellular matrix degradation and increases MMP-2 secretion through the induction of tissue inhibitor TIMPs [9].\nPanax ginseng (P. ginseng) root has been commonly used in oriental medicine, diet or dietary supplement. Ginsenosides, a class of steroid glycosides and triterpene saponins, are the major bioactive compounds in P. ginseng root and ginsenoside Rb1 (C54H92O23, molecular weight: 1109.3) is considered as the most abundant ginsenoside among more than 30 ginsenosides in P. ginseng\n[10, 11]. The previous studies have reported that P. ginseng and its active components or metabolites had antioxidant, immunomodulatory, anti-inflammatory, and lipid-lowering effects [12–15]. Many studies have shown that ginsenoside Rb1 and its metabolite compound K attenuated liver injury through inhibiting lipid peroxidation, TNF-α, NO, prostaglandin E2 (PGE2), intercellular adhesion molecule (ICAM)-1 and nuclear factor-κB (NF-κB) activation [16–19]. However, the effect of ginsenosides on liver fibrosis is not clear. Considering ginsenoside Rb1 as the most abundant ginsenoside in P. ginseng\n[10, 11] and its hepatoprotective activity [16–19], therefore, this study investigated the protective effects of P. ginseng extract (ginseng extract) and ginsenoside Rb1 on CCl4-induced liver inflammation and fibrosis in rats.", "Sprague–Dawley rats weighing 200–250 g were purchased from the National Laboratory Animal Center (Taipei, Taiwan). Rats were housed under a 12-h light–dark cycle at 22-24°C with a relative humidity of 65-70%. After one-week adaptation, rats were randomly divided into four groups (n =10 per group): control, CCl4, CCl4 + ginseng extract (GE) and CCl4 + ginsenoside Rb1 (Rb1) groups. The normal diet based on Laboratory Rodent Diet 5001 powder was purchased from PMI Nutrition International Inc. (Brentwood, MO). 
Ginseng extract (Ashland Inc., Covington, KY, USA) containing 800 g ginsenosides/kg extract (80%) (ginsenosides in the extract include Rb1, Rc, Rd, Rg1, Rg2, Rg3, Rh1 and Rh2) and ginsenoside Rb1 (China Chemical & Pharmaceutical Co., Ltd., Taipei, Taiwan) with 98% purity were blended with the normal diet at a dose of 0.5 g/kg and 0.05 g/kg, respectively. Ginsenoside Rb1 content was equal in the GE and Rb1 groups. Rats were fed ginseng extract or ginsenoside Rb1 two weeks before (week 0, W0) the induction of liver injury by intraperitoneal injection of 400 ml/l CCl4 in olive oil at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was injected with an equal volume of olive oil without CCl4. Food intake, water intake and body weight were recorded throughout 9-week experimental period. This study was approved by the Institutional Animal Care and Use Committee of Taipei Medical University.", "After 9 weeks, rats were euthanized with ether and liver samples from left lateral lobe, median lobe and right lateral lobe were collected for histopathological and biochemical analyses. Excised liver specimens from different lobes (1 cm × 1 cm) were fixed in 10% paraformaldehyde, embedded in paraffin, sectioned and stained with hematoxylin and eosin (H&E), Masson’s trichrome or silver. The specimens were coded with a single-blind method and graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation, and fibrosis under a light microscope by a pathologist.", "Blood samples from rat tails were collected into heparin-containing tubes at weeks 0, 2 (CCl4 injection) and 9. Blood was centrifuged at 3000 g for 15 min at 4°C. Plasma ALT and AST activities were measured spectrophotometrically at 570 nm using a commercial kit (RM 163-K, Iatron Laboratories Inc., Tokyo, Japan).", "Blood samples from rat tails were collected at weeks 0, 2 and 9, and centrifuged at 3000 g for 15 min at 4°C. Liver samples from left lateral lobe, median lobe and right lateral lobe were homogenized in chloroform/methanol (2:1) solution and extracted by chloroform/methanol/water (3:48:47) solution. Triglycerides and total cholesterol concentrations in plasma and liver were determined spectrophotometrically at 500 nm using commercial enzymatic kits (Randox® TR213 for triglycerides, Randox® CH201 for total cholesterol, Randox Laboratories Ltd., London, UK).", "Liver slices (0.5 g) were homogenized in 1.5 mL of buffer solution (50 mmol/l Tris, 150 mmol/l NaCl, and 10 ml/l Triton X-100, pH 7.2) [20] and mixed with 100 μl of proteinase inhibitor cocktail solution (P8340, Sigma-Aldrich, Inc., Saint Louis, USA). Liver homogenate was centrifuged at 3000 g for 15 min at 4°C for TNF-α, IL-1β and IL-10 analysis. For PGE2 and soluble ICAM-1 (sICAM-1) analysis, liver slices (0.5 g) were mixed with 1.0 ml of homogenized buffer (0.25 mol/l sucrose, 50 mmol/l Tris–HCl, and 5 mmol/l EDTA, pH 7.5). 
Liver homogenate was centrifuged at 8000 g for 15 min at 4°C.\nHepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels were measured spectrophotometrically using enzyme-linked immunosorbent assay (ELISA) kits (Quantikine® RTA00 for TNF-α, Quantikine® RLB00 for IL-1β, DuoSet® DY522 for IL-10, PGE2, Quantikine® KGE004 for PGE2, Quantikine® RIC100 for sICAM-1, R&D Systems, Inc., Minneapolis, USA). Hepatic supernatant was separately incubated with rat anti-TNF-α, anti-IL-1β, anti-IL-10, anti-PGE2 or anti-sICAM-1, then washed with wash buffer (0.05% Tween® in phosphate buffer solution, PBS) followed by incubation with polyclonal antibody against TNF-α, IL-1β, PGE2 or sICAM-1 conjugated to horseradish peroxidase or biotinylated anti-IL-10 secondary antibody with streptavidin conjugated to horseradish peroxidase, respectively. After washed with wash buffer several times, the substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added and the reaction was terminated by adding diluted hydrochloric acid. The absorbance was determined at 450 nm. Protein concentration was measured by the method of Lowry et al. [21].", "Hepatic hydroxyproline level was measured by colorimetric assay. Freeze-dried liver specimen (0.25 g) was homogenized with 2 ml of distilled water. Liver homogenate was hydrolyzed in alkaline solution (2 mol/L NaOH), oxidized with chloramines T reagent, and incubated with Ehrlich’s reagent at 65°C. The chromogenic product was determined spectrophotometrically at 550 nm.\nThe levels of MMP-2 and TIMP-1 in the liver were determined by commercial kits (Quantikine® DMP200 for MMP-2, Quantikine® RTM100 TIMP-1, R&D Systems, Inc.) using ELISA. Liver slices were homogenized with PBS and proteinase inhibitor cocktail solution and centrifuged at 12000 g for 10 min at 4°C. The supernatant was centrifuged again and collected for further analysis. Hepatic supernatant was separately incubated with rat anti-MMP-2 or anti- TIMP-1, washed with wash buffer, and incubated with polyclonal antibody against MMP-2 or TIMP-1 conjugated to horseradish peroxidase followed by several washes with wash buffer. The substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added for the reaction and stop solution (diluted hydrochloric acid) was then added to stop the reaction. The absorbance was measured at 450 nm.", "All data were expressed as mean ± SD. The data were analyzed by one-way analysis of variance (ANOVA) using Statistical Analysis System (SAS version 9.1, SAS Institute Inc., Cary, NC, USA). The difference between any two groups was analyzed by Fisher’s least significant difference test. A value p <0.05 was considered significant.", "The results of body weight, liver weight and food intake were shown in Table 1 to monitor the effects of the treatments on gross growth and liver weight. Final body weight and weight gain were significantly higher in the control group than those in the CCl4 (p <0.01), GE (p <0.05), and Rb1 (p <0.05) groups (Table 1), but not significantly different among the three CCl4 treated groups. Daily intake of ginseng extract and ginsenoside Rb1 was 12.6 ± 0.6 mg (33.8 ± 1.5 mg/kg body weight) and 1.3 ± 0.6 mg (3.3 ± 0.1 mg/kg body weight) in the GE and Rb1 groups, respectively. The relative liver weight was significantly higher in the CCl4 and GE groups than that in the control group (p <0.01). The Rb1 group significantly reduced the relative liver weight compared with the CCl4 group (p <0.05). 
However, total liver weight and daily food intake did not differ significantly among the four groups.Table 1\nBody weight, liver weight and food intake in rats treated with ginsenosides against CCl\n4\n-induced liver damage\nControlCCl\n4\nGERb1Initial body weight (g)240 ± 8a\n233 ± 12a\n236 ± 14a\n239 ± 15a\nFinal body weight (g)478 ± 23b\n427 ± 25a\n433 ± 38a\n448 ± 37a\nWeight gain (g)238 ± 22b\n194 ± 23a\n208 ± 34a\n209 ± 30a\nTotal liver weight (g)15.9 ± 2.5a\n17.5 ± 1.5a\n17.3 ± 3.8a\n16.4 ± 1.8a\nRelative liver weight (g/kg)33.2 ± 4.8a\n41.1 ± 3.4c\n39.7 ± 5.8bc\n36.7 ± 3.4ab\nFood intake (g/d)26.5 ± 1.6a\n25.3 ± 1.0a\n25.2 ± 1.2a\n25.4 ± 1.2a\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.\n\nBody weight, liver weight and food intake in rats treated with ginsenosides against CCl\n4\n-induced liver damage\n\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.", "The results of the histopathological examination by different stains were demonstrated in Figures 1 and 2 to determine the effects of the treatments on histopathological changes in the liver, especially on liver fibrosis. The bright red color of H&E staining shown in Figure 1A could be resulted from strong eosin staining, a fluorescent red dye. The pathological sections stained by H&E showed that no fat was accumulated in the liver of the control group, whereas large fat vacuoles were observed in the liver of the CCl4 group (Figure 1A). However, the Rb1 group had significantly decreased fat vacuoles compared with the CCl4 group (2.65 ± 0.82 vs. 3.50 ± 0.75, p <0.05) (Figure 1B). The pathological scores for fat change were not significantly different between the GE and Rb1 groups. The CCl4, GE, and Rb1 groups had significantly elevated cell necrosis (p <0.05), inflammatory cells (p <0.01), and fibrosis (p <0.01) in the central veins compared with the control group. However, the pathological scores for necrosis, inflammation, and fibrosis in the liver did not significantly differ among the three CCl4 treated groups.Figure 1\nThe representative histological sections of rat liver specimens. A: hematoxylin and eosin stain at 20 × 10 magnification, B: semi-quantitative scores graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation and fibrosis in control, CCl4, GE and Rb1 groups. Solid and dashed arrows represent the central vein and collagen fibers. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm.Figure 2\nThe representative histological sections of rat liver specimens. A: Masson’s trichrome stain at 15 × 10 magnification, B: silver stain at 15 × 10 magnification; C: semi-quantitative scores graded from 0 (no collagen formation), 1 (collagen formation in the central vein area), 2 (collagen and fibrous bridge formation in different central vein areas) to 3 (cirrhosis) for fibrosis in the control, CCl4, GE and Rb1 groups. Collagen fibers stained with Masson’s trichrome appear blue. Reticular fibers stained with silver appear brown. Solid and dashed arrows represent the central vein and fiber bridging. 
Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm.\n\nThe representative histological sections of rat liver specimens. A: hematoxylin and eosin stain at 20 × 10 magnification, B: semi-quantitative scores graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation and fibrosis in control, CCl4, GE and Rb1 groups. Solid and dashed arrows represent the central vein and collagen fibers. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm.\n\nThe representative histological sections of rat liver specimens. A: Masson’s trichrome stain at 15 × 10 magnification, B: silver stain at 15 × 10 magnification; C: semi-quantitative scores graded from 0 (no collagen formation), 1 (collagen formation in the central vein area), 2 (collagen and fibrous bridge formation in different central vein areas) to 3 (cirrhosis) for fibrosis in the control, CCl4, GE and Rb1 groups. Collagen fibers stained with Masson’s trichrome appear blue. Reticular fibers stained with silver appear brown. Solid and dashed arrows represent the central vein and fiber bridging. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar =50 μm.\nThe pathological assessment of liver fibrosis as observed by Masson’s trichrome stain demonstrated that the formation of collagen fibers appeared blue was elevated by the exposure to CCl4 (Figure 2A). The fibrosis scores determined by Masson’s trichrome stain in the CCl4 (1.35 ± 0.34), GE (1.10 ± 0.39) and Rb1 (1.40 ± 0.39) groups were significantly higher compared with the control group (p <0.01) (Figure 2C), but the GE group had a lower liver fibrosis score compared with the Rb1 group (p <0.05). The accumulation of liver reticular fibers stained by silver and appeared brown was increased by the exposure to CCl4 (p <0.01) (Figure 2B). The GE (1.05 ± 0.44) group had significantly reduced accumulation of reticular fibers than the CCl4 (1.60 ± 0.39, p <0.01) and Rb1 (1.40 ± 0.21, p <0.05) groups (Figure 2C).", "Plasma ALT and AST activities were measured to assess the effects of the treatments on liver functions. Plasma ALT and AST activities were significantly elevated in the CCl4 group than those in the control, GE and Rb1 groups (p <0.01) after the induction of liver injury (W2) (Table 2). The CCl4 group still had increased plasma ALT (p <0.05) and AST (p <0.01) activities compared with the control group, whereas plasma ALT and AST activities did not differ among the control, GE and Rb1 groups at week 9. 
The GE group significantly decreased plasma AST activity compared with the CCl4 group at week 9 (p <0.05).Table 2\nPlasma ALT and AST activities in rats treated with ginsenosides\nControlCCl\n4\nGERb1ALT activity (IU/l)W029.6 ± 2.9a\n27.6 ± 1.7a\n31.2 ± 3.8a\n29.2 ± 32.8a\nW233.2 ± 3.6a\n4012.4 ± 2212.9b\n469.0 ± 435.2a\n1072.6 ± 618.6a\nW933.6 ± 4.9a\n334.9 ± 379.1b\n136.4 ± 167.4ab\n177.5 ± 333.0ab\nAST activity (IU/l)W075.9 ± 6.3a\n78.3 ± 4.4a\n70.8 ± 23.5a\n77.6 ± 5.3a\nW262.1 ± 20.2a\n8370.4 ± 5360.8b\n2155.0 ± 1973.8a\n1233.6 ± 616.7a\nW963.3 ± 6.0a\n288.4 ± 181.3b\n134.9 ± 114.6a\n171.3 ± 227.9a\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.\n\nPlasma ALT and AST activities in rats treated with ginsenosides\n\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.", "Plasma and hepatic lipid concentrations were determined to evaluate the effects of the treatments on lipid profiles. After the induction of liver injury (W2), plasma triglycerides were significantly increased and still maintained higher level at week 9 in the CCl4 group compared with those in the control group (p <0.01) (Table 3). Treatment with GE (p <0.01) and Rb1 (p <0.05) significantly reduced plasma triglycerides compared with the CCl4 group to the similar level of the control group at weeks 2 and 9.Table 3\nPlasma triglycerides and total cholesterol concentrations in rats treated with ginsenosides\nControlCCl\n4\nGERb1Triglycerides (mmol/l)W00.16 ± 0.04a\n0.16 ± 0.04a\n0.14 ± 0.03a\n0.15 ± 0.02a\nW20.24 ± 0.04a\n0.39 ± 0.19b\n0.25 ± 0.04a\n0.28 ± 0.10a\nW90.54 ± 0.10c\n0.35 ± 0.06b\n0.27 ± 0.05a\n0.29 ± 0.07a\nTotal cholesterol (mmol/l)W00.90 ± 0.11b\n0.71 ± 0.12a\n0.79 ± 0.16ab\n0.75 ± 0.10a\nW21.01 ± 0.14a\n1.20 ± 0.18b\n1.05 ± 0.21ab\n0.95 ± 0.27a\nW90.79 ± 0.11a\n0.70 ± 0.17a\n0.67 ± 0.10a\n0.66 ± 0.20a\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.\n\nPlasma triglycerides and total cholesterol concentrations in rats treated with ginsenosides\n\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.\nAt the baseline, plasma total cholesterol level was significantly lower in the CCl4 and Rb1 groups than that in the control group (p <0.01). After the induction of liver injury, total cholesterol level was significantly elevated in the CCl4 group than that in the control and Rb1 groups (p <0.05) (Table 3). Plasma total cholesterol level did not differ significantly among the four groups at week 9.\nHepatic triglyceride concentrations were significantly increased by 73% in the CCl4 group than those in the control group (p <0.01), and decreased by 56% and 60% in the GE and Rb1 groups, respectively, compared with the CCl4 group (p <0.01) (Figure 3A). Hepatic total cholesterol level was significantly greater in the CCl4, GE and Rb1 groups than that in the control group (p <0.01), but not significantly different among CCl4 treated groups (Figure 3B).Figure 3\nEffects of ginsenosides extract and ginsenoside Rb1 on hepatic lipids in rats. A: hepatic triglycerides, B: total cholesterol concentration. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05).\n\nEffects of ginsenosides extract and ginsenoside Rb1 on hepatic lipids in rats. 
A: hepatic triglycerides, B: total cholesterol concentration. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05).", "The results of hepatic cytokine levels were found in Table 4 to determine the effects of the treatments on hepatic mediators released in the inflammatory condition. Hepatic IL-1β (p <0.01), PGE2 (p <0.05) and sICAM-1 (p <0.05) levels were significantly elevated, whereas hepatic IL-10 level was significantly decreased in the CCl4 group compared with those in the control group (Table 4). Hepatic TNF-α, IL-1β and PGE2 levels were significantly reduced in the GE group compared with the CCl4 group (p <0.05). The Rb1 group had higher hepatic TNF-α level than the GE group, but lower hepatic PGE2 level than the CCl4 group (p <0.05).Table 4\nHepatic TNF-α, IL-1β, IL-10, PGE\n2\n, and sICAM-1 levels in rats treated with ginsenosides\nControlCCl\n4\nGERb1TNF-α (pg/mg protein)15.8 ± 7.6ab\n17.2 ± 5.1b\n10.4 ± 3.3a\n16.7 ± 6.9b\nIL-1β (pg/mg protein)132.5 ± 39.6a\n333.0 ± 135.8c\n212.0 ± 115.2ab\n292.9 ± 182.9bc\nIL-10 (pg/mg protein)4461 ± 958b\n3443 ± 508a\n3513 ± 677a\n3514 ± 600a\nPGE2 (pg/mg protein)6196 ± 1599b\n9822 ± 2610c\n4636 ± 1928ab\n4128 ± 1480a\nsICAM-1 (pg/mg protein)2782 ± 771a\n3991 ± 867b\n3288 ± 567ab\n3427 ± 963ab\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.\n\nHepatic TNF-α, IL-1β, IL-10, PGE\n2\n, and sICAM-1 levels in rats treated with ginsenosides\n\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.", "The results of hepatic hydroxyproline, MMP-2 and TIMP-1 levels were shown in Figure 4 to investigate the effects of the treatments on liver fibrogenesis and fibrolysis. Hepatic hydroxyproline (p <0.05), MMP-2 (p <0.05) and TIMP-1 (p <0.01) levels were elevated by 55%, 28% and 61%, respectively, in the CCl4 group compared with the control group (Figure 4). Ginseng extract and ginsenoside Rb1 treatments significantly reduced hepatic hydroxyproline level by 36% and 30% (Figure 4A) and TIMP-1 level by 27% and 27% (Figure 4C), respectively, compared with the CCl4 group (p <0.05). However, hepatic MMP-2 level was not different in the three CCl4 treated groups (Figure 4B).Figure 4\nEffects of ginsenosides extract and ginsenoside Rb1 on liver fibrosis markers in rats. A: hepatic hydroxyproline level, B: MMP-2 level; C: TIMP-1 level. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05).\n\nEffects of ginsenosides extract and ginsenoside Rb1 on liver fibrosis markers in rats. A: hepatic hydroxyproline level, B: MMP-2 level; C: TIMP-1 level. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05)." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Animals and treatments", "Histopathological examination", "Plasma alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activities", "Plasma and hepatic lipid concentrations", "Hepatic inflammatory markers", "Hepatic hydroxyproline, MMP-2 and TIMP-1 levels", "Statistical analysis", "Results", "Body weight, liver weight and food intake", "Histopathological examination", "Plasma ALT and AST activities", "Plasma and hepatic triglyceride and total cholesterol concentrations", "Hepatic TNF-α, IL-1β, IL-10, PGE2, and sICAM-1 levels", "Hepatic hydroxyproline, MMP-2 and TIMP-1 levels", "Discussion", "Conclusions" ]
[ "Liver cirrhosis is an irreversible stage in the process of liver damage that occurs after liver fibrosis. Liver fibrosis is attributed to inflammation, excessive accumulation of extracellular matrix and tissue remodeling under wound healing [1]. Chronic hepatitis and liver cirrhosis are positively associated with the occurrence of hepatocellular carcinoma [2, 3]. Therefore, the inhibition of hepatic inflammation and fibrosis is crucial in preventing the occurrence of liver cirrhosis and hepatocellular carcinoma.\nOxidative stress from reactive oxygen species plays an important role in liver fibrogenesis [4]. Carbon tetrachloride (CCl4) is considered as a toxic chemical that induces hepatotoxicity including fatty degeneration, inflammation, fibrosis, hepatocellular death and carcinogenicity [5, 6]. Trichloromethyl radical produced from the metabolism of CCl4 initiates a chain reaction to cause lipid peroxidation, membrane dysfunction and further hepatotoxic damage [6]. The toxic metabolite of CCl4 can activate Kupffer cells to secrete cytokines such as interleukin-1 (IL-1) and tumor necrosis factor-α (TNF-α), stimulate transforming growth factor-β (TGF-β) production, inhibit nitric oxide (NO) formation and induce inflammation and liver fibrosis [6–8]. Matrix metalloproteinase (MMP)-2, known as type IV collagenase and gelatinase A, acts as the regulator for the breakdown of extracellular matrix, and tissue inhibitor metalloproteinase (TIMP)-1, as the inhibitor of MMPs, exhibits anti-fibrolytic, growth-stimulated and anti-apoptotic activities [9]. Chronic exposure of CCl4 leads to liver fibrosis, which diminishes extracellular matrix degradation and increases MMP-2 secretion through the induction of tissue inhibitor TIMPs [9].\nPanax ginseng (P. ginseng) root has been commonly used in oriental medicine, diet or dietary supplement. Ginsenosides, a class of steroid glycosides and triterpene saponins, are the major bioactive compounds in P. ginseng root and ginsenoside Rb1 (C54H92O23, molecular weight: 1109.3) is considered as the most abundant ginsenoside among more than 30 ginsenosides in P. ginseng\n[10, 11]. The previous studies have reported that P. ginseng and its active components or metabolites had antioxidant, immunomodulatory, anti-inflammatory, and lipid-lowering effects [12–15]. Many studies have shown that ginsenoside Rb1 and its metabolite compound K attenuated liver injury through inhibiting lipid peroxidation, TNF-α, NO, prostaglandin E2 (PGE2), intercellular adhesion molecule (ICAM)-1 and nuclear factor-κB (NF-κB) activation [16–19]. However, the effect of ginsenosides on liver fibrosis is not clear. Considering ginsenoside Rb1 as the most abundant ginsenoside in P. ginseng\n[10, 11] and its hepatoprotective activity [16–19], therefore, this study investigated the protective effects of P. ginseng extract (ginseng extract) and ginsenoside Rb1 on CCl4-induced liver inflammation and fibrosis in rats.", " Animals and treatments Sprague–Dawley rats weighing 200–250 g were purchased from the National Laboratory Animal Center (Taipei, Taiwan). Rats were housed under a 12-h light–dark cycle at 22-24°C with a relative humidity of 65-70%. After one-week adaptation, rats were randomly divided into four groups (n =10 per group): control, CCl4, CCl4 + ginseng extract (GE) and CCl4 + ginsenoside Rb1 (Rb1) groups. The normal diet based on Laboratory Rodent Diet 5001 powder was purchased from PMI Nutrition International Inc. (Brentwood, MO). 
Ginseng extract (Ashland Inc., Covington, KY, USA) containing 800 g ginsenosides/kg extract (80%) (ginsenosides in the extract include Rb1, Rc, Rd, Rg1, Rg2, Rg3, Rh1 and Rh2) and ginsenoside Rb1 (China Chemical & Pharmaceutical Co., Ltd., Taipei, Taiwan) with 98% purity were blended with the normal diet at a dose of 0.5 g/kg and 0.05 g/kg, respectively. Ginsenoside Rb1 content was equal in the GE and Rb1 groups. Rats were fed ginseng extract or ginsenoside Rb1 two weeks before (week 0, W0) the induction of liver injury by intraperitoneal injection of 400 ml/l CCl4 in olive oil at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was injected with an equal volume of olive oil without CCl4. Food intake, water intake and body weight were recorded throughout 9-week experimental period. This study was approved by the Institutional Animal Care and Use Committee of Taipei Medical University.\nSprague–Dawley rats weighing 200–250 g were purchased from the National Laboratory Animal Center (Taipei, Taiwan). Rats were housed under a 12-h light–dark cycle at 22-24°C with a relative humidity of 65-70%. After one-week adaptation, rats were randomly divided into four groups (n =10 per group): control, CCl4, CCl4 + ginseng extract (GE) and CCl4 + ginsenoside Rb1 (Rb1) groups. The normal diet based on Laboratory Rodent Diet 5001 powder was purchased from PMI Nutrition International Inc. (Brentwood, MO). Ginseng extract (Ashland Inc., Covington, KY, USA) containing 800 g ginsenosides/kg extract (80%) (ginsenosides in the extract include Rb1, Rc, Rd, Rg1, Rg2, Rg3, Rh1 and Rh2) and ginsenoside Rb1 (China Chemical & Pharmaceutical Co., Ltd., Taipei, Taiwan) with 98% purity were blended with the normal diet at a dose of 0.5 g/kg and 0.05 g/kg, respectively. Ginsenoside Rb1 content was equal in the GE and Rb1 groups. Rats were fed ginseng extract or ginsenoside Rb1 two weeks before (week 0, W0) the induction of liver injury by intraperitoneal injection of 400 ml/l CCl4 in olive oil at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was injected with an equal volume of olive oil without CCl4. Food intake, water intake and body weight were recorded throughout 9-week experimental period. This study was approved by the Institutional Animal Care and Use Committee of Taipei Medical University.\n Histopathological examination After 9 weeks, rats were euthanized with ether and liver samples from left lateral lobe, median lobe and right lateral lobe were collected for histopathological and biochemical analyses. Excised liver specimens from different lobes (1 cm × 1 cm) were fixed in 10% paraformaldehyde, embedded in paraffin, sectioned and stained with hematoxylin and eosin (H&E), Masson’s trichrome or silver. The specimens were coded with a single-blind method and graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation, and fibrosis under a light microscope by a pathologist.\nAfter 9 weeks, rats were euthanized with ether and liver samples from left lateral lobe, median lobe and right lateral lobe were collected for histopathological and biochemical analyses. 
Excised liver specimens from different lobes (1 cm × 1 cm) were fixed in 10% paraformaldehyde, embedded in paraffin, sectioned and stained with hematoxylin and eosin (H&E), Masson’s trichrome or silver. The specimens were coded with a single-blind method and graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation, and fibrosis under a light microscope by a pathologist.\n Plasma alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activities Blood samples from rat tails were collected into heparin-containing tubes at weeks 0, 2 (CCl4 injection) and 9. Blood was centrifuged at 3000 g for 15 min at 4°C. Plasma ALT and AST activities were measured spectrophotometrically at 570 nm using a commercial kit (RM 163-K, Iatron Laboratories Inc., Tokyo, Japan).\nBlood samples from rat tails were collected into heparin-containing tubes at weeks 0, 2 (CCl4 injection) and 9. Blood was centrifuged at 3000 g for 15 min at 4°C. Plasma ALT and AST activities were measured spectrophotometrically at 570 nm using a commercial kit (RM 163-K, Iatron Laboratories Inc., Tokyo, Japan).\n Plasma and hepatic lipid concentrations Blood samples from rat tails were collected at weeks 0, 2 and 9, and centrifuged at 3000 g for 15 min at 4°C. Liver samples from left lateral lobe, median lobe and right lateral lobe were homogenized in chloroform/methanol (2:1) solution and extracted by chloroform/methanol/water (3:48:47) solution. Triglycerides and total cholesterol concentrations in plasma and liver were determined spectrophotometrically at 500 nm using commercial enzymatic kits (Randox® TR213 for triglycerides, Randox® CH201 for total cholesterol, Randox Laboratories Ltd., London, UK).\nBlood samples from rat tails were collected at weeks 0, 2 and 9, and centrifuged at 3000 g for 15 min at 4°C. Liver samples from left lateral lobe, median lobe and right lateral lobe were homogenized in chloroform/methanol (2:1) solution and extracted by chloroform/methanol/water (3:48:47) solution. Triglycerides and total cholesterol concentrations in plasma and liver were determined spectrophotometrically at 500 nm using commercial enzymatic kits (Randox® TR213 for triglycerides, Randox® CH201 for total cholesterol, Randox Laboratories Ltd., London, UK).\n Hepatic inflammatory markers Liver slices (0.5 g) were homogenized in 1.5 mL of buffer solution (50 mmol/l Tris, 150 mmol/l NaCl, and 10 ml/l Triton X-100, pH 7.2) [20] and mixed with 100 μl of proteinase inhibitor cocktail solution (P8340, Sigma-Aldrich, Inc., Saint Louis, USA). Liver homogenate was centrifuged at 3000 g for 15 min at 4°C for TNF-α, IL-1β and IL-10 analysis. For PGE2 and soluble ICAM-1 (sICAM-1) analysis, liver slices (0.5 g) were mixed with 1.0 ml of homogenized buffer (0.25 mol/l sucrose, 50 mmol/l Tris–HCl, and 5 mmol/l EDTA, pH 7.5). Liver homogenate was centrifuged at 8000 g for 15 min at 4°C.\nHepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels were measured spectrophotometrically using enzyme-linked immunosorbent assay (ELISA) kits (Quantikine® RTA00 for TNF-α, Quantikine® RLB00 for IL-1β, DuoSet® DY522 for IL-10, PGE2, Quantikine® KGE004 for PGE2, Quantikine® RIC100 for sICAM-1, R&D Systems, Inc., Minneapolis, USA). 
Hepatic supernatant was separately incubated with rat anti-TNF-α, anti-IL-1β, anti-IL-10, anti-PGE2 or anti-sICAM-1, then washed with wash buffer (0.05% Tween® in phosphate buffer solution, PBS) followed by incubation with polyclonal antibody against TNF-α, IL-1β, PGE2 or sICAM-1 conjugated to horseradish peroxidase or biotinylated anti-IL-10 secondary antibody with streptavidin conjugated to horseradish peroxidase, respectively. After washed with wash buffer several times, the substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added and the reaction was terminated by adding diluted hydrochloric acid. The absorbance was determined at 450 nm. Protein concentration was measured by the method of Lowry et al. [21].\nLiver slices (0.5 g) were homogenized in 1.5 mL of buffer solution (50 mmol/l Tris, 150 mmol/l NaCl, and 10 ml/l Triton X-100, pH 7.2) [20] and mixed with 100 μl of proteinase inhibitor cocktail solution (P8340, Sigma-Aldrich, Inc., Saint Louis, USA). Liver homogenate was centrifuged at 3000 g for 15 min at 4°C for TNF-α, IL-1β and IL-10 analysis. For PGE2 and soluble ICAM-1 (sICAM-1) analysis, liver slices (0.5 g) were mixed with 1.0 ml of homogenized buffer (0.25 mol/l sucrose, 50 mmol/l Tris–HCl, and 5 mmol/l EDTA, pH 7.5). Liver homogenate was centrifuged at 8000 g for 15 min at 4°C.\nHepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels were measured spectrophotometrically using enzyme-linked immunosorbent assay (ELISA) kits (Quantikine® RTA00 for TNF-α, Quantikine® RLB00 for IL-1β, DuoSet® DY522 for IL-10, PGE2, Quantikine® KGE004 for PGE2, Quantikine® RIC100 for sICAM-1, R&D Systems, Inc., Minneapolis, USA). Hepatic supernatant was separately incubated with rat anti-TNF-α, anti-IL-1β, anti-IL-10, anti-PGE2 or anti-sICAM-1, then washed with wash buffer (0.05% Tween® in phosphate buffer solution, PBS) followed by incubation with polyclonal antibody against TNF-α, IL-1β, PGE2 or sICAM-1 conjugated to horseradish peroxidase or biotinylated anti-IL-10 secondary antibody with streptavidin conjugated to horseradish peroxidase, respectively. After washed with wash buffer several times, the substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added and the reaction was terminated by adding diluted hydrochloric acid. The absorbance was determined at 450 nm. Protein concentration was measured by the method of Lowry et al. [21].\n Hepatic hydroxyproline, MMP-2 and TIMP-1 levels Hepatic hydroxyproline level was measured by colorimetric assay. Freeze-dried liver specimen (0.25 g) was homogenized with 2 ml of distilled water. Liver homogenate was hydrolyzed in alkaline solution (2 mol/L NaOH), oxidized with chloramines T reagent, and incubated with Ehrlich’s reagent at 65°C. The chromogenic product was determined spectrophotometrically at 550 nm.\nThe levels of MMP-2 and TIMP-1 in the liver were determined by commercial kits (Quantikine® DMP200 for MMP-2, Quantikine® RTM100 TIMP-1, R&D Systems, Inc.) using ELISA. Liver slices were homogenized with PBS and proteinase inhibitor cocktail solution and centrifuged at 12000 g for 10 min at 4°C. The supernatant was centrifuged again and collected for further analysis. Hepatic supernatant was separately incubated with rat anti-MMP-2 or anti- TIMP-1, washed with wash buffer, and incubated with polyclonal antibody against MMP-2 or TIMP-1 conjugated to horseradish peroxidase followed by several washes with wash buffer. 
The substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added for the reaction and stop solution (diluted hydrochloric acid) was then added to stop the reaction. The absorbance was measured at 450 nm.\nHepatic hydroxyproline level was measured by colorimetric assay. Freeze-dried liver specimen (0.25 g) was homogenized with 2 ml of distilled water. Liver homogenate was hydrolyzed in alkaline solution (2 mol/L NaOH), oxidized with chloramines T reagent, and incubated with Ehrlich’s reagent at 65°C. The chromogenic product was determined spectrophotometrically at 550 nm.\nThe levels of MMP-2 and TIMP-1 in the liver were determined by commercial kits (Quantikine® DMP200 for MMP-2, Quantikine® RTM100 TIMP-1, R&D Systems, Inc.) using ELISA. Liver slices were homogenized with PBS and proteinase inhibitor cocktail solution and centrifuged at 12000 g for 10 min at 4°C. The supernatant was centrifuged again and collected for further analysis. Hepatic supernatant was separately incubated with rat anti-MMP-2 or anti- TIMP-1, washed with wash buffer, and incubated with polyclonal antibody against MMP-2 or TIMP-1 conjugated to horseradish peroxidase followed by several washes with wash buffer. The substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added for the reaction and stop solution (diluted hydrochloric acid) was then added to stop the reaction. The absorbance was measured at 450 nm.\n Statistical analysis All data were expressed as mean ± SD. The data were analyzed by one-way analysis of variance (ANOVA) using Statistical Analysis System (SAS version 9.1, SAS Institute Inc., Cary, NC, USA). The difference between any two groups was analyzed by Fisher’s least significant difference test. A value p <0.05 was considered significant.\nAll data were expressed as mean ± SD. The data were analyzed by one-way analysis of variance (ANOVA) using Statistical Analysis System (SAS version 9.1, SAS Institute Inc., Cary, NC, USA). The difference between any two groups was analyzed by Fisher’s least significant difference test. A value p <0.05 was considered significant.", "Sprague–Dawley rats weighing 200–250 g were purchased from the National Laboratory Animal Center (Taipei, Taiwan). Rats were housed under a 12-h light–dark cycle at 22-24°C with a relative humidity of 65-70%. After one-week adaptation, rats were randomly divided into four groups (n =10 per group): control, CCl4, CCl4 + ginseng extract (GE) and CCl4 + ginsenoside Rb1 (Rb1) groups. The normal diet based on Laboratory Rodent Diet 5001 powder was purchased from PMI Nutrition International Inc. (Brentwood, MO). Ginseng extract (Ashland Inc., Covington, KY, USA) containing 800 g ginsenosides/kg extract (80%) (ginsenosides in the extract include Rb1, Rc, Rd, Rg1, Rg2, Rg3, Rh1 and Rh2) and ginsenoside Rb1 (China Chemical & Pharmaceutical Co., Ltd., Taipei, Taiwan) with 98% purity were blended with the normal diet at a dose of 0.5 g/kg and 0.05 g/kg, respectively. Ginsenoside Rb1 content was equal in the GE and Rb1 groups. Rats were fed ginseng extract or ginsenoside Rb1 two weeks before (week 0, W0) the induction of liver injury by intraperitoneal injection of 400 ml/l CCl4 in olive oil at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was injected with an equal volume of olive oil without CCl4. Food intake, water intake and body weight were recorded throughout 9-week experimental period. 
This study was approved by the Institutional Animal Care and Use Committee of Taipei Medical University.", "After 9 weeks, rats were euthanized with ether and liver samples from left lateral lobe, median lobe and right lateral lobe were collected for histopathological and biochemical analyses. Excised liver specimens from different lobes (1 cm × 1 cm) were fixed in 10% paraformaldehyde, embedded in paraffin, sectioned and stained with hematoxylin and eosin (H&E), Masson’s trichrome or silver. The specimens were coded with a single-blind method and graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation, and fibrosis under a light microscope by a pathologist.", "Blood samples from rat tails were collected into heparin-containing tubes at weeks 0, 2 (CCl4 injection) and 9. Blood was centrifuged at 3000 g for 15 min at 4°C. Plasma ALT and AST activities were measured spectrophotometrically at 570 nm using a commercial kit (RM 163-K, Iatron Laboratories Inc., Tokyo, Japan).", "Blood samples from rat tails were collected at weeks 0, 2 and 9, and centrifuged at 3000 g for 15 min at 4°C. Liver samples from left lateral lobe, median lobe and right lateral lobe were homogenized in chloroform/methanol (2:1) solution and extracted by chloroform/methanol/water (3:48:47) solution. Triglycerides and total cholesterol concentrations in plasma and liver were determined spectrophotometrically at 500 nm using commercial enzymatic kits (Randox® TR213 for triglycerides, Randox® CH201 for total cholesterol, Randox Laboratories Ltd., London, UK).", "Liver slices (0.5 g) were homogenized in 1.5 mL of buffer solution (50 mmol/l Tris, 150 mmol/l NaCl, and 10 ml/l Triton X-100, pH 7.2) [20] and mixed with 100 μl of proteinase inhibitor cocktail solution (P8340, Sigma-Aldrich, Inc., Saint Louis, USA). Liver homogenate was centrifuged at 3000 g for 15 min at 4°C for TNF-α, IL-1β and IL-10 analysis. For PGE2 and soluble ICAM-1 (sICAM-1) analysis, liver slices (0.5 g) were mixed with 1.0 ml of homogenized buffer (0.25 mol/l sucrose, 50 mmol/l Tris–HCl, and 5 mmol/l EDTA, pH 7.5). Liver homogenate was centrifuged at 8000 g for 15 min at 4°C.\nHepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels were measured spectrophotometrically using enzyme-linked immunosorbent assay (ELISA) kits (Quantikine® RTA00 for TNF-α, Quantikine® RLB00 for IL-1β, DuoSet® DY522 for IL-10, PGE2, Quantikine® KGE004 for PGE2, Quantikine® RIC100 for sICAM-1, R&D Systems, Inc., Minneapolis, USA). Hepatic supernatant was separately incubated with rat anti-TNF-α, anti-IL-1β, anti-IL-10, anti-PGE2 or anti-sICAM-1, then washed with wash buffer (0.05% Tween® in phosphate buffer solution, PBS) followed by incubation with polyclonal antibody against TNF-α, IL-1β, PGE2 or sICAM-1 conjugated to horseradish peroxidase or biotinylated anti-IL-10 secondary antibody with streptavidin conjugated to horseradish peroxidase, respectively. After washed with wash buffer several times, the substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added and the reaction was terminated by adding diluted hydrochloric acid. The absorbance was determined at 450 nm. Protein concentration was measured by the method of Lowry et al. 
[21].", "Hepatic hydroxyproline level was measured by colorimetric assay. Freeze-dried liver specimen (0.25 g) was homogenized with 2 ml of distilled water. Liver homogenate was hydrolyzed in alkaline solution (2 mol/L NaOH), oxidized with chloramines T reagent, and incubated with Ehrlich’s reagent at 65°C. The chromogenic product was determined spectrophotometrically at 550 nm.\nThe levels of MMP-2 and TIMP-1 in the liver were determined by commercial kits (Quantikine® DMP200 for MMP-2, Quantikine® RTM100 TIMP-1, R&D Systems, Inc.) using ELISA. Liver slices were homogenized with PBS and proteinase inhibitor cocktail solution and centrifuged at 12000 g for 10 min at 4°C. The supernatant was centrifuged again and collected for further analysis. Hepatic supernatant was separately incubated with rat anti-MMP-2 or anti- TIMP-1, washed with wash buffer, and incubated with polyclonal antibody against MMP-2 or TIMP-1 conjugated to horseradish peroxidase followed by several washes with wash buffer. The substrate solution (hydrogen peroxide and chromogen tetramethylbenzidine) was added for the reaction and stop solution (diluted hydrochloric acid) was then added to stop the reaction. The absorbance was measured at 450 nm.", "All data were expressed as mean ± SD. The data were analyzed by one-way analysis of variance (ANOVA) using Statistical Analysis System (SAS version 9.1, SAS Institute Inc., Cary, NC, USA). The difference between any two groups was analyzed by Fisher’s least significant difference test. A value p <0.05 was considered significant.", " Body weight, liver weight and food intake The results of body weight, liver weight and food intake were shown in Table 1 to monitor the effects of the treatments on gross growth and liver weight. Final body weight and weight gain were significantly higher in the control group than those in the CCl4 (p <0.01), GE (p <0.05), and Rb1 (p <0.05) groups (Table 1), but not significantly different among the three CCl4 treated groups. Daily intake of ginseng extract and ginsenoside Rb1 was 12.6 ± 0.6 mg (33.8 ± 1.5 mg/kg body weight) and 1.3 ± 0.6 mg (3.3 ± 0.1 mg/kg body weight) in the GE and Rb1 groups, respectively. The relative liver weight was significantly higher in the CCl4 and GE groups than that in the control group (p <0.01). The Rb1 group significantly reduced the relative liver weight compared with the CCl4 group (p <0.05). However, total liver weight and daily food intake did not differ significantly among the four groups.Table 1\nBody weight, liver weight and food intake in rats treated with ginsenosides against CCl\n4\n-induced liver damage\nControlCCl\n4\nGERb1Initial body weight (g)240 ± 8a\n233 ± 12a\n236 ± 14a\n239 ± 15a\nFinal body weight (g)478 ± 23b\n427 ± 25a\n433 ± 38a\n448 ± 37a\nWeight gain (g)238 ± 22b\n194 ± 23a\n208 ± 34a\n209 ± 30a\nTotal liver weight (g)15.9 ± 2.5a\n17.5 ± 1.5a\n17.3 ± 3.8a\n16.4 ± 1.8a\nRelative liver weight (g/kg)33.2 ± 4.8a\n41.1 ± 3.4c\n39.7 ± 5.8bc\n36.7 ± 3.4ab\nFood intake (g/d)26.5 ± 1.6a\n25.3 ± 1.0a\n25.2 ± 1.2a\n25.4 ± 1.2a\nData are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.\n\nBody weight, liver weight and food intake in rats treated with ginsenosides against CCl\n4\n-induced liver damage\n\nData are presented as mean ± SD (n =10). 
"Body weight, liver weight and food intake: The results for body weight, liver weight and food intake are shown in Table 1 to monitor the effects of the treatments on gross growth and liver weight. Final body weight and weight gain were significantly higher in the control group than in the CCl4 (p <0.01), GE (p <0.05) and Rb1 (p <0.05) groups (Table 1), but did not differ significantly among the three CCl4-treated groups. Daily intake of ginseng extract and ginsenoside Rb1 was 12.6 ± 0.6 mg (33.8 ± 1.5 mg/kg body weight) and 1.3 ± 0.6 mg (3.3 ± 0.1 mg/kg body weight) in the GE and Rb1 groups, respectively. Relative liver weight was significantly higher in the CCl4 and GE groups than in the control group (p <0.01), and was significantly lower in the Rb1 group than in the CCl4 group (p <0.05). However, total liver weight and daily food intake did not differ significantly among the four groups.

Table 1. Body weight, liver weight and food intake in rats treated with ginsenosides against CCl4-induced liver damage

                               Control        CCl4           GE             Rb1
Initial body weight (g)        240 ± 8a       233 ± 12a      236 ± 14a      239 ± 15a
Final body weight (g)          478 ± 23b      427 ± 25a      433 ± 38a      448 ± 37a
Weight gain (g)                238 ± 22b      194 ± 23a      208 ± 34a      209 ± 30a
Total liver weight (g)         15.9 ± 2.5a    17.5 ± 1.5a    17.3 ± 3.8a    16.4 ± 1.8a
Relative liver weight (g/kg)   33.2 ± 4.8a    41.1 ± 3.4c    39.7 ± 5.8bc   36.7 ± 3.4ab
Food intake (g/d)              26.5 ± 1.6a    25.3 ± 1.0a    25.2 ± 1.2a    25.4 ± 1.2a

Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.
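The reported daily doses follow from the dietary concentrations and the measured food intake. The sketch below reproduces that arithmetic using only the group means from Table 1; the exact published means ± SD would require the per-animal intake and time-weighted body weights, so the body-weight averages here are rough approximations.

```python
# Sketch: estimate daily ginsenoside dose from dietary concentration,
# food intake and body weight (group means from Table 1; approximate only).
diet_conc = {"GE": 0.5, "Rb1": 0.05}          # g per kg diet (numerically mg per g diet)
food_intake_g_per_d = {"GE": 25.2, "Rb1": 25.4}
mean_body_weight_kg = {"GE": (0.236 + 0.433) / 2,    # crude average of initial and final weight
                       "Rb1": (0.239 + 0.448) / 2}

for group in ("GE", "Rb1"):
    dose_mg_per_d = diet_conc[group] * food_intake_g_per_d[group]   # mg/g diet * g/d = mg/d
    dose_mg_per_kg = dose_mg_per_d / mean_body_weight_kg[group]
    print(f"{group}: {dose_mg_per_d:.1f} mg/d, ~{dose_mg_per_kg:.1f} mg/kg body weight/d")
```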
Histopathological examination: The results of the histopathological examination with different stains are shown in Figures 1 and 2 to determine the effects of the treatments on histopathological changes in the liver, especially liver fibrosis. The bright red color of the H&E staining shown in Figure 1A likely results from strong staining with eosin, a fluorescent red dye. The sections stained with H&E showed no fat accumulation in the liver of the control group, whereas large fat vacuoles were observed in the liver of the CCl4 group (Figure 1A). However, the Rb1 group had significantly fewer fat vacuoles than the CCl4 group (2.65 ± 0.82 vs. 3.50 ± 0.75, p <0.05) (Figure 1B). The pathological scores for fat change were not significantly different between the GE and Rb1 groups. The CCl4, GE and Rb1 groups had significantly elevated cell necrosis (p <0.05), inflammatory cell infiltration (p <0.01) and fibrosis (p <0.01) around the central veins compared with the control group. However, the pathological scores for necrosis, inflammation and fibrosis in the liver did not differ significantly among the three CCl4-treated groups.

Figure 1. Representative histological sections of rat liver specimens. A: hematoxylin and eosin stain at 20 × 10 magnification; B: semi-quantitative scores graded from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation and fibrosis in the control, CCl4, GE and Rb1 groups. Solid and dashed arrows indicate the central vein and collagen fibers, respectively. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar = 50 μm.

Figure 2. Representative histological sections of rat liver specimens. A: Masson’s trichrome stain at 15 × 10 magnification; B: silver stain at 15 × 10 magnification; C: semi-quantitative scores graded from 0 (no collagen formation), 1 (collagen formation in the central vein area), 2 (collagen and fibrous bridge formation in different central vein areas) to 3 (cirrhosis) for fibrosis in the control, CCl4, GE and Rb1 groups. Collagen fibers stained with Masson’s trichrome appear blue. Reticular fibers stained with silver appear brown. Solid and dashed arrows indicate the central vein and fiber bridging, respectively. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05). Scale bar = 50 μm.

The pathological assessment of liver fibrosis by Masson’s trichrome stain showed that the formation of collagen fibers (blue) was increased by exposure to CCl4 (Figure 2A). The fibrosis scores determined by Masson’s trichrome stain in the CCl4 (1.35 ± 0.34), GE (1.10 ± 0.39) and Rb1 (1.40 ± 0.39) groups were significantly higher than in the control group (p <0.01) (Figure 2C), but the GE group had a lower liver fibrosis score than the Rb1 group (p <0.05). The accumulation of reticular fibers stained with silver (brown) was increased by exposure to CCl4 (p <0.01) (Figure 2B). The GE group (1.05 ± 0.44) showed significantly less accumulation of reticular fibers than the CCl4 (1.60 ± 0.39, p <0.01) and Rb1 (1.40 ± 0.21, p <0.05) groups (Figure 2C).
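The semi-quantitative grading above lends itself to a simple tabular representation. The sketch below encodes the two grading scales and summarizes per-animal scores per group; the per-animal values are invented placeholders, since only group means ± SD are published.

```python
# Sketch: encode the histological grading scales and summarize per-group scores.
# Per-animal scores below are invented placeholders, not the study's raw data.
import statistics

FAT_SCALE = {0: "no lesion", 1: "trace", 2: "weak", 3: "moderate", 4: "severe"}
LESION_SCALE = {0: "no lesion",
                1: "central vein area",
                2: "central vein area plus surrounding area",
                3: "central and portal vein areas or cirrhosis"}

fat_scores = {
    "control": [0, 0, 1, 0, 0, 1, 0, 0, 0, 0],
    "CCl4":    [4, 3, 4, 3, 4, 3, 4, 3, 4, 3],
    "GE":      [3, 3, 2, 3, 3, 2, 3, 3, 2, 3],
    "Rb1":     [3, 2, 3, 2, 3, 3, 2, 3, 3, 2],
}

for group, scores in fat_scores.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    print(f"{group}: fat change {mean:.2f} ± {sd:.2f} "
          f"(worst grade: {FAT_SCALE[max(scores)]})")
```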
Plasma ALT and AST activities: Plasma ALT and AST activities were measured to assess the effects of the treatments on liver function. Plasma ALT and AST activities were significantly higher in the CCl4 group than in the control, GE and Rb1 groups (p <0.01) after the induction of liver injury (W2) (Table 2). At week 9, the CCl4 group still had increased plasma ALT (p <0.05) and AST (p <0.01) activities compared with the control group, whereas plasma ALT and AST activities did not differ among the control, GE and Rb1 groups. The GE group had significantly lower plasma AST activity than the CCl4 group at week 9 (p <0.05).

Table 2. Plasma ALT and AST activities in rats treated with ginsenosides

                        Control         CCl4                GE                 Rb1
ALT activity (IU/l)
  W0                    29.6 ± 2.9a     27.6 ± 1.7a         31.2 ± 3.8a        29.2 ± 32.8a
  W2                    33.2 ± 3.6a     4012.4 ± 2212.9b    469.0 ± 435.2a     1072.6 ± 618.6a
  W9                    33.6 ± 4.9a     334.9 ± 379.1b      136.4 ± 167.4ab    177.5 ± 333.0ab
AST activity (IU/l)
  W0                    75.9 ± 6.3a     78.3 ± 4.4a         70.8 ± 23.5a       77.6 ± 5.3a
  W2                    62.1 ± 20.2a    8370.4 ± 5360.8b    2155.0 ± 1973.8a   1233.6 ± 616.7a
  W9                    63.3 ± 6.0a     288.4 ± 181.3b      134.9 ± 114.6a     171.3 ± 227.9a

Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.
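To put the enzyme activities in Table 2 in context, the group means can be expressed as fold changes over baseline. The sketch below does this from the published group means only, ignoring the within-group variance, so the outputs are descriptive rather than statistical.

```python
# Sketch: fold change of plasma ALT relative to week 0, computed from the
# group means in Table 2 (means only; SDs ignored).
alt = {"control": {"W0": 29.6, "W2": 33.2, "W9": 33.6},
       "CCl4":    {"W0": 27.6, "W2": 4012.4, "W9": 334.9},
       "GE":      {"W0": 31.2, "W2": 469.0, "W9": 136.4},
       "Rb1":     {"W0": 29.2, "W2": 1072.6, "W9": 177.5}}

for group, weeks in alt.items():
    fold_w2 = weeks["W2"] / weeks["W0"]
    fold_w9 = weeks["W9"] / weeks["W0"]
    print(f"{group}: ALT {fold_w2:.0f}-fold at W2, {fold_w9:.0f}-fold at W9 vs baseline")
```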
Plasma and hepatic triglyceride and total cholesterol concentrations: Plasma and hepatic lipid concentrations were determined to evaluate the effects of the treatments on lipid profiles. After the induction of liver injury (W2), plasma triglycerides were significantly increased in the CCl4 group compared with the control group and remained elevated at week 9 (p <0.01) (Table 3). Treatment with GE (p <0.01) and Rb1 (p <0.05) significantly reduced plasma triglycerides compared with the CCl4 group, to levels similar to the control group, at weeks 2 and 9.

Table 3. Plasma triglyceride and total cholesterol concentrations in rats treated with ginsenosides

                             Control        CCl4           GE             Rb1
Triglycerides (mmol/l)
  W0                         0.16 ± 0.04a   0.16 ± 0.04a   0.14 ± 0.03a   0.15 ± 0.02a
  W2                         0.24 ± 0.04a   0.39 ± 0.19b   0.25 ± 0.04a   0.28 ± 0.10a
  W9                         0.54 ± 0.10c   0.35 ± 0.06b   0.27 ± 0.05a   0.29 ± 0.07a
Total cholesterol (mmol/l)
  W0                         0.90 ± 0.11b   0.71 ± 0.12a   0.79 ± 0.16ab  0.75 ± 0.10a
  W2                         1.01 ± 0.14a   1.20 ± 0.18b   1.05 ± 0.21ab  0.95 ± 0.27a
  W9                         0.79 ± 0.11a   0.70 ± 0.17a   0.67 ± 0.10a   0.66 ± 0.20a

Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.

At baseline, plasma total cholesterol was significantly lower in the CCl4 and Rb1 groups than in the control group (p <0.01). After the induction of liver injury, total cholesterol was significantly higher in the CCl4 group than in the control and Rb1 groups (p <0.05) (Table 3). Plasma total cholesterol did not differ significantly among the four groups at week 9.
Hepatic triglyceride concentrations were significantly increased, by 73%, in the CCl4 group compared with the control group (p <0.01), and were decreased by 56% and 60% in the GE and Rb1 groups, respectively, compared with the CCl4 group (p <0.01) (Figure 3A). Hepatic total cholesterol was significantly higher in the CCl4, GE and Rb1 groups than in the control group (p <0.01), but did not differ significantly among the CCl4-treated groups (Figure 3B).

Figure 3. Effects of ginseng extract and ginsenoside Rb1 on hepatic lipids in rats. A: hepatic triglyceride concentration; B: hepatic total cholesterol concentration. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05).
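The percentage changes quoted for hepatic lipids (and later for the fibrosis markers) are simple relative differences between group means. A minimal sketch of that calculation follows; the hepatic triglyceride values used are placeholders chosen only to reproduce the stated magnitudes, since the underlying means are presented in Figure 3 rather than tabulated here.

```python
# Sketch: percentage change between group means, as used for the
# "+73%", "-56%", "-60%" statements. Example numbers are placeholders.
def percent_change(value, reference):
    """Relative difference of `value` vs `reference`, in percent."""
    return 100.0 * (value - reference) / reference

control_tg, ccl4_tg, ge_tg = 10.0, 17.3, 7.6   # placeholder hepatic TG values
print(f"CCl4 vs control: {percent_change(ccl4_tg, control_tg):+.0f}%")  # ~ +73%
print(f"GE vs CCl4:      {percent_change(ge_tg, ccl4_tg):+.0f}%")       # ~ -56%
```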
Hepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels: Hepatic cytokine levels are shown in Table 4 to determine the effects of the treatments on hepatic mediators released under inflammatory conditions. Hepatic IL-1β (p <0.01), PGE2 (p <0.05) and sICAM-1 (p <0.05) levels were significantly elevated, whereas the hepatic IL-10 level was significantly decreased, in the CCl4 group compared with the control group (Table 4). Hepatic TNF-α, IL-1β and PGE2 levels were significantly lower in the GE group than in the CCl4 group (p <0.05). The Rb1 group had a higher hepatic TNF-α level than the GE group, but a lower hepatic PGE2 level than the CCl4 group (p <0.05).

Table 4. Hepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels in rats treated with ginsenosides

                           Control         CCl4             GE               Rb1
TNF-α (pg/mg protein)      15.8 ± 7.6ab    17.2 ± 5.1b      10.4 ± 3.3a      16.7 ± 6.9b
IL-1β (pg/mg protein)      132.5 ± 39.6a   333.0 ± 135.8c   212.0 ± 115.2ab  292.9 ± 182.9bc
IL-10 (pg/mg protein)      4461 ± 958b     3443 ± 508a      3513 ± 677a      3514 ± 600a
PGE2 (pg/mg protein)       6196 ± 1599b    9822 ± 2610c     4636 ± 1928ab    4128 ± 1480a
sICAM-1 (pg/mg protein)    2782 ± 771a     3991 ± 867b      3288 ± 567ab     3427 ± 963ab

Data are presented as mean ± SD (n =10). Values not sharing the same superscript differ significantly (p <0.05) within the same row.
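The cytokine values in Table 4 are normalized to tissue protein, i.e., the ELISA concentration of the homogenate supernatant divided by its protein concentration (here determined by the Lowry assay). A minimal sketch of that normalization, with placeholder numbers rather than study measurements:

```python
# Sketch: normalize an ELISA concentration (pg/ml of homogenate supernatant)
# to tissue protein (mg/ml by Lowry) to obtain pg/mg protein.
# Numbers are placeholders, not measurements from this study.
def per_mg_protein(analyte_pg_per_ml: float, protein_mg_per_ml: float,
                   dilution_factor: float = 1.0) -> float:
    return analyte_pg_per_ml * dilution_factor / protein_mg_per_ml

tnf_pg_per_ml = 85.0        # ELISA result for the supernatant (placeholder)
protein_mg_per_ml = 5.2     # Lowry result for the same supernatant (placeholder)
print(per_mg_protein(tnf_pg_per_ml, protein_mg_per_ml))  # ≈ 16.3 pg/mg protein
```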
Hepatic hydroxyproline, MMP-2 and TIMP-1 levels: Hepatic hydroxyproline, MMP-2 and TIMP-1 levels are shown in Figure 4 to investigate the effects of the treatments on liver fibrogenesis and fibrolysis. Hepatic hydroxyproline (p <0.05), MMP-2 (p <0.05) and TIMP-1 (p <0.01) levels were elevated by 55%, 28% and 61%, respectively, in the CCl4 group compared with the control group (Figure 4). Ginseng extract and ginsenoside Rb1 treatments significantly reduced the hepatic hydroxyproline level by 36% and 30% (Figure 4A) and the TIMP-1 level by 27% and 27% (Figure 4C), respectively, compared with the CCl4 group (p <0.05). However, the hepatic MMP-2 level did not differ among the three CCl4-treated groups (Figure 4B).

Figure 4. Effects of ginseng extract and ginsenoside Rb1 on liver fibrosis markers in rats. A: hepatic hydroxyproline level; B: MMP-2 level; C: TIMP-1 level. Data are presented as mean ± SD (n =10). Values not sharing the same letter differ significantly (p <0.05).",
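Colorimetric readouts such as the hydroxyproline assay described in the Methods are usually quantified against a linear standard curve of absorbance at 550 nm versus standard concentration. The sketch below fits such a curve and interpolates a sample; all numbers are placeholders, and the conversion to per-gram values assumes the 0.25 g in 2 ml homogenization from the Methods with no further dilution.

```python
# Sketch: linear standard curve for a colorimetric assay (A550 vs. standard
# concentration) and interpolation of a sample. Placeholder numbers only.
import numpy as np

std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0, 40.0])    # µg/ml hydroxyproline standards
std_a550 = np.array([0.02, 0.08, 0.15, 0.29, 0.57, 1.12])  # measured absorbances

slope, intercept = np.polyfit(std_conc, std_a550, 1)        # least-squares line

sample_a550 = 0.44
sample_conc = (sample_a550 - intercept) / slope              # µg/ml in the hydrolysate
per_gram = sample_conc * 2.0 / 0.25                          # 0.25 g dry liver in 2 ml (per Methods)
print(f"{sample_conc:.1f} µg/ml ≈ {per_gram:.0f} µg/g dry liver")
```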
"Consistent with a previous study [22], plasma ALT and AST activities were increased by CCl4-induced liver injury. Ginseng extract and ginsenoside Rb1 significantly decreased the plasma ALT and AST activities elevated by exposure to CCl4. Previous studies demonstrated that ginseng extract or heated ginseng exhibited antioxidant activity and acted as a free radical scavenger, inhibiting lipid peroxidation in vitro and increasing catalase and superoxide dismutase activities in V79-4 lung fibroblast cells [14, 23].
Moreover, ginsenoside Rb1, Rg1 or their derived metabolite compound K decreased hepatic malondialdehyde levels and serum ALT and AST activities [16, 18]. Therefore, ginseng extract and ginsenoside Rb1, acting as free radical scavengers, may protect hepatocytes from free radical damage.
Exposure to CCl4 led to significant increases in the accumulation of fat vacuoles and in the levels of triglycerides and total cholesterol in the liver. The abnormal fat accumulation in the liver caused by CCl4 could be attributed to: (1) an imbalance between lipogenesis and lipolysis, through increased lipid synthesis and lipid esterification [24] and through decreased cAMP production and hence reduced stimulation of hormone-sensitive lipase [25, 26]; and (2) impaired synthesis and secretion of very low density lipoprotein, through interference with the glycosylation and maturation of lipoglycoprotein by free radicals produced during CCl4 metabolism [24, 27], or through inactivation of the Ca2+-ATPase pump in the mitochondria and endoplasmic reticulum [6, 28].
Liver damage and elevated hepatic triglycerides induced by CCl4 were ameliorated by treatment with GE and Rb1. Red ginseng saponin, containing ginsenosides Rb1, Rb2, Rc, Rd, Re and Rg1, played a crucial role in hepatoprotection by suppressing oxidative stress and lipid peroxides via inhibition of the expression and activity of cytochrome P450 in the liver [29]. Consistent with our findings, ginsenoside Rb1 injected intraperitoneally at a dose of 10 mg/kg body weight for 3 d significantly decreased hepatic lipids by increasing hepatic cAMP production [30]. Additionally, Rb1 injected intraperitoneally at 10 mg/kg body weight was also shown to reduce hepatic triglyceride accumulation in high fat diet-induced obese rats by increasing hepatic carnitine palmitoyltransferase 1 activity and the cellular AMP/ATP ratio, thereby stimulating fatty acid oxidation and suppressing lipogenesis, respectively [31]. Compound K, a major intestinal metabolite of ginsenosides, has been demonstrated to elevate gene expression of peroxisome proliferator-activated receptor-α and to decrease gene expression of fatty acid synthase and stearoyl-CoA desaturase 1 through activation of AMP-activated protein kinase in HepG2 human hepatoma cells [32]. A previous study revealed that ginseng extract rich in ginsenosides suppressed hepatic cholesterol synthesis by inhibiting hepatic β-hydroxy-β-methylglutaryl-CoA reductase and cholesterol 7α-hydroxylase activities [33]. These results suggest that ginseng extract, ginsenoside Rb1 and their metabolites may accelerate lipid utilization and suppress lipid biosynthesis in the liver, thereby lowering the hepatic triglycerides elevated by CCl4 exposure.
Kupffer cells activated by oxidative stress secrete cytokines such as TNF-α and IL-1β, which stimulate the expression of sICAM-1 and thereby induce the activation of neutrophils [7]. H&E staining showed accumulation of inflammatory cells in the CCl4 group. Furthermore, fibrotic bridges were observed in the silver-stained histopathological sections of the CCl4 group as cells became necrotic and the reticular fibers condensed after the parenchymal framework collapsed. Ginseng extract reduced the accumulation of reticular fibers, ameliorated cell necrosis and inhibited production of TNF-α and IL-1β. In agreement with our present study, in vitro studies demonstrated that ginseng and ginsenoside Rb1 suppressed TNF-α production and IL-1β mRNA expression in murine RAW264.7 macrophages [34, 35].
Ginsenoside Rg1, one of the important components of P. ginseng, injected intravenously at 20 mg/kg body weight, significantly attenuated serum TNF-α and IL-6 release in septic mice [36].
PGE2 production and cyclooxygenase-2 (COX-2) expression are induced by the inflammatory response, and COX-2 expression is stimulated by proinflammatory cytokines such as TNF-α and IL-1β [37]. Our present study found that ginseng extract and ginsenoside Rb1 significantly decreased the hepatic PGE2 level induced by CCl4. It is presumed that ginseng extract and ginsenoside Rb1 suppressed PGE2 production by reducing proinflammatory cytokines and suppressing COX-2 expression. Furthermore, activation of NF-κB modulates the expression and secretion of proinflammatory cytokines, chemokines, adhesion molecules, COX-2 and inducible nitric oxide synthase (iNOS) [38]. Ginsenosides Rb1, Rg1, Rg3 and Rh1 and their derived metabolite compound K down-regulated activation of NF-κB and simultaneously suppressed PGE2, ICAM-1, COX-2 and iNOS expression in vitro [17, 19, 39–42]. Ginseng extract and ginsenoside Rb1 may therefore have attenuated the production of proinflammatory factors by inhibiting NF-κB activation.
Accumulation of hydroxyproline and collagen fibers was found in the CCl4 group, whereas ginseng extract and ginsenoside Rb1 decreased hepatic hydroxyproline and TIMP-1 levels, thereby inhibiting liver fibrosis. Oxidative stress induced by CCl4 metabolism can further stimulate the proliferation and invasiveness of hepatic stellate cells (HSCs) [43]. Proliferating HSCs increase TGF-β1 secretion, which further activates HSCs, induces gene expression of type I collagen and promotes collagen accumulation. Activated HSCs express MMPs and their tissue inhibitors (TIMPs). Oxidative stress stimulated MMP-2 production by HSCs via the extracellular signal-regulated kinase 1/2 and phosphatidylinositol 3-kinase pathways [43], and MMPs induce HSC proliferation and migration [44]. In vitro studies found that ginsenoside Rb1 inhibited HSC activation and the mRNA expression of type I and III collagen, TGF-β1 and TIMP-1 [45], and that its metabolite induced apoptosis in HSCs via the caspase-3 activation pathway [46]. Compound K was found to inhibit MMP-2 expression and NF-κB activation in an in vitro model [47]. Ginsenoside Rg1 injected subcutaneously at 50 and 100 mg/kg body weight attenuated serum levels of hyaluronic acid and type III procollagen, and the hepatic hydroxyproline level, in rats with thioacetamide-induced liver fibrosis [48]. A decrease in activated HSCs could lead to inhibition of fibrogenesis and TIMP-1 expression through reductions in TNF-α and TGF-β1 [49]. Our findings demonstrated that ginseng extract and ginsenoside Rb1 diminished the hepatic TIMP-1 level, accompanied by a decreased TNF-α level. Therefore, ginseng extract and ginsenoside Rb1 could suppress the activation and proliferation of HSCs and thereby inhibit liver fibrosis.
Ginsenoside Rb1 content was equivalent in the GE and Rb1 groups in the present study. Ginseng extract, which contained seven additional ginsenosides besides ginsenoside Rb1, diminished collagen accumulation and inhibited TNF-α production more effectively than ginsenoside Rb1 alone.
Therefore, the hepatoprotective and anti-inflammatory actions of ginseng extract against CCl4-induced liver damage could be attributed to the synergistic action of its ingredients as a whole, including the remaining 20% of constituents and their metabolites.", "In conclusion, Panax ginseng extract (0.5 g/kg) and ginsenoside Rb1 (0.05 g/kg) decrease the plasma ALT and AST activities elevated by CCl4-induced liver damage and inhibit the accumulation of triglycerides in the liver. The levels of TNF-α, PGE2, hydroxyproline and TIMP-1 in the liver are diminished by ginseng extract and ginsenoside Rb1. Therefore, ginseng extract and ginsenoside Rb1 attenuate CCl4-induced liver injury through anti-inflammatory and antifibrotic effects." ]
[ null, "methods", null, null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusions" ]
[ "Ginsenoside Rb1", "Carbon tetrachloride", "Interleukin-1β", "Liver fibrosis", "Tissue inhibitor of metalloproteinase-1" ]
Background: Liver cirrhosis is an irreversible stage in the process of liver damage that occurs after liver fibrosis. Liver fibrosis is attributed to inflammation, excessive accumulation of extracellular matrix and tissue remodeling during wound healing [1]. Chronic hepatitis and liver cirrhosis are positively associated with the occurrence of hepatocellular carcinoma [2, 3]. Therefore, the inhibition of hepatic inflammation and fibrosis is crucial in preventing the occurrence of liver cirrhosis and hepatocellular carcinoma. Oxidative stress from reactive oxygen species plays an important role in liver fibrogenesis [4]. Carbon tetrachloride (CCl4) is a toxic chemical that induces hepatotoxicity, including fatty degeneration, inflammation, fibrosis, hepatocellular death and carcinogenicity [5, 6]. The trichloromethyl radical produced during CCl4 metabolism initiates a chain reaction that causes lipid peroxidation, membrane dysfunction and further hepatotoxic damage [6]. The toxic metabolite of CCl4 can activate Kupffer cells to secrete cytokines such as interleukin-1 (IL-1) and tumor necrosis factor-α (TNF-α), stimulate transforming growth factor-β (TGF-β) production, inhibit nitric oxide (NO) formation and induce inflammation and liver fibrosis [6–8]. Matrix metalloproteinase (MMP)-2, also known as type IV collagenase and gelatinase A, acts as a regulator of extracellular matrix breakdown, and tissue inhibitor of metalloproteinases (TIMP)-1, an inhibitor of MMPs, exhibits anti-fibrolytic, growth-stimulating and anti-apoptotic activities [9]. Chronic exposure to CCl4 leads to liver fibrosis, which diminishes extracellular matrix degradation and increases MMP-2 secretion through the induction of TIMPs [9]. Panax ginseng (P. ginseng) root has been commonly used as a traditional medicine, food or dietary supplement. Ginsenosides, a class of steroid glycosides and triterpene saponins, are the major bioactive compounds in P. ginseng root, and ginsenoside Rb1 (C54H92O23, molecular weight: 1109.3) is the most abundant of the more than 30 ginsenosides in P. ginseng [10, 11]. Previous studies have reported that P. ginseng and its active components or metabolites have antioxidant, immunomodulatory, anti-inflammatory and lipid-lowering effects [12–15]. Many studies have shown that ginsenoside Rb1 and its metabolite compound K attenuated liver injury by inhibiting lipid peroxidation, TNF-α, NO, prostaglandin E2 (PGE2), intercellular adhesion molecule (ICAM)-1 and nuclear factor-κB (NF-κB) activation [16–19]. However, the effect of ginsenosides on liver fibrosis is not clear. Because ginsenoside Rb1 is the most abundant ginsenoside in P. ginseng [10, 11] and has hepatoprotective activity [16–19], this study investigated the protective effects of P. ginseng extract (ginseng extract) and ginsenoside Rb1 on CCl4-induced liver inflammation and fibrosis in rats. Methods. Animals and treatments: Sprague–Dawley rats weighing 200–250 g were purchased from the National Laboratory Animal Center (Taipei, Taiwan). Rats were housed under a 12-h light–dark cycle at 22–24°C with a relative humidity of 65–70%. After a one-week adaptation period, rats were randomly divided into four groups (n =10 per group): control, CCl4, CCl4 + ginseng extract (GE) and CCl4 + ginsenoside Rb1 (Rb1). The normal diet, based on Laboratory Rodent Diet 5001 powder, was purchased from PMI Nutrition International Inc. (Brentwood, MO).
Ginseng extract (Ashland Inc., Covington, KY, USA) containing 800 g ginsenosides/kg extract (80%; the ginsenosides in the extract include Rb1, Rc, Rd, Rg1, Rg2, Rg3, Rh1 and Rh2) and ginsenoside Rb1 (China Chemical & Pharmaceutical Co., Ltd., Taipei, Taiwan; 98% purity) were blended with the normal diet at doses of 0.5 g/kg and 0.05 g/kg, respectively. The ginsenoside Rb1 content was equal in the GE and Rb1 diets. Rats were fed ginseng extract or ginsenoside Rb1 beginning two weeks before (week 0, W0) the induction of liver injury by intraperitoneal injection of 400 ml/l CCl4 in olive oil at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was injected with an equal volume of olive oil without CCl4. Food intake, water intake and body weight were recorded throughout the 9-week experimental period. This study was approved by the Institutional Animal Care and Use Committee of Taipei Medical University.
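For orientation, the weekly CCl4 dose can be expressed per animal and as pure CCl4. The sketch below assumes the 40% v/v CCl4-in-olive-oil solution (400 ml/l) stated in the Methods and the commonly cited CCl4 density of about 1.59 g/ml; the body weight is an example value, not a study measurement.

```python
# Sketch: weekly CCl4 dose per rat, assuming 400 ml/l (40% v/v) CCl4 in olive oil
# and a CCl4 density of ~1.59 g/ml. Body weight is an example value.
body_weight_kg = 0.40                     # example mid-study rat
solution_dose_ml_per_kg = 0.75            # injected solution, per Methods
ccl4_fraction_v_v = 0.40                  # 400 ml CCl4 per litre of solution
ccl4_density_g_per_ml = 1.59              # approximate density of CCl4

solution_ml = solution_dose_ml_per_kg * body_weight_kg
ccl4_ml = solution_ml * ccl4_fraction_v_v
ccl4_g_per_kg = ccl4_ml * ccl4_density_g_per_ml / body_weight_kg
print(f"{solution_ml:.2f} ml solution/injection, "
      f"{ccl4_ml:.2f} ml pure CCl4 (~{ccl4_g_per_kg:.2f} g CCl4/kg body weight)")
```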
Histopathological examination: After 9 weeks, rats were euthanized with ether and liver samples from the left lateral lobe, median lobe and right lateral lobe were collected for histopathological and biochemical analyses. Excised liver specimens from the different lobes (1 cm × 1 cm) were fixed in 10% paraformaldehyde, embedded in paraffin, sectioned and stained with hematoxylin and eosin (H&E), Masson’s trichrome or silver. The specimens were coded with a single-blind method and graded under a light microscope by a pathologist from 0 (no lesion), 1 (trace lesion), 2 (weak lesion), 3 (moderate lesion) to 4 (severe lesion) for fat changes, and from 0 (no lesion), 1 (lesion in the central vein area), 2 (lesion in the central vein area and expansion to the surrounding area) to 3 (lesion in the central and portal vein areas or cirrhosis) for necrosis, inflammation and fibrosis.
Plasma alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activities: Blood samples from rat tails were collected into heparin-containing tubes at weeks 0, 2 (CCl4 injection) and 9. Blood was centrifuged at 3000 g for 15 min at 4°C. Plasma ALT and AST activities were measured spectrophotometrically at 570 nm using a commercial kit (RM 163-K, Iatron Laboratories Inc., Tokyo, Japan).
Plasma and hepatic lipid concentrations: Blood samples from rat tails were collected at weeks 0, 2 and 9, and centrifuged at 3000 g for 15 min at 4°C. Liver samples from the left lateral lobe, median lobe and right lateral lobe were homogenized in chloroform/methanol (2:1) solution and extracted with chloroform/methanol/water (3:48:47) solution. Triglyceride and total cholesterol concentrations in plasma and liver were determined spectrophotometrically at 500 nm using commercial enzymatic kits (Randox® TR213 for triglycerides, Randox® CH201 for total cholesterol, Randox Laboratories Ltd., London, UK).
Hepatic inflammatory markers: Liver slices (0.5 g) were homogenized in 1.5 ml of buffer solution (50 mmol/l Tris, 150 mmol/l NaCl and 10 ml/l Triton X-100, pH 7.2) [20] and mixed with 100 μl of proteinase inhibitor cocktail solution (P8340, Sigma-Aldrich, Inc., Saint Louis, USA). The liver homogenate was centrifuged at 3000 g for 15 min at 4°C for TNF-α, IL-1β and IL-10 analysis. For PGE2 and soluble ICAM-1 (sICAM-1) analysis, liver slices (0.5 g) were mixed with 1.0 ml of homogenization buffer (0.25 mol/l sucrose, 50 mmol/l Tris–HCl and 5 mmol/l EDTA, pH 7.5), and the homogenate was centrifuged at 8000 g for 15 min at 4°C. Hepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels were measured spectrophotometrically using enzyme-linked immunosorbent assay (ELISA) kits (Quantikine® RTA00 for TNF-α, Quantikine® RLB00 for IL-1β, DuoSet® DY522 for IL-10, Quantikine® KGE004 for PGE2, Quantikine® RIC100 for sICAM-1, R&D Systems, Inc., Minneapolis, USA).
Hepatic inflammatory markers

For TNF-α, IL-1β and IL-10 analysis, liver slices (0.5 g) were homogenized in 1.5 ml of buffer (50 mmol/l Tris, 150 mmol/l NaCl and 10 ml/l Triton X-100, pH 7.2) [20] mixed with 100 μl of proteinase inhibitor cocktail (P8340, Sigma-Aldrich, Inc., Saint Louis, USA), and the homogenate was centrifuged at 3000 g for 15 min at 4°C. For PGE2 and soluble ICAM-1 (sICAM-1) analysis, liver slices (0.5 g) were homogenized in 1.0 ml of buffer (0.25 mol/l sucrose, 50 mmol/l Tris–HCl and 5 mmol/l EDTA, pH 7.5) and centrifuged at 8000 g for 15 min at 4°C. Hepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels were measured with enzyme-linked immunosorbent assay (ELISA) kits (Quantikine® RTA00 for TNF-α, Quantikine® RLB00 for IL-1β, DuoSet® DY522 for IL-10, Quantikine® KGE004 for PGE2 and Quantikine® RIC100 for sICAM-1; R&D Systems, Inc., Minneapolis, USA). The hepatic supernatant was incubated with rat anti-TNF-α, anti-IL-1β, anti-IL-10, anti-PGE2 or anti-sICAM-1, washed with wash buffer (0.05% Tween® in phosphate-buffered saline, PBS), and then incubated with a horseradish peroxidase-conjugated polyclonal antibody against TNF-α, IL-1β, PGE2 or sICAM-1, or with a biotinylated anti-IL-10 secondary antibody followed by horseradish peroxidase-conjugated streptavidin. After several further washes, substrate solution (hydrogen peroxide and the chromogen tetramethylbenzidine) was added and the reaction was stopped with diluted hydrochloric acid. Absorbance was read at 450 nm. Protein concentration was measured by the method of Lowry et al. [21].

Hepatic hydroxyproline, MMP-2 and TIMP-1 levels

Hepatic hydroxyproline was measured by colorimetric assay. Freeze-dried liver (0.25 g) was homogenized in 2 ml of distilled water; the homogenate was hydrolyzed in alkaline solution (2 mol/l NaOH), oxidized with chloramine T reagent and incubated with Ehrlich’s reagent at 65°C, and the chromogenic product was read spectrophotometrically at 550 nm. Hepatic MMP-2 and TIMP-1 levels were determined by ELISA with commercial kits (Quantikine® DMP200 for MMP-2 and Quantikine® RTM100 for TIMP-1; R&D Systems, Inc.). Liver slices were homogenized in PBS with proteinase inhibitor cocktail and centrifuged at 12000 g for 10 min at 4°C; the supernatant was centrifuged again and collected for analysis. The supernatant was incubated with rat anti-MMP-2 or anti-TIMP-1, washed with wash buffer, incubated with a horseradish peroxidase-conjugated polyclonal antibody against MMP-2 or TIMP-1, and washed again several times. Substrate solution (hydrogen peroxide and tetramethylbenzidine) was added, the reaction was stopped with diluted hydrochloric acid, and absorbance was measured at 450 nm.
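Every ELISA readout above is an absorbance at 450 nm that has to be interpolated against a standard curve and then normalized to total protein. The kits' own instructions were followed in the study; the sketch below is only a generic illustration of that conversion, using a four-parameter logistic fit with made-up standard, sample and protein values.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # Four-parameter logistic curve: a = response at zero analyte,
    # d = response at saturation, c = inflection point, b = slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical TNF-alpha standards (pg/ml) and their absorbances at 450 nm.
std_conc = np.array([12.5, 25.0, 50.0, 100.0, 200.0, 400.0, 800.0])
std_abs = np.array([0.08, 0.15, 0.27, 0.48, 0.82, 1.30, 1.85])
params, _ = curve_fit(four_pl, std_conc, std_abs, p0=[0.05, 1.0, 150.0, 2.0], maxfev=10000)

def absorbance_to_conc(a450, a, b, c, d):
    # Inverse of the 4PL: recover the concentration for a measured absorbance.
    return c * (((a - d) / (a450 - d)) - 1.0) ** (1.0 / b)

sample_abs = 0.55            # absorbance of one liver-supernatant well (made up)
protein_mg_per_ml = 4.2      # total protein by the Lowry assay (made up)
conc_pg_per_ml = absorbance_to_conc(sample_abs, *params)
print(f"TNF-alpha: {conc_pg_per_ml / protein_mg_per_ml:.1f} pg/mg protein")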
Statistical analysis

All data are expressed as mean ± SD. Data were analyzed by one-way analysis of variance (ANOVA) using the Statistical Analysis System (SAS version 9.1, SAS Institute Inc., Cary, NC, USA), and differences between any two groups were assessed with Fisher’s least significant difference test. A value of p < 0.05 was considered significant.
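The same analysis can be reproduced outside SAS. The sketch below, written with placeholder data rather than the study's measurements, runs a one-way ANOVA and then Fisher's least significant difference comparisons based on the pooled within-group variance, mirroring the procedure described above.

from itertools import combinations

import numpy as np
from scipy import stats

# Placeholder data for four groups of n = 10 (not the study's raw values).
rng = np.random.default_rng(0)
groups = {
    "control": rng.normal(33, 5, 10),
    "CCl4":    rng.normal(41, 4, 10),
    "GE":      rng.normal(40, 6, 10),
    "Rb1":     rng.normal(37, 3, 10),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Fisher's LSD: pairwise t-tests that reuse the pooled ANOVA error variance (MSE).
k = len(groups)
n_total = sum(len(v) for v in groups.values())
mse = sum(((v - v.mean()) ** 2).sum() for v in groups.values()) / (n_total - k)
df_error = n_total - k

for (name1, g1), (name2, g2) in combinations(groups.items(), 2):
    se = np.sqrt(mse * (1 / len(g1) + 1 / len(g2)))
    t = (g1.mean() - g2.mean()) / se
    p = 2 * stats.t.sf(abs(t), df_error)
    print(f"{name1} vs {name2}: t = {t:6.2f}, p = {p:.4f}")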
Results

Body weight, liver weight and food intake

Body weight, liver weight and food intake (Table 1) were recorded to monitor the effects of the treatments on gross growth and liver weight. Final body weight and weight gain were significantly higher in the control group than in the CCl4 (p < 0.01), GE (p < 0.05) and Rb1 (p < 0.05) groups, but did not differ significantly among the three CCl4-treated groups. Daily intake of ginseng extract and ginsenoside Rb1 was 12.6 ± 0.6 mg (33.8 ± 1.5 mg/kg body weight) and 1.3 ± 0.6 mg (3.3 ± 0.1 mg/kg body weight) in the GE and Rb1 groups, respectively. Relative liver weight was significantly higher in the CCl4 and GE groups than in the control group (p < 0.01), and was significantly lower in the Rb1 group than in the CCl4 group (p < 0.05). Total liver weight and daily food intake did not differ significantly among the four groups.

Table 1. Body weight, liver weight and food intake in rats treated with ginsenosides against CCl4-induced liver damage

                               Control        CCl4           GE             Rb1
Initial body weight (g)        240 ± 8a       233 ± 12a      236 ± 14a      239 ± 15a
Final body weight (g)          478 ± 23b      427 ± 25a      433 ± 38a      448 ± 37a
Weight gain (g)                238 ± 22b      194 ± 23a      208 ± 34a      209 ± 30a
Total liver weight (g)         15.9 ± 2.5a    17.5 ± 1.5a    17.3 ± 3.8a    16.4 ± 1.8a
Relative liver weight (g/kg)   33.2 ± 4.8a    41.1 ± 3.4c    39.7 ± 5.8bc   36.7 ± 3.4ab
Food intake (g/d)              26.5 ± 1.6a    25.3 ± 1.0a    25.2 ± 1.2a    25.4 ± 1.2a

Data are presented as mean ± SD (n = 10). Values not sharing the same superscript differ significantly (p < 0.05) within the same row.
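The reported daily intakes follow directly from the diet concentrations in the Methods, the measured food intake and body weight; the short calculation below reproduces that arithmetic for a representative rat, together with the weekly volume of the CCl4/olive oil mixture. The body weight used is illustrative, not a study value.

# Diet concentrations (g additive per kg diet) from the Methods.
ge_in_diet_g_per_kg = 0.5      # ginseng extract
rb1_in_diet_g_per_kg = 0.05    # pure ginsenoside Rb1

food_intake_g_per_day = 25.3   # approximate mean daily food intake (Table 1)
body_weight_kg = 0.40          # illustrative mid-study body weight

# g per kg diet is numerically mg per g diet, so multiplying by food intake (g/day) gives mg/day.
ge_mg_per_day = ge_in_diet_g_per_kg * food_intake_g_per_day
rb1_mg_per_day = rb1_in_diet_g_per_kg * food_intake_g_per_day

print(f"ginseng extract: {ge_mg_per_day:.1f} mg/day "
      f"({ge_mg_per_day / body_weight_kg:.1f} mg/kg body weight)")
print(f"ginsenoside Rb1: {rb1_mg_per_day:.2f} mg/day "
      f"({rb1_mg_per_day / body_weight_kg:.2f} mg/kg body weight)")

# Weekly injection: 0.75 ml of the 400 ml/l CCl4-in-olive-oil mixture per kg body weight.
print(f"CCl4 mixture: {0.75 * body_weight_kg:.2f} ml/week for a {body_weight_kg * 1000:.0f} g rat")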
Histopathological examination

Histopathological changes in the liver, and liver fibrosis in particular, were assessed with different stains (Figures 1 and 2). The bright red color of the H&E-stained section in Figure 1A may result from strong staining with eosin, a fluorescent red dye. H&E-stained sections showed no fat accumulation in the liver of the control group, whereas large fat vacuoles were observed in the liver of the CCl4 group (Figure 1A). The Rb1 group had a significantly lower fat-change score than the CCl4 group (2.65 ± 0.82 vs. 3.50 ± 0.75, p < 0.05) (Figure 1B); the fat-change scores of the GE and Rb1 groups did not differ significantly. The CCl4, GE and Rb1 groups showed significantly more cell necrosis (p < 0.05), inflammatory cell infiltration (p < 0.01) and fibrosis (p < 0.01) around the central veins than the control group, but the necrosis, inflammation and fibrosis scores did not differ significantly among the three CCl4-treated groups.

Figure 1. Representative histological sections of rat liver specimens. A: hematoxylin and eosin stain at 20 × 10 magnification. B: semi-quantitative scores, graded from 0 (no lesion) to 4 (severe lesion) for fat changes and from 0 (no lesion) to 3 (lesion in the central and portal vein areas, or cirrhosis) for necrosis, inflammation and fibrosis, in the control, CCl4, GE and Rb1 groups. Solid and dashed arrows indicate the central vein and collagen fibers, respectively. Data are presented as mean ± SD (n = 10). Values not sharing the same letter differ significantly (p < 0.05). Scale bar = 50 μm.
Figure 2. Representative histological sections of rat liver specimens. A: Masson’s trichrome stain at 15 × 10 magnification. B: silver stain at 15 × 10 magnification. C: semi-quantitative fibrosis scores, graded from 0 (no collagen formation), 1 (collagen formation in the central vein area), 2 (collagen and fibrous bridge formation between different central vein areas) to 3 (cirrhosis), in the control, CCl4, GE and Rb1 groups. Collagen fibers stained with Masson’s trichrome appear blue; reticular fibers stained with silver appear brown. Solid and dashed arrows indicate the central vein and fiber bridging, respectively. Data are presented as mean ± SD (n = 10). Values not sharing the same letter differ significantly (p < 0.05). Scale bar = 50 μm.

Pathological assessment with Masson’s trichrome stain showed that the formation of collagen fibers (blue) was increased by exposure to CCl4 (Figure 2A). The fibrosis scores determined by Masson’s trichrome stain in the CCl4 (1.35 ± 0.34), GE (1.10 ± 0.39) and Rb1 (1.40 ± 0.39) groups were significantly higher than in the control group (p < 0.01) (Figure 2C), although the GE group had a lower fibrosis score than the Rb1 group (p < 0.05). The accumulation of reticular fibers (silver stain, brown) was likewise increased by exposure to CCl4 (p < 0.01) (Figure 2B).
The GE group (1.05 ± 0.44) showed significantly less accumulation of reticular fibers than the CCl4 (1.60 ± 0.39, p < 0.01) and Rb1 (1.40 ± 0.21, p < 0.05) groups (Figure 2C).
Plasma ALT and AST activities

Plasma ALT and AST activities were measured to assess the effects of the treatments on liver function (Table 2). After the induction of liver injury (W2), plasma ALT and AST activities were significantly higher in the CCl4 group than in the control, GE and Rb1 groups (p < 0.01). At week 9, the CCl4 group still had higher plasma ALT (p < 0.05) and AST (p < 0.01) activities than the control group, whereas plasma ALT and AST activities did not differ among the control, GE and Rb1 groups.
The GE group had significantly lower plasma AST activity than the CCl4 group at week 9 (p < 0.05).

Table 2. Plasma ALT and AST activities in rats treated with ginsenosides

                       Control         CCl4                GE                 Rb1
ALT activity (IU/l)
  W0                   29.6 ± 2.9a     27.6 ± 1.7a         31.2 ± 3.8a        29.2 ± 32.8a
  W2                   33.2 ± 3.6a     4012.4 ± 2212.9b    469.0 ± 435.2a     1072.6 ± 618.6a
  W9                   33.6 ± 4.9a     334.9 ± 379.1b      136.4 ± 167.4ab    177.5 ± 333.0ab
AST activity (IU/l)
  W0                   75.9 ± 6.3a     78.3 ± 4.4a         70.8 ± 23.5a       77.6 ± 5.3a
  W2                   62.1 ± 20.2a    8370.4 ± 5360.8b    2155.0 ± 1973.8a   1233.6 ± 616.7a
  W9                   63.3 ± 6.0a     288.4 ± 181.3b      134.9 ± 114.6a     171.3 ± 227.9a

Data are presented as mean ± SD (n = 10). Values not sharing the same superscript differ significantly (p < 0.05) within the same row.

Plasma and hepatic triglyceride and total cholesterol concentrations

Plasma and hepatic lipid concentrations were determined to evaluate the effects of the treatments on lipid profiles (Table 3). After the induction of liver injury (W2), plasma triglycerides were significantly increased in the CCl4 group compared with the control group and remained at a higher level at week 9 (p < 0.01). Treatment with GE (p < 0.01) and Rb1 (p < 0.05) significantly reduced plasma triglycerides compared with the CCl4 group, to levels similar to the control group, at weeks 2 and 9.

Table 3. Plasma triglyceride and total cholesterol concentrations in rats treated with ginsenosides

                             Control         CCl4            GE              Rb1
Triglycerides (mmol/l)
  W0                         0.16 ± 0.04a    0.16 ± 0.04a    0.14 ± 0.03a    0.15 ± 0.02a
  W2                         0.24 ± 0.04a    0.39 ± 0.19b    0.25 ± 0.04a    0.28 ± 0.10a
  W9                         0.54 ± 0.10c    0.35 ± 0.06b    0.27 ± 0.05a    0.29 ± 0.07a
Total cholesterol (mmol/l)
  W0                         0.90 ± 0.11b    0.71 ± 0.12a    0.79 ± 0.16ab   0.75 ± 0.10a
  W2                         1.01 ± 0.14a    1.20 ± 0.18b    1.05 ± 0.21ab   0.95 ± 0.27a
  W9                         0.79 ± 0.11a    0.70 ± 0.17a    0.67 ± 0.10a    0.66 ± 0.20a

Data are presented as mean ± SD (n = 10). Values not sharing the same superscript differ significantly (p < 0.05) within the same row.

At baseline, the plasma total cholesterol level was significantly lower in the CCl4 and Rb1 groups than in the control group (p < 0.01). After the induction of liver injury, the total cholesterol level was significantly higher in the CCl4 group than in the control and Rb1 groups (p < 0.05) (Table 3). Plasma total cholesterol did not differ significantly among the four groups at week 9. Hepatic triglyceride concentrations were 73% higher in the CCl4 group than in the control group (p < 0.01), and were 56% and 60% lower in the GE and Rb1 groups, respectively, than in the CCl4 group (p < 0.01) (Figure 3A). The hepatic total cholesterol level was significantly higher in the CCl4, GE and Rb1 groups than in the control group (p < 0.01), but did not differ significantly among the CCl4-treated groups (Figure 3B).

Figure 3. Effects of ginseng extract and ginsenoside Rb1 on hepatic lipids in rats. A: hepatic triglyceride concentration. B: hepatic total cholesterol concentration. Data are presented as mean ± SD (n = 10). Values not sharing the same letter differ significantly (p < 0.05).
Hepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels

Hepatic cytokine and mediator levels (Table 4) were measured to determine the effects of the treatments on mediators released under inflammatory conditions. Hepatic IL-1β (p < 0.01), PGE2 (p < 0.05) and sICAM-1 (p < 0.05) levels were significantly elevated, whereas the hepatic IL-10 level was significantly decreased, in the CCl4 group compared with the control group. Hepatic TNF-α, IL-1β and PGE2 levels were significantly lower in the GE group than in the CCl4 group (p < 0.05).
The Rb1 group had a higher hepatic TNF-α level than the GE group, but a lower hepatic PGE2 level than the CCl4 group (p < 0.05).

Table 4. Hepatic TNF-α, IL-1β, IL-10, PGE2 and sICAM-1 levels in rats treated with ginsenosides

                           Control          CCl4              GE                Rb1
TNF-α (pg/mg protein)      15.8 ± 7.6ab     17.2 ± 5.1b       10.4 ± 3.3a       16.7 ± 6.9b
IL-1β (pg/mg protein)      132.5 ± 39.6a    333.0 ± 135.8c    212.0 ± 115.2ab   292.9 ± 182.9bc
IL-10 (pg/mg protein)      4461 ± 958b      3443 ± 508a       3513 ± 677a       3514 ± 600a
PGE2 (pg/mg protein)       6196 ± 1599b     9822 ± 2610c      4636 ± 1928ab     4128 ± 1480a
sICAM-1 (pg/mg protein)    2782 ± 771a      3991 ± 867b       3288 ± 567ab      3427 ± 963ab

Data are presented as mean ± SD (n = 10). Values not sharing the same superscript differ significantly (p < 0.05) within the same row.

Hepatic hydroxyproline, MMP-2 and TIMP-1 levels

Hepatic hydroxyproline, MMP-2 and TIMP-1 levels (Figure 4) were measured to investigate the effects of the treatments on liver fibrogenesis and fibrolysis. Hepatic hydroxyproline (p < 0.05), MMP-2 (p < 0.05) and TIMP-1 (p < 0.01) levels were elevated by 55%, 28% and 61%, respectively, in the CCl4 group compared with the control group. Ginseng extract and ginsenoside Rb1 treatment significantly reduced the hepatic hydroxyproline level by 36% and 30% (Figure 4A) and the TIMP-1 level by 27% in each case (Figure 4C) compared with the CCl4 group (p < 0.05). Hepatic MMP-2 did not differ among the three CCl4-treated groups (Figure 4B).

Figure 4. Effects of ginseng extract and ginsenoside Rb1 on liver fibrosis markers in rats. A: hepatic hydroxyproline level. B: MMP-2 level. C: TIMP-1 level. Data are presented as mean ± SD (n = 10). Values not sharing the same letter differ significantly (p < 0.05).
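The percentage changes quoted for hydroxyproline, MMP-2 and TIMP-1 are ratios of group means. Figure 4 reports those means only graphically, so the sketch below uses placeholder values, chosen to be roughly consistent with the quoted percentages, purely to show the calculation.

def pct_change(treated_mean, reference_mean):
    # Percentage change of one group mean relative to a reference group mean.
    return 100.0 * (treated_mean - reference_mean) / reference_mean

# Hypothetical hepatic hydroxyproline means (arbitrary units), not published values.
hydroxyproline = {"control": 200.0, "CCl4": 310.0, "GE": 198.0, "Rb1": 217.0}

print(f"CCl4 vs control: {pct_change(hydroxyproline['CCl4'], hydroxyproline['control']):+.0f}%")
print(f"GE vs CCl4:      {pct_change(hydroxyproline['GE'], hydroxyproline['CCl4']):+.0f}%")
print(f"Rb1 vs CCl4:     {pct_change(hydroxyproline['Rb1'], hydroxyproline['CCl4']):+.0f}%")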
Discussion

Consistent with a previous study [22], plasma ALT and AST activities were increased by CCl4-induced liver injury, and ginseng extract and ginsenoside Rb1 significantly decreased the activities elevated by exposure to CCl4. Previous studies have demonstrated that ginseng extract or heated ginseng exhibits antioxidant activity, acting as a free radical scavenger that inhibits lipid peroxidation in vitro and increases catalase and superoxide dismutase activities in V79-4 lung fibroblast cells [14, 23]. Moreover, ginsenoside Rb1, Rg1 and the derived metabolite compound K decreased the hepatic malondialdehyde level and lowered elevated serum ALT and AST activities [16, 18]. Therefore, ginseng extract and ginsenoside Rb1 may act as free radical scavengers that limit free radical damage to hepatocytes. Exposure to CCl4 led to significant increases in the accumulation of fat vacuoles and in the hepatic levels of triglycerides and total cholesterol.
The abnormal fat accumulation in the liver caused by CCl4 can be attributed to (1) an imbalance between lipogenesis and lipolysis, through increased lipid synthesis and lipid esterification [24] and through decreased cAMP production and hence reduced stimulation of hormone-sensitive lipase [25, 26], and (2) impaired synthesis and secretion of very low density lipoprotein, either through interference with the glycosylation and maturation of lipoglycoproteins by free radicals produced during CCl4 metabolism [24, 27] or through inactivation of the Ca2+-ATPase pump in the mitochondria and endoplasmic reticulum [6, 28]. Liver damage and the elevation of hepatic triglycerides induced by CCl4 were ameliorated by GE and Rb1 treatment. Red ginseng saponin, which contains ginsenosides Rb1, Rb2, Rc, Rd, Re and Rg1, has been shown to contribute to hepatoprotection by suppressing oxidative stress and lipid peroxides via inhibition of hepatic cytochrome P450 expression and activity [29]. Consistent with our findings, ginsenoside Rb1 injected intraperitoneally at 10 mg/kg body weight for 3 d significantly decreased hepatic lipids by increasing hepatic cAMP production [30]. Rb1 injected intraperitoneally at 10 mg/kg body weight was also shown to reduce hepatic triglyceride accumulation in high-fat-diet-induced obese rats by increasing hepatic carnitine palmitoyltransferase 1 activity and the cellular AMP/ATP ratio, thereby stimulating fatty acid oxidation and suppressing lipogenesis, respectively [31]. Compound K, a major intestinal metabolite of ginsenosides, has been demonstrated to increase gene expression of peroxisome proliferator-activated receptor-α and to decrease gene expression of fatty acid synthase and stearoyl-CoA desaturase 1 by activating AMP-activated protein kinase in HepG2 human hepatoma cells [32]. A previous study revealed that a ginsenoside-rich ginseng extract suppressed hepatic cholesterol synthesis by inhibiting hepatic β-hydroxy-β-methylglutaryl-CoA reductase and cholesterol 7α-hydroxylase activities [33]. These results suggest that ginseng extract, ginsenoside Rb1 and their metabolites may accelerate lipid utilization and suppress lipid biosynthesis in the liver, thereby lowering the hepatic triglycerides elevated by CCl4 exposure. Kupffer cells activated by oxidative stress secrete cytokines such as TNF-α and IL-1β, which stimulate the expression of sICAM-1 and thereby induce neutrophil activation [7]. H&E staining showed accumulation of inflammatory cells in the CCl4 group. Furthermore, fibrotic bridges were observed in the silver-stained sections of the CCl4 group as cells became necrotic and reticular fibers condensed where the parenchymal framework had collapsed. Ginseng extract reduced the accumulation of reticular fibers, ameliorated cell necrosis and inhibited production of TNF-α and IL-1β. In agreement with the present study, in vitro studies demonstrated that ginseng and ginsenoside Rb1 suppressed TNF-α production and IL-1β mRNA expression in murine RAW264.7 macrophages [34, 35]. Ginsenoside Rg1, one of the important components of P. ginseng, injected intravenously at 20 mg/kg body weight, significantly attenuated serum TNF-α and IL-6 release in septic mice [36]. PGE2 production and cyclooxygenase-2 (COX-2) expression are induced by the inflammatory response, and COX-2 expression is stimulated by proinflammatory cytokines such as TNF-α and IL-1β [37].
Our present study found that ginseng extract and ginsenoside Rb1 significantly decreased the hepatic PGE2 level induced by CCl4. It is presumed that ginseng extract and ginsenoside Rb1 suppressed PGE2 production by reducing proinflammatory cytokines and suppressing COX-2 expression. Furthermore, activation of NF-κB modulates the expression and secretion of proinflammatory cytokines, chemokines, adhesion molecules, COX-2 and inducible nitric oxide synthase (iNOS) [38]. Ginsenosides Rb1, Rg1, Rg3, Rh1 and their derived metabolite compound K down-regulated NF-κB activation and simultaneously suppressed PGE2, ICAM-1, COX-2 and iNOS expression in vitro [17, 19, 39–42]. Thus, ginseng extract and ginsenoside Rb1 may have attenuated the production of proinflammatory factors by inhibiting NF-κB activation. The accumulation of hydroxyproline and collagen fibers was found in the CCl4 group, whereas ginseng extract and ginsenoside Rb1 decreased hepatic hydroxyproline and TIMP-1 levels to inhibit liver fibrosis. Oxidative stress induced by CCl4 metabolism could further stimulate the proliferation and invasiveness of hepatic stellate cells (HSCs) [43]. Proliferating HSCs increase TGF-β1 secretion, which in turn activates HSCs, induces gene expression of type I collagen, and promotes collagen accumulation. Activated HSCs express MMPs and their tissue inhibitors (TIMPs). Oxidative stress stimulates MMP-2 production by HSCs via the extracellular signal-regulated kinase 1/2 and phosphatidylinositol 3-kinase pathways [43], and MMPs induce HSC proliferation and migration [44]. In vitro studies found that ginsenoside Rb1 inhibited HSC activation and the mRNA expression of type I and III collagen, TGF-β1 and TIMP-1 [45], and that its metabolite induced apoptosis in HSCs via the caspase-3 activation pathway [46]. Compound K was found to inhibit MMP-2 expression and NF-κB activation in an in vitro model [47]. Ginsenoside Rg1 subcutaneously injected at 50 and 100 mg/kg body weight attenuated serum levels of hyaluronic acid and type III procollagen, as well as the hepatic hydroxyproline level, in rats with thioacetamide-induced liver fibrosis [48]. A decrease in activated HSCs could lead to the inhibition of fibrogenesis and TIMP-1 expression by reducing TNF-α and TGF-β1 [49]. Our findings demonstrated that ginseng extract and ginsenoside Rb1 diminished the hepatic TIMP-1 level, accompanied by a decreased TNF-α level. Therefore, ginseng extract and ginsenoside Rb1 could suppress the activation and proliferation of HSCs and thereby inhibit liver fibrosis. Ginsenoside Rb1 content was equivalent in the GE and Rb1 groups in the present study. The ginseng extract, which contains seven additional ginsenosides besides ginsenoside Rb1, diminished collagen accumulation and inhibited TNF-α production more effectively than ginsenoside Rb1 alone. Therefore, the hepatoprotective and anti-inflammatory actions of ginseng extract against CCl4-induced liver damage could be attributed to the synergistic action of all of its ingredients, including the remaining 20% of constituents and their metabolites. Conclusions: In conclusion, Panax ginseng extract (0.5 g/kg) and ginsenoside Rb1 (0.05 g/kg) decrease plasma ALT and AST activities elevated by CCl4-induced liver damage and inhibit the accumulation of triglycerides in the liver. The levels of TNF-α, PGE2, hydroxyproline and TIMP-1 in the liver are diminished by ginseng extract and ginsenoside Rb1.
Therefore, ginseng extract and ginsenoside Rb1 attenuate CCl4-induced liver injury through anti-inflammatory and antifibrotic effects.
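Note on the arithmetic behind the group comparisons above: the percent elevations and reductions quoted in the Results (e.g., hydroxyproline, MMP-2 and TIMP-1 elevated by 55%, 28% and 61% in the CCl4 group versus control) are simple percent differences between group means. The short sketch below illustrates this using the PGE2 means from Table 4, since the Figure 4 means are not given numerically here; the helper function is illustrative only and is not the study's analysis code.

```python
# Illustrative only: percent difference between group means, applied to the
# Table 4 PGE2 values (pg/mg protein). Not part of the study's analysis code.

def pct_change(value, reference):
    """Percent difference of `value` relative to `reference`."""
    return 100.0 * (value - reference) / reference

pge2 = {"Control": 6196, "CCl4": 9822, "GE": 4636, "Rb1": 4128}

print(f"PGE2, CCl4 vs Control: {pct_change(pge2['CCl4'], pge2['Control']):+.0f}%")  # about +59%
print(f"PGE2, GE vs CCl4: {pct_change(pge2['GE'], pge2['CCl4']):+.0f}%")            # about -53%
print(f"PGE2, Rb1 vs CCl4: {pct_change(pge2['Rb1'], pge2['CCl4']):+.0f}%")          # about -58%
```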
Background: Ginsenosides, the major bioactive compounds in ginseng root, have been found to have antioxidant, immunomodulatory and anti-inflammatory activities. This study investigated the effects of ginsenosides on carbon tetrachloride (CCl4)-induced hepatitis and liver fibrosis in rats. Methods: Male Sprague-Dawley rats were randomly divided into four groups: control, CCl4, CCl4 + 0.5 g/kg Panax ginseng extract and CCl4 + 0.05 g/kg ginsenoside Rb1 groups. The treated groups were orally given Panax ginseng extract or ginsenoside Rb1, starting two weeks before the induction of liver injury, for 9 successive weeks. Liver injury was induced by intraperitoneal injection of 400 ml/l CCl4 at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was intraperitoneally injected with olive oil. Results: The pathological results showed that ginsenoside Rb1 decreased the hepatic fat deposition (2.65 ± 0.82 vs 3.50 ± 0.75, p <0.05) and Panax ginseng extract lowered the hepatic reticular fiber accumulation (1.05 ± 0.44 vs 1.60 ± 0.39, p <0.01) increased by CCl4. Plasma alanine aminotransferase and aspartate aminotransferase activities were increased by CCl4 (p <0.01), and aspartate aminotransferase activity was decreased by Panax ginseng extract at week 9 (p <0.05). After exposure to CCl4 for 7 weeks, the levels of plasma and hepatic triglycerides (p <0.01), hepatic cholesterol (p <0.01), interleukin-1β (p <0.01), prostaglandin E2 (p <0.05), soluble intercellular adhesion molecule-1 (p <0.05), hydroxyproline (p <0.05), matrix metalloproteinase-2 (p <0.05) and tissue inhibitor of metalloproteinase-1 (TIMP-1) (p <0.01) were elevated, whereas the hepatic interleukin-10 level was lowered (p <0.05). Both Panax ginseng extract and ginsenoside Rb1 decreased plasma and hepatic triglyceride, hepatic prostaglandin E2, hydroxyproline and TIMP-1 levels, and Panax ginseng extract further inhibited interleukin-1β concentrations (p <0.05). Conclusions: Panax ginseng extract and ginsenoside Rb1 attenuate plasma aminotransferase activities and liver inflammation to inhibit CCl4-induced liver fibrosis through down-regulation of hepatic prostaglandin E2 and TIMP-1.
Background: Liver cirrhosis is an irreversible stage in the process of liver damage that occurs after liver fibrosis. Liver fibrosis is attributed to inflammation, excessive accumulation of extracellular matrix and tissue remodeling during wound healing [1]. Chronic hepatitis and liver cirrhosis are positively associated with the occurrence of hepatocellular carcinoma [2, 3]. Therefore, the inhibition of hepatic inflammation and fibrosis is crucial in preventing the occurrence of liver cirrhosis and hepatocellular carcinoma. Oxidative stress from reactive oxygen species plays an important role in liver fibrogenesis [4]. Carbon tetrachloride (CCl4) is a toxic chemical that induces hepatotoxicity, including fatty degeneration, inflammation, fibrosis, hepatocellular death and carcinogenicity [5, 6]. The trichloromethyl radical produced from the metabolism of CCl4 initiates a chain reaction that causes lipid peroxidation, membrane dysfunction and further hepatotoxic damage [6]. The toxic metabolite of CCl4 can activate Kupffer cells to secrete cytokines such as interleukin-1 (IL-1) and tumor necrosis factor-α (TNF-α), stimulate transforming growth factor-β (TGF-β) production, inhibit nitric oxide (NO) formation and induce inflammation and liver fibrosis [6–8]. Matrix metalloproteinase (MMP)-2, known as type IV collagenase and gelatinase A, acts as a regulator of extracellular matrix breakdown, and tissue inhibitor of metalloproteinase (TIMP)-1, as an inhibitor of MMPs, exhibits anti-fibrolytic, growth-stimulating and anti-apoptotic activities [9]. Chronic exposure to CCl4 leads to liver fibrosis, which diminishes extracellular matrix degradation and increases MMP-2 secretion through the induction of TIMPs [9]. Panax ginseng (P. ginseng) root has been commonly used in oriental medicine, the diet or dietary supplements. Ginsenosides, a class of steroid glycosides and triterpene saponins, are the major bioactive compounds in P. ginseng root, and ginsenoside Rb1 (C54H92O23, molecular weight: 1109.3) is the most abundant of the more than 30 ginsenosides in P. ginseng [10, 11]. Previous studies have reported that P. ginseng and its active components or metabolites have antioxidant, immunomodulatory, anti-inflammatory, and lipid-lowering effects [12–15]. Many studies have shown that ginsenoside Rb1 and its metabolite compound K attenuated liver injury through inhibiting lipid peroxidation, TNF-α, NO, prostaglandin E2 (PGE2), intercellular adhesion molecule (ICAM)-1 and nuclear factor-κB (NF-κB) activation [16–19]. However, the effect of ginsenosides on liver fibrosis is not clear. Considering that ginsenoside Rb1 is the most abundant ginsenoside in P. ginseng [10, 11] and has hepatoprotective activity [16–19], this study investigated the protective effects of P. ginseng extract (ginseng extract) and ginsenoside Rb1 on CCl4-induced liver inflammation and fibrosis in rats. Conclusions: In conclusion, Panax ginseng extract (0.5 g/kg) and ginsenoside Rb1 (0.05 g/kg) decrease plasma ALT and AST activities elevated by CCl4-induced liver damage and inhibit the accumulation of triglycerides in the liver. The levels of TNF-α, PGE2, hydroxyproline and TIMP-1 in the liver are diminished by ginseng extract and ginsenoside Rb1. Therefore, ginseng extract and ginsenoside Rb1 attenuate CCl4-induced liver injury through anti-inflammatory and antifibrotic effects.
Background: Ginsenosides, the major bioactive compounds in ginseng root, have been found to have antioxidant, immunomodulatory and anti-inflammatory activities. This study investigated the effects of ginsenosides on carbon tetrachloride (CCl4)-induced hepatitis and liver fibrosis in rats. Methods: Male Sprague-Dawley rats were randomly divided into four groups: control, CCl4, CCl4 + 0.5 g/kg Panax ginseng extract and CCl4 + 0.05 g/kg ginsenoside Rb1 groups. The treated groups were orally given Panax ginseng extract or ginsenoside Rb1, starting two weeks before the induction of liver injury, for 9 successive weeks. Liver injury was induced by intraperitoneal injection of 400 ml/l CCl4 at a dose of 0.75 ml/kg body weight weekly for 7 weeks. The control group was intraperitoneally injected with olive oil. Results: The pathological results showed that ginsenoside Rb1 decreased the hepatic fat deposition (2.65 ± 0.82 vs 3.50 ± 0.75, p <0.05) and Panax ginseng extract lowered the hepatic reticular fiber accumulation (1.05 ± 0.44 vs 1.60 ± 0.39, p <0.01) increased by CCl4. Plasma alanine aminotransferase and aspartate aminotransferase activities were increased by CCl4 (p <0.01), and aspartate aminotransferase activity was decreased by Panax ginseng extract at week 9 (p <0.05). After exposure to CCl4 for 7 weeks, the levels of plasma and hepatic triglycerides (p <0.01), hepatic cholesterol (p <0.01), interleukin-1β (p <0.01), prostaglandin E2 (p <0.05), soluble intercellular adhesion molecule-1 (p <0.05), hydroxyproline (p <0.05), matrix metalloproteinase-2 (p <0.05) and tissue inhibitor of metalloproteinase-1 (TIMP-1) (p <0.01) were elevated, whereas the hepatic interleukin-10 level was lowered (p <0.05). Both Panax ginseng extract and ginsenoside Rb1 decreased plasma and hepatic triglyceride, hepatic prostaglandin E2, hydroxyproline and TIMP-1 levels, and Panax ginseng extract further inhibited interleukin-1β concentrations (p <0.05). Conclusions: Panax ginseng extract and ginsenoside Rb1 attenuate plasma aminotransferase activities and liver inflammation to inhibit CCl4-induced liver fibrosis through down-regulation of hepatic prostaglandin E2 and TIMP-1.
16,067
406
[ 529, 321, 185, 71, 109, 382, 225, 70, 497, 1094, 377, 631, 399, 269 ]
18
[ "liver", "ccl4", "rb1", "significantly", "group", "05", "10", "hepatic", "groups", "lesion" ]
[ "fibrosis oxidative stress", "inflammation liver fibrosis", "ccl induced liver", "hepatocytes exposure ccl4", "fibrogenesis carbon tetrachloride" ]
[CONTENT] Ginsenoside Rb1 | Carbon tetrachloride | Interleukin-1β | Liver fibrosis | Tissue inhibitor of metalloproteinase-1 [SUMMARY]
[CONTENT] Ginsenoside Rb1 | Carbon tetrachloride | Interleukin-1β | Liver fibrosis | Tissue inhibitor of metalloproteinase-1 [SUMMARY]
[CONTENT] Ginsenoside Rb1 | Carbon tetrachloride | Interleukin-1β | Liver fibrosis | Tissue inhibitor of metalloproteinase-1 [SUMMARY]
[CONTENT] Ginsenoside Rb1 | Carbon tetrachloride | Interleukin-1β | Liver fibrosis | Tissue inhibitor of metalloproteinase-1 [SUMMARY]
[CONTENT] Ginsenoside Rb1 | Carbon tetrachloride | Interleukin-1β | Liver fibrosis | Tissue inhibitor of metalloproteinase-1 [SUMMARY]
[CONTENT] Ginsenoside Rb1 | Carbon tetrachloride | Interleukin-1β | Liver fibrosis | Tissue inhibitor of metalloproteinase-1 [SUMMARY]
[CONTENT] Animals | Carbon Tetrachloride | Ginsenosides | Humans | Intercellular Adhesion Molecule-1 | Interleukin-10 | Interleukin-1beta | Liver Cirrhosis | Male | Matrix Metalloproteinase 2 | Panax | Plant Extracts | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Animals | Carbon Tetrachloride | Ginsenosides | Humans | Intercellular Adhesion Molecule-1 | Interleukin-10 | Interleukin-1beta | Liver Cirrhosis | Male | Matrix Metalloproteinase 2 | Panax | Plant Extracts | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Animals | Carbon Tetrachloride | Ginsenosides | Humans | Intercellular Adhesion Molecule-1 | Interleukin-10 | Interleukin-1beta | Liver Cirrhosis | Male | Matrix Metalloproteinase 2 | Panax | Plant Extracts | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Animals | Carbon Tetrachloride | Ginsenosides | Humans | Intercellular Adhesion Molecule-1 | Interleukin-10 | Interleukin-1beta | Liver Cirrhosis | Male | Matrix Metalloproteinase 2 | Panax | Plant Extracts | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Animals | Carbon Tetrachloride | Ginsenosides | Humans | Intercellular Adhesion Molecule-1 | Interleukin-10 | Interleukin-1beta | Liver Cirrhosis | Male | Matrix Metalloproteinase 2 | Panax | Plant Extracts | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Animals | Carbon Tetrachloride | Ginsenosides | Humans | Intercellular Adhesion Molecule-1 | Interleukin-10 | Interleukin-1beta | Liver Cirrhosis | Male | Matrix Metalloproteinase 2 | Panax | Plant Extracts | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] fibrosis oxidative stress | inflammation liver fibrosis | ccl induced liver | hepatocytes exposure ccl4 | fibrogenesis carbon tetrachloride [SUMMARY]
[CONTENT] fibrosis oxidative stress | inflammation liver fibrosis | ccl induced liver | hepatocytes exposure ccl4 | fibrogenesis carbon tetrachloride [SUMMARY]
[CONTENT] fibrosis oxidative stress | inflammation liver fibrosis | ccl induced liver | hepatocytes exposure ccl4 | fibrogenesis carbon tetrachloride [SUMMARY]
[CONTENT] fibrosis oxidative stress | inflammation liver fibrosis | ccl induced liver | hepatocytes exposure ccl4 | fibrogenesis carbon tetrachloride [SUMMARY]
[CONTENT] fibrosis oxidative stress | inflammation liver fibrosis | ccl induced liver | hepatocytes exposure ccl4 | fibrogenesis carbon tetrachloride [SUMMARY]
[CONTENT] fibrosis oxidative stress | inflammation liver fibrosis | ccl induced liver | hepatocytes exposure ccl4 | fibrogenesis carbon tetrachloride [SUMMARY]
[CONTENT] liver | ccl4 | rb1 | significantly | group | 05 | 10 | hepatic | groups | lesion [SUMMARY]
[CONTENT] liver | ccl4 | rb1 | significantly | group | 05 | 10 | hepatic | groups | lesion [SUMMARY]
[CONTENT] liver | ccl4 | rb1 | significantly | group | 05 | 10 | hepatic | groups | lesion [SUMMARY]
[CONTENT] liver | ccl4 | rb1 | significantly | group | 05 | 10 | hepatic | groups | lesion [SUMMARY]
[CONTENT] liver | ccl4 | rb1 | significantly | group | 05 | 10 | hepatic | groups | lesion [SUMMARY]
[CONTENT] liver | ccl4 | rb1 | significantly | group | 05 | 10 | hepatic | groups | lesion [SUMMARY]
[CONTENT] fibrosis | liver | ginseng | matrix | liver fibrosis | inflammation | ginsenoside | hepatocellular | liver cirrhosis | factor [SUMMARY]
[CONTENT] solution | lesion | il | buffer | anti | quantikine | lobe | liver | ml | inc [SUMMARY]
[CONTENT] significantly | group | 05 | ccl4 | lesion | figure | weight | hepatic | differ | rb1 [SUMMARY]
[CONTENT] ginseng | ginseng extract | ccl4 induced liver | ccl4 induced | extract | ginsenoside rb1 | ginsenoside | liver | induced liver | induced [SUMMARY]
[CONTENT] liver | rb1 | significantly | ccl4 | lesion | group | hepatic | 05 | il | 10 [SUMMARY]
[CONTENT] liver | rb1 | significantly | ccl4 | lesion | group | hepatic | 05 | il | 10 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Male Sprague-Dawley | four | CCl4 | CCl4 ||| ||| two weeks | successive 9 weeks ||| 400 ml/l CCl4 | 0.75 ml/kg | weekly | 7 weeks ||| [SUMMARY]
[CONTENT] 2.65 | 0.82 | 3.50 | 0.75 | 1.05 | 0.44 | 1.60 | CCl4 ||| Plasma | CCl4 | week 9 ||| 7 weeks | interleukin-10 ||| prostaglandin E2 [SUMMARY]
[CONTENT] CCl4 [SUMMARY]
[CONTENT] ||| ||| Male Sprague-Dawley | four | CCl4 | CCl4 ||| ||| two weeks | successive 9 weeks ||| 400 ml/l CCl4 | 0.75 ml/kg | weekly | 7 weeks ||| ||| ||| 2.65 | 0.82 | 3.50 | 0.75 | 1.05 | 0.44 | 1.60 | CCl4 ||| Plasma | CCl4 | week 9 ||| 7 weeks | interleukin-10 ||| prostaglandin E2 ||| ||| CCl4 [SUMMARY]
[CONTENT] ||| ||| Male Sprague-Dawley | four | CCl4 | CCl4 ||| ||| two weeks | successive 9 weeks ||| 400 ml/l CCl4 | 0.75 ml/kg | weekly | 7 weeks ||| ||| ||| 2.65 | 0.82 | 3.50 | 0.75 | 1.05 | 0.44 | 1.60 | CCl4 ||| Plasma | CCl4 | week 9 ||| 7 weeks | interleukin-10 ||| prostaglandin E2 ||| ||| CCl4 [SUMMARY]
Diagnostic Performance and Comparative Evaluation of the Architect, Liaison, and Platelia Epstein-Barr Virus Antibody Assays.
29797817
Epstein-Barr Virus (EBV) is one of the most prevalent causes of viral infection in humans. EBV infection stage (acute, past, or absent infection) is typically determined using a combination of assays that detect EBV-specific markers, such as IgG and IgM antibodies against the EBV viral capsid antigen (VCA) and IgG antibodies against the EBV nuclear antigen (EBNA). We compared the diagnostic performance and agreement of results between three commercial EBV antibody assays using an EBV performance panel (SeraCare Life Science, Milford, MA, USA) as a reference.
BACKGROUND
EBV antibody tests of EBV VCA IgM, VCA IgG, and EBNA IgG antibodies were performed by the Architect (Abbott Diagnostics, Wiesbaden, Germany), Liaison (DiaSorin, Saluggia, Italy), and Platelia (Bio-Rad, Marnes-la-Coquette, France) assays. Agreement between the three assays was evaluated using 279 clinical samples, and EBV DNA and antibody test results were compared.
METHODS
The three EBV antibody assays showed good diagnostic performance with good and excellent agreement with the performance panel (kappa coefficient, >0.6). The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02). The three EBV antibody assays exhibited good agreement in results for the clinical samples.
RESULTS
The diagnostic performance of the three EBV antibody assays was acceptable, and they showed comparable agreement in results for the clinical samples.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Antibodies, Viral", "Child", "Child, Preschool", "DNA, Viral", "Epstein-Barr Virus Infections", "Female", "Herpesvirus 4, Human", "Humans", "Immunoglobulin G", "Immunoglobulin M", "Infant", "Male", "Middle Aged", "Reagent Kits, Diagnostic", "Sensitivity and Specificity", "Young Adult" ]
5973921
INTRODUCTION
The Epstein-Barr virus (EBV) is one of the most prevalent causes of viral infection in humans. EBV has been linked to infectious mononucleosis [1]. Humans are the only natural hosts of EBV. Oral transmission is the primary route of infection in adolescents and young adults [2]; however, EBV can also be acquired via blood transfusions, hematopoietic stem cell transplantation, or solid organ transplants, and these infections can be life-threatening [3–6]. In immunocompromised patients (particularly in solid organ transplant recipients), EBV DNA detection tests are performed within three months of transplantation and at least once per year in stable transplanted patients [4, 7]. However, these tests do not reflect EBV infection status in all patients, as the virus can exist in B lymphocytes and possess Herpesviridae latency [8]. Antibody tests for EBV, such as those involving IgM and IgG antibodies against the EBV viral capsid antigen (VCA) and an IgG antibody against the EBV nuclear antigen (EBNA), are also frequently used to determine EBV infection status. EBV infection stage is typically assessed using a combination of antibody assays. An ideal EBV testing panel, including a combination of antibody assays for detecting EBV-specific antigens, should be able to detect all serologic infection statuses (acute, past, and absent infections). Although the immunofluorescence assay (IFA) is considered the gold standard for the detection of VCA and EBNA IgG [9], chemiluminescence immunoassays (CLIA) using automated analyzers are also widely used because of their objective interpretation of results, high throughput, and low labor intensiveness [9–15]. Different commercial EBV antibody assays often show varying test results, thus hindering the interpretation of patient EBV infection status [9, 10, 16]. Reliable test results are needed for an exact diagnosis of EBV infection; therefore, EBV assays used in diagnostic laboratories must be evaluated for their accuracy, performance, and reliability. Although many studies have compared commercial EBV assays [9–14], the diagnostic performance of the assays has not been evaluated using a standard EBV performance panel (SeraCare Life Science, Milford, MA, USA) covering various EBV infection statuses. To fill this gap, we evaluated the diagnostic performances of three commercial EBV antibody assays, the Architect (Abbott Diagnostics, Wiesbaden, Germany), Liaison (DiaSorin, Saluggia, Italy), and Platelia (Bio-Rad, Marnes-la-Coquette, France) assays, using an EBV performance panel and compared the results of these three commercial assays in clinical samples of patients with suspected EBV infection.
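The marker-combination logic outlined above (VCA IgM, VCA IgG, and EBNA IgG interpreted together) can be summarized as a small rule set. The sketch below is a hypothetical illustration that mirrors the serologic profiles later listed for the performance panel in the Methods (seronegative; early phase acute primary; acute primary; past infection; transient response or re-infection); the fallback branch and the function name are assumptions for illustration, and this is not a clinical decision rule.

```python
# Hypothetical sketch of the serologic combination logic used to stage EBV
# infection from qualitative VCA IgM, VCA IgG, and EBNA IgG results.
# Profiles mirror the performance-panel categories described in the Methods.

def ebv_stage(vca_igm: bool, vca_igg: bool, ebna_igg: bool) -> str:
    if not vca_igm and not vca_igg and not ebna_igg:
        return "seronegative (absent infection)"
    if vca_igm and not vca_igg and not ebna_igg:
        return "early phase of acute primary infection"
    if vca_igm and vca_igg and not ebna_igg:
        return "acute primary infection"
    if not vca_igm and vca_igg and ebna_igg:
        return "past infection"
    if vca_igm and vca_igg and ebna_igg:
        return "transient antibody response or re-infection"
    # Fallback for profiles not covered by the panel categories (assumption).
    return "indeterminate profile; consider additional testing (e.g., EBV DNA)"

print(ebv_stage(vca_igm=True, vca_igg=True, ebna_igg=False))  # acute primary infection
```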
null
null
RESULTS
1. Diagnostic performance of the three EBV antibody assays
Duplicate EBV antibody measurements gave identical results in all three assays. The EBV performance panel exhibited good and excellent agreement with the Architect, Liaison, and Platelia EBV antibody assays, with kappa coefficients >0.6 (Table 2). However, we observed poor agreement between the performance panel and the Liaison assay for VCA IgG (kappa coefficient, 0.35; Table 2). One and two VCA IgM-positive EBV performance panel samples tested negative with the Architect and Platelia assays and the Liaison assay, respectively; these discrepancies affected the sensitivity of the VCA IgM tests (sensitivity, 71.4–85.7%; Table 2). Two and three panel samples tested VCA IgG-positive with the Architect and Liaison assays, respectively, but were negative according to the EBV performance panel (Table 2); this affected the specificity of the two assays. For EBNA IgG, the three assays and the EBV performance panel demonstrated similar results.

2. Comparison of the EBV antibody and DNA assays
EBV DNA was detected in 74 of 279 (26.5%) patient samples (Table 3). Most EBV DNA-negative patients were also VCA IgM-negative. The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02). Forty of the 279 patients had confirmed cases of other infectious agents/diseases such as Cytomegalovirus, Mycobacterium tuberculosis, and malaria. Six of the 40 patients were positive for the EBV DNA test, and all were VCA IgM negative according to the Architect, Liaison, and Platelia assays. One of the 34 EBV DNA-negative patients with another infectious disease was VCA IgM positive according to the Architect assay. The positive and negative likelihood ratios and Youden's index for the EBV DNA test, the VCA IgM test, and their combination were evaluated based on the clinical diagnosis of the 279 patients (Table 4). For the Architect and Liaison assays, a combination of VCA IgM and EBV DNA results performed better than the EBV DNA test alone.

3. Comparisons between the Architect, Liaison, and Platelia EBV antibody assays
The Architect and Liaison assays exhibited kappa coefficients of 0.79 for VCA IgM, 0.80 for VCA IgG, and 0.92 for EBNA IgG, indicating excellent agreement between the two assays (Table 5). We observed excellent agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for EBNA IgG (kappa coefficients, 0.89 and 0.88, respectively). The kappa coefficients showed fair and good agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for VCA IgM and VCA IgG. The percentage of positive agreement for VCA IgM was relatively low when comparing the Liaison and Platelia assays and the Architect and Platelia assays, because the Platelia assay had a lower VCA IgM positivity rate than the other assays and because of the overall low prevalence of VCA IgM-positive samples.
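The agreement figures above (kappa coefficients and positive/negative/total percent agreement) are derived from 2×2 cross-tabulations of paired qualitative results. The sketch below is a minimal illustration, with invented counts rather than study data, of how these statistics are computed and interpreted against the thresholds stated in the statistical analysis (kappa >0.75 excellent, 0.40–0.75 fair to good, <0.40 poor); the positive/negative agreement lines use one common definition (proportion of specific agreement).

```python
# Minimal sketch (hypothetical counts, not study data): Cohen's kappa and
# percent agreement between two qualitative assays from a 2x2 cross-tabulation.

def agreement_stats(a, b, c, d):
    """a = both positive, b = assay1 +/assay2 -, c = assay1 -/assay2 +, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                        # observed (total) agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # agreement expected by chance
    return {
        "total_agreement_%": 100 * po,
        "positive_agreement_%": 100 * 2 * a / (2 * a + b + c),  # specific positive agreement
        "negative_agreement_%": 100 * 2 * d / (2 * d + b + c),  # specific negative agreement
        "kappa": (po - pe) / (1 - pe),
    }

def interpret_kappa(k):
    # Thresholds as stated in the study's statistical analysis.
    if k > 0.75:
        return "excellent"
    if k >= 0.40:
        return "fair to good"
    return "poor"

# Example: hypothetical VCA IgM cross-tabulation of two assays on 279 samples.
stats = agreement_stats(a=20, b=5, c=3, d=251)
print(stats, interpret_kappa(stats["kappa"]))
```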
null
null
[ "1. Study samples", "2. EBV antibody assays", "3. EBV performance panel", "4. EBV DNA test", "5. Statistical analysis", "1. Diagnostic performance of the three EBV antibody assays", "2. Comparison of the EBV antibody and DNA assays", "3. Comparisons between the Architect, Liaison, and Platelia EBV antibody assays" ]
[ "A total of 279 samples (from 123 males and 156 females with a median age of 25 years, range 11 months-96 years) for which an EBV PCR test and antibody assays (VCA IgM, VCA IgG, and EBNA IgG) were concurrently ordered, were collected from March 2015 to March 2016. All samples were derived from immunocompetent patients for the diagnosis of EBV infection. We excluded samples from immunocompromised patients (e.g., those with cancer, organ transplant recipients, or those with other infectious diseases). The Liaison assay was performed on the requested date for EBV antibody tests, and the residual samples were stored at –20℃, following separation for the Architect assay and Platelia assay and between assay testing. This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Seoul, Korea. The IRB approved exemption for written informed consent, because this study was performed using stored residual samples after testing and based on a retrospective chart review.", "The Liaison assay was performed before the Architect and Platelia assays. The Liaison test is based on CLIA and samples were analyzed using the Liaison analyzer (DiaSorin); the Architect assay is a chemiluminescent microparticle immunoassay (CMIA) and was performed using the Architect i2000 analyzer (Abbott Diagnostics); and the Platelia test is an ELISA that was performed manually. Optical densities of the Platelia test were quantified using the BEP III System (Siemens, Marburg, Germany). For the Platelia assay, serum samples were diluted 1:21 (10 µL+ 200 µL) with the sample diluent provided with the assay kit. The instructions of the other tests did not recommend the use of serum dilution. All assay procedures followed the manufacturers' instructions (Table 1).", "The Anti-Epstein-Barr Virus Mixed Titer Performance Panel (SeraCare Life Science) includes 21 samples covering different EBV infection status (one seronegative, 13 past infections [VCA IgM−, VCA IgG+, EBNA IgG+], three acute primary infections [VCA IgM+, VCA IgG+, EBNA IgG−], three early phase acute primary infections [VCA IgM+, VCA IgG−, EBNA IgG−], and one transient infection or re-infection [VCA IgM+, VCA IgG+, EBNA IgG+]) and a data sheet for comparing the panel results with the test results of the commercial EBV assays.\nLyophilized aliquots of donor plasma were dissolved in distilled water prior to analysis. Tests were performed in duplicate using the three commercial EBV assays to reduce the likelihood of experimental or reagent errors. VCA IgM values >43.9 and >1.0 were considered positive according to the performance panel data sheet of the Liaison and Platelia assays, respectively; VCA IgG values >1.0 were considered positive for the Platelia EBV assay; and EBNA IgG values >21.9 and >1.0 in the Liaison and Platelia EBV assays, respectively, were considered positive. The positive and negative results of the panel samples, obtained using the EBV antibody assays, were interpreted according to the performance panel instructions, and EBV antibody assay results were determined following the instructions of each assay.", "PCR for EBV DNA was performed using the artus EBV RG PCR Kit (Qiagen, Hilden, Germany) on Rotor-Gene Q Instrument (Qiagen) after extracting DNA from plasma with the QIAamp DNA Mini Kit (Qiagen, Dusseldorf, Germany) according to the manufacturer's instruction.", "Kappa statistics, sensitivity, and specificity were calculated for VCA IgM, VCA IgG, and EBNA IgG based on the EBV performance panel results. 
We also calculated kappa statistics and the percentages of positive, negative, and total agreement between the Architect, Liaison, and Platelia EBV assays. Kappa values >0.75 indicated excellent agreement, values between 0.40 and 0.75 suggested fair to good agreement, and values <0.40 represented poor agreement. The percentage of total agreement between the assays represents the percentage of paired tests with identical results. Because “gray zone” or equivocal results cannot be interpreted as positive results and are mostly considered negative by clinicians, these results were grouped to evaluate kappa statistics and agreement percentages. The Student t-test was performed to determine the VCA IgM positive rate between the EBV DNA-positive and negative groups. P values<0.05 were considered statistically significant. To evaluate diagnostic performance based on clinical diagnosis of EBV infection of clinical samples, Youden's index and positive and negative Likelihood ratio were calculated. All statistical analyses were performed using Analyse-it, version 4.65 (Analyse-it Software, Ltd., Leeds, UK).", "The duplicate EBV antibody results were the same in all three assays. The EBV performance panel exhibited good and excellent agreement with the Architect, Liaison, and Platelia EBV antibody assays, with Kappa coefficients>0.6 (Table 2). However, we observed poor agreement between the performance panel and the Liaison assay for VCA IgG (kappa coefficient, 0.35; Table 2). One and two VCA IgM-positive EBV performance panel samples tested negative with the Architect and Platelia assays and Liaison assay, respectively; these changes affected the sensitivity of the VCA IgM tests (sensitivity, 71.4–85.7%; Table 2). Two and three panel samples tested VCA IgG-positive with the Architect and Liaison assays, respectively, but were negative according to the EBV performance panel (Table 2); this affected the specificity of the two assays. For EBNA IgG, three assays and EBV performance panel demonstrated similar results.", "EBV DNA was detected in 74 of 279 (26.5%) patient samples (Table 3). Most EBV DNA-negative patients were also VCA IgM-negative. The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02).\nForty of the 279 patients had confirmed cases of other infectious agents/diseases such as Cytomegalovirus, Mycobacterium tuberculosis, and malaria. Six of the 40 patients were positive for the EBV DNA test and all were VCA IgM negative according to the Architect, Liaison, and Platelia assays. One of the 34 EBV DNA-negative patients with another infectious disease was VCA IgM positive according to the Architect assay. The positive and negative test likelihood ratio and Youden's index according to EBV DNA test, VCA IgM test, and assay combination results were evaluated based on the clinical diagnosis of the 279 patients (Table 4). For the Architect and Liaison assays, a combination of VCA IgM and EBV DNA results was better than the EBV DNA test alone.", "The Architect and Liaison assays exhibited kappa coefficients of 0.79 for VCA IgM, 0.80 for VCA IgG, and 0.92 for EBNA IgG, indicating excellent agreement between the two assays (Table 5). We observed excellent agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for EBNA IgG (kappa coefficients, 0.89 and 0.88, respectively). 
The kappa coefficients showed fair and good agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for VCA IgM and VCA IgG. The percentage of positive agreement for VCA IgM was relatively low when comparing the Liaison and Platelia assays and the Architect and Platelia assays, because the Platelia assay had a lower VCA IgM positivity rate than the other assays as well as because of the overall low prevalence of VCA IgM-positive samples." ]
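The statistical analysis described in the section texts above also evaluates diagnostic performance against the clinical diagnosis using sensitivity, specificity, Youden's index, and positive and negative likelihood ratios. As a reference for those formulas, the following is a minimal sketch with hypothetical counts; only the 6/7 = 85.7% VCA IgM sensitivity mirrors a value reported in the Results, and the false-positive/true-negative counts are invented for illustration.

```python
# Minimal sketch (hypothetical counts): diagnostic performance metrics from a
# 2x2 table of test result vs. reference status (panel or clinical diagnosis).

def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)   # sensitivity
    spec = tn / (tn + fp)   # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "youden_index": sens + spec - 1,
        "LR_positive": sens / (1 - spec) if spec < 1 else float("inf"),
        "LR_negative": (1 - sens) / spec if spec > 0 else float("inf"),
    }

# Example: 6 of 7 VCA IgM-positive panel samples detected (85.7% sensitivity,
# as in the Results); fp and tn are illustrative only.
print(diagnostic_metrics(tp=6, fp=1, fn=1, tn=13))
```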
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "1. Study samples", "2. EBV antibody assays", "3. EBV performance panel", "4. EBV DNA test", "5. Statistical analysis", "RESULTS", "1. Diagnostic performance of the three EBV antibody assays", "2. Comparison of the EBV antibody and DNA assays", "3. Comparisons between the Architect, Liaison, and Platelia EBV antibody assays", "DISCUSSION" ]
[ "The Epstein-Barr virus (EBV) is one of the most prevalent causes of viral infection in humans. EBV has been linked to infectious mononucleosis [1]. Humans are the only natural hosts of EBV. Oral transmission is the primary route of infection in adolescents and young adults [2]; however, EBV can also be acquired via blood transfusions, hematopoietic stem cell transplantation, or solid organ transplants, and these infections can be life-threatening [3456].\nIn immunocompromised patients (particularly in solid organ transplant recipients), EBV DNA detection tests are performed within three months of transplantation and at least once per year in stable transplanted patients [47]. However, these tests do not reflect EBV infection status in all patients, as the virus can exist in B lymphocytes and possess Herpesviridae latency [8]. Antibody tests for EBV, such as those involving IgM and IgG antibodies against the EBV viral capsid antigen (VCA) and an IgG antibody against the EBV nuclear antigen (EBNA), are also frequently used to determine EBV infection status. EBV infection stage is typically assessed using a combination of antibody assays. An ideal EBV testing panel, including a combination of antibody assays for detecting EBV-specific antigens, should be able to detect all serologic infection statuses (acute, past, and absent infections).\nAlthough the immunofluorescence assay (IFA) is considered the gold standard for the detection of VCA and EBNA IgG [9], chemiluminescence immunoassays (CLIA) using automated analyzers are also widely used because of their objective interpretation of results, high throughput, and low labor intensiveness [9101112131415]. Different commercial EBV antibody assays often show varying test results, thus hindering the interpretation of patient EBV infection status [91016]. Reliable test results are needed for an exact diagnosis of EBV infection; therefore, EBV assays used in diagnostic laboratories must be evaluated for their accuracy, performance, and reliability.\nAlthough many studies have compared commercial EBV assays [91011121314], the diagnostic performance of the assays has not been evaluated using a standard EBV performance panel (SeraCare Life Science, Milford, MA, USA) covering various EBV infection status. To fill this gap, we evaluated the diagnostic performances of three commercial EBV antibody assays, Architect (Abbott Diagnostics, Wiesbaden, Germany), Liaison (DiaSorin, Saluggia, Italy), and Platelia (Bio-Rad, Marnes-la-Coquette, France) assays, using an EBV performance panel and compared the results of these three commercial assays in clinical samples of patients with suspected EBV infection.", " 1. Study samples A total of 279 samples (from 123 males and 156 females with a median age of 25 years, range 11 months-96 years) for which an EBV PCR test and antibody assays (VCA IgM, VCA IgG, and EBNA IgG) were concurrently ordered, were collected from March 2015 to March 2016. All samples were derived from immunocompetent patients for the diagnosis of EBV infection. We excluded samples from immunocompromised patients (e.g., those with cancer, organ transplant recipients, or those with other infectious diseases). The Liaison assay was performed on the requested date for EBV antibody tests, and the residual samples were stored at –20℃, following separation for the Architect assay and Platelia assay and between assay testing. This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Seoul, Korea. 
The IRB approved exemption for written informed consent, because this study was performed using stored residual samples after testing and based on a retrospective chart review.\nA total of 279 samples (from 123 males and 156 females with a median age of 25 years, range 11 months-96 years) for which an EBV PCR test and antibody assays (VCA IgM, VCA IgG, and EBNA IgG) were concurrently ordered, were collected from March 2015 to March 2016. All samples were derived from immunocompetent patients for the diagnosis of EBV infection. We excluded samples from immunocompromised patients (e.g., those with cancer, organ transplant recipients, or those with other infectious diseases). The Liaison assay was performed on the requested date for EBV antibody tests, and the residual samples were stored at –20℃, following separation for the Architect assay and Platelia assay and between assay testing. This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Seoul, Korea. The IRB approved exemption for written informed consent, because this study was performed using stored residual samples after testing and based on a retrospective chart review.\n 2. EBV antibody assays The Liaison assay was performed before the Architect and Platelia assays. The Liaison test is based on CLIA and samples were analyzed using the Liaison analyzer (DiaSorin); the Architect assay is a chemiluminescent microparticle immunoassay (CMIA) and was performed using the Architect i2000 analyzer (Abbott Diagnostics); and the Platelia test is an ELISA that was performed manually. Optical densities of the Platelia test were quantified using the BEP III System (Siemens, Marburg, Germany). For the Platelia assay, serum samples were diluted 1:21 (10 µL+ 200 µL) with the sample diluent provided with the assay kit. The instructions of the other tests did not recommend the use of serum dilution. All assay procedures followed the manufacturers' instructions (Table 1).\nThe Liaison assay was performed before the Architect and Platelia assays. The Liaison test is based on CLIA and samples were analyzed using the Liaison analyzer (DiaSorin); the Architect assay is a chemiluminescent microparticle immunoassay (CMIA) and was performed using the Architect i2000 analyzer (Abbott Diagnostics); and the Platelia test is an ELISA that was performed manually. Optical densities of the Platelia test were quantified using the BEP III System (Siemens, Marburg, Germany). For the Platelia assay, serum samples were diluted 1:21 (10 µL+ 200 µL) with the sample diluent provided with the assay kit. The instructions of the other tests did not recommend the use of serum dilution. All assay procedures followed the manufacturers' instructions (Table 1).\n 3. EBV performance panel The Anti-Epstein-Barr Virus Mixed Titer Performance Panel (SeraCare Life Science) includes 21 samples covering different EBV infection status (one seronegative, 13 past infections [VCA IgM−, VCA IgG+, EBNA IgG+], three acute primary infections [VCA IgM+, VCA IgG+, EBNA IgG−], three early phase acute primary infections [VCA IgM+, VCA IgG−, EBNA IgG−], and one transient infection or re-infection [VCA IgM+, VCA IgG+, EBNA IgG+]) and a data sheet for comparing the panel results with the test results of the commercial EBV assays.\nLyophilized aliquots of donor plasma were dissolved in distilled water prior to analysis. Tests were performed in duplicate using the three commercial EBV assays to reduce the likelihood of experimental or reagent errors. 
VCA IgM values >43.9 and >1.0 were considered positive according to the performance panel data sheet of the Liaison and Platelia assays, respectively; VCA IgG values >1.0 were considered positive for the Platelia EBV assay; and EBNA IgG values >21.9 and >1.0 in the Liaison and Platelia EBV assays, respectively, were considered positive. The positive and negative results of the panel samples, obtained using the EBV antibody assays, were interpreted according to the performance panel instructions, and EBV antibody assay results were determined following the instructions of each assay.\nThe Anti-Epstein-Barr Virus Mixed Titer Performance Panel (SeraCare Life Science) includes 21 samples covering different EBV infection status (one seronegative, 13 past infections [VCA IgM−, VCA IgG+, EBNA IgG+], three acute primary infections [VCA IgM+, VCA IgG+, EBNA IgG−], three early phase acute primary infections [VCA IgM+, VCA IgG−, EBNA IgG−], and one transient infection or re-infection [VCA IgM+, VCA IgG+, EBNA IgG+]) and a data sheet for comparing the panel results with the test results of the commercial EBV assays.\nLyophilized aliquots of donor plasma were dissolved in distilled water prior to analysis. Tests were performed in duplicate using the three commercial EBV assays to reduce the likelihood of experimental or reagent errors. VCA IgM values >43.9 and >1.0 were considered positive according to the performance panel data sheet of the Liaison and Platelia assays, respectively; VCA IgG values >1.0 were considered positive for the Platelia EBV assay; and EBNA IgG values >21.9 and >1.0 in the Liaison and Platelia EBV assays, respectively, were considered positive. The positive and negative results of the panel samples, obtained using the EBV antibody assays, were interpreted according to the performance panel instructions, and EBV antibody assay results were determined following the instructions of each assay.\n 4. EBV DNA test PCR for EBV DNA was performed using the artus EBV RG PCR Kit (Qiagen, Hilden, Germany) on Rotor-Gene Q Instrument (Qiagen) after extracting DNA from plasma with the QIAamp DNA Mini Kit (Qiagen, Dusseldorf, Germany) according to the manufacturer's instruction.\nPCR for EBV DNA was performed using the artus EBV RG PCR Kit (Qiagen, Hilden, Germany) on Rotor-Gene Q Instrument (Qiagen) after extracting DNA from plasma with the QIAamp DNA Mini Kit (Qiagen, Dusseldorf, Germany) according to the manufacturer's instruction.\n 5. Statistical analysis Kappa statistics, sensitivity, and specificity were calculated for VCA IgM, VCA IgG, and EBNA IgG based on the EBV performance panel results. We also calculated kappa statistics and the percentages of positive, negative, and total agreement between the Architect, Liaison, and Platelia EBV assays. Kappa values >0.75 indicated excellent agreement, values between 0.40 and 0.75 suggested fair to good agreement, and values <0.40 represented poor agreement. The percentage of total agreement between the assays represents the percentage of paired tests with identical results. Because “gray zone” or equivocal results cannot be interpreted as positive results and are mostly considered negative by clinicians, these results were grouped to evaluate kappa statistics and agreement percentages. The Student t-test was performed to determine the VCA IgM positive rate between the EBV DNA-positive and negative groups. P values<0.05 were considered statistically significant. 
To evaluate diagnostic performance based on clinical diagnosis of EBV infection of clinical samples, Youden's index and positive and negative Likelihood ratio were calculated. All statistical analyses were performed using Analyse-it, version 4.65 (Analyse-it Software, Ltd., Leeds, UK).\nKappa statistics, sensitivity, and specificity were calculated for VCA IgM, VCA IgG, and EBNA IgG based on the EBV performance panel results. We also calculated kappa statistics and the percentages of positive, negative, and total agreement between the Architect, Liaison, and Platelia EBV assays. Kappa values >0.75 indicated excellent agreement, values between 0.40 and 0.75 suggested fair to good agreement, and values <0.40 represented poor agreement. The percentage of total agreement between the assays represents the percentage of paired tests with identical results. Because “gray zone” or equivocal results cannot be interpreted as positive results and are mostly considered negative by clinicians, these results were grouped to evaluate kappa statistics and agreement percentages. The Student t-test was performed to determine the VCA IgM positive rate between the EBV DNA-positive and negative groups. P values<0.05 were considered statistically significant. To evaluate diagnostic performance based on clinical diagnosis of EBV infection of clinical samples, Youden's index and positive and negative Likelihood ratio were calculated. All statistical analyses were performed using Analyse-it, version 4.65 (Analyse-it Software, Ltd., Leeds, UK).", "A total of 279 samples (from 123 males and 156 females with a median age of 25 years, range 11 months-96 years) for which an EBV PCR test and antibody assays (VCA IgM, VCA IgG, and EBNA IgG) were concurrently ordered, were collected from March 2015 to March 2016. All samples were derived from immunocompetent patients for the diagnosis of EBV infection. We excluded samples from immunocompromised patients (e.g., those with cancer, organ transplant recipients, or those with other infectious diseases). The Liaison assay was performed on the requested date for EBV antibody tests, and the residual samples were stored at –20℃, following separation for the Architect assay and Platelia assay and between assay testing. This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Seoul, Korea. The IRB approved exemption for written informed consent, because this study was performed using stored residual samples after testing and based on a retrospective chart review.", "The Liaison assay was performed before the Architect and Platelia assays. The Liaison test is based on CLIA and samples were analyzed using the Liaison analyzer (DiaSorin); the Architect assay is a chemiluminescent microparticle immunoassay (CMIA) and was performed using the Architect i2000 analyzer (Abbott Diagnostics); and the Platelia test is an ELISA that was performed manually. Optical densities of the Platelia test were quantified using the BEP III System (Siemens, Marburg, Germany). For the Platelia assay, serum samples were diluted 1:21 (10 µL+ 200 µL) with the sample diluent provided with the assay kit. The instructions of the other tests did not recommend the use of serum dilution. 
All assay procedures followed the manufacturers' instructions (Table 1).", "The Anti-Epstein-Barr Virus Mixed Titer Performance Panel (SeraCare Life Science) includes 21 samples covering different EBV infection status (one seronegative, 13 past infections [VCA IgM−, VCA IgG+, EBNA IgG+], three acute primary infections [VCA IgM+, VCA IgG+, EBNA IgG−], three early phase acute primary infections [VCA IgM+, VCA IgG−, EBNA IgG−], and one transient infection or re-infection [VCA IgM+, VCA IgG+, EBNA IgG+]) and a data sheet for comparing the panel results with the test results of the commercial EBV assays.\nLyophilized aliquots of donor plasma were dissolved in distilled water prior to analysis. Tests were performed in duplicate using the three commercial EBV assays to reduce the likelihood of experimental or reagent errors. VCA IgM values >43.9 and >1.0 were considered positive according to the performance panel data sheet of the Liaison and Platelia assays, respectively; VCA IgG values >1.0 were considered positive for the Platelia EBV assay; and EBNA IgG values >21.9 and >1.0 in the Liaison and Platelia EBV assays, respectively, were considered positive. The positive and negative results of the panel samples, obtained using the EBV antibody assays, were interpreted according to the performance panel instructions, and EBV antibody assay results were determined following the instructions of each assay.", "PCR for EBV DNA was performed using the artus EBV RG PCR Kit (Qiagen, Hilden, Germany) on Rotor-Gene Q Instrument (Qiagen) after extracting DNA from plasma with the QIAamp DNA Mini Kit (Qiagen, Dusseldorf, Germany) according to the manufacturer's instruction.", "Kappa statistics, sensitivity, and specificity were calculated for VCA IgM, VCA IgG, and EBNA IgG based on the EBV performance panel results. We also calculated kappa statistics and the percentages of positive, negative, and total agreement between the Architect, Liaison, and Platelia EBV assays. Kappa values >0.75 indicated excellent agreement, values between 0.40 and 0.75 suggested fair to good agreement, and values <0.40 represented poor agreement. The percentage of total agreement between the assays represents the percentage of paired tests with identical results. Because “gray zone” or equivocal results cannot be interpreted as positive results and are mostly considered negative by clinicians, these results were grouped to evaluate kappa statistics and agreement percentages. The Student t-test was performed to determine the VCA IgM positive rate between the EBV DNA-positive and negative groups. P values<0.05 were considered statistically significant. To evaluate diagnostic performance based on clinical diagnosis of EBV infection of clinical samples, Youden's index and positive and negative Likelihood ratio were calculated. All statistical analyses were performed using Analyse-it, version 4.65 (Analyse-it Software, Ltd., Leeds, UK).", " 1. Diagnostic performance of the three EBV antibody assays The duplicate EBV antibody results were the same in all three assays. The EBV performance panel exhibited good and excellent agreement with the Architect, Liaison, and Platelia EBV antibody assays, with Kappa coefficients>0.6 (Table 2). However, we observed poor agreement between the performance panel and the Liaison assay for VCA IgG (kappa coefficient, 0.35; Table 2). 
One and two VCA IgM-positive EBV performance panel samples tested negative with the Architect and Platelia assays and Liaison assay, respectively; these changes affected the sensitivity of the VCA IgM tests (sensitivity, 71.4–85.7%; Table 2). Two and three panel samples tested VCA IgG-positive with the Architect and Liaison assays, respectively, but were negative according to the EBV performance panel (Table 2); this affected the specificity of the two assays. For EBNA IgG, three assays and EBV performance panel demonstrated similar results.\nThe duplicate EBV antibody results were the same in all three assays. The EBV performance panel exhibited good and excellent agreement with the Architect, Liaison, and Platelia EBV antibody assays, with Kappa coefficients>0.6 (Table 2). However, we observed poor agreement between the performance panel and the Liaison assay for VCA IgG (kappa coefficient, 0.35; Table 2). One and two VCA IgM-positive EBV performance panel samples tested negative with the Architect and Platelia assays and Liaison assay, respectively; these changes affected the sensitivity of the VCA IgM tests (sensitivity, 71.4–85.7%; Table 2). Two and three panel samples tested VCA IgG-positive with the Architect and Liaison assays, respectively, but were negative according to the EBV performance panel (Table 2); this affected the specificity of the two assays. For EBNA IgG, three assays and EBV performance panel demonstrated similar results.\n 2. Comparison of the EBV antibody and DNA assays EBV DNA was detected in 74 of 279 (26.5%) patient samples (Table 3). Most EBV DNA-negative patients were also VCA IgM-negative. The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02).\nForty of the 279 patients had confirmed cases of other infectious agents/diseases such as Cytomegalovirus, Mycobacterium tuberculosis, and malaria. Six of the 40 patients were positive for the EBV DNA test and all were VCA IgM negative according to the Architect, Liaison, and Platelia assays. One of the 34 EBV DNA-negative patients with another infectious disease was VCA IgM positive according to the Architect assay. The positive and negative test likelihood ratio and Youden's index according to EBV DNA test, VCA IgM test, and assay combination results were evaluated based on the clinical diagnosis of the 279 patients (Table 4). For the Architect and Liaison assays, a combination of VCA IgM and EBV DNA results was better than the EBV DNA test alone.\nEBV DNA was detected in 74 of 279 (26.5%) patient samples (Table 3). Most EBV DNA-negative patients were also VCA IgM-negative. The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02).\nForty of the 279 patients had confirmed cases of other infectious agents/diseases such as Cytomegalovirus, Mycobacterium tuberculosis, and malaria. Six of the 40 patients were positive for the EBV DNA test and all were VCA IgM negative according to the Architect, Liaison, and Platelia assays. One of the 34 EBV DNA-negative patients with another infectious disease was VCA IgM positive according to the Architect assay. The positive and negative test likelihood ratio and Youden's index according to EBV DNA test, VCA IgM test, and assay combination results were evaluated based on the clinical diagnosis of the 279 patients (Table 4). 
For the Architect and Liaison assays, a combination of VCA IgM and EBV DNA results was better than the EBV DNA test alone.\n 3. Comparisons between the Architect, Liaison, and Platelia EBV antibody assays The Architect and Liaison assays exhibited kappa coefficients of 0.79 for VCA IgM, 0.80 for VCA IgG, and 0.92 for EBNA IgG, indicating excellent agreement between the two assays (Table 5). We observed excellent agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for EBNA IgG (kappa coefficients, 0.89 and 0.88, respectively). The kappa coefficients showed fair and good agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for VCA IgM and VCA IgG. The percentage of positive agreement for VCA IgM was relatively low when comparing the Liaison and Platelia assays and the Architect and Platelia assays, because the Platelia assay had a lower VCA IgM positivity rate than the other assays as well as because of the overall low prevalence of VCA IgM-positive samples.\nThe Architect and Liaison assays exhibited kappa coefficients of 0.79 for VCA IgM, 0.80 for VCA IgG, and 0.92 for EBNA IgG, indicating excellent agreement between the two assays (Table 5). We observed excellent agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for EBNA IgG (kappa coefficients, 0.89 and 0.88, respectively). The kappa coefficients showed fair and good agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for VCA IgM and VCA IgG. The percentage of positive agreement for VCA IgM was relatively low when comparing the Liaison and Platelia assays and the Architect and Platelia assays, because the Platelia assay had a lower VCA IgM positivity rate than the other assays as well as because of the overall low prevalence of VCA IgM-positive samples.", "The duplicate EBV antibody results were the same in all three assays. The EBV performance panel exhibited good and excellent agreement with the Architect, Liaison, and Platelia EBV antibody assays, with Kappa coefficients>0.6 (Table 2). However, we observed poor agreement between the performance panel and the Liaison assay for VCA IgG (kappa coefficient, 0.35; Table 2). One and two VCA IgM-positive EBV performance panel samples tested negative with the Architect and Platelia assays and Liaison assay, respectively; these changes affected the sensitivity of the VCA IgM tests (sensitivity, 71.4–85.7%; Table 2). Two and three panel samples tested VCA IgG-positive with the Architect and Liaison assays, respectively, but were negative according to the EBV performance panel (Table 2); this affected the specificity of the two assays. For EBNA IgG, three assays and EBV performance panel demonstrated similar results.", "EBV DNA was detected in 74 of 279 (26.5%) patient samples (Table 3). Most EBV DNA-negative patients were also VCA IgM-negative. The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02).\nForty of the 279 patients had confirmed cases of other infectious agents/diseases such as Cytomegalovirus, Mycobacterium tuberculosis, and malaria. Six of the 40 patients were positive for the EBV DNA test and all were VCA IgM negative according to the Architect, Liaison, and Platelia assays. One of the 34 EBV DNA-negative patients with another infectious disease was VCA IgM positive according to the Architect assay. 
The positive and negative test likelihood ratio and Youden's index according to EBV DNA test, VCA IgM test, and assay combination results were evaluated based on the clinical diagnosis of the 279 patients (Table 4). For the Architect and Liaison assays, a combination of VCA IgM and EBV DNA results was better than the EBV DNA test alone.", "The Architect and Liaison assays exhibited kappa coefficients of 0.79 for VCA IgM, 0.80 for VCA IgG, and 0.92 for EBNA IgG, indicating excellent agreement between the two assays (Table 5). We observed excellent agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for EBNA IgG (kappa coefficients, 0.89 and 0.88, respectively). The kappa coefficients showed fair and good agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for VCA IgM and VCA IgG. The percentage of positive agreement for VCA IgM was relatively low when comparing the Liaison and Platelia assays and the Architect and Platelia assays, because the Platelia assay had a lower VCA IgM positivity rate than the other assays as well as because of the overall low prevalence of VCA IgM-positive samples.", "We evaluated the diagnostic performance of the Architect CMIA, Liaison CLIA, and Platelia ELISA EBV assays (VCA IgM, VCA IgG, and EBNA IgG antibodies) using an EBV performance panel as a reference. The EBV performance panel results indicated that the three EBV antibody assays showed excellent agreement for VCA IgM and EBNA IgG; however, the assays exhibited poorer VCA IgG detection than the performance panel. Although only a few VCA IgM-positive panel samples tested negative using the three antibody assays, false negative results could affect correct EBV infection diagnosis in patients with suspected EBV infection. Kappa agreement and sensitivity were significantly affected because of the limited number of VCA IgM-positive samples in the EBV performance panel. The bias of the performance panel towards more VCA IgM-negative and VCA IgG-positive and EBNA IgG-positive panel members could have caused different agreement results. One EBNA IgG-positive panel sample tested negative using the Liaison and Platelia assays.\nThis panel sample had a past infection status, and the test result was close to the cut-off value based on the EBV performance panel data sheet (Liaison, 33.15 U/mL; Platelia, 1.4 ISR). For VCA IgG, the Architect and Liaison assays showed positive results for two and three VCA IgG-negative panel members, respectively. In contrast, the Platelia assay demonstrated negative results for three VCA IgG-positive panel samples. This was likely due to differences in detection sensitivity between the assay methods used in the Architect and Liaison assays (CLIA vs ELISA).\nAs the Architect and Liaison assays showed more differences in their test results than the Platelia assay compared with the performance panel, the kappa agreement and specificity of the Architect and Liaison assays were affected more than those of the Platelia assay. The percentages of total agreement were 90.5%, 85.7%, and 85.7% between the EBV performance panel and the Architect, Liaison, and Platelia assays, respectively. However, false-positive and false-negative results have different importance in the field of infection diagnosis; Platelia assay false-negative cases represented the possibility of missed EBV infection diagnosis. 
We hypothesize that if the performance panel had included more VCA IgM-positive and VCA IgG-negative samples, we would have seen excellent agreement and comparable sensitivity and specificity for the three EBV antibody assays.\nWhen we compared the EBV DNA and VCA IgM antibody results, we detected significantly higher positivity rates for the EBV DNA test. The EBV DNA detection method has a higher sensitivity than serological antibody detection methods and EBV DNA can be detected even without EBV infection, as once infected with EBV, the virus might be latently present in B lymphocytes [8]. Over 80% of clinical samples that tested negative for EBV infection using the antibody assays exhibited EBV DNA positivity. These samples were possibly collected during early infection stages (prior to VCA IgM formation); DNA positivity could also be due to EBV latency in B lymphocytes. In addition, EBV DNA detection can fail at times, despite current EBV infection, because of Herpesviridae latency in molecular testing [8]. This could explain the VCA IgM positivity and EBV DNA negativity in a number of samples in our study. An evaluation of the diagnostic performance of the EBV DNA test and EBV antibody assays alone or in combination based on the clinical diagnosis of 279 patients indicated that the performance of the VCA IgM and EBV DNA combination was superior to that of the Architect or Liaison assay alone. Although most EBV DNA-negative samples were also VCA IgM negative and EBV DNA was detected in VCA IgM positive samples, the EBV DNA and antibody tests did not always show the same results. Therefore, EBV DNA and antibody assays should be used complementarily to diagnose the EBV infection status of patients.\nThe Architect and Liaison assays showed excellent agreement for VCA IgM, VCA IgG, and EBNA IgG. The Platelia assay showed more negative test results for VCA IgM- and VCA IgG-positive panel samples than the Architect and Liaison assays. This difference in sensitivity between the Platelia and the Architect and Liaison assays is most likely due to the fact that the Platelia assay uses ELISA, while the Architect and Liaison assays use CMIA and CLIA (with fully-automated analyzers), respectively. However, all kappa coefficients were >0.40; thus, the three EBV assays were considered to have fair to excellent agreement. The agreements between the different EBV antibody assays were comparable with the results of other studies [9, 10, 11, 12, 13, 14, 15, 17].\nOur study has several limitations. First, the EBV performance panel and clinical samples were not tested with the IFA method, which is considered the gold standard for VCA IgG and EBNA IgG detection. However, excellent sensitivity and specificity have been reported for chemiluminescent EBV antibody detection assays [12]. In addition, we used a commercial mixed performance panel consisting of various samples of EBV infection status to evaluate the diagnostic performance of the EBV antibody assays. The standard performance panel, which has been validated with several widely used EBV antibody assays and known EBV infection status, is a valuable evaluation tool for EBV antibody detection assays. Second, the study included only a few VCA IgM positive clinical and performance panel samples. 
As EBV infection mostly occurs during childhood and adolescence and young patients typically do not show severe symptoms, the number of samples with new infection status was limited. Finally, we did not compare the EBV DNA test with the performance panel because of insufficient material. This may help explain the discrepancies in test results between the EBV DNA and antibody tests. Future studies including a sufficient number of samples with validated EBV infection status should be conducted to address this discrepancy and to accurately evaluate the diagnostic performance of the EBV assays.\nDespite these limitations, to the best of our knowledge, this is the first study to evaluate the diagnostic performance of the Architect, Liaison, and Platelia EBV antibody assays using a standard performance panel. The diagnostic performances of the three EBV antibody assays using a combination of VCA IgM, VCA IgG, and EBNA IgG antibodies were acceptable. In addition, the three EBV antibody assays showed good agreement for the clinical samples. A combination of the EBV DNA and VCA IgM antibody tests would provide better diagnostic performance for patients with suspected EBV infection." ]
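The agreement statistics used throughout this comparison (Cohen's kappa, percent agreement, sensitivity, and specificity against the performance panel) can be reproduced from the paired qualitative results. The following is a minimal Python sketch, not the authors' Analyse-it workflow; the list-of-booleans layout and function names are illustrative assumptions.

```python
# Sketch: agreement of one EBV antibody assay against the performance panel.
# Each list holds qualitative results (True = positive); the panel is the reference.

def two_by_two(assay, panel):
    tp = sum(a and p for a, p in zip(assay, panel))
    fp = sum(a and not p for a, p in zip(assay, panel))
    fn = sum((not a) and p for a, p in zip(assay, panel))
    tn = sum((not a) and (not p) for a, p in zip(assay, panel))
    return tp, fp, fn, tn

def agreement_stats(assay, panel):
    tp, fp, fn, tn = two_by_two(assay, panel)
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    total_agreement = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (total_agreement - p_chance) / (1 - p_chance)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "total_agreement": total_agreement, "kappa": kappa}
```

Under the interpretation stated in the Methods, kappa >0.75 would be read as excellent, 0.40-0.75 as fair to good, and <0.40 as poor agreement.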
[ "intro", "materials|methods", null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Epstein-Barr virus", "Assay", "Diagnostic performance", "DNA", "Antibody", "Immunoglobulin" ]
INTRODUCTION: The Epstein-Barr virus (EBV) is one of the most prevalent causes of viral infection in humans. EBV has been linked to infectious mononucleosis [1]. Humans are the only natural hosts of EBV. Oral transmission is the primary route of infection in adolescents and young adults [2]; however, EBV can also be acquired via blood transfusions, hematopoietic stem cell transplantation, or solid organ transplants, and these infections can be life-threatening [3456]. In immunocompromised patients (particularly in solid organ transplant recipients), EBV DNA detection tests are performed within three months of transplantation and at least once per year in stable transplanted patients [47]. However, these tests do not reflect EBV infection status in all patients, as the virus can exist in B lymphocytes and possess Herpesviridae latency [8]. Antibody tests for EBV, such as those involving IgM and IgG antibodies against the EBV viral capsid antigen (VCA) and an IgG antibody against the EBV nuclear antigen (EBNA), are also frequently used to determine EBV infection status. EBV infection stage is typically assessed using a combination of antibody assays. An ideal EBV testing panel, including a combination of antibody assays for detecting EBV-specific antigens, should be able to detect all serologic infection statuses (acute, past, and absent infections). Although the immunofluorescence assay (IFA) is considered the gold standard for the detection of VCA and EBNA IgG [9], chemiluminescence immunoassays (CLIA) using automated analyzers are also widely used because of their objective interpretation of results, high throughput, and low labor intensiveness [9101112131415]. Different commercial EBV antibody assays often show varying test results, thus hindering the interpretation of patient EBV infection status [91016]. Reliable test results are needed for an exact diagnosis of EBV infection; therefore, EBV assays used in diagnostic laboratories must be evaluated for their accuracy, performance, and reliability. Although many studies have compared commercial EBV assays [91011121314], the diagnostic performance of the assays has not been evaluated using a standard EBV performance panel (SeraCare Life Science, Milford, MA, USA) covering various EBV infection status. To fill this gap, we evaluated the diagnostic performances of three commercial EBV antibody assays, Architect (Abbott Diagnostics, Wiesbaden, Germany), Liaison (DiaSorin, Saluggia, Italy), and Platelia (Bio-Rad, Marnes-la-Coquette, France) assays, using an EBV performance panel and compared the results of these three commercial assays in clinical samples of patients with suspected EBV infection. METHODS: 1. Study samples A total of 279 samples (from 123 males and 156 females with a median age of 25 years, range 11 months-96 years) for which an EBV PCR test and antibody assays (VCA IgM, VCA IgG, and EBNA IgG) were concurrently ordered, were collected from March 2015 to March 2016. All samples were derived from immunocompetent patients for the diagnosis of EBV infection. We excluded samples from immunocompromised patients (e.g., those with cancer, organ transplant recipients, or those with other infectious diseases). The Liaison assay was performed on the requested date for EBV antibody tests, and the residual samples were stored at –20℃, following separation for the Architect assay and Platelia assay and between assay testing. This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Seoul, Korea. 
The IRB approved exemption for written informed consent, because this study was performed using stored residual samples after testing and based on a retrospective chart review. A total of 279 samples (from 123 males and 156 females with a median age of 25 years, range 11 months-96 years) for which an EBV PCR test and antibody assays (VCA IgM, VCA IgG, and EBNA IgG) were concurrently ordered, were collected from March 2015 to March 2016. All samples were derived from immunocompetent patients for the diagnosis of EBV infection. We excluded samples from immunocompromised patients (e.g., those with cancer, organ transplant recipients, or those with other infectious diseases). The Liaison assay was performed on the requested date for EBV antibody tests, and the residual samples were stored at –20℃, following separation for the Architect assay and Platelia assay and between assay testing. This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Seoul, Korea. The IRB approved exemption for written informed consent, because this study was performed using stored residual samples after testing and based on a retrospective chart review. 2. EBV antibody assays The Liaison assay was performed before the Architect and Platelia assays. The Liaison test is based on CLIA and samples were analyzed using the Liaison analyzer (DiaSorin); the Architect assay is a chemiluminescent microparticle immunoassay (CMIA) and was performed using the Architect i2000 analyzer (Abbott Diagnostics); and the Platelia test is an ELISA that was performed manually. Optical densities of the Platelia test were quantified using the BEP III System (Siemens, Marburg, Germany). For the Platelia assay, serum samples were diluted 1:21 (10 µL+ 200 µL) with the sample diluent provided with the assay kit. The instructions of the other tests did not recommend the use of serum dilution. All assay procedures followed the manufacturers' instructions (Table 1). The Liaison assay was performed before the Architect and Platelia assays. The Liaison test is based on CLIA and samples were analyzed using the Liaison analyzer (DiaSorin); the Architect assay is a chemiluminescent microparticle immunoassay (CMIA) and was performed using the Architect i2000 analyzer (Abbott Diagnostics); and the Platelia test is an ELISA that was performed manually. Optical densities of the Platelia test were quantified using the BEP III System (Siemens, Marburg, Germany). For the Platelia assay, serum samples were diluted 1:21 (10 µL+ 200 µL) with the sample diluent provided with the assay kit. The instructions of the other tests did not recommend the use of serum dilution. All assay procedures followed the manufacturers' instructions (Table 1). 3. EBV performance panel The Anti-Epstein-Barr Virus Mixed Titer Performance Panel (SeraCare Life Science) includes 21 samples covering different EBV infection status (one seronegative, 13 past infections [VCA IgM−, VCA IgG+, EBNA IgG+], three acute primary infections [VCA IgM+, VCA IgG+, EBNA IgG−], three early phase acute primary infections [VCA IgM+, VCA IgG−, EBNA IgG−], and one transient infection or re-infection [VCA IgM+, VCA IgG+, EBNA IgG+]) and a data sheet for comparing the panel results with the test results of the commercial EBV assays. Lyophilized aliquots of donor plasma were dissolved in distilled water prior to analysis. Tests were performed in duplicate using the three commercial EBV assays to reduce the likelihood of experimental or reagent errors. 
VCA IgM values >43.9 and >1.0 were considered positive according to the performance panel data sheet of the Liaison and Platelia assays, respectively; VCA IgG values >1.0 were considered positive for the Platelia EBV assay; and EBNA IgG values >21.9 and >1.0 in the Liaison and Platelia EBV assays, respectively, were considered positive. The positive and negative results of the panel samples, obtained using the EBV antibody assays, were interpreted according to the performance panel instructions, and EBV antibody assay results were determined following the instructions of each assay. The Anti-Epstein-Barr Virus Mixed Titer Performance Panel (SeraCare Life Science) includes 21 samples covering different EBV infection status (one seronegative, 13 past infections [VCA IgM−, VCA IgG+, EBNA IgG+], three acute primary infections [VCA IgM+, VCA IgG+, EBNA IgG−], three early phase acute primary infections [VCA IgM+, VCA IgG−, EBNA IgG−], and one transient infection or re-infection [VCA IgM+, VCA IgG+, EBNA IgG+]) and a data sheet for comparing the panel results with the test results of the commercial EBV assays. Lyophilized aliquots of donor plasma were dissolved in distilled water prior to analysis. Tests were performed in duplicate using the three commercial EBV assays to reduce the likelihood of experimental or reagent errors. VCA IgM values >43.9 and >1.0 were considered positive according to the performance panel data sheet of the Liaison and Platelia assays, respectively; VCA IgG values >1.0 were considered positive for the Platelia EBV assay; and EBNA IgG values >21.9 and >1.0 in the Liaison and Platelia EBV assays, respectively, were considered positive. The positive and negative results of the panel samples, obtained using the EBV antibody assays, were interpreted according to the performance panel instructions, and EBV antibody assay results were determined following the instructions of each assay. 4. EBV DNA test PCR for EBV DNA was performed using the artus EBV RG PCR Kit (Qiagen, Hilden, Germany) on Rotor-Gene Q Instrument (Qiagen) after extracting DNA from plasma with the QIAamp DNA Mini Kit (Qiagen, Dusseldorf, Germany) according to the manufacturer's instruction. PCR for EBV DNA was performed using the artus EBV RG PCR Kit (Qiagen, Hilden, Germany) on Rotor-Gene Q Instrument (Qiagen) after extracting DNA from plasma with the QIAamp DNA Mini Kit (Qiagen, Dusseldorf, Germany) according to the manufacturer's instruction. 5. Statistical analysis Kappa statistics, sensitivity, and specificity were calculated for VCA IgM, VCA IgG, and EBNA IgG based on the EBV performance panel results. We also calculated kappa statistics and the percentages of positive, negative, and total agreement between the Architect, Liaison, and Platelia EBV assays. Kappa values >0.75 indicated excellent agreement, values between 0.40 and 0.75 suggested fair to good agreement, and values <0.40 represented poor agreement. The percentage of total agreement between the assays represents the percentage of paired tests with identical results. Because “gray zone” or equivocal results cannot be interpreted as positive results and are mostly considered negative by clinicians, these results were grouped to evaluate kappa statistics and agreement percentages. The Student t-test was performed to determine the VCA IgM positive rate between the EBV DNA-positive and negative groups. P values<0.05 were considered statistically significant. 
To evaluate diagnostic performance based on clinical diagnosis of EBV infection of clinical samples, Youden's index and positive and negative Likelihood ratio were calculated. All statistical analyses were performed using Analyse-it, version 4.65 (Analyse-it Software, Ltd., Leeds, UK). Kappa statistics, sensitivity, and specificity were calculated for VCA IgM, VCA IgG, and EBNA IgG based on the EBV performance panel results. We also calculated kappa statistics and the percentages of positive, negative, and total agreement between the Architect, Liaison, and Platelia EBV assays. Kappa values >0.75 indicated excellent agreement, values between 0.40 and 0.75 suggested fair to good agreement, and values <0.40 represented poor agreement. The percentage of total agreement between the assays represents the percentage of paired tests with identical results. Because “gray zone” or equivocal results cannot be interpreted as positive results and are mostly considered negative by clinicians, these results were grouped to evaluate kappa statistics and agreement percentages. The Student t-test was performed to determine the VCA IgM positive rate between the EBV DNA-positive and negative groups. P values<0.05 were considered statistically significant. To evaluate diagnostic performance based on clinical diagnosis of EBV infection of clinical samples, Youden's index and positive and negative Likelihood ratio were calculated. All statistical analyses were performed using Analyse-it, version 4.65 (Analyse-it Software, Ltd., Leeds, UK). 1. Study samples: A total of 279 samples (from 123 males and 156 females with a median age of 25 years, range 11 months-96 years) for which an EBV PCR test and antibody assays (VCA IgM, VCA IgG, and EBNA IgG) were concurrently ordered, were collected from March 2015 to March 2016. All samples were derived from immunocompetent patients for the diagnosis of EBV infection. We excluded samples from immunocompromised patients (e.g., those with cancer, organ transplant recipients, or those with other infectious diseases). The Liaison assay was performed on the requested date for EBV antibody tests, and the residual samples were stored at –20℃, following separation for the Architect assay and Platelia assay and between assay testing. This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Seoul, Korea. The IRB approved exemption for written informed consent, because this study was performed using stored residual samples after testing and based on a retrospective chart review. 2. EBV antibody assays: The Liaison assay was performed before the Architect and Platelia assays. The Liaison test is based on CLIA and samples were analyzed using the Liaison analyzer (DiaSorin); the Architect assay is a chemiluminescent microparticle immunoassay (CMIA) and was performed using the Architect i2000 analyzer (Abbott Diagnostics); and the Platelia test is an ELISA that was performed manually. Optical densities of the Platelia test were quantified using the BEP III System (Siemens, Marburg, Germany). For the Platelia assay, serum samples were diluted 1:21 (10 µL+ 200 µL) with the sample diluent provided with the assay kit. The instructions of the other tests did not recommend the use of serum dilution. All assay procedures followed the manufacturers' instructions (Table 1). 3. 
EBV performance panel: The Anti-Epstein-Barr Virus Mixed Titer Performance Panel (SeraCare Life Science) includes 21 samples covering different EBV infection status (one seronegative, 13 past infections [VCA IgM−, VCA IgG+, EBNA IgG+], three acute primary infections [VCA IgM+, VCA IgG+, EBNA IgG−], three early phase acute primary infections [VCA IgM+, VCA IgG−, EBNA IgG−], and one transient infection or re-infection [VCA IgM+, VCA IgG+, EBNA IgG+]) and a data sheet for comparing the panel results with the test results of the commercial EBV assays. Lyophilized aliquots of donor plasma were dissolved in distilled water prior to analysis. Tests were performed in duplicate using the three commercial EBV assays to reduce the likelihood of experimental or reagent errors. VCA IgM values >43.9 and >1.0 were considered positive according to the performance panel data sheet of the Liaison and Platelia assays, respectively; VCA IgG values >1.0 were considered positive for the Platelia EBV assay; and EBNA IgG values >21.9 and >1.0 in the Liaison and Platelia EBV assays, respectively, were considered positive. The positive and negative results of the panel samples, obtained using the EBV antibody assays, were interpreted according to the performance panel instructions, and EBV antibody assay results were determined following the instructions of each assay. 4. EBV DNA test: PCR for EBV DNA was performed using the artus EBV RG PCR Kit (Qiagen, Hilden, Germany) on Rotor-Gene Q Instrument (Qiagen) after extracting DNA from plasma with the QIAamp DNA Mini Kit (Qiagen, Dusseldorf, Germany) according to the manufacturer's instruction. 5. Statistical analysis: Kappa statistics, sensitivity, and specificity were calculated for VCA IgM, VCA IgG, and EBNA IgG based on the EBV performance panel results. We also calculated kappa statistics and the percentages of positive, negative, and total agreement between the Architect, Liaison, and Platelia EBV assays. Kappa values >0.75 indicated excellent agreement, values between 0.40 and 0.75 suggested fair to good agreement, and values <0.40 represented poor agreement. The percentage of total agreement between the assays represents the percentage of paired tests with identical results. Because “gray zone” or equivocal results cannot be interpreted as positive results and are mostly considered negative by clinicians, these results were grouped to evaluate kappa statistics and agreement percentages. The Student t-test was performed to determine the VCA IgM positive rate between the EBV DNA-positive and negative groups. P values<0.05 were considered statistically significant. To evaluate diagnostic performance based on clinical diagnosis of EBV infection of clinical samples, Youden's index and positive and negative Likelihood ratio were calculated. All statistical analyses were performed using Analyse-it, version 4.65 (Analyse-it Software, Ltd., Leeds, UK). RESULTS: 1. Diagnostic performance of the three EBV antibody assays The duplicate EBV antibody results were the same in all three assays. The EBV performance panel exhibited good and excellent agreement with the Architect, Liaison, and Platelia EBV antibody assays, with Kappa coefficients>0.6 (Table 2). However, we observed poor agreement between the performance panel and the Liaison assay for VCA IgG (kappa coefficient, 0.35; Table 2). 
One and two VCA IgM-positive EBV performance panel samples tested negative with the Architect and Platelia assays and Liaison assay, respectively; these changes affected the sensitivity of the VCA IgM tests (sensitivity, 71.4–85.7%; Table 2). Two and three panel samples tested VCA IgG-positive with the Architect and Liaison assays, respectively, but were negative according to the EBV performance panel (Table 2); this affected the specificity of the two assays. For EBNA IgG, three assays and EBV performance panel demonstrated similar results. The duplicate EBV antibody results were the same in all three assays. The EBV performance panel exhibited good and excellent agreement with the Architect, Liaison, and Platelia EBV antibody assays, with Kappa coefficients>0.6 (Table 2). However, we observed poor agreement between the performance panel and the Liaison assay for VCA IgG (kappa coefficient, 0.35; Table 2). One and two VCA IgM-positive EBV performance panel samples tested negative with the Architect and Platelia assays and Liaison assay, respectively; these changes affected the sensitivity of the VCA IgM tests (sensitivity, 71.4–85.7%; Table 2). Two and three panel samples tested VCA IgG-positive with the Architect and Liaison assays, respectively, but were negative according to the EBV performance panel (Table 2); this affected the specificity of the two assays. For EBNA IgG, three assays and EBV performance panel demonstrated similar results. 2. Comparison of the EBV antibody and DNA assays EBV DNA was detected in 74 of 279 (26.5%) patient samples (Table 3). Most EBV DNA-negative patients were also VCA IgM-negative. The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02). Forty of the 279 patients had confirmed cases of other infectious agents/diseases such as Cytomegalovirus, Mycobacterium tuberculosis, and malaria. Six of the 40 patients were positive for the EBV DNA test and all were VCA IgM negative according to the Architect, Liaison, and Platelia assays. One of the 34 EBV DNA-negative patients with another infectious disease was VCA IgM positive according to the Architect assay. The positive and negative test likelihood ratio and Youden's index according to EBV DNA test, VCA IgM test, and assay combination results were evaluated based on the clinical diagnosis of the 279 patients (Table 4). For the Architect and Liaison assays, a combination of VCA IgM and EBV DNA results was better than the EBV DNA test alone. EBV DNA was detected in 74 of 279 (26.5%) patient samples (Table 3). Most EBV DNA-negative patients were also VCA IgM-negative. The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02). Forty of the 279 patients had confirmed cases of other infectious agents/diseases such as Cytomegalovirus, Mycobacterium tuberculosis, and malaria. Six of the 40 patients were positive for the EBV DNA test and all were VCA IgM negative according to the Architect, Liaison, and Platelia assays. One of the 34 EBV DNA-negative patients with another infectious disease was VCA IgM positive according to the Architect assay. The positive and negative test likelihood ratio and Youden's index according to EBV DNA test, VCA IgM test, and assay combination results were evaluated based on the clinical diagnosis of the 279 patients (Table 4). 
For the Architect and Liaison assays, a combination of VCA IgM and EBV DNA results was better than the EBV DNA test alone. 3. Comparisons between the Architect, Liaison, and Platelia EBV antibody assays The Architect and Liaison assays exhibited kappa coefficients of 0.79 for VCA IgM, 0.80 for VCA IgG, and 0.92 for EBNA IgG, indicating excellent agreement between the two assays (Table 5). We observed excellent agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for EBNA IgG (kappa coefficients, 0.89 and 0.88, respectively). The kappa coefficients showed fair and good agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for VCA IgM and VCA IgG. The percentage of positive agreement for VCA IgM was relatively low when comparing the Liaison and Platelia assays and the Architect and Platelia assays, because the Platelia assay had a lower VCA IgM positivity rate than the other assays as well as because of the overall low prevalence of VCA IgM-positive samples. The Architect and Liaison assays exhibited kappa coefficients of 0.79 for VCA IgM, 0.80 for VCA IgG, and 0.92 for EBNA IgG, indicating excellent agreement between the two assays (Table 5). We observed excellent agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for EBNA IgG (kappa coefficients, 0.89 and 0.88, respectively). The kappa coefficients showed fair and good agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for VCA IgM and VCA IgG. The percentage of positive agreement for VCA IgM was relatively low when comparing the Liaison and Platelia assays and the Architect and Platelia assays, because the Platelia assay had a lower VCA IgM positivity rate than the other assays as well as because of the overall low prevalence of VCA IgM-positive samples. 1. Diagnostic performance of the three EBV antibody assays: The duplicate EBV antibody results were the same in all three assays. The EBV performance panel exhibited good and excellent agreement with the Architect, Liaison, and Platelia EBV antibody assays, with Kappa coefficients>0.6 (Table 2). However, we observed poor agreement between the performance panel and the Liaison assay for VCA IgG (kappa coefficient, 0.35; Table 2). One and two VCA IgM-positive EBV performance panel samples tested negative with the Architect and Platelia assays and Liaison assay, respectively; these changes affected the sensitivity of the VCA IgM tests (sensitivity, 71.4–85.7%; Table 2). Two and three panel samples tested VCA IgG-positive with the Architect and Liaison assays, respectively, but were negative according to the EBV performance panel (Table 2); this affected the specificity of the two assays. For EBNA IgG, three assays and EBV performance panel demonstrated similar results. 2. Comparison of the EBV antibody and DNA assays: EBV DNA was detected in 74 of 279 (26.5%) patient samples (Table 3). Most EBV DNA-negative patients were also VCA IgM-negative. The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02). Forty of the 279 patients had confirmed cases of other infectious agents/diseases such as Cytomegalovirus, Mycobacterium tuberculosis, and malaria. Six of the 40 patients were positive for the EBV DNA test and all were VCA IgM negative according to the Architect, Liaison, and Platelia assays. 
One of the 34 EBV DNA-negative patients with another infectious disease was VCA IgM positive according to the Architect assay. The positive and negative test likelihood ratio and Youden's index according to EBV DNA test, VCA IgM test, and assay combination results were evaluated based on the clinical diagnosis of the 279 patients (Table 4). For the Architect and Liaison assays, a combination of VCA IgM and EBV DNA results was better than the EBV DNA test alone. 3. Comparisons between the Architect, Liaison, and Platelia EBV antibody assays: The Architect and Liaison assays exhibited kappa coefficients of 0.79 for VCA IgM, 0.80 for VCA IgG, and 0.92 for EBNA IgG, indicating excellent agreement between the two assays (Table 5). We observed excellent agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for EBNA IgG (kappa coefficients, 0.89 and 0.88, respectively). The kappa coefficients showed fair and good agreement between the Architect and Platelia assays and between the Liaison and Platelia assays for VCA IgM and VCA IgG. The percentage of positive agreement for VCA IgM was relatively low when comparing the Liaison and Platelia assays and the Architect and Platelia assays, because the Platelia assay had a lower VCA IgM positivity rate than the other assays as well as because of the overall low prevalence of VCA IgM-positive samples. DISCUSSION: We evaluated the diagnostic performance of the Architect CMIA, Liaison CLIA, and Platelia ELISA EBV assays (VCA IgM, VCA IgG, and EBNA IgG antibodies) using an EBV performance panel as a reference. The EBV performance panel results indicated that the three EBV antibody assays showed excellent agreement for VCA IgM and EBNA IgG; however, the assays exhibited poorer VCA IgG detection than the performance panel. Although only a few VCA IgM-positive panel samples tested negative using the three antibody assays, false negative results could affect correct EBV infection diagnosis in patients with suspected EBV infection. Kappa agreement and sensitivity were significantly affected because of the limited number of VCA IgM-positive samples in the EBV performance panel. The bias of the performance panel towards more VCA IgM-negative and VCA IgG-positive and EBNA IgG-positive panel members could have caused different agreement results. One EBNA IgG-positive panel sample tested negative using the Liaison and Platelia assays. This panel sample had a past infection status, and the test result was close to the cut-off value based on the EBV performance panel data sheet (Liaison, 33.15 U/mL; Platelia, 1.4 ISR). For VCA IgG, the Architect and Liaison assays showed positive results for two and three VCA IgG-negative panel members, respectively. In contrast, the Platelia assay demonstrated negative results for three VCA IgG-positive panel samples. This was likely due to differences in detection sensitivity between the assay methods used in the Architect and Liaison assays (CLIA vs ELISA). As the Architect and Liaison assays showed more differences in their test results than the Platelia assay compared with the performance panel, the kappa agreement and specificity of the Architect and Liaison assays were affected more than those of the Platelia assay. The percentages of total agreement were 90.5%, 85.7%, and 85.7% between the EBV performance panel and the Architect, Liaison, and Platelia assays, respectively. 
However, false-positive and false-negative results have different importance in the field of infection diagnosis; Platelia assay false-negative cases represented the possibility of missed EBV infection diagnosis. We hypothesize that the performance panel had included more VCA IgM-positive and VCA IgG-negative samples, we would have seen excellent agreement and comparable sensitivity and specificity for the three EBV antibody assays. When we compared the EBV DNA and VCA IgM antibody results, we detected significantly higher positivity rates for the EBV DNA test. The EBV DNA detection method has a higher sensitivity than serological antibody detection methods and EBV DNA can be detected even without EBV infection, as once infected with EBV, the virus might be latently present in B lymphocytes [8]. Over 80% of clinical samples that tested negative for EBV infection using the antibody assays exhibited EBV DNA positivity. These samples were possibly collected during early infection stages (prior to VCA IgM formation); DNA positivity could also be due to EBV latency in B lymphocytes. In addition, EBV DNA detection can fail at times, despite current EBV infection, because of Herpesviridae latency in molecular testing [8]. This could explain the VCA IgM positivity and EBV DNA negativity in a number of samples in our study. An evaluation of the diagnostic performance of the EBV DNA test and EBV antibody assays alone or in combination based on the clinical diagnosis of 279 patients indicated that the performance of the VCA IgM and EBV DNA combination was superior to that of the Architect or Liaison assay alone. Although most EBV DNA-negative samples were also VCA IgM negative and EBV DNA was detected in VCA IgM positive samples, the EBV DNA and antibody tests did not always show the same results. Therefore, EBV DNA and antibody assays should be used complementarily to diagnose the EBV infection status of patients. The Architect and Liaison assays showed excellent agreement for VCA IgM, VCA IgG, and EBNA IgG. The Platelia assay showed more negative test results for VCA IgM- and VCA IgG-positive panel samples than the Architect and Liaison assays. This difference in sensitivity between the Platelia and the Architect and Liaison assays is most likely due to the fact that the Platelia assay uses ELISA, while the Architect and Liaison assays use CMIA and CLIA (with fully-automated analyzers), respectively. However, all kappa coefficients were >0.40; thus, the three EBV assays were considered to have fair and excellent agreement. The agreements between the different EBV antibody assays were comparable with the results of other studies [910111213141517]. Our study has several limitations. First, the EBV performance panel and clinical samples were not tested with the IFA method, which is considered the gold standard for VCA IgG and EBNA IgG detection. However, excellent sensitivity and specificity have been reported for chemiluminescent EBV antibody detection assays [12]. In addition, we used a commercial mixed performance panel consisting of various samples of EBV infection status to evaluate the diagnostic performance of the EBV antibody assays. The standard performance panel, which has been validated with several widely used EBV antibody assays and known EBV infection status, is a valuable evaluation tool for EBV antibody detection assays. Second, the study included only a few VCA IgM positive clinical and performance panel samples. 
Therefore, evaluating the accuracy of positive results for the VCA IgM assays was difficult and the agreement values between the different assays were likely affected. As EBV infection mostly occurs during childhood and adolescence and young patients typically do not show severe symptoms, the number of samples with new infection status was limited. Finally, we did not compare the EBV DNA test with the performance panel because of insufficient material. This may help explain the discrepancies in test results between the EBV DNA and antibody tests. Future studies including a sufficient number of samples with validated EBV infection status should be conducted to address this discrepancy and to accurately evaluate the diagnostic performance of the EBV assays. Despite these limitations, to the best of our knowledge, this is the first study to evaluate the diagnostic performance of the Architect, Liaison, and Platelia EBV antibody assays using a standard performance panel. The diagnostic performances of the three EBV antibody assays using a combination of VCA IgM, VCA IgG, and EBNA IgG antibodies were acceptable. In addition, the three EBV antibody assays showed good agreement for the clinical samples. A combination of the EBV DNA and VCA IgM antibody tests would provide better diagnostic performance for patients with suspected EBV infection.
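Youden's index and the likelihood ratios used above to compare the EBV DNA test, VCA IgM, and their combination can be sketched the same way. The text does not state how the two results were combined, so the either-positive rule below is an assumption for illustration only.

```python
# Sketch: diagnostic indices derived from sensitivity and specificity,
# computed against the clinical diagnosis as the reference standard.

def youden_index(sensitivity, specificity):
    return sensitivity + specificity - 1

def likelihood_ratios(sensitivity, specificity):
    lr_positive = sensitivity / (1 - specificity)   # assumes specificity < 1
    lr_negative = (1 - sensitivity) / specificity   # assumes specificity > 0
    return lr_positive, lr_negative

def combine_either_positive(vca_igm, ebv_dna):
    # Assumed combination rule: positive if either VCA IgM or EBV DNA is positive.
    return [m or d for m, d in zip(vca_igm, ebv_dna)]
```

Sensitivity and specificity for each test or combination would come from the same 2x2 tabulation as in the previous sketch, with the clinical diagnosis in place of the performance panel.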
Background: Epstein-Barr Virus (EBV) is one of the most prevalent causes of viral infection in humans. EBV infection stage (acute, past, or absent infection) is typically determined using a combination of assays that detect EBV-specific markers, such as IgG and IgM antibodies against the EBV viral capsid antigen (VCA) and IgG antibodies against the EBV nuclear antigen (EBNA). We compared the diagnostic performance and agreement of results between three commercial EBV antibody assays using an EBV performance panel (SeraCare Life Science, Milford, MA, USA) as a reference. Methods: EBV antibody tests of EBV VCA IgM, VCA IgG, and EBNA IgG antibodies were performed by the Architect (Abbott Diagnostics, Wiesbaden, Germany), Liaison (DiaSorin, Saluggia, Italy), and Platelia (Bio-Rad, Marnes-la-Coquette, France) assays. Agreement between the three assays was evaluated using 279 clinical samples, and EBV DNA and antibody test results were compared. Results: The three EBV antibody assays showed good diagnostic performance with good and excellent agreement with the performance panel (kappa coefficient, >0.6). The overall VCA IgM positivity rate was higher in EBV DNA-positive samples than in EBV DNA-negative samples for all three EBV antibody assays (P=0.02). The three EBV antibody assays exhibited good agreement in results for the clinical samples. Conclusions: The diagnostic performance of the three EBV antibody assays was acceptable, and they showed comparable agreement in results for the clinical samples.
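The serologic staging mentioned in the abstract can be written as a small lookup over the three qualitative results. This is a simplified sketch that mirrors the performance-panel categories listed in the Methods, not a full clinical interpretation algorithm.

```python
# Sketch: EBV infection stage from (VCA IgM, VCA IgG, EBNA IgG) qualitative results,
# following the categories of the Anti-EBV Mixed Titer Performance Panel.

STAGES = {
    (False, False, False): "seronegative",
    (True,  False, False): "early phase acute primary infection",
    (True,  True,  False): "acute primary infection",
    (False, True,  True):  "past infection",
    (True,  True,  True):  "transient infection or re-infection",
}

def ebv_stage(vca_igm, vca_igg, ebna_igg):
    return STAGES.get((vca_igm, vca_igg, ebna_igg), "indeterminate pattern")
```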
null
null
6,013
297
[ 184, 142, 252, 54, 217, 170, 206, 151 ]
12
[ "ebv", "assays", "vca", "igg", "igm", "vca igm", "platelia", "liaison", "samples", "positive" ]
[ "infection ebv assays", "dna assays ebv", "ebv antibody dna", "ebv antibody results", "ebv antibody tests" ]
null
null
null
[CONTENT] Epstein-Barr virus | Assay | Diagnostic performance | DNA | Antibody | Immunoglobulin [SUMMARY]
null
[CONTENT] Epstein-Barr virus | Assay | Diagnostic performance | DNA | Antibody | Immunoglobulin [SUMMARY]
null
[CONTENT] Epstein-Barr virus | Assay | Diagnostic performance | DNA | Antibody | Immunoglobulin [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antibodies, Viral | Child | Child, Preschool | DNA, Viral | Epstein-Barr Virus Infections | Female | Herpesvirus 4, Human | Humans | Immunoglobulin G | Immunoglobulin M | Infant | Male | Middle Aged | Reagent Kits, Diagnostic | Sensitivity and Specificity | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antibodies, Viral | Child | Child, Preschool | DNA, Viral | Epstein-Barr Virus Infections | Female | Herpesvirus 4, Human | Humans | Immunoglobulin G | Immunoglobulin M | Infant | Male | Middle Aged | Reagent Kits, Diagnostic | Sensitivity and Specificity | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antibodies, Viral | Child | Child, Preschool | DNA, Viral | Epstein-Barr Virus Infections | Female | Herpesvirus 4, Human | Humans | Immunoglobulin G | Immunoglobulin M | Infant | Male | Middle Aged | Reagent Kits, Diagnostic | Sensitivity and Specificity | Young Adult [SUMMARY]
null
[CONTENT] infection ebv assays | dna assays ebv | ebv antibody dna | ebv antibody results | ebv antibody tests [SUMMARY]
null
[CONTENT] infection ebv assays | dna assays ebv | ebv antibody dna | ebv antibody results | ebv antibody tests [SUMMARY]
null
[CONTENT] infection ebv assays | dna assays ebv | ebv antibody dna | ebv antibody results | ebv antibody tests [SUMMARY]
null
[CONTENT] ebv | assays | vca | igg | igm | vca igm | platelia | liaison | samples | positive [SUMMARY]
null
[CONTENT] ebv | assays | vca | igg | igm | vca igm | platelia | liaison | samples | positive [SUMMARY]
null
[CONTENT] ebv | assays | vca | igg | igm | vca igm | platelia | liaison | samples | positive [SUMMARY]
null
[CONTENT] ebv | infection | ebv infection | assays | antibody | infection status | commercial | status | ebv infection status | commercial ebv [SUMMARY]
null
[CONTENT] assays | ebv | vca | vca igm | igm | dna | ebv dna | architect | negative | liaison [SUMMARY]
null
[CONTENT] ebv | vca | assays | igg | vca igm | igm | dna | positive | panel | platelia [SUMMARY]
null
[CONTENT] Epstein-Barr Virus | EBV ||| EBV | IgG | IgM | EBV | VCA | IgG | EBV ||| three | EBV | EBV | SeraCare Life Science | Milford | MA | USA [SUMMARY]
null
[CONTENT] three | EBV | 0.6 ||| VCA IgM | three | EBV ||| three | EBV [SUMMARY]
null
[CONTENT] Epstein-Barr Virus | EBV ||| EBV | IgG | IgM | EBV | VCA | IgG | EBV ||| three | EBV | EBV | SeraCare Life Science | Milford | MA | USA ||| EBV VCA IgM | VCA IgG | Architect | Abbott Diagnostics | Wiesbaden | Germany | Liaison | DiaSorin | Saluggia | Italy | Platelia | Bio-Rad | France ||| three | 279 ||| ||| three | EBV | 0.6 ||| VCA IgM | three | EBV ||| three | EBV ||| three | EBV [SUMMARY]
null
Alteration of Thyroid Hormone among Patients with Ischemic Stroke visiting a Tertiary Care Hospital: A Descriptive Cross-sectional Study.
34508483
Stroke is broadly classified as cerebral infarction, intracerebral hemorrhage and subarachnoid hemorrhage. Neuroendocrine profile is altered in acute ischemic stroke and there is a link between hypothyroidism and atherosclerosis which in turn may lead to stroke. The objective of this study was to find out the prevalence of alteration of thyroid hormones in patients with ischemic stroke in a tertiary care center.
INTRODUCTION
This descriptive cross-sectional study was conducted from June to December 2019 in a tertiary care center. Ethical approval was taken from the Institutional Review Board of the National Academy of Medical Sciences (reference number: IM 175). Patients with a diagnosis of stroke, without evidence of a cardioembolic source, a history of liver disease, renal failure, or thyroid disease, and without use of thyroid supplementation within 180 days prior to the event, were included. Convenience sampling was done. Data were entered in Microsoft Excel and analyzed using the Statistical Package for the Social Sciences version 22. Point estimate at 90% Confidence Interval was calculated along with frequency and percentage for binary data.
METHODS
The prevalence of altered thyroid levels among the 73 patients was 13 (17.8%) (90% Confidence Interval = 10.44-25.16). Among them, 11 (15.1%) were hypothyroid and 2 (2.7%) were hyperthyroid. By severity, of the 10 subclinical hypothyroid cases, grade IA was seen in 7 (70%) and grade IB in 3 (30%).
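The 90% confidence interval reported above for the 17.8% prevalence is reproduced, within rounding, by a normal-approximation (Wald) interval; whether the authors used exactly this method is an assumption.

```python
# Sketch: 90% Wald confidence interval for the prevalence of altered thyroid levels.
from math import sqrt

cases, n, z = 13, 73, 1.645              # 13/73 altered; z for a 90% CI
p = cases / n                            # about 0.178 (17.8%)
half_width = z * sqrt(p * (1 - p) / n)
print(f"{p:.1%} (90% CI {p - half_width:.2%} to {p + half_width:.2%})")
# prints roughly 10.44% to 25.17%, matching the reported 10.44-25.16 within rounding
```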
RESULTS
The prevalence of altered thyroid levels among patients undergoing ischemic stroke was similar to the findings of other international studies.
CONCLUSIONS
[ "Brain Ischemia", "Cross-Sectional Studies", "Humans", "Ischemic Stroke", "Stroke", "Tertiary Care Centers", "Thyroid Hormones" ]
9107842
INTRODUCTION
Stroke is characterized as a neurological deficit including cerebral infarction, intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH) and is a major cause of disability and death worldwide.1 About 1,795,000 people experience a new or recurrent stroke each year.2 In developing countries, a majority of the stroke burden is observed, accounting for 75.2% of all stroke-related deaths and 81.0% of the associated DALYs lost.3 In Nepal, it was estimated that around 50,000 patients have stroke per year with annual death of around 15000. Of all the ischemic strokes, 77.4% are atherothrombotic occlusion of cerebral blood vessels.4 Thyroid dysfunction has been associated with cerebrovascular accidents (CVAs).5 Hyperthyroidism may cause ischemic stroke due to its relation to atrial fibrillation (AF).6 The main predisposing factors for stroke are hypertension, cigarette smoking, alcohol consumption and diabetes.4 The aim of this study was to find out the prevalence of altered thyroid levels among patients with ischemic CVAs in a tertiary care hospital.
METHODS
This descriptive cross-sectional study was conducted from June to December 2019 in National Academy of Medical Sciences (NAMS), Bir Hospital, Kathmandu. Ethical approval was taken from the Institutional Review Board of NAMS (reference number: IM 175). Convenience sampling was done and the sample size of cases to be enrolled was calculated using the formula n = Z² × p × q / e² = (1.645)² × 0.5 × (1 − 0.5) / (0.1)² = 67.65, where n = minimum required sample size, Z = 1.645 at 90% Confidence Interval (CI), p = prevalence taken as 50% for maximum sample size, q = 1 − p, and e = margin of error (10%). Hence, the required sample size was 67.65. Adding a 10% nonresponse rate, we enrolled 73 patients in our study. Patients with clinically diagnosed stroke, presenting within 48 hours of onset of symptoms with CT or MRI findings, were included in the study. Patients with use of thyroid supplements within the last 180 days, a history of liver disease, renal failure, or known thyroid disease, and those who did not give consent were excluded from the study. Informed written consent was taken either in Nepali or in English, whichever language the patient felt comfortable with. Confidentiality was maintained to the utmost. For those who were not able to give consent because of their clinical state, informed consent was taken from a close relative. The proforma was translated into Nepali verbally if needed, depending on the case. For diagnosing the type of stroke, every patient underwent neuroimaging with non-contrast CT of the head. Cerebral infarction was diagnosed if neurological deficits were accompanied by a hypodense lesion >15 mm in diameter in an appropriate area on a cranial CT scan. Lacunar infarction was diagnosed if the patient presented with a pure motor stroke, a pure sensory stroke, ataxic hemiparesis or a sensorimotor stroke in the absence of a visual field defect and evidence of higher cerebral dysfunction, with a hypodense lesion of <15 mm in diameter or a normal CT scan. In the laboratory, TSH was measured on the fully automatic VITROS ECi/ECiQ Immunodiagnostic System (VITROS TSH) using a chemiluminescent immunometric assay with a detection range from 0.01 to 100 mIU/L. TSH values between 0.46 and 4.68 mIU/L were considered the normal reference range (euthyroid). Specimens with TSH values above or below the normal reference range were tested for free thyroxine (FT4). FT4 was measured using a competitive chemiluminescent assay with a detection range of 0.1-12.0 ng/dL. FT4 concentrations within 0.70-2.19 ng/dL were considered within the normal reference range. All samples included in the study met quality control criteria. Samples with TSH values >4.69 mIU/L and normal FT4 levels were classified as indicating subclinical hypothyroidism (SCH). The collected data were stored and analyzed with the Statistical Package for the Social Sciences (SPSS) version 22.0 for Windows. Point estimate at 90% Confidence Interval was calculated along with frequency and percentage for binary data.
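Written out, the sample-size calculation above is the standard single-proportion formula with the values stated in the Methods:

```latex
n = \frac{Z^{2}\,p\,q}{e^{2}}
  = \frac{(1.645)^{2} \times 0.5 \times (1 - 0.5)}{(0.1)^{2}}
  \approx 67.65
```

with Z = 1.645 for a 90% confidence level, p = q = 0.5 (the prevalence that maximizes the required sample), and e = 0.10; the 10% non-response allowance described above then led to the 73 patients enrolled.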
RESULTS
The prevalence of altered thyroid levels among patients with ischemic stroke was 13 (17.8%); among them, 11 (15.1%) were hypothyroid and 2 (2.7%) were hyperthyroid. The minimum age of the patients in the study was 26 years and the maximum age was 115 years. The mean age at presentation of cerebral infarction in this study was 63.30±19.217 years.\nAmong the total 73 cases, 49 were male and 24 were female. Among the 49 males, 37 were euthyroid, 11 were hypothyroid and one was hyperthyroid. Among the 24 females, 23 were euthyroid and one was hyperthyroid. On further categorization of altered thyroid status, 10 (13.7%) cases were subclinical hypothyroidism, 1 (1.4%) case was overt hypothyroidism and 2 (2.7%) cases had subclinical hyperthyroidism. With regard to severity, of the total 10 cases of subclinical hypothyroidism, 70% had grade IA, while the remaining 30% had grade IB (Figure 1). In the study, 49 (67.12%) of the ischemic stroke patients were male and 24 (32.88%) were female.\nAmong euthyroid cases the mean total cholesterol was 139.55 mg/dl, which was higher than the mean cholesterol level in overt hypothyroid cases (110 mg/dl); thus, the mean total cholesterol level was lower among overt hypothyroid cases than among euthyroid cases. Mean TC was highest among subclinical hyperthyroid cases. Mean LDLc was highest among euthyroid cases, with a mean value of 73.94 mg/dl, and lowest among overt hypothyroid cases, with a mean value of 41.00 mg/dl. TG was highest among overt hypothyroid cases, with a mean value of 199 mg/dl. HDLc was highest among euthyroid cases, with a mean value of 43.42 mg/dl. Subclinical hypothyroid cases had a mean HDL value of 39.90 mg/dl (Table 1).
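The euthyroid, subclinical, and overt categories reported in this section follow from the TSH and FT4 reference ranges given in the Methods. The sketch below encodes that classification; the overt-hypothyroid and hyperthyroid branches are the conventional reading of the same cut-offs and are assumptions, since the Methods only spell out the subclinical-hypothyroidism rule explicitly.

```python
# Sketch: thyroid status from TSH (mIU/L) and free T4 (ng/dL), using the
# reference ranges in the Methods (TSH 0.46-4.68 mIU/L, FT4 0.70-2.19 ng/dL).

def thyroid_status(tsh, ft4):
    if 0.46 <= tsh <= 4.68:
        return "euthyroid"
    if tsh > 4.68:  # raised TSH
        return "subclinical hypothyroidism" if ft4 >= 0.70 else "overt hypothyroidism"
    # suppressed TSH
    return "subclinical hyperthyroidism" if ft4 <= 2.19 else "overt hyperthyroidism"
```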
CONCLUSIONS
The prevalence of altered thyroid levels among patients with ischemic stroke was similar to the findings of other international studies. The prevalence of stroke is increasing in developing countries like Nepal as there is a rise in non-communicable diseases in this part of the world. There are various risk factors for stroke, both modifiable and non-modifiable, so their early identification can have a great impact on reducing the morbidity, mortality and burden associated with stroke.
[]
[]
[]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION", "CONCLUSIONS" ]
[ "Stroke is characterized as a neurological deficit including cerebral infarction, intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH) and is a major cause of disability and death worldwide.1 About 1,795,000 people experience a new or recurrent stroke each year.2 In developing countries, a majority of the stroke burden is observed, accounting for 75.2% of all stroke-related deaths and 81.0% of the associated DALYs lost.3 In Nepal, it was estimated that around 50,000 patients have stroke per year with annual death of around 15000. Of all the ischemic strokes, 77.4% are atherothrombotic occlusion of cerebral blood vessels.4\nThyroid dysfunction has been associated with cerebrovascular accidents (CVAs).5 Hyperthyroidism may cause ischemic stroke due to its relation to atrial fibrillation (AF).6 The main predisposing factors for stroke are hypertension, cigarette smoking, alcohol consumption and diabetes.4\nThe aim of this study was to find out the prevalence of altered thyroid levels among patients with ischemic CVAs in a tertiary care hospital.", "This descriptive cross-sectional study was conducted from June to December 2019 in National Academy of Medical Sciences (NAMS), Bir Hospital, Kathmandu. Ethical approval from Institutional review board of NAMS (reference number. IM 175). Convenience sampling was done and the sample size of cases to be enrolled was calculated by using the formula,\nn = Z2 × p × q / e2\n  = (1.645)2 × (0.5) × (1-0.5) / (0.1)2\n  = 67.65\nWhere,\nn = minimum required sample size,\nZ = 1.645 at 90% Confidence Interval (CI),\np = prevalence taken as 50% for maximum sample size,\nq = 1-p\ne = margin of error, 10%\nHence, the required sample size was 67.65. Adding a 10% nonresponse rate, we enrolled 73 patients in our study. Patients with clinically diagnosed stroke, presenting within 48 hours of onset of symptoms with CT or MRI findings were included in the study. Patients with prior use of thyroid supplements recently within last 180 days and with a history of liver disease, renal failure and known thyroid disease and who don't give consent were excluded from the study. Informed written consent was taken either in Nepali or in English language, in whichever they felt comfortable. Confidentiality was maintained to the utmost. For those who are not able to give consent because of their clinical state, informed consent was taken from their close relative.\nProforma was translated in Nepali verbally if needed depending upon the cases. For diagnosing the type of stroke, every patient underwent neuroimaging with non-contrast CT of the head. Cerebral infarction was diagnosed if neurological deficits are accompanied by a hypo dense lesion >15 mm in diameter in an appropriate area on a cranial CT scan. Lacunar infarction were diagnosed if the patient presented with a pure motor stroke, a pure sensory stroke, ataxic hemiparesis or a sensorimotor stroke in the absence of a visual field defect and evidence of higher cerebral dysfunction and a hypo dense lesion of < 15 mm in diameter or normal CT scan. In the laboratory, fully automatic analyzer VITROSECi/ECiQ Immunodiagnostic Systems, VITROSTSH was measured using a chemiluminescent immunometric assay with a detection range from 0.01 to 100 m IU/L. TSH values between 0.46 and 4.68 m IU/L were considered the normal reference range (euthyroid). Specimens with TSH values above or below the normal reference range were tested for free thyroxine (FT4). 
FT4 was measured using a competitive chemiluminescent assay with a detection range of 0.1-12.0 ng/dL. FT4 concentrations of 0.70-2.19 ng/dL were considered within the normal reference range. All samples included in the study met quality control criteria. Samples with TSH values > 4.69 mIU/L and normal FT4 levels were classified as indicating subclinical hypothyroidism (SCH).\nThe collected data were stored and analyzed with the Statistical Package for the Social Sciences (SPSS) version 22.0 for Windows. The point estimate at 90% Confidence Interval was calculated along with frequency and percentage for binary data.", "The prevalence of altered thyroid levels among patients with ischemic stroke was 13 (17.8%); among them, 11 (15.1%) were hypothyroid and 2 (2.7%) were hyperthyroid. The minimum age of patients in the study was 26 years and the maximum age was 115 years. The mean age of presentation of cerebral infarction in this study was 63.30±19.217 years.\nAmong the total 73 cases, 49 were male and 24 were female. Among the 49 males, 37 were euthyroid, 11 were hypothyroid and one was hyperthyroid. Among the 24 females, 23 were euthyroid and one was hyperthyroid. On further categorization of altered thyroid status, 10 (13.7%) cases had subclinical hypothyroidism, 1 (1.4%) had overt hypothyroidism and 2 (2.7%) had subclinical hyperthyroidism. With regard to the severity of subclinical hypothyroidism, out of the total 10 cases, 70% had grade IA and the remaining 30% had grade IB severity (Figure 1). In the study, 49 (67.12%) of the ischemic stroke cases were male and 24 (32.88%) were female.\nAmong euthyroid cases, mean total cholesterol was 139.55 mg/dl, which was higher than the mean cholesterol level of 110 mg/dl in overt hypothyroid cases; mean total cholesterol was thus lower among overt hypothyroid cases than among euthyroid cases. Mean TC was highest among subclinical hyperthyroid cases. Mean LDLc was highest among euthyroid cases (mean 73.94 mg/dl) and lowest among overt hypothyroid cases (mean 41.00 mg/dl). TG was highest among overt hypothyroid cases (mean 199 mg/dl). HDLc was highest among euthyroid cases (mean 43.42 mg/dl); subclinical hypothyroid cases had a mean HDL value of 39.90 mg/dl (Table 1).", "Stroke is a form of acute stress with detrimental effects on various neurophysiological pathways.7 Hypothyroidism is a possible risk factor for stroke.8 A number of comorbidities have been associated with increased mortality in acute stroke patients. It is not known whether hypothyroidism (either clinical or subclinical) affects outcome in patients with acute cerebrovascular disease. A neuroprotective role of hypothyroidism has been shown in acute stroke patients.8 Low T3 appeared to be associated with stroke severity and short-term outcome.9 In addition, strokes of undetermined etiology accounted for one-third to one-quarter of ischemic strokes among young people, and among cases of stroke of undetermined etiology hyperthyroidism could be the underlying cause.10,11 Thus, TFT screening is useful in patients with acute ischemic stroke (IS) because thyroid disorders are known risk factors for cerebrovascular diseases.6 In our study, we observed that 17.8% of the cases had thyroid dysfunction. Our finding is similar to the findings of the retrospective study done by Xu XY, et al. 
in 893 patients with acute ischemic stroke, in which 19% of the cases had abnormal thyroid function tests, with at least one of the thyroid-related hormones below or above the normal range.12 Unknown overt or subclinical hyperthyroidism is associated with cardio-embolic stroke.13 Thyroid dysfunction in our study was lower than in the study done by Dimopolou, et al. in 33 critically ill patients under mechanical ventilation due to acute stroke, which concluded that 36% of the cases had thyroid dysfunction.14 In our study we found that 1.4% of the cases had overt hypothyroidism; this is comparable to the study done by O'keefe LM, et al. in 129 patients with acute ischemic stroke, in which 2.32% of the cases had hypothyroidism.15 In our study, 13.7% of the population had subclinical hypothyroidism.\nOur result is similar to that of the study done by Pande, et al. to establish a relation between the variation in thyroid profile and ischemic CVAs in 75 patients within 48 hours of the event, which concluded that 12% of the cases had subclinical hypothyroidism.7 A prospective cohort study done by Chakler L, et al. on 47,573 adults (3,451 with subclinical hypothyroidism) from 17 cohorts concluded that the risk of stroke was increased in subjects younger than 65 years and in those with higher TSH concentrations. Subclinical hypothyroidism (SCH) is postulated to increase stroke risk via atherogenic changes associated with abnormal thyroid function.16 Another study concluded that hyperthyroidism is associated with an increased risk for ischemic stroke among young adults.11\nIn our study, HDLc was low (<40 mg/dl) in all the cases with thyroid dysfunction (18% of cases); this proportion is much lower than in the study done by Pokharel, et al. among 281 stroke patients, in which 45% had HDLc levels below 40 mg/dl.17 TG was elevated (>150 mg/dl) in the cases with overt hypothyroidism (1.4%); this is consistent with the study done by Abrams JJ, et al., which concluded that TG metabolism is not grossly deranged in hypothyroidism.18 In our study we found that stroke occurred more commonly among males (67.12%) than females (32.88%). Women have a lower stroke incidence than men, which might be due to positive effects of estrogen on the cerebral circulation; a lifetime exposure to ovarian estrogens may protect against ischemic stroke.19 Seventy-two percent of the cases in our study were smokers. Wang, et al. reported that 48% of patients with stroke smoked.20 Tsai, et al. reported that 38% of the cases with ischemic stroke were smokers.21\nOur results might not be generalizable to all stroke patients, as we recruited study participants from a single hospital only. Many patients presented to us more than 48 hours after the occurrence of the stroke event, so we could not involve them in our study.", "The prevalence of altered thyroid levels among patients with ischemic stroke was similar to the findings of other international studies. The prevalence of stroke is increasing in developing countries like Nepal as there is a rise in non-communicable diseases in this part of the world. There are various risk factors for stroke, both modifiable and non-modifiable. Early identification of these risk factors can therefore have a great impact on reducing the morbidity, mortality and burden associated with stroke." ]
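As an aside on the sample-size formula quoted in the methods text above, the calculation can be checked with a few lines of code. A minimal sketch in Python, using only the Z, p and e values reported in the text (the function name is illustrative, not from the source):

```python
import math

def minimum_sample_size(z: float, p: float, e: float) -> float:
    """n = Z^2 * p * (1 - p) / e^2 for estimating a single proportion."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

n = minimum_sample_size(z=1.645, p=0.5, e=0.10)  # 90% CI, p = 50%, 10% margin of error
print(round(n, 2))                               # 67.65, as reported in the methods
print(math.ceil(n * 1.10))                       # ~75 after a 10% nonresponse allowance; the study enrolled 73
```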
[ "intro", "methods", "results", "discussion", "conclusions" ]
[ "\nhyperthyroidism\n", "\nhypothyroidism\n", "\nischemic stroke\n" ]
INTRODUCTION: Stroke is characterized as a neurological deficit including cerebral infarction, intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH) and is a major cause of disability and death worldwide.1 About 1,795,000 people experience a new or recurrent stroke each year.2 In developing countries, a majority of the stroke burden is observed, accounting for 75.2% of all stroke-related deaths and 81.0% of the associated DALYs lost.3 In Nepal, it was estimated that around 50,000 patients have stroke per year with annual death of around 15000. Of all the ischemic strokes, 77.4% are atherothrombotic occlusion of cerebral blood vessels.4 Thyroid dysfunction has been associated with cerebrovascular accidents (CVAs).5 Hyperthyroidism may cause ischemic stroke due to its relation to atrial fibrillation (AF).6 The main predisposing factors for stroke are hypertension, cigarette smoking, alcohol consumption and diabetes.4 The aim of this study was to find out the prevalence of altered thyroid levels among patients with ischemic CVAs in a tertiary care hospital. METHODS: This descriptive cross-sectional study was conducted from June to December 2019 in National Academy of Medical Sciences (NAMS), Bir Hospital, Kathmandu. Ethical approval from Institutional review board of NAMS (reference number. IM 175). Convenience sampling was done and the sample size of cases to be enrolled was calculated by using the formula, n = Z2 × p × q / e2   = (1.645)2 × (0.5) × (1-0.5) / (0.1)2   = 67.65 Where, n = minimum required sample size, Z = 1.645 at 90% Confidence Interval (CI), p = prevalence taken as 50% for maximum sample size, q = 1-p e = margin of error, 10% Hence, the required sample size was 67.65. Adding a 10% nonresponse rate, we enrolled 73 patients in our study. Patients with clinically diagnosed stroke, presenting within 48 hours of onset of symptoms with CT or MRI findings were included in the study. Patients with prior use of thyroid supplements recently within last 180 days and with a history of liver disease, renal failure and known thyroid disease and who don't give consent were excluded from the study. Informed written consent was taken either in Nepali or in English language, in whichever they felt comfortable. Confidentiality was maintained to the utmost. For those who are not able to give consent because of their clinical state, informed consent was taken from their close relative. Proforma was translated in Nepali verbally if needed depending upon the cases. For diagnosing the type of stroke, every patient underwent neuroimaging with non-contrast CT of the head. Cerebral infarction was diagnosed if neurological deficits are accompanied by a hypo dense lesion >15 mm in diameter in an appropriate area on a cranial CT scan. Lacunar infarction were diagnosed if the patient presented with a pure motor stroke, a pure sensory stroke, ataxic hemiparesis or a sensorimotor stroke in the absence of a visual field defect and evidence of higher cerebral dysfunction and a hypo dense lesion of < 15 mm in diameter or normal CT scan. In the laboratory, fully automatic analyzer VITROSECi/ECiQ Immunodiagnostic Systems, VITROSTSH was measured using a chemiluminescent immunometric assay with a detection range from 0.01 to 100 m IU/L. TSH values between 0.46 and 4.68 m IU/L were considered the normal reference range (euthyroid). Specimens with TSH values above or below the normal reference range were tested for free thyroxine (FT4). 
FT4 was measured using a competitive chemiluminescent assay with a detection range of 0.1-12.0 ng/dL. FT4 concentrations within 0.70-2.19 ng/dL was within the normal reference range. All samples included in the study met quality control criteria. Samples with TSH values > 4.69 m IU/L and normal FT4 levels were classified as indicating SCH. The collected data was stored and analyzed with the Statistical Package for the Social Sciences (SPSS) version 22.0 for windows. Point estimate at 90% Confidence Interval was calculated along with frequency and percentage for binary data. RESULTS: The prevalence of altered in thyroid levels among patients undergoing ischemic stroke was 13 (17.8%) and among them 11 (15.1%) were hypothyroid and 2 (2.7%) were hyperthyroid. Minimum age of patient in the study was 26 years and maximum age was 115 years. The mean age of presentation of cerebral infarction in this study was 63.30±19.217. Among the total 73 cases, 49 were male and 24 were female. Among 49 males, 37 were euthyroid, 11 were hypothyroid and one male was hyperthyroid. Among 24 females, 23 were euthyroid and one female was hyperthyroid. On further categorization of altered thyroid status, 10 (13.7%) were subclinical hypothyroidism, 1 (1.4%) cases were of overt hypothyroidism and 2 (2.7%) cases had subclinical hyperthyroidism. With regards to the severity of subclinical hypothyroidism, out of the total 10 cases of subclinical hypothyroidism, 70% had grade IA, while the remaining 30% had grade IB severity of the status (Figure 1). In the study 49 (67.12%) of ischemic stroke were male and 24 (32.88%) were female. Among euthyroid cases mean total cholesterol was 139.55 mg/dl which was higher than the mean cholesterol level in overt hypothyroid cases with 110 mg/dl. But mean total cholesterol level was lower among overt hypothyroid cases compared to euthyroid and overt hypothyroid cases. Mean TC was highest among subclinical hyperthyroid cases. Mean LDLc was highest among euthyroid cases with mean value of 73.94 mg/dl and lowest among overt hypothyroid cases with mean value of 41.00 mg/dl. TG was highest among overt hypothyroid cases with mean value of 199 mg/dl. HDLc was highest among euthyroid cases with mean value of 43.42 mg/dl. Subclinical hypothyroid cases had mean HDL value of 39.90 mg/dl (Table 1). DISCUSSION: Stroke is a form of acute stress with detrimental effect on various neurophysiological pathways.7 Hypothyroidism is a possible risk factor for stroke.8 A number of comorbidities have been associated with increased mortality in acute stroke patients. It is not known whether hypothyroidism (either clinical or subclinical) affects outcome in patients with acute cerebrovascular disease. A neuroprotective role of hypothyroidism has been shown in acute stroke patients.8 Low T3 appeared to be associated with stroke severity and short-term outcome.9 In addition, strokes of undetermined etiology accounted for one-third to one-quarter of ischemic strokes among young people and among cases of stroke of undetermined etiology hyperthyroidism could be the underlying cause.10,11 So, TFT screening is useful in patients with acute ischemic stroke (IS) because thyroid disorders are known risk factors for cerebrovascular diseases.6 In our study we observed 17.8% of the cases had thyroid dysfunction. Our finding is similar to the findings of the retrospective study done by Xu XY , et al. 
in 893 patients with acute ischemic stroke in which 19% of the cases had abnormal thyroid function tests with at least one of the thyroid related hormones below or above the normal ranges.12 Unknown overt or subclinical hyperthyroidism is associated with cardio-embolic stroke.13 Thyroid dysfunction in our study was lower than the study done by Dimopolou, et al. in 33 critically ill patients under mechanical ventilation due to acute stroke; study concluded 36% of the cases had thyroid dysfunction.14 In our study we found that 1.4% of the cases had overt hypothyroidism, this is comparable to the study done by O'keefe L.M , et al. in 129 patients with acute ischemic stroke. 2.32 % of the cases in the study had hypothyroidism.1513.7 % of our study population had subclinical hypothyroidism. Our result is similar to the study done by Pande, et al. to establish a relation between the variation in thyroid profile and ischemic CVAs in 75 patients within 48 hours of the event, which concluded that 12% of the cases had subclinical hypothyroidism.7 Prospective cohort study done by Chakler L, et al. on 47,573 adults (3451 subclinical hypothyroidism) from 17 cohorts concluded an increased risk of stroke on subjects younger than 65 years and those with higher TSH concentrations. Subclinical hypothyroidism (SCH) is postulated to increase stroke risk via atherogenic changes associated with abnormal thyroid function.16 The study concluded that hyperthyroidism is associated with an increased risk for ischemic stroke among young adults.11 In our study HDLc was low (<40 mg/dl) in all the case with thyroid dysfunction (18% cases) this is much lower compared to the study done by Pokharel , et al., among 281 stroke patients, in which 45% had their HDLc levels below 40 mg/dl.17 TG was elevated (>150 mg/dl) in cases with overt hypothyroidism (1.4%) this is consistent with the study done by Abrams JJ , et al. which concluded TG metabolism is not grossly deranged in hypothyroidism.18 In our study we found that stroke occurred more commonly among males (67.12%) compared to females (32.88%). Women have lower stroke incidence than men, it might be due to positive effects of estrogen on the cerebral circulation. A lifetime exposure to ovarian estrogens may protect against ischemic stroke.19 Seventy-two percent of the cases in our study were smokers. Wang, et al. reported that 48% of patients with stroke smoked.20 Tsai, et al. reported 38% of the cases with ischemic stroke were smokers.21 Our results might not be generalized to all stroke patients as we recruited study participants from a single hospital only. As many of our patients presented to us after 48 hours of the occurrence of stroke events, we could not involve them in our study. CONCLUSIONS: The prevalence of altered in thyroid levels among patients undergoing ischemic stroke was similar to the findings of other international studies. The prevalence of stroke is increasing in developing countries like Nepal as there is rise in non-communicable diseases in this part of world. There are various risk factors for stroke which are modifiable and non-modifiable. So, early identification of these risk factors can have great impact in reduction of morbidity, mortality and burden associated with stroke.
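The TSH/FT4 decision rules described in the methods can be expressed compactly in code. A minimal sketch in Python using the reference ranges quoted above (TSH 0.46-4.68 mIU/L, FT4 0.70-2.19 ng/dL); the overt-hypothyroid and hyperthyroid branches follow the usual convention and are an assumption, since the text only spells out the euthyroid and subclinical-hypothyroid criteria:

```python
from typing import Optional

TSH_RANGE = (0.46, 4.68)   # mIU/L, euthyroid reference range from the methods
FT4_RANGE = (0.70, 2.19)   # ng/dL, normal reference range from the methods

def classify_thyroid_status(tsh: float, ft4: Optional[float] = None) -> str:
    """Classify thyroid status from TSH and, when TSH is abnormal, free T4."""
    if TSH_RANGE[0] <= tsh <= TSH_RANGE[1]:
        return "euthyroid"
    if ft4 is None:
        return "abnormal TSH - FT4 needed"
    ft4_normal = FT4_RANGE[0] <= ft4 <= FT4_RANGE[1]
    if tsh > TSH_RANGE[1]:
        return "subclinical hypothyroidism" if ft4_normal else "overt hypothyroidism"
    return "subclinical hyperthyroidism" if ft4_normal else "overt hyperthyroidism"

print(classify_thyroid_status(tsh=6.2, ft4=1.1))  # subclinical hypothyroidism
```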
Background: Stroke is broadly classified as cerebral infarction, intracerebral hemorrhage and subarachnoid hemorrhage. Neuroendocrine profile is altered in acute ischemic stroke and there is a link between hypothyroidism and atherosclerosis which in turn may lead to stroke. The objective of this study was to find out the prevalence of alteration of thyroid hormones in patients with ischemic stroke in a tertiary care center. Methods: This descriptive cross-sectional study was conducted from June to December 2019 in a tertiary care center. Ethical approval was taken from Institutional review board of National Academy of Medical Sciences (reference number: IM 175). Patients with a diagnosis of stroke, without evidence of cardioembolic source, history of liver disease, renal failure and thyroid disease and who do not use thyroidal supplementation within 180 days prior the event were included. Convenience sampling was done. Data was entered in Microsoft Excel and analyzed using Statistical Package for the Social Sciences version 22. Point estimate at 90% Confidence Interval was calculated along with frequency and percentage for binary data. Results: The prevalence of altered thyroid levels among 73 patients was 13 (17.8%) (90% Confidence Interval= 10.44-25.16). Among them 11 (15.1%) were hypothyroid and 2 (2.7%) were hyperthyroid. Among severity of hypothyroid cases, subclinical hypothyroidism grade IA was seen in 51 (70%), subclinical hypothyroidism grade IB was seen in 22 (30%). Conclusions: The prevalence of altered thyroid levels among patients undergoing ischemic stroke was similar to the findings of other international studies.
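The 90% confidence interval reported for the prevalence in the abstract above can be reproduced with a normal-approximation (Wald) interval; that this exact method was used is an assumption inferred from the reported bounds. A minimal sketch in Python:

```python
import math

def wald_ci_percent(k: int, n: int, z: float = 1.645) -> tuple:
    """Normal-approximation CI for a proportion k/n, returned in percent."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return 100 * (p - half), 100 * (p + half)

low, high = wald_ci_percent(13, 73)  # 13 of 73 patients with altered thyroid levels
print(f"{100 * 13 / 73:.1f}% (90% CI {low:.2f} to {high:.2f})")  # 17.8% (10.44 to 25.17), close to the reported 10.44-25.16
```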
INTRODUCTION: Stroke is characterized as a neurological deficit including cerebral infarction, intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH) and is a major cause of disability and death worldwide.1 About 1,795,000 people experience a new or recurrent stroke each year.2 In developing countries, a majority of the stroke burden is observed, accounting for 75.2% of all stroke-related deaths and 81.0% of the associated DALYs lost.3 In Nepal, it was estimated that around 50,000 patients have a stroke per year, with around 15,000 deaths annually. Of all the ischemic strokes, 77.4% are due to atherothrombotic occlusion of cerebral blood vessels.4 Thyroid dysfunction has been associated with cerebrovascular accidents (CVAs).5 Hyperthyroidism may cause ischemic stroke due to its relation to atrial fibrillation (AF).6 The main predisposing factors for stroke are hypertension, cigarette smoking, alcohol consumption and diabetes.4 The aim of this study was to find out the prevalence of altered thyroid levels among patients with ischemic CVAs in a tertiary care hospital. CONCLUSIONS: The prevalence of altered thyroid levels among patients with ischemic stroke was similar to the findings of other international studies. The prevalence of stroke is increasing in developing countries like Nepal as there is a rise in non-communicable diseases in this part of the world. There are various risk factors for stroke, both modifiable and non-modifiable. Early identification of these risk factors can therefore have a great impact on reducing the morbidity, mortality and burden associated with stroke.
Background: Stroke is broadly classified as cerebral infarction, intracerebral hemorrhage and subarachnoid hemorrhage. Neuroendocrine profile is altered in acute ischemic stroke and there is a link between hypothyroidism and atherosclerosis which in turn may lead to stroke. The objective of this study was to find out the prevalence of alteration of thyroid hormones in patients with ischemic stroke in a tertiary care center. Methods: This descriptive cross-sectional study was conducted from June to December 2019 in a tertiary care center. Ethical approval was taken from Institutional review board of National Academy of Medical Sciences (reference number: IM 175). Patients with a diagnosis of stroke, without evidence of cardioembolic source, history of liver disease, renal failure and thyroid disease and who do not use thyroidal supplementation within 180 days prior the event were included. Convenience sampling was done. Data was entered in Microsoft Excel and analyzed using Statistical Package for the Social Sciences version 22. Point estimate at 90% Confidence Interval was calculated along with frequency and percentage for binary data. Results: The prevalence of altered thyroid levels among 73 patients was 13 (17.8%) (90% Confidence Interval= 10.44-25.16). Among them 11 (15.1%) were hypothyroid and 2 (2.7%) were hyperthyroid. Among severity of hypothyroid cases, subclinical hypothyroidism grade IA was seen in 51 (70%), subclinical hypothyroidism grade IB was seen in 22 (30%). Conclusions: The prevalence of altered thyroid levels among patients undergoing ischemic stroke was similar to the findings of other international studies.
1,919
298
[]
5
[ "stroke", "study", "cases", "patients", "thyroid", "hypothyroidism", "ischemic", "dl", "subclinical", "mean" ]
[ "stroke related deaths", "stroke risk atherogenic", "prevalence stroke increasing", "stroke 13 thyroid", "ischemic stroke thyroid" ]
[CONTENT] hyperthyroidism | hypothyroidism | ischemic stroke [SUMMARY]
[CONTENT] hyperthyroidism | hypothyroidism | ischemic stroke [SUMMARY]
[CONTENT] hyperthyroidism | hypothyroidism | ischemic stroke [SUMMARY]
[CONTENT] hyperthyroidism | hypothyroidism | ischemic stroke [SUMMARY]
[CONTENT] hyperthyroidism | hypothyroidism | ischemic stroke [SUMMARY]
[CONTENT] hyperthyroidism | hypothyroidism | ischemic stroke [SUMMARY]
[CONTENT] Brain Ischemia | Cross-Sectional Studies | Humans | Ischemic Stroke | Stroke | Tertiary Care Centers | Thyroid Hormones [SUMMARY]
[CONTENT] Brain Ischemia | Cross-Sectional Studies | Humans | Ischemic Stroke | Stroke | Tertiary Care Centers | Thyroid Hormones [SUMMARY]
[CONTENT] Brain Ischemia | Cross-Sectional Studies | Humans | Ischemic Stroke | Stroke | Tertiary Care Centers | Thyroid Hormones [SUMMARY]
[CONTENT] Brain Ischemia | Cross-Sectional Studies | Humans | Ischemic Stroke | Stroke | Tertiary Care Centers | Thyroid Hormones [SUMMARY]
[CONTENT] Brain Ischemia | Cross-Sectional Studies | Humans | Ischemic Stroke | Stroke | Tertiary Care Centers | Thyroid Hormones [SUMMARY]
[CONTENT] Brain Ischemia | Cross-Sectional Studies | Humans | Ischemic Stroke | Stroke | Tertiary Care Centers | Thyroid Hormones [SUMMARY]
[CONTENT] stroke related deaths | stroke risk atherogenic | prevalence stroke increasing | stroke 13 thyroid | ischemic stroke thyroid [SUMMARY]
[CONTENT] stroke related deaths | stroke risk atherogenic | prevalence stroke increasing | stroke 13 thyroid | ischemic stroke thyroid [SUMMARY]
[CONTENT] stroke related deaths | stroke risk atherogenic | prevalence stroke increasing | stroke 13 thyroid | ischemic stroke thyroid [SUMMARY]
[CONTENT] stroke related deaths | stroke risk atherogenic | prevalence stroke increasing | stroke 13 thyroid | ischemic stroke thyroid [SUMMARY]
[CONTENT] stroke related deaths | stroke risk atherogenic | prevalence stroke increasing | stroke 13 thyroid | ischemic stroke thyroid [SUMMARY]
[CONTENT] stroke related deaths | stroke risk atherogenic | prevalence stroke increasing | stroke 13 thyroid | ischemic stroke thyroid [SUMMARY]
[CONTENT] stroke | study | cases | patients | thyroid | hypothyroidism | ischemic | dl | subclinical | mean [SUMMARY]
[CONTENT] stroke | study | cases | patients | thyroid | hypothyroidism | ischemic | dl | subclinical | mean [SUMMARY]
[CONTENT] stroke | study | cases | patients | thyroid | hypothyroidism | ischemic | dl | subclinical | mean [SUMMARY]
[CONTENT] stroke | study | cases | patients | thyroid | hypothyroidism | ischemic | dl | subclinical | mean [SUMMARY]
[CONTENT] stroke | study | cases | patients | thyroid | hypothyroidism | ischemic | dl | subclinical | mean [SUMMARY]
[CONTENT] stroke | study | cases | patients | thyroid | hypothyroidism | ischemic | dl | subclinical | mean [SUMMARY]
[CONTENT] stroke | death | year | 000 | stroke year | hemorrhage | ischemic | cause | cvas | associated [SUMMARY]
[CONTENT] range | normal | sample | ct | consent | size | ft4 | reference | sample size | iu [SUMMARY]
[CONTENT] mean | cases | cases mean | hypothyroid | hypothyroid cases | mg dl | mg | overt hypothyroid cases | overt hypothyroid | value [SUMMARY]
[CONTENT] modifiable | stroke | risk | non | risk factors | factors | prevalence | factors great | impact reduction morbidity | non communicable diseases world [SUMMARY]
[CONTENT] stroke | cases | study | mean | hypothyroidism | patients | ischemic | subclinical | thyroid | risk [SUMMARY]
[CONTENT] stroke | cases | study | mean | hypothyroidism | patients | ischemic | subclinical | thyroid | risk [SUMMARY]
[CONTENT] ||| ||| tertiary [SUMMARY]
[CONTENT] June to December 2019 | tertiary ||| Institutional review board | National Academy of Medical Sciences | IM | 175 ||| 180 days ||| ||| Microsoft Excel | Statistical Package | Social Sciences | 22 ||| Point | 90% [SUMMARY]
[CONTENT] 73 | 13 | 17.8% | 90% | 10.44-25.16 ||| 11 | 15.1% | 2 | 2.7% ||| IA | 51 | 70% | IB | 22 | 30% [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| tertiary ||| June to December 2019 | tertiary ||| Institutional review board | National Academy of Medical Sciences | IM | 175 ||| 180 days ||| ||| Microsoft Excel | Statistical Package | Social Sciences | 22 ||| Point | 90% ||| ||| 73 | 13 | 17.8% | 90% | 10.44-25.16 ||| 11 | 15.1% | 2 | 2.7% ||| IA | 51 | 70% | IB | 22 | 30% ||| [SUMMARY]
[CONTENT] ||| ||| tertiary ||| June to December 2019 | tertiary ||| Institutional review board | National Academy of Medical Sciences | IM | 175 ||| 180 days ||| ||| Microsoft Excel | Statistical Package | Social Sciences | 22 ||| Point | 90% ||| ||| 73 | 13 | 17.8% | 90% | 10.44-25.16 ||| 11 | 15.1% | 2 | 2.7% ||| IA | 51 | 70% | IB | 22 | 30% ||| [SUMMARY]
Three weeks of time-restricted eating improves glucose homeostasis in adults with type 2 diabetes but does not improve insulin sensitivity: a randomised crossover trial.
35871650
ClinicalTrials.gov NCT03992248 FUNDING: ZonMW, 459001013.
TRIAL REGISTRATION
Fourteen adults with type 2 diabetes (BMI 30.5±4.2 kg/m2, HbA1c 46.1±7.2 mmol/mol [6.4±0.7%]) participated in a 3 week TRE (daily food intake within 10 h) vs control (spreading food intake over ≥14 h) regimen in a randomised, crossover trial design. The study was performed at Maastricht University, the Netherlands. Eligibility criteria included diagnosis of type 2 diabetes, intermediate chronotype and absence of medical conditions that could interfere with the study execution and/or outcome. Randomisation was performed by a study-independent investigator, ensuring that an equal number of participants started with TRE and CON. Due to the nature of the study, neither volunteers nor investigators were blinded to the study interventions. The quality of the data was checked without knowledge of intervention allocation. Hepatic glycogen levels were assessed with 13C-MRS and insulin sensitivity was assessed using a hyperinsulinaemic-euglycaemic two-step clamp. Furthermore, glucose homeostasis was assessed with 24 h continuous glucose monitoring devices. Secondary outcomes included 24 h energy expenditure and substrate oxidation, hepatic lipid content and skeletal muscle mitochondrial capacity.
METHODS
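Because the design described above is a within-subject crossover, each participant contributes one value under TRE and one under control, and the full text states that outcomes were compared with paired t tests. A minimal sketch of such a comparison in Python, with made-up illustrative values rather than the study data:

```python
from scipy import stats

# Hypothetical fasting glucose (mmol/l) for the same six participants under control and TRE
control = [8.6, 9.1, 7.9, 8.4, 9.0, 8.2]
tre     = [7.8, 8.5, 7.5, 7.9, 8.3, 7.7]

t_stat, p_value = stats.ttest_rel(control, tre)  # paired (within-subject) t test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```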
Results are depicted as mean ± SEM. Hepatic glycogen content was similar between TRE and control condition (0.15±0.01 vs 0.15±0.01 AU, p=0.88). M value was not significantly affected by TRE (19.6±1.8 vs 17.7±1.8 μmol kg-1 min-1 in TRE vs control, respectively, p=0.10). Hepatic and peripheral insulin sensitivity also remained unaffected by TRE (p=0.67 and p=0.25, respectively). Yet, insulin-induced non-oxidative glucose disposal was increased with TRE (non-oxidative glucose disposal 4.3±1.1 vs 1.5±1.7 μmol kg-1 min-1, p=0.04). TRE increased the time spent in the normoglycaemic range (15.1±0.8 vs 12.2±1.1 h per day, p=0.01), and decreased fasting glucose (7.6±0.4 vs 8.6±0.4 mmol/l, p=0.03) and 24 h glucose levels (6.8±0.2 vs 7.6±0.3 mmol/l, p<0.01). Energy expenditure over 24 h was unaffected; nevertheless, TRE decreased 24 h glucose oxidation (260.2±7.6 vs 277.8±10.7 g/day, p=0.04). No adverse events were reported that were related to the interventions.
RESULTS
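The time-in-range outcomes reported above are derived from the 24 h CGM traces by counting readings inside each glycaemic band and converting to hours per day. A minimal sketch in Python, assuming 15 min sampling (as described in the methods) and the normoglycaemic band the paper defines for Fig. 2 (4.4-7.2 mmol/l); the example readings are hypothetical:

```python
def hours_in_range(readings_mmol_l, low=4.4, high=7.2, minutes_per_reading=15):
    """Hours spent with interstitial glucose inside [low, high] for a list of CGM readings."""
    in_range = sum(low <= g <= high for g in readings_mmol_l)
    return in_range * minutes_per_reading / 60

# A full day at 15 min intervals would have 96 readings; 8 are shown here for brevity
example_day = [6.1, 6.8, 7.4, 8.2, 7.0, 6.5, 5.9, 6.2]
print(hours_in_range(example_day))  # 1.5 h in range for these 8 example readings
```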
We show that a 10 h TRE regimen is a feasible, safe and effective means to improve 24 h glucose homeostasis in free-living adults with type 2 diabetes. However, these changes were not accompanied by changes in insulin sensitivity or hepatic glycogen.
CONCLUSIONS/INTERPRETATION
[ "Adult", "Blood Glucose", "Blood Glucose Self-Monitoring", "Cross-Over Studies", "Diabetes Mellitus, Type 2", "Glucose", "Homeostasis", "Humans", "Insulin", "Insulin Resistance", "Lipids", "Liver Glycogen" ]
9477920
Introduction
Our modern 24 h society is characterised by ubiquitous food availability, irregular sleep–activity patterns and frequent exposure to artificial light sources. Together, these factors lead to a disrupted day–night rhythm, which contributes to the development of type 2 diabetes [1–3]. In Western society, most people tend to spread their daily food intake over a minimum of 14 h [4], likely resulting in the absence of a true, nocturnal fasted state. Restricting food intake to a pre-defined time window (typically ≤12 h), i.e. time-restricted eating (TRE), restores the cycle of daytime eating and prolonged fasting during the evening and night. Indeed, several studies demonstrated that TRE has promising metabolic effects in overweight or obese individuals, including increased lipid oxidation [5], decreased plasma glucose levels [6, 7] and improved insulin sensitivity [8]. While promising, the latter studies applied extremely short eating time windows (e.g. 6–8 h) in highly controlled settings [5–10], thus hampering implementation into daily life. To date, only Parr et al have successfully explored the potential of TRE in adults with type 2 diabetes, using a 9 h TRE regimen [11]; however, the effects of TRE on metabolic health remain largely unexplored. Despite the fact that TRE is sometimes accompanied by (unintended) weight loss [4–6, 9, 10, 12, 13], which inherently improves metabolic health, it has also been reported to improve metabolic health in the absence of weight loss [8], indicating that additional mechanisms underlie the effects of TRE. In this context, individuals with impaired metabolic health display aberrations in the rhythmicity of metabolic processes such as glucose homeostasis [14, 15], mitochondrial oxidative capacity [16] and whole-body substrate oxidation [16] compared with the rhythms found in healthy, lean individuals [14, 15, 17]. Disruption of circadian rhythmicity is proposed to contribute to the impaired matching of substrate utilisation with substrate availability, which is associated with type 2 diabetes [18]. In turn, we hypothesise that these impairments in metabolic rhythmicity are due to a disturbed eating–fasting cycle. Therefore, restricting food intake to daytime and, consequently, extending the period of fasting, may improve metabolic health. More specifically, hepatic glycogen could play a pivotal role in this process, as it serves as a fuel during the night when glucose levels are low and is replenished during the daytime [19]. A decrease in hepatic glycogen triggers the stimulation of fat oxidation and molecular metabolic adaptations that accommodate substrate availability in the fasted state [20], and the need to replenish these stores may improve insulin sensitivity. Hitherto, it is not known whether TRE could result in a more pronounced depletion of hepatic glycogen levels in type 2 diabetes, leading to improved insulin sensitivity. The aim of the current study was to examine the effect of limiting food intake to a feasible 10 h daily time frame for 3 weeks in free-living conditions on hepatic glycogen utilisation and insulin sensitivity in adults with type 2 diabetes.
Procedures
At the start of each intervention period, body weight was determined and a continuous glucose monitoring (CGM) device (Freestyle Libre Pro; Abbott, Chicago, USA) was placed on the back of the upper arm to measure interstitial glucose levels every 15 min. On one occasion, between days 7 and 15 of each intervention, fasted hepatic glycogen was measured in the morning at 07:00 hours using 13C-MRS. The day before the MRS measurement, volunteers consumed a standardised meal at home at either 16:40 hours (TRE) or 20:40 hours (CON), ensuring 20 min of meal consumption, so that they were fasted from, respectively, 17:00 hours or 21:00 hours. On day 19, volunteers arrived at the university at 15:00 hours for measurement of body composition using air displacement plethysmography (BodPod; Cosmed, Rome, Italy), followed by the placement of an i.v. cannula. Afterwards, volunteers entered a respiration chamber for a 36 h measurement of energy expenditure and substrate utilisation using whole-room indirect calorimetry. With TRE, a dinner consisting of 49 per cent of energy (En%) was provided in the respiration chamber at 17:40 hours and volunteers were fasted from 18:00 hours. With CON, a snack of 10 En% was provided at 18:00 hours followed by a 39 En% dinner at 21:40 hours, resulting in a fast from 22:00 hours. Energy content of the meals was based on estimated energy expenditure using the Harris and Benedict equation [22]. On day 20, while in the respiration chamber, a fasted blood sample was obtained at 07:30 hours and 24 h urine was collected for analysis of nitrogen excretion. In both intervention arms, volunteers received standardised meals at fixed times (08:00, 12:00, 15:00 and 18:40 hours) consisting of, respectively, 21 En%, 30 En%, 10 En% and 39 En%. Energy intake was based on sleeping metabolic rate (determined during the night of day 19) multiplied by an activity factor of 1.5. Macronutrient composition of the meals was 56 En% carbohydrates, 30 En% fat and 14 En% protein. Furthermore, volunteers performed low-intensity physical activity at 10:30, 13:00 and 16:00 hours. One bout of activity consisted of 15 min of stepping on an aerobic step and 15 min of standing. On day 21, after a standardised 11 h fast, volunteers left the respiration chamber at 06:00 hours. Next, a blood sample was taken, followed by measurement of hepatic glycogen and lipid content using 13C-MRS and 1H-MRS, respectively. Subsequently, a muscle biopsy was obtained to assess ex vivo mitochondrial oxidative capacity, after which a hyperinsulinaemic–euglycaemic two-step clamp was started to measure insulin sensitivity. See ESM Methods for detailed descriptions of measurement methods.
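Two calculations sit behind the meal prescriptions above: a Harris–Benedict estimate of energy expenditure (used to set the energy content of the chamber-entry meals) and a fixed En% split of daily intake over the four standardised meals, with daily intake on day 20 set to sleeping metabolic rate × 1.5. A minimal sketch in Python; the coefficients are the commonly cited original Harris–Benedict ones, the body measures are hypothetical, and the BMR here merely stands in for the measured sleeping metabolic rate used in the actual protocol:

```python
def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_y: float) -> float:
    """Basal metabolic rate in kcal/day (original Harris-Benedict equations, rounded coefficients)."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.00 * height_cm - 6.76 * age_y
    return 655.10 + 9.56 * weight_kg + 1.85 * height_cm - 4.68 * age_y

bmr = harris_benedict_bmr("male", weight_kg=89.0, height_cm=175.0, age_y=67)  # hypothetical participant
daily_kcal = bmr * 1.5                                                        # activity factor from the protocol
meal_split = {"08:00": 0.21, "12:00": 0.30, "15:00": 0.10, "18:40": 0.39}     # En% per meal, from the methods
print({time: round(daily_kcal * share) for time, share in meal_split.items()})
```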
null
null
Discussion
ESM (PDF 809 kb)
[ "Introduction", "Methods", "Intervention", "Biochemical analyses", "Data analysis", "Results", "Participant characteristics", "Adherence", "Hepatic glycogen and lipid content", "Insulin sensitivity and glucose homeostasis", "Twenty-four-hour energy and substrate metabolism", "Discussion" ]
[ "Our modern 24 h society is characterised by ubiquitous food availability, irregular sleep–activity patterns and frequent exposure to artificial light sources. Together, these factors lead to a disrupted day–night rhythm, which contributes to the development of type 2 diabetes [1–3]. In Western society, most people tend to spread their daily food intake over a minimum of 14 h [4], likely resulting in the absence of a true, nocturnal fasted state. Restricting food intake to a pre-defined time window (typically ≤12 h), i.e. time-restricted eating [TRE]), restores the cycle of daytime eating and prolonged fasting during the evening and night. Indeed, several studies demonstrated that TRE has promising metabolic effects in overweight or obese individuals, including increased lipid oxidation [5], decreased plasma glucose levels [6, 7] and improved insulin sensitivity [8]. While promising, the latter studies applied extremely short eating time windows (e.g. 6–8 h) in highly-controlled settings [5–10], thus hampering implementation into daily life. To date, only Parr et al have successfully explored the potential of TRE in adults with type 2 diabetes using a 9 h TRE regimen [11], However, the effects of TRE on metabolic health remained largely unexplored.\nDespite the fact that TRE is sometimes accompanied by (unintended) weight loss [4–6, 9, 10, 12, 13], which inherently improves metabolic health, it has also been reported to improve metabolic health in the absence of weight loss [8], indicating that additional mechanisms underlie the effects of TRE. In this context, individuals with impaired metabolic health display aberrations in rhythmicity of metabolic processes such as glucose homeostasis [14, 15], mitochondrial oxidative capacity [16] and whole-body substrate oxidation [16] compared with the rhythms found in healthy, lean individuals [14, 15, 17]. Disruption of circadian rhythmicity is proposed to contribute to the impaired matching of substrate utilisation with substrate availability, which is associated with type 2 diabetes [18]. In turn, we hypothesise that these impairments in metabolic rhythmicity are due to a disturbed eating–fasting cycle. Therefore, restricting food intake to daytime and, consequently, extending the period of fasting, may improve metabolic health. More specifically, hepatic glycogen could play a pivotal role in this process, as it serves as a fuel during the night when glucose levels are low and is replenished during the daytime [19]. A decrease in hepatic glycogen triggers the stimulation of fat oxidation and molecular metabolic adaptations that accommodate substrate availability in the fasted state [20], and the need to replenish these stores may improve insulin sensitivity. Hitherto, it is not known whether TRE could result in a more pronounced depletion in hepatic glycogen levels in type 2 diabetes, leading to an improved insulin sensitivity.\nThe aim of the current study was to examine the effect of limiting food intake to a feasible 10 h daily time frame for 3 weeks in free-living conditions on hepatic glycogen utilisation and insulin sensitivity in adults with type 2 diabetes.", "This randomised crossover study was conducted between April 2019 and February 2021, after approval of the Ethics Committee of Maastricht University Medical Center (Maastricht, the Netherlands), and conformed with the Declaration of Helsinki [21]. The trial was registered at ClinicalTrials.gov (registration no. NCT03992248). 
All volunteers signed an informed consent form prior to participation. The randomisation procedure is described in the electronic supplementary material (ESM) Methods. The study consisted of two 3-week intervention periods separated by a wash-out period of ≥4 weeks. At the end of each intervention period, main outcomes were measured (ESM Fig. 1) at the Metabolic Research Unit of Maastricht University, the Netherlands. Male and female adults with type 2 diabetes, aged between 50 and 75 years and BMI ≥25 kg/m2, were eligible for participation. For detailed inclusion and exclusion criteria, see ESM Table 1.\nIntervention During the TRE intervention, volunteers were instructed to consume their habitual diet within a 10 h window during the daytime, with the last meal completed no later than 18:00 hours. Outside this time window, volunteers were only allowed to drink water, plain tea and black coffee. To increase compliance, volunteers were also allowed to drink zero-energy soft drinks in the evening hours if consumed in moderation. During the control (CON) intervention, volunteers were instructed to spread their habitual diet over at least 14 h per day without additional restraints on the time window of food intake. For both intervention periods, volunteers were instructed to maintain their normal physical activity and sleep patterns and to remain weight stable. Food intake and sleep times were recorded daily using a food and sleep diary. Volunteers based the food intake of their second intervention period on the food and sleep diary filled out during the first period to promote similar dietary quantity and quality in both intervention arms. To optimise compliance, a weekly phone call was scheduled to monitor the volunteers and to provide additional instructions if necessary.\nDuring the TRE intervention, volunteers were instructed to consume their habitual diet within a 10 h window during the daytime, with the last meal completed no later than 18:00 hours. Outside this time window, volunteers were only allowed to drink water, plain tea and black coffee. To increase compliance, volunteers were also allowed to drink zero-energy soft drinks in the evening hours if consumed in moderation. During the control (CON) intervention, volunteers were instructed to spread their habitual diet over at least 14 h per day without additional restraints on the time window of food intake. For both intervention periods, volunteers were instructed to maintain their normal physical activity and sleep patterns and to remain weight stable. Food intake and sleep times were recorded daily using a food and sleep diary. Volunteers based the food intake of their second intervention period on the food and sleep diary filled out during the first period to promote similar dietary quantity and quality in both intervention arms. To optimise compliance, a weekly phone call was scheduled to monitor the volunteers and to provide additional instructions if necessary.\nProcedures At the start of each intervention period, body weight was determined and a continuous glucose monitoring (CGM) device (Freestyle Libre Pro; Abbott, Chicago, USA) was placed on the back of the upper arm to measured interstitial glucose levels every 15 min. On one occasion, between day 7 and 15 of each intervention, fasted hepatic glycogen was measured in the morning at 07:00 hours using 13C-MRS. 
The day before the MRS measurement, volunteers consumed a standardised meal at home at either 16:40 hours (TRE) or 20:40 hours (CON), ensuring 20 min of meal consumption, so that they were fasted from, respectively, 17:00 hours or 21:00 hours.\nOn day 19, volunteers arrived at the university at 15:00 hours for measurement of body composition using air displacement plethysmography (BodPod; Cosmed, Rome, Italy), followed by the placement of an i.v. cannula. Afterwards, volunteers entered a respiration chamber for a 36 h measurement of energy expenditure and substrate utilisation using whole-room indirect calorimetry. With TRE, a dinner consisting of 49 per cent of energy (En%) was provided in the respiration chamber at 17:40 hours and volunteers were fasted from 18:00 hours. With CON, a snack of 10 En% was provided at 18:00 hours followed by a 39 En% dinner at 21:40 hours, resulting in a fast from 22:00 hours. Energy content of the meals was based on estimated energy expenditure using the Harris and Benedict equation [22].\nOn day 20, while in the respiration chamber, a fasted blood sample was obtained at 07:30 hours and 24 h urine was collected for analysis of nitrogen excretion. In both intervention arms, volunteers received standardised meals at fixed times (08:00, 12:00, 15:00 and 18:40 hours) consisting of, respectively, 21 En%, 30 En%, 10 En% and 39 En%. Energy intake was based on sleeping metabolic rate (determined during the night of day 19) multiplied by an activity factor of 1.5. Macronutrient composition of the meals was 56 En% carbohydrates, 30 En% fat and 14 En% protein. Furthermore, volunteers performed low-intensity physical activity at 10:30, 13:00 and 16:00 hours. One bout of activity consisted of 15 min of stepping on an aerobic step and 15 min of standing.\nOn day 21, after a standardised 11 h fast, volunteers left the respiration chamber at 06:00 hours. Next, a blood sample was taken, followed by measurement of hepatic glycogen and lipid content using 13C-MRS and 1H-MRS, respectively. Subsequently, a muscle biopsy was obtained to assess ex vivo mitochondrial oxidative capacity, after which a hyperinsulinaemic–euglycaemic two-step clamp was started to measure insulin sensitivity. See ESM Methods for detailed descriptions of measurement methods.\nAt the start of each intervention period, body weight was determined and a continuous glucose monitoring (CGM) device (Freestyle Libre Pro; Abbott, Chicago, USA) was placed on the back of the upper arm to measured interstitial glucose levels every 15 min. On one occasion, between day 7 and 15 of each intervention, fasted hepatic glycogen was measured in the morning at 07:00 hours using 13C-MRS. The day before the MRS measurement, volunteers consumed a standardised meal at home at either 16:40 hours (TRE) or 20:40 hours (CON), ensuring 20 min of meal consumption, so that they were fasted from, respectively, 17:00 hours or 21:00 hours.\nOn day 19, volunteers arrived at the university at 15:00 hours for measurement of body composition using air displacement plethysmography (BodPod; Cosmed, Rome, Italy), followed by the placement of an i.v. cannula. Afterwards, volunteers entered a respiration chamber for a 36 h measurement of energy expenditure and substrate utilisation using whole-room indirect calorimetry. With TRE, a dinner consisting of 49 per cent of energy (En%) was provided in the respiration chamber at 17:40 hours and volunteers were fasted from 18:00 hours. 
With CON, a snack of 10 En% was provided at 18:00 hours followed by a 39 En% dinner at 21:40 hours, resulting in a fast from 22:00 hours. Energy content of the meals was based on estimated energy expenditure using the Harris and Benedict equation [22].\nOn day 20, while in the respiration chamber, a fasted blood sample was obtained at 07:30 hours and 24 h urine was collected for analysis of nitrogen excretion. In both intervention arms, volunteers received standardised meals at fixed times (08:00, 12:00, 15:00 and 18:40 hours) consisting of, respectively, 21 En%, 30 En%, 10 En% and 39 En%. Energy intake was based on sleeping metabolic rate (determined during the night of day 19) multiplied by an activity factor of 1.5. Macronutrient composition of the meals was 56 En% carbohydrates, 30 En% fat and 14 En% protein. Furthermore, volunteers performed low-intensity physical activity at 10:30, 13:00 and 16:00 hours. One bout of activity consisted of 15 min of stepping on an aerobic step and 15 min of standing.\nOn day 21, after a standardised 11 h fast, volunteers left the respiration chamber at 06:00 hours. Next, a blood sample was taken, followed by measurement of hepatic glycogen and lipid content using 13C-MRS and 1H-MRS, respectively. Subsequently, a muscle biopsy was obtained to assess ex vivo mitochondrial oxidative capacity, after which a hyperinsulinaemic–euglycaemic two-step clamp was started to measure insulin sensitivity. See ESM Methods for detailed descriptions of measurement methods.\nBiochemical analyses Blood samples were used for quantification of metabolites and nitrogen was assessed using 24 h urine samples. See ESM Methods for further details regarding the biochemical analyses.\nBlood samples were used for quantification of metabolites and nitrogen was assessed using 24 h urine samples. See ESM Methods for further details regarding the biochemical analyses.\nData analysis The statistical packages SPSS Statistics 25 (IBM, New York, USA) and Prism 9 (GraphPad Software, San Diego, USA) were used for statistical analyses. Interventional comparisons are expressed as mean ± SEM. Participant characteristics are expressed as mean ± SD. Differences between CON and TRE were tested using the paired t test, unless specified otherwise. A two-sided p<0.05 was considered statistically significant. The power calculation, as well as other calculations made using the measured data, are described in the ESM Methods.\nThe statistical packages SPSS Statistics 25 (IBM, New York, USA) and Prism 9 (GraphPad Software, San Diego, USA) were used for statistical analyses. Interventional comparisons are expressed as mean ± SEM. Participant characteristics are expressed as mean ± SD. Differences between CON and TRE were tested using the paired t test, unless specified otherwise. A two-sided p<0.05 was considered statistically significant. The power calculation, as well as other calculations made using the measured data, are described in the ESM Methods.", "During the TRE intervention, volunteers were instructed to consume their habitual diet within a 10 h window during the daytime, with the last meal completed no later than 18:00 hours. Outside this time window, volunteers were only allowed to drink water, plain tea and black coffee. To increase compliance, volunteers were also allowed to drink zero-energy soft drinks in the evening hours if consumed in moderation. 
During the control (CON) intervention, volunteers were instructed to spread their habitual diet over at least 14 h per day without additional restraints on the time window of food intake. For both intervention periods, volunteers were instructed to maintain their normal physical activity and sleep patterns and to remain weight stable. Food intake and sleep times were recorded daily using a food and sleep diary. Volunteers based the food intake of their second intervention period on the food and sleep diary filled out during the first period to promote similar dietary quantity and quality in both intervention arms. To optimise compliance, a weekly phone call was scheduled to monitor the volunteers and to provide additional instructions if necessary.", "Blood samples were used for quantification of metabolites and nitrogen was assessed using 24 h urine samples. See ESM Methods for further details regarding the biochemical analyses.", "The statistical packages SPSS Statistics 25 (IBM, New York, USA) and Prism 9 (GraphPad Software, San Diego, USA) were used for statistical analyses. Interventional comparisons are expressed as mean ± SEM. Participant characteristics are expressed as mean ± SD. Differences between CON and TRE were tested using the paired t test, unless specified otherwise. A two-sided p<0.05 was considered statistically significant. The power calculation, as well as other calculations made using the measured data, are described in the ESM Methods.", "Participant characteristics A flowchart of participant enrolment is depicted in ESM Fig. 2. Baseline participant characteristics are presented in Table 1. The median Morningness-Eveningness Questionnaire Self-Assessment (MEQ-SA) score amounted to 59.5 (range 41–72). Only one volunteer was identified as an extreme morning type but was included in the study as the intervention did not interfere with his habitual day–night rhythm.\nTable 1Baseline characteristics of participantsCharacteristicMeasurement/valueN14Sex, n female/n male7/7Age, years67.5±5.2BMI, kg/m230.5±3.7Diabetes medication, n yes/n no10/4 Metformin only, n7 Metformin + gliclazide, n3Fasting plasma glucose, mmol/l7.9±1.3HbA1c, mmol/mol46.1±7.2HbA1c, %6.4±0.7AST, μkat/l0.4±0.1ALT, μkat/l0.4±0.2GGT, μkat/l0.4±0.2eGFR, ml min−1 1.73 m−279.9±14.5MEQ-SA, score59.1±7.7Data are shown as mean ± SD, unless stated otherwiseALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment\nBaseline characteristics of participants\nData are shown as mean ± SD, unless stated otherwise\nALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment\nA flowchart of participant enrolment is depicted in ESM Fig. 2. Baseline participant characteristics are presented in Table 1. The median Morningness-Eveningness Questionnaire Self-Assessment (MEQ-SA) score amounted to 59.5 (range 41–72). 
Only one volunteer was identified as an extreme morning type but was included in the study as the intervention did not interfere with his habitual day–night rhythm.\nTable 1 Baseline characteristics of participants\nCharacteristic | Measurement/value\nN | 14\nSex, n female/n male | 7/7\nAge, years | 67.5±5.2\nBMI, kg/m2 | 30.5±3.7\nDiabetes medication, n yes/n no | 10/4\n  Metformin only, n | 7\n  Metformin + gliclazide, n | 3\nFasting plasma glucose, mmol/l | 7.9±1.3\nHbA1c, mmol/mol | 46.1±7.2\nHbA1c, % | 6.4±0.7\nAST, μkat/l | 0.4±0.1\nALT, μkat/l | 0.4±0.2\nGGT, μkat/l | 0.4±0.2\neGFR, ml min−1 1.73 m−2 | 79.9±14.5\nMEQ-SA, score | 59.1±7.7\nData are shown as mean ± SD, unless stated otherwise\nALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment\nAdherence Volunteers did not indicate any changes in diabetes medication throughout the study. Volunteers recorded their daily food intake and sleep habits for, on average, 17 days during TRE and 18 days during CON. Based on these data, the eating window averaged 9.1±0.2 h in TRE vs 13.4±0.1 h in CON (p<0.01). Sleep–wake patterns were similar in both interventions, with a mean sleep duration of 8.1±0.2 h during TRE and 8.0±0.2 h during CON (p=0.17). Body weight at the start of each intervention was comparable between TRE and CON (89.1±3.7 vs 89.2±3.8 kg, respectively, p=0.62). Although volunteers were instructed to remain weight stable, a small but significant weight loss occurred in response to TRE (−1.0±0.3 kg, p<0.01) but not CON (−0.3±0.3 kg, p=0.22). The weight loss with TRE was significantly greater than the weight change observed with CON (p=0.02). Body composition determined on day 19 was comparable between TRE and CON (TRE vs CON: fat mass 37.4±2.7 vs 37.9±2.9 kg, p=0.58; and fat-free mass 50.7±2.6 vs 51.0±2.6 kg, p=0.60).\nHepatic glycogen and lipid content Approximately half-way through each intervention period, hepatic glycogen levels were assessed in the morning following a 14 h (TRE) and 10 h (CON) night-time fast. Hepatic glycogen did not differ significantly between TRE vs CON (0.16±0.03 vs 0.17±0.02 arbitrary units [AU], respectively, p=0.43). 
At the end of each intervention, hepatic glycogen levels were also assessed after a standardised overnight fast of 11 h for both TRE and CON but did not reveal an altered hepatic glycogen content with TRE compared with CON (0.15±0.01 vs 0.15±0.01 AU, respectively, p=0.88). We also assessed hepatic lipid content; neither the amount of lipids nor the composition of the hepatic lipid pool was altered with TRE vs CON (respectively: total lipid content 9.0±2.0 vs 8.6±1.6%, p=0.47; polyunsaturated fatty acids 17.0±1.3 vs 16.2±1.2%, p=0.41; mono-unsaturated fatty acids 40.6±0.9 vs 42.9±1.4%, p=0.19; and saturated fatty acids 42.4±1.2 vs 40.9±1.5%, p=0.41).\nApproximately half-way through each intervention period, hepatic glycogen levels were assessed in the morning following a 14 h (TRE) and 10 h (CON) night-time fast. Hepatic glycogen did not differ significantly between TRE vs CON (0.16±0.03 vs 0.17±0.02 arbitrary units [AU], respectively, p=0.43). At the end of each intervention, hepatic glycogen levels were also assessed after a standardised overnight fast of 11 h for both TRE and CON but did not reveal an altered hepatic glycogen content with TRE compared with CON (0.15±0.01 vs 0.15±0.01 AU, respectively, p=0.88). We also assessed hepatic lipid content; neither the amount of lipids nor the composition of the hepatic lipid pool was altered with TRE vs CON (respectively: total lipid content 9.0±2.0 vs 8.6±1.6%, p=0.47; polyunsaturated fatty acids 17.0±1.3 vs 16.2±1.2%, p=0.41; mono-unsaturated fatty acids 40.6±0.9 vs 42.9±1.4%, p=0.19; and saturated fatty acids 42.4±1.2 vs 40.9±1.5%, p=0.41).\nInsulin sensitivity and glucose homeostasis A hyperinsulinaemic–euglycaemic two-step clamp with a glucose tracer and indirect calorimetry was performed to assess insulin sensitivity. No differences in M value were found when comparing TRE and CON (19.6±1.8 vs 17.7±1.8 μmol kg−1 min−1, respectively, p=0.1). Hepatic insulin sensitivity was not affected by TRE, as exemplified by a similar endogenous glucose production (EGP) with TRE and CON in the fasted state and in the low- and high-insulin-stimulated states (p=0.83, p=0.38 and p=0.30, respectively; Fig. 1a). Suppression of EGP was also similar when comparing TRE with CON upon low- and high-insulin infusion (p=0.67 and p=0.47; Fig. 1a). NEFA suppression upon low insulin exposure was not different between TRE and CON (−365.2±41.6 vs −359.1±43.2 mmol/l, p=0.8). However, absolute levels of NEFAs were lower with TRE during the low- and high-insulin phase (p=0.02 and p=0.04; Fig. 1b), which may hint at an improved adipose tissue insulin sensitivity.\nFig. 1Effect of TRE on EGP (a), plasma NEFA (b), Rd (c), NOGD (d) and fat oxidation (e) measured during a hyperinsulinaemic–euglycaemic two-step clamp (n=14). *p<0.05 (data were analysed with paired t tests). Rd, Rate of disappearance\nEffect of TRE on EGP (a), plasma NEFA (b), Rd (c), NOGD (d) and fat oxidation (e) measured during a hyperinsulinaemic–euglycaemic two-step clamp (n=14). *p<0.05 (data were analysed with paired t tests). Rd, Rate of disappearance\nPeripheral insulin-stimulated glucose disposal, reflected by the change in rate of disappearance (Rd) from basal to high insulin, remained unchanged with TRE (p=0.25; Fig. 1c). However, we observed a larger insulin-stimulated non-oxidative glucose disposal (NOGD, difference from baseline to high insulin) with TRE than with CON (4.3±1.1 vs 1.5±1.7 μmol kg−1 min−1, respectively, p=0.04; Fig. 1d) reflecting an increased ability to form glycogen. 
Peripheral insulin-stimulated glucose disposal, reflected by the change in rate of disappearance (Rd) from basal to high insulin, remained unchanged with TRE (p=0.25; Fig. 1c). However, we observed a larger insulin-stimulated non-oxidative glucose disposal (NOGD, difference from baseline to high insulin) with TRE than with CON (4.3±1.1 vs 1.5±1.7 μmol kg−1 min−1, respectively, p=0.04; Fig. 1d), reflecting an increased ability to form glycogen. Conversely, insulin-stimulated carbohydrate oxidation from basal to high insulin appeared to be lower with TRE than with CON (4.7±0.9 vs 6.2±0.9 μmol kg−1 min−1, respectively), but this difference was not statistically significant (p=0.07). Consistently, insulin-induced suppression of fat oxidation from basal to high insulin was smaller with TRE than with CON (−1.3±0.3 vs −1.8±0.2 μmol kg−1 min−1, p=0.04; Fig. 1e). Energy expenditure did not differ between TRE and CON during the basal, low-insulin and high-insulin phases of the clamp. These results indicate that while peripheral insulin sensitivity is unchanged with TRE, glucose uptake is directed more towards storage than towards oxidation. Both hepatic and peripheral insulin sensitivity, as well as levels of hepatic glycogen, were additionally analysed in volunteers who only used metformin as diabetes treatment (n=7) and this did not alter the outcomes.\nTo examine the effect of TRE on glucose homeostasis, CGM data from the last 4 days in the free-living situation (days 15–18) were analysed for both interventions. Four volunteers presented incomplete CGM data due to technical issues, hence statistics were performed on CGM data from ten volunteers. Mean 24 h glucose levels were lower in TRE compared with CON (6.8±0.2 vs 7.6±0.3 mmol/l, p<0.01; Fig. 2a–f). Nocturnal glucose levels were consistently lower in TRE vs CON (Fig. 2a–d). Furthermore, volunteers spent more time in the normal glucose range upon TRE compared with CON (15.1±0.8 vs 12.2±1.1 h per day, p=0.01; Fig. 2f). Concomitantly, time spent in the high glucose range was less in TRE compared with CON (5.5±0.5 vs 7.5±0.7 h per day, p=0.02), whereas no differences between eating regimens were found for time spent in hyperglycaemia (2.3±0.4 vs 3.7±0.8 h per day, p=0.24), time spent in the low glucose range (0.5±0.1 vs 0.4±0.1 h per day, p=1.00) or time spent in hypoglycaemia (0.7±0.3 vs 0.1±0.0 h per day, p=0.48).\nFig. 2 (a–d) Twenty-four-hour glucose levels on days 15 (a), 16 (b), 17 (c) and 18 (d) during TRE or CON (n=10). (e) Mean 24 h glucose from day 15 to day 18 (n=10), analysed using a paired t test. (f) Time spent in each glucose range during days 15–18 (n=10), analysed using Wilcoxon tests with Bonferroni correction. *p<0.05. Hypo, hypoglycaemia defined as glucose levels <4.0 mmol/l; Low, low glucose levels defined as glucose levels 4.0–4.3 mmol/l; Normal range, glucose levels within the normal range defined as 4.4–7.2 mmol/l; High, high glucose levels defined as glucose levels 7.3–9.9 mmol/l; Hyper, hyperglycaemia defined as glucose levels >10 mmol/l
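The CGM outcomes above (mean 24 h glucose and time spent in each glucose range) can be derived from the 15 min sensor readings with a few lines of code. The sketch below is a simplified illustration, not the study's analysis pipeline: the data are simulated, the range cut-offs follow the Fig. 2 legend, and the tests mirror those named in the legend (paired t test for mean glucose, Wilcoxon signed-rank tests with Bonferroni correction for time in range).

```python
# Illustrative sketch: summarise 15 min CGM readings (mmol/l) into mean 24 h glucose and
# time spent in the glucose ranges of the Fig. 2 legend. All readings here are simulated.
import numpy as np
from scipy import stats

EDGES = [4.0, 4.4, 7.3, 10.0]                        # legend cut-offs in mmol/l
LABELS = ["hypo", "low", "normal", "high", "hyper"]  # <4.0, 4.0-4.3, 4.4-7.2, 7.3-9.9, >=10

def hours_per_range(readings, minutes_per_reading=15):
    """Hours spent in each glucose range for one 24 h trace of CGM readings."""
    bins = np.digitize(readings, EDGES)               # indices 0..4, matching LABELS
    counts = np.bincount(bins, minlength=len(LABELS))
    return dict(zip(LABELS, counts * minutes_per_reading / 60.0))

rng = np.random.default_rng(0)                        # simulated 24 h traces, 96 readings each
tre = [rng.normal(6.8, 1.2, 96) for _ in range(10)]
con = [rng.normal(7.6, 1.4, 96) for _ in range(10)]

# mean 24 h glucose: paired t test (as in Fig. 2e)
t_stat, p_mean = stats.ttest_rel([x.mean() for x in tre], [x.mean() for x in con])
print(f"mean 24 h glucose, paired t test p={p_mean:.3f}")

# time in the normal range: Wilcoxon signed-rank test, Bonferroni-corrected over the
# five range comparisons (as in Fig. 2f)
tir_tre = [hours_per_range(x)["normal"] for x in tre]
tir_con = [hours_per_range(x)["normal"] for x in con]
w_stat, p_raw = stats.wilcoxon(tir_tre, tir_con)
print(f"time in normal range, adjusted p={min(1.0, p_raw * len(LABELS)):.3f}")
```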
Additionally, fasting plasma metabolites were assessed on day 20 and day 21 of each intervention. On day 20, blood samples were taken after a 10 h (CON) or 14 h (TRE) overnight fast. Plasma glucose on day 20 was lower after TRE (7.6±0.4 vs 8.6±0.4 mmol/l, respectively, p=0.03) whereas plasma insulin, triglycerides (TG), and NEFA levels were comparable between conditions (Table 2). On day 21, when overnight fasting time was similar for both interventions (11 h), plasma glucose levels remained lower in TRE than in CON (8.0±0.3 vs 8.9±0.5 mmol/l, respectively, p=0.04), whereas no differences were detected in plasma insulin, TG and NEFA levels (Table 2).\nTable 2 Blood plasma biochemistry (CON vs TRE; p value). Day 20 (n=13): triglycerides, mmol/l: 2.1±0.3 vs 1.9±0.2, p=0.30; NEFA, mmol/l: 0.529±0.038 vs 0.489±0.035, p=0.39; glucose, mmol/l (a): 8.6±0.4 vs 7.6±0.4, p=0.03; insulin, pmol/l: 111.1±20.8 vs 104.2±13.9, p=0.27. Day 21 (n=14): triglycerides, mmol/l: 2.1±0.3 vs 2.2±0.2, p=0.66; NEFA, mmol/l: 0.601±0.070 vs 0.542±0.064, p=0.30; glucose, mmol/l (b): 8.9±0.5 vs 8.0±0.3, p=0.04; insulin, pmol/l: 97.2±13.9 vs 111.1±20.8, p=0.16. Data are shown as mean ± SEM. (a) Fasted blood values with fasting time 10 h for CON and 14 h for TRE. (b) Fasted blood values with fasting time 11 h for both CON and TRE
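For the clamp results reported above, non-oxidative glucose disposal (NOGD) is conventionally obtained by subtracting carbohydrate oxidation (from indirect calorimetry) from the tracer-derived rate of disappearance (Rd), and the paper reports its change from the basal to the high-insulin step. The sketch below only illustrates that subtraction with hypothetical numbers; it is not the authors' calculation code.

```python
# Illustrative arithmetic: NOGD = Rd - carbohydrate oxidation, all in umol kg-1 min-1.
# The values below are hypothetical and only demonstrate the bookkeeping.
def nogd(rd: float, cho_oxidation: float) -> float:
    """Non-oxidative glucose disposal for one clamp step."""
    return rd - cho_oxidation

basal_nogd = nogd(rd=12.0, cho_oxidation=7.0)          # hypothetical basal step
high_insulin_nogd = nogd(rd=30.0, cho_oxidation=14.0)  # hypothetical high-insulin step

# the outcome reported in the paper is the change from basal to high insulin
print(f"insulin-stimulated increase in NOGD: {high_insulin_nogd - basal_nogd:.1f} umol kg-1 min-1")
```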
Twenty-four-hour energy and substrate metabolism\nOn day 19, volunteers resided in a respiration chamber for 36 h for measurement of energy expenditure and substrate oxidation. Twenty-four-hour energy expenditure was similar for TRE and CON (9.57±0.22 vs 9.68±0.29 MJ/day, respectively, p=0.22; Fig. 3a), as was the 24 h respiratory exchange ratio (RER) (0.86±0.01 vs 0.86±0.01, respectively, p=0.13). Nonetheless, 24 h carbohydrate oxidation was lower in TRE vs CON (260.2±7.6 vs 277.8±10.7 g/day, respectively, p=0.04; Fig. 3b), whereas 24 h fat oxidation (91.9±6.6 vs 93.5±5.5 g/day, respectively, p=0.72; Fig. 3c) was unaffected. Twenty-four-hour protein oxidation seemed higher upon TRE but the difference did not reach statistical significance (72.8±7.2 vs 58.5±5.4 g/day, respectively, p=0.18; Fig. 3d). Sleeping metabolic rate appeared to be lower with TRE compared with CON (4.66±0.14 vs 4.77±0.18 kJ/min, respectively), although this decrease was not statistically significant (p=0.05; Fig. 3e). There was no change in carbohydrate or fat oxidation during sleep in response to TRE vs CON (RER 0.84±0.01 vs 0.84±0.01, p=0.50; Fig. 3f).\nFig. 3 Effect of TRE on 24 h energy expenditure (a), substrate oxidation (b–d), sleeping metabolic rate (e) and RER during sleep (f) (n=13). *p<0.05
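Whole-room indirect calorimetry yields 24 h O2 consumption, CO2 production and (with the urine collection) nitrogen excretion, from which substrate oxidation and energy expenditure are conventionally calculated. The sketch below uses the widely used Frayn and Weir equations as an illustration of how numbers like those above are obtained; the authors' exact calculations are described in the ESM and may differ, and the input totals here are hypothetical.

```python
# Illustrative sketch using standard equations (Frayn 1983 for substrate oxidation,
# Weir 1949 for energy expenditure). Inputs are hypothetical 24 h totals:
# VO2 and VCO2 in litres, urinary nitrogen in grams.
def substrate_oxidation_g(vo2_l: float, vco2_l: float, urinary_n_g: float):
    cho = 4.55 * vco2_l - 3.21 * vo2_l - 2.87 * urinary_n_g   # carbohydrate, g
    fat = 1.67 * vo2_l - 1.67 * vco2_l - 1.92 * urinary_n_g   # fat, g
    protein = 6.25 * urinary_n_g                              # protein, g
    return cho, fat, protein

def energy_expenditure_mj(vo2_l: float, vco2_l: float, urinary_n_g: float) -> float:
    kcal = 3.941 * vo2_l + 1.106 * vco2_l - 2.17 * urinary_n_g   # Weir equation
    return kcal * 4.184 / 1000.0                                  # kcal -> MJ

vo2, vco2, n_urine = 480.0, 410.0, 12.0   # hypothetical 24 h totals
cho_g, fat_g, protein_g = substrate_oxidation_g(vo2, vco2, n_urine)
print(f"RER {vco2 / vo2:.2f}; CHO {cho_g:.0f} g/day, fat {fat_g:.0f} g/day, "
      f"protein {protein_g:.0f} g/day; EE {energy_expenditure_mj(vo2, vco2, n_urine):.2f} MJ/day")
```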
On day 21, muscle biopsies were obtained to assess ex vivo mitochondrial oxidative capacity by means of high-resolution respirometry. In total, paired biopsies from 13 out of 14 volunteers were analysed. Mitochondrial respiration did not differ between TRE and CON (Table 3).\nTable 3 Mitochondrial oxidative capacity (CON vs TRE, pmol mg−1 s−1; p value). State 2 (a): M: 5.0±0.4 vs 5.5±0.6, p=0.49; MO: 6.6±0.4 vs 6.8±0.5, p=0.93; MG: 7.1±0.4 vs 6.9±0.4, p=0.54. State 3 (b): MO: 28.3±2.0 vs 29.1±1.4, p=0.91; MG: 31.2±1.9 vs 32.6±1.8, p=0.82; MOG: 37.0±2.3 vs 37.8±1.9, p=0.99; MOGS: 55.7±3.3 vs 57.6±2.7, p=0.80. State U (c): MGS: 58.0±3.2 vs 59.9±2.9, p=0.91; FCCP: 67.8±4.7 vs 66.6±3.5, p=0.50. State 4o (d): oligomycin: 17.6±1.2 vs 18.1±1.2, p=0.85. Data presented as mean ± SEM, n=13. (a) State 2, respiration in presence of substrates alone. (b) State 3, ADP-stimulated respiration. (c) State U, maximal respiration in response to an uncoupling agent. (d) State 4o, mitochondrial proton leak measured by blocking ATP synthase. FCCP, trifluoro-methoxy carbonyl cyanide-4 phenylhydrazone; G, glutamate; M, malate; O, octanoyl-carnitine; S, succinate
", "A flowchart of participant enrolment is depicted in ESM Fig. 2. Baseline participant characteristics are presented in Table 1. The median Morningness-Eveningness Questionnaire Self-Assessment (MEQ-SA) score amounted to 59.5 (range 41–72). Only one volunteer was identified as an extreme morning type but was included in the study as the intervention did not interfere with his habitual day–night rhythm.\nTable 1 Baseline characteristics of participants. N: 14; sex, n female/n male: 7/7; age, years: 67.5±5.2; BMI, kg/m2: 30.5±3.7; diabetes medication, n yes/n no: 10/4 (metformin only, n=7; metformin + gliclazide, n=3); fasting plasma glucose, mmol/l: 7.9±1.3; HbA1c, mmol/mol: 46.1±7.2; HbA1c, %: 6.4±0.7; AST, μkat/l: 0.4±0.1; ALT, μkat/l: 0.4±0.2; GGT, μkat/l: 0.4±0.2; eGFR, ml min−1 1.73 m−2: 79.9±14.5; MEQ-SA, score: 59.1±7.7. Data are shown as mean ± SD, unless stated otherwise. ALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment", "Volunteers did not indicate any changes in diabetes medication throughout the study. Volunteers recorded their daily food intake and sleep habits for, on average, 17 days during TRE and 18 days during CON. Based on these data, the eating window averaged 9.1±0.2 h in TRE vs 13.4±0.1 h in CON (p<0.01). Sleep–wake patterns were similar in both interventions, with a mean sleep duration of 8.1±0.2 h during TRE and 8.0±0.2 h during CON (p=0.17). Body weight at the start of each intervention was comparable between TRE and CON (89.1±3.7 vs 89.2±3.8 kg, respectively, p=0.62). Although volunteers were instructed to remain weight stable, a small but significant weight loss occurred in response to TRE (−1.0±0.3 kg, p<0.01) but not CON (−0.3±0.3 kg, p=0.22). The weight loss with TRE was significantly greater than the weight change observed with CON (p=0.02).
Body composition determined on day 19 was comparable between TRE and CON (TRE vs CON: fat mass 37.4±2.7 vs 37.9±2.9 kg, p=0.58; and fat-free mass 50.7±2.6 vs 51.0±2.6 kg, p=0.60).", "Approximately half-way through each intervention period, hepatic glycogen levels were assessed in the morning following a 14 h (TRE) and 10 h (CON) night-time fast. Hepatic glycogen did not differ significantly between TRE and CON (0.16±0.03 vs 0.17±0.02 arbitrary units [AU], respectively, p=0.43). At the end of each intervention, hepatic glycogen levels were also assessed after a standardised overnight fast of 11 h for both TRE and CON but did not reveal an altered hepatic glycogen content with TRE compared with CON (0.15±0.01 vs 0.15±0.01 AU, respectively, p=0.88). We also assessed hepatic lipid content; neither the amount of lipids nor the composition of the hepatic lipid pool was altered with TRE vs CON (respectively: total lipid content 9.0±2.0 vs 8.6±1.6%, p=0.47; polyunsaturated fatty acids 17.0±1.3 vs 16.2±1.2%, p=0.41; mono-unsaturated fatty acids 40.6±0.9 vs 42.9±1.4%, p=0.19; and saturated fatty acids 42.4±1.2 vs 40.9±1.5%, p=0.41).", "A hyperinsulinaemic–euglycaemic two-step clamp with a glucose tracer and indirect calorimetry was performed to assess insulin sensitivity. No differences in M value were found when comparing TRE and CON (19.6±1.8 vs 17.7±1.8 μmol kg−1 min−1, respectively, p=0.1). Hepatic insulin sensitivity was not affected by TRE, as exemplified by a similar endogenous glucose production (EGP) with TRE and CON in the fasted state and in the low- and high-insulin-stimulated states (p=0.83, p=0.38 and p=0.30, respectively; Fig. 1a). Suppression of EGP was also similar when comparing TRE with CON upon low- and high-insulin infusion (p=0.67 and p=0.47; Fig. 1a). NEFA suppression upon low insulin exposure was not different between TRE and CON (−365.2±41.6 vs −359.1±43.2 mmol/l, p=0.8). However, absolute levels of NEFAs were lower with TRE during the low- and high-insulin phases (p=0.02 and p=0.04; Fig. 1b), which may hint at an improved adipose tissue insulin sensitivity.\nFig. 1 Effect of TRE on EGP (a), plasma NEFA (b), Rd (c), NOGD (d) and fat oxidation (e) measured during a hyperinsulinaemic–euglycaemic two-step clamp (n=14). *p<0.05 (data were analysed with paired t tests). Rd, Rate of disappearance\nPeripheral insulin-stimulated glucose disposal, reflected by the change in rate of disappearance (Rd) from basal to high insulin, remained unchanged with TRE (p=0.25; Fig. 1c). However, we observed a larger insulin-stimulated non-oxidative glucose disposal (NOGD, difference from baseline to high insulin) with TRE than with CON (4.3±1.1 vs 1.5±1.7 μmol kg−1 min−1, respectively, p=0.04; Fig. 1d), reflecting an increased ability to form glycogen. Conversely, insulin-stimulated carbohydrate oxidation from basal to high insulin appeared to be lower with TRE than with CON (4.7±0.9 vs 6.2±0.9 μmol kg−1 min−1, respectively), but this difference was not statistically significant (p=0.07). Consistently, insulin-induced suppression of fat oxidation from basal to high insulin was smaller with TRE than with CON (−1.3±0.3 vs −1.8±0.2 μmol kg−1 min−1, p=0.04; Fig. 1e).
Energy expenditure did not differ between TRE and CON during the basal, low-insulin and high-insulin phases of the clamp. These results indicate that while peripheral insulin sensitivity is unchanged with TRE, glucose uptake is directed more towards storage than towards oxidation. Both hepatic and peripheral insulin sensitivity, as well as levels of hepatic glycogen, were additionally analysed in volunteers who only used metformin as diabetes treatment (n=7) and this did not alter the outcomes.\nTo examine the effect of TRE on glucose homeostasis, CGM data from the last 4 days in the free-living situation (days 15–18) were analysed for both interventions. Four volunteers presented incomplete CGM data due to technical issues, hence statistics were performed on CGM data from ten volunteers. Mean 24 h glucose levels were lower in TRE compared with CON (6.8±0.2 vs 7.6±0.3 mmol/l, p<0.01; Fig. 2a–f). Nocturnal glucose levels were consistently lower in TRE vs CON (Fig. 2a–d). Furthermore, volunteers spent more time in the normal glucose range upon TRE compared with CON (15.1±0.8 vs 12.2±1.1 h per day, p=0.01; Fig. 2f). Concomitantly, time spent in the high glucose range was less in TRE compared with CON (5.5±0.5 vs 7.5±0.7 h per day, p=0.02), whereas no differences between eating regimens were found for time spent in hyperglycaemia (2.3±0.4 vs 3.7±0.8 h per day, p=0.24), time spent in the low glucose range (0.5±0.1 vs 0.4±0.1 h per day, p=1.00) or time spent in hypoglycaemia (0.7±0.3 vs 0.1±0.0 h per day, p=0.48).\nFig. 2 (a–d) Twenty-four-hour glucose levels on days 15 (a), 16 (b), 17 (c) and 18 (d) during TRE or CON (n=10). (e) Mean 24 h glucose from day 15 to day 18 (n=10), analysed using a paired t test. (f) Time spent in each glucose range during days 15–18 (n=10), analysed using Wilcoxon tests with Bonferroni correction. *p<0.05. Hypo, hypoglycaemia defined as glucose levels <4.0 mmol/l; Low, low glucose levels defined as glucose levels 4.0–4.3 mmol/l; Normal range, glucose levels within the normal range defined as 4.4–7.2 mmol/l; High, high glucose levels defined as glucose levels 7.3–9.9 mmol/l; Hyper, hyperglycaemia defined as glucose levels >10 mmol/l\nAdditionally, fasting plasma metabolites were assessed on day 20 and day 21 of each intervention. On day 20, blood samples were taken after a 10 h (CON) or 14 h (TRE) overnight fast. Plasma glucose on day 20 was lower after TRE (7.6±0.4 vs 8.6±0.4 mmol/l, respectively, p=0.03) whereas plasma insulin, triglycerides (TG), and NEFA levels were comparable between conditions (Table 2).
On day 21, when overnight fasting time was similar for both interventions (11 h), plasma glucose levels remained lower in TRE than in CON (8.0±0.3 vs 8.9±0.5 mmol/l, respectively, p=0.04), whereas no differences were detected in plasma insulin, TG and NEFA levels (Table 2).\nTable 2 Blood plasma biochemistry (CON vs TRE; p value). Day 20 (n=13): triglycerides, mmol/l: 2.1±0.3 vs 1.9±0.2, p=0.30; NEFA, mmol/l: 0.529±0.038 vs 0.489±0.035, p=0.39; glucose, mmol/l (a): 8.6±0.4 vs 7.6±0.4, p=0.03; insulin, pmol/l: 111.1±20.8 vs 104.2±13.9, p=0.27. Day 21 (n=14): triglycerides, mmol/l: 2.1±0.3 vs 2.2±0.2, p=0.66; NEFA, mmol/l: 0.601±0.070 vs 0.542±0.064, p=0.30; glucose, mmol/l (b): 8.9±0.5 vs 8.0±0.3, p=0.04; insulin, pmol/l: 97.2±13.9 vs 111.1±20.8, p=0.16. Data are shown as mean ± SEM. (a) Fasted blood values with fasting time 10 h for CON and 14 h for TRE. (b) Fasted blood values with fasting time 11 h for both CON and TRE", "On day 19, volunteers resided in a respiration chamber for 36 h for measurement of energy expenditure and substrate oxidation. Twenty-four-hour energy expenditure was similar for TRE and CON (9.57±0.22 vs 9.68±0.29 MJ/day, respectively, p=0.22; Fig. 3a), as was the 24 h respiratory exchange ratio (RER) (0.86±0.01 vs 0.86±0.01, respectively, p=0.13). Nonetheless, 24 h carbohydrate oxidation was lower in TRE vs CON (260.2±7.6 vs 277.8±10.7 g/day, respectively, p=0.04; Fig. 3b), whereas 24 h fat oxidation (91.9±6.6 vs 93.5±5.5 g/day, respectively, p=0.72; Fig. 3c) was unaffected. Twenty-four-hour protein oxidation seemed higher upon TRE but the difference did not reach statistical significance (72.8±7.2 vs 58.5±5.4 g/day, respectively, p=0.18; Fig. 3d). Sleeping metabolic rate appeared to be lower with TRE compared with CON (4.66±0.14 vs 4.77±0.18 kJ/min, respectively), although this decrease was not statistically significant (p=0.05; Fig. 3e). There was no change in carbohydrate or fat oxidation during sleep in response to TRE vs CON (RER 0.84±0.01 vs 0.84±0.01, p=0.50; Fig. 3f).\nFig. 3 Effect of TRE on 24 h energy expenditure (a), substrate oxidation (b–d), sleeping metabolic rate (e) and RER during sleep (f) (n=13). *p<0.05\nOn day 21, muscle biopsies were obtained to assess ex vivo mitochondrial oxidative capacity by means of high-resolution respirometry. In total, paired biopsies from 13 out of 14 volunteers were analysed.
Mitochondrial respiration did not differ between TRE and CON (Table 3).\nTable 3 Mitochondrial oxidative capacity (CON vs TRE, pmol mg−1 s−1; p value). State 2 (a): M: 5.0±0.4 vs 5.5±0.6, p=0.49; MO: 6.6±0.4 vs 6.8±0.5, p=0.93; MG: 7.1±0.4 vs 6.9±0.4, p=0.54. State 3 (b): MO: 28.3±2.0 vs 29.1±1.4, p=0.91; MG: 31.2±1.9 vs 32.6±1.8, p=0.82; MOG: 37.0±2.3 vs 37.8±1.9, p=0.99; MOGS: 55.7±3.3 vs 57.6±2.7, p=0.80. State U (c): MGS: 58.0±3.2 vs 59.9±2.9, p=0.91; FCCP: 67.8±4.7 vs 66.6±3.5, p=0.50. State 4o (d): oligomycin: 17.6±1.2 vs 18.1±1.2, p=0.85. Data presented as mean ± SEM, n=13. (a) State 2, respiration in presence of substrates alone. (b) State 3, ADP-stimulated respiration. (c) State U, maximal respiration in response to an uncoupling agent. (d) State 4o, mitochondrial proton leak measured by blocking ATP synthase. FCCP, trifluoro-methoxy carbonyl cyanide-4 phenylhydrazone; G, glutamate; M, malate; O, octanoyl-carnitine; S, succinate", "TRE is a novel strategy to improve metabolic health and has been proposed to counteract the detrimental effects of eating throughout the day by limiting food intake to daytime hours. To date, only a few studies have examined the metabolic effects of TRE in adults with type 2 diabetes. Here, we tested whether restricting energy intake to a feasible, 10 h time frame for 3 weeks would lower hepatic glycogen levels and improve insulin sensitivity in overweight/obese adults with type 2 diabetes. Additionally, we explored the effects of TRE on glucose homeostasis, 24 h energy metabolism and mitochondrial function.\nWe hypothesised that the 10 h TRE regimen, with the latest food intake at 18:00 hours, would result in a more pronounced fasting state, especially during the night. During the night, the liver is crucial to the regulation of blood glucose through the processes of gluconeogenesis and glycogenolysis, and it has been shown that these processes are elevated in type 2 diabetes [23, 24]. Therefore, we hypothesised that hepatic glycogen would be lower after TRE and would be associated with an improved insulin sensitivity.\nIn contrast to our hypothesis, hepatic glycogenolysis appeared to be unaffected by TRE since there was no change in glycogen content after a standardised 11 h fast at the end of the intervention. Neither was there a change after a 14 h (TRE) vs 10 h (CON) overnight fast half-way through the intervention. In addition, EGP suppression during the low-insulin phase of the hyperinsulinaemic–euglycaemic clamp (reflecting hepatic insulin sensitivity) did not differ between TRE and CON. A limitation of our approach is that we did not measure hepatic glycogen dynamics during the night. Such measurements may be important, as our clamp results showed an increase in NOGD upon high-insulin stimulation with TRE, suggesting an increased glycogen storage. These results could suggest small changes in hepatic glycogen turnover; alternatively, muscle glycogen levels may play a role in explaining our clamp results, as the muscle accounts for most of the glycogen synthesis upon high-insulin stimulation in healthy individuals.
Interestingly, type 2 diabetes is characterised by an impaired insulin-stimulated glycogen storage [25]. Thus, an improvement in NOGD due to TRE may help to regulate 24 h and postprandial glucose levels. Indeed, 24 h glucose levels were significantly improved after TRE.\nWe did not observe an effect of TRE on insulin sensitivity. A previous controlled randomised crossover study by Sutton et al did show an improved insulin sensitivity with TRE [8]. Thus, men with prediabetes followed a 5 week 6 h early TRE regimen, whereby the last meal was consumed before 15:00 hours. The differences in results may be explained by the shorter eating window and earlier consumption of the last meal (15:00 vs 18:00 hours), creating a longer period of fasting. Here, we chose a 10 h TRE, which we believe would be feasible to incorporate into the work and family life of most adults with type 2 diabetes; future studies will be needed to reveal whether the duration of the fasting period is indeed crucial in determining positive effects on insulin sensitivity.\nDespite the lack of changes in hepatic glycogen and insulin sensitivity, we did find that our 10 h TRE protocol decreased 24 h glucose levels in individuals with type 2 diabetes, primarily driven by decreased nocturnal glucose levels. Notably, TRE also lowered overnight fasting glucose, increased the time spent in the normal glucose range and decreased time spent in the high glucose range, all of which are clinically relevant variables in type 2 diabetes. Importantly, morning fasting glucose levels were consistently lower with TRE than with CON, even when the fasting duration prior to the blood draw was similar between the two interventions. This may indicate lasting changes in nocturnal glucose homeostasis. Additionally, we found that time spent in hypoglycaemia was not significantly increased upon TRE and no serious adverse events were reported resulting from TRE, thereby underscoring that a ~10 h eating window is a safe and effective lifestyle intervention for adults with type 2 diabetes.\nMechanisms underlying the improvement in glucose homeostasis upon TRE remain unclear. Our results show that TRE did not improve peripheral and hepatic insulin sensitivity, skeletal muscle mitochondrial function, energy metabolism or hepatic lipid content, all of which are known to be affected in type 2 diabetes mellitus [25–29]. Under high-insulin conditions during the clamp, we observed a larger reliance on fatty acid oxidation accompanied by higher NEFA levels and lower glucose oxidation. Lower glucose oxidation was also observed when measured over 24 h. Although not statistically significant, the mean of 24 h protein oxidation was higher with TRE, possibly reflecting a more pronounced fasting state and a drive towards a higher rate of amino-acid-driven gluconeogenesis. A previous study by Lundell et al indeed suggested that TRE could affect protein metabolism to cope with the extended period of fasting [30]. However, the exact mechanisms and implications of these effects require further investigation, and it would be interesting to investigate nocturnal glucose metabolism in more detail. The improvement in glucose homeostasis may also partially be explained by the weight loss induced by TRE, which has also been reported previously [4–6, 9, 10, 13, 31]. 
It should be noted, however, that the body weight loss was rather small (~0.7 kg compared with CON after 3 weeks of intervention) which makes it less likely to completely explain the differences in glucose homeostasis.\nA limitation of the current study is the relatively heterogeneous study population consisting of adults with and without use of glucose-lowering medication. Use of medication might have resulted in TRE having less effect, as the medication may be targeting the same metabolic pathways. Only recruiting volunteers not receiving medication would have prevented this issue but would have made the results less applicable to the general type 2 diabetes population. Another limitation of our study is the relatively short duration of 3 weeks. This duration was chosen as the aim of this study was to assess whether TRE would result in metabolic improvements in type 2 diabetes and to explore potential mechanisms underlying these changes. In our experience, human interventions of 3 weeks are able to affect the outcome variables investigated in our study. Since our TRE protocol was feasible and safe, and resulted in improved 24 h glucose levels, it would be interesting to examine the impact of 10 h TRE on glucose homeostasis and insulin sensitivity in type 2 diabetes in the long term to address the clinical relevance of TRE.\nIn conclusion, we show that a daytime 10 h TRE regimen for 3 weeks decreases glucose levels and prolongs the time spent in normoglycaemia in adults with type 2 diabetes as compared with spreading daily food intake over at least 14 h. These improvements were not mediated by changes in hepatic glycogen, insulin sensitivity, mitochondrial function or 24 h substrate oxidation. These data highlight the potential benefits of TRE in type 2 diabetes." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Intervention", "Procedures", "Biochemical analyses", "Data analysis", "Results", "Participant characteristics", "Adherence", "Hepatic glycogen and lipid content", "Insulin sensitivity and glucose homeostasis", "Twenty-four-hour energy and substrate metabolism", "Discussion", "Supplementary information" ]
[ "Our modern 24 h society is characterised by ubiquitous food availability, irregular sleep–activity patterns and frequent exposure to artificial light sources. Together, these factors lead to a disrupted day–night rhythm, which contributes to the development of type 2 diabetes [1–3]. In Western society, most people tend to spread their daily food intake over a minimum of 14 h [4], likely resulting in the absence of a true, nocturnal fasted state. Restricting food intake to a pre-defined time window (typically ≤12 h), i.e. time-restricted eating [TRE]), restores the cycle of daytime eating and prolonged fasting during the evening and night. Indeed, several studies demonstrated that TRE has promising metabolic effects in overweight or obese individuals, including increased lipid oxidation [5], decreased plasma glucose levels [6, 7] and improved insulin sensitivity [8]. While promising, the latter studies applied extremely short eating time windows (e.g. 6–8 h) in highly-controlled settings [5–10], thus hampering implementation into daily life. To date, only Parr et al have successfully explored the potential of TRE in adults with type 2 diabetes using a 9 h TRE regimen [11], However, the effects of TRE on metabolic health remained largely unexplored.\nDespite the fact that TRE is sometimes accompanied by (unintended) weight loss [4–6, 9, 10, 12, 13], which inherently improves metabolic health, it has also been reported to improve metabolic health in the absence of weight loss [8], indicating that additional mechanisms underlie the effects of TRE. In this context, individuals with impaired metabolic health display aberrations in rhythmicity of metabolic processes such as glucose homeostasis [14, 15], mitochondrial oxidative capacity [16] and whole-body substrate oxidation [16] compared with the rhythms found in healthy, lean individuals [14, 15, 17]. Disruption of circadian rhythmicity is proposed to contribute to the impaired matching of substrate utilisation with substrate availability, which is associated with type 2 diabetes [18]. In turn, we hypothesise that these impairments in metabolic rhythmicity are due to a disturbed eating–fasting cycle. Therefore, restricting food intake to daytime and, consequently, extending the period of fasting, may improve metabolic health. More specifically, hepatic glycogen could play a pivotal role in this process, as it serves as a fuel during the night when glucose levels are low and is replenished during the daytime [19]. A decrease in hepatic glycogen triggers the stimulation of fat oxidation and molecular metabolic adaptations that accommodate substrate availability in the fasted state [20], and the need to replenish these stores may improve insulin sensitivity. Hitherto, it is not known whether TRE could result in a more pronounced depletion in hepatic glycogen levels in type 2 diabetes, leading to an improved insulin sensitivity.\nThe aim of the current study was to examine the effect of limiting food intake to a feasible 10 h daily time frame for 3 weeks in free-living conditions on hepatic glycogen utilisation and insulin sensitivity in adults with type 2 diabetes.", "This randomised crossover study was conducted between April 2019 and February 2021, after approval of the Ethics Committee of Maastricht University Medical Center (Maastricht, the Netherlands), and conformed with the Declaration of Helsinki [21]. The trial was registered at ClinicalTrials.gov (registration no. NCT03992248). 
All volunteers signed an informed consent form prior to participation. The randomisation procedure is described in the electronic supplementary material (ESM) Methods. The study consisted of two 3-week intervention periods separated by a wash-out period of ≥4 weeks. At the end of each intervention period, main outcomes were measured (ESM Fig. 1) at the Metabolic Research Unit of Maastricht University, the Netherlands. Male and female adults with type 2 diabetes, aged between 50 and 75 years and BMI ≥25 kg/m2, were eligible for participation. For detailed inclusion and exclusion criteria, see ESM Table 1.\nIntervention During the TRE intervention, volunteers were instructed to consume their habitual diet within a 10 h window during the daytime, with the last meal completed no later than 18:00 hours. Outside this time window, volunteers were only allowed to drink water, plain tea and black coffee. To increase compliance, volunteers were also allowed to drink zero-energy soft drinks in the evening hours if consumed in moderation. During the control (CON) intervention, volunteers were instructed to spread their habitual diet over at least 14 h per day without additional restraints on the time window of food intake. For both intervention periods, volunteers were instructed to maintain their normal physical activity and sleep patterns and to remain weight stable. Food intake and sleep times were recorded daily using a food and sleep diary. Volunteers based the food intake of their second intervention period on the food and sleep diary filled out during the first period to promote similar dietary quantity and quality in both intervention arms. To optimise compliance, a weekly phone call was scheduled to monitor the volunteers and to provide additional instructions if necessary.\nDuring the TRE intervention, volunteers were instructed to consume their habitual diet within a 10 h window during the daytime, with the last meal completed no later than 18:00 hours. Outside this time window, volunteers were only allowed to drink water, plain tea and black coffee. To increase compliance, volunteers were also allowed to drink zero-energy soft drinks in the evening hours if consumed in moderation. During the control (CON) intervention, volunteers were instructed to spread their habitual diet over at least 14 h per day without additional restraints on the time window of food intake. For both intervention periods, volunteers were instructed to maintain their normal physical activity and sleep patterns and to remain weight stable. Food intake and sleep times were recorded daily using a food and sleep diary. Volunteers based the food intake of their second intervention period on the food and sleep diary filled out during the first period to promote similar dietary quantity and quality in both intervention arms. To optimise compliance, a weekly phone call was scheduled to monitor the volunteers and to provide additional instructions if necessary.\nProcedures At the start of each intervention period, body weight was determined and a continuous glucose monitoring (CGM) device (Freestyle Libre Pro; Abbott, Chicago, USA) was placed on the back of the upper arm to measured interstitial glucose levels every 15 min. On one occasion, between day 7 and 15 of each intervention, fasted hepatic glycogen was measured in the morning at 07:00 hours using 13C-MRS. 
The day before the MRS measurement, volunteers consumed a standardised meal at home at either 16:40 hours (TRE) or 20:40 hours (CON), ensuring 20 min of meal consumption, so that they were fasted from, respectively, 17:00 hours or 21:00 hours.\nOn day 19, volunteers arrived at the university at 15:00 hours for measurement of body composition using air displacement plethysmography (BodPod; Cosmed, Rome, Italy), followed by the placement of an i.v. cannula. Afterwards, volunteers entered a respiration chamber for a 36 h measurement of energy expenditure and substrate utilisation using whole-room indirect calorimetry. With TRE, a dinner consisting of 49 per cent of energy (En%) was provided in the respiration chamber at 17:40 hours and volunteers were fasted from 18:00 hours. With CON, a snack of 10 En% was provided at 18:00 hours followed by a 39 En% dinner at 21:40 hours, resulting in a fast from 22:00 hours. Energy content of the meals was based on estimated energy expenditure using the Harris and Benedict equation [22].\nOn day 20, while in the respiration chamber, a fasted blood sample was obtained at 07:30 hours and 24 h urine was collected for analysis of nitrogen excretion. In both intervention arms, volunteers received standardised meals at fixed times (08:00, 12:00, 15:00 and 18:40 hours) consisting of, respectively, 21 En%, 30 En%, 10 En% and 39 En%. Energy intake was based on sleeping metabolic rate (determined during the night of day 19) multiplied by an activity factor of 1.5. Macronutrient composition of the meals was 56 En% carbohydrates, 30 En% fat and 14 En% protein. Furthermore, volunteers performed low-intensity physical activity at 10:30, 13:00 and 16:00 hours. One bout of activity consisted of 15 min of stepping on an aerobic step and 15 min of standing.\nOn day 21, after a standardised 11 h fast, volunteers left the respiration chamber at 06:00 hours. Next, a blood sample was taken, followed by measurement of hepatic glycogen and lipid content using 13C-MRS and 1H-MRS, respectively. Subsequently, a muscle biopsy was obtained to assess ex vivo mitochondrial oxidative capacity, after which a hyperinsulinaemic–euglycaemic two-step clamp was started to measure insulin sensitivity. See ESM Methods for detailed descriptions of measurement methods.\nAt the start of each intervention period, body weight was determined and a continuous glucose monitoring (CGM) device (Freestyle Libre Pro; Abbott, Chicago, USA) was placed on the back of the upper arm to measured interstitial glucose levels every 15 min. On one occasion, between day 7 and 15 of each intervention, fasted hepatic glycogen was measured in the morning at 07:00 hours using 13C-MRS. The day before the MRS measurement, volunteers consumed a standardised meal at home at either 16:40 hours (TRE) or 20:40 hours (CON), ensuring 20 min of meal consumption, so that they were fasted from, respectively, 17:00 hours or 21:00 hours.\nOn day 19, volunteers arrived at the university at 15:00 hours for measurement of body composition using air displacement plethysmography (BodPod; Cosmed, Rome, Italy), followed by the placement of an i.v. cannula. Afterwards, volunteers entered a respiration chamber for a 36 h measurement of energy expenditure and substrate utilisation using whole-room indirect calorimetry. With TRE, a dinner consisting of 49 per cent of energy (En%) was provided in the respiration chamber at 17:40 hours and volunteers were fasted from 18:00 hours. 
With CON, a snack of 10 En% was provided at 18:00 hours followed by a 39 En% dinner at 21:40 hours, resulting in a fast from 22:00 hours. Energy content of the meals was based on estimated energy expenditure using the Harris and Benedict equation [22].\nOn day 20, while in the respiration chamber, a fasted blood sample was obtained at 07:30 hours and 24 h urine was collected for analysis of nitrogen excretion. In both intervention arms, volunteers received standardised meals at fixed times (08:00, 12:00, 15:00 and 18:40 hours) consisting of, respectively, 21 En%, 30 En%, 10 En% and 39 En%. Energy intake was based on sleeping metabolic rate (determined during the night of day 19) multiplied by an activity factor of 1.5. Macronutrient composition of the meals was 56 En% carbohydrates, 30 En% fat and 14 En% protein. Furthermore, volunteers performed low-intensity physical activity at 10:30, 13:00 and 16:00 hours. One bout of activity consisted of 15 min of stepping on an aerobic step and 15 min of standing.\nOn day 21, after a standardised 11 h fast, volunteers left the respiration chamber at 06:00 hours. Next, a blood sample was taken, followed by measurement of hepatic glycogen and lipid content using 13C-MRS and 1H-MRS, respectively. Subsequently, a muscle biopsy was obtained to assess ex vivo mitochondrial oxidative capacity, after which a hyperinsulinaemic–euglycaemic two-step clamp was started to measure insulin sensitivity. See ESM Methods for detailed descriptions of measurement methods.\nBiochemical analyses Blood samples were used for quantification of metabolites and nitrogen was assessed using 24 h urine samples. See ESM Methods for further details regarding the biochemical analyses.\nBlood samples were used for quantification of metabolites and nitrogen was assessed using 24 h urine samples. See ESM Methods for further details regarding the biochemical analyses.\nData analysis The statistical packages SPSS Statistics 25 (IBM, New York, USA) and Prism 9 (GraphPad Software, San Diego, USA) were used for statistical analyses. Interventional comparisons are expressed as mean ± SEM. Participant characteristics are expressed as mean ± SD. Differences between CON and TRE were tested using the paired t test, unless specified otherwise. A two-sided p<0.05 was considered statistically significant. The power calculation, as well as other calculations made using the measured data, are described in the ESM Methods.\nThe statistical packages SPSS Statistics 25 (IBM, New York, USA) and Prism 9 (GraphPad Software, San Diego, USA) were used for statistical analyses. Interventional comparisons are expressed as mean ± SEM. Participant characteristics are expressed as mean ± SD. Differences between CON and TRE were tested using the paired t test, unless specified otherwise. A two-sided p<0.05 was considered statistically significant. The power calculation, as well as other calculations made using the measured data, are described in the ESM Methods.", "During the TRE intervention, volunteers were instructed to consume their habitual diet within a 10 h window during the daytime, with the last meal completed no later than 18:00 hours. Outside this time window, volunteers were only allowed to drink water, plain tea and black coffee. To increase compliance, volunteers were also allowed to drink zero-energy soft drinks in the evening hours if consumed in moderation. 
During the control (CON) intervention, volunteers were instructed to spread their habitual diet over at least 14 h per day without additional restraints on the time window of food intake. For both intervention periods, volunteers were instructed to maintain their normal physical activity and sleep patterns and to remain weight stable. Food intake and sleep times were recorded daily using a food and sleep diary. Volunteers based the food intake of their second intervention period on the food and sleep diary filled out during the first period to promote similar dietary quantity and quality in both intervention arms. To optimise compliance, a weekly phone call was scheduled to monitor the volunteers and to provide additional instructions if necessary.", "At the start of each intervention period, body weight was determined and a continuous glucose monitoring (CGM) device (Freestyle Libre Pro; Abbott, Chicago, USA) was placed on the back of the upper arm to measured interstitial glucose levels every 15 min. On one occasion, between day 7 and 15 of each intervention, fasted hepatic glycogen was measured in the morning at 07:00 hours using 13C-MRS. The day before the MRS measurement, volunteers consumed a standardised meal at home at either 16:40 hours (TRE) or 20:40 hours (CON), ensuring 20 min of meal consumption, so that they were fasted from, respectively, 17:00 hours or 21:00 hours.\nOn day 19, volunteers arrived at the university at 15:00 hours for measurement of body composition using air displacement plethysmography (BodPod; Cosmed, Rome, Italy), followed by the placement of an i.v. cannula. Afterwards, volunteers entered a respiration chamber for a 36 h measurement of energy expenditure and substrate utilisation using whole-room indirect calorimetry. With TRE, a dinner consisting of 49 per cent of energy (En%) was provided in the respiration chamber at 17:40 hours and volunteers were fasted from 18:00 hours. With CON, a snack of 10 En% was provided at 18:00 hours followed by a 39 En% dinner at 21:40 hours, resulting in a fast from 22:00 hours. Energy content of the meals was based on estimated energy expenditure using the Harris and Benedict equation [22].\nOn day 20, while in the respiration chamber, a fasted blood sample was obtained at 07:30 hours and 24 h urine was collected for analysis of nitrogen excretion. In both intervention arms, volunteers received standardised meals at fixed times (08:00, 12:00, 15:00 and 18:40 hours) consisting of, respectively, 21 En%, 30 En%, 10 En% and 39 En%. Energy intake was based on sleeping metabolic rate (determined during the night of day 19) multiplied by an activity factor of 1.5. Macronutrient composition of the meals was 56 En% carbohydrates, 30 En% fat and 14 En% protein. Furthermore, volunteers performed low-intensity physical activity at 10:30, 13:00 and 16:00 hours. One bout of activity consisted of 15 min of stepping on an aerobic step and 15 min of standing.\nOn day 21, after a standardised 11 h fast, volunteers left the respiration chamber at 06:00 hours. Next, a blood sample was taken, followed by measurement of hepatic glycogen and lipid content using 13C-MRS and 1H-MRS, respectively. Subsequently, a muscle biopsy was obtained to assess ex vivo mitochondrial oxidative capacity, after which a hyperinsulinaemic–euglycaemic two-step clamp was started to measure insulin sensitivity. 
See ESM Methods for detailed descriptions of measurement methods.", "Blood samples were used for quantification of metabolites and nitrogen was assessed using 24 h urine samples. See ESM Methods for further details regarding the biochemical analyses.", "The statistical packages SPSS Statistics 25 (IBM, New York, USA) and Prism 9 (GraphPad Software, San Diego, USA) were used for statistical analyses. Interventional comparisons are expressed as mean ± SEM. Participant characteristics are expressed as mean ± SD. Differences between CON and TRE were tested using the paired t test, unless specified otherwise. A two-sided p<0.05 was considered statistically significant. The power calculation, as well as other calculations made using the measured data, are described in the ESM Methods.", "Participant characteristics A flowchart of participant enrolment is depicted in ESM Fig. 2. Baseline participant characteristics are presented in Table 1. The median Morningness-Eveningness Questionnaire Self-Assessment (MEQ-SA) score amounted to 59.5 (range 41–72). Only one volunteer was identified as an extreme morning type but was included in the study as the intervention did not interfere with his habitual day–night rhythm.\nTable 1Baseline characteristics of participantsCharacteristicMeasurement/valueN14Sex, n female/n male7/7Age, years67.5±5.2BMI, kg/m230.5±3.7Diabetes medication, n yes/n no10/4 Metformin only, n7 Metformin + gliclazide, n3Fasting plasma glucose, mmol/l7.9±1.3HbA1c, mmol/mol46.1±7.2HbA1c, %6.4±0.7AST, μkat/l0.4±0.1ALT, μkat/l0.4±0.2GGT, μkat/l0.4±0.2eGFR, ml min−1 1.73 m−279.9±14.5MEQ-SA, score59.1±7.7Data are shown as mean ± SD, unless stated otherwiseALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment\nBaseline characteristics of participants\nData are shown as mean ± SD, unless stated otherwise\nALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment\nA flowchart of participant enrolment is depicted in ESM Fig. 2. Baseline participant characteristics are presented in Table 1. The median Morningness-Eveningness Questionnaire Self-Assessment (MEQ-SA) score amounted to 59.5 (range 41–72). Only one volunteer was identified as an extreme morning type but was included in the study as the intervention did not interfere with his habitual day–night rhythm.\nTable 1Baseline characteristics of participantsCharacteristicMeasurement/valueN14Sex, n female/n male7/7Age, years67.5±5.2BMI, kg/m230.5±3.7Diabetes medication, n yes/n no10/4 Metformin only, n7 Metformin + gliclazide, n3Fasting plasma glucose, mmol/l7.9±1.3HbA1c, mmol/mol46.1±7.2HbA1c, %6.4±0.7AST, μkat/l0.4±0.1ALT, μkat/l0.4±0.2GGT, μkat/l0.4±0.2eGFR, ml min−1 1.73 m−279.9±14.5MEQ-SA, score59.1±7.7Data are shown as mean ± SD, unless stated otherwiseALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment\nBaseline characteristics of participants\nData are shown as mean ± SD, unless stated otherwise\nALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment\nAdherence Volunteers did not indicate any changes in diabetes medication throughout the study. 
Results

Participant characteristics

A flowchart of participant enrolment is depicted in ESM Fig. 2 and baseline characteristics are presented in Table 1. The median Morningness-Eveningness Questionnaire Self-Assessment (MEQ-SA) score was 59.5 (range 41–72). Only one volunteer was identified as an extreme morning type, but was included in the study as the intervention did not interfere with his habitual day–night rhythm.

Table 1 Baseline characteristics of participants
N: 14
Sex (female/male): 7/7
Age (years): 67.5±5.2
BMI (kg/m2): 30.5±3.7
Diabetes medication (yes/no): 10/4
  Metformin only: 7
  Metformin + gliclazide: 3
Fasting plasma glucose (mmol/l): 7.9±1.3
HbA1c (mmol/mol): 46.1±7.2
HbA1c (%): 6.4±0.7
AST (μkat/l): 0.4±0.1
ALT (μkat/l): 0.4±0.2
GGT (μkat/l): 0.4±0.2
eGFR (ml min−1 1.73 m−2): 79.9±14.5
MEQ-SA score: 59.1±7.7
Data are shown as mean ± SD, unless stated otherwise. ALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment

Adherence

Volunteers did not report any changes in diabetes medication throughout the study. They recorded their daily food intake and sleep habits for, on average, 17 days during TRE and 18 days during CON. Based on these data, the eating window averaged 9.1±0.2 h in TRE vs 13.4±0.1 h in CON (p<0.01). Sleep–wake patterns were similar in both interventions, with a mean sleep duration of 8.1±0.2 h during TRE and 8.0±0.2 h during CON (p=0.17). Body weight at the start of each intervention was comparable between TRE and CON (89.1±3.7 vs 89.2±3.8 kg, respectively, p=0.62). Although volunteers were instructed to remain weight stable, a small but significant weight loss occurred in response to TRE (−1.0±0.3 kg, p<0.01) but not CON (−0.3±0.3 kg, p=0.22); the weight loss with TRE was significantly greater than the weight change observed with CON (p=0.02). Body composition determined on day 19 was comparable between TRE and CON (fat mass 37.4±2.7 vs 37.9±2.9 kg, p=0.58; fat-free mass 50.7±2.6 vs 51.0±2.6 kg, p=0.60).
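The eating window reported above is derived from the food diaries. A minimal sketch of how such a daily window could be computed from first and last intake times is shown below; the timestamps and helper name are invented for illustration and are not the study's procedure.

# Illustrative only: derive a daily eating window (h) from diary timestamps.
from datetime import datetime
from statistics import mean

def eating_window_h(first_intake_start, last_intake_end, fmt="%H:%M"):
    """Hours between the start of the first and the end of the last eating occasion of a day."""
    start = datetime.strptime(first_intake_start, fmt)
    end = datetime.strptime(last_intake_end, fmt)
    return (end - start).total_seconds() / 3600.0

# Three diary days of one fictitious TRE participant
days = [("08:10", "17:50"), ("08:30", "18:00"), ("07:55", "17:40")]
print(round(mean(eating_window_h(a, b) for a, b in days), 1))  # mean daily eating window (h)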
Hepatic glycogen and lipid content

Approximately half-way through each intervention period, hepatic glycogen was assessed in the morning following a 14 h (TRE) or 10 h (CON) night-time fast. Hepatic glycogen did not differ significantly between TRE and CON (0.16±0.03 vs 0.17±0.02 arbitrary units [AU], respectively, p=0.43). At the end of each intervention, hepatic glycogen was also assessed after a standardised overnight fast of 11 h for both interventions, and again did not differ between TRE and CON (0.15±0.01 vs 0.15±0.01 AU, respectively, p=0.88). We also assessed hepatic lipid content; neither the amount of lipids nor the composition of the hepatic lipid pool was altered with TRE vs CON (total lipid content 9.0±2.0 vs 8.6±1.6%, p=0.47; polyunsaturated fatty acids 17.0±1.3 vs 16.2±1.2%, p=0.41; mono-unsaturated fatty acids 40.6±0.9 vs 42.9±1.4%, p=0.19; saturated fatty acids 42.4±1.2 vs 40.9±1.5%, p=0.41).

Insulin sensitivity and glucose homeostasis

A hyperinsulinaemic–euglycaemic two-step clamp with a glucose tracer and indirect calorimetry was performed to assess insulin sensitivity. No differences in M value were found when comparing TRE and CON (19.6±1.8 vs 17.7±1.8 μmol kg−1 min−1, respectively, p=0.10). Hepatic insulin sensitivity was not affected by TRE, as exemplified by a similar endogenous glucose production (EGP) with TRE and CON in the fasted state and in the low- and high-insulin-stimulated states (p=0.83, p=0.38 and p=0.30, respectively; Fig. 1a). Suppression of EGP was also similar when comparing TRE with CON upon low- and high-insulin infusion (p=0.67 and p=0.47; Fig. 1a). NEFA suppression upon low insulin exposure did not differ between TRE and CON (−365.2±41.6 vs −359.1±43.2 μmol/l, p=0.80). However, absolute NEFA levels were lower with TRE during the low- and high-insulin phases (p=0.02 and p=0.04; Fig. 1b), which may hint at improved adipose tissue insulin sensitivity.

Fig. 1 Effect of TRE on EGP (a), plasma NEFA (b), Rd (c), NOGD (d) and fat oxidation (e) measured during a hyperinsulinaemic–euglycaemic two-step clamp (n=14). *p<0.05 (data were analysed with paired t tests). Rd, rate of disappearance

Peripheral insulin-stimulated glucose disposal, reflected by the change in rate of disappearance (Rd) from basal to high insulin, remained unchanged with TRE (p=0.25; Fig. 1c). However, we observed a larger insulin-stimulated non-oxidative glucose disposal (NOGD; difference from basal to high insulin) with TRE than with CON (4.3±1.1 vs 1.5±1.7 μmol kg−1 min−1, respectively, p=0.04; Fig. 1d), reflecting an increased capacity to store glucose as glycogen. Conversely, insulin-stimulated carbohydrate oxidation from basal to high insulin appeared lower with TRE than with CON (4.7±0.9 vs 6.2±0.9 μmol kg−1 min−1, respectively), but this difference was not statistically significant (p=0.07). Consistently, insulin-induced suppression of fat oxidation from basal to high insulin was smaller with TRE than with CON (−1.3±0.3 vs −1.8±0.2 μmol kg−1 min−1, p=0.04; Fig. 1e). Energy expenditure did not differ between TRE and CON during the basal, low-insulin and high-insulin phases of the clamp. These results indicate that, while peripheral insulin sensitivity is unchanged with TRE, glucose uptake is directed more towards storage than oxidation.
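NOGD is conventionally derived as the difference between tracer-derived glucose disposal and carbohydrate oxidation from indirect calorimetry. The sketch below illustrates that arithmetic with invented numbers; the study's exact calculations are described in the ESM Methods.

# Conventional derivation of non-oxidative glucose disposal (NOGD) during a clamp:
# NOGD = rate of glucose disappearance (Rd) minus carbohydrate oxidation.
# Values below are invented and for illustration only.

def nogd(rd, cho_oxidation):
    """Inputs and result in umol kg-1 min-1."""
    return rd - cho_oxidation

basal = nogd(rd=11.0, cho_oxidation=8.0)            # fasted state (illustrative values)
high_insulin = nogd(rd=20.0, cho_oxidation=12.0)    # high-insulin phase (illustrative values)
print("insulin-stimulated increase in NOGD:", high_insulin - basal)  # -> 5.0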
Both hepatic and peripheral insulin sensitivity, as well as hepatic glycogen levels, were additionally analysed in the volunteers who used only metformin as diabetes treatment (n=7); this did not alter the outcomes.

To examine the effect of TRE on glucose homeostasis, CGM data from the last 4 days in the free-living situation (days 15–18) were analysed for both interventions. Four volunteers had incomplete CGM data due to technical issues, hence statistics were performed on CGM data from ten volunteers. Mean 24 h glucose levels were lower in TRE than in CON (6.8±0.2 vs 7.6±0.3 mmol/l, p<0.01; Fig. 2a–f), and nocturnal glucose levels were consistently lower in TRE vs CON (Fig. 2a–d). Furthermore, volunteers spent more time in the normal glucose range with TRE than with CON (15.1±0.8 vs 12.2±1.1 h per day, p=0.01; Fig. 2f). Concomitantly, time spent in the high glucose range was shorter in TRE than in CON (5.5±0.5 vs 7.5±0.7 h per day, p=0.02), whereas no differences between eating regimens were found for time spent in hyperglycaemia (2.3±0.4 vs 3.7±0.8 h per day, p=0.24), in the low glucose range (0.5±0.1 vs 0.4±0.1 h per day, p=1.00) or in hypoglycaemia (0.7±0.3 vs 0.1±0.0 h per day, p=0.48).

Fig. 2 (a–d) Twenty-four-hour glucose levels on days 15 (a), 16 (b), 17 (c) and 18 (d) during TRE or CON (n=10). (e) Mean 24 h glucose from day 15 to day 18 (n=10), analysed using a paired t test. (f) Time spent in each glucose range during days 15–18 (n=10), analysed using Wilcoxon tests with Bonferroni correction. *p<0.05. Hypo, hypoglycaemia (glucose <4.0 mmol/l); Low, glucose 4.0–4.3 mmol/l; Normal range, glucose 4.4–7.2 mmol/l; High, glucose 7.3–9.9 mmol/l; Hyper, hyperglycaemia (glucose >10 mmol/l)
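The CGM summaries above (mean 24 h glucose and time in each range) follow directly from the 15 min interstitial readings and the thresholds given in the Fig. 2 legend. The sketch below shows one way to compute them; the data are synthetic and the band boundaries assume readings reported to one decimal place, so this is an illustration rather than the study's analysis pipeline.

# Sketch of CGM summary measures: mean 24 h glucose and hours per day in each range.
import numpy as np

BANDS = [               # (label, lower bound inclusive, upper bound exclusive), mmol/l
    ("hypo", 0.0, 4.0),
    ("low", 4.0, 4.4),
    ("normal", 4.4, 7.3),
    ("high", 7.3, 10.0),
    ("hyper", 10.0, np.inf),
]

def cgm_summary(glucose, interval_min=15.0):
    glucose = np.asarray(glucose, dtype=float)
    h_per_sample = interval_min / 60.0
    n_days = len(glucose) * h_per_sample / 24.0
    summary = {"mean_24h_glucose": float(glucose.mean())}
    for label, lo, hi in BANDS:
        in_band = (glucose >= lo) & (glucose < hi)
        summary[f"h_per_day_{label}"] = float(in_band.sum() * h_per_sample / n_days)
    return summary

readings = np.random.default_rng(1).normal(7.0, 1.3, size=4 * 96)  # 4 days x 96 samples/day, synthetic
print(cgm_summary(readings))

Per-participant values from the two arms would then be compared with Wilcoxon signed-rank tests and a Bonferroni correction across the ranges, as stated in the figure legend.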
Additionally, fasting plasma metabolites were assessed on days 20 and 21 of each intervention. On day 20, blood samples were taken after a 10 h (CON) or 14 h (TRE) overnight fast; plasma glucose was lower after TRE than after CON (7.6±0.4 vs 8.6±0.4 mmol/l, p=0.03), whereas plasma insulin, triglyceride (TG) and NEFA levels were comparable between conditions (Table 2). On day 21, when the overnight fasting time was the same for both interventions (11 h), plasma glucose remained lower in TRE than in CON (8.0±0.3 vs 8.9±0.5 mmol/l, p=0.04), whereas no differences were detected in plasma insulin, TG or NEFA levels (Table 2).

Table 2 Blood plasma biochemistry
Day 20 (n=13)
  Triglycerides (mmol/l): CON 2.1±0.3, TRE 1.9±0.2, p=0.30
  NEFA (mmol/l): CON 0.529±0.038, TRE 0.489±0.035, p=0.39
  Glucose (mmol/l)a: CON 8.6±0.4, TRE 7.6±0.4, p=0.03
  Insulin (pmol/l): CON 111.1±20.8, TRE 104.2±13.9, p=0.27
Day 21 (n=14)
  Triglycerides (mmol/l): CON 2.1±0.3, TRE 2.2±0.2, p=0.66
  NEFA (mmol/l): CON 0.601±0.070, TRE 0.542±0.064, p=0.30
  Glucose (mmol/l)b: CON 8.9±0.5, TRE 8.0±0.3, p=0.04
  Insulin (pmol/l): CON 97.2±13.9, TRE 111.1±20.8, p=0.16
Data are shown as mean ± SEM. aFasted values after 10 h (CON) or 14 h (TRE) of fasting. bFasted values after 11 h of fasting for both CON and TRE.
Twenty-four-hour energy and substrate metabolism

On day 19, volunteers resided in a respiration chamber for 36 h for measurement of energy expenditure and substrate oxidation. Twenty-four-hour energy expenditure was similar for TRE and CON (9.57±0.22 vs 9.68±0.29 MJ/day, respectively, p=0.22; Fig. 3a), as was the 24 h respiratory exchange ratio (RER; 0.86±0.01 vs 0.86±0.01, p=0.13). Nonetheless, 24 h carbohydrate oxidation was lower in TRE than in CON (260.2±7.6 vs 277.8±10.7 g/day, p=0.04; Fig. 3b), whereas 24 h fat oxidation was unaffected (91.9±6.6 vs 93.5±5.5 g/day, p=0.72; Fig. 3c). Twenty-four-hour protein oxidation appeared higher with TRE, but the difference did not reach statistical significance (72.8±7.2 vs 58.5±5.4 g/day, p=0.18; Fig. 3d). Sleeping metabolic rate appeared lower with TRE than with CON (4.66±0.14 vs 4.77±0.18 kJ/min), although this decrease was not statistically significant (p=0.05; Fig. 3e). There was no change in carbohydrate or fat oxidation during sleep with TRE vs CON (RER 0.84±0.01 vs 0.84±0.01, p=0.50; Fig. 3f).

Fig. 3 Effect of TRE on 24 h energy expenditure (a), substrate oxidation (b–d), sleeping metabolic rate (e) and RER during sleep (f) (n=13). *p<0.05
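The 24 h substrate oxidation rates above are derived from whole-room indirect calorimetry combined with urinary nitrogen excretion. The sketch below uses widely cited stoichiometric equations (Weir for energy expenditure, Frayn-type for substrate oxidation); the coefficients are common literature values and may differ slightly from those used in this study (see ESM Methods), and the example inputs are synthetic.

# Sketch of standard indirect-calorimetry equations relating 24 h O2 uptake, CO2
# production and urinary nitrogen to energy expenditure and substrate oxidation.

def calorimetry_24h(vo2_l, vco2_l, urinary_n_g):
    """24 h totals: VO2 and VCO2 in litres, urinary nitrogen in grams."""
    kcal = 3.941 * vo2_l + 1.106 * vco2_l - 2.17 * urinary_n_g   # Weir equation, kcal
    return {
        "RER": vco2_l / vo2_l,
        "energy_expenditure_MJ": kcal * 4.184 / 1000.0,
        "cho_oxidation_g": 4.55 * vco2_l - 3.21 * vo2_l - 2.87 * urinary_n_g,
        "fat_oxidation_g": 1.67 * vo2_l - 1.67 * vco2_l - 1.92 * urinary_n_g,
        "protein_oxidation_g": 6.25 * urinary_n_g,
    }

# Synthetic 24 h totals in the range of the chamber data reported above
print(calorimetry_24h(vo2_l=460.0, vco2_l=395.0, urinary_n_g=12.0))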
On day 21, muscle biopsies were obtained to assess ex vivo mitochondrial oxidative capacity by means of high-resolution respirometry. In total, paired biopsies from 13 of the 14 volunteers were analysed. Mitochondrial respiration did not differ between TRE and CON (Table 3).

Table 3 Mitochondrial oxidative capacity
State 2a
  M (pmol mg−1 s−1): CON 5.0±0.4, TRE 5.5±0.6, p=0.49
  MO (pmol mg−1 s−1): CON 6.6±0.4, TRE 6.8±0.5, p=0.93
  MG (pmol mg−1 s−1): CON 7.1±0.4, TRE 6.9±0.4, p=0.54
State 3b
  MO (pmol mg−1 s−1): CON 28.3±2.0, TRE 29.1±1.4, p=0.91
  MG (pmol mg−1 s−1): CON 31.2±1.9, TRE 32.6±1.8, p=0.82
  MOG (pmol mg−1 s−1): CON 37.0±2.3, TRE 37.8±1.9, p=0.99
  MOGS (pmol mg−1 s−1): CON 55.7±3.3, TRE 57.6±2.7, p=0.80
State Uc
  MGS (pmol mg−1 s−1): CON 58.0±3.2, TRE 59.9±2.9, p=0.91
  FCCP (pmol mg−1 s−1): CON 67.8±4.7, TRE 66.6±3.5, p=0.50
State 4od
  Oligomycin (pmol mg−1 s−1): CON 17.6±1.2, TRE 18.1±1.2, p=0.85
Data are presented as mean ± SEM, n=13. aState 2, respiration in the presence of substrates alone. bState 3, ADP-stimulated respiration. cState U, maximal respiration in response to an uncoupling agent. dState 4o, mitochondrial proton leak measured by blocking ATP synthase. FCCP, trifluoromethoxy carbonyl cyanide-4 phenylhydrazone; G, glutamate; M, malate; O, octanoyl-carnitine; S, succinate
Discussion

TRE is a novel strategy to improve metabolic health and has been proposed to counteract the detrimental effects of eating throughout the day by limiting food intake to daytime hours. To date, only a few studies have examined the metabolic effects of TRE in adults with type 2 diabetes. Here, we tested whether restricting energy intake to a feasible, 10 h time frame for 3 weeks would lower hepatic glycogen levels and improve insulin sensitivity in overweight/obese adults with type 2 diabetes. Additionally, we explored the effects of TRE on glucose homeostasis, 24 h energy metabolism and mitochondrial function.

We hypothesised that the 10 h TRE regimen, with the latest food intake at 18:00 hours, would result in a more pronounced fasting state, especially during the night. During the night, the liver is crucial for the regulation of blood glucose through gluconeogenesis and glycogenolysis, and these processes have been shown to be elevated in type 2 diabetes [23, 24]. We therefore hypothesised that hepatic glycogen would be lower after TRE and that this would be associated with improved insulin sensitivity.

In contrast to our hypothesis, hepatic glycogenolysis appeared to be unaffected by TRE, since there was no change in glycogen content after a standardised 11 h fast at the end of the intervention, nor after a 14 h (TRE) vs 10 h (CON) overnight fast half-way through the intervention. In addition, EGP suppression during the low-insulin phase of the hyperinsulinaemic–euglycaemic clamp (reflecting hepatic insulin sensitivity) did not differ between TRE and CON. A limitation of our approach is that we did not measure hepatic glycogen dynamics during the night. Such measurements may be important, as our clamp results showed an increase in NOGD upon high-insulin stimulation with TRE, suggesting increased glycogen storage. These results could point to small changes in hepatic glycogen turnover; alternatively, muscle glycogen levels may play a role in explaining our clamp results, as muscle accounts for most of the glycogen synthesis upon high-insulin stimulation in healthy individuals.
Interestingly, type 2 diabetes is characterised by impaired insulin-stimulated glycogen storage [25]; an improvement in NOGD due to TRE may therefore help to regulate 24 h and postprandial glucose levels. Indeed, 24 h glucose levels were significantly improved after TRE.

We did not observe an effect of TRE on insulin sensitivity. A previous controlled, randomised crossover study by Sutton et al did show improved insulin sensitivity with TRE [8]. In that study, men with prediabetes followed a 6 h early TRE regimen for 5 weeks, with the last meal consumed before 15:00 hours. The difference in results may be explained by the shorter eating window and the earlier consumption of the last meal (15:00 vs 18:00 hours), creating a longer daily fast. Here, we chose a 10 h TRE window, which we believe is feasible to incorporate into the work and family life of most adults with type 2 diabetes; future studies will be needed to reveal whether the duration of the fasting period is indeed crucial in determining positive effects on insulin sensitivity.

Despite the lack of changes in hepatic glycogen and insulin sensitivity, we did find that our 10 h TRE protocol decreased 24 h glucose levels in individuals with type 2 diabetes, driven primarily by decreased nocturnal glucose levels. Notably, TRE also lowered overnight fasting glucose, increased the time spent in the normal glucose range and decreased the time spent in the high glucose range, all of which are clinically relevant variables in type 2 diabetes. Importantly, morning fasting glucose levels were consistently lower with TRE than with CON, even when the fasting duration prior to the blood draw was similar between the two interventions, which may indicate lasting changes in nocturnal glucose homeostasis. Additionally, time spent in hypoglycaemia was not significantly increased with TRE and no serious adverse events resulting from TRE were reported, underscoring that a ~10 h eating window is a safe and effective lifestyle intervention for adults with type 2 diabetes.

The mechanisms underlying the improvement in glucose homeostasis upon TRE remain unclear. Our results show that TRE did not improve peripheral or hepatic insulin sensitivity, skeletal muscle mitochondrial function, energy metabolism or hepatic lipid content, all of which are known to be affected in type 2 diabetes [25–29]. Under high-insulin conditions during the clamp, we observed a larger reliance on fatty acid oxidation, accompanied by higher NEFA levels and lower glucose oxidation. Lower glucose oxidation was also observed when measured over 24 h. Although not statistically significant, mean 24 h protein oxidation was higher with TRE, possibly reflecting a more pronounced fasting state and a drive towards a higher rate of amino-acid-driven gluconeogenesis. A previous study by Lundell et al indeed suggested that TRE could affect protein metabolism to cope with the extended period of fasting [30]. However, the exact mechanisms and implications of these effects require further investigation, and it would be interesting to investigate nocturnal glucose metabolism in more detail. The improvement in glucose homeostasis may also be partially explained by the weight loss induced by TRE, which has also been reported previously [4–6, 9, 10, 13, 31].
It should be noted, however, that the weight loss was rather small (~0.7 kg relative to CON after 3 weeks of intervention), which makes it less likely to completely explain the differences in glucose homeostasis.

A limitation of the current study is the relatively heterogeneous study population, consisting of adults with and without glucose-lowering medication. Use of medication might have blunted the effect of TRE, as the medication may target the same metabolic pathways. Recruiting only volunteers not receiving medication would have prevented this issue, but would have made the results less applicable to the general type 2 diabetes population. Another limitation of our study is the relatively short duration of 3 weeks. This duration was chosen because the aim of the study was to assess whether TRE results in metabolic improvements in type 2 diabetes and to explore potential mechanisms underlying such changes, and in our experience human interventions of 3 weeks are able to affect the outcome variables investigated here. Since our TRE protocol was feasible and safe, and resulted in improved 24 h glucose levels, it would be interesting to examine the impact of 10 h TRE on glucose homeostasis and insulin sensitivity in type 2 diabetes in the long term, to address the clinical relevance of TRE.

In conclusion, we show that a daytime 10 h TRE regimen for 3 weeks decreases glucose levels and prolongs the time spent in normoglycaemia in adults with type 2 diabetes, compared with spreading daily food intake over at least 14 h. These improvements were not mediated by changes in hepatic glycogen, insulin sensitivity, mitochondrial function or 24 h substrate oxidation. These data highlight the potential benefits of TRE in type 2 diabetes.

Electronic supplementary material (ESM) is available for this article (PDF 809 kb).
[ "Circadian rhythm", "Glucose homeostasis", "Hepatic fat", "Hepatic glycogen", "Insulin sensitivity", "Intermittent fasting", "Lifestyle intervention", "Mitochondrial oxidative capacity", "TRE", "Type 2 diabetes" ]
Introduction

Our modern 24 h society is characterised by ubiquitous food availability, irregular sleep–activity patterns and frequent exposure to artificial light sources. Together, these factors lead to a disrupted day–night rhythm, which contributes to the development of type 2 diabetes [1–3]. In Western society, most people tend to spread their daily food intake over a minimum of 14 h [4], likely resulting in the absence of a true, nocturnal fasted state. Restricting food intake to a pre-defined time window (typically ≤12 h), i.e. time-restricted eating (TRE), restores the cycle of daytime eating and prolonged fasting during the evening and night. Indeed, several studies have demonstrated that TRE has promising metabolic effects in overweight or obese individuals, including increased lipid oxidation [5], decreased plasma glucose levels [6, 7] and improved insulin sensitivity [8]. While promising, these studies applied extremely short eating windows (e.g. 6–8 h) in highly controlled settings [5–10], hampering implementation into daily life. To date, only Parr et al have explored the potential of TRE in adults with type 2 diabetes, using a 9 h TRE regimen [11]; however, the effects of TRE on metabolic health in this population remain largely unexplored. Although TRE is sometimes accompanied by (unintended) weight loss [4–6, 9, 10, 12, 13], which inherently improves metabolic health, it has also been reported to improve metabolic health in the absence of weight loss [8], indicating that additional mechanisms underlie its effects. In this context, individuals with impaired metabolic health display aberrations in the rhythmicity of metabolic processes such as glucose homeostasis [14, 15], mitochondrial oxidative capacity [16] and whole-body substrate oxidation [16] compared with the rhythms found in healthy, lean individuals [14, 15, 17]. Disruption of circadian rhythmicity is proposed to contribute to the impaired matching of substrate utilisation with substrate availability that is associated with type 2 diabetes [18]. In turn, we hypothesise that these impairments in metabolic rhythmicity are due to a disturbed eating–fasting cycle. Therefore, restricting food intake to daytime and, consequently, extending the period of fasting may improve metabolic health. More specifically, hepatic glycogen could play a pivotal role in this process, as it serves as a fuel during the night, when glucose levels are low, and is replenished during the daytime [19]. A decrease in hepatic glycogen triggers the stimulation of fat oxidation and molecular metabolic adaptations that accommodate substrate availability in the fasted state [20], and the need to replenish these stores may improve insulin sensitivity. Hitherto, it is not known whether TRE results in a more pronounced depletion of hepatic glycogen in type 2 diabetes, leading to improved insulin sensitivity. The aim of the current study was to examine the effect of limiting food intake to a feasible 10 h daily time frame for 3 weeks, under free-living conditions, on hepatic glycogen utilisation and insulin sensitivity in adults with type 2 diabetes.

Methods

This randomised crossover study was conducted between April 2019 and February 2021, after approval by the Ethics Committee of Maastricht University Medical Center (Maastricht, the Netherlands), and conformed with the Declaration of Helsinki [21]. The trial was registered at ClinicalTrials.gov (registration no. NCT03992248).
All volunteers signed an informed consent form prior to participation. The randomisation procedure is described in the electronic supplementary material (ESM) Methods. The study consisted of two 3-week intervention periods separated by a wash-out period of ≥4 weeks. At the end of each intervention period, the main outcomes were measured (ESM Fig. 1) at the Metabolic Research Unit of Maastricht University, the Netherlands. Male and female adults with type 2 diabetes, aged between 50 and 75 years and with a BMI ≥25 kg/m2, were eligible for participation. For detailed inclusion and exclusion criteria, see ESM Table 1.

Intervention

During the TRE intervention, volunteers were instructed to consume their habitual diet within a 10 h window during the daytime, with the last meal completed no later than 18:00 hours. Outside this time window, volunteers were only allowed to drink water, plain tea and black coffee. To increase compliance, volunteers were also allowed to drink zero-energy soft drinks in the evening hours, if consumed in moderation. The control (CON) intervention, the measurement procedures, the biochemical analyses and the statistical analyses are described above.
Intervention: During the TRE intervention, volunteers were instructed to consume their habitual diet within a 10 h window during the daytime, with the last meal completed no later than 18:00 hours. Outside this time window, volunteers were only allowed to drink water, plain tea and black coffee. To increase compliance, volunteers were also allowed to drink zero-energy soft drinks in the evening hours if consumed in moderation. During the control (CON) intervention, volunteers were instructed to spread their habitual diet over at least 14 h per day, without additional restraints on the time window of food intake. For both intervention periods, volunteers were instructed to maintain their normal physical activity and sleep patterns and to remain weight stable. Food intake and sleep times were recorded daily using a food and sleep diary. Volunteers based the food intake of their second intervention period on the food and sleep diary filled out during the first period, to promote similar dietary quantity and quality in both intervention arms. To optimise compliance, a weekly phone call was scheduled to monitor the volunteers and to provide additional instructions if necessary.
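As a rough illustration of the kind of adherence check the food diaries allow, the hypothetical helper below computes a day's eating window from logged intake times and flags whether it satisfies the 10 h, before-18:00 rule. The study itself monitored compliance through the diaries and weekly phone calls; this code and its function names are not from the study.

```python
# Hypothetical adherence check (illustrative only): eating window from one day's
# logged intake times; the study monitored compliance via diaries and weekly calls.
from datetime import datetime
from typing import List

def eating_window_hours(intake_times: List[str]) -> float:
    """Hours between the first and last logged food intake of a day ('HH:MM')."""
    times = sorted(datetime.strptime(t, "%H:%M") for t in intake_times)
    return (times[-1] - times[0]).total_seconds() / 3600

def tre_compliant(intake_times: List[str], max_window_h: float = 10.0,
                  latest: str = "18:00") -> bool:
    """True if the eating window is <= 10 h and no intake is logged after 18:00 hours."""
    window_ok = eating_window_hours(intake_times) <= max_window_h
    last_ok = max(datetime.strptime(t, "%H:%M") for t in intake_times) \
        <= datetime.strptime(latest, "%H:%M")
    return window_ok and last_ok

print(tre_compliant(["08:10", "12:30", "17:40"]))   # True
print(tre_compliant(["08:10", "12:30", "19:15"]))   # False: intake after 18:00
```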
Results:

Participant characteristics: A flowchart of participant enrolment is depicted in ESM Fig. 2. Baseline participant characteristics are presented in Table 1. The median Morningness-Eveningness Questionnaire Self-Assessment (MEQ-SA) score was 59.5 (range 41–72). Only one volunteer was identified as an extreme morning type but was included in the study as the intervention did not interfere with his habitual day–night rhythm.

Table 1 Baseline characteristics of participants
  N: 14
  Sex, n female/n male: 7/7
  Age, years: 67.5±5.2
  BMI, kg/m²: 30.5±3.7
  Diabetes medication, n yes/n no: 10/4
    Metformin only, n: 7
    Metformin + gliclazide, n: 3
  Fasting plasma glucose, mmol/l: 7.9±1.3
  HbA1c, mmol/mol: 46.1±7.2
  HbA1c, %: 6.4±0.7
  AST, μkat/l: 0.4±0.1
  ALT, μkat/l: 0.4±0.2
  GGT, μkat/l: 0.4±0.2
  eGFR, ml/min per 1.73 m²: 79.9±14.5
  MEQ-SA score: 59.1±7.7
Data are shown as mean ± SD, unless stated otherwise. ALT, alanine aminotransferase; AST, aspartate aminotransferase; GGT, γ-glutamyl transferase; MEQ-SA, Morningness-Eveningness Questionnaire Self-Assessment

Adherence: Volunteers did not indicate any changes in diabetes medication throughout the study. Volunteers recorded their daily food intake and sleep habits for, on average, 17 days during TRE and 18 days during CON.
Based on these data, the eating window averaged 9.1±0.2 h in TRE vs 13.4±0.1 h in CON (p<0.01). Sleep–wake patterns were similar in both interventions, with a mean sleep duration of 8.1±0.2 h during TRE and 8.0±0.2 h during CON (p=0.17). Body weight at the start of each intervention was comparable between TRE and CON (89.1±3.7 vs 89.2±3.8 kg, respectively, p=0.62). Although volunteers were instructed to remain weight stable, a small but significant weight loss occurred in response to TRE (−1.0±0.3 kg, p<0.01) but not CON (−0.3±0.3 kg, p=0.22); the weight loss with TRE was significantly greater than the weight change observed with CON (p=0.02). Body composition determined on day 19 was comparable between TRE and CON (fat mass 37.4±2.7 vs 37.9±2.9 kg, p=0.58; fat-free mass 50.7±2.6 vs 51.0±2.6 kg, p=0.60).

Hepatic glycogen and lipid content: Approximately half-way through each intervention period, hepatic glycogen levels were assessed in the morning following a 14 h (TRE) or 10 h (CON) night-time fast. Hepatic glycogen did not differ significantly between TRE and CON (0.16±0.03 vs 0.17±0.02 arbitrary units [AU], respectively, p=0.43). At the end of each intervention, hepatic glycogen levels were also assessed after a standardised overnight fast of 11 h for both TRE and CON but did not reveal an altered hepatic glycogen content with TRE compared with CON (0.15±0.01 vs 0.15±0.01 AU, respectively, p=0.88). We also assessed hepatic lipid content; neither the amount of lipids nor the composition of the hepatic lipid pool was altered with TRE vs CON (total lipid content 9.0±2.0 vs 8.6±1.6%, p=0.47; polyunsaturated fatty acids 17.0±1.3 vs 16.2±1.2%, p=0.41; mono-unsaturated fatty acids 40.6±0.9 vs 42.9±1.4%, p=0.19; saturated fatty acids 42.4±1.2 vs 40.9±1.5%, p=0.41).
Insulin sensitivity and glucose homeostasis: A hyperinsulinaemic–euglycaemic two-step clamp with a glucose tracer and indirect calorimetry was performed to assess insulin sensitivity. No differences in M value were found when comparing TRE and CON (19.6±1.8 vs 17.7±1.8 μmol kg−1 min−1, respectively, p=0.1). Hepatic insulin sensitivity was not affected by TRE, as exemplified by a similar endogenous glucose production (EGP) with TRE and CON in the fasted state and in the low- and high-insulin-stimulated states (p=0.83, p=0.38 and p=0.30, respectively; Fig. 1a). Suppression of EGP was also similar when comparing TRE with CON upon low- and high-insulin infusion (p=0.67 and p=0.47; Fig. 1a). NEFA suppression upon low insulin exposure was not different between TRE and CON (−365.2±41.6 vs −359.1±43.2 mmol/l, p=0.8). However, absolute NEFA levels were lower with TRE during the low- and high-insulin phases (p=0.02 and p=0.04; Fig. 1b), which may hint at improved adipose tissue insulin sensitivity.

Fig. 1 Effect of TRE on EGP (a), plasma NEFA (b), Rd (c), NOGD (d) and fat oxidation (e) measured during a hyperinsulinaemic–euglycaemic two-step clamp (n=14). *p<0.05 (data were analysed with paired t tests). Rd, rate of disappearance

Peripheral insulin-stimulated glucose disposal, reflected by the change in rate of disappearance (Rd) from basal to high insulin, remained unchanged with TRE (p=0.25; Fig. 1c). However, we observed a larger insulin-stimulated non-oxidative glucose disposal (NOGD, difference from baseline to high insulin) with TRE than with CON (4.3±1.1 vs 1.5±1.7 μmol kg−1 min−1, respectively, p=0.04; Fig. 1d), reflecting an increased ability to form glycogen. Conversely, insulin-stimulated carbohydrate oxidation from basal to high insulin appeared to be lower with TRE than with CON (4.7±0.9 vs 6.2±0.9 μmol kg−1 min−1, respectively), but this difference was not statistically significant (p=0.07). Consistently, insulin-induced suppression of fat oxidation from basal to high insulin was lower with TRE than with CON (−1.3±0.3 vs −1.8±0.2 μmol kg−1 min−1, p=0.04; Fig. 1e). Energy expenditure did not differ between TRE and CON during the basal, low-insulin and high-insulin phases of the clamp. These results indicate that, while peripheral insulin sensitivity is unchanged with TRE, glucose uptake is directed more towards storage than oxidation. Both hepatic and peripheral insulin sensitivity, as well as hepatic glycogen levels, were additionally analysed in volunteers who used only metformin as diabetes treatment (n=7); this did not alter the outcomes.
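The clamp-derived quantities above follow standard definitions; the study's exact formulas are given in its ESM Methods. As a hedged illustration only, the conventional arithmetic for non-oxidative glucose disposal and EGP suppression looks like this (hypothetical numbers, not study data):

```python
# Conventional clamp arithmetic (illustrative; exact study formulas are in ESM Methods).

def nogd(rd: float, cho_oxidation: float) -> float:
    """Non-oxidative glucose disposal = rate of disappearance minus carbohydrate
    oxidation, both in umol kg-1 min-1."""
    return rd - cho_oxidation

def egp_suppression_pct(egp_basal: float, egp_insulin: float) -> float:
    """Percentage suppression of endogenous glucose production during insulin infusion."""
    return 100.0 * (egp_basal - egp_insulin) / egp_basal

# Hypothetical example values (umol kg-1 min-1), not study data:
print(nogd(rd=35.0, cho_oxidation=16.0))          # -> 19.0
print(round(egp_suppression_pct(11.0, 4.5), 1))   # -> 59.1
```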
To examine the effect of TRE on glucose homeostasis, CGM data from the last 4 days in the free-living situation (days 15–18) were analysed for both interventions. Four volunteers presented incomplete CGM data due to technical issues, hence statistics were performed on CGM data from ten volunteers. Mean 24 h glucose levels were lower in TRE compared with CON (6.8±0.2 vs 7.6±0.3 mmol/l, p<0.01; Fig. 2a–f), and nocturnal glucose levels were consistently lower in TRE vs CON (Fig. 2a–d). Furthermore, volunteers spent more time in the normal glucose range with TRE compared with CON (15.1±0.8 vs 12.2±1.1 h per day, p=0.01; Fig. 2f). Concomitantly, time spent in the high glucose range was less in TRE compared with CON (5.5±0.5 vs 7.5±0.7 h per day, p=0.02), whereas no differences between eating regimens were found for time spent in hyperglycaemia (2.3±0.4 vs 3.7±0.8 h per day, p=0.24), time spent in the low glucose range (0.5±0.1 vs 0.4±0.1 h per day, p=1.00) or time spent in hypoglycaemia (0.7±0.3 vs 0.1±0.0 h per day, p=0.48).

Fig. 2 (a–d) Twenty-four-hour glucose levels on days 15 (a), 16 (b), 17 (c) and 18 (d) during TRE or CON (n=10). (e) Mean 24 h glucose from day 15 to day 18 (n=10), analysed using a paired t test. (f) Time spent in glucose ranges during days 15–18 (n=10), analysed using Wilcoxon tests with Bonferroni correction. *p<0.05. Hypo, hypoglycaemia (glucose <4.0 mmol/l); Low, low glucose (4.0–4.3 mmol/l); Normal range, glucose 4.4–7.2 mmol/l; High, high glucose (7.3–9.9 mmol/l); Hyper, hyperglycaemia (glucose >10 mmol/l)

Additionally, fasting plasma metabolites were assessed on days 20 and 21 of each intervention. On day 20, blood samples were taken after a 10 h (CON) or 14 h (TRE) overnight fast. Plasma glucose on day 20 was lower after TRE (7.6±0.4 vs 8.6±0.4 mmol/l, p=0.03), whereas plasma insulin, triglyceride (TG) and NEFA levels were comparable between conditions (Table 2). On day 21, when overnight fasting time was similar for both interventions (11 h), plasma glucose levels remained lower in TRE than in CON (8.0±0.3 vs 8.9±0.5 mmol/l, p=0.04), whereas no differences were detected in plasma insulin, TG and NEFA levels (Table 2).
Table 2 Blood plasma biochemistry
Day 20 (n=13)
  Triglycerides, mmol/l: CON 2.1±0.3, TRE 1.9±0.2, p=0.30
  NEFA, mmol/l: CON 0.529±0.038, TRE 0.489±0.035, p=0.39
  Glucose, mmol/l (a): CON 8.6±0.4, TRE 7.6±0.4, p=0.03
  Insulin, pmol/l: CON 111.1±20.8, TRE 104.2±13.9, p=0.27
Day 21 (n=14)
  Triglycerides, mmol/l: CON 2.1±0.3, TRE 2.2±0.2, p=0.66
  NEFA, mmol/l: CON 0.601±0.070, TRE 0.542±0.064, p=0.30
  Glucose, mmol/l (b): CON 8.9±0.5, TRE 8.0±0.3, p=0.04
  Insulin, pmol/l: CON 97.2±13.9, TRE 111.1±20.8, p=0.16
Data are shown as mean ± SEM. (a) Fasted blood values with fasting time 10 h for CON and 14 h for TRE. (b) Fasted blood values with fasting time 11 h for both CON and TRE.
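The time-in-range figures reported above are derived from 15 min CGM readings. As an illustration only (not the study's code), a minimal computation using the glucose cut-offs from Fig. 2 could look like the sketch below; the handling of the range boundaries is an assumption.

```python
# Illustrative time-in-range calculation from 15 min CGM readings (mmol/l).
# Ranges approximate Fig. 2: hypo <4.0, low 4.0-4.3, normal 4.4-7.2, high 7.3-9.9, hyper >10.
import numpy as np

def hours_in_ranges(glucose_mmol_l, minutes_per_reading: float = 15.0) -> dict:
    g = np.asarray(glucose_mmol_l)
    bins = {
        "hypo":   g < 4.0,
        "low":    (g >= 4.0) & (g < 4.4),
        "normal": (g >= 4.4) & (g <= 7.2),
        "high":   (g > 7.2) & (g < 10.0),
        "hyper":  g >= 10.0,
    }
    return {name: mask.sum() * minutes_per_reading / 60.0 for name, mask in bins.items()}

# Fabricated example: one day of readings centred around 7 mmol/l
rng = np.random.default_rng(0)
day = rng.normal(7.0, 1.5, size=96)          # 96 readings of 15 min = 24 h
print(hours_in_ranges(day))
```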
Twenty-four-hour energy and substrate metabolism: On day 19, volunteers resided in a respiration chamber for 36 h for measurement of energy expenditure and substrate oxidation. Twenty-four-hour energy expenditure was similar for TRE and CON (9.57±0.22 vs 9.68±0.29 MJ/day, respectively, p=0.22; Fig. 3a), as was the 24 h respiratory exchange ratio (RER) (0.86±0.01 vs 0.86±0.01, p=0.13). Nonetheless, 24 h carbohydrate oxidation was lower in TRE vs CON (260.2±7.6 vs 277.8±10.7 g/day, p=0.04; Fig. 3b), whereas 24 h fat oxidation was unaffected (91.9±6.6 vs 93.5±5.5 g/day, p=0.72; Fig. 3c). Twenty-four-hour protein oxidation seemed higher with TRE but the difference did not reach statistical significance (72.8±7.2 vs 58.5±5.4 g/day, p=0.18; Fig. 3d). Sleeping metabolic rate appeared to be lower with TRE than with CON (4.66±0.14 vs 4.77±0.18 kJ/min), although this decrease was not statistically significant (p=0.05; Fig. 3e). There was no change in carbohydrate or fat oxidation during sleep in response to TRE vs CON (RER 0.84±0.01 vs 0.84±0.01, p=0.50; Fig. 3f).

Fig. 3 Effect of TRE on 24 h energy expenditure (a), substrate oxidation (b–d), sleeping metabolic rate (e) and RER during sleep (f) (n=13). *p<0.05
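Whole-room calorimetry yields 24 h O2 consumption, CO2 production and urinary nitrogen, from which energy expenditure and substrate oxidation are conventionally derived. The sketch below uses the widely cited Weir and Frayn equations purely as an illustration; the study's exact calculations are described in its ESM Methods and the input values are hypothetical.

```python
# Illustrative 24 h substrate oxidation and energy expenditure from indirect
# calorimetry, using the commonly cited Weir and Frayn equations.
# Not the study's code; inputs below are hypothetical 24 h averages.

def substrate_oxidation_g_per_day(vo2_l_min: float, vco2_l_min: float,
                                  urinary_n_g_day: float) -> dict:
    n_g_min = urinary_n_g_day / 1440.0
    cho_g_min = 4.55 * vco2_l_min - 3.21 * vo2_l_min - 2.87 * n_g_min     # Frayn 1983
    fat_g_min = 1.67 * vo2_l_min - 1.67 * vco2_l_min - 1.92 * n_g_min
    protein_g_min = 6.25 * n_g_min
    ee_kcal_min = 3.941 * vo2_l_min + 1.106 * vco2_l_min - 2.17 * n_g_min  # Weir
    return {
        "carbohydrate_g_day": cho_g_min * 1440,
        "fat_g_day": fat_g_min * 1440,
        "protein_g_day": protein_g_min * 1440,
        "energy_expenditure_MJ_day": ee_kcal_min * 1440 * 4.184 / 1000,
        "RER": vco2_l_min / vo2_l_min,
    }

# Hypothetical averages: VO2 0.33 l/min, VCO2 0.285 l/min, urinary nitrogen 12 g/day
print(substrate_oxidation_g_per_day(0.33, 0.285, 12.0))
```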
On day 21, muscle biopsies were obtained to assess ex vivo mitochondrial oxidative capacity by means of high-resolution respirometry. In total, paired biopsies from 13 out of 14 volunteers were analysed. Mitochondrial respiration did not differ between TRE and CON (Table 3).

Table 3 Mitochondrial oxidative capacity (pmol mg−1 s−1)
State 2 (a)
  M: CON 5.0±0.4, TRE 5.5±0.6, p=0.49
  MO: CON 6.6±0.4, TRE 6.8±0.5, p=0.93
  MG: CON 7.1±0.4, TRE 6.9±0.4, p=0.54
State 3 (b)
  MO: CON 28.3±2.0, TRE 29.1±1.4, p=0.91
  MG: CON 31.2±1.9, TRE 32.6±1.8, p=0.82
  MOG: CON 37.0±2.3, TRE 37.8±1.9, p=0.99
  MOGS: CON 55.7±3.3, TRE 57.6±2.7, p=0.80
State U (c)
  MGS: CON 58.0±3.2, TRE 59.9±2.9, p=0.91
  FCCP: CON 67.8±4.7, TRE 66.6±3.5, p=0.50
State 4o (d)
  Oligomycin: CON 17.6±1.2, TRE 18.1±1.2, p=0.85
Data presented as mean ± SEM, n=13. (a) State 2, respiration in the presence of substrates alone. (b) State 3, ADP-stimulated respiration. (c) State U, maximal respiration in response to an uncoupling agent. (d) State 4o, mitochondrial proton leak measured by blocking ATP synthase. FCCP, carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone; G, glutamate; M, malate; O, octanoyl-carnitine; S, succinate
Discussion: TRE is a novel strategy to improve metabolic health and has been proposed to counteract the detrimental effects of eating throughout the day by limiting food intake to daytime hours. To date, only a few studies have examined the metabolic effects of TRE in adults with type 2 diabetes. Here, we tested whether restricting energy intake to a feasible 10 h time frame for 3 weeks would lower hepatic glycogen levels and improve insulin sensitivity in overweight/obese adults with type 2 diabetes. Additionally, we explored the effects of TRE on glucose homeostasis, 24 h energy metabolism and mitochondrial function. We hypothesised that the 10 h TRE regimen, with the latest food intake at 18:00 hours, would result in a more pronounced fasting state, especially during the night. During the night, the liver is crucial to the regulation of blood glucose through gluconeogenesis and glycogenolysis, and these processes have been shown to be elevated in type 2 diabetes [23, 24]. We therefore hypothesised that hepatic glycogen would be lower after TRE and that this would be associated with improved insulin sensitivity. In contrast to our hypothesis, hepatic glycogenolysis appeared to be unaffected by TRE: there was no change in glycogen content after a standardised 11 h fast at the end of the intervention, nor after a 14 h (TRE) vs 10 h (CON) overnight fast half-way through the intervention. In addition, EGP suppression during the low-insulin phase of the hyperinsulinaemic–euglycaemic clamp (reflecting hepatic insulin sensitivity) did not differ between TRE and CON. A limitation of our approach is that we did not measure hepatic glycogen dynamics during the night. Such measurements may be important, as our clamp results showed an increase in NOGD upon high-insulin stimulation with TRE, suggesting increased glycogen storage. These results could suggest small changes in hepatic glycogen turnover; alternatively, muscle glycogen levels may play a role in explaining our clamp results, as muscle accounts for most of the glycogen synthesis upon high-insulin stimulation in healthy individuals. Interestingly, type 2 diabetes is characterised by impaired insulin-stimulated glycogen storage [25].
Thus, an improvement in NOGD due to TRE may help to regulate 24 h and postprandial glucose levels. Indeed, 24 h glucose levels were significantly improved after TRE. We did not observe an effect of TRE on insulin sensitivity. A previous controlled randomised crossover study by Sutton et al did show an improved insulin sensitivity with TRE [8]; in that study, men with prediabetes followed a 5 week, 6 h early TRE regimen in which the last meal was consumed before 15:00 hours. The difference in results may be explained by the shorter eating window and earlier consumption of the last meal (15:00 vs 18:00 hours), creating a longer period of fasting. Here, we chose a 10 h TRE window, which we believe would be feasible to incorporate into the work and family life of most adults with type 2 diabetes; future studies will be needed to reveal whether the duration of the fasting period is indeed crucial in determining positive effects on insulin sensitivity. Despite the lack of changes in hepatic glycogen and insulin sensitivity, we did find that our 10 h TRE protocol decreased 24 h glucose levels in individuals with type 2 diabetes, driven primarily by decreased nocturnal glucose levels. Notably, TRE also lowered overnight fasting glucose, increased the time spent in the normal glucose range and decreased the time spent in the high glucose range, all of which are clinically relevant variables in type 2 diabetes. Importantly, morning fasting glucose levels were consistently lower with TRE than with CON, even when the fasting duration prior to the blood draw was similar between the two interventions, which may indicate lasting changes in nocturnal glucose homeostasis. Additionally, time spent in hypoglycaemia was not significantly increased upon TRE and no serious adverse events resulting from TRE were reported, underscoring that a ~10 h eating window is a safe and effective lifestyle intervention for adults with type 2 diabetes. The mechanisms underlying the improvement in glucose homeostasis upon TRE remain unclear. Our results show that TRE did not improve peripheral or hepatic insulin sensitivity, skeletal muscle mitochondrial function, energy metabolism or hepatic lipid content, all of which are known to be affected in type 2 diabetes mellitus [25–29]. Under high-insulin conditions during the clamp, we observed a larger reliance on fatty acid oxidation, accompanied by higher NEFA levels and lower glucose oxidation. Lower glucose oxidation was also observed when measured over 24 h. Although not statistically significant, mean 24 h protein oxidation was higher with TRE, possibly reflecting a more pronounced fasting state and a drive towards a higher rate of amino-acid-driven gluconeogenesis. A previous study by Lundell et al indeed suggested that TRE could affect protein metabolism to cope with the extended period of fasting [30]. However, the exact mechanisms and implications of these effects require further investigation, and it would be interesting to investigate nocturnal glucose metabolism in more detail. The improvement in glucose homeostasis may also be partially explained by the weight loss induced by TRE, which has also been reported previously [4–6, 9, 10, 13, 31]. It should be noted, however, that the weight loss was rather small (~0.7 kg compared with CON after 3 weeks of intervention), which makes it less likely that weight loss completely explains the differences in glucose homeostasis.
A limitation of the current study is the relatively heterogeneous study population, consisting of adults with and without glucose-lowering medication. Use of medication might have blunted the effect of TRE, as the medication may target the same metabolic pathways. Recruiting only volunteers not receiving medication would have prevented this issue but would have made the results less applicable to the general type 2 diabetes population. Another limitation is the relatively short intervention duration of 3 weeks. This duration was chosen because the aim of this study was to assess whether TRE would result in metabolic improvements in type 2 diabetes and to explore potential mechanisms underlying these changes; in our experience, human interventions of 3 weeks are able to affect the outcome variables investigated here. Since our TRE protocol was feasible and safe, and resulted in improved 24 h glucose levels, it would be interesting to examine the impact of 10 h TRE on glucose homeostasis and insulin sensitivity in type 2 diabetes in the long term to address the clinical relevance of TRE. In conclusion, we show that a daytime 10 h TRE regimen for 3 weeks decreases glucose levels and prolongs the time spent in normoglycaemia in adults with type 2 diabetes, compared with spreading daily food intake over at least 14 h. These improvements were not mediated by changes in hepatic glycogen, insulin sensitivity, mitochondrial function or 24 h substrate oxidation. These data highlight the potential benefits of TRE in type 2 diabetes.

Supplementary information: ESM (PDF 809 kb)
Background: ClinicalTrials.gov NCT03992248 FUNDING: ZonMW, 459001013. Methods: Fourteen adults with type 2 diabetes (BMI 30.5±4.2 kg/m2, HbA1c 46.1±7.2 mmol/mol [6.4±0.7%]) participated in a 3 week TRE (daily food intake within 10 h) vs control (spreading food intake over ≥14 h) regimen in a randomised, crossover trial design. The study was performed at Maastricht University, the Netherlands. Eligibility criteria included diagnosis of type 2 diabetes, intermediate chronotype and absence of medical conditions that could interfere with the study execution and/or outcome. Randomisation was performed by a study-independent investigator, ensuring that an equal number of participants started with TRE and CON. Due to the nature of the study, neither volunteers nor investigators were blinded to the study interventions. The quality of the data was checked without knowledge of intervention allocation. Hepatic glycogen levels were assessed with 13C-MRS and insulin sensitivity was assessed using a hyperinsulinaemic–euglycaemic two-step clamp. Furthermore, glucose homeostasis was assessed with 24 h continuous glucose monitoring devices. Secondary outcomes included 24 h energy expenditure and substrate oxidation, hepatic lipid content and skeletal muscle mitochondrial capacity. Results: Results are depicted as mean ± SEM. Hepatic glycogen content was similar between the TRE and control conditions (0.15±0.01 vs 0.15±0.01 AU, p=0.88). M value was not significantly affected by TRE (19.6±1.8 vs 17.7±1.8 μmol kg-1 min-1 in TRE vs control, respectively, p=0.10). Hepatic and peripheral insulin sensitivity also remained unaffected by TRE (p=0.67 and p=0.25, respectively). Yet, insulin-induced non-oxidative glucose disposal was increased with TRE (non-oxidative glucose disposal 4.3±1.1 vs 1.5±1.7 μmol kg-1 min-1, p=0.04). TRE increased the time spent in the normoglycaemic range (15.1±0.8 vs 12.2±1.1 h per day, p=0.01), and decreased fasting glucose (7.6±0.4 vs 8.6±0.4 mmol/l, p=0.03) and 24 h glucose levels (6.8±0.2 vs 7.6±0.3 mmol/l, p<0.01). Energy expenditure over 24 h was unaffected; nevertheless, TRE decreased 24 h glucose oxidation (260.2±7.6 vs 277.8±10.7 g/day, p=0.04). No adverse events were reported that were related to the interventions. Conclusions: We show that a 10 h TRE regimen is a feasible, safe and effective means to improve 24 h glucose homeostasis in free-living adults with type 2 diabetes. However, these changes were not accompanied by changes in insulin sensitivity or hepatic glycogen.
Introduction: Our modern 24 h society is characterised by ubiquitous food availability, irregular sleep–activity patterns and frequent exposure to artificial light sources. Together, these factors lead to a disrupted day–night rhythm, which contributes to the development of type 2 diabetes [1–3]. In Western society, most people tend to spread their daily food intake over a minimum of 14 h [4], likely resulting in the absence of a true, nocturnal fasted state. Restricting food intake to a pre-defined time window (typically ≤12 h), i.e. time-restricted eating (TRE), restores the cycle of daytime eating and prolonged fasting during the evening and night. Indeed, several studies have demonstrated that TRE has promising metabolic effects in overweight or obese individuals, including increased lipid oxidation [5], decreased plasma glucose levels [6, 7] and improved insulin sensitivity [8]. While promising, these studies applied extremely short eating windows (e.g. 6–8 h) in highly controlled settings [5–10], hampering implementation into daily life. To date, only Parr et al have successfully explored the potential of TRE in adults with type 2 diabetes, using a 9 h TRE regimen [11]; however, the effects of TRE on metabolic health in this population remain largely unexplored. Despite the fact that TRE is sometimes accompanied by (unintended) weight loss [4–6, 9, 10, 12, 13], which inherently improves metabolic health, it has also been reported to improve metabolic health in the absence of weight loss [8], indicating that additional mechanisms underlie the effects of TRE. In this context, individuals with impaired metabolic health display aberrations in the rhythmicity of metabolic processes such as glucose homeostasis [14, 15], mitochondrial oxidative capacity [16] and whole-body substrate oxidation [16] compared with the rhythms found in healthy, lean individuals [14, 15, 17]. Disruption of circadian rhythmicity is proposed to contribute to the impaired matching of substrate utilisation with substrate availability that is associated with type 2 diabetes [18]. In turn, we hypothesise that these impairments in metabolic rhythmicity are due to a disturbed eating–fasting cycle. Therefore, restricting food intake to daytime and, consequently, extending the period of fasting may improve metabolic health. More specifically, hepatic glycogen could play a pivotal role in this process, as it serves as a fuel during the night when glucose levels are low and is replenished during the daytime [19]. A decrease in hepatic glycogen triggers the stimulation of fat oxidation and molecular metabolic adaptations that accommodate substrate availability in the fasted state [20], and the need to replenish these stores may improve insulin sensitivity. Hitherto, it is not known whether TRE could result in a more pronounced depletion of hepatic glycogen levels in type 2 diabetes, leading to improved insulin sensitivity. The aim of the current study was to examine the effect of limiting food intake to a feasible 10 h daily time frame for 3 weeks in free-living conditions on hepatic glycogen utilisation and insulin sensitivity in adults with type 2 diabetes.
12,552
466
[ 594, 1893, 204, 29, 99, 5217, 230, 207, 184, 1387, 585, 1304 ]
14
[ "tre", "glucose", "con", "vs", "insulin", "levels", "day", "volunteers", "glucose levels", "mmol" ]
[ "intake sleep times", "time restricted eating", "prolonged fasting evening", "nocturnal glucose metabolism", "diabetes tre regimen" ]
null
[CONTENT] Circadian rhythm | Glucose homeostasis | Hepatic fat | Hepatic glycogen | Insulin sensitivity | Intermittent fasting | Lifestyle intervention | Mitochondrial oxidative capacity | TRE | Type 2 diabetes [SUMMARY]
[CONTENT] Circadian rhythm | Glucose homeostasis | Hepatic fat | Hepatic glycogen | Insulin sensitivity | Intermittent fasting | Lifestyle intervention | Mitochondrial oxidative capacity | TRE | Type 2 diabetes [SUMMARY]
null
[CONTENT] Circadian rhythm | Glucose homeostasis | Hepatic fat | Hepatic glycogen | Insulin sensitivity | Intermittent fasting | Lifestyle intervention | Mitochondrial oxidative capacity | TRE | Type 2 diabetes [SUMMARY]
[CONTENT] Circadian rhythm | Glucose homeostasis | Hepatic fat | Hepatic glycogen | Insulin sensitivity | Intermittent fasting | Lifestyle intervention | Mitochondrial oxidative capacity | TRE | Type 2 diabetes [SUMMARY]
[CONTENT] Circadian rhythm | Glucose homeostasis | Hepatic fat | Hepatic glycogen | Insulin sensitivity | Intermittent fasting | Lifestyle intervention | Mitochondrial oxidative capacity | TRE | Type 2 diabetes [SUMMARY]
[CONTENT] Adult | Blood Glucose | Blood Glucose Self-Monitoring | Cross-Over Studies | Diabetes Mellitus, Type 2 | Glucose | Homeostasis | Humans | Insulin | Insulin Resistance | Lipids | Liver Glycogen [SUMMARY]
[CONTENT] Adult | Blood Glucose | Blood Glucose Self-Monitoring | Cross-Over Studies | Diabetes Mellitus, Type 2 | Glucose | Homeostasis | Humans | Insulin | Insulin Resistance | Lipids | Liver Glycogen [SUMMARY]
null
[CONTENT] Adult | Blood Glucose | Blood Glucose Self-Monitoring | Cross-Over Studies | Diabetes Mellitus, Type 2 | Glucose | Homeostasis | Humans | Insulin | Insulin Resistance | Lipids | Liver Glycogen [SUMMARY]
[CONTENT] Adult | Blood Glucose | Blood Glucose Self-Monitoring | Cross-Over Studies | Diabetes Mellitus, Type 2 | Glucose | Homeostasis | Humans | Insulin | Insulin Resistance | Lipids | Liver Glycogen [SUMMARY]
[CONTENT] Adult | Blood Glucose | Blood Glucose Self-Monitoring | Cross-Over Studies | Diabetes Mellitus, Type 2 | Glucose | Homeostasis | Humans | Insulin | Insulin Resistance | Lipids | Liver Glycogen [SUMMARY]
[CONTENT] intake sleep times | time restricted eating | prolonged fasting evening | nocturnal glucose metabolism | diabetes tre regimen [SUMMARY]
[CONTENT] intake sleep times | time restricted eating | prolonged fasting evening | nocturnal glucose metabolism | diabetes tre regimen [SUMMARY]
null
[CONTENT] intake sleep times | time restricted eating | prolonged fasting evening | nocturnal glucose metabolism | diabetes tre regimen [SUMMARY]
[CONTENT] intake sleep times | time restricted eating | prolonged fasting evening | nocturnal glucose metabolism | diabetes tre regimen [SUMMARY]
[CONTENT] intake sleep times | time restricted eating | prolonged fasting evening | nocturnal glucose metabolism | diabetes tre regimen [SUMMARY]
[CONTENT] tre | glucose | con | vs | insulin | levels | day | volunteers | glucose levels | mmol [SUMMARY]
[CONTENT] tre | glucose | con | vs | insulin | levels | day | volunteers | glucose levels | mmol [SUMMARY]
null
[CONTENT] tre | glucose | con | vs | insulin | levels | day | volunteers | glucose levels | mmol [SUMMARY]
[CONTENT] tre | glucose | con | vs | insulin | levels | day | volunteers | glucose levels | mmol [SUMMARY]
[CONTENT] tre | glucose | con | vs | insulin | levels | day | volunteers | glucose levels | mmol [SUMMARY]
[CONTENT] metabolic | metabolic health | health | type diabetes | tre | type | rhythmicity | availability | diabetes | food [SUMMARY]
[CONTENT] hours | en | 00 | 00 hours | 40 hours | volunteers | measurement | mrs | 15 | 40 [SUMMARY]
null
[CONTENT] tre | glucose | type diabetes | type | insulin | diabetes | glycogen | insulin sensitivity | sensitivity | fasting [SUMMARY]
[CONTENT] tre | vs | con | glucose | hours | volunteers | insulin | levels | 00 | day [SUMMARY]
[CONTENT] tre | vs | con | glucose | hours | volunteers | insulin | levels | 00 | day [SUMMARY]
[CONTENT] NCT03992248 | 459001013 [SUMMARY]
[CONTENT] Fourteen | 2 | 30.5±4.2 kg ||| 6.4±0.7% | 3 week | 10 ||| Maastricht University | Netherlands ||| 2 ||| TRE | CON ||| ||| ||| 13C | two ||| 24 ||| 24 [SUMMARY]
null
[CONTENT] 10 | 24 | 2 ||| [SUMMARY]
[CONTENT] NCT03992248 | 459001013 ||| Fourteen | 2 | 30.5±4.2 kg ||| 6.4±0.7% | 3 week | 10 ||| Maastricht University | Netherlands ||| 2 ||| TRE | CON ||| ||| ||| 13C | two ||| 24 ||| 24 ||| ||| TRE ||| TRE | 19.6±1.8 | 17.7±1.8 | p=0.10 ||| TRE ||| TRE | 4.3±1.1 | 1.5±1.7 | p=0.04 ||| 15.1±0.8 | 12.2±1.1 h per day | 8.6±0.4 | 24 | 6.8±0.2 ||| 24 | 24 | 260.2±7.6 | 277.8±10.7 | p=0.04 ||| ||| 10 | 24 | 2 ||| [SUMMARY]
[CONTENT] NCT03992248 | 459001013 ||| Fourteen | 2 | 30.5±4.2 kg ||| 6.4±0.7% | 3 week | 10 ||| Maastricht University | Netherlands ||| 2 ||| TRE | CON ||| ||| ||| 13C | two ||| 24 ||| 24 ||| ||| TRE ||| TRE | 19.6±1.8 | 17.7±1.8 | p=0.10 ||| TRE ||| TRE | 4.3±1.1 | 1.5±1.7 | p=0.04 ||| 15.1±0.8 | 12.2±1.1 h per day | 8.6±0.4 | 24 | 6.8±0.2 ||| 24 | 24 | 260.2±7.6 | 277.8±10.7 | p=0.04 ||| ||| 10 | 24 | 2 ||| [SUMMARY]
Changing trends of birth weight with maternal age: a cross-sectional study in Xi'an city of Northwestern China.
33256654
Most studies have shown that maternal age is associated with birth weight. However, the specific relationship between each additional year of maternal age and birth weight remains unclear. The study aimed to analyze the specific association between maternal age and birth weight.
BACKGROUND
Raw data for all live births from 2015 to 2018 were obtained from the Medical Birth Registry of Xi'an, China. A total of 490,143 mother-child pairs with full-term singleton live births and the maternal age ranging from 20 to 40 years old were included in our study. Birth weight, gestational age, neonatal birth date, maternal birth date, residence and ethnicity were collected. Generalized additive model and two-piece wise linear regression model were used to analyze the specific relationships between maternal age and birth weight, risk of low birth weight, and risk of macrosomia.
METHODS
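The two models named in this methods summary, a generalized additive model to expose the shape of the age effect and a two-piece wise (segmented) linear regression to locate the turning points, can be sketched in R, the software the study reports using. The sketch below is illustrative only: the data frame `d` and its column names are hypothetical, and the `mgcv` and `segmented` packages are one common way to fit such models rather than the authors' actual pipeline (the paper also cites Empower Stats).

```r
library(mgcv)        # penalised-spline GAMs
library(segmented)   # broken-line (piecewise) regression

# Step 1: GAM smooth of birth weight over maternal age, adjusted for the
# covariates reported in the paper (gestational age, season of birth, residence)
gam_fit <- gam(bw ~ s(age) + factor(ga) + factor(season) + factor(residence),
               data = d)
plot(gam_fit, select = 1, shade = TRUE)   # visual check for nonlinearity

# Step 2: two-piece wise linear regression; psi supplies starting values for the
# two turning points, which the algorithm then refines from the data
lm_fit  <- lm(bw ~ age + factor(ga) + factor(season) + factor(residence), data = d)
seg_fit <- segmented(lm_fit, seg.Z = ~age, psi = c(24, 34))
slope(seg_fit)       # per-year change in birth weight within each age segment
confint(seg_fit)     # confidence intervals for the estimated turning points

# For the binary outcomes (LBW, macrosomia) the same idea applies, starting from
# glm(lbw ~ age + ..., family = binomial) before calling segmented().
```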
The relationships between maternal age and birth weight, risk of low birth weight, and risk of macrosomia were nonlinear. Birth weight increased 16.204 g per year when maternal age was less than 24 years old (95%CI: 14.323, 18.086), and increased 12.051 g per year when maternal age ranged from 24 to 34 years old (95%CI: 11.609, 12.493), then decreased 0.824 g per year (95% CI: -3.112, 1.464). The risk of low birth weight decreased with the increase of maternal age until 36 years old (OR = 0.917, 95%CI: 0.903, 0.932 when maternal age was younger than 27 years old; OR = 0.965, 95%CI: 0.955, 0.976 when maternal age ranged from 27 to 36 years old), then increased when maternal age was older than 36 years old (OR = 1.133, 95%CI: 1.026, 1.250). The risk of macrosomia increased with the increase of maternal age (OR = 1.102, 95%CI: 1.075, 1.129 when maternal age was younger than 24 years old; OR = 1.065, 95%CI: 1.060, 1.071 when maternal age ranged from 24 to 33 years old; OR = 1.029, 95%CI: 1.012, 1.046 when maternal age was older than 33 years old).
RESULTS
For women of childbearing age (20-40 years old), the threshold of maternal age on low birth weight was 36 years old, and the risk of macrosomia increased with the increase of maternal age.
CONCLUSIONS
[ "Adult", "Birth Weight", "China", "Cross-Sectional Studies", "Female", "Fetal Macrosomia", "Gestational Age", "Humans", "Infant, Low Birth Weight", "Infant, Newborn", "Linear Models", "Live Birth", "Maternal Age", "Pregnancy", "Registries" ]
7708914
Background
Birth weight (BW) is the most important index reflecting intrauterine growth and development of newborns, and is also a vital index to evaluate the health status of the newborns [1]. Abnormal BW, including low birth weight (LBW, BW < 2500 g) and macrosomia (BW ≥ 4000 g), significantly increases the risk of perinatal mortality and morbidity and, in recent years, has been shown to be a marker of age-related disease risk [2, 3]. As the most common adverse birth outcome, the incidence of abnormal BW is generally high worldwide. It was estimated that the incidence of LBW was about 5–7% in developed countries and as high as 19% in developing countries [4]; in mainland China, the specific interest of this paper, it was 6.1% [5]. Furthermore, the incidence of macrosomia has also increased over the past two to three decades in both developed and developing countries; recent estimates put the prevalence of macrosomia at 9.2% in the United States and 7.3% in China [6, 7]. Considering the large number of newborns in China, it is likely that LBW and macrosomia will remain a major public health issue over the next few years in China [3]. With the development of society and changes in people's conception of procreation (assisted reproductive technology, the increased education and economic activities of women, reduced marriage rates and the rise of double-income-no-kids households), the global trend towards delayed childbirth has been reported to be increasing, e.g. in the United States and Korea [8, 9]. Similarly, in China, the mean maternal age and the proportion of elderly pregnant women (≥ 35 years) are increasing [10]. In 2011, a survey of 14 provinces in China showed that the average delivery age of pregnant women was 28 ± 5 years, the proportion of maternal age older than 35 was 7.8%, and the proportion of maternal age younger than 20 years old was 1.4% [6, 11]. The high proportion of pregnant women older than 35 years old is accompanied by an increased risk of miscarriage and chromosomal aneuploidy [12]. Previous studies indicated that the relationship between maternal age and BW was inconsistent [12–16]. Most of the previous studies focused on the incidence of LBW and macrosomia in different age groups using a fixed classification of maternal age [13, 16]. This classification method, with 35 years old as the boundary, cannot accurately reflect the trend of BW with age, as it may underestimate the age-related risk in younger age groups and overestimate risks in older age groups [17]. There is insufficient evidence to assess the appropriateness of using this artificial traditional age classification to evaluate its impact on BW. Further, with the increase in delayed childbearing in recent years, the development of medical technology, and social and economic development, the traditional method of dividing maternal age at a fixed threshold may not be suitable for today's situation. Especially for China, a country with a large birth population undergoing huge social change, this research is helpful for clinical decision-making and for public health policies. The objective of this study was to estimate the changes of BW according to 1-year intervals of maternal age among women, treating age as a continuous variable with a flexible, generalized additive approach.
null
null
Results
Related factors of the mother-child pairs: A total of 490,143 mother-child pairs were abstracted. The mean BW of newborns was 3364.937 ± 420.528 g, and the mean age of mothers was 28.728 ± 4.134 years old. The incidence of LBW was 1.5%, and the incidence of macrosomia was 6.0%. The basic characteristics of the mother-child pairs are shown in Table 1. There were significant differences in BW and in the proportions of LBW and macrosomia among gestational age groups, season-of-birth groups and maternal residence groups (P < 0.05), but not among maternal ethnicity groups (P > 0.05).

Table 1 Basic characteristics of the mother-child pairs
Variable | N (%) | Birth weight, mean ± SD (g) | Low birth weight, n (%) | Macrosomia, n (%)
Gestational age (weeks): F = 51543.00*, χ2 = 15294.00*, χ2 = 7197.40*
 37–37+6 | 36,138 (7.4) | 3020.404 ± 422.075 | 3125 (8.6) | 498 (1.4)
 38–38+6 | 98,636 (20.1) | 3247.870 ± 396.350 | 1994 (2.0) | 2946 (3.0)
 39–39+6 | 162,966 (33.2) | 3369.093 ± 394.169 | 1312 (0.8) | 8442 (5.2)
 40–40+6 | 138,209 (28.2) | 3462.910 ± 400.804 | 590 (0.4) | 11,373 (8.2)
 41–41+6 | 54,194 (11.1) | 3545.406 ± 395.768 | 125 (0.2) | 6198 (11.4)
Season of birth: F = 24.01*, χ2 = 13.97*, χ2 = 36.51*
 Spring | 119,481 (24.4) | 3363.061 ± 422.333 | 1806 (1.5) | 7137 (5.9)
 Summer | 121,425 (24.8) | 3360.024 ± 18.322 | 1859 (1.5) | 6932 (5.7)
 Autumn | 128,548 (26.2) | 3367.011 ± 419.182 | 1773 (1.4) | 7806 (6.1)
 Winter | 120,689 (24.6) | 3369.533 ± 422.324 | 1708 (1.4) | 7582 (6.3)
Maternal residence: F = 243.76*, χ2 = 58.42*, χ2 = 40.42*
 Urban | 245,250 (50.1) | 3372.876 ± 416.985 | 3370 (1.4) | 15,193 (6.2)
 Suburb | 128,008 (26.1) | 3363.916 ± 422.330 | 1800 (1.4) | 7648 (6.0)
 Countryside | 116,885 (23.8) | 3349.401 ± 425.485 | 1976 (1.7) | 6616 (5.7)
Ethnicity: F = 0.975, χ2 = 0.265, χ2 = 1.165
 Han | 485,192 (99.0) | 3364.998 ± 420.453 | 7069 (1.5) | 29,141 (6.0)
 Other | 4951 (1.0) | 3359.067 ± 427.861 | 77 (1.6) | 316 (6.4)
(Column totals: birth weight n = 490,143; low birth weight n = 7146; macrosomia n = 29,457. For each variable, the F statistic refers to birth weight and the two χ2 statistics refer to low birth weight and macrosomia, respectively. *P < 0.05)

The relationship between maternal age and birth weight, LBW and macrosomia: Both unadjusted and adjusted smoothed plots suggest that there was a nonlinear relationship between maternal age and BW (Fig. 2a, b). BW increased gradually until age 34, then decreased. Similarly, the adjusted smoothed plots suggest that there were nonlinear relationships between maternal age and LBW and macrosomia (Fig. 2c, d). The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with maternal age across the range of 20 to 40 years old. (Fig. 2 Relationships between maternal age and BW, LBW and macrosomia with 95% confidence intervals (dashed lines). a: unadjusted BW; b: adjusted BW; c: LBW; d: macrosomia; b, c and d adjusted for gestational age, season of birth and maternal residence.)

Two turning points of maternal age for BW were found, at 24 and 34 years old (Table 2). BW increased by 16.204 g per year of maternal age up to 24 years old (β = 16.204, 95%CI: 14.323, 18.086) and by 12.051 g per year when maternal age ranged from 24 to 34 years old (β = 12.051, 95%CI: 11.609, 12.493). There was no significant association between maternal age and BW when maternal age was older than 34 years old (β = -0.824, 95% CI: -3.112, 1.464).

Table 2 The effect of maternal age on BW in the two-piece wise linear regression model
Outcome (turning points) | Maternal age range | Adjusted β/OR (95% CI)a | P value
Birth weight (24 y, 34 y) | 20 ≤ age < 24 y | 16.204 (14.323, 18.086) | < 0.001
 | 24 ≤ age ≤ 34 y | 12.051 (11.609, 12.493) | < 0.001
 | 34 < age ≤ 40 y | -0.824 (-3.112, 1.464) | 0.480
Low birth weight (27 y, 36 y) | 20 ≤ age < 27 y | 0.917 (0.903, 0.932) | < 0.001
 | 27 ≤ age ≤ 36 y | 0.965 (0.955, 0.976) | < 0.001
 | 36 < age ≤ 40 y | 1.133 (1.026, 1.250) | 0.013
Macrosomia (24 y, 33 y) | 20 ≤ age < 24 y | 1.102 (1.075, 1.129) | < 0.001
 | 24 ≤ age ≤ 33 y | 1.065 (1.060, 1.071) | < 0.001
 | 33 < age ≤ 40 y | 1.029 (1.012, 1.046) | 0.001
a Adjusted for gestational age, season of birth and maternal residence

Two turning points of maternal age for LBW were found, at 27 and 36 years old (Table 2). The risk of LBW decreased by 8.3% per year of maternal age up to 27 years old (odds ratio (OR) = 0.917, 95%CI: 0.903, 0.932) and by 3.5% per year when maternal age ranged from 27 to 36 years old (OR = 0.965, 95%CI: 0.955, 0.976), and increased by 13.3% per year when maternal age was older than 36 years old (OR = 1.133, 95%CI: 1.026, 1.250). Similarly, two turning points of maternal age for macrosomia were found, at 24 and 33 years old (Table 2). The risk of macrosomia increased by 10.2% per year of maternal age up to 24 years old (OR = 1.102, 95%CI: 1.075, 1.129), by 6.5% per year when maternal age ranged from 24 to 33 years old (OR = 1.065, 95%CI: 1.060, 1.071), and by 2.9% per year when maternal age was older than 33 years old (OR = 1.029, 95%CI: 1.012, 1.046).
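One convenient way to read the per-year odds ratios in Table 2 is to compound them within a segment: under the fitted piecewise model, a k-year age difference inside a single segment corresponds to OR^k. The snippet below is a rough, point-estimate-only illustration using the values reported above (confidence intervals are ignored).

```r
# Compounding reported per-year odds ratios within one segment of the model
or_macro_24_33 <- 1.065      # macrosomia, per year, maternal age 24-33 (Table 2)
or_macro_24_33^5             # ~1.37: odds ratio for a 5-year gap within that segment

or_lbw_over_36 <- 1.133      # term LBW, per year, maternal age above 36 (Table 2)
or_lbw_over_36^4             # ~1.65: odds ratio from age 36 to 40
```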
Conclusions
Our research indicates the specific relationships between maternal age and BW, LBW and macrosomia when maternal age ranges from 20 to 40 years old. BW increased gradually until 34 years old, then decreased. The threshold maternal age for LBW was 36 years old, and the risk of macrosomia increased steadily with maternal age. These results should be carefully taken into account by maternal care providers in order to inform women adequately, to support them in understanding the potential birth weight risks associated with their age, and to emphasise the importance of prenatal care. However, the optimal maternal age should be determined individually, taking into account different pregnancy complications and adverse neonatal outcomes. The mechanism linking maternal age and BW is not clear; therefore, further studies are needed to examine this relationship.
[ "Background", "Methods", "\n\nData Source and Study Population\n\n", "Measurements and Definitions of LBW and Macrosomia", "Statistical Analysis", "Related factors of the mother-child pairs", "The relationship between maternal age and birth weight, LBW and macrosomia", "Main results", "Interpretation", "Strengths and limitations" ]
[ "Birth weight (BW) is the most important index reflecting intrauterine growth and development of newborns, and is also a vital index to evaluate the health status of the newborns [1]. Abnormal BW, including low birth weight (LBW, BW < 2500 g) and macrosomia (BW ≥ 4000 g), significantly increases the risk of perinatal mortality and morbidity and, in recent years, has been shown to be a marker of age-related disease risk [2, 3]. As the most common adverse birth outcome, the incidence of abnormal BW is generally high in the world. It was estimated that the incidence of LBW was about 5–7% in developed countries and as high as 19% in developing countries [4], and in mainland China, the specific interest for this paper, it was 6.1% [5]. Furthermore, the incidence of macrosomia also has increased over the past two to three decades in both developed and developing countries, the prevalence of macrosomia was 9.2% in the United States and 7.3% in China recently [6, 7]. Considering the large number of newborns in China, it is likely that LBW and macrosomia might remain a major public health issue over the next few years in China [3].\nWith the development of society and change of people’s conception of procreation (assisted reproductive technology, the increased education and economic activities of women, reduced marriage rates and increased double income no kids), it was reported that the global trend of delayed childbirth is increasing, e.g. United States and Korea [8, 9]. Similarly in China, the mean maternal age and the proportion of elderly pregnant women(≥ 35 years) are increasing [10]. In 2011, a survey of 14 provinces in China showed that the average delivery age of pregnant women was 28 ± 5 years, the proportion of maternal age older than 35 was 7.8%, and the proportion of maternal age younger than 20 years old was 1.4% [6, 11]. The high proportion of pregnant women older than 35 years old is accompanied by the increased risk of miscarriage and chromosomal aneuploidy [12].\nPrevious studies indicated that the relationship between maternal age and BW was inconsistent [12–16]. Most of the previous studies focused on the incidence of LWB and macrosomia in different age groups by fixed classification of maternal age [13, 16]. This classification method with 35 years old as the boundary cannot accurately reflect the trend of BW and age, as this may underestimate the age related risk of younger age groups and overestimate risks in older age groups [17]. There is no sufficient evidence to assess the appropriateness of using this artificial traditional age classification to evaluate their impacts on BW. Further, with the increase of delayed childbearing in recent years, the development of medical technology and social and economic development, the current traditional methods of dividing age threshold may not be suitable for today’s situation. Especially for China, a country with a large birth population and a huge social change, this research is helpful to clinical decisionmaking and for public health policies. The objective of this study was to estimate the changes of BW according to 1-year intervals of maternal age among women, treating age as a continuous variable with a flexible, generalized additive approach.", " \n\nData Source and Study Population\n\n A cross-sectional study was conducted to determine the relationship between maternal age and BW with dose-response analyses. 
The data were obtained from the Birth Registry Database in Xi’an city, which covered all midwifery clinics and hospitals in the city. Dedicated trained health workers are responsible for collecting and registering birth information at each hospital. Validity was confirmed at various levels: by the Chief Midwife, the Chief Physician, and the Department of Medical Administration at each hospital; by the newborns’ parents at the time when the birth certificate was issued; and reviewed by issuing personnel. This analysis included all full-term singleton live births born from January 2015 to December 2018 in Xi’an city of Shaanxi province in China. We collected the information on maternal and newborn, including the number of births, BW, gestational age, neonatal birth date, maternal residence, maternal birth date, residence and ethnicity.\nA total number of 536,993 mother-child pairs were abstracted from the Medical Birth Registry of Xi’an, born from Jan 2015 to Dec 2018. After removing those failing to meet the requirements explained below and unreasonable records, 490,143 mother-child pairs (91.3%) were included in the analysis. Newborns who were term birth (gestational age was between 37 to 41+ 6 weeks), singleton live births, and whose BW was greater than or equal 1000 g and maternal age at birth ranged from 20 to 40 years old were selected. Clinically, those extreme values (BW < 1000 g) were mainly caused by some clinical diseases rather than maternal age. These extremes can create biases that affect the exploration for real relationships between mother’s age and BW. Similarly, pregnant women over the age of 40 and under the age of 20 had more clinical complications and has too many confounding factors, which would cause biases. To get accurate results, we excluded this part of data. The flow chart of including criteria can be seen in Fig. 1.\nFig. 1Flow chart of including criteria.\nFlow chart of including criteria.\nThe study protocol was approved by the Medical Ethics Committee of the First Affiliated Hospital of Xi’an Jiaotong University (No.XJTU1AF2018LSK-245). Data used in the study were all anonymous, and no individual identifiable information was available for the analysis. In the process of this study, all data were only used to conduct research, not for other purposes, which can ensure that the privacy of patients was fully protected.\nA cross-sectional study was conducted to determine the relationship between maternal age and BW with dose-response analyses. The data were obtained from the Birth Registry Database in Xi’an city, which covered all midwifery clinics and hospitals in the city. Dedicated trained health workers are responsible for collecting and registering birth information at each hospital. Validity was confirmed at various levels: by the Chief Midwife, the Chief Physician, and the Department of Medical Administration at each hospital; by the newborns’ parents at the time when the birth certificate was issued; and reviewed by issuing personnel. This analysis included all full-term singleton live births born from January 2015 to December 2018 in Xi’an city of Shaanxi province in China. We collected the information on maternal and newborn, including the number of births, BW, gestational age, neonatal birth date, maternal residence, maternal birth date, residence and ethnicity.\nA total number of 536,993 mother-child pairs were abstracted from the Medical Birth Registry of Xi’an, born from Jan 2015 to Dec 2018. 
After removing those failing to meet the requirements explained below and unreasonable records, 490,143 mother-child pairs (91.3%) were included in the analysis. Newborns who were term birth (gestational age was between 37 to 41+ 6 weeks), singleton live births, and whose BW was greater than or equal 1000 g and maternal age at birth ranged from 20 to 40 years old were selected. Clinically, those extreme values (BW < 1000 g) were mainly caused by some clinical diseases rather than maternal age. These extremes can create biases that affect the exploration for real relationships between mother’s age and BW. Similarly, pregnant women over the age of 40 and under the age of 20 had more clinical complications and has too many confounding factors, which would cause biases. To get accurate results, we excluded this part of data. The flow chart of including criteria can be seen in Fig. 1.\nFig. 1Flow chart of including criteria.\nFlow chart of including criteria.\nThe study protocol was approved by the Medical Ethics Committee of the First Affiliated Hospital of Xi’an Jiaotong University (No.XJTU1AF2018LSK-245). Data used in the study were all anonymous, and no individual identifiable information was available for the analysis. In the process of this study, all data were only used to conduct research, not for other purposes, which can ensure that the privacy of patients was fully protected.\n Measurements and Definitions of LBW and Macrosomia The maternal age was analyzed as a continuous variable. BW (in grams) was measured routinely by registered midwives using an electronic weighting scale within 2 hours of delivery. The LBW was defined as BW less than 2500 g [18], and the macrosomia was defined as BW greater than or equal to 4000 g [19]. The season of birth was divided into spring, summer, autumn and winter according to the birth time. According to the Chinese lunar calendar, the winter, spring, summer and fall begin on December 1st, March 1st, June 1st and September 1st, respectively. The maternal residence was divided into 3 groups (urban, suburb, countryside). Maternal ethnicity was divided into 2 groups (Han and other).\nThe maternal age was analyzed as a continuous variable. BW (in grams) was measured routinely by registered midwives using an electronic weighting scale within 2 hours of delivery. The LBW was defined as BW less than 2500 g [18], and the macrosomia was defined as BW greater than or equal to 4000 g [19]. The season of birth was divided into spring, summer, autumn and winter according to the birth time. According to the Chinese lunar calendar, the winter, spring, summer and fall begin on December 1st, March 1st, June 1st and September 1st, respectively. The maternal residence was divided into 3 groups (urban, suburb, countryside). Maternal ethnicity was divided into 2 groups (Han and other).\n Statistical Analysis We described the distribution of each group with the number and proportion of births. The maternal age at birth obeyed normal distribution after testing and was expressed as means ± standard deviation (means ± SD). The women’s characteristics was analyzed as categorical variables to examine the association with BW by one-way ANOVA and risk of LBW and macrosomia by Chi-square Test.\nGeneralized additive model (GAM) was conducted to see whether there were nonlinear relationships between maternal age and BW, LBW and macrosomia, respectively. 
And the relationships were further verified by adjusting the potential confounders found by the One-way ANOVA and Chi-square Test.\nTwo-piece wise linear regression model was used to examine the threshold effect of the maternal age on BW and the risk of LBW and macrosomia with an adjustment for potential confounders [20]. The turning point of maternal age was defined as where the relationships between maternal age and BW and the risk of LBW and macrosomia started to change. 푃<0.05 was considered as statistically significant.\nAll analyses were conducted with the statistical packages R (R Foundation; http://www.r-project.org; version 3.6.3) and Empower Stats (www.empowerstats.com; X&Y Solutions Inc).\nWe described the distribution of each group with the number and proportion of births. The maternal age at birth obeyed normal distribution after testing and was expressed as means ± standard deviation (means ± SD). The women’s characteristics was analyzed as categorical variables to examine the association with BW by one-way ANOVA and risk of LBW and macrosomia by Chi-square Test.\nGeneralized additive model (GAM) was conducted to see whether there were nonlinear relationships between maternal age and BW, LBW and macrosomia, respectively. And the relationships were further verified by adjusting the potential confounders found by the One-way ANOVA and Chi-square Test.\nTwo-piece wise linear regression model was used to examine the threshold effect of the maternal age on BW and the risk of LBW and macrosomia with an adjustment for potential confounders [20]. The turning point of maternal age was defined as where the relationships between maternal age and BW and the risk of LBW and macrosomia started to change. 푃<0.05 was considered as statistically significant.\nAll analyses were conducted with the statistical packages R (R Foundation; http://www.r-project.org; version 3.6.3) and Empower Stats (www.empowerstats.com; X&Y Solutions Inc).", "A cross-sectional study was conducted to determine the relationship between maternal age and BW with dose-response analyses. The data were obtained from the Birth Registry Database in Xi’an city, which covered all midwifery clinics and hospitals in the city. Dedicated trained health workers are responsible for collecting and registering birth information at each hospital. Validity was confirmed at various levels: by the Chief Midwife, the Chief Physician, and the Department of Medical Administration at each hospital; by the newborns’ parents at the time when the birth certificate was issued; and reviewed by issuing personnel. This analysis included all full-term singleton live births born from January 2015 to December 2018 in Xi’an city of Shaanxi province in China. We collected the information on maternal and newborn, including the number of births, BW, gestational age, neonatal birth date, maternal residence, maternal birth date, residence and ethnicity.\nA total number of 536,993 mother-child pairs were abstracted from the Medical Birth Registry of Xi’an, born from Jan 2015 to Dec 2018. After removing those failing to meet the requirements explained below and unreasonable records, 490,143 mother-child pairs (91.3%) were included in the analysis. Newborns who were term birth (gestational age was between 37 to 41+ 6 weeks), singleton live births, and whose BW was greater than or equal 1000 g and maternal age at birth ranged from 20 to 40 years old were selected. 
Clinically, those extreme values (BW < 1000 g) were mainly caused by some clinical diseases rather than maternal age. These extremes can create biases that affect the exploration for real relationships between mother’s age and BW. Similarly, pregnant women over the age of 40 and under the age of 20 had more clinical complications and has too many confounding factors, which would cause biases. To get accurate results, we excluded this part of data. The flow chart of including criteria can be seen in Fig. 1.\nFig. 1Flow chart of including criteria.\nFlow chart of including criteria.\nThe study protocol was approved by the Medical Ethics Committee of the First Affiliated Hospital of Xi’an Jiaotong University (No.XJTU1AF2018LSK-245). Data used in the study were all anonymous, and no individual identifiable information was available for the analysis. In the process of this study, all data were only used to conduct research, not for other purposes, which can ensure that the privacy of patients was fully protected.", "The maternal age was analyzed as a continuous variable. BW (in grams) was measured routinely by registered midwives using an electronic weighting scale within 2 hours of delivery. The LBW was defined as BW less than 2500 g [18], and the macrosomia was defined as BW greater than or equal to 4000 g [19]. The season of birth was divided into spring, summer, autumn and winter according to the birth time. According to the Chinese lunar calendar, the winter, spring, summer and fall begin on December 1st, March 1st, June 1st and September 1st, respectively. The maternal residence was divided into 3 groups (urban, suburb, countryside). Maternal ethnicity was divided into 2 groups (Han and other).", "We described the distribution of each group with the number and proportion of births. The maternal age at birth obeyed normal distribution after testing and was expressed as means ± standard deviation (means ± SD). The women’s characteristics was analyzed as categorical variables to examine the association with BW by one-way ANOVA and risk of LBW and macrosomia by Chi-square Test.\nGeneralized additive model (GAM) was conducted to see whether there were nonlinear relationships between maternal age and BW, LBW and macrosomia, respectively. And the relationships were further verified by adjusting the potential confounders found by the One-way ANOVA and Chi-square Test.\nTwo-piece wise linear regression model was used to examine the threshold effect of the maternal age on BW and the risk of LBW and macrosomia with an adjustment for potential confounders [20]. The turning point of maternal age was defined as where the relationships between maternal age and BW and the risk of LBW and macrosomia started to change. 푃<0.05 was considered as statistically significant.\nAll analyses were conducted with the statistical packages R (R Foundation; http://www.r-project.org; version 3.6.3) and Empower Stats (www.empowerstats.com; X&Y Solutions Inc).", "A total of 490,143 mother-child pairs were abstracted. The mean BW of newborns was 3364.937 ± 420.528 g, and the mean age of mothers was 28.728 ± 4.134 years old. The incidence of LBW was 1.5%, and the incidence of macrosomia was 6.0%. The basic characteristics of the mother-child pairs were shown in Table 1. 
There were significant differences in BW and in the proportions of LBW and macrosomia among gestational age groups, season-of-birth groups and maternal residence groups (P < 0.05), but not among maternal ethnicity groups (P > 0.05).
Table 1 Basic characteristics of the mother-child pairs
Variable | N (%) | Birth weight, mean ± SD (g) | Low birth weight, n (%) | Macrosomia, n (%)
Gestational age (weeks): F = 51543.00*, χ2 = 15294.00*, χ2 = 7197.40*
 37–37+6 | 36,138 (7.4) | 3020.404 ± 422.075 | 3125 (8.6) | 498 (1.4)
 38–38+6 | 98,636 (20.1) | 3247.870 ± 396.350 | 1994 (2.0) | 2946 (3.0)
 39–39+6 | 162,966 (33.2) | 3369.093 ± 394.169 | 1312 (0.8) | 8442 (5.2)
 40–40+6 | 138,209 (28.2) | 3462.910 ± 400.804 | 590 (0.4) | 11,373 (8.2)
 41–41+6 | 54,194 (11.1) | 3545.406 ± 395.768 | 125 (0.2) | 6198 (11.4)
Season of birth: F = 24.01*, χ2 = 13.97*, χ2 = 36.51*
 Spring | 119,481 (24.4) | 3363.061 ± 422.333 | 1806 (1.5) | 7137 (5.9)
 Summer | 121,425 (24.8) | 3360.024 ± 18.322 | 1859 (1.5) | 6932 (5.7)
 Autumn | 128,548 (26.2) | 3367.011 ± 419.182 | 1773 (1.4) | 7806 (6.1)
 Winter | 120,689 (24.6) | 3369.533 ± 422.324 | 1708 (1.4) | 7582 (6.3)
Maternal residence: F = 243.76*, χ2 = 58.42*, χ2 = 40.42*
 Urban | 245,250 (50.1) | 3372.876 ± 416.985 | 3370 (1.4) | 15,193 (6.2)
 Suburb | 128,008 (26.1) | 3363.916 ± 422.330 | 1800 (1.4) | 7648 (6.0)
 Countryside | 116,885 (23.8) | 3349.401 ± 425.485 | 1976 (1.7) | 6616 (5.7)
Ethnicity: F = 0.975, χ2 = 0.265, χ2 = 1.165
 Han | 485,192 (99.0) | 3364.998 ± 420.453 | 7069 (1.5) | 29,141 (6.0)
 Other | 4951 (1.0) | 3359.067 ± 427.861 | 77 (1.6) | 316 (6.4)
(Column totals: birth weight n = 490,143; low birth weight n = 7146; macrosomia n = 29,457. For each variable, the F statistic refers to birth weight and the two χ2 statistics refer to low birth weight and macrosomia, respectively. *P < 0.05)", "Both unadjusted and adjusted smoothed plots suggest that there was a nonlinear relationship between maternal age and BW (Fig. 2a, b). BW increased gradually until age 34, then decreased. Similarly, the adjusted smoothed plots suggest that there were nonlinear relationships between maternal age and LBW and macrosomia (Fig. 2c, d). The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with maternal age across the range of 20 to 40 years old.
Fig. 2 Relationships between maternal age and BW, LBW and macrosomia with 95% confidence intervals (dashed lines). a: unadjusted BW; b: adjusted BW; c: LBW; d: macrosomia (b, c and d adjusted for gestational age, season of birth and maternal residence).
Two turning points of maternal age for BW were found, at 24 and 34 years old (Table 2). BW increased by 16.204 g per year of maternal age up to 24 years old (β = 16.204, 95%CI: 14.323, 18.086) and by 12.051 g per year when maternal age ranged from 24 to 34 years old (β = 12.051, 95%CI: 11.609, 12.493).
There was no significant association between maternal age and BW when maternal age was older than 34 years old (β = -0.824, 95% CI: -3.112, 1.464).
Table 2 The effect of maternal age on BW in the two-piece wise linear regression model
Outcome (turning points) | Maternal age range | Adjusted β/OR (95% CI)a | P value
Birth weight (24 y, 34 y) | 20 ≤ age < 24 y | 16.204 (14.323, 18.086) | < 0.001
 | 24 ≤ age ≤ 34 y | 12.051 (11.609, 12.493) | < 0.001
 | 34 < age ≤ 40 y | -0.824 (-3.112, 1.464) | 0.480
Low birth weight (27 y, 36 y) | 20 ≤ age < 27 y | 0.917 (0.903, 0.932) | < 0.001
 | 27 ≤ age ≤ 36 y | 0.965 (0.955, 0.976) | < 0.001
 | 36 < age ≤ 40 y | 1.133 (1.026, 1.250) | 0.013
Macrosomia (24 y, 33 y) | 20 ≤ age < 24 y | 1.102 (1.075, 1.129) | < 0.001
 | 24 ≤ age ≤ 33 y | 1.065 (1.060, 1.071) | < 0.001
 | 33 < age ≤ 40 y | 1.029 (1.012, 1.046) | 0.001
a Adjusted for gestational age, season of birth and maternal residence.
Two turning points of maternal age for LBW were found, at 27 and 36 years old (Table 2). The risk of LBW decreased by 8.3% per year of maternal age up to 27 years old (odds ratio (OR) = 0.917, 95%CI: 0.903, 0.932) and by 3.5% per year when maternal age ranged from 27 to 36 years old (OR = 0.965, 95%CI: 0.955, 0.976), and increased by 13.3% per year when maternal age was older than 36 years old (OR = 1.133, 95%CI: 1.026, 1.250). Similarly, two turning points of maternal age for macrosomia were found, at 24 and 33 years old (Table 2). The risk of macrosomia increased by 10.2% per year of maternal age up to 24 years old (OR = 1.102, 95%CI: 1.075, 1.129), by 6.5% per year when maternal age ranged from 24 to 33 years old (OR = 1.065, 95%CI: 1.060, 1.071), and by 2.9% per year when maternal age was older than 33 years old (OR = 1.029, 95%CI: 1.012, 1.046).", "Our research indicated the specific relationships between maternal age and the changes in BW, LBW and macrosomia. BW increased gradually until maternal age 34, then decreased. The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with maternal age. The results of this study show that the relationship between the risk of abnormal BW and maternal age is not based on the traditional 35-year-old boundary. Our findings provide the absolute risks of abnormal BW by maternal age, which would be useful for patient fertility counseling.", "A nonlinear relationship between maternal age and BW was observed, and two turning points of maternal age, at 24 and 34 years old, were found in this study. BW increased faster from 20 to 23 years old than from 24 to 34 years old. The curve was consistent with the findings of previous studies [12, 13]; however, no previous study has reported 34 years old as the threshold maternal age for BW [12, 21, 22]. We observed a marginally significant negative association between maternal age and BW when maternal age ranged from 35 to 40 years, which was consistent with the findings of previous studies [23, 24]. In previous studies, the population was grouped into several maternal age subgroups in order to investigate the effect of maternal age on BW. Most studies considered women in the 20–29 years age group as the reference group. 
Differently, in our study, the maternal age was analyzed as a continuous variable rather than categorical variable, to find a more accurate age threshold.\nThe mechanism of the effect of maternal age on BW is still unclear. In the study, we found the risks of LBW and macrosomia after 35 years present the same pattern, increasing at these maternal age period. Relevant researches showed that most of the effects on the offspring of intrauterine exposure to maternal age-related obstetric complications might be induced by epigenetic DNA reprogramming during critical periods of the embryo or fetal development [25]. And it is well known that mitochondria are maternally derived. We also know that mitochondrial DNA is not capable of DNA repair and is therefore at greater risk of acquiring mutations with age.\n[12]. Similarly, the quality of woman’s eggs declined dramatically with increasing age, leading to an increased risk of pregnancy-related complications among older women [12]. Furthermore, older mothers are more likely to suffer from chronic diseases and their complications in pregnancy, including obesity, anemia and diabetes. In addition, ageing process affects the reproductive system similar to other systems in the human body. These factors may contribute to that the risks of LBW and macrosomia after 35 years present the same pattern, increasing at these maternal age period.\nA nonlinear relationship between maternal age and the risk of term LBW was found in our study. The curve was the same as the previous study [13, 26]. As LBW newborns may be premature with other risk factors [27], we restricted our study population to term newborns, therefore the etiology of LBW was intrauterine growth restriction [18]. The prevalence of term LBW in our study was 1.5%, which was lower than 2% reported by the other study [28], which suggesting there was a good perinatal care system in Xi’an, China. Although the prevalence of term LBW is low, it is not negligible and term LBW can lead to adverse pregnancy outcomes, as severe neonatal asphyxia [29]. The threshold maternal age on the risk of term LBW was 36 years old, which means that the risk of term LBW will significantly increase when the maternal age is over 36 years old, but fewer studies have reported it [5, 6, 14]. However, the biological mechanisms by which maternal age cause the term LBW are unclear. The increased risk of LBW among younger can be explained by the low socioeconomic conditions and increased nutritional demands of pregnancy [30].\nWith the increase of maternal age, the risk of macrosomia was increasing and two turning points of maternal age were found at 24 and 33 years. The curve was identical to the previous studies [31]. The incidence of macrosomia was 6.0% in our study, which was approximate with the previous study [14]. The risk of macrosomia increased faster from 20 to 23 years old than from 24 to 33 years old, which was the same as the change of BW. Term macrosomia is influenced by maternal hyperglycemia and endocrine status through placental circulation [32]. The increased risk of term macrosomia is partly explained by the increased prevalence of diabetes with the increased maternal age [13].", "Our research has some advantages: first, our study was a large sample research based on four years records. Besides, our study had very strict inclusion and exclusion criteria and the data was also carried on the strict cleaning and so on, which increases the credibility of the results. 
However, there were some potential limitations in this study. First, some potential confounders, such as fetus gender, parity, medical history, economic condition and paternal age, were not adjusted because of limited data. Although the cutoff for “paternal advanced age” is not clearly defined, there is an increase in genetic risk as men age. Previous studies showed that economic condition and fetus gender were associated with higher BW, but adjustment for socioeconomic factors and fetus gender made little difference to the results [22]. In addition, at any maternal age, higher parity was associated with higher BW [22].\nTherefore, the extrapolation of the results requires caution, especially for others ethnicity [21]. However, the results of this study still have a certain degree of reference significance for other regions. Second, the mother-child pairs were not included when the maternal age was younger than 20 years old or older than 40 years old, because of the small proportion of them and limited data. Women delivering before 20 or after age 40 years had a higher incidence of pregnancy complications. Therefore, our research could only reflect the changing trend of BW, LBW and macrosomia when the maternal age ranged from 20 to 40 years old. In order to ensure the accuracy of our research results, we limited the subjects to full term singleton live birth, which could eliminate some potential influencing factors [27], such as serious pregnancy complications. Although the large sample size might increase potential confounding, we could estimate the relationships between maternal age and BW, LBW and macrosomia in detail with generalized additive model and two-piece wise linear regression model." ]
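As a worked illustration of the variable definitions used in the Methods sections above (LBW as BW < 2500 g, macrosomia as BW ≥ 4000 g, and season of birth assigned from the December/March/June/September 1st boundaries), the derivation could look like the sketch below. It is a hypothetical example: the data frame `births` and its columns `bw` and `birth_date` are assumed names, not the registry's actual schema.

```r
# Derived variables following the definitions given in the Methods
births$lbw        <- births$bw < 2500        # low birth weight flag
births$macrosomia <- births$bw >= 4000       # macrosomia flag

m <- as.integer(format(births$birth_date, "%m"))   # calendar month 1-12
births$season <- cut(m %% 12,                      # December (12) maps to 0
                     breaks = c(-1, 2, 5, 8, 11),
                     labels = c("Winter", "Spring", "Summer", "Autumn"))

table(births$season, births$lbw)                   # quick cross-check of the coding
```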
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "\n\nData Source and Study Population\n\n", "Measurements and Definitions of LBW and Macrosomia", "Statistical Analysis", "Results", "Related factors of the mother-child pairs", "The relationship between maternal age and birth weight, LBW and macrosomia", "Discussion", "Main results", "Interpretation", "Strengths and limitations", "Conclusions" ]
[ "Birth weight (BW) is the most important index reflecting intrauterine growth and development of newborns, and is also a vital index to evaluate the health status of the newborns [1]. Abnormal BW, including low birth weight (LBW, BW < 2500 g) and macrosomia (BW ≥ 4000 g), significantly increases the risk of perinatal mortality and morbidity and, in recent years, has been shown to be a marker of age-related disease risk [2, 3]. As the most common adverse birth outcome, the incidence of abnormal BW is generally high in the world. It was estimated that the incidence of LBW was about 5–7% in developed countries and as high as 19% in developing countries [4], and in mainland China, the specific interest for this paper, it was 6.1% [5]. Furthermore, the incidence of macrosomia also has increased over the past two to three decades in both developed and developing countries, the prevalence of macrosomia was 9.2% in the United States and 7.3% in China recently [6, 7]. Considering the large number of newborns in China, it is likely that LBW and macrosomia might remain a major public health issue over the next few years in China [3].\nWith the development of society and change of people’s conception of procreation (assisted reproductive technology, the increased education and economic activities of women, reduced marriage rates and increased double income no kids), it was reported that the global trend of delayed childbirth is increasing, e.g. United States and Korea [8, 9]. Similarly in China, the mean maternal age and the proportion of elderly pregnant women(≥ 35 years) are increasing [10]. In 2011, a survey of 14 provinces in China showed that the average delivery age of pregnant women was 28 ± 5 years, the proportion of maternal age older than 35 was 7.8%, and the proportion of maternal age younger than 20 years old was 1.4% [6, 11]. The high proportion of pregnant women older than 35 years old is accompanied by the increased risk of miscarriage and chromosomal aneuploidy [12].\nPrevious studies indicated that the relationship between maternal age and BW was inconsistent [12–16]. Most of the previous studies focused on the incidence of LWB and macrosomia in different age groups by fixed classification of maternal age [13, 16]. This classification method with 35 years old as the boundary cannot accurately reflect the trend of BW and age, as this may underestimate the age related risk of younger age groups and overestimate risks in older age groups [17]. There is no sufficient evidence to assess the appropriateness of using this artificial traditional age classification to evaluate their impacts on BW. Further, with the increase of delayed childbearing in recent years, the development of medical technology and social and economic development, the current traditional methods of dividing age threshold may not be suitable for today’s situation. Especially for China, a country with a large birth population and a huge social change, this research is helpful to clinical decisionmaking and for public health policies. The objective of this study was to estimate the changes of BW according to 1-year intervals of maternal age among women, treating age as a continuous variable with a flexible, generalized additive approach.", " \n\nData Source and Study Population\n\n A cross-sectional study was conducted to determine the relationship between maternal age and BW with dose-response analyses. 
The data were obtained from the Birth Registry Database in Xi’an city, which covered all midwifery clinics and hospitals in the city. Dedicated trained health workers are responsible for collecting and registering birth information at each hospital. Validity was confirmed at various levels: by the Chief Midwife, the Chief Physician, and the Department of Medical Administration at each hospital; by the newborns’ parents at the time when the birth certificate was issued; and reviewed by issuing personnel. This analysis included all full-term singleton live births born from January 2015 to December 2018 in Xi’an city of Shaanxi province in China. We collected the information on maternal and newborn, including the number of births, BW, gestational age, neonatal birth date, maternal residence, maternal birth date, residence and ethnicity.\nA total number of 536,993 mother-child pairs were abstracted from the Medical Birth Registry of Xi’an, born from Jan 2015 to Dec 2018. After removing those failing to meet the requirements explained below and unreasonable records, 490,143 mother-child pairs (91.3%) were included in the analysis. Newborns who were term birth (gestational age was between 37 to 41+ 6 weeks), singleton live births, and whose BW was greater than or equal 1000 g and maternal age at birth ranged from 20 to 40 years old were selected. Clinically, those extreme values (BW < 1000 g) were mainly caused by some clinical diseases rather than maternal age. These extremes can create biases that affect the exploration for real relationships between mother’s age and BW. Similarly, pregnant women over the age of 40 and under the age of 20 had more clinical complications and has too many confounding factors, which would cause biases. To get accurate results, we excluded this part of data. The flow chart of including criteria can be seen in Fig. 1.\nFig. 1Flow chart of including criteria.\nFlow chart of including criteria.\nThe study protocol was approved by the Medical Ethics Committee of the First Affiliated Hospital of Xi’an Jiaotong University (No.XJTU1AF2018LSK-245). Data used in the study were all anonymous, and no individual identifiable information was available for the analysis. In the process of this study, all data were only used to conduct research, not for other purposes, which can ensure that the privacy of patients was fully protected.\nA cross-sectional study was conducted to determine the relationship between maternal age and BW with dose-response analyses. The data were obtained from the Birth Registry Database in Xi’an city, which covered all midwifery clinics and hospitals in the city. Dedicated trained health workers are responsible for collecting and registering birth information at each hospital. Validity was confirmed at various levels: by the Chief Midwife, the Chief Physician, and the Department of Medical Administration at each hospital; by the newborns’ parents at the time when the birth certificate was issued; and reviewed by issuing personnel. This analysis included all full-term singleton live births born from January 2015 to December 2018 in Xi’an city of Shaanxi province in China. We collected the information on maternal and newborn, including the number of births, BW, gestational age, neonatal birth date, maternal residence, maternal birth date, residence and ethnicity.\nA total number of 536,993 mother-child pairs were abstracted from the Medical Birth Registry of Xi’an, born from Jan 2015 to Dec 2018. 
\n Measurements and Definitions of LBW and Macrosomia Maternal age was analyzed as a continuous variable. BW (in grams) was measured routinely by registered midwives using an electronic weighing scale within 2 hours of delivery. LBW was defined as BW less than 2500 g [18], and macrosomia as BW greater than or equal to 4000 g [19]. The season of birth (spring, summer, autumn or winter) was assigned from the date of birth; according to the Chinese lunar calendar, winter, spring, summer and autumn begin on December 1st, March 1st, June 1st and September 1st, respectively. Maternal residence was divided into three groups (urban, suburb, countryside), and maternal ethnicity into two groups (Han and other).\n Statistical Analysis We described the distribution of each group with the number and proportion of births. Maternal age at birth followed a normal distribution on testing and was expressed as mean ± standard deviation (mean ± SD). Maternal characteristics were analyzed as categorical variables to examine their association with BW by one-way ANOVA and with the risk of LBW and macrosomia by the chi-square test.\nA generalized additive model (GAM) was fitted to assess whether there were nonlinear relationships between maternal age and BW, LBW and macrosomia, respectively; an illustrative specification is sketched below.
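As an illustration only, and not the authors' actual code, a GAM of this form could be specified in R with the mgcv package; cohort, bw_g, lbw, maternal_age, gest_weeks, season and residence are the hypothetical names carried over from the selection sketch above.

    library(mgcv)

    ## Smooth term for maternal age; confounders entered as categorical covariates.
    fit_bw  <- gam(bw_g ~ s(maternal_age) + factor(gest_weeks) + season + residence,
                   data = cohort)
    fit_lbw <- gam(lbw ~ s(maternal_age) + factor(gest_weeks) + season + residence,
                   family = binomial, data = cohort)

    plot(fit_bw, shade = TRUE)   # smoothed age-BW curve with a confidence band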
These relationships were further verified after adjusting for the potential confounders identified by the one-way ANOVA and chi-square tests.\nA two-piecewise linear regression model was used to examine the threshold effect of maternal age on BW and on the risk of LBW and macrosomia, with adjustment for potential confounders [20]; a schematic specification is given after this subsection. The turning point of maternal age was defined as the age at which the relationship between maternal age and BW, or the risk of LBW or macrosomia, started to change. P < 0.05 was considered statistically significant.\nAll analyses were conducted with the statistical packages R (R Foundation; http://www.r-project.org; version 3.6.3) and Empower Stats (www.empowerstats.com; X&Y Solutions Inc).
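A two-piecewise (segmented) linear model with two knots can be written, for example, as a linear model with hinge terms. The sketch below uses the same hypothetical column names as above; the knot locations (24 and 34 years for BW) are taken from the Results rather than estimated here, and no claim is made that this reproduces the Empower Stats procedure.

    ## Piecewise linear fit for BW with knots at 24 and 34 years of maternal age.
    seg_bw <- lm(bw_g ~ maternal_age +
                   pmax(maternal_age - 24, 0) +
                   pmax(maternal_age - 34, 0) +
                   factor(gest_weeks) + season + residence,
                 data = cohort)
    summary(seg_bw)   # the slope in each segment is the running sum of the age coefficients

    ## Logistic analogue for LBW (knots at 27 and 36 years) or macrosomia (24 and 33 years):
    ## glm(lbw ~ maternal_age + pmax(maternal_age - 27, 0) + pmax(maternal_age - 36, 0) +
    ##       factor(gest_weeks) + season + residence, family = binomial, data = cohort)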
", "A cross-sectional study was conducted to determine the relationship between maternal age and BW with dose-response analyses. The data were obtained from the Birth Registry Database of Xi’an city, which covers all midwifery clinics and hospitals in the city. Dedicated trained health workers are responsible for collecting and registering birth information at each hospital. Validity was confirmed at several levels: by the Chief Midwife, the Chief Physician and the Department of Medical Administration at each hospital; by the newborns’ parents at the time the birth certificate was issued; and by the issuing personnel on review. This analysis included all full-term singleton live births from January 2015 to December 2018 in Xi’an city, Shaanxi province, China. We collected information on mothers and newborns, including the number of births, BW, gestational age, neonatal birth date, maternal birth date, maternal residence and ethnicity.\nA total of 536,993 mother-child pairs born from January 2015 to December 2018 were abstracted from the Medical Birth Registry of Xi’an. After removing records that failed to meet the requirements explained below or were otherwise implausible, 490,143 mother-child pairs (91.3%) were included in the analysis. We selected newborns who were term (gestational age between 37 and 41+6 weeks) singleton live births, whose BW was greater than or equal to 1000 g, and whose mothers were aged 20 to 40 years at delivery. Clinically, extreme values (BW < 1000 g) are mainly caused by disease rather than maternal age, and they can bias the exploration of the real relationship between maternal age and BW. Similarly, pregnant women over 40 and under 20 years of age had more clinical complications and introduced too many confounding factors, which would also cause bias; to obtain accurate results, these records were excluded. The flow chart of the inclusion criteria can be seen in Fig. 1.\nFig. 1 Flow chart of the inclusion criteria.\nThe study protocol was approved by the Medical Ethics Committee of the First Affiliated Hospital of Xi’an Jiaotong University (No. XJTU1AF2018LSK-245). Data used in the study were anonymous, and no individually identifiable information was available for the analysis. All data were used only for research and for no other purpose, ensuring that patient privacy was fully protected.", "Maternal age was analyzed as a continuous variable. BW (in grams) was measured routinely by registered midwives using an electronic weighing scale within 2 hours of delivery. LBW was defined as BW less than 2500 g [18], and macrosomia as BW greater than or equal to 4000 g [19]. The season of birth (spring, summer, autumn or winter) was assigned from the date of birth; according to the Chinese lunar calendar, winter, spring, summer and autumn begin on December 1st, March 1st, June 1st and September 1st, respectively. Maternal residence was divided into three groups (urban, suburb, countryside), and maternal ethnicity into two groups (Han and other).", "We described the distribution of each group with the number and proportion of births. Maternal age at birth followed a normal distribution on testing and was expressed as mean ± standard deviation (mean ± SD). Maternal characteristics were analyzed as categorical variables to examine their association with BW by one-way ANOVA and with the risk of LBW and macrosomia by the chi-square test.\nA generalized additive model (GAM) was fitted to assess whether there were nonlinear relationships between maternal age and BW, LBW and macrosomia, respectively. These relationships were further verified after adjusting for the potential confounders identified by the one-way ANOVA and chi-square tests.\nA two-piecewise linear regression model was used to examine the threshold effect of maternal age on BW and on the risk of LBW and macrosomia, with adjustment for potential confounders [20]. The turning point of maternal age was defined as the age at which the relationship between maternal age and BW, or the risk of LBW or macrosomia, started to change. P < 0.05 was considered statistically significant.\nAll analyses were conducted with the statistical packages R (R Foundation; http://www.r-project.org; version 3.6.3) and Empower Stats (www.empowerstats.com; X&Y Solutions Inc).", " Related factors of the mother-child pairs A total of 490,143 mother-child pairs were abstracted. The mean BW of the newborns was 3364.937 ± 420.528 g, and the mean age of the mothers was 28.728 ± 4.134 years. The incidence of LBW was 1.5%, and the incidence of macrosomia was 6.0%. The basic characteristics of the mother-child pairs are shown in Table 1.
There were significant differences in BW and in the proportions of LBW and macrosomia among the gestational age groups, season of birth groups and maternal residence groups (P < 0.05), but not among the maternal ethnicity groups (P > 0.05).\nTable 1 Basic characteristics of the mother-child pairs (birth weight, n = 490,143; low birth weight, n = 7146; macrosomia, n = 29,457). Each row gives N (%) and mean BW ± SD (g), followed by LBW N (%) and macrosomia N (%).\nGestational age (weeks): BW F = 51543.00*; LBW χ2 = 15294.00*; macrosomia χ2 = 7197.40*\n 37–37+6: 36,138 (7.4), 3020.404 ± 422.075; 3125 (8.6); 498 (1.4)\n 38–38+6: 98,636 (20.1), 3247.870 ± 396.350; 1994 (2.0); 2946 (3.0)\n 39–39+6: 162,966 (33.2), 3369.093 ± 394.169; 1312 (0.8); 8442 (5.2)\n 40–40+6: 138,209 (28.2), 3462.910 ± 400.804; 590 (0.4); 11,373 (8.2)\n 41–41+6: 54,194 (11.1), 3545.406 ± 395.768; 125 (0.2); 6198 (11.4)\nSeason of birth: BW F = 24.01*; LBW χ2 = 13.97*; macrosomia χ2 = 36.51*\n Spring: 119,481 (24.4), 3363.061 ± 422.333; 1806 (1.5); 7137 (5.9)\n Summer: 121,425 (24.8), 3360.024 ± 18.322; 1859 (1.5); 6932 (5.7)\n Autumn: 128,548 (26.2), 3367.011 ± 419.182; 1773 (1.4); 7806 (6.1)\n Winter: 120,689 (24.6), 3369.533 ± 422.324; 1708 (1.4); 7582 (6.3)\nMaternal residence: BW F = 243.76*; LBW χ2 = 58.42*; macrosomia χ2 = 40.42*\n Urban: 245,250 (50.1), 3372.876 ± 416.985; 3370 (1.4); 15,193 (6.2)\n Suburb: 128,008 (26.1), 3363.916 ± 422.330; 1800 (1.4); 7648 (6.0)\n Countryside: 116,885 (23.8), 3349.401 ± 425.485; 1976 (1.7); 6616 (5.7)\nEthnicity: BW F = 0.975; LBW χ2 = 0.265; macrosomia χ2 = 1.165\n Han: 485,192 (99.0), 3364.998 ± 420.453; 7069 (1.5); 29,141 (6.0)\n Other: 4951 (1.0), 3359.067 ± 427.861; 77 (1.6); 316 (6.4)\n*P < 0.05\n The relationship between maternal age and birth weight, LBW and macrosomia Both unadjusted and adjusted smoothed plots suggest a nonlinear relationship between maternal age and BW (Fig. 2a, b): BW increased gradually until age 34, then decreased. Similarly, the adjusted smoothed plots suggest nonlinear relationships between maternal age and LBW and macrosomia (Fig. 2c, d).
The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with maternal age across the whole range of 20 to 40 years.\nFig. 2 Relationships between maternal age and BW, LBW and macrosomia with 95% confidence intervals (dashed lines)*. a: unadjusted BW; b: adjusted BW; c: LBW; d: macrosomia. *Panels b, c and d are adjusted for gestational age, season of birth and maternal residence.\nTwo turning points of maternal age for BW were found, at 24 and 34 years (Table 2). BW increased by 16.204 g per year of maternal age up to 24 years (β = 16.204, 95%CI: 14.323, 18.086) and by 12.051 g per year when maternal age ranged from 24 to 34 years (β = 12.051, 95%CI: 11.609, 12.493). There was no significant association between maternal age and BW when maternal age was older than 34 years (β = -0.824, 95%CI: -3.112, 1.464).\nTable 2 The effect of maternal age on BW, LBW and macrosomia in the two-piecewise linear regression model; entries are adjusted β (BW) or OR (LBW, macrosomia) with 95%CI and P valuea\nBirth weight (turning points 24 y and 34 y):\n 20 ≤ age < 24 y: 16.204 (14.323, 18.086), P < 0.001\n 24 ≤ age ≤ 34 y: 12.051 (11.609, 12.493), P < 0.001\n 34 < age ≤ 40 y: -0.824 (-3.112, 1.464), P = 0.480\nLow birth weight (turning points 27 y and 36 y):\n 20 ≤ age < 27 y: 0.917 (0.903, 0.932), P < 0.001\n 27 ≤ age ≤ 36 y: 0.965 (0.955, 0.976), P < 0.001\n 36 < age ≤ 40 y: 1.133 (1.026, 1.250), P = 0.013\nMacrosomia (turning points 24 y and 33 y):\n 20 ≤ age < 24 y: 1.102 (1.075, 1.129), P < 0.001\n 24 ≤ age ≤ 33 y: 1.065 (1.060, 1.071), P < 0.001\n 33 < age ≤ 40 y: 1.029 (1.012, 1.046), P = 0.001\naAdjusted for gestational age, season of birth and maternal residence\nTwo turning points of maternal age for LBW were found, at 27 and 36 years (Table 2). The risk of LBW decreased by 8.3% per year of maternal age up to 27 years (odds ratio (OR) = 0.917, 95%CI: 0.903, 0.932) and by 3.5% per year when maternal age ranged from 27 to 36 years (OR = 0.965, 95%CI: 0.955, 0.976), and it increased by 13.3% per year when maternal age was older than 36 years (OR = 1.133, 95%CI: 1.026, 1.250). Similarly, two turning points of maternal age for macrosomia were found, at 24 and 33 years (Table 2). The risk of macrosomia increased by 10.2% per year of maternal age up to 24 years (OR = 1.102, 95%CI: 1.075, 1.129), by 6.5% per year when maternal age ranged from 24 to 33 years (OR = 1.065, 95%CI: 1.060, 1.071), and by 2.9% per year when maternal age was older than 33 years (OR = 1.029, 95%CI: 1.012, 1.046).
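As a reading aid only, the per-year estimates in Table 2 can be compounded over an age interval; the illustrative calculations below simply use the reported point estimates, with R as a calculator.

    10 * 12.051   # about 120.5 g higher mean BW expected between ages 24 and 34
    1.133^4       # about 1.65-fold odds of term LBW accumulated from age 36 to 40
    1.065^9       # about 1.76-fold odds of macrosomia accumulated from age 24 to 33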
", "A total of 490,143 mother-child pairs were abstracted. The mean BW of the newborns was 3364.937 ± 420.528 g, and the mean age of the mothers was 28.728 ± 4.134 years. The incidence of LBW was 1.5%, and the incidence of macrosomia was 6.0%. The basic characteristics of the mother-child pairs are shown in Table 1.
There were significant differences in BW and in the proportions of LBW and macrosomia among the gestational age groups, season of birth groups and maternal residence groups (P < 0.05), but not among the maternal ethnicity groups (P > 0.05).\nTable 1 Basic characteristics of the mother-child pairs (birth weight, n = 490,143; low birth weight, n = 7146; macrosomia, n = 29,457). Each row gives N (%) and mean BW ± SD (g), followed by LBW N (%) and macrosomia N (%).\nGestational age (weeks): BW F = 51543.00*; LBW χ2 = 15294.00*; macrosomia χ2 = 7197.40*\n 37–37+6: 36,138 (7.4), 3020.404 ± 422.075; 3125 (8.6); 498 (1.4)\n 38–38+6: 98,636 (20.1), 3247.870 ± 396.350; 1994 (2.0); 2946 (3.0)\n 39–39+6: 162,966 (33.2), 3369.093 ± 394.169; 1312 (0.8); 8442 (5.2)\n 40–40+6: 138,209 (28.2), 3462.910 ± 400.804; 590 (0.4); 11,373 (8.2)\n 41–41+6: 54,194 (11.1), 3545.406 ± 395.768; 125 (0.2); 6198 (11.4)\nSeason of birth: BW F = 24.01*; LBW χ2 = 13.97*; macrosomia χ2 = 36.51*\n Spring: 119,481 (24.4), 3363.061 ± 422.333; 1806 (1.5); 7137 (5.9)\n Summer: 121,425 (24.8), 3360.024 ± 18.322; 1859 (1.5); 6932 (5.7)\n Autumn: 128,548 (26.2), 3367.011 ± 419.182; 1773 (1.4); 7806 (6.1)\n Winter: 120,689 (24.6), 3369.533 ± 422.324; 1708 (1.4); 7582 (6.3)\nMaternal residence: BW F = 243.76*; LBW χ2 = 58.42*; macrosomia χ2 = 40.42*\n Urban: 245,250 (50.1), 3372.876 ± 416.985; 3370 (1.4); 15,193 (6.2)\n Suburb: 128,008 (26.1), 3363.916 ± 422.330; 1800 (1.4); 7648 (6.0)\n Countryside: 116,885 (23.8), 3349.401 ± 425.485; 1976 (1.7); 6616 (5.7)\nEthnicity: BW F = 0.975; LBW χ2 = 0.265; macrosomia χ2 = 1.165\n Han: 485,192 (99.0), 3364.998 ± 420.453; 7069 (1.5); 29,141 (6.0)\n Other: 4951 (1.0), 3359.067 ± 427.861; 77 (1.6); 316 (6.4)\n*P < 0.05", "Both unadjusted and adjusted smoothed plots suggest a nonlinear relationship between maternal age and BW (Fig. 2a, b): BW increased gradually until age 34, then decreased. Similarly, the adjusted smoothed plots suggest nonlinear relationships between maternal age and LBW and macrosomia (Fig. 2c, d). The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with maternal age across the whole range of 20 to 40 years.\nFig. 2 Relationships between maternal age and BW, LBW and macrosomia with 95% confidence intervals (dashed lines)*. a: unadjusted BW; b: adjusted BW; c: LBW; d: macrosomia. *Panels b, c and d are adjusted for gestational age, season of birth and maternal residence.\nTwo turning points of maternal age for BW were found, at 24 and 34 years (Table 2). BW increased by 16.204 g per year of maternal age up to 24 years (β = 16.204, 95%CI: 14.323, 18.086) and by 12.051 g per year when maternal age ranged from 24 to 34 years (β = 12.051, 95%CI: 11.609, 12.493).
However, there was no significant association between maternal age and BW when maternal age was older than 34 years (β = -0.824, 95%CI: -3.112, 1.464).\nTable 2 The effect of maternal age on BW, LBW and macrosomia in the two-piecewise linear regression model; entries are adjusted β (BW) or OR (LBW, macrosomia) with 95%CI and P valuea\nBirth weight (turning points 24 y and 34 y):\n 20 ≤ age < 24 y: 16.204 (14.323, 18.086), P < 0.001\n 24 ≤ age ≤ 34 y: 12.051 (11.609, 12.493), P < 0.001\n 34 < age ≤ 40 y: -0.824 (-3.112, 1.464), P = 0.480\nLow birth weight (turning points 27 y and 36 y):\n 20 ≤ age < 27 y: 0.917 (0.903, 0.932), P < 0.001\n 27 ≤ age ≤ 36 y: 0.965 (0.955, 0.976), P < 0.001\n 36 < age ≤ 40 y: 1.133 (1.026, 1.250), P = 0.013\nMacrosomia (turning points 24 y and 33 y):\n 20 ≤ age < 24 y: 1.102 (1.075, 1.129), P < 0.001\n 24 ≤ age ≤ 33 y: 1.065 (1.060, 1.071), P < 0.001\n 33 < age ≤ 40 y: 1.029 (1.012, 1.046), P = 0.001\naAdjusted for gestational age, season of birth and maternal residence\nTwo turning points of maternal age for LBW were found, at 27 and 36 years (Table 2). The risk of LBW decreased by 8.3% per year of maternal age up to 27 years (odds ratio (OR) = 0.917, 95%CI: 0.903, 0.932) and by 3.5% per year when maternal age ranged from 27 to 36 years (OR = 0.965, 95%CI: 0.955, 0.976), and it increased by 13.3% per year when maternal age was older than 36 years (OR = 1.133, 95%CI: 1.026, 1.250). Similarly, two turning points of maternal age for macrosomia were found, at 24 and 33 years (Table 2). The risk of macrosomia increased by 10.2% per year of maternal age up to 24 years (OR = 1.102, 95%CI: 1.075, 1.129), by 6.5% per year when maternal age ranged from 24 to 33 years (OR = 1.065, 95%CI: 1.060, 1.071), and by 2.9% per year when maternal age was older than 33 years (OR = 1.029, 95%CI: 1.012, 1.046).", " Main results Our research characterized the specific relationships between maternal age and BW, LBW and macrosomia. BW increased gradually until maternal age 34, then decreased. The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with maternal age. These results show that the relationship between the risk of abnormal BW and maternal age does not follow the traditional 35-year boundary. Our findings provide the absolute risks of abnormal BW by maternal age, which would be useful for fertility counseling.\n Interpretation A nonlinear relationship between maternal age and BW was observed, with two turning points of maternal age at 24 and 34 years. BW increased faster from 20 to 23 years than from 24 to 34 years.
The shape of this curve is consistent with previous studies [12, 13]; however, no previous study has reported a threshold maternal age of 34 years for BW [12, 21, 22]. We observed a marginally significant negative association between maternal age and BW when maternal age ranged from 35 to 40 years, consistent with previous findings [23, 24]. In previous studies, the population was usually grouped into several maternal age subgroups to investigate the effect of maternal age on BW, most often with women aged 20–29 years as the reference group. In contrast, in our study maternal age was analyzed as a continuous rather than a categorical variable, in order to find a more accurate age threshold.\nThe mechanism of the effect of maternal age on BW is still unclear. In this study, we found that the risks of LBW and macrosomia showed the same pattern after 35 years, both increasing over this maternal age range. Relevant research has shown that most of the effects on the offspring of intrauterine exposure to maternal age-related obstetric complications might be induced by epigenetic DNA reprogramming during critical periods of embryonic or fetal development [25]. It is also well known that mitochondria are maternally derived, and that mitochondrial DNA is not capable of DNA repair and is therefore at greater risk of acquiring mutations with age [12]. Similarly, the quality of a woman’s eggs declines dramatically with increasing age, leading to an increased risk of pregnancy-related complications among older women [12]. Furthermore, older mothers are more likely to suffer from chronic diseases and their complications in pregnancy, including obesity, anemia and diabetes. In addition, the ageing process affects the reproductive system as it does other systems in the human body. These factors may explain why the risks of LBW and macrosomia show the same increasing pattern after 35 years.\nA nonlinear relationship between maternal age and the risk of term LBW was also found in our study, with a curve similar to those in previous studies [13, 26]. As LBW newborns may be premature with other risk factors [27], we restricted our study population to term newborns, so that the etiology of LBW was intrauterine growth restriction [18]. The prevalence of term LBW in our study was 1.5%, lower than the 2% reported in another study [28], suggesting a good perinatal care system in Xi’an, China. Although the prevalence of term LBW is low, it is not negligible, and term LBW can lead to adverse pregnancy outcomes such as severe neonatal asphyxia [29]. The threshold maternal age for the risk of term LBW was 36 years, meaning that the risk of term LBW increases significantly when maternal age exceeds 36 years; few studies have reported this [5, 6, 14]. However, the biological mechanisms by which maternal age causes term LBW are unclear. The increased risk of LBW among younger mothers can be explained by low socioeconomic conditions and the increased nutritional demands of pregnancy [30].\nThe risk of macrosomia increased with maternal age, and two turning points of maternal age were found at 24 and 33 years. The curve is consistent with previous studies [31]. The incidence of macrosomia in our study was 6.0%, similar to a previous study [14].
The risk of macrosomia increased faster from 20 to 23 years than from 24 to 33 years, mirroring the change in BW. Term macrosomia is influenced by maternal hyperglycemia and endocrine status through the placental circulation [32]. The increased risk of term macrosomia is partly explained by the increased prevalence of diabetes with increasing maternal age [13].
\n Strengths and limitations Our research has several strengths. First, it was a large-sample study based on four years of records. In addition, the study applied strict inclusion and exclusion criteria, and the data underwent rigorous cleaning, which increases the credibility of the results. However, there were some potential limitations. First, some potential confounders, such as fetal sex, parity, medical history, economic status and paternal age, were not adjusted for because of limited data.
Although the cutoff for “paternal advanced age” is not clearly defined, genetic risk increases as men age. Previous studies showed that economic status and fetal sex were associated with higher BW, but adjustment for socioeconomic factors and fetal sex made little difference to the results [22]. In addition, at any maternal age, higher parity was associated with higher BW [22].\nTherefore, extrapolation of the results requires caution, especially to other ethnic groups [21]; nevertheless, the results still provide a useful reference for other regions. Second, mother-child pairs were not included when maternal age was younger than 20 years or older than 40 years, because of their small proportion and limited data; women delivering before 20 or after 40 years of age also have a higher incidence of pregnancy complications. Therefore, our research can only reflect the changing trends of BW, LBW and macrosomia when maternal age ranges from 20 to 40 years. To ensure the accuracy of our results, we limited the subjects to full-term singleton live births, which eliminated some potential influencing factors [27], such as serious pregnancy complications. Although the large sample size might increase potential confounding, it allowed us to estimate the relationships between maternal age and BW, LBW and macrosomia in detail with the generalized additive model and the two-piecewise linear regression model.", "Our research characterized the specific relationships between maternal age and BW, LBW and macrosomia. BW increased gradually until maternal age 34, then decreased. The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with maternal age. These results show that the relationship between the risk of abnormal BW and maternal age does not follow the traditional 35-year boundary. Our findings provide the absolute risks of abnormal BW by maternal age, which would be useful for fertility counseling.", "A nonlinear relationship between maternal age and BW was observed, with two turning points of maternal age at 24 and 34 years. BW increased faster from 20 to 23 years than from 24 to 34 years. The shape of this curve is consistent with previous studies [12, 13]; however, no previous study has reported a threshold maternal age of 34 years for BW [12, 21, 22]. We observed a marginally significant negative association between maternal age and BW when maternal age ranged from 35 to 40 years, consistent with previous findings [23, 24]. In previous studies, the population was usually grouped into several maternal age subgroups to investigate the effect of maternal age on BW, most often with women aged 20–29 years as the reference group. In contrast, in our study maternal age was analyzed as a continuous rather than a categorical variable, in order to find a more accurate age threshold.\nThe mechanism of the effect of maternal age on BW is still unclear. In this study, we found that the risks of LBW and macrosomia showed the same pattern after 35 years, both increasing over this maternal age range.
Relevant research has shown that most of the effects on the offspring of intrauterine exposure to maternal age-related obstetric complications might be induced by epigenetic DNA reprogramming during critical periods of embryonic or fetal development [25]. It is also well known that mitochondria are maternally derived, and that mitochondrial DNA is not capable of DNA repair and is therefore at greater risk of acquiring mutations with age [12]. Similarly, the quality of a woman’s eggs declines dramatically with increasing age, leading to an increased risk of pregnancy-related complications among older women [12]. Furthermore, older mothers are more likely to suffer from chronic diseases and their complications in pregnancy, including obesity, anemia and diabetes. In addition, the ageing process affects the reproductive system as it does other systems in the human body. These factors may explain why the risks of LBW and macrosomia show the same increasing pattern after 35 years.\nA nonlinear relationship between maternal age and the risk of term LBW was also found in our study, with a curve similar to those in previous studies [13, 26]. As LBW newborns may be premature with other risk factors [27], we restricted our study population to term newborns, so that the etiology of LBW was intrauterine growth restriction [18]. The prevalence of term LBW in our study was 1.5%, lower than the 2% reported in another study [28], suggesting a good perinatal care system in Xi’an, China. Although the prevalence of term LBW is low, it is not negligible, and term LBW can lead to adverse pregnancy outcomes such as severe neonatal asphyxia [29]. The threshold maternal age for the risk of term LBW was 36 years, meaning that the risk of term LBW increases significantly when maternal age exceeds 36 years; few studies have reported this [5, 6, 14]. However, the biological mechanisms by which maternal age causes term LBW are unclear. The increased risk of LBW among younger mothers can be explained by low socioeconomic conditions and the increased nutritional demands of pregnancy [30].\nThe risk of macrosomia increased with maternal age, and two turning points of maternal age were found at 24 and 33 years. The curve is consistent with previous studies [31]. The incidence of macrosomia in our study was 6.0%, similar to a previous study [14]. The risk of macrosomia increased faster from 20 to 23 years than from 24 to 33 years, mirroring the change in BW. Term macrosomia is influenced by maternal hyperglycemia and endocrine status through the placental circulation [32]. The increased risk of term macrosomia is partly explained by the increased prevalence of diabetes with increasing maternal age [13].", "Our research has several strengths. First, it was a large-sample study based on four years of records. In addition, the study applied strict inclusion and exclusion criteria, and the data underwent rigorous cleaning, which increases the credibility of the results. However, there were some potential limitations. First, some potential confounders, such as fetal sex, parity, medical history, economic status and paternal age, were not adjusted for because of limited data. Although the cutoff for “paternal advanced age” is not clearly defined, genetic risk increases as men age.
Previous studies showed that economic status and fetal sex were associated with higher BW, but adjustment for socioeconomic factors and fetal sex made little difference to the results [22]. In addition, at any maternal age, higher parity was associated with higher BW [22].\nTherefore, extrapolation of the results requires caution, especially to other ethnic groups [21]; nevertheless, the results still provide a useful reference for other regions. Second, mother-child pairs were not included when maternal age was younger than 20 years or older than 40 years, because of their small proportion and limited data; women delivering before 20 or after 40 years of age also have a higher incidence of pregnancy complications. Therefore, our research can only reflect the changing trends of BW, LBW and macrosomia when maternal age ranges from 20 to 40 years. To ensure the accuracy of our results, we limited the subjects to full-term singleton live births, which eliminated some potential influencing factors [27], such as serious pregnancy complications. Although the large sample size might increase potential confounding, it allowed us to estimate the relationships between maternal age and BW, LBW and macrosomia in detail with the generalized additive model and the two-piecewise linear regression model.", "Our research characterized the specific relationships between maternal age and BW, LBW and macrosomia when maternal age ranged from 20 to 40 years. BW increased gradually until 34 years of age, then decreased. The threshold maternal age for LBW was 36 years, and the risk of macrosomia increased with maternal age. These results should be carefully taken into account by maternal care providers in order to inform women adequately, to support them in understanding the potential BW-related risks associated with their age, and to emphasize the importance of prenatal care. However, the optimal maternal age should be determined individually according to different pregnancy complications and adverse neonatal outcomes. The mechanism linking maternal age and BW remains unclear; therefore, further studies are needed to examine this relationship." ]
[ null, null, null, null, null, "results", null, null, "discussion", null, null, null, "conclusion" ]
[ "Maternal age", "Birth weight", "Low birth weight", "Macrosomia" ]
Background: Birth weight (BW) is the most important index reflecting intrauterine growth and development of newborns, and is also a vital index to evaluate the health status of the newborns [1]. Abnormal BW, including low birth weight (LBW, BW < 2500 g) and macrosomia (BW ≥ 4000 g), significantly increases the risk of perinatal mortality and morbidity and, in recent years, has been shown to be a marker of age-related disease risk [2, 3]. As the most common adverse birth outcome, the incidence of abnormal BW is generally high in the world. It was estimated that the incidence of LBW was about 5–7% in developed countries and as high as 19% in developing countries [4], and in mainland China, the specific interest for this paper, it was 6.1% [5]. Furthermore, the incidence of macrosomia also has increased over the past two to three decades in both developed and developing countries, the prevalence of macrosomia was 9.2% in the United States and 7.3% in China recently [6, 7]. Considering the large number of newborns in China, it is likely that LBW and macrosomia might remain a major public health issue over the next few years in China [3]. With the development of society and change of people’s conception of procreation (assisted reproductive technology, the increased education and economic activities of women, reduced marriage rates and increased double income no kids), it was reported that the global trend of delayed childbirth is increasing, e.g. United States and Korea [8, 9]. Similarly in China, the mean maternal age and the proportion of elderly pregnant women(≥ 35 years) are increasing [10]. In 2011, a survey of 14 provinces in China showed that the average delivery age of pregnant women was 28 ± 5 years, the proportion of maternal age older than 35 was 7.8%, and the proportion of maternal age younger than 20 years old was 1.4% [6, 11]. The high proportion of pregnant women older than 35 years old is accompanied by the increased risk of miscarriage and chromosomal aneuploidy [12]. Previous studies indicated that the relationship between maternal age and BW was inconsistent [12–16]. Most of the previous studies focused on the incidence of LWB and macrosomia in different age groups by fixed classification of maternal age [13, 16]. This classification method with 35 years old as the boundary cannot accurately reflect the trend of BW and age, as this may underestimate the age related risk of younger age groups and overestimate risks in older age groups [17]. There is no sufficient evidence to assess the appropriateness of using this artificial traditional age classification to evaluate their impacts on BW. Further, with the increase of delayed childbearing in recent years, the development of medical technology and social and economic development, the current traditional methods of dividing age threshold may not be suitable for today’s situation. Especially for China, a country with a large birth population and a huge social change, this research is helpful to clinical decisionmaking and for public health policies. The objective of this study was to estimate the changes of BW according to 1-year intervals of maternal age among women, treating age as a continuous variable with a flexible, generalized additive approach. Methods: Data Source and Study Population A cross-sectional study was conducted to determine the relationship between maternal age and BW with dose-response analyses. 
The data were obtained from the Birth Registry Database in Xi’an city, which covered all midwifery clinics and hospitals in the city. Dedicated trained health workers are responsible for collecting and registering birth information at each hospital. Validity was confirmed at various levels: by the Chief Midwife, the Chief Physician, and the Department of Medical Administration at each hospital; by the newborns’ parents at the time when the birth certificate was issued; and reviewed by issuing personnel. This analysis included all full-term singleton live births born from January 2015 to December 2018 in Xi’an city of Shaanxi province in China. We collected the information on maternal and newborn, including the number of births, BW, gestational age, neonatal birth date, maternal residence, maternal birth date, residence and ethnicity. A total number of 536,993 mother-child pairs were abstracted from the Medical Birth Registry of Xi’an, born from Jan 2015 to Dec 2018. After removing those failing to meet the requirements explained below and unreasonable records, 490,143 mother-child pairs (91.3%) were included in the analysis. Newborns who were term birth (gestational age was between 37 to 41+ 6 weeks), singleton live births, and whose BW was greater than or equal 1000 g and maternal age at birth ranged from 20 to 40 years old were selected. Clinically, those extreme values (BW < 1000 g) were mainly caused by some clinical diseases rather than maternal age. These extremes can create biases that affect the exploration for real relationships between mother’s age and BW. Similarly, pregnant women over the age of 40 and under the age of 20 had more clinical complications and has too many confounding factors, which would cause biases. To get accurate results, we excluded this part of data. The flow chart of including criteria can be seen in Fig. 1. Fig. 1Flow chart of including criteria. Flow chart of including criteria. The study protocol was approved by the Medical Ethics Committee of the First Affiliated Hospital of Xi’an Jiaotong University (No.XJTU1AF2018LSK-245). Data used in the study were all anonymous, and no individual identifiable information was available for the analysis. In the process of this study, all data were only used to conduct research, not for other purposes, which can ensure that the privacy of patients was fully protected. A cross-sectional study was conducted to determine the relationship between maternal age and BW with dose-response analyses. The data were obtained from the Birth Registry Database in Xi’an city, which covered all midwifery clinics and hospitals in the city. Dedicated trained health workers are responsible for collecting and registering birth information at each hospital. Validity was confirmed at various levels: by the Chief Midwife, the Chief Physician, and the Department of Medical Administration at each hospital; by the newborns’ parents at the time when the birth certificate was issued; and reviewed by issuing personnel. This analysis included all full-term singleton live births born from January 2015 to December 2018 in Xi’an city of Shaanxi province in China. We collected the information on maternal and newborn, including the number of births, BW, gestational age, neonatal birth date, maternal residence, maternal birth date, residence and ethnicity. A total number of 536,993 mother-child pairs were abstracted from the Medical Birth Registry of Xi’an, born from Jan 2015 to Dec 2018. 
After removing those failing to meet the requirements explained below and unreasonable records, 490,143 mother-child pairs (91.3%) were included in the analysis. Newborns who were term birth (gestational age was between 37 to 41+ 6 weeks), singleton live births, and whose BW was greater than or equal 1000 g and maternal age at birth ranged from 20 to 40 years old were selected. Clinically, those extreme values (BW < 1000 g) were mainly caused by some clinical diseases rather than maternal age. These extremes can create biases that affect the exploration for real relationships between mother’s age and BW. Similarly, pregnant women over the age of 40 and under the age of 20 had more clinical complications and has too many confounding factors, which would cause biases. To get accurate results, we excluded this part of data. The flow chart of including criteria can be seen in Fig. 1. Fig. 1Flow chart of including criteria. Flow chart of including criteria. The study protocol was approved by the Medical Ethics Committee of the First Affiliated Hospital of Xi’an Jiaotong University (No.XJTU1AF2018LSK-245). Data used in the study were all anonymous, and no individual identifiable information was available for the analysis. In the process of this study, all data were only used to conduct research, not for other purposes, which can ensure that the privacy of patients was fully protected. Measurements and Definitions of LBW and Macrosomia The maternal age was analyzed as a continuous variable. BW (in grams) was measured routinely by registered midwives using an electronic weighting scale within 2 hours of delivery. The LBW was defined as BW less than 2500 g [18], and the macrosomia was defined as BW greater than or equal to 4000 g [19]. The season of birth was divided into spring, summer, autumn and winter according to the birth time. According to the Chinese lunar calendar, the winter, spring, summer and fall begin on December 1st, March 1st, June 1st and September 1st, respectively. The maternal residence was divided into 3 groups (urban, suburb, countryside). Maternal ethnicity was divided into 2 groups (Han and other). The maternal age was analyzed as a continuous variable. BW (in grams) was measured routinely by registered midwives using an electronic weighting scale within 2 hours of delivery. The LBW was defined as BW less than 2500 g [18], and the macrosomia was defined as BW greater than or equal to 4000 g [19]. The season of birth was divided into spring, summer, autumn and winter according to the birth time. According to the Chinese lunar calendar, the winter, spring, summer and fall begin on December 1st, March 1st, June 1st and September 1st, respectively. The maternal residence was divided into 3 groups (urban, suburb, countryside). Maternal ethnicity was divided into 2 groups (Han and other). Statistical Analysis We described the distribution of each group with the number and proportion of births. The maternal age at birth obeyed normal distribution after testing and was expressed as means ± standard deviation (means ± SD). The women’s characteristics was analyzed as categorical variables to examine the association with BW by one-way ANOVA and risk of LBW and macrosomia by Chi-square Test. Generalized additive model (GAM) was conducted to see whether there were nonlinear relationships between maternal age and BW, LBW and macrosomia, respectively. And the relationships were further verified by adjusting the potential confounders found by the One-way ANOVA and Chi-square Test. 
Statistical Analysis: We described the distribution of each group with the number and proportion of births. Maternal age at delivery was normally distributed (confirmed by testing) and is expressed as mean ± standard deviation (mean ± SD). Maternal characteristics were analyzed as categorical variables to examine their association with BW (one-way ANOVA) and with the risk of LBW and macrosomia (chi-square test). A generalized additive model (GAM) was used to assess whether the relationships between maternal age and BW, LBW and macrosomia were nonlinear, and these relationships were further verified after adjusting for the potential confounders identified by the one-way ANOVA and chi-square tests. A two-piecewise linear regression model was used to examine the threshold effect of maternal age on BW and on the risk of LBW and macrosomia, with adjustment for potential confounders [20]. The turning points of maternal age were defined as the points at which the relationships between maternal age and BW, LBW and macrosomia started to change. P < 0.05 was considered statistically significant. All analyses were conducted with R (R Foundation; http://www.r-project.org; version 3.6.3) and Empower Stats (www.empowerstats.com; X&Y Solutions Inc).
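The paper does not state which R functions were used for the smoothing and threshold analyses (EmpowerStats wraps much of this), so the sketch below shows one common way to set them up in R: mgcv for the generalized additive model and the segmented package for the two-piecewise (broken-line) regression. The simulated data, variable names, simulated slopes, and starting breakpoint guesses are assumptions for illustration only, not the authors’ actual code.

```r
# One possible R setup for the smoothing and threshold analyses described
# above; packages, variable names and simulated data are illustrative
# assumptions, not the authors' actual code.
library(mgcv)       # generalized additive models
library(segmented)  # piecewise (broken-line) regression

set.seed(1)
n <- 5000
sim <- data.frame(
  maternal_age     = round(runif(n, 20, 40)),
  gestational_week = sample(37:41, n, replace = TRUE),
  season           = factor(sample(c("spring", "summer", "autumn", "winter"), n, TRUE)),
  residence        = factor(sample(c("urban", "suburb", "countryside"), n, TRUE))
)
# Simulated birth weight with kinks at 24 and 34 years (slopes loosely echo Table 2).
sim$birth_weight <- 2500 + 16 * pmin(sim$maternal_age, 24) +
  12 * pmax(pmin(sim$maternal_age, 34) - 24, 0) +
  100 * (sim$gestational_week - 37) + rnorm(n, 0, 400)
sim$lbw <- as.integer(sim$birth_weight < 2500)

# Nonlinear (smoothed) relationship between maternal age and birth weight,
# adjusted for gestational age, season of birth and maternal residence.
gam_bw <- gam(birth_weight ~ s(maternal_age) + factor(gestational_week) +
                season + residence, data = sim)
plot(gam_bw, select = 1, shade = TRUE)   # smoothed age effect with 95% CI

# Logistic GAM for the risk of low birth weight.
gam_lbw <- gam(lbw ~ s(maternal_age) + factor(gestational_week) +
                 season + residence, family = binomial, data = sim)

# Threshold analysis: piecewise linear model with two breakpoints in age.
lm_bw  <- lm(birth_weight ~ maternal_age + factor(gestational_week) +
               season + residence, data = sim)
seg_bw <- segmented(lm_bw, seg.Z = ~maternal_age, psi = c(25, 35))  # starting guesses
summary(seg_bw)   # estimated breakpoints and per-segment slopes
slope(seg_bw)     # slope (g per year of maternal age) in each segment
```

On real registry data, the same segmented() call can be applied to glm(..., family = binomial) fits to locate the turning points for LBW and macrosomia, since segmented also accepts glm objects.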
Results: Related factors of the mother-child pairs: A total of 490,143 mother-child pairs were included. The mean BW of the newborns was 3364.937 ± 420.528 g, and the mean maternal age was 28.728 ± 4.134 years. The incidence of LBW was 1.5%, and the incidence of macrosomia was 6.0%. The basic characteristics of the mother-child pairs are shown in Table 1. There were significant differences in BW and in the proportions of LBW and macrosomia among gestational age groups, season of birth groups and maternal residence groups (P < 0.05), but not among maternal ethnicity groups (P > 0.05).
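As a quick consistency check, the reported incidences follow directly from the counts in Table 1 below and the cohort size:

```latex
\[
\frac{7146}{490\,143} \approx 1.46\% \;(\text{LBW}), \qquad
\frac{29\,457}{490\,143} \approx 6.01\% \;(\text{macrosomia}),
\]
```

which match the rounded 1.5% and 6.0% quoted above.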
Table 1: Basic characteristics of the mother-child pairs (birth weight, n = 490,143; low birth weight, n = 7146; macrosomia, n = 29,457). Values are N (%) and mean ± SD (g); F for the BW comparison, χ2 for LBW and macrosomia.

Gestational age, weeks (F = 51543.00*; χ2 LBW = 15294.00*; χ2 macrosomia = 7197.40*)
  37–37+6: 36,138 (7.4%); BW 3020.404 ± 422.075 g; LBW 3125 (8.6%); macrosomia 498 (1.4%)
  38–38+6: 98,636 (20.1%); BW 3247.870 ± 396.350 g; LBW 1994 (2.0%); macrosomia 2946 (3.0%)
  39–39+6: 162,966 (33.2%); BW 3369.093 ± 394.169 g; LBW 1312 (0.8%); macrosomia 8442 (5.2%)
  40–40+6: 138,209 (28.2%); BW 3462.910 ± 400.804 g; LBW 590 (0.4%); macrosomia 11,373 (8.2%)
  41–41+6: 54,194 (11.1%); BW 3545.406 ± 395.768 g; LBW 125 (0.2%); macrosomia 6198 (11.4%)
Season of birth (F = 24.01*; χ2 = 13.97*; χ2 = 36.51*)
  Spring: 119,481 (24.4%); BW 3363.061 ± 422.333 g; LBW 1806 (1.5%); macrosomia 7137 (5.9%)
  Summer: 121,425 (24.8%); BW 3360.024 ± 18.322 g; LBW 1859 (1.5%); macrosomia 6932 (5.7%)
  Autumn: 128,548 (26.2%); BW 3367.011 ± 419.182 g; LBW 1773 (1.4%); macrosomia 7806 (6.1%)
  Winter: 120,689 (24.6%); BW 3369.533 ± 422.324 g; LBW 1708 (1.4%); macrosomia 7582 (6.3%)
Maternal residence (F = 243.76*; χ2 = 58.42*; χ2 = 40.42*)
  Urban: 245,250 (50.1%); BW 3372.876 ± 416.985 g; LBW 3370 (1.4%); macrosomia 15,193 (6.2%)
  Suburb: 128,008 (26.1%); BW 3363.916 ± 422.330 g; LBW 1800 (1.4%); macrosomia 7648 (6.0%)
  Countryside: 116,885 (23.8%); BW 3349.401 ± 425.485 g; LBW 1976 (1.7%); macrosomia 6616 (5.7%)
Ethnicity (F = 0.975; χ2 = 0.265; χ2 = 1.165)
  Han: 485,192 (99.0%); BW 3364.998 ± 420.453 g; LBW 7069 (1.5%); macrosomia 29,141 (6.0%)
  Other: 4951 (1.0%); BW 3359.067 ± 427.861 g; LBW 77 (1.6%); macrosomia 316 (6.4%)
*P < 0.05

The relationship between maternal age and birth weight, LBW and macrosomia: Both the unadjusted and adjusted smoothed plots suggest a nonlinear relationship between maternal age and BW (Fig. 2a, b). BW increased gradually until age 34, then decreased. Similarly, the adjusted smoothed plots suggest nonlinear relationships between maternal age and LBW and macrosomia (Fig. 2c, d). The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with increasing maternal age across the range of 20 to 40 years.
Fig. 2: Relationships between maternal age and BW, LBW and macrosomia, with 95% confidence intervals (dashed lines). a: unadjusted BW; b: adjusted BW; c: LBW; d: macrosomia. Panels b, c and d are adjusted for gestational age, season of birth and maternal residence.

Two turning points of maternal age for BW were found, at 24 and 34 years (Table 2). BW increased by 16.204 g per one-year increase in maternal age up to 24 years (β = 16.204, 95% CI: 14.323, 18.086) and by 12.051 g per year between 24 and 34 years (β = 12.051, 95% CI: 11.609, 12.493). There was no significant association between maternal age and BW beyond 34 years (β = -0.824, 95% CI: -3.112, 1.464).

Table 2: The effect of maternal age on BW, LBW and macrosomia in the two-piecewise linear regression model (adjusted β/OR with 95% CI; adjusted for gestational age, season of birth and maternal residence).

Birth weight (turning points: 24 y and 34 y)
  20 ≤ age < 24 y: β = 16.204 (14.323, 18.086), P < 0.001
  24 ≤ age ≤ 34 y: β = 12.051 (11.609, 12.493), P < 0.001
  34 < age ≤ 40 y: β = -0.824 (-3.112, 1.464), P = 0.480
Low birth weight (turning points: 27 y and 36 y)
  20 ≤ age < 27 y: OR = 0.917 (0.903, 0.932), P < 0.001
  27 ≤ age ≤ 36 y: OR = 0.965 (0.955, 0.976), P < 0.001
  36 < age ≤ 40 y: OR = 1.133 (1.026, 1.250), P = 0.013
Macrosomia (turning points: 24 y and 33 y)
  20 ≤ age < 24 y: OR = 1.102 (1.075, 1.129), P < 0.001
  24 ≤ age ≤ 33 y: OR = 1.065 (1.060, 1.071), P < 0.001
  33 < age ≤ 40 y: OR = 1.029 (1.012, 1.046), P = 0.001

Two turning points of maternal age for LBW were found, at 27 and 36 years (Table 2). The risk of LBW decreased by 8.3% per one-year increase in maternal age up to 27 years (odds ratio (OR) = 0.917, 95% CI: 0.903, 0.932) and by 3.5% per year between 27 and 36 years (OR = 0.965, 95% CI: 0.955, 0.976), and increased by 13.3% per year beyond 36 years (OR = 1.133, 95% CI: 1.026, 1.250). Similarly, two turning points of maternal age for macrosomia were found, at 24 and 33 years (Table 2). The risk of macrosomia increased by 10.2% per year up to 24 years (OR = 1.102, 95% CI: 1.075, 1.129), by 6.5% per year between 24 and 33 years (OR = 1.065, 95% CI: 1.060, 1.071), and by 2.9% per year beyond 33 years (OR = 1.029, 95% CI: 1.012, 1.046).
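Table 2 can be read as a piecewise-linear (broken-line) model in maternal age. The paper does not print the model equation, so the formulation below is a standard reconstruction for two breakpoints, shown here for BW; the same form with a logit link applies to LBW and macrosomia. Using the reported segment slopes, the expected adjusted BW gain from age 20 to the plateau at 34 is roughly 4 × 16.204 + 10 × 12.051 ≈ 185 g, holding the adjusted covariates fixed.

```latex
\[
\mathbb{E}[\mathrm{BW} \mid \mathrm{age}, Z] \;=\;
\beta_0 \;+\; \beta_1\,\mathrm{age}
\;+\; \beta_2\,(\mathrm{age}-24)_{+}
\;+\; \beta_3\,(\mathrm{age}-34)_{+}
\;+\; \gamma^{\top} Z,
\qquad (x)_{+} = \max(x, 0),
\]
```

where Z collects the adjusted covariates (gestational age, season of birth, maternal residence) and the per-segment slopes reported in Table 2 correspond to β1 = 16.204, β1 + β2 = 12.051 and β1 + β2 + β3 = -0.824 g per year.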
Discussion: Main results: Our research indicates the specific relationships between maternal age and BW, LBW and macrosomia. BW increased gradually until maternal age 34, then decreased. The risk of LBW decreased gradually until age 36, then increased. The risk of macrosomia increased with increasing maternal age. These findings show that the relationship between the risk of abnormal BW and maternal age does not follow the traditional 35-year boundary. Our findings provide the absolute risks of abnormal BW by maternal age, which may be useful for patient fertility counseling.

Interpretation: A nonlinear relationship between maternal age and BW was observed, with two turning points of maternal age at 24 and 34 years. BW increased faster from 20 to 23 years than from 24 to 34 years.
The curve was consistent with the findings of previous studies [12, 13]; however, no previous research identified 34 years as the threshold maternal age for BW [12, 21, 22]. We observed a marginal negative association between maternal age and BW when maternal age ranged from 35 to 40 years, consistent with previous findings [23, 24]. In previous studies, the population was usually divided into several maternal-age subgroups to investigate the effect of maternal age on BW, with women aged 20–29 years most often taken as the reference group. In contrast, in our study maternal age was analyzed as a continuous rather than a categorical variable, in order to identify a more accurate age threshold. The mechanism underlying the effect of maternal age on BW is still unclear. In this study, we found that the risks of LBW and macrosomia show the same pattern after 35 years, both increasing over this maternal-age range. Previous research suggests that most effects on offspring of intrauterine exposure to maternal age-related obstetric complications may be induced by epigenetic DNA reprogramming during critical periods of embryonic or fetal development [25]. Mitochondria are maternally derived, and mitochondrial DNA is not capable of DNA repair and is therefore at greater risk of acquiring mutations with age [12]. Similarly, the quality of a woman’s eggs declines markedly with increasing age, leading to an increased risk of pregnancy-related complications among older women [12]. Furthermore, older mothers are more likely to suffer from chronic diseases and their complications in pregnancy, including obesity, anemia and diabetes. In addition, the ageing process affects the reproductive system just as it affects other systems in the human body. These factors may explain why the risks of LBW and macrosomia show the same increasing pattern after 35 years. A nonlinear relationship between maternal age and the risk of term LBW was also found in our study, with a curve consistent with previous studies [13, 26]. Because LBW newborns may be premature with other risk factors [27], we restricted the study population to term newborns, so that the etiology of LBW was intrauterine growth restriction [18]. The prevalence of term LBW in our study was 1.5%, lower than the 2% reported by another study [28], suggesting a good perinatal care system in Xi’an, China. Although the prevalence of term LBW is low, it is not negligible, and term LBW can lead to adverse pregnancy outcomes such as severe neonatal asphyxia [29]. The threshold maternal age for the risk of term LBW was 36 years, meaning that the risk of term LBW increases significantly when maternal age exceeds 36 years; few studies have reported this [5, 6, 14]. However, the biological mechanisms by which maternal age causes term LBW remain unclear. The increased risk of LBW among younger women can be explained by low socioeconomic conditions and the increased nutritional demands of pregnancy [30]. The risk of macrosomia increased with maternal age, with two turning points at 24 and 33 years; the curve was consistent with previous studies [31]. The incidence of macrosomia was 6.0% in our study, similar to a previous study [14]. The risk of macrosomia increased faster from 20 to 23 years than from 24 to 33 years, mirroring the pattern observed for BW. Term macrosomia is influenced by maternal hyperglycemia and endocrine status through the placental circulation [32], and the increased risk of term macrosomia is partly explained by the increased prevalence of diabetes with increasing maternal age [13].
Strengths and limitations: Our research has several strengths: first, it was a large-sample study based on four years of records; in addition, it applied strict inclusion and exclusion criteria and rigorous data cleaning, which increases the credibility of the results. However, there were some potential limitations. First, some potential confounders, such as fetal sex, parity, medical history, economic condition and paternal age, were not adjusted for because of limited data. Although the cutoff for “paternal advanced age” is not clearly defined, genetic risk increases as men age. Previous studies showed that economic condition and fetal sex were associated with higher BW, but adjustment for socioeconomic factors and fetal sex made little difference to the results [22]. In addition, at any maternal age, higher parity was associated with higher BW [22]. Therefore, extrapolation of the results requires caution, especially to other ethnicities [21]; nevertheless, the results still provide a useful reference for other regions. Second, mother-child pairs were not included when maternal age was younger than 20 or older than 40 years, because of their small proportion and limited data; women delivering before 20 or after 40 years of age have a higher incidence of pregnancy complications. Therefore, our research can only reflect the changing trends of BW, LBW and macrosomia when maternal age ranges from 20 to 40 years. To ensure the accuracy of the results, we limited the subjects to full-term singleton live births, which eliminates some potential influencing factors [27], such as serious pregnancy complications. Although the large sample size might increase potential confounding, it allowed us to estimate the relationships between maternal age and BW, LBW and macrosomia in detail with the generalized additive model and the two-piecewise linear regression model.
Conclusions: Our research indicates the specific relationships between maternal age and BW, LBW and macrosomia when maternal age ranges from 20 to 40 years. BW increased gradually until 34 years of age, then decreased. The threshold maternal age for LBW was 36 years, and the risk of macrosomia increased with increasing maternal age. These results should be carefully taken into account by maternal care providers in order to inform women adequately, to support them in understanding the potential risks of abnormal BW associated with their age, and to emphasize the importance of prenatal care. However, the optimal maternal age should be determined individually according to different pregnancy complications and adverse neonatal outcomes. The mechanism linking maternal age and BW is not clear; therefore, further studies are needed to examine the relationship between maternal age and BW.
Background: Most studies have shown that maternal age is associated with birth weight. However, the specific relationship between each additional year of maternal age and birth weight remains unclear. The study aimed to analyze the specific association between maternal age and birth weight. Methods: Raw data for all live births from 2015 to 2018 were obtained from the Medical Birth Registry of Xi'an, China. A total of 490,143 mother-child pairs with full-term singleton live births and the maternal age ranging from 20 to 40 years old were included in our study. Birth weight, gestational age, neonatal birth date, maternal birth date, residence and ethnicity were collected. Generalized additive model and two-piece wise linear regression model were used to analyze the specific relationships between maternal age and birth weight, risk of low birth weight, and risk of macrosomia. Results: The relationships between maternal age and birth weight, risk of low birth weight, and risk of macrosomia were nonlinear. Birth weight increased 16.204 g per year when maternal age was less than 24 years old (95%CI: 14.323, 18.086), and increased 12.051 g per year when maternal age ranged from 24 to 34 years old (95%CI: 11.609, 12.493), then decreased 0.824 g per year (95% CI: -3.112, 1.464). The risk of low birth weight decreased with the increase of maternal age until 36 years old (OR = 0.917, 95%CI: 0.903, 0.932 when maternal age was younger than 27 years old; OR = 0.965, 95%CI: 0.955, 0.976 when maternal age ranged from 27 to 36 years old), then increased when maternal age was older than 36 years old (OR = 1.133, 95%CI: 1.026, 1.250). The risk of macrosomia increased with the increase of maternal age (OR = 1.102, 95%CI: 1.075, 1.129 when maternal age was younger than 24 years old; OR = 1.065, 95%CI: 1.060, 1.071 when maternal age ranged from 24 to 33 years old; OR = 1.029, 95%CI: 1.012, 1.046 when maternal age was older than 33 years old). Conclusions: For women of childbearing age (20-40 years old), the threshold of maternal age on low birth weight was 36 years old, and the risk of macrosomia increased with the increase of maternal age.
Background: Birth weight (BW) is the most important index reflecting the intrauterine growth and development of newborns, and a vital index of newborn health status [1]. Abnormal BW, including low birth weight (LBW, BW < 2500 g) and macrosomia (BW ≥ 4000 g), significantly increases the risk of perinatal mortality and morbidity and has, in recent years, been shown to be a marker of age-related disease risk [2, 3]. As the most common adverse birth outcome, abnormal BW has a generally high incidence worldwide. The incidence of LBW is estimated at about 5–7% in developed countries and as high as 19% in developing countries [4]; in mainland China, the specific setting of this paper, it was 6.1% [5]. The incidence of macrosomia has also increased over the past two to three decades in both developed and developing countries, with a recent prevalence of 9.2% in the United States and 7.3% in China [6, 7]. Considering the large number of newborns in China, LBW and macrosomia are likely to remain a major public health issue there over the next few years [3]. With social development and changing attitudes toward childbearing (assisted reproductive technology, increased education and economic activity among women, reduced marriage rates, and more double-income-no-kids households), a global trend toward delayed childbirth has been reported, for example in the United States and Korea [8, 9]. Similarly, in China the mean maternal age and the proportion of pregnant women of advanced age (≥ 35 years) are increasing [10]. In 2011, a survey of 14 provinces in China showed that the average delivery age was 28 ± 5 years, the proportion of mothers older than 35 years was 7.8%, and the proportion younger than 20 years was 1.4% [6, 11]. The high proportion of pregnant women older than 35 years is accompanied by an increased risk of miscarriage and chromosomal aneuploidy [12]. Previous studies on the relationship between maternal age and BW have been inconsistent [12–16]. Most previous studies examined the incidence of LBW and macrosomia across fixed maternal age groups [13, 16]. This classification, with 35 years as the boundary, cannot accurately reflect the trend of BW with age, as it may underestimate the age-related risk in younger age groups and overestimate it in older age groups [17]. There is insufficient evidence to assess the appropriateness of this traditional, artificial age classification for evaluating impacts on BW. Furthermore, with the recent increase in delayed childbearing, advances in medical technology, and social and economic development, the traditional age thresholds may no longer be suitable. Especially for China, a country with a large birth population undergoing major social change, this research is helpful for clinical decision-making and public health policy. The objective of this study was to estimate the changes of BW across 1-year intervals of maternal age, treating age as a continuous variable with a flexible, generalized additive approach.
10,786
476
[ 647, 1725, 475, 146, 230, 312, 832, 110, 800, 375 ]
13
[ "age", "maternal", "maternal age", "bw", "years", "lbw", "macrosomia", "old", "years old", "risk" ]
[ "incidence macrosomia increased", "macrosomia maternal age", "risk lbw macrosomia", "lbw macrosomia maternal", "countries prevalence macrosomia" ]
null
[CONTENT] Maternal age | Birth weight | Low birth weight | Macrosomia [SUMMARY]
null
[CONTENT] Maternal age | Birth weight | Low birth weight | Macrosomia [SUMMARY]
[CONTENT] Maternal age | Birth weight | Low birth weight | Macrosomia [SUMMARY]
[CONTENT] Maternal age | Birth weight | Low birth weight | Macrosomia [SUMMARY]
[CONTENT] Maternal age | Birth weight | Low birth weight | Macrosomia [SUMMARY]
[CONTENT] Adult | Birth Weight | China | Cross-Sectional Studies | Female | Fetal Macrosomia | Gestational Age | Humans | Infant, Low Birth Weight | Infant, Newborn | Linear Models | Live Birth | Maternal Age | Pregnancy | Registries [SUMMARY]
null
[CONTENT] Adult | Birth Weight | China | Cross-Sectional Studies | Female | Fetal Macrosomia | Gestational Age | Humans | Infant, Low Birth Weight | Infant, Newborn | Linear Models | Live Birth | Maternal Age | Pregnancy | Registries [SUMMARY]
[CONTENT] Adult | Birth Weight | China | Cross-Sectional Studies | Female | Fetal Macrosomia | Gestational Age | Humans | Infant, Low Birth Weight | Infant, Newborn | Linear Models | Live Birth | Maternal Age | Pregnancy | Registries [SUMMARY]
[CONTENT] Adult | Birth Weight | China | Cross-Sectional Studies | Female | Fetal Macrosomia | Gestational Age | Humans | Infant, Low Birth Weight | Infant, Newborn | Linear Models | Live Birth | Maternal Age | Pregnancy | Registries [SUMMARY]
[CONTENT] Adult | Birth Weight | China | Cross-Sectional Studies | Female | Fetal Macrosomia | Gestational Age | Humans | Infant, Low Birth Weight | Infant, Newborn | Linear Models | Live Birth | Maternal Age | Pregnancy | Registries [SUMMARY]
[CONTENT] incidence macrosomia increased | macrosomia maternal age | risk lbw macrosomia | lbw macrosomia maternal | countries prevalence macrosomia [SUMMARY]
null
[CONTENT] incidence macrosomia increased | macrosomia maternal age | risk lbw macrosomia | lbw macrosomia maternal | countries prevalence macrosomia [SUMMARY]
[CONTENT] incidence macrosomia increased | macrosomia maternal age | risk lbw macrosomia | lbw macrosomia maternal | countries prevalence macrosomia [SUMMARY]
[CONTENT] incidence macrosomia increased | macrosomia maternal age | risk lbw macrosomia | lbw macrosomia maternal | countries prevalence macrosomia [SUMMARY]
[CONTENT] incidence macrosomia increased | macrosomia maternal age | risk lbw macrosomia | lbw macrosomia maternal | countries prevalence macrosomia [SUMMARY]
[CONTENT] age | maternal | maternal age | bw | years | lbw | macrosomia | old | years old | risk [SUMMARY]
null
[CONTENT] age | maternal | maternal age | bw | years | lbw | macrosomia | old | years old | risk [SUMMARY]
[CONTENT] age | maternal | maternal age | bw | years | lbw | macrosomia | old | years old | risk [SUMMARY]
[CONTENT] age | maternal | maternal age | bw | years | lbw | macrosomia | old | years old | risk [SUMMARY]
[CONTENT] age | maternal | maternal age | bw | years | lbw | macrosomia | old | years old | risk [SUMMARY]
[CONTENT] age | china | years | development | bw | classification | countries | high | 35 | women [SUMMARY]
null
[CONTENT] age | maternal | 95 | maternal age | 95 ci | ci | year increase maternal | year increase | year increase maternal age | 24 [SUMMARY]
[CONTENT] age | maternal | maternal age | bw | care | years old | years | old | change | bw associated [SUMMARY]
[CONTENT] age | maternal | maternal age | bw | years | lbw | macrosomia | risk | increased | study [SUMMARY]
[CONTENT] age | maternal | maternal age | bw | years | lbw | macrosomia | risk | increased | study [SUMMARY]
[CONTENT] ||| each additional year ||| [SUMMARY]
null
[CONTENT] ||| 16.204 | 14.323 | 18.086 | 12.051 | 24 | 34 years old | 11.609 | 12.493 | 0.824 | 95% | CI | 1.464 ||| 36 years old | 0.917 | 0.903 | 0.932 | 27 years old | 0.965 | 95%CI | 0.955 | 0.976 | 27 | 36 years old | 1.133 | 1.026 | 1.250 ||| 1.102 | 1.075 | 1.129 | 24 years old | 1.065 | 1.060 | 1.071 | 24 | 33 years old | 1.029 | 95%CI | 1.012 | 1.046 | 33 years old [SUMMARY]
[CONTENT] 20-40 years old | 36 years old [SUMMARY]
[CONTENT] ||| each additional year ||| ||| 2015 | 2018 | the Medical Birth Registry of Xi'an | China ||| 490,143 | 20 | years ||| ||| two ||| ||| ||| 16.204 | 14.323 | 18.086 | 12.051 | 24 | 34 years old | 11.609 | 12.493 | 0.824 | 95% | CI | 1.464 ||| 36 years old | 0.917 | 0.903 | 0.932 | 27 years old | 0.965 | 95%CI | 0.955 | 0.976 | 27 | 36 years old | 1.133 | 1.026 | 1.250 ||| 1.102 | 1.075 | 1.129 | 24 years old | 1.065 | 1.060 | 1.071 | 24 | 33 years old | 1.029 | 95%CI | 1.012 | 1.046 | 33 years old ||| 20-40 years old | 36 years old [SUMMARY]
[CONTENT] ||| each additional year ||| ||| 2015 | 2018 | the Medical Birth Registry of Xi'an | China ||| 490,143 | 20 | years ||| ||| two ||| ||| ||| 16.204 | 14.323 | 18.086 | 12.051 | 24 | 34 years old | 11.609 | 12.493 | 0.824 | 95% | CI | 1.464 ||| 36 years old | 0.917 | 0.903 | 0.932 | 27 years old | 0.965 | 95%CI | 0.955 | 0.976 | 27 | 36 years old | 1.133 | 1.026 | 1.250 ||| 1.102 | 1.075 | 1.129 | 24 years old | 1.065 | 1.060 | 1.071 | 24 | 33 years old | 1.029 | 95%CI | 1.012 | 1.046 | 33 years old ||| 20-40 years old | 36 years old [SUMMARY]
Incidence and Impact of Refeeding Syndrome in an Internal Medicine and Gastroenterology Ward of an Italian Tertiary Referral Center: A Prospective Cohort Study.
35405956
Refeeding syndrome (RS) is a neglected, potentially fatal condition that occurs in malnourished patients undergoing rapid nutritional replenishment after a period of fasting. The American Society for Parenteral and Enteral Nutrition (ASPEN) recently released new criteria for RS risk and diagnosis. Real-life data on its incidence are still limited.
BACKGROUND
We consecutively enrolled patients admitted to the Internal Medicine and Gastroenterology Unit of our center. The prevalence of RS risk and the incidence of RS were evaluated according to the ASPEN criteria. The length of stay (LOS), mortality, and re-admission rate within 30 days were assessed.
METHODS
Among 203 admitted patients, 98 (48.3%) were at risk of RS; RS occurred in 38 patients (18.7% of the entire cohort). Patients diagnosed with RS had a higher mean LOS (12.5 ± 7.9 days) than those who were not diagnosed with RS (7.1 ± 4.2 days) (p < 0.0001). Nine patients (4.4%) died. Body mass index (OR 0.82; 95% CI 0.69-0.97), RS diagnosis (OR 10.1; 95% CI 2.4-42.6), and medical nutritional support within 48 h (OR 0.12; 95% CI 0.02-0.56) were associated with mortality.
RESULTS
RS incidence is high in clinical wards and influences clinical outcomes. Awareness among clinicians is necessary to identify patients at risk and to support those who develop this syndrome.
CONCLUSIONS
[ "Cohort Studies", "Gastroenterology", "Humans", "Incidence", "Length of Stay", "Malnutrition", "Prospective Studies", "Refeeding Syndrome", "Tertiary Care Centers" ]
9002385
1. Introduction
Refeeding syndrome is a potentially fatal complication that occurs in malnourished patients when an excessive amount of nutrients is delivered too rapidly after a prolonged state of fasting, as often happens in hospital settings [1]. It is defined as severe electrolyte and fluid shifts, associated with metabolic abnormalities, in malnourished patients undergoing refeeding, whether orally, enterally, or parenterally [2]. Up-to-date data about the incidence of RS remain heterogeneous. A recent systematic review of thirty-five observational studies found a wide range of RS incidence, varying from 0 to 62% among studies [3]. The risk of developing RS is high after prolonged fasting, which leads to a reduction in insulin release and an increase in glucagon secretion. Metabolic inversion from the use of glucose as a source of energy to the use of structural proteins and lipid deposits occurs, with a reduction in intracellular energy cofactors (vitamins and minerals), in particular phosphorus, potassium, and magnesium. During refeeding, the presence of nutrients, in particular carbohydrates, stimulates glycolysis, glycogen synthesis, and the synthesis of lipids and proteins, with increased intracellular retention of water and sodium. This anabolic process requires the presence of cofactors, such as thiamine (vitamin B1) and minerals. The result is a rapid reduction in these cofactors at the intravascular level, reduced elimination of sodium and water in the urine, water retention, and a risk of cardiometabolic, respiratory, and neurological decompensation, eventually leading to death [3,4]. Such a condition often occurs during hospitalization, sometimes causing death if not diagnosed and treated promptly [5]. Recently, the American Society for Parenteral and Enteral Nutrition (ASPEN) convened an inter-professional task force to develop a consensus to identify patients at high risk of developing RS and to prevent and treat it. This consensus, published in April 2020 [6], defined RS as a metabolic and electrolytic alteration that occurs after the reintroduction of nutrition. This prospective cohort study aimed to assess the prevalence of RS risk and the incidence of RS according to ASPEN criteria, and its impact on length of stay (LOS), mortality, and re-admissions within 30 days in patients admitted to the Internal Medicine and Gastroenterology Unit.
null
null
3. Results
3.1. Baseline Patients’ Characteristics Two hundred and three patients were enrolled during 11 months of observation. The clinical and demographical data are presented in Table 1. The mean age was 66.1 ± 14.1 years; there were 127 male patients (62.6%) and 68.5% of the patients (n = 139) were admitted from the emergency department (ED). The mean Charlson’s Comorbidity Index (CCI) was 3.0 ± 2.4.
3.1.1. Nutritional Evaluation The mean body weight was 71.8 ± 16.3 kg and the mean height was 169.0 ± 8.6 cm, deriving a body mass index (BMI) of 25.0 ± 4.9 kg/m2. Upon admission, according to nutritional risk screening (NRS-2002), 70 patients (34.5%) were at risk of malnutrition, while, according to the Malnutrition Universal Screening Tool (MUST), 99 patients (48.7%) had the same risk. Twenty-four patients (11.8%) underwent specialist nutritional evaluation during their hospital stay, and 74 (36.5%) were treated with medical nutrition within 48 h from admission, in particular, 63 (64.3%) with oral nutritional supplementation (ONS) and 13 (13.3%) with parenteral nutrition (PN).
3.1.2. Refeeding Syndrome The risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1). Table 2 summarizes the data associated with RS development. In particular, blood potassium at 48 h from admission was lower in RS patients (3.9 mmol/L vs. 3.5; OR 0.21; p = 0.004), in addition to phosphorus at 48 h (3.3 mg/dL vs. 2.8; OR 0.24; p = 0.002) and phosphorus at 5 days (3.4 mg/dL vs. 2.5; OR 0.04; p = 0.002).
3.1.3. Length of Hospital Stay The mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days, whereas patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2). The principal factors associated with a high LOS are summarized in Table 3. In particular, ER admission (OR 0.38; 95% CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were associated with a longer LOS.
3.1.4. In-Hospital Mortality and Hospital Readmission Nine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were associated with mortality, although the direction of these associations differed (Table 4). Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files Table S1).
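The odds ratios and 95% confidence intervals reported above come from univariate logistic models, but for a binary exposure such as RS they can equivalently be obtained from a 2×2 table with the standard Woolf (log) method. The sketch below illustrates that calculation in Python; the cell counts are an assumption, reconstructed only to be consistent with the totals reported in this section (203 patients, 38 with RS, 9 deaths), since the underlying table is not published here.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table (Woolf log method).

    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical 2x2 table for mortality by RS status, chosen only to be
# consistent with the totals above (38 RS patients, 165 without RS, 9 deaths);
# the study does not report these cell counts.
or_, lo, hi = odds_ratio_ci(a=6, b=32, c=3, d=162)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

If the reconstructed counts differ from the real ones, the point estimate and interval will differ accordingly; the function itself is generic.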
5. Conclusions
According to ASPEN criteria, our work showed a prevalence of RS risk of 48.3% and an incidence of RS diagnosis of 18.7% in a cohort of 203 patients consecutively admitted to a Department of Internal Medicine and Gastroenterology. RS was associated with higher mortality and a longer LOS. Such evidence should raise awareness among clinicians so that patients at risk of RS are identified early and specific nutritional support is promptly provided to those who develop this syndrome.
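The prevalence and incidence quoted in these conclusions are simple cumulative proportions (98/203 and 38/203). The study reports them without confidence intervals; the sketch below shows one way such proportions could be given a 95% Wilson score interval. The interval values are computed here for illustration only and are not taken from the paper.

```python
import math

def wilson_ci(events, n, z=1.96):
    """Cumulative incidence (proportion) with a 95% Wilson score interval."""
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return p, centre - half, centre + half

# Counts reported in the study: 98/203 patients at risk of RS, 38/203 with RS.
for label, events in (("RS risk", 98), ("RS diagnosis", 38)):
    p, lo, hi = wilson_ci(events, 203)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```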
[ "2. Materials and Methods", "2.1. Study Design and Ethical Committee Approval", "2.3. Protocol Description", "2.3.1. Protocol Algorithm", "2.3.2. Determination of RS Risk", "2.3.3. Diagnosis of RS", "2.4. Outcomes Measures", "2.5. Sample Size Calculation", "2.6. Data Collection and Statistical Analysis", "3.1. Baseline Patients’ Characteristics", "3.1.1. Nutritional Evaluation", "3.1.2. Refeeding Syndrome", "3.1.3. Length of Hospital Stay", "3.1.4. In-Hospital Mortality and Hospital Readmission" ]
[ "2.1. Study Design and Ethical Committee Approval This was a single-center observational prospective cohort study.\nThe study was conducted based on the Declaration of Helsinki and according to Good Clinical Practice guidelines. The study was approved by the Ethical Committee of Fondazione Policlinico A. Gemelli IRCCS—Catholic University of the Sacred Heart (Protocol code 2638/22). All participants signed a consent form recording their agreement to take part in the study and to have the results published. This study was reported according to the STROBE guidelines for cohort studies [7].\nThis was a single-center observational prospective cohort study.\nThe study was conducted based on the Declaration of Helsinki and according to Good Clinical Practice guidelines. The study was approved by the Ethical Committee of Fondazione Policlinico A. Gemelli IRCCS—Catholic University of the Sacred Heart (Protocol code 2638/22). All participants signed a consent form recording their agreement to take part in the study and to have the results published. This study was reported according to the STROBE guidelines for cohort studies [7].\n2.2. Patients All adult (>18 years old) patients admitted to the Internal Medicine and Gastroenterology Unit at the Fondazione Policlinico Agostino Gemelli IRCCS, Rome, Italy, from March 2021 to January 2022, were prospectively evaluated.\nThe exclusion criteria were patients already undergoing artificial nutrition during a possible stay in the emergency room, the presence of hypophosphatemia on the day of admission, or refusal to participate in the study.\nAll adult (>18 years old) patients admitted to the Internal Medicine and Gastroenterology Unit at the Fondazione Policlinico Agostino Gemelli IRCCS, Rome, Italy, from March 2021 to January 2022, were prospectively evaluated.\nThe exclusion criteria were patients already undergoing artificial nutrition during a possible stay in the emergency room, the presence of hypophosphatemia on the day of admission, or refusal to participate in the study.\n2.3. Protocol Description 2.3.1. Protocol Algorithm Patients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.\nPatients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.\n2.3.2. 
Determination of RS Risk The evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following risk criteria:BMI was between 16 and 18.5 kg/m2;A weight loss of 5% of habitual weight was reported;There was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;There were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;There was evidence of moderate subcutaneous fat loss;There was evidence of mild or moderate muscle loss;In presence of higher-risk comorbidities (moderate disease).\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nIn presence of higher-risk comorbidities (moderate disease).\nPatients were considered at “significant risk” if they met one of the following risk criteria:BMI was <16 kg/m2;A weight loss of 7.5% in 3 months or >10% in 6 months was reported;There was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;There were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;There was evidence of severe subcutaneous fat loss;There was evidence of severe muscle loss;In presence of higher-risk comorbidities (severe disease).\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nIn presence of higher-risk comorbidities (severe disease).\nThe evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following risk criteria:BMI was between 16 and 18.5 kg/m2;A weight loss of 5% of habitual weight was reported;There was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;There were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;There was evidence of moderate subcutaneous fat loss;There was evidence of mild or moderate muscle loss;In presence of higher-risk 
comorbidities (moderate disease).\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nIn presence of higher-risk comorbidities (moderate disease).\nPatients were considered at “significant risk” if they met one of the following risk criteria:BMI was <16 kg/m2;A weight loss of 7.5% in 3 months or >10% in 6 months was reported;There was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;There were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;There was evidence of severe subcutaneous fat loss;There was evidence of severe muscle loss;In presence of higher-risk comorbidities (severe disease).\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nIn presence of higher-risk comorbidities (severe disease).\n2.3.3. 
Diagnosis of RS The diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:A decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).The decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nThe diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:A decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).The decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\n2.3.1. Protocol Algorithm Patients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.\nPatients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.\n2.3.2. 
Determination of RS Risk The evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following risk criteria:BMI was between 16 and 18.5 kg/m2;A weight loss of 5% of habitual weight was reported;There was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;There were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;There was evidence of moderate subcutaneous fat loss;There was evidence of mild or moderate muscle loss;In presence of higher-risk comorbidities (moderate disease).\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nIn presence of higher-risk comorbidities (moderate disease).\nPatients were considered at “significant risk” if they met one of the following risk criteria:BMI was <16 kg/m2;A weight loss of 7.5% in 3 months or >10% in 6 months was reported;There was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;There were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;There was evidence of severe subcutaneous fat loss;There was evidence of severe muscle loss;In presence of higher-risk comorbidities (severe disease).\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nIn presence of higher-risk comorbidities (severe disease).\nThe evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following risk criteria:BMI was between 16 and 18.5 kg/m2;A weight loss of 5% of habitual weight was reported;There was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;There were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;There was evidence of moderate subcutaneous fat loss;There was evidence of mild or moderate muscle loss;In presence of higher-risk 
comorbidities (moderate disease).\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nIn presence of higher-risk comorbidities (moderate disease).\nPatients were considered at “significant risk” if they met one of the following risk criteria:BMI was <16 kg/m2;A weight loss of 7.5% in 3 months or >10% in 6 months was reported;There was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;There were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;There was evidence of severe subcutaneous fat loss;There was evidence of severe muscle loss;In presence of higher-risk comorbidities (severe disease).\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nIn presence of higher-risk comorbidities (severe disease).\n2.3.3. 
Diagnosis of RS The diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:A decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).The decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nThe diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:A decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).The decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\n2.4. Outcomes Measures The primary outcome was the incidence of RS risk according to ASPEN 2020 criteria. The secondary outcomes were the diagnosis of RS after the initiation of nutritional support and its impact on mortality, LOS, and hospital readmission within 30 days.\nThe primary outcome was the incidence of RS risk according to ASPEN 2020 criteria. The secondary outcomes were the diagnosis of RS after the initiation of nutritional support and its impact on mortality, LOS, and hospital readmission within 30 days.\n2.5. Sample Size Calculation According to a recent systematic review, the incidence of RS widely varied from 0% to 62% across the studies [3]. However, a previous study identified an incidence of RS risk up to 54% and an RS diagnosis rate of 8% of patients admitted to an internal medicine department [8]. With a margin of error of 6% and a confidence interval of 95%, between 79 and 158 patients should have been enrolled to intercept the above-mentioned incidences (percentages). Considering a dropout rate of 10%, our sample size was set at 176 patients.\nAccording to a recent systematic review, the incidence of RS widely varied from 0% to 62% across the studies [3]. However, a previous study identified an incidence of RS risk up to 54% and an RS diagnosis rate of 8% of patients admitted to an internal medicine department [8]. With a margin of error of 6% and a confidence interval of 95%, between 79 and 158 patients should have been enrolled to intercept the above-mentioned incidences (percentages). Considering a dropout rate of 10%, our sample size was set at 176 patients.\n2.6. Data Collection and Statistical Analysis Data were collected using a specific Excel© spreadsheet. Data are shown using descriptive statistical methods. The following measures were used as quantitative variables: minimum, maximum, range, mean and standard deviation. The qualitative variables were summarized in tables of absolute and percentage frequencies. 
The possible normality of continuous distributions was examined by applying the Kolmogorov–Smirnov test.\nThe primary objective was reached by calculating the cumulative incidence of risk of developing RS in the enrolled patients.\nThe secondary objectives were reached through the cumulative incidence of overt RS development in the enrolled at-risk patients, and the development of a Cox regression model for the evaluation of mean hospital stay, with log-rank analysis to highlight differences between the groups that develop RS or not. To analyze other secondary outcomes, univariate logistic models were created, obtaining odds ratios (OR).\nA p < 0.05 was set as statistically significant. All statistical analyses were carried out with STATA (version 13, Stata Corporation; College Station, TX, USA).", "This was a single-center observational prospective cohort study.\nThe study was conducted based on the Declaration of Helsinki and according to Good Clinical Practice guidelines. The study was approved by the Ethical Committee of Fondazione Policlinico A. Gemelli IRCCS—Catholic University of the Sacred Heart (Protocol code 2638/22). All participants signed a consent form recording their agreement to take part in the study and to have the results published. This study was reported according to the STROBE guidelines for cohort studies [7].", "2.3.1. Protocol Algorithm Patients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.\n2.3.2. 
Determination of RS Risk The evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following risk criteria:BMI was between 16 and 18.5 kg/m2;A weight loss of 5% of habitual weight was reported;There was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;There were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;There was evidence of moderate subcutaneous fat loss;There was evidence of mild or moderate muscle loss;In presence of higher-risk comorbidities (moderate disease).\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nIn presence of higher-risk comorbidities (moderate disease).\nPatients were considered at “significant risk” if they met one of the following risk criteria:BMI was <16 kg/m2;A weight loss of 7.5% in 3 months or >10% in 6 months was reported;There was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;There were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;There was evidence of severe subcutaneous fat loss;There was evidence of severe muscle loss;In presence of higher-risk comorbidities (severe disease).\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nIn presence of higher-risk comorbidities (severe disease).\nThe evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following risk criteria:BMI was between 16 and 18.5 kg/m2;A weight loss of 5% of habitual weight was reported;There was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;There were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;There was evidence of moderate subcutaneous fat loss;There was evidence of mild or moderate muscle loss;In presence of higher-risk 
comorbidities (moderate disease).\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nIn presence of higher-risk comorbidities (moderate disease).\nPatients were considered at “significant risk” if they met one of the following risk criteria:BMI was <16 kg/m2;A weight loss of 7.5% in 3 months or >10% in 6 months was reported;There was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;There were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;There was evidence of severe subcutaneous fat loss;There was evidence of severe muscle loss;In presence of higher-risk comorbidities (severe disease).\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nIn presence of higher-risk comorbidities (severe disease).\n2.3.3. 
Diagnosis of RS The diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:A decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).The decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nThe diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:A decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).The decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.", "Patients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) 
to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.", "The evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following risk criteria:BMI was between 16 and 18.5 kg/m2;A weight loss of 5% of habitual weight was reported;There was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;There were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;There was evidence of moderate subcutaneous fat loss;There was evidence of mild or moderate muscle loss;In presence of higher-risk comorbidities (moderate disease).\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR < 75% of estimated energy requirement for >7 days during an acute illness or injury OR < 75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, magnesium, or normal current levels and recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nIn presence of higher-risk comorbidities (moderate disease).\nPatients were considered at “significant risk” if they met one of the following risk criteria:BMI was <16 kg/m2;A weight loss of 7.5% in 3 months or >10% in 6 months was reported;There was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;There were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;There was evidence of severe subcutaneous fat loss;There was evidence of severe muscle loss;In presence of higher-risk comorbidities (severe disease).\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR < 50% of estimated energy requirement for >5 days during an acute illness or injury OR < 50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, magnesium, or minimally low or normal levels and recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nIn presence of higher-risk comorbidities (severe disease).", "The diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:A decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS).The decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any 
of these and/or due to thiamin deficiency (severe RS).\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.", "The primary outcome was the incidence of RS risk according to ASPEN 2020 criteria. The secondary outcomes were the diagnosis of RS after the initiation of nutritional support and its impact on mortality, LOS, and hospital readmission within 30 days.", "According to a recent systematic review, the incidence of RS widely varied from 0% to 62% across the studies [3]. However, a previous study identified an incidence of RS risk up to 54% and an RS diagnosis rate of 8% of patients admitted to an internal medicine department [8]. With a margin of error of 6% and a confidence interval of 95%, between 79 and 158 patients should have been enrolled to intercept the above-mentioned incidences (percentages). Considering a dropout rate of 10%, our sample size was set at 176 patients.", "Data were collected using a specific Excel© spreadsheet. Data are shown using descriptive statistical methods. The following measures were used as quantitative variables: minimum, maximum, range, mean and standard deviation. The qualitative variables were summarized in tables of absolute and percentage frequencies. The possible normality of continuous distributions wasexamined by applying the Kolmogorov–Smirnov test.\nThe primary objective was reached by calculating the cumulative incidence of risk of developing RS in the enrolled patients.\nThe secondary objectives were reached through the cumulative incidence of overt RS development in the enrolled at-risk patients, and the development of a Cox regression model for the evaluation of mean hospital stay, with log-rank analysis to highlight differences between the groups that develop RS or not. To analyze other secondary outcomes, univariate logistic models were created, obtaining odds ratios (OR).\nA p < 0.05 was set as statistically significant. All statistical analyses were carried out with STATA (version 13, Stata Corporation; College Station, TX, USA).", "Two hundred and three patients were enrolled during 11 months of observation. The clinical and demographical data are presented in Table 1. The mean age was 66.1 ± 14.1 years; there were 127 male patients (62.6%) and 68.5% of the patients (n = 139) were admitted from the emergency department (ED). The mean Charlson’s Comorbidity Index (CCI) was 3.0 ± 2.4.\n3.1.1. Nutritional Evaluation The mean body weight was 71.8 ± 16.3 kg and the mean height was 169.0 ± 8.6 cm, deriving a body mass index (BMI) of 25.0 ± 4.9 kg/m2. Upon admission, according to nutritional risk screening (NRS-2002), 70 patients (34.5%) were at risk of malnutrition, while, according to the Malnutrition Universal Screening Tool (MUST), 99 patients (48.7%) had the same risk. Twenty-four patients (11.8%) underwent specialist nutritional evaluation during their hospital stay, and 74 (36.5%) were treated with medical nutrition within 48 h from admission, in particular, 63 (64.3%) with oral nutritional supplementation (ONS) and 13 (13.3%) with parenteral nutrition (PN).\nThe mean body weight was 71.8 ± 16.3 kg and the mean height was 169.0 ± 8.6 cm, deriving a body mass index (BMI) of 25.0 ± 4.9 kg/m2. Upon admission, according to nutritional risk screening (NRS-2002), 70 patients (34.5%) were at risk of malnutrition, while, according to the Malnutrition Universal Screening Tool (MUST), 99 patients (48.7%) had the same risk. 
Twenty-four patients (11.8%) underwent specialist nutritional evaluation during their hospital stay, and 74 (36.5%) were treated with medical nutrition within 48 h from admission, in particular, 63 (64.3%) with oral nutritional supplementation (ONS) and 13 (13.3%) with parenteral nutrition (PN).\n3.1.2. Refeeding Syndrome The risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1).\nTable 2 resumes several data associated with RS development. In particular, blood potassium at 48 h from admission was lower in RS patients (3.9 mmol/L vs. 3.5; OR 0.21; p = 0.004), in addition to phosphorus at 48 h (3.3 mg/dL vs. 2.8; OR 0.24; p = 0.002) and phosphorus at 5 days (3.4 mg/dL vs. 2.5; OR 0.04; p = 0.002).\nThe risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1).\nTable 2 resumes several data associated with RS development. In particular, blood potassium at 48 h from admission was lower in RS patients (3.9 mmol/L vs. 3.5; OR 0.21; p = 0.004), in addition to phosphorus at 48 h (3.3 mg/dL vs. 2.8; OR 0.24; p = 0.002) and phosphorus at 5 days (3.4 mg/dL vs. 2.5; OR 0.04; p = 0.002).\n3.1.3. Length of Hospital Stay The mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days. Patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2).\nThe principal factors associated with a high LOS are resumed in Table 3. In particular, ER admission (OR 0.38; 95%CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were factors associated with a higher LOS.\nThe mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days. Patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2).\nThe principal factors associated with a high LOS are resumed in Table 3. In particular, ER admission (OR 0.38; 95%CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were factors associated with a higher LOS.\n3.1.4. In-Hospital Mortality and Hospital Readmission Nine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were associated with mortality, even if these associations were different (Table 4). Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files Table S1).\nNine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were associated with mortality, even if these associations were different (Table 4). 
Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files Table S1).", "The mean body weight was 71.8 ± 16.3 kg and the mean height was 169.0 ± 8.6 cm, deriving a body mass index (BMI) of 25.0 ± 4.9 kg/m2. Upon admission, according to nutritional risk screening (NRS-2002), 70 patients (34.5%) were at risk of malnutrition, while, according to the Malnutrition Universal Screening Tool (MUST), 99 patients (48.7%) had the same risk. Twenty-four patients (11.8%) underwent specialist nutritional evaluation during their hospital stay, and 74 (36.5%) were treated with medical nutrition within 48 h from admission, in particular, 63 (64.3%) with oral nutritional supplementation (ONS) and 13 (13.3%) with parenteral nutrition (PN).", "The risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1).\nTable 2 resumes several data associated with RS development. In particular, blood potassium at 48 h from admission was lower in RS patients (3.9 mmol/L vs. 3.5; OR 0.21; p = 0.004), in addition to phosphorus at 48 h (3.3 mg/dL vs. 2.8; OR 0.24; p = 0.002) and phosphorus at 5 days (3.4 mg/dL vs. 2.5; OR 0.04; p = 0.002).", "The mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days. Patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2).\nThe principal factors associated with a high LOS are resumed in Table 3. In particular, ER admission (OR 0.38; 95%CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were factors associated with a higher LOS.", "Nine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were associated with mortality, even if these associations were different (Table 4). Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files Table S1)." ]
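The ASPEN diagnostic rule quoted in the methods above is essentially a small classification algorithm: grade the relative fall in serum phosphorus, potassium, and/or magnesium within five days of restarting nutrition as mild (10–20%), moderate (20–30%), or severe (>30%, or any organ dysfunction or thiamine deficiency). The sketch below is one possible encoding of that rule; the function name, the dictionary layout, and the simplification that the single largest relative drop drives the grade are assumptions made for illustration, not the authors' implementation or a clinical tool.

```python
def aspen_rs_severity(baseline, follow_up, organ_dysfunction=False,
                      thiamine_deficiency=False, days_since_refeeding=5):
    """Grade refeeding syndrome following the ASPEN 2020 rule quoted above.

    baseline / follow_up: dicts with serum 'phosphorus', 'potassium',
    'magnesium' values (same units in both). Returns None if the rule is not met.
    """
    if days_since_refeeding > 5:
        return None  # the decrease must occur within 5 days of refeeding
    if organ_dysfunction or thiamine_deficiency:
        return "severe"
    drops = [
        (baseline[k] - follow_up[k]) / baseline[k]
        for k in ("phosphorus", "potassium", "magnesium")
        if k in baseline and k in follow_up
    ]
    worst = max(drops, default=0.0)
    if worst > 0.30:
        return "severe"
    if worst >= 0.20:
        return "moderate"
    if worst >= 0.10:
        return "mild"
    return None

# Illustrative electrolyte values only (not patient data from the study);
# phosphorus/magnesium in mg/dL, potassium in mmol/L.
print(aspen_rs_severity(
    baseline={"phosphorus": 3.3, "potassium": 3.9, "magnesium": 2.0},
    follow_up={"phosphorus": 2.5, "potassium": 3.5, "magnesium": 1.9},
))  # -> "moderate"
```

The quoted criteria do not specify how to treat a drop of exactly 20% or 30%; the sketch assigns the boundary to the higher grade.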
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Study Design and Ethical Committee Approval", "2.2. Patients", "2.3. Protocol Description", "2.3.1. Protocol Algorithm", "2.3.2. Determination of RS Risk", "2.3.3. Diagnosis of RS", "2.4. Outcomes Measures", "2.5. Sample Size Calculation", "2.6. Data Collection and Statistical Analysis", "3. Results", "3.1. Baseline Patients’ Characteristics", "3.1.1. Nutritional Evaluation", "3.1.2. Refeeding Syndrome", "3.1.3. Length of Hospital Stay", "3.1.4. In-Hospital Mortality and Hospital Readmission", "4. Discussion", "5. Conclusions" ]
[ "Refeeding syndrome is a potentially fatal complication that occurs in malnourished patients when an excessive amount of nutrients is delivered too rapidly after a prolonged state of fasting, as often happens in hospital settings [1]. It is defined as severe electrolyte and fluid shifts, associated with metabolic abnormalities, in malnourished patients undergoing refeeding, whether orally, enterally, or parenterally [2]. Up-to-date data about the incidence of RS remain heterogeneous. A recent systematic review of thirty-five observational studies found a wide range of RS incidence, varying from 0 to 62% among studies [3].\nThe risk of developing RS is high after prolonged fasting, which leads to a reduction in insulin release and an increase in glucagon secretion. Metabolism shifts from the use of glucose as an energy source to the use of structural proteins and lipid deposits, with a reduction in intracellular energy cofactors (vitamins and minerals), in particular, phosphorus, potassium, and magnesium. During refeeding, the presence of nutrients, in particular, carbohydrates, stimulates glycolysis, glycogen synthesis, and the synthesis of lipids and proteins, with increased intracellular retention of water and sodium. This anabolic process requires cofactors such as thiamine (vitamin B1) and minerals. The result is a rapid reduction in these cofactors at the intravascular level, reduced urinary elimination of sodium and water, water retention, and a risk of cardiometabolic, respiratory, and neurological decompensation, eventually leading to death [3,4]. Such a condition often occurs during hospitalization, sometimes causing death if not diagnosed and treated promptly [5].\nRecently, the American Society for Parenteral and Enteral Nutrition (ASPEN) convened an inter-professional task force to develop a consensus to identify patients at a high risk of developing RS, and to prevent and treat it. This consensus, published in April 2020 [6], defined RS as a metabolic and electrolytic alteration that occurs after the reintroduction of nutrition. This prospective cohort study aimed to assess the prevalence of RS risk and the incidence of RS according to ASPEN criteria, and its impact on length of stay (LOS), mortality, and re-admissions within 30 days in patients admitted to the Internal Medicine and Gastroenterology Unit.", "2.1. Study Design and Ethical Committee Approval This was a single-center observational prospective cohort study.\nThe study was conducted based on the Declaration of Helsinki and according to Good Clinical Practice guidelines. The study was approved by the Ethical Committee of Fondazione Policlinico A. Gemelli IRCCS—Catholic University of the Sacred Heart (Protocol code 2638/22). All participants signed a consent form recording their agreement to take part in the study and to have the results published. This study was reported according to the STROBE guidelines for cohort studies [7].\n
2.2. Patients All adult (>18 years old) patients admitted to the Internal Medicine and Gastroenterology Unit at the Fondazione Policlinico Agostino Gemelli IRCCS, Rome, Italy, from March 2021 to January 2022, were prospectively evaluated.\nThe exclusion criteria were ongoing artificial nutrition during a preceding stay in the emergency room, hypophosphatemia on the day of admission, or refusal to participate in the study.\n2.3. Protocol Description 2.3.1. Protocol Algorithm Patients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.\n
2.3.2. Determination of RS Risk The evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following criteria (an illustrative encoding of this two-tier rule is sketched after the lists below):\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR <75% of estimated energy requirement for >7 days during an acute illness or injury OR <75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, or magnesium, or normal current levels with recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nHigher-risk comorbidities were present (moderate disease).\nPatients were considered at “significant risk” if they met one of the following criteria:\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR <50% of estimated energy requirement for >5 days during an acute illness or injury OR <50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, or magnesium, or minimally low or normal levels with recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nHigher-risk comorbidities were present (severe disease).\n
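To make the two-tier rule above concrete, the following is a minimal illustrative sketch in Python of how the ASPEN risk screen could be encoded. The field names and the reduction of clinical judgments (fat loss, muscle loss, comorbidities) to pre-assessed booleans are assumptions made only for illustration; they are not part of the ASPEN consensus or of the study protocol.

```python
from dataclasses import dataclass

# Hypothetical, simplified encoding of the ASPEN 2020 refeeding-syndrome risk screen.
# Items that require clinical judgment are represented as pre-assessed booleans.

@dataclass
class Screen:
    bmi: float                       # kg/m2
    weight_loss_5pct: bool           # >=5% loss of habitual weight
    weight_loss_severe: bool         # 7.5% in 3 months or >10% in 6 months
    reduced_intake_moderate: bool    # negligible intake 5-6 days or <75% of needs (see text)
    reduced_intake_severe: bool      # negligible intake >7 days or <50% of needs (see text)
    low_electrolytes_mild: bool      # K/P/Mg low, minimal or single-dose supplementation
    low_electrolytes_severe: bool    # K/P/Mg clearly low, multiple-dose supplementation
    fat_loss_moderate: bool
    fat_loss_severe: bool
    muscle_loss_mild_moderate: bool
    muscle_loss_severe: bool
    comorbidity_moderate: bool
    comorbidity_severe: bool

def aspen_risk(s: Screen) -> str:
    """Return 'significant risk', 'moderate risk', or 'not at risk' per the simplified rule."""
    significant = sum([
        s.bmi < 16,
        s.weight_loss_severe,
        s.reduced_intake_severe,
        s.low_electrolytes_severe,
        s.fat_loss_severe,
        s.muscle_loss_severe,
        s.comorbidity_severe,
    ])
    moderate = sum([
        16 <= s.bmi <= 18.5,
        s.weight_loss_5pct,
        s.reduced_intake_moderate,
        s.low_electrolytes_mild,
        s.fat_loss_moderate,
        s.muscle_loss_mild_moderate,
        s.comorbidity_moderate,
    ])
    if significant >= 1:
        return "significant risk"
    if moderate >= 2:
        return "moderate risk"
    return "not at risk"

if __name__ == "__main__":
    example = Screen(bmi=17.2, weight_loss_5pct=True, weight_loss_severe=False,
                     reduced_intake_moderate=True, reduced_intake_severe=False,
                     low_electrolytes_mild=False, low_electrolytes_severe=False,
                     fat_loss_moderate=False, fat_loss_severe=False,
                     muscle_loss_mild_moderate=False, muscle_loss_severe=False,
                     comorbidity_moderate=False, comorbidity_severe=False)
    print(aspen_risk(example))  # "moderate risk": three moderate criteria are met
```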
2.3.3. Diagnosis of RS The diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), by 20–30% (moderate RS), or by >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS);\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.\n
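As a companion to the risk screen, the following is a minimal sketch of how this severity grading could be computed from paired electrolyte measurements. The function names, the dictionary layout, and the simplification that organ dysfunction or thiamin deficiency directly implies severe RS are illustrative assumptions, not the study's actual data pipeline.

```python
def percent_drop(baseline: float, followup: float) -> float:
    """Relative decrease from baseline, as a percentage (negative means an increase)."""
    return (baseline - followup) / baseline * 100.0

def aspen_rs_grade(baseline: dict, followup: dict,
                   organ_dysfunction: bool = False,
                   thiamin_deficiency: bool = False) -> str:
    """Grade RS from serum phosphorus/potassium/magnesium drops observed within
    5 days of reinitiating or substantially increasing energy provision."""
    if organ_dysfunction or thiamin_deficiency:
        return "severe RS"
    worst = max(percent_drop(baseline[k], followup[k])
                for k in ("phosphorus", "potassium", "magnesium"))
    if worst > 30:
        return "severe RS"
    if worst >= 20:
        return "moderate RS"
    if worst >= 10:
        return "mild RS"
    return "no RS"

# Example: phosphorus falling from 3.4 to 2.5 mg/dL is a ~26% drop -> "moderate RS"
print(aspen_rs_grade(
    baseline={"phosphorus": 3.4, "potassium": 4.0, "magnesium": 2.0},
    followup={"phosphorus": 2.5, "potassium": 3.8, "magnesium": 1.9},
))
```

Boundary handling at exactly 20% or 30% is not specified in the text above, so the cutoffs used here are one possible reading.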
2.4. Outcomes Measures The primary outcome was the incidence of RS risk according to the ASPEN 2020 criteria. The secondary outcomes were the diagnosis of RS after the initiation of nutritional support and its impact on mortality, LOS, and hospital readmission within 30 days.\n2.5. Sample Size Calculation According to a recent systematic review, the incidence of RS varied widely, from 0% to 62%, across studies [3]. However, a previous study identified an RS risk prevalence of up to 54% and an RS diagnosis rate of 8% among patients admitted to an internal medicine department [8]. With a margin of error of 6% and a confidence level of 95%, between 79 and 158 patients needed to be enrolled to detect the above-mentioned incidences. Considering a dropout rate of 10%, our sample size was set at 176 patients.\n
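The figures quoted above are consistent with the standard normal-approximation formula for estimating a single proportion; the short sketch below shows that calculation. The choice of this particular formula and the proportions plugged in are assumptions for illustration only, since the text does not state exactly how the 79–158 range was derived.

```python
from math import ceil

def n_for_proportion(p: float, margin: float = 0.06, z: float = 1.96) -> int:
    """Sample size to estimate a proportion p within +/- margin at ~95% confidence."""
    return ceil(z**2 * p * (1 - p) / margin**2)

# An assumed 8% RS diagnosis rate reproduces the lower bound of about 79 patients;
# higher assumed incidences give larger samples. Inflating by 10% dropout:
n_low = n_for_proportion(0.08)      # 79
n_with_dropout = ceil(n_low / 0.9)  # ~88 for the 8% scenario
print(n_low, n_with_dropout)
```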
2.6. Data Collection and Statistical Analysis Data were collected using a dedicated Excel© spreadsheet and are presented using descriptive statistics. Quantitative variables are reported as minimum, maximum, range, mean, and standard deviation. Qualitative variables are summarized as absolute and percentage frequencies. The normality of continuous distributions was examined with the Kolmogorov–Smirnov test.\nThe primary objective was addressed by calculating the cumulative incidence of the risk of developing RS in the enrolled patients.\nThe secondary objectives were addressed through the cumulative incidence of overt RS in the enrolled at-risk patients, and through a Cox regression model for the evaluation of hospital stay, with log-rank analysis to highlight differences between the groups that did or did not develop RS. For the other secondary outcomes, univariate logistic models were fitted, obtaining odds ratios (OR).\nA p < 0.05 was considered statistically significant. All statistical analyses were carried out with STATA (version 13, Stata Corporation; College Station, TX, USA).", "This was a single-center observational prospective cohort study.\nThe study was conducted based on the Declaration of Helsinki and according to Good Clinical Practice guidelines. The study was approved by the Ethical Committee of Fondazione Policlinico A. Gemelli IRCCS—Catholic University of the Sacred Heart (Protocol code 2638/22). All participants signed a consent form recording their agreement to take part in the study and to have the results published. This study was reported according to the STROBE guidelines for cohort studies [7].", "All adult (>18 years old) patients admitted to the Internal Medicine and Gastroenterology Unit at the Fondazione Policlinico Agostino Gemelli IRCCS, Rome, Italy, from March 2021 to January 2022, were prospectively evaluated.\nThe exclusion criteria were ongoing artificial nutrition during a preceding stay in the emergency room, hypophosphatemia on the day of admission, or refusal to participate in the study.", "2.3.1. Protocol Algorithm Patients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.\n
2.3.2. Determination of RS Risk The evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following criteria:\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR <75% of estimated energy requirement for >7 days during an acute illness or injury OR <75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, or magnesium, or normal current levels with recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nHigher-risk comorbidities were present (moderate disease).\nPatients were considered at “significant risk” if they met one of the following criteria:\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR <50% of estimated energy requirement for >5 days during an acute illness or injury OR <50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, or magnesium, or minimally low or normal levels with recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nHigher-risk comorbidities were present (severe disease).\n
2.3.3. Diagnosis of RS The diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), by 20–30% (moderate RS), or by >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS);\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.", "Patients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) 
to identify patients with RS.\nMortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded by the analysis of medical records.", "The evaluation of RS risk was performed according to the recently released ASPEN criteria [6].\nPatients were considered at “moderate risk” if they met two of the following criteria:\nBMI was between 16 and 18.5 kg/m2;\nA weight loss of 5% of habitual weight was reported;\nThere was no or negligible oral intake for 5–6 days OR <75% of estimated energy requirement for >7 days during an acute illness or injury OR <75% of estimated energy requirement for >1 month;\nThere were low levels of potassium, phosphorus, or magnesium, or normal current levels with recent low levels necessitating minimal or single-dose supplementation;\nThere was evidence of moderate subcutaneous fat loss;\nThere was evidence of mild or moderate muscle loss;\nHigher-risk comorbidities were present (moderate disease).\nPatients were considered at “significant risk” if they met one of the following criteria:\nBMI was <16 kg/m2;\nA weight loss of 7.5% in 3 months or >10% in 6 months was reported;\nThere was no or negligible oral intake for >7 days OR <50% of estimated energy requirement for >5 days during an acute illness or injury OR <50% of estimated energy requirement for >1 month;\nThere were moderately/significantly low levels of potassium, phosphorus, or magnesium, or minimally low or normal levels with recent low levels necessitating significant or multiple-dose supplementation;\nThere was evidence of severe subcutaneous fat loss;\nThere was evidence of severe muscle loss;\nHigher-risk comorbidities were present (severe disease).", "The diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows:\nA decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), by 20–30% (moderate RS), or by >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS);\nThe decrease occurs within 5 days of reinitiating or substantially increasing energy provision.", "The primary outcome was the incidence of RS risk according to the ASPEN 2020 criteria. The secondary outcomes were the diagnosis of RS after the initiation of nutritional support and its impact on mortality, LOS, and hospital readmission within 30 days.", "According to a recent systematic review, the incidence of RS varied widely, from 0% to 62%, across studies [3]. However, a previous study identified an RS risk prevalence of up to 54% and an RS diagnosis rate of 8% among patients admitted to an internal medicine department [8]. With a margin of error of 6% and a confidence level of 95%, between 79 and 158 patients needed to be enrolled to detect the above-mentioned incidences. Considering a dropout rate of 10%, our sample size was set at 176 patients.", "Data were collected using a dedicated Excel© spreadsheet and are presented using descriptive statistics. Quantitative variables are reported as minimum, maximum, range, mean, and standard deviation. Qualitative variables are summarized as absolute and percentage frequencies. The normality of continuous distributions was examined with the Kolmogorov–Smirnov test.\nThe primary objective was addressed by calculating the cumulative incidence of the risk of developing RS in the enrolled patients.\nThe secondary objectives were addressed through the cumulative incidence of overt RS in the enrolled at-risk patients, and through a Cox regression model for the evaluation of hospital stay, with log-rank analysis to highlight differences between the groups that did or did not develop RS. For the other secondary outcomes, univariate logistic models were fitted, obtaining odds ratios (OR).\nA p < 0.05 was considered statistically significant. All statistical analyses were carried out with STATA (version 13, Stata Corporation; College Station, TX, USA).",
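Although the analyses were carried out in STATA, the plan outlined above maps onto standard routines in any statistical environment. The sketch below is an illustrative Python equivalent using scipy, lifelines, and statsmodels; the data frame, its column names, and the simulated values are assumptions for illustration, not the study's actual dataset.

```python
# Illustrative re-implementation of the analysis plan on simulated data.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 203
df = pd.DataFrame({
    "rs": rng.integers(0, 2, n),          # developed refeeding syndrome (0/1)
    "los_days": rng.gamma(2.0, 4.0, n),   # length of stay in days
    "discharged": np.ones(n, dtype=int),  # event indicator for the Cox model
    "died": rng.integers(0, 2, n),        # in-hospital death (0/1)
    "bmi": rng.normal(25, 5, n),
})

# Normality check of a continuous variable (Kolmogorov-Smirnov).
print(stats.kstest(df["bmi"], "norm", args=(df["bmi"].mean(), df["bmi"].std())))

# Cox model for hospital stay with RS as covariate, plus a log-rank comparison.
cph = CoxPHFitter().fit(df[["los_days", "discharged", "rs"]],
                        duration_col="los_days", event_col="discharged")
print(cph.summary[["exp(coef)", "p"]])
with_rs, without_rs = df[df.rs == 1], df[df.rs == 0]
print(logrank_test(with_rs.los_days, without_rs.los_days,
                   event_observed_A=with_rs.discharged,
                   event_observed_B=without_rs.discharged).p_value)

# Univariate logistic model: odds ratio of in-hospital death for RS.
logit = sm.Logit(df["died"], sm.add_constant(df["rs"])).fit(disp=False)
print(np.exp(logit.params["rs"]), logit.pvalues["rs"])
```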
 "3.1. Baseline Patients’ Characteristics Two hundred and three patients were enrolled during 11 months of observation. The clinical and demographical data are presented in Table 1. The mean age was 66.1 ± 14.1 years; there were 127 male patients (62.6%), and 68.5% of the patients (n = 139) were admitted from the emergency department (ED). The mean Charlson’s Comorbidity Index (CCI) was 3.0 ± 2.4.\n3.1.1. Nutritional Evaluation The mean body weight was 71.8 ± 16.3 kg and the mean height was 169.0 ± 8.6 cm, yielding a body mass index (BMI) of 25.0 ± 4.9 kg/m2. Upon admission, according to nutritional risk screening (NRS-2002), 70 patients (34.5%) were at risk of malnutrition, while, according to the Malnutrition Universal Screening Tool (MUST), 99 patients (48.7%) were at risk. Twenty-four patients (11.8%) underwent specialist nutritional evaluation during their hospital stay, and 74 (36.5%) were treated with medical nutrition within 48 h from admission, in particular, 63 (64.3%) with oral nutritional supplementation (ONS) and 13 (13.3%) with parenteral nutrition (PN).\n3.1.2. Refeeding Syndrome The risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1).\nTable 2 summarizes the variables associated with RS development. In particular, serum potassium at 48 h from admission was lower in patients who developed RS (3.5 vs. 3.9 mmol/L; OR 0.21; p = 0.004), as were phosphorus at 48 h (2.8 vs. 3.3 mg/dL; OR 0.24; p = 0.002) and phosphorus at 5 days (2.5 vs. 3.4 mg/dL; OR 0.04; p = 0.002).\n3.1.3. Length of Hospital Stay The mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days, whereas patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2).\nThe principal factors associated with a longer LOS are summarized in Table 3. In particular, ER admission (OR 0.38; 95% CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were associated with a longer LOS.\n3.1.4. In-Hospital Mortality and Hospital Readmission Nine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were associated with mortality, although the direction of these associations differed (Table 4). Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files Table S1).", "Two hundred and three patients were enrolled during 11 months of observation. The clinical and demographical data are presented in Table 1. The mean age was 66.1 ± 14.1 years; there were 127 male patients (62.6%), and 68.5% of the patients (n = 139) were admitted from the emergency department (ED). The mean Charlson’s Comorbidity Index (CCI) was 3.0 ± 2.4.\n3.1.1. Nutritional Evaluation The mean body weight was 71.8 ± 16.3 kg and the mean height was 169.0 ± 8.6 cm, yielding a body mass index (BMI) of 25.0 ± 4.9 kg/m2. Upon admission, according to nutritional risk screening (NRS-2002), 70 patients (34.5%) were at risk of malnutrition, while, according to the Malnutrition Universal Screening Tool (MUST), 99 patients (48.7%) were at risk. Twenty-four patients (11.8%) underwent specialist nutritional evaluation during their hospital stay, and 74 (36.5%) were treated with medical nutrition within 48 h from admission, in particular, 63 (64.3%) with oral nutritional supplementation (ONS) and 13 (13.3%) with parenteral nutrition (PN).\n3.1.2. Refeeding Syndrome The risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1).\nTable 2 summarizes the variables associated with RS development. In particular, serum potassium at 48 h from admission was lower in patients who developed RS (3.5 vs. 3.9 mmol/L; OR 0.21; p = 0.004), as were phosphorus at 48 h (2.8 vs. 3.3 mg/dL; OR 0.24; p = 0.002) and phosphorus at 5 days (2.5 vs. 3.4 mg/dL; OR 0.04; p = 0.002).\n3.1.3. Length of Hospital Stay The mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days, whereas patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2).\nThe principal factors associated with a longer LOS are summarized in Table 3. In particular, ER admission (OR 0.38; 95% CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were associated with a longer LOS.\n3.1.4. In-Hospital Mortality and Hospital Readmission Nine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were associated with mortality, although the direction of these associations differed (Table 4). Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files Table S1).", "The mean body weight was 71.8 ± 16.3 kg and the mean height was 169.0 ± 8.6 cm, yielding a body mass index (BMI) of 25.0 ± 4.9 kg/m2. Upon admission, according to nutritional risk screening (NRS-2002), 70 patients (34.5%) were at risk of malnutrition, while, according to the Malnutrition Universal Screening Tool (MUST), 99 patients (48.7%) were at risk. 
Twenty-four patients (11.8%) underwent specialist nutritional evaluation during their hospital stay, and 74 (36.5%) were treated with medical nutrition within 48 h from admission, in particular, 63 (64.3%) with oral nutritional supplementation (ONS) and 13 (13.3%) with parenteral nutrition (PN).", "The risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1).\nTable 2 summarizes the variables associated with RS development. In particular, serum potassium at 48 h from admission was lower in patients who developed RS (3.5 vs. 3.9 mmol/L; OR 0.21; p = 0.004), as were phosphorus at 48 h (2.8 vs. 3.3 mg/dL; OR 0.24; p = 0.002) and phosphorus at 5 days (2.5 vs. 3.4 mg/dL; OR 0.04; p = 0.002).", "The mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days, whereas patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2).\nThe principal factors associated with a longer LOS are summarized in Table 3. In particular, ER admission (OR 0.38; 95% CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were associated with a longer LOS.", "Nine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were associated with mortality, although the direction of these associations differed (Table 4). Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files Table S1).", "This observational prospective cohort study consecutively enrolled 203 inpatients admitted to an Internal Medicine and Gastroenterology Ward of an Italian Tertiary Referral Center; most of these patients (68.5%) were admitted from the ED. According to the ASPEN criteria [6], 98 patients (48.3%) were at risk of developing RS and 38 patients (18.7% of the whole cohort) developed RS. During the hospital stay, nine patients died; RS was associated with in-hospital mortality with an OR of 10.1, meaning that the odds of dying during the hospital stay were approximately ten times higher for patients who developed RS than for patients without RS. It is noteworthy that a higher baseline BMI and the presence of nutritional support were both associated with lower in-hospital mortality. Furthermore, regarding LOS, patients who developed RS had an average hospital stay approximately five days longer than those who did not develop the syndrome (p < 0.0001). Interestingly, a longer LOS was associated with a worse nutritional status (NRS-2002 >3 and MUST ≥2) upon ED admission, confirming the role of hospital malnutrition as a predictor of poor clinical outcomes, as previously described [9].\nThe overall evidence raises the question of whether RS is recognized by physicians in clinical practice as a serious clinical challenge. Indeed, the emerging literature reports a variable incidence of RS, ranging from 0 to 62%, depending on the type of study [3]. Homogeneous data on specific clinical contexts are still limited. With regard to Internal Medicine wards, Kraaijenbrink et al. 
[8] reported, in 2017, an incidence of RS risk of 54% among 178 admitted patients in the Netherlands, according to the National Institute for Health and Care Excellence (NICE) criteria [10]. In that cohort, only 14 patients (8%) were diagnosed with RS. However, the authors considered the development of severe hypophosphatemia during follow-up, together with a positive NICE score and normal phosphate levels upon admission, as the hallmark of RS. The ASPEN criteria, released in 2020 [6], also include the imbalance of “serum potassium, and/or magnesium, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency”. Such criteria could lead clinicians to a wider identification of RS, otherwise neglected as a general electrolyte disturbance. Moreover, early diagnosis is useful to prompt specific nutritional therapy, according to the same ASPEN guidelines [6].
A recent observational cohort study [11] showed a high incidence of refeeding syndrome in patients receiving total parenteral nutrition in a Brazilian reference university hospital. The authors used the diagnostic criteria of Doig et al. [12] (serum phosphorus levels decreasing by 0.5 mg/dL or falling below 2.0 mg/dL after initiation of TPN, together with a decrease in serum magnesium and/or potassium levels). Data from the electronic medical records of 97 hospitalized patients showed an incidence of RS of 43.3% in patients treated with parenteral nutrition. Interestingly, patients who received standard parenteral nutrition (without the supervision of a team of nutritional specialists) were more likely to develop RS. By contrast, our study consecutively evaluated all patients admitted to the ward, and only 13 patients (6.4% of the whole cohort) received parenteral nutrition, while 63 (31%) received oral nutritional supplements within 48 h from admission. As per protocol, the period of observation in our study was limited to the first five days from admission; thus, we may have missed events occurring after this period. Nevertheless, the incidence of RS remains high.
Of note, in our study, neither nutritional team support nor nutritional supplementation within 48 h (whether oral or parenteral) influenced the LOS. However, nutritional counseling was delivered to only 24.5% of patients considered at risk of developing RS, while nutritional supplementation (within 48 h) was delivered to 75.5% of patients at risk. We did not perform an in-depth analysis to check whether nutritional support within 48 h was delivered according to the guidelines; this is a first limitation of the study, owing to its observational nature. Another limitation is the period of follow-up. We limited the observational window of patients considered at risk to the first five days from admission, and we do not have data on patients who were not considered at risk upon admission. However, collecting data for each patient throughout the entire hospital stay would have been unfeasible, so a pragmatic cut-off was needed.
Furthermore, we decided not to perform a multivariable analysis after the univariable analyses, since almost all the variables significantly associated with the endpoints were collinear. Moreover, there were only a few episodes of mortality and readmission. Indeed, only 13 patients (6.4%) were re-admitted to hospital within 30 days from discharge. As described in the Supplementary Files, we found no significant association with any of the registered variables.
This could be attributed to the absence of follow-up observation after hospital discharge.
Our work showed a prevalence of RS risk of 48.3% and an incidence of RS diagnosis of 18.7%, according to ASPEN criteria, in a cohort of 203 patients consecutively admitted to a Department of Internal Medicine and Gastroenterology. RS was associated with higher in-hospital mortality and a longer LOS. This evidence should prompt clinicians to identify patients at risk of RS early and to provide specific nutritional support to patients who develop the syndrome.
[ "intro", null, null, "subjects", null, null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusions" ]
[ "refeeding", "malnutrition", "ASPEN criteria", "mortality", "length of hospital stay", "readmission" ]
1. Introduction:
Refeeding syndrome (RS) is a potentially fatal complication that occurs in malnourished patients when an excessive amount of nutrients is delivered too rapidly after a prolonged state of fasting, as often happens in hospital settings [1]. It is defined as severe electrolyte and fluid shifts, associated with metabolic abnormalities, in malnourished patients undergoing refeeding, whether orally, enterally, or parenterally [2]. Up-to-date data about the incidence of RS remain heterogeneous. A recent systematic review of thirty-five observational studies found a wide range of RS incidence across studies, varying from 0% to 62% [3]. The risk of developing RS is high after prolonged fasting, which leads to a reduction in insulin release and an increase in glucagon secretion. A metabolic shift occurs from the use of glucose as a source of energy to the use of structural proteins and lipid deposits, with a reduction in intracellular energy cofactors (vitamins and minerals), in particular phosphorus, potassium, and magnesium. During refeeding, the presence of nutrients, in particular carbohydrates, stimulates glycolysis, glycogen synthesis, and the synthesis of lipids and proteins, with increased intracellular retention of water and sodium. This anabolic process requires cofactors such as thiamine (vitamin B1) and minerals. The result is a rapid reduction in these cofactors at the intravascular level, reduced elimination of sodium and water in the urine, water retention, and a risk of cardiometabolic, respiratory, and neurological decompensation, eventually leading to death [3,4]. Such a condition often occurs during hospitalization, sometimes causing death if not diagnosed and treated promptly [5]. Recently, the American Society for Parenteral and Enteral Nutrition (ASPEN) convened an inter-professional task force to develop a consensus to identify patients at high risk of developing RS, and to prevent and treat it. This consensus, published in April 2020 [6], defined RS as a metabolic and electrolytic alteration that occurs after the reintroduction of nutrition. This prospective cohort study aimed to assess the prevalence of RS risk and the incidence of RS according to ASPEN criteria, and its impact on length of stay (LOS), mortality, and re-admissions within 30 days in patients admitted to the Internal Medicine and Gastroenterology Unit.
2. Materials and Methods:
2.1. Study Design and Ethical Committee Approval
This was a single-center observational prospective cohort study, conducted in accordance with the Declaration of Helsinki and Good Clinical Practice guidelines. The study was approved by the Ethical Committee of Fondazione Policlinico A. Gemelli IRCCS—Catholic University of the Sacred Heart (Protocol code 2638/22). All participants signed a consent form recording their agreement to take part in the study and to have the results published. This study was reported according to the STROBE guidelines for cohort studies [7].
2.2. Patients
All adult (>18 years old) patients admitted to the Internal Medicine and Gastroenterology Unit at the Fondazione Policlinico Agostino Gemelli IRCCS, Rome, Italy, from March 2021 to January 2022 were prospectively evaluated. The exclusion criteria were artificial nutrition already ongoing during a prior stay in the emergency room, hypophosphatemia on the day of admission, and refusal to participate in the study.
2.3. Protocol Description
2.3.1. Protocol Algorithm
Patients were assessed for risk of RS by physicians of the Internal Medicine and Gastroenterology Unit (B.E.A. and M.I.) upon admission. In patients at risk, a specific evaluation (electrolyte and clinical check) was performed 48 h and 5 days after admission (M.D.A., R.B. and T.G.) to identify patients with RS. Mortality, LOS, and hospital re-admission within 30 days among enrolled patients were recorded through the analysis of medical records.
2.3.2. Determination of RS Risk
The evaluation of RS risk was performed according to the recently released ASPEN criteria [6]. Patients were considered at “moderate risk” if they met two of the following criteria: BMI between 16 and 18.5 kg/m2; reported weight loss of 5% of habitual weight; no or negligible oral intake for 5–6 days, OR <75% of estimated energy requirement for >7 days during an acute illness or injury, OR <75% of estimated energy requirement for >1 month; low levels of potassium, phosphorus, or magnesium, or normal current levels with recent low levels necessitating minimal or single-dose supplementation; evidence of moderate subcutaneous fat loss; evidence of mild or moderate muscle loss; presence of higher-risk comorbidities (moderate disease).
Patients were considered at “significant risk” if they met one of the following criteria: BMI <16 kg/m2; reported weight loss of 7.5% in 3 months or >10% in 6 months; no or negligible oral intake for >7 days, OR <50% of estimated energy requirement for >5 days during an acute illness or injury, OR <50% of estimated energy requirement for >1 month; moderately/significantly low levels of potassium, phosphorus, or magnesium, or minimally low or normal levels with recent low levels necessitating significant or multiple-dose supplementation; evidence of severe subcutaneous fat loss; evidence of severe muscle loss; presence of higher-risk comorbidities (severe disease).
2.3.3. Diagnosis of RS
The diagnosis of RS was then confirmed according to the above-mentioned ASPEN criteria [6], which are as follows: a decrease in serum phosphorus, potassium, and/or magnesium levels by 10–20% (mild RS), 20–30% (moderate RS), or >30%, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency (severe RS); the decrease occurs within 5 days of reinitiating or substantially increasing energy provision.
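These risk and diagnostic rules are essentially a small decision procedure, so they can be expressed compactly in code. The sketch below is a simplified, non-authoritative rendering of the logic summarized above: the dataclass fields, the grouping of thresholds, and the omission of time windows are illustrative choices, not part of the published consensus text.

```python
from dataclasses import dataclass

@dataclass
class Screening:
    """Illustrative inputs for the ASPEN-style risk screen (field names are assumptions)."""
    bmi: float
    weight_loss_pct: float          # % of habitual weight lost (time window omitted for brevity)
    days_negligible_intake: int
    low_electrolytes: bool          # K, P or Mg low / recently low, needing supplementation
    fat_loss: str                   # "none" | "moderate" | "severe"
    muscle_loss: str                # "none" | "mild" | "moderate" | "severe"
    comorbidity: str                # "none" | "moderate" | "severe"

def aspen_risk(s: Screening) -> str:
    """Classify RS risk as 'significant', 'moderate' or 'low' (simplified)."""
    significant = [
        s.bmi < 16,
        s.weight_loss_pct >= 7.5,          # 7.5% in 3 months (or >10% in 6 months)
        s.days_negligible_intake > 7,
        s.fat_loss == "severe",
        s.muscle_loss == "severe",
        s.comorbidity == "severe",
    ]
    moderate = [
        16 <= s.bmi <= 18.5,
        5 <= s.weight_loss_pct < 7.5,
        5 <= s.days_negligible_intake <= 7,
        s.low_electrolytes,
        s.fat_loss == "moderate",
        s.muscle_loss in ("mild", "moderate"),
        s.comorbidity == "moderate",
    ]
    if any(significant):
        return "significant"               # one significant criterion is enough
    if sum(moderate) >= 2:
        return "moderate"                  # two moderate criteria are required
    return "low"

def rs_severity(baseline: float, followup: float) -> str:
    """Grade RS from the relative decrease in P, K or Mg within 5 days of refeeding."""
    drop = (baseline - followup) / baseline * 100
    if drop > 30:
        return "severe"
    if drop >= 20:
        return "moderate"
    if drop >= 10:
        return "mild"
    return "no RS by electrolyte criterion"

# Example: phosphorus falling from 3.4 to 2.5 mg/dL is a ~26% decrease -> moderate RS
print(rs_severity(3.4, 2.5))
```

In the study itself, risk was assessed clinically by ward physicians; a sketch like this only illustrates how the published thresholds combine.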
2.4. Outcomes Measures
The primary outcome was the incidence of RS risk according to the ASPEN 2020 criteria. The secondary outcomes were the diagnosis of RS after the initiation of nutritional support and its impact on mortality, LOS, and hospital readmission within 30 days.
2.5. Sample Size Calculation
According to a recent systematic review, the incidence of RS varied widely, from 0% to 62%, across studies [3]. However, a previous study identified an incidence of RS risk of up to 54% and an RS diagnosis rate of 8% among patients admitted to an internal medicine department [8]. With a margin of error of 6% and a confidence level of 95%, between 79 and 158 patients would need to be enrolled to detect the above-mentioned incidences. Allowing for a dropout rate of 10%, the sample size was set at 176 patients.
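For context, the classical normal-approximation formula for estimating a single proportion is n = z² p(1 − p) / e². The snippet below applies it to the 8% diagnosis rate cited above with a 6% margin of error, which lands at the lower bound of the reported enrolment range; the exact inputs behind the upper bound are not fully specified in the text, so the parameters here are purely illustrative.

```python
import math

def n_for_proportion(p: float, margin: float, z: float = 1.96) -> int:
    """Minimum sample size to estimate a proportion p within +/- margin at ~95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Expected RS diagnosis rate of 8% (from [8]) with a 6% margin of error
print(n_for_proportion(p=0.08, margin=0.06))   # -> 79
```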
2.6. Data Collection and Statistical Analysis
Data were collected using a dedicated Excel© spreadsheet and are presented using descriptive statistics. Quantitative variables are reported as minimum, maximum, range, mean, and standard deviation; qualitative variables are summarized in tables of absolute and percentage frequencies. The normality of continuous distributions was examined with the Kolmogorov–Smirnov test. The primary objective was addressed by calculating the cumulative incidence of the risk of developing RS in the enrolled patients. The secondary objectives were addressed through the cumulative incidence of overt RS development in the enrolled at-risk patients, and through a Cox regression model for the evaluation of mean hospital stay, with log-rank analysis to highlight differences between the groups that did and did not develop RS. For the other secondary outcomes, univariate logistic models were fitted to obtain odds ratios (OR). A p < 0.05 was considered statistically significant. All statistical analyses were carried out with STATA (version 13, Stata Corporation; College Station, TX, USA).
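The analyses were run in STATA; as a rough equivalent, a univariate logistic model with its odds ratio and 95% CI can be reproduced in Python with statsmodels. The variable names and the toy data frame below are placeholders, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data frame standing in for the study dataset (values are invented placeholders)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "died": rng.integers(0, 2, size=200),   # in-hospital mortality (0/1)
    "rs": rng.integers(0, 2, size=200),     # RS developed (0/1)
})

# Univariate logistic model: mortality ~ RS development
fit = smf.logit("died ~ rs", data=df).fit(disp=False)

or_estimate = np.exp(fit.params["rs"])               # odds ratio for RS
or_low, or_high = np.exp(fit.conf_int().loc["rs"])   # 95% CI on the OR scale
print(f"OR = {or_estimate:.2f} (95% CI {or_low:.2f}-{or_high:.2f}), p = {fit.pvalues['rs']:.3f}")
```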
3. Results:
3.1. Baseline Patients’ Characteristics
Two hundred and three patients were enrolled during 11 months of observation. The clinical and demographic data are presented in Table 1. The mean age was 66.1 ± 14.1 years; 127 patients (62.6%) were male and 68.5% of the patients (n = 139) were admitted from the emergency department (ED). The mean Charlson’s Comorbidity Index (CCI) was 3.0 ± 2.4.
3.1.1. Nutritional Evaluation
The mean body weight was 71.8 ± 16.3 kg and the mean height was 169.0 ± 8.6 cm, corresponding to a mean body mass index (BMI) of 25.0 ± 4.9 kg/m2. Upon admission, 70 patients (34.5%) were at risk of malnutrition according to nutritional risk screening (NRS-2002), while 99 patients (48.7%) were at risk according to the Malnutrition Universal Screening Tool (MUST).
Twenty-four patients (11.8%) underwent specialist nutritional evaluation during their hospital stay, and 74 (36.5%) were treated with medical nutrition within 48 h from admission, in particular 63 (64.3%) with oral nutritional supplementation (ONS) and 13 (13.3%) with parenteral nutrition (PN).
3.1.2. Refeeding Syndrome
The risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1).
Table 2 summarizes the data associated with RS development. In particular, blood potassium at 48 h from admission was lower in patients who developed RS (3.9 mmol/L in patients without RS vs. 3.5 mmol/L in RS patients; OR 0.21; p = 0.004), as were phosphorus at 48 h (3.3 mg/dL vs. 2.8 mg/dL; OR 0.24; p = 0.002) and phosphorus at 5 days (3.4 mg/dL vs. 2.5 mg/dL; OR 0.04; p = 0.002).
3.1.3. Length of Hospital Stay
The mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days, whereas patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2).
The principal factors associated with a longer LOS are summarized in Table 3. In particular, ER admission (OR 0.38; 95% CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were associated with a longer LOS.
3.1.4. In-Hospital Mortality and Hospital Readmission
Nine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were all associated with mortality, although in different directions (Table 4). Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files, Table S1).
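Section 2.6 describes the LOS comparison as a Cox model with a log-rank test between patients with and without RS. A minimal sketch of that kind of analysis, using the lifelines package on invented placeholder data (the study itself used STATA), could look like this; reading estimates below 1 for the RS covariate as a slower time to discharge, i.e. a longer stay, is one plausible way to interpret the below-1 figures reported in Table 3.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Placeholder data: LOS in days, discharge observed (1) vs. censored (0), RS developed (0/1)
df = pd.DataFrame({
    "los_days":   [5, 7, 9, 12, 4, 15, 6, 20, 8, 11],
    "discharged": [1, 1, 1, 1, 1, 1, 1, 0, 1, 1],
    "rs":         [0, 0, 0, 1, 0, 1, 0, 1, 0, 1],
})

# Log-rank test comparing time to discharge between RS and non-RS patients
rs_group, no_rs_group = df[df.rs == 1], df[df.rs == 0]
lr = logrank_test(rs_group.los_days, no_rs_group.los_days,
                  event_observed_A=rs_group.discharged,
                  event_observed_B=no_rs_group.discharged)
print(f"log-rank p = {lr.p_value:.3f}")

# Cox proportional-hazards model for length of stay
cph = CoxPHFitter().fit(df, duration_col="los_days", event_col="discharged")
print(cph.summary[["coef", "exp(coef)", "p"]])
```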
Refeeding Syndrome: The risk of RS was identified in 98 (48.3%) patients; of these, 44 (21.7%) were at medium risk and 54 (26.6%) were at high risk of developing RS. Thirty-eight patients (18.7% of the entire cohort) developed RS (Figure 1). Table 2 resumes several data associated with RS development. In particular, blood potassium at 48 h from admission was lower in RS patients (3.9 mmol/L vs. 3.5; OR 0.21; p = 0.004), in addition to phosphorus at 48 h (3.3 mg/dL vs. 2.8; OR 0.24; p = 0.002) and phosphorus at 5 days (3.4 mg/dL vs. 2.5; OR 0.04; p = 0.002). 3.1.3. Length of Hospital Stay: The mean LOS was 8.2 ± 5.8 days. Patients diagnosed with RS had a mean LOS of 12.5 ± 7.9 days. Patients without RS had a mean LOS of 7.1 ± 4.2 days (p < 0.0001) (Figure 2). The principal factors associated with a high LOS are resumed in Table 3. In particular, ER admission (OR 0.38; 95%CI 0.28–0.54), NRS-2002 >3 (OR 0.67; 95% CI 0.49–0.92), MUST ≥2 (OR 0.51; 95% CI 0.37–0.71), RS risk (OR 0.66; 95% CI 0.50–0.88), and RS development (OR 0.45; 95% CI 0.50–0.88) were factors associated with a higher LOS. 3.1.4. In-Hospital Mortality and Hospital Readmission: Nine patients (4.4%) died during hospitalization. BMI (OR 0.82; 95% CI 0.69–0.97), RS development (OR 10.1; 95% CI 2.4–42.6), and medical nutritional support within 48 h from admission (OR 0.12; 95% CI 0.02–0.56) were associated with mortality, even if these associations were different (Table 4). Thirteen patients (6.4%) were re-admitted to hospital within 30 days from discharge; however, none of the tested variables were associated with re-admission (Supplementary Files Table S1). 4. Discussion: This observational prospective cohort study consecutively enrolled 203 inpatients admitted to an Internal Medicine and Gastroenterology Ward of an Italian Tertiary Referral Center; most of these patients (68.5%) were admitted from the ED. According to the ASPEN criteria [6], 98 patients (48.3%) were at risk of developing RS and 38 patients (18.7% of the whole cohort) developed RS. During the hospital stay, nine patients died; RS was associated with intra-hospital mortality with an OR of 10.1, which means that the possibility of an inpatient dying from RS or its complications was 10:1, compared to a patient without RS. It is noteworthy to highlight that a higher baseline BMI and the presence of nutritional support were both associated with lower in-hospital mortality. Furthermore, regarding LOS, patients who developed RS had an average hospital stay of five days more than those who did not develop the syndrome (p < 0.0001). Interestingly, a longer LOS was associated with a worse nutritional status (NRS-2002 >3 and MUST ≥2) upon ED admission, confirming the role of hospital malnutrition as a predictor of poor clinical outcomes, as previously described [9]. The overall evidence raises the question of how RS is recognized by physicians in clinical practice as a serious clinical challenge. Indeed, emerging literature reports a variable incidence of RS, ranging from 0 to 62% of incidence, depending on the type of studies [3]. Homogeneous data on specific clinical contexts are still limited. With regards to Internal Medicine wards, Kraaijenbrink et al. [8] reported, in 2017, an incidence of RS risk of 54% among 178 admitted patients in the Netherlands, according to the National Institute for Health and Care Excellence (NICE) criteria [10]. In that cohort, only 14 patients (8%) were diagnosed with RS. 
However, the authors considered the development of severe hypophosphatemia during follow-up, together with a positive NICE score and normal phosphate levels upon admission, as the hallmark of RS. The ASPEN criteria, released in 2020 [6], also include the imbalance of “serum potassium, and/or magnesium, and/or organ dysfunction resulting from a decrease in any of these and/or due to thiamin deficiency”. Such criteria could lead clinicians to a wider identification of RS, which might otherwise be dismissed as a generic electrolyte disturbance. Moreover, early diagnosis is useful to prompt specific nutritional therapy, according to the same ASPEN guidelines [6]. A recent observational cohort study [11] showed a high incidence of refeeding syndrome among patients receiving total parenteral nutrition in a Brazilian reference university hospital. The authors used the diagnostic criteria of Doig et al. [12] (serum phosphorus levels decreasing by 0.5 mg/dL or falling below 2.0 mg/dL after the initiation of TPN, together with a decrease in serum magnesium and/or potassium levels). Data from the electronic medical records of 97 hospitalized patients showed an incidence of RS of 43.3% in patients treated with parenteral nutrition. Interestingly, patients who received standard parenteral nutrition (without the supervision of a team of nutritional specialists) were more likely to develop RS. In contrast, our study consecutively evaluated all patients admitted to the ward, and only 13 patients (6.4% of the whole cohort) received parenteral nutrition, while 63 (31%) received oral nutritional supplements within 48 h from admission. As per protocol, the period of observation in our study was limited to the first five days from admission; thus, data after this period may be lacking. Nevertheless, the incidence of RS remains high. Of note, in our study, neither nutritional team support nor nutritional supplementation within 48 h (whether oral or parenteral) influenced the LOS. However, nutritional counseling was delivered to only 24.5% of the patients considered at risk of developing RS, while nutritional supplementation (within 48 h) was delivered to 75.5% of the patients at risk. We did not perform an in-depth analysis to check whether nutritional support within 48 h was delivered according to the guidelines. This is a first limitation of the study, related to its observational nature. Another limitation is the period of follow-up: we decided to limit the observational window of patients considered at risk to only five days from admission, and we do not have data on the patients who were not considered at risk upon admission. However, collecting data for each patient throughout the entire hospital stay would have been unfeasible, so a choice had to be made. Furthermore, we decided not to perform a multivariable analysis after the univariable analyses, since almost all the variables significantly associated with the endpoints were collinear. Moreover, there were only a few deaths and readmissions. Indeed, only 13 patients (6.4%) were re-admitted to hospital within 30 days from discharge. As described in the Supplementary Files, we found no significant association with any of the registered variables, which could be attributed to the absence of follow-up observation after hospital discharge.
5. Conclusions: Our work showed a prevalence of RS risk of 48.3% and an incidence of RS diagnosis of 18.7% in a cohort of 203 patients consecutively admitted to a Department of Internal Medicine and Gastroenterology, according to the ASPEN criteria.
RS is associated with higher mortality and a longer LOS. Such evidence should raise awareness among clinicians of the need to identify patients at risk of RS early and to provide prompt, specific nutritional support to patients who develop this syndrome.
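The mortality analysis above reports unadjusted odds ratios with 95% confidence intervals for individual predictors (Table 4), consistent with the univariable approach described in the Discussion. As a minimal, hypothetical sketch of how such estimates are obtained, the snippet below fits one logistic regression per predictor with statsmodels; the data frame and column names are invented placeholders, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical cohort of 203 patients; column names are illustrative only.
rs = rng.binomial(1, 0.19, 203)          # ~19% develop RS, as in the study
death_prob = 0.03 + 0.10 * rs            # assumed: higher mortality with RS
df = pd.DataFrame({
    "in_hospital_death": rng.binomial(1, death_prob),
    "rs_development":    rs,
    "bmi":               rng.normal(25.0, 4.9, 203),  # kg/m2
})

def univariable_or(data: pd.DataFrame, outcome: str, predictor: str):
    """Fit outcome ~ predictor and return the OR with its 95% CI."""
    X = sm.add_constant(data[[predictor]])
    fit = sm.Logit(data[outcome], X).fit(disp=0)
    point = np.exp(fit.params[predictor])
    low, high = np.exp(fit.conf_int().loc[predictor])
    return point, low, high

for var in ("rs_development", "bmi"):
    or_, lo, hi = univariable_or(df, "in_hospital_death", var)
    print(f"{var}: OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With real cohort data, the same call would return estimates of the kind reported in Tables 3 and 4; with the simulated data above the output only illustrates the format.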
Background: Refeeding syndrome (RS) is a neglected, potentially fatal syndrome that occurs in malnourished patients undergoing rapid nutritional replenishment after a period of fasting. The American Society for Parenteral and Enteral Nutrition (ASPEN) recently released new criteria for RS risk and diagnosis. Real-life data on its incidence are still limited. Methods: We consecutively enrolled patients admitted to the Internal Medicine and Gastroenterology Unit of our center. The RS risk prevalence and incidence of RS were evaluated according to ASPEN. The length of stay (LOS), mortality, and re-admission rate within 30 days were assessed. Results: Among 203 admitted patients, 98 (48.3%) were at risk of RS; RS occurred in 38 patients (18.7% of the entire cohort). Patients diagnosed with RS had a higher mean LOS (12.5 ± 7.9 days) than those who were not diagnosed with RS (7.1 ± 4.2 days) (p < 0.0001). Nine patients (4.4%) died. Body mass index (OR 0.82; 95% CI 0.69-0.97), RS diagnosis (OR 10.1; 95% CI 2.4-42.6), and medical nutritional support within 48 h (OR 0.12; 95% CI 0.02-0.56) were associated with mortality. Conclusions: RS incidence is high in clinical wards, influencing clinical outcomes. Awareness among clinicians is necessary to identify patients at risk and to support those developing this syndrome.
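The abstract above reports a mean LOS of 12.5 ± 7.9 days in the 38 patients who developed RS versus 7.1 ± 4.2 days in the remaining 165 patients (p < 0.0001). The paper does not state which statistical test produced that p-value, so the sketch below is only a consistency check: it recomputes a two-sample t-test directly from the published means, standard deviations and group sizes, under both the equal-variance and the Welch assumption.

```python
from scipy import stats

# Published summary statistics for length of stay (days, mean ± SD).
rs_group    = dict(mean=12.5, std=7.9, nobs=38)        # developed RS
no_rs_group = dict(mean=7.1,  std=4.2, nobs=203 - 38)  # did not develop RS

for label, equal_var in (("Student's t", True), ("Welch's t", False)):
    t_stat, p_value = stats.ttest_ind_from_stats(
        rs_group["mean"], rs_group["std"], rs_group["nobs"],
        no_rs_group["mean"], no_rs_group["std"], no_rs_group["nobs"],
        equal_var=equal_var,
    )
    print(f"{label}: t = {t_stat:.2f}, p = {p_value:.1e}")
```

Either way the difference is highly significant, in line with the reported p < 0.0001, although the exact p-value depends on which test is assumed.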
1. Introduction: Refeeding syndrome is a potentially fatal complication that occurs in malnourished patients when an excessive amount of nutrients is delivered too rapidly after a prolonged state of fasting, as often happens in hospital settings [1]. It is defined as severe electrolyte and fluid shifts, associated with metabolic abnormalities, in malnourished patients undergoing refeeding, whether orally, enterally, or parenterally [2]. Up-to-date data about the incidence of RS remain heterogeneous. A recent systematic review of thirty-five observational studies found a wide range of RS incidence, varying from 0 to 62% among studies [3]. The risk of developing RS is high after prolonged fasting, which leads to a reduction in insulin release and an increase in glucagon secretion. Metabolism switches from glucose, as the main source of energy, to structural proteins and lipid deposits, with a reduction in intracellular energy cofactors (vitamins and minerals), in particular phosphorus, potassium, and magnesium. During refeeding, the presence of nutrients, in particular carbohydrates, stimulates glycolysis, glycogen synthesis, and the synthesis of lipids and proteins, with increased intracellular retention of water and sodium. This anabolic process requires the presence of cofactors, such as thiamine (vitamin B1) and minerals. The result is a rapid reduction in these cofactors at the intravascular level, reduced elimination of sodium and water in the urine, water retention, and a risk of cardiometabolic, respiratory, and neurological decompensation, eventually leading to death [3,4]. Such a condition often occurs during hospitalization, sometimes causing death if not diagnosed and treated promptly [5]. Recently, the American Society for Parenteral and Enteral Nutrition (ASPEN) convened an inter-professional task force to develop a consensus to identify patients at a high risk of developing RS, and to prevent and treat it. This consensus, published in April 2020 [6], defined RS as a metabolic and electrolytic alteration that occurs after the reintroduction of nutrition. This observational prospective cohort study aimed to assess the prevalence of RS risk and the incidence of RS according to ASPEN criteria, and its impact on length of stay (LOS), mortality, and re-admissions within 30 days in patients admitted to the Internal Medicine and Gastroenterology Unit.
5. Conclusions: Our work showed a prevalence of RS risk of 48.3% and an incidence of RS diagnosis of 18.7% in a cohort of 203 patients consecutively admitted to a Department of Internal Medicine and Gastroenterology, according to the ASPEN criteria. RS is associated with higher mortality and a longer LOS. Such evidence should raise awareness among clinicians of the need to identify patients at risk of RS early and to provide prompt, specific nutritional support to patients who develop this syndrome.
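The Discussion above also cites the diagnostic criteria of Doig et al.: serum phosphorus falling by at least 0.5 mg/dL, or dropping below 2.0 mg/dL, after the start of feeding, together with a decrease in serum magnesium and/or potassium. A minimal sketch of such a biochemical flag is shown below; the thresholds follow that description, while the function name and inputs are hypothetical, and this is not the ASPEN case definition actually used in the study.

```python
def doig_refeeding_flag(
    phos_before: float, phos_after: float,  # serum phosphorus, mg/dL
    mg_before: float, mg_after: float,      # serum magnesium
    k_before: float, k_after: float,        # serum potassium
) -> bool:
    """Flag possible refeeding syndrome using the Doig-style biochemical rule
    summarized in the Discussion (illustrative only)."""
    phosphate_criterion = (phos_before - phos_after) >= 0.5 or phos_after < 2.0
    mg_or_k_decrease = (mg_after < mg_before) or (k_after < k_before)
    return phosphate_criterion and mg_or_k_decrease


# Made-up values resembling the electrolyte trends reported in Table 2.
print(doig_refeeding_flag(3.3, 2.5, 2.0, 1.8, 3.9, 3.5))  # True
```

In the study itself, risk stratification and diagnosis followed the ASPEN consensus rather than this rule; the snippet only illustrates how a purely biochemical definition can be operationalized.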
Background: Refeeding syndrome (RS) is a neglected, potentially fatal syndrome that occurs in malnourished patients undergoing rapid nutritional replenishment after a period of fasting. The American Society for Parenteral and Enteral Nutrition (ASPEN) recently released new criteria for RS risk and diagnosis. Real-life data on its incidence are still limited. Methods: We consecutively enrolled patients admitted to the Internal Medicine and Gastroenterology Unit of our center. The RS risk prevalence and incidence of RS were evaluated according to ASPEN. The length of stay (LOS), mortality, and re-admission rate within 30 days were assessed. Results: Among 203 admitted patients, 98 (48.3%) were at risk of RS; RS occurred in 38 patients (18.7% of the entire cohort). Patients diagnosed with RS had a higher mean LOS (12.5 ± 7.9 days) than those who were not diagnosed with RS (7.1 ± 4.2 days) (p < 0.0001). Nine patients (4.4%) died. Body mass index (OR 0.82; 95% CI 0.69-0.97), RS diagnosis (OR 10.1; 95% CI 2.4-42.6), and medical nutritional support within 48 h (OR 0.12; 95% CI 0.02-0.56) were associated with mortality. Conclusions: RS incidence is high in clinical wards, influencing clinical outcomes. Awareness among clinicians is necessary to identify patients at risk and to support those developing this syndrome.
12,814
281
[ 4284, 95, 1596, 85, 548, 155, 44, 114, 195, 1156, 145, 144, 130, 105 ]
19
[ "rs", "patients", "risk", "days", "levels", "loss", "energy", "admission", "low", "moderate" ]
[ "introduction refeeding syndrome", "occurs malnourished patients", "refeeding syndrome potentially", "reintroduction nutrition retrospective", "excessive nutrients rapidly" ]
null
[CONTENT] refeeding | malnutrition | ASPEN criteria | mortality | length of hospital stay | readmission [SUMMARY]
null
[CONTENT] refeeding | malnutrition | ASPEN criteria | mortality | length of hospital stay | readmission [SUMMARY]
[CONTENT] refeeding | malnutrition | ASPEN criteria | mortality | length of hospital stay | readmission [SUMMARY]
[CONTENT] refeeding | malnutrition | ASPEN criteria | mortality | length of hospital stay | readmission [SUMMARY]
[CONTENT] refeeding | malnutrition | ASPEN criteria | mortality | length of hospital stay | readmission [SUMMARY]
[CONTENT] Cohort Studies | Gastroenterology | Humans | Incidence | Length of Stay | Malnutrition | Prospective Studies | Refeeding Syndrome | Tertiary Care Centers [SUMMARY]
null
[CONTENT] Cohort Studies | Gastroenterology | Humans | Incidence | Length of Stay | Malnutrition | Prospective Studies | Refeeding Syndrome | Tertiary Care Centers [SUMMARY]
[CONTENT] Cohort Studies | Gastroenterology | Humans | Incidence | Length of Stay | Malnutrition | Prospective Studies | Refeeding Syndrome | Tertiary Care Centers [SUMMARY]
[CONTENT] Cohort Studies | Gastroenterology | Humans | Incidence | Length of Stay | Malnutrition | Prospective Studies | Refeeding Syndrome | Tertiary Care Centers [SUMMARY]
[CONTENT] Cohort Studies | Gastroenterology | Humans | Incidence | Length of Stay | Malnutrition | Prospective Studies | Refeeding Syndrome | Tertiary Care Centers [SUMMARY]
[CONTENT] introduction refeeding syndrome | occurs malnourished patients | refeeding syndrome potentially | reintroduction nutrition retrospective | excessive nutrients rapidly [SUMMARY]
null
[CONTENT] introduction refeeding syndrome | occurs malnourished patients | refeeding syndrome potentially | reintroduction nutrition retrospective | excessive nutrients rapidly [SUMMARY]
[CONTENT] introduction refeeding syndrome | occurs malnourished patients | refeeding syndrome potentially | reintroduction nutrition retrospective | excessive nutrients rapidly [SUMMARY]
[CONTENT] introduction refeeding syndrome | occurs malnourished patients | refeeding syndrome potentially | reintroduction nutrition retrospective | excessive nutrients rapidly [SUMMARY]
[CONTENT] introduction refeeding syndrome | occurs malnourished patients | refeeding syndrome potentially | reintroduction nutrition retrospective | excessive nutrients rapidly [SUMMARY]
[CONTENT] rs | patients | risk | days | levels | loss | energy | admission | low | moderate [SUMMARY]
null
[CONTENT] rs | patients | risk | days | levels | loss | energy | admission | low | moderate [SUMMARY]
[CONTENT] rs | patients | risk | days | levels | loss | energy | admission | low | moderate [SUMMARY]
[CONTENT] rs | patients | risk | days | levels | loss | energy | admission | low | moderate [SUMMARY]
[CONTENT] rs | patients | risk | days | levels | loss | energy | admission | low | moderate [SUMMARY]
[CONTENT] metabolic | reduction | water | cofactors | occurs | rs | refeeding | death | minerals | defined [SUMMARY]
null
[CONTENT] 95 ci | ci | 95 | patients | mean | rs | table | 48 | admission | associated [SUMMARY]
[CONTENT] rs | higher | patients | admitted department internal medicine | 48 incidence | support patients developing syndrome | support patients developing | support patients | 48 incidence rs | 48 incidence rs diagnosis [SUMMARY]
[CONTENT] rs | patients | risk | 95 ci | ci | days | 95 | admission | levels | 48 [SUMMARY]
[CONTENT] rs | patients | risk | 95 ci | ci | days | 95 | admission | levels | 48 [SUMMARY]
[CONTENT] RS ||| The American Society for Parenteral and Enteral Nutrition | RS ||| [SUMMARY]
null
[CONTENT] 203 | 98 | 48.3% | RS | RS | 38 | 18.7% ||| RS | LOS | 12.5 days | 7.9 | RS | 7.1 ± | 4.2 | p &lt | 0.0001 ||| Nine | 4.4% ||| 0.82 | 95% | CI | 0.69-0.97 | RS | 10.1 | 95% | CI | 2.4-42.6 | 48 | 0.12 | 95% | CI | 0.02-0.56 [SUMMARY]
[CONTENT] RS ||| [SUMMARY]
[CONTENT] RS ||| The American Society for Parenteral and Enteral Nutrition | RS ||| ||| the Internal Medicine and Gastroenterology Unit ||| RS | RS ||| 30 days ||| ||| 203 | 98 | 48.3% | RS | RS | 38 | 18.7% ||| RS | LOS | 12.5 days | 7.9 | RS | 7.1 ± | 4.2 | p &lt | 0.0001 ||| Nine | 4.4% ||| 0.82 | 95% | CI | 0.69-0.97 | RS | 10.1 | 95% | CI | 2.4-42.6 | 48 | 0.12 | 95% | CI | 0.02-0.56 ||| RS ||| [SUMMARY]
[CONTENT] RS ||| The American Society for Parenteral and Enteral Nutrition | RS ||| ||| the Internal Medicine and Gastroenterology Unit ||| RS | RS ||| 30 days ||| ||| 203 | 98 | 48.3% | RS | RS | 38 | 18.7% ||| RS | LOS | 12.5 days | 7.9 | RS | 7.1 ± | 4.2 | p &lt | 0.0001 ||| Nine | 4.4% ||| 0.82 | 95% | CI | 0.69-0.97 | RS | 10.1 | 95% | CI | 2.4-42.6 | 48 | 0.12 | 95% | CI | 0.02-0.56 ||| RS ||| [SUMMARY]
The maxillary incisor display at rest: analysis of the underlying components.
30672985
Maxillary incisal display is one of the most important attributes of smile esthetics.
INTRODUCTION
A cross-sectional study was conducted on 150 subjects (75 males, 75 females) aged 18-30 years. The MIDR was recorded from the pretreatment orthodontic records. The following parameters were assessed on lateral cephalograms: ANB angle, mandibular plane angle, palatal plane angle, lower anterior and total anterior facial heights, upper incisor inclination, upper anterior dentoalveolar height, and upper lip length, thickness and protrusion. The relationship between MIDR and various skeletal, dental and soft tissue components was assessed using linear regression analyses.
METHODS
The mean MIDR was significantly greater in females than males (p = 0.011). A significant positive correlation was found between MIDR and ANB angle, mandibular plane angle and lower anterior facial height. A significant negative correlation was found between MIDR and upper lip length and thickness. Linear regression analysis showed that upper lip length was the strongest predictor of MIDR, explaining 29.7% of variance in MIDR. A multiple linear regression model based on mandibular plane angle, lower anterior facial height, upper lip length and upper lip thickness explained about 63.4% of variance in MIDR.
RESULTS
Incisal display at rest was generally greater in females than males. Multiple factors play a role in determining MIDR, nevertheless upper lip length was found to be the strongest predictor of variations in MIDR.
CONCLUSIONS
[ "Adolescent", "Adult", "Anatomic Landmarks", "Cephalometry", "Cross-Sectional Studies", "Esthetics, Dental", "Face", "Female", "Gingiva", "Humans", "Incisor", "Linear Models", "Lip", "Male", "Mandible", "Maxilla", "Skull", "Smiling", "Vertical Dimension", "Young Adult" ]
6340193
INTRODUCTION
Smile is one of the most important expressions contributing to the facial attractiveness. An attractive and pleasing smile enhances the acceptance of an individual in the society by improving interpersonal relationships. 1 With patients becoming increasingly conscious of their dental appearance, smile esthetics has become the primary objective of orthodontic treatment. 2 The most important esthetic goal in orthodontics is to achieve a balanced smile, which can be best described as an appropriate positioning of teeth and gingival scaffold within the dynamic display zone.3 A significant portion of maxillary incisors is also visible during speech, mastication and various facial expressions. The vertical exposure of the maxillary incisors during function is strongly correlated to the maxillary incisor display at rest (MIDR). Various studies have shown that people with pleasing smile esthetics have a MIDR ranging from 2 to 4 mm.4,5 Excessive exposure of the maxillary incisors at rest may result in gummy smile; whereas, the reduced incisor exposure is less esthetic and is considered a sign of aging.4,5 A significant proportion of orthodontic patients present to the dental clinics with the chief complaint of an excessive or reduced maxillary incisor display. 6 The treatment planning for each patient aims at the correction of one or more hard or soft tissue components responsible for a less ideal incisal display. Several hard and soft tissue structures that surround and support maxillary incisors have been shown to affect the MIDR. 6 - 8 An increased or reduced vertical skull dimensions and a discrepancy in the sagittal jaw relationship are the primary skeletal components affecting the MIDR. However, some authors also claim that the vertical maxillary excess (VME) is the strongest determinant of the maxillary incisor display.9-11 The height of anterior portion of maxilla is dependent on the dentoalveolar segment, as patients with extruded anterior teeth have greater anterior maxillary dentoalveolar height. Depending on the severity of VME, orthodontic intrusion of maxillary incisors can be a viable option as an alternative to surgical repositioning of maxilla. 12 However, the true incisor intrusion is limited to 4 mm and its long term stability has not been demonstrated.12-15 The degree of upper incisor inclination is also related to upper incisor display, as retroclined incisors are usually more extruded. 7 Variations in the upper lip length directly affect the MIDR. 16 A short upper lip in relation to the underlying skeletal structures may result in an excessive MIDR and vice versa. 16 , 17 In patients with short upper lip, if the surgical approach to increase the lip length is not opted, the potential of a successful orthodontic camouflage is reduced. However, patients with hyperactive lip elevator muscles may present with a normal MIDR but still show excessive gingival tissues during smile. 18 Thus, along with the dental and skeletal components, the role of soft tissues in determining smile esthetics of an individual cannot be denied. Only few studies addressed the association between these underlying skeletal, dental and soft tissue components and MIDR. 19 , 20 Thus, the treatment of inappropriate display of maxillary incisors is usually limited to only few components that are easy to modify by orthodontic treatment or orthognathic surgery. 
The current study was designed to explore the role of different substructure attributes contributing to the display of maxillary incisors at rest, which may need to be altered by orthodontic or surgical treatment to improve dental esthetics.
null
null
RESULTS
The mean age of males and females included in the study was comparable (p = 0.086). However, females presented a mean MIDR about 1 mm greater than males (p = 0.011) (Table 1).

Table 1. Comparison of mean age and maxillary incisor display at rest between males and females (n = 150; SD = standard deviation; independent sample t-test; * p < 0.05).
                                  Males (n = 75), mean ± SD    Females (n = 75), mean ± SD    P-value
Age (years)                       22.00 ± 4.13                 22.21 ± 4.45                   0.086
Incisal display at rest (mm)      3.72 ± 2.69                  4.77 ± 2.24                    0.011*

A simple linear regression analysis showed that several dental, skeletal and soft tissue components were significantly related to the MIDR (Table 2). The highest variances in MIDR were explained by upper lip length (29.7%), upper lip thickness (27.3%) and mandibular plane angle (25.8%). The palatal plane angle and total anterior facial height were least significantly associated with the MIDR, explaining only 0.06% and 0.00% of the variance, respectively. No significant association was found with age in the present study sample, which comprised the age group 18-30 years.

Table 2. Simple linear regression analysis (n = 150; * p < 0.05).
Variable                                   r         P-value    Adjusted R²
Skeletal components
  ANB angle                                0.311     <0.001*    9.1%
  Mandibular plane angle                   0.513     <0.001*    25.8%
  Palatal plane angle                      0.030     0.716      0.06%
  Lower anterior facial height             0.341     <0.001*    11.0%
  Total anterior facial height             0.079     0.336      0.00%
Dental components
  Upper incisor inclination                -0.195    0.017*     3.2%
  Upper anterior dentoalveolar height      0.169     0.039*     2.2%
Soft tissue components
  Upper lip thickness                      -0.527    <0.001*    27.3%
  Upper lip length                         -0.549    <0.001*    29.7%
  Upper lip protrusion                     0.207     0.011*     3.6%
Age                                        -0.047    0.629      0.00%

Multiple linear stepwise regression analysis was used to remove inter-correlation among the eight independent variables and to find the clinically important variables that could predict the amount of MIDR. This resulted in a four-variable model incorporating mandibular plane angle, lower anterior facial height, upper lip thickness and upper lip length, explaining about 63% of the variance in the MIDR (Table 3).

Table 3. Multiple linear regression model (n = 150; adjusted R² = 0.634; * p < 0.05).
Variable                          Coefficient (B)    Standard error    P-value
Constant                          4.816              1.568             0.003
Mandibular plane angle            0.094              0.025             <0.001*
Lower anterior facial height      0.083              0.018             <0.001*
Upper lip thickness               -0.369             0.043             <0.001*
Upper lip length                  -0.134             0.041             <0.001*
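Table 3 above fully specifies the fitted multiple linear regression model for MIDR. The short sketch below applies those published coefficients to a hypothetical patient to show how a predicted incisal display at rest is obtained; the input values (and the assumption that the angle is in degrees and the linear measurements in millimetres) are illustrative only, and the model explains only about 63% of the variance.

```python
# Coefficients taken from Table 3 (adjusted R² = 0.634).
INTERCEPT = 4.816
COEFFICIENTS = {
    "mandibular_plane_angle": 0.094,        # degrees (assumed)
    "lower_anterior_facial_height": 0.083,  # mm (assumed)
    "upper_lip_thickness": -0.369,          # mm (assumed)
    "upper_lip_length": -0.134,             # mm (assumed)
}

def predicted_midr(measurements: dict) -> float:
    """Predicted maxillary incisor display at rest (mm) from the Table 3 model."""
    return INTERCEPT + sum(
        COEFFICIENTS[name] * value for name, value in measurements.items()
    )

# Hypothetical cephalometric values, for illustration only.
example = {
    "mandibular_plane_angle": 32.0,
    "lower_anterior_facial_height": 65.0,
    "upper_lip_thickness": 12.0,
    "upper_lip_length": 22.0,
}
print(f"Predicted MIDR: {predicted_midr(example):.1f} mm")  # ≈ 5.8 mm for these inputs
```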
CONCLUSIONS
Maxillary incisal display at rest was generally greater in females than in males. Upper lip length was found to be the strongest predictor of the maxillary incisal display at rest; however, several soft tissue, hard tissue and dental components affected MIDR. About two-thirds of the variance in the maxillary incisal display at rest was explained by the vertical facial pattern and by upper lip length and thickness.
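For the univariable results in Table 2, the adjusted R² column follows directly from the correlation coefficient, since R² = r² for a single predictor and the adjustment only corrects for sample size. The sketch below reproduces the reported 29.7% for upper lip length (r = -0.549, n = 150) using the standard formula; it is included only to make the link between the r and adjusted R² columns explicit.

```python
def adjusted_r_squared(r: float, n: int, k: int = 1) -> float:
    """Adjusted R² for a model with k predictors fitted on n observations."""
    r2 = r ** 2
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Upper lip length from Table 2: r = -0.549 with n = 150 subjects.
print(f"{adjusted_r_squared(-0.549, 150):.1%}")  # prints 29.7%
```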
[ "Skeletal components", "Dental components", "Soft tissue components " ]
[ "\n» ANB angle: angle formed by points A, N and B.» Palatal plane angle: angle formed between SN plane and Palatal Plane (PP).» Mandibular plane angle: angle formed between SN plane and GoGn plane.» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).» Total anterior facial height (TAFH): linear distance from nasion to Me.\n\n» ANB angle: angle formed by points A, N and B.\n» Palatal plane angle: angle formed between SN plane and Palatal Plane (PP).\n» Mandibular plane angle: angle formed between SN plane and GoGn plane.\n» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).\n» Total anterior facial height (TAFH): linear distance from nasion to Me.", "\n» Upper incisor inclination (UISN): angle formed between the long axis of most prominent maxillary incisors and SN plane.» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of maxillary incisor.\n\n» Upper incisor inclination (UISN): angle formed between the long axis of most prominent maxillary incisors and SN plane.\n» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of maxillary incisor.", "\n» Upper lip length (ULL): linear distance from the junction of nasal columella and upper lip to the junction of upper and lower lips.» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in midline.» Upper lip procumbency: shortest distance between E - plane and Ls, recorded as positive value if Ls is anterior to E - plane, and negative if Ls is posterior to E - plane.\n\n» Upper lip length (ULL): linear distance from the junction of nasal columella and upper lip to the junction of upper and lower lips.\n» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in midline.\n» Upper lip procumbency: shortest distance between E - plane and Ls, recorded as positive value if Ls is anterior to E - plane, and negative if Ls is posterior to E - plane.\nTo assess the measurement error, 30 lateral cephalograms were randomly selected and the steps of landmarks identification, tracing and measurement were repeated by the main researcher after three weeks of initial examination. Intra-class correlation coefficients were performed to assess the reliability for the two sets of measurements. The values of coefficients of reliability were found to be greater than 0.91 and 0.88 for all linear and angular variables, respectively.\nData were analyzed in SPSS for Windows (version 20.0, SPSS Inc. Chicago). Kolmogorov-Smirnov test was used to check the normality of the measurements. Independent sample t-test was used to compare the mean age and mean incisal display at rest, between males and females. Linear regression analyses were performed to assess the variations in maxillary incisal display explained by each component. A multiple linear regression model was generated based on the four strongest factors. A p-value < 0.05 was considered statistically significant." ]
[ null, null, null ]
[ "INTRODUCTION", "MATERIAL AND METHODS", "Skeletal components", "Dental components", "Soft tissue components ", "RESULTS", "DISCUSSION", "CONCLUSIONS" ]
[ "Smile is one of the most important expressions contributing to the facial attractiveness. An attractive and pleasing smile enhances the acceptance of an individual in the society by improving interpersonal relationships.\n1\n With patients becoming increasingly conscious of their dental appearance, smile esthetics has become the primary objective of orthodontic treatment.\n2\n The most important esthetic goal in orthodontics is to achieve a balanced smile, which can be best described as an appropriate positioning of teeth and gingival scaffold within the dynamic display zone.3 A significant portion of maxillary incisors is also visible during speech, mastication and various facial expressions. The vertical exposure of the maxillary incisors during function is strongly correlated to the maxillary incisor display at rest (MIDR). \nVarious studies have shown that people with pleasing smile esthetics have a MIDR ranging from 2 to 4 mm.4,5 Excessive exposure of the maxillary incisors at rest may result in gummy smile; whereas, the reduced incisor exposure is less esthetic and is considered a sign of aging.4,5 A significant proportion of orthodontic patients present to the dental clinics with the chief complaint of an excessive or reduced maxillary incisor display.\n6\n The treatment planning for each patient aims at the correction of one or more hard or soft tissue components responsible for a less ideal incisal display. \nSeveral hard and soft tissue structures that surround and support maxillary incisors have been shown to affect the MIDR.\n6\n\n-\n\n8\n An increased or reduced vertical skull dimensions and a discrepancy in the sagittal jaw relationship are the primary skeletal components affecting the MIDR. However, some authors also claim that the vertical maxillary excess (VME) is the strongest determinant of the maxillary incisor display.9-11 The height of anterior portion of maxilla is dependent on the dentoalveolar segment, as patients with extruded anterior teeth have greater anterior maxillary dentoalveolar height. Depending on the severity of VME, orthodontic intrusion of maxillary incisors can be a viable option as an alternative to surgical repositioning of maxilla.\n12\n However, the true incisor intrusion is limited to 4 mm and its long term stability has not been demonstrated.12-15 The degree of upper incisor inclination is also related to upper incisor display, as retroclined incisors are usually more extruded.\n7\n\n\nVariations in the upper lip length directly affect the MIDR.\n16\n A short upper lip in relation to the underlying skeletal structures may result in an excessive MIDR and vice versa.\n16\n\n,\n\n17\n In patients with short upper lip, if the surgical approach to increase the lip length is not opted, the potential of a successful orthodontic camouflage is reduced. However, patients with hyperactive lip elevator muscles may present with a normal MIDR but still show excessive gingival tissues during smile.\n18\n Thus, along with the dental and skeletal components, the role of soft tissues in determining smile esthetics of an individual cannot be denied.\nOnly few studies addressed the association between these underlying skeletal, dental and soft tissue components and MIDR.\n19\n\n,\n\n20\n Thus, the treatment of inappropriate display of maxillary incisors is usually limited to only few components that are easy to modify by orthodontic treatment or orthognathic surgery. 
The current study was designed to explore the role of different substructure attributes contributing to the display of maxillary incisors at rest, which may need to be altered by orthodontic or surgical treatment to improve dental esthetics. ", "A retrospective cross-sectional study was conducted at The Aga Khan University Hospital, using the pretreatment orthodontic records of adult orthodontic patients aged 18 to 30 years. The sample size was calculated using the findings of Arriola-Guillen and Flores-Mir,\n21\n who reported the correlation between the upper incisor display and upper lip height as -0.333. The power was set at 90% and alpha was kept as 0.05 to calculate the sample size, which showed a sample of 48 subjects was required. However, to increase the power of this study, the maximum number of available subjects was included. This resulted in a total sample of 150 subjects (75 males and 75 females). Ethical clearance was obtained from the ethical review committee of The Aga Khan University (ERC Exemption No. 4003-Sur-ERC-16) prior to the data collection.\nSubjects with previous history of orthodontic treatment, trauma or surgery involving facial structures or with any craniofacial anomaly or syndrome were excluded from the study. \nThe MIDR of all subjects was clinically measured using a millimeter scale, with the patient sitting upright, with lips completely relaxed. The maximum distance from the lowest point of upper lip to the incisal edge of any of the upper incisor was recorded as MIDR. The lateral cephalograms were recorded with the standardized method using Orthoralix 9200 (Gendex-KaVo, Milan, Italy). The technique involved rigid head fixation in a cephalostat and a 165-cm film-to-tube distance. The sagittal facial plane was held at a right angle to the path of the X-rays, while the Frankfort Horizontal Plane (FHP) of the subject was kept parallel to the horizontal plane. Teeth were occluded in the centric occlusion and lips were maintained in a relaxed position. \nThe lateral cephalograms of all the patients were manually traced by the main investigator on acetate paper, and the linear and angular measurements of all skeletal, dental and soft tissue components were performed with the help of a millimeter ruler and protractor, respectively (Figs 1 and 2). The following skeletal, dental and soft tissue components were included in the study: \n\nFigure 1- Skeletal components: ANB angle, palatal plane angle, mandibular plane angle, lower anterior facial height (LAFH), total anterior facial height (TAFH). 
PP, palatal plane; ANS, anterior nasal spine; PNS, posterior nasal spine; Go, gonion; Gn, gnathion; N, nasion; S, sella; A, deepest point at the anterior aspect of maxillary alveolar process; B, deepest point at the anterior aspect of mandibular alveolar process.\n\n\nFigure 2- Dental and soft tissue components: upper anterior dentoalveolar height (UADH); upper incisor to SN plane (UISN) angle; upper lip length (ULL); upper lip thickness (ULT); upper lip procumbency (the linear distance from Ls to the E line); PP, palatal plane; N, nasion; S, sella; E-plane, a plane joining the most prominent points of nose and chin; Ls, labrale superius - the most prominent point on the vermilion border of upper lip.\n\n Skeletal components \n» ANB angle: angle formed by points A, N and B.» Palatal plane angle: angle formed between SN plane and Palatal Plane (PP).» Mandibular plane angle: angle formed between SN plane and GoGn plane.» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).» Total anterior facial height (TAFH): linear distance from nasion to Me.\n\n» ANB angle: angle formed by points A, N and B.\n» Palatal plane angle: angle formed between SN plane and Palatal Plane (PP).\n» Mandibular plane angle: angle formed between SN plane and GoGn plane.\n» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).\n» Total anterior facial height (TAFH): linear distance from nasion to Me.\n\n» ANB angle: angle formed by points A, N and B.» Palatal plane angle: angle formed between SN plane and Palatal Plane (PP).» Mandibular plane angle: angle formed between SN plane and GoGn plane.» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).» Total anterior facial height (TAFH): linear distance from nasion to Me.\n\n» ANB angle: angle formed by points A, N and B.\n» Palatal plane angle: angle formed between SN plane and Palatal Plane (PP).\n» Mandibular plane angle: angle formed between SN plane and GoGn plane.\n» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).\n» Total anterior facial height (TAFH): linear distance from nasion to Me.\n Dental components \n» Upper incisor inclination (UISN): angle formed between the long axis of most prominent maxillary incisors and SN plane.» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of maxillary incisor.\n\n» Upper incisor inclination (UISN): angle formed between the long axis of most prominent maxillary incisors and SN plane.\n» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of maxillary incisor.\n\n» Upper incisor inclination (UISN): angle formed between the long axis of most prominent maxillary incisors and SN plane.» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of maxillary incisor.\n\n» Upper incisor inclination (UISN): angle formed between the long axis of most prominent maxillary incisors and SN plane.\n» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of maxillary incisor.\n Soft tissue components \n» Upper lip length (ULL): linear distance from the junction of nasal columella and upper lip to the junction of upper and lower lips.» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in midline.» Upper lip procumbency: shortest distance between E - plane and Ls, recorded as positive value if Ls is anterior to E - plane, and negative if Ls is posterior to E - 
plane.\n\n» Upper lip length (ULL): linear distance from the junction of nasal columella and upper lip to the junction of upper and lower lips.\n» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in midline.\n» Upper lip procumbency: shortest distance between E - plane and Ls, recorded as positive value if Ls is anterior to E - plane, and negative if Ls is posterior to E - plane.\nTo assess the measurement error, 30 lateral cephalograms were randomly selected and the steps of landmarks identification, tracing and measurement were repeated by the main researcher after three weeks of initial examination. Intra-class correlation coefficients were performed to assess the reliability for the two sets of measurements. The values of coefficients of reliability were found to be greater than 0.91 and 0.88 for all linear and angular variables, respectively.\nData were analyzed in SPSS for Windows (version 20.0, SPSS Inc. Chicago). Kolmogorov-Smirnov test was used to check the normality of the measurements. Independent sample t-test was used to compare the mean age and mean incisal display at rest, between males and females. Linear regression analyses were performed to assess the variations in maxillary incisal display explained by each component. A multiple linear regression model was generated based on the four strongest factors. A p-value < 0.05 was considered statistically significant.\n\n» Upper lip length (ULL): linear distance from the junction of nasal columella and upper lip to the junction of upper and lower lips.» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in midline.» Upper lip procumbency: shortest distance between E - plane and Ls, recorded as positive value if Ls is anterior to E - plane, and negative if Ls is posterior to E - plane.\n\n» Upper lip length (ULL): linear distance from the junction of nasal columella and upper lip to the junction of upper and lower lips.\n» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in midline.\n» Upper lip procumbency: shortest distance between E - plane and Ls, recorded as positive value if Ls is anterior to E - plane, and negative if Ls is posterior to E - plane.\nTo assess the measurement error, 30 lateral cephalograms were randomly selected and the steps of landmarks identification, tracing and measurement were repeated by the main researcher after three weeks of initial examination. Intra-class correlation coefficients were performed to assess the reliability for the two sets of measurements. The values of coefficients of reliability were found to be greater than 0.91 and 0.88 for all linear and angular variables, respectively.\nData were analyzed in SPSS for Windows (version 20.0, SPSS Inc. Chicago). Kolmogorov-Smirnov test was used to check the normality of the measurements. Independent sample t-test was used to compare the mean age and mean incisal display at rest, between males and females. Linear regression analyses were performed to assess the variations in maxillary incisal display explained by each component. A multiple linear regression model was generated based on the four strongest factors. 
A p-value < 0.05 was considered statistically significant.", "\n» ANB angle: angle formed by points A, N and B.» Palatal plane angle: angle formed between SN plane and Palatal Plane (PP).» Mandibular plane angle: angle formed between SN plane and GoGn plane.» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).» Total anterior facial height (TAFH): linear distance from nasion to Me.\n\n» ANB angle: angle formed by points A, N and B.\n» Palatal plane angle: angle formed between SN plane and Palatal Plane (PP).\n» Mandibular plane angle: angle formed between SN plane and GoGn plane.\n» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).\n» Total anterior facial height (TAFH): linear distance from nasion to Me.", "\n» Upper incisor inclination (UISN): angle formed between the long axis of most prominent maxillary incisors and SN plane.» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of maxillary incisor.\n\n» Upper incisor inclination (UISN): angle formed between the long axis of most prominent maxillary incisors and SN plane.\n» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of maxillary incisor.", "\n» Upper lip length (ULL): linear distance from the junction of nasal columella and upper lip to the junction of upper and lower lips.» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in midline.» Upper lip procumbency: shortest distance between E - plane and Ls, recorded as positive value if Ls is anterior to E - plane, and negative if Ls is posterior to E - plane.\n\n» Upper lip length (ULL): linear distance from the junction of nasal columella and upper lip to the junction of upper and lower lips.\n» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in midline.\n» Upper lip procumbency: shortest distance between E - plane and Ls, recorded as positive value if Ls is anterior to E - plane, and negative if Ls is posterior to E - plane.\nTo assess the measurement error, 30 lateral cephalograms were randomly selected and the steps of landmarks identification, tracing and measurement were repeated by the main researcher after three weeks of initial examination. Intra-class correlation coefficients were performed to assess the reliability for the two sets of measurements. The values of coefficients of reliability were found to be greater than 0.91 and 0.88 for all linear and angular variables, respectively.\nData were analyzed in SPSS for Windows (version 20.0, SPSS Inc. Chicago). Kolmogorov-Smirnov test was used to check the normality of the measurements. Independent sample t-test was used to compare the mean age and mean incisal display at rest, between males and females. Linear regression analyses were performed to assess the variations in maxillary incisal display explained by each component. A multiple linear regression model was generated based on the four strongest factors. A p-value < 0.05 was considered statistically significant.", "The mean age of males and females included in the study was comparable (p = 0.086). However, females presented a mean MIDR 1 mm greater than males (p = 0.011) (Table 1).\n\nTable 1Comparison of mean ages and maxillary incisor display at rest, between males and females. 
Males (n = 75) Mean ± SDFemales (n = 75) Mean ± SDP - valueAge (years)22.00 ± 4.1322.21 ± 4.450.086Incisal display at rest (mm)3.72 ± 2.694.77 ± 2.240.011*n = 150; SD = standard deviation; independent sample t-test.* p < 0.05.\n\nn = 150; SD = standard deviation; independent sample t-test.\n* p < 0.05.\nA simple linear regression analysis showed that several dental, skeletal and soft tissue components were significantly related to the MIDR (Table 2). The highest variances in MIDR were explained by upper lip length (29.7%), upper lip thickness (27.3%) and mandibular plane angle (25.8%). The palatal plane angle and total anterior facial height were least significantly associated with the MIDR, explaining only 0.06% and 0.00% variance, respectively. No significant association was found with age in the present study sample comprising the age group 18-30 years. \n\nTable 2Simple linear regression analysis. VariablerP - valueAdjusted R\n2\n\nSkeletal componentsANB Angle0.311<0.001*9.1%Mandibular Plane Angle0.513<0.001*25.8%Palatal Plane Angle0.0300.7160.06%Lower Anterior Facial Height0.341<0.001*11.0%Total Anterior Facial Height0.0790.3360.00%Dental componentsUpper Incisor Inclination-0.1950.017*3.2%Upper Anterior Dentoalveolar Height0.1690.039*2.2%Soft tissue componentsUpper Lip Thickness-0.527<0.001*27.3%Upper Lip Length-0.549<0.001*29.7%Upper Lip Protrusion0.2070.011*3.6%Age -0.0470.6290.00%n = 150; Linear regression analysis.* p < 0.05.\n\nn = 150; Linear regression analysis.\n* p < 0.05.\nMultiple linear stepwise regression analysis was used to remove inter-correlation among the eight independent variables and to find out the clinically important variables that could predict the amount of MIDR. This resulted in a four-variable model incorporating mandibular plane angle, lower anterior facial height, upper lip thickness and upper lip length, explaining about of 63% variance in the MIDR (Table 3).\n\nTable 3Multiple linear regression model.VariableCoefficient (B)Standard ErrorP - valueConstant4.8161.5680.003Mandibular Plane Angle0.0940.025<0.001*Lower Anterior Facial Height0.0830.018<0.001*Upper Lip Thickness-0.3690.043<0.001*Upper Lip Length-0.1340.041<0.001*n = 150; Adjusted R\n2\n = 0.634.* p < 0.05.\n\nn = 150; Adjusted R\n2\n = 0.634.\n* p < 0.05.", "It is difficult to develop an accurate and reproducible method of assessing maxillary incisal display at smile that can be used universally.\n23\n Several factors such as age, sex, emotional status, and circadian rhythms can affect the MIDR and the activity of the orofacial muscles involved in the dynamic process of smiling.\n23\n\n-\n\n25\n All of these factors could not be controlled in the present study. A large sample size of only young adults with equal representation of males and females might have mitigated the effects of some confounders. Moreover, maxillary incisal display during other facial expressions and normal conversation is difficult to be objectively assessed. In this regard, MIDR has been found to be strongly correlated to the maxillary incisal display during function, and have been used to represent the dental component of the facial esthetics.\n26\n\n\nA reduction in the MIDR is a part of the normal aging process. To reduce the impact of age, only young adults aged 18-30 years were included in this study, allowing for better analysis of MIDR relationship with different anatomic variables. 
The current study reported a sexual dimorphism in MIDR, which was in disagreement with the findings of other studies.\n26\n\n,\n\n27\n However, other studies11,28 have shown that women show more maxillary incisal display than men, which is in agreement with the present results. The structural differences in the facial soft and hard tissues between males and females may explain a greater MIDR in women than men. An ultrasound-based investigation has shown that females have relatively thicker zygomaticus major muscles as compared to males.\n29\n Similarly, a Class II jaw relationship is more frequently found in females, which is strongly correlated to a greater MIDR.\n19\n However, larger size of clinical crowns in males may partially negate the effect of variations in soft tissue anatomy.\n11\n Thus, interaction between several underlying components play a role in determining the ultimate proportion of maxillary incisors visible during rest and function. Interestingly, when several variables were considered in a multiple linear regression model, the gender failed to contribute significantly to the total variation in MIDR.\nThe present findings present upper lip length as the major etiological factor affecting maxillary incisal display. However, there are controversial reports about the role of upper lip length in the published literature. Some studies\n13\n\n,\n\n16\n provide evidence that short upper lip is associated with excessive upper incisal display; whereas, other studies\n18\n\n,\n\n27\n\n,\n\n30\n claim that a short upper lip is most frequently found in patients with short facial height and reduced incisal display. Despite these conflicting reports, orthodontists frequently consider a short upper lip as the cause of gummy smile. Surgical lip lengthening and use of Botox injections remain the main treatment for short upper lip.31 However, due to the invasive nature, unpredictable results and possible complications of surgical lip lengthening, and temporary results of Botox injections, most of the patients with gummy smile are treated with orthodontic intrusion of upper incisors, crown lengthening procedures or Le Fort I maxillary impaction.\n31\n\n,\n\n32\n\n\nThe morphological variation of maxilla, its rotation around the transverse axis and its position in sagittal plane, all have been implicated in the cases of an excessive or reduced MIDR. Anterior maxillary dentoalveolar height, also regarded as anterior maxillary height or vertical maxillary height in literature,\n21\n have been shown to be significantly associated with the excessive incisor display.9-11 The morphology of anterior maxilla is determined by both genetic and environmental factors. Studies have shown that the upper anterior dentoalveolar height is affected by dental intrusion or extrusion, under the influence of different environmental or therapeutic factors; thus it was included in the dental components in the current study.\n12\n\n,\n\n21\n Similarly, a clockwise rotation of maxillary base may result in an excessive MIDR, while a counter-clockwise rotation results in reduced incisal display.11 Lastly, the maxillary prognathism has been shown to be associated with an excessive maxillary incisal display.\n19\n The current study investigated the role of anterior maxillary dentoalveolar height, the palatal plane angle and maxillary prognathism in relation to mandible in determining the amount of MIDR. 
No significant association was found between palatal plane angle and MIDR, while a weak positive correlation was found between anterior maxillary dentoalveolar height and MIDR. However, the maxillary position with respect to mandibular sagittal plane as assessed through ANB angle was significantly associated with the MIDR, explaining about 9% variance. These results are in agreement with the findings of previous studies.19,20 In addition, a Class II jaw relationship with maxillary prognathism is associated with a thin upper lip.34 A moderate negative correlation between the upper lip thickness and MIDR, as discovered in this study, explains the interaction between the skeletal and soft tissue components and its effect on MIDR.\nApart from the lip characteristics, the second factor that has most consistently been linked to MIDR is the vertical facial dimension.\n34\n\n,\n\n35\n The vertical facial proportions are assessed by parameters such as total anterior facial height, lower anterior facial height, cranial base to mandibular plane angle, and Frankfort horizontal plane to mandibular plane angle. The current study contemplates cranial base to mandibular plane angle among the strongest predictors of MIDR, explaining about 25% of variance in MIDR. Similarly, lower anterior facial height was also found to be significantly associated with MIDR. A multitude of studies corroborate the present findings.\n19\n\n,\n\n20\n The relevance of use of vertical pull headgear in growing children and surgical correction of increased facial dimension with Le Fort I maxillary impaction cannot be overemphasized in this regard.\nAmong dental components, Sabri36 claimed that proclination of maxillary incisors can significantly reduce MIDR. This might be true for some patients, however, upper incisor to SN plane inclination was not found to be significantly associated with MIDR in the current study. Similar findings were reported by Suh et al20 not only for upper incisor inclination, but also for other dental components such as overjet and overbite. Thus, the chief determinants of maxillary incisor display are soft and hard tissue components, and treatment should ideally be directed towards correction of these attributes. \nThis analysis describes the association between the MIDR and different dental, skeletal and soft tissue components and provides insights of the etiological bases of inappropriate display of maxillary incisors. Findings of the current study may facilitate the decision-making process in orthodontic patients lacking an ideal maxillary incisal display, thus can help in making more efficient treatment plans for these patients. The orthodontic clinician can focus on the main underlying component, design an individualized treatment plan, and tailor a suitable mechanotherapy protocol according to the patient’s need. However, the variables included in the multiple linear regression model explain only 63% of the variation in MIDR, which indicates that other factors remain to be identified. \nThe other limitation of the current study is the use of MIDR as the predictor of maxillary incisal display during function. In social circumstances, the maxillary incisal display during conversation, smile and other facial expressions has more practical significance, and thus should be analyzed accordingly. 
Hyperactivity of lip muscles has been reported as the possible cause of gummy smile by different researchers, and poor correlation has been reported between the MIDR and maxillary incisal display during smile in these patients.\n16\n\n-\n\n18\n Thus, studies with methodology involving evaluation of smile dynamic could provide better explanations of etiological factors of unaesthetic display of maxillary incisors during function. ", "Maxillary incisal display at rest was generally greater in females than males. Upper lip length was found to be the strongest predictor of the maxillary incisal display at rest; however, several soft tissue, hard tissue and dental components affected MIDR. About two-third variance in the maxillary incisal display at rest was explained by the vertical facial pattern, and upper lip length and thickness." ]
[ "intro", "materials|methods", null, null, null, "results", "discussion", "conclusions" ]
[ "Esthetics", "Incisal display", "Lip." ]
INTRODUCTION: Smile is one of the most important expressions contributing to the facial attractiveness. An attractive and pleasing smile enhances the acceptance of an individual in the society by improving interpersonal relationships. 1 With patients becoming increasingly conscious of their dental appearance, smile esthetics has become the primary objective of orthodontic treatment. 2 The most important esthetic goal in orthodontics is to achieve a balanced smile, which can be best described as an appropriate positioning of teeth and gingival scaffold within the dynamic display zone.3 A significant portion of maxillary incisors is also visible during speech, mastication and various facial expressions. The vertical exposure of the maxillary incisors during function is strongly correlated to the maxillary incisor display at rest (MIDR). Various studies have shown that people with pleasing smile esthetics have a MIDR ranging from 2 to 4 mm.4,5 Excessive exposure of the maxillary incisors at rest may result in gummy smile; whereas, the reduced incisor exposure is less esthetic and is considered a sign of aging.4,5 A significant proportion of orthodontic patients present to the dental clinics with the chief complaint of an excessive or reduced maxillary incisor display. 6 The treatment planning for each patient aims at the correction of one or more hard or soft tissue components responsible for a less ideal incisal display. Several hard and soft tissue structures that surround and support maxillary incisors have been shown to affect the MIDR. 6 - 8 An increased or reduced vertical skull dimensions and a discrepancy in the sagittal jaw relationship are the primary skeletal components affecting the MIDR. However, some authors also claim that the vertical maxillary excess (VME) is the strongest determinant of the maxillary incisor display.9-11 The height of anterior portion of maxilla is dependent on the dentoalveolar segment, as patients with extruded anterior teeth have greater anterior maxillary dentoalveolar height. Depending on the severity of VME, orthodontic intrusion of maxillary incisors can be a viable option as an alternative to surgical repositioning of maxilla. 12 However, the true incisor intrusion is limited to 4 mm and its long term stability has not been demonstrated.12-15 The degree of upper incisor inclination is also related to upper incisor display, as retroclined incisors are usually more extruded. 7 Variations in the upper lip length directly affect the MIDR. 16 A short upper lip in relation to the underlying skeletal structures may result in an excessive MIDR and vice versa. 16 , 17 In patients with short upper lip, if the surgical approach to increase the lip length is not opted, the potential of a successful orthodontic camouflage is reduced. However, patients with hyperactive lip elevator muscles may present with a normal MIDR but still show excessive gingival tissues during smile. 18 Thus, along with the dental and skeletal components, the role of soft tissues in determining smile esthetics of an individual cannot be denied. Only few studies addressed the association between these underlying skeletal, dental and soft tissue components and MIDR. 19 , 20 Thus, the treatment of inappropriate display of maxillary incisors is usually limited to only few components that are easy to modify by orthodontic treatment or orthognathic surgery. 
The current study was designed to explore the role of different substructure attributes contributing to the display of maxillary incisors at rest, which may need to be altered by orthodontic or surgical treatment to improve dental esthetics. MATERIAL AND METHODS: A retrospective cross-sectional study was conducted at The Aga Khan University Hospital, using the pretreatment orthodontic records of adult orthodontic patients aged 18 to 30 years. The sample size was calculated using the findings of Arriola-Guillen and Flores-Mir, 21 who reported the correlation between the upper incisor display and upper lip height as -0.333. The power was set at 90% and alpha was kept as 0.05 to calculate the sample size, which showed a sample of 48 subjects was required. However, to increase the power of this study, the maximum number of available subjects was included. This resulted in a total sample of 150 subjects (75 males and 75 females). Ethical clearance was obtained from the ethical review committee of The Aga Khan University (ERC Exemption No. 4003-Sur-ERC-16) prior to the data collection. Subjects with previous history of orthodontic treatment, trauma or surgery involving facial structures or with any craniofacial anomaly or syndrome were excluded from the study. The MIDR of all subjects was clinically measured using a millimeter scale, with the patient sitting upright, with lips completely relaxed. The maximum distance from the lowest point of upper lip to the incisal edge of any of the upper incisor was recorded as MIDR. The lateral cephalograms were recorded with the standardized method using Orthoralix 9200 (Gendex-KaVo, Milan, Italy). The technique involved rigid head fixation in a cephalostat and a 165-cm film-to-tube distance. The sagittal facial plane was held at a right angle to the path of the X-rays, while the Frankfort Horizontal Plane (FHP) of the subject was kept parallel to the horizontal plane. Teeth were occluded in the centric occlusion and lips were maintained in a relaxed position. The lateral cephalograms of all the patients were manually traced by the main investigator on acetate paper, and the linear and angular measurements of all skeletal, dental and soft tissue components were performed with the help of a millimeter ruler and protractor, respectively (Figs 1 and 2). The following skeletal, dental and soft tissue components were included in the study: Figure 1- Skeletal components: ANB angle, palatal plane angle, mandibular plane angle, lower anterior facial height (LAFH), total anterior facial height (TAFH). PP, palatal plane; ANS, anterior nasal spine; PNS, posterior nasal spine; Go, gonion; Gn, gnathion; N, nasion; S, sella; A, deepest point at the anterior aspect of maxillary alveolar process; B, deepest point at the anterior aspect of mandibular alveolar process. Figure 2- Dental and soft tissue components: upper anterior dentoalveolar height (UADH); upper incisor to SN plane (UISN) angle; upper lip length (ULL); upper lip thickness (ULT); upper lip procumbency (the linear distance from Ls to the E line); PP, palatal plane; N, nasion; S, sella; E-plane, a plane joining the most prominent points of nose and chin; Ls, labrale superius - the most prominent point on the vermilion border of upper lip. 
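The cephalometric measurements listed above (and defined in detail below) are planar angles and distances between digitized landmarks. As a purely illustrative sketch, not part of the original study (which used manual tracing with a millimeter ruler and protractor), the ANB angle defined in the next paragraph could be computed from hypothetical landmark coordinates like this:

```python
import math

def angle_at_vertex(vertex, p1, p2):
    """Planar angle in degrees formed at `vertex` by the rays vertex->p1 and vertex->p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    cos_theta = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# Hypothetical digitized landmark coordinates (mm) from a traced cephalogram.
N = (60.0, 115.0)   # nasion
A = (78.0, 72.0)    # point A
B = (83.5, 49.0)    # point B

anb = angle_at_vertex(N, A, B)  # "angle formed by points A, N and B"
print(f"ANB angle: {anb:.1f} degrees")  # about 3.1 degrees for these made-up coordinates
```

With these invented coordinates the angle comes out near 3 degrees; actual values depend entirely on the traced landmarks of each patient.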
Skeletal components:
» ANB angle: angle formed by points A, N and B.
» Palatal plane angle: angle formed between the SN plane and the Palatal Plane (PP).
» Mandibular plane angle: angle formed between the SN plane and the GoGn plane.
» Lower anterior facial height (LAFH): linear distance from PP to Menton (Me).
» Total anterior facial height (TAFH): linear distance from nasion to Me.
Dental components:
» Upper incisor inclination (UISN): angle formed between the long axis of the most prominent maxillary incisor and the SN plane.
» Upper anterior dentoalveolar height (UADH): shortest distance from PP to the lowest point of the maxillary incisor.
Soft tissue components:
» Upper lip length (ULL): linear distance from the junction of the nasal columella and upper lip to the junction of the upper and lower lips.
» Upper lip thickness (ULT): distance from labrale superius (Ls) to the alveolar bone crest in the midline.
» Upper lip procumbency: shortest distance between the E-plane and Ls, recorded as a positive value if Ls is anterior to the E-plane, and negative if Ls is posterior to the E-plane.
To assess the measurement error, 30 lateral cephalograms were randomly selected and the steps of landmark identification, tracing and measurement were repeated by the main researcher three weeks after the initial examination. Intra-class correlation coefficients were calculated to assess the reliability of the two sets of measurements. The coefficients of reliability were greater than 0.91 and 0.88 for all linear and angular variables, respectively. Data were analyzed in SPSS for Windows (version 20.0, SPSS Inc., Chicago). The Kolmogorov-Smirnov test was used to check the normality of the measurements. The independent sample t-test was used to compare the mean age and mean incisal display at rest between males and females. Linear regression analyses were performed to assess the variation in maxillary incisal display explained by each component. A multiple linear regression model was generated based on the four strongest factors. A p-value < 0.05 was considered statistically significant.
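A minimal sketch of this analysis workflow, assuming Python with pandas, SciPy and statsmodels rather than the SPSS package actually used, and a hypothetical file of the traced measurements (the column names are illustrative, not the authors' variable names):

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical spreadsheet of traced measurements, one row per subject; column names are illustrative.
df = pd.read_csv("cephalometric_measurements.csv")

# Normality of MIDR (Kolmogorov-Smirnov test against a normal distribution fitted to the sample).
ks_stat, ks_p = stats.kstest(df["MIDR"], "norm", args=(df["MIDR"].mean(), df["MIDR"].std()))

# Sex comparison of mean MIDR with an independent-samples t-test.
t_stat, t_p = stats.ttest_ind(df.loc[df["sex"] == "M", "MIDR"], df.loc[df["sex"] == "F", "MIDR"])

# Simple linear regression of MIDR on each candidate predictor (adjusted R^2 and p-value per variable).
predictors = ["ANB", "MP_angle", "PP_angle", "LAFH", "TAFH", "UISN", "UADH", "ULT", "ULL", "lip_protrusion"]
for var in predictors:
    fit = sm.OLS(df["MIDR"], sm.add_constant(df[var])).fit()
    print(var, round(fit.rsquared_adj, 3), round(fit.pvalues[var], 4))

# Final four-variable model; the variable selection itself follows the stepwise procedure described
# above (statsmodels has no built-in stepwise routine).
X = sm.add_constant(df[["MP_angle", "LAFH", "ULT", "ULL"]])
print(sm.OLS(df["MIDR"], X).fit().summary())
```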
RESULTS: The mean age of males and females included in the study was comparable (p = 0.086). However, females presented a mean MIDR 1 mm greater than males (p = 0.011) (Table 1).
Table 1. Comparison of mean ages and maxillary incisor display at rest between males and females (males n = 75, females n = 75; values are mean ± SD; independent sample t-test; * p < 0.05).
  Age (years): males 22.00 ± 4.13, females 22.21 ± 4.45, p = 0.086
  Incisal display at rest (mm): males 3.72 ± 2.69, females 4.77 ± 2.24, p = 0.011*
A simple linear regression analysis showed that several dental, skeletal and soft tissue components were significantly related to the MIDR (Table 2). The highest variances in MIDR were explained by upper lip length (29.7%), upper lip thickness (27.3%) and mandibular plane angle (25.8%). The palatal plane angle and total anterior facial height were least associated with the MIDR, explaining only 0.06% and 0.00% of the variance, respectively. No significant association was found with age in the present study sample, which comprised the 18-30-year age group.
Table 2. Simple linear regression analysis (n = 150; * p < 0.05).
Skeletal components:
  ANB angle: r = 0.311, p < 0.001*, adjusted R² = 9.1%
  Mandibular plane angle: r = 0.513, p < 0.001*, adjusted R² = 25.8%
  Palatal plane angle: r = 0.030, p = 0.716, adjusted R² = 0.06%
  Lower anterior facial height: r = 0.341, p < 0.001*, adjusted R² = 11.0%
  Total anterior facial height: r = 0.079, p = 0.336, adjusted R² = 0.00%
Dental components:
  Upper incisor inclination: r = -0.195, p = 0.017*, adjusted R² = 3.2%
  Upper anterior dentoalveolar height: r = 0.169, p = 0.039*, adjusted R² = 2.2%
Soft tissue components:
  Upper lip thickness: r = -0.527, p < 0.001*, adjusted R² = 27.3%
  Upper lip length: r = -0.549, p < 0.001*, adjusted R² = 29.7%
  Upper lip protrusion: r = 0.207, p = 0.011*, adjusted R² = 3.6%
Age: r = -0.047, p = 0.629, adjusted R² = 0.00%
Multiple linear stepwise regression analysis was used to remove inter-correlation among the eight independent variables and to identify the clinically important variables that could predict the amount of MIDR. This resulted in a four-variable model incorporating mandibular plane angle, lower anterior facial height, upper lip thickness and upper lip length, which explained about 63% of the variance in the MIDR (Table 3).
Table 3. Multiple linear regression model (n = 150; adjusted R² = 0.634; * p < 0.05).
  Constant: coefficient B = 4.816, SE = 1.568, p = 0.003
  Mandibular plane angle: B = 0.094, SE = 0.025, p < 0.001*
  Lower anterior facial height: B = 0.083, SE = 0.018, p < 0.001*
  Upper lip thickness: B = -0.369, SE = 0.043, p < 0.001*
  Upper lip length: B = -0.134, SE = 0.041, p < 0.001*
DISCUSSION: It is difficult to develop an accurate and reproducible method of assessing maxillary incisal display during smiling that can be used universally.23 Several factors, such as age, sex, emotional status and circadian rhythms, can affect the MIDR and the activity of the orofacial muscles involved in the dynamic process of smiling.23-25 Not all of these factors could be controlled in the present study, although a large sample of young adults with equal representation of males and females might have mitigated the effects of some confounders. Moreover, maxillary incisal display during other facial expressions and normal conversation is difficult to assess objectively. In this regard, MIDR has been found to be strongly correlated with the maxillary incisal display during function, and has been used to represent the dental component of facial esthetics.26 A reduction in the MIDR is part of the normal aging process. To reduce the impact of age, only young adults aged 18-30 years were included in this study, allowing for a better analysis of the relationship between MIDR and the different anatomic variables. The current study reported a sexual dimorphism in MIDR, which is in disagreement with the findings of some studies.26,27 However, other studies11,28 have shown that women display more of their maxillary incisors than men, which is in agreement with the present results. The structural differences in the facial soft and hard tissues between males and females may explain a greater MIDR in women than men. An ultrasound-based investigation has shown that females have relatively thicker zygomaticus major muscles than males.29 Similarly, a Class II jaw relationship, which is strongly correlated with a greater MIDR, is more frequently found in females.19 However, the larger clinical crowns in males may partially negate the effect of these variations in soft tissue anatomy.11 Thus, the interaction between several underlying components plays a role in determining the ultimate proportion of the maxillary incisors visible during rest and function.
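Read as a prediction equation, the Table 3 model above is MIDR = 4.816 + 0.094 x (mandibular plane angle) + 0.083 x (LAFH) - 0.369 x (ULT) - 0.134 x (ULL). A small illustrative sketch applying it to a hypothetical patient (the input values below are invented, and since the model explains only about 63% of the variance, any such prediction is approximate):

```python
# Prediction equation implied by Table 3 (angle in degrees, distances in mm).
def predicted_midr(mp_angle, lafh, ult, ull):
    return 4.816 + 0.094 * mp_angle + 0.083 * lafh - 0.369 * ult - 0.134 * ull

# Hypothetical patient: mandibular plane angle 32 deg, LAFH 65 mm, upper lip thickness 12 mm, upper lip length 22 mm.
print(f"Predicted MIDR: {predicted_midr(32, 65, 12, 22):.1f} mm")  # about 5.8 mm for these made-up values
```

The signs agree with the simple correlations in Table 2: a steeper mandibular plane and a longer lower anterior face increase the predicted display, while a thicker or longer upper lip reduces it.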
Interestingly, when several variables were considered in a multiple linear regression model, gender failed to contribute significantly to the total variation in MIDR.
The present findings identify upper lip length as the major etiological factor affecting maxillary incisal display. However, there are conflicting reports about the role of upper lip length in the published literature. Some studies13,16 provide evidence that a short upper lip is associated with excessive upper incisal display, whereas other studies18,27,30 claim that a short upper lip is most frequently found in patients with a short facial height and reduced incisal display. Despite these conflicting reports, orthodontists frequently consider a short upper lip as the cause of gummy smile. Surgical lip lengthening and Botox injections remain the main treatments for a short upper lip.31 However, due to the invasive nature, unpredictable results and possible complications of surgical lip lengthening, and the temporary results of Botox injections, most patients with a gummy smile are treated with orthodontic intrusion of the upper incisors, crown lengthening procedures or Le Fort I maxillary impaction.31,32
The morphological variation of the maxilla, its rotation around the transverse axis and its position in the sagittal plane have all been implicated in cases of an excessive or reduced MIDR. Anterior maxillary dentoalveolar height, also referred to as anterior maxillary height or vertical maxillary height in the literature,21 has been shown to be significantly associated with excessive incisor display.9-11 The morphology of the anterior maxilla is determined by both genetic and environmental factors. Studies have shown that the upper anterior dentoalveolar height is affected by dental intrusion or extrusion under the influence of different environmental or therapeutic factors; thus, it was included among the dental components in the current study.12,21 Similarly, a clockwise rotation of the maxillary base may result in an excessive MIDR, while a counter-clockwise rotation results in reduced incisal display.11 Lastly, maxillary prognathism has been shown to be associated with an excessive maxillary incisal display.19 The current study investigated the role of the anterior maxillary dentoalveolar height, the palatal plane angle and maxillary prognathism in relation to the mandible in determining the amount of MIDR. No significant association was found between the palatal plane angle and MIDR, while a weak positive correlation was found between anterior maxillary dentoalveolar height and MIDR. However, the sagittal position of the maxilla relative to the mandible, as assessed through the ANB angle, was significantly associated with the MIDR, explaining about 9% of the variance. These results are in agreement with the findings of previous studies.19,20 In addition, a Class II jaw relationship with maxillary prognathism is associated with a thin upper lip.34 A moderate negative correlation between upper lip thickness and MIDR, as found in this study, reflects the interaction between the skeletal and soft tissue components and its effect on MIDR.
Apart from the lip characteristics, the second factor that has most consistently been linked to MIDR is the vertical facial dimension.34,35 The vertical facial proportions are assessed by parameters such as total anterior facial height, lower anterior facial height, cranial base to mandibular plane angle, and Frankfort horizontal plane to mandibular plane angle. The current study identifies the cranial base to mandibular plane angle as one of the strongest predictors of MIDR, explaining about 25% of the variance in MIDR. Similarly, lower anterior facial height was also found to be significantly associated with MIDR. A multitude of studies corroborate the present findings.19,20 The relevance of vertical-pull headgear in growing children and of surgical correction of an increased facial dimension with Le Fort I maxillary impaction cannot be overemphasized in this regard.
Among dental components, Sabri36 claimed that proclination of the maxillary incisors can significantly reduce MIDR. This might be true for some patients; however, upper incisor to SN plane inclination was not found to be significantly associated with MIDR in the current study. Similar findings were reported by Suh et al20 not only for upper incisor inclination, but also for other dental components such as overjet and overbite. Thus, the chief determinants of maxillary incisor display are soft and hard tissue components, and treatment should ideally be directed towards correction of these attributes.
This analysis describes the association between the MIDR and different dental, skeletal and soft tissue components, and provides insight into the etiological bases of an inappropriate display of the maxillary incisors. The findings of the current study may facilitate the decision-making process in orthodontic patients lacking an ideal maxillary incisal display, and thus can help in making more efficient treatment plans for these patients. The orthodontic clinician can focus on the main underlying component, design an individualized treatment plan, and tailor a suitable mechanotherapy protocol according to the patient’s needs. However, the variables included in the multiple linear regression model explain only 63% of the variation in MIDR, which indicates that other factors remain to be identified.
Another limitation of the current study is the use of MIDR as the predictor of maxillary incisal display during function. In social circumstances, the maxillary incisal display during conversation, smiling and other facial expressions has more practical significance, and thus should be analyzed accordingly. Hyperactivity of the lip muscles has been reported as a possible cause of gummy smile by different researchers, and a poor correlation has been found between the MIDR and the maxillary incisal display during smiling in these patients.16-18 Thus, studies whose methodology involves evaluation of smile dynamics could provide better explanations of the etiological factors behind an unaesthetic display of the maxillary incisors during function.
CONCLUSIONS: Maxillary incisal display at rest was generally greater in females than males. Upper lip length was found to be the strongest predictor of the maxillary incisal display at rest; however, several soft tissue, hard tissue and dental components affected MIDR. About two-thirds of the variance in the maxillary incisal display at rest was explained by the vertical facial pattern and by upper lip length and thickness.
Background: Maxillary incisal display is one of the most important attributes of smile esthetics. Methods: A cross-sectional study was conducted on 150 subjects (75 males, 75 females) aged 18-30 years. The MIDR was recorded from the pretreatment orthodontic records. The following parameters were assessed on lateral cephalograms: ANB angle, mandibular plane angle, palatal plane angle, lower anterior and total anterior facial heights, upper incisor inclination, upper anterior dentoalveolar height, and upper lip length, thickness and protrusion. The relationship between MIDR and various skeletal, dental and soft tissue components was assessed using linear regression analyses. Results: The mean MIDR was significantly greater in females than males (p = 0.011). A significant positive correlation was found between MIDR and ANB angle, mandibular plane angle and lower anterior facial height. A significant negative correlation was found between MIDR and upper lip length and thickness. Linear regression analysis showed that upper lip length was the strongest predictor of MIDR, explaining 29.7% of variance in MIDR. A multiple linear regression model based on mandibular plane angle, lower anterior facial height, upper lip length and upper lip thickness explained about 63.4% of variance in MIDR. Conclusions: Incisal display at rest was generally greater in females than males. Multiple factors play a role in determining MIDR, nevertheless upper lip length was found to be the strongest predictor of variations in MIDR.
INTRODUCTION: Smile is one of the most important expressions contributing to the facial attractiveness. An attractive and pleasing smile enhances the acceptance of an individual in the society by improving interpersonal relationships. 1 With patients becoming increasingly conscious of their dental appearance, smile esthetics has become the primary objective of orthodontic treatment. 2 The most important esthetic goal in orthodontics is to achieve a balanced smile, which can be best described as an appropriate positioning of teeth and gingival scaffold within the dynamic display zone.3 A significant portion of maxillary incisors is also visible during speech, mastication and various facial expressions. The vertical exposure of the maxillary incisors during function is strongly correlated to the maxillary incisor display at rest (MIDR). Various studies have shown that people with pleasing smile esthetics have a MIDR ranging from 2 to 4 mm.4,5 Excessive exposure of the maxillary incisors at rest may result in gummy smile; whereas, the reduced incisor exposure is less esthetic and is considered a sign of aging.4,5 A significant proportion of orthodontic patients present to the dental clinics with the chief complaint of an excessive or reduced maxillary incisor display. 6 The treatment planning for each patient aims at the correction of one or more hard or soft tissue components responsible for a less ideal incisal display. Several hard and soft tissue structures that surround and support maxillary incisors have been shown to affect the MIDR. 6 - 8 An increased or reduced vertical skull dimensions and a discrepancy in the sagittal jaw relationship are the primary skeletal components affecting the MIDR. However, some authors also claim that the vertical maxillary excess (VME) is the strongest determinant of the maxillary incisor display.9-11 The height of anterior portion of maxilla is dependent on the dentoalveolar segment, as patients with extruded anterior teeth have greater anterior maxillary dentoalveolar height. Depending on the severity of VME, orthodontic intrusion of maxillary incisors can be a viable option as an alternative to surgical repositioning of maxilla. 12 However, the true incisor intrusion is limited to 4 mm and its long term stability has not been demonstrated.12-15 The degree of upper incisor inclination is also related to upper incisor display, as retroclined incisors are usually more extruded. 7 Variations in the upper lip length directly affect the MIDR. 16 A short upper lip in relation to the underlying skeletal structures may result in an excessive MIDR and vice versa. 16 , 17 In patients with short upper lip, if the surgical approach to increase the lip length is not opted, the potential of a successful orthodontic camouflage is reduced. However, patients with hyperactive lip elevator muscles may present with a normal MIDR but still show excessive gingival tissues during smile. 18 Thus, along with the dental and skeletal components, the role of soft tissues in determining smile esthetics of an individual cannot be denied. Only few studies addressed the association between these underlying skeletal, dental and soft tissue components and MIDR. 19 , 20 Thus, the treatment of inappropriate display of maxillary incisors is usually limited to only few components that are easy to modify by orthodontic treatment or orthognathic surgery. 
The current study was designed to explore the role of different substructure attributes contributing to the display of maxillary incisors at rest, which may need to be altered by orthodontic or surgical treatment to improve dental esthetics. CONCLUSIONS: Maxillary incisal display at rest was generally greater in females than males. Upper lip length was found to be the strongest predictor of the maxillary incisal display at rest; however, several soft tissue, hard tissue and dental components affected MIDR. About two-third variance in the maxillary incisal display at rest was explained by the vertical facial pattern, and upper lip length and thickness.
Background: Maxillary incisal display is one of the most important attributes of smile esthetics. Methods: A cross-sectional study was conducted on 150 subjects (75 males, 75 females) aged 18-30 years. The MIDR was recorded from the pretreatment orthodontic records. The following parameters were assessed on lateral cephalograms: ANB angle, mandibular plane angle, palatal plane angle, lower anterior and total anterior facial heights, upper incisor inclination, upper anterior dentoalveolar height, and upper lip length, thickness and protrusion. The relationship between MIDR and various skeletal, dental and soft tissue components was assessed using linear regression analyses. Results: The mean MIDR was significantly greater in females than males (p = 0.011). A significant positive correlation was found between MIDR and ANB angle, mandibular plane angle and lower anterior facial height. A significant negative correlation was found between MIDR and upper lip length and thickness. Linear regression analysis showed that upper lip length was the strongest predictor of MIDR, explaining 29.7% of variance in MIDR. A multiple linear regression model based on mandibular plane angle, lower anterior facial height, upper lip length and upper lip thickness explained about 63.4% of variance in MIDR. Conclusions: Incisal display at rest was generally greater in females than males. Multiple factors play a role in determining MIDR, nevertheless upper lip length was found to be the strongest predictor of variations in MIDR.
5,047
272
[ 160, 87, 354 ]
8
[ "plane", "upper", "maxillary", "lip", "angle", "upper lip", "anterior", "midr", "distance", "display" ]
[ "orthodontics achieve", "smile esthetics midr", "incisor inclination dental", "display maxillary incisors", "dental appearance smile" ]
null
[CONTENT] Esthetics | Incisal display | Lip. [SUMMARY]
null
[CONTENT] Esthetics | Incisal display | Lip. [SUMMARY]
[CONTENT] Esthetics | Incisal display | Lip. [SUMMARY]
[CONTENT] Esthetics | Incisal display | Lip. [SUMMARY]
[CONTENT] Esthetics | Incisal display | Lip. [SUMMARY]
[CONTENT] Adolescent | Adult | Anatomic Landmarks | Cephalometry | Cross-Sectional Studies | Esthetics, Dental | Face | Female | Gingiva | Humans | Incisor | Linear Models | Lip | Male | Mandible | Maxilla | Skull | Smiling | Vertical Dimension | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Anatomic Landmarks | Cephalometry | Cross-Sectional Studies | Esthetics, Dental | Face | Female | Gingiva | Humans | Incisor | Linear Models | Lip | Male | Mandible | Maxilla | Skull | Smiling | Vertical Dimension | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Anatomic Landmarks | Cephalometry | Cross-Sectional Studies | Esthetics, Dental | Face | Female | Gingiva | Humans | Incisor | Linear Models | Lip | Male | Mandible | Maxilla | Skull | Smiling | Vertical Dimension | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Anatomic Landmarks | Cephalometry | Cross-Sectional Studies | Esthetics, Dental | Face | Female | Gingiva | Humans | Incisor | Linear Models | Lip | Male | Mandible | Maxilla | Skull | Smiling | Vertical Dimension | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Anatomic Landmarks | Cephalometry | Cross-Sectional Studies | Esthetics, Dental | Face | Female | Gingiva | Humans | Incisor | Linear Models | Lip | Male | Mandible | Maxilla | Skull | Smiling | Vertical Dimension | Young Adult [SUMMARY]
[CONTENT] orthodontics achieve | smile esthetics midr | incisor inclination dental | display maxillary incisors | dental appearance smile [SUMMARY]
null
[CONTENT] orthodontics achieve | smile esthetics midr | incisor inclination dental | display maxillary incisors | dental appearance smile [SUMMARY]
[CONTENT] orthodontics achieve | smile esthetics midr | incisor inclination dental | display maxillary incisors | dental appearance smile [SUMMARY]
[CONTENT] orthodontics achieve | smile esthetics midr | incisor inclination dental | display maxillary incisors | dental appearance smile [SUMMARY]
[CONTENT] orthodontics achieve | smile esthetics midr | incisor inclination dental | display maxillary incisors | dental appearance smile [SUMMARY]
[CONTENT] plane | upper | maxillary | lip | angle | upper lip | anterior | midr | distance | display [SUMMARY]
null
[CONTENT] plane | upper | maxillary | lip | angle | upper lip | anterior | midr | distance | display [SUMMARY]
[CONTENT] plane | upper | maxillary | lip | angle | upper lip | anterior | midr | distance | display [SUMMARY]
[CONTENT] plane | upper | maxillary | lip | angle | upper lip | anterior | midr | distance | display [SUMMARY]
[CONTENT] plane | upper | maxillary | lip | angle | upper lip | anterior | midr | distance | display [SUMMARY]
[CONTENT] smile | maxillary | incisors | midr | maxillary incisors | orthodontic | display | incisor | treatment | patients [SUMMARY]
null
[CONTENT] 001 | table | 150 | regression analysis | lip | 05 | analysis | upper | angle0 | height0 [SUMMARY]
[CONTENT] maxillary incisal display rest | incisal display rest | maxillary incisal | maxillary incisal display | display rest | incisal display | incisal | display | rest | maxillary [SUMMARY]
[CONTENT] plane | upper | maxillary | angle | lip | upper lip | distance | midr | display | anterior [SUMMARY]
[CONTENT] plane | upper | maxillary | angle | lip | upper lip | distance | midr | display | anterior [SUMMARY]
[CONTENT] one [SUMMARY]
null
[CONTENT] MIDR | 0.011 ||| MIDR | ANB ||| MIDR ||| Linear | MIDR | 29.7% | MIDR ||| about 63.4% | MIDR [SUMMARY]
[CONTENT] Incisal ||| MIDR | MIDR [SUMMARY]
[CONTENT] one ||| 150 | 75 | 75 | 18-30 years ||| MIDR ||| ANB ||| MIDR ||| ||| MIDR | 0.011 ||| MIDR | ANB ||| MIDR ||| Linear | MIDR | 29.7% | MIDR ||| about 63.4% | MIDR ||| ||| MIDR | MIDR [SUMMARY]
[CONTENT] one ||| 150 | 75 | 75 | 18-30 years ||| MIDR ||| ANB ||| MIDR ||| ||| MIDR | 0.011 ||| MIDR | ANB ||| MIDR ||| Linear | MIDR | 29.7% | MIDR ||| about 63.4% | MIDR ||| ||| MIDR | MIDR [SUMMARY]
A survey of senior medical students' attitudes and awareness toward teaching and participation in a formal clinical teaching elective: a Canadian perspective.
28178914
To prepare for careers in medicine, medical trainees must develop clinical teaching skills. It is unclear if Canadian medical students need or want to develop such skills. We sought to assess Canadian students' perceptions of clinical teaching, and their desire to pursue clinical teaching skills development via a clinical teaching elective (CTE) in their final year of medical school.
BACKGROUND
We designed a descriptive cross-sectional study of Canadian senior medical students, using an online survey to gauge teaching experience, career goals, perceived areas of confidence, and interest in a CTE.
METHODS
Students at 13 of 17 Canadian medical schools were invited to participate in the survey (4154 students). We collected 321 responses (7.8%). Most (75%) respondents expressed confidence in giving presentations, but fewer were confident providing bedside teaching (47%), teaching sensitive issues (42%), and presenting at journal clubs (42%). A total of 240 respondents (75%) expressed interest in participating in a CTE. The majority (61%) favored a two week elective, and preferred topics included bedside teaching (85%), teaching physical examination skills (71%), moderation of small group learning (63%), and mentorship in medicine (60%).
RESULTS
Our study demonstrates that a large number of Canadian medical students are interested in teaching in a clinical setting, but lack confidence in skills specific to clinical teaching. Our respondents signaled interest in participating in an elective in clinical teaching, particularly if it is offered in a two-week format.
CONCLUSION
[ "Adolescent", "Adult", "Attitude of Health Personnel", "Canada", "Cross-Sectional Studies", "Education, Medical, Undergraduate", "Female", "Humans", "Male", "Peer Group", "Professional Competence", "Self Efficacy", "Students, Medical", "Teaching", "Young Adult" ]
5328334
Introduction
To prepare for careers in medicine, medical students must develop clinical teaching skills [1–3]. As residents, medical graduates are typically active in educator roles and are responsible for a considerable component of the medical education of clinical clerkship students [2]. In principle, medical students who teach become better learners, more effective communicators and better prepared to teach later in their careers [3]. This implies a benefit from early involvement and engagement in clinical teaching. Many medical schools provide opportunities for medical students to develop scholarly research skills as prescribed by formal medical school accreditation standards [4,5]. On the other hand, there are no formal accreditation requirements mandating the promotion of teaching skills to undergraduate medical students. In 2008, a survey of 130 medical programs in the United States indicated that 44% (43 of 99 respondents) offered student-as-teacher (SAT) programs [6]. Prevailing content within these programs includes clinical teaching skill development, small group teaching skills and feedback derived from direct observation or videotaped teaching activity [6]. However, this study also indicated that a number of barriers exist regarding these SAT programs, including the challenge of creating a sense of value attributed to these initiatives, the lack of established national teaching competency standards to guide these programs, and the challenge of collating objective longitudinal data for the purposes of program evaluation [6]. Nonetheless, at least in the United States, many medical schools appear to provide formal clinical teaching skill training for their medical students. As of 2015, at least in the literature, it appeared that no Canadian medical school program offered the opportunity to develop clinical teaching skills to their medical students. Medical residents and clinicians teach in multiple settings. However, over the past twenty years, bedside teaching has diminished as a teaching modality with only 17% of students receiving dedicated bedside teaching [7]. In 2015, to help address the deficiency of clinical teaching skills training, the University of Ottawa Faculty of Medicine established Canada’s first clinical teaching elective (CTE) [8]. This was offered to medical students in their final year of undergraduate medical education. The elective was intended for a small number of fourth year medical students [5,6]; however, a surprisingly high number of students sought enrollment in the elective, with 25–30 students (out a class of 150) registering and eventually participating in the CTE [8]. It was unclear whether this interest in clinical teaching was unique to our institution or whether there was a greater sense of interest in the broader population of senior medical students. There is currently little known regarding Canadian medical student perceptions of clinical teaching training at the undergraduate level, including motivations for pursuing such training, barriers to participation and what objectives and content should be included in such initiatives. As such, the primary aim of this study was to conduct a national needs assessment, based on medical student self-perceptions to guide the development of a formal elective in clinical teaching. 
This needs assessment study would address perceived areas of deficiency in clinical teaching skills in senior medical school students, and would aim to design a clinical teaching skills initiative that conveys the teaching skill competencies required by Canadian medical school students.
null
null
null
null
Conclusion
Our study demonstrates a large number of senior medical students at Canadian medical schools are interested in developing their clinical teaching abilities, with a particular interest in participating in a 2-week elective during the latter part of medical school training. Based on our data, a key component of such an elective would be teaching medical students to be confident bedside teachers.
[ "Methods", "Respondents and setting", "Ethics", "Survey", "Recruitment and survey distribution", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ " Respondents and setting Most Canadian medical schools offer a four-year Doctor of Medicine (MD) program, with two years of pre-clinical education followed by two years of clinical clerkship. Two Canadian medical schools (McMaster University and the University of Calgary) have a three-year MD program, with clinical clerkship occurring in the second and third years. Three of 17 medical schools in Canada offer instruction exclusively in French (these schools are in the province of Quebec). The survey was open to all senior medical students (in their last two years of training).\n Ethics Ottawa Health Science Network Research Ethics Board granted ethics approval for our study.\nOttawa Health Science Network Research Ethics Board granted ethics approval for our study.\n Survey We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant.\n\nEnglish survey questions.\nTA: teaching assistant.\nWe developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. 
English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant.\n\nEnglish survey questions.\nTA: teaching assistant.\n Recruitment and survey distribution We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed.\nWe recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. 
At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed.\n Statistical analysis We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey.\nWe computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey.\nMost Canadian medical schools offer a four-year Doctor of Medicine (MD) program, with two years of pre-clinical education followed by two years of clinical clerkship. Two Canadian medical schools (McMaster University and the University of Calgary) have a three-year MD program, with clinical clerkship occurring in the second and third years. Three of 17 medical schools in Canada offer instruction exclusively in French (these schools are in the province of Quebec). The survey was open to all senior medical students (in their last two years of training).\n Ethics Ottawa Health Science Network Research Ethics Board granted ethics approval for our study.\nOttawa Health Science Network Research Ethics Board granted ethics approval for our study.\n Survey We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant.\n\nEnglish survey questions.\nTA: teaching assistant.\nWe developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). 
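The analysis is purely descriptive; a minimal sketch of the same kind of per-question tabulation outside Survey Monkey, assuming Python with pandas and a hypothetical export of the responses (column names and values are illustrative, not the actual survey data):

```python
import pandas as pd

# Hypothetical export of survey responses; column names and values are illustrative.
responses = pd.DataFrame({
    "would_participate_in_CTE": ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"],
    "preferred_elective_duration": ["2 weeks", "2 weeks", "1 week", "4 weeks", "2 weeks"],
})

# Frequencies and percentages for each survey question.
for question in responses.columns:
    counts = responses[question].value_counts()
    summary = pd.DataFrame({"n": counts, "percent": (100 * counts / len(responses)).round(1)})
    print(f"\n{question}\n{summary}")
```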
", "Most Canadian medical schools offer a four-year Doctor of Medicine (MD) program, with two years of pre-clinical education followed by two years of clinical clerkship. Two Canadian medical schools (McMaster University and the University of Calgary) have a three-year MD program, with clinical clerkship occurring in the second and third years. Three of 17 medical schools in Canada offer instruction exclusively in French (these schools are in the province of Quebec). The survey was open to all senior medical students (in their last two years of training).
Ethics: Ottawa Health Science Network Research Ethics Board granted ethics approval for our study.
Survey: We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1, reproduced above; TA: teaching assistant). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand on or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.
Recruitment and survey distribution: We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and agreed to disseminate the survey to the target medical student population at their home institution (note that each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions.
As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed.\nWe recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed.\n Statistical analysis We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey.\nWe computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey.", "Ottawa Health Science Network Research Ethics Board granted ethics approval for our study.", "We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. 
AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant.\n\nEnglish survey questions.\nTA: teaching assistant.", "We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed.", "We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey.", "We collected 321 responses from senior medical students at 13 Canadian medical schools. Four Canadian medical schools declined to participate, including all three unilingual Francophone schools.\nDemographic information is outlined in Table 2. Of the 321 respondents, 311 (96%) responded to the survey in English while 10 responded to the survey in French. The majority of respondents were female (69%).Table 2. Demographic characteristics of 321 senior medical student survey respondents*CharacteristicsN (%)Sex Male98 (30.7%)Female220 (69%)Prefer not to state1 (0.3%)Age ≤ 18019–24160 (50%)25–29132 (41%)30–3422 (7%)35–397 (2%)≥ 400Medical School University of Ottawa71 (22%)University of British Columbia45 (14%)University of Toronto37 (12%)Dalhousie University34 (11%)McMaster University30 (9%)McGill University28 (9%)University of Alberta27 (8%)University of Manitoba15 (5%)Queen’s University10 (3%)Western University10 (3%)Memorial University of Newfoundland9 (3%)University of Saskatchewan3 (1%)University of Calgary2 (1%)*Not all respondents answered all the questions.\n\nDemographic characteristics of 321 senior medical student survey respondents*\n*Not all respondents answered all the questions.\nThe majority (74%) had prior teaching experience, of which the most common was as a tutor/peer tutor/supplemental instructor (either in premedical or early medical school training) as indicated in Table 3. 
Many survey respondents have academic plans: 95% will pursue clinical teaching upon completion of their residency, and 71% would like to work in an academic hospital. A total of 53% of respondents hope to train in one of three disciplines: family medicine (27%), internal medicine (16%) and pediatrics (10%).\nTable 3. Teaching experience and future career goals of 321 senior medical student survey respondents. ‘Do you have any prior teaching experience?’: yes 74%, no 26% (n = 320). ‘Have you undertaken research in the area of medical education?’: yes 28%, no 72% (n = 321). ‘Do you plan on working in an academic hospital?’: yes 71%, no 29% (n = 317). ‘Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?’: yes 95%, no 5% (n = 321). ‘Please indicate your intended residency choice’ (n = 314): Family Medicine 86, Internal Medicine 50, Pediatrics 32, Emergency Medicine 23, Obstetrics and Gynecology 18, Anesthesiology 8, General Surgery 7, Diagnostic Radiology 7, Ophthalmology 7, Plastic Surgery 7, Neurology 6, Prefer not to state 27, Other 36.\nMany respondents felt confident giving presentations (75% agreeing or strongly agreeing) and facilitating small group sessions (71%), while fewer felt confident providing bedside teaching (47%), teaching sensitive issues (42%), and presenting at journal clubs (42%) (Table 4). The majority of students felt confident providing written feedback to learners (74%), and 67% felt confident providing verbal feedback.\nTable 4. Perceived confidence in teaching abilities expressed by 319 senior medical student survey respondents (strongly disagree / disagree / neutral / agree / strongly agree). I feel confident giving presentations: 2% / 5% / 18% / 56% / 19%. I feel confident facilitating small group sessions: 1% / 8% / 20% / 52% / 18%. I feel confident performing bedside teaching: 3% / 17% / 34% / 38% / 8%. I feel confident teaching about sensitive issues, communication, and ethics: 3% / 23% / 32% / 33% / 9%. I feel confident presenting at a journal club: 4% / 26% / 28% / 32% / 10%. I feel confident giving verbal feedback: 2% / 8% / 23% / 56% / 11%. I feel confident giving written feedback: 1% / 7% / 18% / 53% / 21%.\nOf the students surveyed, the majority (75%) agreed or strongly agreed that they would like to participate in such a clinical teaching elective, while 15% were neutral and 10% disagreed or strongly disagreed (Figure 1). In follow-up questioning, 61% of respondents indicated two weeks would be the ideal duration for the elective, while 31% preferred one week and 4% chose one month.\nFigure 1. Response of 320 senior medical student survey respondents to the statement ‘I would participate in a clinical teaching elective during my last year of medical school’.\nThe needs analysis (Table 5) showed that students would like sessions on the following topics: bedside teaching (85%), teaching physical examination skills (71%), moderation of small group learning (63%), and how to mentor in medicine (60%).\nTable 5. Clinical teaching elective topics preferred by 318 senior medical student survey respondents (respondents were able to select more than one topic): bedside teaching 85%; teaching physical exam skills 71%; effective moderation of small group learning 63%; mentorship in medicine 60%; effective presentation skills 54%; simulation in medical education 51%; leadership in medicine 47%; assessment in medical education 43%; teaching in the ambulatory care setting 43%; teaching in the emergency room setting 43%; how to give effective journal club presentations 38%; curriculum design in medical education 37%; how to publish in the field of medical education 24%.", "Our study demonstrates that the majority of senior Canadian medical student survey respondents are interested in, but not prepared for, teaching in a clinical setting. While 74% of survey respondents reported some form of teaching experience, only 3.3% of those had experience as clinical teachers (Table 3). A total of 95% planned to deliver clinical teaching in their career, but only 47% felt confident with bedside teaching (Table 4); only 42% of students were comfortable conducting and presenting at journal club, an expected educational task in most residency programs [10].\nThe second objective of the study was to assess student interest in participating in a clinical teaching elective if offered at their institution (Figure 1); 75% of survey respondents expressed interest in such an elective, with the majority indicating an ideal duration of two weeks.\nThis is the first Canadian study to ask these questions of medical students. In Australia, Burgess and colleagues evaluated the impact of a similar educational program, the ‘Teaching on the Run’ (TOR) program [11]. Prior to participating in TOR, year-three medical students expressed a lack of confidence in their understanding of key educational principles and activities. Students were also very interested in TOR: 67% of eligible students participated in the program. A key study by Soriano et al. noted that close to 50% of all U.S. medical schools offer student-as-teacher (SAT) training, with 31% of schools offering two- or four-week teaching electives [6].\nIn this study, students were also surveyed regarding potential curricular content of a clinical teaching elective. Despite the advances in curriculum development and delivery over the past 10 years, students still have an overwhelming desire to be good bedside teachers and to acquire confidence in explaining physical examination maneuvers to junior colleagues (Table 5). A total of 85% of respondents felt bedside teaching to be an important component of such an elective, while students were least interested in including topics pertaining to publishing in the field of medical education (24%). The present clinical teaching elective offered at our institution uses standardized patients and small group sessions to help students improve bedside teaching techniques [8]. The evaluation component of this elective is currently in progress, but it is interesting to note other studies showing a potential long-term impact of such electives on clinical teaching outcomes [12].\nOur study has two major limitations, the first of which is the low response rate. While 321 senior medical students completed our survey, there are approximately 5583 senior medical students in Canada this year.
The survey was advertised indirectly to 4154 students from 13 of 17 medical schools. The relatively low participation rate in this population has historically been attributed to survey fatigue and lack of time in clinical clerkship settings. We were unable to contact each student directly to participate in the survey, but relied upon school representatives to forward on the survey link which may have had reduced efficiency in the delivery of the survey. Despite this, there would likely be a reasonable absolute number of students who would engage in a formal teaching elective if offered at each medical school. Even if we were to presume that our respondents were the only students in Canada interested in participating in a clinical teaching elective, this is still potentially 240 students – enough for an 8–10 person elective at each academic year at each participating school. During the final year of medical training, elective time in Canadian medical schools is quite limited and often geared towards electives that will enhance the chances of matching to limited residency positions. Hence, it is interesting to note that if given the chance, many students would participate in such an elective, which is not directly linked to any particular residency program. Based on our small study, the inclusion of such an elective should strongly be considered as a curricular component in the last year of medical training.\nA further limitation of our study is the existence of selection bias. First, our inability to survey students at four of 17 Canadian medical schools could be problematic. It is conceivable that students at the non-participating schools (a distributed medical education school, and three unilingual Francophone schools) may have less interest in clinical teaching than students at the 14 participating schools, but we believe this is unlikely.\nAdditionally, there were a high number of respondents interested in matching to residency positions in family medicine, internal medicine, and pediatrics; it is plausible that students interested in these specialties are more interested in clinical teaching than other students are. These three specialties account for the bulk of clinical learning experience for medical students, so it would be natural for students to associate them with opportunities for clinical teaching. Pragmatically this does not detract from our findings. Indeed, our sample is reflective of the Canadian reality: 53% of our respondents indicated an interest in one of these three specialties, whereas post-graduate training data indicate 65% of graduating medical students will start a residency in one of these specialties [5].", "Our study demonstrates a large number of senior medical students at Canadian medical schools are interested in developing their clinical teaching abilities, with a particular interest in participating in a 2-week elective during the latter part of medical school training. Based on our data, a key component of such an elective would be teaching medical students to be confident bedside teachers." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Respondents and setting", "Ethics", "Survey", "Recruitment and survey distribution", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "To prepare for careers in medicine, medical students must develop clinical teaching skills [1–3]. As residents, medical graduates are typically active in educator roles and are responsible for a considerable component of the medical education of clinical clerkship students [2]. In principle, medical students who teach become better learners, more effective communicators and better prepared to teach later in their careers [3]. This implies a benefit from early involvement and engagement in clinical teaching.\nMany medical schools provide opportunities for medical students to develop scholarly research skills as prescribed by formal medical school accreditation standards [4,5]. On the other hand, there are no formal accreditation requirements mandating the promotion of teaching skills to undergraduate medical students. In 2008, a survey of 130 medical programs in the United States indicated that 44% (43 of 99 respondents) offered student-as-teacher (SAT) programs [6]. Prevailing content within these programs includes clinical teaching skill development, small group teaching skills and feedback derived from direct observation or videotaped teaching activity [6]. However, this study also indicated that a number of barriers exist regarding these SAT programs, including the challenge of creating a sense of value attributed to these initiatives, the lack of established national teaching competency standards to guide these programs, and the challenge of collating objective longitudinal data for the purposes of program evaluation [6]. Nonetheless, at least in the United States, many medical schools appear to provide formal clinical teaching skill training for their medical students. As of 2015, at least in the literature, it appeared that no Canadian medical school program offered the opportunity to develop clinical teaching skills to their medical students.\nMedical residents and clinicians teach in multiple settings. However, over the past twenty years, bedside teaching has diminished as a teaching modality with only 17% of students receiving dedicated bedside teaching [7]. In 2015, to help address the deficiency of clinical teaching skills training, the University of Ottawa Faculty of Medicine established Canada’s first clinical teaching elective (CTE) [8]. This was offered to medical students in their final year of undergraduate medical education. The elective was intended for a small number of fourth year medical students [5,6]; however, a surprisingly high number of students sought enrollment in the elective, with 25–30 students (out a class of 150) registering and eventually participating in the CTE [8]. It was unclear whether this interest in clinical teaching was unique to our institution or whether there was a greater sense of interest in the broader population of senior medical students.\nThere is currently little known regarding Canadian medical student perceptions of clinical teaching training at the undergraduate level, including motivations for pursuing such training, barriers to participation and what objectives and content should be included in such initiatives. As such, the primary aim of this study was to conduct a national needs assessment, based on medical student self-perceptions to guide the development of a formal elective in clinical teaching. 
This needs assessment study would address perceived areas of deficiency in clinical teaching skills among senior medical students, and would aim to design a clinical teaching skills initiative that conveys the teaching skill competencies required of Canadian medical students.", " Respondents and setting Most Canadian medical schools offer a four-year Doctor of Medicine (MD) program, with two years of pre-clinical education followed by two years of clinical clerkship. Two Canadian medical schools (McMaster University and the University of Calgary) have a three-year MD program, with clinical clerkship occurring in the second and third years. Three of the 17 medical schools in Canada offer instruction exclusively in French (these schools are in the province of Quebec). The survey was open to all senior medical students (in their last two years of training).\n Ethics Ottawa Health Science Network Research Ethics Board granted ethics approval for our study.\n Survey We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities, rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, a comment box allowed students to provide open-ended qualitative feedback if they felt the need to expand on or justify their Likert selection. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.\nTable 1. English survey questions (TA: teaching assistant): (1) State your age; (2) Select your gender; (3) Select the Faculty of Medicine to which you are affiliated; (4) Please indicate your intended residency choice; (5) Do you have any prior teaching experience?; (6) If yes, please select all that apply (e.g. Lecturer, Peer tutor, TA, Clinical teacher, Other); (7) Do you plan on working in an academic hospital?; (8) Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?; (9) If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. Academic advancement, Intrinsic interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other); (10) Have you undertaken research in the area of medical education?; (11) Confidence in teaching abilities (Likert scale); (12) I would participate in a clinical teaching elective during my last year of medical school (Likert scale); (13) What would the ideal duration for the elective be?; (14) What topics would you include in a two-week medical education/clinical teaching elective?; (15) If you have any other comments, please enter them below.\n Recruitment and survey distribution We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and agreed to disseminate the survey to the target medical student population at their home institution (students were not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 of the 17 CFMS representatives agreed to help disseminate the survey at their local institutions; as such, the data collected represent the views of students from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed.\n Statistical analysis We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey.", "Most Canadian medical schools offer a four-year Doctor of Medicine (MD) program, with two years of pre-clinical education followed by two years of clinical clerkship. Two Canadian medical schools (McMaster University and the University of Calgary) have a three-year MD program, with clinical clerkship occurring in the second and third years. Three of the 17 medical schools in Canada offer instruction exclusively in French (these schools are in the province of Quebec). The survey was open to all senior medical students (in their last two years of training).", "Ottawa Health Science Network Research Ethics Board granted ethics approval for our study.", "We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities, rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, a comment box allowed students to provide open-ended qualitative feedback if they felt the need to expand on or justify their Likert selection. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.\nTable 1. English survey questions (TA: teaching assistant): (1) State your age; (2) Select your gender; (3) Select the Faculty of Medicine to which you are affiliated; (4) Please indicate your intended residency choice; (5) Do you have any prior teaching experience?; (6) If yes, please select all that apply (e.g. Lecturer, Peer tutor, TA, Clinical teacher, Other); (7) Do you plan on working in an academic hospital?; (8) Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?; (9) If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. Academic advancement, Intrinsic interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other); (10) Have you undertaken research in the area of medical education?; (11) Confidence in teaching abilities (Likert scale); (12) I would participate in a clinical teaching elective during my last year of medical school (Likert scale); (13) What would the ideal duration for the elective be?; (14) What topics would you include in a two-week medical education/clinical teaching elective?; (15) If you have any other comments, please enter them below.
", "We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and agreed to disseminate the survey to the target medical student population at their home institution (students were not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 of the 17 CFMS representatives agreed to help disseminate the survey at their local institutions; as such, the data collected represent the views of students from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed.", "We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey.", "We collected 321 responses from senior medical students at 13 Canadian medical schools. Four Canadian medical schools declined to participate, including all three unilingual Francophone schools.\nDemographic information is outlined in Table 2. Of the 321 respondents, 311 (96%) responded to the survey in English while 10 responded in French. The majority of respondents were female (69%).\nTable 2. Demographic characteristics of 321 senior medical student survey respondents (not all respondents answered every question). Sex: male 98 (30.7%), female 220 (69%), prefer not to state 1 (0.3%). Age: ≤ 18, 0; 19–24, 160 (50%); 25–29, 132 (41%); 30–34, 22 (7%); 35–39, 7 (2%); ≥ 40, 0. Medical school: University of Ottawa 71 (22%), University of British Columbia 45 (14%), University of Toronto 37 (12%), Dalhousie University 34 (11%), McMaster University 30 (9%), McGill University 28 (9%), University of Alberta 27 (8%), University of Manitoba 15 (5%), Queen’s University 10 (3%), Western University 10 (3%), Memorial University of Newfoundland 9 (3%), University of Saskatchewan 3 (1%), University of Calgary 2 (1%).\nThe majority (74%) had prior teaching experience, of which the most common was as a tutor/peer tutor/supplemental instructor (either in premedical or early medical school training), as indicated in Table 3. Many survey respondents have academic plans: 95% will pursue clinical teaching upon completion of their residency, and 71% would like to work in an academic hospital. A total of 53% of respondents hope to train in one of three disciplines: family medicine (27%), internal medicine (16%) and pediatrics (10%).\nTable 3. Teaching experience and future career goals of 321 senior medical student survey respondents. ‘Do you have any prior teaching experience?’: yes 74%, no 26% (n = 320). ‘Have you undertaken research in the area of medical education?’: yes 28%, no 72% (n = 321). ‘Do you plan on working in an academic hospital?’: yes 71%, no 29% (n = 317). ‘Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?’: yes 95%, no 5% (n = 321). ‘Please indicate your intended residency choice’ (n = 314): Family Medicine 86, Internal Medicine 50, Pediatrics 32, Emergency Medicine 23, Obstetrics and Gynecology 18, Anesthesiology 8, General Surgery 7, Diagnostic Radiology 7, Ophthalmology 7, Plastic Surgery 7, Neurology 6, Prefer not to state 27, Other 36.\nMany respondents felt confident giving presentations (75% agreeing or strongly agreeing) and facilitating small group sessions (71%), while fewer felt confident providing bedside teaching (47%), teaching sensitive issues (42%), and presenting at journal clubs (42%) (Table 4). The majority of students felt confident providing written feedback to learners (74%), and 67% felt confident providing verbal feedback.\nTable 4. Perceived confidence in teaching abilities expressed by 319 senior medical student survey respondents (strongly disagree / disagree / neutral / agree / strongly agree). I feel confident giving presentations: 2% / 5% / 18% / 56% / 19%. I feel confident facilitating small group sessions: 1% / 8% / 20% / 52% / 18%. I feel confident performing bedside teaching: 3% / 17% / 34% / 38% / 8%. I feel confident teaching about sensitive issues, communication, and ethics: 3% / 23% / 32% / 33% / 9%. I feel confident presenting at a journal club: 4% / 26% / 28% / 32% / 10%. I feel confident giving verbal feedback: 2% / 8% / 23% / 56% / 11%. I feel confident giving written feedback: 1% / 7% / 18% / 53% / 21%.\nOf the students surveyed, the majority (75%) agreed or strongly agreed that they would like to participate in such a clinical teaching elective, while 15% were neutral and 10% disagreed or strongly disagreed (Figure 1). In follow-up questioning, 61% of respondents indicated two weeks would be the ideal duration for the elective, while 31% preferred one week and 4% chose one month.\nFigure 1. Response of 320 senior medical student survey respondents to the statement ‘I would participate in a clinical teaching elective during my last year of medical school’.\nThe needs analysis (Table 5) showed that students would like sessions on the following topics: bedside teaching (85%), teaching physical examination skills (71%), moderation of small group learning (63%), and how to mentor in medicine (60%).\nTable 5. Clinical teaching elective topics preferred by 318 senior medical student survey respondents (respondents were able to select more than one topic): bedside teaching 85%; teaching physical exam skills 71%; effective moderation of small group learning 63%; mentorship in medicine 60%; effective presentation skills 54%; simulation in medical education 51%; leadership in medicine 47%; assessment in medical education 43%; teaching in the ambulatory care setting 43%; teaching in the emergency room setting 43%; how to give effective journal club presentations 38%; curriculum design in medical education 37%; how to publish in the field of medical education 24%.", "Our study demonstrates that the majority of senior Canadian medical student survey respondents are interested in, but not prepared for, teaching in a clinical setting. While 74% of survey respondents reported some form of teaching experience, only 3.3% of those had experience as clinical teachers (Table 3). A total of 95% planned to deliver clinical teaching in their career, but only 47% felt confident with bedside teaching (Table 4); only 42% of students were comfortable conducting and presenting at journal club, an expected educational task in most residency programs [10].\nThe second objective of the study was to assess student interest in participating in a clinical teaching elective if offered at their institution (Figure 1); 75% of survey respondents expressed interest in such an elective, with the majority indicating an ideal duration of two weeks.\nThis is the first Canadian study to ask these questions of medical students. In Australia, Burgess and colleagues evaluated the impact of a similar educational program, the ‘Teaching on the Run’ (TOR) program [11].
Prior to participating in TOR, Year three medical students expressed a lack of confidence in their understanding of key educational principles and activities. Also, students were very interested in TOR – 67% of eligible students participated in the program. A key study by Soriano et al. also noted that close to 50% of all U.S. medical schools offer student-as-teacher (SAT) training, with 31% of schools offering two- or four-week teaching electives [6].\nIn this study, students were also surveyed regarding potential curricular content of a clinical teaching elective. Despite the advances in curriculum development and delivery over the past 10 years, students still have an overwhelming desire to be good bedside teachers and acquire confidence in explaining physical examination maneuvers to junior colleagues (Table 5). A total of 85% of respondents felt bedside teaching to be an important component of such an elective, while students were least interested in including topics pertaining to publishing in the field of medical education (24%). The present clinical teaching elective offered at our institution uses standardized patients and small group sessions to help students improve bedside teaching techniques [8]. The evaluation component of this elective is currently in progress, but it is interesting to note that other studies have shown a potential long-term impact of such electives on clinical teaching outcomes [12].\nOur study has two major limitations, the first of which is the low response rate. While 321 senior medical students completed our survey, there are approximately 5583 senior medical students in Canada this year. The survey was advertised indirectly to 4154 students from 13 of 17 medical schools. The relatively low participation rate in this population has historically been attributed to survey fatigue and lack of time in clinical clerkship settings. We were unable to contact each student directly to participate in the survey, but relied upon school representatives to forward the survey link, which may have reduced the efficiency of survey delivery. Despite this, there would likely be a reasonable absolute number of students who would engage in a formal teaching elective if offered at each medical school. Even if we were to presume that our respondents were the only students in Canada interested in participating in a clinical teaching elective, this is still potentially 240 students – enough for an 8–10-person elective in each academic year at each participating school. During the final year of medical training, elective time in Canadian medical schools is quite limited and often geared towards electives that will enhance the chances of matching to limited residency positions. Hence, it is interesting to note that, if given the chance, many students would participate in such an elective, which is not directly linked to any particular residency program. Based on our small study, the inclusion of such an elective should be strongly considered as a curricular component in the last year of medical training.\nA further limitation of our study is the existence of selection bias. First, our inability to survey students at four of 17 Canadian medical schools could be problematic.
It is conceivable that students at the non-participating schools (a distributed medical education school, and three unilingual Francophone schools) may have less interest in clinical teaching than students at the 13 participating schools, but we believe this is unlikely.\nAdditionally, there was a high number of respondents interested in matching to residency positions in family medicine, internal medicine, and pediatrics; it is plausible that students interested in these specialties are more interested in clinical teaching than other students are. These three specialties account for the bulk of clinical learning experience for medical students, so it would be natural for students to associate them with opportunities for clinical teaching. Pragmatically, this does not detract from our findings. Indeed, our sample is reflective of the Canadian reality: 53% of our respondents indicated an interest in one of these three specialties, whereas post-graduate training data indicate that 65% of graduating medical students will start a residency in one of these specialties [5].", "Our study demonstrates that a large number of senior medical students at Canadian medical schools are interested in developing their clinical teaching abilities, with a particular interest in participating in a 2-week elective during the latter part of medical school training. Based on our data, a key component of such an elective would be teaching medical students to be confident bedside teachers." ]
[ "intro", null, null, null, null, null, null, null, null, null ]
[ "Curriculum", "clinical teaching", "electives", "undergraduate medical education", "near-peer teaching" ]
Introduction: To prepare for careers in medicine, medical students must develop clinical teaching skills [1–3]. As residents, medical graduates are typically active in educator roles and are responsible for a considerable component of the medical education of clinical clerkship students [2]. In principle, medical students who teach become better learners, more effective communicators and better prepared to teach later in their careers [3]. This implies a benefit from early involvement and engagement in clinical teaching. Many medical schools provide opportunities for medical students to develop scholarly research skills as prescribed by formal medical school accreditation standards [4,5]. On the other hand, there are no formal accreditation requirements mandating the promotion of teaching skills to undergraduate medical students. In 2008, a survey of 130 medical programs in the United States indicated that 44% (43 of 99 respondents) offered student-as-teacher (SAT) programs [6]. Prevailing content within these programs includes clinical teaching skill development, small group teaching skills and feedback derived from direct observation or videotaped teaching activity [6]. However, this study also indicated that a number of barriers exist regarding these SAT programs, including the challenge of creating a sense of value attributed to these initiatives, the lack of established national teaching competency standards to guide these programs, and the challenge of collating objective longitudinal data for the purposes of program evaluation [6]. Nonetheless, at least in the United States, many medical schools appear to provide formal clinical teaching skill training for their medical students. As of 2015, at least in the literature, it appeared that no Canadian medical school program offered the opportunity to develop clinical teaching skills to their medical students. Medical residents and clinicians teach in multiple settings. However, over the past twenty years, bedside teaching has diminished as a teaching modality with only 17% of students receiving dedicated bedside teaching [7]. In 2015, to help address the deficiency of clinical teaching skills training, the University of Ottawa Faculty of Medicine established Canada’s first clinical teaching elective (CTE) [8]. This was offered to medical students in their final year of undergraduate medical education. The elective was intended for a small number of fourth year medical students [5,6]; however, a surprisingly high number of students sought enrollment in the elective, with 25–30 students (out a class of 150) registering and eventually participating in the CTE [8]. It was unclear whether this interest in clinical teaching was unique to our institution or whether there was a greater sense of interest in the broader population of senior medical students. There is currently little known regarding Canadian medical student perceptions of clinical teaching training at the undergraduate level, including motivations for pursuing such training, barriers to participation and what objectives and content should be included in such initiatives. As such, the primary aim of this study was to conduct a national needs assessment, based on medical student self-perceptions to guide the development of a formal elective in clinical teaching. 
This needs assessment study would address perceived areas of deficiency in clinical teaching skills in senior medical school students, and would aim to design a clinical teaching skills initiative that conveys the teaching skill competencies required by Canadian medical school students. Methods: Respondents and setting Most Canadian medical schools offer a four-year Doctor of Medicine (MD) program, with two years of pre-clinical education followed by two years of clinical clerkship. Two Canadian medical schools (McMaster University and the University of Calgary) have a three-year MD program, with clinical clerkship occurring in the second and third years. Three of 17 medical schools in Canada offer instruction exclusively in French (these schools are in the province of Quebec). The survey was open to all senior medical students (in their last two years of training). Ethics Ottawa Health Science Network Research Ethics Board granted ethics approval for our study. Ottawa Health Science Network Research Ethics Board granted ethics approval for our study. Survey We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant. English survey questions. TA: teaching assistant. We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). 
For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant. English survey questions. TA: teaching assistant. Recruitment and survey distribution We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed. We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). 
The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed. Statistical analysis We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey. We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey. Most Canadian medical schools offer a four-year Doctor of Medicine (MD) program, with two years of pre-clinical education followed by two years of clinical clerkship. Two Canadian medical schools (McMaster University and the University of Calgary) have a three-year MD program, with clinical clerkship occurring in the second and third years. Three of 17 medical schools in Canada offer instruction exclusively in French (these schools are in the province of Quebec). The survey was open to all senior medical students (in their last two years of training). Ethics Ottawa Health Science Network Research Ethics Board granted ethics approval for our study. Ottawa Health Science Network Research Ethics Board granted ethics approval for our study. Survey We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. 
AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant. English survey questions. TA: teaching assistant. We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant. English survey questions. TA: teaching assistant. Recruitment and survey distribution We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. 
As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed. We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed. Statistical analysis We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey. We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey. Respondents and setting: Most Canadian medical schools offer a four-year Doctor of Medicine (MD) program, with two years of pre-clinical education followed by two years of clinical clerkship. Two Canadian medical schools (McMaster University and the University of Calgary) have a three-year MD program, with clinical clerkship occurring in the second and third years. Three of 17 medical schools in Canada offer instruction exclusively in French (these schools are in the province of Quebec). The survey was open to all senior medical students (in their last two years of training). Ethics Ottawa Health Science Network Research Ethics Board granted ethics approval for our study. Ottawa Health Science Network Research Ethics Board granted ethics approval for our study. Survey We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. 
English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant. English survey questions. TA: teaching assistant. We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant. English survey questions. TA: teaching assistant. Recruitment and survey distribution We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). 
These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed. We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed. Statistical analysis We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey. We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey. Ethics: Ottawa Health Science Network Research Ethics Board granted ethics approval for our study. Survey: We developed a 15-question online survey, available in English and French on the Survey Monkey platform (see Table 1). We asked basic demographic characteristics; dichotomous (‘yes/no’) questions regarding teaching or research experience and future plans; and questions regarding confidence in their own teaching abilities rated on a 5-point Likert scale (‘strongly disagree’ to ‘strongly agree’). For each quantitative survey item, the option to provide open-ended qualitative feedback (if students felt the need to expand or justify their Likert selection) was provided via a comment box. The survey was developed with two medical education consultants and piloted by three medical students prior to distribution.Table 1. 
English survey questions.Question #Question1State your age2Select your gender3Select the Faculty of Medicine to which you are affiliated4Please indicate your intended residency choice5Do you have any prior teaching experience?6If yes, please select all those that apply (e.g. Lecturer, Peer tutor, TAClinical teacher, Other)7Do you plan on working in an academic hospital?8Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?9If yes, please indicate which factors motivate you to engage in clinical teaching (e.g. AcademicAdvancement, Intrinsic Interest, Prestige, Requirement to work in an academic centre, Desire to ‘give back’, Increase confidence in teaching, Other)10Have you undertaken research in the area of medical education?11Confidence in Teaching abilities (Likert scale)12I would participate in a clinical teaching elective during my last year of medical school (Likert scale)13What would the ideal duration for the elective be?14What topics would you include in a two-week medical education/clinical teaching elective?15If you have any other comments, please enter them belowTA: teaching assistant. English survey questions. TA: teaching assistant. Recruitment and survey distribution: We recruited respondents indirectly through their national student representatives. Briefly, each Canadian medical school has student representation on the Canadian Federation of Medical Students (CFMS). These representatives were contacted via email and these students agreed to disseminate the survey to the target medical student population at their home institution (note, each student was not contacted directly by our researchers). The student representatives then relayed these emails (including a brief introduction to the survey in English and in French, as well as the link to access the survey) to the senior medical students at each school, with one reminder email sent two weeks before the survey was closed. In total, 13 CFMS representatives (out of 17) agreed to help disseminate this survey to their local institutions. As such, the data collected from our survey represents the views from 76% of Canadian medical schools. At the time of the study there were 5583 senior medical students registered in the 17 Canadian Faculties of Medicine [9]. A total of 4154 students had access to the survey (from the 13 participating medical schools) and there were 321 eventual respondents, resulting in a response rate of 7.7% of those surveyed. Statistical analysis: We computed simple descriptive statistics (frequencies and percentages) for each survey question using Survey Monkey. Results: We collected 321 responses from senior medical students at 13 Canadian medical schools. Four Canadian medical schools declined to participate, including all three unilingual Francophone schools. Demographic information is outlined in Table 2. Of the 321 respondents, 311 (96%) responded to the survey in English while 10 responded to the survey in French. The majority of respondents were female (69%).Table 2. 
Demographic characteristics of 321 senior medical student survey respondents*CharacteristicsN (%)Sex Male98 (30.7%)Female220 (69%)Prefer not to state1 (0.3%)Age ≤ 18019–24160 (50%)25–29132 (41%)30–3422 (7%)35–397 (2%)≥ 400Medical School University of Ottawa71 (22%)University of British Columbia45 (14%)University of Toronto37 (12%)Dalhousie University34 (11%)McMaster University30 (9%)McGill University28 (9%)University of Alberta27 (8%)University of Manitoba15 (5%)Queen’s University10 (3%)Western University10 (3%)Memorial University of Newfoundland9 (3%)University of Saskatchewan3 (1%)University of Calgary2 (1%)*Not all respondents answered all the questions. Demographic characteristics of 321 senior medical student survey respondents* *Not all respondents answered all the questions. The majority (74%) had prior teaching experience, of which the most common was as a tutor/peer tutor/supplemental instructor (either in premedical or early medical school training) as indicated in Table 3. Many survey respondents have academic plans: 95% will pursue clinical teaching upon completion of their residency, and 71% would like to work in an academic hospital. 53% of respondents hope to train in one of three disciplines: family medicine (27%), internal medicine (16%) and pediatrics (10%)Table 3. Teaching experience and future career goals of 321 senior medical student survey respondentsSurvey questionResponse‘Do you have any prior teaching experience?’ Yes (%)74No (%)26Total (n)320‘Have you undertaken research in the area of medical education?’ Yes (%)28No (%)72Total (n)321‘Do you plan on working in an academic hospital?’ Yes (%)71No (%)29Total (n)317‘Are you interested in pursuing clinical teaching for medical students/residents upon completion of residency?’ Yes (%)95No (%)5Total (n)321‘Please indicate your intended residency choice’ (N = 314) Family Medicine (n)86Internal Medicine (n)50Pediatrics (n)32Emergency Medicine (n)23Obstetrics and Gynecology (n)18Anesthesiology (n)8General Surgery (n)7Diagnostic Radiology (n)7Ophthalmology (n)7Plastic Surgery (n)7Neurology (n)6Prefer not to state (n)27Other (n)36 Teaching experience and future career goals of 321 senior medical student survey respondents Many respondents felt confident giving presentations (75% agreeing or strongly agreeing) and facilitating small group sessions (71%), while fewer felt confident providing bedside teaching (47%), teaching sensitive issues (42%), and presenting at journal clubs (42%) (Table 4). The majority of students felt confident providing written feedback to learners (74%) and 67% felt confident in providing verbal feedback.Table 4. Perceived confidence in teaching abilities expressed by 319 senior medical student survey respondents.Answer OptionsStrongly disagreeDisagreeNeutralAgreeStrongly agreeI feel confident giving presentations.2%5%18%56%19%I feel confident facilitating small group sessions.1%8%20%52%18%I feel confident performing bedside teaching.3%17%34%38%8%I feel confident teaching about sensitive issues, communication, and ethics.3%23%32%33%9%I feel confident presenting at a journal club.4%26%28%32%10%I feel confident giving verbal feedback.2%8%23%56%11%I feel confident giving written feedback.1%7%18%53%21% Perceived confidence in teaching abilities expressed by 319 senior medical student survey respondents. 
Of the students surveyed, the majority (75%) agreed/strongly agreed that they would like to participate in such a clinical teaching elective, while 15% were neutral and 10% disagreed/strongly disagreed (Figure 1). In follow-up questioning, 61% of respondents indicated two weeks would be the ideal duration for the elective, while 31% preferred one week, and 4% chose one month.Figure 1. Response of 320 senior medical student survey respondents to statement ‘I would participate in a clinical teaching elective during my last year of medical school’. Response of 320 senior medical student survey respondents to statement ‘I would participate in a clinical teaching elective during my last year of medical school’. The needs analysis (Table 5) showed that students would like sessions on the following topics: bedside teaching (85%), teaching physical examination skills (71%), moderation of small group learning (63%), and how to mentor in medicine (60%).Table 5. Clinical teaching elective topics preferred by 318 senior medical student survey respondents*Topics%Bedside teaching85Teaching physical exam skills71Effective moderation of small group learning63Mentorship in medicine60Effective presentation skills54Simulation in medical education51Leadership in medicine47Assessment in medical education43Teaching in the ambulatory care setting43Teaching in the emergency room setting43How to give effective journal club presentations38Curriculum design in medical education37How to publish in the field of medical education24*Respondents were able to select more than one topic Clinical teaching elective topics preferred by 318 senior medical student survey respondents* *Respondents were able to select more than one topic Discussion: Our study demonstrates that the majority of senior Canadian medical student survey respondents are interested in, but not prepared for, teaching in a clinical setting. While 74% of survey respondents reported some form of teaching experience, only 3.3% of those had experience as clinical teachers (Table 3). A total of 95% planned to deliver clinical teaching in their career, but only 47% felt confident with bedside teaching (Table 4); only 42% of student were comfortable conducting and presenting at journal club, an expected educational task in most residency programs [10]. The second objective of the study was to assess student interest in participating in a clinical teaching elective if offered at their institution (Figure 1); 75% of survey respondents expressed interest in such an elective, with the majority indicating an ideal duration of two weeks. This is the first Canadian study to ask these questions of medical students. In Australia, Burgess and colleagues evaluated the impact of a similar educational program, the ‘Teaching on the Run’ (TOR) program [11]. Prior to participating in TOR, Year three medical students expressed lack of confidence in their understanding of key educational principles and activities. Also, students were very interested in TOR – 67% of eligible students participated in the program. A key study by Soriano et al also noted that close to 50% of all U.S. medical schools offer students as teachers (SAT) training with 31% of schools offering two or four week teaching electives [6]. In this study, students were also surveyed regarding potential curricular content of a clinical teaching elective. 
Despite the advances in curriculum development and delivery over the past 10 years, students still have an overwhelming desire to be good bedside teachers and acquire confidence in explaining physical examination maneuvers to junior colleagues (Table 5). 85% of respondents felt bedside teaching to be an important component of such an elective, while students were least interested in including topics pertaining to publishing in the field of medical education (24%). The present clinical teaching elective offered at our institution uses standardized patients and small group sessions to help students improve bedside teaching techniques [8]. The evaluation component of this elective is currently in progress, but it is interesting to note other studies showing a potential long-term impact of such electives on clinical teaching outcomes [12]. Our study has two major limitations, first of which is the low response rate. While 321 senior medical students completed our survey, there are approximately 5583 senior medical students in Canada this year. The survey was advertised indirectly to 4154 students from 13 of 17 medical schools. The relatively low participation rate in this population has historically been attributed to survey fatigue and lack of time in clinical clerkship settings. We were unable to contact each student directly to participate in the survey, but relied upon school representatives to forward on the survey link which may have had reduced efficiency in the delivery of the survey. Despite this, there would likely be a reasonable absolute number of students who would engage in a formal teaching elective if offered at each medical school. Even if we were to presume that our respondents were the only students in Canada interested in participating in a clinical teaching elective, this is still potentially 240 students – enough for an 8–10 person elective at each academic year at each participating school. During the final year of medical training, elective time in Canadian medical schools is quite limited and often geared towards electives that will enhance the chances of matching to limited residency positions. Hence, it is interesting to note that if given the chance, many students would participate in such an elective, which is not directly linked to any particular residency program. Based on our small study, the inclusion of such an elective should strongly be considered as a curricular component in the last year of medical training. A further limitation of our study is the existence of selection bias. First, our inability to survey students at four of 17 Canadian medical schools could be problematic. It is conceivable that students at the non-participating schools (a distributed medical education school, and three unilingual Francophone schools) may have less interest in clinical teaching than students at the 14 participating schools, but we believe this is unlikely. Additionally, there were a high number of respondents interested in matching to residency positions in family medicine, internal medicine, and pediatrics; it is plausible that students interested in these specialties are more interested in clinical teaching than other students are. These three specialties account for the bulk of clinical learning experience for medical students, so it would be natural for students to associate them with opportunities for clinical teaching. Pragmatically this does not detract from our findings. 
Indeed, our sample is reflective of the Canadian reality: 53% of our respondents indicated an interest in one of these three specialties, whereas post-graduate training data indicate that 65% of graduating medical students will start a residency in one of these specialties [5]. Conclusion: Our study demonstrates that a large number of senior medical students at Canadian medical schools are interested in developing their clinical teaching abilities, with a particular interest in participating in a 2-week elective during the latter part of medical school training. Based on our data, a key component of such an elective would be teaching medical students to be confident bedside teachers.
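The statistical analysis described in the Methods above was limited to simple descriptive statistics (frequencies and percentages computed for each survey question in Survey Monkey). As an illustration only, the short sketch below reproduces that kind of tabulation in Python on made-up Likert answers; the answer values and the helper name are assumptions for the example, not part of the study or its data export. Only the 321/4154 figures are taken from the study itself.

```python
# Illustrative sketch only: frequencies and percentages of the kind reported in
# the Methods. The answers below are invented examples, not the study's data.
from collections import Counter

def frequencies_and_percentages(responses):
    """Return {answer: (count, percent of non-missing responses)} for one question."""
    answered = [r for r in responses if r is not None]   # not all respondents answered all questions
    counts = Counter(answered)
    total = len(answered)
    return {answer: (n, round(100 * n / total, 1)) for answer, n in counts.items()}

# Hypothetical responses to one Likert item, e.g. 'I feel confident performing bedside teaching.'
bedside_item = ["Agree", "Neutral", "Agree", "Strongly agree", "Disagree", None, "Agree"]
print(frequencies_and_percentages(bedside_item))

# The reported response rate follows the same arithmetic: 321 respondents of 4154 invited.
print(f"Response rate: {321 / 4154:.1%}")   # -> 7.7%
```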
Background: To prepare for careers in medicine, medical trainees must develop clinical teaching skills. It is unclear if Canadian medical students need or want to develop such skills. We sought to assess Canadian students' perceptions of clinical teaching, and their desire to pursue clinical teaching skills development via a clinical teaching elective (CTE) in their final year of medical school. Methods: We designed a descriptive cross-sectional study of Canadian senior medical students, using an online survey to gauge teaching experience, career goals, perceived areas of confidence, and interest in a CTE. Results: Students at 13 of 17 Canadian medical schools were invited to participate in the survey (4154 students). We collected 321 responses (7.8%). Most (75%) respondents expressed confidence in giving presentations, but fewer were confident providing bedside teaching (47%), teaching sensitive issues (42%), and presenting at journal clubs (42%). A total of 240 respondents (75%) expressed interest in participating in a CTE. The majority (61%) favored a two week elective, and preferred topics included bedside teaching (85%), teaching physical examination skills (71%), moderation of small group learning (63%), and mentorship in medicine (60%). Conclusions: Our study demonstrates that a large number of Canadian medical students are interested in teaching in a clinical setting, but lack confidence in skills specific to clinical teaching. Our respondents signaled interest in participating in an elective in clinical teaching, particularly if it is offered in a two-week format.
Introduction: To prepare for careers in medicine, medical students must develop clinical teaching skills [1–3]. As residents, medical graduates are typically active in educator roles and are responsible for a considerable component of the medical education of clinical clerkship students [2]. In principle, medical students who teach become better learners, more effective communicators and better prepared to teach later in their careers [3]. This implies a benefit from early involvement and engagement in clinical teaching. Many medical schools provide opportunities for medical students to develop scholarly research skills as prescribed by formal medical school accreditation standards [4,5]. On the other hand, there are no formal accreditation requirements mandating the promotion of teaching skills to undergraduate medical students. In 2008, a survey of 130 medical programs in the United States indicated that 44% (43 of 99 respondents) offered student-as-teacher (SAT) programs [6]. Prevailing content within these programs includes clinical teaching skill development, small group teaching skills and feedback derived from direct observation or videotaped teaching activity [6]. However, this study also indicated that a number of barriers exist regarding these SAT programs, including the challenge of creating a sense of value attributed to these initiatives, the lack of established national teaching competency standards to guide these programs, and the challenge of collating objective longitudinal data for the purposes of program evaluation [6]. Nonetheless, at least in the United States, many medical schools appear to provide formal clinical teaching skill training for their medical students. As of 2015, at least in the literature, it appeared that no Canadian medical school program offered the opportunity to develop clinical teaching skills to their medical students. Medical residents and clinicians teach in multiple settings. However, over the past twenty years, bedside teaching has diminished as a teaching modality with only 17% of students receiving dedicated bedside teaching [7]. In 2015, to help address the deficiency of clinical teaching skills training, the University of Ottawa Faculty of Medicine established Canada’s first clinical teaching elective (CTE) [8]. This was offered to medical students in their final year of undergraduate medical education. The elective was intended for a small number of fourth year medical students [5,6]; however, a surprisingly high number of students sought enrollment in the elective, with 25–30 students (out a class of 150) registering and eventually participating in the CTE [8]. It was unclear whether this interest in clinical teaching was unique to our institution or whether there was a greater sense of interest in the broader population of senior medical students. There is currently little known regarding Canadian medical student perceptions of clinical teaching training at the undergraduate level, including motivations for pursuing such training, barriers to participation and what objectives and content should be included in such initiatives. As such, the primary aim of this study was to conduct a national needs assessment, based on medical student self-perceptions to guide the development of a formal elective in clinical teaching. 
This needs assessment study would address perceived areas of deficiency in clinical teaching skills in senior medical school students, and would aim to design a clinical teaching skills initiative that conveys the teaching skill competencies required by Canadian medical school students. Conclusion: Our study demonstrates that a large number of senior medical students at Canadian medical schools are interested in developing their clinical teaching abilities, with a particular interest in participating in a 2-week elective during the latter part of medical school training. Based on our data, a key component of such an elective would be teaching medical students to be confident bedside teachers.
Background: To prepare for careers in medicine, medical trainees must develop clinical teaching skills. It is unclear if Canadian medical students need or want to develop such skills. We sought to assess Canadian students' perceptions of clinical teaching, and their desire to pursue clinical teaching skills development via a clinical teaching elective (CTE) in their final year of medical school. Methods: We designed a descriptive cross-sectional study of Canadian senior medical students, using an online survey to gauge teaching experience, career goals, perceived areas of confidence, and interest in a CTE. Results: Students at 13 of 17 Canadian medical schools were invited to participate in the survey (4154 students). We collected 321 responses (7.8%). Most (75%) respondents expressed confidence in giving presentations, but fewer were confident providing bedside teaching (47%), teaching sensitive issues (42%), and presenting at journal clubs (42%). A total of 240 respondents (75%) expressed interest in participating in a CTE. The majority (61%) favored a two week elective, and preferred topics included bedside teaching (85%), teaching physical examination skills (71%), moderation of small group learning (63%), and mentorship in medicine (60%). Conclusions: Our study demonstrates that a large number of Canadian medical students are interested in teaching in a clinical setting, but lack confidence in skills specific to clinical teaching. Our respondents signaled interest in participating in an elective in clinical teaching, particularly if it is offered in a two-week format.
7,112
314
[ 2612, 1303, 14, 332, 224, 18, 925, 957, 67 ]
10
[ "medical", "survey", "teaching", "students", "clinical", "clinical teaching", "medical students", "student", "canadian", "elective" ]
[ "medical education clinical", "clinical teaching abilities", "clinical teaching skill", "teaching medical schools", "skills medical students" ]
null
null
[CONTENT] Curriculum | clinical teaching | electives | undergraduate medical education | near-peer teaching [SUMMARY]
null
null
[CONTENT] Curriculum | clinical teaching | electives | undergraduate medical education | near-peer teaching [SUMMARY]
[CONTENT] Curriculum | clinical teaching | electives | undergraduate medical education | near-peer teaching [SUMMARY]
[CONTENT] Curriculum | clinical teaching | electives | undergraduate medical education | near-peer teaching [SUMMARY]
[CONTENT] Adolescent | Adult | Attitude of Health Personnel | Canada | Cross-Sectional Studies | Education, Medical, Undergraduate | Female | Humans | Male | Peer Group | Professional Competence | Self Efficacy | Students, Medical | Teaching | Young Adult [SUMMARY]
null
null
[CONTENT] Adolescent | Adult | Attitude of Health Personnel | Canada | Cross-Sectional Studies | Education, Medical, Undergraduate | Female | Humans | Male | Peer Group | Professional Competence | Self Efficacy | Students, Medical | Teaching | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Attitude of Health Personnel | Canada | Cross-Sectional Studies | Education, Medical, Undergraduate | Female | Humans | Male | Peer Group | Professional Competence | Self Efficacy | Students, Medical | Teaching | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Attitude of Health Personnel | Canada | Cross-Sectional Studies | Education, Medical, Undergraduate | Female | Humans | Male | Peer Group | Professional Competence | Self Efficacy | Students, Medical | Teaching | Young Adult [SUMMARY]
[CONTENT] medical education clinical | clinical teaching abilities | clinical teaching skill | teaching medical schools | skills medical students [SUMMARY]
null
null
[CONTENT] medical education clinical | clinical teaching abilities | clinical teaching skill | teaching medical schools | skills medical students [SUMMARY]
[CONTENT] medical education clinical | clinical teaching abilities | clinical teaching skill | teaching medical schools | skills medical students [SUMMARY]
[CONTENT] medical education clinical | clinical teaching abilities | clinical teaching skill | teaching medical schools | skills medical students [SUMMARY]
[CONTENT] medical | survey | teaching | students | clinical | clinical teaching | medical students | student | canadian | elective [SUMMARY]
null
null
[CONTENT] medical | survey | teaching | students | clinical | clinical teaching | medical students | student | canadian | elective [SUMMARY]
[CONTENT] medical | survey | teaching | students | clinical | clinical teaching | medical students | student | canadian | elective [SUMMARY]
[CONTENT] medical | survey | teaching | students | clinical | clinical teaching | medical students | student | canadian | elective [SUMMARY]
[CONTENT] medical | teaching | teaching skills | skills | students | clinical | clinical teaching | clinical teaching skills | programs | medical students [SUMMARY]
null
null
[CONTENT] medical | school training based data | schools interested developing | study demonstrates large number | senior medical students canadian | abilities particular interest | abilities particular | confident bedside teachers | particular interest participating week | developing [SUMMARY]
[CONTENT] medical | teaching | survey | students | clinical | clinical teaching | medical students | student | elective | canadian [SUMMARY]
[CONTENT] medical | teaching | survey | students | clinical | clinical teaching | medical students | student | elective | canadian [SUMMARY]
[CONTENT] ||| Canadian ||| Canadian | their final year [SUMMARY]
null
null
[CONTENT] Canadian ||| two-week [SUMMARY]
[CONTENT] ||| Canadian ||| Canadian | their final year ||| Canadian | CTE ||| ||| 13 | 17 | Canadian | 4154 ||| 321 | 7.8% ||| 75% | 47% | 42% | 42% ||| 240 | 75% | CTE ||| 61% | two week | 85% | 71% | 63% | 60% ||| Canadian ||| two-week [SUMMARY]
[CONTENT] ||| Canadian ||| Canadian | their final year ||| Canadian | CTE ||| ||| 13 | 17 | Canadian | 4154 ||| 321 | 7.8% ||| 75% | 47% | 42% | 42% ||| 240 | 75% | CTE ||| 61% | two week | 85% | 71% | 63% | 60% ||| Canadian ||| two-week [SUMMARY]
The beneficial effect of alpha-blockers for ureteral stent-related discomfort: systematic review and network meta-analysis for alfuzosin versus tamsulosin versus placebo.
26104313
This study carried out a network meta-analysis of evidence from randomized controlled trials (RCTs) to evaluate stent-related discomfort in patients treated with alfuzosin or tamsulosin versus placebo.
BACKGROUND
Relevant RCTs were identified from electronic databases, and the proceedings of appropriate meetings were also searched. Seven RCT reports were included in our meta-analysis. Comparisons were made by qualitative and quantitative syntheses using pairwise and network meta-analyses. Evaluation was performed with the Ureteric Stent Symptoms Questionnaire to assess the urinary symptom score (USS) and body pain score (BPS).
METHODS
One of the seven RCTs was at moderate risk of bias for all quality criteria; two studies had a high risk of bias. In the network meta-analysis, both alfuzosin (mean difference [MD] -4.85, 95 % confidence interval [CI] -8.53 to -1.33) and tamsulosin (MD -8.84, 95 % CI -13.08 to -4.31) showed lower scores compared with placebo; however, the difference in USS for alfuzosin versus tamsulosin was not significant (MD 3.99, 95 % CI -1.23 to 9.04). Alfuzosin (MD -5.71, 95 % CI -11.32 to -0.52) and tamsulosin (MD -7.77, 95 % CI -13.68 to -2.14) showed lower scores for BPS compared with placebo; however, the MD between alfuzosin and tamsulosin was not significant (MD 2.12, 95 % CI -4.62 to 8.72). In the rank-probability test, tamsulosin ranked highest for USS and BPS, and alfuzosin was second.
RESULTS
The alpha-blockers significantly decreased USS and BPS in comparison with placebo. Tamsulosin might be more effective than alfuzosin.
CONCLUSION
[ "Adrenergic alpha-Antagonists", "Bayes Theorem", "Humans", "Pain", "Pain Measurement", "Quinazolines", "Randomized Controlled Trials as Topic", "Stents", "Sulfonamides", "Tamsulosin", "Treatment Outcome", "Ureteral Obstruction" ]
4477492
Background
In 1978, the ureteral double-J stent was first described by Finney et al. [1, 2]. Ureteral double-J stent insertion has become one of the most common urologic procedures; however, indwelling stents are often accompanied by significant morbidity, including voiding and storage symptoms, flank pain, hematuria, and infection [3]. Symptoms of stent discomfort, including bladder irritation and flank pain or discomfort, are generally treated with oral analgesics such as narcotics and anti-inflammatory medications; however, these medications are only moderately effective. Alpha-blockers alleviate bladder irritation due to stents, resulting in a reduced incidence of dysuria, frequency, and pain compared with placebo [4]. Ureteral stent discomfort may be due to spasms of the ureteral smooth muscle that surrounds the indwelling foreign object and may run the length of the ureter. Further, irritation of the trigone, which also has alpha-1d receptors, may be caused by the intravesical lower coil of the stent. Alternatively, voiding may increase pressure on the renal pelvis and cause discomfort [5]. Several studies have investigated whether alpha-blockers can alleviate symptoms related to stent placement [6]. In 2011, Lamb et al. reported a pair-wise meta-analysis of randomized controlled trials (RCTs) indicating that orally administered alpha-blockers reduce stent-related discomfort and storage symptoms as evaluated by the Ureteric Stent Symptoms Questionnaire (USSQ) [7]. Network meta-analysis is a recently introduced approach in which multiple treatments are compared using direct comparisons of interventions within RCTs and indirect comparisons across trials based on a common comparator [8–10]. The present systematic review and network meta-analysis examined available RCTs to study the effects of alpha-blockers on stent-related symptoms.
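The indirect-comparison idea behind a network meta-analysis can be illustrated numerically. The sketch below is illustrative only and is not the Bayesian gemtc model used in this review: it combines the two placebo-anchored USS mean differences reported later in this record (alfuzosin −4.85, tamsulosin −8.84) into an indirect alfuzosin-versus-tamsulosin estimate, with the standard errors back-calculated from the reported 95 % confidence intervals under a normal approximation.

```python
import math

# Reported USS mean differences versus the common comparator (placebo),
# taken from the network meta-analysis results in this record.
# Standard errors are approximated from the 95 % CIs as (upper - lower) / (2 * 1.96);
# this is an illustrative normal approximation, not the authors' posterior.
md_alf_vs_pla, ci_alf = -4.85, (-8.53, -1.33)
md_tam_vs_pla, ci_tam = -8.84, (-13.08, -4.31)

se_alf = (ci_alf[1] - ci_alf[0]) / (2 * 1.96)
se_tam = (ci_tam[1] - ci_tam[0]) / (2 * 1.96)

# Indirect comparison through the common comparator:
# MD(alf vs tam) = MD(alf vs pla) - MD(tam vs pla); the variances add.
md_indirect = md_alf_vs_pla - md_tam_vs_pla
se_indirect = math.sqrt(se_alf**2 + se_tam**2)
ci_indirect = (md_indirect - 1.96 * se_indirect, md_indirect + 1.96 * se_indirect)

print(f"Indirect MD (alfuzosin vs tamsulosin, USS): {md_indirect:.2f}")
print(f"Approximate 95% CI: ({ci_indirect[0]:.2f}, {ci_indirect[1]:.2f})")
# The point estimate (3.99) matches the reported network estimate, and the
# interval crosses zero, consistent with the non-significant difference reported.
```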
null
null
Results
Eligible studies
The database search retrieved 21 articles covering 88 studies for potential inclusion in the meta-analysis. Fourteen articles were excluded based on the inclusion/exclusion criteria: eight were retrospective studies, three used different measurement tools and variables, and three did not report final results. The remaining seven articles were included in the qualitative and quantitative syntheses using pairwise and network meta-analyses (Fig. 1: flow diagram of evidence acquisition).

Data corresponding to confounding factors in each study are summarized in Table 1. Four studies compared alfuzosin versus placebo [15–18], two trials reported therapeutic outcomes of tamsulosin versus placebo [19, 20], and a three-arm trial compared alfuzosin, tamsulosin, and placebo [21].

Table 1. Enrolled studies for this meta-analysis
- Beddingfield et al. [15]: RCT; N = 55 (Tx 26, P 29); alfuzosin 10 mg; analgesics on demand; stent duration 10 days; stent type/size NA, length height-adjusted; indication post-URS; stone size 6.35 mm (Tx) / 7.2 mm (P); stone location 55 % renal, 25 % renal/ureteric, 20 % ureteric; measure USSQ (urinary symptom, body pain, general health, work performance, and sexual health scores); risk of bias low.
- Damiano et al. [20]: RCT; N = 75 (Tx 38, P 37); tamsulosin 0.4 mg; analgesics on demand; stent duration 14 days; polyurethane stent, 7 Fr, length height-adjusted; indication post-URS; stone size/location NA; measure USSQ (urinary symptom and body pain scores); risk of bias intermediate.
- Deliveliotis et al. [16]: RCT; N = 100 (Tx 50, P 50); alfuzosin 10 mg; analgesics on demand; stent duration 28 days; polyurethane stent, 5 Fr, length height-adjusted; indication conservative treatment for stone <10 mm with hydronephrosis; stone size 7.6 mm (Tx) / 7.1 mm (P); stone location 19 upper, 23 mid, 48 lower; measure USSQ (urinary symptom, body pain, general health, and sexual health scores); risk of bias low.
- Dellis et al. [21]: RCT; N = 150 (Tx 100, P 50)*; alfuzosin 10 mg and tamsulosin 0.4 mg; analgesics on demand; stent duration 28 days; stent type NA, 6 Fr, length height-adjusted; indication post-ESWL and post-URS; stone size/location NA; measure USSQ (urinary symptom, body pain, general health, work performance, and sexual health scores); risk of bias low.
- Nazim et al. [17]: RCT; N = 130 (Tx 65, P 65); alfuzosin 10 mg; analgesics on demand; stent duration 7 days; polyurethane stent, 4.7 Fr and 6 Fr, length NA; indication post-URS; stone size NA; stone location 40 upper, 28 mid, 62 lower; measures VAS and USSQ (urinary symptom and body pain scores); risk of bias high.
- Park et al. [18]: RCT; N = 32 (Tx 20, P 12); alfuzosin 10 mg; analgesic regimen NA; stent duration 42 days; polyurethane stent, 6 Fr, length height-adjusted; indication post-URS, PCNL, laparoscopic pyeloplasty, or endo-ureterotomy; stone size/location NA; measure USSQ (urinary symptom, body pain, general health, work performance, and sexual health scores); risk of bias high.
- Wang et al. [19]: RCT; N = 154 (Tx 79, P 75); tamsulosin 0.4 mg; analgesics on demand; stent duration 7 days; silicone stent, 7 Fr, length height-adjusted; indication post-URS; stone size 9 mm (Tx) / 9.4 mm (P); stone location 16 upper, 49 mid, 89 lower; measure USSQ (urinary symptom, body pain, general health, work performance, and sexual health scores); risk of bias low.

NA, not available; Tx, treatment group; P, placebo; Fr, French; USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; ESWL, extracorporeal shock wave lithotripsy; PCNL, percutaneous nephrolithotomy; height-adjusted, stent length adjusted to patient height. *In Dellis et al., 50 patients received alfuzosin 10 mg and another 50 patients received tamsulosin 0.4 mg.

Quality assessment
Figures 2 and 3 present the details of quality assessment, as measured by the Cochrane Collaboration risk-of-bias tool (Fig. 2: risk-of-bias graph with results presented as percentages; Fig. 3: risk-of-bias summary for each included study). Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of bias (Table 1). The most common concern was blinding of outcome assessment; the second most common concerns were allocation concealment and blinding of participants and personnel.

Heterogeneity assessment
Forest plots of the pairwise meta-analyses are shown in Fig. 4 (a: urinary symptom score; b: body pain score). The heterogeneity test for USS in the analysis of alpha-blockers (alfuzosin and tamsulosin) versus placebo showed χ2 = 96.43 with 7 df (P < 0.001) and I2 = 93 %. Because notable heterogeneity was detected in the analyses of all studies, random-effects models were used to further assess these variables. In the analysis of BPS, the heterogeneity test likewise indicated substantial heterogeneity, with χ2 = 44.66 with 8 df (P < 0.001) and I2 = 82 %, so pairwise meta-analyses with random-effects models were also performed. None of the variables demonstrated heterogeneity in the radial plots after the effect models were selected for USS and BPS (Fig. 5: radial plots; a: urinary symptom score; b: body pain score).

Publication bias
Funnel plots from the pairwise meta-analyses are shown in Fig. 6 (a: urinary symptom score; b: body pain score). With so few studies it was difficult to assess publication bias, although some degree of bias is suspected.

Pairwise meta-analysis for urinary symptom and body pain scores
The forest plot using the random-effects model demonstrated an MD of −5.69 for USS (95 % CI −8.84 to −2.53, P < 0.001) between alpha-blockers and placebo (Fig. 4a). In subanalyses, both alfuzosin (MD −4.12, 95 % CI −5.51 to −2.73, P < 0.001) and tamsulosin (MD −7.83, 95 % CI −14.35 to −1.31, P = 0.02) had lower scores than placebo. According to the forest plot for BPS, alpha-blockers were superior to placebo, with an MD of −6.20 (95 % CI −8.74 to −2.13, P < 0.001) (Fig. 4b). In the subanalysis of alfuzosin versus placebo, alfuzosin showed an MD of −4.21 (95 % CI −5.56 to −2.85, P < 0.001). Between tamsulosin and placebo, the random-effects model demonstrated an MD of −7.71 (95 % CI −13.28 to −2.13, P = 0.007) for BPS.

Network meta-analysis for urinary symptom and body pain scores
In the network meta-analysis, alfuzosin had a lower USS than placebo (MD −4.85, 95 % CI −8.53 to −1.33), as did tamsulosin (MD −8.84, 95 % CI −13.08 to −4.31); however, the difference between alfuzosin and tamsulosin was not significant (MD 3.99, 95 % CI −1.23 to 9.04). For BPS, there were significant differences for alfuzosin versus placebo (MD −5.71, 95 % CI −11.32 to −0.52) and for tamsulosin versus placebo (MD −7.77, 95 % CI −13.68 to −2.14), whereas the comparison of tamsulosin versus alfuzosin showed a non-significant MD of 2.12 (95 % CI −4.62 to 8.72) (Fig. 7: network meta-analysis; a: urinary symptom score; b: body pain score). In the rank-probability test, tamsulosin ranked highest for both USS and BPS, followed by alfuzosin, with placebo ranked lowest (Fig. 8: rank-probability test of the network meta-analyses; a: urinary symptom score; b: body pain score).
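As a quick check on the heterogeneity figures quoted above, the following minimal Python sketch recomputes Higgins' I2 from the reported Cochran Q statistics and degrees of freedom using the formula given in the Methods, I2 = (Q − df)/Q × 100 %, together with the stated I2 ≥ 50 % threshold for switching to a random-effects model.

```python
def higgins_i2(q: float, df: int) -> float:
    """Higgins' I^2 (%) from Cochran's Q and its degrees of freedom."""
    return max(0.0, (q - df) / q) * 100.0

# Values reported in the heterogeneity assessment above.
for label, q, df in [("USS", 96.43, 7), ("BPS", 44.66, 8)]:
    i2 = higgins_i2(q, df)
    model = "random-effects" if i2 >= 50 else "fixed-effect"
    print(f"{label}: I^2 = {i2:.1f}% -> {model} model")
# Output: USS ~92.7% and BPS ~82.1%, matching the reported 93% and 82%
# and supporting the use of random-effects models for both outcomes.
```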
Conclusions
Ureteral stent-related symptoms are effectively alleviated with alpha-blockers. Tamsulosin might be more effective than alfuzosin. However, additional randomized controlled trials with alfuzosin and tamsulosin need to be performed for patients with ureteral stents.
[ "Inclusion criteria", "Search strategy", "Extraction of data", "Quality assessment for each study", "Heterogeneity tests", "Statistical analysis", "Eligible studies", "Quality assessment", "Heterogeneity assessment", "Publication bias", "Pairwise meta-analysis for urinary symptom and body pain scores", "Network meta-analysis for urinary symptom and body pain scores" ]
[ "Published RCTs that accorded with the following criteria were included. (i) The design of study had an assessment for alpha-blockers to treat ureteral stent discomfort. (ii) A match was performed between the baseline characteristics of patients from two groups, including the total number of subjects and the values of each index. (iii) Alpha-blockers were analyzed with standard therapy or a placebo group. (iv) Standard indications for ureteral stenting, such as stone treatment, ureteroscopic procedures, and ureteral surgery including pyeloplasty, were accepted. (v) Endpoint outcome parameters were described using USSQ, including urinary symptom score (USS) and body pain score (BPS). (vi) The full text of the study was available in English. This report was prepared in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (accessible at http://www.prisma-statement.org/).", "A literature search of all publications before 31 January 2014 was performed in EMBASE and PubMed. Additionally, a cross-reference search of eligible articles was performed to check studies that were not found during the computerized search. Combinations of the following MeSH terms and keywords were used: tamsulosin, alfuzosin, doxazosin, terazosin, silodosin, prazosin, alpha, stent, discomfort, pain, complication, ureter, ureteral, ureteric, and randomized controlled trial.", "A researcher (JKK) screened all titles and abstracts identified by the search strategy. Other two researchers (JYL and DHK) independently evaluated the full text of each paper to determine whether a paper met the inclusion criteria. Disagreements were resolved by discussion until a consensus was reached or by arbitration mediated by another researcher (KSC).", "After the final group of papers was agreed on, two researchers (JYL and JKK) independently evaluated the quality of each article. The Cochrane’s risk of bias as a quality assessment tool for RCTs were used. The assessment included assigning a judgment of “yes,” “no,” or “unclear” for each domain to designate a low, high, or unclear risk of bias, respectively. If one or no domain was deemed “unclear” or “no,” the study was classified as having a low risk of bias. If four or more domains were deemed “unclear” or “no,” the study was classified as having a high risk of bias. If two or three domains were deemed “unclear” or “no,” the study was classified as having a moderate risk of bias [11]. Quality assessment was performed with Review Manager 5 (RevMan 5.2.11, Cochrane Collaboration, Oxford, UK).", "Heterogeneity on included studies was examinated using the Q statistic and Higgins’ I2 statistic [12]. Higgins’ I2 measures the percentage of total variation due to heterogeneity rather than chance across studies. Higgins’ I2 was calculated as follows:\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$ {\\mathrm{I}}^2=\\frac{\\mathrm{Q}\\hbox{-} \\mathrm{d}\\mathrm{f}}{\\mathrm{Q}}\\times 100\\%, $$\\end{document}I2=Q‐dfQ×100%,\nin which “Q” was Cochran’s heterogeneity statistic, and “df” was the degrees of freedom.\nAn I2 ≥ 50 % was considered to represent substantial heterogeneity. For the Q statistic, heterogeneity was deemed to be significant for p less than 0.10 [13]. 
If there was evidence of heterogeneity, the data were analyzed using a random-effects model. A summary estimate of the test sensitivity was obtained with 95 % confidence intervals (CIs) after secondary examination of heterogeneity in the random-effects model using radial plots [14]. Studies in which positive results were confirmed were assessed with a pooled specificity with 95 % CIs.", "Outcome variables measured at specific time points were compared in terms of mean differences with 95 % CIs using a network meta-analysis. Analyses were based on non-informative priors for effect sizes and precision. Convergence and lack of auto-correlation were confirmed after four chains and a 50,000-simulation burn-in phase; finally, direct probability statements were derived from an additional 100,000-simulation phase. The probability that each group had the lowest rate of clinical events was assessed by Bayesian Markov Chain Monte Carlo modeling. Sensitivity analyses were performed by repeating the main computations with a fixed-effect method. Model fit was appraised by computing and comparing estimates for deviance and deviance information criterion. All statistical analyses were performed with Review Manager 5 and R (R version 3.0.3, R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org) and the metafor and gemtc packages for pair-wise and network meta-analyses.", "The database search retrieved 21 articles covering 88 studies for potential inclusion in the meta-analysis. Fourteen articles were excluded based on the inclusion/exclusion criteria; eight of the fourteen articles were retrospective models, and three articles were reported with different tools and variables. The other three articles were excluded, because they did not report final results. Using using pairwise and network meta-analyses, the remaining seven articles were included in the qualitative and quantitative syntheses (Fig. 1).Fig. 1Flow diagram of evidence acquisition. Seven studies were ultimately included in the qualitative and quantitative syntheses using pairwise and network meta-analyses\nFlow diagram of evidence acquisition. Seven studies were ultimately included in the qualitative and quantitative syntheses using pairwise and network meta-analyses\nData corresponding to confounding factors in each study are summarized in Table 1. Four studies included outcome comparisons between alfuzosin versus placebo [15–18]. 
Two trials reported on therapeutic outcomes of tamsulosin versus placebo [19, 20], and a three-arm trial compared outcomes of alfuzosin, tamsulosin, and placebo [21].Table 1Enrolled studies for this meta-analysisStentStoneStudyStudy designNTxPAlpha-blockerAnalgesicDuration (day)TypeSizeLengthIndicationSize (mm)LocationMeasureQuality assessment-risk biasTxPBeddingfield et al.[15]RCT552629AlfuzosinOn demand10NANAAdjustedPost URS6.357.255 % renalUSSQLow- urinary symptom score10 mg- body pain score- general health score25 % renal/ureteric- work performance score20 % ureteric- sexual health scoreDamiano et al.[20]RCT753837TamsulosinOn demand14PU7 FrAdjustedPost URSNANANAUSSQIntermediate0.4 mg- urinary symptom score- body pain scoreDeliveliotis et al.[16]RCT1005050AlfuzosinOn demand28PU5 FrAdjustedConservative treatment for stone, <10 mm, hydronephrosis7.67.119 upperUSSQLow23 mid- urinary symptom score10 mg48 lower- body pain score- general health score- sexual health scoreDellis et al.[21]RCT15010050*AlfuzosinOn demand28NA6 FrAdjustedPost ESWL, URSNANANAUSSQLow- urinary symptom score10 mgTamsulosin- body pain score0.4 mg- general health score- work performance score- sexual health scoreNazim et al.[17]RCT1306565AlfuzosinOn demand7PU4.7 Fr 6 FrNAPost URSNANA40 upperVASHigh28 midUSSQ10 mg62 lower- urinary symptom score- body pain scorePark et al.[18]RCT322012AlfuzosinNA42PU6 FrAdjustedPost URS, PCNL, Lap Pyelo, endo-ureterotomyNANANAUSSQHigh- urinary symptom score- body pain score10 mg- general health score- work performance score- sexual health scoreWang et al.[19]RCT1547975TamsulosinOn demand7Sil7 FrAdjustedPost URS99.416 upperUSSQLow49 mid- urinary symptom score0.4 mg89 lower- body pain score- general health score- work performance score- sexual health scoreAdjusted, stent length is height adjusted; NA, not available; Tx, Treatment group, P, placebo; PU, Polyurethane; Sil, Silcone;USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; PCNL, percutaneous nephrolithotomy; Lap Pyelo, laparoscopic pyeloplasty*50 patients received alfuzosin 10 mg, and another 50 patients received tamsulosin 0.4 mg\nEnrolled studies for this meta-analysis\nAdjusted, stent length is height adjusted; NA, not available; Tx, Treatment group, P, placebo; PU, Polyurethane; Sil, Silcone;USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; PCNL, percutaneous nephrolithotomy; Lap Pyelo, laparoscopic pyeloplasty\n*50 patients received alfuzosin 10 mg, and another 50 patients received tamsulosin 0.4 mg", "Figures 2 and 3 present the details of quality assessment, as measured by the Cochrane Collaboration risk-of-bias tool. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of bias (Table 1). The most common risk factor for quality assessment was the risk of blinding of outcome assessment; the second most common concerns were allocation concealment and blinding participants and personnel.Fig. 2Risk of bias graph. We reviewed the risk of bias in each study included in this meta-analysis and presented the results as percentages. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of biasFig. 3Risk of bias summary. We reviewed the risk of bias in each of the studies included in this meta-analysis\nRisk of bias graph. 
We reviewed the risk of bias in each study included in this meta-analysis and presented the results as percentages. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of bias\nRisk of bias summary. We reviewed the risk of bias in each of the studies included in this meta-analysis", "Forest plots of pairwise meta-analyses are demonstrated in Fig. 4. A heterogeneity test for USS showed the following: χ2 = 96.43 with 7 df (P < 0.001) and I2 = 93 % in the analysis of alpha-blockers, including alfuzosin and tamsulosin versus placebo. Notable heterogeneities were detected in the analyses of all studies; thus, random-effects models were used to further assess these variables. In the analysis of BPS, a heterogeneity test demonstrated homogeneity with χ2 = 44.66 with 8 df (P < 0.001) and I2 = 82 %. Pairwise meta-analyses with random-effects models were also performed. None of the variables demonstrated heterogeneity in radial plots after selecting effect models for USS and BPS (Fig. 5).Fig. 4Pairwise meta-analysis. (a) urinary symptom score (b) body pain scoreFig. 5Radial plots. None of the variables demonstrated heterogeneity after selecting effect models for each variable in the radial plots. (a) urinary symptom score (b) body pain score\nPairwise meta-analysis. (a) urinary symptom score (b) body pain score\nRadial plots. None of the variables demonstrated heterogeneity after selecting effect models for each variable in the radial plots. (a) urinary symptom score (b) body pain score", "Funnel plots from pairwise meta-analyses are demonstrated in Fig. 6; however, with few studies, it was difficult to assess publication bias, although some degree of bias is suspected.Fig. 6Funnel plots. (a) urinary symptom score (b) body pain score. It was difficult to assess publication bias with few studies, although some degree of bias is suspected\nFunnel plots. (a) urinary symptom score (b) body pain score. It was difficult to assess publication bias with few studies, although some degree of bias is suspected", "The forest plot using the random-effects model demomstrated an MD of −5.69 for USS (95 % CI [−8.84–−2.53], P < 0.001) between alpha-blockers and placebo (Fig. 4a). In subanalyses, both alfuzosin (MD; −4.12, 95 % CI [−5.51–−2.73], P < 0.001) and tamsulosin (MD; −7.83, 95 % CI [−14.35–−1.31], P = 0.02) had low MDs versus placebo. According to the forest plot for BPS, alpha-blockers were superior to placebo, with an MD of −6.20 (95 % CI [−8.74–−2.13], P < 0.001) (Fig. 4b). In the subanalysis of alfuzosin versus placebo, alfuzosin showed an MD of −4.21 (95 % CI [−5.56–−2.85], P < 0.001). Between tamsulosin and placebo, the random-effects model demonstrated an MD of −7.71 (95 % CI [−13.28–−2.13], P = 0.007) for BSP.", "Alfuzosin had a lower In USS than that of placebo (MD; −4.85, 95 % CI [−8.53–−1.33]). Tamsulosin also had a lower score than that of placebo (MD; −8.84, 95 % CI [−13.08–−4.31]. However, there was not a significant difference in MD between alfuzosin and tamsulosin according to network meta-analysis (MD; 3.99, 95 % CI [−1.23–9.04]). There were significant differences in the BPS achieved with alfuzosin versus placebo (MD; −5.71, 95 % CI [−11.32–−0.52]) and in tamsulosin versus placebo (MD; −7.77, 95 % CI [−13.68–−2.14]). Comparison of the BPS achieved with tamsulosin versus alfuzosin showed an MD of 2.12 (95 % CI [−4.62–8.72]), which was not significant (Fig. 7). 
In the rank-probability test, tamsulosin had the highest rank for USS, followed by alfuzosin. Tamsulosin was also ranked highest for BPS in the rank-probability test, followed by alfuzosin. The placebo was ranked lowest for USS and BPS (Fig. 8).Fig. 7Network meta-analysis. (a) urinary symptom score (b) body pain score. Alfuzosin and tamsulosin had a lower score both in USS and BPS than in the placebo. However, there was not a significant difference in MD between alfuzosin and tamsulosin according to network meta-analysisFig. 8Rank-probability test of network meta-analyses. (a) urinary symptom score (b) body pain score. Tamsulosin had the highest rank for USS, followed by alfuzosin. Tamsulosin was also ranked highest for BPS in the rank-probability test, followed by alfuzosin. The placebo was ranked lowest for USS and BPS\nNetwork meta-analysis. (a) urinary symptom score (b) body pain score. Alfuzosin and tamsulosin had a lower score both in USS and BPS than in the placebo. However, there was not a significant difference in MD between alfuzosin and tamsulosin according to network meta-analysis\nRank-probability test of network meta-analyses. (a) urinary symptom score (b) body pain score. Tamsulosin had the highest rank for USS, followed by alfuzosin. Tamsulosin was also ranked highest for BPS in the rank-probability test, followed by alfuzosin. The placebo was ranked lowest for USS and BPS" ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Inclusion criteria", "Search strategy", "Extraction of data", "Quality assessment for each study", "Heterogeneity tests", "Statistical analysis", "Results", "Eligible studies", "Quality assessment", "Heterogeneity assessment", "Publication bias", "Pairwise meta-analysis for urinary symptom and body pain scores", "Network meta-analysis for urinary symptom and body pain scores", "Discussion and conclusion", "Conclusions" ]
[ "In 1978, the ureteral double-J stent was first described by Finney et al. [1, 2]. Ureteral double-J stent insertion has been one of the most common urologic procedures; however, indwelling stents are often accompanied by significant morbidity including voiding and storage symptoms, flank pain, hematuria, and infection [3]. Symptoms of stent discomfort, including bladder irritation symptoms and flank pain or discomfort, are generally treated with oral analgesics, such as narcotics and anti­inflammatory medications; however these medications are only moderately effective. Alpha-blockers alleviate bladder irritation due to stents, resulting in reduced incidence of dysuria, frequency, and pain compared to placebo [4]. Ureteral stent discomfort may be due to spasms of the ureteral smooth muscle that surrounds the indwelling foreign object and may run the length of the ureter. Further, irritation of the trigone, which also has alpha-1d receptors, may be caused by the intravesical lower coil of the stent. Alternatively, voiding may increase pressure on the renal pelvis and cause discomfort [5]. Several studies have investigated if alpha-blockers can alleviate symptoms related to stent placement [6]. In 2011, Lamb et al. reported a pair-wise meta-analysis of randomized controlled trials (RCTs) indicating that orally administrated alpha-blockers reduce stent-related discomfort and storage symptoms as evaluated by the Ureteric Stent Symptoms Questionnaire (USSQ) [7].\nNewly introduced network meta-analysis is a meta-analysis in which multiple treatments are compared using direct comparisons of interventions within RCTs and indirect comparisons across trials based on a common comparator [8–10]. The present systematic review and network meta-analysis examined available RCTs to study the effects of alpha-blockers on stent-related symptoms.", " Inclusion criteria Published RCTs that accorded with the following criteria were included. (i) The design of study had an assessment for alpha-blockers to treat ureteral stent discomfort. (ii) A match was performed between the baseline characteristics of patients from two groups, including the total number of subjects and the values of each index. (iii) Alpha-blockers were analyzed with standard therapy or a placebo group. (iv) Standard indications for ureteral stenting, such as stone treatment, ureteroscopic procedures, and ureteral surgery including pyeloplasty, were accepted. (v) Endpoint outcome parameters were described using USSQ, including urinary symptom score (USS) and body pain score (BPS). (vi) The full text of the study was available in English. This report was prepared in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (accessible at http://www.prisma-statement.org/).\nPublished RCTs that accorded with the following criteria were included. (i) The design of study had an assessment for alpha-blockers to treat ureteral stent discomfort. (ii) A match was performed between the baseline characteristics of patients from two groups, including the total number of subjects and the values of each index. (iii) Alpha-blockers were analyzed with standard therapy or a placebo group. (iv) Standard indications for ureteral stenting, such as stone treatment, ureteroscopic procedures, and ureteral surgery including pyeloplasty, were accepted. (v) Endpoint outcome parameters were described using USSQ, including urinary symptom score (USS) and body pain score (BPS). 
(vi) The full text of the study was available in English. This report was prepared in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (accessible at http://www.prisma-statement.org/).\n Search strategy A literature search of all publications before 31 January 2014 was performed in EMBASE and PubMed. Additionally, a cross-reference search of eligible articles was performed to check studies that were not found during the computerized search. Combinations of the following MeSH terms and keywords were used: tamsulosin, alfuzosin, doxazosin, terazosin, silodosin, prazosin, alpha, stent, discomfort, pain, complication, ureter, ureteral, ureteric, and randomized controlled trial.\nA literature search of all publications before 31 January 2014 was performed in EMBASE and PubMed. Additionally, a cross-reference search of eligible articles was performed to check studies that were not found during the computerized search. Combinations of the following MeSH terms and keywords were used: tamsulosin, alfuzosin, doxazosin, terazosin, silodosin, prazosin, alpha, stent, discomfort, pain, complication, ureter, ureteral, ureteric, and randomized controlled trial.\n Extraction of data A researcher (JKK) screened all titles and abstracts identified by the search strategy. Other two researchers (JYL and DHK) independently evaluated the full text of each paper to determine whether a paper met the inclusion criteria. Disagreements were resolved by discussion until a consensus was reached or by arbitration mediated by another researcher (KSC).\nA researcher (JKK) screened all titles and abstracts identified by the search strategy. Other two researchers (JYL and DHK) independently evaluated the full text of each paper to determine whether a paper met the inclusion criteria. Disagreements were resolved by discussion until a consensus was reached or by arbitration mediated by another researcher (KSC).\n Quality assessment for each study After the final group of papers was agreed on, two researchers (JYL and JKK) independently evaluated the quality of each article. The Cochrane’s risk of bias as a quality assessment tool for RCTs were used. The assessment included assigning a judgment of “yes,” “no,” or “unclear” for each domain to designate a low, high, or unclear risk of bias, respectively. If one or no domain was deemed “unclear” or “no,” the study was classified as having a low risk of bias. If four or more domains were deemed “unclear” or “no,” the study was classified as having a high risk of bias. If two or three domains were deemed “unclear” or “no,” the study was classified as having a moderate risk of bias [11]. Quality assessment was performed with Review Manager 5 (RevMan 5.2.11, Cochrane Collaboration, Oxford, UK).\nAfter the final group of papers was agreed on, two researchers (JYL and JKK) independently evaluated the quality of each article. The Cochrane’s risk of bias as a quality assessment tool for RCTs were used. The assessment included assigning a judgment of “yes,” “no,” or “unclear” for each domain to designate a low, high, or unclear risk of bias, respectively. If one or no domain was deemed “unclear” or “no,” the study was classified as having a low risk of bias. If four or more domains were deemed “unclear” or “no,” the study was classified as having a high risk of bias. If two or three domains were deemed “unclear” or “no,” the study was classified as having a moderate risk of bias [11]. 
Quality assessment was performed with Review Manager 5 (RevMan 5.2.11, Cochrane Collaboration, Oxford, UK).\n Heterogeneity tests Heterogeneity across the included studies was examined using the Q statistic and Higgins’ I2 statistic [12]. Higgins’ I2 measures the percentage of total variation across studies that is due to heterogeneity rather than chance and was calculated as I2 = (Q − df)/Q × 100 %, in which “Q” is Cochran’s heterogeneity statistic and “df” is the degrees of freedom.\nAn I2 ≥ 50 % was considered to represent substantial heterogeneity. For the Q statistic, heterogeneity was deemed to be significant for p less than 0.10 [13]. If there was evidence of heterogeneity, the data were analyzed using a random-effects model. A summary estimate of the test sensitivity was obtained with 95 % confidence intervals (CIs) after secondary examination of heterogeneity in the random-effects model using radial plots [14]. Studies in which positive results were confirmed were assessed with a pooled specificity with 95 % CIs.\n Statistical analysis Outcome variables measured at specific time points were compared in terms of mean differences with 95 % CIs using a network meta-analysis. Analyses were based on non-informative priors for effect sizes and precision. Convergence and lack of auto-correlation were confirmed after four chains and a 50,000-simulation burn-in phase; finally, direct probability statements were derived from an additional 100,000-simulation phase. The probability that each group had the lowest rate of clinical events was assessed by Bayesian Markov chain Monte Carlo modeling. Sensitivity analyses were performed by repeating the main computations with a fixed-effect method. Model fit was appraised by computing and comparing estimates of deviance and the deviance information criterion. All statistical analyses were performed with Review Manager 5 and R (R version 3.0.3, R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org) and the metafor and gemtc packages for pair-wise and network meta-analyses.", "Published RCTs that accorded with the following criteria were included. (i) The study was designed to assess alpha-blockers for the treatment of ureteral stent discomfort. (ii) The baseline characteristics of the patients in the two groups were matched, including the total number of subjects and the values of each index. (iii) Alpha-blockers were compared with standard therapy or a placebo group. (iv) Standard indications for ureteral stenting, such as stone treatment, ureteroscopic procedures, and ureteral surgery including pyeloplasty, were accepted. (v) Endpoint outcome parameters were described using the USSQ, including the urinary symptom score (USS) and body pain score (BPS). (vi) The full text of the study was available in English. This report was prepared in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (accessible at http://www.prisma-statement.org/).", "A literature search of all publications before 31 January 2014 was performed in EMBASE and PubMed. Additionally, a cross-reference search of eligible articles was performed to identify studies that were not found during the computerized search. Combinations of the following MeSH terms and keywords were used: tamsulosin, alfuzosin, doxazosin, terazosin, silodosin, prazosin, alpha, stent, discomfort, pain, complication, ureter, ureteral, ureteric, and randomized controlled trial.", "One researcher (JKK) screened all titles and abstracts identified by the search strategy. Two other researchers (JYL and DHK) independently evaluated the full text of each paper to determine whether it met the inclusion criteria. Disagreements were resolved by discussion until a consensus was reached or by arbitration mediated by another researcher (KSC).", "After the final group of papers was agreed on, two researchers (JYL and JKK) independently evaluated the quality of each article. The Cochrane risk-of-bias tool was used for quality assessment of the RCTs. The assessment included assigning a judgment of “yes,” “no,” or “unclear” for each domain to designate a low, high, or unclear risk of bias, respectively. 
If one or no domain was deemed “unclear” or “no,” the study was classified as having a low risk of bias. If four or more domains were deemed “unclear” or “no,” the study was classified as having a high risk of bias. If two or three domains were deemed “unclear” or “no,” the study was classified as having a moderate risk of bias [11]. Quality assessment was performed with Review Manager 5 (RevMan 5.2.11, Cochrane Collaboration, Oxford, UK).", "Heterogeneity on included studies was examinated using the Q statistic and Higgins’ I2 statistic [12]. Higgins’ I2 measures the percentage of total variation due to heterogeneity rather than chance across studies. Higgins’ I2 was calculated as follows:\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$ {\\mathrm{I}}^2=\\frac{\\mathrm{Q}\\hbox{-} \\mathrm{d}\\mathrm{f}}{\\mathrm{Q}}\\times 100\\%, $$\\end{document}I2=Q‐dfQ×100%,\nin which “Q” was Cochran’s heterogeneity statistic, and “df” was the degrees of freedom.\nAn I2 ≥ 50 % was considered to represent substantial heterogeneity. For the Q statistic, heterogeneity was deemed to be significant for p less than 0.10 [13]. If there was evidence of heterogeneity, the data were analyzed using a random-effects model. A summary estimate of the test sensitivity was obtained with 95 % confidence intervals (CIs) after secondary examination of heterogeneity in the random-effects model using radial plots [14]. Studies in which positive results were confirmed were assessed with a pooled specificity with 95 % CIs.", "Outcome variables measured at specific time points were compared in terms of mean differences with 95 % CIs using a network meta-analysis. Analyses were based on non-informative priors for effect sizes and precision. Convergence and lack of auto-correlation were confirmed after four chains and a 50,000-simulation burn-in phase; finally, direct probability statements were derived from an additional 100,000-simulation phase. The probability that each group had the lowest rate of clinical events was assessed by Bayesian Markov Chain Monte Carlo modeling. Sensitivity analyses were performed by repeating the main computations with a fixed-effect method. Model fit was appraised by computing and comparing estimates for deviance and deviance information criterion. All statistical analyses were performed with Review Manager 5 and R (R version 3.0.3, R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org) and the metafor and gemtc packages for pair-wise and network meta-analyses.", " Eligible studies The database search retrieved 21 articles covering 88 studies for potential inclusion in the meta-analysis. Fourteen articles were excluded based on the inclusion/exclusion criteria; eight of the fourteen articles were retrospective models, and three articles were reported with different tools and variables. The other three articles were excluded, because they did not report final results. Using using pairwise and network meta-analyses, the remaining seven articles were included in the qualitative and quantitative syntheses (Fig. 1).Fig. 1Flow diagram of evidence acquisition. 
Seven studies were ultimately included in the qualitative and quantitative syntheses using pairwise and network meta-analyses\nFlow diagram of evidence acquisition. Seven studies were ultimately included in the qualitative and quantitative syntheses using pairwise and network meta-analyses\nData corresponding to confounding factors in each study are summarized in Table 1. Four studies included outcome comparisons between alfuzosin versus placebo [15–18]. Two trials reported on therapeutic outcomes of tamsulosin versus placebo [19, 20], and a three-arm trial compared outcomes of alfuzosin, tamsulosin, and placebo [21].Table 1Enrolled studies for this meta-analysisStentStoneStudyStudy designNTxPAlpha-blockerAnalgesicDuration (day)TypeSizeLengthIndicationSize (mm)LocationMeasureQuality assessment-risk biasTxPBeddingfield et al.[15]RCT552629AlfuzosinOn demand10NANAAdjustedPost URS6.357.255 % renalUSSQLow- urinary symptom score10 mg- body pain score- general health score25 % renal/ureteric- work performance score20 % ureteric- sexual health scoreDamiano et al.[20]RCT753837TamsulosinOn demand14PU7 FrAdjustedPost URSNANANAUSSQIntermediate0.4 mg- urinary symptom score- body pain scoreDeliveliotis et al.[16]RCT1005050AlfuzosinOn demand28PU5 FrAdjustedConservative treatment for stone, <10 mm, hydronephrosis7.67.119 upperUSSQLow23 mid- urinary symptom score10 mg48 lower- body pain score- general health score- sexual health scoreDellis et al.[21]RCT15010050*AlfuzosinOn demand28NA6 FrAdjustedPost ESWL, URSNANANAUSSQLow- urinary symptom score10 mgTamsulosin- body pain score0.4 mg- general health score- work performance score- sexual health scoreNazim et al.[17]RCT1306565AlfuzosinOn demand7PU4.7 Fr 6 FrNAPost URSNANA40 upperVASHigh28 midUSSQ10 mg62 lower- urinary symptom score- body pain scorePark et al.[18]RCT322012AlfuzosinNA42PU6 FrAdjustedPost URS, PCNL, Lap Pyelo, endo-ureterotomyNANANAUSSQHigh- urinary symptom score- body pain score10 mg- general health score- work performance score- sexual health scoreWang et al.[19]RCT1547975TamsulosinOn demand7Sil7 FrAdjustedPost URS99.416 upperUSSQLow49 mid- urinary symptom score0.4 mg89 lower- body pain score- general health score- work performance score- sexual health scoreAdjusted, stent length is height adjusted; NA, not available; Tx, Treatment group, P, placebo; PU, Polyurethane; Sil, Silcone;USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; PCNL, percutaneous nephrolithotomy; Lap Pyelo, laparoscopic pyeloplasty*50 patients received alfuzosin 10 mg, and another 50 patients received tamsulosin 0.4 mg\nEnrolled studies for this meta-analysis\nAdjusted, stent length is height adjusted; NA, not available; Tx, Treatment group, P, placebo; PU, Polyurethane; Sil, Silcone;USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; PCNL, percutaneous nephrolithotomy; Lap Pyelo, laparoscopic pyeloplasty\n*50 patients received alfuzosin 10 mg, and another 50 patients received tamsulosin 0.4 mg\nThe database search retrieved 21 articles covering 88 studies for potential inclusion in the meta-analysis. Fourteen articles were excluded based on the inclusion/exclusion criteria; eight of the fourteen articles were retrospective models, and three articles were reported with different tools and variables. The other three articles were excluded, because they did not report final results. 
 Quality assessment: Figures 2 and 3 present the details of the quality assessment, as measured by the Cochrane Collaboration risk-of-bias tool. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of bias (Table 1). The most common concern was blinding of outcome assessment; the next most common concerns were allocation concealment and blinding of participants and personnel. Fig. 2: Risk of bias graph — the risk of bias in each included study, presented as percentages across studies. Fig. 3: Risk of bias summary — the risk of bias judged for each of the included studies.
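The classification rule from the Methods (count the domains judged “unclear” or “no”) is simple enough to state as code; the sketch below is only an illustration of that rule, and the example domain judgments are hypothetical.

```python
def classify_risk_of_bias(domain_judgments):
    """Classify a study from its per-domain judgments ('yes', 'no', or 'unclear').

    Rule from the Methods: at most one 'unclear'/'no' domain -> low risk,
    two or three -> moderate risk, four or more -> high risk.
    """
    flagged = sum(1 for judgment in domain_judgments if judgment.lower() in ("no", "unclear"))
    if flagged <= 1:
        return "low"
    if flagged >= 4:
        return "high"
    return "moderate"

# Hypothetical judgments for seven Cochrane risk-of-bias domains:
print(classify_risk_of_bias(["yes", "yes", "unclear", "no", "yes", "yes", "yes"]))  # -> "moderate"
```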
 Heterogeneity assessment: Forest plots of the pairwise meta-analyses are shown in Fig. 4. For USS, the heterogeneity test in the analysis of alpha-blockers (alfuzosin and tamsulosin) versus placebo gave χ2 = 96.43 with 7 df (P < 0.001) and I2 = 93 %. Notable heterogeneity was detected in the analyses of all studies; thus, random-effects models were used to further assess these variables. In the analysis of BPS, the heterogeneity test likewise demonstrated substantial heterogeneity, with χ2 = 44.66 with 8 df (P < 0.001) and I2 = 82 %, so the pairwise meta-analysis was also performed with a random-effects model. None of the variables demonstrated heterogeneity in the radial plots after the effect models were selected for USS and BPS (Fig. 5). Fig. 4: Pairwise meta-analysis; (a) urinary symptom score, (b) body pain score. Fig. 5: Radial plots; (a) urinary symptom score, (b) body pain score — none of the variables demonstrated heterogeneity after the effect model was selected for each variable.
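As a quick consistency check, the reported Q statistics reproduce these I2 values through the formula given in the Methods:

\[
I^2_{\mathrm{USS}} = \frac{96.43 - 7}{96.43} \times 100\,\% \approx 92.7\,\%, \qquad
I^2_{\mathrm{BPS}} = \frac{44.66 - 8}{44.66} \times 100\,\% \approx 82.1\,\%,
\]

in line with the reported values of 93 % and 82 %.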
 Publication bias: Funnel plots from the pairwise meta-analyses are shown in Fig. 6; however, with so few studies it was difficult to assess publication bias, although some degree of bias is suspected. Fig. 6: Funnel plots; (a) urinary symptom score, (b) body pain score. Pairwise meta-analysis for urinary symptom and body pain scores: The forest plot using the random-effects model demonstrated an MD of −5.69 for USS (95 % CI [−8.84, −2.53], P < 0.001) between alpha-blockers and placebo (Fig. 4a). In subanalyses, both alfuzosin (MD −4.12, 95 % CI [−5.51, −2.73], P < 0.001) and tamsulosin (MD −7.83, 95 % CI [−14.35, −1.31], P = 0.02) showed significantly lower scores than placebo. According to the forest plot for BPS, alpha-blockers were superior to placebo, with an MD of −6.20 (95 % CI [−8.74, −2.13], P < 0.001) (Fig. 4b). In the subanalysis of alfuzosin versus placebo, alfuzosin showed an MD of −4.21 (95 % CI [−5.56, −2.85], P < 0.001); between tamsulosin and placebo, the random-effects model demonstrated an MD of −7.71 (95 % CI [−13.28, −2.13], P = 0.007) for BPS.
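Pooled mean differences such as these are typically obtained by inverse-variance weighting with a between-study variance adjustment. The sketch below uses the DerSimonian-Laird estimator, a common default for random-effects models; the paper does not state which estimator was configured in RevMan, so treat this as an illustration rather than the authors' exact computation, and the per-study inputs are placeholders.

```python
import math

def pool_random_effects(mds, ses, z=1.96):
    """DerSimonian-Laird random-effects pooling of per-study mean differences.

    mds: list of study mean differences; ses: list of their standard errors.
    Returns (pooled MD, lower 95 % CI, upper 95 % CI, tau^2).
    """
    w = [1.0 / (s * s) for s in ses]                          # fixed-effect (inverse-variance) weights
    fixed = sum(wi * y for wi, y in zip(w, mds)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, mds))   # Cochran's Q
    df = len(mds) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance estimate
    w_re = [1.0 / (s * s + tau2) for s in ses]                # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, mds)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled, tau2
```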
 Network meta-analysis for urinary symptom and body pain scores: Alfuzosin had a lower USS than placebo (MD −4.85, 95 % CI [−8.53, −1.33]), and tamsulosin also had a lower score than placebo (MD −8.84, 95 % CI [−13.08, −4.31]). However, there was no significant difference in MD between alfuzosin and tamsulosin in the network meta-analysis (MD 3.99, 95 % CI [−1.23, 9.04]). There were significant differences in BPS for alfuzosin versus placebo (MD −5.71, 95 % CI [−11.32, −0.52]) and for tamsulosin versus placebo (MD −7.77, 95 % CI [−13.68, −2.14]); the comparison of tamsulosin versus alfuzosin showed an MD of 2.12 (95 % CI [−4.62, 8.72]), which was not significant (Fig. 7). In the rank-probability test, tamsulosin had the highest rank for both USS and BPS, followed by alfuzosin; placebo was ranked lowest for both (Fig. 8). Fig. 7: Network meta-analysis; (a) urinary symptom score, (b) body pain score — alfuzosin and tamsulosin both scored lower than placebo on USS and BPS, with no significant difference between the two alpha-blockers. Fig. 8: Rank-probability test of the network meta-analyses; (a) urinary symptom score, (b) body pain score — tamsulosin ranked highest for both scores, followed by alfuzosin, with placebo ranked lowest.",
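The rank probabilities in Fig. 8 come from the Bayesian hierarchical model fitted with gemtc, as described in the Methods. Purely as an illustration of the idea (not the authors' model), the sketch below draws samples of each treatment's effect versus placebo from a normal approximation and counts how often each arm gives the lowest (best) score; the standard errors in the usage comment are rough back-calculations from the reported 95 % intervals and should not be read as fitted posterior quantities.

```python
import random

def rank_probabilities(effects, n_draws=100_000, seed=0):
    """Probability that each arm yields the lowest (best) score versus placebo.

    effects: dict mapping treatment name -> (mean difference vs placebo, standard error).
    Placebo is included implicitly with an effect of 0.
    """
    rng = random.Random(seed)
    arms = ["placebo"] + list(effects)
    wins = {arm: 0 for arm in arms}
    for _ in range(n_draws):
        draws = {"placebo": 0.0}
        for arm, (mu, se) in effects.items():
            draws[arm] = rng.gauss(mu, se)        # crude normal approximation, not an MCMC posterior
        wins[min(draws, key=draws.get)] += 1      # lower score = fewer symptoms / less pain
    return {arm: wins[arm] / n_draws for arm in arms}

# Illustrative inputs only (MDs from the text; SEs approximated from the reported intervals):
# rank_probabilities({"alfuzosin": (-4.85, 1.8), "tamsulosin": (-8.84, 2.2)})
```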
"Recently, two pair-wise meta-analyses of alpha-blockers in patients with ureteral stent discomfort were published. Yakoubi et al. conducted a meta-analysis of four RCTs [22]; however, they did not distinguish the types of alpha-blockers analyzed in each domain. Alpha-blockers reduced the scores for urinary symptoms, body pain, and the general health index but did not achieve significant changes in quality of work, sexual matters, or scores for additional problems. There were limitations, as few studies were analyzed; in particular, only three studies were analyzed for quality of work and two for additional problems. Lamb et al. conducted a meta-analysis of five RCTs that distinguished between alfuzosin and tamsulosin [7]; however, there was a possible error in the choice of effects model with regard to heterogeneity. Nevertheless, both meta-analyses showed decreased scores for urinary symptoms and body pain after the use of alpha-blockers.\nMore recently, Dellis et al. evaluated the effects of two different alpha-blockers on symptoms and quality of life in patients with indwelling ureteral stents in an RCT [21]. They prescribed alfuzosin, tamsulosin, or placebo to 50 patients each and assessed them with the USSQ. Patients who received alpha-blockers had significantly lower urinary symptom, body pain, general health index, and sexual life scores than the control group; however, there was no difference in the quality of work score, and there was no difference between the alpha-blocker groups.\nThe goal of this study was to ascertain the difference between tamsulosin and alfuzosin in alleviating stent discomfort by comparing the effectiveness of alpha-blockers to placebo in seven RCTs on the basis of urinary symptom and body pain scores. In most studies, continuous treatment had been carried out until removal of the ureteral stent. We divided alpha-blockers into alfuzosin and tamsulosin to conduct subgroup and network meta-analyses. In the subgroup analysis, both alpha-blockers appeared to significantly decrease urinary symptom and body pain scores compared with placebo.
There was no statistically significant difference between alfuzosin and tamsulosin in the network meta-analysis for either urinary symptom or body pain scores, which is consistent with the results of previous studies. However, in the rank-probability test of the network meta-analysis, tamsulosin appeared to be more effective than alfuzosin for both urinary symptom and body pain scores (Fig. 8).\nWe suggest that the difference in efficacy between the two alpha-blockers arises from the physiologic characteristics of ureteral receptor distribution. The ureter contains two continuous thin muscle layers, a loosely spiraled internal layer and a more tightly spiraled external layer, and a third, outer longitudinal layer is located in the lower third of the ureter. The lower ureter consists of transitional epithelium, a connective tissue layer, and three layers of smooth muscle [23]. Peristalsis of the ureter is initiated by spontaneous activity of renal pelvis pacemaker cells and is essentially regulated by the myogenic mechanism and neurogenic factors; electrical and mechanical activities are conducted to inactive distal regions [24]. The histologic characteristics of the three smooth muscle layers in the lower portion of the ureter and the denser innervation of this segment have become subjects of research interest. The alpha-1d receptor has its highest density in the lower portion of the ureter. Tamsulosin is a subtype-selective alpha-1a and alpha-1d blocker, whereas alfuzosin is a subtype-non-selective alpha-1 blocker [25]. The two alpha-blockers may therefore elicit different levels of efficacy owing to differences in selectivity and the distribution of the alpha-1d receptor in the lower ureter. However, it remains unclear whether subtype selectivity makes a significant contribution to the differences in efficacy of alpha-blockers. In the near future, prospective trials should compare several alpha-blockers, including silodosin and naftopidil, to confirm our results.\nA limitation of our study was that only two subdomains, the urinary symptom and body pain scores, were included, because not all of the RCTs in this study reported USSQ domains beyond these two (Table 1). This may bias the comparison of the two alpha-blockers, whose overall benefit can also be influenced by other symptoms and quality of life. All RCTs included in the present study used the USSQ, which comprehensively assesses not only urinary symptoms and pain but also sexual symptoms and quality of life. Although urinary symptoms and pain are the most problematic of the stent-related symptoms of discomfort, lower abdominal pain or discomfort, infection, and hematuria are also bothersome to patients. Furthermore, alpha-blockers can cause side effects involving the central nervous system, sexual function, ejaculatory function, and the cardiovascular system [26]. Therefore, it is important to compare not only the efficacy of the two medications but also their side effects; comparison of the other USSQ domains may also need to be considered. Some degree of publication bias was also a limitation of this study. However, Sutton et al. reviewed 48 articles from the Cochrane Database of Systematic Reviews and showed that publication or related biases were common within the sample of meta-analyses assessed [27]; moreover, they found that these biases did not affect the conclusions in most cases. Another limitation was that we did not take into consideration the possible effects of stent factors on stent discomfort.
Although six out of seven RCTs reported that stent insertion was performed with height adjustment, and differences in stent materials were also considered, whether the stent was placed correctly was not taken into account. Lee et al. [28] reported that, with the stent in an appropriate position, only the storage symptoms of the tamsulosin group were significantly lower than those of the analgesic group. This medication effect was not observed in the group with an inappropriately positioned stent, in which total IPSS and storage symptom scores were significantly higher than in the appropriate-position group. Appropriate stent position might therefore be one of the points to consider when conducting research on stent discomfort.", "Ureteral stent-related symptoms are effectively alleviated with alpha-blockers, and tamsulosin might be more effective than alfuzosin. However, additional randomized controlled trials of alfuzosin and tamsulosin in patients with ureteral stents are needed." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusions" ]
[ "Ureter", "Stents", "Adrenergic alpha-antagonists", "Meta-analysis", "Bayes theorem" ]
Background: In 1978, the ureteral double-J stent was first described by Finney et al. [1, 2]. Ureteral double-J stent insertion has become one of the most common urologic procedures; however, indwelling stents are often accompanied by significant morbidity, including voiding and storage symptoms, flank pain, hematuria, and infection [3]. Symptoms of stent discomfort, including bladder irritation and flank pain or discomfort, are generally treated with oral analgesics such as narcotics and anti-inflammatory medications; however, these medications are only moderately effective. Alpha-blockers alleviate bladder irritation due to stents, resulting in a reduced incidence of dysuria, frequency, and pain compared with placebo [4]. Ureteral stent discomfort may be due to spasms of the ureteral smooth muscle that surrounds the indwelling foreign object and may run the length of the ureter. Further, irritation of the trigone, which also has alpha-1d receptors, may be caused by the intravesical lower coil of the stent. Alternatively, voiding may increase pressure on the renal pelvis and cause discomfort [5]. Several studies have investigated whether alpha-blockers can alleviate symptoms related to stent placement [6]. In 2011, Lamb et al. reported a pair-wise meta-analysis of randomized controlled trials (RCTs) indicating that orally administered alpha-blockers reduce stent-related discomfort and storage symptoms as evaluated by the Ureteric Stent Symptom Questionnaire (USSQ) [7]. The newly introduced network meta-analysis is a meta-analysis in which multiple treatments are compared using direct comparisons of interventions within RCTs and indirect comparisons across trials based on a common comparator [8–10]. The present systematic review and network meta-analysis examined available RCTs to study the effects of alpha-blockers on stent-related symptoms. Methods: Inclusion criteria: Published RCTs that met the following criteria were included: (i) the study was designed to assess alpha-blockers for the treatment of ureteral stent discomfort; (ii) the baseline characteristics of the two groups, including the total number of subjects and the values of each index, were matched; (iii) alpha-blockers were compared with standard therapy or a placebo group; (iv) standard indications for ureteral stenting, such as stone treatment, ureteroscopic procedures, and ureteral surgery including pyeloplasty, were accepted; (v) endpoint outcome parameters were described using the USSQ, including the urinary symptom score (USS) and body pain score (BPS); and (vi) the full text of the study was available in English. This report was prepared in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (accessible at http://www.prisma-statement.org/).
Search strategy: A literature search of all publications before 31 January 2014 was performed in EMBASE and PubMed. Additionally, a cross-reference search of eligible articles was performed to check for studies that were not found during the computerized search. Combinations of the following MeSH terms and keywords were used: tamsulosin, alfuzosin, doxazosin, terazosin, silodosin, prazosin, alpha, stent, discomfort, pain, complication, ureter, ureteral, ureteric, and randomized controlled trial (an illustrative assembly of such a query is sketched after this Methods block). Extraction of data: One researcher (JKK) screened all titles and abstracts identified by the search strategy. Two other researchers (JYL and DHK) independently evaluated the full text of each paper to determine whether it met the inclusion criteria. Disagreements were resolved by discussion until a consensus was reached or by arbitration mediated by another researcher (KSC). Quality assessment for each study: After the final group of papers was agreed on, two researchers (JYL and JKK) independently evaluated the quality of each article. The Cochrane risk-of-bias tool was used for quality assessment of the RCTs. The assessment included assigning a judgment of “yes,” “no,” or “unclear” for each domain to designate a low, high, or unclear risk of bias, respectively. If one or no domain was deemed “unclear” or “no,” the study was classified as having a low risk of bias; if four or more domains were deemed “unclear” or “no,” as having a high risk of bias; and if two or three domains were deemed “unclear” or “no,” as having a moderate risk of bias [11]. Quality assessment was performed with Review Manager 5 (RevMan 5.2.11, Cochrane Collaboration, Oxford, UK).
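The keyword list above maps naturally onto a Boolean database query. The string assembled below is only an illustrative reconstruction — the authors do not report their exact query syntax, and the grouping of terms here is an assumption.

```python
# Illustrative only: one way the reported terms could be combined into a PubMed/EMBASE-style query.
drug_terms = ["tamsulosin", "alfuzosin", "doxazosin", "terazosin", "silodosin", "prazosin", "alpha"]
stent_terms = ["stent", "ureter", "ureteral", "ureteric"]
outcome_terms = ["discomfort", "pain", "complication"]

query = " AND ".join(
    "(" + " OR ".join(group) + ")" for group in (drug_terms, stent_terms, outcome_terms)
) + ' AND "randomized controlled trial"'
print(query)
```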
Quality assessment was performed with Review Manager 5 (RevMan 5.2.11, Cochrane Collaboration, Oxford, UK). Heterogeneity tests Heterogeneity on included studies was examinated using the Q statistic and Higgins’ I2 statistic [12]. Higgins’ I2 measures the percentage of total variation due to heterogeneity rather than chance across studies. Higgins’ I2 was calculated as follows:\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\mathrm{I}}^2=\frac{\mathrm{Q}\hbox{-} \mathrm{d}\mathrm{f}}{\mathrm{Q}}\times 100\%, $$\end{document}I2=Q‐dfQ×100%, in which “Q” was Cochran’s heterogeneity statistic, and “df” was the degrees of freedom. An I2 ≥ 50 % was considered to represent substantial heterogeneity. For the Q statistic, heterogeneity was deemed to be significant for p less than 0.10 [13]. If there was evidence of heterogeneity, the data were analyzed using a random-effects model. A summary estimate of the test sensitivity was obtained with 95 % confidence intervals (CIs) after secondary examination of heterogeneity in the random-effects model using radial plots [14]. Studies in which positive results were confirmed were assessed with a pooled specificity with 95 % CIs. Heterogeneity on included studies was examinated using the Q statistic and Higgins’ I2 statistic [12]. Higgins’ I2 measures the percentage of total variation due to heterogeneity rather than chance across studies. Higgins’ I2 was calculated as follows:\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\mathrm{I}}^2=\frac{\mathrm{Q}\hbox{-} \mathrm{d}\mathrm{f}}{\mathrm{Q}}\times 100\%, $$\end{document}I2=Q‐dfQ×100%, in which “Q” was Cochran’s heterogeneity statistic, and “df” was the degrees of freedom. An I2 ≥ 50 % was considered to represent substantial heterogeneity. For the Q statistic, heterogeneity was deemed to be significant for p less than 0.10 [13]. If there was evidence of heterogeneity, the data were analyzed using a random-effects model. A summary estimate of the test sensitivity was obtained with 95 % confidence intervals (CIs) after secondary examination of heterogeneity in the random-effects model using radial plots [14]. Studies in which positive results were confirmed were assessed with a pooled specificity with 95 % CIs. Statistical analysis Outcome variables measured at specific time points were compared in terms of mean differences with 95 % CIs using a network meta-analysis. Analyses were based on non-informative priors for effect sizes and precision. Convergence and lack of auto-correlation were confirmed after four chains and a 50,000-simulation burn-in phase; finally, direct probability statements were derived from an additional 100,000-simulation phase. The probability that each group had the lowest rate of clinical events was assessed by Bayesian Markov Chain Monte Carlo modeling. Sensitivity analyses were performed by repeating the main computations with a fixed-effect method. Model fit was appraised by computing and comparing estimates for deviance and deviance information criterion. 
All statistical analyses were performed with Review Manager 5 and R (R version 3.0.3, R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org) and the metafor and gemtc packages for pair-wise and network meta-analyses. Outcome variables measured at specific time points were compared in terms of mean differences with 95 % CIs using a network meta-analysis. Analyses were based on non-informative priors for effect sizes and precision. Convergence and lack of auto-correlation were confirmed after four chains and a 50,000-simulation burn-in phase; finally, direct probability statements were derived from an additional 100,000-simulation phase. The probability that each group had the lowest rate of clinical events was assessed by Bayesian Markov Chain Monte Carlo modeling. Sensitivity analyses were performed by repeating the main computations with a fixed-effect method. Model fit was appraised by computing and comparing estimates for deviance and deviance information criterion. All statistical analyses were performed with Review Manager 5 and R (R version 3.0.3, R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org) and the metafor and gemtc packages for pair-wise and network meta-analyses. Inclusion criteria: Published RCTs that accorded with the following criteria were included. (i) The design of study had an assessment for alpha-blockers to treat ureteral stent discomfort. (ii) A match was performed between the baseline characteristics of patients from two groups, including the total number of subjects and the values of each index. (iii) Alpha-blockers were analyzed with standard therapy or a placebo group. (iv) Standard indications for ureteral stenting, such as stone treatment, ureteroscopic procedures, and ureteral surgery including pyeloplasty, were accepted. (v) Endpoint outcome parameters were described using USSQ, including urinary symptom score (USS) and body pain score (BPS). (vi) The full text of the study was available in English. This report was prepared in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (accessible at http://www.prisma-statement.org/). Search strategy: A literature search of all publications before 31 January 2014 was performed in EMBASE and PubMed. Additionally, a cross-reference search of eligible articles was performed to check studies that were not found during the computerized search. Combinations of the following MeSH terms and keywords were used: tamsulosin, alfuzosin, doxazosin, terazosin, silodosin, prazosin, alpha, stent, discomfort, pain, complication, ureter, ureteral, ureteric, and randomized controlled trial. Extraction of data: A researcher (JKK) screened all titles and abstracts identified by the search strategy. Other two researchers (JYL and DHK) independently evaluated the full text of each paper to determine whether a paper met the inclusion criteria. Disagreements were resolved by discussion until a consensus was reached or by arbitration mediated by another researcher (KSC). Quality assessment for each study: After the final group of papers was agreed on, two researchers (JYL and JKK) independently evaluated the quality of each article. The Cochrane’s risk of bias as a quality assessment tool for RCTs were used. The assessment included assigning a judgment of “yes,” “no,” or “unclear” for each domain to designate a low, high, or unclear risk of bias, respectively. 
If one or no domain was deemed “unclear” or “no,” the study was classified as having a low risk of bias. If four or more domains were deemed “unclear” or “no,” the study was classified as having a high risk of bias. If two or three domains were deemed “unclear” or “no,” the study was classified as having a moderate risk of bias [11]. Quality assessment was performed with Review Manager 5 (RevMan 5.2.11, Cochrane Collaboration, Oxford, UK). Heterogeneity tests: Heterogeneity on included studies was examinated using the Q statistic and Higgins’ I2 statistic [12]. Higgins’ I2 measures the percentage of total variation due to heterogeneity rather than chance across studies. Higgins’ I2 was calculated as follows:\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\mathrm{I}}^2=\frac{\mathrm{Q}\hbox{-} \mathrm{d}\mathrm{f}}{\mathrm{Q}}\times 100\%, $$\end{document}I2=Q‐dfQ×100%, in which “Q” was Cochran’s heterogeneity statistic, and “df” was the degrees of freedom. An I2 ≥ 50 % was considered to represent substantial heterogeneity. For the Q statistic, heterogeneity was deemed to be significant for p less than 0.10 [13]. If there was evidence of heterogeneity, the data were analyzed using a random-effects model. A summary estimate of the test sensitivity was obtained with 95 % confidence intervals (CIs) after secondary examination of heterogeneity in the random-effects model using radial plots [14]. Studies in which positive results were confirmed were assessed with a pooled specificity with 95 % CIs. Statistical analysis: Outcome variables measured at specific time points were compared in terms of mean differences with 95 % CIs using a network meta-analysis. Analyses were based on non-informative priors for effect sizes and precision. Convergence and lack of auto-correlation were confirmed after four chains and a 50,000-simulation burn-in phase; finally, direct probability statements were derived from an additional 100,000-simulation phase. The probability that each group had the lowest rate of clinical events was assessed by Bayesian Markov Chain Monte Carlo modeling. Sensitivity analyses were performed by repeating the main computations with a fixed-effect method. Model fit was appraised by computing and comparing estimates for deviance and deviance information criterion. All statistical analyses were performed with Review Manager 5 and R (R version 3.0.3, R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org) and the metafor and gemtc packages for pair-wise and network meta-analyses. Results: Eligible studies The database search retrieved 21 articles covering 88 studies for potential inclusion in the meta-analysis. Fourteen articles were excluded based on the inclusion/exclusion criteria; eight of the fourteen articles were retrospective models, and three articles were reported with different tools and variables. The other three articles were excluded, because they did not report final results. Using using pairwise and network meta-analyses, the remaining seven articles were included in the qualitative and quantitative syntheses (Fig. 1).Fig. 1Flow diagram of evidence acquisition. Seven studies were ultimately included in the qualitative and quantitative syntheses using pairwise and network meta-analyses Flow diagram of evidence acquisition. 
Seven studies were ultimately included in the qualitative and quantitative syntheses using pairwise and network meta-analyses Data corresponding to confounding factors in each study are summarized in Table 1. Four studies included outcome comparisons between alfuzosin versus placebo [15–18]. Two trials reported on therapeutic outcomes of tamsulosin versus placebo [19, 20], and a three-arm trial compared outcomes of alfuzosin, tamsulosin, and placebo [21].Table 1Enrolled studies for this meta-analysisStentStoneStudyStudy designNTxPAlpha-blockerAnalgesicDuration (day)TypeSizeLengthIndicationSize (mm)LocationMeasureQuality assessment-risk biasTxPBeddingfield et al.[15]RCT552629AlfuzosinOn demand10NANAAdjustedPost URS6.357.255 % renalUSSQLow- urinary symptom score10 mg- body pain score- general health score25 % renal/ureteric- work performance score20 % ureteric- sexual health scoreDamiano et al.[20]RCT753837TamsulosinOn demand14PU7 FrAdjustedPost URSNANANAUSSQIntermediate0.4 mg- urinary symptom score- body pain scoreDeliveliotis et al.[16]RCT1005050AlfuzosinOn demand28PU5 FrAdjustedConservative treatment for stone, <10 mm, hydronephrosis7.67.119 upperUSSQLow23 mid- urinary symptom score10 mg48 lower- body pain score- general health score- sexual health scoreDellis et al.[21]RCT15010050*AlfuzosinOn demand28NA6 FrAdjustedPost ESWL, URSNANANAUSSQLow- urinary symptom score10 mgTamsulosin- body pain score0.4 mg- general health score- work performance score- sexual health scoreNazim et al.[17]RCT1306565AlfuzosinOn demand7PU4.7 Fr 6 FrNAPost URSNANA40 upperVASHigh28 midUSSQ10 mg62 lower- urinary symptom score- body pain scorePark et al.[18]RCT322012AlfuzosinNA42PU6 FrAdjustedPost URS, PCNL, Lap Pyelo, endo-ureterotomyNANANAUSSQHigh- urinary symptom score- body pain score10 mg- general health score- work performance score- sexual health scoreWang et al.[19]RCT1547975TamsulosinOn demand7Sil7 FrAdjustedPost URS99.416 upperUSSQLow49 mid- urinary symptom score0.4 mg89 lower- body pain score- general health score- work performance score- sexual health scoreAdjusted, stent length is height adjusted; NA, not available; Tx, Treatment group, P, placebo; PU, Polyurethane; Sil, Silcone;USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; PCNL, percutaneous nephrolithotomy; Lap Pyelo, laparoscopic pyeloplasty*50 patients received alfuzosin 10 mg, and another 50 patients received tamsulosin 0.4 mg Enrolled studies for this meta-analysis Adjusted, stent length is height adjusted; NA, not available; Tx, Treatment group, P, placebo; PU, Polyurethane; Sil, Silcone;USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; PCNL, percutaneous nephrolithotomy; Lap Pyelo, laparoscopic pyeloplasty *50 patients received alfuzosin 10 mg, and another 50 patients received tamsulosin 0.4 mg The database search retrieved 21 articles covering 88 studies for potential inclusion in the meta-analysis. Fourteen articles were excluded based on the inclusion/exclusion criteria; eight of the fourteen articles were retrospective models, and three articles were reported with different tools and variables. The other three articles were excluded, because they did not report final results. Using using pairwise and network meta-analyses, the remaining seven articles were included in the qualitative and quantitative syntheses (Fig. 1).Fig. 1Flow diagram of evidence acquisition. 
Seven studies were ultimately included in the qualitative and quantitative syntheses using pairwise and network meta-analyses Flow diagram of evidence acquisition. Seven studies were ultimately included in the qualitative and quantitative syntheses using pairwise and network meta-analyses Data corresponding to confounding factors in each study are summarized in Table 1. Four studies included outcome comparisons between alfuzosin versus placebo [15–18]. Two trials reported on therapeutic outcomes of tamsulosin versus placebo [19, 20], and a three-arm trial compared outcomes of alfuzosin, tamsulosin, and placebo [21].Table 1Enrolled studies for this meta-analysisStentStoneStudyStudy designNTxPAlpha-blockerAnalgesicDuration (day)TypeSizeLengthIndicationSize (mm)LocationMeasureQuality assessment-risk biasTxPBeddingfield et al.[15]RCT552629AlfuzosinOn demand10NANAAdjustedPost URS6.357.255 % renalUSSQLow- urinary symptom score10 mg- body pain score- general health score25 % renal/ureteric- work performance score20 % ureteric- sexual health scoreDamiano et al.[20]RCT753837TamsulosinOn demand14PU7 FrAdjustedPost URSNANANAUSSQIntermediate0.4 mg- urinary symptom score- body pain scoreDeliveliotis et al.[16]RCT1005050AlfuzosinOn demand28PU5 FrAdjustedConservative treatment for stone, <10 mm, hydronephrosis7.67.119 upperUSSQLow23 mid- urinary symptom score10 mg48 lower- body pain score- general health score- sexual health scoreDellis et al.[21]RCT15010050*AlfuzosinOn demand28NA6 FrAdjustedPost ESWL, URSNANANAUSSQLow- urinary symptom score10 mgTamsulosin- body pain score0.4 mg- general health score- work performance score- sexual health scoreNazim et al.[17]RCT1306565AlfuzosinOn demand7PU4.7 Fr 6 FrNAPost URSNANA40 upperVASHigh28 midUSSQ10 mg62 lower- urinary symptom score- body pain scorePark et al.[18]RCT322012AlfuzosinNA42PU6 FrAdjustedPost URS, PCNL, Lap Pyelo, endo-ureterotomyNANANAUSSQHigh- urinary symptom score- body pain score10 mg- general health score- work performance score- sexual health scoreWang et al.[19]RCT1547975TamsulosinOn demand7Sil7 FrAdjustedPost URS99.416 upperUSSQLow49 mid- urinary symptom score0.4 mg89 lower- body pain score- general health score- work performance score- sexual health scoreAdjusted, stent length is height adjusted; NA, not available; Tx, Treatment group, P, placebo; PU, Polyurethane; Sil, Silcone;USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; PCNL, percutaneous nephrolithotomy; Lap Pyelo, laparoscopic pyeloplasty*50 patients received alfuzosin 10 mg, and another 50 patients received tamsulosin 0.4 mg Enrolled studies for this meta-analysis Adjusted, stent length is height adjusted; NA, not available; Tx, Treatment group, P, placebo; PU, Polyurethane; Sil, Silcone;USSQ, Ureteric Stent Symptom Questionnaire; VAS, visual analogue scale; URS, ureteroscopy; PCNL, percutaneous nephrolithotomy; Lap Pyelo, laparoscopic pyeloplasty *50 patients received alfuzosin 10 mg, and another 50 patients received tamsulosin 0.4 mg Quality assessment Figures 2 and 3 present the details of quality assessment, as measured by the Cochrane Collaboration risk-of-bias tool. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of bias (Table 1). The most common risk factor for quality assessment was the risk of blinding of outcome assessment; the second most common concerns were allocation concealment and blinding participants and personnel.Fig. 2Risk of bias graph. 
We reviewed the risk of bias in each study included in this meta-analysis and presented the results as percentages. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of biasFig. 3Risk of bias summary. We reviewed the risk of bias in each of the studies included in this meta-analysis Risk of bias graph. We reviewed the risk of bias in each study included in this meta-analysis and presented the results as percentages. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of bias Risk of bias summary. We reviewed the risk of bias in each of the studies included in this meta-analysis Figures 2 and 3 present the details of quality assessment, as measured by the Cochrane Collaboration risk-of-bias tool. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of bias (Table 1). The most common risk factor for quality assessment was the risk of blinding of outcome assessment; the second most common concerns were allocation concealment and blinding participants and personnel.Fig. 2Risk of bias graph. We reviewed the risk of bias in each study included in this meta-analysis and presented the results as percentages. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of biasFig. 3Risk of bias summary. We reviewed the risk of bias in each of the studies included in this meta-analysis Risk of bias graph. We reviewed the risk of bias in each study included in this meta-analysis and presented the results as percentages. Four trials exhibited a low risk of bias for all quality criteria, and two studies were classified as having a high risk of bias Risk of bias summary. We reviewed the risk of bias in each of the studies included in this meta-analysis Heterogeneity assessment Forest plots of pairwise meta-analyses are demonstrated in Fig. 4. A heterogeneity test for USS showed the following: χ2 = 96.43 with 7 df (P < 0.001) and I2 = 93 % in the analysis of alpha-blockers, including alfuzosin and tamsulosin versus placebo. Notable heterogeneities were detected in the analyses of all studies; thus, random-effects models were used to further assess these variables. In the analysis of BPS, a heterogeneity test demonstrated homogeneity with χ2 = 44.66 with 8 df (P < 0.001) and I2 = 82 %. Pairwise meta-analyses with random-effects models were also performed. None of the variables demonstrated heterogeneity in radial plots after selecting effect models for USS and BPS (Fig. 5).Fig. 4Pairwise meta-analysis. (a) urinary symptom score (b) body pain scoreFig. 5Radial plots. None of the variables demonstrated heterogeneity after selecting effect models for each variable in the radial plots. (a) urinary symptom score (b) body pain score Pairwise meta-analysis. (a) urinary symptom score (b) body pain score Radial plots. None of the variables demonstrated heterogeneity after selecting effect models for each variable in the radial plots. (a) urinary symptom score (b) body pain score Forest plots of pairwise meta-analyses are demonstrated in Fig. 4. A heterogeneity test for USS showed the following: χ2 = 96.43 with 7 df (P < 0.001) and I2 = 93 % in the analysis of alpha-blockers, including alfuzosin and tamsulosin versus placebo. 
Publication bias: Funnel plots from the pairwise meta-analyses are shown in Fig. 6; however, with so few studies it was difficult to assess publication bias, although some degree of bias is suspected.
Fig. 6 Funnel plots. (a) urinary symptom score (b) body pain score. It was difficult to assess publication bias with few studies, although some degree of bias is suspected
Pairwise meta-analysis for urinary symptom and body pain scores: The forest plot using the random-effects model demonstrated an MD of −5.69 for USS (95 % CI [−8.84 to −2.53], P < 0.001) between alpha-blockers and placebo (Fig. 4a). In the subanalyses, both alfuzosin (MD −4.12, 95 % CI [−5.51 to −2.73], P < 0.001) and tamsulosin (MD −7.83, 95 % CI [−14.35 to −1.31], P = 0.02) yielded significantly lower scores than placebo. According to the forest plot for BPS, alpha-blockers were superior to placebo, with an MD of −6.20 (95 % CI [−8.74 to −2.13], P < 0.001) (Fig. 4b). In the subanalysis of alfuzosin versus placebo, alfuzosin showed an MD of −4.21 (95 % CI [−5.56 to −2.85], P < 0.001). Between tamsulosin and placebo, the random-effects model demonstrated an MD of −7.71 (95 % CI [−13.28 to −2.13], P = 0.007) for BPS.
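For readers unfamiliar with how such pooled MDs are obtained, the sketch below shows a standard DerSimonian-Laird random-effects pooling of study-level mean differences. It is only a schematic: the study-level effect sizes and standard errors in the example are hypothetical placeholders (the per-study data are not reproduced in this section), and the authors' exact estimator and software are not specified here.

```python
import math

def dersimonian_laird(mds, ses, z=1.96):
    """Pool study-level mean differences with a DerSimonian-Laird
    random-effects model; returns (pooled MD, 95 % CI, Q, I^2 %)."""
    w = [1.0 / s**2 for s in ses]                          # inverse-variance weights
    fixed = sum(wi * m for wi, m in zip(w, mds)) / sum(w)
    q = sum(wi * (m - fixed)**2 for wi, m in zip(w, mds))  # Cochran's Q
    df = len(mds) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_re = [1.0 / (s**2 + tau2) for s in ses]              # random-effects weights
    pooled = sum(wi * m for wi, m in zip(w_re, mds)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, (pooled - z * se, pooled + z * se), q, i2

# Hypothetical USS mean differences and standard errors for four trials
print(dersimonian_laird([-4.0, -6.5, -2.9, -8.1], [1.1, 2.0, 1.4, 2.5]))
```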
Network meta-analysis for urinary symptom and body pain scores: Alfuzosin had a lower USS than placebo (MD −4.85, 95 % CI [−8.53 to −1.33]), and tamsulosin also had a lower score than placebo (MD −8.84, 95 % CI [−13.08 to −4.31]). However, the difference between alfuzosin and tamsulosin was not significant in the network meta-analysis (MD 3.99, 95 % CI [−1.23 to 9.04]). There were significant differences in BPS for alfuzosin versus placebo (MD −5.71, 95 % CI [−11.32 to −0.52]) and for tamsulosin versus placebo (MD −7.77, 95 % CI [−13.68 to −2.14]). The comparison of BPS for tamsulosin versus alfuzosin showed an MD of 2.12 (95 % CI [−4.62 to 8.72]), which was not significant (Fig. 7). In the rank-probability test, tamsulosin had the highest rank for USS, followed by alfuzosin; tamsulosin was also ranked highest for BPS, again followed by alfuzosin. Placebo was ranked lowest for both USS and BPS (Fig. 8).
Fig. 7 Network meta-analysis. (a) urinary symptom score (b) body pain score. Alfuzosin and tamsulosin both had lower USS and BPS than placebo; however, there was no significant difference in MD between alfuzosin and tamsulosin according to the network meta-analysis
Fig. 8 Rank-probability test of the network meta-analyses. (a) urinary symptom score (b) body pain score. Tamsulosin had the highest rank for both USS and BPS, followed by alfuzosin; placebo was ranked lowest for both
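The alfuzosin-versus-tamsulosin estimate above can be sanity-checked with a simple frequentist indirect comparison through the common placebo arm (the Bucher approach). This is only an illustration and an assumption on our part: the published analysis used a full network meta-analysis model (the article's keywords mention the Bayes theorem), so the interval below will not exactly reproduce the reported one, and the standard errors are back-calculated from the quoted 95 % CIs.

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Back-calculate a standard error from a reported 95 % CI."""
    return (upper - lower) / (2.0 * z)

def indirect_comparison(md_a, ci_a, md_b, ci_b, z=1.96):
    """Bucher-style indirect comparison of A versus B through a common
    placebo comparator: MD(A vs B) = MD(A vs placebo) - MD(B vs placebo)."""
    md = md_a - md_b
    se = math.sqrt(se_from_ci(*ci_a) ** 2 + se_from_ci(*ci_b) ** 2)
    return md, (md - z * se, md + z * se)

# USS estimates quoted above: alfuzosin vs placebo, tamsulosin vs placebo
md, ci = indirect_comparison(-4.85, (-8.53, -1.33), -8.84, (-13.08, -4.31))
print(round(md, 2))                    # 3.99, matching the reported MD
print(tuple(round(x, 2) for x in ci))  # roughly (-1.68, 9.66), vs the reported [-1.23, 9.04]
```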
Eligible studies: The database search retrieved 21 articles covering 88 studies for potential inclusion in the meta-analysis. Fourteen articles were excluded on the basis of the inclusion/exclusion criteria: eight had a retrospective design, three reported outcomes with different tools and variables, and three did not report final results. The remaining seven articles were included in the qualitative and quantitative syntheses using pairwise and network meta-analyses (Fig. 1).
Discussion and conclusion: Recently, two pairwise meta-analyses of alpha-blockers in patients with ureteral stent discomfort were published. Yakoubi et al. conducted a meta-analysis of four RCTs [22]; however, they did not distinguish between the types of alpha-blockers analyzed in each domain.
Alpha-blockers reduced the scores for urinary symptoms, body pain, and the general health index but did not achieve significant changes in quality of work, sexual matters, or scores for additional problems. However, that analysis had limitations because few studies were included; in particular, only three studies were analyzed for quality of work and two for additional problems. Lamb et al. conducted a meta-analysis of five RCTs that distinguished between alfuzosin and tamsulosin [7]; however, there was a possible error in the choice of effects model with regard to heterogeneity. Nevertheless, both meta-analyses showed decreased scores for urinary symptoms and body pain after the use of alpha-blockers. More recently, Dellis et al. evaluated the effects of two different alpha-blockers on symptoms and quality of life in patients with indwelling ureteral stents in an RCT [21]. They prescribed alfuzosin, tamsulosin, or placebo to 50 patients per arm and assessed the USSQ accordingly. Patients who received alpha-blockers had significantly lower urinary symptom, body pain, general health index, and sexual life scores than the control group; however, there was no difference in the quality of work score, and there was no difference between the two alpha-blocker groups. The goal of this study was to ascertain the difference between tamsulosin and alfuzosin in alleviating stent discomfort by comparing the effectiveness of alpha-blockers with placebo in seven RCTs on the basis of urinary symptom and body pain scores. In most studies, treatment was continued until removal of the ureteral stent. We divided the alpha-blockers into alfuzosin and tamsulosin to conduct subgroup and network meta-analyses. In the subgroup analysis, both alpha-blockers significantly decreased urinary symptom and body pain scores compared with placebo. There was no statistically significant difference between alfuzosin and tamsulosin in the network meta-analysis for either urinary symptom or body pain scores, which is consistent with the results of previous studies. However, in the rank-probability test of the network meta-analysis, tamsulosin appeared to be more effective than alfuzosin for both urinary symptom and body pain scores (Fig. 8). We suggest that the difference in efficacy between the two alpha-blockers arises from the physiologic characteristics of ureteral receptor distribution. The ureter contains two continuous thin muscle layers, a loosely spiraled internal layer and a more tightly spiraled external layer, and a third outer longitudinal layer is located in the lower third of the ureter. The lower ureter consists of transitional epithelium, a connective tissue layer, and three layers of smooth muscle [23]. Ureteral peristalsis is initiated by the spontaneous activity of pacemaker cells in the renal pelvis and is regulated essentially by myogenic mechanisms and neurogenic factors; electrical and mechanical activity is conducted to inactive distal regions [24]. The histologic characteristics of the three smooth muscle layers in the lower portion of the ureter, and its denser innervation, have become subjects of research interest. The alpha-1d receptor has its highest density in the lower portion of the ureter. Tamsulosin is a subtype-selective alpha-1a and alpha-1d blocker, whereas alfuzosin is a subtype non-selective alpha-1 blocker [25].
The two alpha-blockers may therefore elicit different levels of efficacy because of differences in subtype selectivity and the distribution of the alpha-1d receptor in the lower ureter. However, it remains unclear whether subtype selectivity makes a significant contribution to the differences in efficacy of alpha-blockers. In the near future, prospective trials should compare several alpha-blockers, including silodosin and naftopidil, to confirm our results. A limitation of our study is that only two subdomains, the urinary symptom and body pain scores, were included, because not all the RCTs in this study reported USSQ domains other than urinary symptoms and body pain (Table 1). This may bias the comparison of the efficacy of the two alpha-blockers, which can also be influenced by other symptoms and quality of life. All RCTs included in the present study used the USSQ, which comprehensively assesses not only urinary symptoms and pain but also sexual symptoms and quality of life. Although urinary symptoms and pain are the most problematic of the stent-related symptoms, lower abdominal pain or discomfort, infection, and hematuria are also bothersome to patients. Furthermore, alpha-blockers can cause side effects involving the central nervous system, sexual function, ejaculatory function, and the cardiovascular system [26]. Therefore, it is important to compare not only the efficacy of the two medications but also their side effects, and comparison across the full set of USSQ domains may also need to be considered. Some degree of publication bias was another limitation of this study. However, Sutton et al. reviewed 48 articles from the Cochrane Database of Systematic Reviews and showed that publication or related biases were common within the sample of meta-analyses assessed [27]; moreover, they found that these biases did not affect the conclusions in most cases. A further limitation is that we did not take into consideration the possible effects of stent factors on stent discomfort. Although six of the seven RCTs reported that stent insertion was performed with height adjustment, and differences in stent materials were also considered, whether the stent was placed correctly was not addressed. Lee et al. [28] reported that storage symptoms were significantly lower in the tamsulosin group than in the analgesic group only when the stent was in an appropriate position; this medication effect was not observed in the inappropriate stent position group, in which total IPSS and storage symptom scores were significantly higher than in the appropriate stent position group. Appropriate stent position might therefore be one of the factors to consider when conducting research on stent discomfort. Conclusions: Ureteral stent-related symptoms are effectively alleviated with alpha-blockers, and tamsulosin might be more effective than alfuzosin. However, additional randomized controlled trials of alfuzosin and tamsulosin need to be performed in patients with ureteral stents.
Background: This study carried out a network meta-analysis of evidence from randomized controlled trials (RCTs) to evaluate stent-related discomfort in patients receiving alfuzosin or tamsulosin versus placebo. Methods: Relevant RCTs were identified from electronic databases, and the proceedings of appropriate meetings were also searched. Seven articles reporting RCTs were included in our meta-analysis. Using pairwise and network meta-analyses, comparisons were made by qualitative and quantitative syntheses. Evaluation was performed with the Ureteric Stent Symptoms Questionnaire to assess the urinary symptom score (USS) and body pain score (BPS). Results: One of the seven RCTs was at moderate risk of bias for all quality criteria, and two studies had a high risk of bias. In the network meta-analysis, both alfuzosin (mean difference [MD] −4.85, 95 % confidence interval [CI] −8.53 to −1.33) and tamsulosin (MD −8.84, 95 % CI −13.08 to −4.31) showed lower USS compared with placebo; however, the difference in USS between alfuzosin and tamsulosin was not significant (MD 3.99, 95 % CI −1.23 to 9.04). Alfuzosin (MD −5.71, 95 % CI −11.32 to −0.52) and tamsulosin (MD −7.77, 95 % CI −13.68 to −2.14) also showed lower BPS compared with placebo; however, the MD between alfuzosin and tamsulosin was not significant (MD 2.12, 95 % CI −4.62 to 8.72). In the rank-probability test, tamsulosin ranked highest for USS and BPS, and alfuzosin was second. Conclusions: The alpha-blockers significantly decreased USS and BPS in comparison with placebo. Tamsulosin might be more effective than alfuzosin.
Background: In 1978, the ureteral double-J stent was first described by Finney et al. [1, 2]. Ureteral double-J stent insertion has been one of the most common urologic procedures; however, indwelling stents are often accompanied by significant morbidity, including voiding and storage symptoms, flank pain, hematuria, and infection [3]. Symptoms of stent discomfort, including bladder irritation symptoms and flank pain or discomfort, are generally treated with oral analgesics such as narcotics and anti-inflammatory medications; however, these medications are only moderately effective. Alpha-blockers alleviate bladder irritation due to stents, resulting in a reduced incidence of dysuria, frequency, and pain compared with placebo [4]. Ureteral stent discomfort may be due to spasms of the ureteral smooth muscle that surrounds the indwelling foreign object and may run the length of the ureter. Further, irritation of the trigone, which also has alpha-1d receptors, may be caused by the intravesical lower coil of the stent. Alternatively, voiding may increase pressure on the renal pelvis and cause discomfort [5]. Several studies have investigated whether alpha-blockers can alleviate symptoms related to stent placement [6]. In 2011, Lamb et al. reported a pairwise meta-analysis of randomized controlled trials (RCTs) indicating that orally administered alpha-blockers reduce stent-related discomfort and storage symptoms as evaluated by the Ureteric Stent Symptoms Questionnaire (USSQ) [7]. The recently introduced network meta-analysis is a form of meta-analysis in which multiple treatments are compared using direct comparisons of interventions within RCTs and indirect comparisons across trials based on a common comparator [8–10]. The present systematic review and network meta-analysis examined available RCTs to study the effects of alpha-blockers on stent-related symptoms. Conclusions: Ureteral stent-related symptoms are effectively alleviated with alpha-blockers, and tamsulosin might be more effective than alfuzosin. However, additional randomized controlled trials of alfuzosin and tamsulosin need to be performed in patients with ureteral stents.
9,811
317
[ 171, 85, 63, 182, 215, 175, 549, 240, 262, 108, 201, 449 ]
17
[ "score", "meta", "bias", "alfuzosin", "tamsulosin", "pain", "urinary", "body", "body pain", "placebo" ]
[ "patients ureteral stents", "placebo ureteral stent", "conclusions ureteral stent", "ureteric stent symptom", "bladder irritation stents" ]
null
[CONTENT] Ureter | Stents | Adrenergic alpha-antagonists | Meta-analysis | Bayes theorem [SUMMARY]
null
[CONTENT] Ureter | Stents | Adrenergic alpha-antagonists | Meta-analysis | Bayes theorem [SUMMARY]
[CONTENT] Ureter | Stents | Adrenergic alpha-antagonists | Meta-analysis | Bayes theorem [SUMMARY]
[CONTENT] Ureter | Stents | Adrenergic alpha-antagonists | Meta-analysis | Bayes theorem [SUMMARY]
[CONTENT] Ureter | Stents | Adrenergic alpha-antagonists | Meta-analysis | Bayes theorem [SUMMARY]
[CONTENT] Adrenergic alpha-Antagonists | Bayes Theorem | Humans | Pain | Pain Measurement | Quinazolines | Randomized Controlled Trials as Topic | Stents | Sulfonamides | Tamsulosin | Treatment Outcome | Ureteral Obstruction [SUMMARY]
null
[CONTENT] Adrenergic alpha-Antagonists | Bayes Theorem | Humans | Pain | Pain Measurement | Quinazolines | Randomized Controlled Trials as Topic | Stents | Sulfonamides | Tamsulosin | Treatment Outcome | Ureteral Obstruction [SUMMARY]
[CONTENT] Adrenergic alpha-Antagonists | Bayes Theorem | Humans | Pain | Pain Measurement | Quinazolines | Randomized Controlled Trials as Topic | Stents | Sulfonamides | Tamsulosin | Treatment Outcome | Ureteral Obstruction [SUMMARY]
[CONTENT] Adrenergic alpha-Antagonists | Bayes Theorem | Humans | Pain | Pain Measurement | Quinazolines | Randomized Controlled Trials as Topic | Stents | Sulfonamides | Tamsulosin | Treatment Outcome | Ureteral Obstruction [SUMMARY]
[CONTENT] Adrenergic alpha-Antagonists | Bayes Theorem | Humans | Pain | Pain Measurement | Quinazolines | Randomized Controlled Trials as Topic | Stents | Sulfonamides | Tamsulosin | Treatment Outcome | Ureteral Obstruction [SUMMARY]
[CONTENT] patients ureteral stents | placebo ureteral stent | conclusions ureteral stent | ureteric stent symptom | bladder irritation stents [SUMMARY]
null
[CONTENT] patients ureteral stents | placebo ureteral stent | conclusions ureteral stent | ureteric stent symptom | bladder irritation stents [SUMMARY]
[CONTENT] patients ureteral stents | placebo ureteral stent | conclusions ureteral stent | ureteric stent symptom | bladder irritation stents [SUMMARY]
[CONTENT] patients ureteral stents | placebo ureteral stent | conclusions ureteral stent | ureteric stent symptom | bladder irritation stents [SUMMARY]
[CONTENT] patients ureteral stents | placebo ureteral stent | conclusions ureteral stent | ureteric stent symptom | bladder irritation stents [SUMMARY]
[CONTENT] score | meta | bias | alfuzosin | tamsulosin | pain | urinary | body | body pain | placebo [SUMMARY]
null
[CONTENT] score | meta | bias | alfuzosin | tamsulosin | pain | urinary | body | body pain | placebo [SUMMARY]
[CONTENT] score | meta | bias | alfuzosin | tamsulosin | pain | urinary | body | body pain | placebo [SUMMARY]
[CONTENT] score | meta | bias | alfuzosin | tamsulosin | pain | urinary | body | body pain | placebo [SUMMARY]
[CONTENT] score | meta | bias | alfuzosin | tamsulosin | pain | urinary | body | body pain | placebo [SUMMARY]
[CONTENT] symptoms | stent | discomfort | irritation | alpha | ureteral | related | alpha blockers | blockers | double stent [SUMMARY]
null
[CONTENT] score | bias | md | alfuzosin | symptom | risk | tamsulosin | urinary symptom | meta | placebo [SUMMARY]
[CONTENT] ureteral | need performed | tamsulosin effective alfuzosin additional | tamsulosin effective alfuzosin | tamsulosin effective | alfuzosin additional | alfuzosin additional randomized controlled | controlled trials alfuzosin tamsulosin | controlled trials alfuzosin | alpha blockers tamsulosin effective [SUMMARY]
[CONTENT] bias | score | risk | risk bias | alfuzosin | tamsulosin | meta | pain | md | alpha [SUMMARY]
[CONTENT] bias | score | risk | risk bias | alfuzosin | tamsulosin | meta | pain | md | alpha [SUMMARY]
[CONTENT] [SUMMARY]
null
[CONTENT] One | seven | two ||| MD];-4.85 | 95 % | CI];-8.53--1.33 | 95 % | USS | MD | 3.99 | 95 % ||| Alfuzosin | 95 % | 95 % | BPS | MD | MD | 2.12 | 95 % ||| USS | second [SUMMARY]
[CONTENT] USS ||| Tamsulosin [SUMMARY]
[CONTENT] ||| ||| ||| Seven ||| ||| the Ureteric Stent Symptoms Questionnaire | USS ||| One | seven | two ||| MD];-4.85 | 95 % | CI];-8.53--1.33 | 95 % | USS | MD | 3.99 | 95 % ||| Alfuzosin | 95 % | 95 % | BPS | MD | MD | 2.12 | 95 % ||| USS | second ||| USS ||| Tamsulosin [SUMMARY]
[CONTENT] ||| ||| ||| Seven ||| ||| the Ureteric Stent Symptoms Questionnaire | USS ||| One | seven | two ||| MD];-4.85 | 95 % | CI];-8.53--1.33 | 95 % | USS | MD | 3.99 | 95 % ||| Alfuzosin | 95 % | 95 % | BPS | MD | MD | 2.12 | 95 % ||| USS | second ||| USS ||| Tamsulosin [SUMMARY]
Long-term influence of recurrent acute otitis media on neural involuntary attention switching in 2-year-old children.
26729018
A large group of young children are exposed to repetitive middle ear infections, but the effects of the fluctuating hearing sensations on the immature central auditory system are not fully understood. The present study investigated the consequences of early childhood recurrent acute otitis media (RAOM) on involuntary auditory attention switching.
BACKGROUND
By utilizing auditory event-related potentials, neural mechanisms of involuntary attention were studied in 22-26 month-old children (N = 18) who had had an early childhood RAOM and healthy controls (N = 19). The earlier and later phase of the P3a (eP3a and lP3a) and the late negativity (LN) were measured for embedded novel sounds in the passive multi-feature paradigm with repeating standard and deviant syllable stimuli. The children with RAOM had tympanostomy tubes inserted and all the children in both study groups had to have clinically healthy ears at the time of the measurement assessed by an otolaryngologist.
METHODS
The results showed that lP3a amplitude diminished less from frontal to central and parietal areas in the children with RAOM than the controls. This might reflect an immature control of involuntary attention switch. Furthermore, the LN latency was longer in children with RAOM than in the controls, which suggests delayed reorientation of attention in RAOM.
RESULTS
The lP3a and LN responses are affected in toddlers who have had a RAOM even when their ears are healthy. This suggests detrimental long-term effects of RAOM on the neural mechanisms of involuntary attention.
CONCLUSIONS
[ "Attention", "Auditory Pathways", "Child, Preschool", "Event-Related Potentials, P300", "Evoked Potentials, Auditory", "Female", "Humans", "Infant", "Male", "Otitis Media", "Recurrence" ]
4700565
Background
About 30 % of children have recurrent middle ear infections (recurrent acute otitis media, RAOM) in their early childhood [1, 2]. Due to challenges in diagnosing and classifying middle ear status, otitis media (OM) is commonly used as a general term for various forms of middle ear fluid and inflammation. A general definition for RAOM has been three or more episodes of acute otitis media (AOM) per 6 months or four or more AOM episodes per year [3]. After an episode of AOM, middle ear fluid is present for a few days to over 2 months [4]. Fluid in the middle ear causes about 20–30 dB conductive hearing loss [5], and, especially when asymmetric, it affects interaural temporal and level difference cues, compromising binaural sound localization [6]. Fluctuating hearing sensations during the development of the central auditory system have been connected to atypical auditory processing [7–11], which can lead to problems in language acquisition [12]. Therefore, it is necessary to get a better understanding of the consequences of early childhood RAOM on the immature central nervous system.
Behavioral studies in children with OM have shown problems in the regulation of auditory attention [13–18]. Involuntary orientation to environmental events as well as selective maintenance of attention is essential for speech processing and language learning. Involuntary attention accounts for the detection and selection of potentially biologically meaningful information from events unrelated to the ongoing task [19]. For example, the screeching noise of a braking car causes an attention switch in a pedestrian who is talking on the phone, and leads to distraction from the ongoing activity. After the evaluation of the irrelevant novel stimulus, reorientation back to the recent activity takes place. Involuntary attention is a bottom-up (stimulus-driven) process [19], but during maturation the developing top-down mechanisms start to inhibit distractors which are not meaningful; in other words, children learn to separate relevant from irrelevant stimuli [20, 21]. An excessive tendency to orient to irrelevant stimuli requiring a lot of attentional resources makes goal-directed behavior harder [22].
School-aged children with an OM history were shown to have deficits in selective auditory attention in dichotic listening tasks [13–15]. They also showed increased reorientation time of attention during behavioral tasks [16]. Rated by their teachers, school-children with an OM history were suggested to be less task-oriented [17], but not in all studies [23]. Studies in toddlers are scarce, probably due to the weak co-operation skills of children at this age. However, toddlers with chronic OM were shown to express reduced attention during book reading at the time of middle ear effusion and, according to the questionnaire, their mothers rated them as easily distractible [18]. The neural mechanisms behind these findings are still unknown.
Event-related potentials (ERPs) are a feasible approach for studying the neural mechanisms of involuntary auditory attention non-invasively, without tasks requiring co-operation skills [24]. The auditory P3a is a large positive deflection elicited by unexpected, novel sounds which substantially differ from other sounds, for example the slam of a door or a human cough. The P3a reflects involuntary attention mechanisms and orientation of attention [22]. It peaks fronto-centrally at 200–300 ms after the onset of a distracting stimulus [22, 25, 26]. P3a was often found to be biphasic [22, 27]. 
Two phases, early and late, have been identified in children [28–30] already at the age of 2 years [31]. The early P3a (eP3a) was suggested to reflect the automatic detection of a violation in the neural model of the existing world and thus to represent the orientation of attention [32]. It is maximal at the vertex and diminishes posteriorly and laterally [33]. In contrast, the late P3a (lP3a) was suggested to reflect the actual attention switch, and it is maximal frontally [33]. The morphology of these responses is quite similar in children and adults, but the scalp topography of children's P3a is more anterior than that of adults [30]. The eP3a may mature earlier than the lP3a, which continues to enhance frontally during development [34]. Hence, processing of acoustic novelty in childhood resembles that in adulthood, although some underlying neural networks still continue to develop. Atypical P3a responses have been connected to abnormal involuntary attention; for example, a parietally enhanced lP3a was found in children with attention deficit hyperactivity disorder (ADHD) [29]. The ERP waveform also reflects reorienting of attention back to the primary task after recognizing and evaluating a distracting stimulus [35, 36]. In adults, the P3a is followed by the reorienting negativity (RON) [37]. A counterpart of the RON in children was suggested to be the late negativity (LN, also called the negative component, Nc) [28–31]. The LN latency, peaking at around 400–700 ms after the onset of a novel stimulus, reflects reorienting time [21]. The LN has its maximal amplitude at fronto-central scalp areas [21]. A large LN reflects enhanced neural effort for reorienting [37] or more attention paid to the surprising event [30]. During maturation, the LN amplitude has been suggested to decrease [30, 34]. The aim of this study was to compare the involuntary attentional mechanisms in 2-year-old healthy children with an RAOM history and their healthy age-matched controls by recording auditory ERPs. For that purpose, novel stimuli were embedded in the multi-feature paradigm with syllables to elicit the eP3a, lP3a, and LN. It was hypothesized that the children with RAOM would show an atypically enhanced and/or short-latency P3a, reflecting enhanced distractibility by the intrusive novel sounds, and would have a larger amplitude and/or longer latency of the LN, indicating more neural effort for reorienting and/or a longer re-orientation time of attention than their healthy peers. Studies of the P3a and novelty-related LN in toddlers are scarce, and to our knowledge, this is the first study measuring these responses in children with RAOM.
null
null
Results
The eP3a significantly differed from zero in the children with RAOM and in the controls (two-tailed t-test; p ≤ .001; Table 1, Fig. 1), with no group differences in the amplitude, amplitude scalp distribution, or latency. In both groups, the eP3a amplitude was stronger at the frontal and central electrodes than at the parietal electrodes (F[2, 57] = 42.09, p < .001, ηp2 = .53; LSD post hoc p < .001).

Table 1 Mean amplitude and latency of ERPs elicited by novel stimuli in children in both groups
- eP3a (Cz): amplitude 6.66 (4.30) µV in the RAOM group vs. 6.97 (3.44) µV in the controls; latency 244 (26) ms vs. 248 (25) ms
- lP3a (Fz): amplitude 9.30 (3.69) µV vs. 9.67 (3.74) µV; latency 348 (36) ms vs. 341 (22) ms
- LN (Fz): amplitude −2.82 (3.38) µV vs. −2.34 (3.04) µV; latency 599 (40) ms vs. 526 (43) ms (significant group difference)
Standard deviations are in parentheses. RAOM, recurrent acute otitis media.
Fig. 1 ERPs (eP3a, lP3a, and LN) elicited by novel stimuli in children with recurrent acute otitis media (RAOM) and their controls; a grand average standard and novel ERP waves, b grand average difference (novel minus standard ERP) waves, and c scalp topographies

There was a significant lP3a in both groups (two-tailed t-test; p ≤ .001; Table 1, Fig. 1). A repeated measures ANOVA for the lP3a amplitude indicated a significant AP × group interaction (F[2, 59] = 3.94, p = .03, ηp2 = .10). According to the LSD post hoc test, the children with RAOM showed a more even AP distribution than the control children, who had a clearly frontally maximal and posteriorly diminishing amplitude scalp distribution (mean amplitudes frontally 8.99 vs. 9.67 µV, centrally 7.12 vs. 6.04 µV, and parietally 3.64 vs. 2.17 µV in the RAOM and control groups, respectively). Furthermore, a significant AP × RL interaction with no group difference was found (F[4, 140] = 4.98, p < .001, ηp2 = .13). The LSD post hoc test indicated an even RL amplitude distribution at the frontal electrodes, but stronger left-hemispheric activation compared with the vertex and the right line at the central electrodes (p < .001), and also stronger left- than right-line responses at the parietal electrodes (p = .03). There was no group difference for the lP3a latency. In both groups, a significant LN was found (two-tailed t-test; RAOM: p = .001; control: p = .004; Table 1, Fig. 1). A repeated measures ANOVA for the LN amplitude indicated a significant AP × RL interaction (F[4, 148] = 2.96, p = .02). This was due to the weakest amplitude at F3 (LSD post hoc; p = .001–.03) and the strongest amplitude at Cz (LSD post hoc; p = .001–.002). There were no group differences in the amplitude or amplitude scalp distribution of the LN. However, a one-way ANOVA indicated a significant group difference in the LN latency (F[1, 37] = 32.76, p < .001, ηp2 = .47), which peaked later in the children with RAOM than in the controls.
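As an aside on the effect sizes reported above, partial eta squared can be recovered from an F value and its degrees of freedom as F·df_effect / (F·df_effect + df_error). The snippet below simply illustrates that relationship with the LN latency result quoted in this section; it is our own illustrative calculation, not the authors' analysis script.

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta squared from an ANOVA F value and its degrees of freedom."""
    return (f * df_effect) / (f * df_effect + df_error)

# Group difference in LN latency reported above: F(1, 37) = 32.76
print(round(partial_eta_squared(32.76, 1, 37), 2))  # ~0.47
```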
Conclusions
To conclude, this study showed abnormal neural mechanisms of involuntary attention in 2-year-old children with RAOM. For the distracting novel sounds, the RAOM group showed atypical neural organization signified by a more even lP3a scalp distribution in anterior-posterior axis than in the controls, who had a more frontally oriented lP3a. This can be a sign of immature neural processing and enhanced distractibility. Furthermore, the children with RAOM showed delayed re-orienting back to the ongoing activity indicated by their prolonged LN latency. Since all the children had clinically healthy ears at the time of the study, the current results suggest that early childhood RAOM has long-term effects on the immature central nervous system. This further supports the suggestion that early childhood RAOM should be taken as a risk factor for the developing auditory central nervous system.
[ "Background", "Participants", "Stimuli and experimental design", "EEG recording", "Analysis" ]
[ "About 30 % of children have recurrent middle ear infections (recurrent acute otitis media, RAOM) in their early childhood [1, 2]. Due to challenges in diagnosing and classifying middle ear status, otitis media (OM) is commonly used as a general term for various forms of middle ear fluid and inflammation. A general definition for RAOM has been three or more episodes of acute otitis media (AOM) per 6 months or four or more AOM episodes per year [3]. After an episode of AOM, middle ear fluid is present for few days to over 2 months [4]. Fluid in the middle ear causes about 20–30 dB conductive hearing loss [5], and, especially when asymmetric, it affects interaural temporal and level difference cues compromising binaural sound localization [6]. Fluctuating hearing sensations during the development of central auditory system has been connected to atypical auditory processing [7–11], which can lead to problems in language acquisition [12]. Therefore, it is necessary to get a better understanding of the consequences of early childhood RAOM on immature central nervous system.\nBehavioral studies in children with OM have shown problems in regulation of auditory attention [13–18]. Involuntary orientation to environmental events as well as selective maintenance of attention is essential for speech processing and language learning. Involuntary attention accounts for the detection and selection of potentially biologically meaningful information of events unrelated to the ongoing task [19]. For example, a screeching noise of a braking car causes an attention switch of a pedestrian who is talking on the phone, and leads to the distraction of the ongoing activity. After the evaluation of the irrelevant novel stimulus, the reorientation back to the recent activity takes place. Involuntary attention is a bottom-up (stimulus-driven) process [19] but during maturation the developing top-down mechanisms start to inhibit distractors which are not meaningful, in other words, children learn to separate relevant from irrelevant stimuli [20, 21]. An excessive tendency to orient to the irrelevant stimuli requiring a lot of attentional resources makes goal-directed behavior harder [22].\nSchool-aged children with OM history were shown to have deficits of selective auditory attention in dichotic listening tasks [13–15]. They also showed increased reorientation time of attention during behavioral tasks [16]. Rated by their teachers, school-children with OM history were suggested to be less task-oriented [17] but not in all studies [23]. Studies in toddlers are scarce, probably due to the weak co-operation skills in children at this age. However, toddlers with chronic OM were shown to express reduced attention during book reading at the time of middle ear effusion and, according the questionnaire, their mothers rated them as easily distractible [18]. The neural mechanisms beyond these findings are still unknown.\nEvent-related potentials (ERPs) are a feasible approach for studying non-invasively neural mechanisms of involuntary auditory attention without tasks requiring co-operation skills [24]. The auditory P3a is a large positive deflection elicited by unexpected, novel sounds which substantially differ from other sounds, for example slam of the door or human cough. The P3a reflects involuntary attention mechanisms and orientation of attention [22]. It peaks fronto-centrally at 200–300 ms after the onset of a distracting stimulus [22, 25, 26].\nP3a was often found to be biphasic [22, 27]. 
Two phases, early and late, have been identified in children [28–30] already at the age of 2 years [31]. Early P3a (eP3a) was suggested to reflect the automatic detection of violation in the neural model of existing world and thus, to represent the orientation of attention [32]. It is maximal at vertex and diminishes posteriorly and laterally [33]. In contrast, late P3a (lP3a) was suggested to reflect actual attention switch and it is maximal frontally [33]. Morphology of these responses is quite similar in children and adults but the scalp topography of children’s P3a is more anterior than that of adults [30]. The eP3a may mature earlier than lP3a, which continues to enhance frontally during development [34]. Hence, processing of acoustic novelty in the childhood resembles that in the adulthood although some underlying neural networks still continue to develop. Atypical P3a responses have been connected to abnormal involuntary attention, for example, parietally enhanced lP3a was found in children with attention deficit hyperactivity disorder (ADHD) [29].\nThe ERP waveform also reflects reorienting of attention back to the primary task after recognizing and evaluating a distracting stimulus [35, 36]. In adults, P3a is followed by reorienting negativity (RON) [37]. A counterpart of RON in children was suggested to be the late negativity (LN, also called as negative component, Nc) [28–31]. The LN latency, peaking at around 400–700 ms after the onset of a novel stimulus, reflects reorienting time [21]. The LN has the maximal amplitude at fronto-central scalp areas [21]. Large LN reflects enhanced neural effort to reorienting [37] or more attention paid to the surprising event [30]. During maturation, the LN amplitude has been suggested to decrease [30, 34].\nThe aim of this study was to compare the involuntary attentional mechanisms in 2-year-old healthy children with RAOM history and their healthy age-matched controls by recording auditory ERPs. For that purpose, novel stimuli were embedded in the multi-feature paradigm with syllables to elicit eP3a, lP3a, and LN. It was hypothesized that children with RAOM would show atypically enhanced and/or short latency P3a reflecting enhanced distractibility for the intrusive novel sounds and to have larger amplitude and/or longer latency of LN indicating more neural effort to reorienting and/or longer re-orientation time of attention than their healthy peers. Studies of P3a and novelty-related LN in toddlers are scarce and to our knowledge, this is the first study measuring these responses in children with RAOM.", "Twenty-four children with a middle ear infection history were recruited to the RAOM group (at least three AOM per 6 months or four AOM per 1 year) from the Ear, Nose and Throat clinic of Oulu University hospital. During 1 year in 2009–2010, all children aged 22–26 months fulfilling the criteria of this study with a tympanostomy tube insertion participated (for a more detailed AOM history see [7]). The EEG recording was done on average 33 days (range 20–56 d) after the tympanostomy tube insertion. Twenty-two age matched control children with 0–2 AOM were recruited with public advertisements. All families participated voluntarily to the study and an informed written consent was obtained from the parents of children. Families were paid 15€ for travelling costs. 
The study was in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of the Northern Ostrobothnia Hospital District (reference number 6/2009).\nParticipants were from monolingual Finnish-speaking families. They were born full-term with normal birth weight and were developing typically in their sensory, cognitive, and motor skills according to parental questionnaires and the examinations at the family and health care clinics during the first 2 years of life. No family history of speech, language, or other developmental impairments or severe neuropsychiatric diseases was allowed. The standardized Finnish version of the Reynell Developmental Language Scales III, the Comprehension scale [38, 39], was applied to exclude developmental language disorders. At the time of the EEG recording, transient evoked otoacoustic emissions (TEOAEs; nonlinear click sequence 1.5–4.5 kHz, 73 dB SPL, pass/refer result; MADSEN AccuScreen® pro, GN Otometrics, Taastrup, Denmark) were checked. Four children with RAOM and six control children did not co-operate in the TEOAE measurement, but all the children had passed a TEOAE screening in the postnatal period in Oulu University Hospital. Right before the EEG recording, all children were assessed with pneumatic otoscopy and, if needed, with tympanometry and/or otomicroscopy by an otolaryngologist to ensure that they had clinically healthy ears at the time of the measurement.\nIn the RAOM group, one child was excluded because of a family history of dyslexia and one because the results of the Reynell III did not meet the criteria for normal speech comprehension and an additional examination by a speech-language pathologist showed signs of a severe language disorder. Two children with RAOM had atypically enhanced P3a responses (24.07 and 23.20 µV), and the statistical analysis indicated them to be outliers, i.e., their responses were at an abnormal distance from the others (2.51–17.63 µV). Because we hypothesized that the children with RAOM would have enhanced P3a responses, we decided to exclude these two children from the further analysis to avoid biasing the results of the RAOM group with these extreme values. Furthermore, two children did not arrive for the measurement at the appointed time. In the control group, two children had to be excluded from the analysis because of a large amount of alpha activity in their EEG, leading to a low signal-to-noise ratio. One control child was excluded because of acute OM diagnosed at the time of measurement. The total number of children in this study was 18 in the RAOM group and 19 in the control group after these exclusions. There were no significant differences between the final groups in gender (RAOM: 10 boys; controls: 11 boys), age (RAOM: mean 24 months, min–max 22–26; controls: mean 24 months, min–max 22–26), or mother's education (RAOM: 4 low, 13 middle or high; controls: 2 low, 17 middle or high). The educational information of one mother in the RAOM group was not available.", "ERPs were recorded in a passive condition with the multi-feature paradigm ("Optimum-1"), which has been shown to be a fast and suitable method for obtaining several ERPs reflecting different stages of auditory processing in adults [40–42], school-aged children [43, 44], and toddlers [45, 46]. In the multi-feature paradigm, the standard and the deviant sounds are presented in the same sequence so that every other stimulus is the standard and every other stimulus is one of the several deviants. 
In the deviants, only one sound feature (e.g., vowel or frequency) of the standard stimulus is changed at a time, while the other features remain the same and strengthen the memory representation of the standard stimulus. To study attentional mechanisms, distracting novel sounds may also be embedded in the same sound stream [42, 45, 46].\nThe standards were Finnish semisynthetic consonant–vowel syllables /ke:/ or /pi:/ (duration 170 ms). Every other stimulus sequence included the standard /ke:/ and every other the standard /pi:/. The deviants (duration 170 ms) were five different deviations of these syllables (frequency F0 ± 8 Hz, intensity ± 7 dB, consonant from /ke:/ to /pe:/ and from /pi:/ to /ki:/, vowel from /ke:/ to /ki:/ and from /pi:/ to /pe:/, and vowel duration shortened from the full syllable length of 170 ms to 120 ms) [42, 47]. The obligatory and MMN responses elicited by the standards and deviants were reported earlier [7]. In addition, there were completely different novel sounds (duration 200 ms, including fall and rise times of 10 ms), which were non-synthetic, environmental human (e.g., coughs and laughs) or non-human (e.g., a door slamming and a telephone ringing) sounds [42]. In the stimulus sequence, every other stimulus was a standard (probability 50 %) and every other was one of the deviants (probability 8.3 % for each) or a novel sound (probability 8.3 %). The presentation of stimuli was pseudo-randomized so that all five deviants and one novel stimulus appeared once among 12 successive stimuli, and the same deviant or novel was never repeated after the standard stimulus following it (one possible way to generate such a sequence is sketched below). The stimulus onset asynchrony was 670 ms. The stimuli were presented in sequences lasting about 6 min, each starting with 10 standards and including 540 stimuli, of which 275 were standards and 44 were novel sounds, the rest being deviant syllables (44 of each deviant type). Three to four stimulus sequences were presented to each participant.\nStimuli were presented in an electrically shielded and sound-attenuated room (reverberation time .3 s, background noise level 43 dB) at a sound pressure level of 75 dB via two loudspeakers (Genelec® 6010A, Genelec Ltd., Iisalmi, Finland). The loudspeakers were in front of the child at a distance of 1.3 m and at a 40-degree angle from the child’s head.", "The EEG (.16–1000 Hz, sampling rate 5000 Hz) was recorded with a 32-channel electrode cap with Ag–AgCl electrodes placed according to the international 10/20 system (ActiCAP 002 and Brain Vision BrainAmp system and software; Brain Products GmbH, Gilching, Germany). The FCz electrode served as the online reference, and impedances were kept below 20 kΩ. Additional electrodes placed above the outer canthus of the right eye and below the outer canthus of the left eye in a bipolar montage were used for the electro-oculogram.\nToddlers sat in a chair or in their parent’s lap, watching voiceless cartoons or children’s books, or played with silent toys. The parents were instructed to be as quiet as possible. The recording was camera-monitored from the next room, and an experienced EEG technician monitored the quality of the EEG signal during the recording. During the same recording session, the children participated in an EEG recording with three to four stimulus sequences with background noise [48]. The total examination time for each participant was about two and a half hours, of which the EEG registration took about 45 min.
There were breaks with refreshments between the stimulus sequences.", "Brain Vision Analyzer 2.0 (Brain Products GmbH) was used for the offline analysis. Data were downsampled to 250 Hz and re-referenced to the average of the mastoid electrodes. Band-pass filtering of .5–45 Hz (24 dB/oct) was applied to avoid aliasing and to remove signals not originating from the brain [49]. After visual inspection, channels Fp1, Fp2, PO9, PO10, O1, Oz, and O2 were excluded from further analysis because of artefacts. Ocular correction was done with an independent component analysis. Epochs containing extracerebral artefacts with voltage exceeding ±150 μV at any electrode were removed, and the data were filtered with a band pass of 1–20 Hz (48 dB/oct). ERPs for the standard and novel stimuli were averaged from baseline-corrected EEG epochs of –100 ms prestimulus to 670 ms after stimulus onset. The first 10 standard stimuli in each recorded sequence and the standard stimuli immediately following the novel stimuli were excluded from the analysis. A two-tailed t-test indicated no significant group differences in the mean number of averaged epochs for standards or novels. The mean number of epochs for standard and novel stimuli in the RAOM group was 675 (min–max 373–856) and 133 (min–max 75–170), respectively, and in the control group 719 (min–max 517–856) and 143 (min–max 99–171), respectively.\nTo identify the P3a and LN, the ERPs for standards were subtracted from those for novels. The grand average difference waves showed a biphasic P3a elicited by the novel stimuli; hence, the eP3a and lP3a were analyzed separately. The channel selection for the peak detection was done after visual inspection, which showed the most prominent eP3a at the Cz electrode and the most prominent lP3a and LN at the Fz electrode. The peak detection was done individually for each child within time windows of 180–300 ms for the eP3a, 300–440 ms for the lP3a, and 420–600 ms for the LN. The peak latencies were determined from the most positive (eP3a and lP3a) or the most negative (LN) peak within those windows, and the mean peak amplitudes were calculated from a ±20 ms time window around the peak latencies (this procedure is sketched below).\nThe statistical analyses were done for the F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4 electrodes. The existence of each ERP was determined by comparing its amplitude to zero with a two-tailed t-test. The amplitude differences between the groups were examined with a repeated measures analysis of variance (ANOVA) and Fisher’s least significant difference (LSD) post hoc test. In the ANOVA, the between-subject factor was group (RAOM vs. control) and the within-subject factors were anterior-posterior (AP; F3-Fz-F4 vs. C3-Cz-C4 vs. P3-Pz-P4) and right-left (RL; F3-C3-P3 vs. Fz-Cz-Pz vs. F4-C4-P4) electrode position. The Huynh–Feldt correction was applied when appropriate. A one-way ANOVA was used for studying latency differences between the groups. For the effect-size estimation, the partial eta squared (ηp²) was calculated." ]
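The pseudo-randomization constraints described above for the stimulus sequences (alternating standards, each run of 12 stimuli containing all five deviant types plus one novel sound, and the same deviant or novel never recurring immediately after the intervening standard) can be made concrete with a short sketch. The Python snippet below is a minimal illustration under these assumptions only; it is not the authors' actual stimulus-generation software, and names such as DEVIANT_TYPES, the block-shuffling strategy, and the default counts are illustrative.

```python
import random

# Illustrative sketch only; not the authors' stimulus software.
DEVIANT_TYPES = ["frequency", "intensity", "consonant", "vowel", "vowel_duration"]
STIM_BLOCK = DEVIANT_TYPES + ["novel"]   # 5 deviants + 1 novel per 12 stimuli


def make_sequence(n_blocks=44, seed=1):
    """Build one pseudo-randomized multi-feature-style sequence: 10 leading
    standards, then alternating deviant/novel and standard stimuli, with each
    block of 12 stimuli containing every deviant type once plus one novel,
    and the same deviant or novel never repeated right after the standard
    that follows it."""
    rng = random.Random(seed)
    sequence = ["standard"] * 10
    last_item = None
    for _ in range(n_blocks):
        block = STIM_BLOCK[:]
        rng.shuffle(block)
        # Re-shuffle if the new block would start with the type that ended the
        # previous block (i.e., a repeat across the intervening standard).
        while block[0] == last_item:
            rng.shuffle(block)
        for item in block:
            sequence += [item, "standard"]
        last_item = block[-1]
    return sequence


if __name__ == "__main__":
    seq = make_sequence()
    print(len(seq), seq[:13])
```

With the default arguments this yields a sequence of roughly the length described above; the exact stimulus counts of the published sequences depend on details not specified here.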
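The difference-wave and peak-detection procedure described in the analysis text (novel-minus-standard subtraction, component-specific search windows, and the mean amplitude over ±20 ms around the individual peak) can likewise be sketched in a few lines. This is a minimal NumPy illustration of that logic, not the Brain Vision Analyzer pipeline actually used; the epoch layout (250 Hz sampling, 100 ms pre-stimulus baseline) follows the description above, while the function and variable names are assumptions.

```python
import numpy as np

FS = 250          # Hz, sampling rate after downsampling (assumed epoch layout)
BASELINE_S = 0.1  # 100 ms of pre-stimulus signal at the start of each epoch

# Component search windows in ms after stimulus onset; the sign selects the
# most positive (+1) or most negative (-1) peak within the window.
WINDOWS = {"eP3a": (180, 300, +1), "lP3a": (300, 440, +1), "LN": (420, 600, -1)}


def to_sample(ms):
    """Convert a post-stimulus latency in ms to an epoch sample index."""
    return int(round((BASELINE_S + ms / 1000.0) * FS))


def peak_measures(novel_avg, standard_avg, component):
    """Return (peak latency in ms, mean amplitude over +/-20 ms) for one
    channel's averaged ERPs, given as 1-D arrays of equal length."""
    diff = novel_avg - standard_avg                    # novel-minus-standard wave
    lo, hi, sign = WINDOWS[component]
    segment = sign * diff[to_sample(lo):to_sample(hi)]
    peak_idx = to_sample(lo) + int(np.argmax(segment))
    latency_ms = peak_idx / FS * 1000.0 - BASELINE_S * 1000.0
    half = int(round(0.020 * FS))                      # +/-20 ms around the peak
    amplitude = float(diff[peak_idx - half:peak_idx + half + 1].mean())
    return latency_ms, amplitude


if __name__ == "__main__":
    # Synthetic demo: a positive deflection near 250 ms should be picked up as eP3a.
    t = np.arange(-0.1, 0.67, 1 / FS)
    demo_novel = 7.0 * np.exp(-((t - 0.25) ** 2) / 0.002)
    demo_standard = np.zeros_like(t)
    print(peak_measures(demo_novel, demo_standard, "eP3a"))
```

Applied to real averaged epochs, for example peak_measures(novel_cz, standard_cz, "eP3a"), such a routine would return the kind of individual latency and mean-amplitude values that entered the group statistics, assuming preprocessing comparable to that described above.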
[ null, null, null, null, null ]
[ "Background", "Methods", "Participants", "Stimuli and experimental design", "EEG recording", "Analysis", "Results", "Discussion", "Conclusions" ]
[ "About 30 % of children have recurrent middle ear infections (recurrent acute otitis media, RAOM) in their early childhood [1, 2]. Due to challenges in diagnosing and classifying middle ear status, otitis media (OM) is commonly used as a general term for various forms of middle ear fluid and inflammation. A general definition for RAOM has been three or more episodes of acute otitis media (AOM) per 6 months or four or more AOM episodes per year [3]. After an episode of AOM, middle ear fluid is present for few days to over 2 months [4]. Fluid in the middle ear causes about 20–30 dB conductive hearing loss [5], and, especially when asymmetric, it affects interaural temporal and level difference cues compromising binaural sound localization [6]. Fluctuating hearing sensations during the development of central auditory system has been connected to atypical auditory processing [7–11], which can lead to problems in language acquisition [12]. Therefore, it is necessary to get a better understanding of the consequences of early childhood RAOM on immature central nervous system.\nBehavioral studies in children with OM have shown problems in regulation of auditory attention [13–18]. Involuntary orientation to environmental events as well as selective maintenance of attention is essential for speech processing and language learning. Involuntary attention accounts for the detection and selection of potentially biologically meaningful information of events unrelated to the ongoing task [19]. For example, a screeching noise of a braking car causes an attention switch of a pedestrian who is talking on the phone, and leads to the distraction of the ongoing activity. After the evaluation of the irrelevant novel stimulus, the reorientation back to the recent activity takes place. Involuntary attention is a bottom-up (stimulus-driven) process [19] but during maturation the developing top-down mechanisms start to inhibit distractors which are not meaningful, in other words, children learn to separate relevant from irrelevant stimuli [20, 21]. An excessive tendency to orient to the irrelevant stimuli requiring a lot of attentional resources makes goal-directed behavior harder [22].\nSchool-aged children with OM history were shown to have deficits of selective auditory attention in dichotic listening tasks [13–15]. They also showed increased reorientation time of attention during behavioral tasks [16]. Rated by their teachers, school-children with OM history were suggested to be less task-oriented [17] but not in all studies [23]. Studies in toddlers are scarce, probably due to the weak co-operation skills in children at this age. However, toddlers with chronic OM were shown to express reduced attention during book reading at the time of middle ear effusion and, according the questionnaire, their mothers rated them as easily distractible [18]. The neural mechanisms beyond these findings are still unknown.\nEvent-related potentials (ERPs) are a feasible approach for studying non-invasively neural mechanisms of involuntary auditory attention without tasks requiring co-operation skills [24]. The auditory P3a is a large positive deflection elicited by unexpected, novel sounds which substantially differ from other sounds, for example slam of the door or human cough. The P3a reflects involuntary attention mechanisms and orientation of attention [22]. It peaks fronto-centrally at 200–300 ms after the onset of a distracting stimulus [22, 25, 26].\nP3a was often found to be biphasic [22, 27]. 
Two phases, early and late, have been identified in children [28–30] already at the age of 2 years [31]. Early P3a (eP3a) was suggested to reflect the automatic detection of violation in the neural model of existing world and thus, to represent the orientation of attention [32]. It is maximal at vertex and diminishes posteriorly and laterally [33]. In contrast, late P3a (lP3a) was suggested to reflect actual attention switch and it is maximal frontally [33]. Morphology of these responses is quite similar in children and adults but the scalp topography of children’s P3a is more anterior than that of adults [30]. The eP3a may mature earlier than lP3a, which continues to enhance frontally during development [34]. Hence, processing of acoustic novelty in the childhood resembles that in the adulthood although some underlying neural networks still continue to develop. Atypical P3a responses have been connected to abnormal involuntary attention, for example, parietally enhanced lP3a was found in children with attention deficit hyperactivity disorder (ADHD) [29].\nThe ERP waveform also reflects reorienting of attention back to the primary task after recognizing and evaluating a distracting stimulus [35, 36]. In adults, P3a is followed by reorienting negativity (RON) [37]. A counterpart of RON in children was suggested to be the late negativity (LN, also called as negative component, Nc) [28–31]. The LN latency, peaking at around 400–700 ms after the onset of a novel stimulus, reflects reorienting time [21]. The LN has the maximal amplitude at fronto-central scalp areas [21]. Large LN reflects enhanced neural effort to reorienting [37] or more attention paid to the surprising event [30]. During maturation, the LN amplitude has been suggested to decrease [30, 34].\nThe aim of this study was to compare the involuntary attentional mechanisms in 2-year-old healthy children with RAOM history and their healthy age-matched controls by recording auditory ERPs. For that purpose, novel stimuli were embedded in the multi-feature paradigm with syllables to elicit eP3a, lP3a, and LN. It was hypothesized that children with RAOM would show atypically enhanced and/or short latency P3a reflecting enhanced distractibility for the intrusive novel sounds and to have larger amplitude and/or longer latency of LN indicating more neural effort to reorienting and/or longer re-orientation time of attention than their healthy peers. Studies of P3a and novelty-related LN in toddlers are scarce and to our knowledge, this is the first study measuring these responses in children with RAOM.", " Participants Twenty-four children with a middle ear infection history were recruited to the RAOM group (at least three AOM per 6 months or four AOM per 1 year) from the Ear, Nose and Throat clinic of Oulu University hospital. During 1 year in 2009–2010, all children aged 22–26 months fulfilling the criteria of this study with a tympanostomy tube insertion participated (for a more detailed AOM history see [7]). The EEG recording was done on average 33 days (range 20–56 d) after the tympanostomy tube insertion. Twenty-two age matched control children with 0–2 AOM were recruited with public advertisements. All families participated voluntarily to the study and an informed written consent was obtained from the parents of children. Families were paid 15€ for travelling costs. 
The study was in accordance of Declaration of Helsinki and approved by the Ethics Committee of Northern Osthrobotnia Hospital District (reference number 6/2009).\nParticipants were from monolingual Finnish-speaking families. They were born full-term with normal birth weight, and developing typically in their sensory, cognitive, and motor skills according to parental questionnaires and the examinations at the family and health care clinics during the first 2 years of life. No family history of speech, language, or other developmental impairments or severe neuropsychiatric diseases was allowed. The standardized Finnish version of Reynell Developmental Language Scales III, the Comprehension scale [38, 39] was applied to exclude developmental language disorders. At the time of the EEG recording, transient evoked otoacoustic emissions (TEOAEs; nonlinear click sequence 1.5–4.5 kHz, 73 dB SPL, pass/refer result; MADSEN AccuScreen® pro, GN Otometrics, Taastrup, Denmark) were checked. Four children with RAOM and six control children did not co-operate in the TEOAE measurement, but all the children had passed a TEOAE screening at a postnatal period in Oulu University Hospital. Right before the EEG recording, all children were assessed with pneumatic otoscopy and if needed by tympanometry and/or otomicroscopy by an otolaryngologist to ensure that they had clinically healthy ears at the time of the measurement.\nIn the RAOM group, one child was excluded because of a family history of dyslexia and one because the results of the Reynell III did not meet the criteria for normal speech comprehension, and an additional examination of speech-language pathologist showed signs of severe language disorder. Two children with RAOM had atypically enhanced P3a responses (24.07 and 23.20 µV), and the statistical analysis indicated them to be outliers, i.e., their responses being at abnormal distance from the other ones (2.51–17.63 µV). Because we hypothesized that the children with RAOM would have enhanced P3a responses, we decided to exclude these two children from the further analysis to avoid the bias of these extreme values on the results of the RAOM group. Furthermore, two children did not arrive to the measurement at the appointed time. In the control group, two children had to be excluded from the analysis because of a large amount of alpha activity in their EEG leading to low signal-to-noise ratio. One control children was excluded because of acute OM diagnosed at the time of measurement. The total number of children in this study was 18 in the RAOM group and 19 in the control group after these exclusions. There were no significant differences between the final groups in gender (RAOM: 10 boys; controls 11 boys), age (RAOM: mean 24 months, min–max 22–26; controls: mean 24 months, min–max 22–26), or mother’s education (RAOM: 4 low, 13 middle or high; controls 2 low, 17 middle or high). The educational information of one mother in the RAOM group was not available.\nTwenty-four children with a middle ear infection history were recruited to the RAOM group (at least three AOM per 6 months or four AOM per 1 year) from the Ear, Nose and Throat clinic of Oulu University hospital. During 1 year in 2009–2010, all children aged 22–26 months fulfilling the criteria of this study with a tympanostomy tube insertion participated (for a more detailed AOM history see [7]). The EEG recording was done on average 33 days (range 20–56 d) after the tympanostomy tube insertion. 
Twenty-two age matched control children with 0–2 AOM were recruited with public advertisements. All families participated voluntarily to the study and an informed written consent was obtained from the parents of children. Families were paid 15€ for travelling costs. The study was in accordance of Declaration of Helsinki and approved by the Ethics Committee of Northern Osthrobotnia Hospital District (reference number 6/2009).\nParticipants were from monolingual Finnish-speaking families. They were born full-term with normal birth weight, and developing typically in their sensory, cognitive, and motor skills according to parental questionnaires and the examinations at the family and health care clinics during the first 2 years of life. No family history of speech, language, or other developmental impairments or severe neuropsychiatric diseases was allowed. The standardized Finnish version of Reynell Developmental Language Scales III, the Comprehension scale [38, 39] was applied to exclude developmental language disorders. At the time of the EEG recording, transient evoked otoacoustic emissions (TEOAEs; nonlinear click sequence 1.5–4.5 kHz, 73 dB SPL, pass/refer result; MADSEN AccuScreen® pro, GN Otometrics, Taastrup, Denmark) were checked. Four children with RAOM and six control children did not co-operate in the TEOAE measurement, but all the children had passed a TEOAE screening at a postnatal period in Oulu University Hospital. Right before the EEG recording, all children were assessed with pneumatic otoscopy and if needed by tympanometry and/or otomicroscopy by an otolaryngologist to ensure that they had clinically healthy ears at the time of the measurement.\nIn the RAOM group, one child was excluded because of a family history of dyslexia and one because the results of the Reynell III did not meet the criteria for normal speech comprehension, and an additional examination of speech-language pathologist showed signs of severe language disorder. Two children with RAOM had atypically enhanced P3a responses (24.07 and 23.20 µV), and the statistical analysis indicated them to be outliers, i.e., their responses being at abnormal distance from the other ones (2.51–17.63 µV). Because we hypothesized that the children with RAOM would have enhanced P3a responses, we decided to exclude these two children from the further analysis to avoid the bias of these extreme values on the results of the RAOM group. Furthermore, two children did not arrive to the measurement at the appointed time. In the control group, two children had to be excluded from the analysis because of a large amount of alpha activity in their EEG leading to low signal-to-noise ratio. One control children was excluded because of acute OM diagnosed at the time of measurement. The total number of children in this study was 18 in the RAOM group and 19 in the control group after these exclusions. There were no significant differences between the final groups in gender (RAOM: 10 boys; controls 11 boys), age (RAOM: mean 24 months, min–max 22–26; controls: mean 24 months, min–max 22–26), or mother’s education (RAOM: 4 low, 13 middle or high; controls 2 low, 17 middle or high). 
The educational information of one mother in the RAOM group was not available.\n Stimuli and experimental design ERPs were recorded in a passive condition with the multi-feature paradigm (“Optimum-1”), which was shown to be a fast and eligible method for obtaining several ERPs reflecting different stages of auditory processing in adults [40–42], school-aged children [43, 44], and toddlers [45, 46]. In the multi-feature paradigm, the standard and the deviant sounds are presented in the same sequence so that every other stimulus is the standard and every other stimulus is one of the several deviants. In the deviants, only one sound feature (e.g., vowel or frequency) of the standard stimulus is changed at a time while the other features remain the same and strengthen the memory representation of the standard stimulus. To study attentional mechanisms, distracting novel sounds may also be embedded in the same sound stream [42, 45, 46].\nThe standards were Finnish semisynthetic consonant–vowel syllables/ke:/or/pi:/(duration 170 ms). Every other stimulus sequence included standard/ke:/and every other included standard/pi:/. The deviants (duration 170 ms) were five different deviations in these syllables (frequency F0 ± 8 Hz, intensity ± 7 dB, consonant from/ke:/to/pe:/and from/pi:/to/ki:/, vowel from/ke:/to/ki:/and from/pi:/to/pe:/, and vowel duration from syllable length of 170 ms to 120 ms) [42, 47]. The obligatory and MMN responses elicited by standards and deviants were reported earlier [7]. In addition, there were totally differing novel sounds (duration 200 ms, including a fall and a rise time of 10 ms), which were non-synthetic, environmental human (e.g. coughs and laughs) or non-human (e.g. door slamming and telephone ringing) sounds [42]. In the stimulus sequence, every other stimulus was a standard (probability 50 %) and every other was one of the deviants (probability 8.3 % for each) or a novel sound (probability 8.3 %). The presentation of stimuli was pseudo-randomized so that all five deviants and one novel stimulus appeared once among 12 successive stimuli, and the same deviant or novel was never repeated after the standard stimulus following it. The stimulus onset asynchrony was 670 ms. The stimuli were in the sequences lasting for about 6 min., each starting with 10 standards, and including 540 stimuli from which 275 were standards and 44 were novels sounds, the rest being deviant syllables (44 of each deviant type). Three to four stimulus sequences were presented to each participant.\nStimuli were presented in an electrically shielded and sound-attenuated room (reverberation time .3 s, background noise level 43 dB) with the sound pressure level of 75 dB via two loudspeakers (Genelec® 6010A, Genelec Ltd., Iisalmi, Finland). The loudspeakers were in front of the child at a distance of 1.3 m and in a 40-degree angle from the child’s head.\nERPs were recorded in a passive condition with the multi-feature paradigm (“Optimum-1”), which was shown to be a fast and eligible method for obtaining several ERPs reflecting different stages of auditory processing in adults [40–42], school-aged children [43, 44], and toddlers [45, 46]. In the multi-feature paradigm, the standard and the deviant sounds are presented in the same sequence so that every other stimulus is the standard and every other stimulus is one of the several deviants. 
In the deviants, only one sound feature (e.g., vowel or frequency) of the standard stimulus is changed at a time while the other features remain the same and strengthen the memory representation of the standard stimulus. To study attentional mechanisms, distracting novel sounds may also be embedded in the same sound stream [42, 45, 46].\nThe standards were Finnish semisynthetic consonant–vowel syllables/ke:/or/pi:/(duration 170 ms). Every other stimulus sequence included standard/ke:/and every other included standard/pi:/. The deviants (duration 170 ms) were five different deviations in these syllables (frequency F0 ± 8 Hz, intensity ± 7 dB, consonant from/ke:/to/pe:/and from/pi:/to/ki:/, vowel from/ke:/to/ki:/and from/pi:/to/pe:/, and vowel duration from syllable length of 170 ms to 120 ms) [42, 47]. The obligatory and MMN responses elicited by standards and deviants were reported earlier [7]. In addition, there were totally differing novel sounds (duration 200 ms, including a fall and a rise time of 10 ms), which were non-synthetic, environmental human (e.g. coughs and laughs) or non-human (e.g. door slamming and telephone ringing) sounds [42]. In the stimulus sequence, every other stimulus was a standard (probability 50 %) and every other was one of the deviants (probability 8.3 % for each) or a novel sound (probability 8.3 %). The presentation of stimuli was pseudo-randomized so that all five deviants and one novel stimulus appeared once among 12 successive stimuli, and the same deviant or novel was never repeated after the standard stimulus following it. The stimulus onset asynchrony was 670 ms. The stimuli were in the sequences lasting for about 6 min., each starting with 10 standards, and including 540 stimuli from which 275 were standards and 44 were novels sounds, the rest being deviant syllables (44 of each deviant type). Three to four stimulus sequences were presented to each participant.\nStimuli were presented in an electrically shielded and sound-attenuated room (reverberation time .3 s, background noise level 43 dB) with the sound pressure level of 75 dB via two loudspeakers (Genelec® 6010A, Genelec Ltd., Iisalmi, Finland). The loudspeakers were in front of the child at a distance of 1.3 m and in a 40-degree angle from the child’s head.\n EEG recording The EEG (.16–1000 Hz, sampling rate 5000 Hz) was recorded with 32 channel electro-cap with Ag–AgCl electrodes placed according to the international 10/20 system (ActiCAP 002 and Brain Vision BrainAmp system and software; Brain Products GmbH, Gilching, Germany). The FCz electrode served as online reference and impedances were kept below 20 kΩ. Additional electrodes placed above the outer canthus of the right eye and below the outer canthus of the left eye with bipolar montage were used for electro-oculogram.\nToddlers sat in a chair or in their parent’s lap, watching voiceless cartoons or children’s books, or played with silent toys. The parents were instructed to be as quiet as possible. The recording was camera monitored from the next room and an experienced EEG technician monitored the quality of the EEG signal during recording. During the same recording session, the children participated in an EEG recording with three to four stimulus sequences with background noise [48]. The total examination time for each participant was about two and half hours, from which the EEG registration took about 45 min. 
There were breaks with refreshments between the stimulus sequences.\nThe EEG (.16–1000 Hz, sampling rate 5000 Hz) was recorded with 32 channel electro-cap with Ag–AgCl electrodes placed according to the international 10/20 system (ActiCAP 002 and Brain Vision BrainAmp system and software; Brain Products GmbH, Gilching, Germany). The FCz electrode served as online reference and impedances were kept below 20 kΩ. Additional electrodes placed above the outer canthus of the right eye and below the outer canthus of the left eye with bipolar montage were used for electro-oculogram.\nToddlers sat in a chair or in their parent’s lap, watching voiceless cartoons or children’s books, or played with silent toys. The parents were instructed to be as quiet as possible. The recording was camera monitored from the next room and an experienced EEG technician monitored the quality of the EEG signal during recording. During the same recording session, the children participated in an EEG recording with three to four stimulus sequences with background noise [48]. The total examination time for each participant was about two and half hours, from which the EEG registration took about 45 min. There were breaks with refreshments between the stimulus sequences.\n Analysis Brain Vision Analyzer 2.0 (BrainProducts, GmpH) was used for offline analysis. Data were down sampled to 250 Hz and re-referenced to the average of the mastoid electrodes. Band pass filtering of .5–45 Hz, 24 dB/oct was applied to avoid aliasing and signals not originated from the brain [49]. After visual inspection, channels Fp1, Fp2, PO9, PO10, O1, Oz, and O2 were disabled from further analyzis because of artefacts. Ocular correction was done with an independent component analysis. Extracerebral artefacts with voltage exceeding ±150 μV at any electrode were removed and data were filtered with band pass of 1–20 Hz, 48 dB/oct. ERPs for standard and novel stimuli were averaged from baseline corrected EEG epochs of –100 ms prestimulus to 670 ms after stimulus onset. The first 10 standard stimuli in each recorded sequence and the standard stimuli right after the novel stimuli were excluded from the analysis. Two-tailed t-test indicated no significant group differences in the mean number of averaged epochs for standards or novels. The mean number of epochs for standard and novel stimuli in the RAOM group was 675 (min–max 373–856) and 133 (min–max 75–170), respectively, and in the control group 719 (min–max 517–856) and 143 (min–max 99–171), respectively.\nTo identify the P3a and LN, ERPs for standards were subtracted from those for novels. The grand average difference waves showed the biphasic P3a elicited by novel stimuli. Hence, eP3a and lP3a were separately analyzed. The channel selection for the peak detection was done after visual inspection, which showed the most prominent eP3a at the Cz electrode and lP3a and LN at the Fz electrode. The peak detection was done individually for each child within time windows of 180–300 ms for eP3a, 300–440 ms for lP3a, and 420–600 ms for LN. The peak latencies were determined from the most positive (eP3a and lP3a) or the most negative (LN) peak within those windows, and the mean peak amplitudes were calculated from ±20 ms time window around the peak latencies.\nThe statistical analyses were done for the F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4 electrodes. The existence of each ERP was determined by comparing its amplitude to zero with a two-tailed t-test. 
The amplitude differences between the groups were examined with repeated measures analysis of variance (ANOVA) and the Fisher’s least significant difference (LSD) post hoc test. In ANOVA, between-subject factor was group (RAOM vs. control) and within-subject factors were anterior-posterior (AP; F3-Fz-F4 vs. C3-Cz-C4 vs. P3-Pz-P4) and right-left (RL; F3-C3-P3 vs. Fz-Cz-Pz vs. F4-C4-P4) electrode positions. The Huynh–Feldt correction was applied when appropriate. One-way ANOVA was used for studying latency differences between the groups. For the effect-size estimation, the partial eta squared (ƞ\np2) was calculated.\nBrain Vision Analyzer 2.0 (BrainProducts, GmpH) was used for offline analysis. Data were down sampled to 250 Hz and re-referenced to the average of the mastoid electrodes. Band pass filtering of .5–45 Hz, 24 dB/oct was applied to avoid aliasing and signals not originated from the brain [49]. After visual inspection, channels Fp1, Fp2, PO9, PO10, O1, Oz, and O2 were disabled from further analyzis because of artefacts. Ocular correction was done with an independent component analysis. Extracerebral artefacts with voltage exceeding ±150 μV at any electrode were removed and data were filtered with band pass of 1–20 Hz, 48 dB/oct. ERPs for standard and novel stimuli were averaged from baseline corrected EEG epochs of –100 ms prestimulus to 670 ms after stimulus onset. The first 10 standard stimuli in each recorded sequence and the standard stimuli right after the novel stimuli were excluded from the analysis. Two-tailed t-test indicated no significant group differences in the mean number of averaged epochs for standards or novels. The mean number of epochs for standard and novel stimuli in the RAOM group was 675 (min–max 373–856) and 133 (min–max 75–170), respectively, and in the control group 719 (min–max 517–856) and 143 (min–max 99–171), respectively.\nTo identify the P3a and LN, ERPs for standards were subtracted from those for novels. The grand average difference waves showed the biphasic P3a elicited by novel stimuli. Hence, eP3a and lP3a were separately analyzed. The channel selection for the peak detection was done after visual inspection, which showed the most prominent eP3a at the Cz electrode and lP3a and LN at the Fz electrode. The peak detection was done individually for each child within time windows of 180–300 ms for eP3a, 300–440 ms for lP3a, and 420–600 ms for LN. The peak latencies were determined from the most positive (eP3a and lP3a) or the most negative (LN) peak within those windows, and the mean peak amplitudes were calculated from ±20 ms time window around the peak latencies.\nThe statistical analyses were done for the F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4 electrodes. The existence of each ERP was determined by comparing its amplitude to zero with a two-tailed t-test. The amplitude differences between the groups were examined with repeated measures analysis of variance (ANOVA) and the Fisher’s least significant difference (LSD) post hoc test. In ANOVA, between-subject factor was group (RAOM vs. control) and within-subject factors were anterior-posterior (AP; F3-Fz-F4 vs. C3-Cz-C4 vs. P3-Pz-P4) and right-left (RL; F3-C3-P3 vs. Fz-Cz-Pz vs. F4-C4-P4) electrode positions. The Huynh–Feldt correction was applied when appropriate. One-way ANOVA was used for studying latency differences between the groups. 
For the effect-size estimation, the partial eta squared (ƞ\np2) was calculated.", "Twenty-four children with a middle ear infection history were recruited to the RAOM group (at least three AOM per 6 months or four AOM per 1 year) from the Ear, Nose and Throat clinic of Oulu University hospital. During 1 year in 2009–2010, all children aged 22–26 months fulfilling the criteria of this study with a tympanostomy tube insertion participated (for a more detailed AOM history see [7]). The EEG recording was done on average 33 days (range 20–56 d) after the tympanostomy tube insertion. Twenty-two age matched control children with 0–2 AOM were recruited with public advertisements. All families participated voluntarily to the study and an informed written consent was obtained from the parents of children. Families were paid 15€ for travelling costs. The study was in accordance of Declaration of Helsinki and approved by the Ethics Committee of Northern Osthrobotnia Hospital District (reference number 6/2009).\nParticipants were from monolingual Finnish-speaking families. They were born full-term with normal birth weight, and developing typically in their sensory, cognitive, and motor skills according to parental questionnaires and the examinations at the family and health care clinics during the first 2 years of life. No family history of speech, language, or other developmental impairments or severe neuropsychiatric diseases was allowed. The standardized Finnish version of Reynell Developmental Language Scales III, the Comprehension scale [38, 39] was applied to exclude developmental language disorders. At the time of the EEG recording, transient evoked otoacoustic emissions (TEOAEs; nonlinear click sequence 1.5–4.5 kHz, 73 dB SPL, pass/refer result; MADSEN AccuScreen® pro, GN Otometrics, Taastrup, Denmark) were checked. Four children with RAOM and six control children did not co-operate in the TEOAE measurement, but all the children had passed a TEOAE screening at a postnatal period in Oulu University Hospital. Right before the EEG recording, all children were assessed with pneumatic otoscopy and if needed by tympanometry and/or otomicroscopy by an otolaryngologist to ensure that they had clinically healthy ears at the time of the measurement.\nIn the RAOM group, one child was excluded because of a family history of dyslexia and one because the results of the Reynell III did not meet the criteria for normal speech comprehension, and an additional examination of speech-language pathologist showed signs of severe language disorder. Two children with RAOM had atypically enhanced P3a responses (24.07 and 23.20 µV), and the statistical analysis indicated them to be outliers, i.e., their responses being at abnormal distance from the other ones (2.51–17.63 µV). Because we hypothesized that the children with RAOM would have enhanced P3a responses, we decided to exclude these two children from the further analysis to avoid the bias of these extreme values on the results of the RAOM group. Furthermore, two children did not arrive to the measurement at the appointed time. In the control group, two children had to be excluded from the analysis because of a large amount of alpha activity in their EEG leading to low signal-to-noise ratio. One control children was excluded because of acute OM diagnosed at the time of measurement. The total number of children in this study was 18 in the RAOM group and 19 in the control group after these exclusions. 
There were no significant differences between the final groups in gender (RAOM: 10 boys; controls 11 boys), age (RAOM: mean 24 months, min–max 22–26; controls: mean 24 months, min–max 22–26), or mother’s education (RAOM: 4 low, 13 middle or high; controls 2 low, 17 middle or high). The educational information of one mother in the RAOM group was not available.", "ERPs were recorded in a passive condition with the multi-feature paradigm (“Optimum-1”), which was shown to be a fast and eligible method for obtaining several ERPs reflecting different stages of auditory processing in adults [40–42], school-aged children [43, 44], and toddlers [45, 46]. In the multi-feature paradigm, the standard and the deviant sounds are presented in the same sequence so that every other stimulus is the standard and every other stimulus is one of the several deviants. In the deviants, only one sound feature (e.g., vowel or frequency) of the standard stimulus is changed at a time while the other features remain the same and strengthen the memory representation of the standard stimulus. To study attentional mechanisms, distracting novel sounds may also be embedded in the same sound stream [42, 45, 46].\nThe standards were Finnish semisynthetic consonant–vowel syllables/ke:/or/pi:/(duration 170 ms). Every other stimulus sequence included standard/ke:/and every other included standard/pi:/. The deviants (duration 170 ms) were five different deviations in these syllables (frequency F0 ± 8 Hz, intensity ± 7 dB, consonant from/ke:/to/pe:/and from/pi:/to/ki:/, vowel from/ke:/to/ki:/and from/pi:/to/pe:/, and vowel duration from syllable length of 170 ms to 120 ms) [42, 47]. The obligatory and MMN responses elicited by standards and deviants were reported earlier [7]. In addition, there were totally differing novel sounds (duration 200 ms, including a fall and a rise time of 10 ms), which were non-synthetic, environmental human (e.g. coughs and laughs) or non-human (e.g. door slamming and telephone ringing) sounds [42]. In the stimulus sequence, every other stimulus was a standard (probability 50 %) and every other was one of the deviants (probability 8.3 % for each) or a novel sound (probability 8.3 %). The presentation of stimuli was pseudo-randomized so that all five deviants and one novel stimulus appeared once among 12 successive stimuli, and the same deviant or novel was never repeated after the standard stimulus following it. The stimulus onset asynchrony was 670 ms. The stimuli were in the sequences lasting for about 6 min., each starting with 10 standards, and including 540 stimuli from which 275 were standards and 44 were novels sounds, the rest being deviant syllables (44 of each deviant type). Three to four stimulus sequences were presented to each participant.\nStimuli were presented in an electrically shielded and sound-attenuated room (reverberation time .3 s, background noise level 43 dB) with the sound pressure level of 75 dB via two loudspeakers (Genelec® 6010A, Genelec Ltd., Iisalmi, Finland). The loudspeakers were in front of the child at a distance of 1.3 m and in a 40-degree angle from the child’s head.", "The EEG (.16–1000 Hz, sampling rate 5000 Hz) was recorded with 32 channel electro-cap with Ag–AgCl electrodes placed according to the international 10/20 system (ActiCAP 002 and Brain Vision BrainAmp system and software; Brain Products GmbH, Gilching, Germany). The FCz electrode served as online reference and impedances were kept below 20 kΩ. 
Additional electrodes placed above the outer canthus of the right eye and below the outer canthus of the left eye with bipolar montage were used for electro-oculogram.\nToddlers sat in a chair or in their parent’s lap, watching voiceless cartoons or children’s books, or played with silent toys. The parents were instructed to be as quiet as possible. The recording was camera monitored from the next room and an experienced EEG technician monitored the quality of the EEG signal during recording. During the same recording session, the children participated in an EEG recording with three to four stimulus sequences with background noise [48]. The total examination time for each participant was about two and half hours, from which the EEG registration took about 45 min. There were breaks with refreshments between the stimulus sequences.", "Brain Vision Analyzer 2.0 (BrainProducts, GmpH) was used for offline analysis. Data were down sampled to 250 Hz and re-referenced to the average of the mastoid electrodes. Band pass filtering of .5–45 Hz, 24 dB/oct was applied to avoid aliasing and signals not originated from the brain [49]. After visual inspection, channels Fp1, Fp2, PO9, PO10, O1, Oz, and O2 were disabled from further analyzis because of artefacts. Ocular correction was done with an independent component analysis. Extracerebral artefacts with voltage exceeding ±150 μV at any electrode were removed and data were filtered with band pass of 1–20 Hz, 48 dB/oct. ERPs for standard and novel stimuli were averaged from baseline corrected EEG epochs of –100 ms prestimulus to 670 ms after stimulus onset. The first 10 standard stimuli in each recorded sequence and the standard stimuli right after the novel stimuli were excluded from the analysis. Two-tailed t-test indicated no significant group differences in the mean number of averaged epochs for standards or novels. The mean number of epochs for standard and novel stimuli in the RAOM group was 675 (min–max 373–856) and 133 (min–max 75–170), respectively, and in the control group 719 (min–max 517–856) and 143 (min–max 99–171), respectively.\nTo identify the P3a and LN, ERPs for standards were subtracted from those for novels. The grand average difference waves showed the biphasic P3a elicited by novel stimuli. Hence, eP3a and lP3a were separately analyzed. The channel selection for the peak detection was done after visual inspection, which showed the most prominent eP3a at the Cz electrode and lP3a and LN at the Fz electrode. The peak detection was done individually for each child within time windows of 180–300 ms for eP3a, 300–440 ms for lP3a, and 420–600 ms for LN. The peak latencies were determined from the most positive (eP3a and lP3a) or the most negative (LN) peak within those windows, and the mean peak amplitudes were calculated from ±20 ms time window around the peak latencies.\nThe statistical analyses were done for the F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4 electrodes. The existence of each ERP was determined by comparing its amplitude to zero with a two-tailed t-test. The amplitude differences between the groups were examined with repeated measures analysis of variance (ANOVA) and the Fisher’s least significant difference (LSD) post hoc test. In ANOVA, between-subject factor was group (RAOM vs. control) and within-subject factors were anterior-posterior (AP; F3-Fz-F4 vs. C3-Cz-C4 vs. P3-Pz-P4) and right-left (RL; F3-C3-P3 vs. Fz-Cz-Pz vs. F4-C4-P4) electrode positions. The Huynh–Feldt correction was applied when appropriate. 
[ "The eP3a significantly differed from zero in the children with RAOM and in the controls (two-tailed t-test; p ≤ .001; Table 1, Fig. 1), with no group differences in amplitude, amplitude scalp distribution, or latency. In both groups, the eP3a amplitude was stronger at the frontal and central electrodes than at the parietal electrodes (F(2, 57) = 42.09, p < .001, ηp² = .53; LSD post hoc p < .001).\nTable 1. Mean amplitude and latency of the ERPs elicited by novel stimuli in both groups (standard deviations in parentheses; an asterisk marks a significant group difference):\neP3a (Cz): amplitude 6.66 (4.30) µV in RAOM vs. 6.97 (3.44) µV in controls; latency 244 (26) ms vs. 248 (25) ms.\nlP3a (Fz): amplitude 9.30 (3.69) µV vs. 9.67 (3.74) µV; latency 348 (36) ms vs. 341 (22) ms.\nLN (Fz): amplitude –2.82 (3.38) µV vs. –2.34 (3.04) µV; latency 599 (40) ms* vs. 526 (43) ms*.\nRAOM, recurrent acute otitis media.\nFig. 1. ERPs (eP3a, lP3a, and LN) elicited by novel stimuli in children with recurrent acute otitis media (RAOM) and their controls: (a) grand average standard and novel ERP waves, (b) grand average difference (novel minus standard) waves, and (c) scalp topographies.\nA significant lP3a was present in both groups (two-tailed t-test; p ≤ .001; Table 1, Fig. 1). A repeated measures ANOVA for the lP3a amplitude indicated a significant AP × group interaction (F(2, 59) = 3.94, p = .03, ηp² = .10). According to the LSD post hoc test, the children with RAOM showed a more even AP distribution than the control children, who had a clearly frontally maximal and posteriorly diminishing amplitude scalp distribution (mean amplitudes frontally 8.99 vs. 9.67 µV, centrally 7.12 vs. 6.04 µV, and parietally 3.64 vs. 2.17 µV in the RAOM and control groups, respectively). Furthermore, a significant AP × RL interaction with no group difference was found (F(4, 140) = 4.98, p < .001, ηp² = .13). The LSD post hoc test indicated an even RL amplitude distribution at the frontal electrodes, but stronger left-hemispheric activation compared with the vertex line and the right line at the central electrodes (p < .001), and also stronger left-line than right-line responses at the parietal electrodes (p = .03). There was no group difference in the lP3a latency.\nIn both groups, a significant LN was found (two-tailed t-test; RAOM: p = .001; controls: p = .004; Table 1, Fig. 1). A repeated measures ANOVA for the LN amplitude indicated a significant AP × RL interaction (F(4, 148) = 2.96, p = .02), owing to the weakest amplitude at F3 (LSD post hoc; p = .001–.03) and the strongest amplitude at Cz (LSD post hoc; p = .001–.002). There were no group differences in the amplitude or amplitude scalp distribution of the LN. However, a one-way ANOVA indicated a significant group difference in the LN latency (F(1, 37) = 32.76, p < .001, ηp² = .47): the LN peaked later in the children with RAOM than in the controls.",
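The effect sizes above are reported as partial eta squared (ηp²). As a reading aid, the sketch below shows the standard conversion from a reported F value and its degrees of freedom to ηp²; it is a generic formula rather than a re-analysis of these data, and values recomputed this way can deviate slightly from the published ones when sphericity corrections change the degrees of freedom.

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)


# Example: the LN latency group effect reported above, F(1, 37) = 32.76.
print(round(partial_eta_squared(32.76, 1, 37), 2))  # -> 0.47, matching the reported value
```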
"This study examined the effects of early childhood RAOM on the neural mechanisms of involuntary attention at the age of 2 years. For that purpose, the P3a and LN elicited by distracting novel sounds were measured in the linguistic multi-feature paradigm at a time when all the participants had healthy ears and when their sound encoding, as reflected by the obligatory ERPs, had been found to be intact in an earlier study [7]. Both the children with RAOM and the controls showed a clearly identifiable P3a with two phases (eP3a and lP3a) and an LN, the morphology of which was consistent with earlier studies in children [28–31, 45, 46]. However, the topography and timing of these responses differed between the two groups. These findings suggest different maturational trajectories in the two groups of children and indicate that the consequences of OM are not limited to the period of middle ear effusion but are long-lasting.\nThe amplitude, distribution, and latency of the eP3a did not differ between the groups. This suggests similar automatic detection of a novel stimulus and similar early stages of the orientation of attention [32] in the two groups. The eP3a was larger frontally and centrally than parietally in both groups, in line with earlier studies in typically developing school-aged children [28, 34] and in adults [33].\nIn contrast, a significant group difference was found in the lP3a, which reflects the actual attention switch [33]. The lP3a amplitude diminished less from the frontal to the central and parietal areas in the children with RAOM than in the controls, which may indicate immature control of attention switching in children with RAOM. A frontally prominent lP3a has been linked to the neural maturation of the frontal cortex and attention control [28, 34]. Likewise, an enhanced lP3a over the posterior scalp areas has earlier been found in easily distractible children with ADHD [29]. The current result supports the behavioral finding of distractibility in toddlers with OM [18]. Distractibility can lead to weak utilization of the auditory channel in learning [29, 50], since it limits the ability to ignore irrelevant auditory stimuli. At 2 years of age, this may affect emerging language by disrupting the child’s engagement with the social-communicative actions critical for language learning.\nThe LN latency was longer in the children with RAOM than in the controls, suggesting delayed reorienting back to the ongoing activity [30, 34]. This corresponds with previous results suggesting delayed reorienting in school-aged children in a behavioral test [16] and might indicate that children with RAOM have an abnormally low resistance to auditory distraction. This is supported by our previous results suggesting neural sensitivity to sound loudness changes in these same 2-year-old children with RAOM [7]. The result is also consistent with the elevated auditory sensitivity to sounds described in adolescents with a childhood OM history [51].\nOur results show that RAOM has long-term effects leading to abnormal attention control at the age of 2 years, when rapid developmental neural changes take place. Studies on attentional neural mechanisms in older children with early childhood RAOM would be pertinent, since they would disclose whether the observed neural changes are transient or persist at later stages of development.\nTwo children in the RAOM group were excluded from the group analysis because of their abnormally enhanced P3a responses. The exclusion was done to avoid biasing the RAOM group results with these statistically confirmed outliers. However, it should be noted that these extreme P3a responses might reflect a genuine effect of RAOM and indicate enhanced distractibility in these children; this should be studied further in the future.\nWhen interpreting the results, it should be taken into account that accurate hearing thresholds were not available at the time of the measurement. Accurate hearing thresholds can be reliably measured from the age of three onwards [52]. Because the participants in the current study were 22–26 months old, we decided to use TEOAE screening to exclude congenital hearing losses. However, there were six children in the RAOM group and four children in the control group who could not tolerate the TEOAE measurement at the time of the EEG. Because these children had passed the TEOAE screening in the postnatal period, we decided to include them in the study. However, there is a possibility that a child who has passed the TEOAE screening at birth may develop a hearing deficit later. Hearing levels were assumed to be normal in all participants, since the children with RAOM had had tympanostomy tubes inserted and, according to the parental reports, there were no concerns about hearing in the screenings at the family and health care clinics where Finnish children are followed up regularly. Furthermore, these same children showed age-typical cortical sound encoding with no group differences in our earlier study [7]. This suggests hearing levels within the normal range at the time of the EEG.", "To conclude, this study showed abnormal neural mechanisms of involuntary attention in 2-year-old children with RAOM. For the distracting novel sounds, the RAOM group showed atypical neural organization, signified by a more even lP3a scalp distribution along the anterior-posterior axis than in the controls, who had a more frontally weighted lP3a. This can be a sign of immature neural processing and enhanced distractibility. Furthermore, the children with RAOM showed delayed reorienting back to the ongoing activity, indicated by their prolonged LN latency. Since all the children had clinically healthy ears at the time of the study, the current results suggest that early childhood RAOM has long-term effects on the immature central nervous system. This further supports the suggestion that early childhood RAOM should be considered a risk factor for the developing central auditory nervous system." ]
[ null, "materials|methods", null, null, null, null, "results", "discussion", "conclusion" ]
[ "Otitis media", "Involuntary attention", "Orienting", "ERPs", "P3a", "Late negativity" ]
Background: About 30 % of children have recurrent middle ear infections (recurrent acute otitis media, RAOM) in their early childhood [1, 2]. Due to challenges in diagnosing and classifying middle ear status, otitis media (OM) is commonly used as a general term for various forms of middle ear fluid and inflammation. A general definition for RAOM has been three or more episodes of acute otitis media (AOM) per 6 months or four or more AOM episodes per year [3]. After an episode of AOM, middle ear fluid is present for few days to over 2 months [4]. Fluid in the middle ear causes about 20–30 dB conductive hearing loss [5], and, especially when asymmetric, it affects interaural temporal and level difference cues compromising binaural sound localization [6]. Fluctuating hearing sensations during the development of central auditory system has been connected to atypical auditory processing [7–11], which can lead to problems in language acquisition [12]. Therefore, it is necessary to get a better understanding of the consequences of early childhood RAOM on immature central nervous system. Behavioral studies in children with OM have shown problems in regulation of auditory attention [13–18]. Involuntary orientation to environmental events as well as selective maintenance of attention is essential for speech processing and language learning. Involuntary attention accounts for the detection and selection of potentially biologically meaningful information of events unrelated to the ongoing task [19]. For example, a screeching noise of a braking car causes an attention switch of a pedestrian who is talking on the phone, and leads to the distraction of the ongoing activity. After the evaluation of the irrelevant novel stimulus, the reorientation back to the recent activity takes place. Involuntary attention is a bottom-up (stimulus-driven) process [19] but during maturation the developing top-down mechanisms start to inhibit distractors which are not meaningful, in other words, children learn to separate relevant from irrelevant stimuli [20, 21]. An excessive tendency to orient to the irrelevant stimuli requiring a lot of attentional resources makes goal-directed behavior harder [22]. School-aged children with OM history were shown to have deficits of selective auditory attention in dichotic listening tasks [13–15]. They also showed increased reorientation time of attention during behavioral tasks [16]. Rated by their teachers, school-children with OM history were suggested to be less task-oriented [17] but not in all studies [23]. Studies in toddlers are scarce, probably due to the weak co-operation skills in children at this age. However, toddlers with chronic OM were shown to express reduced attention during book reading at the time of middle ear effusion and, according the questionnaire, their mothers rated them as easily distractible [18]. The neural mechanisms beyond these findings are still unknown. Event-related potentials (ERPs) are a feasible approach for studying non-invasively neural mechanisms of involuntary auditory attention without tasks requiring co-operation skills [24]. The auditory P3a is a large positive deflection elicited by unexpected, novel sounds which substantially differ from other sounds, for example slam of the door or human cough. The P3a reflects involuntary attention mechanisms and orientation of attention [22]. It peaks fronto-centrally at 200–300 ms after the onset of a distracting stimulus [22, 25, 26]. P3a was often found to be biphasic [22, 27]. 
Two phases, early and late, have been identified in children [28–30] already at the age of 2 years [31]. Early P3a (eP3a) was suggested to reflect the automatic detection of violation in the neural model of existing world and thus, to represent the orientation of attention [32]. It is maximal at vertex and diminishes posteriorly and laterally [33]. In contrast, late P3a (lP3a) was suggested to reflect actual attention switch and it is maximal frontally [33]. Morphology of these responses is quite similar in children and adults but the scalp topography of children’s P3a is more anterior than that of adults [30]. The eP3a may mature earlier than lP3a, which continues to enhance frontally during development [34]. Hence, processing of acoustic novelty in the childhood resembles that in the adulthood although some underlying neural networks still continue to develop. Atypical P3a responses have been connected to abnormal involuntary attention, for example, parietally enhanced lP3a was found in children with attention deficit hyperactivity disorder (ADHD) [29]. The ERP waveform also reflects reorienting of attention back to the primary task after recognizing and evaluating a distracting stimulus [35, 36]. In adults, P3a is followed by reorienting negativity (RON) [37]. A counterpart of RON in children was suggested to be the late negativity (LN, also called as negative component, Nc) [28–31]. The LN latency, peaking at around 400–700 ms after the onset of a novel stimulus, reflects reorienting time [21]. The LN has the maximal amplitude at fronto-central scalp areas [21]. Large LN reflects enhanced neural effort to reorienting [37] or more attention paid to the surprising event [30]. During maturation, the LN amplitude has been suggested to decrease [30, 34]. The aim of this study was to compare the involuntary attentional mechanisms in 2-year-old healthy children with RAOM history and their healthy age-matched controls by recording auditory ERPs. For that purpose, novel stimuli were embedded in the multi-feature paradigm with syllables to elicit eP3a, lP3a, and LN. It was hypothesized that children with RAOM would show atypically enhanced and/or short latency P3a reflecting enhanced distractibility for the intrusive novel sounds and to have larger amplitude and/or longer latency of LN indicating more neural effort to reorienting and/or longer re-orientation time of attention than their healthy peers. Studies of P3a and novelty-related LN in toddlers are scarce and to our knowledge, this is the first study measuring these responses in children with RAOM. Methods: Participants Twenty-four children with a middle ear infection history were recruited to the RAOM group (at least three AOM per 6 months or four AOM per 1 year) from the Ear, Nose and Throat clinic of Oulu University hospital. During 1 year in 2009–2010, all children aged 22–26 months fulfilling the criteria of this study with a tympanostomy tube insertion participated (for a more detailed AOM history see [7]). The EEG recording was done on average 33 days (range 20–56 d) after the tympanostomy tube insertion. Twenty-two age matched control children with 0–2 AOM were recruited with public advertisements. All families participated voluntarily to the study and an informed written consent was obtained from the parents of children. Families were paid 15€ for travelling costs. The study was in accordance of Declaration of Helsinki and approved by the Ethics Committee of Northern Osthrobotnia Hospital District (reference number 6/2009). 
Participants were from monolingual Finnish-speaking families. They were born full-term with normal birth weight and were developing typically in their sensory, cognitive, and motor skills according to parental questionnaires and the examinations at the family and health care clinics during the first 2 years of life. No family history of speech, language, or other developmental impairments or severe neuropsychiatric diseases was allowed. The standardized Finnish version of the Reynell Developmental Language Scales III, the Comprehension scale [38, 39], was applied to exclude developmental language disorders. At the time of the EEG recording, transient evoked otoacoustic emissions (TEOAEs; nonlinear click sequence 1.5–4.5 kHz, 73 dB SPL, pass/refer result; MADSEN AccuScreen® pro, GN Otometrics, Taastrup, Denmark) were checked. Four children with RAOM and six control children did not co-operate in the TEOAE measurement, but all the children had passed a TEOAE screening in the postnatal period at Oulu University Hospital. Right before the EEG recording, all children were assessed with pneumatic otoscopy and, if needed, with tympanometry and/or otomicroscopy by an otolaryngologist to ensure that they had clinically healthy ears at the time of the measurement. In the RAOM group, one child was excluded because of a family history of dyslexia and one because the results of the Reynell III did not meet the criteria for normal speech comprehension, and an additional examination by a speech-language pathologist showed signs of a severe language disorder. Two children with RAOM had atypically enhanced P3a responses (24.07 and 23.20 µV), and the statistical analysis indicated them to be outliers, i.e., their responses were at an abnormal distance from the others (2.51–17.63 µV). Because we hypothesized that the children with RAOM would have enhanced P3a responses, we decided to exclude these two children from further analysis to avoid biasing the results of the RAOM group with these extreme values. Furthermore, two children did not arrive for the measurement at the appointed time. In the control group, two children had to be excluded from the analysis because of a large amount of alpha activity in their EEG, leading to a low signal-to-noise ratio. One control child was excluded because of acute OM diagnosed at the time of the measurement. The total number of children in this study was 18 in the RAOM group and 19 in the control group after these exclusions. There were no significant differences between the final groups in gender (RAOM: 10 boys; controls: 11 boys), age (RAOM: mean 24 months, min–max 22–26; controls: mean 24 months, min–max 22–26), or mother's education (RAOM: 4 low, 13 middle or high; controls: 2 low, 17 middle or high). The educational information of one mother in the RAOM group was not available.
Stimuli and experimental design: ERPs were recorded in a passive condition with the multi-feature paradigm (“Optimum-1”), which has been shown to be a fast and suitable method for obtaining several ERPs reflecting different stages of auditory processing in adults [40–42], school-aged children [43, 44], and toddlers [45, 46].
In the multi-feature paradigm, the standard and the deviant sounds are presented in the same sequence so that every other stimulus is the standard and every other stimulus is one of the several deviants. In the deviants, only one sound feature (e.g., vowel or frequency) of the standard stimulus is changed at a time, while the other features remain the same and strengthen the memory representation of the standard stimulus. To study attentional mechanisms, distracting novel sounds may also be embedded in the same sound stream [42, 45, 46]. The standards were Finnish semisynthetic consonant–vowel syllables /ke:/ or /pi:/ (duration 170 ms). Every other stimulus sequence included the standard /ke:/ and every other included the standard /pi:/. The deviants (duration 170 ms) were five different deviations of these syllables (frequency F0 ± 8 Hz, intensity ± 7 dB, consonant from /ke:/ to /pe:/ and from /pi:/ to /ki:/, vowel from /ke:/ to /ki:/ and from /pi:/ to /pe:/, and vowel duration from a syllable length of 170 ms to 120 ms) [42, 47]. The obligatory and MMN responses elicited by the standards and deviants were reported earlier [7]. In addition, there were completely different novel sounds (duration 200 ms, including a fall and a rise time of 10 ms), which were non-synthetic, environmental human (e.g., coughs and laughs) or non-human (e.g., door slamming and telephone ringing) sounds [42]. In the stimulus sequence, every other stimulus was a standard (probability 50 %) and every other was one of the deviants (probability 8.3 % for each) or a novel sound (probability 8.3 %). The presentation of stimuli was pseudo-randomized so that all five deviants and one novel stimulus appeared once among 12 successive stimuli, and the same deviant or novel was never repeated after the standard stimulus following it. The stimulus onset asynchrony was 670 ms. The stimuli were presented in sequences lasting about 6 min, each starting with 10 standards and including 540 stimuli, of which 275 were standards and 44 were novel sounds, the rest being deviant syllables (44 of each deviant type). Three to four stimulus sequences were presented to each participant. Stimuli were presented in an electrically shielded and sound-attenuated room (reverberation time .3 s, background noise level 43 dB) at a sound pressure level of 75 dB via two loudspeakers (Genelec® 6010A, Genelec Ltd., Iisalmi, Finland). The loudspeakers were in front of the child at a distance of 1.3 m and at a 40-degree angle from the child's head.
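As a concrete illustration of the sequence constraints described above, the following Python sketch builds an Optimum-1-style trial list under the stated rules (alternating standards, the five deviant types plus a novel each occurring once per block of 12 stimuli, and no immediate repetition of the same change type across the intervening standard). This is only a hedged illustration, not the generator used in the study; the type labels and block count are placeholders chosen to approximate the reported sequence composition.

```python
import random

# The sixth "change type" is the novel sound; the other five are the deviant features.
CHANGE_TYPES = ["frequency", "intensity", "consonant", "vowel", "vowel_duration", "novel"]

def make_sequence(n_blocks=44, seed=0):
    """Build a pseudo-randomized multi-feature sequence.

    Every other stimulus is the standard; the six change types each occur once
    per block of 12 stimuli, and the same change type is never repeated after
    the standard stimulus that follows it (including across block boundaries).
    """
    rng = random.Random(seed)
    sequence = ["standard"] * 10          # each sequence starts with 10 standards
    last_change = None
    for _ in range(n_blocks):
        block = CHANGE_TYPES[:]
        rng.shuffle(block)
        while block[0] == last_change:    # avoid repeating a type across block boundaries
            rng.shuffle(block)
        for change in block:
            sequence.append("standard")   # a standard precedes every change stimulus
            sequence.append(change)
        last_change = block[-1]
    return sequence

seq = make_sequence()
# With 44 blocks this yields 538 stimuli (274 standards, 44 of each change type),
# approximating the ~540-stimulus sequences described above: standards ~50 %,
# each deviant type and the novel ~8.3 %.
print(len(seq), seq.count("standard") / len(seq), seq.count("novel") / len(seq))
```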
EEG recording: The EEG (.16–1000 Hz, sampling rate 5000 Hz) was recorded with a 32-channel electrode cap with Ag–AgCl electrodes placed according to the international 10/20 system (ActiCAP 002 and Brain Vision BrainAmp system and software; Brain Products GmbH, Gilching, Germany). The FCz electrode served as the online reference, and impedances were kept below 20 kΩ. Additional electrodes, placed above the outer canthus of the right eye and below the outer canthus of the left eye in a bipolar montage, were used for the electro-oculogram. Toddlers sat in a chair or in their parent's lap, watching voiceless cartoons or children's books, or played with silent toys. The parents were instructed to be as quiet as possible. The recording was camera-monitored from the next room, and an experienced EEG technician monitored the quality of the EEG signal during recording. During the same recording session, the children participated in an EEG recording with three to four stimulus sequences with background noise [48]. The total examination time for each participant was about two and a half hours, of which the EEG registration took about 45 min. There were breaks with refreshments between the stimulus sequences.
Analysis: Brain Vision Analyzer 2.0 (Brain Products GmbH) was used for offline analysis. Data were downsampled to 250 Hz and re-referenced to the average of the mastoid electrodes. Band-pass filtering of .5–45 Hz (24 dB/oct) was applied to avoid aliasing and signals not originating from the brain [49]. After visual inspection, the channels Fp1, Fp2, PO9, PO10, O1, Oz, and O2 were excluded from further analysis because of artefacts. Ocular correction was done with independent component analysis. Extracerebral artefacts with voltage exceeding ±150 μV at any electrode were removed, and the data were filtered with a band-pass of 1–20 Hz (48 dB/oct). ERPs for standard and novel stimuli were averaged from baseline-corrected EEG epochs of –100 ms prestimulus to 670 ms after stimulus onset. The first 10 standard stimuli in each recorded sequence and the standard stimuli immediately following the novel stimuli were excluded from the analysis. A two-tailed t-test indicated no significant group differences in the mean number of averaged epochs for standards or novels. The mean number of epochs for standard and novel stimuli in the RAOM group was 675 (min–max 373–856) and 133 (min–max 75–170), respectively, and in the control group 719 (min–max 517–856) and 143 (min–max 99–171), respectively. To identify the P3a and LN, the ERPs for standards were subtracted from those for novels. The grand average difference waves showed the biphasic P3a elicited by novel stimuli; hence, the eP3a and lP3a were analyzed separately. The channel selection for peak detection was done after visual inspection, which showed the most prominent eP3a at the Cz electrode and the most prominent lP3a and LN at the Fz electrode. Peak detection was done individually for each child within time windows of 180–300 ms for the eP3a, 300–440 ms for the lP3a, and 420–600 ms for the LN. The peak latencies were determined from the most positive (eP3a and lP3a) or the most negative (LN) peak within those windows, and the mean peak amplitudes were calculated from a ±20 ms time window around the peak latencies. The statistical analyses were done for the F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4 electrodes. The existence of each ERP was determined by comparing its amplitude to zero with a two-tailed t-test. Amplitude differences between the groups were examined with a repeated measures analysis of variance (ANOVA) and Fisher's least significant difference (LSD) post hoc test. In the ANOVA, the between-subject factor was group (RAOM vs. control) and the within-subject factors were the anterior–posterior (AP; F3-Fz-F4 vs. C3-Cz-C4 vs. P3-Pz-P4) and right–left (RL; F3-C3-P3 vs. Fz-Cz-Pz vs. F4-C4-P4) electrode positions. The Huynh–Feldt correction was applied when appropriate. A one-way ANOVA was used to study latency differences between the groups. For effect-size estimation, the partial eta squared (ηp2) was calculated.
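The pipeline above was implemented in Brain Vision Analyzer; purely as an illustration of the same steps, a rough open-source equivalent is sketched below in MNE-Python. This is a minimal sketch under several assumptions: the file name, mastoid channel labels (TP9/TP10), and trigger codes are placeholders rather than details from the study, ICA component selection is not shown, MNE's epoch rejection uses a peak-to-peak criterion rather than an absolute ±150 µV threshold, and the exclusion of the first 10 standards and of post-novel standards is omitted.

```python
import numpy as np
import mne

# --- Preprocessing (roughly mirroring the steps described above) ---
raw = mne.io.read_raw_brainvision("toddler_01.vhdr", preload=True)   # hypothetical file
raw.resample(250)                                   # downsample to 250 Hz
raw.filter(l_freq=0.5, h_freq=45.0)                 # .5–45 Hz band-pass
raw.set_eeg_reference(["TP9", "TP10"])              # averaged mastoids (assumed channel labels)
raw.drop_channels(["Fp1", "Fp2", "PO9", "PO10", "O1", "Oz", "O2"])   # artefact-laden channels

ica = mne.preprocessing.ICA(n_components=15, random_state=97)
ica.fit(raw)                                        # ocular components would be selected and excluded here
ica.apply(raw)

raw.filter(l_freq=1.0, h_freq=20.0)                 # final 1–20 Hz band-pass

events, _ = mne.events_from_annotations(raw)
event_id = {"standard": 1, "novel": 9}              # placeholder trigger codes
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.67,
                    baseline=(None, 0),
                    reject=dict(eeg=150e-6),        # artefact rejection (peak-to-peak in MNE)
                    preload=True)

# Novel-minus-standard difference wave used to identify eP3a, lP3a and LN
evoked_nov = epochs["novel"].average()
evoked_std = epochs["standard"].average()
diff = mne.combine_evoked([evoked_nov, evoked_std], weights=[1, -1])

# --- Peak latency and mean amplitude (±20 ms around the peak) ---
def peak_measures(evoked, channel, tmin, tmax, polarity="pos"):
    """Return (peak latency in ms, mean amplitude in µV over ±20 ms around the peak)."""
    ch = evoked.ch_names.index(channel)
    times, data = evoked.times, evoked.data[ch]
    win = (times >= tmin) & (times <= tmax)
    idx = np.argmax(data[win]) if polarity == "pos" else np.argmin(data[win])
    peak_lat = times[win][idx]
    amp_win = (times >= peak_lat - 0.02) & (times <= peak_lat + 0.02)
    return peak_lat * 1e3, data[amp_win].mean() * 1e6

ep3a_lat_ms, ep3a_amp_uv = peak_measures(diff, "Cz", 0.18, 0.30, "pos")   # eP3a window
ln_lat_ms, ln_amp_uv = peak_measures(diff, "Fz", 0.42, 0.60, "neg")       # LN window
```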
Results: The eP3a significantly differed from zero in the children with RAOM and in the controls (two-tailed t-test; p ≤ .001; Table 1, Fig. 1) with no group differences in the amplitude, amplitude scalp distribution, or latency.
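For the effect sizes reported below, partial eta squared relates to the F statistic and its degrees of freedom as ηp2 = (F·df1)/(F·df1 + df2). As a quick, hedged check (assuming uncorrected degrees of freedom), the LN-latency group effect reported below reproduces the published value:

```python
def partial_eta_squared(F, df1, df2):
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return (F * df1) / (F * df1 + df2)

# LN latency group effect reported below: F(1, 37) = 32.76, eta_p^2 = .47
print(round(partial_eta_squared(32.76, 1, 37), 2))   # 0.47
```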
In both groups, the eP3a amplitude was stronger at the frontal and central electrodes than at the parietal electrodes (F(2, 57) = 42.09, p < .001, ηp2 = .53; LSD post hoc p < .001).

Table 1. Mean amplitude and latency of ERPs elicited by novel stimuli in children in both groups. Standard deviations are in parentheses. * Significant group difference. RAOM = recurrent acute otitis media.

ERP    Electrode   Amplitude µV, RAOM   Amplitude µV, Control   Latency ms, RAOM   Latency ms, Control
eP3a   Cz          6.66 (4.30)          6.97 (3.44)             244 (26)           248 (25)
lP3a   Fz          9.30 (3.69)          9.67 (3.74)             348 (36)           341 (22)
LN     Fz          –2.82 (3.38)         –2.34 (3.04)            599 (40)*          526 (43)*

Fig. 1. ERPs (eP3a, lP3a, and LN) elicited by novel stimuli in children with recurrent acute otitis media (RAOM) and their controls: (a) grand average standard and novel ERP waves, (b) grand average difference (novel minus standard ERP) waves, and (c) scalp topographies.

There was a significant lP3a in both groups (two-tailed t-test; p ≤ .001; Table 1, Fig. 1). A repeated measures ANOVA for the lP3a amplitude indicated a significant AP × group interaction (F(2, 59) = 3.94, p = .03, ηp2 = .10). According to the LSD post hoc test, the children with RAOM showed a more even AP distribution than the control children, who had a clearly frontally maximal and posteriorly diminishing amplitude scalp distribution (mean amplitudes frontally 8.99 vs. 9.67 µV, centrally 7.12 vs. 6.04 µV, and parietally 3.64 vs. 2.17 µV in the RAOM and control groups, respectively). Furthermore, a significant AP × RL interaction with no group difference was found (F(4, 140) = 4.98, p < .001, ηp2 = .13). The LSD post hoc test indicated an even RL amplitude distribution at the frontal electrodes, but stronger left-hemispheric activation compared with the vertex and the right line at the central electrodes (p < .001), and also stronger left- than right-line responses at the parietal electrodes (p = .03). There was no group difference for the lP3a latency. In both groups, a significant LN was found (two-tailed t-test; RAOM: p = .001; control: p = .004; Table 1, Fig. 1). A repeated measures ANOVA for the LN amplitude indicated a significant AP × RL interaction (F(4, 148) = 2.96, p = .02). This was due to the weakest amplitude at F3 (LSD post hoc; p = .001–.03) and the strongest amplitude at Cz (LSD post hoc; p = .001–.002). There were no group differences in the amplitude or amplitude scalp distribution of the LN. However, a one-way ANOVA indicated a significant group difference in the LN latency (F(1, 37) = 32.76, p < .001, ηp2 = .47), which peaked later in the children with RAOM than in the controls. Discussion: This study examined the effects of early childhood RAOM on the neural mechanisms of involuntary attention at the age of 2 years. For that purpose, the P3a and LN elicited by distracting novel sounds were measured in the linguistic multi-feature paradigm at a time when all the participants had healthy ears and their sound encoding, reflected by obligatory ERPs, had been found to be intact in an earlier study [7].
Both the children with RAOM and the controls showed a clearly identifiable P3a with two phases (eP3a and lP3a) and an LN, the morphology of which was consistent with earlier studies in children [28–31, 45, 46]. However, the topography and timing of these responses were distinct in the two groups. These findings suggest different maturational trajectories in the two groups of children and indicate that the consequences of OM are not limited to the period of middle ear effusion but are long-lasting. The amplitude, distribution, and latency of the eP3a did not differ between the groups. This suggests similar automatic detection of a novel stimulus and similar early stages of the orientation of attention [32] in the two groups. The eP3a was larger frontally and centrally than parietally in both groups, in line with earlier studies in typically developing school-aged children [28, 34] and in adults [33]. In contrast, a significant group difference was found in the lP3a, which reflects the actual attention switch [33]. The lP3a amplitude diminished less from frontal to central and parietal areas in the children with RAOM than in the controls, which may indicate immature control of the attention switch in children with RAOM. A frontally prominent lP3a has been linked to the neural maturation of the frontal cortex and attention control [28, 34]. Likewise, an enhanced lP3a at posterior scalp areas has earlier been found in easily distractible children with ADHD [29]. The current result supports the behavioral finding of distractibility in toddlers with OM [18]. Distractibility can lead to weak utilization of the auditory channel in learning [29, 50], since it limits the ability to ignore irrelevant auditory stimuli. At 2 years of age, this may affect emerging language development by disrupting the child's engagement with the social-communicative actions critical for language learning. The LN latency was longer in the children with RAOM than in the controls, suggesting delayed reorienting back to the ongoing activity [30, 34]. This corresponds with previous results suggesting delayed reorienting in schoolchildren in a behavioral test [16] and might indicate that children with RAOM have an abnormally low resistance to auditory distraction. This is supported by our previous results suggesting neural sensitivity to sound loudness changes in these same 2-year-old children with RAOM [7]. The result is also consistent with the elevated auditory sensitivity to sounds described in adolescents with childhood OM [51]. Our results show that RAOM has long-term effects leading to abnormal attention control at the age of 2 years, when rapid developmental neural changes take place. Studies of attentional neural mechanisms in older children with early childhood RAOM would be pertinent, since they would disclose whether the observed neural changes are transient or persist at later stages of development. Two children in the RAOM group were excluded from the group analysis because of their abnormally enhanced P3a responses. The exclusion was done to avoid biasing the results of the RAOM group with these statistically confirmed outliers. However, it should be noted that these extreme P3a responses might reflect a genuine effect of RAOM and indicate enhanced distractibility in these children; this should be studied further in the future.
When interpreting the results, it should be taken into account that accurate hearing thresholds were not available at the time of the measurement. Accurate hearing thresholds can be reliably measured from the age of three onwards [52]. Because the participants in the current study were 22–26 months old, we decided to use TEOAE screening to exclude congenital hearing losses. However, there were six children in the RAOM group and four children in the control group who could not tolerate the TEOAE measurement at the time of the EEG. Because these children had passed the TEOAE screening in the postnatal period, we decided to include them in the study. However, there is a possibility that a child who has passed the TEOAE screening at birth may develop a hearing deficit later. Hearing levels were assumed to be within the normal range in all participants because the children with RAOM had had tympanostomy tubes inserted and, according to the parental reports, there were no concerns about hearing in the screenings at the family and health care clinics where Finnish children are followed up regularly. Furthermore, these same children showed age-typical cortical sound encoding with no group differences in our earlier study [7]. This points to hearing levels within the normal range at the time of the EEG. Conclusions: To conclude, this study showed abnormal neural mechanisms of involuntary attention in 2-year-old children with RAOM. For the distracting novel sounds, the RAOM group showed atypical neural organization, signified by a more even lP3a scalp distribution along the anterior–posterior axis than in the controls, who had a more frontally oriented lP3a. This can be a sign of immature neural processing and enhanced distractibility. Furthermore, the children with RAOM showed delayed reorienting back to the ongoing activity, indicated by their prolonged LN latency. Since all the children had clinically healthy ears at the time of the study, the current results suggest that early childhood RAOM has long-term effects on the immature central nervous system. This further supports the suggestion that early childhood RAOM should be regarded as a risk factor for the developing auditory central nervous system.
Background: A large group of young children are exposed to repetitive middle ear infections but the effects of the fluctuating hearing sensations on immature central auditory system are not fully understood. The present study investigated the consequences of early childhood recurrent acute otitis media (RAOM) on involuntary auditory attention switching. Methods: By utilizing auditory event-related potentials, neural mechanisms of involuntary attention were studied in 22-26 month-old children (N = 18) who had had an early childhood RAOM and healthy controls (N = 19). The earlier and later phase of the P3a (eP3a and lP3a) and the late negativity (LN) were measured for embedded novel sounds in the passive multi-feature paradigm with repeating standard and deviant syllable stimuli. The children with RAOM had tympanostomy tubes inserted and all the children in both study groups had to have clinically healthy ears at the time of the measurement assessed by an otolaryngologist. Results: The results showed that lP3a amplitude diminished less from frontal to central and parietal areas in the children with RAOM than the controls. This might reflect an immature control of involuntary attention switch. Furthermore, the LN latency was longer in children with RAOM than in the controls, which suggests delayed reorientation of attention in RAOM. Conclusions: The lP3a and LN responses are affected in toddlers who have had a RAOM even when their ears are healthy. This suggests detrimental long-term effects of RAOM on the neural mechanisms of involuntary attention.
Background: About 30 % of children have recurrent middle ear infections (recurrent acute otitis media, RAOM) in their early childhood [1, 2]. Due to challenges in diagnosing and classifying middle ear status, otitis media (OM) is commonly used as a general term for various forms of middle ear fluid and inflammation. A general definition for RAOM has been three or more episodes of acute otitis media (AOM) per 6 months or four or more AOM episodes per year [3]. After an episode of AOM, middle ear fluid is present for few days to over 2 months [4]. Fluid in the middle ear causes about 20–30 dB conductive hearing loss [5], and, especially when asymmetric, it affects interaural temporal and level difference cues compromising binaural sound localization [6]. Fluctuating hearing sensations during the development of central auditory system has been connected to atypical auditory processing [7–11], which can lead to problems in language acquisition [12]. Therefore, it is necessary to get a better understanding of the consequences of early childhood RAOM on immature central nervous system. Behavioral studies in children with OM have shown problems in regulation of auditory attention [13–18]. Involuntary orientation to environmental events as well as selective maintenance of attention is essential for speech processing and language learning. Involuntary attention accounts for the detection and selection of potentially biologically meaningful information of events unrelated to the ongoing task [19]. For example, a screeching noise of a braking car causes an attention switch of a pedestrian who is talking on the phone, and leads to the distraction of the ongoing activity. After the evaluation of the irrelevant novel stimulus, the reorientation back to the recent activity takes place. Involuntary attention is a bottom-up (stimulus-driven) process [19] but during maturation the developing top-down mechanisms start to inhibit distractors which are not meaningful, in other words, children learn to separate relevant from irrelevant stimuli [20, 21]. An excessive tendency to orient to the irrelevant stimuli requiring a lot of attentional resources makes goal-directed behavior harder [22]. School-aged children with OM history were shown to have deficits of selective auditory attention in dichotic listening tasks [13–15]. They also showed increased reorientation time of attention during behavioral tasks [16]. Rated by their teachers, school-children with OM history were suggested to be less task-oriented [17] but not in all studies [23]. Studies in toddlers are scarce, probably due to the weak co-operation skills in children at this age. However, toddlers with chronic OM were shown to express reduced attention during book reading at the time of middle ear effusion and, according the questionnaire, their mothers rated them as easily distractible [18]. The neural mechanisms beyond these findings are still unknown. Event-related potentials (ERPs) are a feasible approach for studying non-invasively neural mechanisms of involuntary auditory attention without tasks requiring co-operation skills [24]. The auditory P3a is a large positive deflection elicited by unexpected, novel sounds which substantially differ from other sounds, for example slam of the door or human cough. The P3a reflects involuntary attention mechanisms and orientation of attention [22]. It peaks fronto-centrally at 200–300 ms after the onset of a distracting stimulus [22, 25, 26]. P3a was often found to be biphasic [22, 27]. 
Two phases, early and late, have been identified in children [28–30] already at the age of 2 years [31]. Early P3a (eP3a) was suggested to reflect the automatic detection of violation in the neural model of existing world and thus, to represent the orientation of attention [32]. It is maximal at vertex and diminishes posteriorly and laterally [33]. In contrast, late P3a (lP3a) was suggested to reflect actual attention switch and it is maximal frontally [33]. Morphology of these responses is quite similar in children and adults but the scalp topography of children’s P3a is more anterior than that of adults [30]. The eP3a may mature earlier than lP3a, which continues to enhance frontally during development [34]. Hence, processing of acoustic novelty in the childhood resembles that in the adulthood although some underlying neural networks still continue to develop. Atypical P3a responses have been connected to abnormal involuntary attention, for example, parietally enhanced lP3a was found in children with attention deficit hyperactivity disorder (ADHD) [29]. The ERP waveform also reflects reorienting of attention back to the primary task after recognizing and evaluating a distracting stimulus [35, 36]. In adults, P3a is followed by reorienting negativity (RON) [37]. A counterpart of RON in children was suggested to be the late negativity (LN, also called as negative component, Nc) [28–31]. The LN latency, peaking at around 400–700 ms after the onset of a novel stimulus, reflects reorienting time [21]. The LN has the maximal amplitude at fronto-central scalp areas [21]. Large LN reflects enhanced neural effort to reorienting [37] or more attention paid to the surprising event [30]. During maturation, the LN amplitude has been suggested to decrease [30, 34]. The aim of this study was to compare the involuntary attentional mechanisms in 2-year-old healthy children with RAOM history and their healthy age-matched controls by recording auditory ERPs. For that purpose, novel stimuli were embedded in the multi-feature paradigm with syllables to elicit eP3a, lP3a, and LN. It was hypothesized that children with RAOM would show atypically enhanced and/or short latency P3a reflecting enhanced distractibility for the intrusive novel sounds and to have larger amplitude and/or longer latency of LN indicating more neural effort to reorienting and/or longer re-orientation time of attention than their healthy peers. Studies of P3a and novelty-related LN in toddlers are scarce and to our knowledge, this is the first study measuring these responses in children with RAOM. Conclusions: To conclude, this study showed abnormal neural mechanisms of involuntary attention in 2-year-old children with RAOM. For the distracting novel sounds, the RAOM group showed atypical neural organization signified by a more even lP3a scalp distribution in anterior-posterior axis than in the controls, who had a more frontally oriented lP3a. This can be a sign of immature neural processing and enhanced distractibility. Furthermore, the children with RAOM showed delayed re-orienting back to the ongoing activity indicated by their prolonged LN latency. Since all the children had clinically healthy ears at the time of the study, the current results suggest that early childhood RAOM has long-term effects on the immature central nervous system. This further supports the suggestion that early childhood RAOM should be taken as a risk factor for the developing auditory central nervous system.
Background: A large group of young children are exposed to repetitive middle ear infections but the effects of the fluctuating hearing sensations on immature central auditory system are not fully understood. The present study investigated the consequences of early childhood recurrent acute otitis media (RAOM) on involuntary auditory attention switching. Methods: By utilizing auditory event-related potentials, neural mechanisms of involuntary attention were studied in 22-26 month-old children (N = 18) who had had an early childhood RAOM and healthy controls (N = 19). The earlier and later phase of the P3a (eP3a and lP3a) and the late negativity (LN) were measured for embedded novel sounds in the passive multi-feature paradigm with repeating standard and deviant syllable stimuli. The children with RAOM had tympanostomy tubes inserted and all the children in both study groups had to have clinically healthy ears at the time of the measurement assessed by an otolaryngologist. Results: The results showed that lP3a amplitude diminished less from frontal to central and parietal areas in the children with RAOM than the controls. This might reflect an immature control of involuntary attention switch. Furthermore, the LN latency was longer in children with RAOM than in the controls, which suggests delayed reorientation of attention in RAOM. Conclusions: The lP3a and LN responses are affected in toddlers who have had a RAOM even when their ears are healthy. This suggests detrimental long-term effects of RAOM on the neural mechanisms of involuntary attention.
9,459
289
[ 1166, 716, 586, 221, 608 ]
9
[ "children", "raom", "group", "stimulus", "novel", "stimuli", "standard", "ms", "time", "eeg" ]
[ "otitis media om", "ear fluid inflammation", "middle ear fluid", "ear status otitis", "otitis media aom" ]
null
[CONTENT] Otitis media | Involuntary attention | Orienting | ERPs | P3a | Late negativity [SUMMARY]
null
[CONTENT] Otitis media | Involuntary attention | Orienting | ERPs | P3a | Late negativity [SUMMARY]
[CONTENT] Otitis media | Involuntary attention | Orienting | ERPs | P3a | Late negativity [SUMMARY]
[CONTENT] Otitis media | Involuntary attention | Orienting | ERPs | P3a | Late negativity [SUMMARY]
[CONTENT] Otitis media | Involuntary attention | Orienting | ERPs | P3a | Late negativity [SUMMARY]
[CONTENT] Attention | Auditory Pathways | Child, Preschool | Event-Related Potentials, P300 | Evoked Potentials, Auditory | Female | Humans | Infant | Male | Otitis Media | Recurrence [SUMMARY]
null
[CONTENT] Attention | Auditory Pathways | Child, Preschool | Event-Related Potentials, P300 | Evoked Potentials, Auditory | Female | Humans | Infant | Male | Otitis Media | Recurrence [SUMMARY]
[CONTENT] Attention | Auditory Pathways | Child, Preschool | Event-Related Potentials, P300 | Evoked Potentials, Auditory | Female | Humans | Infant | Male | Otitis Media | Recurrence [SUMMARY]
[CONTENT] Attention | Auditory Pathways | Child, Preschool | Event-Related Potentials, P300 | Evoked Potentials, Auditory | Female | Humans | Infant | Male | Otitis Media | Recurrence [SUMMARY]
[CONTENT] Attention | Auditory Pathways | Child, Preschool | Event-Related Potentials, P300 | Evoked Potentials, Auditory | Female | Humans | Infant | Male | Otitis Media | Recurrence [SUMMARY]
[CONTENT] otitis media om | ear fluid inflammation | middle ear fluid | ear status otitis | otitis media aom [SUMMARY]
null
[CONTENT] otitis media om | ear fluid inflammation | middle ear fluid | ear status otitis | otitis media aom [SUMMARY]
[CONTENT] otitis media om | ear fluid inflammation | middle ear fluid | ear status otitis | otitis media aom [SUMMARY]
[CONTENT] otitis media om | ear fluid inflammation | middle ear fluid | ear status otitis | otitis media aom [SUMMARY]
[CONTENT] otitis media om | ear fluid inflammation | middle ear fluid | ear status otitis | otitis media aom [SUMMARY]
[CONTENT] children | raom | group | stimulus | novel | stimuli | standard | ms | time | eeg [SUMMARY]
null
[CONTENT] children | raom | group | stimulus | novel | stimuli | standard | ms | time | eeg [SUMMARY]
[CONTENT] children | raom | group | stimulus | novel | stimuli | standard | ms | time | eeg [SUMMARY]
[CONTENT] children | raom | group | stimulus | novel | stimuli | standard | ms | time | eeg [SUMMARY]
[CONTENT] children | raom | group | stimulus | novel | stimuli | standard | ms | time | eeg [SUMMARY]
[CONTENT] attention | children | p3a | involuntary | suggested | neural | 30 | reorienting | ln | auditory [SUMMARY]
null
[CONTENT] 001 | amplitude | significant | group difference | raom | group | difference | table | elicited novel stimuli children | erp waves [SUMMARY]
[CONTENT] raom | neural | central nervous system | nervous system | nervous | central nervous | early | childhood raom | immature | childhood [SUMMARY]
[CONTENT] children | raom | group | standard | stimulus | ms | attention | novel | eeg | stimuli [SUMMARY]
[CONTENT] children | raom | group | standard | stimulus | ms | attention | novel | eeg | stimuli [SUMMARY]
[CONTENT] ||| RAOM [SUMMARY]
null
[CONTENT] RAOM ||| ||| LN | RAOM | RAOM [SUMMARY]
[CONTENT] LN | RAOM ||| RAOM [SUMMARY]
[CONTENT] ||| RAOM ||| 22-26 month-old | RAOM ||| P3a | LN ||| RAOM ||| ||| RAOM ||| ||| LN | RAOM | RAOM ||| LN | RAOM ||| RAOM [SUMMARY]
[CONTENT] ||| RAOM ||| 22-26 month-old | RAOM ||| P3a | LN ||| RAOM ||| ||| RAOM ||| ||| LN | RAOM | RAOM ||| LN | RAOM ||| RAOM [SUMMARY]
Neuroprotective effects of dexmedetomidine against isoflurane-induced neuronal injury via glutamate regulation in neonatal rats.
30613136
Considerable evidence supports the finding that the anesthetic isoflurane increases neuronal cell death in young rats. Recent studies have shown that dexmedetomidine can reduce isoflurane-induced neuronal injury, but the mechanism remains unclear. We investigated whether isoflurane causes neurotoxicity in the central nervous system by regulating the N-methyl-D-aspartate receptor (NMDAR) and excitatory amino acid transporter 1 (EAAT1) in young rats. Furthermore, we examined whether dexmedetomidine could decrease isoflurane-induced neurotoxicity.
BACKGROUND
Neonatal rats (postnatal day 7, n=144) were randomly divided into four groups of 36 animals each: control (saline injection without isoflurane); isoflurane (2% for 4 h); isoflurane + single dose of dexmedetomidine (75 µg/kg, 20 min before the start of 2% isoflurane for 4 h); and isoflurane + dual doses of dexmedetomidine (25 µg/kg, 20 min before and 2 h after start of isoflurane at 2% for 4 h). Six neonates from each group were euthanatized at 2 h, 12 h, 24 h, 3 days, 7 days and 28 days post-anesthesia. Hippocampi were collected and processed for protein extraction. Expression levels of the NMDAR subunits NR2A and NR2B, EAAT1 and caspase-3 were measured by western blot analysis.
METHODS
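As a bookkeeping illustration of the design summarized above (144 pups split into four groups of 36, with six animals per group sampled at each of six post-anesthesia time points), the following Python sketch randomizes the assignment. It is not the authors' randomization procedure; the group labels and fixed seed are assumptions made for the example.

```python
import random
from collections import defaultdict

GROUPS = ["control", "isoflurane", "iso+dex_single_75ug_kg", "iso+dex_dual_2x25ug_kg"]
TIMEPOINTS = ["2 h", "12 h", "24 h", "3 d", "7 d", "28 d"]
N_PER_GROUP = 36       # 144 pups / 4 groups
N_PER_TIMEPOINT = 6    # 36 pups / 6 sampling time points

random.seed(42)
pup_ids = list(range(1, 145))   # 144 pups
random.shuffle(pup_ids)

# schedule[group][timepoint] -> list of the six pup IDs euthanized at that point
schedule = defaultdict(dict)
for g_idx, group in enumerate(GROUPS):
    group_pups = pup_ids[g_idx * N_PER_GROUP:(g_idx + 1) * N_PER_GROUP]
    for t_idx, tp in enumerate(TIMEPOINTS):
        schedule[group][tp] = group_pups[t_idx * N_PER_TIMEPOINT:(t_idx + 1) * N_PER_TIMEPOINT]

# e.g. which six pups in the dual-dose group are sampled 3 days post-anesthesia
print(schedule["iso+dex_dual_2x25ug_kg"]["3 d"])
```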
Protein levels of NR2A, EAAT1 and caspase-3 were significantly increased in the hippocampus of the isoflurane group from 2 h to 3 days, while NR2B levels were decreased. However, the isoflurane-induced increase in NR2A, EAAT1 and caspase-3 and the decrease in NR2B were ameliorated in rats treated with single or dual doses of dexmedetomidine. Isoflurane-induced neuronal damage in neonatal rats is therefore due in part to the increase in NR2A and EAAT1 and the decrease in NR2B in the hippocampus.
RESULTS
Dexmedetomidine protects the brain against isoflurane-induced injury through the regulation of NR2A, NR2B and EAAT1. Whether given as a single dose or as dual doses, dexmedetomidine showed essentially the same trend of protection.
CONCLUSION
[ "Animals", "Dexmedetomidine", "Glutamic Acid", "Injections, Intraperitoneal", "Isoflurane", "Male", "Neurons", "Neuroprotective Agents", "Rats", "Rats, Sprague-Dawley" ]
6306062
Introduction
There are many newborns who need to receive anesthesia each year for diagnostic or surgical reasons.1 Generally, the most commonly used anesthetics, including isoflurane and ketamine, exert their effects by inhibiting the N-methyl-D-aspartate receptor (NMDAR) and/or potentiating the gamma-aminobutyric acid type A receptor.1 The NMDAR is a heterotetramer with universal brain distribution. It is composed of NR1 and NR2 (2A, 2B, 2C, and 2D) subunits. NR2A and NR2B are the most dominant subunits in the brain.2–5 NMDARs have critical roles in synaptic plasticity, long-term potentiation (LTP), and long-term depression (LTD). A persistent strengthening of synapses induces LTP to improve learning and memory,6–9 while a long-lasting decrease in synaptic strength causes LTD and results in learning and memory deficits.10 Meanwhile, LTD is needed for plasticity in the brain and prevents overexcitation of circuits. Studies have shown that prolonged or multiple exposures to NMDAR inhibitors such as isoflurane may affect cell proliferation, interrupt synaptic transmission, and induce apoptosis and neuronal injury in the central nervous system (CNS) during development in rodents.1,10–13 Liu et al11 found that a short exposure to isoflurane increased NR2B expression in adult mice and improved their cognitive functions. However, prolonged exposure to isoflurane decreased NR2B levels, increased the production of apoptotic proteins, and resulted in cognitive dysfunction. The frontal lobe and hippocampus demonstrated an age-dependent decrease in NMDAR and NR2B expression, along with delayed short-term memory, long-term memory impairment, and spatial cognitive dysfunction.7 Glutamate plays critical roles in synaptic plasticity and development by mediating fast excitatory transmission. The extracellular glutamate concentration is kept low enough to protect neurons from glutamate excitotoxicity.14 Excessive release of glutamate may lead to neuronal damage through receptor-mediated excitotoxicity, which plays an important role in many CNS diseases. Excitatory amino acid transporter 1 (EAAT1) is an important glutamate transporter in neurons and glial cells, which limits excessive increases in glutamate levels to reduce glutamate excitotoxicity.15 Dexmedetomidine is an agonist of α2-adrenoreceptors. It has been widely used in the clinic and shows potent sedative, analgesic, and anxiolytic effects. Dexmedetomidine produces fewer stress responses and maintains stable cardiovascular function. It has been shown that dexmedetomidine protects neuronal cells against isoflurane-induced hippocampal neuronal injury in neonatal mice.16,17 However, the mechanism underlying this protective effect has yet to be determined. In the present study, we investigated whether prolonged exposure to isoflurane generates neurotoxicity by changing the expression of NMDAR subunits and EAAT1. We further investigated whether dexmedetomidine protects against isoflurane-induced neuronal injury and whether glutamate neurotransmission could be influenced.
null
null
Results
Caspase-3 levels: Compared with the control group, the levels of caspase-3 increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups at 7 and 28 days after anesthesia. Compared with the isoflurane group, caspase-3 levels were significantly decreased in the isoflurane + dexmedetomidine single-dose group at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane + dexmedetomidine dual-dose group, caspase-3 levels were significantly decreased at 3 days (P<0.05) after anesthesia compared with the isoflurane group (Figure 1).
NMDAR NR2A levels: Compared with the control group, the levels of NMDAR 2A increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups at 7 and 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2A levels were significantly decreased in the isoflurane + dexmedetomidine single-dose group at 2 hours, 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, P<0.05, and P<0.01, respectively) after anesthesia. In the isoflurane + dexmedetomidine dual-dose group, NMDAR 2A levels were significantly decreased at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 2).
NMDAR NR2B levels: Compared with the control group, the levels of NMDAR 2B decreased significantly in the isoflurane group, but no significant difference was found between the control and isoflurane groups 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2B levels were significantly increased in the isoflurane + dexmedetomidine single-dose group. In the isoflurane + dexmedetomidine dual-dose group, NMDAR 2A levels were significantly decreased at 12 hours and 3 days (P<0.05 and P<0.05, respectively) after anesthesia (Figure 3).
EAAT1 levels: Compared with the control group, the levels of EAAT1 increased significantly at 2, 12, and 24 hours after anesthesia in the isoflurane group (P<0.05, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups at 3, 7, and 28 days after anesthesia. Compared with the isoflurane group, EAAT1 levels were significantly decreased in the isoflurane + dexmedetomidine single-dose group at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane + dexmedetomidine dual-dose group, NMDAR 2A levels were significantly decreased at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 4).
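The group comparisons reported above (isoflurane vs. control and dexmedetomidine-treated vs. isoflurane at each time point) follow the normality-gated test choice described in the statistical-analysis section: Shapiro–Wilk, then one-way ANOVA with Tukey's post hoc test or the Kruskal–Wallis H test. The sketch below shows one way such a rule could be coded with SciPy; it is a generic illustration with made-up densitometry values, not the SPSS workflow the authors used.

```python
import numpy as np
from scipy import stats

def compare_groups(samples, alpha=0.05):
    """Normality-gated comparison of several independent groups.

    samples : dict mapping group name -> 1-D array of normalized band densities.
    Returns (test name, omnibus p-value, pairwise Tukey p-values or None).
    """
    arrays = list(samples.values())
    normal = all(stats.shapiro(a).pvalue > alpha for a in arrays)
    if normal:
        omnibus = stats.f_oneway(*arrays)
        posthoc = stats.tukey_hsd(*arrays)   # requires SciPy >= 1.8
        return "one-way ANOVA", omnibus.pvalue, posthoc.pvalue
    omnibus = stats.kruskal(*arrays)
    return "Kruskal-Wallis H", omnibus.pvalue, None

# Toy data standing in for caspase-3/GAPDH ratios at one time point (n=6 per group)
rng = np.random.default_rng(1)
toy = {
    "control":        rng.normal(1.00, 0.10, 6),
    "isoflurane":     rng.normal(1.60, 0.15, 6),
    "iso+dex_single": rng.normal(1.25, 0.12, 6),
    "iso+dex_dual":   rng.normal(1.30, 0.12, 6),
}
test, p_omnibus, p_pairwise = compare_groups(toy)
print(test, f"omnibus p = {p_omnibus:.4f}")
```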
Conclusion
Isoflurane caused an increase in apoptotic cells in neonatal rats, which was related to increased expression of NR2A and EAAT1 and decreased expression of NR2B. Isoflurane-induced neurotoxicity during critical developmental periods may be controlled by dexmedetomidine.
[ "Animals and grouping", "Experimental model", "Tissue preparation", "Western blot assays", "Statistical analyses", "Caspase-3 levels", "NMDAR NR2A levels", "NMDAR NR2B levels", "EAAT1 levels", "Conclusion" ]
[ "Experimental animals were 144 male Sprague Dawley rats (specific pathogen free) at postnatal day 7 weighing 12–20 g obtained from the Experimental Animal Facility at Shengjing Hospital of China Medical University (Taichung, Taiwan). The Ohmeda 210 anesthesia machine and F-MCI sensor were purchased from Datex-Ohmeda (Datex-Ohmeda, Helsinki, Finland). Isoflurane was purchased from Abbott Laboratories (Chicago, IL, USA) and dexmedetomidine was purchased from Jiangsu Hengrui Medicine Co. Ltd. (Lianyungang, China; Lot 14101732).\nAll experimental procedures and protocols were approved by The Laboratory Animal Care Committee of China Medical University, Shenyang, China (2015PS223K) and were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals from the National Institutes of Health, USA. Rat pups were randomly divided into four groups (n=36 per group): control, isoflurane, isoflurane with a single dose of dexmedetomidine, and isoflurane with dual doses of dexmedetomidine. All pups initially stayed with their dams until postnatal day 21, when surviving rats were weaned and housed in a temperature- and light-controlled room (22°C; 12-hour light/dark cycle) with ad libitum access to food and water.", "For all isoflurane groups, rats were placed in a customized anesthetic chamber (30×20×20 cm3) with a gas inlet and outlet. The gas inlet was connected to a compressed oxygen tank and a standard anesthesia vaporizer to direct a constant 1 L/min flow of oxygen containing 2% isoflurane through the chamber. The isoflurane levels were continuously monitored through the gas outlet by a gas monitor. The infusion time of isoflurane was 4 hours for all isoflurane groups. For the group receiving a single dose of dexmedetomidine, the drug was administered intraperitoneally (ip) at 75 µg/kg 20 minutes prior to isoflurane infusion. For the group receiving dual doses of dexmedetomidine, the drug was administered twice at 25 µg/kg (ip) 20 minutes before and 2 hours after beginning isoflurane infusion. Sham animals received single saline injections (ip) and were exposed to 30% O2 for 4 hours. During anesthesia, the oxygen concentration was continuously monitored and maintained at 30%. All rats were allowed to breathe spontaneously. After anesthesia, the rats were closely monitored until they regained consciousness, and then were returned to their dams in their home cages.", "Rats were decapitated at 2 hours, 12 hours, 24 hours, 3 days, 7 days, and 28 days after the 4-hour anesthesia. Hippocampi were collected on ice and placed immediately into liquid nitrogen. The samples were removed from the liquid nitrogen 4 hours later and stored at −80°C until use.", "Lysis buffer (Beyotime, Shanghai, China) was added to hippocampal samples (10 µL per 1 mg tissue). Samples were sonicated and incubated at room temperature for 30 minutes. Lysates were centrifuged at 4°C, 12,000× g for 20 minutes, and the supernatants were collected. Bicinchoninic acid assay was used to measure total protein concentration. Equivalent quantities of protein (40 µg) were separated by 10% SDS-PAGE under reducing conditions and electrophoretically transferred to a polyvinylidene fluoride membrane. Membranes were blocked in 5% nonfat milk for 4 hours and washed three times. 
Membranes were then incubated overnight at 4°C with anti-GAPDH (1:5,000 dilution; Proteintech, Chicago, IL, USA), anti-NR2B rabbit monoclonal antibody (1:1,000 dilution; Abcam, Cambridge, UK), anti-EAAT1 rabbit monoclonal antibody (1:1,000 dilution; Cell Signaling Technology, Beverly, MA, USA) or anti-caspase-3 rabbit monoclonal antibody (1:1,000 dilution; Abcam). After three washes (10 minutes per wash), membranes were incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody at room temperature for 90 minutes. Membranes were then washed three times (10 minutes per wash) and labeled proteins were visualized by incubating with chemiluminescent substrates (Thermo Fisher Scientific, Waltham, MA, USA) for 5 minutes. X-ray films were developed in a C300 developing machine (Azure Biosystems, Dublin, CA, USA), and ImageJ was used to semi-quantify protein levels. Protein levels of NR2A, NR2B, EAAT1, and caspase-3 were normalized to their respective GAPDH levels.", "Data were expressed as X¯±s. All statistical analyses were performed using SPSS (v19.0; IBM Corporation, Armonk, NY, USA). All continuous variables were tested for assumption of normality using the Shapiro–Wilk test. If the assumption of normality was met, one-way ANOVA followed by Tukey’s post hoc multiple comparison tests was used. If the assumption of normality was unmet, the Kruskal–Wallis H test was used. Significance was accepted at P<0.05.", "Compared with the control group, the levels of caspase-3 increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia 7 and 28 days after anesthesia. Compared with the isoflurane group, caspase-3 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, caspase-3 levels were significantly decreased at 3 days (P<0.05) after anesthesia compared to isoflurane group (Figure 1).", "Compared with the control group, the levels of NMDAR 2A increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 7 and 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2A levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2 hours, 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, P<0.05, and P<0.01, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 2).", "Compared with the control group, the levels of NMDAR 2B decreased significantly in the isoflurane group, but no significant difference was found between the control and isoflurane groups post-anesthesia 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2B levels were significantly increased in the isoflurane+ dexmedetomidine single-dose group. 
In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours and 3 days (P<0.05 and P<0.05, respectively) after anesthesia (Figure 3).", "Compared with the control group, the levels of EAAT1 increased significantly at 2, 12, and 24 hours after anesthesia in the isoflurane group (P<0.05, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 3, 7, and 28 days after anesthesia. Compared with the isoflurane group, EAAT1 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 4).", "Isoflurane caused an increase in apoptotic cells in neonatal rats, which was related to increased expression of NR2A and EAAT1 and decreased expression of NR2B. Isoflurane-induced neurotoxicity during critical developmental periods may be controlled by dexmedetomidine." ]
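The densitometry step in the western blot protocol above (band intensities semi-quantified in ImageJ, each target normalized to its GAPDH lane) reduces to simple per-lane ratios. The sketch below illustrates that arithmetic with invented intensity values; the lane labels and numbers are placeholders, not data from the study.

```python
import numpy as np

def normalize_to_gapdh(target_density, gapdh_density):
    """Express target band densities relative to the GAPDH loading control, lane by lane."""
    target = np.asarray(target_density, dtype=float)
    gapdh = np.asarray(gapdh_density, dtype=float)
    return target / gapdh

def relative_to_control(normalized, control_mask):
    """Express normalized densities as fold change over the mean of the control lanes."""
    normalized = np.asarray(normalized, dtype=float)
    return normalized / normalized[control_mask].mean()

# Made-up ImageJ-style densities for six lanes (3 control, 3 isoflurane) at one time point
caspase3 = [110.0, 105.0, 118.0, 190.0, 205.0, 181.0]
gapdh    = [100.0,  98.0, 104.0, 101.0,  99.0, 102.0]

ratios = normalize_to_gapdh(caspase3, gapdh)
fold = relative_to_control(ratios, control_mask=np.array([True, True, True, False, False, False]))
print(np.round(fold, 2))   # ~1.0 for control lanes, >1 for isoflurane lanes
```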
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Animals and grouping", "Experimental model", "Tissue preparation", "Western blot assays", "Statistical analyses", "Results", "Caspase-3 levels", "NMDAR NR2A levels", "NMDAR NR2B levels", "EAAT1 levels", "Discussion", "Conclusion" ]
[ "There are many newborns who need to receive anesthesia due to diagnosis or surgical needs each year.1 Generally, the most commonly used anesthetics, including isoflurane and ketamine, exert their effects by inhibiting the N-methyl-D-aspartate receptor (NMDAR) and/or potentiating the gamma amino acid type A receptor.1 The NMDAR is a heterotetramer with universal brain distribution. It is composed of NR1 and NR2 (2A, 2B, 2C, and 2D) subunits. NR2A and NR2B are the most dominant subunits in the brain.2–5 NMDARs have critical roles in synapse plasticity, long-term potentiation (LTP), and long-term depression (LTD). A persistent strengthening of synapses induces LTP to improve learning and memory,6–9 while a long-lasting decrease in synaptic strength causes LTD and results in learning and memory deficits.10 Meanwhile, LTD is needed for plasticity in the brain and prevents overexcitation of circuits. Studies have shown that prolonged or multiple exposures to NMDAR inhibitors such as isoflurane may affect cell proliferation, interrupt synaptic transmission, and induce apoptosis and neuronal injury in the central nerve system (CNS) during development in rodents.1,10–13 Liu et al11 found that a short exposure to isoflurane increased NR2B expression in adult mice and improved their cognitive functions. However, prolonged exposure to isoflurane decreased NR2B levels, increased the production of apoptotic proteins, and resulted in cognitive dysfunction. Frontal lobe and hippocampus demonstrated an age-dependent decrease in NMDAR and NR2B expression, along with delayed short-term memory, long-term memory impairment, and spatial cognitive dysfunction.7 Glutamate plays critical roles in synaptic plasticity and development by mediating the fast excitatory transmission. The extracellular glutamate concentration is kept low enough to avoid neurons from glutamate excitotoxicity.14 Excessive release of glutamate may lead to neuronal damage through receptor-mediated excitotoxicity, which plays an important role in many CNS diseases. Excitatory amino acid transporter 1 (EAAT1) is an important glutamate transporter in neurons and glial cells, which limits excessive increase of glutamate levels to reduce glutamate excitotoxicity.15\nDexmedetomidine is an agonist for α2-adrenoreceptors. It has been widely used in clinics and shows potent sedative, analgesic, and anxiolytic effects. Dexmedetomidine produces less stress responses and maintains a stable cardiovascular function. It has been shown that dexmedetomidine protects neuronal cells against isoflurane-induced hippocampal neuronal injury in neonatal mice.16,17 However, the mechanism underlying this protective effect has yet to be determined. In the present study, we investigated whether prolonged exposure to isoflurane generates neurotoxicity by changing the expression of NMDAR subunits and EAAT1. We further investigated if dexmedetomidine protects against isoflurane-induced neuronal injury and if glutamate neurotransmission could be influenced.", " Animals and grouping Experimental animals were 144 male Sprague Dawley rats (specific pathogen free) at postnatal day 7 weighing 12–20 g obtained from the Experimental Animal Facility at Shengjing Hospital of China Medical University (Taichung, Taiwan). The Ohmeda 210 anesthesia machine and F-MCI sensor were purchased from Datex-Ohmeda (Datex-Ohmeda, Helsinki, Finland). Isoflurane was purchased from Abbott Laboratories (Chicago, IL, USA) and dexmedetomidine was purchased from Jiangsu Hengrui Medicine Co. Ltd. 
(Lianyungang, China; Lot 14101732).\nAll experimental procedures and protocols were approved by The Laboratory Animal Care Committee of China Medical University, Shenyang, China (2015PS223K) and were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals from the National Institutes of Health, USA. Rat pups were randomly divided into four groups (n=36 per group): control, isoflurane, isoflurane with a single dose of dexmedetomidine, and isoflurane with dual doses of dexmedetomidine. All pups initially stayed with their dams until postnatal day 21, when surviving rats were weaned and housed in a temperature- and light-controlled room (22°C; 12-hour light/dark cycle) with ad libitum access to food and water.\nExperimental animals were 144 male Sprague Dawley rats (specific pathogen free) at postnatal day 7 weighing 12–20 g obtained from the Experimental Animal Facility at Shengjing Hospital of China Medical University (Taichung, Taiwan). The Ohmeda 210 anesthesia machine and F-MCI sensor were purchased from Datex-Ohmeda (Datex-Ohmeda, Helsinki, Finland). Isoflurane was purchased from Abbott Laboratories (Chicago, IL, USA) and dexmedetomidine was purchased from Jiangsu Hengrui Medicine Co. Ltd. (Lianyungang, China; Lot 14101732).\nAll experimental procedures and protocols were approved by The Laboratory Animal Care Committee of China Medical University, Shenyang, China (2015PS223K) and were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals from the National Institutes of Health, USA. Rat pups were randomly divided into four groups (n=36 per group): control, isoflurane, isoflurane with a single dose of dexmedetomidine, and isoflurane with dual doses of dexmedetomidine. All pups initially stayed with their dams until postnatal day 21, when surviving rats were weaned and housed in a temperature- and light-controlled room (22°C; 12-hour light/dark cycle) with ad libitum access to food and water.\n Experimental model For all isoflurane groups, rats were placed in a customized anesthetic chamber (30×20×20 cm3) with a gas inlet and outlet. The gas inlet was connected to a compressed oxygen tank and a standard anesthesia vaporizer to direct a constant 1 L/min flow of oxygen containing 2% isoflurane through the chamber. The isoflurane levels were continuously monitored through the gas outlet by a gas monitor. The infusion time of isoflurane was 4 hours for all isoflurane groups. For the group receiving a single dose of dexmedetomidine, the drug was administered intraperitoneally (ip) at 75 µg/kg 20 minutes prior to isoflurane infusion. For the group receiving dual doses of dexmedetomidine, the drug was administered twice at 25 µg/kg (ip) 20 minutes before and 2 hours after beginning isoflurane infusion. Sham animals received single saline injections (ip) and were exposed to 30% O2 for 4 hours. During anesthesia, the oxygen concentration was continuously monitored and maintained at 30%. All rats were allowed to breathe spontaneously. After anesthesia, the rats were closely monitored until they regained consciousness, and then were returned to their dams in their home cages.\nFor all isoflurane groups, rats were placed in a customized anesthetic chamber (30×20×20 cm3) with a gas inlet and outlet. The gas inlet was connected to a compressed oxygen tank and a standard anesthesia vaporizer to direct a constant 1 L/min flow of oxygen containing 2% isoflurane through the chamber. 
The isoflurane levels were continuously monitored through the gas outlet by a gas monitor. The infusion time of isoflurane was 4 hours for all isoflurane groups. For the group receiving a single dose of dexmedetomidine, the drug was administered intraperitoneally (ip) at 75 µg/kg 20 minutes prior to isoflurane infusion. For the group receiving dual doses of dexmedetomidine, the drug was administered twice at 25 µg/kg (ip) 20 minutes before and 2 hours after beginning isoflurane infusion. Sham animals received single saline injections (ip) and were exposed to 30% O2 for 4 hours. During anesthesia, the oxygen concentration was continuously monitored and maintained at 30%. All rats were allowed to breathe spontaneously. After anesthesia, the rats were closely monitored until they regained consciousness, and then were returned to their dams in their home cages.\n Tissue preparation Rats were decapitated at 2 hours, 12 hours, 24 hours, 3 days, 7 days, and 28 days after the 4-hour anesthesia. Hippocampi were collected on ice and placed immediately into liquid nitrogen. The samples were removed from the liquid nitrogen 4 hours later and stored at −80°C until use.\nRats were decapitated at 2 hours, 12 hours, 24 hours, 3 days, 7 days, and 28 days after the 4-hour anesthesia. Hippocampi were collected on ice and placed immediately into liquid nitrogen. The samples were removed from the liquid nitrogen 4 hours later and stored at −80°C until use.\n Western blot assays Lysis buffer (Beyotime, Shanghai, China) was added to hippocampal samples (10 µL per 1 mg tissue). Samples were sonicated and incubated at room temperature for 30 minutes. Lysates were centrifuged at 4°C, 12,000× g for 20 minutes, and the supernatants were collected. Bicinchoninic acid assay was used to measure total protein concentration. Equivalent quantities of protein (40 µg) were separated by 10% SDS-PAGE under reducing conditions and electrophoretically transferred to a polyvinylidene fluoride membrane. Membranes were blocked in 5% nonfat milk for 4 hours and washed three times. Membranes were then incubated overnight at 4°C with anti-GAPDH (1:5,000 dilution; Proteintech, Chicago, IL, USA), anti-NR2B rabbit monoclonal antibody (1:1,000 dilution; Abcam, Cambridge, UK), anti-EAAT1 rabbit monoclonal antibody (1:1,000 dilution; Cell Signaling Technology, Beverly, MA, USA) or anti-caspase-3 rabbit monoclonal antibody (1:1,000 dilution; Abcam). After three washes (10 minutes per wash), membranes were incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody at room temperature for 90 minutes. Membranes were then washed three times (10 minutes per wash) and labeled proteins were visualized by incubating with chemiluminescent substrates (Thermo Fisher Scientific, Waltham, MA, USA) for 5 minutes. X-ray films were developed in a C300 developing machine (Azure Biosystems, Dublin, CA, USA), and ImageJ was used to semi-quantify protein levels. Protein levels of NR2A, NR2B, EAAT1, and caspase-3 were normalized to their respective GAPDH levels.\nLysis buffer (Beyotime, Shanghai, China) was added to hippocampal samples (10 µL per 1 mg tissue). Samples were sonicated and incubated at room temperature for 30 minutes. Lysates were centrifuged at 4°C, 12,000× g for 20 minutes, and the supernatants were collected. Bicinchoninic acid assay was used to measure total protein concentration. 
Equivalent quantities of protein (40 µg) were separated by 10% SDS-PAGE under reducing conditions and electrophoretically transferred to a polyvinylidene fluoride membrane. Membranes were blocked in 5% nonfat milk for 4 hours and washed three times. Membranes were then incubated overnight at 4°C with anti-GAPDH (1:5,000 dilution; Proteintech, Chicago, IL, USA), anti-NR2B rabbit monoclonal antibody (1:1,000 dilution; Abcam, Cambridge, UK), anti-EAAT1 rabbit monoclonal antibody (1:1,000 dilution; Cell Signaling Technology, Beverly, MA, USA) or anti-caspase-3 rabbit monoclonal antibody (1:1,000 dilution; Abcam). After three washes (10 minutes per wash), membranes were incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody at room temperature for 90 minutes. Membranes were then washed three times (10 minutes per wash) and labeled proteins were visualized by incubating with chemiluminescent substrates (Thermo Fisher Scientific, Waltham, MA, USA) for 5 minutes. X-ray films were developed in a C300 developing machine (Azure Biosystems, Dublin, CA, USA), and ImageJ was used to semi-quantify protein levels. Protein levels of NR2A, NR2B, EAAT1, and caspase-3 were normalized to their respective GAPDH levels.\n Statistical analyses Data were expressed as X¯±s. All statistical analyses were performed using SPSS (v19.0; IBM Corporation, Armonk, NY, USA). All continuous variables were tested for assumption of normality using the Shapiro–Wilk test. If the assumption of normality was met, one-way ANOVA followed by Tukey’s post hoc multiple comparison tests was used. If the assumption of normality was unmet, the Kruskal–Wallis H test was used. Significance was accepted at P<0.05.\nData were expressed as X¯±s. All statistical analyses were performed using SPSS (v19.0; IBM Corporation, Armonk, NY, USA). All continuous variables were tested for assumption of normality using the Shapiro–Wilk test. If the assumption of normality was met, one-way ANOVA followed by Tukey’s post hoc multiple comparison tests was used. If the assumption of normality was unmet, the Kruskal–Wallis H test was used. Significance was accepted at P<0.05.", "Experimental animals were 144 male Sprague Dawley rats (specific pathogen free) at postnatal day 7 weighing 12–20 g obtained from the Experimental Animal Facility at Shengjing Hospital of China Medical University (Taichung, Taiwan). The Ohmeda 210 anesthesia machine and F-MCI sensor were purchased from Datex-Ohmeda (Datex-Ohmeda, Helsinki, Finland). Isoflurane was purchased from Abbott Laboratories (Chicago, IL, USA) and dexmedetomidine was purchased from Jiangsu Hengrui Medicine Co. Ltd. (Lianyungang, China; Lot 14101732).\nAll experimental procedures and protocols were approved by The Laboratory Animal Care Committee of China Medical University, Shenyang, China (2015PS223K) and were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals from the National Institutes of Health, USA. Rat pups were randomly divided into four groups (n=36 per group): control, isoflurane, isoflurane with a single dose of dexmedetomidine, and isoflurane with dual doses of dexmedetomidine. All pups initially stayed with their dams until postnatal day 21, when surviving rats were weaned and housed in a temperature- and light-controlled room (22°C; 12-hour light/dark cycle) with ad libitum access to food and water.", "For all isoflurane groups, rats were placed in a customized anesthetic chamber (30×20×20 cm3) with a gas inlet and outlet. 
The gas inlet was connected to a compressed oxygen tank and a standard anesthesia vaporizer to direct a constant 1 L/min flow of oxygen containing 2% isoflurane through the chamber. The isoflurane levels were continuously monitored through the gas outlet by a gas monitor. The infusion time of isoflurane was 4 hours for all isoflurane groups. For the group receiving a single dose of dexmedetomidine, the drug was administered intraperitoneally (ip) at 75 µg/kg 20 minutes prior to isoflurane infusion. For the group receiving dual doses of dexmedetomidine, the drug was administered twice at 25 µg/kg (ip) 20 minutes before and 2 hours after beginning isoflurane infusion. Sham animals received single saline injections (ip) and were exposed to 30% O2 for 4 hours. During anesthesia, the oxygen concentration was continuously monitored and maintained at 30%. All rats were allowed to breathe spontaneously. After anesthesia, the rats were closely monitored until they regained consciousness, and then were returned to their dams in their home cages.", "Rats were decapitated at 2 hours, 12 hours, 24 hours, 3 days, 7 days, and 28 days after the 4-hour anesthesia. Hippocampi were collected on ice and placed immediately into liquid nitrogen. The samples were removed from the liquid nitrogen 4 hours later and stored at −80°C until use.", "Lysis buffer (Beyotime, Shanghai, China) was added to hippocampal samples (10 µL per 1 mg tissue). Samples were sonicated and incubated at room temperature for 30 minutes. Lysates were centrifuged at 4°C, 12,000× g for 20 minutes, and the supernatants were collected. Bicinchoninic acid assay was used to measure total protein concentration. Equivalent quantities of protein (40 µg) were separated by 10% SDS-PAGE under reducing conditions and electrophoretically transferred to a polyvinylidene fluoride membrane. Membranes were blocked in 5% nonfat milk for 4 hours and washed three times. Membranes were then incubated overnight at 4°C with anti-GAPDH (1:5,000 dilution; Proteintech, Chicago, IL, USA), anti-NR2B rabbit monoclonal antibody (1:1,000 dilution; Abcam, Cambridge, UK), anti-EAAT1 rabbit monoclonal antibody (1:1,000 dilution; Cell Signaling Technology, Beverly, MA, USA) or anti-caspase-3 rabbit monoclonal antibody (1:1,000 dilution; Abcam). After three washes (10 minutes per wash), membranes were incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody at room temperature for 90 minutes. Membranes were then washed three times (10 minutes per wash) and labeled proteins were visualized by incubating with chemiluminescent substrates (Thermo Fisher Scientific, Waltham, MA, USA) for 5 minutes. X-ray films were developed in a C300 developing machine (Azure Biosystems, Dublin, CA, USA), and ImageJ was used to semi-quantify protein levels. Protein levels of NR2A, NR2B, EAAT1, and caspase-3 were normalized to their respective GAPDH levels.", "Data were expressed as X¯±s. All statistical analyses were performed using SPSS (v19.0; IBM Corporation, Armonk, NY, USA). All continuous variables were tested for assumption of normality using the Shapiro–Wilk test. If the assumption of normality was met, one-way ANOVA followed by Tukey’s post hoc multiple comparison tests was used. If the assumption of normality was unmet, the Kruskal–Wallis H test was used. 
Significance was accepted at P<0.05.", " Caspase-3 levels Compared with the control group, the levels of caspase-3 increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia 7 and 28 days after anesthesia. Compared with the isoflurane group, caspase-3 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, caspase-3 levels were significantly decreased at 3 days (P<0.05) after anesthesia compared to isoflurane group (Figure 1).\nCompared with the control group, the levels of caspase-3 increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia 7 and 28 days after anesthesia. Compared with the isoflurane group, caspase-3 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, caspase-3 levels were significantly decreased at 3 days (P<0.05) after anesthesia compared to isoflurane group (Figure 1).\n NMDAR NR2A levels Compared with the control group, the levels of NMDAR 2A increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 7 and 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2A levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2 hours, 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, P<0.05, and P<0.01, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 2).\nCompared with the control group, the levels of NMDAR 2A increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 7 and 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2A levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2 hours, 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, P<0.05, and P<0.01, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 2).\n NMDAR NR2B levels Compared with the control group, the levels of NMDAR 2B decreased significantly in the isoflurane group, but no significant difference was found between the control and isoflurane groups post-anesthesia 28 days after anesthesia. 
Compared with the isoflurane group, NMDAR 2B levels were significantly increased in the isoflurane+ dexmedetomidine single-dose group. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours and 3 days (P<0.05 and P<0.05, respectively) after anesthesia (Figure 3).\nCompared with the control group, the levels of NMDAR 2B decreased significantly in the isoflurane group, but no significant difference was found between the control and isoflurane groups post-anesthesia 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2B levels were significantly increased in the isoflurane+ dexmedetomidine single-dose group. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours and 3 days (P<0.05 and P<0.05, respectively) after anesthesia (Figure 3).\n EAAT1 levels Compared with the control group, the levels of EAAT1 increased significantly at 2, 12, and 24 hours after anesthesia in the isoflurane group (P<0.05, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 3, 7, and 28 days after anesthesia. Compared with the isoflurane group, EAAT1 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 4).\nCompared with the control group, the levels of EAAT1 increased significantly at 2, 12, and 24 hours after anesthesia in the isoflurane group (P<0.05, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 3, 7, and 28 days after anesthesia. Compared with the isoflurane group, EAAT1 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 4).", "Compared with the control group, the levels of caspase-3 increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia 7 and 28 days after anesthesia. Compared with the isoflurane group, caspase-3 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, caspase-3 levels were significantly decreased at 3 days (P<0.05) after anesthesia compared to isoflurane group (Figure 1).", "Compared with the control group, the levels of NMDAR 2A increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 7 and 28 days after anesthesia. 
Compared with the isoflurane group, NMDAR 2A levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2 hours, 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, P<0.05, and P<0.01, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 2).", "Compared with the control group, the levels of NMDAR 2B decreased significantly in the isoflurane group, but no significant difference was found between the control and isoflurane groups post-anesthesia 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2B levels were significantly increased in the isoflurane+ dexmedetomidine single-dose group. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours and 3 days (P<0.05 and P<0.05, respectively) after anesthesia (Figure 3).", "Compared with the control group, the levels of EAAT1 increased significantly at 2, 12, and 24 hours after anesthesia in the isoflurane group (P<0.05, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 3, 7, and 28 days after anesthesia. Compared with the isoflurane group, EAAT1 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 4).", "Our studies have demonstrated that 2% isoflurane for 4 hours increases hippocampal neuronal apoptosis in Sprague Dawley rat pups. The result is consistent with a previous study which reported that rat pups subjected to an initial 4% isoflurane and subsequently maintained at 1 minimum alveolar concentration (MAC) exhibited a marked increase in cell death in several brain areas, including hippocampus.18 Dexmedetomidine administration (both single and dual doses) protected against isoflurane neurotoxicity. Using different doses of dexmedetomidine, single or dual administration did not have significantly different effects. Furthermore, isoflurane decreased NR2B expression and increased NR2A, EAAT1, and caspase-3 expression while dexmedetomidine changed the expression of the subunits, suggesting that the glutamate neurotransmission could be influenced. Sanders et al showed that dexmedetomidine attenuates isoflurane-induced neuronal injury and neurocognitive impairment in the hippocampus and thalamus.19 The present study further demonstrates that glutamate systems involving NR2A, NR2B, and EAAT1 are involved in the toxicity of isoflurane and the neuroprotection provided by dexmedetomidine.\nThe caspase family plays essential roles in programmed cell death and caspase-3 is a central mediator. 
In the early phase of apoptosis, activated caspase-3 cleaves corresponding cytoplasmic and nuclear substrates and leads to cell death.20,21 Li et al reported a significantly greater number of apoptotic cells in the hippocampus of rat pups exposed to 2% isoflurane for 6 hours.22 In our experiment, Western blot analysis showed that the expression of caspase-3 in the hippocampus of the isoflurane group was significantly higher than that of the other groups within 3 days, leading to an increase in apoptotic proteins and neurotoxicity. The levels of caspase-3 in the single-dose dexmedetomidine group were slightly but nonsignificantly lower than in the dual-dose group, and both groups were lower than the isoflurane group. Our results agree with previous studies by Li et al10 in which neonatal rats were exposed to 0.75% isoflurane for 6 hours. They showed that there was no significantly different protective effect between single (75 µg/kg) and triple administrations (25 µg/kg per administration) of dexmedetomidine when all rats received the same total dose. Results from both our group and the Li group showed the toxicity of isoflurane and the neuroprotective effect of dexmedetomidine.\nThe duration from 3 months after pregnancy to 4 years after birth in humans is equivalent to 7–14 days after birth in rodents. During this period, synapses increase rapidly. Thus, if exposed to NMDAR inhibitors such as isoflurane, ketamine, or nitrous oxide, brain development is inhibited to various degrees, leading to neuronal injury and long-term disability.23,24 Anesthetic exposure changes NMDAR subunits, affects calcium influx, induces cell signaling cascades, and generates LTP or LTD.25,26 The levels of the NR2B subunit in the hippocampus decrease with age. Increasing the NR2B levels in the hippocampi of old rats enhances excitatory postsynaptic potentials, strengthens memory, and improves synaptic transmission function.27 Our results showed that NR2A levels were increased in the isoflurane group while NR2B levels were decreased, which may affect learning and memory ability in adulthood. We will explore this issue in future experiments. Furthermore, in NR2A knockout mice and NR2A regulatory signaling-deficient mice, activation of NR2B can generate LTP.28–30 NVP-AAM077, an NR2A-specific inhibitor, is able to activate NMDARs and generate LTP.30,31 Compared to NR2A-dependent NMDARs, NR2B-dependent NMDARs carry a large amount of calcium ions which react preferentially with calmodulin kinase II, so they are more likely to induce LTP.32 Most neonatal rats have high levels of NR2B, which begin to decline to adult levels 6–20 days after birth. On the other hand, the levels of NR2A subunits are low early after birth and begin to increase to adult levels 12 days after birth.33,34 Our data are consistent with results from Zhao et al35 showing that ketamine exposure in pregnant rats induces neuronal impairment, consistent with changes in NMDAR subunits. Thus, these results suggest that inhibition of NR2A and NR2B subunits is involved in isoflurane-induced neurotoxicity.\nEAAT1 is predominantly expressed in astrocytes and is one of the glutamate transporters required to maintain glutamate levels in the extracellular space.36,37 Qu et al37 reported that isoflurane induced spatial learning and memory impairment and produced increases in glutamate and EAAT1 levels in the hippocampal CA1 region of old rats, suggesting that the increase in glutamate induced excitotoxicity, while the increase in EAAT1 protected against it and further improved isoflurane-induced spatial learning and memory dysfunction. In the present study, we also found that EAAT1 levels were significantly increased in the isoflurane group. Those changes suggest that the enhanced EAAT1 levels may be related to neuroprotective mechanisms in the CNS that remove excessive glutamate released by excitatory synapses, maintaining extracellular glutamate levels and protecting against excitotoxicity. The decreased EAAT1 levels in the dexmedetomidine groups may be due to the neuroprotective effects of dexmedetomidine, which leads to attenuation of the increase in EAAT1.\nTaken together, the present study may provide guidance in the clinical use of drugs, suggesting that isoflurane anesthesia in pediatric patients may be combined with dexmedetomidine and other protective drugs to reduce brain damage. However, there are some limitations to our study. First, this study did not measure blood glucose, blood oxygen, or body temperature levels. Prolonged isoflurane anesthesia may cause hypoxia, hypercapnia, abnormal glucose levels, and hypothermia. All of these factors may lead to increased apoptosis, which can cause neurocognitive disorders. Second, NR2A and NR2B inhibitors were not used in the present study, so we could not definitively determine whether apoptosis and the expression of certain subunits were directly related. Third, the plasma levels of dexmedetomidine were not measured and we could not determine the exact difference in the effective drug dosage in rats treated with high and low doses of dexmedetomidine.", "Isoflurane caused an increase in apoptotic cells in neonatal rats, which was related to increased expression of NR2A and EAAT1 and decreased expression of NR2B. Isoflurane-induced neurotoxicity during critical developmental periods may be controlled by dexmedetomidine." ]
[ "intro", "materials|methods", null, null, null, null, null, "results", null, null, null, null, "discussion", null ]
[ "dexmedetomidine", "isoflurane", "neurotoxicity", "apoptosis", "glutamate" ]
Introduction: There are many newborns who need to receive anesthesia due to diagnosis or surgical needs each year.1 Generally, the most commonly used anesthetics, including isoflurane and ketamine, exert their effects by inhibiting the N-methyl-D-aspartate receptor (NMDAR) and/or potentiating the gamma amino acid type A receptor.1 The NMDAR is a heterotetramer with universal brain distribution. It is composed of NR1 and NR2 (2A, 2B, 2C, and 2D) subunits. NR2A and NR2B are the most dominant subunits in the brain.2–5 NMDARs have critical roles in synapse plasticity, long-term potentiation (LTP), and long-term depression (LTD). A persistent strengthening of synapses induces LTP to improve learning and memory,6–9 while a long-lasting decrease in synaptic strength causes LTD and results in learning and memory deficits.10 Meanwhile, LTD is needed for plasticity in the brain and prevents overexcitation of circuits. Studies have shown that prolonged or multiple exposures to NMDAR inhibitors such as isoflurane may affect cell proliferation, interrupt synaptic transmission, and induce apoptosis and neuronal injury in the central nerve system (CNS) during development in rodents.1,10–13 Liu et al11 found that a short exposure to isoflurane increased NR2B expression in adult mice and improved their cognitive functions. However, prolonged exposure to isoflurane decreased NR2B levels, increased the production of apoptotic proteins, and resulted in cognitive dysfunction. Frontal lobe and hippocampus demonstrated an age-dependent decrease in NMDAR and NR2B expression, along with delayed short-term memory, long-term memory impairment, and spatial cognitive dysfunction.7 Glutamate plays critical roles in synaptic plasticity and development by mediating the fast excitatory transmission. The extracellular glutamate concentration is kept low enough to avoid neurons from glutamate excitotoxicity.14 Excessive release of glutamate may lead to neuronal damage through receptor-mediated excitotoxicity, which plays an important role in many CNS diseases. Excitatory amino acid transporter 1 (EAAT1) is an important glutamate transporter in neurons and glial cells, which limits excessive increase of glutamate levels to reduce glutamate excitotoxicity.15 Dexmedetomidine is an agonist for α2-adrenoreceptors. It has been widely used in clinics and shows potent sedative, analgesic, and anxiolytic effects. Dexmedetomidine produces less stress responses and maintains a stable cardiovascular function. It has been shown that dexmedetomidine protects neuronal cells against isoflurane-induced hippocampal neuronal injury in neonatal mice.16,17 However, the mechanism underlying this protective effect has yet to be determined. In the present study, we investigated whether prolonged exposure to isoflurane generates neurotoxicity by changing the expression of NMDAR subunits and EAAT1. We further investigated if dexmedetomidine protects against isoflurane-induced neuronal injury and if glutamate neurotransmission could be influenced. Materials and methods: Animals and grouping Experimental animals were 144 male Sprague Dawley rats (specific pathogen free) at postnatal day 7 weighing 12–20 g obtained from the Experimental Animal Facility at Shengjing Hospital of China Medical University (Taichung, Taiwan). The Ohmeda 210 anesthesia machine and F-MCI sensor were purchased from Datex-Ohmeda (Datex-Ohmeda, Helsinki, Finland). 
Isoflurane was purchased from Abbott Laboratories (Chicago, IL, USA) and dexmedetomidine was purchased from Jiangsu Hengrui Medicine Co. Ltd. (Lianyungang, China; Lot 14101732). All experimental procedures and protocols were approved by The Laboratory Animal Care Committee of China Medical University, Shenyang, China (2015PS223K) and were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals from the National Institutes of Health, USA. Rat pups were randomly divided into four groups (n=36 per group): control, isoflurane, isoflurane with a single dose of dexmedetomidine, and isoflurane with dual doses of dexmedetomidine. All pups initially stayed with their dams until postnatal day 21, when surviving rats were weaned and housed in a temperature- and light-controlled room (22°C; 12-hour light/dark cycle) with ad libitum access to food and water. Experimental animals were 144 male Sprague Dawley rats (specific pathogen free) at postnatal day 7 weighing 12–20 g obtained from the Experimental Animal Facility at Shengjing Hospital of China Medical University (Taichung, Taiwan). The Ohmeda 210 anesthesia machine and F-MCI sensor were purchased from Datex-Ohmeda (Datex-Ohmeda, Helsinki, Finland). Isoflurane was purchased from Abbott Laboratories (Chicago, IL, USA) and dexmedetomidine was purchased from Jiangsu Hengrui Medicine Co. Ltd. (Lianyungang, China; Lot 14101732). All experimental procedures and protocols were approved by The Laboratory Animal Care Committee of China Medical University, Shenyang, China (2015PS223K) and were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals from the National Institutes of Health, USA. Rat pups were randomly divided into four groups (n=36 per group): control, isoflurane, isoflurane with a single dose of dexmedetomidine, and isoflurane with dual doses of dexmedetomidine. All pups initially stayed with their dams until postnatal day 21, when surviving rats were weaned and housed in a temperature- and light-controlled room (22°C; 12-hour light/dark cycle) with ad libitum access to food and water. Experimental model For all isoflurane groups, rats were placed in a customized anesthetic chamber (30×20×20 cm3) with a gas inlet and outlet. The gas inlet was connected to a compressed oxygen tank and a standard anesthesia vaporizer to direct a constant 1 L/min flow of oxygen containing 2% isoflurane through the chamber. The isoflurane levels were continuously monitored through the gas outlet by a gas monitor. The infusion time of isoflurane was 4 hours for all isoflurane groups. For the group receiving a single dose of dexmedetomidine, the drug was administered intraperitoneally (ip) at 75 µg/kg 20 minutes prior to isoflurane infusion. For the group receiving dual doses of dexmedetomidine, the drug was administered twice at 25 µg/kg (ip) 20 minutes before and 2 hours after beginning isoflurane infusion. Sham animals received single saline injections (ip) and were exposed to 30% O2 for 4 hours. During anesthesia, the oxygen concentration was continuously monitored and maintained at 30%. All rats were allowed to breathe spontaneously. After anesthesia, the rats were closely monitored until they regained consciousness, and then were returned to their dams in their home cages. For all isoflurane groups, rats were placed in a customized anesthetic chamber (30×20×20 cm3) with a gas inlet and outlet. 
The gas inlet was connected to a compressed oxygen tank and a standard anesthesia vaporizer to direct a constant 1 L/min flow of oxygen containing 2% isoflurane through the chamber. The isoflurane levels were continuously monitored through the gas outlet by a gas monitor. The infusion time of isoflurane was 4 hours for all isoflurane groups. For the group receiving a single dose of dexmedetomidine, the drug was administered intraperitoneally (ip) at 75 µg/kg 20 minutes prior to isoflurane infusion. For the group receiving dual doses of dexmedetomidine, the drug was administered twice at 25 µg/kg (ip) 20 minutes before and 2 hours after beginning isoflurane infusion. Sham animals received single saline injections (ip) and were exposed to 30% O2 for 4 hours. During anesthesia, the oxygen concentration was continuously monitored and maintained at 30%. All rats were allowed to breathe spontaneously. After anesthesia, the rats were closely monitored until they regained consciousness, and then were returned to their dams in their home cages. Tissue preparation Rats were decapitated at 2 hours, 12 hours, 24 hours, 3 days, 7 days, and 28 days after the 4-hour anesthesia. Hippocampi were collected on ice and placed immediately into liquid nitrogen. The samples were removed from the liquid nitrogen 4 hours later and stored at −80°C until use. Rats were decapitated at 2 hours, 12 hours, 24 hours, 3 days, 7 days, and 28 days after the 4-hour anesthesia. Hippocampi were collected on ice and placed immediately into liquid nitrogen. The samples were removed from the liquid nitrogen 4 hours later and stored at −80°C until use. Western blot assays Lysis buffer (Beyotime, Shanghai, China) was added to hippocampal samples (10 µL per 1 mg tissue). Samples were sonicated and incubated at room temperature for 30 minutes. Lysates were centrifuged at 4°C, 12,000× g for 20 minutes, and the supernatants were collected. Bicinchoninic acid assay was used to measure total protein concentration. Equivalent quantities of protein (40 µg) were separated by 10% SDS-PAGE under reducing conditions and electrophoretically transferred to a polyvinylidene fluoride membrane. Membranes were blocked in 5% nonfat milk for 4 hours and washed three times. Membranes were then incubated overnight at 4°C with anti-GAPDH (1:5,000 dilution; Proteintech, Chicago, IL, USA), anti-NR2B rabbit monoclonal antibody (1:1,000 dilution; Abcam, Cambridge, UK), anti-EAAT1 rabbit monoclonal antibody (1:1,000 dilution; Cell Signaling Technology, Beverly, MA, USA) or anti-caspase-3 rabbit monoclonal antibody (1:1,000 dilution; Abcam). After three washes (10 minutes per wash), membranes were incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody at room temperature for 90 minutes. Membranes were then washed three times (10 minutes per wash) and labeled proteins were visualized by incubating with chemiluminescent substrates (Thermo Fisher Scientific, Waltham, MA, USA) for 5 minutes. X-ray films were developed in a C300 developing machine (Azure Biosystems, Dublin, CA, USA), and ImageJ was used to semi-quantify protein levels. Protein levels of NR2A, NR2B, EAAT1, and caspase-3 were normalized to their respective GAPDH levels. Lysis buffer (Beyotime, Shanghai, China) was added to hippocampal samples (10 µL per 1 mg tissue). Samples were sonicated and incubated at room temperature for 30 minutes. Lysates were centrifuged at 4°C, 12,000× g for 20 minutes, and the supernatants were collected. 
Bicinchoninic acid assay was used to measure total protein concentration. Equivalent quantities of protein (40 µg) were separated by 10% SDS-PAGE under reducing conditions and electrophoretically transferred to a polyvinylidene fluoride membrane. Membranes were blocked in 5% nonfat milk for 4 hours and washed three times. Membranes were then incubated overnight at 4°C with anti-GAPDH (1:5,000 dilution; Proteintech, Chicago, IL, USA), anti-NR2B rabbit monoclonal antibody (1:1,000 dilution; Abcam, Cambridge, UK), anti-EAAT1 rabbit monoclonal antibody (1:1,000 dilution; Cell Signaling Technology, Beverly, MA, USA) or anti-caspase-3 rabbit monoclonal antibody (1:1,000 dilution; Abcam). After three washes (10 minutes per wash), membranes were incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody at room temperature for 90 minutes. Membranes were then washed three times (10 minutes per wash) and labeled proteins were visualized by incubating with chemiluminescent substrates (Thermo Fisher Scientific, Waltham, MA, USA) for 5 minutes. X-ray films were developed in a C300 developing machine (Azure Biosystems, Dublin, CA, USA), and ImageJ was used to semi-quantify protein levels. Protein levels of NR2A, NR2B, EAAT1, and caspase-3 were normalized to their respective GAPDH levels. Statistical analyses Data were expressed as X¯±s. All statistical analyses were performed using SPSS (v19.0; IBM Corporation, Armonk, NY, USA). All continuous variables were tested for assumption of normality using the Shapiro–Wilk test. If the assumption of normality was met, one-way ANOVA followed by Tukey’s post hoc multiple comparison tests was used. If the assumption of normality was unmet, the Kruskal–Wallis H test was used. Significance was accepted at P<0.05. Data were expressed as X¯±s. All statistical analyses were performed using SPSS (v19.0; IBM Corporation, Armonk, NY, USA). All continuous variables were tested for assumption of normality using the Shapiro–Wilk test. If the assumption of normality was met, one-way ANOVA followed by Tukey’s post hoc multiple comparison tests was used. If the assumption of normality was unmet, the Kruskal–Wallis H test was used. Significance was accepted at P<0.05. Animals and grouping: Experimental animals were 144 male Sprague Dawley rats (specific pathogen free) at postnatal day 7 weighing 12–20 g obtained from the Experimental Animal Facility at Shengjing Hospital of China Medical University (Taichung, Taiwan). The Ohmeda 210 anesthesia machine and F-MCI sensor were purchased from Datex-Ohmeda (Datex-Ohmeda, Helsinki, Finland). Isoflurane was purchased from Abbott Laboratories (Chicago, IL, USA) and dexmedetomidine was purchased from Jiangsu Hengrui Medicine Co. Ltd. (Lianyungang, China; Lot 14101732). All experimental procedures and protocols were approved by The Laboratory Animal Care Committee of China Medical University, Shenyang, China (2015PS223K) and were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals from the National Institutes of Health, USA. Rat pups were randomly divided into four groups (n=36 per group): control, isoflurane, isoflurane with a single dose of dexmedetomidine, and isoflurane with dual doses of dexmedetomidine. All pups initially stayed with their dams until postnatal day 21, when surviving rats were weaned and housed in a temperature- and light-controlled room (22°C; 12-hour light/dark cycle) with ad libitum access to food and water. 
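As a note on the quantification described in the Western blot assays subsection above: each target band (NR2A, NR2B, EAAT1, caspase-3) is normalized to the GAPDH band from the same lane. The short sketch below illustrates that normalization step in Python; the densitometry values, sample labels, and the fold-change step relative to the control mean are hypothetical illustrations, not data or code from the study.

```python
import pandas as pd

# Hypothetical ImageJ band intensities (arbitrary densitometry units).
bands = pd.DataFrame({
    "sample":   ["ctrl_1", "ctrl_2", "iso_1", "iso_2"],
    "group":    ["control", "control", "isoflurane", "isoflurane"],
    "caspase3": [1020.0, 980.0, 1850.0, 1760.0],
    "gapdh":    [2400.0, 2350.0, 2420.0, 2380.0],
})

# Per-lane normalization: each target band divided by the GAPDH band from the same lane.
bands["caspase3_norm"] = bands["caspase3"] / bands["gapdh"]

# Optional extra step (not stated in the study): express each lane as fold change
# over the mean of the control group, a common way to present semi-quantitative blots.
ctrl_mean = bands.loc[bands["group"] == "control", "caspase3_norm"].mean()
bands["caspase3_fold"] = bands["caspase3_norm"] / ctrl_mean

print(bands[["sample", "group", "caspase3_norm", "caspase3_fold"]])
```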
Experimental model: For all isoflurane groups, rats were placed in a customized anesthetic chamber (30×20×20 cm3) with a gas inlet and outlet. The gas inlet was connected to a compressed oxygen tank and a standard anesthesia vaporizer to direct a constant 1 L/min flow of oxygen containing 2% isoflurane through the chamber. The isoflurane levels were continuously monitored through the gas outlet by a gas monitor. The infusion time of isoflurane was 4 hours for all isoflurane groups. For the group receiving a single dose of dexmedetomidine, the drug was administered intraperitoneally (ip) at 75 µg/kg 20 minutes prior to isoflurane infusion. For the group receiving dual doses of dexmedetomidine, the drug was administered twice at 25 µg/kg (ip) 20 minutes before and 2 hours after beginning isoflurane infusion. Sham animals received single saline injections (ip) and were exposed to 30% O2 for 4 hours. During anesthesia, the oxygen concentration was continuously monitored and maintained at 30%. All rats were allowed to breathe spontaneously. After anesthesia, the rats were closely monitored until they regained consciousness, and then were returned to their dams in their home cages. Tissue preparation: Rats were decapitated at 2 hours, 12 hours, 24 hours, 3 days, 7 days, and 28 days after the 4-hour anesthesia. Hippocampi were collected on ice and placed immediately into liquid nitrogen. The samples were removed from the liquid nitrogen 4 hours later and stored at −80°C until use. Western blot assays: Lysis buffer (Beyotime, Shanghai, China) was added to hippocampal samples (10 µL per 1 mg tissue). Samples were sonicated and incubated at room temperature for 30 minutes. Lysates were centrifuged at 4°C, 12,000× g for 20 minutes, and the supernatants were collected. Bicinchoninic acid assay was used to measure total protein concentration. Equivalent quantities of protein (40 µg) were separated by 10% SDS-PAGE under reducing conditions and electrophoretically transferred to a polyvinylidene fluoride membrane. Membranes were blocked in 5% nonfat milk for 4 hours and washed three times. Membranes were then incubated overnight at 4°C with anti-GAPDH (1:5,000 dilution; Proteintech, Chicago, IL, USA), anti-NR2B rabbit monoclonal antibody (1:1,000 dilution; Abcam, Cambridge, UK), anti-EAAT1 rabbit monoclonal antibody (1:1,000 dilution; Cell Signaling Technology, Beverly, MA, USA) or anti-caspase-3 rabbit monoclonal antibody (1:1,000 dilution; Abcam). After three washes (10 minutes per wash), membranes were incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody at room temperature for 90 minutes. Membranes were then washed three times (10 minutes per wash) and labeled proteins were visualized by incubating with chemiluminescent substrates (Thermo Fisher Scientific, Waltham, MA, USA) for 5 minutes. X-ray films were developed in a C300 developing machine (Azure Biosystems, Dublin, CA, USA), and ImageJ was used to semi-quantify protein levels. Protein levels of NR2A, NR2B, EAAT1, and caspase-3 were normalized to their respective GAPDH levels. Statistical analyses: Data were expressed as X¯±s. All statistical analyses were performed using SPSS (v19.0; IBM Corporation, Armonk, NY, USA). All continuous variables were tested for assumption of normality using the Shapiro–Wilk test. If the assumption of normality was met, one-way ANOVA followed by Tukey’s post hoc multiple comparison tests was used. If the assumption of normality was unmet, the Kruskal–Wallis H test was used. 
Significance was accepted at P<0.05. Results: Caspase-3 levels Compared with the control group, the levels of caspase-3 increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia 7 and 28 days after anesthesia. Compared with the isoflurane group, caspase-3 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, caspase-3 levels were significantly decreased at 3 days (P<0.05) after anesthesia compared to isoflurane group (Figure 1). Compared with the control group, the levels of caspase-3 increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia 7 and 28 days after anesthesia. Compared with the isoflurane group, caspase-3 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, caspase-3 levels were significantly decreased at 3 days (P<0.05) after anesthesia compared to isoflurane group (Figure 1). NMDAR NR2A levels Compared with the control group, the levels of NMDAR 2A increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 7 and 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2A levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2 hours, 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, P<0.05, and P<0.01, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 2). Compared with the control group, the levels of NMDAR 2A increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 7 and 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2A levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2 hours, 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, P<0.05, and P<0.01, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 2). NMDAR NR2B levels Compared with the control group, the levels of NMDAR 2B decreased significantly in the isoflurane group, but no significant difference was found between the control and isoflurane groups post-anesthesia 28 days after anesthesia. 
Compared with the isoflurane group, NMDAR 2B levels were significantly increased in the isoflurane+ dexmedetomidine single-dose group. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours and 3 days (P<0.05 and P<0.05, respectively) after anesthesia (Figure 3). Compared with the control group, the levels of NMDAR 2B decreased significantly in the isoflurane group, but no significant difference was found between the control and isoflurane groups post-anesthesia 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2B levels were significantly increased in the isoflurane+ dexmedetomidine single-dose group. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours and 3 days (P<0.05 and P<0.05, respectively) after anesthesia (Figure 3). EAAT1 levels Compared with the control group, the levels of EAAT1 increased significantly at 2, 12, and 24 hours after anesthesia in the isoflurane group (P<0.05, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 3, 7, and 28 days after anesthesia. Compared with the isoflurane group, EAAT1 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 4). Compared with the control group, the levels of EAAT1 increased significantly at 2, 12, and 24 hours after anesthesia in the isoflurane group (P<0.05, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 3, 7, and 28 days after anesthesia. Compared with the isoflurane group, EAAT1 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 4). Caspase-3 levels: Compared with the control group, the levels of caspase-3 increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia 7 and 28 days after anesthesia. Compared with the isoflurane group, caspase-3 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, caspase-3 levels were significantly decreased at 3 days (P<0.05) after anesthesia compared to isoflurane group (Figure 1). NMDAR NR2A levels: Compared with the control group, the levels of NMDAR 2A increased significantly at 2 hours, 12 hours, 24 hours, and 3 days after anesthesia in the isoflurane group (P<0.05, P<0.01, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 7 and 28 days after anesthesia. 
Compared with the isoflurane group, NMDAR 2A levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2 hours, 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, P<0.05, and P<0.01, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours, 24 hours, and 3 days (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 2). NMDAR NR2B levels: Compared with the control group, the levels of NMDAR 2B decreased significantly in the isoflurane group, but no significant difference was found between the control and isoflurane groups post-anesthesia 28 days after anesthesia. Compared with the isoflurane group, NMDAR 2B levels were significantly increased in the isoflurane+ dexmedetomidine single-dose group. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 12 hours and 3 days (P<0.05 and P<0.05, respectively) after anesthesia (Figure 3). EAAT1 levels: Compared with the control group, the levels of EAAT1 increased significantly at 2, 12, and 24 hours after anesthesia in the isoflurane group (P<0.05, P<0.01, and P<0.01, respectively), but no significant difference was found between the control and isoflurane groups post-anesthesia at 3, 7, and 28 days after anesthesia. Compared with the isoflurane group, EAAT1 levels were significantly decreased in the isoflurane+ dexmedetomidine single-dose group at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia. In the isoflurane+ dexmedetomidine dual-dose groups, NMDAR 2A levels were significantly decreased at 2, 12, and 24 hours (P<0.05, P<0.05, and P<0.05, respectively) after anesthesia (Figure 4). Discussion: Our studies have demonstrated that 2% isoflurane for 4 hours increases hippocampal neuronal apoptosis in Sprague Dawley rat pups. The result is consistent with a previous study which reported that rat pups subjected to an initial 4% isoflurane and subsequently maintained at 1 minimum alveolar concentration (MAC) exhibited a marked increase in cell death in several brain areas, including hippocampus.18 Dexmedetomidine administration (both single and dual doses) protected against isoflurane neurotoxicity. Using different doses of dexmedetomidine, single or dual administration did not have significantly different effects. Furthermore, isoflurane decreased NR2B expression and increased NR2A, EAAT1, and caspase-3 expression while dexmedetomidine changed the expression of the subunits, suggesting that the glutamate neurotransmission could be influenced. Sanders et al showed that dexmedetomidine attenuates isoflurane-induced neuronal injury and neurocognitive impairment in the hippocampus and thalamus.19 The present study further demonstrates that glutamate systems involving NR2A, NR2B, and EAAT1 are involved in the toxicity of isoflurane and the neuroprotection provided by dexmedetomidine. The caspase family plays essential roles in programmed cell death and caspase-3 is a central mediator. 
In the early phase of apoptosis, activated caspase-3 cleaves its corresponding cytoplasmic and nuclear substrates and leads to cell death.20,21 Li et al reported a significantly greater amount of apoptotic cells in the hippocampus of rat pups exposed to 2% isoflurane for 6 hours.22 In our experiment, Western blot analysis showed that the expression of caspase-3 in the hippocampus of the isoflurane group was significantly higher than that of the other groups within 3 days, indicating an increase in apoptotic proteins and neurotoxicity. The levels of caspase-3 in the single-dose dexmedetomidine group were slightly but nonsignificantly lower than the dual-dose group, and both groups were lower than the isoflurane group. Our results agree with previous studies by Li et al10 in which neonatal rats were exposed to 0.75% isoflurane for 6 hours. They showed that there was no significantly different protective effect between single (75 µg/kg) and triple administrations (25 µg/kg per administration) of dexmedetomidine when all rats received the same total dose. Results from both our group and the Li group showed the toxicity of isoflurane and the neuroprotective effect of dexmedetomidine. In humans, the developmental period from 3 months of gestation to 4 years after birth is equivalent to 7–14 days after birth in rodents. During this period, synapses increase rapidly. Thus, exposure to NMDAR inhibitors such as isoflurane, ketamine, or nitrous oxide inhibits brain development to varying degrees, leading to neuronal injury and long-term disability.23,24 Anesthetic exposure changes NMDAR subunits, affects calcium influx, induces cell signaling cascades, and generates LTP or LTD.25,26 The levels of the NR2B subunit in the hippocampus decrease with age. Increasing the NR2B levels in the hippocampi of old rats enhances excitatory postsynaptic potentials, strengthens memory, and improves synaptic transmission function.27 Our results showed that NR2A levels were increased in the isoflurane group while NR2B levels were decreased, which may affect learning and memory in adulthood. We will explore this issue in future experiments. Furthermore, in NR2A knockout mice and NR2A regulatory signaling-deficient mice, activation of NR2B can generate LTP.28–30 Under blockade by NVP-AAM077, an NR2A-specific inhibitor, NMDARs can still be activated to generate LTP.30,31 Compared to NR2A-dependent NMDARs, NR2B-dependent NMDARs carry a larger amount of calcium ions, which react preferentially with calmodulin kinase II, so they are more likely to induce LTP.32 Most neonatal rats have high levels of NR2B, which begin to decline to adult levels 6–20 days after birth. On the other hand, the levels of NR2A subunits are low early after birth and begin to increase to adult levels 12 days after birth.33,34 Our data are consistent with results from Zhao et al35 showing that ketamine exposure in pregnant rats induces neuronal impairment in rats, together with changes in NMDAR subunits. Thus, these results suggest that alterations in NR2A and NR2B subunits are involved in isoflurane-induced neurotoxicity. 
EAAT1 is predominantly expressed in astrocytes and is one of the glutamate transporters required to maintain glutamate levels in the extracellular space.36,37 Qu et al37 reported that isoflurane induced spatial learning and memory impairment and produced increases in glutamate and EAAT1 levels in the hippocampal CA1 region of old rats, suggesting that the increase in glutamate induced excitotoxicity, while the increase in EAAT1 protected against it and attenuated the isoflurane-induced spatial learning and memory dysfunction. In the present study, we also found that EAAT1 levels were significantly increased in the isoflurane group. These changes suggest that the enhanced EAAT1 levels may be related to neuroprotective mechanisms in the CNS that remove excessive glutamate released by excitatory synapses, maintaining extracellular glutamate levels and protecting against excitotoxicity. The decreased EAAT1 levels in the dexmedetomidine groups may be due to the neuroprotective effects of dexmedetomidine, which attenuate the increase in EAAT1. Taken together, the present study may provide guidance for clinical drug use, suggesting that isoflurane anesthesia in pediatric patients could be combined with dexmedetomidine or other protective drugs to reduce brain damage. However, there are some limitations to our study. First, this study did not measure blood glucose, blood oxygen, or body temperature levels. Prolonged isoflurane anesthesia may cause hypoxia, hypercapnia, abnormal glucose levels, and hypothermia. All of these factors may lead to increased apoptosis, which can cause neurocognitive disorders. Second, NR2A and NR2B inhibitors were not used in the present study, so we could not definitively determine whether apoptosis and the expression of certain subunits were directly related. Third, the plasma levels of dexmedetomidine were not measured and we could not determine the exact difference in the effective drug dosage in rats treated with high and low doses of dexmedetomidine. Conclusion: Isoflurane caused an increase in apoptotic cells in neonatal rats, which was related to increased expression of NR2A and EAAT1 and decreased expression of NR2B. Isoflurane-induced neurotoxicity during critical developmental periods may be controlled by dexmedetomidine.
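The group comparisons reported in the Results follow the workflow given in the Statistical analyses subsection: each variable is tested for normality with the Shapiro–Wilk test, then compared across groups with one-way ANOVA plus Tukey's post hoc test, or with the Kruskal–Wallis H test when normality is not met. A minimal Python sketch of that decision logic (using SciPy and statsmodels rather than SPSS, with invented values for a single time point) is shown below; in practice it would be repeated for each protein and each of the six post-anesthesia time points.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical GAPDH-normalized caspase-3 values for one post-anesthesia time point;
# six rats per group are euthanized per time point in the study, but these numbers are invented.
groups = {
    "control":  np.array([0.41, 0.38, 0.44, 0.40, 0.37, 0.42]),
    "iso":      np.array([0.72, 0.80, 0.69, 0.77, 0.74, 0.83]),
    "iso_dex1": np.array([0.55, 0.60, 0.52, 0.58, 0.54, 0.61]),
    "iso_dex2": np.array([0.57, 0.63, 0.51, 0.60, 0.56, 0.59]),
}

# Shapiro-Wilk normality check on each group.
is_normal = all(stats.shapiro(vals)[1] > 0.05 for vals in groups.values())

if is_normal:
    # One-way ANOVA across the four groups, then Tukey's post hoc pairwise comparisons.
    f_stat, p_value = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
else:
    # Non-parametric alternative when normality is not met.
    h_stat, p_value = stats.kruskal(*groups.values())
    print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")
```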
Background: Considerable evidence supports the finding that the anesthetic agent isoflurane increases neuronal cell death in young rats. Recent studies have shown that dexmedetomidine can reduce isoflurane-induced neuronal injury, but the mechanism remains unclear. We investigated whether isoflurane causes neurotoxicity to the central nervous system by regulating the N-methyl-D-aspartate receptor (NMDAR) and excitatory amino acid transporter 1 (EAAT1) in young rats. Furthermore, we examined if dexmedetomidine could decrease isoflurane-induced neurotoxicity. Methods: Neonatal rats (postnatal day 7, n=144) were randomly divided into four groups of 36 animals each: control (saline injection without isoflurane); isoflurane (2% for 4 h); isoflurane + single dose of dexmedetomidine (75 µg/kg, 20 min before the start of 2% isoflurane for 4 h); and isoflurane + dual doses of dexmedetomidine (25 µg/kg, 20 min before and 2 h after start of isoflurane at 2% for 4 h). Six neonates from each group were euthanized at 2 h, 12 h, 24 h, 3 days, 7 days and 28 days post-anesthesia. Hippocampi were collected and processed for protein extraction. Expression levels of the NMDAR subunits NR2A and NR2B, EAAT1 and caspase-3 were measured by western blot analysis. Results: Protein levels of NR2A, EAAT1 and caspase-3 were significantly increased in the hippocampus of the isoflurane group from 2 h to 3 days, while NR2B levels were decreased. However, the isoflurane-induced increase in NR2A, EAAT1 and caspase-3 and the decrease in NR2B were ameliorated in rats treated with single or dual doses of dexmedetomidine. Isoflurane-induced neuronal damage in neonatal rats is due in part to the increase in NR2A and EAAT1 and the decrease in NR2B in the hippocampus. Conclusions: Dexmedetomidine protects the brain against isoflurane exposure through the regulation of NR2A, NR2B and EAAT1. With the same total dose of dexmedetomidine, single and dual administration provide essentially the same degree of protection.
Introduction: Each year, many newborns need to receive anesthesia for diagnostic or surgical purposes.1 Generally, the most commonly used anesthetics, including isoflurane and ketamine, exert their effects by inhibiting the N-methyl-D-aspartate receptor (NMDAR) and/or potentiating the gamma-aminobutyric acid type A receptor.1 The NMDAR is a heterotetramer with universal brain distribution. It is composed of NR1 and NR2 (2A, 2B, 2C, and 2D) subunits. NR2A and NR2B are the dominant subunits in the brain.2–5 NMDARs have critical roles in synapse plasticity, long-term potentiation (LTP), and long-term depression (LTD). A persistent strengthening of synapses induces LTP to improve learning and memory,6–9 while a long-lasting decrease in synaptic strength causes LTD and results in learning and memory deficits.10 Meanwhile, LTD is needed for plasticity in the brain and prevents overexcitation of circuits. Studies have shown that prolonged or multiple exposures to NMDAR inhibitors such as isoflurane may affect cell proliferation, interrupt synaptic transmission, and induce apoptosis and neuronal injury in the central nervous system (CNS) during development in rodents.1,10–13 Liu et al11 found that a short exposure to isoflurane increased NR2B expression in adult mice and improved their cognitive functions. However, prolonged exposure to isoflurane decreased NR2B levels, increased the production of apoptotic proteins, and resulted in cognitive dysfunction. The frontal lobe and hippocampus demonstrated an age-dependent decrease in NMDAR and NR2B expression, along with delayed short-term memory, long-term memory impairment, and spatial cognitive dysfunction.7 Glutamate plays critical roles in synaptic plasticity and development by mediating fast excitatory transmission. The extracellular glutamate concentration is kept low enough to protect neurons from glutamate excitotoxicity.14 Excessive release of glutamate may lead to neuronal damage through receptor-mediated excitotoxicity, which plays an important role in many CNS diseases. Excitatory amino acid transporter 1 (EAAT1) is an important glutamate transporter in neurons and glial cells, which limits excessive increases in glutamate levels and thereby reduces glutamate excitotoxicity.15 Dexmedetomidine is an agonist for α2-adrenoreceptors. It has been widely used in clinics and shows potent sedative, analgesic, and anxiolytic effects. Dexmedetomidine produces fewer stress responses and maintains stable cardiovascular function. It has been shown that dexmedetomidine protects neuronal cells against isoflurane-induced hippocampal neuronal injury in neonatal mice.16,17 However, the mechanism underlying this protective effect has yet to be determined. In the present study, we investigated whether prolonged exposure to isoflurane generates neurotoxicity by changing the expression of NMDAR subunits and EAAT1. We further investigated whether dexmedetomidine protects against isoflurane-induced neuronal injury and whether glutamate neurotransmission could be influenced. Conclusion: Isoflurane caused an increase in apoptotic cells in neonatal rats, which was related to increased expression of NR2A and EAAT1 and decreased expression of NR2B. Isoflurane-induced neurotoxicity during critical developmental periods may be controlled by dexmedetomidine.
Background: Considerable evidence supports the finding that the anesthetic agent isoflurane increases neuronal cell death in young rats. Recent studies have shown that dexmedetomidine can reduce isoflurane-induced neuronal injury, but the mechanism remains unclear. We investigated whether isoflurane causes neurotoxicity to the central nervous system by regulating the N-methyl-D-aspartate receptor (NMDAR) and excitatory amino acid transporter 1 (EAAT1) in young rats. Furthermore, we examined if dexmedetomidine could decrease isoflurane-induced neurotoxicity. Methods: Neonatal rats (postnatal day 7, n=144) were randomly divided into four groups of 36 animals each: control (saline injection without isoflurane); isoflurane (2% for 4 h); isoflurane + single dose of dexmedetomidine (75 µg/kg, 20 min before the start of 2% isoflurane for 4 h); and isoflurane + dual doses of dexmedetomidine (25 µg/kg, 20 min before and 2 h after start of isoflurane at 2% for 4 h). Six neonates from each group were euthanized at 2 h, 12 h, 24 h, 3 days, 7 days and 28 days post-anesthesia. Hippocampi were collected and processed for protein extraction. Expression levels of the NMDAR subunits NR2A and NR2B, EAAT1 and caspase-3 were measured by western blot analysis. Results: Protein levels of NR2A, EAAT1 and caspase-3 were significantly increased in the hippocampus of the isoflurane group from 2 h to 3 days, while NR2B levels were decreased. However, the isoflurane-induced increase in NR2A, EAAT1 and caspase-3 and the decrease in NR2B were ameliorated in rats treated with single or dual doses of dexmedetomidine. Isoflurane-induced neuronal damage in neonatal rats is due in part to the increase in NR2A and EAAT1 and the decrease in NR2B in the hippocampus. Conclusions: Dexmedetomidine protects the brain against isoflurane exposure through the regulation of NR2A, NR2B and EAAT1. With the same total dose of dexmedetomidine, single and dual administration provide essentially the same degree of protection.
6,018
385
[ 230, 217, 61, 308, 89, 139, 158, 94, 143, 40 ]
14
[ "isoflurane", "hours", "levels", "anesthesia", "group", "05", "dexmedetomidine", "days", "significantly", "12" ]
[ "nmdar inhibitors", "aspartate receptor nmdar", "nr2b levels hippocampi", "brain nmdars critical", "receptor nmdar heterotetramer" ]
null
[CONTENT] dexmedetomidine | isoflurane | neurotoxicity | apoptosis | glutamate [SUMMARY]
null
[CONTENT] dexmedetomidine | isoflurane | neurotoxicity | apoptosis | glutamate [SUMMARY]
[CONTENT] dexmedetomidine | isoflurane | neurotoxicity | apoptosis | glutamate [SUMMARY]
[CONTENT] dexmedetomidine | isoflurane | neurotoxicity | apoptosis | glutamate [SUMMARY]
[CONTENT] dexmedetomidine | isoflurane | neurotoxicity | apoptosis | glutamate [SUMMARY]
[CONTENT] Animals | Dexmedetomidine | Glutamic Acid | Injections, Intraperitoneal | Isoflurane | Male | Neurons | Neuroprotective Agents | Rats | Rats, Sprague-Dawley [SUMMARY]
null
[CONTENT] Animals | Dexmedetomidine | Glutamic Acid | Injections, Intraperitoneal | Isoflurane | Male | Neurons | Neuroprotective Agents | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Animals | Dexmedetomidine | Glutamic Acid | Injections, Intraperitoneal | Isoflurane | Male | Neurons | Neuroprotective Agents | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Animals | Dexmedetomidine | Glutamic Acid | Injections, Intraperitoneal | Isoflurane | Male | Neurons | Neuroprotective Agents | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] Animals | Dexmedetomidine | Glutamic Acid | Injections, Intraperitoneal | Isoflurane | Male | Neurons | Neuroprotective Agents | Rats | Rats, Sprague-Dawley [SUMMARY]
[CONTENT] nmdar inhibitors | aspartate receptor nmdar | nr2b levels hippocampi | brain nmdars critical | receptor nmdar heterotetramer [SUMMARY]
null
[CONTENT] nmdar inhibitors | aspartate receptor nmdar | nr2b levels hippocampi | brain nmdars critical | receptor nmdar heterotetramer [SUMMARY]
[CONTENT] nmdar inhibitors | aspartate receptor nmdar | nr2b levels hippocampi | brain nmdars critical | receptor nmdar heterotetramer [SUMMARY]
[CONTENT] nmdar inhibitors | aspartate receptor nmdar | nr2b levels hippocampi | brain nmdars critical | receptor nmdar heterotetramer [SUMMARY]
[CONTENT] nmdar inhibitors | aspartate receptor nmdar | nr2b levels hippocampi | brain nmdars critical | receptor nmdar heterotetramer [SUMMARY]
[CONTENT] isoflurane | hours | levels | anesthesia | group | 05 | dexmedetomidine | days | significantly | 12 [SUMMARY]
null
[CONTENT] isoflurane | hours | levels | anesthesia | group | 05 | dexmedetomidine | days | significantly | 12 [SUMMARY]
[CONTENT] isoflurane | hours | levels | anesthesia | group | 05 | dexmedetomidine | days | significantly | 12 [SUMMARY]
[CONTENT] isoflurane | hours | levels | anesthesia | group | 05 | dexmedetomidine | days | significantly | 12 [SUMMARY]
[CONTENT] isoflurane | hours | levels | anesthesia | group | 05 | dexmedetomidine | days | significantly | 12 [SUMMARY]
[CONTENT] glutamate | neuronal | memory | long | term | plasticity | receptor | cognitive | exposure isoflurane | nmdar [SUMMARY]
null
[CONTENT] 05 | isoflurane | group | hours | anesthesia | significantly | 05 05 | 01 | levels | respectively [SUMMARY]
[CONTENT] expression | periods | related increased expression | cells neonatal rats | cells neonatal | caused increase apoptotic cells | caused increase apoptotic | caused increase | caused | eaat1 decreased [SUMMARY]
[CONTENT] isoflurane | 05 | hours | group | anesthesia | levels | days | significantly | dexmedetomidine | 05 05 [SUMMARY]
[CONTENT] isoflurane | 05 | hours | group | anesthesia | levels | days | significantly | dexmedetomidine | 05 05 [SUMMARY]
[CONTENT] anesthesia ||| ||| transporter1 | EAAT1 ||| [SUMMARY]
null
[CONTENT] NR2A | EAAT1 | 2 | 3 days ||| NR2A | EAAT1 | NR2B ||| NR2A | EAAT1 [SUMMARY]
[CONTENT] NR2A | NR2B | EAAT1 ||| [SUMMARY]
[CONTENT] anesthesia ||| ||| transporter1 | EAAT1 ||| ||| day 7 | four | 36 | isoflurane | 2% | 4 | 75 µg/kg | 20 | 2% | 4 | 25 | 20 | 2 | 2% | 4 ||| Six | 2 | 12 | 24 | 3 days | 7 days | 28 days ||| ||| NMDAR | NR2B | EAAT1 ||| ||| NR2A | EAAT1 | 2 | 3 days ||| NR2A | EAAT1 | NR2B ||| NR2A | EAAT1 ||| NR2A | NR2B | EAAT1 ||| [SUMMARY]
[CONTENT] anesthesia ||| ||| transporter1 | EAAT1 ||| ||| day 7 | four | 36 | isoflurane | 2% | 4 | 75 µg/kg | 20 | 2% | 4 | 25 | 20 | 2 | 2% | 4 ||| Six | 2 | 12 | 24 | 3 days | 7 days | 28 days ||| ||| NMDAR | NR2B | EAAT1 ||| ||| NR2A | EAAT1 | 2 | 3 days ||| NR2A | EAAT1 | NR2B ||| NR2A | EAAT1 ||| NR2A | NR2B | EAAT1 ||| [SUMMARY]
Pharmacokinetics of thalidomide in dogs: can feeding affect it? A preliminary study.
33016014
Tumor-associated neoangiogenesis is a crucial target for antitumor therapies. Thalidomide (TAL) is a promising antiangiogenic drug that has recently been used in the treatment of several malignancies in dogs.
BACKGROUND
Six healthy adult female Labradors were enrolled according to a randomized single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study design. The dogs were administered a single 400 mg capsule of TAL in fasted and fed conditions. Blood was collected from 15 min to 48 h after dosing, and TAL quantified in plasma by a validated high-performance liquid chromatography method. The pharmacokinetics of TAL were analyzed using a non-compartmental approach.
METHODS
TAL concentration was quantifiable up to 10 h and 24 h after fasted and fed conditions, respectively. Cmax (fasted, 1.34 ± 0.12 μg/mL; fed, 2.47 ± 0.19 μg/mL) and Tmax (fasted, 3 h; fed, 10 h) differed substantially between the 2 groups. AUC and t1/2λz were significantly higher in fed (42.46 ± 6.64 mg × h/L; 17.14 ± 4.68 h) compared to fasted (12.38 ± 1.13 mg × h/L; 6.55 ± 1.25 h) dogs. The relative oral bioavailability of TAL for the fasted group was low (36.92% ± 3.28%).
RESULTS
Feeding affects the pharmacokinetics of oral TAL in dogs, producing delayed but greater absorption and a different rate of elimination. These findings are of importance in clinical veterinary settings and represent a starting point for further related studies.
CONCLUSIONS
[ "Administration, Oral", "Angiogenesis Inhibitors", "Animals", "Area Under Curve", "Biological Availability", "Chromatography, High Pressure Liquid", "Cross-Over Studies", "Dogs", "Fasting", "Female", "Half-Life", "Thalidomide" ]
7533387
INTRODUCTION
Thalidomide (TAL) was first synthesised in 1954, and was used clinically in Europe as a non-barbiturate hypno-sedative and antiemetic drug for morning sickness. It was thought that the sedative effect of TAL was generated by a different mechanism of action than that of barbiturates. This led to the belief that TAL was a ‘safe’ drug, with little CNS and respiratory depression or muscle incoordination [1], and no deaths from overdose or attempted suicide have ever been recorded [2]. However, in 1961 TAL was found to have a teratogenic effect in humans, and so was withdrawn from the market. Despite its known teratogenicity, by 1965 TAL was the drug of choice for erythema nodosum leprosum [3]. The safety profile of TAL was not completely determined until 1998 [4], and since then, several trials in inflammatory and oncologic conditions have been run. TAL has shown promising antitumour activity in several malignancies and has been proposed as a drug of choice in multiple myeloma [5,6,7,8,9]. Neoangiogenesis is a well-recognized hallmark of cancer [10]. Today, tumour-associated neoangiogenesis is a crucial target for antitumour therapy. Several studies have shown that the tumour microenvironment is able to induce and promote neoangiogenesis [10,11]. The potential anti-angiogenic effects of TAL were suspected in the early 1960s but were only confirmed in the 1990s [12,13]. To date, the precise mechanisms responsible for the clinical activity of TAL have not yet been established. However, TAL has been shown to inhibit angiogenesis induced by basic fibroblast growth factor in rabbit cornea or by vascular endothelial growth factor in a murine model of corneal vascularization [12,14]. TAL also reduced interleukin-6 (IL-6), interleukin-1b (IL-1b), interleukin-10 (IL-10) and tumour necrosis factor-α production in an in vitro model [15,16]. TAL has been used in canine chemotherapy for the treatment of hemangiosarcoma [17,18], pulmonary [19] and mammary carcinoma [20]. Equivalent or even longer survival times have been reported compared to traditional intensive-dose chemotherapy. Unlike many other chemotherapeutic drugs, TAL is relatively well tolerated by dogs. Experimental trials have not found significant toxicity in Beagles treated for up to 53 weeks with a dose of up to 1,000 mg/kg/day [21]. To date, the dose of TAL proposed for the treatment of tumours in canine patients has been empirically selected, with studies using a wide range of doses. Indeed, doses in the range of 2 to 26 mg/kg/day or 100–400 mg/dog per day have been reported [17,19,22,23]. A dose regimen selected based on scientific data is thus necessary in order to optimise TAL therapy in canine patients. To the best of the author's knowledge, no studies on the pharmacokinetics of TAL in dogs have been reported in the literature. The aim of this study was therefore to assess the pharmacokinetics of TAL after single oral administration in dogs. Additionally, the likely influence of feeding on the pharmacokinetic profile of TAL in dogs has been preliminarily investigated.
null
null
RESULTS
The analytical method showed a good linearity in the range between 0.05 and 10 μg/mL with a determination coefficient (R2) above 0.994 (y = 0.0976x − 0.0456). The intra- and inter-day precision resulted in coefficient of variation < 20%. The mean extraction recovery of TAL was 72.09% ± 5.04%; the LOD and LOQ were 0.05 μg/mL and 0.5 μg/mL, respectively. In the first phase of the study one dog in group 2 (fed) showed some adverse effects 12 h after TAL administration. These included shaking, stiff walk, staggering and whining. However, the blood samples were still collected at each timepoint, and the dog completely recovered after a few hours. It was replaced in phase 2 with another dog. In all the other experimental animals no adverse effects and no behavioural or health alterations were observed during or after the study. Plasma TAL concentration was quantifiable up to 10 h and 24 h after oral administration of 400 mg/dog in fasted and in fed conditions, respectively (Fig. 1). The main pharmacokinetic estimates are reported in Table 1. One fed dog in phase 2 showed a short Tmax and a higher Cmax compared with other dogs in the same group, as well as a more similar pharmacokinetic profile to the fasting group. This individual data set was considered as an outlier, and was excluded from the pharmacokinetic analyses. TAL, thalidomide. Values are presented as mean ± SE or median value (range). TAL, thalidomide; λz, terminal phase rate constant; t1/2λz, terminal half-life; Cmax, peak plasma concentration; Tmax, time of peak concentration; Cl/F, plasma clearance normalized for F; V/F, volume of distribution normalized for F; AUC0-last, area under the curve from 0 to last time collected samples; AUC0-∞, area under the curve from 0 h to infinity; MRT0-∞, mean residence time; F, bioavailability. *p < 0.05; †Values computed on 5 dogs; ‡Value computed on 4 dogs; §Values normalized for the dose expressed in mg/kg. Cmax, normalized for the dose expressed in mg/kg, differed substantially between the 2 groups (fasted, 1.34 ± 0.12 µg/mL; fed, 2.47 ± 0.19 µg/mL). Tmax differed considerably between the fasted (3 h) and the fed (10 h) animals. The t1/2λz values were variable but significantly different between the groups (fasted, 6.55 ± 1.25 h; fed, 17.14 ± 4.68 h), in-line with a different λz (fasted, 0.12 ± 0.02 1/h; fed, 0.05 ± 0.01 1/h). The AUC value was significantly higher in the fed group (normalized for the dose expressed in mg/kg: fasted, 12.38 ± 1.13 mg × h/L; fed, 42.46 ± 6.64 mg × h/L). As a result, the relative oral bioavailability of TAL for the fasted group was low (36.92% ± 3.28%).
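The reported calibration line, y = 0.0976x − 0.0456, can be inverted to back-calculate plasma TAL concentrations from measured peak-area ratios, and the reported LOD/LOQ can be used to flag values outside the quantifiable range. The sketch below assumes the conventional axis assignment (y = TAL/IS peak-area ratio, x = concentration in µg/mL); the example peak-area ratios are hypothetical.

```python
# Back-calculation from the reported calibration line y = 0.0976x - 0.0456
# (assumed: y = TAL/IS peak-area ratio, x = TAL concentration in ug/mL).
SLOPE = 0.0976
INTERCEPT = -0.0456
LOQ = 0.5    # ug/mL, as reported
LOD = 0.05   # ug/mL, as reported

def back_calculate(area_ratio: float) -> float:
    """Invert the calibration line to obtain a concentration (ug/mL) from a peak-area ratio."""
    return (area_ratio - INTERCEPT) / SLOPE

# Hypothetical peak-area ratios measured in three unknown plasma samples.
for ratio in (0.002, 0.08, 0.90):
    conc = back_calculate(ratio)
    if conc < LOD:
        flag = "below LOD"
    elif conc < LOQ:
        flag = "below LOQ"
    else:
        flag = "quantifiable"
    print(f"area ratio {ratio:.3f} -> {conc:.2f} ug/mL ({flag})")
```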
null
null
[ "Drugs and chemicals", "Animals and experimental design", "Sample extraction procedure", "HPLC conditions", "Method validation and quantification", "Pharmacokinetic and statistical analysis" ]
[ "TAL for analytical testing (purity ≥ 99%) and phthalimide (purity ≥ 99%), used as internal standard (IS), were provided by Sigma-Aldrich (USA). Ammonium acetate, methanol (CH3OH), acetonitrile (CH3CN) and trifluoroacetic acid (98%) were purchased from VWR International (USA). Acetic acid 99–100% (CH3COOH) was obtained from J.T. Baker (USA). The water used was ultrapure grade, purified using a Milli- Q UV Purification System (Millipore Corporation, USA).", "Six adult female (2–7 years) Labradors with an average body weight of 34.6 ± 1.69 kg (median, 34.25 kg; range, 28.5–42.4 kg) were used. The experiment was approved by the University of Life Sciences, (Lublin, Poland) welfare ethics committee and carried out in accordance with the European law (2010/63/UE). The dogs were determined to be clinically healthy based on physical examination, serum chemistry and haematological analyses performed 48 h before the beginning of the study and were not treated with other therapeutic agents.\nThe dogs were randomly divided in to 2 groups (each containing 3 animals) using Research Randomizer software, and participated in a single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study.\nThe drug was prepared by a compounding pharmacy, and administered as capsules containing 400 mg of pure TAL. Since animals had different body weights, the dose administered was an average of 11.74 ± 0.56 mg/kg (median, 11.76 mg/kg; range, 9.4–14.0 mg/kg).\nIn the first phase, group 1 (n = 3) was administered with 400 mg/dog (one capsule) after over-night fasting and group 2 (n = 3) was fed prior to and after administration of the same dose. The capsule was placed on the back of the tongue and 5 mL of water was administered to ensure that the capsule was swallowed. Canned dog food (Nature's Logic Canine Feast, USA) was provided as half the total amount 15 min before dosing, with the rest provided immediately after TAL administration. On each study day, in order to avoid the possibility of coprophagia impacting on the study, the dogs were kept in individual boxes for 48 h and observed closely during this period. A 2-week wash-out period was observed between the phases, then the treatment groups were inverted, and the experiment was repeated.\nThe dogs were checked daily for visible adverse effects for 7 days following completion of the study. To facilitate blood sampling, 1 h before the commencement of the study, an 18-gauge soft cannula (Delta Med, Italy) was inserted in the right medial saphenous vein. Blood samples (3 mL) were withdrawn into lithium heparin tubes (Aptaca Spa, Italy) at 15, 30, 45 min and 1, 1.5, 2, 4, 6, 8, 10, 16, 24, 34 and 48 h after administration of TAL. Blood samples were immediately centrifuged for 10 min at 1,500 × g at 4°C and the plasma was harvested. Since TAL is not stable in plasma [24], 2.0 mL of a stabilizer-solution (CH3OH/CH3CN, 1/1 (v/v) + CH3COOH 2%) [25] was immediately added to each mL of plasma sample as soon as it was harvested. Samples were transferred into cryo-vials and immediately frozen and stored at −80°C. Samples were analysed within 2 weeks of collection.", "Analysis was performed according to Saccomanni et al. [25], and slightly modified. In brief, an aliquot of 1.5 mL of sample (containing 0.5 mL of plasma and 1 mL of stabilizer-solution) was added to a 2.0 mL micro-centrifuge tube. 
After the addition of 100 µL of IS (50 µg/mL) and 2.0 µL of trifluoroacetic acid (deproteinizing agent), samples were vortexed for 30 sec, then sonicated and shaken at 60 oscillation/min for 10 min. Samples were then centrifuged (5,000 × g) for 10 min, and 1 mL of the organic layer was transferred into a clean tube and dried under a gentle stream of nitrogen at 30°C. The residue obtained was reconstituted with 100 µL of CH3OH/CH3CN, 1/1 (v/v) and after centrifugation (5,000 × g, 5 min) 20 µL of the upper layer was injected onto the high-performance liquid chromatography (HPLC).", "TAL in dog plasma was determined using an HPLC coupled with diode array detector (Series 2000; Jasco Europe, Italy) according to a slightly modified version of the method described by Saccomanni et al. [25]. A Gemini C18 analytical column (250 × 4.6 mm, 5 μm particle size; Phenomenex, USA) maintained at 25°C by a Peltier System (LC-4000; Jasco Europe) was used for the chromatographic analysis. The mobile phase consisted of CH3CN/10 mM acetate ammonium (pH 5.5) solution (25/75, v/v), which was freshly prepared each day before the analysis. The flow rate of the mobile phase was set at 1 mL/min. The wavelength was set at 220 nm.", "The analytical method was fully revalidated for dog plasma according to the European Medicines Agency guidelines [26] by examining the within-run precision, calculated from similar responses for 6 repeats of 3 control samples (0.1, 0.5, and 1 μg/mL) in one run. The between-run precision was determined by comparing the calculated response of the low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 3 consecutive daily runs (a total of 6 runs). The assay accuracy for within-run and between-runs was established by determining the ratio of calculated response to expected response for low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 6 runs. The limit of quantification (LOQ) was determined as signal-to-noise ratio of 10, and the limit of detection (LOD) as the signal-to-noise ratio of 3.\nTAL and IS stock solutions were prepared in a mixture of CH3OH/CH3CN, 1/1 (v/v) and in water, respectively, at a concentration of 1,000 μg/mL and stored at −80°C. These solutions were freshly prepared every 2 weeks. TAL stock solution was then diluted to reach concentrations of 0.25, 2.5, 5, 25 and 50 μg/mL and stored at −20°C. These last concentrations were then diluted immediately prior to use to reach the final concentrations of 0.05, 0.5, 1, 5, 10 μg/mL. These final dilutions were then used in preparation of a 5-point calibration curve of TAL in plasma matrices.\nStandard curves were constructed with standard TAL concentrations vs ratio of TAL/IS peak areas. The linearity of the regression curve was assessed based on the residual plot, the fit test and the back-calculation. Extraction recovery was evaluated by comparing the response (in area) of high, middle, and low standards and the IS, spiked into blank canine plasma (control), with the response of equivalent standards.", "The concentration of TAL vs. time was pharmacokinetically analyzed using a non-compartmental approach (ThothPro 4.3; ThothPro LLC, Poland). Cmax was the peak plasma concentration, and Tmax was the time at the peak plasma concentration. 
The elimination half-life (t1/2λz) was calculated using linear least squares regression analysis of the concentration-time curve, and the area under the curve (AUC) was calculated by the linear-up log-down rule to the final concentration-time point (Ct). From these values, the apparent volume of distribution (V = dose × area under the first moment curve [AUMC]/AUC2), mean residence time (MRT = AUMC/AUC) and clearance (Cl = dose/AUC) were determined. The relative bioavailability (F) was calculated for each dog using the following equation:\nData were found to be normally distributed (Shapiro-Wilk test). Paired t-tests were used to investigate statistically significant changes in pharmacokinetic estimates between groups (GraphPad Software; GraphPad, USA). The pharmacokinetic parameters are presented as means ± SE and Tmax (categorical variable) is expressed as median and range. In all the experiments, differences were considered significant if p < 0.05." ]
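The non-compartmental analysis described above can be sketched in a few lines of NumPy: Cmax and Tmax read directly from the profile, λz and t1/2λz from a log-linear regression of the terminal points, and AUC0-last from the linear-up log-down rule. The relative bioavailability equation itself is not reproduced in the extracted text; the sketch assumes the usual form for equal doses, F = AUCfasted/AUCfed × 100, which is consistent with the reported value of about 37%. All concentration–time values below are invented for illustration and do not reproduce the study data.

```python
import numpy as np

def nca(times, concs, n_terminal=3):
    """Basic non-compartmental estimates: Cmax, Tmax, lambda_z, t1/2 and AUC0-last
    (linear-up log-down trapezoidal rule), mirroring the parameters reported in the study."""
    times = np.asarray(times, dtype=float)
    concs = np.asarray(concs, dtype=float)
    cmax = concs.max()
    tmax = times[concs.argmax()]

    # Terminal rate constant (lambda_z) from log-linear regression of the last points.
    slope, _ = np.polyfit(times[-n_terminal:], np.log(concs[-n_terminal:]), 1)
    lambda_z = -slope
    t_half = np.log(2) / lambda_z

    # AUC to the last quantifiable concentration, linear-up log-down rule.
    auc = 0.0
    for i in range(len(times) - 1):
        t1, t2 = times[i], times[i + 1]
        c1, c2 = concs[i], concs[i + 1]
        if c2 >= c1 or c1 <= 0 or c2 <= 0:
            auc += (t2 - t1) * (c1 + c2) / 2.0               # linear trapezoid (rising phase)
        else:
            auc += (t2 - t1) * (c1 - c2) / np.log(c1 / c2)   # log trapezoid (declining phase)

    return {"Cmax": cmax, "Tmax": tmax, "lambda_z": lambda_z,
            "t1/2": t_half, "AUC0-last": auc}

# Invented concentration-time profiles (ug/mL) for one dog under each condition.
fasted = nca([0.25, 0.5, 1, 2, 3, 4, 6, 8, 10],
             [0.30, 0.60, 1.00, 1.25, 1.35, 1.20, 0.80, 0.45, 0.25])
fed = nca([0.5, 1, 2, 4, 6, 8, 10, 16, 24],
          [0.10, 0.20, 0.50, 1.10, 1.70, 2.20, 2.40, 1.60, 0.80])

# Assumed relative bioavailability for equal doses: F = AUC_fasted / AUC_fed * 100.
f_rel = fasted["AUC0-last"] / fed["AUC0-last"] * 100
print(fasted)
print(fed)
print(f"Relative bioavailability (fasted vs fed): {f_rel:.1f}%")
```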
[ null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Drugs and chemicals", "Animals and experimental design", "Sample extraction procedure", "HPLC conditions", "Method validation and quantification", "Pharmacokinetic and statistical analysis", "RESULTS", "DISCUSSION" ]
[ "Thalidomide (TAL) was first synthesised in 1954, and was used clinically in Europe as a non-barbiturate hypno-sedative and antiemetic drug for morning sickness. It was thought that the sedative effect of TAL was generated by a different mechanism of action than that of barbiturates. This led to the belief that TAL was a ‘safe’ drug, with little CNS and respiratory depression or muscle incoordination [1], and no deaths from overdose or attempted suicide have ever been recorded [2]. However, in 1961 TAL was found to have a teratogenic effect in humans, and so was withdrawn from market. Despite its known teratogenicity, by 1965 TAL was the drug of choice for erythema nodosum leprosum [3]. The safety profile of TAL was not completely determined until 1998 [4], and since then, several trials in inflammatory and oncologic conditions have been run. TAL has shown promising antitumour activity in several malignancies and has been proposed as a drug of choice in multiple myeloma [56789].\nNeoangiogenesis is a well recognized hallmark of cancer [10]. Today, tumoral-associated neoangiogenesis is a crucial target for antitumoral therapy. Several studies have shown that the tumour microenvironment is able to induce and promote neoangiogenesis [1011]. The potential anti-angiogenic effects of TAL were suspected in the early 1960s but were only confirmed in the 1990s [1213]. To date, the precise mechanisms responsible for the clinical activity of TAL have not yet been estabilished. However, TAL has been shown to inhibit angiogenesis induced by basic fibroblast growth factor in rabbit cornea or by vascular endothelial growth factor in a murine model of corneal vascularization [1214]. TAL also reduced interleukin-6 (IL-6), 1b (IL-1b), 10 (IL-10) and tumour necrosis factor-α production in an in vitro model [1516].\nTAL has been used in canine chemotherapy for the treatment of hemangiosarcoma [1718], pulmonary [19] and mammary carcinoma [20]. Equivalent or even longer survival times have been reported compared to traditional intensive-dose chemotherapy. Unlike many other chemotherapeutic drugs, TAL is relatively well tolerated by dogs. Experimental trials have not found significant toxicity in Beagles treated for up 53 weeks with a dose of up to 1,000 mg/kg/day [21]. To date, the dose of TAL proposed for the treatment of tumours in canine patients has been empirically selected, with studies using a wide range of doses. Indeed, dose in the range of 2 to 26 mg/kg/day or 100–400 mg/dog per day have been reported [17192223]. A dose regimen selected based on scientific data is thus necessary in order to optimise TAL therapy in canine patients.\nTo the best of the author's knowledge, no studies on the pharmacokinetics of TAL in dogs have been reported in the literature. The aim of this study was therefore to assess the pharmacokinetics of TAL after single oral administration in dogs. Additionally, the likely influence of feeding on the pharmacokinetic profile of TAL in dogs has been preliminarily investigated.", " Drugs and chemicals TAL for analytical testing (purity ≥ 99%) and phthalimide (purity ≥ 99%), used as internal standard (IS), were provided by Sigma-Aldrich (USA). Ammonium acetate, methanol (CH3OH), acetonitrile (CH3CN) and trifluoroacetic acid (98%) were purchased from VWR International (USA). Acetic acid 99–100% (CH3COOH) was obtained from J.T. Baker (USA). 
The water used was ultrapure grade, purified using a Milli- Q UV Purification System (Millipore Corporation, USA).\nTAL for analytical testing (purity ≥ 99%) and phthalimide (purity ≥ 99%), used as internal standard (IS), were provided by Sigma-Aldrich (USA). Ammonium acetate, methanol (CH3OH), acetonitrile (CH3CN) and trifluoroacetic acid (98%) were purchased from VWR International (USA). Acetic acid 99–100% (CH3COOH) was obtained from J.T. Baker (USA). The water used was ultrapure grade, purified using a Milli- Q UV Purification System (Millipore Corporation, USA).\n Animals and experimental design Six adult female (2–7 years) Labradors with an average body weight of 34.6 ± 1.69 kg (median, 34.25 kg; range, 28.5–42.4 kg) were used. The experiment was approved by the University of Life Sciences, (Lublin, Poland) welfare ethics committee and carried out in accordance with the European law (2010/63/UE). The dogs were determined to be clinically healthy based on physical examination, serum chemistry and haematological analyses performed 48 h before the beginning of the study and were not treated with other therapeutic agents.\nThe dogs were randomly divided in to 2 groups (each containing 3 animals) using Research Randomizer software, and participated in a single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study.\nThe drug was prepared by a compounding pharmacy, and administered as capsules containing 400 mg of pure TAL. Since animals had different body weights, the dose administered was an average of 11.74 ± 0.56 mg/kg (median, 11.76 mg/kg; range, 9.4–14.0 mg/kg).\nIn the first phase, group 1 (n = 3) was administered with 400 mg/dog (one capsule) after over-night fasting and group 2 (n = 3) was fed prior to and after administration of the same dose. The capsule was placed on the back of the tongue and 5 mL of water was administered to ensure that the capsule was swallowed. Canned dog food (Nature's Logic Canine Feast, USA) was provided as half the total amount 15 min before dosing, with the rest provided immediately after TAL administration. On each study day, in order to avoid the possibility of coprophagia impacting on the study, the dogs were kept in individual boxes for 48 h and observed closely during this period. A 2-week wash-out period was observed between the phases, then the treatment groups were inverted, and the experiment was repeated.\nThe dogs were checked daily for visible adverse effects for 7 days following completion of the study. To facilitate blood sampling, 1 h before the commencement of the study, an 18-gauge soft cannula (Delta Med, Italy) was inserted in the right medial saphenous vein. Blood samples (3 mL) were withdrawn into lithium heparin tubes (Aptaca Spa, Italy) at 15, 30, 45 min and 1, 1.5, 2, 4, 6, 8, 10, 16, 24, 34 and 48 h after administration of TAL. Blood samples were immediately centrifuged for 10 min at 1,500 × g at 4°C and the plasma was harvested. Since TAL is not stable in plasma [24], 2.0 mL of a stabilizer-solution (CH3OH/CH3CN, 1/1 (v/v) + CH3COOH 2%) [25] was immediately added to each mL of plasma sample as soon as it was harvested. Samples were transferred into cryo-vials and immediately frozen and stored at −80°C. Samples were analysed within 2 weeks of collection.\nSix adult female (2–7 years) Labradors with an average body weight of 34.6 ± 1.69 kg (median, 34.25 kg; range, 28.5–42.4 kg) were used. 
The experiment was approved by the University of Life Sciences, (Lublin, Poland) welfare ethics committee and carried out in accordance with the European law (2010/63/UE). The dogs were determined to be clinically healthy based on physical examination, serum chemistry and haematological analyses performed 48 h before the beginning of the study and were not treated with other therapeutic agents.\nThe dogs were randomly divided in to 2 groups (each containing 3 animals) using Research Randomizer software, and participated in a single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study.\nThe drug was prepared by a compounding pharmacy, and administered as capsules containing 400 mg of pure TAL. Since animals had different body weights, the dose administered was an average of 11.74 ± 0.56 mg/kg (median, 11.76 mg/kg; range, 9.4–14.0 mg/kg).\nIn the first phase, group 1 (n = 3) was administered with 400 mg/dog (one capsule) after over-night fasting and group 2 (n = 3) was fed prior to and after administration of the same dose. The capsule was placed on the back of the tongue and 5 mL of water was administered to ensure that the capsule was swallowed. Canned dog food (Nature's Logic Canine Feast, USA) was provided as half the total amount 15 min before dosing, with the rest provided immediately after TAL administration. On each study day, in order to avoid the possibility of coprophagia impacting on the study, the dogs were kept in individual boxes for 48 h and observed closely during this period. A 2-week wash-out period was observed between the phases, then the treatment groups were inverted, and the experiment was repeated.\nThe dogs were checked daily for visible adverse effects for 7 days following completion of the study. To facilitate blood sampling, 1 h before the commencement of the study, an 18-gauge soft cannula (Delta Med, Italy) was inserted in the right medial saphenous vein. Blood samples (3 mL) were withdrawn into lithium heparin tubes (Aptaca Spa, Italy) at 15, 30, 45 min and 1, 1.5, 2, 4, 6, 8, 10, 16, 24, 34 and 48 h after administration of TAL. Blood samples were immediately centrifuged for 10 min at 1,500 × g at 4°C and the plasma was harvested. Since TAL is not stable in plasma [24], 2.0 mL of a stabilizer-solution (CH3OH/CH3CN, 1/1 (v/v) + CH3COOH 2%) [25] was immediately added to each mL of plasma sample as soon as it was harvested. Samples were transferred into cryo-vials and immediately frozen and stored at −80°C. Samples were analysed within 2 weeks of collection.\n Sample extraction procedure Analysis was performed according to Saccomanni et al. [25], and slightly modified. In brief, an aliquot of 1.5 mL of sample (containing 0.5 mL of plasma and 1 mL of stabilizer-solution) was added to a 2.0 mL micro-centrifuge tube. After the addition of 100 µL of IS (50 µg/mL) and 2.0 µL of trifluoroacetic acid (deproteinizing agent), samples were vortexed for 30 sec, then sonicated and shaken at 60 oscillation/min for 10 min. Samples were then centrifuged (5,000 × g) for 10 min, and 1 mL of the organic layer was transferred into a clean tube and dried under a gentle stream of nitrogen at 30°C. The residue obtained was reconstituted with 100 µL of CH3OH/CH3CN, 1/1 (v/v) and after centrifugation (5,000 × g, 5 min) 20 µL of the upper layer was injected onto the high-performance liquid chromatography (HPLC).\nAnalysis was performed according to Saccomanni et al. [25], and slightly modified. 
In brief, an aliquot of 1.5 mL of sample (containing 0.5 mL of plasma and 1 mL of stabilizer-solution) was added to a 2.0 mL micro-centrifuge tube. After the addition of 100 µL of IS (50 µg/mL) and 2.0 µL of trifluoroacetic acid (deproteinizing agent), samples were vortexed for 30 sec, then sonicated and shaken at 60 oscillation/min for 10 min. Samples were then centrifuged (5,000 × g) for 10 min, and 1 mL of the organic layer was transferred into a clean tube and dried under a gentle stream of nitrogen at 30°C. The residue obtained was reconstituted with 100 µL of CH3OH/CH3CN, 1/1 (v/v) and after centrifugation (5,000 × g, 5 min) 20 µL of the upper layer was injected onto the high-performance liquid chromatography (HPLC).\n HPLC conditions TAL in dog plasma was determined using an HPLC coupled with diode array detector (Series 2000; Jasco Europe, Italy) according to a slightly modified version of the method described by Saccomanni et al. [25]. A Gemini C18 analytical column (250 × 4.6 mm, 5 μm particle size; Phenomenex, USA) maintained at 25°C by a Peltier System (LC-4000; Jasco Europe) was used for the chromatographic analysis. The mobile phase consisted of CH3CN/10 mM acetate ammonium (pH 5.5) solution (25/75, v/v), which was freshly prepared each day before the analysis. The flow rate of the mobile phase was set at 1 mL/min. The wavelength was set at 220 nm.\nTAL in dog plasma was determined using an HPLC coupled with diode array detector (Series 2000; Jasco Europe, Italy) according to a slightly modified version of the method described by Saccomanni et al. [25]. A Gemini C18 analytical column (250 × 4.6 mm, 5 μm particle size; Phenomenex, USA) maintained at 25°C by a Peltier System (LC-4000; Jasco Europe) was used for the chromatographic analysis. The mobile phase consisted of CH3CN/10 mM acetate ammonium (pH 5.5) solution (25/75, v/v), which was freshly prepared each day before the analysis. The flow rate of the mobile phase was set at 1 mL/min. The wavelength was set at 220 nm.\n Method validation and quantification The analytical method was fully revalidated for dog plasma according to the European Medicines Agency guidelines [26] by examining the within-run precision, calculated from similar responses for 6 repeats of 3 control samples (0.1, 0.5, and 1 μg/mL) in one run. The between-run precision was determined by comparing the calculated response of the low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 3 consecutive daily runs (a total of 6 runs). The assay accuracy for within-run and between-runs was established by determining the ratio of calculated response to expected response for low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 6 runs. The limit of quantification (LOQ) was determined as signal-to-noise ratio of 10, and the limit of detection (LOD) as the signal-to-noise ratio of 3.\nTAL and IS stock solutions were prepared in a mixture of CH3OH/CH3CN, 1/1 (v/v) and in water, respectively, at a concentration of 1,000 μg/mL and stored at −80°C. These solutions were freshly prepared every 2 weeks. TAL stock solution was then diluted to reach concentrations of 0.25, 2.5, 5, 25 and 50 μg/mL and stored at −20°C. These last concentrations were then diluted immediately prior to use to reach the final concentrations of 0.05, 0.5, 1, 5, 10 μg/mL. 
These final dilutions were then used in preparation of a 5-point calibration curve of TAL in plasma matrices.\nStandard curves were constructed with standard TAL concentrations vs ratio of TAL/IS peak areas. The linearity of the regression curve was assessed based on the residual plot, the fit test and the back-calculation. Extraction recovery was evaluated by comparing the response (in area) of high, middle, and low standards and the IS, spiked into blank canine plasma (control), with the response of equivalent standards.\nThe analytical method was fully revalidated for dog plasma according to the European Medicines Agency guidelines [26] by examining the within-run precision, calculated from similar responses for 6 repeats of 3 control samples (0.1, 0.5, and 1 μg/mL) in one run. The between-run precision was determined by comparing the calculated response of the low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 3 consecutive daily runs (a total of 6 runs). The assay accuracy for within-run and between-runs was established by determining the ratio of calculated response to expected response for low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 6 runs. The limit of quantification (LOQ) was determined as signal-to-noise ratio of 10, and the limit of detection (LOD) as the signal-to-noise ratio of 3.\nTAL and IS stock solutions were prepared in a mixture of CH3OH/CH3CN, 1/1 (v/v) and in water, respectively, at a concentration of 1,000 μg/mL and stored at −80°C. These solutions were freshly prepared every 2 weeks. TAL stock solution was then diluted to reach concentrations of 0.25, 2.5, 5, 25 and 50 μg/mL and stored at −20°C. These last concentrations were then diluted immediately prior to use to reach the final concentrations of 0.05, 0.5, 1, 5, 10 μg/mL. These final dilutions were then used in preparation of a 5-point calibration curve of TAL in plasma matrices.\nStandard curves were constructed with standard TAL concentrations vs ratio of TAL/IS peak areas. The linearity of the regression curve was assessed based on the residual plot, the fit test and the back-calculation. Extraction recovery was evaluated by comparing the response (in area) of high, middle, and low standards and the IS, spiked into blank canine plasma (control), with the response of equivalent standards.\n Pharmacokinetic and statistical analysis The concentration of TAL vs. time was pharmacokinetically analyzed using a non-compartmental approach (ThothPro 4.3; ThothPro LLC, Poland). Cmax was the peak plasma concentration, and Tmax was the time at the peak plasma concentration. The elimination half-life (t1/2λz) was calculated using linear least squares regression analysis of the concentration-time curve, and the area under the curve (AUC) was calculated by the linear-up log-down rule to the final concentration-time point (Ct). From these values, the apparent volume of distribution (V = dose × area under the first moment curve [AUMC]/AUC2), mean residence time (MRT = AUMC/AUC) and clearance (Cl = dose/AUC) were determined. The relative bioavailability (F) was calculated for each dog using the following equation:\nData were found to be normally distributed (Shapiro-Wilk test). Paired t-tests were used to investigate statistically significant changes in pharmacokinetic estimates between groups (GraphPad Software; GraphPad, USA). The pharmacokinetic parameters are presented as means ± SE and Tmax (categorical variable) is expressed as median and range. 
In all the experiments, differences were considered significant if p < 0.05.\nThe concentration of TAL vs. time was pharmacokinetically analyzed using a non-compartmental approach (ThothPro 4.3; ThothPro LLC, Poland). Cmax was the peak plasma concentration, and Tmax was the time at the peak plasma concentration. The elimination half-life (t1/2λz) was calculated using linear least squares regression analysis of the concentration-time curve, and the area under the curve (AUC) was calculated by the linear-up log-down rule to the final concentration-time point (Ct). From these values, the apparent volume of distribution (V = dose × area under the first moment curve [AUMC]/AUC2), mean residence time (MRT = AUMC/AUC) and clearance (Cl = dose/AUC) were determined. The relative bioavailability (F) was calculated for each dog using the following equation:\nData were found to be normally distributed (Shapiro-Wilk test). Paired t-tests were used to investigate statistically significant changes in pharmacokinetic estimates between groups (GraphPad Software; GraphPad, USA). The pharmacokinetic parameters are presented as means ± SE and Tmax (categorical variable) is expressed as median and range. In all the experiments, differences were considered significant if p < 0.05.", "TAL for analytical testing (purity ≥ 99%) and phthalimide (purity ≥ 99%), used as internal standard (IS), were provided by Sigma-Aldrich (USA). Ammonium acetate, methanol (CH3OH), acetonitrile (CH3CN) and trifluoroacetic acid (98%) were purchased from VWR International (USA). Acetic acid 99–100% (CH3COOH) was obtained from J.T. Baker (USA). The water used was ultrapure grade, purified using a Milli- Q UV Purification System (Millipore Corporation, USA).", "Six adult female (2–7 years) Labradors with an average body weight of 34.6 ± 1.69 kg (median, 34.25 kg; range, 28.5–42.4 kg) were used. The experiment was approved by the University of Life Sciences, (Lublin, Poland) welfare ethics committee and carried out in accordance with the European law (2010/63/UE). The dogs were determined to be clinically healthy based on physical examination, serum chemistry and haematological analyses performed 48 h before the beginning of the study and were not treated with other therapeutic agents.\nThe dogs were randomly divided in to 2 groups (each containing 3 animals) using Research Randomizer software, and participated in a single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study.\nThe drug was prepared by a compounding pharmacy, and administered as capsules containing 400 mg of pure TAL. Since animals had different body weights, the dose administered was an average of 11.74 ± 0.56 mg/kg (median, 11.76 mg/kg; range, 9.4–14.0 mg/kg).\nIn the first phase, group 1 (n = 3) was administered with 400 mg/dog (one capsule) after over-night fasting and group 2 (n = 3) was fed prior to and after administration of the same dose. The capsule was placed on the back of the tongue and 5 mL of water was administered to ensure that the capsule was swallowed. Canned dog food (Nature's Logic Canine Feast, USA) was provided as half the total amount 15 min before dosing, with the rest provided immediately after TAL administration. On each study day, in order to avoid the possibility of coprophagia impacting on the study, the dogs were kept in individual boxes for 48 h and observed closely during this period. 
A 2-week wash-out period was observed between the phases, then the treatment groups were inverted, and the experiment was repeated.\nThe dogs were checked daily for visible adverse effects for 7 days following completion of the study. To facilitate blood sampling, 1 h before the commencement of the study, an 18-gauge soft cannula (Delta Med, Italy) was inserted in the right medial saphenous vein. Blood samples (3 mL) were withdrawn into lithium heparin tubes (Aptaca Spa, Italy) at 15, 30, 45 min and 1, 1.5, 2, 4, 6, 8, 10, 16, 24, 34 and 48 h after administration of TAL. Blood samples were immediately centrifuged for 10 min at 1,500 × g at 4°C and the plasma was harvested. Since TAL is not stable in plasma [24], 2.0 mL of a stabilizer-solution (CH3OH/CH3CN, 1/1 (v/v) + CH3COOH 2%) [25] was immediately added to each mL of plasma sample as soon as it was harvested. Samples were transferred into cryo-vials and immediately frozen and stored at −80°C. Samples were analysed within 2 weeks of collection.", "Analysis was performed according to Saccomanni et al. [25], and slightly modified. In brief, an aliquot of 1.5 mL of sample (containing 0.5 mL of plasma and 1 mL of stabilizer-solution) was added to a 2.0 mL micro-centrifuge tube. After the addition of 100 µL of IS (50 µg/mL) and 2.0 µL of trifluoroacetic acid (deproteinizing agent), samples were vortexed for 30 sec, then sonicated and shaken at 60 oscillation/min for 10 min. Samples were then centrifuged (5,000 × g) for 10 min, and 1 mL of the organic layer was transferred into a clean tube and dried under a gentle stream of nitrogen at 30°C. The residue obtained was reconstituted with 100 µL of CH3OH/CH3CN, 1/1 (v/v) and after centrifugation (5,000 × g, 5 min) 20 µL of the upper layer was injected onto the high-performance liquid chromatography (HPLC).", "TAL in dog plasma was determined using an HPLC coupled with diode array detector (Series 2000; Jasco Europe, Italy) according to a slightly modified version of the method described by Saccomanni et al. [25]. A Gemini C18 analytical column (250 × 4.6 mm, 5 μm particle size; Phenomenex, USA) maintained at 25°C by a Peltier System (LC-4000; Jasco Europe) was used for the chromatographic analysis. The mobile phase consisted of CH3CN/10 mM acetate ammonium (pH 5.5) solution (25/75, v/v), which was freshly prepared each day before the analysis. The flow rate of the mobile phase was set at 1 mL/min. The wavelength was set at 220 nm.", "The analytical method was fully revalidated for dog plasma according to the European Medicines Agency guidelines [26] by examining the within-run precision, calculated from similar responses for 6 repeats of 3 control samples (0.1, 0.5, and 1 μg/mL) in one run. The between-run precision was determined by comparing the calculated response of the low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 3 consecutive daily runs (a total of 6 runs). The assay accuracy for within-run and between-runs was established by determining the ratio of calculated response to expected response for low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 6 runs. The limit of quantification (LOQ) was determined as signal-to-noise ratio of 10, and the limit of detection (LOD) as the signal-to-noise ratio of 3.\nTAL and IS stock solutions were prepared in a mixture of CH3OH/CH3CN, 1/1 (v/v) and in water, respectively, at a concentration of 1,000 μg/mL and stored at −80°C. These solutions were freshly prepared every 2 weeks. 
TAL stock solution was then diluted to reach concentrations of 0.25, 2.5, 5, 25 and 50 μg/mL and stored at −20°C. These last concentrations were then diluted immediately prior to use to reach the final concentrations of 0.05, 0.5, 1, 5, 10 μg/mL. These final dilutions were then used in preparation of a 5-point calibration curve of TAL in plasma matrices.\nStandard curves were constructed with standard TAL concentrations vs ratio of TAL/IS peak areas. The linearity of the regression curve was assessed based on the residual plot, the fit test and the back-calculation. Extraction recovery was evaluated by comparing the response (in area) of high, middle, and low standards and the IS, spiked into blank canine plasma (control), with the response of equivalent standards.", "The concentration of TAL vs. time was pharmacokinetically analyzed using a non-compartmental approach (ThothPro 4.3; ThothPro LLC, Poland). Cmax was the peak plasma concentration, and Tmax was the time at the peak plasma concentration. The elimination half-life (t1/2λz) was calculated using linear least squares regression analysis of the concentration-time curve, and the area under the curve (AUC) was calculated by the linear-up log-down rule to the final concentration-time point (Ct). From these values, the apparent volume of distribution (V = dose × area under the first moment curve [AUMC]/AUC2), mean residence time (MRT = AUMC/AUC) and clearance (Cl = dose/AUC) were determined. The relative bioavailability (F) was calculated for each dog using the following equation:\nData were found to be normally distributed (Shapiro-Wilk test). Paired t-tests were used to investigate statistically significant changes in pharmacokinetic estimates between groups (GraphPad Software; GraphPad, USA). The pharmacokinetic parameters are presented as means ± SE and Tmax (categorical variable) is expressed as median and range. In all the experiments, differences were considered significant if p < 0.05.", "The analytical method showed a good linearity in the range between 0.05 and 10 μg/mL with a determination coefficient (R2) above 0.994 (y = 0.0976x − 0.0456). The intra- and inter-day precision resulted in coefficient of variation < 20%. The mean extraction recovery of TAL was 72.09% ± 5.04%; the LOD and LOQ were 0.05 μg/mL and 0.5 μg/mL, respectively.\nIn the first phase of the study one dog in group 2 (fed) showed some adverse effects 12 h after TAL administration. These included shaking, stiff walk, staggering and whining. However, the blood samples were still collected at each timepoint, and the dog completely recovered after a few hours. It was replaced in phase 2 with another dog. In all the other experimental animals no adverse effects and no behavioural or health alterations were observed during or after the study.\nPlasma TAL concentration was quantifiable up to 10 h and 24 h after oral administration of 400 mg/dog in fasted and in fed conditions, respectively (Fig. 1). The main pharmacokinetic estimates are reported in Table 1. One fed dog in phase 2 showed a short Tmax and a higher Cmax compared with other dogs in the same group, as well as a more similar pharmacokinetic profile to the fasting group. 
This individual data set was considered an outlier and was excluded from the pharmacokinetic analyses.\n[Figure and table legends: TAL, thalidomide. Values are presented as mean ± SE or median value (range). Abbreviations: TAL, thalidomide; λz, terminal phase rate constant; t1/2λz, terminal half-life; Cmax, peak plasma concentration; Tmax, time of peak concentration; Cl/F, plasma clearance normalized for F; V/F, volume of distribution normalized for F; AUC0-last, area under the curve from time 0 to the last sampling time; AUC0-∞, area under the curve from 0 h to infinity; MRT0-∞, mean residence time; F, bioavailability. *p < 0.05; †Values computed on 5 dogs; ‡Value computed on 4 dogs; §Values normalized for the dose expressed in mg/kg.]\nCmax, normalized for the dose expressed in mg/kg, differed substantially between the 2 groups (fasted, 1.34 ± 0.12 µg/mL; fed, 2.47 ± 0.19 µg/mL). Tmax differed considerably between the fasted (3 h) and the fed (10 h) animals.\nThe t1/2λz values were variable but significantly different between the groups (fasted, 6.55 ± 1.25 h; fed, 17.14 ± 4.68 h), in line with a different λz (fasted, 0.12 ± 0.02 1/h; fed, 0.05 ± 0.01 1/h).\nThe AUC value was significantly higher in the fed group (normalized for the dose expressed in mg/kg: fasted, 12.38 ± 1.13 mg × h/L; fed, 42.46 ± 6.64 mg × h/L). As a result, the relative oral bioavailability of TAL for the fasted group was low (36.92% ± 3.28%).", "The main aim of the present study was to evaluate the pharmacokinetic profile of TAL after oral administration in dogs and to determine whether this profile is affected by feeding.\nThe dose of TAL administered in the present study (400 mg/dog, average 11.7 mg/kg) was selected based on the clinical efficacy/adverse effects previously reported in dogs. A dose of 8.7 mg/kg/day and a 3-month daily dose of 20 mg/kg followed by a 3-month daily dose of 10 mg/kg were successfully used in the management of stage II–III splenic hemangiosarcomas [18] and canine mammary carcinomas [20], respectively. The latter study reported adverse sedative effects in some dogs given the higher dose (20 mg/kg), with symptom improvement when the dose was reduced to 10 mg/kg. The dose administered in our study was found to be safe, with no visible signs of toxicity in the animals. This concurs with the findings of a previous study [21], which also reported no visible signs of toxicity associated with this dose in 56 dogs. However, a multiple-dose study is needed to confirm this finding.\nThe toxic signs shown by the subject in the fed group during the first phase of the animal study were transient (around 4 h). The causes of these signs are not clear but are unlikely to be due to TAL. A study into the effects of chronic TAL administration in dogs [21] found that TAL administered at up to 1,000 mg/kg/day for 53 weeks did not induce any major systemic toxicity or tumours in dogs. There were no TAL-related changes in body weights, food consumption, electrocardiography, ophthalmoscopy, neurological function, or endocrine function. Some slight and/or transient variations observed in some hematology and blood chemistry values of dosed dogs were considered to be toxicologically insignificant, with these conclusions being supported by the lack of histopathologic changes. The only gross finding attributable to TAL was a yellow-green discoloration of the femur, rib, and/or calvarium. This aspect was not assessed in the present study since the animals were not euthanized. The estimated no-observed-adverse-effect level in dogs was 200 mg/kg/day [21], which is almost 20 times higher than the dose administered in the present study. However, adverse events such as sedation, dizziness, constipation, and headache have been reported in humans after multiple clinical doses [272829]; these do not match the signs observed in the dog in the present study.\nPlasma concentrations of TAL under fasted and fed conditions varied widely in our study. Statistical analysis and inspection of the plasma concentration vs. time curves indicated that feeding considerably affects both the pharmacokinetic parameters and profiles. This information could be of paramount importance in clinical settings. Food intake delayed TAL absorption (longer Tmax) but increased its extent (higher Cmax and AUC), in line with the poor water solubility of the active compound [3031]. Interestingly, reports on the effect of food on TAL pharmacokinetics in humans are conflicting: some studies report no influence, while others report minor effects on Cmax and AUC with a significant delay in Tmax [228].\nThe type of food consumed can affect both the nature and the magnitude of the food effect. For example, fatty foods generally delay gastric emptying, thereby providing ample time for greater dissolution and absorption of drugs. This was seen with griseofulvin, a sparingly water-soluble drug, where coadministration with a fatty meal doubled its absorption relative to the fasted state. High-protein or carbohydrate-rich food had no effect on griseofulvin absorption [32]. The feed administered to the dogs in our study was a fatty meal. TAL, which like griseofulvin is sparingly soluble in water, showed significantly higher absorption in 5 of the 6 fed dogs. However, while a high-fat diet increases the bioavailability of some drugs, dietary fiber may reduce drug availability; thus, different feed types may have different impacts on the pharmacokinetics of TAL [333435]. Further studies investigating the impact of different types of feed on TAL pharmacokinetics are therefore warranted.\nOne dog in the fed group was found to be a statistical outlier, with a reduced Tmax similar to that reported for the fasting group. This could be explained by the contractile mechanism of gallbladder emptying and filling in dogs [36]. In fact, the gallbladder alternates filling and emptying excursions even in fasted dogs. Alternatively, the dog may have had a reflux of duodenal fluid (containing bile) into the gastric lumen. Consequently, earlier emulsification may have led to a higher Cmax and a faster Tmax [37]. A statistical outlier was also described in a previous study that examined the effect of food on TAL pharmacokinetics in humans [28].\nHalf-life is a pharmacokinetic parameter used to compute the dosing interval and the time to achieve the steady-state concentration [38]. The half-life of TAL was significantly increased by feeding. This may be due to the feed acting as a drug reservoir, slowly releasing TAL during intestinal transit. If administered once daily to fed dogs, TAL would have an accumulation ratio (AUC at steady state/AUC after the first administration) of around 2.5, while the steady-state plasma concentration would be attained in around 4 days [38] (see the illustrative steady-state calculation below). Half-life is a hybrid parameter that incorporates both clearance and volume of distribution. The low water solubility of TAL has prevented the development of a commercial intravenous formulation [2]; consequently, absolute clearance and volume of distribution cannot be calculated, making extensive discussion of these estimates too speculative.\nTwo uncontrolled multiple-dose studies in breast cancer and glioma have attempted to correlate TAL concentration with tumour response in humans. Steady-state plasma concentrations ranging from 5 to 7 μg/mL resulted in stable disease for up to 74 weeks in 12/31 glioma patients; however, similar concentrations (6.2 μg/mL) maintained for 8 weeks in metastatic breast cancer patients produced no tumour response [3940]. In recent studies in dogs, a TAL dose similar to that used in the present study was associated with equal or even longer survival times compared with intensive-dose chemotherapy in splenic hemangiosarcoma and mammary carcinomas [1820]. The average steady-state plasma concentration computed for once-daily administration of 11.7 mg/kg TAL in fed dogs was 3.7 μg/mL. Even though this concentration is theoretical, it might be used as a target for the treatment of several malignancies in dogs, especially splenic hemangiosarcoma and pulmonary/mammary carcinomas. Although it has been reported that the pharmacokinetics of TAL do not change significantly between healthy subjects and cancer patients, further studies on canine patients are warranted to verify whether TAL pharmacokinetics are unchanged in healthy dogs versus dogs diagnosed with cancer [274041].\nThe findings of this study should be interpreted in light of some limitations; namely, the study used only female dogs of a single breed. It is well known that breed- or sex-specific differences can lead to variation in drug absorption, distribution, metabolism, or elimination. In particular, differences in body weight and composition, animal size, cytochrome P-450 enzyme isoforms, or plasma protein binding might occur between breeds or between animals of different sexes within the same breed [424344454647484950]. For instance, Labradors have a higher percentage of body fat than other breeds such as Greyhounds or Beagles, and this might lead to a larger volume of distribution of certain lipophilic compounds [4351].\nIn conclusion, this is the first study to report the pharmacokinetics of TAL in dogs. Feeding significantly affects the pharmacokinetics, and this should be considered by veterinarians when using this drug in a clinical setting." ]
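A minimal sketch of the statistical workflow described in the pharmacokinetic and statistical analysis paragraphs (a Shapiro-Wilk normality check followed by a paired t-test at p < 0.05) is given below. The AUC values are hypothetical, the normality test is applied to the paired differences (the quantity the paired t-test assumes to be normal), and the non-parametric fallback is an addition for illustration only; it is not part of the workflow reported in the study, which used GraphPad software.

```python
import numpy as np
from scipy import stats

# Hypothetical paired, dose-normalised AUC estimates (mg*h/L), one pair per dog
auc_fasted = np.array([10.9, 12.3, 14.1, 11.5, 13.0])
auc_fed    = np.array([35.2, 44.8, 51.3, 38.9, 42.1])

# Shapiro-Wilk on the paired differences
diff = auc_fed - auc_fasted
w_stat, p_normality = stats.shapiro(diff)

if p_normality > 0.05:
    stat, p_value = stats.ttest_rel(auc_fed, auc_fasted)   # paired t-test
else:
    stat, p_value = stats.wilcoxon(auc_fed, auc_fasted)    # fallback, not in the original workflow

print(f"Shapiro-Wilk p = {p_normality:.3f}; paired comparison p = {p_value:.3f}")
```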
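The discussion's steady-state reasoning (accumulation on once-daily dosing, a time to steady state of roughly 4 days, and an average steady-state concentration) can be illustrated with textbook superposition formulas for linear pharmacokinetics. The half-life, dosing interval, and single-dose AUC below are assumed round numbers rather than study estimates, and the accumulation ratio shown uses the standard 1/(1 − e^(−λz·τ)) definition, which can differ from the AUC-ratio definition quoted in the text when absorption is slow.

```python
import numpy as np

# Assumed round numbers for a fed dog -- illustrative only, not the study estimates
t_half_h   = 17.0      # terminal half-life (h)
tau_h      = 24.0      # once-daily dosing interval (h)
auc_single = 90.0      # AUC0-inf after a single dose (mg*h/L), placeholder

lz = np.log(2) / t_half_h

# Accumulation ratio under superposition for linear PK (one common definition)
r_acc = 1.0 / (1.0 - np.exp(-lz * tau_h))

# Time to reach ~97% of steady state, often approximated as 5 half-lives
time_to_ss_days = 5.0 * t_half_h / 24.0

# Average steady-state concentration for linear PK: Cavg,ss = AUC0-inf(single dose) / tau
c_avg_ss = auc_single / tau_h                    # mg/L == µg/mL
```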
[ "intro", "materials|methods", null, null, null, null, null, null, "results", "discussion" ]
[ "Dogs", "fasting", "meals", "pharmacokinetics", "thalidomide" ]
INTRODUCTION: Thalidomide (TAL) was first synthesised in 1954, and was used clinically in Europe as a non-barbiturate hypno-sedative and antiemetic drug for morning sickness. It was thought that the sedative effect of TAL was generated by a different mechanism of action than that of barbiturates. This led to the belief that TAL was a ‘safe’ drug, with little CNS and respiratory depression or muscle incoordination [1], and no deaths from overdose or attempted suicide have ever been recorded [2]. However, in 1961 TAL was found to have a teratogenic effect in humans, and so was withdrawn from market. Despite its known teratogenicity, by 1965 TAL was the drug of choice for erythema nodosum leprosum [3]. The safety profile of TAL was not completely determined until 1998 [4], and since then, several trials in inflammatory and oncologic conditions have been run. TAL has shown promising antitumour activity in several malignancies and has been proposed as a drug of choice in multiple myeloma [56789]. Neoangiogenesis is a well recognized hallmark of cancer [10]. Today, tumoral-associated neoangiogenesis is a crucial target for antitumoral therapy. Several studies have shown that the tumour microenvironment is able to induce and promote neoangiogenesis [1011]. The potential anti-angiogenic effects of TAL were suspected in the early 1960s but were only confirmed in the 1990s [1213]. To date, the precise mechanisms responsible for the clinical activity of TAL have not yet been estabilished. However, TAL has been shown to inhibit angiogenesis induced by basic fibroblast growth factor in rabbit cornea or by vascular endothelial growth factor in a murine model of corneal vascularization [1214]. TAL also reduced interleukin-6 (IL-6), 1b (IL-1b), 10 (IL-10) and tumour necrosis factor-α production in an in vitro model [1516]. TAL has been used in canine chemotherapy for the treatment of hemangiosarcoma [1718], pulmonary [19] and mammary carcinoma [20]. Equivalent or even longer survival times have been reported compared to traditional intensive-dose chemotherapy. Unlike many other chemotherapeutic drugs, TAL is relatively well tolerated by dogs. Experimental trials have not found significant toxicity in Beagles treated for up 53 weeks with a dose of up to 1,000 mg/kg/day [21]. To date, the dose of TAL proposed for the treatment of tumours in canine patients has been empirically selected, with studies using a wide range of doses. Indeed, dose in the range of 2 to 26 mg/kg/day or 100–400 mg/dog per day have been reported [17192223]. A dose regimen selected based on scientific data is thus necessary in order to optimise TAL therapy in canine patients. To the best of the author's knowledge, no studies on the pharmacokinetics of TAL in dogs have been reported in the literature. The aim of this study was therefore to assess the pharmacokinetics of TAL after single oral administration in dogs. Additionally, the likely influence of feeding on the pharmacokinetic profile of TAL in dogs has been preliminarily investigated. MATERIALS AND METHODS: Drugs and chemicals TAL for analytical testing (purity ≥ 99%) and phthalimide (purity ≥ 99%), used as internal standard (IS), were provided by Sigma-Aldrich (USA). Ammonium acetate, methanol (CH3OH), acetonitrile (CH3CN) and trifluoroacetic acid (98%) were purchased from VWR International (USA). Acetic acid 99–100% (CH3COOH) was obtained from J.T. Baker (USA). The water used was ultrapure grade, purified using a Milli- Q UV Purification System (Millipore Corporation, USA). 
TAL for analytical testing (purity ≥ 99%) and phthalimide (purity ≥ 99%), used as internal standard (IS), were provided by Sigma-Aldrich (USA). Ammonium acetate, methanol (CH3OH), acetonitrile (CH3CN) and trifluoroacetic acid (98%) were purchased from VWR International (USA). Acetic acid 99–100% (CH3COOH) was obtained from J.T. Baker (USA). The water used was ultrapure grade, purified using a Milli- Q UV Purification System (Millipore Corporation, USA). Animals and experimental design Six adult female (2–7 years) Labradors with an average body weight of 34.6 ± 1.69 kg (median, 34.25 kg; range, 28.5–42.4 kg) were used. The experiment was approved by the University of Life Sciences, (Lublin, Poland) welfare ethics committee and carried out in accordance with the European law (2010/63/UE). The dogs were determined to be clinically healthy based on physical examination, serum chemistry and haematological analyses performed 48 h before the beginning of the study and were not treated with other therapeutic agents. The dogs were randomly divided in to 2 groups (each containing 3 animals) using Research Randomizer software, and participated in a single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study. The drug was prepared by a compounding pharmacy, and administered as capsules containing 400 mg of pure TAL. Since animals had different body weights, the dose administered was an average of 11.74 ± 0.56 mg/kg (median, 11.76 mg/kg; range, 9.4–14.0 mg/kg). In the first phase, group 1 (n = 3) was administered with 400 mg/dog (one capsule) after over-night fasting and group 2 (n = 3) was fed prior to and after administration of the same dose. The capsule was placed on the back of the tongue and 5 mL of water was administered to ensure that the capsule was swallowed. Canned dog food (Nature's Logic Canine Feast, USA) was provided as half the total amount 15 min before dosing, with the rest provided immediately after TAL administration. On each study day, in order to avoid the possibility of coprophagia impacting on the study, the dogs were kept in individual boxes for 48 h and observed closely during this period. A 2-week wash-out period was observed between the phases, then the treatment groups were inverted, and the experiment was repeated. The dogs were checked daily for visible adverse effects for 7 days following completion of the study. To facilitate blood sampling, 1 h before the commencement of the study, an 18-gauge soft cannula (Delta Med, Italy) was inserted in the right medial saphenous vein. Blood samples (3 mL) were withdrawn into lithium heparin tubes (Aptaca Spa, Italy) at 15, 30, 45 min and 1, 1.5, 2, 4, 6, 8, 10, 16, 24, 34 and 48 h after administration of TAL. Blood samples were immediately centrifuged for 10 min at 1,500 × g at 4°C and the plasma was harvested. Since TAL is not stable in plasma [24], 2.0 mL of a stabilizer-solution (CH3OH/CH3CN, 1/1 (v/v) + CH3COOH 2%) [25] was immediately added to each mL of plasma sample as soon as it was harvested. Samples were transferred into cryo-vials and immediately frozen and stored at −80°C. Samples were analysed within 2 weeks of collection. Six adult female (2–7 years) Labradors with an average body weight of 34.6 ± 1.69 kg (median, 34.25 kg; range, 28.5–42.4 kg) were used. The experiment was approved by the University of Life Sciences, (Lublin, Poland) welfare ethics committee and carried out in accordance with the European law (2010/63/UE). 
The dogs were determined to be clinically healthy based on physical examination, serum chemistry and haematological analyses performed 48 h before the beginning of the study and were not treated with other therapeutic agents. The dogs were randomly divided in to 2 groups (each containing 3 animals) using Research Randomizer software, and participated in a single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study. The drug was prepared by a compounding pharmacy, and administered as capsules containing 400 mg of pure TAL. Since animals had different body weights, the dose administered was an average of 11.74 ± 0.56 mg/kg (median, 11.76 mg/kg; range, 9.4–14.0 mg/kg). In the first phase, group 1 (n = 3) was administered with 400 mg/dog (one capsule) after over-night fasting and group 2 (n = 3) was fed prior to and after administration of the same dose. The capsule was placed on the back of the tongue and 5 mL of water was administered to ensure that the capsule was swallowed. Canned dog food (Nature's Logic Canine Feast, USA) was provided as half the total amount 15 min before dosing, with the rest provided immediately after TAL administration. On each study day, in order to avoid the possibility of coprophagia impacting on the study, the dogs were kept in individual boxes for 48 h and observed closely during this period. A 2-week wash-out period was observed between the phases, then the treatment groups were inverted, and the experiment was repeated. The dogs were checked daily for visible adverse effects for 7 days following completion of the study. To facilitate blood sampling, 1 h before the commencement of the study, an 18-gauge soft cannula (Delta Med, Italy) was inserted in the right medial saphenous vein. Blood samples (3 mL) were withdrawn into lithium heparin tubes (Aptaca Spa, Italy) at 15, 30, 45 min and 1, 1.5, 2, 4, 6, 8, 10, 16, 24, 34 and 48 h after administration of TAL. Blood samples were immediately centrifuged for 10 min at 1,500 × g at 4°C and the plasma was harvested. Since TAL is not stable in plasma [24], 2.0 mL of a stabilizer-solution (CH3OH/CH3CN, 1/1 (v/v) + CH3COOH 2%) [25] was immediately added to each mL of plasma sample as soon as it was harvested. Samples were transferred into cryo-vials and immediately frozen and stored at −80°C. Samples were analysed within 2 weeks of collection. Sample extraction procedure Analysis was performed according to Saccomanni et al. [25], and slightly modified. In brief, an aliquot of 1.5 mL of sample (containing 0.5 mL of plasma and 1 mL of stabilizer-solution) was added to a 2.0 mL micro-centrifuge tube. After the addition of 100 µL of IS (50 µg/mL) and 2.0 µL of trifluoroacetic acid (deproteinizing agent), samples were vortexed for 30 sec, then sonicated and shaken at 60 oscillation/min for 10 min. Samples were then centrifuged (5,000 × g) for 10 min, and 1 mL of the organic layer was transferred into a clean tube and dried under a gentle stream of nitrogen at 30°C. The residue obtained was reconstituted with 100 µL of CH3OH/CH3CN, 1/1 (v/v) and after centrifugation (5,000 × g, 5 min) 20 µL of the upper layer was injected onto the high-performance liquid chromatography (HPLC). Analysis was performed according to Saccomanni et al. [25], and slightly modified. In brief, an aliquot of 1.5 mL of sample (containing 0.5 mL of plasma and 1 mL of stabilizer-solution) was added to a 2.0 mL micro-centrifuge tube. 
After the addition of 100 µL of IS (50 µg/mL) and 2.0 µL of trifluoroacetic acid (deproteinizing agent), samples were vortexed for 30 sec, then sonicated and shaken at 60 oscillation/min for 10 min. Samples were then centrifuged (5,000 × g) for 10 min, and 1 mL of the organic layer was transferred into a clean tube and dried under a gentle stream of nitrogen at 30°C. The residue obtained was reconstituted with 100 µL of CH3OH/CH3CN, 1/1 (v/v) and after centrifugation (5,000 × g, 5 min) 20 µL of the upper layer was injected onto the high-performance liquid chromatography (HPLC). HPLC conditions TAL in dog plasma was determined using an HPLC coupled with diode array detector (Series 2000; Jasco Europe, Italy) according to a slightly modified version of the method described by Saccomanni et al. [25]. A Gemini C18 analytical column (250 × 4.6 mm, 5 μm particle size; Phenomenex, USA) maintained at 25°C by a Peltier System (LC-4000; Jasco Europe) was used for the chromatographic analysis. The mobile phase consisted of CH3CN/10 mM acetate ammonium (pH 5.5) solution (25/75, v/v), which was freshly prepared each day before the analysis. The flow rate of the mobile phase was set at 1 mL/min. The wavelength was set at 220 nm. TAL in dog plasma was determined using an HPLC coupled with diode array detector (Series 2000; Jasco Europe, Italy) according to a slightly modified version of the method described by Saccomanni et al. [25]. A Gemini C18 analytical column (250 × 4.6 mm, 5 μm particle size; Phenomenex, USA) maintained at 25°C by a Peltier System (LC-4000; Jasco Europe) was used for the chromatographic analysis. The mobile phase consisted of CH3CN/10 mM acetate ammonium (pH 5.5) solution (25/75, v/v), which was freshly prepared each day before the analysis. The flow rate of the mobile phase was set at 1 mL/min. The wavelength was set at 220 nm. Method validation and quantification The analytical method was fully revalidated for dog plasma according to the European Medicines Agency guidelines [26] by examining the within-run precision, calculated from similar responses for 6 repeats of 3 control samples (0.1, 0.5, and 1 μg/mL) in one run. The between-run precision was determined by comparing the calculated response of the low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 3 consecutive daily runs (a total of 6 runs). The assay accuracy for within-run and between-runs was established by determining the ratio of calculated response to expected response for low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 6 runs. The limit of quantification (LOQ) was determined as signal-to-noise ratio of 10, and the limit of detection (LOD) as the signal-to-noise ratio of 3. TAL and IS stock solutions were prepared in a mixture of CH3OH/CH3CN, 1/1 (v/v) and in water, respectively, at a concentration of 1,000 μg/mL and stored at −80°C. These solutions were freshly prepared every 2 weeks. TAL stock solution was then diluted to reach concentrations of 0.25, 2.5, 5, 25 and 50 μg/mL and stored at −20°C. These last concentrations were then diluted immediately prior to use to reach the final concentrations of 0.05, 0.5, 1, 5, 10 μg/mL. These final dilutions were then used in preparation of a 5-point calibration curve of TAL in plasma matrices. Standard curves were constructed with standard TAL concentrations vs ratio of TAL/IS peak areas. The linearity of the regression curve was assessed based on the residual plot, the fit test and the back-calculation. 
Extraction recovery was evaluated by comparing the response (in area) of high, middle, and low standards and the IS, spiked into blank canine plasma (control), with the response of equivalent standards. The analytical method was fully revalidated for dog plasma according to the European Medicines Agency guidelines [26] by examining the within-run precision, calculated from similar responses for 6 repeats of 3 control samples (0.1, 0.5, and 1 μg/mL) in one run. The between-run precision was determined by comparing the calculated response of the low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 3 consecutive daily runs (a total of 6 runs). The assay accuracy for within-run and between-runs was established by determining the ratio of calculated response to expected response for low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 6 runs. The limit of quantification (LOQ) was determined as signal-to-noise ratio of 10, and the limit of detection (LOD) as the signal-to-noise ratio of 3. TAL and IS stock solutions were prepared in a mixture of CH3OH/CH3CN, 1/1 (v/v) and in water, respectively, at a concentration of 1,000 μg/mL and stored at −80°C. These solutions were freshly prepared every 2 weeks. TAL stock solution was then diluted to reach concentrations of 0.25, 2.5, 5, 25 and 50 μg/mL and stored at −20°C. These last concentrations were then diluted immediately prior to use to reach the final concentrations of 0.05, 0.5, 1, 5, 10 μg/mL. These final dilutions were then used in preparation of a 5-point calibration curve of TAL in plasma matrices. Standard curves were constructed with standard TAL concentrations vs ratio of TAL/IS peak areas. The linearity of the regression curve was assessed based on the residual plot, the fit test and the back-calculation. Extraction recovery was evaluated by comparing the response (in area) of high, middle, and low standards and the IS, spiked into blank canine plasma (control), with the response of equivalent standards. Pharmacokinetic and statistical analysis The concentration of TAL vs. time was pharmacokinetically analyzed using a non-compartmental approach (ThothPro 4.3; ThothPro LLC, Poland). Cmax was the peak plasma concentration, and Tmax was the time at the peak plasma concentration. The elimination half-life (t1/2λz) was calculated using linear least squares regression analysis of the concentration-time curve, and the area under the curve (AUC) was calculated by the linear-up log-down rule to the final concentration-time point (Ct). From these values, the apparent volume of distribution (V = dose × area under the first moment curve [AUMC]/AUC2), mean residence time (MRT = AUMC/AUC) and clearance (Cl = dose/AUC) were determined. The relative bioavailability (F) was calculated for each dog using the following equation: Data were found to be normally distributed (Shapiro-Wilk test). Paired t-tests were used to investigate statistically significant changes in pharmacokinetic estimates between groups (GraphPad Software; GraphPad, USA). The pharmacokinetic parameters are presented as means ± SE and Tmax (categorical variable) is expressed as median and range. In all the experiments, differences were considered significant if p < 0.05. The concentration of TAL vs. time was pharmacokinetically analyzed using a non-compartmental approach (ThothPro 4.3; ThothPro LLC, Poland). Cmax was the peak plasma concentration, and Tmax was the time at the peak plasma concentration. 
The elimination half-life (t1/2λz) was calculated using linear least squares regression analysis of the concentration-time curve, and the area under the curve (AUC) was calculated by the linear-up log-down rule to the final concentration-time point (Ct). From these values, the apparent volume of distribution (V = dose × area under the first moment curve [AUMC]/AUC2), mean residence time (MRT = AUMC/AUC) and clearance (Cl = dose/AUC) were determined. The relative bioavailability (F) was calculated for each dog using the following equation: Data were found to be normally distributed (Shapiro-Wilk test). Paired t-tests were used to investigate statistically significant changes in pharmacokinetic estimates between groups (GraphPad Software; GraphPad, USA). The pharmacokinetic parameters are presented as means ± SE and Tmax (categorical variable) is expressed as median and range. In all the experiments, differences were considered significant if p < 0.05. Drugs and chemicals: TAL for analytical testing (purity ≥ 99%) and phthalimide (purity ≥ 99%), used as internal standard (IS), were provided by Sigma-Aldrich (USA). Ammonium acetate, methanol (CH3OH), acetonitrile (CH3CN) and trifluoroacetic acid (98%) were purchased from VWR International (USA). Acetic acid 99–100% (CH3COOH) was obtained from J.T. Baker (USA). The water used was ultrapure grade, purified using a Milli- Q UV Purification System (Millipore Corporation, USA). Animals and experimental design: Six adult female (2–7 years) Labradors with an average body weight of 34.6 ± 1.69 kg (median, 34.25 kg; range, 28.5–42.4 kg) were used. The experiment was approved by the University of Life Sciences, (Lublin, Poland) welfare ethics committee and carried out in accordance with the European law (2010/63/UE). The dogs were determined to be clinically healthy based on physical examination, serum chemistry and haematological analyses performed 48 h before the beginning of the study and were not treated with other therapeutic agents. The dogs were randomly divided in to 2 groups (each containing 3 animals) using Research Randomizer software, and participated in a single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study. The drug was prepared by a compounding pharmacy, and administered as capsules containing 400 mg of pure TAL. Since animals had different body weights, the dose administered was an average of 11.74 ± 0.56 mg/kg (median, 11.76 mg/kg; range, 9.4–14.0 mg/kg). In the first phase, group 1 (n = 3) was administered with 400 mg/dog (one capsule) after over-night fasting and group 2 (n = 3) was fed prior to and after administration of the same dose. The capsule was placed on the back of the tongue and 5 mL of water was administered to ensure that the capsule was swallowed. Canned dog food (Nature's Logic Canine Feast, USA) was provided as half the total amount 15 min before dosing, with the rest provided immediately after TAL administration. On each study day, in order to avoid the possibility of coprophagia impacting on the study, the dogs were kept in individual boxes for 48 h and observed closely during this period. A 2-week wash-out period was observed between the phases, then the treatment groups were inverted, and the experiment was repeated. The dogs were checked daily for visible adverse effects for 7 days following completion of the study. To facilitate blood sampling, 1 h before the commencement of the study, an 18-gauge soft cannula (Delta Med, Italy) was inserted in the right medial saphenous vein. 
Blood samples (3 mL) were withdrawn into lithium heparin tubes (Aptaca Spa, Italy) at 15, 30, 45 min and 1, 1.5, 2, 4, 6, 8, 10, 16, 24, 34 and 48 h after administration of TAL. Blood samples were immediately centrifuged for 10 min at 1,500 × g at 4°C and the plasma was harvested. Since TAL is not stable in plasma [24], 2.0 mL of a stabilizer-solution (CH3OH/CH3CN, 1/1 (v/v) + CH3COOH 2%) [25] was immediately added to each mL of plasma sample as soon as it was harvested. Samples were transferred into cryo-vials and immediately frozen and stored at −80°C. Samples were analysed within 2 weeks of collection. Sample extraction procedure: Analysis was performed according to Saccomanni et al. [25], and slightly modified. In brief, an aliquot of 1.5 mL of sample (containing 0.5 mL of plasma and 1 mL of stabilizer-solution) was added to a 2.0 mL micro-centrifuge tube. After the addition of 100 µL of IS (50 µg/mL) and 2.0 µL of trifluoroacetic acid (deproteinizing agent), samples were vortexed for 30 sec, then sonicated and shaken at 60 oscillation/min for 10 min. Samples were then centrifuged (5,000 × g) for 10 min, and 1 mL of the organic layer was transferred into a clean tube and dried under a gentle stream of nitrogen at 30°C. The residue obtained was reconstituted with 100 µL of CH3OH/CH3CN, 1/1 (v/v) and after centrifugation (5,000 × g, 5 min) 20 µL of the upper layer was injected onto the high-performance liquid chromatography (HPLC). HPLC conditions: TAL in dog plasma was determined using an HPLC coupled with diode array detector (Series 2000; Jasco Europe, Italy) according to a slightly modified version of the method described by Saccomanni et al. [25]. A Gemini C18 analytical column (250 × 4.6 mm, 5 μm particle size; Phenomenex, USA) maintained at 25°C by a Peltier System (LC-4000; Jasco Europe) was used for the chromatographic analysis. The mobile phase consisted of CH3CN/10 mM acetate ammonium (pH 5.5) solution (25/75, v/v), which was freshly prepared each day before the analysis. The flow rate of the mobile phase was set at 1 mL/min. The wavelength was set at 220 nm. Method validation and quantification: The analytical method was fully revalidated for dog plasma according to the European Medicines Agency guidelines [26] by examining the within-run precision, calculated from similar responses for 6 repeats of 3 control samples (0.1, 0.5, and 1 μg/mL) in one run. The between-run precision was determined by comparing the calculated response of the low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 3 consecutive daily runs (a total of 6 runs). The assay accuracy for within-run and between-runs was established by determining the ratio of calculated response to expected response for low (0.05 μg/mL), middle (1 μg/mL), and high (10 μg/mL) control samples over 6 runs. The limit of quantification (LOQ) was determined as signal-to-noise ratio of 10, and the limit of detection (LOD) as the signal-to-noise ratio of 3. TAL and IS stock solutions were prepared in a mixture of CH3OH/CH3CN, 1/1 (v/v) and in water, respectively, at a concentration of 1,000 μg/mL and stored at −80°C. These solutions were freshly prepared every 2 weeks. TAL stock solution was then diluted to reach concentrations of 0.25, 2.5, 5, 25 and 50 μg/mL and stored at −20°C. These last concentrations were then diluted immediately prior to use to reach the final concentrations of 0.05, 0.5, 1, 5, 10 μg/mL. 
These final dilutions were then used in preparation of a 5-point calibration curve of TAL in plasma matrices. Standard curves were constructed with standard TAL concentrations vs ratio of TAL/IS peak areas. The linearity of the regression curve was assessed based on the residual plot, the fit test and the back-calculation. Extraction recovery was evaluated by comparing the response (in area) of high, middle, and low standards and the IS, spiked into blank canine plasma (control), with the response of equivalent standards. Pharmacokinetic and statistical analysis: The concentration of TAL vs. time was pharmacokinetically analyzed using a non-compartmental approach (ThothPro 4.3; ThothPro LLC, Poland). Cmax was the peak plasma concentration, and Tmax was the time at the peak plasma concentration. The elimination half-life (t1/2λz) was calculated using linear least squares regression analysis of the concentration-time curve, and the area under the curve (AUC) was calculated by the linear-up log-down rule to the final concentration-time point (Ct). From these values, the apparent volume of distribution (V = dose × area under the first moment curve [AUMC]/AUC2), mean residence time (MRT = AUMC/AUC) and clearance (Cl = dose/AUC) were determined. The relative bioavailability (F) was calculated for each dog using the following equation: Data were found to be normally distributed (Shapiro-Wilk test). Paired t-tests were used to investigate statistically significant changes in pharmacokinetic estimates between groups (GraphPad Software; GraphPad, USA). The pharmacokinetic parameters are presented as means ± SE and Tmax (categorical variable) is expressed as median and range. In all the experiments, differences were considered significant if p < 0.05. RESULTS: The analytical method showed a good linearity in the range between 0.05 and 10 μg/mL with a determination coefficient (R2) above 0.994 (y = 0.0976x − 0.0456). The intra- and inter-day precision resulted in coefficient of variation < 20%. The mean extraction recovery of TAL was 72.09% ± 5.04%; the LOD and LOQ were 0.05 μg/mL and 0.5 μg/mL, respectively. In the first phase of the study one dog in group 2 (fed) showed some adverse effects 12 h after TAL administration. These included shaking, stiff walk, staggering and whining. However, the blood samples were still collected at each timepoint, and the dog completely recovered after a few hours. It was replaced in phase 2 with another dog. In all the other experimental animals no adverse effects and no behavioural or health alterations were observed during or after the study. Plasma TAL concentration was quantifiable up to 10 h and 24 h after oral administration of 400 mg/dog in fasted and in fed conditions, respectively (Fig. 1). The main pharmacokinetic estimates are reported in Table 1. One fed dog in phase 2 showed a short Tmax and a higher Cmax compared with other dogs in the same group, as well as a more similar pharmacokinetic profile to the fasting group. This individual data set was considered as an outlier, and was excluded from the pharmacokinetic analyses. TAL, thalidomide. Values are presented as mean ± SE or median value (range). 
TAL, thalidomide; λz, terminal phase rate constant; t1/2λz, terminal half-life; Cmax, peak plasma concentration; Tmax, time of peak concentration; Cl/F, plasma clearance normalized for F; V/F, volume of distribution normalized for F; AUC0-last, area under the curve from 0 to the last sampling time; AUC0-∞, area under the curve from 0 h to infinity; MRT0-∞, mean residence time; F, bioavailability. *p < 0.05; †Values computed on 5 dogs; ‡Value computed on 4 dogs; §Values normalized for the dose expressed in mg/kg. Cmax, normalized for the dose expressed in mg/kg, differed substantially between the 2 groups (fasted, 1.34 ± 0.12 µg/mL; fed, 2.47 ± 0.19 µg/mL). Tmax differed considerably between the fasted (3 h) and the fed (10 h) animals. The t1/2λz values were variable but significantly different between the groups (fasted, 6.55 ± 1.25 h; fed, 17.14 ± 4.68 h), in line with a different λz (fasted, 0.12 ± 0.02 1/h; fed, 0.05 ± 0.01 1/h). The AUC value was significantly higher in the fed group (normalized for the dose expressed in mg/kg: fasted, 12.38 ± 1.13 mg × h/L; fed, 42.46 ± 6.64 mg × h/L). As a result, the relative oral bioavailability of TAL for the fasted group was low (36.92% ± 3.28%). DISCUSSION: The main aim of the present study was to evaluate the pharmacokinetic profile of TAL after oral administration in dogs and to determine whether this profile is affected by feeding. The dose of TAL administered in the present study (400 mg/dog, average 11.7 mg/kg) was selected based on clinical efficacy/adverse effects previously reported in dogs. A dose of 8.7 mg/kg/day and a 3-month daily-dose of 20 mg/kg followed by a 3-month daily-dose of 10 mg/kg were successfully used in the management of stage II–III splenic hemangiosarcomas [18] and canine mammary carcinomas [20], respectively, in dogs. This latter study showed adverse sedative effects in some dogs when given the higher dose (20 mg/kg), with symptom improvement when the dose was reduced to 10 mg/kg. The dose administered in our study was found to be safe with no visible signs of toxicity in animals. This concurs with the findings of a previous study [21], which also reported no visible signs of toxicity associated with this dose in 56 dogs. However, a multiple-dose study is needed to confirm this finding. The toxic signs shown by the subject in the fed group during the first phase of the animal study were transient (around 4 h). The causes of these signs are not clear but are unlikely to be due to TAL. A study into the effects of chronic TAL administration in dogs [21] found that TAL administered up to 1,000 mg/kg/day for 53 weeks did not induce any major systemic toxicity or tumours in dogs. There were no TAL-related changes in body weights, food consumption, electrocardiography, ophthalmoscopy, neurological function, or endocrine function. Some slight and/or transient variations observed in some hematology and blood chemistry values of dosed dogs were considered to be toxicologically insignificant, with these conclusions being supported by the lack of histopathologic changes. The only gross finding attributable to TAL was a yellow-green discoloration of the femur, rib, and/or calvarium. This aspect was not assessed in the present study since animals were not euthanized. The estimated no-observed-adverse-effect level in dogs was 200 mg/kg/day [21], which is almost 20 times higher than the dose administered in the present study. 
However, adverse events such as sedation, dizziness, constipation, and headache have been reported in humans after multiple clinical doses [272829]; these do not match the signs observed in the dog used in the present study. Plasma concentrations of TAL under fasted and fed conditions varied widely in our study. Statistical analysis and inspection of the plasma concentration vs time curves indicated that feeding considerably affects both the pharmacokinetic parameters and profiles. This information could be of paramount importance in clinical settings. Food intake delayed TAL absorption (Tmax) but increased its extent (Cmax and AUC), in line with the negligible hydrophilicity of the active compound [3031]. Interestingly, reports on the effect of food on TAL pharmacokinetics in humans are conflicting: some studies report no influences while others report minor effects on Cmax and AUC, with a significant delay to Tmax [228]. The type of food consumed can influence both the nature and the magnitude of the food effect. For example, fatty foods generally delay gastric emptying, thereby providing ample time for greater dissolution and absorption of drugs. This was seen with griseofulvin, a sparingly water-soluble drug, where coadministration with a fatty meal doubled its absorption relative to the fasted state. High-protein or carbohydrate-rich food had no effect on griseofulvin absorption [32]. The feed administered to dogs in our study was a fatty meal. TAL, which like griseofulvin is sparingly soluble in water, showed significantly higher absorption in 5 of the 6 fed dogs. However, the bioavailability of some drugs is increased by a high-fat diet, while dietary fiber may reduce drug availability; thus, diverse feed types may have different impacts on the pharmacokinetics of TAL [333435]. Further studies on the impact of different types of feed on TAL pharmacokinetics are warranted to clarify this issue. One dog in the fed group was found to be a statistical outlier with a reduced Tmax similar to that reported for the fasting group. This could be explained by the contractile mechanism of the gallbladder emptying and filling in dogs [36]. In fact, the gallbladder alternates filling and emptying excursions even in fasted dogs. Alternatively, the dog may have had a reflux of duodenal fluid (containing bile) into the gastric lumen. Consequently, the production of an earlier emulsion may have led to a higher Cmax and a faster Tmax [37]. A statistical outlier was also described in a previous study that examined the effect of food on TAL pharmacokinetics in humans [28]. Half-life is a pharmacokinetic parameter used to compute the dose interval and the time to achieve the steady state concentration [38]. The half-life of TAL was statistically increased by feeding. This may be due to the feed acting as a drug reservoir, slowly releasing TAL during intestinal transit. If administered once daily in fed dogs, TAL has an accumulation ratio (AUCsteady state/AUC1st adm) of around 2.5, while the steady state plasma concentration would be attained in around 4 days [38]. Half-life is a hybrid parameter that incorporates both clearance and volume of distribution. The low water solubility of TAL has prevented the development of a commercial intravenous formulation [2], and consequently it is impossible to calculate absolute clearance and volume of distribution, making extensive discussion of this estimate too speculative. 
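The once-daily projections quoted above (accumulation ratio of about 2.5, steady state in roughly 4 days) and the theoretical 3.7 μg/mL average mentioned later in the discussion follow from standard single-dose superposition arithmetic. The sketch below is only an illustration of those textbook relationships, not the authors' calculation: the terminal half-life is the mean fed-group value reported in the article, while the AUC0-∞ value is an assumption back-calculated from the stated 3.7 μg/mL average (Css,avg = AUC0-∞/τ). Note that the simple λz-based accumulation ratio is smaller than the AUC-based ratio (AUCτ,ss/AUCτ,1st) the authors cite, which can exceed it when absorption is prolonged.

```python
import math

# Mean fed-group terminal half-life reported in the article; dosing interval 24 h.
t_half = 17.14                     # h
tau = 24.0                         # h
lambda_z = math.log(2) / t_half    # terminal rate constant, 1/h

# Terminal-phase-only accumulation ratio (AUC-based definitions can be larger
# when absorption is slow, as in the fed dogs).
r_ac = 1.0 / (1.0 - math.exp(-lambda_z * tau))

# Steady state is reached after roughly 4-5 terminal half-lives.
t_ss_days = 5 * t_half / 24.0

# Average steady-state concentration: Css_avg = AUC0-inf (single dose) / tau.
# The AUC0-inf value below is an assumption back-calculated from the 3.7 ug/mL
# average quoted in the discussion (3.7 * 24 is roughly 89 ug*h/mL).
auc_0_inf = 89.0                   # ug*h/mL, illustrative assumption
css_avg = auc_0_inf / tau

print(f"lambda_z = {lambda_z:.3f} 1/h, terminal-phase R_ac = {r_ac:.2f}")
print(f"time to steady state = {t_ss_days:.1f} days, Css_avg = {css_avg:.1f} ug/mL")
```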
Two uncontrolled multiple-dose studies in breast cancer and glioma have attempted to correlate TAL concentration with tumour response in humans. Steady-state plasma concentrations ranging from 5 to 7 μg/mL resulted in stable disease for up to 74 weeks in 12/31 glioma patients; however, similar concentrations (6.2 μg/mL) maintained for 8 weeks in metastatic breast cancer patients produced no tumour response [3940]. In a recent study in dogs, a TAL dose similar to that used in the present study was associated with equal or even longer survival times compared to intensive-dose chemotherapy in splenic hemangiosarcoma and mammary carcinomas [1820]. The average steady-state plasma concentration computed for once-daily administration of 11.7 mg/kg TAL in fed dogs was 3.7 μg/mL. Even though this concentration is theoretical, it might be used as a target for the treatment of several malignancies in dogs, especially for splenic hemangiosarcoma and pulmonary/mammary carcinomas. Although it has been reported that the pharmacokinetics of TAL do not change significantly between healthy and cancer patients, further studies on canine patients are warranted to verify whether TAL pharmacokinetics are unchanged in healthy dogs versus dogs diagnosed with cancer [274041]. The findings of this study should be interpreted in light of some limitations; namely, the study used only female dogs of a single breed. It is well known that breed- or sex-specific differences can lead to variations in drug absorption, distribution, metabolism, or elimination. In particular, differences in body weight and composition, animal size, P-450 enzyme isoforms or in plasma protein binding might occur in different breeds or in animals of different sexes within the same breed [424344454647484950]. For instance, Labradors have a higher percentage of body fat compared to other breeds such as Greyhounds or Beagles, and this might lead to a larger volume of distribution for certain lipophilic compounds [4351]. In conclusion, this is the first study to report the pharmacokinetics of TAL in dogs. Feeding significantly affects the pharmacokinetics, and this should be considered by veterinarians when using this drug in a clinical setting.
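For readers who want to reproduce the non-compartmental workflow described in the methods (λz by log-linear regression of the terminal points, AUC/AUMC by the linear-up log-down rule, and the derived MRT, Cl/F and V/F), the following minimal Python sketch implements those standard formulas. The concentration-time profile and the function names are hypothetical illustrations, not study data or the authors' ThothPro output; the relative bioavailability equation itself is not reproduced in the text above, but it is presumably the per-dog ratio of fasted to fed AUC.

```python
import math

def _slope(x, y):
    """Least-squares slope of y vs x (used for the terminal log-linear fit)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)

def nca(times, conc, dose, n_terminal=3):
    """Non-compartmental estimates: AUC/AUMC by the linear-up log-down rule,
    lambda_z from the last n_terminal points, then MRT, Cl/F and V/F."""
    auc = aumc = 0.0
    for (t1, c1), (t2, c2) in zip(zip(times, conc), zip(times[1:], conc[1:])):
        dt = t2 - t1
        if c2 >= c1 or c2 <= 0:                 # rising (or zero): linear trapezoid
            auc += dt * (c1 + c2) / 2
            aumc += dt * (t1 * c1 + t2 * c2) / 2
        else:                                   # falling: logarithmic trapezoid
            k = math.log(c1 / c2) / dt
            auc += (c1 - c2) / k
            aumc += (t1 * c1 - t2 * c2) / k + (c1 - c2) / k ** 2
    lz = -_slope(times[-n_terminal:], [math.log(c) for c in conc[-n_terminal:]])
    c_last, t_last = conc[-1], times[-1]
    auc_inf = auc + c_last / lz                 # extrapolation to infinity
    aumc_inf = aumc + c_last * t_last / lz + c_last / lz ** 2
    return {
        "Cmax": max(conc),
        "Tmax": times[conc.index(max(conc))],
        "lambda_z": lz,
        "t_half": math.log(2) / lz,
        "AUC_0_inf": auc_inf,
        "MRT": aumc_inf / auc_inf,
        "Cl_F": dose / auc_inf,                 # dose / AUC
        "V_F": dose * aumc_inf / auc_inf ** 2,  # article's V = dose x AUMC / AUC^2
    }

# Hypothetical concentration-time profile (ug/mL vs h), dose in mg/kg.
times = [0.25, 0.5, 1, 2, 4, 6, 8, 10, 16, 24]
conc = [0.10, 0.40, 0.90, 1.60, 2.30, 2.40, 2.10, 1.70, 1.00, 0.50]
print(nca(times, conc, dose=11.7))
```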
Background: Tumor-associated neoangiogenesis is a crucial target for antitumor therapies. Thalidomide (TAL) is a promising anti-neoangiogenetic drug that has recently been used in the treatment of several malignancies in dogs. Methods: Six healthy adult female Labradors were enrolled according to a randomized single-dose, 2-treatment, 2-phase, paired 2 × 2 cross-over study design. The dogs were administered a single 400 mg capsule of TAL in fasted and fed conditions. Blood was collected from 15 min to 48 h after dosing, and TAL quantified in plasma by a validated high-performance liquid chromatography method. The pharmacokinetics of TAL were analyzed using a non-compartmental approach. Results: TAL concentration was quantifiable up to 10 h and 24 h after fasted and fed conditions, respectively. Cmax (fasted, 1.34 ± 0.12 μg/mL; fed, 2.47 ± 0.19 μg/mL) and Tmax (fasted, 3 h; fed, 10 h) differed substantially between the 2 groups. AUC and t1/2λz were significantly higher in fed (42.46 ± 6.64 mg × h/L; 17.14 ± 4.68 h) compared to fasted (12.38 ± 1.13 mg × h/L; 6.55 ± 1.25 h) dogs. The relative oral bioavailability of TAL for the fasted group was low (36.92% ± 3.28%). Conclusions: Feeding affects the pharmacokinetics of oral TAL in dogs, showing a delayed, but higher absorption with different rate of elimination. These findings are of importance in clinical veterinary settings, and represent a starting point for further related studies.
null
null
7,711
307
[ 103, 582, 185, 139, 409, 235 ]
10
[ "tal", "ml", "dogs", "study", "plasma", "dose", "μg ml", "μg", "10", "mg" ]
[ "thalidomide tal", "pharmacokinetic analyses tal", "tal pharmacokinetics warranted", "chemotherapeutic drugs tal", "introduction thalidomide tal" ]
null
null
null
[CONTENT] Dogs | fasting | meals | pharmacokinetics | thalidomide [SUMMARY]
null
[CONTENT] Dogs | fasting | meals | pharmacokinetics | thalidomide [SUMMARY]
null
[CONTENT] Dogs | fasting | meals | pharmacokinetics | thalidomide [SUMMARY]
null
[CONTENT] Administration, Oral | Angiogenesis Inhibitors | Animals | Area Under Curve | Biological Availability | Chromatography, High Pressure Liquid | Cross-Over Studies | Dogs | Fasting | Female | Half-Life | Thalidomide [SUMMARY]
null
[CONTENT] Administration, Oral | Angiogenesis Inhibitors | Animals | Area Under Curve | Biological Availability | Chromatography, High Pressure Liquid | Cross-Over Studies | Dogs | Fasting | Female | Half-Life | Thalidomide [SUMMARY]
null
[CONTENT] Administration, Oral | Angiogenesis Inhibitors | Animals | Area Under Curve | Biological Availability | Chromatography, High Pressure Liquid | Cross-Over Studies | Dogs | Fasting | Female | Half-Life | Thalidomide [SUMMARY]
null
[CONTENT] thalidomide tal | pharmacokinetic analyses tal | tal pharmacokinetics warranted | chemotherapeutic drugs tal | introduction thalidomide tal [SUMMARY]
null
[CONTENT] thalidomide tal | pharmacokinetic analyses tal | tal pharmacokinetics warranted | chemotherapeutic drugs tal | introduction thalidomide tal [SUMMARY]
null
[CONTENT] thalidomide tal | pharmacokinetic analyses tal | tal pharmacokinetics warranted | chemotherapeutic drugs tal | introduction thalidomide tal [SUMMARY]
null
[CONTENT] tal | ml | dogs | study | plasma | dose | μg ml | μg | 10 | mg [SUMMARY]
null
[CONTENT] tal | ml | dogs | study | plasma | dose | μg ml | μg | 10 | mg [SUMMARY]
null
[CONTENT] tal | ml | dogs | study | plasma | dose | μg ml | μg | 10 | mg [SUMMARY]
null
[CONTENT] tal | shown | il | factor | neoangiogenesis | dose | drug | studies | dogs | reported [SUMMARY]
null
[CONTENT] fed | fasted | normalized | mg | 12 | group | dose expressed mg kg | expressed mg kg | expressed mg | dose expressed mg [SUMMARY]
null
[CONTENT] ml | tal | μg ml | μg | dogs | study | dose | mg | min | usa [SUMMARY]
null
[CONTENT] ||| TAL [SUMMARY]
null
[CONTENT] TAL | up to 10 | 24 | fed ||| 1.34 | 0.12 μg/mL | fed | 2.47 | 0.19 ||| ||| 3 | fed | 10 | 2 ||| fed | 42.46 | 6.64 | × | 17.14 | 12.38 | 1.13 | × | 6.55 | 1.25 ||| TAL | 36.92% | 3.28% [SUMMARY]
null
[CONTENT] ||| TAL ||| Six | Labradors | 2 | 2 | 2 ||| 400 | TAL | fed ||| 15 | 48 | TAL ||| TAL ||| TAL | up to 10 | 24 | fed ||| 1.34 | 0.12 μg/mL | fed | 2.47 | 0.19 ||| ||| 3 | fed | 10 | 2 ||| fed | 42.46 | 6.64 | × | 17.14 | 12.38 | 1.13 | × | 6.55 | 1.25 ||| TAL | 36.92% | 3.28% ||| TAL ||| [SUMMARY]
null
Ensuring hemodialysis adequacy by dialysis dose monitoring with UV spectroscopy analysis of spent dialyzate.
34812071
Patients' session-to-session variation has been shown to influence outcomes, making critical the monitoring of dialysis dose in each session. The aim of this study was to detect the intra-patient variability of blood single pool Kt/V as measured from pre-post dialysis blood urea and from the online tool Adimea®, which measures the ultraviolet absorbance of spent dialyzate.
INTRODUCTION
This open, one-armed, prospective non-interventional study, evaluates patients on bicarbonate hemodialysis or/and on hemodiafiltration. Dialysis was performed with B. Braun Dialog+ machines equipped with Adimea®. In the course of the prospective observation, online monitoring with Adimea® in each session was established without the target warning function being activated. A sample size of 97 patients was estimated.
METHODS
A total of 120 patients were enrolled in six centers in China (mean age 51.5 ± 12.2 years, 86.7% males, 24.2% diabetics). All had an AV-fistula. The proportion of patients with blood Kt/V < 1.20 at baseline was 48.3%. During follow-up with Adimea®, the subgroup with Kt/V > 1.20 at baseline remains at the same adequacy level for more than 90% of the patients. Those with a Kt/V < 1.20 at baseline, showed a significant increase of Kt/V to 60% of the patients reaching the adequacy level >1.20. The coefficient of variation for spKt/V as evaluated by Adimea® was 9.6 ± 3.4%, not significantly different from the 9.6 ± 8.6% as blood Kt/V taken at the same time.
RESULTS
Online monitoring of dialysis dose by Adimea® improves and maintains dialysis adequacy. Implementing online monitoring by Adimea into daily practice moves the quality of dialysis patient care a significant step forward.
CONCLUSION
[ "Adult", "Dialysis Solutions", "Female", "Humans", "Male", "Middle Aged", "Prospective Studies", "Renal Dialysis", "Spectrum Analysis", "Urea" ]
8948370
Introduction
The assessment of dialysis dose is important since extensive evidence has been accumulated on the dependency of survival on the adequacy of treatment. 1 In individual patients, patients’ session-to-session variation 2 has been shown to influence the quality of treatment, making the monitoring of dialysis dose in each session critical. Hemodialysis patient treatment quality is often sub-optimal, 3 with 15% to 20% of all patients receiving an inadequate dialysis dose.4,5 There are many reasons why the planned dialysis dose is not delivered: the prescribed treatment time is not achieved due to cardiovascular and hemodynamic instability or patient refusal, or an inadequate blood pump speed is delivered in combination with sub-optimal needle placement, 6 as well as progressive access malfunction. 7 Regarding the dialysis time achieved per session, Martín Rodriguez et al. 8 analyzed all sessions during 2015 in 85 patients. They found that the median deviation between prescribed and actually delivered time was 9.01 min (1.1–28 min/session) and 15.5% of the patients showed a reduction greater than 10 min in at least 20% of the sessions during the year. Most frequently, the lack of delivery of the prescribed treatment time is associated with central venous catheters as vascular access, older age and a higher Charlson comorbidity index. 8 In addition, patient factors such as frailty, non-adherence as well as organizational factors such as inflexibility of the care process may act as barriers. Online monitoring of each session highlights early adjustments needed for the quality of the delivered care. 9 Real-time integration of information technology systems with hemodialysis machines supports optimization of the delivered dialysis dose, thereby in all probability improving patient morbidity and mortality. 8 The aim of this study was to detect the intra-patient variability of dialysis dose as estimated from pre-post dialysis blood urea derived single pool Kt/V 10 (blood Kt/V) compared to ultraviolet absorbance as a continuous assessment of spent dialyzate (Adimea® Kt/V). 11 The secondary aim was to examine the feasibility of using Adimea® Kt/V to improve dialysis adequacy as stated by the routine blood Kt/V,12,13 evaluating whether the proportion of patients reaching the treatment goal increased during the course of the study. Many years after the first publication of clinical practice guidelines, 14 the gap between prescribed and delivered dialysis dose is still not a marginal problem. Online monitoring tools are available but not fully introduced in dialysis practice in all centers to support clinical staff in establishing reliable processes. As a result, lower levels of dialysis adequacy in terms of weekly treatment time 15 and of dialysis dose15–18 are reported. The present study was conducted in facilities in the People’s Republic of China that had not previously used online monitoring tools, to raise attention to dialysis adequacy in routine daily practice.
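As a rough orientation to the two dose measures compared in this study, the sketch below shows (i) the widely used second-generation Daugirdas formula for single-pool Kt/V from pre- and post-dialysis blood urea, and (ii) a deliberately simplified dialyzate-side estimate based on the decline of UV absorbance over the session. Both functions are illustrative assumptions added for this article; in particular, the second is not the proprietary Adimea® algorithm, only the underlying single-pool idea that ln(absorbance) falls approximately linearly with a slope of K/V.

```python
import math

def sp_ktv_daugirdas(urea_pre, urea_post, t_hours, uf_litres, weight_post_kg):
    """Second-generation Daugirdas estimate of single-pool Kt/V
    from pre- and post-dialysis blood urea (any consistent units)."""
    r = urea_post / urea_pre
    return -math.log(r - 0.008 * t_hours) + (4 - 3.5 * r) * uf_litres / weight_post_kg

def ktv_from_uv_decay(times_h, absorbance):
    """Very simplified dialyzate-side estimate (NOT the Adimea algorithm):
    slope of ln(absorbance) vs time, multiplied by the elapsed treatment time."""
    logs = [math.log(a) for a in absorbance]
    n = len(times_h)
    mt, ml = sum(times_h) / n, sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times_h, logs))
             / sum((t - mt) ** 2 for t in times_h))
    return -slope * (times_h[-1] - times_h[0])

# Example values are purely illustrative, not study data.
print(round(sp_ktv_daugirdas(25.0, 8.0, t_hours=4.0, uf_litres=2.5, weight_post_kg=70.0), 2))
print(round(ktv_from_uv_decay([0.5, 1, 2, 3, 4], [0.90, 0.78, 0.58, 0.44, 0.33]), 2))
```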
Patients and methods
This is an open, one-armed, prospective non-interventional study, evaluating patients on bicarbonate hemodialysis or on hemodiafiltration. Patients had to meet the following inclusion criteria for enrollment: (1) treated on chronic hemodialysis for at least 6 months on 3 sessions/week schedule, (2) arterio-venous fistula as vascular access, (3) documented spKt/V calculated from pre-post-dialysis urea 10 from 1.0 to 1.4 over the last 3 months or (4) average of spKt/V < 1.35 out of three consecutive blood measurements, (5) age ⩾ 18 years, and (6) voluntary participation and written informed consent. Patients with the following conditions were excluded: (1) presence of severe hematologic disorders (e.g. multiple myeloma), (2) life expectancy less than 6 months, (3) treatment with single-needle dialysis, and (4) ongoing Kt/V monitoring by Adimea® (B. Braun Avitum, Melsungen, Germany). In the course of the prospective observation, online monitoring with Adimea® in each session was established without the target warning function of Adimea® being activated. Therefore, in case the operative conditions were not compatible with the adequacy target, the machine displayed the discrepancy, but did not generate an alarm. After informed consent, baseline and retrospective data collection, patients were observed for a period of 6 months prospectively; data from conventional hemodialysis or hemodiafiltration were collected (Figure 1). Adimea® Kt/V, blood Kt/V, blood values, prescription data, and treatment adjustments made during the therapy as well as any adverse events were recorded in an electronic clinical report form. Patients were treated according to prescription by their physician and examined according to their individual status. Study flow-chart including selection criteria and study population. All data captured during the study were obtained from routine clinical care assessments. Dialysis was performed with B. Braun Dialog+ machines equipped with Adimea®. The data collection was strictly anonymous. Adimea® Kt/V Adimea® system utilizes the principles of spectroscopy. A light source transmits ultraviolet (UV) light through the dialyzate. Some molecules contained in the dialyzate, which are removed from the plasma during dialysis, absorb the light. This absorption is measured by a sensor. It has been proven, 19 that the UV absorption measurements can be used to determine the dialysis dose as there is a very close linear correlation between the measured UV absorption signal and the urea in the dialyzate. Adimea® has been validated for hemodialysis, hemodiafiltration, and single needle dialysis. 11 Adimea® system utilizes the principles of spectroscopy. A light source transmits ultraviolet (UV) light through the dialyzate. Some molecules contained in the dialyzate, which are removed from the plasma during dialysis, absorb the light. This absorption is measured by a sensor. It has been proven, 19 that the UV absorption measurements can be used to determine the dialysis dose as there is a very close linear correlation between the measured UV absorption signal and the urea in the dialyzate. Adimea® has been validated for hemodialysis, hemodiafiltration, and single needle dialysis. 11 Study approval and patient consent Trial registration was not compulsory because of the observational nature of the study. The protocol was submitted and approved by the Ethical Committee of each participating hospital from April 2, 2013 to March 14, 2014. All participating patients signed a written informed consent. 
Trial registration was not compulsory because of the observational nature of the study. The protocol was submitted and approved by the Ethical Committee of each participating hospital from April 2, 2013 to March 14, 2014. All participating patients signed a written informed consent. Sample size A sample size of N = 97 patients had 90% power to detect a difference in within-patient standard deviation of 0.01 (assuming a within-patient standard deviation of 0.07 for Adimea® Kt/V measurement and of 0.08 for blood Kt/V) and taking a common standard deviation of the differences of 0.03, using a paired t-test with a 0.05 two-sided significance level. To account for early discontinued patients with less than three monthly blood and Adimea® Kt/V measurements, a total of N = 120 enrolled patients were planned. This sample size was also considered sufficient to detect a difference of 0.037 between the within-patient mean values of the two procedures, using a paired t-test and assuming an overall standard deviation of 0.13, a power of 80%. A sample size of N = 97 patients had 90% power to detect a difference in within-patient standard deviation of 0.01 (assuming a within-patient standard deviation of 0.07 for Adimea® Kt/V measurement and of 0.08 for blood Kt/V) and taking a common standard deviation of the differences of 0.03, using a paired t-test with a 0.05 two-sided significance level. To account for early discontinued patients with less than three monthly blood and Adimea® Kt/V measurements, a total of N = 120 enrolled patients were planned. This sample size was also considered sufficient to detect a difference of 0.037 between the within-patient mean values of the two procedures, using a paired t-test and assuming an overall standard deviation of 0.13, a power of 80%. Statistical analysis Descriptive parameters are presented according to the type of variable evaluated (mean, standard deviation, median, upper and lower quartile for continuous data; counts, absolute and relative frequencies for categorical and ordinal data). Descriptive parameters are presented according to the type of variable evaluated (mean, standard deviation, median, upper and lower quartile for continuous data; counts, absolute and relative frequencies for categorical and ordinal data). Primary endpoint The primary endpoint was the session to session variability of Adimea® Kt/V and by blood Kt/V, assessed by the within-patient standard deviation (SD) and the derived coefficient of variability (SD/mean × 100) of the values using all available time points. The mean of the SDs were calculated and statistically compared using a paired t-test. In addition to the paired t-test, also 95% confidence intervals (CI) for the mean SD Adimea® and blood Kt/V were calculated. The level of statistical significance is assumed to be α = 0.05. Statistical analyses were conducted using IBM® SPSS® statistics version 24. The primary endpoint was the session to session variability of Adimea® Kt/V and by blood Kt/V, assessed by the within-patient standard deviation (SD) and the derived coefficient of variability (SD/mean × 100) of the values using all available time points. The mean of the SDs were calculated and statistically compared using a paired t-test. In addition to the paired t-test, also 95% confidence intervals (CI) for the mean SD Adimea® and blood Kt/V were calculated. The level of statistical significance is assumed to be α = 0.05. Statistical analyses were conducted using IBM® SPSS® statistics version 24.
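The sample-size statement above can be reconstructed with the usual normal-approximation formula for a two-sided paired test, n = ((z1-α/2 + z1-β) · σd / δ)². The short sketch below is an illustration of that standard calculation, not necessarily the software the authors used; with their assumptions it returns approximately 95 patients for the primary comparison (close to 97 once the t-distribution correction is added) and about 97 for the secondary one, consistent with the planned 97 evaluable and 120 enrolled patients.

```python
from statistics import NormalDist
from math import ceil

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.90):
    """Normal-approximation sample size for a two-sided paired comparison."""
    z = NormalDist().inv_cdf
    return ceil(((z(1 - alpha / 2) + z(power)) * sd_diff / delta) ** 2)

# Primary endpoint: detect a 0.01 difference in within-patient SD,
# SD of the paired differences assumed 0.03, 90% power.
print(paired_sample_size(delta=0.01, sd_diff=0.03, power=0.90))

# Secondary consideration: detect a 0.037 difference between within-patient
# mean values, overall SD 0.13, 80% power.
print(paired_sample_size(delta=0.037, sd_diff=0.13, power=0.80))
```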
Results
Table 1 reports the characteristics of the 120 patients enrolled in six centers in China. The mean age was 51.5 ± 12.2 years and 86.7% were males. The proportion of diabetics was 24.2%, and all patients had an AV-fistula for vascular access. When the patients were stratified by monthly baseline blood Kt/V (below vs greater than or equal to 1.20), the proportion of patients with an inadequate dialysis dose (Kt/V < 1.20) was 48.3%. This group included a significantly greater proportion of males (95% vs 79%), and its patients were significantly taller (174 vs 169 cm) and had a greater dry body weight (73 vs 62 kg). Demographics, comorbidities and main baseline lab results of enrolled patients by baseline Kt/V. Table 2 reports the main features of the extracorporeal dialysis treatment at baseline and during the 6-month follow-up stratified by baseline blood Kt/V. At baseline, 95.8% of the patients were on 4-h treatment/session and 3.3% were on 4.5 h treatment/session. The mean blood flow was 261 ± 28 mL/min. Dialyzer surface area ranged between 1.20 m² (17.5%) and 1.80 m² (24.2%). About 80.0% of the patients were on high-flux dialysis, 4.2% on on-line hemodiafiltration, with only marginal differences between the two groups stratified by level of dialysis dose (Table 2). Blood Kt/V was 1.21 ± 0.14, with 62/120 having a blood Kt/V ⩾ 1.20 and 58/120 below the target. Figure 2 shows the development of blood Kt/V during follow-up. The group with Kt/V ⩾ 1.20 at baseline increased on average, and more than 90% of these patients remained at the ⩾1.20 adequacy level. The other group, in which all patients had a blood Kt/V < 1.20 at baseline, showed a significant increase in blood Kt/V in the first 20 days of follow-up with dialysis sessions monitored by Adimea®. As a result, the proportion of patients in the adequacy level ⩾1.20 increased from 0% to about 60% and was maintained for the full follow-up. Means and standard deviations of Adimea® Kt/V during follow-up in patients with baseline blood Kt/V ⩾ 1.20 and those with blood Kt/V < 1.20 are shown in Figure 3. Main characteristics of the dialysis treatment at baseline and during follow-up by baseline blood Kt/V. Development of blood Kt/V from baseline (without Adimea® monitoring) and during Adimea® monitoring follow-up by baseline blood Kt/V: (a) mean and standard deviation of blood Kt/V and (b) proportion of patients reaching the blood Kt/V target of 1.20. Development of Adimea® Kt/V during Adimea® monitoring follow-up by baseline blood Kt/V. Figure 4 shows the Bland and Altman graph of those sessions evaluated by Adimea® Kt/V as well as blood Kt/V (see also supplemental material). According to the primary aim of the study, the coefficient of variation (CV) for Adimea® Kt/V was measured and compared to that of blood Kt/V. The CV of Adimea® Kt/V was 9.6 ± 3.4%, not significantly different from the 9.6 ± 8.6% of blood Kt/V. The distribution of the CV of Adimea® Kt/V was very similar to that of blood Kt/V, with most of the patients below a CV of 10% (data not shown). Bland and Altman graphs showing distribution of difference between Adimea® and blood Kt/V in respect of their mean. Lines represent mean (- - -) and 95% confidence interval (-----).
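The two descriptive analyses reported here, the per-patient coefficient of variation of repeated Kt/V values and the Bland-Altman comparison of paired Adimea® and blood Kt/V, are straightforward to compute. The sketch below illustrates both with made-up numbers; it is not the study dataset or the authors' SPSS code.

```python
from statistics import mean, stdev

def within_patient_cv(values):
    """CV (%) = SD / mean * 100 of one patient's repeated Kt/V values."""
    return stdev(values) / mean(values) * 100

def bland_altman(adimea, blood):
    """Mean bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(adimea, blood)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative values only.
ktv_one_patient = [1.28, 1.22, 1.35, 1.19, 1.30, 1.25]
print(f"within-patient CV: {within_patient_cv(ktv_one_patient):.1f}%")

adimea = [1.30, 1.18, 1.41, 1.25, 1.22, 1.35]
blood = [1.25, 1.22, 1.38, 1.20, 1.28, 1.30]
bias, loa = bland_altman(adimea, blood)
print(f"bias = {bias:.3f}, 95% limits of agreement = ({loa[0]:.3f}, {loa[1]:.3f})")
```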
null
null
[ "Adimea® Kt/V", "Study approval and patient consent", "Sample size", "Statistical analysis", "Primary endpoint" ]
[ "Adimea® system utilizes the principles of spectroscopy. A light source transmits ultraviolet (UV) light through the dialyzate. Some molecules contained in the dialyzate, which are removed from the plasma during dialysis, absorb the light. This absorption is measured by a sensor. It has been proven,\n19\n that the UV absorption measurements can be used to determine the dialysis dose as there is a very close linear correlation between the measured UV absorption signal and the urea in the dialyzate. Adimea® has been validated for hemodialysis, hemodiafiltration, and single needle dialysis.\n11\n", "Trial registration was not compulsory because of the observational nature of the study. The protocol was submitted and approved by the Ethical Committee of each participating hospital from April 2, 2013 to March 14, 2014.\nAll participating patients signed a written informed consent.", "A sample size of N = 97 patients had 90% power to detect a difference in within-patient standard deviation of 0.01 (assuming a within-patient standard deviation of 0.07 for Adimea® Kt/V measurement and of 0.08 for blood Kt/V) and taking a common standard deviation of the differences of 0.03, using a paired t-test with a 0.05 two-sided significance level. To account for early discontinued patients with less than three monthly blood and Adimea® Kt/V measurements, a total of N = 120 enrolled patients were planned.\nThis sample size was also considered sufficient to detect a difference of 0.037 between the within-patient mean values of the two procedures, using a paired t-test and assuming an overall standard deviation of 0.13, a power of 80%.", "Descriptive parameters are presented according to the type of variable evaluated (mean, standard deviation, median, upper and lower quartile for continuous data; counts, absolute and relative frequencies for categorical and ordinal data).", "The primary endpoint was the session to session variability of Adimea® Kt/V and by blood Kt/V, assessed by the within-patient standard deviation (SD) and the derived coefficient of variability (SD/mean × 100) of the values using all available time points.\nThe mean of the SDs were calculated and statistically compared using a paired t-test.\nIn addition to the paired t-test, also 95% confidence intervals (CI) for the mean SD Adimea® and blood Kt/V were calculated.\nThe level of statistical significance is assumed to be α = 0.05. Statistical analyses were conducted using IBM® SPSS® statistics version 24." ]
[ null, null, null, null, null ]
[ "Introduction", "Patients and methods", "Adimea® Kt/V", "Study approval and patient consent", "Sample size", "Statistical analysis", "Primary endpoint", "Results", "Discussion", "Supplemental Material" ]
[ "The assessment of dialysis dose is important since extensive evidence has been accumulated on the dependency of survival on the adequacy of treatment.\n1\n In individual patients, patients’ session-to-session variation\n2\n has been shown to influence quality of treatment, so making the monitoring of dialysis dose in each session critical. Hemodialysis patient treatment quality is often sub-optimal,\n3\n with 15% to 20% of all patients receiving an inadequate dialysis dose.4,5 There are many reasons why the planned dialysis dose is not delivered: the prescribed treatment time is not achieved due to cardiovascular and hemodynamic instability or patient refusal. Inadequate blood pump speed is delivered in combination with sub-optimal needle placement,\n6\n as well as progressive access malfunction.\n7\n Regarding the dialysis time achieved per session, Martín Rodriguez et al.\n8\n analyzed all sessions during 2015 in 85 patients. They found that the median deviation between prescribed time with actual delivered time was of 9.01 min (1.1–28 min/session) and 15.5% of the patients showed a reduction higher than 10 min in at least 20% of the sessions during the year.\nMost frequently, the lack of delivery of the prescribed treatment time is associated with central venous catheters as vascular access, older age and a higher Charlson comorbidity index.\n8\n In addition, patient factors such as frailty, non-adherence as well as organizational factors such as inflexibility of the care process may act as barriers. Online monitoring of each session highlights early adjustments needed for the quality of the delivered care.\n9\n Real-time integration of information technology systems with hemodialysis machines support optimizing the delivered dialysis dose, thereby in all probability improving patient morbidity and mortality.\n8\n\nThe aim of this study was to detect the intra-patient variability of dialysis dose as estimated from pre-post dialysis blood urea derived single pool Kt/V\n10\n (blood Kt/V) compared to ultraviolet absorbance as a continuous assessment of spent dialyzate (Adimea® Kt/V).\n11\n The secondary aim was to examine the feasibility of using Adimea® Kt/V to improve dialysis adequacy as stated by the routine blood Kt/V,12,13 evaluating whether the proportion of patients reaching the treatment goal increased during the course of the study. Many years after the first publication of clinical practice guidelines,\n14\n the lack of prescribed versus delivered dialysis dose is still not a marginal problem. Online monitoring tools are available but not fully introduced in dialysis practice in all centers to support clinical staff in establishing reliable processes.\nAs a result, lower levels of dialysis adequacy in terms of weekly treatment time\n15\n and of dialysis dose15–18 are reported. 
The present study was conducted in the People’s Republic of China facilities not having previously used online monitoring tools, to raise attention in the routine daily practice for dialysis adequacy.", "This is an open, one-armed, prospective non-interventional study, evaluating patients on bicarbonate hemodialysis or on hemodiafiltration.\nPatients had to meet the following inclusion criteria for enrollment: (1) treated on chronic hemodialysis for at least 6 months on 3 sessions/week schedule, (2) arterio-venous fistula as vascular access, (3) documented spKt/V calculated from pre-post-dialysis urea\n10\n from 1.0 to 1.4 over the last 3 months or (4) average of spKt/V < 1.35 out of three consecutive blood measurements, (5) age ⩾ 18 years, and (6) voluntary participation and written informed consent. Patients with the following conditions were excluded: (1) presence of severe hematologic disorders (e.g. multiple myeloma), (2) life expectancy less than 6 months, (3) treatment with single-needle dialysis, and (4) ongoing Kt/V monitoring by Adimea® (B. Braun Avitum, Melsungen, Germany). In the course of the prospective observation, online monitoring with Adimea® in each session was established without the target warning function of Adimea® being activated. Therefore, in case the operative conditions were not compatible with the adequacy target, the machine displayed the discrepancy, but did not generate an alarm. After informed consent, baseline and retrospective data collection, patients were observed for a period of 6 months prospectively; data from conventional hemodialysis or hemodiafiltration were collected (Figure 1). Adimea® Kt/V, blood Kt/V, blood values, prescription data, and treatment adjustments made during the therapy as well as any adverse events were recorded in an electronic clinical report form. Patients were treated according to prescription by their physician and examined according to their individual status.\nStudy flow-chart including selection criteria and study population.\nAll data captured during the study were obtained from routine clinical care assessments. Dialysis was performed with B. Braun Dialog+ machines equipped with Adimea®. The data collection was strictly anonymous.\nAdimea® Kt/V Adimea® system utilizes the principles of spectroscopy. A light source transmits ultraviolet (UV) light through the dialyzate. Some molecules contained in the dialyzate, which are removed from the plasma during dialysis, absorb the light. This absorption is measured by a sensor. It has been proven,\n19\n that the UV absorption measurements can be used to determine the dialysis dose as there is a very close linear correlation between the measured UV absorption signal and the urea in the dialyzate. Adimea® has been validated for hemodialysis, hemodiafiltration, and single needle dialysis.\n11\n\nAdimea® system utilizes the principles of spectroscopy. A light source transmits ultraviolet (UV) light through the dialyzate. Some molecules contained in the dialyzate, which are removed from the plasma during dialysis, absorb the light. This absorption is measured by a sensor. It has been proven,\n19\n that the UV absorption measurements can be used to determine the dialysis dose as there is a very close linear correlation between the measured UV absorption signal and the urea in the dialyzate. 
Adimea® has been validated for hemodialysis, hemodiafiltration, and single needle dialysis.\n11\n\nStudy approval and patient consent Trial registration was not compulsory because of the observational nature of the study. The protocol was submitted and approved by the Ethical Committee of each participating hospital from April 2, 2013 to March 14, 2014.\nAll participating patients signed a written informed consent.\nTrial registration was not compulsory because of the observational nature of the study. The protocol was submitted and approved by the Ethical Committee of each participating hospital from April 2, 2013 to March 14, 2014.\nAll participating patients signed a written informed consent.\nSample size A sample size of N = 97 patients had 90% power to detect a difference in within-patient standard deviation of 0.01 (assuming a within-patient standard deviation of 0.07 for Adimea® Kt/V measurement and of 0.08 for blood Kt/V) and taking a common standard deviation of the differences of 0.03, using a paired t-test with a 0.05 two-sided significance level. To account for early discontinued patients with less than three monthly blood and Adimea® Kt/V measurements, a total of N = 120 enrolled patients were planned.\nThis sample size was also considered sufficient to detect a difference of 0.037 between the within-patient mean values of the two procedures, using a paired t-test and assuming an overall standard deviation of 0.13, a power of 80%.\nA sample size of N = 97 patients had 90% power to detect a difference in within-patient standard deviation of 0.01 (assuming a within-patient standard deviation of 0.07 for Adimea® Kt/V measurement and of 0.08 for blood Kt/V) and taking a common standard deviation of the differences of 0.03, using a paired t-test with a 0.05 two-sided significance level. To account for early discontinued patients with less than three monthly blood and Adimea® Kt/V measurements, a total of N = 120 enrolled patients were planned.\nThis sample size was also considered sufficient to detect a difference of 0.037 between the within-patient mean values of the two procedures, using a paired t-test and assuming an overall standard deviation of 0.13, a power of 80%.\nStatistical analysis Descriptive parameters are presented according to the type of variable evaluated (mean, standard deviation, median, upper and lower quartile for continuous data; counts, absolute and relative frequencies for categorical and ordinal data).\nDescriptive parameters are presented according to the type of variable evaluated (mean, standard deviation, median, upper and lower quartile for continuous data; counts, absolute and relative frequencies for categorical and ordinal data).\nPrimary endpoint The primary endpoint was the session to session variability of Adimea® Kt/V and by blood Kt/V, assessed by the within-patient standard deviation (SD) and the derived coefficient of variability (SD/mean × 100) of the values using all available time points.\nThe mean of the SDs were calculated and statistically compared using a paired t-test.\nIn addition to the paired t-test, also 95% confidence intervals (CI) for the mean SD Adimea® and blood Kt/V were calculated.\nThe level of statistical significance is assumed to be α = 0.05. 
Statistical analyses were conducted using IBM® SPSS® statistics version 24.\nThe primary endpoint was the session to session variability of Adimea® Kt/V and by blood Kt/V, assessed by the within-patient standard deviation (SD) and the derived coefficient of variability (SD/mean × 100) of the values using all available time points.\nThe mean of the SDs were calculated and statistically compared using a paired t-test.\nIn addition to the paired t-test, also 95% confidence intervals (CI) for the mean SD Adimea® and blood Kt/V were calculated.\nThe level of statistical significance is assumed to be α = 0.05. Statistical analyses were conducted using IBM® SPSS® statistics version 24.", "Adimea® system utilizes the principles of spectroscopy. A light source transmits ultraviolet (UV) light through the dialyzate. Some molecules contained in the dialyzate, which are removed from the plasma during dialysis, absorb the light. This absorption is measured by a sensor. It has been proven,\n19\n that the UV absorption measurements can be used to determine the dialysis dose as there is a very close linear correlation between the measured UV absorption signal and the urea in the dialyzate. Adimea® has been validated for hemodialysis, hemodiafiltration, and single needle dialysis.\n11\n", "Trial registration was not compulsory because of the observational nature of the study. The protocol was submitted and approved by the Ethical Committee of each participating hospital from April 2, 2013 to March 14, 2014.\nAll participating patients signed a written informed consent.", "A sample size of N = 97 patients had 90% power to detect a difference in within-patient standard deviation of 0.01 (assuming a within-patient standard deviation of 0.07 for Adimea® Kt/V measurement and of 0.08 for blood Kt/V) and taking a common standard deviation of the differences of 0.03, using a paired t-test with a 0.05 two-sided significance level. To account for early discontinued patients with less than three monthly blood and Adimea® Kt/V measurements, a total of N = 120 enrolled patients were planned.\nThis sample size was also considered sufficient to detect a difference of 0.037 between the within-patient mean values of the two procedures, using a paired t-test and assuming an overall standard deviation of 0.13, a power of 80%.", "Descriptive parameters are presented according to the type of variable evaluated (mean, standard deviation, median, upper and lower quartile for continuous data; counts, absolute and relative frequencies for categorical and ordinal data).", "The primary endpoint was the session to session variability of Adimea® Kt/V and by blood Kt/V, assessed by the within-patient standard deviation (SD) and the derived coefficient of variability (SD/mean × 100) of the values using all available time points.\nThe mean of the SDs were calculated and statistically compared using a paired t-test.\nIn addition to the paired t-test, also 95% confidence intervals (CI) for the mean SD Adimea® and blood Kt/V were calculated.\nThe level of statistical significance is assumed to be α = 0.05. Statistical analyses were conducted using IBM® SPSS® statistics version 24.", "Table 1 reports the characteristics of the 120 patients enrolled in six centers in China. The mean age was 51.5 ± 12.2 years and 86.7% were males. The proportion of diabetics was 24.2% and all of them were with an AV-fistula for vascular access. 
Stratifying the patients by their monthly baseline blood Kt/V lower or greater/equal than 1.20, the proportion of patients with inadequate dialysis dose <1.20 Kt/V was 48.3%. This group of patients included a significant greater proportion of males (95% vs 79%), patients significantly taller (174 vs 169 cm), and with greater dry body weight (73 vs 62 kg).\nDemographics, comorbidities and main baseline lab results of enrolled patients by baseline Kt/V.\nTable 2 reports the main features of the extracorporeal dialysis treatment at baseline and during the 6-month follow-up stratified by baseline blood Kt/V. At baseline, 95.8% of the patients were on 4-h treatment/session and 3.3% were on 4.5 h treatment/session. The mean blood flow was 261 ± 28 mL/min. Dialyzer surface area ranged between 1.20 (17.5%) and 1.80 m\n2\n (24.2%). About 80.0% of the patients were on high-flux dialysis, 4.2% on on-line hemodiafiltration, with only marginal differences between the two groups stratified by level of dialysis dose (Table 2). Blood Kt/V was 1.21 ± 0.14, with 62/120 having a blood Kt/V ⩾ 1.20 and 58/120 below the target. Figure 2 shows the development of blood Kt/V during follow-up. The group ⩾1.20 at baseline increased on average and remained ⩾1.20 adequacy level for more than 90% of the patients. The other group, with all patients with a blood Kt/V < 1.20 at baseline, showed a significant increase of blood Kt/V in the first 20 days of follow-up with dialysis sessions monitored by Adimea®. As a result, the proportion of patients in the adequacy level ⩾1.20 increased from 0% to about 60% and was maintained for the full follow-up. Means and standard deviations of Adimea® Kt/V during follow-up in patients with baseline blood Kt/V ⩾ 1.20 and those with blood Kt/V < 1.20 are shown in Figure 3.\nMain characteristics of the dialysis treatment at baseline and during follow-up by baseline blood Kt/V.\nDevelopment of blood Kt/V from baseline (without Adimea® monitoring) and during Adimea® monitoring follow-up by baseline blood Kt/V: (a) mean and standard deviation of blood Kt/V and (b) proportion of patients reaching the blood Kt/V target of 1.20.\nDevelopment of Adimea® Kt/V during Adimea® monitoring follow-up by baseline blood Kt/V.\nFigure 4 shows the Bland and Altman graph of those sessions evaluated by Adimea® Kt/V as well as blood Kt/V (see also supplemental material). According to the primary aim of the study, the coefficient of variation (CV) for Adimea® Kt/V was measured and compared to the blood Kt/V. The results of Adimea Kt/V were 9.6 ± 3.4%, not significantly different from the 9.6 ± 8.6% of blood Kt/V. The distribution of the CV of Adimea® Kt/V was very similar to that of blood Kt/V, with most of the patients below 10% of CV (data not shown).\nBland and Altman graphs showing distribution of difference between Adimea® and blood Kt/V in respect of their mean. Lines represent mean (- - -) and 95% confidence interval (-----).", "This study evaluated the reliability of monitoring the dialysis dose by Adimea® Kt/V in each session and the intra-patient variability of dialysis dose in comparison to the reference method of blood Kt/V, confirming that online monitoring by Adimea® according to Bland and Altman graph (Figure 4) is a reliable method in monitoring dialysis dose and essentially contributes to improvement and maintaining of dialysis adequacy (Figure 2).\n20\n\nThe main aim of this study was to compare the variability of blood KT/V and online Kt/V dialysis dose by Adimea®. 
The result is a non-significant difference between the standard deviations of Adimea® and blood Kt/V. In addition, the coefficients of variation were of 9.6 ± 3.4% and 9.6 ± 8.6% by the online and the blood based method, respectively (p = NS). The range is comparable to evaluation by McIntyre et al. in 26 patients with ionic dialysance (13%) and pre-post dialysis urea blood Kt/V (11%) with no statistical difference.\nWhat are the components of intra-patient variability in Kt/V? Dealing with blood Kt/V, the accuracy of the measurement of dialysis dose depends on the accuracy and timing of drawing the sample and laboratory errors.\n21\n By using Adimea® throughout the full course of the dialysis session it was expected to decrease intra-patient variability, but the results of this study did not confirm the hypothesis. A partial answer to this apparent contradiction is probably the frequency of sampling. The intra-patient variability when dialysis dose is monitored at every session can be higher mainly because of the variability in delivered treatment time and blood flow, as pointed out by Lambie et al.\n22\n Other factors subject to practice variability (i.e. needle insertion\n6\n), needle size), patient related factors (i.e. hemodynamic condition, blood flow rate). Organizational factors in dialysis units (e.g. shift planning\n23\n) may affect delivered dialysis dose by indirectly affecting delivered treatment time. Another study,\n24\n following-up dialysis adequacy of 11 patients on hemodiafiltration with online urea monitoring by an experimental (but reliable) system based on a urease cartridge,\n25\n found high intra-patient variability, and reported in six stable patients a coefficient of variability of 9.5% and 5.2% for urea generation rate and Kt/V, respectively.\nAt baseline, in our study a high proportion of patients (48.3%) did not reach the minimum level of dialysis adequacy (according to blood Kt/V ⩾ 1.20). The high proportion of patients with blood Kt/V < 1.20 is likely due to study selection criteria (spKt/V in the range 1.00–1.40 or an average <1.35). Patients not reaching the level of adequacy were more likely to be males, were significantly taller and with greater body weight. However, despite their higher therapy needs in term of treatment time, blood flow and/or dialyzer surface, all dialysis parameters were quite similar to those of the other group adequately treated since baseline. During follow-up 60% of the patients originally with a blood Kt/V < 1.20 (58 out of 120) reached the target level of adequacy. Table 2 shows that the prescription of treatment time, blood flow etc. remained stable with minimal modifications and therefore, other causes might have increased delivered dialysis dose. A variable level of compliance to the prescription of the physician due to several reasons.26–28 Each additional minute of effective treatment time is associated with a significant 3.6% higher probability to achieve the level of dialysis adequacy.\n29\n Rocco and Burkart\n30\n described that a loss of 30 min of treatment time translates into an effective delivered Kt/V of 0.88 instead of the originally planned 1.05 according to former guidelines. A monitoring system such as Adimea® is of practical benefit because it allows to uncover those sessions not going to reach the Kt/V target value whilst the session is still running. 
In fact, predicting during dialysis that an inadequate dose will be delivered enables a modification of the critical treatment parameters, including prolonging the session by the few minutes required to fulfill the quality requirements. By showing the prediction graph on the machine screen, the forecasted dialysis dose and the implications of the actual operating conditions can be explained to the patient. When treatment parameters change, for example after increasing blood flow, the new trajectory toward the target is promptly displayed within a few minutes.\n31\n Therefore, if a low initial blood flow has not been increased after the beginning of the session, the system will alarm, showing a negative forecast of the final Kt/V. The same situation occurs if the blood flow is decreased during the treatment and not increased again, in case of disturbances due to needle placement or arm position, or if the dialyzer loses performance because of fiber clotting. Even though the alarm option of Adimea® was not activated in this study, the efficacy of the system is demonstrated by the sustained achievement of the dialysis dose during follow-up. It is interesting to note that the Adimea® Kt/V of the sessions selected for blood Kt/V evaluation was not statistically different from that of the other sessions on the same weekdays (data not shown). Doubts about the representativeness of once-a-month Kt/V have been raised by several publications.7,32,33 Without on-line monitoring, the testing day is the one session in which nurses and patients know they are under scrutiny and, not surprisingly, this session is very likely to receive particular attention. Two main options are commercially available to estimate online dialysis dose.\n34\n On the basis of the equivalence of urea clearance to ionic dialysance, Kt/V has been calculated using the mean dialysance\n35\n intermittently estimated every 45 min, that is, 6 times over a 4-h treatment.\n36\n The ionic dialysance approach relies on the assumption that plasma conductivity remains stable during the measurement period, and actual alterations in plasma conductivity could have a significant impact on the accuracy of the measurement.\n36\n Moreover, this approach carries a risk of salt loading due to the spiking of dialyzate sodium up to 155 mEq/L required for these measurements.\n37\n Even if on average previous studies did not find any evidence of clinically relevant salt loading during ionic dialysance measurements,38,39 patients with a lower predialysis sodium concentration had the greatest net gain during the sessions.\n39\n The intermittent approach of this method could also miss problems occurring between the dialysance tests, such as alarms causing blood pump stops and/or dialysis fluid bypass. With a continuous method such as UV absorbance monitoring of spent dialyzate, the immediate recognition of low clearances can trigger prompt interventions correcting or mitigating possible issues arising during treatment.\n40\n It can also be applied not only in double- but also in single-needle hemodialysis, as opposed to the ionic dialysance method.\n41\n\nThis study has strengths and limitations. The main limitation, a consequence of its observational nature, is that the sequence of sessions monitored or not monitored by Adimea® was not randomized. Thus, the benefit is judged on the basis of the improvement from baseline. 
Because of the inclusion criterion requiring the last three spKt/V evaluations to be in the range 1.00–1.40 and the exclusion of patients with catheters, the study likely selected patients with lower baseline Kt/V variability, thereby decreasing the probability of detecting differences in variability between blood Kt/V and Adimea® Kt/V. The enrolled patients, being relatively young and mainly males with an AV-fistula for vascular access, might not fully represent the current population on dialysis in other countries. Considering that monitoring the dialysis dose improved the quality of treatment in this young population, an even higher benefit is expected in elderly, frail patients.\n42\n Older patients are at higher risk of receiving a lower dialysis dose,\n43\n but when adequately dialyzed they have a good quality of life.\n44\n Given that their variability of dialysis dose is expected to be greater, real-time transparency on the delivered dialysis dose should support nurses in governing modifiable factors such as treatment time or blood flow during the therapy.\nIn summary, this study confirms that simple-to-use integrated clinical tools\n45\n that continuously monitor the delivered dose of dialysis have clear clinical utility, even in a young population with an AV fistula. About 60% of the patients not reaching the Kt/V target at baseline were able to consistently achieve Kt/V adequacy during the study in monitoring-naïve centers. Thus, a major component of this success is an improved understanding by the staff and the patients, empowering both groups for shared decision making. In frail patients, with vascular instability and frequent episodes of intradialytic hypotension as well as a high proportion of catheters used for vascular access, monitoring is expected to produce an even higher benefit.42,46 The dialysis machine using the Adimea® tool alarms when the dose target is likely to be missed and updates the target forecast as treatment conditions change. An automatic control that continuously adjusts factors such as blood flow, dialyzate flow and treatment time (within the limits prescribed for the individual patient) to ensure achievement of the target dose might be integrated.\n47\n", "Click here for additional data file.\nSupplemental material, sj-pdf-1-jao-10.1177_03913988211059841 for Ensuring hemodialysis adequacy by dialysis dose monitoring with UV spectroscopy analysis of spent dialyzate by Li Zhang, Wenhu Liu, Chuanming Hao, Yani He, Ye Tao, Shiren Sun, Marten Jakob, Daniele Marcelli, Claudia Barth and Xiangmei Chen in The International Journal of Artificial Organs" ]
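The results and discussion above hinge on two computations: the Bland and Altman agreement between Adimea® and blood Kt/V, and the within-patient coefficient of variation compared with a paired t-test. The Python sketch below illustrates both on synthetic data; the data-frame layout, column names, and simulated values are assumptions for illustration only, not the study's actual analysis (which was run in IBM SPSS Statistics).

```python
# Illustrative sketch only: Bland-Altman agreement and within-patient coefficient
# of variation (CV) for paired Kt/V readings. Column names and the synthetic data
# are assumptions; the study itself used IBM SPSS Statistics, not this code.
import numpy as np
import pandas as pd
from scipy import stats


def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width


def within_patient_cv(df, value_col):
    """Per-patient CV (%) = 100 * SD / mean across that patient's sessions."""
    grouped = df.groupby("patient_id")[value_col]
    return 100.0 * grouped.std(ddof=1) / grouped.mean()


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic stand-in for the paired monthly sessions (6 per patient, 120 patients).
    sessions = pd.DataFrame({"patient_id": np.repeat(np.arange(120), 6)})
    sessions["blood_ktv"] = rng.normal(1.25, 0.12, len(sessions))
    sessions["adimea_ktv"] = sessions["blood_ktv"] + rng.normal(0.0, 0.05, len(sessions))

    bias, low, high = bland_altman(sessions["adimea_ktv"], sessions["blood_ktv"])
    print(f"Bland-Altman bias {bias:.3f}, limits of agreement [{low:.3f}, {high:.3f}]")

    cv_adimea = within_patient_cv(sessions, "adimea_ktv")
    cv_blood = within_patient_cv(sessions, "blood_ktv")
    # Paired comparison of within-patient variability, mirroring the primary endpoint.
    t_stat, p_value = stats.ttest_rel(cv_adimea, cv_blood)
    print(f"mean CV Adimea {cv_adimea.mean():.1f}% vs blood {cv_blood.mean():.1f}% (p={p_value:.2f})")
```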
[ "intro", "methods", null, null, null, null, null, "results", "discussion", "supplementary-material" ]
[ "Dialysis adequacy", "artificial kidney", "apheresis & detoxification techniques", "hemodialysis", "hemodiafiltration", "chronic renal failure", "compliance to prescription" ]
Introduction: The assessment of dialysis dose is important since extensive evidence has accumulated on the dependency of survival on the adequacy of treatment. 1 In individual patients, session-to-session variation 2 has been shown to influence the quality of treatment, making the monitoring of dialysis dose in each session critical. Hemodialysis patient treatment quality is often sub-optimal, 3 with 15% to 20% of all patients receiving an inadequate dialysis dose.4,5 There are many reasons why the planned dialysis dose is not delivered: the prescribed treatment time may not be achieved because of cardiovascular and hemodynamic instability or patient refusal, and an inadequate blood pump speed may be delivered in combination with sub-optimal needle placement 6 or progressive access malfunction. 7 Regarding the dialysis time achieved per session, Martín Rodriguez et al. 8 analyzed all sessions during 2015 in 85 patients. They found that the median deviation between prescribed and actually delivered time was 9.01 min (1.1–28 min/session) and that 15.5% of the patients showed a reduction of more than 10 min in at least 20% of the sessions during the year. Most frequently, the lack of delivery of the prescribed treatment time is associated with central venous catheters as vascular access, older age and a higher Charlson comorbidity index. 8 In addition, patient factors such as frailty and non-adherence, as well as organizational factors such as inflexibility of the care process, may act as barriers. Online monitoring of each session highlights early adjustments needed for the quality of the delivered care. 9 Real-time integration of information technology systems with hemodialysis machines supports optimizing the delivered dialysis dose, thereby in all probability improving patient morbidity and mortality. 8 The aim of this study was to detect the intra-patient variability of dialysis dose as estimated from pre-post dialysis blood urea derived single pool Kt/V 10 (blood Kt/V) compared to ultraviolet absorbance as a continuous assessment of spent dialyzate (Adimea® Kt/V). 11 The secondary aim was to examine the feasibility of using Adimea® Kt/V to improve dialysis adequacy as stated by the routine blood Kt/V,12,13 evaluating whether the proportion of patients reaching the treatment goal increased during the course of the study. Many years after the first publication of clinical practice guidelines, 14 the gap between prescribed and delivered dialysis dose is still not a marginal problem. Online monitoring tools are available but not fully introduced in dialysis practice in all centers to support clinical staff in establishing reliable processes. As a result, lower levels of dialysis adequacy in terms of weekly treatment time 15 and of dialysis dose15–18 are reported. The present study was conducted in facilities in the People’s Republic of China that had not previously used online monitoring tools, to raise attention to dialysis adequacy in routine daily practice. Patients and methods: This is an open, one-armed, prospective non-interventional study evaluating patients on bicarbonate hemodialysis or on hemodiafiltration. 
Patients had to meet the following inclusion criteria for enrollment: (1) treated on chronic hemodialysis for at least 6 months on 3 sessions/week schedule, (2) arterio-venous fistula as vascular access, (3) documented spKt/V calculated from pre-post-dialysis urea 10 from 1.0 to 1.4 over the last 3 months or (4) average of spKt/V < 1.35 out of three consecutive blood measurements, (5) age ⩾ 18 years, and (6) voluntary participation and written informed consent. Patients with the following conditions were excluded: (1) presence of severe hematologic disorders (e.g. multiple myeloma), (2) life expectancy less than 6 months, (3) treatment with single-needle dialysis, and (4) ongoing Kt/V monitoring by Adimea® (B. Braun Avitum, Melsungen, Germany). In the course of the prospective observation, online monitoring with Adimea® in each session was established without the target warning function of Adimea® being activated. Therefore, in case the operative conditions were not compatible with the adequacy target, the machine displayed the discrepancy, but did not generate an alarm. After informed consent, baseline and retrospective data collection, patients were observed for a period of 6 months prospectively; data from conventional hemodialysis or hemodiafiltration were collected (Figure 1). Adimea® Kt/V, blood Kt/V, blood values, prescription data, and treatment adjustments made during the therapy as well as any adverse events were recorded in an electronic clinical report form. Patients were treated according to prescription by their physician and examined according to their individual status. Study flow-chart including selection criteria and study population. All data captured during the study were obtained from routine clinical care assessments. Dialysis was performed with B. Braun Dialog+ machines equipped with Adimea®. The data collection was strictly anonymous. Adimea® Kt/V Adimea® system utilizes the principles of spectroscopy. A light source transmits ultraviolet (UV) light through the dialyzate. Some molecules contained in the dialyzate, which are removed from the plasma during dialysis, absorb the light. This absorption is measured by a sensor. It has been proven, 19 that the UV absorption measurements can be used to determine the dialysis dose as there is a very close linear correlation between the measured UV absorption signal and the urea in the dialyzate. Adimea® has been validated for hemodialysis, hemodiafiltration, and single needle dialysis. 11 Study approval and patient consent Trial registration was not compulsory because of the observational nature of the study. The protocol was submitted and approved by the Ethical Committee of each participating hospital from April 2, 2013 to March 14, 2014. All participating patients signed a written informed consent. 
Sample size A sample size of N = 97 patients had 90% power to detect a difference in within-patient standard deviation of 0.01 (assuming a within-patient standard deviation of 0.07 for Adimea® Kt/V measurement and of 0.08 for blood Kt/V) and taking a common standard deviation of the differences of 0.03, using a paired t-test with a 0.05 two-sided significance level. To account for early discontinued patients with less than three monthly blood and Adimea® Kt/V measurements, a total of N = 120 enrolled patients were planned. This sample size was also considered sufficient to detect a difference of 0.037 between the within-patient mean values of the two procedures, using a paired t-test and assuming an overall standard deviation of 0.13, with a power of 80%. Statistical analysis Descriptive parameters are presented according to the type of variable evaluated (mean, standard deviation, median, upper and lower quartile for continuous data; counts, absolute and relative frequencies for categorical and ordinal data). Primary endpoint The primary endpoint was the session-to-session variability of Adimea® Kt/V and of blood Kt/V, assessed by the within-patient standard deviation (SD) and the derived coefficient of variability (SD/mean × 100) of the values using all available time points. The means of the SDs were calculated and statistically compared using a paired t-test. In addition to the paired t-test, 95% confidence intervals (CI) for the mean SD of Adimea® and blood Kt/V were also calculated. The level of statistical significance was assumed to be α = 0.05. Statistical analyses were conducted using IBM® SPSS® statistics version 24. 
Results: Table 1 reports the characteristics of the 120 patients enrolled in six centers in China. The mean age was 51.5 ± 12.2 years and 86.7% were males. The proportion of diabetics was 24.2% and all of them had an AV-fistula for vascular access. Stratifying the patients by whether their monthly baseline blood Kt/V was below or at least 1.20, the proportion of patients with an inadequate dialysis dose (<1.20 Kt/V) was 48.3%. This group included a significantly greater proportion of males (95% vs 79%) and patients who were significantly taller (174 vs 169 cm) and had a greater dry body weight (73 vs 62 kg). Demographics, comorbidities and main baseline lab results of enrolled patients by baseline Kt/V. Table 2 reports the main features of the extracorporeal dialysis treatment at baseline and during the 6-month follow-up stratified by baseline blood Kt/V. At baseline, 95.8% of the patients were on 4-h treatment/session and 3.3% were on 4.5-h treatment/session. The mean blood flow was 261 ± 28 mL/min. 
Dialyzer surface area ranged between 1.20 (17.5%) and 1.80 m 2 (24.2%). About 80.0% of the patients were on high-flux dialysis and 4.2% on on-line hemodiafiltration, with only marginal differences between the two groups stratified by level of dialysis dose (Table 2). Blood Kt/V was 1.21 ± 0.14, with 62/120 having a blood Kt/V ⩾ 1.20 and 58/120 below the target. Figure 2 shows the development of blood Kt/V during follow-up. The group ⩾1.20 at baseline increased on average and remained at the ⩾1.20 adequacy level for more than 90% of the patients. The other group, in which all patients had a blood Kt/V < 1.20 at baseline, showed a significant increase of blood Kt/V in the first 20 days of follow-up with dialysis sessions monitored by Adimea®. As a result, the proportion of patients at the adequacy level ⩾1.20 increased from 0% to about 60% and was maintained for the full follow-up. Means and standard deviations of Adimea® Kt/V during follow-up in patients with baseline blood Kt/V ⩾ 1.20 and those with blood Kt/V < 1.20 are shown in Figure 3. Main characteristics of the dialysis treatment at baseline and during follow-up by baseline blood Kt/V. Development of blood Kt/V from baseline (without Adimea® monitoring) and during Adimea® monitoring follow-up by baseline blood Kt/V: (a) mean and standard deviation of blood Kt/V and (b) proportion of patients reaching the blood Kt/V target of 1.20. Development of Adimea® Kt/V during Adimea® monitoring follow-up by baseline blood Kt/V. Figure 4 shows the Bland and Altman graph of the sessions evaluated by both Adimea® Kt/V and blood Kt/V (see also supplemental material). According to the primary aim of the study, the coefficient of variation (CV) for Adimea® Kt/V was measured and compared with that of blood Kt/V. The CV of Adimea® Kt/V was 9.6 ± 3.4%, not significantly different from the 9.6 ± 8.6% of blood Kt/V. The distribution of the CV of Adimea® Kt/V was very similar to that of blood Kt/V, with most of the patients below a CV of 10% (data not shown). Bland and Altman graphs showing the distribution of the difference between Adimea® and blood Kt/V with respect to their mean. Lines represent mean (- - -) and 95% confidence interval (-----). Discussion: This study evaluated the reliability of monitoring the dialysis dose by Adimea® Kt/V in each session and the intra-patient variability of dialysis dose in comparison with the reference method of blood Kt/V. The Bland and Altman analysis (Figure 4) confirms that online monitoring by Adimea® is a reliable method for monitoring dialysis dose, and it contributed substantially to improving and maintaining dialysis adequacy (Figure 2). 20 The main aim of this study was to compare the variability of blood Kt/V and online Kt/V dialysis dose by Adimea®. The result is a non-significant difference between the standard deviations of Adimea® and blood Kt/V. In addition, the coefficients of variation were 9.6 ± 3.4% and 9.6 ± 8.6% with the online and the blood-based method, respectively (p = NS). This range is comparable to the evaluation by McIntyre et al. in 26 patients, which found 13% with ionic dialysance and 11% with pre-post dialysis urea blood Kt/V, with no statistical difference. What are the components of intra-patient variability in Kt/V? For blood Kt/V, the accuracy of the measurement of dialysis dose depends on the accuracy and timing of drawing the sample and on laboratory errors. 
21 Using Adimea® throughout the full course of the dialysis session was expected to decrease intra-patient variability, but the results of this study did not confirm this hypothesis. A partial answer to this apparent contradiction is probably the frequency of sampling. The intra-patient variability when dialysis dose is monitored at every session can be higher mainly because of the variability in delivered treatment time and blood flow, as pointed out by Lambie et al. 22 Other contributors are factors subject to practice variability (i.e. needle insertion 6 and needle size) and patient-related factors (i.e. hemodynamic condition and blood flow rate). Organizational factors in dialysis units (e.g. shift planning 23 ) may affect delivered dialysis dose by indirectly affecting delivered treatment time. Another study, 24 following up dialysis adequacy in 11 patients on hemodiafiltration with online urea monitoring by an experimental (but reliable) system based on a urease cartridge, 25 found high intra-patient variability and reported, in six stable patients, a coefficient of variability of 9.5% and 5.2% for urea generation rate and Kt/V, respectively. At baseline, a high proportion of patients in our study (48.3%) did not reach the minimum level of dialysis adequacy (according to blood Kt/V ⩾ 1.20). The high proportion of patients with blood Kt/V < 1.20 is likely due to the study selection criteria (spKt/V in the range 1.00–1.40 or an average <1.35). Patients not reaching the level of adequacy were more likely to be male, were significantly taller and had greater body weight. However, despite their higher therapy needs in terms of treatment time, blood flow and/or dialyzer surface, all dialysis parameters were quite similar to those of the other group, which was adequately treated since baseline. During follow-up, 60% of the patients originally with a blood Kt/V < 1.20 (58 out of 120) reached the target level of adequacy. Table 2 shows that the prescription of treatment time, blood flow etc. remained stable with minimal modifications; therefore, other causes might have increased the delivered dialysis dose, such as a variable level of compliance with the physician's prescription, which can occur for several reasons.26–28 Each additional minute of effective treatment time is associated with a significant 3.6% higher probability of achieving the level of dialysis adequacy. 29 Rocco and Burkart 30 described that a loss of 30 min of treatment time translates into an effective delivered Kt/V of 0.88 instead of the originally planned 1.05 according to former guidelines. A monitoring system such as Adimea® is of practical benefit because it uncovers sessions that are not going to reach the Kt/V target value whilst the session is still running. In fact, predicting during dialysis that an inadequate dose will be delivered enables a modification of the critical treatment parameters, including prolonging the session by the few minutes required to fulfill the quality requirements. By showing the prediction graph on the machine screen, the forecasted dialysis dose and the implications of the actual operating conditions can be explained to the patient. When treatment parameters change, for example after increasing blood flow, the new trajectory toward the target is promptly displayed within a few minutes. 31 Therefore, if a low initial blood flow has not been increased after the beginning of the session, the system will alarm, showing a negative forecast of the final Kt/V. 
The same situation occurs if the blood flow is decreased during the treatment and not increased again, in case of disturbances due to needle placement or arm position, or if the dialyzer loses performance because of fiber clotting. Even though the alarm option of Adimea® was not activated in this study, the efficacy of the system is demonstrated by the sustained achievement of the dialysis dose during follow-up. It is interesting to note that the Adimea® Kt/V of the sessions selected for blood Kt/V evaluation was not statistically different from that of the other sessions on the same weekdays (data not shown). Doubts about the representativeness of once-a-month Kt/V have been raised by several publications.7,32,33 Without on-line monitoring, the testing day is the one session in which nurses and patients know they are under scrutiny and, not surprisingly, this session is very likely to receive particular attention. Two main options are commercially available to estimate online dialysis dose. 34 On the basis of the equivalence of urea clearance to ionic dialysance, Kt/V has been calculated using the mean dialysance 35 intermittently estimated every 45 min, that is, 6 times over a 4-h treatment. 36 The ionic dialysance approach relies on the assumption that plasma conductivity remains stable during the measurement period, and actual alterations in plasma conductivity could have a significant impact on the accuracy of the measurement. 36 Moreover, this approach carries a risk of salt loading due to the spiking of dialyzate sodium up to 155 mEq/L required for these measurements. 37 Even if on average previous studies did not find any evidence of clinically relevant salt loading during ionic dialysance measurements,38,39 patients with a lower predialysis sodium concentration had the greatest net gain during the sessions. 39 The intermittent approach of this method could also miss problems occurring between the dialysance tests, such as alarms causing blood pump stops and/or dialysis fluid bypass. With a continuous method such as UV absorbance monitoring of spent dialyzate, the immediate recognition of low clearances can trigger prompt interventions correcting or mitigating possible issues arising during treatment. 40 It can also be applied not only in double- but also in single-needle hemodialysis, as opposed to the ionic dialysance method. 41 This study has strengths and limitations. The main limitation, a consequence of its observational nature, is that the sequence of sessions monitored or not monitored by Adimea® was not randomized. Thus, the benefit is judged on the basis of the improvement from baseline. Because of the inclusion criterion requiring the last three spKt/V evaluations to be in the range 1.00–1.40 and the exclusion of patients with catheters, the study likely selected patients with lower baseline Kt/V variability, thereby decreasing the probability of detecting differences in variability between blood Kt/V and Adimea® Kt/V. The enrolled patients, being relatively young and mainly males with an AV-fistula for vascular access, might not fully represent the current population on dialysis in other countries. Considering that monitoring the dialysis dose improved the quality of treatment in this young population, an even higher benefit is expected in elderly, frail patients. 42 Older patients are at higher risk of receiving a lower dialysis dose, 43 but when adequately dialyzed they have a good quality of life. 
44 Given that their variability of dialysis dose is expected to be greater, real-time transparency on the delivered dialysis dose should support nurses in governing modifiable factors such as treatment time or blood flow during the therapy. In summary, this study confirms that simple-to-use integrated clinical tools 45 that continuously monitor the delivered dose of dialysis have clear clinical utility, even in a young population with an AV fistula. About 60% of the patients not reaching the Kt/V target at baseline were able to consistently achieve Kt/V adequacy during the study in monitoring-naïve centers. Thus, a major component of this success is an improved understanding by the staff and the patients, empowering both groups for shared decision making. In frail patients, with vascular instability and frequent episodes of intradialytic hypotension as well as a high proportion of catheters used for vascular access, monitoring is expected to produce an even higher benefit.42,46 The dialysis machine using the Adimea® tool alarms when the dose target is likely to be missed and updates the target forecast as treatment conditions change. An automatic control that continuously adjusts factors such as blood flow, dialyzate flow and treatment time (within the limits prescribed for the individual patient) to ensure achievement of the target dose might be integrated. 47 Supplemental Material: Click here for additional data file. Supplemental material, sj-pdf-1-jao-10.1177_03913988211059841 for Ensuring hemodialysis adequacy by dialysis dose monitoring with UV spectroscopy analysis of spent dialyzate by Li Zhang, Wenhu Liu, Chuanming Hao, Yani He, Ye Tao, Shiren Sun, Marten Jakob, Daniele Marcelli, Claudia Barth and Xiangmei Chen in The International Journal of Artificial Organs
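For reference, the "blood Kt/V" used throughout the article text above is the single-pool Kt/V derived from pre-/post-dialysis urea. A minimal sketch of the widely used Daugirdas second-generation formula is given below; it is shown only as a generic textbook reference, not as the study's or the Adimea® device's implementation, and the example input values are invented.

```python
# Generic textbook formula (Daugirdas second generation) for single-pool Kt/V from
# pre-/post-dialysis urea. Not the study's or the device's implementation; the
# example values below are invented for illustration.
import math


def sp_ktv_daugirdas(pre_urea, post_urea, session_hours, uf_litres, post_weight_kg):
    """spKt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W, with R = post/pre urea."""
    r = post_urea / pre_urea
    return -math.log(r - 0.008 * session_hours) + (4 - 3.5 * r) * uf_litres / post_weight_kg


# Example: 4-hour session, urea falling from 25 to 8 mmol/L, 2.5 L ultrafiltration,
# 70 kg post-dialysis weight.
print(round(sp_ktv_daugirdas(25.0, 8.0, 4.0, 2.5, 70.0), 2))  # roughly 1.35
```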
Background: Patients’ session-to-session variation has been shown to influence outcomes, making the monitoring of dialysis dose in each session critical. The aim of this study was to detect the intra-patient variability of blood single pool Kt/V as measured from pre-post dialysis blood urea and from the online tool Adimea®, which measures the ultraviolet absorbance of spent dialyzate. Methods: This open, one-armed, prospective non-interventional study evaluates patients on bicarbonate hemodialysis and/or on hemodiafiltration. Dialysis was performed with B. Braun Dialog+ machines equipped with Adimea®. In the course of the prospective observation, online monitoring with Adimea® in each session was established without the target warning function being activated. A sample size of 97 patients was estimated. Results: A total of 120 patients were enrolled in six centers in China (mean age 51.5 ± 12.2 years, 86.7% males, 24.2% diabetics). All had an AV-fistula. The proportion of patients with blood Kt/V < 1.20 at baseline was 48.3%. During follow-up with Adimea®, the subgroup with Kt/V > 1.20 at baseline remained at the same adequacy level for more than 90% of the patients. Those with a Kt/V < 1.20 at baseline showed a significant increase of Kt/V, with 60% of the patients reaching the adequacy level >1.20. The coefficient of variation for spKt/V as evaluated by Adimea® was 9.6 ± 3.4%, not significantly different from the 9.6 ± 8.6% for blood Kt/V taken at the same time. Conclusions: Online monitoring of dialysis dose by Adimea® improves and maintains dialysis adequacy. Implementing online monitoring by Adimea® into daily practice moves the quality of dialysis patient care a significant step forward.
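The abstract above states that a sample size of 97 patients was estimated. Under the assumptions given in the methods text (a 0.01 difference in within-patient SD to detect, an SD of the paired differences of 0.03, two-sided alpha of 0.05, 90% power), that figure can be reproduced with a standard paired t-test power calculation. The snippet below is only an illustrative cross-check, not the original calculation.

```python
# Illustrative re-check of the stated paired-design sample size; the original
# calculation was not performed with this code.
from statsmodels.stats.power import TTestPower

effect_size = 0.01 / 0.03  # difference to detect / SD of the paired differences
n_required = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                                      power=0.90, alternative="two-sided")
print(round(n_required))  # approximately 97, before allowing for dropouts
```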
null
null
5,026
350
[ 110, 49, 155, 40, 132 ]
10
[ "kt", "dialysis", "blood", "patients", "adimea", "blood kt", "dose", "dialysis dose", "patient", "treatment" ]
[ "dialysis session", "follow dialysis sessions", "dialysis dose reasons", "dialysis dose comparison", "inadequate dialysis dose" ]
null
null
[CONTENT] Dialysis adequacy | artificial kidney | apheresis & detoxification techniques | hemodialysis | hemodiafiltration | chronic renal failure | compliance to prescription [SUMMARY]
[CONTENT] Dialysis adequacy | artificial kidney | apheresis & detoxification techniques | hemodialysis | hemodiafiltration | chronic renal failure | compliance to prescription [SUMMARY]
[CONTENT] Dialysis adequacy | artificial kidney | apheresis & detoxification techniques | hemodialysis | hemodiafiltration | chronic renal failure | compliance to prescription [SUMMARY]
null
[CONTENT] Dialysis adequacy | artificial kidney | apheresis & detoxification techniques | hemodialysis | hemodiafiltration | chronic renal failure | compliance to prescription [SUMMARY]
null
[CONTENT] Adult | Dialysis Solutions | Female | Humans | Male | Middle Aged | Prospective Studies | Renal Dialysis | Spectrum Analysis | Urea [SUMMARY]
[CONTENT] Adult | Dialysis Solutions | Female | Humans | Male | Middle Aged | Prospective Studies | Renal Dialysis | Spectrum Analysis | Urea [SUMMARY]
[CONTENT] Adult | Dialysis Solutions | Female | Humans | Male | Middle Aged | Prospective Studies | Renal Dialysis | Spectrum Analysis | Urea [SUMMARY]
null
[CONTENT] Adult | Dialysis Solutions | Female | Humans | Male | Middle Aged | Prospective Studies | Renal Dialysis | Spectrum Analysis | Urea [SUMMARY]
null
[CONTENT] dialysis session | follow dialysis sessions | dialysis dose reasons | dialysis dose comparison | inadequate dialysis dose [SUMMARY]
[CONTENT] dialysis session | follow dialysis sessions | dialysis dose reasons | dialysis dose comparison | inadequate dialysis dose [SUMMARY]
[CONTENT] dialysis session | follow dialysis sessions | dialysis dose reasons | dialysis dose comparison | inadequate dialysis dose [SUMMARY]
null
[CONTENT] dialysis session | follow dialysis sessions | dialysis dose reasons | dialysis dose comparison | inadequate dialysis dose [SUMMARY]
null
[CONTENT] kt | dialysis | blood | patients | adimea | blood kt | dose | dialysis dose | patient | treatment [SUMMARY]
[CONTENT] kt | dialysis | blood | patients | adimea | blood kt | dose | dialysis dose | patient | treatment [SUMMARY]
[CONTENT] kt | dialysis | blood | patients | adimea | blood kt | dose | dialysis dose | patient | treatment [SUMMARY]
null
[CONTENT] kt | dialysis | blood | patients | adimea | blood kt | dose | dialysis dose | patient | treatment [SUMMARY]
null
[CONTENT] dialysis | delivered | treatment | time | dialysis dose | dose | session | prescribed | patients | 15 [SUMMARY]
[CONTENT] adimea | kt | standard deviation | patients | standard | deviation | paired test | test | paired | blood [SUMMARY]
[CONTENT] kt | blood | blood kt | baseline | 20 | patients | follow | baseline blood kt | baseline blood | adimea [SUMMARY]
null
[CONTENT] kt | dialysis | blood | patients | adimea | blood kt | standard deviation | patient | standard | deviation [SUMMARY]
null
[CONTENT] ||| Kt/V | Adimea [SUMMARY]
[CONTENT] one ||| B. Braun Dialog+ | Adimea ||| Adimea ||| 97 [SUMMARY]
[CONTENT] 120 | six | China | age 51.5 | 12.2 years | 86.7% | 24.2% ||| AV ||| Kt/V < 1.20 | 48.3% ||| Adimea | Kt/V | 1.20 | more than 90% ||| Kt/V | 60% | 1.20 ||| Adimea | 9.6 | 3.4% | 9.6 | 8.6% | Kt/V [SUMMARY]
null
[CONTENT] ||| Kt/V | Adimea ||| one ||| B. Braun Dialog+ | Adimea ||| Adimea ||| 97 ||| 120 | six | China | age 51.5 | 12.2 years | 86.7% | 24.2% ||| AV ||| Kt/V < 1.20 | 48.3% ||| Adimea | Kt/V | 1.20 | more than 90% ||| Kt/V | 60% | 1.20 ||| Adimea | 9.6 | 3.4% | 9.6 | 8.6% | Kt/V ||| Adimea ||| Adimea | daily [SUMMARY]
null
Fonofos exposure and cancer incidence in the agricultural health study.
17185272
The Agricultural Health Study (AHS) is a prospective cohort study of licensed pesticide applicators from Iowa and North Carolina enrolled 1993-1997 and followed for incident cancer through 2002. A previous investigation in this cohort linked exposure to the organophosphate fonofos with incident prostate cancer in subjects with family history of prostate cancer.
BACKGROUND
Pesticide exposure and other data were collected using self-administered questionnaires. Poisson regression was used to calculate rate ratios (RRs) and 95% confidence intervals (CIs) while controlling for potential confounders.
METHODS
Relative to the unexposed, leukemia risk was elevated in the highest category of lifetime (RR = 2.24; 95% CI, 0.94-5.34, Ptrend = 0.07) and intensity-weighted exposure-days (RR = 2.67; 95% CI, 1.06-6.70, Ptrend = 0.04), a measure that takes into account factors that modify pesticide exposure. Although prostate cancer risk was unrelated to fonofos use overall, among applicators with a family history of prostate cancer, we observed a significant dose-response trend for lifetime exposure-days (Ptrend = 0.02, RR highest tertile vs. unexposed = 1.77, 95% CI, 1.03-3.05; RRinteraction = 1.28, 95% CI, 1.07-1.54). Intensity-weighted results were similar. No associations were observed with other examined cancer sites.
RESULTS
Further study is warranted to confirm findings with respect to leukemia and determine whether genetic susceptibility modifies prostate cancer risk from pesticide exposure.
CONCLUSIONS
[ "Adult", "Agricultural Workers' Diseases", "Agriculture", "Cohort Studies", "Family Health", "Female", "Fonofos", "Humans", "Incidence", "Iowa", "Male", "Middle Aged", "Neoplasms", "North Carolina", "Occupational Exposure", "Organophosphate Poisoning", "Organophosphorus Compounds", "Organothiophosphorus Compounds", "Pesticides", "Poisson Distribution", "Prospective Studies", "Prostatic Neoplasms", "Regression Analysis", "Risk Factors", "Surveys and Questionnaires" ]
1764168
null
null
Statistical analysis
All pesticide applicators who returned an enrollment questionnaire were eligible for analysis. We excluded applicators if they did not provide information on fonofos exposure duration (n = 5,987); if their first primary cancer occurred before enrollment (n = 937), if they did not live in either state on enrollment (n = 295); or if they did not provide information on birth year (n = 2), smoking (n = 1,664), or use of correlated pesticides (n = 3,054). After exclusions, 45,372 subjects were available for the lifetime exposure-days analysis. The intensity-weighted exposure-days analysis contained four fewer cancer cases and 55 fewer cancer-free subjects because of missing data on intensity metric covariates. Compared with retained subjects, excluded subjects were generally older and more likely to be from North Carolina. We categorized fonofos lifetime exposure-days and intensity-weighted exposure-days into tertiles based on the distribution of exposure among all cancer cases. Because the tertiles of intensity-weighted exposure-days based on the exposure distribution among all cancer cases resulted in categories inadequate for analyzing leukemia (one case in the lowest exposed group), we created more uniform tertiles for leukemia based on the exposure distribution among leukemia cases. We used two different reference groups—the unexposed and the lowest exposed groups—to analyze all cancer sites for which there were at least 15 exposed cases and 4 cases per lifetime exposure-days category. Specifically, we examined all cancers combined; cancers of the prostate, lung, and colon; melanoma skin cancer; leukemia; and lymphohematopoietic cancer consisting of Hodgkin lymphoma, NHL, leukemia, and multiple myeloma. Statistical analyses were conducted in AHS data release 0412.01 using Stata version 8 (StataCorp 2003). We used Poisson regression to calculate rate ratios (RRs) and 95% CIs while adjusting for multiple covariates. Covariates included category of age at enrollment (< 40, 40–49, 50–59, ≥ 60 years of age), state of residence, pack-years of smoking categorized at the median (0, ≤ 12, > 12), and use of the four most correlated pesticides [trichlorofon, carbofuran, imazethapyr, and S-ethyl dipropylthiocarbamate (EPTC)]. Pearson correlation coefficients ranged between 0.43 and 0.50. Use of each correlated pesticide was classified as never, low, and high, with the median lifetime exposure-days categorizing low and high usage. As an alternative strategy to account for use of other pesticides, we also replaced use of the most correlated pesticides with lifetime exposure-days to all pesticides. Further adjustment for education, sex, alcohol consumption in the previous 12 months, applicator type, cancer history in first-degree relatives, and enrollment year did not affect point estimates by > 10%. We performed linear trend tests to assess the overall dose–response trend by entering exposure categories ordinally in the models after assigning them median exposure value in that category. All statistical tests were two-sided.
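The statistical analysis above describes Poisson regression for rate ratios (RRs) with categorized exposure, adjustment for covariates, and a median-scored linear trend test. A minimal Python sketch of that type of model follows; the data layout, column names, and the use of statsmodels are assumptions for illustration (the study's own analysis was run in Stata 8, not with this code).

```python
# Minimal sketch of a Poisson rate-ratio model with exposure-category indicators
# and a median-scored linear trend test. Data layout and column names are
# hypothetical; this is not the study's analysis code.
import numpy as np
import pandas as pd
import statsmodels.api as sm


def exposure_rate_ratios(df, exposure_dummies, covariates,
                         outcome="case", person_years="pyears"):
    """Return exp(beta) and 95% CIs for each exposure-category indicator."""
    X = sm.add_constant(df[exposure_dummies + covariates].astype(float))
    fit = sm.GLM(df[outcome], X, family=sm.families.Poisson(),
                 offset=np.log(df[person_years])).fit()
    rate_ratios = np.exp(fit.params[exposure_dummies])
    conf_ints = np.exp(fit.conf_int().loc[exposure_dummies])
    return rate_ratios, conf_ints


def linear_trend_p(df, median_score, covariates,
                   outcome="case", person_years="pyears"):
    """Trend test: each subject carries the median exposure of their category,
    entered as a single continuous term; returns its two-sided p-value."""
    X = sm.add_constant(df[[median_score] + covariates].astype(float))
    fit = sm.GLM(df[outcome], X, family=sm.families.Poisson(),
                 offset=np.log(df[person_years])).fit()
    return fit.pvalues[median_score]
```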
Results
Selected characteristics of the study population are displayed in Table 1 according to lifetime exposure-days category. In this table, “lowest exposed” refers to those in the lowest exposure tertile (> 0–20 lifetime exposure-days), whereas “other exposed” refers to those in the middle and highest tertiles (> 20 lifetime exposure-days). Overall, two-thirds of the study participants were from Iowa. More than two-thirds of the cohort reported corn farming. Study subjects were also predominantly white and male. Just over half reported being never smokers. Close to 55% reported that the highest level of schooling attained was no more than a high school diploma. Approximately 40% reported a history of cancer in first-degree relatives. The unexposed group was generally younger, less likely to use alcohol, and slightly less likely to report a family history of cancer, used fewer pesticides in general, and planted fewer acres than either exposed group. Both Iowa participants and corn farmers were over-represented in the exposed categories. Based on these differences between the unexposed group and either exposed group, the lowest exposed group may represent the exposed group more closely. With the intensity-weighted metric, risk estimates for all cancers combined were not different from the null, regardless of the reference group used (Table 2). Colon cancer risk estimates were elevated, but only when using the unexposed as the reference, and the relationship was not monotonic. Leukemia risk estimates were elevated regardless of the reference group used. When the unexposed group was the reference, the RR was 2.67 (95% CI, 1.06–6.70) in the highest exposure category, and the test for linear trend was significant (ptrend = 0.04). When the lowest exposed group was the reference, the corresponding RR was 2.03 (95% CI, 0.58–7.05). The linear trend test was not significant. Fonofos intensity-weighted exposure-days were not related to the risk of any other examined cancer. Results were similar using the lifetime exposure-days metric (not shown). For example, using the unexposed group as the reference, leukemia RRs increased monotonically to 2.24 (95% CI, 0.94–5.34) in the highest tertile (ptrend = 0.07). When the lowest exposed tertile was used as the reference, the risk estimates increased monotonically with increasing exposure category to 2.18 (95% CI, 0.57–8.40) in the highest tertile. The test for linear trend was not significant. To account for the effect of misclassification due to the inclusion of exposure that occurred too recently to affect cancer risk, we repeated the analyses excluding 39 cancer cases and 1,389 cancer-free subjects who either reported first using fonofos during the 1990s or did not provide this information. The results were similar to those presented here (not shown). The results were also similar after repeating the analyses among Iowa participants only (not shown). Additionally, to control for pesticide use in general, we repeated the analyses adjusting for lifetime exposure-days to all pesticides instead of the most correlated pesticides, and the results did not differ from those presented here (not shown). To evaluate the effect of missing information, we repeated the analyses while allowing subjects with missing information on covariates to influence the outcome by assigning them an unspecified category. Once again, the results were largely the same (not shown). 
Finally, the results were similar when we separately examined fonofos days of use and years of use after categorizing each into none, low, and high categories for exposure, using the median to distinguish between low and high (not shown). We further investigated leukemia by separately examining chronic lymphocytic (eight exposed cases), chronic myelogenous (two exposed cases), acute myelogenous (five exposed cases), and all other leukemias (three exposed cases) (not shown). Acute lymphocytic leukemia could not be evaluated (no exposed cases). Although the CIs were wide and the point estimates were not significant, relative to the unexposed, the age-adjusted risk estimates in low- and high-exposure categories were elevated for all examined subtypes. In the high-exposure category, point estimates ranged from 1.75 for acute myelogenous to 3.65 for chronic myelogenous leukemia. We also adjusted leukemia risk estimates using data on use of gasoline, solvents, and paint, which were collected among private applicators using the take-home questionnaire (not shown). Although the subset of otherwise eligible applicators who provided the aforementioned information was small (eight exposed cases), adjusting for these exposures did not weaken the leukemia risk estimates. Finally, controlling for animal exposures using information on the number of livestock (other than poultry) or whether applicators butchered animals, provided veterinary services to livestock, or worked in swine or poultry containing areas, did not affect risk estimates (not shown). Table 3 shows prostate cancer risk relative to the unexposed using both metrics and stratified by family history of prostate cancer in first-degree relatives. We generated uniform exposure categories based on the exposure distribution among prostate cancer cases. In the group with no prostate cancer family history, risk was not associated with exposure regardless of the metric. In those with a family history of prostate cancer, the risk estimates increased, and significant linear trends were observed using either metric. Using the lifetime exposure-days metric, we observed a significant dose–response relationship (ptrend = 0.02), which resulted in a RR of 1.77 (95% CI, 1.03–3.05) in the highest exposure category. The interaction term, defined as the cross-product of family history of cancer and category of lifetime exposure-days (treated as a continuous variable), was significant (RR = 1.28; 95% CI, 1.07–1.54). With the intensity-weighted exposure-days metric, risk in the highest category was 1.83 (95% CI, 1.12–3.00). The test for linear trend was significant (ptrend = < 0.01), as was the interaction RR of 1.27 (95% CI, 1.07–1.51). When the analysis in Table 3 was repeated using the lowest exposed group as the reference, the results were similar but less pronounced due to decreased statistical power (not shown). Risk was related to fonofos use only in those with a family history of prostate cancer. Point estimates increased monotonically with lifetime exposure-days to 1.24 (95% CI, 0.61–2.51) in the highest category. The interaction RR was 1.25 (95% CI, 0.83–1.89). Point estimates generally increased with intensity-weighted exposure-days to 1.68 (95% CI, 0.83–3.39) in the highest category. The interaction RR was 1.27 (95% CI, 0.85–1.89). Linear trend tests were not significant using either metric. 
When the risk of the other examined cancers (all cancers combined, melanoma, leukemia, lymphohematopoietic cancers, lung cancer, and colon cancer) was similarly stratified, no discrepancies were observed comparing those with and without a family history of the specific cancer (not shown). When we did further analyses to disentangle the effects of prostate cancer family history and fonofos exposure, we observed that the age-adjusted main effect for ever compared with never fonofos exposure was 0.97 (95% CI, 0.80–1.17), whereas for family history of prostate cancer, it was 1.67 (95% CI, 1.35–2.07). The observed joint effect of the two exposures was 2.63 (95% CI, 1.96–3.53).
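The family-history results above describe an interaction rate ratio estimated from the cross-product of exposure category (treated as continuous) and family history of prostate cancer. The sketch below shows one way such a term can be fitted; variable names are placeholders and this is not the published analysis code.

```python
# Sketch of the exposure-by-family-history interaction model described above.
# Variable and column names are placeholders; this is not the study's code.
import numpy as np
import statsmodels.api as sm


def interaction_rate_ratio(df, covariates):
    """exp(coefficient) of the cross-product term, with its 95% CI."""
    data = df.copy()
    data["exposure_x_famhist"] = data["exposure_score"] * data["fam_history"]
    columns = ["exposure_score", "fam_history", "exposure_x_famhist"] + covariates
    X = sm.add_constant(data[columns].astype(float))
    fit = sm.GLM(data["case"], X, family=sm.families.Poisson(),
                 offset=np.log(data["pyears"])).fit()
    rr = np.exp(fit.params["exposure_x_famhist"])
    low, high = np.exp(fit.conf_int().loc["exposure_x_famhist"])
    return rr, (low, high)
```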
null
null
[ "Enrollment and follow-up", "Exposure assessment" ]
[ "The AHS has been described previously (Alavanja et al. 1996). Briefly, it is a prospective cohort of 52,395 private applicators (farmers) from Iowa and North Carolina and 4,916 commercial applicators (employees of pest control companies or businesses whose primary function is not pesticide application) from Iowa licensed to apply restricted use pesticides. This cohort represents 82% of eligible applicators from both states during the enrollment period of the study (13 December 1993 to 31 December 1997). Population-based cancer registries of both states were used to identify subjects with incident cancer diagnoses between enrollment and 31 December 2002. Subjects who died or moved out of the state were censored in the year of occurrence of either event. Vital status was ascertained using state death registries and the National Death Index. Residence information was obtained through motor vehicle records, pesticide registration records, and address files of the Internal Revenue Service. To date, average follow-up time is 7.5 years and follow-up is > 98% complete. All participants provided informed consent, and the research protocol was approved by all appropriate institutional review boards.", "On enrollment, pesticide applicators seeking a restricted-use pesticide license completed a self-administered questionnaire. The questionnaire obtained detailed exposure information (days of use per year, years of use, and decade of first use) for 22 pesticides (including fonofos) and ever/never use information for 28 additional pesticides, as well as information on application methods, pesticide mixing status, personal protective equipment use, personal equipment repair, smoking, current alcohol use, cancer history in first-degree relatives, and basic demographic data. For some analyses, we used information on solvent exposure that was collected using a self-administered take-home questionnaire completed by 44% of those enrolled [both questionnaires available at http://www.aghealth.org/questionnaires.html (AHS 2006)].\nWe estimated fonofos exposure in terms of cumulative fonofos lifetime exposure-days and intensity-weighted exposure-days. We calculated lifetime exposure-days as the cross-product of the questionnaire categories for frequency of fonofos use in an average year and the number of years of fonofos use, using the midpoints of the questionnaire categories. We assessed intensity of exposure using an algorithm developed by Dosemeci et al. (2002). This algorithm calculates an intensity score that takes into account the effect of modifying factors, such as how often an applicator personally mixed or prepared the pesticide, the type of application methods used, whether an applicator personally repaired pesticide application equipment, and the type of personal protective equipment used. By multiplying the intensity score with fonofos lifetime exposure-days, we obtained fonofos intensity-weighted exposure-days." ]
[ null, null ]
[ "Materials and Methods", "Enrollment and follow-up", "Exposure assessment", "Statistical analysis", "Results", "Discussion" ]
[ " Enrollment and follow-up The AHS has been described previously (Alavanja et al. 1996). Briefly, it is a prospective cohort of 52,395 private applicators (farmers) from Iowa and North Carolina and 4,916 commercial applicators (employees of pest control companies or businesses whose primary function is not pesticide application) from Iowa licensed to apply restricted use pesticides. This cohort represents 82% of eligible applicators from both states during the enrollment period of the study (13 December 1993 to 31 December 1997). Population-based cancer registries of both states were used to identify subjects with incident cancer diagnoses between enrollment and 31 December 2002. Subjects who died or moved out of the state were censored in the year of occurrence of either event. Vital status was ascertained using state death registries and the National Death Index. Residence information was obtained through motor vehicle records, pesticide registration records, and address files of the Internal Revenue Service. To date, average follow-up time is 7.5 years and follow-up is > 98% complete. All participants provided informed consent, and the research protocol was approved by all appropriate institutional review boards.\nThe AHS has been described previously (Alavanja et al. 1996). Briefly, it is a prospective cohort of 52,395 private applicators (farmers) from Iowa and North Carolina and 4,916 commercial applicators (employees of pest control companies or businesses whose primary function is not pesticide application) from Iowa licensed to apply restricted use pesticides. This cohort represents 82% of eligible applicators from both states during the enrollment period of the study (13 December 1993 to 31 December 1997). Population-based cancer registries of both states were used to identify subjects with incident cancer diagnoses between enrollment and 31 December 2002. Subjects who died or moved out of the state were censored in the year of occurrence of either event. Vital status was ascertained using state death registries and the National Death Index. Residence information was obtained through motor vehicle records, pesticide registration records, and address files of the Internal Revenue Service. To date, average follow-up time is 7.5 years and follow-up is > 98% complete. All participants provided informed consent, and the research protocol was approved by all appropriate institutional review boards.\n Exposure assessment On enrollment, pesticide applicators seeking a restricted-use pesticide license completed a self-administered questionnaire. The questionnaire obtained detailed exposure information (days of use per year, years of use, and decade of first use) for 22 pesticides (including fonofos) and ever/never use information for 28 additional pesticides, as well as information on application methods, pesticide mixing status, personal protective equipment use, personal equipment repair, smoking, current alcohol use, cancer history in first-degree relatives, and basic demographic data. For some analyses, we used information on solvent exposure that was collected using a self-administered take-home questionnaire completed by 44% of those enrolled [both questionnaires available at http://www.aghealth.org/questionnaires.html (AHS 2006)].\nWe estimated fonofos exposure in terms of cumulative fonofos lifetime exposure-days and intensity-weighted exposure-days. 
We calculated lifetime exposure-days as the cross-product of the questionnaire categories for frequency of fonofos use in an average year and the number of years of fonofos use, using the midpoints of the questionnaire categories. We assessed intensity of exposure using an algorithm developed by Dosemeci et al. (2002). This algorithm calculates an intensity score that takes into account the effect of modifying factors, such as how often an applicator personally mixed or prepared the pesticide, the type of application methods used, whether an applicator personally repaired pesticide application equipment, and the type of personal protective equipment used. By multiplying the intensity score with fonofos lifetime exposure-days, we obtained fonofos intensity-weighted exposure-days.\nOn enrollment, pesticide applicators seeking a restricted-use pesticide license completed a self-administered questionnaire. The questionnaire obtained detailed exposure information (days of use per year, years of use, and decade of first use) for 22 pesticides (including fonofos) and ever/never use information for 28 additional pesticides, as well as information on application methods, pesticide mixing status, personal protective equipment use, personal equipment repair, smoking, current alcohol use, cancer history in first-degree relatives, and basic demographic data. For some analyses, we used information on solvent exposure that was collected using a self-administered take-home questionnaire completed by 44% of those enrolled [both questionnaires available at http://www.aghealth.org/questionnaires.html (AHS 2006)].\nWe estimated fonofos exposure in terms of cumulative fonofos lifetime exposure-days and intensity-weighted exposure-days. We calculated lifetime exposure-days as the cross-product of the questionnaire categories for frequency of fonofos use in an average year and the number of years of fonofos use, using the midpoints of the questionnaire categories. We assessed intensity of exposure using an algorithm developed by Dosemeci et al. (2002). This algorithm calculates an intensity score that takes into account the effect of modifying factors, such as how often an applicator personally mixed or prepared the pesticide, the type of application methods used, whether an applicator personally repaired pesticide application equipment, and the type of personal protective equipment used. By multiplying the intensity score with fonofos lifetime exposure-days, we obtained fonofos intensity-weighted exposure-days.\n Statistical analysis All pesticide applicators who returned an enrollment questionnaire were eligible for analysis. We excluded applicators if they did not provide information on fonofos exposure duration (n = 5,987); if their first primary cancer occurred before enrollment (n = 937), if they did not live in either state on enrollment (n = 295); or if they did not provide information on birth year (n = 2), smoking (n = 1,664), or use of correlated pesticides (n = 3,054). After exclusions, 45,372 subjects were available for the lifetime exposure-days analysis. The intensity-weighted exposure-days analysis contained four fewer cancer cases and 55 fewer cancer-free subjects because of missing data on intensity metric covariates. Compared with retained subjects, excluded subjects were generally older and more likely to be from North Carolina.\nWe categorized fonofos lifetime exposure-days and intensity-weighted exposure-days into tertiles based on the distribution of exposure among all cancer cases. 
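As a rough illustration of the exposure metrics described above (not the study's actual code, which is not published), the Python sketch below derives lifetime exposure-days as the cross-product of questionnaire category midpoints and then weights them by an intensity score. The category boundaries, midpoints, column names, and scores are hypothetical stand-ins; in the study the intensity score comes from the Dosemeci et al. (2002) algorithm and is treated here as a precomputed input.

```python
# Illustrative sketch only: hypothetical midpoints for categorical questionnaire responses.
import pandas as pd

DAYS_PER_YEAR_MIDPOINT = {"<5": 2.5, "5-9": 7.0, "10-19": 14.5, "20-39": 29.5, "40+": 50.0}
YEARS_OF_USE_MIDPOINT = {"1-2": 1.5, "3-5": 4.0, "6-10": 8.0, "11-20": 15.5, "21+": 25.0}

def exposure_metrics(row: pd.Series) -> pd.Series:
    """Lifetime exposure-days = midpoint(days/year) x midpoint(years of use);
    intensity-weighted exposure-days = lifetime exposure-days x intensity score."""
    lifetime_days = (DAYS_PER_YEAR_MIDPOINT[row["days_per_year_cat"]]
                     * YEARS_OF_USE_MIDPOINT[row["years_of_use_cat"]])
    return pd.Series({
        "lifetime_exposure_days": lifetime_days,
        "intensity_weighted_days": lifetime_days * row["intensity_score"],
    })

applicators = pd.DataFrame({
    "days_per_year_cat": ["5-9", "20-39"],
    "years_of_use_cat": ["11-20", "3-5"],
    # The intensity score would come from the mixing/application/repair/PPE algorithm.
    "intensity_score": [4.5, 9.0],
})
print(applicators.join(applicators.apply(exposure_metrics, axis=1)))
```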
Because the tertiles of intensity-weighted exposure-days based on the exposure distribution among all cancer cases resulted in categories inadequate for analyzing leukemia (one case in the lowest exposed group), we created more uniform tertiles for leukemia based on the exposure distribution among leukemia cases. We used two different reference groups—the unexposed and the lowest exposed groups—to analyze all cancer sites for which there were at least 15 exposed cases and 4 cases per lifetime exposure-days category. Specifically, we examined all cancers combined; cancers of the prostate, lung, and colon; melanoma skin cancer; leukemia; and lymphohematopoietic cancer consisting of Hodgkin lymphoma, NHL, leukemia, and multiple myeloma.\nStatistical analyses were conducted in AHS data release 0412.01 using Stata version 8 (StataCorp 2003). We used Poisson regression to calculate rate ratios (RRs) and 95% CIs while adjusting for multiple covariates. Covariates included category of age at enrollment (< 40, 40–49, 50–59, ≥ 60 years of age), state of residence, pack-years of smoking categorized at the median (0, ≤ 12, > 12), and use of the four most correlated pesticides [trichlorofon, carbofuran, imazethapyr, and S-ethyl dipropylthiocarbamate (EPTC)]. Pearson correlation coefficients ranged between 0.43 and 0.50. Use of each correlated pesticide was classified as never, low, and high, with the median lifetime exposure-days categorizing low and high usage. As an alternative strategy to account for use of other pesticides, we also replaced use of the most correlated pesticides with lifetime exposure-days to all pesticides. Further adjustment for education, sex, alcohol consumption in the previous 12 months, applicator type, cancer history in first-degree relatives, and enrollment year did not affect point estimates by > 10%. We performed linear trend tests to assess the overall dose–response trend by entering exposure categories ordinally in the models after assigning them median exposure value in that category. All statistical tests were two-sided.\nAll pesticide applicators who returned an enrollment questionnaire were eligible for analysis. We excluded applicators if they did not provide information on fonofos exposure duration (n = 5,987); if their first primary cancer occurred before enrollment (n = 937), if they did not live in either state on enrollment (n = 295); or if they did not provide information on birth year (n = 2), smoking (n = 1,664), or use of correlated pesticides (n = 3,054). After exclusions, 45,372 subjects were available for the lifetime exposure-days analysis. The intensity-weighted exposure-days analysis contained four fewer cancer cases and 55 fewer cancer-free subjects because of missing data on intensity metric covariates. Compared with retained subjects, excluded subjects were generally older and more likely to be from North Carolina.\nWe categorized fonofos lifetime exposure-days and intensity-weighted exposure-days into tertiles based on the distribution of exposure among all cancer cases. Because the tertiles of intensity-weighted exposure-days based on the exposure distribution among all cancer cases resulted in categories inadequate for analyzing leukemia (one case in the lowest exposed group), we created more uniform tertiles for leukemia based on the exposure distribution among leukemia cases. 
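The Poisson rate-ratio modelling and median-score linear trend test described in the Statistical analysis text above can be sketched as follows. This is only an illustration: the original analysis was run in Stata 8 on AHS data release 0412.01, whereas this version uses Python/statsmodels on a synthetic data frame, and the variable names, category codings, and category medians are all hypothetical.

```python
# Illustrative sketch only: Poisson regression for rate ratios (RRs) with a
# person-time offset, plus a trend test using category median exposure values.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "case": rng.binomial(1, 0.02, n),                     # incident cancer yes/no
    "person_years": rng.uniform(5.0, 9.0, n),             # follow-up time
    "exposure_cat": rng.choice(["none", "T1", "T2", "T3"], n),
    "age_cat": rng.choice(["<40", "40-49", "50-59", "60+"], n),
    "state": rng.choice(["IA", "NC"], n),
    "packyears_cat": rng.choice(["0", "<=12", ">12"], n),
})

# Rate-ratio model: exp(coefficients) of the exposure indicators are RRs
# relative to the unexposed reference category.
model = smf.glm(
    "case ~ C(exposure_cat, Treatment('none')) + C(age_cat) + C(state) + C(packyears_cat)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()
print(np.exp(model.params.filter(like="exposure_cat")))
print(np.exp(model.conf_int().filter(like="exposure_cat", axis=0)))

# Linear trend test: replace the categorical term with the median exposure
# value of each category, entered as a single continuous covariate.
category_median = {"none": 0.0, "T1": 10.0, "T2": 40.0, "T3": 110.0}   # hypothetical
df["exposure_score"] = df["exposure_cat"].map(category_median)
trend = smf.glm(
    "case ~ exposure_score + C(age_cat) + C(state) + C(packyears_cat)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()
print("p_trend =", trend.pvalues["exposure_score"])
```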
We used two different reference groups—the unexposed and the lowest exposed groups—to analyze all cancer sites for which there were at least 15 exposed cases and 4 cases per lifetime exposure-days category. Specifically, we examined all cancers combined; cancers of the prostate, lung, and colon; melanoma skin cancer; leukemia; and lymphohematopoietic cancer consisting of Hodgkin lymphoma, NHL, leukemia, and multiple myeloma.\nStatistical analyses were conducted in AHS data release 0412.01 using Stata version 8 (StataCorp 2003). We used Poisson regression to calculate rate ratios (RRs) and 95% CIs while adjusting for multiple covariates. Covariates included category of age at enrollment (< 40, 40–49, 50–59, ≥ 60 years of age), state of residence, pack-years of smoking categorized at the median (0, ≤ 12, > 12), and use of the four most correlated pesticides [trichlorofon, carbofuran, imazethapyr, and S-ethyl dipropylthiocarbamate (EPTC)]. Pearson correlation coefficients ranged between 0.43 and 0.50. Use of each correlated pesticide was classified as never, low, and high, with the median lifetime exposure-days categorizing low and high usage. As an alternative strategy to account for use of other pesticides, we also replaced use of the most correlated pesticides with lifetime exposure-days to all pesticides. Further adjustment for education, sex, alcohol consumption in the previous 12 months, applicator type, cancer history in first-degree relatives, and enrollment year did not affect point estimates by > 10%. We performed linear trend tests to assess the overall dose–response trend by entering exposure categories ordinally in the models after assigning them median exposure value in that category. All statistical tests were two-sided.", "The AHS has been described previously (Alavanja et al. 1996). Briefly, it is a prospective cohort of 52,395 private applicators (farmers) from Iowa and North Carolina and 4,916 commercial applicators (employees of pest control companies or businesses whose primary function is not pesticide application) from Iowa licensed to apply restricted use pesticides. This cohort represents 82% of eligible applicators from both states during the enrollment period of the study (13 December 1993 to 31 December 1997). Population-based cancer registries of both states were used to identify subjects with incident cancer diagnoses between enrollment and 31 December 2002. Subjects who died or moved out of the state were censored in the year of occurrence of either event. Vital status was ascertained using state death registries and the National Death Index. Residence information was obtained through motor vehicle records, pesticide registration records, and address files of the Internal Revenue Service. To date, average follow-up time is 7.5 years and follow-up is > 98% complete. All participants provided informed consent, and the research protocol was approved by all appropriate institutional review boards.", "On enrollment, pesticide applicators seeking a restricted-use pesticide license completed a self-administered questionnaire. The questionnaire obtained detailed exposure information (days of use per year, years of use, and decade of first use) for 22 pesticides (including fonofos) and ever/never use information for 28 additional pesticides, as well as information on application methods, pesticide mixing status, personal protective equipment use, personal equipment repair, smoking, current alcohol use, cancer history in first-degree relatives, and basic demographic data. 
For some analyses, we used information on solvent exposure that was collected using a self-administered take-home questionnaire completed by 44% of those enrolled [both questionnaires available at http://www.aghealth.org/questionnaires.html (AHS 2006)].\nWe estimated fonofos exposure in terms of cumulative fonofos lifetime exposure-days and intensity-weighted exposure-days. We calculated lifetime exposure-days as the cross-product of the questionnaire categories for frequency of fonofos use in an average year and the number of years of fonofos use, using the midpoints of the questionnaire categories. We assessed intensity of exposure using an algorithm developed by Dosemeci et al. (2002). This algorithm calculates an intensity score that takes into account the effect of modifying factors, such as how often an applicator personally mixed or prepared the pesticide, the type of application methods used, whether an applicator personally repaired pesticide application equipment, and the type of personal protective equipment used. By multiplying the intensity score with fonofos lifetime exposure-days, we obtained fonofos intensity-weighted exposure-days.", "All pesticide applicators who returned an enrollment questionnaire were eligible for analysis. We excluded applicators if they did not provide information on fonofos exposure duration (n = 5,987); if their first primary cancer occurred before enrollment (n = 937), if they did not live in either state on enrollment (n = 295); or if they did not provide information on birth year (n = 2), smoking (n = 1,664), or use of correlated pesticides (n = 3,054). After exclusions, 45,372 subjects were available for the lifetime exposure-days analysis. The intensity-weighted exposure-days analysis contained four fewer cancer cases and 55 fewer cancer-free subjects because of missing data on intensity metric covariates. Compared with retained subjects, excluded subjects were generally older and more likely to be from North Carolina.\nWe categorized fonofos lifetime exposure-days and intensity-weighted exposure-days into tertiles based on the distribution of exposure among all cancer cases. Because the tertiles of intensity-weighted exposure-days based on the exposure distribution among all cancer cases resulted in categories inadequate for analyzing leukemia (one case in the lowest exposed group), we created more uniform tertiles for leukemia based on the exposure distribution among leukemia cases. We used two different reference groups—the unexposed and the lowest exposed groups—to analyze all cancer sites for which there were at least 15 exposed cases and 4 cases per lifetime exposure-days category. Specifically, we examined all cancers combined; cancers of the prostate, lung, and colon; melanoma skin cancer; leukemia; and lymphohematopoietic cancer consisting of Hodgkin lymphoma, NHL, leukemia, and multiple myeloma.\nStatistical analyses were conducted in AHS data release 0412.01 using Stata version 8 (StataCorp 2003). We used Poisson regression to calculate rate ratios (RRs) and 95% CIs while adjusting for multiple covariates. Covariates included category of age at enrollment (< 40, 40–49, 50–59, ≥ 60 years of age), state of residence, pack-years of smoking categorized at the median (0, ≤ 12, > 12), and use of the four most correlated pesticides [trichlorofon, carbofuran, imazethapyr, and S-ethyl dipropylthiocarbamate (EPTC)]. Pearson correlation coefficients ranged between 0.43 and 0.50. 
Use of each correlated pesticide was classified as never, low, and high, with the median lifetime exposure-days categorizing low and high usage. As an alternative strategy to account for use of other pesticides, we also replaced use of the most correlated pesticides with lifetime exposure-days to all pesticides. Further adjustment for education, sex, alcohol consumption in the previous 12 months, applicator type, cancer history in first-degree relatives, and enrollment year did not affect point estimates by > 10%. We performed linear trend tests to assess the overall dose–response trend by entering exposure categories ordinally in the models after assigning them median exposure value in that category. All statistical tests were two-sided.", "Selected characteristics of the study population are displayed in Table 1 according to lifetime exposure-days category. In this table, “lowest exposed” refers to those in the lowest exposure tertile (> 0–20 lifetime exposure-days), whereas “other exposed” refers to those in the middle and highest tertiles (> 20 lifetime exposure-days). Overall, two-thirds of the study participants were from Iowa. More than two-thirds of the cohort reported corn farming. Study subjects were also predominantly white and male. Just over half reported being never smokers. Close to 55% reported that the highest level of schooling attained was no more than a high school diploma. Approximately 40% reported a history of cancer in first-degree relatives.\nThe unexposed group was generally younger, less likely to use alcohol, and slightly less likely to report a family history of cancer, used fewer pesticides in general, and planted fewer acres than either exposed group. Both Iowa participants and corn farmers were over-represented in the exposed categories. Based on these differences between the unexposed group and either exposed group, the lowest exposed group may represent the exposed group more closely.\nWith the intensity-weighted metric, risk estimates for all cancers combined were not different from the null, regardless of the reference group used (Table 2). Colon cancer risk estimates were elevated, but only when using the unexposed as the reference, and the relationship was not monotonic. Leukemia risk estimates were elevated regardless of the reference group used. When the unexposed group was the reference, the RR was 2.67 (95% CI, 1.06–6.70) in the highest exposure category, and the test for linear trend was significant (ptrend = 0.04). When the lowest exposed group was the reference, the corresponding RR was 2.03 (95% CI, 0.58–7.05). The linear trend test was not significant. Fonofos intensity-weighted exposure-days were not related to the risk of any other examined cancer.\nResults were similar using the lifetime exposure-days metric (not shown). For example, using the unexposed group as the reference, leukemia RRs increased monotonically to 2.24 (95% CI, 0.94–5.34) in the highest tertile (ptrend = 0.07). When the lowest exposed tertile was used as the reference, the risk estimates increased monotonically with increasing exposure category to 2.18 (95% CI, 0.57–8.40) in the highest tertile. The test for linear trend was not significant.\nTo account for the effect of misclassification due to the inclusion of exposure that occurred too recently to affect cancer risk, we repeated the analyses excluding 39 cancer cases and 1,389 cancer-free subjects who either reported first using fonofos during the 1990s or did not provide this information. 
The results were similar to those presented here (not shown). The results were also similar after repeating the analyses among Iowa participants only (not shown). Additionally, to control for pesticide use in general, we repeated the analyses adjusting for lifetime exposure-days to all pesticides instead of the most correlated pesticides, and the results did not differ from those presented here (not shown). To evaluate the effect of missing information, we repeated the analyses while allowing subjects with missing information on covariates to influence the outcome by assigning them an unspecified category. Once again, the results were largely the same (not shown). Finally, the results were similar when we separately examined fonofos days of use and years of use after categorizing each into none, low, and high categories for exposure, using the median to distinguish between low and high (not shown).\nWe further investigated leukemia by separately examining chronic lymphocytic (eight exposed cases), chronic myelogenous (two exposed cases), acute myelogenous (five exposed cases), and all other leukemias (three exposed cases) (not shown). Acute lymphocytic leukemia could not be evaluated (no exposed cases). Although the CIs were wide and the point estimates were not significant, relative to the unexposed, the age-adjusted risk estimates in low- and high-exposure categories were elevated for all examined subtypes. In the high-exposure category, point estimates ranged from 1.75 for acute myelogenous to 3.65 for chronic myelogenous leukemia.\nWe also adjusted leukemia risk estimates using data on use of gasoline, solvents, and paint, which were collected among private applicators using the take-home questionnaire (not shown). Although the subset of otherwise eligible applicators who provided the aforementioned information was small (eight exposed cases), adjusting for these exposures did not weaken the leukemia risk estimates. Finally, controlling for animal exposures using information on the number of livestock (other than poultry) or whether applicators butchered animals, provided veterinary services to livestock, or worked in swine or poultry containing areas, did not affect risk estimates (not shown).\nTable 3 shows prostate cancer risk relative to the unexposed using both metrics and stratified by family history of prostate cancer in first-degree relatives. We generated uniform exposure categories based on the exposure distribution among prostate cancer cases. In the group with no prostate cancer family history, risk was not associated with exposure regardless of the metric. In those with a family history of prostate cancer, the risk estimates increased, and significant linear trends were observed using either metric. Using the lifetime exposure-days metric, we observed a significant dose–response relationship (ptrend = 0.02), which resulted in a RR of 1.77 (95% CI, 1.03–3.05) in the highest exposure category. The interaction term, defined as the cross-product of family history of cancer and category of lifetime exposure-days (treated as a continuous variable), was significant (RR = 1.28; 95% CI, 1.07–1.54). With the intensity-weighted exposure-days metric, risk in the highest category was 1.83 (95% CI, 1.12–3.00). 
The test for linear trend was significant (ptrend < 0.01), as was the interaction RR of 1.27 (95% CI, 1.07–1.51).\nWhen the analysis in Table 3 was repeated using the lowest exposed group as the reference, the results were similar but less pronounced due to decreased statistical power (not shown). Risk was related to fonofos use only in those with a family history of prostate cancer. Point estimates increased monotonically with lifetime exposure-days to 1.24 (95% CI, 0.61–2.51) in the highest category. The interaction RR was 1.25 (95% CI, 0.83–1.89). Point estimates generally increased with intensity-weighted exposure-days to 1.68 (95% CI, 0.83–3.39) in the highest category. The interaction RR was 1.27 (95% CI, 0.85–1.89). Linear trend tests were not significant using either metric.\nWhen the risk of the other examined cancers (all cancers combined, melanoma, leukemia, lymphohematopoietic cancers, lung cancer, and colon cancer) was similarly stratified, no discrepancies were observed comparing those with and without a family history of the specific cancer (not shown).\nWhen we did further analyses to disentangle the effects of prostate cancer family history and fonofos exposure, we observed that the age-adjusted main effect for ever compared with never fonofos exposure was 0.97 (95% CI, 0.80–1.17), whereas for family history of prostate cancer, it was 1.67 (95% CI, 1.35–2.07). The observed joint effect of the two exposures was 2.63 (95% CI, 1.96–3.53).", "In this study we evaluated cumulative lifetime fonofos exposure until enrollment as a risk factor for incident cancer occurring from the end of enrollment through 2002. Almost 40% of exposed applicators first used fonofos before 1980. Thus, although the period of cancer incidence follow-up is 7.5 years on average, the actual time from first use to the end of the follow-up period is longer. We did not observe an association between fonofos exposure and the incidence of all cancers combined. We did not have enough cases to evaluate NHL. There was, however, evidence of an association between fonofos and leukemia. There was also an observed association between fonofos and prostate cancer among those with a family history of prostate cancer.\nOrganophosphates have been associated with leukemia and other immunologically related cancers in the epidemiologic literature (Brown et al. 1990; Cantor et al. 1992; Clavel et al. 1996; De Roos et al. 2003; Lee et al. 2004; Waddell et al. 2001; Zahm et al. 1993). The leukemogenic effects of organophosphates may be related to immune function perturbation. Organophosphates irreversibly inhibit acetylcholine esterase, an enzyme that breaks down the neurotransmitter acetylcholine into inactive metabolites. Lymphocytes contain essential components of a cholinergic system, and studies suggest that prolonged acetylcholine esterase receptor stimulation, which could result from irreversible acetylcholine esterase inhibition, can alter lymphocytic activity (Kawashima and Fujii 2003).\nProstate cancer risk was not related to fonofos exposure overall. We did, however, find increased prostate cancer risk associated with fonofos use for those with a family history of prostate cancer. This result was previously reported in a case–control analysis of prostate cancer in the AHS, albeit with 87 fewer cases and 3.2 years shorter follow-up (Alavanja et al. 2003). 
Here we extend this result by also reporting a dose response with lifetime exposure-days and intensity-weighted exposure-days.\nThe statistical interaction that we observed here between fonofos exposure and family history of prostate cancer could have several explanations. One explanation may be that positive prostate cancer family history may serve as a surrogate for an inherited genetic trait, such as a polymorphism in a metabolic enzyme. There are known polymorphic variants of several cytochrome P450 isoforms that vary considerably in their ratio of chlorpyrifos bioactivation to detoxification (Dai et al. 2001; Tang et al. 2001). As organothiophosphates, fonofos and chlorpyrifos are similar in that they must be metabolized to their bio-active neurotoxic oxon forms (Maroni et al. 2000), and if fonofos shares some of the same metabolic enzymes as chlorpyrifos, such a polymorphism may account for the interaction. Alternatively, fonofos, phorate, and chlorpyrifos significantly inhibit testosterone metabolism in human liver microsomes, most likely as a result of their noncompetitive inhibition of cytochrome P450 3A4 testosterone metabolism (Usmani et al. 2003).\nAs with any study, some exposure misclassification is likely (Acquavella et al. 2006), but because exposure information was collected prospectively we have no reason to believe that it occurred differentially between cancer cases and cancer-free subjects. In addition, some of the exposure considered here may have occurred too recently to contribute to cancer occurrence. However, we repeated several analyses restricted to those whose year of first use occurred before 1990, and the results did not appreciably differ from the unrestricted analyses.\nPesticide applicators come into contact with multiple farm chemicals, including pesticides, and other agents. A previous AHS examination determined that a relationship between pesticide exposure and disease is not likely confounded by farming or nonfarming activities (Coble et al. 2002). In this study, we attempted to control for exposures to other pesticides using two approaches. First, we adjusted the risk estimates for use of the four pesticides that were most correlated with fonofos. Alternatively, to control for the effects of exposure to all pesticides, we adjusted for lifetime exposure-days to all pesticides. The measured differences between exposed and unexposed applicators in age and family history of cancer would normally raise concerns that these groups also differed with respect to other unmeasured cancer risk factors, but the overall conclusions did not differ when the unexposed and lowest exposed groups were used as the reference.\nThe main strengths of this study include its large prospective design, complete recruitment and follow-up, and the use of semi-quantitative exposure measures that improve on the qualitative measures used in previous studies of pesticide exposure. In addition, our results are internally consistent, as further adjustment and subgroup analyses did not result in different conclusions.\nThis is, to our knowledge, the largest examination of any group occupationally exposed to fonofos. Our strategy for evaluating the carcinogenic potential of pesticides in the cohort is to examine each pesticide with respect to cancer outcomes, to examine each cancer outcome with respect to pesticide exposures, and to examine the consistency of the relationship across time, state, and license type. 
Our conclusions are limited because of the small number of exposed cases, especially for leukemia. As follow-up of the cohort continues, more cancer cases will develop as the cohort ages, at which point the relationship between cancer and exposure to fonofos and other pesticides needs to be confirmed." ]
[ "materials|methods", null, null, "methods", "results", "discussion" ]
[ "agriculture", "fonofos", "insecticides", "neoplasms", "occupational exposure", "organophosphorus compounds", "organothiophosphorus compounds", "pesticides" ]
Materials and Methods: Enrollment and follow-up The AHS has been described previously (Alavanja et al. 1996). Briefly, it is a prospective cohort of 52,395 private applicators (farmers) from Iowa and North Carolina and 4,916 commercial applicators (employees of pest control companies or businesses whose primary function is not pesticide application) from Iowa licensed to apply restricted use pesticides. This cohort represents 82% of eligible applicators from both states during the enrollment period of the study (13 December 1993 to 31 December 1997). Population-based cancer registries of both states were used to identify subjects with incident cancer diagnoses between enrollment and 31 December 2002. Subjects who died or moved out of the state were censored in the year of occurrence of either event. Vital status was ascertained using state death registries and the National Death Index. Residence information was obtained through motor vehicle records, pesticide registration records, and address files of the Internal Revenue Service. To date, average follow-up time is 7.5 years and follow-up is > 98% complete. All participants provided informed consent, and the research protocol was approved by all appropriate institutional review boards. The AHS has been described previously (Alavanja et al. 1996). Briefly, it is a prospective cohort of 52,395 private applicators (farmers) from Iowa and North Carolina and 4,916 commercial applicators (employees of pest control companies or businesses whose primary function is not pesticide application) from Iowa licensed to apply restricted use pesticides. This cohort represents 82% of eligible applicators from both states during the enrollment period of the study (13 December 1993 to 31 December 1997). Population-based cancer registries of both states were used to identify subjects with incident cancer diagnoses between enrollment and 31 December 2002. Subjects who died or moved out of the state were censored in the year of occurrence of either event. Vital status was ascertained using state death registries and the National Death Index. Residence information was obtained through motor vehicle records, pesticide registration records, and address files of the Internal Revenue Service. To date, average follow-up time is 7.5 years and follow-up is > 98% complete. All participants provided informed consent, and the research protocol was approved by all appropriate institutional review boards. Exposure assessment On enrollment, pesticide applicators seeking a restricted-use pesticide license completed a self-administered questionnaire. The questionnaire obtained detailed exposure information (days of use per year, years of use, and decade of first use) for 22 pesticides (including fonofos) and ever/never use information for 28 additional pesticides, as well as information on application methods, pesticide mixing status, personal protective equipment use, personal equipment repair, smoking, current alcohol use, cancer history in first-degree relatives, and basic demographic data. For some analyses, we used information on solvent exposure that was collected using a self-administered take-home questionnaire completed by 44% of those enrolled [both questionnaires available at http://www.aghealth.org/questionnaires.html (AHS 2006)]. We estimated fonofos exposure in terms of cumulative fonofos lifetime exposure-days and intensity-weighted exposure-days. 
We calculated lifetime exposure-days as the cross-product of the questionnaire categories for frequency of fonofos use in an average year and the number of years of fonofos use, using the midpoints of the questionnaire categories. We assessed intensity of exposure using an algorithm developed by Dosemeci et al. (2002). This algorithm calculates an intensity score that takes into account the effect of modifying factors, such as how often an applicator personally mixed or prepared the pesticide, the type of application methods used, whether an applicator personally repaired pesticide application equipment, and the type of personal protective equipment used. By multiplying the intensity score with fonofos lifetime exposure-days, we obtained fonofos intensity-weighted exposure-days. On enrollment, pesticide applicators seeking a restricted-use pesticide license completed a self-administered questionnaire. The questionnaire obtained detailed exposure information (days of use per year, years of use, and decade of first use) for 22 pesticides (including fonofos) and ever/never use information for 28 additional pesticides, as well as information on application methods, pesticide mixing status, personal protective equipment use, personal equipment repair, smoking, current alcohol use, cancer history in first-degree relatives, and basic demographic data. For some analyses, we used information on solvent exposure that was collected using a self-administered take-home questionnaire completed by 44% of those enrolled [both questionnaires available at http://www.aghealth.org/questionnaires.html (AHS 2006)]. We estimated fonofos exposure in terms of cumulative fonofos lifetime exposure-days and intensity-weighted exposure-days. We calculated lifetime exposure-days as the cross-product of the questionnaire categories for frequency of fonofos use in an average year and the number of years of fonofos use, using the midpoints of the questionnaire categories. We assessed intensity of exposure using an algorithm developed by Dosemeci et al. (2002). This algorithm calculates an intensity score that takes into account the effect of modifying factors, such as how often an applicator personally mixed or prepared the pesticide, the type of application methods used, whether an applicator personally repaired pesticide application equipment, and the type of personal protective equipment used. By multiplying the intensity score with fonofos lifetime exposure-days, we obtained fonofos intensity-weighted exposure-days. Statistical analysis All pesticide applicators who returned an enrollment questionnaire were eligible for analysis. We excluded applicators if they did not provide information on fonofos exposure duration (n = 5,987); if their first primary cancer occurred before enrollment (n = 937), if they did not live in either state on enrollment (n = 295); or if they did not provide information on birth year (n = 2), smoking (n = 1,664), or use of correlated pesticides (n = 3,054). After exclusions, 45,372 subjects were available for the lifetime exposure-days analysis. The intensity-weighted exposure-days analysis contained four fewer cancer cases and 55 fewer cancer-free subjects because of missing data on intensity metric covariates. Compared with retained subjects, excluded subjects were generally older and more likely to be from North Carolina. We categorized fonofos lifetime exposure-days and intensity-weighted exposure-days into tertiles based on the distribution of exposure among all cancer cases. 
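The case-based tertile categorization described above can be sketched as follows. This is an assumption-laden illustration (synthetic data, hypothetical column names), not the study's code; it only shows the idea of deriving cut points from the exposure distribution among cases and then applying those cut points to all exposed subjects, with never users kept as a separate unexposed category.

```python
# Illustrative sketch only: tertile cut points taken from exposed cases, applied to everyone.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    # Zero means fonofos was never used; positive values are lifetime exposure-days.
    "lifetime_exposure_days": rng.gamma(2.0, 20.0, n) * rng.binomial(1, 0.6, n),
    "case": rng.binomial(1, 0.05, n),
})

exposed = df["lifetime_exposure_days"] > 0

# Cut points from the exposure distribution among exposed cases only.
case_exposure = df.loc[exposed & (df["case"] == 1), "lifetime_exposure_days"]
cuts = case_exposure.quantile([0, 1 / 3, 2 / 3, 1]).to_numpy()
cuts[0], cuts[-1] = 0.0, np.inf   # widen the outer bins so every exposed subject fits

df["exposure_cat"] = "unexposed"
df.loc[exposed, "exposure_cat"] = pd.cut(
    df.loc[exposed, "lifetime_exposure_days"],
    bins=cuts,
    labels=["T1", "T2", "T3"],
).astype(str)
print(df["exposure_cat"].value_counts())
```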
Because the tertiles of intensity-weighted exposure-days based on the exposure distribution among all cancer cases resulted in categories inadequate for analyzing leukemia (one case in the lowest exposed group), we created more uniform tertiles for leukemia based on the exposure distribution among leukemia cases. We used two different reference groups—the unexposed and the lowest exposed groups—to analyze all cancer sites for which there were at least 15 exposed cases and 4 cases per lifetime exposure-days category. Specifically, we examined all cancers combined; cancers of the prostate, lung, and colon; melanoma skin cancer; leukemia; and lymphohematopoietic cancer consisting of Hodgkin lymphoma, NHL, leukemia, and multiple myeloma. Statistical analyses were conducted in AHS data release 0412.01 using Stata version 8 (StataCorp 2003). We used Poisson regression to calculate rate ratios (RRs) and 95% CIs while adjusting for multiple covariates. Covariates included category of age at enrollment (< 40, 40–49, 50–59, ≥ 60 years of age), state of residence, pack-years of smoking categorized at the median (0, ≤ 12, > 12), and use of the four most correlated pesticides [trichlorofon, carbofuran, imazethapyr, and S-ethyl dipropylthiocarbamate (EPTC)]. Pearson correlation coefficients ranged between 0.43 and 0.50. Use of each correlated pesticide was classified as never, low, and high, with the median lifetime exposure-days categorizing low and high usage. As an alternative strategy to account for use of other pesticides, we also replaced use of the most correlated pesticides with lifetime exposure-days to all pesticides. Further adjustment for education, sex, alcohol consumption in the previous 12 months, applicator type, cancer history in first-degree relatives, and enrollment year did not affect point estimates by > 10%. We performed linear trend tests to assess the overall dose–response trend by entering exposure categories ordinally in the models after assigning them median exposure value in that category. All statistical tests were two-sided. All pesticide applicators who returned an enrollment questionnaire were eligible for analysis. We excluded applicators if they did not provide information on fonofos exposure duration (n = 5,987); if their first primary cancer occurred before enrollment (n = 937), if they did not live in either state on enrollment (n = 295); or if they did not provide information on birth year (n = 2), smoking (n = 1,664), or use of correlated pesticides (n = 3,054). After exclusions, 45,372 subjects were available for the lifetime exposure-days analysis. The intensity-weighted exposure-days analysis contained four fewer cancer cases and 55 fewer cancer-free subjects because of missing data on intensity metric covariates. Compared with retained subjects, excluded subjects were generally older and more likely to be from North Carolina. We categorized fonofos lifetime exposure-days and intensity-weighted exposure-days into tertiles based on the distribution of exposure among all cancer cases. Because the tertiles of intensity-weighted exposure-days based on the exposure distribution among all cancer cases resulted in categories inadequate for analyzing leukemia (one case in the lowest exposed group), we created more uniform tertiles for leukemia based on the exposure distribution among leukemia cases. 
We used two different reference groups—the unexposed and the lowest exposed groups—to analyze all cancer sites for which there were at least 15 exposed cases and 4 cases per lifetime exposure-days category. Specifically, we examined all cancers combined; cancers of the prostate, lung, and colon; melanoma skin cancer; leukemia; and lymphohematopoietic cancer consisting of Hodgkin lymphoma, NHL, leukemia, and multiple myeloma. Statistical analyses were conducted in AHS data release 0412.01 using Stata version 8 (StataCorp 2003). We used Poisson regression to calculate rate ratios (RRs) and 95% CIs while adjusting for multiple covariates. Covariates included category of age at enrollment (< 40, 40–49, 50–59, ≥ 60 years of age), state of residence, pack-years of smoking categorized at the median (0, ≤ 12, > 12), and use of the four most correlated pesticides [trichlorofon, carbofuran, imazethapyr, and S-ethyl dipropylthiocarbamate (EPTC)]. Pearson correlation coefficients ranged between 0.43 and 0.50. Use of each correlated pesticide was classified as never, low, and high, with the median lifetime exposure-days categorizing low and high usage. As an alternative strategy to account for use of other pesticides, we also replaced use of the most correlated pesticides with lifetime exposure-days to all pesticides. Further adjustment for education, sex, alcohol consumption in the previous 12 months, applicator type, cancer history in first-degree relatives, and enrollment year did not affect point estimates by > 10%. We performed linear trend tests to assess the overall dose–response trend by entering exposure categories ordinally in the models after assigning them median exposure value in that category. All statistical tests were two-sided. Enrollment and follow-up: The AHS has been described previously (Alavanja et al. 1996). Briefly, it is a prospective cohort of 52,395 private applicators (farmers) from Iowa and North Carolina and 4,916 commercial applicators (employees of pest control companies or businesses whose primary function is not pesticide application) from Iowa licensed to apply restricted use pesticides. This cohort represents 82% of eligible applicators from both states during the enrollment period of the study (13 December 1993 to 31 December 1997). Population-based cancer registries of both states were used to identify subjects with incident cancer diagnoses between enrollment and 31 December 2002. Subjects who died or moved out of the state were censored in the year of occurrence of either event. Vital status was ascertained using state death registries and the National Death Index. Residence information was obtained through motor vehicle records, pesticide registration records, and address files of the Internal Revenue Service. To date, average follow-up time is 7.5 years and follow-up is > 98% complete. All participants provided informed consent, and the research protocol was approved by all appropriate institutional review boards. Exposure assessment: On enrollment, pesticide applicators seeking a restricted-use pesticide license completed a self-administered questionnaire. 
The questionnaire obtained detailed exposure information (days of use per year, years of use, and decade of first use) for 22 pesticides (including fonofos) and ever/never use information for 28 additional pesticides, as well as information on application methods, pesticide mixing status, personal protective equipment use, personal equipment repair, smoking, current alcohol use, cancer history in first-degree relatives, and basic demographic data. For some analyses, we used information on solvent exposure that was collected using a self-administered take-home questionnaire completed by 44% of those enrolled [both questionnaires available at http://www.aghealth.org/questionnaires.html (AHS 2006)]. We estimated fonofos exposure in terms of cumulative fonofos lifetime exposure-days and intensity-weighted exposure-days. We calculated lifetime exposure-days as the cross-product of the questionnaire categories for frequency of fonofos use in an average year and the number of years of fonofos use, using the midpoints of the questionnaire categories. We assessed intensity of exposure using an algorithm developed by Dosemeci et al. (2002). This algorithm calculates an intensity score that takes into account the effect of modifying factors, such as how often an applicator personally mixed or prepared the pesticide, the type of application methods used, whether an applicator personally repaired pesticide application equipment, and the type of personal protective equipment used. By multiplying the intensity score with fonofos lifetime exposure-days, we obtained fonofos intensity-weighted exposure-days. Statistical analysis: All pesticide applicators who returned an enrollment questionnaire were eligible for analysis. We excluded applicators if they did not provide information on fonofos exposure duration (n = 5,987); if their first primary cancer occurred before enrollment (n = 937), if they did not live in either state on enrollment (n = 295); or if they did not provide information on birth year (n = 2), smoking (n = 1,664), or use of correlated pesticides (n = 3,054). After exclusions, 45,372 subjects were available for the lifetime exposure-days analysis. The intensity-weighted exposure-days analysis contained four fewer cancer cases and 55 fewer cancer-free subjects because of missing data on intensity metric covariates. Compared with retained subjects, excluded subjects were generally older and more likely to be from North Carolina. We categorized fonofos lifetime exposure-days and intensity-weighted exposure-days into tertiles based on the distribution of exposure among all cancer cases. Because the tertiles of intensity-weighted exposure-days based on the exposure distribution among all cancer cases resulted in categories inadequate for analyzing leukemia (one case in the lowest exposed group), we created more uniform tertiles for leukemia based on the exposure distribution among leukemia cases. We used two different reference groups—the unexposed and the lowest exposed groups—to analyze all cancer sites for which there were at least 15 exposed cases and 4 cases per lifetime exposure-days category. Specifically, we examined all cancers combined; cancers of the prostate, lung, and colon; melanoma skin cancer; leukemia; and lymphohematopoietic cancer consisting of Hodgkin lymphoma, NHL, leukemia, and multiple myeloma. Statistical analyses were conducted in AHS data release 0412.01 using Stata version 8 (StataCorp 2003). 
We used Poisson regression to calculate rate ratios (RRs) and 95% CIs while adjusting for multiple covariates. Covariates included category of age at enrollment (< 40, 40–49, 50–59, ≥ 60 years of age), state of residence, pack-years of smoking categorized at the median (0, ≤ 12, > 12), and use of the four most correlated pesticides [trichlorofon, carbofuran, imazethapyr, and S-ethyl dipropylthiocarbamate (EPTC)]. Pearson correlation coefficients ranged between 0.43 and 0.50. Use of each correlated pesticide was classified as never, low, and high, with the median lifetime exposure-days categorizing low and high usage. As an alternative strategy to account for use of other pesticides, we also replaced use of the most correlated pesticides with lifetime exposure-days to all pesticides. Further adjustment for education, sex, alcohol consumption in the previous 12 months, applicator type, cancer history in first-degree relatives, and enrollment year did not affect point estimates by > 10%. We performed linear trend tests to assess the overall dose–response trend by entering exposure categories ordinally in the models after assigning them median exposure value in that category. All statistical tests were two-sided. Results: Selected characteristics of the study population are displayed in Table 1 according to lifetime exposure-days category. In this table, “lowest exposed” refers to those in the lowest exposure tertile (> 0–20 lifetime exposure-days), whereas “other exposed” refers to those in the middle and highest tertiles (> 20 lifetime exposure-days). Overall, two-thirds of the study participants were from Iowa. More than two-thirds of the cohort reported corn farming. Study subjects were also predominantly white and male. Just over half reported being never smokers. Close to 55% reported that the highest level of schooling attained was no more than a high school diploma. Approximately 40% reported a history of cancer in first-degree relatives. The unexposed group was generally younger, less likely to use alcohol, and slightly less likely to report a family history of cancer, used fewer pesticides in general, and planted fewer acres than either exposed group. Both Iowa participants and corn farmers were over-represented in the exposed categories. Based on these differences between the unexposed group and either exposed group, the lowest exposed group may represent the exposed group more closely. With the intensity-weighted metric, risk estimates for all cancers combined were not different from the null, regardless of the reference group used (Table 2). Colon cancer risk estimates were elevated, but only when using the unexposed as the reference, and the relationship was not monotonic. Leukemia risk estimates were elevated regardless of the reference group used. When the unexposed group was the reference, the RR was 2.67 (95% CI, 1.06–6.70) in the highest exposure category, and the test for linear trend was significant (ptrend = 0.04). When the lowest exposed group was the reference, the corresponding RR was 2.03 (95% CI, 0.58–7.05). The linear trend test was not significant. Fonofos intensity-weighted exposure-days were not related to the risk of any other examined cancer. Results were similar using the lifetime exposure-days metric (not shown). For example, using the unexposed group as the reference, leukemia RRs increased monotonically to 2.24 (95% CI, 0.94–5.34) in the highest tertile (ptrend = 0.07). 
When the lowest exposed tertile was used as the reference, the risk estimates increased monotonically with increasing exposure category to 2.18 (95% CI, 0.57–8.40) in the highest tertile. The test for linear trend was not significant. To account for the effect of misclassification due to the inclusion of exposure that occurred too recently to affect cancer risk, we repeated the analyses excluding 39 cancer cases and 1,389 cancer-free subjects who either reported first using fonofos during the 1990s or did not provide this information. The results were similar to those presented here (not shown). The results were also similar after repeating the analyses among Iowa participants only (not shown). Additionally, to control for pesticide use in general, we repeated the analyses adjusting for lifetime exposure-days to all pesticides instead of the most correlated pesticides, and the results did not differ from those presented here (not shown). To evaluate the effect of missing information, we repeated the analyses while allowing subjects with missing information on covariates to influence the outcome by assigning them an unspecified category. Once again, the results were largely the same (not shown). Finally, the results were similar when we separately examined fonofos days of use and years of use after categorizing each into none, low, and high categories for exposure, using the median to distinguish between low and high (not shown). We further investigated leukemia by separately examining chronic lymphocytic (eight exposed cases), chronic myelogenous (two exposed cases), acute myelogenous (five exposed cases), and all other leukemias (three exposed cases) (not shown). Acute lymphocytic leukemia could not be evaluated (no exposed cases). Although the CIs were wide and the point estimates were not significant, relative to the unexposed, the age-adjusted risk estimates in low- and high-exposure categories were elevated for all examined subtypes. In the high-exposure category, point estimates ranged from 1.75 for acute myelogenous to 3.65 for chronic myelogenous leukemia. We also adjusted leukemia risk estimates using data on use of gasoline, solvents, and paint, which were collected among private applicators using the take-home questionnaire (not shown). Although the subset of otherwise eligible applicators who provided the aforementioned information was small (eight exposed cases), adjusting for these exposures did not weaken the leukemia risk estimates. Finally, controlling for animal exposures using information on the number of livestock (other than poultry) or whether applicators butchered animals, provided veterinary services to livestock, or worked in swine or poultry containing areas, did not affect risk estimates (not shown). Table 3 shows prostate cancer risk relative to the unexposed using both metrics and stratified by family history of prostate cancer in first-degree relatives. We generated uniform exposure categories based on the exposure distribution among prostate cancer cases. In the group with no prostate cancer family history, risk was not associated with exposure regardless of the metric. In those with a family history of prostate cancer, the risk estimates increased, and significant linear trends were observed using either metric. Using the lifetime exposure-days metric, we observed a significant dose–response relationship (ptrend = 0.02), which resulted in a RR of 1.77 (95% CI, 1.03–3.05) in the highest exposure category. 
The interaction term, defined as the cross-product of family history of cancer and category of lifetime exposure-days (treated as a continuous variable), was significant (RR = 1.28; 95% CI, 1.07–1.54). With the intensity-weighted exposure-days metric, risk in the highest category was 1.83 (95% CI, 1.12–3.00). The test for linear trend was significant (ptrend < 0.01), as was the interaction RR of 1.27 (95% CI, 1.07–1.51). When the analysis in Table 3 was repeated using the lowest exposed group as the reference, the results were similar but less pronounced due to decreased statistical power (not shown). Risk was related to fonofos use only in those with a family history of prostate cancer. Point estimates increased monotonically with lifetime exposure-days to 1.24 (95% CI, 0.61–2.51) in the highest category. The interaction RR was 1.25 (95% CI, 0.83–1.89). Point estimates generally increased with intensity-weighted exposure-days to 1.68 (95% CI, 0.83–3.39) in the highest category. The interaction RR was 1.27 (95% CI, 0.85–1.89). Linear trend tests were not significant using either metric. When the risk of the other examined cancers (all cancers combined, melanoma, leukemia, lymphohematopoietic cancers, lung cancer, and colon cancer) was similarly stratified, no discrepancies were observed comparing those with and without a family history of the specific cancer (not shown). When we did further analyses to disentangle the effects of prostate cancer family history and fonofos exposure, we observed that the age-adjusted main effect for ever compared with never fonofos exposure was 0.97 (95% CI, 0.80–1.17), whereas for family history of prostate cancer, it was 1.67 (95% CI, 1.35–2.07). The observed joint effect of the two exposures was 2.63 (95% CI, 1.96–3.53). Discussion: In this study we evaluated cumulative lifetime fonofos exposure until enrollment as a risk factor for incident cancer occurring from the end of enrollment through 2002. Almost 40% of exposed applicators first used fonofos before 1980. Thus, although the period of cancer incidence follow-up is 7.5 years on average, the actual time from first use to the end of the follow-up period is longer. We did not observe an association between fonofos exposure and the incidence of all cancers combined. We did not have enough cases to evaluate NHL. There was, however, evidence of an association between fonofos and leukemia. There was also an observed association between fonofos and prostate cancer among those with a family history of prostate cancer. Organophosphates have been associated with leukemia and other immunologically related cancers in the epidemiologic literature (Brown et al. 1990; Cantor et al. 1992; Clavel et al. 1996; De Roos et al. 2003; Lee et al. 2004; Waddell et al. 2001; Zahm et al. 1993). The leukemogenic effects of organophosphates may be related to immune function perturbation. Organophosphates irreversibly inhibit acetylcholine esterase, an enzyme that breaks down the neurotransmitter acetylcholine into inactive metabolites. Lymphocytes contain essential components of a cholinergic system, and studies suggest that prolonged acetylcholine esterase receptor stimulation, which could result from irreversible acetylcholine esterase inhibition, can alter lymphocytic activity (Kawashima and Fujii 2003). Prostate cancer risk was not related to fonofos exposure overall. We did, however, find increased prostate cancer risk associated with fonofos use for those with a family history of prostate cancer. 
This result was previously reported in a case–control analysis of prostate cancer in the AHS, albeit with 87 fewer cases and 3.2 years shorter follow-up (Alavanja et al. 2003). Here we extend this result by also reporting a dose response with lifetime exposure-days and intensity-weighted exposure-days. The statistical interaction that we observed here between fonofos exposure and family history of prostate cancer could have several explanations. One explanation may be that positive prostate cancer family history may serve as a surrogate for an inherited genetic trait, such as a polymorphism in a metabolic enzyme. There are known polymorphic variants of several cytochrome P450 isoforms that vary considerably in their ratio of chlorpyrifos bioactivation to detoxification (Dai et al. 2001; Tang et al. 2001). As organothiophosphates, fonofos and chlorpyrifos are similar in that they must be metabolized to their bio-active neurotoxic oxon forms (Maroni et al. 2000), and if fonofos shares some of the same metabolic enzymes as chlorpyrifos, such a polymorphism may account for the interaction. Alternatively, fonofos, phorate, and chlorpyrifos significantly inhibit testosterone metabolism in human liver microsomes, most likely as a result of their noncompetitive inhibition of cytochrome P450 3A4 testosterone metabolism (Usmani et al. 2003). As with any study, some exposure misclassification is likely (Acquavella et al. 2006), but because exposure information was collected prospectively we have no reason to believe that it occurred differentially between cancer cases and cancer-free subjects. In addition, some of the exposure considered here may have occurred too recently to contribute to cancer occurrence. However, we repeated several analyses restricted to those whose year of first use occurred before 1990, and the results did not appreciably differ from the unrestricted analyses. Pesticide applicators come into contact with multiple farm chemicals, including pesticides, and other agents. A previous AHS examination determined that a relationship between pesticide exposure and disease is not likely confounded by farming or nonfarming activities (Coble et al. 2002). In this study, we attempted to control for exposures to other pesticides using two approaches. First, we adjusted the risk estimates for use of the four pesticides that were most correlated with fonofos. Alternatively, to control for the effects of exposure to all pesticides, we adjusted for lifetime exposure-days to all pesticides. The measured differences between exposed and unexposed applicators in age and family history of cancer would normally raise concerns that these groups also differed with respect to other unmeasured cancer risk factors, but the overall conclusions did not differ when the unexposed and lowest exposed groups were used as the reference. The main strengths of this study include its large prospective design, complete recruitment and follow-up, and the use of semi-quantitative exposure measures that improve on the qualitative measures used in previous studies of pesticide exposure. In addition, our results are internally consistent, as further adjustment and subgroup analyses did not result in different conclusions. This is, to our knowledge, the largest examination of any group occupationally exposed to fonofos. 
Our strategy for evaluating the carcinogenic potential of pesticides in the cohort is to examine each pesticide with respect to cancer outcomes, to examine each cancer outcome with respect to pesticide exposures, and to examine the consistency of the relationship across time, state, and license type. Our conclusions are limited because of the small number of exposed cases, especially for leukemia. As follow-up of the cohort continues, more cancer cases will develop as the cohort ages, at which point the relationship between cancer and exposure to fonofos and other pesticides needs to be confirmed.
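The Poisson modeling and the exposure-by-family-history interaction described above can be illustrated with a brief sketch. This is a hypothetical, minimal example using the statsmodels formula API; the variable names and toy counts are invented for illustration and do not represent the AHS data or the authors' full model, which also adjusted for additional confounders.

```python
# Minimal sketch of a Poisson rate-ratio model with an exposure x family-history
# interaction, loosely following the analysis described above.
# Variable names and counts are hypothetical, not the AHS dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy aggregated data: events and person-years by exposure category
# (0 = unexposed, 1-3 = tertiles) and family history of prostate cancer (0/1).
df = pd.DataFrame({
    "exposure_cat":   [0, 1, 2, 3, 0, 1, 2, 3],
    "family_history": [0, 0, 0, 0, 1, 1, 1, 1],
    "cases":          [120, 35, 30, 28, 25, 9, 10, 12],
    "person_years":   [90000, 25000, 20000, 18000, 12000, 3500, 3300, 3200],
})

# Poisson model with a log person-years offset. exposure_cat is entered as a
# continuous score, so exp(coef) is a per-category rate ratio, and the
# cross-product term plays the role of the interaction RR reported above.
model = smf.glm(
    "cases ~ exposure_cat * family_history",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()

print(np.exp(model.params))      # rate ratios
print(np.exp(model.conf_int()))  # 95% CIs on the RR scale
```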
Background: The Agricultural Health Study (AHS) is a prospective cohort study of licensed pesticide applicators from Iowa and North Carolina enrolled 1993-1997 and followed for incident cancer through 2002. A previous investigation in this cohort linked exposure to the organophosphate fonofos with incident prostate cancer in subjects with family history of prostate cancer. Methods: Pesticide exposure and other data were collected using self-administered questionnaires. Poisson regression was used to calculate rate ratios (RRs) and 95% confidence intervals (CIs) while controlling for potential confounders. Results: Relative to the unexposed, leukemia risk was elevated in the highest category of lifetime (RR = 2.24; 95% CI, 0.94-5.34, Ptrend = 0.07) and intensity-weighted exposure-days (RR = 2.67; 95% CI, 1.06-6.70, Ptrend = 0.04), a measure that takes into account factors that modify pesticide exposure. Although prostate cancer risk was unrelated to fonofos use overall, among applicators with a family history of prostate cancer, we observed a significant dose-response trend for lifetime exposure-days (Ptrend = 0.02, RR highest tertile vs. unexposed = 1.77, 95% CI, 1.03-3.05; RRinteraction = 1.28, 95% CI, 1.07-1.54). Intensity-weighted results were similar. No associations were observed with other examined cancer sites. Conclusions: Further study is warranted to confirm findings with respect to leukemia and determine whether genetic susceptibility modifies prostate cancer risk from pesticide exposure.
null
null
5,713
289
[ 210, 298 ]
6
[ "exposure", "cancer", "days", "use", "exposure days", "fonofos", "lifetime", "lifetime exposure days", "pesticides", "lifetime exposure" ]
[ "cohort examine pesticide", "cancer occurred enrollment", "cancer diagnoses enrollment", "pesticide application iowa", "cancer registries states" ]
null
null
null
null
[CONTENT] agriculture | fonofos | insecticides | neoplasms | occupational exposure | organophosphorus compounds | organothiophosphorus compounds | pesticides [SUMMARY]
[CONTENT] agriculture | fonofos | insecticides | neoplasms | occupational exposure | organophosphorus compounds | organothiophosphorus compounds | pesticides [SUMMARY]
null
[CONTENT] agriculture | fonofos | insecticides | neoplasms | occupational exposure | organophosphorus compounds | organothiophosphorus compounds | pesticides [SUMMARY]
null
null
[CONTENT] Adult | Agricultural Workers' Diseases | Agriculture | Cohort Studies | Family Health | Female | Fonofos | Humans | Incidence | Iowa | Male | Middle Aged | Neoplasms | North Carolina | Occupational Exposure | Organophosphate Poisoning | Organophosphorus Compounds | Organothiophosphorus Compounds | Pesticides | Poisson Distribution | Prospective Studies | Prostatic Neoplasms | Regression Analysis | Risk Factors | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Agricultural Workers' Diseases | Agriculture | Cohort Studies | Family Health | Female | Fonofos | Humans | Incidence | Iowa | Male | Middle Aged | Neoplasms | North Carolina | Occupational Exposure | Organophosphate Poisoning | Organophosphorus Compounds | Organothiophosphorus Compounds | Pesticides | Poisson Distribution | Prospective Studies | Prostatic Neoplasms | Regression Analysis | Risk Factors | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] Adult | Agricultural Workers' Diseases | Agriculture | Cohort Studies | Family Health | Female | Fonofos | Humans | Incidence | Iowa | Male | Middle Aged | Neoplasms | North Carolina | Occupational Exposure | Organophosphate Poisoning | Organophosphorus Compounds | Organothiophosphorus Compounds | Pesticides | Poisson Distribution | Prospective Studies | Prostatic Neoplasms | Regression Analysis | Risk Factors | Surveys and Questionnaires [SUMMARY]
null
null
[CONTENT] cohort examine pesticide | cancer occurred enrollment | cancer diagnoses enrollment | pesticide application iowa | cancer registries states [SUMMARY]
[CONTENT] cohort examine pesticide | cancer occurred enrollment | cancer diagnoses enrollment | pesticide application iowa | cancer registries states [SUMMARY]
null
[CONTENT] cohort examine pesticide | cancer occurred enrollment | cancer diagnoses enrollment | pesticide application iowa | cancer registries states [SUMMARY]
null
null
[CONTENT] exposure | cancer | days | use | exposure days | fonofos | lifetime | lifetime exposure days | pesticides | lifetime exposure [SUMMARY]
[CONTENT] exposure | cancer | days | use | exposure days | fonofos | lifetime | lifetime exposure days | pesticides | lifetime exposure [SUMMARY]
null
[CONTENT] exposure | cancer | days | use | exposure days | fonofos | lifetime | lifetime exposure days | pesticides | lifetime exposure [SUMMARY]
null
null
[CONTENT] exposure | exposure days | days | cancer | cases | use correlated | leukemia | lifetime | lifetime exposure | lifetime exposure days [SUMMARY]
[CONTENT] 95 ci | ci | risk | exposure | shown | 95 | exposed | highest | significant | cancer [SUMMARY]
null
[CONTENT] exposure | cancer | days | exposure days | use | fonofos | lifetime | cases | lifetime exposure days | lifetime exposure [SUMMARY]
null
null
[CONTENT] ||| Poisson | 95% [SUMMARY]
[CONTENT] 2.24 | 95% | CI | 0.94 | 0.07 | 2.67 | 95% | CI | 1.06-6.70 | 0.04 ||| lifetime exposure-days | 0.02 | 1.77 | 95% | CI | 1.03-3.05 | RRinteraction | 1.28 | 95% | CI | 1.07 ||| ||| [SUMMARY]
null
[CONTENT] The Agricultural Health Study | Iowa | North Carolina | 1993-1997 | 2002 ||| ||| ||| Poisson | 95% ||| ||| 2.24 | 95% | CI | 0.94 | 0.07 | 2.67 | 95% | CI | 1.06-6.70 | 0.04 ||| lifetime exposure-days | 0.02 | 1.77 | 95% | CI | 1.03-3.05 | RRinteraction | 1.28 | 95% | CI | 1.07 ||| ||| ||| [SUMMARY]
null
Linking geological and health sciences to assess childhood lead poisoning from artisanal gold mining in Nigeria.
23524139
In 2010, Médecins Sans Frontières discovered a lead poisoning outbreak linked to artisanal gold processing in northwestern Nigeria. The outbreak has killed approximately 400 young children and affected thousands more.
BACKGROUND
We applied diverse analytical methods to ore samples, soil and sweep samples from villages and family compounds, and plant foodstuff samples.
METHODS
Natural weathering of lead-rich gold ores before mining formed abundant, highly gastric-bioaccessible lead carbonates. The same fingerprint of lead minerals found in all sample types confirms that ore processing caused extreme contamination, with up to 185,000 ppm lead in soils/sweep samples and up to 145 ppm lead in plant foodstuffs. Incidental ingestion of soils via hand-to-mouth transmission and of dusts cleared from the respiratory tract is the dominant exposure pathway. Consumption of water and foodstuffs contaminated by the processing is likely lesser, but these are still significant exposure pathways. Although young children suffered the most immediate and severe consequences, results indicate that older children, adult workers, pregnant women, and breastfed infants are also at risk for lead poisoning. Mercury, arsenic, manganese, antimony, and crystalline silica exposures pose additional health threats.
RESULTS
Results inform ongoing efforts in Nigeria to assess lead contamination and poisoning, treat victims, mitigate exposures, and remediate contamination. Ore deposit geology, pre-mining weathering, and burgeoning artisanal mining may combine to cause similar lead poisoning disasters elsewhere globally.
CONCLUSIONS
[ "Child", "Child, Preschool", "Environmental Exposure", "Global Health", "Gold", "Humans", "Lead Poisoning", "Metals", "Mining", "Nigeria", "Particle Size" ]
3672918
null
null
Methods
Methods by which the different sample types were collected in Zamfara are described in Supplemental Material, Field Sampling Methods (http://dx.doi.org/10.1289/ehp.1206051). We analyzed the samples at USGS laboratories in Denver, Colorado (USA), incorporating appropriate quality assurance/quality control analyses of standard reference materials, duplicate sample splits, analytical duplicates, and blanks. See Supplemental Material, Table S1, for analytical method details and references (http://dx.doi.org/10.1289/ehp.1206051). Nearly 200 spot chemical analyses were performed in the laboratory by handheld XRF on > 50 raw ore samples, to assess natural chemical heterogeneities within and between the samples. Representative splits of all processed ores, soils, and sweep samples were analyzed for multiple parameters. Quantitative particle size distribution of samples sieved to < 2 mm was measured by laser diffraction. Powder X-ray diffraction was used to qualitatively identify relative weight proportions of specific minerals present above the detection limit of approximately 2 weight %. Total chemical concentrations of 42 elements were analyzed using inductively coupled plasma–mass spectrometry (ICP-MS). Total mercury was analyzed using continuous flow–cold vapor–atomic fluorescence spectroscopy (CVAFS). Scanning electron microscopy (SEM) was performed on a subset of raw ores, processed ores, soils, sweep samples, and grain samples to determine individual particle mineralogy, chemistry, size, and shape. Deionized water extractions were performed on a subset of processed ores, soils, and sweep samples to model constituent release into surface waters (Hageman 2007). In vitro bioaccessibility assessments (IVBA) were performed on a subset of processed ores, soils, and sweep samples to model toxicant bioaccessibility and bioavailability along ingestion exposure pathways (Drexler and Brattin 2007; Morman et al. 2009). Bioaccessibility measures the amount of a toxicant that is dissolved in the body’s fluids and is available for uptake into the body’s circulatory system, whereas bioavailability measures the amount of a toxicant that is absorbed by the body and transported to a site of toxic action [see references in Plumlee and Morman (2011)]. The IVBA we used leaches samples with simulated gastric fluid for 1 hr at 37°C [see Supplemental Material, Table S1 (http://dx.doi.org/10.1289/ehp.1206051)], and is based on the Drexler and Brattin (2007) method, validated for lead against juvenile swine uptake. The juvenile swine uptake model is a proxy for relative lead bioavailability in humans that integrates both lead dissolution in the stomach acids and uptake via the intestines (Casteel et al. 2006). This IVBA has not been validated against swine uptake for other toxicants such as arsenic, mercury, and manganese, but nonetheless provides useful insights into their potential gastric bioaccessibility (Plumlee and Morman, 2011). Plant foodstuff samples were analyzed for 40 elements by inductively coupled plasma–atomic emission spectroscopy (ICP-AES) and mercury by CVAFS. Analytical methods used for total and leachate chemical analyses provided concentration data for many potential elemental toxicants in addition to lead and mercury, such as arsenic, antimony, manganese, iron, aluminum, cadmium, copper, zinc, and nickel.
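As a concrete illustration of how IVBA leachate measurements translate into the percent-bioaccessibility values reported in the Results, the sketch below applies the calculation implied by the 1 kg solid per 100 kg simulated gastric fluid extraction described above. The function name and sample values are hypothetical placeholders, not measured Zamfara data.

```python
# Sketch of the gastric bioaccessibility calculation for IVBA data:
# bioaccessible fraction = SGF-leachable lead (expressed per kg of solid)
#                          / total lead in the solid.
# Values below are hypothetical placeholders, not measured Zamfara results.

SOLID_TO_FLUID = 100.0  # kg simulated gastric fluid per kg solid (1:100 extraction)

def percent_bioaccessible(total_ppm: float, leachate_mg_per_kg: float) -> float:
    """Return percent gastric-bioaccessible lead.

    total_ppm           -- total lead in the solid (mg/kg solid)
    leachate_mg_per_kg  -- lead measured in the SGF leachate (mg/kg leachate)
    """
    leachable_mass_basis = leachate_mg_per_kg * SOLID_TO_FLUID  # mg lead/kg solid
    return 100.0 * leachable_mass_basis / total_ppm

# Example: a hypothetical sweep sample with 50,000 ppm total lead that releases
# 250 mg lead per kg of leachate would be ~50% gastric bioaccessible.
print(percent_bioaccessible(total_ppm=50_000, leachate_mg_per_kg=250))  # -> 50.0
```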
Results
Unweathered (primary) vein ores were dominated by quartz (crystalline silica), with variable amounts of galena (lead sulfide) and minor amounts of pyrite (iron sulfide), chalcopyrite (copper–iron sulfide), and arsenopyrite (iron–arsenic sulfide) [see Supplemental Material, Figure S1D–G, Table S2 (http://dx.doi.org/10.1289/ehp.1206051)]. Natural weathering and oxidation of the vein ores over millennia before mining partially converted primary sulfide minerals into complex secondary mineral assemblages with abundant lead carbonates and lead phosphates (see Supplemental Material, Figure S1D–G, Table S2). Dareta and Yargalma sweep and soil samples contained broken particles of the same complex suite of primary and secondary lead minerals as unprocessed and ground vein ores [see Supplemental Material, Table S2 (http://dx.doi.org/10.1289/ehp.1206051)]. This mineralogical fingerprint confirmed ore processing as the source for contamination. Quantitative particle size analysis and visual estimation by SEM element mapping (Figure 1A) show that > 90% of lead-rich particles in the ground ores, soil samples, and sweep samples were < 250 µm in diameter, regarded as a maximum size for incidental ingestion by hand–mouth transmission (Drexler and Brattin 2007). Visual estimation using SEM element mapping shows that > 50% of the lead-rich particles were also < 10–15 µm (Figure 1A), and could therefore be inhaled into at least the upper respiratory tract, where many would likely be trapped and cleared by mucociliary action. (A) Backscatter electron (BSE) scanning electron microscope (SEM) images of Nigeria ground ore (upper), eating area sweep (middle), and soil (lower) samples with overlain element maps for lead (in red). In all images, the brighter gray indicates higher mean atomic number. Bar = 250 µm. (B) BSE field emission SEM image of a cluster of plant fibers and mineral particles found in a grain sample (ground by a flour mill in Zamfara) having 3 ppm total lead. Elongated plant fibers are light to dark blue. Bright orange particles are lead carbonates, lead oxides, and lead phosphates. Pale orange/blue particles containing iron, chromium, and nickel are steel particles abraded from flourmill grinding plates. The cluster formed during grinding, with the fiber bundle attracting and trapping the mineral and metal fragments. Bar = 100 µm. Laboratory handheld XRF spot analyses of raw ore samples collected from 18 villages indicated that the ores being processed varied considerably in their lead content within samples, and between different villages and mine sources (Figure 2A). ICP-MS analyses measured up to 180,000 ppm lead in processed ore samples from Dareta and Yargalma (Table 1, Figure 2B). (A) Total lead (Pb) concentrations (measured in the laboratory using handheld XRF) in raw ore samples collected from different Zamfara villages. Multiple spot analyses were made on multiple ore samples from each village to account for substantial mineralogical heterogeneities within samples. (B) Total lead concentrations in processed ores, soils, and sweep samples from Dareta and Yargalma, as measured by ICP-MS. Red line indicates U.S. EPA (2011a) RSSL for lead (400 ppm). Summary of USGS laboratory analytical results for total chemical composition. ICP-MS analyses found that all Dareta and Yargalma soil samples and most sweep samples had extreme lead concentrations (up to 185,000 ppm), far above the U.S. EPA (2011a) residential soil screening level (RSSL) of 400 ppm (Table 1, Figure 2). 
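A laser-diffraction particle size distribution like the one summarized above can be reduced to the ingestion- and inhalation-relevant size fractions with a few lines of code. The bin edges and volume percentages below are invented solely for illustration and are not the Zamfara distributions.

```python
# Sketch: cumulative volume fraction of particles below size cutoffs relevant to
# incidental ingestion (< 250 um) and inhalation (< ~10 um), from a
# laser-diffraction style size distribution. Bin data are hypothetical.
import numpy as np

# Upper edge of each size bin (um) and volume percent falling in that bin.
bin_upper_um = np.array([2, 10, 45, 100, 250, 500, 1000, 2000])
volume_pct   = np.array([8, 30, 25, 15, 14, 5, 2, 1])   # sums to 100

cumulative_pct = np.cumsum(volume_pct)

def pct_below(cutoff_um: float) -> float:
    """Interpolate the cumulative volume percent finer than cutoff_um."""
    return float(np.interp(cutoff_um, bin_upper_um, cumulative_pct))

print(pct_below(250))   # fraction available for hand-to-mouth ingestion
print(pct_below(10))    # fraction fine enough to be inhaled
```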
In contrast, lead concentrations in background soils from five villages without gold processing were < 25 ppm. Composite soil samples collected on the outskirts of Dareta and Yargalma (~ 100 m from the edge of each village) had elevated lead concentrations (122 and 293 ppm, respectively) (Table 1), indicating that processing-related contamination extended beyond village limits. Soils used to make adobe bricks (from ore washing areas) had lead levels as high as 58,900 ppm. Total lead concentrations measured with ICP-MS in soil and sweep samples with total lead > approximately 400 ppm were generally twice the concentrations measured on the same samples in the field by CDC/TG using handheld XRF [see Supplemental Material, Figure S2A (http://dx.doi.org/10.1289/ehp.1206051)]. TG has found such field underestimation to be common (von Lindern I, unpublished data), possibly resulting from sample compositing/sieving effects, summer heat impacts on the instruments, and/or lack of field XRF calibration standards having extreme lead concentrations. For the few samples analyzed with lead < approximately 400 ppm, lab ICP-MS results were variously greater than, close to, or less than the field XRF results (see Supplemental Material, Figure S2A inset). Extreme total mercury concentrations measured in soil and sweep samples (up to 4,600 ppm measured in the field by XRF; up to 68.1 ppm by laboratory CVAFS) were higher than levels measured in raw and ground ores, and were well above the U.S. EPA elemental mercury RSSL of 10 ppm [Table 1; see also Supplemental Material, Figure S3A (http://dx.doi.org/10.1289/ehp.1206051)]. Hence, mercury contamination resulted predominantly from the amalgamation processing. Substantially greater concentrations of mercury were measured in soil and sweep samples by field XRF compared with those measured by CVAFS for the same samples (see Supplemental Material, Figure S2B), indicating that mercury was volatilized from the samples after sample collection. Manganese concentrations in processed ores, background soils, village soils, and sweep samples (up to 1,320 ppm) commonly exceeded the U.S. EPA RSSL of 390 ppm (U.S. EPA 2011a), and were generally higher than concentrations measured in soils from villages without ore processing [Table 1; see also Supplemental Material, Figure S3B (http://dx.doi.org/10.1289/ehp.1206051)]. Concentrations of arsenic (up to 270 ppm) and antimony (up to 1,250 ppm) in some soil and sweep samples greatly exceeded U.S. EPA RSSLs of 0.39 (U.S. EPA 2011a) and 31 ppm (U.S. EPA 2002a), respectively (Table 1; see also Supplemental Material, Figure S3C–D). Concentrations of other potential environmental or human toxicants such as cadmium, zinc, copper, and nickel were well below U.S. EPA RSSLs. Deionized water leach tests on soil and sweep samples produced moderately alkaline leachates with pH from 7.7 to 9.1. Metal toxicants were not appreciably water soluble, with a maximum of 0.018% of total lead, 0.7% of total mercury, 2.9% of total manganese, < 0.2% of total arsenic, and 0.03% of total antimony being solubilized by water leaching of any given sample (Table 2). Ranges in percentage of water-leachable elemental toxicants measured in different sample types. Lead was generally highly gastric bioaccessible, with 39–66% of the total lead solubilized in an hour from 9 of 12 samples analyzed (Table 3, Figure 3). The highest percent bioaccessibility was measured in a less heavily contaminated village outskirt soil. 
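The roughly twofold field underestimation noted above is the kind of relationship that can be quantified by pairing field XRF readings with laboratory ICP-MS values for the same samples, as in the sketch below. The paired values are hypothetical and are not the Zamfara measurements.

```python
# Sketch: comparing paired field XRF and laboratory ICP-MS lead values to
# quantify field underestimation. Paired values are hypothetical.
import numpy as np

field_xrf_ppm = np.array([500, 2_000, 9_000, 30_000, 60_000])    # field readings
lab_icpms_ppm = np.array([950, 4_100, 17_500, 63_000, 115_000])  # same samples, lab

ratio = lab_icpms_ppm / field_xrf_ppm
print("per-sample lab/field ratios:", np.round(ratio, 2))
print("median ratio:", np.median(ratio))  # a value near 2 would match the pattern noted above

# A simple log-log least-squares fit gives an approximate field-to-lab correction.
slope, intercept = np.polyfit(np.log10(field_xrf_ppm), np.log10(lab_icpms_ppm), 1)
print(f"log10(lab) ~ {slope:.2f} * log10(field) + {intercept:.2f}")
```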
Manganese was also generally quite gastric bioaccessible, with 6–43% of the total manganese solubilized. However, mercury (< 0.9% of total), arsenic (< 2.1% of total), and antimony (< 1.4% of the total) were not appreciably gastric bioaccessible (Table 3). Ranges in percentage of gastric-bioaccessible elemental toxicants measured in different sample types. Total lead (Pb) concentrations (ppm mass basis; mg lead/kg solid) and simulated gastric fluid (SGF)–leachable lead concentrations [ppm mass basis; calculated as (mg lead/kg leachate) × (100 kg leachate/1 kg solid)] in Zamfara samples. Each bar pair represents results for a single sample. Percentage of bioaccessible lead is listed above the paired bars for each sample, and was calculated by dividing the SGF-leachable concentration by the total concentration for the sample, and then multiplying by 100. Horizontal red line indicates U.S. EPA (2011a) RSSL for lead (400 ppm). Chemical analyses of 39 rice, corn, spice, and medicinal herb samples found that all 16 processed samples and 19 of 23 raw samples were lead-contaminated (from 0.1 to 146 ppm) compared with plant standard reference materials [Table 1; see also Supplemental Material, Figure S3E (http://dx.doi.org/10.1289/ehp.1206051)]. The same suite of lead carbonates and other secondary lead minerals was found in the plant foodstuffs as in the ores, soils, and sweep samples (Figure 1B). This mineralogical fingerprint confirms that stored foodstuffs were contaminated by ore-processing dusts and that grains were contaminated when ground using flourmills also used for ore grinding. Elevated concentrations of mercury from 0.01 to 0.45 ppm (Table 1; see also Supplemental Material, Figure S3F) found in 10 of 16 processed and 8 of 23 raw foodstuff samples also indicate processing-related contamination, possibly from airborne mercury, use of flourmills for both food and ore regrinding, and foodstuff storage in cooking pots used for amalgamation.
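Screening measured concentrations against the U.S. EPA residential soil screening levels cited above amounts to a simple lookup. The sketch below flags exceedances using the RSSL values quoted in the text; the sample concentrations are hypothetical placeholders.

```python
# Sketch: flag elements in a soil/sweep sample that exceed the U.S. EPA
# residential soil screening levels (RSSLs) cited in the text.
# Sample concentrations are hypothetical placeholders.

RSSL_PPM = {           # screening levels quoted above (U.S. EPA 2011a, 2002a)
    "lead": 400,
    "mercury": 10,     # elemental mercury
    "manganese": 390,
    "arsenic": 0.39,
    "antimony": 31,
}

sample_ppm = {"lead": 58_900, "mercury": 25, "manganese": 900,
              "arsenic": 45, "antimony": 120}

for element, conc in sample_ppm.items():
    screen = RSSL_PPM[element]
    status = "EXCEEDS" if conc > screen else "below"
    print(f"{element:9s} {conc:>10,.2f} ppm  {status} RSSL ({screen} ppm)")
```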
Conclusions
The results of the present study support the conclusion that the fatal lead poisoning outbreak in northern Nigeria resulted from contamination of soils, living areas, water supplies, and foodstuffs by the processing of weathered, lead-rich gold ores containing abundant, highly gastric-bioaccessible secondary lead carbonate minerals. The dominant exposure pathway is incidental ingestion of lead-rich soil and dust particles by hand–mouth transmission and of inhaled dust particles cleared from the respiratory tract. Lesser but still significant pathways (each of which alone would be problematic) include consumption of water and foodstuffs contaminated by the processing. Although acute lead poisoning of young children has been the most immediate and severe consequence, older children, adult workers, pregnant women and their unborn children, and breastfeeding infants are also at risk. Other contaminants (manganese, arsenic, antimony, crystalline silica) may pose additional health threats. Lead poisoning may occur elsewhere in the world from artisanal mining in geologically and climatically favorable areas. This study underscores the value of collaborative interdisciplinary studies involving health, geological, and engineering scientists. This scientific input will aid development of evidence-based policies on artisanal resource extraction that greatly reduce environmental contamination and adverse health impacts.
[ "Supplemental Material" ]
[ "Click here for additional data file." ]
[ null ]
[ "Methods", "Results", "Discussion", "Conclusions", "Supplemental Material" ]
[ "Methods by which the different sample types were collected in Zamfara are described in Supplemental Material, Field Sampling Methods (http://dx.doi.org/10.1289/ehp.1206051).\nWe analyzed the samples at USGS laboratories in Denver, Colorado (USA), incorporating appropriate quality assurance/quality control analyses of standard reference materials, duplicate sample splits, analytical duplicates, and blanks. See Supplemental Material, Table S1, for analytical method details and references (http://dx.doi.org/10.1289/ehp.1206051).\nNearly 200 spot chemical analyses were performed in the laboratory by handheld XRF on > 50 raw ore samples, to assess natural chemical heterogeneities within and between the samples.\nRepresentative splits of all processed ores, soils, and sweep samples were analyzed for multiple parameters. Quantitative particle size distribution of samples sieved to < 2 mm was measured by laser diffraction. Powder X-ray diffraction was used to qualitatively identify relative weight proportions of specific minerals present above the detection limit of approximately 2 weight %. Total chemical concentrations of 42 elements were analyzed using inductively coupled plasma–mass spectrometry (ICP-MS). Total mercury was analyzed using continuous flow–cold vapor–atomic fluorescence spectroscopy (CVAFS).\nScanning electron microscopy (SEM) was performed on a subset of raw ores, processed ores, soils, sweep samples, and grain samples to determine individual particle mineralogy, chemistry, size, and shape.\nDeionized water extractions were performed on a subset of processed ores, soils, and sweep samples to model constituent release into surface waters (Hageman 2007).\nIn vitro bioaccessibility assessments (IVBA) were performed on a subset of processed ores, soils, and sweep samples to model toxicant bioaccessibility and bioavailability along ingestion exposure pathways (Drexler and Brattin 2007; Morman et al. 2009). Bioaccessibility measures the amount of a toxicant that is dissolved in the body’s fluids and is available for uptake into the body’s circulatory system, whereas bioavailability measures the amount of a toxicant that is absorbed by the body and transported to a site of toxic action [see references in Plumlee and Morman (2011)]. The IVBA we used leaches samples with simulated gastric fluid for 1 hr at 37°C [see Supplemental Material, Table S1 (http://dx.doi.org/10.1289/ehp.1206051)], and is based on the Drexler and Brattin (2007) method, validated for lead against juvenile swine uptake. The juvenile swine uptake model is a proxy for relative lead bioavailability in humans that integrates both lead dissolution in the stomach acids and uptake via the intestines (Casteel et al. 2006). 
This IVBA has not been validated against swine uptake for other toxicants such as arsenic, mercury, and manganese, but nonetheless provides useful insights into their potential gastric bioaccessibility (Plumlee and Morman, 2011).\nPlant foodstuff samples were analyzed for 40 elements by inductively coupled plasma–atomic emission spectroscopy (ICP-AES) and mercury by CVAFS.\nAnalytical methods used for total and leachate chemical analyses provided concentration data for many potential elemental toxicants in addition to lead and mercury, such as arsenic, antimony, manganese, iron, aluminum, cadmium, copper, zinc, and nickel.", "Unweathered (primary) vein ores were dominated by quartz (crystalline silica), with variable amounts of galena (lead sulfide) and minor amounts of pyrite (iron sulfide), chalcopyrite (copper–iron sulfide), and arsenopyrite (iron–arsenic sulfide) [see Supplemental Material, Figure S1D–G, Table S2 (http://dx.doi.org/10.1289/ehp.1206051)]. Natural weathering and oxidation of the vein ores over millennia before mining partially converted primary sulfide minerals into complex secondary mineral assemblages with abundant lead carbonates and lead phosphates (see Supplemental Material, Figure S1D–G, Table S2).\nDareta and Yargalma sweep and soil samples contained broken particles of the same complex suite of primary and secondary lead minerals as unprocessed and ground vein ores [see Supplemental Material, Table S2 (http://dx.doi.org/10.1289/ehp.1206051)]. This mineralogical fingerprint confirmed ore processing as the source for contamination.\nQuantitative particle size analysis and visual estimation by SEM element mapping (Figure 1A) show that > 90% of lead-rich particles in the ground ores, soil samples, and sweep samples were < 250 µm in diameter, regarded as a maximum size for incidental ingestion by hand–mouth transmission (Drexler and Brattin 2007). Visual estimation using SEM element mapping shows that > 50% of the lead-rich particles were also < 10–15 µm (Figure 1A), and could therefore be inhaled into at least the upper respiratory tract, where many would likely be trapped and cleared by mucociliary action.\n(A) Backscatter electron (BSE) scanning electron microscope (SEM) images of Nigeria ground ore (upper), eating area sweep (middle), and soil (lower) samples with overlain element maps for lead (in red). In all images, the brighter gray indicates higher mean atomic number. Bar = 250 µm. (B) BSE field emission SEM image of a cluster of plant fibers and mineral particles found in a grain sample (ground by a flour mill in Zamfara) having 3 ppm total lead. Elongated plant fibers are light to dark blue. Bright orange particles are lead carbonates, lead oxides, and lead phosphates. Pale orange/blue particles containing iron, chromium, and nickel are steel particles abraded from flourmill grinding plates. The cluster formed during grinding, with the fiber bundle attracting and trapping the mineral and metal fragments. Bar = 100 µm.\nLaboratory handheld XRF spot analyses of raw ore samples collected from 18 villages indicated that the ores being processed varied considerably in their lead content within samples, and between different villages and mine sources (Figure 2A). ICP-MS analyses measured up to 180,000 ppm lead in processed ore samples from Dareta and Yargalma (Table 1, Figure 2B).\n(A) Total lead (Pb) concentrations (measured in the laboratory using handheld XRF) in raw ore samples collected from different Zamfara villages. 
Multiple spot analyses were made on multiple ore samples from each village to account for substantial mineralogical heterogeneities within samples. (B) Total lead concentrations in processed ores, soils, and sweep samples from Dareta and Yargalma, as measured by ICP-MS. Red line indicates U.S. EPA (2011a) RSSL for lead (400 ppm).\nSummary of USGS laboratory analytical results for total chemical composition.\nICP-MS analyses found that all Dareta and Yargalma soil samples and most sweep samples had extreme lead concentrations (up to 185,000 ppm), far above the U.S. EPA (2011a) residential soil screening level (RSSL) of 400 ppm (Table 1, Figure 2). In contrast, lead concentrations in background soils from five villages without gold processing were < 25 ppm. Composite soil samples collected on the outskirts of Dareta and Yargalma (~ 100 m from the edge of each village) had elevated lead concentrations (122 and 293 ppm, respectively) (Table 1), indicating that processing-related contamination extended beyond village limits. Soils used to make adobe bricks (from ore washing areas) had lead levels as high as 58,900 ppm.\nTotal lead concentrations measured with ICP-MS in soil and sweep samples with total lead > approximately 400 ppm were generally twice the concentrations measured on the same samples in the field by CDC/TG using handheld XRF [see Supplemental Material, Figure S2A (http://dx.doi.org/10.1289/ehp.1206051)]. TG has found such field underestimation to be common (von Lindern I, unpublished data), possibly resulting from sample compositing/sieving effects, summer heat impacts on the instruments, and/or lack of field XRF calibration standards having extreme lead concentrations. For the few samples analyzed with lead < approximately 400 ppm, lab ICP-MS results were variously greater than, close to, or less than the field XRF results (see Supplemental Material, Figure S2A inset).\nExtreme total mercury concentrations measured in soil and sweep samples (up to 4,600 ppm measured in the field by XRF; up to 68.1 ppm by laboratory CVAFS) were higher than levels measured in raw and ground ores, and were well above the U.S. EPA elemental mercury RSSL of 10 ppm [Table 1; see also Supplemental Material, Figure S3A (http://dx.doi.org/10.1289/ehp.1206051)]. Hence, mercury contamination resulted predominantly from the amalgamation processing. Substantially greater concentrations of mercury were measured in soil and sweep samples by field XRF compared with those measured by CVAFS for the same samples (see Supplemental Material, Figure S2B), indicating that mercury was volatilized from the samples after sample collection.\nManganese concentrations in processed ores, background soils, village soils, and sweep samples (up to 1,320 ppm) commonly exceeded the U.S. EPA RSSL of 390 ppm (U.S. EPA 2011a), and were generally higher than concentrations measured in soils from villages without ore processing [Table 1; see also Supplemental Material, Figure S3B (http://dx.doi.org/10.1289/ehp.1206051)]. Concentrations of arsenic (up to 270 ppm) and antimony (up to 1,250 ppm) in some soil and sweep samples greatly exceeded U.S. EPA RSSLs of 0.39 (U.S. EPA 2011a) and 31 ppm (U.S. EPA 2002a), respectively (Table 1; see also Supplemental Material, Figure S3C–D). Concentrations of other potential environmental or human toxicants such as cadmium, zinc, copper, and nickel were well below U.S. EPA RSSLs.\nDeionized water leach tests on soil and sweep samples produced moderately alkaline leachates with pH from 7.7 to 9.1. 
Metal toxicants were not appreciably water soluble, with a maximum of 0.018% of total lead, 0.7% of total mercury, 2.9% of total manganese, < 0.2% of total arsenic, and 0.03% of total antimony being solubilized by water leaching of any given sample (Table 2).\nRanges in percentage of water-leachable elemental toxicants measured in different sample types.\nLead was generally highly gastric bioaccessible, with 39–66% of the total lead solubilized in an hour from 9 of 12 samples analyzed (Table 3, Figure 3). The highest percent bioaccessibility was measured in a less heavily contaminated village outskirt soil. Manganese was also generally quite gastric bioaccessible, with 6–43% of the total manganese solubilized. However, mercury (< 0.9% of total), arsenic (< 2.1% of total), and antimony (< 1.4% of the total) were not appreciably gastric bioaccessible (Table 3).\nRanges in percentage of gastric-bioaccessible elemental toxicants measured in different sample types.\nTotal lead (Pb) concentrations (ppm mass basis; mg lead/kg solid) and simulated gastric fluid (SGF)–leachable lead concentrations [ppm mass basis; calculated as (mg lead/kg leachate) × (100 kg leachate/1 kg solid)] in Zamfara samples. Each bar pair represents results for a single sample. Percentage of bioaccessible lead is listed above the paired bars for each sample, and was calculated by dividing the SGF-leachable concentration by the total concentration for the sample, and then multiplying by 100. Horizontal red line indicates U.S. EPA (2011a) RSSL for lead (400 ppm).\nChemical analyses of 39 rice, corn, spice, and medicinal herb samples found that all 16 processed samples and 19 of 23 raw samples were lead-contaminated (from 0.1 to 146 ppm) compared with plant standard reference materials [Table 1; see also Supplemental Material, Figure S3E (http://dx.doi.org/10.1289/ehp.1206051)]. The same suite of lead carbonates and other secondary lead minerals was found in the plant foodstuffs as in the ores, soils, and sweep samples (Figure 1B). This mineralogical fingerprint confirms that stored foodstuffs were contaminated by ore-processing dusts and that grains were contaminated when ground using flourmills also used for ore grinding. Elevated concentrations of mercury from 0.01 to 0.45 ppm (Table 1; see also Supplemental Material, Figure S3F) found in 10 of 16 processed and 8 of 23 raw foodstuff samples also indicate processing-related contamination, possibly from airborne mercury, use of flourmills for both food and ore regrinding, and foodstuff storage in cooking pots used for amalgamation.", "Our results document that ore deposit geology and mechanized ore grinding were fundamental causes of this unusual lead poisoning outbreak linked to artisanal gold mining. Not only can the vein gold ores be relatively lead rich, but much of the lead occurs in minerals with enhanced gastric bioaccessibility caused by natural weathering of the ores over millennia before mining. Weathering transformed minimally gastric-bioaccessible primary lead sulfides into abundant, highly gastric-bioaccessible secondary lead carbonates and moderately gastric-bioaccessible lead phosphates (Casteel et al. 2006). Mechanized ore grinding greatly increased both the volumes of ore that could be processed and the amounts of lead-rich particles having optimal size for dispersion as dusts and particle uptake by hand–mouth transmission or inhalation. 
By creating many particles < 10–15 µm in size, grinding also greatly enhanced the surface area per mass of ingested particles, thereby enhancing dissolution rates [see references in Plumlee and Morman (2011)].\nLead exposure pathways. Data are lacking to do a Zamfara-specific integrated exposure uptake biokinetic model for lead in children (U.S. EPA 2002b) because the model uses a series of U.S.-centric assumptions on dietary intake, living in houses with nonsoil floors, and other factors. However, our results can be used to help infer relative importance of various lead exposure pathways.\nFigure 4 shows results of calculations estimating plausible ranges in daily lead uptake from inadvertent ingestion of the different processed ore, soil, and sweep samples we analyzed.\nCalculated daily lead uptake assuming exposures to processed ores, soils, and sweep samples from Zamfara. For each sample, measured gastric bioaccessibility of lead (from Figure 3) was translated into gastric bioavailability using equations in Drexler and Brattin (2007) [see Supplemental Material, Lead Uptake Calculations (http://dx.doi.org/10.1289/ehp.1206051)]. The gastric bioavailability was then translated into daily uptake amount using soil consumption rates for young children from the literature (U.S. EPA 2011b). Brown bars assume 10-mg/day soil consumption (unrealistically clean conditions), and yellow bars assume 500-mg/day soil consumption (very dusty but plausible conditions). Bar pairs show results for the corresponding samples in Figure 3, except that bar pairs labeled as sample duplicates are averages of sample duplicate analyses. Horizontal red lines show WHO (2011) dietary exposure levels for 12-kg child (3.6 µg/day) and 16-kg child (4.8 µg/day) known to adversely affect health, and FDA PTTILs (FDA 1993) for pregnant or lactating women (25 µg/day) and adults (75 µg/day). Although called “intake levels,” PTTILs are in effect uptake levels because they were derived assuming 48% absorption.\nWe calculated lead uptake levels using soil consumption rates, our bioaccessibility results (Figure 3), and the method described by Drexler and Brattin (2007) to convert bioaccessible lead into bioavailable lead for uptake modeling. We used published soil consumption rates (e.g., U.S. EPA 2011b) of 10 and 500 mg/day—a range for children under clean (unlikely for the villages) to extremely dusty (more plausible) conditions. See Supplemental Material, Lead Uptake Calculations for details (http://dx.doi.org/10.1289/ehp.1206051).\nThe results (Figure 4) suggest that inadvertent ingestion could plausibly result in lead uptake as high as several tens to several thousands of micrograms per day, depending upon time spent by exposed persons in contaminated eating areas or ore processing areas. These lead uptake levels can vastly exceed the dietary lead exposure levels (0.3 µg/kg body weight/day) that WHO (2011b) recognizes as causing adverse health impacts in young children. They can also substantially exceed U.S. Food and Drug Administration (FDA) provisional total tolerable intake levels (PTTILs) (FDA 1993) for pregnant or lactating women (25 µg/day) and adults (75 µg/day). Potentially significant lead uptake could even occur from less heavily contaminated soils with total lead concentrations below the U.S. EPA 400-ppm RSSL. 
This is demonstrated by the village outskirt soil sample having 120 ppm total lead with 66% gastric bioaccessibility, which under plausibly high soil consumption rates could cause problematic lead uptake (Figures 3 and 4).\nAdditional lead uptake not accounted for by ingestion via hand–mouth transmission would also occur via ingestion of inhaled lead particles that are cleared by mucociliary action from the respiratory tract and swallowed (Plumlee and Morman 2011).\nDooyema et al. (2012) and UNEP/OCHA (2010) found evidence of processing-related contamination in samples of potable well waters and surface waters from the villages studied, with many having lead concentrations above the WHO (2011a) guideline of 10 µg/L. Most contaminated well water samples had 10–20 µg/L dissolved lead, several had up to several hundred micrograms per liter dissolved lead, and two (Dooyema et al. 2012) had total lead levels of 520 and 1,500 µg/L. Low levels of water-soluble lead found in ores and soils by our water leach tests suggest the highest lead concentrations in water likely resulted from suspended particles such as lead carbonates. Three-year-olds drinking 1.3 L of water a day (U.S. EPA 2011b) from most sampled wells could consume from 10 to several hundred micrograms of lead per day, with locally higher consumption rates of up to 2,000 µg/day for water from the most contaminated wells. Our results substantiate UNEP/OCHA (2010) conclusions that consumption of lead-contaminated water, although substantial, is a subordinate exposure pathway to incidental ingestion of lead-contaminated soils or dusts.\nConsumption of plant foodstuffs contaminated by lead particles from the processing is plausibly a lesser but still measureable contribution to total lead uptake. Other exposure pathways that still need evaluation include consumption of garden vegetables grown in contaminated soils; consumption of milk or meat from cows, goats, and chickens that forage in contaminated areas; consumption of breast milk from mothers exposed to contamination; and exposures to particles abraded from adobe bricks made with lead-contaminated wastes.\nAdditional health concerns. Deaths of and adverse health impacts on children < 5 years old led responding organizations to focus on preventing child death and illness from lead poisoning. However, our results indicate that older children and adults who process ores, pregnant women and their unborn children, and breastfed infants are also at risk for lead poisoning.\nThe potential environmental and health effects of mercury contamination from amalgamation processing should be further assessed in Zamfara, including environmental conversion of inorganic mercury to more toxic methylmercury; dermal mercury exposures during amalgamation; mercury vapor inhalation during amalgam smelting; and mercury uptake from contaminated food.\nElevated blood manganese levels may also be a health concern. Results indicate that manganese uptake from incidental ingestion of soils and dusts contaminated by ore processing is a plausible exposure route. Uptake of bioaccessible manganese, mercury, arsenic, and antimony from inhaled dusts in the sinuses, upper respiratory tract, and lungs is also plausible and could be evaluated with IVBAs using lung fluid simulants (Plumlee and Morman 2011). 
Contamination of local wetlands, ponds, and rivers by dusts and sluicing wastes could be a pathway for all toxicants into the aquatic food chain.\nBecause the ores are dominated by crystalline silica, silicosis and related diseases (e.g., silicotuberculosis) could become long-term health problems in ore processors who do not use appropriate respiratory protection or dust control measures (National Institute for Occupational Safety and Health 2002). Children and other bystanders to the processing may also be at risk.\nNascent research indicates that multiple-toxicant exposures can either exacerbate or counteract health effects of individual toxicants [Agency for Toxic Substances and Disease Registry (ATSDR) 2004]. No toxicological profile exists for the mix of all toxicants identified in this study. However, synergistic toxicological effects on neurodevelopment in early childhood have been found following lead and manganese coexposures (Henn et al. 2012). Other synergistic interactions such as lead–arsenic and lead–methylmercury have also been noted (ATSDR 2004).\nAiding the crisis response in Zamfara. MSF, Blacksmith Institute, TG, CDC, UNICEF, and Nigerian government agencies have implemented advocacy, education, remediation, and risk mitigation strategies in Dareta, Yargalma, and five other villages (MSF 2012; von Lindern et al. 2011). These include working with the local emirate to move gold processing out of family compounds and away from village centers; removing the top several centimeters of contaminated soil and replacing with clean soil; providing chelation therapy for > 2,000 severely affected children in remediated villages; and educating villagers on safe ore processing practices. Because of logistical challenges faced by field teams, the number of contaminated villages requiring remediation may be even greater than indicated by the 2010 screening survey of 74 Zamfara villages (Lo et al. 2012). Insights from our study help inform and refine these efforts.\nA systematic geological assessment of gold mines throughout the region is needed to screen lead-poor deposits from lead-rich deposits (Figure 2A) and identify deposits with abundant lead carbonates. Artisanal mining and processing could ideally focus on lead-poor ores. However, economic considerations will likely drive processing of all gold-bearing ores regardless of lead content. Hence, methods are needed to identify ores that require mitigation of lead contamination and exposures during mining and processing.\nUnfortunately, lead carbonates and lead oxides are not readily identifiable by eye. A chemical spot detection test (Esswein and Ashley 2003) successfully identifies lead-rich samples from the area [see Supplemental Material, Figure S4 (http://dx.doi.org/10.1289/ehp.1206051)], and could help workers identify lead-rich ores that lack visually distinctive lead sulfides.\nLack of laboratory facilities and need for rapid decisions in remote areas make handheld field XRF an essential field screening tool. It has helped identify dozens of Zamfara villages with lead contamination (Lo et al. 2012), and is key to guide remediation decisions and assess remediation effectiveness (von Lindern et al. 2011). Based on prior experience, TG field crews knew that field XRF underestimates lead concentrations (von Lindern I, unpublished data), and factored this into assessment or remediation decisions made based on field XRF results. 
Our results comparing field XRF and ICP-MS values for lead across wide concentration ranges will help users better understand the accuracy of XRF when making field decisions. Probable mercury loss from samples following sampling indicates that field XRF is the best way to assess mercury contamination.\nThe elevated levels of highly bioaccessible lead found in village outskirt soils compared to those in soils not affected by ore processing (Figures 2B, 3) indicate that XRF testing for contamination should be extended to > 100 m outside villages. Less heavily contaminated soils with lead concentrations < 400 ppm may result in problematic lead uptake under dusty conditions.\nContinued education of villagers and workers is needed to help ensure that soils contaminated by processing wastes are not used to make adobe bricks; mortars/pestles, flourmills, and cooking pots are not used for ore processing, food processing, and cooking; contaminated ore storage sacks are not reused for food storage or bedding; and stored foods are protected from processing-related contamination. Education on removal of particulate lead from potable well waters by allowing suspended sediments to settle before consumption should help lessen lead intake via water consumption.\nRelief organizations have suggested alternative gold extraction methods to reduce lead and mercury contamination, including wet processing to minimize dust generation, retorts to reduce mercury vapor emission during amalgam smelting, and cyanide-based chemical extraction. These alternatives have benefits but could inadvertently cause new waste disposal issues, contamination sources, and exposure pathways. For example, cyanide extraction requires ore breaking (with accompanying dust generation), and if done improperly could contaminate local waters with dissolved cyanide, lead, and arsenic. Because sulfides in the ores reduce cyanide extraction efficiency, workers may resort to ore roasting pretreatment, which would cause widespread contamination by deleterious sulfur dioxide gas and airborne roaster particulates with highly bioaccessible lead (Plumlee and Morman 2011).\nGlobal health implications. Price increases in gold and other metals have caused artisanal mining to burgeon globally, increasing the potential for lead poisoning outbreaks beyond Nigeria. For example, tens of thousands of people have been affected by lead poisoning at Kabwe, Zambia, which resulted from artisanal re-mining of and exposures to wastes from historical lead–zinc mining and smelting (Branan 2008). By understanding ore deposit geology and climate controls on pre-mining ore weathering (Plumlee and Morman 2011), geologists can help identify other artisanal mining areas that may be at higher risk for lead poisoning and need medical surveillance.\nOf highest risk are lead-bearing gold deposits and lead–zinc deposits that either contain abundant carbonate minerals (as at Kabwe) or are located in dry climates where surface and ground waters are relatively alkaline (as in Zamfara). In these situations, highly bioaccessible secondary lead carbonates are likely to be abundant. In contrast, some other gold deposit types mined artisanally are lead poor and pose low lead poisoning risk. However, they may contain high levels of arsenic or other toxicants that are of potential health concern (e.g., Ashanti gold belt, Ghana) (Hilson 2002). 
Artisanal re-mining in historical mining camps with prior uncontrolled smelting or roasting of lead-bearing ores (e.g., Kabwe) will have high bioaccessible lead and high lead poisoning risk regardless of deposit type (Plumlee and Morman, 2011).", "The results of the present study support the conclusion that the fatal lead poisoning outbreak in northern Nigeria resulted from contamination of soils, living areas, water supplies, and foodstuffs by the processing of weathered, lead-rich gold ores containing abundant, highly gastric-bioaccessible secondary lead carbonate minerals. The dominant exposure pathway is incidental ingestion of lead-rich soil and dust particles by hand–mouth transmission and of inhaled dust particles cleared from the respiratory tract. Lesser but still significant pathways (each of which alone would be problematic) include consumption of water and foodstuffs contaminated by the processing. Although acute lead poisoning of young children has been the most immediate and severe consequence, older children, adult workers, pregnant women and their unborn children, and breastfeeding infants are also at risk. Other contaminants (manganese, arsenic, antimony, crystalline silica) may pose additional health threats. Lead poisoning may occur elsewhere in the world from artisanal mining in geologically and climatically favorable areas.\nThis study underscores the value of collaborative interdisciplinary studies involving health, geological, and engineering scientists. This scientific input will aid development of evidence-based policies on artisanal resource extraction that greatly reduce environmental contamination and adverse health impacts.", "Click here for additional data file." ]
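The daily lead uptake estimates discussed above combine an assumed soil ingestion rate, a measured total lead concentration, and a bioavailability factor derived from the IVBA results, and then compare the result with the WHO and FDA reference values quoted in the text. The sketch below reproduces that arithmetic with hypothetical inputs; the IVBA-to-bioavailability conversion and the absorbed-fraction value are illustrative placeholders, not the exact Drexler and Brattin (2007) regression used by the authors.

```python
# Sketch of the daily lead uptake arithmetic described above.
# All inputs are hypothetical. The IVBA -> relative bioavailability conversion
# below is a placeholder linear relationship, not the published Drexler and
# Brattin (2007) regression, and the absorbed-fraction value is illustrative.

def daily_lead_uptake_ug(total_lead_ppm: float,
                         soil_ingestion_mg_per_day: float,
                         gastric_bioaccessible_fraction: float) -> float:
    # 1 ppm = 1 mg lead / kg soil = 0.001 ug lead / mg soil
    ingested_ug = total_lead_ppm * 0.001 * soil_ingestion_mg_per_day
    # Placeholder IVBA -> relative bioavailability conversion (assumed ~linear).
    relative_bioavailability = 0.9 * gastric_bioaccessible_fraction
    # Illustrative absolute absorption assumption for a young child.
    absorbed_fraction = 0.5 * relative_bioavailability
    return ingested_ug * absorbed_fraction

# Reference values quoted in the text (ug/day); the FDA PTTIL for pregnant or
# lactating women quoted above is 25 ug/day.
WHO_12KG_CHILD, WHO_16KG_CHILD, FDA_PTTIL_ADULT = 3.6, 4.8, 75

for scenario, (ppm, mg_per_day) in {
    "village outskirt soil, dusty day": (120, 500),
    "compound sweep sample, dusty day": (50_000, 500),
}.items():
    uptake = daily_lead_uptake_ug(ppm, mg_per_day, gastric_bioaccessible_fraction=0.6)
    print(f"{scenario}: ~{uptake:,.0f} ug/day "
          f"(WHO child reference: {WHO_12KG_CHILD}-{WHO_16KG_CHILD} ug/day; "
          f"FDA adult PTTIL: {FDA_PTTIL_ADULT} ug/day)")
```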
[ "methods", "results", "discussion", "conclusions", null ]
[ "artisanal mining", "environmental health", "lead poisoning", "mercury contamination", "ore deposit geology" ]
Methods: Methods by which the different sample types were collected in Zamfara are described in Supplemental Material, Field Sampling Methods (http://dx.doi.org/10.1289/ehp.1206051). We analyzed the samples at USGS laboratories in Denver, Colorado (USA), incorporating appropriate quality assurance/quality control analyses of standard reference materials, duplicate sample splits, analytical duplicates, and blanks. See Supplemental Material, Table S1, for analytical method details and references (http://dx.doi.org/10.1289/ehp.1206051). Nearly 200 spot chemical analyses were performed in the laboratory by handheld XRF on > 50 raw ore samples, to assess natural chemical heterogeneities within and between the samples. Representative splits of all processed ores, soils, and sweep samples were analyzed for multiple parameters. Quantitative particle size distribution of samples sieved to < 2 mm was measured by laser diffraction. Powder X-ray diffraction was used to qualitatively identify relative weight proportions of specific minerals present above the detection limit of approximately 2 weight %. Total chemical concentrations of 42 elements were analyzed using inductively coupled plasma–mass spectrometry (ICP-MS). Total mercury was analyzed using continuous flow–cold vapor–atomic fluorescence spectroscopy (CVAFS). Scanning electron microscopy (SEM) was performed on a subset of raw ores, processed ores, soils, sweep samples, and grain samples to determine individual particle mineralogy, chemistry, size, and shape. Deionized water extractions were performed on a subset of processed ores, soils, and sweep samples to model constituent release into surface waters (Hageman 2007). In vitro bioaccessibility assessments (IVBA) were performed on a subset of processed ores, soils, and sweep samples to model toxicant bioaccessibility and bioavailability along ingestion exposure pathways (Drexler and Brattin 2007; Morman et al. 2009). Bioaccessibility measures the amount of a toxicant that is dissolved in the body’s fluids and is available for uptake into the body’s circulatory system, whereas bioavailability measures the amount of a toxicant that is absorbed by the body and transported to a site of toxic action [see references in Plumlee and Morman (2011)]. The IVBA we used leaches samples with simulated gastric fluid for 1 hr at 37°C [see Supplemental Material, Table S1 (http://dx.doi.org/10.1289/ehp.1206051)], and is based on the Drexler and Brattin (2007) method, validated for lead against juvenile swine uptake. The juvenile swine uptake model is a proxy for relative lead bioavailability in humans that integrates both lead dissolution in the stomach acids and uptake via the intestines (Casteel et al. 2006). This IVBA has not been validated against swine uptake for other toxicants such as arsenic, mercury, and manganese, but nonetheless provides useful insights into their potential gastric bioaccessibility (Plumlee and Morman, 2011). Plant foodstuff samples were analyzed for 40 elements by inductively coupled plasma–atomic emission spectroscopy (ICP-AES) and mercury by CVAFS. Analytical methods used for total and leachate chemical analyses provided concentration data for many potential elemental toxicants in addition to lead and mercury, such as arsenic, antimony, manganese, iron, aluminum, cadmium, copper, zinc, and nickel. 
Results: Unweathered (primary) vein ores were dominated by quartz (crystalline silica), with variable amounts of galena (lead sulfide) and minor amounts of pyrite (iron sulfide), chalcopyrite (copper–iron sulfide), and arsenopyrite (iron–arsenic sulfide) [see Supplemental Material, Figure S1D–G, Table S2 (http://dx.doi.org/10.1289/ehp.1206051)]. Natural weathering and oxidation of the vein ores over millennia before mining partially converted primary sulfide minerals into complex secondary mineral assemblages with abundant lead carbonates and lead phosphates (see Supplemental Material, Figure S1D–G, Table S2). Dareta and Yargalma sweep and soil samples contained broken particles of the same complex suite of primary and secondary lead minerals as unprocessed and ground vein ores [see Supplemental Material, Table S2 (http://dx.doi.org/10.1289/ehp.1206051)]. This mineralogical fingerprint confirmed ore processing as the source for contamination. Quantitative particle size analysis and visual estimation by SEM element mapping (Figure 1A) show that > 90% of lead-rich particles in the ground ores, soil samples, and sweep samples were < 250 µm in diameter, regarded as a maximum size for incidental ingestion by hand–mouth transmission (Drexler and Brattin 2007). Visual estimation using SEM element mapping shows that > 50% of the lead-rich particles were also < 10–15 µm (Figure 1A), and could therefore be inhaled into at least the upper respiratory tract, where many would likely be trapped and cleared by mucociliary action. (A) Backscatter electron (BSE) scanning electron microscope (SEM) images of Nigeria ground ore (upper), eating area sweep (middle), and soil (lower) samples with overlain element maps for lead (in red). In all images, the brighter gray indicates higher mean atomic number. Bar = 250 µm. (B) BSE field emission SEM image of a cluster of plant fibers and mineral particles found in a grain sample (ground by a flour mill in Zamfara) having 3 ppm total lead. Elongated plant fibers are light to dark blue. Bright orange particles are lead carbonates, lead oxides, and lead phosphates. Pale orange/blue particles containing iron, chromium, and nickel are steel particles abraded from flourmill grinding plates. The cluster formed during grinding, with the fiber bundle attracting and trapping the mineral and metal fragments. Bar = 100 µm. Laboratory handheld XRF spot analyses of raw ore samples collected from 18 villages indicated that the ores being processed varied considerably in their lead content within samples, and between different villages and mine sources (Figure 2A). ICP-MS analyses measured up to 180,000 ppm lead in processed ore samples from Dareta and Yargalma (Table 1, Figure 2B). (A) Total lead (Pb) concentrations (measured in the laboratory using handheld XRF) in raw ore samples collected from different Zamfara villages. Multiple spot analyses were made on multiple ore samples from each village to account for substantial mineralogical heterogeneities within samples. (B) Total lead concentrations in processed ores, soils, and sweep samples from Dareta and Yargalma, as measured by ICP-MS. Red line indicates U.S. EPA (2011a) RSSL for lead (400 ppm). Summary of USGS laboratory analytical results for total chemical composition. ICP-MS analyses found that all Dareta and Yargalma soil samples and most sweep samples had extreme lead concentrations (up to 185,000 ppm), far above the U.S. EPA (2011a) residential soil screening level (RSSL) of 400 ppm (Table 1, Figure 2). 
In contrast, lead concentrations in background soils from five villages without gold processing were < 25 ppm. Composite soil samples collected on the outskirts of Dareta and Yargalma (~ 100 m from the edge of each village) had elevated lead concentrations (122 and 293 ppm, respectively) (Table 1), indicating that processing-related contamination extended beyond village limits. Soils used to make adobe bricks (from ore washing areas) had lead levels as high as 58,900 ppm. Total lead concentrations measured with ICP-MS in soil and sweep samples with total lead > approximately 400 ppm were generally twice the concentrations measured on the same samples in the field by CDC/TG using handheld XRF [see Supplemental Material, Figure S2A (http://dx.doi.org/10.1289/ehp.1206051)]. TG has found such field underestimation to be common (von Lindern I, unpublished data), possibly resulting from sample compositing/sieving effects, summer heat impacts on the instruments, and/or lack of field XRF calibration standards having extreme lead concentrations. For the few samples analyzed with lead < approximately 400 ppm, lab ICP-MS results were variously greater than, close to, or less than the field XRF results (see Supplemental Material, Figure S2A inset). Extreme total mercury concentrations measured in soil and sweep samples (up to 4,600 ppm measured in the field by XRF; up to 68.1 ppm by laboratory CVAFS) were higher than levels measured in raw and ground ores, and were well above the U.S. EPA elemental mercury RSSL of 10 ppm [Table 1; see also Supplemental Material, Figure S3A (http://dx.doi.org/10.1289/ehp.1206051)]. Hence, mercury contamination resulted predominantly from the amalgamation processing. Substantially greater concentrations of mercury were measured in soil and sweep samples by field XRF compared with those measured by CVAFS for the same samples (see Supplemental Material, Figure S2B), indicating that mercury was volatilized from the samples after sample collection. Manganese concentrations in processed ores, background soils, village soils, and sweep samples (up to 1,320 ppm) commonly exceeded the U.S. EPA RSSL of 390 ppm (U.S. EPA 2011a), and were generally higher than concentrations measured in soils from villages without ore processing [Table 1; see also Supplemental Material, Figure S3B (http://dx.doi.org/10.1289/ehp.1206051)]. Concentrations of arsenic (up to 270 ppm) and antimony (up to 1,250 ppm) in some soil and sweep samples greatly exceeded U.S. EPA RSSLs of 0.39 (U.S. EPA 2011a) and 31 ppm (U.S. EPA 2002a), respectively (Table 1; see also Supplemental Material, Figure S3C–D). Concentrations of other potential environmental or human toxicants such as cadmium, zinc, copper, and nickel were well below U.S. EPA RSSLs. Deionized water leach tests on soil and sweep samples produced moderately alkaline leachates with pH from 7.7 to 9.1. Metal toxicants were not appreciably water soluble, with a maximum of 0.018% of total lead, 0.7% of total mercury, 2.9% of total manganese, < 0.2% of total arsenic, and 0.03% of total antimony being solubilized by water leaching of any given sample (Table 2). Ranges in percentage of water-leachable elemental toxicants measured in different sample types. Lead was generally highly gastric bioaccessible, with 39–66% of the total lead solubilized in an hour from 9 of 12 samples analyzed (Table 3, Figure 3). The highest percent bioaccessibility was measured in a less heavily contaminated village outskirt soil. 
Manganese was also generally quite gastric bioaccessible, with 6–43% of the total manganese solubilized. However, mercury (< 0.9% of total), arsenic (< 2.1% of total), and antimony (< 1.4% of the total) were not appreciably gastric bioaccessible (Table 3). Ranges in percentage of gastric-bioaccessible elemental toxicants measured in different sample types. Total lead (Pb) concentrations (ppm mass basis; mg lead/kg solid) and simulated gastric fluid–leachable lead concentrations [ppm mass basis; calculated as (mg lead/kg leachate) × (100 kg leachate/1 kg solid)] in Zamfara samples. Each bar pair represents results for a single sample. Percentage of bioaccessible lead is listed above the paired bars for each sample, and was calculated by dividing the SGF-leachable concentration by the total concentration for the sample, and then multiplying by 100. Horizontal red line indicates U.S. EPA (2011a) RSSL for lead (400 ppm). Chemical analyses of 39 rice, corn, spice, and medicinal herb samples found that all 16 processed samples and 19 of 23 raw samples were lead-contaminated (from 0.1 to 146 ppm) compared with plant standard reference materials [Table 1; see also Supplemental Material, Figure S3E (http://dx.doi.org/10.1289/ehp.1206051)]. The same suite of lead carbonates and other secondary lead minerals was found in the plant foodstuffs as in the ores, soils, and sweep samples (Figure 1B). This mineralogical fingerprint confirms that stored foodstuffs were contaminated by ore-processing dusts and that grains were contaminated when ground using flourmills also used for ore grinding. Elevated concentrations of mercury from 0.01 to 0.45 ppm (Table 1; see also Supplemental Material, Figure S3F) found in 10 of 16 processed and 8 of 23 raw foodstuff samples also indicate processing-related contamination, possibly from airborne mercury, use of flourmills for both food and ore regrinding, and foodstuff storage in cooking pots used for amalgamation. Discussion: Our results document that ore deposit geology and mechanized ore grinding were fundamental causes of this unusual lead poisoning outbreak linked to artisanal gold mining. Not only can the vein gold ores be relatively lead rich, but much of the lead occurs in minerals with enhanced gastric bioaccessibility caused by natural weathering of the ores over millennia before mining. Weathering transformed minimally gastric-bioaccessible primary lead sulfides into abundant, highly gastric-bioaccessible secondary lead carbonates and moderately gastric-bioaccessible lead phosphates (Casteel et al. 2006). Mechanized ore grinding greatly increased both the volumes of ore that could be processed and the amounts of lead-rich particles having optimal size for dispersion as dusts and particle uptake by hand–mouth transmission or inhalation. By creating many particles < 10–15 µm in size, grinding also greatly enhanced the surface area per mass of ingested particles, thereby enhancing dissolution rates [see references in Plumlee and Morman (2011)]. Lead exposure pathways. Data are lacking to do a Zamfara-specific integrated exposure uptake biokinetic model for lead in children (U.S. EPA 2002b) because the model uses a series of U.S.-centric assumptions on dietary intake, living in houses with nonsoil floors, and other factors. However, our results can be used to help infer relative importance of various lead exposure pathways. 
Figure 4 shows results of calculations estimating plausible ranges in daily lead uptake from inadvertent ingestion of the different processed ore, soil, and sweep samples we analyzed. Calculated daily lead uptake assuming exposures to processed ores, soils, and sweep samples from Zamfara. For each sample, measured gastric bioaccessibility of lead (from Figure 3) was translated into gastric bioavailability using equations in Drexler and Brattin (2007) [see Supplemental Material, Lead Uptake Calculations (http://dx.doi.org/10.1289/ehp.1206051)]. The gastric bioavailability was then translated into daily uptake amount using soil consumption rates for young children from the literature (U.S. EPA 2011b). Brown bars assume 10-mg/day soil consumption (unrealistically clean conditions), and yellow bars assume 500-mg/day soil consumption (very dusty but plausible conditions). Bar pairs show results for the corresponding samples in Figure 3, except that bar pairs labeled as sample duplicates are averages of sample duplicate analyses. Horizontal red lines show WHO (2011) dietary exposure levels for 12-kg child (3.6 µg/day) and 16-kg child (4.8 µg/day) known to adversely affect health, and FDA PTTILs (FDA 1993) for pregnant or lactating women (25 µg/day) and adults (75 µg/day). Although called “intake levels,” PTTILs are in effect uptake levels because they were derived assuming 48% absorption. We calculated lead uptake levels using soil consumption rates, our bioaccessibility results (Figure 3), and the method described by Drexler and Brattin (2007) to convert bioaccessible lead into bioavailable lead for uptake modeling. We used published soil consumption rates (e.g., U.S. EPA 2011b) of 10 and 500 mg/day—a range for children under clean (unlikely for the villages) to extremely dusty (more plausible) conditions. See Supplemental Material, Lead Uptake Calculations for details (http://dx.doi.org/10.1289/ehp.1206051). The results (Figure 4) suggest that inadvertent ingestion could plausibly result in lead uptake as high as several tens to several thousands of micrograms per day, depending upon time spent by exposed persons in contaminated eating areas or ore processing areas. These lead uptake levels can vastly exceed the dietary lead exposure levels (0.3 µg/kg body weight/day) that WHO (2011b) recognizes as causing adverse health impacts in young children. They can also substantially exceed U.S. Food and Drug Administration (FDA) provisional total tolerable intake levels (PTTILs) (FDA 1993) for pregnant or lactating women (25 µg/day) and adults (75 µg/day). Potentially significant lead uptake could even occur from less heavily contaminated soils with total lead concentrations below the U.S. EPA 400-ppm RSSL. This is demonstrated by the village outskirt soil sample having 120 ppm total lead with 66% gastric bioaccessibility, which under plausibly high soil consumption rates could cause problematic lead uptake (Figures 3 and 4). Additional lead uptake not accounted for by ingestion via hand–mouth transmission would also occur via ingestion of inhaled lead particles that are cleared by mucociliary action from the respiratory tract and swallowed (Plumlee and Morman 2011). Dooyema et al. (2012) and UNEP/OCHA (2010) found evidence of processing-related contamination in samples of potable well waters and surface waters from the villages studied, with many having lead concentrations above the WHO (2011a) guideline of 10 µg/L. 
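The Figure 4 estimates chain together a sample's total lead concentration, its measured gastric bioaccessibility, a bioaccessibility-to-bioavailability conversion, and an assumed daily soil/dust ingestion rate. The minimal Python sketch below reproduces that chain under stated assumptions: the conversion is left as a caller-supplied function (the study used the Drexler and Brattin (2007) equations, which are not reproduced here, so the default identity conversion is only a placeholder), and the 120 ppm / 66% example values are the village outskirt soil cited in the text; the printed numbers are therefore illustrative rather than the paper's Figure 4 values.

```python
def daily_lead_uptake_ug(total_pb_ppm, bioaccessible_fraction, soil_ingestion_mg_per_day,
                         bioaccessible_to_bioavailable=lambda f: f):
    """
    Estimate daily lead uptake (µg/day) from incidental soil/dust ingestion.

    total_pb_ppm              -- total lead in the solid (mg Pb per kg solid, i.e., µg per g)
    bioaccessible_fraction    -- IVBA result: fraction of total lead dissolved in simulated gastric fluid
    soil_ingestion_mg_per_day -- assumed soil/dust ingestion rate (mg of solid per day)
    bioaccessible_to_bioavailable -- conversion from in vitro bioaccessibility to a bioavailable
                                     fraction; the identity default is a placeholder, NOT the
                                     Drexler and Brattin (2007) regression used in the study
    """
    ingested_pb_ug = total_pb_ppm * soil_ingestion_mg_per_day / 1000.0  # µg Pb swallowed per day
    return ingested_pb_ug * bioaccessible_to_bioavailable(bioaccessible_fraction)

# Illustrative run for the 120 ppm, 66%-bioaccessible outskirt soil mentioned in the text,
# bracketed by the 10 and 500 mg/day soil ingestion rates cited from U.S. EPA (2011b).
for rate_mg in (10, 500):
    uptake = daily_lead_uptake_ug(120, 0.66, rate_mg)
    print(f"{rate_mg:>3} mg soil/day -> about {uptake:.1f} µg lead/day")
```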
Most contaminated well water samples had 10–20 µg/L dissolved lead, several had up to several hundred micrograms per liter dissolved lead, and two (Dooyema et al. 2012) had total lead levels of 520 and 1,500 µg/L. Low levels of water-soluble lead found in ores and soils by our water leach tests suggest the highest lead concentrations in water likely resulted from suspended particles such as lead carbonates. Three-year-olds drinking 1.3 L of water a day (U.S. EPA 2011b) from most sampled wells could consume from 10 to several hundred micrograms of lead per day, with locally higher consumption rates of up to 2,000 µg/day for water from the most contaminated wells. Our results substantiate UNEP/OCHA (2010) conclusions that consumption of lead-contaminated water, although substantial, is a subordinate exposure pathway to incidental ingestion of lead-contaminated soils or dusts. Consumption of plant foodstuffs contaminated by lead particles from the processing is plausibly a lesser but still measurable contribution to total lead uptake. Other exposure pathways that still need evaluation include consumption of garden vegetables grown in contaminated soils; consumption of milk or meat from cows, goats, and chickens that forage in contaminated areas; consumption of breast milk from mothers exposed to contamination; and exposures to particles abraded from adobe bricks made with lead-contaminated wastes. Additional health concerns. Deaths of and adverse health impacts on children < 5 years old led responding organizations to focus on preventing child death and illness from lead poisoning. However, our results indicate that older children and adults who process ores, pregnant women and their unborn children, and breastfed infants are also at risk for lead poisoning. The potential environmental and health effects of mercury contamination from amalgamation processing should be further assessed in Zamfara, including environmental conversion of inorganic mercury to more toxic methylmercury; dermal mercury exposures during amalgamation; mercury vapor inhalation during amalgam smelting; and mercury uptake from contaminated food. Elevated blood manganese levels may also be a health concern. Results indicate that manganese uptake from incidental ingestion of soils and dusts contaminated by ore processing is a plausible exposure route. Uptake of bioaccessible manganese, mercury, arsenic, and antimony from inhaled dusts in the sinuses, upper respiratory tract, and lungs is also plausible and could be evaluated with IVBAs using lung fluid simulants (Plumlee and Morman 2011). Contamination of local wetlands, ponds, and rivers by dusts and sluicing wastes could be a pathway for all toxicants into the aquatic food chain. Because the ores are dominated by crystalline silica, silicosis and related diseases (e.g., silicotuberculosis) could become long-term health problems in ore processors who do not use appropriate respiratory protection or dust control measures (National Institute for Occupational Safety and Health 2002). Children and other bystanders to the processing may also be at risk. Nascent research indicates that multiple-toxicant exposures can either exacerbate or counteract health effects of individual toxicants [Agency for Toxic Substances and Disease Registry (ATSDR) 2004]. No toxicological profile exists for the mix of all toxicants identified in this study. 
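The drinking-water numbers above follow from multiplying a well-water lead concentration by the 1.3 L/day child consumption rate; for the most contaminated well (1,500 µg/L total lead) this gives roughly 2,000 µg/day. A short check in Python, using the concentrations quoted in the text:

```python
DAILY_WATER_L = 1.3  # drinking-water rate for a 3-year-old used in the text (U.S. EPA 2011b)

for conc_ug_per_l in (10, 20, 520, 1500):  # well-water lead concentrations cited above
    print(f"{conc_ug_per_l:>5} µg/L  ->  about {conc_ug_per_l * DAILY_WATER_L:,.0f} µg lead/day")
```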
However, synergistic toxicological effects on neurodevelopment in early childhood have been found following lead and manganese coexposures (Henn et al. 2012). Other synergistic interactions such as lead–arsenic and lead–methylmercury have also been noted (ATSDR 2004). Aiding the crisis response in Zamfara. MSF, Blacksmith Institute, TG, CDC, UNICEF, and Nigerian government agencies have implemented advocacy, education, remediation, and risk mitigation strategies in Dareta, Yargalma, and five other villages (MSF 2012; von Lindern et al. 2011). These include working with the local emirate to move gold processing out of family compounds and away from village centers; removing the top several centimeters of contaminated soil and replacing it with clean soil; providing chelation therapy for > 2,000 severely affected children in remediated villages; and educating villagers on safe ore processing practices. Because of logistical challenges faced by field teams, the number of contaminated villages requiring remediation may be even greater than indicated by the 2010 screening survey of 74 Zamfara villages (Lo et al. 2012). Insights from our study help inform and refine these efforts. A systematic geological assessment of gold mines throughout the region is needed to screen lead-poor deposits from lead-rich deposits (Figure 2A) and identify deposits with abundant lead carbonates. Artisanal mining and processing could ideally focus on lead-poor ores. However, economic considerations will likely drive processing of all gold-bearing ores regardless of lead content. Hence, methods are needed to identify ores that require mitigation of lead contamination and exposures during mining and processing. Unfortunately, lead carbonates and lead oxides are not readily identifiable by eye. A chemical spot detection test (Esswein and Ashley 2003) successfully identifies lead-rich samples from the area [see Supplemental Material, Figure S4 (http://dx.doi.org/10.1289/ehp.1206051)], and could help workers identify lead-rich ores that lack visually distinctive lead sulfides. Lack of laboratory facilities and need for rapid decisions in remote areas make handheld field XRF an essential field screening tool. It has helped identify dozens of Zamfara villages with lead contamination (Lo et al. 2012), and is key to guide remediation decisions and assess remediation effectiveness (von Lindern et al. 2011). Based on prior experience, TG field crews knew that field XRF underestimates lead concentrations (von Lindern I, unpublished data), and factored this into assessment or remediation decisions made based on field XRF results. Our results comparing field XRF and ICP-MS values for lead across wide concentration ranges will help users better understand the accuracy of XRF when making field decisions. Probable mercury loss from samples following sampling indicates that field XRF is the best way to assess mercury contamination. The elevated levels of highly bioaccessible lead found in village outskirts soils compared to those in soils not affected by ore processing (Figures 2B, 3) indicate that XRF testing for contamination should be extended to > 100 m outside villages. Less heavily contaminated soils with lead concentrations < 400 ppm may result in problematic lead uptake under dusty conditions. 
Continued education of villagers and workers is needed to help ensure that soils contaminated by processing wastes are not used to make adobe bricks; mortars/pestles, flourmills, and cooking pots are not used for ore processing, food processing, and cooking; contaminated ore storage sacks are not reused for food storage or bedding; and stored foods are protected from processing-related contamination. Education on removal of particulate lead from potable well waters by allowing suspended sediments to settle before consumption should help lessen lead intake via water consumption. Relief organizations have suggested alternative gold extraction methods to reduce lead and mercury contamination, including wet processing to minimize dust generation, retorts to reduce mercury vapor emission during amalgam smelting, and cyanide-based chemical extraction. These alternatives have benefits but could inadvertently cause new waste disposal issues, contamination sources, and exposure pathways. For example, cyanide extraction requires ore breaking (with accompanying dust generation), and if done improperly could contaminate local waters with dissolved cyanide, lead, and arsenic. Because sulfides in the ores reduce cyanide extraction efficiency, workers may resort to ore roasting pretreatment, which would cause widespread contamination by deleterious sulfur dioxide gas and airborne roaster particulates with highly bioaccessible lead (Plumlee and Morman 2011). Global health implications. Price increases in gold and other metals have caused artisanal mining to burgeon globally, increasing the potential for lead poisoning outbreaks beyond Nigeria. For example, tens of thousands of people have been affected by lead poisoning at Kabwe, Zambia, which resulted from artisanal re-mining of and exposures to wastes from historical lead–zinc mining and smelting (Branan 2008). By understanding ore deposit geology and climate controls on pre-mining ore weathering (Plumlee and Morman 2011), geologists can help identify other artisanal mining areas that may be at higher risk for lead poisoning and need medical surveillance. Of highest risk are lead-bearing gold deposits and lead–zinc deposits that either contain abundant carbonate minerals (as at Kabwe) or are located in dry climates where surface and ground waters are relatively alkaline (as in Zamfara). In these situations, highly bioaccessible secondary lead carbonates are likely to be abundant. In contrast, some other gold deposit types mined artisanally are lead poor and pose low lead poisoning risk. However, they may contain high levels of arsenic or other toxicants that are of potential health concern (e.g., Ashanti gold belt, Ghana) (Hilson 2002). Artisanal re-mining in historical mining camps with prior uncontrolled smelting or roasting of lead-bearing ores (e.g., Kabwe) will have high bioaccessible lead and high lead poisoning risk regardless of deposit type (Plumlee and Morman, 2011). Conclusions: The results of the present study support the conclusion that the fatal lead poisoning outbreak in northern Nigeria resulted from contamination of soils, living areas, water supplies, and foodstuffs by the processing of weathered, lead-rich gold ores containing abundant, highly gastric-bioaccessible secondary lead carbonate minerals. The dominant exposure pathway is incidental ingestion of lead-rich soil and dust particles by hand–mouth transmission and of inhaled dust particles cleared from the respiratory tract. 
Lesser but still significant pathways (each of which alone would be problematic) include consumption of water and foodstuffs contaminated by the processing. Although acute lead poisoning of young children has been the most immediate and severe consequence, older children, adult workers, pregnant women and their unborn children, and breastfeeding infants are also at risk. Other contaminants (manganese, arsenic, antimony, crystalline silica) may pose additional health threats. Lead poisoning may occur elsewhere in the world from artisanal mining in geologically and climatically favorable areas. This study underscores the value of collaborative interdisciplinary studies involving health, geological, and engineering scientists. This scientific input will aid development of evidence-based policies on artisanal resource extraction that greatly reduce environmental contamination and adverse health impacts.
Background: In 2010, Médecins Sans Frontières discovered a lead poisoning outbreak linked to artisanal gold processing in northwestern Nigeria. The outbreak has killed approximately 400 young children and affected thousands more. Methods: We applied diverse analytical methods to ore samples, soil and sweep samples from villages and family compounds, and plant foodstuff samples. Results: Natural weathering of lead-rich gold ores before mining formed abundant, highly gastric-bioaccessible lead carbonates. The same fingerprint of lead minerals found in all sample types confirms that ore processing caused extreme contamination, with up to 185,000 ppm lead in soils/sweep samples and up to 145 ppm lead in plant foodstuffs. Incidental ingestion of soils via hand-to-mouth transmission and of dusts cleared from the respiratory tract is the dominant exposure pathway. Consumption of water and foodstuffs contaminated by the processing is likely lesser, but these are still significant exposure pathways. Although young children suffered the most immediate and severe consequences, results indicate that older children, adult workers, pregnant women, and breastfed infants are also at risk for lead poisoning. Mercury, arsenic, manganese, antimony, and crystalline silica exposures pose additional health threats. Conclusions: Results inform ongoing efforts in Nigeria to assess lead contamination and poisoning, treat victims, mitigate exposures, and remediate contamination. Ore deposit geology, pre-mining weathering, and burgeoning artisanal mining may combine to cause similar lead poisoning disasters elsewhere globally.
null
null
5,127
276
[ 7 ]
5
[ "lead", "samples", "ore", "ores", "total", "processing", "ppm", "uptake", "figure", "soil" ]
[ "zamfara sample measured", "ores soil samples", "analyses raw ore", "processing assessed zamfara", "solid zamfara samples" ]
null
null
null
[CONTENT] artisanal mining | environmental health | lead poisoning | mercury contamination | ore deposit geology [SUMMARY]
[CONTENT] artisanal mining | environmental health | lead poisoning | mercury contamination | ore deposit geology [SUMMARY]
[CONTENT] artisanal mining | environmental health | lead poisoning | mercury contamination | ore deposit geology [SUMMARY]
[CONTENT] artisanal mining | environmental health | lead poisoning | mercury contamination | ore deposit geology [SUMMARY]
null
null
[CONTENT] Child | Child, Preschool | Environmental Exposure | Global Health | Gold | Humans | Lead Poisoning | Metals | Mining | Nigeria | Particle Size [SUMMARY]
[CONTENT] Child | Child, Preschool | Environmental Exposure | Global Health | Gold | Humans | Lead Poisoning | Metals | Mining | Nigeria | Particle Size [SUMMARY]
[CONTENT] Child | Child, Preschool | Environmental Exposure | Global Health | Gold | Humans | Lead Poisoning | Metals | Mining | Nigeria | Particle Size [SUMMARY]
[CONTENT] Child | Child, Preschool | Environmental Exposure | Global Health | Gold | Humans | Lead Poisoning | Metals | Mining | Nigeria | Particle Size [SUMMARY]
null
null
[CONTENT] zamfara sample measured | ores soil samples | analyses raw ore | processing assessed zamfara | solid zamfara samples [SUMMARY]
[CONTENT] zamfara sample measured | ores soil samples | analyses raw ore | processing assessed zamfara | solid zamfara samples [SUMMARY]
[CONTENT] zamfara sample measured | ores soil samples | analyses raw ore | processing assessed zamfara | solid zamfara samples [SUMMARY]
[CONTENT] zamfara sample measured | ores soil samples | analyses raw ore | processing assessed zamfara | solid zamfara samples [SUMMARY]
null
null
[CONTENT] lead | samples | ore | ores | total | processing | ppm | uptake | figure | soil [SUMMARY]
[CONTENT] lead | samples | ore | ores | total | processing | ppm | uptake | figure | soil [SUMMARY]
[CONTENT] lead | samples | ore | ores | total | processing | ppm | uptake | figure | soil [SUMMARY]
[CONTENT] lead | samples | ore | ores | total | processing | ppm | uptake | figure | soil [SUMMARY]
null
null
[CONTENT] samples | uptake | performed | analyzed | ivba | subset | performed subset | swine | swine uptake | ores [SUMMARY]
[CONTENT] samples | lead | ppm | figure | total | concentrations | table | measured | epa | material figure [SUMMARY]
[CONTENT] lead | children | poisoning | health | lead poisoning | dust particles | study | artisanal | dust | particles [SUMMARY]
[CONTENT] lead | samples | additional data file | click additional data file | click additional data | click additional | click | additional data | file | data file [SUMMARY]
null
null
[CONTENT] [SUMMARY]
[CONTENT] ||| up to | 185,000 ppm | 145 ppm ||| ||| ||| ||| Mercury [SUMMARY]
[CONTENT] Nigeria ||| [SUMMARY]
[CONTENT] 2010 | Sans Frontières | Nigeria ||| approximately 400 | thousands ||| ||| ||| up to | 185,000 ppm | 145 ppm ||| ||| ||| ||| Mercury ||| Nigeria ||| [SUMMARY]
null
Additive effects of nutritional supplementation, together with bisphosphonates, on bone mineral density after hip fracture: a 12-month randomized controlled study.
25045257
After a hip fracture, a catabolic state develops, with increased bone loss during the first year. The aim of this study was to evaluate the effects of postoperative treatment with calcium, vitamin D, and bisphosphonates (alone or together) with nutritional supplementation on total hip and total body bone mineral density (BMD).
BACKGROUND
Seventy-nine patients (56 women), with a mean age of 79 years (range, 61-96 years) and with a recent hip fracture, who were ambulatory before fracture and without severe cognitive impairment, were included. Patients were randomized to treatment with bisphosphonates (risedronate 35 mg weekly) for 12 months (B; n=28), treatment with bisphosphonates along with nutritional supplementation (40 g protein, 600 kcal daily) for the first 6 months (BN; n=26), or to controls (C; n=25). All participants received calcium (1,000 mg) and vitamin D3 (800 IU) daily. Total hip and total body BMD were assessed with dual-energy X-ray absorptiometry at baseline, 6, and 12 months. Marker of bone resorption C-terminal telopeptide of collagen I and 25-hydroxy vitamin D were analyzed in serum.
METHODS
Analysis of complete cases (70/79 at 6 months and 67/79 at 12 months) showed an increase in total hip BMD of 0.7% in the BN group, whereas the B and C groups lost 1.1% and 2.4% of BMD, respectively, between baseline and 6 months (P=0.071, between groups). There was no change in total body BMD between baseline and 12 months in the BN group, whereas the B group and C group both lost BMD, with C losing more than B (P=0.009). Intention-to-treat analysis was in concordance with the complete cases analyses.
RESULTS
Protein-and energy-rich supplementation in addition to calcium, vitamin D, and bisphosphonate therapy had additive effects on total body BMD and total hip BMD among elderly hip fracture patients.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Biomarkers", "Bone Density", "Bone Density Conservation Agents", "Bone Remodeling", "Calcium, Dietary", "Dietary Supplements", "Diphosphonates", "Female", "Hip Fractures", "Humans", "Male", "Middle Aged", "Postoperative Complications", "Postoperative Period", "Treatment Outcome", "Vitamin D", "Vitamins" ]
4094579
Introduction
Inadequate intake of protein and total calories, leading to malnutrition, is common among hip fracture patients.1,2 After a hip fracture, a catabolic state develops, characterized by increased loss of bone mineral density (BMD) during the first year.3–5 Because protein is an important structural component of bone and previous studies have reported a positive association between protein intake and BMD/bone mineral content,6–8 it is tempting to hypothesize that protein and energy supplementation may slow down postoperative bone loss. One previous study showed that hip fracture patients who received postoperative supplemental protein suffered less loss of BMD in the proximal femur compared with controls at 12 months.9 Otherwise, there are few studies that have investigated the effect of nutritional supplementation on BMD after hip fractures. Bisphosphonates are the most widely used drugs for treatment of osteoporosis and have been shown to reduce the risk for hip fracture.10,11 Risedronate, which was used in the present study, has previously been shown to increase BMD of the femoral neck and femoral trochanter in osteoporotic women aged 80 years and older after 6 months of treatment.12 Risedronate has not been studied for secondary prevention of bone loss in old adults after a hip fracture. However, beneficial effects on total hip BMD after hip fracture have been demonstrated, using zoledronic acid together with calcium and vitamin D.13 The primary aim of this study was to investigate whether postoperative treatment with a combination of protein-rich formula and bisphosphonates can reduce BMD loss after hip fracture better than bisphosphonates alone. Secondary aims were to study treatment effects on the bone resorption marker C-terminal telopeptide of collagen I (serum CTX-I), serum levels of 25-hydroxy vitamin D (25OHD), and parathyroid hormone (PTH).
Intention-to-treat analysis
The primary analysis population was used to replace missing data, and the secondary analysis population was used as a sensitivity analysis and showed the following results. The percentage change in total hip BMD between baseline and 6 months was +0.9% in the BN group and −0.5% and −2.7% in the B and C groups, respectively (P=0.03). The corresponding changes between baseline and 12 months were −0.8% for the BN group and −1.7% and −2.6% for the B and C groups, respectively (P=0.279). According to the sensitivity analysis, the percentage change in total body BMD between baseline and 6 months was −0.7% for the BN group and −0.7% and −1.3% for the B and C groups, respectively (P=0.436). Between baseline and 12 months, the percentage change in total body BMD was −0.02% for the BN group and −0.9% and −1.6% for the B and C groups, respectively (P=0.030). The sensitivity analysis confirmed the trend indicated by the results, showing a more pronounced decrease in serum-CTX-I in the B and BN groups than in the C group (P=0.019). The sensitivity analysis was in line with the results for complete cases, showing no significant differences among the three groups in changes of serum-25OHD and serum-PTH at either 6- or 12-month follow-up.
Results
No significant differences in baseline characteristics were found among the three treatment groups, other than the need for walking aids (Table 1). A total of 67 (85%) of the original 79 patients presented for the final follow-up; Figure 1 shows a patient flowchart. Among patients who dropped out, average age was 87 years (standard deviation, 5; range, 78–94 years), mean value of total hip BMD was 0.650 g/cm2 (standard deviation, 0.103), and total body BMD was 0.943 g/cm2 (standard deviation, 0.147).

Treatment adherence Among the control patients, 17 of 24 patients who presented at 12 months took calcium and vitamin D as prescribed. In group B, 18 of 25 presenting patients took their daily dose of calcium and vitamin D, and 18 of 25 took the bisphosphonate as stipulated. In group BN, seven of 18 corresponding patients complied with the drugs and nutritional supplement prescription; the remaining eleven patients reported an intake of half the prescribed nutritional supplement. Three of these patients also reported a lower intake of prescribed bisphosphonate, and two took only half the prescribed daily dose of calcium and vitamin D. In total, 15 patients in the BN group took the bisphosphonate as stipulated. Three patients in the control group and two in the group B reported gastrointestinal complaints (either constipation or diarrhea). A dose reduction was made for one patient with hypercalcemia in group C.

Effects on BMD and bone turnover Seventy-nine patients were measured by DXA at inclusion, 68 patients at 6 months, and 66 at 12 months. Because of a total hip replacement on the uninjured side, nine patients could not be measured at the hip. During the first 6 months, total hip BMD increased by 0.7% in the BN group, whereas groups B and C showed losses of 1.1% and 2.4%, respectively (P by ANCOVA =0.071; Table 2; Figure 2). On average, there was no loss in total body BMD between baseline and 12 months in the BN group (Table 2). Moreover, both the B and C groups lost BMD, and this loss was greater among controls than in group B (P=0.009; Table 2; Figure 3). There was a trend for difference between groups, according to change in the bone resorption marker serum-CTX-I (P=0.055; Table 2). Within-group analysis showed a significant decrease in the serum-CTX-I marker of 33% and 36% in groups B and BN, respectively (P<0.001), whereas the smaller decrease of 12% in the C group was not significant (P=0.77).

Levels of 25OHD and parathyroid hormone During the study, there was a mean increase in serum-25-OHD of between 17 nmol/L and 20 nmol/L in all three groups. Mean serum-25OHD concentrations in groups B and BN were below normal (ie, <50 nmol/L at baseline) but had normalized by the 12-month follow-up (Tables 1 and 2). In total, 59% of the patients had a baseline concentration of serum-25OHD that was lower than 50 nmol/L. Among them, eleven had values lower than 25 nmol/L. At the 12-month follow-up, 26% of the patients still had serum-25OHD concentrations that were lower than 50 nmol/L. We identified lack of compliance in eleven of the 18 patients who failed to normalize their concentration of serum-25OHD. Mean serum-PTH remained in the normal range for all groups at inclusion and at the 6-month and 12-month follow-ups; no significant difference was found among the groups on any measurement occasion.

Intention-to-treat analysis The primary analysis population was used to replace missing data, and the secondary analysis population was used as a sensitivity analysis and showed the following results. The percentage change in total hip BMD between baseline and 6 months was +0.9% in the BN group and −0.5% and −2.7% in the B and C groups, respectively (P=0.03). The corresponding changes between baseline and 12 months were −0.8% for the BN group and −1.7% and −2.6% for the B and C groups, respectively (P=0.279). According to the sensitivity analysis, the percentage change in total body BMD between baseline and 6 months was −0.7% for the BN group and −0.7% and −1.3% for the B and C groups, respectively (P=0.436). Between baseline and 12 months, the percentage change in total body BMD was −0.02% for the BN group and −0.9% and −1.6% for the B and C groups, respectively (P=0.030). The sensitivity analysis confirmed the trend indicated by the results, showing a more pronounced decrease in serum-CTX-I in the B and BN groups than in the C group (P=0.019). The sensitivity analysis was in line with the results for complete cases, showing no significant differences among the three groups in changes of serum-25OHD and serum-PTH at either 6- or 12-month follow-up.
null
null
[ "Methods", "Study design and intervention", "Measurements", "Statistical methods", "Treatment adherence", "Effects on BMD and bone turnover", "Levels of 25OHD and parathyroid hormone" ]
[ " Patients The study included a total of 79 patients with a mean age of 79 years (standard deviation, 9; range, 61–96 years) and a history of recent hip fracture (femoral neck or trochanteric) who were admitted to any of the four university hospitals in Stockholm. Inclusion criteria were age 60 years or older, no severe cognitive impairment (Short Portable Mental Questionnaire15 [see below] score, ≥3), ambulatory before fracture, and body mass index 28 kg/m2 or lower. Exclusion criteria were pathological fractures and bisphosphonate treatment within the last year. Patients with alcohol/drug abuse or overt psychiatric disorders were excluded. Also excluded were patients with abnormal hepatic or renal laboratory parameters such as serum-alanine aminotransferase or serum-aspartate-aminotransferase twice the normal reference range or higher, respectively; serum-creatinine levels higher than 130 μmol/L or glomerular filtration rate lower than 30 mL/minute; or with bone metabolic disorders such as primary hyperparathyroidism, osteogenesis imperfecta, Paget’s disease, or myeloma. Patients with lactose intolerance, dysphagia, esophagitis, gastric ulcer, or malignancy were also excluded, as were patients with diabetes mellitus associated with nephropathy or retinopathy and patients with active iritis or uveitis.\nThe study included a total of 79 patients with a mean age of 79 years (standard deviation, 9; range, 61–96 years) and a history of recent hip fracture (femoral neck or trochanteric) who were admitted to any of the four university hospitals in Stockholm. Inclusion criteria were age 60 years or older, no severe cognitive impairment (Short Portable Mental Questionnaire15 [see below] score, ≥3), ambulatory before fracture, and body mass index 28 kg/m2 or lower. Exclusion criteria were pathological fractures and bisphosphonate treatment within the last year. Patients with alcohol/drug abuse or overt psychiatric disorders were excluded. Also excluded were patients with abnormal hepatic or renal laboratory parameters such as serum-alanine aminotransferase or serum-aspartate-aminotransferase twice the normal reference range or higher, respectively; serum-creatinine levels higher than 130 μmol/L or glomerular filtration rate lower than 30 mL/minute; or with bone metabolic disorders such as primary hyperparathyroidism, osteogenesis imperfecta, Paget’s disease, or myeloma. Patients with lactose intolerance, dysphagia, esophagitis, gastric ulcer, or malignancy were also excluded, as were patients with diabetes mellitus associated with nephropathy or retinopathy and patients with active iritis or uveitis.\n Study design and intervention Eligible patients who agreed to participate were randomized into three groups in blocks of twelve, using a sealed envelope technique, thereby ensuring equal distribution of patients in the three treatment groups at each center. Patients were followed-up for 12 months. All participants received 1,000 mg calcium and 800 IU vitamin D3 daily. The first group received 35 mg risedronate (Optinate® Septimum; Sanofi AB, Warner Chilcott, Weiterstadt, Germany) once weekly for 12 months (B; n=28). The second group received 35 mg risedronate once weekly for 12 months plus a nutritional supplement (Fresubin® protein energy drink; Fresenius Kabi, Bad Homburg, Germany) during the first 6 months after hip fracture (BN; n=26). 
The patients in the third group served as controls (C; n=25) and received calcium and vitamin D3 alone (Calcichew D3®; Takeda Pharmaceutical Company Limited, Osaka, Japan) for 12 months.\nThe supplement contained 150 kcal and 10 g protein/100 mL milk-based protein (80% casein and 20% whey). Patients were prescribed 200 mL twice daily, totaling 600 kcal with 40 g protein. Each study center was staffed by one physician and a trial nurse. The trial team was responsible for the randomization process, collection of morning blood samples 1–3 days postfracture, and ensuring that dual-energy X-ray absorptiometry (DXA) and all other examinations were carried out during the hospital stay.\nPharmacologic treatment and nutritional supplementation began as soon as patients were stable from a cardiovascular standpoint, able to take food by mouth, and able to sit in an upright position for 1 hour after taking their tablets.\nPatients were instructed both verbally and in writing to take bisphosphonate 30 minutes before food and other medications. They were examined at baseline and again during follow-up at 6 and 12 months. About once a month, the research nurses interviewed patients by telephone regarding compliance, food intake, pain, and general state of health.\nThe study was conducted in compliance with the Helsinki Declaration and was approved by the local ethics committee in Stockholm. All participants provided written informed consent. (ClinicalTrials.gov: NCT01950169).\nEligible patients who agreed to participate were randomized into three groups in blocks of twelve, using a sealed envelope technique, thereby ensuring equal distribution of patients in the three treatment groups at each center. Patients were followed-up for 12 months. All participants received 1,000 mg calcium and 800 IU vitamin D3 daily. The first group received 35 mg risedronate (Optinate® Septimum; Sanofi AB, Warner Chilcott, Weiterstadt, Germany) once weekly for 12 months (B; n=28). The second group received 35 mg risedronate once weekly for 12 months plus a nutritional supplement (Fresubin® protein energy drink; Fresenius Kabi, Bad Homburg, Germany) during the first 6 months after hip fracture (BN; n=26). The patients in the third group served as controls (C; n=25) and received calcium and vitamin D3 alone (Calcichew D3®; Takeda Pharmaceutical Company Limited, Osaka, Japan) for 12 months.\nThe supplement contained 150 kcal and 10 g protein/100 mL milk-based protein (80% casein and 20% whey). Patients were prescribed 200 mL twice daily, totaling 600 kcal with 40 g protein. Each study center was staffed by one physician and a trial nurse. The trial team was responsible for the randomization process, collection of morning blood samples 1–3 days postfracture, and ensuring that dual-energy X-ray absorptiometry (DXA) and all other examinations were carried out during the hospital stay.\nPharmacologic treatment and nutritional supplementation began as soon as patients were stable from a cardiovascular standpoint, able to take food by mouth, and able to sit in an upright position for 1 hour after taking their tablets.\nPatients were instructed both verbally and in writing to take bisphosphonate 30 minutes before food and other medications. They were examined at baseline and again during follow-up at 6 and 12 months. 
About once a month, the research nurses interviewed patients by telephone regarding compliance, food intake, pain, and general state of health.\nThe study was conducted in compliance with the Helsinki Declaration and was approved by the local ethics committee in Stockholm. All participants provided written informed consent. (ClinicalTrials.gov: NCT01950169).\n Measurements BMD was measured by DXA, using either Hologic (Hologic, Inc., Waltham, MA, USA) or Ge Lunar (Madison, WI, USA) densitometers. BMD on the uninjured side (total hip) was assessed, as well as total-body BMD. Results were expressed as areal density (g/cm2) and as standard deviation in relation to both mean value among healthy young individuals (T-score) and mean value of age- and sex-matched adults (Z-score). The precision error was 0.010 g/cm2 and 0.007 g/cm2, respectively, for the total hip and total body BMD. To compensate for variation in BMD measurements among different centers and densitometers, equations to standardize BMD were used to create standardized bone mineral density (sBMD) in milligrams per square centimeter for the total hip result.14 Plasma-calcium (mmol/L), plasma-albumin (g/L), and serum-PTH (ng/L) were analyzed according to standard hospital laboratory procedure for each center. Serum-25OHD (nmol/L) was analyzed at baseline and again after 12 months, using chemiluminescence immunoassays (Liason® 25 OH vitamin D Total Assay; DiaSorin Inc., Stillwater, MN, USA). To evaluate changes in bone turnover, serum CTX-I (ng/L) was analyzed at baseline and at 12 months, using the Beta-CrossLaps assay (Roche Diagnostics GmbH, Mannheim, Germany), a two-site immunometric (sandwich) assay based on electro-chemiluminescence detection. The interassay coefficient of variation was less than 20%. Patient height, weight, and body mass index were monitored. Height was measured in the supine position. Weight was calculated from total mass (lean, fat, and bone mineral content), obtained through DXA measurements. The examination also included an appraisal of cognitive function using the Short Portable Mental Questionnaire, including ten simple questions.15\nBMD was measured by DXA, using either Hologic (Hologic, Inc., Waltham, MA, USA) or Ge Lunar (Madison, WI, USA) densitometers. BMD on the uninjured side (total hip) was assessed, as well as total-body BMD. Results were expressed as areal density (g/cm2) and as standard deviation in relation to both mean value among healthy young individuals (T-score) and mean value of age- and sex-matched adults (Z-score). The precision error was 0.010 g/cm2 and 0.007 g/cm2, respectively, for the total hip and total body BMD. To compensate for variation in BMD measurements among different centers and densitometers, equations to standardize BMD were used to create standardized bone mineral density (sBMD) in milligrams per square centimeter for the total hip result.14 Plasma-calcium (mmol/L), plasma-albumin (g/L), and serum-PTH (ng/L) were analyzed according to standard hospital laboratory procedure for each center. Serum-25OHD (nmol/L) was analyzed at baseline and again after 12 months, using chemiluminescence immunoassays (Liason® 25 OH vitamin D Total Assay; DiaSorin Inc., Stillwater, MN, USA). To evaluate changes in bone turnover, serum CTX-I (ng/L) was analyzed at baseline and at 12 months, using the Beta-CrossLaps assay (Roche Diagnostics GmbH, Mannheim, Germany), a two-site immunometric (sandwich) assay based on electro-chemiluminescence detection. 
The interassay coefficient of variation was less than 20%. Patient height, weight, and body mass index were monitored. Height was measured in the supine position. Weight was calculated from total mass (lean, fat, and bone mineral content), obtained through DXA measurements. The examination also included an appraisal of cognitive function using the Short Portable Mental Questionnaire, including ten simple questions.15\n Statistical methods Calculations were performed using SPSS 22.0 for Windows (IBM Corporation, Armonk, NY, USA). Descriptive statistics included mean, standard deviation, median, range, and percentage. Differences between the three randomized treatment groups were analyzed using analysis of covariance (ANCOVA). Covariates used in the analysis of BMD were age, sex, total mass, and baseline BMD. Age and baseline value were used as covariates in the analysis of serum-25OHD. Baseline value was the only covariate used in the analysis of serum-PTH. The covariates used in the analysis of serum-CTX-I were age, sex, and baseline value. Normal distribution of serum-CTX-I values was achieved by transformation, using a logarithmic scale. A paired-samples Student’s t-test was used to compare differences in serum-CTX-I within each group between baseline and follow-up. In addition to analysis of complete cases, an intention-to-treat analysis was carried out16 in accordance with Consolidated Standards of Reporting Trials guidelines.17\nCalculations were performed using SPSS 22.0 for Windows (IBM Corporation, Armonk, NY, USA). Descriptive statistics included mean, standard deviation, median, range, and percentage. Differences between the three randomized treatment groups were analyzed using analysis of covariance (ANCOVA). Covariates used in the analysis of BMD were age, sex, total mass, and baseline BMD. Age and baseline value were used as covariates in the analysis of serum-25OHD. Baseline value was the only covariate used in the analysis of serum-PTH. The covariates used in the analysis of serum-CTX-I were age, sex, and baseline value. Normal distribution of serum-CTX-I values was achieved by transformation, using a logarithmic scale. A paired-samples Student’s t-test was used to compare differences in serum-CTX-I within each group between baseline and follow-up. In addition to analysis of complete cases, an intention-to-treat analysis was carried out16 in accordance with Consolidated Standards of Reporting Trials guidelines.17", "Eligible patients who agreed to participate were randomized into three groups in blocks of twelve, using a sealed envelope technique, thereby ensuring equal distribution of patients in the three treatment groups at each center. Patients were followed-up for 12 months. All participants received 1,000 mg calcium and 800 IU vitamin D3 daily. The first group received 35 mg risedronate (Optinate® Septimum; Sanofi AB, Warner Chilcott, Weiterstadt, Germany) once weekly for 12 months (B; n=28). The second group received 35 mg risedronate once weekly for 12 months plus a nutritional supplement (Fresubin® protein energy drink; Fresenius Kabi, Bad Homburg, Germany) during the first 6 months after hip fracture (BN; n=26). The patients in the third group served as controls (C; n=25) and received calcium and vitamin D3 alone (Calcichew D3®; Takeda Pharmaceutical Company Limited, Osaka, Japan) for 12 months.\nThe supplement contained 150 kcal and 10 g protein/100 mL milk-based protein (80% casein and 20% whey). 
Patients were prescribed 200 mL twice daily, totaling 600 kcal with 40 g protein. Each study center was staffed by one physician and a trial nurse. The trial team was responsible for the randomization process, collection of morning blood samples 1–3 days postfracture, and ensuring that dual-energy X-ray absorptiometry (DXA) and all other examinations were carried out during the hospital stay.\nPharmacologic treatment and nutritional supplementation began as soon as patients were stable from a cardiovascular standpoint, able to take food by mouth, and able to sit in an upright position for 1 hour after taking their tablets.\nPatients were instructed both verbally and in writing to take bisphosphonate 30 minutes before food and other medications. They were examined at baseline and again during follow-up at 6 and 12 months. About once a month, the research nurses interviewed patients by telephone regarding compliance, food intake, pain, and general state of health.\nThe study was conducted in compliance with the Helsinki Declaration and was approved by the local ethics committee in Stockholm. All participants provided written informed consent. (ClinicalTrials.gov: NCT01950169).", "BMD was measured by DXA, using either Hologic (Hologic, Inc., Waltham, MA, USA) or Ge Lunar (Madison, WI, USA) densitometers. BMD on the uninjured side (total hip) was assessed, as well as total-body BMD. Results were expressed as areal density (g/cm2) and as standard deviation in relation to both mean value among healthy young individuals (T-score) and mean value of age- and sex-matched adults (Z-score). The precision error was 0.010 g/cm2 and 0.007 g/cm2, respectively, for the total hip and total body BMD. To compensate for variation in BMD measurements among different centers and densitometers, equations to standardize BMD were used to create standardized bone mineral density (sBMD) in milligrams per square centimeter for the total hip result.14 Plasma-calcium (mmol/L), plasma-albumin (g/L), and serum-PTH (ng/L) were analyzed according to standard hospital laboratory procedure for each center. Serum-25OHD (nmol/L) was analyzed at baseline and again after 12 months, using chemiluminescence immunoassays (Liason® 25 OH vitamin D Total Assay; DiaSorin Inc., Stillwater, MN, USA). To evaluate changes in bone turnover, serum CTX-I (ng/L) was analyzed at baseline and at 12 months, using the Beta-CrossLaps assay (Roche Diagnostics GmbH, Mannheim, Germany), a two-site immunometric (sandwich) assay based on electro-chemiluminescence detection. The interassay coefficient of variation was less than 20%. Patient height, weight, and body mass index were monitored. Height was measured in the supine position. Weight was calculated from total mass (lean, fat, and bone mineral content), obtained through DXA measurements. The examination also included an appraisal of cognitive function using the Short Portable Mental Questionnaire, including ten simple questions.15", "Calculations were performed using SPSS 22.0 for Windows (IBM Corporation, Armonk, NY, USA). Descriptive statistics included mean, standard deviation, median, range, and percentage. Differences between the three randomized treatment groups were analyzed using analysis of covariance (ANCOVA). Covariates used in the analysis of BMD were age, sex, total mass, and baseline BMD. Age and baseline value were used as covariates in the analysis of serum-25OHD. Baseline value was the only covariate used in the analysis of serum-PTH. 
The covariates used in the analysis of serum-CTX-I were age, sex, and baseline value. Normal distribution of serum-CTX-I values was achieved by transformation, using a logarithmic scale. A paired-samples Student’s t-test was used to compare differences in serum-CTX-I within each group between baseline and follow-up. In addition to analysis of complete cases, an intention-to-treat analysis was carried out16 in accordance with Consolidated Standards of Reporting Trials guidelines.17", "Among the control patients, 17 of 24 patients who presented at 12 months took calcium and vitamin D as prescribed. In group B, 18 of 25 presenting patients took their daily dose of calcium and vitamin D, and 18 of 25 took the bisphosphonate as stipulated. In group BN, seven of 18 corresponding patients complied with the drugs and nutritional supplement prescription; the remaining eleven patients reported an intake of half the prescribed nutritional supplement. Three of these patients also reported a lower intake of prescribed bisphosphonate, and two took only half the prescribed daily dose of calcium and vitamin D. In total, 15 patients in the BN group took the bisphosphonate as stipulated.\nThree patients in the control group and two in the group B reported gastrointestinal complaints (either constipation or diarrhea). A dose reduction was made for one patient with hypercalcemia in group C.", "Seventy-nine patients were measured by DXA at inclusion, 68 patients at 6 months, and 66 at 12 months. Because of a total hip replacement on the uninjured side, nine patients could not be measured at the hip.\nDuring the first 6 months, total hip BMD increased by 0.7% in the BN group, whereas groups B and C showed losses of 1.1% and 2.4%, respectively (P by ANCOVA =0.071; Table 2; Figure 2).\nOn average, there was no loss in total body BMD between baseline and 12 months in the BN group (Table 2). Moreover, both the B and C groups lost BMD, and this loss was greater among controls than in group B (P=0.009; Table 2; Figure 3).\nThere was a trend for difference between groups, according to change in the bone resorption marker serum-CTX-I (P=0.055; Table 2). Within-group analysis showed a significant decrease in the serum-CTX-I marker of 33% and 36% in groups B and BN, respectively (P<0.001), whereas the smaller decrease of 12% in the C group was not significant (P=0.77).", "During the study, there was a mean increase in serum-25-OHD of between 17 nmol/L and 20 nmol/L in all three groups. Mean serum-25OHD concentrations in groups B and BN were below normal (ie, <50 nmol/L at baseline) but had normalized by the 12-month follow-up (Tables 1 and 2). In total, 59% of the patients had a baseline concentration of serum-25OHD that was lower than 50 nmol/L. Among them, eleven had values lower than 25 nmol/L. At the 12-month follow-up, 26% of the patients still had serum-25OHD concentrations that were lower than 50 nmol/L. We identified lack of compliance in eleven of the 18 patients who failed to normalize their concentration of serum-25OHD.\nMean serum-PTH remained in the normal range for all groups at inclusion and at the 6-month and 12-month follow-ups; no significant difference was found among the groups on any measurement occasion." ]
[ "methods", "methods", null, "methods", null, null, null ]
[ "Introduction", "Methods", "Patients", "Study design and intervention", "Measurements", "Statistical methods", "Results", "Treatment adherence", "Effects on BMD and bone turnover", "Levels of 25OHD and parathyroid hormone", "Intention-to-treat analysis", "Discussion" ]
[ "Inadequate intake of protein and total calories, leading to malnutrition, is common among hip fracture patients.1,2 After a hip fracture, a catabolic state develops, characterized by increased loss of bone mineral density (BMD) during the first year.3–5 Because protein is an important structural component of bone and previous studies have reported a positive association between protein intake and BMD/bone mineral content,6–8 it is tempting to hypothesize that protein and energy supplementation may slow down postoperative bone loss. One previous study showed that hip fracture patients who received postoperative supplemental protein suffered less loss of BMD in the proximal femur compared with controls at 12 months.9 Otherwise, there are few studies that have investigated the effect of nutritional supplementation on BMD after hip fractures.\nBisphosphonates are the most widely used drugs for treatment of osteoporosis and have been shown to reduce the risk for hip fracture.10,11 Risedronate, which was used in the present study, has previously been shown to increase BMD of the femoral neck and femoral trochanter in osteoporotic women aged 80 years and older after 6 months of treatment.12 Risedronate has not been studied for secondary prevention of bone loss in old adults after a hip fracture. However, beneficial effects on total hip BMD after hip fracture have been demonstrated, using zoledronic acid together with calcium and vitamin D.13\nThe primary aim of this study was to investigate whether postoperative treatment with a combination of protein-rich formula and bisphosphonates can reduce BMD loss after hip fracture better than bisphosphonates alone. Secondary aims were to study treatment effects on the bone resorption marker C-terminal telopeptide of collagen I (serum CTX-I), serum levels of 25-hydroxy vitamin D (25OHD), and parathyroid hormone (PTH).", " Patients The study included a total of 79 patients with a mean age of 79 years (standard deviation, 9; range, 61–96 years) and a history of recent hip fracture (femoral neck or trochanteric) who were admitted to any of the four university hospitals in Stockholm. Inclusion criteria were age 60 years or older, no severe cognitive impairment (Short Portable Mental Questionnaire15 [see below] score, ≥3), ambulatory before fracture, and body mass index 28 kg/m2 or lower. Exclusion criteria were pathological fractures and bisphosphonate treatment within the last year. Patients with alcohol/drug abuse or overt psychiatric disorders were excluded. Also excluded were patients with abnormal hepatic or renal laboratory parameters such as serum-alanine aminotransferase or serum-aspartate-aminotransferase twice the normal reference range or higher, respectively; serum-creatinine levels higher than 130 μmol/L or glomerular filtration rate lower than 30 mL/minute; or with bone metabolic disorders such as primary hyperparathyroidism, osteogenesis imperfecta, Paget’s disease, or myeloma. Patients with lactose intolerance, dysphagia, esophagitis, gastric ulcer, or malignancy were also excluded, as were patients with diabetes mellitus associated with nephropathy or retinopathy and patients with active iritis or uveitis.\nThe study included a total of 79 patients with a mean age of 79 years (standard deviation, 9; range, 61–96 years) and a history of recent hip fracture (femoral neck or trochanteric) who were admitted to any of the four university hospitals in Stockholm. 
Study design and intervention

Eligible patients who agreed to participate were randomized into three groups in blocks of twelve, using a sealed-envelope technique, thereby ensuring equal distribution of patients across the three treatment groups at each center. Patients were followed up for 12 months. All participants received 1,000 mg calcium and 800 IU vitamin D3 daily. The first group received 35 mg risedronate (Optinate® Septimum; Sanofi AB, Warner Chilcott, Weiterstadt, Germany) once weekly for 12 months (B; n=28). The second group received 35 mg risedronate once weekly for 12 months plus a nutritional supplement (Fresubin® protein energy drink; Fresenius Kabi, Bad Homburg, Germany) during the first 6 months after hip fracture (BN; n=26). The patients in the third group served as controls (C; n=25) and received calcium and vitamin D3 alone (Calcichew D3®; Takeda Pharmaceutical Company Limited, Osaka, Japan) for 12 months.

The supplement was milk-based (80% casein and 20% whey) and contained 150 kcal and 10 g protein per 100 mL. Patients were prescribed 200 mL twice daily, totaling 600 kcal and 40 g protein. Each study center was staffed by one physician and a trial nurse. The trial team was responsible for the randomization process, collection of morning blood samples 1–3 days postfracture, and ensuring that dual-energy X-ray absorptiometry (DXA) and all other examinations were carried out during the hospital stay.

Pharmacologic treatment and nutritional supplementation began as soon as patients were stable from a cardiovascular standpoint, able to take food by mouth, and able to sit in an upright position for 1 hour after taking their tablets.

Patients were instructed both verbally and in writing to take the bisphosphonate 30 minutes before food and other medications. They were examined at baseline and again during follow-up at 6 and 12 months. About once a month, the research nurses interviewed patients by telephone regarding compliance, food intake, pain, and general state of health.

The study was conducted in compliance with the Helsinki Declaration and was approved by the local ethics committee in Stockholm. All participants provided written informed consent (ClinicalTrials.gov: NCT01950169).
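Allocation in blocks of twelve across three arms implies four assignments per arm in each block. The sketch below illustrates permuted-block allocation with that ratio; the trial itself used sealed envelopes prepared per center, so this code is only an illustration of the balancing scheme, not the trial's allocation software.

```python
import random

# Permuted-block randomization: each block of twelve contains four assignments
# to each of the three arms (B, BN, C), so group sizes stay balanced per center.
def make_block(rng: random.Random) -> list:
    block = ["B"] * 4 + ["BN"] * 4 + ["C"] * 4
    rng.shuffle(block)
    return block

def allocate(n_patients: int, seed: int = 1) -> list:
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        allocation.extend(make_block(rng))
    return allocation[:n_patients]
```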
Measurements

BMD was measured by DXA, using either Hologic (Hologic, Inc., Waltham, MA, USA) or GE Lunar (Madison, WI, USA) densitometers. BMD on the uninjured side (total hip) was assessed, as well as total-body BMD. Results were expressed as areal density (g/cm2) and as standard deviations in relation to both the mean value among healthy young individuals (T-score) and the mean value of age- and sex-matched adults (Z-score). The precision error was 0.010 g/cm2 and 0.007 g/cm2 for total hip and total body BMD, respectively. To compensate for variation in BMD measurements among different centers and densitometers, standardization equations were used to create a standardized bone mineral density (sBMD), in milligrams per square centimeter, for the total hip result.14 Plasma-calcium (mmol/L), plasma-albumin (g/L), and serum-PTH (ng/L) were analyzed according to the standard hospital laboratory procedure at each center. Serum-25OHD (nmol/L) was analyzed at baseline and again after 12 months, using a chemiluminescence immunoassay (Liaison® 25 OH Vitamin D Total Assay; DiaSorin Inc., Stillwater, MN, USA). To evaluate changes in bone turnover, serum CTX-I (ng/L) was analyzed at baseline and at 12 months, using the Beta-CrossLaps assay (Roche Diagnostics GmbH, Mannheim, Germany), a two-site immunometric (sandwich) assay based on electrochemiluminescence detection. The interassay coefficient of variation was less than 20%. Patient height, weight, and body mass index were monitored. Height was measured in the supine position. Weight was calculated from total mass (lean, fat, and bone mineral content) obtained through DXA measurements. The examination also included an appraisal of cognitive function using the Short Portable Mental Questionnaire, which comprises ten simple questions.15
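The T- and Z-scores above follow the usual definitions (the measured BMD expressed in standard deviations from a young-adult or an age- and sex-matched reference), and the sBMD conversion of reference 14 is a device-specific linear rescaling. A small sketch follows; the reference means, standard deviations, and conversion coefficients are placeholders, since the actual values come from the densitometer reference databases and from reference 14 and are not reproduced in the text.

```python
# T- and Z-scores express a measured BMD in standard deviations from a
# reference mean: young healthy adults for the T-score, age- and sex-matched
# adults for the Z-score. All reference values here are placeholders.
def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd: float, matched_mean: float, matched_sd: float) -> float:
    return (bmd - matched_mean) / matched_sd

def sbmd_total_hip_mg_cm2(bmd_g_cm2: float, intercept: float, slope: float) -> float:
    # Device-specific linear cross-calibration, reported in mg/cm2; the
    # intercept and slope from reference 14 are not reproduced here.
    return 1000.0 * (intercept + slope * bmd_g_cm2)
```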
Statistical methods

Calculations were performed using SPSS 22.0 for Windows (IBM Corporation, Armonk, NY, USA). Descriptive statistics included mean, standard deviation, median, range, and percentage. Differences between the three randomized treatment groups were analyzed using analysis of covariance (ANCOVA). The covariates used in the analysis of BMD were age, sex, total mass, and baseline BMD. Age and baseline value were used as covariates in the analysis of serum-25OHD. Baseline value was the only covariate used in the analysis of serum-PTH. The covariates used in the analysis of serum-CTX-I were age, sex, and baseline value. Serum-CTX-I values were log-transformed to achieve a normal distribution. A paired-samples Student's t-test was used to compare differences in serum-CTX-I within each group between baseline and follow-up. In addition to the analysis of complete cases, an intention-to-treat analysis was carried out16 in accordance with the Consolidated Standards of Reporting Trials guidelines.17
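The analyses were run in SPSS; the sketch below reproduces the same two steps with Python's statsmodels and scipy to show how the covariate adjustment and the within-group test fit together. Column names and the data frame layout are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# df: one row per patient, with columns such as group, age, sex, total_mass,
# bmd_baseline, bmd_12m, ctx_baseline, ctx_12m (names are illustrative).
def ancova_bmd(df: pd.DataFrame):
    # Between-group comparison of follow-up BMD, adjusted for the covariates
    # named in the text: age, sex, total mass, and baseline BMD.
    model = smf.ols(
        "bmd_12m ~ C(group) + age + C(sex) + total_mass + bmd_baseline",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=2)

def ctx_within_group(df: pd.DataFrame, group: str):
    # CTX-I is log-transformed to approximate normality, then compared within
    # each group between baseline and follow-up with a paired t-test.
    g = df[df["group"] == group].dropna(subset=["ctx_baseline", "ctx_12m"])
    return stats.ttest_rel(np.log(g["ctx_baseline"]), np.log(g["ctx_12m"]))
```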
Results

No significant differences in baseline characteristics were found among the three treatment groups, other than the need for walking aids (Table 1). A total of 67 (85%) of the original 79 patients presented for the final follow-up; Figure 1 shows a patient flowchart. Among patients who dropped out, the average age was 87 years (standard deviation, 5; range, 78–94 years), mean total hip BMD was 0.650 g/cm2 (standard deviation, 0.103), and mean total body BMD was 0.943 g/cm2 (standard deviation, 0.147).

Treatment adherence

Among the control patients, 17 of the 24 who presented at 12 months took calcium and vitamin D as prescribed. In group B, 18 of 25 presenting patients took their daily dose of calcium and vitamin D, and 18 of 25 took the bisphosphonate as stipulated. In group BN, seven of 18 corresponding patients complied with the drug and nutritional supplement prescription; the remaining eleven patients reported an intake of half the prescribed nutritional supplement. Three of these patients also reported a lower intake of the prescribed bisphosphonate, and two took only half the prescribed daily dose of calcium and vitamin D. In total, 15 patients in the BN group took the bisphosphonate as stipulated.

Three patients in the control group and two in group B reported gastrointestinal complaints (either constipation or diarrhea). A dose reduction was made for one patient with hypercalcemia in group C.

Effects on BMD and bone turnover

Seventy-nine patients were measured by DXA at inclusion, 68 at 6 months, and 66 at 12 months. Because of a total hip replacement on the uninjured side, nine patients could not be measured at the hip.

During the first 6 months, total hip BMD increased by 0.7% in the BN group, whereas groups B and C showed losses of 1.1% and 2.4%, respectively (P by ANCOVA =0.071; Table 2; Figure 2).

On average, there was no loss in total body BMD between baseline and 12 months in the BN group (Table 2). Both the B and C groups lost BMD, and this loss was greater among controls than in group B (P=0.009; Table 2; Figure 3).

There was a trend toward a difference between groups in the change in the bone resorption marker serum-CTX-I (P=0.055; Table 2). Within-group analysis showed a significant decrease in serum-CTX-I of 33% and 36% in groups B and BN, respectively (P<0.001), whereas the smaller decrease of 12% in the C group was not significant (P=0.77).
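The BMD changes quoted above are per-patient percentage changes from baseline, averaged within each treatment arm. A small helper, with illustrative column names:

```python
import pandas as pd

# Per-patient percentage change from baseline, averaged within each treatment
# arm. Column names ("group", baseline, followup) are illustrative.
def mean_pct_change_by_group(df: pd.DataFrame, baseline: str, followup: str) -> pd.Series:
    pct = 100.0 * (df[followup] - df[baseline]) / df[baseline]
    return pct.groupby(df["group"]).mean()
```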
Levels of 25OHD and parathyroid hormone

During the study, there was a mean increase in serum-25OHD of between 17 nmol/L and 20 nmol/L in all three groups. Mean serum-25OHD concentrations in groups B and BN were below normal (ie, <50 nmol/L) at baseline but had normalized by the 12-month follow-up (Tables 1 and 2). In total, 59% of the patients had a baseline serum-25OHD concentration lower than 50 nmol/L; among them, eleven had values lower than 25 nmol/L. At the 12-month follow-up, 26% of the patients still had serum-25OHD concentrations lower than 50 nmol/L. We identified lack of compliance in eleven of the 18 patients who failed to normalize their serum-25OHD concentration.

Mean serum-PTH remained in the normal range for all groups at inclusion and at the 6-month and 12-month follow-ups; no significant difference was found among the groups on any measurement occasion.

Intention-to-treat analysis

For the intention-to-treat analysis, missing data were replaced, and the resulting population was analyzed as a sensitivity analysis, with the following results. The percentage change in total hip BMD between baseline and 6 months was +0.9% in the BN group and −0.5% and −2.7% in the B and C groups, respectively (P=0.03). The corresponding changes between baseline and 12 months were −0.8% for the BN group and −1.7% and −2.6% for the B and C groups, respectively (P=0.279).

According to the sensitivity analysis, the percentage change in total body BMD between baseline and 6 months was −0.7% for the BN group and −0.7% and −1.3% for the B and C groups, respectively (P=0.436). Between baseline and 12 months, the percentage change in total body BMD was −0.02% for the BN group and −0.9% and −1.6% for the B and C groups, respectively (P=0.030).

The sensitivity analysis confirmed the trend indicated by the complete-case results, showing a more pronounced decrease in serum-CTX-I in the B and BN groups than in the C group (P=0.019). It was also in line with the complete-case results in showing no significant differences among the three groups in changes of serum-25OHD and serum-PTH at either the 6- or 12-month follow-up.
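The article states that missing follow-up data were replaced for the intention-to-treat analysis but does not specify the imputation rule. Last observation carried forward (LOCF) is a common simple choice and is sketched here purely as an illustration, not as the method actually used; column names are assumptions.

```python
import pandas as pd

# Illustrative intention-to-treat fill-in: carry the last available BMD value
# forward for patients missing a follow-up measurement (LOCF). The article does
# not state which imputation rule was used, so this is only an example.
def locf(df: pd.DataFrame, cols=("bmd_baseline", "bmd_6m", "bmd_12m")) -> pd.DataFrame:
    out = df.copy()
    out[list(cols)] = out[list(cols)].ffill(axis=1)
    return out
```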
Discussion

We found that nutritional supplementation, in addition to calcium, vitamin D, and risedronate, had a positive effect on total hip BMD and total body BMD in elderly patients with a recent hip fracture. An annual loss of 0.27% and 0.25% of total hip BMD in women and men, respectively, has been reported for a healthy population aged 50–85 years.18 However, prior studies have shown a much higher loss of 2.0%–4.6% (hip BMD) in the first year after hip fracture,3,5 which is consistent with the findings for the control group in the current study.
Two studies reported an increase in total hip BMD after hip fracture when patients were treated with a single dose of intravenous zoledronic acid.13,19 The present study used oral bisphosphonates but still showed a bone-resorption-preventive effect compared with controls. Because absorption of orally administered bisphosphonate is low even under ideal circumstances, this could explain the lower net gain in BMD compared with the previous studies.13,19 Another important consideration is the known suboptimal patient adherence to orally administered bisphosphonates,20,21 which we also observed in the current study.

To our knowledge, only a few studies have explored the possible effects of protein- and energy-rich nutritional supplements on BMD after a hip fracture. In a randomized controlled study of 60 women with femoral neck fracture, Tengstrand et al22 evaluated the effect of treatment with protein-rich supplementation alone or in combination with anabolic steroids on both hip and total body BMD. Although the difference in BMD between the groups did not reach statistical significance, the results of their study indicated an increase in total body BMD at 6 and 12 months in the groups that received protein and energy supplementation compared with the group treated only with calcium and vitamin D.22 Another study of 82 hip fracture patients showed that protein supplementation (20 g daily) for 12 months after hip fracture preserved BMD when compared with untreated controls.9 The nutritional supplementation used in the current study provided a 40 g daily dose of protein compared with only 20 g in the previous studies,9,22 which may explain why the BMD-preserving effect was already observable after 6 months. Furthermore, participants in our study were supplemented with 600 kcal/day, rather than the 250 kcal/day used in one of the previous studies.9 Unlike previous studies,9,22 patients in the current study were also treated with drugs that inhibit bone resorption. The mean change in total hip BMD between groups at 6 months did not quite reach significance, but this may be explained by the small group size. However, the intention-to-treat analysis supported the complete-case results and showed significant differences between groups in total hip BMD at 6 months and total body BMD at 12 months. The more rapid change in total hip BMD could be a result of its larger content of trabecular bone compared with total body BMD. This may also explain the lack of effect on total hip BMD at 12 months, because treatment with nutritional supplementation ended at 6 months. Although the study lasted for 1 year, we chose to give the nutritional supplement only during the first 6 postoperative months, when the degree of catabolism is likely to be most pronounced.

We found no differences in vitamin D levels between groups that could explain the disparities in BMD. All groups received vitamin D, and mean values were normalized during the study. However, 26% of all patients included still had values less than 50 nmol/L at the final follow-up; these findings were consistent with the results of a prior study in which hospitalized women 66–95 years of age were treated with vitamin D3 (800 IU) and calcium (1,000 mg).23 As in the current study, mean levels were normalized after treatment, but in 18 (35%) of 51 patients the levels remained low.23 Reasons proposed to explain these findings included an insufficient vitamin D dose, a short supplementation period of only 3 months, and noncompliance issues. The first and last of these reasons may also apply to the current study.
The decrease in the bone resorption marker serum-CTX-I became more pronounced in both risedronate-treated groups at 12 months. Because bone resorption-inhibiting drugs decrease CTX-I levels, these results reflect the expected treatment response to risedronate. Our baseline samples were drawn 1–3 days postfracture, which may have contributed to the variability in CTX-I levels. Other causes of variability not taken into account in the current study include food intake and circadian rhythm.24

Potential limitations of our study were the inclusion and exclusion criteria, which selected for a group of hip fracture patients who were living independently, were ambulatory on admission, were without severe cognitive dysfunction, and were slightly younger than the average age for this diagnosis. Group size was also a limiting factor, as was lack of compliance despite regular telephone follow-ups. The randomized design was one of the strengths of the study, as was the relatively long treatment period. Moreover, the design of the current study was novel in that it combined nutritional supplementation with bisphosphonate treatment after hip fracture in elderly men and women.

We therefore conclude that nutritional supplementation, along with an orally administered bisphosphonate, produces additive effects on BMD after hip fracture.
[ "intro", "methods", "subjects", "methods", null, "methods", "results", null, null, null, "methods", "discussion" ]
[ "hip fracture", "nutritional supplementation", "bisphosphonates", "bone mineral density" ]
Introduction: Inadequate intake of protein and total calories, leading to malnutrition, is common among hip fracture patients.1,2 After a hip fracture, a catabolic state develops, characterized by increased loss of bone mineral density (BMD) during the first year.3–5 Because protein is an important structural component of bone and previous studies have reported a positive association between protein intake and BMD/bone mineral content,6–8 it is tempting to hypothesize that protein and energy supplementation may slow down postoperative bone loss. One previous study showed that hip fracture patients who received postoperative supplemental protein suffered less loss of BMD in the proximal femur compared with controls at 12 months.9 Otherwise, there are few studies that have investigated the effect of nutritional supplementation on BMD after hip fractures. Bisphosphonates are the most widely used drugs for treatment of osteoporosis and have been shown to reduce the risk for hip fracture.10,11 Risedronate, which was used in the present study, has previously been shown to increase BMD of the femoral neck and femoral trochanter in osteoporotic women aged 80 years and older after 6 months of treatment.12 Risedronate has not been studied for secondary prevention of bone loss in old adults after a hip fracture. However, beneficial effects on total hip BMD after hip fracture have been demonstrated, using zoledronic acid together with calcium and vitamin D.13 The primary aim of this study was to investigate whether postoperative treatment with a combination of protein-rich formula and bisphosphonates can reduce BMD loss after hip fracture better than bisphosphonates alone. Secondary aims were to study treatment effects on the bone resorption marker C-terminal telopeptide of collagen I (serum CTX-I), serum levels of 25-hydroxy vitamin D (25OHD), and parathyroid hormone (PTH). Methods: Patients The study included a total of 79 patients with a mean age of 79 years (standard deviation, 9; range, 61–96 years) and a history of recent hip fracture (femoral neck or trochanteric) who were admitted to any of the four university hospitals in Stockholm. Inclusion criteria were age 60 years or older, no severe cognitive impairment (Short Portable Mental Questionnaire15 [see below] score, ≥3), ambulatory before fracture, and body mass index 28 kg/m2 or lower. Exclusion criteria were pathological fractures and bisphosphonate treatment within the last year. Patients with alcohol/drug abuse or overt psychiatric disorders were excluded. Also excluded were patients with abnormal hepatic or renal laboratory parameters such as serum-alanine aminotransferase or serum-aspartate-aminotransferase twice the normal reference range or higher, respectively; serum-creatinine levels higher than 130 μmol/L or glomerular filtration rate lower than 30 mL/minute; or with bone metabolic disorders such as primary hyperparathyroidism, osteogenesis imperfecta, Paget’s disease, or myeloma. Patients with lactose intolerance, dysphagia, esophagitis, gastric ulcer, or malignancy were also excluded, as were patients with diabetes mellitus associated with nephropathy or retinopathy and patients with active iritis or uveitis. The study included a total of 79 patients with a mean age of 79 years (standard deviation, 9; range, 61–96 years) and a history of recent hip fracture (femoral neck or trochanteric) who were admitted to any of the four university hospitals in Stockholm. 
Inclusion criteria were age 60 years or older, no severe cognitive impairment (Short Portable Mental Questionnaire15 [see below] score, ≥3), ambulatory before fracture, and body mass index 28 kg/m2 or lower. Exclusion criteria were pathological fractures and bisphosphonate treatment within the last year. Patients with alcohol/drug abuse or overt psychiatric disorders were excluded. Also excluded were patients with abnormal hepatic or renal laboratory parameters such as serum-alanine aminotransferase or serum-aspartate-aminotransferase twice the normal reference range or higher, respectively; serum-creatinine levels higher than 130 μmol/L or glomerular filtration rate lower than 30 mL/minute; or with bone metabolic disorders such as primary hyperparathyroidism, osteogenesis imperfecta, Paget’s disease, or myeloma. Patients with lactose intolerance, dysphagia, esophagitis, gastric ulcer, or malignancy were also excluded, as were patients with diabetes mellitus associated with nephropathy or retinopathy and patients with active iritis or uveitis. Study design and intervention Eligible patients who agreed to participate were randomized into three groups in blocks of twelve, using a sealed envelope technique, thereby ensuring equal distribution of patients in the three treatment groups at each center. Patients were followed-up for 12 months. All participants received 1,000 mg calcium and 800 IU vitamin D3 daily. The first group received 35 mg risedronate (Optinate® Septimum; Sanofi AB, Warner Chilcott, Weiterstadt, Germany) once weekly for 12 months (B; n=28). The second group received 35 mg risedronate once weekly for 12 months plus a nutritional supplement (Fresubin® protein energy drink; Fresenius Kabi, Bad Homburg, Germany) during the first 6 months after hip fracture (BN; n=26). The patients in the third group served as controls (C; n=25) and received calcium and vitamin D3 alone (Calcichew D3®; Takeda Pharmaceutical Company Limited, Osaka, Japan) for 12 months. The supplement contained 150 kcal and 10 g protein/100 mL milk-based protein (80% casein and 20% whey). Patients were prescribed 200 mL twice daily, totaling 600 kcal with 40 g protein. Each study center was staffed by one physician and a trial nurse. The trial team was responsible for the randomization process, collection of morning blood samples 1–3 days postfracture, and ensuring that dual-energy X-ray absorptiometry (DXA) and all other examinations were carried out during the hospital stay. Pharmacologic treatment and nutritional supplementation began as soon as patients were stable from a cardiovascular standpoint, able to take food by mouth, and able to sit in an upright position for 1 hour after taking their tablets. Patients were instructed both verbally and in writing to take bisphosphonate 30 minutes before food and other medications. They were examined at baseline and again during follow-up at 6 and 12 months. About once a month, the research nurses interviewed patients by telephone regarding compliance, food intake, pain, and general state of health. The study was conducted in compliance with the Helsinki Declaration and was approved by the local ethics committee in Stockholm. All participants provided written informed consent. (ClinicalTrials.gov: NCT01950169). Eligible patients who agreed to participate were randomized into three groups in blocks of twelve, using a sealed envelope technique, thereby ensuring equal distribution of patients in the three treatment groups at each center. 
Patients were followed-up for 12 months. All participants received 1,000 mg calcium and 800 IU vitamin D3 daily. The first group received 35 mg risedronate (Optinate® Septimum; Sanofi AB, Warner Chilcott, Weiterstadt, Germany) once weekly for 12 months (B; n=28). The second group received 35 mg risedronate once weekly for 12 months plus a nutritional supplement (Fresubin® protein energy drink; Fresenius Kabi, Bad Homburg, Germany) during the first 6 months after hip fracture (BN; n=26). The patients in the third group served as controls (C; n=25) and received calcium and vitamin D3 alone (Calcichew D3®; Takeda Pharmaceutical Company Limited, Osaka, Japan) for 12 months. The supplement contained 150 kcal and 10 g protein/100 mL milk-based protein (80% casein and 20% whey). Patients were prescribed 200 mL twice daily, totaling 600 kcal with 40 g protein. Each study center was staffed by one physician and a trial nurse. The trial team was responsible for the randomization process, collection of morning blood samples 1–3 days postfracture, and ensuring that dual-energy X-ray absorptiometry (DXA) and all other examinations were carried out during the hospital stay. Pharmacologic treatment and nutritional supplementation began as soon as patients were stable from a cardiovascular standpoint, able to take food by mouth, and able to sit in an upright position for 1 hour after taking their tablets. Patients were instructed both verbally and in writing to take bisphosphonate 30 minutes before food and other medications. They were examined at baseline and again during follow-up at 6 and 12 months. About once a month, the research nurses interviewed patients by telephone regarding compliance, food intake, pain, and general state of health. The study was conducted in compliance with the Helsinki Declaration and was approved by the local ethics committee in Stockholm. All participants provided written informed consent. (ClinicalTrials.gov: NCT01950169). Measurements BMD was measured by DXA, using either Hologic (Hologic, Inc., Waltham, MA, USA) or Ge Lunar (Madison, WI, USA) densitometers. BMD on the uninjured side (total hip) was assessed, as well as total-body BMD. Results were expressed as areal density (g/cm2) and as standard deviation in relation to both mean value among healthy young individuals (T-score) and mean value of age- and sex-matched adults (Z-score). The precision error was 0.010 g/cm2 and 0.007 g/cm2, respectively, for the total hip and total body BMD. To compensate for variation in BMD measurements among different centers and densitometers, equations to standardize BMD were used to create standardized bone mineral density (sBMD) in milligrams per square centimeter for the total hip result.14 Plasma-calcium (mmol/L), plasma-albumin (g/L), and serum-PTH (ng/L) were analyzed according to standard hospital laboratory procedure for each center. Serum-25OHD (nmol/L) was analyzed at baseline and again after 12 months, using chemiluminescence immunoassays (Liason® 25 OH vitamin D Total Assay; DiaSorin Inc., Stillwater, MN, USA). To evaluate changes in bone turnover, serum CTX-I (ng/L) was analyzed at baseline and at 12 months, using the Beta-CrossLaps assay (Roche Diagnostics GmbH, Mannheim, Germany), a two-site immunometric (sandwich) assay based on electro-chemiluminescence detection. The interassay coefficient of variation was less than 20%. Patient height, weight, and body mass index were monitored. Height was measured in the supine position. 
Weight was calculated from total mass (lean, fat, and bone mineral content), obtained through DXA measurements. The examination also included an appraisal of cognitive function using the Short Portable Mental Questionnaire, including ten simple questions.15 Statistical methods Calculations were performed using SPSS 22.0 for Windows (IBM Corporation, Armonk, NY, USA). Descriptive statistics included mean, standard deviation, median, range, and percentage. Differences between the three randomized treatment groups were analyzed using analysis of covariance (ANCOVA). Covariates used in the analysis of BMD were age, sex, total mass, and baseline BMD. Age and baseline value were used as covariates in the analysis of serum-25OHD. Baseline value was the only covariate used in the analysis of serum-PTH. The covariates used in the analysis of serum-CTX-I were age, sex, and baseline value. Normal distribution of serum-CTX-I values was achieved by transformation, using a logarithmic scale. A paired-samples Student’s t-test was used to compare differences in serum-CTX-I within each group between baseline and follow-up. In addition to analysis of complete cases, an intention-to-treat analysis was carried out16 in accordance with Consolidated Standards of Reporting Trials guidelines.17 Patients: The study included a total of 79 patients with a mean age of 79 years (standard deviation, 9; range, 61–96 years) and a history of recent hip fracture (femoral neck or trochanteric) who were admitted to any of the four university hospitals in Stockholm. Inclusion criteria were age 60 years or older, no severe cognitive impairment (Short Portable Mental Questionnaire15 [see below] score, ≥3), ambulatory before fracture, and body mass index 28 kg/m2 or lower. Exclusion criteria were pathological fractures and bisphosphonate treatment within the last year. Patients with alcohol/drug abuse or overt psychiatric disorders were excluded. Also excluded were patients with abnormal hepatic or renal laboratory parameters such as serum-alanine aminotransferase or serum-aspartate-aminotransferase twice the normal reference range or higher, respectively; serum-creatinine levels higher than 130 μmol/L or glomerular filtration rate lower than 30 mL/minute; or with bone metabolic disorders such as primary hyperparathyroidism, osteogenesis imperfecta, Paget’s disease, or myeloma. Patients with lactose intolerance, dysphagia, esophagitis, gastric ulcer, or malignancy were also excluded, as were patients with diabetes mellitus associated with nephropathy or retinopathy and patients with active iritis or uveitis. Study design and intervention: Eligible patients who agreed to participate were randomized into three groups in blocks of twelve, using a sealed envelope technique, thereby ensuring equal distribution of patients in the three treatment groups at each center. Results: No significant differences in baseline characteristics were found among the three treatment groups, other than the need for walking aids (Table 1). 
A total of 67 (85%) of the original 79 patients presented for the final follow-up; Figure 1 shows a patient flowchart. Among patients who dropped out, average age was 87 years (standard deviation, 5; range, 78–94 years), mean value of total hip BMD was 0.650 g/cm2 (standard deviation, 0.103), and total body BMD was 0.943 g/cm2 (standard deviation, 0.147). Treatment adherence Among the control patients, 17 of 24 patients who presented at 12 months took calcium and vitamin D as prescribed. In group B, 18 of 25 presenting patients took their daily dose of calcium and vitamin D, and 18 of 25 took the bisphosphonate as stipulated. In group BN, seven of 18 corresponding patients complied with the drugs and nutritional supplement prescription; the remaining eleven patients reported an intake of half the prescribed nutritional supplement. Three of these patients also reported a lower intake of prescribed bisphosphonate, and two took only half the prescribed daily dose of calcium and vitamin D. In total, 15 patients in the BN group took the bisphosphonate as stipulated. Three patients in the control group and two in the group B reported gastrointestinal complaints (either constipation or diarrhea). A dose reduction was made for one patient with hypercalcemia in group C. Effects on BMD and bone turnover Seventy-nine patients were measured by DXA at inclusion, 68 patients at 6 months, and 66 at 12 months. Because of a total hip replacement on the uninjured side, nine patients could not be measured at the hip. During the first 6 months, total hip BMD increased by 0.7% in the BN group, whereas groups B and C showed losses of 1.1% and 2.4%, respectively (P by ANCOVA =0.071; Table 2; Figure 2). On average, there was no loss in total body BMD between baseline and 12 months in the BN group (Table 2). Moreover, both the B and C groups lost BMD, and this loss was greater among controls than in group B (P=0.009; Table 2; Figure 3). There was a trend for difference between groups, according to change in the bone resorption marker serum-CTX-I (P=0.055; Table 2). Within-group analysis showed a significant decrease in the serum-CTX-I marker of 33% and 36% in groups B and BN, respectively (P<0.001), whereas the smaller decrease of 12% in the C group was not significant (P=0.77). Levels of 25OHD and parathyroid hormone During the study, there was a mean increase in serum-25-OHD of between 17 nmol/L and 20 nmol/L in all three groups. Mean serum-25OHD concentrations in groups B and BN were below normal (ie, <50 nmol/L at baseline) but had normalized by the 12-month follow-up (Tables 1 and 2). In total, 59% of the patients had a baseline concentration of serum-25OHD that was lower than 50 nmol/L. Among them, eleven had values lower than 25 nmol/L. At the 12-month follow-up, 26% of the patients still had serum-25OHD concentrations that were lower than 50 nmol/L. We identified lack of compliance in eleven of the 18 patients who failed to normalize their concentration of serum-25OHD. Mean serum-PTH remained in the normal range for all groups at inclusion and at the 6-month and 12-month follow-ups; no significant difference was found among the groups on any measurement occasion. Intention-to-treat analysis The primary analysis population was used to replace missing data, and the secondary analysis population was used as a sensitivity analysis and showed the following results. The percentage change in total hip BMD between baseline and 6 months was +0.9% in the BN group and −0.5% and −2.7% in the B and C groups, respectively (P=0.03). The corresponding changes between baseline and 12 months were −0.8% for the BN group and −1.7% and −2.6% for the B and C groups, respectively (P=0.279). According to the sensitivity analysis, the percentage change in total body BMD between baseline and 6 months was −0.7% for the BN group and −0.7% and −1.3% for the B and C groups, respectively (P=0.436). Between baseline and 12 months, the percentage change in total body BMD was −0.02% for the BN group and −0.9% and −1.6% for the B and C groups, respectively (P=0.030). The sensitivity analysis confirmed the trend indicated by the results, showing a more pronounced decrease in serum-CTX-I in the B and BN groups than in the C group (P=0.019). The sensitivity analysis was in line with the results for complete cases, showing no significant differences among the three groups in changes of serum-25OHD and serum-PTH at either 6- or 12-month follow-up. Discussion: We found that nutritional supplementation, in addition to calcium, vitamin D, and risedronate, had a positive effect on total hip BMD and total body BMD in elderly patients with a recent hip fracture. An annual loss of 0.27% and 0.25% of total hip BMD in women and men, respectively, has been reported for a healthy population, aged 50–85 years.18 However, prior studies have shown a much higher loss, at 2.0%–4.6% (hip BMD), the first year after hip fracture,3,5 which is consistent with the findings for the control group in the current study. Two studies reported an increase in total hip BMD after hip fracture when patients were treated with a single dose of intravenous zoledronic acid.13,19 The present study entailed bisphosphonates orally but still had bone-resorption-preventive effects compared with controls. 
Because absorption of orally administered treatment is low even under ideal circumstances, it could explain the lower net gain in BMD compared with the previous studies.13,19 Another important consideration is known suboptimal patient adherence with orally administered bisphosphonates,20,21 which we also observed in the current study. To our knowledge, only a few studies have explored the possible effects of protein- and energy-rich nutritional supplements on BMD after a hip fracture. In a randomized controlled study of 60 women with femoral neck fracture, Tengstrand et al22 evaluated the effect of treatment with protein-rich supplementation alone or in combination with anabolic steroids on both hip and total body BMD. Although the difference in BMD between the groups did not reach statistical significance, the results of their study indicated an increase in total body BMD at 6 and 12 months in the groups that received protein and energy supplementation compared with the group treated only with calcium and vitamin D.22 Another study of 82 hip fracture patients showed that protein supplementation (20 g daily) for 12 months after hip fracture preserved BMD when compared with untreated controls.9 The nutritional supplementation used in the current study provided a 40 g daily dose of protein compared with only 20 g in the previous studies,9,22 which may explain why the BMD-preserving effect was already observable after 6 months. Furthermore, participants in our study were supplemented with 600 kcal/day, rather than the 250 kcal/day seen in one of the previous studies.9 Unlike previous studies, patients in the current study were also treated with drugs that inhibit bone resorption.9,22 The mean change in total hip BMD between groups at 6 months did not quite reach significance, but this may be explained by the small group size. However, the intention-to-treat analysis supported the results of complete cases and showed significant difference between groups in total hip BMD at 6 months and total body BMD at 12 months. The more rapid bone metabolic changes in total hip BMD could be a result of a larger content of trabecular bone compared with total body BMD. It may also explain the lack of effect on total hip BMD at 12 months because treatment with nutritional supplementation ended at 6 months. Although the study lasted for 1 year, we chose to give nutritional supplement only during the first 6 postoperative months, when the degree of catabolism is likely to be most pronounced. We found no differences in vitamin D levels between groups to explain the disparities in BMD. All groups received vitamin D, and mean values were normalized during the study. However, 26% of all patients included still had values less than 50 nmol/L at the final follow-up; these findings were consistent with the results from a prior study in which hospitalized women at 66–95 years of age were treated with vitamin D3 (800 IE) and calcium (1,000 mg).23 As in the current study, mean levels were normalized after treatment, but in 18 (35%) of 51 patients, the levels remained low.23 Reasons proposed to explain these findings included insufficient vitamin D dose, a short supplementation period of only 3 months, and noncompliance issues. The first and last of these reasons may also apply to the current study. The decrease in bone resorption marker serum-CTX-I levels became more pronounced in both risedronate-treated groups at 12 months. 
Because bone resorption-inhibiting drugs decrease CTX-I levels, these results reflect the expected treatment response to risedronate. Our baseline samples were drawn 1–3 days postfracture, which may have contributed to the variability in CTX-I levels. Other causes of variability not taken into account in the current study include food intake and circadian rhythm.24 Potential limitations of our study were the inclusion and exclusion criteria, which selected for a group of hip fracture patients who were living independently, were ambulatory on admission, were without severe cognitive dysfunction, and were slightly younger than the average age for this particular diagnosis. Group size was also a limiting factor, as was lack of compliance despite regular telephone follow-ups. The randomized design was one of the strengths of the study, as was the relatively long treatment period. Moreover, the design of the current study was novel in that it combined nutritional supplementation with bisphosphonate treatment after hip fracture in elderly men and women. We may thereby conclude that nutritional supplementation, along with an orally administered bisphosphonate, produces additive effects on BMD after hip fracture.
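The between-group comparisons described under Statistical methods above (ANCOVA on follow-up values with baseline and demographic covariates, a log transform for serum-CTX-I, and paired t-tests within groups) can be outlined in code. This is a minimal sketch, not the authors' analysis: the trial used SPSS 22.0, and the column names and synthetic data below are placeholders introduced only for illustration.

```python
# Sketch of the ANCOVA / paired-test approach described in the Statistical
# methods section. Synthetic stand-in data; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 79
df = pd.DataFrame({
    "group": rng.choice(["B", "BN", "C"], size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "age": rng.normal(79, 9, size=n),
    "total_mass": rng.normal(60, 10, size=n),
    "bmd_baseline": rng.normal(0.78, 0.12, size=n),
    "ctx_baseline": rng.lognormal(6.0, 0.5, size=n),
})
df["bmd_12m"] = df["bmd_baseline"] + rng.normal(-0.01, 0.02, size=n)
df["ctx_12m"] = df["ctx_baseline"] * rng.lognormal(-0.2, 0.3, size=n)

# ANCOVA for 12-month BMD: treatment group adjusted for age, sex, total mass,
# and baseline BMD (the covariates named in the text).
bmd_model = smf.ols(
    "bmd_12m ~ C(group) + age + C(sex) + total_mass + bmd_baseline", data=df
).fit()
print(sm.stats.anova_lm(bmd_model, typ=2))  # P value for the group term

# Serum-CTX-I was log-transformed to approximate normality before analysis.
df["log_ctx_12m"] = np.log(df["ctx_12m"])
ctx_model = smf.ols(
    "log_ctx_12m ~ C(group) + age + C(sex) + np.log(ctx_baseline)", data=df
).fit()
print(sm.stats.anova_lm(ctx_model, typ=2))

# Within-group change in serum-CTX-I: paired t-test, baseline vs follow-up.
for name, g in df.groupby("group"):
    t, p = stats.ttest_rel(g["ctx_baseline"], g["ctx_12m"])
    print(name, round(float(t), 2), round(float(p), 4))
```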
Background: After a hip fracture, a catabolic state develops, with increased bone loss during the first year. The aim of this study was to evaluate the effects of postoperative treatment with calcium, vitamin D, and bisphosphonates (alone or together with nutritional supplementation) on total hip and total body bone mineral density (BMD). Methods: Seventy-nine patients (56 women), with a mean age of 79 years (range, 61-96 years) and with a recent hip fracture, who were ambulatory before fracture and without severe cognitive impairment, were included. Patients were randomized to treatment with bisphosphonates (risedronate 35 mg weekly) for 12 months (B; n=28), treatment with bisphosphonates along with nutritional supplementation (40 g protein, 600 kcal daily) for the first 6 months (BN; n=26), or to controls (C; n=25). All participants received calcium (1,000 mg) and vitamin D3 (800 IU) daily. Total hip and total body BMD were assessed with dual-energy X-ray absorptiometry at baseline, 6, and 12 months. The bone resorption marker C-terminal telopeptide of collagen I and 25-hydroxyvitamin D were analyzed in serum. Results: Analysis of complete cases (70/79 at 6 months and 67/79 at 12 months) showed an increase in total hip BMD of 0.7% in the BN group, whereas the B and C groups lost 1.1% and 2.4% of BMD, respectively, between baseline and 6 months (P=0.071, between groups). There was no change in total body BMD between baseline and 12 months in the BN group, whereas the B group and C group both lost BMD, with C losing more than B (P=0.009). Intention-to-treat analysis was in concordance with the complete cases analyses. Conclusions: Protein- and energy-rich supplementation in addition to calcium, vitamin D, and bisphosphonate therapy had additive effects on total body BMD and total hip BMD among elderly hip fracture patients.
null
null
7,676
391
[ 2438, 413, 369, 196, 161, 231, 191 ]
12
[ "patients", "bmd", "serum", "total", "group", "months", "groups", "12", "hip", "baseline" ]
[ "loss bone mineral", "protein intake bmd", "effect nutritional supplementation", "bmd hip fractures", "hip fracture beneficial" ]
null
null
[CONTENT] hip fracture | nutritional supplementation | bisphosphonates | bone mineral density [SUMMARY]
[CONTENT] hip fracture | nutritional supplementation | bisphosphonates | bone mineral density [SUMMARY]
[CONTENT] hip fracture | nutritional supplementation | bisphosphonates | bone mineral density [SUMMARY]
null
[CONTENT] hip fracture | nutritional supplementation | bisphosphonates | bone mineral density [SUMMARY]
null
[CONTENT] Aged | Aged, 80 and over | Biomarkers | Bone Density | Bone Density Conservation Agents | Bone Remodeling | Calcium, Dietary | Dietary Supplements | Diphosphonates | Female | Hip Fractures | Humans | Male | Middle Aged | Postoperative Complications | Postoperative Period | Treatment Outcome | Vitamin D | Vitamins [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Biomarkers | Bone Density | Bone Density Conservation Agents | Bone Remodeling | Calcium, Dietary | Dietary Supplements | Diphosphonates | Female | Hip Fractures | Humans | Male | Middle Aged | Postoperative Complications | Postoperative Period | Treatment Outcome | Vitamin D | Vitamins [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Biomarkers | Bone Density | Bone Density Conservation Agents | Bone Remodeling | Calcium, Dietary | Dietary Supplements | Diphosphonates | Female | Hip Fractures | Humans | Male | Middle Aged | Postoperative Complications | Postoperative Period | Treatment Outcome | Vitamin D | Vitamins [SUMMARY]
null
[CONTENT] Aged | Aged, 80 and over | Biomarkers | Bone Density | Bone Density Conservation Agents | Bone Remodeling | Calcium, Dietary | Dietary Supplements | Diphosphonates | Female | Hip Fractures | Humans | Male | Middle Aged | Postoperative Complications | Postoperative Period | Treatment Outcome | Vitamin D | Vitamins [SUMMARY]
null
[CONTENT] loss bone mineral | protein intake bmd | effect nutritional supplementation | bmd hip fractures | hip fracture beneficial [SUMMARY]
[CONTENT] loss bone mineral | protein intake bmd | effect nutritional supplementation | bmd hip fractures | hip fracture beneficial [SUMMARY]
[CONTENT] loss bone mineral | protein intake bmd | effect nutritional supplementation | bmd hip fractures | hip fracture beneficial [SUMMARY]
null
[CONTENT] loss bone mineral | protein intake bmd | effect nutritional supplementation | bmd hip fractures | hip fracture beneficial [SUMMARY]
null
[CONTENT] patients | bmd | serum | total | group | months | groups | 12 | hip | baseline [SUMMARY]
[CONTENT] patients | bmd | serum | total | group | months | groups | 12 | hip | baseline [SUMMARY]
[CONTENT] patients | bmd | serum | total | group | months | groups | 12 | hip | baseline [SUMMARY]
null
[CONTENT] patients | bmd | serum | total | group | months | groups | 12 | hip | baseline [SUMMARY]
null
[CONTENT] hip fracture | fracture | protein | hip | loss | bmd | bone | postoperative | bisphosphonates | study [SUMMARY]
[CONTENT] groups respectively | bn group groups respectively | sensitivity | sensitivity analysis | group groups respectively | analysis | bn group groups | group groups | groups | bn group [SUMMARY]
[CONTENT] group | groups | patients | bn | bn group | serum | took | analysis | 12 | months [SUMMARY]
null
[CONTENT] patients | bmd | group | serum | groups | months | analysis | total | hip | 12 [SUMMARY]
null
[CONTENT] the first year ||| BMD [SUMMARY]
[CONTENT] Seventy-nine | 56 | age of 79 years | 61-96 years ||| 35 | 12 months | n=28 | 40 | 600 | the first 6 months | BN ||| 1,000 | D3 | daily ||| BMD | 6 | 12 months ||| Marker | 25 [SUMMARY]
[CONTENT] 70/79 | 6 months | 67/79 | 12 months | BMD | 0.7% | BN | 1.1% | 2.4% | BMD | 6 months ||| BMD | 12 months | BN | BMD ||| [SUMMARY]
null
[CONTENT] the first year ||| BMD ||| Seventy-nine | 56 | age of 79 years | 61-96 years ||| 35 | 12 months | n=28 | 40 | 600 | the first 6 months | BN ||| 1,000 | D3 | daily ||| BMD | 6 | 12 months ||| Marker | 25 ||| ||| 70/79 | 6 months | 67/79 | 12 months | BMD | 0.7% | BN | 1.1% | 2.4% | BMD | 6 months ||| BMD | 12 months | BN | BMD ||| ||| BMD | BMD [SUMMARY]
null
The accuracy and completeness of drug information in Google snippet blocks.
34858091
Consumers commonly use the Internet for immediate drug information. In 2014, Google introduced the snippet block to programmatically search available websites to answer a question entered into the search engine without the need for the user to enter any websites. This study compared the accuracy and completeness of drug information found in Google snippet blocks to US Food and Drug Administration (FDA) medication guides.
INTRODUCTION
Ten outpatient drugs were selected from the 2018 Clinical Drugstats Database Medical Expenditure Panel Survey. Six questions in the medication guide for each drug were entered into the Google search engine to find the snippet block. The accuracy and completeness of drug information in the Google snippet block were quantified by two different pharmacists using a scoring system of 1 (less than 25% accurate/complete information) to 5 (100% accurate/complete information). Descriptive statistics were used to summarize the scores.
METHODS
For five out of the six questions, the information in the Google snippets had less than 50% accuracy and completeness compared to the medication guides. The average accuracy and completeness scores of the Google snippets were highest for "What are the ingredients of [the drug]?" with scores of 3.38 (51-75%) and 3.00 (51-75%), respectively. The question on "How to take [drug]?" had the lowest score with averages of 1.00 (<25%) for both accuracy and completeness.
RESULTS
Google snippets provide inaccurate and incomplete drug information when compared to FDA-approved drug medication guides. This aspect may cause patient harm; therefore, it is imperative for health care and health information professionals to provide reliable drug resources to patients and consumers if written information may be needed.
CONCLUSION
[ "Humans", "Internet", "Pharmaceutical Preparations", "Search Engine", "Writing" ]
8608169
INTRODUCTION
A survey conducted by the Pew Research Center for Internet and Technology found that 59% of adults used the Internet to obtain health information in the past year [1], and of those individuals, 80% began their search with Google [2]. With the Google search engine being so accessible to consumers for health-related information, it is important for health care professionals and health information professionals to understand how information is searched online and the types of sources that may provide the information that is found. Moreover, the quality and reliability of drug information found online remains a key concern among health care professionals and health information professionals [3]. In 2014, Google introduced a new feature called the snippet block to assist users with their searches. This feature programmatically searches available websites using an algorithm to answer the question entered into the search bar without requiring the user to enter any websites. The answer to the inquiry obtained by Google is displayed within a box at the top of the results page for the user [4], offering consumers quicker access to health-related information. A news article on National Public Radio (NPR) first highlighted the Google snippet feature in which a consumer asked about eating eggs during pregnancy on the Google search engine [5]. The Google snippet block provided an answer that eggs should be fully cooked to kill the bacteria; however, salmonella poisoning does not pose any fetal risks. As noted in the NPR feature, this answer was only partially correct: raw eggs may pose harm to a fetus if the mother is infected with the bacteria found in the eggs. A study conducted in 2019 further highlighted the frequency of inaccurate information retrieved through the Google search engine [6]. The study used the top ten health questions asked on Google from 2018, which included “What is a keto diet,” “What is ALS disease,” and “What is endometriosis.” Using these questions, the study assessed the quality of information provided by the featured snippet and knowledge panel, finding that the information retrieved from open-sourced websites (e.g., medicalnewstoday.com) was not reliable. In addition to health information, consumers frequently utilize Google, and its snippet block feature, to search for information about prescription and over-the-counter drugs [7–10]. This study was conducted to compare the accuracy and completeness of drug information found in Google snippet blocks to the US Food and Drug Administration (FDA) medication guides of ten outpatient drugs. The secondary objective was to identify the references cited by the Google snippet blocks.
METHODS
The list of drugs included in the analysis for this study was obtained from the 2018 Clinical Drugstats Database Medical Expenditure Panel Survey most commonly prescribed outpatient medications. From the list, medications that had a corresponding medication guide available from the FDA website were selected. The medication guides were chosen as the standard of comparison as they are reviewed and regulated by the FDA. In addition, medication guides are created for patients and contain information deemed necessary to prevent patient harm, allow patients to make an informed decision about medication use, and/or provide adherence information essential to product efficacy [11]. All the medication guides were obtained from the FDA website on the same day in December 2019 to avoid discrepancies caused by updates or changes. Within the medication guides, there were six key questions selected to obtain the Google snippet blocks: “What is [drug] used for?”, “How do I take [drug]?”, “What are the possible side effects of [drug]?”, “What should I avoid while taking [drug]?”, “What are the ingredients of [drug]?”, and “How do I store [drug]?” Each question for all of the drugs was entered into the Google search engine, and a screen capture of the Google snippet block was taken (Figure 1). If a question did not initially yield a Google snippet block, the question was reworded to produce a Google snippet block without losing the meaning of the original question (e.g., “how should I take [drug]” or “what to avoid while taking [drug]”). All Google snippet blocks for the questions were obtained on the same day in December 2019 as the medication guides. A screenshot of a Google snippet block when a question is entered into the search bar. Google and the Google logo are trademarks of Google LLC, and this article is not endorsed by or affiliated with Google in any way. Accuracy was defined as the amount of factual information, and completeness was defined as the amount of information that overlapped with the medication guide. The accuracy and completeness of drug information in the Google snippets were quantified using a scoring system of 1 (less than 25% accurate/complete information) to 5 (100% accurate/complete information) (Table 1). The scoring system was based on previous studies that evaluated accuracy and/or completeness of drug information, but included more increments to provide more precise ratings of the drug information [7, 8]. Rating scale to assess accuracy and completeness To ensure consistency among the raters using the adapted scoring system, two pharmacists, including the principal investigator, conducted an inter-rater reliability test (IRR) to rate accuracy and completeness of the drug information. Each question may receive a maximum of 5 points in each accuracy and completeness category. For the IRR test, eight drugs (Lexapro, Prozac, Coumadin, Cymbalta, Trillipix, Actos, Cipro, Colcrys) not included in the final analysis were selected and independently scored by each pharmacist. The intraclass correlation coefficient (ICC), two-way mixed effects with absolute agreement, was calculated to be 0.843, which was considered to be excellent agreement. Ten different drugs were then selected for the final analysis and scored according to the aforementioned process. Discrepancies among the ratings were discussed and resolved by consensus. The source of information cited by the Google snippets for each of the questions and the frequency in which they were cited were also collected. 
Scores provided by each pharmacist for the IRR test and the ICC were analyzed using SPSS, v25 (IBM SPSS, Armonk, NY). Descriptive statistics were used to summarize the scores for drugs with data presented as means with standard deviations. IRB oversight was exempted for this study as human participants were not included and human data were not collected.
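The inter-rater reliability check described here (a two-way mixed-effects intraclass correlation coefficient with absolute agreement, computed in SPSS v25) can be reproduced in outline with the pingouin package. This is a hedged sketch rather than the authors' code: the long-format layout, item labels, and scores below are invented for illustration.

```python
# Sketch of the ICC calculation described above (the study used SPSS v25).
# Hypothetical data: two pharmacists rating the same drug/question items 1-5.
import pandas as pd
import pingouin as pg

items = ["drugA_q1", "drugA_q2", "drugB_q1", "drugB_q2", "drugC_q1", "drugC_q2"]
ratings = pd.DataFrame({
    "item": items * 2,
    "rater": ["pharmacist_1"] * 6 + ["pharmacist_2"] * 6,
    "score": [4, 3, 2, 5, 3, 1, 4, 3, 3, 5, 2, 1],
})

icc = pg.intraclass_corr(data=ratings, targets="item", raters="rater", ratings="score")
# pingouin reports ICC1-ICC3 (single rater) and ICC1k-ICC3k (average of raters);
# select the row matching the chosen model, here a two-way model with absolute agreement.
print(icc[["Type", "Description", "ICC", "CI95%"]])
```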
RESULTS
A total of fifty-five Google snippet blocks were retrieved from the Google engine, as five questions did not have a Google snippet block available. There were nine questions that did not have the corresponding drug information in the medication guides for comparison; therefore, a total of forty-six Google snippet blocks were available for analysis. There were a total of seven drugs that did not have Google snippet blocks available or information for the question in the medication guide or a combination of the two. The average accuracy and completeness ratings for each drug are provided in Table 2. [Table 2 caption: Mean accuracy and completeness scores by drug. Table notes: no medication guide information for one question and no Google snippet block for one question, total Google snippet blocks analyzed=4; medication guide did not have drug information for four questions, total Google snippet blocks analyzed=2; no Google snippet block and drug information in the medication guide for one question, total Google snippet blocks analyzed=4; medication guide did not have drug information for three questions, total Google snippet blocks analyzed=3; one Google snippet block was not available, total Google snippet blocks analyzed=5.] The drug with the highest mean rating for accuracy was Zoloft at 3.2, and the lowest was Ultram with a mean rating of 1.3. Ambien, Mobic, and Wellbutrin had the highest mean rating for completeness of 2.5, and Protonix had the lowest mean rating of 1.0. For five out of the six questions, the information in the Google snippet blocks had less than 50% accuracy and completeness compared to the medication guides (Table 3). [Table 3 caption: Mean accuracy and completeness scores by question. Table notes: did not contain medication guide information for two drugs, total drugs analyzed=8; no snippet block available for two drugs, total drugs analyzed=8; no snippet block or medication guide information available for 5 drugs, total drugs analyzed=5; no medication guide information for one drug and no snippet block for two drugs, total drugs analyzed=8.] The average accuracy and completeness scores of the Google snippet blocks were highest for the “What are the ingredients of [the drug]?” question, with average scores of 3.38 (51–75%) and 3.00 (51–75%), respectively. The question that had the lowest score was the “How to take [drug]?” with averages of 1.00 (<25%) for both accuracy and completeness. The sources referenced by the Google snippet blocks included Rxlist.com, which was the most frequently cited by seventeen Google snippet blocks. Everydayhealth.com was cited by seven of the Google snippet blocks; the remaining twenty-two Google snippet blocks were gathered from twelve different websites (Table 4). [Table 4 caption: References cited by Google snippets (n=46).]
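The descriptive summaries reported above (mean accuracy and completeness by drug and by question, as in Tables 2 and 3) amount to simple grouped means over the consensus scores. A minimal pandas sketch follows; the column names and example scores are hypothetical, not the study data.

```python
# Sketch of the Table 2 / Table 3 summaries: mean (SD) of the 1-5 consensus
# accuracy and completeness ratings, grouped by drug and by question.
import pandas as pd

scores = pd.DataFrame({
    "drug": ["Zoloft", "Zoloft", "Ultram", "Ultram"],
    "question": ["ingredients", "how_to_take", "ingredients", "how_to_take"],
    "accuracy": [4, 1, 3, 1],
    "completeness": [3, 1, 3, 1],
})

by_drug = scores.groupby("drug")[["accuracy", "completeness"]].agg(["mean", "std"]).round(2)
by_question = scores.groupby("question")[["accuracy", "completeness"]].agg(["mean", "std"]).round(2)
print(by_drug)       # analogue of Table 2
print(by_question)   # analogue of Table 3
```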
null
null
[]
[]
[]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION" ]
[ "A survey conducted by the Pew Research Center for Internet and Technology found that 59% of adults used the Internet to obtain health information in the past year [1], and of those individuals, 80% began their search with Google [2]. With the Google search engine being so accessible to consumers for health-related information, it is important for health care professionals and health information professionals to understand how information is searched online and the types of sources that may provide the information that is found. Moreover, the quality and reliability of drug information found online remains a key concern among health care professionals and health information professionals [3].\nIn 2014, Google introduced a new feature called the snippet block to assist users with their searches. This feature programmatically searches available websites using an algorithm to answer the question entered into the search bar without requiring the user to enter any websites. The answer to the inquiry obtained by Google is displayed within a box at the top of the results page for the user [4], offering consumers quicker access to health-related information. A news article on National Public Radio (NPR) first highlighted the Google snippet feature in which a consumer asked about eating eggs during pregnancy on the Google search engine [5]. The Google snippet block provided an answer that eggs should be fully cooked to kill the bacteria; however, salmonella poisoning does not pose any fetal risks. As noted in the NPR feature, this answer was only partially correct: raw eggs may pose harm to a fetus if the mother is infected with the bacteria found in the eggs.\nA study conducted in 2019 further highlighted the frequency of inaccurate information retrieved through the Google search engine [6]. The study used the top ten health questions asked on Google from 2018, which included “What is a keto diet,” “What is ALS disease,” and “What is endometriosis.” Using these questions, the study assessed the quality of information provided by the featured snippet and knowledge panel, finding that the information retrieved from open-sourced websites (e.g., medicalnewstoday.com) was not reliable. In addition to health information, consumers frequently utilize Google, and its snippet block feature, to search for information about prescription and over-the-counter drugs [7–10].\nThis study was conducted to compare the accuracy and completeness of drug information found in Google snippet blocks to the US Food and Drug Administration (FDA) medication guides of ten outpatient drugs. The secondary objective was to identify the references cited by the Google snippet blocks.", "The list of drugs included in the analysis for this study was obtained from the 2018 Clinical Drugstats Database Medical Expenditure Panel Survey most commonly prescribed outpatient medications. From the list, medications that had a corresponding medication guide available from the FDA website were selected. The medication guides were chosen as the standard of comparison as they are reviewed and regulated by the FDA. In addition, medication guides are created for patients and contain information deemed necessary to prevent patient harm, allow patients to make an informed decision about medication use, and/or provide adherence information essential to product efficacy [11].\nAll the medication guides were obtained from the FDA website on the same day in December 2019 to avoid discrepancies caused by updates or changes. 
Within the medication guides, there were six key questions selected to obtain the Google snippet blocks: “What is [drug] used for?”, “How do I take [drug]?”, “What are the possible side effects of [drug]?”, “What should I avoid while taking [drug]?”, “What are the ingredients of [drug]?”, and “How do I store [drug]?”\nEach question for all of the drugs was entered into the Google search engine, and a screen capture of the Google snippet block was taken (Figure 1). If a question did not initially yield a Google snippet block, the question was reworded to produce a Google snippet block without losing the meaning of the original question (e.g., “how should I take [drug]” or “what to avoid while taking [drug]”). All Google snippet blocks for the questions were obtained on the same day in December 2019 as the medication guides.\nA screenshot of a Google snippet block when a question is entered into the search bar. Google and the Google logo are trademarks of Google LLC, and this article is not endorsed by or affiliated with Google in any way.\nAccuracy was defined as the amount of factual information, and completeness was defined as the amount of information that overlapped with the medication guide. The accuracy and completeness of drug information in the Google snippets were quantified using a scoring system of 1 (less than 25% accurate/complete information) to 5 (100% accurate/complete information) (Table 1). The scoring system was based on previous studies that evaluated accuracy and/or completeness of drug information, but included more increments to provide more precise ratings of the drug information [7, 8].\nRating scale to assess accuracy and completeness\nTo ensure consistency among the raters using the adapted scoring system, two pharmacists, including the principal investigator, conducted an inter-rater reliability test (IRR) to rate accuracy and completeness of the drug information. Each question may receive a maximum of 5 points in each accuracy and completeness category. For the IRR test, eight drugs (Lexapro, Prozac, Coumadin, Cymbalta, Trillipix, Actos, Cipro, Colcrys) not included in the final analysis were selected and independently scored by each pharmacist. The intraclass correlation coefficient (ICC), two-way mixed effects with absolute agreement, was calculated to be 0.843, which was considered to be excellent agreement.\nTen different drugs were then selected for the final analysis and scored according to the aforementioned process. Discrepancies among the ratings were discussed and resolved by consensus. The source of information cited by the Google snippets for each of the questions and the frequency in which they were cited were also collected.\nScores provided by each pharmacist for the IRR test and the ICC were analyzed using SPSS, v25 (IBM SPSS, Armonk, NY). Descriptive statistics were used to summarize the scores for drugs with data presented as means with standard deviations. IRB oversight was exempted for this study as human participants were not included and human data were not collected.", "A total of fifty-five Google snippet blocks were retrieved from the Google engine, as five questions did not have a Google snippet block available. There were nine questions that did not have the corresponding drug information in the medication guides for comparison; therefore, a total of forty-six Google snippet blocks were available for analysis. 
There were a total of seven drugs that did not have Google snippet blocks available or information for the question in the medication guide or a combination of the two. The average accuracy and completeness ratings for each drug are provided in Table 2.\nMean accuracy and completeness scores by drug\nNo medication guide information for one question and no Google snippet block for one question; total Google snippet blocks analyzed=4;\nMedication guide did not have drug information for four questions; total Google snippet blocks analyzed=2;\nNo Google snippet block and drug information in the medication guide for one question; total Google snippet blocks analyzed=4;\nMedication guide did not have drug information for three questions; total Google snippet blocks analyzed=3;\nOne Google snippet block was not available; total Google snippet blocks analyzed=5\nThe drug with the highest mean rating for accuracy was Zoloft at 3.2, and the lowest was Ultram with a mean rating of 1.3. Ambien, Mobic, and Wellbutrin had the highest mean rating for completeness of 2.5, and Protonix had the lowest mean rating of 1.0.\nFor five out of the six questions, the information in the Google snippet blocks had less than 50% accuracy and completeness compared to the medication guides (Table 3).\nMean accuracy and completeness scores by question\nDid not contain medication guide information for two drugs; total drugs analyzed=8;\nNo snippet block available for two drugs; total drugs analyzed=8;\nNo snippet block or medication guide information available for 5 drugs; total drugs analyzed=5;\nNo medication guide information for one drug and no snippet block for two drugs; total drugs analyzed=8\nThe average accuracy and completeness scores of the Google snippet blocks were highest for the “What are the ingredients of [the drug]?” question, with average scores of 3.38 (51–75%) and 3.00 (51–75%), respectively. The question that had the lowest score was the “How to take [drug]?” with averages of 1.00 (<25%) for both accuracy and completeness.\nThe sources referenced by the Google snippet blocks included Rxlist.com, which was the most frequently cited by seventeen Google snippet blocks. Everydayhealth.com was cited by seven of the Google snippet blocks; the remaining twenty-two Google snippet blocks were gathered from twelve different websites (Table 4).\nReferences cited by Google snippets (n=46)", "The drug information in Google snippet blocks had less than 50% accuracy and completeness for most of the questions relevant to drug therapy. The Google snippet blocks provided general information and lacked specificity related to certain components of the questions, which contributed to the low accuracy and completeness ratings. Moreover, the majority of the Google snippet blocks did not provide a complete list of common side effects, details about storage conditions, instructions on how to take the drug, or age groups the drug was approved for when compared to the medication guides.\nAlthough some of the questions such as the indications and ingredients may not alter the health of a patient, questions pertaining to the use of a drug may present the most harm. The question “How to take [drug]?” had the lowest ratings for both accuracy and completeness; therefore, individuals relying on Google snippet blocks to provide directions and guidance upon starting a new medication may not take the drug appropriately. 
Moreover, drugs with a medication guide require proper medication use and adherence to ensure the efficacy of the drug, so it is vital for patients to have accurate and complete instructions on how to take a medication.\nAn unexpected finding was the inconsistency of drug information provided in the medication guides. For the drugs Ultram, Protonix, Mobic, Plavix, and Ambien, there were several questions that were not included in the medication guide such as “how do I take [drug]?”, “what are the ingredients,” and “what to avoid while on [drug].” Manufacturers are required to follow a specific format and include content in the medication guides according to FDA regulations [12]. A possible explanation for the discrepancy is that these drugs have been on the market for more than ten years, and revisions of the medication guides to the new format may have been overlooked.\nThis study specifically focused on drug information found in Google snippets; however, the results were consistent with previous studies that evaluated drug information found in open-sourced websites or Google searches when compared to the medication guides [9–10].\nThere were several limitations in this study. The analysis only consisted of ten drugs from different drug classes, which provided insight into the accuracy and completeness of Google snippet blocks; however, a greater number of drugs may provide more robust data. Also, results may differ with inclusion of drugs administered in the hospital setting. The brand name was used to find the Google snippet blocks for analysis in the study. Some of the branded products may not be available on the market any longer (e.g., Desyrel) or may not be prescribed as often in the clinical setting, which may impact the type and amount of drug information found compared to using the generic name for the search instead. The scoring system was based on other scoring systems used in previous studies; however, it was not validated. Validating the scoring system may provide more reliable and consistent predictive value of the accuracy and completeness ratings. Readability was not evaluated in this study, as information provided by the Google snippet blocks was relatively short and may not have offered an accurate reading level. Future studies may evaluate the length of drug information found in online sources and its impact on consumer and patient preference and understanding of drugs compared to regulated documents such as the medication guides.\nGoogle snippet blocks were designed to provide convenience and assistance with searching information for consumers; however, this study suggests there is inaccurate and incomplete drug information found in the Google snippet blocks when compared to the medication guide. Inaccurate information regarding drugs and health-related information may cause patient harm; therefore, federal stakeholders such as the FDA, National Library of Medicine, or Human Health Services may prevent the potential for harm through policies or guidance on publication of drug information on open-sourced websites. With the lack of regulation currently on the quality and accuracy of drug information, it is imperative for health care and health information professionals to identify reliable resources and refer consumers and patients to these resources if written information is needed about their drugs. 
In addition, health information professionals and health sciences librarians may collaborate with physicians, pharmacists, and other health care professionals to develop appropriate consumer and patient education materials if reliable resources are not available." ]
[ "intro", "methods", "results", "discussion" ]
[ "Internet", "pharmacists", "drug information", "patient care", "search engine" ]
INTRODUCTION: A survey conducted by the Pew Research Center for Internet and Technology found that 59% of adults used the Internet to obtain health information in the past year [1], and of those individuals, 80% began their search with Google [2]. With the Google search engine being so accessible to consumers for health-related information, it is important for health care professionals and health information professionals to understand how information is searched online and the types of sources that may provide the information that is found. Moreover, the quality and reliability of drug information found online remains a key concern among health care professionals and health information professionals [3]. In 2014, Google introduced a new feature called the snippet block to assist users with their searches. This feature programmatically searches available websites using an algorithm to answer the question entered into the search bar without requiring the user to enter any websites. The answer to the inquiry obtained by Google is displayed within a box at the top of the results page for the user [4], offering consumers quicker access to health-related information. A news article on National Public Radio (NPR) first highlighted the Google snippet feature in which a consumer asked about eating eggs during pregnancy on the Google search engine [5]. The Google snippet block provided an answer that eggs should be fully cooked to kill the bacteria; however, salmonella poisoning does not pose any fetal risks. As noted in the NPR feature, this answer was only partially correct: raw eggs may pose harm to a fetus if the mother is infected with the bacteria found in the eggs. A study conducted in 2019 further highlighted the frequency of inaccurate information retrieved through the Google search engine [6]. The study used the top ten health questions asked on Google from 2018, which included “What is a keto diet,” “What is ALS disease,” and “What is endometriosis.” Using these questions, the study assessed the quality of information provided by the featured snippet and knowledge panel, finding that the information retrieved from open-sourced websites (e.g., medicalnewstoday.com) was not reliable. In addition to health information, consumers frequently utilize Google, and its snippet block feature, to search for information about prescription and over-the-counter drugs [7–10]. This study was conducted to compare the accuracy and completeness of drug information found in Google snippet blocks to the US Food and Drug Administration (FDA) medication guides of ten outpatient drugs. The secondary objective was to identify the references cited by the Google snippet blocks. METHODS: The list of drugs included in the analysis for this study was obtained from the 2018 Clinical Drugstats Database Medical Expenditure Panel Survey most commonly prescribed outpatient medications. From the list, medications that had a corresponding medication guide available from the FDA website were selected. The medication guides were chosen as the standard of comparison as they are reviewed and regulated by the FDA. In addition, medication guides are created for patients and contain information deemed necessary to prevent patient harm, allow patients to make an informed decision about medication use, and/or provide adherence information essential to product efficacy [11]. All the medication guides were obtained from the FDA website on the same day in December 2019 to avoid discrepancies caused by updates or changes. 
Within the medication guides, there were six key questions selected to obtain the Google snippet blocks: “What is [drug] used for?”, “How do I take [drug]?”, “What are the possible side effects of [drug]?”, “What should I avoid while taking [drug]?”, “What are the ingredients of [drug]?”, and “How do I store [drug]?” Each question for all of the drugs was entered into the Google search engine, and a screen capture of the Google snippet block was taken (Figure 1). If a question did not initially yield a Google snippet block, the question was reworded to produce a Google snippet block without losing the meaning of the original question (e.g., “how should I take [drug]” or “what to avoid while taking [drug]”). All Google snippet blocks for the questions were obtained on the same day in December 2019 as the medication guides. A screenshot of a Google snippet block when a question is entered into the search bar. Google and the Google logo are trademarks of Google LLC, and this article is not endorsed by or affiliated with Google in any way. Accuracy was defined as the amount of factual information, and completeness was defined as the amount of information that overlapped with the medication guide. The accuracy and completeness of drug information in the Google snippets were quantified using a scoring system of 1 (less than 25% accurate/complete information) to 5 (100% accurate/complete information) (Table 1). The scoring system was based on previous studies that evaluated accuracy and/or completeness of drug information, but included more increments to provide more precise ratings of the drug information [7, 8]. Rating scale to assess accuracy and completeness To ensure consistency among the raters using the adapted scoring system, two pharmacists, including the principal investigator, conducted an inter-rater reliability test (IRR) to rate accuracy and completeness of the drug information. Each question may receive a maximum of 5 points in each accuracy and completeness category. For the IRR test, eight drugs (Lexapro, Prozac, Coumadin, Cymbalta, Trillipix, Actos, Cipro, Colcrys) not included in the final analysis were selected and independently scored by each pharmacist. The intraclass correlation coefficient (ICC), two-way mixed effects with absolute agreement, was calculated to be 0.843, which was considered to be excellent agreement. Ten different drugs were then selected for the final analysis and scored according to the aforementioned process. Discrepancies among the ratings were discussed and resolved by consensus. The source of information cited by the Google snippets for each of the questions and the frequency in which they were cited were also collected. Scores provided by each pharmacist for the IRR test and the ICC were analyzed using SPSS, v25 (IBM SPSS, Armonk, NY). Descriptive statistics were used to summarize the scores for drugs with data presented as means with standard deviations. IRB oversight was exempted for this study as human participants were not included and human data were not collected. RESULTS: A total of fifty-five Google snippet blocks were retrieved from the Google engine, as five questions did not have a Google snippet block available. There were nine questions that did not have the corresponding drug information in the medication guides for comparison; therefore, a total of forty-six Google snippet blocks were available for analysis. 
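As a hedged illustration of the inter-rater reliability step described above (two pharmacists scoring the same eight drugs, summarized with a two-way, absolute-agreement intraclass correlation coefficient), the sketch below shows how an equivalent check could be run outside SPSS. The table layout, column names, and scores are invented for illustration, and the pingouin package is used as a stand-in; the absolute-agreement rows of its output (ICC2/ICC2k) correspond most closely to the model reported in the Methods.

```python
# Hypothetical sketch: inter-rater agreement on 1-5 accuracy scores via an
# intraclass correlation coefficient, analogous to the SPSS analysis.
# The scores below are invented; the drug names match the IRR test set.
import pandas as pd
import pingouin as pg

items = ["Lexapro", "Prozac", "Coumadin", "Cymbalta", "Trillipix", "Actos", "Cipro", "Colcrys"]
rater1 = [4, 3, 5, 2, 3, 4, 1, 2]   # pharmacist 1 (hypothetical scores)
rater2 = [4, 3, 5, 3, 3, 4, 2, 2]   # pharmacist 2 (hypothetical scores)

ratings = pd.DataFrame({
    "item":  items * 2,
    "rater": ["pharmacist_1"] * len(items) + ["pharmacist_2"] * len(items),
    "score": rater1 + rater2,
})

# Intraclass correlation coefficients; inspect the ICC2/ICC2k rows for the
# absolute-agreement estimates.
icc = pg.intraclass_corr(data=ratings, targets="item", raters="rater", ratings="score")
print(icc[["Type", "Description", "ICC", "CI95%"]])
```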
There were a total of seven drugs that did not have Google snippet blocks available or information for the question in the medication guide or a combination of the two. The average accuracy and completeness ratings for each drug are provided in Table 2. Mean accuracy and completeness scores by drug No medication guide information for one question and no Google snippet block for one question; total Google snippet blocks analyzed=4; Medication guide did not have drug information for four questions; total Google snippet blocks analyzed=2; No Google snippet block and drug information in the medication guide for one question; total Google snippet blocks analyzed=4; Medication guide did not have drug information for three questions; total Google snippet blocks analyzed=3; One Google snippet block was not available; total Google snippet blocks analyzed=5 The drug with the highest mean rating for accuracy was Zoloft at 3.2, and the lowest was Ultram with a mean rating of 1.3. Ambien, Mobic, and Wellbutrin had the highest mean rating for completeness of 2.5, and Protonix had the lowest mean rating of 1.0. For five out of the six questions, the information in the Google snippet blocks had less than 50% accuracy and completeness compared to the medication guides (Table 3). Mean accuracy and completeness scores by question Did not contain medication guide information for two drugs; total drugs analyzed=8; No snippet block available for two drugs; total drugs analyzed=8; No snippet block or medication guide information available for 5 drugs; total drugs analyzed=5; No medication guide information for one drug and no snippet block for two drugs; total drugs analyzed=8 The average accuracy and completeness scores of the Google snippet blocks were highest for the “What are the ingredients of [the drug]?” question, with average scores of 3.38 (51–75%) and 3.00 (51–75%), respectively. The question that had the lowest score was the “How to take [drug]?” with averages of 1.00 (<25%) for both accuracy and completeness. The sources referenced by the Google snippet blocks included Rxlist.com, which was the most frequently cited by seventeen Google snippet blocks. Everydayhealth.com was cited by seven of the Google snippet blocks; the remaining twenty-two Google snippet blocks were gathered from twelve different websites (Table 4). References cited by Google snippets (n=46) DISCUSSION: The drug information in Google snippet blocks had less than 50% accuracy and completeness for most of the questions relevant to drug therapy. The Google snippet blocks provided general information and lacked specificity related to certain components of the questions, which contributed to the low accuracy and completeness ratings. Moreover, the majority of the Google snippet blocks did not provide a complete list of common side effects, details about storage conditions, instructions on how to take the drug, or age groups the drug was approved for when compared to the medication guides. Although some of the questions such as the indications and ingredients may not alter the health of a patient, questions pertaining to the use of a drug may present the most harm. The question “How to take [drug]?” had the lowest ratings for both accuracy and completeness; therefore, individuals relying on Google snippet blocks to provide directions and guidance upon starting a new medication may not take the drug appropriately. 
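For readers who want to see how the descriptive summary reported above (mean accuracy and completeness with standard deviations, by drug and by question) could be produced, the short pandas sketch below is one possible approach; the DataFrame layout, column names, and scores are assumptions, not data from the study.

```python
# Hypothetical sketch: summarizing 1-5 accuracy and completeness scores
# by drug and by question, as in the descriptive results. Values are invented.
import pandas as pd

scores = pd.DataFrame({
    "drug":         ["Zoloft", "Zoloft", "Ultram", "Ultram",
                     "Protonix", "Protonix", "Ambien", "Ambien"],
    "question":     ["ingredients", "how_to_take", "ingredients", "how_to_take",
                     "side_effects", "storage", "side_effects", "storage"],
    "accuracy":     [4, 1, 3, 1, 2, 2, 3, 2],
    "completeness": [3, 1, 2, 1, 1, 2, 3, 2],
})

# Means and standard deviations per drug and per question.
by_drug = scores.groupby("drug")[["accuracy", "completeness"]].agg(["mean", "std"])
by_question = scores.groupby("question")[["accuracy", "completeness"]].agg(["mean", "std"])
print(by_drug.round(2))
print(by_question.round(2))
```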
Moreover, drugs with a medication guide require proper medication use and adherence to ensure the efficacy of the drug, so it is vital for patients to have accurate and complete instructions on how to take a medication. An unexpected finding was the inconsistency of drug information provided in the medication guides. For the drugs Ultram, Protonix, Mobic, Plavix, and Ambien, there were several questions that were not included in the medication guide such as “how do I take [drug]?”, “what are the ingredients,” and “what to avoid while on [drug].” Manufacturers are required to follow a specific format and include content in the medication guides according to FDA regulations [12]. A possible explanation for the discrepancy is that these drugs have been on the market for more than ten years, and revisions of the medication guides to the new format may have been overlooked. This study specifically focused on drug information found in Google snippets; however, the results were consistent with previous studies that evaluated drug information found in open-sourced websites or Google searches when compared to the medication guides [9–10]. There were several limitations in this study. The analysis only consisted of ten drugs from different drug classes, which provided insight into the accuracy and completeness of Google snippet blocks; however, a greater number of drugs may provide more robust data. Also, results may differ with inclusion of drugs administered in the hospital setting. The brand name was used to find the Google snippet blocks for analysis in the study. Some of the branded products may not be available on the market any longer (e.g., Desyrel) or may not be prescribed as often in the clinical setting, which may impact the type and amount of drug information found compared to using the generic name for the search instead. The scoring system was based on other scoring systems used in previous studies; however, it was not validated. Validating the scoring system may provide more reliable and consistent predictive value of the accuracy and completeness ratings. Readability was not evaluated in this study, as information provided by the Google snippet blocks was relatively short and may not have offered an accurate reading level. Future studies may evaluate the length of drug information found in online sources and its impact on consumer and patient preference and understanding of drugs compared to regulated documents such as the medication guides. Google snippet blocks were designed to provide convenience and assistance with searching information for consumers; however, this study suggests that the drug information found in Google snippet blocks is inaccurate and incomplete when compared to the medication guides. Inaccurate information regarding drugs and health-related information may cause patient harm; therefore, federal stakeholders such as the FDA, the National Library of Medicine, or the Department of Health and Human Services could help prevent this potential harm through policies or guidance on the publication of drug information on open-sourced websites. Given the current lack of regulation of the quality and accuracy of drug information, it is imperative for health care and health information professionals to identify reliable resources and refer consumers and patients to these resources if written information is needed about their drugs. 
In addition, health information professionals and health sciences librarians may collaborate with physicians, pharmacists, and other health care professionals to develop appropriate consumer and patient education materials if reliable resources are not available.
Background: Consumers commonly use the Internet for immediate drug information. In 2014, Google introduced the snippet block to programmatically search available websites to answer a question entered into the search engine without the need for the user to enter any websites. This study compared the accuracy and completeness of drug information found in Google snippet blocks to US Food and Drug Administration (FDA) medication guides. Methods: Ten outpatient drugs were selected from the 2018 Clinical Drugstats Database Medical Expenditure Panel Survey. Six questions in the medication guide for each drug were entered into the Google search engine to find the snippet block. The accuracy and completeness of drug information in the Google snippet block were quantified by two different pharmacists using a scoring system of 1 (less than 25% accurate/complete information) to 5 (100% accurate/complete information). Descriptive statistics were used to summarize the scores. Results: For five out of the six questions, the information in the Google snippets had less than 50% accuracy and completeness compared to the medication guides. The average accuracy and completeness scores of the Google snippets were highest for "What are the ingredients of [the drug]?" with scores of 3.38 (51-75%) and 3.00 (51-75%), respectively. The question on "How to take [drug]?" had the lowest score with averages of 1.00 (<25%) for both accuracy and completeness. Conclusions: Google snippets provide inaccurate and incomplete drug information when compared to FDA-approved drug medication guides. This aspect may cause patient harm; therefore, it is imperative for health care and health information professionals to provide reliable drug resources to patients and consumers if written information may be needed.
null
null
2,581
335
[]
4
[ "google", "information", "drug", "snippet", "google snippet", "medication", "blocks", "snippet blocks", "google snippet blocks", "drugs" ]
[ "information cited google", "drugs google snippet", "find google snippet", "pregnancy google search", "individuals relying google" ]
null
null
[CONTENT] Internet | pharmacists | drug information | patient care | search engine [SUMMARY]
[CONTENT] Internet | pharmacists | drug information | patient care | search engine [SUMMARY]
[CONTENT] Internet | pharmacists | drug information | patient care | search engine [SUMMARY]
null
[CONTENT] Internet | pharmacists | drug information | patient care | search engine [SUMMARY]
null
[CONTENT] Humans | Internet | Pharmaceutical Preparations | Search Engine | Writing [SUMMARY]
[CONTENT] Humans | Internet | Pharmaceutical Preparations | Search Engine | Writing [SUMMARY]
[CONTENT] Humans | Internet | Pharmaceutical Preparations | Search Engine | Writing [SUMMARY]
null
[CONTENT] Humans | Internet | Pharmaceutical Preparations | Search Engine | Writing [SUMMARY]
null
[CONTENT] information cited google | drugs google snippet | find google snippet | pregnancy google search | individuals relying google [SUMMARY]
[CONTENT] information cited google | drugs google snippet | find google snippet | pregnancy google search | individuals relying google [SUMMARY]
[CONTENT] information cited google | drugs google snippet | find google snippet | pregnancy google search | individuals relying google [SUMMARY]
null
[CONTENT] information cited google | drugs google snippet | find google snippet | pregnancy google search | individuals relying google [SUMMARY]
null
[CONTENT] google | information | drug | snippet | google snippet | medication | blocks | snippet blocks | google snippet blocks | drugs [SUMMARY]
[CONTENT] google | information | drug | snippet | google snippet | medication | blocks | snippet blocks | google snippet blocks | drugs [SUMMARY]
[CONTENT] google | information | drug | snippet | google snippet | medication | blocks | snippet blocks | google snippet blocks | drugs [SUMMARY]
null
[CONTENT] google | information | drug | snippet | google snippet | medication | blocks | snippet blocks | google snippet blocks | drugs [SUMMARY]
null
[CONTENT] information | health | google | feature | answer | eggs | found | search | snippet | health information [SUMMARY]
[CONTENT] google | drug | information | medication | selected | accuracy | snippet | google snippet | question | completeness [SUMMARY]
[CONTENT] total | snippet | google | google snippet | blocks | google snippet blocks | snippet blocks | analyzed | total google snippet blocks | total google [SUMMARY]
null
[CONTENT] google | information | drug | snippet | google snippet | medication | health | blocks | snippet blocks | google snippet blocks [SUMMARY]
null
[CONTENT] ||| 2014 | Google ||| Google | US Food and Drug Administration | FDA [SUMMARY]
[CONTENT] Ten | 2018 ||| Six | Google ||| Google | two | 1 | less than 25% | 5 | 100% ||| [SUMMARY]
[CONTENT] five | six | Google | less than 50% ||| Google | 3.38 | 51-75% | 3.00 | 51-75% ||| 1.00 | 25% [SUMMARY]
null
[CONTENT] ||| 2014 | Google ||| Google | US Food and Drug Administration | FDA ||| Ten | 2018 ||| Six | Google ||| Google | two | 1 | less than 25% | 5 | 100% ||| ||| five | six | Google | less than 50% ||| Google | 3.38 | 51-75% | 3.00 | 51-75% ||| 1.00 | 25% ||| ||| Google | FDA ||| [SUMMARY]
null
Mental health of indigenous school children in Northern Chile.
24438210
Anxiety and depressive disorders occur in all stages of life and are the most common childhood disorders. However, only recently has attention been paid to mental health problems in indigenous children and studies of anxiety and depressive disorders in these children are still scarce. This study compares the prevalence of anxiety and depressive symptoms in Aymara and non-Aymara children. Among the Aymara children, the study examines the relations between these symptoms and the degree of involvement with Aymara culture.
BACKGROUND
We recruited 748 children aged 9 to 15 years from nine schools serving low socioeconomic classes in the city of Arica, in northern Chile. The children were equally divided between boys and girls and 37% of the children were Aymara. To evaluate anxiety and depressive symptoms we used the Stress in Children (SiC) instrument and the Children Depression Inventory-Short version (CDI-S), and used an instrument we developed to assess level of involvement in the Aymara culture.
METHODS
There was no significant difference between Aymara and non-Aymara children on any of the instrument scales. When the Aymara children were divided into high-involvement (n = 89) and low-involvement (n = 186) groups, the low-involvement group had significantly higher scores on the Hopelessness subscale of the CDI-S (p = 0.02) and marginally significantly higher scores for overall Anxiety on the SiC (p = 0.06).
RESULTS
Although Aymara children have migrated from the high Andean plateau to the city, this migration has not resulted in a greater presence of anxiety and depressive symptoms. Greater involvement with the Aymara culture may be a protective factor against anxiety and depressive symptoms in Aymara children. This points to an additional benefit of maintaining cultural traditions within this population.
CONCLUSIONS
[ "Adolescent", "Anxiety Disorders", "Child", "Chile", "Depressive Disorder", "Female", "Humans", "Male", "Mental Health", "Personality Inventory", "Prevalence", "Schools" ]
3898225
Background
Psychiatric disorders have a high prevalence in children and adolescents, and often persist into adulthood. Their importance has made them a growing subject of research in recent years [1]. Among the psychiatric disorders, anxiety and depression are the most common reasons for seeking mental health services [2-5]. Worldwide, reported rates of anxiety in children and adolescents range from 5.6% to 21%, depending on the criteria and the type of anxiety disorder [2,6-9], with higher rates observed in older children [10]. The prevalence of depressive disorders also varies depending on the population and evaluation method [11]. Rates of depressive disorders between 0.5% and 2.0% have been reported for children in the general population aged 9 to 11 years [8]. In Chile, anxiety and depressive disorders are the most prevalent mental health diagnoses in children, with reported rates of 8.3% and 5.1%, respectively [12]. There has been very little research on anxiety and depression among different ethnic groups, with no research in Chile, despite its numerous indigenous groups. Stressors that can impact mental health, such as racism, family disconnectedness, community dysfunction, and social disadvantage, are more prevalent among ethnic minorities [13,14]. Indigenous peoples have identified certain stressors as causes of poor health, such as loss of native lands, culture, and identity; covert and overt racism; marginalization; and powerlessness [14-17]. Culture can also affect the way in which symptoms of mental illness manifest. Culture can determine or frame causative, precipitating, or maintenance factors that influence the onset, symptom profile, impact, course, and outcome of mental illness [17,18]. The prevalence of anxiety symptoms in children and adolescents also varies between ethnic minorities [19]. For instance, adolescent immigrants in Belgium reported more traumatic experiences, more problems with their peers, and greater avoidance than non-immigrants [20]. Similar findings were observed in a study of immigrant children in the Netherlands, who demonstrated a greater level of externalized and internalized problems than their peers [21-24]. Migration to a new country or sociocultural context can cause stress due to cultural acclimatization. Such stress tends to increase levels of anxiety and depression, loneliness, and psychosomatic symptoms, and to contribute to a confused sense of identity [25]. Research on stress due to cultural acclimatization has focused on how conflicts with the host language and the perception of discrimination can affect psychological well-being [26,27]. However, stress due to cultural acclimatization can take various paths. Thus, acclimatization is driven by transcultural or intercultural dynamics [28,29]. While there are numerous studies on the mental health of indigenous adults and adolescents in North America, studies of indigenous children are rare, and even more rare among indigenous children of Latin America [30-34]. The lack of such studies on Chilean indigenous children is worrisome. Mental health problems in childhood have important repercussions in adulthood, including declines in academic achievement and interpersonal relations, as well as ongoing behavioural problems and drug abuse [11,35,36]. Factors pertaining to the risks to and protection of mental health in children and adolescents vary in different contexts, especially between developed and developing countries [37]. 
This study’s objective is to analyze the differences in the presence of anxiety and depressive symptoms between Aymara and non-Aymara children and to evaluate the relation between mental disorders and cultural involvement. Aymara is a centuries-old culture centred in the Andes mountains. In 2012, the Aymara community had a population of about 2 million living in central and western Bolivia, southern Peru, northern Chile, and northwestern Argentina [38,39]. The Aymara have an agricultural economy based on cultivation of potatoes, corn, and quinoa and domestication of llamas, alpacas, and vicuñas [38,40,41], two activities that are complementary, both ecologically and economically [42,43]. The Aymara is a geographically broad and heterogeneous group, although certain common characteristics undoubtedly prevail [44]. The culture is characterized by its intergenerational communication, where elders provide advice to the young. In addition, it is a culture in which the mother focuses on household tasks and education, the father makes the family and monetary decisions and is the breadwinner, and family members work together to complete various tasks, with young children helping out with simple household tasks. The Aymara community in Chile has a population of approximately 48,000 [45], only 2,300 of which still live in their original mountain territories. The rest have emigrated towards the nearby port cities and mining regions, where they have intermingled with the working classes from other areas of the country [38,46-48]. Chile’s large-scale migration towards the coast and its policy of Chilenization – under which indigenous people were encouraged to accept Chilean culture and abandon their own, particularly during the Pinochet dictatorship – forged an identity for the Aymara people, shaped by the difficulty and complexity of the process in different areas [49]. This mass migration and rapid abandonment of rural settlements in the Andean foothills have been among the most difficult experiences for the Aymara. Migration has been a complex phenomenon that has not necessarily involved a departure without return, as evidenced by the number of simultaneous residencies and linkages that are maintained with the native communities [50]. In adapting to a hegemonic culture, Aymara families have abandoned, to some extent, traditional cultural patterns and are slowly adopting new and increasingly intercultural lifestyles [51,52]. These intercultural dynamics have led to an identity crisis, not only at the level of the Aymara population, but at the level of the country, in which the issue of interculturalism has not yet been constructively addressed. Given this context, a large number of people who could be identified as Aymara by heritage, or because they still practice certain Aymara customs, were, at least until recently, no longer considered as such [53]. Various government organizations around the world have expressed concern for the rights of indigenous people, especially their autonomy to raise, educate, and ensure the well-being of their children, in line with children’s rights [54]. Such recognition, however, is insufficient to eliminate the discrimination and other problems Aymara descendants face, leading to reduced access to economic, educational, and social opportunities. Given these inequalities and the stress due to cultural acclimatization experienced by Aymara youth, we expect higher levels of anxiety and depressive symptoms among Aymara children compared to non-Aymara children.
Methods
Participants The sample consisted of 748 Chilean children aged 9 to 15 years (mean 11.81, SD 1.41). They were recruited from the fifth to eighth grades of nine elementary schools serving lower socioeconomic classes in the city of Arica, which has a large Aymara population. Four of the schools were public and five were government-subsidized private schools. The children were equally divided between boys (n = 374) and girls (n = 374) and 275 (37%) of the children were Aymara. Of the total sample, 64 cases were missing because those children did not complete the questionnaires. The city and schools were chosen in an effort to maximize the number of Aymara included in the study. The sample consisted of 748 Chilean children aged 9 to 15 years (mean 11.81, SD 1.41). They were recruited from the fifth to eighth grades of nine elementary schools serving lower socioeconomic classes in the city of Arica, which has a large Aymara population. Four of the schools were public and five were government-subsidized private schools. The children were equally divided between boys (n = 374) and girls (n = 374) and 275 (37%) of the children were Aymara. Of the total sample, 64 cases were missing because those children did not complete the questionnaires. The city and schools were chosen in an effort to maximize the number of Aymara included in the study. Instruments Stress in Children (SiC) [55]: SiC is a self-administered instrument for children aged 9 to 12 years. It is designed to measure perceived anxiety and low levels of well-being, in addition to important aspects relating to confrontation and social support. These factors may be considered part of the broader concept of subjective health. This instrument was adapted for Chile by Caqueo-Urízar, Urzúa, and Osika [56]. The questionnaire consists of 21 items scored on a four-point Likert scale, with zero standing for none, one for sometimes, two for almost always, and three for always. The overall score is obtained by summing the points from each item, with a higher score indicating a higher degree of perceived stress. The mean plus or minus the standard deviation in a normal Swedish population was 2.05 ± 0.41 (girls, 1.99 ± 0.42; boys, 2.15 ± 0.37), with no significant differences detected between the genders. The two subscales of the questionnaire measure 1) loss of well-being, with items such as ‘I feel lonely’ and ‘I get sad’, and 2) sources of distress, requiring responses dealing with daily stressors, school being one of those with the highest factor loading. This questionnaire has high internal consistency (Cronbach’s alpha = 0.86) and is strongly associated with the Beck Youth Inventories of Emotional and Social Impairment [57]. Children Depression Inventory-Short (CDI-S) [58]: The CDI-S measures depressive symptoms in children. It contains three subscales: self-esteem, anhedonia, and hopelessness. It was adapted into Spanish by Barrio et al. [59]. The instrument contains 10 items that are easy for children to complete. It uses a three-point scale indicating absent, mild, or definite symptoms. Its reliability is good, with Cronbach’s alpha ranging from 0.71 to 0.89 and test–retest coefficients ranging from 0.74 to 0.83. This measure’s construct validity has also been confirmed [60]. Our sample’s Cronbach’s alpha was 0.84. The CDI-S can be used individually or collectively for subjects aged 7 to 17 years and takes between five and seven minutes to administer. The scores ranged from zero to 20, with an overall mean of 2.82 and a standard deviation of 2.43. 
The cut-off point is 7.04 for subjects aged 9 to 11 years and 8.24 for subjects aged 12 to 15 years. Inventory of the Level of Involvement in Aymara Culture – Escala de Involucramiento en la Cultura Aymara (EICA) [61]: This instrument measures a child’s level of involvement in Aymara culture. The development of this scale was carried out by the authors of the study, who are an epidemiologist, a clinical psychologist, and an anthropologist. The instrument was not based on an existing scale, since none was available, and its development was carried out in two stages, one qualitative and one quantitative. It contains 25 items evaluated on a Likert scale from zero to two, where the higher the score, the greater the role of Aymara culture in the daily life of the youth. These scores correspond to the average number of points obtained in the scale or respective subscale after multiplication by 10 to avoid many decimal places, and they represent the degree of involvement in each of several aspects of Aymara culture, ranging from zero to 20 points. The scale is composed of six subscales: family language use (six items), personal use of the Aymara language (three items), traditional celebrations (five items), traditional employment (three items), dances (five items), and music (three items). The scale has adequate psychometric properties and helps identify both children who are and children who are not highly involved in Aymara culture. The scale has a Cronbach’s alpha of 0.91. Stress in Children (SiC) [55]: SiC is a self-administered instrument for children aged 9 to 12 years. It is designed to measure perceived anxiety and low levels of well-being, in addition to important aspects relating to confrontation and social support. These factors may be considered part of the broader concept of subjective health. This instrument was adapted for Chile by Caqueo-Urízar, Urzúa, and Osika [56]. The questionnaire consists of 21 items scored on a four-point Likert scale, with zero standing for none, one for sometimes, two for almost always, and three for always. The overall score is obtained by summing the points from each item, with a higher score indicating a higher degree of perceived stress. The mean plus or minus the standard deviation in a normal Swedish population was 2.05 ± 0.41 (girls, 1.99 ± 0.42; boys, 2.15 ± 0.37), with no significant differences detected between the genders. The two subscales of the questionnaire measure 1) loss of well-being, with items such as ‘I feel lonely’ and ‘I get sad’, and 2) sources of distress, requiring responses dealing with daily stressors, school being one of those with the highest factor loading. This questionnaire has high internal consistency (Cronbach’s alpha = 0.86) and is strongly associated with the Beck Youth Inventories of Emotional and Social Impairment [57]. Children Depression Inventory-Short (CDI-S) [58]: The CDI-S measures depressive symptoms in children. It contains three subscales: self-esteem, anhedonia, and hopelessness. It was adapted into Spanish by Barrio et al. [59]. The instrument contains 10 items that are easy for children to complete. It uses a three-point scale indicating absent, mild, or definite symptoms. Its reliability is good, with Cronbach’s alpha ranging from 0.71 to 0.89 and test–retest coefficients ranging from 0.74 to 0.83. This measure’s construct validity has also been confirmed [60]. Our sample’s Cronbach’s alpha was 0.84. 
The CDI-S can be used individually or collectively for subjects aged 7 to 17 years and takes between five and seven minutes to administer. The scores ranged from zero to 20, with an overall mean of 2.82 and a standard deviation of 2.43. The cut-off point is 7.04 for subjects aged 9 to 11 years and 8.24 for subjects aged 12 to 15 years. Inventory of the Level of Involvement in Aymara Culture – Escala de Involucramiento en la Cultura Aymara (EICA) [61]: This instrument measures a child’s level of involvement in Aymara culture. The development of this scale was carried out by the authors of the study, who are an epidemiologist, a clinical psychologist, and an anthropologist. The instrument was not based on an existing scale, since none was available, and its development was carried out in two stages, one qualitative and one quantitative. It contains 25 items evaluated on a Likert scale from zero to two, where the higher the score, the greater the role of Aymara culture in the daily life of the youth. These scores correspond to the average number of points obtained in the scale or respective subscale after multiplication by 10 to avoid many decimal places, and they represent the degree of involvement in each of several aspects of Aymara culture, ranging from zero to 20 points. The scale is composed of six subscales: family language use (six items), personal use of the Aymara language (three items), traditional celebrations (five items), traditional employment (three items), dances (five items), and music (three items). The scale has adequate psychometric properties and helps identify both children who are and children who are not highly involved in Aymara culture. The scale has a Cronbach’s alpha of 0.91. Procedure The study was approved by the Ethics Committee of the University of Tarapacá and the National Commission on Science and Technology of the Government of Chile (CONICYT). The project complies with the Declaration of Helsinki [62]. The study used an inventory of public and state-subsidized private schools. The latter have a considerable population of students of Aymara origin, according to the list of beneficiaries of the Indigenous Scholarship for 2010–2011, granted by the National Council for School Assistance and Grants for the Ministry of Education. The schools were chosen and their principals contacted to obtain their consent to participate in the study. Once approval was received, the children were invited to participate in the study. Parents were asked to sign an informed consent form consenting for their children to participate in the study. Children with consenting parents were free to decide for themselves whether to participate in the study. The children were assessed during school hours in group sessions run by two psychologists. The criteria for including a child in the Aymara arm of the study were: – One or more Aymara surnames, as established by the Indigenous Law; – Self-description as Aymara ethnicity, implying both the child and the parents consider themselves indigenous. The study was approved by the Ethics Committee of the University of Tarapacá and the National Commission on Science and Technology of the Government of Chile (CONICYT). The project complies with the Declaration of Helsinki [62]. The study used an inventory of public and state-subsidized private schools. 
The latter have a considerable population of students of Aymara origin, according to the list of beneficiaries of the Indigenous Scholarship for 2010–2011, granted by the National Council for School Assistance and Grants for the Ministry of Education. The schools were chosen and their principals contacted to obtain their consent to participate in the study. Once approval was received, the children were invited to participate in the study. Parents were asked to sign an informed consent form consenting for their children to participate in the study. Children with consenting parents were free to decide for themselves whether to participate in the study. The children were assessed during school hours in group sessions run by two psychologists. The criteria for including a child in the Aymara arm of the study were: – One or more Aymara surnames, as established by the Indigenous Law; – Self-description as Aymara ethnicity, implying both the child and the parents consider themselves indigenous. Data analysis Statistical analysis of survey data was performed using SPSS 17.0. Means and standard deviations were calculated for subscale scores of each group and an independent samples t-test was run. Statistical analysis of survey data was performed using SPSS 17.0. Means and standard deviations were calculated for subscale scores of each group and an independent samples t-test was run.
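The analysis described above (group means and standard deviations followed by an independent-samples t-test in SPSS 17.0) has a straightforward open-source equivalent. The sketch below is a hypothetical illustration, not the study's code: the column names and example values are invented, and it additionally applies the age-dependent CDI-S cut-offs (7.04 for ages 9 to 11, 8.24 for ages 12 to 15) mentioned under Instruments.

```python
# Hypothetical sketch of the group comparison described under "Data analysis",
# re-expressed with pandas and SciPy instead of SPSS 17.0. Column names and
# values are invented for illustration only.
import pandas as pd
from scipy import stats

children = pd.DataFrame({
    "group": ["aymara", "aymara", "aymara", "non_aymara", "non_aymara", "non_aymara"],
    "age":   [10, 12, 14, 9, 11, 15],
    "cdi_s": [3.0, 8.0, 2.0, 4.0, 1.0, 9.0],           # CDI-S total (0-20)
    "sic":   [1.8, 2.3, 2.0, 1.9, 2.1, 2.4],           # SiC mean item score
})

# Age-dependent CDI-S cut-offs reported for the instrument.
children["cdi_cutoff"] = children["age"].apply(lambda a: 7.04 if a <= 11 else 8.24)
children["above_cutoff"] = children["cdi_s"] > children["cdi_cutoff"]

# Means and standard deviations per group, then an independent-samples t-test.
print(children.groupby("group")[["cdi_s", "sic"]].agg(["mean", "std"]))
aymara = children.loc[children["group"] == "aymara", "cdi_s"]
non_aymara = children.loc[children["group"] == "non_aymara", "cdi_s"]
t_stat, p_value = stats.ttest_ind(aymara, non_aymara, equal_var=True)
print(f"CDI-S: t = {t_stat:.2f}, p = {p_value:.3f}")
```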
Results
First, our analysis considered the information on ethnicity provided by the parents. As shown in Table 1, there was no significant difference in anxiety or depression symptoms between the Aymara and non-Aymara groups. Anxiety and depression symptoms in Aymara and non-Aymara children, measured by the SiC and CDI-S N = sample; X = Mean; SD = Standard Deviation. The EICA, which was administered only to the Aymara children, categorized 89 children as having high involvement with Aymara culture and 186 children as having low involvement. Table 2 shows the SiC and CDI-S results for these two subgroups. There was a significant difference between the subgroups on the hopelessness subscale of the CDI-S (p = 0.02) and a marginally significant difference on the overall score on the SiC (p = 0.06). In both these cases, the low-involvement group scored higher. Anxiety and depression symptoms in Aymara children with high and low levels of cultural involvement N = sample; X = Mean; SD = Standard Deviation.
Conclusions
In our study population, Aymara school children living in the city did not differ significantly from their non-Aymara peers in levels of anxiety or depressive symptoms. Among Aymara children, greater involvement with their culture conferred some protection against anxiety and depressive symptoms. This points to an additional benefit of maintaining cultural traditions within this population.
[ "Background", "Participants", "Instruments", "Procedure", "Data analysis", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Psychiatric disorders have a high prevalence in children and adolescents, and often persistent into adulthood. Their importance has made them a growing subject of research in recent years [1]. Among the psychiatric disorders, anxiety and depression are the most common reasons for seeking mental health services [2-5]. Worlwide statistics reported rates of anxiety in children and adolescents range from 5.6% to 21%, depending on the criteria and the type of anxiety disorder [2,6-9], with higher rates observed in older children [10]. The prevalence of depressive disorders also varies depending on the population and evaluation method [11]. Rates of depressive disorders between 0.5% and 2.0% have been reported for children in the general population aged 9 to 11 years [8].\nIn Chile, anxiety and depressive disorders are the most prevalent mental health diagnoses in children, with reported rates of 8.3% and 5.1%, respectively [12].\nThere has been very little research on anxiety and depression among different ethnic groups, with no research in Chile, despite its numerous indigenous groups.\nStressors that can impact mental health, such as racism, family disconnectedness, community dysfunction, and social disadvantage, are more prevalent among ethnic minorities [13,14]. Indigenous peoples have identified certain stressors as causes of poor health, such as loss of native lands, culture, and identity; covert and overt racism; marginalization; and powerlessness [14-17].\nCulture can also affect the way in which symptoms of mental illness manifest. Culture can determine or frame causative, precipitating, or maintenance factors that influence the onset, symptom profile, impact, course, and outcome of mental illness [17,18].\nThe prevalence of anxiety symptoms in children and adolescents also varies between ethnic minorities [19]. For instance, adolescent immigrants in Belgium reported more traumatic experiences, more problems with their peers, and greater avoidance than non-immigrants [20]. Similar findings were observed in a study of immigrant children in the Netherlands, who demonstrated a greater level of externalized and internalized problems than their peers [21-24].\nMigration to a new country or sociocultural context can cause stress due to cultural acclimatization. Such stress tends to increase levels of anxiety and depression, loneliness, psychosomatic symptoms, and contribute to a confused sense of identity [25].\nResearch on stress due to cultural acclimatization has focused on how conflicts with the host language and the perception of discrimination can affect psychological well-being [26,27]. However, stress due to cultural acclimatization can take various paths. Thus, acclimatization is driven by transcultural or intercultural dynamics [28,29].\nWhile there are numerous studies on the mental health of indigenous adults and adolescents in North America, studies of indigenous children are rare, and even more rare among indigenous children of Latin America [30-34].\nThe lack of such studies on Chilean indigenous children is worrisome. Mental health problems in childhood have important repercussions in adulthood, including declines in academic achievement and interpersonal relations, as well as ongoing behavioural problems and drug abuse [11,35,36]. 
Factors pertaining to the risks to and protection of mental health in children and adolescents vary in different contexts, especially between developed and developing countries [37].\nThis study’s objective is to analyze the differences in the presence of anxiety and depressive symptoms between Aymara and non-Aymara children and to evaluate the relation between mental disorders and cultural involvement.\nAymara is a centuries-old culture centred in the Andes mountains. In 2012, the Aymara community had a population of about 2 million living in g central and western Bolivia, southern Peru, northern Chile, and north western Argentina [38,39]. The Aymara have an agricultural economy based on cultivation of potatoes, corn, and quinoa and domestication of llamas, alpacas, and vicuñas [38,40,41], two activities that are complementary, both ecologically and economically [42,43]. The Aymara is a geographically broad and heterogeneous group, although certain common characteristics undoubtedly prevail [44]. The culture is characterized by its intergenerational communication, where elders provide advice to the young. In addition, it is a culture in which the mother focuses on household tasks and education, the father makes the family and monetary decisions and is the breadwinner, and family members work together to complete various tasks, with young children helping out with simple household tasks.\nThe Aymara community in Chile has a population of approximately 48,000 [45], only 2,300 of which still live in their original mountain territories. The rest have emigrated towards the nearby port cities and mining regions, where they have intermingled with the working classes from other areas of the country [38,46-48]. Chile’s large-scale migration towards the coast, its policy of Chilenization – where indigenous people are encouraged to accept Chilean culture and abandon their own particularly during the Pinochet dictatorship – this last forged an identity for the Aymara people, shaped by the difficulty and complexity of the process in different areas [49].\nThis mass migration and rapid abandonment of rural settlements in the Andean foothills have been among the most difficult experiences for the Aymara. Migration has been a complex phenomenon that has not necessarily involved a departure without return, as evidenced by the number of simultaneous residencies and linkages that are maintained with the native communities [50].\nIn adapting to a hegemonic culture, Aymara families have abandoned, to some extent, traditional cultural patterns and are slowly adopting new and increasingly intercultural lifestyles [51,52]. These intercultural dynamics have led to an identity crisis, not only at the level of the Aymara population, but at the level of the country, in which the issue of interculturalism has not yet been constructively addressed. Given this context, a large number of people who could be identified as Aymara by heritage or because they still practice certain Aymara customs, are no longer considered such, at least until recently [53].\nVarious government organizations around the world have expressed concern for the rights of indigenous people, especially their autonomy to raise, educate, and ensure the well-being of their children, in line with children’s rights [54]. 
Such recognition, however, is insufficient to eliminate the discrimination and other problems Aymara descendants face, leading to reduced access to economic, educational, and social opportunities.\nGiven these inequalities and the stress due to cultural acclimatization experienced by Aymara youth, we expect higher levels of anxiety and depressive symptoms among Aymara children compared to non-Aymara children.", "The sample consisted of 748 Chilean children aged 9 to 15 years (mean 11.81, SD 1.41). They were recruited from the fifth to eighth grades of nine elementary schools serving lower socioeconomic classes in the city of Arica, which has a large Aymara population. Four of the schools were public and five were government-subsidized private schools. The children were equally divided between boys (n = 374) and girls (n = 374) and 275 (37%) of the children were Aymara. Of the total sample there were 64 cases missing, this because they did not complete questionnaires. The city and schools were chosen in an effort to maximize the number of Aymara included in the study.", "\nStress in Children (SiC)\n[55]: SiC is a self-administered instrument for children aged 9 to 12 years. It is designed to measure perceived anxiety and low levels of well-being, in addition to important aspects relating to confrontation and social support. These factors may be considered part the broader concept of subjective health. This instrument was adapted for Chile by Caqueo-Urízar, Urzúa, and Osika [56].\nThe questionnaire consists of 21 items scored on a four-point Likert scale, with zero standing for none, one for sometimes, two for almost always, and three for always. The overall score is obtained by summing the points from each item, with a higher score indicating a higher degree of perceived stress. The mean plus or minus the standard deviation in a normal Swedish population was 2.05 ± 0.41 (girls, 1.99 ± 0.42; boys, 2.15 ± 0.37), with no significant differences detected between the genders. The two subscales of the questionnaire measure 1) loss of well-being, with items such as ‘I feel lonely’ and ‘I get sad’, and 2) sources of distress, requiring responses dealing with daily stressors, school being one of those with the highest factor loading. This questionnaire has high internal consistency (Cronbach’s alpha = 0.86) and is strongly associated with the Beck Youth Inventories of Emotional and Social Impairment [57].\n\nChildren Depression Inventory-Short (CDI-S)\n[58]: The CDI-S measures depressive symptoms in children. It contains three subscales: self-esteem, anhedonia, and hopelessness. It was adapted into Spanish by Barrio et al. [59]. The instrument contains 10 items that are easy for children to complete. It uses a three-point scale indicating absent, mild, or definite symptoms. Its reliability is good, with Cronbach’s alpha ranging from 0.71 to 0.89 and test–retest coefficients ranging from 0.74 to 0.83. This measure’s construct validity has also been confirmed [60]. Our sample’s Cronbach’s alpha was 0.84. The CDI-S can be used individually or collectively for subjects aged 7 to 17 years and takes between five and seven minutes to administer. The scores ranged from zero to 20, with an overall mean of 2.82 and a standard deviation of 2.43. 
The cut-off point is 7.04 for subjects aged 9 to 11 years and 8.24 for subjects aged 12 to 15 years.\n\nInventory of the Level of Involvement in Aymara Culture – Escala de Involucramiento en la Cultura Aymara (EICA)\n[61]: This instrument measures a child’s level of involvement in Aymara culture. The development of this scale was carried out by the authors of the study who an epidemiologist, a clinical psychologist and an anthropologist. The instrument was not based on another scale, since there is no information and was held in two stages, one qualitative and one quantitative. It contains 25 items evaluated on a Likert scale from zero to two, where the higher the score, the greater the role of Aymara culture in the daily life of the youth. These scores correspond to the average number of points obtained in the scale or respective subscale after multiplication by 10 to avoid many decimal places and they represent the degree of involvement in each of several aspects of Aymara culture, ranging from zero to 20 points. The scale is composed of six subscales: family language use (six items), personal use of the Aymara language (three items), traditional celebrations (five items), traditional employment (three items), dances (five items), and music (three items). The scale has adequate psychometric properties and helps identify both children who are and children who are not highly involved in Aymara culture. The scale has a Cronbach’s alpha of 0.91.", "The study was approved by the Ethics Committee of the University of Tarapacá and the National Commission on Science and Technology of the Government of Chile (CONICYT). The project complies with the Declaration of Helsinki [62].\nThe study used an inventory of public and state-subsidized private schools. The latter have a considerable population of students of Aymara origin, according to the list of beneficiaries of the Indigenous Scholarship for 2010–2011, granted by the National Council for School Assistance and Grants for the Ministry of Education. The schools were chosen and their principals contacted to obtain their consent to participate in the study. Once approval was received, the children were invited to participate in the study. Parents were asked to sign an informed consent form consenting for their children to participate in the study. Children with consenting parents were free to decide for themselves whether to participate in the study.\nThe children were assessed during school hours in group sessions run by two psychologists. The criteria for including a child in the Aymara arm of the study were:\n– One or more Aymara surnames, as established by the Indigenous Law;\n– Self-description as Aymara ethnicity, implying both the child and the parents consider themselves indigenous.", "Statistical analysis of survey data was performed using SPSS 17.0. Means and standard deviations were calculated for subscale scores of each group and an independent samples t-test was run.", "SiC: Stress in Children; CDI-S: Children Depression Inventory-Short; EICA: Inventory of the Level of Involvement in the Aymara Culture (Escala de Involucramiento en la Cultura Aymara).", "The authors have declared that there are no conflicts of interest in relation to the subject of this study.", "ACU contributed to the design and coordination of the study. AU was responsible for the primary study design, consulted on the methodology, and assisted with the data analysis and interpretation. KDM participated in the data collection and manuscript editing. 
All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-244X/14/11/prepub\n" ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Participants", "Instruments", "Procedure", "Data analysis", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Psychiatric disorders have a high prevalence in children and adolescents, and often persistent into adulthood. Their importance has made them a growing subject of research in recent years [1]. Among the psychiatric disorders, anxiety and depression are the most common reasons for seeking mental health services [2-5]. Worlwide statistics reported rates of anxiety in children and adolescents range from 5.6% to 21%, depending on the criteria and the type of anxiety disorder [2,6-9], with higher rates observed in older children [10]. The prevalence of depressive disorders also varies depending on the population and evaluation method [11]. Rates of depressive disorders between 0.5% and 2.0% have been reported for children in the general population aged 9 to 11 years [8].\nIn Chile, anxiety and depressive disorders are the most prevalent mental health diagnoses in children, with reported rates of 8.3% and 5.1%, respectively [12].\nThere has been very little research on anxiety and depression among different ethnic groups, with no research in Chile, despite its numerous indigenous groups.\nStressors that can impact mental health, such as racism, family disconnectedness, community dysfunction, and social disadvantage, are more prevalent among ethnic minorities [13,14]. Indigenous peoples have identified certain stressors as causes of poor health, such as loss of native lands, culture, and identity; covert and overt racism; marginalization; and powerlessness [14-17].\nCulture can also affect the way in which symptoms of mental illness manifest. Culture can determine or frame causative, precipitating, or maintenance factors that influence the onset, symptom profile, impact, course, and outcome of mental illness [17,18].\nThe prevalence of anxiety symptoms in children and adolescents also varies between ethnic minorities [19]. For instance, adolescent immigrants in Belgium reported more traumatic experiences, more problems with their peers, and greater avoidance than non-immigrants [20]. Similar findings were observed in a study of immigrant children in the Netherlands, who demonstrated a greater level of externalized and internalized problems than their peers [21-24].\nMigration to a new country or sociocultural context can cause stress due to cultural acclimatization. Such stress tends to increase levels of anxiety and depression, loneliness, psychosomatic symptoms, and contribute to a confused sense of identity [25].\nResearch on stress due to cultural acclimatization has focused on how conflicts with the host language and the perception of discrimination can affect psychological well-being [26,27]. However, stress due to cultural acclimatization can take various paths. Thus, acclimatization is driven by transcultural or intercultural dynamics [28,29].\nWhile there are numerous studies on the mental health of indigenous adults and adolescents in North America, studies of indigenous children are rare, and even more rare among indigenous children of Latin America [30-34].\nThe lack of such studies on Chilean indigenous children is worrisome. Mental health problems in childhood have important repercussions in adulthood, including declines in academic achievement and interpersonal relations, as well as ongoing behavioural problems and drug abuse [11,35,36]. 
Factors pertaining to the risks to and protection of mental health in children and adolescents vary in different contexts, especially between developed and developing countries [37].\nThis study’s objective is to analyze the differences in the presence of anxiety and depressive symptoms between Aymara and non-Aymara children and to evaluate the relation between mental disorders and cultural involvement.\nAymara is a centuries-old culture centred in the Andes mountains. In 2012, the Aymara community had a population of about 2 million living in central and western Bolivia, southern Peru, northern Chile, and northwestern Argentina [38,39]. The Aymara have an agricultural economy based on cultivation of potatoes, corn, and quinoa and domestication of llamas, alpacas, and vicuñas [38,40,41], two activities that are complementary, both ecologically and economically [42,43]. The Aymara are a geographically broad and heterogeneous group, although certain common characteristics undoubtedly prevail [44]. The culture is characterized by its intergenerational communication, where elders provide advice to the young. In addition, it is a culture in which the mother focuses on household tasks and education, the father makes the family and monetary decisions and is the breadwinner, and family members work together to complete various tasks, with young children helping out with simple household tasks.\nThe Aymara community in Chile has a population of approximately 48,000 [45], only 2,300 of which still live in their original mountain territories. The rest have emigrated towards the nearby port cities and mining regions, where they have intermingled with the working classes from other areas of the country [38,46-48]. Chile’s large-scale migration towards the coast and its policy of Chilenization – under which indigenous people were encouraged to accept Chilean culture and abandon their own, particularly during the Pinochet dictatorship – forged an identity for the Aymara people, shaped by the difficulty and complexity of the process in different areas [49].\nThis mass migration and rapid abandonment of rural settlements in the Andean foothills have been among the most difficult experiences for the Aymara. Migration has been a complex phenomenon that has not necessarily involved a departure without return, as evidenced by the number of simultaneous residencies and linkages that are maintained with the native communities [50].\nIn adapting to a hegemonic culture, Aymara families have abandoned, to some extent, traditional cultural patterns and are slowly adopting new and increasingly intercultural lifestyles [51,52]. These intercultural dynamics have led to an identity crisis, not only at the level of the Aymara population, but at the level of the country, in which the issue of interculturalism has not yet been constructively addressed. Given this context, a large number of people who could be identified as Aymara by heritage or because they still practice certain Aymara customs are no longer considered such, at least until recently [53].\nVarious government organizations around the world have expressed concern for the rights of indigenous people, especially their autonomy to raise, educate, and ensure the well-being of their children, in line with children’s rights [54]. 
Such recognition, however, is insufficient to eliminate the discrimination and other problems Aymara descendants face, leading to reduced access to economic, educational, and social opportunities.\nGiven these inequalities and the stress due to cultural acclimatization experienced by Aymara youth, we expect higher levels of anxiety and depressive symptoms among Aymara children compared to non-Aymara children.", " Participants The sample consisted of 748 Chilean children aged 9 to 15 years (mean 11.81, SD 1.41). They were recruited from the fifth to eighth grades of nine elementary schools serving lower socioeconomic classes in the city of Arica, which has a large Aymara population. Four of the schools were public and five were government-subsidized private schools. The children were equally divided between boys (n = 374) and girls (n = 374) and 275 (37%) of the children were Aymara. Of the total sample, 64 cases were missing because those children did not complete the questionnaires. The city and schools were chosen in an effort to maximize the number of Aymara included in the study.\n Instruments \nStress in Children (SiC)\n[55]: SiC is a self-administered instrument for children aged 9 to 12 years. It is designed to measure perceived anxiety and low levels of well-being, in addition to important aspects relating to confrontation and social support. These factors may be considered part of the broader concept of subjective health. This instrument was adapted for Chile by Caqueo-Urízar, Urzúa, and Osika [56].\nThe questionnaire consists of 21 items scored on a four-point Likert scale, with zero standing for none, one for sometimes, two for almost always, and three for always. The overall score is obtained by summing the points from each item, with a higher score indicating a higher degree of perceived stress. The mean plus or minus the standard deviation in a normal Swedish population was 2.05 ± 0.41 (girls, 1.99 ± 0.42; boys, 2.15 ± 0.37), with no significant differences detected between the genders. The two subscales of the questionnaire measure 1) loss of well-being, with items such as ‘I feel lonely’ and ‘I get sad’, and 2) sources of distress, requiring responses dealing with daily stressors, school being one of those with the highest factor loading. This questionnaire has high internal consistency (Cronbach’s alpha = 0.86) and is strongly associated with the Beck Youth Inventories of Emotional and Social Impairment [57].\n\nChildren Depression Inventory-Short (CDI-S)\n[58]: The CDI-S measures depressive symptoms in children. It contains three subscales: self-esteem, anhedonia, and hopelessness. It was adapted into Spanish by Barrio et al. [59]. The instrument contains 10 items that are easy for children to complete. It uses a three-point scale indicating absent, mild, or definite symptoms. Its reliability is good, with Cronbach’s alpha ranging from 0.71 to 0.89 and test–retest coefficients ranging from 0.74 to 0.83. This measure’s construct validity has also been confirmed [60]. Our sample’s Cronbach’s alpha was 0.84. The CDI-S can be used individually or collectively for subjects aged 7 to 17 years and takes between five and seven minutes to administer. The scores ranged from zero to 20, with an overall mean of 2.82 and a standard deviation of 2.43. The cut-off point is 7.04 for subjects aged 9 to 11 years and 8.24 for subjects aged 12 to 15 years.\n\nInventory of the Level of Involvement in Aymara Culture – Escala de Involucramiento en la Cultura Aymara (EICA)\n[61]: This instrument measures a child’s level of involvement in Aymara culture. The scale was developed by the authors of the study: an epidemiologist, a clinical psychologist, and an anthropologist. The instrument was not based on an existing scale, since none was available, and its development was carried out in two stages, one qualitative and one quantitative. It contains 25 items evaluated on a Likert scale from zero to two, where the higher the score, the greater the role of Aymara culture in the daily life of the youth. Scores correspond to the average number of points obtained on the scale or respective subscale, multiplied by 10 to avoid excessive decimal places; they represent the degree of involvement in each of several aspects of Aymara culture, ranging from zero to 20 points. The scale is composed of six subscales: family language use (six items), personal use of the Aymara language (three items), traditional celebrations (five items), traditional employment (three items), dances (five items), and music (three items). The scale has adequate psychometric properties and helps identify both children who are and children who are not highly involved in Aymara culture. The scale has a Cronbach’s alpha of 0.91.\n Procedure The study was approved by the Ethics Committee of the University of Tarapacá and the National Commission on Science and Technology of the Government of Chile (CONICYT). The project complies with the Declaration of Helsinki [62].\nThe study used an inventory of public and state-subsidized private schools. The latter have a considerable population of students of Aymara origin, according to the list of beneficiaries of the Indigenous Scholarship for 2010–2011, granted by the National Council for School Assistance and Grants for the Ministry of Education. The schools were chosen and their principals contacted to obtain their consent to participate in the study. Once approval was received, the children were invited to participate in the study. Parents were asked to sign an informed consent form allowing their children to participate in the study. Children with consenting parents were free to decide for themselves whether to participate in the study.\nThe children were assessed during school hours in group sessions run by two psychologists. The criteria for including a child in the Aymara arm of the study were:\n– One or more Aymara surnames, as established by the Indigenous Law;\n– Self-description as Aymara ethnicity, implying both the child and the parents consider themselves indigenous.\n Data analysis Statistical analysis of survey data was performed using SPSS 17.0. Means and standard deviations were calculated for subscale scores of each group and an independent samples t-test was run.", "The sample consisted of 748 Chilean children aged 9 to 15 years (mean 11.81, SD 1.41). They were recruited from the fifth to eighth grades of nine elementary schools serving lower socioeconomic classes in the city of Arica, which has a large Aymara population. Four of the schools were public and five were government-subsidized private schools. The children were equally divided between boys (n = 374) and girls (n = 374) and 275 (37%) of the children were Aymara. Of the total sample, 64 cases were missing because those children did not complete the questionnaires. The city and schools were chosen in an effort to maximize the number of Aymara included in the study.", "\nStress in Children (SiC)\n[55]: SiC is a self-administered instrument for children aged 9 to 12 years. It is designed to measure perceived anxiety and low levels of well-being, in addition to important aspects relating to confrontation and social support. These factors may be considered part of the broader concept of subjective health. This instrument was adapted for Chile by Caqueo-Urízar, Urzúa, and Osika [56].\nThe questionnaire consists of 21 items scored on a four-point Likert scale, with zero standing for none, one for sometimes, two for almost always, and three for always. The overall score is obtained by summing the points from each item, with a higher score indicating a higher degree of perceived stress. 
The mean plus or minus the standard deviation in a normal Swedish population was 2.05 ± 0.41 (girls, 1.99 ± 0.42; boys, 2.15 ± 0.37), with no significant differences detected between the genders. The two subscales of the questionnaire measure 1) loss of well-being, with items such as ‘I feel lonely’ and ‘I get sad’, and 2) sources of distress, requiring responses dealing with daily stressors, school being one of those with the highest factor loading. This questionnaire has high internal consistency (Cronbach’s alpha = 0.86) and is strongly associated with the Beck Youth Inventories of Emotional and Social Impairment [57].\n\nChildren Depression Inventory-Short (CDI-S)\n[58]: The CDI-S measures depressive symptoms in children. It contains three subscales: self-esteem, anhedonia, and hopelessness. It was adapted into Spanish by Barrio et al. [59]. The instrument contains 10 items that are easy for children to complete. It uses a three-point scale indicating absent, mild, or definite symptoms. Its reliability is good, with Cronbach’s alpha ranging from 0.71 to 0.89 and test–retest coefficients ranging from 0.74 to 0.83. This measure’s construct validity has also been confirmed [60]. Our sample’s Cronbach’s alpha was 0.84. The CDI-S can be used individually or collectively for subjects aged 7 to 17 years and takes between five and seven minutes to administer. The scores ranged from zero to 20, with an overall mean of 2.82 and a standard deviation of 2.43. The cut-off point is 7.04 for subjects aged 9 to 11 years and 8.24 for subjects aged 12 to 15 years.\n\nInventory of the Level of Involvement in Aymara Culture – Escala de Involucramiento en la Cultura Aymara (EICA)\n[61]: This instrument measures a child’s level of involvement in Aymara culture. The scale was developed by the authors of the study: an epidemiologist, a clinical psychologist, and an anthropologist. The instrument was not based on an existing scale, since none was available, and its development was carried out in two stages, one qualitative and one quantitative. It contains 25 items evaluated on a Likert scale from zero to two, where the higher the score, the greater the role of Aymara culture in the daily life of the youth. Scores correspond to the average number of points obtained on the scale or respective subscale, multiplied by 10 to avoid excessive decimal places; they represent the degree of involvement in each of several aspects of Aymara culture, ranging from zero to 20 points. The scale is composed of six subscales: family language use (six items), personal use of the Aymara language (three items), traditional celebrations (five items), traditional employment (three items), dances (five items), and music (three items). The scale has adequate psychometric properties and helps identify both children who are and children who are not highly involved in Aymara culture. The scale has a Cronbach’s alpha of 0.91.", "The study was approved by the Ethics Committee of the University of Tarapacá and the National Commission on Science and Technology of the Government of Chile (CONICYT). The project complies with the Declaration of Helsinki [62].\nThe study used an inventory of public and state-subsidized private schools. The latter have a considerable population of students of Aymara origin, according to the list of beneficiaries of the Indigenous Scholarship for 2010–2011, granted by the National Council for School Assistance and Grants for the Ministry of Education. 
The schools were chosen and their principals contacted to obtain their consent to participate in the study. Once approval was received, the children were invited to participate in the study. Parents were asked to sign an informed consent form allowing their children to participate in the study. Children with consenting parents were free to decide for themselves whether to participate in the study.\nThe children were assessed during school hours in group sessions run by two psychologists. The criteria for including a child in the Aymara arm of the study were:\n– One or more Aymara surnames, as established by the Indigenous Law;\n– Self-description as Aymara ethnicity, implying both the child and the parents consider themselves indigenous.", "Statistical analysis of survey data was performed using SPSS 17.0. Means and standard deviations were calculated for subscale scores of each group and an independent samples t-test was run.", "First, our analysis considered the information on ethnicity provided by the parents. As shown in Table 1, there was no significant difference in anxiety or depression symptoms between the Aymara and non-Aymara groups.\nAnxiety and depression symptoms in Aymara and non-Aymara children, measured by the SiC and CDI-S\nN = sample size; X = Mean; SD = Standard Deviation.\nThe EICA, which was administered only to the Aymara children, categorized 89 children as having high involvement with Aymara culture and 186 children as having low involvement. Table 2 shows the SiC and CDI-S results for these two subgroups. There was a significant difference between the subgroups on the hopelessness subscale of the CDI-S (p = 0.02) and a marginally significant difference on the overall score on the SiC (p = 0.06). In both these cases, the low-involvement group scored higher.\nAnxiety and depression symptoms in Aymara children with high and low levels of cultural involvement\nN = sample size; X = Mean; SD = Standard Deviation.", "The study’s objective was to measure potential differences in the mental health of Aymara and non-Aymara children in northern Chile. We found no difference in levels of anxiety or depression symptoms between Aymara children and their non-Aymara peers. This finding is contrary to the stress due to cultural acclimatization hypothesis [25]. This suggests that Aymara children possess adequate adaptive mechanisms for integrating into an urban context. These results are consistent with those of Zwirs et al. [63] in Europe and Costello et al. [64] in the United States.\nWhile there may be no difference in anxiety and depression symptoms between Aymara children and their non-Aymara peers, differences could still arise as they grow into adulthood and encounter greater discrimination [63].\nAnother possibility is that although these particular children are Aymara, they may have been away from the high Andean plateau long enough to have learned to live between two cultures: that of their ancestors and their current urban environment. It is possible that stress from cultural acclimatization may not have developed in these children and consequently there is no reduction in their mental health. In fact, various aspects of their lives may have improved [65]. Some authors describe the ‘cultural advantages’ of younger groups ‘forced’ into historic processes of cultural acclimatization [66]. 
This notion is consistent with a study of immigrant children in Barcelona, who developed strategies for dealing with the adaptation process and actively searched for solutions to conflicts [67].\nIn anthropological terms, the dynamics of cultural acclimatization can result in difficult but constructive processes of ‘creolization’. This means that ‘contact zones’ – interfaces between different practices or cultural environments [68] – are being generated in these indigenous children [66]. Outside these zones, coping strategies can arise that allow individuals to combine different living strategies, enabling adequate mental health.\nWithin the Aymara children, those with high involvement in Aymara traditions exhibited lower levels of anxiety and fewer feelings of hopelessness. These results indicate that Aymara children with high cultural involvement may cope better with anxiety and feelings of hopelessness. Traditional celebrations, for example, have two characteristics that protect against the development of mental disorders: social and community participation and religious events. High-involvement children’s regular contact with other children of the community exposes them more to a cultural outlook that emphasizes the positive events in one’s life, which is associated with fewer feelings of hopelessness. Research during the past two decades has shown that social support is significantly linked to psychological well-being [69]. Studies demonstrate that children with social support who are exposed to adverse experiences exhibit less psychological illness compared to children without social support [69].\nReligious beliefs have also been associated with healthy adjustment in adolescence. This may function through personal beliefs regarding behaviour, constraints on behaviour, or support for healthy behaviour by religious institutions [70,71]. In our study, participation in activities linked to religious beliefs or ritual practices specific to the Aymara culture could be a protective factor.\nThe study has a number of limitations. First, selection of the subjects was not random. Second, only Aymara children in Chile were evaluated, and these results may not apply to Aymara in other countries. Future studies should also examine children of other ethnicities in Chile, such as Mapuches, Rapa Nuis, and Quechuas, as well as indigenous children in other countries. Third, neither the SiC nor the CDI-S was validated in Chile, and only the SiC was adapted to the country. There is a likelihood of difficulties arising when psychological concepts and measurement techniques developed for one culture are used in another [20,72]. This issue can also be related to social expectancy. Fourth, this study contains no information from parents or teachers, which would have been important for properly understanding the children’s problems. Finally, this was a cross-sectional study. It is important to carry out longitudinal studies to evaluate the consistency of findings over time [73].", "In our study population, Aymara school children living in the city did not differ significantly from their non-Aymara peers in levels of anxiety or depressive symptoms. Among Aymara children, greater involvement with their culture conferred some protection against anxiety and depressive symptoms. 
This points to an additional benefit of maintaining cultural traditions within this population.", "SiC: Stress in Children; CDI-S: Children Depression Inventory-Short; EICA: Inventory of the Level of Involvement in the Aymara Culture (Escala de Involucramiento en la Cultura Aymara).", "The authors have declared that there are no conflicts of interest in relation to the subject of this study.", "ACU contributed to the design and coordination of the study. AU was responsible for the primary study design, consulted on the methodology, and assisted with the data analysis and interpretation. KDM participated in the data collection and manuscript editing. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-244X/14/11/prepub\n" ]
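The Instruments texts above describe three scoring rules in prose: the SiC total is the sum of 21 items scored 0–3; the CDI-S total of 10 items scored 0–2 (range 0–20) is compared against an age-dependent cut-off of 7.04 (ages 9–11) or 8.24 (ages 12–15); and the EICA score is the item mean multiplied by 10, giving a 0–20 range per scale or subscale. The sketch below only re-expresses that arithmetic; the function names and example responses are invented for illustration, it is not the scoring code used in the study, and whether the CDI-S cut-off is inclusive is not stated in the text.

```python
# Illustrative re-expression of the scoring rules described in the Instruments
# texts above. Function names and example responses are hypothetical.

def sic_total(responses):
    """SiC: 21 items scored 0-3; the overall score is the sum (higher = more perceived stress)."""
    assert len(responses) == 21 and all(0 <= r <= 3 for r in responses)
    return sum(responses)

def cdi_s_score(responses, age):
    """CDI-S: 10 items scored 0-2 (range 0-20), compared with an age-dependent cut-off."""
    assert len(responses) == 10 and all(0 <= r <= 2 for r in responses)
    total = sum(responses)
    cutoff = 7.04 if age <= 11 else 8.24  # 7.04 for ages 9-11, 8.24 for ages 12-15
    return total, total >= cutoff         # treating the cut-off as inclusive is an assumption

def eica_score(responses):
    """EICA: 25 items scored 0-2; score = item mean x 10, giving a 0-20 range."""
    assert len(responses) == 25 and all(0 <= r <= 2 for r in responses)
    return 10 * sum(responses) / len(responses)

# Hypothetical usage
print(sic_total([1] * 21))            # 21
print(cdi_s_score([1] * 10, age=10))  # (10, True): above the 7.04 cut-off for a 10-year-old
print(eica_score([2] * 25))           # 20.0: maximum cultural involvement
```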
[ null, "methods", null, null, null, null, "results", "discussion", "conclusions", null, null, null, null ]
[ "Mental health", "Indigenous", "Aymara children", "Anxiety", "Depression" ]
Background: Psychiatric disorders have a high prevalence in children and adolescents, and often persist into adulthood. Their importance has made them a growing subject of research in recent years [1]. Among the psychiatric disorders, anxiety and depression are the most common reasons for seeking mental health services [2-5]. Worldwide, reported rates of anxiety in children and adolescents range from 5.6% to 21%, depending on the criteria and the type of anxiety disorder [2,6-9], with higher rates observed in older children [10]. The prevalence of depressive disorders also varies depending on the population and evaluation method [11]. Rates of depressive disorders between 0.5% and 2.0% have been reported for children in the general population aged 9 to 11 years [8]. In Chile, anxiety and depressive disorders are the most prevalent mental health diagnoses in children, with reported rates of 8.3% and 5.1%, respectively [12]. There has been very little research on anxiety and depression among different ethnic groups, with no research in Chile, despite its numerous indigenous groups. Stressors that can impact mental health, such as racism, family disconnectedness, community dysfunction, and social disadvantage, are more prevalent among ethnic minorities [13,14]. Indigenous peoples have identified certain stressors as causes of poor health, such as loss of native lands, culture, and identity; covert and overt racism; marginalization; and powerlessness [14-17]. Culture can also affect the way in which symptoms of mental illness manifest. Culture can determine or frame causative, precipitating, or maintenance factors that influence the onset, symptom profile, impact, course, and outcome of mental illness [17,18]. The prevalence of anxiety symptoms in children and adolescents also varies between ethnic minorities [19]. For instance, adolescent immigrants in Belgium reported more traumatic experiences, more problems with their peers, and greater avoidance than non-immigrants [20]. Similar findings were observed in a study of immigrant children in the Netherlands, who demonstrated a greater level of externalized and internalized problems than their peers [21-24]. Migration to a new country or sociocultural context can cause stress due to cultural acclimatization. Such stress tends to increase levels of anxiety and depression, loneliness, and psychosomatic symptoms, and to contribute to a confused sense of identity [25]. Research on stress due to cultural acclimatization has focused on how conflicts with the host language and the perception of discrimination can affect psychological well-being [26,27]. However, stress due to cultural acclimatization can take various paths. Thus, acclimatization is driven by transcultural or intercultural dynamics [28,29]. While there are numerous studies on the mental health of indigenous adults and adolescents in North America, studies of indigenous children are rare, and even rarer among indigenous children of Latin America [30-34]. The lack of such studies on Chilean indigenous children is worrisome. Mental health problems in childhood have important repercussions in adulthood, including declines in academic achievement and interpersonal relations, as well as ongoing behavioural problems and drug abuse [11,35,36]. Factors pertaining to the risks to and protection of mental health in children and adolescents vary in different contexts, especially between developed and developing countries [37]. 
This study’s objective is to analyze the differences in the presence of anxiety and depressive symptoms between Aymara and non-Aymara children and to evaluate the relation between mental disorders and cultural involvement. Aymara is a centuries-old culture centred in the Andes mountains. In 2012, the Aymara community had a population of about 2 million living in central and western Bolivia, southern Peru, northern Chile, and northwestern Argentina [38,39]. The Aymara have an agricultural economy based on cultivation of potatoes, corn, and quinoa and domestication of llamas, alpacas, and vicuñas [38,40,41], two activities that are complementary, both ecologically and economically [42,43]. The Aymara are a geographically broad and heterogeneous group, although certain common characteristics undoubtedly prevail [44]. The culture is characterized by its intergenerational communication, where elders provide advice to the young. In addition, it is a culture in which the mother focuses on household tasks and education, the father makes the family and monetary decisions and is the breadwinner, and family members work together to complete various tasks, with young children helping out with simple household tasks. The Aymara community in Chile has a population of approximately 48,000 [45], only 2,300 of which still live in their original mountain territories. The rest have emigrated towards the nearby port cities and mining regions, where they have intermingled with the working classes from other areas of the country [38,46-48]. Chile’s large-scale migration towards the coast and its policy of Chilenization – under which indigenous people were encouraged to accept Chilean culture and abandon their own, particularly during the Pinochet dictatorship – forged an identity for the Aymara people, shaped by the difficulty and complexity of the process in different areas [49]. This mass migration and rapid abandonment of rural settlements in the Andean foothills have been among the most difficult experiences for the Aymara. Migration has been a complex phenomenon that has not necessarily involved a departure without return, as evidenced by the number of simultaneous residencies and linkages that are maintained with the native communities [50]. In adapting to a hegemonic culture, Aymara families have abandoned, to some extent, traditional cultural patterns and are slowly adopting new and increasingly intercultural lifestyles [51,52]. These intercultural dynamics have led to an identity crisis, not only at the level of the Aymara population, but at the level of the country, in which the issue of interculturalism has not yet been constructively addressed. Given this context, a large number of people who could be identified as Aymara by heritage or because they still practice certain Aymara customs are no longer considered such, at least until recently [53]. Various government organizations around the world have expressed concern for the rights of indigenous people, especially their autonomy to raise, educate, and ensure the well-being of their children, in line with children’s rights [54]. Such recognition, however, is insufficient to eliminate the discrimination and other problems Aymara descendants face, leading to reduced access to economic, educational, and social opportunities. Given these inequalities and the stress due to cultural acclimatization experienced by Aymara youth, we expect higher levels of anxiety and depressive symptoms among Aymara children compared to non-Aymara children. 
Methods: Participants The sample consisted of 748 Chilean children aged 9 to 15 years (mean 11.81, SD 1.41). They were recruited from the fifth to eighth grades of nine elementary schools serving lower socioeconomic classes in the city of Arica, which has a large Aymara population. Four of the schools were public and five were government-subsidized private schools. The children were equally divided between boys (n = 374) and girls (n = 374) and 275 (37%) of the children were Aymara. Of the total sample, 64 cases were missing because those children did not complete the questionnaires. The city and schools were chosen in an effort to maximize the number of Aymara included in the study. Instruments Stress in Children (SiC) [55]: SiC is a self-administered instrument for children aged 9 to 12 years. It is designed to measure perceived anxiety and low levels of well-being, in addition to important aspects relating to confrontation and social support. These factors may be considered part of the broader concept of subjective health. This instrument was adapted for Chile by Caqueo-Urízar, Urzúa, and Osika [56]. The questionnaire consists of 21 items scored on a four-point Likert scale, with zero standing for none, one for sometimes, two for almost always, and three for always. The overall score is obtained by summing the points from each item, with a higher score indicating a higher degree of perceived stress. The mean plus or minus the standard deviation in a normal Swedish population was 2.05 ± 0.41 (girls, 1.99 ± 0.42; boys, 2.15 ± 0.37), with no significant differences detected between the genders. The two subscales of the questionnaire measure 1) loss of well-being, with items such as ‘I feel lonely’ and ‘I get sad’, and 2) sources of distress, requiring responses dealing with daily stressors, school being one of those with the highest factor loading. This questionnaire has high internal consistency (Cronbach’s alpha = 0.86) and is strongly associated with the Beck Youth Inventories of Emotional and Social Impairment [57]. Children Depression Inventory-Short (CDI-S) [58]: The CDI-S measures depressive symptoms in children. It contains three subscales: self-esteem, anhedonia, and hopelessness. It was adapted into Spanish by Barrio et al. [59]. The instrument contains 10 items that are easy for children to complete. It uses a three-point scale indicating absent, mild, or definite symptoms. Its reliability is good, with Cronbach’s alpha ranging from 0.71 to 0.89 and test–retest coefficients ranging from 0.74 to 0.83. This measure’s construct validity has also been confirmed [60]. Our sample’s Cronbach’s alpha was 0.84. The CDI-S can be used individually or collectively for subjects aged 7 to 17 years and takes between five and seven minutes to administer. The scores ranged from zero to 20, with an overall mean of 2.82 and a standard deviation of 2.43. The cut-off point is 7.04 for subjects aged 9 to 11 years and 8.24 for subjects aged 12 to 15 years. Inventory of the Level of Involvement in Aymara Culture – Escala de Involucramiento en la Cultura Aymara (EICA) [61]: This instrument measures a child’s level of involvement in Aymara culture. The scale was developed by the authors of the study: an epidemiologist, a clinical psychologist, and an anthropologist. The instrument was not based on an existing scale, since none was available, and its development was carried out in two stages, one qualitative and one quantitative. It contains 25 items evaluated on a Likert scale from zero to two, where the higher the score, the greater the role of Aymara culture in the daily life of the youth. Scores correspond to the average number of points obtained on the scale or respective subscale, multiplied by 10 to avoid excessive decimal places; they represent the degree of involvement in each of several aspects of Aymara culture, ranging from zero to 20 points. The scale is composed of six subscales: family language use (six items), personal use of the Aymara language (three items), traditional celebrations (five items), traditional employment (three items), dances (five items), and music (three items). The scale has adequate psychometric properties and helps identify both children who are and children who are not highly involved in Aymara culture. The scale has a Cronbach’s alpha of 0.91. Procedure The study was approved by the Ethics Committee of the University of Tarapacá and the National Commission on Science and Technology of the Government of Chile (CONICYT). The project complies with the Declaration of Helsinki [62]. The study used an inventory of public and state-subsidized private schools. The latter have a considerable population of students of Aymara origin, according to the list of beneficiaries of the Indigenous Scholarship for 2010–2011, granted by the National Council for School Assistance and Grants for the Ministry of Education. The schools were chosen and their principals contacted to obtain their consent to participate in the study. Once approval was received, the children were invited to participate in the study. Parents were asked to sign an informed consent form allowing their children to participate in the study. Children with consenting parents were free to decide for themselves whether to participate in the study. The children were assessed during school hours in group sessions run by two psychologists. The criteria for including a child in the Aymara arm of the study were: – One or more Aymara surnames, as established by the Indigenous Law; – Self-description as Aymara ethnicity, implying both the child and the parents consider themselves indigenous. Data analysis Statistical analysis of survey data was performed using SPSS 17.0. Means and standard deviations were calculated for subscale scores of each group and an independent samples t-test was run. Participants: The sample consisted of 748 Chilean children aged 9 to 15 years (mean 11.81, SD 1.41). They were recruited from the fifth to eighth grades of nine elementary schools serving lower socioeconomic classes in the city of Arica, which has a large Aymara population. Four of the schools were public and five were government-subsidized private schools. The children were equally divided between boys (n = 374) and girls (n = 374) and 275 (37%) of the children were Aymara. Of the total sample, 64 cases were missing because those children did not complete the questionnaires. The city and schools were chosen in an effort to maximize the number of Aymara included in the study. Instruments: Stress in Children (SiC) [55]: SiC is a self-administered instrument for children aged 9 to 12 years. It is designed to measure perceived anxiety and low levels of well-being, in addition to important aspects relating to confrontation and social support. These factors may be considered part of the broader concept of subjective health. This instrument was adapted for Chile by Caqueo-Urízar, Urzúa, and Osika [56]. The questionnaire consists of 21 items scored on a four-point Likert scale, with zero standing for none, one for sometimes, two for almost always, and three for always. The overall score is obtained by summing the points from each item, with a higher score indicating a higher degree of perceived stress. The mean plus or minus the standard deviation in a normal Swedish population was 2.05 ± 0.41 (girls, 1.99 ± 0.42; boys, 2.15 ± 0.37), with no significant differences detected between the genders. The two subscales of the questionnaire measure 1) loss of well-being, with items such as ‘I feel lonely’ and ‘I get sad’, and 2) sources of distress, requiring responses dealing with daily stressors, school being one of those with the highest factor loading. This questionnaire has high internal consistency (Cronbach’s alpha = 0.86) and is strongly associated with the Beck Youth Inventories of Emotional and Social Impairment [57]. Children Depression Inventory-Short (CDI-S) [58]: The CDI-S measures depressive symptoms in children. 
It contains three subscales: self-esteem, anhedonia, and hopelessness. It was adapted into Spanish by Barrio et al. [59]. The instrument contains 10 items that are easy for children to complete. It uses a three-point scale indicating absent, mild, or definite symptoms. Its reliability is good, with Cronbach’s alpha ranging from 0.71 to 0.89 and test–retest coefficients ranging from 0.74 to 0.83. This measure’s construct validity has also been confirmed [60]. Our sample’s Cronbach’s alpha was 0.84. The CDI-S can be used individually or collectively for subjects aged 7 to 17 years and takes between five and seven minutes to administer. The scores ranged from zero to 20, with an overall mean of 2.82 and a standard deviation of 2.43. The cut-off point is 7.04 for subjects aged 9 to 11 years and 8.24 for subjects aged 12 to 15 years. Inventory of the Level of Involvement in Aymara Culture – Escala de Involucramiento en la Cultura Aymara (EICA) [61]: This instrument measures a child’s level of involvement in Aymara culture. The scale was developed by the authors of the study: an epidemiologist, a clinical psychologist, and an anthropologist. The instrument was not based on an existing scale, since none was available, and its development was carried out in two stages, one qualitative and one quantitative. It contains 25 items evaluated on a Likert scale from zero to two, where the higher the score, the greater the role of Aymara culture in the daily life of the youth. Scores correspond to the average number of points obtained on the scale or respective subscale, multiplied by 10 to avoid excessive decimal places; they represent the degree of involvement in each of several aspects of Aymara culture, ranging from zero to 20 points. The scale is composed of six subscales: family language use (six items), personal use of the Aymara language (three items), traditional celebrations (five items), traditional employment (three items), dances (five items), and music (three items). The scale has adequate psychometric properties and helps identify both children who are and children who are not highly involved in Aymara culture. The scale has a Cronbach’s alpha of 0.91. Procedure: The study was approved by the Ethics Committee of the University of Tarapacá and the National Commission on Science and Technology of the Government of Chile (CONICYT). The project complies with the Declaration of Helsinki [62]. The study used an inventory of public and state-subsidized private schools. The latter have a considerable population of students of Aymara origin, according to the list of beneficiaries of the Indigenous Scholarship for 2010–2011, granted by the National Council for School Assistance and Grants for the Ministry of Education. The schools were chosen and their principals contacted to obtain their consent to participate in the study. Once approval was received, the children were invited to participate in the study. Parents were asked to sign an informed consent form allowing their children to participate in the study. Children with consenting parents were free to decide for themselves whether to participate in the study. The children were assessed during school hours in group sessions run by two psychologists. The criteria for including a child in the Aymara arm of the study were: – One or more Aymara surnames, as established by the Indigenous Law; – Self-description as Aymara ethnicity, implying both the child and the parents consider themselves indigenous. 
Data analysis: Statistical analysis of survey data was performed using SPSS 17.0. Means and standard deviations were calculated for subscale scores of each group and an independent samples t-test was run. Results: First, our analysis considered the information on ethnicity provided by the parents. As shown in Table 1, there was no significant difference in anxiety or depression symptoms between the Aymara and non-Aymara groups. Anxiety and depression symptoms in Aymara and non-Aymara children, measured by the SiC and CDI-S N = sample size; X = Mean; SD = Standard Deviation. The EICA, which was administered only to the Aymara children, categorized 89 children as having high involvement with Aymara culture and 186 children as having low involvement. Table 2 shows the SiC and CDI-S results for these two subgroups. There was a significant difference between the subgroups on the hopelessness subscale of the CDI-S (p = 0.02) and a marginally significant difference on the overall score on the SiC (p = 0.06). In both these cases, the low-involvement group scored higher. Anxiety and depression symptoms in Aymara children with high and low levels of cultural involvement N = sample size; X = Mean; SD = Standard Deviation. Discussion: The study’s objective was to measure potential differences in the mental health of Aymara and non-Aymara children in northern Chile. We found no difference in levels of anxiety or depression symptoms between Aymara children and their non-Aymara peers. This finding is contrary to the stress due to cultural acclimatization hypothesis [25]. This suggests that Aymara children possess adequate adaptive mechanisms for integrating into an urban context. These results are consistent with those of Zwirs et al. [63] in Europe and Costello et al. [64] in the United States. While there may be no difference in anxiety and depression symptoms between Aymara children and their non-Aymara peers, differences could still arise as they grow into adulthood and encounter greater discrimination [63]. Another possibility is that although these particular children are Aymara, they may have been away from the high Andean plateau long enough to have learned to live between two cultures: that of their ancestors and their current urban environment. It is possible that stress from cultural acclimatization may not have developed in these children and consequently there is no reduction in their mental health. In fact, various aspects of their lives may have improved [65]. Some authors describe the ‘cultural advantages’ of younger groups ‘forced’ into historic processes of cultural acclimatization [66]. This notion is consistent with a study of immigrant children in Barcelona, who developed strategies for dealing with the adaptation process and actively searched for solutions to conflicts [67]. In anthropological terms, the dynamics of cultural acclimatization can result in difficult but constructive processes of ‘creolization’. This means that ‘contact zones’ – interfaces between different practices or cultural environments [68] – are being generated in these indigenous children [66]. Outside these zones, coping strategies can arise that allow individuals to combine different living strategies, enabling adequate mental health. Within the Aymara children, those with high involvement in Aymara traditions exhibited lower levels of anxiety and fewer feelings of hopelessness. These results indicate that Aymara children with high cultural involvement may cope better with anxiety and feelings of hopelessness. 
Traditional celebrations, for example, have two characteristics that protect against the development of mental disorders: social and community participation and religious events. High-involvement children’s regular contact with other children of the community exposes them more to a cultural outlook that emphasizes the positive events in one’s life, which is associated with fewer feelings of hopelessness. Research during the past two decades has shown that social support is significantly linked to psychological well-being [69]. Studies demonstrate that children with social support who are exposed to adverse experiences exhibit less psychological illness compared to children without social support [69]. Religious beliefs have also been associated with healthy adjustment in adolescence. This may function through personal beliefs regarding behaviour, constraints on behaviour, or support for healthy behaviour by religious institutions [70,71]. In our study, participation in activities linked to religious beliefs or ritual practices specific to the Aymara culture could be a protective factor. The study has a number of limitations. First, selection of the subjects was not random. Second, only Aymara children in Chile were evaluated, and these results may not apply to Aymara in other countries. Future studies should also examine children of other ethnicities in Chile, such as Mapuches, Rapa Nuis, and Quechuas, as well as indigenous children in other countries. Third, neither the SiC nor the CDI-S was validated in Chile, and only the SiC was adapted to the country. There is a likelihood of difficulties arising when psychological concepts and measurement techniques developed for one culture are used in another [20,72]. This issue can also be related to social expectancy. Fourth, this study contains no information from parents or teachers, which would have been important for properly understanding the children’s problems. Finally, this was a cross-sectional study. It is important to carry out longitudinal studies to evaluate the consistency of findings over time [73]. Conclusions: In our study population, Aymara school children living in the city did not differ significantly from their non-Aymara peers in levels of anxiety or depressive symptoms. Among Aymara children, greater involvement with their culture conferred some protection against anxiety and depressive symptoms. This points to an additional benefit of maintaining cultural traditions within this population. Abbreviations: SiC: Stress in Children; CDI-S: Children Depression Inventory-Short; EICA: Inventory of the Level of Involvement in the Aymara Culture (Escala de Involucramiento en la Cultura Aymara). Competing interests: The authors have declared that there are no conflicts of interest in relation to the subject of this study. Authors’ contributions: ACU contributed to the design and coordination of the study. AU was responsible for the primary study design, consulted on the methodology, and assisted with the data analysis and interpretation. KDM participated in the data collection and manuscript editing. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-244X/14/11/prepub
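The Data analysis and Results passages above describe the comparison as group means and standard deviations followed by an independent samples t-test per scale, computed in SPSS 17.0. A minimal sketch of that comparison in Python is shown below; the scores are synthetic and the group sizes only loosely follow the reported sample, so it illustrates the procedure rather than reproducing the study’s data or results.

```python
# Minimal sketch of the group comparison described under "Data analysis":
# per-group mean and SD, then an independent samples t-test.
# Synthetic scores; not the study's data, which were analysed in SPSS 17.0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
aymara = rng.normal(loc=2.0, scale=0.4, size=275)      # hypothetical subscale scores
non_aymara = rng.normal(loc=2.0, scale=0.4, size=473)  # hypothetical subscale scores

for name, group in (("Aymara", aymara), ("non-Aymara", non_aymara)):
    print(f"{name}: mean = {group.mean():.2f}, SD = {group.std(ddof=1):.2f}")

t_stat, p_value = stats.ttest_ind(aymara, non_aymara, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```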
Background: Anxiety and depressive disorders occur in all stages of life and are the most common childhood disorders. However, only recently has attention been paid to mental health problems in indigenous children, and studies of anxiety and depressive disorders in these children are still scarce. This study compares the prevalence of anxiety and depressive symptoms in Aymara and non-Aymara children. Among the Aymara children, the study examines the relations between these symptoms and the degree of involvement with Aymara culture. Methods: We recruited 748 children aged 9 to 15 years from nine schools serving low socioeconomic classes in the city of Arica, in northern Chile. The children were equally divided between boys and girls, and 37% of the children were Aymara. To evaluate anxiety and depressive symptoms we used the Stress in Children (SiC) instrument and the Children Depression Inventory-Short version (CDI-S), and we used an instrument we developed to assess the level of involvement in the Aymara culture. Results: There was no significant difference between Aymara and non-Aymara children on any of the instrument scales. Dividing the Aymara children into high-involvement (n = 89) and low-involvement (n = 186) groups, the low-involvement group had significantly higher scores on the Hopelessness subscale of the CDI-S (p = 0.02) and marginally significantly higher scores on overall Anxiety on the SiC (p = 0.06). Conclusions: Although Aymara children have migrated from the high Andean plateau to the city, this migration has not resulted in a greater presence of anxiety and depressive symptoms. Greater involvement with the Aymara culture may be a protective factor against anxiety and depressive symptoms in Aymara children. This points to an additional benefit of maintaining cultural traditions within this population.
Background: Psychiatric disorders have a high prevalence in children and adolescents, and often persist into adulthood. Their importance has made them a growing subject of research in recent years [1]. Among the psychiatric disorders, anxiety and depression are the most common reasons for seeking mental health services [2-5]. Worldwide, reported rates of anxiety in children and adolescents range from 5.6% to 21%, depending on the criteria and the type of anxiety disorder [2,6-9], with higher rates observed in older children [10]. The prevalence of depressive disorders also varies depending on the population and evaluation method [11]. Rates of depressive disorders between 0.5% and 2.0% have been reported for children in the general population aged 9 to 11 years [8]. In Chile, anxiety and depressive disorders are the most prevalent mental health diagnoses in children, with reported rates of 8.3% and 5.1%, respectively [12]. There has been very little research on anxiety and depression among different ethnic groups, with no research in Chile, despite its numerous indigenous groups. Stressors that can impact mental health, such as racism, family disconnectedness, community dysfunction, and social disadvantage, are more prevalent among ethnic minorities [13,14]. Indigenous peoples have identified certain stressors as causes of poor health, such as loss of native lands, culture, and identity; covert and overt racism; marginalization; and powerlessness [14-17]. Culture can also affect the way in which symptoms of mental illness manifest. Culture can determine or frame causative, precipitating, or maintenance factors that influence the onset, symptom profile, impact, course, and outcome of mental illness [17,18]. The prevalence of anxiety symptoms in children and adolescents also varies between ethnic minorities [19]. For instance, adolescent immigrants in Belgium reported more traumatic experiences, more problems with their peers, and greater avoidance than non-immigrants [20]. Similar findings were observed in a study of immigrant children in the Netherlands, who demonstrated a greater level of externalized and internalized problems than their peers [21-24]. Migration to a new country or sociocultural context can cause stress due to cultural acclimatization. Such stress tends to increase levels of anxiety and depression, loneliness, and psychosomatic symptoms, and to contribute to a confused sense of identity [25]. Research on stress due to cultural acclimatization has focused on how conflicts with the host language and the perception of discrimination can affect psychological well-being [26,27]. However, stress due to cultural acclimatization can take various paths. Thus, acclimatization is driven by transcultural or intercultural dynamics [28,29]. While there are numerous studies on the mental health of indigenous adults and adolescents in North America, studies of indigenous children are rare, and even rarer among indigenous children of Latin America [30-34]. The lack of such studies on Chilean indigenous children is worrisome. Mental health problems in childhood have important repercussions in adulthood, including declines in academic achievement and interpersonal relations, as well as ongoing behavioural problems and drug abuse [11,35,36]. Factors pertaining to the risks to and protection of mental health in children and adolescents vary in different contexts, especially between developed and developing countries [37].
This study’s objective is to analyze the differences in the presence of anxiety and depressive symptoms between Aymara and non-Aymara children and to evaluate the relation between mental disorders and cultural involvement. Aymara is a centuries-old culture centred in the Andes mountains. In 2012, the Aymara community had a population of about 2 million living in central and western Bolivia, southern Peru, northern Chile, and northwestern Argentina [38,39]. The Aymara have an agricultural economy based on cultivation of potatoes, corn, and quinoa and domestication of llamas, alpacas, and vicuñas [38,40,41], two activities that are complementary, both ecologically and economically [42,43]. The Aymara are a geographically broad and heterogeneous group, although certain common characteristics undoubtedly prevail [44]. The culture is characterized by its intergenerational communication, where elders provide advice to the young. In addition, it is a culture in which the mother focuses on household tasks and education, the father makes the family and monetary decisions and is the breadwinner, and family members work together to complete various tasks, with young children helping out with simple household tasks. The Aymara community in Chile has a population of approximately 48,000 [45], only 2,300 of which still live in their original mountain territories. The rest have emigrated towards the nearby port cities and mining regions, where they have intermingled with the working classes from other areas of the country [38,46-48]. Chile’s large-scale migration towards the coast and its policy of Chilenization – under which indigenous people were encouraged to adopt Chilean culture and abandon their own, particularly during the Pinochet dictatorship – forged an identity for the Aymara people, shaped by the difficulty and complexity of this process in different areas [49]. This mass migration and rapid abandonment of rural settlements in the Andean foothills have been among the most difficult experiences for the Aymara. Migration has been a complex phenomenon that has not necessarily involved a departure without return, as evidenced by the number of simultaneous residencies and linkages that are maintained with the native communities [50]. In adapting to a hegemonic culture, Aymara families have abandoned, to some extent, traditional cultural patterns and are slowly adopting new and increasingly intercultural lifestyles [51,52]. These intercultural dynamics have led to an identity crisis, not only at the level of the Aymara population, but at the level of the country, in which the issue of interculturalism has not yet been constructively addressed. Given this context, a large number of people who could be identified as Aymara by heritage, or because they still practice certain Aymara customs, are no longer considered such, at least until recently [53]. Various government organizations around the world have expressed concern for the rights of indigenous people, especially their autonomy to raise, educate, and ensure the well-being of their children, in line with children’s rights [54]. Such recognition, however, is insufficient to eliminate the discrimination and other problems Aymara descendants face, leading to reduced access to economic, educational, and social opportunities. Given these inequalities and the stress due to cultural acclimatization experienced by Aymara youth, we expect higher levels of anxiety and depressive symptoms among Aymara children compared to non-Aymara children.
Conclusions: In our study population, Aymara school children living in the city did not differ significantly from their non-Aymara peers in levels of anxiety or depressive symptoms. Among Aymara children, greater involvement with their culture conferred some protection against anxiety and depressive symptoms. This points to an additional benefit of maintaining cultural traditions within this population.
Background: Anxiety and depressive disorders occur in all stages of life and are the most common childhood disorders. However, only recently has attention been paid to mental health problems in indigenous children, and studies of anxiety and depressive disorders in these children are still scarce. This study compares the prevalence of anxiety and depressive symptoms in Aymara and non-Aymara children. Among the Aymara children, the study examines the relations between these symptoms and the degree of involvement with Aymara culture. Methods: We recruited 748 children aged 9 to 15 years from nine schools serving low socioeconomic classes in the city of Arica, in northern Chile. The children were equally divided between boys and girls, and 37% of the children were Aymara. To evaluate anxiety and depressive symptoms we used the Stress in Children (SiC) instrument and the Children Depression Inventory-Short version (CDI-S), and we used an instrument we developed to assess the level of involvement in the Aymara culture. Results: There was no significant difference between Aymara and non-Aymara children on any of the instrument scales. Dividing the Aymara children into high-involvement (n = 89) and low-involvement (n = 186) groups, the low-involvement group had significantly higher scores on the Hopelessness subscale of the CDI-S (p = 0.02) and marginally significantly higher scores on overall Anxiety on the SiC (p = 0.06). Conclusions: Although Aymara children have migrated from the high Andean plateau to the city, this migration has not resulted in a greater presence of anxiety and depressive symptoms. Greater involvement with the Aymara culture may be a protective factor against anxiety and depressive symptoms in Aymara children. This points to an additional benefit of maintaining cultural traditions within this population.
5,932
339
[ 1256, 138, 740, 235, 33, 38, 20, 53, 16 ]
13
[ "children", "aymara", "study", "items", "scale", "culture", "anxiety", "involvement", "indigenous", "symptoms" ]
[ "10 prevalence depressive", "depressive symptoms children", "anxiety children adolescents", "years chile anxiety", "chile anxiety" ]
[CONTENT] Mental health | Indigenous | Aymara children | Anxiety | Depression [SUMMARY]
[CONTENT] Mental health | Indigenous | Aymara children | Anxiety | Depression [SUMMARY]
[CONTENT] Mental health | Indigenous | Aymara children | Anxiety | Depression [SUMMARY]
[CONTENT] Mental health | Indigenous | Aymara children | Anxiety | Depression [SUMMARY]
[CONTENT] Mental health | Indigenous | Aymara children | Anxiety | Depression [SUMMARY]
[CONTENT] Mental health | Indigenous | Aymara children | Anxiety | Depression [SUMMARY]
[CONTENT] Adolescent | Anxiety Disorders | Child | Chile | Depressive Disorder | Female | Humans | Male | Mental Health | Personality Inventory | Prevalence | Schools [SUMMARY]
[CONTENT] Adolescent | Anxiety Disorders | Child | Chile | Depressive Disorder | Female | Humans | Male | Mental Health | Personality Inventory | Prevalence | Schools [SUMMARY]
[CONTENT] Adolescent | Anxiety Disorders | Child | Chile | Depressive Disorder | Female | Humans | Male | Mental Health | Personality Inventory | Prevalence | Schools [SUMMARY]
[CONTENT] Adolescent | Anxiety Disorders | Child | Chile | Depressive Disorder | Female | Humans | Male | Mental Health | Personality Inventory | Prevalence | Schools [SUMMARY]
[CONTENT] Adolescent | Anxiety Disorders | Child | Chile | Depressive Disorder | Female | Humans | Male | Mental Health | Personality Inventory | Prevalence | Schools [SUMMARY]
[CONTENT] Adolescent | Anxiety Disorders | Child | Chile | Depressive Disorder | Female | Humans | Male | Mental Health | Personality Inventory | Prevalence | Schools [SUMMARY]
[CONTENT] 10 prevalence depressive | depressive symptoms children | anxiety children adolescents | years chile anxiety | chile anxiety [SUMMARY]
[CONTENT] 10 prevalence depressive | depressive symptoms children | anxiety children adolescents | years chile anxiety | chile anxiety [SUMMARY]
[CONTENT] 10 prevalence depressive | depressive symptoms children | anxiety children adolescents | years chile anxiety | chile anxiety [SUMMARY]
[CONTENT] 10 prevalence depressive | depressive symptoms children | anxiety children adolescents | years chile anxiety | chile anxiety [SUMMARY]
[CONTENT] 10 prevalence depressive | depressive symptoms children | anxiety children adolescents | years chile anxiety | chile anxiety [SUMMARY]
[CONTENT] 10 prevalence depressive | depressive symptoms children | anxiety children adolescents | years chile anxiety | chile anxiety [SUMMARY]
[CONTENT] children | aymara | study | items | scale | culture | anxiety | involvement | indigenous | symptoms [SUMMARY]
[CONTENT] children | aymara | study | items | scale | culture | anxiety | involvement | indigenous | symptoms [SUMMARY]
[CONTENT] children | aymara | study | items | scale | culture | anxiety | involvement | indigenous | symptoms [SUMMARY]
[CONTENT] children | aymara | study | items | scale | culture | anxiety | involvement | indigenous | symptoms [SUMMARY]
[CONTENT] children | aymara | study | items | scale | culture | anxiety | involvement | indigenous | symptoms [SUMMARY]
[CONTENT] children | aymara | study | items | scale | culture | anxiety | involvement | indigenous | symptoms [SUMMARY]
[CONTENT] mental | aymara | children | indigenous | mental health | disorders | anxiety | adolescents | health | acclimatization [SUMMARY]
[CONTENT] items | scale | aymara | children | schools | instrument | study | cronbach alpha | cronbach | zero [SUMMARY]
[CONTENT] aymara | significant difference | difference | depression symptoms aymara | depression symptoms | anxiety depression symptoms | anxiety depression symptoms aymara | anxiety depression | significant | low [SUMMARY]
[CONTENT] anxiety depressive symptoms | anxiety depressive | depressive | depressive symptoms | aymara | anxiety | population | symptoms | additional benefit maintaining | benefit maintaining cultural traditions [SUMMARY]
[CONTENT] aymara | children | study | items | schools | culture | anxiety | scale | involvement | symptoms [SUMMARY]
[CONTENT] aymara | children | study | items | schools | culture | anxiety | scale | involvement | symptoms [SUMMARY]
[CONTENT] ||| ||| Aymara | non-Aymara ||| Aymara | Aymara [SUMMARY]
[CONTENT] 748 | 9 to 15 years | nine | Arica | Chile ||| 37% | Aymara ||| the Stress in Children | the Children Depression Inventory-Short | CDI-S | Aymara [SUMMARY]
[CONTENT] Aymara | non-Aymara ||| Aymara | 89 | 186 | Hopelessness | the CDI-S | 0.02 | Anxiety on the SiC | 0.06 [SUMMARY]
[CONTENT] Aymara | Andean ||| Aymara | Aymara ||| [SUMMARY]
[CONTENT] ||| ||| Aymara | non-Aymara ||| Aymara | Aymara ||| 748 | 9 to 15 years | nine | Arica | Chile ||| 37% | Aymara ||| the Stress in Children | the Children Depression Inventory-Short | CDI-S | Aymara ||| Aymara | non-Aymara ||| Aymara | 89 | 186 | Hopelessness | the CDI-S | 0.02 | Anxiety on the SiC | 0.06 ||| Aymara | Andean ||| Aymara | Aymara ||| [SUMMARY]
[CONTENT] ||| ||| Aymara | non-Aymara ||| Aymara | Aymara ||| 748 | 9 to 15 years | nine | Arica | Chile ||| 37% | Aymara ||| the Stress in Children | the Children Depression Inventory-Short | CDI-S | Aymara ||| Aymara | non-Aymara ||| Aymara | 89 | 186 | Hopelessness | the CDI-S | 0.02 | Anxiety on the SiC | 0.06 ||| Aymara | Andean ||| Aymara | Aymara ||| [SUMMARY]
Temporal changes in the clinical-epidemiological profile of patients with Chagas disease at a referral center in Brazil.
34105626
We aimed to describe the sociodemographic, epidemiological, and clinical characteristics of patients with chronic Chagas disease (CD) at an infectious disease referral center. Changes in patient profiles over time were also evaluated.
INTRODUCTION
This retrospective study included patients with CD from November 1986 to December 2019. All patients underwent an evaluation protocol that included sociodemographic profile; epidemiological history; anamnesis; and physical, cardiologic, and digestive examinations. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous variables and generalized linear models with binomial distribution for categorical variables.
METHODS
A total of 2,168 patients (52.2% women) were included, with a mean age of 47.8 years. White patients with low levels of education predominated. The reported transmission mode was vectorial in 90.2% of cases. The majority came from areas with a high prevalence (52.2%) and morbidity (67.8%) of CD. The most common clinical presentation was the indeterminate form (44.9%). The number of patients referred gradually decreased and the age at admission increased during the study period, as did the patients' levels of education.
RESULTS
The clinical profile of CD is characterized by a predominance of the indeterminate form of the disease. Regarding the patients who were followed up at the referral center, there was a progressive increase in the mean age and a concomitant decrease in the number of new patients. This reflects the successful control of vector and transfusion transmission in Brazil as well as the aging population of patients with CD.
CONCLUSIONS
[ "Aged", "Brazil", "Chagas Disease", "Female", "Humans", "Male", "Middle Aged", "Prevalence", "Referral and Consultation", "Retrospective Studies" ]
8186889
INTRODUCTION
Chagas disease (CD) is considered a neglected tropical disease by the World Health Organization, with an estimated 6-7 million people infected worldwide. The implementation of CD vector and transfusion control programs in the 1980s significantly decreased the rate of disease transmission in Latin American countries. However, several challenges have hampered the effective implementation of disease surveillance due to new outbreaks of orally transmitted CD in endemic countries and the possibility of vertical transmission even in nonendemic areas. Integrated surveillance and healthcare interventions are now directed at a large contingent of patients already infected with Trypanosoma cruzi (T. cruzi), a significant portion of whom may develop chronic Chagas heart disease, a major determinant of morbidity and mortality. Owing to rapid globalization, CD cases are no longer restricted to Latin America, constituting a new challenge in the battle against this disease 1 . In Brazil, the process of CD urbanization in the last decades of the 20th century has increased the number of patients with CD in urban cities, which has increased the demand for local health care services. This new urban context has also prompted the modification of the clinical-epidemiological profile of patients with CD, evidenced by changes in work activities, food consumption patterns, increased age, significant prevalence of comorbidities, and social determinants as a whole 2 - 4 . The first national serological survey that evaluated the prevalence of CD in Brazil aimed to quantify the endemic transmission of CD 5 . This study was conducted in rural areas of Brazil between 1975 and 1980, with an estimated national seroprevalence CD rate of 4.22%. A second national serological CD survey 6 conducted between 2001 and 2008 analyzed the seroprevalence in children aged up to five years, which only reported a CD serum positivity rate of 0.03%. The later survey highlighted the impact of the CD control measures implemented in previous decades, which led the Pan American Health Organization to grant Brazil an International Certificate of Elimination of CD transmission by Triatoma infestans and blood transfusions 7 . In Brazil, the current estimated prevalence of CD is much lower than that reported in the 1970s. However, the estimates reported in previous studies on the prevalence of CD are imprecise and subject to criticism due to the lack of standardized data collection and heterogeneity in most of these estimates 8 . Therefore, at present, the exact number of Brazilians with CD is unknown, although it is projected to be between 1 and 1.5 million people 9 . Currently, new cases of CD in Brazil are mostly restricted to the Legal Amazon, with oral being the primary route of transmission, particularly through the consumption of a local fruit called açaí when it is contaminated with T. cruzi 10 . With globalization, the disease has spread to countries in the Northern Hemisphere, particularly in the United States and Spain 11 . Because of this new geographic rearrangement of CD, studies describing the clinical and epidemiological profile of patients with chronic CD have been conducted in urban healthcare facilities in both endemic 12 - 15 and nonendemic regions 16 - 19 . Some of these facilities have also become reference centers for the treatment of CD that offer specialized care 20 - 23 . 
The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation (INI-Fiocruz), located in Rio de Janeiro, Brazil, is a national reference center for the treatment and research of infectious and tropical diseases covered by the Sistema Único de Saúde (SUS), the Brazilian National Health Service. INI-Fiocruz receives patients from various regions of the country and offers comprehensive and multidisciplinary care to patients with CD. To date, few studies have addressed the clinical and epidemiological profiles of the Brazilian population with chronic CD. The present study aimed to describe the clinical and epidemiological profiles of a historical cohort of patients with chronic CD followed up at the INI-Fiocruz.
METHODS
This was a retrospective descriptive study including patients diagnosed with chronic CD who were referred to the outpatient center of the INI-Fiocruz between November 1986 and December 2019. Clinical and epidemiological data were retrieved from the medical records. After the serological diagnosis of chronic CD was confirmed by two simultaneous reactive serological techniques, all patients underwent an initial evaluation protocol, which included sociodemographic information (age, level of education, and race); epidemiological history (transmission mode, country and state of origin, time away from endemic area); clinical anamnesis; a physical examination focused on chronic CD-related cardiovascular and digestive signs and symptoms; a 12-lead electrocardiogram (ECG); and a two-dimensional Doppler echocardiogram. According to the presence of symptoms related to the digestive form of CD, the following examinations were performed: upper gastrointestinal endoscopy, esophagography, colonoscopy, and a contrast barium enema. The level of education was categorized based on the number of years of formal study as illiterate, < 9 years, or > 9 years. Race was self-reported and classified as white, black, mulatto, or indigenous. Clinical forms of chronic CD were retrospectively classified according to the 2nd Brazilian Consensus on Chagas Disease 24. The cardiac form was classified into A, B1, B2, C, or D stages, and the digestive form was classified into megaesophagus, megacolon, or both megaesophagus and megacolon. Information about the region of origin was classified according to the prevalence and morbidity of chronic CD, which was based on serological data from national prevalence surveys 5 and a national electrocardiographic survey 25. The prevalence was categorized as low (< 2%), medium (2%-4%), high (> 4%), and nonendemic (Rio de Janeiro and Espírito Santo), whereas the levels of morbidity by area were categorized as low (normal ECGs > 50%), high (normal ECGs < 50%), and nonendemic areas (Rio de Janeiro and Espírito Santo). Data analysis: Descriptive statistics are presented as means (standard deviations) for continuous variables and absolute frequencies (percentages) for categorical variables. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous variables (nptrend command in Stata 13.0) and generalized linear models with binomial distribution for categorical variables (binreg command in Stata 13.0). The independent variable was the 5-year period, and the dependent variables were the binary classes of each categorical variable. The link choice was selected according to the lowest Bayesian information criterion. Statistical significance was set at a 2-tailed p-value of <0.05. All statistical analyses were performed using Stata software (version 13.0; StataCorp LP.; College Station, TX, USA). Ethics approval: This study was approved by the INI-Fiocruz Research Ethics Committee (number CAAE:35748820.1.0000.5262) on September 2, 2020 and was carried out in accordance with the 1964 Declaration of Helsinki and its later amendments. The need for informed consent was waived considering the retrospective nature of the study.
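As a companion to the trend analysis described in the Data analysis paragraph above, the sketch below shows one way to run an analogous analysis in Python rather than the authors' Stata 13.0 commands (nptrend, binreg). The simulated data, the use of Kendall's tau as a stand-in for the nonparametric trend test, and the default logit link are illustrative assumptions, not the study's actual procedure.

    # Illustrative trend analysis over 5-year periods (synthetic data, not study data).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 500
    period = rng.integers(0, 7, size=n)                 # 0 = 1986-1989, ..., 6 = 2015-2019
    age = 45 + 2 * period + rng.normal(0, 12, size=n)   # continuous variable (e.g., age at admission)
    vectorial = rng.binomial(1, 0.85 + 0.02 * period)   # binary class of a categorical variable

    # Trend for a continuous variable: Kendall's tau against the period index,
    # used here as a simple stand-in for Stata's nptrend test.
    tau, p_cont = stats.kendalltau(period, age)
    print(f"continuous trend: tau = {tau:.3f}, p = {p_cont:.4f}")

    # Trend for a categorical variable: GLM with a binomial family, with the
    # 5-year period as the independent variable (analogous in spirit to binreg).
    X = sm.add_constant(pd.DataFrame({"period": period}))
    result = sm.GLM(vectorial, X, family=sm.families.Binomial()).fit()
    print(result.summary())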
RESULTS
The characteristics of the patients referred to the INI-Fiocruz are shown in Table 1. A total of 2,168 patients (52.2% women) were included from August 1986 to December 2019, with a mean age of 47.8 years (range, 13-88 years). The plurality self-reported as white (49.8%) and had < 9 years of education (80.5%). The reported transmission mode was vectorial in 90.2% of the patients. The majority, originating from Brazil (98.7%), were born in areas with high prevalence (52.2%) and morbidity (67.8%) of CD, mostly Minas Gerais and Bahia, and had moved away from endemic areas for >20 years (65.8%). The indeterminate form was the most common clinical presentation (44.9%), and stage A (21.9%) was the most common clinical presentation of the cardiac form. Only a minority of patients had digestive or cardiodigestive forms (11.7%), with 7.4% presenting with megaesophagus, 2.5% with megacolon, and 1.8% with both megaesophagus and megacolon.
TABLE 1: Characteristics of patients admitted at the INI-Fiocruz (n=2,168). Values are mean (SD) and minimum-maximum, or frequency (%).
Age (years): 47.8 (12.8), 13-88
Female sex: 1,132 (52.2)
Race: White 1,080 (49.8); Mulatto 815 (37.6); Black 261 (12.0); Indigenous 12 (0.6)
Level of education: Illiterate 448 (20.7); < 9 years 1,297 (59.8); > 9 years 423 (19.5)
Transmission mode: Vectorial 1,956 (90.2); Transfusion 124 (5.7); Vertical 62 (2.9); Oral 2 (0.1); Not identified 24 (1.1)
Country of origin: Brazil 2,140 (98.7); Other Latin American countries 28 (1.3)
Region of origin according to prevalence: Nonendemic Chagas disease 102 (4.7); Low Chagas disease prevalence 260 (12.0); Medium Chagas disease prevalence 674 (31.1); High Chagas disease prevalence 1,132 (52.2)
Region of origin according to morbidity: Nonendemic Chagas disease 102 (4.7); Low Chagas disease morbidity 596 (27.5); High Chagas disease morbidity 1,470 (67.8)
Time away from endemic area: None 82 (3.8); 1 to 20 years 577 (26.6); >20 years 1,426 (65.8); Nonendemic area 83 (3.8)
Clinical form (n=2,136): Indeterminate 960 (44.9); Cardiac 924 (43.3); Digestive 125 (5.9); Cardiodigestive 127 (5.9)
Clinical cardiac stages (a) (n=2,136): None 1085 (50.8); Stage A 467 (21.9); Stage B1 250 (11.7); Stage B2 108 (5.1); Stage C 206 (9.6); Stage D 20 (0.9)
Clinical digestive presentation: None 1914 (88.3); Megaesophagus 160 (7.4); Megacolon 54 (2.5); Megaesophagus and megacolon 40 (1.8)
(a) According to the 2nd Brazilian Consensus on Chagas Disease 24. Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation.
The characteristics of patients referred to the INI-Fiocruz for each 5-year period are shown in Table 2. The number of patients referred increased from the 1986-1990 period (n=138) to the 2000-2004 period (n=569) and then gradually decreased until the 2015-2019 period (n=162). The age at the time of referral increased during all study periods (p<0.001). The number of white patients decreased over time (p<0.001), whereas the number of mulatto (p<0.001) and black (p<0.001) patients increased. Illiterate patients decreased over time (p=0.02), and those with >9 years of education increased (p=0.008). The vectorial and oral transmission modes increased over the study period (p<0.001 for both) while transfusion transmission decreased (p<0.001).
TABLE 2:Characteristics of patients admitted at the INI-Fiocruz for each 5-year period (n=2,168)VariableMean (SD) inimum-Maximum or Frequency (%) p-value for trend 1986-19891990-19941995-19992000-20042005-20092010-20142015-2019 (n=138)(n=295)(n=498)(n=569)(n=307)(n=199)(n=162)Age (years)44.3 (10.6)45.3 (12.0)45.9 (11.3)46.3 (12.2)49.7 (14.1)53.5 (12.9)57.1 (12.9)<0.001 22-7319-8516-8413-8415-8719-8521-88 Female sex71 (51.8)168 (57.0)252 (50.60)283 (49.7)158 (51.5)116 (58.3)84 (51.9)0.93Race White76 (55.1)174 (59.0)244 (49.0)314 (55.2)154 (50.2)82 (41.2)36 (22.2)<0.001Mulatto52 (37.7)91 (30.9)197 (39.6)180 (31.6)101 (32.9)97 (48.7)97 (59.9)<0.001Black8 (5.8)28 (9.5)52 (10.4)72 (12.7)52 (16.9)20 (10.1)29 (17.9)<0.001Indigenous2 (1.5)2 (0.7)5 (1.0)3 (0.5)0 (0.0)0 (0.0)0 (0.0)0.02Level of education Illiterate32 (23.2)52 (17.6)141 (28.3)110 (19.3)50 (16.3)34 (17.1)29 (17.9)0.02< 9 years86 (62.3)182 (61.7)268 (53.8)350 (61.5)209 (68.1)117 (58.8)85 (52.2)0.87> 9 years20 (14.5)61 (20.7)89 (17.9)109 (19.2)48 (15.6)48 (24.1)48 (29.6)0.008Transmission mode Vectorial119 (86.2)254 (86.1)435 (87.4)525 (92.3)285 (92.8)189 (95.0)149 (92.0)<0.001Transfusion16 (11.6)29 (9.8)45 (9.0)16 (2.8)8 (2.6)3 (1.5)7 (4.3)<0.001Vertical1 (0.7)8 (2.7)13 (2.6)24 (4.2)9 (2.9)4 (2.0)3 (1.9)0.81Oral0 (0.0)0 (0.0)0 (0.0)0 (0.0)0 (0.0)0 (0.0)2 (1.2)<0.001Not identified2 (1.5)4 (1.4)5 (1.0)4 (0.7)5 (1.6)3 (1.5)1 (0.6)0.84Country of origin Brazil135 (97.8)289 (98.0)491 (98.6)565 (99.3)302 (98.4)197 (99.0)161 (99.4)0.15Other Latin American countries3 (2.2)6 (2.0)7 (1.4)4 (0.7)5 (1.6)2 (1.0)1 (0.6)Region of origin according to prevalence Nonendemic Chagas disease5 (3.6)18 (6.1)19 (3.8)31 (5.5)14 (4.6)8 (4.0)7 (4.3)0.78Low Chagas disease prevalence7 (5.1)17 (5.8)58 (11.7)74 (13.0)42 (13.7)38 (19.1)24 (14.8)<0.001Medium Chagas disease prevalence37 (26.8)93 (31.5)137 (27.5)186 (32.7)101 (32.9)56 (28.1)64 (39.5)0.04High Chagas disease prevalence89 (64.5)167 (56.6)284 (57.0)278 (48.9)150 (48.9)97 (48.7)67 (41.4)<0.001Region of origin according to morbidity Nonendemic Chagas disease area5 (3.7)18 (6.1)19 (3.8)31 (5.5)14 (4.6)8 (4.0)7 (4.3)0.78Low Chagas disease morbidity area22 (15.9)67 (22.7)130 (26.1)159 (27.9)95 (30.9)64 (32.2)59 (36.4)<0.001High Chagas disease morbidity area111 (80.4)210 (71.2)349 (70.1)379 (66.6)198 (64.5)127 (63.8)96 (59.3)<0.001Time away from endemic area None1 (0.7)8 (2.7)29 (5.8)19 (3.3)16 (5.2)5 (2.5)4 (2.5)0.901 to 20 years16 (11.6)57 (19.3)129 (25.9)179 (31.5)108 (35.2)62 (31.2)26 (16.1)0.002>20 years117 (84.8)216 (73.2)323 (64.9)346 (60.8)173 (56.4)124 (62.3)127 (78.4)0.005Nonendemic area4 (2.9)14 (4.8)17 (3.4)25 (4.4)10 (3.3)8 (4.0)5 (3.1)0.77Clinical form (n=2,136) Indeterminate56 (40.6)135 (45.8)264 (53.0)280 (51.2)107 (35.9)66 (33.3)52 (32.1)<0.001Cardiac68 (49.3)142 (48.1)204 (41.0)221 (40.4)140 (47.0)81 (40.9)68 (42.0)0.20Digestive6 (4.3)12 (4.1)14 (2.8)26 (4.8)23 (7.7)28 (14.2)16 (9.9)<0.001Cardiodigestive8 (5.8)6 (2.0)16 (3.2)20 (3.6)28 (9.4)23 (11.6)26 (16.0)<0.001Clinical cardiac stagesa (n=2,136) Stage A36 (26.1)58 (19.7)104 (20.9)129 (23.6)66 (22.2)33 (16.7)41 (25.3)0.93Stage B117 (12.3)45 (15.3)48 (9.6)48 (8.8)40 (13.4)32 (16.2)20 (12.4)0.59Stage B213 (9.4)16 (5.4)26 (5.2)21 (3.8)17 (5.7)8 (4.0)7 (4.3)0.10Stage C9 (6.5)21 (7.1)39 (7.8)38 (8.0)44 (14.8)30 (15.2)25 (15.3)<0.001Stage D1 (0.7)8 (2.7)3 (0.6)5 (0.9)1 (0.3)1 (0.5)1 (0.6)0.07Clinical digestive presentation Megaesophagus8 (5.8)14 (4.8)19 (3.8)26 (4.6)35 (11.4)33 (16.6)25 (15.4)<0.001Megacolon3 (2.2)3 (1.0)8 (1.6)12 (2.1)7 (2.3)10 
(5.0)11 (6.8)<0.001Megaesophagus and megacolon3 (2.2)1 (0.3)3 (0.6)10 (1.8)9 (2.9)8 (4.0)6 (3.7)0.001 (a) According to the 2nd Brazilian Consensus on Chagas Disease (2015) 24. Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation. The percentage of patients who originated from low CD prevalence areas increased over the study period (p<0.001), while the percentage originating from high CD prevalence areas decreased (p<0.001). The same pattern was observed for the CD morbidity areas, with an increase in the percentage of patients who originated from low CD morbidity areas (p<0.001) and a decrease in patients from high CD morbidity areas (p<0.001). The number of patients who had moved away from an endemic area for <20 years increased (p=0.002), with a concomitant decrease in patients who had moved away for >20 years (p=0.005). The clinical presentation characteristics also changed over the study period, with an increase in the digestive form (p<0.001) and a decrease in the indeterminate form (p<0.001). Considering the clinical cardiac stages, there was an increase in the percentage of patients admitted with stage C (p<0.001); however, no changes were observed in the other stages.
null
null
[ "Data analysis", "Ethics approval" ]
[ "Descriptive statistics are presented as means (standard deviations) for continuous and absolute frequencies (percentages) for categorical variables. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous (nptrend command in Stata 13.0) variables and using generalized linear models with binomial distribution for categorical variables (binreg command in Stata 13.0). The independent variable was the 5-year period, and the dependent variables were the binary classes of each categorical variable. The link choice was selected according to the lowest Bayesian information criterion. Statistical significance was set at a 2-tailed p-value of <0.05. All statistical analyses were performed using Stata software (version 13.0; StataCorp LP.; College Station, TX, USA). ", "This study was approved by the INI-Fiocruz Research Ethics Committee (number CAAE:35748820.1.0000.5262) on September 2, 2020 and was carried out in accordance with the 1964 Declaration of Helsinki and its later amendments. The need for informed consent was waived considering the retrospective nature of the study." ]
[ null, null ]
[ "INTRODUCTION", "METHODS", "Data analysis", "Ethics approval", "RESULTS", "DISCUSSION" ]
[ "Chagas disease (CD) is considered a neglected tropical disease by the World Health Organization, with an estimated 6-7 million people infected worldwide. The implementation of CD vector and transfusion control programs in the 1980s significantly decreased the rate of disease transmission in Latin American countries. However, several challenges have hampered the effective implementation of disease surveillance due to new outbreaks of orally transmitted CD in endemic countries and the possibility of vertical transmission even in nonendemic areas. Integrated surveillance and healthcare interventions are now directed at a large contingent of patients already infected with Trypanosoma cruzi (T. cruzi), a significant portion of whom may develop chronic Chagas heart disease, a major determinant of morbidity and mortality. Owing to rapid globalization, CD cases are no longer restricted to Latin America, constituting a new challenge in the battle against this disease\n1\n. In Brazil, the process of CD urbanization in the last decades of the 20th century has increased the number of patients with CD in urban cities, which has increased the demand for local health care services. This new urban context has also prompted the modification of the clinical-epidemiological profile of patients with CD, evidenced by changes in work activities, food consumption patterns, increased age, significant prevalence of comorbidities, and social determinants as a whole\n2\n\n-\n\n4\n. \n The first national serological survey that evaluated the prevalence of CD in Brazil aimed to quantify the endemic transmission of CD\n5\n. This study was conducted in rural areas of Brazil between 1975 and 1980, with an estimated national seroprevalence CD rate of 4.22%. A second national serological CD survey\n6\n conducted between 2001 and 2008 analyzed the seroprevalence in children aged up to five years, which only reported a CD serum positivity rate of 0.03%. The later survey highlighted the impact of the CD control measures implemented in previous decades, which led the Pan American Health Organization to grant Brazil an International Certificate of Elimination of CD transmission by Triatoma infestans and blood transfusions\n7\n. In Brazil, the current estimated prevalence of CD is much lower than that reported in the 1970s. However, the estimates reported in previous studies on the prevalence of CD are imprecise and subject to criticism due to the lack of standardized data collection and heterogeneity in most of these estimates\n8\n. Therefore, at present, the exact number of Brazilians with CD is unknown, although it is projected to be between 1 and 1.5 million people\n9\n. Currently, new cases of CD in Brazil are mostly restricted to the Legal Amazon, with oral being the primary route of transmission, particularly through the consumption of a local fruit called açaí when it is contaminated with T. cruzi\n\n10\n.\nWith globalization, the disease has spread to countries in the Northern Hemisphere, particularly in the United States and Spain\n11\n. Because of this new geographic rearrangement of CD, studies describing the clinical and epidemiological profile of patients with chronic CD have been conducted in urban healthcare facilities in both endemic\n12\n\n-\n\n15\n and nonendemic regions\n16\n\n-\n\n19\n. 
Some of these facilities have also become reference centers for the treatment of CD that offer specialized care\n20\n\n-\n\n23\n.\nThe Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation (INI-Fiocruz), located in Rio de Janeiro, Brazil, is a national reference center for the treatment and research of infectious and tropical diseases covered by the Sistema Único de Saúde (SUS), the Brazilian National Health Service. INI-Fiocruz receives patients from various regions of the country and offers comprehensive and multidisciplinary care to patients with CD. To date, few studies have addressed the clinical and epidemiological profiles of the Brazilian population with chronic CD. The present study aimed to describe the clinical and epidemiological profiles of a historical cohort of patients with chronic CD followed up at the INI-Fiocruz.", "This was a retrospective descriptive study including patients diagnosed with chronic CD who were referred to the outpatient center of the INI-Fiocruz between November 1986 and December 2019. Clinical and epidemiological data were retrieved from the medical records.\nAfter the serological diagnosis of chronic CD was confirmed by two simultaneous reactive serological techniques, all patients underwent an initial evaluation protocol, which included sociodemographic information (age, level of education, and race); epidemiological history (transmission mode, country and state of origin, time away from endemic area); clinical anamnesis; a physical examination focused on chronic CD-related cardiovascular and digestive signs and symptoms; a 12-lead electrocardiogram (ECG); and a two-dimensional Doppler echocardiogram. According to the presence of symptoms related to the digestive form of CD, the following examinations were performed: upper gastrointestinal endoscopy, esophagography, colonoscopy, and a contrast barium enema. The level of education was categorized based on the number of years of formal study as illiterate, < 9 years, or > 9 years. Race was self-reported and classified as white, black, mulatto, or indigenous.\nClinical forms of chronic CD were retrospectively classified according to the 2nd Brazilian Consensus on Chagas Disease\n24\n. The cardiac form was classified into A, B1, B2, C, or D stages, and the digestive form was classified into megaesophagus, megacolon, or both megaesophagus and megacolon. Information about the region of origin was classified according to the prevalence and morbidity of chronic CD, which was based on serological data from national prevalence surveys\n5\n and a national electrocardiographic survey\n25\n. The prevalence was categorized as low (< 2%), medium (2%-4%), high (> 4%), and nonendemic (Rio de Janeiro and Espírito Santo), whereas the levels of morbidity by area were categorized as low (normal ECGs > 50%), high (normal ECGs < 50%), and nonendemic areas (Rio de Janeiro and Espírito Santo). \nData analysis Descriptive statistics are presented as means (standard deviations) for continuous and absolute frequencies (percentages) for categorical variables. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous (nptrend command in Stata 13.0) variables and using generalized linear models with binomial distribution for categorical variables (binreg command in Stata 13.0). The independent variable was the 5-year period, and the dependent variables were the binary classes of each categorical variable. 
The link choice was selected according to the lowest Bayesian information criterion. Statistical significance was set at a 2-tailed p-value of <0.05. All statistical analyses were performed using Stata software (version 13.0; StataCorp LP.; College Station, TX, USA). \nDescriptive statistics are presented as means (standard deviations) for continuous and absolute frequencies (percentages) for categorical variables. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous (nptrend command in Stata 13.0) variables and using generalized linear models with binomial distribution for categorical variables (binreg command in Stata 13.0). The independent variable was the 5-year period, and the dependent variables were the binary classes of each categorical variable. The link choice was selected according to the lowest Bayesian information criterion. Statistical significance was set at a 2-tailed p-value of <0.05. All statistical analyses were performed using Stata software (version 13.0; StataCorp LP.; College Station, TX, USA). \nEthics approval This study was approved by the INI-Fiocruz Research Ethics Committee (number CAAE:35748820.1.0000.5262) on September 2, 2020 and was carried out in accordance with the 1964 Declaration of Helsinki and its later amendments. The need for informed consent was waived considering the retrospective nature of the study.\nThis study was approved by the INI-Fiocruz Research Ethics Committee (number CAAE:35748820.1.0000.5262) on September 2, 2020 and was carried out in accordance with the 1964 Declaration of Helsinki and its later amendments. The need for informed consent was waived considering the retrospective nature of the study.", "Descriptive statistics are presented as means (standard deviations) for continuous and absolute frequencies (percentages) for categorical variables. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous (nptrend command in Stata 13.0) variables and using generalized linear models with binomial distribution for categorical variables (binreg command in Stata 13.0). The independent variable was the 5-year period, and the dependent variables were the binary classes of each categorical variable. The link choice was selected according to the lowest Bayesian information criterion. Statistical significance was set at a 2-tailed p-value of <0.05. All statistical analyses were performed using Stata software (version 13.0; StataCorp LP.; College Station, TX, USA). ", "This study was approved by the INI-Fiocruz Research Ethics Committee (number CAAE:35748820.1.0000.5262) on September 2, 2020 and was carried out in accordance with the 1964 Declaration of Helsinki and its later amendments. The need for informed consent was waived considering the retrospective nature of the study.", "The characteristics of the patients referred to the INI-Fiocruz are shown in Table 1. A total of 2,168 patients (52.2% women) were included from August 1986 to December 2019, with a mean age of 47.8 years (range, 13-88 years). The plurality self-reported as white (49.8%) and had < 9 years of education (80.5%). The reported transmission mode was vectorial in 90.2% of the patients. The majority, originating from Brazil (98.7%), were born in areas with high prevalence (52.2%) and morbidity (67.8%) of CD, mostly Minas Gerais and Bahia, and had moved away from endemic areas for >20 years (65.8%). 
The indeterminate form was the most common clinical presentation (44.9%), and stage A (21.9%) was the most common clinical presentation of the cardiac form. Only a minority of patients had digestive or cardiodigestive forms (11.7%), with 7.4% presenting with megaesophagus, 2.5% with megacolon, and 1.8% with both megaesophagus and megacolon.\n\nTABLE 1:Characteristics of patients admitted at the INI-Fiocruz (n=2,168).VariableMean (SD) Minimum-\nMaximum or Frequency (%)Age (years)47.8 (12.8) 13-88Female sex1,132 (52.2)Race\nWhite1,080 (49.8)Mulatto815 (37.6)Black261 (12.0)Indigenous12 (0.6)Level of education\nIlliterate448 (20.7)< 9 years1,297 (59.8)> 9 years423 (19.5)Transmission mode\nVectorial1,956 (90.2)Transfusion124 (5.7)Vertical62 (2.9)Oral2 (0.1)Not identified24 (1.1)Country of origin\nBrazil2,140 (98.7)Other Latin American countries28 (1.3)Region of origin according to prevalence\nNonendemic Chagas disease 102 (4.7)Low Chagas disease prevalence 260 (12.0)Medium Chagas disease prevalence 674 (31.1)High Chagas disease prevalence 1,132 (52.2)Region of origin according to morbidity\nNonendemic Chagas disease102 (4.7)Low Chagas disease morbidity 596 (27.5)High Chagas disease morbidity1,470 (67.8)Time away from endemic area\nNone82 (3.8)1 to 20 years577 (26.6)>20 years1,426 (65.8)Nonendemic area83 (3.8)Clinical form (n=2,136)\nIndeterminate 960 (44.9)Cardiac924 (43.3)Digestive125 (5.9)Cardiodigestive127 (5.9)Clinical cardiac stagesa (n=2,136)\nNone1085 (50.8)Stage A467 (21.9)Stage B1250 (11.7)Stage B2108 (5.1)Stage C206 (9.6)Stage D20 (0.9)Clinical digestive presentation\nNone1914 (88.3)Megaesophagus160 (7.4)Megacolon54 (2.5)Megaesophagus and megacolon40 (1.8)\naAccording to the 2nd Brazilian Consensus on Chagas Disease\n24\n. Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation.\n\n\naAccording to the 2nd Brazilian Consensus on Chagas Disease\n24\n. Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation.\nThe characteristics of patients referred to the INI-Fiocruz for each 5-year period are shown in Table 2. The number of patients referred increased from the 1986-1990 period (n=138) to the 2000-2004 period (n=569) and then gradually decreased until the 2015-2019 period (n=162). The age at the time of referral increased during all study periods (p<0.001). The number of white patients decreased over time (p<0.001), whereas the number of mulatto (p<0.001) and black (p<0.001) patients increased. Illiterate patients decreased over time (p =0.02), and those with >9 years of education increased (p=0.008). 
The vectorial and oral transmission modes increased over the study period (p<0.001 for both) while transfusion transmission decreased (p<0.001).\n\nTABLE 2:Characteristics of patients admitted at the INI-Fiocruz for each 5-year period (n=2,168)VariableMean (SD) inimum-Maximum or Frequency (%) p-value for trend\n1986-19891990-19941995-19992000-20042005-20092010-20142015-2019\n\n(n=138)(n=295)(n=498)(n=569)(n=307)(n=199)(n=162)Age (years)44.3 (10.6)45.3 (12.0)45.9 (11.3)46.3 (12.2)49.7 (14.1)53.5 (12.9)57.1 (12.9)<0.001\n22-7319-8516-8413-8415-8719-8521-88\nFemale sex71 (51.8)168 (57.0)252 (50.60)283 (49.7)158 (51.5)116 (58.3)84 (51.9)0.93Race\n\n\n\n\n\n\n\nWhite76 (55.1)174 (59.0)244 (49.0)314 (55.2)154 (50.2)82 (41.2)36 (22.2)<0.001Mulatto52 (37.7)91 (30.9)197 (39.6)180 (31.6)101 (32.9)97 (48.7)97 (59.9)<0.001Black8 (5.8)28 (9.5)52 (10.4)72 (12.7)52 (16.9)20 (10.1)29 (17.9)<0.001Indigenous2 (1.5)2 (0.7)5 (1.0)3 (0.5)0 (0.0)0 (0.0)0 (0.0)0.02Level of education\n\n\n\n\n\n\n\nIlliterate32 (23.2)52 (17.6)141 (28.3)110 (19.3)50 (16.3)34 (17.1)29 (17.9)0.02< 9 years86 (62.3)182 (61.7)268 (53.8)350 (61.5)209 (68.1)117 (58.8)85 (52.2)0.87> 9 years20 (14.5)61 (20.7)89 (17.9)109 (19.2)48 (15.6)48 (24.1)48 (29.6)0.008Transmission mode\n\n\n\n\n\n\n\nVectorial119 (86.2)254 (86.1)435 (87.4)525 (92.3)285 (92.8)189 (95.0)149 (92.0)<0.001Transfusion16 (11.6)29 (9.8)45 (9.0)16 (2.8)8 (2.6)3 (1.5)7 (4.3)<0.001Vertical1 (0.7)8 (2.7)13 (2.6)24 (4.2)9 (2.9)4 (2.0)3 (1.9)0.81Oral0 (0.0)0 (0.0)0 (0.0)0 (0.0)0 (0.0)0 (0.0)2 (1.2)<0.001Not identified2 (1.5)4 (1.4)5 (1.0)4 (0.7)5 (1.6)3 (1.5)1 (0.6)0.84Country of origin\n\n\n\n\n\n\n\nBrazil135 (97.8)289 (98.0)491 (98.6)565 (99.3)302 (98.4)197 (99.0)161 (99.4)0.15Other Latin American countries3 (2.2)6 (2.0)7 (1.4)4 (0.7)5 (1.6)2 (1.0)1 (0.6)Region of origin according to prevalence\n\n\n\n\n\n\n\nNonendemic Chagas disease5 (3.6)18 (6.1)19 (3.8)31 (5.5)14 (4.6)8 (4.0)7 (4.3)0.78Low Chagas disease prevalence7 (5.1)17 (5.8)58 (11.7)74 (13.0)42 (13.7)38 (19.1)24 (14.8)<0.001Medium Chagas disease prevalence37 (26.8)93 (31.5)137 (27.5)186 (32.7)101 (32.9)56 (28.1)64 (39.5)0.04High Chagas disease prevalence89 (64.5)167 (56.6)284 (57.0)278 (48.9)150 (48.9)97 (48.7)67 (41.4)<0.001Region of origin according to morbidity\n\n\n\n\n\n\n\nNonendemic Chagas disease area5 (3.7)18 (6.1)19 (3.8)31 (5.5)14 (4.6)8 (4.0)7 (4.3)0.78Low Chagas disease morbidity area22 (15.9)67 (22.7)130 (26.1)159 (27.9)95 (30.9)64 (32.2)59 (36.4)<0.001High Chagas disease morbidity area111 (80.4)210 (71.2)349 (70.1)379 (66.6)198 (64.5)127 (63.8)96 (59.3)<0.001Time away from endemic area\n\n\n\n\n\n\n\nNone1 (0.7)8 (2.7)29 (5.8)19 (3.3)16 (5.2)5 (2.5)4 (2.5)0.901 to 20 years16 (11.6)57 (19.3)129 (25.9)179 (31.5)108 (35.2)62 (31.2)26 (16.1)0.002>20 years117 (84.8)216 (73.2)323 (64.9)346 (60.8)173 (56.4)124 (62.3)127 (78.4)0.005Nonendemic area4 (2.9)14 (4.8)17 (3.4)25 (4.4)10 (3.3)8 (4.0)5 (3.1)0.77Clinical form (n=2,136)\n\n\n\n\n\n\n\nIndeterminate56 (40.6)135 (45.8)264 (53.0)280 (51.2)107 (35.9)66 (33.3)52 (32.1)<0.001Cardiac68 (49.3)142 (48.1)204 (41.0)221 (40.4)140 (47.0)81 (40.9)68 (42.0)0.20Digestive6 (4.3)12 (4.1)14 (2.8)26 (4.8)23 (7.7)28 (14.2)16 (9.9)<0.001Cardiodigestive8 (5.8)6 (2.0)16 (3.2)20 (3.6)28 (9.4)23 (11.6)26 (16.0)<0.001Clinical cardiac stagesa (n=2,136)\n\n\n\n\n\n\n\nStage A36 (26.1)58 (19.7)104 (20.9)129 (23.6)66 (22.2)33 (16.7)41 (25.3)0.93Stage B117 (12.3)45 (15.3)48 (9.6)48 (8.8)40 (13.4)32 (16.2)20 (12.4)0.59Stage B213 (9.4)16 (5.4)26 (5.2)21 (3.8)17 (5.7)8 (4.0)7 
(4.3)0.10Stage C9 (6.5)21 (7.1)39 (7.8)38 (8.0)44 (14.8)30 (15.2)25 (15.3)<0.001Stage D1 (0.7)8 (2.7)3 (0.6)5 (0.9)1 (0.3)1 (0.5)1 (0.6)0.07Clinical digestive presentation\n\n\n\n\n\n\n\nMegaesophagus8 (5.8)14 (4.8)19 (3.8)26 (4.6)35 (11.4)33 (16.6)25 (15.4)<0.001Megacolon3 (2.2)3 (1.0)8 (1.6)12 (2.1)7 (2.3)10 (5.0)11 (6.8)<0.001Megaesophagus and megacolon3 (2.2)1 (0.3)3 (0.6)10 (1.8)9 (2.9)8 (4.0)6 (3.7)0.001\naAccording to the 2nd Brazilian Consensus on Chagas Disease (2015)\n24\n. Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation.\n\n\naAccording to the 2nd Brazilian Consensus on Chagas Disease (2015)\n24\n. Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation.\nThe percentage of patients that originated from low CD prevalence areas increased over the study period (p<0.001), while patients originating from high CD prevalence areas decreased (p<0.001). The same pattern was observed for the CD morbidity areas, with an increase in the percentage of patients who originated from low CD morbidity areas (p<0.001) and a decrease in patients from high CD morbidity areas (p<0.001). The number of patients who had moved away from an endemic area for <20 years increased (p=0.002) with a concomitant decrease of patients that had moved away for >20 years (p=0.005).\nThe clinical presentation characteristics also changed over the study period, with an increase in the digestive form (p<0.001) and a decrease in the indeterminate form (p<0.001). Considering the clinical cardiac stages, there was an increase in the percentage of patients admitted with stage C (p<0.001), however there were no changes observed in the other stages. ", "The INI-Fiocruz is a reference center for CD that provides diagnostic interpretation for patients referred from blood banks, primary and secondary care units, private health services, or by spontaneous demand, and it offers integral and multidisciplinary clinical care to patients with CD. In the present study, most patients were long-time residents of the metropolitan region of the state of Rio de Janeiro and who had been away from endemic areas for many years. Although patients lived in Rio de Janeiro, most were migrants from 19 Brazilian states, predominantly from Bahia and Minas Gerais, which constitutes almost 50% of the studied cohort. A study conducted in the 1960s that evaluated natural-born Brazilian citizens with chronic CD residing in the city of Rio de Janeiro reported the same prevalence\n26\n. A similar profile was reported in a study published by Ianni\n27\n of patients living in the city of São Paulo, indicating the migratory profile of the CD-infected population. Dias et al.\n28\n studied the status of chronic CD in the Northeastern region and reported that the state of Bahia had the highest prevalence of the disease in the region, which is similar to that of the state of Minas Gerais. They also discussed the social causes of the strong emigration to large urban centers in the Southeastern region, which justifies the significant number of people from Minas Gerais and Bahia in the present cohort. Besides, 1.3% of the patients come from other South American countries, predominantly from Bolivia, a country with a prevalence rate of 4.4% for chronic CD among migrants\n29\n.\nThe vector route is the most probable transmission mechanism. 
The majority of patients reported living in rural areas in houses made of mud and straw and were aware of the triatomine bug; some even reported intradomicile coexistence with the insect, yet only a few remembered that they had been bitten. Most patients claimed that they were unaware that the triatomine bug was a health risk. This finding indicates the knowledge deficit regarding CD among these patients, which reinforces why most of them do not consider living with triatomine bugs a threat to their health [30]. Several studies on transmission mechanisms have also reported the vector route as the primary route of disease transmission [12, 13, 20].
This cohort showed a slight predominance of women (52.2%), characterizing a balanced cohort in terms of sex and reflecting the distribution of men and women in the Brazilian population, which is 51% women and 49% men according to the Brazilian Institute of Geography and Statistics. In previous studies, women accounted for 46% to 84% of the total study population [13-17, 20-23, 34, 36, 37].
In the center where this study was conducted, white patients predominated (49.8%). Gontijo et al. [20] reported a predominance of mulattos (43%) in Minas Gerais, while Gasparim et al. [13] reported 51.1% of whites in Paraná, suggesting ethnic differences by geographic location. Most patients (59.8%) had low levels of education. This is attributed to the origin of the patients, usually born in rural areas without access to formal education. Gontijo et al. [20] showed that 84% of patients examined in a reference outpatient clinic in Belo Horizonte, Brazil were either illiterate or semi-illiterate. Pereira et al. [14] reported that 40.2% of patients in a reference center in Fortaleza were illiterate. The number of white patients decreased over time, whereas the number of mulatto and black patients increased. The number of illiterate patients decreased and those with >9 years of education increased. These data suggest that the Brazilian population of low socioeconomic strata has gained access to both health services and formal education in the last few decades.
The mean age of the patients was 47.8 years. Field studies performed between 1960 and the early 2000s reported a progressive increase in the mean age of patients with chronic CD over time, from less than 25 to 45 years [31-33]. Studies on the clinical epidemiological profile of patients with chronic CD in urban centers reported that the mean age showed the same increasing trend in the last decades, ranging from 37.7 to 67.5 years [12, 13, 15, 20-22]. Studies conducted in nonendemic regions of the Northern Hemisphere, where patients had migrated from South America (especially Bolivia) and Central America, reported younger patients with chronic CD, with a mean age ranging from 28.5 to 47 years [16-18, 34-37], reflecting the lack of CD vectorial transmission control in their countries of origin and the fact that vectorial transmission remained active in rural areas. The present study showed a progressive increase over time in the age of patients who were included in the study cohort during the 5-year intervals of the study period, with the mean age increasing from 44.3 to 57.1 years. Following the increase in the age of patients with CD, there was a corresponding decrease in the number of new patients requesting care at the INI-Fiocruz. 
An ascending temporal curve was observed between 1986 and 1994, stabilizing between 1995 and 2004 and decreasing from 2005 onward. This behavior may reflect the successful control of vector transmission by Triatoma infestans as well as of transfusion transmission in Brazil; however, it may also reflect the decrease in rural-to-urban migration in recent decades, as the reduced economic strength of large cities has attracted fewer people.
Studies on the clinical form of CD conducted in urban areas have shown that the cardiac form predominates, with a prevalence ranging from 56% to 66% [12-16, 22, 23]. Few studies have reported a higher incidence of the indeterminate form, varying between 56% and 81.6% [17, 20]. These differences in the prevalence of clinical forms are attributed to the profile of the healthcare unit. CD reference centers usually receive asymptomatic donors from blood banks, asymptomatic family members of patients under follow-up at the institution, and spontaneous demand for serological diagnosis, which tends to mirror the epidemiological reality of the disease, in which the indeterminate forms predominate. Symptomatic patients are expected in secondary and tertiary care units, which involve the management of more complex cases. Therefore, cases of the cardiac and digestive forms are more prevalent in these units. In the present study, the indeterminate form was the most prevalent (44.9%). The digestive form, either isolated or associated with heart disease, had a prevalence of 11.8%, which is within the range of the mean prevalence of this clinical form (9%-41%) presented in other studies [12-15, 17, 19, 20, 22]. Regarding the cardiac and cardiodigestive forms, the most common cardiac stage was stage A (44.4%), which, if combined with the indeterminate form, would account for 65.8% of the cohort, with normal left ventricular systolic function on the ECG indicating patients with long-term benign prognoses.
In this cohort, although the indeterminate form was common among all patients throughout the study period, it occurred mainly between 1995 and 2004. Before and after this period, the cardiac form was more prevalent in newly diagnosed patients. Until 1994, the cardiac form reflected patients coming from areas with active vector transmission and with higher CD prevalence and morbidity. After the mid-2000s, although there was an increased number of patients coming from areas of lower CD morbidity, they tended to be older, with several comorbidities such as systemic arterial hypertension, dyslipidemia, diabetes mellitus, and ischemic heart disease. These comorbidities usually lead to cardiac changes that are reflected in the ECG and may mimic the electrocardiographic changes of Chagas heart disease, leading patients to be classified as having the cardiac form of the disease [38]. The progressive increase in the presence of mega forms of CD over time may also be related to the aging of patients in the CD cohort, explained by the gradual progression of the neuronal degeneration associated with the disease [39].
The INI-Fiocruz cohort is mostly composed of patients who migrated from various Brazilian regions, which may make it partially representative of the total CD-infected population in Brazil, which is characterized by a predominance of the indeterminate clinical form of the disease. 
In addition, the progressive decrease in the number of new patients entering the cohort in recent years likely reflects the success of the control of vector transmission by Triatoma infestans and of transfusion transmission in Brazil over recent decades. We also observed the gradual aging of patients with chronic CD. " ]
[ "intro", "methods", null, null, "results", "discussion" ]
[ "Chagas disease", "Epidemiologic studies", "Cohort studies" ]
INTRODUCTION: Chagas disease (CD) is considered a neglected tropical disease by the World Health Organization, with an estimated 6-7 million people infected worldwide. The implementation of CD vector and transfusion control programs in the 1980s significantly decreased the rate of disease transmission in Latin American countries. However, several challenges have hampered the effective implementation of disease surveillance due to new outbreaks of orally transmitted CD in endemic countries and the possibility of vertical transmission even in nonendemic areas. Integrated surveillance and healthcare interventions are now directed at a large contingent of patients already infected with Trypanosoma cruzi (T. cruzi), a significant portion of whom may develop chronic Chagas heart disease, a major determinant of morbidity and mortality. Owing to rapid globalization, CD cases are no longer restricted to Latin America, constituting a new challenge in the battle against this disease 1 . In Brazil, the process of CD urbanization in the last decades of the 20th century has increased the number of patients with CD in urban cities, which has increased the demand for local health care services. This new urban context has also prompted the modification of the clinical-epidemiological profile of patients with CD, evidenced by changes in work activities, food consumption patterns, increased age, significant prevalence of comorbidities, and social determinants as a whole 2 - 4 . The first national serological survey that evaluated the prevalence of CD in Brazil aimed to quantify the endemic transmission of CD 5 . This study was conducted in rural areas of Brazil between 1975 and 1980, with an estimated national seroprevalence CD rate of 4.22%. A second national serological CD survey 6 conducted between 2001 and 2008 analyzed the seroprevalence in children aged up to five years, which only reported a CD serum positivity rate of 0.03%. The later survey highlighted the impact of the CD control measures implemented in previous decades, which led the Pan American Health Organization to grant Brazil an International Certificate of Elimination of CD transmission by Triatoma infestans and blood transfusions 7 . In Brazil, the current estimated prevalence of CD is much lower than that reported in the 1970s. However, the estimates reported in previous studies on the prevalence of CD are imprecise and subject to criticism due to the lack of standardized data collection and heterogeneity in most of these estimates 8 . Therefore, at present, the exact number of Brazilians with CD is unknown, although it is projected to be between 1 and 1.5 million people 9 . Currently, new cases of CD in Brazil are mostly restricted to the Legal Amazon, with oral being the primary route of transmission, particularly through the consumption of a local fruit called açaí when it is contaminated with T. cruzi 10 . With globalization, the disease has spread to countries in the Northern Hemisphere, particularly in the United States and Spain 11 . Because of this new geographic rearrangement of CD, studies describing the clinical and epidemiological profile of patients with chronic CD have been conducted in urban healthcare facilities in both endemic 12 - 15 and nonendemic regions 16 - 19 . Some of these facilities have also become reference centers for the treatment of CD that offer specialized care 20 - 23 . 
The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation (INI-Fiocruz), located in Rio de Janeiro, Brazil, is a national reference center for the treatment and research of infectious and tropical diseases covered by the Sistema Único de Saúde (SUS), the Brazilian National Health Service. INI-Fiocruz receives patients from various regions of the country and offers comprehensive and multidisciplinary care to patients with CD. To date, few studies have addressed the clinical and epidemiological profiles of the Brazilian population with chronic CD. The present study aimed to describe the clinical and epidemiological profiles of a historical cohort of patients with chronic CD followed up at the INI-Fiocruz. METHODS: This was a retrospective descriptive study including patients diagnosed with chronic CD who were referred to the outpatient center of the INI-Fiocruz between November 1986 and December 2019. Clinical and epidemiological data were retrieved from the medical records. After the serological diagnosis of chronic CD was confirmed by two simultaneous reactive serological techniques, all patients underwent an initial evaluation protocol, which included sociodemographic information (age, level of education, and race); epidemiological history (transmission mode, country and state of origin, time away from endemic area); clinical anamnesis; a physical examination focused on chronic CD-related cardiovascular and digestive signs and symptoms; a 12-lead electrocardiogram (ECG); and a two-dimensional Doppler echocardiogram. According to the presence of symptoms related to the digestive form of CD, the following examinations were performed: upper gastrointestinal endoscopy, esophagography, colonoscopy, and a contrast barium enema. The level of education was categorized based on the number of years of formal study as illiterate, < 9 years, or > 9 years. Race was self-reported and classified as white, black, mulatto, or indigenous. Clinical forms of chronic CD were retrospectively classified according to the 2nd Brazilian Consensus on Chagas Disease 24 . The cardiac form was classified into A, B1, B2, C, or D stages, and the digestive form was classified into megaesophagus, megacolon, or both megaesophagus and megacolon. Information about the region of origin was classified according to the prevalence and morbidity of chronic CD, which was based on serological data from national prevalence surveys 5 and a national electrocardiographic survey 25 . The prevalence was categorized as low (< 2%), medium (2%-4%), high (> 4%), and nonendemic (Rio de Janeiro and Espírito Santo), whereas the levels of morbidity by area were categorized as low (normal ECGs > 50%), high (normal ECGs < 50%), and nonendemic areas (Rio de Janeiro and Espírito Santo). Data analysis Descriptive statistics are presented as means (standard deviations) for continuous and absolute frequencies (percentages) for categorical variables. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous (nptrend command in Stata 13.0) variables and using generalized linear models with binomial distribution for categorical variables (binreg command in Stata 13.0). The independent variable was the 5-year period, and the dependent variables were the binary classes of each categorical variable. The link choice was selected according to the lowest Bayesian information criterion. Statistical significance was set at a 2-tailed p-value of <0.05. 
All statistical analyses were performed using Stata software (version 13.0; StataCorp LP.; College Station, TX, USA). Descriptive statistics are presented as means (standard deviations) for continuous and absolute frequencies (percentages) for categorical variables. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous (nptrend command in Stata 13.0) variables and using generalized linear models with binomial distribution for categorical variables (binreg command in Stata 13.0). The independent variable was the 5-year period, and the dependent variables were the binary classes of each categorical variable. The link choice was selected according to the lowest Bayesian information criterion. Statistical significance was set at a 2-tailed p-value of <0.05. All statistical analyses were performed using Stata software (version 13.0; StataCorp LP.; College Station, TX, USA). Ethics approval This study was approved by the INI-Fiocruz Research Ethics Committee (number CAAE:35748820.1.0000.5262) on September 2, 2020 and was carried out in accordance with the 1964 Declaration of Helsinki and its later amendments. The need for informed consent was waived considering the retrospective nature of the study. This study was approved by the INI-Fiocruz Research Ethics Committee (number CAAE:35748820.1.0000.5262) on September 2, 2020 and was carried out in accordance with the 1964 Declaration of Helsinki and its later amendments. The need for informed consent was waived considering the retrospective nature of the study. Data analysis: Descriptive statistics are presented as means (standard deviations) for continuous and absolute frequencies (percentages) for categorical variables. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous (nptrend command in Stata 13.0) variables and using generalized linear models with binomial distribution for categorical variables (binreg command in Stata 13.0). The independent variable was the 5-year period, and the dependent variables were the binary classes of each categorical variable. The link choice was selected according to the lowest Bayesian information criterion. Statistical significance was set at a 2-tailed p-value of <0.05. All statistical analyses were performed using Stata software (version 13.0; StataCorp LP.; College Station, TX, USA). Ethics approval: This study was approved by the INI-Fiocruz Research Ethics Committee (number CAAE:35748820.1.0000.5262) on September 2, 2020 and was carried out in accordance with the 1964 Declaration of Helsinki and its later amendments. The need for informed consent was waived considering the retrospective nature of the study. RESULTS: The characteristics of the patients referred to the INI-Fiocruz are shown in Table 1. A total of 2,168 patients (52.2% women) were included from August 1986 to December 2019, with a mean age of 47.8 years (range, 13-88 years). The plurality self-reported as white (49.8%) and had < 9 years of education (80.5%). The reported transmission mode was vectorial in 90.2% of the patients. The majority, originating from Brazil (98.7%), were born in areas with high prevalence (52.2%) and morbidity (67.8%) of CD, mostly Minas Gerais and Bahia, and had moved away from endemic areas for >20 years (65.8%). The indeterminate form was the most common clinical presentation (44.9%), and stage A (21.9%) was the most common clinical presentation of the cardiac form. 
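The per-period trend tests described above were run in Stata (nptrend for continuous variables, binreg with the link chosen by BIC for categorical variables). As a rough illustration of the same idea, the Python sketch below tests for a monotonic trend across the seven 5-year periods. It is not the authors' code: the DataFrame `df` and its columns (`period`, `age`, `cardiac_form`) are hypothetical stand-ins, Kendall's tau is used as a nonparametric trend analog rather than Stata's exact nptrend statistic, and only two candidate links are compared by BIC.

```python
# Sketch of a period-trend analysis, assuming one row per patient with a
# `period` column coded 0..6 for the seven 5-year intervals (1986-2019).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import kendalltau


def continuous_trend(df, col, period="period"):
    """Nonparametric trend of a continuous variable across ordered periods
    (Kendall's tau; an analog of, not identical to, Stata's nptrend)."""
    tau, p = kendalltau(df[period], df[col])
    return tau, p


def categorical_trend(df, col, period="period"):
    """GLM with binomial family and period as the independent variable.
    Fits two candidate links and keeps the one with the lowest BIC."""
    X = sm.add_constant(df[period].astype(float))
    fits = []
    for link in (sm.families.links.Logit(), sm.families.links.Identity()):
        try:
            fits.append(sm.GLM(df[col], X,
                               family=sm.families.Binomial(link=link)).fit())
        except Exception:
            continue  # a link may fail to converge; skip it
    best = min(fits, key=lambda r: r.bic)
    return best.params[period], best.pvalues[period]


# Simulated data purely to make the sketch runnable.
rng = np.random.default_rng(0)
df = pd.DataFrame({"period": rng.integers(0, 7, 500)})
df["age"] = 44 + 2 * df["period"] + rng.normal(0, 12, 500)
df["cardiac_form"] = rng.binomial(1, 0.35 + 0.02 * df["period"])

print(continuous_trend(df, "age"))          # (tau, p-value)
print(categorical_trend(df, "cardiac_form"))  # (slope per period, p-value)
```

The same two helpers could, in principle, be looped over every variable in Table 2 to reproduce a "p-value for trend" column.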
Only a minority of patients had digestive or cardiodigestive forms (11.7%), with 7.4% presenting with megaesophagus, 2.5% with megacolon, and 1.8% with both megaesophagus and megacolon. TABLE 1:Characteristics of patients admitted at the INI-Fiocruz (n=2,168).VariableMean (SD) Minimum- Maximum or Frequency (%)Age (years)47.8 (12.8) 13-88Female sex1,132 (52.2)Race White1,080 (49.8)Mulatto815 (37.6)Black261 (12.0)Indigenous12 (0.6)Level of education Illiterate448 (20.7)< 9 years1,297 (59.8)> 9 years423 (19.5)Transmission mode Vectorial1,956 (90.2)Transfusion124 (5.7)Vertical62 (2.9)Oral2 (0.1)Not identified24 (1.1)Country of origin Brazil2,140 (98.7)Other Latin American countries28 (1.3)Region of origin according to prevalence Nonendemic Chagas disease 102 (4.7)Low Chagas disease prevalence 260 (12.0)Medium Chagas disease prevalence 674 (31.1)High Chagas disease prevalence 1,132 (52.2)Region of origin according to morbidity Nonendemic Chagas disease102 (4.7)Low Chagas disease morbidity 596 (27.5)High Chagas disease morbidity1,470 (67.8)Time away from endemic area None82 (3.8)1 to 20 years577 (26.6)>20 years1,426 (65.8)Nonendemic area83 (3.8)Clinical form (n=2,136) Indeterminate 960 (44.9)Cardiac924 (43.3)Digestive125 (5.9)Cardiodigestive127 (5.9)Clinical cardiac stagesa (n=2,136) None1085 (50.8)Stage A467 (21.9)Stage B1250 (11.7)Stage B2108 (5.1)Stage C206 (9.6)Stage D20 (0.9)Clinical digestive presentation None1914 (88.3)Megaesophagus160 (7.4)Megacolon54 (2.5)Megaesophagus and megacolon40 (1.8) aAccording to the 2nd Brazilian Consensus on Chagas Disease 24 . Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation. aAccording to the 2nd Brazilian Consensus on Chagas Disease 24 . Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation. The characteristics of patients referred to the INI-Fiocruz for each 5-year period are shown in Table 2. The number of patients referred increased from the 1986-1990 period (n=138) to the 2000-2004 period (n=569) and then gradually decreased until the 2015-2019 period (n=162). The age at the time of referral increased during all study periods (p<0.001). The number of white patients decreased over time (p<0.001), whereas the number of mulatto (p<0.001) and black (p<0.001) patients increased. Illiterate patients decreased over time (p =0.02), and those with >9 years of education increased (p=0.008). The vectorial and oral transmission modes increased over the study period (p<0.001 for both) while transfusion transmission decreased (p<0.001). 
TABLE 2:Characteristics of patients admitted at the INI-Fiocruz for each 5-year period (n=2,168)VariableMean (SD) inimum-Maximum or Frequency (%) p-value for trend 1986-19891990-19941995-19992000-20042005-20092010-20142015-2019 (n=138)(n=295)(n=498)(n=569)(n=307)(n=199)(n=162)Age (years)44.3 (10.6)45.3 (12.0)45.9 (11.3)46.3 (12.2)49.7 (14.1)53.5 (12.9)57.1 (12.9)<0.001 22-7319-8516-8413-8415-8719-8521-88 Female sex71 (51.8)168 (57.0)252 (50.60)283 (49.7)158 (51.5)116 (58.3)84 (51.9)0.93Race White76 (55.1)174 (59.0)244 (49.0)314 (55.2)154 (50.2)82 (41.2)36 (22.2)<0.001Mulatto52 (37.7)91 (30.9)197 (39.6)180 (31.6)101 (32.9)97 (48.7)97 (59.9)<0.001Black8 (5.8)28 (9.5)52 (10.4)72 (12.7)52 (16.9)20 (10.1)29 (17.9)<0.001Indigenous2 (1.5)2 (0.7)5 (1.0)3 (0.5)0 (0.0)0 (0.0)0 (0.0)0.02Level of education Illiterate32 (23.2)52 (17.6)141 (28.3)110 (19.3)50 (16.3)34 (17.1)29 (17.9)0.02< 9 years86 (62.3)182 (61.7)268 (53.8)350 (61.5)209 (68.1)117 (58.8)85 (52.2)0.87> 9 years20 (14.5)61 (20.7)89 (17.9)109 (19.2)48 (15.6)48 (24.1)48 (29.6)0.008Transmission mode Vectorial119 (86.2)254 (86.1)435 (87.4)525 (92.3)285 (92.8)189 (95.0)149 (92.0)<0.001Transfusion16 (11.6)29 (9.8)45 (9.0)16 (2.8)8 (2.6)3 (1.5)7 (4.3)<0.001Vertical1 (0.7)8 (2.7)13 (2.6)24 (4.2)9 (2.9)4 (2.0)3 (1.9)0.81Oral0 (0.0)0 (0.0)0 (0.0)0 (0.0)0 (0.0)0 (0.0)2 (1.2)<0.001Not identified2 (1.5)4 (1.4)5 (1.0)4 (0.7)5 (1.6)3 (1.5)1 (0.6)0.84Country of origin Brazil135 (97.8)289 (98.0)491 (98.6)565 (99.3)302 (98.4)197 (99.0)161 (99.4)0.15Other Latin American countries3 (2.2)6 (2.0)7 (1.4)4 (0.7)5 (1.6)2 (1.0)1 (0.6)Region of origin according to prevalence Nonendemic Chagas disease5 (3.6)18 (6.1)19 (3.8)31 (5.5)14 (4.6)8 (4.0)7 (4.3)0.78Low Chagas disease prevalence7 (5.1)17 (5.8)58 (11.7)74 (13.0)42 (13.7)38 (19.1)24 (14.8)<0.001Medium Chagas disease prevalence37 (26.8)93 (31.5)137 (27.5)186 (32.7)101 (32.9)56 (28.1)64 (39.5)0.04High Chagas disease prevalence89 (64.5)167 (56.6)284 (57.0)278 (48.9)150 (48.9)97 (48.7)67 (41.4)<0.001Region of origin according to morbidity Nonendemic Chagas disease area5 (3.7)18 (6.1)19 (3.8)31 (5.5)14 (4.6)8 (4.0)7 (4.3)0.78Low Chagas disease morbidity area22 (15.9)67 (22.7)130 (26.1)159 (27.9)95 (30.9)64 (32.2)59 (36.4)<0.001High Chagas disease morbidity area111 (80.4)210 (71.2)349 (70.1)379 (66.6)198 (64.5)127 (63.8)96 (59.3)<0.001Time away from endemic area None1 (0.7)8 (2.7)29 (5.8)19 (3.3)16 (5.2)5 (2.5)4 (2.5)0.901 to 20 years16 (11.6)57 (19.3)129 (25.9)179 (31.5)108 (35.2)62 (31.2)26 (16.1)0.002>20 years117 (84.8)216 (73.2)323 (64.9)346 (60.8)173 (56.4)124 (62.3)127 (78.4)0.005Nonendemic area4 (2.9)14 (4.8)17 (3.4)25 (4.4)10 (3.3)8 (4.0)5 (3.1)0.77Clinical form (n=2,136) Indeterminate56 (40.6)135 (45.8)264 (53.0)280 (51.2)107 (35.9)66 (33.3)52 (32.1)<0.001Cardiac68 (49.3)142 (48.1)204 (41.0)221 (40.4)140 (47.0)81 (40.9)68 (42.0)0.20Digestive6 (4.3)12 (4.1)14 (2.8)26 (4.8)23 (7.7)28 (14.2)16 (9.9)<0.001Cardiodigestive8 (5.8)6 (2.0)16 (3.2)20 (3.6)28 (9.4)23 (11.6)26 (16.0)<0.001Clinical cardiac stagesa (n=2,136) Stage A36 (26.1)58 (19.7)104 (20.9)129 (23.6)66 (22.2)33 (16.7)41 (25.3)0.93Stage B117 (12.3)45 (15.3)48 (9.6)48 (8.8)40 (13.4)32 (16.2)20 (12.4)0.59Stage B213 (9.4)16 (5.4)26 (5.2)21 (3.8)17 (5.7)8 (4.0)7 (4.3)0.10Stage C9 (6.5)21 (7.1)39 (7.8)38 (8.0)44 (14.8)30 (15.2)25 (15.3)<0.001Stage D1 (0.7)8 (2.7)3 (0.6)5 (0.9)1 (0.3)1 (0.5)1 (0.6)0.07Clinical digestive presentation Megaesophagus8 (5.8)14 (4.8)19 (3.8)26 (4.6)35 (11.4)33 (16.6)25 (15.4)<0.001Megacolon3 (2.2)3 (1.0)8 (1.6)12 (2.1)7 (2.3)10 
(5.0)11 (6.8)<0.001Megaesophagus and megacolon3 (2.2)1 (0.3)3 (0.6)10 (1.8)9 (2.9)8 (4.0)6 (3.7)0.001 aAccording to the 2nd Brazilian Consensus on Chagas Disease (2015) 24 . Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation. aAccording to the 2nd Brazilian Consensus on Chagas Disease (2015) 24 . Abbreviations: INI-Fiocruz: The Evandro Chagas Institute of Infectious Diseases of the Oswaldo Cruz Foundation; SD: standard deviation. The percentage of patients that originated from low CD prevalence areas increased over the study period (p<0.001), while patients originating from high CD prevalence areas decreased (p<0.001). The same pattern was observed for the CD morbidity areas, with an increase in the percentage of patients who originated from low CD morbidity areas (p<0.001) and a decrease in patients from high CD morbidity areas (p<0.001). The number of patients who had moved away from an endemic area for <20 years increased (p=0.002) with a concomitant decrease of patients that had moved away for >20 years (p=0.005). The clinical presentation characteristics also changed over the study period, with an increase in the digestive form (p<0.001) and a decrease in the indeterminate form (p<0.001). Considering the clinical cardiac stages, there was an increase in the percentage of patients admitted with stage C (p<0.001), however there were no changes observed in the other stages. DISCUSSION: The INI-Fiocruz is a reference center for CD that provides diagnostic interpretation for patients referred from blood banks, primary and secondary care units, private health services, or by spontaneous demand, and it offers integral and multidisciplinary clinical care to patients with CD. In the present study, most patients were long-time residents of the metropolitan region of the state of Rio de Janeiro and who had been away from endemic areas for many years. Although patients lived in Rio de Janeiro, most were migrants from 19 Brazilian states, predominantly from Bahia and Minas Gerais, which constitutes almost 50% of the studied cohort. A study conducted in the 1960s that evaluated natural-born Brazilian citizens with chronic CD residing in the city of Rio de Janeiro reported the same prevalence 26 . A similar profile was reported in a study published by Ianni 27 of patients living in the city of São Paulo, indicating the migratory profile of the CD-infected population. Dias et al. 28 studied the status of chronic CD in the Northeastern region and reported that the state of Bahia had the highest prevalence of the disease in the region, which is similar to that of the state of Minas Gerais. They also discussed the social causes of the strong emigration to large urban centers in the Southeastern region, which justifies the significant number of people from Minas Gerais and Bahia in the present cohort. Besides, 1.3% of the patients come from other South American countries, predominantly from Bolivia, a country with a prevalence rate of 4.4% for chronic CD among migrants 29 . The vector route is the most probable transmission mechanism. The majority of patients reported living in rural areas in houses made of mud and straw and were aware of the triatomine bug, while some even reported intradomicile coexistence with the insect, yet only a few remembered that they had been bitten. Most patients claimed that they were unaware that the triatomine bug was a health risk. 
This finding indicates the knowledge deficit regarding CD among these patients, which reinforces why most of them do not consider living with triatomine bugs as a threat to their health 30 . Several studies on transmission mechanisms have also reported the vector route as the primary route of disease transmission 12 , 13 , 20 . This cohort showed a slight predominance of women (52.2%), characterizing a balanced cohort in terms of sex and reflecting the distribution of men and women in the Brazilian population, which is 51% women and 49% men according to the Brazilian Institute of Geography and Statistics. In previous studies, women accounted for 46% to 84% of the total study population 13 - 17 , 20 - 23 , 34 , 36 , 37 . In the center where this study was conducted, white patients predominated (49.8%). Gontijo et al. 20 reported the predominance of mulattos (43%) in Minas Gerais, while Gasparim et al. 13 reported 51.1% of whites in Paraná, suggesting ethnic differences by geographic location. Most patients (59.8%) had low levels of education. This is attributed to the origin of the patients, usually born in rural areas without access to formal education. Gontijo et al. 20 showed that 84% of patients examined in a reference outpatient clinic in Belo Horizonte, Brazil were either illiterate or semi-illiterate. Pereira et al. 14 reported that 40.2% of patients in a reference center in Fortaleza were illiterate. The number of white patients decreased over time, whereas the number of mulatto and black patients increased. The number of illiterate patients decreased and those with >9 years of education increased. These data suggest that the Brazilian population of low socioeconomic strata has gained access to both health services and formal education in the last few decades. The mean age of the patients was 47.8 years old. Field studies performed between 1960 and the early 2000s reported a progressive increase in the mean age of patients with chronic CD over time, from less than 25 to 45 years 31 - 33 . Studies on the clinical epidemiological profile of patients with chronic CD in urban centers reported that the mean age showed the same increasing trend in the last decades, ranging from 37.7 to 67.5 years 12 , 13 , 15 , 20 - 22 . Studies conducted in nonendemic regions of the Northern Hemisphere, where patients migrated from South America, especially Bolivia and Central America, reported younger patients with chronic CD with a mean age ranging from 28.5 to 47 years 16 - 18 , 34 - 37 , reflecting the lack of CD vectorial transmission control in their countries of origin and that CD vectorial transmission remained active in rural areas. The present study showed a progressive increase over time in the age of patients who were included in the study cohort during the 5-year intervals of the study period, with the mean age increasing from 44.3 to 57.1 years. Following the increase in the age of patients with CD, there was an inverse decrease in the number of new patients requesting care at the INI-Fiocruz. An ascending temporal curve was observed between 1986 and 1994, stabilizing between 1995 and 2004, and decreasing in 2005. This behavior may reflect the successful control of vector transmission by Triatoma infestans as well as by blood transfusions in Brazil; however this may also indicate the decrease in migration from rural to urban areas verified in recent decades due to the decreased economic power of urban cities consequently attracting fewer people. 
Studies on the clinical form of CD conducted in urban areas have shown that the cardiac form predominates, with a prevalence ranging from 56% to 66% 12 - 16 , 22 , 23 Few studies have reported a higher incidence of the indeterminate form, varying between 56% and 81.6% 17 , 20 . These differences in the prevalence of clinical forms are attributed to the profile of the healthcare unit. CD reference centers usually receive asymptomatic donors from blood banks, asymptomatic family members of patients under follow-up at the institution, and spontaneous demand for serological diagnosis, which tends to mirror the epidemiological reality of the disease in which the indeterminate forms predominate. Symptomatic patients are expected in secondary and tertiary care units, which involve the management of patients with more complex cases. Therefore, cases of the cardiac and digestive forms are more prevalent in these units. In the present study, the indeterminate form was the most prevalent (44.9%). The digestive form, either isolated or associated with heart disease, had a prevalence of 11.8%, which is within the range of the mean prevalence of this clinical form (9%-41%) presented in other studies 12 - 15 , 17 , 19 , 20 , 22 . Regarding the cardiac and cardiodigestive forms, the most common cardiac stage was stage A (44.4%), which, if associated with the indeterminate form, would account for 65.8% of the cohort, with normal left ventricular systolic function on the ECG indicating patients with long-term benign prognoses. In this cohort, although the indeterminate form was common among all patients throughout the study period, it occurred mainly between 1995 and 2004. Before and after this period, the cardiac form was more prevalent in newly diagnosed patients. Until 1994, the cardiac form reflected patients coming from areas with active vector transmission and with higher CD prevalence and morbidity. After the mid-2000s, although there was an increased number of patients coming from areas of lower CD morbidity, they tended to be older with several comorbidities such as systemic arterial hypertension, dyslipidemia, diabetes mellitus, and ischemic heart disease. These comorbidities usually lead to cardiac changes that are reflected in the ECG and may mimic the electrocardiographic changes of Chagas heart disease leading patients to be classified as having the cardiac form of the disease 38 . The progressive increase in the presence of mega forms of CD over time may also be related to the aging patients in the CD cohort justified by the gradual progression of neuronal degeneration associated with the disease 39 . The INI-Fiocruz cohort is mostly comprised of patients who migrated from various Brazilian regions, which may partially increase the representativeness of the total CD-infected population in Brazil, characterized by a predominance of the indeterminate clinical form of the disease. In addition, the progressive decrease in the number of new patients who entered the cohort over the past few years possibly reflects the success of the vector transmission control by Triatoma infestans and by the transfusion route in recent decades in Brazil. We also observed the gradual aging of patients with chronic CD.
Background: We aimed to describe the sociodemographic, epidemiological, and clinical characteristics of patients with chronic Chagas disease (CD) at an infectious disease referral center. Changes in patient profiles over time were also evaluated. Methods: This retrospective study included patients with CD from November 1986-December 2019. All patients underwent an evaluation protocol that included sociodemographic profile; epidemiological history; anamnesis; and physical, cardiologic, and digestive examinations. Trend differences for each 5-year period from 1986 to 2019 were tested using a nonparametric trend test for continuous and generalized linear models with binomial distribution for categorical variables. Results: A total of 2,168 patients (52.2% women) were included, with a mean age of 47.8 years old. White patients with low levels of education predominated. The reported transmission mode was vectorial in 90.2% of cases. The majority came from areas with a high prevalence (52.2%) and morbidity (67.8%) of CD. The most common clinical presentation was the indeterminate form (44.9%). The number of patients referred gradually decreased and the age at admission increased during the study period, as did the patients' levels of education. Conclusions: The clinical profile of CD is characterized by a predominance of the indeterminate form of the disease. Regarding the patients who were followed up at the referral center, there was a progressive increase in the mean age and a concomitant decrease in the number of new patients. This reflects the successful control of vector and transfusion transmission in Brazil as well as the aging population of patients with CD.
null
null
5,093
303
[ 146, 53 ]
6
[ "patients", "cd", "disease", "chagas", "study", "prevalence", "form", "20", "years", "clinical" ]
[ "chagas disease area5", "introduction chagas disease", "chagas disease 2015", "prevalence nonendemic chagas", "chagas disease morbidity" ]
null
null
[CONTENT] Chagas disease | Epidemiologic studies | Cohort studies [SUMMARY]
[CONTENT] Chagas disease | Epidemiologic studies | Cohort studies [SUMMARY]
[CONTENT] Chagas disease | Epidemiologic studies | Cohort studies [SUMMARY]
null
[CONTENT] Chagas disease | Epidemiologic studies | Cohort studies [SUMMARY]
null
[CONTENT] Aged | Brazil | Chagas Disease | Female | Humans | Male | Middle Aged | Prevalence | Referral and Consultation | Retrospective Studies [SUMMARY]
[CONTENT] Aged | Brazil | Chagas Disease | Female | Humans | Male | Middle Aged | Prevalence | Referral and Consultation | Retrospective Studies [SUMMARY]
[CONTENT] Aged | Brazil | Chagas Disease | Female | Humans | Male | Middle Aged | Prevalence | Referral and Consultation | Retrospective Studies [SUMMARY]
null
[CONTENT] Aged | Brazil | Chagas Disease | Female | Humans | Male | Middle Aged | Prevalence | Referral and Consultation | Retrospective Studies [SUMMARY]
null
[CONTENT] chagas disease area5 | introduction chagas disease | chagas disease 2015 | prevalence nonendemic chagas | chagas disease morbidity [SUMMARY]
[CONTENT] chagas disease area5 | introduction chagas disease | chagas disease 2015 | prevalence nonendemic chagas | chagas disease morbidity [SUMMARY]
[CONTENT] chagas disease area5 | introduction chagas disease | chagas disease 2015 | prevalence nonendemic chagas | chagas disease morbidity [SUMMARY]
null
[CONTENT] chagas disease area5 | introduction chagas disease | chagas disease 2015 | prevalence nonendemic chagas | chagas disease morbidity [SUMMARY]
null
[CONTENT] patients | cd | disease | chagas | study | prevalence | form | 20 | years | clinical [SUMMARY]
[CONTENT] patients | cd | disease | chagas | study | prevalence | form | 20 | years | clinical [SUMMARY]
[CONTENT] patients | cd | disease | chagas | study | prevalence | form | 20 | years | clinical [SUMMARY]
null
[CONTENT] patients | cd | disease | chagas | study | prevalence | form | 20 | years | clinical [SUMMARY]
null
[CONTENT] cd | brazil | disease | patients | national | new | health | prevalence cd | estimated | cruzi [SUMMARY]
[CONTENT] variables | stata | categorical | classified | 13 | cd | chronic | chronic cd | command | categorical variables [SUMMARY]
[CONTENT] 001 | chagas | chagas disease | patients | 48 | disease | 20 | 16 | 14 | 26 [SUMMARY]
null
[CONTENT] cd | patients | variables | study | disease | 13 | stata | categorical | chagas | prevalence [SUMMARY]
null
[CONTENT] Chagas ||| [SUMMARY]
[CONTENT] November 1986-December 2019 ||| ||| 5-year | 1986 | 2019 | linear [SUMMARY]
[CONTENT] 2,168 | 52.2% | 47.8 years old ||| ||| 90.2% ||| 52.2% | 67.8% ||| 44.9% ||| [SUMMARY]
null
[CONTENT] Chagas ||| ||| November 1986-December 2019 ||| ||| 5-year | 1986 | 2019 | linear ||| ||| 2,168 | 52.2% | 47.8 years old ||| ||| 90.2% ||| 52.2% | 67.8% ||| 44.9% ||| ||| ||| ||| Brazil [SUMMARY]
null
Association between low-density cholesterol change and outcomes in acute ischemic stroke patients who underwent reperfusion therapy.
34530762
Low-density lipoprotein cholesterol (LDL-C) can increase cardiovascular risk. However, the association between LDL-C change and functional outcomes in acute ischemic stroke (AIS) patients who underwent reperfusion therapy remains unclear.
BACKGROUND
Patients who received reperfusion therapy were consecutively enrolled. LDL-C measurement was conducted at the emergency department immediately after admission and during hospitalization. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C. Poor functional outcome was defined as modified Rankin Scale (mRS) > 2 at 90 days.
METHODS
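As a concrete illustration of the ΔLDL-C definition above, the sketch below computes, for each patient, admission LDL-C minus the lowest follow-up LDL-C during hospitalization, and flags poor outcome as mRS > 2 at 90 days. The DataFrame and its column names are hypothetical, and the assumption that the minimum is taken over follow-up draws only (which is what allows negative ΔLDL-C values, as reported in the results) is inferred rather than stated explicitly in the abstract.

```python
import pandas as pd

# Hypothetical long-format measurements: one row per LDL-C draw per patient.
ldl = pd.DataFrame({
    "patient_id":   [1, 1, 1, 2, 2],
    "is_admission": [True, False, False, True, False],
    "ldl_c":        [2.8, 2.1, 1.9, 2.4, 2.6],  # mmol/L
    "mrs_90d":      [3, 3, 3, 1, 1],
})

admission = ldl[ldl["is_admission"]].set_index("patient_id")["ldl_c"]
# Lowest value among the in-hospital follow-up draws (admission draw excluded).
lowest = ldl[~ldl["is_admission"]].groupby("patient_id")["ldl_c"].min()

summary = pd.DataFrame({"admission_ldl": admission, "lowest_ldl": lowest})
# ΔLDL-C: admission value minus lowest in-hospital value; positive when LDL-C fell.
summary["delta_ldl"] = summary["admission_ldl"] - summary["lowest_ldl"]
# Poor functional outcome: modified Rankin Scale > 2 at 90 days.
summary["poor_outcome"] = ldl.groupby("patient_id")["mrs_90d"].first() > 2
print(summary)
```

In this toy example patient 1 has ΔLDL-C = 0.9 mmol/L with a poor outcome, and patient 2 has ΔLDL-C = -0.2 mmol/L (LDL-C rose) with a good outcome.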
A total of 432 patients were enrolled (mean age 69.2 ± 13.5 years, 54.6 % males). The mean LDL-C level at admission was 2.55 ± 0.93 mmol/L. The median ΔLDL-C level was 0.43 mmol/L (IQR 0.08-0.94 mmol/L). A total of 263 (60.9 %) patients had poor functional outcomes at 90 days. There was no significant association between admission LDL-C level and functional outcome (OR 0.99, 95 % CI 0.77-1.27, p = 0.904). ΔLDL-C level was positively associated with poor functional outcome (OR 1.80, 95 % CI 1.12-2.91, p = 0.016). When patients were divided into tertiles according to ΔLDL-C, those in the upper tertile (T3, 0.80-3.98 mmol/L) were positively associated with poor functional outcomes compared to patients in the lower tertile (T1, -0.91-0.13 mmol/L) (OR 2.56, 95 % CI 1.22-5.36, p = 0.013). The risk of poor functional outcome increased significantly with ΔLDL-C tertile (P-trend = 0.010).
RESULTS
In AIS patients who underwent reperfusion therapy, the decrease in LDL-C level during hospitalization was significantly associated with poor functional outcomes at 90 days.
CONCLUSIONS
[ "Aged", "Brain Ischemia", "Cholesterol, LDL", "Female", "Humans", "Ischemic Stroke", "Male", "Reperfusion", "Stroke" ]
8447794
Background
The association between low-density lipoprotein cholesterol (LDL-C) and outcomes in acute ischemic stroke (AIS) patients remains controversial [1–11]. The inconsistent results might be explained by oxidative stress. Since AIS patients may suffer from enhanced free-radical damage after reperfusion therapy [12, 13], focusing on patients with reperfusion therapy might clarify the role of LDL-C. However, previous studies on the association between LDL-C level and outcomes in AIS patients with reperfusion therapy failed to reach a consensus [5–8]. The conflicting conclusions may be due to the single measurement of LDL-C. Some studies found that serum LDL-C levels decreased after the onset of AIS [14–18]. Under oxidative stress, low-density lipoprotein (LDL) gets oxidized into oxidized low-density lipoprotein (oxLDL) [19]. The extent of decreased LDL-C may reflect the degree of increased oxLDL, which may indicate the severity of oxidative stress [20] and is positively associated with poor functional outcomes [21–24]. Therefore, a more sensitive marker may be the change in serum LDL-C during hospitalization [25]. However, there is uncertainty on the association between LDL-C change and outcomes in patients with reperfusion therapy. In this study, we aimed to explore the association between changes in LDL-C levels and functional outcomes in these patients.
null
null
Results
Baseline characteristics and outcome As shown in Fig. 1, a total of 640 AIS patients underwent reperfusion therapy in our center, there were 24.6 % (158/640) patients missed LDL-C levels or outcome follow-up, we compared included patients to these patients, and we found that there were no significant differences in demographic parameters (age and gender), vascular risk factors (diabetes, atrial fibrillation, current smoking, and coronary heart diseases), baseline NIHSS score, TOAST classification, and reperfusion therapy method between two groups, except for more patients with prior history of stroke and hypertension among the included group (Table S1 in supplementary materials). Finally, a total of 432 patients (mean age 69.2 ± 13.5 years, 54.6 % males) were included. As shown in Table 1, the mean admission LDL-C level was 2.55 ± 0.93 mmol/L, the mean lowest LDL-C level during hospitalization was 2.00 ± 0.88mmol/L, and the median ΔLDL-C was 0.43 mmol/L (IQR 0.08–0.94 mmol/L). The median interval time between stroke onset and emergency department was 2.5 h (IQR 1.8-3.0 h). The median interval time between admission and the lowest LDL-C measurement during hospitalization was 3.1 d (IQR 0.8–6.6 d). For most patients (357/432, 82.6 %), LDL-C levels decreased during hospitalization. A total of 263 (60.9 %) patients had poor 90-day functional outcomes. Fig. 1Patients’ inclusion flowchart. AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferaseTable 1Patient characteristics stratified by functional outcome at 90 daysVariablesOverallGood functional outcomePoor functional outcomeP value(n = 432)(n = 169)(n = 263)Age, years, mean (SD)69.2 (13.5)64.9 (13.9)72.0 (12.5)< 0.001Male, n (%)236 (54.6)109 (64.4)127 (48.3)< 0.001Hypertension, n (%)259 (60.0)95 (56.2)164 (62.4)0.203Diabetes, n (%)99 (22.9)34 (20.1)65 (24.7)0.267hyperlipemia, n (%)34 (7.9)13 (7.7)21 (8.0)0.912Atrial fibrillation, n (%)216 (50.0)63 (37.3)133 (58.2)< 0.001Valvular heart diseases, n (%)74 (17.1)25 (14.8)49 (18.6)0.301Coronary heart diseases, n (%)63 (14.6)22 (13.0)41 (15.6)0.460Previous stroke, n (%)39 (9.0)12(7.1)27 (10.3)0.263Current smoking, n (%)107 (24.8)52 (30.8)55 (20.9)0.021Alcohol consumption, n (%)100 (23.1)52 (30.8)51 (19.4)0.021Statin use before admission, n (%)39 (9.0)14 (8.3)25 (9.5)0.665Baseline NIHSS, median (Q1-Q3)14 (9–18)9 (6–14)15(12–20)< 0.001LDL-C, mmol/L, mean (SD)2.55 (0.93)2.59 (0.91)2.53 (0.95)0.496HDL, mmol/L, mean (SD)1.27 (0.42)1.25 (0. 
49)1.29 (0.36)0.332TG, mmol/L, median (Q1-Q3)1.24 (0.87–1.89)1.20 (0.85–1.94)1.20 (0.88–1.81)0.566TC, mmol/L, mean (SD)4.23 (1.13)4.22 (1.13)4.23 (1.13)0.966Serum glucose, mmol/L, mean (SD)7.53 (6.49–9.18)7.96 (2.76)8.57 (3.04)0.035White blood cell, a10^9 /L, mean (SD)8.47 (3.19)8.23 (2.77)8.62 (3.43)0.207TOAST classification, n (%)< 0.001 Large-artery Atherosclerosis139 (32.2)59 (34.9)80 (30.4) Cardio-embolism192(44.4)57 (33.7)135 (51.3) Lacunar28 (6.5)22 (13.0)6 (2.3) Other13 (3.0)7 (4.1)6 (2.3)  Undetermined60 (13.9)24 (14.2)36 (13.7)Reperfusion therapy method, n (%)0.092 thrombolysis only124 (28.7)58 (34.3)66 (25.1) thrombectomy only210 (48.6)70 (41.4)140 (53.2) thrombolysis and thrombectomy98 (22.7)41 (24.3)57 (21.7)Statin use during hospitalization, n (%)263 (60.9)103 (60.9)160 (60.8)0.982Interval between stroke onset and emergency department, h, median (Q1-Q3)2.5 (1.8-3.0)2.3 (2.0–3.0)2.5 (1.7–3.5)0.610Interval between stroke onset and admission measurement of LDL-C, h, median (Q1-Q3)3.1 (2.2–3.9)3.1 (2.1–3.8)3.1 (2.1–3.8)0.685Interval between admission and follow-up measurement of LDL-C, d, median (Q1-Q3)3.1 (0.8–6.6)1.7 (0.6–6.4)3.4 (1.0-7.1)0.008aΔLDL-C, mmol/L, median (Q1-Q3)0.43 (0.08–0.94)0.28 (0.07–0.78)0.52 (0.10–1.03)0.009The lowest LDL-C during hospitalization, mmol/l, median (Q1-Q3)1.92 (1.38–2.44)2.11 (1.48–2.69)1.83 (1.34–2.32)0.002LDL-C variation, n (%)0.542decreased (ΔLDL-C > 0)357 (82.6)142 (84.0)215 (81.7)increased (ΔLDL-C ≤ 0)75 (17.4)27 (16.0)48 (18.3)SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C Patients’ inclusion flowchart. AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferase Patient characteristics stratified by functional outcome at 90 days SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment aΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C As shown in Fig. 1, a total of 640 AIS patients underwent reperfusion therapy in our center, there were 24.6 % (158/640) patients missed LDL-C levels or outcome follow-up, we compared included patients to these patients, and we found that there were no significant differences in demographic parameters (age and gender), vascular risk factors (diabetes, atrial fibrillation, current smoking, and coronary heart diseases), baseline NIHSS score, TOAST classification, and reperfusion therapy method between two groups, except for more patients with prior history of stroke and hypertension among the included group (Table S1 in supplementary materials). Finally, a total of 432 patients (mean age 69.2 ± 13.5 years, 54.6 % males) were included. As shown in Table 1, the mean admission LDL-C level was 2.55 ± 0.93 mmol/L, the mean lowest LDL-C level during hospitalization was 2.00 ± 0.88mmol/L, and the median ΔLDL-C was 0.43 mmol/L (IQR 0.08–0.94 mmol/L). 
The median interval time between stroke onset and emergency department was 2.5 h (IQR 1.8-3.0 h). The median interval time between admission and the lowest LDL-C measurement during hospitalization was 3.1 d (IQR 0.8–6.6 d). For most patients (357/432, 82.6 %), LDL-C levels decreased during hospitalization. A total of 263 (60.9 %) patients had poor 90-day functional outcomes. Fig. 1Patients’ inclusion flowchart. AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferaseTable 1Patient characteristics stratified by functional outcome at 90 daysVariablesOverallGood functional outcomePoor functional outcomeP value(n = 432)(n = 169)(n = 263)Age, years, mean (SD)69.2 (13.5)64.9 (13.9)72.0 (12.5)< 0.001Male, n (%)236 (54.6)109 (64.4)127 (48.3)< 0.001Hypertension, n (%)259 (60.0)95 (56.2)164 (62.4)0.203Diabetes, n (%)99 (22.9)34 (20.1)65 (24.7)0.267hyperlipemia, n (%)34 (7.9)13 (7.7)21 (8.0)0.912Atrial fibrillation, n (%)216 (50.0)63 (37.3)133 (58.2)< 0.001Valvular heart diseases, n (%)74 (17.1)25 (14.8)49 (18.6)0.301Coronary heart diseases, n (%)63 (14.6)22 (13.0)41 (15.6)0.460Previous stroke, n (%)39 (9.0)12(7.1)27 (10.3)0.263Current smoking, n (%)107 (24.8)52 (30.8)55 (20.9)0.021Alcohol consumption, n (%)100 (23.1)52 (30.8)51 (19.4)0.021Statin use before admission, n (%)39 (9.0)14 (8.3)25 (9.5)0.665Baseline NIHSS, median (Q1-Q3)14 (9–18)9 (6–14)15(12–20)< 0.001LDL-C, mmol/L, mean (SD)2.55 (0.93)2.59 (0.91)2.53 (0.95)0.496HDL, mmol/L, mean (SD)1.27 (0.42)1.25 (0. 49)1.29 (0.36)0.332TG, mmol/L, median (Q1-Q3)1.24 (0.87–1.89)1.20 (0.85–1.94)1.20 (0.88–1.81)0.566TC, mmol/L, mean (SD)4.23 (1.13)4.22 (1.13)4.23 (1.13)0.966Serum glucose, mmol/L, mean (SD)7.53 (6.49–9.18)7.96 (2.76)8.57 (3.04)0.035White blood cell, a10^9 /L, mean (SD)8.47 (3.19)8.23 (2.77)8.62 (3.43)0.207TOAST classification, n (%)< 0.001 Large-artery Atherosclerosis139 (32.2)59 (34.9)80 (30.4) Cardio-embolism192(44.4)57 (33.7)135 (51.3) Lacunar28 (6.5)22 (13.0)6 (2.3) Other13 (3.0)7 (4.1)6 (2.3)  Undetermined60 (13.9)24 (14.2)36 (13.7)Reperfusion therapy method, n (%)0.092 thrombolysis only124 (28.7)58 (34.3)66 (25.1) thrombectomy only210 (48.6)70 (41.4)140 (53.2) thrombolysis and thrombectomy98 (22.7)41 (24.3)57 (21.7)Statin use during hospitalization, n (%)263 (60.9)103 (60.9)160 (60.8)0.982Interval between stroke onset and emergency department, h, median (Q1-Q3)2.5 (1.8-3.0)2.3 (2.0–3.0)2.5 (1.7–3.5)0.610Interval between stroke onset and admission measurement of LDL-C, h, median (Q1-Q3)3.1 (2.2–3.9)3.1 (2.1–3.8)3.1 (2.1–3.8)0.685Interval between admission and follow-up measurement of LDL-C, d, median (Q1-Q3)3.1 (0.8–6.6)1.7 (0.6–6.4)3.4 (1.0-7.1)0.008aΔLDL-C, mmol/L, median (Q1-Q3)0.43 (0.08–0.94)0.28 (0.07–0.78)0.52 (0.10–1.03)0.009The lowest LDL-C during hospitalization, mmol/l, median (Q1-Q3)1.92 (1.38–2.44)2.11 (1.48–2.69)1.83 (1.34–2.32)0.002LDL-C variation, n (%)0.542decreased (ΔLDL-C > 0)357 (82.6)142 (84.0)215 (81.7)increased (ΔLDL-C ≤ 0)75 (17.4)27 (16.0)48 (18.3)SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C Patients’ inclusion flowchart. 
AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferase Patient characteristics stratified by functional outcome at 90 days SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment aΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C Association between admission LDL-C and outcome Age, sex, baseline NIHSS score, atrial fibrillation, current smoking, drinking consumption, TOAST classification, serum glucose, reperfusion therapy method, and the interval between admission LDL-C and the lowest LDL-C during hospitalization were significantly correlated with poor outcomes in univariate analysis (Table 2). There was no significant association between admission LDL-C level and functional outcome at 90 days when LDL-C level was regarded as a continuous variable (OR 1.03, 95 % CI 0.81–1.31, p = 0.802), or categorical variable (T3 vs. T1, OR 0.97, 95 % CI 0.55–1.71, p = 0.919, Table 3). Table 2Univariable logistic regression analysis of variables associated with poor functional outcomeVariableUnadjusted odds ratio (95% confidence interval)p-valueAge1.04 (1.03, 1.06)<0.001Male0.51 (0.35, 0.77)0.001Hypertension1.29 (0.87, 1.91)0.204Diabetes1.30 (0.82, 2.08)0.268hyperlipemia1.04 (0.51, 2.14)0.912Atrial fibrillation2.34 (1.57, 3.48)<0.001Valvular heart diseases1.32 (0.78, 2.23)0.302coronary heart diseases1.23 (0.71, 2.16)0.460Statin use before admission1.16 (0.59, 2.31)0.666Previous stroke1.50 (0.74, 3.04)0.265Current smoking0.60 (0.38, 0.93)0.021Alcohol consumption0.59 (0.38, 0.93)0.022Baseline NIHSS score1.17 (1.12, 1.21)<0.001HDL upon admission1.28 (0.78, 2.10)0.334TG upon admission0.92 (0.78, 1.09)0.333TC upon admission1.00 (0.85, 1.19)0.966Serum glucose1.08 (1.00, 1.17)0.038White blood cell1.04 (0.98, 1.11)0.208TOAST classificationLarge-artery AtherosclerosisReferenceCardio-embolism1.75 (1.11, 2.76)0.017Lacunar0.20 (0.08, 0.53)0.001Other0.63 (0.20, 1.98)0.431Undetermined1.11 (0.60, 2.05)0.748Reperfusion therapy methodthrombolysis onlyReferencethrombectomy only1.76 (1.12, 2.77)0.015thrombolysis and thrombectomy1.22 (0.72, 2.09)0.463Statin use during hospitalization0.98 (0.67, 1.48)0.995Interval between stroke onset and emergency department1.05 (0.89, 1.24)0.546Interval between stroke onset and admission measurement of LDL-C1.04 (0.89, 1.21)0.616Interval between admission and follow-up measurement of LDL-C1.06 (1.01, 1.12)0.023NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentTable 3Multivariate logistic regression analysis between admission LDL-C and poor functional outcomeaVariableNon-adjusted modelAdjusted modelAdmission LDL-C, mmol/L0.93 (0.76, 1.14), 0.4961.03 (0.81, 1.31), 0.802Admission LDL-C tertiles, mmol/L T1(0.74–2.15)ReferenceReference T2(2.16–2.80)1.31 (0.81, 2.12), 0.2711.62 (0.92, 2.85), 0.096 T3(2.81–9.61)0.82 (0.51, 1.31), 0.4040.97 (0.55, 1.71), 0.919Adjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, and reperfusion therapy methodLDL-C low-Density Lipoprotein Cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in 
Acute Stroke TreatmentaResults for each model are presented as odds ratio (95 % confidence interval), p-value Univariable logistic regression analysis of variables associated with poor functional outcome NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment Multivariate logistic regression analysis between admission LDL-C and poor functional outcomea Adjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, and reperfusion therapy method LDL-C low-Density Lipoprotein Cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment aResults for each model are presented as odds ratio (95 % confidence interval), p-value Age, sex, baseline NIHSS score, atrial fibrillation, current smoking, drinking consumption, TOAST classification, serum glucose, reperfusion therapy method, and the interval between admission LDL-C and the lowest LDL-C during hospitalization were significantly correlated with poor outcomes in univariate analysis (Table 2). There was no significant association between admission LDL-C level and functional outcome at 90 days when LDL-C level was regarded as a continuous variable (OR 1.03, 95 % CI 0.81–1.31, p = 0.802), or categorical variable (T3 vs. T1, OR 0.97, 95 % CI 0.55–1.71, p = 0.919, Table 3). Table 2Univariable logistic regression analysis of variables associated with poor functional outcomeVariableUnadjusted odds ratio (95% confidence interval)p-valueAge1.04 (1.03, 1.06)<0.001Male0.51 (0.35, 0.77)0.001Hypertension1.29 (0.87, 1.91)0.204Diabetes1.30 (0.82, 2.08)0.268hyperlipemia1.04 (0.51, 2.14)0.912Atrial fibrillation2.34 (1.57, 3.48)<0.001Valvular heart diseases1.32 (0.78, 2.23)0.302coronary heart diseases1.23 (0.71, 2.16)0.460Statin use before admission1.16 (0.59, 2.31)0.666Previous stroke1.50 (0.74, 3.04)0.265Current smoking0.60 (0.38, 0.93)0.021Alcohol consumption0.59 (0.38, 0.93)0.022Baseline NIHSS score1.17 (1.12, 1.21)<0.001HDL upon admission1.28 (0.78, 2.10)0.334TG upon admission0.92 (0.78, 1.09)0.333TC upon admission1.00 (0.85, 1.19)0.966Serum glucose1.08 (1.00, 1.17)0.038White blood cell1.04 (0.98, 1.11)0.208TOAST classificationLarge-artery AtherosclerosisReferenceCardio-embolism1.75 (1.11, 2.76)0.017Lacunar0.20 (0.08, 0.53)0.001Other0.63 (0.20, 1.98)0.431Undetermined1.11 (0.60, 2.05)0.748Reperfusion therapy methodthrombolysis onlyReferencethrombectomy only1.76 (1.12, 2.77)0.015thrombolysis and thrombectomy1.22 (0.72, 2.09)0.463Statin use during hospitalization0.98 (0.67, 1.48)0.995Interval between stroke onset and emergency department1.05 (0.89, 1.24)0.546Interval between stroke onset and admission measurement of LDL-C1.04 (0.89, 1.21)0.616Interval between admission and follow-up measurement of LDL-C1.06 (1.01, 1.12)0.023NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentTable 3Multivariate logistic regression analysis between admission LDL-C and poor functional outcomeaVariableNon-adjusted modelAdjusted modelAdmission LDL-C, mmol/L0.93 (0.76, 1.14), 0.4961.03 (0.81, 1.31), 0.802Admission LDL-C tertiles, mmol/L T1(0.74–2.15)ReferenceReference T2(2.16–2.80)1.31 (0.81, 2.12), 0.2711.62 (0.92, 2.85), 0.096 T3(2.81–9.61)0.82 (0.51, 1.31), 0.4040.97 (0.55, 1.71), 0.919Adjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST 
Association between ΔLDL-C and outcome
When ΔLDL-C was analysed as a continuous variable, it was significantly associated with poor functional outcome at 90 days in univariate analysis (OR 1.55, 95% CI 1.12–2.15, p = 0.009; Table 4). After adjustment for confounding variables, the association between ΔLDL-C and poor outcome remained significant (OR 1.80, 95% CI 1.12–2.91, p = 0.016). When ΔLDL-C was analysed as a categorical variable, patients in the upper tertile (T3, 0.80–3.98 mmol/L) had a higher risk of poor outcome than those in the lower tertile (T1, -0.91 to 0.13 mmol/L) in univariate analysis (OR 1.92, 95% CI 1.15–3.20, p = 0.012), and the association remained significant after adjustment (OR 2.56, 95% CI 1.22–5.36, p = 0.013). The risk of poor functional outcome increased significantly across ΔLDL-C tertiles (P-trend = 0.010).
Table 4 Multivariate logistic regression analysis between ΔLDL-C and poor functional outcomea
Variable | Non-adjusted model | Adjusted model 1 | Adjusted model 2
ΔLDL-C, mmol/L | 1.55 (1.12, 2.15), 0.009 | 1.79 (1.11, 2.89), 0.017 | 1.80 (1.12, 2.91), 0.016
ΔLDL-C tertiles, mmol/L: T1 (-0.91 to 0.13) | Reference | Reference | Reference
ΔLDL-C tertiles, mmol/L: T2 (0.14–0.79) | 1.03 (0.65, 1.64), 0.888 | 1.23 (0.70, 2.18), 0.473 | 1.24 (0.70, 2.19), 0.470
ΔLDL-C tertiles, mmol/L: T3 (0.80–3.98) | 1.92 (1.15, 3.20), 0.012 | 2.56 (1.22, 5.35), 0.013 | 2.56 (1.22, 5.36), 0.013
P-trend | 0.007 | 0.010 | 0.010
Adjusted model 1: adjusted for age, sex, atrial fibrillation, current smoking, alcohol consumption, baseline NIHSS score, serum glucose, TOAST classification, reperfusion therapy method, and the interval between admission and follow-up measurement of LDL-C
Adjusted model 2: adjusted for the variables in model 1 plus statin use during hospitalization
LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment
aResults for each model are presented as odds ratio (95% confidence interval), p-value
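ΔLDL-C itself is a simple derived quantity (admission LDL-C minus the lowest follow-up LDL-C), and the P-trend in Table 4 was obtained by entering the median ΔLDL-C of each tertile as a continuous term. The following sketch illustrates both steps under the same caveats as above: hypothetical column names, synthetic data, and a reduced covariate set rather than the authors' full SPSS models.

# Illustrative sketch of the ΔLDL-C derivation and tertile trend test (Python);
# column names and data are hypothetical, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_measurements = 200, 3

# Long format: one row per LDL-C measurement; the first row per patient is the
# admission value, later rows are follow-up values during hospitalization.
ldl_long = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_measurements),
    "ldl": rng.normal(2.4, 0.9, n_patients * n_measurements).clip(0.5),
})
admission = ldl_long.groupby("patient_id")["ldl"].first()
lowest_followup = ldl_long.groupby("patient_id")["ldl"].apply(lambda s: s.iloc[1:].min())

df = pd.DataFrame({
    "delta_ldl": admission - lowest_followup,   # positive value = LDL-C fell in hospital
    "poor_outcome": rng.integers(0, 2, n_patients),
    "age": rng.normal(69, 13, n_patients),
})

# Tertiles of ΔLDL-C; the trend test enters each tertile's median ΔLDL-C
# as a single continuous covariate.
df["delta_tertile"] = pd.qcut(df["delta_ldl"], 3, labels=False)
df["tertile_median"] = df.groupby("delta_tertile")["delta_ldl"].transform("median")

trend_model = smf.logit("poor_outcome ~ tertile_median + age", data=df).fit(disp=0)
print("P-trend:", round(float(trend_model.pvalues["tertile_median"]), 3))

The p-value of the tertile-median term plays the role of the P-trend row in Table 4; the full analysis would additionally adjust for the complete set of confounders listed in the table footnotes.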
Conclusions
In AIS patients who underwent reperfusion therapy, admission LDL-C level was not significantly associated with 90-day functional outcome, whereas a larger decrease in LDL-C level during hospitalization was associated with a higher risk of poor functional outcome at 90 days.
[ "Background", "Methods", "Study population", "Baseline data", "Outcome", "Statistical analysis", "Baseline characteristics and outcome", "Association between admission LDL-C and outcome", "Association between ΔLDL-C and outcome", "" ]
[ "The association between low-density lipoprotein cholesterol (LDL-C) and outcomes in acute ischemic stroke (AIS) patients remains controversial [1–11]. The inconsistent results might be explained by oxidative stress. Since AIS patients may suffer from enhanced free-radical damage after reperfusion therapy [12, 13], focusing on patients with reperfusion therapy might clarify the role of LDL-C. However, previous studies on the association between LDL-C level and outcomes in AIS patients with reperfusion therapy failed to reach a consensus [5–8]. The conflicting conclusions may be due to the single measurement of LDL-C.\nSome studies found that serum LDL-C levels decreased after the onset of AIS [14–18]. Under oxidative stress, low-density lipoprotein (LDL) gets oxidized into oxidized low-density lipoprotein (oxLDL) [19]. The extent of decreased LDL-C may reflect the degree of increased oxLDL, which may indicate the severity of oxidative stress [20] and is positively associated with poor functional outcomes [21–24]. Therefore, a more sensitive marker may be the change in serum LDL-C during hospitalization [25].\nHowever, there is uncertainty on the association between LDL-C change and outcomes in patients with reperfusion therapy. In this study, we aimed to explore the association between changes in LDL-C levels and functional outcomes in these patients.", "Study population This is a retrospective study. AIS patients admitted to Neurology Department, West China Hospital were consecutively enrolled between 1st June 2018 and 31st January 2021. AIS was diagnosed based on clinical manifestation and brain image [26]. Patients were included as follows: (1) underwent reperfusion therapy within 6 h after symptom onset, including intravenous thrombolysis with alteplase and/or endovascular thrombectomy (including mechanical or thrombus aspiration thrombectomy, or both, with or without intra-arterial alteplase infusion), and (2) LDL-C levels were measured at emergency department immediately after admission and at least on another occasion during hospitalization. The exclusion criteria were as follows: (1) premorbid modified Rankin scale [mRS] scores > 1, (2) younger than 18 years, (3) had a liver injury that may affect serum lipid levels [15], or (4) malignancy. We obtained informed consent from each patient or their relative. The Scientific Research Department of West China Hospital approved this study.\nThis is a retrospective study. AIS patients admitted to Neurology Department, West China Hospital were consecutively enrolled between 1st June 2018 and 31st January 2021. AIS was diagnosed based on clinical manifestation and brain image [26]. Patients were included as follows: (1) underwent reperfusion therapy within 6 h after symptom onset, including intravenous thrombolysis with alteplase and/or endovascular thrombectomy (including mechanical or thrombus aspiration thrombectomy, or both, with or without intra-arterial alteplase infusion), and (2) LDL-C levels were measured at emergency department immediately after admission and at least on another occasion during hospitalization. The exclusion criteria were as follows: (1) premorbid modified Rankin scale [mRS] scores > 1, (2) younger than 18 years, (3) had a liver injury that may affect serum lipid levels [15], or (4) malignancy. We obtained informed consent from each patient or their relative. 
The Scientific Research Department of West China Hospital approved this study.\nBaseline data Data on demographics (age, gender), level of neurological severity (according to the National Institute of Health Stroke Scale [NIHSS] score), risk factors (atrial fibrillation, hypertension, hyperlipidemia, diabetes mellitus, smoking status, and coronary heart diseases), laboratory results (white blood cell, glucose, TG, TC, HDL, and LDL-C), and the interval between stroke onset and emergency department were documented at admission. The interval between stroke onset and admission measurement of LDL-C and the interval between admission and follow-up measurement of LDL-C during hospitalization were also documented. Serum LDL-C was measured by the automatic biochemistry analyzer (Roche Cobas 8000) [27]. The Trial of Org 10,172 in Acute Stroke Treatment (TOAST) classification system was conducted to identify stroke subtypes [28].\nData on demographics (age, gender), level of neurological severity (according to the National Institute of Health Stroke Scale [NIHSS] score), risk factors (atrial fibrillation, hypertension, hyperlipidemia, diabetes mellitus, smoking status, and coronary heart diseases), laboratory results (white blood cell, glucose, TG, TC, HDL, and LDL-C), and the interval between stroke onset and emergency department were documented at admission. The interval between stroke onset and admission measurement of LDL-C and the interval between admission and follow-up measurement of LDL-C during hospitalization were also documented. Serum LDL-C was measured by the automatic biochemistry analyzer (Roche Cobas 8000) [27]. The Trial of Org 10,172 in Acute Stroke Treatment (TOAST) classification system was conducted to identify stroke subtypes [28].\nOutcome All patients were followed up by telephone or interview at 90 days to evaluate their functional outcomes blinded to their LDL-C levels. We used the modified Rankin Scale (mRS) to measure functional outcomes at 90 days [29]. Poor functional outcome was defined as mRS score > 2 [29].\nAll patients were followed up by telephone or interview at 90 days to evaluate their functional outcomes blinded to their LDL-C levels. We used the modified Rankin Scale (mRS) to measure functional outcomes at 90 days [29]. Poor functional outcome was defined as mRS score > 2 [29].\nStatistical analysis Continuous variables were reported as means with standard deviations (SD) for normally distributed parameters or medians with interquartile range (IQR) for non-normally distributed parameters. Frequencies or percentages were used to describe categorical variables. Descriptive analyses of study population baseline characteristics and 90-day outcomes were reported for groups using the χ2 test or Fisher’s exact test for categorical data, the Student’s t-test, and the Mann-Whitney U test for continuous variables as appropriate. Significant confounders were defined as variables within p < 0.10 in univariate analysis. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C: a positive ΔLDL-C indicated LDL-C decreased during hospitalization, and a negative ΔLDL-C indicated an increase in LDL-C level. Multivariate logistic regression models were used to determine associations between ΔLDL-C and outcome. To further explore the associations, we did trend analyses by categorizing ΔLDL-C into tertiles [30]. 
Trends across tertiles (P-trend) of ΔLDL-C were determined by entering the median value of ΔLDL-C in each category as a continuous variable [31]. Data were reported as odds ratios (OR) and 95 % confidence intervals (CI). A two-sided P value less than 0.05 was considered statistically significant. All analyses were performed using IBM SPSS Statistics (25.0; IBM, Armonk, NY, USA).\nContinuous variables were reported as means with standard deviations (SD) for normally distributed parameters or medians with interquartile range (IQR) for non-normally distributed parameters. Frequencies or percentages were used to describe categorical variables. Descriptive analyses of study population baseline characteristics and 90-day outcomes were reported for groups using the χ2 test or Fisher’s exact test for categorical data, the Student’s t-test, and the Mann-Whitney U test for continuous variables as appropriate. Significant confounders were defined as variables within p < 0.10 in univariate analysis. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C: a positive ΔLDL-C indicated LDL-C decreased during hospitalization, and a negative ΔLDL-C indicated an increase in LDL-C level. Multivariate logistic regression models were used to determine associations between ΔLDL-C and outcome. To further explore the associations, we did trend analyses by categorizing ΔLDL-C into tertiles [30]. Trends across tertiles (P-trend) of ΔLDL-C were determined by entering the median value of ΔLDL-C in each category as a continuous variable [31]. Data were reported as odds ratios (OR) and 95 % confidence intervals (CI). A two-sided P value less than 0.05 was considered statistically significant. All analyses were performed using IBM SPSS Statistics (25.0; IBM, Armonk, NY, USA).", "This is a retrospective study. AIS patients admitted to Neurology Department, West China Hospital were consecutively enrolled between 1st June 2018 and 31st January 2021. AIS was diagnosed based on clinical manifestation and brain image [26]. Patients were included as follows: (1) underwent reperfusion therapy within 6 h after symptom onset, including intravenous thrombolysis with alteplase and/or endovascular thrombectomy (including mechanical or thrombus aspiration thrombectomy, or both, with or without intra-arterial alteplase infusion), and (2) LDL-C levels were measured at emergency department immediately after admission and at least on another occasion during hospitalization. The exclusion criteria were as follows: (1) premorbid modified Rankin scale [mRS] scores > 1, (2) younger than 18 years, (3) had a liver injury that may affect serum lipid levels [15], or (4) malignancy. We obtained informed consent from each patient or their relative. The Scientific Research Department of West China Hospital approved this study.", "Data on demographics (age, gender), level of neurological severity (according to the National Institute of Health Stroke Scale [NIHSS] score), risk factors (atrial fibrillation, hypertension, hyperlipidemia, diabetes mellitus, smoking status, and coronary heart diseases), laboratory results (white blood cell, glucose, TG, TC, HDL, and LDL-C), and the interval between stroke onset and emergency department were documented at admission. The interval between stroke onset and admission measurement of LDL-C and the interval between admission and follow-up measurement of LDL-C during hospitalization were also documented. 
Serum LDL-C was measured by the automatic biochemistry analyzer (Roche Cobas 8000) [27]. The Trial of Org 10,172 in Acute Stroke Treatment (TOAST) classification system was conducted to identify stroke subtypes [28].", "All patients were followed up by telephone or interview at 90 days to evaluate their functional outcomes blinded to their LDL-C levels. We used the modified Rankin Scale (mRS) to measure functional outcomes at 90 days [29]. Poor functional outcome was defined as mRS score > 2 [29].", "Continuous variables were reported as means with standard deviations (SD) for normally distributed parameters or medians with interquartile range (IQR) for non-normally distributed parameters. Frequencies or percentages were used to describe categorical variables. Descriptive analyses of study population baseline characteristics and 90-day outcomes were reported for groups using the χ2 test or Fisher’s exact test for categorical data, the Student’s t-test, and the Mann-Whitney U test for continuous variables as appropriate. Significant confounders were defined as variables within p < 0.10 in univariate analysis. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C: a positive ΔLDL-C indicated LDL-C decreased during hospitalization, and a negative ΔLDL-C indicated an increase in LDL-C level. Multivariate logistic regression models were used to determine associations between ΔLDL-C and outcome. To further explore the associations, we did trend analyses by categorizing ΔLDL-C into tertiles [30]. Trends across tertiles (P-trend) of ΔLDL-C were determined by entering the median value of ΔLDL-C in each category as a continuous variable [31]. Data were reported as odds ratios (OR) and 95 % confidence intervals (CI). A two-sided P value less than 0.05 was considered statistically significant. All analyses were performed using IBM SPSS Statistics (25.0; IBM, Armonk, NY, USA).", "As shown in Fig. 1, a total of 640 AIS patients underwent reperfusion therapy in our center, there were 24.6 % (158/640) patients missed LDL-C levels or outcome follow-up, we compared included patients to these patients, and we found that there were no significant differences in demographic parameters (age and gender), vascular risk factors (diabetes, atrial fibrillation, current smoking, and coronary heart diseases), baseline NIHSS score, TOAST classification, and reperfusion therapy method between two groups, except for more patients with prior history of stroke and hypertension among the included group (Table S1 in supplementary materials). Finally, a total of 432 patients (mean age 69.2 ± 13.5 years, 54.6 % males) were included. As shown in Table 1, the mean admission LDL-C level was 2.55 ± 0.93 mmol/L, the mean lowest LDL-C level during hospitalization was 2.00 ± 0.88mmol/L, and the median ΔLDL-C was 0.43 mmol/L (IQR 0.08–0.94 mmol/L). The median interval time between stroke onset and emergency department was 2.5 h (IQR 1.8-3.0 h). The median interval time between admission and the lowest LDL-C measurement during hospitalization was 3.1 d (IQR 0.8–6.6 d). For most patients (357/432, 82.6 %), LDL-C levels decreased during hospitalization. A total of 263 (60.9 %) patients had poor 90-day functional outcomes.\nFig. 1Patients’ inclusion flowchart. 
AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferaseTable 1Patient characteristics stratified by functional outcome at 90 daysVariablesOverallGood functional outcomePoor functional outcomeP value(n = 432)(n = 169)(n = 263)Age, years, mean (SD)69.2 (13.5)64.9 (13.9)72.0 (12.5)< 0.001Male, n (%)236 (54.6)109 (64.4)127 (48.3)< 0.001Hypertension, n (%)259 (60.0)95 (56.2)164 (62.4)0.203Diabetes, n (%)99 (22.9)34 (20.1)65 (24.7)0.267hyperlipemia, n (%)34 (7.9)13 (7.7)21 (8.0)0.912Atrial fibrillation, n (%)216 (50.0)63 (37.3)133 (58.2)< 0.001Valvular heart diseases, n (%)74 (17.1)25 (14.8)49 (18.6)0.301Coronary heart diseases, n (%)63 (14.6)22 (13.0)41 (15.6)0.460Previous stroke, n (%)39 (9.0)12(7.1)27 (10.3)0.263Current smoking, n (%)107 (24.8)52 (30.8)55 (20.9)0.021Alcohol consumption, n (%)100 (23.1)52 (30.8)51 (19.4)0.021Statin use before admission, n (%)39 (9.0)14 (8.3)25 (9.5)0.665Baseline NIHSS, median (Q1-Q3)14 (9–18)9 (6–14)15(12–20)< 0.001LDL-C, mmol/L, mean (SD)2.55 (0.93)2.59 (0.91)2.53 (0.95)0.496HDL, mmol/L, mean (SD)1.27 (0.42)1.25 (0. 49)1.29 (0.36)0.332TG, mmol/L, median (Q1-Q3)1.24 (0.87–1.89)1.20 (0.85–1.94)1.20 (0.88–1.81)0.566TC, mmol/L, mean (SD)4.23 (1.13)4.22 (1.13)4.23 (1.13)0.966Serum glucose, mmol/L, mean (SD)7.53 (6.49–9.18)7.96 (2.76)8.57 (3.04)0.035White blood cell, a10^9 /L, mean (SD)8.47 (3.19)8.23 (2.77)8.62 (3.43)0.207TOAST classification, n (%)< 0.001 Large-artery Atherosclerosis139 (32.2)59 (34.9)80 (30.4) Cardio-embolism192(44.4)57 (33.7)135 (51.3) Lacunar28 (6.5)22 (13.0)6 (2.3) Other13 (3.0)7 (4.1)6 (2.3)  Undetermined60 (13.9)24 (14.2)36 (13.7)Reperfusion therapy method, n (%)0.092 thrombolysis only124 (28.7)58 (34.3)66 (25.1) thrombectomy only210 (48.6)70 (41.4)140 (53.2) thrombolysis and thrombectomy98 (22.7)41 (24.3)57 (21.7)Statin use during hospitalization, n (%)263 (60.9)103 (60.9)160 (60.8)0.982Interval between stroke onset and emergency department, h, median (Q1-Q3)2.5 (1.8-3.0)2.3 (2.0–3.0)2.5 (1.7–3.5)0.610Interval between stroke onset and admission measurement of LDL-C, h, median (Q1-Q3)3.1 (2.2–3.9)3.1 (2.1–3.8)3.1 (2.1–3.8)0.685Interval between admission and follow-up measurement of LDL-C, d, median (Q1-Q3)3.1 (0.8–6.6)1.7 (0.6–6.4)3.4 (1.0-7.1)0.008aΔLDL-C, mmol/L, median (Q1-Q3)0.43 (0.08–0.94)0.28 (0.07–0.78)0.52 (0.10–1.03)0.009The lowest LDL-C during hospitalization, mmol/l, median (Q1-Q3)1.92 (1.38–2.44)2.11 (1.48–2.69)1.83 (1.34–2.32)0.002LDL-C variation, n (%)0.542decreased (ΔLDL-C > 0)357 (82.6)142 (84.0)215 (81.7)increased (ΔLDL-C ≤ 0)75 (17.4)27 (16.0)48 (18.3)SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C\nPatients’ inclusion flowchart. 
AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferase\nPatient characteristics stratified by functional outcome at 90 days\nSD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment\naΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C", "Age, sex, baseline NIHSS score, atrial fibrillation, current smoking, drinking consumption, TOAST classification, serum glucose, reperfusion therapy method, and the interval between admission LDL-C and the lowest LDL-C during hospitalization were significantly correlated with poor outcomes in univariate analysis (Table 2). There was no significant association between admission LDL-C level and functional outcome at 90 days when LDL-C level was regarded as a continuous variable (OR 1.03, 95 % CI 0.81–1.31, p = 0.802), or categorical variable (T3 vs. T1, OR 0.97, 95 % CI 0.55–1.71, p = 0.919, Table 3).\nTable 2Univariable logistic regression analysis of variables associated with poor functional outcomeVariableUnadjusted odds ratio (95% confidence interval)p-valueAge1.04 (1.03, 1.06)<0.001Male0.51 (0.35, 0.77)0.001Hypertension1.29 (0.87, 1.91)0.204Diabetes1.30 (0.82, 2.08)0.268hyperlipemia1.04 (0.51, 2.14)0.912Atrial fibrillation2.34 (1.57, 3.48)<0.001Valvular heart diseases1.32 (0.78, 2.23)0.302coronary heart diseases1.23 (0.71, 2.16)0.460Statin use before admission1.16 (0.59, 2.31)0.666Previous stroke1.50 (0.74, 3.04)0.265Current smoking0.60 (0.38, 0.93)0.021Alcohol consumption0.59 (0.38, 0.93)0.022Baseline NIHSS score1.17 (1.12, 1.21)<0.001HDL upon admission1.28 (0.78, 2.10)0.334TG upon admission0.92 (0.78, 1.09)0.333TC upon admission1.00 (0.85, 1.19)0.966Serum glucose1.08 (1.00, 1.17)0.038White blood cell1.04 (0.98, 1.11)0.208TOAST classificationLarge-artery AtherosclerosisReferenceCardio-embolism1.75 (1.11, 2.76)0.017Lacunar0.20 (0.08, 0.53)0.001Other0.63 (0.20, 1.98)0.431Undetermined1.11 (0.60, 2.05)0.748Reperfusion therapy methodthrombolysis onlyReferencethrombectomy only1.76 (1.12, 2.77)0.015thrombolysis and thrombectomy1.22 (0.72, 2.09)0.463Statin use during hospitalization0.98 (0.67, 1.48)0.995Interval between stroke onset and emergency department1.05 (0.89, 1.24)0.546Interval between stroke onset and admission measurement of LDL-C1.04 (0.89, 1.21)0.616Interval between admission and follow-up measurement of LDL-C1.06 (1.01, 1.12)0.023NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentTable 3Multivariate logistic regression analysis between admission LDL-C and poor functional outcomeaVariableNon-adjusted modelAdjusted modelAdmission LDL-C, mmol/L0.93 (0.76, 1.14), 0.4961.03 (0.81, 1.31), 0.802Admission LDL-C tertiles, mmol/L T1(0.74–2.15)ReferenceReference T2(2.16–2.80)1.31 (0.81, 2.12), 0.2711.62 (0.92, 2.85), 0.096 T3(2.81–9.61)0.82 (0.51, 1.31), 0.4040.97 (0.55, 1.71), 0.919Adjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, and reperfusion therapy methodLDL-C low-Density Lipoprotein Cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaResults for each 
model are presented as odds ratio (95 % confidence interval), p-value\nUnivariable logistic regression analysis of variables associated with poor functional outcome\nNIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment\nMultivariate logistic regression analysis between admission LDL-C and poor functional outcomea\nAdjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, and reperfusion therapy method\nLDL-C low-Density Lipoprotein Cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment\naResults for each model are presented as odds ratio (95 % confidence interval), p-value", "When ΔLDL-C was regarded as a continuous variable, ΔLDL-C was significantly associated with poor functional outcome at 90 days in univariate analysis (OR 1.55, 95 % CI 1.12–2.15, p = 0.009, Table 4). After adjusting for confounding variables, the association between ΔLDL-C and the poor outcome remained significant (OR 1.80, 95 % CI 1.12–2.91, p = 0.016).\nWhen ΔLDL-C was regarded as a categorical variable, patients in the upper tertile (T3, 0.80–3.98 mmol/L) had a higher risk of poor outcome than those in the lower tertile (T1, -0.91-0.13 mmol/L) in univariate analysis (OR 1.92, 95 % CI 1.15–3.20, p = 0.012). After adjusting for confounding variables, the association between ΔLDL-C and the poor outcome remained significant (OR 2.56, 95 % CI 1.22–5.36, p = 0.013). The risk of poor functional outcome increased significantly with ΔLDL-C tertile (P-trend = 0.010).\nTable 4Multivariate logistic regression analysis between ΔLDL-C and poor functional outcomeaVariableNon-adjusted modelAdjusted model 1Adjusted model 2ΔLDL-C, mmol/l1.55(1.12, 2.15),0.0091.79(1.11, 2.89),0.0171.80(1.12, 2.91),0.016ΔLDL-C tertiles, mmol/l T1 (-0.91-0.13)ReferenceReferenceReference T2 (0.14–0.79)1.03(0.65, 1.64),0.8881.23(0.70, 2.18),0.4731.24(0.70, 2.19),0.470 T3 (0.80–3.98)1.92(1.15, 3.20),0.0122.56(1.22, 5.35),0.0132.56(1.22, 5.36),0.013P-trend0.0070.0100.010Adjusted model 1: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, reperfusion therapy method, and interval between admission and follow-up measurement of LDL-CAdjusted model 2: adjusted for variables in model 1 and statin use during hospitalizationLDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaResults for each model are presented as odds ratio (95 % confidence interval), p-value\nMultivariate logistic regression analysis between ΔLDL-C and poor functional outcomea\nAdjusted model 1: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, reperfusion therapy method, and interval between admission and follow-up measurement of LDL-C\nAdjusted model 2: adjusted for variables in model 1 and statin use during hospitalization\nLDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment\naResults for each model are presented as odds ratio (95 % confidence 
interval), p-value", "\nAdditional file 1: Table S1. Patient characteristics stratified by included patients and excluded patients\n\nAdditional file 1: Table S1. Patient characteristics stratified by included patients and excluded patients" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Baseline data", "Outcome", "Statistical analysis", "Results", "Baseline characteristics and outcome", "Association between admission LDL-C and outcome", "Association between ΔLDL-C and outcome", "Discussion", "Conclusions", "Supplementary Information", "" ]
[ "The association between low-density lipoprotein cholesterol (LDL-C) and outcomes in acute ischemic stroke (AIS) patients remains controversial [1–11]. The inconsistent results might be explained by oxidative stress. Since AIS patients may suffer from enhanced free-radical damage after reperfusion therapy [12, 13], focusing on patients with reperfusion therapy might clarify the role of LDL-C. However, previous studies on the association between LDL-C level and outcomes in AIS patients with reperfusion therapy failed to reach a consensus [5–8]. The conflicting conclusions may be due to the single measurement of LDL-C.\nSome studies found that serum LDL-C levels decreased after the onset of AIS [14–18]. Under oxidative stress, low-density lipoprotein (LDL) gets oxidized into oxidized low-density lipoprotein (oxLDL) [19]. The extent of decreased LDL-C may reflect the degree of increased oxLDL, which may indicate the severity of oxidative stress [20] and is positively associated with poor functional outcomes [21–24]. Therefore, a more sensitive marker may be the change in serum LDL-C during hospitalization [25].\nHowever, there is uncertainty on the association between LDL-C change and outcomes in patients with reperfusion therapy. In this study, we aimed to explore the association between changes in LDL-C levels and functional outcomes in these patients.", "Study population This is a retrospective study. AIS patients admitted to Neurology Department, West China Hospital were consecutively enrolled between 1st June 2018 and 31st January 2021. AIS was diagnosed based on clinical manifestation and brain image [26]. Patients were included as follows: (1) underwent reperfusion therapy within 6 h after symptom onset, including intravenous thrombolysis with alteplase and/or endovascular thrombectomy (including mechanical or thrombus aspiration thrombectomy, or both, with or without intra-arterial alteplase infusion), and (2) LDL-C levels were measured at emergency department immediately after admission and at least on another occasion during hospitalization. The exclusion criteria were as follows: (1) premorbid modified Rankin scale [mRS] scores > 1, (2) younger than 18 years, (3) had a liver injury that may affect serum lipid levels [15], or (4) malignancy. We obtained informed consent from each patient or their relative. The Scientific Research Department of West China Hospital approved this study.\nThis is a retrospective study. AIS patients admitted to Neurology Department, West China Hospital were consecutively enrolled between 1st June 2018 and 31st January 2021. AIS was diagnosed based on clinical manifestation and brain image [26]. Patients were included as follows: (1) underwent reperfusion therapy within 6 h after symptom onset, including intravenous thrombolysis with alteplase and/or endovascular thrombectomy (including mechanical or thrombus aspiration thrombectomy, or both, with or without intra-arterial alteplase infusion), and (2) LDL-C levels were measured at emergency department immediately after admission and at least on another occasion during hospitalization. The exclusion criteria were as follows: (1) premorbid modified Rankin scale [mRS] scores > 1, (2) younger than 18 years, (3) had a liver injury that may affect serum lipid levels [15], or (4) malignancy. We obtained informed consent from each patient or their relative. 
The Scientific Research Department of West China Hospital approved this study.\nBaseline data Data on demographics (age, gender), level of neurological severity (according to the National Institute of Health Stroke Scale [NIHSS] score), risk factors (atrial fibrillation, hypertension, hyperlipidemia, diabetes mellitus, smoking status, and coronary heart diseases), laboratory results (white blood cell, glucose, TG, TC, HDL, and LDL-C), and the interval between stroke onset and emergency department were documented at admission. The interval between stroke onset and admission measurement of LDL-C and the interval between admission and follow-up measurement of LDL-C during hospitalization were also documented. Serum LDL-C was measured by the automatic biochemistry analyzer (Roche Cobas 8000) [27]. The Trial of Org 10,172 in Acute Stroke Treatment (TOAST) classification system was conducted to identify stroke subtypes [28].\nData on demographics (age, gender), level of neurological severity (according to the National Institute of Health Stroke Scale [NIHSS] score), risk factors (atrial fibrillation, hypertension, hyperlipidemia, diabetes mellitus, smoking status, and coronary heart diseases), laboratory results (white blood cell, glucose, TG, TC, HDL, and LDL-C), and the interval between stroke onset and emergency department were documented at admission. The interval between stroke onset and admission measurement of LDL-C and the interval between admission and follow-up measurement of LDL-C during hospitalization were also documented. Serum LDL-C was measured by the automatic biochemistry analyzer (Roche Cobas 8000) [27]. The Trial of Org 10,172 in Acute Stroke Treatment (TOAST) classification system was conducted to identify stroke subtypes [28].\nOutcome All patients were followed up by telephone or interview at 90 days to evaluate their functional outcomes blinded to their LDL-C levels. We used the modified Rankin Scale (mRS) to measure functional outcomes at 90 days [29]. Poor functional outcome was defined as mRS score > 2 [29].\nAll patients were followed up by telephone or interview at 90 days to evaluate their functional outcomes blinded to their LDL-C levels. We used the modified Rankin Scale (mRS) to measure functional outcomes at 90 days [29]. Poor functional outcome was defined as mRS score > 2 [29].\nStatistical analysis Continuous variables were reported as means with standard deviations (SD) for normally distributed parameters or medians with interquartile range (IQR) for non-normally distributed parameters. Frequencies or percentages were used to describe categorical variables. Descriptive analyses of study population baseline characteristics and 90-day outcomes were reported for groups using the χ2 test or Fisher’s exact test for categorical data, the Student’s t-test, and the Mann-Whitney U test for continuous variables as appropriate. Significant confounders were defined as variables within p < 0.10 in univariate analysis. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C: a positive ΔLDL-C indicated LDL-C decreased during hospitalization, and a negative ΔLDL-C indicated an increase in LDL-C level. Multivariate logistic regression models were used to determine associations between ΔLDL-C and outcome. To further explore the associations, we did trend analyses by categorizing ΔLDL-C into tertiles [30]. 
Trends across tertiles (P-trend) of ΔLDL-C were determined by entering the median value of ΔLDL-C in each category as a continuous variable [31]. Data were reported as odds ratios (OR) and 95 % confidence intervals (CI). A two-sided P value less than 0.05 was considered statistically significant. All analyses were performed using IBM SPSS Statistics (25.0; IBM, Armonk, NY, USA).\nContinuous variables were reported as means with standard deviations (SD) for normally distributed parameters or medians with interquartile range (IQR) for non-normally distributed parameters. Frequencies or percentages were used to describe categorical variables. Descriptive analyses of study population baseline characteristics and 90-day outcomes were reported for groups using the χ2 test or Fisher’s exact test for categorical data, the Student’s t-test, and the Mann-Whitney U test for continuous variables as appropriate. Significant confounders were defined as variables within p < 0.10 in univariate analysis. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C: a positive ΔLDL-C indicated LDL-C decreased during hospitalization, and a negative ΔLDL-C indicated an increase in LDL-C level. Multivariate logistic regression models were used to determine associations between ΔLDL-C and outcome. To further explore the associations, we did trend analyses by categorizing ΔLDL-C into tertiles [30]. Trends across tertiles (P-trend) of ΔLDL-C were determined by entering the median value of ΔLDL-C in each category as a continuous variable [31]. Data were reported as odds ratios (OR) and 95 % confidence intervals (CI). A two-sided P value less than 0.05 was considered statistically significant. All analyses were performed using IBM SPSS Statistics (25.0; IBM, Armonk, NY, USA).", "This is a retrospective study. AIS patients admitted to Neurology Department, West China Hospital were consecutively enrolled between 1st June 2018 and 31st January 2021. AIS was diagnosed based on clinical manifestation and brain image [26]. Patients were included as follows: (1) underwent reperfusion therapy within 6 h after symptom onset, including intravenous thrombolysis with alteplase and/or endovascular thrombectomy (including mechanical or thrombus aspiration thrombectomy, or both, with or without intra-arterial alteplase infusion), and (2) LDL-C levels were measured at emergency department immediately after admission and at least on another occasion during hospitalization. The exclusion criteria were as follows: (1) premorbid modified Rankin scale [mRS] scores > 1, (2) younger than 18 years, (3) had a liver injury that may affect serum lipid levels [15], or (4) malignancy. We obtained informed consent from each patient or their relative. The Scientific Research Department of West China Hospital approved this study.", "Data on demographics (age, gender), level of neurological severity (according to the National Institute of Health Stroke Scale [NIHSS] score), risk factors (atrial fibrillation, hypertension, hyperlipidemia, diabetes mellitus, smoking status, and coronary heart diseases), laboratory results (white blood cell, glucose, TG, TC, HDL, and LDL-C), and the interval between stroke onset and emergency department were documented at admission. The interval between stroke onset and admission measurement of LDL-C and the interval between admission and follow-up measurement of LDL-C during hospitalization were also documented. 
Serum LDL-C was measured by the automatic biochemistry analyzer (Roche Cobas 8000) [27]. The Trial of Org 10,172 in Acute Stroke Treatment (TOAST) classification system was conducted to identify stroke subtypes [28].", "All patients were followed up by telephone or interview at 90 days to evaluate their functional outcomes blinded to their LDL-C levels. We used the modified Rankin Scale (mRS) to measure functional outcomes at 90 days [29]. Poor functional outcome was defined as mRS score > 2 [29].", "Continuous variables were reported as means with standard deviations (SD) for normally distributed parameters or medians with interquartile range (IQR) for non-normally distributed parameters. Frequencies or percentages were used to describe categorical variables. Descriptive analyses of study population baseline characteristics and 90-day outcomes were reported for groups using the χ2 test or Fisher’s exact test for categorical data, the Student’s t-test, and the Mann-Whitney U test for continuous variables as appropriate. Significant confounders were defined as variables within p < 0.10 in univariate analysis. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C: a positive ΔLDL-C indicated LDL-C decreased during hospitalization, and a negative ΔLDL-C indicated an increase in LDL-C level. Multivariate logistic regression models were used to determine associations between ΔLDL-C and outcome. To further explore the associations, we did trend analyses by categorizing ΔLDL-C into tertiles [30]. Trends across tertiles (P-trend) of ΔLDL-C were determined by entering the median value of ΔLDL-C in each category as a continuous variable [31]. Data were reported as odds ratios (OR) and 95 % confidence intervals (CI). A two-sided P value less than 0.05 was considered statistically significant. All analyses were performed using IBM SPSS Statistics (25.0; IBM, Armonk, NY, USA).", "Baseline characteristics and outcome As shown in Fig. 1, a total of 640 AIS patients underwent reperfusion therapy in our center, there were 24.6 % (158/640) patients missed LDL-C levels or outcome follow-up, we compared included patients to these patients, and we found that there were no significant differences in demographic parameters (age and gender), vascular risk factors (diabetes, atrial fibrillation, current smoking, and coronary heart diseases), baseline NIHSS score, TOAST classification, and reperfusion therapy method between two groups, except for more patients with prior history of stroke and hypertension among the included group (Table S1 in supplementary materials). Finally, a total of 432 patients (mean age 69.2 ± 13.5 years, 54.6 % males) were included. As shown in Table 1, the mean admission LDL-C level was 2.55 ± 0.93 mmol/L, the mean lowest LDL-C level during hospitalization was 2.00 ± 0.88mmol/L, and the median ΔLDL-C was 0.43 mmol/L (IQR 0.08–0.94 mmol/L). The median interval time between stroke onset and emergency department was 2.5 h (IQR 1.8-3.0 h). The median interval time between admission and the lowest LDL-C measurement during hospitalization was 3.1 d (IQR 0.8–6.6 d). For most patients (357/432, 82.6 %), LDL-C levels decreased during hospitalization. A total of 263 (60.9 %) patients had poor 90-day functional outcomes.\nFig. 1Patients’ inclusion flowchart. 
AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferaseTable 1Patient characteristics stratified by functional outcome at 90 daysVariablesOverallGood functional outcomePoor functional outcomeP value(n = 432)(n = 169)(n = 263)Age, years, mean (SD)69.2 (13.5)64.9 (13.9)72.0 (12.5)< 0.001Male, n (%)236 (54.6)109 (64.4)127 (48.3)< 0.001Hypertension, n (%)259 (60.0)95 (56.2)164 (62.4)0.203Diabetes, n (%)99 (22.9)34 (20.1)65 (24.7)0.267hyperlipemia, n (%)34 (7.9)13 (7.7)21 (8.0)0.912Atrial fibrillation, n (%)216 (50.0)63 (37.3)133 (58.2)< 0.001Valvular heart diseases, n (%)74 (17.1)25 (14.8)49 (18.6)0.301Coronary heart diseases, n (%)63 (14.6)22 (13.0)41 (15.6)0.460Previous stroke, n (%)39 (9.0)12(7.1)27 (10.3)0.263Current smoking, n (%)107 (24.8)52 (30.8)55 (20.9)0.021Alcohol consumption, n (%)100 (23.1)52 (30.8)51 (19.4)0.021Statin use before admission, n (%)39 (9.0)14 (8.3)25 (9.5)0.665Baseline NIHSS, median (Q1-Q3)14 (9–18)9 (6–14)15(12–20)< 0.001LDL-C, mmol/L, mean (SD)2.55 (0.93)2.59 (0.91)2.53 (0.95)0.496HDL, mmol/L, mean (SD)1.27 (0.42)1.25 (0. 49)1.29 (0.36)0.332TG, mmol/L, median (Q1-Q3)1.24 (0.87–1.89)1.20 (0.85–1.94)1.20 (0.88–1.81)0.566TC, mmol/L, mean (SD)4.23 (1.13)4.22 (1.13)4.23 (1.13)0.966Serum glucose, mmol/L, mean (SD)7.53 (6.49–9.18)7.96 (2.76)8.57 (3.04)0.035White blood cell, a10^9 /L, mean (SD)8.47 (3.19)8.23 (2.77)8.62 (3.43)0.207TOAST classification, n (%)< 0.001 Large-artery Atherosclerosis139 (32.2)59 (34.9)80 (30.4) Cardio-embolism192(44.4)57 (33.7)135 (51.3) Lacunar28 (6.5)22 (13.0)6 (2.3) Other13 (3.0)7 (4.1)6 (2.3)  Undetermined60 (13.9)24 (14.2)36 (13.7)Reperfusion therapy method, n (%)0.092 thrombolysis only124 (28.7)58 (34.3)66 (25.1) thrombectomy only210 (48.6)70 (41.4)140 (53.2) thrombolysis and thrombectomy98 (22.7)41 (24.3)57 (21.7)Statin use during hospitalization, n (%)263 (60.9)103 (60.9)160 (60.8)0.982Interval between stroke onset and emergency department, h, median (Q1-Q3)2.5 (1.8-3.0)2.3 (2.0–3.0)2.5 (1.7–3.5)0.610Interval between stroke onset and admission measurement of LDL-C, h, median (Q1-Q3)3.1 (2.2–3.9)3.1 (2.1–3.8)3.1 (2.1–3.8)0.685Interval between admission and follow-up measurement of LDL-C, d, median (Q1-Q3)3.1 (0.8–6.6)1.7 (0.6–6.4)3.4 (1.0-7.1)0.008aΔLDL-C, mmol/L, median (Q1-Q3)0.43 (0.08–0.94)0.28 (0.07–0.78)0.52 (0.10–1.03)0.009The lowest LDL-C during hospitalization, mmol/l, median (Q1-Q3)1.92 (1.38–2.44)2.11 (1.48–2.69)1.83 (1.34–2.32)0.002LDL-C variation, n (%)0.542decreased (ΔLDL-C > 0)357 (82.6)142 (84.0)215 (81.7)increased (ΔLDL-C ≤ 0)75 (17.4)27 (16.0)48 (18.3)SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C\nPatients’ inclusion flowchart. 
AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferase\nPatient characteristics stratified by functional outcome at 90 days\nSD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment\naΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C\nAs shown in Fig. 1, a total of 640 AIS patients underwent reperfusion therapy in our center, there were 24.6 % (158/640) patients missed LDL-C levels or outcome follow-up, we compared included patients to these patients, and we found that there were no significant differences in demographic parameters (age and gender), vascular risk factors (diabetes, atrial fibrillation, current smoking, and coronary heart diseases), baseline NIHSS score, TOAST classification, and reperfusion therapy method between two groups, except for more patients with prior history of stroke and hypertension among the included group (Table S1 in supplementary materials). Finally, a total of 432 patients (mean age 69.2 ± 13.5 years, 54.6 % males) were included. As shown in Table 1, the mean admission LDL-C level was 2.55 ± 0.93 mmol/L, the mean lowest LDL-C level during hospitalization was 2.00 ± 0.88mmol/L, and the median ΔLDL-C was 0.43 mmol/L (IQR 0.08–0.94 mmol/L). The median interval time between stroke onset and emergency department was 2.5 h (IQR 1.8-3.0 h). The median interval time between admission and the lowest LDL-C measurement during hospitalization was 3.1 d (IQR 0.8–6.6 d). For most patients (357/432, 82.6 %), LDL-C levels decreased during hospitalization. A total of 263 (60.9 %) patients had poor 90-day functional outcomes.\nFig. 1Patients’ inclusion flowchart. AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferaseTable 1Patient characteristics stratified by functional outcome at 90 daysVariablesOverallGood functional outcomePoor functional outcomeP value(n = 432)(n = 169)(n = 263)Age, years, mean (SD)69.2 (13.5)64.9 (13.9)72.0 (12.5)< 0.001Male, n (%)236 (54.6)109 (64.4)127 (48.3)< 0.001Hypertension, n (%)259 (60.0)95 (56.2)164 (62.4)0.203Diabetes, n (%)99 (22.9)34 (20.1)65 (24.7)0.267hyperlipemia, n (%)34 (7.9)13 (7.7)21 (8.0)0.912Atrial fibrillation, n (%)216 (50.0)63 (37.3)133 (58.2)< 0.001Valvular heart diseases, n (%)74 (17.1)25 (14.8)49 (18.6)0.301Coronary heart diseases, n (%)63 (14.6)22 (13.0)41 (15.6)0.460Previous stroke, n (%)39 (9.0)12(7.1)27 (10.3)0.263Current smoking, n (%)107 (24.8)52 (30.8)55 (20.9)0.021Alcohol consumption, n (%)100 (23.1)52 (30.8)51 (19.4)0.021Statin use before admission, n (%)39 (9.0)14 (8.3)25 (9.5)0.665Baseline NIHSS, median (Q1-Q3)14 (9–18)9 (6–14)15(12–20)< 0.001LDL-C, mmol/L, mean (SD)2.55 (0.93)2.59 (0.91)2.53 (0.95)0.496HDL, mmol/L, mean (SD)1.27 (0.42)1.25 (0. 
49)1.29 (0.36)0.332TG, mmol/L, median (Q1-Q3)1.24 (0.87–1.89)1.20 (0.85–1.94)1.20 (0.88–1.81)0.566TC, mmol/L, mean (SD)4.23 (1.13)4.22 (1.13)4.23 (1.13)0.966Serum glucose, mmol/L, mean (SD)7.53 (6.49–9.18)7.96 (2.76)8.57 (3.04)0.035White blood cell, a10^9 /L, mean (SD)8.47 (3.19)8.23 (2.77)8.62 (3.43)0.207TOAST classification, n (%)< 0.001 Large-artery Atherosclerosis139 (32.2)59 (34.9)80 (30.4) Cardio-embolism192(44.4)57 (33.7)135 (51.3) Lacunar28 (6.5)22 (13.0)6 (2.3) Other13 (3.0)7 (4.1)6 (2.3)  Undetermined60 (13.9)24 (14.2)36 (13.7)Reperfusion therapy method, n (%)0.092 thrombolysis only124 (28.7)58 (34.3)66 (25.1) thrombectomy only210 (48.6)70 (41.4)140 (53.2) thrombolysis and thrombectomy98 (22.7)41 (24.3)57 (21.7)Statin use during hospitalization, n (%)263 (60.9)103 (60.9)160 (60.8)0.982Interval between stroke onset and emergency department, h, median (Q1-Q3)2.5 (1.8-3.0)2.3 (2.0–3.0)2.5 (1.7–3.5)0.610Interval between stroke onset and admission measurement of LDL-C, h, median (Q1-Q3)3.1 (2.2–3.9)3.1 (2.1–3.8)3.1 (2.1–3.8)0.685Interval between admission and follow-up measurement of LDL-C, d, median (Q1-Q3)3.1 (0.8–6.6)1.7 (0.6–6.4)3.4 (1.0-7.1)0.008aΔLDL-C, mmol/L, median (Q1-Q3)0.43 (0.08–0.94)0.28 (0.07–0.78)0.52 (0.10–1.03)0.009The lowest LDL-C during hospitalization, mmol/l, median (Q1-Q3)1.92 (1.38–2.44)2.11 (1.48–2.69)1.83 (1.34–2.32)0.002LDL-C variation, n (%)0.542decreased (ΔLDL-C > 0)357 (82.6)142 (84.0)215 (81.7)increased (ΔLDL-C ≤ 0)75 (17.4)27 (16.0)48 (18.3)
SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment
a ΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C

Association between admission LDL-C and outcome
Age, sex, baseline NIHSS score, atrial fibrillation, current smoking, alcohol consumption, TOAST classification, serum glucose, reperfusion therapy method, and the interval between the admission and follow-up LDL-C measurements were significantly associated with poor outcome in univariate analysis (Table 2). There was no significant association between admission LDL-C level and functional outcome at 90 days, whether LDL-C was modelled as a continuous variable (OR 1.03, 95% CI 0.81–1.31, p = 0.802) or as a categorical variable (T3 vs. T1, OR 0.97, 95% CI 0.55–1.71, p = 0.919; Table 3).

Table 2 Univariable logistic regression analysis of variables associated with poor functional outcome
Variable | Unadjusted odds ratio (95% confidence interval) | p-value
Age | 1.04 (1.03, 1.06) | <0.001
Male | 0.51 (0.35, 0.77) | 0.001
Hypertension | 1.29 (0.87, 1.91) | 0.204
Diabetes | 1.30 (0.82, 2.08) | 0.268
Hyperlipemia | 1.04 (0.51, 2.14) | 0.912
Atrial fibrillation | 2.34 (1.57, 3.48) | <0.001
Valvular heart diseases | 1.32 (0.78, 2.23) | 0.302
Coronary heart diseases | 1.23 (0.71, 2.16) | 0.460
Statin use before admission | 1.16 (0.59, 2.31) | 0.666
Previous stroke | 1.50 (0.74, 3.04) | 0.265
Current smoking | 0.60 (0.38, 0.93) | 0.021
Alcohol consumption | 0.59 (0.38, 0.93) | 0.022
Baseline NIHSS score | 1.17 (1.12, 1.21) | <0.001
HDL upon admission | 1.28 (0.78, 2.10) | 0.334
TG upon admission | 0.92 (0.78, 1.09) | 0.333
TC upon admission | 1.00 (0.85, 1.19) | 0.966
Serum glucose | 1.08 (1.00, 1.17) | 0.038
White blood cell | 1.04 (0.98, 1.11) | 0.208
TOAST classification
  Large-artery atherosclerosis | Reference
  Cardio-embolism | 1.75 (1.11, 2.76) | 0.017
  Lacunar | 0.20 (0.08, 0.53) | 0.001
  Other | 0.63 (0.20, 1.98) | 0.431
  Undetermined | 1.11 (0.60, 2.05) | 0.748
Reperfusion therapy method
  Thrombolysis only | Reference
  Thrombectomy only | 1.76 (1.12, 2.77) | 0.015
  Thrombolysis and thrombectomy | 1.22 (0.72, 2.09) | 0.463
Statin use during hospitalization | 0.98 (0.67, 1.48) | 0.995
Interval between stroke onset and emergency department | 1.05 (0.89, 1.24) | 0.546
Interval between stroke onset and admission measurement of LDL-C | 1.04 (0.89, 1.21) | 0.616
Interval between admission and follow-up measurement of LDL-C | 1.06 (1.01, 1.12) | 0.023
NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment

Table 3 Multivariate logistic regression analysis between admission LDL-C and poor functional outcome^a
Variable | Non-adjusted model | Adjusted model
Admission LDL-C, mmol/L | 0.93 (0.76, 1.14), 0.496 | 1.03 (0.81, 1.31), 0.802
Admission LDL-C tertiles, mmol/L
  T1 (0.74–2.15) | Reference | Reference
  T2 (2.16–2.80) | 1.31 (0.81, 2.12), 0.271 | 1.62 (0.92, 2.85), 0.096
  T3 (2.81–9.61) | 0.82 (0.51, 1.31), 0.404 | 0.97 (0.55, 1.71), 0.919
Adjusted model: adjusted for age, sex, atrial fibrillation, current smoking, alcohol consumption, baseline NIHSS score, serum glucose, TOAST classification, and reperfusion therapy method
LDL-C low-density lipoprotein cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment
^a Results for each model are presented as odds ratio (95% confidence interval), p-value
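The study's models were fitted in SPSS; as a minimal illustrative sketch only, the unadjusted and adjusted logistic regressions for admission LDL-C (continuous and tertiles, as in Table 3) could be specified with pandas/statsmodels roughly as follows. The file name and column names (poor_outcome, ldl_admission, nihss, glucose, toast, reperfusion_method, etc.) are hypothetical, not the authors' variable names.

```python
# Illustrative sketch only; not the original SPSS analysis. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ais_reperfusion_cohort.csv")  # hypothetical analysis file

# Admission LDL-C tertiles (T1 = reference category)
df["ldl_tertile"] = pd.qcut(df["ldl_admission"], 3, labels=["T1", "T2", "T3"])

# Covariates listed in the paper's adjusted model
adjustment = (
    "age + C(sex) + C(atrial_fibrillation) + C(current_smoking) + C(alcohol)"
    " + nihss + glucose + C(toast) + C(reperfusion_method)"
)

# Continuous admission LDL-C, unadjusted and adjusted
m_crude = smf.logit("poor_outcome ~ ldl_admission", data=df).fit()
m_adj = smf.logit(f"poor_outcome ~ ldl_admission + {adjustment}", data=df).fit()

# Tertile model, adjusted (T2 and T3 compared with T1)
m_tert = smf.logit(f"poor_outcome ~ C(ldl_tertile) + {adjustment}", data=df).fit()

# Odds ratios with 95% confidence intervals for the adjusted model
ors = np.exp(m_adj.params).rename("OR")
ci = np.exp(m_adj.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}))
print(pd.concat([ors, ci], axis=1))
```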
Association between ΔLDL-C and outcome
When ΔLDL-C was regarded as a continuous variable, it was significantly associated with poor functional outcome at 90 days in univariate analysis (OR 1.55, 95% CI 1.12–2.15, p = 0.009; Table 4). After adjusting for confounding variables, the association between ΔLDL-C and poor outcome remained significant (OR 1.80, 95% CI 1.12–2.91, p = 0.016).
When ΔLDL-C was regarded as a categorical variable, patients in the upper tertile (T3, 0.80–3.98 mmol/L) had a higher risk of poor outcome than those in the lower tertile (T1, −0.91 to 0.13 mmol/L) in univariate analysis (OR 1.92, 95% CI 1.15–3.20, p = 0.012). After adjusting for confounding variables, this association also remained significant (OR 2.56, 95% CI 1.22–5.36, p = 0.013). The risk of poor functional outcome increased significantly across ΔLDL-C tertiles (P-trend = 0.010).

Table 4 Multivariate logistic regression analysis between ΔLDL-C and poor functional outcome^a
Variable | Non-adjusted model | Adjusted model 1 | Adjusted model 2
ΔLDL-C, mmol/L | 1.55 (1.12, 2.15), 0.009 | 1.79 (1.11, 2.89), 0.017 | 1.80 (1.12, 2.91), 0.016
ΔLDL-C tertiles, mmol/L
  T1 (−0.91 to 0.13) | Reference | Reference | Reference
  T2 (0.14–0.79) | 1.03 (0.65, 1.64), 0.888 | 1.23 (0.70, 2.18), 0.473 | 1.24 (0.70, 2.19), 0.470
  T3 (0.80–3.98) | 1.92 (1.15, 3.20), 0.012 | 2.56 (1.22, 5.35), 0.013 | 2.56 (1.22, 5.36), 0.013
P-trend | 0.007 | 0.010 | 0.010
Adjusted model 1: adjusted for age, sex, atrial fibrillation, current smoking, alcohol consumption, baseline NIHSS score, serum glucose, TOAST classification, reperfusion therapy method, and the interval between admission and follow-up measurement of LDL-C
Adjusted model 2: adjusted for the variables in model 1 plus statin use during hospitalization
LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment
^a Results for each model are presented as odds ratio (95% confidence interval), p-value
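The P-trend reported in Table 4 follows the approach described in the Methods: each patient is assigned the median ΔLDL-C of their tertile, and that value is entered into the model as a single continuous covariate. A minimal sketch of that step is shown below; again, this is illustrative only (the original analysis used SPSS), and the file and column names are hypothetical.

```python
# Illustrative sketch of the tertile trend test; not the original SPSS analysis.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ais_reperfusion_cohort.csv")  # hypothetical analysis file

# ΔLDL-C tertiles, then replace each patient's ΔLDL-C with their tertile's median
df["dldl_tertile"] = pd.qcut(df["delta_ldl"], 3, labels=["T1", "T2", "T3"])
df["dldl_trend"] = df.groupby("dldl_tertile")["delta_ldl"].transform("median")

# Covariate set corresponding to adjusted model 2
adjustment = (
    "age + C(sex) + C(atrial_fibrillation) + C(current_smoking) + C(alcohol)"
    " + nihss + glucose + C(toast) + C(reperfusion_method)"
    " + interval_to_followup_ldl + C(statin_in_hospital)"
)

m_trend = smf.logit(f"poor_outcome ~ dldl_trend + {adjustment}", data=df).fit()
print(m_trend.pvalues["dldl_trend"])  # analogous to the reported P-trend
```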
Discussion
We found that admission LDL-C was not associated with functional outcome at 90 days. In most AIS patients who underwent reperfusion therapy, LDL-C decreased during hospitalization, and the magnitude of this decrease was associated with poor 90-day functional outcome. We suggest that the magnitude of the in-hospital LDL-C decrease may reflect the severity of oxidative stress generated in the acute phase of AIS by ischemia and/or brain tissue reperfusion, which might in turn be positively associated with poor 90-day functional outcome in AIS patients treated with reperfusion therapy.
Previous studies disagree on the prognostic significance of LDL-C in AIS patients [1–11]. Some found that higher LDL-C was associated with poor outcome [1–3], some found that lower LDL-C was associated with poor outcome [4, 5], and others found no significant association [6–11]. These inconsistencies might be explained by differences in sample size, patient selection, potential confounders, outcome assessment, and the timing of LDL-C measurement. Several studies have suggested that LDL is oxidized into oxLDL under oxidative stress [19], and oxLDL may exacerbate free-radical damage in the acute phase of AIS [20, 32–35]. Since AIS patients treated with reperfusion therapy can suffer enhanced oxidative injury [12], focusing on these patients might clarify the role of LDL-C.
However, studies on the association between LDL-C level and outcome in AIS patients who underwent reperfusion therapy are scarce, and their conclusions are inconsistent [5–8]. Previous studies of AIS patients treated with thrombolysis failed to find an association between baseline LDL-C and outcome [6–8], in line with our findings. Recently, a retrospective study of 174 AIS patients treated with endovascular thrombectomy (EVT) found that a higher admission LDL-C level was independently associated with favorable functional outcome at 3 months [5]. The conflicting results in patients with reperfusion therapy may be partly due to reliance on a single LDL-C measurement.
Previous studies have suggested that LDL-C levels tend to decrease during the acute phase of AIS [14–18]. However, only one study has investigated the association between LDL-C change and outcome in AIS patients [25]. That multicenter study of 676 AIS patients found that an increase in LDL-C was associated with poor outcome at discharge [25]; although LDL-C decreased during hospitalization in most of its patients (566/676, 83.7%), the study did not further clarify the association between decreased LDL-C and outcome. In the current study, LDL-C likewise decreased during hospitalization in most patients (357/432, 82.6%), and a larger decrease in LDL-C was significantly associated with poor 90-day functional outcome in AIS patients treated with reperfusion therapy.
Although the underlying mechanism of the association between ΔLDL-C and outcome remains unclear, a plausible explanation is as follows: under oxidative stress, LDL-C is oxidized into oxLDL [19], and oxLDL is a major marker of oxidative stress [20, 32–34]. Previous studies found that high oxLDL is positively associated with poor functional outcome in AIS patients [21–23]. We therefore speculate that the in-hospital decrease in LDL-C may be accompanied by an increase in oxLDL; the extent of the LDL-C decrease may reflect the degree of oxLDL formation, and hence the severity of oxidative stress, which may contribute to poor functional outcome. Consistent with this, during an oxidative challenge LDL-C is converted to oxLDL [19], which contributes to free-radical damage [20, 32–35] and poor outcome [21–23].
A study of 3019 AIS/TIA patients from the Clopidogrel in High-Risk Patients with Acute Non-Disabling Cerebrovascular Events (CHANCE) trial found that higher levels of oxLDL and of the oxLDL/LDL ratio significantly increased the risk of poor functional outcome in AIS patients [21], which supports our hypothesis. Nevertheless, further high-quality studies are needed to verify it.
From a clinical point of view, LDL-C is a widely available and frequently measured biomarker, so our findings might help clinicians identify AIS patients undergoing reperfusion therapy who are at risk of poor 90-day functional outcome and guide therapy accordingly, without imposing any additional financial burden on patients' families.
Some limitations should be noted. First, this was a retrospective study and we could not measure oxLDL, so we could not confirm that the change in LDL-C levels was due to free-radical damage; however, a previous study found that higher levels of oxLDL and oxLDL/LDL-C significantly increased the risk of poor outcome [21], which supports our hypothesis. Second, we could not measure LDL-C at a fixed time point for each patient, and the results might vary with different testing times [16]; however, we adjusted for the interval between the admission and follow-up LDL-C measurements in Model 1, and our results remained significant. Third, statin therapy could have influenced LDL-C levels: a randomized controlled trial of 60 AIS patients found that LDL-C decreased significantly in statin-treated patients at day 7 and at 3 months [17]. In our study, the median interval between admission and the lowest LDL-C measurement was 3.1 d (IQR 0.8–6.6 d), there was no significant association between statin use and outcome in univariate analysis, and our findings remained significant after further adjusting for statin use in Model 2; the influence of statin use is therefore likely to be limited. Fourth, LDL-C was measured in a non-fasting state, which might influence the results, but a meta-analysis of 68 studies found that the association between LDL-C and ischemic stroke persisted even when LDL-C was measured in non-fasting patients [36]. Finally, patients did not undergo computed tomographic angiography after reperfusion therapy at our hospital, so we could not evaluate vessel status, which might influence our results.
Conclusions
There was no significant association between admission LDL-C level and outcome in AIS patients who underwent reperfusion therapy, whereas the decrease in LDL-C level during hospitalization was positively associated with poor functional outcome at 90 days.
Additional file 1: Table S1. Patient characteristics stratified by included patients and excluded patients
[ "Low-density lipoprotein cholesterol", "Change", "Acute ischemic stroke", "Reperfusion therapy", "Outcome" ]
Background: The association between low-density lipoprotein cholesterol (LDL-C) and outcomes in acute ischemic stroke (AIS) patients remains controversial [1–11]. The inconsistent results might be explained by oxidative stress. Since AIS patients may suffer enhanced free-radical damage after reperfusion therapy [12, 13], focusing on patients treated with reperfusion therapy might clarify the role of LDL-C. However, previous studies on the association between LDL-C level and outcomes in these patients have failed to reach a consensus [5–8]; the conflicting conclusions may be due to reliance on a single measurement of LDL-C. Some studies have found that serum LDL-C levels decrease after the onset of AIS [14–18]. Under oxidative stress, low-density lipoprotein (LDL) is oxidized into oxidized low-density lipoprotein (oxLDL) [19]. The extent of the LDL-C decrease may reflect the degree of oxLDL formation, which may indicate the severity of oxidative stress [20] and is positively associated with poor functional outcomes [21–24]. The change in serum LDL-C during hospitalization may therefore be a more sensitive marker [25]. However, the association between LDL-C change and outcomes in patients with reperfusion therapy remains uncertain. In this study, we aimed to explore the association between changes in LDL-C levels and functional outcomes in these patients.

Methods
Study population: This was a retrospective study. AIS patients admitted to the Neurology Department of West China Hospital were consecutively enrolled between 1 June 2018 and 31 January 2021. AIS was diagnosed based on clinical manifestations and brain imaging [26]. Patients were included if they (1) underwent reperfusion therapy within 6 h after symptom onset, including intravenous thrombolysis with alteplase and/or endovascular thrombectomy (mechanical or thrombus-aspiration thrombectomy, or both, with or without intra-arterial alteplase infusion), and (2) had LDL-C measured at the emergency department immediately after admission and on at least one other occasion during hospitalization. Patients were excluded if they (1) had a premorbid modified Rankin Scale (mRS) score > 1, (2) were younger than 18 years, (3) had a liver injury that might affect serum lipid levels [15], or (4) had a malignancy. We obtained informed consent from each patient or their relative. The Scientific Research Department of West China Hospital approved this study.
Baseline data: Data on demographics (age, gender), neurological severity (National Institutes of Health Stroke Scale [NIHSS] score), risk factors (atrial fibrillation, hypertension, hyperlipidemia, diabetes mellitus, smoking status, and coronary heart disease), laboratory results (white blood cell count, glucose, TG, TC, HDL, and LDL-C), and the interval between stroke onset and arrival at the emergency department were documented at admission. The interval between stroke onset and the admission LDL-C measurement and the interval between admission and the follow-up LDL-C measurement during hospitalization were also documented. Serum LDL-C was measured with an automatic biochemistry analyzer (Roche Cobas 8000) [27]. Stroke subtypes were identified with the Trial of Org 10,172 in Acute Stroke Treatment (TOAST) classification system [28].
Outcome: All patients were followed up by telephone or interview at 90 days to evaluate functional outcome, with assessors blinded to LDL-C levels. Functional outcome at 90 days was measured with the modified Rankin Scale (mRS) [29], and poor functional outcome was defined as an mRS score > 2 [29].
Statistical analysis: Continuous variables were reported as means with standard deviations (SD) for normally distributed parameters or medians with interquartile ranges (IQR) for non-normally distributed parameters; categorical variables were described as frequencies or percentages. Baseline characteristics and 90-day outcomes were compared between groups with the χ2 test or Fisher's exact test for categorical data and the Student's t-test or Mann-Whitney U test for continuous variables, as appropriate. Significant confounders were defined as variables with p < 0.10 in univariate analysis. The change in LDL-C (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C: a positive ΔLDL-C indicated that LDL-C decreased during hospitalization, and a negative ΔLDL-C indicated an increase. Multivariate logistic regression models were used to determine the association between ΔLDL-C and outcome. To further explore this association, we performed trend analyses by categorizing ΔLDL-C into tertiles [30]; trends across tertiles (P-trend) were assessed by entering the median value of ΔLDL-C in each category as a continuous variable [31]. Data were reported as odds ratios (OR) with 95% confidence intervals (CI). A two-sided p value less than 0.05 was considered statistically significant. All analyses were performed using IBM SPSS Statistics (25.0; IBM, Armonk, NY, USA).
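As a minimal data-preparation sketch of the definitions above, ΔLDL-C and the dichotomized outcome could be derived from a long-format table of LDL-C measurements as follows. This is illustrative only (the original analysis was carried out in SPSS); the file names and column names are hypothetical, and the lowest in-hospital value is assumed to be taken over the follow-up measurements so that an increase in LDL-C yields a negative ΔLDL-C.

```python
# Illustrative data-preparation sketch; hypothetical files and column names.
import pandas as pd

ldl_long = pd.read_csv("ldl_measurements.csv")    # patient_id, measured_at, ldl (follow-up values)
patients = pd.read_csv("baseline_and_mrs90.csv")  # patient_id, ldl_admission, mrs_90d, covariates

# Lowest in-hospital LDL-C per patient
lowest = (
    ldl_long.groupby("patient_id")["ldl"]
    .min()
    .rename("ldl_lowest")
    .reset_index()
)

df = patients.merge(lowest, on="patient_id", how="inner")
df["delta_ldl"] = df["ldl_admission"] - df["ldl_lowest"]      # positive = decrease during hospitalization
df["poor_outcome"] = (df["mrs_90d"] > 2).astype(int)          # mRS > 2 defines poor outcome
```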
AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferaseTable 1Patient characteristics stratified by functional outcome at 90 daysVariablesOverallGood functional outcomePoor functional outcomeP value(n = 432)(n = 169)(n = 263)Age, years, mean (SD)69.2 (13.5)64.9 (13.9)72.0 (12.5)< 0.001Male, n (%)236 (54.6)109 (64.4)127 (48.3)< 0.001Hypertension, n (%)259 (60.0)95 (56.2)164 (62.4)0.203Diabetes, n (%)99 (22.9)34 (20.1)65 (24.7)0.267hyperlipemia, n (%)34 (7.9)13 (7.7)21 (8.0)0.912Atrial fibrillation, n (%)216 (50.0)63 (37.3)133 (58.2)< 0.001Valvular heart diseases, n (%)74 (17.1)25 (14.8)49 (18.6)0.301Coronary heart diseases, n (%)63 (14.6)22 (13.0)41 (15.6)0.460Previous stroke, n (%)39 (9.0)12(7.1)27 (10.3)0.263Current smoking, n (%)107 (24.8)52 (30.8)55 (20.9)0.021Alcohol consumption, n (%)100 (23.1)52 (30.8)51 (19.4)0.021Statin use before admission, n (%)39 (9.0)14 (8.3)25 (9.5)0.665Baseline NIHSS, median (Q1-Q3)14 (9–18)9 (6–14)15(12–20)< 0.001LDL-C, mmol/L, mean (SD)2.55 (0.93)2.59 (0.91)2.53 (0.95)0.496HDL, mmol/L, mean (SD)1.27 (0.42)1.25 (0. 49)1.29 (0.36)0.332TG, mmol/L, median (Q1-Q3)1.24 (0.87–1.89)1.20 (0.85–1.94)1.20 (0.88–1.81)0.566TC, mmol/L, mean (SD)4.23 (1.13)4.22 (1.13)4.23 (1.13)0.966Serum glucose, mmol/L, mean (SD)7.53 (6.49–9.18)7.96 (2.76)8.57 (3.04)0.035White blood cell, a10^9 /L, mean (SD)8.47 (3.19)8.23 (2.77)8.62 (3.43)0.207TOAST classification, n (%)< 0.001 Large-artery Atherosclerosis139 (32.2)59 (34.9)80 (30.4) Cardio-embolism192(44.4)57 (33.7)135 (51.3) Lacunar28 (6.5)22 (13.0)6 (2.3) Other13 (3.0)7 (4.1)6 (2.3)  Undetermined60 (13.9)24 (14.2)36 (13.7)Reperfusion therapy method, n (%)0.092 thrombolysis only124 (28.7)58 (34.3)66 (25.1) thrombectomy only210 (48.6)70 (41.4)140 (53.2) thrombolysis and thrombectomy98 (22.7)41 (24.3)57 (21.7)Statin use during hospitalization, n (%)263 (60.9)103 (60.9)160 (60.8)0.982Interval between stroke onset and emergency department, h, median (Q1-Q3)2.5 (1.8-3.0)2.3 (2.0–3.0)2.5 (1.7–3.5)0.610Interval between stroke onset and admission measurement of LDL-C, h, median (Q1-Q3)3.1 (2.2–3.9)3.1 (2.1–3.8)3.1 (2.1–3.8)0.685Interval between admission and follow-up measurement of LDL-C, d, median (Q1-Q3)3.1 (0.8–6.6)1.7 (0.6–6.4)3.4 (1.0-7.1)0.008aΔLDL-C, mmol/L, median (Q1-Q3)0.43 (0.08–0.94)0.28 (0.07–0.78)0.52 (0.10–1.03)0.009The lowest LDL-C during hospitalization, mmol/l, median (Q1-Q3)1.92 (1.38–2.44)2.11 (1.48–2.69)1.83 (1.34–2.32)0.002LDL-C variation, n (%)0.542decreased (ΔLDL-C > 0)357 (82.6)142 (84.0)215 (81.7)increased (ΔLDL-C ≤ 0)75 (17.4)27 (16.0)48 (18.3)SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C Patients’ inclusion flowchart. 
AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferase Patient characteristics stratified by functional outcome at 90 days SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment aΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C As shown in Fig. 1, a total of 640 AIS patients underwent reperfusion therapy in our center, there were 24.6 % (158/640) patients missed LDL-C levels or outcome follow-up, we compared included patients to these patients, and we found that there were no significant differences in demographic parameters (age and gender), vascular risk factors (diabetes, atrial fibrillation, current smoking, and coronary heart diseases), baseline NIHSS score, TOAST classification, and reperfusion therapy method between two groups, except for more patients with prior history of stroke and hypertension among the included group (Table S1 in supplementary materials). Finally, a total of 432 patients (mean age 69.2 ± 13.5 years, 54.6 % males) were included. As shown in Table 1, the mean admission LDL-C level was 2.55 ± 0.93 mmol/L, the mean lowest LDL-C level during hospitalization was 2.00 ± 0.88mmol/L, and the median ΔLDL-C was 0.43 mmol/L (IQR 0.08–0.94 mmol/L). The median interval time between stroke onset and emergency department was 2.5 h (IQR 1.8-3.0 h). The median interval time between admission and the lowest LDL-C measurement during hospitalization was 3.1 d (IQR 0.8–6.6 d). For most patients (357/432, 82.6 %), LDL-C levels decreased during hospitalization. A total of 263 (60.9 %) patients had poor 90-day functional outcomes. Fig. 1Patients’ inclusion flowchart. AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferaseTable 1Patient characteristics stratified by functional outcome at 90 daysVariablesOverallGood functional outcomePoor functional outcomeP value(n = 432)(n = 169)(n = 263)Age, years, mean (SD)69.2 (13.5)64.9 (13.9)72.0 (12.5)< 0.001Male, n (%)236 (54.6)109 (64.4)127 (48.3)< 0.001Hypertension, n (%)259 (60.0)95 (56.2)164 (62.4)0.203Diabetes, n (%)99 (22.9)34 (20.1)65 (24.7)0.267hyperlipemia, n (%)34 (7.9)13 (7.7)21 (8.0)0.912Atrial fibrillation, n (%)216 (50.0)63 (37.3)133 (58.2)< 0.001Valvular heart diseases, n (%)74 (17.1)25 (14.8)49 (18.6)0.301Coronary heart diseases, n (%)63 (14.6)22 (13.0)41 (15.6)0.460Previous stroke, n (%)39 (9.0)12(7.1)27 (10.3)0.263Current smoking, n (%)107 (24.8)52 (30.8)55 (20.9)0.021Alcohol consumption, n (%)100 (23.1)52 (30.8)51 (19.4)0.021Statin use before admission, n (%)39 (9.0)14 (8.3)25 (9.5)0.665Baseline NIHSS, median (Q1-Q3)14 (9–18)9 (6–14)15(12–20)< 0.001LDL-C, mmol/L, mean (SD)2.55 (0.93)2.59 (0.91)2.53 (0.95)0.496HDL, mmol/L, mean (SD)1.27 (0.42)1.25 (0. 
49)1.29 (0.36)0.332TG, mmol/L, median (Q1-Q3)1.24 (0.87–1.89)1.20 (0.85–1.94)1.20 (0.88–1.81)0.566TC, mmol/L, mean (SD)4.23 (1.13)4.22 (1.13)4.23 (1.13)0.966Serum glucose, mmol/L, mean (SD)7.53 (6.49–9.18)7.96 (2.76)8.57 (3.04)0.035White blood cell, a10^9 /L, mean (SD)8.47 (3.19)8.23 (2.77)8.62 (3.43)0.207TOAST classification, n (%)< 0.001 Large-artery Atherosclerosis139 (32.2)59 (34.9)80 (30.4) Cardio-embolism192(44.4)57 (33.7)135 (51.3) Lacunar28 (6.5)22 (13.0)6 (2.3) Other13 (3.0)7 (4.1)6 (2.3)  Undetermined60 (13.9)24 (14.2)36 (13.7)Reperfusion therapy method, n (%)0.092 thrombolysis only124 (28.7)58 (34.3)66 (25.1) thrombectomy only210 (48.6)70 (41.4)140 (53.2) thrombolysis and thrombectomy98 (22.7)41 (24.3)57 (21.7)Statin use during hospitalization, n (%)263 (60.9)103 (60.9)160 (60.8)0.982Interval between stroke onset and emergency department, h, median (Q1-Q3)2.5 (1.8-3.0)2.3 (2.0–3.0)2.5 (1.7–3.5)0.610Interval between stroke onset and admission measurement of LDL-C, h, median (Q1-Q3)3.1 (2.2–3.9)3.1 (2.1–3.8)3.1 (2.1–3.8)0.685Interval between admission and follow-up measurement of LDL-C, d, median (Q1-Q3)3.1 (0.8–6.6)1.7 (0.6–6.4)3.4 (1.0-7.1)0.008aΔLDL-C, mmol/L, median (Q1-Q3)0.43 (0.08–0.94)0.28 (0.07–0.78)0.52 (0.10–1.03)0.009The lowest LDL-C during hospitalization, mmol/l, median (Q1-Q3)1.92 (1.38–2.44)2.11 (1.48–2.69)1.83 (1.34–2.32)0.002LDL-C variation, n (%)0.542decreased (ΔLDL-C > 0)357 (82.6)142 (84.0)215 (81.7)increased (ΔLDL-C ≤ 0)75 (17.4)27 (16.0)48 (18.3)SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C Patients’ inclusion flowchart. AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferase Patient characteristics stratified by functional outcome at 90 days SD standard deviation, LDL-C low-density lipoprotein cholesterol, ΔLDL-C the change of low-density lipoprotein cholesterol during hospitalization, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment aΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C Association between admission LDL-C and outcome Age, sex, baseline NIHSS score, atrial fibrillation, current smoking, drinking consumption, TOAST classification, serum glucose, reperfusion therapy method, and the interval between admission LDL-C and the lowest LDL-C during hospitalization were significantly correlated with poor outcomes in univariate analysis (Table 2). There was no significant association between admission LDL-C level and functional outcome at 90 days when LDL-C level was regarded as a continuous variable (OR 1.03, 95 % CI 0.81–1.31, p = 0.802), or categorical variable (T3 vs. T1, OR 0.97, 95 % CI 0.55–1.71, p = 0.919, Table 3). 
Table 2Univariable logistic regression analysis of variables associated with poor functional outcomeVariableUnadjusted odds ratio (95% confidence interval)p-valueAge1.04 (1.03, 1.06)<0.001Male0.51 (0.35, 0.77)0.001Hypertension1.29 (0.87, 1.91)0.204Diabetes1.30 (0.82, 2.08)0.268hyperlipemia1.04 (0.51, 2.14)0.912Atrial fibrillation2.34 (1.57, 3.48)<0.001Valvular heart diseases1.32 (0.78, 2.23)0.302coronary heart diseases1.23 (0.71, 2.16)0.460Statin use before admission1.16 (0.59, 2.31)0.666Previous stroke1.50 (0.74, 3.04)0.265Current smoking0.60 (0.38, 0.93)0.021Alcohol consumption0.59 (0.38, 0.93)0.022Baseline NIHSS score1.17 (1.12, 1.21)<0.001HDL upon admission1.28 (0.78, 2.10)0.334TG upon admission0.92 (0.78, 1.09)0.333TC upon admission1.00 (0.85, 1.19)0.966Serum glucose1.08 (1.00, 1.17)0.038White blood cell1.04 (0.98, 1.11)0.208TOAST classificationLarge-artery AtherosclerosisReferenceCardio-embolism1.75 (1.11, 2.76)0.017Lacunar0.20 (0.08, 0.53)0.001Other0.63 (0.20, 1.98)0.431Undetermined1.11 (0.60, 2.05)0.748Reperfusion therapy methodthrombolysis onlyReferencethrombectomy only1.76 (1.12, 2.77)0.015thrombolysis and thrombectomy1.22 (0.72, 2.09)0.463Statin use during hospitalization0.98 (0.67, 1.48)0.995Interval between stroke onset and emergency department1.05 (0.89, 1.24)0.546Interval between stroke onset and admission measurement of LDL-C1.04 (0.89, 1.21)0.616Interval between admission and follow-up measurement of LDL-C1.06 (1.01, 1.12)0.023NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentTable 3Multivariate logistic regression analysis between admission LDL-C and poor functional outcomeaVariableNon-adjusted modelAdjusted modelAdmission LDL-C, mmol/L0.93 (0.76, 1.14), 0.4961.03 (0.81, 1.31), 0.802Admission LDL-C tertiles, mmol/L T1(0.74–2.15)ReferenceReference T2(2.16–2.80)1.31 (0.81, 2.12), 0.2711.62 (0.92, 2.85), 0.096 T3(2.81–9.61)0.82 (0.51, 1.31), 0.4040.97 (0.55, 1.71), 0.919Adjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, and reperfusion therapy methodLDL-C low-Density Lipoprotein Cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaResults for each model are presented as odds ratio (95 % confidence interval), p-value Univariable logistic regression analysis of variables associated with poor functional outcome NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment Multivariate logistic regression analysis between admission LDL-C and poor functional outcomea Adjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, and reperfusion therapy method LDL-C low-Density Lipoprotein Cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment aResults for each model are presented as odds ratio (95 % confidence interval), p-value Age, sex, baseline NIHSS score, atrial fibrillation, current smoking, drinking consumption, TOAST classification, serum glucose, reperfusion therapy method, and the interval between admission LDL-C and the lowest LDL-C during hospitalization were significantly correlated with poor outcomes in univariate analysis (Table 2). 
There was no significant association between admission LDL-C level and functional outcome at 90 days when LDL-C level was regarded as a continuous variable (OR 1.03, 95 % CI 0.81–1.31, p = 0.802), or categorical variable (T3 vs. T1, OR 0.97, 95 % CI 0.55–1.71, p = 0.919, Table 3). Table 2Univariable logistic regression analysis of variables associated with poor functional outcomeVariableUnadjusted odds ratio (95% confidence interval)p-valueAge1.04 (1.03, 1.06)<0.001Male0.51 (0.35, 0.77)0.001Hypertension1.29 (0.87, 1.91)0.204Diabetes1.30 (0.82, 2.08)0.268hyperlipemia1.04 (0.51, 2.14)0.912Atrial fibrillation2.34 (1.57, 3.48)<0.001Valvular heart diseases1.32 (0.78, 2.23)0.302coronary heart diseases1.23 (0.71, 2.16)0.460Statin use before admission1.16 (0.59, 2.31)0.666Previous stroke1.50 (0.74, 3.04)0.265Current smoking0.60 (0.38, 0.93)0.021Alcohol consumption0.59 (0.38, 0.93)0.022Baseline NIHSS score1.17 (1.12, 1.21)<0.001HDL upon admission1.28 (0.78, 2.10)0.334TG upon admission0.92 (0.78, 1.09)0.333TC upon admission1.00 (0.85, 1.19)0.966Serum glucose1.08 (1.00, 1.17)0.038White blood cell1.04 (0.98, 1.11)0.208TOAST classificationLarge-artery AtherosclerosisReferenceCardio-embolism1.75 (1.11, 2.76)0.017Lacunar0.20 (0.08, 0.53)0.001Other0.63 (0.20, 1.98)0.431Undetermined1.11 (0.60, 2.05)0.748Reperfusion therapy methodthrombolysis onlyReferencethrombectomy only1.76 (1.12, 2.77)0.015thrombolysis and thrombectomy1.22 (0.72, 2.09)0.463Statin use during hospitalization0.98 (0.67, 1.48)0.995Interval between stroke onset and emergency department1.05 (0.89, 1.24)0.546Interval between stroke onset and admission measurement of LDL-C1.04 (0.89, 1.21)0.616Interval between admission and follow-up measurement of LDL-C1.06 (1.01, 1.12)0.023NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentTable 3Multivariate logistic regression analysis between admission LDL-C and poor functional outcomeaVariableNon-adjusted modelAdjusted modelAdmission LDL-C, mmol/L0.93 (0.76, 1.14), 0.4961.03 (0.81, 1.31), 0.802Admission LDL-C tertiles, mmol/L T1(0.74–2.15)ReferenceReference T2(2.16–2.80)1.31 (0.81, 2.12), 0.2711.62 (0.92, 2.85), 0.096 T3(2.81–9.61)0.82 (0.51, 1.31), 0.4040.97 (0.55, 1.71), 0.919Adjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, and reperfusion therapy methodLDL-C low-Density Lipoprotein Cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke TreatmentaResults for each model are presented as odds ratio (95 % confidence interval), p-value Univariable logistic regression analysis of variables associated with poor functional outcome NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment Multivariate logistic regression analysis between admission LDL-C and poor functional outcomea Adjusted model: adjusted for age, sex, atrial fibrillation, Current smoking, drinking consumption, baseline NIHSS score, Serum glucose, TOAST classification, and reperfusion therapy method LDL-C low-Density Lipoprotein Cholesterol, NIHSS National Institutes of Health Stroke Scale, TOAST the Trial of Org 10,172 in Acute Stroke Treatment aResults for each model are presented as odds ratio (95 % confidence interval), p-value Association between ΔLDL-C and outcome When ΔLDL-C was regarded as a continuous variable, ΔLDL-C was significantly associated with poor functional outcome 
at 90 days in univariate analysis (OR 1.55, 95 % CI 1.12–2.15, p = 0.009, Table 4). After adjusting for confounding variables, the association between ΔLDL-C and poor outcome remained significant (OR 1.80, 95 % CI 1.12–2.91, p = 0.016). When ΔLDL-C was regarded as a categorical variable, patients in the upper tertile (T3, 0.80–3.98 mmol/L) had a higher risk of poor outcome than those in the lower tertile (T1, −0.91 to 0.13 mmol/L) in univariate analysis (OR 1.92, 95 % CI 1.15–3.20, p = 0.012). After adjustment, this association also remained significant (OR 2.56, 95 % CI 1.22–5.36, p = 0.013). The risk of poor functional outcome increased significantly with ΔLDL-C tertile (P-trend = 0.010).
Table 4. Multivariate logistic regression analysis between ΔLDL-C and poor functional outcome; results for each model are presented as odds ratio (95 % confidence interval), p-value
ΔLDL-C, mmol/L: non-adjusted 1.55 (1.12, 2.15), p = 0.009; adjusted model 1, 1.79 (1.11, 2.89), p = 0.017; adjusted model 2, 1.80 (1.12, 2.91), p = 0.016
ΔLDL-C tertiles, mmol/L:
 T1 (−0.91 to 0.13): reference in all models
 T2 (0.14–0.79): non-adjusted 1.03 (0.65, 1.64), p = 0.888; adjusted model 1, 1.23 (0.70, 2.18), p = 0.473; adjusted model 2, 1.24 (0.70, 2.19), p = 0.470
 T3 (0.80–3.98): non-adjusted 1.92 (1.15, 3.20), p = 0.012; adjusted model 1, 2.56 (1.22, 5.35), p = 0.013; adjusted model 2, 2.56 (1.22, 5.36), p = 0.013
P-trend: non-adjusted 0.007; adjusted model 1, 0.010; adjusted model 2, 0.010
Adjusted model 1: adjusted for age, sex, atrial fibrillation, current smoking, alcohol consumption, baseline NIHSS score, serum glucose, TOAST classification, reperfusion therapy method, and the interval between admission and follow-up measurement of LDL-C. Adjusted model 2: adjusted for the variables in model 1 plus statin use during hospitalization. LDL-C, low-density lipoprotein cholesterol; ΔLDL-C, change in low-density lipoprotein cholesterol during hospitalization; NIHSS, National Institutes of Health Stroke Scale; TOAST, the Trial of Org 10172 in Acute Stroke Treatment.
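The tertile-based regression reported above can be reproduced with standard tools. The snippet below is a minimal, hypothetical sketch (the file name and column names such as dldl, poor_outcome and age are assumed, not taken from the study's data) showing how ΔLDL-C tertiles and a covariate-adjusted logistic model could be fitted in Python with pandas and statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-patient data set with one row per patient.
df = pd.read_csv("patients.csv")

# Change in LDL-C: admission value minus the lowest in-hospital value.
df["dldl"] = df["ldl_admission"] - df["ldl_lowest"]

# Split ΔLDL-C into tertiles (T1 = smallest change, T3 = largest decrease).
df["dldl_tertile"] = pd.qcut(df["dldl"], q=3, labels=["T1", "T2", "T3"])

# Covariate-adjusted logistic regression (analogue of adjusted model 2).
formula = ("poor_outcome ~ C(dldl_tertile, Treatment('T1')) + age + C(sex) "
           "+ C(atrial_fibrillation) + C(smoking) + C(alcohol) + nihss "
           "+ glucose + C(toast) + C(reperfusion_method) + ldl_interval "
           "+ C(statin_inhospital)")
model = smf.logit(formula, data=df).fit()

# Odds ratios with 95% confidence intervals.
or_table = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```

A trend test analogous to the reported P-trend can be obtained by entering the tertile as an ordinal numeric term (1, 2, 3) instead of a categorical one.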
Baseline characteristics and outcome: As shown in Fig. 1, a total of 640 AIS patients underwent reperfusion therapy in our center; 24.6 % (158/640) of them were excluded because of missing LDL-C levels or missing outcome follow-up. Comparing included with excluded patients, there were no significant differences in demographic parameters (age and gender), vascular risk factors (diabetes, atrial fibrillation, current smoking, and coronary heart disease), baseline NIHSS score, TOAST classification, or reperfusion therapy method, except that prior stroke and hypertension were more common among the included patients (Table S1 in the supplementary materials). Finally, a total of 432 patients (mean age 69.2 ± 13.5 years, 54.6 % males) were included. As shown in Table 1, the mean admission LDL-C level was 2.55 ± 0.93 mmol/L, the mean lowest LDL-C level during hospitalization was 2.00 ± 0.88 mmol/L, and the median ΔLDL-C was 0.43 mmol/L (IQR 0.08–0.94 mmol/L). The median interval from stroke onset to arrival at the emergency department was 2.5 h (IQR 1.8–3.0 h). The median interval between admission and the lowest LDL-C measurement during hospitalization was 3.1 d (IQR 0.8–6.6 d). For most patients (357/432, 82.6 %), LDL-C levels decreased during hospitalization. A total of 263 (60.9 %) patients had a poor 90-day functional outcome. Fig. 1 Patients' inclusion flowchart.
AIS, acute ischemic stroke; LDL-C, low-density lipoprotein cholesterol; mRS, modified Rankin Scale; ALT, alanine aminotransferase; AST, aspartate aminotransferase.
Table 1. Patient characteristics stratified by functional outcome at 90 days. Values are given as overall (n = 432); good functional outcome (n = 169); poor functional outcome (n = 263); p-value.
Age, years, mean (SD): 69.2 (13.5); 64.9 (13.9); 72.0 (12.5); p < 0.001
Male, n (%): 236 (54.6); 109 (64.4); 127 (48.3); p < 0.001
Hypertension, n (%): 259 (60.0); 95 (56.2); 164 (62.4); p = 0.203
Diabetes, n (%): 99 (22.9); 34 (20.1); 65 (24.7); p = 0.267
Hyperlipidemia, n (%): 34 (7.9); 13 (7.7); 21 (8.0); p = 0.912
Atrial fibrillation, n (%): 216 (50.0); 63 (37.3); 133 (58.2); p < 0.001
Valvular heart disease, n (%): 74 (17.1); 25 (14.8); 49 (18.6); p = 0.301
Coronary heart disease, n (%): 63 (14.6); 22 (13.0); 41 (15.6); p = 0.460
Previous stroke, n (%): 39 (9.0); 12 (7.1); 27 (10.3); p = 0.263
Current smoking, n (%): 107 (24.8); 52 (30.8); 55 (20.9); p = 0.021
Alcohol consumption, n (%): 100 (23.1); 52 (30.8); 51 (19.4); p = 0.021
Statin use before admission, n (%): 39 (9.0); 14 (8.3); 25 (9.5); p = 0.665
Baseline NIHSS, median (Q1–Q3): 14 (9–18); 9 (6–14); 15 (12–20); p < 0.001
LDL-C, mmol/L, mean (SD): 2.55 (0.93); 2.59 (0.91); 2.53 (0.95); p = 0.496
HDL, mmol/L, mean (SD): 1.27 (0.42); 1.25 (0.49); 1.29 (0.36); p = 0.332
TG, mmol/L, median (Q1–Q3): 1.24 (0.87–1.89); 1.20 (0.85–1.94); 1.20 (0.88–1.81); p = 0.566
TC, mmol/L, mean (SD): 4.23 (1.13); 4.22 (1.13); 4.23 (1.13); p = 0.966
Serum glucose, mmol/L: 7.53 (6.49–9.18); 7.96 (2.76); 8.57 (3.04); p = 0.035
White blood cell, ×10^9/L, mean (SD): 8.47 (3.19); 8.23 (2.77); 8.62 (3.43); p = 0.207
TOAST classification, n (%), p < 0.001: large-artery atherosclerosis 139 (32.2), 59 (34.9), 80 (30.4); cardio-embolism 192 (44.4), 57 (33.7), 135 (51.3); lacunar 28 (6.5), 22 (13.0), 6 (2.3); other 13 (3.0), 7 (4.1), 6 (2.3); undetermined 60 (13.9), 24 (14.2), 36 (13.7)
Reperfusion therapy method, n (%), p = 0.092: thrombolysis only 124 (28.7), 58 (34.3), 66 (25.1); thrombectomy only 210 (48.6), 70 (41.4), 140 (53.2); thrombolysis and thrombectomy 98 (22.7), 41 (24.3), 57 (21.7)
Statin use during hospitalization, n (%): 263 (60.9); 103 (60.9); 160 (60.8); p = 0.982
Interval between stroke onset and emergency department, h, median (Q1–Q3): 2.5 (1.8–3.0); 2.3 (2.0–3.0); 2.5 (1.7–3.5); p = 0.610
Interval between stroke onset and admission measurement of LDL-C, h, median (Q1–Q3): 3.1 (2.2–3.9); 3.1 (2.1–3.8); 3.1 (2.1–3.8); p = 0.685
Interval between admission and follow-up measurement of LDL-C, d, median (Q1–Q3): 3.1 (0.8–6.6); 1.7 (0.6–6.4); 3.4 (1.0–7.1); p = 0.008
ΔLDL-C, mmol/L, median (Q1–Q3)a: 0.43 (0.08–0.94); 0.28 (0.07–0.78); 0.52 (0.10–1.03); p = 0.009
Lowest LDL-C during hospitalization, mmol/L, median (Q1–Q3): 1.92 (1.38–2.44); 2.11 (1.48–2.69); 1.83 (1.34–2.32); p = 0.002
LDL-C variation, n (%), p = 0.542: decreased (ΔLDL-C > 0) 357 (82.6), 142 (84.0), 215 (81.7); increased (ΔLDL-C ≤ 0) 75 (17.4), 27 (16.0), 48 (18.3)
SD, standard deviation; LDL-C, low-density lipoprotein cholesterol; ΔLDL-C, change in low-density lipoprotein cholesterol during hospitalization; NIHSS, National Institutes of Health Stroke Scale; TOAST, the Trial of Org 10172 in Acute Stroke Treatment. aΔLDL-C was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C.
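For readers who want to reproduce a baseline table of this kind, the sketch below shows one conventional way to generate the group comparisons (t-test or Mann-Whitney U for continuous variables, chi-square for categorical ones). It is a generic illustration with assumed column names, not the authors' actual analysis code.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("patients.csv")  # hypothetical file with one row per patient
good = df[df["mrs_90d"] <= 2]     # good outcome: mRS 0-2 at 90 days
poor = df[df["mrs_90d"] > 2]      # poor outcome: mRS > 2 at 90 days

# Continuous, roughly normal variable (e.g., age): Student's t-test.
t_stat, p_age = stats.ttest_ind(good["age"], poor["age"])

# Skewed continuous variable (e.g., baseline NIHSS): Mann-Whitney U test.
u_stat, p_nihss = stats.mannwhitneyu(good["nihss"], poor["nihss"])

# Categorical variable (e.g., atrial fibrillation): chi-square test.
table = pd.crosstab(df["atrial_fibrillation"], df["mrs_90d"] > 2)
chi2, p_af, dof, _ = stats.chi2_contingency(table)

print(f"age p={p_age:.3f}, NIHSS p={p_nihss:.3f}, AF p={p_af:.3f}")
```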
Association between admission LDL-C and outcome: Age, sex, baseline NIHSS score, atrial fibrillation, current smoking, alcohol consumption, TOAST classification, serum glucose, reperfusion therapy method, and the interval between the admission LDL-C measurement and the lowest LDL-C measurement during hospitalization were significantly correlated with poor outcome in univariate analysis (Table 2). There was no significant association between admission LDL-C level and functional outcome at 90 days when LDL-C level was regarded as a continuous variable (OR 1.03, 95 % CI 0.81–1.31, p = 0.802) or as a categorical variable (T3 vs. T1: OR 0.97, 95 % CI 0.55–1.71, p = 0.919; Table 3).
Association between ΔLDL-C and outcome: When ΔLDL-C was regarded as a continuous variable, it was significantly associated with poor functional outcome at 90 days in univariate analysis (OR 1.55, 95 % CI 1.12–2.15, p = 0.009, Table 4). After adjusting for confounding variables, the association between ΔLDL-C and poor outcome remained significant (OR 1.80, 95 % CI 1.12–2.91, p = 0.016). When ΔLDL-C was regarded as a categorical variable, patients in the upper tertile (T3, 0.80–3.98 mmol/L) had a higher risk of poor outcome than those in the lower tertile (T1, −0.91 to 0.13 mmol/L) in univariate analysis (OR 1.92, 95 % CI 1.15–3.20, p = 0.012). After adjustment, this association also remained significant (OR 2.56, 95 % CI 1.22–5.36, p = 0.013). The risk of poor functional outcome increased significantly with ΔLDL-C tertile (P-trend = 0.010).
Discussion: We found that admission LDL-C was not associated with functional outcome at 90 days. In most AIS patients who underwent reperfusion therapy, LDL-C decreased during hospitalization, and the decrease in LDL-C during hospitalization was associated with poor 90-day functional outcome. We suggest that the magnitude of the in-hospital decrease in LDL-C may reflect the severity of oxidative stress generated in the acute phase of AIS by ischemia and/or brain tissue reperfusion, which might explain its positive association with poor 90-day functional outcome in AIS patients treated with reperfusion therapy. Previous reports on the prognostic significance of LDL-C for outcomes in AIS patients are inconsistent [1–11]. Some studies found that a higher LDL-C level was associated with poor outcomes [1–3], some found that a lower LDL-C level was associated with poor outcomes [4, 5], and others failed to find a significant association between LDL-C and outcome [6–11]. The inconsistent results might be explained by differences in sample size, patient selection, potential confounders, outcome assessment, and the timing of LDL-C measurement. Several studies suggested that LDL is oxidized into oxLDL under oxidative stress [19], and oxLDL may exacerbate free-radical damage in the acute phase of AIS [20, 32–35]. Since AIS patients receiving reperfusion therapy can suffer enhanced oxidative injury [12], focusing on these patients might clarify the role of LDL-C. However, previous studies on the association between LDL-C level and outcomes in AIS patients who underwent reperfusion therapy are scarce, and their conclusions have not reached a consensus [5–8]. Previous studies of AIS patients treated with thrombolysis failed to find an association between baseline LDL-C and outcome [6–8], which is in line with our findings. Recently, a retrospective study involving 174 AIS patients treated with endovascular thrombectomy (EVT) found that a higher admission LDL-C level was independently associated with a favorable functional outcome at 3 months [5]. The conflicting results in AIS patients with reperfusion therapy may be due to reliance on a single measurement of LDL-C. Previous studies suggested that LDL-C levels tend to decrease during the acute phase of AIS [14–18]. However, only one study has investigated the association between LDL-C change and outcomes in AIS patients [25]. That multicenter study of 676 AIS patients found that an increase in LDL-C was associated with poor outcomes at discharge [25]. In that study, LDL-C levels decreased during hospitalization in most patients (566/676, 83.7 %), but the association between decreased LDL-C and outcome was not examined further. In the current study, LDL-C levels likewise decreased during hospitalization in most patients (357/432, 82.6 %), and a decreased LDL-C level was significantly associated with poor 90-day functional outcome in AIS patients with reperfusion therapy. Although the underlying mechanism of the association between ΔLDL-C and outcome remains unclear, it could be explained as follows: LDL-C is oxidized into oxLDL under oxidative stress [19], and oxLDL is a major marker of oxidative stress [20, 32–34]. Previous studies found that high oxLDL is positively associated with poor functional outcomes in AIS patients [21–23].
Therefore, we speculate that an increase in oxLDL may accompany the decrease in LDL-C during hospitalization: the extent of the LDL-C decrease may reflect the degree of oxLDL increase, which may indicate the severity of oxidative stress and contribute to poor functional outcomes. Although the specific mechanism behind the in-hospital change in LDL-C remains unclear, LDL-C is oxidized into oxLDL during the oxidative challenge [19], which contributes to free-radical damage [20, 32–35] and poor outcome [21–23]. A study of 3019 AIS/TIA patients from the Clopidogrel in High-Risk Patients with Acute Non-Disabling Cerebrovascular Events (CHANCE) trial found that higher levels of ox-LDL and ox-LDL/LDL significantly increased the risk of poor functional outcome in AIS patients [21], which may support our hypothesis. Of course, more high-quality studies are needed to verify this hypothesis. From a clinical point of view, since LDL-C is a widely available and frequently measured biomarker, our findings might help clinicians identify AIS patients undergoing reperfusion therapy who are at risk of a poor 90-day functional outcome and guide therapy accordingly, without imposing any additional financial burden on patients' families. Some limitations should be noted. Firstly, this was a retrospective study and we could not measure oxLDL; therefore, we could not confirm that the change in LDL-C levels was due to free-radical damage. However, a previous study found that higher levels of oxLDL and ox-LDL/LDL-C significantly increased the risk of poor outcome [21], which supports our hypothesis. Secondly, we could not measure LDL-C at a fixed time point for each patient, and the results might vary with different testing times [16]. However, in the multivariate analysis we adjusted for the interval between admission and the follow-up measurement of LDL-C during hospitalization (model 1), and our results remained significant. Thirdly, statin therapy could have influenced LDL-C levels. A randomized controlled trial of 60 AIS patients found that LDL-C decreased significantly in statin-treated patients on day 7 and at 3 months [17]. In our study, the median interval between admission and the lowest LDL-C measurement was 3.1 d (IQR 0.8–6.6 d). In addition, there was no significant association between statin use and outcome in univariate analysis, and our findings remained significant after further adjusting for this variable in model 2. Therefore, the influence of statin use is likely limited in our study. Fourthly, we measured LDL-C in a non-fasting state, which might influence the results; however, a meta-analysis of 68 studies found that the association between LDL-C and ischemic stroke persisted even when LDL-C was measured in non-fasting patients [36]. Finally, patients did not undergo computed tomographic angiography after reperfusion therapy at our hospital, so we could not evaluate the status of their blood vessels, which might influence our results. Conclusions: There was no significant association between admission LDL-C level and outcomes in AIS patients who underwent reperfusion therapy, while the decrease in LDL-C level during hospitalization was positively associated with poor functional outcomes at 90 days. Supplementary Information: Additional file 1: Table S1. Patient characteristics of included and excluded patients.
Background: Low-density lipoprotein cholesterol (LDL-C) can increase cardiovascular risk. However, the association between LDL-C change and functional outcomes in acute ischemic stroke (AIS) patients who underwent reperfusion therapy remains unclear. Methods: Patients who received reperfusion therapy were consecutively enrolled. LDL-C measurement was conducted at the emergency department immediately after admission and during hospitalization. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C. Poor functional outcome was defined as modified Rankin Scale (mRS) > 2 at 90 days. Results: A total of 432 patients were enrolled (mean age 69.2 ± 13.5 years, 54.6 % males). The mean LDL-C level at admission was 2.55 ± 0.93 mmol/L. The median ΔLDL-C level was 0.43 mmol/L (IQR 0.08-0.94 mmol/L). A total of 263 (60.9 %) patients had poor functional outcomes at 90 days. There was no significant association between admission LDL-C level and functional outcome (OR 0.99, 95 % CI 0.77-1.27, p = 0.904). ΔLDL-C level was positively associated with poor functional outcome (OR 1.80, 95 % CI 1.12-2.91, p = 0.016). When patients were divided into tertiles according to ΔLDL-C, those in the upper tertile (T3, 0.80-3.98 mmol/L) had a higher risk of poor functional outcome than patients in the lower tertile (T1, -0.91 to 0.13 mmol/L) (OR 2.56, 95 % CI 1.22-5.36, p = 0.013). The risk of poor functional outcome increased significantly with ΔLDL-C tertile (P-trend = 0.010). Conclusions: In AIS patients who underwent reperfusion therapy, the decrease in LDL-C level during hospitalization was significantly associated with poor functional outcomes at 90 days.
Background: The association between low-density lipoprotein cholesterol (LDL-C) and outcomes in acute ischemic stroke (AIS) patients remains controversial [1–11]. The inconsistent results might be explained by oxidative stress. Since AIS patients may suffer from enhanced free-radical damage after reperfusion therapy [12, 13], focusing on patients with reperfusion therapy might clarify the role of LDL-C. However, previous studies on the association between LDL-C level and outcomes in AIS patients with reperfusion therapy failed to reach a consensus [5–8]. The conflicting conclusions may be due to the single measurement of LDL-C. Some studies found that serum LDL-C levels decreased after the onset of AIS [14–18]. Under oxidative stress, low-density lipoprotein (LDL) gets oxidized into oxidized low-density lipoprotein (oxLDL) [19]. The extent of decreased LDL-C may reflect the degree of increased oxLDL, which may indicate the severity of oxidative stress [20] and is positively associated with poor functional outcomes [21–24]. Therefore, a more sensitive marker may be the change in serum LDL-C during hospitalization [25]. However, there is uncertainty on the association between LDL-C change and outcomes in patients with reperfusion therapy. In this study, we aimed to explore the association between changes in LDL-C levels and functional outcomes in these patients. Conclusions: There was no significant association between admission LDL-C level and outcomes in AIS patients who underwent reperfusion therapy, while the decrease in LDL-C level during hospitalization was positively associated with poor functional outcomes at 90 days.
Background: Low-density lipoprotein cholesterol (LDL-C) can increase cardiovascular risk. However, the association between LDL-C change and functional outcomes in acute ischemic stroke (AIS) patients who underwent reperfusion therapy remains unclear. Methods: Patients who received reperfusion therapy were consecutively enrolled. LDL-C measurement was conducted at the emergency department immediately after admission and during hospitalization. The change of LDL-C level (ΔLDL-C) was calculated by subtracting the lowest LDL-C among all measurements during hospitalization from the admission LDL-C. Poor functional outcome was defined as modified Rankin Scale (mRS) > 2 at 90 days. Results: A total of 432 patients were enrolled (mean age 69.2 ± 13.5 years, 54.6 % males). The mean LDL-C level at admission was 2.55 ± 0.93 mmol/L. The median ΔLDL-C level was 0.43 mmol/L (IQR 0.08-0.94 mmol/L). A total of 263 (60.9 %) patients had poor functional outcomes at 90 days. There was no significant association between admission LDL-C level and functional outcome (OR 0.99, 95 % CI 0.77-1.27, p = 0.904). ΔLDL-C level was positively associated with poor functional outcome (OR 1.80, 95 % CI 1.12-2.91, p = 0.016). When patients were divided into tertiles according to ΔLDL-C, those in the upper tertile (T3, 0.80-3.98 mmol/L) had a higher risk of poor functional outcome than patients in the lower tertile (T1, -0.91 to 0.13 mmol/L) (OR 2.56, 95 % CI 1.22-5.36, p = 0.013). The risk of poor functional outcome increased significantly with ΔLDL-C tertile (P-trend = 0.010). Conclusions: In AIS patients who underwent reperfusion therapy, the decrease in LDL-C level during hospitalization was significantly associated with poor functional outcomes at 90 days.
10,687
398
[ 267, 1428, 193, 162, 61, 291, 1087, 641, 542, 34 ]
14
[ "ldl", "stroke", "patients", "δldl", "admission", "functional", "hospitalization", "outcome", "poor", "nihss" ]
[ "lipoprotein ldl gets", "infusion ldl levels", "cholesterol ldl outcomes", "ischemic stroke ldl", "reperfusion therapy ldl" ]
null
[CONTENT] Low-density lipoprotein cholesterol | Change | Acute ischemic stroke | Reperfusion therapy | Outcome [SUMMARY]
null
[CONTENT] Low-density lipoprotein cholesterol | Change | Acute ischemic stroke | Reperfusion therapy | Outcome [SUMMARY]
[CONTENT] Low-density lipoprotein cholesterol | Change | Acute ischemic stroke | Reperfusion therapy | Outcome [SUMMARY]
[CONTENT] Low-density lipoprotein cholesterol | Change | Acute ischemic stroke | Reperfusion therapy | Outcome [SUMMARY]
[CONTENT] Low-density lipoprotein cholesterol | Change | Acute ischemic stroke | Reperfusion therapy | Outcome [SUMMARY]
[CONTENT] Aged | Brain Ischemia | Cholesterol, LDL | Female | Humans | Ischemic Stroke | Male | Reperfusion | Stroke [SUMMARY]
null
[CONTENT] Aged | Brain Ischemia | Cholesterol, LDL | Female | Humans | Ischemic Stroke | Male | Reperfusion | Stroke [SUMMARY]
[CONTENT] Aged | Brain Ischemia | Cholesterol, LDL | Female | Humans | Ischemic Stroke | Male | Reperfusion | Stroke [SUMMARY]
[CONTENT] Aged | Brain Ischemia | Cholesterol, LDL | Female | Humans | Ischemic Stroke | Male | Reperfusion | Stroke [SUMMARY]
[CONTENT] Aged | Brain Ischemia | Cholesterol, LDL | Female | Humans | Ischemic Stroke | Male | Reperfusion | Stroke [SUMMARY]
[CONTENT] lipoprotein ldl gets | infusion ldl levels | cholesterol ldl outcomes | ischemic stroke ldl | reperfusion therapy ldl [SUMMARY]
null
[CONTENT] lipoprotein ldl gets | infusion ldl levels | cholesterol ldl outcomes | ischemic stroke ldl | reperfusion therapy ldl [SUMMARY]
[CONTENT] lipoprotein ldl gets | infusion ldl levels | cholesterol ldl outcomes | ischemic stroke ldl | reperfusion therapy ldl [SUMMARY]
[CONTENT] lipoprotein ldl gets | infusion ldl levels | cholesterol ldl outcomes | ischemic stroke ldl | reperfusion therapy ldl [SUMMARY]
[CONTENT] lipoprotein ldl gets | infusion ldl levels | cholesterol ldl outcomes | ischemic stroke ldl | reperfusion therapy ldl [SUMMARY]
[CONTENT] ldl | stroke | patients | δldl | admission | functional | hospitalization | outcome | poor | nihss [SUMMARY]
null
[CONTENT] ldl | stroke | patients | δldl | admission | functional | hospitalization | outcome | poor | nihss [SUMMARY]
[CONTENT] ldl | stroke | patients | δldl | admission | functional | hospitalization | outcome | poor | nihss [SUMMARY]
[CONTENT] ldl | stroke | patients | δldl | admission | functional | hospitalization | outcome | poor | nihss [SUMMARY]
[CONTENT] ldl | stroke | patients | δldl | admission | functional | hospitalization | outcome | poor | nihss [SUMMARY]
[CONTENT] ldl | patients reperfusion therapy | stress | patients reperfusion | oxidative | oxidative stress | patients | association | outcomes | ais [SUMMARY]
null
[CONTENT] stroke | ldl | mmol | model | δldl | 13 | nihss | toast | mean | adjusted [SUMMARY]
[CONTENT] ldl level | therapy decrease ldl level | underwent reperfusion therapy decrease | therapy decrease ldl | therapy decrease | level hospitalization positively | level hospitalization positively associated | ldl level hospitalization positively | admission ldl level outcomes | hospitalization positively [SUMMARY]
[CONTENT] ldl | patients | stroke | δldl | functional | outcomes | ais | admission | poor | hospitalization [SUMMARY]
[CONTENT] ldl | patients | stroke | δldl | functional | outcomes | ais | admission | poor | hospitalization [SUMMARY]
[CONTENT] ||| AIS [SUMMARY]
null
[CONTENT] 432 | age 69.2 | 13.5 years | 54.6 % ||| 2.55 ± | 0.93 ||| 0.43 ||| 263 | 60.9 % | 90 days ||| 0.99 | 95 % | CI | 0.77-1.27 | 0.904 ||| 1.80 | 95 % | 1,12-2.91 | 0.016 ||| T3 | 0.80 | T1 | 2.56 | 95 % | CI | 1.22-5.36 | 0.013 ||| 0.010 [SUMMARY]
[CONTENT] AIS | 90 days [SUMMARY]
[CONTENT] ||| AIS ||| ||| ||| LDL-C. Poor | Rankin Scale | 2 | 90 days ||| ||| 432 | age 69.2 | 13.5 years | 54.6 % ||| 2.55 ± | 0.93 ||| 0.43 ||| 263 | 60.9 % | 90 days ||| 0.99 | 95 % | CI | 0.77-1.27 | 0.904 ||| 1.80 | 95 % | 1,12-2.91 | 0.016 ||| T3 | 0.80 | T1 | 2.56 | 95 % | CI | 1.22-5.36 | 0.013 ||| 0.010 ||| AIS | 90 days [SUMMARY]
[CONTENT] ||| AIS ||| ||| ||| LDL-C. Poor | Rankin Scale | 2 | 90 days ||| ||| 432 | age 69.2 | 13.5 years | 54.6 % ||| 2.55 ± | 0.93 ||| 0.43 ||| 263 | 60.9 % | 90 days ||| 0.99 | 95 % | CI | 0.77-1.27 | 0.904 ||| 1.80 | 95 % | 1,12-2.91 | 0.016 ||| T3 | 0.80 | T1 | 2.56 | 95 % | CI | 1.22-5.36 | 0.013 ||| 0.010 ||| AIS | 90 days [SUMMARY]
Prediction of Low-Dose Aspirin-Induced Gastric Toxicity Using Nuclear Magnetic Resonance Spectroscopy-Based Pharmacometabolomics in Rats.
35408523
Low-dose aspirin (LDA) is the backbone of secondary prevention of coronary artery disease, although its use is limited by gastric toxicity. This study aimed to identify novel metabolites that could predict LDA-induced gastric toxicity using pharmacometabolomics.
BACKGROUND
Pre-dosed urine samples were collected from male Sprague-Dawley rats. The rats were treated with either LDA (10 mg/kg) or 1% methylcellulose (10 mL/kg) per oral for 28 days. The rats' stomachs were examined for gastric toxicity using a stereomicroscope. The urine samples were analyzed using proton nuclear magnetic resonance spectroscopy. Metabolites were systematically identified by searching established databases, and multivariate analyses were used to determine the spectral pattern of metabolites related to LDA-induced gastric toxicity.
METHODS
Treatment with LDA resulted in gastric toxicity in 20/32 rats (62.5%). The orthogonal projections to latent structures discriminant analysis (OPLS-DA) profiling model displayed a goodness-of-fit (R2Y) value of 0.947, suggesting near-perfect reproducibility, and a goodness-of-prediction (Q2Y) of -0.185, with perfect sensitivity, specificity and accuracy (100%). Furthermore, the area under the receiver operating characteristic curve (AUROC) was 1. The final OPLS-DA model had an R2Y value of 0.726 and a Q2Y of 0.142, with sensitivity of 100%, specificity of 95.0% and accuracy of 96.9%. Citrate, hippurate, methylamine, trimethylamine N-oxide and alpha-keto-glutarate were identified as the possible metabolites implicated in LDA-induced gastric toxicity.
RESULTS
The study identified metabolic signatures that correlated with the development of a low-dose Aspirin-induced gastric toxicity in rats. This pharmacometabolomic approach could further be validated to predict LDA-induced gastric toxicity in patients with coronary artery disease.
CONCLUSION
[ "Animals", "Aspirin", "Coronary Artery Disease", "Humans", "Magnetic Resonance Spectroscopy", "Male", "Metabolomics", "Rats", "Rats, Sprague-Dawley", "Reproducibility of Results", "Stomach" ]
9000689
1. Introduction
Coronary artery disease (CAD) is a leading cause of cardiovascular disease (CVD)-related morbidity and mortality globally [1]. Low-dose aspirin (LDA) is the mainstay of secondary prevention of CAD [2]. Aspirin inhibits platelet activity by irreversibly deactivating cyclooxygenase-1 (COX-1), leading to the inhibition of platelet thromboxane-A2 (TXA2) production and TXA2-mediated platelet activation [3]. The activity of aspirin on TXA2 explains both its distinct efficacy in preventing atherothrombosis and the gastrointestinal (GI) side effects it shares with other antiplatelets [3]. Alternative antiplatelets or co-administration with gastro-protective agents are presently the most common strategies to reduce aspirin-induced GI side effects [4,5]. However, the use of alternatives to aspirin is limited by cost burden, pill burden and decreased effectiveness, necessitating more cost-effective strategies. There are limited studies on strategies, such as pharmacometabolomics, that predict the manifestation of gastric toxicity prior to LDA dosing. Pharmacometabolomics is a fast, economical and less invasive approach to predicting drug-induced toxicity and complements personalized therapy. Proton nuclear magnetic resonance (1H-NMR) spectroscopy is a relatively new methodology for predicting drug effects using pre-dosed biomarkers in biofluids. NMR spectroscopy-based pharmacometabolomics is defined as "the prediction of the outcome (e.g., toxicity or efficacy) of a drug or xenobiotic in individuals, based on a mathematical model of pre-intervention metabolite signatures" [6]. NMR spectroscopy is regarded as the "gold standard" in pharmacometabolomics because it is non-destructive and enables the observation of the dynamics, partitioning and quantification of metabolites in bio-samples. Pharmacometabolomics combines 1H-NMR and multivariate analysis to provide a detailed examination of changes in the metabolic signatures of bio-samples. Therefore, this study aimed to identify metabolites that predict LDA-induced gastric toxicity using 1H-NMR-based pharmacometabolomics in rats.
null
null
2. Results
2.1. Gastric Toxicity: At the end of the low-dose aspirin dosing period (28 days), none of the rats (0/10, 0%) in the vehicle-treated group had any form of gastric toxicity. However, most rats (20/32, 62.5%) in the LDA-treated group developed gastric toxicity. Representative samples of the gastric toxicity are shown in Figure 1. 2.2. Pre-Dose Profiling Models: Principal Component Analysis (PCA) did not show clear discrimination between the groups. However, the Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) score plots for the profiling model displayed clear discrimination between the gastric toxic and non-toxic groups, as shown in Figure 2. The model had a goodness-of-fit value (R2Y) of 0.947 (very close to 1). However, the goodness-of-prediction value (Q2Y) of −0.185 (<0.4) indicates that the model has poor predictive capacity. The model had perfect (100%) sensitivity, specificity and accuracy values. It also had a perfect AUROC curve value of 1. The permutation test provided an R2Y intercept value of 0.919 and a Q2Y intercept value of −1.02 (Figure 3A). There is an overlap between the red and blue lines of the AUROC curve because the AUC value is 1 in both cases (Figure 3B). The number of variables with a VIP value >1 was 72, as highlighted in Table 1. Further examination and exclusion of spectral noise using Topspin resulted in 10 regions ascertained as true signals, which were integrated in Topspin before being uploaded to SIMCA for further screening and identification of useful discriminating metabolites. 2.3. Pre-Dosed Identification Models: PCA did not show clear discrimination between the two groups in Figure 4A. Except for two gastric toxic samples (G12 and G3) misclassified into the non-gastric toxic group, the pre-dose urine OPLS-DA model successfully segregated the samples into gastric toxic and non-toxic groups, as shown in Figure 4B. The goodness values are also summarized in Table 1. The R2Y value was 0.726, and the Q2Y value was 0.142.
The sensitivity, specificity and accuracy of the identification model were 100%, 95% and 96.88%, respectively. The model also had an AUROC value of 1. Furthermore, the model was internally validated using the permutation plot (Figure 5). The pre-dose rat urine identification model passed almost all validity criteria: the majority of the Q2 values to the left are lower than the original Q2 point on the right, the blue regression line of the Q2 points intersects the vertical axis (on the left) below 0 (−0.875), and most of the R2 values (to the left) are lower than the original R2 value. 2.4. Identification of Biomarkers to Predict LDA-Induced Gastric Toxicity: After searches in the three databases, the ten regions of the pre-dose rats' urine were identified as corresponding to five metabolites, as highlighted in Table 2. These five metabolites were identified as putative biomarkers that may predict LDA-induced gastric toxicity. An example of the spectral differences in the identified metabolites between the gastric toxic and non-gastric toxic rats is depicted in Figure 6. The triplet at 2.431–2.459 ppm belonging to alpha-keto-glutarate, the doublet at 2.531–2.571 ppm belonging to citrate and the doublet at 3.965–3.981 ppm belonging to hippurate are used to show the differences.
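The OPLS-DA modelling, permutation testing and AUROC evaluation described above were performed in SIMCA. As a rough, hypothetical illustration of a comparable workflow in open-source tools, the sketch below substitutes a plain PLS-DA model (scikit-learn has no OPLS-DA implementation) and applies Pareto scaling, leave-one-out prediction, a label-permutation check and an AUROC calculation to a bucketed spectral matrix X with binary toxicity labels y; the random data and all variable names are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# X: n_samples x n_buckets matrix of 1H-NMR bucket integrals (log-transformed);
# y: 1 = gastric toxic, 0 = non-toxic. Random data stand in for real spectra.
X = rng.lognormal(size=(32, 220))
y = rng.integers(0, 2, size=32)

# Pareto scaling: mean-centre and divide by the square root of the SD.
Xp = (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))

# PLS-DA with 2 latent variables, scored by leave-one-out cross-validation.
pls = PLSRegression(n_components=2)
y_scores = cross_val_predict(pls, Xp, y.astype(float), cv=LeaveOneOut()).ravel()
auc = roc_auc_score(y, y_scores)

# Simple permutation check: how often does a model fitted to shuffled
# labels reach the observed AUROC?
perm_aucs = []
for _ in range(100):
    y_perm = rng.permutation(y)
    s = cross_val_predict(pls, Xp, y_perm.astype(float), cv=LeaveOneOut()).ravel()
    perm_aucs.append(roc_auc_score(y_perm, s))
p_perm = np.mean(np.array(perm_aucs) >= auc)

print(f"LOO AUROC = {auc:.2f}, permutation p = {p_perm:.2f}")
```

In the study itself, model quality was judged with SIMCA's R2Y/Q2Y statistics, misclassification table and 100-permutation plot; the permutation loop above only loosely mimics that idea under the stated assumptions.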
5. Conclusions
The pharmacometabolomic analysis of the pre-dose 1H-NMR urine spectra identified metabolic signatures that correlated with the development of LDA-induced gastric toxicity and could predict gastric toxicity related to LDA. Citrate, hippurate, methylamine, trimethylamine N-oxide, and alpha-keto-glutarate were the putative metabolites identified and possibly implicated in LDA-induced gastric toxicity. The final model demonstrated good discriminatory properties, reproducibility and limited predictive capacity. This pharmacometabolomic approach can be translated to predict gastric toxicity in CAD patients when validated in humans.
[ "2.1. Gastric Toxicity", "2.2. Pre-Dose Profiling Models", "2.3. Pre-Dosed Identification Models", "2.4. Identification of Biomarkers to Predict LDA-Induced Gastric Toxicity", "4. Materials and Methods", "4.1. Animals", "4.2. Experimental Protocol", "4.3. Sample Collection", "4.4. Stomach Preparation", "4.5. Pharmacometabolomics Analysis", "4.6. Statistical Analysis", "4.7. Metabolites Identification" ]
[ "At the end of the low dose aspirin dosing period (28 days), none of the rats 0/10 (0%) in the vehicle-treated group had any form of gastric toxicity. However, most rats 20/32 (62.5%) in the LDA-treated group developed gastric toxicity. Representative samples of the gastric toxicity are shown in Figure 1.", "Principal Component Analysis (PCA) did not show clear discriminations between the groups. However, the Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) score plots for the profiling model displayed clear discrimination between gastric toxic and non-toxic groups, as shown in Figure 2. The model had a goodness-of-fit value (R2Y) of 0.947 (very close to 1). However, the goodness-of-prediction value (Q2Y) of −0.185 (<0.4) indicates that the model has a poor predictive capacity.\nThe model has perfect (100%) sensitivity, specificity and accuracy values. It also has a perfect AUROC curve value of 1. The permutation test provided an R2Y intercept value of 0.919 and a Q2Y intercept value of −1.02 (Figure 3A). There is an overlap between the red and blue lines of the AUROC curve because the AUC value is 1 in both cases (Figure 3B).\nThe number of variables with VIP value >1 was 72, as highlighted in Table 1. Further examination and exclusion of spectral noise using Topsin resulted in 10 regions ascertained as signals integrated with Topspin before uploading to SIMCA for further screening and identifying useful discriminating metabolites.", "PCA did not show clear discrimination between the two groups in Figure 4A. Except for two gastric toxic samples (G12 and G3) misclassified to be in the non-gastric toxic group, the pre-dose urine OPLSDA model successfully segregated the samples into gastric toxic and non-toxic groups, as shown in Figure 4B. The goodness values are also summarized in Table 1. R2Y value was 0.726, and the Q2Y value was 0.142. The sensitivity, specificity and accuracy of the identification model were 100%, 95% and 96.88%, respectively.\nThe model also has an AUROC value of 1. Furthermore, the model was internally validated using the permutation plot (Figure 5). The pre-dose rat urine identification model passed almost all validity criteria plots. The majority of the Q2 values to the left are lower than the original Q2 point on the right. The blue regression line of the Q2 point intersects the vertical axis (on the left) below 0 (−0.875). Moreover, most of the R2 values (to the left) are lower than the original R2 value.", "After searches in the three databases, the ten regions of the pre-dose rats’ urine were identified to correspond to five metabolites, as highlighted in Table 2. These five metabolites were identified as putative biomarkers that may predict LDA-induced gastric toxicity.\nAn example of the spectral differences in the metabolites identified between the gastric toxic and non-gastric toxic rats is depicted in Figure 6. The triplet at 2.431–2.459 ppm belonging to alpha-keto-glutarate, the doublet at 2.531–2.571 ppm belonging to citrate and a doublet at 3.965–3.981 ppm belonging to hippurate are used to show the differences.", "4.1. Animals Male Sprague-Dawley (SD) rats (250–300 g body weight) were obtained from the Animal Research Complex (ARC) of Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia and acclimatized to the animal research room for seven days. The rats were given access to Altromin-1320 maintenance diet for rats/mice (Altromin International, Lage, Germany) and water ad libitum. 
The rats were housed in a room maintained at a temperature of 20 ± 2 °C and a relative humidity of 55 ± 10%, with a 12 h light/dark cycle throughout the study. The rats were kept in individual cages (1 rat per cage) when samples were not being taken but were placed in metabolic cages during the periods of sample collection. The experimental protocol was approved by the Institutional Animal Care and Use Committee (IACUC), and all procedures were carried out according to its recommendations.\n4.2. Experimental Protocol: Two experimental groups were designed using a stratified randomization system based on the rats' body weight, viz. group I (control, n = 10) and group II (treatment, n = 32). The rats in group I were administered 1% methylcellulose (10 mL/kg), while those in group II were administered low-dose aspirin, LDA (10 mg/kg), per oral for 28 days through the intra-gastric (IG) route with an oral gavage. Aspirin, being sparingly soluble in water, was suspended in 1% methylcellulose before administration. The LDA dose (10 mg/kg) is equivalent to the clinical dose of 100 mg daily for adults [20,21] and was used in a similar study to demonstrate LDA-induced gastric toxicity in rats [21].\n4.3. Sample Collection: The rats were transferred to individual metabolic cages three days before urine collection to acclimatize before sample collection [22]. Twenty-four-hour urine samples were collected on day 1 (pre-dosed) and day 28 (post-dosed) using the metabolic cage urine collector containing a preservative (0.5 mL of a 100 mg/mL solution of sodium azide (NaN3)) [6]. The amount of each urine sample received was recorded, transferred into a 15 mL falcon tube, centrifuged at 2500× g for 10 min at 4 °C to remove particles [23] and aliquoted into two 2 mL microcentrifuge tubes. The aliquot and the remaining bulk of urine were stored in a −80 °C freezer until analysis by proton nuclear magnetic resonance (1H-NMR) spectroscopy.
All rats were euthanized on the 28th day of the experiment with an overdose of a ketamine/xylazine (91.0/9.1 mg/mL) cocktail. \nThe rats were transferred to individual metabolic cages three days before urine collection to acclimatize before sample collection [22]. Twenty-four-hour urine samples were collected on day 1 (pre-dosed) and day 28 (post-dosed) using the metabolic cage urine collector containing a preservative (0.5 mL of 100 mg/mL solution of sodium azide (NaN3)) [6]. The amount of each urine sample received was recorded, transferred into a 15 mL falcon tube, centrifuged at 2500× g for 10 min at 4 °C to remove particles [23] and aliquoted into two 2 mL microcentrifuge tubes. The aliquot and the remaining bulk of urine were stored in a −80 °C freezer until analysis by the proton nuclear magnetic resonance (1H-NMR) spectroscopy. All rats were euthanized on the 28th day of the experiment with an overdose of a ketamine/xylazine (91.0/9.1 mg/mL) cocktail. \n4.4. Stomach Preparation Four millilitres of 10% aqueous buffered formalin was instilled IG using oral gavage for in situ intraluminal fixations to preserve the integrity of the stomach tissue before opening up the abdominal cavity for stomach harvesting [24]. The stomachs were detached after five minutes in situ fixation and excised along the greater curvature. They are subsequently rinsed with cold saline and pinned on a polystyrene board with the mucosa facing upward to flatten the stomach. After that, the stomachs were dried using a manual blower. Drying was necessary to enhance sample visualization and prevent light reflection from the microscope. \nEach stomach sample was examined under the stereomicroscope (SZ61, Olympus Europa Holding GMBH, Hamburg, Germany). The software (Cellsens) was launched on a desktop monitor, and stomach images were captured. The image of the entire stomach could not be captured at once, even at the lowest zoom magnification (0.67×) and working distance (110 mm); hence, the segments were snapped and later stitched into a single image using Photoshop (Adobe Photoshop CS5 Version: 12.0), as recommended by the pathologist. The ulcerations and, likewise, the entire glandular stomach perimeter were measured with the aid of CellSens life science imaging software (Ver. 1.9 Olympus America, Inc., Center Valley, PA, USA). The stomachs were primarily classified based on the presence or absence of lesions (gastric toxicity).\nFour millilitres of 10% aqueous buffered formalin was instilled IG using oral gavage for in situ intraluminal fixations to preserve the integrity of the stomach tissue before opening up the abdominal cavity for stomach harvesting [24]. The stomachs were detached after five minutes in situ fixation and excised along the greater curvature. They are subsequently rinsed with cold saline and pinned on a polystyrene board with the mucosa facing upward to flatten the stomach. After that, the stomachs were dried using a manual blower. Drying was necessary to enhance sample visualization and prevent light reflection from the microscope. \nEach stomach sample was examined under the stereomicroscope (SZ61, Olympus Europa Holding GMBH, Hamburg, Germany). The software (Cellsens) was launched on a desktop monitor, and stomach images were captured. 
The image of the entire stomach could not be captured at once, even at the lowest zoom magnification (0.67×) and working distance (110 mm); hence, the segments were snapped and later stitched into a single image using Photoshop (Adobe Photoshop CS5 Version: 12.0), as recommended by the pathologist. The ulcerations and, likewise, the entire glandular stomach perimeter were measured with the aid of CellSens life science imaging software (Ver. 1.9 Olympus America, Inc., Center Valley, PA, USA). The stomachs were primarily classified based on the presence or absence of lesions (gastric toxicity).\n4.5. Pharmacometabolomics Analysis 1H-NMR spectra were acquired at 700.14 MHz using an ASCEND™ 700MHz NMR Spectrophotometer (Bruker BioSpin Corp., Rheinstetten, Germany). A day preceding NMR analysis, aliquots of the urine samples to be analyzed were transferred from the freezer (−8 °C) into the refrigerator (4 °C) and allowed to thaw overnight (at least 8 h) to avoid sample degradation resulting from abrupt defrosting [25]. The thawed aliquots of the samples were centrifuged (MIKRO 22 R, HettichZentrifugen®, Tuttlingen, Germany) at 12,000× g at 4 °C for 5 min to remove any insoluble sediment from the solution. Urine measuring 400 µL and 200 µL of phosphate buffer were mixed (2:1) in a microcentrifuge tube [26] and vortexed for a few seconds to ensure uniform mixing. Then, 550 µL was transferred to a 5 mm NMR tube using a pipette and securely closed by its cap (BRUKER®, BioSpin, Rheinstetten, Germany). \nTo ensure that an NMR tube was properly positioned, a sample gauge was used to align the NMR tube in the spinner. The tubes were then inserted into the respective sample holders and loaded into the spectrometer using Bruker’s IconNMR™ automation software. After the insertion of NMR tubes, the spectra of the 1H-NMR were acquired and processed using the automation interphase of IconNMR™ with TopSpin 3.5 (BRUKER®, BioSpin, Rheinstetten, Germany) software. The acquisition stages were locking, shimming and acquisition, and the data processing stages were Fourier Transform, phase correction and baseline correction. All spectra were acquired without spinning the sample. Each sample is given a lag time of five minutes (300 s) for thermal equilibration in the magnetic field before measuring 300 K. For each sample, the probe was automatically tuned and matched, and the magnetic field was locked on Urine+D2O and shimmed through a specifically optimized shim file for urine samples. Automatic 1H pulse calibration (pulsecal) was performed on each sample to reduce sample variability effects due to salt contents. 1H-NMR experiments were automated with the ICON NMR using the standardized acquisition and the processing parameters are as follows: pulse program (noesygppr1d), time domain (65536), dummy scans (4), scans (16), sweep width (20.5186 ppm), acquisition time (2.281 s), relaxation delay (4 s), receiver gain (12.70), dwell time (34.80 µs), mixing time (0.01 s), line broadening (0.30 Hz) and transmitter frequency offset (3289.90). The spectra were processed using Bruker Topspin 3.5 pl7.\n1H-NMR spectra were acquired at 700.14 MHz using an ASCEND™ 700MHz NMR Spectrophotometer (Bruker BioSpin Corp., Rheinstetten, Germany). A day preceding NMR analysis, aliquots of the urine samples to be analyzed were transferred from the freezer (−8 °C) into the refrigerator (4 °C) and allowed to thaw overnight (at least 8 h) to avoid sample degradation resulting from abrupt defrosting [25]. 
4.6. Statistical Analysis At the end of the experiments, the rats were classified as gastric toxic or non-gastric toxic based on the presence or absence of gastric lesions, respectively. Spectra were bucketed into 0.04 ppm bins using AMIX software (BRUKER®, Rheinstetten, Germany). The water (4.7–4.9 ppm) and urea (5.5–6.1 ppm) regions were excluded. The bucket table was then imported into SIMCA 14.1 software (MKS Umetrics®, Umeå, Sweden). Skewed data were log-transformed. The data were then scaled using Pareto scaling (Par scaling).
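Bucketing, region exclusion, log transformation and Pareto scaling were performed in AMIX and SIMCA. Purely as an illustration of the same preprocessing, here is a minimal numpy/pandas sketch; it assumes each spectrum is available as intensities on a common ppm axis, and the axis, sample count and function names are hypothetical.

```python
import numpy as np
import pandas as pd

def bucket_spectra(ppm, intensities, width=0.04, exclude=((4.7, 4.9), (5.5, 6.1))):
    """Integrate spectra into fixed-width ppm buckets, skipping excluded regions.

    ppm         : 1-D array, common chemical-shift axis for all spectra
    intensities : 2-D array, one row per spectrum
    width       : bucket width in ppm (0.04 ppm, as in the study)
    exclude     : regions dropped entirely (water and urea, as in the study)
    """
    edges = np.arange(ppm.min(), ppm.max() + width, width)
    columns, centres = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        centre = (lo + hi) / 2.0
        if any(lo_ex <= centre <= hi_ex for lo_ex, hi_ex in exclude):
            continue  # bucket centre falls in the water or urea region
        mask = (ppm >= lo) & (ppm < hi)
        columns.append(intensities[:, mask].sum(axis=1))
        centres.append(round(centre, 3))
    return pd.DataFrame(np.column_stack(columns), columns=centres)

def log_and_pareto(table):
    """Log-transform, then Pareto-scale (mean-centre and divide by the square root of the SD)."""
    logged = np.log1p(table)            # offset of 1 avoids log(0) for empty buckets
    centred = logged - logged.mean(axis=0)
    return centred / np.sqrt(logged.std(axis=0, ddof=1))

# Hypothetical example: 42 spectra on a 0.5-10 ppm axis.
rng = np.random.default_rng(0)
ppm_axis = np.linspace(0.5, 10.0, 5000)
spectra = rng.random((42, ppm_axis.size))
X = log_and_pareto(bucket_spectra(ppm_axis, spectra))
print(X.shape)
```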
Principal component analysis (PCA) was conducted, and the score plot was examined to explore the behaviour of the data. Hotelling’s T2 plot was used to identify intrinsic outliers in each group. Orthogonal partial least squares discriminant analysis (OPLS-DA) was used to test the association between the buckets and gastric toxicity. This initial discriminatory model was the profiling model. The misclassification table was used to report the sensitivity, specificity and accuracy of each model. The best differentiating model was selected based on the two goodness values: the goodness-of-fit (R2Y) and the goodness-of-prediction (Q2Y). A large R2Y (close to 1) is a necessary condition for a good model, as it indicates good reproducibility. Likewise, a Q2Y value >0.5 signifies good predictivity. The difference between the two goodness values should not be too large, in order to ensure sound prediction and to avoid overfitting.\nThe variable importance for the projection (VIP) plot was then generated. The VIP plot summarizes the significance of the variables both in explaining X (the predictors) and in correlating with Y (the outcome). A VIP score greater than 1 is the typical rule for selecting variables that are important, relevant and potentially discriminating [27,28]. Therefore, buckets with a VIP value > 1 were chosen for further analysis. These spectral buckets were copied to an Excel sheet and sorted in ascending order. The corresponding spectra for each bucket were verified in TopSpin, and spectral noise was excluded from the true signals. The 3-(trimethylsilyl)propionic acid (TSP) peak was defined as the reference, and all peaks were calibrated against it.
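The OPLS-DA models and VIP plots were produced in SIMCA. scikit-learn has no OPLS-DA, so the sketch below uses plain PLS-DA (PLSRegression against a 0/1 outcome) as the closest stand-in, purely to show how VIP scores are computed and how variables with VIP > 1 would be short-listed; the data are simulated and the threshold of 1 is the only value taken from the text.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable importance in projection (VIP) for a fitted PLSRegression model."""
    t = pls.transform(X)          # scores, shape (n_samples, n_components)
    w = pls.x_weights_            # weights, shape (n_features, n_components)
    q = pls.y_loadings_           # y loadings, shape (n_targets, n_components)
    p, a = w.shape
    # Sum of squares of y explained by each component
    ssy = np.array([(t[:, i] ** 2).sum() * (q[:, i] ** 2).sum() for i in range(a)])
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

# Simulated bucket table: 42 samples x 200 buckets, binary gastric-toxicity outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(42, 200))
y = rng.integers(0, 2, size=42)
X[:, :5] += y[:, None] * 1.5                 # make the first five buckets discriminating

pls = PLSRegression(n_components=2).fit(X, y.astype(float))
vip = vip_scores(pls, X)
selected = np.where(vip > 1)[0]              # buckets carried into the identification model
print(f"{selected.size} buckets with VIP > 1; first few: {selected[:5]}")
```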
A new data table was created by copying the relative integrals and their corresponding chemical shifts (ppm) into an Excel sheet. The TSP integral was excluded from the table before importing into SIMCA, so that it would not affect the analysis (it is only a reference). The data were again log-transformed, Pareto scaled and explored initially using PCA. OPLS-DA was then applied, and VIP plots were generated. This second discriminatory model was the “Identification Model” (final model). The misclassification table was generated to show the proportion of correctly classified observations in the dataset. In SIMCA, the ability of a model to classify individual subjects correctly or incorrectly is evaluated with the misclassification table tool [29,30].\nFurthermore, the permutation plot (Y-scrambling) was used as an internal validation of the model. This compares the goodness statistics (R2 and Q2) of the original model with those of models generated by randomly permuting the Y-observations (the outcome) while keeping the X-matrix (the predictors) constant. The number of permutations was set to 100 [31], meaning that the model was randomly rebuilt and validated 100 times. The area under the receiver operating characteristic (AUROC) curve was also computed to visualize the performance of the discriminatory models. It serves as a quantitative measure of model performance, ranging from 0.5 (classification no better than chance) to 1.0 (perfect classification).
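Permutation (Y-scrambling) validation, the AUROC curve and the misclassification table were all generated in SIMCA. As a rough, simulated-data equivalent, the sketch below uses scikit-learn's permutation_test_score with a cross-validated R² standing in for SIMCA's Q2Y (an approximation, not the same statistic), then derives AUROC, sensitivity, specificity and accuracy from cross-validated predictions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import permutation_test_score, cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(42, 10))                 # e.g. the ten retained buckets
y = rng.integers(0, 2, size=42)               # gastric toxic (1) vs non-toxic (0)
X[:, :3] += y[:, None]                        # inject some class separation

pls = PLSRegression(n_components=2)

# Y-scrambling: compare the cross-validated score for the true labels with the
# scores obtained after refitting on 100 random permutations of the outcome.
score, perm_scores, p_value = permutation_test_score(
    pls, X, y.astype(float), scoring="r2", cv=7, n_permutations=100, random_state=2)
print(f"cross-validated R2 (true labels) = {score:.2f}, permutation p-value = {p_value:.3f}")

# AUROC and a misclassification-style summary from cross-validated predictions.
y_score = cross_val_predict(pls, X, y.astype(float), cv=7).ravel()
auroc = roc_auc_score(y, y_score)
tn, fp, fn, tp = confusion_matrix(y, (y_score > 0.5).astype(int)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"AUROC = {auroc:.2f}, sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}, accuracy = {accuracy:.2f}")
```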
4.7. Metabolites Identification Metabolites were identified by systematically exploring three major databases, namely the Biological Magnetic Resonance Data Bank (BMRB), the Human Metabolome Database (HMDB) and Chenomx NMR Suite 6.0 (Chenomx® Inc., Edmonton, AB, Canada). The search began with BMRB: each important chemical shift identified from the identification model (previous step) was entered individually into the designated metabolite search field of the BMRB database, which generated several matching peaks along with their corresponding metabolites. The matching metabolites were then individually cross-referenced in the HMDB database to confirm that they occur in urine. The prospective urinary metabolites were further cross-matched in the Chenomx profiler to ascertain their identity.", "Male Sprague-Dawley (SD) rats (250–300 g body weight) were obtained from the Animal Research Complex (ARC) of the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia and acclimatized to the animal research room for seven days. The rats were given access to the Altromin-1320 maintenance diet for rats/mice (Altromin International, Lage, Germany) and water ad libitum. The rats were housed in a room maintained at a temperature of 20 ± 2 °C and a relative humidity of 55 ± 10%, with a 12 h light/dark cycle throughout the study. The rats were kept in individual cages (1 rat per cage) when samples were not being collected but were placed in metabolic cages during the sample collection periods. The experimental protocol was approved by the Institutional Animal Care and Use Committee (IACUC). All procedures were carried out according to the recommendations of the IACUC.", "Two experimental groups were designed using a stratified randomization system based on the rats’ body weight: group I (control, n = 10) and group II (treatment, n = 32). The rats in group I were administered 1% methylcellulose (10 mL/kg), while those in group II were administered low-dose aspirin (LDA, 10 mg/kg) for 28 days through the intra-gastric (IG) route by oral gavage. Aspirin, being sparingly soluble in water, was suspended in 1% methylcellulose before administration. The LDA dose (10 mg/kg) was equivalent to the clinical dose of 100 mg daily for adults [20,21], and it was used in a similar study to demonstrate LDA-induced gastric toxicity in rats [21].", "The rats were transferred to individual metabolic cages three days before urine collection to acclimatize before sample collection [22]. Twenty-four-hour urine samples were collected on day 1 (pre-dosed) and day 28 (post-dosed) using the metabolic cage urine collector containing a preservative (0.5 mL of a 100 mg/mL solution of sodium azide (NaN3)) [6]. The volume of each urine sample was recorded; the sample was then transferred into a 15 mL falcon tube, centrifuged at 2500× g for 10 min at 4 °C to remove particles [23] and aliquoted into two 2 mL microcentrifuge tubes. The aliquots and the remaining bulk of urine were stored in a −80 °C freezer until analysis by proton nuclear magnetic resonance (1H-NMR) spectroscopy. All rats were euthanized on the 28th day of the experiment with an overdose of a ketamine/xylazine (91.0/9.1 mg/mL) cocktail. "
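The experimental protocol above equates the 10 mg/kg rat dose with the 100 mg/day clinical dose [20,21]. One common way to see that correspondence is the body-surface-area (Km-ratio) conversion; the short calculation below assumes the standard rat divisor of 6.2 and a 60 kg reference adult, which may not be exactly how the cited papers derived the equivalence.

```python
# Human-equivalent dose (HED) via the standard body-surface-area conversion:
# HED (mg/kg) = rat dose (mg/kg) / 6.2 (Km ratio for rat -> human).
rat_dose_mg_per_kg = 10.0
hed_mg_per_kg = rat_dose_mg_per_kg / 6.2        # ~1.6 mg/kg
adult_weight_kg = 60.0                          # assumed reference adult weight
print(round(hed_mg_per_kg * adult_weight_kg))   # ~97 mg, i.e. roughly the 100 mg/day clinical dose
```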
", "Four millilitres of 10% aqueous buffered formalin was instilled IG using oral gavage for in situ intraluminal fixations to preserve the integrity of the stomach tissue before opening up the abdominal cavity for stomach harvesting [24]. The stomachs were detached after five minutes in situ fixation and excised along the greater curvature. They are subsequently rinsed with cold saline and pinned on a polystyrene board with the mucosa facing upward to flatten the stomach. After that, the stomachs were dried using a manual blower. Drying was necessary to enhance sample visualization and prevent light reflection from the microscope. \nEach stomach sample was examined under the stereomicroscope (SZ61, Olympus Europa Holding GMBH, Hamburg, Germany). The software (Cellsens) was launched on a desktop monitor, and stomach images were captured. The image of the entire stomach could not be captured at once, even at the lowest zoom magnification (0.67×) and working distance (110 mm); hence, the segments were snapped and later stitched into a single image using Photoshop (Adobe Photoshop CS5 Version: 12.0), as recommended by the pathologist. The ulcerations and, likewise, the entire glandular stomach perimeter were measured with the aid of CellSens life science imaging software (Ver. 1.9 Olympus America, Inc., Center Valley, PA, USA). The stomachs were primarily classified based on the presence or absence of lesions (gastric toxicity).", "1H-NMR spectra were acquired at 700.14 MHz using an ASCEND™ 700MHz NMR Spectrophotometer (Bruker BioSpin Corp., Rheinstetten, Germany). A day preceding NMR analysis, aliquots of the urine samples to be analyzed were transferred from the freezer (−8 °C) into the refrigerator (4 °C) and allowed to thaw overnight (at least 8 h) to avoid sample degradation resulting from abrupt defrosting [25]. The thawed aliquots of the samples were centrifuged (MIKRO 22 R, HettichZentrifugen®, Tuttlingen, Germany) at 12,000× g at 4 °C for 5 min to remove any insoluble sediment from the solution. Urine measuring 400 µL and 200 µL of phosphate buffer were mixed (2:1) in a microcentrifuge tube [26] and vortexed for a few seconds to ensure uniform mixing. Then, 550 µL was transferred to a 5 mm NMR tube using a pipette and securely closed by its cap (BRUKER®, BioSpin, Rheinstetten, Germany). \nTo ensure that an NMR tube was properly positioned, a sample gauge was used to align the NMR tube in the spinner. The tubes were then inserted into the respective sample holders and loaded into the spectrometer using Bruker’s IconNMR™ automation software. After the insertion of NMR tubes, the spectra of the 1H-NMR were acquired and processed using the automation interphase of IconNMR™ with TopSpin 3.5 (BRUKER®, BioSpin, Rheinstetten, Germany) software. The acquisition stages were locking, shimming and acquisition, and the data processing stages were Fourier Transform, phase correction and baseline correction. All spectra were acquired without spinning the sample. Each sample is given a lag time of five minutes (300 s) for thermal equilibration in the magnetic field before measuring 300 K. For each sample, the probe was automatically tuned and matched, and the magnetic field was locked on Urine+D2O and shimmed through a specifically optimized shim file for urine samples. Automatic 1H pulse calibration (pulsecal) was performed on each sample to reduce sample variability effects due to salt contents. 
1H-NMR experiments were automated with IconNMR using standardized acquisition and processing parameters as follows: pulse program (noesygppr1d), time domain (65536), dummy scans (4), scans (16), sweep width (20.5186 ppm), acquisition time (2.281 s), relaxation delay (4 s), receiver gain (12.70), dwell time (34.80 µs), mixing time (0.01 s), line broadening (0.30 Hz) and transmitter frequency offset (3289.90 Hz). The spectra were processed using Bruker TopSpin 3.5 pl7.", "At the end of the experiments, the rats were classified as gastric toxic or non-gastric toxic based on the presence or absence of gastric lesions, respectively. Spectra were bucketed into 0.04 ppm bins using AMIX software (BRUKER®, Rheinstetten, Germany). The water (4.7–4.9 ppm) and urea (5.5–6.1 ppm) regions were excluded. The bucket table was then imported into SIMCA 14.1 software (MKS Umetrics®, Umeå, Sweden). Skewed data were log-transformed. The data were then scaled using Pareto scaling (Par scaling). \nPrincipal component analysis (PCA) was conducted, and the score plot was examined to explore the behaviour of the data. Hotelling’s T2 plot was used to identify intrinsic outliers in each group. Orthogonal partial least squares discriminant analysis (OPLS-DA) was used to test the association between the buckets and gastric toxicity. This initial discriminatory model was the profiling model. The misclassification table was used to report the sensitivity, specificity and accuracy of each model. The best differentiating model was selected based on the two goodness values: the goodness-of-fit (R2Y) and the goodness-of-prediction (Q2Y). A large R2Y (close to 1) is a necessary condition for a good model, as it indicates good reproducibility. Likewise, a Q2Y value >0.5 signifies good predictivity. The difference between the two goodness values should not be too large, in order to ensure sound prediction and to avoid overfitting.\nThe variable importance for the projection (VIP) plot was then generated. The VIP plot summarizes the significance of the variables both in explaining X (the predictors) and in correlating with Y (the outcome). A VIP score greater than 1 is the typical rule for selecting variables that are important, relevant and potentially discriminating [27,28]. Therefore, buckets with a VIP value > 1 were chosen for further analysis. These spectral buckets were copied to an Excel sheet and sorted in ascending order. The corresponding spectra for each bucket were verified in TopSpin, and spectral noise was excluded from the true signals. The 3-(trimethylsilyl)propionic acid (TSP) peak was defined as the reference, and all peaks were calibrated against it.\nA new data table was created by copying the relative integrals and their corresponding chemical shifts (ppm) into an Excel sheet. The TSP integral was excluded from the table before importing into SIMCA, so that it would not affect the analysis (it is only a reference). The data were again log-transformed, Pareto scaled and explored initially using PCA. OPLS-DA was then applied, and VIP plots were generated. This second discriminatory model was the “Identification Model” (final model). The misclassification table was generated to show the proportion of correctly classified observations in the dataset.
In SIMCA, the ability of a model to classify individual subjects correctly or incorrectly is evaluated with the misclassification table tool [29,30].\nFurthermore, the permutation plot (Y-scrambling) was used as an internal validation of the model. This compares the goodness statistics (R2 and Q2) of the original model with those of models generated by randomly permuting the Y-observations (the outcome) while keeping the X-matrix (the predictors) constant. The number of permutations was set to 100 [31], meaning that the model was randomly rebuilt and validated 100 times. The area under the receiver operating characteristic (AUROC) curve was also computed to visualize the performance of the discriminatory models. It serves as a quantitative measure of model performance, ranging from 0.5 (classification no better than chance) to 1.0 (perfect classification).", "Metabolites were identified by systematically exploring three major databases, namely the Biological Magnetic Resonance Data Bank (BMRB), the Human Metabolome Database (HMDB) and Chenomx NMR Suite 6.0 (Chenomx® Inc., Edmonton, AB, Canada). The search began with BMRB: each important chemical shift identified from the identification model (previous step) was entered individually into the designated metabolite search field of the BMRB database, which generated several matching peaks along with their corresponding metabolites. The matching metabolites were then individually cross-referenced in the HMDB database to confirm that they occur in urine. The prospective urinary metabolites were further cross-matched in the Chenomx profiler to ascertain their identity." ]
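Database matching was done interactively in BMRB, HMDB and Chenomx. Purely to illustrate the matching logic, the snippet below screens a few discriminating chemical shifts against a small, hypothetical local reference table; the listed shift values are rough placeholders rather than curated assignments, and a real workflow would query the online databases instead.

```python
import pandas as pd

# Hypothetical local reference table standing in for the BMRB/HMDB lookups.
reference = pd.DataFrame({
    "metabolite": ["citrate", "citrate", "hippurate", "trimethylamine N-oxide",
                   "alpha-ketoglutarate", "methylamine"],
    "ppm": [2.54, 2.66, 3.97, 3.27, 2.44, 2.61],   # approximate placeholder shifts
})

def match_shifts(query_ppm, reference, tolerance=0.02):
    """Return candidate metabolites whose reference shift lies within `tolerance` ppm."""
    hits = []
    for shift in query_ppm:
        close = reference[(reference["ppm"] - shift).abs() <= tolerance]
        for _, row in close.iterrows():
            hits.append({"query_ppm": shift,
                         "metabolite": row["metabolite"],
                         "reference_ppm": row["ppm"]})
    return pd.DataFrame(hits)

# Example shifts, loosely taken from the discriminating regions discussed in the results.
print(match_shifts([2.44, 2.55, 3.97], reference))
```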
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Results", "2.1. Gastric Toxicity", "2.2. Pre-Dose Profiling Models", "2.3. Pre-Dosed Identification Models", "2.4. Identification of Biomarkers to Predict LDA-Induced Gastric Toxicity", "3. Discussion", "4. Materials and Methods", "4.1. Animals", "4.2. Experimental Protocol", "4.3. Sample Collection", "4.4. Stomach Preparation", "4.5. Pharmacometabolomics Analysis", "4.6. Statistical Analysis", "4.7. Metabolites Identification", "5. Conclusions" ]
[ "Coronary artery disease (CAD) is a leading cause of cardiovascular disease (CVD) related morbidity and mortality globally [1]. Low-dose aspirin (LDA) is the mainstay for the secondary prevention of CAD [2]. Aspirin inhibits platelet activity by irreversibly deactivating cyclooxygenase-I (COX-1), leading to the inhibition of platelet thromboxane-A2 (TXA2) production and TXA2-mediated platelet activation [3]. The activity of Aspirin on TXA2 explains its distinct efficacy in preventing atherothrombosis and shared gastrointestinal (GI) side effects with other antiplatelets [3]. Alternative antiplatelets or co-administration with gastro-protective agents are presently the most common strategies to reduce Aspirin-induced GI side effects [4,5]. However, alternative Aspirin use is limited with cost burden, pill burden and decreased effectiveness, necessitating the need for more cost-effective strategies.\nThere are limited studies on strategies, such as pharmacometabonomics, that predict the manifestation of gastric toxicity prior to LDA dosing. Pharmacometabolomics is a fast, economical and less invasive approach to predict drug-induced toxicity and complements personalized therapy. Proton nuclear magnetic resonance (1H-NMR) spectroscopy is a relatively new methodology for predicting drug effects using pre-dosed biomarkers of biofluids. NMR spectroscopy-based pharmacometabolomics is defined as “the prediction of the outcome (e.g., toxicity or efficacy) of a drug or xenobiotic in individuals, based on a mathematical model of pre-intervention metabolite signatures” [6]. NMR spectroscopy is the “gold standard” in pharmacometabolomics because of its non-destructive nature and enables the observation of the dynamics, partition and the quantification of metabolites in bio-samples. The pharmacometabolomics combines 1H-NMR and multivariate analysis in order to provide a detailed examination of the changes in the metabolic signatures of bio-samples. Therefore, this study aimed to identify metabolites that predict LDA-induced gastric toxicity using 1H-NMR-based pharmacometabonomics in rats.", "2.1. Gastric Toxicity At the end of the low dose aspirin dosing period (28 days), none of the rats 0/10 (0%) in the vehicle-treated group had any form of gastric toxicity. However, most rats 20/32 (62.5%) in the LDA-treated group developed gastric toxicity. Representative samples of the gastric toxicity are shown in Figure 1.\nAt the end of the low dose aspirin dosing period (28 days), none of the rats 0/10 (0%) in the vehicle-treated group had any form of gastric toxicity. However, most rats 20/32 (62.5%) in the LDA-treated group developed gastric toxicity. Representative samples of the gastric toxicity are shown in Figure 1.\n2.2. Pre-Dose Profiling Models Principal Component Analysis (PCA) did not show clear discriminations between the groups. However, the Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) score plots for the profiling model displayed clear discrimination between gastric toxic and non-toxic groups, as shown in Figure 2. The model had a goodness-of-fit value (R2Y) of 0.947 (very close to 1). However, the goodness-of-prediction value (Q2Y) of −0.185 (<0.4) indicates that the model has a poor predictive capacity.\nThe model has perfect (100%) sensitivity, specificity and accuracy values. It also has a perfect AUROC curve value of 1. The permutation test provided an R2Y intercept value of 0.919 and a Q2Y intercept value of −1.02 (Figure 3A). 
There is an overlap between the red and blue lines of the AUROC curve because the AUC value is 1 in both cases (Figure 3B).\nThe number of variables with VIP value >1 was 72, as highlighted in Table 1. Further examination and exclusion of spectral noise using Topsin resulted in 10 regions ascertained as signals integrated with Topspin before uploading to SIMCA for further screening and identifying useful discriminating metabolites.\nPrincipal Component Analysis (PCA) did not show clear discriminations between the groups. However, the Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) score plots for the profiling model displayed clear discrimination between gastric toxic and non-toxic groups, as shown in Figure 2. The model had a goodness-of-fit value (R2Y) of 0.947 (very close to 1). However, the goodness-of-prediction value (Q2Y) of −0.185 (<0.4) indicates that the model has a poor predictive capacity.\nThe model has perfect (100%) sensitivity, specificity and accuracy values. It also has a perfect AUROC curve value of 1. The permutation test provided an R2Y intercept value of 0.919 and a Q2Y intercept value of −1.02 (Figure 3A). There is an overlap between the red and blue lines of the AUROC curve because the AUC value is 1 in both cases (Figure 3B).\nThe number of variables with VIP value >1 was 72, as highlighted in Table 1. Further examination and exclusion of spectral noise using Topsin resulted in 10 regions ascertained as signals integrated with Topspin before uploading to SIMCA for further screening and identifying useful discriminating metabolites.\n2.3. Pre-Dosed Identification Models PCA did not show clear discrimination between the two groups in Figure 4A. Except for two gastric toxic samples (G12 and G3) misclassified to be in the non-gastric toxic group, the pre-dose urine OPLSDA model successfully segregated the samples into gastric toxic and non-toxic groups, as shown in Figure 4B. The goodness values are also summarized in Table 1. R2Y value was 0.726, and the Q2Y value was 0.142. The sensitivity, specificity and accuracy of the identification model were 100%, 95% and 96.88%, respectively.\nThe model also has an AUROC value of 1. Furthermore, the model was internally validated using the permutation plot (Figure 5). The pre-dose rat urine identification model passed almost all validity criteria plots. The majority of the Q2 values to the left are lower than the original Q2 point on the right. The blue regression line of the Q2 point intersects the vertical axis (on the left) below 0 (−0.875). Moreover, most of the R2 values (to the left) are lower than the original R2 value.\nPCA did not show clear discrimination between the two groups in Figure 4A. Except for two gastric toxic samples (G12 and G3) misclassified to be in the non-gastric toxic group, the pre-dose urine OPLSDA model successfully segregated the samples into gastric toxic and non-toxic groups, as shown in Figure 4B. The goodness values are also summarized in Table 1. R2Y value was 0.726, and the Q2Y value was 0.142. The sensitivity, specificity and accuracy of the identification model were 100%, 95% and 96.88%, respectively.\nThe model also has an AUROC value of 1. Furthermore, the model was internally validated using the permutation plot (Figure 5). The pre-dose rat urine identification model passed almost all validity criteria plots. The majority of the Q2 values to the left are lower than the original Q2 point on the right. 
The blue regression line of the Q2 point intersects the vertical axis (on the left) below 0 (−0.875). Moreover, most of the R2 values (to the left) are lower than the original R2 value.\n2.4. Identification of Biomarkers to Predict LDA-Induced Gastric Toxicity After searches in the three databases, the ten regions of the pre-dose rats’ urine were identified to correspond to five metabolites, as highlighted in Table 2. These five metabolites were identified as putative biomarkers that may predict LDA-induced gastric toxicity.\nAn example of the spectral differences in the metabolites identified between the gastric toxic and non-gastric toxic rats is depicted in Figure 6. The triplet at 2.431–2.459 ppm belonging to alpha-keto-glutarate, the doublet at 2.531–2.571 ppm belonging to citrate and a doublet at 3.965–3.981 ppm belonging to hippurate are used to show the differences.\nAfter searches in the three databases, the ten regions of the pre-dose rats’ urine were identified to correspond to five metabolites, as highlighted in Table 2. These five metabolites were identified as putative biomarkers that may predict LDA-induced gastric toxicity.\nAn example of the spectral differences in the metabolites identified between the gastric toxic and non-gastric toxic rats is depicted in Figure 6. The triplet at 2.431–2.459 ppm belonging to alpha-keto-glutarate, the doublet at 2.531–2.571 ppm belonging to citrate and a doublet at 3.965–3.981 ppm belonging to hippurate are used to show the differences.", "At the end of the low dose aspirin dosing period (28 days), none of the rats 0/10 (0%) in the vehicle-treated group had any form of gastric toxicity. However, most rats 20/32 (62.5%) in the LDA-treated group developed gastric toxicity. Representative samples of the gastric toxicity are shown in Figure 1.", "Principal Component Analysis (PCA) did not show clear discriminations between the groups. However, the Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) score plots for the profiling model displayed clear discrimination between gastric toxic and non-toxic groups, as shown in Figure 2. The model had a goodness-of-fit value (R2Y) of 0.947 (very close to 1). However, the goodness-of-prediction value (Q2Y) of −0.185 (<0.4) indicates that the model has a poor predictive capacity.\nThe model has perfect (100%) sensitivity, specificity and accuracy values. It also has a perfect AUROC curve value of 1. The permutation test provided an R2Y intercept value of 0.919 and a Q2Y intercept value of −1.02 (Figure 3A). There is an overlap between the red and blue lines of the AUROC curve because the AUC value is 1 in both cases (Figure 3B).\nThe number of variables with VIP value >1 was 72, as highlighted in Table 1. Further examination and exclusion of spectral noise using Topsin resulted in 10 regions ascertained as signals integrated with Topspin before uploading to SIMCA for further screening and identifying useful discriminating metabolites.", "PCA did not show clear discrimination between the two groups in Figure 4A. Except for two gastric toxic samples (G12 and G3) misclassified to be in the non-gastric toxic group, the pre-dose urine OPLSDA model successfully segregated the samples into gastric toxic and non-toxic groups, as shown in Figure 4B. The goodness values are also summarized in Table 1. R2Y value was 0.726, and the Q2Y value was 0.142. 
The sensitivity, specificity and accuracy of the identification model were 100%, 95% and 96.88%, respectively.\nThe model also has an AUROC value of 1. Furthermore, the model was internally validated using the permutation plot (Figure 5). The pre-dose rat urine identification model passed almost all validity criteria plots. The majority of the Q2 values to the left are lower than the original Q2 point on the right. The blue regression line of the Q2 point intersects the vertical axis (on the left) below 0 (−0.875). Moreover, most of the R2 values (to the left) are lower than the original R2 value.", "After searches in the three databases, the ten regions of the pre-dose rats’ urine were identified to correspond to five metabolites, as highlighted in Table 2. These five metabolites were identified as putative biomarkers that may predict LDA-induced gastric toxicity.\nAn example of the spectral differences in the metabolites identified between the gastric toxic and non-gastric toxic rats is depicted in Figure 6. The triplet at 2.431–2.459 ppm belonging to alpha-keto-glutarate, the doublet at 2.531–2.571 ppm belonging to citrate and a doublet at 3.965–3.981 ppm belonging to hippurate are used to show the differences.", "The study demonstrated the robustness of the model developed for pre-dosed prediction of LDA-induced gastric toxicity. The model reaffirms the assertion by Szymańska and colleagues that the area under the receiver operating characteristic (AUROC) or the number of misclassification is more precise in detecting biomarkers responsible for 2-class differentiation or discrimination [7]. They also proclaim that Q2 values are not very good as diagnostic statistics in discriminatory analysis models such as orthogonal projections to latent structures discriminant analysis (OPLS-DA) [7]. The goodness-of-fit (R2Y) value of 0.947 indicates the near-perfect reproducibility of the model. However, the goodness-of-prediction (Q2Y) value of −0.185 (less than the recommended 0.4 for biological models) shows that the model has poor predictive capacity [8]. The goodness-of-prediction values even lower than 0.3 have been reported in several metabolomic studies [9,10]. In such instances, it is recommended that the models are further assessed using permutation tests [10,11]. \nAlthough the AUROC value of 1 confirms the validity of the model, internal validation by conducting a permutation test indicates overfitting in the model. This may be due to the limited number of samples to perform an external validation of the model. Based on the aforementioned statistical parameters, this model may be suitable for predicting LDA-induced gastric toxicity when externally validated with independent data. Previous researchers stated that models with good sensitivity and specificity are suited for both screening and confirmation of disease [12]. Therefore, the diagnostic statistics qualify the model for developing a diagnostic kit, which can be used clinically to screen patients with the propensity to develop LDA-induced gastric toxicity when validated in a human study. Logically, a minimal number of discriminating metabolites will promote their clinical acceptance and utility. Reducing the crucial bins from 72 to 10 makes the model more clinically relevant. 
When validated with human data, the five identified metabolites (corresponding to the ten significant bins) from the final model may be used to develop a diagnostic kit that can be clinically useful.\nThe citric acid (citrate) is a weak acid that is formed endogenously in the tricarboxylic acid (TCA) cycle or consumed through some foods. The TCA cycle is also known as the citric acid cycle. NSAIDs have been found to cause the opening of \"mitochondrial permeability transition pore\", consequently leading to the uncoupling of oxidative phosphorylation, increasing the resting state respiration and disrupting the mitochondrial transmembrane potential. These NSAID-induced changes play a significant function in initiating tissue damage [13]. Takeuchi and colleagues [13] found no changes in serum citrate concentration after administering NSAIDs, including aspirin. They, however, found a decrease in citrate levels in stomach tissue extracts compared with controls. They, therefore, deduced two events to be associated with NSAID-induced gastric injury: hyperactivity of collagenase in the stomach and a decrease in levels of citrate (and other metabolites) as indicators of altered TCA cycle activity, which is a mitochondrial pathway. Other researchers [14] found citrate to be statistically reduced in the NSAID-induced gastric damage group compared to the control. Moreover, Takeuchi, et al. [15] reported no change in the serum levels of citrate after low-dose aspirin.\nMethylamine is an endogenous metabolite resulting from the breakdown of amine. Its tissue level is found to increase in some disease conditions such as diabetes mellitus. The levels of methylamine and ammonia are mutually controlled by a multi-functional enzyme known as semicarbazide-sensitive amine oxidase (SSAO). The activity of SSAO deaminates methylamine to formaldehyde, thereby producing ammonia and hydrogen peroxide. An increase in serum SSAO activity has been reported in patients with some disease conditions, including diabetes mellitus, Alzheimer’s disease and vascular disorders. The deamination of methylamine catalyzed by SSAO results in the production of toxic formaldehyde. In this study, methylamine is one of the predictors of LDA-induced gastric toxicity in the urine. The exact mechanism/pathway linking aspirin-induced GI toxicity and methylamine has not been established. Perhaps future studies can focus on identifying the links that may show other pathways related to aspirin-induced GI toxicity.\nTrimethylamine N-oxide (TMAO) results from the oxidation of trimethylamine. It is a common metabolite in both humans and animals. Specifically, TMAO is endogenously synthesized from trimethylamine derived from choline. Choline is usually obtained from either dietary lecithin or carnitine [16]. A link between blood and urinary levels of TMAO and gut microbiota has been established [17]. The concentration of TMAO increases if the number of bacteria that convert the trimethylamine to TMAO in the gut increases. It can, therefore, be inferred from this that subjects having a higher number of microbes that promote the synthesis of TMAO are also at an increased risk of developing aspirin-induced gastric toxicity. Previous researchers reported that NSAIDs increase the level of TMAO compared to controls [14].\nHippuric acid is a product of the conjugation of benzoic acid with glycine. It is referred to as an acyl glycine. Acyl glycines are synthesized through an enzyme known as glycine N-acyl transferase. 
Hippuric acid is a common constituent of urine, and its quantity is increased with an increase in the intake of phenolic compounds such as tea, fruit juices and wine. These phenols are changed to benzoic acid, which is subsequently converted to hippuric acid and excreted in urine. Gastrointestinal microflora appears to be responsible for quinic acid metabolism in hippuric acid. Indomethacin has been found to cause a decrease in hippurate, possibly due to the disruption of the normal microorganisms in the gastrointestinal tract [18].\nOne of two ketone derivatives of glutaric acid is alpha-ketoglutaric acid, also known as 2-oxoglutaric acid. When used without qualification, the word “ketoglutaric acid” usually always refers to the alpha version. The only difference between beta ketoglutaric acid and other ketoglutaric acids is the position of the ketone functional group, and it is significantly less prevalent. Alpha-ketoglutarate, commonly known as 2-oxoglutarate, is a biologically significant carboxylate. It is a keto acid formed when glutamate is deaminated, and it is an intermediate in the Krebs cycle [19]. ", "4.1. Animals Male Sprague-Dawley (SD) rats (250–300 g body weight) were obtained from the Animal Research Complex (ARC) of Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia and acclimatized to the animal research room for seven days. The rats were given access to Altromin-1320 maintenance diet for rats/mice (Altromin International, Lage, Germany) and water ad libitum. The rats were housed in a room maintained at a temperature of 20 ± 2 °C, relative humidity of 55 ± 10%, respectively, and a 12 h light/dark cycle throughout the study. The rats were kept in individual cages (1 rat per cage) when the samples were not taken but placed in metabolic cages during the periods for sample collection. The experimental protocol was approved by the Institutional Animal Care and Use Committees (IACUC). All procedures were carried out according to the recommendations of IACUC.\nMale Sprague-Dawley (SD) rats (250–300 g body weight) were obtained from the Animal Research Complex (ARC) of Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia and acclimatized to the animal research room for seven days. The rats were given access to Altromin-1320 maintenance diet for rats/mice (Altromin International, Lage, Germany) and water ad libitum. The rats were housed in a room maintained at a temperature of 20 ± 2 °C, relative humidity of 55 ± 10%, respectively, and a 12 h light/dark cycle throughout the study. The rats were kept in individual cages (1 rat per cage) when the samples were not taken but placed in metabolic cages during the periods for sample collection. The experimental protocol was approved by the Institutional Animal Care and Use Committees (IACUC). All procedures were carried out according to the recommendations of IACUC.\n4.2. Experimental Protocol Two experimental groups were designed using a stratified randomization system based on rats’ bodyweight viz: group I (control, n = 10) and group II (treatment, n = 32). The rats in group I were administered 1% methylcellulose (10 mL/kg), while those in group II were administered low-dose aspirin, LDA (10 mg/kg) per oral for 28 days through the intra-gastric (IG) route with an oral gavage. Aspirin, sparingly soluble in water, was suspended in 1% methylcellulose before administration. 
The LDA (10 mg/kg) was equivalent to the clinical dose of 100 mg daily for adults [20,21], and it was used in a similar study to demonstrate LDA-induced gastric toxicity in rats [21].\nTwo experimental groups were designed using a stratified randomization system based on rats’ bodyweight viz: group I (control, n = 10) and group II (treatment, n = 32). The rats in group I were administered 1% methylcellulose (10 mL/kg), while those in group II were administered low-dose aspirin, LDA (10 mg/kg) per oral for 28 days through the intra-gastric (IG) route with an oral gavage. Aspirin, sparingly soluble in water, was suspended in 1% methylcellulose before administration. The LDA (10 mg/kg) was equivalent to the clinical dose of 100 mg daily for adults [20,21], and it was used in a similar study to demonstrate LDA-induced gastric toxicity in rats [21].\n4.3. Sample Collection The rats were transferred to individual metabolic cages three days before urine collection to acclimatize before sample collection [22]. Twenty-four-hour urine samples were collected on day 1 (pre-dosed) and day 28 (post-dosed) using the metabolic cage urine collector containing a preservative (0.5 mL of 100 mg/mL solution of sodium azide (NaN3)) [6]. The amount of each urine sample received was recorded, transferred into a 15 mL falcon tube, centrifuged at 2500× g for 10 min at 4 °C to remove particles [23] and aliquoted into two 2 mL microcentrifuge tubes. The aliquot and the remaining bulk of urine were stored in a −80 °C freezer until analysis by the proton nuclear magnetic resonance (1H-NMR) spectroscopy. All rats were euthanized on the 28th day of the experiment with an overdose of a ketamine/xylazine (91.0/9.1 mg/mL) cocktail. \nThe rats were transferred to individual metabolic cages three days before urine collection to acclimatize before sample collection [22]. Twenty-four-hour urine samples were collected on day 1 (pre-dosed) and day 28 (post-dosed) using the metabolic cage urine collector containing a preservative (0.5 mL of 100 mg/mL solution of sodium azide (NaN3)) [6]. The amount of each urine sample received was recorded, transferred into a 15 mL falcon tube, centrifuged at 2500× g for 10 min at 4 °C to remove particles [23] and aliquoted into two 2 mL microcentrifuge tubes. The aliquot and the remaining bulk of urine were stored in a −80 °C freezer until analysis by the proton nuclear magnetic resonance (1H-NMR) spectroscopy. All rats were euthanized on the 28th day of the experiment with an overdose of a ketamine/xylazine (91.0/9.1 mg/mL) cocktail. \n4.4. Stomach Preparation Four millilitres of 10% aqueous buffered formalin was instilled IG using oral gavage for in situ intraluminal fixations to preserve the integrity of the stomach tissue before opening up the abdominal cavity for stomach harvesting [24]. The stomachs were detached after five minutes in situ fixation and excised along the greater curvature. They are subsequently rinsed with cold saline and pinned on a polystyrene board with the mucosa facing upward to flatten the stomach. After that, the stomachs were dried using a manual blower. Drying was necessary to enhance sample visualization and prevent light reflection from the microscope. \nEach stomach sample was examined under the stereomicroscope (SZ61, Olympus Europa Holding GMBH, Hamburg, Germany). The software (Cellsens) was launched on a desktop monitor, and stomach images were captured. 
The image of the entire stomach could not be captured at once, even at the lowest zoom magnification (0.67×) and working distance (110 mm); hence, the segments were snapped and later stitched into a single image using Photoshop (Adobe Photoshop CS5 Version: 12.0), as recommended by the pathologist. The ulcerations and, likewise, the entire glandular stomach perimeter were measured with the aid of CellSens life science imaging software (Ver. 1.9 Olympus America, Inc., Center Valley, PA, USA). The stomachs were primarily classified based on the presence or absence of lesions (gastric toxicity).\nFour millilitres of 10% aqueous buffered formalin was instilled IG using oral gavage for in situ intraluminal fixations to preserve the integrity of the stomach tissue before opening up the abdominal cavity for stomach harvesting [24]. The stomachs were detached after five minutes in situ fixation and excised along the greater curvature. They are subsequently rinsed with cold saline and pinned on a polystyrene board with the mucosa facing upward to flatten the stomach. After that, the stomachs were dried using a manual blower. Drying was necessary to enhance sample visualization and prevent light reflection from the microscope. \nEach stomach sample was examined under the stereomicroscope (SZ61, Olympus Europa Holding GMBH, Hamburg, Germany). The software (Cellsens) was launched on a desktop monitor, and stomach images were captured. The image of the entire stomach could not be captured at once, even at the lowest zoom magnification (0.67×) and working distance (110 mm); hence, the segments were snapped and later stitched into a single image using Photoshop (Adobe Photoshop CS5 Version: 12.0), as recommended by the pathologist. The ulcerations and, likewise, the entire glandular stomach perimeter were measured with the aid of CellSens life science imaging software (Ver. 1.9 Olympus America, Inc., Center Valley, PA, USA). The stomachs were primarily classified based on the presence or absence of lesions (gastric toxicity).\n4.5. Pharmacometabolomics Analysis 1H-NMR spectra were acquired at 700.14 MHz using an ASCEND™ 700MHz NMR Spectrophotometer (Bruker BioSpin Corp., Rheinstetten, Germany). A day preceding NMR analysis, aliquots of the urine samples to be analyzed were transferred from the freezer (−8 °C) into the refrigerator (4 °C) and allowed to thaw overnight (at least 8 h) to avoid sample degradation resulting from abrupt defrosting [25]. The thawed aliquots of the samples were centrifuged (MIKRO 22 R, HettichZentrifugen®, Tuttlingen, Germany) at 12,000× g at 4 °C for 5 min to remove any insoluble sediment from the solution. Urine measuring 400 µL and 200 µL of phosphate buffer were mixed (2:1) in a microcentrifuge tube [26] and vortexed for a few seconds to ensure uniform mixing. Then, 550 µL was transferred to a 5 mm NMR tube using a pipette and securely closed by its cap (BRUKER®, BioSpin, Rheinstetten, Germany). \nTo ensure that an NMR tube was properly positioned, a sample gauge was used to align the NMR tube in the spinner. The tubes were then inserted into the respective sample holders and loaded into the spectrometer using Bruker’s IconNMR™ automation software. After the insertion of NMR tubes, the spectra of the 1H-NMR were acquired and processed using the automation interphase of IconNMR™ with TopSpin 3.5 (BRUKER®, BioSpin, Rheinstetten, Germany) software. 
The acquisition stages were locking, shimming and acquisition, and the data processing stages were Fourier Transform, phase correction and baseline correction. All spectra were acquired without spinning the sample. Each sample is given a lag time of five minutes (300 s) for thermal equilibration in the magnetic field before measuring 300 K. For each sample, the probe was automatically tuned and matched, and the magnetic field was locked on Urine+D2O and shimmed through a specifically optimized shim file for urine samples. Automatic 1H pulse calibration (pulsecal) was performed on each sample to reduce sample variability effects due to salt contents. 1H-NMR experiments were automated with the ICON NMR using the standardized acquisition and the processing parameters are as follows: pulse program (noesygppr1d), time domain (65536), dummy scans (4), scans (16), sweep width (20.5186 ppm), acquisition time (2.281 s), relaxation delay (4 s), receiver gain (12.70), dwell time (34.80 µs), mixing time (0.01 s), line broadening (0.30 Hz) and transmitter frequency offset (3289.90). The spectra were processed using Bruker Topspin 3.5 pl7.\n1H-NMR spectra were acquired at 700.14 MHz using an ASCEND™ 700MHz NMR Spectrophotometer (Bruker BioSpin Corp., Rheinstetten, Germany). A day preceding NMR analysis, aliquots of the urine samples to be analyzed were transferred from the freezer (−8 °C) into the refrigerator (4 °C) and allowed to thaw overnight (at least 8 h) to avoid sample degradation resulting from abrupt defrosting [25]. The thawed aliquots of the samples were centrifuged (MIKRO 22 R, HettichZentrifugen®, Tuttlingen, Germany) at 12,000× g at 4 °C for 5 min to remove any insoluble sediment from the solution. Urine measuring 400 µL and 200 µL of phosphate buffer were mixed (2:1) in a microcentrifuge tube [26] and vortexed for a few seconds to ensure uniform mixing. Then, 550 µL was transferred to a 5 mm NMR tube using a pipette and securely closed by its cap (BRUKER®, BioSpin, Rheinstetten, Germany). \nTo ensure that an NMR tube was properly positioned, a sample gauge was used to align the NMR tube in the spinner. The tubes were then inserted into the respective sample holders and loaded into the spectrometer using Bruker’s IconNMR™ automation software. After the insertion of NMR tubes, the spectra of the 1H-NMR were acquired and processed using the automation interphase of IconNMR™ with TopSpin 3.5 (BRUKER®, BioSpin, Rheinstetten, Germany) software. The acquisition stages were locking, shimming and acquisition, and the data processing stages were Fourier Transform, phase correction and baseline correction. All spectra were acquired without spinning the sample. Each sample is given a lag time of five minutes (300 s) for thermal equilibration in the magnetic field before measuring 300 K. For each sample, the probe was automatically tuned and matched, and the magnetic field was locked on Urine+D2O and shimmed through a specifically optimized shim file for urine samples. Automatic 1H pulse calibration (pulsecal) was performed on each sample to reduce sample variability effects due to salt contents. 
1H-NMR experiments were automated with the ICON NMR using the standardized acquisition and the processing parameters are as follows: pulse program (noesygppr1d), time domain (65536), dummy scans (4), scans (16), sweep width (20.5186 ppm), acquisition time (2.281 s), relaxation delay (4 s), receiver gain (12.70), dwell time (34.80 µs), mixing time (0.01 s), line broadening (0.30 Hz) and transmitter frequency offset (3289.90). The spectra were processed using Bruker Topspin 3.5 pl7.\n4.6. Statistical Analysis At the end of the experiments, the rats were classified into gastric toxic or non-gastric toxic based on the presence or absence of any gastric toxicity, respectively. Spectra were bucketed to 0.04 ppm using AMIX software (BRUKER®, Rheinstetten, Germany). Water (4.7–4.9 ppm) and urea (5.5–6.1 ppm) regions were excluded. The bucket table was then imported to SIMCA 14.1 software (MKS Umetrics® Sweden, Umeå, Sweden). Skewed data were log-transformed. The data were then scaled using Pareto scaling (Pr scaling). \nPrincipal component analysis (PCA) was conducted, and the score plot was examined to explore the behaviour of the data. Hotelling’s T2 plot was used to discriminate intrinsic outliers in each group. The orthogonal partial least squares discriminant analysis (OPLSDA) was utilized to test the association between the buckets and gastric toxicity. This initial discriminatory model was the profiling model. The misclassification table was used to indicate the sensitivity, specificity and accuracy of each model. The best differentiating model was selected based on the two goodness values: the goodness-of-fit (R2Y) and goodness-of-prediction (Q2Y). A large R2Y (close to 1) is a necessary condition for a good model. It indicates good reproducibility. Likewise, a Q2Y value >0.5 signifies good predictivity. The variance between the two goodness values should not be too significant to ensure the right prediction and to avoid overfitting.\nThe variable importance for the projection (VIP) plot was then generated. The VIP plot summarizes the significance of the variables both to explain X (the predictors) and to correlate to Y (the outcome). The value of the VIP score, which is greater than 1, is the typical rule for selecting variables that are important, relevant and potentially discriminating [27,28]. Therefore, buckets with VIP value > 1 were chosen for further analysis. These spectral buckets (with VIP values > 1) were copied to an excel sheet and sorted in ascending order. The corresponding spectra for each bucket were verified in Topspin, and spectral noise was excluded from true signals. The 3-(trimethyl-silyl) propionic acid (TSP) peak was defined as the reference, and the peaks were calibrated by reference to its peak.\nA new data table was created by copying the relative integrals with their corresponding chemical shift (ppm) into an Excel sheet. The TSP integral was excluded from the table to not affect the analysis (as it is only a reference) before importing to SIMCA. The data were also log-transformed, Pareto scaled and explored initially using PCA. OPLSDA was also applied, and VIP plots were generated. This second discriminatory model was the “Identification Model” (final model). The misclassification table was generated to show the proportion of correctly classified observations in the dataset. 
In SIMCA, the ability of a model to classify the individual subjects correctly or incorrectly is evaluated by the misclassification table tool [29,30].
Furthermore, the permutation plot (Y-scrambling) was used as an internal validation of the model. This compares the goodness tests (R2 and Q2) of the original model with those of several models generated by randomly permuting the Y-observations (the outcome) while keeping the X-matrix (the predictors) constant. The number of permutations was set to 100 [31]. This means that the model was randomly rebuilt and validated 100 times. The AUROC curve was also computed to visualize the performance of the discriminatory models. It serves as a quantitative measure of model performance. The performance parameter ranges between 0.5 (bad classification) and 1.0 (perfect classification).
4.7. Metabolites Identification Metabolites were identified by systematically exploring three major databases, namely the Biological Magnetic Resonance Data Bank (BMRB), the Human Metabolome Database (HMDB) and Chenomx NMR Suite 6.0 (Chenomx® Inc., Edmonton, AB, Canada). The search began with BMRB: the important chemical shifts identified from the identification model (previous step) were individually entered in the designated field for exploring metabolites in the BMRB database. This generated several matching peaks along with their corresponding metabolites. The matching metabolites were individually cross-referenced in the HMDB database to ascertain whether they occur in urine. The prospective urinary metabolites were further cross-matched in the Chenomx profiler to confirm their identity.", "Male Sprague-Dawley (SD) rats (250–300 g body weight) were obtained from the Animal Research Complex (ARC) of Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia and acclimatized to the animal research room for seven days. The rats were given access to Altromin-1320 maintenance diet for rats/mice (Altromin International, Lage, Germany) and water ad libitum. The rats were housed in a room maintained at a temperature of 20 ± 2 °C, a relative humidity of 55 ± 10% and a 12 h light/dark cycle throughout the study. The rats were kept in individual cages (1 rat per cage) when samples were not being taken but were placed in metabolic cages during the sample collection periods. The experimental protocol was approved by the Institutional Animal Care and Use Committees (IACUC).
All procedures were carried out according to the recommendations of IACUC.", "Two experimental groups were designed using a stratified randomization system based on rats’ bodyweight viz: group I (control, n = 10) and group II (treatment, n = 32). The rats in group I were administered 1% methylcellulose (10 mL/kg), while those in group II were administered low-dose aspirin, LDA (10 mg/kg) per oral for 28 days through the intra-gastric (IG) route with an oral gavage. Aspirin, sparingly soluble in water, was suspended in 1% methylcellulose before administration. The LDA (10 mg/kg) was equivalent to the clinical dose of 100 mg daily for adults [20,21], and it was used in a similar study to demonstrate LDA-induced gastric toxicity in rats [21].", "The rats were transferred to individual metabolic cages three days before urine collection to acclimatize before sample collection [22]. Twenty-four-hour urine samples were collected on day 1 (pre-dosed) and day 28 (post-dosed) using the metabolic cage urine collector containing a preservative (0.5 mL of 100 mg/mL solution of sodium azide (NaN3)) [6]. The amount of each urine sample received was recorded, transferred into a 15 mL falcon tube, centrifuged at 2500× g for 10 min at 4 °C to remove particles [23] and aliquoted into two 2 mL microcentrifuge tubes. The aliquot and the remaining bulk of urine were stored in a −80 °C freezer until analysis by the proton nuclear magnetic resonance (1H-NMR) spectroscopy. All rats were euthanized on the 28th day of the experiment with an overdose of a ketamine/xylazine (91.0/9.1 mg/mL) cocktail. ", "Four millilitres of 10% aqueous buffered formalin was instilled IG using oral gavage for in situ intraluminal fixations to preserve the integrity of the stomach tissue before opening up the abdominal cavity for stomach harvesting [24]. The stomachs were detached after five minutes in situ fixation and excised along the greater curvature. They are subsequently rinsed with cold saline and pinned on a polystyrene board with the mucosa facing upward to flatten the stomach. After that, the stomachs were dried using a manual blower. Drying was necessary to enhance sample visualization and prevent light reflection from the microscope. \nEach stomach sample was examined under the stereomicroscope (SZ61, Olympus Europa Holding GMBH, Hamburg, Germany). The software (Cellsens) was launched on a desktop monitor, and stomach images were captured. The image of the entire stomach could not be captured at once, even at the lowest zoom magnification (0.67×) and working distance (110 mm); hence, the segments were snapped and later stitched into a single image using Photoshop (Adobe Photoshop CS5 Version: 12.0), as recommended by the pathologist. The ulcerations and, likewise, the entire glandular stomach perimeter were measured with the aid of CellSens life science imaging software (Ver. 1.9 Olympus America, Inc., Center Valley, PA, USA). The stomachs were primarily classified based on the presence or absence of lesions (gastric toxicity).", "1H-NMR spectra were acquired at 700.14 MHz using an ASCEND™ 700MHz NMR Spectrophotometer (Bruker BioSpin Corp., Rheinstetten, Germany). A day preceding NMR analysis, aliquots of the urine samples to be analyzed were transferred from the freezer (−8 °C) into the refrigerator (4 °C) and allowed to thaw overnight (at least 8 h) to avoid sample degradation resulting from abrupt defrosting [25]. 
The thawed aliquots of the samples were centrifuged (MIKRO 22 R, HettichZentrifugen®, Tuttlingen, Germany) at 12,000× g at 4 °C for 5 min to remove any insoluble sediment from the solution. Urine measuring 400 µL and 200 µL of phosphate buffer were mixed (2:1) in a microcentrifuge tube [26] and vortexed for a few seconds to ensure uniform mixing. Then, 550 µL was transferred to a 5 mm NMR tube using a pipette and securely closed by its cap (BRUKER®, BioSpin, Rheinstetten, Germany). \nTo ensure that an NMR tube was properly positioned, a sample gauge was used to align the NMR tube in the spinner. The tubes were then inserted into the respective sample holders and loaded into the spectrometer using Bruker’s IconNMR™ automation software. After the insertion of NMR tubes, the spectra of the 1H-NMR were acquired and processed using the automation interphase of IconNMR™ with TopSpin 3.5 (BRUKER®, BioSpin, Rheinstetten, Germany) software. The acquisition stages were locking, shimming and acquisition, and the data processing stages were Fourier Transform, phase correction and baseline correction. All spectra were acquired without spinning the sample. Each sample is given a lag time of five minutes (300 s) for thermal equilibration in the magnetic field before measuring 300 K. For each sample, the probe was automatically tuned and matched, and the magnetic field was locked on Urine+D2O and shimmed through a specifically optimized shim file for urine samples. Automatic 1H pulse calibration (pulsecal) was performed on each sample to reduce sample variability effects due to salt contents. 1H-NMR experiments were automated with the ICON NMR using the standardized acquisition and the processing parameters are as follows: pulse program (noesygppr1d), time domain (65536), dummy scans (4), scans (16), sweep width (20.5186 ppm), acquisition time (2.281 s), relaxation delay (4 s), receiver gain (12.70), dwell time (34.80 µs), mixing time (0.01 s), line broadening (0.30 Hz) and transmitter frequency offset (3289.90). The spectra were processed using Bruker Topspin 3.5 pl7.", "At the end of the experiments, the rats were classified into gastric toxic or non-gastric toxic based on the presence or absence of any gastric toxicity, respectively. Spectra were bucketed to 0.04 ppm using AMIX software (BRUKER®, Rheinstetten, Germany). Water (4.7–4.9 ppm) and urea (5.5–6.1 ppm) regions were excluded. The bucket table was then imported to SIMCA 14.1 software (MKS Umetrics® Sweden, Umeå, Sweden). Skewed data were log-transformed. The data were then scaled using Pareto scaling (Pr scaling). \nPrincipal component analysis (PCA) was conducted, and the score plot was examined to explore the behaviour of the data. Hotelling’s T2 plot was used to discriminate intrinsic outliers in each group. The orthogonal partial least squares discriminant analysis (OPLSDA) was utilized to test the association between the buckets and gastric toxicity. This initial discriminatory model was the profiling model. The misclassification table was used to indicate the sensitivity, specificity and accuracy of each model. The best differentiating model was selected based on the two goodness values: the goodness-of-fit (R2Y) and goodness-of-prediction (Q2Y). A large R2Y (close to 1) is a necessary condition for a good model. It indicates good reproducibility. Likewise, a Q2Y value >0.5 signifies good predictivity. 
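The R2Y and Q2Y criteria just described are reported directly by SIMCA; the sketch below shows how comparable values can be estimated outside SIMCA, assuming a preprocessed bucket matrix X and a binary outcome y. Scikit-learn's PLSRegression is used as a stand-in because OPLS-DA itself has no scikit-learn implementation.

```python
# Illustrative R2Y (goodness-of-fit) and Q2Y (goodness-of-prediction) estimates
# using a two-component PLS model and leave-one-out cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def r2y_q2y(X: np.ndarray, y: np.ndarray, n_components: int = 2):
    y = y.astype(float)
    ss_tot = np.sum((y - y.mean()) ** 2)

    # R2Y: fit on all samples.
    fitted = PLSRegression(n_components=n_components).fit(X, y)
    ss_res = np.sum((y - fitted.predict(X).ravel()) ** 2)
    r2y = 1.0 - ss_res / ss_tot

    # Q2Y: predictive residual sum of squares (PRESS) from leave-one-out CV.
    press = 0.0
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], y[train])
        press += ((y[test] - model.predict(X[test]).ravel()) ** 2).sum()
    q2y = 1.0 - press / ss_tot
    return r2y, q2y
```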
The variance between the two goodness values should not be too significant to ensure the right prediction and to avoid overfitting.\nThe variable importance for the projection (VIP) plot was then generated. The VIP plot summarizes the significance of the variables both to explain X (the predictors) and to correlate to Y (the outcome). The value of the VIP score, which is greater than 1, is the typical rule for selecting variables that are important, relevant and potentially discriminating [27,28]. Therefore, buckets with VIP value > 1 were chosen for further analysis. These spectral buckets (with VIP values > 1) were copied to an excel sheet and sorted in ascending order. The corresponding spectra for each bucket were verified in Topspin, and spectral noise was excluded from true signals. The 3-(trimethyl-silyl) propionic acid (TSP) peak was defined as the reference, and the peaks were calibrated by reference to its peak.\nA new data table was created by copying the relative integrals with their corresponding chemical shift (ppm) into an Excel sheet. The TSP integral was excluded from the table to not affect the analysis (as it is only a reference) before importing to SIMCA. The data were also log-transformed, Pareto scaled and explored initially using PCA. OPLSDA was also applied, and VIP plots were generated. This second discriminatory model was the “Identification Model” (final model). The misclassification table was generated to show the proportion of correctly classified observations in the dataset. In SIMCA, the ability of a model to classify the individual subjects correctly or incorrectly is evaluated by the misclassification table tool [29,30].\nFurthermore, the permutation plot (Y-scrambling) was used as an internal validation of the model. This compares the goodness tests (R2 and Q2) of the original model with the goodness test of several generated models by randomly permutating Y-observations (the outcome) while keeping the X-matrix (the predictors) constant. The number of permutations was set to 100 [31]. This means that the model was randomly built and validated 100 times. The AUROC curve was also computed to visualize the performance of the discriminatory models. It serves as a quantitative measure of the performance of the model. The performance parameter ranges between 0.5 (bad classification) and 1.0 (perfect classification).", "Metabolites were identified by systematically exploring three major databases, namely Biological Magnetic Resonance Data Bank (BMRB), Human Metabolome Database (HMDB) and Chenomx NMR Suite 6.0 (Chenomx® Inc., Edmonton, AB, Canada). The quest begins by first exploring BMRB. The important chemical shifts identified from the identification model (previous step) were individually inputted in the designated field for exploring metabolites in the BMRB database. This generated several matching peaks along with their corresponding metabolites. The generated matching metabolites were individually cross-referenced in the HMDB database to ascertain their availability in the urine. The prospective metabolites available in the urine were further cross-matched in the Chenomx profiler to ascertain their identity.", "The pharmacometabolomic analysis of the pre-dose 1H-NMR urine spectra identified metabolic signatures that correlated with the development of LDA-induced gastric toxicity and could predict gastric toxicity related to LDA. 
Citrate, hippurate, methylamine, trimethylamine N-oxide, and alpha-keto-glutarate were the putative metabolites identified and possibly implicated in LDA-induced gastric toxicity. The final model demonstrated good discriminatory properties, reproducibility and limited predictive capacity. This pharmacometabolomic approach can be translated to predict gastric toxicity in CAD patients when validated in humans." ]
[ "intro", "results", null, null, null, null, "discussion", null, null, null, null, null, null, null, null, "conclusions" ]
[ "aspirin", "pharmacometabolomic", "nuclear magnetic resonance", "spectroscopy", "gastric toxicity", "multivariate analysis" ]
1. Introduction: Coronary artery disease (CAD) is a leading cause of cardiovascular disease (CVD) related morbidity and mortality globally [1]. Low-dose aspirin (LDA) is the mainstay for the secondary prevention of CAD [2]. Aspirin inhibits platelet activity by irreversibly deactivating cyclooxygenase-I (COX-1), leading to the inhibition of platelet thromboxane-A2 (TXA2) production and TXA2-mediated platelet activation [3]. The activity of Aspirin on TXA2 explains its distinct efficacy in preventing atherothrombosis and shared gastrointestinal (GI) side effects with other antiplatelets [3]. Alternative antiplatelets or co-administration with gastro-protective agents are presently the most common strategies to reduce Aspirin-induced GI side effects [4,5]. However, alternative Aspirin use is limited with cost burden, pill burden and decreased effectiveness, necessitating the need for more cost-effective strategies. There are limited studies on strategies, such as pharmacometabonomics, that predict the manifestation of gastric toxicity prior to LDA dosing. Pharmacometabolomics is a fast, economical and less invasive approach to predict drug-induced toxicity and complements personalized therapy. Proton nuclear magnetic resonance (1H-NMR) spectroscopy is a relatively new methodology for predicting drug effects using pre-dosed biomarkers of biofluids. NMR spectroscopy-based pharmacometabolomics is defined as “the prediction of the outcome (e.g., toxicity or efficacy) of a drug or xenobiotic in individuals, based on a mathematical model of pre-intervention metabolite signatures” [6]. NMR spectroscopy is the “gold standard” in pharmacometabolomics because of its non-destructive nature and enables the observation of the dynamics, partition and the quantification of metabolites in bio-samples. The pharmacometabolomics combines 1H-NMR and multivariate analysis in order to provide a detailed examination of the changes in the metabolic signatures of bio-samples. Therefore, this study aimed to identify metabolites that predict LDA-induced gastric toxicity using 1H-NMR-based pharmacometabonomics in rats. 2. Results: 2.1. Gastric Toxicity At the end of the low dose aspirin dosing period (28 days), none of the rats 0/10 (0%) in the vehicle-treated group had any form of gastric toxicity. However, most rats 20/32 (62.5%) in the LDA-treated group developed gastric toxicity. Representative samples of the gastric toxicity are shown in Figure 1. At the end of the low dose aspirin dosing period (28 days), none of the rats 0/10 (0%) in the vehicle-treated group had any form of gastric toxicity. However, most rats 20/32 (62.5%) in the LDA-treated group developed gastric toxicity. Representative samples of the gastric toxicity are shown in Figure 1. 2.2. Pre-Dose Profiling Models Principal Component Analysis (PCA) did not show clear discriminations between the groups. However, the Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) score plots for the profiling model displayed clear discrimination between gastric toxic and non-toxic groups, as shown in Figure 2. The model had a goodness-of-fit value (R2Y) of 0.947 (very close to 1). However, the goodness-of-prediction value (Q2Y) of −0.185 (<0.4) indicates that the model has a poor predictive capacity. The model has perfect (100%) sensitivity, specificity and accuracy values. It also has a perfect AUROC curve value of 1. The permutation test provided an R2Y intercept value of 0.919 and a Q2Y intercept value of −1.02 (Figure 3A). 
There is an overlap between the red and blue lines of the AUROC curve because the AUC value is 1 in both cases (Figure 3B). The number of variables with a VIP value >1 was 72, as highlighted in Table 1. Further examination and exclusion of spectral noise using Topspin resulted in 10 regions ascertained as true signals; these were integrated in Topspin before uploading to SIMCA for further screening and identification of useful discriminating metabolites. 2.3. Pre-Dosed Identification Models PCA did not show clear discrimination between the two groups in Figure 4A. Except for two gastric toxic samples (G12 and G3) misclassified into the non-gastric toxic group, the pre-dose urine OPLSDA model successfully segregated the samples into gastric toxic and non-toxic groups, as shown in Figure 4B. The goodness values are also summarized in Table 1. The R2Y value was 0.726, and the Q2Y value was 0.142. The sensitivity, specificity and accuracy of the identification model were 100%, 95% and 96.88%, respectively. The model also has an AUROC value of 1. Furthermore, the model was internally validated using the permutation plot (Figure 5). The pre-dose rat urine identification model passed almost all validity criteria plots. The majority of the Q2 values to the left are lower than the original Q2 point on the right. The blue regression line of the Q2 points intersects the vertical axis (on the left) below 0 (−0.875). Moreover, most of the R2 values (to the left) are lower than the original R2 value. 2.4. Identification of Biomarkers to Predict LDA-Induced Gastric Toxicity After searches in the three databases, the ten regions of the pre-dose rats’ urine were identified to correspond to five metabolites, as highlighted in Table 2. These five metabolites were identified as putative biomarkers that may predict LDA-induced gastric toxicity. An example of the spectral differences in the metabolites identified between the gastric toxic and non-gastric toxic rats is depicted in Figure 6. The triplet at 2.431–2.459 ppm belonging to alpha-keto-glutarate, the doublet at 2.531–2.571 ppm belonging to citrate and a doublet at 3.965–3.981 ppm belonging to hippurate are used to show the differences. 3. Discussion: The study demonstrated the robustness of the model developed for pre-dosed prediction of LDA-induced gastric toxicity. The model reaffirms the assertion by Szymańska and colleagues that the area under the receiver operating characteristic curve (AUROC) or the number of misclassifications is more precise in detecting biomarkers responsible for 2-class differentiation or discrimination [7]. They also note that Q2 values are not very good as diagnostic statistics in discriminatory analysis models such as orthogonal projections to latent structures discriminant analysis (OPLS-DA) [7]. The goodness-of-fit (R2Y) value of 0.947 indicates the near-perfect reproducibility of the model. However, the goodness-of-prediction (Q2Y) value of −0.185 (less than the recommended 0.4 for biological models) shows that the model has poor predictive capacity [8]. Goodness-of-prediction values even lower than 0.3 have been reported in several metabolomic studies [9,10]. In such instances, it is recommended that the models are further assessed using permutation tests [10,11]. Although the AUROC value of 1 confirms the validity of the model, internal validation by a permutation test indicates overfitting in the model. This may be due to the limited number of samples available to perform an external validation of the model. Based on the aforementioned statistical parameters, this model may be suitable for predicting LDA-induced gastric toxicity when externally validated with independent data. Previous researchers stated that models with good sensitivity and specificity are suited for both screening and confirmation of disease [12]. Therefore, the diagnostic statistics qualify the model for developing a diagnostic kit, which can be used clinically to screen patients with a propensity to develop LDA-induced gastric toxicity once validated in a human study. Logically, a minimal number of discriminating metabolites will promote their clinical acceptance and utility. Reducing the crucial bins from 72 to 10 makes the model more clinically relevant.
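The diagnostic statistics referred to above (sensitivity, specificity, accuracy and the AUROC) can be reproduced from any classifier's predictions; a brief sketch, assuming arrays y_true (0 = non-toxic, 1 = toxic) and scores (continuous predicted values, e.g., cross-validated PLS-DA predictions):

```python
# Sketch: diagnostic statistics for a two-class model from true labels and
# continuous predicted scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def diagnostics(y_true: np.ndarray, scores: np.ndarray, threshold: float = 0.5):
    y_pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)           # toxic rats correctly flagged
    specificity = tn / (tn + fp)           # non-toxic rats correctly cleared
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    auroc = roc_auc_score(y_true, scores)  # 0.5 = chance, 1.0 = perfect
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "AUROC": auroc}
```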
When validated with human data, the five identified metabolites (corresponding to the ten significant bins) from the final model may be used to develop a diagnostic kit that can be clinically useful. The citric acid (citrate) is a weak acid that is formed endogenously in the tricarboxylic acid (TCA) cycle or consumed through some foods. The TCA cycle is also known as the citric acid cycle. NSAIDs have been found to cause the opening of "mitochondrial permeability transition pore", consequently leading to the uncoupling of oxidative phosphorylation, increasing the resting state respiration and disrupting the mitochondrial transmembrane potential. These NSAID-induced changes play a significant function in initiating tissue damage [13]. Takeuchi and colleagues [13] found no changes in serum citrate concentration after administering NSAIDs, including aspirin. They, however, found a decrease in citrate levels in stomach tissue extracts compared with controls. They, therefore, deduced two events to be associated with NSAID-induced gastric injury: hyperactivity of collagenase in the stomach and a decrease in levels of citrate (and other metabolites) as indicators of altered TCA cycle activity, which is a mitochondrial pathway. Other researchers [14] found citrate to be statistically reduced in the NSAID-induced gastric damage group compared to the control. Moreover, Takeuchi, et al. [15] reported no change in the serum levels of citrate after low-dose aspirin. Methylamine is an endogenous metabolite resulting from the breakdown of amine. Its tissue level is found to increase in some disease conditions such as diabetes mellitus. The levels of methylamine and ammonia are mutually controlled by a multi-functional enzyme known as semicarbazide-sensitive amine oxidase (SSAO). The activity of SSAO deaminates methylamine to formaldehyde, thereby producing ammonia and hydrogen peroxide. An increase in serum SSAO activity has been reported in patients with some disease conditions, including diabetes mellitus, Alzheimer’s disease and vascular disorders. The deamination of methylamine catalyzed by SSAO results in the production of toxic formaldehyde. In this study, methylamine is one of the predictors of LDA-induced gastric toxicity in the urine. The exact mechanism/pathway linking aspirin-induced GI toxicity and methylamine has not been established. Perhaps future studies can focus on identifying the links that may show other pathways related to aspirin-induced GI toxicity. Trimethylamine N-oxide (TMAO) results from the oxidation of trimethylamine. It is a common metabolite in both humans and animals. Specifically, TMAO is endogenously synthesized from trimethylamine derived from choline. Choline is usually obtained from either dietary lecithin or carnitine [16]. A link between blood and urinary levels of TMAO and gut microbiota has been established [17]. The concentration of TMAO increases if the number of bacteria that convert the trimethylamine to TMAO in the gut increases. It can, therefore, be inferred from this that subjects having a higher number of microbes that promote the synthesis of TMAO are also at an increased risk of developing aspirin-induced gastric toxicity. Previous researchers reported that NSAIDs increase the level of TMAO compared to controls [14]. Hippuric acid is a product of the conjugation of benzoic acid with glycine. It is referred to as an acyl glycine. Acyl glycines are synthesized through an enzyme known as glycine N-acyl transferase. 
Hippuric acid is a common constituent of urine, and its quantity is increased with an increase in the intake of phenolic compounds such as tea, fruit juices and wine. These phenols are changed to benzoic acid, which is subsequently converted to hippuric acid and excreted in urine. Gastrointestinal microflora appears to be responsible for quinic acid metabolism in hippuric acid. Indomethacin has been found to cause a decrease in hippurate, possibly due to the disruption of the normal microorganisms in the gastrointestinal tract [18]. One of two ketone derivatives of glutaric acid is alpha-ketoglutaric acid, also known as 2-oxoglutaric acid. When used without qualification, the word “ketoglutaric acid” usually always refers to the alpha version. The only difference between beta ketoglutaric acid and other ketoglutaric acids is the position of the ketone functional group, and it is significantly less prevalent. Alpha-ketoglutarate, commonly known as 2-oxoglutarate, is a biologically significant carboxylate. It is a keto acid formed when glutamate is deaminated, and it is an intermediate in the Krebs cycle [19]. 4. Materials and Methods: 4.1. Animals Male Sprague-Dawley (SD) rats (250–300 g body weight) were obtained from the Animal Research Complex (ARC) of Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia and acclimatized to the animal research room for seven days. The rats were given access to Altromin-1320 maintenance diet for rats/mice (Altromin International, Lage, Germany) and water ad libitum. The rats were housed in a room maintained at a temperature of 20 ± 2 °C, relative humidity of 55 ± 10%, respectively, and a 12 h light/dark cycle throughout the study. The rats were kept in individual cages (1 rat per cage) when the samples were not taken but placed in metabolic cages during the periods for sample collection. The experimental protocol was approved by the Institutional Animal Care and Use Committees (IACUC). All procedures were carried out according to the recommendations of IACUC. Male Sprague-Dawley (SD) rats (250–300 g body weight) were obtained from the Animal Research Complex (ARC) of Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia and acclimatized to the animal research room for seven days. The rats were given access to Altromin-1320 maintenance diet for rats/mice (Altromin International, Lage, Germany) and water ad libitum. The rats were housed in a room maintained at a temperature of 20 ± 2 °C, relative humidity of 55 ± 10%, respectively, and a 12 h light/dark cycle throughout the study. The rats were kept in individual cages (1 rat per cage) when the samples were not taken but placed in metabolic cages during the periods for sample collection. The experimental protocol was approved by the Institutional Animal Care and Use Committees (IACUC). All procedures were carried out according to the recommendations of IACUC. 4.2. Experimental Protocol Two experimental groups were designed using a stratified randomization system based on rats’ bodyweight viz: group I (control, n = 10) and group II (treatment, n = 32). The rats in group I were administered 1% methylcellulose (10 mL/kg), while those in group II were administered low-dose aspirin, LDA (10 mg/kg) per oral for 28 days through the intra-gastric (IG) route with an oral gavage. Aspirin, sparingly soluble in water, was suspended in 1% methylcellulose before administration. 
The LDA (10 mg/kg) was equivalent to the clinical dose of 100 mg daily for adults [20,21], and it was used in a similar study to demonstrate LDA-induced gastric toxicity in rats [21].
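The stated equivalence between the 10 mg/kg rat dose and a 100 mg adult daily dose can be checked with the common body-surface-area scaling (Km factors of 6 for rat and 37 for human); a quick illustrative calculation, assuming a 60 kg adult, which is not a value taken from the original study:

```python
# Human-equivalent dose (HED) check: HED = animal dose x (Km_animal / Km_human).
rat_dose_mg_per_kg = 10.0
km_rat, km_human = 6.0, 37.0           # standard Km conversion factors
adult_weight_kg = 60.0                 # illustrative adult body weight

hed_mg_per_kg = rat_dose_mg_per_kg * (km_rat / km_human)   # ~1.6 mg/kg
adult_dose_mg = hed_mg_per_kg * adult_weight_kg            # ~97 mg, i.e. ~100 mg daily
print(round(hed_mg_per_kg, 2), round(adult_dose_mg, 1))
```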
4.3. Sample Collection The rats were transferred to individual metabolic cages three days before urine collection to acclimatize before sample collection [22]. Twenty-four-hour urine samples were collected on day 1 (pre-dosed) and day 28 (post-dosed) using the metabolic cage urine collector containing a preservative (0.5 mL of a 100 mg/mL solution of sodium azide (NaN3)) [6]. The amount of each urine sample received was recorded, transferred into a 15 mL falcon tube, centrifuged at 2500× g for 10 min at 4 °C to remove particles [23] and aliquoted into two 2 mL microcentrifuge tubes. The aliquots and the remaining bulk of urine were stored in a −80 °C freezer until analysis by proton nuclear magnetic resonance (1H-NMR) spectroscopy. All rats were euthanized on the 28th day of the experiment with an overdose of a ketamine/xylazine (91.0/9.1 mg/mL) cocktail. 4.4. Stomach Preparation Four millilitres of 10% aqueous buffered formalin was instilled IG using oral gavage for in situ intraluminal fixation to preserve the integrity of the stomach tissue before opening the abdominal cavity for stomach harvesting [24]. The stomachs were detached after five minutes of in situ fixation and excised along the greater curvature. They were subsequently rinsed with cold saline and pinned on a polystyrene board with the mucosa facing upward to flatten the stomach. After that, the stomachs were dried using a manual blower. Drying was necessary to enhance sample visualization and prevent light reflection from the microscope. Each stomach sample was examined under a stereomicroscope (SZ61, Olympus Europa Holding GMBH, Hamburg, Germany). The software (CellSens) was launched on a desktop monitor, and stomach images were captured. The image of the entire stomach could not be captured at once, even at the lowest zoom magnification (0.67×) and working distance (110 mm); hence, the segments were snapped and later stitched into a single image using Photoshop (Adobe Photoshop CS5, Version 12.0), as recommended by the pathologist. The ulcerations and, likewise, the entire glandular stomach perimeter were measured with the aid of CellSens life science imaging software (Ver. 1.9, Olympus America, Inc., Center Valley, PA, USA). The stomachs were primarily classified based on the presence or absence of lesions (gastric toxicity). 4.5. Pharmacometabolomics Analysis 1H-NMR spectra were acquired at 700.14 MHz using an ASCEND™ 700 MHz NMR spectrometer (Bruker BioSpin Corp., Rheinstetten, Germany). A day preceding NMR analysis, aliquots of the urine samples to be analyzed were transferred from the freezer (−80 °C) into the refrigerator (4 °C) and allowed to thaw overnight (at least 8 h) to avoid sample degradation resulting from abrupt defrosting [25]. The thawed aliquots of the samples were centrifuged (MIKRO 22 R, HettichZentrifugen®, Tuttlingen, Germany) at 12,000× g at 4 °C for 5 min to remove any insoluble sediment from the solution. Urine (400 µL) and phosphate buffer (200 µL) were mixed (2:1) in a microcentrifuge tube [26] and vortexed for a few seconds to ensure uniform mixing. Then, 550 µL was transferred to a 5 mm NMR tube using a pipette and securely closed with its cap (BRUKER®, BioSpin, Rheinstetten, Germany). To ensure that an NMR tube was properly positioned, a sample gauge was used to align the NMR tube in the spinner. The tubes were then inserted into the respective sample holders and loaded into the spectrometer using Bruker’s IconNMR™ automation software. After the insertion of the NMR tubes, the 1H-NMR spectra were acquired and processed using the automation interface of IconNMR™ with TopSpin 3.5 (BRUKER®, BioSpin, Rheinstetten, Germany) software. The acquisition stages were locking, shimming and acquisition, and the data processing stages were Fourier transform, phase correction and baseline correction. All spectra were acquired without spinning the sample. Each sample was given a lag time of five minutes (300 s) for thermal equilibration in the magnetic field before measurement at 300 K. For each sample, the probe was automatically tuned and matched, and the magnetic field was locked on Urine+D2O and shimmed through a specifically optimized shim file for urine samples. Automatic 1H pulse calibration (pulsecal) was performed on each sample to reduce sample variability effects due to salt content. 1H-NMR experiments were automated with ICON NMR using standardized acquisition and processing parameters as follows: pulse program (noesygppr1d), time domain (65536), dummy scans (4), scans (16), sweep width (20.5186 ppm), acquisition time (2.281 s), relaxation delay (4 s), receiver gain (12.70), dwell time (34.80 µs), mixing time (0.01 s), line broadening (0.30 Hz) and transmitter frequency offset (3289.90 Hz). The spectra were processed using Bruker Topspin 3.5 pl7. 4.6. Statistical Analysis At the end of the experiments, the rats were classified into gastric toxic or non-gastric toxic based on the presence or absence of any gastric toxicity, respectively. Spectra were bucketed to 0.04 ppm using AMIX software (BRUKER®, Rheinstetten, Germany). Water (4.7–4.9 ppm) and urea (5.5–6.1 ppm) regions were excluded. The bucket table was then imported into SIMCA 14.1 software (MKS Umetrics®, Umeå, Sweden). Skewed data were log-transformed. The data were then scaled using Pareto scaling (Pr scaling). Principal component analysis (PCA) was conducted, and the score plot was examined to explore the behaviour of the data. Hotelling’s T2 plot was used to detect intrinsic outliers in each group. Orthogonal partial least squares discriminant analysis (OPLSDA) was used to test the association between the buckets and gastric toxicity. This initial discriminatory model was the profiling model. The misclassification table was used to report the sensitivity, specificity and accuracy of each model. The best differentiating model was selected based on two goodness values: the goodness-of-fit (R2Y) and the goodness-of-prediction (Q2Y). A large R2Y (close to 1) is a necessary condition for a good model and indicates good reproducibility. Likewise, a Q2Y value >0.5 signifies good predictivity. The difference between the two goodness values should not be too large, to ensure sound prediction and avoid overfitting. The variable importance for the projection (VIP) plot was then generated. The VIP plot summarizes the importance of the variables both in explaining X (the predictors) and in correlating with Y (the outcome). A VIP score greater than 1 is the typical rule for selecting variables that are important, relevant and potentially discriminating [27,28]. Therefore, buckets with a VIP value > 1 were chosen for further analysis (a sketch of this selection step is given below). These spectral buckets were copied to an Excel sheet and sorted in ascending order. The corresponding spectra for each bucket were verified in Topspin, and spectral noise was excluded from true signals. The 3-(trimethylsilyl)propionic acid (TSP) peak was defined as the reference, and all peaks were calibrated against it. A new data table was created by copying the relative integrals with their corresponding chemical shifts (ppm) into an Excel sheet. The TSP integral was excluded from the table so as not to affect the analysis (as it is only a reference) before importing into SIMCA. The data were again log-transformed, Pareto scaled and explored initially using PCA. OPLSDA was then applied, and VIP plots were generated. This second discriminatory model was the “Identification Model” (final model). The misclassification table was generated to show the proportion of correctly classified observations in the dataset. In SIMCA, the ability of a model to classify the individual subjects correctly or incorrectly is evaluated by the misclassification table tool [29,30].
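A minimal sketch of the model-building and VIP-based bucket selection described above. SIMCA's OPLS-DA is not available in scikit-learn, so a plain PLS-DA (PLSRegression on a 0/1 outcome) is used as a stand-in; X is assumed to be the preprocessed bucket matrix with ppm values as column labels and y the toxicity label.

```python
# Sketch: PLS-DA as a stand-in for OPLS-DA, with VIP scores used to pick
# candidate buckets (VIP > 1), mirroring the selection rule described above.
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls: PLSRegression) -> np.ndarray:
    t = pls.x_scores_            # (n_samples, n_components)
    w = pls.x_weights_           # (n_features, n_components)
    q = pls.y_loadings_          # (n_targets, n_components)
    p = w.shape[0]
    # Sum of squares of y explained by each component.
    ss = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * ((w_norm ** 2) @ ss) / ss.sum())

def select_buckets(X: pd.DataFrame, y: np.ndarray, n_components: int = 2):
    pls = PLSRegression(n_components=n_components).fit(X.values, y.astype(float))
    vip = pd.Series(vip_scores(pls), index=X.columns.astype(float), name="VIP")
    # Keep buckets with VIP > 1, sorted in ascending ppm order for inspection.
    return vip[vip > 1].sort_index()
```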
Furthermore, the permutation plot (Y-scrambling) was used as an internal validation of the model. This compares the goodness tests (R2 and Q2) of the original model with the goodness test of several generated models by randomly permutating Y-observations (the outcome) while keeping the X-matrix (the predictors) constant. The number of permutations was set to 100 [31]. This means that the model was randomly built and validated 100 times. The AUROC curve was also computed to visualize the performance of the discriminatory models. It serves as a quantitative measure of the performance of the model. The performance parameter ranges between 0.5 (bad classification) and 1.0 (perfect classification). At the end of the experiments, the rats were classified into gastric toxic or non-gastric toxic based on the presence or absence of any gastric toxicity, respectively. Spectra were bucketed to 0.04 ppm using AMIX software (BRUKER®, Rheinstetten, Germany). Water (4.7–4.9 ppm) and urea (5.5–6.1 ppm) regions were excluded. The bucket table was then imported to SIMCA 14.1 software (MKS Umetrics® Sweden, Umeå, Sweden). Skewed data were log-transformed. The data were then scaled using Pareto scaling (Pr scaling). Principal component analysis (PCA) was conducted, and the score plot was examined to explore the behaviour of the data. Hotelling’s T2 plot was used to discriminate intrinsic outliers in each group. The orthogonal partial least squares discriminant analysis (OPLSDA) was utilized to test the association between the buckets and gastric toxicity. This initial discriminatory model was the profiling model. The misclassification table was used to indicate the sensitivity, specificity and accuracy of each model. The best differentiating model was selected based on the two goodness values: the goodness-of-fit (R2Y) and goodness-of-prediction (Q2Y). A large R2Y (close to 1) is a necessary condition for a good model. It indicates good reproducibility. Likewise, a Q2Y value >0.5 signifies good predictivity. The variance between the two goodness values should not be too significant to ensure the right prediction and to avoid overfitting. The variable importance for the projection (VIP) plot was then generated. The VIP plot summarizes the significance of the variables both to explain X (the predictors) and to correlate to Y (the outcome). The value of the VIP score, which is greater than 1, is the typical rule for selecting variables that are important, relevant and potentially discriminating [27,28]. Therefore, buckets with VIP value > 1 were chosen for further analysis. These spectral buckets (with VIP values > 1) were copied to an excel sheet and sorted in ascending order. The corresponding spectra for each bucket were verified in Topspin, and spectral noise was excluded from true signals. The 3-(trimethyl-silyl) propionic acid (TSP) peak was defined as the reference, and the peaks were calibrated by reference to its peak. A new data table was created by copying the relative integrals with their corresponding chemical shift (ppm) into an Excel sheet. The TSP integral was excluded from the table to not affect the analysis (as it is only a reference) before importing to SIMCA. The data were also log-transformed, Pareto scaled and explored initially using PCA. OPLSDA was also applied, and VIP plots were generated. This second discriminatory model was the “Identification Model” (final model). The misclassification table was generated to show the proportion of correctly classified observations in the dataset. 
In SIMCA, the ability of a model to classify the individual subjects correctly or incorrectly is evaluated by the misclassification table tool [29,30]. Furthermore, the permutation plot (Y-scrambling) was used as an internal validation of the model. This compares the goodness tests (R2 and Q2) of the original model with the goodness test of several generated models by randomly permutating Y-observations (the outcome) while keeping the X-matrix (the predictors) constant. The number of permutations was set to 100 [31]. This means that the model was randomly built and validated 100 times. The AUROC curve was also computed to visualize the performance of the discriminatory models. It serves as a quantitative measure of the performance of the model. The performance parameter ranges between 0.5 (bad classification) and 1.0 (perfect classification). 4.7. Metabolites Identification Metabolites were identified by systematically exploring three major databases, namely Biological Magnetic Resonance Data Bank (BMRB), Human Metabolome Database (HMDB) and Chenomx NMR Suite 6.0 (Chenomx® Inc., Edmonton, AB, Canada). The quest begins by first exploring BMRB. The important chemical shifts identified from the identification model (previous step) were individually inputted in the designated field for exploring metabolites in the BMRB database. This generated several matching peaks along with their corresponding metabolites. The generated matching metabolites were individually cross-referenced in the HMDB database to ascertain their availability in the urine. The prospective metabolites available in the urine were further cross-matched in the Chenomx profiler to ascertain their identity. Metabolites were identified by systematically exploring three major databases, namely Biological Magnetic Resonance Data Bank (BMRB), Human Metabolome Database (HMDB) and Chenomx NMR Suite 6.0 (Chenomx® Inc., Edmonton, AB, Canada). The quest begins by first exploring BMRB. The important chemical shifts identified from the identification model (previous step) were individually inputted in the designated field for exploring metabolites in the BMRB database. This generated several matching peaks along with their corresponding metabolites. The generated matching metabolites were individually cross-referenced in the HMDB database to ascertain their availability in the urine. The prospective metabolites available in the urine were further cross-matched in the Chenomx profiler to ascertain their identity. 4.1. Animals: Male Sprague-Dawley (SD) rats (250–300 g body weight) were obtained from the Animal Research Complex (ARC) of Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia and acclimatized to the animal research room for seven days. The rats were given access to Altromin-1320 maintenance diet for rats/mice (Altromin International, Lage, Germany) and water ad libitum. The rats were housed in a room maintained at a temperature of 20 ± 2 °C, relative humidity of 55 ± 10%, respectively, and a 12 h light/dark cycle throughout the study. The rats were kept in individual cages (1 rat per cage) when the samples were not taken but placed in metabolic cages during the periods for sample collection. The experimental protocol was approved by the Institutional Animal Care and Use Committees (IACUC). All procedures were carried out according to the recommendations of IACUC. 4.2. 
Experimental Protocol: Two experimental groups were designed using a stratified randomization system based on rats’ bodyweight viz: group I (control, n = 10) and group II (treatment, n = 32). The rats in group I were administered 1% methylcellulose (10 mL/kg), while those in group II were administered low-dose aspirin, LDA (10 mg/kg) per oral for 28 days through the intra-gastric (IG) route with an oral gavage. Aspirin, sparingly soluble in water, was suspended in 1% methylcellulose before administration. The LDA (10 mg/kg) was equivalent to the clinical dose of 100 mg daily for adults [20,21], and it was used in a similar study to demonstrate LDA-induced gastric toxicity in rats [21]. 4.3. Sample Collection: The rats were transferred to individual metabolic cages three days before urine collection to acclimatize before sample collection [22]. Twenty-four-hour urine samples were collected on day 1 (pre-dosed) and day 28 (post-dosed) using the metabolic cage urine collector containing a preservative (0.5 mL of 100 mg/mL solution of sodium azide (NaN3)) [6]. The amount of each urine sample received was recorded, transferred into a 15 mL falcon tube, centrifuged at 2500× g for 10 min at 4 °C to remove particles [23] and aliquoted into two 2 mL microcentrifuge tubes. The aliquot and the remaining bulk of urine were stored in a −80 °C freezer until analysis by the proton nuclear magnetic resonance (1H-NMR) spectroscopy. All rats were euthanized on the 28th day of the experiment with an overdose of a ketamine/xylazine (91.0/9.1 mg/mL) cocktail. 4.4. Stomach Preparation: Four millilitres of 10% aqueous buffered formalin was instilled IG using oral gavage for in situ intraluminal fixations to preserve the integrity of the stomach tissue before opening up the abdominal cavity for stomach harvesting [24]. The stomachs were detached after five minutes in situ fixation and excised along the greater curvature. They are subsequently rinsed with cold saline and pinned on a polystyrene board with the mucosa facing upward to flatten the stomach. After that, the stomachs were dried using a manual blower. Drying was necessary to enhance sample visualization and prevent light reflection from the microscope. Each stomach sample was examined under the stereomicroscope (SZ61, Olympus Europa Holding GMBH, Hamburg, Germany). The software (Cellsens) was launched on a desktop monitor, and stomach images were captured. The image of the entire stomach could not be captured at once, even at the lowest zoom magnification (0.67×) and working distance (110 mm); hence, the segments were snapped and later stitched into a single image using Photoshop (Adobe Photoshop CS5 Version: 12.0), as recommended by the pathologist. The ulcerations and, likewise, the entire glandular stomach perimeter were measured with the aid of CellSens life science imaging software (Ver. 1.9 Olympus America, Inc., Center Valley, PA, USA). The stomachs were primarily classified based on the presence or absence of lesions (gastric toxicity). 4.5. Pharmacometabolomics Analysis: 1H-NMR spectra were acquired at 700.14 MHz using an ASCEND™ 700MHz NMR Spectrophotometer (Bruker BioSpin Corp., Rheinstetten, Germany). A day preceding NMR analysis, aliquots of the urine samples to be analyzed were transferred from the freezer (−8 °C) into the refrigerator (4 °C) and allowed to thaw overnight (at least 8 h) to avoid sample degradation resulting from abrupt defrosting [25]. 
The thawed aliquots of the samples were centrifuged (MIKRO 22 R, HettichZentrifugen®, Tuttlingen, Germany) at 12,000× g at 4 °C for 5 min to remove any insoluble sediment from the solution. Urine (400 µL) and phosphate buffer (200 µL) were mixed (2:1) in a microcentrifuge tube [26] and vortexed for a few seconds to ensure uniform mixing. Then, 550 µL was transferred to a 5 mm NMR tube using a pipette and securely closed with its cap (BRUKER®, BioSpin, Rheinstetten, Germany). To ensure that each NMR tube was properly positioned, a sample gauge was used to align the NMR tube in the spinner. The tubes were then inserted into the respective sample holders and loaded into the spectrometer using Bruker's IconNMR™ automation software. After the insertion of the NMR tubes, the 1H-NMR spectra were acquired and processed using the automation interface of IconNMR™ with TopSpin 3.5 (BRUKER®, BioSpin, Rheinstetten, Germany) software. The acquisition stages were locking, shimming and acquisition, and the data processing stages were Fourier transform, phase correction and baseline correction. All spectra were acquired without spinning the sample. Each sample was given a lag time of five minutes (300 s) for thermal equilibration in the magnetic field before measurement at 300 K. For each sample, the probe was automatically tuned and matched, and the magnetic field was locked on Urine+D2O and shimmed using a shim file specifically optimized for urine samples. Automatic 1H pulse calibration (pulsecal) was performed on each sample to reduce sample variability effects due to salt content. The 1H-NMR experiments were automated with IconNMR using standardized acquisition and processing parameters as follows: pulse program (noesygppr1d), time domain (65536), dummy scans (4), scans (16), sweep width (20.5186 ppm), acquisition time (2.281 s), relaxation delay (4 s), receiver gain (12.70), dwell time (34.80 µs), mixing time (0.01 s), line broadening (0.30 Hz) and transmitter frequency offset (3289.90). The spectra were processed using Bruker TopSpin 3.5 pl7. 4.6. Statistical Analysis: At the end of the experiments, the rats were classified as gastric toxic or non-gastric toxic based on the presence or absence of any gastric toxicity, respectively. Spectra were bucketed to 0.04 ppm using AMIX software (BRUKER®, Rheinstetten, Germany). The water (4.7–4.9 ppm) and urea (5.5–6.1 ppm) regions were excluded. The bucket table was then imported into SIMCA 14.1 software (MKS Umetrics®, Umeå, Sweden). Skewed data were log-transformed. The data were then scaled using Pareto scaling. Principal component analysis (PCA) was conducted, and the score plot was examined to explore the behaviour of the data. Hotelling's T2 plot was used to detect intrinsic outliers in each group. Orthogonal partial least squares discriminant analysis (OPLS-DA) was utilized to test the association between the buckets and gastric toxicity. This initial discriminatory model was the profiling model. The misclassification table was used to indicate the sensitivity, specificity and accuracy of each model. The best differentiating model was selected based on the two goodness values: the goodness-of-fit (R2Y) and the goodness-of-prediction (Q2Y). A large R2Y (close to 1) is a necessary condition for a good model and indicates good reproducibility. Likewise, a Q2Y value >0.5 signifies good predictivity. 
The difference between the two goodness values should not be too large, in order to ensure sound prediction and to avoid overfitting. The variable importance for the projection (VIP) plot was then generated. The VIP plot summarizes the significance of the variables both in explaining X (the predictors) and in correlating with Y (the outcome). A VIP score greater than 1 is the typical rule for selecting variables that are important, relevant and potentially discriminating [27,28]. Therefore, buckets with a VIP value > 1 were chosen for further analysis. These spectral buckets (with VIP values > 1) were copied to an Excel sheet and sorted in ascending order. The corresponding spectra for each bucket were verified in TopSpin, and spectral noise was excluded from true signals. The 3-(trimethyl-silyl) propionic acid (TSP) peak was defined as the reference, and the peaks were calibrated against it. A new data table was created by copying the relative integrals with their corresponding chemical shifts (ppm) into an Excel sheet. The TSP integral was excluded from the table so as not to affect the analysis (as it is only a reference) before importing into SIMCA. The data were also log-transformed, Pareto scaled and explored initially using PCA. OPLS-DA was also applied, and VIP plots were generated. This second discriminatory model was the "Identification Model" (final model). The misclassification table was generated to show the proportion of correctly classified observations in the dataset. In SIMCA, the ability of a model to classify the individual subjects correctly or incorrectly is evaluated by the misclassification table tool [29,30]. Furthermore, the permutation plot (Y-scrambling) was used as an internal validation of the model. This compares the goodness statistics (R2 and Q2) of the original model with those of several models generated by randomly permuting the Y-observations (the outcome) while keeping the X-matrix (the predictors) constant. The number of permutations was set to 100 [31]. This means that the model was randomly rebuilt and validated 100 times. The AUROC was also computed to visualize the performance of the discriminatory models. It serves as a quantitative measure of model performance and ranges between 0.5 (bad classification) and 1.0 (perfect classification). 4.7. Metabolites Identification: Metabolites were identified by systematically exploring three major databases, namely the Biological Magnetic Resonance Data Bank (BMRB), the Human Metabolome Database (HMDB) and Chenomx NMR Suite 6.0 (Chenomx® Inc., Edmonton, AB, Canada). The search began with BMRB. The important chemical shifts identified from the identification model (previous step) were individually entered into the metabolite search field of the BMRB database. This generated several matching peaks along with their corresponding metabolites. The matching metabolites were then individually cross-referenced in the HMDB database to ascertain their availability in urine. The prospective metabolites available in urine were further cross-matched in the Chenomx profiler to ascertain their identity. 5. Conclusions: The pharmacometabolomic analysis of the pre-dose 1H-NMR urine spectra identified metabolic signatures that correlated with the development of LDA-induced gastric toxicity and could predict gastric toxicity related to LDA. 
Citrate, hippurate, methylamine, trimethylamine N-oxide, and alpha-keto-glutarate were the putative metabolites identified and possibly implicated in LDA-induced gastric toxicity. The final model demonstrated good discriminatory properties, reproducibility and limited predictive capacity. This pharmacometabolomic approach can be translated to predict gastric toxicity in CAD patients when validated in humans.
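The multivariate workflow described in Section 4.6 above (log transformation, Pareto scaling, a discriminant model, a 100-permutation Y-scrambling test and an AUROC check) can be illustrated with a short script. The sketch below is only a rough open-source analogue, not the authors' SIMCA procedure: scikit-learn has no OPLS-DA, so an ordinary PLS regression stands in for it, and the bucket table and class labels are synthetic placeholders.

```python
# Minimal sketch of the Section 4.6 workflow (not the authors' SIMCA pipeline):
# Pareto scaling, a PLS-DA stand-in for OPLS-DA, a 100-permutation test of the
# cross-validated score, and an AUROC estimate. The bucket table is synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict, permutation_test_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.lognormal(size=(32, 200))          # placeholder bucket intensities (32 rats x 200 buckets)
y = np.array([1] * 20 + [0] * 12)          # placeholder labels: 1 = gastric toxic, 0 = non-toxic

X_log = np.log1p(X)                        # log transform for skewed data
X_par = (X_log - X_log.mean(axis=0)) / np.sqrt(X_log.std(axis=0, ddof=1))  # Pareto scaling

pls = PLSRegression(n_components=2)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Cross-validated R^2 (a rough analogue of Q2Y) versus 100 label permutations (Y-scrambling).
score, perm_scores, p_value = permutation_test_score(
    pls, X_par, y, cv=cv, n_permutations=100, random_state=0
)

# AUROC from out-of-fold continuous PLS predictions.
y_scores = cross_val_predict(pls, X_par, y, cv=cv).ravel()
auroc = roc_auc_score(y, y_scores)
print(f"CV score: {score:.3f}, permutation p-value: {p_value:.3f}, AUROC: {auroc:.3f}")
```

The permutation p-value plays the same role as SIMCA's permutation plot: a genuine model should score clearly better than models refit on scrambled outcomes.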
Background: Low-dose aspirin (LDA) is the backbone for secondary prevention of coronary artery disease, although limited by gastric toxicity. This study aimed to identify novel metabolites that could predict LDA-induced gastric toxicity using pharmacometabolomics. Methods: Pre-dosed urine samples were collected from male Sprague-Dawley rats. The rats were treated with either LDA (10 mg/kg) or 1% methylcellulose (10 mL/kg) orally for 28 days. The rats' stomachs were examined for gastric toxicity using a stereomicroscope. The urine samples were analyzed using proton nuclear magnetic resonance spectroscopy. Metabolites were systematically identified by exploring established databases, and multivariate analyses were used to determine the spectral pattern of metabolites related to LDA-induced gastric toxicity. Results: Treatment with LDA resulted in gastric toxicity in 20/32 rats (62.5%). The orthogonal projections to latent structures discriminant analysis (OPLS-DA) model displayed a goodness-of-fit (R2Y) value of 0.947, suggesting near-perfect reproducibility, and a goodness-of-prediction (Q2Y) of -0.185 with perfect sensitivity, specificity and accuracy (100%). Furthermore, the area under the receiver operating characteristic curve (AUROC) was 1. The final OPLS-DA model had an R2Y value of 0.726 and a Q2Y of 0.142 with sensitivity (100%), specificity (95.0%) and accuracy (96.9%). Citrate, hippurate, methylamine, trimethylamine N-oxide and alpha-keto-glutarate were identified as the possible metabolites implicated in the LDA-induced gastric toxicity. Conclusions: The study identified metabolic signatures that correlated with the development of low-dose aspirin-induced gastric toxicity in rats. This pharmacometabolomic approach could further be validated to predict LDA-induced gastric toxicity in patients with coronary artery disease.
1. Introduction: Coronary artery disease (CAD) is a leading cause of cardiovascular disease (CVD)-related morbidity and mortality globally [1]. Low-dose aspirin (LDA) is the mainstay for the secondary prevention of CAD [2]. Aspirin inhibits platelet activity by irreversibly deactivating cyclooxygenase-1 (COX-1), leading to the inhibition of platelet thromboxane-A2 (TXA2) production and TXA2-mediated platelet activation [3]. The activity of aspirin on TXA2 explains its distinct efficacy in preventing atherothrombosis and the gastrointestinal (GI) side effects it shares with other antiplatelets [3]. Alternative antiplatelets or co-administration with gastro-protective agents are presently the most common strategies to reduce aspirin-induced GI side effects [4,5]. However, the use of aspirin alternatives is limited by cost burden, pill burden and decreased effectiveness, necessitating more cost-effective strategies. There are limited studies on strategies, such as pharmacometabonomics, that predict the manifestation of gastric toxicity prior to LDA dosing. Pharmacometabolomics is a fast, economical and less invasive approach to predicting drug-induced toxicity, and it complements personalized therapy. Proton nuclear magnetic resonance (1H-NMR) spectroscopy is a relatively new methodology for predicting drug effects using pre-dose biomarkers in biofluids. NMR spectroscopy-based pharmacometabolomics is defined as "the prediction of the outcome (e.g., toxicity or efficacy) of a drug or xenobiotic in individuals, based on a mathematical model of pre-intervention metabolite signatures" [6]. NMR spectroscopy is the "gold standard" in pharmacometabolomics because it is non-destructive and enables the observation of the dynamics, partitioning and quantification of metabolites in bio-samples. Pharmacometabolomics combines 1H-NMR and multivariate analysis in order to provide a detailed examination of the changes in the metabolic signatures of bio-samples. Therefore, this study aimed to identify metabolites that predict LDA-induced gastric toxicity using 1H-NMR-based pharmacometabonomics in rats. 5. Conclusions: The pharmacometabolomic analysis of the pre-dose 1H-NMR urine spectra identified metabolic signatures that correlated with the development of LDA-induced gastric toxicity and could predict gastric toxicity related to LDA. Citrate, hippurate, methylamine, trimethylamine N-oxide, and alpha-keto-glutarate were the putative metabolites identified and possibly implicated in LDA-induced gastric toxicity. The final model demonstrated good discriminatory properties, reproducibility and limited predictive capacity. This pharmacometabolomic approach can be translated to predict gastric toxicity in CAD patients when validated in humans.
Background: Low-dose aspirin (LDA) is the backbone for secondary prevention of coronary artery disease, although limited by gastric toxicity. This study aimed to identify novel metabolites that could predict LDA-induced gastric toxicity using pharmacometabolomics. Methods: Pre-dosed urine samples were collected from male Sprague-Dawley rats. The rats were treated with either LDA (10 mg/kg) or 1% methylcellulose (10 mL/kg) orally for 28 days. The rats' stomachs were examined for gastric toxicity using a stereomicroscope. The urine samples were analyzed using proton nuclear magnetic resonance spectroscopy. Metabolites were systematically identified by exploring established databases, and multivariate analyses were used to determine the spectral pattern of metabolites related to LDA-induced gastric toxicity. Results: Treatment with LDA resulted in gastric toxicity in 20/32 rats (62.5%). The orthogonal projections to latent structures discriminant analysis (OPLS-DA) model displayed a goodness-of-fit (R2Y) value of 0.947, suggesting near-perfect reproducibility, and a goodness-of-prediction (Q2Y) of -0.185 with perfect sensitivity, specificity and accuracy (100%). Furthermore, the area under the receiver operating characteristic curve (AUROC) was 1. The final OPLS-DA model had an R2Y value of 0.726 and a Q2Y of 0.142 with sensitivity (100%), specificity (95.0%) and accuracy (96.9%). Citrate, hippurate, methylamine, trimethylamine N-oxide and alpha-keto-glutarate were identified as the possible metabolites implicated in the LDA-induced gastric toxicity. Conclusions: The study identified metabolic signatures that correlated with the development of low-dose aspirin-induced gastric toxicity in rats. This pharmacometabolomic approach could further be validated to predict LDA-induced gastric toxicity in patients with coronary artery disease.
10,051
350
[ 68, 232, 209, 113, 4254, 174, 153, 178, 264, 496, 710, 132 ]
16
[ "model", "gastric", "rats", "urine", "value", "toxicity", "nmr", "sample", "gastric toxicity", "metabolites" ]
[ "aspirin induced gastric", "aspirin txa2", "alternative aspirin", "inhibits platelet activity", "inhibition platelet thromboxane" ]
null
[CONTENT] aspirin | pharmacometabolomic | nuclear magnetic resonance | spectroscopy | gastric toxicity | multivariate analysis [SUMMARY]
null
[CONTENT] aspirin | pharmacometabolomic | nuclear magnetic resonance | spectroscopy | gastric toxicity | multivariate analysis [SUMMARY]
[CONTENT] aspirin | pharmacometabolomic | nuclear magnetic resonance | spectroscopy | gastric toxicity | multivariate analysis [SUMMARY]
[CONTENT] aspirin | pharmacometabolomic | nuclear magnetic resonance | spectroscopy | gastric toxicity | multivariate analysis [SUMMARY]
[CONTENT] aspirin | pharmacometabolomic | nuclear magnetic resonance | spectroscopy | gastric toxicity | multivariate analysis [SUMMARY]
[CONTENT] Animals | Aspirin | Coronary Artery Disease | Humans | Magnetic Resonance Spectroscopy | Male | Metabolomics | Rats | Rats, Sprague-Dawley | Reproducibility of Results | Stomach [SUMMARY]
null
[CONTENT] Animals | Aspirin | Coronary Artery Disease | Humans | Magnetic Resonance Spectroscopy | Male | Metabolomics | Rats | Rats, Sprague-Dawley | Reproducibility of Results | Stomach [SUMMARY]
[CONTENT] Animals | Aspirin | Coronary Artery Disease | Humans | Magnetic Resonance Spectroscopy | Male | Metabolomics | Rats | Rats, Sprague-Dawley | Reproducibility of Results | Stomach [SUMMARY]
[CONTENT] Animals | Aspirin | Coronary Artery Disease | Humans | Magnetic Resonance Spectroscopy | Male | Metabolomics | Rats | Rats, Sprague-Dawley | Reproducibility of Results | Stomach [SUMMARY]
[CONTENT] Animals | Aspirin | Coronary Artery Disease | Humans | Magnetic Resonance Spectroscopy | Male | Metabolomics | Rats | Rats, Sprague-Dawley | Reproducibility of Results | Stomach [SUMMARY]
[CONTENT] aspirin induced gastric | aspirin txa2 | alternative aspirin | inhibits platelet activity | inhibition platelet thromboxane [SUMMARY]
null
[CONTENT] aspirin induced gastric | aspirin txa2 | alternative aspirin | inhibits platelet activity | inhibition platelet thromboxane [SUMMARY]
[CONTENT] aspirin induced gastric | aspirin txa2 | alternative aspirin | inhibits platelet activity | inhibition platelet thromboxane [SUMMARY]
[CONTENT] aspirin induced gastric | aspirin txa2 | alternative aspirin | inhibits platelet activity | inhibition platelet thromboxane [SUMMARY]
[CONTENT] aspirin induced gastric | aspirin txa2 | alternative aspirin | inhibits platelet activity | inhibition platelet thromboxane [SUMMARY]
[CONTENT] model | gastric | rats | urine | value | toxicity | nmr | sample | gastric toxicity | metabolites [SUMMARY]
null
[CONTENT] model | gastric | rats | urine | value | toxicity | nmr | sample | gastric toxicity | metabolites [SUMMARY]
[CONTENT] model | gastric | rats | urine | value | toxicity | nmr | sample | gastric toxicity | metabolites [SUMMARY]
[CONTENT] model | gastric | rats | urine | value | toxicity | nmr | sample | gastric toxicity | metabolites [SUMMARY]
[CONTENT] model | gastric | rats | urine | value | toxicity | nmr | sample | gastric toxicity | metabolites [SUMMARY]
[CONTENT] pharmacometabolomics | aspirin | nmr | txa2 | platelet | drug | strategies | nmr spectroscopy | effects | spectroscopy [SUMMARY]
null
[CONTENT] value | figure | toxic | gastric | model | gastric toxic | ppm belonging | belonging | left | groups [SUMMARY]
[CONTENT] pharmacometabolomic | predict gastric toxicity | predict gastric | toxicity | gastric toxicity | gastric | lda | predict | identified | induced [SUMMARY]
[CONTENT] model | gastric | value | rats | toxicity | urine | gastric toxicity | nmr | metabolites | figure [SUMMARY]
[CONTENT] model | gastric | value | rats | toxicity | urine | gastric toxicity | nmr | metabolites | figure [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] 20/32 | 62.5% ||| 0.947 | 100% ||| 1 ||| OPLS | 0.726 | Q2Y of | 0.142 | 100% | 95.0% | 96.9% ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| Sprague-Dawley ||| 10 mg/kg | 1% | 10 mL/kg | 28 days ||| ||| ||| ||| ||| 20/32 | 62.5% ||| 0.947 | 100% ||| 1 ||| OPLS | 0.726 | Q2Y of | 0.142 | 100% | 95.0% | 96.9% ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| Sprague-Dawley ||| 10 mg/kg | 1% | 10 mL/kg | 28 days ||| ||| ||| ||| ||| 20/32 | 62.5% ||| 0.947 | 100% ||| 1 ||| OPLS | 0.726 | Q2Y of | 0.142 | 100% | 95.0% | 96.9% ||| ||| ||| [SUMMARY]
The Oral Microbiome Impacts the Link between Sugar Consumption and Caries: A Preliminary Study.
36145068
The excessive and frequent intake of refined sugar leads to caries. However, the relationship between the amount of sugar intake and the risk of caries is not always consistent. Oral microbial profile and function may impact the link between them. This study aims to identify the plaque microbiota characteristics of caries subjects with low (CL) and high (CH) sugar consumption, and of caries-free subjects with low (FL) and high (FH) sugar consumption.
BACKGROUND
A total of 40 adolescents were enrolled in the study, and supragingival plaque samples were collected and subjected to metagenomic analyses. The caries status, sugar consumption, and oral-health behaviors of the subjects were recorded.
METHODS
The results indicate that the CL group showed a higher abundance of several cariogenic microorganisms: Lactobacillus, A. gerencseriae, A. dentalis, S. mutans, C. albicans, S. wiggsiae and P. acidifaciens. C. gingivalis and P. gingivalis were enriched in the FH group. In terms of gene function, the phosphotransferase sugar uptake system, the phosphotransferase system, and several two-component response-regulator pairs were enriched in the CL group.
RESULTS
Overall, our data suggest the existence of an increased cariogenic microbial community and sugar catabolism potential in the CL group, and a healthy microbial community in the FH group, which had self-stabilizing functional potential.
CONCLUSION
[ "Adolescent", "Candida albicans", "Dental Caries", "Dietary Sugars", "Humans", "Lactobacillus", "Microbiota", "Phosphotransferases", "Streptococcus mutans", "Sugars" ]
9503897
1. Introduction
Dental caries is characterized by the demineralization of tooth tissue resulting from the fermentation of dietary carbohydrates by acid-producing bacteria [1]. The existence of cariogenic microorganisms is a prerequisite for the development of dental caries, and fermentable carbohydrates, particularly sugars, are key contributors to the dysbiosis in the oral microbiota that promotes caries. Consuming sugar has negative effects on oral health, as cariogenic bacteria convert monosaccharides into acids that are detrimental to the teeth. Therefore, it is generally accepted that the excessive and frequent intake of refined sugar leads to the development of caries [2,3,4]. However, the relationship between the amount of sugar intake and caries risk is not always consistent [4,5]. In fact, some individuals with low sugar consumption still have a high caries risk, while some others are caries-free even with higher sugar consumption [4,5]. Many factors may influence the relationship between them, such as high socioeconomic status, exposure to fluoridated water, frequent tooth brushing, the availability of sugar for cariogenic bacterial digestion, and the availability of saliva to counteract bacteria and acids [5,6]. Previous studies suggested that oral ecological microbial profiles are correlated with the amount of sugar consumption [7]. Keller et al. found that some acidogenic and acid-tolerant caries-associated organisms were less abundant in a low-sugar group's dental plaque [8]. Chen indicated that higher sugar-sweetened-beverage consumption may disrupt the oral microecology and reduce the variety of the microbiota during childhood, leading to an increase in cariogenic genera [9]. Altered plaque microbial profiles have been shown in adolescents with different levels of sugar intake [8,9]. However, no study has explored the impact of the oral microbiota on the relationship between sugar consumption and the risk of caries. We hypothesize that the oral microbiota of high-caries-risk adolescents with a low-sugar diet may have a special microbial community composition and function, leading to the formation of a local cariogenic environment with limited sugar. The microbial community of low-caries-risk subjects with higher sugar consumption may have some special bacterial antagonism toward the cariogenic bacteria, resulting in a state that protects teeth against caries even in a high-carbohydrate environment. Therefore, the purpose of this present study is to compare the oral microbial profiles of high-caries-risk adolescents with a low intake of free sugars to those of caries-free adolescents with a high consumption of free sugars.
null
null
3. Results
In total, 40 male and female teenagers aged 12–13 years were recruited; 45% were boys, and 55% were girls. The samples were divided into four groups: caries individuals with low sugar consumption (CL group), caries individuals with high sugar consumption (CH group), caries-free individuals with low sugar consumption (FL group), and caries-free individuals with high sugar consumption (FH group). Table 1 reports the characteristics of the demographic variables and the oral hygiene habits of the included subjects. As presented in Table 1, there were no differences among the four groups in clinical variables that could act as confounders. After data trimming and filtering out the low-quality data and host contamination, a total of 651,389,975 (195.43 GB) reads were acquired, and the average clean data volume was 4.89 GB per sample. A total of 22 phyla, 36 classes, 93 orders, 214 families, 761 genera, and 2235 species were detected in the 40 samples. Figure 1a–c shows the top 20 phyla, genera, and species, respectively, with relatively high abundance in the four groups. 3.1. Microbiota Composition Based on Sugar Consumption and Caries Status The bacterial distribution within the different groups was analyzed. We compared the relative abundance of the top 20 bacteria in the four groups, with some interesting findings. Figure 2 shows the relative abundance of the phyla and species that were significantly higher in the FH and CL groups compared with the three other groups. As illustrated in Figure 2, at the phylum level, the CL group showed a higher relative abundance of Fusobacteria. At the genus level, the CL group was associated with a greater abundance of Lactobacillus. At the species level, in the CL group, Actinomyces gerencseriae and Actinomyces dentalis had a higher relative abundance among the top 20 species covered. Streptococcus mutans, Candida albicans, Scardovia wiggsiae, and Propionibacterium acidifaciens were also more abundant in the CL group among the top 20 species covered. However, eight species were more abundant in the FH group, including Capnocytophaga gingivalis and Porphyromonas gingivalis (the figure titled "Significant differences in relative abundance of plaque microbial taxa among the four groups by LEfSe analysis" was too large to place in the manuscript, so we uploaded it as Supplemental Material Figure S2). A Venn diagram was used to illustrate the shared and unique taxa at the species level among groups. A total of 2235 species were identified in our study. The overlap region S indicates the microbiome shared in all samples among the four groups, and 933 species were detected in all four groups. There were 84, 207, 75 and 250 unique species detected in the CL, CH, FH, and FL groups, respectively (Figure 3). PCoA (Figure 4a) and NMDS (Figure 4b) did not reveal differences in taxonomic unit structural richness among groups. The results demonstrate that the microbial diversity of the four groups was similar. 3.2. Microbiota Composition Based on Sugar Consumption and Caries Status We annotated the oral gene catalogs with the Kyoto Encyclopedia of Genes and Genomes (KEGG) database in order to compare the functional characteristics of the microbiota between groups. The metabolic pathway, which includes 5923 annotated genes, was the most abundant of the six main KEGG pathways found (Figure 5a). There were 47 Level 2 pathways, with carbohydrate metabolism being the most abundant (Figure 5b). The metabolism of cofactors and vitamins, and amino acid metabolism were other common markers among the Level 2 pathways. The top 10 most abundant KOs were discovered by performing BLAST against the KEGG Orthology (KO) database, and they are shown in Figure 5c. LEfSe analysis of the gene profiles showed enormous differences at the functional level between the groups. At the second level, enrichment in membrane transport was observed in the CL group compared with the other groups. At the third level, the synthesis and degradation of ketone bodies, the phosphotransferase system (PTS), carbapenem biosynthesis, quorum sensing, retrograde endocannabinoid signaling, basal transcription factors, chlorocyclohexane and chlorobenzene degradation, and the NOD-like receptor signaling pathway were enriched in the CL group, while lipopolysaccharide biosynthesis and isoquinoline alkaloid biosynthesis exhibited a higher level in the FH group (Figure 6a). In total, 10 KEGG metabolic modules were positively associated with the CL group's microbiomes, while 22 KEGG metabolic modules had a higher proportion in the FH group (Figure 6b). The phosphotransferase sugar uptake system N-acetylgalactosamine (M00277), numerous two-component histidine kinase-response regulator systems including VicK/VicR (cell wall metabolism) (M00459), LytS-LytR (M00492), and CiaH/CiaR (M00521), and several trace metal transport systems, including those for manganese/zinc (M00791) and nickel (M00440), were more enriched in the CL group. Modules associated with biosynthesis were more abundant in the FH group, including serine (M00020), ectoine (M00033), lipopolysaccharide (M00060), CMP-KDO (M00063), ascorbate (M00114), NAD (M00115), ubiquinone (M00117), biotin (M00123), pyridoxal (M00124), riboflavin (M00125), aminoacyl-tRNA (M00359), biotin (M00573, M00577), and aurachin (M00848). The FH group also had enriched xenobiotic efflux pump transporters, including multidrug resistance, AcrAB-TolC/SmeDEF (M00647), multidrug resistance, AmeABC (M00699), and multidrug resistance, MexAB-OprM (M00718) (the figure titled "LEfSe analysis was used to identify Level 3 and module level with significant differences in relative abundance among the four groups" was too large to place in the manuscript, so we uploaded it as Supplementary Figure S2).
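The PCoA and NMDS comparisons reported above are standard ordinations on a Bray–Curtis distance matrix. A minimal sketch of that computation follows; it is not the authors' QIIME 2 / R pipeline, the species-abundance table is a synthetic placeholder, and classical PCoA is implemented directly via eigendecomposition for brevity.

```python
# Sketch of Bray-Curtis beta diversity with PCoA and NMDS, on placeholder data.
# Not the authors' QIIME 2 / R workflow; illustrative only.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
abundance = rng.integers(0, 500, size=(40, 200)).astype(float)  # 40 samples x 200 species (placeholder)
rel_abundance = abundance / abundance.sum(axis=1, keepdims=True)

# Bray-Curtis distance matrix between samples.
bc = squareform(pdist(rel_abundance, metric="braycurtis"))

# Classical PCoA: double-centre the squared distances and take the top eigenvectors.
n = bc.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (bc ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
pcoa_coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))

# NMDS via non-metric MDS on the precomputed distances.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0, n_init=10, max_iter=500)
nmds_coords = nmds.fit_transform(bc)
print(pcoa_coords[:3], nmds_coords[:3], sep="\n")
```

Group labels (CL, CH, FL, FH) could then be overlaid on these coordinates to reproduce ordination plots like those in Figure 4a,b.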
5. Conclusions
In conclusion, findings from both previous studies and the current study lend indirect support to the notion that the caries phenotype is a disorder of microbial community metabolism. In the CL group, even with a lower sugar intake, the subjects had a high caries risk because of the existence of a cariogenic microbial community and an increased sugar catabolism potential. Even though the subjects in the FH group consumed more sugar, they had a low caries risk due to a healthy microbial community capable of resisting the cariogenic microbial population and preventing pH reduction by inhibiting sugar uptake. The present study has the obvious shortcoming of a limited sample size that only permits preliminary findings. In addition, sugar consumption data were collected via questionnaire, and some free sugars were not included, such as sucrose in homemade dishes. Given this, it is prudent not to overstate the consequences of the findings. Future investigations on a larger scale are required to confirm and validate our findings.
[ "2. Materials and Methods", "2.1. Ethical Approval", "2.2. Clinical Examination and Sampling", "2.3. Questionnaire Survey and Free-Sugar Intake Assessment", "2.4. Metagenomic Analysis", "2.5. Data Visual Exhibition, and Statistical and Bioinformatic Analysis", "3.1. Microbiota Composition Based on Sugar Consumption and Caries Status", "3.2. Microbiota Composition Based on Sugar Consumption and Caries Status" ]
[ "2.1. Ethical Approval This study was approved by the Ethical Committee of the Hospital of Stomatology, Sun Yat-sen University, in Guangzhou, China (KQEC-2021-24-03). Before this study, we obtained written informed consent from the parents or legal guardians of all participants.\nThis study was approved by the Ethical Committee of the Hospital of Stomatology, Sun Yat-sen University, in Guangzhou, China (KQEC-2021-24-03). Before this study, we obtained written informed consent from the parents or legal guardians of all participants.\n2.2. Clinical Examination and Sampling Prior to recruiting participants for the study, we performed an initial screening. During the initial screening, we conducted a structured questionnaire survey and clinical examination. The contents of the structured questionnaire were split into three parts: (1) Demographic information: sex, age, residence, and whether the child was an only child in their family and their primary caregiver. (2) Socioeconomic information: family income, caregivers’ educational levels, and whether they had dental insurance. (3) Oral-health-related behaviors: tooth-brushing frequency, flossing habits, toothpaste containing fluoride or not, mouthwash or not, frequency of snack consumption, frequency of sweet-drink consumption, and dental attendance experience. Caries status and plaque index were also evaluated. A skilled dentist used the International Caries Detection and Assessment System II (ICDAS-II) to measure the subjects’ caries experience, and recorded it as decaying, missing, and filled teeth (DMFT). The clinical examination was conducted under artificial lighting in the classroom by utilizing a dental mirror and a Community Periodontal Index (CPI) probe. Codes 3–6 in the ICDAS system were recorded as decayed teeth. Inclusion criteria were: healthy 12–13 years old adolescents who were caries-free and without any history of dental caries (DMFT = 0) or caries-active (DMFT ≥ 5) [10]. The plaque index of the Silness and Loe scale was used to record dental plaque. The subjects were excluded on the basis of the following criteria: (1): adolescents who had taken antibiotics during the past three months, (2) adolescents with bacterial or viral infections in other sections of the body, and (3) teenagers with gingivitis or wearing orthodontic appliances. In total, 40 adolescents with a similar average family income, composed of 20 caries-active adolescents and 20 caries-free adolescents, were enrolled.\nBefore the sample collection, breakfast was not permitted, and participating adolescents were not allowed to brush their teeth for 12 h. All the teenagers were instructed to rinse their mouths before the exam. Prior to sampling, the teeth were gently dried using an air stream to prevent saliva contamination. The buccal surfaces of the anterior and posterior teeth were scraped clean of dental plaque using a sterile scaler, and the plaque was then promptly transferred to a sterile 1.5 mL Eppendorf tube. The samples were brought to the laboratory as quickly as possible and frozen at −80 °C before analysis.\nPrior to recruiting participants for the study, we performed an initial screening. During the initial screening, we conducted a structured questionnaire survey and clinical examination. The contents of the structured questionnaire were split into three parts: (1) Demographic information: sex, age, residence, and whether the child was an only child in their family and their primary caregiver. 
(2) Socioeconomic information: family income, caregivers’ educational levels, and whether they had dental insurance. (3) Oral-health-related behaviors: tooth-brushing frequency, flossing habits, toothpaste containing fluoride or not, mouthwash or not, frequency of snack consumption, frequency of sweet-drink consumption, and dental attendance experience. Caries status and plaque index were also evaluated. A skilled dentist used the International Caries Detection and Assessment System II (ICDAS-II) to measure the subjects’ caries experience, and recorded it as decaying, missing, and filled teeth (DMFT). The clinical examination was conducted under artificial lighting in the classroom by utilizing a dental mirror and a Community Periodontal Index (CPI) probe. Codes 3–6 in the ICDAS system were recorded as decayed teeth. Inclusion criteria were: healthy 12–13 years old adolescents who were caries-free and without any history of dental caries (DMFT = 0) or caries-active (DMFT ≥ 5) [10]. The plaque index of the Silness and Loe scale was used to record dental plaque. The subjects were excluded on the basis of the following criteria: (1): adolescents who had taken antibiotics during the past three months, (2) adolescents with bacterial or viral infections in other sections of the body, and (3) teenagers with gingivitis or wearing orthodontic appliances. In total, 40 adolescents with a similar average family income, composed of 20 caries-active adolescents and 20 caries-free adolescents, were enrolled.\nBefore the sample collection, breakfast was not permitted, and participating adolescents were not allowed to brush their teeth for 12 h. All the teenagers were instructed to rinse their mouths before the exam. Prior to sampling, the teeth were gently dried using an air stream to prevent saliva contamination. The buccal surfaces of the anterior and posterior teeth were scraped clean of dental plaque using a sterile scaler, and the plaque was then promptly transferred to a sterile 1.5 mL Eppendorf tube. The samples were brought to the laboratory as quickly as possible and frozen at −80 °C before analysis.\n2.3. Questionnaire Survey and Free-Sugar Intake Assessment The teenagers completed a questionnaire with three parts regarding their dietary habits and oral-health-related behaviors under the supervision of their parents. Part 1 was focused on the demographic information of students, Part 2 was primarily about students’ use of sugar-sweetened drinks (SSBs) and sweetened foods, and Part 3 was mostly concerned with oral health-related behaviors [11].\nPart 1: gender, age, ethnic group, height and weight, residence, family income, caregivers’ education levels and whether they had dental insurance, whether the child was an only child in their family and their primary caregiver.\nPart 2: included among SSBs were carbonated beverages, vegetable protein beverages, juice or juice drinks, tea beverages, sports beverages, and bubble tea. Cakes, desserts, candies (such as chocolate, Snickers, and Maltesers), and preserved fruits were all examples of foods that were sweetened (dried fruits and candied fruits). Other included as sources of free sugar were honey, flavored milk, and yogurt. 
For the typical intake of SSBs and sugary meals, the response options were 100, 200, 300, 400, and 500 mL/time, and 25, 50, 75, 100, 150, and 200 g/time.\nPart 3: the frequency of tooth brushing, flossing habits, mouthwash or not, and whether the toothpaste contained fluoride.\nThe assessment of free-sugar intake referred to the study conducted by Q Lin et al. [11]. Following that, subjects were divided into two groups: those who consumed less than 50 g of sugar per day, and those who consumed more than 50 g of sugar per day.\nThe teenagers completed a questionnaire with three parts regarding their dietary habits and oral-health-related behaviors under the supervision of their parents. Part 1 was focused on the demographic information of students, Part 2 was primarily about students’ use of sugar-sweetened drinks (SSBs) and sweetened foods, and Part 3 was mostly concerned with oral health-related behaviors [11].\nPart 1: gender, age, ethnic group, height and weight, residence, family income, caregivers’ education levels and whether they had dental insurance, whether the child was an only child in their family and their primary caregiver.\nPart 2: included among SSBs were carbonated beverages, vegetable protein beverages, juice or juice drinks, tea beverages, sports beverages, and bubble tea. Cakes, desserts, candies (such as chocolate, Snickers, and Maltesers), and preserved fruits were all examples of foods that were sweetened (dried fruits and candied fruits). Other included as sources of free sugar were honey, flavored milk, and yogurt. For the typical intake of SSBs and sugary meals, the response options were 100, 200, 300, 400, and 500 mL/time, and 25, 50, 75, 100, 150, and 200 g/time.\nPart 3: the frequency of tooth brushing, flossing habits, mouthwash or not, and whether the toothpaste contained fluoride.\nThe assessment of free-sugar intake referred to the study conducted by Q Lin et al. [11]. Following that, subjects were divided into two groups: those who consumed less than 50 g of sugar per day, and those who consumed more than 50 g of sugar per day.\n2.4. Metagenomic Analysis Each sample was subjected to the CTAB technique for the extraction of microbial DNA. The resulting pellet was redissolved into 50 L of TE buffer (10 mM Tris, 1 mM EDTA). The tube was filled with 1 μL of RNase A to digest the RNA, and it was then incubated at 37 °C for 15 min [12]. Nanodrop and agarose gel electrophoresis were used to evaluate the final DNA concentration and purity. The DNA libraries were created using the manufacturer’s instructions with the NEB Next® Ultra TM DNA Library Prep Kit (Illumina, San Diego, CA, USA). The qualifying libraries were then sequenced at Wekemo Tech Co., Ltd. in Shenzhen, China using an Illumina NovaSeq PE150 platform. We handled the samples in accordance with the accepted procedures. During DNA extraction and library assembly, blank controls were used. In this analysis, there was no contamination in any of the samples. The collected raw reads were processed. Each sample was sequenced, and human reads were then removed from the metagenomic dataset by using Bowtie2 to align the DNA sequences to the human genome. Lastly, host-originated readings and low-quality reads were eliminated to produce clean data. The clean sequences of all samples were annotated and categorized, and Bracken was used to predict the actual relative abundance of species. 
We gathered data at seven taxonomical hierarchies from each sample using the algorithms of the lowest common ancestor [13] and gene abundance estimates [14].\nUnsuccessful reads were filtered out after the unigene sequences had been aligned using DIAMOND (version 0.7.10.59, Benjamin Buchfink. Drost lab, Max Planck Institute for Biology, Tübingen, Germany) and the UniRef90 protein database. The gene abundance tables were obtained on the basis of the Uniref90 ID and the corresponding relationship of the KEGG database; then, the functional abundance profiles of each sample were drawn.\nEach sample was subjected to the CTAB technique for the extraction of microbial DNA. The resulting pellet was redissolved into 50 L of TE buffer (10 mM Tris, 1 mM EDTA). The tube was filled with 1 μL of RNase A to digest the RNA, and it was then incubated at 37 °C for 15 min [12]. Nanodrop and agarose gel electrophoresis were used to evaluate the final DNA concentration and purity. The DNA libraries were created using the manufacturer’s instructions with the NEB Next® Ultra TM DNA Library Prep Kit (Illumina, San Diego, CA, USA). The qualifying libraries were then sequenced at Wekemo Tech Co., Ltd. in Shenzhen, China using an Illumina NovaSeq PE150 platform. We handled the samples in accordance with the accepted procedures. During DNA extraction and library assembly, blank controls were used. In this analysis, there was no contamination in any of the samples. The collected raw reads were processed. Each sample was sequenced, and human reads were then removed from the metagenomic dataset by using Bowtie2 to align the DNA sequences to the human genome. Lastly, host-originated readings and low-quality reads were eliminated to produce clean data. The clean sequences of all samples were annotated and categorized, and Bracken was used to predict the actual relative abundance of species. We gathered data at seven taxonomical hierarchies from each sample using the algorithms of the lowest common ancestor [13] and gene abundance estimates [14].\nUnsuccessful reads were filtered out after the unigene sequences had been aligned using DIAMOND (version 0.7.10.59, Benjamin Buchfink. Drost lab, Max Planck Institute for Biology, Tübingen, Germany) and the UniRef90 protein database. The gene abundance tables were obtained on the basis of the Uniref90 ID and the corresponding relationship of the KEGG database; then, the functional abundance profiles of each sample were drawn.\n2.5. Data Visual Exhibition, and Statistical and Bioinformatic Analysis The visuals and statistical analysis in this study were conducted using R software (version 3.8, Ross Ihaka and Robert Gentleman. The University of Auckland, Auckland, New Zealand) and QIIME (version 2, Rob Knight. University of California San Diego, San Diego, CA, USA). Fisher’s exact test was used to compare the clinical variables between groups. The Dunn test was used to detect significant differences in relative abundance between groups, with differences considered to be significant when p < 0.05. LEfSe analysis was used to compare microbial profiles and community functional structure, and the logarithmic LDA score threshold was set to 2.0. The core microbiome at the species level was depicted using a Venn diagram. 
Diversity analysis was carried out using nonmetric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) based on Bray–Curtis distance.\nThe visuals and statistical analysis in this study were conducted using R software (version 3.8, Ross Ihaka and Robert Gentleman. The University of Auckland, Auckland, New Zealand) and QIIME (version 2, Rob Knight. University of California San Diego, San Diego, CA, USA). Fisher’s exact test was used to compare the clinical variables between groups. The Dunn test was used to detect significant differences in relative abundance between groups, with differences considered to be significant when p < 0.05. LEfSe analysis was used to compare microbial profiles and community functional structure, and the logarithmic LDA score threshold was set to 2.0. The core microbiome at the species level was depicted using a Venn diagram. Diversity analysis was carried out using nonmetric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) based on Bray–Curtis distance.", "This study was approved by the Ethical Committee of the Hospital of Stomatology, Sun Yat-sen University, in Guangzhou, China (KQEC-2021-24-03). Before this study, we obtained written informed consent from the parents or legal guardians of all participants.", "Prior to recruiting participants for the study, we performed an initial screening. During the initial screening, we conducted a structured questionnaire survey and clinical examination. The contents of the structured questionnaire were split into three parts: (1) Demographic information: sex, age, residence, and whether the child was an only child in their family and their primary caregiver. (2) Socioeconomic information: family income, caregivers’ educational levels, and whether they had dental insurance. (3) Oral-health-related behaviors: tooth-brushing frequency, flossing habits, toothpaste containing fluoride or not, mouthwash or not, frequency of snack consumption, frequency of sweet-drink consumption, and dental attendance experience. Caries status and plaque index were also evaluated. A skilled dentist used the International Caries Detection and Assessment System II (ICDAS-II) to measure the subjects’ caries experience, and recorded it as decaying, missing, and filled teeth (DMFT). The clinical examination was conducted under artificial lighting in the classroom by utilizing a dental mirror and a Community Periodontal Index (CPI) probe. Codes 3–6 in the ICDAS system were recorded as decayed teeth. Inclusion criteria were: healthy 12–13 years old adolescents who were caries-free and without any history of dental caries (DMFT = 0) or caries-active (DMFT ≥ 5) [10]. The plaque index of the Silness and Loe scale was used to record dental plaque. The subjects were excluded on the basis of the following criteria: (1): adolescents who had taken antibiotics during the past three months, (2) adolescents with bacterial or viral infections in other sections of the body, and (3) teenagers with gingivitis or wearing orthodontic appliances. In total, 40 adolescents with a similar average family income, composed of 20 caries-active adolescents and 20 caries-free adolescents, were enrolled.\nBefore the sample collection, breakfast was not permitted, and participating adolescents were not allowed to brush their teeth for 12 h. All the teenagers were instructed to rinse their mouths before the exam. Prior to sampling, the teeth were gently dried using an air stream to prevent saliva contamination. 
The buccal surfaces of the anterior and posterior teeth were scraped clean of dental plaque using a sterile scaler, and the plaque was then promptly transferred to a sterile 1.5 mL Eppendorf tube. The samples were brought to the laboratory as quickly as possible and frozen at −80 °C before analysis.", "The teenagers completed a questionnaire with three parts regarding their dietary habits and oral-health-related behaviors under the supervision of their parents. Part 1 was focused on the demographic information of students, Part 2 was primarily about students’ use of sugar-sweetened drinks (SSBs) and sweetened foods, and Part 3 was mostly concerned with oral health-related behaviors [11].\nPart 1: gender, age, ethnic group, height and weight, residence, family income, caregivers’ education levels and whether they had dental insurance, whether the child was an only child in their family and their primary caregiver.\nPart 2: included among SSBs were carbonated beverages, vegetable protein beverages, juice or juice drinks, tea beverages, sports beverages, and bubble tea. Cakes, desserts, candies (such as chocolate, Snickers, and Maltesers), and preserved fruits were all examples of foods that were sweetened (dried fruits and candied fruits). Other included as sources of free sugar were honey, flavored milk, and yogurt. For the typical intake of SSBs and sugary meals, the response options were 100, 200, 300, 400, and 500 mL/time, and 25, 50, 75, 100, 150, and 200 g/time.\nPart 3: the frequency of tooth brushing, flossing habits, mouthwash or not, and whether the toothpaste contained fluoride.\nThe assessment of free-sugar intake referred to the study conducted by Q Lin et al. [11]. Following that, subjects were divided into two groups: those who consumed less than 50 g of sugar per day, and those who consumed more than 50 g of sugar per day.", "Each sample was subjected to the CTAB technique for the extraction of microbial DNA. The resulting pellet was redissolved into 50 L of TE buffer (10 mM Tris, 1 mM EDTA). The tube was filled with 1 μL of RNase A to digest the RNA, and it was then incubated at 37 °C for 15 min [12]. Nanodrop and agarose gel electrophoresis were used to evaluate the final DNA concentration and purity. The DNA libraries were created using the manufacturer’s instructions with the NEB Next® Ultra TM DNA Library Prep Kit (Illumina, San Diego, CA, USA). The qualifying libraries were then sequenced at Wekemo Tech Co., Ltd. in Shenzhen, China using an Illumina NovaSeq PE150 platform. We handled the samples in accordance with the accepted procedures. During DNA extraction and library assembly, blank controls were used. In this analysis, there was no contamination in any of the samples. The collected raw reads were processed. Each sample was sequenced, and human reads were then removed from the metagenomic dataset by using Bowtie2 to align the DNA sequences to the human genome. Lastly, host-originated readings and low-quality reads were eliminated to produce clean data. The clean sequences of all samples were annotated and categorized, and Bracken was used to predict the actual relative abundance of species. We gathered data at seven taxonomical hierarchies from each sample using the algorithms of the lowest common ancestor [13] and gene abundance estimates [14].\nUnsuccessful reads were filtered out after the unigene sequences had been aligned using DIAMOND (version 0.7.10.59, Benjamin Buchfink. 
Drost lab, Max Planck Institute for Biology, Tübingen, Germany) and the UniRef90 protein database. The gene abundance tables were obtained on the basis of the Uniref90 ID and the corresponding relationship of the KEGG database; then, the functional abundance profiles of each sample were drawn.", "The visuals and statistical analysis in this study were conducted using R software (version 3.8, Ross Ihaka and Robert Gentleman. The University of Auckland, Auckland, New Zealand) and QIIME (version 2, Rob Knight. University of California San Diego, San Diego, CA, USA). Fisher’s exact test was used to compare the clinical variables between groups. The Dunn test was used to detect significant differences in relative abundance between groups, with differences considered to be significant when p < 0.05. LEfSe analysis was used to compare microbial profiles and community functional structure, and the logarithmic LDA score threshold was set to 2.0. The core microbiome at the species level was depicted using a Venn diagram. Diversity analysis was carried out using nonmetric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) based on Bray–Curtis distance.", "The bacterial distribution within the different groups was analyzed. We compared the relative abundance of the top 20 bacteria in the four groups with some interesting findings. Figure 2 shows the relative abundance of the phyla and species that were significantly higher in the FH and CL groups compared with the three other groups. As illustrated in Figure 2, at the phylum level, the CL group showed a higher relative abundance of Fusobacteria. At the genus level, the CL group was associated with a greater abundance of Lactobacillus. At the species level, in the CL group, Actinomyces gerencseriae and Actinomyces dentails had a higher relative abundance of the top 20 species covered. Streptococcus mutans, Candida albicans, Scardovia wiggsiae, and Propionibacterium acidifaciens were also more abundant in the CL group out of the top 20 species covered. However, eight species were more abundant in the FH group, including Capnocytophaga gingivalis and Porphyromonas gingivalis (the figure titled “Significant differences in relative abundance of plaque microbial taxa among the four groups by LEfSe analysis” was too large to place in the manuscript, so we were uploaded it as Supplemental Material Figure S2).\nVenn diagram was used to illustrate the shared and unique taxa for the species level among groups. A total of 2235 species were identified in our study. The overlap region S indicates the microbiome shared in all samples among the four groups, and 933 species were detected in all of the four groups. There were 84, 207, 75 and 250 unique species detected in the CL, CH, FH, and FL groups, respectively (Figure 3).\nPCoA (Figure 4a) and NMDS (Figure 4b) did not reveal differences in taxonomic unit structural richness among groups. The results demonstrate that the microbial diversity of the four groups was similar.", "We annotated the oral gene catalogs with the Kyoto Encyclopedia of Genes and Genomes (KEGG) database in order to compare the functional characteristics of the microbiota between groups. The metabolic pathway that includes 5923 annotated genes was the most abundant of the six main KEGG pathways that were found (Figure 5a). There were 47 Level 2 pathways, with the metabolism of the carbohydrates being the most abundant (Figure 5b). 
The metabolism of cofactors and vitamins, and amino acid metabolism were other common markers in the Level 2 pathways. The top 10 most abundant KOs were discovered by performing BLAST against the KEGG Orthology (KO) database, and they are shown in Figure 5c.\nLEfSe analysis of the gene profiles showed enormous differences at the functional level of the groups. At the second level, enrichment in membrane transport was observed in the CL group compared with the other groups. At the third level, the synthesis and degradation of ketone bodies, the phosphotransferase system (PTS), carbapenem biosynthesis, quorum sensing, retrograde endocannabinoid signaling, basal transcription factors, chlorocyclohexane and chlorobenzene degradation, and the NOD-like receptor signaling pathway were enriched in the CL group, while lipopolysaccharide biosynthesis and isoquinoline alkaloid biosynthesis exhibited a higher level in the FH group (Figure 6a).\nIn total, 10 KEGG metabolic modules were positively associated with the CL group’s microbiomes, while 22 KEGG metabolic modules had a higher proportion in the FH group (Figure 6b). Phosphotransferase sugar uptake system N-acetylgalactosamine (M00277), numerous two-component histidine kinase-response regulator systems including VicK/VicR (cell wall metabolism) (M00459), LytS-LytR (M00492), and CiaH/CiaR (M00521), and several trace metal transport systems, including those for manganese/zinc (M00791) and nickel (M00440), were more enriched in the CL group. Modules associated with biosynthesis were more abundant in the FH group, including serine (M00020), ectoine (M00033), lipopolysaccharide (M00060), CMP-KDO (M00063), ascorbate (M00114), NAD (M00115), ubiquinone (M00117), biotin (M00123), pyridoxal (M00124), Riboflavin (M00125), aminoacyl-tRNA (M00359), biotin (M00573, M00577), and aurachin (M00848). The FH group also had enriched xenobiotic efflux pump transporters, including multidrug resistance, AcrAB-TolC/SmeDEF (M00647), multidrug resistance, AmeABC (M00699), multidrug resistance, MexAB-OprM (M00718) (the figure titled “LEfSe analysis was used to identify Level 3 and module level with significant differences in relative abundance among the four groups” was too large to place in the manuscript, so we uploaded it as Supplementary Figure S2)." ]
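The Level 1/Level 2 pathway profiles summarized above are obtained by collapsing KO-level abundances through a KO-to-pathway map. A minimal pandas sketch is given below; the KO identifiers, sample names, and mapping are hypothetical placeholders, since the actual mapping comes from the KEGG annotations used in the study.

    import pandas as pd

    # Hypothetical KO abundance table: rows = KOs, columns = samples
    ko_abund = pd.DataFrame(
        {"sample1": [120, 30, 55], "sample2": [90, 45, 60]},
        index=["K00001", "K00002", "K00003"],
    )

    # Hypothetical KO -> Level 2 pathway map (in practice derived from KEGG annotations)
    ko_to_level2 = pd.Series(
        {"K00001": "Carbohydrate metabolism",
         "K00002": "Carbohydrate metabolism",
         "K00003": "Membrane transport"},
        name="level2",
    )

    # Sum KO abundances within each Level 2 category, then convert to relative abundance
    level2_abund = ko_abund.groupby(ko_to_level2).sum()
    level2_rel = level2_abund / level2_abund.sum(axis=0)
    print(level2_rel)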
[ null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Ethical Approval", "2.2. Clinical Examination and Sampling", "2.3. Questionnaire Survey and Free-Sugar Intake Assessment", "2.4. Metagenomic Analysis", "2.5. Data Visual Exhibition, and Statistical and Bioinformatic Analysis", "3. Results", "3.1. Microbiota Composition Based on Sugar Consumption and Caries Status", "3.2. Microbiota Composition Based on Sugar Consumption and Caries Status", "4. Discussion", "5. Conclusions" ]
[ "Dental caries characterized by the demineralization of tooth tissue resulting from the fermentation of dietary carbohydrates by acid-producing bacteria [1]. The existence of cariogenic microorganisms is a prerequisite for the development of dental caries, and fermentable carbohydrates, particularly sugars, are key contributors to dysbiosis in the oral microbiota that promotes caries. Consuming sugar has negative effects on oral health, as cariogenic bacteria convert monosaccharides into acids that are detrimental to the teeth. Therefore, it is generally accepted that the excessive and frequent intake of refined sugar leads to the development of caries [2,3,4]. However, the relationship between the amount of sugar intake and caries risk is not always consistent [4,5]. In fact, some individuals with low sugar consumption still have a high caries risk, while some others are caries-free even with higher sugar consumption [4,5]. Many factors may influence the relationship between them, such as high socioeconomic status, exposure to fluoridated water, frequent tooth brushing, the availability of sugar for cariogenic bacterial digestion, and the availability of saliva to counteract bacteria and acids [5,6].\nPrevious studies suggested that oral ecological microbial profiles are corelated with the amount of sugar consumption [7]. Keller et al. found that some acidogenic and acid-tolerant caries-associated organisms were less abundant in a low-sugar group’s dental plaque [8]. Chen indicated that higher sugar-sweetened-beverage consumption may disrupt the oral microecology, and reduce the variety of microbiota during childhood, leading to an increase in cariogenic genera [9]. Altered plaque microbial profiles in adolescents with different levels of sugar intake were shown [8,9]. However, no study has explored the impact of oral microbiota on the relationship between sugar consumption and risk of caries.\nWe hypothesize that the oral microbiota of high-caries-risk adolescents with a low-sugar diet may have a special microbial community composition and function, leading to the formation of a local cariogenic environment with limited sugar. The microbial community of low-caries-risk subjects with higher sugar consumption may have some special bacterial antagonism to the cariogenic bacteria resulting in a state that protects teeth against caries even in a high-carbohydrate environment. Therefore, the purpose of this present study is to compare the oral microbial profiles of high-caries-risk adolescents with a low intake of free sugars to those of caries-free adolescents with a high consumption of free sugars.", "2.1. Ethical Approval This study was approved by the Ethical Committee of the Hospital of Stomatology, Sun Yat-sen University, in Guangzhou, China (KQEC-2021-24-03). Before this study, we obtained written informed consent from the parents or legal guardians of all participants.\nThis study was approved by the Ethical Committee of the Hospital of Stomatology, Sun Yat-sen University, in Guangzhou, China (KQEC-2021-24-03). Before this study, we obtained written informed consent from the parents or legal guardians of all participants.\n2.2. Clinical Examination and Sampling Prior to recruiting participants for the study, we performed an initial screening. During the initial screening, we conducted a structured questionnaire survey and clinical examination. 
The contents of the structured questionnaire were split into three parts: (1) Demographic information: sex, age, residence, and whether the child was an only child in their family and their primary caregiver. (2) Socioeconomic information: family income, caregivers’ educational levels, and whether they had dental insurance. (3) Oral-health-related behaviors: tooth-brushing frequency, flossing habits, toothpaste containing fluoride or not, mouthwash or not, frequency of snack consumption, frequency of sweet-drink consumption, and dental attendance experience. Caries status and plaque index were also evaluated. A skilled dentist used the International Caries Detection and Assessment System II (ICDAS-II) to measure the subjects’ caries experience, and recorded it as decaying, missing, and filled teeth (DMFT). The clinical examination was conducted under artificial lighting in the classroom by utilizing a dental mirror and a Community Periodontal Index (CPI) probe. Codes 3–6 in the ICDAS system were recorded as decayed teeth. Inclusion criteria were: healthy 12–13 years old adolescents who were caries-free and without any history of dental caries (DMFT = 0) or caries-active (DMFT ≥ 5) [10]. The plaque index of the Silness and Loe scale was used to record dental plaque. The subjects were excluded on the basis of the following criteria: (1): adolescents who had taken antibiotics during the past three months, (2) adolescents with bacterial or viral infections in other sections of the body, and (3) teenagers with gingivitis or wearing orthodontic appliances. In total, 40 adolescents with a similar average family income, composed of 20 caries-active adolescents and 20 caries-free adolescents, were enrolled.\nBefore the sample collection, breakfast was not permitted, and participating adolescents were not allowed to brush their teeth for 12 h. All the teenagers were instructed to rinse their mouths before the exam. Prior to sampling, the teeth were gently dried using an air stream to prevent saliva contamination. The buccal surfaces of the anterior and posterior teeth were scraped clean of dental plaque using a sterile scaler, and the plaque was then promptly transferred to a sterile 1.5 mL Eppendorf tube. The samples were brought to the laboratory as quickly as possible and frozen at −80 °C before analysis.\nPrior to recruiting participants for the study, we performed an initial screening. During the initial screening, we conducted a structured questionnaire survey and clinical examination. The contents of the structured questionnaire were split into three parts: (1) Demographic information: sex, age, residence, and whether the child was an only child in their family and their primary caregiver. (2) Socioeconomic information: family income, caregivers’ educational levels, and whether they had dental insurance. (3) Oral-health-related behaviors: tooth-brushing frequency, flossing habits, toothpaste containing fluoride or not, mouthwash or not, frequency of snack consumption, frequency of sweet-drink consumption, and dental attendance experience. Caries status and plaque index were also evaluated. A skilled dentist used the International Caries Detection and Assessment System II (ICDAS-II) to measure the subjects’ caries experience, and recorded it as decaying, missing, and filled teeth (DMFT). The clinical examination was conducted under artificial lighting in the classroom by utilizing a dental mirror and a Community Periodontal Index (CPI) probe. 
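As a worked illustration of the inclusion rule described above (ICDAS codes 3–6 counted as decayed, DMFT = 0 for caries-free versus DMFT ≥ 5 for caries-active), the short Python sketch below classifies one subject from hypothetical per-tooth records; the field names are illustrative and do not reflect the study’s data format.

    # Hypothetical per-tooth records for one subject: ICDAS code plus missing/filled flags.
    teeth = [
        {"icdas": 0, "missing_due_to_caries": False, "filled": False},
        {"icdas": 4, "missing_due_to_caries": False, "filled": False},  # ICDAS 3-6 -> decayed
        {"icdas": 0, "missing_due_to_caries": False, "filled": True},
    ]

    decayed = sum(1 for t in teeth if 3 <= t["icdas"] <= 6)
    missing = sum(1 for t in teeth if t["missing_due_to_caries"])
    filled = sum(1 for t in teeth if t["filled"])
    dmft = decayed + missing + filled

    if dmft == 0:
        status = "caries-free"      # eligible for the caries-free group
    elif dmft >= 5:
        status = "caries-active"    # eligible for the caries-active group
    else:
        status = "excluded"         # 1 <= DMFT <= 4 falls outside both study groups

    print(dmft, status)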
Codes 3–6 in the ICDAS system were recorded as decayed teeth. Inclusion criteria were: healthy 12–13 years old adolescents who were caries-free and without any history of dental caries (DMFT = 0) or caries-active (DMFT ≥ 5) [10]. The plaque index of the Silness and Loe scale was used to record dental plaque. The subjects were excluded on the basis of the following criteria: (1): adolescents who had taken antibiotics during the past three months, (2) adolescents with bacterial or viral infections in other sections of the body, and (3) teenagers with gingivitis or wearing orthodontic appliances. In total, 40 adolescents with a similar average family income, composed of 20 caries-active adolescents and 20 caries-free adolescents, were enrolled.\nBefore the sample collection, breakfast was not permitted, and participating adolescents were not allowed to brush their teeth for 12 h. All the teenagers were instructed to rinse their mouths before the exam. Prior to sampling, the teeth were gently dried using an air stream to prevent saliva contamination. The buccal surfaces of the anterior and posterior teeth were scraped clean of dental plaque using a sterile scaler, and the plaque was then promptly transferred to a sterile 1.5 mL Eppendorf tube. The samples were brought to the laboratory as quickly as possible and frozen at −80 °C before analysis.\n2.3. Questionnaire Survey and Free-Sugar Intake Assessment The teenagers completed a questionnaire with three parts regarding their dietary habits and oral-health-related behaviors under the supervision of their parents. Part 1 was focused on the demographic information of students, Part 2 was primarily about students’ use of sugar-sweetened drinks (SSBs) and sweetened foods, and Part 3 was mostly concerned with oral health-related behaviors [11].\nPart 1: gender, age, ethnic group, height and weight, residence, family income, caregivers’ education levels and whether they had dental insurance, whether the child was an only child in their family and their primary caregiver.\nPart 2: included among SSBs were carbonated beverages, vegetable protein beverages, juice or juice drinks, tea beverages, sports beverages, and bubble tea. Cakes, desserts, candies (such as chocolate, Snickers, and Maltesers), and preserved fruits were all examples of foods that were sweetened (dried fruits and candied fruits). Other included as sources of free sugar were honey, flavored milk, and yogurt. For the typical intake of SSBs and sugary meals, the response options were 100, 200, 300, 400, and 500 mL/time, and 25, 50, 75, 100, 150, and 200 g/time.\nPart 3: the frequency of tooth brushing, flossing habits, mouthwash or not, and whether the toothpaste contained fluoride.\nThe assessment of free-sugar intake referred to the study conducted by Q Lin et al. [11]. Following that, subjects were divided into two groups: those who consumed less than 50 g of sugar per day, and those who consumed more than 50 g of sugar per day.\nThe teenagers completed a questionnaire with three parts regarding their dietary habits and oral-health-related behaviors under the supervision of their parents. 
Part 1 was focused on the demographic information of students, Part 2 was primarily about students’ use of sugar-sweetened drinks (SSBs) and sweetened foods, and Part 3 was mostly concerned with oral health-related behaviors [11].\nPart 1: gender, age, ethnic group, height and weight, residence, family income, caregivers’ education levels and whether they had dental insurance, whether the child was an only child in their family and their primary caregiver.\nPart 2: included among SSBs were carbonated beverages, vegetable protein beverages, juice or juice drinks, tea beverages, sports beverages, and bubble tea. Cakes, desserts, candies (such as chocolate, Snickers, and Maltesers), and preserved fruits were all examples of foods that were sweetened (dried fruits and candied fruits). Other included as sources of free sugar were honey, flavored milk, and yogurt. For the typical intake of SSBs and sugary meals, the response options were 100, 200, 300, 400, and 500 mL/time, and 25, 50, 75, 100, 150, and 200 g/time.\nPart 3: the frequency of tooth brushing, flossing habits, mouthwash or not, and whether the toothpaste contained fluoride.\nThe assessment of free-sugar intake referred to the study conducted by Q Lin et al. [11]. Following that, subjects were divided into two groups: those who consumed less than 50 g of sugar per day, and those who consumed more than 50 g of sugar per day.\n2.4. Metagenomic Analysis Each sample was subjected to the CTAB technique for the extraction of microbial DNA. The resulting pellet was redissolved into 50 L of TE buffer (10 mM Tris, 1 mM EDTA). The tube was filled with 1 μL of RNase A to digest the RNA, and it was then incubated at 37 °C for 15 min [12]. Nanodrop and agarose gel electrophoresis were used to evaluate the final DNA concentration and purity. The DNA libraries were created using the manufacturer’s instructions with the NEB Next® Ultra TM DNA Library Prep Kit (Illumina, San Diego, CA, USA). The qualifying libraries were then sequenced at Wekemo Tech Co., Ltd. in Shenzhen, China using an Illumina NovaSeq PE150 platform. We handled the samples in accordance with the accepted procedures. During DNA extraction and library assembly, blank controls were used. In this analysis, there was no contamination in any of the samples. The collected raw reads were processed. Each sample was sequenced, and human reads were then removed from the metagenomic dataset by using Bowtie2 to align the DNA sequences to the human genome. Lastly, host-originated readings and low-quality reads were eliminated to produce clean data. The clean sequences of all samples were annotated and categorized, and Bracken was used to predict the actual relative abundance of species. We gathered data at seven taxonomical hierarchies from each sample using the algorithms of the lowest common ancestor [13] and gene abundance estimates [14].\nUnsuccessful reads were filtered out after the unigene sequences had been aligned using DIAMOND (version 0.7.10.59, Benjamin Buchfink. Drost lab, Max Planck Institute for Biology, Tübingen, Germany) and the UniRef90 protein database. The gene abundance tables were obtained on the basis of the Uniref90 ID and the corresponding relationship of the KEGG database; then, the functional abundance profiles of each sample were drawn.\nEach sample was subjected to the CTAB technique for the extraction of microbial DNA. The resulting pellet was redissolved into 50 L of TE buffer (10 mM Tris, 1 mM EDTA). 
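Applying the 50 g/day cut-off described above requires converting the reported serving sizes and frequencies into grams of free sugar per day. The Python sketch below is a rough illustration only: the per-item sugar contents and the frequency handling are assumptions, not the values of the referenced assessment [11].

    # Hypothetical sugar content per serving; the real assessment uses item-specific values.
    SUGAR_PER_100ML = {"carbonated beverage": 11.0, "bubble tea": 9.0}  # g sugar per 100 mL (assumed)
    SUGAR_PER_100G = {"candy": 60.0, "cake": 35.0}                      # g sugar per 100 g (assumed)

    def daily_free_sugar(drinks, foods):
        """drinks: (item, mL per time, times per day); foods: (item, g per time, times per day)."""
        total = 0.0
        for item, ml, times in drinks:
            total += SUGAR_PER_100ML[item] * ml / 100.0 * times
        for item, grams, times in foods:
            total += SUGAR_PER_100G[item] * grams / 100.0 * times
        return total

    subject = daily_free_sugar(
        drinks=[("carbonated beverage", 300, 1)],  # 300 mL once a day
        foods=[("candy", 25, 2)],                  # 25 g twice a day
    )
    group = "high sugar (>50 g/day)" if subject > 50 else "low sugar (<50 g/day)"
    print(round(subject, 1), group)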
The tube was filled with 1 μL of RNase A to digest the RNA, and it was then incubated at 37 °C for 15 min [12]. Nanodrop and agarose gel electrophoresis were used to evaluate the final DNA concentration and purity. The DNA libraries were created using the manufacturer’s instructions with the NEB Next® Ultra TM DNA Library Prep Kit (Illumina, San Diego, CA, USA). The qualifying libraries were then sequenced at Wekemo Tech Co., Ltd. in Shenzhen, China using an Illumina NovaSeq PE150 platform. We handled the samples in accordance with the accepted procedures. During DNA extraction and library assembly, blank controls were used. In this analysis, there was no contamination in any of the samples. The collected raw reads were processed. Each sample was sequenced, and human reads were then removed from the metagenomic dataset by using Bowtie2 to align the DNA sequences to the human genome. Lastly, host-originated readings and low-quality reads were eliminated to produce clean data. The clean sequences of all samples were annotated and categorized, and Bracken was used to predict the actual relative abundance of species. We gathered data at seven taxonomical hierarchies from each sample using the algorithms of the lowest common ancestor [13] and gene abundance estimates [14].\nUnsuccessful reads were filtered out after the unigene sequences had been aligned using DIAMOND (version 0.7.10.59, Benjamin Buchfink. Drost lab, Max Planck Institute for Biology, Tübingen, Germany) and the UniRef90 protein database. The gene abundance tables were obtained on the basis of the Uniref90 ID and the corresponding relationship of the KEGG database; then, the functional abundance profiles of each sample were drawn.\n2.5. Data Visual Exhibition, and Statistical and Bioinformatic Analysis The visuals and statistical analysis in this study were conducted using R software (version 3.8, Ross Ihaka and Robert Gentleman. The University of Auckland, Auckland, New Zealand) and QIIME (version 2, Rob Knight. University of California San Diego, San Diego, CA, USA). Fisher’s exact test was used to compare the clinical variables between groups. The Dunn test was used to detect significant differences in relative abundance between groups, with differences considered to be significant when p < 0.05. LEfSe analysis was used to compare microbial profiles and community functional structure, and the logarithmic LDA score threshold was set to 2.0. The core microbiome at the species level was depicted using a Venn diagram. Diversity analysis was carried out using nonmetric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) based on Bray–Curtis distance.\nThe visuals and statistical analysis in this study were conducted using R software (version 3.8, Ross Ihaka and Robert Gentleman. The University of Auckland, Auckland, New Zealand) and QIIME (version 2, Rob Knight. University of California San Diego, San Diego, CA, USA). Fisher’s exact test was used to compare the clinical variables between groups. The Dunn test was used to detect significant differences in relative abundance between groups, with differences considered to be significant when p < 0.05. LEfSe analysis was used to compare microbial profiles and community functional structure, and the logarithmic LDA score threshold was set to 2.0. The core microbiome at the species level was depicted using a Venn diagram. 
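Host-read removal of the kind described above (aligning reads against the human genome with Bowtie2 and retaining the unaligned pairs) is typically a command-line step; the sketch below simply wraps such a call in Python. The index name, file names, and thread count are assumptions, and the exact options used in the study are not reported.

    import subprocess

    # Keep only read pairs that do NOT align to the human reference; the alignment output
    # itself is discarded. Paths and index name are hypothetical.
    cmd = [
        "bowtie2",
        "-x", "GRCh38_index",                 # prebuilt human genome index (assumed name)
        "-1", "sample_R1.fastq.gz",
        "-2", "sample_R2.fastq.gz",
        "--un-conc-gz", "sample_clean.fastq.gz",  # bowtie2 derives the two mate files from this name
        "-S", "/dev/null",                    # discard SAM output; only unaligned pairs are kept
        "-p", "8",
    ]
    subprocess.run(cmd, check=True)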
Diversity analysis was carried out using nonmetric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) based on Bray–Curtis distance.", "This study was approved by the Ethical Committee of the Hospital of Stomatology, Sun Yat-sen University, in Guangzhou, China (KQEC-2021-24-03). Before this study, we obtained written informed consent from the parents or legal guardians of all participants.", "Prior to recruiting participants for the study, we performed an initial screening. During the initial screening, we conducted a structured questionnaire survey and clinical examination. The contents of the structured questionnaire were split into three parts: (1) Demographic information: sex, age, residence, and whether the child was an only child in their family and their primary caregiver. (2) Socioeconomic information: family income, caregivers’ educational levels, and whether they had dental insurance. (3) Oral-health-related behaviors: tooth-brushing frequency, flossing habits, toothpaste containing fluoride or not, mouthwash or not, frequency of snack consumption, frequency of sweet-drink consumption, and dental attendance experience. Caries status and plaque index were also evaluated. A skilled dentist used the International Caries Detection and Assessment System II (ICDAS-II) to measure the subjects’ caries experience, and recorded it as decaying, missing, and filled teeth (DMFT). The clinical examination was conducted under artificial lighting in the classroom by utilizing a dental mirror and a Community Periodontal Index (CPI) probe. Codes 3–6 in the ICDAS system were recorded as decayed teeth. Inclusion criteria were: healthy 12–13 years old adolescents who were caries-free and without any history of dental caries (DMFT = 0) or caries-active (DMFT ≥ 5) [10]. The plaque index of the Silness and Loe scale was used to record dental plaque. The subjects were excluded on the basis of the following criteria: (1): adolescents who had taken antibiotics during the past three months, (2) adolescents with bacterial or viral infections in other sections of the body, and (3) teenagers with gingivitis or wearing orthodontic appliances. In total, 40 adolescents with a similar average family income, composed of 20 caries-active adolescents and 20 caries-free adolescents, were enrolled.\nBefore the sample collection, breakfast was not permitted, and participating adolescents were not allowed to brush their teeth for 12 h. All the teenagers were instructed to rinse their mouths before the exam. Prior to sampling, the teeth were gently dried using an air stream to prevent saliva contamination. The buccal surfaces of the anterior and posterior teeth were scraped clean of dental plaque using a sterile scaler, and the plaque was then promptly transferred to a sterile 1.5 mL Eppendorf tube. The samples were brought to the laboratory as quickly as possible and frozen at −80 °C before analysis.", "The teenagers completed a questionnaire with three parts regarding their dietary habits and oral-health-related behaviors under the supervision of their parents. 
Part 1 was focused on the demographic information of students, Part 2 was primarily about students’ use of sugar-sweetened drinks (SSBs) and sweetened foods, and Part 3 was mostly concerned with oral health-related behaviors [11].\nPart 1: gender, age, ethnic group, height and weight, residence, family income, caregivers’ education levels and whether they had dental insurance, whether the child was an only child in their family and their primary caregiver.\nPart 2: included among SSBs were carbonated beverages, vegetable protein beverages, juice or juice drinks, tea beverages, sports beverages, and bubble tea. Cakes, desserts, candies (such as chocolate, Snickers, and Maltesers), and preserved fruits were all examples of foods that were sweetened (dried fruits and candied fruits). Other included as sources of free sugar were honey, flavored milk, and yogurt. For the typical intake of SSBs and sugary meals, the response options were 100, 200, 300, 400, and 500 mL/time, and 25, 50, 75, 100, 150, and 200 g/time.\nPart 3: the frequency of tooth brushing, flossing habits, mouthwash or not, and whether the toothpaste contained fluoride.\nThe assessment of free-sugar intake referred to the study conducted by Q Lin et al. [11]. Following that, subjects were divided into two groups: those who consumed less than 50 g of sugar per day, and those who consumed more than 50 g of sugar per day.", "Each sample was subjected to the CTAB technique for the extraction of microbial DNA. The resulting pellet was redissolved into 50 L of TE buffer (10 mM Tris, 1 mM EDTA). The tube was filled with 1 μL of RNase A to digest the RNA, and it was then incubated at 37 °C for 15 min [12]. Nanodrop and agarose gel electrophoresis were used to evaluate the final DNA concentration and purity. The DNA libraries were created using the manufacturer’s instructions with the NEB Next® Ultra TM DNA Library Prep Kit (Illumina, San Diego, CA, USA). The qualifying libraries were then sequenced at Wekemo Tech Co., Ltd. in Shenzhen, China using an Illumina NovaSeq PE150 platform. We handled the samples in accordance with the accepted procedures. During DNA extraction and library assembly, blank controls were used. In this analysis, there was no contamination in any of the samples. The collected raw reads were processed. Each sample was sequenced, and human reads were then removed from the metagenomic dataset by using Bowtie2 to align the DNA sequences to the human genome. Lastly, host-originated readings and low-quality reads were eliminated to produce clean data. The clean sequences of all samples were annotated and categorized, and Bracken was used to predict the actual relative abundance of species. We gathered data at seven taxonomical hierarchies from each sample using the algorithms of the lowest common ancestor [13] and gene abundance estimates [14].\nUnsuccessful reads were filtered out after the unigene sequences had been aligned using DIAMOND (version 0.7.10.59, Benjamin Buchfink. Drost lab, Max Planck Institute for Biology, Tübingen, Germany) and the UniRef90 protein database. The gene abundance tables were obtained on the basis of the Uniref90 ID and the corresponding relationship of the KEGG database; then, the functional abundance profiles of each sample were drawn.", "The visuals and statistical analysis in this study were conducted using R software (version 3.8, Ross Ihaka and Robert Gentleman. The University of Auckland, Auckland, New Zealand) and QIIME (version 2, Rob Knight. 
University of California San Diego, San Diego, CA, USA). Fisher’s exact test was used to compare the clinical variables between groups. The Dunn test was used to detect significant differences in relative abundance between groups, with differences considered to be significant when p < 0.05. LEfSe analysis was used to compare microbial profiles and community functional structure, and the logarithmic LDA score threshold was set to 2.0. The core microbiome at the species level was depicted using a Venn diagram. Diversity analysis was carried out using nonmetric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) based on Bray–Curtis distance.", "In total, 40 male and female teenagers aged 12–13 years were recruited; 45% were boys, and 55% were girls. The samples were divided into four groups: caries individuals with low sugar consumption (CL group), caries individuals with high sugar consumption (CH group), caries-free individuals with low sugar consumption (FL group), and caries-free individuals with high sugar consumption (FH group). Table 1 reports the characteristics of the demographic variables and the oral hygiene habits of the included subjects. As presented in Table 1, there was no difference in the clinical information that could be confounding variables among the four groups.\nAfter data trimming and filtering out the low-quality data and host contamination, a total of 651,389,975 (195.43 GB) reads were acquired, and the average clean data were 4.89 GB per sample. A total of 22 phyla, 36 classes, 93 orders, 214 families, 761 genera, and 2235 species were detected in the 40 samples. Figure 1a–c shows the top 20 phyla, genera, and species, respectively, with relatively high abundance in the four groups.\n3.1. Microbiota Composition Based on Sugar Consumption and Caries Status The bacterial distribution within the different groups was analyzed. We compared the relative abundance of the top 20 bacteria in the four groups with some interesting findings. Figure 2 shows the relative abundance of the phyla and species that were significantly higher in the FH and CL groups compared with the three other groups. As illustrated in Figure 2, at the phylum level, the CL group showed a higher relative abundance of Fusobacteria. At the genus level, the CL group was associated with a greater abundance of Lactobacillus. At the species level, in the CL group, Actinomyces gerencseriae and Actinomyces dentails had a higher relative abundance of the top 20 species covered. Streptococcus mutans, Candida albicans, Scardovia wiggsiae, and Propionibacterium acidifaciens were also more abundant in the CL group out of the top 20 species covered. However, eight species were more abundant in the FH group, including Capnocytophaga gingivalis and Porphyromonas gingivalis (the figure titled “Significant differences in relative abundance of plaque microbial taxa among the four groups by LEfSe analysis” was too large to place in the manuscript, so we were uploaded it as Supplemental Material Figure S2).\nVenn diagram was used to illustrate the shared and unique taxa for the species level among groups. A total of 2235 species were identified in our study. The overlap region S indicates the microbiome shared in all samples among the four groups, and 933 species were detected in all of the four groups. 
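The shared and group-specific species counts reported here (933 species common to all four groups, plus group-unique species) reduce to simple set operations on the per-group species lists. A minimal Python sketch with hypothetical species names:

    # Hypothetical species sets detected in each group (CL, CH, FL, FH)
    groups = {
        "CL": {"S. mutans", "Lactobacillus sp.", "A. gerencseriae", "S. wiggsiae"},
        "CH": {"S. mutans", "Lactobacillus sp.", "C. gingivalis"},
        "FL": {"S. mutans", "C. gingivalis", "P. gingivalis"},
        "FH": {"S. mutans", "C. gingivalis", "P. gingivalis", "Capnocytophaga sp."},
    }

    # Core microbiome: species detected in all four groups
    core = set.intersection(*groups.values())

    # Unique species: detected in exactly one group
    unique = {
        name: taxa - set.union(*(g for other, g in groups.items() if other != name))
        for name, taxa in groups.items()
    }

    print(len(core), core)
    for name, taxa in unique.items():
        print(name, taxa)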
There were 84, 207, 75 and 250 unique species detected in the CL, CH, FH, and FL groups, respectively (Figure 3).\nPCoA (Figure 4a) and NMDS (Figure 4b) did not reveal differences in taxonomic unit structural richness among groups. The results demonstrate that the microbial diversity of the four groups was similar.\nThe bacterial distribution within the different groups was analyzed. We compared the relative abundance of the top 20 bacteria in the four groups with some interesting findings. Figure 2 shows the relative abundance of the phyla and species that were significantly higher in the FH and CL groups compared with the three other groups. As illustrated in Figure 2, at the phylum level, the CL group showed a higher relative abundance of Fusobacteria. At the genus level, the CL group was associated with a greater abundance of Lactobacillus. At the species level, in the CL group, Actinomyces gerencseriae and Actinomyces dentails had a higher relative abundance of the top 20 species covered. Streptococcus mutans, Candida albicans, Scardovia wiggsiae, and Propionibacterium acidifaciens were also more abundant in the CL group out of the top 20 species covered. However, eight species were more abundant in the FH group, including Capnocytophaga gingivalis and Porphyromonas gingivalis (the figure titled “Significant differences in relative abundance of plaque microbial taxa among the four groups by LEfSe analysis” was too large to place in the manuscript, so we were uploaded it as Supplemental Material Figure S2).\nVenn diagram was used to illustrate the shared and unique taxa for the species level among groups. A total of 2235 species were identified in our study. The overlap region S indicates the microbiome shared in all samples among the four groups, and 933 species were detected in all of the four groups. There were 84, 207, 75 and 250 unique species detected in the CL, CH, FH, and FL groups, respectively (Figure 3).\nPCoA (Figure 4a) and NMDS (Figure 4b) did not reveal differences in taxonomic unit structural richness among groups. The results demonstrate that the microbial diversity of the four groups was similar.\n3.2. Microbiota Composition Based on Sugar Consumption and Caries Status We annotated the oral gene catalogs with the Kyoto Encyclopedia of Genes and Genomes (KEGG) database in order to compare the functional characteristics of the microbiota between groups. The metabolic pathway that includes 5923 annotated genes was the most abundant of the six main KEGG pathways that were found (Figure 5a). There were 47 Level 2 pathways, with the metabolism of the carbohydrates being the most abundant (Figure 5b). The metabolism of cofactors and vitamins, and amino acid metabolism were other common markers in the Level 2 pathways. The top 10 most abundant KOs were discovered by performing BLAST against the KEGG Orthology (KO) database, and they are shown in Figure 5c.\nLEfSe analysis of the gene profiles showed enormous differences at the functional level of the groups. At the second level, enrichment in membrane transport was observed in the CL group compared with the other groups. 
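The per-taxon abundance comparisons underlying these results were made with the Dunn test in the study; as a simplified stand-in, the sketch below runs a Kruskal–Wallis screen followed by pairwise Mann–Whitney tests with Holm correction (an actual Dunn post hoc test is available in third-party packages such as scikit-posthocs). The abundance values and group sizes are hypothetical.

    from itertools import combinations
    from scipy.stats import kruskal, mannwhitneyu
    from statsmodels.stats.multitest import multipletests

    # Hypothetical relative abundances of one taxon in the four groups
    abund = {
        "CL": [0.052, 0.061, 0.048, 0.070, 0.055],
        "CH": [0.030, 0.041, 0.028, 0.035, 0.033],
        "FL": [0.012, 0.015, 0.010, 0.018, 0.011],
        "FH": [0.009, 0.014, 0.008, 0.016, 0.010],
    }

    # Global test across the four groups
    h_stat, p_global = kruskal(*abund.values())
    print(f"Kruskal-Wallis p = {p_global:.4g}")

    # Pairwise comparisons with Holm correction (a stand-in for the Dunn post hoc test)
    pairs = list(combinations(abund, 2))
    raw_p = [mannwhitneyu(abund[a], abund[b], alternative="two-sided").pvalue for a, b in pairs]
    reject, adj_p, _, _ = multipletests(raw_p, method="holm")
    for (a, b), p, sig in zip(pairs, adj_p, reject):
        print(f"{a} vs {b}: adjusted p = {p:.4g}, significant = {sig}")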
At the third level, the synthesis and degradation of ketone bodies, the phosphotransferase system (PTS), carbapenem biosynthesis, quorum sensing, retrograde endocannabinoid signaling, basal transcription factors, chlorocyclohexane and chlorobenzene degradation, and the NOD-like receptor signaling pathway were enriched in the CL group, while lipopolysaccharide biosynthesis and isoquinoline alkaloid biosynthesis exhibited a higher level in the FH group (Figure 6a).\nIn total, 10 KEGG metabolic modules were positively associated with the CL group’s microbiomes, while 22 KEGG metabolic modules had a higher proportion in the FH group (Figure 6b). Phosphotransferase sugar uptake system N-acetylgalactosamine (M00277), numerous two-component histidine kinase-response regulator systems including VicK/VicR (cell wall metabolism) (M00459), LytS-LytR (M00492), and CiaH/CiaR (M00521), and several trace metal transport systems, including those for manganese/zinc (M00791) and nickel (M00440), were more enriched in the CL group. Modules associated with biosynthesis were more abundant in the FH group, including serine (M00020), ectoine (M00033), lipopolysaccharide (M00060), CMP-KDO (M00063), ascorbate (M00114), NAD (M00115), ubiquinone (M00117), biotin (M00123), pyridoxal (M00124), Riboflavin (M00125), aminoacyl-tRNA (M00359), biotin (M00573, M00577), and aurachin (M00848). The FH group also had enriched xenobiotic efflux pump transporters, including multidrug resistance, AcrAB-TolC/SmeDEF (M00647), multidrug resistance, AmeABC (M00699), multidrug resistance, MexAB-OprM (M00718) (the figure titled “LEfSe analysis was used to identify Level 3 and module level with significant differences in relative abundance among the four groups” was too large to place in the manuscript, so we uploaded it as Supplementary Figure S2).\nWe annotated the oral gene catalogs with the Kyoto Encyclopedia of Genes and Genomes (KEGG) database in order to compare the functional characteristics of the microbiota between groups. The metabolic pathway that includes 5923 annotated genes was the most abundant of the six main KEGG pathways that were found (Figure 5a). There were 47 Level 2 pathways, with the metabolism of the carbohydrates being the most abundant (Figure 5b). The metabolism of cofactors and vitamins, and amino acid metabolism were other common markers in the Level 2 pathways. The top 10 most abundant KOs were discovered by performing BLAST against the KEGG Orthology (KO) database, and they are shown in Figure 5c.\nLEfSe analysis of the gene profiles showed enormous differences at the functional level of the groups. At the second level, enrichment in membrane transport was observed in the CL group compared with the other groups. At the third level, the synthesis and degradation of ketone bodies, the phosphotransferase system (PTS), carbapenem biosynthesis, quorum sensing, retrograde endocannabinoid signaling, basal transcription factors, chlorocyclohexane and chlorobenzene degradation, and the NOD-like receptor signaling pathway were enriched in the CL group, while lipopolysaccharide biosynthesis and isoquinoline alkaloid biosynthesis exhibited a higher level in the FH group (Figure 6a).\nIn total, 10 KEGG metabolic modules were positively associated with the CL group’s microbiomes, while 22 KEGG metabolic modules had a higher proportion in the FH group (Figure 6b). 
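LEfSe combines nonparametric screening with a linear discriminant analysis (LDA) effect-size estimate and, as noted in the methods above, features were retained at a logarithmic LDA score of 2.0. The sketch below is only a crude illustration of the LDA-score-and-threshold idea using scikit-learn on simulated data; it is not the LEfSe implementation, and the scoring rule shown is an assumption made for demonstration purposes.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical feature table: rows = samples, columns = functional features (e.g., KEGG modules)
    rng = np.random.default_rng(0)
    X = rng.random((20, 5))
    X[:5, 0] += 0.5                      # make feature 0 artificially higher in the first group
    y = np.repeat(["CL", "CH", "FL", "FH"], 5)

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)

    # Crude per-feature "effect size": the largest absolute discriminant coefficient,
    # put on a log10 scale so a threshold of 2.0 mirrors the cut-off quoted above.
    effect = np.log10(np.abs(lda.coef_).max(axis=0) + 1.0)
    for i, score in enumerate(effect):
        flag = "kept" if score >= 2.0 else "dropped"
        print(f"feature {i}: score {score:.2f} ({flag})")

To reproduce the reported scores, the published LEfSe tool itself should be used rather than this simplified proxy.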
Phosphotransferase sugar uptake system N-acetylgalactosamine (M00277), numerous two-component histidine kinase-response regulator systems including VicK/VicR (cell wall metabolism) (M00459), LytS-LytR (M00492), and CiaH/CiaR (M00521), and several trace metal transport systems, including those for manganese/zinc (M00791) and nickel (M00440), were more enriched in the CL group. Modules associated with biosynthesis were more abundant in the FH group, including serine (M00020), ectoine (M00033), lipopolysaccharide (M00060), CMP-KDO (M00063), ascorbate (M00114), NAD (M00115), ubiquinone (M00117), biotin (M00123), pyridoxal (M00124), Riboflavin (M00125), aminoacyl-tRNA (M00359), biotin (M00573, M00577), and aurachin (M00848). The FH group also had enriched xenobiotic efflux pump transporters, including multidrug resistance, AcrAB-TolC/SmeDEF (M00647), multidrug resistance, AmeABC (M00699), multidrug resistance, MexAB-OprM (M00718) (the figure titled “LEfSe analysis was used to identify Level 3 and module level with significant differences in relative abundance among the four groups” was too large to place in the manuscript, so we uploaded it as Supplementary Figure S2).", "The bacterial distribution within the different groups was analyzed. We compared the relative abundance of the top 20 bacteria in the four groups with some interesting findings. Figure 2 shows the relative abundance of the phyla and species that were significantly higher in the FH and CL groups compared with the three other groups. As illustrated in Figure 2, at the phylum level, the CL group showed a higher relative abundance of Fusobacteria. At the genus level, the CL group was associated with a greater abundance of Lactobacillus. At the species level, in the CL group, Actinomyces gerencseriae and Actinomyces dentails had a higher relative abundance of the top 20 species covered. Streptococcus mutans, Candida albicans, Scardovia wiggsiae, and Propionibacterium acidifaciens were also more abundant in the CL group out of the top 20 species covered. However, eight species were more abundant in the FH group, including Capnocytophaga gingivalis and Porphyromonas gingivalis (the figure titled “Significant differences in relative abundance of plaque microbial taxa among the four groups by LEfSe analysis” was too large to place in the manuscript, so we were uploaded it as Supplemental Material Figure S2).\nVenn diagram was used to illustrate the shared and unique taxa for the species level among groups. A total of 2235 species were identified in our study. The overlap region S indicates the microbiome shared in all samples among the four groups, and 933 species were detected in all of the four groups. There were 84, 207, 75 and 250 unique species detected in the CL, CH, FH, and FL groups, respectively (Figure 3).\nPCoA (Figure 4a) and NMDS (Figure 4b) did not reveal differences in taxonomic unit structural richness among groups. The results demonstrate that the microbial diversity of the four groups was similar.", "We annotated the oral gene catalogs with the Kyoto Encyclopedia of Genes and Genomes (KEGG) database in order to compare the functional characteristics of the microbiota between groups. The metabolic pathway that includes 5923 annotated genes was the most abundant of the six main KEGG pathways that were found (Figure 5a). There were 47 Level 2 pathways, with the metabolism of the carbohydrates being the most abundant (Figure 5b). 
The metabolism of cofactors and vitamins, and amino acid metabolism were other common markers in the Level 2 pathways. The top 10 most abundant KOs were discovered by performing BLAST against the KEGG Orthology (KO) database, and they are shown in Figure 5c.\nLEfSe analysis of the gene profiles showed enormous differences at the functional level of the groups. At the second level, enrichment in membrane transport was observed in the CL group compared with the other groups. At the third level, the synthesis and degradation of ketone bodies, the phosphotransferase system (PTS), carbapenem biosynthesis, quorum sensing, retrograde endocannabinoid signaling, basal transcription factors, chlorocyclohexane and chlorobenzene degradation, and the NOD-like receptor signaling pathway were enriched in the CL group, while lipopolysaccharide biosynthesis and isoquinoline alkaloid biosynthesis exhibited a higher level in the FH group (Figure 6a).\nIn total, 10 KEGG metabolic modules were positively associated with the CL group’s microbiomes, while 22 KEGG metabolic modules had a higher proportion in the FH group (Figure 6b). Phosphotransferase sugar uptake system N-acetylgalactosamine (M00277), numerous two-component histidine kinase-response regulator systems including VicK/VicR (cell wall metabolism) (M00459), LytS-LytR (M00492), and CiaH/CiaR (M00521), and several trace metal transport systems, including those for manganese/zinc (M00791) and nickel (M00440), were more enriched in the CL group. Modules associated with biosynthesis were more abundant in the FH group, including serine (M00020), ectoine (M00033), lipopolysaccharide (M00060), CMP-KDO (M00063), ascorbate (M00114), NAD (M00115), ubiquinone (M00117), biotin (M00123), pyridoxal (M00124), Riboflavin (M00125), aminoacyl-tRNA (M00359), biotin (M00573, M00577), and aurachin (M00848). The FH group also had enriched xenobiotic efflux pump transporters, including multidrug resistance, AcrAB-TolC/SmeDEF (M00647), multidrug resistance, AmeABC (M00699), multidrug resistance, MexAB-OprM (M00718) (the figure titled “LEfSe analysis was used to identify Level 3 and module level with significant differences in relative abundance among the four groups” was too large to place in the manuscript, so we uploaded it as Supplementary Figure S2).", "The present study employed metagenomic sequencing to characterize the tooth plaque microbiota in a group of Chinese adolescents with a focus on microbiomic profiles and function in caries subjects with low sugar consumption and caries-free subjects with high sugar consumption. The main findings were that oral microbial profiles and function impact the relationship between sugar consumption and caries.\nFor this pilot study, we compared the microbial profiles of the subjects with comparable backgrounds. Since all the adolescents resided in the same city (Foshan), there were no racial, cultural, demographic, or geographical distinctions in the selected sample. Examined individuals had an average monthly family income between CNY 3000 and 7000, so it was reasonable to assume that their children had a comparable socioeconomic status. In addition, the water in Foshan has not been fluoridated. Therefore, the individuals’ fluoride intake comes from fluoride-containing toothpaste and mouthwash. Since none of the individuals used mouthwash, the only available source of fluoride for the subjects was fluoride toothpaste. We found no statistically significant difference among the four groups in their use of fluoride toothpaste. 
The same held true for oral hygiene habits, the plaque index, the pH value of saliva, and the buffering capacity of saliva. We, therefore, presumed that all children had comparable backgrounds.\nIn high caries risk subjects with a low sugar consumption, the relative abundance of Lactobacillus was increased, as well as the relative abundance of several species including Actinomyces dentails, Actinomyces gerencseriae, Streptococcus mutans, and Candida albicans. These species are frequently associated with caries progression [15]. Consistent with previous studies, it is not surprising to find a high relative abundance of Lactobacillus in the CL group [16]. Before S. mutans was found, it had been thought that Lactobacillus was the main cause of caries because there was a strong association between the amount of Lactobacillus in saliva and the severity of caries [5]. Lactobacillus is currently thought to not be the caries initiator, but that it plays an important role in caries progression [17].\nThese findings suggest that even individuals with low sugar consumption may have a high caries risk when the abundance of some cariogenic bacteria, such as Actinomyces gerencseriae, Scardovia wiggsiae, Propionibacterium acidifaciens, Streptococcus mutans, and Candida albicans, is increased. Clinical studies linked high levels of Streptococcus mutans (S. mutans) to high caries risk due to its acidogenicity, acidification, and ability to synthesize exopolysaccharides in dental plaques [12]. Candida albicans, a commensal oral fungus, can form pathogenic mixed-species biofilms with S. mutans mediated by extracellular polysaccharides (EPSs), enhancing the virulence leading to the onset of caries. Previous studies confirmed that, when C. albicans is combined with S. mutans in a rat model, it causes more severe dental caries [18,19], and coinfection with S. mutans and C. albicans is highly related with severe early child caries [20,21]. The present findings further support this result, indicating that the increased abundance of S. mutans and C.albicans is closely related to the increased risk of caries.\nScardovia wiggsiae (S. wiggsiae) was described as a new caries pathogen that was detected from caries in children and adolescents [22,23]. S. wiggsiae showed acid tolerance and acidogenicity, increasing its ecological competitiveness in acidic conditions such as caries lesions [24]. S. wiggsiae exhibited minimal caries induction by itself. However, when co-inoculated with S. mutans, significant cavity formation was observed [25], and Eriksson et al. found that adolescents with active caries were characterized by a bacterial complex that contained S. mutans and S. wiggsiae [23]. In agreement with previous studies, we also observed higher relative abundance of S. wiggsiae in the CL groups as compared with the three other groups. Other bacteria found in our study to be more abundant in the CL group were Actinomyces gerencseriae and Propionibacterium acidifaciens. These two bacteria were positively associated with caries [26,27]. Similar findings were described in our previous study with pit and fissure caries [14].\nIn our study, the FH group had a higher abundance of Capnocytophaga gingivalis (C. gingivalis) and Porphyromonas gingivalis. Consistent with our findings, C. gingivalis is frequently detected in subgingival plaque associated with oral health [16]. Porphyromonas gingivalis (P. 
gingivalis) is a common periodontal pathogen that appears in greater proportion in periodontitis than it does in healthy subjects [28]. Additionally, P. gingivalis salivary levels in a caries-free group were considerably greater than those in the periodontally healthy group with caries, according to Y Iwano [29]. A possible explanation is that P. gingivalis is antagonistic to S. mutans [30]. The former can attenuate the virulence properties of the latter by interfering with its Com quorum-sensing system [31]. As a result, FH subjects should have a healthy microbial community capable of resisting the cariogenic microbial population.\nThe result of the functional analyses indicates significant differences among groups at various functional levels. A quorum-sensing system is a cell-to-cell signaling system allowing for bacteria to develop coordinated social behavior [32]. The quorum-sensing system exhibited a higher level in the CL group, suggesting active signaling inside the biofilm. We also found an enrichment of the phosphotransferase sugar-uptake and phosphotransferase systems in the CL group, indicating enrichment for the sugar-uptake, transport, and metabolization pathways. Moreover, a notable enrichment of several two-component response–regulator pairs that are responsible for transcriptional changes in reaction to environmental stimuli was observed in the CL group. Relman DA pioneered the theory that increases in sugar catabolism potential are key markers of caries development and progression [33]. According to our research results and those of previous studies, we suggest the existence of a cariogenic microbial interaction network. We speculate that sugar uptake, transport, and metabolization capacity, and the response to the environmental stimuli of the microbial community increased in CL group subjects. Therefore, in the presence of limited sugar, the cariogenic microbial community of the CL group could respond quickly and metabolize sugar to rapidly produce acid, leading to a low-pH microenvironment within the biofilm. Additionally, the microbial community of FH subjects had self-stabilizing functional potential in which sugar compounds were not easily taken up by the microorganism, preventing the associated pH decrease.", "In conclusion, findings from both previous studies and the current study lend indirect support to the notion that the caries phenotype is a disorder of the microbial community metabolism. In the CL group, even with a lower sugar intake, the subjects had a high caries risk because of the existence of a cariogenic microbial community and increased sugar catabolism potential. Even though the subjects in the FH group take up more sugar, they had a low caries risk due to a healthy microbial community being capable of resisting the cariogenic microbial population and preventing pH reduction by inhibiting sugar uptake. The present study has the obvious shortcoming of a limited sample size that only permits preliminary findings. In addition, sugar consumption data were collected via questionnaire, and some free sugars were not included, such as sucrose in homemade dishes. Given this, it is prudent not to overstate the consequences of the findings. Future investigations on a larger scale are required to confirm and validate our findings." ]
[ "intro", null, null, null, null, null, null, "results", null, null, "discussion", "conclusions" ]
[ "oral microbiota", "sugar consumption", "dental caries" ]
1. Introduction: Dental caries characterized by the demineralization of tooth tissue resulting from the fermentation of dietary carbohydrates by acid-producing bacteria [1]. The existence of cariogenic microorganisms is a prerequisite for the development of dental caries, and fermentable carbohydrates, particularly sugars, are key contributors to dysbiosis in the oral microbiota that promotes caries. Consuming sugar has negative effects on oral health, as cariogenic bacteria convert monosaccharides into acids that are detrimental to the teeth. Therefore, it is generally accepted that the excessive and frequent intake of refined sugar leads to the development of caries [2,3,4]. However, the relationship between the amount of sugar intake and caries risk is not always consistent [4,5]. In fact, some individuals with low sugar consumption still have a high caries risk, while some others are caries-free even with higher sugar consumption [4,5]. Many factors may influence the relationship between them, such as high socioeconomic status, exposure to fluoridated water, frequent tooth brushing, the availability of sugar for cariogenic bacterial digestion, and the availability of saliva to counteract bacteria and acids [5,6]. Previous studies suggested that oral ecological microbial profiles are corelated with the amount of sugar consumption [7]. Keller et al. found that some acidogenic and acid-tolerant caries-associated organisms were less abundant in a low-sugar group’s dental plaque [8]. Chen indicated that higher sugar-sweetened-beverage consumption may disrupt the oral microecology, and reduce the variety of microbiota during childhood, leading to an increase in cariogenic genera [9]. Altered plaque microbial profiles in adolescents with different levels of sugar intake were shown [8,9]. However, no study has explored the impact of oral microbiota on the relationship between sugar consumption and risk of caries. We hypothesize that the oral microbiota of high-caries-risk adolescents with a low-sugar diet may have a special microbial community composition and function, leading to the formation of a local cariogenic environment with limited sugar. The microbial community of low-caries-risk subjects with higher sugar consumption may have some special bacterial antagonism to the cariogenic bacteria resulting in a state that protects teeth against caries even in a high-carbohydrate environment. Therefore, the purpose of this present study is to compare the oral microbial profiles of high-caries-risk adolescents with a low intake of free sugars to those of caries-free adolescents with a high consumption of free sugars. 2. Materials and Methods: 2.1. Ethical Approval This study was approved by the Ethical Committee of the Hospital of Stomatology, Sun Yat-sen University, in Guangzhou, China (KQEC-2021-24-03). Before this study, we obtained written informed consent from the parents or legal guardians of all participants. This study was approved by the Ethical Committee of the Hospital of Stomatology, Sun Yat-sen University, in Guangzhou, China (KQEC-2021-24-03). Before this study, we obtained written informed consent from the parents or legal guardians of all participants. 2.2. Clinical Examination and Sampling Prior to recruiting participants for the study, we performed an initial screening. During the initial screening, we conducted a structured questionnaire survey and clinical examination. 
The contents of the structured questionnaire were split into three parts: (1) Demographic information: sex, age, residence, and whether the child was an only child in their family and their primary caregiver. (2) Socioeconomic information: family income, caregivers’ educational levels, and whether they had dental insurance. (3) Oral-health-related behaviors: tooth-brushing frequency, flossing habits, toothpaste containing fluoride or not, mouthwash or not, frequency of snack consumption, frequency of sweet-drink consumption, and dental attendance experience. Caries status and plaque index were also evaluated. A skilled dentist used the International Caries Detection and Assessment System II (ICDAS-II) to measure the subjects’ caries experience, and recorded it as decaying, missing, and filled teeth (DMFT). The clinical examination was conducted under artificial lighting in the classroom by utilizing a dental mirror and a Community Periodontal Index (CPI) probe. Codes 3–6 in the ICDAS system were recorded as decayed teeth. Inclusion criteria were: healthy 12–13 years old adolescents who were caries-free and without any history of dental caries (DMFT = 0) or caries-active (DMFT ≥ 5) [10]. The plaque index of the Silness and Loe scale was used to record dental plaque. The subjects were excluded on the basis of the following criteria: (1): adolescents who had taken antibiotics during the past three months, (2) adolescents with bacterial or viral infections in other sections of the body, and (3) teenagers with gingivitis or wearing orthodontic appliances. In total, 40 adolescents with a similar average family income, composed of 20 caries-active adolescents and 20 caries-free adolescents, were enrolled. Before the sample collection, breakfast was not permitted, and participating adolescents were not allowed to brush their teeth for 12 h. All the teenagers were instructed to rinse their mouths before the exam. Prior to sampling, the teeth were gently dried using an air stream to prevent saliva contamination. The buccal surfaces of the anterior and posterior teeth were scraped clean of dental plaque using a sterile scaler, and the plaque was then promptly transferred to a sterile 1.5 mL Eppendorf tube. The samples were brought to the laboratory as quickly as possible and frozen at −80 °C before analysis. Prior to recruiting participants for the study, we performed an initial screening. During the initial screening, we conducted a structured questionnaire survey and clinical examination. The contents of the structured questionnaire were split into three parts: (1) Demographic information: sex, age, residence, and whether the child was an only child in their family and their primary caregiver. (2) Socioeconomic information: family income, caregivers’ educational levels, and whether they had dental insurance. (3) Oral-health-related behaviors: tooth-brushing frequency, flossing habits, toothpaste containing fluoride or not, mouthwash or not, frequency of snack consumption, frequency of sweet-drink consumption, and dental attendance experience. Caries status and plaque index were also evaluated. A skilled dentist used the International Caries Detection and Assessment System II (ICDAS-II) to measure the subjects’ caries experience, and recorded it as decaying, missing, and filled teeth (DMFT). The clinical examination was conducted under artificial lighting in the classroom by utilizing a dental mirror and a Community Periodontal Index (CPI) probe. 
2.3. Questionnaire Survey and Free-Sugar Intake Assessment: The adolescents completed a three-part questionnaire on their dietary habits and oral-health-related behaviors under the supervision of their parents. Part 1 covered the students' demographic information, Part 2 addressed their consumption of sugar-sweetened beverages (SSBs) and sweetened foods, and Part 3 addressed oral-health-related behaviors [11]. Part 1: gender, age, ethnic group, height and weight, residence, family income, caregivers' education levels, dental insurance, whether the child was an only child, and their primary caregiver. Part 2: the SSBs included carbonated beverages, vegetable protein beverages, juice or juice drinks, tea beverages, sports beverages, and bubble tea; sweetened foods included cakes, desserts, candies (such as chocolate, Snickers, and Maltesers), and preserved fruits (dried fruits and candied fruits); other sources of free sugar included honey, flavored milk, and yogurt. For the typical intake of SSBs and sugary foods, the response options were 100, 200, 300, 400, and 500 mL per occasion and 25, 50, 75, 100, 150, and 200 g per occasion, respectively. Part 3: the frequency of tooth brushing, flossing habits, use of mouthwash, and whether the toothpaste contained fluoride. Free-sugar intake was assessed following the method of Q. Lin et al. [11]. Subjects were then divided into two groups: those who consumed less than 50 g of free sugar per day and those who consumed more than 50 g per day.
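For illustration only, a plausible frequency-times-amount calculation of daily free-sugar intake and the 50 g/day split is sketched below. The exact scoring used in the study follows Lin et al. [11] and is not reproduced here; the column names and sugar-content factors are hypothetical.

```python
# Minimal sketch of estimating daily free-sugar intake from questionnaire answers.
# Column names, sugar-content factors, and the frequency coding are assumptions;
# the study itself followed the scoring method of Lin et al. [11].
import pandas as pd

SUGAR_PER_100 = {"ssb_ml": 10.0, "sweet_food_g": 40.0}  # assumed g of free sugar per 100 mL / 100 g

def daily_free_sugar(row):
    ssb = row["ssb_times_per_day"] * row["ssb_ml_per_time"] / 100 * SUGAR_PER_100["ssb_ml"]
    food = row["sweets_times_per_day"] * row["sweets_g_per_time"] / 100 * SUGAR_PER_100["sweet_food_g"]
    return ssb + food

answers = pd.DataFrame({
    "ssb_times_per_day": [0.5, 2.0],
    "ssb_ml_per_time": [300, 500],
    "sweets_times_per_day": [1.0, 2.0],
    "sweets_g_per_time": [25, 100],
})
answers["free_sugar_g_per_day"] = answers.apply(daily_free_sugar, axis=1)
answers["sugar_group"] = answers["free_sugar_g_per_day"].apply(
    lambda g: "high (>50 g/day)" if g > 50 else "low (<50 g/day)")
print(answers)
```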
2.4. Metagenomic Analysis: Microbial DNA was extracted from each sample using the CTAB method. The resulting pellet was redissolved in 50 μL of TE buffer (10 mM Tris, 1 mM EDTA). One microliter of RNase A was added to digest the RNA, and the tube was incubated at 37 °C for 15 min [12]. The final DNA concentration and purity were evaluated using a Nanodrop spectrophotometer and agarose gel electrophoresis. DNA libraries were prepared according to the manufacturer's instructions with the NEBNext® Ultra™ DNA Library Prep Kit (Illumina, San Diego, CA, USA). The qualified libraries were then sequenced at Wekemo Tech Co., Ltd. (Shenzhen, China) on an Illumina NovaSeq PE150 platform. Samples were handled in accordance with accepted procedures, and blank controls were included during DNA extraction and library construction; no contamination was detected in any sample. The raw reads were then processed: human reads were removed from the metagenomic dataset by aligning the sequences to the human genome with Bowtie2, and host-derived and low-quality reads were eliminated to produce clean data. The clean sequences of all samples were annotated and classified, and Bracken was used to estimate the relative abundance of species. Data at seven taxonomic ranks were obtained for each sample using lowest-common-ancestor algorithms [13] and gene abundance estimation [14]. Unigene sequences were aligned with DIAMOND (version 0.7.10.59, Benjamin Buchfink, Drost lab, Max Planck Institute for Biology, Tübingen, Germany) against the UniRef90 protein database, and unmapped reads were filtered out. Gene abundance tables were obtained by mapping UniRef90 IDs to the KEGG database, from which the functional abundance profile of each sample was derived.
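As a downstream illustration (not the study's pipeline), per-sample species estimates of the kind produced by Bracken can be merged into a single relative-abundance table before diversity or differential-abundance analysis. The file locations and the column names ("name", "new_est_reads") below are assumptions about the report layout.

```python
# Minimal sketch: merge per-sample Bracken-style species reports into one
# relative-abundance table (species x sample). File names and column names are assumptions.
import glob
import pandas as pd

tables = {}
for path in glob.glob("bracken_reports/*.tsv"):            # hypothetical location
    sample = path.split("/")[-1].replace(".tsv", "")
    df = pd.read_csv(path, sep="\t")
    counts = df.set_index("name")["new_est_reads"]
    tables[sample] = counts / counts.sum()                 # convert read counts to relative abundance

abundance = pd.DataFrame(tables).fillna(0.0)               # species absent in a sample -> 0
abundance.to_csv("species_relative_abundance.csv")
print(abundance.head())
```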
2.5. Data Visualization, and Statistical and Bioinformatic Analysis: Visualization and statistical analyses were conducted using R (version 3.8, Ross Ihaka and Robert Gentleman, The University of Auckland, Auckland, New Zealand) and QIIME (version 2, Rob Knight, University of California San Diego, San Diego, CA, USA). Fisher's exact test was used to compare clinical variables between groups. The Dunn test was used to detect significant differences in relative abundance between groups, with p < 0.05 considered significant. LEfSe analysis was used to compare microbial profiles and community functional structure, with the logarithmic LDA score threshold set to 2.0. The core microbiome at the species level was depicted using a Venn diagram. Diversity analysis was carried out using nonmetric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) based on Bray–Curtis distance.
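The ordination step can be illustrated with a short Python sketch; the study itself used R and QIIME, so this is only an analogue of the Bray–Curtis/PCoA analysis on toy data, not the original code.

```python
# Minimal sketch of a Bray-Curtis + classical PCoA ordination, analogous to the
# R/QIIME analysis described above (illustration on toy data, not the study's code).
import numpy as np
from scipy.spatial.distance import pdist, squareform

# rows = samples, columns = species relative abundances (toy data)
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(50), size=12)

D = squareform(pdist(X, metric="braycurtis"))        # pairwise Bray-Curtis distances

# Classical PCoA: double-center the squared distance matrix and eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

coords = eigvecs[:, :2] * np.sqrt(np.maximum(eigvals[:2], 0))       # PCoA axes 1-2
explained = np.maximum(eigvals, 0)[:2] / np.maximum(eigvals, 0).sum()
print("PCoA axis 1-2 variance explained:", np.round(explained, 3))
```

An NMDS ordination of the same distance matrix could be obtained with a non-metric MDS implementation (for example, scikit-learn's MDS with metric=False); it is omitted here for brevity.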
3. Results: In total, 40 adolescents (45% boys, 55% girls) aged 12–13 years were recruited. The samples were divided into four groups: caries individuals with low sugar consumption (CL group), caries individuals with high sugar consumption (CH group), caries-free individuals with low sugar consumption (FL group), and caries-free individuals with high sugar consumption (FH group). Table 1 reports the demographic characteristics and oral hygiene habits of the included subjects. As presented in Table 1, there were no differences among the four groups in the clinical variables that could act as confounders. After trimming the data and filtering out low-quality reads and host contamination, a total of 651,389,975 reads (195.43 GB) were acquired, with an average of 4.89 GB of clean data per sample. A total of 22 phyla, 36 classes, 93 orders, 214 families, 761 genera, and 2235 species were detected in the 40 samples. Figure 1a–c shows the top 20 phyla, genera, and species, respectively, with relatively high abundance in the four groups.

3.1. Microbiota Composition Based on Sugar Consumption and Caries Status: The bacterial distribution within the different groups was analyzed, and the relative abundances of the top 20 taxa were compared across the four groups. Figure 2 shows the phyla and species whose relative abundance was significantly higher in the FH and CL groups than in the other groups. As illustrated in Figure 2, at the phylum level, the CL group showed a higher relative abundance of Fusobacteria. At the genus level, the CL group was associated with a greater abundance of Lactobacillus. At the species level, among the top 20 species, Actinomyces gerencseriae and Actinomyces dentalis had a higher relative abundance in the CL group, and Streptococcus mutans, Candida albicans, Scardovia wiggsiae, and Propionibacterium acidifaciens were also more abundant in the CL group. In contrast, eight species were more abundant in the FH group, including Capnocytophaga gingivalis and Porphyromonas gingivalis (the figure "Significant differences in relative abundance of plaque microbial taxa among the four groups by LEfSe analysis" was too large to include in the manuscript and is provided as Supplemental Material Figure S2). A Venn diagram was used to illustrate the shared and unique taxa at the species level among the groups. A total of 2235 species were identified in our study. The overlap region S indicates the microbiome shared by all samples; 933 species were detected in all four groups. There were 84, 207, 75, and 250 unique species detected in the CL, CH, FH, and FL groups, respectively (Figure 3). PCoA (Figure 4a) and NMDS (Figure 4b) did not reveal differences in community structure among the groups, indicating that the microbial diversity of the four groups was similar.
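The shared ("core") and group-unique species counts summarized in the Venn diagram can be derived from a presence/absence view of the abundance table. The sketch below is illustrative only; the abundance file and the sample-to-group mapping are hypothetical.

```python
# Minimal sketch: derive shared ("core") and group-unique species counts, as in the
# Venn-diagram summary above. The abundance table and group labels are hypothetical.
import pandas as pd

# abundance: species x sample relative-abundance table (e.g. from the earlier sketch)
abundance = pd.read_csv("species_relative_abundance.csv", index_col=0)
groups = {"CL": ["s1", "s2"], "CH": ["s3", "s4"],
          "FL": ["s5", "s6"], "FH": ["s7", "s8"]}        # assumed sample-to-group mapping

detected = {}
for name, samples in groups.items():
    sub = abundance[samples]
    detected[name] = set(sub.index[(sub > 0).any(axis=1)])   # species seen in at least one sample of the group

core = set.intersection(*detected.values())                   # species present in all four groups
print("core species:", len(core))
for name, species in detected.items():
    others = set.union(*(v for k, v in detected.items() if k != name))
    print(f"unique to {name}:", len(species - others))
```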
3.2. Functional Characteristics of the Microbiota Based on Sugar Consumption and Caries Status: We annotated the oral gene catalogs against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database to compare the functional characteristics of the microbiota between groups. The metabolism pathway, which included 5923 annotated genes, was the most abundant of the six main KEGG pathways identified (Figure 5a). There were 47 Level 2 pathways, of which carbohydrate metabolism was the most abundant (Figure 5b); the metabolism of cofactors and vitamins and amino acid metabolism were other common Level 2 pathways. The 10 most abundant KOs, identified by BLAST against the KEGG Orthology (KO) database, are shown in Figure 5c. LEfSe analysis of the gene profiles revealed substantial differences between groups at the functional level. At Level 2, enrichment in membrane transport was observed in the CL group compared with the other groups. At Level 3, the synthesis and degradation of ketone bodies, the phosphotransferase system (PTS), carbapenem biosynthesis, quorum sensing, retrograde endocannabinoid signaling, basal transcription factors, chlorocyclohexane and chlorobenzene degradation, and the NOD-like receptor signaling pathway were enriched in the CL group, while lipopolysaccharide biosynthesis and isoquinoline alkaloid biosynthesis were at a higher level in the FH group (Figure 6a). In total, 10 KEGG metabolic modules were positively associated with the CL group's microbiomes, while 22 KEGG metabolic modules had a higher proportion in the FH group (Figure 6b). The phosphotransferase N-acetylgalactosamine sugar uptake system (M00277); several two-component histidine kinase-response regulator systems, including VicK/VicR (cell wall metabolism) (M00459), LytS-LytR (M00492), and CiaH/CiaR (M00521); and several trace-metal transport systems, including those for manganese/zinc (M00791) and nickel (M00440), were more enriched in the CL group. Modules associated with biosynthesis were more abundant in the FH group, including serine (M00020), ectoine (M00033), lipopolysaccharide (M00060), CMP-KDO (M00063), ascorbate (M00114), NAD (M00115), ubiquinone (M00117), biotin (M00123, M00573, M00577), pyridoxal (M00124), riboflavin (M00125), aminoacyl-tRNA (M00359), and aurachin (M00848). The FH group also showed enrichment of xenobiotic efflux pump transporters, including the multidrug resistance systems AcrAB-TolC/SmeDEF (M00647), AmeABC (M00699), and MexAB-OprM (M00718) (the figure "LEfSe analysis was used to identify Level 3 and module level with significant differences in relative abundance among the four groups" was too large to include in the manuscript and is provided as Supplementary Figure S2).
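As a simplified stand-in for the LEfSe screening used above (and explicitly not the LEfSe procedure itself), differentially abundant features such as KEGG modules or species can be screened across the four groups with a Kruskal-Wallis test and multiple-testing correction. The data below are toy values generated for illustration.

```python
# Minimal sketch: screen features (e.g. KEGG modules or species) for differences
# across the four groups with Kruskal-Wallis + Benjamini-Hochberg correction.
# This is a simplified illustration, not the LEfSe analysis used in the study.
import numpy as np
import pandas as pd
from scipy.stats import kruskal
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
groups = np.repeat(["CL", "CH", "FL", "FH"], 10)
features = pd.DataFrame(rng.lognormal(size=(40, 30)),
                        columns=[f"M{i:05d}" for i in range(30)])   # toy module abundances

pvals = []
for col in features.columns:
    samples = [features.loc[groups == g, col] for g in ["CL", "CH", "FL", "FH"]]
    pvals.append(kruskal(*samples).pvalue)

reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
hits = pd.Series(qvals, index=features.columns).sort_values()
print(hits.head())
```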
4. Discussion: The present study employed metagenomic sequencing to characterize the dental plaque microbiota of a group of Chinese adolescents, focusing on the microbial profiles and function of caries subjects with low sugar consumption and caries-free subjects with high sugar consumption. The main finding was that oral microbial profiles and function may impact the relationship between sugar consumption and caries. For this pilot study, we compared the microbial profiles of subjects with comparable backgrounds. Since all the adolescents resided in the same city (Foshan), there were no racial, cultural, demographic, or geographical distinctions in the selected sample. The participants' families had an average monthly income between CNY 3000 and 7000, so it was reasonable to assume a comparable socioeconomic status. In addition, the water in Foshan is not fluoridated, so the individuals' fluoride intake comes from fluoride-containing toothpaste and mouthwash. Since none of the individuals used mouthwash, the only available source of fluoride was fluoride toothpaste, and we found no statistically significant difference among the four groups in its use. The same held true for oral hygiene habits, the plaque index, the pH value of saliva, and the buffering capacity of saliva. We therefore presumed that all children had comparable backgrounds. In high-caries-risk subjects with low sugar consumption, the relative abundance of Lactobacillus was increased, as was that of several species including Actinomyces dentalis, Actinomyces gerencseriae, Streptococcus mutans, and Candida albicans. These species are frequently associated with caries progression [15]. Consistent with previous studies, it is not surprising to find a high relative abundance of Lactobacillus in the CL group [16]. Before S. mutans was identified, Lactobacillus had been thought to be the main cause of caries because of the strong association between the amount of Lactobacillus in saliva and the severity of caries [5]. Lactobacillus is currently thought not to be the caries initiator, but to play an important role in caries progression [17].
These findings suggest that even individuals with low sugar consumption may have a high caries risk when the abundance of cariogenic bacteria such as Actinomyces gerencseriae, Scardovia wiggsiae, Propionibacterium acidifaciens, Streptococcus mutans, and Candida albicans is increased. Clinical studies have linked high levels of Streptococcus mutans (S. mutans) to high caries risk owing to its acidogenicity, acidification of plaque, and ability to synthesize exopolysaccharides in dental plaque [12]. Candida albicans, a commensal oral fungus, can form pathogenic mixed-species biofilms with S. mutans mediated by extracellular polysaccharides (EPSs), enhancing virulence and promoting the onset of caries. Previous studies confirmed that C. albicans combined with S. mutans causes more severe dental caries in a rat model [18,19], and that co-infection with S. mutans and C. albicans is strongly associated with severe early childhood caries [20,21]. The present findings further support these results, indicating that an increased abundance of S. mutans and C. albicans is closely related to an increased risk of caries. Scardovia wiggsiae (S. wiggsiae) has been described as a new caries pathogen detected in carious lesions in children and adolescents [22,23]. S. wiggsiae shows acid tolerance and acidogenicity, increasing its ecological competitiveness in acidic conditions such as caries lesions [24]. S. wiggsiae exhibits minimal caries induction by itself; however, significant cavity formation was observed when it was co-inoculated with S. mutans [25], and Eriksson et al. found that adolescents with active caries were characterized by a bacterial complex containing S. mutans and S. wiggsiae [23]. In agreement with previous studies, we also observed a higher relative abundance of S. wiggsiae in the CL group compared with the three other groups. Other bacteria found in our study to be more abundant in the CL group were Actinomyces gerencseriae and Propionibacterium acidifaciens, both of which have been positively associated with caries [26,27]. Similar findings were described in our previous study of pit and fissure caries [14]. In our study, the FH group had a higher abundance of Capnocytophaga gingivalis (C. gingivalis) and Porphyromonas gingivalis. Consistent with our findings, C. gingivalis is frequently detected in subgingival plaque associated with oral health [16]. Porphyromonas gingivalis (P. gingivalis) is a common periodontal pathogen that appears in a greater proportion in periodontitis than in healthy subjects [28]. Additionally, according to Y. Iwano, salivary P. gingivalis levels in a caries-free group were considerably greater than those in a periodontally healthy group with caries [29]. A possible explanation is that P. gingivalis is antagonistic to S. mutans [30]: the former can attenuate the virulence properties of the latter by interfering with its Com quorum-sensing system [31]. As a result, FH subjects may harbor a healthy microbial community capable of resisting the cariogenic microbial population. The functional analyses indicate significant differences among the groups at several functional levels. A quorum-sensing system is a cell-to-cell signaling system that allows bacteria to develop coordinated social behavior [32]. Quorum sensing was at a higher level in the CL group, suggesting active signaling inside the biofilm.
We also found an enrichment of the phosphotransferase sugar-uptake and phosphotransferase systems in the CL group, indicating enrichment of sugar uptake, transport, and metabolism pathways. Moreover, a notable enrichment of several two-component response-regulator pairs, which are responsible for transcriptional changes in response to environmental stimuli, was observed in the CL group. Relman pioneered the theory that increases in sugar catabolism potential are key markers of caries development and progression [33]. Based on our results and those of previous studies, we suggest the existence of a cariogenic microbial interaction network. We speculate that the microbial community's capacity for sugar uptake, transport, and metabolism, and its responsiveness to environmental stimuli, were increased in CL subjects. Therefore, even with limited sugar, the cariogenic microbial community of the CL group could respond quickly and metabolize sugar to rapidly produce acid, leading to a low-pH microenvironment within the biofilm. In contrast, the microbial community of FH subjects had self-stabilizing functional potential in which sugar compounds were not easily taken up by the microorganisms, preventing the associated pH decrease.

5. Conclusions: In conclusion, findings from both previous studies and the current study lend indirect support to the notion that the caries phenotype is a disorder of microbial community metabolism. In the CL group, even with a lower sugar intake, the subjects had a high caries risk because of the presence of a cariogenic microbial community with increased sugar catabolism potential. Although the subjects in the FH group took in more sugar, they had a low caries risk owing to a healthy microbial community capable of resisting the cariogenic microbial population and preventing pH reduction by limiting sugar uptake. The present study has the obvious limitation of a small sample size, which permits only preliminary findings. In addition, sugar consumption data were collected via questionnaire, and some free sugars, such as sucrose in homemade dishes, were not captured. Given this, it is prudent not to overstate the implications of these findings. Future investigations on a larger scale are required to confirm and validate our findings.
Background: The excessive and frequent intake of refined sugar leads to caries. However, the relationship between the amount of sugar intake and the risk of caries is not always consistent. The oral microbial profile and function may impact the link between them. This study aims to identify the plaque microbiota characteristics of caries subjects with low (CL) and high (CH) sugar consumption, and of caries-free subjects with low (FL) and high (FH) sugar consumption. Methods: A total of 40 adolescents were enrolled in the study, and supragingival plaque samples were collected and subjected to metagenomic analyses. The caries status, sugar consumption, and oral-health behaviors of the subjects were recorded. Results: The CL group showed a higher abundance of several cariogenic microorganisms, including Lactobacillus, A. gerencseriae, A. dentalis, S. mutans, C. albicans, S. wiggsiae, and P. acidifaciens, whereas C. gingivalis and P. gingivalis were enriched in the FH group. In terms of gene function, the phosphotransferase sugar uptake system, the phosphotransferase system, and several two-component response-regulator pairs were enriched in the CL group. Conclusions: Overall, our data suggest the existence of an increased cariogenic microbial community and sugar catabolism potential in the CL group, and a healthy microbial community with self-stabilizing functional potential in the FH group.
8,937
265
[ 2772, 51, 470, 327, 356, 160, 338, 528 ]
12
[ "caries", "groups", "group", "sugar", "abundance", "figure", "cl", "level", "species", "cl group" ]
[ "caries hypothesize oral", "consumption risk caries", "sugar intake caries", "caries relationship sugar", "caries risk acidogenicity" ]
Prevalence and associated factors of
30602986
Plasmodium falciparum and soil-transmitted helminth (STHs) infections are widespread in sub-Saharan Africa, where co-infection is also common. This study assessed the prevalence of these infections and their risk factors among pregnant women in Osogbo, Nigeria.
BACKGROUND
A total of 200 pregnant women attending the antenatal clinic were recruited. Plasmodium falciparum was detected using thick and thin blood film methods, while the formol-ether concentration method was used for STHs detection. A questionnaire was used to investigate the possible risk factors associated with the acquisition of malaria and helminth infections.
METHODS
The prevalence of P. falciparum, STHs and their co-infection was 29.5%, 12% and 5%, respectively. P. falciparum, STHs and P. falciparum + STHs co-infection were significantly more frequent in primigravidae (52.5% vs 58.3% vs 80%) than in secundigravidae (18.6% vs 25.0% vs 20%) and multigravidae (28.8% vs 16.7% vs 0%) (p=0.02). The prevalence-associated factor identified for P. falciparum was age (p=0.0001), while gravidity (p=0.02) was identified for P. falciparum + STHs co-infection.
RESULTS
A high prevalence of P. falciparum and helminth infections was observed among the pregnant women, with primigravidae being the most susceptible to co-infection. There is an urgent need to implement effective malaria and STHs preventive measures for this high-risk population.
CONCLUSION
[ "Adult", "Ancylostomatoidea", "Animals", "Ascaris lumbricoides", "Coinfection", "Cross-Sectional Studies", "Female", "Helminthiasis", "Helminths", "Humans", "Intestinal Diseases, Parasitic", "Malaria, Falciparum", "Nigeria", "Plasmodium falciparum", "Pregnancy", "Pregnancy Complications, Parasitic", "Pregnant Women", "Prevalence", "Soil", "Strongyloides", "Young Adult" ]
6307031
Introduction
Malaria and soil-transmitted helminth (STHs) infections are among the most important parasitic infections in sub-Saharan Africa, where a significant proportion of the population, including pregnant women, is exposed to these infections. In Nigeria, the coverage of antenatal care is put at 61%, and the maternal mortality rate at over 560/100,000 pregnant women annually1. Malaria caused by Plasmodium falciparum is one of the major problems encountered by pregnant women living in malaria-endemic areas of Nigeria2. Pregnant women have a higher density of parasitaemia and more complications of P. falciparum infection than non-pregnant women3. Plasmodium falciparum infection contributes significantly to maternal anaemia, low infant birth weight, intrauterine growth retardation, preterm delivery and infant mortality in sub-Saharan Africa3,4,5. Similarly, STHs infections are widely distributed in tropical and sub-tropical areas of developing countries6. STHs infections are associated with cognitive impairment and lowered educational achievement, anaemia, stunted growth, impaired physical and mental development, and malnutrition, and are responsible for about one million deaths per year7. Co-infection with P. falciparum and STHs is common, given the spatial coincidence of malaria and helminth infection risk among individuals living in Africa8,6. A major impact of malaria and STHs infections is anaemia, which is a public health problem in the tropics9. Plasmodium falciparum has been shown to contribute significantly to anaemia in both pregnant women and young children through a number of mechanisms, including haemolysis and phagocytosis3,10. Blood loss resulting from hookworm infection is considered the main cause of anaemia11. Therefore, the combination of P. falciparum and STHs infection (mostly hookworm) is considered a strong indicator of moderate to severe anaemia12. Iron-deficiency anaemia due to STHs infection depends on many factors, including the iron status of the individual and the bioavailability of iron in the diet. Most African women live on diets with poorly bioavailable iron, which results in low iron stores and could predispose helminth-positive pregnant women to iron-deficiency anaemia13. In addition, many other factors, such as environmental conditions, educational status and adherence to preventive measures, can directly affect the prevalence of malaria and helminths in sub-Saharan Africa. Few studies have reported the occurrence, interaction and risk factors of P. falciparum and helminth infection in pregnant women in Nigeria, and the epidemiological surveillance and impact of both P. falciparum and STHs remain poorly defined in this population. This study was therefore conducted to investigate the prevalence and risk factors of P. falciparum malaria and/or helminth co-infection in pregnant women in Osogbo, South Western Nigeria.
Methodology
Study area The study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. Malaria transmission is usually intense in the raining season in these two study areas. The study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. Malaria transmission is usually intense in the raining season in these two study areas. Study population and recruitment This was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. After verbal consent, questionnaire were administered to collect information on age, parity, gestational age, toilet facilities, malaria prevention methods and history of drugs taken. The number of pregnant women included in this study was determined using the formula. The prevalence of P. falciparum was estimated as 35% based on the averge value of a wide range of publications in this area. The marginal error was considered as 5% using 95% confidence level (standard value 1.96); the minimum sample size was determined to be approximately 188 and subsequently 200 was collected. This research was reviewed and approved by the Ethical Review Committee of the Osun State Ministry of Health, Osogbo. This was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. 
Sample collection and analysis
Blood was collected from the thumb with a sterile lancet onto two separate slides after cleaning the thumb with methylated spirit. The slides were used for the examination and estimation of malaria parasites from thick and thin film preparations. Blood was also collected into a heparinised capillary tube for packed cell volume (PCV) estimation. A clean, wide-mouthed specimen bottle with a tight screw cap was provided to each participant for collection of a stool sample on the morning of her antenatal clinic visit. Each sample was collected early in the morning and processed within 12 hours of being passed. The stool samples were examined macroscopically, and direct saline and formol-ether concentration preparations were examined microscopically using the 10× and 40× objectives15.
Data analysis
Data analysis was performed using SPSS version 16.0. Chi-square tests and descriptive statistics were used to describe differences in socio-demographic characteristics. A p-value ≤ 0.05 was considered statistically significant. Point estimates of the prevalence of malaria and STHs infection were calculated from the blood and stool sample results. The prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection was compared using the chi-square test. Odds ratios (OR) with 95% confidence intervals were computed to assess the strength of association.
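As an illustration of the comparison described here, the sketch below runs a chi-square test of independence on a 2×2 table using scipy.stats.chi2_contingency. The counts are placeholders rather than values from this study; SPSS, as used by the authors, would be expected to give equivalent results for the same table.

from scipy.stats import chi2_contingency

# Illustrative 2x2 table of infection status (rows) by a binary exposure
# (columns); the counts are placeholders, not data from this study.
observed = [[30, 29],   # infected: exposed, unexposed
            [70, 71]]   # not infected: exposed, unexposed

# Yates' continuity correction is applied by default for 2x2 tables.
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")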
Results
General characteristics of the study population
A total of 200 pregnant women with a mean age of 26.5 years (range: 15–40 years) were enrolled in this study (Table 1). The mean gestational age at enrolment was 17.4 weeks (range: 8–36 weeks), and the mean packed cell volume (PCV) was 35.94±6.50. The overall prevalence of P. falciparum and STHs was 29.5% (59/200) and 12% (24/200), respectively (Table 1). Of the 200 women, 24.5% (n=49) were positive for P. falciparum only, 6.5% (n=13) were positive for STHs only, and 5.0% (n=10) were co-infected with P. falciparum and STHs. Two (1%) of the STHs-infected women were co-infected with hookworm and A. lumbricoides.
Table 1: Participant characteristics and prevalence of malaria and helminthic infection among pregnant and non-pregnant women
Prevalence of P. falciparum and STHs
The prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection with respect to age, gravidity and gestational age is shown in Table 2. The highest prevalence of P. falciparum infection was recorded in the age group >30 years (49.2%), followed by the age group 25–29 years (30.5%), with the lowest in the age group <20 years (5.1%); the difference was statistically significant (p=0.001). The prevalence of P. falciparum and STHs infection was higher among primigravidae, but the difference was not statistically significant. For P. falciparum + STHs co-infection, primigravidae (80%) had the highest prevalence compared with secundigravidae (20%) and multigravidae (0%), and the difference was statistically significant (p=0.02). Women in the second trimester of pregnancy had the highest prevalence of P. falciparum (57.6%), STHs (70.8%) and P. falciparum + STHs co-infection (70%), but the differences were not statistically significant.
Table 2: Prevalence of malaria and intestinal helminth infection among pregnant women by age, parity, gestational age and PCV. Values are statistically significant (p ≤ 0.05).
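The overall prevalence figures in this section follow directly from the reported counts (59/200 for P. falciparum, 24/200 for STHs, 10/200 for co-infection). As an illustrative addition (the paper does not report confidence intervals for these prevalences), the sketch below computes each point prevalence together with a 95% Wilson score interval.

import math

def prevalence_with_wilson_ci(positives, n, z=1.96):
    """Point prevalence and approximate 95% Wilson score interval."""
    p = positives / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return p, centre - half_width, centre + half_width

# Counts reported in the study (200 pregnant women screened).
for label, positives in [("P. falciparum", 59), ("STHs", 24), ("P. falciparum + STHs", 10)]:
    p, lo, hi = prevalence_with_wilson_ci(positives, 200)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")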
Association between P. falciparum and intestinal helminths
The association between P. falciparum and STHs infection among the pregnant women is shown in Table 3. Of the 59 pregnant women who were positive for malaria, 10 (16.9%) were co-infected with STHs, and the difference was not statistically significant (p = 0.23). In the final model, women infected with helminths in this study were almost twice as likely to have P. falciparum infection as those without helminth infection (OR = 1.85; 95% confidence interval (CI): 0.77–4.45). Increased odds of P. falciparum infection were also observed among women infected with A. lumbricoides compared with those not infected (OR = 2.13; 95% CI: 0.83–5.44). Although a similar pattern was observed for hookworm and S. stercoralis, only one observation was recorded in each case, which makes these estimates less reliable (Table 3).
Table 3: Relationship between malaria and intestinal helminth infection among pregnant women
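The odds ratio and interval reported above can be reproduced from the published totals (59 malaria-positive, 24 helminth-positive, 10 co-infected, 200 women overall). The sketch below reconstructs the implied 2×2 table and applies the Woolf (log odds ratio) method for the confidence interval; treating the reported figure as a crude, unadjusted OR with a Woolf-type interval is an assumption rather than something stated in the paper.

import math

# 2x2 table reconstructed from the reported totals (n = 200):
# 59 women were malaria-positive, 24 were helminth-positive, 10 had both.
a = 10                  # malaria-positive and helminth-positive
b = 59 - 10             # malaria-positive only
c = 24 - 10             # helminth-positive only
d = 200 - (a + b + c)   # neither infection

odds_ratio = (a * d) / (b * c)

# 95% CI via the Woolf (log odds ratio) method.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# Prints OR = 1.85 (95% CI 0.77-4.45), matching the values reported above.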
Influence of demographic characteristics and attitudinal practice on P. falciparum and helminth infection
Table 4 shows the influence of demographic characteristics and attitudinal practice on P. falciparum and STHs infection in the study population. Age was significantly associated with P. falciparum infection (p=0.0001) among pregnant women in the study area, whereas education, employment status, toilet facility type, gravidity and gestational age were not. None of the factors examined was significantly associated with STHs infection. The highest prevalence of STHs was found among pregnant women aged 25–29 years (45.8%; 11/24), the unemployed (41.7%; 10/24) and the self-employed (41.7%; 10/24), primigravidae (58.3%; 14/24), those who used pit latrines (58.3%; 14/24) and those in the second trimester of pregnancy (70.8%; 17/24). Of the 24 pregnant women positive for helminths, 7 (29.2%) were anaemic (PCV ≤30%), as were 17% (10/59) of the malaria-positive pregnant women (Table 3). The highest prevalence of malaria was found among women aged >30 years (49.2%), the unemployed (47.5%) and those who used pit latrines (49.2%). Three of the 10 pregnant women with P. falciparum and helminth co-infection were anaemic. Among the 10 women co-infected with P. falciparum and STHs, the highest prevalence was observed among those aged 25–29 years (60%), the unemployed (50%), those who used pit latrines (50%), those in the second trimester of pregnancy (70%) and those who had not taken any sulfadoxine-pyrimethamine (SP) for malaria prevention (70%), but these differences were not statistically significant (Table 3). The prevalence of P. falciparum was also higher among primigravidae (52.5%), women in the second trimester of pregnancy (57.6%) and women who had not received any SP for malaria prevention (55.9%) (Table 4).
Table 4: Risk factors associated with malaria and helminth infection among pregnant women
Conclusion
The results suggest that P. falciparum infection, STHs infection and their co-infection could constitute a serious problem among women attending antenatal clinics (ANC), with primigravidae being more parasitized than multigravidae. More effort should be placed on the control and prevention of malaria and intestinal helminths among pregnant women in endemic areas in order to reduce the associated risks and burden.
[ "Study area", "Study population and recruitment", "Sample collection and analysis", "Data analysis", "General characteristics of the study population", "Prevalence of P. falciparum and STHs", "Association between P. falciparum and intestinal helminth", "Influence of demographic and attitudinal practice on P.falciparum and helminth infection" ]
[ "The study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. Malaria transmission is usually intense in the raining season in these two study areas.", "This was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. After verbal consent, questionnaire were administered to collect information on age, parity, gestational age, toilet facilities, malaria prevention methods and history of drugs taken. The number of pregnant women included in this study was determined using the formula.\nThe prevalence of P. falciparum was estimated as 35% based on the averge value of a wide range of publications in this area. The marginal error was considered as 5% using 95% confidence level (standard value 1.96); the minimum sample size was determined to be approximately 188 and subsequently 200 was collected. This research was reviewed and approved by the Ethical Review Committee of the Osun State Ministry of Health, Osogbo.", "Blood was collected from the thumb with a sterile lancet onto two seperate slides after cleaning the thumb with methylated spirit. The slides were used for the examination and estimation of malaria parasite from thick and thin film preparation. Blood was also collected into heparinised capillary tube for PCV estimation. A wide mouth clean specimen bottle with tight screw cover was individual provided for each participant for collection of stool sample in the morning of the day of their antenatal clinic visit. The sample was collected early in the morning and processed less than 12 hours when it was passed. The stool samples were visualized macroscopically. Direct saline and formol-ether concentration method of stool samples were prepared and microscopically examined using 10× and 40× objectives15.", "Data analysis was performed using SPSS version 16.0. Chi- square tests and descriptive statistics were used to describe the differences in socio-demographic charactersitics. P-value ≤ 0.05 was considered significant. Point estimation of prevalence of malaria and STHs infection was calculated based on the blood and stool sample results. The comparison of prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection between malaria and STHs was performed using chi square test. 
Odds ratios (OR) with a 95% confidence interval were computed to compare the strength of association.", "A total of 200 pregnant women with a mean age of 26.5 years (range:15–40) were enrolled in this study (Table 1). The mean gestational age of the pregnant women at enrolment was 17.4 weeks (range: 8–36 weeks). The mean packed cell volume (PCV) was 35.94±6.50. The overall-prevalence of P. falciparum and STHs was 29.5% (59/200) and 12% (24/200) respectively (Table 1). Out of the 59 malaria positive pregnant women, 24.5% (n=49) were positive for P. falciparum only, 6.5% (n=13) were positive for STHs only while 5.0% (n=10) were co-infected with P. falciparum and STHs. Two (1%) of the STHs infected women had co-infection of hookworm+A. lumbricoides\nParticipants characteristics and prevalence of malaria and helminthic infection among pregnant and non- pregnant women", "The prevalence of P. falciparum, STHs and P. falciparum+STHs co-infection with respect to age, gravidity and gestational age is shown in Table 2. The highest prevalence of P. falciparum infection was recorded in the age group >30 years (49.2%) followed by age group 25–29 years (30.5%) and the least in age group <20 (5.1%) and the difference was statistically significant (p=0.001). Prevalence of P.falciparum and STHs infection was higher among primigraviadae, but the difference was not statistically significant. For P. falciparum+STHs co-infection the primigravidae (80%) had the highest prevalence compared to secongravidae (20%) and multigravidae (0%) and the difference was statistically significant (p=0.02). Those in the second trimester of pregnancy had the highest prevalence of falciparum (57.6%), STHs (70.8) and P.falciparum+STHs co-infection (70%) but the difference was not statistically significant.\nPrevalence of Malaria and intestinal helminth infection among pregnant by age, parity, gestational age and PCV\nValues are statistically significant (p ≤ 0.05)", "The association between P. falciparum and STHs infection among the pregnant women is shown in Table 3. Out of the 59 pregnant women that were positive for malaria, 10 (16.9 %) were co-infected with STHs and the difference was not statistically significant (p = 0.23). In the final model, children infected with helminths in this study are almost twice as likely to have P. falciparum infection than those without helminth infection (OR=1.85 (95% confidence interval CI: 0.77–4.45). An increase odds of P. falciparum was also observed among children infected with A. lumbricoides when compared to those not infected (OR =2.13; CI: 0.83- 5.44). Although a similar observation to those of A. lumbricoides was observed for hookworm and S. stercoralis, only one observation was recorded in both cases which makes the analysis not extremely reliable (Table 3).\nRelationship between malaria and intestinal helminth infection among pregnant women", "Table 4 shows the influence of demographic characteristics and attitudinal practice on P.falciparum and STHs infection in the study population. Age was significantly associated with P.falciparum infection (P=0.0001) among pregnant women in the study area. Education, employment status, toilet facility type, gravidity and gestational age were not significantly associated with P. falciparum infection. Age, education, employment status, toilet facility type and gestational age were not significantly associated with P. falciparum infection. None of the factors was found to be significantly associated with STHs infection. 
The highest prevalence of STHs was among pregnant women aged 25–29 years (45.8%; 11/24), unemployed (41.7%; 10/24) and self-employed (41.7%;10/24), primigravidae (58.3%; 14), those who used pit latrines (58.3%; 14/24) and those who were in the second trimester of pregnancy (70.8%; 17/24). Of the 24 pregnant women positive for helminths, 7 (29.2%) were anaemic while 17% (10/59) of the malaria positive pregnant women were also anaemic with PCV ≤30% (Table 3). The highest prevalence of malaria was found among women aged >30 yrs (49.2%), unemployed (47.5%), and those that used pit latrines (49.2%). Three of the 10 pregnant women with P. falciparum helminth co-infection were anaemic. Amongst the 10 pregnant women co infected with P. falciparum and STHs, the highest prevalence was observed among pregnant women aged 25–29 years (60%), unemployed (50%), those who used pit latrines (50%), those in the second trimester of pregnancy (70%) and those who had not taken any sulfadoxine-pyrimethamine (SP) for malaria prevention (70%) but the differences were not statistically significant (table 3). Prevalence of P. falciparum was also higher among primigraviadae (52.5%), women in the second trimester of pregnancy (57.6%) and women who had not received any SP for malaria prevention (55.9%) (Table 4).\nRisk factors associated with malaria and helminths infection among pregnant women" ]
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Methodology", "Study area", "Study population and recruitment", "Sample collection and analysis", "Data analysis", "Results", "General characteristics of the study population", "Prevalence of P. falciparum and STHs", "Association between P. falciparum and intestinal helminth", "Influence of demographic and attitudinal practice on P.falciparum and helminth infection", "Discussion", "Conclusion" ]
[ "Malaria and soil transmitted helminths (STHs) infections are the most important parasitic infections in sub-Saharan Africa, where a significant proportion of the populations including pregnant women are exposed to these infections. In Nigeria the coverage of antenatal care is put at 61% and maternal mortality rate is put at over 560/100,000 pregnant women annually1. Malaria caused by Plasmodium falciparum is one of the major problems encountered by these pregnant women living in malaria endemic areas of Nigeria2. Pregnant women have a higher density of parasitaemia and have more complications of P. falciparum infection than non-pregnant women3. Plasmodium falciparum infection contributes significantly to maternal anaemia, low birth weight of infants, intrauterine growth retardation, preterm deliveries and infant mortality in sub-Saharan Africa3,4,5. Similarly, STHs infections are widely distributed in tropical and sub-tropical areas of the developing countries6. STHs infections are associated with cognitive impairment and lowered educational achievement, anaemia, stunted growth, physical and mental development, malnutrition and responsible for about one million deaths per year7. Co-infection of P. falciparum and STHs is common given the spatial coincidence of risk between malaria and helminths infections among individual living in Africa8,6.\nA major impact of malaria and STHs infections is anaemia which serves as a public health problem in the tropics9. Plasmodium falciparum has been shown to contribute significantly to anaemia in both pregnant women and young children using a number of mechanisms that includes haemolysis and phagocytosis3,10. Blood loss resulting from hookworm infection is considered as the main cause of anaemia11. Therefore the combination of P. falciparum and STHs (mostly hookworm infection) infection is considered as a strong indicator of moderate to serve anaemia12. Iron deficiency anaemia due to STHs infection depends on many factors, including the iron status of the individual and the bioavailability of iron in the diet. Most African women live on diets with poor bioavailable iron which results in low iron stores and could predispose helminth positive pregnant women to iron deficiency anaemia13. In addition, many other factors like environmental conditions, educational status, adherence to preventive measures could directly affect the prevalence of malaria and helminths in sub-Saharan Africa.\nFew studies have reported the occurrence, interaction and risk factors of P. falciparum and helminths infection in pregnant women in Nigeria. Epidemiological surveillance and impact of both P. falciparum and STHs remain poorly defined among pregnant women in Nigeria. This study was therefore conducted to investigate the prevalence and risk factors of P. falciparum malaria and/or helminth co-infection in pregnant women in Osogbo, South Western, Nigeria.", " Study area The study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. 
Malaria transmission is usually intense in the raining season in these two study areas.\nThe study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. Malaria transmission is usually intense in the raining season in these two study areas.\n Study population and recruitment This was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. After verbal consent, questionnaire were administered to collect information on age, parity, gestational age, toilet facilities, malaria prevention methods and history of drugs taken. The number of pregnant women included in this study was determined using the formula.\nThe prevalence of P. falciparum was estimated as 35% based on the averge value of a wide range of publications in this area. The marginal error was considered as 5% using 95% confidence level (standard value 1.96); the minimum sample size was determined to be approximately 188 and subsequently 200 was collected. This research was reviewed and approved by the Ethical Review Committee of the Osun State Ministry of Health, Osogbo.\nThis was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. After verbal consent, questionnaire were administered to collect information on age, parity, gestational age, toilet facilities, malaria prevention methods and history of drugs taken. The number of pregnant women included in this study was determined using the formula.\nThe prevalence of P. falciparum was estimated as 35% based on the averge value of a wide range of publications in this area. 
The marginal error was considered as 5% using 95% confidence level (standard value 1.96); the minimum sample size was determined to be approximately 188 and subsequently 200 was collected. This research was reviewed and approved by the Ethical Review Committee of the Osun State Ministry of Health, Osogbo.\n Sample collection and analysis Blood was collected from the thumb with a sterile lancet onto two seperate slides after cleaning the thumb with methylated spirit. The slides were used for the examination and estimation of malaria parasite from thick and thin film preparation. Blood was also collected into heparinised capillary tube for PCV estimation. A wide mouth clean specimen bottle with tight screw cover was individual provided for each participant for collection of stool sample in the morning of the day of their antenatal clinic visit. The sample was collected early in the morning and processed less than 12 hours when it was passed. The stool samples were visualized macroscopically. Direct saline and formol-ether concentration method of stool samples were prepared and microscopically examined using 10× and 40× objectives15.\nBlood was collected from the thumb with a sterile lancet onto two seperate slides after cleaning the thumb with methylated spirit. The slides were used for the examination and estimation of malaria parasite from thick and thin film preparation. Blood was also collected into heparinised capillary tube for PCV estimation. A wide mouth clean specimen bottle with tight screw cover was individual provided for each participant for collection of stool sample in the morning of the day of their antenatal clinic visit. The sample was collected early in the morning and processed less than 12 hours when it was passed. The stool samples were visualized macroscopically. Direct saline and formol-ether concentration method of stool samples were prepared and microscopically examined using 10× and 40× objectives15.\n Data analysis Data analysis was performed using SPSS version 16.0. Chi- square tests and descriptive statistics were used to describe the differences in socio-demographic charactersitics. P-value ≤ 0.05 was considered significant. Point estimation of prevalence of malaria and STHs infection was calculated based on the blood and stool sample results. The comparison of prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection between malaria and STHs was performed using chi square test. Odds ratios (OR) with a 95% confidence interval were computed to compare the strength of association.\nData analysis was performed using SPSS version 16.0. Chi- square tests and descriptive statistics were used to describe the differences in socio-demographic charactersitics. P-value ≤ 0.05 was considered significant. Point estimation of prevalence of malaria and STHs infection was calculated based on the blood and stool sample results. The comparison of prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection between malaria and STHs was performed using chi square test. Odds ratios (OR) with a 95% confidence interval were computed to compare the strength of association.", "The study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. 
The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. Malaria transmission is usually intense in the raining season in these two study areas.", "This was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. After verbal consent, questionnaire were administered to collect information on age, parity, gestational age, toilet facilities, malaria prevention methods and history of drugs taken. The number of pregnant women included in this study was determined using the formula.\nThe prevalence of P. falciparum was estimated as 35% based on the averge value of a wide range of publications in this area. The marginal error was considered as 5% using 95% confidence level (standard value 1.96); the minimum sample size was determined to be approximately 188 and subsequently 200 was collected. This research was reviewed and approved by the Ethical Review Committee of the Osun State Ministry of Health, Osogbo.", "Blood was collected from the thumb with a sterile lancet onto two seperate slides after cleaning the thumb with methylated spirit. The slides were used for the examination and estimation of malaria parasite from thick and thin film preparation. Blood was also collected into heparinised capillary tube for PCV estimation. A wide mouth clean specimen bottle with tight screw cover was individual provided for each participant for collection of stool sample in the morning of the day of their antenatal clinic visit. The sample was collected early in the morning and processed less than 12 hours when it was passed. The stool samples were visualized macroscopically. Direct saline and formol-ether concentration method of stool samples were prepared and microscopically examined using 10× and 40× objectives15.", "Data analysis was performed using SPSS version 16.0. Chi- square tests and descriptive statistics were used to describe the differences in socio-demographic charactersitics. P-value ≤ 0.05 was considered significant. Point estimation of prevalence of malaria and STHs infection was calculated based on the blood and stool sample results. The comparison of prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection between malaria and STHs was performed using chi square test. Odds ratios (OR) with a 95% confidence interval were computed to compare the strength of association.", " General characteristics of the study population A total of 200 pregnant women with a mean age of 26.5 years (range:15–40) were enrolled in this study (Table 1). The mean gestational age of the pregnant women at enrolment was 17.4 weeks (range: 8–36 weeks). The mean packed cell volume (PCV) was 35.94±6.50. The overall-prevalence of P. 
falciparum and STHs was 29.5% (59/200) and 12% (24/200) respectively (Table 1). Out of the 59 malaria positive pregnant women, 24.5% (n=49) were positive for P. falciparum only, 6.5% (n=13) were positive for STHs only while 5.0% (n=10) were co-infected with P. falciparum and STHs. Two (1%) of the STHs infected women had co-infection of hookworm+A. lumbricoides\nParticipants characteristics and prevalence of malaria and helminthic infection among pregnant and non- pregnant women\nA total of 200 pregnant women with a mean age of 26.5 years (range:15–40) were enrolled in this study (Table 1). The mean gestational age of the pregnant women at enrolment was 17.4 weeks (range: 8–36 weeks). The mean packed cell volume (PCV) was 35.94±6.50. The overall-prevalence of P. falciparum and STHs was 29.5% (59/200) and 12% (24/200) respectively (Table 1). Out of the 59 malaria positive pregnant women, 24.5% (n=49) were positive for P. falciparum only, 6.5% (n=13) were positive for STHs only while 5.0% (n=10) were co-infected with P. falciparum and STHs. Two (1%) of the STHs infected women had co-infection of hookworm+A. lumbricoides\nParticipants characteristics and prevalence of malaria and helminthic infection among pregnant and non- pregnant women\n Prevalence of P. falciparum and STHs The prevalence of P. falciparum, STHs and P. falciparum+STHs co-infection with respect to age, gravidity and gestational age is shown in Table 2. The highest prevalence of P. falciparum infection was recorded in the age group >30 years (49.2%) followed by age group 25–29 years (30.5%) and the least in age group <20 (5.1%) and the difference was statistically significant (p=0.001). Prevalence of P.falciparum and STHs infection was higher among primigraviadae, but the difference was not statistically significant. For P. falciparum+STHs co-infection the primigravidae (80%) had the highest prevalence compared to secongravidae (20%) and multigravidae (0%) and the difference was statistically significant (p=0.02). Those in the second trimester of pregnancy had the highest prevalence of falciparum (57.6%), STHs (70.8) and P.falciparum+STHs co-infection (70%) but the difference was not statistically significant.\nPrevalence of Malaria and intestinal helminth infection among pregnant by age, parity, gestational age and PCV\nValues are statistically significant (p ≤ 0.05)\nThe prevalence of P. falciparum, STHs and P. falciparum+STHs co-infection with respect to age, gravidity and gestational age is shown in Table 2. The highest prevalence of P. falciparum infection was recorded in the age group >30 years (49.2%) followed by age group 25–29 years (30.5%) and the least in age group <20 (5.1%) and the difference was statistically significant (p=0.001). Prevalence of P.falciparum and STHs infection was higher among primigraviadae, but the difference was not statistically significant. For P. falciparum+STHs co-infection the primigravidae (80%) had the highest prevalence compared to secongravidae (20%) and multigravidae (0%) and the difference was statistically significant (p=0.02). Those in the second trimester of pregnancy had the highest prevalence of falciparum (57.6%), STHs (70.8) and P.falciparum+STHs co-infection (70%) but the difference was not statistically significant.\nPrevalence of Malaria and intestinal helminth infection among pregnant by age, parity, gestational age and PCV\nValues are statistically significant (p ≤ 0.05)\n Association between P. falciparum and intestinal helminth The association between P. 
falciparum and STHs infection among the pregnant women is shown in Table 3. Out of the 59 pregnant women that were positive for malaria, 10 (16.9 %) were co-infected with STHs and the difference was not statistically significant (p = 0.23). In the final model, children infected with helminths in this study are almost twice as likely to have P. falciparum infection than those without helminth infection (OR=1.85 (95% confidence interval CI: 0.77–4.45). An increase odds of P. falciparum was also observed among children infected with A. lumbricoides when compared to those not infected (OR =2.13; CI: 0.83- 5.44). Although a similar observation to those of A. lumbricoides was observed for hookworm and S. stercoralis, only one observation was recorded in both cases which makes the analysis not extremely reliable (Table 3).\nRelationship between malaria and intestinal helminth infection among pregnant women\nThe association between P. falciparum and STHs infection among the pregnant women is shown in Table 3. Out of the 59 pregnant women that were positive for malaria, 10 (16.9 %) were co-infected with STHs and the difference was not statistically significant (p = 0.23). In the final model, children infected with helminths in this study are almost twice as likely to have P. falciparum infection than those without helminth infection (OR=1.85 (95% confidence interval CI: 0.77–4.45). An increase odds of P. falciparum was also observed among children infected with A. lumbricoides when compared to those not infected (OR =2.13; CI: 0.83- 5.44). Although a similar observation to those of A. lumbricoides was observed for hookworm and S. stercoralis, only one observation was recorded in both cases which makes the analysis not extremely reliable (Table 3).\nRelationship between malaria and intestinal helminth infection among pregnant women\n Influence of demographic and attitudinal practice on P.falciparum and helminth infection Table 4 shows the influence of demographic characteristics and attitudinal practice on P.falciparum and STHs infection in the study population. Age was significantly associated with P.falciparum infection (P=0.0001) among pregnant women in the study area. Education, employment status, toilet facility type, gravidity and gestational age were not significantly associated with P. falciparum infection. Age, education, employment status, toilet facility type and gestational age were not significantly associated with P. falciparum infection. None of the factors was found to be significantly associated with STHs infection. The highest prevalence of STHs was among pregnant women aged 25–29 years (45.8%; 11/24), unemployed (41.7%; 10/24) and self-employed (41.7%;10/24), primigravidae (58.3%; 14), those who used pit latrines (58.3%; 14/24) and those who were in the second trimester of pregnancy (70.8%; 17/24). Of the 24 pregnant women positive for helminths, 7 (29.2%) were anaemic while 17% (10/59) of the malaria positive pregnant women were also anaemic with PCV ≤30% (Table 3). The highest prevalence of malaria was found among women aged >30 yrs (49.2%), unemployed (47.5%), and those that used pit latrines (49.2%). Three of the 10 pregnant women with P. falciparum helminth co-infection were anaemic. Amongst the 10 pregnant women co infected with P. 
falciparum and STHs, the highest prevalence was observed among pregnant women aged 25–29 years (60%), unemployed (50%), those who used pit latrines (50%), those in the second trimester of pregnancy (70%) and those who had not taken any sulfadoxine-pyrimethamine (SP) for malaria prevention (70%) but the differences were not statistically significant (table 3). Prevalence of P. falciparum was also higher among primigraviadae (52.5%), women in the second trimester of pregnancy (57.6%) and women who had not received any SP for malaria prevention (55.9%) (Table 4).\nRisk factors associated with malaria and helminths infection among pregnant women\nTable 4 shows the influence of demographic characteristics and attitudinal practice on P.falciparum and STHs infection in the study population. Age was significantly associated with P.falciparum infection (P=0.0001) among pregnant women in the study area. Education, employment status, toilet facility type, gravidity and gestational age were not significantly associated with P. falciparum infection. Age, education, employment status, toilet facility type and gestational age were not significantly associated with P. falciparum infection. None of the factors was found to be significantly associated with STHs infection. The highest prevalence of STHs was among pregnant women aged 25–29 years (45.8%; 11/24), unemployed (41.7%; 10/24) and self-employed (41.7%;10/24), primigravidae (58.3%; 14), those who used pit latrines (58.3%; 14/24) and those who were in the second trimester of pregnancy (70.8%; 17/24). Of the 24 pregnant women positive for helminths, 7 (29.2%) were anaemic while 17% (10/59) of the malaria positive pregnant women were also anaemic with PCV ≤30% (Table 3). The highest prevalence of malaria was found among women aged >30 yrs (49.2%), unemployed (47.5%), and those that used pit latrines (49.2%). Three of the 10 pregnant women with P. falciparum helminth co-infection were anaemic. Amongst the 10 pregnant women co infected with P. falciparum and STHs, the highest prevalence was observed among pregnant women aged 25–29 years (60%), unemployed (50%), those who used pit latrines (50%), those in the second trimester of pregnancy (70%) and those who had not taken any sulfadoxine-pyrimethamine (SP) for malaria prevention (70%) but the differences were not statistically significant (table 3). Prevalence of P. falciparum was also higher among primigraviadae (52.5%), women in the second trimester of pregnancy (57.6%) and women who had not received any SP for malaria prevention (55.9%) (Table 4).\nRisk factors associated with malaria and helminths infection among pregnant women", "A total of 200 pregnant women with a mean age of 26.5 years (range:15–40) were enrolled in this study (Table 1). The mean gestational age of the pregnant women at enrolment was 17.4 weeks (range: 8–36 weeks). The mean packed cell volume (PCV) was 35.94±6.50. The overall-prevalence of P. falciparum and STHs was 29.5% (59/200) and 12% (24/200) respectively (Table 1). Out of the 59 malaria positive pregnant women, 24.5% (n=49) were positive for P. falciparum only, 6.5% (n=13) were positive for STHs only while 5.0% (n=10) were co-infected with P. falciparum and STHs. Two (1%) of the STHs infected women had co-infection of hookworm+A. lumbricoides\nParticipants characteristics and prevalence of malaria and helminthic infection among pregnant and non- pregnant women", "The prevalence of P. falciparum, STHs and P. 
falciparum+STHs co-infection with respect to age, gravidity and gestational age is shown in Table 2. The highest prevalence of P. falciparum infection was recorded in the age group >30 years (49.2%) followed by age group 25–29 years (30.5%) and the least in age group <20 (5.1%) and the difference was statistically significant (p=0.001). Prevalence of P.falciparum and STHs infection was higher among primigraviadae, but the difference was not statistically significant. For P. falciparum+STHs co-infection the primigravidae (80%) had the highest prevalence compared to secongravidae (20%) and multigravidae (0%) and the difference was statistically significant (p=0.02). Those in the second trimester of pregnancy had the highest prevalence of falciparum (57.6%), STHs (70.8) and P.falciparum+STHs co-infection (70%) but the difference was not statistically significant.\nPrevalence of Malaria and intestinal helminth infection among pregnant by age, parity, gestational age and PCV\nValues are statistically significant (p ≤ 0.05)", "The association between P. falciparum and STHs infection among the pregnant women is shown in Table 3. Out of the 59 pregnant women that were positive for malaria, 10 (16.9 %) were co-infected with STHs and the difference was not statistically significant (p = 0.23). In the final model, children infected with helminths in this study are almost twice as likely to have P. falciparum infection than those without helminth infection (OR=1.85 (95% confidence interval CI: 0.77–4.45). An increase odds of P. falciparum was also observed among children infected with A. lumbricoides when compared to those not infected (OR =2.13; CI: 0.83- 5.44). Although a similar observation to those of A. lumbricoides was observed for hookworm and S. stercoralis, only one observation was recorded in both cases which makes the analysis not extremely reliable (Table 3).\nRelationship between malaria and intestinal helminth infection among pregnant women", "Table 4 shows the influence of demographic characteristics and attitudinal practice on P.falciparum and STHs infection in the study population. Age was significantly associated with P.falciparum infection (P=0.0001) among pregnant women in the study area. Education, employment status, toilet facility type, gravidity and gestational age were not significantly associated with P. falciparum infection. Age, education, employment status, toilet facility type and gestational age were not significantly associated with P. falciparum infection. None of the factors was found to be significantly associated with STHs infection. The highest prevalence of STHs was among pregnant women aged 25–29 years (45.8%; 11/24), unemployed (41.7%; 10/24) and self-employed (41.7%;10/24), primigravidae (58.3%; 14), those who used pit latrines (58.3%; 14/24) and those who were in the second trimester of pregnancy (70.8%; 17/24). Of the 24 pregnant women positive for helminths, 7 (29.2%) were anaemic while 17% (10/59) of the malaria positive pregnant women were also anaemic with PCV ≤30% (Table 3). The highest prevalence of malaria was found among women aged >30 yrs (49.2%), unemployed (47.5%), and those that used pit latrines (49.2%). Three of the 10 pregnant women with P. falciparum helminth co-infection were anaemic. Amongst the 10 pregnant women co infected with P. 
falciparum and STHs, the highest prevalence was observed among pregnant women aged 25–29 years (60%), unemployed (50%), those who used pit latrines (50%), those in the second trimester of pregnancy (70%) and those who had not taken any sulfadoxine-pyrimethamine (SP) for malaria prevention (70%) but the differences were not statistically significant (table 3). Prevalence of P. falciparum was also higher among primigraviadae (52.5%), women in the second trimester of pregnancy (57.6%) and women who had not received any SP for malaria prevention (55.9%) (Table 4).\nRisk factors associated with malaria and helminths infection among pregnant women", "This study presents data on the prevalence of P. falciparum and STHs co-infection and some possible associated risk factors in pregnant women in South Western, Nigeria. P. falciparum is a major cause of mortality in pregnant women and their foetus and it is endemic in many parts of the world where its co-occurrence with STHs has been reported16. This study observed that malaria is still highly prevalent among pregnant women in Nigeria. The 29.5% prevalence of P. falciparum recorded in this study was similar to the prevalence of 21.9% reported in Ilorin Nigeria, 21.9% in Cameroon17, and 36.3% in Ghana18. However the prevalence was lower than the reports from some other parts of Nigeria (Kebbi (41.6%)19, Ethiopia (11.6%)20 and Libreville (Gabon) (57%)21. Also, a study in Angola and Sudan recorded a prevalence of 8.6% and 13.7% respectively among pregnant women22,23. The differences in the prevalence of P. falciparum in pregnancy may be attributed to various reasons like sample size, abundance of vector but also primarily linked availability of effective control measures in the endemic areas.\nAlthough WHO recommends two doses of sulfadoxine-pyrimethamine (IPTp-SP) for malaria prophylaxis in pregnancy24, 53% of the women in this study did not recieved any SP prescription during Ante-Natal Care (ANC) and only 47.0% recieved one or two doses. Use of IPTp has been shown to decrease Plasmodium parasitemia25 but unfortunately the implementation of this recommendation has been sub-optimal in many endemic countries as observed in our study. Improving the coverage of pregnant women receiving IPTp in malaria endemic region, like in Nigeria is essential in other to reduce the mortality associated with malaria in pregnenacy.\nAn overall prevalence of 12.0% was observed for STHs infections amongst pregnant women in this study. This shows the susceptibility of the pregnancy state to infection which could be due to impaired immunity25. The prevalence observed in our study is lower compared to what was reported in Ghana (25.7%)18, Ethiopia (41%)20 and Kenya (13.8%)26. The relatively low prevelence of STHs in this study could be an indication of improved sanitation and proper sewage disposal and improved toilet facilities as shown by the fact that 39% of the participant had improved toilet facilities. The prevalence of A. lumbricoides (15.3%) was higher than those of other helminths, an observation that has been made in other studies in North Central and South South region of Nigeria (19.1%)27. However, the prevalence of A. lumbricoides recorded here was higher than 5.0% recorded in South Eastern Nigeria28 and Kenya26.\nOnly, 5% of the participants in this study were co-infected with P. falciparum and STHs of the women who had malaria while 12% had some type of worm infection. 
Awareness of the importance of co-infection is increasing and suggestions have been made that helminths infection may influence susceptibility to other infections including malaria (Nacher 2001; Bentwich, 2000). This association may vary by geographical region, the local species and the risk factors for the parasites being studied29. In this study, although no significant association was observed in the interaction between helminth and malaria, the odds revealed that pregnant women infected with helminths, more precisely A. lumbricoides were almost two times as likely to have P. falciparum infection compared to those without A. lumbricoides infection. This is similar to our previous observation in studies conducted among school children in different locations in Nigeria6,30. Also, similar association was previously observed among cohort of pregnant women in Ghana18. Although the main reason for this observation is yet to be elucidated, the general belief is that helminths drive the type 2 (Th2) immune response which down-regulates the Th1 anti-malarial immune response, resulting in increased risk of malaria infection. This explanation may not hold true in all circumstances as other authors have conflicting reports on how helminths regulate P. falciparum infection. More studies are therefore needed to expound on these conflicting observations.\nAlthough, the prevalence of P. falciparum, STHs and co-infection was higher among the primigravidae compared to secondigravida and multigravida, no significant association was observed between primiparity and P. falciparum infection. However, primiparity was significantly associated with P. falciparum and STHs coinfection. Primiparity has been identified as a risk factor for P. falciparum infection during pregnancy3,21,25,29.\nMajority of the pregnant women were not anaemic and so had a PCV value of 30% or more. A study in Ethiopia recorded anaemia prevalence of 53.9% among pregnant women31. Anaemia (PCV ≤30%) was not associated with P. falciparum infection, STHs infection and P. falciparum + STHs co-infection among the pregnant women studied. Lack of association between hookworm and anaemia has been reported in Brazil11. Hookworm may cause anaemia particularly in pregnant women32 and hemoglobin levels have been found to be lowest in pregnant women who had helminth or malaria infections33.\nAge was identified as a risk factor for malaria in pregnancy while gravidity was a risk factor for P. falciparum +STHs co-infection. Previous studies have identified the risk factors for malaria in pregnancy as age and primigravidae10,21,25. However, a lack of association between age or parity and malaria has been reported in Sudan22. No risk factor was identified for helminth infection in this study. Any risk factors (if present) in this study might not have been detected in this study given the low number of women who were infected with helminths (especially with hookworm). Again the use of one stool sample from each woman is a limiting factor as the proportion of women with low-intensity infection could have been mis-classifed as uninfected since multiple samples are required for accurate detection34. 
Future studies with larger sample sizes are therefore recommended to further investigate the impact of anaemia and other risk factors in pregnant women.\nThe risk factors for helminth infections depend on the route of transmission and life cycle of the helminth, and are related to the state of hygiene, sanitation and environmental conditions (temperature and humidity)16,35. It is also worth noting that pregnant women in rural communities, who are the most vulnerable to malaria and helminth infection, do not attend antenatal clinics. The implication of this is that a higher prevalence of malaria, helminth infection and malaria-helminth co-infection could be recorded in a rural community-based study. Other studies have shown that additional risk factors associated with STHs infection include the absence of a garbage bin in the household, the age of the mother at the time of marriage, house floor type and handwashing practices33,36,37. However, data for these factors were not collected in this study, and they may have had an effect that was not observed. Poor education and unemployment have also been identified as risk factors for both helminth and P. falciparum infections33.", "The results suggest that P. falciparum and STHs infection and their co-infection could constitute a serious problem among women attending antenatal clinics (ANC), with primigravidae being more parasitized than multigravidae. More effort should be placed on the control and prevention of malaria and intestinal helminths among pregnant women in endemic areas in order to reduce the associated risks and burden." ]
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, "discussion", "conclusions" ]
[ "\nP.falciparum\n", "STHs", "Co-infection", "pregnant women", "Nigeria" ]
Introduction: Malaria and soil transmitted helminths (STHs) infections are the most important parasitic infections in sub-Saharan Africa, where a significant proportion of the populations including pregnant women are exposed to these infections. In Nigeria the coverage of antenatal care is put at 61% and maternal mortality rate is put at over 560/100,000 pregnant women annually1. Malaria caused by Plasmodium falciparum is one of the major problems encountered by these pregnant women living in malaria endemic areas of Nigeria2. Pregnant women have a higher density of parasitaemia and have more complications of P. falciparum infection than non-pregnant women3. Plasmodium falciparum infection contributes significantly to maternal anaemia, low birth weight of infants, intrauterine growth retardation, preterm deliveries and infant mortality in sub-Saharan Africa3,4,5. Similarly, STHs infections are widely distributed in tropical and sub-tropical areas of the developing countries6. STHs infections are associated with cognitive impairment and lowered educational achievement, anaemia, stunted growth, physical and mental development, malnutrition and responsible for about one million deaths per year7. Co-infection of P. falciparum and STHs is common given the spatial coincidence of risk between malaria and helminths infections among individual living in Africa8,6. A major impact of malaria and STHs infections is anaemia which serves as a public health problem in the tropics9. Plasmodium falciparum has been shown to contribute significantly to anaemia in both pregnant women and young children using a number of mechanisms that includes haemolysis and phagocytosis3,10. Blood loss resulting from hookworm infection is considered as the main cause of anaemia11. Therefore the combination of P. falciparum and STHs (mostly hookworm infection) infection is considered as a strong indicator of moderate to serve anaemia12. Iron deficiency anaemia due to STHs infection depends on many factors, including the iron status of the individual and the bioavailability of iron in the diet. Most African women live on diets with poor bioavailable iron which results in low iron stores and could predispose helminth positive pregnant women to iron deficiency anaemia13. In addition, many other factors like environmental conditions, educational status, adherence to preventive measures could directly affect the prevalence of malaria and helminths in sub-Saharan Africa. Few studies have reported the occurrence, interaction and risk factors of P. falciparum and helminths infection in pregnant women in Nigeria. Epidemiological surveillance and impact of both P. falciparum and STHs remain poorly defined among pregnant women in Nigeria. This study was therefore conducted to investigate the prevalence and risk factors of P. falciparum malaria and/or helminth co-infection in pregnant women in Osogbo, South Western, Nigeria. Methodology: Study area The study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. 
Malaria transmission is usually intense in the raining season in these two study areas. The study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. Malaria transmission is usually intense in the raining season in these two study areas. Study population and recruitment This was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. After verbal consent, questionnaire were administered to collect information on age, parity, gestational age, toilet facilities, malaria prevention methods and history of drugs taken. The number of pregnant women included in this study was determined using the formula. The prevalence of P. falciparum was estimated as 35% based on the averge value of a wide range of publications in this area. The marginal error was considered as 5% using 95% confidence level (standard value 1.96); the minimum sample size was determined to be approximately 188 and subsequently 200 was collected. This research was reviewed and approved by the Ethical Review Committee of the Osun State Ministry of Health, Osogbo. This was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. After verbal consent, questionnaire were administered to collect information on age, parity, gestational age, toilet facilities, malaria prevention methods and history of drugs taken. The number of pregnant women included in this study was determined using the formula. The prevalence of P. falciparum was estimated as 35% based on the averge value of a wide range of publications in this area. The marginal error was considered as 5% using 95% confidence level (standard value 1.96); the minimum sample size was determined to be approximately 188 and subsequently 200 was collected. 
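A minimal sketch of the single-proportion (Cochran) sample-size formula referred to above is given below, using the figures stated in the text (35% expected prevalence, 5% margin of error, 95% confidence). The finite-population correction and the population figure of 400 are illustrative assumptions only, since the exact adjustment used to arrive at approximately 188 is not detailed in the text.

```python
from math import ceil

def cochran_sample_size(p: float, d: float, z: float = 1.96) -> float:
    """Minimum sample size for estimating a proportion p with absolute margin of error d."""
    return (z ** 2) * p * (1 - p) / (d ** 2)

def finite_population_correction(n0: float, population: int) -> float:
    """Adjust the unadjusted sample size n0 for a finite source population (assumption)."""
    return n0 / (1 + (n0 - 1) / population)

n0 = cochran_sample_size(p=0.35, d=0.05)       # ~350 before any correction
n_adj = finite_population_correction(n0, 400)  # hypothetical clinic population of 400
print(ceil(n0), ceil(n_adj))                   # 350 187
```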
This research was reviewed and approved by the Ethical Review Committee of the Osun State Ministry of Health, Osogbo. Sample collection and analysis Blood was collected from the thumb with a sterile lancet onto two seperate slides after cleaning the thumb with methylated spirit. The slides were used for the examination and estimation of malaria parasite from thick and thin film preparation. Blood was also collected into heparinised capillary tube for PCV estimation. A wide mouth clean specimen bottle with tight screw cover was individual provided for each participant for collection of stool sample in the morning of the day of their antenatal clinic visit. The sample was collected early in the morning and processed less than 12 hours when it was passed. The stool samples were visualized macroscopically. Direct saline and formol-ether concentration method of stool samples were prepared and microscopically examined using 10× and 40× objectives15. Blood was collected from the thumb with a sterile lancet onto two seperate slides after cleaning the thumb with methylated spirit. The slides were used for the examination and estimation of malaria parasite from thick and thin film preparation. Blood was also collected into heparinised capillary tube for PCV estimation. A wide mouth clean specimen bottle with tight screw cover was individual provided for each participant for collection of stool sample in the morning of the day of their antenatal clinic visit. The sample was collected early in the morning and processed less than 12 hours when it was passed. The stool samples were visualized macroscopically. Direct saline and formol-ether concentration method of stool samples were prepared and microscopically examined using 10× and 40× objectives15. Data analysis Data analysis was performed using SPSS version 16.0. Chi- square tests and descriptive statistics were used to describe the differences in socio-demographic charactersitics. P-value ≤ 0.05 was considered significant. Point estimation of prevalence of malaria and STHs infection was calculated based on the blood and stool sample results. The comparison of prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection between malaria and STHs was performed using chi square test. Odds ratios (OR) with a 95% confidence interval were computed to compare the strength of association. Data analysis was performed using SPSS version 16.0. Chi- square tests and descriptive statistics were used to describe the differences in socio-demographic charactersitics. P-value ≤ 0.05 was considered significant. Point estimation of prevalence of malaria and STHs infection was calculated based on the blood and stool sample results. The comparison of prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection between malaria and STHs was performed using chi square test. Odds ratios (OR) with a 95% confidence interval were computed to compare the strength of association. Study area: The study was carried out between October 2012 and May 2013 at the State General Hospital, Ikirun and Osogbo, Osun State, Nigeria. These communities are in urban areas in Osun State Nigeria. Osogbo is the state capital with a landed area of about 835 hectare and population projection of over 3 Million people as at 2006 population census. The climate in Osogbo is tropical with two seasons; October to February (dry season) and March to July rainy season. The average daily temperature is 32°C with a minimum temperature of 19°C and a maximum temperature of 35.9°C14. 
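As a companion to the Data analysis description above, the sketch below shows how the chi-square test and an odds ratio with a Wald 95% confidence interval can be computed from a 2x2 infection table. The analysis in the study was run in SPSS 16.0, so this Python version is purely illustrative; the cell counts are reconstructed from the reported totals (59 malaria-positive, 24 STH-positive, 10 co-infected, n=200) and closely reproduce the odds ratio of 1.85 (95% CI 0.77–4.45) reported in the Results.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 table reconstructed from the reported totals:
#                   malaria+  malaria-
table = np.array([[10,        14],     # STH positive
                  [49,       127]])    # STH negative

# Note: scipy applies Yates' continuity correction to 2x2 tables by default,
# so the p-value may differ slightly from an uncorrected chi-square test.
chi2, p_value, dof, expected = chi2_contingency(table)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"chi2={chi2:.2f}, p={p_value:.3f}, OR={odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```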
Malaria transmission is usually intense in the raining season in these two study areas. Study population and recruitment: This was a cross-sectional study of pregnant women attending antenatal clinic. Inclusion criteria for participations were: pregnant women attending antenatal clinics, absence of general danger signs (for example vomiting, inability to sit or stand); absence of signs of severe and complicated falciparum malaria; absence of recent history of convulsion; absence of hyperpyrexia (i.e. axillary temperature > 39.5°C) and informed consent of the participating women. The women were adequately informed on the benefit and risks of the study and their consent was sought verbally. Only consented women were recruited into the study. Ethical approval was obtained from Ethical committee of the State Hospital at the Hospital Management Board, Osogbo. After verbal consent, questionnaire were administered to collect information on age, parity, gestational age, toilet facilities, malaria prevention methods and history of drugs taken. The number of pregnant women included in this study was determined using the formula. The prevalence of P. falciparum was estimated as 35% based on the averge value of a wide range of publications in this area. The marginal error was considered as 5% using 95% confidence level (standard value 1.96); the minimum sample size was determined to be approximately 188 and subsequently 200 was collected. This research was reviewed and approved by the Ethical Review Committee of the Osun State Ministry of Health, Osogbo. Sample collection and analysis: Blood was collected from the thumb with a sterile lancet onto two seperate slides after cleaning the thumb with methylated spirit. The slides were used for the examination and estimation of malaria parasite from thick and thin film preparation. Blood was also collected into heparinised capillary tube for PCV estimation. A wide mouth clean specimen bottle with tight screw cover was individual provided for each participant for collection of stool sample in the morning of the day of their antenatal clinic visit. The sample was collected early in the morning and processed less than 12 hours when it was passed. The stool samples were visualized macroscopically. Direct saline and formol-ether concentration method of stool samples were prepared and microscopically examined using 10× and 40× objectives15. Data analysis: Data analysis was performed using SPSS version 16.0. Chi- square tests and descriptive statistics were used to describe the differences in socio-demographic charactersitics. P-value ≤ 0.05 was considered significant. Point estimation of prevalence of malaria and STHs infection was calculated based on the blood and stool sample results. The comparison of prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection between malaria and STHs was performed using chi square test. Odds ratios (OR) with a 95% confidence interval were computed to compare the strength of association. Results: General characteristics of the study population A total of 200 pregnant women with a mean age of 26.5 years (range:15–40) were enrolled in this study (Table 1). The mean gestational age of the pregnant women at enrolment was 17.4 weeks (range: 8–36 weeks). The mean packed cell volume (PCV) was 35.94±6.50. The overall-prevalence of P. falciparum and STHs was 29.5% (59/200) and 12% (24/200) respectively (Table 1). Out of the 59 malaria positive pregnant women, 24.5% (n=49) were positive for P. 
falciparum only, 6.5% (n=13) were positive for STHs only while 5.0% (n=10) were co-infected with P. falciparum and STHs. Two (1%) of the STHs infected women had co-infection of hookworm+A. lumbricoides Participants characteristics and prevalence of malaria and helminthic infection among pregnant and non- pregnant women A total of 200 pregnant women with a mean age of 26.5 years (range:15–40) were enrolled in this study (Table 1). The mean gestational age of the pregnant women at enrolment was 17.4 weeks (range: 8–36 weeks). The mean packed cell volume (PCV) was 35.94±6.50. The overall-prevalence of P. falciparum and STHs was 29.5% (59/200) and 12% (24/200) respectively (Table 1). Out of the 59 malaria positive pregnant women, 24.5% (n=49) were positive for P. falciparum only, 6.5% (n=13) were positive for STHs only while 5.0% (n=10) were co-infected with P. falciparum and STHs. Two (1%) of the STHs infected women had co-infection of hookworm+A. lumbricoides Participants characteristics and prevalence of malaria and helminthic infection among pregnant and non- pregnant women Prevalence of P. falciparum and STHs The prevalence of P. falciparum, STHs and P. falciparum+STHs co-infection with respect to age, gravidity and gestational age is shown in Table 2. The highest prevalence of P. falciparum infection was recorded in the age group >30 years (49.2%) followed by age group 25–29 years (30.5%) and the least in age group <20 (5.1%) and the difference was statistically significant (p=0.001). Prevalence of P.falciparum and STHs infection was higher among primigraviadae, but the difference was not statistically significant. For P. falciparum+STHs co-infection the primigravidae (80%) had the highest prevalence compared to secongravidae (20%) and multigravidae (0%) and the difference was statistically significant (p=0.02). Those in the second trimester of pregnancy had the highest prevalence of falciparum (57.6%), STHs (70.8) and P.falciparum+STHs co-infection (70%) but the difference was not statistically significant. Prevalence of Malaria and intestinal helminth infection among pregnant by age, parity, gestational age and PCV Values are statistically significant (p ≤ 0.05) The prevalence of P. falciparum, STHs and P. falciparum+STHs co-infection with respect to age, gravidity and gestational age is shown in Table 2. The highest prevalence of P. falciparum infection was recorded in the age group >30 years (49.2%) followed by age group 25–29 years (30.5%) and the least in age group <20 (5.1%) and the difference was statistically significant (p=0.001). Prevalence of P.falciparum and STHs infection was higher among primigraviadae, but the difference was not statistically significant. For P. falciparum+STHs co-infection the primigravidae (80%) had the highest prevalence compared to secongravidae (20%) and multigravidae (0%) and the difference was statistically significant (p=0.02). Those in the second trimester of pregnancy had the highest prevalence of falciparum (57.6%), STHs (70.8) and P.falciparum+STHs co-infection (70%) but the difference was not statistically significant. Prevalence of Malaria and intestinal helminth infection among pregnant by age, parity, gestational age and PCV Values are statistically significant (p ≤ 0.05) Association between P. falciparum and intestinal helminth The association between P. falciparum and STHs infection among the pregnant women is shown in Table 3. 
Out of the 59 pregnant women that were positive for malaria, 10 (16.9%) were co-infected with STHs, and the difference was not statistically significant (p = 0.23). In the final model, pregnant women infected with helminths in this study were almost twice as likely to have P. falciparum infection as those without helminth infection (OR =1.85; 95% confidence interval [CI]: 0.77–4.45). Increased odds of P. falciparum infection were also observed among women infected with A. lumbricoides when compared to those not infected (OR =2.13; CI: 0.83–5.44). Although a similar pattern to that of A. lumbricoides was observed for hookworm and S. stercoralis, only one observation was recorded in each case, which limits the reliability of these estimates (Table 3). Relationship between malaria and intestinal helminth infection among pregnant women The association between P. falciparum and STHs infection among the pregnant women is shown in Table 3. Out of the 59 pregnant women that were positive for malaria, 10 (16.9%) were co-infected with STHs, and the difference was not statistically significant (p = 0.23). In the final model, pregnant women infected with helminths in this study were almost twice as likely to have P. falciparum infection as those without helminth infection (OR =1.85; 95% confidence interval [CI]: 0.77–4.45). Increased odds of P. falciparum infection were also observed among women infected with A. lumbricoides when compared to those not infected (OR =2.13; CI: 0.83–5.44). Although a similar pattern to that of A. lumbricoides was observed for hookworm and S. stercoralis, only one observation was recorded in each case, which limits the reliability of these estimates (Table 3). Relationship between malaria and intestinal helminth infection among pregnant women Influence of demographic and attitudinal practice on P. falciparum and helminth infection Table 4 shows the influence of demographic characteristics and attitudinal practice on P. falciparum and STHs infection in the study population. Age was significantly associated with P. falciparum infection (P=0.0001) among pregnant women in the study area. Education, employment status, toilet facility type, gravidity and gestational age were not significantly associated with P. falciparum infection. Age, education, employment status, toilet facility type and gestational age were not significantly associated with P. falciparum infection. None of the factors was found to be significantly associated with STHs infection. The highest prevalence of STHs was among pregnant women aged 25–29 years (45.8%; 11/24), unemployed (41.7%; 10/24) and self-employed (41.7%; 10/24), primigravidae (58.3%; 14), those who used pit latrines (58.3%; 14/24) and those who were in the second trimester of pregnancy (70.8%; 17/24). Of the 24 pregnant women positive for helminths, 7 (29.2%) were anaemic, while 17% (10/59) of the malaria-positive pregnant women were also anaemic with PCV ≤30% (Table 3). The highest prevalence of malaria was found among women aged >30 years (49.2%), unemployed (47.5%), and those that used pit latrines (49.2%). Three of the 10 pregnant women with P. falciparum and helminth co-infection were anaemic. Amongst the 10 pregnant women co-infected with P. 
falciparum and STHs, the highest prevalence was observed among pregnant women aged 25–29 years (60%), unemployed (50%), those who used pit latrines (50%), those in the second trimester of pregnancy (70%) and those who had not taken any sulfadoxine-pyrimethamine (SP) for malaria prevention (70%) but the differences were not statistically significant (table 3). Prevalence of P. falciparum was also higher among primigraviadae (52.5%), women in the second trimester of pregnancy (57.6%) and women who had not received any SP for malaria prevention (55.9%) (Table 4). Risk factors associated with malaria and helminths infection among pregnant women Table 4 shows the influence of demographic characteristics and attitudinal practice on P.falciparum and STHs infection in the study population. Age was significantly associated with P.falciparum infection (P=0.0001) among pregnant women in the study area. Education, employment status, toilet facility type, gravidity and gestational age were not significantly associated with P. falciparum infection. Age, education, employment status, toilet facility type and gestational age were not significantly associated with P. falciparum infection. None of the factors was found to be significantly associated with STHs infection. The highest prevalence of STHs was among pregnant women aged 25–29 years (45.8%; 11/24), unemployed (41.7%; 10/24) and self-employed (41.7%;10/24), primigravidae (58.3%; 14), those who used pit latrines (58.3%; 14/24) and those who were in the second trimester of pregnancy (70.8%; 17/24). Of the 24 pregnant women positive for helminths, 7 (29.2%) were anaemic while 17% (10/59) of the malaria positive pregnant women were also anaemic with PCV ≤30% (Table 3). The highest prevalence of malaria was found among women aged >30 yrs (49.2%), unemployed (47.5%), and those that used pit latrines (49.2%). Three of the 10 pregnant women with P. falciparum helminth co-infection were anaemic. Amongst the 10 pregnant women co infected with P. falciparum and STHs, the highest prevalence was observed among pregnant women aged 25–29 years (60%), unemployed (50%), those who used pit latrines (50%), those in the second trimester of pregnancy (70%) and those who had not taken any sulfadoxine-pyrimethamine (SP) for malaria prevention (70%) but the differences were not statistically significant (table 3). Prevalence of P. falciparum was also higher among primigraviadae (52.5%), women in the second trimester of pregnancy (57.6%) and women who had not received any SP for malaria prevention (55.9%) (Table 4). Risk factors associated with malaria and helminths infection among pregnant women General characteristics of the study population: A total of 200 pregnant women with a mean age of 26.5 years (range:15–40) were enrolled in this study (Table 1). The mean gestational age of the pregnant women at enrolment was 17.4 weeks (range: 8–36 weeks). The mean packed cell volume (PCV) was 35.94±6.50. The overall-prevalence of P. falciparum and STHs was 29.5% (59/200) and 12% (24/200) respectively (Table 1). Out of the 59 malaria positive pregnant women, 24.5% (n=49) were positive for P. falciparum only, 6.5% (n=13) were positive for STHs only while 5.0% (n=10) were co-infected with P. falciparum and STHs. Two (1%) of the STHs infected women had co-infection of hookworm+A. lumbricoides Participants characteristics and prevalence of malaria and helminthic infection among pregnant and non- pregnant women Prevalence of P. falciparum and STHs: The prevalence of P. 
falciparum, STHs and P. falciparum+STHs co-infection with respect to age, gravidity and gestational age is shown in Table 2. The highest prevalence of P. falciparum infection was recorded in the age group >30 years (49.2%), followed by the age group 25–29 years (30.5%), and the lowest in the age group <20 years (5.1%); the difference was statistically significant (p=0.001). The prevalence of P. falciparum and STHs infection was higher among primigravidae, but the difference was not statistically significant. For P. falciparum+STHs co-infection, the primigravidae (80%) had the highest prevalence compared to secundigravidae (20%) and multigravidae (0%), and the difference was statistically significant (p=0.02). Those in the second trimester of pregnancy had the highest prevalence of P. falciparum (57.6%), STHs (70.8%) and P. falciparum+STHs co-infection (70%), but the difference was not statistically significant. Prevalence of malaria and intestinal helminth infection among pregnant women by age, parity, gestational age and PCV Values are statistically significant (p ≤ 0.05) Association between P. falciparum and intestinal helminth: The association between P. falciparum and STHs infection among the pregnant women is shown in Table 3. Out of the 59 pregnant women that were positive for malaria, 10 (16.9%) were co-infected with STHs, and the difference was not statistically significant (p = 0.23). In the final model, pregnant women infected with helminths in this study were almost twice as likely to have P. falciparum infection as those without helminth infection (OR =1.85; 95% confidence interval [CI]: 0.77–4.45). Increased odds of P. falciparum infection were also observed among women infected with A. lumbricoides when compared to those not infected (OR =2.13; CI: 0.83–5.44). Although a similar pattern to that of A. lumbricoides was observed for hookworm and S. stercoralis, only one observation was recorded in each case, which limits the reliability of these estimates (Table 3). Relationship between malaria and intestinal helminth infection among pregnant women Influence of demographic and attitudinal practice on P. falciparum and helminth infection: Table 4 shows the influence of demographic characteristics and attitudinal practice on P. falciparum and STHs infection in the study population. Age was significantly associated with P. falciparum infection (P=0.0001) among pregnant women in the study area. Education, employment status, toilet facility type, gravidity and gestational age were not significantly associated with P. falciparum infection. Age, education, employment status, toilet facility type and gestational age were not significantly associated with P. falciparum infection. None of the factors was found to be significantly associated with STHs infection. The highest prevalence of STHs was among pregnant women aged 25–29 years (45.8%; 11/24), unemployed (41.7%; 10/24) and self-employed (41.7%; 10/24), primigravidae (58.3%; 14), those who used pit latrines (58.3%; 14/24) and those who were in the second trimester of pregnancy (70.8%; 17/24). Of the 24 pregnant women positive for helminths, 7 (29.2%) were anaemic, while 17% (10/59) of the malaria-positive pregnant women were also anaemic with PCV ≤30% (Table 3). The highest prevalence of malaria was found among women aged >30 years (49.2%), unemployed (47.5%), and those that used pit latrines (49.2%). Three of the 10 pregnant women with P. falciparum and helminth co-infection were anaemic. Amongst the 10 pregnant women co-infected with P. 
falciparum and STHs, the highest prevalence was observed among pregnant women aged 25–29 years (60%), unemployed (50%), those who used pit latrines (50%), those in the second trimester of pregnancy (70%) and those who had not taken any sulfadoxine-pyrimethamine (SP) for malaria prevention (70%) but the differences were not statistically significant (table 3). Prevalence of P. falciparum was also higher among primigraviadae (52.5%), women in the second trimester of pregnancy (57.6%) and women who had not received any SP for malaria prevention (55.9%) (Table 4). Risk factors associated with malaria and helminths infection among pregnant women Discussion: This study presents data on the prevalence of P. falciparum and STHs co-infection and some possible associated risk factors in pregnant women in South Western, Nigeria. P. falciparum is a major cause of mortality in pregnant women and their foetus and it is endemic in many parts of the world where its co-occurrence with STHs has been reported16. This study observed that malaria is still highly prevalent among pregnant women in Nigeria. The 29.5% prevalence of P. falciparum recorded in this study was similar to the prevalence of 21.9% reported in Ilorin Nigeria, 21.9% in Cameroon17, and 36.3% in Ghana18. However the prevalence was lower than the reports from some other parts of Nigeria (Kebbi (41.6%)19, Ethiopia (11.6%)20 and Libreville (Gabon) (57%)21. Also, a study in Angola and Sudan recorded a prevalence of 8.6% and 13.7% respectively among pregnant women22,23. The differences in the prevalence of P. falciparum in pregnancy may be attributed to various reasons like sample size, abundance of vector but also primarily linked availability of effective control measures in the endemic areas. Although WHO recommends two doses of sulfadoxine-pyrimethamine (IPTp-SP) for malaria prophylaxis in pregnancy24, 53% of the women in this study did not recieved any SP prescription during Ante-Natal Care (ANC) and only 47.0% recieved one or two doses. Use of IPTp has been shown to decrease Plasmodium parasitemia25 but unfortunately the implementation of this recommendation has been sub-optimal in many endemic countries as observed in our study. Improving the coverage of pregnant women receiving IPTp in malaria endemic region, like in Nigeria is essential in other to reduce the mortality associated with malaria in pregnenacy. An overall prevalence of 12.0% was observed for STHs infections amongst pregnant women in this study. This shows the susceptibility of the pregnancy state to infection which could be due to impaired immunity25. The prevalence observed in our study is lower compared to what was reported in Ghana (25.7%)18, Ethiopia (41%)20 and Kenya (13.8%)26. The relatively low prevelence of STHs in this study could be an indication of improved sanitation and proper sewage disposal and improved toilet facilities as shown by the fact that 39% of the participant had improved toilet facilities. The prevalence of A. lumbricoides (15.3%) was higher than those of other helminths, an observation that has been made in other studies in North Central and South South region of Nigeria (19.1%)27. However, the prevalence of A. lumbricoides recorded here was higher than 5.0% recorded in South Eastern Nigeria28 and Kenya26. Only, 5% of the participants in this study were co-infected with P. falciparum and STHs of the women who had malaria while 12% had some type of worm infection. 
Awareness of the importance of co-infection is increasing and suggestions have been made that helminths infection may influence susceptibility to other infections including malaria (Nacher 2001; Bentwich, 2000). This association may vary by geographical region, the local species and the risk factors for the parasites being studied29. In this study, although no significant association was observed in the interaction between helminth and malaria, the odds revealed that pregnant women infected with helminths, more precisely A. lumbricoides were almost two times as likely to have P. falciparum infection compared to those without A. lumbricoides infection. This is similar to our previous observation in studies conducted among school children in different locations in Nigeria6,30. Also, similar association was previously observed among cohort of pregnant women in Ghana18. Although the main reason for this observation is yet to be elucidated, the general belief is that helminths drive the type 2 (Th2) immune response which down-regulates the Th1 anti-malarial immune response, resulting in increased risk of malaria infection. This explanation may not hold true in all circumstances as other authors have conflicting reports on how helminths regulate P. falciparum infection. More studies are therefore needed to expound on these conflicting observations. Although, the prevalence of P. falciparum, STHs and co-infection was higher among the primigravidae compared to secondigravida and multigravida, no significant association was observed between primiparity and P. falciparum infection. However, primiparity was significantly associated with P. falciparum and STHs coinfection. Primiparity has been identified as a risk factor for P. falciparum infection during pregnancy3,21,25,29. Majority of the pregnant women were not anaemic and so had a PCV value of 30% or more. A study in Ethiopia recorded anaemia prevalence of 53.9% among pregnant women31. Anaemia (PCV ≤30%) was not associated with P. falciparum infection, STHs infection and P. falciparum + STHs co-infection among the pregnant women studied. Lack of association between hookworm and anaemia has been reported in Brazil11. Hookworm may cause anaemia particularly in pregnant women32 and hemoglobin levels have been found to be lowest in pregnant women who had helminth or malaria infections33. Age was identified as a risk factor for malaria in pregnancy while gravidity was a risk factor for P. falciparum +STHs co-infection. Previous studies have identified the risk factors for malaria in pregnancy as age and primigravidae10,21,25. However, a lack of association between age or parity and malaria has been reported in Sudan22. No risk factor was identified for helminth infection in this study. Any risk factors (if present) in this study might not have been detected in this study given the low number of women who were infected with helminths (especially with hookworm). Again the use of one stool sample from each woman is a limiting factor as the proportion of women with low-intensity infection could have been mis-classifed as uninfected since multiple samples are required for accurate detection34. Future study with larger sample size is therefore recommended to help further investigate the impact anemia and other risk factor in pregnant women. 
The risk factors for helminth infections depend on the route of transmission and life cycle of the helminth, and are related to the state of hygiene, sanitation and environmental conditions (temperature and humidity)16,35. It is also worth noting that pregnant women in the rural communities who are the most vulnerable to malaria and helminth infection do not attend antenatal clinics. The implication of this is that higher prevalence of malaria, helminth and malaria helmith coinfection could be recorded in a rural community based study. Also other studies have shown other risk factors associated with STHs infection include the absence of garbage bin in the household and age of the mother at the time of marriage, house floor type or handwash33,36,37. However, data for these factors were not included in the study and may have an effect which was not observed. Poor education and unemployment have also been identified as risk factors for both helminth and P. falciparum infections33. Conclusion: The results suggest that P. falciparum and STHs infection and their co-infection could constitute a serious problem among women attending antenatal clinic (ANC) with primigravida being more parasitized than the multigravida. More effort should be placed on the control and prevention of malaria and intestinal helminths among pregnant women in endemic areas in other to reduce the associated risks and burden.
Background: Plasmodium falciparum and soil-transmitted helminth (STHs) infections are widespread in sub-Saharan Africa, where co-infection is also common. This study assessed the prevalence of these infections and their risk factors among pregnant women in Osogbo, Nigeria. Methods: A total of 200 pregnant women attending the antenatal clinic were recruited. Plasmodium falciparum was detected using thick and thin film methods, while the formol-ether concentration method was used for STHs detection. A questionnaire was used to investigate the possible risk factors associated with acquisition of malaria and helminth infections. Results: The prevalence of P. falciparum, STHs and their co-infection was 29.5%, 12% and 5%, respectively. The prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection was significantly higher in primigravidae (52.5% vs 58.3% vs 80%) than in secundigravidae (18.6% vs 25.0% vs 20%) and multigravidae (28.8% vs 16.7% vs 0%) (p=0.02). The prevalence-associated factor identified for P. falciparum was age (p=0.0001), while gravidity (p=0.02) was identified for P. falciparum + STHs co-infection. Conclusions: A high prevalence of P. falciparum and helminth infections was observed among the pregnant women, with primigravidae being the most susceptible to co-infection. There is an urgent need to implement effective malaria and STHs prevention measures for this high-risk population.
Introduction: Malaria and soil transmitted helminths (STHs) infections are the most important parasitic infections in sub-Saharan Africa, where a significant proportion of the populations including pregnant women are exposed to these infections. In Nigeria the coverage of antenatal care is put at 61% and maternal mortality rate is put at over 560/100,000 pregnant women annually1. Malaria caused by Plasmodium falciparum is one of the major problems encountered by these pregnant women living in malaria endemic areas of Nigeria2. Pregnant women have a higher density of parasitaemia and have more complications of P. falciparum infection than non-pregnant women3. Plasmodium falciparum infection contributes significantly to maternal anaemia, low birth weight of infants, intrauterine growth retardation, preterm deliveries and infant mortality in sub-Saharan Africa3,4,5. Similarly, STHs infections are widely distributed in tropical and sub-tropical areas of the developing countries6. STHs infections are associated with cognitive impairment and lowered educational achievement, anaemia, stunted growth, physical and mental development, malnutrition and responsible for about one million deaths per year7. Co-infection of P. falciparum and STHs is common given the spatial coincidence of risk between malaria and helminths infections among individual living in Africa8,6. A major impact of malaria and STHs infections is anaemia which serves as a public health problem in the tropics9. Plasmodium falciparum has been shown to contribute significantly to anaemia in both pregnant women and young children using a number of mechanisms that includes haemolysis and phagocytosis3,10. Blood loss resulting from hookworm infection is considered as the main cause of anaemia11. Therefore the combination of P. falciparum and STHs (mostly hookworm infection) infection is considered as a strong indicator of moderate to serve anaemia12. Iron deficiency anaemia due to STHs infection depends on many factors, including the iron status of the individual and the bioavailability of iron in the diet. Most African women live on diets with poor bioavailable iron which results in low iron stores and could predispose helminth positive pregnant women to iron deficiency anaemia13. In addition, many other factors like environmental conditions, educational status, adherence to preventive measures could directly affect the prevalence of malaria and helminths in sub-Saharan Africa. Few studies have reported the occurrence, interaction and risk factors of P. falciparum and helminths infection in pregnant women in Nigeria. Epidemiological surveillance and impact of both P. falciparum and STHs remain poorly defined among pregnant women in Nigeria. This study was therefore conducted to investigate the prevalence and risk factors of P. falciparum malaria and/or helminth co-infection in pregnant women in Osogbo, South Western, Nigeria. Conclusion: The results suggest that P. falciparum and STHs infection and their co-infection could constitute a serious problem among women attending antenatal clinic (ANC) with primigravida being more parasitized than the multigravida. More effort should be placed on the control and prevention of malaria and intestinal helminths among pregnant women in endemic areas in other to reduce the associated risks and burden.
Background: Plasmodium falciparum and soil-transmitted helminth (STHs) infections are widespread in sub-Saharan Africa, where co-infection is also common. This study assessed the prevalence of these infections and their risk factors among pregnant women in Osogbo, Nigeria. Methods: A total of 200 pregnant women attending the antenatal clinic were recruited. Plasmodium falciparum was detected using thick and thin film methods, while the formol-ether concentration method was used for STHs detection. A questionnaire was used to investigate the possible risk factors associated with acquisition of malaria and helminth infections. Results: The prevalence of P. falciparum, STHs and their co-infection was 29.5%, 12% and 5%, respectively. The prevalence of P. falciparum, STHs and P. falciparum + STHs co-infection was significantly higher in primigravidae (52.5% vs 58.3% vs 80%) than in secundigravidae (18.6% vs 25.0% vs 20%) and multigravidae (28.8% vs 16.7% vs 0%) (p=0.02). The prevalence-associated factor identified for P. falciparum was age (p=0.0001), while gravidity (p=0.02) was identified for P. falciparum + STHs co-infection. Conclusions: A high prevalence of P. falciparum and helminth infections was observed among the pregnant women, with primigravidae being the most susceptible to co-infection. There is an urgent need to implement effective malaria and STHs prevention measures for this high-risk population.
6,645
267
[ 128, 255, 134, 104, 164, 206, 173, 403 ]
13
[ "falciparum", "women", "infection", "pregnant", "sths", "pregnant women", "malaria", "prevalence", "study", "age" ]
[ "infections including malaria", "factor malaria pregnancy", "malaria sths infections", "malaria prophylaxis pregnancy24", "prevalence malaria sths" ]
[CONTENT] P.falciparum | STHs | Co-infection | pregnant women | Nigeria [SUMMARY]
[CONTENT] P.falciparum | STHs | Co-infection | pregnant women | Nigeria [SUMMARY]
[CONTENT] P.falciparum | STHs | Co-infection | pregnant women | Nigeria [SUMMARY]
[CONTENT] P.falciparum | STHs | Co-infection | pregnant women | Nigeria [SUMMARY]
[CONTENT] P.falciparum | STHs | Co-infection | pregnant women | Nigeria [SUMMARY]
[CONTENT] P.falciparum | STHs | Co-infection | pregnant women | Nigeria [SUMMARY]
[CONTENT] Adult | Ancylostomatoidea | Animals | Ascaris lumbricoides | Coinfection | Cross-Sectional Studies | Female | Helminthiasis | Helminths | Humans | Intestinal Diseases, Parasitic | Malaria, Falciparum | Nigeria | Plasmodium falciparum | Pregnancy | Pregnancy Complications, Parasitic | Pregnant Women | Prevalence | Soil | Strongyloides | Young Adult [SUMMARY]
[CONTENT] Adult | Ancylostomatoidea | Animals | Ascaris lumbricoides | Coinfection | Cross-Sectional Studies | Female | Helminthiasis | Helminths | Humans | Intestinal Diseases, Parasitic | Malaria, Falciparum | Nigeria | Plasmodium falciparum | Pregnancy | Pregnancy Complications, Parasitic | Pregnant Women | Prevalence | Soil | Strongyloides | Young Adult [SUMMARY]
[CONTENT] Adult | Ancylostomatoidea | Animals | Ascaris lumbricoides | Coinfection | Cross-Sectional Studies | Female | Helminthiasis | Helminths | Humans | Intestinal Diseases, Parasitic | Malaria, Falciparum | Nigeria | Plasmodium falciparum | Pregnancy | Pregnancy Complications, Parasitic | Pregnant Women | Prevalence | Soil | Strongyloides | Young Adult [SUMMARY]
[CONTENT] Adult | Ancylostomatoidea | Animals | Ascaris lumbricoides | Coinfection | Cross-Sectional Studies | Female | Helminthiasis | Helminths | Humans | Intestinal Diseases, Parasitic | Malaria, Falciparum | Nigeria | Plasmodium falciparum | Pregnancy | Pregnancy Complications, Parasitic | Pregnant Women | Prevalence | Soil | Strongyloides | Young Adult [SUMMARY]
[CONTENT] Adult | Ancylostomatoidea | Animals | Ascaris lumbricoides | Coinfection | Cross-Sectional Studies | Female | Helminthiasis | Helminths | Humans | Intestinal Diseases, Parasitic | Malaria, Falciparum | Nigeria | Plasmodium falciparum | Pregnancy | Pregnancy Complications, Parasitic | Pregnant Women | Prevalence | Soil | Strongyloides | Young Adult [SUMMARY]
[CONTENT] Adult | Ancylostomatoidea | Animals | Ascaris lumbricoides | Coinfection | Cross-Sectional Studies | Female | Helminthiasis | Helminths | Humans | Intestinal Diseases, Parasitic | Malaria, Falciparum | Nigeria | Plasmodium falciparum | Pregnancy | Pregnancy Complications, Parasitic | Pregnant Women | Prevalence | Soil | Strongyloides | Young Adult [SUMMARY]
[CONTENT] infections including malaria | factor malaria pregnancy | malaria sths infections | malaria prophylaxis pregnancy24 | prevalence malaria sths [SUMMARY]
[CONTENT] infections including malaria | factor malaria pregnancy | malaria sths infections | malaria prophylaxis pregnancy24 | prevalence malaria sths [SUMMARY]
[CONTENT] infections including malaria | factor malaria pregnancy | malaria sths infections | malaria prophylaxis pregnancy24 | prevalence malaria sths [SUMMARY]
[CONTENT] infections including malaria | factor malaria pregnancy | malaria sths infections | malaria prophylaxis pregnancy24 | prevalence malaria sths [SUMMARY]
[CONTENT] infections including malaria | factor malaria pregnancy | malaria sths infections | malaria prophylaxis pregnancy24 | prevalence malaria sths [SUMMARY]
[CONTENT] infections including malaria | factor malaria pregnancy | malaria sths infections | malaria prophylaxis pregnancy24 | prevalence malaria sths [SUMMARY]
[CONTENT] falciparum | women | infection | pregnant | sths | pregnant women | malaria | prevalence | study | age [SUMMARY]
[CONTENT] falciparum | women | infection | pregnant | sths | pregnant women | malaria | prevalence | study | age [SUMMARY]
[CONTENT] falciparum | women | infection | pregnant | sths | pregnant women | malaria | prevalence | study | age [SUMMARY]
[CONTENT] falciparum | women | infection | pregnant | sths | pregnant women | malaria | prevalence | study | age [SUMMARY]
[CONTENT] falciparum | women | infection | pregnant | sths | pregnant women | malaria | prevalence | study | age [SUMMARY]
[CONTENT] falciparum | women | infection | pregnant | sths | pregnant women | malaria | prevalence | study | age [SUMMARY]
[CONTENT] infections | iron | women | anaemia | pregnant | pregnant women | infection | falciparum | sths infections | sub [SUMMARY]
[CONTENT] state | osogbo | study | absence | collected | sample | stool | temperature | women | consent [SUMMARY]
[CONTENT] women | falciparum | infection | pregnant | sths | age | pregnant women | 24 | prevalence | table [SUMMARY]
[CONTENT] infection co | women endemic areas reduce | problem women attending | problem women | co infection constitute | co infection constitute problem | constitute | constitute problem | constitute problem women | constitute problem women attending [SUMMARY]
[CONTENT] women | infection | falciparum | sths | pregnant | pregnant women | prevalence | malaria | age | study [SUMMARY]
[CONTENT] women | infection | falciparum | sths | pregnant | pregnant women | prevalence | malaria | age | study [SUMMARY]
[CONTENT] ||| Osogbo | Nigeria [SUMMARY]
[CONTENT] 200 ||| ||| malaria | helminth [SUMMARY]
[CONTENT] 29.5% | 12% and 5% ||| P. | 52.5% | 58.3% | 80% | 18.6% | 25.0% | 20% | 28.8% | 16.7% | 0% ||| [SUMMARY]
[CONTENT] helminth ||| [SUMMARY]
[CONTENT] ||| Osogbo | Nigeria ||| 200 ||| ||| malaria | helminth ||| ||| 29.5% | 12% and 5% ||| P. | 52.5% | 58.3% | 80% | 18.6% | 25.0% | 20% | 28.8% | 16.7% | 0% ||| ||| helminth ||| [SUMMARY]
[CONTENT] ||| Osogbo | Nigeria ||| 200 ||| ||| malaria | helminth ||| ||| 29.5% | 12% and 5% ||| P. | 52.5% | 58.3% | 80% | 18.6% | 25.0% | 20% | 28.8% | 16.7% | 0% ||| ||| helminth ||| [SUMMARY]
Predictors of pneumothorax following endoscopic valve therapy in patients with severe emphysema.
27536088
Endoscopic valve implantation is an effective treatment for patients with advanced emphysema. Despite the minimally invasive procedure, valve placement is associated with risks, the most common of which is pneumothorax. This study was designed to identify predictors of pneumothorax following endoscopic valve implantation.
BACKGROUND
Preinterventional clinical measures (vital capacity, forced expiratory volume in 1 second, residual volume, total lung capacity, 6-minute walk test), qualitative computed tomography (CT) parameters (fissure integrity, blebs/bulla, subpleural nodules, pleural adhesions, partial atelectasis, fibrotic bands, emphysema type) and quantitative CT parameters (volume and low attenuation volume of the target lobe and the ipsilateral untreated lobe, target air trapping, ipsilateral lobe volume/hemithorax volume, collapsibility of the target lobe and the ipsilateral untreated lobe) were retrospectively evaluated in patients who underwent endoscopic valve placement (n=129). Regression analysis was performed to compare those who developed pneumothorax following valve therapy (n=46) with those who developed target lobe volume reduction without pneumothorax (n=83).
METHODS
Low attenuation volume% of ipsilateral untreated lobe (odds ratio [OR] =1.08, P=0.001), ipsilateral untreated lobe volume/hemithorax volume (OR =0.93, P=0.017), emphysema type (OR =0.26, P=0.018), pleural adhesions (OR =0.33, P=0.012) and residual volume (OR =1.58, P=0.012) were found to be significant predictors of pneumothorax. Fissure integrity (OR =1.16, P=0.075) and 6-minute walk test (OR =1.05, P=0.077) were also indicative of pneumothorax. The model including the aforementioned parameters predicted whether a patient would experience a pneumothorax 84% of the time (area under the curve =0.84).
FINDING
Clinical and CT parameters provide a promising tool to effectively identify patients at high risk of pneumothorax following endoscopic valve therapy.
INTERPRETATION
[ "Adult", "Aged", "Aged, 80 and over", "Area Under Curve", "Bronchoscopy", "Exercise Tolerance", "Female", "Forced Expiratory Volume", "Humans", "Linear Models", "Logistic Models", "Lung", "Male", "Middle Aged", "Multidetector Computed Tomography", "Odds Ratio", "Pneumothorax", "Predictive Value of Tests", "Pulmonary Emphysema", "ROC Curve", "Retrospective Studies", "Risk Factors", "Severity of Illness Index", "Treatment Outcome", "Vital Capacity", "Walk Test" ]
4976918
Introduction
Endoscopic valve therapy has evolved as a new therapeutic modality in patients with advanced chronic obstructive pulmonary disease (COPD) and emphysema. This technique involves the implantation of one-way valves in the emphysematous lung lobe. By allowing the air to exit during expiration, the valves lead to target lobe volume reduction (TLVR), whereby a complete lobar atelectasis represents the optimal result. The first randomized controlled trials (RCTs), known as VENT (“Endobronchial Valves for Emphysema Palliation Trial”) and Euro-VENT, demonstrated encouraging results concerning the safety and effectiveness of this procedure.1,2 Adverse events, including COPD exacerbations, pneumonia, mild hemoptysis, hypercapnia, and pneumothorax, were observed, but the rate of serious complications did not differ between the treatment and control groups. In these RCTs, the pneumothorax rate was 4.2% and 4.5% at 90 days following the intervention. In VENT and Euro-VENT, complete interlobar fissures in the preinterventional multidetector computed tomography (MDCT), which is a surrogate for low interlobar collateral ventilation, and lobar occlusion were found to be predictors of a meaningful clinical outcome following valve placement. Therefore, only a complete occlusion of the target lobe in patients with low collateral ventilation was recommended; this was reevaluated in two recently published RCTs, the BeLieVeR-HIFI study and STELVIO.3,4 These trials confirmed the efficacy of valve therapy but revealed an increased pneumothorax rate of 8%–18%. It is likely that a parenchymal rupture of the ipsilateral untreated lobe due to rapid expansion in the case of volume reduction of the treated lobe is the reason for the pneumothorax. In further trials, the authors reported even higher pneumothorax rates of 25%, 23%, and 20%.5–7 Thus, the optimized patient selection is not only associated with improved outcomes following valve placement but also implies a higher risk of pneumothorax. Completeness of the fissure is particularly considered to support optimized outcomes as well as the advent of pneumothorax. As pneumothorax seems to occur only in patients with significant volume shift, it is assumed that they may nevertheless experience good clinical outcomes in case of persistent TLVR after recovering from pneumothorax.8 However, pneumothorax presents a severe complication that frequently requires chest tube insertion and is associated with immobilization, prolonged hospitalization, and further endoscopic or surgical interventions.9,10 Furthermore, tension pneumothorax may also present a life-threatening complication that may lead to a shift of the mediastinum and obstruction of venous return to the heart, compromising cardiovascular circumstances. Therefore, assessing the predictors of pneumothorax other than fissure integrity, which also predicts TLVR, is of great importance.
Statistical analysis
Univariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). The paired Student’s t-test was used to compare within-group differences in qualitative parameters. P-values <0.05 were considered to be significant. Receiver operating characteristic curves were created using the final model of predictors.
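The modelling workflow described above (univariate screening at P<0.5, stepwise forward logistic regression, and a receiver operating characteristic curve for the final model) can be sketched as follows. The original analysis was not performed with this code; the data frame, variable names and the entry threshold of 0.05 for the forward step are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def univariate_screen(df: pd.DataFrame, outcome: str, p_threshold: float = 0.5) -> list:
    """Keep predictors whose univariate logistic-regression p-value is below the threshold."""
    keep = []
    for col in df.columns.drop(outcome):
        fit = sm.Logit(df[outcome], sm.add_constant(df[[col]])).fit(disp=0)
        if fit.pvalues[col] < p_threshold:
            keep.append(col)
    return keep

def forward_stepwise(df: pd.DataFrame, outcome: str, candidates: list, p_enter: float = 0.05) -> list:
    """Greedy forward selection: add the candidate with the smallest p-value at each step."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for col in remaining:
            fit = sm.Logit(df[outcome], sm.add_constant(df[selected + [col]])).fit(disp=0)
            pvals[col] = fit.pvalues[col]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# 'df' is assumed to hold one row per patient with a binary 'pneumothorax' column and
# candidate predictors (e.g. residual volume, ipsilateral LAV%, fissure integrity, 6-MWT).
# final = forward_stepwise(df, "pneumothorax", univariate_screen(df, "pneumothorax"))
# model = sm.Logit(df["pneumothorax"], sm.add_constant(df[final])).fit(disp=0)
# print(np.exp(model.params))                                            # odds ratios
# print(roc_auc_score(df["pneumothorax"], model.predict(sm.add_constant(df[final]))))
```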
Results
The patient characteristics of the 83 subjects with a radiologically confirmed TLVR (Figure 1) and those of 46 patients who developed pneumothorax (Figure 2) are presented in Table 1. On the baseline MDCT scans, fissure integrity varied from 80% to 100%. The fissure integrity was between 90% and 100% in 121 patients and between 80% and 90% in eight patients. Overall, the median of fissure completeness was 100% (interquartile range 98.3 to 100). Thirty-eight of the 46 patients (83%) who developed pneumothorax underwent chest tube insertion. In 18 patients (39%), valve explantation was necessary. Eight patients (17%) underwent additional video-assisted thoracoscopy to seal the fistula. Three months after recovering from pneumothorax, patients experienced slight, but not clinically relevant improvements in lung function parameters, exercise capacity and dyspnoe score (Table 2). After the multivariable regression analysis (Table 3), the LAV% of the untreated ipsilateral lobe, the volume of the untreated ipsilateral lobe to volume of the hemithorax, the predominant type of emphysema, the presence of pleural adhesions, and the residual volume were variables that were independently associated with pneumothorax, adjusting for the target lobe; the 6-MWT and the fissure integrity were borderline non-significant. Thereby, residual volume (odds ratio [OR] =1.58, P=0.012) and untreated ipsilateral lobe LAV% (OR =1.08, P=0.001) were associated with an increased risk of pneumothorax. In contrast, the presence of pleural adhesions (OR =0.33, P=0.012), the presence of panlobular emphysema (OR =0.26, P=0.018), and the volume of the untreated ipsilateral lobe to the volume of the hemithorax (OR =0.93, P=0.017) were associated with a decreased risk of pneumothorax. Finally, the 6-MWT (OR =1.05, P=0.077) and the fissure integrity (OR =1.16, P=0.075) tended to be associated with an increased risk of pneumothorax. The area under the curve using the full model with eight variables was 0.84. This indicates that it is possible to predict whether a patient would experience a pneumothorax 84% of the time (Figure 3).
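A brief numerical note on reading the per-unit odds ratios above: because each OR is expressed per one-unit change in its predictor, the effect compounds multiplicatively over larger differences. The sketch below illustrates this for the reported values; the 10-point and 2-unit increments are chosen purely for illustration, and the residual volume unit is taken as modelled in the original analysis.

```python
# Compounding a per-unit odds ratio over a k-unit difference: OR_k = OR_1 ** k
or_lav_per_point = 1.08   # reported OR per 1% LAV of the untreated ipsilateral lobe
or_rv_per_unit = 1.58     # reported OR per unit of residual volume (unit as modelled)

print(round(or_lav_per_point ** 10, 2))   # ~2.16: odds over a 10-point LAV% difference
print(round(or_rv_per_unit ** 2, 2))      # ~2.50: odds over a 2-unit residual volume difference
```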
null
null
[ "Methods", "MDCT parameters" ]
[ "In this retrospective analysis, clinical and MDCT scan data of patients who underwent valve therapy were examined to determine predictors of pneumothorax following valve therapy. The protocol of this retrospective analysis was approved by the local Ethics Committee of Heidelberg (S-609/2012). The majority of the patients were treated within different prospective trials after written informed consent. As the data in this current analysis were retrospectively analyzed no further patient consent was required.\n Subjects and clinical parameters In this retrospective analysis, baseline clinical measures and MDCT scan data of 129 consecutive patients (mean age: 64 years, range 43–81 years, sex: 50% male) who experienced TLVR (n=83) or pneumothorax (n=46) following valve therapy between 2009 and 2013 were assessed. The mean forced expiratory volume in 1 second (FEV1) was 0.8±0.2 l (32%±8% predicted) and the mean residual volume (RV) was 5.6±1.4 l (257%±59% predicted). All the patients experienced a complete occlusion of the target lobe by endobronchial (Pulmonx, Inc., Neuchatel, Switzerland) or intrabronchial valves (Olympus Corporation, Tokyo, Japan) at the Thoraxklinik at the University of Heidelberg. Prior to valve therapy, MDCT, including software analysis of the emphysema (YACTA, “yet another CT analyser”), and 99 m-Technetium perfusion scan were performed to identify the target lobe. Sixty-one patients (47%) experienced complete occlusion of the left lower lobe, 34 (26%) of the left upper lobe, 20 (16%) of the right upper lobe, 13 (10%) of the right lower lobe, and one (1%) of the right upper and middle lobes. TLVR was defined as lung parenchymal collapse with the development of a soft-tissue dense atelectasis at a lobar or segmental level on a follow-up chest X-ray or CT scan.\nClinical parameters prior to valve therapy and 3 months following pneumothorax, including vital capacity (VC), forced expiratory volume in 1 second (FEV1), residual volume (RV), total lung capacity (TLC), and the 6-minute walk test (6-MWT), were extracted from the patients’ records. Descriptive parameters (target lobe) were also evaluated.\nIn this retrospective analysis, baseline clinical measures and MDCT scan data of 129 consecutive patients (mean age: 64 years, range 43–81 years, sex: 50% male) who experienced TLVR (n=83) or pneumothorax (n=46) following valve therapy between 2009 and 2013 were assessed. The mean forced expiratory volume in 1 second (FEV1) was 0.8±0.2 l (32%±8% predicted) and the mean residual volume (RV) was 5.6±1.4 l (257%±59% predicted). All the patients experienced a complete occlusion of the target lobe by endobronchial (Pulmonx, Inc., Neuchatel, Switzerland) or intrabronchial valves (Olympus Corporation, Tokyo, Japan) at the Thoraxklinik at the University of Heidelberg. Prior to valve therapy, MDCT, including software analysis of the emphysema (YACTA, “yet another CT analyser”), and 99 m-Technetium perfusion scan were performed to identify the target lobe. Sixty-one patients (47%) experienced complete occlusion of the left lower lobe, 34 (26%) of the left upper lobe, 20 (16%) of the right upper lobe, 13 (10%) of the right lower lobe, and one (1%) of the right upper and middle lobes. 
TLVR was defined as lung parenchymal collapse with the development of a soft-tissue dense atelectasis at a lobar or segmental level on a follow-up chest X-ray or CT scan.\nClinical parameters prior to valve therapy and 3 months following pneumothorax, including vital capacity (VC), forced expiratory volume in 1 second (FEV1), residual volume (RV), total lung capacity (TLC), and the 6-minute walk test (6-MWT), were extracted from the patients’ records. Descriptive parameters (target lobe) were also evaluated.\n MDCT parameters The qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14\nThe qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14\n Statistical analysis Univariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). The paired Student’s t-test was used to compare within-group differences in qualitative parameters. P-values <0.05 were considered to be significant. 
Receiver operating characteristic curves were created using the final model of predictors.\nUnivariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). The paired Student’s t-test was used to compare within-group differences in qualitative parameters. P-values <0.05 were considered to be significant. Receiver operating characteristic curves were created using the final model of predictors.", "The qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14" ]
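The LAV and LAV% definitions above lend themselves to a compact re-implementation. The study used the vendor's syngo.via software; the NumPy sketch below is only a conceptual stand-in that assumes a CT volume already calibrated in Hounsfield units and a boolean lobe mask, both of which are hypothetical inputs here.

```python
# Conceptual re-implementation of LAV and LAV% for one lobe; not the syngo.via pipeline.
# hu_volume: 3-D array of Hounsfield units; lobe_mask: boolean array of the same shape.
import numpy as np

def lav_metrics(hu_volume, lobe_mask, voxel_volume_ml, threshold_hu=-950):
    """Return (lobe volume in ml, LAV in ml, LAV%) using the < -950 HU threshold from the text."""
    lobe_voxels = int(lobe_mask.sum())
    lav_voxels = int((hu_volume[lobe_mask] < threshold_hu).sum())
    lobe_volume = lobe_voxels * voxel_volume_ml
    lav = lav_voxels * voxel_volume_ml
    lav_percent = 100.0 * lav_voxels / lobe_voxels if lobe_voxels else float("nan")
    return lobe_volume, lav, lav_percent

# Toy example with synthetic data (values in HU); the voxel volume is illustrative.
rng = np.random.default_rng(1)
hu = rng.normal(-870, 60, size=(40, 40, 40))
mask = np.zeros_like(hu, dtype=bool)
mask[10:30, 10:30, 10:30] = True
print(lav_metrics(hu, mask, voxel_volume_ml=0.001))
```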
[ "methods", null ]
[ "Introduction", "Methods", "Subjects and clinical parameters", "MDCT parameters", "Statistical analysis", "Results", "Discussion" ]
[ "Endoscopic valve therapy has evolved as a new therapeutic modality in patients with advanced chronic obstructive pulmonary disease (COPD) and emphysema. This technique involves the implantation of one-way valves in the emphysematous lung lobe. By allowing the air to exit during expiration, the valves lead to target lobe volume reduction (TLVR), whereby a complete lobar atelectasis represents the optimal result.\nThe first randomized controlled trials (RCTs), known as VENT (“Endobronchial Valves for Emphysema Palliation Trial”) and Euro-VENT, demonstrated encouraging results concerning the safety and effectiveness of this procedure.1,2 Adverse events, including COPD exacerbations, pneumonia, mild hemoptysis, hypercapnia, and pneumothorax, were observed, but the rate of serious complications did not differ between the treatment and control groups. In these RCTs, the pneumothorax rate was 4.2% and 4.5% at 90 days following the intervention.\nIn VENT and Euro-VENT, complete interlobar fissures in the preinterventional multidetector computed tomography (MDCT), which is a surrogate for low interlobar collateral ventilation, and lobar occlusion were found to be predictors of a meaningful clinical outcome following valve placement. Therefore, only a complete occlusion of the target lobe in patients with low collateral ventilation was recommended; this was reevaluated in two recently published RCTs, the BeLieVeR-HIFI study and STELVIO.3,4 These trials confirmed the efficacy of valve therapy but revealed an increased pneumothorax rate of 8%–18%. It is likely that a parenchymal rupture of the ipsilateral untreated lobe due to rapid expansion in the case of volume reduction of the treated lobe is the reason for the pneumothorax. In further trials, the authors reported even higher pneumothorax rates of 25%, 23%, and 20%.5–7 Thus, the optimized patient selection is not only associated with improved outcomes following valve placement but also implies a higher risk of pneumothorax. Completeness of the fissure is particularly considered to support optimized outcomes as well as the advent of pneumothorax.\nAs pneumothorax seems to occur only in patients with significant volume shift, it is assumed that they may nevertheless experience good clinical outcomes in case of persistent TLVR after recovering from pneumothorax.8 However, pneumothorax presents a severe complication that frequently requires chest tube insertion and is associated with immobilization, prolonged hospitalization, and further endoscopic or surgical interventions.9,10 Furthermore, tension pneumothorax may also present a life-threatening complication that may lead to a shift of the mediastinum and obstruction of venous return to the heart, compromising cardiovascular circumstances. Therefore, assessing the predictors of pneumothorax other than fissure integrity, which also predicts TLVR, is of great importance.", "In this retrospective analysis, clinical and MDCT scan data of patients who underwent valve therapy were examined to determine predictors of pneumothorax following valve therapy. The protocol of this retrospective analysis was approved by the local Ethics Committee of Heidelberg (S-609/2012). The majority of the patients were treated within different prospective trials after written informed consent. 
As the data in this current analysis were retrospectively analyzed no further patient consent was required.\n Subjects and clinical parameters In this retrospective analysis, baseline clinical measures and MDCT scan data of 129 consecutive patients (mean age: 64 years, range 43–81 years, sex: 50% male) who experienced TLVR (n=83) or pneumothorax (n=46) following valve therapy between 2009 and 2013 were assessed. The mean forced expiratory volume in 1 second (FEV1) was 0.8±0.2 l (32%±8% predicted) and the mean residual volume (RV) was 5.6±1.4 l (257%±59% predicted). All the patients experienced a complete occlusion of the target lobe by endobronchial (Pulmonx, Inc., Neuchatel, Switzerland) or intrabronchial valves (Olympus Corporation, Tokyo, Japan) at the Thoraxklinik at the University of Heidelberg. Prior to valve therapy, MDCT, including software analysis of the emphysema (YACTA, “yet another CT analyser”), and 99 m-Technetium perfusion scan were performed to identify the target lobe. Sixty-one patients (47%) experienced complete occlusion of the left lower lobe, 34 (26%) of the left upper lobe, 20 (16%) of the right upper lobe, 13 (10%) of the right lower lobe, and one (1%) of the right upper and middle lobes. TLVR was defined as lung parenchymal collapse with the development of a soft-tissue dense atelectasis at a lobar or segmental level on a follow-up chest X-ray or CT scan.\nClinical parameters prior to valve therapy and 3 months following pneumothorax, including vital capacity (VC), forced expiratory volume in 1 second (FEV1), residual volume (RV), total lung capacity (TLC), and the 6-minute walk test (6-MWT), were extracted from the patients’ records. Descriptive parameters (target lobe) were also evaluated.\nIn this retrospective analysis, baseline clinical measures and MDCT scan data of 129 consecutive patients (mean age: 64 years, range 43–81 years, sex: 50% male) who experienced TLVR (n=83) or pneumothorax (n=46) following valve therapy between 2009 and 2013 were assessed. The mean forced expiratory volume in 1 second (FEV1) was 0.8±0.2 l (32%±8% predicted) and the mean residual volume (RV) was 5.6±1.4 l (257%±59% predicted). All the patients experienced a complete occlusion of the target lobe by endobronchial (Pulmonx, Inc., Neuchatel, Switzerland) or intrabronchial valves (Olympus Corporation, Tokyo, Japan) at the Thoraxklinik at the University of Heidelberg. Prior to valve therapy, MDCT, including software analysis of the emphysema (YACTA, “yet another CT analyser”), and 99 m-Technetium perfusion scan were performed to identify the target lobe. Sixty-one patients (47%) experienced complete occlusion of the left lower lobe, 34 (26%) of the left upper lobe, 20 (16%) of the right upper lobe, 13 (10%) of the right lower lobe, and one (1%) of the right upper and middle lobes. TLVR was defined as lung parenchymal collapse with the development of a soft-tissue dense atelectasis at a lobar or segmental level on a follow-up chest X-ray or CT scan.\nClinical parameters prior to valve therapy and 3 months following pneumothorax, including vital capacity (VC), forced expiratory volume in 1 second (FEV1), residual volume (RV), total lung capacity (TLC), and the 6-minute walk test (6-MWT), were extracted from the patients’ records. 
Descriptive parameters (target lobe) were also evaluated.\n MDCT parameters The qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14\nThe qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14\n Statistical analysis Univariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). The paired Student’s t-test was used to compare within-group differences in qualitative parameters. P-values <0.05 were considered to be significant. Receiver operating characteristic curves were created using the final model of predictors.\nUnivariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). 
The paired Student’s t-test was used to compare within-group differences in qualitative parameters. P-values <0.05 were considered to be significant. Receiver operating characteristic curves were created using the final model of predictors.", "In this retrospective analysis, baseline clinical measures and MDCT scan data of 129 consecutive patients (mean age: 64 years, range 43–81 years, sex: 50% male) who experienced TLVR (n=83) or pneumothorax (n=46) following valve therapy between 2009 and 2013 were assessed. The mean forced expiratory volume in 1 second (FEV1) was 0.8±0.2 l (32%±8% predicted) and the mean residual volume (RV) was 5.6±1.4 l (257%±59% predicted). All the patients experienced a complete occlusion of the target lobe by endobronchial (Pulmonx, Inc., Neuchatel, Switzerland) or intrabronchial valves (Olympus Corporation, Tokyo, Japan) at the Thoraxklinik at the University of Heidelberg. Prior to valve therapy, MDCT, including software analysis of the emphysema (YACTA, “yet another CT analyser”), and 99 m-Technetium perfusion scan were performed to identify the target lobe. Sixty-one patients (47%) experienced complete occlusion of the left lower lobe, 34 (26%) of the left upper lobe, 20 (16%) of the right upper lobe, 13 (10%) of the right lower lobe, and one (1%) of the right upper and middle lobes. TLVR was defined as lung parenchymal collapse with the development of a soft-tissue dense atelectasis at a lobar or segmental level on a follow-up chest X-ray or CT scan.\nClinical parameters prior to valve therapy and 3 months following pneumothorax, including vital capacity (VC), forced expiratory volume in 1 second (FEV1), residual volume (RV), total lung capacity (TLC), and the 6-minute walk test (6-MWT), were extracted from the patients’ records. Descriptive parameters (target lobe) were also evaluated.", "The qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14", "Univariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). The paired Student’s t-test was used to compare within-group differences in qualitative parameters. 
P-values <0.05 were considered to be significant. Receiver operating characteristic curves were created using the final model of predictors.", "The patient characteristics of the 83 subjects with a radiologically confirmed TLVR (Figure 1) and those of 46 patients who developed pneumothorax (Figure 2) are presented in Table 1. On the baseline MDCT scans, fissure integrity varied from 80% to 100%. The fissure integrity was between 90% and 100% in 121 patients and between 80% and 90% in eight patients. Overall, the median of fissure completeness was 100% (interquartile range 98.3 to 100).\nThirty-eight of the 46 patients (83%) who developed pneumothorax underwent chest tube insertion. In 18 patients (39%), valve explantation was necessary. Eight patients (17%) underwent additional video-assisted thoracoscopy to seal the fistula. Three months after recovering from pneumothorax, patients experienced slight, but not clinically relevant improvements in lung function parameters, exercise capacity and dyspnoe score (Table 2).\nAfter the multivariable regression analysis (Table 3), the LAV% of the untreated ipsilateral lobe, the volume of the untreated ipsilateral lobe to volume of the hemithorax, the predominant type of emphysema, the presence of pleural adhesions, and the residual volume were variables that were independently associated with pneumothorax, adjusting for the target lobe; the 6-MWT and the fissure integrity were borderline non-significant. Thereby, residual volume (odds ratio [OR] =1.58, P=0.012) and untreated ipsilateral lobe LAV% (OR =1.08, P=0.001) were associated with an increased risk of pneumothorax. In contrast, the presence of pleural adhesions (OR =0.33, P=0.012), the presence of panlobular emphysema (OR =0.26, P=0.018), and the volume of the untreated ipsilateral lobe to the volume of the hemithorax (OR =0.93, P=0.017) were associated with a decreased risk of pneumothorax. Finally, the 6-MWT (OR =1.05, P=0.077) and the fissure integrity (OR =1.16, P=0.075) tended to be associated with an increased risk of pneumothorax. The area under the curve using the full model with eight variables was 0.84. This indicates that it is possible to predict whether a patient would experience a pneumothorax 84% of the time (Figure 3).", "Endoscopic valve therapy has evolved as an effective therapy for patients with severe COPD and emphysema. It has been developed as a minimally invasive technique to achieve lung volume reduction with less morbidity and mortality than has been reported in lung volume reduction surgery.15 However, valve therapy is also associated with potential complications. Over the past several years, the rate of pneumothorax following valve insertion particularly increased due to modified patient selection. A complete fissure is not only a predictor of excellent outcome but also of pneumothorax. Pneumothorax occurring in 18%–25% in patients undergoing valve insertion is actually the most common complication following valve therapy.4–7\nTo assess the risk of pneumothorax, we performed this retrospective analysis to determine the predictors of pneumothorax following valve insertion. As patients with TLVR and pneumothorax fulfill the criterion of complete interlobar fissure that is regarded as a prerequisite indication for valve therapy, we focused on both of these patient cohorts; the distinction between both these groups is crucial. 
As a result, the median of fissure integrity was 100%, which explains why fissure integrity was not found to be a statistically significant predictor of pneumothorax.\nIn this analysis, the LAV% of the untreated ipsilateral lobe, the volume of the untreated ipsilateral lobe to the volume of the hemithorax, the emphysema type, and pleural adhesions were significant CT predictors of pneumothorax. The higher the emphysematous destruction of the untreated ipsilateral lobe, the higher the risk of pneumothorax. The important role of the untreated ipsilateral lobe can be explained by the development of pneumothorax following valve implantation. It is hypothesized that pneumothorax occurs by parenchymal rupture in the adjacent untreated lung lobe in cases of rapid TLVR. In contrast to expectations, panlobular emphysema was found to be a protective factor for the development of pneumothorax. Initially it was assumed that panlobular emphysema distribution would increase the risk, but exactly the opposite was actually found. The blebs and bullae that were assumed to increase the risk were also found to play no significant role in the development of pneumothorax. The finding of the protective character of a panlobular emphysema can not be sufficiently explained; possibly, the compliance of the lung tissue in the different emphysema types varies and may be associated with a different risk of pneumothorax. In addition, pleural adhesions were surprisingly found to be protective factors against pneumothorax. As a clinical predictor, the residual volume was a variable that was independently associated with pneumothorax, adjusting for the target lobe. The higher the residual volume, the higher the risk of pneumothorax. A high residual volume may be associated with a greater volume shift, increasing the risk of parenchymal rupture. In that model, all of these parameters would predict whether a patient would experience a pneumothorax 84% of the time.\nOne retrospective trial demonstrated that most patients experienced an improvement in clinical outcomes due to the lung volume shift despite pneumothorax.8 This finding was confirmed by another trial demonstrating a statistically beneficial outcome in patients who developed pneumothorax following valve therapy.16 However, only patients in whom lobar atelectasis could be observed after recovering from pneumothorax will experience an outstanding improvement in lung function parameters, while patients without a significant TLVR following pneumothorax will experience slight but not relevant clinical improvement. In this analysis, no relevant clinical improvement was achieved after the development of a postinterventional pneumothorax, but pneumothorax following valve therapy did not impair the clinical outcome. Thus, there are some controversial data related to outcomes following pneumothorax in valve patients, but most studies suggest that patients who develop TLVR despite pneumothorax will benefit. However, pneumothorax is associated with prolonged hospitalization and immobilization, leading to muscle wasting and often requiring further intervention, such as chest tube insertion or surgical intervention.\nIn one trial, bed rest and antitussive therapy successfully reduced the risk of pneumothorax following valve insertion.5 Forty patients who received modified medical care, including 48 hours of strict bed rest and 16 mg codeine if needed for cough up to three times a day, were compared to 32 patients with standard medical care following valve therapy. 
While eight patients from the standard medical care group developed pneumothorax during hospitalization, only two from the modified medical care group experienced pneumothorax (P=0.02). However, the number of treated patients in this study population was too low to allow a general statement for or against this modified postinterventional strategy.\nAssessing the risk of pneumothorax helps to find the balance between efficacy and safety. Therefore, finding the predictors for pneumothorax is of great importance. The risk of pneumothorax of 18%–25% is accepted because the development of pneumothorax does not impair clinical patient outcomes. Nevertheless, the evaluation of valid prevention measures is crucial to minimize the risk of pneumothorax that presents a severe adverse event in patients with advanced emphysema.\nIn summary, pneumothorax is a commonly anticipated complication of valve therapy in patients with severe emphysema. Therefore, close monitoring following intervention is necessary. Patients with a high LAV% of the untreated ipsilateral lobe and a high residual volume are at particularly great risk of pneumothorax. A high volume of the untreated ipsilateral lobe compared to volume of the hemithorax, panlobular emphysema and pleural adhesions, however, were found to be protective. A thorough evaluation of the risks is essential to determine and discuss the therapy and risk–benefit profiles with the patient." ]
[ "intro", "methods", "subjects", null, "methods", "results", "discussion" ]
[ "endoscopic lung volume reduction", "COPD", "emphysema", "pneumothorax", "valve therapy" ]
Introduction: Endoscopic valve therapy has evolved as a new therapeutic modality in patients with advanced chronic obstructive pulmonary disease (COPD) and emphysema. This technique involves the implantation of one-way valves in the emphysematous lung lobe. By allowing the air to exit during expiration, the valves lead to target lobe volume reduction (TLVR), whereby a complete lobar atelectasis represents the optimal result. The first randomized controlled trials (RCTs), known as VENT (“Endobronchial Valves for Emphysema Palliation Trial”) and Euro-VENT, demonstrated encouraging results concerning the safety and effectiveness of this procedure.1,2 Adverse events, including COPD exacerbations, pneumonia, mild hemoptysis, hypercapnia, and pneumothorax, were observed, but the rate of serious complications did not differ between the treatment and control groups. In these RCTs, the pneumothorax rate was 4.2% and 4.5% at 90 days following the intervention. In VENT and Euro-VENT, complete interlobar fissures in the preinterventional multidetector computed tomography (MDCT), which is a surrogate for low interlobar collateral ventilation, and lobar occlusion were found to be predictors of a meaningful clinical outcome following valve placement. Therefore, only a complete occlusion of the target lobe in patients with low collateral ventilation was recommended; this was reevaluated in two recently published RCTs, the BeLieVeR-HIFI study and STELVIO.3,4 These trials confirmed the efficacy of valve therapy but revealed an increased pneumothorax rate of 8%–18%. It is likely that a parenchymal rupture of the ipsilateral untreated lobe due to rapid expansion in the case of volume reduction of the treated lobe is the reason for the pneumothorax. In further trials, the authors reported even higher pneumothorax rates of 25%, 23%, and 20%.5–7 Thus, the optimized patient selection is not only associated with improved outcomes following valve placement but also implies a higher risk of pneumothorax. Completeness of the fissure is particularly considered to support optimized outcomes as well as the advent of pneumothorax. As pneumothorax seems to occur only in patients with significant volume shift, it is assumed that they may nevertheless experience good clinical outcomes in case of persistent TLVR after recovering from pneumothorax.8 However, pneumothorax presents a severe complication that frequently requires chest tube insertion and is associated with immobilization, prolonged hospitalization, and further endoscopic or surgical interventions.9,10 Furthermore, tension pneumothorax may also present a life-threatening complication that may lead to a shift of the mediastinum and obstruction of venous return to the heart, compromising cardiovascular circumstances. Therefore, assessing the predictors of pneumothorax other than fissure integrity, which also predicts TLVR, is of great importance. Methods: In this retrospective analysis, clinical and MDCT scan data of patients who underwent valve therapy were examined to determine predictors of pneumothorax following valve therapy. The protocol of this retrospective analysis was approved by the local Ethics Committee of Heidelberg (S-609/2012). The majority of the patients were treated within different prospective trials after written informed consent. As the data in this current analysis were retrospectively analyzed no further patient consent was required. 
Subjects and clinical parameters In this retrospective analysis, baseline clinical measures and MDCT scan data of 129 consecutive patients (mean age: 64 years, range 43–81 years, sex: 50% male) who experienced TLVR (n=83) or pneumothorax (n=46) following valve therapy between 2009 and 2013 were assessed. The mean forced expiratory volume in 1 second (FEV1) was 0.8±0.2 l (32%±8% predicted) and the mean residual volume (RV) was 5.6±1.4 l (257%±59% predicted). All the patients experienced a complete occlusion of the target lobe by endobronchial (Pulmonx, Inc., Neuchatel, Switzerland) or intrabronchial valves (Olympus Corporation, Tokyo, Japan) at the Thoraxklinik at the University of Heidelberg. Prior to valve therapy, MDCT, including software analysis of the emphysema (YACTA, “yet another CT analyser”), and 99 m-Technetium perfusion scan were performed to identify the target lobe. Sixty-one patients (47%) experienced complete occlusion of the left lower lobe, 34 (26%) of the left upper lobe, 20 (16%) of the right upper lobe, 13 (10%) of the right lower lobe, and one (1%) of the right upper and middle lobes. TLVR was defined as lung parenchymal collapse with the development of a soft-tissue dense atelectasis at a lobar or segmental level on a follow-up chest X-ray or CT scan. Clinical parameters prior to valve therapy and 3 months following pneumothorax, including vital capacity (VC), forced expiratory volume in 1 second (FEV1), residual volume (RV), total lung capacity (TLC), and the 6-minute walk test (6-MWT), were extracted from the patients’ records. Descriptive parameters (target lobe) were also evaluated. In this retrospective analysis, baseline clinical measures and MDCT scan data of 129 consecutive patients (mean age: 64 years, range 43–81 years, sex: 50% male) who experienced TLVR (n=83) or pneumothorax (n=46) following valve therapy between 2009 and 2013 were assessed. The mean forced expiratory volume in 1 second (FEV1) was 0.8±0.2 l (32%±8% predicted) and the mean residual volume (RV) was 5.6±1.4 l (257%±59% predicted). All the patients experienced a complete occlusion of the target lobe by endobronchial (Pulmonx, Inc., Neuchatel, Switzerland) or intrabronchial valves (Olympus Corporation, Tokyo, Japan) at the Thoraxklinik at the University of Heidelberg. Prior to valve therapy, MDCT, including software analysis of the emphysema (YACTA, “yet another CT analyser”), and 99 m-Technetium perfusion scan were performed to identify the target lobe. Sixty-one patients (47%) experienced complete occlusion of the left lower lobe, 34 (26%) of the left upper lobe, 20 (16%) of the right upper lobe, 13 (10%) of the right lower lobe, and one (1%) of the right upper and middle lobes. TLVR was defined as lung parenchymal collapse with the development of a soft-tissue dense atelectasis at a lobar or segmental level on a follow-up chest X-ray or CT scan. Clinical parameters prior to valve therapy and 3 months following pneumothorax, including vital capacity (VC), forced expiratory volume in 1 second (FEV1), residual volume (RV), total lung capacity (TLC), and the 6-minute walk test (6-MWT), were extracted from the patients’ records. Descriptive parameters (target lobe) were also evaluated. 
MDCT parameters The qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14 The qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14 Statistical analysis Univariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). The paired Student’s t-test was used to compare within-group differences in qualitative parameters. P-values <0.05 were considered to be significant. Receiver operating characteristic curves were created using the final model of predictors. Univariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). 
The paired Student’s t-test was used to compare within-group differences in qualitative parameters. P-values <0.05 were considered to be significant. Receiver operating characteristic curves were created using the final model of predictors. Subjects and clinical parameters: In this retrospective analysis, baseline clinical measures and MDCT scan data of 129 consecutive patients (mean age: 64 years, range 43–81 years, sex: 50% male) who experienced TLVR (n=83) or pneumothorax (n=46) following valve therapy between 2009 and 2013 were assessed. The mean forced expiratory volume in 1 second (FEV1) was 0.8±0.2 l (32%±8% predicted) and the mean residual volume (RV) was 5.6±1.4 l (257%±59% predicted). All the patients experienced a complete occlusion of the target lobe by endobronchial (Pulmonx, Inc., Neuchatel, Switzerland) or intrabronchial valves (Olympus Corporation, Tokyo, Japan) at the Thoraxklinik at the University of Heidelberg. Prior to valve therapy, MDCT, including software analysis of the emphysema (YACTA, “yet another CT analyser”), and 99 m-Technetium perfusion scan were performed to identify the target lobe. Sixty-one patients (47%) experienced complete occlusion of the left lower lobe, 34 (26%) of the left upper lobe, 20 (16%) of the right upper lobe, 13 (10%) of the right lower lobe, and one (1%) of the right upper and middle lobes. TLVR was defined as lung parenchymal collapse with the development of a soft-tissue dense atelectasis at a lobar or segmental level on a follow-up chest X-ray or CT scan. Clinical parameters prior to valve therapy and 3 months following pneumothorax, including vital capacity (VC), forced expiratory volume in 1 second (FEV1), residual volume (RV), total lung capacity (TLC), and the 6-minute walk test (6-MWT), were extracted from the patients’ records. Descriptive parameters (target lobe) were also evaluated. MDCT parameters: The qualitative and quantitative MDCT parameters were assessed on preinterventional baseline MDCT scans (64-slice Somatom Definition AS64, Siemens Medical Solution, Forchheim, Germany); the CT protocol varied with reconstruction slice thickness of 1.0 (filter I40f), and doses of 100 kV, 117 mAseffective or 120 kV, 70 mAseffective. The qualitative MDCT parameters were visually assessed by one experienced chest radiologist. The quantitative CT measurements were obtained by using syngo.via software (Siemens Medical Solution), which provides automated three-dimensional quantifications for the assessment of emphysema. The mean lung density of the lobes, lobar volumes, low attenuation volumes (LAVs) and the lobar collapsibility were obtained. The LAV (lung volumes with attenuation values <950 Hounsfield units) and LAV% (the ratio of LAV to the volume of the region of interest) that correlate with lung function parameters, visual CT emphysema scores, and the COPD assessment test were used to describe the extent and severity of emphysema.11–14 Statistical analysis: Univariate logistic regression models were fitted to assess the association between each of the 41 variables and pneumothorax following valve therapy. We also assessed whether each continuous variable had a linear or quadratic association with pneumothorax in the logit scale. Stepwise forward regression model was fitted to assess the association between possible predictor variables and pneumothorax. Variables with a P-value <0.5 were included in the stepwise forward regression model in the most appropriate form (linear or quadratic). 
The paired Student’s t-test was used to compare within-group differences in qualitative parameters. P-values <0.05 were considered to be significant. Receiver operating characteristic curves were created using the final model of predictors. Results: The patient characteristics of the 83 subjects with a radiologically confirmed TLVR (Figure 1) and those of 46 patients who developed pneumothorax (Figure 2) are presented in Table 1. On the baseline MDCT scans, fissure integrity varied from 80% to 100%. The fissure integrity was between 90% and 100% in 121 patients and between 80% and 90% in eight patients. Overall, the median of fissure completeness was 100% (interquartile range 98.3 to 100). Thirty-eight of the 46 patients (83%) who developed pneumothorax underwent chest tube insertion. In 18 patients (39%), valve explantation was necessary. Eight patients (17%) underwent additional video-assisted thoracoscopy to seal the fistula. Three months after recovering from pneumothorax, patients experienced slight, but not clinically relevant improvements in lung function parameters, exercise capacity and dyspnoe score (Table 2). After the multivariable regression analysis (Table 3), the LAV% of the untreated ipsilateral lobe, the volume of the untreated ipsilateral lobe to volume of the hemithorax, the predominant type of emphysema, the presence of pleural adhesions, and the residual volume were variables that were independently associated with pneumothorax, adjusting for the target lobe; the 6-MWT and the fissure integrity were borderline non-significant. Thereby, residual volume (odds ratio [OR] =1.58, P=0.012) and untreated ipsilateral lobe LAV% (OR =1.08, P=0.001) were associated with an increased risk of pneumothorax. In contrast, the presence of pleural adhesions (OR =0.33, P=0.012), the presence of panlobular emphysema (OR =0.26, P=0.018), and the volume of the untreated ipsilateral lobe to the volume of the hemithorax (OR =0.93, P=0.017) were associated with a decreased risk of pneumothorax. Finally, the 6-MWT (OR =1.05, P=0.077) and the fissure integrity (OR =1.16, P=0.075) tended to be associated with an increased risk of pneumothorax. The area under the curve using the full model with eight variables was 0.84. This indicates that it is possible to predict whether a patient would experience a pneumothorax 84% of the time (Figure 3). Discussion: Endoscopic valve therapy has evolved as an effective therapy for patients with severe COPD and emphysema. It has been developed as a minimally invasive technique to achieve lung volume reduction with less morbidity and mortality than has been reported in lung volume reduction surgery.15 However, valve therapy is also associated with potential complications. Over the past several years, the rate of pneumothorax following valve insertion particularly increased due to modified patient selection. A complete fissure is not only a predictor of excellent outcome but also of pneumothorax. Pneumothorax occurring in 18%–25% in patients undergoing valve insertion is actually the most common complication following valve therapy.4–7 To assess the risk of pneumothorax, we performed this retrospective analysis to determine the predictors of pneumothorax following valve insertion. As patients with TLVR and pneumothorax fulfill the criterion of complete interlobar fissure that is regarded as a prerequisite indication for valve therapy, we focused on both of these patient cohorts; the distinction between both these groups is crucial. 
As a result, the median of fissure integrity was 100%, which explains why fissure integrity was not found to be a statistically significant predictor of pneumothorax. In this analysis, the LAV% of the untreated ipsilateral lobe, the volume of the untreated ipsilateral lobe to the volume of the hemithorax, the emphysema type, and pleural adhesions were significant CT predictors of pneumothorax. The higher the emphysematous destruction of the untreated ipsilateral lobe, the higher the risk of pneumothorax. The important role of the untreated ipsilateral lobe can be explained by the development of pneumothorax following valve implantation. It is hypothesized that pneumothorax occurs by parenchymal rupture in the adjacent untreated lung lobe in cases of rapid TLVR. In contrast to expectations, panlobular emphysema was found to be a protective factor for the development of pneumothorax. Initially it was assumed that panlobular emphysema distribution would increase the risk, but exactly the opposite was actually found. The blebs and bullae that were assumed to increase the risk were also found to play no significant role in the development of pneumothorax. The finding of the protective character of a panlobular emphysema can not be sufficiently explained; possibly, the compliance of the lung tissue in the different emphysema types varies and may be associated with a different risk of pneumothorax. In addition, pleural adhesions were surprisingly found to be protective factors against pneumothorax. As a clinical predictor, the residual volume was a variable that was independently associated with pneumothorax, adjusting for the target lobe. The higher the residual volume, the higher the risk of pneumothorax. A high residual volume may be associated with a greater volume shift, increasing the risk of parenchymal rupture. In that model, all of these parameters would predict whether a patient would experience a pneumothorax 84% of the time. One retrospective trial demonstrated that most patients experienced an improvement in clinical outcomes due to the lung volume shift despite pneumothorax.8 This finding was confirmed by another trial demonstrating a statistically beneficial outcome in patients who developed pneumothorax following valve therapy.16 However, only patients in whom lobar atelectasis could be observed after recovering from pneumothorax will experience an outstanding improvement in lung function parameters, while patients without a significant TLVR following pneumothorax will experience slight but not relevant clinical improvement. In this analysis, no relevant clinical improvement was achieved after the development of a postinterventional pneumothorax, but pneumothorax following valve therapy did not impair the clinical outcome. Thus, there are some controversial data related to outcomes following pneumothorax in valve patients, but most studies suggest that patients who develop TLVR despite pneumothorax will benefit. However, pneumothorax is associated with prolonged hospitalization and immobilization, leading to muscle wasting and often requiring further intervention, such as chest tube insertion or surgical intervention. In one trial, bed rest and antitussive therapy successfully reduced the risk of pneumothorax following valve insertion.5 Forty patients who received modified medical care, including 48 hours of strict bed rest and 16 mg codeine if needed for cough up to three times a day, were compared to 32 patients with standard medical care following valve therapy. 
While eight patients from the standard medical care group developed pneumothorax during hospitalization, only two from the modified medical care group experienced pneumothorax (P=0.02). However, the number of treated patients in this study population was too low to allow a general statement for or against this modified postinterventional strategy. Assessing the risk of pneumothorax helps to find the balance between efficacy and safety. Therefore, finding the predictors for pneumothorax is of great importance. The risk of pneumothorax of 18%–25% is accepted because the development of pneumothorax does not impair clinical patient outcomes. Nevertheless, the evaluation of valid prevention measures is crucial to minimize the risk of pneumothorax that presents a severe adverse event in patients with advanced emphysema. In summary, pneumothorax is a commonly anticipated complication of valve therapy in patients with severe emphysema. Therefore, close monitoring following intervention is necessary. Patients with a high LAV% of the untreated ipsilateral lobe and a high residual volume are at particularly great risk of pneumothorax. A high volume of the untreated ipsilateral lobe compared to volume of the hemithorax, panlobular emphysema and pleural adhesions, however, were found to be protective. A thorough evaluation of the risks is essential to determine and discuss the therapy and risk–benefit profiles with the patient.
Background: Endoscopic valve implantation is an effective treatment for patients with advanced emphysema. Despite the minimally invasive procedure, valve placement is associated with risks, the most common of which is pneumothorax. This study was designed to identify predictors of pneumothorax following endoscopic valve implantation. Methods: Preinterventional clinical measures (vital capacity, forced expiratory volume in 1 second, residual volume, total lung capacity, 6-minute walk test), qualitative computed tomography (CT) parameters (fissure integrity, blebs/bullae, subpleural nodules, pleural adhesions, partial atelectasis, fibrotic bands, emphysema type) and quantitative CT parameters (volume and low attenuation volume of the target lobe and the ipsilateral untreated lobe, target air trapping, ipsilateral lobe volume/hemithorax volume, collapsibility of the target lobe and the ipsilateral untreated lobe) were retrospectively evaluated in patients who underwent endoscopic valve placement (n=129). Regression analysis was performed to compare those who developed pneumothorax following valve therapy (n=46) with those who developed target lobe volume reduction without pneumothorax (n=83). Results: Low attenuation volume% of the ipsilateral untreated lobe (odds ratio [OR] =1.08, P=0.001), ipsilateral untreated lobe volume/hemithorax volume (OR =0.93, P=0.017), emphysema type (OR =0.26, P=0.018), pleural adhesions (OR =0.33, P=0.012) and residual volume (OR =1.58, P=0.012) were found to be significant predictors of pneumothorax. Fissure integrity (OR =1.16, P=0.075) and the 6-minute walk test (OR =1.05, P=0.077) showed a trend toward an association with pneumothorax. The model including the aforementioned parameters discriminated between patients with and without pneumothorax 84% of the time (area under the curve =0.84). Conclusions: Clinical and CT parameters provide a promising tool to effectively identify patients at high risk of pneumothorax following endoscopic valve therapy.
null
null
3,979
357
[ 1408, 182 ]
7
[ "pneumothorax", "patients", "lobe", "volume", "valve", "therapy", "parameters", "emphysema", "valve therapy", "following" ]
[ "emphysema palliation trial", "emphysema summary pneumothorax", "endoscopic valve therapy", "copd emphysema technique", "valves emphysematous lung" ]
null
null
[CONTENT] endoscopic lung volume reduction | COPD | emphysema | pneumothorax | valve therapy [SUMMARY]
[CONTENT] endoscopic lung volume reduction | COPD | emphysema | pneumothorax | valve therapy [SUMMARY]
[CONTENT] endoscopic lung volume reduction | COPD | emphysema | pneumothorax | valve therapy [SUMMARY]
null
[CONTENT] endoscopic lung volume reduction | COPD | emphysema | pneumothorax | valve therapy [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Area Under Curve | Bronchoscopy | Exercise Tolerance | Female | Forced Expiratory Volume | Humans | Linear Models | Logistic Models | Lung | Male | Middle Aged | Multidetector Computed Tomography | Odds Ratio | Pneumothorax | Predictive Value of Tests | Pulmonary Emphysema | ROC Curve | Retrospective Studies | Risk Factors | Severity of Illness Index | Treatment Outcome | Vital Capacity | Walk Test [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Area Under Curve | Bronchoscopy | Exercise Tolerance | Female | Forced Expiratory Volume | Humans | Linear Models | Logistic Models | Lung | Male | Middle Aged | Multidetector Computed Tomography | Odds Ratio | Pneumothorax | Predictive Value of Tests | Pulmonary Emphysema | ROC Curve | Retrospective Studies | Risk Factors | Severity of Illness Index | Treatment Outcome | Vital Capacity | Walk Test [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Area Under Curve | Bronchoscopy | Exercise Tolerance | Female | Forced Expiratory Volume | Humans | Linear Models | Logistic Models | Lung | Male | Middle Aged | Multidetector Computed Tomography | Odds Ratio | Pneumothorax | Predictive Value of Tests | Pulmonary Emphysema | ROC Curve | Retrospective Studies | Risk Factors | Severity of Illness Index | Treatment Outcome | Vital Capacity | Walk Test [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Area Under Curve | Bronchoscopy | Exercise Tolerance | Female | Forced Expiratory Volume | Humans | Linear Models | Logistic Models | Lung | Male | Middle Aged | Multidetector Computed Tomography | Odds Ratio | Pneumothorax | Predictive Value of Tests | Pulmonary Emphysema | ROC Curve | Retrospective Studies | Risk Factors | Severity of Illness Index | Treatment Outcome | Vital Capacity | Walk Test [SUMMARY]
null
[CONTENT] emphysema palliation trial | emphysema summary pneumothorax | endoscopic valve therapy | copd emphysema technique | valves emphysematous lung [SUMMARY]
[CONTENT] emphysema palliation trial | emphysema summary pneumothorax | endoscopic valve therapy | copd emphysema technique | valves emphysematous lung [SUMMARY]
[CONTENT] emphysema palliation trial | emphysema summary pneumothorax | endoscopic valve therapy | copd emphysema technique | valves emphysematous lung [SUMMARY]
null
[CONTENT] emphysema palliation trial | emphysema summary pneumothorax | endoscopic valve therapy | copd emphysema technique | valves emphysematous lung [SUMMARY]
null
[CONTENT] pneumothorax | patients | lobe | volume | valve | therapy | parameters | emphysema | valve therapy | following [SUMMARY]
[CONTENT] pneumothorax | patients | lobe | volume | valve | therapy | parameters | emphysema | valve therapy | following [SUMMARY]
[CONTENT] pneumothorax | patients | lobe | volume | valve | therapy | parameters | emphysema | valve therapy | following [SUMMARY]
null
[CONTENT] pneumothorax | patients | lobe | volume | valve | therapy | parameters | emphysema | valve therapy | following [SUMMARY]
null
[CONTENT] pneumothorax | vent | rcts | lobe | outcomes | trials | rate | valves | lead | collateral [SUMMARY]
[CONTENT] association | regression | variables | model | forward regression | forward | linear | linear quadratic | regression model | stepwise [SUMMARY]
[CONTENT] pneumothorax | patients | fissure | ipsilateral lobe | untreated ipsilateral | untreated ipsilateral lobe | presence | figure | table | volume [SUMMARY]
null
[CONTENT] pneumothorax | lobe | patients | volume | valve | therapy | valve therapy | following | emphysema | parameters [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] 1 second | 6-minute | CT ||| [SUMMARY]
[CONTENT] ||| 1.08, P=0.001 | 0.93 | 0.26 | 0.33 ||| 1.16 | 6-minute | 1.05 | P=0.077 ||| 84% | 0.84 [SUMMARY]
null
[CONTENT] ||| ||| ||| 1 second | 6-minute | CT ||| ||| ||| 1.08, P=0.001 | 0.93 | 0.26 | 0.33 ||| 1.16 | 6-minute | 1.05 | P=0.077 ||| 84% | 0.84 ||| CT [SUMMARY]
null
Incidental gallbladder cancer diagnosis confers survival advantage irrespective of tumour stage and characteristics.
35664962
Incidental gallbladder cancer (IGBC) represents 50%-60% of gallbladder cancer cases. Data are conflicting on the role of IGBC diagnosis in oncological outcomes. Some studies suggest that IGBC diagnosis does not affect outcomes, while others report that overall survival (OS) is longer in these cases compared to non-incidental diagnosis (NIGBC). Furthermore, some studies reported early tumour stages and histopathologic characteristics as possible confounders, while others did not.
BACKGROUND
Retrospective analysis of all patient referrals with gallbladder cancer between 2008 and 2020 in a tertiary hepatobiliary centre. Statistical comparison of patient and tumour characteristics between the IGBC and NIGBC subgroups was performed. Survival analysis for the whole cohort and for the surgical and non-surgical subgroups was performed with the Kaplan-Meier method and the log-rank test. Risk analysis was performed with univariable and multivariable Cox regression analysis.
METHODS
The cohort included 261 patients with gallbladder cancer. Sixty-five percent of cases had NIGBC and 35% had IGBC. A total of 90 patients received surgical treatment (66% of IGBC cases and 19% of NIGBC cases). NIGBC patients had more advanced T stage and required more extensive resections than IGBC patients. OS was longer in patients with IGBC in the whole cohort (29 vs 4 mo, P < 0.001), as well as in the non-surgical (14 vs 2 mo, P < 0.001) and surgical subgroups (29 vs 16.5 mo, P = 0.001). Disease-free survival (DFS) after surgery was longer in patients with IGBC (21.5 mo vs 8.5 mo, P = 0.007). N stage and resection margin status were identified as independent predictors of OS and DFS. NIGBC diagnosis was identified as an independent predictor of OS.
RESULTS
IGBC diagnosis may confer a survival advantage independently of the pathological stage and tumour characteristics. Prospective studies are required to further investigate this, including detailed pathological analysis and molecular gene expression.
CONCLUSION
[ "Disease-Free Survival", "Gallbladder Neoplasms", "Humans", "Incidental Findings", "Neoplasm Staging", "Retrospective Studies", "Survival Analysis" ]
9150056
INTRODUCTION
Gallbladder cancer (GBC) is associated with poor prognosis even after treatment, with median overall survival (OS) ranging in the literature between 3 and 22 mo[1,2]. Incidental gallbladder cancer (IGBC), discovered on routine histological examination of gallbladder specimens after cholecystectomy, is more common than non-incidental gallbladder cancer (NIGBC) and represents 50%-60% of all cases[3-5]. The prognostic implication of incidental or non-incidental diagnosis in oncological outcomes is still a matter of debate, as is the effect of the timing of curative intent resection, which is performed as a secondary operation in IGBC. Published evidence is contradictory, with some studies suggesting that incidental diagnosis does not affect survival[5-8], while others showed longer survival with IGBC[9-11]. Earlier tumour stages in the IGBC group have been suggested as a confounding factor for any potential survival benefit[5]. In contrast, other studies identified a survival benefit in IGBC even after controlling for tumour stage and degree of differentiation[9,10]. The aim of this study was to investigate the role of IGBC diagnosis in patient OS, especially after surgical treatment with curative intent.
MATERIALS AND METHODS
This is a retrospective cohort study conducted in a single tertiary centre between January 2008 and December 2020. The sample included all patients with a histological diagnosis of GBC obtained by surgery or biopsy. The management of all patients was discussed and agreed upon in the hepatobiliary multidisciplinary team (MDT) meeting. IGBC diagnosis was established after histopathological examination of specimens following cholecystectomy for benign aetiology. This was followed by complete staging with a computerized tomography scan of the thorax, abdomen and pelvis (CT-TAP), with subsequent curative intent resection if appropriate. NIGBC diagnosis was made based on imaging and/or biopsy after MDT discussion of referred patients. All patients had staging with CT-TAP, followed by surgery if clinically appropriate. In patients with locally advanced disease, neoadjuvant chemotherapy was administered and resection was contemplated after restaging. Liver magnetic resonance imaging and positron emission tomography scans were used selectively in both groups if liver metastases or extrahepatic disease was suspected on CT. Following surgical resection, all patients were referred to oncology for assessment of adjuvant chemotherapy (AC). Data were collected and recorded for patient demographics, American Society of Anesthesiology (ASA) score, extent of surgical resection, histology, chemotherapy, recurrence and survival. The extent of surgery was defined as minor if radical cholecystectomy, gallbladder (GB) bed resection or liver segments IVb/V resection with or without bile duct resection was performed. It also included patients who only had bile duct resection. The resection was defined as major if a major hepatectomy (three or more liver segments) or multi-visceral resection was performed. Recurrence was defined as local/regional (GB bed, hilar lymph nodes), distant or both. The primary outcome of the study was the difference in OS between IGBC and NIGBC, and the secondary outcome was the difference in disease-free survival (DFS). T-test, Chi-Square, Fisher’s Exact and Mann-Whitney U tests were used as appropriate to compare variables and outcomes between the two groups, with statistical significance set at P < 0.05. Survival analysis was performed with the Kaplan-Meier method, and the log-rank test was used to compare survival curves between the study groups. Univariable and multivariable time-to-event analyses were performed using the Cox proportional hazards model to determine risk factors for OS and DFS. Variables were subjected to a univariable analysis first, and those with P < 0.2 were introduced into a multivariable model. Hazard ratios (HR) and associated 95% confidence intervals (CI) were calculated. A two-tailed P value < 0.05 was considered statistically significant. All statistical analyses were performed using the software package SPSS Statistics for Windows (version 25.0; SPSS Inc., Chicago, IL, United States).
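To make the statistical workflow described here more concrete (Kaplan-Meier estimates compared with the log-rank test, then univariable Cox screening at P < 0.2 feeding a multivariable Cox proportional hazards model), a minimal Python sketch using the lifelines library is shown below. The DataFrame, file name, column names and candidate covariates are illustrative assumptions only; the study's own analysis was run in SPSS, and this is not the authors' code.

```python
# Minimal sketch of the survival workflow described above, using the lifelines
# library. Column names and candidate covariates are illustrative assumptions;
# the study's own analysis was performed in SPSS, not with this code.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("gbc_cohort.csv")  # assumed: one row per patient
# Assumed columns: os_months, death (1 = event), igbc (1 = incidental diagnosis),
# plus numerically coded covariates (age, asa, t_stage, n_stage, margin_positive).
df["nigbc"] = (df["igbc"] == 0).astype(int)

# Kaplan-Meier estimates per diagnostic group and a log-rank comparison
igbc, nigbc = df[df["igbc"] == 1], df[df["igbc"] == 0]
kmf = KaplanMeierFitter()
for label, grp in [("IGBC", igbc), ("NIGBC", nigbc)]:
    kmf.fit(grp["os_months"], event_observed=grp["death"], label=label)
    print(label, "median OS (months):", kmf.median_survival_time_)

lr = logrank_test(igbc["os_months"], nigbc["os_months"],
                  event_observed_A=igbc["death"], event_observed_B=nigbc["death"])
print("log-rank p =", lr.p_value)

# Univariable Cox screening: covariates with P < 0.2 enter the multivariable model
candidates = ["age", "asa", "t_stage", "n_stage", "margin_positive", "nigbc"]
selected = []
for var in candidates:
    uni = CoxPHFitter().fit(df[[var, "os_months", "death"]],
                            duration_col="os_months", event_col="death")
    if uni.summary.loc[var, "p"] < 0.2:
        selected.append(var)

# Multivariable Cox proportional hazards model for overall survival
cph = CoxPHFitter().fit(df[selected + ["os_months", "death"]],
                        duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios (exp(coef)), 95% CIs and p-values
```

The screening loop mirrors the stated P < 0.2 entry criterion; in practice, categorical covariates such as T and N stage would need appropriate encoding before fitting, and the same procedure would be repeated with a DFS endpoint.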
null
null
CONCLUSION
Published evidence remains contradictory. The theory that IGBC and NIGBC are two distinct variants of the same disease remains to be proven by detailed pathologic assessment and research into cancer molecular genetics.
[ "INTRODUCTION", "RESULTS", "Cohort characteristics and management pathway", "Patient and tumour characteristics of surgical patients ", "Survival analysis", "Risk analysis", "DISCUSSION", "CONCLUSION" ]
[ "Gallbladder cancer (GBC) is associated with poor prognosis even after treatment, with median overall survival (OS) ranging in the literature between 3 and 22 mo[1,2]. Incidental gallbladder cancer (IGBC) discovered on routine histological examination of gallbladder specimens after cholecystectomy is more common than non-incidental gallbladder cancer (NIGBC) and represents 50%-60% of all cases[3-5]. The prognostic implication of incidental or non-incidental diagnosis in oncological outcomes is still a matter of debate as is the effect of the timing of curative intent resection which is performed as a secondary operation in IGBC. \nPublished evidence are contradictory with some studies suggesting that incidental diagnosis does not affect survival[5-8], while others showed longer survival with IGBC[9-11]. Earlier tumour stages in the IGBC group have been suggested as a confounding factor for any potential survival benefit[5]. On the contrary, other studies identified a survival benefit in IGBC even after controlling for tumour stage and degree of differentiation[9,10].\nThe aim of this study was to investigate the role of IGBC diagnosis in patient OS and especially after surgical treatment with curative intent.", "Cohort characteristics and management pathway The study cohort comprised of 261 patients, with 35% presenting as IGBC and 65% had non incidental presentation (NIGBC) at the time of diagnosis (Figure 1). Median age was 69 years [interquartile range (IQR) 61-77] and male to female ratio was 1:3. Eighty-one percent of NIGBC and 34% of IGBC patients did not undergo resection. For the majority of these (82%) locally advanced or metastatic disease was the main reason. Other causes included patient’s choice, poor medical status and pathological stage < 1b (where resection is not indicated) (Table 1). Reasons for not having AC after resection were patients’ choice or comorbidities and early tumour stages (CIS, T1/T2, N0) with negative resection margins.\nManagement pathways of whole cohort.\nReasons for not proceeding with oncological resection, n (%)\nData missing for one patient.\nThe study cohort comprised of 261 patients, with 35% presenting as IGBC and 65% had non incidental presentation (NIGBC) at the time of diagnosis (Figure 1). Median age was 69 years [interquartile range (IQR) 61-77] and male to female ratio was 1:3. Eighty-one percent of NIGBC and 34% of IGBC patients did not undergo resection. For the majority of these (82%) locally advanced or metastatic disease was the main reason. Other causes included patient’s choice, poor medical status and pathological stage < 1b (where resection is not indicated) (Table 1). Reasons for not having AC after resection were patients’ choice or comorbidities and early tumour stages (CIS, T1/T2, N0) with negative resection margins.\nManagement pathways of whole cohort.\nReasons for not proceeding with oncological resection, n (%)\nData missing for one patient.\nPatient and tumour characteristics of surgical patients A total of 90 patients had curative intent resection. The type of resection depended on IGBC vs NIGBC diagnosis, pre-operative staging, intra-operative findings, cystic duct margin status and the T stage of IGBC patients. Hepatoduodenal (portal) lymphadenectomy was performed in all patients. For IGBC cases, the median time from the time of the index cholecystectomy to the curative resection was 13.5 wk (IQR: 11-16 wk).\nPatient and tumour characteristics are shown on Table 2. 
Patients with NIGBC had more advanced T stage and underwent more extensive resections compared to those with IGBC. Similarly, N stage approached but did not reach significance. The types of procedures performed are shown on Table 3.\nPatient and tumour characteristics for the surgical group, n (%)\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nTypes of surgeries performed in surgically treated patients of the study group, n (%)\nPortal lymphadenectomy performed in all cases.\nA total of 90 patients had curative intent resection. The type of resection depended on IGBC vs NIGBC diagnosis, pre-operative staging, intra-operative findings, cystic duct margin status and the T stage of IGBC patients. Hepatoduodenal (portal) lymphadenectomy was performed in all patients. For IGBC cases, the median time from the time of the index cholecystectomy to the curative resection was 13.5 wk (IQR: 11-16 wk).\nPatient and tumour characteristics are shown on Table 2. Patients with NIGBC had more advanced T stage and underwent more extensive resections compared to those with IGBC. Similarly, N stage approached but did not reach significance. The types of procedures performed are shown on Table 3.\nPatient and tumour characteristics for the surgical group, n (%)\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nTypes of surgeries performed in surgically treated patients of the study group, n (%)\nPortal lymphadenectomy performed in all cases.\nSurvival analysis For the whole cohort, median OS was longer in patients with IGBC diagnosis, (29 vs 4 mo, P < 0.001). OS of IGBC patients was significantly longer compared to NIGBC patients in the non-surgical group (14 vs 2 mo, P < 0.001), as well as the surgical group (29 vs 16.5 mo, P = 0.001). Similarly, DFS was significantly longer in patients with IGBC (21.5 mo vs 8.5 mo, P = 0.007) (Figure 2). \n\nKaplan Meier survival curves for incidental gallbladder cancer vs non-incidental gallbladder cancer. A: Overall survival (OS) of all cohort; B: Overall survival (OS) of non-surgical treatment; C: OS for surgical treatment; D: Disease-free survival for surgical treatment. IGBC: Incidental gallbladder cancer; NIGBC: Non-incidental gallbladder cancer.\nFor the whole cohort, median OS was longer in patients with IGBC diagnosis, (29 vs 4 mo, P < 0.001). OS of IGBC patients was significantly longer compared to NIGBC patients in the non-surgical group (14 vs 2 mo, P < 0.001), as well as the surgical group (29 vs 16.5 mo, P = 0.001). Similarly, DFS was significantly longer in patients with IGBC (21.5 mo vs 8.5 mo, P = 0.007) (Figure 2). \n\nKaplan Meier survival curves for incidental gallbladder cancer vs non-incidental gallbladder cancer. A: Overall survival (OS) of all cohort; B: Overall survival (OS) of non-surgical treatment; C: OS for surgical treatment; D: Disease-free survival for surgical treatment. IGBC: Incidental gallbladder cancer; NIGBC: Non-incidental gallbladder cancer.\nRisk analysis Univariable Cox regression analysis (Table 4) identified that age, ASA, T stage, N stage, resection margin status and NIGBC diagnosis were significantly related to OS (P < 0.05). 
On multivariable analysis, N stage, resection margin status and NIGBC diagnosis were identified as independent predictors of survival, increasing the risk of mortality by 3, 5 and 2 times respectively (Table 4).\nCox proportional hazard analysis for overall survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nSimilarly, T stage, N stage, resection margin status and NIGBC diagnosis were identified to be associated with worse DFS on univariable analysis, however only N stage and resection margin status were found to be independent predictors on multivariable analysis (Table 5).\nCox proportional hazard analysis for disease free survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nIn an effort to statistically investigate if NIGBC diagnosis acted as a confounding factor for the T stage of the disease despite the use of time-dependent regression analysis, models without this parameter were produced. Again, only N stage and margin status were identified as independent prognostic factors for OS and DFS, while T stage was not (Tables 4 and 5). \nUnivariable Cox regression analysis (Table 4) identified that age, ASA, T stage, N stage, resection margin status and NIGBC diagnosis were significantly related to OS (P < 0.05). On multivariable analysis, N stage, resection margin status and NIGBC diagnosis were identified as independent predictors of survival, increasing the risk of mortality by 3, 5 and 2 times respectively (Table 4).\nCox proportional hazard analysis for overall survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nSimilarly, T stage, N stage, resection margin status and NIGBC diagnosis were identified to be associated with worse DFS on univariable analysis, however only N stage and resection margin status were found to be independent predictors on multivariable analysis (Table 5).\nCox proportional hazard analysis for disease free survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nIn an effort to statistically investigate if NIGBC diagnosis acted as a confounding factor for the T stage of the disease despite the use of time-dependent regression analysis, models without this parameter were produced. Again, only N stage and margin status were identified as independent prognostic factors for OS and DFS, while T stage was not (Tables 4 and 5). ", "The study cohort comprised of 261 patients, with 35% presenting as IGBC and 65% had non incidental presentation (NIGBC) at the time of diagnosis (Figure 1). Median age was 69 years [interquartile range (IQR) 61-77] and male to female ratio was 1:3. Eighty-one percent of NIGBC and 34% of IGBC patients did not undergo resection. For the majority of these (82%) locally advanced or metastatic disease was the main reason. Other causes included patient’s choice, poor medical status and pathological stage < 1b (where resection is not indicated) (Table 1). Reasons for not having AC after resection were patients’ choice or comorbidities and early tumour stages (CIS, T1/T2, N0) with negative resection margins.\nManagement pathways of whole cohort.\nReasons for not proceeding with oncological resection, n (%)\nData missing for one patient.", "A total of 90 patients had curative intent resection. 
The type of resection depended on IGBC vs NIGBC diagnosis, pre-operative staging, intra-operative findings, cystic duct margin status and the T stage of IGBC patients. Hepatoduodenal (portal) lymphadenectomy was performed in all patients. For IGBC cases, the median time from the time of the index cholecystectomy to the curative resection was 13.5 wk (IQR: 11-16 wk).\nPatient and tumour characteristics are shown on Table 2. Patients with NIGBC had more advanced T stage and underwent more extensive resections compared to those with IGBC. Similarly, N stage approached but did not reach significance. The types of procedures performed are shown on Table 3.\nPatient and tumour characteristics for the surgical group, n (%)\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nTypes of surgeries performed in surgically treated patients of the study group, n (%)\nPortal lymphadenectomy performed in all cases.", "For the whole cohort, median OS was longer in patients with IGBC diagnosis, (29 vs 4 mo, P < 0.001). OS of IGBC patients was significantly longer compared to NIGBC patients in the non-surgical group (14 vs 2 mo, P < 0.001), as well as the surgical group (29 vs 16.5 mo, P = 0.001). Similarly, DFS was significantly longer in patients with IGBC (21.5 mo vs 8.5 mo, P = 0.007) (Figure 2). \n\nKaplan Meier survival curves for incidental gallbladder cancer vs non-incidental gallbladder cancer. A: Overall survival (OS) of all cohort; B: Overall survival (OS) of non-surgical treatment; C: OS for surgical treatment; D: Disease-free survival for surgical treatment. IGBC: Incidental gallbladder cancer; NIGBC: Non-incidental gallbladder cancer.", "Univariable Cox regression analysis (Table 4) identified that age, ASA, T stage, N stage, resection margin status and NIGBC diagnosis were significantly related to OS (P < 0.05). On multivariable analysis, N stage, resection margin status and NIGBC diagnosis were identified as independent predictors of survival, increasing the risk of mortality by 3, 5 and 2 times respectively (Table 4).\nCox proportional hazard analysis for overall survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nSimilarly, T stage, N stage, resection margin status and NIGBC diagnosis were identified to be associated with worse DFS on univariable analysis, however only N stage and resection margin status were found to be independent predictors on multivariable analysis (Table 5).\nCox proportional hazard analysis for disease free survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nIn an effort to statistically investigate if NIGBC diagnosis acted as a confounding factor for the T stage of the disease despite the use of time-dependent regression analysis, models without this parameter were produced. Again, only N stage and margin status were identified as independent prognostic factors for OS and DFS, while T stage was not (Tables 4 and 5). ", "GBC is a rare malignancy with unfavorable prognosis despite the advances in oncological treatments[1,2,12]. The timing of GBC diagnosis, whether incidental after cholecystectomy for benign causes or pre-operative on imaging and/or biopsy, has been previously under investigation for the potential effect in outcomes, however evidence is scarce. 
Violation of the anatomical planes around the tumour and incomplete clearance during the index laparoscopic cholecystectomy with residual disease in 35%-46% of the patients have been proposed as factors responsible for adverse oncological outcomes in IGBC patients[13-15]. Furthermore, inadvertent GB perforation during cholecystectomy has been reported in up to 22%[16,17], and this could theoretically lead to tumour seeding and metastatic disease. Interestingly, the site of invasion of local disease has also been shown to play an important role in T2 disease[15]. Nonetheless, some published evidence suggested that IGBC diagnosis confers favourable survival, regardless of the stage or degree of differentiation of the disease[9,11]. On the other hand, in other studies, NIGBC diagnosis did not adversely affect survival[5-8]. In a meta-analysis of 51 studies by Pyo et al[18], IGBC patients had favorable survival in comparison to NIGBC. However, not all studies included in this meta-analysis were able to show the same difference between both groups.\nSixty five percent of referred patients in our cohort had a preoperative radiological diagnosis of GBC. Sixty six percent of all patients did not proceed to an oncological resection (81% of NIGBC and 36% of IGBC patients), 16% of these due to locally advanced disease (all NIGBC patients) and 38% due to metastatic disease on staging imaging (23% of IGBC and 46% of NIGBC patients). Only 3% of IGBC were stage T1a or below and therefore, no further resection was indicated. Of note only 7% of patients were deemed unfit for surgical treatment. It is clear that the majority of the patients that did not proceed to management with curative intent were due to the late presentation of the disease, a fact that is well described for GBC[19]. The percentage of patients who had AC after curative intent surgery was low (22%). This is comparable with the published literature of around 24%[20]. The reasons for this may include patients’ choice and comorbidities, however, may also be attributed to the change in recommended best practice over the years of the study. According to a previously published expert consensus statement, AC was considered only in patients with high risk pathologic features: T3-T4 stages, metastatic lymph nodes and positive resection margins[21]. However, after the BILCAP trial showing improved survival with capecitabine (36.4 mo to 51.1 mo; P = 0.028), it is currently recommended that all patients with resected biliary tract malignancy, including GB cancer, receive 6 mo of adjuvant capecitabine[22]. The results of the currently ongoing ACTICCA-1 trial are eagerly awaited and will provide further evidence if the combination chemotherapy of gemcitabine and cisplatin is superior to capecitabine monotherapy in the adjuvant setting.\nOur data suggests that NIGBC diagnosis adversely affected oncological outcomes. NIGBC patients were more likely to have higher stages of the disease (T3/4), consequently undergoing more extensive resections. The range of oncological procedures for selected cases included multi-visceral resections and major hepatectomies to achieve histologically clear resection margins. Routine performance of such procedures is not associated with survival benefit and has high morbidity rates; however, it is still indicated when the vascular inflow or resection margins are/may be compromised[23-25]. Positive lymph node status was more common in patients with NIGBC with the difference approaching statistical significance. 
OS of all patients with NIGBC diagnosis was substantially worse than those with IGBC and this was also noted in both surgical and non-surgical subgroups. Furthermore, in the surgical subgroup, NIGBC diagnosis was identified as an independent predictor of OS; doubling the risk of mortality. Stronger predictors were pN stage and margin status, increasing the risk by 3 and 5 times respectively. These findings persisted when models computed that accounted for the possibility of NIGBC diagnosis acting as a confounding factor for T stage (by not including this parameter in the analysis), indicating that it is not true. This seemingly paradoxical observation may be explained by the presence of micro-metastases in early stages of GBC [26], which would affect and in the end dictate OS and DFS, rather than pT stage. Similar were the results for DFS, with N stage and margin status conferring higher relative risk of recurrence, while NIGBC diagnosis approached but did not reach significance. \nLymph node status is an important prognostic factor in GBC. Widmann et al[27] in a meta-analysis of 18 observational studies which included more than 27000 patients, has shown that lymph node involvement has significant effect on OS and DFS. Lymphadenectomy also was associated with better OS and DFS in patients with T1b, T2 and T3 disease. This was not clearly demonstrated with T4 disease due to the low number of cases undergoing curative resection in this stage. Lymph node micrometastases, defined as disease detected on immune-histochemical staining, have also been described to correlate with the pathologic N stage of the disease and disease prognosis[28]. Nonetheless, the significance of the extent of lymphadenectomy and lymph node yield is controversial in the published literature, with data from two studies suggesting that harvesting more than four lymph nodes during surgery is associated with improved survival[29,30], whilst a third study concluded that lymph node yield does not correlate with improved survival[31]. These differences may be explained by the differences in pathological reporting (higher lymph node yield may result in more accurate pN staging) and/or variations in the non-surgical part of the patients’ management, such differences in the administration of neoadjuvant and adjuvant chemotherapy, regimens, duration etc. \nOverall, our data suggest that the timing of diagnosis of GBC may play a significant role in the oncological outcomes. Due to the retrospective nature of the study and the long study period, data on the site of invasion (hepatic vs peritoneal) were not available, hence this could not be investigated as a possible explanation[32,33]. Another possibility is a difference in the genetic profile of the tumours which could account for different behavior, such as early micrometastases, that cannot be captured by the common radiological investigations and standard pathological parameters of stage and differentiation [34]. However, this cannot be investigated by the current study and would require a prospective molecular study design.\nThe limitations of the study include its single centre retrospective methodology. The long study period also included differences and evolution in the surgical approach during the oncological resection, with more bile duct resections done during the early study period to achieve a negative margin and a greater lymph node yield. In the later years, bile duct resection was only performed in the presence of a positive cystic duct margin. 
Nonetheless, this is the first study to include all patients referred for GBC to a tertiary regional centre, rather than only the ones receiving surgical treatment, therefore providing outcome data in an intention to treat basis over the whole referral cohort including the patients that did not receive surgical treatment. It is also one of few studies to demonstrate the effect of non-incidental diagnosis on the oncological outcomes. ", "Conclusively, the presented data suggest that IGBC diagnosis may confer a survival advantage, including patients that received surgical treatment, independently of the pathological stage and tumour characteristics. Prospective studies are required to investigate the reasons behind this, including detailed pathological analysis and molecular gene expression. " ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "RESULTS", "Cohort characteristics and management pathway", "Patient and tumour characteristics of surgical patients ", "Survival analysis", "Risk analysis", "DISCUSSION", "CONCLUSION" ]
[ "Gallbladder cancer (GBC) is associated with poor prognosis even after treatment, with median overall survival (OS) ranging in the literature between 3 and 22 mo[1,2]. Incidental gallbladder cancer (IGBC) discovered on routine histological examination of gallbladder specimens after cholecystectomy is more common than non-incidental gallbladder cancer (NIGBC) and represents 50%-60% of all cases[3-5]. The prognostic implication of incidental or non-incidental diagnosis in oncological outcomes is still a matter of debate as is the effect of the timing of curative intent resection which is performed as a secondary operation in IGBC. \nPublished evidence are contradictory with some studies suggesting that incidental diagnosis does not affect survival[5-8], while others showed longer survival with IGBC[9-11]. Earlier tumour stages in the IGBC group have been suggested as a confounding factor for any potential survival benefit[5]. On the contrary, other studies identified a survival benefit in IGBC even after controlling for tumour stage and degree of differentiation[9,10].\nThe aim of this study was to investigate the role of IGBC diagnosis in patient OS and especially after surgical treatment with curative intent.", "This is a retrospective single tertiary centre cohort study between January 2008 and December 2020. The sample included all patients with a histological diagnosis of GBC obtained by surgery or biopsy. The management of all patients was discussed and agreed in the hepatobiliary multidisciplinary (MDT) meeting. IGBC diagnosis was established after histopathological examination of specimens following cholecystectomy for benign aetiology. This was followed by complete staging with a computerized tomography scan of the thorax, abdomen and pelvis (CT-TAP) with subsequent curative intent resection if appropriate. NIGBC diagnosis was made based on imaging and/or biopsy after MDT discussion of referred patients. All patients had staging with CT-TAP, followed by surgery if clinically appropriate. In patients with locally advanced disease, neoadjuvant chemotherapy was administered and resection was contemplated after restaging. Liver magnetic resonance imaging and positron emission tomography scans were used selectively in both groups if liver metastases or extrahepatic disease was suspected on CT. Following surgical resection all patients were referred to oncology for assessment of adjuvant chemotherapy (AC).\nData were collected and recorded for patient's demographics, American society of anesthesiology (ASA) score, extent of surgical resection, histology, chemotherapy, recurrence and survival. The extent of surgery was defined as minor if radical cholecystectomy, gallbladder (GB) bed resection or liver segments IVb/V resection with or without bile duct resection was performed. It also included patients who only had bile duct resection. The resection was defined as major if a major hepatectomy (three or more liver segments) or multi-visceral resection was performed. Recurrence was defined as local/regional (GB bed, hilar lymph nodes), distant or both. The primary outcome of the study was difference in OS between IGBC and NIGBC and the secondary outcome was difference in disease-free survival (DFS). \n\nT-test, Chi-Square, Fisher’s Exact and Mann-Whitney U tests were used as appropriate to compare variables and outcomes between the two groups, with statistical significance set at P < 0.05. 
Survival analysis was performed with the Kaplan-Meier method and log rank test was used to compare survival curves between the study groups. Univariable and multivariable time to event analyses were performed using the Cox proportional hazard model to determine risk factors for OS and DFS. Variables were subjected to a univariable analysis first and those with P < 0.2 were introduced into a multivariable model. Hazard ratios (HR) and associated 95% confidence intervals (CI) were calculated. A two-tailed P value < 0.05 was considered statistically significant. All statistical analyses were performed using the software package SPSS Statistics for Windows (version 25.0; SPSS Inc., Chicago, IL, United States).", "Cohort characteristics and management pathway The study cohort comprised of 261 patients, with 35% presenting as IGBC and 65% had non incidental presentation (NIGBC) at the time of diagnosis (Figure 1). Median age was 69 years [interquartile range (IQR) 61-77] and male to female ratio was 1:3. Eighty-one percent of NIGBC and 34% of IGBC patients did not undergo resection. For the majority of these (82%) locally advanced or metastatic disease was the main reason. Other causes included patient’s choice, poor medical status and pathological stage < 1b (where resection is not indicated) (Table 1). Reasons for not having AC after resection were patients’ choice or comorbidities and early tumour stages (CIS, T1/T2, N0) with negative resection margins.\nManagement pathways of whole cohort.\nReasons for not proceeding with oncological resection, n (%)\nData missing for one patient.\nThe study cohort comprised of 261 patients, with 35% presenting as IGBC and 65% had non incidental presentation (NIGBC) at the time of diagnosis (Figure 1). Median age was 69 years [interquartile range (IQR) 61-77] and male to female ratio was 1:3. Eighty-one percent of NIGBC and 34% of IGBC patients did not undergo resection. For the majority of these (82%) locally advanced or metastatic disease was the main reason. Other causes included patient’s choice, poor medical status and pathological stage < 1b (where resection is not indicated) (Table 1). Reasons for not having AC after resection were patients’ choice or comorbidities and early tumour stages (CIS, T1/T2, N0) with negative resection margins.\nManagement pathways of whole cohort.\nReasons for not proceeding with oncological resection, n (%)\nData missing for one patient.\nPatient and tumour characteristics of surgical patients A total of 90 patients had curative intent resection. The type of resection depended on IGBC vs NIGBC diagnosis, pre-operative staging, intra-operative findings, cystic duct margin status and the T stage of IGBC patients. Hepatoduodenal (portal) lymphadenectomy was performed in all patients. For IGBC cases, the median time from the time of the index cholecystectomy to the curative resection was 13.5 wk (IQR: 11-16 wk).\nPatient and tumour characteristics are shown on Table 2. Patients with NIGBC had more advanced T stage and underwent more extensive resections compared to those with IGBC. Similarly, N stage approached but did not reach significance. 
The types of procedures performed are shown on Table 3.\nPatient and tumour characteristics for the surgical group, n (%)\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nTypes of surgeries performed in surgically treated patients of the study group, n (%)\nPortal lymphadenectomy performed in all cases.\nA total of 90 patients had curative intent resection. The type of resection depended on IGBC vs NIGBC diagnosis, pre-operative staging, intra-operative findings, cystic duct margin status and the T stage of IGBC patients. Hepatoduodenal (portal) lymphadenectomy was performed in all patients. For IGBC cases, the median time from the time of the index cholecystectomy to the curative resection was 13.5 wk (IQR: 11-16 wk).\nPatient and tumour characteristics are shown on Table 2. Patients with NIGBC had more advanced T stage and underwent more extensive resections compared to those with IGBC. Similarly, N stage approached but did not reach significance. The types of procedures performed are shown on Table 3.\nPatient and tumour characteristics for the surgical group, n (%)\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nTypes of surgeries performed in surgically treated patients of the study group, n (%)\nPortal lymphadenectomy performed in all cases.\nSurvival analysis For the whole cohort, median OS was longer in patients with IGBC diagnosis, (29 vs 4 mo, P < 0.001). OS of IGBC patients was significantly longer compared to NIGBC patients in the non-surgical group (14 vs 2 mo, P < 0.001), as well as the surgical group (29 vs 16.5 mo, P = 0.001). Similarly, DFS was significantly longer in patients with IGBC (21.5 mo vs 8.5 mo, P = 0.007) (Figure 2). \n\nKaplan Meier survival curves for incidental gallbladder cancer vs non-incidental gallbladder cancer. A: Overall survival (OS) of all cohort; B: Overall survival (OS) of non-surgical treatment; C: OS for surgical treatment; D: Disease-free survival for surgical treatment. IGBC: Incidental gallbladder cancer; NIGBC: Non-incidental gallbladder cancer.\nFor the whole cohort, median OS was longer in patients with IGBC diagnosis, (29 vs 4 mo, P < 0.001). OS of IGBC patients was significantly longer compared to NIGBC patients in the non-surgical group (14 vs 2 mo, P < 0.001), as well as the surgical group (29 vs 16.5 mo, P = 0.001). Similarly, DFS was significantly longer in patients with IGBC (21.5 mo vs 8.5 mo, P = 0.007) (Figure 2). \n\nKaplan Meier survival curves for incidental gallbladder cancer vs non-incidental gallbladder cancer. A: Overall survival (OS) of all cohort; B: Overall survival (OS) of non-surgical treatment; C: OS for surgical treatment; D: Disease-free survival for surgical treatment. IGBC: Incidental gallbladder cancer; NIGBC: Non-incidental gallbladder cancer.\nRisk analysis Univariable Cox regression analysis (Table 4) identified that age, ASA, T stage, N stage, resection margin status and NIGBC diagnosis were significantly related to OS (P < 0.05). 
On multivariable analysis, N stage, resection margin status and NIGBC diagnosis were identified as independent predictors of survival, increasing the risk of mortality by 3, 5 and 2 times respectively (Table 4).\nCox proportional hazard analysis for overall survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nSimilarly, T stage, N stage, resection margin status and NIGBC diagnosis were identified to be associated with worse DFS on univariable analysis, however only N stage and resection margin status were found to be independent predictors on multivariable analysis (Table 5).\nCox proportional hazard analysis for disease free survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nIn an effort to statistically investigate if NIGBC diagnosis acted as a confounding factor for the T stage of the disease despite the use of time-dependent regression analysis, models without this parameter were produced. Again, only N stage and margin status were identified as independent prognostic factors for OS and DFS, while T stage was not (Tables 4 and 5). \nUnivariable Cox regression analysis (Table 4) identified that age, ASA, T stage, N stage, resection margin status and NIGBC diagnosis were significantly related to OS (P < 0.05). On multivariable analysis, N stage, resection margin status and NIGBC diagnosis were identified as independent predictors of survival, increasing the risk of mortality by 3, 5 and 2 times respectively (Table 4).\nCox proportional hazard analysis for overall survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nSimilarly, T stage, N stage, resection margin status and NIGBC diagnosis were identified to be associated with worse DFS on univariable analysis, however only N stage and resection margin status were found to be independent predictors on multivariable analysis (Table 5).\nCox proportional hazard analysis for disease free survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nIn an effort to statistically investigate if NIGBC diagnosis acted as a confounding factor for the T stage of the disease despite the use of time-dependent regression analysis, models without this parameter were produced. Again, only N stage and margin status were identified as independent prognostic factors for OS and DFS, while T stage was not (Tables 4 and 5). ", "The study cohort comprised of 261 patients, with 35% presenting as IGBC and 65% had non incidental presentation (NIGBC) at the time of diagnosis (Figure 1). Median age was 69 years [interquartile range (IQR) 61-77] and male to female ratio was 1:3. Eighty-one percent of NIGBC and 34% of IGBC patients did not undergo resection. For the majority of these (82%) locally advanced or metastatic disease was the main reason. Other causes included patient’s choice, poor medical status and pathological stage < 1b (where resection is not indicated) (Table 1). Reasons for not having AC after resection were patients’ choice or comorbidities and early tumour stages (CIS, T1/T2, N0) with negative resection margins.\nManagement pathways of whole cohort.\nReasons for not proceeding with oncological resection, n (%)\nData missing for one patient.", "A total of 90 patients had curative intent resection. 
The type of resection depended on IGBC vs NIGBC diagnosis, pre-operative staging, intra-operative findings, cystic duct margin status and the T stage of IGBC patients. Hepatoduodenal (portal) lymphadenectomy was performed in all patients. For IGBC cases, the median time from the time of the index cholecystectomy to the curative resection was 13.5 wk (IQR: 11-16 wk).\nPatient and tumour characteristics are shown on Table 2. Patients with NIGBC had more advanced T stage and underwent more extensive resections compared to those with IGBC. Similarly, N stage approached but did not reach significance. The types of procedures performed are shown on Table 3.\nPatient and tumour characteristics for the surgical group, n (%)\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nTypes of surgeries performed in surgically treated patients of the study group, n (%)\nPortal lymphadenectomy performed in all cases.", "For the whole cohort, median OS was longer in patients with IGBC diagnosis, (29 vs 4 mo, P < 0.001). OS of IGBC patients was significantly longer compared to NIGBC patients in the non-surgical group (14 vs 2 mo, P < 0.001), as well as the surgical group (29 vs 16.5 mo, P = 0.001). Similarly, DFS was significantly longer in patients with IGBC (21.5 mo vs 8.5 mo, P = 0.007) (Figure 2). \n\nKaplan Meier survival curves for incidental gallbladder cancer vs non-incidental gallbladder cancer. A: Overall survival (OS) of all cohort; B: Overall survival (OS) of non-surgical treatment; C: OS for surgical treatment; D: Disease-free survival for surgical treatment. IGBC: Incidental gallbladder cancer; NIGBC: Non-incidental gallbladder cancer.", "Univariable Cox regression analysis (Table 4) identified that age, ASA, T stage, N stage, resection margin status and NIGBC diagnosis were significantly related to OS (P < 0.05). On multivariable analysis, N stage, resection margin status and NIGBC diagnosis were identified as independent predictors of survival, increasing the risk of mortality by 3, 5 and 2 times respectively (Table 4).\nCox proportional hazard analysis for overall survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nSimilarly, T stage, N stage, resection margin status and NIGBC diagnosis were identified to be associated with worse DFS on univariable analysis, however only N stage and resection margin status were found to be independent predictors on multivariable analysis (Table 5).\nCox proportional hazard analysis for disease free survival\nASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.\nIn an effort to statistically investigate if NIGBC diagnosis acted as a confounding factor for the T stage of the disease despite the use of time-dependent regression analysis, models without this parameter were produced. Again, only N stage and margin status were identified as independent prognostic factors for OS and DFS, while T stage was not (Tables 4 and 5). ", "GBC is a rare malignancy with unfavorable prognosis despite the advances in oncological treatments[1,2,12]. The timing of GBC diagnosis, whether incidental after cholecystectomy for benign causes or pre-operative on imaging and/or biopsy, has been previously under investigation for the potential effect in outcomes, however evidence is scarce. 
Violation of the anatomical planes around the tumour and incomplete clearance during the index laparoscopic cholecystectomy with residual disease in 35%-46% of the patients have been proposed as factors responsible for adverse oncological outcomes in IGBC patients[13-15]. Furthermore, inadvertent GB perforation during cholecystectomy has been reported in up to 22%[16,17], and this could theoretically lead to tumour seeding and metastatic disease. Interestingly, the site of invasion of local disease has also been shown to play an important role in T2 disease[15]. Nonetheless, some published evidence suggested that IGBC diagnosis confers favourable survival, regardless of the stage or degree of differentiation of the disease[9,11]. On the other hand, in other studies, NIGBC diagnosis did not adversely affect survival[5-8]. In a meta-analysis of 51 studies by Pyo et al[18], IGBC patients had favorable survival in comparison to NIGBC. However, not all studies included in this meta-analysis were able to show the same difference between both groups.\nSixty five percent of referred patients in our cohort had a preoperative radiological diagnosis of GBC. Sixty six percent of all patients did not proceed to an oncological resection (81% of NIGBC and 36% of IGBC patients), 16% of these due to locally advanced disease (all NIGBC patients) and 38% due to metastatic disease on staging imaging (23% of IGBC and 46% of NIGBC patients). Only 3% of IGBC were stage T1a or below and therefore, no further resection was indicated. Of note only 7% of patients were deemed unfit for surgical treatment. It is clear that the majority of the patients that did not proceed to management with curative intent were due to the late presentation of the disease, a fact that is well described for GBC[19]. The percentage of patients who had AC after curative intent surgery was low (22%). This is comparable with the published literature of around 24%[20]. The reasons for this may include patients’ choice and comorbidities, however, may also be attributed to the change in recommended best practice over the years of the study. According to a previously published expert consensus statement, AC was considered only in patients with high risk pathologic features: T3-T4 stages, metastatic lymph nodes and positive resection margins[21]. However, after the BILCAP trial showing improved survival with capecitabine (36.4 mo to 51.1 mo; P = 0.028), it is currently recommended that all patients with resected biliary tract malignancy, including GB cancer, receive 6 mo of adjuvant capecitabine[22]. The results of the currently ongoing ACTICCA-1 trial are eagerly awaited and will provide further evidence if the combination chemotherapy of gemcitabine and cisplatin is superior to capecitabine monotherapy in the adjuvant setting.\nOur data suggests that NIGBC diagnosis adversely affected oncological outcomes. NIGBC patients were more likely to have higher stages of the disease (T3/4), consequently undergoing more extensive resections. The range of oncological procedures for selected cases included multi-visceral resections and major hepatectomies to achieve histologically clear resection margins. Routine performance of such procedures is not associated with survival benefit and has high morbidity rates; however, it is still indicated when the vascular inflow or resection margins are/may be compromised[23-25]. Positive lymph node status was more common in patients with NIGBC with the difference approaching statistical significance. 
OS of all patients with NIGBC diagnosis was substantially worse than those with IGBC and this was also noted in both surgical and non-surgical subgroups. Furthermore, in the surgical subgroup, NIGBC diagnosis was identified as an independent predictor of OS; doubling the risk of mortality. Stronger predictors were pN stage and margin status, increasing the risk by 3 and 5 times respectively. These findings persisted when models computed that accounted for the possibility of NIGBC diagnosis acting as a confounding factor for T stage (by not including this parameter in the analysis), indicating that it is not true. This seemingly paradoxical observation may be explained by the presence of micro-metastases in early stages of GBC [26], which would affect and in the end dictate OS and DFS, rather than pT stage. Similar were the results for DFS, with N stage and margin status conferring higher relative risk of recurrence, while NIGBC diagnosis approached but did not reach significance. \nLymph node status is an important prognostic factor in GBC. Widmann et al[27] in a meta-analysis of 18 observational studies which included more than 27000 patients, has shown that lymph node involvement has significant effect on OS and DFS. Lymphadenectomy also was associated with better OS and DFS in patients with T1b, T2 and T3 disease. This was not clearly demonstrated with T4 disease due to the low number of cases undergoing curative resection in this stage. Lymph node micrometastases, defined as disease detected on immune-histochemical staining, have also been described to correlate with the pathologic N stage of the disease and disease prognosis[28]. Nonetheless, the significance of the extent of lymphadenectomy and lymph node yield is controversial in the published literature, with data from two studies suggesting that harvesting more than four lymph nodes during surgery is associated with improved survival[29,30], whilst a third study concluded that lymph node yield does not correlate with improved survival[31]. These differences may be explained by the differences in pathological reporting (higher lymph node yield may result in more accurate pN staging) and/or variations in the non-surgical part of the patients’ management, such differences in the administration of neoadjuvant and adjuvant chemotherapy, regimens, duration etc. \nOverall, our data suggest that the timing of diagnosis of GBC may play a significant role in the oncological outcomes. Due to the retrospective nature of the study and the long study period, data on the site of invasion (hepatic vs peritoneal) were not available, hence this could not be investigated as a possible explanation[32,33]. Another possibility is a difference in the genetic profile of the tumours which could account for different behavior, such as early micrometastases, that cannot be captured by the common radiological investigations and standard pathological parameters of stage and differentiation [34]. However, this cannot be investigated by the current study and would require a prospective molecular study design.\nThe limitations of the study include its single centre retrospective methodology. The long study period also included differences and evolution in the surgical approach during the oncological resection, with more bile duct resections done during the early study period to achieve a negative margin and a greater lymph node yield. In the later years, bile duct resection was only performed in the presence of a positive cystic duct margin. 
Nonetheless, this is the first study to include all patients referred for GBC to a tertiary regional centre, rather than only the ones receiving surgical treatment, therefore providing outcome data in an intention to treat basis over the whole referral cohort including the patients that did not receive surgical treatment. It is also one of few studies to demonstrate the effect of non-incidental diagnosis on the oncological outcomes. ", "Conclusively, the presented data suggest that IGBC diagnosis may confer a survival advantage, including patients that received surgical treatment, independently of the pathological stage and tumour characteristics. Prospective studies are required to investigate the reasons behind this, including detailed pathological analysis and molecular gene expression. " ]
[ null, "methods", null, null, null, null, null, null, null ]
[ "Gallbladder cancer", "Incidental gallbladder cancer", "Non-incidental gallbladder cancer", "Gallbladder cancer survival" ]
INTRODUCTION: Gallbladder cancer (GBC) is associated with poor prognosis even after treatment, with median overall survival (OS) ranging in the literature between 3 and 22 mo[1,2]. Incidental gallbladder cancer (IGBC) discovered on routine histological examination of gallbladder specimens after cholecystectomy is more common than non-incidental gallbladder cancer (NIGBC) and represents 50%-60% of all cases[3-5]. The prognostic implication of incidental or non-incidental diagnosis in oncological outcomes is still a matter of debate as is the effect of the timing of curative intent resection which is performed as a secondary operation in IGBC. Published evidence are contradictory with some studies suggesting that incidental diagnosis does not affect survival[5-8], while others showed longer survival with IGBC[9-11]. Earlier tumour stages in the IGBC group have been suggested as a confounding factor for any potential survival benefit[5]. On the contrary, other studies identified a survival benefit in IGBC even after controlling for tumour stage and degree of differentiation[9,10]. The aim of this study was to investigate the role of IGBC diagnosis in patient OS and especially after surgical treatment with curative intent. MATERIALS AND METHODS: This is a retrospective single tertiary centre cohort study between January 2008 and December 2020. The sample included all patients with a histological diagnosis of GBC obtained by surgery or biopsy. The management of all patients was discussed and agreed in the hepatobiliary multidisciplinary (MDT) meeting. IGBC diagnosis was established after histopathological examination of specimens following cholecystectomy for benign aetiology. This was followed by complete staging with a computerized tomography scan of the thorax, abdomen and pelvis (CT-TAP) with subsequent curative intent resection if appropriate. NIGBC diagnosis was made based on imaging and/or biopsy after MDT discussion of referred patients. All patients had staging with CT-TAP, followed by surgery if clinically appropriate. In patients with locally advanced disease, neoadjuvant chemotherapy was administered and resection was contemplated after restaging. Liver magnetic resonance imaging and positron emission tomography scans were used selectively in both groups if liver metastases or extrahepatic disease was suspected on CT. Following surgical resection all patients were referred to oncology for assessment of adjuvant chemotherapy (AC). Data were collected and recorded for patient's demographics, American society of anesthesiology (ASA) score, extent of surgical resection, histology, chemotherapy, recurrence and survival. The extent of surgery was defined as minor if radical cholecystectomy, gallbladder (GB) bed resection or liver segments IVb/V resection with or without bile duct resection was performed. It also included patients who only had bile duct resection. The resection was defined as major if a major hepatectomy (three or more liver segments) or multi-visceral resection was performed. Recurrence was defined as local/regional (GB bed, hilar lymph nodes), distant or both. The primary outcome of the study was difference in OS between IGBC and NIGBC and the secondary outcome was difference in disease-free survival (DFS). T-test, Chi-Square, Fisher’s Exact and Mann-Whitney U tests were used as appropriate to compare variables and outcomes between the two groups, with statistical significance set at P < 0.05. 
Survival analysis was performed with the Kaplan-Meier method, and the log-rank test was used to compare survival curves between the study groups. Univariable and multivariable time-to-event analyses were performed using the Cox proportional hazards model to determine risk factors for OS and DFS. Variables were subjected to a univariable analysis first, and those with P < 0.2 were introduced into a multivariable model. Hazard ratios (HR) and associated 95% confidence intervals (CI) were calculated. A two-tailed P value < 0.05 was considered statistically significant. All statistical analyses were performed using SPSS Statistics for Windows (version 25.0; SPSS Inc., Chicago, IL, United States). RESULTS: Cohort characteristics and management pathway: The study cohort comprised 261 patients, with 35% presenting as IGBC and 65% as non-incidental presentations (NIGBC) at the time of diagnosis (Figure 1). Median age was 69 years [interquartile range (IQR) 61-77] and the male to female ratio was 1:3. Eighty-one percent of NIGBC and 34% of IGBC patients did not undergo resection; for the majority of these (82%), locally advanced or metastatic disease was the main reason. Other causes included patient choice, poor medical status and pathological stage < 1b (where resection is not indicated) (Table 1). Reasons for not having AC after resection were patients' choice or comorbidities and early tumour stages (CIS, T1/T2, N0) with negative resection margins. Figure 1: Management pathways of the whole cohort. Table 1: Reasons for not proceeding with oncological resection, n (%); data missing for one patient. Patient and tumour characteristics of surgical patients: A total of 90 patients had curative-intent resection. The type of resection depended on IGBC vs NIGBC diagnosis, pre-operative staging, intra-operative findings, cystic duct margin status and the T stage of IGBC patients. Hepatoduodenal (portal) lymphadenectomy was performed in all patients. For IGBC cases, the median time from the index cholecystectomy to the curative resection was 13.5 wk (IQR: 11-16 wk). Patient and tumour characteristics are shown in Table 2. Patients with NIGBC had more advanced T stage and underwent more extensive resections compared to those with IGBC. Similarly, N stage approached but did not reach statistical significance. The types of procedures performed are shown in Table 3. Table 2: Patient and tumour characteristics for the surgical group, n (%). ASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index. Table 3: Types of surgeries performed in surgically treated patients of the study group, n (%); portal lymphadenectomy was performed in all cases.
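Before the survival results that follow, here is a minimal sketch of the Kaplan-Meier and log-rank workflow described in the Methods above. It is not the authors' code (the analysis was done in SPSS); it assumes the Python lifelines package, and the column names (os_months, death, group) are hypothetical.

```python
# Minimal sketch, assuming the lifelines package; the paper's analysis used SPSS.
# Column names (os_months, death, group) are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("gbc_cohort.csv")
igbc = df[df["group"] == "IGBC"]
nigbc = df[df["group"] == "NIGBC"]

# Kaplan-Meier estimate and median OS per group
kmf_igbc = KaplanMeierFitter().fit(igbc["os_months"], event_observed=igbc["death"], label="IGBC")
kmf_nigbc = KaplanMeierFitter().fit(nigbc["os_months"], event_observed=nigbc["death"], label="NIGBC")
print("median OS (IGBC):", kmf_igbc.median_survival_time_)
print("median OS (NIGBC):", kmf_nigbc.median_survival_time_)

# Log-rank test comparing the two survival curves
result = logrank_test(
    igbc["os_months"], nigbc["os_months"],
    event_observed_A=igbc["death"], event_observed_B=nigbc["death"],
)
print("log-rank P value:", result.p_value)
```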
Survival analysis: For the whole cohort, median OS was longer in patients with an IGBC diagnosis (29 vs 4 mo, P < 0.001). OS of IGBC patients was significantly longer than that of NIGBC patients in the non-surgical group (14 vs 2 mo, P < 0.001), as well as in the surgical group (29 vs 16.5 mo, P = 0.001). Similarly, DFS was significantly longer in patients with IGBC (21.5 vs 8.5 mo, P = 0.007) (Figure 2). Figure 2: Kaplan-Meier survival curves for incidental gallbladder cancer vs non-incidental gallbladder cancer. A: Overall survival (OS) of the whole cohort; B: OS with non-surgical treatment; C: OS with surgical treatment; D: Disease-free survival with surgical treatment. IGBC: Incidental gallbladder cancer; NIGBC: Non-incidental gallbladder cancer. Risk analysis: Univariable Cox regression analysis (Table 4) identified that age, ASA, T stage, N stage, resection margin status and NIGBC diagnosis were significantly related to OS (P < 0.05). On multivariable analysis, N stage, resection margin status and NIGBC diagnosis were identified as independent predictors of survival, increasing the risk of mortality by 3, 5 and 2 times, respectively (Table 4). Table 4: Cox proportional hazards analysis for overall survival. ASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index. Similarly, T stage, N stage, resection margin status and NIGBC diagnosis were associated with worse DFS on univariable analysis; however, only N stage and resection margin status were independent predictors on multivariable analysis (Table 5). Table 5: Cox proportional hazards analysis for disease-free survival. ASA: American Society of Anesthesiology; CCI: Charlson Comorbidity Index; BMI: Body mass index.
To investigate statistically whether NIGBC diagnosis acted as a confounding factor for the T stage of the disease, and despite the use of time-dependent regression analysis, models without this parameter were also produced. Again, only N stage and margin status were identified as independent prognostic factors for OS and DFS, while T stage was not (Tables 4 and 5).
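The univariable screening (P < 0.2) followed by a multivariable Cox model, as described in the Methods and used in the risk analysis above, could look roughly as follows in Python with lifelines. This is a sketch under assumptions, not the authors' SPSS procedure; the column names are hypothetical and categorical predictors are assumed to be pre-coded as numeric or dummy variables.

```python
# Sketch of the univariable screening (P < 0.2) -> multivariable Cox workflow.
# Assumes lifelines and numerically coded predictors; all names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("gbc_surgical_group.csv")        # one row per resected patient (hypothetical)
candidates = ["age", "asa_score", "t_stage_34", "n_positive", "margin_positive", "nigbc"]

retained = []
for var in candidates:
    cph = CoxPHFitter()
    cph.fit(df[[var, "os_months", "death"]], duration_col="os_months", event_col="death")
    if cph.summary.loc[var, "p"] < 0.2:           # screening threshold from the Methods
        retained.append(var)

# Multivariable model with the retained variables; hazard ratios are exp(coef)
cph_multi = CoxPHFitter()
cph_multi.fit(df[retained + ["os_months", "death"]], duration_col="os_months", event_col="death")
print(cph_multi.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```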
DISCUSSION: GBC is a rare malignancy with unfavorable prognosis despite the advances in oncological treatments[1,2,12]. The timing of GBC diagnosis, whether incidental after cholecystectomy for benign causes or pre-operative on imaging and/or biopsy, has previously been investigated for its potential effect on outcomes; however, evidence is scarce. Violation of the anatomical planes around the tumour and incomplete clearance during the index laparoscopic cholecystectomy, with residual disease in 35%-46% of patients, have been proposed as factors responsible for adverse oncological outcomes in IGBC patients[13-15]. Furthermore, inadvertent GB perforation during cholecystectomy has been reported in up to 22% of cases[16,17], and this could theoretically lead to tumour seeding and metastatic disease. Interestingly, the site of invasion of local disease has also been shown to play an important role in T2 disease[15]. Nonetheless, some published evidence suggests that IGBC diagnosis confers favourable survival, regardless of the stage or degree of differentiation of the disease[9,11]. On the other hand, in other studies, NIGBC diagnosis did not adversely affect survival[5-8]. In a meta-analysis of 51 studies by Pyo et al[18], IGBC patients had favorable survival in comparison to NIGBC.
However, not all studies included in this meta-analysis were able to show the same difference between the two groups. Sixty-five percent of referred patients in our cohort had a preoperative radiological diagnosis of GBC. Sixty-six percent of all patients did not proceed to an oncological resection (81% of NIGBC and 36% of IGBC patients), 16% of these due to locally advanced disease (all NIGBC patients) and 38% due to metastatic disease on staging imaging (23% of IGBC and 46% of NIGBC patients). Only 3% of IGBC were stage T1a or below, and therefore no further resection was indicated. Of note, only 7% of patients were deemed unfit for surgical treatment. It is clear that the majority of patients who did not proceed to management with curative intent did so because of late presentation of the disease, a fact that is well described for GBC[19]. The percentage of patients who had AC after curative-intent surgery was low (22%). This is comparable with the published literature figure of around 24%[20]. The reasons for this may include patients' choice and comorbidities; however, it may also be attributed to the change in recommended best practice over the years of the study. According to a previously published expert consensus statement, AC was considered only in patients with high-risk pathologic features: T3-T4 stages, metastatic lymph nodes and positive resection margins[21]. However, after the BILCAP trial showed improved survival with capecitabine (36.4 mo to 51.1 mo; P = 0.028), it is currently recommended that all patients with resected biliary tract malignancy, including GB cancer, receive 6 mo of adjuvant capecitabine[22]. The results of the currently ongoing ACTICCA-1 trial are eagerly awaited and will provide further evidence on whether combination chemotherapy with gemcitabine and cisplatin is superior to capecitabine monotherapy in the adjuvant setting. Our data suggest that NIGBC diagnosis adversely affected oncological outcomes. NIGBC patients were more likely to have higher stages of the disease (T3/4) and consequently underwent more extensive resections. The range of oncological procedures for selected cases included multi-visceral resections and major hepatectomies to achieve histologically clear resection margins. Routine performance of such procedures is not associated with a survival benefit and carries high morbidity rates; however, it is still indicated when the vascular inflow or resection margins are, or may be, compromised[23-25]. Positive lymph node status was more common in patients with NIGBC, with the difference approaching statistical significance. OS of all patients with a NIGBC diagnosis was substantially worse than that of those with IGBC, and this was also noted in both the surgical and non-surgical subgroups. Furthermore, in the surgical subgroup, NIGBC diagnosis was identified as an independent predictor of OS, doubling the risk of mortality. Stronger predictors were pN stage and margin status, increasing the risk by 3 and 5 times, respectively. These findings persisted when models were computed that accounted for the possibility of NIGBC diagnosis acting as a confounding factor for T stage (by not including this parameter in the analysis), indicating that this was not the case. This seemingly paradoxical observation may be explained by the presence of micro-metastases in early stages of GBC[26], which would affect and ultimately dictate OS and DFS, rather than pT stage.
The results for DFS were similar, with N stage and margin status conferring a higher relative risk of recurrence, while NIGBC diagnosis approached but did not reach significance. Lymph node status is an important prognostic factor in GBC. Widmann et al[27], in a meta-analysis of 18 observational studies including more than 27000 patients, showed that lymph node involvement has a significant effect on OS and DFS. Lymphadenectomy was also associated with better OS and DFS in patients with T1b, T2 and T3 disease. This was not clearly demonstrated for T4 disease due to the low number of cases undergoing curative resection at this stage. Lymph node micrometastases, defined as disease detected on immunohistochemical staining, have also been described to correlate with the pathologic N stage of the disease and disease prognosis[28]. Nonetheless, the significance of the extent of lymphadenectomy and lymph node yield is controversial in the published literature, with data from two studies suggesting that harvesting more than four lymph nodes during surgery is associated with improved survival[29,30], whilst a third study concluded that lymph node yield does not correlate with improved survival[31]. These differences may be explained by differences in pathological reporting (a higher lymph node yield may result in more accurate pN staging) and/or variations in the non-surgical part of the patients' management, such as differences in the administration of neoadjuvant and adjuvant chemotherapy, regimens, duration, etc. Overall, our data suggest that the timing of diagnosis of GBC may play a significant role in oncological outcomes. Due to the retrospective nature of the study and the long study period, data on the site of invasion (hepatic vs peritoneal) were not available, hence this could not be investigated as a possible explanation[32,33]. Another possibility is a difference in the genetic profile of the tumours, which could account for different behavior, such as early micrometastases, that cannot be captured by common radiological investigations and standard pathological parameters of stage and differentiation[34]. However, this cannot be investigated by the current study and would require a prospective molecular study design. The limitations of the study include its single-centre retrospective methodology. The long study period also included differences and evolution in the surgical approach to the oncological resection, with more bile duct resections performed during the early study period to achieve a negative margin and a greater lymph node yield. In the later years, bile duct resection was only performed in the presence of a positive cystic duct margin. Nonetheless, this is the first study to include all patients referred for GBC to a tertiary regional centre, rather than only those receiving surgical treatment, therefore providing outcome data on an intention-to-treat basis over the whole referral cohort, including the patients who did not receive surgical treatment. It is also one of the few studies to demonstrate the effect of non-incidental diagnosis on oncological outcomes. CONCLUSION: In conclusion, the presented data suggest that IGBC diagnosis may confer a survival advantage, including in patients who received surgical treatment, independently of the pathological stage and tumour characteristics. Prospective studies are required to investigate the reasons behind this, including detailed pathological analysis and molecular gene expression.
Background: Incidental gallbladder cancer (IGBC) represents 50%-60% of gallbladder cancer cases. Data are conflicting on the role of IGBC diagnosis in oncological outcomes. Some studies suggest that IGBC diagnosis does not affect outcomes, while others suggest that overall survival (OS) is longer in these cases compared to non-incidental diagnosis (NIGBC). Furthermore, some studies reported early tumour stages and histopathologic characteristics as possible confounders, while others did not. Methods: Retrospective analysis of all patient referrals with gallbladder cancer between 2008 and 2020 in a tertiary hepatobiliary centre. Statistical comparison of patient and tumour characteristics between the IGBC and NIGBC subgroups was performed. Survival analysis for the whole cohort and the surgical and non-surgical subgroups was done with the Kaplan-Meier method and the log-rank test. Risk analysis was performed with univariable and multivariable Cox regression analysis. Results: The cohort included 261 patients with gallbladder cancer; 65% of cases had NIGBC and 35% had IGBC. A total of 90 patients received surgical treatment (66% of IGBC cases and 19% of NIGBC cases). NIGBC patients had more advanced T stage and required more extensive resections than IGBC ones. OS was longer in patients with IGBC in the whole cohort (29 vs 4 mo, P < 0.001), as well as in the non-surgical (14 vs 2 mo, P < 0.001) and surgical subgroups (29 vs 16.5 mo, P = 0.001). Disease-free survival (DFS) after surgery was longer in patients with IGBC (21.5 vs 8.5 mo, P = 0.007). N stage and resection margin status were identified as independent predictors of OS and DFS. NIGBC diagnosis was identified as an independent predictor of OS. Conclusions: IGBC diagnosis may confer a survival advantage independently of the pathological stage and tumour characteristics. Prospective studies are required to further investigate this, including detailed pathological analysis and molecular gene expression.
INTRODUCTION: Gallbladder cancer (GBC) is associated with poor prognosis even after treatment, with median overall survival (OS) ranging in the literature between 3 and 22 mo[1,2]. Incidental gallbladder cancer (IGBC), discovered on routine histological examination of gallbladder specimens after cholecystectomy, is more common than non-incidental gallbladder cancer (NIGBC) and represents 50%-60% of all cases[3-5]. The prognostic implication of incidental or non-incidental diagnosis for oncological outcomes is still a matter of debate, as is the effect of the timing of curative-intent resection, which is performed as a secondary operation in IGBC. Published evidence is contradictory, with some studies suggesting that incidental diagnosis does not affect survival[5-8], while others showed longer survival with IGBC[9-11]. Earlier tumour stages in the IGBC group have been suggested as a confounding factor for any potential survival benefit[5]. On the contrary, other studies identified a survival benefit in IGBC even after controlling for tumour stage and degree of differentiation[9,10]. The aim of this study was to investigate the role of IGBC diagnosis in patient OS, especially after surgical treatment with curative intent. CONCLUSION: Published evidence remains contradictory. The theory that IGBC and NIGBC are two distinct variants of the same disease remains to be proven by detailed pathologic assessment and research into cancer molecular genetics.
Background: Incidental gallbladder cancer (IGBC) represents 50%-60% of gallbladder cancer cases. Data are conflicting on the role of IGBC diagnosis in oncological outcomes. Some studies suggest that IGBC diagnosis does not affect outcomes, while others suggest that overall survival (OS) is longer in these cases compared to non-incidental diagnosis (NIGBC). Furthermore, some studies reported early tumour stages and histopathologic characteristics as possible confounders, while others did not. Methods: Retrospective analysis of all patient referrals with gallbladder cancer between 2008 and 2020 in a tertiary hepatobiliary centre. Statistical comparison of patient and tumour characteristics between the IGBC and NIGBC subgroups was performed. Survival analysis for the whole cohort and the surgical and non-surgical subgroups was done with the Kaplan-Meier method and the log-rank test. Risk analysis was performed with univariable and multivariable Cox regression analysis. Results: The cohort included 261 patients with gallbladder cancer; 65% of cases had NIGBC and 35% had IGBC. A total of 90 patients received surgical treatment (66% of IGBC cases and 19% of NIGBC cases). NIGBC patients had more advanced T stage and required more extensive resections than IGBC ones. OS was longer in patients with IGBC in the whole cohort (29 vs 4 mo, P < 0.001), as well as in the non-surgical (14 vs 2 mo, P < 0.001) and surgical subgroups (29 vs 16.5 mo, P = 0.001). Disease-free survival (DFS) after surgery was longer in patients with IGBC (21.5 vs 8.5 mo, P = 0.007). N stage and resection margin status were identified as independent predictors of OS and DFS. NIGBC diagnosis was identified as an independent predictor of OS. Conclusions: IGBC diagnosis may confer a survival advantage independently of the pathological stage and tumour characteristics. Prospective studies are required to further investigate this, including detailed pathological analysis and molecular gene expression.
4,611
370
[ 213, 1612, 178, 196, 168, 252, 1379, 51 ]
9
[ "patients", "resection", "stage", "igbc", "nigbc", "survival", "diagnosis", "analysis", "surgical", "os" ]
[ "gallbladder cancer gbc", "igbc incidental gallbladder", "gallbladder cancer cohort", "gallbladder cancer nigbc", "gallbladder cancer vs" ]
null
[CONTENT] Gallbladder cancer | Incidental gallbladder cancer | Non-incidental gallbladder cancer | Gallbladder cancer survival [SUMMARY]
[CONTENT] Gallbladder cancer | Incidental gallbladder cancer | Non-incidental gallbladder cancer | Gallbladder cancer survival [SUMMARY]
null
[CONTENT] Gallbladder cancer | Incidental gallbladder cancer | Non-incidental gallbladder cancer | Gallbladder cancer survival [SUMMARY]
[CONTENT] Gallbladder cancer | Incidental gallbladder cancer | Non-incidental gallbladder cancer | Gallbladder cancer survival [SUMMARY]
[CONTENT] Gallbladder cancer | Incidental gallbladder cancer | Non-incidental gallbladder cancer | Gallbladder cancer survival [SUMMARY]
[CONTENT] Disease-Free Survival | Gallbladder Neoplasms | Humans | Incidental Findings | Neoplasm Staging | Retrospective Studies | Survival Analysis [SUMMARY]
[CONTENT] Disease-Free Survival | Gallbladder Neoplasms | Humans | Incidental Findings | Neoplasm Staging | Retrospective Studies | Survival Analysis [SUMMARY]
null
[CONTENT] Disease-Free Survival | Gallbladder Neoplasms | Humans | Incidental Findings | Neoplasm Staging | Retrospective Studies | Survival Analysis [SUMMARY]
[CONTENT] Disease-Free Survival | Gallbladder Neoplasms | Humans | Incidental Findings | Neoplasm Staging | Retrospective Studies | Survival Analysis [SUMMARY]
[CONTENT] Disease-Free Survival | Gallbladder Neoplasms | Humans | Incidental Findings | Neoplasm Staging | Retrospective Studies | Survival Analysis [SUMMARY]
[CONTENT] gallbladder cancer gbc | igbc incidental gallbladder | gallbladder cancer cohort | gallbladder cancer nigbc | gallbladder cancer vs [SUMMARY]
[CONTENT] gallbladder cancer gbc | igbc incidental gallbladder | gallbladder cancer cohort | gallbladder cancer nigbc | gallbladder cancer vs [SUMMARY]
null
[CONTENT] gallbladder cancer gbc | igbc incidental gallbladder | gallbladder cancer cohort | gallbladder cancer nigbc | gallbladder cancer vs [SUMMARY]
[CONTENT] gallbladder cancer gbc | igbc incidental gallbladder | gallbladder cancer cohort | gallbladder cancer nigbc | gallbladder cancer vs [SUMMARY]
[CONTENT] gallbladder cancer gbc | igbc incidental gallbladder | gallbladder cancer cohort | gallbladder cancer nigbc | gallbladder cancer vs [SUMMARY]
[CONTENT] patients | resection | stage | igbc | nigbc | survival | diagnosis | analysis | surgical | os [SUMMARY]
[CONTENT] patients | resection | stage | igbc | nigbc | survival | diagnosis | analysis | surgical | os [SUMMARY]
null
[CONTENT] patients | resection | stage | igbc | nigbc | survival | diagnosis | analysis | surgical | os [SUMMARY]
[CONTENT] patients | resection | stage | igbc | nigbc | survival | diagnosis | analysis | surgical | os [SUMMARY]
[CONTENT] patients | resection | stage | igbc | nigbc | survival | diagnosis | analysis | surgical | os [SUMMARY]
[CONTENT] incidental | gallbladder | igbc | survival | gallbladder cancer | cancer | benefit | incidental diagnosis | survival benefit | incidental gallbladder [SUMMARY]
[CONTENT] resection | liver | patients | ct | appropriate | performed | groups | chemotherapy | defined | surgery [SUMMARY]
null
[CONTENT] including | pathological | patients received | treatment independently pathological stage | required investigate | required | igbc diagnosis confer survival | analysis molecular gene expression | analysis molecular gene | surgical treatment independently [SUMMARY]
[CONTENT] patients | resection | igbc | stage | survival | nigbc | analysis | os | incidental | diagnosis [SUMMARY]
[CONTENT] patients | resection | igbc | stage | survival | nigbc | analysis | os | incidental | diagnosis [SUMMARY]
[CONTENT] 50%-60% ||| ||| ||| [SUMMARY]
[CONTENT] between 2008 and 2020 | tertiary ||| ||| Kaplan ||| [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
[CONTENT] 50%-60% ||| ||| ||| ||| between 2008 and 2020 | tertiary ||| ||| Kaplan ||| ||| ||| 261 ||| 65% | 35% ||| 90 | 66% | 19% ||| ||| 29 | 4 | P < 0.001 | 14 | P < 0.001 | 29 | 16.5 | 0.001 ||| 21.5 mo | 8.5 | 0.007 ||| ||| ||| ||| [SUMMARY]
[CONTENT] 50%-60% ||| ||| ||| ||| between 2008 and 2020 | tertiary ||| ||| Kaplan ||| ||| ||| 261 ||| 65% | 35% ||| 90 | 66% | 19% ||| ||| 29 | 4 | P < 0.001 | 14 | P < 0.001 | 29 | 16.5 | 0.001 ||| 21.5 mo | 8.5 | 0.007 ||| ||| ||| ||| [SUMMARY]
A one-year prospective study on the occurrence of traumatic spinal cord injury and clinical complications during hospitalisation in North-East Tanzania.
34795737
Clinical complications following spinal cord injury are a big concern as they account for increased cost of rehabilitation, poor outcomes and mortality.
BACKGROUND
Prospective data were collected on all persons with traumatic spinal cord injury in North-East Tanzania, from hospital admission to discharge. Neurological progress and complications were assessed routinely. Data were captured using a form that incorporated the components of the core data set of the International Spinal Cord Society and were analysed descriptively.
METHOD
A total of 87 persons with traumatic spinal cord injury were admitted at the hospital with a mean age of 40.2 ± 15.8 years. There were 69 (79.3%) males, and 58 (66.6%) of the injuries resulted from falls. Spasms (41 patients, 47.1%), neuropathic pain (40 patients, 46%), and constipation (35 patients, 40.2%) were the most commonly reported complications. The annual incidence rate in the Kilimanjaro region was at least 38 cases per million.
RESULTS
The incidence of traumatic spinal cord injury in the Kilimanjaro region is relatively high. In-hospital complications are prevalent and are worth addressing for successful rehabilitation.
CONCLUSION
[ "Adult", "Databases, Factual", "Female", "Hospitalization", "Humans", "Male", "Middle Aged", "Prospective Studies", "Spinal Cord Injuries", "Tanzania" ]
8568242
Introduction
Trauma to the spinal cord may lead to death at the scene of the accident, during the acute phase, or later due to organ failure and/or clinical complications. Over the past five years, systematic reviews have shown that the annual global incidence of traumatic spinal cord injury (TSCI) ranges from 3.6 to 246.0 new cases per million population 1,2. However, there is a scarcity of epidemiological studies on TSCI in low-income countries, particularly in Africa. In the sub-Saharan region, two prospective studies in South Africa and Botswana reported annual incidences of 75.6 and 13 per million population, respectively 3, 4. Although the data were inconclusive, a review of the few available reports from low-income countries suggested that the incidence of TSCI was more than 25 cases per million population annually5. As soon as TSCI has occurred, there are a few days or weeks of immediate risk of death due to acute autonomic dysfunction and cardiorespiratory failure 6. With comprehensive life support and rehabilitation services, this risk of acute death decreases, and the patient stabilises physiologically and psychologically with time. However, depending on the level and severity of the injury, this acute and most critical period may be followed by persistent troubling symptoms such as neuropathic pain and spasms that affect rehabilitation and quality of life in general. Neuropathic pain and spasms are among the signs and symptoms of SCI and may persist for the rest of the person's life 7, 8. Common secondary complications that are not among the signs and symptoms of TSCI include pressure ulcers (PUs), respiratory and urinary tract infections (UTIs), and deep venous thrombosis 9. It has been shown that these secondary complications elevate the risk of death in persons with SCI even after discharge from the hospital. PUs are among the leading and most complex fatal complications in the SCI population 10. In one hospital-based prospective study in South Africa, PUs were registered as the most prevalent complication following SCI 11. Management of PUs is complex, and if the wound becomes infected, death due to septicaemia may occur 12. Following a PU on immobile, insensitive tissue with diminished vascularity, healing becomes difficult and susceptibility to infections increases. Septicaemia, which is one of the most common causes of death in persons with SCI, results not only from unsuccessfully treated PUs, but also from UTIs or pneumonia12, 13. Another route for infections is the urinary tract, due to repeated or prolonged use of catheters 14. Both indwelling and intermittent catheterisation can introduce different types of bacteria to the urinary system, leading to UTIs. A recent systematic review has shown that UTIs are the leading complications among persons with SCI and are acknowledged as difficult to prevent 12. However, UTIs are not as potentially fatal as PUs. Depending on the type and extent of infection, untreated UTIs might cause irreversible destruction to the urinary system15. It is not uncommon for UTIs to ascend all the way to the kidneys, leading to nephritis and dysfunction of the vascular and urinary system. Pulmonary complications are also common and account for a significant proportion of the mortality during or after the acute phase, especially in cases with high-level injuries and in elderly persons with TSCI 16.
Apart from respiratory failure, which is most common in the early and most acute phase, respiratory infections such as pneumonia can be either acute or later complications. Common factors that are known to contribute to respiratory complications include a high spinal level, complete injury, history of smoking, and steroid use 17. All of these TSCI-related complications are of medical and rehabilitation concern because they are associated with poor functional outcomes, high rehabilitation cost, prolonged hospital stay, anxiety, depression, poor quality of life (QoL), and death 18. However, most of these complications can be prevented with adequate knowledge, skills, and equipment. Lack of these pre-requisites accounts for the reported high prevalence of complications in low-income countries. There are very few published studies in Tanzania with regard to TSCI-related complications, and thus it is difficult to inform prevention, care, and rehabilitation services. This study aimed to describe the occurrence of TSCI and prevailing complications during hospitalisation in one rural area of Tanzania.
Methods
Study Design: This was a descriptive one-year prospective study in which data were collected from TSCI patients on admission, during hospitalisation, and at discharge. Study population and setting: This study targeted all newly injured persons from north-east Tanzania admitted to Kilimanjaro Christian Medical Centre (KCMC), which is the only consultant hospital with a spinal unit in the region. Tool for data collection: A data collection sheet based on the international SCI core data set 19, 20 was used in this study. Other variables such as associated injuries, in-hospital complications, skeletal injury, death, and ambulatory state at discharge were also added to the data collection sheet. Additionally, a chart was developed to register the anatomical location and the dates of onset and healing of PUs21. The American Spinal Injury Association Impairment Scale22,23 was used to classify the completeness of the injury. Procedure for data collection: All new cases of TSCI were identified from the hospital admission records within the first 24 hours of admission. TSCI was confirmed by evidence of persisting sensory and/or motor loss, which might include bladder and bowel dysfunction beyond the normal spinal shock period. The injured person was followed up in the ward, and neurological assessment was conducted as a matter of routine. All patients consented to their medical data being used for this study. The data collection sheet included sociodemographic information, cause of injury, and the address of each patient. Each patient was inspected for associated injuries and for any skin changes suggestive of PUs. A PU was defined as any skin change due to sustained pressure that required any form of management, including a planned pressure-relief procedure. Patients were asked to report any new complaint such as constipation, pain, or spasms. Each patient was followed up from admission to discharge or death.
Data analysis: Data were cleaned and entered into the Statistical Package for the Social Sciences (SPSS version 25) for analysis. Age, length of hospital stay (LOS), and time to PU appearance and healing were analysed descriptively, and the results are given as means and standard deviations. Pain and spasms were subjectively reported and described by the patient. Incidence based on the Kilimanjaro region population (N = 1,910,555) was calculated after subtracting children 10 years of age and younger (N = 464,600), because none of these were admitted with TSCI in 2017. Ethical considerations: This study was part of a larger project on SCI in north-east Tanzania that had ethical approval from the research ethics review committee of the KCMC, registration number 620. Patients were requested to consent to the use of their medical information for research purposes.
Results
Socio-demographic characteristics: For the year 2017, a total of 87 persons with TSCI were admitted to KCMC. Of these, 56 (64.4%) were from the Kilimanjaro region and 31 (35.6%) were from neighbouring regions. There were 69 (79.3%) males, and the mean age was 40.22 ± 15.77 years. Most injuries involved the cervical level (49 patients, 56.3%), followed by the lumbar (27 patients, 31%) and thoracic (11 patients, 12.6%) levels. In terms of the severity of the TSCI, 35 (40.2%) patients had complete (AIS "A") and 44 (50.6%) patients had incomplete (AIS "B, C, D") injuries, while 8 (9.2%) patients did not have any neurological deficits after 24 hours. The mean LOS was 120.59 ± 55.29 days for tetraplegia and 115 ± 44.89 days for paraplegia (range 8-273 days). Causes, annual incidence, and mortality following TSCI: The majority of TSCI cases (59 patients, 66.6%) were caused by falls, and 41 (47.1%) of these were falls from height (especially from trees). Road traffic accidents led to 25 (28.7%) cases, while 4 (4.6%) cases resulted from other forms of trauma, including assaults, being fallen on by a heavy object, and being attacked by an animal. The Kilimanjaro region was estimated to have a population of about 1,910,555 in 2017 based on the Tanzanian national census of 2012 and an annual population growth rate of 2.9%. However, no one aged 10 years or less was admitted with TSCI in 2017. For this reason, those in this age group (464,600) were considered as having a negligible risk, and thus the total at-risk population was 1,445,955 persons. The incidence of TSCI for the Kilimanjaro region was 56 cases/1,445,955 persons, giving a rate of at least 38 cases per million population per year. During this one-year period, 15 patients died while in the hospital, giving an in-hospital mortality rate of 17.2%.
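As a quick arithmetic check of the figures above, the crude incidence and in-hospital mortality can be reproduced directly from the reported counts; the sketch below simply re-computes them (the paper's own analysis was done in SPSS).

```python
# Re-computation of the reported incidence and in-hospital mortality from the stated counts.
region_population_2017 = 1_910_555        # Kilimanjaro region estimate for 2017
children_10_or_under = 464_600            # excluded: no TSCI admissions in this age group
at_risk_population = region_population_2017 - children_10_or_under   # 1,445,955

regional_cases = 56                        # admissions from the Kilimanjaro region
incidence_per_million = regional_cases / at_risk_population * 1_000_000
print(f"crude annual incidence: {incidence_per_million:.1f} per million")  # about 38.7

deaths, admissions = 15, 87
print(f"in-hospital mortality: {deaths / admissions:.1%}")                 # about 17.2%
```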
Complications: SCI-related complications were identified by clinical observation (PUs), patient reports (spasms and neuropathic pain), and laboratory or radiological confirmation (respiratory infection, UTI, and deep venous thrombosis). Spasms and neuropathic pain were the most prevalent complications, while deep vein thrombosis was the least reported complication, as shown in Figure 1. Figure 1: Prevalence of clinical complications. Location and time to occurrence and healing of PUs: PUs were followed up to assess their time to occurrence and healing. The prevalence of PUs was 37.9%, and the calcaneus was the most common site of occurrence (Figure 2). Figure 2: Prevalence of PUs according to body location. As seen in Table 1, PUs were more prevalent in tetraplegia compared to paraplegia, and the longest healing times were observed for PUs on the sacrum and the greater trochanters. Some of the PUs did not heal during hospitalisation (the follow-up period). Table 1: Location and time to appearance and healing of PUs in tetraplegia and paraplegia.
Conclusion
TSCI has a relatively high incidence in the north-east region of Tanzania compared to published studies from other low-income settings. In-hospital complications are substantially prevalent and may contribute to the observed long LOS and increased mortality. Such complications adversely affect rehabilitation and activities of daily living, leading to dependency and low QoL.
[ "Study Design", "Study population and setting", "Procedure for data collection", "Data analysis", "Ethical considerations", "Socio-demographic characteristics", "Complications", "Location and time to occurrence and healing of PUs", "Limitation of the study" ]
[ "This was a descriptive one-year prospective study in which data were collected from TSCI patients on admission, during hospitalisation, and at discharge.", "This study targeted all newly injured persons from north-east Tanzania admitted to Kilimanjaro Christian Medical Centre (KCMC), which is the only consultant hospital with a spinal unit in the region.\nTool for data collection\nA data collection sheet based on the international SCI core data set 19, 20 was used in this study. Other variables such as associated injuries, in-hospital complications, skeletal injury, death, and ambulatory state at discharge were also added to the data collection sheet. Additionally, a chart was developed to register the anatomical location and date of starting and healing of PUs21. The American Spinal Injury Association Impairment Scale22,23 was used to classify the completeness of the injury.", "All new cases of TSCI were identified from the hospital admission records within the first 24 hours of admission. TSCI was confirmed by evidence of persisting sensory and/or motor loss, which might include bladder and bowel dysfunction beyond the normal spinal shock period. The injured person was followed up in the ward, and neurological assessment was conducted as a matter of routine. All patients consented for their medical data to be used for this study. The data collection sheet included sociodemographic information, cause of injury, and the address of each patient. Each patient was inspected for associated injuries and whether or not there were skin changes suggestive of PUs. PUs was defined as any skin change due to sustained pressure that required any form of management, including a planned pressure relief procedure. Patients were asked to report any new complaint such as constipation, pain, or spasms. Each patient was followed up from admission to discharge or death.", "Data were cleaned and entered into the Statistical Package for Social Science (SPSS version 25) for analysis. Age, length of hospital stay (LOS), and time for PU appearance and healing were analysed descriptively, and the results are given as means and standard deviations. Pain and spasms were subjectively reported and described by the patient. Incidence based on the Kilimanjaro region population (N = 1,910,555) was calculated after subtracting children 10 years of age and younger (N = 464,600) because none of these were admitted with TSCI in 2017.", "This study was a part of a larger project on SCI in the north-east Tanzania that had ethical approval from the research ethics review committee of the KCMC, registration number 620. Patients were requested to consent to the use of their medical information for research purposes.", "For the year 2017, a total of 87 persons with TSCI were admitted to KCMC. Of these, 56 (64.4%) were from the Kilimanjaro region and 31 (35.6%) were from neighbouring regions. There were 69 (79.3%) males, and the mean age was 40.22 ± 15.77 years. Most injuries involved the cervical level (49 patients (56.3%)) followed by lumbar (27 patients (31%)) and thoracic (11 patients (12.6%)) levels. In terms of the severity of the TSCI, 35 (40.20%) patients had complete (AIS “A”) and 44 (50.6%) patients had incomplete (AIS “B, C, D”) injuries, while 8 (9.20%) patients did not have any neurological deficits after 24 hours. 
The mean LOS was 120.59 ± 55.29 days for tetraplegia and 115 ± 44.89 days for paraplegia (range 8–273 days).\nCauses, annual incidence, and mortality following TSCI The majority of TSCI cases (59 patients (66.6%)) were caused by falls, and 41 (47.1%) of these were falls from height (especially trees). Road traffic accidents lead to 25 (28.7%) cases, while 4 (4.6%) cases were from other forms of trauma including assaults, being fallen on by a heavy object, and being attacked by an animal.\nThe Kilimanjaro region was estimated to have a population of about 1,910,555 in 2017 based on the Tanzanian national census of 2012 and an annual population growth rate of 2.9%. However, for the year 2017 there was no one aged 10 years or less who were admitted with TSCI. For this reason, those who were in this age group (464,600) were considered as having a negligible risk, and thus the total at-risk population was 1,445,955 persons. The incidence of TSCI for the Kilimanjaro region was 56 cases/1,445,955 persons, giving a rate of at least 38 cases per million population per year. For this period of one year, 15 patients died while in the hospital giving an annual mortality rate of 17.2%.", "SCI-related complications were identified by clinical observation (PUs), patient reports (spasms and neuropathic pain), and laboratory or radiological confirmations (respiratory infection, UTI, and deep venous thrombosis). Spasms and neuropathic pain were the most prevalent complications, while deep vein thrombosis was the least reported complication as shown in Figure 1.\nPrevalence of clinical complications", "PU were followed up to assess their occurrence and healing time. The prevalence of PUs was 37.9%, and the calcaneus was the most common site of occurrence (Figure 2).\nPrevalence of PUs according to body location\nAs seen in Table 1, PUs was more prevalent for tetraplegia compared to paraplegia, and the longest healing time was observed for PUs on the sacrum and the greater trochanters. Some of the PUs did not heal during hospitalisation (the follow up period).\nLocation and time to appearance and healing of PUs in tetraplegia and paraplegia", "There are common shortfalls in most epidemiological studies from low-income countries. Like ours, they are normally hospital based and fail to account for patients who die before reaching the hospital. There is also the possibility that some injured persons stay at home and self-medicate or seek alternative traditional treatments. All of these reasons suggest that there is an under-estimation of the magnitude of TSCI in the low-income countries and thus the results of this study are not necessarily exceptional." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Study Design", "Study population and setting", "Procedure for data collection", "Data analysis", "Ethical considerations", "Results", "Socio-demographic characteristics", "Complications", "Location and time to occurrence and healing of PUs", "Discussion", "Limitation of the study", "Conclusion" ]
[ "Trauma to the spinal cord may lead to death at the scene of the accident, during the acute phase, or later due to organ failure and/or clinical complications. For the past five years, systematic reviews have shown that the annual global incidence of traumatic spinal cord injury (TSCI) ranges from 3.6 to 246.0 new cases per million population 1,2. However, there is a scarcity of epidemiological studies on TSCI in low-income countries, particularly in Africa. In the sub-Saharan region, two prospective studies in South Africa and Botswana reported annual incidences of 75.6 and 13 per million population, respectively 3, 4. Although the data were inconclusive, a review of the few available reports from low-income countries suggested that the incidence of TSCI was more than 25 cases per million population annually5.\nAs soon as TSCI has occurred, there are few days or weeks of immediate risk of death due to acute autonomic dysfunction and cardiorespiratory failure 6. With comprehensive life support and rehabilitation services, this risk of acute death decreases, and the patient stabilises physiologically and psychologically with time. However, depending on the level and severity of the injury, this acute and most critical period may be followed by persistent troubling symptoms such as neuropathic pain and spasms that affect rehabilitation and quality of life in general. Neuropathic pain and spasms are among the signs and symptoms of SCI and may persist for the rest of the person's life 7, 8. Common secondary complications that are not among the signs and symptoms of TSCI include pressure ulcers (PUs), respiratory and urinary tract infections (UTIs), and deep venous thrombosis 9. It has been shown that these secondary complications elevate the risk of death in persons with SCI even after discharge from the hospital.\nPUs is among the leading and most complex fatal complications in the SCI population 10. In one hospital-based prospective study in South Africa, PUs is registered as the most prevalent complications following SCI 11. Management of PUs is complex, and if the wound becomes infected death due to septicaemia may occur 12. Following a PU on the immobile, insensitive, and vascular diminished tissue, healing becomes difficult and susceptibility to infections increases. Septicaemia, which is one of the most common causes of death in persons with SCI results not only from unsuccessfully treated PUs, but also from UTIs or pneumonia12, 13.\nAnother route for infections is the urinary tract due to repeated or prolonged use of catheters 14. Both indwelling and intermittent catheterisation can introduce different types of bacteria to the urinary system, leading to UTIs. A recent systematic review has shown that UTIs are the leading complications among persons with SCI and are acknowledged as difficult to prevent 12. However, UTIs are not as potentially fatal as PUs. Depending on the type and extent of infection, untreated UTIs might cause irreversible destruction to the urinary system15. It is not uncommon for UTIs to ascend all the way to the kidneys, leading to nephritis and dysfunction of the vascular and urinary system.\nPulmonary complications are also common and account for a significant proportion of the mortality during or after the acute phase, especially in cases with high-level injuries and in elderly persons with TSCI 16. 
Apart from respiratory failure, which is most common in the early and most acute phase, respiratory infections such as pneumonia can be either acute or later complications. Common factors known to contribute to respiratory complications include a high spinal level, complete injury, a history of smoking, and steroid use 17. All of these TSCI-related complications are of medical and rehabilitation concern because they are associated with poor functional outcomes, high rehabilitation costs, prolonged hospital stay, anxiety, depression, poor quality of life (QoL), and death 18. However, most of these complications can be prevented with adequate knowledge, skills, and equipment. A lack of these prerequisites accounts for the reported high prevalence of complications in low-income countries. There are very few published studies in Tanzania with regard to TSCI-related complications, and thus it is difficult to inform prevention, care, and rehabilitation services. This study aimed to describe the occurrence of TSCI and the prevailing complications during hospitalisation in one rural area of Tanzania.", "Study Design This was a descriptive one-year prospective study in which data were collected from TSCI patients on admission, during hospitalisation, and at discharge.\nStudy population and setting This study targeted all newly injured persons from north-east Tanzania admitted to Kilimanjaro Christian Medical Centre (KCMC), which is the only consultant hospital with a spinal unit in the region.\nTool for data collection\nA data collection sheet based on the international SCI core data set 19, 20 was used in this study. Other variables such as associated injuries, in-hospital complications, skeletal injury, death, and ambulatory state at discharge were also added to the data collection sheet. Additionally, a chart was developed to register the anatomical location and the dates of appearance and healing of PUs 21. The American Spinal Injury Association Impairment Scale 22, 23 was used to classify the completeness of the injury.\nProcedure for data collection All new cases of TSCI were identified from the hospital admission records within the first 24 hours of admission. TSCI was confirmed by evidence of persisting sensory and/or motor loss, which might include bladder and bowel dysfunction beyond the normal spinal shock period. The injured person was followed up in the ward, and neurological assessment was conducted as a matter of routine. All patients consented to their medical data being used for this study. The data collection sheet included sociodemographic information, the cause of injury, and the address of each patient. Each patient was inspected for associated injuries and for skin changes suggestive of PUs. A PU was defined as any skin change due to sustained pressure that required any form of management, including a planned pressure relief procedure. Patients were asked to report any new complaint such as constipation, pain, or spasms. Each patient was followed up from admission to discharge or death.\nData analysis Data were cleaned and entered into the Statistical Package for Social Science (SPSS version 25) for analysis. Age, length of hospital stay (LOS), and the times to PU appearance and healing were analysed descriptively, and the results are given as means and standard deviations. Pain and spasms were subjectively reported and described by the patient. Incidence based on the Kilimanjaro region population (N = 1,910,555) was calculated after subtracting children 10 years of age and younger (N = 464,600) because none of these were admitted with TSCI in 2017.\nEthical considerations This study was part of a larger project on SCI in north-east Tanzania that had ethical approval from the research ethics review committee of KCMC, registration number 620. 
Patients were requested to consent to the use of their medical information for research purposes.", "This was a descriptive one-year prospective study in which data were collected from TSCI patients on admission, during hospitalisation, and at discharge.", "This study targeted all newly injured persons from north-east Tanzania admitted to Kilimanjaro Christian Medical Centre (KCMC), which is the only consultant hospital with a spinal unit in the region.\nTool for data collection\nA data collection sheet based on the international SCI core data set 19, 20 was used in this study. Other variables such as associated injuries, in-hospital complications, skeletal injury, death, and ambulatory state at discharge were also added to the data collection sheet. Additionally, a chart was developed to register the anatomical location and the dates of appearance and healing of PUs 21. The American Spinal Injury Association Impairment Scale 22, 23 was used to classify the completeness of the injury.", "All new cases of TSCI were identified from the hospital admission records within the first 24 hours of admission. TSCI was confirmed by evidence of persisting sensory and/or motor loss, which might include bladder and bowel dysfunction beyond the normal spinal shock period. The injured person was followed up in the ward, and neurological assessment was conducted as a matter of routine. All patients consented to their medical data being used for this study. The data collection sheet included sociodemographic information, the cause of injury, and the address of each patient. Each patient was inspected for associated injuries and for skin changes suggestive of PUs. A PU was defined as any skin change due to sustained pressure that required any form of management, including a planned pressure relief procedure. Patients were asked to report any new complaint such as constipation, pain, or spasms. Each patient was followed up from admission to discharge or death.", "Data were cleaned and entered into the Statistical Package for Social Science (SPSS version 25) for analysis. Age, length of hospital stay (LOS), and the times to PU appearance and healing were analysed descriptively, and the results are given as means and standard deviations. Pain and spasms were subjectively reported and described by the patient. Incidence based on the Kilimanjaro region population (N = 1,910,555) was calculated after subtracting children 10 years of age and younger (N = 464,600) because none of these were admitted with TSCI in 2017 (an illustrative sketch of this descriptive workflow follows the section texts below).", "This study was part of a larger project on SCI in north-east Tanzania that had ethical approval from the research ethics review committee of KCMC, registration number 620. Patients were requested to consent to the use of their medical information for research purposes.", "Socio-demographic characteristics For the year 2017, a total of 87 persons with TSCI were admitted to KCMC. Of these, 56 (64.4%) were from the Kilimanjaro region and 31 (35.6%) were from neighbouring regions. There were 69 (79.3%) males, and the mean age was 40.22 ± 15.77 years. Most injuries involved the cervical level (49 patients (56.3%)), followed by the lumbar (27 patients (31%)) and thoracic (11 patients (12.6%)) levels. In terms of the severity of the TSCI, 35 (40.20%) patients had complete (AIS “A”) and 44 (50.6%) patients had incomplete (AIS “B, C, D”) injuries, while 8 (9.20%) patients did not have any neurological deficits after 24 hours. 
The mean LOS was 120.59 ± 55.29 days for tetraplegia and 115 ± 44.89 days for paraplegia (range 8–273 days).\nCauses, annual incidence, and mortality following TSCI The majority of TSCI cases (59 patients (66.6%)) were caused by falls, and 41 (47.1%) of these were falls from height (especially from trees). Road traffic accidents led to 25 (28.7%) cases, while 4 (4.6%) cases resulted from other forms of trauma, including assaults, being struck by a falling heavy object, and being attacked by an animal.\nThe Kilimanjaro region was estimated to have a population of about 1,910,555 in 2017, based on the Tanzanian national census of 2012 and an annual population growth rate of 2.9%. However, no one aged 10 years or younger was admitted with TSCI in 2017. For this reason, this age group (464,600 persons) was considered to be at negligible risk, and thus the total at-risk population was 1,445,955 persons. The incidence of TSCI for the Kilimanjaro region was 56 cases/1,445,955 persons, giving a rate of at least 38 cases per million population per year. For this one-year period, 15 patients died while in the hospital, giving an annual in-hospital mortality rate of 17.2%.\nComplications SCI-related complications were identified by clinical observation (PUs), patient reports (spasms and neuropathic pain), and laboratory or radiological confirmation (respiratory infection, UTI, and deep venous thrombosis). Spasms and neuropathic pain were the most prevalent complications, while deep vein thrombosis was the least reported complication, as shown in Figure 1.\nPrevalence of clinical complications\nLocation and time to occurrence and healing of PUs PUs were followed up to assess their occurrence and healing time. The prevalence of PUs was 37.9%, and the calcaneus was the most common site of occurrence (Figure 2).\nPrevalence of PUs according to body location\nAs seen in Table 1, PUs were more prevalent in tetraplegia compared to paraplegia, and the longest healing times were observed for PUs on the sacrum and the greater trochanters. Some of the PUs did not heal during hospitalisation (the follow-up period).\nLocation and time to appearance and healing of PUs in tetraplegia and paraplegia", "For the year 2017, a total of 87 persons with TSCI were admitted to KCMC. Of these, 56 (64.4%) were from the Kilimanjaro region and 31 (35.6%) were from neighbouring regions. There were 69 (79.3%) males, and the mean age was 40.22 ± 15.77 years. Most injuries involved the cervical level (49 patients (56.3%)), followed by the lumbar (27 patients (31%)) and thoracic (11 patients (12.6%)) levels. In terms of the severity of the TSCI, 35 (40.20%) patients had complete (AIS “A”) and 44 (50.6%) patients had incomplete (AIS “B, C, D”) injuries, while 8 (9.20%) patients did not have any neurological deficits after 24 hours. The mean LOS was 120.59 ± 55.29 days for tetraplegia and 115 ± 44.89 days for paraplegia (range 8–273 days).\nCauses, annual incidence, and mortality following TSCI The majority of TSCI cases (59 patients (66.6%)) were caused by falls, and 41 (47.1%) of these were falls from height (especially from trees). Road traffic accidents led to 25 (28.7%) cases, while 4 (4.6%) cases resulted from other forms of trauma, including assaults, being struck by a falling heavy object, and being attacked by an animal.\nThe Kilimanjaro region was estimated to have a population of about 1,910,555 in 2017, based on the Tanzanian national census of 2012 and an annual population growth rate of 2.9%. However, no one aged 10 years or younger was admitted with TSCI in 2017. For this reason, this age group (464,600 persons) was considered to be at negligible risk, and thus the total at-risk population was 1,445,955 persons. The incidence of TSCI for the Kilimanjaro region was 56 cases/1,445,955 persons, giving a rate of at least 38 cases per million population per year. 
For this one-year period, 15 patients died while in the hospital, giving an annual in-hospital mortality rate of 17.2% (a worked sketch of these incidence and mortality calculations follows the section texts below).", "SCI-related complications were identified by clinical observation (PUs), patient reports (spasms and neuropathic pain), and laboratory or radiological confirmation (respiratory infection, UTI, and deep venous thrombosis). Spasms and neuropathic pain were the most prevalent complications, while deep vein thrombosis was the least reported complication, as shown in Figure 1.\nPrevalence of clinical complications", "PUs were followed up to assess their occurrence and healing time. The prevalence of PUs was 37.9%, and the calcaneus was the most common site of occurrence (Figure 2).\nPrevalence of PUs according to body location\nAs seen in Table 1, PUs were more prevalent in tetraplegia compared to paraplegia, and the longest healing times were observed for PUs on the sacrum and the greater trochanters. Some of the PUs did not heal during hospitalisation (the follow-up period).\nLocation and time to appearance and healing of PUs in tetraplegia and paraplegia", "This study reports a relatively high incidence of TSCI compared to the results of a systematic review of TSCI in low-income countries 24. The majority of these injuries were due to falls from height and road traffic accidents. Neuropathic pain, spasms, PUs, and UTIs were the most frequently reported complications among persons with TSCI in this region during hospitalisation.\nThis study registered a higher incidence rate of traumatic spinal cord injury in the Kilimanjaro region than those reported for other low- and middle-income settings 4, 5, 25, 26, but lower than a study in Cape Town, South Africa 3. The majority of these injuries resulted from falls and road traffic accidents. Identification of factors associated with falls and road traffic accidents, such as alcohol consumption prior to the incident and environmental and occupational hazards, is a prerequisite to the prevention of TSCI 27. Alongside trauma-prevention strategies, community-based evacuation and transportation services for injured persons ought to be developed, together with up-to-date trauma registries in these settings, in order to obtain more conclusive statistics with fewer missing cases 1.\nWe report neuropathic pain and spasms as the most prevalent complications in persons with TSCI. These complications lead to massive functional limitations, hinder active rehabilitation, cause psychosocial problems, and lower one's perceived QoL 18. As was shown in studies in New Zealand and Latvia, neuropathic pain normally persists even after discharge from the hospital, suggesting that these patients might face life-long consequences 28, 29. The description and treatment of neuropathic pain are complex, especially when it occurs concurrently with spasms 9. It is strongly recommended that pain be managed appropriately as early as possible to minimise the risk of chronicity 30. However, the unavailability and unaffordability of appropriate medications for neuropathic pain and spasms in most low-income settings deter rehabilitation efforts. In addition to the paralysis, these two complications affect movement and functional posture, and hence the individual becomes dependent in most activities of daily living. In the long term, chronic pain continues to limit the function of the individual even after discharge from the hospital. At the same time, spasms may keep a limb in a dominant position, leading to contractures. 
For this reason, the QoL of persons with TSCI is continuously affected by pain and spasms even after discharge from the hospital 31, and an interdisciplinary approach is required that includes medication, pain counselling, occupational therapy, and physiotherapy.\nPUs are among the most prevalent and deadliest complications globally and account for prolonged hospital stays and high rehabilitation costs 32. Although PUs remain prevalent globally, with the same physical risk factors, they are more prevalent in low-income countries for several reasons 33. First, there is insufficient knowledge of and skill in the early identification and prevention of PUs. Second, there is insufficient pressure-management equipment such as appropriate mattresses and cushions. There are also insufficient numbers of nursing staff to carry out routine turning of patients as recommended. As a result, PUs remain prevalent and contribute significantly to high rehabilitation costs, prolonged hospitalisation, and increased mortality in low- and middle-income countries 33. For a person with PUs, active rehabilitation such as exercising and training for mobility and activities of daily living is limited. For example, PUs around the sacrum and pelvis lead to restricted sitting time, so the individual spends most of their time lying in bed. Prolonged bed rest not only affects functioning but also adds to the risk of further PUs elsewhere on the body. Prevention of PUs and successful treatment of those that occur enhance early active rehabilitation and reduce LOS and mortality. Although prevention of PUs is costly, the cost of living with or treating one once it has occurred can be unbearable for persons living in a low-income setting.\nAnother complication of major concern in this study was UTIs, which are common not only in low-income but also in middle- and high-income countries 34. It is almost impossible to avoid UTIs entirely, considering that the majority of persons with SCI use catheters for bladder emptying at some point in, or throughout, their life. However, the method of catheter use and the level of sterility determine the chances of contracting a UTI. For example, a person who uses disposable catheters for intermittent catheterisation has lower odds of a UTI compared to a person re-using the same catheter, owing to the danger of contamination 35, 36. In the same way, a person with TSCI who uses an indwelling catheter has a higher risk of UTI than one who uses clean intermittent catheterisation 37. Indwelling catheterisation is also known for leaving residual urine in the bladder, which contributes to the formation of bladder stones and to reflux that might damage the kidneys. Properly conducted self-performed intermittent catheterisation, or a suprapubic catheter for persons who cannot void spontaneously, carries lower odds of UTIs and leaves hardly any residual urine. Unfortunately, a large proportion of persons with TSCI in low-income countries either use indwelling catheterisation or have to clean and re-use a catheter, which exposes them to a higher risk of UTI. While clean intermittent self-catheterisation is still the most recommended method, disposable catheters are too costly in low-income countries, and thus efforts should be focused on ensuring cleanliness when catheters are used repeatedly. Like other complications, UTIs are also associated with high rehabilitation costs and poor QoL among persons with TSCI globally 38. 
With symptoms of UTI such as fever, the patient's ability to engage in exercises or other forms of active rehabilitation is limited.", "There are common shortfalls in most epidemiological studies from low-income countries. Like ours, they are usually hospital-based and fail to account for patients who die before reaching the hospital. There is also the possibility that some injured persons stay at home and self-medicate or seek alternative traditional treatments. All of these reasons suggest that the magnitude of TSCI in low-income countries is under-estimated, and thus the results of this study are not necessarily exceptional.", "TSCI has a relatively high incidence in the north-east region of Tanzania compared to published studies from other low-income settings. In-hospital complications are highly prevalent and may partly explain the observed long LOS and increased mortality. Such complications adversely affect rehabilitation and activities of daily living, leading to dependency and low QoL." ]
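The incidence and mortality figures quoted in the Results follow from simple arithmetic on the reported counts. The sketch below reproduces that arithmetic; the choice of Python is an editorial illustration only (the original analysis was done in SPSS), and every number is taken directly from the article: 56 regional cases, a projected regional population of 1,910,555 with 464,600 children aged 10 years or younger excluded, and 15 in-hospital deaths among 87 admissions.

```python
# Sketch of the incidence and in-hospital mortality calculations reported in the Results.
# All figures come from the article; Python is used here purely for illustration.

regional_population_2017 = 1_910_555   # projected from the 2012 census at 2.9% annual growth
children_10_or_younger   = 464_600     # excluded: no admissions in this age group in 2017
regional_cases_2017      = 56          # TSCI cases resident in the Kilimanjaro region

at_risk_population = regional_population_2017 - children_10_or_younger   # 1,445,955
incidence_per_million = regional_cases_2017 / at_risk_population * 1_000_000

admissions_2017    = 87                # all TSCI admissions, including neighbouring regions
in_hospital_deaths = 15
mortality_rate_pct = in_hospital_deaths / admissions_2017 * 100

print(f"At-risk population: {at_risk_population:,}")              # 1,445,955
print(f"Incidence: {incidence_per_million:.1f} per million/year") # ~38.7
print(f"In-hospital mortality: {mortality_rate_pct:.1f}%")        # ~17.2%
```

Rounding the incidence down gives the "at least 38 cases per million population per year" figure quoted in the text.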
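The Data analysis subsection describes a purely descriptive workflow: means and standard deviations for age, LOS, and PU timing, computed in SPSS. As a minimal, hedged illustration of that kind of summary, the pandas sketch below groups records by injury type and reports mean ± SD; the data frame is entirely hypothetical and contains no study data.

```python
# Hypothetical illustration of the descriptive summaries described in the Data analysis
# subsection (means and standard deviations by injury group). All values are placeholders.
import pandas as pd

records = pd.DataFrame({
    "injury_group": ["tetraplegia", "tetraplegia", "paraplegia", "paraplegia"],
    "age_years":    [35, 52, 28, 44],      # placeholder ages
    "los_days":     [130, 95, 110, 140],   # placeholder lengths of stay
})

summary = records.groupby("injury_group")[["age_years", "los_days"]].agg(["mean", "std"])
print(summary.round(2))
```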
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, "discussion", null, "conclusions" ]
[ "Rehabilitation", "pressure ulcer", "spasm", "urinary tract infections", "low income countries" ]
Background: Clinical complications following spinal cord injury are a major concern because they account for increased rehabilitation costs, poor outcomes, and mortality. Methods: Prospective data were collected from all persons with traumatic spinal cord injury from North-East Tanzania from their admission to discharge from the hospital. Neurological progress and complications were assessed routinely. Data were captured using a form that incorporated the components of the core data set of the International Spinal Cord Society and were analysed descriptively. Results: A total of 87 persons with traumatic spinal cord injury were admitted to the hospital, with a mean age of 40.2 ± 15.8 years. There were 69 (79.3%) males, and 58 (66.6%) of the injuries resulted from falls. Spasms (41 patients, 47.1%), neuropathic pain (40 patients, 46%), and constipation (35 patients, 40.2%) were the most commonly reported complications. The annual incidence rate in the Kilimanjaro region was at least 38 cases per million. Conclusions: The incidence of traumatic spinal cord injury in the Kilimanjaro region is relatively high. In-hospital complications are prevalent and are worth addressing for successful rehabilitation.
Introduction: Trauma to the spinal cord may lead to death at the scene of the accident, during the acute phase, or later due to organ failure and/or clinical complications. For the past five years, systematic reviews have shown that the annual global incidence of traumatic spinal cord injury (TSCI) ranges from 3.6 to 246.0 new cases per million population 1,2. However, there is a scarcity of epidemiological studies on TSCI in low-income countries, particularly in Africa. In the sub-Saharan region, two prospective studies in South Africa and Botswana reported annual incidences of 75.6 and 13 per million population, respectively 3, 4. Although the data were inconclusive, a review of the few available reports from low-income countries suggested that the incidence of TSCI was more than 25 cases per million population annually5. As soon as TSCI has occurred, there are few days or weeks of immediate risk of death due to acute autonomic dysfunction and cardiorespiratory failure 6. With comprehensive life support and rehabilitation services, this risk of acute death decreases, and the patient stabilises physiologically and psychologically with time. However, depending on the level and severity of the injury, this acute and most critical period may be followed by persistent troubling symptoms such as neuropathic pain and spasms that affect rehabilitation and quality of life in general. Neuropathic pain and spasms are among the signs and symptoms of SCI and may persist for the rest of the person's life 7, 8. Common secondary complications that are not among the signs and symptoms of TSCI include pressure ulcers (PUs), respiratory and urinary tract infections (UTIs), and deep venous thrombosis 9. It has been shown that these secondary complications elevate the risk of death in persons with SCI even after discharge from the hospital. PUs is among the leading and most complex fatal complications in the SCI population 10. In one hospital-based prospective study in South Africa, PUs is registered as the most prevalent complications following SCI 11. Management of PUs is complex, and if the wound becomes infected death due to septicaemia may occur 12. Following a PU on the immobile, insensitive, and vascular diminished tissue, healing becomes difficult and susceptibility to infections increases. Septicaemia, which is one of the most common causes of death in persons with SCI results not only from unsuccessfully treated PUs, but also from UTIs or pneumonia12, 13. Another route for infections is the urinary tract due to repeated or prolonged use of catheters 14. Both indwelling and intermittent catheterisation can introduce different types of bacteria to the urinary system, leading to UTIs. A recent systematic review has shown that UTIs are the leading complications among persons with SCI and are acknowledged as difficult to prevent 12. However, UTIs are not as potentially fatal as PUs. Depending on the type and extent of infection, untreated UTIs might cause irreversible destruction to the urinary system15. It is not uncommon for UTIs to ascend all the way to the kidneys, leading to nephritis and dysfunction of the vascular and urinary system. Pulmonary complications are also common and account for a significant proportion of the mortality during or after the acute phase, especially in cases with high-level injuries and in elderly persons with TSCI 16. 
Apart from respiratory failure, which is most common in the early and most acute phase, respiratory infections such as pneumonia can be either acute or later complications. Common factors known to contribute to respiratory complications include a high spinal level of injury, complete injury, a history of smoking, and steroid use 17. All of these TSCI-related complications are of medical and rehabilitation concern because they are associated with poor functional outcomes, high rehabilitation costs, prolonged hospital stay, anxiety, depression, poor quality of life (QoL), and death 18. However, most of these complications can be prevented with adequate knowledge, skills, and equipment. Lack of these prerequisites accounts for the reported high prevalence of complications in low-income countries. There are very few published studies in Tanzania with regard to TSCI-related complications, which makes it difficult to inform prevention, care, and rehabilitation services. This study aimed to describe the occurrence of TSCI and the prevailing complications during hospitalisation in one rural area of Tanzania. Conclusion: TSCI has a relatively high incidence in the north-east region of Tanzania compared with published studies from other low-income settings. In-hospital complications are substantially prevalent and might be reflected in the observed long length of stay (LOS) and increased mortality. Such complications adversely affect rehabilitation and activities of daily living, leading to dependency and low QoL.
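For context on how region-level rates such as the "at least 38 cases per million" figure above are derived, the following is a minimal sketch of the standard crude-incidence calculation. The case count, catchment population, and observation period in the example are hypothetical placeholders and are not values reported in the study.

```python
# Illustrative sketch (not from the study): crude annual incidence per million population.
def incidence_per_million(new_cases: int, population: float, years: float) -> float:
    """Crude annual incidence rate per 1,000,000 population."""
    person_years = population * years
    return new_cases / person_years * 1_000_000

# Example with made-up inputs: 60 incident cases observed over 1 year
# in a hypothetical catchment population of 1.6 million people.
print(round(incidence_per_million(60, 1_600_000, 1.0), 1))  # -> 37.5 cases per million per year
```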
Background: Clinical complications following spinal cord injury are a major concern because they contribute to increased rehabilitation costs, poor outcomes, and mortality. Methods: Prospective data were collected from all persons with traumatic spinal cord injury in North-East Tanzania from admission to discharge from the hospital. Neurological progress and complications were assessed routinely. Data were captured using a form that incorporated the components of the core data set of the International Spinal Cord Society and were analysed descriptively. Results: A total of 87 persons with traumatic spinal cord injury were admitted to the hospital, with a mean age of 40.2 ± 15.8 years. There were 69 (79.3%) males, and 58 (66.6%) of the injuries resulted from falls. Spasms (41 patients, 47.1%), neuropathic pain (40 patients, 46%), and constipation (35 patients, 40.2%) were the most commonly reported complications. The annual incidence rate in the Kilimanjaro region was at least 38 cases per million. Conclusions: The incidence of traumatic spinal cord injury in the Kilimanjaro region is relatively high. In-hospital complications are prevalent and are worth addressing for successful rehabilitation.
5,319
226
[ 27, 131, 173, 103, 51, 407, 69, 107, 92 ]
14
[ "tsci", "pus", "patients", "complications", "data", "study", "persons", "hospital", "pain", "cases" ]
[ "introduction trauma spinal", "spinal injury association", "rate traumatic spinal", "spinal cord injury", "incidence traumatic spinal" ]
[CONTENT] Rehabilitation | pressure ulcer | spasm | urinary tract infections | low income countries [SUMMARY]
[CONTENT] Rehabilitation | pressure ulcer | spasm | urinary tract infections | low income countries [SUMMARY]
[CONTENT] Rehabilitation | pressure ulcer | spasm | urinary tract infections | low income countries [SUMMARY]
[CONTENT] Rehabilitation | pressure ulcer | spasm | urinary tract infections | low income countries [SUMMARY]
[CONTENT] Rehabilitation | pressure ulcer | spasm | urinary tract infections | low income countries [SUMMARY]
[CONTENT] Rehabilitation | pressure ulcer | spasm | urinary tract infections | low income countries [SUMMARY]
[CONTENT] Adult | Databases, Factual | Female | Hospitalization | Humans | Male | Middle Aged | Prospective Studies | Spinal Cord Injuries | Tanzania [SUMMARY]
[CONTENT] Adult | Databases, Factual | Female | Hospitalization | Humans | Male | Middle Aged | Prospective Studies | Spinal Cord Injuries | Tanzania [SUMMARY]
[CONTENT] Adult | Databases, Factual | Female | Hospitalization | Humans | Male | Middle Aged | Prospective Studies | Spinal Cord Injuries | Tanzania [SUMMARY]
[CONTENT] Adult | Databases, Factual | Female | Hospitalization | Humans | Male | Middle Aged | Prospective Studies | Spinal Cord Injuries | Tanzania [SUMMARY]
[CONTENT] Adult | Databases, Factual | Female | Hospitalization | Humans | Male | Middle Aged | Prospective Studies | Spinal Cord Injuries | Tanzania [SUMMARY]
[CONTENT] Adult | Databases, Factual | Female | Hospitalization | Humans | Male | Middle Aged | Prospective Studies | Spinal Cord Injuries | Tanzania [SUMMARY]
[CONTENT] introduction trauma spinal | spinal injury association | rate traumatic spinal | spinal cord injury | incidence traumatic spinal [SUMMARY]
[CONTENT] introduction trauma spinal | spinal injury association | rate traumatic spinal | spinal cord injury | incidence traumatic spinal [SUMMARY]
[CONTENT] introduction trauma spinal | spinal injury association | rate traumatic spinal | spinal cord injury | incidence traumatic spinal [SUMMARY]
[CONTENT] introduction trauma spinal | spinal injury association | rate traumatic spinal | spinal cord injury | incidence traumatic spinal [SUMMARY]
[CONTENT] introduction trauma spinal | spinal injury association | rate traumatic spinal | spinal cord injury | incidence traumatic spinal [SUMMARY]
[CONTENT] introduction trauma spinal | spinal injury association | rate traumatic spinal | spinal cord injury | incidence traumatic spinal [SUMMARY]
[CONTENT] tsci | pus | patients | complications | data | study | persons | hospital | pain | cases [SUMMARY]
[CONTENT] tsci | pus | patients | complications | data | study | persons | hospital | pain | cases [SUMMARY]
[CONTENT] tsci | pus | patients | complications | data | study | persons | hospital | pain | cases [SUMMARY]
[CONTENT] tsci | pus | patients | complications | data | study | persons | hospital | pain | cases [SUMMARY]
[CONTENT] tsci | pus | patients | complications | data | study | persons | hospital | pain | cases [SUMMARY]
[CONTENT] tsci | pus | patients | complications | data | study | persons | hospital | pain | cases [SUMMARY]
[CONTENT] complications | acute | utis | urinary | death | infections | tsci | rehabilitation | life | pus [SUMMARY]
[CONTENT] data | data collection | collection | admission | study | injury | data collection sheet | sheet | collection sheet | patient [SUMMARY]
[CONTENT] pus | patients | cases | year | tsci | 56 | population | days | paraplegia | rate [SUMMARY]
[CONTENT] low | complications | daily living leading dependency | living leading | substantially prevalent reflect | substantially prevalent | substantially | compared published studies low | compared published studies | compared published [SUMMARY]
[CONTENT] pus | tsci | patients | complications | data | study | low | admission | hospital | pain [SUMMARY]
[CONTENT] pus | tsci | patients | complications | data | study | low | admission | hospital | pain [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] North-East Tanzania ||| ||| the International Spinal Cord Society [SUMMARY]
[CONTENT] 87 | 40.2 | 15.8 years ||| 69 | 79.3% | 58 | 66.6% ||| 41 | 47.1% | 40 | 46% | 35 | 40.2% ||| annual | Kilimanjaro | at least 38 [SUMMARY]
[CONTENT] Kilimanjaro ||| [SUMMARY]
[CONTENT] ||| North-East Tanzania ||| ||| the International Spinal Cord Society ||| 87 | 40.2 | 15.8 years ||| 69 | 79.3% | 58 | 66.6% ||| 41 | 47.1% | 40 | 46% | 35 | 40.2% ||| annual | Kilimanjaro | at least 38 ||| Kilimanjaro ||| [SUMMARY]
[CONTENT] ||| North-East Tanzania ||| ||| the International Spinal Cord Society ||| 87 | 40.2 | 15.8 years ||| 69 | 79.3% | 58 | 66.6% ||| 41 | 47.1% | 40 | 46% | 35 | 40.2% ||| annual | Kilimanjaro | at least 38 ||| Kilimanjaro ||| [SUMMARY]
Association between Nonalcoholic Fatty Liver Disease and Carotid Artery Disease in a Community-Based Chinese Population: A Cross-Sectional Study.
30246712
Nonalcoholic fatty liver disease (NAFLD) is one of the most common chronic liver diseases, with a high prevalence in the general population. The association between NAFLD and cardiovascular disease has been well addressed in previous studies. However, whether NAFLD is associated with carotid artery disease in a community-based Chinese population remains unclear. The aim of this study was to investigate the association between NAFLD and carotid artery disease.
BACKGROUND
A total of 2612 participants (1091 men and 1521 women) aged 40 years and older from Jidong of Tangshan city (China) were selected for this study. NAFLD was diagnosed by abdominal ultrasonography. The presence of carotid stenosis or plaque was evaluated by carotid artery ultrasonography. Logistic regression was used to analyze the association between NAFLD and carotid artery disease.
METHODS
Participants with NAFLD have a higher prevalence of carotid stenosis (12.9% vs. 4.6%) and carotid plaque (21.9% vs. 15.0%) than those without NAFLD. After adjusting for age, gender, smoking status, income, physical activity, diabetes, hypertension, triglyceride, waist-hip ratio, and high-density lipoprotein, NAFLD is significantly associated with carotid stenosis (odds ratio [OR]: 2.06, 95% confidence interval [CI]: 1.45-2.91), but the association between NAFLD and carotid plaque is not statistically significant (OR: 1.10, 95% CI: 0.8-1.40).
RESULTS
A significant association between NAFLD and carotid stenosis is found in a Chinese population.
CONCLUSION
[ "Adult", "Carotid Artery Diseases", "Carotid Intima-Media Thickness", "China", "Cross-Sectional Studies", "Female", "Humans", "Male", "Middle Aged", "Non-alcoholic Fatty Liver Disease", "Prevalence", "Risk Factors" ]
6166459
INTRODUCTION
Nonalcoholic fatty liver disease (NAFLD) has become a major public health concern in developing countries, and it has emerged as the most common chronic liver disease around the world, with a prevalence of 20–30% in the general population.[1] The histological spectrum of NAFLD ranges from simple steatosis to steatohepatitis and cirrhosis. NAFLD is also recognized as the hepatic manifestation of metabolic syndrome, which includes obesity, type 2 diabetes mellitus, dyslipidemia, and hypertension. The close association between NAFLD and metabolic syndrome has been demonstrated previously.[2] Indeed, NAFLD is considered another component of the metabolic syndrome, which is a key mediator of the relationship between NAFLD and cardiovascular disease. The strong association between NAFLD and metabolic syndrome has drawn attention to its putative role in the occurrence and development of cardiovascular disease.[3] Carotid atherosclerosis, a common carotid artery disease, is recognized as one of the major cardiovascular diseases. Recently, NAFLD has been suspected of being associated with an increased risk of cardiovascular disease, including exacerbation of carotid atherosclerosis and coronary artery disease (CAD).[4] NAFLD patients might be at heightened risk of cardiovascular disease, especially carotid artery disease.[5] The presence of carotid plaque, a clinical marker of early atherosclerosis, is an indicator of increased risk of cardiovascular disease.[67] In addition, carotid stenosis is an important risk factor for transient ischemic attacks and strokes and correlates well with cardiovascular disease.[8] We used carotid stenosis and carotid plaque, measured by carotid ultrasound, as surrogate markers of carotid artery disease. Therefore, the aim of this study was to investigate the association between NAFLD and carotid artery disease.
METHODS
Ethical approval The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Jidong Oilfield Inc., Medical Centers. Informed written consent was obtained from all participants before their enrollment in this study. Study design and population From July 2013 to August 2014, all residents aged ≥18 years from the Jidong community were invited to participate in this study at the time of their regular annual physical examination performed at the Jidong Oilfield Hospital. Almost 9078 residents completed a standard questionnaire, underwent physical examinations and laboratory assessments, and provided informed consent at recruitment.[9] The Jidong community is located in Caofeidian District, which is in the south of Tangshan city and near the Bohai Sea. Detailed information about the research has been described previously.[1011] Among this resident population, those aged ≥40 years, without missing data on ultrasonography, who signed informed consent were selected as participants. Furthermore, they had to meet the following criteria: (1) no history of cancer, stroke, atrial fibrillation, heart failure, or myocardial infarction; (2) no excessive alcohol consumption (men: ≥20 g/day and women: ≥10 g/day for more than a year); (3) no history of positive HBsAg; and (4) complete information available. Measurement of nonalcoholic fatty liver disease Liver ultrasonography was performed in participants aged ≥40 years using a high-resolution B-mode topographic ultrasound system with a 3.5 MHz probe (ACUSON X300, Siemens, Germany) to assess the prevalence of NAFLD. Compared to histology, ultrasonography had a sensitivity of 85% and a specificity of 94% in detecting fatty liver disease.[12] According to conventional criteria, fatty liver disease was diagnosed through characteristic echo patterns, such as diffusely increased liver near-field ultrasound echo (bright liver), liver echo greater than that of the kidney, vascular blurring, and gradual attenuation of the far-field ultrasound echo.[13] In addition, abdominal ultrasonography scanning was performed by well-trained sonographers who were unaware of the clinical presentation and laboratory findings of participants throughout the ultrasonic examination. Assessment of carotid stenosis and carotid plaque For the evaluation of the prevalence of carotid stenosis, all participants (≥40 years) underwent bilateral carotid duplex sonography in a supine position by expert operators who were blinded to the goal of the study, clinical data, and laboratory findings. The severity of carotid stenosis was graded according to the Society of Radiologists in Ultrasound Consensus Conference.[14] The categories were classified as normal (no stenosis) and carotid stenosis (<50% stenosis; ≥50% stenosis or occlusion). In light of the established ultrasound criteria, (1) normal (no stenosis) was defined as an internal carotid artery (ICA) peak systolic velocity (PSV) of less than 125 cm/s with no visible plaque or intimal thickening; (2) <50% stenosis was defined as an ICA PSV of less than 125 cm/s with visible plaque or intimal thickening; and (3) ≥50% stenosis or occlusion was considered present when the ICA PSV was greater than 125 cm/s and plaque was visible, or when there was no detectable patent lumen on gray-scale ultrasonography and no flow on spectral, power, and color Doppler ultrasonography.[101516] Moreover, the higher value (left or right) was considered for analysis if bilateral stenosis was present. To assess the complexity and stability of carotid plaque, ultrasound examination (Philips iU22 ultrasound system, Philips Medical Systems, Bothell, WA, USA) was also performed by well-trained and certified sonographers, and the results of the examination were reviewed by two independent operators. Carotid plaque was defined as a focal structure encroaching into the arterial lumen by at least 0.5 mm or 50% of the surrounding intima-media thickness (IMT) value, or demonstrating a thickness of 1.5 mm from the intima-lumen interface to the media–adventitia interface.[17] In this study, we classified carotid plaque as normal (no plaque), stable plaque (plaques with a uniform texture, a smooth and regular surface, and high-level or homogeneous echoes), and unstable plaque (plaques with an incomplete fibrous cap or ulceration and plaques with low-level or heterogeneous echoes) according to their stability.[18] Both longitudinal and transverse images of the bilateral carotid arteries were obtained to evaluate plaques extensively, and differences between the two evaluations were resolved by consensus. 
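The grading rules above amount to a small decision procedure on two ultrasound findings (ICA PSV and visibility of plaque or intimal thickening). The sketch below simply restates those published cut-offs in code for clarity; it is not the authors' software, and the function name and input encoding are hypothetical.

```python
# Illustrative restatement of the carotid stenosis grading rules described above.
# Not the authors' code; names and input encoding are hypothetical.
def grade_carotid_stenosis(ica_psv_cm_s: float,
                           plaque_or_thickening_visible: bool,
                           patent_lumen_detectable: bool = True) -> str:
    """Return 'normal', '<50% stenosis', or '>=50% stenosis or occlusion'."""
    if not patent_lumen_detectable:
        # No detectable patent lumen (and no flow on Doppler) counts as occlusion.
        return ">=50% stenosis or occlusion"
    if ica_psv_cm_s > 125 and plaque_or_thickening_visible:
        return ">=50% stenosis or occlusion"
    if ica_psv_cm_s < 125 and plaque_or_thickening_visible:
        return "<50% stenosis"
    if ica_psv_cm_s < 125 and not plaque_or_thickening_visible:
        return "normal"
    # A PSV of exactly 125 cm/s is not covered by the published cut-offs.
    return "unclassified"

print(grade_carotid_stenosis(110, False))  # -> normal
print(grade_carotid_stenosis(110, True))   # -> <50% stenosis
print(grade_carotid_stenosis(140, True))   # -> >=50% stenosis or occlusion
```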
Assessment of potential covariates Information on demographic characteristics (age, sex, income, physical activity, and smoking status) and clinical characteristics (waist-hip ratio (WHR), hypertension, and diabetes) was collected using standardized questionnaires.[9] Biochemical variables, including triglyceride (TG), total cholesterol (TC), high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C), were measured using standard methods at the Central Laboratory of Jidong Oilfield Hospital.[10] According to self-reported smoking status, participants were divided into three categories: never (<100 cigarettes in their entire life), past, and current smoker. Education level was also classified into three categories: primary school or below, middle or high school, and college or above. Physical activity was classified as inactive, moderately active, or active. WHR was calculated as waist circumference (cm) divided by hip circumference (cm) and was used as a measure of abdominal obesity. Hypertension was defined as a self-reported history of hypertension, use of antihypertensive medication, or systolic blood pressure ≥140 mmHg and diastolic blood pressure ≥90 mmHg.[10] Diabetes was defined as fasting blood glucose ≥7.0 mmol/L, current treatment with insulin or oral hypoglycemic agents, or a history of diabetes mellitus.[111920] Statistical analyses Considering that the prevalence of NAFLD and carotid stenosis is 25% and 12%, respectively, at two-sided α = 0.05, power = 0.80, and an odds ratio (OR) of 1.50, the required sample size was estimated to be 2360 (PASS11, NCSS, LLC, 329 North 1000 East, Kaysville, Utah 84037, USA).[21] The normality of continuous variables was tested by the one-sample Kolmogorov-Smirnov test. Continuous variables with a normal distribution were presented as mean ± standard deviation and compared using the t-test or analysis of variance (ANOVA); otherwise, they were presented as median (interquartile range) and compared by the corresponding nonparametric methods. Frequencies and percentages were used to describe categorical variables, and the Chi-squared test was applied to compare groups. Logistic regression was used to calculate ORs and 95% confidence intervals (CIs) and to determine the association between NAFLD and carotid stenosis or plaque. After adjusting for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C, the association between different severities of carotid stenosis and NAFLD, as well as the relationship between carotid plaque of different stability and NAFLD, were also investigated. The association of NAFLD with carotid stenosis or plaque was also examined in analyses stratified by age and gender. Statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, North Carolina, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant.
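As a point of reference for the modelling described above, the snippet below sketches how an adjusted odds ratio with a 95% CI can be obtained from a logistic regression. It assumes a pandas DataFrame with hypothetical column names (nafld, carotid_stenosis, and the listed covariates) and uses statsmodels rather than the SAS 9.4 software the authors report; it is a minimal sketch under those assumptions, not their analysis code.

```python
# Minimal sketch (hypothetical column names, not the authors' SAS code):
# adjusted odds ratio for carotid stenosis associated with NAFLD.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_or(df: pd.DataFrame) -> pd.DataFrame:
    # 'df' is assumed to hold one row per participant, with 0/1 coding for binary variables.
    model = smf.logit(
        "carotid_stenosis ~ nafld + age + C(gender) + C(smoking) + C(income)"
        " + C(physical_activity) + diabetes + hypertension + whr + tg + hdl_c",
        data=df,
    ).fit(disp=False)
    params = model.params
    conf = model.conf_int()  # 95% CI on the log-odds scale
    out = pd.DataFrame({
        "OR": np.exp(params),
        "CI_lower": np.exp(conf[0]),
        "CI_upper": np.exp(conf[1]),
    })
    return out.loc[["nafld"]]  # adjusted odds ratio for NAFLD
```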
RESULTS
Baseline characteristics of study participants From the initial sample of 9078 participants, 3396 residents aged ≥40 years and with complete data on the ultrasonography examination were selected as the study sample. Among these 3396 participants, 784 were excluded for the following reasons: 168 had a history of myocardial infarction, heart failure, stroke, atrial fibrillation, or cancer; 369 had excessive alcohol consumption; 131 had a history of positive HBsAg; and 116 did not have complete information. Finally, 2612 participants were included in the final analysis. Demographic data, clinical characteristics, and biochemical variables of all participants are presented in Table 1. Among the 2612 participants, the mean age was 53.6 ± 8.6 years and 41.8% (n = 1091) were men. NAFLD was present in 1375 participants, comprising 707 men (64.8% of all men) and 668 women (43.9% of all women). In total, 342 (24.9%) participants with NAFLD were current smokers. Biochemical parameters including WHR, TG, and TC were higher in participants with NAFLD than in those without NAFLD. In addition, a higher prevalence of diabetes and hypertension was found in the NAFLD group. Participants with NAFLD had a higher prevalence of carotid stenosis than those without NAFLD (<50% stenosis: 17% vs. 11.2%; ≥50% stenosis: 13.0% vs. 4.6%). A higher prevalence of unstable plaque (19.4% vs. 12.8%) was also demonstrated in participants with NAFLD than in those without NAFLD. Table 1: Baseline characteristics of participants enrolled in this study. *: t values; †: χ2 values; ‡: Z values. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; TC: Total cholesterol; HDL-C: High-density lipoprotein-cholesterol; LDL-C: Low-density lipoprotein-cholesterol; CIMT: Carotid intima-media thickness; SD: Standard deviation; IQR: Interquartile range. Association between carotid stenosis and nonalcoholic fatty liver disease Among the 1375 participants with NAFLD, 12.9% (178/1375) met the diagnostic criteria for carotid stenosis. After adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C, the association between NAFLD and carotid stenosis was statistically significant (OR: 2.06, 95% CI: 1.45–2.91). In age- and gender-stratified analyses, the association between NAFLD and carotid stenosis remained positive and significant in all groups (female: OR: 2.34, 95% CI: 1.29–4.24; male: OR: 1.89, 95% CI: 1.22–2.91; 40–60 years: OR: 1.76, 95% CI: 1.15–2.70; >60 years: OR: 3.12, 95% CI: 1.62–6.02) [Figure 1]. A positive association was also observed between carotid stenosis (≥50% stenosis) and NAFLD when the severity of carotid stenosis was classified with normal as the reference group (OR: 2.06, 95% CI: 1.45–2.93, P < 0.01) [Table 2]. Figure 1: The prevalence of carotid stenosis or plaque in overall participants and in participants with and without NAFLD; OR with 95% CI of NAFLD for carotid stenosis or plaque in total, gender, and age categories. Covariates for the total group included age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; covariates for the gender subgroups included age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; covariates for the age subgroups included gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; OR: Odds ratio; CI: Confidence interval; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol. Table 2: Association between different severities of carotid stenosis and NAFLD stratified by gender and age. Data are presented as ORs (95% CIs). *Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †Gender subgroups adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡Age subgroups adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios. Relationship between carotid plaque and nonalcoholic fatty liver disease NAFLD was not significantly associated with carotid plaque among all participants (OR: 1.10, 95% CI: 0.8–1.40) after adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C [Figure 1]. There was also no significant correlation between carotid plaque and NAFLD after stratification by age and gender. Moreover, carotid plaque was classified according to stability; the association between unstable or stable plaque and NAFLD remained statistically non-significant (unstable plaque: OR: 1.13, 95% CI: 0.87–1.35; stable plaque: OR: 0.94, 95% CI: 0.53–1.67) after adjustment for potential confounders [Table 3]. Table 3: Association between carotid plaque of different stability and NAFLD stratified by gender and age. Data are presented as ORs (95% CIs). *Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †Gender subgroups adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡Age subgroups adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.
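The adjusted odds ratios above are reported with Wald-type confidence intervals. The short calculation below shows how an OR and its 95% CI relate to the underlying log-odds coefficient; the coefficient and standard error are back-calculated approximations from the reported OR of 2.06 (95% CI: 1.45–2.91), not values taken from the article.

```python
# Back-of-the-envelope check (approximate, derived from the reported figures):
# OR = exp(beta), 95% CI = exp(beta ± 1.96 * SE).
import math

or_point, ci_low, ci_high = 2.06, 1.45, 2.91
beta = math.log(or_point)                                  # ~0.72
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)   # ~0.18

print(round(math.exp(beta - 1.96 * se), 2),  # -> 1.45
      round(math.exp(beta + 1.96 * se), 2))  # -> 2.92 (close to the reported 2.91; difference is rounding)
```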
null
null
[ "Ethical approval", "Study design and population", "Measurement of nonalcoholic fatty liver disease", "Assessment of carotid stenosis and carotid plaque", "Assessment of potential covariates", "Statistical analyses", "Baseline characteristics of study participants", "Association between carotid stenosis and nonalcoholic fatty liver disease", "Relationship between carotid plaque and nonalcoholic fatty liver disease", "Financial support and sponsorship" ]
[ "The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Jidong Oilfield Inc., Medical Centers. Informed written consent was obtained from all participants before their enrollment in this study.", "From July 2013 to August 2014, all residents aged ≥18 years from Jidong community were invited to participate in this study at the time of their regular annual physical examination performed at the Jidong Oilfield Hospital. Almost 9078 residents completed a standard questionnaire, underwent physical examinations and laboratory assessments, and provided informed consent at recruitment.[9] The Jidong community is located in Caofeidian District which is in the south of Tangshan city and near the Bohai Sea. The detail information about the research has been described in the past.[1011] Among these resident population, those aged ≥40 years, without missing data on ultrasonography, who signed informed consent were selected as participants. Furthermore, they should meet the following criteria: (1) no history of cancer, stroke, atrial fibrillation, heart failure, or myocardial infarction; (2) without excessive alcohol consumption (men: ≥20 g/day and women: ≥10 g/day for more than a year); (3) absence of a history of positive HBsAg; and (4) with complete information.", "Liver ultrasonography was performed in participants aged ≥40 years using a high-resolution B-mode topographic ultrasound system with a 3.5 MHz probe (ACUSON X300, Siemens, Germany) to assess the prevalence of NAFLD. Compared to histology, ultrasonography had a sensitivity of 85% and a specificity of 94% in detecting fatty liver disease.[12] According to conventional criteria, fatty liver disease was diagnosed through characteristic echo patterns, such as diffusely increased liver near-field ultrasound echo (bright liver); liver echo was greater than kidney and vascular blurring and the gradual attenuation of far-field ultrasound echo.[13] In addition, abdominal ultrasonography scanning was examined by well-trained sonographers who were unaware of the clinical presentation and laboratory findings of participants during the whole ultrasonic examination.", "For the evaluation of the prevalence of carotid stenosis, all participants (≥40 years) underwent bilateral carotid duplex sonography in a supine position by expert operators who were blinded to the goal of the study, clinical data, and laboratory findings. According to the Society of Radiologists in Ultrasound Consensus Conference, we graded the severity of carotid stenosis.[14] The categories were classified as normal (no stenosis) and carotid stenosis (<50% stenosis; ≥50% stenosis or occlusion). 
In the light of the established ultrasound criteria, (1) normal (no presence of stenosis) was defined that internal carotid artery (ICA) peak systolic velocity (PSV) was less than 125 cm/s and no plaque or intimal thickening was visible; (2) <50% stenosis was defined that ICA PSV was less than 125 cm/s but plaque or intimal thickening was visible; and (3) ≥50% stenosis or occlusion was considered when ICA PSV was greater than 125 cm/s and plaque was visible, or there was no detectable patent lumen on gray-scale ultrasonography and no flow on spectral, power, and color Doppler ultrasonography.[101516] Moreover, the higher value (left or right) was considered for analysis if bilateral stenosis was present.\nTo assess the complexity and stability of carotid plaque, ultrasound examination (Philips iU22 ultrasound system, Philips Medical Systems, Bothell, WA, USA) was also operated by well-trained and certified sonographers, and the results of the examination were reviewed by two independent operators. Carotid plaque was defined as a focal structure encroaching into the arterial lumen of at least 0.5 mm or 50% of the surrounding intima-media thickness (IMT) value, or demonstrated as a thickness of 1.5 mm from the intima-lumen interface to the media–adventitia interface.[17] In this study, we classified carotid plaque as normal (without plaque), stable plaque (plaques had a uniform texture and present a smooth and regular surface and plaques with high-level or homogeneous echoes), and unstable plaque (plaques with incomplete fibrous cap or ulcerated plaques and plaques with low-level or heterogeneous echoes) according to different stabilities.[18] Both longitudinal and transverse images of bilateral carotid arteries were obtained to extensively evaluate plaques while differences between their evaluations needed to be resolved by consensus.", "The information of demographic (age, sex, income, physical activities, and smoking status) and clinical characteristics (waist-hip ratio (WHR), hypertension, and diabetes) was collected using standardized questionnaires.[9] Biochemical variables containing some indexes such as triglyceride (TG), total cholesterol (TC), high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C) were measured using standard methods at the Central Laboratory of Jidong Oilfield Hospital.[10]\nAccording to the response of smoking status, participants were divided into three categories, never (<100 cigarettes in entire life), past, and current smoker. Education level was also classified into three categories: primary school or below, middle or high school, and college or above. The classification of physical activity was based on the following three kinds of circumstances: inactive, moderately active, and active. Simultaneously, WHR was calculated as waist circumference (cm) divided by the hip circumference (cm) which was used as measure of abdominal obesity. 
Hypertension was defined as a self-report history of hypertension, using antihypertensive medication or systolic blood pressure ≥140 mmHg and diastolic blood pressure ≥90 mmHg.[10] The definition of diabetes was fasting blood glucose ≥7.0 mmol/L, current treatment with insulin, oral hypoglycemic agents, or a history of diabetes mellitus.[111920]", "Considering that the prevalence of NAFLD and carotid stenosis is 25% and 12%, respectively, at two-sided α = 0.05, power = 0.80, and odds ratio (OR) = 1.50, the sample size can be assumed to be 2360 (PASS11, NCSS, LLC 329 North 1000 East, Kaysville, Utah 84037, USA).[21] The normal distribution of continuous variables was tested by one-sample Kolmogorov-Smirnov test. Continuous variables underlying normal distribution were presented as mean ± standard deviation and compared using t-test or analysis of variance (ANOVA), and otherwise presented as median (interquartile range) and compared by corresponding nonparametric methods. The frequencies and percentages were used to describe categorical variables, and the Chi-squared test was applied to compare among groups. Logistic regression was used to calculate ORs and 95% confidence intervals (CIs) and to determine the association between NAFLD and carotid stenosis or plaque. After adjusting for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C, the association between different severity of carotid stenosis and NAFLD, as well as the relationship between carotid plaque with different stability and NAFLD were also investigated. The association of NAFLD and carotid stenosis or plaque was also examined in stratification of age and gender analysis, respectively.\nStatistical analyses were performed using the SAS version 9.4 (SAS Institute, Cary, North Carolina, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant.", "From the initial sample of 9078 participants, 3396 residents aged ≥40 years and with complete data on ultrasonography examination were selected as study samples. Among the 3396 participants, 784 participants were excluded for the following reasons: 168 participants with history of myocardial infarction, heart failure, stroke, atrial fibrillation, and cancer; 369 participants with excessive alcohol consumption; 131 participants with history of positive HBsAg; and 116 participants without complete information. Finally, 2612 participants were included in the final analysis.\nDemographic data, clinical characteristics, and biochemical variables of all participants were presented in Table 1. Among the total 2612 participants, the mean age was 53.6 ± 8.6 years and 41.8% (n = 1091) were men. The 1375 participants with NAFLD consisted of 707 (64.8%) males and 668 (43.9%) females. Totally 342 (24.9%) participants with NAFLD were current smokers. Biochemical parameters including WHR, TG and TC were higher in participants with NAFLD than those without NAFLD. In addition, higher prevalence of diabetes and hypertension was also found in the group of NAFLD. Participants with NAFLD had a higher prevalence of carotid stenosis compared to those without NAFLD (<50% stenosis: 17% vs. 11.2%; ≥50% stenosis: 13.0% vs. 4.6%). A higher prevalence of unstable plaque (19.4% vs. 12.8%) was also demonstrated in participants with NAFLD than those without NAFLD in this study.\nBaseline characteristics of participants enrolled in this study\n*: t values; †: χ2 values; ‡: Z values. 
NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; TC: Total cholesterol; HDL-C: High-density lipoprotein-cholesterol; LDL-C: Low-density lipoprotein-cholesterol; CIMT: Carotid intima-media thickness; SD: Standard deviation; IQR: Interquartile range.", "Among the 1375 participants with NAFLD, 12.9% (178/1375) met the diagnostic criteria for carotid stenosis. After adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C, the association between NAFLD and carotid stenosis was statistically significant (OR: 2.06, 95% CI: 1.45–2.91). In age- and gender-stratified analysis, the association between NAFLD and carotid stenosis was positively significant among different groups (female: OR: 2.34, 95% CI: 1.29–4.24; male: OR: 1.89, 95% CI: 1.22–2.91; 40–60 years: OR: 1.76, 95% CI: 1.15–2.70; >60 years: OR: 3.12, 95% CI: 1.62–6.02) [Figure 1]. A positive association was observed between carotid stenosis (≥50% stenosis) and NAFLD according to the classification of the severity of carotid stenosis with normal as reference group (OR: 2.06, 95% CI: 1.45–2.93, P < 0.01) [Table 2].\nThe prevalence of carotid stenosis or plaque in overall participants and participants with NAFLD or without NAFLD. OR with 95% CI of NALFD for carotid stenosis or plaque in total, different gender, and age categories. Covariates for total included age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. Covariates for different genders included age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. Covariates for different ages included gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; OR: Odds ratio; CI: Confidence interval; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol.\nAssociation between different severity of carotid stenosis and NAFLD stratified by gender and age\nData were presented by ORs (95% CIs). *Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †Gender subgroup adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡Age subgroup adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.", "NAFLD was not significantly associated with carotid plaque in the whole participants (OR: 1.10, 95% CI: 0.8–1.40) after adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C [Figure 1]. The results also showed that there was no significant correlation between carotid plaque and NAFLD after the stratification of age and gender. Moreover, the carotid plaque was classified according to different stability. The association between unstable plaque or stable plaque and NAFLD was still not statistically significant (unstable plaque: OR: 1.13, 95% CI: 0.87–1.35; stable plaque: OR: 0.94, 95% CI: 0.53–1.67) after adjustment for potential confounders in this study [Table 3].\nAssociation between carotid plaque of different stability and NAFLD stratified by gender and age\nData were presented by ORs (95% CIs). 
*Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †Gender subgroup adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡Age subgroup adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.", "This study was supported by grants from the National Natural Science Foundation of China (No. 81670294, No. 81202279, and No. 81473057) and the National Social Science Foundation of China (No. 17BGL184)." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Ethical approval", "Study design and population", "Measurement of nonalcoholic fatty liver disease", "Assessment of carotid stenosis and carotid plaque", "Assessment of potential covariates", "Statistical analyses", "RESULTS", "Baseline characteristics of study participants", "Association between carotid stenosis and nonalcoholic fatty liver disease", "Relationship between carotid plaque and nonalcoholic fatty liver disease", "DISCUSSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Nonalcoholic fatty liver disease (NAFLD) has become a major public health concern in developing countries, and it emerges as the most common chronic liver disease around the world with a prevalence of 20–30% in the general population.[1] The clinical condition with histological features of NAFLD is ranging from simple steatosis to steatohepatitis and cirrhosis. NAFLD is also recognized as the hepatic manifestation of metabolic syndrome which includes obesity, type 2 diabetes mellitus, dyslipidemia, and hypertension. The close association between NAFLD and metabolic syndrome has been demonstrated previously.[2] Indeed, NAFLD is considered to be another component of the metabolic syndrome which is a key mediator for the relationship of NAFLD and cardiovascular disease. A strong association between NAFLD and metabolic syndrome has stimulated more attention to its putative role in the occurrence and development of cardiovascular disease.[3]\nCarotid atherosclerosis, a common carotid artery disease, is recognized as one of the major cardiovascular diseases. Recently, NAFLD was suspected to be associated with an increased risk of cardiovascular disease, including exacerbating carotid atherosclerosis and coronary artery disease (CAD).[4] NAFLD patients might be at a heightened risk to suffer from cardiovascular disease, especially carotid artery disease.[5] The presence of carotid plaque indicated that a clinical model of early atherosclerosis was an indicator of increased risk of cardiovascular disease.[67] In addition, carotid stenosis was an important risk factor for transient ischemic attacks and strokes which was correlated well with cardiovascular disease.[8] We use carotid stenosis and carotid plaque measured by carotid ultrasound as good surrogate markers of carotid artery disease. Therefore, the aim of this study was to investigate the association between NAFLD and carotid artery disease.", " Ethical approval The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Jidong Oilfield Inc., Medical Centers. Informed written consent was obtained from all participants before their enrollment in this study.\nThe study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Jidong Oilfield Inc., Medical Centers. Informed written consent was obtained from all participants before their enrollment in this study.\n Study design and population From July 2013 to August 2014, all residents aged ≥18 years from Jidong community were invited to participate in this study at the time of their regular annual physical examination performed at the Jidong Oilfield Hospital. Almost 9078 residents completed a standard questionnaire, underwent physical examinations and laboratory assessments, and provided informed consent at recruitment.[9] The Jidong community is located in Caofeidian District which is in the south of Tangshan city and near the Bohai Sea. The detail information about the research has been described in the past.[1011] Among these resident population, those aged ≥40 years, without missing data on ultrasonography, who signed informed consent were selected as participants. 
Furthermore, they should meet the following criteria: (1) no history of cancer, stroke, atrial fibrillation, heart failure, or myocardial infarction; (2) without excessive alcohol consumption (men: ≥20 g/day and women: ≥10 g/day for more than a year); (3) absence of a history of positive HBsAg; and (4) with complete information.\nFrom July 2013 to August 2014, all residents aged ≥18 years from Jidong community were invited to participate in this study at the time of their regular annual physical examination performed at the Jidong Oilfield Hospital. Almost 9078 residents completed a standard questionnaire, underwent physical examinations and laboratory assessments, and provided informed consent at recruitment.[9] The Jidong community is located in Caofeidian District which is in the south of Tangshan city and near the Bohai Sea. The detail information about the research has been described in the past.[1011] Among these resident population, those aged ≥40 years, without missing data on ultrasonography, who signed informed consent were selected as participants. Furthermore, they should meet the following criteria: (1) no history of cancer, stroke, atrial fibrillation, heart failure, or myocardial infarction; (2) without excessive alcohol consumption (men: ≥20 g/day and women: ≥10 g/day for more than a year); (3) absence of a history of positive HBsAg; and (4) with complete information.\n Measurement of nonalcoholic fatty liver disease Liver ultrasonography was performed in participants aged ≥40 years using a high-resolution B-mode topographic ultrasound system with a 3.5 MHz probe (ACUSON X300, Siemens, Germany) to assess the prevalence of NAFLD. Compared to histology, ultrasonography had a sensitivity of 85% and a specificity of 94% in detecting fatty liver disease.[12] According to conventional criteria, fatty liver disease was diagnosed through characteristic echo patterns, such as diffusely increased liver near-field ultrasound echo (bright liver); liver echo was greater than kidney and vascular blurring and the gradual attenuation of far-field ultrasound echo.[13] In addition, abdominal ultrasonography scanning was examined by well-trained sonographers who were unaware of the clinical presentation and laboratory findings of participants during the whole ultrasonic examination.\nLiver ultrasonography was performed in participants aged ≥40 years using a high-resolution B-mode topographic ultrasound system with a 3.5 MHz probe (ACUSON X300, Siemens, Germany) to assess the prevalence of NAFLD. Compared to histology, ultrasonography had a sensitivity of 85% and a specificity of 94% in detecting fatty liver disease.[12] According to conventional criteria, fatty liver disease was diagnosed through characteristic echo patterns, such as diffusely increased liver near-field ultrasound echo (bright liver); liver echo was greater than kidney and vascular blurring and the gradual attenuation of far-field ultrasound echo.[13] In addition, abdominal ultrasonography scanning was examined by well-trained sonographers who were unaware of the clinical presentation and laboratory findings of participants during the whole ultrasonic examination.\n Assessment of carotid stenosis and carotid plaque For the evaluation of the prevalence of carotid stenosis, all participants (≥40 years) underwent bilateral carotid duplex sonography in a supine position by expert operators who were blinded to the goal of the study, clinical data, and laboratory findings. 
According to the Society of Radiologists in Ultrasound Consensus Conference, we graded the severity of carotid stenosis.[14] The categories were classified as normal (no stenosis) and carotid stenosis (<50% stenosis; ≥50% stenosis or occlusion). In the light of the established ultrasound criteria, (1) normal (no presence of stenosis) was defined that internal carotid artery (ICA) peak systolic velocity (PSV) was less than 125 cm/s and no plaque or intimal thickening was visible; (2) <50% stenosis was defined that ICA PSV was less than 125 cm/s but plaque or intimal thickening was visible; and (3) ≥50% stenosis or occlusion was considered when ICA PSV was greater than 125 cm/s and plaque was visible, or there was no detectable patent lumen on gray-scale ultrasonography and no flow on spectral, power, and color Doppler ultrasonography.[101516] Moreover, the higher value (left or right) was considered for analysis if bilateral stenosis was present.\nTo assess the complexity and stability of carotid plaque, ultrasound examination (Philips iU22 ultrasound system, Philips Medical Systems, Bothell, WA, USA) was also operated by well-trained and certified sonographers, and the results of the examination were reviewed by two independent operators. Carotid plaque was defined as a focal structure encroaching into the arterial lumen of at least 0.5 mm or 50% of the surrounding intima-media thickness (IMT) value, or demonstrated as a thickness of 1.5 mm from the intima-lumen interface to the media–adventitia interface.[17] In this study, we classified carotid plaque as normal (without plaque), stable plaque (plaques had a uniform texture and present a smooth and regular surface and plaques with high-level or homogeneous echoes), and unstable plaque (plaques with incomplete fibrous cap or ulcerated plaques and plaques with low-level or heterogeneous echoes) according to different stabilities.[18] Both longitudinal and transverse images of bilateral carotid arteries were obtained to extensively evaluate plaques while differences between their evaluations needed to be resolved by consensus.
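As a concrete restatement of the grading rules above, the short sketch below encodes them as a Python function. It is purely illustrative and not part of the study's workflow; the argument names are invented, and the handling of combinations the criteria do not mention (for example, an elevated PSV with no visible plaque) is an assumption.

```python
# Illustrative sketch (not from the original analysis) of the ultrasound
# grading rules described above. Argument names are hypothetical.

def grade_carotid_stenosis(ica_psv_cm_s, plaque_or_thickening_visible,
                           patent_lumen=True, flow_detected=True):
    # Occlusion: no detectable patent lumen on gray-scale and no flow on Doppler
    if not patent_lumen and not flow_detected:
        return ">=50% stenosis or occlusion"
    # >=50% stenosis: PSV above 125 cm/s with visible plaque
    if ica_psv_cm_s > 125 and plaque_or_thickening_visible:
        return ">=50% stenosis or occlusion"
    # <50% stenosis: PSV below 125 cm/s but plaque or intimal thickening visible
    if plaque_or_thickening_visible:
        return "<50% stenosis"
    # Normal: PSV below 125 cm/s and no plaque or intimal thickening
    return "normal (no stenosis)"

# The higher of the left/right PSV values would be used when both sides are measured.
print(grade_carotid_stenosis(140, True))   # >=50% stenosis or occlusion
print(grade_carotid_stenosis(90, True))    # <50% stenosis
print(grade_carotid_stenosis(90, False))   # normal (no stenosis)
```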
\n Assessment of potential covariates The information of demographic (age, sex, income, physical activities, and smoking status) and clinical characteristics (waist-hip ratio (WHR), hypertension, and diabetes) was collected using standardized questionnaires.[9] Biochemical variables containing some indexes such as triglyceride (TG), total cholesterol (TC), high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C) were measured using standard methods at the Central Laboratory of Jidong Oilfield Hospital.[10]\nAccording to the response of smoking status, participants were divided into three categories, never (<100 cigarettes in entire life), past, and current smoker. Education level was also classified into three categories: primary school or below, middle or high school, and college or above. The classification of physical activity was based on the following three kinds of circumstances: inactive, moderately active, and active. Simultaneously, WHR was calculated as waist circumference (cm) divided by the hip circumference (cm) which was used as measure of abdominal obesity. 
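To make the covariate coding just described concrete, the sketch below derives WHR and the three smoking categories from a toy table. The column names are assumptions for illustration only; the study collected these variables with standardized questionnaires and analyzed them in SAS.

```python
# Illustrative sketch with assumed column names; not the study's own code.
import pandas as pd

def derive_covariates(df: pd.DataFrame) -> pd.DataFrame:
    """Derive WHR and smoking category from hypothetical raw fields."""
    out = df.copy()
    # Waist-hip ratio, used as the measure of abdominal obesity: waist (cm) / hip (cm)
    out["whr"] = out["waist_cm"] / out["hip_cm"]

    # Smoking status: never (<100 cigarettes in entire life), past, or current
    def smoking_category(row):
        if row["lifetime_cigarettes"] < 100:
            return "never"
        return "current" if row["smokes_now"] else "past"

    out["smoking"] = out.apply(smoking_category, axis=1)
    return out

# Toy example
toy = pd.DataFrame({
    "waist_cm": [88.0, 75.0],
    "hip_cm": [100.0, 95.0],
    "lifetime_cigarettes": [5000, 20],
    "smokes_now": [True, False],
})
print(derive_covariates(toy)[["whr", "smoking"]])
```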
Hypertension was defined as a self-report history of hypertension, using antihypertensive medication or systolic blood pressure ≥140 mmHg and diastolic blood pressure ≥90 mmHg.[10] The definition of diabetes was fasting blood glucose ≥7.0 mmol/L, current treatment with insulin, oral hypoglycemic agents, or a history of diabetes mellitus.[111920]\n Statistical analyses Considering that the prevalence of NAFLD and carotid stenosis is 25% and 12%, respectively, at two-sided α = 0.05, power = 0.80, and odds ratio (OR) = 1.50, the sample size can be assumed to be 2360 (PASS11, NCSS, LLC 329 North 1000 East, Kaysville, Utah 84037, USA).[21] The normal distribution of continuous variables was tested by one-sample Kolmogorov-Smirnov test. Continuous variables underlying normal distribution were presented as mean ± standard deviation and compared using t-test or analysis of variance (ANOVA), and otherwise presented as median (interquartile range) and compared by corresponding nonparametric methods. The frequencies and percentages were used to describe categorical variables, and the Chi-squared test was applied to compare among groups. Logistic regression was used to calculate ORs and 95% confidence intervals (CIs) and to determine the association between NAFLD and carotid stenosis or plaque. After adjusting for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C, the association between different severity of carotid stenosis and NAFLD, as well as the relationship between carotid plaque with different stability and NAFLD were also investigated. The association of NAFLD and carotid stenosis or plaque was also examined in stratification of age and gender analysis, respectively.\nStatistical analyses were performed using the SAS version 9.4 (SAS Institute, Cary, North Carolina, USA). 
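As a rough cross-check of the sample-size figure quoted above, the sketch below approximates the calculation as a two-group comparison of proportions using statsmodels. This is an assumption about the method: the published figure came from PASS 11 and may rest on a different (e.g., logistic-regression-based) formula, so the result is only expected to land in the same general range as 2360.

```python
# Approximate cross-check, assuming a two-proportion comparison:
# exposure (NAFLD) prevalence 25%, outcome (carotid stenosis) prevalence ~12%
# in the unexposed, OR = 1.50, two-sided alpha = 0.05, power = 0.80.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_unexposed = 0.12
odds_exposed = 1.50 * p_unexposed / (1 - p_unexposed)
p_exposed = odds_exposed / (1 + odds_exposed)            # about 0.17

effect = proportion_effectsize(p_exposed, p_unexposed)   # Cohen's h
# ratio = n_unexposed / n_exposed; a 25% NAFLD prevalence implies roughly 3:1
n_exposed = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=3.0, alternative="two-sided"
)
total_n = n_exposed * (1 + 3.0)
# Total comes out in the same general range as (somewhat below) the quoted 2360.
print(round(n_exposed), round(total_n))
```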
All statistical tests were two-sided, and P < 0.05 was considered statistically significant.", "The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Jidong Oilfield Inc., Medical Centers. Informed written consent was obtained from all participants before their enrollment in this study.", "From July 2013 to August 2014, all residents aged ≥18 years from Jidong community were invited to participate in this study at the time of their regular annual physical examination performed at the Jidong Oilfield Hospital. Almost 9078 residents completed a standard questionnaire, underwent physical examinations and laboratory assessments, and provided informed consent at recruitment.[9] The Jidong community is located in Caofeidian District which is in the south of Tangshan city and near the Bohai Sea. The detail information about the research has been described in the past.[1011] Among these resident population, those aged ≥40 years, without missing data on ultrasonography, who signed informed consent were selected as participants. Furthermore, they should meet the following criteria: (1) no history of cancer, stroke, atrial fibrillation, heart failure, or myocardial infarction; (2) without excessive alcohol consumption (men: ≥20 g/day and women: ≥10 g/day for more than a year); (3) absence of a history of positive HBsAg; and (4) with complete information.", "Liver ultrasonography was performed in participants aged ≥40 years using a high-resolution B-mode topographic ultrasound system with a 3.5 MHz probe (ACUSON X300, Siemens, Germany) to assess the prevalence of NAFLD. 
Compared to histology, ultrasonography had a sensitivity of 85% and a specificity of 94% in detecting fatty liver disease.[12] According to conventional criteria, fatty liver disease was diagnosed through characteristic echo patterns, such as diffusely increased liver near-field ultrasound echo (bright liver); liver echo was greater than kidney and vascular blurring and the gradual attenuation of far-field ultrasound echo.[13] In addition, abdominal ultrasonography scanning was examined by well-trained sonographers who were unaware of the clinical presentation and laboratory findings of participants during the whole ultrasonic examination.", "For the evaluation of the prevalence of carotid stenosis, all participants (≥40 years) underwent bilateral carotid duplex sonography in a supine position by expert operators who were blinded to the goal of the study, clinical data, and laboratory findings. According to the Society of Radiologists in Ultrasound Consensus Conference, we graded the severity of carotid stenosis.[14] The categories were classified as normal (no stenosis) and carotid stenosis (<50% stenosis; ≥50% stenosis or occlusion). In the light of the established ultrasound criteria, (1) normal (no presence of stenosis) was defined that internal carotid artery (ICA) peak systolic velocity (PSV) was less than 125 cm/s and no plaque or intimal thickening was visible; (2) <50% stenosis was defined that ICA PSV was less than 125 cm/s but plaque or intimal thickening was visible; and (3) ≥50% stenosis or occlusion was considered when ICA PSV was greater than 125 cm/s and plaque was visible, or there was no detectable patent lumen on gray-scale ultrasonography and no flow on spectral, power, and color Doppler ultrasonography.[101516] Moreover, the higher value (left or right) was considered for analysis if bilateral stenosis was present.\nTo assess the complexity and stability of carotid plaque, ultrasound examination (Philips iU22 ultrasound system, Philips Medical Systems, Bothell, WA, USA) was also operated by well-trained and certified sonographers, and the results of the examination were reviewed by two independent operators. 
Carotid plaque was defined as a focal structure encroaching into the arterial lumen of at least 0.5 mm or 50% of the surrounding intima-media thickness (IMT) value, or demonstrated as a thickness of 1.5 mm from the intima-lumen interface to the media–adventitia interface.[17] In this study, we classified carotid plaque as normal (without plaque), stable plaque (plaques had a uniform texture and present a smooth and regular surface and plaques with high-level or homogeneous echoes), and unstable plaque (plaques with incomplete fibrous cap or ulcerated plaques and plaques with low-level or heterogeneous echoes) according to different stabilities.[18] Both longitudinal and transverse images of bilateral carotid arteries were obtained to extensively evaluate plaques while differences between their evaluations needed to be resolved by consensus.", "The information of demographic (age, sex, income, physical activities, and smoking status) and clinical characteristics (waist-hip ratio (WHR), hypertension, and diabetes) was collected using standardized questionnaires.[9] Biochemical variables containing some indexes such as triglyceride (TG), total cholesterol (TC), high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C) were measured using standard methods at the Central Laboratory of Jidong Oilfield Hospital.[10]\nAccording to the response of smoking status, participants were divided into three categories, never (<100 cigarettes in entire life), past, and current smoker. Education level was also classified into three categories: primary school or below, middle or high school, and college or above. The classification of physical activity was based on the following three kinds of circumstances: inactive, moderately active, and active. Simultaneously, WHR was calculated as waist circumference (cm) divided by the hip circumference (cm) which was used as measure of abdominal obesity. Hypertension was defined as a self-report history of hypertension, using antihypertensive medication or systolic blood pressure ≥140 mmHg and diastolic blood pressure ≥90 mmHg.[10] The definition of diabetes was fasting blood glucose ≥7.0 mmol/L, current treatment with insulin, oral hypoglycemic agents, or a history of diabetes mellitus.[111920]", "Considering that the prevalence of NAFLD and carotid stenosis is 25% and 12%, respectively, at two-sided α = 0.05, power = 0.80, and odds ratio (OR) = 1.50, the sample size can be assumed to be 2360 (PASS11, NCSS, LLC 329 North 1000 East, Kaysville, Utah 84037, USA).[21] The normal distribution of continuous variables was tested by one-sample Kolmogorov-Smirnov test. Continuous variables underlying normal distribution were presented as mean ± standard deviation and compared using t-test or analysis of variance (ANOVA), and otherwise presented as median (interquartile range) and compared by corresponding nonparametric methods. The frequencies and percentages were used to describe categorical variables, and the Chi-squared test was applied to compare among groups. Logistic regression was used to calculate ORs and 95% confidence intervals (CIs) and to determine the association between NAFLD and carotid stenosis or plaque. After adjusting for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C, the association between different severity of carotid stenosis and NAFLD, as well as the relationship between carotid plaque with different stability and NAFLD were also investigated. 
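The adjusted odds ratios described above come from multivariable logistic regression. The following is a minimal sketch of such a model in Python (statsmodels) using simulated toy data; the column names and the simulation are invented for illustration, and the study's own analysis was done in SAS 9.4.

```python
# Minimal sketch of an adjusted logistic model; not the original SAS analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the real analysis table (column names are assumptions).
rng = np.random.default_rng(0)
n = 2612
df = pd.DataFrame({
    "nafld": rng.integers(0, 2, n),
    "age": rng.normal(54, 9, n),
    "gender": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 3, n),
    "income": rng.integers(0, 3, n),
    "physical_activity": rng.integers(0, 3, n),
    "diabetes": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "whr": rng.normal(0.9, 0.06, n),
    "tg": rng.lognormal(0.3, 0.4, n),
    "hdl_c": rng.normal(1.4, 0.3, n),
})
# Outcome simulated with some dependence on NAFLD so the fit is non-trivial
logit_p = -2.2 + 0.7 * df["nafld"] + 0.03 * (df["age"] - 54)
df["carotid_stenosis"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

formula = (
    "carotid_stenosis ~ nafld + age + C(gender) + C(smoking) + C(income) "
    "+ C(physical_activity) + diabetes + hypertension + whr + tg + hdl_c"
)
res = smf.logit(formula, data=df).fit(disp=False)

# Adjusted OR and 95% CI for NAFLD from the fitted coefficients
or_nafld = np.exp(res.params["nafld"])
ci_low, ci_high = np.exp(res.conf_int().loc["nafld"])
print(f"Adjusted OR for NAFLD: {or_nafld:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```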
The association of NAFLD and carotid stenosis or plaque was also examined in stratification of age and gender analysis, respectively.\nStatistical analyses were performed using the SAS version 9.4 (SAS Institute, Cary, North Carolina, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant.", " Baseline characteristics of study participants From the initial sample of 9078 participants, 3396 residents aged ≥40 years and with complete data on ultrasonography examination were selected as study samples. Among the 3396 participants, 784 participants were excluded for the following reasons: 168 participants with history of myocardial infarction, heart failure, stroke, atrial fibrillation, and cancer; 369 participants with excessive alcohol consumption; 131 participants with history of positive HBsAg; and 116 participants without complete information. Finally, 2612 participants were included in the final analysis.\nDemographic data, clinical characteristics, and biochemical variables of all participants were presented in Table 1. Among the total 2612 participants, the mean age was 53.6 ± 8.6 years and 41.8% (n = 1091) were men. The 1375 participants with NAFLD consisted of 707 (64.8%) males and 668 (43.9%) females. Totally 342 (24.9%) participants with NAFLD were current smokers. Biochemical parameters including WHR, TG and TC were higher in participants with NAFLD than those without NAFLD. In addition, higher prevalence of diabetes and hypertension was also found in the group of NAFLD. Participants with NAFLD had a higher prevalence of carotid stenosis compared to those without NAFLD (<50% stenosis: 17% vs. 11.2%; ≥50% stenosis: 13.0% vs. 4.6%). A higher prevalence of unstable plaque (19.4% vs. 12.8%) was also demonstrated in participants with NAFLD than those without NAFLD in this study.\nBaseline characteristics of participants enrolled in this study\n*: t values; †: χ2 values; ‡: Z values. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; TC: Total cholesterol; HDL-C: High-density lipoprotein-cholesterol; LDL-C: Low-density lipoprotein-cholesterol; CIMT: Carotid intima-media thickness; SD: Standard deviation; IQR: Interquartile range.\n Association between carotid stenosis and nonalcoholic fatty liver disease Among the 1375 participants with NAFLD, 12.9% (178/1375) met the diagnostic criteria for carotid stenosis. After adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C, the association between NAFLD and carotid stenosis was statistically significant (OR: 2.06, 95% CI: 1.45–2.91). In age- and gender-stratified analysis, the association between NAFLD and carotid stenosis was positively significant among different groups (female: OR: 2.34, 95% CI: 1.29–4.24; male: OR: 1.89, 95% CI: 1.22–2.91; 40–60 years: OR: 1.76, 95% CI: 1.15–2.70; >60 years: OR: 3.12, 95% CI: 1.62–6.02) [Figure 1]. A positive association was observed between carotid stenosis (≥50% stenosis) and NAFLD according to the classification of the severity of carotid stenosis with normal as reference group (OR: 2.06, 95% CI: 1.45–2.93, P < 0.01) [Table 2].\nThe prevalence of carotid stenosis or plaque in overall participants and participants with NAFLD or without NAFLD. OR with 95% CI of NAFLD for carotid stenosis or plaque in total, different gender, and age categories. Covariates for total included age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. Covariates for different genders included age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. Covariates for different ages included gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; OR: Odds ratio; CI: Confidence interval; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol.\nAssociation between different severity of carotid stenosis and NAFLD stratified by gender and age\nData were presented by ORs (95% CIs). *Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †Gender subgroup adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡Age subgroup adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.
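For readers who want to see where an odds ratio and its Wald 95% confidence interval come from, the sketch below computes the crude (unadjusted) version from a 2×2 table using made-up counts. The ORs reported above are covariate-adjusted estimates from logistic regression, so this is only a simplified illustration of the underlying quantity.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Crude OR and 95% Wald CI from a 2x2 table.

    a: exposed with outcome, b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Made-up counts purely for illustration (not the study's table)
print(odds_ratio_wald_ci(30, 170, 20, 180))
```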
\n Relationship between carotid plaque and nonalcoholic fatty liver disease NAFLD was not significantly associated with carotid plaque in the whole participants (OR: 1.10, 95% CI: 0.8–1.40) after adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C [Figure 1]. The results also showed that there was no significant correlation between carotid plaque and NAFLD after the stratification of age and gender. Moreover, the carotid plaque was classified according to different stability. The association between unstable plaque or stable plaque and NAFLD was still not statistically significant (unstable plaque: OR: 1.13, 95% CI: 0.87–1.35; stable plaque: OR: 0.94, 95% CI: 0.53–1.67) after adjustment for potential confounders in this study [Table 3].\nAssociation between carotid plaque of different stability and NAFLD stratified by gender and age\nData were presented by ORs (95% CIs). *Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †Gender subgroup adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡Age subgroup adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.", "From the initial sample of 9078 participants, 3396 residents aged ≥40 years and with complete data on ultrasonography examination were selected as study samples. Among the 3396 participants, 784 participants were excluded for the following reasons: 168 participants with history of myocardial infarction, heart failure, stroke, atrial fibrillation, and cancer; 369 participants with excessive alcohol consumption; 131 participants with history of positive HBsAg; and 116 participants without complete information. Finally, 2612 participants were included in the final analysis.\nDemographic data, clinical characteristics, and biochemical variables of all participants were presented in Table 1. Among the total 2612 participants, the mean age was 53.6 ± 8.6 years and 41.8% (n = 1091) were men. The 1375 participants with NAFLD consisted of 707 (64.8%) males and 668 (43.9%) females. Totally 342 (24.9%) participants with NAFLD were current smokers. Biochemical parameters including WHR, TG and TC were higher in participants with NAFLD than those without NAFLD. In addition, higher prevalence of diabetes and hypertension was also found in the group of NAFLD. Participants with NAFLD had a higher prevalence of carotid stenosis compared to those without NAFLD (<50% stenosis: 17% vs. 11.2%; ≥50% stenosis: 13.0% vs. 4.6%). A higher prevalence of unstable plaque (19.4% vs. 12.8%) was also demonstrated in participants with NAFLD than those without NAFLD in this study.\nBaseline characteristics of participants enrolled in this study\n*: t values; †: χ2 values; ‡: Z values. 
NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; TC: Total cholesterol; HDL-C: High-density lipoprotein-cholesterol; LDL-C: Low-density lipoprotein-cholesterol; CIMT: Carotid intima-media thickness; SD: Standard deviation; IQR: Interquartile range.", "Among the 1375 participants with NAFLD, 12.9% (178/1375) met the diagnostic criteria for carotid stenosis. After adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C, the association between NAFLD and carotid stenosis was statistically significant (OR: 2.06, 95% CI: 1.45–2.91). In age- and gender-stratified analysis, the association between NAFLD and carotid stenosis was positively significant among different groups (female: OR: 2.34, 95% CI: 1.29–4.24; male: OR: 1.89, 95% CI: 1.22–2.91; 40–60 years: OR: 1.76, 95% CI: 1.15–2.70; >60 years: OR: 3.12, 95% CI: 1.62–6.02) [Figure 1]. A positive association was observed between carotid stenosis (≥50% stenosis) and NAFLD according to the classification of the severity of carotid stenosis with normal as reference group (OR: 2.06, 95% CI: 1.45–2.93, P < 0.01) [Table 2].\nThe prevalence of carotid stenosis or plaque in overall participants and participants with NAFLD or without NAFLD. OR with 95% CI of NALFD for carotid stenosis or plaque in total, different gender, and age categories. Covariates for total included age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. Covariates for different genders included age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. Covariates for different ages included gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; OR: Odds ratio; CI: Confidence interval; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol.\nAssociation between different severity of carotid stenosis and NAFLD stratified by gender and age\nData were presented by ORs (95% CIs). *Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †Gender subgroup adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡Age subgroup adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.", "NAFLD was not significantly associated with carotid plaque in the whole participants (OR: 1.10, 95% CI: 0.8–1.40) after adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C [Figure 1]. The results also showed that there was no significant correlation between carotid plaque and NAFLD after the stratification of age and gender. Moreover, the carotid plaque was classified according to different stability. The association between unstable plaque or stable plaque and NAFLD was still not statistically significant (unstable plaque: OR: 1.13, 95% CI: 0.87–1.35; stable plaque: OR: 0.94, 95% CI: 0.53–1.67) after adjustment for potential confounders in this study [Table 3].\nAssociation between carotid plaque of different stability and NAFLD stratified by gender and age\nData were presented by ORs (95% CIs). 
*Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †Gender subgroup adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡Age subgroup adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.", "In the present study, we focused on the association between NAFLD and carotid artery disease. The principal finding was that the association between NAFLD and carotid stenosis was independently significant, whereas the association with carotid plaque was not, in this community-based population. This study explored the association between NAFLD and carotid artery disease (assessed by carotid stenosis and carotid plaque) in China.\nConsiderable evidence has linked NAFLD to cardiovascular disease.[2223] Sookoian and Pirola[2] performed a systematic review and reported that NAFLD was associated with the presence of carotid plaque and with endothelial dysfunction, a reliable marker of subclinical atherosclerosis. In a study of 3212 participants, Volzke et al.[24] reported that carotid plaques were more frequent in NAFLD patients than in those without NAFLD. In this study, we did not observe a significant association between NAFLD and carotid plaque. Consistent with our results, a study in middle-aged and elderly Chinese showed that the association between NAFLD and carotid plaque was not statistically significant.[25] However, in a study of 14,445 adults, NAFLD was found to be associated with an increased risk of carotid plaque diagnosed by ultrasound.[26] These inconsistent findings might reflect ethnic and regional differences as well as differences in the methodology used to define NAFLD; the association therefore needs further validation in populations of different ethnicities and regions. Earlier studies also demonstrated that the presence of carotid plaque increases with age and differs by gender. Accordingly, we analyzed carotid plaque of different stability in relation to NAFLD with stratification by age and gender, although the results were not statistically significant. This inconsistency might partly be ascribed to the lower sensitivity of ultrasound compared with biopsy for identifying NAFLD. NAFLD is considered a marker of metabolic disorders that could exacerbate the development of atherosclerosis.[27] However, the biological mechanism by which NAFLD promotes the development of atherosclerosis (measured here by carotid plaque) remains unclear.\nLike carotid plaque, carotid stenosis was used as a surrogate marker of carotid artery disease in this study. In contrast to the inconsistent results for carotid plaque, the reported associations between carotid stenosis and NAFLD are consistent. Several studies demonstrated a positive association between NAFLD and cardiovascular-related disease (assessed by carotid IMT or carotid plaque).[328] Sinn et al.[29] reported that, among modest alcohol drinkers, men with NAFLD had a higher risk of carotid stenosis. 
An independent and significant association between NAFLD and carotid artery disease (assessed by carotid artery stenosis) was detected in the present study, and a positive association between carotid stenosis (≥50% stenosis) and NAFLD was found in the stratified analysis. In addition, NAFLD is assumed to be the hepatic manifestation of metabolic syndrome. However, one study reported that metabolic syndrome was not associated with carotid stenosis (≥50% stenosis) in patients with a recent diagnosis of CAD.[30] The relatively small sample size (168 patients) in that study might partly account for this inconsistency. The results of this study also showed that NAFLD was an independent risk factor for carotid stenosis in different gender and age groups. This gender-specific association might be explained by the protective effects of estrogens against cardiovascular disease in women. Furthermore, age was another important factor in the association between carotid stenosis and NAFLD. A previous study demonstrated that the prevalence of both carotid stenosis and NAFLD increased with age and that the relationship between NAFLD and carotid stenosis was statistically significant among older participants, which is similar to our results.[31]\nFurthermore, the association between CAD and NAFLD has been examined in several studies, which detected a higher prevalence of CAD in patients with NAFLD (80.4% vs. 60.7%).[323334] A possible explanation for these differing prevalence estimates is that patients underwent coronary angiography before NAFLD was assessed. These studies also confirmed that CAD was positively associated with NAFLD (OR: 3.31), revealing a significant association between NAFLD and cardiovascular disease. Therefore, the positive association between NAFLD and carotid artery disease (assessed by carotid stenosis) found in this study might, to some extent, explain the higher risk of cerebrovascular disease among people with NAFLD and might add to the available evidence for risk prediction in these patients.\nAccording to recent epidemiological studies, the prevalence of NAFLD in China is higher than estimates from Western countries, is still increasing, and has reached epidemic proportions.[35] In the present study, participants with NAFLD had a higher prevalence of carotid stenosis and carotid plaque than those without NAFLD. Carotid stenosis represents a more advanced stage of carotid atherosclerosis, and carotid plaque is closely linked to carotid atherosclerosis. Although the mechanism linking NAFLD and cardiovascular events remains elusive, some studies have proposed that the association between atherosclerosis and NAFLD is a complex process involving interactions among insulin resistance, inflammation, and oxidative stress, which appear to be important in both early and later stages of atherosclerotic progression.[36] A systemic inflammatory state with pro-inflammatory and atherogenic molecules might play an important role in the relationship between NAFLD and cardiovascular disease.[37] These studies provide a basis for further exploration of the mechanisms underlying the link between NAFLD and cardiovascular disease. Furthermore, common genetic variants are another factor influencing the risk of cardiovascular disease.[38] Apart from the main risk factors of age and gender, obesity and a range of metabolic abnormalities also play an important role in the presence of NAFLD. 
These factors could help predict the risk of carotid stenosis or carotid plaque, and of related cardiovascular diseases, in NAFLD patients. Carotid stenosis or plaque might therefore be a link between NAFLD and cardiovascular disease.\nWe acknowledge several limitations of the present study. First, because of the cross-sectional design, the study could not evaluate the temporal nature of the relationship between NAFLD and carotid stenosis or plaque, nor could it support causal inference. Second, some participants did not attend the ultrasound examination, which might introduce selection bias and restrict the generalizability of the findings. Moreover, the diagnosis of NAFLD was based on ultrasonography, which is less sensitive than liver biopsy and could bias the estimated prevalence of NAFLD. Third, we excluded participants with a history of excessive alcohol consumption or positive HBsAg; however, other liver diseases, such as hepatitis C and liver cirrhosis, were not taken into account and might confound the association between NAFLD and carotid plaque. Finally, since all participants came from the Jidong community of Tangshan city, they cannot be regarded as representative of the Chinese population.\nIn conclusion, our data suggest that NAFLD is associated with carotid stenosis but not with carotid plaque. Compared with individuals without NAFLD, participants with NAFLD have a higher risk of carotid stenosis, particularly women and older participants. NAFLD might be a predictor of early carotid atherosclerosis, with carotid stenosis and carotid plaque serving as surrogate markers.\n Financial support and sponsorship This study was supported by grants from the National Natural Science Foundation of China (No. 81670294, No. 81202279, and No. 81473057) and the National Social Science Foundation of China (No. 17BGL184).\n Conflicts of interest There are no conflicts of interest.", "This study was supported by grants from the National Natural Science Foundation of China (No. 81670294, No. 81202279, and No. 81473057) and the National Social Science Foundation of China (No. 17BGL184).", "There are no conflicts of interest." ]
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, "discussion", null, "COI-statement" ]
[ "Association", "Carotid Artery Disease", "Carotid Stenosis", "Nonalcoholic Fatty Liver Disease" ]
INTRODUCTION: Nonalcoholic fatty liver disease (NAFLD) has become a major public health concern in developing countries, and it emerges as the most common chronic liver disease around the world with a prevalence of 20–30% in the general population.[1] The clinical condition with histological features of NAFLD is ranging from simple steatosis to steatohepatitis and cirrhosis. NAFLD is also recognized as the hepatic manifestation of metabolic syndrome which includes obesity, type 2 diabetes mellitus, dyslipidemia, and hypertension. The close association between NAFLD and metabolic syndrome has been demonstrated previously.[2] Indeed, NAFLD is considered to be another component of the metabolic syndrome which is a key mediator for the relationship of NAFLD and cardiovascular disease. A strong association between NAFLD and metabolic syndrome has stimulated more attention to its putative role in the occurrence and development of cardiovascular disease.[3] Carotid atherosclerosis, a common carotid artery disease, is recognized as one of the major cardiovascular diseases. Recently, NAFLD was suspected to be associated with an increased risk of cardiovascular disease, including exacerbating carotid atherosclerosis and coronary artery disease (CAD).[4] NAFLD patients might be at a heightened risk to suffer from cardiovascular disease, especially carotid artery disease.[5] The presence of carotid plaque indicated that a clinical model of early atherosclerosis was an indicator of increased risk of cardiovascular disease.[67] In addition, carotid stenosis was an important risk factor for transient ischemic attacks and strokes which was correlated well with cardiovascular disease.[8] We use carotid stenosis and carotid plaque measured by carotid ultrasound as good surrogate markers of carotid artery disease. Therefore, the aim of this study was to investigate the association between NAFLD and carotid artery disease. METHODS: Ethical approval The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Jidong Oilfield Inc., Medical Centers. Informed written consent was obtained from all participants before their enrollment in this study. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Jidong Oilfield Inc., Medical Centers. Informed written consent was obtained from all participants before their enrollment in this study. Study design and population From July 2013 to August 2014, all residents aged ≥18 years from Jidong community were invited to participate in this study at the time of their regular annual physical examination performed at the Jidong Oilfield Hospital. Almost 9078 residents completed a standard questionnaire, underwent physical examinations and laboratory assessments, and provided informed consent at recruitment.[9] The Jidong community is located in Caofeidian District which is in the south of Tangshan city and near the Bohai Sea. The detail information about the research has been described in the past.[1011] Among these resident population, those aged ≥40 years, without missing data on ultrasonography, who signed informed consent were selected as participants. 
Furthermore, they should meet the following criteria: (1) no history of cancer, stroke, atrial fibrillation, heart failure, or myocardial infarction; (2) without excessive alcohol consumption (men: ≥20 g/day and women: ≥10 g/day for more than a year); (3) absence of a history of positive HBsAg; and (4) with complete information. From July 2013 to August 2014, all residents aged ≥18 years from Jidong community were invited to participate in this study at the time of their regular annual physical examination performed at the Jidong Oilfield Hospital. Almost 9078 residents completed a standard questionnaire, underwent physical examinations and laboratory assessments, and provided informed consent at recruitment.[9] The Jidong community is located in Caofeidian District which is in the south of Tangshan city and near the Bohai Sea. The detail information about the research has been described in the past.[1011] Among these resident population, those aged ≥40 years, without missing data on ultrasonography, who signed informed consent were selected as participants. Furthermore, they should meet the following criteria: (1) no history of cancer, stroke, atrial fibrillation, heart failure, or myocardial infarction; (2) without excessive alcohol consumption (men: ≥20 g/day and women: ≥10 g/day for more than a year); (3) absence of a history of positive HBsAg; and (4) with complete information. Measurement of nonalcoholic fatty liver disease Liver ultrasonography was performed in participants aged ≥40 years using a high-resolution B-mode topographic ultrasound system with a 3.5 MHz probe (ACUSON X300, Siemens, Germany) to assess the prevalence of NAFLD. Compared to histology, ultrasonography had a sensitivity of 85% and a specificity of 94% in detecting fatty liver disease.[12] According to conventional criteria, fatty liver disease was diagnosed through characteristic echo patterns, such as diffusely increased liver near-field ultrasound echo (bright liver); liver echo was greater than kidney and vascular blurring and the gradual attenuation of far-field ultrasound echo.[13] In addition, abdominal ultrasonography scanning was examined by well-trained sonographers who were unaware of the clinical presentation and laboratory findings of participants during the whole ultrasonic examination. Liver ultrasonography was performed in participants aged ≥40 years using a high-resolution B-mode topographic ultrasound system with a 3.5 MHz probe (ACUSON X300, Siemens, Germany) to assess the prevalence of NAFLD. Compared to histology, ultrasonography had a sensitivity of 85% and a specificity of 94% in detecting fatty liver disease.[12] According to conventional criteria, fatty liver disease was diagnosed through characteristic echo patterns, such as diffusely increased liver near-field ultrasound echo (bright liver); liver echo was greater than kidney and vascular blurring and the gradual attenuation of far-field ultrasound echo.[13] In addition, abdominal ultrasonography scanning was examined by well-trained sonographers who were unaware of the clinical presentation and laboratory findings of participants during the whole ultrasonic examination. Assessment of carotid stenosis and carotid plaque For the evaluation of the prevalence of carotid stenosis, all participants (≥40 years) underwent bilateral carotid duplex sonography in a supine position by expert operators who were blinded to the goal of the study, clinical data, and laboratory findings. 
According to the Society of Radiologists in Ultrasound Consensus Conference, we graded the severity of carotid stenosis.[14] The categories were classified as normal (no stenosis) and carotid stenosis (<50% stenosis; ≥50% stenosis or occlusion). In the light of the established ultrasound criteria, (1) normal (no presence of stenosis) was defined that internal carotid artery (ICA) peak systolic velocity (PSV) was less than 125 cm/s and no plaque or intimal thickening was visible; (2) <50% stenosis was defined that ICA PSV was less than 125 cm/s but plaque or intimal thickening was visible; and (3) ≥50% stenosis or occlusion was considered when ICA PSV was greater than 125 cm/s and plaque was visible, or there was no detectable patent lumen on gray-scale ultrasonography and no flow on spectral, power, and color Doppler ultrasonography.[101516] Moreover, the higher value (left or right) was considered for analysis if bilateral stenosis was present. To assess the complexity and stability of carotid plaque, ultrasound examination (Philips iU22 ultrasound system, Philips Medical Systems, Bothell, WA, USA) was also operated by well-trained and certified sonographers, and the results of the examination were reviewed by two independent operators. Carotid plaque was defined as a focal structure encroaching into the arterial lumen of at least 0.5 mm or 50% of the surrounding intima-media thickness (IMT) value, or demonstrated as a thickness of 1.5 mm from the intima-lumen interface to the media–adventitia interface.[17] In this study, we classified carotid plaque as normal (without plaque), stable plaque (plaques had a uniform texture and present a smooth and regular surface and plaques with high-level or homogeneous echoes), and unstable plaque (plaques with incomplete fibrous cap or ulcerated plaques and plaques with low-level or heterogeneous echoes) according to different stabilities.[18] Both longitudinal and transverse images of bilateral carotid arteries were obtained to extensively evaluate plaques while differences between their evaluations needed to be resolved by consensus. For the evaluation of the prevalence of carotid stenosis, all participants (≥40 years) underwent bilateral carotid duplex sonography in a supine position by expert operators who were blinded to the goal of the study, clinical data, and laboratory findings. According to the Society of Radiologists in Ultrasound Consensus Conference, we graded the severity of carotid stenosis.[14] The categories were classified as normal (no stenosis) and carotid stenosis (<50% stenosis; ≥50% stenosis or occlusion). In the light of the established ultrasound criteria, (1) normal (no presence of stenosis) was defined that internal carotid artery (ICA) peak systolic velocity (PSV) was less than 125 cm/s and no plaque or intimal thickening was visible; (2) <50% stenosis was defined that ICA PSV was less than 125 cm/s but plaque or intimal thickening was visible; and (3) ≥50% stenosis or occlusion was considered when ICA PSV was greater than 125 cm/s and plaque was visible, or there was no detectable patent lumen on gray-scale ultrasonography and no flow on spectral, power, and color Doppler ultrasonography.[101516] Moreover, the higher value (left or right) was considered for analysis if bilateral stenosis was present. 
To assess the complexity and stability of carotid plaque, ultrasound examination (Philips iU22 ultrasound system, Philips Medical Systems, Bothell, WA, USA) was also operated by well-trained and certified sonographers, and the results of the examination were reviewed by two independent operators. Carotid plaque was defined as a focal structure encroaching into the arterial lumen of at least 0.5 mm or 50% of the surrounding intima-media thickness (IMT) value, or demonstrated as a thickness of 1.5 mm from the intima-lumen interface to the media–adventitia interface.[17] In this study, we classified carotid plaque as normal (without plaque), stable plaque (plaques had a uniform texture and present a smooth and regular surface and plaques with high-level or homogeneous echoes), and unstable plaque (plaques with incomplete fibrous cap or ulcerated plaques and plaques with low-level or heterogeneous echoes) according to different stabilities.[18] Both longitudinal and transverse images of bilateral carotid arteries were obtained to extensively evaluate plaques while differences between their evaluations needed to be resolved by consensus. Assessment of potential covariates The information of demographic (age, sex, income, physical activities, and smoking status) and clinical characteristics (waist-hip ratio (WHR), hypertension, and diabetes) was collected using standardized questionnaires.[9] Biochemical variables containing some indexes such as triglyceride (TG), total cholesterol (TC), high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C) were measured using standard methods at the Central Laboratory of Jidong Oilfield Hospital.[10] According to the response of smoking status, participants were divided into three categories, never (<100 cigarettes in entire life), past, and current smoker. Education level was also classified into three categories: primary school or below, middle or high school, and college or above. The classification of physical activity was based on the following three kinds of circumstances: inactive, moderately active, and active. Simultaneously, WHR was calculated as waist circumference (cm) divided by the hip circumference (cm) which was used as measure of abdominal obesity. Hypertension was defined as a self-report history of hypertension, using antihypertensive medication or systolic blood pressure ≥140 mmHg and diastolic blood pressure ≥90 mmHg.[10] The definition of diabetes was fasting blood glucose ≥7.0 mmol/L, current treatment with insulin, oral hypoglycemic agents, or a history of diabetes mellitus.[111920] The information of demographic (age, sex, income, physical activities, and smoking status) and clinical characteristics (waist-hip ratio (WHR), hypertension, and diabetes) was collected using standardized questionnaires.[9] Biochemical variables containing some indexes such as triglyceride (TG), total cholesterol (TC), high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C) were measured using standard methods at the Central Laboratory of Jidong Oilfield Hospital.[10] According to the response of smoking status, participants were divided into three categories, never (<100 cigarettes in entire life), past, and current smoker. Education level was also classified into three categories: primary school or below, middle or high school, and college or above. 
Statistical analyses: Given an assumed prevalence of NAFLD of 25% and of carotid stenosis of 12%, with two-sided α = 0.05, power = 0.80, and an odds ratio (OR) of 1.50, the required sample size was estimated at 2360 (PASS 11, NCSS, LLC, 329 North 1000 East, Kaysville, Utah 84037, USA).[21] The normality of continuous variables was tested with the one-sample Kolmogorov-Smirnov test. Normally distributed continuous variables were presented as mean ± standard deviation and compared using the t-test or analysis of variance (ANOVA); otherwise, they were presented as median (interquartile range) and compared using the corresponding nonparametric methods. Categorical variables were described as frequencies and percentages and compared among groups with the Chi-squared test. Logistic regression was used to calculate ORs and 95% confidence intervals (CIs) for the association between NAFLD and carotid stenosis or plaque. After adjusting for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C, the associations of NAFLD with different severities of carotid stenosis and with carotid plaque of different stability were also investigated. The associations of NAFLD with carotid stenosis and plaque were additionally examined in analyses stratified by age and gender. Statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, North Carolina, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant.
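The workflow described here (normality check, group comparisons, and adjusted logistic regression yielding ORs with 95% CIs) was carried out with PASS 11 and SAS 9.4. The snippet below is only an illustrative re-expression of the same steps in Python on a synthetic dataset with placeholder variable names; it is not the study's code, and education level, income, and physical activity are omitted for brevity:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Synthetic stand-in data with hypothetical column names
rng = np.random.default_rng(0)
n = 2612
df = pd.DataFrame({
    "nafld": rng.integers(0, 2, n),
    "carotid_stenosis": rng.integers(0, 2, n),
    "age": rng.normal(54, 9, n),
    "male": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "current_smoker": rng.integers(0, 2, n),
    "whr": rng.normal(0.90, 0.07, n),
    "tg": rng.lognormal(0.3, 0.5, n),
    "hdl_c": rng.normal(1.3, 0.3, n),
})

# Normality check (one-sample Kolmogorov-Smirnov against a fitted normal)
print(stats.kstest(df["age"], "norm", args=(df["age"].mean(), df["age"].std())))

# Group comparisons: t-test for a continuous variable, chi-square for a categorical one
print(stats.ttest_ind(df.loc[df.nafld == 1, "whr"], df.loc[df.nafld == 0, "whr"]))
print(stats.chi2_contingency(pd.crosstab(df["nafld"], df["hypertension"])))

# Adjusted logistic regression: OR and 95% CI for NAFLD vs. carotid stenosis
fit = smf.logit(
    "carotid_stenosis ~ nafld + age + male + current_smoker + diabetes"
    " + hypertension + whr + tg + hdl_c",
    data=df,
).fit(disp=False)
ors = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors.loc["nafld"])
```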
Ethical approval: The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Jidong Oilfield Inc., Medical Centers. Written informed consent was obtained from all participants before enrollment.

Study design and population: From July 2013 to August 2014, all residents aged ≥18 years from the Jidong community were invited to participate in this study at the time of their regular annual physical examination at the Jidong Oilfield Hospital. A total of 9078 residents completed a standard questionnaire, underwent physical examinations and laboratory assessments, and provided informed consent at recruitment.[9] The Jidong community is located in Caofeidian District, south of Tangshan city and near the Bohai Sea. Detailed information about this cohort has been described previously.[10,11] From this resident population, participants aged ≥40 years who had complete ultrasonography data and signed informed consent were selected. They also had to meet the following criteria: (1) no history of cancer, stroke, atrial fibrillation, heart failure, or myocardial infarction; (2) no excessive alcohol consumption (men: ≥20 g/day and women: ≥10 g/day for more than a year); (3) no history of positive HBsAg; and (4) complete information.

Measurement of nonalcoholic fatty liver disease: Liver ultrasonography was performed in participants aged ≥40 years using a high-resolution B-mode topographic ultrasound system with a 3.5 MHz probe (ACUSON X300, Siemens, Germany) to assess the prevalence of NAFLD. Compared with histology, ultrasonography has a sensitivity of 85% and a specificity of 94% in detecting fatty liver disease.[12] According to conventional criteria, fatty liver disease was diagnosed from characteristic echo patterns, such as diffusely increased near-field liver echo (bright liver), liver echo greater than that of the kidney, vascular blurring, and gradual attenuation of the far-field echo.[13] Abdominal ultrasonography was performed by well-trained sonographers who were unaware of the clinical presentation and laboratory findings of the participants throughout the examination.
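The quoted 85% sensitivity and 94% specificity translate into predictive values only once a prevalence is assumed. As a hedged worked example (standard Bayes arithmetic, not a calculation from the paper), at roughly the 53% NAFLD prevalence later observed in this cohort, ultrasonography would have a positive predictive value of about 0.94 and a negative predictive value of about 0.85:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from test characteristics and prevalence."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sensitivity=0.85, specificity=0.94, prevalence=0.53)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # roughly 0.94 and 0.85
```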
RESULTS: Baseline characteristics of study participants: From the initial sample of 9078 participants, 3396 residents aged ≥40 years with complete ultrasonography data were selected as the study sample. Of these 3396 participants, 784 were excluded for the following reasons: 168 had a history of myocardial infarction, heart failure, stroke, atrial fibrillation, or cancer; 369 had excessive alcohol consumption; 131 had a history of positive HBsAg; and 116 had incomplete information. Finally, 2612 participants were included in the analysis. Demographic data, clinical characteristics, and biochemical variables of all participants are presented in Table 1. Among the 2612 participants, the mean age was 53.6 ± 8.6 years and 41.8% (n = 1091) were men. The 1375 participants with NAFLD comprised 707 males (64.8% of men) and 668 females (43.9% of women). In total, 342 (24.9%) participants with NAFLD were current smokers. WHR, TG, and TC were higher in participants with NAFLD than in those without NAFLD, and the prevalence of diabetes and hypertension was also higher in the NAFLD group. Participants with NAFLD had a higher prevalence of carotid stenosis than those without NAFLD (<50% stenosis: 17.0% vs. 11.2%; ≥50% stenosis: 13.0% vs. 4.6%), as well as a higher prevalence of unstable plaque (19.4% vs. 12.8%).
Table 1: Baseline characteristics of participants enrolled in this study. *: t values; †: χ2 values; ‡: Z values. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; TC: Total cholesterol; HDL-C: High-density lipoprotein-cholesterol; LDL-C: Low-density lipoprotein-cholesterol; CIMT: Carotid intima-media thickness; SD: Standard deviation; IQR: Interquartile range.

Association between carotid stenosis and nonalcoholic fatty liver disease: Among the 1375 participants with NAFLD, 12.9% (178/1375) met the diagnostic criteria for carotid stenosis. After adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C, the association between NAFLD and carotid stenosis was statistically significant (OR: 2.06, 95% CI: 1.45–2.91). In age- and gender-stratified analyses, the association between NAFLD and carotid stenosis remained significantly positive in each group (female: OR: 2.34, 95% CI: 1.29–4.24; male: OR: 1.89, 95% CI: 1.22–2.91; 40–60 years: OR: 1.76, 95% CI: 1.15–2.70; >60 years: OR: 3.12, 95% CI: 1.62–6.02) [Figure 1]. With the normal category as the reference group, a positive association was observed between ≥50% carotid stenosis and NAFLD in the analysis by stenosis severity (OR: 2.06, 95% CI: 1.45–2.93, P < 0.01) [Table 2].

Figure 1: The prevalence of carotid stenosis or plaque in overall participants and in participants with and without NAFLD; ORs with 95% CIs of NAFLD for carotid stenosis or plaque overall and by gender and age category.
Covariates for the total group included age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; covariates for the gender subgroups included age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; and covariates for the age subgroups included gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; OR: Odds ratio; CI: Confidence interval; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol.

Table 2: Association between different severities of carotid stenosis and NAFLD stratified by gender and age. Data are presented as ORs (95% CIs). *Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †gender subgroups adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡age subgroups adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.
Relationship between carotid plaque and nonalcoholic fatty liver disease: NAFLD was not significantly associated with carotid plaque in the overall sample (OR: 1.10, 95% CI: 0.80–1.40) after adjusting for age, gender, education level, income, physical activity, smoking status, diabetes, hypertension, WHR, TG, and HDL-C [Figure 1]. No significant correlation between carotid plaque and NAFLD was found after stratification by age and gender. When carotid plaque was further classified by stability, the associations of unstable and stable plaque with NAFLD were still not statistically significant (unstable plaque: OR: 1.13, 95% CI: 0.87–1.35; stable plaque: OR: 0.94, 95% CI: 0.53–1.67) after adjustment for potential confounders [Table 3].

Table 3: Association between carotid plaque of different stability and NAFLD stratified by gender and age. Data are presented as ORs (95% CIs). *Total adjusted for age, gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; †gender subgroups adjusted for age, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C; ‡age subgroups adjusted for gender, smoking status, income, physical activity, diabetes, hypertension, WHR, TG, and HDL-C. NAFLD: Nonalcoholic fatty liver disease; WHR: Waist-hip ratio; TG: Triglyceride; HDL-C: High-density lipoprotein-cholesterol; CIs: Confidence intervals; ORs: Odds ratios.
DISCUSSION: In the present study, we focused on the correlation between NAFLD and carotid artery disease. The principal finding was that the association between NAFLD and carotid stenosis was independently significant, whereas the association with carotid plaque was not, in this community-based population. This study attempted to explore the association between NAFLD and carotid artery disease (assessed by carotid stenosis and carotid plaque) in China. Substantial evidence indicates that NAFLD is related to cardiovascular disease.[22,23] Sookoian and Pirola[2] performed a systematic review and reported that NAFLD was associated with the presence of carotid plaque and with endothelial dysfunction, a reliable marker of subclinical atherosclerosis. Volzke et al.[24], in a study including 3212 participants, found that carotid plaques were more frequent in patients with NAFLD than in those without. In this study, we did not observe a significant association between NAFLD and carotid plaque.
Consistent with our results, a study in middle-aged and elderly Chinese showed that the association between NAFLD and carotid plaque was not statistically significant.[25] However, in a study of 14,445 adults, NAFLD was found to be associated with an increased risk of ultrasound-diagnosed carotid plaque.[26] The inconsistent association between NAFLD and carotid plaque might be explained by ethnic and regional differences and by differences in the methodology used to define NAFLD; the association therefore needs further validation in populations of multiple ethnicities and regions. Earlier studies also demonstrated that the presence of carotid plaque increases with age and differs by gender. Accordingly, we analyzed carotid plaques of different stability in relation to NAFLD stratified by age and gender, although the results were not statistically significant. This inconsistency might also be ascribed to ultrasound being less sensitive than biopsy for identifying NAFLD. NAFLD is considered a marker of metabolic disorders that could exaggerate the development of atherosclerosis.[27] However, the biological mechanism by which NAFLD promotes the development of atherosclerosis (measured through carotid plaque) remains unclear. Like carotid plaque, carotid stenosis was used as a surrogate marker of carotid artery disease in this study. In contrast to the inconsistent results regarding the association between carotid plaque and NAFLD, the findings for carotid stenosis and NAFLD are consistent. Several studies have demonstrated a positive association between NAFLD and cardiovascular-related disease (assessed by carotid IMT or carotid plaque).[3,28] Sinn et al.[29] reported that men with NAFLD had a higher risk of carotid stenosis among modest alcohol drinkers. In the present study, an independently significant correlation was detected between NAFLD and carotid artery disease (assessed by carotid artery stenosis), and a positive association between ≥50% carotid stenosis and NAFLD was found in the stratified analysis. In addition, NAFLD is assumed to be the hepatic manifestation of the metabolic syndrome. However, one study reported that metabolic syndrome was not associated with ≥50% carotid stenosis in patients with a recent diagnosis of CAD;[30] the relatively small sample size (168 patients) of that study might partly account for this inconsistency. Our results also showed that NAFLD was an independent risk factor for carotid stenosis across gender and age groups. The gender-specific association might reflect the effects of estrogens, which protect females from cardiovascular disease. Age was another important factor in the association between carotid stenosis and NAFLD. A previous study demonstrated that the prevalence of carotid stenosis and of NAFLD increases with age and that the relationship between NAFLD and carotid stenosis was statistically significant among older participants, similar to our results.[31] Furthermore, the association between CAD and NAFLD has been examined in several studies, which detected a higher prevalence of CAD in patients with NAFLD (80.4% vs. 60.7%).[32,33,34] A possible reason for these inconsistent results might be that patients underwent coronary angiography before NAFLD was detected.
These studies also confirmed that CAD was positively associated with NAFLD (OR: 3.31), indicating a significant association between NAFLD and cardiovascular disease. The positive association between NAFLD and carotid artery disease (assessed by carotid stenosis) found in the present study might therefore, to some extent, explain the higher risk of cerebrovascular disease among people with NAFLD, add to the available evidence, and inform risk prediction for these patients. According to recent epidemiological studies, the prevalence of NAFLD in China is higher than estimates from Western countries, is still increasing, and has reached epidemic proportions.[35] In the present study, participants with NAFLD had a higher prevalence of carotid stenosis and carotid plaque than those without NAFLD. Carotid stenosis represents a more advanced stage in the development of carotid atherosclerosis, and carotid plaque is closely linked to carotid atherosclerosis. Although the mechanism linking NAFLD and cardiovascular events remains elusive, some studies have proposed that the association between atherosclerosis and NAFLD involves a complex process of interacting insulin resistance, inflammation, and oxidative stress, which appears to be important in both the early and later stages of atherosclerotic progression.[36] A systemic inflammatory state with pro-inflammatory and atherogenic molecules might also play an important role in the relationship between NAFLD and cardiovascular disease.[37] These studies provide a basis for further exploration of the underlying mechanisms linking NAFLD and cardiovascular disease. Furthermore, common genetic variants are another factor influencing cardiovascular risk.[38] Apart from the main risk factors of age and gender, obesity and a range of metabolic problems also play an important role in the presence of NAFLD. These factors could predict the risk of carotid stenosis or carotid plaque and forecast cardiovascular-related diseases in NAFLD patients; carotid stenosis or plaque might therefore be a hub linking NAFLD and cardiovascular disease. We acknowledge several limitations of the present study. First, because of the cross-sectional design, the study could not evaluate the temporal nature of the relationship between NAFLD and carotid stenosis or plaque, nor could it support causal inference. Second, some participants were absent from the ultrasound examination, which might introduce selection bias and restrict the generalizability of the findings; moreover, the diagnosis of NAFLD was based on ultrasonography, which is less sensitive than liver biopsy and could bias the estimated prevalence of NAFLD. Third, although we excluded participants with a history of excessive alcohol consumption or positive HBsAg, other liver diseases, such as hepatitis C and liver cirrhosis, were not taken into account and might confound the association between NAFLD and carotid plaque. Finally, because all participants were drawn from the Jidong community of Tangshan city, they cannot be regarded as representative of the Chinese population. In conclusion, our data suggest that NAFLD is associated with carotid stenosis but not with carotid plaque. Compared with non-NAFLD individuals, participants with NAFLD have a higher risk of carotid stenosis, particularly women and older participants.
NAFLD might be a predictor of early carotid atherosclerosis, as assessed by carotid stenosis or carotid plaque as surrogate markers.

Financial support and sponsorship: This study was supported by grants from the National Natural Science Foundation of China (No. 81670294, No. 81202279, and No. 81473057) and the National Social Science Foundation of China (No. 17BGL184).

Conflicts of interest: There are no conflicts of interest.
Background: Nonalcoholic fatty liver disease (NAFLD) is one of the most common chronic liver diseases with a high prevalence in the general population. The association between NAFLD and cardiovascular disease has been well addressed in previous studies. However, whether NAFLD is associated with carotid artery disease in a community-based Chinese population remained unclear. The aim of this study was to investigate the association between NAFLD and carotid artery disease. Methods: A total of 2612 participants (1091 men and 1521 women) aged 40 years and older from Jidong of Tangshan city (China) were selected for this study. NAFLD was diagnosed by abdominal ultrasonography. The presence of carotid stenosis or plaque was evaluated by carotid artery ultrasonography. Logistic regression was used to analyze the association between NAFLD and carotid artery disease. Results: Participants with NAFLD have a higher prevalence of carotid stenosis (12.9% vs. 4.6%) and carotid plaque (21.9% vs. 15.0%) than those without NAFLD. After adjusting for age, gender, smoking status, income, physical activity, diabetes, hypertension, triglyceride, waist-hip ratio, and high-density lipoprotein, NAFLD is significantly associated with carotid stenosis (odds ratio [OR]: 2.06, 95% confidence interval [CI]: 1.45-2.91), but the association between NAFLD and carotid plaque is not statistically significant (OR: 1.10, 95% CI: 0.8-1.40). Conclusions: A significant association between NAFLD and carotid stenosis is found in a Chinese population.
null
null
9,588
293
[ 41, 196, 144, 440, 261, 294, 361, 511, 296, 42 ]
15
[ "carotid", "nafld", "stenosis", "plaque", "participants", "carotid stenosis", "age", "whr", "gender", "tg" ]
[ "cardiovascular disease carotid", "steatohepatitis cirrhosis nafld", "carotid atherosclerosis common", "fatty liver disease", "nafld metabolic syndrome" ]
null
null
[CONTENT] Association | Carotid Artery Disease | Carotid Stenosis | Nonalcoholic Fatty Liver Disease [SUMMARY]
[CONTENT] Association | Carotid Artery Disease | Carotid Stenosis | Nonalcoholic Fatty Liver Disease [SUMMARY]
[CONTENT] Association | Carotid Artery Disease | Carotid Stenosis | Nonalcoholic Fatty Liver Disease [SUMMARY]
null
[CONTENT] Association | Carotid Artery Disease | Carotid Stenosis | Nonalcoholic Fatty Liver Disease [SUMMARY]
null
[CONTENT] Adult | Carotid Artery Diseases | Carotid Intima-Media Thickness | China | Cross-Sectional Studies | Female | Humans | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Prevalence | Risk Factors [SUMMARY]
[CONTENT] Adult | Carotid Artery Diseases | Carotid Intima-Media Thickness | China | Cross-Sectional Studies | Female | Humans | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Prevalence | Risk Factors [SUMMARY]
[CONTENT] Adult | Carotid Artery Diseases | Carotid Intima-Media Thickness | China | Cross-Sectional Studies | Female | Humans | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Prevalence | Risk Factors [SUMMARY]
null
[CONTENT] Adult | Carotid Artery Diseases | Carotid Intima-Media Thickness | China | Cross-Sectional Studies | Female | Humans | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Prevalence | Risk Factors [SUMMARY]
null
[CONTENT] cardiovascular disease carotid | steatohepatitis cirrhosis nafld | carotid atherosclerosis common | fatty liver disease | nafld metabolic syndrome [SUMMARY]
[CONTENT] cardiovascular disease carotid | steatohepatitis cirrhosis nafld | carotid atherosclerosis common | fatty liver disease | nafld metabolic syndrome [SUMMARY]
[CONTENT] cardiovascular disease carotid | steatohepatitis cirrhosis nafld | carotid atherosclerosis common | fatty liver disease | nafld metabolic syndrome [SUMMARY]
null
[CONTENT] cardiovascular disease carotid | steatohepatitis cirrhosis nafld | carotid atherosclerosis common | fatty liver disease | nafld metabolic syndrome [SUMMARY]
null
[CONTENT] carotid | nafld | stenosis | plaque | participants | carotid stenosis | age | whr | gender | tg [SUMMARY]
[CONTENT] carotid | nafld | stenosis | plaque | participants | carotid stenosis | age | whr | gender | tg [SUMMARY]
[CONTENT] carotid | nafld | stenosis | plaque | participants | carotid stenosis | age | whr | gender | tg [SUMMARY]
null
[CONTENT] carotid | nafld | stenosis | plaque | participants | carotid stenosis | age | whr | gender | tg [SUMMARY]
null
[CONTENT] disease | cardiovascular | carotid | cardiovascular disease | nafld | artery disease | artery | syndrome | carotid artery disease | risk [SUMMARY]
[CONTENT] stenosis | carotid | plaque | plaques | ultrasound | cm | carotid stenosis | echo | liver | 50 [SUMMARY]
[CONTENT] nafld | gender | whr | tg | participants | ci | hdl | age | 95 ci | 95 [SUMMARY]
null
[CONTENT] nafld | carotid | stenosis | plaque | participants | carotid stenosis | conflicts interest | interest | conflicts | gender [SUMMARY]
null
[CONTENT] ||| NAFLD ||| NAFLD | Chinese ||| NAFLD [SUMMARY]
[CONTENT] 2612 | 1091 | 1521 | 40 years | Jidong | Tangshan city | China ||| NAFLD ||| ||| NAFLD [SUMMARY]
[CONTENT] NAFLD | 12.9% | 4.6% | 21.9% | 15.0% ||| NAFLD | 2.06 | 95% ||| CI | 1.45-2.91 | NAFLD | 1.10 | 95% | CI | 0.8-1.40 [SUMMARY]
null
[CONTENT] ||| ||| NAFLD ||| NAFLD | Chinese ||| NAFLD ||| 2612 | 1091 | 1521 | 40 years | Jidong | Tangshan city | China ||| NAFLD ||| ||| NAFLD ||| NAFLD | 12.9% | 4.6% | 21.9% | 15.0% ||| NAFLD | 2.06 | 95% ||| CI | 1.45-2.91 | NAFLD | 1.10 | 95% | CI | 0.8-1.40 ||| NAFLD | Chinese [SUMMARY]
null
Anatomical variations of the renal artery: a computerized tomographic angiogram study in living kidney donors at a Nigerian Kidney Transplant Center.
35222578
Understanding of the renal vascular anatomy is key to a safe and successful donor nephrectomy, which ultimately impacts on the renal graft function and survival in kidney transplant recipients.
BACKGROUND
The computerized tomography angiograms of 100 consecutive living kidney donors were prospectively reviewed over an 18-month period. Anatomical variations of the renal arteries including accessory arteries and early divisions were noted. Duration of surgery and ischemic time were recorded intra-operatively. Data analysis was carried out using IBM SPSS version 20.
MATERIALS AND METHODS
There were variations in renal artery configuration in 50 (50%) cases, 32% were accessory renal arteries while 18% were early branches of the renal artery. The classical bilateral solitary renal arteries were found in 50 (50%) of potential donors. There was statistically significant longer operating and ischemic time in donors with multiple renal arteries as compared with solitary arteries (p<0.05).
RESULTS
There are a wide variety of renal artery configurations seen in potential kidney donors. The classical solitary renal artery remains the commonest and most favourable configuration for donor nephrectomy and transplantation.
CONCLUSION
[ "Angiography", "Humans", "Kidney", "Kidney Transplantation", "Nephrectomy", "Nigeria", "Renal Artery", "Retrospective Studies" ]
8843298
Introduction
The anatomy of the renal artery plays a key role in selection of kidney donors for a renal transplant program1. The ideal renal artery for ease of vascular anastomosis is one which is solitary, of good diameter and length. Each kidney is classically supplied by a single renal artery, which arises as a lateral branch of abdominal aorta, between the levels of 1st and 2nd lumbar vertebrae2, 3. The left renal artery is shorter in length while the longer right renal artery passes posterior to the inferior vena cava (IVC) to gain access to the kidney at the renal hilum. Renal arteries give branches to the adrenal glands, renal pelvis and proximal ureters. After entering the hilum, each artery divides into five segmental end arteries which do not freely anastomose with each other. This therefore means that injury to a segmental renal artery would cause a segmental renal infarction. Computerized Tomography Angiography (CTA) is the investigation of choice for identifying the renal arterial anatomy4. Studies have shown that the classical description of the renal vasculature, formed by one renal artery and vein are found in less than 25% of the population5, 6. There have also been controversies in literature with regards the number of renal arteries and their branches7–9. Most often encountered morphological variations of renal artery are its variable number and unusual branches originating from it10–12. These variations are sometime incidentally found during autopsy or surgical operation. Terminologies like supranumerary, supplementary and accessory have been used to describe the variable configurations of the renal artery. According to Sampaio and Passos, these arteries are referred to as multiple, since they are segmental vessels for the kidneys, without anastomoses between themselves and they should be named according to the territory supplied by them as- hilar, superior polar and inferior polar13. The knowledge of the possible variations in the configuration of the renal arteries is necessary for surgical management during donor nephrectomy, repair of abdominal aorta aneurysm, other retroperitoneal urological procedures and angiographic interventions.14–16 The aim of this study is to report the various anatomical configurations of the renal arteries in a cohort of randomised healthy kidney donors using computerized tomography angiogram.
null
null
Result
A total of 100 patients were successfully recruited for this study. Their ages ranged from 18 to 53 years, with a mean age of 31.2 ± 7.7 years. Males accounted for 88% and the rest were females (Table 1). Ninety-eight (98%) of the donors eventually underwent donor nephrectomy, while 2 of the potential donors could not proceed to surgery because of a sudden death (1) and a cerebrovascular accident (1) in the recipients a few hours to days before surgery. Table 1: Socio-demographic distribution of living kidney donors. Of the 100 CTAs studied, we observed that 50 (50%) had the classical bilateral renal arterial anatomy of a solitary renal artery with no accessory branch or early division (Figure 1). Among the other 50 patients, 32 (32%) had either unilateral or bilateral accessory renal arteries while 18 (18%) had unilateral or bilateral early divisions of the renal artery. An accessory renal artery was found on the right side in 10 patients (20%) and on the left side in 12 cases (24%), respectively (Figure 2). Figure 1: Single renal artery on both sides. Figure 2: An accessory hilar artery to the left kidney. Bilateral accessory vessels were found in another 10 patients (Figure 3), with various rare variants seen in our series (Figure 4). Of the 18 (18%) cases of early branching arteries, 9% and 8% were found on the right and left sides, respectively, while only one case of bilateral early branching was seen (Table 2). Figure 3: Bilateral accessory renal arteries. Figure 4: Unusual bilateral accessory renal arteries. Table 2: Distribution of renal artery configuration findings on CT angiogram. The presence of both accessory renal arteries and early divisions was higher among kidneys of male donors than among those of female donors (p = 0.002 and p = 0.003, respectively). There was a co-existence of early division and accessory renal arteries in 2 patients (Figure 5). Figure 5: Co-existence of an accessory right renal artery and a left early division of the renal artery. In the 10 cases with bilateral accessory renal arteries, the side with the larger-diameter accessory renal artery was selected for donor nephrectomy. These renal allografts, once extracted intra-operatively, required meticulous bench vascular surgery: 6 allografts had a bench arterio-arterial anastomosis in end-to-side fashion to produce a common stump using prolene 6/0 sutures, while 4 others had 2 separate anastomoses of the renal arteries to the external iliac artery in end-to-side fashion. All kidneys had good perfusion and renal function on the table. There was a significantly longer surgery time and warm and cold ischemic time in renal allografts with multiple arteries (p<0.05) (Table 3). These findings significantly affected our decision on which renal unit to harvest from the donor for transplantation. There was an inadvertent transection of an early division of the renal artery in 1 donor nephrectomy allograft, with consequent devascularisation of about 20% of the allograft, requiring bench re-anastomosis of the transected segments of the artery with a satisfactory outcome. Total surgery time and warm and cold ischemic time were significantly longer among kidney donors with multiple (accessory/early branching) renal arteries than among those with solitary renal arteries (Table 3). Table 3: Breakdown of mean durations of surgery time and ischemic time.
Conclusion
Fifty percent of living kidney donors have either early divisions or accessory renal arteries. It has become increasingly important for kidney transplant surgeons to familiarize themselves with the anatomical variations of the renal artery and their correlation with surgery.
[ "Recommendation" ]
[ "There maybe a need to carry out a larger multi-centered and international study in order to assess a larger population of donors so as to have a more robust and representative result." ]
[ null ]
[ "Introduction", "Materials and Method", "Result", "Discussion", "Conclusion", "Recommendation" ]
[ "The anatomy of the renal artery plays a key role in selection of kidney donors for a renal transplant program1. The ideal renal artery for ease of vascular anastomosis is one which is solitary, of good diameter and length. Each kidney is classically supplied by a single renal artery, which arises as a lateral branch of abdominal aorta, between the levels of 1st and 2nd lumbar vertebrae2, 3. The left renal artery is shorter in length while the longer right renal artery passes posterior to the inferior vena cava (IVC) to gain access to the kidney at the renal hilum. Renal arteries give branches to the adrenal glands, renal pelvis and proximal ureters. After entering the hilum, each artery divides into five segmental end arteries which do not freely anastomose with each other. This therefore means that injury to a segmental renal artery would cause a segmental renal infarction. Computerized Tomography Angiography (CTA) is the investigation of choice for identifying the renal arterial anatomy4. Studies have shown that the classical description of the renal vasculature, formed by one renal artery and vein are found in less than 25% of the population5, 6. There have also been controversies in literature with regards the number of renal arteries and their branches7–9. Most often encountered morphological variations of renal artery are its variable number and unusual branches originating from it10–12. These variations are sometime incidentally found during autopsy or surgical operation. Terminologies like supranumerary, supplementary and accessory have been used to describe the variable configurations of the renal artery. According to Sampaio and Passos, these arteries are referred to as multiple, since they are segmental vessels for the kidneys, without anastomoses between themselves and they should be named according to the territory supplied by them as- hilar, superior polar and inferior polar13. The knowledge of the possible variations in the configuration of the renal arteries is necessary for surgical management during donor nephrectomy, repair of abdominal aorta aneurysm, other retroperitoneal urological procedures and angiographic interventions.14–16\nThe aim of this study is to report the various anatomical configurations of the renal arteries in a cohort of randomised healthy kidney donors using computerized tomography angiogram.", "This was a prospective, cross-sectional, hospital-based study conducted on 100 healthy living kidney donors in Zenith Medical and Kidney Center (ZMKC), Gudu, Abuja over an 18- month period (January 2018 to June 2019). Patients who were being planned for donor nephrectomy haven been found compatible with a recipient were recruited for this study. During the study period, the CT angiographic images were reviewed independently by two consultant urologists and a radiologist. All CT angiographic images were performed in the radiology unit of ZMKC using the unit protocol. The images were studied for the number of renal arteries originating from the abdominal aorta, the presence of early divisions into segmental arteries and presence of accessory arteries. For the purpose of this study, any division of the renal artery within 1cm from its origin on the abdominal aorta was considered an early division while any other artery arising from the aorta or its branches to supply the kidney other than the main renal artery was termed an accessory artery. Tiny cortical branches were not taken into account as accessory renal arteries. 
Donor nephrectomies were all performed using a flank, extra-peritoneal approach through an 11th rib transcostal or subcostal incision, while the recipient kidney transplant was performed in the supine position via an extra-peritoneal approach using a Gibson incision. The findings were analyzed in both kidneys per patient and data were entered into a pre-designed proforma. Descriptive statistics were used to express the data, with continuous variables summarized using the arithmetic mean and standard deviation, while categorical variables were summarized as proportions and frequencies. A p-value of less than 0.05 was considered significant. Data analysis was carried out using IBM Statistical Package for Social Sciences (SPSS) version 20.", "A total of 100 patients were successfully recruited for this study. Their ages ranged from 18 to 53 years, with a mean age of 31.2 ± 7.7 years. Males accounted for 88% and the rest were females (Table 1). Ninety-eight (98%) of the donors eventually underwent donor nephrectomy, while 2 of the potential donors could not proceed to surgery because of a sudden death (1) and a cerebrovascular accident (1) in the recipients a few hours to days before surgery.\nTable 1: Socio-demographic distribution of living kidney donors\nOf the 100 CTAs studied, we observed that 50 (50%) had the classical bilateral renal arterial anatomy of a solitary renal artery with no accessory branch or early division (Figure 1). Among the other 50 patients, 32 (32%) had either unilateral or bilateral accessory renal arteries while 18 (18%) had unilateral or bilateral early divisions of the renal artery. An accessory renal artery was found on the right side in 10 patients (20%) and on the left side in 12 cases (24%), respectively (Figure 2).\nFigure 1: Single renal artery on both sides\nFigure 2: An accessory hilar artery to the left kidney\nBilateral accessory vessels were found in another 10 patients (Figure 3), with various rare variants seen in our series (Figure 4). Of the 18 (18%) cases of early branching arteries, 9% and 8% were found on the right and left sides, respectively, while only one case of bilateral early branching was seen (Table 2).\nFigure 3: Bilateral accessory renal arteries\nFigure 4: Unusual bilateral accessory renal arteries\nTable 2: Distribution of renal artery configuration findings on CT angiogram\nThe presence of both accessory renal arteries and early divisions was higher among kidneys of male donors than among those of female donors (p = 0.002 and p = 0.003, respectively).\nThere was a co-existence of early division and accessory renal arteries in 2 patients (Figure 5).\nFigure 5: Co-existence of an accessory right renal artery and a left early division of the renal artery\nIn the 10 cases with bilateral accessory renal arteries, the side with the larger-diameter accessory renal artery was selected for donor nephrectomy. These renal allografts, once extracted intra-operatively, required meticulous bench vascular surgery: 6 allografts had a bench arterio-arterial anastomosis in end-to-side fashion to produce a common stump using prolene 6/0 sutures, while 4 others had 2 separate anastomoses of the renal arteries to the external iliac artery in end-to-side fashion. All kidneys had good perfusion and renal function on the table. There was a significantly longer surgery time and warm and cold ischemic time in renal allografts with multiple arteries (p<0.05) (Table 3). These findings significantly affected our decision on which renal unit to harvest from the donor for transplantation. 
There was an inadvertent transection of an early division of the renal artery in 1 donor nephrectomy allograft, with consequent devascularisation of about 20% of the allograft requiring bench re-anastomosis of the transected segments of the artery, with a satisfactory outcome. Total surgery time and warm and cold ischemic times were significantly longer among kidney donors with multiple (accessory/early branching) renal arteries when compared to those with solitary renal arteries (Table 3).\nBreakdown of Mean durations of surgery time and ischemic time", "The knowledge of the detailed anatomy of the renal arteries plays an important role in planning major surgical procedures like live donor nephrectomy. The need for CT angiographic studies can therefore not be over-emphasized in this regard. The presence of an accessory renal artery connotes a more challenging surgical prospect for both the donor nephrectomy and allograft implantation surgeons. In cases where the kidney with an accessory renal artery cannot be avoided for donor nephrectomy (especially when accessory arteries occur bilaterally), there is a need for meticulous dissection of the accessory arteries to ensure good length for ease of anastomosis. This also translates to a longer cold and second warm ischemic time, as there may be a need for a double-barrel bench anastomosis of the accessory arteries or multiple vascular anastomoses with the iliac artery of choice in the recipient. Considering these challenges, it is usually the surgeon's preference to harvest a renal unit with the classical configuration of the renal artery. Our study revealed that 1 in 2 patients had the classical single bilateral renal arteries, which was similar to findings by Kumaresan et al, where 49% had the normal anatomy17. The high incidence of renal artery variations regarding their origin and number found in the index study has been similarly reported by many researchers7–9. The various types of accessory renal arteries, their positions, method of entry into the kidney and its segmentation were studied extensively by Sykes (1963)18. Obstruction of any of these vessels leads to infarction of the segment of the kidney supplied.13 Due to the important clinical correlation of these renal arterial variations, terminologies like hilar, superior polar and inferior polar renal arteries have been described in the literature in order to better qualify them.\nSimilar to our findings on the prevalence of early branching and accessory renal arteries (18% and 32%, respectively), Budhiraja et al in an Indian cadaveric study and Kumaresan et al reported close results. This, however, differed slightly from a study by Gumus et al, who reported early division in 27% and accessory renal artery in 27%19. This may be a result of racial differences, as renal artery variations are more common in Africans than in Indians20, 21. Bilateral early branching, which was seen in only 1% in the index study, is also a rare finding in the literature (ref). Intra-operatively, during donor nephrectomy and renal pelvis surgeries, superior and inferior polar extra-hilar branches can be injured during mobilization13. Inferior polar artery injuries can be a cause of ureteropelvic junction obstruction10. Nowadays, allografts with multiple renal arteries have become a necessity to maintain the donor pool22, but their outcome is still a matter of discussion. 
Some surgical scholars opine that multiple renal arteries increase the chances of acute rejection and poor graft function23, 24, while others, like Benedetti et al25, did not find a significant difference with regard to acute rejection rates in grafts with single versus multiple arteries. It is, however, generally agreed that allografts with multiple renal arteries carry a higher risk of renal artery stenosis26 and pose technical difficulties for the surgeon performing the transplant operation7, 27.", "Fifty percent of living kidney donors have either early divisions or accessory renal arteries. It has become increasingly important for kidney transplant surgeons to familiarize themselves with the anatomical variations of the renal artery and their correlation with surgery.", "There may be a need to carry out a larger, multi-centered, international study in order to assess a larger population of donors so as to obtain a more robust and representative result." ]
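The statistical workflow described in the Materials and Method section above (proportions and frequencies for categorical variables, means ± SD for continuous variables, a p < 0.05 threshold, SPSS version 20) can also be reproduced with open-source tools. The sketch below is a minimal illustration only: the contingency counts and ischemic-time values are hypothetical placeholders, not the study's data, and all variable names are assumptions.

```python
# Illustrative re-analysis of the comparisons described in the Methods above.
# All counts and times below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

# 2 x 2 table: rows = donor sex (male, female),
#              columns = accessory renal artery (present, absent)
table = np.array([
    [30, 58],  # male donors   (hypothetical split)
    [2, 10],   # female donors (hypothetical split)
])
chi2, p_sex, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_sex:.3f}")

# Independent-samples comparison of warm ischemic time (minutes)
# between solitary-artery and multiple-artery allografts.
solitary = np.array([2.8, 3.0, 3.1, 3.3, 2.9])  # hypothetical values
multiple = np.array([3.9, 4.2, 4.5, 4.0, 4.7])  # hypothetical values
t_stat, p_time = stats.ttest_ind(solitary, multiple, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_time:.3f}")
```

With the real per-donor data in place of the placeholders, the same two calls would mirror the reported sex-versus-accessory-artery comparison and the solitary-versus-multiple-artery comparison of operative and ischemic times.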
[ "intro", "materials|method", "results", "discussion", "conclusions", null ]
[ "Renal artery", "variations", "living kidney donors" ]
Background: Understanding of the renal vascular anatomy is key to a safe and successful donor nephrectomy, which ultimately impacts on the renal graft function and survival in kidney transplant recipients. Methods: The computerized tomography angiograms of 100 consecutive living kidney donors were prospectively reviewed over an 18-month period. Anatomical variations of the renal arteries including accessory arteries and early divisions were noted. Duration of surgery and ischemic time were recorded intra-operatively. Data analysis was carried out using IBM SPSS version 20. Results: There were variations in renal artery configuration in 50 (50%) cases, 32% were accessory renal arteries while 18% were early branches of the renal artery. The classical bilateral solitary renal arteries were found in 50 (50%) of potential donors. There was statistically significant longer operating and ischemic time in donors with multiple renal arteries as compared with solitary arteries (p<0.05). Conclusions: There are a wide variety of renal artery configurations seen in potential kidney donors. The classical solitary renal artery remains the commonest and most favourable configuration for donor nephrectomy and transplantation.
2,005
211
[ 35 ]
6
[ "renal", "artery", "arteries", "renal artery", "accessory", "renal arteries", "kidney", "early", "accessory renal", "donor" ]
[ "renal artery selected", "vascular renal arterial", "segmental renal artery", "renal artery configuration", "renal arterial anatomy4" ]
null
[CONTENT] Renal artery | variations | living kidney donors [SUMMARY]
null
[CONTENT] Renal artery | variations | living kidney donors [SUMMARY]
[CONTENT] Renal artery | variations | living kidney donors [SUMMARY]
[CONTENT] Renal artery | variations | living kidney donors [SUMMARY]
[CONTENT] Renal artery | variations | living kidney donors [SUMMARY]
[CONTENT] Angiography | Humans | Kidney | Kidney Transplantation | Nephrectomy | Nigeria | Renal Artery | Retrospective Studies [SUMMARY]
null
[CONTENT] Angiography | Humans | Kidney | Kidney Transplantation | Nephrectomy | Nigeria | Renal Artery | Retrospective Studies [SUMMARY]
[CONTENT] Angiography | Humans | Kidney | Kidney Transplantation | Nephrectomy | Nigeria | Renal Artery | Retrospective Studies [SUMMARY]
[CONTENT] Angiography | Humans | Kidney | Kidney Transplantation | Nephrectomy | Nigeria | Renal Artery | Retrospective Studies [SUMMARY]
[CONTENT] Angiography | Humans | Kidney | Kidney Transplantation | Nephrectomy | Nigeria | Renal Artery | Retrospective Studies [SUMMARY]
[CONTENT] renal artery selected | vascular renal arterial | segmental renal artery | renal artery configuration | renal arterial anatomy4 [SUMMARY]
null
[CONTENT] renal artery selected | vascular renal arterial | segmental renal artery | renal artery configuration | renal arterial anatomy4 [SUMMARY]
[CONTENT] renal artery selected | vascular renal arterial | segmental renal artery | renal artery configuration | renal arterial anatomy4 [SUMMARY]
[CONTENT] renal artery selected | vascular renal arterial | segmental renal artery | renal artery configuration | renal arterial anatomy4 [SUMMARY]
[CONTENT] renal artery selected | vascular renal arterial | segmental renal artery | renal artery configuration | renal arterial anatomy4 [SUMMARY]
[CONTENT] renal | artery | arteries | renal artery | accessory | renal arteries | kidney | early | accessory renal | donor [SUMMARY]
null
[CONTENT] renal | artery | arteries | renal artery | accessory | renal arteries | kidney | early | accessory renal | donor [SUMMARY]
[CONTENT] renal | artery | arteries | renal artery | accessory | renal arteries | kidney | early | accessory renal | donor [SUMMARY]
[CONTENT] renal | artery | arteries | renal artery | accessory | renal arteries | kidney | early | accessory renal | donor [SUMMARY]
[CONTENT] renal | artery | arteries | renal artery | accessory | renal arteries | kidney | early | accessory renal | donor [SUMMARY]
[CONTENT] renal | artery | renal artery | segmental | arteries | variations | kidney | renal arteries | according | computerized tomography [SUMMARY]
null
[CONTENT] renal | accessory | bilateral | artery | early | arteries | figure | bilateral accessory | table | surgery [SUMMARY]
[CONTENT] renal | kidney | variations renal artery correlation | surgeons familiarize anatomical variations | arteries increasingly | arteries increasingly important | arteries increasingly important kidney | early divisions accessory renal | early divisions accessory | familiarize [SUMMARY]
[CONTENT] renal | artery | arteries | renal artery | accessory | renal arteries | kidney | early | accessory renal | donors [SUMMARY]
[CONTENT] renal | artery | arteries | renal artery | accessory | renal arteries | kidney | early | accessory renal | donors [SUMMARY]
[CONTENT] [SUMMARY]
null
[CONTENT] 50 | 50% | 32% | 18% ||| 50 | 50% ||| p<0.05 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| 100 | 18-month ||| ||| ||| IBM | SPSS | 20 ||| 50 | 50% | 32% | 18% ||| 50 | 50% ||| p<0.05 ||| ||| [SUMMARY]
[CONTENT] ||| 100 | 18-month ||| ||| ||| IBM | SPSS | 20 ||| 50 | 50% | 32% | 18% ||| 50 | 50% ||| p<0.05 ||| ||| [SUMMARY]
Resolution of left atrial appendage thrombi: No difference between phenprocoumon and non-vitamin K-dependent oral antagonists.
35373849
Atrial fibrillation is the most important risk factor for left atrial appendage (LAA) thrombi, a potentially life-threatening condition. Thrombus resolution may prevent embolic events and allow rhythm-control strategies, which have been shown to reduce cardiovascular complications.
BACKGROUND
Consecutive patients with LAA-thrombi from June 2013 to June 2017 were included in an observational single-center analysis. The primary endpoint was defined as the resolution of the thrombus. The observational period was 1 year. Resolution rates in patients on phenprocoumon or NOACs were compared and the time to resolution was analyzed.
METHODS
We identified 114 patients with LAA-thrombi. There was no significant difference in the efficacy of resolution between phenprocoumon and NOACs (p = .499) at the time of first control which took place after a mean of 58 ± 42.2 (median 48) days. At first control most thrombi were dissolved (74.6%). The analysis after set-time intervals revealed a resolution rate of 2/3 of LAA-thrombi after 8-10 weeks in the phenprocoumon and NOAC groups. After 12 weeks a higher number of thrombi had resolved in the presence of NOAC (89.3%) whereas in the presence of phenprocoumon 68.3% had resolved (p = .046).
RESULTS
In this large observational study NOACs were found to be potent drugs for the resolution of LAA-thrombi. In addition, the resolution of LAA-thrombi was found to be faster in the presence of NOAC as compared to phenprocoumon.
CONCLUSION
[ "Administration, Oral", "Anticoagulants", "Atrial Appendage", "Atrial Fibrillation", "Echocardiography, Transesophageal", "Humans", "Phenprocoumon", "Thrombosis" ]
9175243
INTRODUCTION
Over 26 million people worldwide suffer a stroke every year. In western countries 20% of all strokes and transient ischemic attacks (TIAs) are of cardioembolic origin.1, 2 Cardioembolic strokes are more severe than other ischemic strokes.3, 4 There has been a steady increase in cardioembolic strokes over the last few years.5 In patients with atrial fibrillation (AF), the most important risk factor, consistent anticoagulant therapy avoids 70% of all cardioembolic strokes.5, 6 Therefore, the presence of intracardiac thrombi is a potentially life-threatening condition because of the risk of embolization. To allow rhythm control in patients with AF, which may be of prognostic relevance,7 the presence of intracardiac thrombi has to be excluded in advance. Thus, not only prevention but also adequate therapy of existing thrombi is of utmost importance. Today non-vitamin K-dependent oral anticoagulants (NOACs) are favored for the prevention of intracardiac thrombi in the majority of patients with AF.8 However, only limited data exist on the use of NOACs for the resolution of already existing left atrial appendage (LAA) thrombi.9 NOACs benefit from their simpler application compared to vitamin K antagonists (VKAs) and are largely independent of alimentation or co-medication. Additionally, therapeutic levels are consistent for most patients and their use is mostly restricted by renal insufficiency. In contrast, VKAs show many interdependencies, especially with the frequently used antiarrhythmic drug amiodarone, and therapeutic levels can vary depending on alimentation. Still, the amount of practical experience on thrombus resolution acquired over decades may favor VKA.10, 11 This study aims at comparing the efficacy of phenprocoumon and NOACs in the resolution of LAA-thrombi in a real-world setting.
METHODS
This analysis included all consecutive patients diagnosed with an intracardiac thrombus from June 2013 to June 2017 in a general cardiology clinic (SFH Münster, Germany). The intent was to compare the resolving potential of NOACs and the VKA phenprocoumon on thrombi of the LAA. Patients with non-LAA thrombi (e.g., left ventricular thrombi) were excluded. The primary endpoint was defined as the resolution of the thrombus when patients presented again for follow-up. Informed consent was obtained from all individual participants included in the study. The datasets generated and analyzed during the current study are not publicly available, as per internal protocol, but are available from the corresponding author on reasonable request. Persistence of the intracardiac thrombi after 1 year was defined as the secondary endpoint. Controls were performed by TEE, and follow-up appointments were scheduled according to hospital capacity between 4 and 13 weeks after diagnosis. All changes in anticoagulant therapy, with date and specific substance, were analyzed. This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Ethics Committee of “Ärztekammer Westfalen-Lippe” (Nr. 2019-641-f-S). LAA thrombi (n = 114) were diagnosed by transesophageal echocardiography (TEE; n = 111), computed tomography (CT, n = 2), or magnetic resonance imaging (MRI, n = 1). The vast majority of patients presented with symptomatic AF. Some patients presented with cardioembolic complications and were diagnosed with AF during the subsequent work-up of the thromboembolic event. Statistical analysis After collecting all data, the evaluation and statistical analysis were performed using SPSS Version 25 for Mac OS X (SPSS Inc.). All descriptive data were specified by absolute and relative frequency and complemented with median, arithmetic mean, and SD where necessary. The χ² test was used to test for independence. The Kaplan–Meier estimator was used in the context of event history analysis to identify time to resolution under therapy with different types of anticoagulation. Log-rank tests were used to test for significance. In the case of small group sizes, Fisher's exact test was applied.
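As a rough illustration of the Kaplan–Meier and log-rank analysis described above, the following Python sketch (using the lifelines package rather than SPSS) estimates time to thrombus resolution per anticoagulant group. The file name and column names are assumptions for the example, not the study's actual dataset.

```python
# Minimal sketch of the Kaplan-Meier / log-rank analysis described above.
# The CSV file and column names are assumptions for illustration, not the study's data:
#   days_to_event : days from thrombus diagnosis to resolution or last control
#   resolved      : 1 if the thrombus was dissolved, 0 if still present (censored)
#   drug          : "phenprocoumon" or "noac"
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("laa_thrombi.csv")  # hypothetical per-patient table

kmf = KaplanMeierFitter()
for drug, group in df.groupby("drug"):
    kmf.fit(group["days_to_event"], event_observed=group["resolved"], label=drug)
    # median time until resolution in this treatment group
    print(drug, kmf.median_survival_time_)

vka = df[df["drug"] == "phenprocoumon"]
noac = df[df["drug"] == "noac"]
result = logrank_test(
    vka["days_to_event"], noac["days_to_event"],
    event_observed_A=vka["resolved"], event_observed_B=noac["resolved"],
)
print("log-rank p-value:", result.p_value)
```

Patients whose thrombus never resolved within the observation period would enter with resolved = 0 and their last follow-up time, which is how the censoring implicit in the Kaplan–Meier estimator is handled.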
RESULTS
Baseline characteristics and comorbidities One hundred and sixty-three consecutive patients were included; of those, 114 were diagnosed with an intracardiac thrombus located in the left atrium (n = 1) or LAA (n = 113). The remaining patients had LV-thrombi (n = 36) or thrombi of other locations (n = 13) and were not taken into account. After thrombus detection, overall baseline characteristics between patients on phenprocoumon and NOAC did not differ significantly (Table 1). Groups were comparable with regard to comorbidities, with a slightly lower average CHA2DS2-VASc score in the NOAC group (2.68 ± 1.1, n = 19 vs. 3.25 ± 1.8, n = 48; p = .207).
Characteristics of the patient collective with LAA-thrombi (n = 114) in absolute numbers and percentages, including SD where needed, at the time of inclusion. Note: Characteristics of patients on phenprocoumon and NOAC at the time of first control. The p value is calculated between patients on phenprocoumon and patients on NOAC. Impairment of kidney function was defined as GFR < 90 ml/min/1.73 m² body surface area.
Use of oral anticoagulants At the time of thrombus detection, 47 patients (41.2%) were already on oral anticoagulation (28 on phenprocoumon, 9 on Rivaroxaban, 6 on Apixaban, 3 on Edoxaban, and 1 on Dabigatran). In those not on OAC, anticoagulation was started with phenprocoumon (n = 40) or NOAC (n = 7 Rivaroxaban, n = 8 Apixaban, n = 1 Dabigatran) according to the decision of the treating physician. Heparin was the OAC of first choice in 10 patients. All of these patients were critically ill: four died within 2 weeks after diagnosis, four were lost to follow-up, and the remaining two were discontinued on anticoagulation because of bleeding complications. If a thrombus was detected in the presence of OAC, the drugs were continued in 25 patients (53.2%), changed to phenprocoumon after previous NOAC therapy in 15 out of 19 (79.0%), or from phenprocoumon to NOAC in 6 out of 28 patients (21.4%); 3 were put on Rivaroxaban and 3 on Apixaban. One patient was changed to heparin after previous phenprocoumon therapy. Overall, a steady increase in the use of NOACs during the study period was noted, starting at 6.25% in 2013 and reaching 46.2% at the end of the inclusion period in 2018.
AF and LAA thrombi Most patients (91.9%) with an LAA-thrombus had documented AF or atrial flutter at the time of diagnosis. Persistent AF was the most common type overall as well as in the subgroups of patients on phenprocoumon or NOAC (68.8% and 78.9%, respectively; Table 1). Correspondingly, the majority of patients for whom all relevant data points could be acquired (n = 99) showed varying degrees of dilatation of the LA; only 23.2% of patients showed no dilatation according to the American Society of Echocardiography (Table 1).12 The mean size of LAA thrombi was 128.9 mm² (±142.5) for the 38 patients for whom all relevant data points could be acquired.
Major adverse events The majority of patients (88.4%) did not suffer a stroke before being diagnosed with an intracardiac thrombus. During the course of the study, 5.3% (n = 6; 4 on VKA, none on NOAC) of patients suffered a stroke and 1.8% (n = 2; both on VKA) a TIA. In the group of patients who had a previous stroke (n = 11), two patients (18.2%; both on VKA) suffered a recurrent stroke and one patient a recurrent TIA after the diagnosis of intracardiac thrombi was established.
Time of control and resolution of thrombi Out of the 114 LAA-thrombi, 67 (62.3%) patients were controlled at least once and 47 patients were lost to follow-up for several reasons, for example, death (n = 7), referral to a secondary hospital (n = 4), or a different follow-up modality (e.g., CT/MRI instead of TEE; n = 2). Yet, the majority of these patients did not present again within 1 year for their scheduled follow-up TEE (n = 34). The average time to first control TEE was 58 ± 42.2 (median 48) days, or 8 weeks. Disregarding the type of oral anticoagulation administered, the LAA-thrombi were dissolved in 74.6% of cases at the time of first control. Out of the 67 patients, 48 were treated with phenprocoumon. In this group 77.1% of LAA-thrombi were dissolved, and the average time to first control was 63 days (±48.1), or 9 weeks. The INR was considered effective in 69% of patients at this isolated point in time. The remaining 19 patients were treated with NOACs and showed a resolution rate of 73.7%. The control took place after an average time of 48 days (±18.3). There was no significant difference in the resolution of LAA thrombi at the point of first control depending on the type of anticoagulation therapy (p = .499). Almost all patients (15 out of 16) who had no resolution of their LAA-thrombus at the time of first control were re-evaluated in a second control. At this time six additional LAA-thrombi were dissolved.
Time to resolution Out of the 67 patients who were controlled at least once, 56 showed resolution of the thrombus within 1 year. Irrespective of the type of oral anticoagulation, the average time to resolution was 77.8 (SD ± 7.4) days; the median was 53.5 days. A resolution rate of two-thirds of thrombi was reached after 71 days (Figure 1); correspondingly, a control rate of more than 80% was reached after 10 weeks (Table 2). Additionally, one patient was first controlled after 211 days and one after 245 days. Both thrombi were dissolved.
Overall time-dependent resolution of LAA thrombi. The x axis shows the duration in days and the y axis the proportion of persisting thrombi. The value of 2/3 resolution of initially diagnosed thrombi is reached after 71 days.
Control rates and resolution of thrombi regardless of the type of oral anticoagulation. Note: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. After 10 weeks 82.1% of patients were controlled, corresponding to 64.3% of total thrombi dissolved. Conversely, out of the 46 patients controlled, 78.3% of their thrombi were dissolved. As expected, the percentage of thrombi dissolved increases with the longer time intervals.
Resolution of LAA-thrombi in dependence of the type of oral anticoagulation Both groups did not differ in their time to first control, with an average of 63 and 48 days (p = .224) in the phenprocoumon and NOAC groups, respectively. The average time to resolution was 79.4 ± 8.6 days using VKA. In the presence of NOACs the average time was 59.7 ± 12.6 days. There was no significant difference in resolution time depending on the administered type of anticoagulation (p = .201) (Figure 2). The resolution of LAA-thrombi depending on the type of oral anticoagulation is demonstrated in Table 3. Comparable percentages of controlled patients were seen; for example, after 8 weeks control rates were 82.1% and 81.3% in the phenprocoumon and NOAC group, respectively.
Resolution of LAA-thrombi depending on the type of oral anticoagulation. The x axis shows the time in days and the y axis the proportion of persisting thrombi. Dark gray shows the curve when phenprocoumon was administered; black shows patients on NOACs. A resolution rate of 2/3 of thrombi is reached earlier in patients on NOACs than in patients on phenprocoumon. These data respect all changes in the anticoagulation regime, and the number of days while switching anticoagulation was also accounted for.
Control rates and resolution rates under NOAC and phenprocoumon therapy. Note: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. Patients in both groups did not differ significantly in time to control (χ² = 0.367, Spearman correlation coefficient r = .634). Controls performed after 4, 6, 8, and 10 weeks did not differ significantly in thrombus resolution rate between the two groups; after 12 weeks a significantly higher resolution rate was found when administering NOACs.
After 4 and 6 weeks, resolution rates were overall low, but slightly higher in the NOAC group. With regard to the control after 10 weeks, when overall two-thirds of thrombi were dissolved, there was a resolution rate in the VKA group of 63.4%. In the group of patients treated with NOAC, there was a resolution rate of 73.4%. After 12 weeks 90% of all patients were controlled at least once; resolution rates were 93% in the NOAC group and 68% in the VKA group. Until the point of 10 weeks there was no significant difference in the time to resolution of LAA thrombi comparing NOACs and VKA, but there was a clear trend towards faster resolution in the NOAC therapy group. However, after 12 weeks of treatment with either NOAC or VKA, the thrombi had more often resolved in the presence of NOAC as compared to phenprocoumon.
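For the 12-week comparison reported above (roughly nine in ten thrombi resolved on NOACs versus about two-thirds on phenprocoumon), a 2 × 2 test of proportions is the natural check. The sketch below is illustrative only: the cell counts are hypothetical reconstructions consistent with the reported percentages, not the exact values from Table 3.

```python
# Hypothetical 2 x 2 check of the 12-week resolution comparison.
# The cell counts are reconstructions consistent with the reported percentages,
# not the exact values from the study's Table 3.
from scipy.stats import chi2_contingency, fisher_exact

#            resolved, not resolved (at 12 weeks)
table = [
    [25, 3],   # NOAC group (about 89% resolved, hypothetical counts)
    [28, 13],  # phenprocoumon group (about 68% resolved, hypothetical counts)
]

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _ = chi2_contingency(table)
print(f"Fisher exact p = {p_fisher:.3f}; chi-square p = {p_chi2:.3f}")
```

With small expected cell counts, Fisher's exact test is the safer choice, which is consistent with the Methods section's use of it for small groups.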
CONCLUSION
In this large observational study the overall efficacy of thrombus resolution did not differ significantly between VKA and NOACs at the point of first TEE control. Regardless of the type of anticoagulation, 76.1% of thrombi were dissolved at first control. The cutoff value of two-thirds of thrombi dissolved was reached faster when administering a NOAC. We found an overall favorable relation between the number of controlled patients and the number of dissolved thrombi after 10 weeks, independent of the oral anticoagulation used. It would therefore be advisable to schedule follow-up appointments at this time. Overall there was no difference in the resolution of LAA-thrombi when comparing NOACs and phenprocoumon in a real-life setting. Further studies are needed to confirm the role of NOACs and to differentiate between the resolving potential of thrombin- and Factor Xa inhibition, as well as for each NOAC individually.
[ "INTRODUCTION", "Statistical analysis", "Baseline characteristics and comorbidities", "Use of oral anticoagulants", "AF and LAA thrombi", "Major adverse events", "Time of control and resolution of thrombi", "Time to resolution", "Resolution of LAA‐thrombi in dependence of the type of oral anticoagulation", "ACKNOWLEDGMENT" ]
[ "Over 26 million people worldwide suffer from a stroke every year. In western countries 20% of all strokes and transient ischemic attacks (TIAs) are of cardioembolic origin.\n1\n, \n2\n Cardioembolic strokes are more severe than other ischemic strokes.\n3\n, \n4\n There has been a steady increase of cardioembolic strokes in the last few years.\n5\n In patients with atrial fibrillation (AF) as the most important risk factor, a consequent anticoagulant therapy avoids 70% of all cardioembolic strokes.\n5\n, \n6\n Therefore, the presence of intracardiac thrombi is a potentially life‐threatening condition because of the risk of embolization. To allow rhythm control in patients with AF which may be of prognostic relevance\n7\n the presence of intracardiac thrombi has to be excluded in advance.\nThus, not only prevention but also adequate therapy of existing thrombi is of utmost importance. Today non‐vitamin K‐dependent oral anticoagulants (NOACs) are favored for the prevention of intracardiac thrombi in the majority of patients with AF.\n8\n However, only limited data exist for the use of NOACs in the resolution of already existing left atrial appendage (LAA) thrombi.\n9\n NOACs benefit from their simpler application compared to the use of vitamin K antagonists (VKAs) and are largely independent on alimentation or co‐medication. Additionally, therapeutic levels are consistent for most patients and their use is mostly restricted by renal insufficiency. In contrast, VKAs show many interdependencies especially with the frequently used antiarrhythmic drug amiodarone and therapeutic levels can vary depending on alimentation. Still, the amount of practical experience on thrombus resolution acquired over decades may favor VKA.\n10\n, \n11\n\n\nThis study aims at comparing the efficacy of phenprocoumon and NOACs in the resolution of LAA‐thrombi in a real‐world setting.", "After collecting all data, the evaluation and statistical analysis was made using SPSS Version 25 for Mac OS X (SPSS Inc.). All descriptive data were specified by absolute and relative frequency and complemented with median, arithmetic mean, and SD where necessary. The χ\n2 test was used to test for independency. The Kaplan–Meier estimator was used in the context of event history analysis to identify time to resolution under therapy with different types of anticoagulation. Log‐rank tests were used to test for significance. In case of small size of groups Fisher's exact test was applied.", "One hundred and sixty‐three consecutive patients were included, out of those 114 were diagnosed with an intracardiac thrombus located in the left atrium (n = 1) or LAA (n = 113). The remaining patients had LV‐thrombi (n = 36) or thrombi of other locations (n = 13) and were not taken into account.\nAfter thrombus detection, overall baseline characteristics between patients on phenprocoumon and NOAC did not differ significantly (Table 1). Groups were comparable with regard to comorbidities, with a slightly lower average CHA2DS2‐VASc Score in the NOAC group (2.68 ± 1.1, n = 19 vs. 3.25 ± 1.8, n = 48; p = .207).\nCharacteristics of the patient collective with LAA‐thrombi (n = 114) in absolute numbers and percentage including SD were needed at the time of inclusion\n\nNote: Characteristics of patients on phenprocoumon and NOAC at the time of first control. 
The p value is calculated between patients on phenprocoumon and patients on NOAC.\nImpairment of kidney function was defined as GFR < 90 ml/min/1.73 m² body surface area.", "At the time of thrombus detection, 47 patients (41.2%) were already on oral anticoagulation (28 on phenprocoumon, 9 on Rivaroxaban, 6 on Apixaban, 3 on Edoxaban, and 1 on Dabigatran). In those not on OAC, anticoagulation was started with phenprocoumon (n = 40) or NOAC (n = 7 Rivaroxaban, n = 8 Apixaban, n = 1 Dabigatran) according to the decision of the treating physician. Heparin was the OAC of the first choice in 10 patients. All of these patients were critically ill, four died within 2 weeks after diagnosis, four were lost to follow‐up, and the remaining two were discontinued on anticoagulation because of bleeding complications.\nIf a thrombus was detected in the presence of OAC the drugs were continued in 25 patients (53.2%), changed to phenprocoumon after previous NOAC therapy in 15 out of 19 (79.0%), or from phenprocoumon to NOAC in 6 out of 28 patients (21.4%), 3 were put on Rivaroxaban and 3 on Apixaban. One patient was changed to heparin after previous phenprocoumon therapy. Overall a steady increase in the use of NOACs during the study period was noted, starting at 6.25% in 2013 and reaching 46.2% at the end of the inclusion period in 2018.", "Most patients (91.9%) with an LAA‐thrombus had documented AF or atrial flutter at the time of diagnosis. Persistent AF was the most common overall as well as in the subgroups of patients on phenprocoumon or NOAC (68.8% and 78.9%, respectively; Table 1). Correspondingly, the majority of patients for whom all relevant data points could be acquired (n = 99) showed varying degrees of dilatation of the LA, only 23.2% of patients showed no dilatation according to the American Society of Echocardiography (Table 1).\n12\n The mean size of LAA thrombi was 128.9 mm² (±142.5) for the 38 patients for whom all relevant data points could be acquired.", "The majority of patients (88.4%) did not suffer a stroke before being diagnosed with an intracardiac thrombus. During the course of the study, 5.3% (n = 6; 4 on VKA, none on NOAK) of patients suffered from a stroke and 1.8% (n = 2; both on VKA) from a TIA. In the group of patients who had a previous stroke (n = 11), two patients (18.2%; both on VKA) suffered from a recurrent stroke and one patient from a recurrent TIA after the diagnosis of intracardiac thrombi was established.", "Out of the 114 LAA‐thrombi, 67 (62.3%) patients were controlled at least once and 47 patients were lost to follow‐up for several reasons, for example, death (n = 7), referral to a secondary hospital (n = 4), different follow‐up modality (e.g., CT/MRI instead of TEE; n = 2). Yet, the majority of patients did not present again within 1 year for their scheduled follow‐up TEE (n = 34).\nThe average time to first control TEE was 58 ± 42.2 (median 48) days or 8 weeks. Disregarding the type of oral anticoagulation administered the LAA‐thrombi were dissolved in 74.6% of cases at the time of first control. Out of the 67 patients, 48 were treated with phenprocoumon. In this group 77.1% of LAA‐thrombi were dissolved, and the average time to first control was 63 days (±48.1) or 9 weeks. The INR was considered effective in 69% of patients at this isolated point in time. The remaining 19 patients were treated with NOACs and showed a resolution rate of 73.7%. The control took place after an average time of 48 days (±18.3). 
There was no significant difference in the resolution of LAA thrombi at the point of first control depending on the type of anticoagulation therapy (p = .499). Almost all patients (15 out of 16) who had no resolution of their LAA‐thrombus at the time of first control were re‐evaluated in a second control. At this time six additional LAA‐thrombi were dissolved.", "Out of the 67 patients who were controlled at least once, 56 showed the resolution of the thrombus within 1 year. Irrespective of the type of oral anticoagulation the average time to resolution was 77.8 (SD ± 7.4) days, the median was 53.5 days. A resolution rate of two‐thirds of thrombi was reached after 71 days (Figure 1), correspondingly a control rate of more than 80% was reached after 10 weeks (Table 2). Additionally, one patient was first controlled after 211 days and one after 245 days. Both thrombi were dissolved.\nOverall time‐dependent resolution of LAA thrombi. The x axis shows the duration in days and the y axis the proportion of persisting thrombi. The value of 2/3 resolution of initially diagnosed thrombi is reached after 71 days\nControl rates and resolution of thrombi regardless of the type of oral anticoagulation\nNote: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. After 10 weeks 82.1% of patients were controlled, corresponding to 64.3% of total thrombi dissolved. Conversely out of the 46 patients controlled, 78.3% of their thrombi were dissolved. As expected, the percentage of thrombi dissolved increases with the longer time intervals.", "Both groups did not differ in their time to first control with an average of 63 and 48 days (p = .224) in the phenprocoumon and NOAC groups, respectively. The average time to resolution was 79.4 ± 8.6 days using VKA. In the presence of NOACs the average time was 59.7 ± 12.6 days. There is no significant difference in resolution time depending on the administered type of anticoagulation (p = .201) (Figure 2). The resolution of LAA‐thrombi depending on the type of oral anticoagulation is demonstrated in Table 3. Comparable percentages of controlled patients were seen, for example, after 8 weeks control rates were 82.1% and 81.3% in the phenprocoumon and NOAC group, respectively.\nResolution of LAA‐thrombi depending on the type of oral anticoagulation. The x axis shows the time in days and the y axis the proportion of persisting thrombi. Dark gray shows the curve when phenprocoumon was administered, black shows patients on NOACs. A resolution rate of 2/3 of thrombi is reached earlier in patients on NOACs than in patients on phenprocoumon. These data respect all changes in the anticoagulation regime, and the number of days while switching to anticoagulation was also accounted for\nControl rates and resolution rates under NOAC and phenprocoumon therapy\n\nNote:  Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. Patients in both groups did not differ significantly in time to control (χ\n2 = 0.367, Spearman correlation coefficient r = .634). 
Controls performed after 4, 6, 8, and 10 weeks did not differ significantly in thrombus resolution rate between the two groups; after 12 weeks a significantly higher resolution rate was found when administering NOACs.\nAfter 4 and 6 weeks, resolution rates were overall low, but slightly higher in the NOAC group. With regard to the control after 10 weeks, when overall 2/3 of thrombi were dissolved, there was a resolution rate in the VKA group of 63.4%. In the group of patients treated with NOAC, there was a resolution rate of 73.4%. After 12 weeks 90% of all patients were controlled at least once; resolution rates were 93% in the NOAC group and 68% in the VKA group. Up to 10 weeks there was no significant difference in the time to resolution of LAA thrombi comparing NOACs and VKA, but there was a clear trend towards faster resolution in the NOAC therapy group. However, after 12 weeks of treatment with either NOAC or VKA the thrombi had resolved more often in the presence of NOAC as compared to phenprocoumon.", "Open Access funding enabled and organized by Projekt DEAL." ]
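The statistical analysis quoted above (SPSS: χ² test, Kaplan–Meier estimator with log-rank tests, Fisher's exact test for small groups) can be illustrated with a short sketch. This is not the authors' SPSS workflow; it is a minimal, hypothetical example in Python using the lifelines package, with made-up follow-up data, showing how time to thrombus resolution could be compared between a VKA and a NOAC group.

```python
# Minimal sketch (hypothetical data, not the study data set): Kaplan-Meier
# curves and a log-rank test for time to LAA-thrombus resolution in two
# anticoagulation groups, mirroring the analysis described in the methods.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient follow-up: days until control TEE and whether the
# thrombus was resolved at that control (1) or still present, i.e. censored (0).
df = pd.DataFrame({
    "group":    ["VKA"] * 6 + ["NOAC"] * 6,
    "days":     [42, 56, 63, 70, 84, 120, 35, 42, 49, 56, 63, 90],
    "resolved": [1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1],
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["days"], event_observed=sub["resolved"], label=name)
    print(name, "estimated median time to resolution:", kmf.median_survival_time_)

vka, noac = df[df.group == "VKA"], df[df.group == "NOAC"]
result = logrank_test(vka["days"], noac["days"],
                      event_observed_A=vka["resolved"],
                      event_observed_B=noac["resolved"])
print("log-rank p-value:", result.p_value)
```

Patients whose thrombus had not resolved at their last control enter the estimate as censored observations via the event_observed flag, which is how the Kaplan–Meier estimator handles incomplete follow-up.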
[ null, null, null, null, null, null, null, null, null, "COI-statement" ]
[ "INTRODUCTION", "METHODS", "Statistical analysis", "RESULTS", "Baseline characteristics and comorbidities", "Use of oral anticoagulants", "AF and LAA thrombi", "Major adverse events", "Time of control and resolution of thrombi", "Time to resolution", "Resolution of LAA‐thrombi in dependence of the type of oral anticoagulation", "DISCUSSION", "CONCLUSION", "ACKNOWLEDGMENT", "CONFLICTS OF INTEREST" ]
[ "Over 26 million people worldwide suffer from a stroke every year. In western countries 20% of all strokes and transient ischemic attacks (TIAs) are of cardioembolic origin.\n1\n, \n2\n Cardioembolic strokes are more severe than other ischemic strokes.\n3\n, \n4\n There has been a steady increase of cardioembolic strokes in the last few years.\n5\n In patients with atrial fibrillation (AF) as the most important risk factor, a consequent anticoagulant therapy avoids 70% of all cardioembolic strokes.\n5\n, \n6\n Therefore, the presence of intracardiac thrombi is a potentially life‐threatening condition because of the risk of embolization. To allow rhythm control in patients with AF which may be of prognostic relevance\n7\n the presence of intracardiac thrombi has to be excluded in advance.\nThus, not only prevention but also adequate therapy of existing thrombi is of utmost importance. Today non‐vitamin K‐dependent oral anticoagulants (NOACs) are favored for the prevention of intracardiac thrombi in the majority of patients with AF.\n8\n However, only limited data exist for the use of NOACs in the resolution of already existing left atrial appendage (LAA) thrombi.\n9\n NOACs benefit from their simpler application compared to the use of vitamin K antagonists (VKAs) and are largely independent on alimentation or co‐medication. Additionally, therapeutic levels are consistent for most patients and their use is mostly restricted by renal insufficiency. In contrast, VKAs show many interdependencies especially with the frequently used antiarrhythmic drug amiodarone and therapeutic levels can vary depending on alimentation. Still, the amount of practical experience on thrombus resolution acquired over decades may favor VKA.\n10\n, \n11\n\n\nThis study aims at comparing the efficacy of phenprocoumon and NOACs in the resolution of LAA‐thrombi in a real‐world setting.", "This analysis included all consecutive patients diagnosed with an intracardiac thrombus from June 2013 to June 2017 in a general cardiology clinic (SFH Münster, Germany). The intent was to compare the resolving potential of NOACs and the VKA phenprocoumon on thrombi of the LAA. Patients with non‐LAA thrombi (e.g., left ventricular thrombi) were excluded. The primary endpoint was defined as the resolution of the thrombus when patients presented again for follow‐up. Informed consent was obtained from all individual participants included in the study. The datasets generated and analyzed during the current study are not publicly available, as per internal protocol, but are available from the corresponding author on reasonable request.\nPersistence of the intracardiac thrombi after 1 year was defined as the secondary endpoint. Controls were made by TEE and follow‐up appointments were scheduled according to hospital capacity between 4 and 13 weeks after diagnosis. All changes in anticoagulant therapy with date and specific substance were analyzed.\nThis study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Ethics Committee of “Ärztekammer Westfalen‐Lippe” (Nr. 2019‐641‐f‐S).\nLAA thrombi (n = 114) were diagnosed by transesophageal echocardiography (TEE; n = 111), computer tomography (CT, n = 2), or magnetic resonance imaging (MRI, n = 1). The vast majority of patients presented with symptomatic AF. 
Some patients already presented with cardioembolic complications and were diagnosed with AF in the following diagnosis of a thromboembolic event.\n Statistical analysis After collecting all data, the evaluation and statistical analysis was made using SPSS Version 25 for Mac OS X (SPSS Inc.). All descriptive data were specified by absolute and relative frequency and complemented with median, arithmetic mean, and SD where necessary. The χ\n2 test was used to test for independency. The Kaplan–Meier estimator was used in the context of event history analysis to identify time to resolution under therapy with different types of anticoagulation. Log‐rank tests were used to test for significance. In case of small size of groups Fisher's exact test was applied.\nAfter collecting all data, the evaluation and statistical analysis was made using SPSS Version 25 for Mac OS X (SPSS Inc.). All descriptive data were specified by absolute and relative frequency and complemented with median, arithmetic mean, and SD where necessary. The χ\n2 test was used to test for independency. The Kaplan–Meier estimator was used in the context of event history analysis to identify time to resolution under therapy with different types of anticoagulation. Log‐rank tests were used to test for significance. In case of small size of groups Fisher's exact test was applied.", "After collecting all data, the evaluation and statistical analysis was made using SPSS Version 25 for Mac OS X (SPSS Inc.). All descriptive data were specified by absolute and relative frequency and complemented with median, arithmetic mean, and SD where necessary. The χ\n2 test was used to test for independency. The Kaplan–Meier estimator was used in the context of event history analysis to identify time to resolution under therapy with different types of anticoagulation. Log‐rank tests were used to test for significance. In case of small size of groups Fisher's exact test was applied.", " Baseline characteristics and comorbidities One hundred and sixty‐three consecutive patients were included, out of those 114 were diagnosed with an intracardiac thrombus located in the left atrium (n = 1) or LAA (n = 113). The remaining patients had LV‐thrombi (n = 36) or thrombi of other locations (n = 13) and were not taken into account.\nAfter thrombus detection, overall baseline characteristics between patients on phenprocoumon and NOAC did not differ significantly (Table 1). Groups were comparable with regard to comorbidities, with a slightly lower average CHA2DS2‐VASc Score in the NOAC group (2.68 ± 1.1, n = 19 vs. 3.25 ± 1.8, n = 48; p = .207).\nCharacteristics of the patient collective with LAA‐thrombi (n = 114) in absolute numbers and percentage including SD were needed at the time of inclusion\n\nNote: Characteristics of patients on phenprocoumon and NOAC at the time of first control. The p value is calculated between patients on phenprocoumon and patients on NOAC.\nImpairment of kidney function was defined as GFR < 90 ml/min/1.73 m² body surface area.\nOne hundred and sixty‐three consecutive patients were included, out of those 114 were diagnosed with an intracardiac thrombus located in the left atrium (n = 1) or LAA (n = 113). The remaining patients had LV‐thrombi (n = 36) or thrombi of other locations (n = 13) and were not taken into account.\nAfter thrombus detection, overall baseline characteristics between patients on phenprocoumon and NOAC did not differ significantly (Table 1). 
Groups were comparable with regard to comorbidities, with a slightly lower average CHA2DS2‐VASc Score in the NOAC group (2.68 ± 1.1, n = 19 vs. 3.25 ± 1.8, n = 48; p = .207).\nCharacteristics of the patient collective with LAA‐thrombi (n = 114) in absolute numbers and percentage including SD were needed at the time of inclusion\n\nNote: Characteristics of patients on phenprocoumon and NOAC at the time of first control. The p value is calculated between patients on phenprocoumon and patients on NOAC.\nImpairment of kidney function was defined as GFR < 90 ml/min/1.73 m² body surface area.\n Use of oral anticoagulants At the time of thrombus detection, 47 patients (41.2%) were already on oral anticoagulation (28 on phenprocoumon, 9 on Rivaroxaban, 6 on Apixaban, 3 on Edoxaban, and 1 on Dabigatran). In those not on OAC, anticoagulation was started with phenprocoumon (n = 40) or NOAC (n = 7 Rivaroxaban, n = 8 Apixaban, n = 1 Dabigatran) according to the decision of the treating physician. Heparin was the OAC of the first choice in 10 patients. All of these patients were critically ill, four died within 2 weeks after diagnosis, four were lost to follow‐up, and the remaining two were discontinued on anticoagulation because of bleeding complications.\nIf a thrombus was detected in the presence of OAC the drugs were continued in 25 patients (53.2%), changed to phenprocoumon after previous NOAC therapy in 15 out of 19 (79.0%), or from phenprocoumon to NOAC in 6 out of 28 patients (21.4%), 3 were put on Rivaroxaban and 3 on Apixaban. One patient was changed to heparin after previous phenprocoumon therapy. Overall a steady increase in the use of NOACs during the study period was noted, starting at 6.25% in 2013 and reaching 46.2% at the end of the inclusion period in 2018.\nAt the time of thrombus detection, 47 patients (41.2%) were already on oral anticoagulation (28 on phenprocoumon, 9 on Rivaroxaban, 6 on Apixaban, 3 on Edoxaban, and 1 on Dabigatran). In those not on OAC, anticoagulation was started with phenprocoumon (n = 40) or NOAC (n = 7 Rivaroxaban, n = 8 Apixaban, n = 1 Dabigatran) according to the decision of the treating physician. Heparin was the OAC of the first choice in 10 patients. All of these patients were critically ill, four died within 2 weeks after diagnosis, four were lost to follow‐up, and the remaining two were discontinued on anticoagulation because of bleeding complications.\nIf a thrombus was detected in the presence of OAC the drugs were continued in 25 patients (53.2%), changed to phenprocoumon after previous NOAC therapy in 15 out of 19 (79.0%), or from phenprocoumon to NOAC in 6 out of 28 patients (21.4%), 3 were put on Rivaroxaban and 3 on Apixaban. One patient was changed to heparin after previous phenprocoumon therapy. Overall a steady increase in the use of NOACs during the study period was noted, starting at 6.25% in 2013 and reaching 46.2% at the end of the inclusion period in 2018.\n AF and LAA thrombi Most patients (91.9%) with an LAA‐thrombus had documented AF or atrial flutter at the time of diagnosis. Persistent AF was the most common overall as well as in the subgroups of patients on phenprocoumon or NOAC (68.8% and 78.9%, respectively; Table 1). 
Correspondingly, the majority of patients for whom all relevant data points could be acquired (n = 99) showed varying degrees of dilatation of the LA, only 23.2% of patients showed no dilatation according to the American Society of Echocardiography (Table 1).\n12\n The mean size of LAA thrombi was 128.9 mm² (±142.5) for the 38 patients for whom all relevant data points could be acquired.\nMost patients (91.9%) with an LAA‐thrombus had documented AF or atrial flutter at the time of diagnosis. Persistent AF was the most common overall as well as in the subgroups of patients on phenprocoumon or NOAC (68.8% and 78.9%, respectively; Table 1). Correspondingly, the majority of patients for whom all relevant data points could be acquired (n = 99) showed varying degrees of dilatation of the LA, only 23.2% of patients showed no dilatation according to the American Society of Echocardiography (Table 1).\n12\n The mean size of LAA thrombi was 128.9 mm² (±142.5) for the 38 patients for whom all relevant data points could be acquired.\n Major adverse events The majority of patients (88.4%) did not suffer a stroke before being diagnosed with an intracardiac thrombus. During the course of the study, 5.3% (n = 6; 4 on VKA, none on NOAK) of patients suffered from a stroke and 1.8% (n = 2; both on VKA) from a TIA. In the group of patients who had a previous stroke (n = 11), two patients (18.2%; both on VKA) suffered from a recurrent stroke and one patient from a recurrent TIA after the diagnosis of intracardiac thrombi was established.\nThe majority of patients (88.4%) did not suffer a stroke before being diagnosed with an intracardiac thrombus. During the course of the study, 5.3% (n = 6; 4 on VKA, none on NOAK) of patients suffered from a stroke and 1.8% (n = 2; both on VKA) from a TIA. In the group of patients who had a previous stroke (n = 11), two patients (18.2%; both on VKA) suffered from a recurrent stroke and one patient from a recurrent TIA after the diagnosis of intracardiac thrombi was established.\n Time of control and resolution of thrombi Out of the 114 LAA‐thrombi, 67 (62.3%) patients were controlled at least once and 47 patients were lost to follow‐up for several reasons, for example, death (n = 7), referral to a secondary hospital (n = 4), different follow‐up modality (e.g., CT/MRI instead of TEE; n = 2). Yet, the majority of patients did not present again within 1 year for their scheduled follow‐up TEE (n = 34).\nThe average time to first control TEE was 58 ± 42.2 (median 48) days or 8 weeks. Disregarding the type of oral anticoagulation administered the LAA‐thrombi were dissolved in 74.6% of cases at the time of first control. Out of the 67 patients, 48 were treated with phenprocoumon. In this group 77.1% of LAA‐thrombi were dissolved, and the average time to first control was 63 days (±48.1) or 9 weeks. The INR was considered effective in 69% of patients at this isolated point in time. The remaining 19 patients were treated with NOACs and showed a resolution rate of 73.7%. The control took place after an average time of 48 days (±18.3). There was no significant difference in the resolution of LAA thrombi at the point of first control depending on the type of anticoagulation therapy (p = .499). Almost all patients (15 out of 16) who had no resolution of their LAA‐thrombus at the time of first control were re‐evaluated in a second control. 
At this time six additional LAA‐thrombi were dissolved.\nOut of the 114 LAA‐thrombi, 67 (62.3%) patients were controlled at least once and 47 patients were lost to follow‐up for several reasons, for example, death (n = 7), referral to a secondary hospital (n = 4), different follow‐up modality (e.g., CT/MRI instead of TEE; n = 2). Yet, the majority of patients did not present again within 1 year for their scheduled follow‐up TEE (n = 34).\nThe average time to first control TEE was 58 ± 42.2 (median 48) days or 8 weeks. Disregarding the type of oral anticoagulation administered the LAA‐thrombi were dissolved in 74.6% of cases at the time of first control. Out of the 67 patients, 48 were treated with phenprocoumon. In this group 77.1% of LAA‐thrombi were dissolved, and the average time to first control was 63 days (±48.1) or 9 weeks. The INR was considered effective in 69% of patients at this isolated point in time. The remaining 19 patients were treated with NOACs and showed a resolution rate of 73.7%. The control took place after an average time of 48 days (±18.3). There was no significant difference in the resolution of LAA thrombi at the point of first control depending on the type of anticoagulation therapy (p = .499). Almost all patients (15 out of 16) who had no resolution of their LAA‐thrombus at the time of first control were re‐evaluated in a second control. At this time six additional LAA‐thrombi were dissolved.\n Time to resolution Out of the 67 patients who were controlled at least once, 56 showed the resolution of the thrombus within 1 year. Irrespective of the type of oral anticoagulation the average time to resolution was 77.8 (SD ± 7.4) days, the median was 53.5 days. A resolution rate of two‐thirds of thrombi was reached after 71 days (Figure 1), correspondingly a control rate of more than 80% was reached after 10 weeks (Table 2). Additionally, one patient was first controlled after 211 days and one after 245 days. Both thrombi were dissolved.\nOverall time‐dependent resolution of LAA thrombi. The x axis shows the duration in days and the y axis the proportion of persisting thrombi. The value of 2/3 resolution of initially diagnosed thrombi is reached after 71 days\nControl rates and resolution of thrombi regardless of the type of oral anticoagulation\nNote: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. After 10 weeks 82.1% of patients were controlled, corresponding to 64.3% of total thrombi dissolved. Conversely out of the 46 patients controlled, 78.3% of their thrombi were dissolved. As expected, the percentage of thrombi dissolved increases with the longer time intervals.\nOut of the 67 patients who were controlled at least once, 56 showed the resolution of the thrombus within 1 year. Irrespective of the type of oral anticoagulation the average time to resolution was 77.8 (SD ± 7.4) days, the median was 53.5 days. A resolution rate of two‐thirds of thrombi was reached after 71 days (Figure 1), correspondingly a control rate of more than 80% was reached after 10 weeks (Table 2). Additionally, one patient was first controlled after 211 days and one after 245 days. Both thrombi were dissolved.\nOverall time‐dependent resolution of LAA thrombi. The x axis shows the duration in days and the y axis the proportion of persisting thrombi. 
The value of 2/3 resolution of initially diagnosed thrombi is reached after 71 days\nControl rates and resolution of thrombi regardless of the type of oral anticoagulation\nNote: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. After 10 weeks 82.1% of patients were controlled, corresponding to 64.3% of total thrombi dissolved. Conversely, out of the 46 patients controlled, 78.3% of their thrombi were dissolved. As expected, the percentage of thrombi dissolved increases with the longer time intervals.\n Resolution of LAA‐thrombi in dependence of the type of oral anticoagulation Both groups did not differ in their time to first control with an average of 63 and 48 days (p = .224) in the phenprocoumon and NOAC groups, respectively. The average time to resolution was 79.4 ± 8.6 days using VKA. In the presence of NOACs the average time was 59.7 ± 12.6 days. There was no significant difference in resolution time depending on the administered type of anticoagulation (p = .201) (Figure 2). The resolution of LAA‐thrombi depending on the type of oral anticoagulation is demonstrated in Table 3. Comparable percentages of controlled patients were seen; for example, after 8 weeks control rates were 82.1% and 81.3% in the phenprocoumon and NOAC group, respectively.\nResolution of LAA‐thrombi depending on the type of oral anticoagulation. The x axis shows the time in days and the y axis the proportion of persisting thrombi. Dark gray shows the curve when phenprocoumon was administered, black shows patients on NOACs. A resolution rate of 2/3 of thrombi is reached earlier in patients on NOACs than in patients on phenprocoumon. These data take into account all changes in the anticoagulation regimen, and the number of days while switching anticoagulation was also accounted for\nControl rates and resolution rates under NOAC and phenprocoumon therapy\n\nNote: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. Patients in both groups did not differ significantly in time to control (χ² = 0.367, Spearman correlation coefficient r = .634). Controls performed after 4, 6, 8, and 10 weeks did not differ significantly in thrombus resolution rate between the two groups; after 12 weeks a significantly higher resolution rate was found when administering NOACs.\nAfter 4 and 6 weeks, resolution rates were overall low, but slightly higher in the NOAC group. With regard to the control after 10 weeks, when overall 2/3 of thrombi were dissolved, there was a resolution rate in the VKA group of 63.4%. In the group of patients treated with NOAC, there was a resolution rate of 73.4%. After 12 weeks 90% of all patients were controlled at least once; resolution rates were 93% in the NOAC group and 68% in the VKA group. Up to 10 weeks there was no significant difference in the time to resolution of LAA thrombi comparing NOACs and VKA, but there was a clear trend towards faster resolution in the NOAC therapy group. However, after 12 weeks of treatment with either NOAC or VKA the thrombi had resolved more often in the presence of NOAC as compared to phenprocoumon.\nBoth groups did not differ in their time to first control with an average of 63 and 48 days (p = .224) in the phenprocoumon and NOAC groups, respectively. 
The average time to resolution was 79.4 ± 8.6 days using VKA. In the presence of NOACs the average time was 59.7 ± 12.6 days. There was no significant difference in resolution time depending on the administered type of anticoagulation (p = .201) (Figure 2). The resolution of LAA‐thrombi depending on the type of oral anticoagulation is demonstrated in Table 3. Comparable percentages of controlled patients were seen; for example, after 8 weeks control rates were 82.1% and 81.3% in the phenprocoumon and NOAC group, respectively.\nResolution of LAA‐thrombi depending on the type of oral anticoagulation. The x axis shows the time in days and the y axis the proportion of persisting thrombi. Dark gray shows the curve when phenprocoumon was administered, black shows patients on NOACs. A resolution rate of 2/3 of thrombi is reached earlier in patients on NOACs than in patients on phenprocoumon. These data take into account all changes in the anticoagulation regimen, and the number of days while switching anticoagulation was also accounted for\nControl rates and resolution rates under NOAC and phenprocoumon therapy\n\nNote: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. Patients in both groups did not differ significantly in time to control (χ² = 0.367, Spearman correlation coefficient r = .634). Controls performed after 4, 6, 8, and 10 weeks did not differ significantly in thrombus resolution rate between the two groups; after 12 weeks a significantly higher resolution rate was found when administering NOACs.\nAfter 4 and 6 weeks, resolution rates were overall low, but slightly higher in the NOAC group. With regard to the control after 10 weeks, when overall 2/3 of thrombi were dissolved, there was a resolution rate in the VKA group of 63.4%. In the group of patients treated with NOAC, there was a resolution rate of 73.4%. After 12 weeks 90% of all patients were controlled at least once; resolution rates were 93% in the NOAC group and 68% in the VKA group. Up to 10 weeks there was no significant difference in the time to resolution of LAA thrombi comparing NOACs and VKA, but there was a clear trend towards faster resolution in the NOAC therapy group. However, after 12 weeks of treatment with either NOAC or VKA the thrombi had resolved more often in the presence of NOAC as compared to phenprocoumon.", "One hundred and sixty‐three consecutive patients were included; of those, 114 were diagnosed with an intracardiac thrombus located in the left atrium (n = 1) or LAA (n = 113). The remaining patients had LV‐thrombi (n = 36) or thrombi of other locations (n = 13) and were not taken into account.\nAfter thrombus detection, overall baseline characteristics between patients on phenprocoumon and NOAC did not differ significantly (Table 1). Groups were comparable with regard to comorbidities, with a slightly lower average CHA2DS2‐VASc Score in the NOAC group (2.68 ± 1.1, n = 19 vs. 3.25 ± 1.8, n = 48; p = .207).\nCharacteristics of the patient collective with LAA‐thrombi (n = 114) in absolute numbers and percentage, including SD where needed, at the time of inclusion\n\nNote: Characteristics of patients on phenprocoumon and NOAC at the time of first control. 
The p value is calculated between patients on phenprocoumon and patients on NOAC.\nImpairment of kidney function was defined as GFR < 90 ml/min/1.73 m² body surface area.", "At the time of thrombus detection, 47 patients (41.2%) were already on oral anticoagulation (28 on phenprocoumon, 9 on Rivaroxaban, 6 on Apixaban, 3 on Edoxaban, and 1 on Dabigatran). In those not on OAC, anticoagulation was started with phenprocoumon (n = 40) or NOAC (n = 7 Rivaroxaban, n = 8 Apixaban, n = 1 Dabigatran) according to the decision of the treating physician. Heparin was the OAC of the first choice in 10 patients. All of these patients were critically ill, four died within 2 weeks after diagnosis, four were lost to follow‐up, and the remaining two were discontinued on anticoagulation because of bleeding complications.\nIf a thrombus was detected in the presence of OAC the drugs were continued in 25 patients (53.2%), changed to phenprocoumon after previous NOAC therapy in 15 out of 19 (79.0%), or from phenprocoumon to NOAC in 6 out of 28 patients (21.4%), 3 were put on Rivaroxaban and 3 on Apixaban. One patient was changed to heparin after previous phenprocoumon therapy. Overall a steady increase in the use of NOACs during the study period was noted, starting at 6.25% in 2013 and reaching 46.2% at the end of the inclusion period in 2018.", "Most patients (91.9%) with an LAA‐thrombus had documented AF or atrial flutter at the time of diagnosis. Persistent AF was the most common overall as well as in the subgroups of patients on phenprocoumon or NOAC (68.8% and 78.9%, respectively; Table 1). Correspondingly, the majority of patients for whom all relevant data points could be acquired (n = 99) showed varying degrees of dilatation of the LA, only 23.2% of patients showed no dilatation according to the American Society of Echocardiography (Table 1).\n12\n The mean size of LAA thrombi was 128.9 mm² (±142.5) for the 38 patients for whom all relevant data points could be acquired.", "The majority of patients (88.4%) did not suffer a stroke before being diagnosed with an intracardiac thrombus. During the course of the study, 5.3% (n = 6; 4 on VKA, none on NOAK) of patients suffered from a stroke and 1.8% (n = 2; both on VKA) from a TIA. In the group of patients who had a previous stroke (n = 11), two patients (18.2%; both on VKA) suffered from a recurrent stroke and one patient from a recurrent TIA after the diagnosis of intracardiac thrombi was established.", "Out of the 114 LAA‐thrombi, 67 (62.3%) patients were controlled at least once and 47 patients were lost to follow‐up for several reasons, for example, death (n = 7), referral to a secondary hospital (n = 4), different follow‐up modality (e.g., CT/MRI instead of TEE; n = 2). Yet, the majority of patients did not present again within 1 year for their scheduled follow‐up TEE (n = 34).\nThe average time to first control TEE was 58 ± 42.2 (median 48) days or 8 weeks. Disregarding the type of oral anticoagulation administered the LAA‐thrombi were dissolved in 74.6% of cases at the time of first control. Out of the 67 patients, 48 were treated with phenprocoumon. In this group 77.1% of LAA‐thrombi were dissolved, and the average time to first control was 63 days (±48.1) or 9 weeks. The INR was considered effective in 69% of patients at this isolated point in time. The remaining 19 patients were treated with NOACs and showed a resolution rate of 73.7%. The control took place after an average time of 48 days (±18.3). 
There was no significant difference in the resolution of LAA thrombi at the point of first control depending on the type of anticoagulation therapy (p = .499). Almost all patients (15 out of 16) who had no resolution of their LAA‐thrombus at the time of first control were re‐evaluated in a second control. At this time six additional LAA‐thrombi were dissolved.", "Out of the 67 patients who were controlled at least once, 56 showed the resolution of the thrombus within 1 year. Irrespective of the type of oral anticoagulation the average time to resolution was 77.8 (SD ± 7.4) days, the median was 53.5 days. A resolution rate of two‐thirds of thrombi was reached after 71 days (Figure 1), correspondingly a control rate of more than 80% was reached after 10 weeks (Table 2). Additionally, one patient was first controlled after 211 days and one after 245 days. Both thrombi were dissolved.\nOverall time‐dependent resolution of LAA thrombi. The x axis shows the duration in days and the y axis the proportion of persisting thrombi. The value of 2/3 resolution of initially diagnosed thrombi is reached after 71 days\nControl rates and resolution of thrombi regardless of the type of oral anticoagulation\nNote: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. After 10 weeks 82.1% of patients were controlled, corresponding to 64.3% of total thrombi dissolved. Conversely out of the 46 patients controlled, 78.3% of their thrombi were dissolved. As expected, the percentage of thrombi dissolved increases with the longer time intervals.", "Both groups did not differ in their time to first control with an average of 63 and 48 days (p = .224) in the phenprocoumon and NOAC groups, respectively. The average time to resolution was 79.4 ± 8.6 days using VKA. In the presence of NOACs the average time was 59.7 ± 12.6 days. There is no significant difference in resolution time depending on the administered type of anticoagulation (p = .201) (Figure 2). The resolution of LAA‐thrombi depending on the type of oral anticoagulation is demonstrated in Table 3. Comparable percentages of controlled patients were seen, for example, after 8 weeks control rates were 82.1% and 81.3% in the phenprocoumon and NOAC group, respectively.\nResolution of LAA‐thrombi depending on the type of oral anticoagulation. The x axis shows the time in days and the y axis the proportion of persisting thrombi. Dark gray shows the curve when phenprocoumon was administered, black shows patients on NOACs. A resolution rate of 2/3 of thrombi is reached earlier in patients on NOACs than in patients on phenprocoumon. These data respect all changes in the anticoagulation regime, and the number of days while switching to anticoagulation was also accounted for\nControl rates and resolution rates under NOAC and phenprocoumon therapy\n\nNote:  Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks. Patients in both groups did not differ significantly in time to control (χ\n2 = 0.367, Spearman correlation coefficient r = .634). 
Controls performed after 4, 6, 8, and 10 weeks did not differ significantly in thrombus resolution rate between the two groups; after 12 weeks a significantly higher resolution rate was found when administering NOACs.\nAfter 4 and 6 weeks, resolution rates were overall low, but slightly higher in the NOAC group. With regard to the control after 10 weeks, when overall 2/3 of thrombi were dissolved, there was a resolution rate in the VKA group of 63.4%. In the group of patients treated with NOAC, there was a resolution rate of 73.4%. After 12 weeks 90% of all patients were controlled at least once; resolution rates were 93% in the NOAC group and 68% in the VKA group. Up to 10 weeks there was no significant difference in the time to resolution of LAA thrombi comparing NOACs and VKA, but there was a clear trend towards faster resolution in the NOAC therapy group. However, after 12 weeks of treatment with either NOAC or VKA the thrombi had resolved more often in the presence of NOAC as compared to phenprocoumon.", "Intracardiac thrombi are a potentially life‐threatening condition because of their risk of embolization. NOACs already have a significant value in the prevention of thromboembolic events in patients with AF. In this setting NOACs are the preferred treatment over the use of VKA.\n8\n The major advantage for the group of NOACs is their overall better safety profile. Additional benefits include simple application with consistent and predictable therapeutic levels for a wide range of patients, with strict regulations for dose reduction and without the need for measurement of therapeutic blood levels. Moreover, NOACs profit from overall fewer drug interactions, and their effect does not depend on alimentation.\n10\n, \n11\n\n\nThe present study analyzed 114 patients with an LAA‐thrombus, primarily treated with NOAC or phenprocoumon. Overall, 64.4% of patients were controlled at least once. The average time to first control was 58 days or approximately 8 weeks. The therapeutic adherence of the collective in this study was as low as in comparable observational studies like CLOT‐AF.\n13\n Whereas the average time to resolution was 112 days in CLOT‐AF when administering a NOAC, in our study resolution was observed after approximately 60 days in patients with NOAC therapy. The difference may be explained by a smaller sample size in CLOT‐AF and later time to first control. In CLOT‐AF 14 of 15 patients showed resolution of the thrombus at the time of first control; one thrombus was not resolved over the course of the study.\nIn the NOAC group in our study 73.7% of thrombi were dissolved at the time of first control. This indicates that the first control probably took place later in CLOT‐AF compared to our study.\n13\n\n\nComparing the results of the X‐TRA trial with our data, it can be noted that after 6 weeks there was a very similar resolution rate of 40% across the NOAC group as a whole. It is possible that the time to first control was too short to fully investigate the dissolving potential of Rivaroxaban\n9\n resulting in a resolution rate of 41.5% in X‐TRA.\nComparing the dissolving potential of NOACs with the potential of VKA reported from other studies, the effectiveness of the NOACs seems to be at least as good as that of VKA. Only very limited data exist that investigate the potential of VKA in thrombus resolution. In a TEE study from Jaber et al. 
the resolution rate in patients, who mostly received VKA (or heparin) with a mean INR of 2.2, after diagnosis of left atrial thrombus was 80.1% after 47 ± 18 days. One may speculate that the INR levels were better controlled than in a real‐world setting.\n14\n However, similar results were demonstrated in a study by Collins et al.15 in which the resolution rate of patients treated with VKA was 89% after 4 weeks with 18 patients included.\nAt the time of diagnosis about half of the patients already received therapeutic anticoagulation. Phenprocoumon was used more frequently, and patients on VKAs were more likely to be continued on VKA after diagnosis of a thrombus. This fact most likely reflects the remaining uncertainty in the use of NOACs and the confidence in VKA therapy with a measurable therapeutic effect (INR) in the context of cardiac thrombus formation. Over the course of the study, an increasing use of NOACs was observed, as current AF guidelines prefer the use of NOACs.\nAfter 10 weeks nearly 2/3 of LAA thrombi were dissolved. This point seems to be the best time for a first control regarding the number of tested patients and dissolved thrombi. However, comparatively low rates of resolution were found after 6 weeks, a time when one‐third of the patients had already received the first control. Scheduling the first control after 6 weeks seems a practical approach for daily clinical routine, while resolution numbers indicate this to be rather suboptimal. With regard to the resolution rates in dependence of the type of oral anticoagulation, there was a trend towards earlier resolution of LAA thrombi in patients with NOAC therapy. This trend culminated in a significant difference after 12 weeks with a benefit for NOACs (93.3% vs. 70.7%; p = .046). This benefit of faster resolution when using NOACs may be caused by consistent drug levels with easy dosing and precise rules for dose reduction, though a selection bias in patients cannot be excluded. In addition, phenprocoumon and related substances only inhibit the synthesis of vitamin‐K‐dependent coagulation factors, while Apixaban not only inactivates Factor Xa in plasma but also inactivates Factor Xa already bound to thrombi.\n16\n The clinical consequence of a potentially faster resolution when using NOACs is first and foremost reflected in the possibility for earlier controls of LAA‐thrombi, making an earlier antiarrhythmic strategy for AF possible.\nWhen deciding on the type of oral anticoagulation it should also be taken into account that the number of thromboembolic complications may be reduced by faster resolution of the thrombus.", "In this large observational study the overall efficacy of thrombus resolution did not differ significantly between VKA and NOACs at the point of first TEE control. Regardless of the type of anticoagulation, 76.1% of thrombi were dissolved at first control. The cutoff value of two‐thirds of thrombi dissolved was reached faster when administering a NOAC. We found an overall favorable relation between the number of controlled patients and the number of dissolved thrombi after 10 weeks, independent of the oral anticoagulation used. It would therefore be advisable to schedule follow‐up appointments at this time. Overall there was no difference in resolution of LAA‐thrombi when comparing NOACs and phenprocoumon in a real‐life setting. 
Further studies are needed to confirm the role of NOACs and to differentiate between the resolving potential of thrombin‐ and Factor Xa inhibition as well as for each NOAC individually.", "Open Access funding enabled and organized by Projekt DEAL.", "L. Eckardt reports receiving lecture honoraria from Boehringer Ingelheim, Daiichi Sankyo, BMS Pfizer, and Bayer Medical. H. Wedekind reports receiving lecture honoraria from Bayer Medical and AstraZeneca. The remaining authors declare no conflicts of interest." ]
[ null, "methods", null, "results", null, null, null, null, null, null, null, "discussion", "conclusions", "COI-statement", "COI-statement" ]
[ "atrial fibrillation", "intracardiac thrombus", "LAA‐thrombus", "NOAC", "phenprocoumon", "thrombus resolution" ]
INTRODUCTION: Over 26 million people worldwide suffer from a stroke every year. In western countries 20% of all strokes and transient ischemic attacks (TIAs) are of cardioembolic origin. 1 , 2 Cardioembolic strokes are more severe than other ischemic strokes. 3 , 4 There has been a steady increase of cardioembolic strokes in the last few years. 5 In patients with atrial fibrillation (AF) as the most important risk factor, a consequent anticoagulant therapy avoids 70% of all cardioembolic strokes. 5 , 6 Therefore, the presence of intracardiac thrombi is a potentially life‐threatening condition because of the risk of embolization. To allow rhythm control in patients with AF which may be of prognostic relevance 7 the presence of intracardiac thrombi has to be excluded in advance. Thus, not only prevention but also adequate therapy of existing thrombi is of utmost importance. Today non‐vitamin K‐dependent oral anticoagulants (NOACs) are favored for the prevention of intracardiac thrombi in the majority of patients with AF. 8 However, only limited data exist for the use of NOACs in the resolution of already existing left atrial appendage (LAA) thrombi. 9 NOACs benefit from their simpler application compared to the use of vitamin K antagonists (VKAs) and are largely independent on alimentation or co‐medication. Additionally, therapeutic levels are consistent for most patients and their use is mostly restricted by renal insufficiency. In contrast, VKAs show many interdependencies especially with the frequently used antiarrhythmic drug amiodarone and therapeutic levels can vary depending on alimentation. Still, the amount of practical experience on thrombus resolution acquired over decades may favor VKA. 10 , 11 This study aims at comparing the efficacy of phenprocoumon and NOACs in the resolution of LAA‐thrombi in a real‐world setting. METHODS: This analysis included all consecutive patients diagnosed with an intracardiac thrombus from June 2013 to June 2017 in a general cardiology clinic (SFH Münster, Germany). The intent was to compare the resolving potential of NOACs and the VKA phenprocoumon on thrombi of the LAA. Patients with non‐LAA thrombi (e.g., left ventricular thrombi) were excluded. The primary endpoint was defined as the resolution of the thrombus when patients presented again for follow‐up. Informed consent was obtained from all individual participants included in the study. The datasets generated and analyzed during the current study are not publicly available, as per internal protocol, but are available from the corresponding author on reasonable request. Persistence of the intracardiac thrombi after 1 year was defined as the secondary endpoint. Controls were made by TEE and follow‐up appointments were scheduled according to hospital capacity between 4 and 13 weeks after diagnosis. All changes in anticoagulant therapy with date and specific substance were analyzed. This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Ethics Committee of “Ärztekammer Westfalen‐Lippe” (Nr. 2019‐641‐f‐S). LAA thrombi (n = 114) were diagnosed by transesophageal echocardiography (TEE; n = 111), computer tomography (CT, n = 2), or magnetic resonance imaging (MRI, n = 1). The vast majority of patients presented with symptomatic AF. Some patients already presented with cardioembolic complications and were diagnosed with AF in the following diagnosis of a thromboembolic event. 
Statistical analysis After collecting all data, the evaluation and statistical analysis was made using SPSS Version 25 for Mac OS X (SPSS Inc.). All descriptive data were specified by absolute and relative frequency and complemented with median, arithmetic mean, and SD where necessary. The χ 2 test was used to test for independency. The Kaplan–Meier estimator was used in the context of event history analysis to identify time to resolution under therapy with different types of anticoagulation. Log‐rank tests were used to test for significance. In case of small size of groups Fisher's exact test was applied. After collecting all data, the evaluation and statistical analysis was made using SPSS Version 25 for Mac OS X (SPSS Inc.). All descriptive data were specified by absolute and relative frequency and complemented with median, arithmetic mean, and SD where necessary. The χ 2 test was used to test for independency. The Kaplan–Meier estimator was used in the context of event history analysis to identify time to resolution under therapy with different types of anticoagulation. Log‐rank tests were used to test for significance. In case of small size of groups Fisher's exact test was applied. Statistical analysis: After collecting all data, the evaluation and statistical analysis was made using SPSS Version 25 for Mac OS X (SPSS Inc.). All descriptive data were specified by absolute and relative frequency and complemented with median, arithmetic mean, and SD where necessary. The χ 2 test was used to test for independency. The Kaplan–Meier estimator was used in the context of event history analysis to identify time to resolution under therapy with different types of anticoagulation. Log‐rank tests were used to test for significance. In case of small size of groups Fisher's exact test was applied. RESULTS: Baseline characteristics and comorbidities One hundred and sixty‐three consecutive patients were included, out of those 114 were diagnosed with an intracardiac thrombus located in the left atrium (n = 1) or LAA (n = 113). The remaining patients had LV‐thrombi (n = 36) or thrombi of other locations (n = 13) and were not taken into account. After thrombus detection, overall baseline characteristics between patients on phenprocoumon and NOAC did not differ significantly (Table 1). Groups were comparable with regard to comorbidities, with a slightly lower average CHA2DS2‐VASc Score in the NOAC group (2.68 ± 1.1, n = 19 vs. 3.25 ± 1.8, n = 48; p = .207). Characteristics of the patient collective with LAA‐thrombi (n = 114) in absolute numbers and percentage including SD were needed at the time of inclusion Note: Characteristics of patients on phenprocoumon and NOAC at the time of first control. The p value is calculated between patients on phenprocoumon and patients on NOAC. Impairment of kidney function was defined as GFR < 90 ml/min/1.73 m² body surface area. One hundred and sixty‐three consecutive patients were included, out of those 114 were diagnosed with an intracardiac thrombus located in the left atrium (n = 1) or LAA (n = 113). The remaining patients had LV‐thrombi (n = 36) or thrombi of other locations (n = 13) and were not taken into account. After thrombus detection, overall baseline characteristics between patients on phenprocoumon and NOAC did not differ significantly (Table 1). Groups were comparable with regard to comorbidities, with a slightly lower average CHA2DS2‐VASc Score in the NOAC group (2.68 ± 1.1, n = 19 vs. 3.25 ± 1.8, n = 48; p = .207). 
Characteristics of the patient collective with LAA‐thrombi (n = 114) in absolute numbers and percentage including SD were needed at the time of inclusion Note: Characteristics of patients on phenprocoumon and NOAC at the time of first control. The p value is calculated between patients on phenprocoumon and patients on NOAC. Impairment of kidney function was defined as GFR < 90 ml/min/1.73 m² body surface area. Use of oral anticoagulants At the time of thrombus detection, 47 patients (41.2%) were already on oral anticoagulation (28 on phenprocoumon, 9 on Rivaroxaban, 6 on Apixaban, 3 on Edoxaban, and 1 on Dabigatran). In those not on OAC, anticoagulation was started with phenprocoumon (n = 40) or NOAC (n = 7 Rivaroxaban, n = 8 Apixaban, n = 1 Dabigatran) according to the decision of the treating physician. Heparin was the OAC of the first choice in 10 patients. All of these patients were critically ill, four died within 2 weeks after diagnosis, four were lost to follow‐up, and the remaining two were discontinued on anticoagulation because of bleeding complications. If a thrombus was detected in the presence of OAC the drugs were continued in 25 patients (53.2%), changed to phenprocoumon after previous NOAC therapy in 15 out of 19 (79.0%), or from phenprocoumon to NOAC in 6 out of 28 patients (21.4%), 3 were put on Rivaroxaban and 3 on Apixaban. One patient was changed to heparin after previous phenprocoumon therapy. Overall a steady increase in the use of NOACs during the study period was noted, starting at 6.25% in 2013 and reaching 46.2% at the end of the inclusion period in 2018. At the time of thrombus detection, 47 patients (41.2%) were already on oral anticoagulation (28 on phenprocoumon, 9 on Rivaroxaban, 6 on Apixaban, 3 on Edoxaban, and 1 on Dabigatran). In those not on OAC, anticoagulation was started with phenprocoumon (n = 40) or NOAC (n = 7 Rivaroxaban, n = 8 Apixaban, n = 1 Dabigatran) according to the decision of the treating physician. Heparin was the OAC of the first choice in 10 patients. All of these patients were critically ill, four died within 2 weeks after diagnosis, four were lost to follow‐up, and the remaining two were discontinued on anticoagulation because of bleeding complications. If a thrombus was detected in the presence of OAC the drugs were continued in 25 patients (53.2%), changed to phenprocoumon after previous NOAC therapy in 15 out of 19 (79.0%), or from phenprocoumon to NOAC in 6 out of 28 patients (21.4%), 3 were put on Rivaroxaban and 3 on Apixaban. One patient was changed to heparin after previous phenprocoumon therapy. Overall a steady increase in the use of NOACs during the study period was noted, starting at 6.25% in 2013 and reaching 46.2% at the end of the inclusion period in 2018. AF and LAA thrombi Most patients (91.9%) with an LAA‐thrombus had documented AF or atrial flutter at the time of diagnosis. Persistent AF was the most common overall as well as in the subgroups of patients on phenprocoumon or NOAC (68.8% and 78.9%, respectively; Table 1). Correspondingly, the majority of patients for whom all relevant data points could be acquired (n = 99) showed varying degrees of dilatation of the LA, only 23.2% of patients showed no dilatation according to the American Society of Echocardiography (Table 1). 12 The mean size of LAA thrombi was 128.9 mm² (±142.5) for the 38 patients for whom all relevant data points could be acquired. Most patients (91.9%) with an LAA‐thrombus had documented AF or atrial flutter at the time of diagnosis. 
Persistent AF was the most common overall as well as in the subgroups of patients on phenprocoumon or NOAC (68.8% and 78.9%, respectively; Table 1). Correspondingly, the majority of patients for whom all relevant data points could be acquired (n = 99) showed varying degrees of dilatation of the LA, only 23.2% of patients showed no dilatation according to the American Society of Echocardiography (Table 1). 12 The mean size of LAA thrombi was 128.9 mm² (±142.5) for the 38 patients for whom all relevant data points could be acquired. Major adverse events The majority of patients (88.4%) did not suffer a stroke before being diagnosed with an intracardiac thrombus. During the course of the study, 5.3% (n = 6; 4 on VKA, none on NOAK) of patients suffered from a stroke and 1.8% (n = 2; both on VKA) from a TIA. In the group of patients who had a previous stroke (n = 11), two patients (18.2%; both on VKA) suffered from a recurrent stroke and one patient from a recurrent TIA after the diagnosis of intracardiac thrombi was established. The majority of patients (88.4%) did not suffer a stroke before being diagnosed with an intracardiac thrombus. During the course of the study, 5.3% (n = 6; 4 on VKA, none on NOAK) of patients suffered from a stroke and 1.8% (n = 2; both on VKA) from a TIA. In the group of patients who had a previous stroke (n = 11), two patients (18.2%; both on VKA) suffered from a recurrent stroke and one patient from a recurrent TIA after the diagnosis of intracardiac thrombi was established. Time of control and resolution of thrombi Out of the 114 LAA‐thrombi, 67 (62.3%) patients were controlled at least once and 47 patients were lost to follow‐up for several reasons, for example, death (n = 7), referral to a secondary hospital (n = 4), different follow‐up modality (e.g., CT/MRI instead of TEE; n = 2). Yet, the majority of patients did not present again within 1 year for their scheduled follow‐up TEE (n = 34). The average time to first control TEE was 58 ± 42.2 (median 48) days or 8 weeks. Disregarding the type of oral anticoagulation administered the LAA‐thrombi were dissolved in 74.6% of cases at the time of first control. Out of the 67 patients, 48 were treated with phenprocoumon. In this group 77.1% of LAA‐thrombi were dissolved, and the average time to first control was 63 days (±48.1) or 9 weeks. The INR was considered effective in 69% of patients at this isolated point in time. The remaining 19 patients were treated with NOACs and showed a resolution rate of 73.7%. The control took place after an average time of 48 days (±18.3). There was no significant difference in the resolution of LAA thrombi at the point of first control depending on the type of anticoagulation therapy (p = .499). Almost all patients (15 out of 16) who had no resolution of their LAA‐thrombus at the time of first control were re‐evaluated in a second control. At this time six additional LAA‐thrombi were dissolved. Out of the 114 LAA‐thrombi, 67 (62.3%) patients were controlled at least once and 47 patients were lost to follow‐up for several reasons, for example, death (n = 7), referral to a secondary hospital (n = 4), different follow‐up modality (e.g., CT/MRI instead of TEE; n = 2). Yet, the majority of patients did not present again within 1 year for their scheduled follow‐up TEE (n = 34). The average time to first control TEE was 58 ± 42.2 (median 48) days or 8 weeks. 
Time to resolution
Out of the 67 patients who were controlled at least once, 56 showed resolution of the thrombus within 1 year. Irrespective of the type of oral anticoagulation, the average time to resolution was 77.8 (SD ± 7.4) days; the median was 53.5 days. A resolution rate of two‐thirds of thrombi was reached after 71 days (Figure 1); correspondingly, a control rate of more than 80% was reached after 10 weeks (Table 2). Additionally, one patient was first controlled after 211 days and one after 245 days; both thrombi were dissolved.
Figure 1. Overall time‐dependent resolution of LAA thrombi. The x axis shows the duration in days and the y axis the proportion of persisting thrombi. The value of 2/3 resolution of initially diagnosed thrombi is reached after 71 days.
Table 2. Control rates and resolution of thrombi regardless of the type of oral anticoagulation. Note: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks.
After 10 weeks, 82.1% of patients were controlled, corresponding to 64.3% of total thrombi dissolved. Conversely, out of the 46 patients controlled, 78.3% of their thrombi were dissolved. As expected, the percentage of thrombi dissolved increases with longer time intervals.
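Figure 1 plots the proportion of persisting thrombi over time, which corresponds to a product-limit (Kaplan–Meier-type) estimate of thrombus persistence. The following sketch computes such an estimate on hypothetical (days to control, resolved yes/no) follow-up data; the data, and the assumption that patients without resolution are treated as censored at their control date, are illustrative only.

```python
import numpy as np

def persistence_curve(days, resolved):
    """Product-limit estimate of the proportion of thrombi still persisting.

    days     -- time from start of anticoagulation to the control (or censoring)
    resolved -- 1 if the thrombus was dissolved at that control, 0 if censored
    """
    days = np.asarray(days, dtype=float)
    resolved = np.asarray(resolved, dtype=int)
    order = np.argsort(days)
    days, resolved = days[order], resolved[order]

    at_risk = len(days)
    surviving = 1.0
    curve = []
    for t in np.unique(days):
        mask = days == t
        events = int(resolved[mask].sum())      # thrombi resolved at time t
        surviving *= 1.0 - events / at_risk     # proportion still persisting
        curve.append((t, surviving))
        at_risk -= int(mask.sum())              # drop resolved and censored cases
    return curve

# Hypothetical follow-up data
for t, s in persistence_curve([30, 45, 48, 60, 71, 90], [1, 0, 1, 1, 1, 0]):
    print(f"day {t:>3.0f}: {s:.2f} of thrombi persisting")
```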
Resolution of LAA‐thrombi depending on the type of oral anticoagulation
The two groups did not differ in their time to first control, with an average of 63 and 48 days (p = .224) in the phenprocoumon and NOAC groups, respectively. The average time to resolution was 79.4 ± 8.6 days using VKA; in the presence of NOACs the average time was 59.7 ± 12.6 days. There was no significant difference in resolution time depending on the administered type of anticoagulation (p = .201) (Figure 2). The resolution of LAA‐thrombi depending on the type of oral anticoagulation is demonstrated in Table 3. Comparable percentages of controlled patients were seen; for example, after 8 weeks control rates were 82.1% and 81.3% in the phenprocoumon and NOAC group, respectively.
Figure 2. Resolution of LAA‐thrombi depending on the type of oral anticoagulation. The x axis shows the time in days and the y axis the proportion of persisting thrombi. Dark gray shows the curve when phenprocoumon was administered, black shows patients on NOACs. A resolution rate of 2/3 of thrombi is reached earlier in patients on NOACs than in patients on phenprocoumon. These data take into account all changes in the anticoagulation regimen, and the number of days spent switching anticoagulation was also accounted for.
Table 3. Control rates and resolution rates under NOAC and phenprocoumon therapy. Note: Cutoffs were chosen to match realistic time frames for clinical controls. Displayed are the number of controlled patients and the number of dissolved thrombi after 4, 6, 8, 10, and 12 weeks.
Patients in both groups did not differ significantly in time to control (χ² = 0.367, Spearman correlation coefficient r = .634). Controls performed after 4, 6, 8, and 10 weeks did not differ significantly in thrombus resolution rate between the two groups; after 12 weeks, a significantly higher resolution rate was found when administering NOACs. After 4 and 6 weeks, resolution rates were overall low but slightly higher in the NOAC group. With regard to the control after 10 weeks, when overall two‐thirds of thrombi were dissolved, the resolution rate was 63.4% in the VKA group and 73.4% in the group of patients treated with NOAC. After 12 weeks, 90% of all patients were controlled at least once; resolution rates were 93% in the NOAC group and 68% in the VKA group. Until the point of 10 weeks there was no significant difference in the time to resolution of LAA thrombi comparing NOACs and VKA, but there was a clear trend towards faster resolution in the NOAC therapy group. However, after 12 weeks of treatment with either NOAC or VKA, the thrombi had more often resolved in the presence of NOAC as compared to phenprocoumon.
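Tables 2 and 3 tabulate, for each cutoff (4, 6, 8, 10, and 12 weeks), how many patients had been controlled and how many thrombi had dissolved, overall and per anticoagulant. A minimal sketch of that tabulation on hypothetical per-patient data (drug, days to first control, resolved yes/no) is shown below; the column names and example values are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical per-patient follow-up data
df = pd.DataFrame({
    "drug":     ["phenprocoumon", "phenprocoumon", "NOAC", "NOAC", "phenprocoumon", "NOAC"],
    "days":     [35, 62, 41, 55, 80, 49],
    "resolved": [True, False, True, True, True, False],
})

for weeks in (4, 6, 8, 10, 12):
    controlled = df[df["days"] <= weeks * 7]   # patients controlled by this cutoff
    for drug, grp in controlled.groupby("drug"):
        print(f"{weeks:>2} weeks | {drug:<13} | controlled: {len(grp)} "
              f"| dissolved: {int(grp['resolved'].sum())}")
```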
Baseline characteristics and comorbidities
One hundred and sixty‐three consecutive patients were included; of those, 114 were diagnosed with an intracardiac thrombus located in the left atrium (n = 1) or the LAA (n = 113). The remaining patients had LV‐thrombi (n = 36) or thrombi in other locations (n = 13) and were not taken into account. After thrombus detection, overall baseline characteristics between patients on phenprocoumon and NOAC did not differ significantly (Table 1). Groups were comparable with regard to comorbidities, with a slightly lower average CHA2DS2‐VASc score in the NOAC group (2.68 ± 1.1, n = 19 vs. 3.25 ± 1.8, n = 48; p = .207).
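Group comparability is summarized with the CHA2DS2‐VASc score. For readers unfamiliar with it, a minimal sketch of how the score is computed is given below; the function name and parameters are illustrative and not taken from the study.

```python
def cha2ds2_vasc(age, female, heart_failure, hypertension, diabetes,
                 prior_stroke_or_tia, vascular_disease):
    """CHA2DS2-VASc stroke-risk score (range 0-9)."""
    score = 0
    score += 2 if age >= 75 else (1 if 65 <= age < 75 else 0)  # A2 / A
    score += 1 if female else 0                                 # Sc
    score += 1 if heart_failure else 0                          # C
    score += 1 if hypertension else 0                           # H
    score += 1 if diabetes else 0                               # D
    score += 2 if prior_stroke_or_tia else 0                    # S2
    score += 1 if vascular_disease else 0                       # V
    return score

# Example: a 72-year-old woman with hypertension and diabetes scores 4
print(cha2ds2_vasc(72, True, False, True, True, False, False))
```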
DISCUSSION: Intracardiac thrombi are a potentially life‐threatening condition because of their risk of embolization. NOACs already play a significant role in the prevention of thromboembolic events in patients with AF, and in this setting NOACs are the preferred treatment over the use of VKA. 8 The major advantage of the group of NOACs is their overall better safety profile. Additional benefits include simple application with consistent and predictable therapeutic levels for a wide range of patients, clear rules for dose reduction, and no need for measurement of therapeutic blood levels. Moreover, NOACs benefit from fewer drug interactions overall, and their effect does not depend on alimentation. 10 , 11 The present study analyzed 114 patients with an LAA‐thrombus, primarily treated with NOAC or phenprocoumon. Overall, 64.4% of patients were controlled at least once. The average time to first control was 58 days, or approximately 8 weeks. The therapeutic adherence of the collective in this study was as low as in comparable observational studies such as CLOT‐AF. 13 Whereas the average time to resolution was 112 days in CLOT‐AF when administering a NOAC, in our study resolution was observed after approximately 60 days in patients with NOAC therapy. The difference may be explained by the smaller sample size in CLOT‐AF and a later time to first control. In CLOT‐AF, 14 of 15 patients showed resolution of the thrombus at the time of first control; one thrombus was not resolved over the course of the study. In the NOAC group of our study, 73.7% of thrombi were dissolved at the time of first control. This indicates that the first control probably took place later in CLOT‐AF than in our study. 13 Comparing the results of the X‐TRA trial with our data, it can be noted that after 6 weeks there was a very similar resolution rate of 40% across the NOAC group as a whole. It is possible that the time to first control in X‐TRA was too short to fully capture the dissolving potential of Rivaroxaban, 9 resulting in a resolution rate of 41.5%. Comparing the dissolving potential of NOACs with that of VKA reported from other studies, the effectiveness of NOACs seems to be at least as good as that of VKA. Only very limited data exist on the potential of VKA in thrombus resolution. In a TEE study by Jaber et al., the resolution rate after diagnosis of a left atrial thrombus was 80.1% after 47 ± 18 days in patients who mostly received VKA (or heparin) with a mean INR of 2.2. One may speculate that the INR levels were better controlled than in a real‐world setting. 14 However, similar results were demonstrated in a study by Collins et al.,15 in which the resolution rate in patients treated with VKA was 89% after 4 weeks, with 18 patients included. At the time of diagnosis, about half of the patients already received therapeutic anticoagulation. Phenprocoumon was used more frequently, and patients on VKAs were more likely to be continued on VKA after diagnosis of a thrombus.
This fact most likely reflects the remaining uncertainty in the use of NOACs and the confidence in VKA therapy, with its measurable therapeutic effect (INR), in the context of cardiac thrombus formation. Over the course of the study, an increasing use of NOACs was observed, as current AF guidelines prefer the use of NOACs. After 10 weeks, nearly 2/3 of LAA thrombi were dissolved. This point seems to be the best time for a first control with regard to the number of tested patients and dissolved thrombi. However, comparatively low rates of resolution were found after 6 weeks, a time when one‐third of the patients had already received their first control. Scheduling the first control after 6 weeks seems a practical approach for daily clinical routine, while the resolution numbers indicate this to be rather suboptimal. With regard to the resolution rates depending on the type of oral anticoagulation, there was a trend towards earlier resolution of LAA thrombi in patients with NOAC therapy. This trend culminated in a significant difference after 12 weeks with a benefit for NOACs (93.3% vs. 70.7%; p = .046). This benefit of faster resolution when using NOACs may be caused by consistent drug levels with easy dosing and precise rules for dose reduction, though a selection bias in patients cannot be excluded. In addition, phenprocoumon and related substances only inhibit the synthesis of vitamin‐K‐dependent coagulation factors, while Apixaban not only inactivates Factor Xa in plasma but also inactivates Factor Xa already bound to thrombi. 16 The clinical consequence of a potentially faster resolution when using NOACs is first and foremost the possibility of earlier controls of LAA‐thrombi, making an earlier antiarrhythmic strategy for AF possible. When deciding on the type of oral anticoagulation, it should also be taken into account that the number of thromboembolic complications may be reduced by faster resolution of the thrombus. CONCLUSION: In this large observational study the overall efficacy of thrombus resolution did not differ significantly between VKA and NOACs at the point of first TEE control. Regardless of the type of anticoagulation, 76.1% of thrombi were dissolved at first control. The cutoff value of two‐thirds of thrombi dissolved was reached faster when administering a NOAC. We found an overall favorable relation between the number of controlled patients and the number of dissolved thrombi after 10 weeks, independent of the oral anticoagulation used. It would therefore be advisable to schedule follow‐up appointments at this time. Overall, there was no difference in resolution of LAA‐thrombi when comparing NOACs and phenprocoumon in a real‐life setting. Further studies are needed to confirm the role of NOACs and to differentiate between the resolving potential of thrombin and Factor Xa inhibition, as well as for each NOAC individually. ACKNOWLEDGMENT: Open Access funding enabled and organized by Projekt DEAL. CONFLICTS OF INTEREST: L. Eckardt reports receiving lecture honoraria from Boehringer Ingelheim, Daiichi Sankyo, BMS/Pfizer, and Bayer Medical. H. Wedekind reports receiving lecture honoraria from Bayer Medical and AstraZeneca. The remaining authors declare no conflicts of interest.
Background: Atrial fibrillation is the most important risk factor for left atrial appendage (LAA) thrombi, a potentially life-threatening condition. Thrombus resolution may prevent embolic events and allow rhythm-control strategies, which have been shown to reduce cardiovascular complications. Methods: Consecutive patients with LAA-thrombi from June 2013 to June 2017 were included in an observational single-center analysis. The primary endpoint was defined as the resolution of the thrombus. The observational period was 1 year. Resolution rates in patients on phenprocoumon or NOACs were compared and the time to resolution was analyzed. Results: We identified 114 patients with LAA-thrombi. There was no significant difference in the efficacy of resolution between phenprocoumon and NOACs (p = .499) at the time of first control, which took place after a mean of 58 ± 42.2 (median 48) days. At first control most thrombi were dissolved (74.6%). The analysis after set time intervals revealed a resolution rate of 2/3 of LAA-thrombi after 8-10 weeks in the phenprocoumon and NOAC groups. After 12 weeks a higher number of thrombi had resolved in the presence of NOAC (89.3%) whereas in the presence of phenprocoumon 68.3% had resolved (p = .046). Conclusions: In this large observational study NOACs were found to be potent drugs for the resolution of LAA-thrombi. In addition, the resolution of LAA-thrombi was found to be faster in the presence of NOAC as compared to phenprocoumon.
INTRODUCTION: Over 26 million people worldwide suffer a stroke every year. In western countries, 20% of all strokes and transient ischemic attacks (TIAs) are of cardioembolic origin. 1 , 2 Cardioembolic strokes are more severe than other ischemic strokes, 3 , 4 and there has been a steady increase in cardioembolic strokes in the last few years. 5 In patients with atrial fibrillation (AF), the most important risk factor, consistent anticoagulant therapy prevents 70% of all cardioembolic strokes. 5 , 6 Therefore, the presence of intracardiac thrombi is a potentially life‐threatening condition because of the risk of embolization. To allow rhythm control in patients with AF, which may be of prognostic relevance, 7 the presence of intracardiac thrombi has to be excluded in advance. Thus, not only prevention but also adequate therapy of existing thrombi is of utmost importance. Today, non‐vitamin K‐dependent oral anticoagulants (NOACs) are favored for the prevention of intracardiac thrombi in the majority of patients with AF. 8 However, only limited data exist for the use of NOACs in the resolution of already existing left atrial appendage (LAA) thrombi. 9 NOACs benefit from their simpler application compared to the use of vitamin K antagonists (VKAs) and are largely independent of alimentation or co‐medication. Additionally, therapeutic levels are consistent for most patients, and their use is mostly restricted by renal insufficiency. In contrast, VKAs show many interdependencies, especially with the frequently used antiarrhythmic drug amiodarone, and therapeutic levels can vary depending on alimentation. Still, the amount of practical experience on thrombus resolution acquired over decades may favor VKA. 10 , 11 This study aims at comparing the efficacy of phenprocoumon and NOACs in the resolution of LAA‐thrombi in a real‐world setting.
7,737
296
[ 343, 112, 233, 253, 137, 118, 304, 260, 527, 10 ]
15
[ "patients", "thrombi", "resolution", "time", "control", "laa", "noac", "phenprocoumon", "weeks", "days" ]
[ "anticoagulant therapy avoids", "anticoagulants time thrombus", "anticoagulation 76 thrombi", "anticoagulants noacs favored", "oral anticoagulants noacs" ]
[CONTENT] atrial fibrillation | intracardiac thrombus | LAA‐thrombus | NOAC | phenprocoumon | thrombus resolution [SUMMARY]
[CONTENT] atrial fibrillation | intracardiac thrombus | LAA‐thrombus | NOAC | phenprocoumon | thrombus resolution [SUMMARY]
[CONTENT] atrial fibrillation | intracardiac thrombus | LAA‐thrombus | NOAC | phenprocoumon | thrombus resolution [SUMMARY]
[CONTENT] atrial fibrillation | intracardiac thrombus | LAA‐thrombus | NOAC | phenprocoumon | thrombus resolution [SUMMARY]
[CONTENT] atrial fibrillation | intracardiac thrombus | LAA‐thrombus | NOAC | phenprocoumon | thrombus resolution [SUMMARY]
[CONTENT] atrial fibrillation | intracardiac thrombus | LAA‐thrombus | NOAC | phenprocoumon | thrombus resolution [SUMMARY]
[CONTENT] Administration, Oral | Anticoagulants | Atrial Appendage | Atrial Fibrillation | Echocardiography, Transesophageal | Humans | Phenprocoumon | Thrombosis [SUMMARY]
[CONTENT] Administration, Oral | Anticoagulants | Atrial Appendage | Atrial Fibrillation | Echocardiography, Transesophageal | Humans | Phenprocoumon | Thrombosis [SUMMARY]
[CONTENT] Administration, Oral | Anticoagulants | Atrial Appendage | Atrial Fibrillation | Echocardiography, Transesophageal | Humans | Phenprocoumon | Thrombosis [SUMMARY]
[CONTENT] Administration, Oral | Anticoagulants | Atrial Appendage | Atrial Fibrillation | Echocardiography, Transesophageal | Humans | Phenprocoumon | Thrombosis [SUMMARY]
[CONTENT] Administration, Oral | Anticoagulants | Atrial Appendage | Atrial Fibrillation | Echocardiography, Transesophageal | Humans | Phenprocoumon | Thrombosis [SUMMARY]
[CONTENT] Administration, Oral | Anticoagulants | Atrial Appendage | Atrial Fibrillation | Echocardiography, Transesophageal | Humans | Phenprocoumon | Thrombosis [SUMMARY]
[CONTENT] anticoagulant therapy avoids | anticoagulants time thrombus | anticoagulation 76 thrombi | anticoagulants noacs favored | oral anticoagulants noacs [SUMMARY]
[CONTENT] anticoagulant therapy avoids | anticoagulants time thrombus | anticoagulation 76 thrombi | anticoagulants noacs favored | oral anticoagulants noacs [SUMMARY]
[CONTENT] anticoagulant therapy avoids | anticoagulants time thrombus | anticoagulation 76 thrombi | anticoagulants noacs favored | oral anticoagulants noacs [SUMMARY]
[CONTENT] anticoagulant therapy avoids | anticoagulants time thrombus | anticoagulation 76 thrombi | anticoagulants noacs favored | oral anticoagulants noacs [SUMMARY]
[CONTENT] anticoagulant therapy avoids | anticoagulants time thrombus | anticoagulation 76 thrombi | anticoagulants noacs favored | oral anticoagulants noacs [SUMMARY]
[CONTENT] anticoagulant therapy avoids | anticoagulants time thrombus | anticoagulation 76 thrombi | anticoagulants noacs favored | oral anticoagulants noacs [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | control | laa | noac | phenprocoumon | weeks | days [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | control | laa | noac | phenprocoumon | weeks | days [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | control | laa | noac | phenprocoumon | weeks | days [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | control | laa | noac | phenprocoumon | weeks | days [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | control | laa | noac | phenprocoumon | weeks | days [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | control | laa | noac | phenprocoumon | weeks | days [SUMMARY]
[CONTENT] strokes | cardioembolic | cardioembolic strokes | thrombi | use | noacs | ischemic | presence intracardiac thrombi | presence intracardiac | existing [SUMMARY]
[CONTENT] test | analysis | spss | presented | patients presented | statistical | statistical analysis | event | data | thrombi [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | days | noac | control | weeks | phenprocoumon | laa [SUMMARY]
[CONTENT] dissolved | thrombi | noacs | overall | number | thrombi dissolved | noac | control | inhibition | real life [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | noac | control | days | laa | weeks | phenprocoumon [SUMMARY]
[CONTENT] patients | thrombi | resolution | time | noac | control | days | laa | weeks | phenprocoumon [SUMMARY]
[CONTENT] LAA ||| Thrombus [SUMMARY]
[CONTENT] June 2013 to June 2017 ||| ||| 1 year ||| NOACs [SUMMARY]
[CONTENT] 114 ||| phenprocoumon | NOACs | .499 | first | 58 ± | 42.2 | 48) days ||| first | 74.6% ||| 2/3 | 8-10 weeks | NOAC ||| 12 weeks | NOAC | 89.3% | 68.3% | .046 [SUMMARY]
[CONTENT] ||| NOAC [SUMMARY]
[CONTENT] LAA ||| Thrombus ||| June 2013 to June 2017 ||| ||| 1 year ||| NOACs ||| ||| 114 ||| phenprocoumon | NOACs | .499 | first | 58 ± | 42.2 | 48) days ||| first | 74.6% ||| 2/3 | 8-10 weeks | NOAC ||| 12 weeks | NOAC | 89.3% | 68.3% | .046 ||| ||| NOAC [SUMMARY]
[CONTENT] LAA ||| Thrombus ||| June 2013 to June 2017 ||| ||| 1 year ||| NOACs ||| ||| 114 ||| phenprocoumon | NOACs | .499 | first | 58 ± | 42.2 | 48) days ||| first | 74.6% ||| 2/3 | 8-10 weeks | NOAC ||| 12 weeks | NOAC | 89.3% | 68.3% | .046 ||| ||| NOAC [SUMMARY]
Surgical reintervention after failed antireflux surgery: a systematic review of the literature.
19347410
Outcome and morbidity of redo antireflux surgery are suggested to be less satisfactory than those of primary surgery. Studies reporting on redo surgery, however, are usually much smaller than those of primary surgery. The aim of this study was to summarize the currently available literature on redo antireflux surgery.
BACKGROUND
A structured literature search was performed in the electronic databases of MEDLINE, EMBASE, and Cochrane Central Register of Controlled Trials.
MATERIAL AND METHODS
A total of 81 studies met the inclusion criteria. The study design was prospective in 29, retrospective in 15, and not reported in 37 studies. In these studies, 4,584 reoperations in 4,509 patients are reported. Recurrent reflux and dysphagia were the most frequent indications; intraoperative complications occurred in 21.4% and postoperative complications in 15.6%, with an overall mortality rate of 0.9%. The conversion rate in laparoscopic surgery was 8.7%. Mean (±SEM) duration of surgery was 177.4 ± 10.3 min and mean hospital stay was 5.5 ± 0.5 days. Symptomatic outcome was successful in 81.1% and was equal in the laparoscopic and conventional approach. Objective outcome was obtained in 24 studies (29.6%) and success was reported in 78.3%, with a slightly higher success rate in case of laparoscopy than with open surgery (85.8% vs. 78.0%).
RESULTS
This systematic review on redo antireflux surgery has confirmed that morbidity and mortality after redo surgery is higher than after primary surgery and symptomatic and objective outcome are less satisfactory. Data on objective results were scarce and consistency with regard to reporting outcome is necessary.
CONCLUSION
[ "Fundoplication", "Gastroesophageal Reflux", "Humans", "Laparoscopy", "Laparotomy", "Patient Satisfaction", "Reoperation", "Treatment Failure", "Treatment Outcome" ]
2710493
Introduction
Antireflux surgery for refractory gastroesophageal reflux disease (GERD) has a satisfactory outcome in 85–90% of patients.1–6 In the remaining 10–15%, reflux symptoms persist or recur, or complications occur. Dysphagia is a frequent complication of fundoplication.7 The indications for reoperation are far from straightforward, varying from severe recurrent symptoms with a more than adequate anatomical result to recurrent abnormal anatomy without any symptoms at all. Studies on reoperations also show similarly wide variations, with a full range of abnormal anatomy, symptoms, and objective failure documented by esophageal manometry and pH monitoring. In our recently published study on redo antireflux surgery, morbidity and mortality were higher than after primary antireflux surgery, with a symptomatic and objective success rate of 70%, which is clearly inferior to the outcome of primary surgery.4,8 Several other studies have been published describing causes of failure of conventional and laparoscopic antireflux surgery. Most studies have included only a small group of patients, so an adequate impression of the outcome of reoperation is hard to extract from such studies. This study aims to summarize the currently available literature on surgical reintervention after primary antireflux surgery, focusing on morbidity, mortality, and outcome, in order to give a more complete overview of the results of redo antireflux surgery and to provide guidance on how patients should be informed about their chances of success.
null
null
Results
General Results
One thousand six hundred twenty-five articles were eligible for further selection after removing duplicate hits, and finally, 73 articles met the inclusion criteria (Fig. 1). The references of these articles yielded eight more articles for inclusion. These articles had not been identified with the initial search strategy because of the absence of abstracts in the databases or an atypical description of the intervention or disease. Eventually, 81 articles were eligible for inclusion in this study. According to the Oxford Centre for Evidence Based Medicine Levels of Evidence, 27 studies had a level of evidence IIb (33.3%),8,10–35 two a level of evidence IIIb (2.5%),36,37 and 15 a level of evidence IV (18.5%).38–52 The remaining 37 studies (45.7%) were cohort studies, but a level of evidence could not be assigned owing to unknown study design.53–89 Baseline characteristics extracted from the individual studies are shown in Table 2.
Figure 1. Results of search strategy and selection of studies.
Table 2. Baseline Characteristics Extracted from the Included Studies
Number of patients (n): 4,509 (male 1,524, 33.8%; female 1,762, 39.1%; sex not reported 1,223, 27.1%)
Age (years): 51.3 ± 0.8
Number of reoperations (n): 4,584
Study period (months): 10.8 ± 0.7
Duration between primary surgery and reoperation (months): 38.3 ± 4.1
Study design of the individual studies: prospective cohort study 27 (33.3%); retrospective cohort study 14 (17.3%); prospective case–control study 2 (2.5%); retrospective case–control study 1 (1.2%); not reported 37 (45.7%)
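The pooled baseline values in Table 2 (for example, age 51.3 ± 0.8 years) are means with their standard error aggregated across studies; the review does not state how studies were weighted. As one plausible way to reproduce such a summary, a minimal sketch of an unweighted across-study mean with its SEM is shown below on hypothetical per-study mean ages.

```python
import numpy as np

def pooled_mean_sem(study_means):
    """Unweighted mean and standard error of the mean across studies."""
    x = np.asarray(study_means, dtype=float)
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

# Hypothetical per-study mean ages (years)
mean_age, sem_age = pooled_mean_sem([49.8, 52.1, 50.5, 53.0, 51.2])
print(f"pooled age: {mean_age:.1f} ± {sem_age:.1f} years (mean ± SEM)")
```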
Primary Antireflux Procedures
Total fundoplication performed by laparoscopy, laparotomy, or thoracotomy was the most frequently reported primary antireflux procedure, followed by partial fundoplication (Table 3). The type of primary antireflux procedure was not reported in almost one third, and 241 patients (5.3%) underwent more than one previous operation before inclusion in the original studies.
Table 3. Type and Indication of Primary Antireflux Procedures and Reoperations (primary procedures n = 4,750; reoperations n = 4,584)
Indication of operations (primary procedures / reoperations): recurrent reflux – / 1,912 (41.7%); dysphagia – / 760 (16.6%); recurrent reflux and dysphagia – / 184 (4.0%); anatomical abnormality – / 114 (2.5%); gas-bloat syndrome – / 31 (0.7%); miscellaneous – / 148 (3.2%); not reported – / 1,435 (31.3%)
Type of operations (primary procedures / reoperations): total fundoplication 2,162 (45.5%) / 2,397 (52.3%); partial fundoplication 471 (9.9%) / 999 (21.8%); resection surgery – / 327 (7.1%); miscellaneous procedures 657 (13.8%) / 737 (16.1%); not reported 1,460 (30.7%) / 124 (2.7%)
Causes of Failure of Primary Antireflux Surgery
Causes of failure of the previous antireflux procedure were reported on 3,175 reoperations in total. Intrathoracic wrap migration, total or partial disruption of the wrap, and telescoping were the most common anatomical abnormalities encountered (Table 4). Esophageal motility disorder or erroneous diagnosis, i.e., another primary disease than GERD, were the causes of failure of the previous operation in 62 patients (2.0%). In 194 reoperations (6.1%), no cause of failure could be identified.
Table 4. Causes of Failure of Previous Antireflux Procedure (n = 3,175)
Anatomical abnormalities: intrathoracic wrap migration 885 (27.9%); wrap disruption 722 (22.7%); telescoping 448 (14.1%); para-esophageal hiatal herniation 195 (6.1%); hiatal disruption 167 (5.3%); tight wrap 168 (5.3%); stricture 60 (1.9%)
Wrong primary diagnosis: achalasia 37 (1.2%); esophageal spasms 7 (0.2%); sclerodermia 4 (0.1%); esophageal carcinoma 1 (0.03%)
Disturbed esophageal motility: 13 (0.4%)
No cause for failure identified: 194 (6.1%)
Miscellaneous: 347 (10.9%)
Not reported: 120 (3.8%)
Note: Percentages exceed 100% since more than one cause of failure was found during several reoperations.
From six studies, it was shown that wrap disruption and telescoping were more frequent after conventional primary surgery, whereas disruption of hiatal repair and a tight wrap were more frequent after laparoscopic primary repair (Table 5).18,49,61,67,84,85 Intrathoracic wrap migration was reported by Serafina et al.85 to be more frequent after conventional primary procedures (13/17, 76.5% vs. 5/11, 45.5%), whereas Heniford et al.67 showed that this was more frequent after laparoscopic primary repair (16/22, 72.7% vs. 13/33, 39.4%). In the study by Salminen et al.,84 intrathoracic wrap migration was equally frequent after conventional and laparoscopic primary surgery.
Table 5. Anatomical Abnormalities Depending on the Approach of Primary Surgery and the Indication of Reoperation
Depending on the approach of primary surgery (conventional abdominal approach, n = 120 / laparoscopic approach, n = 132): wrap disruption 48 (40.0%) / 24 (18.2%); telescoping 32 (26.6%) / 10 (7.6%); hiatal disruption 23 (19.2%) / 42 (31.8%); tight wrap 2 (1.7%) / 24 (18.2%); miscellaneous 36 (30.0%) / 42 (31.8%)
Depending on the indication of reoperation (recurrent reflux, n = 234 / dysphagia, n = 118): intrathoracic wrap migration 104 (44.4%) / 18 (15.3%); wrap disruption 109 (46.6%) / 12 (10.2%); no cause of failure 34 (14.5%) / 51 (43.2%); miscellaneous 64 (27.4%) / 54 (45.8%)
Note: Percentages exceed 100% since more than one cause of failure was found during several reoperations.
In five other studies,8,11,12,31,72 it was shown that intrathoracic wrap migration and wrap disruption were more frequent in the case of recurrent reflux, whereas in the case of dysphagia, no cause of failure could be demonstrated more frequently (Table 5).
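Table 5 contrasts how often individual abnormalities were found after conventional versus laparoscopic primary surgery (for example, wrap disruption in 48 of 120 vs. 24 of 132 reoperations). The review reports no formal statistical testing of these frequencies; purely as an illustration, such a 2 × 2 comparison could be run with a chi-square test, bearing in mind that the listed abnormalities are not mutually exclusive.

```python
from scipy.stats import chi2_contingency

# Wrap disruption after conventional (48/120) vs. laparoscopic (24/132) primary surgery.
# Rows = approach of primary surgery, columns = (wrap disruption, no wrap disruption).
table = [[48, 120 - 48],
         [24, 132 - 24]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```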
Indications for Reoperations
Recurrent reflux and dysphagia were the most frequent indications for reoperations (Table 3). In 1,435 reoperations (31.3%), the indication for reoperation was not reported. Preoperative symptoms were assessed by questionnaire in 26 studies (32.1%).10,14,17,18,23–25,28,30,33,36,45,53,54,56,61–66,71,74,76,87,88 In most studies (93.8%), preoperative workup consisted of esophagogastroduodenoscopy, barium swallow, and/or esophageal pH monitoring.10–28,30–41,43–46,48–76,78,79,81–89
Type and Route of Reoperations
Total or partial fundoplication was the most frequently performed reoperation (Table 3), whereas the type of reoperation was not reported in 124 patients (2.7%). The laparoscopic approach was used in 1,666 reoperations (36.3%); 1,589 reoperations (34.7%) were performed by the conventional (open) abdominal route and 1,041 (22.7%) by thoracotomy. The approach of reoperation was not reported in the remaining 288 reoperations (6.3%). More than one reintervention was performed in 75 patients (1.7%). The esophagus was totally or partially resected during 125 reoperations (2.7%). The reasons to perform esophageal resection were severe esophagitis with or without Barrett metaplasia,15,25,59 peptic stricture of the esophagus,10,33,51,57,72,81 severely disturbed esophageal motility,26,44,57,81 or short esophagus.70,82 In 202 reoperations (4.4%), gastric resection was performed. Indications for this were alkaline reflux,10 dense adhesions on attempted refundoplication,33,59,86 or severe gastric paresis.25,81
Intra- and Postoperative Results
The different intra- and postoperative parameters were only reported in a subset of the original studies. Intraoperative complications were reported in 454 of 2,123 reoperations (21.4%) and were more frequent during laparoscopic than during open abdominal reoperations (150/770, 19.5% vs. 5/92, 5.4%). Laceration or perforation of the esophagus and/or stomach was the most common (Table 6). Postoperative complications were present after 546 of 3,491 reoperations (15.6%). Infectious, pulmonary, and cardiac complications were the most common postoperative complications (Table 6). Open abdominal reoperations were accompanied by more complications than laparoscopic reoperations (55/317, 17.4% vs. 98/642, 15.3%). Thirty-seven of 4,329 patients (0.9%) died intra- or postoperatively (Table 6). No mortality occurred in studies only reporting on laparoscopic reoperations, while the mortality rate was 1.3% in studies in which all reoperations were performed by a conventional abdominal approach.
Table 6. Intra- and Postoperative Results of Reoperations
Intraoperative complications (N = 2,123a): injury of esophagus and stomach 278 (13.1%); pneumothorax 73 (3.4%); hemorrhage 41 (1.9%); splenectomy 7 (0.3%); other 49 (2.3%); not reported 6 (0.3%)
Postoperative complications (N = 3,491a): pulmonary complication 125 (3.6%); wound infection 64 (1.8%); leakage from alimentary tract 52 (1.5%); urinary tract infection 12 (0.3%); other infectious complications 48 (1.4%); cardiac complications 31 (0.9%); hemorrhage 22 (0.6%); other 136 (3.9%); not reported 56 (1.6%)
Causes of mortality (N = 4,329a): infectious 11 (0.3%); pulmonary 7 (0.2%); cardiac 4 (0.1%); miscellaneous 10 (0.2%); not reported 5 (0.1%)
aTotal number of reoperations in which the intra- and postoperative complications and mortality rate were reported.
Mean duration of reoperation was 177.4 ± 10.3 min, mean intraoperative blood loss 205.5 ± 35.6 ml, and mean hospital stay 5.5 ± 0.5 days. Comparing results of laparoscopic reoperations with laparotomy regarding the preceding parameters was not possible due to the small number of well-documented studies in the laparotomy group. Reoperation was performed laparoscopically in 36.3% of all cases, with a conversion rate of 8.7%. Causes of conversion were dense adhesions (n = 57, 39.3%), severe intraoperative bleeding (n = 11, 7.6%), poor visualization (n = 3, 2.1%), and other (n = 15, 10.3%). In the remaining 59 cases (40.7%), the reason for conversion was not reported.
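Complication rates in this review are pooled as simple event counts over the reoperations in which each outcome was reported (for example, intraoperative complications in 454 of 2,123 reoperations, 21.4%). The review gives no confidence intervals; as a hedged addition, the sketch below computes a Wilson 95% interval for such a pooled proportion.

```python
from math import sqrt
from scipy.stats import norm

def wilson_ci(events, total, conf=0.95):
    """Pooled proportion with its Wilson score confidence interval."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p = events / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, centre - half, centre + half

# Intraoperative complications: 454 of 2,123 reoperations (21.4%)
rate, lo, hi = wilson_ci(454, 2123)
print(f"rate = {rate:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```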
Symptomatic Outcome after Reoperations

Symptomatic outcome after reoperation was determined in 79 studies (97.5%)8,10–18,20–28,30–89 and reported as successful in 81% of patients, although with different definitions of success (Table 7). Data were obtained by questionnaires in 29 studies (36.7%),8,10,11,16–18,20,22–24,27,28,30,34–37,42,45,46,48,49,54,55,61,69,71,80,84 by interview in 21 (26.6%),13,25,31,38,41,47,52,53,57,60,62,65–68,73,74,78,82,83,85 and the method of data collection was not reported in the remaining 29 studies (36.7%).12,14,15,21,26,32,33,39,40,43,44,50,51,56,58,59,63,64,70,72,75–77,79,81,86–89 The mean success rate in studies reporting only on laparoscopic reoperations (17 studies)11–13,23–25,28,31,35,39,41,48,50,53,61,70,85 was 84.2 ± 2.5%, compared with 84.6 ± 3.4% in studies in which all reoperations were performed by a conventional abdominal approach (ten studies).10,22,33,44,58,68,69,75,76,86 In patients in whom the reoperation was performed for symptoms only, 82.0 ± 10.7% had a successful symptomatic outcome,47,79 and the success rate was 81.0 ± 12.1% in patients with recurrent reflux documented by pH monitoring.10,12,56,89 Comparing the outcome of total and partial refundoplication, Awad et al.53 reported symptomatic success in 68% and 60% of patients, respectively. In two other studies,11,45 however, no relationship between the type of fundoplication and the symptomatic outcome was found.
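The pooled success rates above (for example, 84.2 ± 2.5% over 17 laparoscopic studies) are expressed as mean ± SEM of study-level success rates, in line with the data-analysis section. The sketch below shows that calculation; the per-study percentages are hypothetical placeholders, since the individual study values are not listed here, and an unweighted mean is assumed because the text does not state whether pooling was weighted by study size.

```python
from math import sqrt

# Hypothetical study-level symptomatic success rates (percent); placeholders
# only, used to illustrate the mean ± SEM pooling reported in the review.
study_success_rates = [78.0, 85.0, 90.0, 81.5, 88.0]

n = len(study_success_rates)
mean = sum(study_success_rates) / n
sample_variance = sum((x - mean) ** 2 for x in study_success_rates) / (n - 1)
sem = sqrt(sample_variance / n)  # standard error of the mean

print(f"pooled success rate: {mean:.1f} ± {sem:.1f}% (mean ± SEM, n = {n} studies)")
```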
Table 7. Symptomatic and Objective Outcome after Reoperation

Definition of successful symptomatic outcome in the individual studies (n = 79):
  Degree of symptoms at follow-up: 25 (31.6%)
  Patient satisfaction: 22 (27.8%)
    Satisfaction defined: 6 (27.3%)
    Satisfaction not defined: 16 (72.7%)
  Visick grading system: 7 (8.9%)
  Visick grading system combined with patient satisfaction: 1 (1.3%)
  Scores calculated from specific quality of life questionnaires: 5 (6.3%)
  Miscellaneous: 5 (6.3%)
  Not reported: 14 (17.7%)

Symptomatic outcome vs. objective outcome:
  Patients available at follow-up: 3,338 (74.0%) vs. 581 (12.9%)
  Duration of follow-up (months): 34.2 ± 2.7 vs. 21.8 ± 4.7
  Patients with successful outcome: 2,706 (81.1%) vs. 455 (78.3%)

Values are given as mean ± SEM unless otherwise stated.

Objective Outcome after Reoperations

Objective outcome was reported in 696 patients (15.4%) in 24 studies (29.6%); in seven of these studies, however, no definition of success17,18,20 or no number of successful cases14,17,18,20,28,49,87 was reported.
In the remaining 17 studies, successful objective outcome was defined as normal acid exposure during pH monitoring in 11 studies,8,15,19,23,25,36,38,51,57,58,88 absence of esophagitis in four,10,54,59,76 a combination of both in one,75 and absence of reflux on radiologic imaging in another.65 Across these 17 studies, 78% of patients had a successful objective outcome (Table 7). The mean success rate of laparoscopic reoperation (four studies19,23,25,88) seemed higher than that of a conventional abdominal approach (four other studies10,58,75,76): 85.8 ± 5.6% vs. 78.0 ± 10.1%.
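As a reading aid for Table 7: the overall success figures are patient-level proportions (successful patients over patients available at follow-up), and the follow-up percentages appear to be taken over the 4,509 included patients from Table 2; this is one reason they can differ slightly from the study-level mean rates quoted above. A short sketch of that arithmetic:

```python
# Patient-level outcome figures from Table 7 (denominator for availability
# assumed to be the 4,509 included patients from Table 2).
total_patients = 4_509
follow_up = {
    # outcome: (patients available at follow-up, patients with successful outcome)
    "symptomatic": (3_338, 2_706),
    "objective": (581, 455),
}

for outcome, (available, successful) in follow_up.items():
    print(
        f"{outcome}: available {available}/{total_patients} = {available / total_patients:.1%}, "
        f"successful {successful}/{available} = {successful / available:.1%}"
    )
# symptomatic: available 74.0%, successful 81.1%
# objective:   available 12.9%, successful 78.3%
```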
Conclusion
Redo antireflux surgery has higher morbidity and mortality rates than primary antireflux surgery, and its symptomatic outcome is less satisfactory. Consistent reporting of symptomatic and objective outcomes is needed. Data on objective results after redo antireflux surgery are scarce, and a plea can be made to subject all primary cases to full-scale evaluation before and after antireflux surgery, although evidence to support this suggestion, such as adequate cost-effectiveness studies, is lacking. The relatively disappointing results of redo antireflux surgery with regard to morbidity, mortality, and symptomatic outcome support the view that redo surgery belongs in tertiary referral centers, and these centers should continue their efforts to collect prospective subjective and objective data.
[ "Material and Methods", "Search Strategy", "Selection of Studies", "Analysis of Data from Selected Studies", "Data Analysis", "General Results", "Primary Antireflux Procedures", "Causes of Failure of Primary Antireflux Surgery", "Indications for Reoperations", "Type and Route of Reoperations", "Intra- and Postoperative Results", "Symptomatic Outcome after Reoperations", "Objective Outcome after Reoperations" ]
[ " Search Strategy A literature search was performed in three electronic databases, MEDLINE using the Pubmed search engine, EMBASE, and the Cochrane Central Register of Controlled Trials. The databases were searched for all years, up to November 2008. Search terms were entered to identify the relevant studies. Separate search terms were entered for the intervention, i.e., surgical reintervention, and the disease, i.e., GERD. For the disease, dysphagia was also used because this is a frequent indication for reoperation. For both the intervention and the disease, headwords in the thesaurus of the three databases [Medical Subject Heading (MeSH) Thesaurus in Pubmed and the Cochrane library and the Emtree Thesaurus in EMBASE] and free text words in title and abstract were used as search terms. The headwords from the thesaurus and the different synonyms for free text words were coupled by the Boolean operator “OR”. The combination of search terms for the intervention and disease were subsequently coupled by the Boolean operator “AND”. The free text words and headwords identified in the thesauruses are listed in Table 1.\nTable 1Search Terms used in this ReviewInterventionDiseaseFree text words in title and abstract of MEDLINE, EMBASE, and the Cochrane LibraryRefundoplication(s)Gastro esophageal refluxRedoGastro esophageal reflux disease(s)Redo surgeryGastro esophageal reflux disorder(s)Redo surgical procedureGastro esophageal refluxRedo Nissen (fundoplication)Gastro esophageal reflux disease(s)Redo antireflux procedureGastoesophageal reflux disorder(s)Redo antireflux surgeryGastroesophageal refluxReoperative antireflux surgeryGastroesophageal reflux disease(s)Revisional surgeryGastroesophageal reflux disorder(s)Reoperation(s)GERDReintervention(s)GORDSurgical revision(s)Reflux disease(s)Second look surgeryEsophagitisOesophagitisDysphagiaHeadwords in the Medical Subject Head (MeSH) Thesaurus of Pubmed and the Cochrane libraryReoperationDeglutition disordersSecond-look surgeryEsophagitisHeadwords in the Emtree Thesaurus of EMBASEReoperationStomach function disorderSecond look surgeryDysphagiaEsophagitis\n\nSearch Terms used in this Review\nA literature search was performed in three electronic databases, MEDLINE using the Pubmed search engine, EMBASE, and the Cochrane Central Register of Controlled Trials. The databases were searched for all years, up to November 2008. Search terms were entered to identify the relevant studies. Separate search terms were entered for the intervention, i.e., surgical reintervention, and the disease, i.e., GERD. For the disease, dysphagia was also used because this is a frequent indication for reoperation. For both the intervention and the disease, headwords in the thesaurus of the three databases [Medical Subject Heading (MeSH) Thesaurus in Pubmed and the Cochrane library and the Emtree Thesaurus in EMBASE] and free text words in title and abstract were used as search terms. The headwords from the thesaurus and the different synonyms for free text words were coupled by the Boolean operator “OR”. The combination of search terms for the intervention and disease were subsequently coupled by the Boolean operator “AND”. 
The free text words and headwords identified in the thesauruses are listed in Table 1.\nTable 1Search Terms used in this ReviewInterventionDiseaseFree text words in title and abstract of MEDLINE, EMBASE, and the Cochrane LibraryRefundoplication(s)Gastro esophageal refluxRedoGastro esophageal reflux disease(s)Redo surgeryGastro esophageal reflux disorder(s)Redo surgical procedureGastro esophageal refluxRedo Nissen (fundoplication)Gastro esophageal reflux disease(s)Redo antireflux procedureGastoesophageal reflux disorder(s)Redo antireflux surgeryGastroesophageal refluxReoperative antireflux surgeryGastroesophageal reflux disease(s)Revisional surgeryGastroesophageal reflux disorder(s)Reoperation(s)GERDReintervention(s)GORDSurgical revision(s)Reflux disease(s)Second look surgeryEsophagitisOesophagitisDysphagiaHeadwords in the Medical Subject Head (MeSH) Thesaurus of Pubmed and the Cochrane libraryReoperationDeglutition disordersSecond-look surgeryEsophagitisHeadwords in the Emtree Thesaurus of EMBASEReoperationStomach function disorderSecond look surgeryDysphagiaEsophagitis\n\nSearch Terms used in this Review\n Selection of Studies The studies identified by the search strategy were independently selected by two reviewers (E.F. and W.D.) based on title, abstract, and full text. The literature was searched for randomized controlled trials, cohort studies, and case–control studies on the feasibility and/or outcome of surgical reinterventions. Studies in children, on other indications for primary surgery than GERD, conservative treatment of symptoms following primary antireflux surgery, surgical reintervention within 30 days after primary surgery, and patients cohorts with less than ten patients were not included. Only articles in English were included. Additionally, references of all selected publications were reviewed for other relevant studies. In case of a difference in opinion between the two reviewers about in- or exclusion of a study, the opinion of a third reviewer was decisive.\nThe studies identified by the search strategy were independently selected by two reviewers (E.F. and W.D.) based on title, abstract, and full text. The literature was searched for randomized controlled trials, cohort studies, and case–control studies on the feasibility and/or outcome of surgical reinterventions. Studies in children, on other indications for primary surgery than GERD, conservative treatment of symptoms following primary antireflux surgery, surgical reintervention within 30 days after primary surgery, and patients cohorts with less than ten patients were not included. Only articles in English were included. Additionally, references of all selected publications were reviewed for other relevant studies. In case of a difference in opinion between the two reviewers about in- or exclusion of a study, the opinion of a third reviewer was decisive.\n Analysis of Data from Selected Studies Data of the selected studies were independently acquired by two reviewers (E.F. and W.D.). Study design, time period, number of patients, sex ratio, and mean age were retrieved from the studies. Based on the study design, each study was qualified by a level of evidence according to the Oxford Centre for Evidence Based Medicine Levels of Evidence.9 Type and approach of primary antireflux interventions and reoperations, mean period between both interventions, causes of failure of primary surgery and perioperative information, i.e. 
intra- and postoperative complications, mortality, number and causes of conversions in case of laparoscopic reoperations, mean intraoperative blood loss, duration of reoperations, and hospital stay were also extracted from the included studies. Completeness of follow-up, number of patients available, mean duration of follow-up, method of obtaining outcome at follow-up, and the definition and percentage of patients with successful symptomatic and objective outcome were extracted from all studies.\nData of the selected studies were independently acquired by two reviewers (E.F. and W.D.). Study design, time period, number of patients, sex ratio, and mean age were retrieved from the studies. Based on the study design, each study was qualified by a level of evidence according to the Oxford Centre for Evidence Based Medicine Levels of Evidence.9 Type and approach of primary antireflux interventions and reoperations, mean period between both interventions, causes of failure of primary surgery and perioperative information, i.e. intra- and postoperative complications, mortality, number and causes of conversions in case of laparoscopic reoperations, mean intraoperative blood loss, duration of reoperations, and hospital stay were also extracted from the included studies. Completeness of follow-up, number of patients available, mean duration of follow-up, method of obtaining outcome at follow-up, and the definition and percentage of patients with successful symptomatic and objective outcome were extracted from all studies.\n Data Analysis Data were analysed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Values were expressed as mean ± SEM. Statistical analysis was not performed owing to the lack of statistically appropriate data from the included studies.\nData were analysed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Values were expressed as mean ± SEM. Statistical analysis was not performed owing to the lack of statistically appropriate data from the included studies.", "A literature search was performed in three electronic databases, MEDLINE using the Pubmed search engine, EMBASE, and the Cochrane Central Register of Controlled Trials. The databases were searched for all years, up to November 2008. Search terms were entered to identify the relevant studies. Separate search terms were entered for the intervention, i.e., surgical reintervention, and the disease, i.e., GERD. For the disease, dysphagia was also used because this is a frequent indication for reoperation. For both the intervention and the disease, headwords in the thesaurus of the three databases [Medical Subject Heading (MeSH) Thesaurus in Pubmed and the Cochrane library and the Emtree Thesaurus in EMBASE] and free text words in title and abstract were used as search terms. The headwords from the thesaurus and the different synonyms for free text words were coupled by the Boolean operator “OR”. The combination of search terms for the intervention and disease were subsequently coupled by the Boolean operator “AND”. 
The free text words and headwords identified in the thesauruses are listed in Table 1.\nTable 1Search Terms used in this ReviewInterventionDiseaseFree text words in title and abstract of MEDLINE, EMBASE, and the Cochrane LibraryRefundoplication(s)Gastro esophageal refluxRedoGastro esophageal reflux disease(s)Redo surgeryGastro esophageal reflux disorder(s)Redo surgical procedureGastro esophageal refluxRedo Nissen (fundoplication)Gastro esophageal reflux disease(s)Redo antireflux procedureGastoesophageal reflux disorder(s)Redo antireflux surgeryGastroesophageal refluxReoperative antireflux surgeryGastroesophageal reflux disease(s)Revisional surgeryGastroesophageal reflux disorder(s)Reoperation(s)GERDReintervention(s)GORDSurgical revision(s)Reflux disease(s)Second look surgeryEsophagitisOesophagitisDysphagiaHeadwords in the Medical Subject Head (MeSH) Thesaurus of Pubmed and the Cochrane libraryReoperationDeglutition disordersSecond-look surgeryEsophagitisHeadwords in the Emtree Thesaurus of EMBASEReoperationStomach function disorderSecond look surgeryDysphagiaEsophagitis\n\nSearch Terms used in this Review", "The studies identified by the search strategy were independently selected by two reviewers (E.F. and W.D.) based on title, abstract, and full text. The literature was searched for randomized controlled trials, cohort studies, and case–control studies on the feasibility and/or outcome of surgical reinterventions. Studies in children, on other indications for primary surgery than GERD, conservative treatment of symptoms following primary antireflux surgery, surgical reintervention within 30 days after primary surgery, and patients cohorts with less than ten patients were not included. Only articles in English were included. Additionally, references of all selected publications were reviewed for other relevant studies. In case of a difference in opinion between the two reviewers about in- or exclusion of a study, the opinion of a third reviewer was decisive.", "Data of the selected studies were independently acquired by two reviewers (E.F. and W.D.). Study design, time period, number of patients, sex ratio, and mean age were retrieved from the studies. Based on the study design, each study was qualified by a level of evidence according to the Oxford Centre for Evidence Based Medicine Levels of Evidence.9 Type and approach of primary antireflux interventions and reoperations, mean period between both interventions, causes of failure of primary surgery and perioperative information, i.e. intra- and postoperative complications, mortality, number and causes of conversions in case of laparoscopic reoperations, mean intraoperative blood loss, duration of reoperations, and hospital stay were also extracted from the included studies. Completeness of follow-up, number of patients available, mean duration of follow-up, method of obtaining outcome at follow-up, and the definition and percentage of patients with successful symptomatic and objective outcome were extracted from all studies.", "Data were analysed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Values were expressed as mean ± SEM. Statistical analysis was not performed owing to the lack of statistically appropriate data from the included studies.", "One thousand six hundred twenty-five articles were eligible for further selection after removing duplicate hits, and finally, 73 articles met the inclusion criteria (Fig. 1). The references of these articles yielded eight more articles for inclusion. 
These articles had not been identified with the initial search strategy because of absence of abstracts in the databases or atypical description for the intervention or disease. Eventually, 81 articles were eligible for inclusion in this study. According to the Oxford Centre for Evidence Based Medicine Levels of Evidence, 27 studies had a level of evidence IIb (33.3%)8, 10–35, two level of evidence IIIb (2.5%)36, 37, and 15 level of evidence IV (18.5%)38–52. The remaining 37 studies (45.7%) were cohort studies, but a level of evidence could not be adjudged owing to unknown study design53–89. Baseline characteristics extracted from the individual studies are shown in Table 2.\nFigure 1Results of search strategy and selection of studies.\nTable 2Baseline Characteristics Extracted from the Included StudiesNumber of patients (n)4,509 Male1,524 (33.8%) Female1,762 (39.1%) Sex not reported1,223 (27.1%)Age (years)51.3 ± 0.8Number of reoperations (n)4,584Study period (months)10.8 ± 0.7Duration between primary surgery and reoperation (months)38.3 ± 4.1Study design of the individual studies Prospective cohort study27 (33.3%) Retrospective cohort study14 (17.3%) Prospective case–control study2 (2.5%) Retrospective case–control study1 (1.2%) Not reported37 (45.7%)\n\nResults of search strategy and selection of studies.\nBaseline Characteristics Extracted from the Included Studies", "Total fundoplication performed by laparoscopy, laparotomy, or thoracotomy was the most frequently reported primary antireflux procedure followed by partial fundoplication (Table 3). The type of primary antireflux procedure was not reported in almost one third, and 241 patients (5.3%) underwent more than one previous operation before inclusion in the original studies.\nTable 3Type and Indication of Primary Antireflux Procedures and Reoperations Primary procedures (n = 4,750)Reoperations (n = 4,584)Indication of operationsRecurrent reflux–1,912 (41.7%)Dysphagia–760 (16.6%)Recurrent reflux and dysphagia–184 (4.0%)Anatomical abnormality–114 (2.5%)Gasbloat syndrome–31 (0.7%)Miscellaneous–148 (3.2%)Not reported–1,435 (31.3%)Type of operationsTotal fundoplication2,162 (45.5%)2,397 (52.3%)Partial fundoplication471 (9.9%)999 (21.8%)Resection surgery–327 (7.1%)Miscellaneous procedures657 (13.8%)737 (16.1%)Not reported1,460 (30.7%)124 (2.7%)\n\nType and Indication of Primary Antireflux Procedures and Reoperations", "Causes of failure of the previous antireflux procedure were reported on 3,175 reoperations in total. Intrathoracic wrap migration, total or partial disruption of the wrap, and telescoping were the most common anatomical abnormalities encountered (Table 4). Esophageal motility disorder or erroneous diagnosis, i.e., another primary disease than GERD, were the causes of failure of the previous operation in 62 patients (2.0%). 
In 194 reoperations (6.1%), no cause of failure could be identified.\nTable 4Causes of Failure of Previous Antireflux Procedure \nn = 3,175Anatomical abnormalitiesIntrathoracic wrap migration885 (27.9%)Wrap disruption722 (22.7%)Telescoping448 (14.1%)Para-esophageal hiatal herniation195 (6.1%)Hiatal disruption167 (5.3%)Tight wrap168 (5.3%)Stricture60 (1.9%)Wrong primary diagnosisAchalasia37 (1.2%)Esophageal spasms7 (0.2%)Sclerodermia4 (0.1%)Esophageal carcinoma1 (0.03%)Disturbed esophageal motility13 (0.4%)No cause for failure identified194 (6.1%)Miscellaneous347 (10.9%)Not reported120 (3.8%)Percentages exceed 100% since more than one cause of failure was found during several reoperations\n\nCauses of Failure of Previous Antireflux Procedure\nPercentages exceed 100% since more than one cause of failure was found during several reoperations\nFrom six studies, it was shown that wrap disruption and telescoping were more frequent after conventional primary surgery, whereas disruption of hiatal repair and a tight wrap were more frequent after laparoscopic primary repair (Table 5).18,49,61,67,84,85 Intrathoracic wrap migration was reported by Serafina et al.85 to be more frequent after conventional primary procedures (13/17, 76.5% vs. 5/11, 45.5%), whereas Heniford et al.67 showed that this was more frequent after laparoscopic primary repair (16/22, 72.7% vs. 13/33, 39.4%). In the study by Salminen et al.,84 intrathoracic wrap migration was equal after conventional and laparoscopic primary surgery.\nTable 5Anatomical Abnormalities Depending on the Approach of Primary Surgery and the Indication of ReoperationAnatomical abnormalities depending on the approach of primary surgeryConventional (abdominal) approach (n = 120)Laparoscopic approach (n = 132)Wrap disruption48 (40.0%)24 (18.2%)Telescoping32 (26.6%)10 (7.6%)Hiatal disruption23 (19.2%)42 (31.8%)Tight wrap2 (1.7%)24 (18.2%)Miscellaneous36 (30.0%)42 (31.8%)Anatomical abnormalities depending on the indication of reoperationRecurrent reflux (n = 234)Dysphagia (n = 118)Intrathoracic wrap migration104 (44.4%)18 (15.3%)Wrap disruption109 (46.6%)12 (10.2%)No cause of failure34 (14.5%)51 (43.2%)Miscellaneous64 (27.4%)54 (45.8%)Percentages exceed 100% since more than one cause of failure was found during several reoperations\n\nAnatomical Abnormalities Depending on the Approach of Primary Surgery and the Indication of Reoperation\nPercentages exceed 100% since more than one cause of failure was found during several reoperations\nIn five other studies,8,11,12,31,72 it was shown that intrathoracic wrap migration and wrap disruption were more frequent in the case of recurrent reflux, whereas in the case of dysphagia, no cause of failure could be demonstrated more frequently (Table 5).", "Recurrent reflux and dysphagia were the most frequent indications for reoperations (Table 3). In 1,435 reoperations (31.3%), the indication for reoperation was not reported. Preoperative symptoms were assessed by questionnaire in 26 studies (32.1%).10,14,17,18,23–25,28,30,33,36,45,53,54,56,61–66,71,74,76,87,88 In most studies (93.8%), preoperative workup consisted of esophagogastroduodenoscopy, barium swallow, and/or esophageal pH monitoring.10–28,30–41,43–46,48–76,78,79,81–89\n", "Total or partial fundoplication was the most frequently performed reoperation (Table 3), whereas the type of reoperation was not reported in 124 patients (2.7%). 
The laparoscopic approach was used in 1,666 reoperations (36.3%); 1,589 reoperations (34.7%) were performed by the conventional (open) abdominal route and 1,041 (22.7%) by thoracotomy. The approach of reoperation was not reported in the remaining 288 reoperations (6.3%). More than one reintervention was performed in 75 patients (1.7%).\nThe esophagus was totally or partially resected during 125 reoperations (2.7%). The reasons to perform esophageal resection were severe esophagitis with or without Barrett metaplasia,15,25,59 peptic stricture of the esophagus,10,33,51,57,72,81 severely disturbed esophageal motility,26,44,57,81 or short esophagus.70,82 In 202 reoperations (4.4%), gastric resection was performed. Indications for this were alkaline reflux,10 dense adhesions on attempted refundoplication,33,59,86 or severe gastric paresis.25,81\n", "The different intra- and postoperative parameters were only reported in a subset of the original studies. Intraoperative complications were reported in 454 of 2,123 reoperations (21.4%) and were more frequent during laparoscopic than during open abdominal reoperations (150/770, 19.5% vs. 5/92, 5.4%). Laceration or perforation of the esophagus and/or stomach was the most common (Table 6). Postoperative complications were present after 546 of 3,491 reoperations (15.6%). Infectious, pulmonary, and cardiac complications were the most common postoperative complications (Table 6). Open abdominal reoperations were accompanied with more complications than laparoscopic reoperations (55/317, 17.4% vs. 98/642, 15.3%). Thirty-seven of 4,329 patients (0.9%) died intra- or postoperatively (Table 6). No mortality occurred in studies only reporting on laparoscopic reoperations, while the mortality rate was 1.3% in studies in which all reoperations were performed by a conventional abdominal approach.\nTable 6Intra- and Postoperative Results of ReoperationsIntraoperative complications\nN = 2,123a\nInjury of esophagus and stomach (n)278 (13.1%)Pneumothorax (n)73 (3.4%)Hemorrhage (n)41 (1.9%)Splenectomy (n)7 (0.3%)Other (n)49 (2.3%)Not reported (n)6 (0.3%)Postoperative complications\nN = 3491a\nPulmonary complication (n)125 (3.6%)Wound infection (n)64 (1.8%)Leakage from alimentary tract (n)52 (1.5%)Urinary tract infection (n)12 (0.3%)Other infectious complications (n)48 (1.4%)Cardiac complications (n)31 (0.9%)Hemorrhage (n)22 (0.6%)Other (n)136 (3.9%)Not reported (n)56 (1.6%)Causes of mortality\nN = 4,329a\nInfectious (n)11 (0.3%)Pulmonary (n)7 (0.2%)Cardiac (n)4 (0.1%)Miscellaneous (n)10 (0.2%)Not reported (n)5 (0.1%)\naTotal number of reoperations in which the intra- and postoperative complications and mortality rate were reported\n\nIntra- and Postoperative Results of Reoperations\n\naTotal number of reoperations in which the intra- and postoperative complications and mortality rate were reported\nMean duration of reoperation was 177.4 ± 10.3 min, mean intraoperative blood loss 205.5 ± 35.6 ml, and mean hospital stay 5.5 ± 0.5 days. Comparing results of laparoscopic reoperations with laparotomy regarding the preceding parameters was not possible due to the small number of well-documented studies in the laparotomy group.\nReoperation was performed laparoscopically in 36.3% of all cases with a conversion rate of 8.7%. Causes of conversion were dense adhesions (n = 57, 39.3%), severe intraoperative bleeding (n = 11, 7.6%), poor visualization (n = 3, 2.1%), and other (n = 15, 10.3%). 
In the remaining 59 cases (40.7%), the reason for conversion was not reported.", "Symptomatic outcome after reoperation was determined in 79 studies (97.5%)8,10–18,20–28,30–89 and reported as successful in 81% of patients, although with different definitions of success (Table 7). Data were obtained by questionnaires in 29 studies (36.7%),8,10,11,16–18,20,22–24,27,28,30,34–37,42,45,46,48,49,54,55,61,69,71,80,84 by interview in 21 (26.6%),13,25,31,38,41,47,52,53,57,60,62,65–68,73,74,78,82,83,85 and this was not reported in the remaining 29 studies (36.7%).12,14,15,21,26,32,33,39,40,43,44,50,51,56,58,59,63,64,70,72,75–77,79,81,86–89 The mean success rate in studies only reporting on laparoscopic reoperations (17 studies)11–13,23–25,28,31,35,39,41,48,50,53,61,70,85 was 84.2 ± 2.5% and 84.6 ± 3.4% in studies in which all reoperations were performed by a conventional abdominal approach (ten studies).10,22,33,44,58,68,69,75,76,86 In patients in whom the reoperation was performed for symptoms only, 82.0 ± 10.7% had successful symptomatic outcome,47,79 and the success rate was 81.0 ± 12.1% in patients with recurrent reflux documented by pH monitoring.10,12,56,89 Comparing the outcome of total and partial refundoplication, Awad et al.53 reported symptomatic success in 68% and 60% of patients, respectively. In two other studies,11,45, however, no relationship between the type of fundoplication and the symptomatic outcome was found.\nTable 7Symptomatic and Objective Outcome after ReoperationDefinition of successful symptomatic outcome in the individual studiesSymptomatic outcomeObjective outcome\nn = 79 Degree of symptoms at follow-up25 (31.6%)– Patient satisfaction22 (27.8%)–  Satisfaction defined 6 (27.3%)–  Satisfaction not defined 16 (72.7%)– Visick grading system7 (8.9%)– Visick grading system combined with patient satisfaction1 (1.3%)– Scores calculated from specific quality of life questionnaires5 (6.3%)– Miscellaneous5 (6.3%)– Not reported14 (17.7%)–Patients available at follow-up3 338 (74.0%)581 (12.9%)Duration of follow-up (months)34.2 ± 2.721.8 ± 4.7Patients with successful outcome2 706 (81.1%)455 (78.3%)Values are given as mean ± SEM unless otherwise stated\n\nSymptomatic and Objective Outcome after Reoperation\nValues are given as mean ± SEM unless otherwise stated", "Objective outcome was reported in 696 patients (15.4%) in 24 studies (29.6%), without a definition of success17,18,20 or the number of successful cases,14,17,18,20,28,49,87however, in seven studies. In the remaining 17 studies, successful objective outcome was defined as normal acid exposure during pH monitoring in 11,8,15,19,23,25,36,38,51,57,58,88 absence of esophagitis in four,10,54,59,76 combination of these both in one,75 and the absence of reflux during radiologic imaging in another one.65 In these 17 studies, 78% had a successful objective outcome (Table 7). The mean success rate of laparoscopic reoperation (four studies19,23,25,88) seemed higher than in the case of a conventional abdominal approach (four other studies10,58,75,76), 85.8 ± 5.6% and 78.0 ± 10.1%, respectively." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Material and Methods", "Search Strategy", "Selection of Studies", "Analysis of Data from Selected Studies", "Data Analysis", "Results", "General Results", "Primary Antireflux Procedures", "Causes of Failure of Primary Antireflux Surgery", "Indications for Reoperations", "Type and Route of Reoperations", "Intra- and Postoperative Results", "Symptomatic Outcome after Reoperations", "Objective Outcome after Reoperations", "Discussion", "Conclusion" ]
[ "Antireflux surgery for refractory gastroesophageal reflux disease (GERD) has satisfactory outcome in 85–90% of patients.1–6 In the remaining 10–15%, reflux symptoms persist, recur, or complications occur. Dysphagia is a frequent complication of fundoplication.7 The indications for reoperation are far from straightforward, varying from severe recurrent symptoms with a more than adequate anatomical result to recurrent abnormal anatomy without any symptoms at all. Studies on reoperations also show similar wide variations with a full range of abnormal anatomy, symptoms and objective failure documented by esophageal manometry, and pH monitoring.\nIn our recently published study on redo antireflux surgery, morbidity and mortality were higher than after primary antireflux surgery, with a symptomatic and objective success rate of 70% which is obviously inferior to the outcome of primary surgery.4,8 Several other studies have been published describing causes of failure of conventional and laparoscopic antireflux surgery. Most studies have included only a small group of patients, so an adequate impression on the outcome of reoperation is hard to extract from such studies.\nThis study aims to summarize the currently available literature on surgical reintervention after primary antireflux surgery focusing on morbidity, mortality, and outcome in order to get a more complete overview of the results of redo antireflux surgery and to give guidelines about how patients should be informed on their chances of success.", " Search Strategy A literature search was performed in three electronic databases, MEDLINE using the Pubmed search engine, EMBASE, and the Cochrane Central Register of Controlled Trials. The databases were searched for all years, up to November 2008. Search terms were entered to identify the relevant studies. Separate search terms were entered for the intervention, i.e., surgical reintervention, and the disease, i.e., GERD. For the disease, dysphagia was also used because this is a frequent indication for reoperation. For both the intervention and the disease, headwords in the thesaurus of the three databases [Medical Subject Heading (MeSH) Thesaurus in Pubmed and the Cochrane library and the Emtree Thesaurus in EMBASE] and free text words in title and abstract were used as search terms. The headwords from the thesaurus and the different synonyms for free text words were coupled by the Boolean operator “OR”. The combination of search terms for the intervention and disease were subsequently coupled by the Boolean operator “AND”. 
The free text words and headwords identified in the thesauruses are listed in Table 1.\nTable 1Search Terms used in this ReviewInterventionDiseaseFree text words in title and abstract of MEDLINE, EMBASE, and the Cochrane LibraryRefundoplication(s)Gastro esophageal refluxRedoGastro esophageal reflux disease(s)Redo surgeryGastro esophageal reflux disorder(s)Redo surgical procedureGastro esophageal refluxRedo Nissen (fundoplication)Gastro esophageal reflux disease(s)Redo antireflux procedureGastoesophageal reflux disorder(s)Redo antireflux surgeryGastroesophageal refluxReoperative antireflux surgeryGastroesophageal reflux disease(s)Revisional surgeryGastroesophageal reflux disorder(s)Reoperation(s)GERDReintervention(s)GORDSurgical revision(s)Reflux disease(s)Second look surgeryEsophagitisOesophagitisDysphagiaHeadwords in the Medical Subject Head (MeSH) Thesaurus of Pubmed and the Cochrane libraryReoperationDeglutition disordersSecond-look surgeryEsophagitisHeadwords in the Emtree Thesaurus of EMBASEReoperationStomach function disorderSecond look surgeryDysphagiaEsophagitis\n\nSearch Terms used in this Review\nA literature search was performed in three electronic databases, MEDLINE using the Pubmed search engine, EMBASE, and the Cochrane Central Register of Controlled Trials. The databases were searched for all years, up to November 2008. Search terms were entered to identify the relevant studies. Separate search terms were entered for the intervention, i.e., surgical reintervention, and the disease, i.e., GERD. For the disease, dysphagia was also used because this is a frequent indication for reoperation. For both the intervention and the disease, headwords in the thesaurus of the three databases [Medical Subject Heading (MeSH) Thesaurus in Pubmed and the Cochrane library and the Emtree Thesaurus in EMBASE] and free text words in title and abstract were used as search terms. The headwords from the thesaurus and the different synonyms for free text words were coupled by the Boolean operator “OR”. The combination of search terms for the intervention and disease were subsequently coupled by the Boolean operator “AND”. The free text words and headwords identified in the thesauruses are listed in Table 1.\nTable 1Search Terms used in this ReviewInterventionDiseaseFree text words in title and abstract of MEDLINE, EMBASE, and the Cochrane LibraryRefundoplication(s)Gastro esophageal refluxRedoGastro esophageal reflux disease(s)Redo surgeryGastro esophageal reflux disorder(s)Redo surgical procedureGastro esophageal refluxRedo Nissen (fundoplication)Gastro esophageal reflux disease(s)Redo antireflux procedureGastoesophageal reflux disorder(s)Redo antireflux surgeryGastroesophageal refluxReoperative antireflux surgeryGastroesophageal reflux disease(s)Revisional surgeryGastroesophageal reflux disorder(s)Reoperation(s)GERDReintervention(s)GORDSurgical revision(s)Reflux disease(s)Second look surgeryEsophagitisOesophagitisDysphagiaHeadwords in the Medical Subject Head (MeSH) Thesaurus of Pubmed and the Cochrane libraryReoperationDeglutition disordersSecond-look surgeryEsophagitisHeadwords in the Emtree Thesaurus of EMBASEReoperationStomach function disorderSecond look surgeryDysphagiaEsophagitis\n\nSearch Terms used in this Review\n Selection of Studies The studies identified by the search strategy were independently selected by two reviewers (E.F. and W.D.) based on title, abstract, and full text. 
The literature was searched for randomized controlled trials, cohort studies, and case–control studies on the feasibility and/or outcome of surgical reinterventions. Studies in children, on other indications for primary surgery than GERD, conservative treatment of symptoms following primary antireflux surgery, surgical reintervention within 30 days after primary surgery, and patients cohorts with less than ten patients were not included. Only articles in English were included. Additionally, references of all selected publications were reviewed for other relevant studies. In case of a difference in opinion between the two reviewers about in- or exclusion of a study, the opinion of a third reviewer was decisive.\nThe studies identified by the search strategy were independently selected by two reviewers (E.F. and W.D.) based on title, abstract, and full text. The literature was searched for randomized controlled trials, cohort studies, and case–control studies on the feasibility and/or outcome of surgical reinterventions. Studies in children, on other indications for primary surgery than GERD, conservative treatment of symptoms following primary antireflux surgery, surgical reintervention within 30 days after primary surgery, and patients cohorts with less than ten patients were not included. Only articles in English were included. Additionally, references of all selected publications were reviewed for other relevant studies. In case of a difference in opinion between the two reviewers about in- or exclusion of a study, the opinion of a third reviewer was decisive.\n Analysis of Data from Selected Studies Data of the selected studies were independently acquired by two reviewers (E.F. and W.D.). Study design, time period, number of patients, sex ratio, and mean age were retrieved from the studies. Based on the study design, each study was qualified by a level of evidence according to the Oxford Centre for Evidence Based Medicine Levels of Evidence.9 Type and approach of primary antireflux interventions and reoperations, mean period between both interventions, causes of failure of primary surgery and perioperative information, i.e. intra- and postoperative complications, mortality, number and causes of conversions in case of laparoscopic reoperations, mean intraoperative blood loss, duration of reoperations, and hospital stay were also extracted from the included studies. Completeness of follow-up, number of patients available, mean duration of follow-up, method of obtaining outcome at follow-up, and the definition and percentage of patients with successful symptomatic and objective outcome were extracted from all studies.\nData of the selected studies were independently acquired by two reviewers (E.F. and W.D.). Study design, time period, number of patients, sex ratio, and mean age were retrieved from the studies. Based on the study design, each study was qualified by a level of evidence according to the Oxford Centre for Evidence Based Medicine Levels of Evidence.9 Type and approach of primary antireflux interventions and reoperations, mean period between both interventions, causes of failure of primary surgery and perioperative information, i.e. intra- and postoperative complications, mortality, number and causes of conversions in case of laparoscopic reoperations, mean intraoperative blood loss, duration of reoperations, and hospital stay were also extracted from the included studies. 
Completeness of follow-up, number of patients available, mean duration of follow-up, method of obtaining outcome at follow-up, and the definition and percentage of patients with successful symptomatic and objective outcome were extracted from all studies.\n Data Analysis Data were analysed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Values were expressed as mean ± SEM. Statistical analysis was not performed owing to the lack of statistically appropriate data from the included studies.\nData were analysed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Values were expressed as mean ± SEM. Statistical analysis was not performed owing to the lack of statistically appropriate data from the included studies.", "A literature search was performed in three electronic databases, MEDLINE using the Pubmed search engine, EMBASE, and the Cochrane Central Register of Controlled Trials. The databases were searched for all years, up to November 2008. Search terms were entered to identify the relevant studies. Separate search terms were entered for the intervention, i.e., surgical reintervention, and the disease, i.e., GERD. For the disease, dysphagia was also used because this is a frequent indication for reoperation. For both the intervention and the disease, headwords in the thesaurus of the three databases [Medical Subject Heading (MeSH) Thesaurus in Pubmed and the Cochrane library and the Emtree Thesaurus in EMBASE] and free text words in title and abstract were used as search terms. The headwords from the thesaurus and the different synonyms for free text words were coupled by the Boolean operator “OR”. The combination of search terms for the intervention and disease were subsequently coupled by the Boolean operator “AND”. The free text words and headwords identified in the thesauruses are listed in Table 1.\nTable 1Search Terms used in this ReviewInterventionDiseaseFree text words in title and abstract of MEDLINE, EMBASE, and the Cochrane LibraryRefundoplication(s)Gastro esophageal refluxRedoGastro esophageal reflux disease(s)Redo surgeryGastro esophageal reflux disorder(s)Redo surgical procedureGastro esophageal refluxRedo Nissen (fundoplication)Gastro esophageal reflux disease(s)Redo antireflux procedureGastoesophageal reflux disorder(s)Redo antireflux surgeryGastroesophageal refluxReoperative antireflux surgeryGastroesophageal reflux disease(s)Revisional surgeryGastroesophageal reflux disorder(s)Reoperation(s)GERDReintervention(s)GORDSurgical revision(s)Reflux disease(s)Second look surgeryEsophagitisOesophagitisDysphagiaHeadwords in the Medical Subject Head (MeSH) Thesaurus of Pubmed and the Cochrane libraryReoperationDeglutition disordersSecond-look surgeryEsophagitisHeadwords in the Emtree Thesaurus of EMBASEReoperationStomach function disorderSecond look surgeryDysphagiaEsophagitis\n\nSearch Terms used in this Review", "The studies identified by the search strategy were independently selected by two reviewers (E.F. and W.D.) based on title, abstract, and full text. The literature was searched for randomized controlled trials, cohort studies, and case–control studies on the feasibility and/or outcome of surgical reinterventions. Studies in children, on other indications for primary surgery than GERD, conservative treatment of symptoms following primary antireflux surgery, surgical reintervention within 30 days after primary surgery, and patients cohorts with less than ten patients were not included. Only articles in English were included. 
Additionally, references of all selected publications were reviewed for other relevant studies. In case of a difference in opinion between the two reviewers about in- or exclusion of a study, the opinion of a third reviewer was decisive.", "Data of the selected studies were independently acquired by two reviewers (E.F. and W.D.). Study design, time period, number of patients, sex ratio, and mean age were retrieved from the studies. Based on the study design, each study was qualified by a level of evidence according to the Oxford Centre for Evidence Based Medicine Levels of Evidence.9 Type and approach of primary antireflux interventions and reoperations, mean period between both interventions, causes of failure of primary surgery and perioperative information, i.e. intra- and postoperative complications, mortality, number and causes of conversions in case of laparoscopic reoperations, mean intraoperative blood loss, duration of reoperations, and hospital stay were also extracted from the included studies. Completeness of follow-up, number of patients available, mean duration of follow-up, method of obtaining outcome at follow-up, and the definition and percentage of patients with successful symptomatic and objective outcome were extracted from all studies.", "Data were analysed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Values were expressed as mean ± SEM. Statistical analysis was not performed owing to the lack of statistically appropriate data from the included studies.", " General Results One thousand six hundred twenty-five articles were eligible for further selection after removing duplicate hits, and finally, 73 articles met the inclusion criteria (Fig. 1). The references of these articles yielded eight more articles for inclusion. These articles had not been identified with the initial search strategy because of absence of abstracts in the databases or atypical description for the intervention or disease. Eventually, 81 articles were eligible for inclusion in this study. According to the Oxford Centre for Evidence Based Medicine Levels of Evidence, 27 studies had a level of evidence IIb (33.3%)8, 10–35, two level of evidence IIIb (2.5%)36, 37, and 15 level of evidence IV (18.5%)38–52. The remaining 37 studies (45.7%) were cohort studies, but a level of evidence could not be adjudged owing to unknown study design53–89. Baseline characteristics extracted from the individual studies are shown in Table 2.\nFigure 1Results of search strategy and selection of studies.\nTable 2Baseline Characteristics Extracted from the Included StudiesNumber of patients (n)4,509 Male1,524 (33.8%) Female1,762 (39.1%) Sex not reported1,223 (27.1%)Age (years)51.3 ± 0.8Number of reoperations (n)4,584Study period (months)10.8 ± 0.7Duration between primary surgery and reoperation (months)38.3 ± 4.1Study design of the individual studies Prospective cohort study27 (33.3%) Retrospective cohort study14 (17.3%) Prospective case–control study2 (2.5%) Retrospective case–control study1 (1.2%) Not reported37 (45.7%)\n\nResults of search strategy and selection of studies.\nBaseline Characteristics Extracted from the Included Studies\nOne thousand six hundred twenty-five articles were eligible for further selection after removing duplicate hits, and finally, 73 articles met the inclusion criteria (Fig. 1). The references of these articles yielded eight more articles for inclusion. 
These articles had not been identified with the initial search strategy because of absence of abstracts in the databases or atypical description for the intervention or disease. Eventually, 81 articles were eligible for inclusion in this study. According to the Oxford Centre for Evidence Based Medicine Levels of Evidence, 27 studies had a level of evidence IIb (33.3%)8, 10–35, two level of evidence IIIb (2.5%)36, 37, and 15 level of evidence IV (18.5%)38–52. The remaining 37 studies (45.7%) were cohort studies, but a level of evidence could not be adjudged owing to unknown study design53–89. Baseline characteristics extracted from the individual studies are shown in Table 2.\nFigure 1Results of search strategy and selection of studies.\nTable 2Baseline Characteristics Extracted from the Included StudiesNumber of patients (n)4,509 Male1,524 (33.8%) Female1,762 (39.1%) Sex not reported1,223 (27.1%)Age (years)51.3 ± 0.8Number of reoperations (n)4,584Study period (months)10.8 ± 0.7Duration between primary surgery and reoperation (months)38.3 ± 4.1Study design of the individual studies Prospective cohort study27 (33.3%) Retrospective cohort study14 (17.3%) Prospective case–control study2 (2.5%) Retrospective case–control study1 (1.2%) Not reported37 (45.7%)\n\nResults of search strategy and selection of studies.\nBaseline Characteristics Extracted from the Included Studies\n Primary Antireflux Procedures Total fundoplication performed by laparoscopy, laparotomy, or thoracotomy was the most frequently reported primary antireflux procedure followed by partial fundoplication (Table 3). The type of primary antireflux procedure was not reported in almost one third, and 241 patients (5.3%) underwent more than one previous operation before inclusion in the original studies.\nTable 3Type and Indication of Primary Antireflux Procedures and Reoperations Primary procedures (n = 4,750)Reoperations (n = 4,584)Indication of operationsRecurrent reflux–1,912 (41.7%)Dysphagia–760 (16.6%)Recurrent reflux and dysphagia–184 (4.0%)Anatomical abnormality–114 (2.5%)Gasbloat syndrome–31 (0.7%)Miscellaneous–148 (3.2%)Not reported–1,435 (31.3%)Type of operationsTotal fundoplication2,162 (45.5%)2,397 (52.3%)Partial fundoplication471 (9.9%)999 (21.8%)Resection surgery–327 (7.1%)Miscellaneous procedures657 (13.8%)737 (16.1%)Not reported1,460 (30.7%)124 (2.7%)\n\nType and Indication of Primary Antireflux Procedures and Reoperations\nTotal fundoplication performed by laparoscopy, laparotomy, or thoracotomy was the most frequently reported primary antireflux procedure followed by partial fundoplication (Table 3). 
Primary Antireflux Procedures
Total fundoplication performed by laparoscopy, laparotomy, or thoracotomy was the most frequently reported primary antireflux procedure, followed by partial fundoplication (Table 3). The type of primary antireflux procedure was not reported in almost one third of patients, and 241 patients (5.3%) had undergone more than one previous operation before inclusion in the original studies.

Table 3 Type and Indication of Primary Antireflux Procedures and Reoperations
                                   Primary procedures (n = 4,750)   Reoperations (n = 4,584)
Indication of operations
  Recurrent reflux                 –                                1,912 (41.7%)
  Dysphagia                        –                                760 (16.6%)
  Recurrent reflux and dysphagia   –                                184 (4.0%)
  Anatomical abnormality           –                                114 (2.5%)
  Gas-bloat syndrome               –                                31 (0.7%)
  Miscellaneous                    –                                148 (3.2%)
  Not reported                     –                                1,435 (31.3%)
Type of operations
  Total fundoplication             2,162 (45.5%)                    2,397 (52.3%)
  Partial fundoplication           471 (9.9%)                       999 (21.8%)
  Resection surgery                –                                327 (7.1%)
  Miscellaneous procedures         657 (13.8%)                      737 (16.1%)
  Not reported                     1,460 (30.7%)                    124 (2.7%)

Causes of Failure of Primary Antireflux Surgery
Causes of failure of the previous antireflux procedure were reported for 3,175 reoperations in total. Intrathoracic wrap migration, total or partial disruption of the wrap, and telescoping were the most common anatomical abnormalities encountered (Table 4). An esophageal motility disorder or an erroneous diagnosis, i.e., a primary disease other than GERD, was the cause of failure of the previous operation in 62 patients (2.0%). In 194 reoperations (6.1%), no cause of failure could be identified.

Table 4 Causes of Failure of Previous Antireflux Procedure (n = 3,175)
Anatomical abnormalities
  Intrathoracic wrap migration: 885 (27.9%)
  Wrap disruption: 722 (22.7%)
  Telescoping: 448 (14.1%)
  Para-esophageal hiatal herniation: 195 (6.1%)
  Hiatal disruption: 167 (5.3%)
  Tight wrap: 168 (5.3%)
  Stricture: 60 (1.9%)
Wrong primary diagnosis
  Achalasia: 37 (1.2%)
  Esophageal spasms: 7 (0.2%)
  Sclerodermia: 4 (0.1%)
  Esophageal carcinoma: 1 (0.03%)
Disturbed esophageal motility: 13 (0.4%)
No cause for failure identified: 194 (6.1%)
Miscellaneous: 347 (10.9%)
Not reported: 120 (3.8%)
Percentages exceed 100% because more than one cause of failure was found during several reoperations.

Six studies showed that wrap disruption and telescoping were more frequent after conventional primary surgery, whereas disruption of the hiatal repair and a tight wrap were more frequent after laparoscopic primary repair (Table 5).18,49,61,67,84,85 Intrathoracic wrap migration was reported by Serafina et al.85 to be more frequent after conventional primary procedures (13/17, 76.5% vs. 5/11, 45.5%), whereas Heniford et al.67 showed that it was more frequent after laparoscopic primary repair (16/22, 72.7% vs. 13/33, 39.4%). In the study by Salminen et al.,84 intrathoracic wrap migration was equally frequent after conventional and laparoscopic primary surgery.

Table 5 Anatomical Abnormalities Depending on the Approach of Primary Surgery and the Indication of Reoperation
Anatomical abnormalities depending on the approach of primary surgery
                       Conventional (abdominal) approach (n = 120)   Laparoscopic approach (n = 132)
  Wrap disruption      48 (40.0%)                                    24 (18.2%)
  Telescoping          32 (26.6%)                                    10 (7.6%)
  Hiatal disruption    23 (19.2%)                                    42 (31.8%)
  Tight wrap           2 (1.7%)                                      24 (18.2%)
  Miscellaneous        36 (30.0%)                                    42 (31.8%)
Anatomical abnormalities depending on the indication of reoperation
                                 Recurrent reflux (n = 234)   Dysphagia (n = 118)
  Intrathoracic wrap migration   104 (44.4%)                  18 (15.3%)
  Wrap disruption                109 (46.6%)                  12 (10.2%)
  No cause of failure            34 (14.5%)                   51 (43.2%)
  Miscellaneous                  64 (27.4%)                   54 (45.8%)
Percentages exceed 100% because more than one cause of failure was found during several reoperations.

Five other studies8,11,12,31,72 showed that intrathoracic wrap migration and wrap disruption were more frequent in the case of recurrent reflux, whereas in the case of dysphagia, no cause of failure could be demonstrated more frequently (Table 5).

Indications for Reoperations
Recurrent reflux and dysphagia were the most frequent indications for reoperation (Table 3). In 1,435 reoperations (31.3%), the indication for reoperation was not reported. Preoperative symptoms were assessed by questionnaire in 26 studies (32.1%).10,14,17,18,23–25,28,30,33,36,45,53,54,56,61–66,71,74,76,87,88 In most studies (93.8%), the preoperative workup consisted of esophagogastroduodenoscopy, barium swallow, and/or esophageal pH monitoring.10–28,30–41,43–46,48–76,78,79,81–89
Type and Route of Reoperations
Total or partial fundoplication was the most frequently performed reoperation (Table 3), whereas the type of reoperation was not reported in 124 patients (2.7%). The laparoscopic approach was used in 1,666 reoperations (36.3%); 1,589 reoperations (34.7%) were performed by the conventional (open) abdominal route and 1,041 (22.7%) by thoracotomy. The approach of reoperation was not reported in the remaining 288 reoperations (6.3%). More than one reintervention was performed in 75 patients (1.7%).
The esophagus was totally or partially resected during 125 reoperations (2.7%). The reasons for esophageal resection were severe esophagitis with or without Barrett metaplasia,15,25,59 peptic stricture of the esophagus,10,33,51,57,72,81 severely disturbed esophageal motility,26,44,57,81 or short esophagus.70,82 In 202 reoperations (4.4%), gastric resection was performed. Indications for gastric resection were alkaline reflux,10 dense adhesions on attempted refundoplication,33,59,86 or severe gastric paresis.25,81

Intra- and Postoperative Results
The different intra- and postoperative parameters were reported in only a subset of the original studies. Intraoperative complications were reported in 454 of 2,123 reoperations (21.4%) and were more frequent during laparoscopic than during open abdominal reoperations (150/770, 19.5% vs. 5/92, 5.4%). Laceration or perforation of the esophagus and/or stomach was the most common intraoperative complication (Table 6). Postoperative complications were present after 546 of 3,491 reoperations (15.6%). Infectious, pulmonary, and cardiac complications were the most common postoperative complications (Table 6). Open abdominal reoperations were accompanied by more complications than laparoscopic reoperations (55/317, 17.4% vs. 98/642, 15.3%). Thirty-seven of 4,329 patients (0.9%) died intra- or postoperatively (Table 6). No mortality occurred in studies reporting only on laparoscopic reoperations, whereas the mortality rate was 1.3% in studies in which all reoperations were performed by a conventional abdominal approach.

Table 6 Intra- and Postoperative Results of Reoperations
Intraoperative complications (N = 2,123)a
  Injury of esophagus and stomach: 278 (13.1%)
  Pneumothorax: 73 (3.4%)
  Hemorrhage: 41 (1.9%)
  Splenectomy: 7 (0.3%)
  Other: 49 (2.3%)
  Not reported: 6 (0.3%)
Postoperative complications (N = 3,491)a
  Pulmonary complication: 125 (3.6%)
  Wound infection: 64 (1.8%)
  Leakage from alimentary tract: 52 (1.5%)
  Urinary tract infection: 12 (0.3%)
  Other infectious complications: 48 (1.4%)
  Cardiac complications: 31 (0.9%)
  Hemorrhage: 22 (0.6%)
  Other: 136 (3.9%)
  Not reported: 56 (1.6%)
Causes of mortality (N = 4,329)a
  Infectious: 11 (0.3%)
  Pulmonary: 7 (0.2%)
  Cardiac: 4 (0.1%)
  Miscellaneous: 10 (0.2%)
  Not reported: 5 (0.1%)
a Total number of reoperations in which intra- and postoperative complications and mortality were reported.

Mean duration of reoperation was 177.4 ± 10.3 min, mean intraoperative blood loss was 205.5 ± 35.6 ml, and mean hospital stay was 5.5 ± 0.5 days. Comparing laparoscopic reoperations with laparotomy on these parameters was not possible because of the small number of well-documented studies in the laparotomy group.
Reoperation was performed laparoscopically in 36.3% of all cases, with a conversion rate of 8.7%. Causes of conversion were dense adhesions (n = 57, 39.3%), severe intraoperative bleeding (n = 11, 7.6%), poor visualization (n = 3, 2.1%), and other causes (n = 15, 10.3%). In the remaining 59 cases (40.7%), the reason for conversion was not reported.
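The conversion figures can be cross-checked from the pooled counts; the short worked example below reproduces the reported percentages. The cause counts are taken from the paragraph above, and the total of 145 conversions is inferred from the breakdown rather than stated explicitly in the text.

```python
# Counts of conversion causes as reported above; the total (145) is inferred
# from the breakdown, since each cause percentage is given relative to all
# conversions.
conversion_causes = {
    "dense adhesions": 57,
    "severe intraoperative bleeding": 11,
    "poor visualization": 3,
    "other": 15,
    "not reported": 59,
}

total_conversions = sum(conversion_causes.values())  # 145
for cause, n in conversion_causes.items():
    print(f"{cause}: {n}/{total_conversions} ({100 * n / total_conversions:.1f}%)")

# Conversion rate relative to the 1,666 laparoscopic reoperations reported above.
print(f"conversion rate: {100 * total_conversions / 1666:.1f}%")  # 8.7%
```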
Symptomatic Outcome after Reoperations
Symptomatic outcome after reoperation was determined in 79 studies (97.5%)8,10–18,20–28,30–89 and was reported as successful in 81% of patients, although with different definitions of success (Table 7). Data were obtained by questionnaire in 29 studies (36.7%),8,10,11,16–18,20,22–24,27,28,30,34–37,42,45,46,48,49,54,55,61,69,71,80,84 by interview in 21 studies (26.6%),13,25,31,38,41,47,52,53,57,60,62,65–68,73,74,78,82,83,85 and the method was not reported in the remaining 29 studies (36.7%).12,14,15,21,26,32,33,39,40,43,44,50,51,56,58,59,63,64,70,72,75–77,79,81,86–89 The mean success rate was 84.2 ± 2.5% in studies reporting only on laparoscopic reoperations (17 studies)11–13,23–25,28,31,35,39,41,48,50,53,61,70,85 and 84.6 ± 3.4% in studies in which all reoperations were performed by a conventional abdominal approach (ten studies).10,22,33,44,58,68,69,75,76,86 In patients in whom the reoperation was performed for symptoms only, 82.0 ± 10.7% had a successful symptomatic outcome,47,79 and the success rate was 81.0 ± 12.1% in patients with recurrent reflux documented by pH monitoring.10,12,56,89 Comparing the outcome of total and partial refundoplication, Awad et al.53 reported symptomatic success in 68% and 60% of patients, respectively. In two other studies,11,45 however, no relationship between the type of fundoplication and symptomatic outcome was found.

Table 7 Symptomatic and Objective Outcome after Reoperation
Definition of successful symptomatic outcome in the individual studies (n = 79)
  Degree of symptoms at follow-up: 25 (31.6%)
  Patient satisfaction: 22 (27.8%)
    Satisfaction defined: 6 (27.3%)
    Satisfaction not defined: 16 (72.7%)
  Visick grading system: 7 (8.9%)
  Visick grading system combined with patient satisfaction: 1 (1.3%)
  Scores calculated from specific quality of life questionnaires: 5 (6.3%)
  Miscellaneous: 5 (6.3%)
  Not reported: 14 (17.7%)
                                   Symptomatic outcome   Objective outcome
Patients available at follow-up    3,338 (74.0%)         581 (12.9%)
Duration of follow-up (months)     34.2 ± 2.7            21.8 ± 4.7
Patients with successful outcome   2,706 (81.1%)         455 (78.3%)
Values are given as mean ± SEM unless otherwise stated.

Objective Outcome after Reoperations
Objective outcome was reported for 696 patients (15.4%) in 24 studies (29.6%); in seven of these studies, however, either a definition of success17,18,20 or the number of successful cases14,17,18,20,28,49,87 was lacking. In the remaining 17 studies, successful objective outcome was defined as normal acid exposure during pH monitoring in 11 studies,8,15,19,23,25,36,38,51,57,58,88 absence of esophagitis in four,10,54,59,76 a combination of both in one,75 and absence of reflux on radiologic imaging in another.65 In these 17 studies, 78% of patients had a successful objective outcome (Table 7). The mean success rate of laparoscopic reoperation (four studies19,23,25,88) seemed higher than that of the conventional abdominal approach (four other studies10,58,75,76): 85.8 ± 5.6% vs. 78.0 ± 10.1%, respectively.
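As a consistency check, the follow-up and success percentages in Table 7 can be recomputed from the pooled counts (4,509 patients in Table 2); the sketch below reproduces them. No new data are introduced, only the counts already reported.

```python
# Recompute the follow-up and success percentages in Table 7 from the pooled
# counts. Totals are taken from Table 2 (4,509 patients) and Table 7.
TOTAL_PATIENTS = 4509

followed_up = {"symptomatic": 3338, "objective": 581}
successful = {"symptomatic": 2706, "objective": 455}

for outcome, n_followed in followed_up.items():
    pct_followed = 100 * n_followed / TOTAL_PATIENTS
    pct_success = 100 * successful[outcome] / n_followed
    print(f"{outcome}: followed up {pct_followed:.1f}%, successful {pct_success:.1f}%")
# symptomatic: followed up 74.0%, successful 81.1%
# objective: followed up 12.9%, successful 78.3%
```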
Discussion
The often reported observations that morbidity and mortality are higher after redo antireflux surgery and that symptomatic outcome is inferior to that of primary antireflux surgery have been confirmed in this systematic review of all currently available studies. Very few studies had a prospective design, and in almost half, the type of analysis was not even reported. Moreover, most studies present only symptomatic outcome, and data on the anatomy and function of the esophagogastric junction are scarce.
Morbidity was most frequently caused by direct injury of the esophagus and stomach during reoperation in the current review, and this was confirmed in our own data on redo surgery,8 mainly as a result of the increased complexity due to adhesions after the primary operation. Most primary interventions in the studies reviewed were performed by the conventional approach. Nowadays, with laparoscopy as the gold standard, fewer adhesions may be encountered if redo surgery is required. This might improve the outlook for these patients, with a lower chance of iatrogenic organ damage, but this has to be proven in future studies. Although postoperative morbidity and mortality appeared to be lower after laparoscopic reoperations than after the open abdominal approach, intraoperative complications occurred more frequently during laparoscopic surgery. These data, however, are not based on comparisons between both approaches within individual studies and should therefore, in our opinion, be interpreted with caution.
The cause of failure was recognized in 93.8% and mainly consisted of anatomical abnormalities or an erroneous indication for primary surgery.
Disruption of the hiatal repair and a too tight wrap were more frequently observed after the laparoscopic than after the open approach. This again underlines the difficulty of achieving an adequate hiatal repair and creating a “floppy” wrap by laparoscopy. Achalasia was the most frequently reported incorrect diagnosis as a cause of failure, which supports the inclusion of esophageal manometry and 24-h pH monitoring in the preoperative workup. It has also been suggested that a too tight fundoplication can cause an achalasia-like clinical picture.90 In those circumstances, esophageal manometry shows a non-relaxing lower esophageal sphincter, but not an aperistaltic esophagus.91
Preoperative workup before reoperation is apparently not standardized but is tailored to the cause of failure and the indication for reoperation. In the case of dysphagia, it consists of a barium swallow to evaluate the esophageal and gastric anatomy and esophageal manometry to detect whether a motility disorder may be an (additional) cause of failure. In patients with reflux symptoms, extensive reevaluation is essential. Symptoms have been shown, however, to be poor predictors of pathological reflux after primary antireflux surgery92 and to be unrelated to anatomical wrap position.93 Therefore, the objective preoperative workup is the same as for patients evaluated for primary antireflux surgery and consists of esophagogastroduodenoscopy, esophageal manometry, and 24-h pH monitoring, completed with a barium swallow to evaluate the anatomy in addition to endoscopy.
Symptomatic outcome was described in most studies in this review, with success rates ranging from 56% to 100%. The definitions of success varied considerably and focused either on a general or overall grading system or on specific symptoms, with or without data on quality of life and the effect of surgery on quality of life, which compromises comparison between the individual studies. Patient satisfaction was a frequently used method for scoring symptomatic outcome. Patient satisfaction is important and clinically highly relevant, but it does not directly refer to the specific symptoms of the disease; consequently, this type of scoring does not provide insight into which aspects of the disease have improved and whether reflux symptoms have merely been exchanged for, for example, dysphagia. The Visick grading system, in which the disease is considered cured or improved with grades I and II and unchanged or worsened with grades III and IV (a symptomatic failure),94 correlated well with postoperative daily reflux-related symptoms and daily complaints of dysphagia in our own patient group undergoing redo antireflux surgery.8
Objective outcome was reported in less than one third of the included studies, with a mean success rate of 78%, which is slightly worse than after primary surgery. In our unit, all patients are encouraged to undergo stationary esophageal manometry and ambulatory 24-h esophageal pH monitoring before and after primary as well as redo antireflux surgery, primarily for quality control, but also to correlate the functional results with symptoms and to understand possible future symptoms. Although previous studies have shown that optimal anatomical and functional results are not a prerequisite for a good symptomatic outcome after primary surgery,92,93 more studies reporting the anatomical and functional status of the esophagus and stomach after redo surgery are required to outline a more complete overall picture of the outcome of redo antireflux surgery.

Conclusion
Redo antireflux surgery has higher morbidity and mortality rates than primary antireflux surgery, and symptomatic outcome is less satisfactory. Consistency in reporting symptomatic and objective outcome is necessary. Data on objective results after redo antireflux surgery are scarce, and a plea can be made to subject all primary cases to full-scale evaluation before and after antireflux surgery. Data to support this suggestion with evidence, such as adequate cost-effectiveness studies, are lacking. The relatively disappointing results of redo antireflux surgery with regard to morbidity, mortality, and symptomatic outcome support the opinion that redo surgery is tertiary referral center surgery and that these centers should continue their efforts to collect prospective subjective and objective data.
[ "introduction", null, null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion", "conclusion" ]
[ "Gastro esophageal reflux disease", "Antireflux surgery", "Nissen fundoplication", "Dysphagia", "Reoperation" ]
Introduction: Antireflux surgery for refractory gastroesophageal reflux disease (GERD) has a satisfactory outcome in 85–90% of patients.1–6 In the remaining 10–15%, reflux symptoms persist or recur, or complications occur. Dysphagia is a frequent complication of fundoplication.7 The indications for reoperation are far from straightforward, varying from severe recurrent symptoms with a more than adequate anatomical result to recurrent abnormal anatomy without any symptoms at all. Studies on reoperations show similarly wide variation, with a full range of abnormal anatomy, symptoms, and objective failure documented by esophageal manometry and pH monitoring. In our recently published study on redo antireflux surgery, morbidity and mortality were higher than after primary antireflux surgery, with a symptomatic and objective success rate of 70%, which is clearly inferior to the outcome of primary surgery.4,8 Several other studies have described causes of failure of conventional and laparoscopic antireflux surgery. Most of these studies included only a small group of patients, so an adequate impression of the outcome of reoperation is hard to extract from them. This study aims to summarize the currently available literature on surgical reintervention after primary antireflux surgery, focusing on morbidity, mortality, and outcome, in order to obtain a more complete overview of the results of redo antireflux surgery and to provide guidance on how patients should be informed about their chances of success.

Material and Methods:

Search Strategy
A literature search was performed in three electronic databases: MEDLINE using the PubMed search engine, EMBASE, and the Cochrane Central Register of Controlled Trials. The databases were searched for all years, up to November 2008. Search terms were entered to identify the relevant studies. Separate search terms were entered for the intervention, i.e., surgical reintervention, and the disease, i.e., GERD. For the disease, dysphagia was also used because it is a frequent indication for reoperation. For both the intervention and the disease, headwords in the thesaurus of the three databases [Medical Subject Heading (MeSH) Thesaurus in PubMed and the Cochrane Library and the Emtree Thesaurus in EMBASE] and free text words in title and abstract were used as search terms. The headwords from the thesaurus and the different synonyms for free text words were coupled by the Boolean operator “OR”. The combinations of search terms for the intervention and the disease were subsequently coupled by the Boolean operator “AND”. The free text words and headwords identified in the thesauruses are listed in Table 1.

Table 1 Search Terms Used in this Review
Free text words in title and abstract of MEDLINE, EMBASE, and the Cochrane Library
  Intervention: Refundoplication(s), Redo, Redo surgery, Redo surgical procedure, Redo Nissen (fundoplication), Redo antireflux procedure, Redo antireflux surgery, Reoperative antireflux surgery, Revisional surgery, Reoperation(s), Reintervention(s), Surgical revision(s), Second look surgery
  Disease: Gastro esophageal reflux, Gastro esophageal reflux disease(s), Gastro esophageal reflux disorder(s), Gastro-esophageal reflux, Gastro-esophageal reflux disease(s), Gastro-esophageal reflux disorder(s), Gastroesophageal reflux, Gastroesophageal reflux disease(s), Gastroesophageal reflux disorder(s), GERD, GORD, Reflux disease(s), Esophagitis, Oesophagitis, Dysphagia
Headwords in the Medical Subject Heading (MeSH) Thesaurus of PubMed and the Cochrane Library
  Intervention: Reoperation, Second-look surgery
  Disease: Deglutition disorders, Esophagitis
Headwords in the Emtree Thesaurus of EMBASE
  Intervention: Reoperation, Second look surgery
  Disease: Stomach function disorder, Dysphagia, Esophagitis
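To illustrate how the term lists in Table 1 translate into a query, the sketch below assembles a generic Boolean search string with the OR/AND structure described above. The term lists are abbreviated and the or_block helper is hypothetical; the exact field tags and syntax differ between MEDLINE, EMBASE, and the Cochrane Library, so this is a sketch of the structure, not the query that was actually run.

```python
# Sketch of the OR/AND structure described above: synonyms within a concept are
# joined with OR, and the intervention and disease concepts are joined with AND.
# Term lists are abbreviated examples, not the full lists from Table 1.
intervention_terms = ["refundoplication", "redo antireflux surgery", "reoperation", "surgical revision"]
disease_terms = ["gastroesophageal reflux disease", "GERD", "dysphagia", "esophagitis"]

def or_block(terms):
    """Quote multi-word phrases and join all synonyms with OR."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = f"{or_block(intervention_terms)} AND {or_block(disease_terms)}"
print(query)
# (refundoplication OR "redo antireflux surgery" OR reoperation OR "surgical revision")
#   AND ("gastroesophageal reflux disease" OR GERD OR dysphagia OR esophagitis)
```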
Selection of Studies
The studies identified by the search strategy were independently selected by two reviewers (E.F. and W.D.) based on title, abstract, and full text. The literature was searched for randomized controlled trials, cohort studies, and case–control studies on the feasibility and/or outcome of surgical reintervention. Studies in children, studies on indications for primary surgery other than GERD, studies on conservative treatment of symptoms following primary antireflux surgery, studies on surgical reintervention within 30 days after primary surgery, and patient cohorts of fewer than ten patients were not included. Only articles in English were included. Additionally, the references of all selected publications were reviewed for other relevant studies. In case of a difference in opinion between the two reviewers about inclusion or exclusion of a study, the opinion of a third reviewer was decisive.

Analysis of Data from Selected Studies
Data from the selected studies were independently acquired by the two reviewers (E.F. and W.D.). Study design, time period, number of patients, sex ratio, and mean age were retrieved from the studies. Based on the study design, each study was assigned a level of evidence according to the Oxford Centre for Evidence Based Medicine Levels of Evidence.9 Type and approach of primary antireflux interventions and reoperations, the mean period between both interventions, causes of failure of primary surgery, and perioperative information, i.e., intra- and postoperative complications, mortality, number and causes of conversions in the case of laparoscopic reoperations, mean intraoperative blood loss, duration of reoperation, and hospital stay, were also extracted from the included studies. Completeness of follow-up, number of patients available, mean duration of follow-up, method of obtaining outcome at follow-up, and the definition and percentage of patients with successful symptomatic and objective outcome were extracted from all studies.
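The extracted variables listed above map naturally onto a fixed per-study record. The sketch below shows one possible structure; the ExtractedStudy field names and the example values are illustrative assumptions, not data from any included study or the extraction form actually used by the reviewers.

```python
from dataclasses import dataclass, field
from typing import Optional

# A minimal per-study extraction record covering the variables described above.
# Field names and example values are hypothetical.
@dataclass
class ExtractedStudy:
    study_id: str
    design: Optional[str]                 # e.g. "prospective cohort"; None if not reported
    n_patients: int
    n_reoperations: int
    mean_age_years: Optional[float]
    primary_procedure: Optional[str]      # e.g. "total fundoplication"
    reoperation_approach: Optional[str]   # "laparoscopic", "open abdominal", or "thoracotomy"
    causes_of_failure: list = field(default_factory=list)
    followup_months: Optional[float] = None
    symptomatic_success_rate: Optional[float] = None  # fraction between 0 and 1
    objective_success_rate: Optional[float] = None

example = ExtractedStudy(
    study_id="study_42", design="retrospective cohort", n_patients=60,
    n_reoperations=64, mean_age_years=52.0, primary_procedure="total fundoplication",
    reoperation_approach="laparoscopic",
    causes_of_failure=["intrathoracic wrap migration", "wrap disruption"],
    followup_months=24.0, symptomatic_success_rate=0.85,
)
print(example.symptomatic_success_rate)
```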
Data Analysis
Data were analysed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Values were expressed as mean ± SEM. Statistical analysis was not performed owing to the lack of statistically appropriate data in the included studies.
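For readers who want to reproduce the mean ± SEM summaries used throughout the results, the sketch below shows the calculation on hypothetical per-study success rates; the numbers are placeholders, not values from the included studies.

```python
import statistics

# Hypothetical per-study symptomatic success rates (fractions), used only to
# illustrate the mean ± SEM summary; they are not data from the review.
success_rates = [0.84, 0.79, 0.91, 0.72, 0.88]

mean = statistics.mean(success_rates)
sem = statistics.stdev(success_rates) / len(success_rates) ** 0.5  # SEM = sample SD / sqrt(n)
print(f"{100 * mean:.1f} ± {100 * sem:.1f}% (n = {len(success_rates)} studies)")
```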
Analysis of Data from Selected Studies: Data of the selected studies were independently acquired by two reviewers (E.F. and W.D.). Study design, time period, number of patients, sex ratio, and mean age were retrieved from the studies. Based on the study design, each study was qualified by a level of evidence according to the Oxford Centre for Evidence Based Medicine Levels of Evidence.9 Type and approach of primary antireflux interventions and reoperations, mean period between both interventions, causes of failure of primary surgery and perioperative information, i.e. intra- and postoperative complications, mortality, number and causes of conversions in case of laparoscopic reoperations, mean intraoperative blood loss, duration of reoperations, and hospital stay were also extracted from the included studies. Completeness of follow-up, number of patients available, mean duration of follow-up, method of obtaining outcome at follow-up, and the definition and percentage of patients with successful symptomatic and objective outcome were extracted from all studies. Data Analysis: Data were analysed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Values were expressed as mean ± SEM. Statistical analysis was not performed owing to the lack of statistically appropriate data from the included studies. Results: General Results One thousand six hundred twenty-five articles were eligible for further selection after removing duplicate hits, and finally, 73 articles met the inclusion criteria (Fig. 1). The references of these articles yielded eight more articles for inclusion. These articles had not been identified with the initial search strategy because of absence of abstracts in the databases or atypical description for the intervention or disease. Eventually, 81 articles were eligible for inclusion in this study. According to the Oxford Centre for Evidence Based Medicine Levels of Evidence, 27 studies had a level of evidence IIb (33.3%)8, 10–35, two level of evidence IIIb (2.5%)36, 37, and 15 level of evidence IV (18.5%)38–52. The remaining 37 studies (45.7%) were cohort studies, but a level of evidence could not be adjudged owing to unknown study design53–89. Baseline characteristics extracted from the individual studies are shown in Table 2. Figure 1Results of search strategy and selection of studies. Table 2Baseline Characteristics Extracted from the Included StudiesNumber of patients (n)4,509 Male1,524 (33.8%) Female1,762 (39.1%) Sex not reported1,223 (27.1%)Age (years)51.3 ± 0.8Number of reoperations (n)4,584Study period (months)10.8 ± 0.7Duration between primary surgery and reoperation (months)38.3 ± 4.1Study design of the individual studies Prospective cohort study27 (33.3%) Retrospective cohort study14 (17.3%) Prospective case–control study2 (2.5%) Retrospective case–control study1 (1.2%) Not reported37 (45.7%) Results of search strategy and selection of studies. Baseline Characteristics Extracted from the Included Studies One thousand six hundred twenty-five articles were eligible for further selection after removing duplicate hits, and finally, 73 articles met the inclusion criteria (Fig. 1). The references of these articles yielded eight more articles for inclusion. These articles had not been identified with the initial search strategy because of absence of abstracts in the databases or atypical description for the intervention or disease. Eventually, 81 articles were eligible for inclusion in this study. 
According to the Oxford Centre for Evidence Based Medicine Levels of Evidence, 27 studies had a level of evidence IIb (33.3%)8, 10–35, two level of evidence IIIb (2.5%)36, 37, and 15 level of evidence IV (18.5%)38–52. The remaining 37 studies (45.7%) were cohort studies, but a level of evidence could not be adjudged owing to unknown study design53–89. Baseline characteristics extracted from the individual studies are shown in Table 2. Figure 1Results of search strategy and selection of studies. Table 2Baseline Characteristics Extracted from the Included StudiesNumber of patients (n)4,509 Male1,524 (33.8%) Female1,762 (39.1%) Sex not reported1,223 (27.1%)Age (years)51.3 ± 0.8Number of reoperations (n)4,584Study period (months)10.8 ± 0.7Duration between primary surgery and reoperation (months)38.3 ± 4.1Study design of the individual studies Prospective cohort study27 (33.3%) Retrospective cohort study14 (17.3%) Prospective case–control study2 (2.5%) Retrospective case–control study1 (1.2%) Not reported37 (45.7%) Results of search strategy and selection of studies. Baseline Characteristics Extracted from the Included Studies Primary Antireflux Procedures Total fundoplication performed by laparoscopy, laparotomy, or thoracotomy was the most frequently reported primary antireflux procedure followed by partial fundoplication (Table 3). The type of primary antireflux procedure was not reported in almost one third, and 241 patients (5.3%) underwent more than one previous operation before inclusion in the original studies. Table 3Type and Indication of Primary Antireflux Procedures and Reoperations Primary procedures (n = 4,750)Reoperations (n = 4,584)Indication of operationsRecurrent reflux–1,912 (41.7%)Dysphagia–760 (16.6%)Recurrent reflux and dysphagia–184 (4.0%)Anatomical abnormality–114 (2.5%)Gasbloat syndrome–31 (0.7%)Miscellaneous–148 (3.2%)Not reported–1,435 (31.3%)Type of operationsTotal fundoplication2,162 (45.5%)2,397 (52.3%)Partial fundoplication471 (9.9%)999 (21.8%)Resection surgery–327 (7.1%)Miscellaneous procedures657 (13.8%)737 (16.1%)Not reported1,460 (30.7%)124 (2.7%) Type and Indication of Primary Antireflux Procedures and Reoperations Total fundoplication performed by laparoscopy, laparotomy, or thoracotomy was the most frequently reported primary antireflux procedure followed by partial fundoplication (Table 3). The type of primary antireflux procedure was not reported in almost one third, and 241 patients (5.3%) underwent more than one previous operation before inclusion in the original studies. Table 3Type and Indication of Primary Antireflux Procedures and Reoperations Primary procedures (n = 4,750)Reoperations (n = 4,584)Indication of operationsRecurrent reflux–1,912 (41.7%)Dysphagia–760 (16.6%)Recurrent reflux and dysphagia–184 (4.0%)Anatomical abnormality–114 (2.5%)Gasbloat syndrome–31 (0.7%)Miscellaneous–148 (3.2%)Not reported–1,435 (31.3%)Type of operationsTotal fundoplication2,162 (45.5%)2,397 (52.3%)Partial fundoplication471 (9.9%)999 (21.8%)Resection surgery–327 (7.1%)Miscellaneous procedures657 (13.8%)737 (16.1%)Not reported1,460 (30.7%)124 (2.7%) Type and Indication of Primary Antireflux Procedures and Reoperations Causes of Failure of Primary Antireflux Surgery Causes of failure of the previous antireflux procedure were reported on 3,175 reoperations in total. Intrathoracic wrap migration, total or partial disruption of the wrap, and telescoping were the most common anatomical abnormalities encountered (Table 4). 
An esophageal motility disorder or an erroneous diagnosis, i.e., a primary disease other than GERD, was the cause of failure of the previous operation in 62 patients (2.0%). In 194 reoperations (6.1%), no cause of failure could be identified.

Table 4. Causes of Failure of Previous Antireflux Procedure (n = 3,175)
Anatomical abnormalities:
  Intrathoracic wrap migration: 885 (27.9%)
  Wrap disruption: 722 (22.7%)
  Telescoping: 448 (14.1%)
  Para-esophageal hiatal herniation: 195 (6.1%)
  Hiatal disruption: 167 (5.3%)
  Tight wrap: 168 (5.3%)
  Stricture: 60 (1.9%)
Wrong primary diagnosis:
  Achalasia: 37 (1.2%)
  Esophageal spasms: 7 (0.2%)
  Sclerodermia: 4 (0.1%)
  Esophageal carcinoma: 1 (0.03%)
Disturbed esophageal motility: 13 (0.4%)
No cause for failure identified: 194 (6.1%)
Miscellaneous: 347 (10.9%)
Not reported: 120 (3.8%)
Percentages exceed 100% since more than one cause of failure was found during several reoperations.

Six studies showed that wrap disruption and telescoping were more frequent after conventional primary surgery, whereas disruption of the hiatal repair and a tight wrap were more frequent after laparoscopic primary repair (Table 5).18,49,61,67,84,85 Intrathoracic wrap migration was reported by Serafina et al.85 to be more frequent after conventional primary procedures (13/17, 76.5% vs. 5/11, 45.5%), whereas Heniford et al.67 showed that it was more frequent after laparoscopic primary repair (16/22, 72.7% vs. 13/33, 39.4%). In the study by Salminen et al.,84 intrathoracic wrap migration was equally frequent after conventional and laparoscopic primary surgery.

Table 5. Anatomical Abnormalities Depending on the Approach of Primary Surgery and the Indication of Reoperation
Depending on the approach of primary surgery (conventional abdominal approach, n = 120 / laparoscopic approach, n = 132):
  Wrap disruption: 48 (40.0%) / 24 (18.2%)
  Telescoping: 32 (26.6%) / 10 (7.6%)
  Hiatal disruption: 23 (19.2%) / 42 (31.8%)
  Tight wrap: 2 (1.7%) / 24 (18.2%)
  Miscellaneous: 36 (30.0%) / 42 (31.8%)
Depending on the indication of reoperation (recurrent reflux, n = 234 / dysphagia, n = 118):
  Intrathoracic wrap migration: 104 (44.4%) / 18 (15.3%)
  Wrap disruption: 109 (46.6%) / 12 (10.2%)
  No cause of failure: 34 (14.5%) / 51 (43.2%)
  Miscellaneous: 64 (27.4%) / 54 (45.8%)
Percentages exceed 100% since more than one cause of failure was found during several reoperations.

Five other studies8,11,12,31,72 showed that intrathoracic wrap migration and wrap disruption were more frequent in the case of recurrent reflux, whereas in the case of dysphagia it was more frequent that no cause of failure could be demonstrated (Table 5).
Indications for Reoperations: Recurrent reflux and dysphagia were the most frequent indications for reoperations (Table 3). In 1,435 reoperations (31.3%), the indication for reoperation was not reported. Preoperative symptoms were assessed by questionnaire in 26 studies (32.1%).10,14,17,18,23–25,28,30,33,36,45,53,54,56,61–66,71,74,76,87,88 In most studies (93.8%), preoperative workup consisted of esophagogastroduodenoscopy, barium swallow, and/or esophageal pH monitoring.10–28,30–41,43–46,48–76,78,79,81–89
Type and Route of Reoperations: Total or partial fundoplication was the most frequently performed reoperation (Table 3), whereas the type of reoperation was not reported in 124 patients (2.7%). The laparoscopic approach was used in 1,666 reoperations (36.3%); 1,589 reoperations (34.7%) were performed by the conventional (open) abdominal route and 1,041 (22.7%) by thoracotomy. The approach of reoperation was not reported in the remaining 288 reoperations (6.3%). More than one reintervention was performed in 75 patients (1.7%). The esophagus was totally or partially resected during 125 reoperations (2.7%). The reasons to perform esophageal resection were severe esophagitis with or without Barrett metaplasia,15,25,59 peptic stricture of the esophagus,10,33,51,57,72,81 severely disturbed esophageal motility,26,44,57,81 or short esophagus.70,82 In 202 reoperations (4.4%), gastric resection was performed. Indications for this were alkaline reflux,10 dense adhesions on attempted refundoplication,33,59,86 or severe gastric paresis.25,81

Intra- and Postoperative Results: The different intra- and postoperative parameters were only reported in a subset of the original studies. Intraoperative complications were reported in 454 of 2,123 reoperations (21.4%) and were more frequent during laparoscopic than during open abdominal reoperations (150/770, 19.5% vs. 5/92, 5.4%). Laceration or perforation of the esophagus and/or stomach was the most common (Table 6). Postoperative complications were present after 546 of 3,491 reoperations (15.6%). Infectious, pulmonary, and cardiac complications were the most common postoperative complications (Table 6). Open abdominal reoperations were accompanied by more complications than laparoscopic reoperations (55/317, 17.4% vs. 98/642, 15.3%). Thirty-seven of 4,329 patients (0.9%) died intra- or postoperatively (Table 6). No mortality occurred in studies reporting only on laparoscopic reoperations, while the mortality rate was 1.3% in studies in which all reoperations were performed by a conventional abdominal approach.
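Note that each rate in the paragraph above is computed against the number of reoperations for which that parameter was actually reported (the N values given with Table 6), not against all 4,584 reoperations. A short, purely illustrative check of the quoted figures:

    def rate(events, reported):
        # proportion of events among reoperations for which the parameter was reported
        return f"{events}/{reported} = {events / reported:.1%}"

    print(rate(454, 2123))   # intraoperative complications, ~21.4%
    print(rate(546, 3491))   # postoperative complications, ~15.6%
    print(rate(37, 4329))    # intra- or postoperative mortality, ~0.9%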
Table 6. Intra- and Postoperative Results of Reoperations
Intraoperative complications (reported for N = 2,123 reoperations):
  Injury of esophagus and stomach: 278 (13.1%)
  Pneumothorax: 73 (3.4%)
  Hemorrhage: 41 (1.9%)
  Splenectomy: 7 (0.3%)
  Other: 49 (2.3%)
  Not reported: 6 (0.3%)
Postoperative complications (reported for N = 3,491 reoperations):
  Pulmonary complication: 125 (3.6%)
  Wound infection: 64 (1.8%)
  Leakage from alimentary tract: 52 (1.5%)
  Urinary tract infection: 12 (0.3%)
  Other infectious complications: 48 (1.4%)
  Cardiac complications: 31 (0.9%)
  Hemorrhage: 22 (0.6%)
  Other: 136 (3.9%)
  Not reported: 56 (1.6%)
Causes of mortality (reported for N = 4,329 reoperations):
  Infectious: 11 (0.3%)
  Pulmonary: 7 (0.2%)
  Cardiac: 4 (0.1%)
  Miscellaneous: 10 (0.2%)
  Not reported: 5 (0.1%)
N values are the total number of reoperations in which the intra- and postoperative complications and mortality rate were reported.

Mean duration of reoperation was 177.4 ± 10.3 min, mean intraoperative blood loss 205.5 ± 35.6 ml, and mean hospital stay 5.5 ± 0.5 days. Comparing results of laparoscopic reoperations with laparotomy regarding the preceding parameters was not possible due to the small number of well-documented studies in the laparotomy group. Reoperation was performed laparoscopically in 36.3% of all cases, with a conversion rate of 8.7%. Causes of conversion were dense adhesions (n = 57, 39.3%), severe intraoperative bleeding (n = 11, 7.6%), poor visualization (n = 3, 2.1%), and other (n = 15, 10.3%). In the remaining 59 cases (40.7%), the reason for conversion was not reported.
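The conversion figures can be reconciled arithmetically: assuming the listed causes account for all conversions, they sum to 145 conversions among the 1,666 laparoscopic reoperations, and the cause-specific percentages are fractions of those 145 conversions rather than of all laparoscopic cases. A brief check, using only the counts quoted above (illustrative, not from the original analysis):

    laparoscopic = 1666
    causes = {
        "dense adhesions": 57,
        "severe intraoperative bleeding": 11,
        "poor visualization": 3,
        "other": 15,
        "not reported": 59,
    }
    conversions = sum(causes.values())  # 145
    print(f"conversion rate: {conversions}/{laparoscopic} = {conversions / laparoscopic:.1%}")  # ~8.7%
    for cause, n in causes.items():
        print(f"{cause}: {n}/{conversions} = {n / conversions:.1%}")  # matches the quoted 39.3%, 7.6%, 2.1%, 10.3%, 40.7%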
Symptomatic Outcome after Reoperations: Symptomatic outcome after reoperation was determined in 79 studies (97.5%)8,10–18,20–28,30–89 and reported as successful in 81% of patients, although with different definitions of success (Table 7). Data were obtained by questionnaire in 29 studies (36.7%),8,10,11,16–18,20,22–24,27,28,30,34–37,42,45,46,48,49,54,55,61,69,71,80,84 by interview in 21 (26.6%),13,25,31,38,41,47,52,53,57,60,62,65–68,73,74,78,82,83,85 and the method of assessment was not reported in the remaining 29 studies (36.7%).12,14,15,21,26,32,33,39,40,43,44,50,51,56,58,59,63,64,70,72,75–77,79,81,86–89 The mean success rate was 84.2 ± 2.5% in studies reporting only on laparoscopic reoperations (17 studies)11–13,23–25,28,31,35,39,41,48,50,53,61,70,85 and 84.6 ± 3.4% in studies in which all reoperations were performed by a conventional abdominal approach (ten studies).10,22,33,44,58,68,69,75,76,86 In patients in whom the reoperation was performed for symptoms only, 82.0 ± 10.7% had a successful symptomatic outcome,47,79 and the success rate was 81.0 ± 12.1% in patients with recurrent reflux documented by pH monitoring.10,12,56,89 Comparing the outcome of total and partial refundoplication, Awad et al.53 reported symptomatic success in 68% and 60% of patients, respectively. In two other studies,11,45 however, no relationship between the type of fundoplication and the symptomatic outcome was found.
Table 7. Symptomatic and Objective Outcome after Reoperation
Definition of successful symptomatic outcome in the individual studies (n = 79):
  Degree of symptoms at follow-up: 25 (31.6%)
  Patient satisfaction: 22 (27.8%), of which satisfaction was defined in 6 (27.3%) and not defined in 16 (72.7%)
  Visick grading system: 7 (8.9%)
  Visick grading system combined with patient satisfaction: 1 (1.3%)
  Scores calculated from specific quality of life questionnaires: 5 (6.3%)
  Miscellaneous: 5 (6.3%)
  Not reported: 14 (17.7%)
Patients available at follow-up: symptomatic outcome 3,338 (74.0%); objective outcome 581 (12.9%)
Duration of follow-up (months): symptomatic outcome 34.2 ± 2.7; objective outcome 21.8 ± 4.7
Patients with successful outcome: symptomatic outcome 2,706 (81.1%); objective outcome 455 (78.3%)
Values are given as mean ± SEM unless otherwise stated.

Objective Outcome after Reoperations: Objective outcome was reported for 696 patients (15.4%) in 24 studies (29.6%); in seven of these studies, however, either a definition of success17,18,20 or the number of successful cases14,17,18,20,28,49,87 was not provided.
In the remaining 17 studies, successful objective outcome was defined as normal acid exposure during pH monitoring in 11,8,15,19,23,25,36,38,51,57,58,88 absence of esophagitis in four,10,54,59,76 a combination of both in one,75 and absence of reflux on radiologic imaging in another.65 In these 17 studies, 78% of patients had a successful objective outcome (Table 7). The mean success rate of laparoscopic reoperation (four studies19,23,25,88) seemed higher than that of the conventional abdominal approach (four other studies10,58,75,76), 85.8 ± 5.6% and 78.0 ± 10.1%, respectively.
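The headline success rates in Table 7 appear to be simple pooled proportions of the patients available at follow-up, rather than means of study-level rates (the latter are reported separately, e.g., 84.2 ± 2.5% for laparoscopic-only studies). A brief check of the Table 7 figures, for illustration only:

    outcomes = {
        "symptomatic": (2706, 3338),  # patients with successful outcome / patients available at follow-up
        "objective": (455, 581),
    }
    for label, (success, total) in outcomes.items():
        print(f"{label}: {success}/{total} = {success / total:.1%}")  # 81.1% and 78.3%, as in Table 7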
Discussion: The often reported observations that morbidity and mortality are higher after redo antireflux surgery and that symptomatic outcome is inferior to that of primary antireflux surgery have been confirmed in this systematic review of all currently available studies. Very few studies had a prospective design, and in almost half, the study design was not even reported. Moreover, most studies present only symptomatic outcome, and data on the anatomy and function of the esophagogastric junction are scarce. In the current review, morbidity was most frequently caused by direct injury of the esophagus and stomach during reoperation, and this was confirmed in our own data on redo surgery,8 mainly as a result of the increased complexity due to adhesions after the primary operation. Most primary interventions in the studies reviewed were performed by the conventional approach. Nowadays, with laparoscopy as the gold standard, fewer adhesions may be encountered if redo surgery is required. This might improve the outlook for these patients, with a lower chance of iatrogenic organ damage, but this has to be proven in future studies. Although postoperative morbidity and mortality appeared to be lower after laparoscopic reoperations than after the open abdominal approach, intraoperative complications occurred more frequently during laparoscopic surgery. These data, however, are not based on comparisons between both approaches within individual studies and should therefore, in our opinion, be interpreted with caution. The cause of failure was recognized in 93.8% and mainly consisted of anatomical abnormalities or an erroneous indication for primary surgery. Disruption of the hiatal repair and a too tight wrap were more frequently observed after the laparoscopic than after the open approach.
This again underlines the difficulty of achieving an adequate hiatal repair and creating a "floppy" wrap by laparoscopy. Achalasia was the most frequently reported incorrect diagnosis as a cause of failure, and this supports the inclusion of esophageal manometry and 24-h pH monitoring in the preoperative workup. It has also been suggested that a too tight fundoplication can cause an achalasia-like clinical picture.90 Esophageal manometry shows, in those circumstances, a non-relaxing lower esophageal sphincter but not an aperistaltic esophagus.91 Preoperative workup before reoperation is apparently not standardized but tailored to the cause of failure and the indication for reoperation. In the case of dysphagia, this consists of a barium swallow to evaluate the esophageal and gastric anatomy and esophageal manometry to detect whether or not a motility disorder may be an (additional) cause of failure. In patients with reflux symptoms, extensive reevaluation is essential. Symptoms have been shown, however, to be poor predictors of pathological reflux after primary antireflux surgery92 and to be unrelated to anatomical wrap position.93 Therefore, the objective preoperative workup is identical to that for patients evaluated for primary antireflux surgery and consists of esophagogastroduodenoscopy, esophageal manometry, and 24-h pH monitoring, completed with a barium swallow to evaluate the anatomy in addition to endoscopy. Symptomatic outcome was described in most studies in this review, with a success rate ranging from 56% to 100%. The definitions of success showed considerable variation and focused either on a more general or overall system or on specific symptoms, with or without data on quality of life and the effect of surgery on quality of life, compromising comparison between the individual studies. Patient satisfaction was a frequently used method for scoring symptomatic outcome. Patient satisfaction is important and clinically highly relevant, but it does not directly refer to the specific symptoms of the disease; consequently, this type of scoring does not provide insight into which aspects of the disease have improved and whether or not reflux symptoms have been exchanged for, for example, dysphagia. The Visick grading system, in which grades I and II indicate that the disease was cured or improved and grades III and IV (unchanged or worsened) are considered a symptomatic failure,94 correlated well with postoperative daily reflux-related symptoms and daily complaints of dysphagia in our patient group undergoing redo antireflux surgery.8 Objective outcome was reported in less than one third of the studies included in this review, with a mean success rate of 78%, which is slightly worse than after primary surgery. In our unit, all patients are encouraged to undergo stationary esophageal manometry and ambulatory 24-h esophageal pH monitoring before and after primary as well as redo antireflux surgery, primarily for quality control, but also to be able to correlate the functional results with symptoms and to understand possible future symptoms. Although previous studies have shown that optimal anatomical and functional results are not a prerequisite for a good symptomatic outcome after primary surgery,92,93 more studies reporting the anatomical and functional status of the esophagus and stomach after redo surgery are required to outline a more complete overall picture of the outcome of redo antireflux surgery.
Conclusion: Redo antireflux surgery has higher morbidity and mortality rates than primary antireflux surgery, and symptomatic outcome is less satisfactory. Consistency in reporting symptomatic and objective outcome is necessary. Data on objective results after redo antireflux surgery are scarce, and a plea can be made to subject all primary cases to full-scale evaluation before and after antireflux surgery. Data to support this suggestion with evidence, such as adequate cost-effectiveness studies, are lacking. The relatively disappointing results of redo antireflux surgery with regard to morbidity, mortality, and symptomatic outcome support the opinion that redo surgery is tertiary referral center surgery and that these centers should continue their efforts to collect prospective subjective and objective data.
Background: Outcome and morbidity of redo antireflux surgery are suggested to be less satisfactory than those of primary surgery. Studies reporting on redo surgery, however, are usually much smaller than those of primary surgery. The aim of this study was to summarize the currently available literature on redo antireflux surgery. Methods: A structured literature search was performed in the electronic databases of MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials. Results: A total of 81 studies met the inclusion criteria. The study design was prospective in 29, retrospective in 15, and not reported in 37 studies. In these studies, 4,584 reoperations in 4,509 patients are reported. Recurrent reflux and dysphagia were the most frequent indications; intraoperative complications occurred in 21.4% and postoperative complications in 15.6%, with an overall mortality rate of 0.9%. The conversion rate in laparoscopic surgery was 8.7%. Mean (± SEM) duration of surgery was 177.4 ± 10.3 min and mean hospital stay was 5.5 ± 0.5 days. Symptomatic outcome was successful in 81.1% and was equal in the laparoscopic and conventional approach. Objective outcome was obtained in 24 studies (29.6%) and success was reported in 78.3%, with a slightly higher success rate in case of laparoscopy than with open surgery (85.8% vs. 78.0%). Conclusions: This systematic review on redo antireflux surgery has confirmed that morbidity and mortality after redo surgery are higher than after primary surgery and that symptomatic and objective outcomes are less satisfactory. Data on objective results were scarce, and consistency with regard to reporting outcome is necessary.
Introduction: Antireflux surgery for refractory gastroesophageal reflux disease (GERD) has a satisfactory outcome in 85–90% of patients.1–6 In the remaining 10–15%, reflux symptoms persist or recur, or complications occur. Dysphagia is a frequent complication of fundoplication.7 The indications for reoperation are far from straightforward, varying from severe recurrent symptoms with a more than adequate anatomical result to recurrent abnormal anatomy without any symptoms at all. Studies on reoperations show similarly wide variation, with a full range of abnormal anatomy, symptoms, and objective failure documented by esophageal manometry and pH monitoring. In our recently published study on redo antireflux surgery, morbidity and mortality were higher than after primary antireflux surgery, with a symptomatic and objective success rate of 70%, which is obviously inferior to the outcome of primary surgery.4,8 Several other studies have been published describing causes of failure of conventional and laparoscopic antireflux surgery. Most studies have included only a small group of patients, so an adequate impression of the outcome of reoperation is hard to extract from such studies. This study aims to summarize the currently available literature on surgical reintervention after primary antireflux surgery, focusing on morbidity, mortality, and outcome, in order to give a more complete overview of the results of redo antireflux surgery and to provide guidance on how patients should be informed about their chances of success. Conclusion: Redo antireflux surgery has higher morbidity and mortality rates than primary antireflux surgery, and symptomatic outcome is less satisfactory. Consistency in reporting symptomatic and objective outcome is necessary. Data on objective results after redo antireflux surgery are scarce, and a plea can be made to subject all primary cases to full-scale evaluation before and after antireflux surgery. Data to support this suggestion with evidence, such as adequate cost-effectiveness studies, are lacking. The relatively disappointing results of redo antireflux surgery with regard to morbidity, mortality, and symptomatic outcome support the opinion that redo surgery is tertiary referral center surgery and that these centers should continue their efforts to collect prospective subjective and objective data.
Background: Outcome and morbidity of redo antireflux surgery are suggested to be less satisfactory than those of primary surgery. Studies reporting on redo surgery, however, are usually much smaller than those of primary surgery. The aim of this study was to summarize the currently available literature on redo antireflux surgery. Methods: A structured literature search was performed in the electronic databases of MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials. Results: A total of 81 studies met the inclusion criteria. The study design was prospective in 29, retrospective in 15, and not reported in 37 studies. In these studies, 4,584 reoperations in 4,509 patients are reported. Recurrent reflux and dysphagia were the most frequent indications; intraoperative complications occurred in 21.4% and postoperative complications in 15.6%, with an overall mortality rate of 0.9%. The conversion rate in laparoscopic surgery was 8.7%. Mean (± SEM) duration of surgery was 177.4 ± 10.3 min and mean hospital stay was 5.5 ± 0.5 days. Symptomatic outcome was successful in 81.1% and was equal in the laparoscopic and conventional approach. Objective outcome was obtained in 24 studies (29.6%) and success was reported in 78.3%, with a slightly higher success rate in case of laparoscopy than with open surgery (85.8% vs. 78.0%). Conclusions: This systematic review on redo antireflux surgery has confirmed that morbidity and mortality after redo surgery are higher than after primary surgery and that symptomatic and objective outcomes are less satisfactory. Data on objective results were scarce, and consistency with regard to reporting outcome is necessary.
10,157
302
[ 1367, 299, 147, 177, 48, 310, 149, 516, 66, 172, 517, 372, 139 ]
17
[ "studies", "reoperations", "primary", "reported", "outcome", "table", "surgery", "patients", "10", "antireflux" ]
[ "reoperations recurrent reflux", "reflux primary antireflux", "reflux disorder reoperation", "antireflux proceduregastoesophageal reflux", "refluxreoperative antireflux surgerygastroesophageal" ]
null
[CONTENT] Gastro esophageal reflux disease | Antireflux surgery | Nissen fundoplication | Dysphagia | Reoperation [SUMMARY]
null
[CONTENT] Gastro esophageal reflux disease | Antireflux surgery | Nissen fundoplication | Dysphagia | Reoperation [SUMMARY]
[CONTENT] Gastro esophageal reflux disease | Antireflux surgery | Nissen fundoplication | Dysphagia | Reoperation [SUMMARY]
[CONTENT] Gastro esophageal reflux disease | Antireflux surgery | Nissen fundoplication | Dysphagia | Reoperation [SUMMARY]
[CONTENT] Gastro esophageal reflux disease | Antireflux surgery | Nissen fundoplication | Dysphagia | Reoperation [SUMMARY]
[CONTENT] Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Laparotomy | Patient Satisfaction | Reoperation | Treatment Failure | Treatment Outcome [SUMMARY]
null
[CONTENT] Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Laparotomy | Patient Satisfaction | Reoperation | Treatment Failure | Treatment Outcome [SUMMARY]
[CONTENT] Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Laparotomy | Patient Satisfaction | Reoperation | Treatment Failure | Treatment Outcome [SUMMARY]
[CONTENT] Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Laparotomy | Patient Satisfaction | Reoperation | Treatment Failure | Treatment Outcome [SUMMARY]
[CONTENT] Fundoplication | Gastroesophageal Reflux | Humans | Laparoscopy | Laparotomy | Patient Satisfaction | Reoperation | Treatment Failure | Treatment Outcome [SUMMARY]
[CONTENT] reoperations recurrent reflux | reflux primary antireflux | reflux disorder reoperation | antireflux proceduregastoesophageal reflux | refluxreoperative antireflux surgerygastroesophageal [SUMMARY]
null
[CONTENT] reoperations recurrent reflux | reflux primary antireflux | reflux disorder reoperation | antireflux proceduregastoesophageal reflux | refluxreoperative antireflux surgerygastroesophageal [SUMMARY]
[CONTENT] reoperations recurrent reflux | reflux primary antireflux | reflux disorder reoperation | antireflux proceduregastoesophageal reflux | refluxreoperative antireflux surgerygastroesophageal [SUMMARY]
[CONTENT] reoperations recurrent reflux | reflux primary antireflux | reflux disorder reoperation | antireflux proceduregastoesophageal reflux | refluxreoperative antireflux surgerygastroesophageal [SUMMARY]
[CONTENT] reoperations recurrent reflux | reflux primary antireflux | reflux disorder reoperation | antireflux proceduregastoesophageal reflux | refluxreoperative antireflux surgerygastroesophageal [SUMMARY]
[CONTENT] studies | reoperations | primary | reported | outcome | table | surgery | patients | 10 | antireflux [SUMMARY]
null
[CONTENT] studies | reoperations | primary | reported | outcome | table | surgery | patients | 10 | antireflux [SUMMARY]
[CONTENT] studies | reoperations | primary | reported | outcome | table | surgery | patients | 10 | antireflux [SUMMARY]
[CONTENT] studies | reoperations | primary | reported | outcome | table | surgery | patients | 10 | antireflux [SUMMARY]
[CONTENT] studies | reoperations | primary | reported | outcome | table | surgery | patients | 10 | antireflux [SUMMARY]
[CONTENT] antireflux surgery | surgery | antireflux | symptoms | anatomy symptoms | surgery studies | abnormal anatomy symptoms | published | abnormal | abnormal anatomy [SUMMARY]
null
[CONTENT] reoperations | wrap | reported | studies | 10 | primary | table | failure | complications | cause [SUMMARY]
[CONTENT] surgery | antireflux surgery | redo | redo antireflux surgery | antireflux | support | regard | redo antireflux | results redo antireflux surgery | results redo antireflux [SUMMARY]
[CONTENT] studies | reoperations | primary | surgery | antireflux | outcome | reported | 10 | patients | esophageal [SUMMARY]
[CONTENT] studies | reoperations | primary | surgery | antireflux | outcome | reported | 10 | patients | esophageal [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] 81 ||| 29 | 15 | 37 ||| 4,584 | 4,509 ||| dysphagia | 21.4% | 15.6% | 0.9% ||| 8.7% ||| Mean(+/-SEM | 177.4 | 10.3 | 5.5 +/- 0.5 days ||| Symptomatic | 81.1% ||| 24 | 29.6% | 78.3% | 85.8% | 78.0% [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| MEDLINE | EMBASE | Cochrane Central Register of Controlled Trials ||| 81 ||| 29 | 15 | 37 ||| 4,584 | 4,509 ||| dysphagia | 21.4% | 15.6% | 0.9% ||| 8.7% ||| Mean(+/-SEM | 177.4 | 10.3 | 5.5 +/- 0.5 days ||| Symptomatic | 81.1% ||| 24 | 29.6% | 78.3% | 85.8% | 78.0% ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| MEDLINE | EMBASE | Cochrane Central Register of Controlled Trials ||| 81 ||| 29 | 15 | 37 ||| 4,584 | 4,509 ||| dysphagia | 21.4% | 15.6% | 0.9% ||| 8.7% ||| Mean(+/-SEM | 177.4 | 10.3 | 5.5 +/- 0.5 days ||| Symptomatic | 81.1% ||| 24 | 29.6% | 78.3% | 85.8% | 78.0% ||| ||| [SUMMARY]
The Clinical Education Partnership Initiative: an innovative approach to global health education.
25547408
Despite evidence that international clinical electives can be educationally and professionally beneficial to both visiting and in-country trainees, it remains challenging for American residents to participate in such electives abroad. Additionally, even when logistically possible, these electives are often poorly structured. The Universities of Washington (UW) and Nairobi (UoN) have enjoyed a long-standing research collaboration, which recently expanded into the UoN Medical Education Partnership Initiative (MEPI). Based on MEPI in Kenya, the Clinical Education Partnership Initiative (CEPI) is a new educational exchange program between UoN and UW. CEPI allows UW residents to partner with Kenyan trainees in clinical care and teaching activities at Naivasha District Hospital (NDH), one of UoN's MEPI training sites in Kenya.
BACKGROUND
UW and UoN faculty collaborated to create a curriculum and structure for the program. A Chief Resident from the UW Department of Medicine coordinated the program at NDH. From August 2012 through April 2014, 32 UW participants from 5 medical specialties spent between 4 and 12 weeks working in NDH. In addition to clinical duties, all took part in formal and informal educational activities. Before and after their rotations, UW residents completed surveys evaluating clinical competencies and cross-cultural educational and research skills. Kenyan trainees also completed surveys after working with UW residents for three months.
METHODS
UW trainees reported a significant increase in exposure to various tropical and other diseases, an increased sense of self-reliance, particularly in a resource-limited setting, and an improved understanding of how social and cultural factors can affect health. Kenyan trainees reported both an increase in clinical skills and confidence, and an appreciation for learning a different approach to patient care and professionalism.
RESULTS
After participating in CEPI, both Kenyan and US trainees noted improvement in their clinical knowledge and skills and a broader understanding of what it means to be clinicians. Through structured partnerships between institutions, educational exchange that benefits both parties is possible.
CONCLUSIONS
[ "Capacity Building", "Clinical Competence", "Cooperative Behavior", "Culture", "Curriculum", "Global Health", "Health Resources", "Humans", "Interinstitutional Relations", "Internship and Residency", "Kenya", "Schools, Medical", "Washington" ]
4335420
Background
International health electives
International health electives (IHEs) in some settings have been shown to increase United States (US) medical trainees’ understanding of public health, primary care, health disparities and barriers to care, in addition to raising National Board of Medical Examiners (NBME) scores and improving physical exam and history taking skills [1]. As such, both the demand for and the prevalence of these experiences are increasing among medical trainees [2]. Currently, 87% of US medical schools offer international clinical experiences to students, and it is estimated that over 25% of students participate in these programs [3,4]. Graduating medical students throughout all disciplines rank the availability of international experiences as among the most important factors in their choice of residency program [5]. However, due to the rigors of residency training and a number of other constraints, only 59% of residency programs offer international clinical rotations to residents, and as few as 10% of residents may be able to take advantage of these opportunities, when they exist [4].
Pitfalls for IHEs over the last several decades have been described. A common problem inherent to international clinical experiences is poorly defined academic goals and clinical competencies [3]. Another commonly cited issue, especially among medical students, is lack of structure and supervision, leading to concerns that one’s presence in the host institution may be detrimental to patient care [6]. Finally, concerns have been raised that short-term international clinical experiences might benefit the sending institution and its trainees more than that of the host institution, and that benefits for the host institution can be difficult to measure [7].
Medical training in most parts of sub-Saharan Africa includes 6 years of undergraduate medical school and one year of internship, in which newly graduated clinicians (medical officer interns) are placed in government hospitals to practice. Inadequate numbers of medical training facilities and faculty positions have been identified as challenges facing medical education in sub-Saharan Africa [8], leading to trainees with limited access to senior clinicians for both teaching and role modeling. Formal training programs involving regularly scheduled, structured group didactics for doctors recruited to work in rural Africa have been shown to increase “professional socialization,” or the “learning of attitudes, norms, self images, values, beliefs and behavioral patterns associated with professional practice,” in addition to clinical knowledge and skills [9,10]. This enhanced sense of connection was thought to be related to the mode of training rather than the content, and led to long-term professional relationships and increased job satisfaction [9]. Despite these advantages, there are numerous barriers in place for structured continuing medical education in most healthcare facilities in sub-Saharan Africa [11].
Given the relative lack of international clinical opportunities for US residents, the limitations on medical education in much of sub-Saharan Africa and the logistical and ethical considerations that have been described pertaining to IHEs for medical students, the Universities of Washington and Nairobi have recently initiated a new clinical training program for UW and UoN residents in Naivasha, Kenya. This one-month rotation is based on a long-term, structured educational exchange partnership, and aims to provide trainees with supervised, hands-on experience based on a predefined curriculum covering four main competencies of healthcare delivery in resource-poor settings.
Partnership for innovative medical education in Kenya
Researchers at the University of Washington (UW) have collaborated with researchers from the University of Nairobi (UoN) on various projects for over 30 years [12]. This longstanding collaboration recently culminated in a Medical Education Partnership Initiative (MEPI) grant from the National Institutes of Health and PEPFAR. The new initiative, called Partnership for Innovative Medical Education in Kenya, or PRIME-K, is designed to strengthen medical education, improve retention of clinicians in rural parts of Kenya, and foster research capacity in Kenya. Modeled on the Washington, Wyoming, Alaska, Montana and Idaho (WWAMI) Regional Medical Education Program, which provides medical education for all 5 states and extends medical training to very rural and remote areas [13,14], a new decentralized training system has been created for UoN medical students [15]. Since 2010, 238 medical students from UoN have completed rotations in 14 hospitals throughout the country, on both the district and provincial levels. Through the foundation of PRIME-K, the UW-UoN partnership has now launched a new initiative called the Clinical Education Partnership Initiative, which focuses on cross-cultural clinical education experiences in Kenya.
null
null
Results & discussion
Naivasha district hospital trainees
Surveys were completed by NDH trainees four times, in November 2012, May and September 2013, and March 2014. A total of 50 surveys were completed. Of these, 26 were completed by clinical officer interns (COIs), 11 were completed by medical officer interns (MOIs), 10 were medical officers (MOs), two were students, and one was a clinical officer (CO). Trainees from NDH reported learning from UW residents most frequently during ward rounds, teaching conferences, and informal educational settings. Of all educational settings, they ranked surgical theatre and teaching conferences to be most helpful. 100% of NDH trainees reported feeling more confident about medical knowledge and more confident taking care of patients as a result of their interaction with UW residents.
Trainees at NDH most frequently cited improvements in diagnosis and management of patients when asked how their interactions with UW residents benefitted them. They specified that they learned “more etiologies to conditions,” how to “arrive at a particular diagnosis” using the “correct investigations,” and “coming up with differential [diagnoses]”. Others mentioned getting help with physical examination and “proper management” of patients from the UW residents. Commonly mentioned ways that UW residents helped with medical knowledge were updating, reinforcing, or broadening knowledge gained in medical school. Specific skills that were frequently mentioned were wound care, physical examination, problem-solving and procedures.
Many NDH trainees stated they appreciated getting a “different approach” to patient care from UW residents. This was mentioned both with respect to clinical decision making and to a broader sense of one’s approach to doctoring. One trainee mentioned that the most useful element of the partnership was “getting to know the alternative way of approaching the usual conditions, and not sticking to the one way of approach,” highlighting the opportunity to broaden one’s methods for clinical decision making. Others mentioned “different approach to get the diagnosis,” and “stimulating my thought process”.
Additionally, trainees alluded to learning a different approach to the art of doctoring, in both the patient-doctor relationship and professional relationships between clinicians. Teamwork and communication were stressed by several trainees, who stated that they appreciated learning “work distribution” and “importance of working as a team”. One trainee stated that UW residents were helpful in encouraging the trainee to “ask when I am not sure without fear”. With regard to the patient-doctor relationship, different Kenyan trainees mentioned that “talking to patients” and “how to give a patient maximum concern,” were examples of skills learned from this alternative approach, as was demonstrating “ideals of patient care,” and “holistic care”. One trainee stated that the residents were “a source of motivation. Because they don’t give up… they did everything they could do”. Another trainee simply stated, “I learnt the art of compassion”.
Overall, Kenyan trainees reported feeling supported by and more confident as a result of the program. “I can now make confident decisions when left unsupervised due to the acquired knowledge,” one mentioned. “I found it very interactive, if we teach those people what we know, and they teach us what we don’t know, there is going to be a working relationship”.
University of Washington trainees
Of the 32 UW trainees who participated, 30 (94%) completed before and after surveys, making a total of 60 surveys; however, 5 “before” surveys were lost during a robbery, leaving 55 for analysis, including 25 complete before-and-after pairs.
During their 4-week rotations, visiting trainees reported seeing significantly more cases of advanced HIV, tuberculosis, malaria, pediatric HIV, severe malnutrition, neonatal sepsis, and neonatal asphyxia than they ever had previously during their medical training. On average, after their rotation U.S. trainees reported feeling more confident diagnosing disease based on history and physical examination, managing patients in a setting with limited resources, working in very different hospital systems, and using clinical guidelines for management of patients in resource limited settings (Table 1). U.S. trainees also reported engaging in certain educational activities significantly more often during the rotation than they had in their previous training (Table 2). These included teaching people of different cultural and linguistic backgrounds, working with colleagues from different cultural and linguistic backgrounds, and researching educational topics with few resources. Finally, U.S. trainees reported a significantly greater understanding of the complex role that social determinants can play in health, with significant improvements in understanding how poverty affects health and how economic, social and cultural barriers can affect health care seeking (Table 3).
Table 1. UW before- and after-rotation clinical competency scores
Involvement in care of patients with:* | Average score before rotation | Average score during rotation | Average difference in before/after | p-value§
Advanced HIV | 2.36 | 2.92 | 0.56 | 0.05
Tuberculosis | 2.24 | 3.08 | 0.84 | <0.01
Malaria | 1.48 | 2.44 | 0.96 | <0.01
Pediatric HIV | 1.23 | 2.08 | 0.85 | <0.01
Severe malnutrition | 2.00 | 2.64 | 0.64 | 0.03
Neonatal sepsis | 2.42 | 3.25 | 0.83 | 0.03
Neonatal asphyxia | 1.42 | 2.92 | 1.50 | <0.01
Severe diarrhea | 1.73 | 2.36 | 0.63 | 0.01
Comfort with the following clinical skills:** | | | |
Diagnosing disease based on history, physical, and vital signs | 3.16 | 3.56 | 0.40 | 0.08
Managing patients with limited resources | 2.60 | 3.56 | 0.96 | <0.01
Working effectively in hospital systems different from in the USA | 2.58 | 3.46 | 0.88 | <0.01
Use of clinical guidelines for management | 2.58 | 3.75 | 1.17 | <0.01
*Scores based on a 5-point scale: 1 = 0 patients seen, 2 = 1-5 patients seen, 3 = 6-20 patients seen, 4 = 21-50 patients seen, 5 = Greater than 50 patients seen.
**Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very.
§P-values calculated from paired t-tests.
Table 2. UW before- and after-rotation educational experience scores*
Number of times engaged in the following activities: | Average score before rotation | Average score during rotation | Average difference in before/after | P-value§
Teaching people of different cultural or linguistic backgrounds from your own | 2.24 | 3.68 | 1.44 | <0.01
Researching educational topics with few available resources | 1.96 | 3.00 | 1.04 | <0.01
Working with colleagues from different cultural backgrounds from your own | 3.08 | 4.28 | 1.20 | <0.01
*Scores based on a 5-point scale: 1 = Never, 2 = 1-5 Times, 3 = 6-20 Times, 4 = 21-50 Times, 5 = >50 Times.
§P-values calculated from paired t-tests.
Table 3. UW before- and after-rotation community health work scores*
Level of understanding of: | Average score before rotation | Average score during rotation | Average difference in before/after | P-value§
How extreme poverty affects health | 3.13 | 4.13 | 1.00 | <0.01
Economic barriers to health care seeking | 3.33 | 4.21 | 0.88 | <0.01
Social and cultural barriers to health care seeking | 3.33 | 3.96 | 0.63 | 0.01
*Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very.
§P-values calculated from paired t-tests.
When asked what the most educational aspects of the rotation were, most U.S. trainees mentioned patient management with limited resources. “The experience of working in a low resource healthcare [setting] was the most valuable. The decisions I had to make with limited information or imaging taught me skills that I would not have obtained in my residency.” Other commonly cited aspects were the exposure to rare diagnoses such as advanced HIV and TB, and the opportunity to teach and motivate others at the hospital.
Twenty-nine (91%) out of 32 U.S. trainees reported experiencing a change in the way they viewed themselves as clinicians as a result of their rotation in Naivasha. The majority of these cited increased confidence and self-reliance as a clinician, especially when dealing with resource limitations. Other commonly expressed feelings were gratitude for the education, training and resources available in the U.S. and feeling more committed to careers in global health. Interestingly, while the majority of internal and family medicine residents felt more confident, those in procedure-based residencies such as surgery and obstetrics-gynecology tended to feel more humbled by the experience. A surgery resident reported “I’m not more confident having had this experience that so stretched my comfort limits and my abilities. I’m actually more humble, with a new appreciation and hunger for the skills, wisdom, and judgment I have yet to learn as I continue my surgical training,” while an ob-gyn resident stated “I feel that medical education is a great gift that I have received and this rotation [made me want] to share this with others”.
“Challenging” and “rewarding” were the most frequently cited descriptions for the rotation overall, and these were usually mentioned together, implying that the challenges themselves provided the greatest rewards. The educational value was also highlighted, with trainees describing growth in clinical knowledge, understanding about disparities, and learning about the culture. One trainee described the rotation as “The single most unique, challenging and valuable rotation in my residency”.
Conclusions
CEPI is an innovative new program designed to foster educational exchange between medical trainees from two vastly different academic backgrounds: University of Washington and University of Nairobi. The partnership between these two universities is founded on several decades of research collaboration, lending stability and long-term investment to the program. Additionally, the educational supervision and support provided by the CEPI Chief Resident allow trainees to focus on clinical education, rather than the logistics of living and practicing in a foreign setting for a short time. The Chief Resident also provides structure to the training program for both UW and Kenyan trainees, acts as a liaison between the two universities, and reduces the time that Kenyan consultants and administrators have to dedicate to the logistics of the program to essentially zero. In its first year and a half, 32 UW trainees and up to 50 Kenyan trainees have benefitted from this exchange program. All participants from both countries reported having learned a great deal from the experience. There were several clinical and educational competencies that UW residents were able to practice more frequently in Kenya than in the USA. Many UW residents reported increased confidence. Other residents, particularly those from procedure-based specialties, described a rekindled appreciation for the rigors of and supervision provided by specialty training back home. The prevalent sense among Kenyan trainees that interactions with UW residents helped them by teaching “a different approach” is a critical finding. This “different approach,” whether referring to a clinician’s thought process, relationship with the patient, or relationship with a colleague, is as important for trainees in rural areas to learn as clinical knowledge. During these critical years of training, clinicians will develop habits that will likely inform their practice for years to come. Working with senior clinicians from different backgrounds can provide models for various professional behaviors, but this can be difficult to achieve in remote, resource-poor settings. By bringing UW residents to NDH, CEPI effectively provides a variety of professional role models to rural Kenyan trainees. This study had several significant limitations. First, all results are self-reported measures of learning rather than patient care outcomes, which may be more pertinent and less vulnerable to various types of bias. Second, a significant amount of data was lost during a robbery, reducing the number of analyzable results. Third, we limited our evaluation process to participants in the program and did not include perspectives from the hospital staff, who may also be affected by CEPI activities. Fourth, we did not evaluate any “control” group with which to compare our results, and therefore cannot definitively attribute any changes observed to our program. Finally, changes were not measured over time, so we cannot tell whether the observed effects are lasting or only short-lived. As CEPI continues to bring UW residents to Naivasha, there are several further directions the program may take that might enhance and broaden the experience for both UW and Kenyan trainees. Although we have reported only on clinical work and education here, the curriculum encompasses two additional components, and these should be evaluated in future years once they are more established. Both community health work and research are program elements that will help to round out resident experiences in future years of CEPI.
Tele-learning through video broadcasting of teaching conferences would be a valuable addition to the existing teaching structure, and would add to the sense of partner institutions joined in learning. Residents working toward their Master of Medicine degrees (MMed, equivalent to residency training in the US) at UoN will be joining UW residents for rotations at NDH in the future, increasing the sense of partnership among individual trainees at the same stage of training and broadening the opportunities for collaboration and research endeavors. Additionally, opportunities for NDH trainees to travel to Seattle to experience clinical rotations there would provide a true exchange and add much greater depth to the overall relationship between the two institutions. Through CEPI we have found a long-term, sustainable partnership for educational exchange to be beneficial to all parties involved; in the years to come, this promises to become a multi-faceted, profound educational experience that may change the career trajectories of clinicians in Kenya and Seattle alike.
[ "International health electives", "Partnership for innovative medical education in Kenya", "Setting", "Curriculum", "Structure", "Monitoring and evaluation", "Naivasha district hospital trainees", "University of Washington trainees" ]
[ "International health electives (IHEs) in some settings have been shown to increase United States (US) medical trainees’ understanding of public health, primary care, health disparities and barriers to care, in addition to raising National Board of Medical Examiners (NBME) scores and improving physical exam and history taking skills [1]. As such, both the demand for and the prevalence of these experiences are increasing among medical trainees [2]. Currently, 87% of US medical schools offer international clinical experiences to students, and it is estimated that over 25% of students participate in these programs [3,4]. Graduating medical students throughout all disciplines rank the availability of international experiences as among the most important factors in their choice of residency program [5]. However, due to the rigors of residency training and a number of other constraints, only 59% of residency programs offer international clinical rotations to residents, and as few as 10% of residents may be able to take advantage of these opportunities, when they exist [4].\nPitfalls for IHEs over the last several decades have been described. A common problem inherent to international clinical experiences is poorly defined academic goals and clinical competencies [3]. Another commonly cited issue, especially among medical students, is lack of structure and supervision, leading to concerns that one’s presence in the host institution may be detrimental to patient care [6]. Finally, concerns have been raised that short-term international clinical experiences might benefit the sending institution and its trainees more than that of the host institution, and that benefits for the host institution can be difficult to measure [7].\nMedical training in most parts of sub-Saharan Africa includes 6 years of undergraduate medical school and one year of internship, in which newly graduated clinicians (medical officer interns) are placed in government hospitals to practice. Inadequate numbers of medical training facilities and faculty positions have been identified as challenges facing medical education in sub-Saharan Africa [8], leading to trainees with limited access to senior clinicians for both teaching and role modeling. Formal training programs involving regularly scheduled, structured group didactics for doctors recruited to work in rural Africa have been shown to increase “professional socialization,” or the “learning of attitudes, norms, self images, values, beliefs and behavioral patterns associated with professional practice,” in addition to clinical knowledge and skills [9,10]. This enhanced sense of connection was thought to be related to the mode of training rather than the content, and led to long-term professional relationships and increased job satisfaction [9]. Despite these advantages, there are numerous barriers in place for structured continuing medical education in most healthcare facilities in sub-Saharan Africa [11].\nGiven the relative lack of international clinical opportunities for US residents, the limitations on medical education in much of sub-Saharan Africa and the logistical and ethical considerations that have been described pertaining to IHEs for medical students, the Universities of Washington and Nairobi have recently initiated a new clinical training program for UW and UoN residents in Naivasha, Kenya. 
This one-month rotation is based on a long-term, structured educational exchange partnership, and aims to provide trainees with supervised, hands-on experience based on a predefined curriculum covering four main competencies of healthcare delivery in resource-poor settings.", "Researchers at the University of Washington (UW) have collaborated with researchers from the University of Nairobi (UoN) on various projects for over 30 years [12]. This longstanding collaboration recently culminated in a Medical Education Partnership Initiative (MEPI) grant from the National Institutes of Health and PEPFAR. The new initiative, called Partnership for Innovative Medical Education in Kenya, or PRIME-K, is designed to strengthen medical education, improve retention of clinicians in rural parts of Kenya, and foster research capacity in Kenya. Modeled on the Washington, Wyoming, Alaska, Montana and Idaho (WWAMI) Regional Medical Education Program, which provides medical education for all 5 states and extends medical training to very rural and remote areas [13,14], a new decentralized training system has been created for UoN medical students [15]. Since 2010, 238 medical students from UoN have completed rotations in 14 hospitals throughout the country, on both the district and provincial levels. Through the foundation of PRIME-K, the UW-UoN partnership has now launched a new initiative called the Clinical Education Partnership Initiative, which focuses on cross-cultural clinical education experiences in Kenya.", "Naivasha District Hospital (NDH) was chosen as a site for the Clinical Education Partnership Initiative (CEPI), due to its proximity to Nairobi and its success with the establishment of a PRIME-K rotation for UoN medical students. NDH is a Level 4 Ministry of Health facility that serves both the urban and rural populations of Naivasha, a catchment area of over 350,000 people [16]. In total, 7 consultants (attending physicians who are permanent employees) work at the hospital, including two surgeons, one otolaryngologist, one internist, one obstetrician-gynecologist, one radiologist, and one pediatrician. The facility currently has just over 200 beds, many of which are often occupied by two patients. There are two operating rooms, an urgent care center, an HIV clinic, a TB clinic, and a rotating consultant’s clinic where the specialists see their continuity patients. Faculty from both University of Washington and University of Nairobi met with the Ministry of Health and the Medical Superintendent of the hospital, who approved the partnership.", "Prior to program implementation, UW and UoN faculty collaborated to develop a curriculum outlining the goals and competencies that UW residents would be expected to achieve during rotations. Drawing on both faculty experience and resources for global health education from both institutions, four main objectives were defined: Clinical Skills & Knowledge, Education & Mentorship, Community Health Work, and Research & Partnerships. Within each area, specific educational goals for residents were established and the activities of the rotation were designed to achieve the goals.", "A Global Health Chief Resident from the UW Department of Medicine was appointed and moved to Naivasha to establish the program in August 2012, and the first residents arrived in September 2012. The Chief Resident organized all logistics of the program for the participants, including transportation, housing, educational activities, and clinical work. 
The rotation was open to any resident in any department of the University of Washington, and select medical students identified by the medical school’s global health track. Twenty-nine residents from 5 departments and three medical students have completed rotations between September 2012 and April 2014 (see Figure 1). All residents were in their second or third year of residency (out of 3 years for internal and family medicine, 4 years for Ob-Gyn, and 6 years for surgery), and the students were at the end of their fourth year. Each resident attended an orientation in Seattle prior to departure and another one in Naivasha on arrival. Housing and within-country transportation were provided for the UW trainees. The majority of residents spent 4 weeks in Naivasha, and participated in a number of clinical activities including hospital ward rounds, consultant’s outpatient clinic, emergency/urgent care, operating theatre, and dispensary visits. Additionally, UW trainees presented at least two educational lectures during their rotation and spent an average of 2–3 days in rural communities engaged in public health outreach activities. UW trainees worked side by side with their Kenyan colleagues in all settings, including educational conferences. Kenyan trainees and clinical staff participating in CEPI included medical officer interns (medical doctors who have completed a 6-year combined undergraduate & medical school degree), medical officers (medical doctors who have completed medical school and intern year but have not undergone specialty training in residency), clinical officer interns (midlevel providers who have completed 4-year clinical officer training), and medical students.\nFigure 1. Description of UW participants*.\n", "Faculty members from UW and UoN collaborated to create a monitoring and evaluation structure for both UW and NDH trainees prior to the start of the program. Because data were collected from program participants for quality control and program improvement purposes, data collection was not considered to be research by either the UW or the UoN ethics committee, and ethical approval was not applicable. UW residents completed before- and after-rotation surveys (Additional files 1 and 2) that were designed to measure success in achieving the curricular goals of the program, which are outlined above. These were done anonymously; however, while the surveys did not include specific identifying information such as name, some of the questions may have provided information that would allow identification in certain circumstances. Additionally, UW residents were asked to complete a fully anonymous online evaluation of the rotation after their return to Seattle. Kenyan trainees completed anonymous surveys every 3 months describing their involvement with UW residents during each time period (Additional file 3). A group of Kenyan trainees also participated in one semi-structured focus group discussion. Surveys for UW and UoN trainees did not differ based on level of training. Survey scores were compiled in Microsoft Excel, and paired t-tests were used to determine significance of change.", "Surveys were completed by NDH trainees four times, in November 2012, May and September 2013, and March 2014. A total of 50 surveys were completed. Of these, 26 were completed by clinical officer interns (COIs), 11 were completed by medical officer interns (MOIs), 10 were medical officers (MOs), two were students, and one was a clinical officer (CO). 
Trainees from NDH reported learning from UW residents most frequently during ward rounds, teaching conferences, and informal educational settings. Of all educational settings, they ranked surgical theatre and teaching conferences to be most helpful. 100% of NDH trainees reported feeling more confident about medical knowledge and more confident taking care of patients as a result of their interaction with UW residents.\nTrainees at NDH most frequently cited improvements in diagnosis and management of patients when asked how their interactions with UW residents benefitted them. They specified that they learned “more etiologies to conditions,” how to “arrive at a particular diagnosis” using the “correct investigations,” and “coming up with differential [diagnoses]”. Others mentioned getting help with physical examination and “proper management” of patients from the UW residents. Commonly mentioned ways that UW residents helped with medical knowledge were updating, reinforcing, or broadening knowledge gained in medical school. Specific skills that were frequently mentioned were wound care, physical examination, problem-solving and procedures.\nMany NDH trainees stated they appreciated getting a “different approach” to patient care from UW residents. This was mentioned both with respect to clinical decision making and to a broader sense of one’s approach to doctoring. One trainee mentioned that the most useful element of the partnership was “getting to know the alternative way of approaching the usual conditions, and not sticking to the one way of approach,” highlighting the opportunity to broaden one’s methods for clinical decision making. Others mentioned “different approach to get the diagnosis,” and “stimulating my thought process”.\nAdditionally, trainees alluded to learning a different approach to the art of doctoring, in both the patient-doctor relationship and professional relationships between clinicians. Teamwork and communication were stressed by several trainees, who stated that they appreciated learning “work distribution” and “importance of working as a team”. One trainee stated that UW residents were helpful in encouraging the trainee to “ask when I am not sure without fear”. With regard to the patient-doctor relationship, different Kenyan trainees mentioned that “talking to patients” and “how to give a patient maximum concern,” were examples of skills learned from this alternative approach, as was demonstrating “ideals of patient care,” and “holistic care”. One trainee stated that the residents were “a source of motivation. Because they don’t give up… they did everything they could do”. Another trainee simply stated, “I learnt the art of compassion”.\nOverall, Kenyan trainees reported feeling supported by and more confident as a result of the program. “I can now make confident decisions when left unsupervised due to the acquired knowledge,” one mentioned. “I found it very interactive, if we teach those people what we know, and they teach us what we don’t know, there is going to be a working relationship”.", "Of the 32 UW trainees who participated, 30 (94%) completed before and after surveys, making a total of 60 surveys; however, 5 “before” surveys were lost during a robbery, leaving 55 for analysis, including 25 complete before-and-after pairs. 
During their 4-week rotations, visiting trainees reported seeing significantly more cases of advanced HIV, tuberculosis, malaria, pediatric HIV, severe malnutrition, neonatal sepsis, and neonatal asphyxia than they ever had previously during their medical training. On average, after their rotation U.S. trainees reported feeling more confident diagnosing disease based on history and physical examination, managing patients in a setting with limited resources, working in very different hospital systems, and using clinical guidelines for management of patients in resource limited settings (Table 1). U.S. trainees also reported engaging in certain educational activities significantly more often during the rotation than they had in their previous training (Table 2). These included teaching people of different cultural and linguistic backgrounds, working with colleagues from different cultural and linguistic backgrounds, and researching educational topics with few resources. Finally, U.S. trainees reported a significantly greater understanding of the complex role that social determinants can play in health, with significant improvements in understanding how poverty affects health and how economic, social and cultural barriers can affect health care seeking (Table 3).
Table 1. UW before- and after-rotation clinical competency scores
Involvement in care of patients with:* | Average score before rotation | Average score during rotation | Average difference in before/after | p-value§
Advanced HIV | 2.36 | 2.92 | 0.56 | 0.05
Tuberculosis | 2.24 | 3.08 | 0.84 | <0.01
Malaria | 1.48 | 2.44 | 0.96 | <0.01
Pediatric HIV | 1.23 | 2.08 | 0.85 | <0.01
Severe malnutrition | 2.00 | 2.64 | 0.64 | 0.03
Neonatal sepsis | 2.42 | 3.25 | 0.83 | 0.03
Neonatal asphyxia | 1.42 | 2.92 | 1.50 | <0.01
Severe diarrhea | 1.73 | 2.36 | 0.63 | 0.01
Comfort with the following clinical skills:** | | | |
Diagnosing disease based on history, physical, and vital signs | 3.16 | 3.56 | 0.40 | 0.08
Managing patients with limited resources | 2.60 | 3.56 | 0.96 | <0.01
Working effectively in hospital systems different from in the USA | 2.58 | 3.46 | 0.88 | <0.01
Use of clinical guidelines for management | 2.58 | 3.75 | 1.17 | <0.01
*Scores based on a 5-point scale: 1 = 0 patients seen, 2 = 1-5 patients seen, 3 = 6-20 patients seen, 4 = 21-50 patients seen, 5 = Greater than 50 patients seen.
**Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very.
§P-values calculated from paired t-tests.
Table 2. UW before- and after-rotation educational experience scores*
Number of times engaged in the following activities: | Average score before rotation | Average score during rotation | Average difference in before/after | P-value§
Teaching people of different cultural or linguistic backgrounds from your own | 2.24 | 3.68 | 1.44 | <0.01
Researching educational topics with few available resources | 1.96 | 3.00 | 1.04 | <0.01
Working with colleagues from different cultural backgrounds from your own | 3.08 | 4.28 | 1.20 | <0.01
*Scores based on a 5-point scale: 1 = Never, 2 = 1-5 Times, 3 = 6-20 Times, 4 = 21-50 Times, 5 = >50 Times.
§P-values calculated from paired t-tests.
Table 3. UW before- and after-rotation community health work scores*
Level of understanding of: | Average score before rotation | Average score during rotation | Average difference in before/after | P-value§
How extreme poverty affects health | 3.13 | 4.13 | 1.00 | <0.01
Economic barriers to health care seeking | 3.33 | 4.21 | 0.88 | <0.01
Social and cultural barriers to health care seeking | 3.33 | 3.96 | 0.63 | 0.01
*Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very.
§P-values calculated from paired t-tests.\nWhen asked what the most educational aspects of the rotation were, most U.S. trainees mentioned patient management with limited resources. “The experience of working in a low resource healthcare [setting] was the most valuable. The decisions I had to make with limited information or imaging taught me skills that I would not have obtained in my residency.” Other commonly cited aspects were the exposure to rare diagnoses such as advanced HIV and TB, and the opportunity to teach and motivate others at the hospital.\nTwenty-nine (91%) out of 32 U.S. trainees reported experiencing a change in the way they viewed themselves as clinicians as a result of their rotation in Naivasha. The majority of these cited increased confidence and self-reliance as a clinician, especially when dealing with resource limitations. Other commonly expressed feelings were gratitude for the education, training and resources available in the U.S. and feeling more committed to careers in global health. Interestingly, while the majority of internal and family medicine residents felt more confident, those in procedure-based residencies such as surgery and obstetrics-gynecology tended to feel more humbled by the experience. A surgery resident reported “I’m not more confident having had this experience that so stretched my comfort limits and my abilities. I’m actually more humble, with a new appreciation and hunger for the skills, wisdom, and judgment I have yet to learn as I continue my surgical training,” while an ob-gyn resident stated “I feel that medical education is a great gift that I have received and this rotation [made me want] to share this with others”.\n“Challenging” and “rewarding” were the most frequently cited descriptions for the rotation overall, and these were usually mentioned together, implying that the challenges themselves provided the greatest rewards. The educational value was also highlighted, with trainees describing growth in clinical knowledge, understanding about disparities, and learning about the culture. One trainee described the rotation as “The single most unique, challenging and valuable rotation in my residency”." ]
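As a quick consistency check of our own on the reconstructed table rows above (not part of the study's analysis), the reported difference column should equal the during-rotation mean minus the before-rotation mean:

    # Our own sanity check on a few Table 1 rows: (before, during, reported difference).
    rows = {
        "Advanced HIV": (2.36, 2.92, 0.56),
        "Tuberculosis": (2.24, 3.08, 0.84),
        "Neonatal asphyxia": (1.42, 2.92, 1.50),
    }
    for condition, (before, during, reported_diff) in rows.items():
        assert abs((during - before) - reported_diff) < 0.005, condition
    print("Reported differences match during - before.")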
[ null, null, null, null, null, null, null, null ]
[ "Background", "International health electives", "Partnership for innovative medical education in Kenya", "Methods", "Setting", "Curriculum", "Structure", "Monitoring and evaluation", "Results & discussion", "Naivasha district hospital trainees", "University of Washington trainees", "Conclusions" ]
[ " International health electives International health electives (IHEs) in some settings have been shown to increase United States (US) medical trainees’ understanding of public health, primary care, health disparities and barriers to care, in addition to raising National Board of Medical Examiners (NBME) scores and improving physical exam and history taking skills [1]. As such, both the demand for and the prevalence of these experiences are increasing among medical trainees [2]. Currently, 87% of US medical schools offer international clinical experiences to students, and it is estimated that over 25% of students participate in these programs [3,4]. Graduating medical students throughout all disciplines rank the availability of international experiences as among the most important factors in their choice of residency program [5]. However, due to the rigors of residency training and a number of other constraints, only 59% of residency programs offer international clinical rotations to residents, and as few as 10% of residents may be able to take advantage of these opportunities, when they exist [4].\nPitfalls for IHEs over the last several decades have been described. A common problem inherent to international clinical experiences is poorly defined academic goals and clinical competencies [3]. Another commonly cited issue, especially among medical students, is lack of structure and supervision, leading to concerns that one’s presence in the host institution may be detrimental to patient care [6]. Finally, concerns have been raised that short-term international clinical experiences might benefit the sending institution and its trainees more than that of the host institution, and that benefits for the host institution can be difficult to measure [7].\nMedical training in most parts of sub-Saharan Africa includes 6 years of undergraduate medical school and one year of internship, in which newly graduated clinicians (medical officer interns) are placed in government hospitals to practice. Inadequate numbers of medical training facilities and faculty positions have been identified as challenges facing medical education in sub-Saharan Africa [8], leading to trainees with limited access to senior clinicians for both teaching and role modeling. Formal training programs involving regularly scheduled, structured group didactics for doctors recruited to work in rural Africa have been shown to increase “professional socialization,” or the “learning of attitudes, norms, self images, values, beliefs and behavioral patterns associated with professional practice,” in addition to clinical knowledge and skills [9,10]. This enhanced sense of connection was thought to be related to the mode of training rather than the content, and led to long-term professional relationships and increased job satisfaction [9]. Despite these advantages, there are numerous barriers in place for structured continuing medical education in most healthcare facilities in sub-Saharan Africa [11].\nGiven the relative lack of international clinical opportunities for US residents, the limitations on medical education in much of sub-Saharan Africa and the logistical and ethical considerations that have been described pertaining to IHEs for medical students, the Universities of Washington and Nairobi have recently initiated a new clinical training program for UW and UoN residents in Naivasha, Kenya. 
This one-month rotation is based on a long-term, structured educational exchange partnership, and aims to provide trainees with supervised, hands-on experience based on a predefined curriculum covering four main competencies of healthcare delivery in resource-poor settings.\nInternational health electives (IHEs) in some settings have been shown to increase United States (US) medical trainees’ understanding of public health, primary care, health disparities and barriers to care, in addition to raising National Board of Medical Examiners (NBME) scores and improving physical exam and history taking skills [1]. As such, both the demand for and the prevalence of these experiences are increasing among medical trainees [2]. Currently, 87% of US medical schools offer international clinical experiences to students, and it is estimated that over 25% of students participate in these programs [3,4]. Graduating medical students throughout all disciplines rank the availability of international experiences as among the most important factors in their choice of residency program [5]. However, due to the rigors of residency training and a number of other constraints, only 59% of residency programs offer international clinical rotations to residents, and as few as 10% of residents may be able to take advantage of these opportunities, when they exist [4].\nPitfalls for IHEs over the last several decades have been described. A common problem inherent to international clinical experiences is poorly defined academic goals and clinical competencies [3]. Another commonly cited issue, especially among medical students, is lack of structure and supervision, leading to concerns that one’s presence in the host institution may be detrimental to patient care [6]. Finally, concerns have been raised that short-term international clinical experiences might benefit the sending institution and its trainees more than that of the host institution, and that benefits for the host institution can be difficult to measure [7].\nMedical training in most parts of sub-Saharan Africa includes 6 years of undergraduate medical school and one year of internship, in which newly graduated clinicians (medical officer interns) are placed in government hospitals to practice. Inadequate numbers of medical training facilities and faculty positions have been identified as challenges facing medical education in sub-Saharan Africa [8], leading to trainees with limited access to senior clinicians for both teaching and role modeling. Formal training programs involving regularly scheduled, structured group didactics for doctors recruited to work in rural Africa have been shown to increase “professional socialization,” or the “learning of attitudes, norms, self images, values, beliefs and behavioral patterns associated with professional practice,” in addition to clinical knowledge and skills [9,10]. This enhanced sense of connection was thought to be related to the mode of training rather than the content, and led to long-term professional relationships and increased job satisfaction [9]. 
Despite these advantages, there are numerous barriers in place for structured continuing medical education in most healthcare facilities in sub-Saharan Africa [11].\nGiven the relative lack of international clinical opportunities for US residents, the limitations on medical education in much of sub-Saharan Africa and the logistical and ethical considerations that have been described pertaining to IHEs for medical students, the Universities of Washington and Nairobi have recently initiated a new clinical training program for UW and UoN residents in Naivasha, Kenya. This one-month rotation is based on a long-term, structured educational exchange partnership, and aims to provide trainees with supervised, hands-on experience based on a predefined curriculum covering four main competencies of healthcare delivery in resource-poor settings.\n Partnership for innovative medical education in Kenya Researchers at the University of Washington (UW) have collaborated with researchers from the University of Nairobi (UoN) on various projects for over 30 years [12]. This longstanding collaboration recently culminated in a Medical Education Partnership Initiative (MEPI) grant from the National Institutes of Health and PEPFAR. The new initiative, called Partnership for Innovative Medical Education in Kenya, or PRIME-K, is designed to strengthen medical education, improve retention of clinicians in rural parts of Kenya, and foster research capacity in Kenya. Modeled on the Washington, Wyoming, Alaska, Montana and Idaho (WWAMI) Regional Medical Education Program, which provides medical education for all 5 states and extends medical training to very rural and remote areas [13,14], a new decentralized training system has been created for UoN medical students [15]. Since 2010, 238 medical students from UoN have completed rotations in 14 hospitals throughout the country, on both the district and provincial levels. Through the foundation of PRIME-K, the UW-UoN partnership has now launched a new initiative called the Clinical Education Partnership Initiative, which focuses on cross-cultural clinical education experiences in Kenya.\nResearchers at the University of Washington (UW) have collaborated with researchers from the University of Nairobi (UoN) on various projects for over 30 years [12]. This longstanding collaboration recently culminated in a Medical Education Partnership Initiative (MEPI) grant from the National Institutes of Health and PEPFAR. The new initiative, called Partnership for Innovative Medical Education in Kenya, or PRIME-K, is designed to strengthen medical education, improve retention of clinicians in rural parts of Kenya, and foster research capacity in Kenya. Modeled on the Washington, Wyoming, Alaska, Montana and Idaho (WWAMI) Regional Medical Education Program, which provides medical education for all 5 states and extends medical training to very rural and remote areas [13,14], a new decentralized training system has been created for UoN medical students [15]. Since 2010, 238 medical students from UoN have completed rotations in 14 hospitals throughout the country, on both the district and provincial levels. 
Through the foundation of PRIME-K, the UW-UoN partnership has now launched a new initiative called the Clinical Education Partnership Initiative, which focuses on cross-cultural clinical education experiences in Kenya.", "International health electives (IHEs) in some settings have been shown to increase United States (US) medical trainees’ understanding of public health, primary care, health disparities and barriers to care, in addition to raising National Board of Medical Examiners (NBME) scores and improving physical exam and history taking skills [1]. As such, both the demand for and the prevalence of these experiences are increasing among medical trainees [2]. Currently, 87% of US medical schools offer international clinical experiences to students, and it is estimated that over 25% of students participate in these programs [3,4]. Graduating medical students throughout all disciplines rank the availability of international experiences as among the most important factors in their choice of residency program [5]. However, due to the rigors of residency training and a number of other constraints, only 59% of residency programs offer international clinical rotations to residents, and as few as 10% of residents may be able to take advantage of these opportunities, when they exist [4].\nPitfalls for IHEs over the last several decades have been described. A common problem inherent to international clinical experiences is poorly defined academic goals and clinical competencies [3]. Another commonly cited issue, especially among medical students, is lack of structure and supervision, leading to concerns that one’s presence in the host institution may be detrimental to patient care [6]. Finally, concerns have been raised that short-term international clinical experiences might benefit the sending institution and its trainees more than that of the host institution, and that benefits for the host institution can be difficult to measure [7].\nMedical training in most parts of sub-Saharan Africa includes 6 years of undergraduate medical school and one year of internship, in which newly graduated clinicians (medical officer interns) are placed in government hospitals to practice. Inadequate numbers of medical training facilities and faculty positions have been identified as challenges facing medical education in sub-Saharan Africa [8], leading to trainees with limited access to senior clinicians for both teaching and role modeling. Formal training programs involving regularly scheduled, structured group didactics for doctors recruited to work in rural Africa have been shown to increase “professional socialization,” or the “learning of attitudes, norms, self images, values, beliefs and behavioral patterns associated with professional practice,” in addition to clinical knowledge and skills [9,10]. This enhanced sense of connection was thought to be related to the mode of training rather than the content, and led to long-term professional relationships and increased job satisfaction [9]. 
Despite these advantages, there are numerous barriers in place for structured continuing medical education in most healthcare facilities in sub-Saharan Africa [11].\nGiven the relative lack of international clinical opportunities for US residents, the limitations on medical education in much of sub-Saharan Africa and the logistical and ethical considerations that have been described pertaining to IHEs for medical students, the Universities of Washington and Nairobi have recently initiated a new clinical training program for UW and UoN residents in Naivasha, Kenya. This one-month rotation is based on a long-term, structured educational exchange partnership, and aims to provide trainees with supervised, hands-on experience based on a predefined curriculum covering four main competencies of healthcare delivery in resource-poor settings.", "Researchers at the University of Washington (UW) have collaborated with researchers from the University of Nairobi (UoN) on various projects for over 30 years [12]. This longstanding collaboration recently culminated in a Medical Education Partnership Initiative (MEPI) grant from the National Institutes of Health and PEPFAR. The new initiative, called Partnership for Innovative Medical Education in Kenya, or PRIME-K, is designed to strengthen medical education, improve retention of clinicians in rural parts of Kenya, and foster research capacity in Kenya. Modeled on the Washington, Wyoming, Alaska, Montana and Idaho (WWAMI) Regional Medical Education Program, which provides medical education for all 5 states and extends medical training to very rural and remote areas [13,14], a new decentralized training system has been created for UoN medical students [15]. Since 2010, 238 medical students from UoN have completed rotations in 14 hospitals throughout the country, on both the district and provincial levels. Through the foundation of PRIME-K, the UW-UoN partnership has now launched a new initiative called the Clinical Education Partnership Initiative, which focuses on cross-cultural clinical education experiences in Kenya.", " Setting Naivasha District Hospital (NDH) was chosen as a site for the Clinical Education Partnership Initiative (CEPI), due to its proximity to Nairobi and its success with the establishment of a PRIME-K rotation for UoN medical students. NDH is a Level 4 Ministry of Health facility that serves both the urban and rural populations of Naivasha, a catchment area of over 350,000 people [16]. In total, 7 consultants (attending physicians who are permanent employees) work at the hospital, including two surgeons, one otolaryngologist, one internist, one obstetrician-gynecologist, one radiologist, and one pediatrician. The facility currently has just over 200 beds, many of which are often occupied by two patients. There are two operating rooms, an urgent care center, an HIV clinic, a TB clinic, and a rotating consultant’s clinic where the specialists see their continuity patients. Faculty from both University of Washington and University of Nairobi met with the Ministry of Health and the Medical Superintendent of the hospital, who approved the partnership.\n Curriculum Prior to program implementation, UW and UoN faculty collaborated to develop a curriculum outlining the goals and competencies that UW residents would be expected to achieve during rotations. Drawing on both faculty experience and resources for global health education from both institutions, four main objectives were defined: Clinical Skills & Knowledge, Education & Mentorship, Community Health Work, and Research & Partnerships. Within each area, specific educational goals for residents were established and the activities of the rotation were designed to achieve the goals.\n Structure A Global Health Chief Resident from the UW Department of Medicine was appointed and moved to Naivasha to establish the program in August 2012, and the first residents arrived in September 2012. The Chief Resident organized all logistics of the program for the participants, including transportation, housing, educational activities, and clinical work. The rotation was open to any resident in any department of the University of Washington, and select medical students identified by the medical school’s global health track. Twenty-nine residents from 5 departments and three medical students have completed rotations between September 2012 and April 2014 (see Figure 1). All residents were in their second or third year of residency (out of 3 years for internal and family medicine, 4 years for Ob-Gyn, and 6 years for surgery), and the students were at the end of their fourth year. Each resident attended an orientation in Seattle prior to departure and another one in Naivasha on arrival. Housing and within-country transportation were provided for the UW trainees. The majority of residents spent 4 weeks in Naivasha, and participated in a number of clinical activities including hospital ward rounds, consultant’s outpatient clinic, emergency/urgent care, operating theatre, and dispensary visits. Additionally, UW trainees presented at least two educational lectures during their rotation and spent an average of 2–3 days in rural communities engaged in public health outreach activities. UW trainees worked side by side with their Kenyan colleagues in all settings, including educational conferences. 
Kenyan trainees and clinical staff participating in CEPI included medical officer interns (medical doctors who have completed a 6-year combined undergraduate & medical school degree), medical officers (medical doctors who have completed medical school and intern year but have not undergone specialty training in residency), clinical officer interns (midlevel providers who have completed 4-year clinical officer training), and medical students.\nFigure 1: Description of UW participants*.\n\n Monitoring and evaluation Faculty members from UW and UoN collaborated to create a monitoring and evaluation structure for both UW and NDH trainees prior to the start of the program. Because data were collected from program participants for quality control and program improvement purposes, data collection was not considered to be research by both the UW and UoN ethics committees, and ethical approval was not applicable. UW residents completed before- and after-rotation surveys (Additional files 1 and 2) that were designed to measure success in achieving the curricular goals of the program, which are outlined above. These were done anonymously; however, while the surveys did not include specific identifying information such as name, some of the questions may have provided information that would allow identification in certain circumstances. 
Additionally, UW residents were asked to complete a fully anonymous online evaluation of the rotation after their return to Seattle. Kenyan trainees completed anonymous surveys every 3 months describing their involvement with UW residents during each time period (Additional file 3). A group of Kenyan trainees also participated in one semi-structured focus group discussion. Surveys for UW and UoN trainees did not differ based on level of training. Survey scores were compiled in Microsoft Excel, and paired t-tests were used to determine significance of change.", "Naivasha District Hospital (NDH) was chosen as a site for the Clinical Education Partnership Initiative (CEPI), due to its proximity to Nairobi and its success with the establishment of a PRIME-K rotation for UoN medical students. NDH is a Level 4 Ministry of Health facility that serves both the urban and rural populations of Naivasha, a catchment area of over 350,000 people [16]. In total, 7 consultants (attending physicians who are permanent employees) work at the hospital, including two surgeons, one otolaryngologist, one internist, one obstetrician-gynecologist, one radiologist, and one pediatrician. The facility currently has just over 200 beds, many of which are often occupied by two patients. There are two operating rooms, an urgent care center, an HIV clinic, a TB clinic, and a rotating consultant’s clinic where the specialists see their continuity patients. Faculty from both University of Washington and University of Nairobi met with the Ministry of Health and the Medical Superintendent of the hospital, who approved the partnership.", "Prior to program implementation, UW and UoN faculty collaborated to develop a curriculum outlining the goals and competencies that UW residents would be expected to achieve during rotations. Drawing on both faculty experience and resources for global health education from both institutions, four main objectives were defined: Clinical Skills & Knowledge, Education & Mentorship, Community Health Work, and Research & Partnerships. 
Within each area, specific educational goals for residents were established and the activities of the rotation were designed to achieve the goals.", "A Global Health Chief Resident from the UW Department of Medicine was appointed and moved to Naivasha to establish the program in August 2012, and the first residents arrived in September 2012. The Chief Resident organized all logistics of the program for the participants, including transportation, housing, educational activities, and clinical work. The rotation was open to any resident in any department of the University of Washington, and select medical students identified by the medical school’s global health track. Twenty-nine residents from 5 departments and three medical students have completed rotations between September 2012 and April 2014 (see Figure 1). All residents were in their second or third year of residency (out of 3 years for internal and family medicine, 4 years for Ob-Gyn, and 6 years for surgery), and the students were at the end of their fourth year. Each resident attended an orientation in Seattle prior to departure and another one in Naivasha on arrival. Housing and within-country transportation were provided for the UW trainees. The majority of residents spent 4 weeks in Naivasha, and participated in a number of clinical activities including hospital ward rounds, consultant’s outpatient clinic, emergency/urgent care, operating theatre, and dispensary visits. Additionally, UW trainees presented at least two educational lectures during their rotation and spent an average of 2–3 days in rural communities engaged in public health outreach activities. UW trainees worked side by side with their Kenyan colleagues in all settings, including educational conferences. Kenyan trainees and clinical staff participating in CEPI included medical officer interns (medical doctors who have completed a 6-year combined undergraduate & medical school degree), medical officers (medical doctors who have completed medical school and intern year but have not undergone specialty training in residency), clinical officer interns (midlevel providers who have completed 4-year clinical officer training), and medical students.\nFigure 1: Description of UW participants*.\n", "Faculty members from UW and UoN collaborated to create a monitoring and evaluation structure for both UW and NDH trainees prior to the start of the program. Because data were collected from program participants for quality control and program improvement purposes, data collection was not considered to be research by both the UW and UoN ethics committees, and ethical approval was not applicable. UW residents completed before- and after-rotation surveys (Additional files 1 and 2) that were designed to measure success in achieving the curricular goals of the program, which are outlined above. These were done anonymously; however, while the surveys did not include specific identifying information such as name, some of the questions may have provided information that would allow identification in certain circumstances. Additionally, UW residents were asked to complete a fully anonymous online evaluation of the rotation after their return to Seattle. Kenyan trainees completed anonymous surveys every 3 months describing their involvement with UW residents during each time period (Additional file 3). A group of Kenyan trainees also participated in one semi-structured focus group discussion. 
Surveys for UW and UoN trainees did not differ based on level of training. Survey scores were compiled in Microsoft Excel, and paired t-tests were used to determine significance of change.", " Naivasha district hospital trainees Surveys were completed by NDH trainees four times, in November 2012, May and September 2013, and March 2014. A total of 50 surveys were completed. Of these, 26 were completed by clinical officer interns (COIs), 11 were completed by medical officer interns (MOIs), 10 were medical officers (MOs), two were students, and one was a clinical officer (CO). Trainees from NDH reported learning from UW residents most frequently during ward rounds, teaching conferences, and informal educational settings. Of all educational settings, they ranked surgical theatre and teaching conferences to be most helpful. 100% of NDH trainees reported feeling more confident about medical knowledge and more confident taking care of patients as a result of their interaction with UW residents.\nTrainees at NDH most frequently cited improvements in diagnosis and management of patients when asked how their interactions with UW residents benefitted them. They specified that they learned “more etiologies to conditions,” how to “arrive at a particular diagnosis” using the “correct investigations,” and “coming up with differential [diagnoses]”. Others mentioned getting help with physical examination and “proper management” of patients from the UW residents. Commonly mentioned ways that UW residents helped with medical knowledge were updating, reinforcing, or broadening knowledge gained in medical school. Specific skills that were frequently mentioned were wound care, physical examination, problem-solving and procedures.\nMany NDH trainees stated they appreciated getting a “different approach” to patient care from UW residents. This was mentioned both with respect to clinical decision making and to a broader sense of one’s approach to doctoring. One trainee mentioned that the most useful element of the partnership was “getting to know the alternative way of approaching the usual conditions, and not sticking to the one way of approach,” highlighting the opportunity to broaden one’s methods for clinical decision making. Others mentioned “different approach to get the diagnosis,” and “stimulating my thought process”.\nAdditionally, trainees alluded to learning a different approach to the art of doctoring, in both the patient-doctor relationship and professional relationships between clinicians. Teamwork and communication were stressed by several trainees, who stated that they appreciated learning “work distribution” and “importance of working as a team”. One trainee stated that UW residents were helpful in encouraging the trainee to “ask when I am not sure without fear”. With regard to the patient-doctor relationship, different Kenyan trainees mentioned that “talking to patients” and “how to give a patient maximum concern,” were examples of skills learned from this alternative approach, as was demonstrating “ideals of patient care,” and “holistic care”. One trainee stated that the residents were “a source of motivation. Because they don’t give up… they did everything they could do”. Another trainee simply stated, “I learnt the art of compassion”.\nOverall, Kenyan trainees reported feeling supported by and more confident as a result of the program. “I can now make confident decisions when left unsupervised due to the acquired knowledge,” one mentioned. 
“I found it very interactive, if we teach those people what we know, and they teach us what we don’t know, there is going to be a working relationship”.\n University of Washington trainees Of the 32 UW trainees who participated, 30 (94%) completed before and after surveys, making a total of 60 surveys; however, 5 “before” surveys were lost during a robbery, leaving 55 for analysis, including 25 complete before-and-after pairs. During their 4-week rotations, visiting trainees reported seeing significantly more cases of advanced HIV, tuberculosis, malaria, pediatric HIV, severe malnutrition, neonatal sepsis, and neonatal asphyxiation cases than they ever had previously during their medical training. On average, after their rotation U.S. trainees reported feeling more confident diagnosing disease based on history and physical examination, managing patients in a setting with limited resources, working in very different hospital systems, and using clinical guidelines for management of patients in resource limited settings (Table 1). U.S. trainees also reported engaging in certain educational activities significantly more often during the rotation than they had in their previous training (Table 2). These included teaching people of different cultural and linguistic backgrounds, working with colleagues from different cultural and linguistic backgrounds, and researching educational topics with few resources. Finally, U.S. trainees reported a significantly greater understanding of the complex role that social determinants can play in health, with significant improvements in understanding how poverty affects health and how economic, social and cultural barriers can affect health care seeking (Table 3).\n\nTable 1. UW Before- and after-rotation clinical competency scores\nInvolvement in care of patients with:* | Average score before rotation | Average score during rotation | Average difference in before/after | p-value§\nAdvanced HIV | 2.36 | 2.92 | 0.56 | 0.05\nTuberculosis | 2.24 | 3.08 | 0.84 | <0.01\nMalaria | 1.48 | 2.44 | 0.96 | <0.01\nPediatric HIV | 1.23 | 2.08 | 0.85 | <0.01\nSevere malnutrition | 2.00 | 2.64 | 0.64 | 0.03\nNeonatal sepsis | 2.42 | 3.25 | 0.83 | 0.03\nNeonatal asphyxia | 1.42 | 2.92 | 1.50 | <0.01\nSevere diarrhea | 1.73 | 2.36 | 0.63 | 0.01\nComfort with the following clinical skills:** | | | |\nDiagnosing disease based on history, physical, and vital signs | 3.16 | 3.56 | 0.40 | 0.08\nManaging patients with limited resources | 2.60 | 3.56 | 0.96 | <0.01\nWorking effectively in hospital systems different from in the USA | 2.58 | 3.46 | 0.88 | <0.01\nUse of clinical guidelines for management | 2.58 | 3.75 | 1.17 | <0.01\n*Scores based on a 5-point scale: 1 = 0 patients seen, 2 = 1-5 patients seen, 3 = 6-20 patients seen, 4 = 21-50 patients seen, 5 = Greater than 50 patients seen. **Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very. §P-values calculated from paired t-tests.\n\nTable 2. UW Before- and after-rotation educational experience scores*\nNumber of times engaged in the following activities: | Average score before rotation | Average score during rotation | Average difference in before/after | P-value§\nTeaching people of different cultural or linguistic backgrounds from your own | 2.24 | 3.68 | 1.44 | <0.01\nResearching educational topics with few available resources | 1.96 | 3.00 | 1.04 | <0.01\nWorking with colleagues from different cultural backgrounds from your own | 3.08 | 4.28 | 1.20 | <0.01\n*Scores based on a 5-point scale: 1 = Never, 2 = 1-5 Times, 3 = 6-20 Times, 4 = 21-50 Times, 5 = >50 Times. §P-values calculated from paired t-tests.\n\nTable 3. UW Before- and after-rotation community health work scores*\nLevel of understanding of: | Average score before rotation | Average score during rotation | Average difference in before/after | P-value§\nHow extreme poverty affects health | 3.13 | 4.13 | 1.00 | <0.01\nEconomic barriers to health care seeking | 3.33 | 4.21 | 0.88 | <0.01\nSocial and cultural barriers to health care seeking | 3.33 | 3.96 | 0.63 | 0.01\n*Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very. §P-values calculated from paired t-tests.\n\nWhen asked what the most educational aspects of the rotation were, most U.S. trainees mentioned patient management with limited resources. “The experience of working in a low resource healthcare [setting] was the most valuable. The decisions I had to make with limited information or imaging taught me skills that I would not have obtained in my residency.” Other commonly cited aspects were the exposure to rare diagnoses such as advanced HIV and TB, and the opportunity to teach and motivate others at the hospital.\nTwenty-nine (91%) out of 32 U.S. trainees reported experiencing a change in the way they viewed themselves as clinicians as a result of their rotation in Naivasha. The majority of these cited increased confidence and self-reliance as a clinician, especially when dealing with resource limitations. Other commonly expressed feelings were gratitude for the education, training and resources available in the U.S. and feeling more committed to careers in global health. Interestingly, while the majority of internal and family medicine residents felt more confidence, those in procedure-based residencies such as surgery and obstetrics-gynecology tended to feel more humbled by the experience. A surgery resident reported “I’m not more confident having had this experience that so stretched my comfort limits and my abilities. I’m actually more humble, with a new appreciation and hunger for the skills, wisdom, and judgment I have yet to learn as I continue my surgical training,” while an ob-gyn resident stated “I feel that medical education is a great gift that I have received and this rotation [made me want] to share this with others”.\n“Challenging” and “rewarding” were the most frequently cited descriptions for the rotation overall, and these were usually mentioned together, implying that the challenges themselves provided the greatest rewards. The educational value was also highlighted, with trainees describing growth in clinical knowledge, understanding about disparities, and learning about the culture. 
One trainee described the rotation as “The single most unique, challenging and valuable rotation in my residency”.", "Surveys were completed by NDH trainees four times, in November 2012, May and September 2013, and March 2014. A total of 50 surveys were completed. 
Of these, 26 were completed by clinical officer interns (COIs), 11 were completed by medical officer interns (MOIs), 10 were medical officers (MOs), two were students, and one was a clinical officer (CO). Trainees from NDH reported learning from UW residents most frequently during ward rounds, teaching conferences, and informal educational settings. Of all educational settings, they ranked surgical theatre and teaching conferences to be most helpful. 100% of NDH trainees reported feeling more confident about medical knowledge and more confident taking care of patients as a result of their interaction with UW residents.\nTrainees at NDH most frequently cited improvements in diagnosis and management of patients when asked how their interactions with UW residents benefitted them. They specified that they learned “more etiologies to conditions,” how to “arrive at a particular diagnosis” using the “correct investigations,” and “coming up with differential [diagnoses]”. Others mentioned getting help with physical examination and “proper management” of patients from the UW residents. Commonly mentioned ways that UW residents helped with medical knowledge were updating, reinforcing, or broadening knowledge gained in medical school. Specific skills that were frequently mentioned were wound care, physical examination, problem-solving and procedures.\nMany NDH trainees stated they appreciated getting a “different approach” to patient care from UW residents. This was mentioned both with respect to clinical decision making and to a broader sense of one’s approach to doctoring. One trainee mentioned that the most useful element of the partnership was “getting to know the alternative way of approaching the usual conditions, and not sticking to the one way of approach,” highlighting the opportunity to broaden one’s methods for clinical decision making. Others mentioned “different approach to get the diagnosis,” and “stimulating my thought process”.\nAdditionally, trainees alluded to learning a different approach to the art of doctoring, in both the patient-doctor relationship and professional relationships between clinicians. Teamwork and communication were stressed by several trainees, who stated that they appreciated learning “work distribution” and “importance of working as a team”. One trainee stated that UW residents were helpful in encouraging the trainee to “ask when I am not sure without fear”. With regard to the patient-doctor relationship, different Kenyan trainees mentioned that “talking to patients” and “how to give a patient maximum concern,” were examples of skills learned from this alternative approach, as was demonstrating “ideals of patient care,” and “holistic care”. One trainee stated that the residents were “a source of motivation. Because they don’t give up… they did everything they could do”. Another trainee simply stated, “I learnt the art of compassion”.\nOverall, Kenyan trainees reported feeling supported by and more confident as a result of the program. “I can now make confident decisions when left unsupervised due to the acquired knowledge,” one mentioned. “I found it very interactive, if we teach those people what we know, and they teach us what we don’t know, there is going to be a working relationship”.", "Of the 32 UW trainees who participated, 30 (94%) completed before and after surveys, making a total of 60 surveys; however, 5 “before” surveys were lost during a robbery, leaving 55 for analysis, including 25 complete before-and-after pairs. 
During their 4-week rotations, visiting trainees reported seeing significantly more cases of advanced HIV, tuberculosis, malaria, pediatric HIV, severe malnutrition, neonatal sepsis, and neonatal asphyxiation cases than they ever had previously during their medical training. On average, after their rotation U.S. trainees reported feeling more confident diagnosing disease based on history and physical examination, managing patients in a setting with limited resources, working in very different hospital systems, and using clinical guidelines for management of patients in resource limited settings (Table 1). U.S. trainees also reported engaging in certain educational activities significantly more often during the rotation than they had in their previous training (Table 2). These included teaching people of different cultural and linguistic backgrounds, working with colleagues from different cultural and linguistic backgrounds, and researching educational topics with few resources. Finally, U.S. trainees reported a significantly greater understanding of the complex role that social determinants can play in health, with significant improvements in understanding how poverty affects health and how economic, social and cultural barriers can affect health care seeking (Table 3).\n\nTable 1. UW Before- and after-rotation clinical competency scores\nInvolvement in care of patients with:* | Average score before rotation | Average score during rotation | Average difference in before/after | p-value§\nAdvanced HIV | 2.36 | 2.92 | 0.56 | 0.05\nTuberculosis | 2.24 | 3.08 | 0.84 | <0.01\nMalaria | 1.48 | 2.44 | 0.96 | <0.01\nPediatric HIV | 1.23 | 2.08 | 0.85 | <0.01\nSevere malnutrition | 2.00 | 2.64 | 0.64 | 0.03\nNeonatal sepsis | 2.42 | 3.25 | 0.83 | 0.03\nNeonatal asphyxia | 1.42 | 2.92 | 1.50 | <0.01\nSevere diarrhea | 1.73 | 2.36 | 0.63 | 0.01\nComfort with the following clinical skills:** | | | |\nDiagnosing disease based on history, physical, and vital signs | 3.16 | 3.56 | 0.40 | 0.08\nManaging patients with limited resources | 2.60 | 3.56 | 0.96 | <0.01\nWorking effectively in hospital systems different from in the USA | 2.58 | 3.46 | 0.88 | <0.01\nUse of clinical guidelines for management | 2.58 | 3.75 | 1.17 | <0.01\n*Scores based on a 5-point scale: 1 = 0 patients seen, 2 = 1-5 patients seen, 3 = 6-20 patients seen, 4 = 21-50 patients seen, 5 = Greater than 50 patients seen. **Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very. §P-values calculated from paired t-tests.\n\nTable 2. UW Before- and after-rotation educational experience scores*\nNumber of times engaged in the following activities: | Average score before rotation | Average score during rotation | Average difference in before/after | P-value§\nTeaching people of different cultural or linguistic backgrounds from your own | 2.24 | 3.68 | 1.44 | <0.01\nResearching educational topics with few available resources | 1.96 | 3.00 | 1.04 | <0.01\nWorking with colleagues from different cultural backgrounds from your own | 3.08 | 4.28 | 1.20 | <0.01\n*Scores based on a 5-point scale: 1 = Never, 2 = 1-5 Times, 3 = 6-20 Times, 4 = 21-50 Times, 5 = >50 Times. §P-values calculated from paired t-tests.\n\nTable 3. UW Before- and after-rotation community health work scores*\nLevel of understanding of: | Average score before rotation | Average score during rotation | Average difference in before/after | P-value§\nHow extreme poverty affects health | 3.13 | 4.13 | 1.00 | <0.01\nEconomic barriers to health care seeking | 3.33 | 4.21 | 0.88 | <0.01\nSocial and cultural barriers to health care seeking | 3.33 | 3.96 | 0.63 | 0.01\n*Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very. §P-values calculated from paired t-tests.\n\nWhen asked what the most educational aspects of the rotation were, most U.S. trainees mentioned patient management with limited resources. “The experience of working in a low resource healthcare [setting] was the most valuable. The decisions I had to make with limited information or imaging taught me skills that I would not have obtained in my residency.” Other commonly cited aspects were the exposure to rare diagnoses such as advanced HIV and TB, and the opportunity to teach and motivate others at the hospital.\nTwenty-nine (91%) out of 32 U.S. trainees reported experiencing a change in the way they viewed themselves as clinicians as a result of their rotation in Naivasha. The majority of these cited increased confidence and self-reliance as a clinician, especially when dealing with resource limitations. Other commonly expressed feelings were gratitude for the education, training and resources available in the U.S. and feeling more committed to careers in global health. Interestingly, while the majority of internal and family medicine residents felt more confidence, those in procedure-based residencies such as surgery and obstetrics-gynecology tended to feel more humbled by the experience. A surgery resident reported “I’m not more confident having had this experience that so stretched my comfort limits and my abilities. I’m actually more humble, with a new appreciation and hunger for the skills, wisdom, and judgment I have yet to learn as I continue my surgical training,” while an ob-gyn resident stated “I feel that medical education is a great gift that I have received and this rotation [made me want] to share this with others”.\n“Challenging” and “rewarding” were the most frequently cited descriptions for the rotation overall, and these were usually mentioned together, implying that the challenges themselves provided the greatest rewards. The educational value was also highlighted, with trainees describing growth in clinical knowledge, understanding about disparities, and learning about the culture. One trainee described the rotation as “The single most unique, challenging and valuable rotation in my residency”.", "CEPI is an innovative new program designed to foster educational exchange between medical trainees from two vastly different academic backgrounds: University of Washington and University of Nairobi. The partnership between these two universities is founded in several decades of research collaboration, lending stability and long-term investment to the program. 
Additionally, educational supervision and support provided by the CEPI Chief Resident allows trainees to focus on clinical education, rather than logistics of living and practicing in a foreign setting for a short time. The Chief Resident also provides structure to the training program for both UW and Kenyan trainees, acts as a liaison between the two universities, and reduces to essentially zero the time that Kenyan consultants and administrators must dedicate to program logistics.\nIn its first year and a half, 32 UW trainees and up to 50 Kenyan trainees have benefitted from this exchange program. All participants from both countries reported having learned a great deal from the experience. There were several clinical and educational competencies that UW residents were able to practice more frequently in Kenya than in the USA. Many UW residents reported increased confidence. Other residents, particularly those from procedure-based specialties, described a rekindled appreciation for the rigors of, and the supervision provided by, specialty training back home.\nThe prevalent sense among Kenyan trainees that interactions with UW residents helped them by teaching “a different approach” is a critical finding. This “different approach,” whether referring to a clinician’s thought process, relationship with the patient, or relationship with a colleague, is as important for trainees in rural areas to learn as clinical knowledge. During these critical years of training, clinicians will develop habits that will likely inform their practice for years to come. Working with senior clinicians from different backgrounds can provide models for various professional behaviors, but this can be difficult to achieve in remote, resource-poor settings. By bringing UW residents to NDH, CEPI effectively provides a variety of professional role models to rural Kenyan trainees.\nThis study had several significant limitations. First, all results are self-reported measures of learning, and not patient care outcomes, which may be more pertinent and less vulnerable to various types of bias. Second, a significant amount of data was lost during a robbery, reducing the number of analyzable results. Third, we limited our evaluation process to participants in the program and did not include perspectives from the hospital staff, who may also be affected by CEPI activities. Fourth, we did not evaluate any “control” group with which to compare our results, and therefore cannot definitively attribute any changes observed to our program. Finally, changes over time were not measured, so it is impossible to know whether the observed effects will last.\nAs CEPI continues to bring UW residents to Naivasha, there are several further directions the program may take that might enhance and broaden the experience for both UW and Kenyan trainees. Although we have reported on only clinical work and education here, the curriculum encompasses two additional components of the program, and these should be evaluated in future years once they are more established. Both community health work and research are program elements that will help to round out resident experiences in future years of CEPI. Tele-learning through video broadcasting of teaching conferences would be a great addition to the existing teaching structure, and would add to the sense of partner institutions joined in learning. 
Residents working toward their Masters of Medicine degrees (MMed, equivalent to residency training in the US) at UoN will be joining UW residents for rotations at NDH in the future, increasing the sense of partnership among individual trainees at the same stage of training and broadening the opportunities for collaboration and research endeavors. Additionally, opportunities for NDH trainees to travel to Seattle to experience clinical rotations there would provide a true exchange, and much greater depth to the overall relationship between the two institutions. Through CEPI we have found a long-term, sustainable partnership for educational exchange to be beneficial to all parties involved; with the years to come, this promises to become a multi-faceted, profound educational experience that may change the career trajectories of clinicians in Kenya and Seattle alike." ]
[ "introduction", null, null, "materials|methods", null, null, null, null, "results", null, null, "conclusion" ]
[ "International", "Clinical rotation", "Medical education", "Residents", "Kenya" ]
Background

International health electives

International health electives (IHEs) in some settings have been shown to increase United States (US) medical trainees’ understanding of public health, primary care, health disparities and barriers to care, in addition to raising National Board of Medical Examiners (NBME) scores and improving physical examination and history-taking skills [1]. As such, both the demand for and the prevalence of these experiences are increasing among medical trainees [2]. Currently, 87% of US medical schools offer international clinical experiences to students, and it is estimated that over 25% of students participate in these programs [3,4]. Graduating medical students throughout all disciplines rank the availability of international experiences as among the most important factors in their choice of residency program [5]. However, due to the rigors of residency training and a number of other constraints, only 59% of residency programs offer international clinical rotations to residents, and as few as 10% of residents may be able to take advantage of these opportunities, when they exist [4]. Pitfalls for IHEs over the last several decades have been described. A common problem inherent to international clinical experiences is poorly defined academic goals and clinical competencies [3]. Another commonly cited issue, especially among medical students, is lack of structure and supervision, leading to concerns that one’s presence in the host institution may be detrimental to patient care [6]. Finally, concerns have been raised that short-term international clinical experiences might benefit the sending institution and its trainees more than the host institution, and that benefits for the host institution can be difficult to measure [7].

Medical training in most parts of sub-Saharan Africa includes 6 years of undergraduate medical school and one year of internship, in which newly graduated clinicians (medical officer interns) are placed in government hospitals to practice. Inadequate numbers of medical training facilities and faculty positions have been identified as challenges facing medical education in sub-Saharan Africa [8], leaving trainees with limited access to senior clinicians for both teaching and role modeling. Formal training programs involving regularly scheduled, structured group didactics for doctors recruited to work in rural Africa have been shown to increase “professional socialization,” or the “learning of attitudes, norms, self images, values, beliefs and behavioral patterns associated with professional practice,” in addition to clinical knowledge and skills [9,10]. This enhanced sense of connection was thought to be related to the mode of training rather than the content, and led to long-term professional relationships and increased job satisfaction [9]. Despite these advantages, there are numerous barriers to structured continuing medical education in most healthcare facilities in sub-Saharan Africa [11].

Given the relative lack of international clinical opportunities for US residents, the limitations on medical education in much of sub-Saharan Africa and the logistical and ethical considerations that have been described pertaining to IHEs for medical students, the Universities of Washington and Nairobi have recently initiated a new clinical training program for UW and UoN residents in Naivasha, Kenya.
This one-month rotation is based on a long-term, structured educational exchange partnership, and aims to provide trainees with supervised, hands-on experience based on a predefined curriculum covering four main competencies of healthcare delivery in resource-poor settings.
Partnership for innovative medical education in Kenya

Researchers at the University of Washington (UW) have collaborated with researchers from the University of Nairobi (UoN) on various projects for over 30 years [12]. This longstanding collaboration recently culminated in a Medical Education Partnership Initiative (MEPI) grant from the National Institutes of Health and PEPFAR. The new initiative, called Partnership for Innovative Medical Education in Kenya, or PRIME-K, is designed to strengthen medical education, improve retention of clinicians in rural parts of Kenya, and foster research capacity in Kenya. Modeled on the Washington, Wyoming, Alaska, Montana and Idaho (WWAMI) Regional Medical Education Program, which provides medical education for all 5 states and extends medical training to very rural and remote areas [13,14], a new decentralized training system has been created for UoN medical students [15]. Since 2010, 238 medical students from UoN have completed rotations in 14 hospitals throughout the country, on both the district and provincial levels. Through the foundation of PRIME-K, the UW-UoN partnership has now launched a new initiative called the Clinical Education Partnership Initiative, which focuses on cross-cultural clinical education experiences in Kenya.
Methods

Setting

Naivasha District Hospital (NDH) was chosen as a site for the Clinical Education Partnership Initiative (CEPI), due to its proximity to Nairobi and its success with the establishment of a PRIME-K rotation for UoN medical students. NDH is a Level 4 Ministry of Health facility that serves both the urban and rural populations of Naivasha, a catchment area of over 350,000 people [16]. In total, 7 consultants (attending physicians who are permanent employees) work at the hospital, including two surgeons, one otolaryngologist, one internist, one obstetrician-gynecologist, one radiologist, and one pediatrician. The facility currently has just over 200 beds, many of which are often occupied by two patients. There are two operating rooms, an urgent care center, an HIV clinic, a TB clinic, and a rotating consultant’s clinic where the specialists see their continuity patients. Faculty from both University of Washington and University of Nairobi met with the Ministry of Health and the Medical Superintendent of the hospital, who approved the partnership.
Curriculum

Prior to program implementation, UW and UoN faculty collaborated to develop a curriculum outlining the goals and competencies that UW residents would be expected to achieve during rotations. Drawing on both faculty experience and resources for global health education from both institutions, four main objectives were defined: Clinical Skills & Knowledge, Education & Mentorship, Community Health Work, and Research & Partnerships. Within each area, specific educational goals for residents were established and the activities of the rotation were designed to achieve those goals.

Structure

A Global Health Chief Resident from the UW Department of Medicine was appointed and moved to Naivasha to establish the program in August 2012, and the first residents arrived in September 2012. The Chief Resident organized all logistics of the program for the participants, including transportation, housing, educational activities, and clinical work. The rotation was open to any resident in any department of the University of Washington, and to select medical students identified by the medical school’s global health track. Twenty-nine residents from 5 departments and three medical students completed rotations between September 2012 and April 2014 (see Figure 1). All residents were in their second or third year of residency (out of 3 years for internal and family medicine, 4 years for Ob-Gyn, and 6 years for surgery), and the students were at the end of their fourth year. Each resident attended an orientation in Seattle prior to departure and another in Naivasha on arrival. Housing and within-country transportation were provided for the UW trainees. The majority of residents spent 4 weeks in Naivasha and participated in a number of clinical activities, including hospital ward rounds, consultant’s outpatient clinic, emergency/urgent care, operating theatre, and dispensary visits. Additionally, UW trainees presented at least two educational lectures during their rotation and spent an average of 2–3 days in rural communities engaged in public health outreach activities. UW trainees worked side by side with their Kenyan colleagues in all settings, including educational conferences. Kenyan trainees and clinical staff participating in CEPI included medical officer interns (medical doctors who have completed a 6-year combined undergraduate and medical school degree), medical officers (medical doctors who have completed medical school and intern year but have not undergone specialty training in residency), clinical officer interns (midlevel providers who have completed 4-year clinical officer training), and medical students.

Figure 1. Description of UW participants.
Monitoring and evaluation

Faculty members from UW and UoN collaborated to create a monitoring and evaluation structure for both UW and NDH trainees prior to the start of the program. Because data were collected from program participants for quality control and program improvement purposes, the data collection was not considered research by either the UW or UoN ethics committee, and formal ethical approval was not required. UW residents completed before- and after-rotation surveys (Additional files 1 and 2) designed to measure success in achieving the curricular goals of the program outlined above. The surveys were completed anonymously; however, although they did not include specific identifying information such as name, some questions may have provided information that would allow identification in certain circumstances. Additionally, UW residents were asked to complete a fully anonymous online evaluation of the rotation after their return to Seattle. Kenyan trainees completed anonymous surveys every 3 months describing their involvement with UW residents during each time period (Additional file 3). A group of Kenyan trainees also participated in one semi-structured focus group discussion. Surveys for UW and UoN trainees did not differ based on level of training. Survey scores were compiled in Microsoft Excel, and paired t-tests were used to determine the significance of before-to-after change.
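For readers who want to reproduce this kind of before/after comparison outside a spreadsheet, the sketch below shows a paired t-test on per-resident scores. It is a minimal illustration only: the analysis described above was performed in Microsoft Excel, and the score values here are hypothetical stand-ins for the 5-point survey responses.

```python
# Minimal sketch of the paired before/after analysis described above.
# The authors used Microsoft Excel; this shows an equivalent computation.
# The scores below are hypothetical 5-point survey responses from the same
# residents, listed in the same order before and after the rotation.
from scipy import stats

before = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]  # hypothetical before-rotation scores
after = [3, 4, 3, 3, 4, 3, 3, 4, 2, 3]   # hypothetical after-rotation scores

# The paired t-test operates on the per-resident (after - before) differences.
t_stat, p_value = stats.ttest_rel(after, before)
mean_diff = sum(a - b for a, b in zip(after, before)) / len(before)

print(f"Mean before/after difference: {mean_diff:.2f}")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")
```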
Results & discussion

Naivasha District Hospital trainees

Surveys were completed by NDH trainees four times: in November 2012, May 2013, September 2013, and March 2014. A total of 50 surveys were completed.
Of these, 26 were completed by clinical officer interns (COIs), 11 by medical officer interns (MOIs), 10 by medical officers (MOs), two by students, and one by a clinical officer (CO). Trainees from NDH reported learning from UW residents most frequently during ward rounds, teaching conferences, and informal educational settings. Of all educational settings, they ranked surgical theatre and teaching conferences as most helpful. All NDH trainees (100%) reported feeling more confident about medical knowledge and more confident taking care of patients as a result of their interaction with UW residents.

Trainees at NDH most frequently cited improvements in diagnosis and management of patients when asked how their interactions with UW residents benefitted them. They specified that they learned “more etiologies to conditions,” how to “arrive at a particular diagnosis” using the “correct investigations,” and “coming up with differential [diagnoses]”. Others mentioned getting help with physical examination and “proper management” of patients from the UW residents. Commonly mentioned ways that UW residents helped with medical knowledge were updating, reinforcing, or broadening knowledge gained in medical school. Specific skills that were frequently mentioned were wound care, physical examination, problem-solving and procedures.

Many NDH trainees stated they appreciated getting a “different approach” to patient care from UW residents. This was mentioned both with respect to clinical decision making and to a broader sense of one’s approach to doctoring. One trainee mentioned that the most useful element of the partnership was “getting to know the alternative way of approaching the usual conditions, and not sticking to the one way of approach,” highlighting the opportunity to broaden one’s methods for clinical decision making. Others mentioned a “different approach to get the diagnosis” and “stimulating my thought process”. Additionally, trainees alluded to learning a different approach to the art of doctoring, in both the patient-doctor relationship and professional relationships between clinicians. Teamwork and communication were stressed by several trainees, who stated that they appreciated learning “work distribution” and the “importance of working as a team”. One trainee stated that UW residents were helpful in encouraging the trainee to “ask when I am not sure without fear”. With regard to the patient-doctor relationship, Kenyan trainees mentioned that “talking to patients” and “how to give a patient maximum concern” were examples of skills learned from this alternative approach, as was demonstrating “ideals of patient care” and “holistic care”. One trainee stated that the residents were “a source of motivation. Because they don’t give up… they did everything they could do”. Another trainee simply stated, “I learnt the art of compassion”.

Overall, Kenyan trainees reported feeling supported by and more confident as a result of the program. “I can now make confident decisions when left unsupervised due to the acquired knowledge,” one mentioned. “I found it very interactive, if we teach those people what we know, and they teach us what we don’t know, there is going to be a working relationship”.
University of Washington trainees

Of the 32 UW trainees who participated, 30 (94%) completed before- and after-rotation surveys, for a total of 60 surveys; however, 5 “before” surveys were lost during a robbery, leaving 55 for analysis, including 25 complete before-and-after pairs.
During their 4-week rotations, visiting trainees reported seeing significantly more cases of advanced HIV, tuberculosis, malaria, pediatric HIV, severe malnutrition, neonatal sepsis, and neonatal asphyxia than they ever had previously during their medical training. On average, after their rotation U.S. trainees reported feeling more confident diagnosing disease based on history and physical examination, managing patients in a setting with limited resources, working in very different hospital systems, and using clinical guidelines for management of patients in resource-limited settings (Table 1). U.S. trainees also reported engaging in certain educational activities significantly more often during the rotation than they had in their previous training (Table 2). These included teaching people of different cultural and linguistic backgrounds, working with colleagues from different cultural and linguistic backgrounds, and researching educational topics with few resources. Finally, U.S. trainees reported a significantly greater understanding of the complex role that social determinants can play in health, with significant improvements in understanding how poverty affects health and how economic, social and cultural barriers can affect health care seeking (Table 3).

Table 1. UW before- and after-rotation clinical competency scores

Involvement in care of patients with:* | Average score before rotation | Average score during rotation | Average difference | p-value§
Advanced HIV | 2.36 | 2.92 | 0.56 | 0.05
Tuberculosis | 2.24 | 3.08 | 0.84 | <0.01
Malaria | 1.48 | 2.44 | 0.96 | <0.01
Pediatric HIV | 1.23 | 2.08 | 0.85 | <0.01
Severe malnutrition | 2.00 | 2.64 | 0.64 | 0.03
Neonatal sepsis | 2.42 | 3.25 | 0.83 | 0.03
Neonatal asphyxia | 1.42 | 2.92 | 1.50 | <0.01
Severe diarrhea | 1.73 | 2.36 | 0.63 | 0.01

Comfort with the following clinical skills:** | Average score before rotation | Average score during rotation | Average difference | p-value§
Diagnosing disease based on history, physical, and vital signs | 3.16 | 3.56 | 0.40 | 0.08
Managing patients with limited resources | 2.60 | 3.56 | 0.96 | <0.01
Working effectively in hospital systems different from in the USA | 2.58 | 3.46 | 0.88 | <0.01
Use of clinical guidelines for management | 2.58 | 3.75 | 1.17 | <0.01

*Scores based on a 5-point scale: 1 = 0 patients seen, 2 = 1-5 patients seen, 3 = 6-20 patients seen, 4 = 21-50 patients seen, 5 = greater than 50 patients seen.
**Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very.
§P-values calculated from paired t-tests.

Table 2. UW before- and after-rotation educational experience scores*

Number of times engaged in the following activities: | Average score before rotation | Average score during rotation | Average difference | p-value§
Teaching people of different cultural or linguistic backgrounds from your own | 2.24 | 3.68 | 1.44 | <0.01
Researching educational topics with few available resources | 1.96 | 3.00 | 1.04 | <0.01
Working with colleagues from different cultural backgrounds from your own | 3.08 | 4.28 | 1.20 | <0.01

*Scores based on a 5-point scale: 1 = Never, 2 = 1-5 times, 3 = 6-20 times, 4 = 21-50 times, 5 = >50 times.
§P-values calculated from paired t-tests.

Table 3. UW before- and after-rotation community health work scores*

Level of understanding of: | Average score before rotation | Average score during rotation | Average difference | p-value§
How extreme poverty affects health | 3.13 | 4.13 | 1.00 | <0.01
Economic barriers to health care seeking | 3.33 | 4.21 | 0.88 | <0.01
Social and cultural barriers to health care seeking | 3.33 | 3.96 | 0.63 | 0.01

*Scores based on a 5-point scale: 1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Reasonably, 5 = Very.
§P-values calculated from paired t-tests.
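As a point of reference for the tables above, the reported p-values come from paired t-tests on the per-resident before/after differences. Assuming the standard paired t statistic (the per-item sample size is not reported and may be smaller than the 25 complete pairs noted above because of missing responses), the underlying computation is:

```latex
% Standard paired t-test statistic assumed to underlie the reported p-values.
% d_i = resident i's (after - before) score for a given survey item;
% n = number of complete before/after pairs available for that item.
\[
\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i, \qquad
s_d = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\bigl(d_i - \bar{d}\bigr)^2}, \qquad
t = \frac{\bar{d}}{s_d/\sqrt{n}},
\]
% with n - 1 degrees of freedom; under this assumption the "Average difference"
% column corresponds to d-bar for each item.
```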
When asked what the most educational aspects of the rotation were, most U.S. trainees mentioned patient management with limited resources. “The experience of working in a low resource healthcare [setting] was the most valuable. The decisions I had to make with limited information or imaging taught me skills that I would not have obtained in my residency.” Other commonly cited aspects were the exposure to rare diagnoses such as advanced HIV and TB, and the opportunity to teach and motivate others at the hospital.

Twenty-nine (91%) out of 32 U.S. trainees reported experiencing a change in the way they viewed themselves as clinicians as a result of their rotation in Naivasha. The majority of these cited increased confidence and self-reliance as a clinician, especially when dealing with resource limitations. Other commonly expressed feelings were gratitude for the education, training and resources available in the U.S. and feeling more committed to careers in global health. Interestingly, while the majority of internal and family medicine residents felt more confidence, those in procedure-based residencies such as surgery and obstetrics-gynecology tended to feel more humbled by the experience. A surgery resident reported “I’m not more confident having had this experience that so stretched my comfort limits and my abilities. I’m actually more humble, with a new appreciation and hunger for the skills, wisdom, and judgment I have yet to learn as I continue my surgical training,” while an ob-gyn resident stated “I feel that medical education is a great gift that I have received and this rotation [made me want] to share this with others”.

“Challenging” and “rewarding” were the most frequently cited descriptions for the rotation overall, and these were usually mentioned together, implying that the challenges themselves provided the greatest rewards. The educational value was also highlighted, with trainees describing growth in clinical knowledge, understanding about disparities, and learning about the culture. One trainee described the rotation as “The single most unique, challenging and valuable rotation in my residency”.
When asked what the most educational aspects of the rotation were, most U.S. trainees mentioned patient management with limited resources. “The experience of working in a low resource healthcare [setting] was the most valuable. The decisions I had to make with limited information or imaging taught me skills that I would not have obtained in my residency.” Other commonly cited aspects were the exposure to rare diagnoses such as advanced HIV and TB, and the opportunity to teach and motivate others at the hospital. Twenty-nine (91%) out of 32 U.S. trainees reported experiencing a change in the way they viewed themselves as clinicians as a result of their rotation in Naivasha. The majority of these cited increased confidence and self-reliance as a clinician, especially when dealing with resource limitations. Other commonly expressed feelings were gratitude for the education, training and resources available in the U.S. and feeling more committed to careers in global health. Interestingly, while the majority of internal and family medicine residents felt more confident, those in procedure-based residencies such as surgery and obstetrics-gynecology tended to feel more humbled by the experience. A surgery resident reported “I’m not more confident having had this experience that so stretched my comfort limits and my abilities. I’m actually more humble, with a new appreciation and hunger for the skills, wisdom, and judgment I have yet to learn as I continue my surgical training,” while an ob-gyn resident stated “I feel that medical education is a great gift that I have received and this rotation [made me want] to share this with others”. “Challenging” and “rewarding” were the most frequently cited descriptions for the rotation overall, and these were usually mentioned together, implying that the challenges themselves provided the greatest rewards. The educational value was also highlighted, with trainees describing growth in clinical knowledge, understanding about disparities, and learning about the culture. One trainee described the rotation as “The single most unique, challenging and valuable rotation in my residency”. Conclusions: CEPI is an innovative new program designed to foster educational exchange between medical trainees from two vastly different academic backgrounds: University of Washington and University of Nairobi. The partnership between these two universities is founded in several decades of research collaboration, lending stability and long-term investment to the program. Additionally, educational supervision and support provided by the CEPI Chief Resident allows trainees to focus on clinical education, rather than logistics of living and practicing in a foreign setting for a short time. The Chief Resident also provides structure to the training program for both UW and Kenyan trainees, acts as a liaison between the two universities, and reduces the time that Kenyan consultants and administrators have to dedicate to the logistics of the program to essentially zero. In its first year and a half, 32 UW trainees and up to 50 Kenyan trainees have benefitted from this exchange program. All participants from both countries reported having learned a great deal from the experience.
There were several clinical and educational competencies that UW residents were able to practice more frequently in Kenya than in the USA. Many UW residents reported increased confidence. Other residents, particularly those from procedure-based specialties, described a rekindled appreciation for the rigors of and supervision provided by specialty training back home. The prevalent sense among Kenyan trainees that interactions with UW residents helped them by teaching “a different approach” is a critical finding. This “different approach,” whether referring to a clinician’s thought process, relationship with the patient, or relationship with a colleague, is as important for trainees in rural areas to learn as clinical knowledge. During these critical years of training, clinicians will develop habits that will likely inform their practice for years to come. Working with senior clinicians from different backgrounds can provide models for various professional behaviors, but this can be difficult to achieve in remote, resource-poor settings. By bringing UW residents to NDH, CEPI effectively provides a variety of professional role models to rural Kenyan trainees. This study had several significant limitations. First, all results are self-reported measures of learning, and not patient care outcomes, which may be more pertinent and less vulnerable to various types of bias. Second, a significant amount of data was lost during a robbery, reducing the number of analyzable results. Third, we limited our evaluation process to participants in the program and did not include perspectives from the hospital staff, who may also be affected by CEPI activities. Fourth, we did not evaluate any “control” group with which to compare our results, and therefore cannot definitively attribute any changes observed to our program. Finally, changes over time were not measured, so we cannot predict whether the observed effects will persist. As CEPI continues to bring UW residents to Naivasha, there are several further directions the program may take that might enhance and broaden the experience for both UW and Kenyan trainees. Although we have reported on only clinical work and education here, the curriculum encompasses two additional components of the program, and these should be evaluated in future years when more established. Both community health work and research are program elements that will help to round out resident experiences in future years of CEPI. Tele-learning through video broadcasting of teaching conferences would be a great addition to the existing teaching structure, and would add to the sense of partner institutions joined in learning. Residents working toward their Masters of Medicine degrees (MMed, equivalent to residency training in the US) at UoN will be joining UW residents for rotations at NDH in the future, increasing the sense of partnership among individual trainees at the same stage of training and broadening the opportunities for collaboration and research endeavors. Additionally, opportunities for NDH trainees to travel to Seattle to experience clinical rotations there would provide a true exchange, and much greater depth to the overall relationship between the two institutions. Through CEPI we have found a long-term, sustainable partnership for educational exchange to be beneficial to all parties involved; in the years to come, this promises to become a multi-faceted, profound educational experience that may change the career trajectories of clinicians in Kenya and Seattle alike.
Background: Despite evidence that international clinical electives can be educationally and professionally beneficial to both visiting and in-country trainees, these opportunities remain challenging for American residents to participate in abroad. Additionally, even when logistically possible, they are often poorly structured. The Universities of Washington (UW) and Nairobi (UoN) have enjoyed a long-standing research collaboration, which recently expanded into the UoN Medical Education Partnership Initiative (MEPI). Based on MEPI in Kenya, the Clinical Education Partnership Initiative (CEPI) is a new educational exchange program between UoN and UW. CEPI allows UW residents to partner with Kenyan trainees in clinical care and teaching activities at Naivasha District Hospital (NDH), one of UoN's MEPI training sites in Kenya. Methods: UW and UoN faculty collaborated to create a curriculum and structure for the program. A Chief Resident from the UW Department of Medicine coordinated the program at NDH. From August 2012 through April 2014, 32 UW participants from 5 medical specialties spent between 4 and 12 weeks working in NDH. In addition to clinical duties, all took part in formal and informal educational activities. Before and after their rotations, UW residents completed surveys evaluating clinical competencies and cross-cultural educational and research skills. Kenyan trainees also completed surveys after working with UW residents for three months. Results: UW trainees reported a significant increase in exposure to various tropical and other diseases, an increased sense of self-reliance, particularly in a resource-limited setting, and an improved understanding of how social and cultural factors can affect health. Kenyan trainees reported both an increase in clinical skills and confidence, and an appreciation for learning a different approach to patient care and professionalism. Conclusions: After participating in CEPI, both Kenyan and US trainees noted improvement in their clinical knowledge and skills and a broader understanding of what it means to be clinicians. Through structured partnerships between institutions, educational exchange that benefits both parties is possible.
Background: International health electives International health electives (IHEs) in some settings have been shown to increase United States (US) medical trainees’ understanding of public health, primary care, health disparities and barriers to care, in addition to raising National Board of Medical Examiners (NBME) scores and improving physical exam and history taking skills [1]. As such, both the demand for and the prevalence of these experiences are increasing among medical trainees [2]. Currently, 87% of US medical schools offer international clinical experiences to students, and it is estimated that over 25% of students participate in these programs [3,4]. Graduating medical students throughout all disciplines rank the availability of international experiences as among the most important factors in their choice of residency program [5]. However, due to the rigors of residency training and a number of other constraints, only 59% of residency programs offer international clinical rotations to residents, and as few as 10% of residents may be able to take advantage of these opportunities, when they exist [4]. Pitfalls for IHEs over the last several decades have been described. A common problem inherent to international clinical experiences is poorly defined academic goals and clinical competencies [3]. Another commonly cited issue, especially among medical students, is lack of structure and supervision, leading to concerns that one’s presence in the host institution may be detrimental to patient care [6]. Finally, concerns have been raised that short-term international clinical experiences might benefit the sending institution and its trainees more than that of the host institution, and that benefits for the host institution can be difficult to measure [7]. Medical training in most parts of sub-Saharan Africa includes 6 years of undergraduate medical school and one year of internship, in which newly graduated clinicians (medical officer interns) are placed in government hospitals to practice. Inadequate numbers of medical training facilities and faculty positions have been identified as challenges facing medical education in sub-Saharan Africa [8], leading to trainees with limited access to senior clinicians for both teaching and role modeling. Formal training programs involving regularly scheduled, structured group didactics for doctors recruited to work in rural Africa have been shown to increase “professional socialization,” or the “learning of attitudes, norms, self images, values, beliefs and behavioral patterns associated with professional practice,” in addition to clinical knowledge and skills [9,10]. This enhanced sense of connection was thought to be related to the mode of training rather than the content, and led to long-term professional relationships and increased job satisfaction [9]. Despite these advantages, there are numerous barriers in place for structured continuing medical education in most healthcare facilities in sub-Saharan Africa [11]. Given the relative lack of international clinical opportunities for US residents, the limitations on medical education in much of sub-Saharan Africa and the logistical and ethical considerations that have been described pertaining to IHEs for medical students, the Universities of Washington and Nairobi have recently initiated a new clinical training program for UW and UoN residents in Naivasha, Kenya. 
This one-month rotation is based on a long-term, structured educational exchange partnership, and aims to provide trainees with supervised, hands-on experience based on a predefined curriculum covering four main competencies of healthcare delivery in resource-poor settings.
Partnership for innovative medical education in Kenya Researchers at the University of Washington (UW) have collaborated with researchers from the University of Nairobi (UoN) on various projects for over 30 years [12]. This longstanding collaboration recently culminated in a Medical Education Partnership Initiative (MEPI) grant from the National Institutes of Health and PEPFAR. The new initiative, called Partnership for Innovative Medical Education in Kenya, or PRIME-K, is designed to strengthen medical education, improve retention of clinicians in rural parts of Kenya, and foster research capacity in Kenya. Modeled on the Washington, Wyoming, Alaska, Montana and Idaho (WWAMI) Regional Medical Education Program, which provides medical education for all 5 states and extends medical training to very rural and remote areas [13,14], a new decentralized training system has been created for UoN medical students [15]. Since 2010, 238 medical students from UoN have completed rotations in 14 hospitals throughout the country, on both the district and provincial levels. Through the foundation of PRIME-K, the UW-UoN partnership has now launched a new initiative called the Clinical Education Partnership Initiative, which focuses on cross-cultural clinical education experiences in Kenya.
12,113
377
[ 641, 221, 196, 93, 378, 238, 635, 1336 ]
12
[ "medical", "uw", "trainees", "clinical", "rotation", "residents", "health", "patients", "training", "education" ]
[ "residency training", "international clinical opportunities", "international clinical experiences", "electives international health", "residency program rigors" ]
null
[CONTENT] International | Clinical rotation | Medical education | Residents | Kenya [SUMMARY]
null
[CONTENT] International | Clinical rotation | Medical education | Residents | Kenya [SUMMARY]
[CONTENT] International | Clinical rotation | Medical education | Residents | Kenya [SUMMARY]
[CONTENT] International | Clinical rotation | Medical education | Residents | Kenya [SUMMARY]
[CONTENT] International | Clinical rotation | Medical education | Residents | Kenya [SUMMARY]
[CONTENT] Capacity Building | Clinical Competence | Cooperative Behavior | Culture | Curriculum | Global Health | Health Resources | Humans | Interinstitutional Relations | Internship and Residency | Kenya | Schools, Medical | Washington [SUMMARY]
null
[CONTENT] Capacity Building | Clinical Competence | Cooperative Behavior | Culture | Curriculum | Global Health | Health Resources | Humans | Interinstitutional Relations | Internship and Residency | Kenya | Schools, Medical | Washington [SUMMARY]
[CONTENT] Capacity Building | Clinical Competence | Cooperative Behavior | Culture | Curriculum | Global Health | Health Resources | Humans | Interinstitutional Relations | Internship and Residency | Kenya | Schools, Medical | Washington [SUMMARY]
[CONTENT] Capacity Building | Clinical Competence | Cooperative Behavior | Culture | Curriculum | Global Health | Health Resources | Humans | Interinstitutional Relations | Internship and Residency | Kenya | Schools, Medical | Washington [SUMMARY]
[CONTENT] Capacity Building | Clinical Competence | Cooperative Behavior | Culture | Curriculum | Global Health | Health Resources | Humans | Interinstitutional Relations | Internship and Residency | Kenya | Schools, Medical | Washington [SUMMARY]
[CONTENT] residency training | international clinical opportunities | international clinical experiences | electives international health | residency program rigors [SUMMARY]
null
[CONTENT] residency training | international clinical opportunities | international clinical experiences | electives international health | residency program rigors [SUMMARY]
[CONTENT] residency training | international clinical opportunities | international clinical experiences | electives international health | residency program rigors [SUMMARY]
[CONTENT] residency training | international clinical opportunities | international clinical experiences | electives international health | residency program rigors [SUMMARY]
[CONTENT] residency training | international clinical opportunities | international clinical experiences | electives international health | residency program rigors [SUMMARY]
[CONTENT] medical | uw | trainees | clinical | rotation | residents | health | patients | training | education [SUMMARY]
null
[CONTENT] medical | uw | trainees | clinical | rotation | residents | health | patients | training | education [SUMMARY]
[CONTENT] medical | uw | trainees | clinical | rotation | residents | health | patients | training | education [SUMMARY]
[CONTENT] medical | uw | trainees | clinical | rotation | residents | health | patients | training | education [SUMMARY]
[CONTENT] medical | uw | trainees | clinical | rotation | residents | health | patients | training | education [SUMMARY]
[CONTENT] medical | international | medical education | education | africa | international clinical | experiences | clinical | kenya | training [SUMMARY]
null
[CONTENT] patients | rotation | seen | patients seen | trainees | scores | times | mentioned | based point scale | based point [SUMMARY]
[CONTENT] trainees | cepi | program | uw | kenyan | uw residents | residents | exchange | results | future [SUMMARY]
[CONTENT] medical | uw | trainees | residents | clinical | rotation | education | patients | uw residents | health [SUMMARY]
[CONTENT] medical | uw | trainees | residents | clinical | rotation | education | patients | uw residents | health [SUMMARY]
[CONTENT] American ||| ||| The Universities of Washington | Nairobi | MEPI ||| MEPI | Kenya | the Clinical Education Partnership Initiative | CEPI | UoN | UW ||| CEPI | UW | Kenyan | Naivasha District Hospital | NDH | one | UoN | MEPI | Kenya [SUMMARY]
null
[CONTENT] ||| Kenyan [SUMMARY]
[CONTENT] CEPI | Kenyan | US | clinicians ||| [SUMMARY]
[CONTENT] ||| American ||| ||| The Universities of Washington | Nairobi | MEPI ||| MEPI | Kenya | the Clinical Education Partnership Initiative | CEPI | UoN | UW ||| CEPI | UW | Kenyan | Naivasha District Hospital | NDH | one | UoN | MEPI | Kenya ||| ||| the UW Department of Medicine | NDH ||| August 2012 through April 2014 | 32 UW | 5 | between 4 and 12 weeks | NDH ||| ||| UW ||| Kenyan | three months ||| ||| Kenyan ||| CEPI | Kenyan | US | clinicians ||| [SUMMARY]
[CONTENT] ||| American ||| ||| The Universities of Washington | Nairobi | MEPI ||| MEPI | Kenya | the Clinical Education Partnership Initiative | CEPI | UoN | UW ||| CEPI | UW | Kenyan | Naivasha District Hospital | NDH | one | UoN | MEPI | Kenya ||| ||| the UW Department of Medicine | NDH ||| August 2012 through April 2014 | 32 UW | 5 | between 4 and 12 weeks | NDH ||| ||| UW ||| Kenyan | three months ||| ||| Kenyan ||| CEPI | Kenyan | US | clinicians ||| [SUMMARY]
Dynamics in perioperative neutrophil-to-lymphocyte*platelet ratio as a predictor of early acute kidney injury following cardiovascular surgery.
34187280
In this study, we applied a composite index, the neutrophil-to-lymphocyte*platelet ratio (NLPR), and explored the significance of the dynamics of perioperative NLPR in predicting cardiac surgery-associated acute kidney injury (CSA-AKI).
BACKGROUND
Between 1 July and 31 December 2019, participants were prospectively derived from the 'Zhongshan Cardiovascular Surgery Cohort'. NLPR was determined from neutrophil, lymphocyte and platelet counts at two time points. Dose-response relationship analyses were applied to delineate the non-linear odds ratio (OR) of CSA-AKI at different NLPR levels. The NLPRs were then integrated into a generalized estimating equation (GEE) to predict the risk of AKI.
METHODS
Of 2449 patients receiving cardiovascular surgery, 838 (34.2%) developed CSA-AKI: stage 1 (n = 658, 26.9%) and stage 2-3 (n = 180, 7.3%). Compared with non-AKI patients, both preoperative and postoperative NLPR were higher in AKI patients (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; 12.4[7.5, 20.0] vs. 10.1[6.4,16.7], p < 0.001). The effect followed a 'J'-shaped relationship: the risk of CSA-AKI was relatively flat until a preoperative NLPR of 1.0 and increased rapidly afterward, with an odds ratio of 1.13 (1.06-1.19) per 1 unit. Similarly, patients whose postoperative NLPR exceeded 11.0 were more likely to develop AKI, with an OR of 1.02. Integrating the dynamic NLPRs into the GEE model, we found that the AUC was 0.806 (95% CI 0.793-0.819), which was significantly higher than the AUC without NLPR (0.799, p < 0.001).
RESULTS
The dynamics of perioperative NLPR are a promising marker for predicting acute kidney injury. This marker will facilitate AKI risk management and allow clinicians to intervene early so as to reverse renal damage.
CONCLUSION
[ "Acute Kidney Injury", "Adult", "Aged", "Biomarkers", "Blood Platelets", "Cardiac Surgical Procedures", "Female", "Humans", "Leukocyte Count", "Lymphocytes", "Male", "Middle Aged", "Neutrophils", "Platelet Count", "Postoperative Complications", "Prospective Studies", "Risk Factors" ]
8260043
Introduction
Acute kidney injury (AKI) is a complicated and multifactorial clinical syndrome, which is characterized by a sudden decrease in renal function. Cardiac surgery is the second commonest reason for AKI behind sepsis [1]. It was reported that the incidence of cardiac surgery-associated AKI (CSA-AKI) varied from 5% to 42% [2]. CSA-AKI not only prolongs the length of stay in the intensive care unit (ICU) and hospitalization but increases the cost of care and mortality [3,4]. Early identification of patients at high risk of CSA-AKI and prompt modification of reversible factors will optimize the clinical outcome and avoid unnecessary kidney damage. Preoperative inflammation is commonly encountered in patients receiving cardiac surgery. It can be caused either by cardiac infection (such as infective endocarditis) or by other chronic inflammatory diseases (such as COPD). In clinical practice, the neutrophil to lymphocyte ratio (NLR) is an easily obtained, effective and low-cost biomarker of inflammatory status [5,6]. Previous studies have demonstrated its value in early prediction of infectious conditions [5,7], cancers [8–13] and cardiovascular diseases [14]. As initiators of inflammation, neutrophils can quickly migrate from the peripheral blood circulation to the kidney under the action of chemokines. They can result in CSA-AKI by blocking renal micro-vessels and secreting oxygen free radicals and proteases. Besides, platelet-neutrophil aggregates are formed during the inflammatory process [15] and are associated with diverse pathologies [16,17]. These aggregates can also cause renal damage through vascular lesions and tissue destruction [18]. For these reasons, NLPR may be a more promising biomarker of inflammation than NLR for CSA-AKI prediction. Although several studies have shown that high NLPR was associated with postoperative AKI in cardiovascular surgery patients [19], no research has yet considered the dynamics of NLPR in the prediction model. The effect of NLPR on AKI is not a simple linear relationship. It is crucial to delineate the dose-response relationship between NLPR and AKI risk and the cutoff of NLPR for CSA-AKI prediction. This study aims to explore the significance of the dynamic changes in perioperative NLPR for predicting CSA-AKI.
Methods
Study design and patient selection This study was designed as a prospective cohort study based on the ‘Zhongshan Cardiovascular Surgery Cohort’. This cohort was established in 2009 and contained nearly 30,000 subjects with complete demographic and clinical records. We recruited patients who underwent cardiac surgeries between 1 July 2019 and 31 December 2019. The inclusion criteria were as follows: (1) aged over 18 years, (2) underwent cardiac surgery, (3) did not receive renal replacement therapy (RRT) before surgery, and (4) had complete renal and cardiac data. We further excluded those who were under 18 years old, received a heart transplant or RRT, or had fewer than one serum creatinine (SCr) test. This study was approved by the Institutional Committee of Zhongshan Hospital (B2017-039). Participants gave informed consent prior to recruitment, and their identity information was replaced by a unique code during analysis. Data collection Patients’ demographics were collected through a structured questionnaire. Clinical data with timestamps were extracted from the electronic medical records (EMR). We selected thirty-seven factors for analysis. They were divided into five groups in chronological order: (1) demographics and comorbidity: age, gender, body mass index (BMI), hypertension, diabetes, unstable angina pectoris (UAP), and myocardial infarction (MI) within the last 3 months; (2) preoperative cardio-renal function [within 24 h of admission to hospital]: cardiac surgery history, coronary angiography, New York heart association (NYHA) grade, left ventricular eject fractions (LVEF), pulmonary artery pressure (PAH), SCr, blood urea nitrogen (BUN), estimated glomerular filtration rate (eGFR), serum uric acid (SUA) and proteinuria; (3) perioperative blood routine test [within 24 h of admission]: glucose, albumin, hemoglobin, peripheral blood cell counts including erythrocyte, total leukocyte, neutrophil, lymphocyte and platelet; (4) intraoperative factors: cardiopulmonary bypass (CPB), surgery type, aortic cross-clamp time (ACCT), ultrafiltration volume and blood transfusion; (5) postoperative factors (within 1 h of arrival in the intensive care unit [ICU]): APACHE II score, Euro score and blood routine test. Moreover, we also retrieved the preoperative medication data for patients with higher NLPR to evaluate the effect of antibiotic use on AKI occurrence.
Definition and classification According to the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) criteria [20], AKI was defined as an absolute SCr increase ≥0.3 mg/dL (≥26.5 μmol/L) within 48 h, an increase to ≥1.5 times the baseline level within 7 days, or a urine output <0.5 mL/kg/h lasting over 6 h. The SCr value on admission was regarded as the baseline level. CSA-AKI was diagnosed by comparing it with the postoperative SCr value. For those receiving multiple SCr tests after surgery, we defined the highest one as the peak value. The neutrophil-to-lymphocyte*platelet ratio (NLPR) was calculated as: NLPR = [neutrophil count (10⁹/L) × 100] / [lymphocyte count (10⁹/L) × platelet count (10⁹/L)]. It was calculated at two time points: on admission and within 1 h of arrival in the ICU. Cardiac surgery was categorized as valve, coronary artery bypass grafting (CABG), aorta, valve + CABG, valve + large vessels, and others.
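As a rough illustration of these definitions (not the study's own code), the sketch below computes NLPR from the routine blood counts and flags AKI by the KDIGO serum-creatinine criterion; the column and function names are hypothetical, and the urine-output criterion is omitted.

```r
# Illustrative sketch: NLPR and the KDIGO creatinine-based AKI flag.
# Assumed inputs: neut, lymph, plt in 10^9/L; SCr values in umol/L.
nlpr <- function(neut, lymph, plt) {
  (neut * 100) / (lymph * plt)
}

kdigo_aki <- function(scr_baseline, scr_peak_48h, scr_peak_7d) {
  rise_abs <- (scr_peak_48h - scr_baseline) >= 26.5   # absolute rise within 48 h
  rise_rel <- scr_peak_7d >= 1.5 * scr_baseline       # >= 1.5x baseline within 7 days
  rise_abs | rise_rel
}

# Example: preoperative NLPR for a patient with typical values from Table 2
nlpr(neut = 3.9, lymph = 1.7, plt = 182)   # approximately 1.26
```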
Statistical analysis Statistical analyses were conducted using R 3.6.1 software (R Core Team). Continuous variables are presented as mean ± standard deviation or median with inter-quartile range (IQR). Categorical variables are expressed as numbers and percentages (‘Hmisc’ package). Missing data were found in less than 10% of the records, mainly for NYHA grade, BMI, and albumin. We imputed missing values by multiple imputation (‘mice’ package in R, method=‘rf’, m = 5, maxit = 5). Student’s t test and the Wilcoxon test were applied to compare the distributions of continuous variables. The Pearson test and Fisher’s exact test were applied to compare the proportions of categorical variables (‘gmodels’ package). Given the prior hypothesis of a dose-response relationship, we used restricted cubic splines with three knots at the 10th, 50th and 90th centiles to flexibly model the association of neutrophil count, lymphocyte count, platelet count, and NLPR with CSA-AKI incidence (‘tidyverse’, ‘rms’ and ‘ggplot2’ packages). Non-linearity was tested using a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. With the median value as the boundary, we used a logistic model to calculate odds ratios (ORs) for each 1/10 unit increase in the predictor variables. Moreover, the generalized estimating equation (GEE) was applied for multivariate analysis, combining the time-dependent indicators of leukocyte counts and NLPR (‘geepack’ package). We applied the purposeful selection of covariates [21]. In model 1, we only included NLPR at the two time points for univariate analysis. Then we included other factors gradually: (i) model 2: model 1 plus demographic factors; (ii) model 3: model 2 plus preoperative factors; (iii) model 4: model 3 plus intra-/postoperative factors. The predictive power was quantified through the area under the receiver operating characteristic (AUROC) curve (‘pROC’ and ‘ggplot2’ packages). The DeLong test was used to compare ROC curves. All statistical tests were two-tailed, and we regarded p < 0.05 as the criterion for statistical significance.
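To make the two central modelling steps concrete, here is a minimal R sketch of a 3-knot restricted-cubic-spline logistic model and a GEE combining the two NLPR time points, using the packages named above. The data frames and column names (dat, dat_long, aki, nlpr_pre, nlpr, time_point, patient_id, age, sex, bmi) are assumptions for illustration, not the study's code.

```r
library(rms)      # restricted cubic splines via rcs() and lrm()
library(geepack)  # generalized estimating equations via geeglm()

# --- Dose-response: logistic model with a 3-knot restricted cubic spline ---
# rcs(x, 3) places knots at the 10th, 50th and 90th percentiles by default,
# matching the knot positions described in the Methods.
dd <- datadist(dat); options(datadist = "dd")
fit_rcs <- lrm(aki ~ rcs(nlpr_pre, 3), data = dat)
anova(fit_rcs)    # reports a test for the nonlinear spline term
                  # (the study used a likelihood-ratio comparison)

# --- GEE: both NLPR measurements (admission and ICU) per patient, long format ---
fit_gee <- geeglm(aki ~ nlpr + time_point + age + sex + bmi,
                  id = patient_id, family = binomial,
                  corstr = "exchangeable", data = dat_long)
summary(fit_gee)
```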
Results
Baseline characteristics and CSA-AKI incidence Of the 2618 enrolled participants who underwent cardiac surgeries, 2449 patients met the inclusion criteria and were included in the formal analysis (Supplement Figure 1). The average age was 57.2 ± 12.8 years, and male patients accounted for 59.2% (n = 1449). According to the KDIGO classification, 838 (34.2%) patients developed AKI after surgery. Of them, 658 patients were classified as Stage 1, and another 131 and 49 cases were Stage 2 and Stage 3, respectively. The rate of AKI-RRT was 1.2% (n = 30). Table 1 lists the major factors associated with CSA-AKI. Patients who were older, male, had a higher BMI, or suffered from hypertension and diabetes were more likely to develop CSA-AKI. Cardiac surgery history, coronary arteriography, and attenuated LVEF were positively associated with CSA-AKI. In patients who already had preexisting renal dysfunction at admission, the risk of CSA-AKI was also increased. In terms of intraoperative factors, patients who underwent complicated cardiac surgery (aorta, valve combined with CABG, or large vessels) were more susceptible to CSA-AKI. The application of CPB and blood transfusion, prolonged ACCT, and high ultrafiltration volume also had notable impacts on CSA-AKI. With increasing APACHE II/Euro score, the postoperative risk of CSA-AKI also kept growing. Demographic and perioperative factors of CSA-AKI (n = 2449). †Refers to Student’s t test (for continuous variables which were normally distributed). #Refers to Pearson test (for binary and unordered categorical variables). *Refers to Fisher’s exact test (for categorical variables when sample sizes are small). ‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed). ACCT: aortic cross-clamp time; AKI: acute kidney injury; aOR: adjusted odds ratio; CABG: coronary artery bypass grafting; CI: confidence interval; CPB: cardiac pulmonary bypass; CSA-AKI: cardiac surgery-associated acute kidney injury; eGFR: estimated glomerular filtration rate; LVEF: left ventricular ejection fractions; NYHA: New York heart association; PAH: pulmonary arterial hypertension; SCr: serum creatinine; SUA: serum uric acid.
Dynamics of peripheral blood cell counts and NLPR The blood analysis was summarized in Table 2. Compared with non-AKI patients, preoperative neutrophil counts (3.9 ± 2.4 vs. 3.5 ± 1.6, p < 0.001; aOR 1.07 (95% CI 1.02–1.12), p < 0.001) and NLPR (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; aOR 1.15 (95% CI 1.09–1.22), p < 0.001) were higher in AKI patients. Lymphocyte (1.7 ± 0.6 vs. 1.9 ± 0.7, p < 0.001; aOR 0.74 (95% CI 0.64–0.85), p < 0.001) and platelet (182.2 ± 59.6 vs. 196.4 ± 60.9, p < 0.001; aOR 0.99 (95% CI 0.99–1.00), p < 0.001) counts were negatively associated with AKI risk. Due to the posttraumatic stress response, blood cell counts changed markedly after surgery; nevertheless, patients with a higher postoperative NLPR still had an increased risk of developing AKI (12.4[7.5, 20.0] vs. 10.6[6.4,16.7], p < 0.001; aOR 1.02 (95% CI 1.02–1.03), p < 0.001). Distribution of peripheral blood cell counts and NLPR in AKI and non-AKI patients (n = 2449). aOR was adjusted by age, gender, and body mass index. †Refers to Student’s t test (for continuous variables which were normally distributed). ‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed). AKI: acute kidney injury; aOR: adjusted odds ratio; ICU: intensive care unit; NLPR: neutrophil-to-platelet*lymphocyte ratio. Dose-response relationship of NLPR and AKI risk In Figure 1, we used restricted cubic splines to visualize the dose-response relationship of blood cell counts with CSA-AKI incidence.
The preoperative neutrophil count showed a ‘U’-shaped relationship with AKI: the risk decreased within the lower range of neutrophil counts, reached its lowest point around the median (3.2 × 10⁹/L), and then increased (p for non-linearity <0.001). The OR per 1 unit increase was 0.76 (95% CI 0.61–0.95) below 3.2 × 10⁹/L, and 1.11 (95% CI 1.04–1.17) above 3.3 × 10⁹/L. The risk of CSA-AKI was relatively flat until around the median of lymphocyte count, platelet count, and NLPR, and then started to increase rapidly. The median of preoperative NLPR was 1.0. Above the median, the OR per 1 unit increase was 1.13 (95% CI 1.06–1.19); in comparison, no significant association was found below the median (OR 1.22, 95% CI 0.63–2.38). Similarly, we observed a ‘J’-shaped curve between postoperative NLPR and AKI, with the OR increasing along with postoperative NLPR above the median of 11.0 (OR 1.02, 95% CI 1.01–1.03). We then classified the NLPR values into five groups (Table 3). Compared with the preoperative NLPR level of 0.50–0.99, AKI risk kept growing in the higher NLPR groups (the adjusted OR increased from 1.23 to 2.61). As for postoperative NLPR, the incidence increased markedly from 31.6% in the reference group to 46.2% in the highest group (aOR rose from 1.13 to 2.09). Dose-response relationship of CSA-AKI and peripheral blood cell counts and NLPR. Perioperative NLPR levels and their association with CSA-AKI (n = 2449). aOR was adjusted by age, gender, and body mass index. AKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio. Antibiotic treatment and the risk of CSA-AKI in high NLPR groups (n = 365). aOR was adjusted by age, gender, and body mass index. AKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.
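As a hedged illustration of how per-unit ORs above and below the median could be obtained, the sketch below fits separate logistic models on either side of the preoperative NLPR median; the data frame and column names (dat, aki, nlpr_pre) are assumptions, and the study's exact modelling steps may differ.

```r
# Illustrative only: piecewise per-unit odds ratios around the preoperative NLPR median.
med <- median(dat$nlpr_pre, na.rm = TRUE)   # about 1.0 in this cohort

fit_above <- glm(aki ~ nlpr_pre, family = binomial,
                 data = subset(dat, nlpr_pre >= med))
fit_below <- glm(aki ~ nlpr_pre, family = binomial,
                 data = subset(dat, nlpr_pre < med))

# Per-unit ORs with Wald 95% CIs; the nlpr_pre row is the quantity of interest.
exp(cbind(OR = coef(fit_above), confint.default(fit_above)))
exp(cbind(OR = coef(fit_below), confint.default(fit_below)))
```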
Preoperative antibiotic use in patients with high NLPR We further retrieved the medication data, with a focus on antibiotic use, for the 365 patients whose preoperative NLPR exceeded 2.0 (Table 4). Of them, 22.4% (n = 82) had received antibiotics before cardiac surgery. The incidence of AKI in patients receiving antibiotic treatment was 32.9%, significantly lower than in those without antibiotics (54.8%, p < 0.001); the aOR was estimated at 0.44 (95% CI 0.26–0.76). Subgroup analysis revealed that patients in the second-highest NLPR group (2.00–2.99) benefited most from antibiotic treatment (aOR = 0.39, 95% CI 0.16–0.95). Prediction model of CSA-AKI based on perioperative NLPR We combined the NLPR values at the two time points to predict the occurrence of CSA-AKI (Figure 2). To account for the correlation among repeated measures, we applied the generalized estimating equation (GEE) for the multivariable analyses. In the model with NLPR alone, the AUC was 0.621 (95% CI 0.604–0.638); the optimal NLPR cutoff was 1.3, with a sensitivity of 51.0% and a specificity of 67.0%. As variables were added progressively, the AUC increased from 0.671 (95% CI 0.655–0.687) to 0.730 (95% CI 0.715–0.745), and then to 0.806 (95% CI 0.793–0.819). We also evaluated the contribution of NLPR in the full model: the DeLong test showed that the AUC of the model with NLPR was significantly higher than that of the model without NLPR (0.806 vs. 0.799, Z = 4.081, p < 0.001). Prediction model of CSA-AKI through the generalized estimating equation (GEE). (3A: the optimal NLPR cutoff was set at 1.3; 3B: model 1 enrolled only the NLPR values at the two time points, model 2 was model 1 plus demographic factors, model 3 was model 2 plus preoperative factors, model 4 was model 3 plus intra-/postoperative factors; 3C: comparison between the models with and without NLPR).
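The prediction model above combines the repeated NLPR measurements through a GEE and compares ROC curves with the DeLong test. A minimal sketch of that workflow in R is shown below; it is not the authors' code, and `dat_long`, its column names, and the covariate set are assumptions.

```r
# Illustrative sketch only (not the authors' code): GEE-based prediction of CSA-AKI
# from repeated NLPR values, plus ROC/DeLong evaluation with pROC.
# `dat_long` is assumed to have one row per patient per time point, with columns
# id, aki (0/1), nlpr, time (preoperative / ICU admission), age, gender, bmi.
library(geepack)
library(pROC)

# Model 1: NLPR at the two time points only; repeated measures per patient are
# handled through the working correlation structure
m1 <- geeglm(aki ~ nlpr + time, id = id, data = dat_long,
             family = binomial("logit"), corstr = "exchangeable")

# Model 2: plus demographic covariates (further models would add pre-, intra- and
# postoperative factors in the same stepwise fashion)
m2 <- geeglm(aki ~ nlpr + time + age + gender + bmi, id = id, data = dat_long,
             family = binomial("logit"), corstr = "exchangeable")

# Discrimination of the NLPR-only model and a Youden-type threshold on its
# predicted probabilities (the paper reports the corresponding NLPR cutoff of 1.3)
roc1 <- roc(dat_long$aki, fitted(m1))
auc(roc1)
coords(roc1, "best", ret = c("threshold", "sensitivity", "specificity"))

# DeLong test comparing the ROC curves of the two models
roc2 <- roc(dat_long$aki, fitted(m2))
roc.test(roc1, roc2, method = "delong")
```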
null
null
[ "Study design and patient selection", "Data collection", "Definition and classification", "Statistical analysis", "Baseline characteristics and CSA-AKI incidence", "Dynamics of peripheral blood cell counts and NLPR", "Dose-response relationship of NLPR and AKI risk", "Preoperative antibiotic use in patients with high NLPR", "Prediction model of CSA-AKI based on perioperative NLPR" ]
[ "This study was designed as a prospective cohort study based on the ‘Zhongshan Cardiovascular Surgery Cohort’. This cohort was established in 2009 and contained nearly 30,000 subjects with complete demographic and clinical records. We recruited patients who underwent cardiac surgeries in 1 July 2019 to 31 December 2019. The inclusion criteria of participants were as follows: (1) aged over 18 years, (2) underwent cardiac surgery, (3) did not receive renal replacement therapy (RRT) before surgery, and (4) had completed renal and cardiac data. We further excluded those who were under 18 years old, received the heart transplant and RRT, or took less than one serum creatinine (SCr) test. This study was approved by the Institutional Committee of Zhongshan Hospital (B2017-039). Participants were given informed consent prior to recruiting, and their identity information was replaced as a unique code during analysis.", "Patients’ demographics were collected through a structured questionnaire. Clinical data with timestamps were extracted from the electronic medical records (EMR). We selected thirty-seven factors for analysis. They were divided into five groups in chronological order: (1) demographics and comorbidity: age, gender, body mass index (BMI), hypertension, diabetes, unstable angina pectoris (UAP), and myocardial infarction (MI) within the last 3 months; (2) preoperative cardio-renal function [within 24 h of admission to hospital]: cardiac surgery history, coronary angiography, New York heart association (NYHA) grade, left ventricular eject fractions (LVEF), pulmonary artery pressure (PAH), SCr, blood urea nitrogen (BUN), estimated glomerular filtration rate (eGFR), serum uric acid (SUA) and proteinuria; (3) perioperative blood routine test [within 24 h of admission]: glucose, albumin, hemoglobin, peripheral blood cell counts including erythrocyte, total leukocyte, neutrophil, lymphocyte and platelet; (4) intraoperative factors: cardiopulmonary bypass (CPB), surgery type, aortic cross-clamp time (ACCT), ultrafiltration volume and blood transfusion; (5) postoperative factors (within 1 h of arrival in the intensive care unit [ICU]): APACHE II score, Euro score and blood routine test. Moreover, we also retrieved the preoperative medication data for patients with higher NLPR to evaluate the effect of antibiotic use on AKI occurrence.", "According to the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) criteria [20], AKI was defined as the absolute value of the SCr increase ≥0.3 mg/dL (≥26.5 μmol/L) within 48 h or an increase ≥1.5 times baseline levels within 7 days, or a urine output <0.5 mL/kg/h lasting over 6 h. The SCr value on admission was regarded as the baseline level. CSA-AKI was diagnosed by comparing it with the postoperative SCr value. For those receiving multiple SCr tests after surgery, we defined the highest one as the peak value. The equation of NLPR was factorized as:\nNeutrophil−to−Platelet*Lymphocyte Ratio (NLPR)=Neutrophil count (109/L)\n*100Lymphocyte count (109/L)*Platelet count(109/L)\n\nIt was calculated at two-time points: on admission and within 1 h at ICU. Cardiac surgery was typed into the valve, coronary artery bypass grafting (CABG), aorta, valve + CABG, valve + large vessels and others.", "Statistics were conducted by using R 3.6.1 software (R core team). Continuous variables are presented as mean ± standard deviation or median with inter-quartile range (IQR). Categorical variables were expressed in numbers and percentages (‘Hmisc’ package). 
Missing data were found in less than 10% of the records, mainly in NYHA grade, BMI, and albumin. We imputed missing values by multiple imputation (‘mice’ package in R, method=‘rf’, m = 5, maxit = 5). Student’s t test and the Wilcoxon test were applied to compare the distributions of continuous variables. The Pearson test and Fisher’s exact test were applied to compare the proportions of categorical variables (‘gmodels’ package). Given the prior hypothesis of a dose-response relationship, we used restricted cubic splines with three knots at the 10th, 50th and 90th centiles to flexibly model the association of neutrophil count, lymphocyte count, platelet count, and NLPR with CSA-AKI incidence (‘tidyverse’, ‘rms’ and ‘ggplot2’ packages). Non-linearity was tested with a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. Taking the median as the boundary, we used a logistic model to calculate odds ratios (ORs) for each 1/10 unit increase in the predictor variables. Moreover, the generalized estimating equation (GEE) was applied for the multivariable analysis, which combined the time-dependent indicators of leukocyte counts and NLPR (‘geepack’ package). We applied the purposeful selection of covariates [21]. In model 1, we included only the NLPR values at the two time points for univariate analysis. We then added other factors stepwise: (i) model 2: model 1 plus demographic factors; (ii) model 3: model 2 plus preoperative factors; (iii) model 4: model 3 plus intra-/postoperative factors. Predictive power was quantified by the area under the receiver operating characteristic (AUROC) curve (‘pROC’ and ‘ggplot2’ packages). The DeLong test was used to compare ROC curves. All statistical tests were two-tailed, and p < 0.05 was regarded as the criterion for statistical significance.", "Of the 2618 enrolled participants who underwent cardiac surgery, 2449 patients met the inclusion criteria and were included in the formal analysis (Supplement Figure 1). The average age was 57.2 ± 12.8 years, and male patients accounted for 59.2% (n = 1449). According to the KDIGO classification, 838 (34.2%) patients developed AKI after surgery; 658 were in stage 1, and another 131 and 49 cases were in stages 2 and 3, respectively. The rate of AKI requiring RRT was 1.2% (n = 30). Table 1 lists the major factors associated with CSA-AKI. Patients who were older, male, had a higher BMI, or suffered from hypertension or diabetes were more likely to develop CSA-AKI. Previous cardiac surgery, coronary arteriography, and attenuated LVEF were positively associated with CSA-AKI. The risk of CSA-AKI was also increased in patients with preexisting renal dysfunction at admission. In terms of intraoperative factors, patients who underwent complicated cardiac surgery (aorta, valve combined with CABG, or large vessels) were more susceptible to CSA-AKI. The application of CPB and blood transfusion, prolonged ACCT, and high ultrafiltration volume also had notable impacts on CSA-AKI. 
With the increase of APACHE II/Euro score, the postoperative risk CSA-AKI also kept growing.\nDemographic and perioperative factors of CSA-AKI (n = 2449).\n†Refers to Student’s t test (for continuous variables which were normally distributed).\n#Refers to Pearson test (for binary and unordered categorical variables).\n*Refers to Fisher’s exact test (for categorical variables when sample sizes are small).\n‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed).\nACCT: aortic cross-clamp time; AKI: acute kidney injury; aOR: adjusted odds ratio; CABG: coronary artery bypass grafting; CI: confidence interval; CPB: cardiac pulmonary bypass; CSA-AKI: cardiac surgery-associated acute kidney injury; eGFR: estimated glomerular filtration rate; LVEF: left ventricular ejection fractions; NYHA: New York heart association; PAH: pulmonary arterial hypertension; SCr: serum creatinine; SUA: serum uric acid.", "The blood analysis was summarized in Table 2. Compared with non-AKI patients, preoperative neutrophil counts (3.9 ± 2.4 vs. 3.5 ± 1.6, p < 0.001; aOR 1.07 (95% CI 1.02–1.12), p < 0.001), and NLPR (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; aOR 1.15 (95% CI 1.09–1.22), p < 0.001) were higher in AKI patients. Lymphocyte (1.7 ± 0.6 vs. 1.9 ± 0.7, p < 0.001; aOR 0.74 (95% CI 0.64–0.85), p < 0.001) and platelet (182.2 ± 59.6 vs. 196.4 ± 60.9, p < 0.001; aOR 0.99 (95% CI 0.99–1.00), p < 0.001) counts were negatively associated with AKI risk. Due to the posttraumatic stress response after surgery, blood cell counts changed a lot. While patients with a higher postoperative NLPR still had an increased risk of developing AKI (12.4[7.5, 20.0] vs. 10.6[6.4,16.7], p < 0.001; aOR 1.02 (95% CI 1.02–1.03), p < 0.001).\nDistribution of peripheral blood cell counts and NLPR in AKI and non-AKI patients (n = 2449).\naOR was adjusted by age, gender, and body mass index.\n†Refers to Student’s t test (for continuous variables which were normally distributed).\n‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed).\nAKI: acute kidney injury; aOR: adjusted odds ratio; ICU: intensive care unit;NLPR: neutrophil-to-platelet*lymphocyte ratio.", "In Figure 1, we used restricted cubic splines to visualize the dose-response relationship of blood cells with CSA-AKI incidence. Considering the ‘U’ shape relationship between preoperative neutrophil counts and AKI, the plot showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median (3.2*109). It then increased after that (p for non-linearity <0.001). The OR per 1 unit increase was 0.76 (95% CI 0.61–0.95) below 3.2*109, and 1.11 (95% CI 1.04–1.17) above 3.3*109. The risk of CSA-AKI was relatively flat until around the median of lymphocyte, platelet, NLPR. It then started to increase rapidly afterward. The median of preoperative NLPR was 1.0. Above the median, the OR per 1 unit increase was 1.13 (95% CI 1.06–1.19). In comparison, no significant association was found below the median (OR 1.22 95% CI 0.63–2.38). Similarly, we observed such a ‘J’ shape curve between postoperative NLPR and AKI. The OR increased along with the NLPR increase after surgery below the median of 11.0 (OR 1.02 95% CI 1.01–1.03). Then we classified the NLPR value into five groups (Table 3). Compared with the preoperative NLPR level of 0.50–0.99, AKI risk kept growth in higher NLPR groups (the adjusted OR increased from 1.23 to 2.61). 
As for postoperative NLPR, the incidence was increased remarkably from 31.6% in the reference group to 46.2% in the highest group (aOR rose from 1.13 to 2.09).\nDose-response relationship of CSA-AKI and peripheral blood cell counts and NLPR.\nPerioperative NLPR levels and their association with CSA-AKI (n = 2449).\naOR was adjusted by age, gender, and body mass index.\nAKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.\nAntibiotic treatment and the risk of CSA-AKI in high NLPR groups (n = 365).\naOR was adjusted by age, gender, and body mass index.\nAKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.", "We further retrieved the medication data, especially the antibiotic use, in 365 patients whose preoperative NLPR was over 2.0 (Table 4). Of them, 22.4% (n = 82) had ever used antibiotics before cardiac surgery. The incidence of AKI in patients receiving antibiotic treatment was 32.9%, significantly lower than that without antibiotics (54.8%, p < 0.001). The aOR was estimated as 0.44 (95% CI 0.26–0.76). Subgroup analysis revealed that patients in the second-highest NLPR group (2.00–2.99) could benefit more from antibiotic treatment (aOR = 0.39, 95% CI 0.16–0.95).", "We combined the NLPR values in two-time points to predict CSA-AKI occurrence (Figure 2). Considering the correlation among repeated measures, we applied the generalized estimating equation (GEE) for multiple analyses. In the model with NLPR alone, the AUC value was 0.621, 95%CI 0.604–0.638. The optimal cutoff of NLPR was set at 1.3, which has a sensitivity of 51.0% and specificity of 67.0%. With the enrollment of variables progressively, the AUC values increased from 0.671 (95% CI 0.655–0.687) to 0.730 (95% CI 0.715–0.745), and then to 0.806 (95% CI 0.793–0.819). Moreover, we also evaluated the attribution of NLPR to AKI in the full model. Delong test revealed that the AUC of model with NLPR was significantly higher than the AUC without NLPR (AUC: 0.806 vs. 0.799, Z = 4.081, p < 0.001).\nPrediction model of CSA-AKI through the generalized estimating equation (GEE). (3A: the optimal NLPR cutoff was set at 1.3; 3B: model 1 only enrolled the NLPR values in two-time points, model2 was model 1 plus demographic factors; model 3 was model 2 plus preoperative factors, model 4 was model 3 plus intra-/postoperative factor; 3C: comparison between the model with and without NLPR)." ]
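The section texts above define the NLPR and describe grouping it into five categories with odds ratios adjusted for age, gender, and BMI (Table 3). The following R sketch illustrates one way to set up that kind of analysis; it is not the authors' code, the column names are assumed, and the exact cut points are inferred from the groups named in the text (reference 0.50–0.99, second-highest 2.00–2.99).

```r
# Illustrative sketch only (not the authors' code): compute NLPR from peripheral
# counts (10^9/L) and estimate adjusted ORs for CSA-AKI across assumed NLPR groups.
library(dplyr)

dat <- dat %>%
  mutate(
    # NLPR = neutrophil count x 100 / (lymphocyte count x platelet count)
    nlpr_pre = neutrophil_pre * 100 / (lymphocyte_pre * platelet_pre),
    nlpr_group = cut(nlpr_pre,
                     breaks = c(-Inf, 0.5, 1.0, 2.0, 3.0, Inf),
                     labels = c("<0.50", "0.50-0.99", "1.00-1.99",
                                "2.00-2.99", ">=3.00"),
                     right = FALSE)
  )

# Adjusted odds ratios by NLPR group (adjusted for age, gender and BMI),
# with 0.50-0.99 as the reference category; Wald confidence intervals
dat$nlpr_group <- relevel(dat$nlpr_group, ref = "0.50-0.99")
fit <- glm(aki ~ nlpr_group + age + gender + bmi, data = dat, family = binomial)
exp(cbind(aOR = coef(fit), confint.default(fit)))
```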
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Study design and patient selection", "Data collection", "Definition and classification", "Statistical analysis", "Results", "Baseline characteristics and CSA-AKI incidence", "Dynamics of peripheral blood cell counts and NLPR", "Dose-response relationship of NLPR and AKI risk", "Preoperative antibiotic use in patients with high NLPR", "Prediction model of CSA-AKI based on perioperative NLPR", "Discussion", "Supplementary Material" ]
[ "Acute kidney injury (AKI) is a complicated and multifactorial clinical syndrome, which is characterized by a sudden decrease in renal function. Cardiac surgery is the second commonest reason for AKI behind sepsis [1]. It was reported that the incidence of cardiac surgery-associated AKI (CSA-AKI) varied from 5% to 42% [2]. CSA-AKI not only prolongs the length of stay in the intensive care unit (ICU) and hospitalization but increases the cost of care and mortality [3,4]. Early identification of patients at high risk of CSA-AKI and prompt modification of reversible factors will optimize the clinical outcome and avoid unnecessary kidney damage.\nPreoperative inflammation is commonly encountered in patients receiving cardiac surgery. It can be caused either by heart infection (such as infectious endocarditis) or other chronic infectious diseases (COPD). In clinical practice, the neutrophil to lymphocyte ratio (NLR) is an easily effective and low-cost biomarker of inflammatory status [5,6]. Previous studies have demonstrated its value in early prediction of infectious conditions [5,7], cancers [8–13] and cardiovascular diseases [14]. As the initiator of inflammation, neutrophils can quickly migrate from peripheral blood circulation to kidney under the action of chemokines. It can result in CSA-AKI by blocking renal micro-vessels and secreting oxygen free radicals and proteases. Besides, platelet-neutrophil aggregates are formed during the inflammatory process [15] and associated with diverse pathologies [16,17]. It also can cause renal damage through vascular lesions and tissue destruction [18]. To this end, NLPR may be the more promising biomarker of inflammation than NLR in CSA-AKI prediction. Although several studies have shown that high NLPR was associated with postoperative AKI in cardiovascular surgery patients [19], there is still no research that considers the dynamics of NLPR in the prediction model. The effect of NLPR on AKI is not a simple linear relationship. It is crucial to delineate the dose-response relationship between NLPR and AKI risk and the cutoff of NLPR for CSA-AKI prediction. This study aims to explore the significance of the dynamic changes in perioperative NLPR for predicting CSA-AKI.", "Study design and patient selection This study was designed as a prospective cohort study based on the ‘Zhongshan Cardiovascular Surgery Cohort’. This cohort was established in 2009 and contained nearly 30,000 subjects with complete demographic and clinical records. We recruited patients who underwent cardiac surgeries in 1 July 2019 to 31 December 2019. The inclusion criteria of participants were as follows: (1) aged over 18 years, (2) underwent cardiac surgery, (3) did not receive renal replacement therapy (RRT) before surgery, and (4) had completed renal and cardiac data. We further excluded those who were under 18 years old, received the heart transplant and RRT, or took less than one serum creatinine (SCr) test. This study was approved by the Institutional Committee of Zhongshan Hospital (B2017-039). Participants were given informed consent prior to recruiting, and their identity information was replaced as a unique code during analysis.\nThis study was designed as a prospective cohort study based on the ‘Zhongshan Cardiovascular Surgery Cohort’. This cohort was established in 2009 and contained nearly 30,000 subjects with complete demographic and clinical records. We recruited patients who underwent cardiac surgeries in 1 July 2019 to 31 December 2019. 
The inclusion criteria of participants were as follows: (1) aged over 18 years, (2) underwent cardiac surgery, (3) did not receive renal replacement therapy (RRT) before surgery, and (4) had completed renal and cardiac data. We further excluded those who were under 18 years old, received the heart transplant and RRT, or took less than one serum creatinine (SCr) test. This study was approved by the Institutional Committee of Zhongshan Hospital (B2017-039). Participants were given informed consent prior to recruiting, and their identity information was replaced as a unique code during analysis.\nData collection Patients’ demographics were collected through a structured questionnaire. Clinical data with timestamps were extracted from the electronic medical records (EMR). We selected thirty-seven factors for analysis. They were divided into five groups in chronological order: (1) demographics and comorbidity: age, gender, body mass index (BMI), hypertension, diabetes, unstable angina pectoris (UAP), and myocardial infarction (MI) within the last 3 months; (2) preoperative cardio-renal function [within 24 h of admission to hospital]: cardiac surgery history, coronary angiography, New York heart association (NYHA) grade, left ventricular eject fractions (LVEF), pulmonary artery pressure (PAH), SCr, blood urea nitrogen (BUN), estimated glomerular filtration rate (eGFR), serum uric acid (SUA) and proteinuria; (3) perioperative blood routine test [within 24 h of admission]: glucose, albumin, hemoglobin, peripheral blood cell counts including erythrocyte, total leukocyte, neutrophil, lymphocyte and platelet; (4) intraoperative factors: cardiopulmonary bypass (CPB), surgery type, aortic cross-clamp time (ACCT), ultrafiltration volume and blood transfusion; (5) postoperative factors (within 1 h of arrival in the intensive care unit [ICU]): APACHE II score, Euro score and blood routine test. Moreover, we also retrieved the preoperative medication data for patients with higher NLPR to evaluate the effect of antibiotic use on AKI occurrence.\nPatients’ demographics were collected through a structured questionnaire. Clinical data with timestamps were extracted from the electronic medical records (EMR). We selected thirty-seven factors for analysis. They were divided into five groups in chronological order: (1) demographics and comorbidity: age, gender, body mass index (BMI), hypertension, diabetes, unstable angina pectoris (UAP), and myocardial infarction (MI) within the last 3 months; (2) preoperative cardio-renal function [within 24 h of admission to hospital]: cardiac surgery history, coronary angiography, New York heart association (NYHA) grade, left ventricular eject fractions (LVEF), pulmonary artery pressure (PAH), SCr, blood urea nitrogen (BUN), estimated glomerular filtration rate (eGFR), serum uric acid (SUA) and proteinuria; (3) perioperative blood routine test [within 24 h of admission]: glucose, albumin, hemoglobin, peripheral blood cell counts including erythrocyte, total leukocyte, neutrophil, lymphocyte and platelet; (4) intraoperative factors: cardiopulmonary bypass (CPB), surgery type, aortic cross-clamp time (ACCT), ultrafiltration volume and blood transfusion; (5) postoperative factors (within 1 h of arrival in the intensive care unit [ICU]): APACHE II score, Euro score and blood routine test. 
Moreover, we also retrieved the preoperative medication data for patients with higher NLPR to evaluate the effect of antibiotic use on AKI occurrence.\nDefinition and classification According to the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) criteria [20], AKI was defined as an absolute SCr increase of ≥0.3 mg/dL (≥26.5 μmol/L) within 48 h, an increase to ≥1.5 times the baseline level within 7 days, or a urine output <0.5 mL/kg/h lasting over 6 h. The SCr value on admission was regarded as the baseline level. CSA-AKI was diagnosed by comparing it with the postoperative SCr value. For those receiving multiple SCr tests after surgery, we defined the highest one as the peak value. The NLPR was calculated as:\nNLPR = Neutrophil count (10⁹/L) × 100 / [Lymphocyte count (10⁹/L) × Platelet count (10⁹/L)]\n\nIt was calculated at two time points: on admission and within 1 h of arrival at the ICU. Cardiac surgery was classified as valve, coronary artery bypass grafting (CABG), aorta, valve + CABG, valve + large vessels and others.\nStatistical analysis Statistics were conducted using R 3.6.1 software (R Core Team). Continuous variables are presented as mean ± standard deviation or median with inter-quartile range (IQR). Categorical variables were expressed as numbers and percentages (‘Hmisc’ package). Missing data were found in less than 10% of the records, mainly in NYHA grade, BMI, and albumin. We imputed missing values by multiple imputation (‘mice’ package in R, method=‘rf’, m = 5, maxit = 5). Student’s t test and the Wilcoxon test were applied to compare the distributions of continuous variables. The Pearson test and Fisher’s exact test were applied to compare the proportions of categorical variables (‘gmodels’ package). Given the prior hypothesis of a dose-response relationship, we used restricted cubic splines with three knots at the 10th, 50th and 90th centiles to flexibly model the association of neutrophil count, lymphocyte count, platelet count, and NLPR with CSA-AKI incidence (‘tidyverse’, ‘rms’ and ‘ggplot2’ packages). Non-linearity was tested with a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. Taking the median as the boundary, we used a logistic model to calculate odds ratios (ORs) for each 1/10 unit increase in the predictor variables. Moreover, the generalized estimating equation (GEE) was applied for the multivariable analysis, which combined the time-dependent indicators of leukocyte counts and NLPR (‘geepack’ package). 
We applied the purposeful selection of covariates [21]. In model 1, we only included NLPR in two-time points for univariate analysis. Then we included other factors gradually: (i) model 2: model 1 plus demographic factors; (ii) model 3: model 2 plus preoperative factors; (iii) model 4: model 3 plus intra-/postoperative factors. The predictive power was quantified through the area under the receiver operating characteristic (AUROC) curves (‘pROC’ and ‘ggplot2’ packages). Delong test was used to compare the significance of ROC curves. All statistical tests were two-tailed, and we regarded p < 0.05 as the criterion for statistical significance.\nStatistics were conducted by using R 3.6.1 software (R core team). Continuous variables are presented as mean ± standard deviation or median with inter-quartile range (IQR). Categorical variables were expressed in numbers and percentages (‘Hmisc’ package). Missing data were found in less than 10% of the records, mainly focusing on NYHA grade, BMI, and albumin. We imputed the missing value by the methods of multiple imputations (‘mice’ package in R, method=‘rf’, m = 5, maxit = 5). Student’s t test and Wilcoxon test were applied to compare the distributional difference of continuous variables. Pearson test and Fisher’s exact test were applied to compare the proportional difference of categorical variables (‘gmodels’ package). Given the prior hypothesis of the dose-response relationship, we used restricted cubic splines with three knots at the 10th, 50th and 90th centiles to flexibly model the association of neutrophil count, lymphocyte count, platelet count, and NLPR with CSA-AKI incidence (‘tidyverse’, ‘rms’ and ‘ggplot2’ packages). Non-linearity was tested by using a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. Bounded with the median value, we used a logistic model to calculate odds ratios (ORs) for each 1/10 unit increase in predict variables. Moreover, the generalized estimating equation (GEE) was applied for multivariate analysis, which combined the time-dependent indicators of leukocyte counts and NLPR (‘geepack’ package). We applied the purposeful selection of covariates [21]. In model 1, we only included NLPR in two-time points for univariate analysis. Then we included other factors gradually: (i) model 2: model 1 plus demographic factors; (ii) model 3: model 2 plus preoperative factors; (iii) model 4: model 3 plus intra-/postoperative factors. The predictive power was quantified through the area under the receiver operating characteristic (AUROC) curves (‘pROC’ and ‘ggplot2’ packages). Delong test was used to compare the significance of ROC curves. All statistical tests were two-tailed, and we regarded p < 0.05 as the criterion for statistical significance.", "This study was designed as a prospective cohort study based on the ‘Zhongshan Cardiovascular Surgery Cohort’. This cohort was established in 2009 and contained nearly 30,000 subjects with complete demographic and clinical records. We recruited patients who underwent cardiac surgeries in 1 July 2019 to 31 December 2019. The inclusion criteria of participants were as follows: (1) aged over 18 years, (2) underwent cardiac surgery, (3) did not receive renal replacement therapy (RRT) before surgery, and (4) had completed renal and cardiac data. We further excluded those who were under 18 years old, received the heart transplant and RRT, or took less than one serum creatinine (SCr) test. 
This study was approved by the Institutional Committee of Zhongshan Hospital (B2017-039). Participants were given informed consent prior to recruiting, and their identity information was replaced as a unique code during analysis.", "Patients’ demographics were collected through a structured questionnaire. Clinical data with timestamps were extracted from the electronic medical records (EMR). We selected thirty-seven factors for analysis. They were divided into five groups in chronological order: (1) demographics and comorbidity: age, gender, body mass index (BMI), hypertension, diabetes, unstable angina pectoris (UAP), and myocardial infarction (MI) within the last 3 months; (2) preoperative cardio-renal function [within 24 h of admission to hospital]: cardiac surgery history, coronary angiography, New York heart association (NYHA) grade, left ventricular eject fractions (LVEF), pulmonary artery pressure (PAH), SCr, blood urea nitrogen (BUN), estimated glomerular filtration rate (eGFR), serum uric acid (SUA) and proteinuria; (3) perioperative blood routine test [within 24 h of admission]: glucose, albumin, hemoglobin, peripheral blood cell counts including erythrocyte, total leukocyte, neutrophil, lymphocyte and platelet; (4) intraoperative factors: cardiopulmonary bypass (CPB), surgery type, aortic cross-clamp time (ACCT), ultrafiltration volume and blood transfusion; (5) postoperative factors (within 1 h of arrival in the intensive care unit [ICU]): APACHE II score, Euro score and blood routine test. Moreover, we also retrieved the preoperative medication data for patients with higher NLPR to evaluate the effect of antibiotic use on AKI occurrence.", "According to the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) criteria [20], AKI was defined as the absolute value of the SCr increase ≥0.3 mg/dL (≥26.5 μmol/L) within 48 h or an increase ≥1.5 times baseline levels within 7 days, or a urine output <0.5 mL/kg/h lasting over 6 h. The SCr value on admission was regarded as the baseline level. CSA-AKI was diagnosed by comparing it with the postoperative SCr value. For those receiving multiple SCr tests after surgery, we defined the highest one as the peak value. The equation of NLPR was factorized as:\nNeutrophil−to−Platelet*Lymphocyte Ratio (NLPR)=Neutrophil count (109/L)\n*100Lymphocyte count (109/L)*Platelet count(109/L)\n\nIt was calculated at two-time points: on admission and within 1 h at ICU. Cardiac surgery was typed into the valve, coronary artery bypass grafting (CABG), aorta, valve + CABG, valve + large vessels and others.", "Statistics were conducted by using R 3.6.1 software (R core team). Continuous variables are presented as mean ± standard deviation or median with inter-quartile range (IQR). Categorical variables were expressed in numbers and percentages (‘Hmisc’ package). Missing data were found in less than 10% of the records, mainly focusing on NYHA grade, BMI, and albumin. We imputed the missing value by the methods of multiple imputations (‘mice’ package in R, method=‘rf’, m = 5, maxit = 5). Student’s t test and Wilcoxon test were applied to compare the distributional difference of continuous variables. Pearson test and Fisher’s exact test were applied to compare the proportional difference of categorical variables (‘gmodels’ package). 
Given the prior hypothesis of the dose-response relationship, we used restricted cubic splines with three knots at the 10th, 50th and 90th centiles to flexibly model the association of neutrophil count, lymphocyte count, platelet count, and NLPR with CSA-AKI incidence (‘tidyverse’, ‘rms’ and ‘ggplot2’ packages). Non-linearity was tested by using a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. Bounded with the median value, we used a logistic model to calculate odds ratios (ORs) for each 1/10 unit increase in predict variables. Moreover, the generalized estimating equation (GEE) was applied for multivariate analysis, which combined the time-dependent indicators of leukocyte counts and NLPR (‘geepack’ package). We applied the purposeful selection of covariates [21]. In model 1, we only included NLPR in two-time points for univariate analysis. Then we included other factors gradually: (i) model 2: model 1 plus demographic factors; (ii) model 3: model 2 plus preoperative factors; (iii) model 4: model 3 plus intra-/postoperative factors. The predictive power was quantified through the area under the receiver operating characteristic (AUROC) curves (‘pROC’ and ‘ggplot2’ packages). Delong test was used to compare the significance of ROC curves. All statistical tests were two-tailed, and we regarded p < 0.05 as the criterion for statistical significance.", "Baseline characteristics and CSA-AKI incidence Of the 2618 enrolled participants who underwent cardiac surgeries, 2449 patients met the inclusion criteria and were comprised of formal analysis (Supplement Figure 1). The average age was 57.2 ± 12.8 years old, and male patients accounted for 59.2% (n = 1449). According to KDIGO classification, 838 (34.2%) patients developed AKI after surgery. Of them, 658 patients located in Stage-1, and another 131 and 49 cases were Stage-2 and Stage-3, respectively. The rate of AKI-RRT was 1.2% (n = 30). Table 1 listed the major factors of CSA-AKI. Patients who were elder age, male, had higher BMI, or suffered from hypertension and diabetes were more likely to occur CSA-AKI. Cardio-surgery history, coronary arteriography, attenuated LVEF were positively associated with CSA-AKI. Given that patients already had preexisting renal dysfunction at admission, the risk of CSA-AKI was also increased. In terms of intraoperative factors, patients who underwent complicated cardiac surgery (aorta, valve combined with CABG, or larger vessels) were more susceptible to CSA-AKI. The application of CPB and blood transfusion, prolonged ACCT, and high ultrafiltration volume also had notable impacts on CSA-AKI. 
With the increase of APACHE II/Euro score, the postoperative risk CSA-AKI also kept growing.\nDemographic and perioperative factors of CSA-AKI (n = 2449).\n†Refers to Student’s t test (for continuous variables which were normally distributed).\n#Refers to Pearson test (for binary and unordered categorical variables).\n*Refers to Fisher’s exact test (for categorical variables when sample sizes are small).\n‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed).\nACCT: aortic cross-clamp time; AKI: acute kidney injury; aOR: adjusted odds ratio; CABG: coronary artery bypass grafting; CI: confidence interval; CPB: cardiac pulmonary bypass; CSA-AKI: cardiac surgery-associated acute kidney injury; eGFR: estimated glomerular filtration rate; LVEF: left ventricular ejection fractions; NYHA: New York heart association; PAH: pulmonary arterial hypertension; SCr: serum creatinine; SUA: serum uric acid.\nOf the 2618 enrolled participants who underwent cardiac surgeries, 2449 patients met the inclusion criteria and were comprised of formal analysis (Supplement Figure 1). The average age was 57.2 ± 12.8 years old, and male patients accounted for 59.2% (n = 1449). According to KDIGO classification, 838 (34.2%) patients developed AKI after surgery. Of them, 658 patients located in Stage-1, and another 131 and 49 cases were Stage-2 and Stage-3, respectively. The rate of AKI-RRT was 1.2% (n = 30). Table 1 listed the major factors of CSA-AKI. Patients who were elder age, male, had higher BMI, or suffered from hypertension and diabetes were more likely to occur CSA-AKI. Cardio-surgery history, coronary arteriography, attenuated LVEF were positively associated with CSA-AKI. Given that patients already had preexisting renal dysfunction at admission, the risk of CSA-AKI was also increased. In terms of intraoperative factors, patients who underwent complicated cardiac surgery (aorta, valve combined with CABG, or larger vessels) were more susceptible to CSA-AKI. The application of CPB and blood transfusion, prolonged ACCT, and high ultrafiltration volume also had notable impacts on CSA-AKI. With the increase of APACHE II/Euro score, the postoperative risk CSA-AKI also kept growing.\nDemographic and perioperative factors of CSA-AKI (n = 2449).\n†Refers to Student’s t test (for continuous variables which were normally distributed).\n#Refers to Pearson test (for binary and unordered categorical variables).\n*Refers to Fisher’s exact test (for categorical variables when sample sizes are small).\n‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed).\nACCT: aortic cross-clamp time; AKI: acute kidney injury; aOR: adjusted odds ratio; CABG: coronary artery bypass grafting; CI: confidence interval; CPB: cardiac pulmonary bypass; CSA-AKI: cardiac surgery-associated acute kidney injury; eGFR: estimated glomerular filtration rate; LVEF: left ventricular ejection fractions; NYHA: New York heart association; PAH: pulmonary arterial hypertension; SCr: serum creatinine; SUA: serum uric acid.\nDynamics of peripheral blood cell counts and NLPR The blood analysis was summarized in Table 2. Compared with non-AKI patients, preoperative neutrophil counts (3.9 ± 2.4 vs. 3.5 ± 1.6, p < 0.001; aOR 1.07 (95% CI 1.02–1.12), p < 0.001), and NLPR (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; aOR 1.15 (95% CI 1.09–1.22), p < 0.001) were higher in AKI patients. Lymphocyte (1.7 ± 0.6 vs. 1.9 ± 0.7, p < 0.001; aOR 0.74 (95% CI 0.64–0.85), p < 0.001) and platelet (182.2 ± 59.6 vs. 
196.4 ± 60.9, p < 0.001; aOR 0.99 (95% CI 0.99–1.00), p < 0.001) counts were negatively associated with AKI risk. Due to the posttraumatic stress response after surgery, blood cell counts changed a lot. While patients with a higher postoperative NLPR still had an increased risk of developing AKI (12.4[7.5, 20.0] vs. 10.6[6.4,16.7], p < 0.001; aOR 1.02 (95% CI 1.02–1.03), p < 0.001).\nDistribution of peripheral blood cell counts and NLPR in AKI and non-AKI patients (n = 2449).\naOR was adjusted by age, gender, and body mass index.\n†Refers to Student’s t test (for continuous variables which were normally distributed).\n‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed).\nAKI: acute kidney injury; aOR: adjusted odds ratio; ICU: intensive care unit;NLPR: neutrophil-to-platelet*lymphocyte ratio.\nThe blood analysis was summarized in Table 2. Compared with non-AKI patients, preoperative neutrophil counts (3.9 ± 2.4 vs. 3.5 ± 1.6, p < 0.001; aOR 1.07 (95% CI 1.02–1.12), p < 0.001), and NLPR (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; aOR 1.15 (95% CI 1.09–1.22), p < 0.001) were higher in AKI patients. Lymphocyte (1.7 ± 0.6 vs. 1.9 ± 0.7, p < 0.001; aOR 0.74 (95% CI 0.64–0.85), p < 0.001) and platelet (182.2 ± 59.6 vs. 196.4 ± 60.9, p < 0.001; aOR 0.99 (95% CI 0.99–1.00), p < 0.001) counts were negatively associated with AKI risk. Due to the posttraumatic stress response after surgery, blood cell counts changed a lot. While patients with a higher postoperative NLPR still had an increased risk of developing AKI (12.4[7.5, 20.0] vs. 10.6[6.4,16.7], p < 0.001; aOR 1.02 (95% CI 1.02–1.03), p < 0.001).\nDistribution of peripheral blood cell counts and NLPR in AKI and non-AKI patients (n = 2449).\naOR was adjusted by age, gender, and body mass index.\n†Refers to Student’s t test (for continuous variables which were normally distributed).\n‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed).\nAKI: acute kidney injury; aOR: adjusted odds ratio; ICU: intensive care unit;NLPR: neutrophil-to-platelet*lymphocyte ratio.\nDose-response relationship of NLPR and AKI risk In Figure 1, we used restricted cubic splines to visualize the dose-response relationship of blood cells with CSA-AKI incidence. Considering the ‘U’ shape relationship between preoperative neutrophil counts and AKI, the plot showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median (3.2*109). It then increased after that (p for non-linearity <0.001). The OR per 1 unit increase was 0.76 (95% CI 0.61–0.95) below 3.2*109, and 1.11 (95% CI 1.04–1.17) above 3.3*109. The risk of CSA-AKI was relatively flat until around the median of lymphocyte, platelet, NLPR. It then started to increase rapidly afterward. The median of preoperative NLPR was 1.0. Above the median, the OR per 1 unit increase was 1.13 (95% CI 1.06–1.19). In comparison, no significant association was found below the median (OR 1.22 95% CI 0.63–2.38). Similarly, we observed such a ‘J’ shape curve between postoperative NLPR and AKI. The OR increased along with the NLPR increase after surgery below the median of 11.0 (OR 1.02 95% CI 1.01–1.03). Then we classified the NLPR value into five groups (Table 3). Compared with the preoperative NLPR level of 0.50–0.99, AKI risk kept growth in higher NLPR groups (the adjusted OR increased from 1.23 to 2.61). 
As for postoperative NLPR, the incidence was increased remarkably from 31.6% in the reference group to 46.2% in the highest group (aOR rose from 1.13 to 2.09).\nDose-response relationship of CSA-AKI and peripheral blood cell counts and NLPR.\nPerioperative NLPR levels and their association with CSA-AKI (n = 2449).\naOR was adjusted by age, gender, and body mass index.\nAKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.\nAntibiotic treatment and the risk of CSA-AKI in high NLPR groups (n = 365).\naOR was adjusted by age, gender, and body mass index.\nAKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.\nIn Figure 1, we used restricted cubic splines to visualize the dose-response relationship of blood cells with CSA-AKI incidence. Considering the ‘U’ shape relationship between preoperative neutrophil counts and AKI, the plot showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median (3.2*109). It then increased after that (p for non-linearity <0.001). The OR per 1 unit increase was 0.76 (95% CI 0.61–0.95) below 3.2*109, and 1.11 (95% CI 1.04–1.17) above 3.3*109. The risk of CSA-AKI was relatively flat until around the median of lymphocyte, platelet, NLPR. It then started to increase rapidly afterward. The median of preoperative NLPR was 1.0. Above the median, the OR per 1 unit increase was 1.13 (95% CI 1.06–1.19). In comparison, no significant association was found below the median (OR 1.22 95% CI 0.63–2.38). Similarly, we observed such a ‘J’ shape curve between postoperative NLPR and AKI. The OR increased along with the NLPR increase after surgery below the median of 11.0 (OR 1.02 95% CI 1.01–1.03). Then we classified the NLPR value into five groups (Table 3). Compared with the preoperative NLPR level of 0.50–0.99, AKI risk kept growth in higher NLPR groups (the adjusted OR increased from 1.23 to 2.61). As for postoperative NLPR, the incidence was increased remarkably from 31.6% in the reference group to 46.2% in the highest group (aOR rose from 1.13 to 2.09).\nDose-response relationship of CSA-AKI and peripheral blood cell counts and NLPR.\nPerioperative NLPR levels and their association with CSA-AKI (n = 2449).\naOR was adjusted by age, gender, and body mass index.\nAKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.\nAntibiotic treatment and the risk of CSA-AKI in high NLPR groups (n = 365).\naOR was adjusted by age, gender, and body mass index.\nAKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.\nPreoperative antibiotic use in patients with high NLPR We further retrieved the medication data, especially the antibiotic use, in 365 patients whose preoperative NLPR was over 2.0 (Table 4). Of them, 22.4% (n = 82) had ever used antibiotics before cardiac surgery. The incidence of AKI in patients receiving antibiotic treatment was 32.9%, significantly lower than that without antibiotics (54.8%, p < 0.001). The aOR was estimated as 0.44 (95% CI 0.26–0.76). Subgroup analysis revealed that patients in the second-highest NLPR group (2.00–2.99) could benefit more from antibiotic treatment (aOR = 0.39, 95% CI 0.16–0.95).\nWe further retrieved the medication data, especially the antibiotic use, in 365 patients whose preoperative NLPR was over 2.0 (Table 4). Of them, 22.4% (n = 82) had ever used antibiotics before cardiac surgery. 
The incidence of AKI in patients receiving antibiotic treatment was 32.9%, significantly lower than that without antibiotics (54.8%, p < 0.001). The aOR was estimated as 0.44 (95% CI 0.26–0.76). Subgroup analysis revealed that patients in the second-highest NLPR group (2.00–2.99) could benefit more from antibiotic treatment (aOR = 0.39, 95% CI 0.16–0.95).\nPrediction model of CSA-AKI based on perioperative NLPR We combined the NLPR values in two-time points to predict CSA-AKI occurrence (Figure 2). Considering the correlation among repeated measures, we applied the generalized estimating equation (GEE) for multiple analyses. In the model with NLPR alone, the AUC value was 0.621, 95%CI 0.604–0.638. The optimal cutoff of NLPR was set at 1.3, which has a sensitivity of 51.0% and specificity of 67.0%. With the enrollment of variables progressively, the AUC values increased from 0.671 (95% CI 0.655–0.687) to 0.730 (95% CI 0.715–0.745), and then to 0.806 (95% CI 0.793–0.819). Moreover, we also evaluated the attribution of NLPR to AKI in the full model. Delong test revealed that the AUC of model with NLPR was significantly higher than the AUC without NLPR (AUC: 0.806 vs. 0.799, Z = 4.081, p < 0.001).\nPrediction model of CSA-AKI through the generalized estimating equation (GEE). (3A: the optimal NLPR cutoff was set at 1.3; 3B: model 1 only enrolled the NLPR values in two-time points, model2 was model 1 plus demographic factors; model 3 was model 2 plus preoperative factors, model 4 was model 3 plus intra-/postoperative factor; 3C: comparison between the model with and without NLPR).\nWe combined the NLPR values in two-time points to predict CSA-AKI occurrence (Figure 2). Considering the correlation among repeated measures, we applied the generalized estimating equation (GEE) for multiple analyses. In the model with NLPR alone, the AUC value was 0.621, 95%CI 0.604–0.638. The optimal cutoff of NLPR was set at 1.3, which has a sensitivity of 51.0% and specificity of 67.0%. With the enrollment of variables progressively, the AUC values increased from 0.671 (95% CI 0.655–0.687) to 0.730 (95% CI 0.715–0.745), and then to 0.806 (95% CI 0.793–0.819). Moreover, we also evaluated the attribution of NLPR to AKI in the full model. Delong test revealed that the AUC of model with NLPR was significantly higher than the AUC without NLPR (AUC: 0.806 vs. 0.799, Z = 4.081, p < 0.001).\nPrediction model of CSA-AKI through the generalized estimating equation (GEE). (3A: the optimal NLPR cutoff was set at 1.3; 3B: model 1 only enrolled the NLPR values in two-time points, model2 was model 1 plus demographic factors; model 3 was model 2 plus preoperative factors, model 4 was model 3 plus intra-/postoperative factor; 3C: comparison between the model with and without NLPR).", "Of the 2618 enrolled participants who underwent cardiac surgeries, 2449 patients met the inclusion criteria and were comprised of formal analysis (Supplement Figure 1). The average age was 57.2 ± 12.8 years old, and male patients accounted for 59.2% (n = 1449). According to KDIGO classification, 838 (34.2%) patients developed AKI after surgery. Of them, 658 patients located in Stage-1, and another 131 and 49 cases were Stage-2 and Stage-3, respectively. The rate of AKI-RRT was 1.2% (n = 30). Table 1 listed the major factors of CSA-AKI. Patients who were elder age, male, had higher BMI, or suffered from hypertension and diabetes were more likely to occur CSA-AKI. 
Cardio-surgery history, coronary arteriography, attenuated LVEF were positively associated with CSA-AKI. Given that patients already had preexisting renal dysfunction at admission, the risk of CSA-AKI was also increased. In terms of intraoperative factors, patients who underwent complicated cardiac surgery (aorta, valve combined with CABG, or larger vessels) were more susceptible to CSA-AKI. The application of CPB and blood transfusion, prolonged ACCT, and high ultrafiltration volume also had notable impacts on CSA-AKI. With the increase of APACHE II/Euro score, the postoperative risk CSA-AKI also kept growing.\nDemographic and perioperative factors of CSA-AKI (n = 2449).\n†Refers to Student’s t test (for continuous variables which were normally distributed).\n#Refers to Pearson test (for binary and unordered categorical variables).\n*Refers to Fisher’s exact test (for categorical variables when sample sizes are small).\n‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed).\nACCT: aortic cross-clamp time; AKI: acute kidney injury; aOR: adjusted odds ratio; CABG: coronary artery bypass grafting; CI: confidence interval; CPB: cardiac pulmonary bypass; CSA-AKI: cardiac surgery-associated acute kidney injury; eGFR: estimated glomerular filtration rate; LVEF: left ventricular ejection fractions; NYHA: New York heart association; PAH: pulmonary arterial hypertension; SCr: serum creatinine; SUA: serum uric acid.", "The blood analysis was summarized in Table 2. Compared with non-AKI patients, preoperative neutrophil counts (3.9 ± 2.4 vs. 3.5 ± 1.6, p < 0.001; aOR 1.07 (95% CI 1.02–1.12), p < 0.001), and NLPR (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; aOR 1.15 (95% CI 1.09–1.22), p < 0.001) were higher in AKI patients. Lymphocyte (1.7 ± 0.6 vs. 1.9 ± 0.7, p < 0.001; aOR 0.74 (95% CI 0.64–0.85), p < 0.001) and platelet (182.2 ± 59.6 vs. 196.4 ± 60.9, p < 0.001; aOR 0.99 (95% CI 0.99–1.00), p < 0.001) counts were negatively associated with AKI risk. Due to the posttraumatic stress response after surgery, blood cell counts changed a lot. While patients with a higher postoperative NLPR still had an increased risk of developing AKI (12.4[7.5, 20.0] vs. 10.6[6.4,16.7], p < 0.001; aOR 1.02 (95% CI 1.02–1.03), p < 0.001).\nDistribution of peripheral blood cell counts and NLPR in AKI and non-AKI patients (n = 2449).\naOR was adjusted by age, gender, and body mass index.\n†Refers to Student’s t test (for continuous variables which were normally distributed).\n‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed).\nAKI: acute kidney injury; aOR: adjusted odds ratio; ICU: intensive care unit;NLPR: neutrophil-to-platelet*lymphocyte ratio.", "In Figure 1, we used restricted cubic splines to visualize the dose-response relationship of blood cells with CSA-AKI incidence. Considering the ‘U’ shape relationship between preoperative neutrophil counts and AKI, the plot showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median (3.2*109). It then increased after that (p for non-linearity <0.001). The OR per 1 unit increase was 0.76 (95% CI 0.61–0.95) below 3.2*109, and 1.11 (95% CI 1.04–1.17) above 3.3*109. The risk of CSA-AKI was relatively flat until around the median of lymphocyte, platelet, NLPR. It then started to increase rapidly afterward. The median of preoperative NLPR was 1.0. Above the median, the OR per 1 unit increase was 1.13 (95% CI 1.06–1.19). 
In comparison, no significant association was found below the median (OR 1.22 95% CI 0.63–2.38). Similarly, we observed such a ‘J’ shape curve between postoperative NLPR and AKI. The OR increased along with the NLPR increase after surgery below the median of 11.0 (OR 1.02 95% CI 1.01–1.03). Then we classified the NLPR value into five groups (Table 3). Compared with the preoperative NLPR level of 0.50–0.99, AKI risk kept growth in higher NLPR groups (the adjusted OR increased from 1.23 to 2.61). As for postoperative NLPR, the incidence was increased remarkably from 31.6% in the reference group to 46.2% in the highest group (aOR rose from 1.13 to 2.09).\nDose-response relationship of CSA-AKI and peripheral blood cell counts and NLPR.\nPerioperative NLPR levels and their association with CSA-AKI (n = 2449).\naOR was adjusted by age, gender, and body mass index.\nAKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.\nAntibiotic treatment and the risk of CSA-AKI in high NLPR groups (n = 365).\naOR was adjusted by age, gender, and body mass index.\nAKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio.", "We further retrieved the medication data, especially the antibiotic use, in 365 patients whose preoperative NLPR was over 2.0 (Table 4). Of them, 22.4% (n = 82) had ever used antibiotics before cardiac surgery. The incidence of AKI in patients receiving antibiotic treatment was 32.9%, significantly lower than that without antibiotics (54.8%, p < 0.001). The aOR was estimated as 0.44 (95% CI 0.26–0.76). Subgroup analysis revealed that patients in the second-highest NLPR group (2.00–2.99) could benefit more from antibiotic treatment (aOR = 0.39, 95% CI 0.16–0.95).", "We combined the NLPR values in two-time points to predict CSA-AKI occurrence (Figure 2). Considering the correlation among repeated measures, we applied the generalized estimating equation (GEE) for multiple analyses. In the model with NLPR alone, the AUC value was 0.621, 95%CI 0.604–0.638. The optimal cutoff of NLPR was set at 1.3, which has a sensitivity of 51.0% and specificity of 67.0%. With the enrollment of variables progressively, the AUC values increased from 0.671 (95% CI 0.655–0.687) to 0.730 (95% CI 0.715–0.745), and then to 0.806 (95% CI 0.793–0.819). Moreover, we also evaluated the attribution of NLPR to AKI in the full model. Delong test revealed that the AUC of model with NLPR was significantly higher than the AUC without NLPR (AUC: 0.806 vs. 0.799, Z = 4.081, p < 0.001).\nPrediction model of CSA-AKI through the generalized estimating equation (GEE). (3A: the optimal NLPR cutoff was set at 1.3; 3B: model 1 only enrolled the NLPR values in two-time points, model2 was model 1 plus demographic factors; model 3 was model 2 plus preoperative factors, model 4 was model 3 plus intra-/postoperative factor; 3C: comparison between the model with and without NLPR).", "The present study provides the first report for integrating the dynamics of NLPR at two-time points into the prediction model and described the dose-response relationship of NLPR and AKI risk. Previous studies also identified that NLPR was associated with AKI in sepsis [22] and abdominal surgery [23]. Compared with non-AKI patients, we found that perioperative neutrophil counts and NLPR were higher in AKI patients. The relationship between preoperative neutrophils counts and AKI was not linear but showed a ‘U’ shape. 
Restricted cubic splines showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median and then increased after that. Neutrophil aggregation has been reported to mark a subclinical inflammatory stage, while neutropenia reflects a disordered physiological state such as serious inflammation or myelosuppression [24,25]. The effect of preoperative and postoperative NLPR on AKI followed a ‘J’ shape: the risk of CSA-AKI was relatively flat until a preoperative NLPR of 1.0 and increased rapidly afterward, with an odds ratio of 1.13 per 1 unit.\nNeutrophils and lymphocytes play essential roles in the innate immune system and adaptive immune system, respectively [26]. At the early stage of inflammation, neutrophils are the first cells recruited from blood to tissue. They can phagocytose pathogens and particles, generate reactive oxygen and nitrogen species, and release antimicrobial peptides, which strengthens the innate immune response. In the present study, we found that, per 1 unit increase in neutrophils, the OR of CSA-AKI was 1.11 (95% CI 1.04–1.17), indicating that subclinical neutrophilia also plays an important role in the pathogenesis of AKI. This might be explained by a high proportion of ‘primed for recruitment’ neutrophils. Noah et al. found that a distinct population of primed neutrophils in circulation had high expression of adhesion and activation markers [27]. Additionally, previous experimental studies revealed that T helper 1 (Th1) and T helper 2 (Th2) cells can contribute to the pathophysiology of AKI [28]. During the process of inflammation, free platelets can bind to neutrophils to form platelet-neutrophil aggregates (PNA) [29]. Substances released by PNA also induce neutrophil tissue invasion and inflammatory mediator release, and exacerbate local or systemic inflammatory responses.\nCardiovascular surgery involves a series of renal injury pathways, including hypoperfusion, ischemia-reperfusion injury, neurohumoral activation, inflammation and oxidative stress [30]. CPB is associated with the activation of multiple inflammatory pathways and increased pro-inflammatory cytokines. Neutrophil infiltration has been detected in post-ischemic mouse kidneys [31,32] and in biopsy samples from patients with early AKI, and it participates in the pathogenesis of renal injury following ischemia-reperfusion injury (IRI) [33,34]. The possible mechanism is that neutrophils contribute to the induction of renal injury by obstructing the renal microvasculature and secreting oxygen free radicals and proteases [25]. The adhesion and accumulation of activated platelets within the vascular beds of renal tissue early after IRI can act as a bridge between leukocytes and the endothelial wall, thereby enhancing leukocyte recruitment, activation and extravasation [35,36]. Consistent with our study, reduced platelet counts were associated with a higher incidence of AKI [37,38]. The main reasons for thrombocytopenia are an absolute decrease in platelet numbers and/or a relative decrease in the proportion of free platelets. On one hand, the total platelet count decreases when patients suffer from malnutrition or undergo a prolonged ACCT during cardiovascular surgery. Such ischemia-reperfusion injury has been established as the main pathophysiological mechanism of CSA-AKI.
On the other hand, PNA can lead to vascular lesions and tissue destruction in the kidney.\nIn this study, we creatively incorporated platelets into the traditional NLR, and the new indicator was used to quantify the effect of inflammation on AKI. After integrating the dynamic NLPRs into multiple predictive models, we found that the AUC reached 0.806, higher than the AUC without NLPR. This will facilitate AKI risk stratification and allow clinicians to intervene against AKI at an early stage. However, the model’s specificity was superior to its sensitivity, which suggests that NLPR is most effectively used to exclude AKI. For high-risk patients, further testing (SCr or novel biomarkers) is necessary to stratify their AKI risk. Notably, we found that the incidence of AKI in patients receiving antibiotic treatment was significantly lower than in those without antibiotics. This phenomenon may be related to the blocking of certain inflammatory processes by antibiotics. Vancomycin and ceftriaxone have been shown to be potentially nephrotoxic; however, they remain the first-line antibiotics for infection after heart surgery. Subgroup analysis revealed that the application of these potentially nephrotoxic antibiotics could still reduce the risk of CSA-AKI; nevertheless, cardiac surgeons and nephrologists should reach a consensus and weigh the pros and cons of nephrotoxic antibiotics.\nOur study has several limitations. Firstly, it was designed as a single-center study, and it would have been better to follow up the patients and describe the relationship between NLPR and long-term kidney recovery. Secondly, we did not compare NLPR with C-reactive protein or monitor related cytokine expression, such as interleukin (IL)-8, myeloperoxidase, and IL-6. However, our pilot study showed that NLPR had a highly positive correlation with procalcitonin and lactic acid. In future studies, a large-scale cohort and multi-omics analysis are needed to verify the predictive value of NLPR for CSA-AKI incidence.\nIn conclusion, the dynamics of perioperative NLPR are a promising and noninvasive marker of acute kidney injury during the perioperative period. Protective strategies should be implemented for patients with an initially increased NLPR. This will facilitate AKI risk management and allow clinicians to intervene early so as to reverse renal damage.", "Click here for additional data file." ]
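As a rough illustration of how the staged GEE models discussed above could be fitted with the 'geepack' and 'pROC' packages named in the methods, the following sketch assumes a long-format data frame `long_dat` with one row per patient per time point; all object and column names are assumptions, not the study's own code.

```r
# Minimal sketch: GEE logistic models pooling the two NLPR measurements
# (admission and ICU arrival), then an AUC for the fitted probabilities.
library(geepack)
library(pROC)

m1 <- geeglm(aki ~ nlpr + time, id = id, data = long_dat,
             family = binomial, corstr = "exchangeable")   # model 1: NLPR only
m2 <- geeglm(aki ~ nlpr + time + age + gender + bmi, id = id, data = long_dat,
             family = binomial, corstr = "exchangeable")   # model 2: + demographics
# models 3 and 4 would add the preoperative and intra-/postoperative blocks in turn

roc_m2 <- roc(long_dat$aki, fitted(m2))
auc(roc_m2)
ci.auc(roc_m2)
```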
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, null, "discussion", "supplementary-material" ]
[ "Acute kidney injury", "neutrophils", "lymphocyte", "platelet", "predictor", "cardiovascular surgery" ]
Introduction: Acute kidney injury (AKI) is a complicated and multifactorial clinical syndrome, which is characterized by a sudden decrease in renal function. Cardiac surgery is the second commonest reason for AKI behind sepsis [1]. It was reported that the incidence of cardiac surgery-associated AKI (CSA-AKI) varied from 5% to 42% [2]. CSA-AKI not only prolongs the length of stay in the intensive care unit (ICU) and hospitalization but increases the cost of care and mortality [3,4]. Early identification of patients at high risk of CSA-AKI and prompt modification of reversible factors will optimize the clinical outcome and avoid unnecessary kidney damage. Preoperative inflammation is commonly encountered in patients receiving cardiac surgery. It can be caused either by heart infection (such as infectious endocarditis) or other chronic infectious diseases (COPD). In clinical practice, the neutrophil to lymphocyte ratio (NLR) is an easily effective and low-cost biomarker of inflammatory status [5,6]. Previous studies have demonstrated its value in early prediction of infectious conditions [5,7], cancers [8–13] and cardiovascular diseases [14]. As the initiator of inflammation, neutrophils can quickly migrate from peripheral blood circulation to kidney under the action of chemokines. It can result in CSA-AKI by blocking renal micro-vessels and secreting oxygen free radicals and proteases. Besides, platelet-neutrophil aggregates are formed during the inflammatory process [15] and associated with diverse pathologies [16,17]. It also can cause renal damage through vascular lesions and tissue destruction [18]. To this end, NLPR may be the more promising biomarker of inflammation than NLR in CSA-AKI prediction. Although several studies have shown that high NLPR was associated with postoperative AKI in cardiovascular surgery patients [19], there is still no research that considers the dynamics of NLPR in the prediction model. The effect of NLPR on AKI is not a simple linear relationship. It is crucial to delineate the dose-response relationship between NLPR and AKI risk and the cutoff of NLPR for CSA-AKI prediction. This study aims to explore the significance of the dynamic changes in perioperative NLPR for predicting CSA-AKI. Methods: Study design and patient selection This study was designed as a prospective cohort study based on the ‘Zhongshan Cardiovascular Surgery Cohort’. This cohort was established in 2009 and contained nearly 30,000 subjects with complete demographic and clinical records. We recruited patients who underwent cardiac surgeries in 1 July 2019 to 31 December 2019. The inclusion criteria of participants were as follows: (1) aged over 18 years, (2) underwent cardiac surgery, (3) did not receive renal replacement therapy (RRT) before surgery, and (4) had completed renal and cardiac data. We further excluded those who were under 18 years old, received the heart transplant and RRT, or took less than one serum creatinine (SCr) test. This study was approved by the Institutional Committee of Zhongshan Hospital (B2017-039). Participants were given informed consent prior to recruiting, and their identity information was replaced as a unique code during analysis. This study was designed as a prospective cohort study based on the ‘Zhongshan Cardiovascular Surgery Cohort’. This cohort was established in 2009 and contained nearly 30,000 subjects with complete demographic and clinical records. We recruited patients who underwent cardiac surgeries in 1 July 2019 to 31 December 2019. 
The inclusion criteria of participants were as follows: (1) aged over 18 years, (2) underwent cardiac surgery, (3) did not receive renal replacement therapy (RRT) before surgery, and (4) had completed renal and cardiac data. We further excluded those who were under 18 years old, received the heart transplant and RRT, or took less than one serum creatinine (SCr) test. This study was approved by the Institutional Committee of Zhongshan Hospital (B2017-039). Participants were given informed consent prior to recruiting, and their identity information was replaced as a unique code during analysis. Data collection Patients’ demographics were collected through a structured questionnaire. Clinical data with timestamps were extracted from the electronic medical records (EMR). We selected thirty-seven factors for analysis. They were divided into five groups in chronological order: (1) demographics and comorbidity: age, gender, body mass index (BMI), hypertension, diabetes, unstable angina pectoris (UAP), and myocardial infarction (MI) within the last 3 months; (2) preoperative cardio-renal function [within 24 h of admission to hospital]: cardiac surgery history, coronary angiography, New York heart association (NYHA) grade, left ventricular eject fractions (LVEF), pulmonary artery pressure (PAH), SCr, blood urea nitrogen (BUN), estimated glomerular filtration rate (eGFR), serum uric acid (SUA) and proteinuria; (3) perioperative blood routine test [within 24 h of admission]: glucose, albumin, hemoglobin, peripheral blood cell counts including erythrocyte, total leukocyte, neutrophil, lymphocyte and platelet; (4) intraoperative factors: cardiopulmonary bypass (CPB), surgery type, aortic cross-clamp time (ACCT), ultrafiltration volume and blood transfusion; (5) postoperative factors (within 1 h of arrival in the intensive care unit [ICU]): APACHE II score, Euro score and blood routine test. Moreover, we also retrieved the preoperative medication data for patients with higher NLPR to evaluate the effect of antibiotic use on AKI occurrence. Patients’ demographics were collected through a structured questionnaire. Clinical data with timestamps were extracted from the electronic medical records (EMR). We selected thirty-seven factors for analysis. They were divided into five groups in chronological order: (1) demographics and comorbidity: age, gender, body mass index (BMI), hypertension, diabetes, unstable angina pectoris (UAP), and myocardial infarction (MI) within the last 3 months; (2) preoperative cardio-renal function [within 24 h of admission to hospital]: cardiac surgery history, coronary angiography, New York heart association (NYHA) grade, left ventricular eject fractions (LVEF), pulmonary artery pressure (PAH), SCr, blood urea nitrogen (BUN), estimated glomerular filtration rate (eGFR), serum uric acid (SUA) and proteinuria; (3) perioperative blood routine test [within 24 h of admission]: glucose, albumin, hemoglobin, peripheral blood cell counts including erythrocyte, total leukocyte, neutrophil, lymphocyte and platelet; (4) intraoperative factors: cardiopulmonary bypass (CPB), surgery type, aortic cross-clamp time (ACCT), ultrafiltration volume and blood transfusion; (5) postoperative factors (within 1 h of arrival in the intensive care unit [ICU]): APACHE II score, Euro score and blood routine test. Moreover, we also retrieved the preoperative medication data for patients with higher NLPR to evaluate the effect of antibiotic use on AKI occurrence. 
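The thirty-seven candidate predictors just listed can be organized into their five chronological blocks before modeling; the sketch below also shows the 'mice' imputation call described in the statistical analysis. The data frame `cohort` and its column names are illustrative assumptions.

```r
# Minimal sketch: grouping candidate predictors and imputing missing values
# with random-forest multiple imputation (m = 5, maxit = 5), as described.
library(mice)

blocks <- list(
  demographics      = c("age", "gender", "bmi", "hypertension", "diabetes", "uap", "recent_mi"),
  preop_cardiorenal = c("cardiac_surgery_history", "coronary_angiography", "nyha", "lvef",
                        "pah", "scr", "bun", "egfr", "sua", "proteinuria"),
  preop_blood       = c("glucose", "albumin", "hemoglobin", "erythrocyte", "leukocyte",
                        "neutrophil", "lymphocyte", "platelet"),
  intraoperative    = c("cpb", "surgery_type", "acct", "ultrafiltration_volume", "transfusion"),
  postoperative     = c("apache_ii", "euroscore", "neutrophil_icu", "lymphocyte_icu", "platelet_icu")
)

imp <- mice(cohort, method = "rf", m = 5, maxit = 5, seed = 2019)
cohort_imputed <- complete(imp, 1)   # one completed data set, for illustration
```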
Definition and classification According to the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) criteria [20], AKI was defined as the absolute value of the SCr increase ≥0.3 mg/dL (≥26.5 μmol/L) within 48 h or an increase ≥1.5 times baseline levels within 7 days, or a urine output <0.5 mL/kg/h lasting over 6 h. The SCr value on admission was regarded as the baseline level. CSA-AKI was diagnosed by comparing it with the postoperative SCr value. For those receiving multiple SCr tests after surgery, we defined the highest one as the peak value. The equation of NLPR was factorized as: Neutrophil−to−Platelet*Lymphocyte Ratio (NLPR)=Neutrophil count (109/L) *100Lymphocyte count (109/L)*Platelet count(109/L) It was calculated at two-time points: on admission and within 1 h at ICU. Cardiac surgery was typed into the valve, coronary artery bypass grafting (CABG), aorta, valve + CABG, valve + large vessels and others. According to the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) criteria [20], AKI was defined as the absolute value of the SCr increase ≥0.3 mg/dL (≥26.5 μmol/L) within 48 h or an increase ≥1.5 times baseline levels within 7 days, or a urine output <0.5 mL/kg/h lasting over 6 h. The SCr value on admission was regarded as the baseline level. CSA-AKI was diagnosed by comparing it with the postoperative SCr value. For those receiving multiple SCr tests after surgery, we defined the highest one as the peak value. The equation of NLPR was factorized as: Neutrophil−to−Platelet*Lymphocyte Ratio (NLPR)=Neutrophil count (109/L) *100Lymphocyte count (109/L)*Platelet count(109/L) It was calculated at two-time points: on admission and within 1 h at ICU. Cardiac surgery was typed into the valve, coronary artery bypass grafting (CABG), aorta, valve + CABG, valve + large vessels and others. Statistical analysis Statistics were conducted by using R 3.6.1 software (R core team). Continuous variables are presented as mean ± standard deviation or median with inter-quartile range (IQR). Categorical variables were expressed in numbers and percentages (‘Hmisc’ package). Missing data were found in less than 10% of the records, mainly focusing on NYHA grade, BMI, and albumin. We imputed the missing value by the methods of multiple imputations (‘mice’ package in R, method=‘rf’, m = 5, maxit = 5). Student’s t test and Wilcoxon test were applied to compare the distributional difference of continuous variables. Pearson test and Fisher’s exact test were applied to compare the proportional difference of categorical variables (‘gmodels’ package). Given the prior hypothesis of the dose-response relationship, we used restricted cubic splines with three knots at the 10th, 50th and 90th centiles to flexibly model the association of neutrophil count, lymphocyte count, platelet count, and NLPR with CSA-AKI incidence (‘tidyverse’, ‘rms’ and ‘ggplot2’ packages). Non-linearity was tested by using a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. Bounded with the median value, we used a logistic model to calculate odds ratios (ORs) for each 1/10 unit increase in predict variables. Moreover, the generalized estimating equation (GEE) was applied for multivariate analysis, which combined the time-dependent indicators of leukocyte counts and NLPR (‘geepack’ package). We applied the purposeful selection of covariates [21]. In model 1, we only included NLPR in two-time points for univariate analysis. 
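Written out explicitly, the NLPR definition above is the neutrophil count multiplied by 100 and divided by the product of the lymphocyte and platelet counts (all in 10^9/L). A minimal R helper, with the serum-creatinine part of the KDIGO criterion alongside it; function and argument names are assumptions for illustration.

```r
# NLPR = neutrophil count (10^9/L) * 100 / (lymphocyte count (10^9/L) * platelet count (10^9/L))
nlpr <- function(neutrophil, lymphocyte, platelet) {
  neutrophil * 100 / (lymphocyte * platelet)
}

# SCr-based KDIGO flag: absolute rise >= 26.5 umol/L within 48 h,
# or >= 1.5 x baseline within 7 days (urine-output criterion omitted here)
aki_by_scr <- function(scr_baseline, scr_rise_48h, scr_peak_7d) {
  (scr_rise_48h >= 26.5) | (scr_peak_7d >= 1.5 * scr_baseline)
}

nlpr(3.9, 1.7, 182)   # example: counts of 3.9, 1.7 and 182 x 10^9/L give ~1.26
```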
Then we included other factors gradually: (i) model 2: model 1 plus demographic factors; (ii) model 3: model 2 plus preoperative factors; (iii) model 4: model 3 plus intra-/postoperative factors. The predictive power was quantified through the area under the receiver operating characteristic (AUROC) curves (‘pROC’ and ‘ggplot2’ packages). Delong test was used to compare the significance of ROC curves. All statistical tests were two-tailed, and we regarded p < 0.05 as the criterion for statistical significance. Statistics were conducted by using R 3.6.1 software (R core team). Continuous variables are presented as mean ± standard deviation or median with inter-quartile range (IQR). Categorical variables were expressed in numbers and percentages (‘Hmisc’ package). Missing data were found in less than 10% of the records, mainly focusing on NYHA grade, BMI, and albumin. We imputed the missing value by the methods of multiple imputations (‘mice’ package in R, method=‘rf’, m = 5, maxit = 5). Student’s t test and Wilcoxon test were applied to compare the distributional difference of continuous variables. Pearson test and Fisher’s exact test were applied to compare the proportional difference of categorical variables (‘gmodels’ package). Given the prior hypothesis of the dose-response relationship, we used restricted cubic splines with three knots at the 10th, 50th and 90th centiles to flexibly model the association of neutrophil count, lymphocyte count, platelet count, and NLPR with CSA-AKI incidence (‘tidyverse’, ‘rms’ and ‘ggplot2’ packages). Non-linearity was tested by using a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. Bounded with the median value, we used a logistic model to calculate odds ratios (ORs) for each 1/10 unit increase in predict variables. Moreover, the generalized estimating equation (GEE) was applied for multivariate analysis, which combined the time-dependent indicators of leukocyte counts and NLPR (‘geepack’ package). We applied the purposeful selection of covariates [21]. In model 1, we only included NLPR in two-time points for univariate analysis. Then we included other factors gradually: (i) model 2: model 1 plus demographic factors; (ii) model 3: model 2 plus preoperative factors; (iii) model 4: model 3 plus intra-/postoperative factors. The predictive power was quantified through the area under the receiver operating characteristic (AUROC) curves (‘pROC’ and ‘ggplot2’ packages). Delong test was used to compare the significance of ROC curves. All statistical tests were two-tailed, and we regarded p < 0.05 as the criterion for statistical significance. Study design and patient selection: This study was designed as a prospective cohort study based on the ‘Zhongshan Cardiovascular Surgery Cohort’. This cohort was established in 2009 and contained nearly 30,000 subjects with complete demographic and clinical records. We recruited patients who underwent cardiac surgeries in 1 July 2019 to 31 December 2019. The inclusion criteria of participants were as follows: (1) aged over 18 years, (2) underwent cardiac surgery, (3) did not receive renal replacement therapy (RRT) before surgery, and (4) had completed renal and cardiac data. We further excluded those who were under 18 years old, received the heart transplant and RRT, or took less than one serum creatinine (SCr) test. This study was approved by the Institutional Committee of Zhongshan Hospital (B2017-039). 
Participants were given informed consent prior to recruiting, and their identity information was replaced as a unique code during analysis. Data collection: Patients’ demographics were collected through a structured questionnaire. Clinical data with timestamps were extracted from the electronic medical records (EMR). We selected thirty-seven factors for analysis. They were divided into five groups in chronological order: (1) demographics and comorbidity: age, gender, body mass index (BMI), hypertension, diabetes, unstable angina pectoris (UAP), and myocardial infarction (MI) within the last 3 months; (2) preoperative cardio-renal function [within 24 h of admission to hospital]: cardiac surgery history, coronary angiography, New York heart association (NYHA) grade, left ventricular eject fractions (LVEF), pulmonary artery pressure (PAH), SCr, blood urea nitrogen (BUN), estimated glomerular filtration rate (eGFR), serum uric acid (SUA) and proteinuria; (3) perioperative blood routine test [within 24 h of admission]: glucose, albumin, hemoglobin, peripheral blood cell counts including erythrocyte, total leukocyte, neutrophil, lymphocyte and platelet; (4) intraoperative factors: cardiopulmonary bypass (CPB), surgery type, aortic cross-clamp time (ACCT), ultrafiltration volume and blood transfusion; (5) postoperative factors (within 1 h of arrival in the intensive care unit [ICU]): APACHE II score, Euro score and blood routine test. Moreover, we also retrieved the preoperative medication data for patients with higher NLPR to evaluate the effect of antibiotic use on AKI occurrence. Definition and classification: According to the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) criteria [20], AKI was defined as the absolute value of the SCr increase ≥0.3 mg/dL (≥26.5 μmol/L) within 48 h or an increase ≥1.5 times baseline levels within 7 days, or a urine output <0.5 mL/kg/h lasting over 6 h. The SCr value on admission was regarded as the baseline level. CSA-AKI was diagnosed by comparing it with the postoperative SCr value. For those receiving multiple SCr tests after surgery, we defined the highest one as the peak value. The equation of NLPR was factorized as: Neutrophil−to−Platelet*Lymphocyte Ratio (NLPR)=Neutrophil count (109/L) *100Lymphocyte count (109/L)*Platelet count(109/L) It was calculated at two-time points: on admission and within 1 h at ICU. Cardiac surgery was typed into the valve, coronary artery bypass grafting (CABG), aorta, valve + CABG, valve + large vessels and others. Statistical analysis: Statistics were conducted by using R 3.6.1 software (R core team). Continuous variables are presented as mean ± standard deviation or median with inter-quartile range (IQR). Categorical variables were expressed in numbers and percentages (‘Hmisc’ package). Missing data were found in less than 10% of the records, mainly focusing on NYHA grade, BMI, and albumin. We imputed the missing value by the methods of multiple imputations (‘mice’ package in R, method=‘rf’, m = 5, maxit = 5). Student’s t test and Wilcoxon test were applied to compare the distributional difference of continuous variables. Pearson test and Fisher’s exact test were applied to compare the proportional difference of categorical variables (‘gmodels’ package). 
Given the prior hypothesis of the dose-response relationship, we used restricted cubic splines with three knots at the 10th, 50th and 90th centiles to flexibly model the association of neutrophil count, lymphocyte count, platelet count, and NLPR with CSA-AKI incidence (‘tidyverse’, ‘rms’ and ‘ggplot2’ packages). Non-linearity was tested by using a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. Bounded with the median value, we used a logistic model to calculate odds ratios (ORs) for each 1/10 unit increase in predict variables. Moreover, the generalized estimating equation (GEE) was applied for multivariate analysis, which combined the time-dependent indicators of leukocyte counts and NLPR (‘geepack’ package). We applied the purposeful selection of covariates [21]. In model 1, we only included NLPR in two-time points for univariate analysis. Then we included other factors gradually: (i) model 2: model 1 plus demographic factors; (ii) model 3: model 2 plus preoperative factors; (iii) model 4: model 3 plus intra-/postoperative factors. The predictive power was quantified through the area under the receiver operating characteristic (AUROC) curves (‘pROC’ and ‘ggplot2’ packages). Delong test was used to compare the significance of ROC curves. All statistical tests were two-tailed, and we regarded p < 0.05 as the criterion for statistical significance. Results: Baseline characteristics and CSA-AKI incidence Of the 2618 enrolled participants who underwent cardiac surgeries, 2449 patients met the inclusion criteria and were comprised of formal analysis (Supplement Figure 1). The average age was 57.2 ± 12.8 years old, and male patients accounted for 59.2% (n = 1449). According to KDIGO classification, 838 (34.2%) patients developed AKI after surgery. Of them, 658 patients located in Stage-1, and another 131 and 49 cases were Stage-2 and Stage-3, respectively. The rate of AKI-RRT was 1.2% (n = 30). Table 1 listed the major factors of CSA-AKI. Patients who were elder age, male, had higher BMI, or suffered from hypertension and diabetes were more likely to occur CSA-AKI. Cardio-surgery history, coronary arteriography, attenuated LVEF were positively associated with CSA-AKI. Given that patients already had preexisting renal dysfunction at admission, the risk of CSA-AKI was also increased. In terms of intraoperative factors, patients who underwent complicated cardiac surgery (aorta, valve combined with CABG, or larger vessels) were more susceptible to CSA-AKI. The application of CPB and blood transfusion, prolonged ACCT, and high ultrafiltration volume also had notable impacts on CSA-AKI. With the increase of APACHE II/Euro score, the postoperative risk CSA-AKI also kept growing. Demographic and perioperative factors of CSA-AKI (n = 2449). †Refers to Student’s t test (for continuous variables which were normally distributed). #Refers to Pearson test (for binary and unordered categorical variables). *Refers to Fisher’s exact test (for categorical variables when sample sizes are small). ‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed). 
ACCT: aortic cross-clamp time; AKI: acute kidney injury; aOR: adjusted odds ratio; CABG: coronary artery bypass grafting; CI: confidence interval; CPB: cardiac pulmonary bypass; CSA-AKI: cardiac surgery-associated acute kidney injury; eGFR: estimated glomerular filtration rate; LVEF: left ventricular ejection fractions; NYHA: New York heart association; PAH: pulmonary arterial hypertension; SCr: serum creatinine; SUA: serum uric acid. Of the 2618 enrolled participants who underwent cardiac surgeries, 2449 patients met the inclusion criteria and were comprised of formal analysis (Supplement Figure 1). The average age was 57.2 ± 12.8 years old, and male patients accounted for 59.2% (n = 1449). According to KDIGO classification, 838 (34.2%) patients developed AKI after surgery. Of them, 658 patients located in Stage-1, and another 131 and 49 cases were Stage-2 and Stage-3, respectively. The rate of AKI-RRT was 1.2% (n = 30). Table 1 listed the major factors of CSA-AKI. Patients who were elder age, male, had higher BMI, or suffered from hypertension and diabetes were more likely to occur CSA-AKI. Cardio-surgery history, coronary arteriography, attenuated LVEF were positively associated with CSA-AKI. Given that patients already had preexisting renal dysfunction at admission, the risk of CSA-AKI was also increased. In terms of intraoperative factors, patients who underwent complicated cardiac surgery (aorta, valve combined with CABG, or larger vessels) were more susceptible to CSA-AKI. The application of CPB and blood transfusion, prolonged ACCT, and high ultrafiltration volume also had notable impacts on CSA-AKI. With the increase of APACHE II/Euro score, the postoperative risk CSA-AKI also kept growing. Demographic and perioperative factors of CSA-AKI (n = 2449). †Refers to Student’s t test (for continuous variables which were normally distributed). #Refers to Pearson test (for binary and unordered categorical variables). *Refers to Fisher’s exact test (for categorical variables when sample sizes are small). ‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed). ACCT: aortic cross-clamp time; AKI: acute kidney injury; aOR: adjusted odds ratio; CABG: coronary artery bypass grafting; CI: confidence interval; CPB: cardiac pulmonary bypass; CSA-AKI: cardiac surgery-associated acute kidney injury; eGFR: estimated glomerular filtration rate; LVEF: left ventricular ejection fractions; NYHA: New York heart association; PAH: pulmonary arterial hypertension; SCr: serum creatinine; SUA: serum uric acid. Dynamics of peripheral blood cell counts and NLPR The blood analysis was summarized in Table 2. Compared with non-AKI patients, preoperative neutrophil counts (3.9 ± 2.4 vs. 3.5 ± 1.6, p < 0.001; aOR 1.07 (95% CI 1.02–1.12), p < 0.001), and NLPR (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; aOR 1.15 (95% CI 1.09–1.22), p < 0.001) were higher in AKI patients. Lymphocyte (1.7 ± 0.6 vs. 1.9 ± 0.7, p < 0.001; aOR 0.74 (95% CI 0.64–0.85), p < 0.001) and platelet (182.2 ± 59.6 vs. 196.4 ± 60.9, p < 0.001; aOR 0.99 (95% CI 0.99–1.00), p < 0.001) counts were negatively associated with AKI risk. Due to the posttraumatic stress response after surgery, blood cell counts changed a lot. While patients with a higher postoperative NLPR still had an increased risk of developing AKI (12.4[7.5, 20.0] vs. 10.6[6.4,16.7], p < 0.001; aOR 1.02 (95% CI 1.02–1.03), p < 0.001). Distribution of peripheral blood cell counts and NLPR in AKI and non-AKI patients (n = 2449). 
aOR was adjusted by age, gender, and body mass index. †Refers to Student’s t test (for continuous variables which were normally distributed). ‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed). AKI: acute kidney injury; aOR: adjusted odds ratio; ICU: intensive care unit;NLPR: neutrophil-to-platelet*lymphocyte ratio. The blood analysis was summarized in Table 2. Compared with non-AKI patients, preoperative neutrophil counts (3.9 ± 2.4 vs. 3.5 ± 1.6, p < 0.001; aOR 1.07 (95% CI 1.02–1.12), p < 0.001), and NLPR (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; aOR 1.15 (95% CI 1.09–1.22), p < 0.001) were higher in AKI patients. Lymphocyte (1.7 ± 0.6 vs. 1.9 ± 0.7, p < 0.001; aOR 0.74 (95% CI 0.64–0.85), p < 0.001) and platelet (182.2 ± 59.6 vs. 196.4 ± 60.9, p < 0.001; aOR 0.99 (95% CI 0.99–1.00), p < 0.001) counts were negatively associated with AKI risk. Due to the posttraumatic stress response after surgery, blood cell counts changed a lot. While patients with a higher postoperative NLPR still had an increased risk of developing AKI (12.4[7.5, 20.0] vs. 10.6[6.4,16.7], p < 0.001; aOR 1.02 (95% CI 1.02–1.03), p < 0.001). Distribution of peripheral blood cell counts and NLPR in AKI and non-AKI patients (n = 2449). aOR was adjusted by age, gender, and body mass index. †Refers to Student’s t test (for continuous variables which were normally distributed). ‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed). AKI: acute kidney injury; aOR: adjusted odds ratio; ICU: intensive care unit;NLPR: neutrophil-to-platelet*lymphocyte ratio. Dose-response relationship of NLPR and AKI risk In Figure 1, we used restricted cubic splines to visualize the dose-response relationship of blood cells with CSA-AKI incidence. Considering the ‘U’ shape relationship between preoperative neutrophil counts and AKI, the plot showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median (3.2*109). It then increased after that (p for non-linearity <0.001). The OR per 1 unit increase was 0.76 (95% CI 0.61–0.95) below 3.2*109, and 1.11 (95% CI 1.04–1.17) above 3.3*109. The risk of CSA-AKI was relatively flat until around the median of lymphocyte, platelet, NLPR. It then started to increase rapidly afterward. The median of preoperative NLPR was 1.0. Above the median, the OR per 1 unit increase was 1.13 (95% CI 1.06–1.19). In comparison, no significant association was found below the median (OR 1.22 95% CI 0.63–2.38). Similarly, we observed such a ‘J’ shape curve between postoperative NLPR and AKI. The OR increased along with the NLPR increase after surgery below the median of 11.0 (OR 1.02 95% CI 1.01–1.03). Then we classified the NLPR value into five groups (Table 3). Compared with the preoperative NLPR level of 0.50–0.99, AKI risk kept growth in higher NLPR groups (the adjusted OR increased from 1.23 to 2.61). As for postoperative NLPR, the incidence was increased remarkably from 31.6% in the reference group to 46.2% in the highest group (aOR rose from 1.13 to 2.09). Dose-response relationship of CSA-AKI and peripheral blood cell counts and NLPR. Perioperative NLPR levels and their association with CSA-AKI (n = 2449). aOR was adjusted by age, gender, and body mass index. AKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio. Antibiotic treatment and the risk of CSA-AKI in high NLPR groups (n = 365). aOR was adjusted by age, gender, and body mass index. 
AKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio. In Figure 1, we used restricted cubic splines to visualize the dose-response relationship of blood cells with CSA-AKI incidence. Considering the ‘U’ shape relationship between preoperative neutrophil counts and AKI, the plot showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median (3.2*109). It then increased after that (p for non-linearity <0.001). The OR per 1 unit increase was 0.76 (95% CI 0.61–0.95) below 3.2*109, and 1.11 (95% CI 1.04–1.17) above 3.3*109. The risk of CSA-AKI was relatively flat until around the median of lymphocyte, platelet, NLPR. It then started to increase rapidly afterward. The median of preoperative NLPR was 1.0. Above the median, the OR per 1 unit increase was 1.13 (95% CI 1.06–1.19). In comparison, no significant association was found below the median (OR 1.22 95% CI 0.63–2.38). Similarly, we observed such a ‘J’ shape curve between postoperative NLPR and AKI. The OR increased along with the NLPR increase after surgery below the median of 11.0 (OR 1.02 95% CI 1.01–1.03). Then we classified the NLPR value into five groups (Table 3). Compared with the preoperative NLPR level of 0.50–0.99, AKI risk kept growth in higher NLPR groups (the adjusted OR increased from 1.23 to 2.61). As for postoperative NLPR, the incidence was increased remarkably from 31.6% in the reference group to 46.2% in the highest group (aOR rose from 1.13 to 2.09). Dose-response relationship of CSA-AKI and peripheral blood cell counts and NLPR. Perioperative NLPR levels and their association with CSA-AKI (n = 2449). aOR was adjusted by age, gender, and body mass index. AKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio. Antibiotic treatment and the risk of CSA-AKI in high NLPR groups (n = 365). aOR was adjusted by age, gender, and body mass index. AKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio. Preoperative antibiotic use in patients with high NLPR We further retrieved the medication data, especially the antibiotic use, in 365 patients whose preoperative NLPR was over 2.0 (Table 4). Of them, 22.4% (n = 82) had ever used antibiotics before cardiac surgery. The incidence of AKI in patients receiving antibiotic treatment was 32.9%, significantly lower than that without antibiotics (54.8%, p < 0.001). The aOR was estimated as 0.44 (95% CI 0.26–0.76). Subgroup analysis revealed that patients in the second-highest NLPR group (2.00–2.99) could benefit more from antibiotic treatment (aOR = 0.39, 95% CI 0.16–0.95). We further retrieved the medication data, especially the antibiotic use, in 365 patients whose preoperative NLPR was over 2.0 (Table 4). Of them, 22.4% (n = 82) had ever used antibiotics before cardiac surgery. The incidence of AKI in patients receiving antibiotic treatment was 32.9%, significantly lower than that without antibiotics (54.8%, p < 0.001). The aOR was estimated as 0.44 (95% CI 0.26–0.76). Subgroup analysis revealed that patients in the second-highest NLPR group (2.00–2.99) could benefit more from antibiotic treatment (aOR = 0.39, 95% CI 0.16–0.95). Prediction model of CSA-AKI based on perioperative NLPR We combined the NLPR values in two-time points to predict CSA-AKI occurrence (Figure 2). Considering the correlation among repeated measures, we applied the generalized estimating equation (GEE) for multiple analyses. 
In the model with NLPR alone, the AUC value was 0.621, 95%CI 0.604–0.638. The optimal cutoff of NLPR was set at 1.3, which has a sensitivity of 51.0% and specificity of 67.0%. With the enrollment of variables progressively, the AUC values increased from 0.671 (95% CI 0.655–0.687) to 0.730 (95% CI 0.715–0.745), and then to 0.806 (95% CI 0.793–0.819). Moreover, we also evaluated the attribution of NLPR to AKI in the full model. Delong test revealed that the AUC of model with NLPR was significantly higher than the AUC without NLPR (AUC: 0.806 vs. 0.799, Z = 4.081, p < 0.001). Prediction model of CSA-AKI through the generalized estimating equation (GEE). (3A: the optimal NLPR cutoff was set at 1.3; 3B: model 1 only enrolled the NLPR values in two-time points, model2 was model 1 plus demographic factors; model 3 was model 2 plus preoperative factors, model 4 was model 3 plus intra-/postoperative factor; 3C: comparison between the model with and without NLPR). We combined the NLPR values in two-time points to predict CSA-AKI occurrence (Figure 2). Considering the correlation among repeated measures, we applied the generalized estimating equation (GEE) for multiple analyses. In the model with NLPR alone, the AUC value was 0.621, 95%CI 0.604–0.638. The optimal cutoff of NLPR was set at 1.3, which has a sensitivity of 51.0% and specificity of 67.0%. With the enrollment of variables progressively, the AUC values increased from 0.671 (95% CI 0.655–0.687) to 0.730 (95% CI 0.715–0.745), and then to 0.806 (95% CI 0.793–0.819). Moreover, we also evaluated the attribution of NLPR to AKI in the full model. Delong test revealed that the AUC of model with NLPR was significantly higher than the AUC without NLPR (AUC: 0.806 vs. 0.799, Z = 4.081, p < 0.001). Prediction model of CSA-AKI through the generalized estimating equation (GEE). (3A: the optimal NLPR cutoff was set at 1.3; 3B: model 1 only enrolled the NLPR values in two-time points, model2 was model 1 plus demographic factors; model 3 was model 2 plus preoperative factors, model 4 was model 3 plus intra-/postoperative factor; 3C: comparison between the model with and without NLPR). Baseline characteristics and CSA-AKI incidence: Of the 2618 enrolled participants who underwent cardiac surgeries, 2449 patients met the inclusion criteria and were comprised of formal analysis (Supplement Figure 1). The average age was 57.2 ± 12.8 years old, and male patients accounted for 59.2% (n = 1449). According to KDIGO classification, 838 (34.2%) patients developed AKI after surgery. Of them, 658 patients located in Stage-1, and another 131 and 49 cases were Stage-2 and Stage-3, respectively. The rate of AKI-RRT was 1.2% (n = 30). Table 1 listed the major factors of CSA-AKI. Patients who were elder age, male, had higher BMI, or suffered from hypertension and diabetes were more likely to occur CSA-AKI. Cardio-surgery history, coronary arteriography, attenuated LVEF were positively associated with CSA-AKI. Given that patients already had preexisting renal dysfunction at admission, the risk of CSA-AKI was also increased. In terms of intraoperative factors, patients who underwent complicated cardiac surgery (aorta, valve combined with CABG, or larger vessels) were more susceptible to CSA-AKI. The application of CPB and blood transfusion, prolonged ACCT, and high ultrafiltration volume also had notable impacts on CSA-AKI. With the increase of APACHE II/Euro score, the postoperative risk CSA-AKI also kept growing. Demographic and perioperative factors of CSA-AKI (n = 2449). 
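The cutoff selection and DeLong comparison reported here can be sketched with the 'pROC' package used in the methods; the prediction vectors and column names below are assumptions for illustration, not the study's code.

```r
# Minimal sketch: Youden-optimal NLPR cutoff and DeLong test comparing the
# full model with vs. without NLPR.
library(pROC)

roc_nlpr <- roc(dat$aki, dat$nlpr_pre)
coords(roc_nlpr, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))   # candidate optimal cutoff

roc_full    <- roc(dat$aki, pred_with_nlpr)     # full model including NLPR
roc_reduced <- roc(dat$aki, pred_without_nlpr)  # same covariates without NLPR
ci.auc(roc_full)
roc.test(roc_full, roc_reduced, method = "delong")
```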
†Refers to Student’s t test (for continuous variables which were normally distributed). #Refers to Pearson test (for binary and unordered categorical variables). *Refers to Fisher’s exact test (for categorical variables when sample sizes are small). ‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed). ACCT: aortic cross-clamp time; AKI: acute kidney injury; aOR: adjusted odds ratio; CABG: coronary artery bypass grafting; CI: confidence interval; CPB: cardiac pulmonary bypass; CSA-AKI: cardiac surgery-associated acute kidney injury; eGFR: estimated glomerular filtration rate; LVEF: left ventricular ejection fractions; NYHA: New York heart association; PAH: pulmonary arterial hypertension; SCr: serum creatinine; SUA: serum uric acid. Dynamics of peripheral blood cell counts and NLPR: The blood analysis was summarized in Table 2. Compared with non-AKI patients, preoperative neutrophil counts (3.9 ± 2.4 vs. 3.5 ± 1.6, p < 0.001; aOR 1.07 (95% CI 1.02–1.12), p < 0.001), and NLPR (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; aOR 1.15 (95% CI 1.09–1.22), p < 0.001) were higher in AKI patients. Lymphocyte (1.7 ± 0.6 vs. 1.9 ± 0.7, p < 0.001; aOR 0.74 (95% CI 0.64–0.85), p < 0.001) and platelet (182.2 ± 59.6 vs. 196.4 ± 60.9, p < 0.001; aOR 0.99 (95% CI 0.99–1.00), p < 0.001) counts were negatively associated with AKI risk. Due to the posttraumatic stress response after surgery, blood cell counts changed a lot. While patients with a higher postoperative NLPR still had an increased risk of developing AKI (12.4[7.5, 20.0] vs. 10.6[6.4,16.7], p < 0.001; aOR 1.02 (95% CI 1.02–1.03), p < 0.001). Distribution of peripheral blood cell counts and NLPR in AKI and non-AKI patients (n = 2449). aOR was adjusted by age, gender, and body mass index. †Refers to Student’s t test (for continuous variables which were normally distributed). ‡Refers to Wilcoxon test (for continuous variables which were non-normally distributed). AKI: acute kidney injury; aOR: adjusted odds ratio; ICU: intensive care unit;NLPR: neutrophil-to-platelet*lymphocyte ratio. Dose-response relationship of NLPR and AKI risk: In Figure 1, we used restricted cubic splines to visualize the dose-response relationship of blood cells with CSA-AKI incidence. Considering the ‘U’ shape relationship between preoperative neutrophil counts and AKI, the plot showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median (3.2*109). It then increased after that (p for non-linearity <0.001). The OR per 1 unit increase was 0.76 (95% CI 0.61–0.95) below 3.2*109, and 1.11 (95% CI 1.04–1.17) above 3.3*109. The risk of CSA-AKI was relatively flat until around the median of lymphocyte, platelet, NLPR. It then started to increase rapidly afterward. The median of preoperative NLPR was 1.0. Above the median, the OR per 1 unit increase was 1.13 (95% CI 1.06–1.19). In comparison, no significant association was found below the median (OR 1.22 95% CI 0.63–2.38). Similarly, we observed such a ‘J’ shape curve between postoperative NLPR and AKI. The OR increased along with the NLPR increase after surgery below the median of 11.0 (OR 1.02 95% CI 1.01–1.03). Then we classified the NLPR value into five groups (Table 3). Compared with the preoperative NLPR level of 0.50–0.99, AKI risk kept growth in higher NLPR groups (the adjusted OR increased from 1.23 to 2.61). 
As for postoperative NLPR, the incidence was increased remarkably from 31.6% in the reference group to 46.2% in the highest group (aOR rose from 1.13 to 2.09). Dose-response relationship of CSA-AKI and peripheral blood cell counts and NLPR. Perioperative NLPR levels and their association with CSA-AKI (n = 2449). aOR was adjusted by age, gender, and body mass index. AKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio. Antibiotic treatment and the risk of CSA-AKI in high NLPR groups (n = 365). aOR was adjusted by age, gender, and body mass index. AKI: acute kidney injury; aOR: adjusted odds ratio; NLPR: neutrophil to platelet*lymphocyte ratio. Preoperative antibiotic use in patients with high NLPR: We further retrieved the medication data, especially the antibiotic use, in 365 patients whose preoperative NLPR was over 2.0 (Table 4). Of them, 22.4% (n = 82) had ever used antibiotics before cardiac surgery. The incidence of AKI in patients receiving antibiotic treatment was 32.9%, significantly lower than that without antibiotics (54.8%, p < 0.001). The aOR was estimated as 0.44 (95% CI 0.26–0.76). Subgroup analysis revealed that patients in the second-highest NLPR group (2.00–2.99) could benefit more from antibiotic treatment (aOR = 0.39, 95% CI 0.16–0.95). Prediction model of CSA-AKI based on perioperative NLPR: We combined the NLPR values in two-time points to predict CSA-AKI occurrence (Figure 2). Considering the correlation among repeated measures, we applied the generalized estimating equation (GEE) for multiple analyses. In the model with NLPR alone, the AUC value was 0.621, 95%CI 0.604–0.638. The optimal cutoff of NLPR was set at 1.3, which has a sensitivity of 51.0% and specificity of 67.0%. With the enrollment of variables progressively, the AUC values increased from 0.671 (95% CI 0.655–0.687) to 0.730 (95% CI 0.715–0.745), and then to 0.806 (95% CI 0.793–0.819). Moreover, we also evaluated the attribution of NLPR to AKI in the full model. Delong test revealed that the AUC of model with NLPR was significantly higher than the AUC without NLPR (AUC: 0.806 vs. 0.799, Z = 4.081, p < 0.001). Prediction model of CSA-AKI through the generalized estimating equation (GEE). (3A: the optimal NLPR cutoff was set at 1.3; 3B: model 1 only enrolled the NLPR values in two-time points, model2 was model 1 plus demographic factors; model 3 was model 2 plus preoperative factors, model 4 was model 3 plus intra-/postoperative factor; 3C: comparison between the model with and without NLPR). Discussion: The present study provides the first report for integrating the dynamics of NLPR at two-time points into the prediction model and described the dose-response relationship of NLPR and AKI risk. Previous studies also identified that NLPR was associated with AKI in sepsis [22] and abdominal surgery [23]. Compared with non-AKI patients, we found that perioperative neutrophil counts and NLPR were higher in AKI patients. The relationship between preoperative neutrophils counts and AKI was not linear but showed a ‘U’ shape. Restricted cubic splines showed a decrease of the risk within the lower range of neutrophil counts, which reached the lowest risk around the median and then increased after that. Neutrophil aggregation is reported acting as a subclinical inflammation stage, while neutropenia reflects a physiological condition that the body may be in disorder, such as serious inflammation or myelosuppression [24,25]. 
The effect of preoperative and postoperative NLPR on AKI was a ‘J’ shape. CSA-AKI’s risk was relatively flat until 1.0 of preoperative NLPR and increased rapidly afterward, with an odds ratio of 1.13 per 1 unit. Neutrophils and lymphocytes play essential roles in the innate immune system and adaptive immune system, respectively [26]. At the early stage of inflammation, neutrophils are the first cells recruited from blood to tissue. It can phagocytose pathogens and particles, generate reactive oxygen and nitrogen species, and release antimicrobial peptides, which strengthen the innate immune system. In the present study, we found that, per 1 unit neutrophils increase, the OR of CSA-AKI was 1.11 (95% CI 1.04–1.17), indicating that subclinical neutrophilia also played an important role in the pathogenesis of AKI. It might be explained by the high proportion of ‘primed for recruitment’ neutrophils. Noah et al. found that a distinct population of primed neutrophils in circulation had a high expression of adhesion and activation markers [27]. Additionally, previous experimental studies revealed that T helper 1 (Th1) and T helper 2 (Th2) cells can contribute to the pathophysiology of AKI [28]. During the process of inflammation, free platelets could bind to neutrophils to form platelet-neutrophil aggregates (PNA) [29]. Substances released by PNA also induce neutrophils tissue invasion, inflammatory mediator release and exacerbate local or systemic inflammatory response. Cardiovascular surgery involves a series of renal injury pathways. It includes hypoperfusion, ischemia-reperfusion injury, neurohumoral activation, inflammation and oxidative stress [30]. CPB are associated with the activation of multiple inflammatory pathways and the increase pro-inflammatory cytokines. Neutrophil infiltration is detected in post-ischemic mouse kidneys [31,32] and biopsy samples from patients with early AKI participate in the pathogenesis of renal injury following IRI [33,34]. The possible mechanism is that neutrophils participate in the induction of renal injury by obstructing the renal microvasculature and secreting oxygen free radicals and proteases [25]. The adhesion and accumulation of activated platelets within the vascular beds of renal tissue early after IR can act as a bridge between leukocytes and the endothelial wall, thereby enhancing leukocyte recruitment, activation and extravasation [35,36]. Consistent with our study, reduced platelet counts were associated with a higher incidence of AKI [37,38]. The main reason for thrombopenia is the absolute decrease of platelet amount and/or the relative decrease of free platelet proportion. On one hand, the total amount of platelets decreased when patients suffered from malnutrition or receiving a prolonged ACCT during cardiovascular surgery. This ischemia reperfusion injury has been proved as the main pathophysiological mechanism of CSA-AKI. On the other hand, the PNA can lead to vascular lesions and tissue destruction in kidney. In this study, we creatively incorporated platelets into traditional NLR and the new indicator was used to quantify the effect of inflammation on AKI. After integrating the dynamic NLPRs into multiple predictive models, we found that the AUC was up to 0.806, higher than the AUC without NLPR. It will facilitate AKI risk classification management and allow clinicians to intervene AKI in early-stage. While the model’s specificity was superior to sensitivity. It suggests that NLPR could be most effectively used to exclude AKI. 
For high-risk patients, further testing (SCr or novel biomarkers) is necessary to stratify their AKI risk. Notably, we found that the incidence of AKI in patients receiving antibiotic treatment was significantly lower than in those without antibiotics. This phenomenon may be related to the blocking of certain inflammatory processes by antibiotics. Vancomycin and ceftriaxone have been shown to be potentially nephrotoxic; however, they remain the first-line antibiotics for infection after heart surgery. Subgroup analysis revealed that the application of these potentially nephrotoxic antibiotics could still reduce the risk of CSA-AKI; nevertheless, cardiac surgeons and nephrologists should reach a consensus and weigh the pros and cons of nephrotoxic antibiotics. Our study has several limitations. Firstly, it was designed as a single-center study, and it would have been better to follow up the patients and describe the relationship between NLPR and long-term kidney recovery. Secondly, we did not compare NLPR with C-reactive protein or monitor related cytokine expression, such as interleukin (IL)-8, myeloperoxidase, and IL-6. However, our pilot study showed that NLPR had a highly positive correlation with procalcitonin and lactic acid. In future studies, a large-scale cohort and multi-omics analysis are needed to verify the predictive value of NLPR for CSA-AKI incidence. In conclusion, the dynamics of perioperative NLPR are a promising and noninvasive marker of acute kidney injury during the perioperative period. Protective strategies should be implemented for patients with an initially increased NLPR. This will facilitate AKI risk management and allow clinicians to intervene early so as to reverse renal damage. Supplementary Material: Click here for additional data file.
Background: In this study, we applied a composite index, the neutrophil to lymphocyte*platelet ratio (NLPR), and explored the significance of the dynamics of perioperative NLPR in predicting cardiac surgery-associated acute kidney injury (CSA-AKI). Methods: Between July 1st and December 31st, 2019, participants were prospectively derived from the 'Zhongshan Cardiovascular Surgery Cohort'. NLPR was determined using neutrophil, lymphocyte and platelet counts at the two time points. Dose-response relationship analyses were applied to delineate the non-linear odds ratio (OR) of CSA-AKI at different NLPR levels. NLPRs were then integrated into the generalized estimating equation (GEE) to predict the risk of AKI. Results: Of 2449 patients receiving cardiovascular surgery, 838 (34.2%) developed CSA-AKI: stage 1 (n = 658, 26.9%) and stage 2-3 (n = 180, 7.3%). Compared with non-AKI patients, both preoperative and postoperative NLPR were higher in AKI patients (1.1[0.8, 1.8] vs. 0.9[0.7,1.4], p < 0.001; 12.4[7.5, 20.0] vs. 10.1[6.4,16.7], p < 0.001). The effect followed a 'J'-shaped relationship: the risk of CSA-AKI was relatively flat until a preoperative NLPR of 1.0 and increased rapidly afterward, with an odds ratio of 1.13 (1.06-1.19) per 1 unit. Similarly, patients whose postoperative NLPR value was >11.0 were more likely to develop AKI, with an OR of 1.02. Integrating the dynamic NLPRs into the GEE model, we found that the AUC was 0.806 (95% CI 0.793-0.819), significantly higher than the AUC without NLPR (0.799, p < 0.001). Conclusions: The dynamics of perioperative NLPR are a promising marker for predicting acute kidney injury. It will facilitate AKI risk management and allow clinicians to intervene early so as to reverse renal damage.
null
null
9,730
376
[ 174, 287, 203, 438, 438, 331, 433, 121, 254 ]
14
[ "aki", "nlpr", "model", "csa aki", "csa", "patients", "95", "ci", "surgery", "95 ci" ]
[ "lymphocyte ratio preoperative", "reason aki sepsis", "relating acute kidney", "associated aki sepsis", "acute kidney injury" ]
null
null
[CONTENT] Acute kidney injury | neutrophils | lymphocyte | platelet | predictor | cardiovascular surgery [SUMMARY]
[CONTENT] Acute kidney injury | neutrophils | lymphocyte | platelet | predictor | cardiovascular surgery [SUMMARY]
[CONTENT] Acute kidney injury | neutrophils | lymphocyte | platelet | predictor | cardiovascular surgery [SUMMARY]
null
[CONTENT] Acute kidney injury | neutrophils | lymphocyte | platelet | predictor | cardiovascular surgery [SUMMARY]
null
[CONTENT] Acute Kidney Injury | Adult | Aged | Biomarkers | Blood Platelets | Cardiac Surgical Procedures | Female | Humans | Leukocyte Count | Lymphocytes | Male | Middle Aged | Neutrophils | Platelet Count | Postoperative Complications | Prospective Studies | Risk Factors [SUMMARY]
[CONTENT] Acute Kidney Injury | Adult | Aged | Biomarkers | Blood Platelets | Cardiac Surgical Procedures | Female | Humans | Leukocyte Count | Lymphocytes | Male | Middle Aged | Neutrophils | Platelet Count | Postoperative Complications | Prospective Studies | Risk Factors [SUMMARY]
[CONTENT] Acute Kidney Injury | Adult | Aged | Biomarkers | Blood Platelets | Cardiac Surgical Procedures | Female | Humans | Leukocyte Count | Lymphocytes | Male | Middle Aged | Neutrophils | Platelet Count | Postoperative Complications | Prospective Studies | Risk Factors [SUMMARY]
null
[CONTENT] Acute Kidney Injury | Adult | Aged | Biomarkers | Blood Platelets | Cardiac Surgical Procedures | Female | Humans | Leukocyte Count | Lymphocytes | Male | Middle Aged | Neutrophils | Platelet Count | Postoperative Complications | Prospective Studies | Risk Factors [SUMMARY]
null
[CONTENT] lymphocyte ratio preoperative | reason aki sepsis | relating acute kidney | associated aki sepsis | acute kidney injury [SUMMARY]
[CONTENT] lymphocyte ratio preoperative | reason aki sepsis | relating acute kidney | associated aki sepsis | acute kidney injury [SUMMARY]
[CONTENT] lymphocyte ratio preoperative | reason aki sepsis | relating acute kidney | associated aki sepsis | acute kidney injury [SUMMARY]
null
[CONTENT] lymphocyte ratio preoperative | reason aki sepsis | relating acute kidney | associated aki sepsis | acute kidney injury [SUMMARY]
null
[CONTENT] aki | nlpr | model | csa aki | csa | patients | 95 | ci | surgery | 95 ci [SUMMARY]
[CONTENT] aki | nlpr | model | csa aki | csa | patients | 95 | ci | surgery | 95 ci [SUMMARY]
[CONTENT] aki | nlpr | model | csa aki | csa | patients | 95 | ci | surgery | 95 ci [SUMMARY]
null
[CONTENT] aki | nlpr | model | csa aki | csa | patients | 95 | ci | surgery | 95 ci [SUMMARY]
null
[CONTENT] aki | csa | csa aki | infectious | nlpr | prediction | inflammation | clinical | biomarker | csa aki prediction [SUMMARY]
[CONTENT] model | count | test | factors | package | scr | value | variables | surgery | applied [SUMMARY]
[CONTENT] aki | nlpr | 95 | aor | 95 ci | ci | 001 | csa aki | csa | patients [SUMMARY]
null
[CONTENT] aki | nlpr | model | 95 | patients | csa aki | csa | 95 ci | aor | ci [SUMMARY]
null
[CONTENT] NLPR | NLPR [SUMMARY]
[CONTENT] July 1st and December 31th 2019 | the 'Zhongshan Cardiovascular Surgery Cohort' ||| NLPR | two ||| NLPR ||| NLPRs | GEE | AKI [SUMMARY]
[CONTENT] 2449 | 838 | 34.2% | 1 | 658 | 26.9% | 2-3 | 180 | 7.3% ||| NLPR | AKI | 1.1[0.8 | 1.8 | 0.9[0.7,1.4 | 0.001 | 12.4[7.5 | 20.0 | 10.1[6.4,16.7 | 0.001 ||| 1.0 | NLPR | 1.13 | 1.06-1.19 | 1 ||| NLPR | 11.0 | 1.02 ||| GEE | CI | 0.793-0.819 | NLPR | 0.799 | 0.001 [SUMMARY]
null
[CONTENT] NLPR | NLPR ||| July 1st and December 31th 2019 | the 'Zhongshan Cardiovascular Surgery Cohort' ||| NLPR | two ||| NLPR ||| NLPRs | GEE | AKI ||| 2449 | 838 | 34.2% | 1 | 658 | 26.9% | 2-3 | 180 | 7.3% ||| NLPR | AKI | 1.1[0.8 | 1.8 | 0.9[0.7,1.4 | 0.001 | 12.4[7.5 | 20.0 | 10.1[6.4,16.7 | 0.001 ||| 1.0 | NLPR | 1.13 | 1.06-1.19 | 1 ||| NLPR | 11.0 | 1.02 ||| GEE | CI | 0.793-0.819 | NLPR | 0.799 | 0.001 ||| NPLR ||| clinicians [SUMMARY]
null
Efficacy of TB-PCR using EBUS-TBNA samples in patients with intrathoracic granulomatous lymphadenopathy.
26710846
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is widely used to perform mediastinal lymph node sampling. However, little information is available on polymerase chain reaction for Mycobacterium tuberculosis (TB-PCR) using EBUS-TBNA samples in patients with intrathoracic granulomatous lymphadenopathy (IGL).
BACKGROUND
A retrospective study using a prospectively collected database was performed from January 2010 to December 2014 to evaluate the efficacy of the TB-PCR test using EBUS-TBNA samples in patients with IGL. During the study period, 87 consecutive patients with isolated intrathoracic lymphadenopathy who received EBUS-TBNA were registered and 46 patients with IGL were included.
METHODS
Of the 46 patients with IGL, tuberculous lymphadenitis and sarcoidosis were diagnosed in 16 and 30 patients, respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of TB-PCR for tuberculous lymphadenitis were 56, 100, 100, and 81%, respectively. The overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85%. In addition, seven (17%) patients had non-diagnostic results from a histological examination and all of them had non-diagnostic microbiological results of an acid-fast bacilli smear and culture. Four (57%) of the seven patients with non-diagnostic results had positive TB-PCR results, and anti-tuberculosis treatment led to clinical and radiological improvement in all of the patients.
RESULTS
TB-PCR using EBUS-TBNA samples is a useful laboratory test for diagnosing IGL. Moreover, this technique can prevent further invasive evaluation in patients whose histological and microbiological tests are non-diagnostic.
CONCLUSIONS
[ "Adult", "Endoscopic Ultrasound-Guided Fine Needle Aspiration", "Female", "Humans", "Lymph Nodes", "Male", "Middle Aged", "Mycobacterium tuberculosis", "Polymerase Chain Reaction", "Predictive Value of Tests", "Retrospective Studies", "Sarcoidosis", "Sensitivity and Specificity", "Tuberculosis, Lymph Node" ]
4693414
Background
Intrathoracic granulomatous lymphadenopathies (IGLs), such as tuberculous lymphadenitis and sarcoidosis, are frequently encountered by respiratory physicians, and their diagnosis is based on histological and microbiological tests [1–3]. Conventional transbronchial needle aspiration or mediastinoscopy has traditionally been used to perform lymph node biopsy for histological examinations, as well as for stains and cultures for acid-fast organisms [4–6]. With the recent advent of endobronchial ultrasound, endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has been widely used to perform mediastinal lymph node biopsy or aspiration. There is increasing evidence regarding EBUS-TBNA as the first examination of choice in the evaluation of patients with IGL [7–9]. Most IGLs comprise tuberculous lymphadenitis and sarcoidosis. Occasionally, differentiating between these IGLs using a histological examination alone is difficult. Moreover, the diagnostic yield of acid-fast bacilli culture is still unsatisfactory, although such culture is considered the gold standard diagnostic method for tuberculosis. In addition to histological and microbiological tests, polymerase chain reaction for Mycobacterium tuberculosis (TB-PCR) is recognized as a useful test in the differential diagnosis of tuberculous lymphadenitis and sarcoidosis [10]. However, little information is available on TB-PCR using EBUS-TBNA samples. Therefore, we conducted this study to examine the diagnostic performance of TB-PCR using EBUS-TBNA samples in patients with IGL. We analyzed a prospectively collected database in South Korea, where the incidence of tuberculosis is intermediate (97/100,000 per year) [11].
null
null
Results
Study population

Using EBUS-TBNA, 82 lymph nodes were examined in 46 patients with IGL. All of the patients showed negative results on a screening assay for antibody to human immunodeficiency virus. Baseline characteristics are shown in Table 1. The mean age of the patients was 47.1 ± 17.1 years, and 28 (61 %) were female. The mean shortest diameter of the lymph nodes examined using EBUS-TBNA was 17.7 ± 5.0 mm (median, 17.9 mm) in patients with sarcoidosis and 22.6 ± 11.3 mm (median, 20.0 mm) in patients with tuberculous lymphadenitis.

Table 1. Baseline characteristics of the 46 patients with intrathoracic granulomatous lymphadenopathy (No. (%) or mean ± SD):
- Age, years: 47.1 ± 17.1
- Female gender: 28 (61)
- Number of LN involved per patient(a): 7.0 ± 3.6
- Number of LN examined per patient: 1.7 ± 0.7
- Number of needle passes per LN: 3.7 ± 1.8
- Number of tissue cores achieved per LN: 2.0 ± 1.3
- Stations of LN punctured(b): 2R, 2/82 (2); 4R, 31/82 (38); 4L, 3/82 (4); 7, 33/82 (40); 10R, 1/82 (1); 11R, 9/82 (11); 11L, 3/82 (4)
- Diameter of LN examined using EBUS-TBNA(c): shortest, 19.4 ± 8.0 mm; longest, 28.7 ± 11.4 mm
(a) An involved lymph node was defined as one with a shortest diameter ≥ 1 cm on a CT scan. (b) Based on the International Association for the Study of Lung Cancer (IASLC) lymph node map. (c) Measured on an axial CT scan. SD, standard deviation; LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration.

Diagnosis of tuberculous lymphadenitis and sarcoidosis

Of the 46 patients with IGL, 16 with tuberculous lymphadenitis and 30 with sarcoidosis were identified. Sputum acid-fast bacilli smear and M. tuberculosis culture were positive in 13 % (2/16) and 19 % (3/16) of patients with tuberculous lymphadenitis, respectively. Based on the results of M. tuberculosis culture, three patients had a confirmed diagnosis of tuberculous lymphadenitis. The remaining 13 patients were diagnosed with tuberculous lymphadenitis based on the clinical and radiological responses after anti-tuberculosis treatment or on histological results. Of the 30 patients with sarcoidosis, 28 were diagnosed according to compatible clinical, radiological, and laboratory results together with the demonstration of noncaseating granulomas in EBUS-TBNA samples. Mediastinoscopy was subsequently performed in one patient whose EBUS-TBNA showed anthracotic pigmentation with inflammation (Patient 1 in Table 2); the surgical specimen showed chronic granulomatous inflammation without caseous necrosis, and sarcoidosis was finally diagnosed with compatible radiological and laboratory results. Another patient, whose EBUS-TBNA samples were inadequate (grade V), refused mediastinoscopy; she was therefore followed up for more than 12 months and clinically diagnosed with sarcoidosis based on radiological and laboratory results (Patient 6 in Table 2).

Table 2. Detailed characteristics of the seven patients with grade IV or V histological results (all had negative acid-fast bacilli smears and cultures of EBUS-TBNA samples):
- Patient 1: female, 65 years; 3 LNs involved, 1 examined; 6 needle passes, 2 tissue cores; station 7; shortest diameter 10.6 mm; TB-PCR not detected; grade IV; mediastinoscopy performed; final diagnosis sarcoidosis.
- Patient 2: male, 78 years; 3 LNs involved, 2 examined; 4 and 2 needle passes, 0 and 2 tissue cores; stations 7 and 11R; shortest diameters 13.0 and 15.0 mm; MTB detected; grade IV; no mediastinoscopy; final diagnosis tuberculous lymphadenitis.
- Patient 3: male, 27 years; 3 LNs involved, 1 examined; 4 needle passes, 1 tissue core; station 4R; shortest diameter 32.4 mm; MTB detected; grade IV; no mediastinoscopy; final diagnosis tuberculous lymphadenitis.
- Patient 4: female, 73 years; 2 LNs involved, 2 examined; 3 and 3 needle passes, 1 and 2 tissue cores; stations 4R and 10R; shortest diameters 18.9 and 12.2 mm; MTB detected; grade IV; no mediastinoscopy; final diagnosis tuberculous lymphadenitis.
- Patient 5: female, 63 years; 4 LNs involved, 2 examined; 2 and 2 needle passes, 2 and 2 tissue cores; stations 2R and 4R; shortest diameters 14.6 and 10.5 mm; TB-PCR not detected; grade V; mediastinoscopy performed; final diagnosis tuberculous lymphadenitis.
- Patient 6: female, 33 years; 8 LNs involved, 2 examined; 6 and 2 needle passes, 1 and 1 tissue cores; stations 4R and 7; shortest diameters 17.2 and 12.6 mm; TB-PCR not detected; grade V; no mediastinoscopy; final diagnosis sarcoidosis (clinically diagnosed based on radiological and laboratory results after more than 12 months of follow-up).
- Patient 7: female, 56 years; 4 LNs involved, 1 examined; 1 needle pass, 0 tissue cores; station 4R; shortest diameter 15.7 mm; MTB detected; grade V; no mediastinoscopy; final diagnosis tuberculous lymphadenitis.
Involved lymph nodes were those with a shortest diameter ≥ 10 mm on an axial CT scan. LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis.

Diagnostic performance of TB-PCR

Nested PCR and real-time PCR were performed in 29 and 17 patients, respectively. TB-PCR was negative in all patients with sarcoidosis. Of the 16 patients with tuberculous lymphadenitis, nine had positive TB-PCR results. The sensitivity, specificity, positive predictive value, and negative predictive value were 56 % (95 % confidence interval [CI], 29.9–81.2), 100 % (95 % CI, 88.3–100.0), 100 % (95 % CI, 66.2–100.0), and 81 % (95 % CI, 64.8–92.0), respectively. There was no difference in diagnostic performance between nested and real-time TB-PCR. The overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85 % in patients with IGL.

Classification of histology

Histological results were grade I to III in 39 patients (85 %) and grade IV to V in seven patients (15 %) (Table 3). The four (9 %) patients with grade IV and three (7 %) with grade V results, regarded as non-diagnostic, also had non-diagnostic microbiological results (negative acid-fast bacilli smear and culture). Two patients who underwent mediastinoscopy were eventually diagnosed with sarcoidosis and tuberculous lymphadenitis (Patients 1 and 5, respectively, in Table 2). Four (57 %) of the seven patients with grade IV to V results had positive TB-PCR results, and all four achieved clinical and radiological improvement after anti-tuberculosis treatment (Patients 2, 3, 4, and 7 in Table 2).

Table 3. Results of histological examinations: TB-PCR positivity by histological grade, presented as the number of positive results/number of patients tested (%):
- Grade I: sarcoidosis 0/0 (0); tuberculous lymphadenitis 5/8 (63)
- Grade II: sarcoidosis 0/28 (0); tuberculous lymphadenitis 0/1 (0)
- Grade III: sarcoidosis 0/0 (0); tuberculous lymphadenitis 0/2 (0)
- Grade IV: sarcoidosis 0/1 (0); tuberculous lymphadenitis 3/3 (100)
- Grade V: sarcoidosis 0/1 (0); tuberculous lymphadenitis 1/2 (50)
- Total: sarcoidosis 0/30 (0); tuberculous lymphadenitis 9/16 (56)

Diagnostic yield of EBUS-TBNA

The overall diagnostic yield of the combined histological and microbiological data was 85 %. When the TB-PCR results were combined with the histological and microbiological data, the overall diagnostic yield increased to 94 %. A short worked check of these figures follows below.
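The diagnostic indices and yields reported above follow from the counts given in the text (9 of 16 tuberculous lymphadenitis patients TB-PCR positive, 0 of 30 sarcoidosis patients positive, 39 of 46 patients diagnostic on histology plus microbiology). The Python sketch below is a worked arithmetic check of those figures, not code from the study; the reported confidence intervals are not recomputed here.

```python
# Worked check of the diagnostic indices reported above, using the counts
# given in the text: 16 tuberculous lymphadenitis patients (9 TB-PCR positive)
# and 30 sarcoidosis patients (0 TB-PCR positive) among 46 patients with IGL.

tp = 9          # TB-PCR positive, tuberculous lymphadenitis
fn = 16 - tp    # TB-PCR negative, tuberculous lymphadenitis
fp = 0          # TB-PCR positive, sarcoidosis
tn = 30 - fp    # TB-PCR negative, sarcoidosis

sensitivity = tp / (tp + fn)                  # 9/16  -> ~56 %
specificity = tn / (tn + fp)                  # 30/30 -> 100 %
ppv = tp / (tp + fp)                          # 9/9   -> 100 %
npv = tn / (tn + fn)                          # 30/37 -> ~81 %
accuracy = (tp + tn) / (tp + tn + fp + fn)    # 39/46 -> ~85 %

# Diagnostic yield: 39/46 patients were diagnostic on histology plus
# microbiology (~85 %); adding the 4 TB-PCR-positive, otherwise non-diagnostic
# patients gives 43/46 (about 94 %, as reported above).
yield_without_pcr = 39 / 46
yield_with_pcr = (39 + 4) / 46

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy),
                    ("yield without TB-PCR", yield_without_pcr),
                    ("yield with TB-PCR", yield_with_pcr)]:
    print(f"{name}: {value:.1%}")
```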
Conclusions
TB-PCR using EBUS-TBNA samples is a useful laboratory test for diagnosing IGL. Moreover, this method can prevent further invasive evaluation in patients whose histological and microbiological tests are non-diagnostic.
[ "Study population", "EBUS-TBNA procedure", "TB-PCR", "Diagnosis of tuberculous lymphadenitis and sarcoidosis", "Classification of granulomatous inflammation", "Statistical analysis", "Study population", "Diagnosis of tuberculous lymphadenitis and sarcoidosis", "Diagnostic performance of TB-PCR", "Classification of histology", "Diagnostic yield of EBUS-TBNA" ]
[ "From January 2010 to December 2014, a retrospective study with a prospectively collected database was performed to evaluate the efficacy of TB-PCR using EBUS-TBNA samples in patients with IGL at Pusan National University Hospital (university-affiliated, tertiary referral hospital in Busan, South Korea). During the study period, all consecutive patients who received EBUS-TBNA were prospectively registered. As a result, 87 patients with isolated intrathoracic lymphadenopathy, defined as lymphadenopathy without lung parenchymal abnormalities, received EBUS-TBNA. Fifty patients were diagnosed with IGL (Fig. 1). Of the 50 selected patients with IGL, four who were not subjected to TB-PCR were excluded from the analysis. Therefore, 46 patients with IGL were finally included in the present study. Fig. 1. Study flow diagram. *Of the five patients with reactive hyperplasia, two were confirmed by subsequent mediastinoscopy, and a CT scan of the remaining three patients showed decreased or unchanged lymph node sizes. †All five patients with anthracotic lymph nodes were followed up for more than 6 months, and the lymph node size was decreased or unchanged on subsequent CT. ‡In six patients who were lost to follow-up, the results of EBUS-TBNA were insufficient specimens in three patients and reactive hyperplasia in the other patients. §Histological specimens were classified into five grades: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; IGL, intrathoracic granulomatous lymphadenopathy; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis\n\nThe Institutional Review Board of Pusan National University Hospital approved this study (No. E-2015040). Informed consent was waived because of the retrospective nature of the study.", "All bronchoscopic procedures were performed under conscious sedation with local anesthesia by three pulmonary physicians (Eom JS, Mok JH, and Lee K). Before EBUS-TBNA, conventional bronchoscopy was conducted in a standard fashion for airway inspection and administration of lidocaine into the tracheobronchial tree via the working channel of the bronchoscope. Following conventional bronchoscopy, EBUS-TBNA was performed using a dedicated bronchoscope with a linear ultrasound transducer (BF-UC260F-OL8; Olympus, Tokyo, Japan). 
Systemic assessment of the mediastinal, hilar, and interlobar lymph nodes was made based on computed tomography (CT) findings [12], and target lymph nodes were punctured with a 22-gauge needle (NA-201SX-4022; Olympus, Tokyo, Japan) under real-time ultrasound guidance. Two or more punctures were performed on each target lymph node until at least two tissue core specimens were obtained. One of the tissue core specimens was placed in formalin for histological examination. Other tissue core specimens that were placed in sterile saline were analyzed by fluorescence microscopy using auramine-rhodamine staining. Additionally, solid (3 % Ogawa medium) and liquid media (BacT/ALERT MP; bioMérieux, Durham, NC, USA) were used to culture for mycobacteria from the specimens in sterile saline.", "Nested PCR (for formalin-fixed paraffin-embedded specimens) or real-time PCR (for specimens in sterile saline) was conducted to detect nucleic acids of M. tuberculosis using tissue core samples collected by EBUS-TBNA.\nFor nested PCR, formalin-fixed paraffin-embedded tissues (12-μm thick) were incubated in 1 mL of xylene at 60 °C for 30 min and then centrifuged for 10 min (8800 rpm). Paraffin and the supernatant were removed from the samples after centrifugation. The same procedures were repeated until deparaffinization was complete. After adding 1 mL of alcohol, the samples were centrifuged for 5 min (8800 rpm), and the supernatant was removed. The samples were then air-dried as pellets. DNA was extracted using QIAamp (Qiagen, Valencia, CA, USA) according to the manufacturer’s instructions. PCR amplifications were performed using M. tuberculosis IS6110 primers (MTB-PCR kit; Biosewoom, Seoul, South Korea) according to the manufacturer’s instructions. The first round using outer primers and the second round using inner primers amplified a 256- and 181-bp fragment, respectively. Finally, the PCR products were visualized in a 2 % agarose gel.\nFor real-time PCR, specimens in sterile saline were filtered and 1 mL of phosphate-based saline was added. After centrifugation for 3 min (13,000 rpm), the supernatant was removed. Next, 50 μL of extraction buffer was added to the sample, and the sample was heated at 100 °C for 20 min. After centrifugation for 3 min (13,000 rpm), the supernatant was used in PCR. Real-time PCR was performed using the AdvanSure TB/NTM real-time PCR kit (LG Life Science, Seoul, South Korea) according to the manufacturer’s protocol. Three channels were used in the real-time PCR reaction (M. tuberculosis complex, mycobacteria, and internal control). Signals for FAM, HEX, and Cy5 were measured in each channel. M. tuberculosis was considered present if the cycle threshold of rpoB was less than 35 in each signal and greater than or equivalent to that of IS6110.", "Tuberculous lymphadenitis was regarded as present when M. tuberculosis was cultured in EBUS-TBNA samples. Patients with histological findings suggesting tuberculosis (granulomatous reaction with caseation necrosis or multinucleated giant cells associated with epithelioid histiocytes) or radiological findings compatible with tuberculous lymphadenitis (CT findings of nodes with central low attenuation and peripheral rim enhancement) were also considered to have tuberculosis only if clinical and radiological improvement was achieved after the standard anti-tuberculosis treatment [13]. Clinical and radiological improvement was defined as disappearance of symptoms and a decrease in the size of lymph nodes at a follow-up radiological examination. 
The diagnosis of sarcoidosis required compatible clinical and radiological results, exclusion of other granulomatous diseases with a similar histological or clinical picture (e.g., tuberculous lymphadenitis), and histological demonstration of noncaseating granulomas [1, 3].", "Histological specimens were classified into five grades as previously reported [14]: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. If two or more lymph nodes were examined in one patient, the most suitable specimen for diagnosis was selected for analysis.", "Analyses were conducted on a per-patient basis according to the final diagnosis and results of the TB-PCR tests. All categorical and continuous variables are shown as numbers (percentages) and means (standard deviation), respectively. The overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was calculated as follows: diagnostic accuracy (%) = number of patients with IGL who were accurately diagnosed (total number of true-positive and true-negative cases)/total number of patients with IGL [15]. P values of <0.05 were considered to indicate significance. All analyses were conducted using SPSS for Windows version 17.0 (SPSS Inc., Chicago, IL, USA).", "Using EBUS-TBNA, 82 lymph nodes were examined in 46 patients with IGL. All of the patients showed negative results on a screening assay for antibody to human immunodeficiency virus. Baseline characteristics are shown in Table 1. The mean age of the patients was 47.1 ± 17.1 years, and 28 (61 %) were female. The mean shortest diameter of the lymph nodes examined using EBUS-TBNA was 17.7 ± 5.0 mm (median, 17.9 mm) in patients with sarcoidosis and 22.6 ± 11.3 mm (median, 20.0 mm) in patients with tuberculous lymphadenitis. Table 1 (Baseline characteristics of the 46 patients with intrathoracic granulomatous lymphadenopathy; No. (%) or mean ± SD): Age 47.1 ± 17.1; Female gender 28 (61); Number of LN involved per patient (shortest diameter ≥ 1 cm on a CT scan) 7.0 ± 3.6; Number of LN examined per patient 1.7 ± 0.7; Number of needle passes per LN 3.7 ± 1.8; Number of tissue cores achieved per LN 2.0 ± 1.3; Stations of LN punctured (IASLC lymph node map): 2R 2/82 (2), 4R 31/82 (38), 4L 3/82 (4), 7 33/82 (40), 10R 1/82 (1), 11R 9/82 (11), 11L 3/82 (4); Diameter of LN examined using EBUS-TBNA (on an axial CT scan): shortest 19.4 ± 8.0 mm, longest 28.7 ± 11.4 mm. SD, standard deviation; LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration.", "Of the 46 patients with IGL, 16 with tuberculous lymphadenitis and 30 with sarcoidosis were identified. Sputum acid-fast bacilli smear and M. tuberculosis culture were positive in 13 % (2/16) and 19 % (3/16) of patients with tuberculous lymphadenitis, respectively. Based on the results of M. tuberculosis culture, three patients had a confirmed diagnosis of tuberculous lymphadenitis. The remaining 13 patients were diagnosed with tuberculous lymphadenitis based on the clinical and radiological responses after anti-tuberculosis treatment or on histological results. Of the 30 patients with sarcoidosis, 28 were diagnosed according to compatible clinical, radiological, and laboratory results together with the demonstration of noncaseating granulomas in EBUS-TBNA samples. Mediastinoscopy was subsequently performed in one patient whose EBUS-TBNA showed anthracotic pigmentation with inflammation (Patient 1 in Table 2); the surgical specimen showed chronic granulomatous inflammation without caseous necrosis, and sarcoidosis was finally diagnosed with compatible radiological and laboratory results. Another patient, whose EBUS-TBNA samples were inadequate (grade V), refused mediastinoscopy; she was therefore followed up for more than 12 months and clinically diagnosed with sarcoidosis based on radiological and laboratory results (Patient 6 in Table 2). Table 2 (Detailed characteristics of the seven patients with grade IV or V histological results; all had negative acid-fast bacilli smears and cultures of EBUS-TBNA samples; involved lymph nodes were those with a shortest diameter ≥ 10 mm on an axial CT scan): Patient 1, female, 65 years, 3 LNs involved, 1 examined, 6 needle passes, 2 tissue cores, station 7, shortest diameter 10.6 mm, TB-PCR not detected, grade IV, mediastinoscopy performed, final diagnosis sarcoidosis; Patient 2, male, 78 years, 3 LNs involved, 2 examined, 4 and 2 needle passes, 0 and 2 tissue cores, stations 7 and 11R, 13.0 and 15.0 mm, MTB detected, grade IV, tuberculous lymphadenitis; Patient 3, male, 27 years, 3 LNs involved, 1 examined, 4 needle passes, 1 tissue core, station 4R, 32.4 mm, MTB detected, grade IV, tuberculous lymphadenitis; Patient 4, female, 73 years, 2 LNs involved, 2 examined, 3 and 3 needle passes, 1 and 2 tissue cores, stations 4R and 10R, 18.9 and 12.2 mm, MTB detected, grade IV, tuberculous lymphadenitis; Patient 5, female, 63 years, 4 LNs involved, 2 examined, 2 and 2 needle passes, 2 and 2 tissue cores, stations 2R and 4R, 14.6 and 10.5 mm, TB-PCR not detected, grade V, mediastinoscopy performed, tuberculous lymphadenitis; Patient 6, female, 33 years, 8 LNs involved, 2 examined, 6 and 2 needle passes, 1 and 1 tissue cores, stations 4R and 7, 17.2 and 12.6 mm, TB-PCR not detected, grade V, clinically diagnosed with sarcoidosis after more than 12 months of follow-up; Patient 7, female, 56 years, 4 LNs involved, 1 examined, 1 needle pass, 0 tissue cores, station 4R, 15.7 mm, MTB detected, grade V, tuberculous lymphadenitis. LN, lymph node; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis.", "Nested PCR and real-time PCR were performed in 29 and 17 patients, respectively. TB-PCR was negative in all patients with sarcoidosis. Of the 16 patients with tuberculous lymphadenitis, nine showed positive TB-PCR results. The sensitivity, specificity, positive predictive value, and negative predictive value were 56 % (95 % CI [confidence interval], 29.9–81.2), 100 % (95 % CI, 88.3–100.0), 100 % (95 % CI, 66.2–100.0), and 81 % (95 % CI, 64.8–92.0), respectively. There was no difference in the diagnostic performance between nested and real-time TB-PCR. In addition, the overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85 % in patients with IGL.", "Histological results were grades I to III in 39 patients (85 %) and grades IV to V in seven patients (15 %) (Table 3). Four (9 %) and three (7 %) patients had grade IV and V results, respectively; these were regarded as non-diagnostic, and all also had non-diagnostic microbiological results of acid-fast bacilli smear and culture. Two patients who underwent mediastinoscopy were eventually diagnosed with sarcoidosis and tuberculous lymphadenitis (Patients 1 and 5, respectively, in Table 2). Four (57 %) of the seven patients with grade IV to V results had positive TB-PCR results, and all four achieved clinical and radiological improvement after anti-tuberculosis treatment (Patients 2, 3, 4, and 7 in Table 2). Table 3 (Results of histological examinations; TB-PCR-positive results/number of patients tested (%)): grade I, sarcoidosis 0/0 (0), tuberculous lymphadenitis 5/8 (63); grade II, 0/28 (0) and 0/1 (0); grade III, 0/0 (0) and 0/2 (0); grade IV, 0/1 (0) and 3/3 (100); grade V, 0/1 (0) and 1/2 (50); total, 0/30 (0) and 9/16 (56).", "The overall diagnostic yield of the combined histological and microbiological data was 85 %. When the results of TB-PCR were combined with the histological and microbiological data, the overall diagnostic yield increased to 94 %." ]
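The TB-PCR methods entry in the list above gives the real-time PCR call rule only in prose: M. tuberculosis is considered present when the rpoB cycle threshold (Ct) is below 35 and greater than or equal to the IS6110 Ct. The sketch below merely encodes that stated rule for clarity, simplified to a single rpoB Ct rather than the three per-channel signals; it is not the AdvanSure kit's software, and the function and parameter names are hypothetical.

```python
# Minimal sketch of the real-time PCR interpretation rule stated in the
# TB-PCR methods entry above: call M. tuberculosis "present" when the rpoB
# cycle threshold (Ct) is below 35 and is greater than or equal to the
# IS6110 Ct. Names are hypothetical, not the kit's actual software or API.

from typing import Optional


def mtb_detected(ct_rpob: Optional[float], ct_is6110: Optional[float],
                 ct_cutoff: float = 35.0) -> bool:
    """Apply the stated call rule; a missing Ct (no amplification) is treated as a negative call."""
    if ct_rpob is None or ct_is6110 is None:
        return False
    return ct_rpob < ct_cutoff and ct_rpob >= ct_is6110


if __name__ == "__main__":
    # Hypothetical Ct values for illustration only.
    print(mtb_detected(ct_rpob=28.4, ct_is6110=26.1))  # True: rpoB Ct < 35 and >= IS6110 Ct
    print(mtb_detected(ct_rpob=37.2, ct_is6110=33.0))  # False: rpoB Ct above the cutoff
    print(mtb_detected(ct_rpob=24.0, ct_is6110=30.5))  # False: rpoB Ct below the IS6110 Ct
```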
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "EBUS-TBNA procedure", "TB-PCR", "Diagnosis of tuberculous lymphadenitis and sarcoidosis", "Classification of granulomatous inflammation", "Statistical analysis", "Results", "Study population", "Diagnosis of tuberculous lymphadenitis and sarcoidosis", "Diagnostic performance of TB-PCR", "Classification of histology", "Diagnostic yield of EBUS-TBNA", "Discussion", "Conclusions" ]
[ "Intrathoracic granulomatous lymphadenopathies (IGLs), such as tuberculous lymphadenitis and sarcoidosis, are frequently encountered by respiratory physicians, and their diagnosis is based on histological and microbiological tests [1–3]. Conventional transbronchial needle aspiration or mediastinoscopy has traditionally been used to perform lymph node biopsy for histological examinations, as well as for stains and cultures for acid-fast organisms [4–6]. With the recent advent of endobronchial ultrasound, endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has been widely used to perform mediastinal lymph node biopsy or aspiration. There is increasing evidence regarding EBUS-TBNA as the first examination of choice in the evaluation of patients with IGL [7–9].\nMost IGLs comprise tuberculous lymphadenitis and sarcoidosis. Occasionally, differentiating between these IGLs using a histological examination alone is difficult. Moreover, the diagnostic yield of acid-fast bacilli culture is still unsatisfactory, although such culture is considered the gold standard diagnostic method for tuberculosis. In addition to histological and microbiological tests, polymerase chain reaction for Mycobacterium tuberculosis (TB-PCR) is recognized as a useful test in the differential diagnosis of tuberculous lymphadenitis and sarcoidosis [10]. However, little information is available on TB-PCR using EBUS-TBNA samples. Therefore, we conducted this study to examine the diagnostic performance of TB-PCR using EBUS-TBNA samples in patients with IGL. We analyzed a prospectively collected database in South Korea, where the incidence of tuberculosis is intermediate (97/100,000 per year) [11].", " Study population From January 2010 to December 2014, a retrospective study with a prospectively collected database was performed to evaluate the efficacy of TB-PCR using EBUS-TBNA samples in patients with IGL at Pusan National University Hospital (university-affiliated, tertiary referral hospital in Busan, South Korea). During the study period, all consecutive patients who received EBUS-TBNA were prospectively registered. As a result, 87 patients with isolated intrathoracic lymphadenopathy, defined as lymphadenopathy without lung parenchymal abnormalities, received EBUS-TBNA. Fifty patients were diagnosed with IGL (Fig. 1). Of the 50 selected patients with IGL, four who were not subjected to TB-PCR were excluded from the analysis. Therefore, 46 patients with IGL were finally included in the present study.Fig. 1Study flow diagram. *Of the five patients with reactive hyperplasia, two were confirmed by subsequent mediastinoscopy, and a CT scan of the remaining three patients showed decreased or unchanged lymph node sizes. †All five patients with anthracotic lymph nodes were followed up for more than 6 months, and the lymph node size was decreased or unchanged on subsequent CT. ‡In six patients who were lost to follow-up, the results of EBUS-TBNA were insufficient specimens in three patients and reactive hyperplasia in the other patients. §Histological specimens were classified into five grades: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; IGL, intrathoracic granulomatous lymphadenopathy; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis\n\nStudy flow diagram. 
*Of the five patients with reactive hyperplasia, two were confirmed by subsequent mediastinoscopy, and a CT scan of the remaining three patients showed decreased or unchanged lymph node sizes. †All five patients with anthracotic lymph nodes were followed up for more than 6 months, and the lymph node size was decreased or unchanged on subsequent CT. ‡In six patients who were lost to follow-up, the results of EBUS-TBNA were insufficient specimens in three patients and reactive hyperplasia in the other patients. §Histological specimens were classified into five grades: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; IGL, intrathoracic granulomatous lymphadenopathy; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis\n\nThe Institutional Review Board of Pusan National University Hospital approved this study (No. E-2015040). Informed consent was waived because of the retrospective nature of the study.\nFrom January 2010 to December 2014, a retrospective study with a prospectively collected database was performed to evaluate the efficacy of TB-PCR using EBUS-TBNA samples in patients with IGL at Pusan National University Hospital (university-affiliated, tertiary referral hospital in Busan, South Korea). During the study period, all consecutive patients who received EBUS-TBNA were prospectively registered. As a result, 87 patients with isolated intrathoracic lymphadenopathy, defined as lymphadenopathy without lung parenchymal abnormalities, received EBUS-TBNA. Fifty patients were diagnosed with IGL (Fig. 1). Of the 50 selected patients with IGL, four who were not subjected to TB-PCR were excluded from the analysis. Therefore, 46 patients with IGL were finally included in the present study.Fig. 1Study flow diagram. *Of the five patients with reactive hyperplasia, two were confirmed by subsequent mediastinoscopy, and a CT scan of the remaining three patients showed decreased or unchanged lymph node sizes. †All five patients with anthracotic lymph nodes were followed up for more than 6 months, and the lymph node size was decreased or unchanged on subsequent CT. ‡In six patients who were lost to follow-up, the results of EBUS-TBNA were insufficient specimens in three patients and reactive hyperplasia in the other patients. §Histological specimens were classified into five grades: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; IGL, intrathoracic granulomatous lymphadenopathy; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis\n\nStudy flow diagram. *Of the five patients with reactive hyperplasia, two were confirmed by subsequent mediastinoscopy, and a CT scan of the remaining three patients showed decreased or unchanged lymph node sizes. †All five patients with anthracotic lymph nodes were followed up for more than 6 months, and the lymph node size was decreased or unchanged on subsequent CT. ‡In six patients who were lost to follow-up, the results of EBUS-TBNA were insufficient specimens in three patients and reactive hyperplasia in the other patients. 
§Histological specimens were classified into five grades: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; IGL, intrathoracic granulomatous lymphadenopathy; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis\n\nThe Institutional Review Board of Pusan National University Hospital approved this study (No. E-2015040). Informed consent was waived because of the retrospective nature of the study.\n EBUS-TBNA procedure All bronchoscopic procedures were performed under conscious sedation with local anesthesia by three pulmonary physicians (Eom JS, Mok JH, and Lee K). Before EBUS-TBNA, conventional bronchoscopy was conducted in a standard fashion for airway inspection and administration of lidocaine into the tracheobronchial tree via the working channel of the bronchoscope. Following conventional bronchoscopy, EBUS-TBNA was performed using a dedicated bronchoscope with a linear ultrasound transducer (BF-UC260F-OL8; Olympus, Tokyo, Japan). Systemic assessment of the mediastinal, hilar, and interlobar lymph nodes was made based on computed tomography (CT) findings [12], and target lymph nodes were punctured with a 22-gauge needle (NA-201SX-4022; Olympus, Tokyo, Japan) under real-time ultrasound guidance. Two or more punctures were performed on each target lymph node until at least two tissue core specimens were obtained. One of the tissue core specimens was placed in formalin for histological examination. Other tissue core specimens that were placed in sterile saline were analyzed by fluorescence microscopy using auramine-rhodamine staining. Additionally, solid (3 % Ogawa medium) and liquid media (BacT/ALERT MP; bioMérieux, Durham, NC, USA) were used to culture for mycobacteria from the specimens in sterile saline.\nAll bronchoscopic procedures were performed under conscious sedation with local anesthesia by three pulmonary physicians (Eom JS, Mok JH, and Lee K). Before EBUS-TBNA, conventional bronchoscopy was conducted in a standard fashion for airway inspection and administration of lidocaine into the tracheobronchial tree via the working channel of the bronchoscope. Following conventional bronchoscopy, EBUS-TBNA was performed using a dedicated bronchoscope with a linear ultrasound transducer (BF-UC260F-OL8; Olympus, Tokyo, Japan). Systemic assessment of the mediastinal, hilar, and interlobar lymph nodes was made based on computed tomography (CT) findings [12], and target lymph nodes were punctured with a 22-gauge needle (NA-201SX-4022; Olympus, Tokyo, Japan) under real-time ultrasound guidance. Two or more punctures were performed on each target lymph node until at least two tissue core specimens were obtained. One of the tissue core specimens was placed in formalin for histological examination. Other tissue core specimens that were placed in sterile saline were analyzed by fluorescence microscopy using auramine-rhodamine staining. Additionally, solid (3 % Ogawa medium) and liquid media (BacT/ALERT MP; bioMérieux, Durham, NC, USA) were used to culture for mycobacteria from the specimens in sterile saline.\n TB-PCR Nested PCR (for formalin-fixed paraffin-embedded specimens) or real-time PCR (for specimens in sterile saline) was conducted to detect nucleic acids of M. 
tuberculosis using tissue core samples collected by EBUS-TBNA.\nFor nested PCR, formalin-fixed paraffin-embedded tissues (12-μm thick) were incubated in 1 mL of xylene at 60 °C for 30 min and then centrifuged for 10 min (8800 rpm). Paraffin and the supernatant were removed from the samples after centrifugation. The same procedures were repeated until deparaffinization was complete. After adding 1 mL of alcohol, the samples were centrifuged for 5 min (8800 rpm), and the supernatant was removed. The samples were then air-dried as pellets. DNA was extracted using QIAamp (Qiagen, Valencia, CA, USA) according to the manufacturer’s instructions. PCR amplifications were performed using M. tuberculosis IS6110 primers (MTB-PCR kit; Biosewoom, Seoul, South Korea) according to the manufacturer’s instructions. The first round using outer primers and the second round using inner primers amplified a 256- and 181-bp fragment, respectively. Finally, the PCR products were visualized in a 2 % agarose gel.\nFor real-time PCR, specimens in sterile saline were filtered and 1 mL of phosphate-based saline was added. After centrifugation for 3 min (13,000 rpm), the supernatant was removed. Next, 50 μL of extraction buffer was added to the sample, and the sample was heated at 100 °C for 20 min. After centrifugation for 3 min (13,000 rpm), the supernatant was used in PCR. Real-time PCR was performed using the AdvanSure TB/NTM real-time PCR kit (LG Life Science, Seoul, South Korea) according to the manufacturer’s protocol. Three channels were used in the real-time PCR reaction (M. tuberculosis complex, mycobacteria, and internal control). Signals for FAM, HEX, and Cy5 were measured in each channel. M. tuberculosis was considered present if the cycle threshold of rpoB was less than 35 in each signal and greater than or equivalent to that of IS6110.\nNested PCR (for formalin-fixed paraffin-embedded specimens) or real-time PCR (for specimens in sterile saline) was conducted to detect nucleic acids of M. tuberculosis using tissue core samples collected by EBUS-TBNA.\nFor nested PCR, formalin-fixed paraffin-embedded tissues (12-μm thick) were incubated in 1 mL of xylene at 60 °C for 30 min and then centrifuged for 10 min (8800 rpm). Paraffin and the supernatant were removed from the samples after centrifugation. The same procedures were repeated until deparaffinization was complete. After adding 1 mL of alcohol, the samples were centrifuged for 5 min (8800 rpm), and the supernatant was removed. The samples were then air-dried as pellets. DNA was extracted using QIAamp (Qiagen, Valencia, CA, USA) according to the manufacturer’s instructions. PCR amplifications were performed using M. tuberculosis IS6110 primers (MTB-PCR kit; Biosewoom, Seoul, South Korea) according to the manufacturer’s instructions. The first round using outer primers and the second round using inner primers amplified a 256- and 181-bp fragment, respectively. Finally, the PCR products were visualized in a 2 % agarose gel.\nFor real-time PCR, specimens in sterile saline were filtered and 1 mL of phosphate-based saline was added. After centrifugation for 3 min (13,000 rpm), the supernatant was removed. Next, 50 μL of extraction buffer was added to the sample, and the sample was heated at 100 °C for 20 min. After centrifugation for 3 min (13,000 rpm), the supernatant was used in PCR. Real-time PCR was performed using the AdvanSure TB/NTM real-time PCR kit (LG Life Science, Seoul, South Korea) according to the manufacturer’s protocol. 
Three channels were used in the real-time PCR reaction (M. tuberculosis complex, mycobacteria, and internal control). Signals for FAM, HEX, and Cy5 were measured in each channel. M. tuberculosis was considered present if the cycle threshold of rpoB was less than 35 in each signal and greater than or equivalent to that of IS6110.\n Diagnosis of tuberculous lymphadenitis and sarcoidosis Tuberculous lymphadenitis was regarded as present when M. tuberculosis was cultured in EBUS-TBNA samples. Patients with histological findings suggesting tuberculosis (granulomatous reaction with caseation necrosis or multinucleated giant cells associated with epithelioid histiocytes) or radiological findings compatible with tuberculous lymphadenitis (CT findings of nodes with central low attenuation and peripheral rim enhancement) were also considered to have tuberculosis only if clinical and radiological improvement was achieved after the standard anti-tuberculosis treatment [13]. Clinical and radiological improvement was defined as disappearance of symptoms and a decrease in the size of lymph nodes at a follow-up radiological examination. The diagnosis of sarcoidosis required compatible clinical and radiological results, exclusion of other granulomatous diseases with a similar histological or clinical picture (e.g., tuberculous lymphadenitis), and histological demonstration of noncaseating granulomas [1, 3].\nTuberculous lymphadenitis was regarded as present when M. tuberculosis was cultured in EBUS-TBNA samples. Patients with histological findings suggesting tuberculosis (granulomatous reaction with caseation necrosis or multinucleated giant cells associated with epithelioid histiocytes) or radiological findings compatible with tuberculous lymphadenitis (CT findings of nodes with central low attenuation and peripheral rim enhancement) were also considered to have tuberculosis only if clinical and radiological improvement was achieved after the standard anti-tuberculosis treatment [13]. Clinical and radiological improvement was defined as disappearance of symptoms and a decrease in the size of lymph nodes at a follow-up radiological examination. The diagnosis of sarcoidosis required compatible clinical and radiological results, exclusion of other granulomatous diseases with a similar histological or clinical picture (e.g., tuberculous lymphadenitis), and histological demonstration of noncaseating granulomas [1, 3].\n Classification of granulomatous inflammation Histological specimens were classified into five grades as previously reported [14]: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. If two or more lymph nodes were examined in one patient, the most suitable specimen for diagnosis was selected for analysis.\nHistological specimens were classified into five grades as previously reported [14]: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. If two or more lymph nodes were examined in one patient, the most suitable specimen for diagnosis was selected for analysis.\n Statistical analysis Analyses were conducted on a per-patient basis according to the final diagnosis and results of the TB-PCR tests. 
All categorical and continuous variables are shown as numbers (percentages) and means (standard deviation), respectively. The overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was calculated as follows: diagnostic accuracy (%) = number of patients with IGL who were accurately diagnosed (total number of true-positive and true-negative cases)/total number of patients with IGL [15]. P values of <0.05 were considered to indicate significance. All analyses were conducted using SPSS for Windows version 17.0 (SPSS Inc., Chicago, IL, USA).\nAnalyses were conducted on a per-patient basis according to the final diagnosis and results of the TB-PCR tests. All categorical and continuous variables are shown as numbers (percentages) and means (standard deviation), respectively. The overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was calculated as follows: diagnostic accuracy (%) = number of patients with IGL who were accurately diagnosed (total number of true-positive and true-negative cases)/total number of patients with IGL [15]. P values of <0.05 were considered to indicate significance. All analyses were conducted using SPSS for Windows version 17.0 (SPSS Inc., Chicago, IL, USA).", "From January 2010 to December 2014, a retrospective study with a prospectively collected database was performed to evaluate the efficacy of TB-PCR using EBUS-TBNA samples in patients with IGL at Pusan National University Hospital (university-affiliated, tertiary referral hospital in Busan, South Korea). During the study period, all consecutive patients who received EBUS-TBNA were prospectively registered. As a result, 87 patients with isolated intrathoracic lymphadenopathy, defined as lymphadenopathy without lung parenchymal abnormalities, received EBUS-TBNA. Fifty patients were diagnosed with IGL (Fig. 1). Of the 50 selected patients with IGL, four who were not subjected to TB-PCR were excluded from the analysis. Therefore, 46 patients with IGL were finally included in the present study.Fig. 1Study flow diagram. *Of the five patients with reactive hyperplasia, two were confirmed by subsequent mediastinoscopy, and a CT scan of the remaining three patients showed decreased or unchanged lymph node sizes. †All five patients with anthracotic lymph nodes were followed up for more than 6 months, and the lymph node size was decreased or unchanged on subsequent CT. ‡In six patients who were lost to follow-up, the results of EBUS-TBNA were insufficient specimens in three patients and reactive hyperplasia in the other patients. §Histological specimens were classified into five grades: I) epithelioid granulomatous reaction with caseation, II) epithelioid granulomatous reaction without caseation, III) nongranulomatous reaction with necrosis, IV) nonspecific, and V) inadequate sample. EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; IGL, intrathoracic granulomatous lymphadenopathy; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis\n\nStudy flow diagram. *Of the five patients with reactive hyperplasia, two were confirmed by subsequent mediastinoscopy, and a CT scan of the remaining three patients showed decreased or unchanged lymph node sizes. †All five patients with anthracotic lymph nodes were followed up for more than 6 months, and the lymph node size was decreased or unchanged on subsequent CT. 
Results

Study population
Using EBUS-TBNA, 82 lymph nodes were examined in 46 patients with IGL. All patients tested negative on a screening assay for antibodies to human immunodeficiency virus. Baseline characteristics are shown in Table 1. The mean age of the patients was 47.1 ± 17.1 years, and 28 (61 %) were female. The mean shortest diameter of the lymph nodes examined using EBUS-TBNA was 17.7 ± 5.0 mm (median, 17.9 mm) in patients with sarcoidosis and 22.6 ± 11.3 mm (median, 20.0 mm) in patients with tuberculous lymphadenitis.
Table 1. Baseline characteristics of the 46 patients with intrathoracic granulomatous lymphadenopathy (values are No. (%) or mean ± SD)
Age, years: 47.1 ± 17.1
Female gender: 28 (61)
Number of LN involved per patient (a): 7.0 ± 3.6
Number of LN examined per patient: 1.7 ± 0.7
Number of needle passes per LN: 3.7 ± 1.8
Number of tissue cores achieved per LN: 2.0 ± 1.3
Stations of LN punctured (b): 2R, 2/82 (2); 4R, 31/82 (38); 4L, 3/82 (4); 7, 33/82 (40); 10R, 1/82 (1); 11R, 9/82 (11); 11L, 3/82 (4)
Diameter of LN examined using EBUS-TBNA (c): shortest diameter, 19.4 ± 8.0 mm; longest diameter, 28.7 ± 11.4 mm
(a) An involved lymph node was defined as one with a shortest diameter ≥ 1 cm on a CT scan
(b) Based on the International Association for the Study of Lung Cancer (IASLC) lymph node map
(c) Measured on an axial CT scan
SD, standard deviation; LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration

Diagnosis of tuberculous lymphadenitis and sarcoidosis
Of the 46 patients with IGL, 16 had tuberculous lymphadenitis and 30 had sarcoidosis. Sputum acid-fast bacilli smear and M. tuberculosis culture were positive in 13 % (2/16) and 19 % (3/16) of patients with tuberculous lymphadenitis, respectively. On the basis of the M. tuberculosis culture results, three patients had a confirmed diagnosis of tuberculous lymphadenitis.
Based on the clinical and radiological responses after anti-tuberculosis treatment or on the histological results, the remaining 13 patients were diagnosed with tuberculous lymphadenitis. Of the 30 patients with sarcoidosis, 28 were diagnosed on the basis of compatible clinical, radiological, and laboratory results together with the demonstration of noncaseating granulomas in EBUS-TBNA samples. Mediastinoscopy was subsequently performed in one patient whose EBUS-TBNA samples showed anthracotic pigmentation with inflammation (Patient 1 in Table 2); the surgical specimen showed chronic granulomatous inflammation without caseous necrosis, and sarcoidosis was finally diagnosed on the basis of compatible radiological and laboratory results. Another patient, whose EBUS-TBNA samples were inadequate (grade V), refused mediastinoscopy; she was followed up for more than 12 months and clinically diagnosed with sarcoidosis based on radiological and laboratory results (Patient 6 in Table 2).

Table 2. Detailed characteristics of the seven patients with grade IV or V histological results (Patients 1–7)
Sex: Female | Male | Male | Female | Female | Female | Female
Age, years: 65 | 78 | 27 | 73 | 63 | 33 | 56
Number of LN involved per patient (a): 3 | 3 | 3 | 2 | 4 | 8 | 4
Number of LN examined per patient: 1 | 2 | 1 | 2 | 2 | 2 | 1
Number of needle passes per LN: 6 | 4 and 2 | 4 | 3 and 3 | 2 and 2 | 6 and 2 | 1
Number of tissue cores achieved per LN: 2 | 0 and 2 | 1 | 1 and 2 | 2 and 2 | 1 and 1 | 0
Stations of LN punctured: 7 | 7 and 11R | 4R | 4R and 10R | 2R and 4R | 4R and 7 | 4R
Shortest diameter of LN punctured, mm: 10.6 | 13.0 and 15.0 | 32.4 | 18.9 and 12.2 | 14.6 and 10.5 | 17.2 and 12.6 | 15.7
Acid-fast bacilli smear (EBUS-TBNA samples): Negative in all seven patients
Acid-fast bacilli culture (EBUS-TBNA samples): Negative in all seven patients
TB-PCR: Not detected | MTB detected | MTB detected | MTB detected | Not detected | Not detected | MTB detected
Histological grade: IV | IV | IV | IV | V | V | V
Mediastinoscopy: Performed | Not performed | Not performed | Not performed | Performed | Not performed | Not performed
Final diagnosis: Sarcoidosis | TBL | TBL | TBL | TBL | Sarcoidosis (b) | TBL
(a) Lymph nodes with a shortest diameter ≥ 10 mm on an axial CT scan
(b) Patient 6 was clinically diagnosed with sarcoidosis based on radiological and laboratory results after more than 12 months of follow-up
LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; TB-PCR, polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis; TBL, tuberculous lymphadenitis
Diagnostic performance of TB-PCR
Nested PCR and real-time PCR were performed in 29 and 17 patients, respectively. TB-PCR was negative in all patients with sarcoidosis. Of the 16 patients with tuberculous lymphadenitis, nine had positive TB-PCR results. The sensitivity, specificity, positive predictive value, and negative predictive value were 56 % (95 % confidence interval [CI], 29.9–81.2), 100 % (95 % CI, 88.3–100.0), 100 % (95 % CI, 66.2–100.0), and 81 % (95 % CI, 64.8–92.0), respectively. There was no difference in diagnostic performance between nested and real-time TB-PCR.
In addition, the overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85 % in patients with IGL.
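For readers who wish to reproduce the point estimates above, the counts implied by the text (nine true positives and seven false negatives among the 16 patients with tuberculous lymphadenitis; 30 true negatives and no false positives among the 30 patients with sarcoidosis) yield the reported values. The self-contained check below is ours; the 95 % confidence intervals are not recomputed here because the interval method was not stated.

```python
tp, fn = 9, 7    # TB-PCR positive / negative among tuberculous lymphadenitis patients
tn, fp = 30, 0   # TB-PCR negative / positive among sarcoidosis patients

sensitivity = tp / (tp + fn)                 # 9/16  = 0.5625 -> 56 %
specificity = tn / (tn + fp)                 # 30/30 = 1.000  -> 100 %
ppv = tp / (tp + fp)                         # 9/9   = 1.000  -> 100 %
npv = tn / (tn + fn)                         # 30/37 = 0.811  -> 81 %
accuracy = (tp + tn) / (tp + fp + tn + fn)   # 39/46 = 0.848  -> 85 %

print(f"Sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}, accuracy {accuracy:.0%}")
```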
Classification of histology
Histological results were grade I to III in 39 patients (85 %) and grade IV or V in seven patients (15 %) (Table 3). The four patients (9 %) with grade IV and three patients (7 %) with grade V findings, regarded as non-diagnostic, also had non-diagnostic microbiological results (negative acid-fast bacilli smear and culture). Two of these patients underwent mediastinoscopy and were eventually diagnosed with sarcoidosis and tuberculous lymphadenitis (Patients 1 and 5, respectively; Table 2). Four (57 %) of the seven patients with grade IV or V results had positive TB-PCR results, and all four achieved clinical and radiological improvement after anti-tuberculosis treatment (Patients 2, 3, 4, and 7 in Table 2).

Table 3. Results of histological examination versus TB-PCR results. Data are presented as the number of positive TB-PCR results/number of patients tested (%).
Grade I: sarcoidosis 0/0 (0); tuberculous lymphadenitis 5/8 (63)
Grade II: sarcoidosis 0/28 (0); tuberculous lymphadenitis 0/1 (0)
Grade III: sarcoidosis 0/0 (0); tuberculous lymphadenitis 0/2 (0)
Grade IV: sarcoidosis 0/1 (0); tuberculous lymphadenitis 3/3 (100)
Grade V: sarcoidosis 0/1 (0); tuberculous lymphadenitis 1/2 (50)
Total: sarcoidosis 0/30 (0); tuberculous lymphadenitis 9/16 (56)

Diagnostic yield of EBUS-TBNA
The overall diagnostic yield of the combined histological and microbiological data was 85 %. When the TB-PCR results were combined with the histological and microbiological data, the overall diagnostic yield increased to 94 %.
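The yield figures above are consistent with a simple reconstruction, assuming that the 39 patients with diagnostic (grade I–III) histology account for the 85 % yield and that the four TB-PCR-positive patients with grade IV–V histology are the additional diagnoses; this arithmetic is ours and is offered only as a plausibility check (43/46 is approximately 93.5 %, which the text reports as 94 %).

```python
total_igl = 46             # patients with IGL
diagnostic_histology = 39  # grade I-III histology (with microbiology), per Table 3
rescued_by_tb_pcr = 4      # grade IV-V patients with a positive TB-PCR (Table 2)

print(round(100 * diagnostic_histology / total_igl, 1))                        # 84.8 -> reported as 85 %
print(round(100 * (diagnostic_histology + rescued_by_tb_pcr) / total_igl, 1))  # 93.5 -> reported as 94 %
```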
Discussion
In addition to acid-fast bacilli smear and culture, TB-PCR of sputum specimens is traditionally recognized as a useful examination in the diagnosis of pulmonary tuberculosis. Similarly, TB-PCR of fine-needle aspiration samples from cervical, axillary, or inguinal lymph nodes has been reported to be a useful molecular test [16, 17]. In the present study, the sensitivity, specificity, positive predictive value, negative predictive value, and overall diagnostic accuracy of TB-PCR using EBUS-TBNA samples in patients with IGL were 56, 100, 100, 81, and 85 %, respectively. Additionally, when the diagnostic yield of combined histological and microbiological examination of EBUS-TBNA samples was compared with that of the three modalities combined (TB-PCR, histology, and microbiology), the yield increased from 85 to 94 % in patients with IGL. These results are in line with previous reports of TB-PCR applied to lymph nodes other than mediastinal IGLs.

The most important role of EBUS-TBNA in patients with mediastinal lymphadenopathy is to avoid mediastinoscopy under general anesthesia. A previous study of isolated mediastinal lymphadenopathy reported that EBUS-TBNA prevented 87 % of mediastinoscopies [7]. In the present study, seven patients had non-diagnostic EBUS-TBNA results (grade IV or V); TB-PCR was positive in four of them, and anti-tuberculosis treatment led to clinical and radiological improvement in all four. TB-PCR could therefore be effective and prevent mediastinoscopy when EBUS-TBNA results are non-diagnostic. Our findings suggest that TB-PCR using EBUS-TBNA samples could extend the role of EBUS-TBNA in avoiding mediastinoscopy.

In a previous study of tuberculous lymphadenitis by Navani et al. [8], the diagnostic yield of combined histological and microbiological tests using EBUS-TBNA samples was much higher than in the present study: the sensitivity was 94 %, and M. tuberculosis was cultured from 47 % of samples. Histological and microbiological tests using EBUS-TBNA samples therefore appear to be sufficient for the diagnosis of tuberculous lymphadenitis in many cases. However, the lymph nodes examined by EBUS-TBNA in that study were larger than those in the current study (median, 22.0 vs. 20.0 mm), so the bioburden of M. tuberculosis might have been considerably higher in the study by Navani et al. than in ours [8].

Previous studies have applied TB-PCR to EBUS-TBNA samples as follows. Geake et al.
reported that the sensitivity and specificity of TB-PCR using EBUS-TBNA samples were 38 and 100 %, respectively [18]. However, TB-PCR was performed in only 29 of 39 patients with tuberculous lymphadenitis, so selection bias might have influenced their results. Senturk et al. analyzed 30 patients diagnosed with tuberculous lymphadenitis (27 with EBUS-TBNA samples and three with mediastinoscopic samples) [19]; TB-PCR showed a sensitivity of 57 % and a specificity of 100 %. Because tissue specimens collected by mediastinoscopy are generally larger than those collected by EBUS-TBNA, the sensitivity and specificity reported by Senturk et al. cannot be applied directly to TB-PCR of EBUS-TBNA samples. Dhasmana et al. analyzed the performance of Xpert MTB/RIF in the diagnosis of mediastinal tuberculous lymphadenitis [20] and reported a higher sensitivity (73 %) than that found in our study (56 %). However, Dhasmana et al. performed Xpert MTB/RIF only in patients with culture-positive tuberculous lymphadenitis, whereas our study population included culture-negative cases with clinical suspicion as well as culture-positive tuberculous lymphadenitis. Because not all mediastinal tuberculous lymphadenitis yields a positive culture for M. tuberculosis, their results might have overestimated the diagnostic performance of Xpert MTB/RIF.

There are several limitations to this study. First, it was performed retrospectively; although the data were collected prospectively, TB-PCR was not performed in four of the 50 selected patients with IGL (Fig. 1), and we acknowledge that this potential for selection bias might have influenced our results. Second, the sample size was relatively small. Third, we could not use a single, uniform PCR method, and it remains unknown which TB-PCR method is superior for the detection of M. tuberculosis. These limitations need to be addressed in prospective trials with larger study populations and a uniform study protocol.

Conclusion
TB-PCR using EBUS-TBNA samples is a useful laboratory test for diagnosing IGL. Moreover, this method can prevent further invasive evaluation in patients whose histological and microbiological tests are non-diagnostic.
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusion" ]
[ "Endobronchial ultrasound-guided transbronchial needle aspiration", "Polymerase chain reaction", "Sarcoidosis", "Sensitivity and specificity", "Tuberculous lymphadenitis" ]
tuberculosis culture were positive in 13 % (2/16) and 19 % (3/16) of patients with tuberculous lymphadenitis, respectively. According to the results of the M. tuberculosis culture, three patients had a confirmative diagnosis of tuberculous lymphadenitis. Based on the clinical and radiological responses after anti-tuberculosis treatment or histological results, the remaining 13 patients were diagnosed with tuberculous lymphadenitis. Of 30 patients with sarcoidosis, 28 were diagnosed according to compatible clinical, radiological, and laboratory results with demonstration of noncaseating granulomas using EBUS-TBNA samples. Mediastinoscopy was subsequently performed in one patient whose EBUS-TBNA was shown to be anthracotic pigmentation with inflammation (Patient 1 in Table 2). The surgical specimen showed chronic granulomatous inflammation without caseous necrosis, and sarcoidosis was finally diagnosed with compatible radiological and laboratory results. Another patient whose EBUS-TBNA samples were shown to be inadequate (grade V) refused mediastinoscopy. Therefore, she was followed up for more than 12 months and clinically diagnosed with sarcoidosis based on radiological and laboratory results (Patient 6 in Table 2).Table 2Detailed characteristics of seven patients with grade IV or V for histological resultsCharacteristicsPatient number1234567SexFemaleMaleMaleFemaleFemaleFemaleFemaleAge, years65782773633356Number of LN involved per patienta 3332484Number of LN examined per patient1212221Number of needle pass per LN64 and 243 and 32 and 26 and 21Number of tissue core achieved per LN20 and 211 and 22 and 21 and 10Stations of LN punctured77 and 11R4R4R and 10R2R and 4R4R and 74RShortest diameter of LN punctured, mm10.613.0 and 15.032.418.9 and 12.214.6 and 10.517.2 and 12.615.7Results of EBUS-TBNA samples Acid-fast bacilli smearNegativeNegativeNegativeNegativeNegativeNegativeNegative Acid-fast bacilli cultureNegativeNegativeNegativeNegativeNegativeNegativeNegative TB-PCRNot detectedMTB detectedMTB detectedMTB detectedNot detectedNot detectedMTB detectedHistological gradeIVIVIVIVVVVMediastinoscopyPerformedNot performedNot performedNot performedPerformedNot performedNot performedFinal diagnosisSarcoidosisTBLTBLTBLTBLSarcoidosisb TBL aLymph nodes with the shortest diameter ≥ 10 mm on an axial CT scan bPatient 6 was clinically diagnosed with sarcoidosis based on radiological and laboratory results after more than 12 months of follow-upLN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; TB-PCR; polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis; TBL, tuberculous lymphadenitis Detailed characteristics of seven patients with grade IV or V for histological results aLymph nodes with the shortest diameter ≥ 10 mm on an axial CT scan bPatient 6 was clinically diagnosed with sarcoidosis based on radiological and laboratory results after more than 12 months of follow-up LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; TB-PCR; polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis; TBL, tuberculous lymphadenitis Of the 46 patients with IGL, 16 with tuberculous lymphadenitis and 30 with sarcoidosis were identified. Sputum acid-fast bacilli smear and M. tuberculosis culture were positive in 13 % (2/16) and 19 % (3/16) of patients with tuberculous lymphadenitis, respectively. According to the results of the M. 
tuberculosis culture, three patients had a confirmative diagnosis of tuberculous lymphadenitis. Based on the clinical and radiological responses after anti-tuberculosis treatment or histological results, the remaining 13 patients were diagnosed with tuberculous lymphadenitis. Of 30 patients with sarcoidosis, 28 were diagnosed according to compatible clinical, radiological, and laboratory results with demonstration of noncaseating granulomas using EBUS-TBNA samples. Mediastinoscopy was subsequently performed in one patient whose EBUS-TBNA was shown to be anthracotic pigmentation with inflammation (Patient 1 in Table 2). The surgical specimen showed chronic granulomatous inflammation without caseous necrosis, and sarcoidosis was finally diagnosed with compatible radiological and laboratory results. Another patient whose EBUS-TBNA samples were shown to be inadequate (grade V) refused mediastinoscopy. Therefore, she was followed up for more than 12 months and clinically diagnosed with sarcoidosis based on radiological and laboratory results (Patient 6 in Table 2).Table 2Detailed characteristics of seven patients with grade IV or V for histological resultsCharacteristicsPatient number1234567SexFemaleMaleMaleFemaleFemaleFemaleFemaleAge, years65782773633356Number of LN involved per patienta 3332484Number of LN examined per patient1212221Number of needle pass per LN64 and 243 and 32 and 26 and 21Number of tissue core achieved per LN20 and 211 and 22 and 21 and 10Stations of LN punctured77 and 11R4R4R and 10R2R and 4R4R and 74RShortest diameter of LN punctured, mm10.613.0 and 15.032.418.9 and 12.214.6 and 10.517.2 and 12.615.7Results of EBUS-TBNA samples Acid-fast bacilli smearNegativeNegativeNegativeNegativeNegativeNegativeNegative Acid-fast bacilli cultureNegativeNegativeNegativeNegativeNegativeNegativeNegative TB-PCRNot detectedMTB detectedMTB detectedMTB detectedNot detectedNot detectedMTB detectedHistological gradeIVIVIVIVVVVMediastinoscopyPerformedNot performedNot performedNot performedPerformedNot performedNot performedFinal diagnosisSarcoidosisTBLTBLTBLTBLSarcoidosisb TBL aLymph nodes with the shortest diameter ≥ 10 mm on an axial CT scan bPatient 6 was clinically diagnosed with sarcoidosis based on radiological and laboratory results after more than 12 months of follow-upLN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; TB-PCR; polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis; TBL, tuberculous lymphadenitis Detailed characteristics of seven patients with grade IV or V for histological results aLymph nodes with the shortest diameter ≥ 10 mm on an axial CT scan bPatient 6 was clinically diagnosed with sarcoidosis based on radiological and laboratory results after more than 12 months of follow-up LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; TB-PCR; polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis; TBL, tuberculous lymphadenitis Diagnostic performance of TB-PCR Nested PCR and real-time PCR were performed in 29 and 17 patients, respectively. TB-PCR was not detected in all patients with sarcoidosis. Of 16 patients with tuberculous lymphadenitis, nine showed positive TB-PCR results. The sensitivity, specificity, positive predictive value, and negative predictive value were 56 % (95 % CI [confidence interval], 29.9–81.2), 100 % (95 % CI, 88.3–100.0), 100 % (95 % CI, 66.2–100.0), and 81 % (95 % CI, 64.8–92.0), respectively. 
There was no difference in the diagnostic performance between nested and real-time TB-PCR. In addition, the overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85 % in patients with IGL. Nested PCR and real-time PCR were performed in 29 and 17 patients, respectively. TB-PCR was not detected in all patients with sarcoidosis. Of 16 patients with tuberculous lymphadenitis, nine showed positive TB-PCR results. The sensitivity, specificity, positive predictive value, and negative predictive value were 56 % (95 % CI [confidence interval], 29.9–81.2), 100 % (95 % CI, 88.3–100.0), 100 % (95 % CI, 66.2–100.0), and 81 % (95 % CI, 64.8–92.0), respectively. There was no difference in the diagnostic performance between nested and real-time TB-PCR. In addition, the overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85 % in patients with IGL. Classification of histology Histological results were grades I to III in 39 patients (85 %) and grades IV to V in seven patients (15 %) (Table 3). Four (9 %) and three (7 %) patients with grade IV and V disease, respectively, regarded as non-diagnostic, also showed non-diagnostic microbiological results of acid-fast bacilli smear and culture. Two patients who underwent mediastinoscopy were eventually diagnosed with sarcoidosis and tuberculous lymphadenitis (Patients 1 and 5, respectively, Table 2). Four (57 %) of seven patients with grades IV to V disease had positive TB-PCR results. All four patients achieved clinical and radiological improvement after anti-tuberculosis treatment (Patients 2, 3, 4, and 7 in Table 2).Table 3Results of histological examinationsGradeResults of polymerase chain reaction for Mycobacterium tuberculosis SarcoidosisTuberculous lymphadenitisI0/0 (0)5/8 (63)II0/28 (0)0/1 (0)III0/0 (0)0/2 (0)IV0/1 (0)3/3 (100)V0/1 (0)1/2 (50)Total0/30 (0)9/16 (56)Data are presented as the number of positive results/number of patients who received a test (%) Results of histological examinations Data are presented as the number of positive results/number of patients who received a test (%) Histological results were grades I to III in 39 patients (85 %) and grades IV to V in seven patients (15 %) (Table 3). Four (9 %) and three (7 %) patients with grade IV and V disease, respectively, regarded as non-diagnostic, also showed non-diagnostic microbiological results of acid-fast bacilli smear and culture. Two patients who underwent mediastinoscopy were eventually diagnosed with sarcoidosis and tuberculous lymphadenitis (Patients 1 and 5, respectively, Table 2). Four (57 %) of seven patients with grades IV to V disease had positive TB-PCR results. All four patients achieved clinical and radiological improvement after anti-tuberculosis treatment (Patients 2, 3, 4, and 7 in Table 2).Table 3Results of histological examinationsGradeResults of polymerase chain reaction for Mycobacterium tuberculosis SarcoidosisTuberculous lymphadenitisI0/0 (0)5/8 (63)II0/28 (0)0/1 (0)III0/0 (0)0/2 (0)IV0/1 (0)3/3 (100)V0/1 (0)1/2 (50)Total0/30 (0)9/16 (56)Data are presented as the number of positive results/number of patients who received a test (%) Results of histological examinations Data are presented as the number of positive results/number of patients who received a test (%) Diagnostic yield of EBUS-TBNA The overall diagnostic yield of the combined histological and microbiological data was 85 %. When the results of TB-PCR were combined with the histological and microbiological data, the overall diagnostic yield increased to 94 %. 
The overall diagnostic yield of the combined histological and microbiological data was 85 %. When the results of TB-PCR were combined with the histological and microbiological data, the overall diagnostic yield increased to 94 %. Study population: Using EBUS-TBNA, 82 lymph nodes were examined in 46 patients with IGL. All of the patients showed negative results using a screening assay for antibody to human immunodeficiency virus. Baseline characteristics are shown in Table 1. The mean age of the patients was 47.1 ± 17.1 years and there were 28 (61 %) female patients. The mean shortest diameter of lymph nodes that were examined using EBUS-TBNA was 17.7 ± 5.0 mm (median, 17.9 mm) in patients with sarcoidosis and 22.6 ± 11.3 mm (median, 20.0 mm) in patients with tuberculous lymphadenitis.Table 1Baseline characteristics of 46 patients with intrathoracic granulomatous lymphadenopathyVariablesNo. (%) or mean ± SDAge47.1 ± 17.1Female gender28 (61)Number of LN involved per patienta 7.0 ± 3.6Number of LN examined per patient1.7 ± 0.7Number of needle pass per LN3.7 ± 1.8Number of tissue core achieved per LN2.0 ± 1.3Stations of LN puncturedb  2R2/82 (2) 4R31/82 (38) 4 L3/82 (4) 733/82 (40) 10R1/82 (1) 11R9/82 (11) 11 L3/82 (4)Diameter of LN examined using EBUS-TBNAc  Shortest diameter, mm19.4 ± 8.0 Longest diameter, mm28.7 ± 11.4 aInvolved lymph node was defined as the shortest diameter ≥ 1 cm on a CT scan bBased on the International Association for the Study of Lung Cancer (IASLC) lymph node map cMeasured on an axial CT scanSD, standard deviation; LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration Baseline characteristics of 46 patients with intrathoracic granulomatous lymphadenopathy aInvolved lymph node was defined as the shortest diameter ≥ 1 cm on a CT scan bBased on the International Association for the Study of Lung Cancer (IASLC) lymph node map cMeasured on an axial CT scan SD, standard deviation; LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration Diagnosis of tuberculous lymphadenitis and sarcoidosis: Of the 46 patients with IGL, 16 with tuberculous lymphadenitis and 30 with sarcoidosis were identified. Sputum acid-fast bacilli smear and M. tuberculosis culture were positive in 13 % (2/16) and 19 % (3/16) of patients with tuberculous lymphadenitis, respectively. According to the results of the M. tuberculosis culture, three patients had a confirmative diagnosis of tuberculous lymphadenitis. Based on the clinical and radiological responses after anti-tuberculosis treatment or histological results, the remaining 13 patients were diagnosed with tuberculous lymphadenitis. Of 30 patients with sarcoidosis, 28 were diagnosed according to compatible clinical, radiological, and laboratory results with demonstration of noncaseating granulomas using EBUS-TBNA samples. Mediastinoscopy was subsequently performed in one patient whose EBUS-TBNA was shown to be anthracotic pigmentation with inflammation (Patient 1 in Table 2). The surgical specimen showed chronic granulomatous inflammation without caseous necrosis, and sarcoidosis was finally diagnosed with compatible radiological and laboratory results. Another patient whose EBUS-TBNA samples were shown to be inadequate (grade V) refused mediastinoscopy. 
Therefore, she was followed up for more than 12 months and clinically diagnosed with sarcoidosis based on radiological and laboratory results (Patient 6 in Table 2).Table 2Detailed characteristics of seven patients with grade IV or V for histological resultsCharacteristicsPatient number1234567SexFemaleMaleMaleFemaleFemaleFemaleFemaleAge, years65782773633356Number of LN involved per patienta 3332484Number of LN examined per patient1212221Number of needle pass per LN64 and 243 and 32 and 26 and 21Number of tissue core achieved per LN20 and 211 and 22 and 21 and 10Stations of LN punctured77 and 11R4R4R and 10R2R and 4R4R and 74RShortest diameter of LN punctured, mm10.613.0 and 15.032.418.9 and 12.214.6 and 10.517.2 and 12.615.7Results of EBUS-TBNA samples Acid-fast bacilli smearNegativeNegativeNegativeNegativeNegativeNegativeNegative Acid-fast bacilli cultureNegativeNegativeNegativeNegativeNegativeNegativeNegative TB-PCRNot detectedMTB detectedMTB detectedMTB detectedNot detectedNot detectedMTB detectedHistological gradeIVIVIVIVVVVMediastinoscopyPerformedNot performedNot performedNot performedPerformedNot performedNot performedFinal diagnosisSarcoidosisTBLTBLTBLTBLSarcoidosisb TBL aLymph nodes with the shortest diameter ≥ 10 mm on an axial CT scan bPatient 6 was clinically diagnosed with sarcoidosis based on radiological and laboratory results after more than 12 months of follow-upLN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; TB-PCR; polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis; TBL, tuberculous lymphadenitis Detailed characteristics of seven patients with grade IV or V for histological results aLymph nodes with the shortest diameter ≥ 10 mm on an axial CT scan bPatient 6 was clinically diagnosed with sarcoidosis based on radiological and laboratory results after more than 12 months of follow-up LN, lymph node; EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration; TB-PCR; polymerase chain reaction for Mycobacterium tuberculosis; MTB, Mycobacterium tuberculosis; TBL, tuberculous lymphadenitis Diagnostic performance of TB-PCR: Nested PCR and real-time PCR were performed in 29 and 17 patients, respectively. TB-PCR was not detected in all patients with sarcoidosis. Of 16 patients with tuberculous lymphadenitis, nine showed positive TB-PCR results. The sensitivity, specificity, positive predictive value, and negative predictive value were 56 % (95 % CI [confidence interval], 29.9–81.2), 100 % (95 % CI, 88.3–100.0), 100 % (95 % CI, 66.2–100.0), and 81 % (95 % CI, 64.8–92.0), respectively. There was no difference in the diagnostic performance between nested and real-time TB-PCR. In addition, the overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85 % in patients with IGL. Classification of histology: Histological results were grades I to III in 39 patients (85 %) and grades IV to V in seven patients (15 %) (Table 3). Four (9 %) and three (7 %) patients with grade IV and V disease, respectively, regarded as non-diagnostic, also showed non-diagnostic microbiological results of acid-fast bacilli smear and culture. Two patients who underwent mediastinoscopy were eventually diagnosed with sarcoidosis and tuberculous lymphadenitis (Patients 1 and 5, respectively, Table 2). Four (57 %) of seven patients with grades IV to V disease had positive TB-PCR results. 
All four patients achieved clinical and radiological improvement after anti-tuberculosis treatment (Patients 2, 3, 4, and 7 in Table 2).Table 3Results of histological examinationsGradeResults of polymerase chain reaction for Mycobacterium tuberculosis SarcoidosisTuberculous lymphadenitisI0/0 (0)5/8 (63)II0/28 (0)0/1 (0)III0/0 (0)0/2 (0)IV0/1 (0)3/3 (100)V0/1 (0)1/2 (50)Total0/30 (0)9/16 (56)Data are presented as the number of positive results/number of patients who received a test (%) Results of histological examinations Data are presented as the number of positive results/number of patients who received a test (%) Diagnostic yield of EBUS-TBNA: The overall diagnostic yield of the combined histological and microbiological data was 85 %. When the results of TB-PCR were combined with the histological and microbiological data, the overall diagnostic yield increased to 94 %. Discussion: In addition to acid-fast bacilli smear and culture, TB-PCR using sputum specimens is traditionally recognized as a useful examination in the diagnosis of pulmonary tuberculosis. Similar to pulmonary tuberculosis, TB-PCR using fine needle aspiration samples of cervical, axillary, or inguinal lymph nodes has also been reported as a useful molecular test [16, 17]. In the present study, we found that the sensitivity, specificity, positive predictive value, negative predictive value, and overall diagnostic accuracy of TB-PCR using EBUS-TBNA samples in patients with IGL were 56, 100, 100, 81, and 85 %, respectively. Additionally, when we compared the diagnostic yield of combined histological and microbiological examinations using EBUS-TBNA samples with a combination of the three modalities of TB-PCR, histology, and microbiology, the diagnostic yield increased from 85 to 94 % in patients with IGL. The results of the current study are in line with those of previous reports regarding TB-PCR with lymph nodes other than mediastinal IGLs. The most important role of EBUS-TBNA in patients with mediastinal lymphadenopathy is to prevent mediastinoscopy with general anesthesia. A previous study of isolated mediastinal lymphadenopathy reported that EBUS-TBNA prevented 87 % of mediastinoscopies [7]. In the present study, seven patients had non-diagnostic EBUS-TBNA results (grade IV or V). Of them, TB-PCR was detected in four patients. In these four patients, anti-tuberculosis treatment led to clinical and radiological improvement. Therefore, the TB-PCR test could be effective and prevent mediastinoscopy when EBUS-TBNA results are non-diagnostic. Our findings suggest that TB-PCR using EBUS-TBNA samples could extend the role of EBUS-TBNA to prevent mediastinoscopy. In a previous study of tuberculous lymphadenitis by Navani et al. [8], the diagnostic yield of combined histological and microbiological tests using EBUS-TBNA samples was much higher than that in the present study; the sensitivity was 94 %, and M tuberculosis was cultured in 47 % of samples. Histological and microbiological tests using EBUS-TBNA samples appear to be sufficient for the diagnosis of tuberculous lymphadenitis. However, the lymph node size examined by EBUS-TBNA in previous studies was larger than that in the current study (median, 22.0 vs. 20.0 mm, respectively). The bioburden of M. tuberculosis in tuberculous lymphadenitis might have been much higher in the previous study by Navani et al. compared with the current study [8]. Previous studies applied TB-PCR using EBUS-TBNA samples as follows. Geake et al. 
reported that the sensitivity and specificity of TB-PCR using EBUS-TBNA samples were 38 and 100 %, respectively [18]. However, TB-PCR was performed in 29 of 39 patients with tuberculous lymphadenitis, and thus selection bias might have influenced their results. Senturk et al. analyzed 30 patients who were diagnosed with tuberculous lymphadenitis (27 patients with EBUS-TBNA samples and three with mediastinoscopic samples) [19]. Analysis of TB-PCR was performed with a sensitivity of 57 % and specificity of 100 %. Generally, tissue specimens collected by mediastinoscopy are larger than those collected by EBUS-TBNA. Therefore, the sensitivity and specificity of TB-PCR presented by Senturk et al. could not be used to interpret the results of TB-PCR using EBUS-TBNA samples. Dhasmana et al. analyzed the performance of Xpert MTB/RIF in the diagnosis of mediastinal tuberculous lymphadenitis [20]. They showed that Xpert MTB/RIF had a higher sensitivity (73 %) than that found in our study (56 %). Dhasmana et al. performed Xpert MTB/RIF in patients with culture-positive tuberculous lymphadenitis. However, our study population consisted of culture-negative cases with clinical suspicion, as well as culture-positive tuberculous lymphadenitis. Not all mediastinal tuberculous lymphadenitis samples would have shown a positive culture for M. tuberculosis. Therefore, Dhasmana et al.’s results might have overestimated the diagnostic performance of Xpert MTB/RIF. There are several limitations in this study. First, the present study was performed retrospectively. Although the data were collected prospectively, TB-PCR was not performed in four of the 50 selected patients with IGL (Fig. 1). We acknowledge that there was potential for selection bias, which might have influenced our results. Second, our study included a relatively small sample size. Third, we could not unify the PCR methods. To date, which method of TB-PCR is superior for detection of M. tuberculosis remains unknown. These limitations need to be verified in prospective trials with larger study populations and a uniform study protocol. Conclusions: TB-PCR using EBUS-TBNA samples is a useful laboratory test for diagnosing IGL. Moreover, this method can prevent further invasive evaluation in patients whose histological and microbiological tests are non-diagnostic.
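As a quick consistency check (this sketch is not part of the original article), the diagnostic metrics reported above can be recomputed from the 2×2 table implied by the Results: 9 of 16 tuberculous lymphadenitis cases were TB-PCR positive and all 30 sarcoidosis cases were TB-PCR negative.

```python
# Minimal sketch (not from the article): recompute the diagnostic metrics of
# TB-PCR for tuberculous lymphadenitis from the counts reported in the text.
# True positives = 9, false negatives = 7, false positives = 0, true negatives = 30.
tp, fn, fp, tn = 9, 7, 0, 30

sensitivity = tp / (tp + fn)                 # 9/16  = 0.5625 -> ~56 %
specificity = tn / (tn + fp)                 # 30/30 = 1.0    -> 100 %
ppv = tp / (tp + fp)                         # 9/9   = 1.0    -> 100 %
npv = tn / (tn + fn)                         # 30/37 = 0.811  -> ~81 %
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 39/46 = 0.848  -> ~85 %

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"PPV={ppv:.1%}, NPV={npv:.1%}, accuracy={accuracy:.1%}")
```

The recomputed values round to the 56 %, 100 %, 100 %, 81 %, and 85 % reported in the article; the exact confidence intervals quoted in the text are not reproduced here.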
Background: Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is widely used to perform mediastinal lymph node sampling. However, little information is available on polymerase chain reaction for Mycobacterium tuberculosis (TB-PCR) using EBUS-TBNA samples in patients with intrathoracic granulomatous lymphadenopathy (IGL). Methods: A retrospective study using a prospectively collected database was performed from January 2010 to December 2014 to evaluate the efficacy of the TB-PCR test using EBUS-TBNA samples in patients with IGL. During the study period, 87 consecutive patients with isolated intrathoracic lymphadenopathy who received EBUS-TBNA were registered and 46 patients with IGL were included. Results: Of the 46 patients with IGL, tuberculous lymphadenitis and sarcoidosis were diagnosed in 16 and 30 patients, respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of TB-PCR for tuberculous lymphadenitis were 56, 100, 100, and 81%, respectively. The overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85%. In addition, seven (17%) patients had non-diagnostic results from a histological examination and all of them had non-diagnostic microbiological results of an acid-fast bacilli smear and culture. Four (57%) of the seven patients with non-diagnostic results had positive TB-PCR results, and anti-tuberculosis treatment led to clinical and radiological improvement in all of the patients. Conclusions: TB-PCR using EBUS-TBNA samples is a useful laboratory test for diagnosing IGL. Moreover, this technique can prevent further invasive evaluation in patients whose histological and microbiological tests are non-diagnostic.
Background: Intrathoracic granulomatous lymphadenopathies (IGLs), such as tuberculous lymphadenitis and sarcoidosis, are frequently encountered by respiratory physicians, and their diagnosis is based on histological and microbiological tests [1–3]. Conventional transbronchial needle aspiration or mediastinoscopy has traditionally been used to perform lymph node biopsy for histological examinations, as well as for stains and cultures for acid-fast organisms [4–6]. With the recent advent of endobronchial ultrasound, endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has been widely used to perform mediastinal lymph node biopsy or aspiration. There is increasing evidence regarding EBUS-TBNA as the first examination of choice in the evaluation of patients with IGL [7–9]. Most IGLs comprise tuberculous lymphadenitis and sarcoidosis. Occasionally, differentiating between these IGLs using a histological examination alone is difficult. Moreover, the diagnostic yield of acid-fast bacilli culture is still unsatisfactory, although such culture is considered the gold standard diagnostic method for tuberculosis. In addition to histological and microbiological tests, polymerase chain reaction for Mycobacterium tuberculosis (TB-PCR) is recognized as a useful test in the differential diagnosis of tuberculous lymphadenitis and sarcoidosis [10]. However, little information is available on TB-PCR using EBUS-TBNA samples. Therefore, we conducted this study to examine the diagnostic performance of TB-PCR using EBUS-TBNA samples in patients with IGL. We analyzed a prospectively collected database in South Korea, where the incidence of tuberculosis is intermediate (97/100,000 per year) [11]. Conclusions: TB-PCR using EBUS-TBNA samples is a useful laboratory test for diagnosing IGL. Moreover, this method can prevent further invasive evaluation in patients whose histological and microbiological tests are non-diagnostic.
Background: Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is widely used to perform mediastinal lymph node sampling. However, little information is available on polymerase chain reaction for Mycobacterium tuberculosis (TB-PCR) using EBUS-TBNA samples in patients with intrathoracic granulomatous lymphadenopathy (IGL). Methods: A retrospective study using a prospectively collected database was performed from January 2010 to December 2014 to evaluate the efficacy of the TB-PCR test using EBUS-TBNA samples in patients with IGL. During the study period, 87 consecutive patients with isolated intrathoracic lymphadenopathy who received EBUS-TBNA were registered and 46 patients with IGL were included. Results: Of the 46 patients with IGL, tuberculous lymphadenitis and sarcoidosis were diagnosed in 16 and 30 patients, respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of TB-PCR for tuberculous lymphadenitis were 56, 100, 100, and 81%, respectively. The overall diagnostic accuracy of TB-PCR for tuberculous lymphadenitis was 85%. In addition, seven (17%) patients had non-diagnostic results from a histological examination and all of them had non-diagnostic microbiological results of an acid-fast bacilli smear and culture. Four (57%) of the seven patients with non-diagnostic results had positive TB-PCR results, and anti-tuberculosis treatment led to clinical and radiological improvement in all of the patients. Conclusions: TB-PCR using EBUS-TBNA samples is a useful laboratory test for diagnosing IGL. Moreover, this technique can prevent further invasive evaluation in patients whose histological and microbiological tests are non-diagnostic.
10,064
316
[ 516, 241, 429, 153, 70, 130, 397, 517, 154, 237, 42 ]
16
[ "patients", "pcr", "ebus", "tbna", "ebus tbna", "tb", "results", "lymph", "tuberculosis", "tb pcr" ]
[ "igls tuberculous lymphadenitis", "lymphadenitis mediastinal tuberculous", "lymphadenopathies igls tuberculous", "tuberculous lymphadenitis samples", "sarcoidosis tuberculous lymphadenitis" ]
null
[CONTENT] Endobronchial ultrasound-guided transbronchial needle aspiration | Polymerase chain reaction | Sarcoidosis | Sensitivity and specificity | Tuberculous lymphadenitis [SUMMARY]
null
[CONTENT] Endobronchial ultrasound-guided transbronchial needle aspiration | Polymerase chain reaction | Sarcoidosis | Sensitivity and specificity | Tuberculous lymphadenitis [SUMMARY]
[CONTENT] Endobronchial ultrasound-guided transbronchial needle aspiration | Polymerase chain reaction | Sarcoidosis | Sensitivity and specificity | Tuberculous lymphadenitis [SUMMARY]
[CONTENT] Endobronchial ultrasound-guided transbronchial needle aspiration | Polymerase chain reaction | Sarcoidosis | Sensitivity and specificity | Tuberculous lymphadenitis [SUMMARY]
[CONTENT] Endobronchial ultrasound-guided transbronchial needle aspiration | Polymerase chain reaction | Sarcoidosis | Sensitivity and specificity | Tuberculous lymphadenitis [SUMMARY]
[CONTENT] Adult | Endoscopic Ultrasound-Guided Fine Needle Aspiration | Female | Humans | Lymph Nodes | Male | Middle Aged | Mycobacterium tuberculosis | Polymerase Chain Reaction | Predictive Value of Tests | Retrospective Studies | Sarcoidosis | Sensitivity and Specificity | Tuberculosis, Lymph Node [SUMMARY]
null
[CONTENT] Adult | Endoscopic Ultrasound-Guided Fine Needle Aspiration | Female | Humans | Lymph Nodes | Male | Middle Aged | Mycobacterium tuberculosis | Polymerase Chain Reaction | Predictive Value of Tests | Retrospective Studies | Sarcoidosis | Sensitivity and Specificity | Tuberculosis, Lymph Node [SUMMARY]
[CONTENT] Adult | Endoscopic Ultrasound-Guided Fine Needle Aspiration | Female | Humans | Lymph Nodes | Male | Middle Aged | Mycobacterium tuberculosis | Polymerase Chain Reaction | Predictive Value of Tests | Retrospective Studies | Sarcoidosis | Sensitivity and Specificity | Tuberculosis, Lymph Node [SUMMARY]
[CONTENT] Adult | Endoscopic Ultrasound-Guided Fine Needle Aspiration | Female | Humans | Lymph Nodes | Male | Middle Aged | Mycobacterium tuberculosis | Polymerase Chain Reaction | Predictive Value of Tests | Retrospective Studies | Sarcoidosis | Sensitivity and Specificity | Tuberculosis, Lymph Node [SUMMARY]
[CONTENT] Adult | Endoscopic Ultrasound-Guided Fine Needle Aspiration | Female | Humans | Lymph Nodes | Male | Middle Aged | Mycobacterium tuberculosis | Polymerase Chain Reaction | Predictive Value of Tests | Retrospective Studies | Sarcoidosis | Sensitivity and Specificity | Tuberculosis, Lymph Node [SUMMARY]
[CONTENT] igls tuberculous lymphadenitis | lymphadenitis mediastinal tuberculous | lymphadenopathies igls tuberculous | tuberculous lymphadenitis samples | sarcoidosis tuberculous lymphadenitis [SUMMARY]
null
[CONTENT] igls tuberculous lymphadenitis | lymphadenitis mediastinal tuberculous | lymphadenopathies igls tuberculous | tuberculous lymphadenitis samples | sarcoidosis tuberculous lymphadenitis [SUMMARY]
[CONTENT] igls tuberculous lymphadenitis | lymphadenitis mediastinal tuberculous | lymphadenopathies igls tuberculous | tuberculous lymphadenitis samples | sarcoidosis tuberculous lymphadenitis [SUMMARY]
[CONTENT] igls tuberculous lymphadenitis | lymphadenitis mediastinal tuberculous | lymphadenopathies igls tuberculous | tuberculous lymphadenitis samples | sarcoidosis tuberculous lymphadenitis [SUMMARY]
[CONTENT] igls tuberculous lymphadenitis | lymphadenitis mediastinal tuberculous | lymphadenopathies igls tuberculous | tuberculous lymphadenitis samples | sarcoidosis tuberculous lymphadenitis [SUMMARY]
[CONTENT] patients | pcr | ebus | tbna | ebus tbna | tb | results | lymph | tuberculosis | tb pcr [SUMMARY]
null
[CONTENT] patients | pcr | ebus | tbna | ebus tbna | tb | results | lymph | tuberculosis | tb pcr [SUMMARY]
[CONTENT] patients | pcr | ebus | tbna | ebus tbna | tb | results | lymph | tuberculosis | tb pcr [SUMMARY]
[CONTENT] patients | pcr | ebus | tbna | ebus tbna | tb | results | lymph | tuberculosis | tb pcr [SUMMARY]
[CONTENT] patients | pcr | ebus | tbna | ebus tbna | tb | results | lymph | tuberculosis | tb pcr [SUMMARY]
[CONTENT] igls | lymphadenitis sarcoidosis | tuberculous lymphadenitis sarcoidosis | biopsy | node biopsy | lymph node biopsy | perform | ebus | tbna | ebus tbna [SUMMARY]
null
[CONTENT] patients | ln | results | diameter | 82 | table | sarcoidosis | ebus | tuberculous | tuberculous lymphadenitis [SUMMARY]
[CONTENT] diagnosing | invasive | patients histological microbiological | test diagnosing | useful laboratory | useful laboratory test | method prevent invasive evaluation | method prevent invasive | method prevent | useful laboratory test diagnosing [SUMMARY]
[CONTENT] patients | pcr | ebus | tbna | ebus tbna | tb pcr | tb | tuberculosis | results | histological [SUMMARY]
[CONTENT] patients | pcr | ebus | tbna | ebus tbna | tb pcr | tb | tuberculosis | results | histological [SUMMARY]
[CONTENT] ||| Mycobacterium | IGL [SUMMARY]
null
[CONTENT] 46 | IGL | 16 | 30 ||| 56 | 100 | 100 | 81% ||| TB-PCR | 85% ||| seven | 17% | bacilli ||| Four | 57% | seven [SUMMARY]
[CONTENT] IGL ||| [SUMMARY]
[CONTENT] ||| Mycobacterium | IGL ||| January 2010 to December 2014 | TB | IGL ||| 87 | 46 | IGL ||| ||| 46 | IGL | 16 | 30 ||| 56 | 100 | 100 | 81% ||| TB-PCR | 85% ||| seven | 17% | bacilli ||| Four | 57% | seven ||| IGL ||| [SUMMARY]
[CONTENT] ||| Mycobacterium | IGL ||| January 2010 to December 2014 | TB | IGL ||| 87 | 46 | IGL ||| ||| 46 | IGL | 16 | 30 ||| 56 | 100 | 100 | 81% ||| TB-PCR | 85% ||| seven | 17% | bacilli ||| Four | 57% | seven ||| IGL ||| [SUMMARY]
Benign Histopathologic Findings of Endometrium among Perimenopausal Women presenting with Abnormal Uterine Bleeding: A Descriptive Cross-sectional Study.
35199744
Abnormal uterine bleeding is the most common presenting complaint in the perimenopausal age group. Endometrial biopsy obtained by dilatation and curettage is the preferred modality of investigation to determine the causative pathology of abnormal uterine bleeding. The objective of this study was to find out the prevalence of the benign histopathological findings in perimenopausal women presenting with abnormal uterine bleeding.
INTRODUCTION
This descriptive cross-sectional study was conducted among patients between 1st June 2020 and 30th September 2021. Ethical approval was taken from the Institutional Review Committee of Kathmandu Medical College (reference number: 305202002). Using the convenience sampling method, 96 cases of endometrial biopsies were studied under light microscopy. Data were analyzed using the Statistical Package for the Social Sciences version 23.0. Point estimate at 95% Confidence Interval was calculated along with frequency and proportion for binary data.
METHODS
Among the 96 specimens, the prevalence of benign findings was 93 (96.9%) (93-100 at 95% Confidence Interval). Among them, the commonest benign histopathologic spectrum was hormonal imbalance pattern in 40 (41.7%) followed by normal menstrual pattern 35 (36.5%). Five (5.2%) cases showed chronic endometritis. Six (6.2%) cases of endometrial hyperplasia without atypia were identified. Three (3.1%) cases showed endometrial atrophy. Four (4.1%) cases showed endometrial polyp.
RESULTS
The prevalence of benign histopathological findings among endometrial biopsies in the study was similar to other studies.
CONCLUSIONS
[ "Cross-Sectional Studies", "Endometrial Hyperplasia", "Endometrium", "Female", "Humans", "Perimenopause", "Uterine Hemorrhage" ]
9124327
INTRODUCTION
Abnormal uterine bleeding (AUB) is any type of menstrual bleeding that does not fall within the normal ranges for amount, frequency, duration or cyclicity.1 It is the most common presenting complaint in the gynecology outpatient department, especially in the perimenopausal age group.2 Perimenopause comprises the transitional years prior to menopause, during which abnormal bleeding may arise from variation in the normal cyclical pattern as a result of physiological hormonal changes. AUB with abnormal blood loss has a profound effect on the quality of the woman's life. However, most cases of AUB in the perimenopausal age group are of a benign nature and do not require any invasive surgery. Endometrial biopsy obtained by dilatation and curettage is the preferred modality of investigation to determine the causative pathology of AUB.3 Histopathologic changes in the endometrium may vary from physiological findings to overt pathologic features in patients with AUB across different populations and age groups. So far, limited data are available regarding the endometrial patterns seen in perimenopausal women in our population. The objective of this study was to find out the prevalence of benign histopathological findings in perimenopausal women presenting with AUB.
METHODS
This was a descriptive cross-sectional study conducted among patients from 1st June 2020 to 30th September 2021. The institutional review committee of Kathmandu Medical College and Teaching Hospital provided the ethical approval (reference number: 305202002). Using a convenience sampling technique, endometrial biopsy specimens submitted to the Department of Pathology for histopathologic evaluation during the study period were included in the study. The sample size was calculated as
n = Z² × p × q / e² = (1.96)² × 0.5 × (1 - 0.5) / (0.1)² = 96
where n = required sample size, Z = 1.96 at 95% Confidence Interval (CI), p = prevalence taken as 50% for the maximum sample size, q = 1 - p, and e = margin of error (10%). Therefore, the calculated sample size was 96. Hence, 96 endometrial samples were collected. The endometrial samples were fixed in 10% formalin for 12-24 hours and the entire tissue was submitted for routine processing. Sections of 5 μm thickness taken from paraffin blocks were stained with Haematoxylin and Eosin (H&E) and studied under light microscopy by a pathologist. Patients with cervical-vaginal pathology and systemic causes of abnormal uterine bleeding were excluded from the study. Relevant demographic data were obtained from the requisition form provided with the specimen. Data were entered and analyzed using the Statistical Package for the Social Sciences version 23. Point estimate at 95% CI was calculated along with frequency and proportion for binary data.
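The sample-size arithmetic above can be restated in a couple of lines; the following Python sketch (not part of the original article) simply plugs the stated values into n = Z²pq/e².

```python
# Sketch (not from the article): sample size for estimating a proportion,
# n = Z^2 * p * q / e^2, with Z = 1.96 (95% confidence), p = 0.5 (assumed
# prevalence giving the maximum sample size), q = 1 - p and e = 0.10.
z, p, e = 1.96, 0.5, 0.10
q = 1 - p
n = (z ** 2) * p * q / (e ** 2)
print(round(n, 2))  # 96.04, reported as 96 in the study
```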
RESULTS
In the present study, out of the 96 cases of endometrial biopsies included, the prevalence of benign histopathological findings was 93 (96.9%) (93-100 at 95% CI). Only three (3.1%) cases showed histopathologic evidence of endometrial malignancy. Majority of patients with AUB had a hormone imbalance pattern on endometrial biopsy 40 (41.7%). Endometrial atrophy was noted in three (3.1%) cases. Six (6.2%) cases showed endometrial hyperplasia without atypia. Five (5.2%) cases showed features of endometritis. In four (4.1%) cases AUB was due to endometrial polyp (Table 1). Histopathologic changes favoring hormone imbalance included disordered proliferative endometrium 32 (80%), non-secretory endometrium with endometrial and stromal breakdown in 3 (7.5%) and pill effect in 5 (12.5%) cases. Most of the women presenting with AUB fell into the 40-44-year age group, 38 (39.58%), followed by 45-49 year age group, 32 (33.33%) and 50-54 year age group, 26 (27.08%) (Figure 1). Menorrhagia, in 35 (36.4%) cases, was the most common presenting symptom (Table 2). Despite having AUB, a significant number of cases showed normal cyclical endometrium where 24 (25%) showed proliferative endometrium and 11 (11.5%) showed secretory endometrium.
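For illustration (not part of the original article), the 93 (96.9%) prevalence and the 93-100 interval quoted in the abstract can be reproduced with a normal-approximation (Wald) 95% confidence interval truncated at 100%; that the authors used exactly this interval is an assumption.

```python
from math import sqrt

# Sketch (not from the article): point estimate and an approximate 95% CI for
# the prevalence of benign findings, 93 of 96 biopsies, using a Wald interval
# capped at 100%.
k, n, z = 93, 96, 1.96
p_hat = k / n                                   # 0.969 -> 96.9 %
half_width = z * sqrt(p_hat * (1 - p_hat) / n)  # ~0.035
lower, upper = p_hat - half_width, min(p_hat + half_width, 1.0)
print(f"{p_hat:.1%} (95% CI {lower:.0%} to {upper:.0%})")  # 96.9% (93% to 100%)
```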
CONCLUSIONS
The prevalence of benign histopathological findings among endometrial biopsies in the study was similar to other studies. Hormonal imbalance pattern is the predominant benign histopathologic pattern seen in perimenopausal women presenting with AUB. Histopathologic findings in an endometrial biopsy help gynaecologists to decide appropriate management strategies.
[]
[]
[]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION", "CONCLUSIONS" ]
[ "Abnormal uterine bleeding (AUB) is any type of menstrual bleeding that does not fall within the normal ranges for amount, frequency, duration or cyclicity.1 It is the most common presenting complaint in gynecology outpatient department, especially in perimenopausal age group.2 Perimenopause is transitional years prior to menopause which may be due to variation in normal cyclical pattern as a result of physiological hormonal changes.AUB with abnormal blood loss has a profound effect on the quality of the woman's life.\nHowever, most of the cases of AUB in perimenopausal age are of benign nature without requiring any invasive surgeries. Endometrial biopsy obtained by dilatation and curettage is the preferred modality of investigation to determine the causative pathology of AUB.3 Histopathologic changes on endometrium may vary from physiological findings to overt pathologic features in patients with AUB in different population and age group. So far limited data is available regarding the endometrial patterns seen in perimenopausal women in our population.\nThe objectives of this study were to find out the prevalence of benign histopathological findings in perimenopausal women presenting with AUB.", "This was a descriptive cross-sectional study conducted among patients 1st June 2020 to 30th September 2021. The institutional review committee of Kathmandu Medical College and Teaching Hospital provided the ethical approval (reference number:305202002). Using convenience sampling technique, endometrial biopsy specimen submitted to the Department of Pathology for histopathologic evaluation during the study period were included in the study. The sample size was calculated as,\nn = Z2 × p × q / e2\n  = (1.96)2 × (0.5) × (1-0.5) / (0.1)2\n  = 96\nwhere,\nn= required sample size,\nZ= 1.96 at 95% Confidence Interval (CI),\np= prevalence taken as 50% for maximum sample size,\nq= 1-p\ne= margin of error, 10%\nTherefore, the calculated sample size was 96. Hence, 96 endometrial samples were collected. The endometrial samples were fixed in 10%formalin for 12-24 hours and the entire tissue was submitted for routine processing. 5|j thickness sections taken from paraffin blocks were stained with Haematoxylin and Eosin (H&E) and studied under light microscopy by a pathologist. Patients with cervical-vaginal pathology and systemic causes of abnormal uterine bleeding were excluded from the study. Relevant demographic data were obtained from requisition form provided with the specimen.\nData were entered and analyzed using the Statistical Package for the Social Sciences version 23. Point estimate at 95% Cl was calculated along with frequency and proportion for binary data.", "In the present study, out of the 96 cases of endometrial biopsies included, the prevalence of benign histopathological findings was 93 (96.9%) (93-100 at 95% CI). Only three (3.1%) cases showed histopathologic evidence of endometrial malignancy. Majority of patients with AUB had a hormone imbalance pattern on endometrial biopsy 40 (41.7%). Endometrial atrophy was noted in three (3.1%) cases. Six (6.2%) cases showed endometrial hyperplasia without atypia. Five (5.2%) cases showed features of endometritis. 
In four (4.1%) cases AUB was due to endometrial polyp (Table 1).\nHistopathologic changes favoring hormone imbalance included disordered proliferative endometrium 32 (80%), non-secretory endometrium with endometrial and stromal breakdown in 3 (7.5%) and pill effect in 5 (12.5%) cases.\nMost of the women presenting with AUB fell into the 40-44-year age group, 38 (39.58%), followed by 45-49 year age group, 32 (33.33%) and 50-54 year age group, 26 (27.08%) (Figure 1).\nMenorrhagia, in 35 (36.4%) cases, was the most common presenting symptom (Table 2).\nDespite having AUB, a significant number of cases showed normal cyclical endometrium where 24 (25%) showed proliferative endometrium and 11 (11.5%) showed secretory endometrium.", "The endometrium undergoes a complex regular cycle of periodic proliferation, differentiation, breakdown and regeneration which forms the basis of normal menstrual cycle; any deviation from it would result in abnormal uterine bleeding. Clinically AUB manifests as menorrhagia, polymenorrhea, polymenorrhagia, metrorrhagia, menometrorrhagia, intermenstrual bleeding, etc.4,5 A new classification for the causes of abnormal uterine bleeding is based on the acronym PALM-COEIN - Polyps, Adenomyosis, Leiomyoma, Malignancy, Hyperplasia, Coagulopathy, Ovulatory disorders, Endometrial causes, Iatrogenic and Not classified, was developed by the International Federation of Gynaecology and Obstetrics in November 2010.6,7,8 When unscheduled bleeding occurs, especially if it is heavy or prolonged, further investigations such as trans-vaginal ultrasound and, hysteroscopy with biopsy if indicated, are needed.9 The accuracy of endometrial biopsy for the detection of endometrial abnormalities has been reported to be as high as 96%.10 Diagnosis and the management strategies of AUB is not complete without evaluation of histopathological characteristics of endometrial biopsy.11 Endometrial biopsy tissue obtained by a simple procedure of D and C (Dilation and curettage) is an effective and safe diagnostic step in evaluation of AUB and for the diagnosis of endometrial pathologies.12,13\nAUB is common in the perimenopausal age group. In a study conducted by Chapagain in a tertiary hospital in Nepal, majority of cases of AUB seen in perimenopausal age group was between 40-44 years (45.5%).14 We also observed that the common age of AUB was noted in the 40-44 years age group (39.58%) followed by 45-49 years age group (33.33%) and 50-54 years age group (27.08%) which is similar to the study done by Valson and Bhatiayani, et al. Probable reason of increased incidence of AUB in this age group (40-50 years) is that the women in this age group are in their climacteric period. As women approach menopause, cycles shorten and become intermittently anovulatory due to the decline in the level of ovarian follicles and estradiol level.15,16\nMenorrhagia (36.4%) was the most common presentation followed by polymenorrhea (26%) in our study. In the study conducted by Chapagain, menorrhagia was the commonest presenting complaint (40.3%) followed by menometrorrhagia (23.4%).14 which was similar to the observation made by Chaudhary, Razzaq, et al. and Shrestha in their studies which were conducted in a population of similar race, socio-economic status and environment as our study.17,18,19 Study conducted by Abid, et al. 
however found polymenorrhea as the commonest presenting complaint.20\nVarious studies have shown benign histopatholgical changes in patients presenting with AUB.14-22 In our study the commonest histological pattern in perimenopausal women was hormonal imbalance pattern (41.7%) followed by normal cyclical pattern (proliferative and secretory pattern combined, 36.5%). Among the cases showing hormone imbalance patterns, histomorphologic features showed predominantly disordered proliferative endometrium (32/40 cases), glandular and stromal breakdown (3/40 cases) and pill effect (5/40 cases). Abid, et al. also reported hormonal imbalance pattern was the commonest in perimenopausal age group (54%). This correlates with the fact that perimenopausal age is transition from normal ovulation to anovulation.20\nThirty-five women with AUB showed normal cyclical endometrium which was the second common pattern in our study. Out of these 36 cases, 24 (25%) showed proliferative endometrium and 11 (11.5%) revealed secretory phase endometrium. This is in contrast to the studies done by Das et al, Razzaq et al, Bhatiyani and Singh, et al. who reported normal cyclical pattern to be the commonest pattern of endometrium. Their studies included endometrial biopsy in a wide age range while our study took into account only cases which fell into the perimenopausal age group (40-54 years).16,18,21,22\nEndometrial pathologies such as endometrial hyperplasia, polyps, submucous leiomyoma and endometrial carcinoma should be suspected and evaluation of endometrium is necessary in AUB in perimenopausal women.23 We observed endometrial hyperplasia without atypia in six cases (6.2%). Three of these cases were present in the 45-49 year age group and the other three in the 50-54 years age group. Endometrial hyperplasia without atypia is a proliferation of endometrial glands of irregular size and shape without significant cytological atypia. It is most commonly diagnosed in perimenopause with symptoms of abnormal, non-cyclical vaginal bleeding. It is a result of prolonged estrogen exposure unopposed by progesterone or progestational agents acting on the entire endometrial field.24 In the study done by Abid et al, endometrial hyperplasia was present in 5% of cases withAUB and the incidence was most common in the postmenopausal and perimenopausal age group.20\nWe observed inflammatory pathology only in 5 cases (5.2%). All these cases showed features of non-specific chronic endometritis. Abid, et al. reported a lower incidence of endometritis (9.1%) in perimenopausal age group than in reproductive age group (18%) which they attributed to the fact that the women in reproductive age group had a greater chance of exposure to casesarian sections, spontaneous and therapeutic abortion and intrauterine contraceptive device etc. hence prone to develop chronic endometritis. Perveen, et al. found chronic endometritis in a larger number of cases (37%).25 This variation may be due to socioeconomic status, hygienic conditions or exposure to surgical intervention.\nEndometrial polyp was present in four cases (4.1%). None of the polyps showed atypical hyperplasia or carcinoma. Azim, et al. demonstrated an increasing frequency of polyp with advancing age.26A small number of cases demonstrated atrophic endometrium (3.1%). All three cases were in the 50-54 age group and no history of menopause was present in them. 
This pattern is most common in post-menopausal women; however, the anovulation seen in the perimenopausal age group eventually leads to permanent loss of ovarian function, manifested as atrophic or inactive endometrium.20\nOur study was a descriptive hospital-based study carried out in a selected age group of women over a short duration of time and therefore might not be representative of the entire population. However, this study would provide a database with regard to benign histopathological patterns in AUB in perimenopausal age groups.", "The prevalence of benign histopathological findings among endometrial biopsies in the study was similar to other studies. Hormonal imbalance pattern is the predominant benign histopathologic pattern seen in perimenopausal women presenting with AUB. Histopathologic findings in an endometrial biopsy help gynaecologists to decide appropriate management strategies." ]
[ "intro", "methods", "results", "discussion", "conclusions" ]
[ "\nabnormal uterine bleeding\n", "\nhistopathology\n", "\nperimenopausal\n" ]
INTRODUCTION: Abnormal uterine bleeding (AUB) is any type of menstrual bleeding that does not fall within the normal ranges for amount, frequency, duration or cyclicity.1 It is the most common presenting complaint in the gynecology outpatient department, especially in the perimenopausal age group.2 Perimenopause comprises the transitional years prior to menopause, in which variation in the normal cyclical pattern may occur as a result of physiological hormonal changes. AUB with abnormal blood loss has a profound effect on the quality of the woman's life. However, most of the cases of AUB in the perimenopausal age group are of a benign nature without requiring any invasive surgeries. Endometrial biopsy obtained by dilatation and curettage is the preferred modality of investigation to determine the causative pathology of AUB.3 Histopathologic changes in the endometrium may vary from physiological findings to overt pathologic features in patients with AUB in different populations and age groups. So far, limited data are available regarding the endometrial patterns seen in perimenopausal women in our population. The objective of this study was to find out the prevalence of benign histopathological findings in perimenopausal women presenting with AUB. METHODS: This was a descriptive cross-sectional study conducted among patients from 1st June 2020 to 30th September 2021. The institutional review committee of Kathmandu Medical College and Teaching Hospital provided the ethical approval (reference number: 305202002). Using a convenience sampling technique, endometrial biopsy specimens submitted to the Department of Pathology for histopathologic evaluation during the study period were included in the study. The sample size was calculated as n = Z² × p × q / e² = (1.96)² × 0.5 × (1 − 0.5) / (0.1)² ≈ 96, where n = required sample size, Z = 1.96 at 95% Confidence Interval (CI), p = prevalence taken as 50% for maximum sample size, q = 1 − p, and e = margin of error, 10%. Therefore, the calculated sample size was 96. Hence, 96 endometrial samples were collected. The endometrial samples were fixed in 10% formalin for 12-24 hours and the entire tissue was submitted for routine processing. Sections of 5 μm thickness taken from paraffin blocks were stained with Haematoxylin and Eosin (H&E) and studied under light microscopy by a pathologist. Patients with cervical-vaginal pathology and systemic causes of abnormal uterine bleeding were excluded from the study. Relevant demographic data were obtained from the requisition form provided with the specimen. Data were entered and analyzed using the Statistical Package for the Social Sciences version 23. Point estimate at 95% CI was calculated along with frequency and proportion for binary data. RESULTS: In the present study, out of the 96 cases of endometrial biopsies included, the prevalence of benign histopathological findings was 93 (96.9%) (93-100 at 95% CI). Only three (3.1%) cases showed histopathologic evidence of endometrial malignancy. The majority of patients with AUB had a hormone imbalance pattern on endometrial biopsy, 40 (41.7%). Endometrial atrophy was noted in three (3.1%) cases. Six (6.2%) cases showed endometrial hyperplasia without atypia. Five (5.2%) cases showed features of endometritis. In four (4.1%) cases AUB was due to endometrial polyp (Table 1). Histopathologic changes favoring hormone imbalance included disordered proliferative endometrium 32 (80%), non-secretory endometrium with endometrial and stromal breakdown in 3 (7.5%) and pill effect in 5 (12.5%) cases. 
Most of the women presenting with AUB fell into the 40-44-year age group, 38 (39.58%), followed by 45-49 year age group, 32 (33.33%) and 50-54 year age group, 26 (27.08%) (Figure 1). Menorrhagia, in 35 (36.4%) cases, was the most common presenting symptom (Table 2). Despite having AUB, a significant number of cases showed normal cyclical endometrium where 24 (25%) showed proliferative endometrium and 11 (11.5%) showed secretory endometrium. DISCUSSION: The endometrium undergoes a complex regular cycle of periodic proliferation, differentiation, breakdown and regeneration which forms the basis of normal menstrual cycle; any deviation from it would result in abnormal uterine bleeding. Clinically AUB manifests as menorrhagia, polymenorrhea, polymenorrhagia, metrorrhagia, menometrorrhagia, intermenstrual bleeding, etc.4,5 A new classification for the causes of abnormal uterine bleeding is based on the acronym PALM-COEIN - Polyps, Adenomyosis, Leiomyoma, Malignancy, Hyperplasia, Coagulopathy, Ovulatory disorders, Endometrial causes, Iatrogenic and Not classified, was developed by the International Federation of Gynaecology and Obstetrics in November 2010.6,7,8 When unscheduled bleeding occurs, especially if it is heavy or prolonged, further investigations such as trans-vaginal ultrasound and, hysteroscopy with biopsy if indicated, are needed.9 The accuracy of endometrial biopsy for the detection of endometrial abnormalities has been reported to be as high as 96%.10 Diagnosis and the management strategies of AUB is not complete without evaluation of histopathological characteristics of endometrial biopsy.11 Endometrial biopsy tissue obtained by a simple procedure of D and C (Dilation and curettage) is an effective and safe diagnostic step in evaluation of AUB and for the diagnosis of endometrial pathologies.12,13 AUB is common in the perimenopausal age group. In a study conducted by Chapagain in a tertiary hospital in Nepal, majority of cases of AUB seen in perimenopausal age group was between 40-44 years (45.5%).14 We also observed that the common age of AUB was noted in the 40-44 years age group (39.58%) followed by 45-49 years age group (33.33%) and 50-54 years age group (27.08%) which is similar to the study done by Valson and Bhatiayani, et al. Probable reason of increased incidence of AUB in this age group (40-50 years) is that the women in this age group are in their climacteric period. As women approach menopause, cycles shorten and become intermittently anovulatory due to the decline in the level of ovarian follicles and estradiol level.15,16 Menorrhagia (36.4%) was the most common presentation followed by polymenorrhea (26%) in our study. In the study conducted by Chapagain, menorrhagia was the commonest presenting complaint (40.3%) followed by menometrorrhagia (23.4%).14 which was similar to the observation made by Chaudhary, Razzaq, et al. and Shrestha in their studies which were conducted in a population of similar race, socio-economic status and environment as our study.17,18,19 Study conducted by Abid, et al. however found polymenorrhea as the commonest presenting complaint.20 Various studies have shown benign histopatholgical changes in patients presenting with AUB.14-22 In our study the commonest histological pattern in perimenopausal women was hormonal imbalance pattern (41.7%) followed by normal cyclical pattern (proliferative and secretory pattern combined, 36.5%). 
Among the cases showing hormone imbalance patterns, histomorphologic features showed predominantly disordered proliferative endometrium (32/40 cases), glandular and stromal breakdown (3/40 cases) and pill effect (5/40 cases). Abid, et al. also reported that the hormonal imbalance pattern was the commonest in the perimenopausal age group (54%). This correlates with the fact that perimenopausal age is a transition from normal ovulation to anovulation.20 Thirty-five women with AUB showed normal cyclical endometrium, which was the second most common pattern in our study. Out of these 35 cases, 24 (25%) showed proliferative endometrium and 11 (11.5%) revealed secretory phase endometrium. This is in contrast to the studies done by Das et al, Razzaq et al, Bhatiyani and Singh, et al. who reported normal cyclical pattern to be the commonest pattern of endometrium. Their studies included endometrial biopsy in a wide age range while our study took into account only cases which fell into the perimenopausal age group (40-54 years).16,18,21,22 Endometrial pathologies such as endometrial hyperplasia, polyps, submucous leiomyoma and endometrial carcinoma should be suspected and evaluation of the endometrium is necessary in AUB in perimenopausal women.23 We observed endometrial hyperplasia without atypia in six cases (6.2%). Three of these cases were present in the 45-49 years age group and the other three in the 50-54 years age group. Endometrial hyperplasia without atypia is a proliferation of endometrial glands of irregular size and shape without significant cytological atypia. It is most commonly diagnosed in perimenopause with symptoms of abnormal, non-cyclical vaginal bleeding. It is a result of prolonged estrogen exposure unopposed by progesterone or progestational agents acting on the entire endometrial field.24 In the study done by Abid et al, endometrial hyperplasia was present in 5% of cases with AUB and the incidence was most common in the postmenopausal and perimenopausal age groups.20 We observed inflammatory pathology in only 5 cases (5.2%). All these cases showed features of non-specific chronic endometritis. Abid, et al. reported a lower incidence of endometritis (9.1%) in the perimenopausal age group than in the reproductive age group (18%), which they attributed to the fact that women in the reproductive age group had a greater chance of exposure to caesarean sections, spontaneous and therapeutic abortion, and intrauterine contraceptive devices, and were hence prone to developing chronic endometritis. Perveen, et al. found chronic endometritis in a larger number of cases (37%).25 This variation may be due to socioeconomic status, hygienic conditions or exposure to surgical intervention. Endometrial polyp was present in four cases (4.1%). None of the polyps showed atypical hyperplasia or carcinoma. Azim, et al. demonstrated an increasing frequency of polyps with advancing age.26 A small number of cases demonstrated atrophic endometrium (3.1%). All three cases were in the 50-54 years age group and none of them had a history of menopause. This pattern is most common in post-menopausal women; however, the anovulation seen in the perimenopausal age group eventually leads to permanent loss of ovarian function, manifested as atrophic or inactive endometrium.20 Our study was a descriptive hospital-based study carried out in a selected age group of women over a short duration of time and therefore might not be representative of the entire population. 
However, this study would provide a database with regard to benign histopathological patterns in AUB in perimenopausal age groups. CONCLUSIONS: The prevalence of benign histopathological findings among endometrial biopsies in the study was similar to other studies. Hormonal imbalance pattern is the predominant benign histopathologic pattern seen in perimenopausal women presenting with AUB. Histopathologic findings in an endometrial biopsy help gynaecologists to decide appropriate management strategies.
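The sample-size formula and the reported prevalence interval in the text above can be checked with a few lines of arithmetic. The sketch below is illustrative only: it assumes a standard normal-approximation (Wald) interval for the proportion, since the study does not state which interval was used.

```python
# Quick arithmetic check (illustrative only): the sample-size formula n = Z^2*p*q/e^2
# and a normal-approximation (Wald) 95% CI for the reported prevalence of benign
# findings (93 of 96 biopsies). The Wald form is an assumption, not stated in the text.
import math

Z, p, e = 1.96, 0.5, 0.10           # 95% confidence, maximum-variance prevalence, 10% margin
q = 1 - p
n = (Z ** 2) * p * q / (e ** 2)
print(f"required sample size: {n:.2f} -> {round(n)}")          # 96.04 -> 96

x, N = 93, 96                        # benign findings out of biopsies studied
p_hat = x / N
half_width = Z * math.sqrt(p_hat * (1 - p_hat) / N)
low, high = p_hat - half_width, min(1.0, p_hat + half_width)
print(f"prevalence {100 * p_hat:.1f}% (95% CI {100 * low:.0f}-{100 * high:.0f}%)")  # ~96.9% (93-100%)
```

Under these assumptions the output is consistent with the 96 samples and the 93-100 interval quoted above.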
Background: Abnormal uterine bleeding is the most common presenting complaint in the perimenopausal age group. Endometrial biopsy obtained by dilatation and curettage is the preferred modality of investigation to determine the causative pathology of abnormal uterine bleeding. The objective of this study was to find out the prevalence of the benign histopathological findings in perimenopausal women presenting with abnormal uterine bleeding. Methods: This descriptive cross-sectional study was conducted among patients from 1st June 2020 to 30th September 2021. Ethical approval was taken from the Institutional Review Committee of Kathmandu Medical College (reference number: 305202002). Using the convenience sampling method, 96 cases of endometrial biopsies were studied under light microscopy. Data were analyzed using the Statistical Package for the Social Sciences version 23.0. Point estimate at 95% Confidence Interval was calculated along with frequency and proportion for binary data. Results: Among the 96 specimens, the prevalence of benign findings was 93 (96.9%) (93-100 at 95% Confidence Interval). Among them, the commonest benign histopathologic spectrum was hormonal imbalance pattern in 40 (41.7%) followed by normal menstrual pattern in 35 (36.5%). Five (5.2%) cases showed chronic endometritis. Six (6.2%) cases of endometrial hyperplasia without atypia were identified. Three (3.1%) cases showed endometrial atrophy. Four (4.1%) cases showed endometrial polyp. Conclusions: The prevalence of benign histopathological findings among endometrial biopsies in the study was similar to other studies.
INTRODUCTION: Abnormal uterine bleeding (AUB) is any type of menstrual bleeding that does not fall within the normal ranges for amount, frequency, duration or cyclicity.1 It is the most common presenting complaint in the gynecology outpatient department, especially in the perimenopausal age group.2 Perimenopause comprises the transitional years prior to menopause, in which variation in the normal cyclical pattern may occur as a result of physiological hormonal changes. AUB with abnormal blood loss has a profound effect on the quality of the woman's life. However, most of the cases of AUB in the perimenopausal age group are of a benign nature without requiring any invasive surgeries. Endometrial biopsy obtained by dilatation and curettage is the preferred modality of investigation to determine the causative pathology of AUB.3 Histopathologic changes in the endometrium may vary from physiological findings to overt pathologic features in patients with AUB in different populations and age groups. So far, limited data are available regarding the endometrial patterns seen in perimenopausal women in our population. The objective of this study was to find out the prevalence of benign histopathological findings in perimenopausal women presenting with AUB. CONCLUSIONS: The prevalence of benign histopathological findings among endometrial biopsies in the study was similar to other studies. Hormonal imbalance pattern is the predominant benign histopathologic pattern seen in perimenopausal women presenting with AUB. Histopathologic findings in an endometrial biopsy help gynaecologists to decide appropriate management strategies.
Background: Abnormal uterine bleeding is the most common presenting complaint in the perimenopausal age group. Endometrial biopsy obtained by dilatation and curettage is the preferred modality of investigation to determine the causative pathology of abnormal uterine bleeding. The objective of this study was to find out the prevalence of the benign histopathological findings in perimenopausal women presenting with abnormal uterine bleeding. Methods: This descriptive cross-sectional study was conducted among patients from 1st June 2020 to 30th September 2021. Ethical approval was taken from the Institutional Review Committee of Kathmandu Medical College (reference number: 305202002). Using the convenience sampling method, 96 cases of endometrial biopsies were studied under light microscopy. Data were analyzed using the Statistical Package for the Social Sciences version 23.0. Point estimate at 95% Confidence Interval was calculated along with frequency and proportion for binary data. Results: Among the 96 specimens, the prevalence of benign findings was 93 (96.9%) (93-100 at 95% Confidence Interval). Among them, the commonest benign histopathologic spectrum was hormonal imbalance pattern in 40 (41.7%) followed by normal menstrual pattern in 35 (36.5%). Five (5.2%) cases showed chronic endometritis. Six (6.2%) cases of endometrial hyperplasia without atypia were identified. Three (3.1%) cases showed endometrial atrophy. Four (4.1%) cases showed endometrial polyp. Conclusions: The prevalence of benign histopathological findings among endometrial biopsies in the study was similar to other studies.
1,971
283
[]
5
[ "endometrial", "age", "cases", "age group", "group", "aub", "study", "endometrium", "perimenopausal", "pattern" ]
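The most-frequent-words field above appears to mix unigrams and bigrams drawn from the article text. As a rough illustration only (the tokenizer, stop-word list, and ranking rules actually used to build this dataset are not documented here and are assumed), such a list could be derived along these lines:

```python
# Rough illustration only: one plausible way a mixed unigram/bigram frequency list
# like the field above could be produced. The real preprocessing is not documented
# in this dataset, so the stop-word set and tokenization below are assumptions.
import re
from collections import Counter

STOP_WORDS = {"the", "of", "in", "and", "was", "were", "with", "to", "a", "is", "by"}

def frequent_terms(text: str, k: int = 10) -> list[str]:
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP_WORDS]
    bigrams = [" ".join(pair) for pair in zip(words, words[1:])]
    return [term for term, _ in Counter(words + bigrams).most_common(k)]

sample = ("Endometrial biopsy in the perimenopausal age group: endometrial patterns "
          "of AUB cases by age group in this study.")
print(frequent_terms(sample, k=6))
```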
[ "findings endometrial", "aub type menstrual", "endometritis perimenopausal age", "incidence endometritis perimenopausal", "uterine bleeding aub" ]
[CONTENT] abnormal uterine bleeding | histopathology | perimenopausal [SUMMARY]
[CONTENT] abnormal uterine bleeding | histopathology | perimenopausal [SUMMARY]
[CONTENT] abnormal uterine bleeding | histopathology | perimenopausal [SUMMARY]
[CONTENT] abnormal uterine bleeding | histopathology | perimenopausal [SUMMARY]
[CONTENT] abnormal uterine bleeding | histopathology | perimenopausal [SUMMARY]
[CONTENT] abnormal uterine bleeding | histopathology | perimenopausal [SUMMARY]
[CONTENT] Cross-Sectional Studies | Endometrial Hyperplasia | Endometrium | Female | Humans | Perimenopause | Uterine Hemorrhage [SUMMARY]
[CONTENT] Cross-Sectional Studies | Endometrial Hyperplasia | Endometrium | Female | Humans | Perimenopause | Uterine Hemorrhage [SUMMARY]
[CONTENT] Cross-Sectional Studies | Endometrial Hyperplasia | Endometrium | Female | Humans | Perimenopause | Uterine Hemorrhage [SUMMARY]
[CONTENT] Cross-Sectional Studies | Endometrial Hyperplasia | Endometrium | Female | Humans | Perimenopause | Uterine Hemorrhage [SUMMARY]
[CONTENT] Cross-Sectional Studies | Endometrial Hyperplasia | Endometrium | Female | Humans | Perimenopause | Uterine Hemorrhage [SUMMARY]
[CONTENT] Cross-Sectional Studies | Endometrial Hyperplasia | Endometrium | Female | Humans | Perimenopause | Uterine Hemorrhage [SUMMARY]
[CONTENT] findings endometrial | aub type menstrual | endometritis perimenopausal age | incidence endometritis perimenopausal | uterine bleeding aub [SUMMARY]
[CONTENT] findings endometrial | aub type menstrual | endometritis perimenopausal age | incidence endometritis perimenopausal | uterine bleeding aub [SUMMARY]
[CONTENT] findings endometrial | aub type menstrual | endometritis perimenopausal age | incidence endometritis perimenopausal | uterine bleeding aub [SUMMARY]
[CONTENT] findings endometrial | aub type menstrual | endometritis perimenopausal age | incidence endometritis perimenopausal | uterine bleeding aub [SUMMARY]
[CONTENT] findings endometrial | aub type menstrual | endometritis perimenopausal age | incidence endometritis perimenopausal | uterine bleeding aub [SUMMARY]
[CONTENT] findings endometrial | aub type menstrual | endometritis perimenopausal age | incidence endometritis perimenopausal | uterine bleeding aub [SUMMARY]
[CONTENT] endometrial | age | cases | age group | group | aub | study | endometrium | perimenopausal | pattern [SUMMARY]
[CONTENT] endometrial | age | cases | age group | group | aub | study | endometrium | perimenopausal | pattern [SUMMARY]
[CONTENT] endometrial | age | cases | age group | group | aub | study | endometrium | perimenopausal | pattern [SUMMARY]
[CONTENT] endometrial | age | cases | age group | group | aub | study | endometrium | perimenopausal | pattern [SUMMARY]
[CONTENT] endometrial | age | cases | age group | group | aub | study | endometrium | perimenopausal | pattern [SUMMARY]
[CONTENT] endometrial | age | cases | age group | group | aub | study | endometrium | perimenopausal | pattern [SUMMARY]
[CONTENT] aub | perimenopausal | age | physiological | population | perimenopausal age | normal | perimenopausal women | changes | bleeding [SUMMARY]
[CONTENT] sample | sample size | 96 | size | calculated | data | endometrial samples | size 96 | provided | 96 96 [SUMMARY]
[CONTENT] cases | showed | endometrium | endometrial | cases showed | year | year age | year age group | aub | age [SUMMARY]
[CONTENT] findings endometrial | findings | histopathologic | pattern | benign | appropriate management strategies | gynaecologists decide appropriate | gynaecologists decide appropriate management | appropriate | appropriate management [SUMMARY]
[CONTENT] endometrial | cases | age | aub | group | age group | perimenopausal | endometrium | study | showed [SUMMARY]
[CONTENT] endometrial | cases | age | aub | group | age group | perimenopausal | endometrium | study | showed [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] between 1st June 2020 to 30th September 2021 ||| the Institutional Review Committee | Kathmandu Medical College | 305202002 ||| 96 ||| the Statistical Package | Social Sciences version23.0 ||| Point | 95% [SUMMARY]
[CONTENT] 96 | 93 | 96.9% ||| 93-100 | 95% ||| 40 | 41.7% | 35 | 36.5% ||| Five | 5.2% ||| 6.2% ||| Three | 3.1% ||| Four | 4.1% [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| ||| between 1st June 2020 to 30th September 2021 ||| the Institutional Review Committee | Kathmandu Medical College | 305202002 ||| 96 ||| the Statistical Package | Social Sciences version23.0 ||| Point | 95% ||| ||| 96 | 93 | 96.9% | 93-100 | 95% ||| 40 | 41.7% | 35 | 36.5% ||| Five | 5.2% ||| 6.2% ||| Three | 3.1% ||| Four | 4.1% ||| [SUMMARY]
[CONTENT] ||| ||| ||| between 1st June 2020 to 30th September 2021 ||| the Institutional Review Committee | Kathmandu Medical College | 305202002 ||| 96 ||| the Statistical Package | Social Sciences version23.0 ||| Point | 95% ||| ||| 96 | 93 | 96.9% | 93-100 | 95% ||| 40 | 41.7% | 35 | 36.5% ||| Five | 5.2% ||| 6.2% ||| Three | 3.1% ||| Four | 4.1% ||| [SUMMARY]
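The annotated prompt fields above share one visible template: terms joined with " | " between [CONTENT] and [SUMMARY] markers, with "|||" marking elided segments in the entity-style variants. A minimal reconstruction of the keyword-style prompt is sketched below; the helper name and joining rule are assumptions, since the actual builder script is not part of this document.

```python
# Assumed reconstruction of the keyword-style prompt template seen above; the real
# dataset builder is not shown in this document, so treat this as illustrative only.
def build_prompt(terms: list[str]) -> str:
    return f"[CONTENT] {' | '.join(terms)} [SUMMARY]"

print(build_prompt(["abnormal uterine bleeding", "histopathology", "perimenopausal"]))
# -> [CONTENT] abnormal uterine bleeding | histopathology | perimenopausal [SUMMARY]
```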
Ketamine ameliorates oxidative stress-induced apoptosis in experimental traumatic brain injury via the Nrf2 pathway.
29713142
Ketamine can act as a multifunctional neuroprotective agent by inhibiting oxidative stress, cellular dysfunction, and apoptosis. Although it has been proven to be effective in various neurologic disorders, the mechanism of the treatment of traumatic brain injury (TBI) is not fully understood. The aim of this study was to investigate the neuroprotective function of ketamine in models of TBI and the potential role of the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway in this putative protective effect.
BACKGROUND
Wild-type male mice were randomly assigned to five groups: Sham group, Sham + ketamine group, TBI group, TBI + vehicle group, and TBI + ketamine group. Marmarou's weight drop model in mice was used to induce TBI, after which either ketamine or vehicle was administered via intraperitoneal injection. After 24 h, the brain samples were collected for analysis.
MATERIALS AND METHODS
Ketamine significantly ameliorated secondary brain injury induced by TBI, including neurological deficits, brain water content, and neuronal apoptosis. In addition, the levels of malondialdehyde (MDA), glutathione peroxidase (GPx), and superoxide dismutase (SOD) were restored by the ketamine treatment. Western blotting and immunohistochemistry showed that ketamine significantly increased the level of Nrf2. Furthermore, administration of ketamine also induced the expression of Nrf2 pathway-related downstream factors, including heme oxygenase-1 and quinine oxidoreductase-1, at the pre- and post-transcriptional levels.
RESULTS
Ketamine exhibits neuroprotective effects by attenuating oxidative stress and apoptosis after TBI. Therefore, ketamine could be an effective therapeutic agent for the treatment of TBI.
CONCLUSION
[ "Animals", "Apoptosis", "Brain Injuries, Traumatic", "Cell Survival", "Disease Models, Animal", "Dose-Response Relationship, Drug", "Ketamine", "Male", "Mice", "Mice, Inbred ICR", "NF-E2-Related Factor 2", "Neurons", "Oxidative Stress" ]
5907785
Introduction
Traumatic brain injury (TBI) is defined as a mechanical injury that causes numerous deaths and has an adverse impact on families and society.1,2 Over the past few years, great efforts have been directed toward identifying effective ways to improve the prognosis of TBI; however, many approaches have failed during clinical trials.3 It is widely believed that TBI causes both primary and secondary brain damage. Although the primary brain injury is a major factor, the secondary brain damage precipitates a complex pathological process that can lead to a series of endogenous events, including oxidative stress, glutamate excitotoxicity, and the activation of inflammatory responses.4 Of these processes, oxidative stress plays the most important role in secondary damage,5 not only because of the excessive production of reactive oxygen species (ROS) but also due to the exhaustion of the endogenous antioxidant system. Nuclear factor erythroid 2-related factor 2 (Nrf2) is a key transcription factor which controls the redox state as the cell’s main defense mechanism against oxidative stress.6,7 Nrf2 acts by activating the antioxidant response, which induces the transcription of a wide range of genes that are able to counteract the harmful effects of oxidative stress, thus restoring intracellular homeostasis.8 Under basal conditions, Nrf2 is localized in the cytoplasm, anchored by the Kelch-like ECH-associated protein (Keap1), and degraded by the ubiquitin-26S proteasome pathway, with a half-life of 20 min.9 Once the cell encounters stimulations, Nrf2 dissociates from Keap1 and is translocated into the nucleus, where it induces the expression of antioxidant enzymes. The enzymes regarded as antioxidants include nicotinamide adenine dinucleotide phosphate (NADPH), quinine oxidoreductase-1 (NQO-1), heme oxidase-1 (HO-1), superoxide dismutase (SOD), and glutathione peroxidase (GPx).10,11 These free radical scavenging enzymes form a powerful antioxidant defense mechanism. Ketamine, an N-methyl-D-aspartate receptor antagonist, has shown potent anti-inflammatory effects in various models of systemic inflammation, including endotoxemia, ischemia, and burns.12,13 These effects were observed in multiple organ systems and involve the modulation of certain molecular mediators of the inflammatory response.14 In addition, when administered either before or after various pro-inflammatory responses, ketamine was found to diminish systemic production of cytokines and improve survival.15 In models of brain injury and ischemia, ketamine elicits neuroprotective effects.16 Moreover, recent evidence suggests that the use of ketamine could be beneficial for patients with TBI.17 Nevertheless, the effects of ketamine on TBI-induced oxidative stress in the brain remain to be fully elucidated, and the present study was performed to better examine these effects.
Western blot analysis
Protein extraction was performed according to the instructions provided by the manufacturer of the Protein Extraction Kit (Beyotime Biotech Inc, Nantong, China). Equal amounts of protein were loaded on a 10% or 12% sodium dodecyl sulfate–polyacrylamide gel and then transferred onto polyvinylidene fluoride membranes. The membranes were blocked for 2 h in 5% skimmed milk in TBST at room temperature. After that, the membranes were separately incubated overnight, at 4°C, with antibodies against Nrf2 (1:1,000 diluted, rabbit; Abcam, Cambridge, MA, USA), HO-1 (1:200, rabbit; Santa Cruz Biotechnology Inc., Dallas, TX, USA), and NQO-1 (1:1,000 diluted, rabbit; Abcam), Bax (1:400 diluted, rabbit; Abcam) and cleaved caspase-3 (1:1,000 diluted, rabbit; Cell Signaling Technology, Danvers, MA, USA), Bcl-2 (1:1,000 diluted, rabbit; Cell Signaling Technology), and β-actin (1:5,000 diluted, rabbit; Bioworld Technology, St Louis Park, MN, USA) in blocking buffer. Then, the membranes were washed with TBST for 15 min and were incubated with the secondary antibodies (1:5,000 diluted, goat; Bioworld Technology) for 2 h. After three 15-min washes with TBST, the protein bands were visualized by enhanced chemiluminescence (ECL) (EMD Millipore, Billerica, MA, USA) and exposure to X-ray film. All the results were analyzed with the Un-Scan-It 6.1 software (Silk Scientific Inc., Orem, UT, USA). The density of each band was separately quantified.
Results
Ketamine treatment provided protection in injured brains The brain water content was also examined to confirm the protection of ketamine 24 h after TBI, as previously described (Figure 1A). The results showed that the TBI and TBI + vehicle groups had significantly increased brain water contents compared with the Sham group. Brain edema was attenuated in the ketamine-treated groups, which was consistent with the results of the neurological score. Specifically, the treatment with 60 mg/kg of ketamine conferred a better effect than the treatment with 30 mg/kg of ketamine. Accordingly, 60 mg/kg of ketamine was used in the subsequent experiments. The neurological score was used to evaluate the motor function of the mice at 24 and 48 h after TBI (Figure 1B). All mice were trained 3 days pre-TBI. At 48 h after TBI, the motor scores of the TBI and TBI + vehicle groups were lower than the scores of the Sham and Sham + ketamine groups. The ketamine-treated group exhibited reduced neurological deficits compared with the TBI group at 24 h, which were also lower than the deficits of the Sham and Sham + ketamine groups. The group that received a higher dose (60 mg/kg) showed a better effect than the low dose (30 mg/kg) for the TBI + ketamine group at 24 and 48 h after TBI, but the highest dose (100 mg/kg) failed to further enhance the neuroprotective effect. There was no difference between the Sham and Sham + ketamine groups. So, the Sham + ketamine group would be absent from the subsequent experiments. Ketamine reduced neuronal apoptosis To investigate the protective effects of ketamine, neuron morphology was examined by Nissl staining. While neurons in the Sham group were clear and intact (Figure 2A), multiple neurons in the TBI and TBI + vehicle groups were damaged, exhibiting extensive changes including oval or triangular nuclei and shrunken cytoplasm (Figure 2B and C). However, an improvement in neuronal morphology and cytoarchitecture was observed in TBI mice treated with ketamine (Figure 2D). To investigate the mechanistic basis for the effects of ketamine, TUNEL staining was used to examine neural cells in injured brain tissue. TUNEL-positive cells were detected at a low frequency in the brains of mice in the Sham group (Figure 2E). The apoptotic index was increased in the TBI and TBI + vehicle groups compared to Sham animals (Figure 2F and G), but was reduced in the TBI + ketamine groups (Figure 2H–J). These results indicate that ketamine blocks neural apoptosis induced by TBI. Additionally, Western blot analysis was performed to assess the expression of cleaved caspase-3, an indicator of apoptosis. Cleaved caspase-3 expression increased significantly following TBI, relative to the Sham group. However, the treatment with ketamine reduced the level of cleaved caspase-3, relative to the TBI + vehicle group (Figure 3A and B). Expression of apoptotic factors was reduced by ketamine administration The protective effects of ketamine against TBI-induced neural apoptosis were examined by Western blot analysis. These effects were reversed in TBI mice treated with ketamine, in which the translocation of Bax was inhibited. The expression of the pro-apoptotic factor Bax increased following TBI when compared with the Sham group (Figure 3A and D), whereas the expression of the anti-apoptotic factor Bcl-2 decreased when compared with the Sham group (Figure 3A and C). However, the treatment with 60 mg/kg of ketamine reversed the expression levels of Bax and Bcl-2 relative to the TBI + vehicle group at 24 h after TBI. Ketamine reduced oxidative stress following TBI To evaluate the effect of ketamine on oxidative stress induced by TBI, MDA level, activities of SOD and GPx, indicators of lipid peroxidation, and antioxidant levels, respectively, were assessed. The MDA level was increased in the TBI + vehicle group compared with the Sham group. This effect was mitigated by the administration of 60 mg/kg of ketamine (Figure 4A). The activities of GPx and SOD were both decreased after TBI, while ketamine treatment increased their activity (Figure 4B and C). Ketamine promoted the expression of Nrf2 and Nrf2 downstream factors The expression of Nrf2 was investigated by Western blot analysis and immunohistochemistry. Compared with the Sham group, both TBI and ketamine administration induced Nrf2 expression (Figure 5A, B, and D–F). However, compared with the TBI + vehicle group, the TBI + ketamine groups exhibited significantly increased Nrf2 levels, which indicates that ketamine promoted the Nrf2 level following TBI (Figure 5C–F). The expression of NQO-1 and HO-1 was also measured by Western blot analysis. At the protein level, NQO-1 and HO-1 were both upregulated after TBI (Figure 5E, G, and H). Additionally, administration of ketamine further enhanced protein expression compared with the vehicle. These results demonstrate that ketamine induced the expression of factors downstream of Nrf2 via activation of the Nrf2 and Antioxidant Responsive Element (Nrf2-ARE) signaling pathway (Figure 5E, G, and H).
Conclusion
The present study demonstrated that ketamine attenuated the oxidative reaction by enhancing the expression and activity of antioxidant enzymes in a TBI mouse model. Furthermore, experimental data showed that ketamine exerted neuroprotection against TBI, by partially combating oxidative stress via the activation of the Nrf2 pathway and its downstream proteins. However, further study is still needed to elucidate the underlying mechanism of Nrf2 pathway transformation after ketamine administration.
[ "Animals", "Models of TBI", "Experimental design", "Measurement of the brain water content", "Neurological evaluation", "Nissl staining", "Immunohistochemical staining", "Malondialdehyde (MDA) content and the activities of SOD and GPx", "Statistical analyses", "Ketamine treatment provided protection in injured brains", "Ketamine reduced neuronal apoptosis", "Expression of apoptotic factors was reduced by ketamine administration", "Ketamine reduced oxidative stress following TBI", "Ketamine promoted the expression of Nrf2 and Nrf2 downstream factors", "Conclusion" ]
[ "Male Institute of Cancer Research (ICR) mice (Experimental Animal Centre of Fujian Medical University, Fujian, China), weighing 28–32 g, were used in this study. Mice were housed under a 12-h light/dark cycle, with free access to food and water, and were acclimatized for at least 1 week before the experiment. Experimental protocols were approved by the Animal Care and Use Committee of Fujian University and complied with the Guide for the Care and Use of Laboratory Animals by the National Institute of Health (NIH, Bethesda, MD, USA).", "The TBI model used in this study was a modified version of Marmarou’s weight drop model.18 Mice were briefly anesthetized with an intraperitoneal injection of 10% chloral hydrate and placed in a stereotaxic frame. Following a 1.5 cm midline scalp incision, the fascia was reflected to expose the skull. After locating the left anterior frontal area (the impact area), a 200 g weight was released onto the skull. During the operation, the dura was kept intact. The scalp incision was subsequently closed and sutured. The mice were then returned to their former cages. Sham animals underwent identical procedures, except the weight drop.", "Mice (132 mice were used, 18 mice died) were randomly divided into five groups, namely Sham (n = 24), Sham + ketamine (n = 6), TBI (n = 24), TBI + vehicle (n = 24), TBI + ketamine (three subgroups: 30 [n = 6], 60 [n = 6], and 100 [n = 6] mg/kg). The mice in the TBI + ketamine groups were injected intraperitoneally with the respective dose of ketamine (Hengrui, Jiangsu, China), 30 min after TBI. At the corresponding time points, all mice in the TBI + vehicle group were administered equivalent volumes of vehicle solution. All mice were sacrificed 24 h after TBI.", "The brain water content was measured according to a previous study.19 Left brain cortical tissue samples were collected on sacrification. The collected tissue was positioned directly over the injury site, covering the contusion and the penumbra. The fresh tissue was weighed to record the wet weight, and then dried for 72 h at 80°C and weighed to record the dry weight. The brain water content was calculated by the formula: [(wet weight − dry weight)/ wet weight] × 100%.", "Neurological deficit was evaluated by the grip test, which was developed based on the test of gross vestibulomotor function, as described previously.20 Briefly, mice were placed on a thin, horizontal metal wire (45 cm long) that was suspended between two vertical poles, 45 cm above a foam pad.\nEvery mouse was required to perform 10 different tasks that represented motor function, balance, and alertness. One point was given for failing to perform each of the tasks, and thus 0 = minimum deficit and 10 = maximum deficit. A lower score demonstrated less neurological deficits. The severity of injury is defined by the initial NSS, which is evaluated 1 h after TBI. Testing was performed by the investigator who was blinded to the experimental groups. The sequence of the behavioral tasks was randomized.", "For Nissl staining, the sections of paraffin-embedded brain tissue (5 μm thick) were stained by cresyl violet, according to a previous study.20 “Normal” neurons had one or two large, round nuclei, located in the central soma, and abundant cytoplasm. In contrast, positive cells had irregular neuronal cell bodies, shrinking and hyperchromatic nuclei, and dried-up cytoplasm with vacuoles. 
The histological examination was performed by two observers who were blinded to the group assignment.", "Tissue sections (4 μm thick) were incubated overnight with the primary antibody against Nrf2 (1:100; Abcam) at 4°C. Following a 15-min wash in phosphate-buffered saline (PBS), the sections were incubated with horse radish peroxidase (HRP)-conjugated IgG (diluted 1:400, rabbit; Santa Cruz Biotechnology Inc.) for 1 h at room temperature. After three washes with PBS, the immunolabeled protein was visualized as brown staining, after staining with diaminobenzidine and the hematoxylin counterstaining. In each coronary section, six randomly selected fields were observed under a light microscope (ECLIPSE E100; Nikon Corporation, Tokyo, Japan). Then, the number of positive neurons in each of the six fields was recorded, and the average number of positive neurons was calculated for each sample.", "The MDA content and the activities of SOD and GPx were measured with a spectrophotometer, using the appropriate kits (Nanjing Jiancheng Biochemistry Co, Nanjing, China), according to the manufacturer’s instructions. Total protein concentration was determined by the Bradford method. The MDA level and the activities of SOD and GPx were expressed as nmol/mg protein and U/mg protein, respectively.", "The SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analyses. Data were expressed as mean ± standard error of the mean (SEM), and the differences were evaluated by one-way analysis of variance (ANOVA), followed by Tukey’s test. Statistically significant differences were indicated by P < 0.05.", "The brain water content was also examined to confirm the protection of ketamine 24 h after TBI, as previously described (Figure 1A). The results showed that the TBI and TBI + vehicle groups had significantly increased brain water contents compared with the Sham group. Brain edema was attenuated in the ketamine-treated groups, which was consistent with the results of the neurological score. Specifically, the treatment with 60 mg/kg of ketamine conferred a better effect than the treatment with 30 mg/kg of ketamine. Accordingly, 60 mg/kg of ketamine was used in the subsequent experiments.\nThe neurological score was used to evaluate the motor function of the mice at 24 and 48 h after TBI (Figure 1B). All mice were trained 3 days pre-TBI. At 48 h after TBI, the motor scores of the TBI and TBI + vehicle groups were lower than the scores of the Sham and Sham + ketamine groups. The ketamine-treated group exhibited reduced neurological deficits compared with the TBI group at 24 h, which were also lower than the deficits of the Sham and Sham + ketamine groups. The group that received a higher dose (60 mg/kg) showed a better effect than the low dose (30 mg/kg) for the TBI + ketamine group at 24 and 48 h after TBI, but the highest dose (100 mg/kg) failed to further enhance the neuroprotective effect. There was no difference between the Sham and Sham + ketamine groups. So, the Sham + ketamine group would be absent from the subsequent experiments.", "To investigate the protective effects of ketamine, neuron morphology was examined by Nissl staining. While neurons in the Sham group were clear and intact (Figure 2A), multiple neurons in the TBI and TBI + vehicle groups were damaged, exhibiting extensive changes including oval or triangular nuclei and shrunken cytoplasm (Figure 2B and C). 
However, an improvement in neuronal morphology and cytoarchitecture was observed in TBI mice treated with ketamine (Figure 2D).\nTo investigate the mechanistic basis for the effects of ketamine, TUNEL staining was used to examine neural cells in injured brain tissue. TUNEL-positive cells were detected at a low frequency in the brains of mice in the Sham group (Figure 2E). The apoptotic index was increased in the TBI and TBI + vehicle groups compared to Sham animals (Figure 2F and G), but was reduced in the TBI + ketamine groups (Figure 2H–J). These results indicate that ketamine blocks neural apoptosis induced by TBI.\nAdditionally, Western blot analysis was performed to assess the expression of cleaved caspase-3, an indicator of apoptosis. Cleaved caspase-3 expression increased significantly following TBI, relative to the Sham group. However, the treatment with ketamine reduced the level of cleaved caspase-3, relative to the TBI + vehicle group (Figure 3A and B).", "The protective effects of ketamine against TBI-induced neural apoptosis were examined by Western blot analysis. These effects were reversed in TBI mice treated with ketamine, in which the translocation of Bax was inhibited. The expression of the pro-apoptotic factor Bax increased following TBI when compared with the Sham group (Figure 3A and D), whereas the expression of the anti-apoptotic factor Bcl-2 decreased when compared with the Sham group (Figure 3A and C). However, the treatment with 60 mg/kg of ketamine reversed the expression levels of Bax and Bcl-2 relative to the TBI + vehicle group at 24 h after TBI.", "To evaluate the effect of ketamine on oxidative stress induced by TBI, MDA level, activities of SOD and GPx, indicators of lipid peroxidation, and antioxidant levels, respectively, were assessed. The MDA level was increased in the TBI + vehicle group compared with the Sham group. This effect was mitigated by the administration of 60 mg/kg of ketamine (Figure 4A). The activities of GPx and SOD were both decreased after TBI, while ketamine treatment increased their activity (Figure 4B and C).", "The expression of Nrf2 was investigated by Western blot analysis and immunohistochemistry. Compared with the Sham group, both TBI and ketamine administration induced Nrf2 expression (Figure 5A, B, and D–F). However, compared with the TBI + vehicle group, the TBI + ketamine groups exhibited significantly increased Nrf2 levels, which indicates that ketamine promoted the Nrf2 level following TBI (Figure 5C–F).\nThe expression of NQO-1 and HO-1 was also measured by Western blot analysis. At the protein level, NQO-1 and HO-1 were both upregulated after TBI (Figure 5E, G, and H). Additionally, administration of ketamine further enhanced protein expression compared with the vehicle. These results demonstrate that ketamine induced the expression of factors downstream of Nrf2 via activation of the Nrf2 and Antioxidant Responsive Element (Nrf2-ARE) signaling pathway (Figure 5E, G, and H).", "The present study demonstrated that ketamine attenuated the oxidative reaction by enhancing the expression and activity of antioxidant enzymes in a TBI mouse model. Furthermore, experimental data showed that ketamine exerted neuroprotection against TBI, by partially combating oxidative stress via the activation of the Nrf2 pathway and its downstream proteins. However, further study is still needed to elucidate the underlying mechanism of Nrf2 pathway transformation after ketamine administration." ]
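The quantitative steps in the methods above, brain water content as [(wet weight − dry weight)/wet weight] × 100 and group comparison by one-way ANOVA followed by Tukey's test, can be illustrated with a short sketch. The weights below are invented, and scipy/statsmodels merely stand in for the SPSS 17.0 workflow the authors describe.

```python
# Illustrative sketch with made-up numbers: compute brain water content from wet/dry
# weights, then compare groups by one-way ANOVA followed by Tukey's HSD test.
# (The study itself used SPSS 17.0; scipy/statsmodels are shown only as an
# open-source equivalent, and the group means below are hypothetical.)
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd


def brain_water_content(wet_mg: float, dry_mg: float) -> float:
    """Percentage water content: [(wet weight - dry weight) / wet weight] x 100."""
    return (wet_mg - dry_mg) / wet_mg * 100.0


def simulate_group(n: int, dry_fraction: float, rng: np.random.Generator) -> list[float]:
    """Hypothetical wet weights (~100 mg) with a noisy dry-weight fraction."""
    wet = rng.normal(100.0, 5.0, n)
    dry = wet * rng.normal(dry_fraction, 0.01, n)
    return [brain_water_content(w, d) for w, d in zip(wet, dry)]


rng = np.random.default_rng(0)
values = {
    "Sham": simulate_group(6, 0.22, rng),          # ~78% water
    "TBI+vehicle": simulate_group(6, 0.19, rng),   # ~81% water (edema)
    "TBI+ketamine": simulate_group(6, 0.21, rng),  # ~79% water
}

f_stat, p_val = f_oneway(*values.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

data = np.concatenate(list(values.values()))
labels = np.concatenate([[name] * len(v) for name, v in values.items()])
print(pairwise_tukeyhsd(data, labels, alpha=0.05))
```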
[ null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Animals", "Models of TBI", "Experimental design", "Measurement of the brain water content", "Neurological evaluation", "Nissl staining", "Western blot analysis", "Immunohistochemical staining", "Malondialdehyde (MDA) content and the activities of SOD and GPx", "Statistical analyses", "Results", "Ketamine treatment provided protection in injured brains", "Ketamine reduced neuronal apoptosis", "Expression of apoptotic factors was reduced by ketamine administration", "Ketamine reduced oxidative stress following TBI", "Ketamine promoted the expression of Nrf2 and Nrf2 downstream factors", "Discussion", "Conclusion" ]
[ "Traumatic brain injury (TBI) is defined as a mechanical injury that causes numerous deaths and has an adverse impact on families and society.1,2 Over the past few years, great efforts have been directed toward identifying effective ways to improve the prognosis of TBI; however, many approaches have failed during clinical trials.3 It is widely believed that TBI causes both primary and secondary brain damage. Although the primary brain injury is a major factor, the secondary brain damage precipitates a complex pathological process that can lead to a series of endogenous events, including oxidative stress, glutamate excitotoxicity, and the activation of inflammatory responses.4 Of these processes, oxidative stress plays the most important role in secondary damage,5 not only because of the excessive production of reactive oxygen species (ROS) but also due to the exhaustion of the endogenous antioxidant system.\nNuclear factor erythroid 2-related factor 2 (Nrf2) is a key transcription factor which controls the redox state as the cell’s main defense mechanism against oxidative stress.6,7 Nrf2 acts by activating the antioxidant response, which induces the transcription of a wide range of genes that are able to counteract the harmful effects of oxidative stress, thus restoring intracellular homeostasis.8 Under basal conditions, Nrf2 is localized in the cytoplasm, anchored by the Kelch-like ECH-associated protein (Keap1), and degraded by the ubiquitin-26S proteasome pathway, with a half-life of 20 min.9 Once the cell encounters stimulations, Nrf2 dissociates from Keap1 and is translocated into the nucleus, where it induces the expression of antioxidant enzymes. The enzymes regarded as antioxidants include nicotinamide adenine dinucleotide phosphate (NADPH), quinine oxidoreductase-1 (NQO-1), heme oxidase-1 (HO-1), superoxide dismutase (SOD), and glutathione peroxidase (GPx).10,11 These free radical scavenging enzymes form a powerful antioxidant defense mechanism.\nKetamine, an N-methyl-D-aspartate receptor antagonist, has shown potent anti-inflammatory effects in various models of systemic inflammation, including endotoxemia, ischemia, and burns.12,13 These effects were observed in multiple organ systems and involve the modulation of certain molecular mediators of the inflammatory response.14 In addition, when administered either before or after various pro-inflammatory responses, ketamine was found to diminish systemic production of cytokines and improve survival.15 In models of brain injury and ischemia, ketamine elicits neuroprotective effects.16 Moreover, recent evidence suggests that the use of ketamine could be beneficial for patients with TBI.17 Nevertheless, the effects of ketamine on TBI-induced oxidative stress in the brain remain to be fully elucidated, and the present study was performed to better examine these effects.", " Animals Male Institute of Cancer Research (ICR) mice (Experimental Animal Centre of Fujian Medical University, Fujian, China), weighing 28–32 g, were used in this study. Mice were housed under a 12-h light/dark cycle, with free access to food and water, and were acclimatized for at least 1 week before the experiment. 
Experimental protocols were approved by the Animal Care and Use Committee of Fujian University and complied with the Guide for the Care and Use of Laboratory Animals by the National Institute of Health (NIH, Bethesda, MD, USA).\nMale Institute of Cancer Research (ICR) mice (Experimental Animal Centre of Fujian Medical University, Fujian, China), weighing 28–32 g, were used in this study. Mice were housed under a 12-h light/dark cycle, with free access to food and water, and were acclimatized for at least 1 week before the experiment. Experimental protocols were approved by the Animal Care and Use Committee of Fujian University and complied with the Guide for the Care and Use of Laboratory Animals by the National Institute of Health (NIH, Bethesda, MD, USA).\n Models of TBI The TBI model used in this study was a modified version of Marmarou’s weight drop model.18 Mice were briefly anesthetized with an intraperitoneal injection of 10% chloral hydrate and placed in a stereotaxic frame. Following a 1.5 cm midline scalp incision, the fascia was reflected to expose the skull. After locating the left anterior frontal area (the impact area), a 200 g weight was released onto the skull. During the operation, the dura was kept intact. The scalp incision was subsequently closed and sutured. The mice were then returned to their former cages. Sham animals underwent identical procedures, except the weight drop.\nThe TBI model used in this study was a modified version of Marmarou’s weight drop model.18 Mice were briefly anesthetized with an intraperitoneal injection of 10% chloral hydrate and placed in a stereotaxic frame. Following a 1.5 cm midline scalp incision, the fascia was reflected to expose the skull. After locating the left anterior frontal area (the impact area), a 200 g weight was released onto the skull. During the operation, the dura was kept intact. The scalp incision was subsequently closed and sutured. The mice were then returned to their former cages. Sham animals underwent identical procedures, except the weight drop.\n Experimental design Mice (132 mice were used, 18 mice died) were randomly divided into five groups, namely Sham (n = 24), Sham + ketamine (n = 6), TBI (n = 24), TBI + vehicle (n = 24), TBI + ketamine (three subgroups: 30 [n = 6], 60 [n = 6], and 100 [n = 6] mg/kg). The mice in the TBI + ketamine groups were injected intraperitoneally with the respective dose of ketamine (Hengrui, Jiangsu, China), 30 min after TBI. At the corresponding time points, all mice in the TBI + vehicle group were administered equivalent volumes of vehicle solution. All mice were sacrificed 24 h after TBI.\nMice (132 mice were used, 18 mice died) were randomly divided into five groups, namely Sham (n = 24), Sham + ketamine (n = 6), TBI (n = 24), TBI + vehicle (n = 24), TBI + ketamine (three subgroups: 30 [n = 6], 60 [n = 6], and 100 [n = 6] mg/kg). The mice in the TBI + ketamine groups were injected intraperitoneally with the respective dose of ketamine (Hengrui, Jiangsu, China), 30 min after TBI. At the corresponding time points, all mice in the TBI + vehicle group were administered equivalent volumes of vehicle solution. All mice were sacrificed 24 h after TBI.\n Measurement of the brain water content The brain water content was measured according to a previous study.19 Left brain cortical tissue samples were collected on sacrification. The collected tissue was positioned directly over the injury site, covering the contusion and the penumbra. 
The fresh tissue was weighed to record the wet weight, and then dried for 72 h at 80°C and weighed to record the dry weight. The brain water content was calculated by the formula: [(wet weight − dry weight)/ wet weight] × 100%.\nThe brain water content was measured according to a previous study.19 Left brain cortical tissue samples were collected on sacrification. The collected tissue was positioned directly over the injury site, covering the contusion and the penumbra. The fresh tissue was weighed to record the wet weight, and then dried for 72 h at 80°C and weighed to record the dry weight. The brain water content was calculated by the formula: [(wet weight − dry weight)/ wet weight] × 100%.\n Neurological evaluation Neurological deficit was evaluated by the grip test, which was developed based on the test of gross vestibulomotor function, as described previously.20 Briefly, mice were placed on a thin, horizontal metal wire (45 cm long) that was suspended between two vertical poles, 45 cm above a foam pad.\nEvery mouse was required to perform 10 different tasks that represented motor function, balance, and alertness. One point was given for failing to perform each of the tasks, and thus 0 = minimum deficit and 10 = maximum deficit. A lower score demonstrated less neurological deficits. The severity of injury is defined by the initial NSS, which is evaluated 1 h after TBI. Testing was performed by the investigator who was blinded to the experimental groups. The sequence of the behavioral tasks was randomized.\nNeurological deficit was evaluated by the grip test, which was developed based on the test of gross vestibulomotor function, as described previously.20 Briefly, mice were placed on a thin, horizontal metal wire (45 cm long) that was suspended between two vertical poles, 45 cm above a foam pad.\nEvery mouse was required to perform 10 different tasks that represented motor function, balance, and alertness. One point was given for failing to perform each of the tasks, and thus 0 = minimum deficit and 10 = maximum deficit. A lower score demonstrated less neurological deficits. The severity of injury is defined by the initial NSS, which is evaluated 1 h after TBI. Testing was performed by the investigator who was blinded to the experimental groups. The sequence of the behavioral tasks was randomized.\n Nissl staining For Nissl staining, the sections of paraffin-embedded brain tissue (5 μm thick) were stained by cresyl violet, according to a previous study.20 “Normal” neurons had one or two large, round nuclei, located in the central soma, and abundant cytoplasm. In contrast, positive cells had irregular neuronal cell bodies, shrinking and hyperchromatic nuclei, and dried-up cytoplasm with vacuoles. The histological examination was performed by two observers who were blinded to the group assignment.\nFor Nissl staining, the sections of paraffin-embedded brain tissue (5 μm thick) were stained by cresyl violet, according to a previous study.20 “Normal” neurons had one or two large, round nuclei, located in the central soma, and abundant cytoplasm. In contrast, positive cells had irregular neuronal cell bodies, shrinking and hyperchromatic nuclei, and dried-up cytoplasm with vacuoles. The histological examination was performed by two observers who were blinded to the group assignment.\n Western blot analysis Protein extraction was performed according to the instructions provided by the manufacturer of the Protein Extraction Kit (Beyotime Biotech Inc, Nantong, China). 
Equal amounts of protein were loaded on a 10% or 12% sodium dodecyl sulfate–polyacrylamide gel and then transferred onto polyvinylidene fluoride membranes. The membranes were blocked for 2 h in 5% skimmed milk in TBST at room temperature. After that, the membranes were separately incubated overnight, at 4°C, with antibodies against Nrf2 (1:1,000 diluted, rabbit; Abcam, Cambridge, MA, USA), HO-1 (1:200, rabbit; Santa Cruz Biotechnology Inc., Dallas, TX, USA), and NQO-1 (1:1,000 diluted, rabbit; Abcam), Bax (1:400 diluted, rabbit; Abcam) and cleaved caspase-3 (1:1,000 diluted, rabbit; Cell Signaling Technology, Danvers, MA, USA), Bcl-2 (1:1,000 diluted, rabbit; Cell Signaling Technology), and β-actin (1:5,000 diluted, rabbit; Bioworld Technology, St Louis Park, MN, USA) in blocking buffer. Then, the membranes were washed with TBST for 15 min and were incubated with the secondary antibodies (1:5,000 diluted, goat; Bioworld Technology) for 2 h. After three 15-min washes with TBST, the protein bands were visualized by enhanced chemiluminescence (ECL) (EMD Millipore, Billerica, MA, USA) and exposure to X-ray film. All the results were analyzed with the Un-Scan-It 6.1 software (Silk Scientific Inc., Orem, UT, USA). The density of each band was separately quantified.\nProtein extraction was performed according to the instructions provided by the manufacturer of the Protein Extraction Kit (Beyotime Biotech Inc, Nantong, China). Equal amounts of protein were loaded on a 10% or 12% sodium dodecyl sulfate–polyacrylamide gel and then transferred onto polyvinylidene fluoride membranes. The membranes were blocked for 2 h in 5% skimmed milk in TBST at room temperature. After that, the membranes were separately incubated overnight, at 4°C, with antibodies against Nrf2 (1:1,000 diluted, rabbit; Abcam, Cambridge, MA, USA), HO-1 (1:200, rabbit; Santa Cruz Biotechnology Inc., Dallas, TX, USA), and NQO-1 (1:1,000 diluted, rabbit; Abcam), Bax (1:400 diluted, rabbit; Abcam) and cleaved caspase-3 (1:1,000 diluted, rabbit; Cell Signaling Technology, Danvers, MA, USA), Bcl-2 (1:1,000 diluted, rabbit; Cell Signaling Technology), and β-actin (1:5,000 diluted, rabbit; Bioworld Technology, St Louis Park, MN, USA) in blocking buffer. Then, the membranes were washed with TBST for 15 min and were incubated with the secondary antibodies (1:5,000 diluted, goat; Bioworld Technology) for 2 h. After three 15-min washes with TBST, the protein bands were visualized by enhanced chemiluminescence (ECL) (EMD Millipore, Billerica, MA, USA) and exposure to X-ray film. All the results were analyzed with the Un-Scan-It 6.1 software (Silk Scientific Inc., Orem, UT, USA). The density of each band was separately quantified.\n Immunohistochemical staining Tissue sections (4 μm thick) were incubated overnight with the primary antibody against Nrf2 (1:100; Abcam) at 4°C. Following a 15-min wash in phosphate-buffered saline (PBS), the sections were incubated with horse radish peroxidase (HRP)-conjugated IgG (diluted 1:400, rabbit; Santa Cruz Biotechnology Inc.) for 1 h at room temperature. After three washes with PBS, the immunolabeled protein was visualized as brown staining, after staining with diaminobenzidine and the hematoxylin counterstaining. In each coronary section, six randomly selected fields were observed under a light microscope (ECLIPSE E100; Nikon Corporation, Tokyo, Japan). 
Then, the number of positive neurons in each of the six fields was recorded, and the average number of positive neurons was calculated for each sample.\nTissue sections (4 μm thick) were incubated overnight with the primary antibody against Nrf2 (1:100; Abcam) at 4°C. Following a 15-min wash in phosphate-buffered saline (PBS), the sections were incubated with horse radish peroxidase (HRP)-conjugated IgG (diluted 1:400, rabbit; Santa Cruz Biotechnology Inc.) for 1 h at room temperature. After three washes with PBS, the immunolabeled protein was visualized as brown staining, after staining with diaminobenzidine and the hematoxylin counterstaining. In each coronary section, six randomly selected fields were observed under a light microscope (ECLIPSE E100; Nikon Corporation, Tokyo, Japan). Then, the number of positive neurons in each of the six fields was recorded, and the average number of positive neurons was calculated for each sample.\n Malondialdehyde (MDA) content and the activities of SOD and GPx The MDA content and the activities of SOD and GPx were measured with a spectrophotometer, using the appropriate kits (Nanjing Jiancheng Biochemistry Co, Nanjing, China), according to the manufacturer’s instructions. Total protein concentration was determined by the Bradford method. The MDA level and the activities of SOD and GPx were expressed as nmol/mg protein and U/mg protein, respectively.\nThe MDA content and the activities of SOD and GPx were measured with a spectrophotometer, using the appropriate kits (Nanjing Jiancheng Biochemistry Co, Nanjing, China), according to the manufacturer’s instructions. Total protein concentration was determined by the Bradford method. The MDA level and the activities of SOD and GPx were expressed as nmol/mg protein and U/mg protein, respectively.\n Statistical analyses The SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analyses. Data were expressed as mean ± standard error of the mean (SEM), and the differences were evaluated by one-way analysis of variance (ANOVA), followed by Tukey’s test. Statistically significant differences were indicated by P < 0.05.\nThe SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analyses. Data were expressed as mean ± standard error of the mean (SEM), and the differences were evaluated by one-way analysis of variance (ANOVA), followed by Tukey’s test. Statistically significant differences were indicated by P < 0.05.", "Male Institute of Cancer Research (ICR) mice (Experimental Animal Centre of Fujian Medical University, Fujian, China), weighing 28–32 g, were used in this study. Mice were housed under a 12-h light/dark cycle, with free access to food and water, and were acclimatized for at least 1 week before the experiment. Experimental protocols were approved by the Animal Care and Use Committee of Fujian University and complied with the Guide for the Care and Use of Laboratory Animals by the National Institute of Health (NIH, Bethesda, MD, USA).", "The TBI model used in this study was a modified version of Marmarou’s weight drop model.18 Mice were briefly anesthetized with an intraperitoneal injection of 10% chloral hydrate and placed in a stereotaxic frame. Following a 1.5 cm midline scalp incision, the fascia was reflected to expose the skull. After locating the left anterior frontal area (the impact area), a 200 g weight was released onto the skull. During the operation, the dura was kept intact. The scalp incision was subsequently closed and sutured. 
The mice were then returned to their former cages. Sham animals underwent identical procedures, except the weight drop.", "Mice (132 mice were used, 18 mice died) were randomly divided into five groups, namely Sham (n = 24), Sham + ketamine (n = 6), TBI (n = 24), TBI + vehicle (n = 24), TBI + ketamine (three subgroups: 30 [n = 6], 60 [n = 6], and 100 [n = 6] mg/kg). The mice in the TBI + ketamine groups were injected intraperitoneally with the respective dose of ketamine (Hengrui, Jiangsu, China), 30 min after TBI. At the corresponding time points, all mice in the TBI + vehicle group were administered equivalent volumes of vehicle solution. All mice were sacrificed 24 h after TBI.", "The brain water content was measured according to a previous study.19 Left brain cortical tissue samples were collected on sacrification. The collected tissue was positioned directly over the injury site, covering the contusion and the penumbra. The fresh tissue was weighed to record the wet weight, and then dried for 72 h at 80°C and weighed to record the dry weight. The brain water content was calculated by the formula: [(wet weight − dry weight)/ wet weight] × 100%.", "Neurological deficit was evaluated by the grip test, which was developed based on the test of gross vestibulomotor function, as described previously.20 Briefly, mice were placed on a thin, horizontal metal wire (45 cm long) that was suspended between two vertical poles, 45 cm above a foam pad.\nEvery mouse was required to perform 10 different tasks that represented motor function, balance, and alertness. One point was given for failing to perform each of the tasks, and thus 0 = minimum deficit and 10 = maximum deficit. A lower score demonstrated less neurological deficits. The severity of injury is defined by the initial NSS, which is evaluated 1 h after TBI. Testing was performed by the investigator who was blinded to the experimental groups. The sequence of the behavioral tasks was randomized.", "For Nissl staining, the sections of paraffin-embedded brain tissue (5 μm thick) were stained by cresyl violet, according to a previous study.20 “Normal” neurons had one or two large, round nuclei, located in the central soma, and abundant cytoplasm. In contrast, positive cells had irregular neuronal cell bodies, shrinking and hyperchromatic nuclei, and dried-up cytoplasm with vacuoles. The histological examination was performed by two observers who were blinded to the group assignment.", "Protein extraction was performed according to the instructions provided by the manufacturer of the Protein Extraction Kit (Beyotime Biotech Inc, Nantong, China). Equal amounts of protein were loaded on a 10% or 12% sodium dodecyl sulfate–polyacrylamide gel and then transferred onto polyvinylidene fluoride membranes. The membranes were blocked for 2 h in 5% skimmed milk in TBST at room temperature. After that, the membranes were separately incubated overnight, at 4°C, with antibodies against Nrf2 (1:1,000 diluted, rabbit; Abcam, Cambridge, MA, USA), HO-1 (1:200, rabbit; Santa Cruz Biotechnology Inc., Dallas, TX, USA), and NQO-1 (1:1,000 diluted, rabbit; Abcam), Bax (1:400 diluted, rabbit; Abcam) and cleaved caspase-3 (1:1,000 diluted, rabbit; Cell Signaling Technology, Danvers, MA, USA), Bcl-2 (1:1,000 diluted, rabbit; Cell Signaling Technology), and β-actin (1:5,000 diluted, rabbit; Bioworld Technology, St Louis Park, MN, USA) in blocking buffer. 
Then, the membranes were washed with TBST for 15 min and were incubated with the secondary antibodies (1:5,000 diluted, goat; Bioworld Technology) for 2 h. After three 15-min washes with TBST, the protein bands were visualized by enhanced chemiluminescence (ECL) (EMD Millipore, Billerica, MA, USA) and exposure to X-ray film. All the results were analyzed with the Un-Scan-It 6.1 software (Silk Scientific Inc., Orem, UT, USA). The density of each band was separately quantified.", "Tissue sections (4 μm thick) were incubated overnight with the primary antibody against Nrf2 (1:100; Abcam) at 4°C. Following a 15-min wash in phosphate-buffered saline (PBS), the sections were incubated with horse radish peroxidase (HRP)-conjugated IgG (diluted 1:400, rabbit; Santa Cruz Biotechnology Inc.) for 1 h at room temperature. After three washes with PBS, the immunolabeled protein was visualized as brown staining, after staining with diaminobenzidine and the hematoxylin counterstaining. In each coronary section, six randomly selected fields were observed under a light microscope (ECLIPSE E100; Nikon Corporation, Tokyo, Japan). Then, the number of positive neurons in each of the six fields was recorded, and the average number of positive neurons was calculated for each sample.", "The MDA content and the activities of SOD and GPx were measured with a spectrophotometer, using the appropriate kits (Nanjing Jiancheng Biochemistry Co, Nanjing, China), according to the manufacturer’s instructions. Total protein concentration was determined by the Bradford method. The MDA level and the activities of SOD and GPx were expressed as nmol/mg protein and U/mg protein, respectively.", "The SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analyses. Data were expressed as mean ± standard error of the mean (SEM), and the differences were evaluated by one-way analysis of variance (ANOVA), followed by Tukey’s test. Statistically significant differences were indicated by P < 0.05.", " Ketamine treatment provided protection in injured brains The brain water content was also examined to confirm the protection of ketamine 24 h after TBI, as previously described (Figure 1A). The results showed that the TBI and TBI + vehicle groups had significantly increased brain water contents compared with the Sham group. Brain edema was attenuated in the ketamine-treated groups, which was consistent with the results of the neurological score. Specifically, the treatment with 60 mg/kg of ketamine conferred a better effect than the treatment with 30 mg/kg of ketamine. Accordingly, 60 mg/kg of ketamine was used in the subsequent experiments.\nThe neurological score was used to evaluate the motor function of the mice at 24 and 48 h after TBI (Figure 1B). All mice were trained 3 days pre-TBI. At 48 h after TBI, the motor scores of the TBI and TBI + vehicle groups were lower than the scores of the Sham and Sham + ketamine groups. The ketamine-treated group exhibited reduced neurological deficits compared with the TBI group at 24 h, which were also lower than the deficits of the Sham and Sham + ketamine groups. The group that received a higher dose (60 mg/kg) showed a better effect than the low dose (30 mg/kg) for the TBI + ketamine group at 24 and 48 h after TBI, but the highest dose (100 mg/kg) failed to further enhance the neuroprotective effect. There was no difference between the Sham and Sham + ketamine groups. 
Therefore, the Sham + ketamine group was excluded from the subsequent experiments.\n Ketamine reduced neuronal apoptosis To investigate the protective effects of ketamine, neuron morphology was examined by Nissl staining. While neurons in the Sham group were clear and intact (Figure 2A), multiple neurons in the TBI and TBI + vehicle groups were damaged, exhibiting extensive changes including oval or triangular nuclei and shrunken cytoplasm (Figure 2B and C). However, an improvement in neuronal morphology and cytoarchitecture was observed in TBI mice treated with ketamine (Figure 2D).\nTo investigate the mechanistic basis for the effects of ketamine, TUNEL staining was used to examine neural cells in injured brain tissue. TUNEL-positive cells were detected at a low frequency in the brains of mice in the Sham group (Figure 2E). The apoptotic index was increased in the TBI and TBI + vehicle groups compared to Sham animals (Figure 2F and G), but was reduced in the TBI + ketamine groups (Figure 2H–J). These results indicate that ketamine blocks neural apoptosis induced by TBI.\nAdditionally, Western blot analysis was performed to assess the expression of cleaved caspase-3, an indicator of apoptosis. Cleaved caspase-3 expression increased significantly following TBI, relative to the Sham group. However, the treatment with ketamine reduced the level of cleaved caspase-3, relative to the TBI + vehicle group (Figure 3A and B).\n Expression of apoptotic factors was reduced by ketamine administration The protective effects of ketamine against TBI-induced neural apoptosis were examined by Western blot analysis. The expression of the pro-apoptotic factor Bax increased following TBI when compared with the Sham group (Figure 3A and D), whereas the expression of the anti-apoptotic factor Bcl-2 decreased when compared with the Sham group (Figure 3A and C). These changes were reversed in TBI mice treated with ketamine, in which the translocation of Bax was inhibited; specifically, the treatment with 60 mg/kg of ketamine reversed the changes in Bax and Bcl-2 expression relative to the TBI + vehicle group at 24 h after TBI.\n Ketamine reduced oxidative stress following TBI To evaluate the effect of ketamine on oxidative stress induced by TBI, the MDA level (an indicator of lipid peroxidation) and the activities of SOD and GPx (indicators of antioxidant capacity) were assessed. The MDA level was increased in the TBI + vehicle group compared with the Sham group. This effect was mitigated by the administration of 60 mg/kg of ketamine (Figure 4A). The activities of GPx and SOD were both decreased after TBI, while ketamine treatment increased their activity (Figure 4B and C).\n Ketamine promoted the expression of Nrf2 and Nrf2 downstream factors The expression of Nrf2 was investigated by Western blot analysis and immunohistochemistry. Compared with the Sham group, both TBI and ketamine administration induced Nrf2 expression (Figure 5A, B, and D–F). 
However, compared with the TBI + vehicle group, the TBI + ketamine groups exhibited significantly increased Nrf2 levels, which indicates that ketamine promoted the Nrf2 level following TBI (Figure 5C–F).\nThe expression of NQO-1 and HO-1 was also measured by Western blot analysis. At the protein level, NQO-1 and HO-1 were both upregulated after TBI (Figure 5E, G, and H). Additionally, administration of ketamine further enhanced protein expression compared with the vehicle. These results demonstrate that ketamine induced the expression of factors downstream of Nrf2 via activation of the Nrf2 and Antioxidant Responsive Element (Nrf2-ARE) signaling pathway (Figure 5E, G, and H).\nThe expression of Nrf2 was investigated by Western blot analysis and immunohistochemistry. Compared with the Sham group, both TBI and ketamine administration induced Nrf2 expression (Figure 5A, B, and D–F). However, compared with the TBI + vehicle group, the TBI + ketamine groups exhibited significantly increased Nrf2 levels, which indicates that ketamine promoted the Nrf2 level following TBI (Figure 5C–F).\nThe expression of NQO-1 and HO-1 was also measured by Western blot analysis. At the protein level, NQO-1 and HO-1 were both upregulated after TBI (Figure 5E, G, and H). Additionally, administration of ketamine further enhanced protein expression compared with the vehicle. These results demonstrate that ketamine induced the expression of factors downstream of Nrf2 via activation of the Nrf2 and Antioxidant Responsive Element (Nrf2-ARE) signaling pathway (Figure 5E, G, and H).", "The brain water content was also examined to confirm the protection of ketamine 24 h after TBI, as previously described (Figure 1A). The results showed that the TBI and TBI + vehicle groups had significantly increased brain water contents compared with the Sham group. Brain edema was attenuated in the ketamine-treated groups, which was consistent with the results of the neurological score. Specifically, the treatment with 60 mg/kg of ketamine conferred a better effect than the treatment with 30 mg/kg of ketamine. Accordingly, 60 mg/kg of ketamine was used in the subsequent experiments.\nThe neurological score was used to evaluate the motor function of the mice at 24 and 48 h after TBI (Figure 1B). All mice were trained 3 days pre-TBI. At 48 h after TBI, the motor scores of the TBI and TBI + vehicle groups were lower than the scores of the Sham and Sham + ketamine groups. The ketamine-treated group exhibited reduced neurological deficits compared with the TBI group at 24 h, which were also lower than the deficits of the Sham and Sham + ketamine groups. The group that received a higher dose (60 mg/kg) showed a better effect than the low dose (30 mg/kg) for the TBI + ketamine group at 24 and 48 h after TBI, but the highest dose (100 mg/kg) failed to further enhance the neuroprotective effect. There was no difference between the Sham and Sham + ketamine groups. So, the Sham + ketamine group would be absent from the subsequent experiments.", "To investigate the protective effects of ketamine, neuron morphology was examined by Nissl staining. While neurons in the Sham group were clear and intact (Figure 2A), multiple neurons in the TBI and TBI + vehicle groups were damaged, exhibiting extensive changes including oval or triangular nuclei and shrunken cytoplasm (Figure 2B and C). 
However, an improvement in neuronal morphology and cytoarchitecture was observed in TBI mice treated with ketamine (Figure 2D).\nTo investigate the mechanistic basis for the effects of ketamine, TUNEL staining was used to examine neural cells in injured brain tissue. TUNEL-positive cells were detected at a low frequency in the brains of mice in the Sham group (Figure 2E). The apoptotic index was increased in the TBI and TBI + vehicle groups compared to Sham animals (Figure 2F and G), but was reduced in the TBI + ketamine groups (Figure 2H–J). These results indicate that ketamine blocks neural apoptosis induced by TBI.\nAdditionally, Western blot analysis was performed to assess the expression of cleaved caspase-3, an indicator of apoptosis. Cleaved caspase-3 expression increased significantly following TBI, relative to the Sham group. However, the treatment with ketamine reduced the level of cleaved caspase-3, relative to the TBI + vehicle group (Figure 3A and B).", "The protective effects of ketamine against TBI-induced neural apoptosis were examined by Western blot analysis. These effects were reversed in TBI mice treated with ketamine, in which the translocation of Bax was inhibited. The expression of the pro-apoptotic factor Bax increased following TBI when compared with the Sham group (Figure 3A and D), whereas the expression of the anti-apoptotic factor Bcl-2 decreased when compared with the Sham group (Figure 3A and C). However, the treatment with 60 mg/kg of ketamine reversed the expression levels of Bax and Bcl-2 relative to the TBI + vehicle group at 24 h after TBI.", "To evaluate the effect of ketamine on oxidative stress induced by TBI, MDA level, activities of SOD and GPx, indicators of lipid peroxidation, and antioxidant levels, respectively, were assessed. The MDA level was increased in the TBI + vehicle group compared with the Sham group. This effect was mitigated by the administration of 60 mg/kg of ketamine (Figure 4A). The activities of GPx and SOD were both decreased after TBI, while ketamine treatment increased their activity (Figure 4B and C).", "The expression of Nrf2 was investigated by Western blot analysis and immunohistochemistry. Compared with the Sham group, both TBI and ketamine administration induced Nrf2 expression (Figure 5A, B, and D–F). However, compared with the TBI + vehicle group, the TBI + ketamine groups exhibited significantly increased Nrf2 levels, which indicates that ketamine promoted the Nrf2 level following TBI (Figure 5C–F).\nThe expression of NQO-1 and HO-1 was also measured by Western blot analysis. At the protein level, NQO-1 and HO-1 were both upregulated after TBI (Figure 5E, G, and H). Additionally, administration of ketamine further enhanced protein expression compared with the vehicle. 
These results demonstrate that ketamine induced the expression of factors downstream of Nrf2 via activation of the Nrf2 and Antioxidant Responsive Element (Nrf2-ARE) signaling pathway (Figure 5E, G, and H).", "The secondary injury after TBI represents consecutive pathological processes that consist of excitotoxicity, activation of inflammatory responses, oxidative stress, etc.21 Several oxidants and their derivatives are generated after TBI, which enhances the production of ROS along with the exhaustion of antioxidant defense enzymes, such as SOD, catalase, and GPx.22 Substantial evidence indicates that oxidative stress is a major contributor to the pathophysiology of TBI.23 Thus, neuroprotective approaches must be aimed at limiting and reversing oxidative stress.\nKetamine is often used as a kind of anesthetic in clinical work and animal experiments. It appears to exert effects through not only NMDA receptors but also non-NMDA receptor mechanisms.24 In a previous study, 10 or 50 mg/kg of ketamine was used as a subanaesthetic dose and 100 mg/kg of ketamine as an anesthetic dose.25 The higher dose (60 mg/kg) showed a better effect than the lower dose (30 mg/kg) at 24 and 48 h after TBI, but the highest dose (100 mg/kg) failed to further enhance the neuroprotective effect. It demonstrated that the protective effect of ketamine was related to dosage in the case of subanaesthesia. Ketamine is a highly effective antioxidant that has been frequently used as a protectant in many kinds of traumatic injury studies although the effect is incompletely understood yet.26–28\nThe present study showed that the treatment with ketamine after TBI attenuated brain contusion-induced oxidative insult, alleviated brain edema, improved neurological function scores, and prevented brain neuronal loss after TBI.\nThis beneficial role led us to consider the neuroprotective effects and the mechanisms underlying oxidative insult. The ability of ketamine to stimulate the activity of antioxidant enzymes plays a critical role in its antioxidative characteristics.29 Lipid peroxidation, which refers to the oxidative degradation of lipids, increases membrane permeability, leading to cell damage. MDA has been utilized as an index of lipid peroxidation. In addition, both SOD and GPx are antioxidant enzymes that catalyze the reduction of glutathione.30,31 The conversion of MDA and the activities of SOD and GPx in the cortex of mice with TBI indicate that oxidative stress occurs following TBI. Apart from that, the results of this study also showed that the treatment with ketamine partly relieves this impact, suggesting that ketamine could attenuate TBI-induced oxidative stress.\nThe Nrf2 pathway plays an important role in cellular adaptation to oxidative stress by upregulating Phase II enzymes.3,32 In addition, previous studies indicated that Nrf2 protein as well as Phase II enzymes, such as NQO-1 and HO-1, are activated after TBI.33,34 The present study demonstrated that exposure to ketamine (60 mg/kg) after TBI resulted in a significant increase in Nrf2 expression, as well as in the level of NQO-1 and HO-1, through the activation of the Nrf2 signaling pathway.", "The present study demonstrated that ketamine attenuated the oxidative reaction by enhancing the expression and activity of antioxidant enzymes in a TBI mouse model. Furthermore, experimental data showed that ketamine exerted neuroprotection against TBI, by partially combating oxidative stress via the activation of the Nrf2 pathway and its downstream proteins. 
However, further study is still needed to elucidate the underlying mechanism of Nrf2 pathway activation after ketamine administration." ]
[ "intro", "materials|methods", null, null, "methods", null, null, null, "methods", null, null, null, "results", null, null, null, null, null, "discussion", null ]
[ "traumatic brain injury", "ketamine", "oxidative stress", "Nrf2", "apoptosis" ]
Introduction: Traumatic brain injury (TBI) is defined as a mechanical injury that causes numerous deaths and has an adverse impact on families and society.1,2 Over the past few years, great efforts have been directed toward identifying effective ways to improve the prognosis of TBI; however, many approaches have failed during clinical trials.3 It is widely believed that TBI causes both primary and secondary brain damage. Although the primary brain injury is a major factor, the secondary brain damage precipitates a complex pathological process that can lead to a series of endogenous events, including oxidative stress, glutamate excitotoxicity, and the activation of inflammatory responses.4 Of these processes, oxidative stress plays the most important role in secondary damage,5 not only because of the excessive production of reactive oxygen species (ROS) but also due to the exhaustion of the endogenous antioxidant system. Nuclear factor erythroid 2-related factor 2 (Nrf2) is a key transcription factor which controls the redox state as the cell’s main defense mechanism against oxidative stress.6,7 Nrf2 acts by activating the antioxidant response, which induces the transcription of a wide range of genes that are able to counteract the harmful effects of oxidative stress, thus restoring intracellular homeostasis.8 Under basal conditions, Nrf2 is localized in the cytoplasm, anchored by the Kelch-like ECH-associated protein (Keap1), and degraded by the ubiquitin-26S proteasome pathway, with a half-life of 20 min.9 Once the cell encounters stimulations, Nrf2 dissociates from Keap1 and is translocated into the nucleus, where it induces the expression of antioxidant enzymes. The enzymes regarded as antioxidants include nicotinamide adenine dinucleotide phosphate (NADPH), quinine oxidoreductase-1 (NQO-1), heme oxidase-1 (HO-1), superoxide dismutase (SOD), and glutathione peroxidase (GPx).10,11 These free radical scavenging enzymes form a powerful antioxidant defense mechanism. Ketamine, an N-methyl-D-aspartate receptor antagonist, has shown potent anti-inflammatory effects in various models of systemic inflammation, including endotoxemia, ischemia, and burns.12,13 These effects were observed in multiple organ systems and involve the modulation of certain molecular mediators of the inflammatory response.14 In addition, when administered either before or after various pro-inflammatory responses, ketamine was found to diminish systemic production of cytokines and improve survival.15 In models of brain injury and ischemia, ketamine elicits neuroprotective effects.16 Moreover, recent evidence suggests that the use of ketamine could be beneficial for patients with TBI.17 Nevertheless, the effects of ketamine on TBI-induced oxidative stress in the brain remain to be fully elucidated, and the present study was performed to better examine these effects. Materials and methods: Animals Male Institute of Cancer Research (ICR) mice (Experimental Animal Centre of Fujian Medical University, Fujian, China), weighing 28–32 g, were used in this study. Mice were housed under a 12-h light/dark cycle, with free access to food and water, and were acclimatized for at least 1 week before the experiment. Experimental protocols were approved by the Animal Care and Use Committee of Fujian University and complied with the Guide for the Care and Use of Laboratory Animals by the National Institute of Health (NIH, Bethesda, MD, USA). 
Male Institute of Cancer Research (ICR) mice (Experimental Animal Centre of Fujian Medical University, Fujian, China), weighing 28–32 g, were used in this study. Mice were housed under a 12-h light/dark cycle, with free access to food and water, and were acclimatized for at least 1 week before the experiment. Experimental protocols were approved by the Animal Care and Use Committee of Fujian University and complied with the Guide for the Care and Use of Laboratory Animals by the National Institute of Health (NIH, Bethesda, MD, USA). Models of TBI The TBI model used in this study was a modified version of Marmarou’s weight drop model.18 Mice were briefly anesthetized with an intraperitoneal injection of 10% chloral hydrate and placed in a stereotaxic frame. Following a 1.5 cm midline scalp incision, the fascia was reflected to expose the skull. After locating the left anterior frontal area (the impact area), a 200 g weight was released onto the skull. During the operation, the dura was kept intact. The scalp incision was subsequently closed and sutured. The mice were then returned to their former cages. Sham animals underwent identical procedures, except the weight drop. The TBI model used in this study was a modified version of Marmarou’s weight drop model.18 Mice were briefly anesthetized with an intraperitoneal injection of 10% chloral hydrate and placed in a stereotaxic frame. Following a 1.5 cm midline scalp incision, the fascia was reflected to expose the skull. After locating the left anterior frontal area (the impact area), a 200 g weight was released onto the skull. During the operation, the dura was kept intact. The scalp incision was subsequently closed and sutured. The mice were then returned to their former cages. Sham animals underwent identical procedures, except the weight drop. Experimental design Mice (132 mice were used, 18 mice died) were randomly divided into five groups, namely Sham (n = 24), Sham + ketamine (n = 6), TBI (n = 24), TBI + vehicle (n = 24), TBI + ketamine (three subgroups: 30 [n = 6], 60 [n = 6], and 100 [n = 6] mg/kg). The mice in the TBI + ketamine groups were injected intraperitoneally with the respective dose of ketamine (Hengrui, Jiangsu, China), 30 min after TBI. At the corresponding time points, all mice in the TBI + vehicle group were administered equivalent volumes of vehicle solution. All mice were sacrificed 24 h after TBI. Mice (132 mice were used, 18 mice died) were randomly divided into five groups, namely Sham (n = 24), Sham + ketamine (n = 6), TBI (n = 24), TBI + vehicle (n = 24), TBI + ketamine (three subgroups: 30 [n = 6], 60 [n = 6], and 100 [n = 6] mg/kg). The mice in the TBI + ketamine groups were injected intraperitoneally with the respective dose of ketamine (Hengrui, Jiangsu, China), 30 min after TBI. At the corresponding time points, all mice in the TBI + vehicle group were administered equivalent volumes of vehicle solution. All mice were sacrificed 24 h after TBI. Measurement of the brain water content The brain water content was measured according to a previous study.19 Left brain cortical tissue samples were collected on sacrification. The collected tissue was positioned directly over the injury site, covering the contusion and the penumbra. The fresh tissue was weighed to record the wet weight, and then dried for 72 h at 80°C and weighed to record the dry weight. The brain water content was calculated by the formula: [(wet weight − dry weight)/ wet weight] × 100%. 
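The wet-dry calculation just described reduces to a one-line formula. A minimal sketch in Python is shown below purely for illustration; the function name and the example weights are hypothetical and are not taken from the study's data.

```python
def brain_water_content(wet_weight_mg: float, dry_weight_mg: float) -> float:
    """Percent brain water by the wet-dry method:
    [(wet weight - dry weight) / wet weight] x 100%.
    """
    if wet_weight_mg <= 0 or dry_weight_mg <= 0 or dry_weight_mg > wet_weight_mg:
        raise ValueError("weights must satisfy 0 < dry weight <= wet weight")
    return (wet_weight_mg - dry_weight_mg) / wet_weight_mg * 100.0


# Hypothetical example: a 100 mg wet sample weighing 22 mg after drying for 72 h at 80°C
print(round(brain_water_content(100.0, 22.0), 1))  # -> 78.0 (% brain water)
```

Higher percentages correspond to greater brain edema, which is how the cortical samples described above were compared across groups.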
Neurological evaluation Neurological deficit was evaluated by the grip test, which was developed based on the test of gross vestibulomotor function, as described previously.20 Briefly, mice were placed on a thin, horizontal metal wire (45 cm long) that was suspended between two vertical poles, 45 cm above a foam pad. Every mouse was required to perform 10 different tasks that represented motor function, balance, and alertness. One point was given for failing to perform each of the tasks, and thus 0 = minimum deficit and 10 = maximum deficit. A lower score demonstrated fewer neurological deficits. The severity of injury was defined by the initial neurological severity score (NSS), which was evaluated 1 h after TBI. Testing was performed by the investigator who was blinded to the experimental groups. The sequence of the behavioral tasks was randomized. Nissl staining For Nissl staining, the sections of paraffin-embedded brain tissue (5 μm thick) were stained by cresyl violet, according to a previous study.20 “Normal” neurons had one or two large, round nuclei, located in the central soma, and abundant cytoplasm. In contrast, positive cells had irregular neuronal cell bodies, shrinking and hyperchromatic nuclei, and dried-up cytoplasm with vacuoles. The histological examination was performed by two observers who were blinded to the group assignment. Western blot analysis Protein extraction was performed according to the instructions provided by the manufacturer of the Protein Extraction Kit (Beyotime Biotech Inc, Nantong, China). Equal amounts of protein were loaded on a 10% or 12% sodium dodecyl sulfate–polyacrylamide gel and then transferred onto polyvinylidene fluoride membranes. The membranes were blocked for 2 h in 5% skimmed milk in TBST at room temperature. 
After that, the membranes were separately incubated overnight, at 4°C, with antibodies against Nrf2 (1:1,000 diluted, rabbit; Abcam, Cambridge, MA, USA), HO-1 (1:200, rabbit; Santa Cruz Biotechnology Inc., Dallas, TX, USA), and NQO-1 (1:1,000 diluted, rabbit; Abcam), Bax (1:400 diluted, rabbit; Abcam) and cleaved caspase-3 (1:1,000 diluted, rabbit; Cell Signaling Technology, Danvers, MA, USA), Bcl-2 (1:1,000 diluted, rabbit; Cell Signaling Technology), and β-actin (1:5,000 diluted, rabbit; Bioworld Technology, St Louis Park, MN, USA) in blocking buffer. Then, the membranes were washed with TBST for 15 min and were incubated with the secondary antibodies (1:5,000 diluted, goat; Bioworld Technology) for 2 h. After three 15-min washes with TBST, the protein bands were visualized by enhanced chemiluminescence (ECL) (EMD Millipore, Billerica, MA, USA) and exposure to X-ray film. All the results were analyzed with the Un-Scan-It 6.1 software (Silk Scientific Inc., Orem, UT, USA). The density of each band was separately quantified. Protein extraction was performed according to the instructions provided by the manufacturer of the Protein Extraction Kit (Beyotime Biotech Inc, Nantong, China). Equal amounts of protein were loaded on a 10% or 12% sodium dodecyl sulfate–polyacrylamide gel and then transferred onto polyvinylidene fluoride membranes. The membranes were blocked for 2 h in 5% skimmed milk in TBST at room temperature. After that, the membranes were separately incubated overnight, at 4°C, with antibodies against Nrf2 (1:1,000 diluted, rabbit; Abcam, Cambridge, MA, USA), HO-1 (1:200, rabbit; Santa Cruz Biotechnology Inc., Dallas, TX, USA), and NQO-1 (1:1,000 diluted, rabbit; Abcam), Bax (1:400 diluted, rabbit; Abcam) and cleaved caspase-3 (1:1,000 diluted, rabbit; Cell Signaling Technology, Danvers, MA, USA), Bcl-2 (1:1,000 diluted, rabbit; Cell Signaling Technology), and β-actin (1:5,000 diluted, rabbit; Bioworld Technology, St Louis Park, MN, USA) in blocking buffer. Then, the membranes were washed with TBST for 15 min and were incubated with the secondary antibodies (1:5,000 diluted, goat; Bioworld Technology) for 2 h. After three 15-min washes with TBST, the protein bands were visualized by enhanced chemiluminescence (ECL) (EMD Millipore, Billerica, MA, USA) and exposure to X-ray film. All the results were analyzed with the Un-Scan-It 6.1 software (Silk Scientific Inc., Orem, UT, USA). The density of each band was separately quantified. Immunohistochemical staining Tissue sections (4 μm thick) were incubated overnight with the primary antibody against Nrf2 (1:100; Abcam) at 4°C. Following a 15-min wash in phosphate-buffered saline (PBS), the sections were incubated with horse radish peroxidase (HRP)-conjugated IgG (diluted 1:400, rabbit; Santa Cruz Biotechnology Inc.) for 1 h at room temperature. After three washes with PBS, the immunolabeled protein was visualized as brown staining, after staining with diaminobenzidine and the hematoxylin counterstaining. In each coronary section, six randomly selected fields were observed under a light microscope (ECLIPSE E100; Nikon Corporation, Tokyo, Japan). Then, the number of positive neurons in each of the six fields was recorded, and the average number of positive neurons was calculated for each sample. Tissue sections (4 μm thick) were incubated overnight with the primary antibody against Nrf2 (1:100; Abcam) at 4°C. 
Following a 15-min wash in phosphate-buffered saline (PBS), the sections were incubated with horse radish peroxidase (HRP)-conjugated IgG (diluted 1:400, rabbit; Santa Cruz Biotechnology Inc.) for 1 h at room temperature. After three washes with PBS, the immunolabeled protein was visualized as brown staining, after staining with diaminobenzidine and the hematoxylin counterstaining. In each coronary section, six randomly selected fields were observed under a light microscope (ECLIPSE E100; Nikon Corporation, Tokyo, Japan). Then, the number of positive neurons in each of the six fields was recorded, and the average number of positive neurons was calculated for each sample. Malondialdehyde (MDA) content and the activities of SOD and GPx The MDA content and the activities of SOD and GPx were measured with a spectrophotometer, using the appropriate kits (Nanjing Jiancheng Biochemistry Co, Nanjing, China), according to the manufacturer’s instructions. Total protein concentration was determined by the Bradford method. The MDA level and the activities of SOD and GPx were expressed as nmol/mg protein and U/mg protein, respectively. The MDA content and the activities of SOD and GPx were measured with a spectrophotometer, using the appropriate kits (Nanjing Jiancheng Biochemistry Co, Nanjing, China), according to the manufacturer’s instructions. Total protein concentration was determined by the Bradford method. The MDA level and the activities of SOD and GPx were expressed as nmol/mg protein and U/mg protein, respectively. Statistical analyses The SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analyses. Data were expressed as mean ± standard error of the mean (SEM), and the differences were evaluated by one-way analysis of variance (ANOVA), followed by Tukey’s test. Statistically significant differences were indicated by P < 0.05. The SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analyses. Data were expressed as mean ± standard error of the mean (SEM), and the differences were evaluated by one-way analysis of variance (ANOVA), followed by Tukey’s test. Statistically significant differences were indicated by P < 0.05. Animals: Male Institute of Cancer Research (ICR) mice (Experimental Animal Centre of Fujian Medical University, Fujian, China), weighing 28–32 g, were used in this study. Mice were housed under a 12-h light/dark cycle, with free access to food and water, and were acclimatized for at least 1 week before the experiment. Experimental protocols were approved by the Animal Care and Use Committee of Fujian University and complied with the Guide for the Care and Use of Laboratory Animals by the National Institute of Health (NIH, Bethesda, MD, USA). Models of TBI: The TBI model used in this study was a modified version of Marmarou’s weight drop model.18 Mice were briefly anesthetized with an intraperitoneal injection of 10% chloral hydrate and placed in a stereotaxic frame. Following a 1.5 cm midline scalp incision, the fascia was reflected to expose the skull. After locating the left anterior frontal area (the impact area), a 200 g weight was released onto the skull. During the operation, the dura was kept intact. The scalp incision was subsequently closed and sutured. The mice were then returned to their former cages. Sham animals underwent identical procedures, except the weight drop. 
Experimental design: Mice (132 mice were used, 18 mice died) were randomly divided into five groups, namely Sham (n = 24), Sham + ketamine (n = 6), TBI (n = 24), TBI + vehicle (n = 24), TBI + ketamine (three subgroups: 30 [n = 6], 60 [n = 6], and 100 [n = 6] mg/kg). The mice in the TBI + ketamine groups were injected intraperitoneally with the respective dose of ketamine (Hengrui, Jiangsu, China), 30 min after TBI. At the corresponding time points, all mice in the TBI + vehicle group were administered equivalent volumes of vehicle solution. All mice were sacrificed 24 h after TBI. Measurement of the brain water content: The brain water content was measured according to a previous study.19 Left brain cortical tissue samples were collected on sacrification. The collected tissue was positioned directly over the injury site, covering the contusion and the penumbra. The fresh tissue was weighed to record the wet weight, and then dried for 72 h at 80°C and weighed to record the dry weight. The brain water content was calculated by the formula: [(wet weight − dry weight)/ wet weight] × 100%. Neurological evaluation: Neurological deficit was evaluated by the grip test, which was developed based on the test of gross vestibulomotor function, as described previously.20 Briefly, mice were placed on a thin, horizontal metal wire (45 cm long) that was suspended between two vertical poles, 45 cm above a foam pad. Every mouse was required to perform 10 different tasks that represented motor function, balance, and alertness. One point was given for failing to perform each of the tasks, and thus 0 = minimum deficit and 10 = maximum deficit. A lower score demonstrated less neurological deficits. The severity of injury is defined by the initial NSS, which is evaluated 1 h after TBI. Testing was performed by the investigator who was blinded to the experimental groups. The sequence of the behavioral tasks was randomized. Nissl staining: For Nissl staining, the sections of paraffin-embedded brain tissue (5 μm thick) were stained by cresyl violet, according to a previous study.20 “Normal” neurons had one or two large, round nuclei, located in the central soma, and abundant cytoplasm. In contrast, positive cells had irregular neuronal cell bodies, shrinking and hyperchromatic nuclei, and dried-up cytoplasm with vacuoles. The histological examination was performed by two observers who were blinded to the group assignment. Western blot analysis: Protein extraction was performed according to the instructions provided by the manufacturer of the Protein Extraction Kit (Beyotime Biotech Inc, Nantong, China). Equal amounts of protein were loaded on a 10% or 12% sodium dodecyl sulfate–polyacrylamide gel and then transferred onto polyvinylidene fluoride membranes. The membranes were blocked for 2 h in 5% skimmed milk in TBST at room temperature. After that, the membranes were separately incubated overnight, at 4°C, with antibodies against Nrf2 (1:1,000 diluted, rabbit; Abcam, Cambridge, MA, USA), HO-1 (1:200, rabbit; Santa Cruz Biotechnology Inc., Dallas, TX, USA), and NQO-1 (1:1,000 diluted, rabbit; Abcam), Bax (1:400 diluted, rabbit; Abcam) and cleaved caspase-3 (1:1,000 diluted, rabbit; Cell Signaling Technology, Danvers, MA, USA), Bcl-2 (1:1,000 diluted, rabbit; Cell Signaling Technology), and β-actin (1:5,000 diluted, rabbit; Bioworld Technology, St Louis Park, MN, USA) in blocking buffer. 
Then, the membranes were washed with TBST for 15 min and were incubated with the secondary antibodies (1:5,000 diluted, goat; Bioworld Technology) for 2 h. After three 15-min washes with TBST, the protein bands were visualized by enhanced chemiluminescence (ECL) (EMD Millipore, Billerica, MA, USA) and exposure to X-ray film. All the results were analyzed with the Un-Scan-It 6.1 software (Silk Scientific Inc., Orem, UT, USA). The density of each band was separately quantified. Immunohistochemical staining: Tissue sections (4 μm thick) were incubated overnight with the primary antibody against Nrf2 (1:100; Abcam) at 4°C. Following a 15-min wash in phosphate-buffered saline (PBS), the sections were incubated with horse radish peroxidase (HRP)-conjugated IgG (diluted 1:400, rabbit; Santa Cruz Biotechnology Inc.) for 1 h at room temperature. After three washes with PBS, the immunolabeled protein was visualized as brown staining, after staining with diaminobenzidine and the hematoxylin counterstaining. In each coronary section, six randomly selected fields were observed under a light microscope (ECLIPSE E100; Nikon Corporation, Tokyo, Japan). Then, the number of positive neurons in each of the six fields was recorded, and the average number of positive neurons was calculated for each sample. Malondialdehyde (MDA) content and the activities of SOD and GPx: The MDA content and the activities of SOD and GPx were measured with a spectrophotometer, using the appropriate kits (Nanjing Jiancheng Biochemistry Co, Nanjing, China), according to the manufacturer’s instructions. Total protein concentration was determined by the Bradford method. The MDA level and the activities of SOD and GPx were expressed as nmol/mg protein and U/mg protein, respectively. Statistical analyses: The SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analyses. Data were expressed as mean ± standard error of the mean (SEM), and the differences were evaluated by one-way analysis of variance (ANOVA), followed by Tukey’s test. Statistically significant differences were indicated by P < 0.05. Results: Ketamine treatment provided protection in injured brains The brain water content was also examined to confirm the protection of ketamine 24 h after TBI, as previously described (Figure 1A). The results showed that the TBI and TBI + vehicle groups had significantly increased brain water contents compared with the Sham group. Brain edema was attenuated in the ketamine-treated groups, which was consistent with the results of the neurological score. Specifically, the treatment with 60 mg/kg of ketamine conferred a better effect than the treatment with 30 mg/kg of ketamine. Accordingly, 60 mg/kg of ketamine was used in the subsequent experiments. The neurological score was used to evaluate the motor function of the mice at 24 and 48 h after TBI (Figure 1B). All mice were trained 3 days pre-TBI. At 48 h after TBI, the motor scores of the TBI and TBI + vehicle groups were lower than the scores of the Sham and Sham + ketamine groups. The ketamine-treated group exhibited reduced neurological deficits compared with the TBI group at 24 h, which were also lower than the deficits of the Sham and Sham + ketamine groups. The group that received a higher dose (60 mg/kg) showed a better effect than the low dose (30 mg/kg) for the TBI + ketamine group at 24 and 48 h after TBI, but the highest dose (100 mg/kg) failed to further enhance the neuroprotective effect. 
Results: Ketamine treatment provided protection in injured brains: The brain water content was examined to confirm the protection conferred by ketamine 24 h after TBI, as previously described (Figure 1A). The TBI and TBI + vehicle groups had significantly increased brain water contents compared with the Sham group. Brain edema was attenuated in the ketamine-treated groups, which was consistent with the results of the neurological score. Specifically, treatment with 60 mg/kg of ketamine conferred a better effect than treatment with 30 mg/kg; accordingly, 60 mg/kg of ketamine was used in the subsequent experiments. The neurological score was used to evaluate the motor function of the mice at 24 and 48 h after TBI (Figure 1B). All mice were trained for 3 days pre-TBI. At 48 h after TBI, the neurological deficit scores of the TBI and TBI + vehicle groups were higher than those of the Sham and Sham + ketamine groups. The ketamine-treated groups exhibited reduced neurological deficits compared with the TBI group at 24 h, although their deficits remained greater than those of the Sham and Sham + ketamine groups. The higher dose (60 mg/kg) showed a better effect than the lower dose (30 mg/kg) at 24 and 48 h after TBI, but the highest dose (100 mg/kg) failed to further enhance the neuroprotective effect. There was no difference between the Sham and Sham + ketamine groups; therefore, the Sham + ketamine group was omitted from the subsequent experiments.

Ketamine reduced neuronal apoptosis: To investigate the protective effects of ketamine, neuronal morphology was examined by Nissl staining. While neurons in the Sham group were clear and intact (Figure 2A), multiple neurons in the TBI and TBI + vehicle groups were damaged, exhibiting extensive changes including oval or triangular nuclei and shrunken cytoplasm (Figure 2B and C). However, an improvement in neuronal morphology and cytoarchitecture was observed in TBI mice treated with ketamine (Figure 2D). To investigate the mechanistic basis for the effects of ketamine, TUNEL staining was used to examine neural cells in the injured brain tissue. TUNEL-positive cells were detected at a low frequency in the brains of mice in the Sham group (Figure 2E). The apoptotic index was increased in the TBI and TBI + vehicle groups compared with Sham animals (Figure 2F and G), but was reduced in the TBI + ketamine groups (Figure 2H–J). These results indicate that ketamine blocks neural apoptosis induced by TBI. Additionally, Western blot analysis was performed to assess the expression of cleaved caspase-3, an indicator of apoptosis. Cleaved caspase-3 expression increased significantly following TBI relative to the Sham group, whereas treatment with ketamine reduced the level of cleaved caspase-3 relative to the TBI + vehicle group (Figure 3A and B).
Expression of apoptotic factors was reduced by ketamine administration: The effects of ketamine on TBI-induced apoptotic factors were further examined by Western blot analysis. The expression of the pro-apoptotic factor Bax increased following TBI compared with the Sham group (Figure 3A and D), whereas the expression of the anti-apoptotic factor Bcl-2 decreased compared with the Sham group (Figure 3A and C). These changes were reversed by treatment with 60 mg/kg of ketamine, which also inhibited the translocation of Bax, relative to the TBI + vehicle group at 24 h after TBI.

Ketamine reduced oxidative stress following TBI: To evaluate the effect of ketamine on TBI-induced oxidative stress, the MDA level (an indicator of lipid peroxidation) and the activities of SOD and GPx (indicators of antioxidant capacity) were assessed. The MDA level was increased in the TBI + vehicle group compared with the Sham group, and this effect was mitigated by the administration of 60 mg/kg of ketamine (Figure 4A). The activities of GPx and SOD were both decreased after TBI, while ketamine treatment increased their activities (Figure 4B and C).

Ketamine promoted the expression of Nrf2 and Nrf2 downstream factors: The expression of Nrf2 was investigated by Western blot analysis and immunohistochemistry. Compared with the Sham group, both TBI and ketamine administration induced Nrf2 expression (Figure 5A, B, and D–F). However, compared with the TBI + vehicle group, the TBI + ketamine groups exhibited significantly increased Nrf2 levels, indicating that ketamine promoted Nrf2 expression following TBI (Figure 5C–F). The expression of NQO-1 and HO-1 was also measured by Western blot analysis. At the protein level, NQO-1 and HO-1 were both upregulated after TBI, and the administration of ketamine further enhanced their expression compared with the vehicle (Figure 5E, G, and H). These results demonstrate that ketamine induced the expression of factors downstream of Nrf2 via activation of the Nrf2/antioxidant responsive element (Nrf2-ARE) signaling pathway.
Discussion: The secondary injury after TBI comprises a cascade of pathological processes, including excitotoxicity, activation of inflammatory responses, and oxidative stress.21 Several oxidants and their derivatives are generated after TBI, which enhances the production of ROS while exhausting antioxidant defense enzymes such as SOD, catalase, and GPx.22 Substantial evidence indicates that oxidative stress is a major contributor to the pathophysiology of TBI.23 Thus, neuroprotective approaches must be aimed at limiting and reversing oxidative stress. Ketamine is widely used as an anesthetic in clinical practice and in animal experiments, and it appears to exert its effects not only through NMDA receptors but also through non-NMDA receptor mechanisms.24 In a previous study, 10 or 50 mg/kg of ketamine was used as a subanesthetic dose and 100 mg/kg as an anesthetic dose.25 In the present study, the higher dose (60 mg/kg) showed a better effect than the lower dose (30 mg/kg) at 24 and 48 h after TBI, but the highest dose (100 mg/kg) failed to further enhance the neuroprotective effect. This suggests that, within the subanesthetic range, the protective effect of ketamine is dose related. Ketamine has been reported to act as an effective antioxidant and has frequently been used as a protectant in studies of various types of traumatic injury, although this effect is still incompletely understood.26–28 The present study showed that treatment with ketamine after TBI attenuated the contusion-induced oxidative insult, alleviated brain edema, improved neurological function scores, and prevented neuronal loss. These beneficial effects led us to examine the neuroprotective mechanisms underlying the response to oxidative insult. The ability of ketamine to stimulate the activity of antioxidant enzymes plays a critical role in its antioxidative characteristics.29 Lipid peroxidation, the oxidative degradation of lipids, increases membrane permeability and leads to cell damage; MDA has been utilized as an index of lipid peroxidation. In addition, SOD and GPx are antioxidant enzymes that scavenge superoxide radicals and reduce peroxides using glutathione, respectively.30,31 The changes in MDA content and in the activities of SOD and GPx in the cortex of mice with TBI indicate that oxidative stress occurs following TBI. Moreover, the results of this study showed that treatment with ketamine partly relieved these changes, suggesting that ketamine can attenuate TBI-induced oxidative stress. The Nrf2 pathway plays an important role in cellular adaptation to oxidative stress by upregulating Phase II enzymes.3,32 In addition, previous studies have indicated that Nrf2 protein, as well as Phase II enzymes such as NQO-1 and HO-1, is activated after TBI.33,34 The present study demonstrated that exposure to ketamine (60 mg/kg) after TBI resulted in a significant increase in Nrf2 expression, as well as in the levels of NQO-1 and HO-1, through activation of the Nrf2 signaling pathway.

Conclusion: The present study demonstrated that ketamine attenuated the oxidative reaction by enhancing the expression and activity of antioxidant enzymes in a TBI mouse model. Furthermore, the experimental data showed that ketamine exerted neuroprotection against TBI by partially combating oxidative stress via activation of the Nrf2 pathway and its downstream proteins. However, further study is still needed to elucidate the mechanism by which ketamine activates the Nrf2 pathway.
Background: Ketamine can act as a multifunctional neuroprotective agent by inhibiting oxidative stress, cellular dysfunction, and apoptosis. Although it has been proven to be effective in various neurologic disorders, the mechanism of the treatment of traumatic brain injury (TBI) is not fully understood. The aim of this study was to investigate the neuroprotective function of ketamine in models of TBI and the potential role of the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway in this putative protective effect. Methods: Wild-type male mice were randomly assigned to five groups: Sham group, Sham + ketamine group, TBI group, TBI + vehicle group, and TBI + ketamine group. Marmarou's weight drop model in mice was used to induce TBI, after which either ketamine or vehicle was administered via intraperitoneal injection. After 24 h, the brain samples were collected for analysis. Results: Ketamine significantly ameliorated secondary brain injury induced by TBI, including neurological deficits, brain water content, and neuronal apoptosis. In addition, the levels of malondialdehyde (MDA), glutathione peroxidase (GPx), and superoxide dismutase (SOD) were restored by the ketamine treatment. Western blotting and immunohistochemistry showed that ketamine significantly increased the level of Nrf2. Furthermore, administration of ketamine also induced the expression of Nrf2 pathway-related downstream factors, including hemeoxygenase-1 and quinine oxidoreductase-1, at the pre- and post-transcriptional levels. Conclusions: Ketamine exhibits neuroprotective effects by attenuating oxidative stress and apoptosis after TBI. Therefore, ketamine could be an effective therapeutic agent for the treatment of TBI.
Introduction: Traumatic brain injury (TBI) is defined as a mechanical injury that causes numerous deaths and has an adverse impact on families and society.1,2 Over the past few years, great efforts have been directed toward identifying effective ways to improve the prognosis of TBI; however, many approaches have failed during clinical trials.3 It is widely believed that TBI causes both primary and secondary brain damage. Although the primary brain injury is a major factor, the secondary brain damage precipitates a complex pathological process that can lead to a series of endogenous events, including oxidative stress, glutamate excitotoxicity, and the activation of inflammatory responses.4 Of these processes, oxidative stress plays the most important role in secondary damage,5 not only because of the excessive production of reactive oxygen species (ROS) but also due to the exhaustion of the endogenous antioxidant system. Nuclear factor erythroid 2-related factor 2 (Nrf2) is a key transcription factor which controls the redox state as the cell’s main defense mechanism against oxidative stress.6,7 Nrf2 acts by activating the antioxidant response, which induces the transcription of a wide range of genes that are able to counteract the harmful effects of oxidative stress, thus restoring intracellular homeostasis.8 Under basal conditions, Nrf2 is localized in the cytoplasm, anchored by the Kelch-like ECH-associated protein (Keap1), and degraded by the ubiquitin-26S proteasome pathway, with a half-life of 20 min.9 Once the cell encounters stimulations, Nrf2 dissociates from Keap1 and is translocated into the nucleus, where it induces the expression of antioxidant enzymes. The enzymes regarded as antioxidants include nicotinamide adenine dinucleotide phosphate (NADPH), quinine oxidoreductase-1 (NQO-1), heme oxidase-1 (HO-1), superoxide dismutase (SOD), and glutathione peroxidase (GPx).10,11 These free radical scavenging enzymes form a powerful antioxidant defense mechanism. Ketamine, an N-methyl-D-aspartate receptor antagonist, has shown potent anti-inflammatory effects in various models of systemic inflammation, including endotoxemia, ischemia, and burns.12,13 These effects were observed in multiple organ systems and involve the modulation of certain molecular mediators of the inflammatory response.14 In addition, when administered either before or after various pro-inflammatory responses, ketamine was found to diminish systemic production of cytokines and improve survival.15 In models of brain injury and ischemia, ketamine elicits neuroprotective effects.16 Moreover, recent evidence suggests that the use of ketamine could be beneficial for patients with TBI.17 Nevertheless, the effects of ketamine on TBI-induced oxidative stress in the brain remain to be fully elucidated, and the present study was performed to better examine these effects. Conclusion: The present study demonstrated that ketamine attenuated the oxidative reaction by enhancing the expression and activity of antioxidant enzymes in a TBI mouse model. Furthermore, experimental data showed that ketamine exerted neuroprotection against TBI, by partially combating oxidative stress via the activation of the Nrf2 pathway and its downstream proteins. However, further study is still needed to elucidate the underlying mechanism of Nrf2 pathway transformation after ketamine administration.
8,011
302
[ 109, 117, 145, 92, 151, 91, 150, 73, 68, 300, 247, 119, 97, 168, 74 ]
20
[ "tbi", "ketamine", "sham", "group", "figure", "mice", "nrf2", "mg", "groups", "expression" ]
[ "inflammatory responses oxidative", "oxidative stress major", "nrf2 antioxidant", "oxidative stress brain", "activation nrf2 antioxidant" ]
[CONTENT] traumatic brain injury | ketamine | oxidative stress | Nrf2 | apoptosis [SUMMARY]
[CONTENT] Animals | Apoptosis | Brain Injuries, Traumatic | Cell Survival | Disease Models, Animal | Dose-Response Relationship, Drug | Ketamine | Male | Mice | Mice, Inbred ICR | NF-E2-Related Factor 2 | Neurons | Oxidative Stress [SUMMARY]
[CONTENT] inflammatory responses oxidative | oxidative stress major | nrf2 antioxidant | oxidative stress brain | activation nrf2 antioxidant [SUMMARY]
[CONTENT] tbi | ketamine | sham | group | figure | mice | nrf2 | mg | groups | expression [SUMMARY]
[CONTENT] effects | inflammatory | stress | oxidative stress | oxidative | brain | factor | brain injury | injury | damage [SUMMARY]
[CONTENT] rabbit | diluted | 000 | diluted rabbit | 000 diluted | 000 diluted rabbit | usa | membranes | technology | rabbit abcam [SUMMARY]
[CONTENT] ketamine | tbi | figure | group | sham | expression | groups | compared | sham group | vehicle [SUMMARY]
[CONTENT] nrf2 pathway | ketamine | oxidative | pathway | study | nrf2 | enhancing expression | enhancing | oxidative reaction | oxidative reaction enhancing [SUMMARY]
[CONTENT] tbi | ketamine | figure | group | sham | mice | nrf2 | weight | mg | expression [SUMMARY]
[CONTENT] ||| TBI ||| TBI | 2 | 2 [SUMMARY]
[CONTENT] five | Sham + ketamine group | TBI | TBI | TBI | ketamine ||| Marmarou | TBI ||| 24 [SUMMARY]
[CONTENT] Ketamine | TBI ||| malondialdehyde | MDA | SOD ||| Nrf2 ||| Nrf2 [SUMMARY]
[CONTENT] Ketamine | TBI ||| TBI [SUMMARY]
[CONTENT] Ketamine ||| TBI ||| TBI | 2 | 2 ||| five | Sham + ketamine group | TBI | TBI | TBI | ketamine ||| Marmarou | TBI ||| 24 ||| ||| Ketamine | TBI ||| malondialdehyde | MDA | SOD ||| Nrf2 ||| Nrf2 ||| Ketamine | TBI ||| TBI [SUMMARY]
Sexually transmitted infections and semen quality from subfertile men with and without leukocytospermia.
34154600
The role of sexually transmitted infections (STIs) in semen parameters and male infertility is still a controversial area. Previous studies have found bacterial infection in a minority of infertile leukocytospermic males. This study aims to investigate the prevalence of STIs in semen from subfertile men with leukocytospermia (LCS) and without leukocytospermia (non-LCS) and their associations with sperm quality.
BACKGROUND
Semen samples were collected from 195 men who requested a fertility evaluation. Infection with six sexually transmitted pathogens (Ureaplasma spp., Mycoplasma hominis, Mycoplasma genitalium, Chlamydia trachomatis, herpes simplex virus-2 and Neisseria gonorrhoeae) was assessed in each sample. Sperm quality was compared in subfertile men with and without LCS.
METHODS
The LCS group had significantly decreased semen volume, sperm concentration, progressive motility, total motility and normal morphology. The infection rates of Ureaplasma urealyticum (Uuu), Ureaplasma parvum (Uup), Mycoplasma hominis (MH), Mycoplasma genitalium (MG), Chlamydia trachomatis (CT), herpes simplex virus-2 (HSV-2) and Neisseria gonorrhoeae (NG) were 8.7 %, 21.0 %, 8.2 %, 2.1 %, 3.6 %, 1.0 % and 0 %, respectively. The STI detection rate in patients with LCS was higher than that in the non-LCS group (52.3 % vs. 39.3 %), although the difference between the two groups was not statistically significant (P = 0.07). None of the semen parameters differed significantly between LCS patients with and without STIs, except that the semen volume of MG-infected patients with LCS was significantly lower than that of the noninfected group.
RESULTS
LCS was associated with a reduction in semen quality, but was not associated with STIs.
CONCLUSIONS
[ "Adult", "Cohort Studies", "Cross-Sectional Studies", "Humans", "Infertility, Male", "Leukocytes", "Male", "Semen", "Semen Analysis", "Sexually Transmitted Diseases" ]
8218447
Introduction
Up to 30 % of infertile men have leukocytospermia, which refers to the presence of a high concentration (≥ 1 × 10⁶/ml) of white blood cells (WBCs) in semen [1]. Leukocytes, including granulocytes (50–60 %), macrophages (20–30 %) and T lymphocytes (2–5 %), have been shown to be a negative factor for semen quality, as they induce low sperm motility [2]. The leukocytes in the ejaculate originate mainly from the epididymis and are associated with immunosurveillance [3]. Hence, bacterial and viral infections have been suspected and detected in semen from symptomatic and asymptomatic leukocytospermic infertile males, although men attending andrology clinics often do not have an apparent infection. More than 30 pathogens causing sexually transmitted infections (STIs), including herpes simplex virus (HSV), Chlamydia trachomatis (CT), Ureaplasma spp. (UU), Mycoplasma hominis (MH), Mycoplasma genitalium (MG) and Neisseria gonorrhoeae (NG), have been identified as causes of genital injury, semen infection, prostatitis, urethritis, epididymitis and orchitis in men [4–7]. Previous studies have observed sexually transmitted pathogens in semen in relation to impaired sperm quality and reduced pregnancy rates [8, 9]. However, unlike the strong negative correlation between STIs and fertility in females, the link in males remains controversial [10]. For example, several studies demonstrated that UU increased secondary infertility and reduced sperm quality, including low sperm concentration and motility [11], while other studies did not find a correlation between UU and semen parameters [12, 13]. Interestingly, UU can be divided into two subtypes, Ureaplasma urealyticum (Uuu) and Ureaplasma parvum (Uup), and studies have shown that Uup is more strongly associated with poor semen quality than Uuu, owing to differential pathogenicity [14]. Thus, the conflicting results of previous studies may be due to the lack of discrimination between Uuu and Uup. Recent studies reported that, compared with patients with nonleukocytospermia (non-LCS), patients with asymptomatic leukocytospermia (LCS) showed decreased semen parameters (e.g., sperm concentration, progressive motility, normal morphology), and these decreases were more pronounced in those with STI (e.g., UU)-positive leukocytospermia [5, 15]. In this study, we aimed to investigate the prevalence of sexually transmitted pathogens, detected by molecular methods, in semen samples from infertile men with and without leukocytes and their effects on semen quality.
null
null
Results
Table 1 displays the mean (± SD) and median (25th, 75th percentile) clinical characteristics and semen parameters of the 195 men in the present study. The ages of patients with LCS and without LCS were 32.4 ± 6.9 and 31.6 ± 5.8 years, respectively. Of the studied population, more than 50 % of patients graduated from college/university. A total of 60.0 % of patients were diagnosed with primary infertility. Drinking and smoking were found in 70.3 and 42.6 % of patients, respectively. There was no significant difference between the two groups regarding age, clinical conditions (e.g., BMI, education, duration of infertility, and type of infertility) or lifestyle (e.g., smoking and drinking).

Table 1. Characteristics and descriptive statistics of the whole cohort
Clinical characteristic | Total (n = 195) | LCS (n = 88) | Non-LCS (n = 107) | P
Age (years), mean ± s.d. | 31.9 ± 6.3 | 32.4 ± 6.9 | 31.6 ± 5.8 | 0.39
BMI (kg/m²), mean ± s.d. | 24.6 ± 3.3 | 24.1 ± 3.0 | 24.8 ± 3.5 | 0.39
Education, n (%): primary school | 5 (2.6) | 1 (1.1) | 4 (3.7) | 0.32
Education, n (%): junior high school | 40 (20.5) | 21 (23.9) | 19 (17.8) |
Education, n (%): high school | 43 (22.1) | 22 (25.0) | 21 (19.6) |
Education, n (%): college/university | 107 (54.8) | 44 (50.0) | 63 (58.9) |
Duration of infertility, n (%): 1 year | 106 (54.4) | 47 (53.4) | 59 (55.1) | 0.35
Duration of infertility, n (%): 2 years | 47 (24.1) | 25 (28.4) | 22 (20.6) |
Duration of infertility, n (%): ≥ 3 years | 42 (21.5) | 16 (18.2) | 26 (24.3) |
Type of infertility, n (%): primary | 117 (60.0) | 49 (55.7) | 68 (63.6) | 0.26
Type of infertility, n (%): secondary | 78 (40.0) | 39 (44.3) | 39 (36.4) |
Alcohol status, n (%): nondrinkers | 58 (29.7) | 24 (27.3) | 34 (31.8) | 0.49
Alcohol status, n (%): drinkers | 137 (70.3) | 64 (72.7) | 73 (68.2) |
Smoking status, n (%): nonsmokers | 112 (57.4) | 48 (54.5) | 64 (59.8) | 0.46
Smoking status, n (%): smokers | 83 (42.6) | 40 (45.5) | 43 (40.2) |
Abstinence time (days), mean ± s.d. | 4.3 ± 1.9 | 4.2 ± 1.9 | 4.3 ± 2.0 | 0.75
Semen volume (ml), median (Q1, Q3) | 2.9 (2.0, 4.0) | 2.8 (1.9, 3.7) | 3.0 (2.2, 4.1) | 0.04
Sperm concentration (×10⁶/ml), median (Q1, Q3) | 79.0 (37.0, 121.6) | 66.4 (28.2, 106.0) | 89.5 (54.5, 135.5) | 0.01
Progressive motility (%), median (Q1, Q3) | 38.0 (18.7, 51.0) | 33.8 (19.1, 45.1) | 41.6 (16.7, 54.6) | 0.02
Total motility (%), median (Q1, Q3) | 46.0 (22.4, 59.3) | 40.8 (22.5, 55.2) | 50.2 (21.4, 63.8) | 0.03
Normal morphology (%), median (Q1, Q3) | 5.0 (4.0, 8.0) | 5.0 (3.0, 7.0) | 5.0 (4.0, 8.0) | 0.35
Leukocytes (×10⁶/ml), median (Q1, Q3) | 0.4 (0.1, 4.6) | 4.3 (2.9, 6.5) | 0.2 (0.1, 0.3) | < 0.001
AsAs (%), median (Q1, Q3) | 1.5 (0, 4.0) (n = 124) | 1.0 (0, 4.0) (n = 48) | 2.0 (1.0, 3.8) (n = 76) | 0.99
BMI, body mass index. Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson's chi-square test and Student's t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile.

As expected, the median leukocyte concentrations in the LCS and non-LCS groups were 4.3 and 0.2 ×10⁶/ml, respectively (Table 2).
The median (25th, 75th percentile) semen volumes in the LCS and non-LCS groups were 2.8 (1.9, 3.7) and 3.0 (2.2, 4.1) ml (P = 0.04); sperm concentrations were 66.4 (28.2, 106.0) and 89.5 (54.5, 135.5) ×10⁶/ml (P = 0.01); progressive motility was 33.8 (19.1, 45.1) and 41.6 (16.7, 54.6) % (P = 0.02); total motility was 40.8 (22.5, 55.2) and 50.2 (21.4, 63.8) % (P = 0.03); and normal morphology was 5.0 (3.0, 7.0) and 5.0 (4.0, 8.0) % (P = 0.35), respectively. The LCS group had significantly decreased semen volume, sperm concentration, progressive motility, total motility and normal morphology. In addition, there were no significant differences between the two groups for AsAs.

Table 2. Frequency of STI pathogens in semen samples
Pathogen | Overall (n = 195) | LCS (n = 88) | Non-LCS (n = 107) | P
UU, n (%) | 58 (29.2) | 25 (28.4) | 33 (30.8) | 0.71
Uuu, n (%) | 17 (8.7) | 7 (8.0) | 10 (9.3) | 0.73
Uup, n (%) | 41 (21.0) | 18 (20.5) | 23 (21.5) | 0.86
Uup1, n (%) | 4 (2.1) | 1 (1.1) | 3 (2.8) | 0.63*
Uup3, n (%) | 19 (9.7) | 10 (11.4) | 9 (8.4) | 0.49
Uup6, n (%) | 16 (8.2) | 7 (8.0) | 9 (8.4) | 0.91
Uup14, n (%) | 2 (1.0) | 0 (0) | 2 (1.9) | 0.50*
MH, n (%) | 16 (8.2) | 10 (11.4) | 6 (5.6) | 0.15
MG, n (%) | 4 (2.1) | 4 (4.5) | 0 (0) | 0.04*
CT, n (%) | 7 (3.6) | 5 (5.7) | 2 (1.9) | 0.25*
HSV-2, n (%) | 2 (1.0) | 1 (1.1) | 1 (0.9) | 0.99*
NG, n (%) | 0 (0) | 0 (0) | 0 (0) | -
Total | 87 (44.6) | 45 (52.3) | 42 (39.3) | 0.10
Data are presented as frequency. P values were derived from Pearson's chi-square test if not otherwise indicated. *Fisher's exact test.

As shown in Table 2, the STI-positive rate in the semen of subfertile patients was 45.1 %. The STI detection rate in patients with LCS was higher than that in the non-LCS group (52.3 % vs. 39.3 %), although there was no statistically significant difference between the two groups (P = 0.07). Of the 195 semen samples, UU was detected in 58 (29.2 %) and was the most prevalent pathogen, followed by MH (8.2 %), CT (3.6 %), MG (2.1 %) and HSV-2 (1.0 %). NG was not detected in any semen sample. Strikingly, the rate of MG-positive samples in the LCS group was significantly higher than that in the non-LCS group (P = 0.04), while the other STI pathogens were detected at similar rates in the two groups.

Table 3 shows that 13 of the 71 STI-positive samples contained more than one pathogen, including 7 and 6 coinfected samples in the LCS and non-LCS groups, respectively. In the LCS group, 4 and 3 semen samples had 2 and 3 pathogens detected, respectively; specifically, 5 of the 7 samples were positive for UU and MH, 3 were positive for UU and MG, and 1 was positive for UU and CT. In the non-LCS group, 5 and 1 of the samples were positive for 2 and 3 pathogens, respectively.

Table 3. Total simultaneous STI pathogens detected in the LCS and non-LCS groups
Pathogens detected | Total (n = 195) | LCS (n = 88) | Non-LCS (n = 107) | P
No infection, n (%) | 124 (63.6) | 52 (59.1) | 72 (67.3) | 0.24
1 pathogen, n (%) | 58 (29.7) | 29 (33.0) | 29 (27.1) | 0.58
2 pathogens, n (%) | 9 (4.6) | 4 (4.5) | 5 (4.7) |
3 pathogens, n (%) | 4 (2.1) | 3 (3.4) | 1 (0.9) |
Data are presented as frequency. P values were derived from Pearson's chi-square test if not otherwise indicated.
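The group comparisons in Tables 2 and 3 rely on Pearson's chi-square test, with Fisher's exact test where counts are small (the starred P values). A minimal sketch of such a comparison, using the MG counts from Table 2 (4/88 in LCS vs. 0/107 in non-LCS), is shown below; scipy is assumed, and the printed P values reflect scipy's implementations rather than the exact software the authors used.

```python
from scipy import stats

# 2x2 contingency table for MG detection (rows: LCS, non-LCS; columns: positive, negative)
table = [[4, 88 - 4],
         [0, 107 - 0]]

# Fisher's exact test is preferred here because of the small expected counts
odds_ratio, p_fisher = stats.fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test: P = {p_fisher:.3f}")

# Pearson's chi-square (without continuity correction), for comparison
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"Chi-square: chi2 = {chi2:.2f}, dof = {dof}, P = {p_chi2:.3f}")
```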
The semen parameters of the LCS and non-LCS groups are summarized in Table 4. In both the LCS and non-LCS groups, semen parameters were similar between patients with and without STI detection. Moreover, the prevalence of STI pathogens in patients with normal semen parameters was not significantly different from that in patients with abnormal semen parameters (e.g., semen volume < 1.5 ml, sperm concentration < 15 × 10⁶/ml, progressive motility < 32 %, total motility < 40 % and normal morphology < 4 %).

Table 4. Comparison of semen parameters in patients with LCS and non-LCS
Parameter | LCS infected (n = 36) | LCS non-infected (n = 52) | P | Non-LCS infected (n = 35) | Non-LCS non-infected (n = 72) | P
Abstinence time (days), mean ± s.d. | 4.2 ± 2.0 | 4.3 ± 1.8 | 0.81 | 4.4 ± 2.0 | 4.2 ± 2.0 | 0.59
Semen volume (ml), median (Q1, Q3) | 2.5 (1.7, 3.4) | 2.9 (2.0, 4.0) | 0.18 | 2.9 (2.1, 4.2) | 3.1 (2.2, 4.1) | 0.92
Semen volume < 1.5 ml, n (%) | 6 (16.7) | 4 (7.7) | 0.19 | 2 (5.7) | 1 (1.4) | 0.25*
Sperm concentration (×10⁶/ml), median (Q1, Q3) | 53.8 (24.3, 101.4) | 69.2 (28.8, 106.8) | 0.61 | 83.8 (38.9, 133.2) | 90.2 (55.4, 138.3) | 0.55
Sperm concentration < 15 ×10⁶/ml, n (%) | 4 (11.1) | 6 (11.5) | 0.95 | 2 (5.7) | 8 (11.1) | 0.49*
Progressive motility (%), median (Q1, Q3) | 35.6 (19.9, 45.3) | 31.5 (19.0, 45.1) | 0.65 | 46.5 (25.4, 59.2) | 40.0 (15.7, 54.4) | 0.39
Progressive motility < 32 %, n (%) | 14 (38.9) | 27 (51.9) | 0.23 | 10 (28.6) | 27 (37.5) | 0.36
Total motility (%), median (Q1, Q3) | 41.2 (23.2, 55.2) | 38.2 (22.5, 55.6) | 0.72 | 51.7 (30.2, 63.9) | 48.5 (19.4, 63.7) | 0.63
Total motility < 40 %, n (%) | 16 (44.4) | 27 (51.9) | 0.49 | 10 (28.6) | 27 (37.5) | 0.36
Normal morphology (%), median (Q1, Q3) | 5.0 (3.0, 6.8) | 5.0 (3.3, 7.0) | 0.94 | 6.0 (4.0, 8.0) | 5.0 (4.0, 8.0) | 0.29
Normal morphology < 4 %, n (%) | 10 (27.8) | 13 (25.0) | 0.77 | 8 (22.9) | 16 (22.2) | 0.94
Leukocytes (×10⁶/ml), median (Q1, Q3) | 6.4 (3.0, 6.3) | 4.2 (2.6, 7.0) | 0.73 | 0.2 (0.1, 0.3) | 0.2 (0.1, 0.3) | 0.21
Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson's chi-square test and Student's t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile. *Fisher's exact test.

Table 5 lists the associations between semen quality and STI pathogens in the LCS group. None of the semen parameters differed significantly in the UU-infected patients, including the Uuu-, Uup3- and Uup6-positive patients. Interestingly, multiple comparison analysis found that semen volume was significantly different among groups: semen volume was lower in patients with MG infection than in those without any detected STI pathogen. Furthermore, leukocyte counts in semen were not significantly different between the individual STI-positive and STI-negative groups.
Table 5. Sexually transmitted infections and semen quality in patients with LCS
Parameter | Uuu | Uup3 | Uup6 | MH | MG | CT | Negative | P (a)
Semen volume (ml), median (Q1, Q3) | 2.0 (1.1, 2.9) | 3.0 (2.7, 5.1) | 2.8 (1.8, 3.6) | 2.1 (1.6, 2.8) | 1.5 (1.1, 1.8)b | 2.0 (1.0, 2.5) | 2.9 (2.0, 4.0) | 0.003
Sperm concentration (×10⁶/ml), median (Q1, Q3) | 35.1 (26.8, 62.0) | 60.9 (27.7, 106.1) | 110.6 (17.6, 254.6) | 45.1 (22.8, 83.2) | 48.2 (27.0, 97.7) | 46.7 (30.8, 121.5) | 69.2 (28.8, 106.8) | 0.65
Progressive sperm motility (%), median (Q1, Q3) | 33.1 (18.4, 43.2) | 30.3 (21.3, 38.6) | 32.7 (10.5, 58.0) | 34.7 (15.8, 41.6) | 40.3 (19.6, 50.1) | 39.5 (36.5, 55.4) | 31.5 (19.0, 45.1) | 0.69
Total sperm motility (%), median (Q1, Q3) | 37.7 (19.8, 50.0) | 39.6 (27.0, 50.8) | 39.1 (12.1, 68.1) | 40.2 (19.2, 46.1) | 46.0 (22.2, 55.5) | 55.2 (44.3, 65.1) | 38.2 (22.5, 55.6) | 0.56
Normal sperm morphology (%), median (Q1, Q3) | 5.0 (3.3, 6.0) | 5.5 (2.8, 8.0) | 4.0 (2.0, 11.0) | 5.0 (4.0, 6.0) | 5.5 (3.5, 6.0) | 6.0 (3.5, 7.5) | 5.0 (3.3, 7.0) | 0.99
Leukocytes (×10⁶/ml), median (Q1, Q3) | 6.2 (4.1, 11.9) | 4.2 (3.1, 6.0) | 2.9 (2.4, 8.1) | 3.8 (3.1, 6.2) | 4.3 (2.4, 6.3) | 5.5 (4.0, 9.1) | 4.2 (2.6, 7.0) | 0.55
Data are presented as medians (interquartile range, IQR). Q1: 25th percentile. Q3: 75th percentile. aP value according to the Kruskal-Wallis test. bP < 0.05 vs. negative group.
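The P values in Table 5 come from a Kruskal-Wallis test across the pathogen groups, followed by multiple comparisons against the negative group. A minimal sketch of that kind of analysis is shown below; the data are hypothetical placeholders, and a Bonferroni correction of pairwise Mann-Whitney tests stands in for whichever post hoc procedure the authors actually used, which the paper does not specify.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical semen volume values (ml) per group -- placeholders, not the study's data
groups = {
    "MG": rng.normal(1.5, 0.4, 4),
    "Uuu": rng.normal(2.0, 0.5, 7),
    "Negative": rng.normal(2.9, 0.6, 52),
}

# Omnibus Kruskal-Wallis H test across all groups
h_stat, p_omnibus = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_omnibus:.4f}")

# Pairwise Mann-Whitney U tests against the negative group, Bonferroni-adjusted
comparisons = {name: vals for name, vals in groups.items() if name != "Negative"}
m = len(comparisons)
for name, vals in comparisons.items():
    p_raw = stats.mannwhitneyu(vals, groups["Negative"], alternative="two-sided").pvalue
    print(f"{name} vs. negative: adjusted P = {min(1.0, p_raw * m):.4f}")
```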
null
null
[ "Methods", "Patients", "Semen analysis", "Detection of sexually transmitted pathogens", "Statistical analysis" ]
[ "Patients A total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, postmumps orchitis and chronic severe debilitating medical illnesses. This study was approved by The First Affiliated Hospital of USTC Ethical Committee, Anhui, China (No. 2019P040).\nA total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, postmumps orchitis and chronic severe debilitating medical illnesses. This study was approved by The First Affiliated Hospital of USTC Ethical Committee, Anhui, China (No. 2019P040).\nSemen analysis The semen volume, sperm concentration, total sperm count, motility, and morphological normality were determined according to the WHO 2010 guidelines for semen analysis (fifth edition, 2010) [1]. Briefly, semen samples were obtained after an abstinence period of 2–7 days. Semen quality was evaluated after liquefaction at 37 °C for at least 30 min. Semen volume was calculated by weighing the semen sample in a tube. Computer-assisted sperm analysis (CASA) was used to measure sperm concentration, progressive motility and total motility (SAS, Beijing, China). Sperm morphology was determined through Diff-Quick staining (Anke Biotechnology, Hefei, China). Leukocytes were stained by peroxidase test using benzidine (Anke Biotechnology, Hefei, China). Antisperm antibodies (AsAs) were investigated by the mixed antiglobulin reaction (MAR) method (Anke Biotechnology, Hefei, China).\nLaboratory personnel were trained in semen collecting and detecting procedures according to the National Research Institute for Family Planning. In addition, the laboratory participates in an external quality control (EQA) program by NHC Key Laboratory of Male Reproduction and Genetics.\nThe semen volume, sperm concentration, total sperm count, motility, and morphological normality were determined according to the WHO 2010 guidelines for semen analysis (fifth edition, 2010) [1]. Briefly, semen samples were obtained after an abstinence period of 2–7 days. Semen quality was evaluated after liquefaction at 37 °C for at least 30 min. Semen volume was calculated by weighing the semen sample in a tube. Computer-assisted sperm analysis (CASA) was used to measure sperm concentration, progressive motility and total motility (SAS, Beijing, China). Sperm morphology was determined through Diff-Quick staining (Anke Biotechnology, Hefei, China). Leukocytes were stained by peroxidase test using benzidine (Anke Biotechnology, Hefei, China). Antisperm antibodies (AsAs) were investigated by the mixed antiglobulin reaction (MAR) method (Anke Biotechnology, Hefei, China).\nLaboratory personnel were trained in semen collecting and detecting procedures according to the National Research Institute for Family Planning. 
In addition, the laboratory participates in an external quality control (EQA) program by NHC Key Laboratory of Male Reproduction and Genetics.\nDetection of sexually transmitted pathogens For each male patient, 200 µL of semen specimens was used for the extraction of microorganism DNA. Amplified Sexually transmitted pathogen DNA, including Uuu, Uup (Uup1, Uup3, Uup6, and Uup14), CT, HSV-2, MH, MG and NG, was detected by the “flow-through hybridization” technique using the STD (sexually transmitted disease) 6 GenoArray Diagnostic kit [16]. Samples infected with any pathogen were assigned to the pathogen-infected group, those with no infection with any pathogen were assigned to the non-infected group.\nFor each male patient, 200 µL of semen specimens was used for the extraction of microorganism DNA. Amplified Sexually transmitted pathogen DNA, including Uuu, Uup (Uup1, Uup3, Uup6, and Uup14), CT, HSV-2, MH, MG and NG, was detected by the “flow-through hybridization” technique using the STD (sexually transmitted disease) 6 GenoArray Diagnostic kit [16]. Samples infected with any pathogen were assigned to the pathogen-infected group, those with no infection with any pathogen were assigned to the non-infected group.\nStatistical analysis Qualitative variables were described as frequencies (percentages), and quantitative variables were described as the mean ± standard deviation (SD) if normally distributed and medians (interquartile range, IQR) if not. Pearson’s chi-square test and Student’s t-test were used for parametric comparisons, and the Mann-Whitney U test was utilized for nonparametric comparisons. In addition, comparison of sexually transmitted pathogens (Uuu, Uup3, Uup6, MH, MG and CT) were performed by the Kruskal-Wallis test with multiple comparisons. Abnormal semen parameters according to the WHO 2010 recommended standards were listed as follows: semen volume < 1.5 mL, sperm concentration < 15 × 106/mL, progressive motility < 32 × 106/mL and motility morphology < 4 %. P value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 17 (SPSS Inc., Chicago, IL, USA).\nQualitative variables were described as frequencies (percentages), and quantitative variables were described as the mean ± standard deviation (SD) if normally distributed and medians (interquartile range, IQR) if not. Pearson’s chi-square test and Student’s t-test were used for parametric comparisons, and the Mann-Whitney U test was utilized for nonparametric comparisons. In addition, comparison of sexually transmitted pathogens (Uuu, Uup3, Uup6, MH, MG and CT) were performed by the Kruskal-Wallis test with multiple comparisons. Abnormal semen parameters according to the WHO 2010 recommended standards were listed as follows: semen volume < 1.5 mL, sperm concentration < 15 × 106/mL, progressive motility < 32 × 106/mL and motility morphology < 4 %. P value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 17 (SPSS Inc., Chicago, IL, USA).", "A total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, postmumps orchitis and chronic severe debilitating medical illnesses. 
This study was approved by The First Affiliated Hospital of USTC Ethical Committee, Anhui, China (No. 2019P040).", "The semen volume, sperm concentration, total sperm count, motility, and morphological normality were determined according to the WHO 2010 guidelines for semen analysis (fifth edition, 2010) [1]. Briefly, semen samples were obtained after an abstinence period of 2–7 days. Semen quality was evaluated after liquefaction at 37 °C for at least 30 min. Semen volume was calculated by weighing the semen sample in a tube. Computer-assisted sperm analysis (CASA) was used to measure sperm concentration, progressive motility and total motility (SAS, Beijing, China). Sperm morphology was determined through Diff-Quick staining (Anke Biotechnology, Hefei, China). Leukocytes were stained by peroxidase test using benzidine (Anke Biotechnology, Hefei, China). Antisperm antibodies (AsAs) were investigated by the mixed antiglobulin reaction (MAR) method (Anke Biotechnology, Hefei, China).\nLaboratory personnel were trained in semen collecting and detecting procedures according to the National Research Institute for Family Planning. In addition, the laboratory participates in an external quality control (EQA) program by NHC Key Laboratory of Male Reproduction and Genetics.", "For each male patient, 200 µL of semen specimens was used for the extraction of microorganism DNA. Amplified Sexually transmitted pathogen DNA, including Uuu, Uup (Uup1, Uup3, Uup6, and Uup14), CT, HSV-2, MH, MG and NG, was detected by the “flow-through hybridization” technique using the STD (sexually transmitted disease) 6 GenoArray Diagnostic kit [16]. Samples infected with any pathogen were assigned to the pathogen-infected group, those with no infection with any pathogen were assigned to the non-infected group.", "Qualitative variables were described as frequencies (percentages), and quantitative variables were described as the mean ± standard deviation (SD) if normally distributed and medians (interquartile range, IQR) if not. Pearson’s chi-square test and Student’s t-test were used for parametric comparisons, and the Mann-Whitney U test was utilized for nonparametric comparisons. In addition, comparison of sexually transmitted pathogens (Uuu, Uup3, Uup6, MH, MG and CT) were performed by the Kruskal-Wallis test with multiple comparisons. Abnormal semen parameters according to the WHO 2010 recommended standards were listed as follows: semen volume < 1.5 mL, sperm concentration < 15 × 106/mL, progressive motility < 32 × 106/mL and motility morphology < 4 %. P value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 17 (SPSS Inc., Chicago, IL, USA)." ]
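Because the analyses classify samples against the WHO 2010 lower reference limits cited above and in the statistical methods, a small bookkeeping sketch may help. The thresholds below are the ones quoted in this paper (semen volume 1.5 ml, sperm concentration 15 × 10⁶/ml, progressive motility 32 %, total motility 40 %, normal morphology 4 %, and leukocytospermia at ≥ 1 × 10⁶ WBC/ml); the helper function itself is only illustrative and is not code from the study.

```python
# Illustrative helper: flag semen parameters below the WHO 2010 lower reference
# limits used in this study, plus the leukocytospermia cut-off.
WHO_2010_LOWER_LIMITS = {
    "semen_volume_ml": 1.5,
    "sperm_concentration_million_per_ml": 15.0,
    "progressive_motility_percent": 32.0,
    "total_motility_percent": 40.0,
    "normal_morphology_percent": 4.0,
}
LEUKOCYTOSPERMIA_THRESHOLD = 1.0  # x10^6 WBC per ml

def classify_sample(sample: dict) -> dict:
    """Return which parameters are abnormal and whether the sample is leukocytospermic."""
    abnormal = {key: sample[key] < limit
                for key, limit in WHO_2010_LOWER_LIMITS.items() if key in sample}
    lcs = sample.get("leukocytes_million_per_ml", 0.0) >= LEUKOCYTOSPERMIA_THRESHOLD
    return {"abnormal_parameters": abnormal, "leukocytospermia": lcs}

# Example values chosen near the LCS-group medians in Table 1, for illustration only
print(classify_sample({
    "semen_volume_ml": 2.8,
    "sperm_concentration_million_per_ml": 66.4,
    "progressive_motility_percent": 33.8,
    "total_motility_percent": 40.8,
    "normal_morphology_percent": 5.0,
    "leukocytes_million_per_ml": 4.3,
}))
```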
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Patients", "Semen analysis", "Detection of sexually transmitted pathogens", "Statistical analysis", "Results", "Discussion" ]
[ "Up to 30 % of infertile men have leukocytospermia which refers to the presence of a high concentration (≥ 1 × 106/ml) of white blood cells (WBCs) in semen [1]. Leukocytes, including granulocytes (50–60 %), macrophages (20–30 %) and T lymphocytes (2–5 %), have been shown to be a negative factor for semen quality as they induce low sperm motility [2]. The origin of leukocytes in the ejaculate is mainly from the epididymis and is associated with immunosurveillance [3]. Hence, bacterial and viral infections were considered and detected in semen from symptomatic and asymptomatic leukocytospermic infertile males, although they attending andrology clinics often do not have an apparent infection.\nMore than 30 sexually transmitted infections (STIs), including herpes simplex virus (HSV), Chlamydia trachomatis (CT), Ureaplasma spp. (UU), Mycoplasma hominis (MH), Mycoplasma genitalium (MG) and Neisseria gonorrhoeae (NG) have been identified as pathogens that cause genital injury, semen infection, prostatitis, urethritis, epididymitis and orchitis in men [4–7]. Previous studies observed sexually transmitted pathogens in semen in relation to impaired sperm quality and reduced pregnancy rates [8, 9]. However, unlike the strong negative correlation between STIs and fertility in females, the link in males remains controversial [10]. For example, several studies demonstrated that UU increased secondary infertility and reduced sperm quality, including low sperm concentration and motility [11], while other studies did not find a correlation between UU and semen paremeters [12, 13]. Interestingly, UU can be divided into two subtypes, Ureaplasma urealyticum (Uuu) and Ureaplasma parvum (Uup), and studies have shown that Uup is more associated with poor semen quality than Uuu due to differential pathogenicity [14]. Thus, the controversial results in previous studies may be due to the lack of discrimination between Uuu and Uup.\nRecent studies reported that compared with patients with nonleukocytospermia (non-LCS), decreased semen parameters (e.g., sperm concentration, progressive motility, normal morphology) were observed in asymptomatic leukocytospermia (LCS), while those with STI (e.g., UU)-positive leukocytospermia performed more significantly [5, 15]. In this study, we aimed to investigate the prevalence of sexually transmitted pathogens by molecular methods in semen samples from infertile men with and without leukocytes and their effects on semen quality.", "Patients A total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, postmumps orchitis and chronic severe debilitating medical illnesses. This study was approved by The First Affiliated Hospital of USTC Ethical Committee, Anhui, China (No. 2019P040).\nA total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, postmumps orchitis and chronic severe debilitating medical illnesses. 
\nSemen analysis The semen volume, sperm concentration, total sperm count, motility, and morphological normality were determined according to the WHO 2010 guidelines for semen analysis (fifth edition, 2010) [1]. Briefly, semen samples were obtained after an abstinence period of 2–7 days. Semen quality was evaluated after liquefaction at 37 °C for at least 30 min. Semen volume was calculated by weighing the semen sample in a tube. Computer-assisted sperm analysis (CASA) was used to measure sperm concentration, progressive motility and total motility (SAS, Beijing, China). Sperm morphology was determined through Diff-Quick staining (Anke Biotechnology, Hefei, China). Leukocytes were stained by the peroxidase test using benzidine (Anke Biotechnology, Hefei, China). Antisperm antibodies (AsAs) were investigated by the mixed antiglobulin reaction (MAR) method (Anke Biotechnology, Hefei, China).\nLaboratory personnel were trained in semen collection and testing procedures according to the National Research Institute for Family Planning. In addition, the laboratory participates in an external quality assessment (EQA) program run by the NHC Key Laboratory of Male Reproduction and Genetics.\nDetection of sexually transmitted pathogens For each male patient, 200 µL of the semen specimen was used for the extraction of microorganism DNA. Amplified sexually transmitted pathogen DNA, including Uuu, Uup (Uup1, Uup3, Uup6, and Uup14), CT, HSV-2, MH, MG and NG, was detected by the “flow-through hybridization” technique using the STD (sexually transmitted disease) 6 GenoArray Diagnostic kit [16]. Samples infected with any pathogen were assigned to the pathogen-infected group, and those with no infection with any pathogen were assigned to the non-infected group.
\nStatistical analysis Qualitative variables were described as frequencies (percentages), and quantitative variables were described as the mean ± standard deviation (SD) if normally distributed and as medians (interquartile range, IQR) if not. Pearson’s chi-square test and Student’s t-test were used for parametric comparisons, and the Mann-Whitney U test was utilized for nonparametric comparisons. In addition, comparisons among the sexually transmitted pathogen groups (Uuu, Uup3, Uup6, MH, MG and CT) were performed by the Kruskal-Wallis test with multiple comparisons. Abnormal semen parameters according to the WHO 2010 recommended standards were defined as follows: semen volume < 1.5 mL, sperm concentration < 15 × 10^6/mL, progressive motility < 32 % and normal morphology < 4 %. A P value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 17 (SPSS Inc., Chicago, IL, USA).", "A total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to the reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, post-mumps orchitis and chronic severe debilitating medical illnesses. This study was approved by The First Affiliated Hospital of USTC Ethical Committee, Anhui, China (No. 2019P040).", "The semen volume, sperm concentration, total sperm count, motility, and morphological normality were determined according to the WHO 2010 guidelines for semen analysis (fifth edition, 2010) [1]. Briefly, semen samples were obtained after an abstinence period of 2–7 days. Semen quality was evaluated after liquefaction at 37 °C for at least 30 min. Semen volume was calculated by weighing the semen sample in a tube. Computer-assisted sperm analysis (CASA) was used to measure sperm concentration, progressive motility and total motility (SAS, Beijing, China). Sperm morphology was determined through Diff-Quick staining (Anke Biotechnology, Hefei, China). Leukocytes were stained by the peroxidase test using benzidine (Anke Biotechnology, Hefei, China). 
Antisperm antibodies (AsAs) were investigated by the mixed antiglobulin reaction (MAR) method (Anke Biotechnology, Hefei, China).\nLaboratory personnel were trained in semen collection and testing procedures according to the National Research Institute for Family Planning. In addition, the laboratory participates in an external quality assessment (EQA) program run by the NHC Key Laboratory of Male Reproduction and Genetics.", "For each male patient, 200 µL of the semen specimen was used for the extraction of microorganism DNA. Amplified sexually transmitted pathogen DNA, including Uuu, Uup (Uup1, Uup3, Uup6, and Uup14), CT, HSV-2, MH, MG and NG, was detected by the “flow-through hybridization” technique using the STD (sexually transmitted disease) 6 GenoArray Diagnostic kit [16]. Samples infected with any pathogen were assigned to the pathogen-infected group, and those with no infection with any pathogen were assigned to the non-infected group.", "Qualitative variables were described as frequencies (percentages), and quantitative variables were described as the mean ± standard deviation (SD) if normally distributed and as medians (interquartile range, IQR) if not. Pearson’s chi-square test and Student’s t-test were used for parametric comparisons, and the Mann-Whitney U test was utilized for nonparametric comparisons. In addition, comparisons among the sexually transmitted pathogen groups (Uuu, Uup3, Uup6, MH, MG and CT) were performed by the Kruskal-Wallis test with multiple comparisons. Abnormal semen parameters according to the WHO 2010 recommended standards were defined as follows: semen volume < 1.5 mL, sperm concentration < 15 × 10^6/mL, progressive motility < 32 % and normal morphology < 4 %. A P value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 17 (SPSS Inc., Chicago, IL, USA).", "Table 1 displays the mean (± SD) and median (25th, 75th percentile) clinical characteristics and semen parameters of the 195 men in the present study. The mean ages of patients with LCS and without LCS were 32.4 ± 6.9 and 31.6 ± 5.8 years, respectively. Of the studied population, more than 50 % of patients had graduated from college/university. A total of 60.0 % of patients were diagnosed with primary infertility. Drinking and smoking were reported in 70.3 and 42.6 % of patients, respectively. 
There was no significant difference between the two groups regarding age, clinical conditions (e.g., BMI, education, duration of infertility, and type of infertility) or lifestyle (e.g., smoking and drinking).\nTable 1Characteristics and descriptive statistics of the whole cohortClinical characteristicsTotal(n = 195)LCS(n = 88)Non-LCS(n = 107)PAge(year), mean ± s.d.31.9 ± 6.332.4 ± 6.931.6 ± 5.80.39BMI (kg/m2), mean ± s.d.24.6 ± 3.324.1 ± 3.024.8 ± 3.50.39Education, n (%)  primary school5 (2.6)1 (1.1)4 (3.7)0.32  junior high school40 (20.5)21 (23.9)19 (17.8)  High school43 (22.1)22 (25.0)21 (19.6)  College/University107 (54.8)44 (50.0)63 (58.9)Duration of infertility, n (%)  1 year106 (54.4)47 (53.4)59 (55.1)0.35  2 years47 (24.1)25 (28.4)22 (20.6)  ≥ 3 years42 (21.5)16 (18.2)26 (24.3)Type of infertility, n (%)  Primary117 (60.0)49 (55.7)68 (63.6)0.26  Secondary78 (40.0)39 (44.3)39 (36.4)Alcohol status, n (%)  Nondrinkers58 (29.7)24 (27.3)34 (31.8)0.49  drinkers137 (70.3)64 (72.7)73 (68.2)Smoking status, n (%)  Nonsmokers112 (57.4)48 (54.5)64 (59.8)0.46  Smokers83 (42.6)40 (45.5)43 (40.2)Semen parameters  Abstinence time (days), mean ± s.d4.3 ± 1.94.2 ± 1.94.3 ± 2.00.75  Semen volume (ml), median (Q1, Q3)2.9 (2.0, 4.0)2.8 (1.9, 3.7)3.0 (2.2,4.1)0.04  Sperm concentration (×106/ml), median (Q1, Q3)79.0 (37.0, 121.6)66.4 (28.2, 106.0)89.5 (54.5, 135.5)0.01  Progressive motility (%), median (Q1, Q3)38.0 (18.7, 51.0)33.8 (19.1, 45.1)41.6 (16.7, 54.6)0.02  Total motility (%), median (Q1, Q3)46.0 (22.4, 59.3)40.8 (22.5, 55.2)50.2 (21.4, 63.8)0.03  Normal morphology (%), median (Q1, Q3)5.0 (4.0, 8.0)5.0 (3.0, 7.0)5.0 (4.0, 8.0)0.35  leukocytes (×106/ml), median (Q1, Q3)0.4 (0.1, 4.6)4.3 (2.9, 6.5)0.2 (0.1, 0.3)< 0.001  AsAs (%), median (Q1, Q3)1.5 (0, 4.0) (n = 124)1.0 (0, 4.0) (n = 48)2.0 (1.0, 3.8) (n = 76)0.99BMI body mass index. Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson’s chi-square test and Student’s t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile\nCharacteristics and descriptive statistics of the whole cohort\nBMI body mass index. Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson’s chi-square test and Student’s t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile\nAs expected, the median leukocytes in the LCS and non-LCS groups were 4.3 and 0.2, respectively (Table 2). The median (25th, 75th) semen volume from LCS and non-LCS group were 2.8 (1.9, 3.7) and 3.0 (2.2, 4.1) (P = 0.04); sperm concentration were 66.4 (28.2, 106.0) and 89.5 (54.5, 135.5) (P = 0.01); progressive motility were 33.8 (19.1, 45.1) and 41.6 (16.7, 54.6) (P = 0.02); total motility was 40.8 (22.5, 55.2) and 50.2 (21.4, 63.8) (P = 0.03); and normal morphology was 5.0 (3.0, 7.0) and 5.0 (4.0, 8.0) (P = 0.35), respectively. The LCS group had significantly decreased semen volume, sperm concentration, progressive motility, total motility and normal morphology. 
In addition, there were no significant differences between the two groups for AsAs.\nTable 2Frequency of STI pathogens in semen samplesPathogenOverall (n = 195)LCS (n = 88)Non-LCS (n = 107)PUU58 (29.2)25 (28.4)33 (30.8)0.71Uuu, n (%)17 (8.7)7 (8.0)10 (9.3)0.73Uup, n (%)41 (21.0)18 (20.5)23 (21.5)0.86Uup1, n (%)4 (2.1)1 (1.1)3 (2.8)0.63*Uup3, n (%)19 (9.7)10 (11.4)9 (8.4)0.49Uup6, n (%)16 (8.2)7 (8.0)9 (8.4)0.91Uup14, n (%)2 (1.0)0 (0)2 (1.9)0.50*MH, n (%)16 (8.2)10 (11.4)6 (5.6)0.15MG, n (%)4 (2.1)4 (4.5)0 (0)0.04*CT, n (%)7 (3.6)5 (5.7)2 (1.9)0.25*HSV-2, n (%)2 (1.0)1 (1.1)1 (0.9)0.99*NG, n (%)0 (0)0 (0)0 (0)-Total87 (44.6)45 (52.3)42 (39.3)0.10Data are presented as frequency. P values were derived from Pearson’s chi-square test if not otherwise indicated*Fisher’s exact test\nFrequency of STI pathogens in semen samples\nData are presented as frequency. P values were derived from Pearson’s chi-square test if not otherwise indicated\n*Fisher’s exact test\nAs shown in Table 2, the STI-positive semen in subfertility patients was 45.1 %. The STI detection rates of patients with LCS were higher than those of the non-LCS group, (52.3 % vs. 39.3 %), although there was no statistically significant difference between the two groups (P = 0.07). Of 195 semen samples, UU accounted for 58 (29.2 %), which was the most prevalent pathogen detected, followed by MH (8.2 %), CT (3.6 %), MG (2.1 %) and HSV-2 (1.0 %). NG was not detected in any semen sample. Strikingly, the rate of MG-positive samples from the LCS group was significantly higher than that in the non-LCS group (P = 0.04), while other STI pathogens were similarly detected between the two groups.\nTable 3 shows that 13 of the 71 STI-positive samples contained more than one pathogen, including 7 and 6 coinfected samples in the LCS and non-LCS groups, respectively. For the LCS group, 4 and 3 semen samples had 2 and 3 pathogens detections, respectively. Specifically, 5 of 7 semen samples were positive for UU and MH, 3 were positive for UU and MG and 1 was positive for UU and CT. For the non-LCS group, 5 and 1 of these samples were positive for 2 and 3 pathogens, respectively.\nTable 3Total simultaneous STI pathogens detected in the LCS and non-LCS groupsPathogenTotal(n = 195)LCS(n = 88)Non-LCS(n = 107)PNo-infect, n (%)124 (63.6)52 (59.1)72 (67.3)0.24Infect, n (%)  158 (29.7)29 (33.0)29 (27.1)0.58  29 (4.6)4 (4.5)5 (4.7)  34 (2.1)3 (3.4)1 (0.9)Data are presented as frequency. P values were derived from Pearson’s chi-square test if not otherwise indicated\nTotal simultaneous STI pathogens detected in the LCS and non-LCS groups\nData are presented as frequency. P values were derived from Pearson’s chi-square test if not otherwise indicated\nThe semen parameters from the LCS and non-LCS groups are summarized in Table 4. In both the LCS group and the non-LCS group, semen parameters were similar between patients with STI detection and those without STI detection. 
Moreover, the prevalence of STI pathogens in patients with normal semen parameters was not significantly different from that in patients with abnormal semen parameters (e.g., semen volume < 1.5 mL, sperm concentration < 15 × 10^6/mL, progressive motility < 32 %, total motility < 40 % and normal morphology < 4 %).\nTable 4Comparison of semen parameters in patients with LCS and non-LCSLCS (n = 88)Non-LCS (n = 107)Infected (n = 36)Non-infected (n = 52)PInfected (n = 35)Non-infected (n = 72)PAbstinence time (days), mean ± s.d4.2 ± 2.04.3 ± 1.80.814.4 ± 2.04.2 ± 2.00.59Semen volume (ml), median (Q1, Q3)2.5(1.7, 3.4)2.9(2.0, 4.0)0.182.9(2.1, 4.2)3.1(2.2, 4.1)0.92Semen volume < 1.5 (ml), n (%)6 (16.7)4 (7.7)0.192 (5.7)1 (1.4)0.25*Sperm concentration (×106/ml), median (Q1, Q3)53.8(24.3, 101.4)69.2(28.8, 106.8)0.6183.8(38.9, 133.2)90.2(55.4, 138.3)0.55Sperm concentration < 15 × 106/ml, n (%)4 (11.1)6 (11.5)0.952 (5.7)8 (11.1)0.49*Progressive motility (%), median (Q1, Q3)35.6(19.9, 45.3)31.5(19.0, 45.1)0.6546.5(25.4, 59.2)40.0(15.7, 54.4)0.39Progressive motility < 32 %, n (%)14 (38.9)27 (51.9)0.2310 (28.6)27 (37.5)0.36Total motility (%), median (Q1, Q3)41.2(23.2, 55.2)38.2(22.5, 55.6)0.7251.7(30.2, 63.9)48.5(19.4, 63.7)0.63Total motility < 40 %, n (%)16 (44.4)27 (51.9)0.4910 (28.6)27 (37.5)0.36Normal morphology (%), median (Q1, Q3)5.0(3.0, 6.8)5.0(3.3, 7.0)0.946.0(4.0, 8.0)5.0(4.0, 8.0)0.29Normal morphology < 4 %, n (%)10 (27.8)13 (25.0)0.778 (22.9)16 (22.2)0.94Leukocytes (×106/ml), median (Q1, Q3)6.4(3.0, 6.3)4.2(2.6, 7.0)0.730.2(0.1, 0.3)0.2(0.1, 0.3)0.21Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson’s chi-square test and Student’s t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile*Fisher’s exact test\nTable 5 lists the association of semen quality with STI pathogens in the LCS group. None of the semen parameters differed significantly in UU-infected patients, including the Uuu-, Uup3- and Uup6-positive subgroups. Interestingly, multiple comparison analysis found that semen volume was significantly different among groups. Semen volume was lower in patients with MG infection than in those without any STI pathogen detection. 
Furthermore, leukocytes in semen were not significantly different between the individual STI-positive and STI-negative groups.\nTable 5Sexually transmitted infections on semen quality in patients with LCSUuuUup3Uup6MHMGCTNegativeP valueaSemen volume (ml), median (Q1, Q3)2.0(1.1, 2.9)3.0(2.7, 5.1)2.8(1.8, 3.6)2.1(1.6, 2.8)1.5(1.1, 1.8)b2.0(1.0, 2.5)2.9(2.0, 4.0)0.003Sperm concentration (× 106/ml), median (Q1, Q3)35.1(26.8, 62.0)60.9(27.7, 106.1)110.6(17.6, 254.6)45.1(22.8, 83.2)48.2(27.0, 97.7)46.7(30.8, 121.5)69.2(28.8, 106.8)0.65Progressive sperm motility (%), median (Q1, Q3)33.1(18.4, 43.2)30.3(21.3, 38.6)32.7(10.5, 58.0)34.7(15.8, 41.6)40.3(19.6, 50.1)39.5(36.5, 55.4)31.5(19.0, 45.1)0.69Total sperm motility (%), median (Q1, Q3)37.7(19.8, 50.0)39.6(27.0, 50.8)39.1(12.1, 68.1)40.2(19.2, 46.1)46.0(22.2, 55.5)55.2(44.3, 65.1)38.2(22.5, 55.6)0.56Normal sperm morphology (%), median (Q1, Q3)5.0(3.3, 6.0)5.5(2.8, 8.0)4.0(2.0, 11.0)5.0(4.0, 6.0)5.5(3.5, 6.0)6.0(3.5, 7.5)5.0(3.3, 7.0)0.99Leukocytes (×106/ml), median (Q1, Q3)6.2(4.1, 11.9)4.2(3.1, 6.0)2.9(2.4, 8.1)3.8(3.1, 6.2)4.3(2.4, 6.3)5.5(4.0, 9.1)4.2(2.6, 7.0)0.55Data are presented as medians (interquartile range, IQR). Q1: 25th percentile. Q3: 75th percentile. aP value according to the Kruskal-Wallis test. bP < 0.05 vs. negative group", "During the past decade, the association of STI pathogens, leukocytes and semen quality in male infertility has remained a controversial topic [15]. In this context, we examined the effect of STI pathogens on semen quality in subfertile men with LCS and without LCS using the STD6 GenoArray Diagnostic assay. The method allows the detection of six STI pathogens (NG, CT, UU (Uuu, Uup), MH, MG and HSV-2). The prevalence of STI pathogens in semen samples from men with and without leukocytospermia was 52.3 and 39.3 %, respectively. We also showed that men with MG infection had lower semen volumes than men without infection in the LCS group. However, semen parameters were not significantly different between UU-positive and STI-negative patients.\nA wide range of STI-positive rates has been observed in semen samples from infertile men; this variability may be due to differences in the studied populations, sample sizes and detection methods for STI pathogens [17, 18]. The diagnosis of STIs, and therefore their treatment, remains difficult and debated in relation to male fertility [19]. 
A number of studies have identified the presence of pathogenic bacteria in semen based on culture or PCR [18]. Because of the limitations of culture methods, such as long turnaround times and low specificity, PCR-based diagnostic methods provide rapid and sensitive detection of STI pathogens [20, 21]. In this study, we detected STI pathogens in semen samples by the GenoArray assay, a qualitative PCR-based in vitro test that identifies the six STD panel pathogens in a single run. Hence, this assay could be a potential screening method for semen pathogens in infertility or STD clinics.\nHSV infections are associated with abnormal sperm parameters and male infertility by indirectly triggering immune responses [22, 23]. Approximately 2–50 % of infertile men have been reported to be positive for HSV-1/2 DNA [24, 25]. In the current study, the prevalence of HSV-2 in semen from subfertility patients was 1.0 %, with no difference between the LCS and non-LCS groups. As only a low rate of HSV-2-positive semen samples was detected, we did not observe an association of HSV-2 with semen quality.\nCT infects the genital tract, resulting in a decrease in seminal volume, sperm concentration, motility and normal morphology [26]. Current studies show that the prevalence of CT is 0.4–42.3 % in asymptomatic males [9]. We observed that the rate of CT-positive semen samples from subfertility patients was 3.6 %. Additionally, CT was associated with a slight decrease in semen volume. The mechanism by which CT affects sperm quality has not yet been elucidated.\nNG infection has been associated with urethritis, epididymitis and prostatitis and thus can lead to infertility. However, the reported prevalence of NG is 0–6.5 %, lower in infertile men than that of other STI pathogens [18, 27]. Thus, few studies have investigated its correlation with semen parameters. Similar to previous studies, we detected no NG infection in semen samples from subfertility patients with or without LCS.\nUU is the most common microorganism of the lower genitourinary tract in males and is a potentially pathogenic species with an etiologic role in male infertility [28]. The UU-positive rate in the semen samples of infertile men ranges from 5 to 42 %. Historically, the association of UU with semen parameters has been inconsistent. Some studies reported that UU-positive semen had lower sperm concentrations and sperm motility than UU-negative semen, whereas other studies did not observe any correlation between UU and semen parameters [29]. Of interest, our present study confirmed that UU was the most widespread pathogen in semen (positive in 29.2 % of samples), with no difference in UU infection between subfertile men with LCS and without LCS. A previous study reported that the frequency of Uuu in the semen of infertile men was higher than that of Uup, indicating that Uup may have a lower correlation with infertility. In this study, we observed a higher prevalence of Uup than Uuu in subfertile men. Indeed, there was no significant association between semen quality and either Uuu or Uup infection. Although we did not find an obvious impact of UU on semen quality, further work exploring infertility induced by the different UU species is required.\nMH is an STI pathogen that often causes gynecologic and obstetric complications in females, such as postabortion fevers, rupture of fetal membranes, and pelvic inflammatory disease [30]. 
Although MH is commonly detected in men with nongonococcal urethritis (NGU), its role in sperm function and male fertility remains unclear. The prevalence of MH in our study was comparable to that reported by Gdoura et al. (8.2 % vs. 9.2 %) [29]. The detection rate of MH was higher in patients with LCS than in those without LCS, although the difference was not statistically significant, suggesting a possible relationship between MH infection and male infertility.\nThe bacterium MG was first isolated and identified as an STI pathogen in two urethral specimens from men with NGU. Studies also showed that the rate of MG infection was higher in symptomatic patients than in asymptomatic patients [31, 32]. Previously, up to 17.1 % of infertile men had MG infection [33], indicating that this pathogen may be important for male fertility outcomes. In line with another study, the prevalence of MG in asymptomatic patients was not high (2.1 %) [34]. In addition, we observed that the MG-positive rate of subfertile men with LCS was higher than that of men without LCS, suggesting a link between the presence of MG and male infertility. Regarding semen quality parameters, there was a decrease in seminal volume in samples with MG infection compared to those without MG infection. Therefore, our results suggest that MG infection of the male genital tract could induce infertility. Given the high azithromycin treatment failure rate (37.4 %) for MG infection in men of infertile couples in China, attention should be paid to the possibility that inflammatory changes caused by long-term MG infection result in infertility [33].\nWith regard to infertility, men with LCS have been reported to have a reduced seminal fructose concentration compared with non-LCS men, suggesting that LCS may affect seminal vesicle function. STI pathogens, including MG and CT, are associated with low semen volume, suggesting that subclinical infections may affect epididymal function.\nThere are several strengths to our findings. First, we showed a high rate of STI pathogens in asymptomatic subfertile men. The prevalence of STI pathogens was not associated with the leukocyte counts in semen, suggesting that the finding of more than 1 × 10^6 peroxidase-positive cells per milliliter cannot be used to identify or exclude STIs and asymptomatic genital tract infections. Second, in this cross-sectional study, we showed that subfertile men with STIs had lower semen volumes than those without infection. Because of the probable impact of infection on semen quality, the practice of performing STI testing only in the presence of leukocytospermia should be questioned. Thus, screening for STI pathogens could be part of the routine diagnostic workup of subfertile men.\nThe present study has inherent limitations. First, there is a possible selection bias because this was a single-center cross-sectional study. Second, a control group of normal fertile men is lacking. Third, because antibiotic therapy against STI pathogens was not given, the potential impact of treating the infections could not be assessed. A final limitation is the small sample size; the results need to be validated in a larger number of patients. Nevertheless, our observations show the importance of investigating semen infection in the clinical practice diagnostic workup of subfertile men.\nOverall, we demonstrated that STI pathogens were detected at a high rate in semen samples from asymptomatic subfertile men using a sensitive molecular assay, with no difference in prevalence between the LCS and non-LCS groups. 
LCS was associated with a reduction in semen quality but was not associated with STIs. Thus, the present findings may help clinicians assess the association between STIs and LCS in subfertile men." ]
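The Statistical analysis subsection above names the tests used (Student's t-test, Mann-Whitney U, Pearson's chi-square or Fisher's exact, and Kruskal-Wallis) without showing how they map onto the data. The following Python sketch is not the authors' code; it is a minimal illustration of how those comparisons could be run with pandas and SciPy, assuming a hypothetical per-patient table `semen.csv` with column names such as `group`, `volume_ml`, `mg_positive` and `pathogen_group`, which are not taken from the study.

```python
# Illustrative sketch only - not the study's analysis code. Assumes a
# hypothetical CSV with one row per patient and the column names used below.
import pandas as pd
from scipy import stats

df = pd.read_csv("semen.csv")  # hypothetical file
lcs = df[df["group"] == "LCS"]
non_lcs = df[df["group"] == "non-LCS"]

# Nonparametric comparison of a semen parameter between LCS and non-LCS
# (Mann-Whitney U test, as used for non-normally distributed variables).
u_stat, p_volume = stats.mannwhitneyu(
    lcs["volume_ml"].dropna(), non_lcs["volume_ml"].dropna(), alternative="two-sided"
)

# Frequency comparison of one pathogen between groups: chi-square on the
# 2x2 contingency table, or Fisher's exact test when expected counts are small.
table = pd.crosstab(df["group"], df["mg_positive"])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any():
    odds_ratio, p_fisher = stats.fisher_exact(table)

# Kruskal-Wallis test of one semen parameter across pathogen-defined groups
# (e.g., Uuu, Uup3, Uup6, MH, MG, CT and STI-negative).
groups = [g["volume_ml"].dropna() for _, g in df.groupby("pathogen_group")]
h_stat, p_kw = stats.kruskal(*groups)

print(f"Mann-Whitney U p = {p_volume:.3f}, Kruskal-Wallis p = {p_kw:.3f}")
```

The 0.05 significance threshold and the choice between chi-square and Fisher's exact test follow the conventions stated in the Methods and the table footnotes.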
[ "introduction", null, null, null, null, null, "results", "discussion" ]
[ "Leukocytospermia", "Sexually transmitted infections", "Semen parameters" ]
Introduction: Up to 30 % of infertile men have leukocytospermia which refers to the presence of a high concentration (≥ 1 × 106/ml) of white blood cells (WBCs) in semen [1]. Leukocytes, including granulocytes (50–60 %), macrophages (20–30 %) and T lymphocytes (2–5 %), have been shown to be a negative factor for semen quality as they induce low sperm motility [2]. The origin of leukocytes in the ejaculate is mainly from the epididymis and is associated with immunosurveillance [3]. Hence, bacterial and viral infections were considered and detected in semen from symptomatic and asymptomatic leukocytospermic infertile males, although they attending andrology clinics often do not have an apparent infection. More than 30 sexually transmitted infections (STIs), including herpes simplex virus (HSV), Chlamydia trachomatis (CT), Ureaplasma spp. (UU), Mycoplasma hominis (MH), Mycoplasma genitalium (MG) and Neisseria gonorrhoeae (NG) have been identified as pathogens that cause genital injury, semen infection, prostatitis, urethritis, epididymitis and orchitis in men [4–7]. Previous studies observed sexually transmitted pathogens in semen in relation to impaired sperm quality and reduced pregnancy rates [8, 9]. However, unlike the strong negative correlation between STIs and fertility in females, the link in males remains controversial [10]. For example, several studies demonstrated that UU increased secondary infertility and reduced sperm quality, including low sperm concentration and motility [11], while other studies did not find a correlation between UU and semen paremeters [12, 13]. Interestingly, UU can be divided into two subtypes, Ureaplasma urealyticum (Uuu) and Ureaplasma parvum (Uup), and studies have shown that Uup is more associated with poor semen quality than Uuu due to differential pathogenicity [14]. Thus, the controversial results in previous studies may be due to the lack of discrimination between Uuu and Uup. Recent studies reported that compared with patients with nonleukocytospermia (non-LCS), decreased semen parameters (e.g., sperm concentration, progressive motility, normal morphology) were observed in asymptomatic leukocytospermia (LCS), while those with STI (e.g., UU)-positive leukocytospermia performed more significantly [5, 15]. In this study, we aimed to investigate the prevalence of sexually transmitted pathogens by molecular methods in semen samples from infertile men with and without leukocytes and their effects on semen quality. Methods: Patients A total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, postmumps orchitis and chronic severe debilitating medical illnesses. This study was approved by The First Affiliated Hospital of USTC Ethical Committee, Anhui, China (No. 2019P040). A total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, postmumps orchitis and chronic severe debilitating medical illnesses. 
This study was approved by The First Affiliated Hospital of USTC Ethical Committee, Anhui, China (No. 2019P040). Semen analysis The semen volume, sperm concentration, total sperm count, motility, and morphological normality were determined according to the WHO 2010 guidelines for semen analysis (fifth edition, 2010) [1]. Briefly, semen samples were obtained after an abstinence period of 2–7 days. Semen quality was evaluated after liquefaction at 37 °C for at least 30 min. Semen volume was calculated by weighing the semen sample in a tube. Computer-assisted sperm analysis (CASA) was used to measure sperm concentration, progressive motility and total motility (SAS, Beijing, China). Sperm morphology was determined through Diff-Quick staining (Anke Biotechnology, Hefei, China). Leukocytes were stained by peroxidase test using benzidine (Anke Biotechnology, Hefei, China). Antisperm antibodies (AsAs) were investigated by the mixed antiglobulin reaction (MAR) method (Anke Biotechnology, Hefei, China). Laboratory personnel were trained in semen collecting and detecting procedures according to the National Research Institute for Family Planning. In addition, the laboratory participates in an external quality control (EQA) program by NHC Key Laboratory of Male Reproduction and Genetics. The semen volume, sperm concentration, total sperm count, motility, and morphological normality were determined according to the WHO 2010 guidelines for semen analysis (fifth edition, 2010) [1]. Briefly, semen samples were obtained after an abstinence period of 2–7 days. Semen quality was evaluated after liquefaction at 37 °C for at least 30 min. Semen volume was calculated by weighing the semen sample in a tube. Computer-assisted sperm analysis (CASA) was used to measure sperm concentration, progressive motility and total motility (SAS, Beijing, China). Sperm morphology was determined through Diff-Quick staining (Anke Biotechnology, Hefei, China). Leukocytes were stained by peroxidase test using benzidine (Anke Biotechnology, Hefei, China). Antisperm antibodies (AsAs) were investigated by the mixed antiglobulin reaction (MAR) method (Anke Biotechnology, Hefei, China). Laboratory personnel were trained in semen collecting and detecting procedures according to the National Research Institute for Family Planning. In addition, the laboratory participates in an external quality control (EQA) program by NHC Key Laboratory of Male Reproduction and Genetics. Detection of sexually transmitted pathogens For each male patient, 200 µL of semen specimens was used for the extraction of microorganism DNA. Amplified Sexually transmitted pathogen DNA, including Uuu, Uup (Uup1, Uup3, Uup6, and Uup14), CT, HSV-2, MH, MG and NG, was detected by the “flow-through hybridization” technique using the STD (sexually transmitted disease) 6 GenoArray Diagnostic kit [16]. Samples infected with any pathogen were assigned to the pathogen-infected group, those with no infection with any pathogen were assigned to the non-infected group. For each male patient, 200 µL of semen specimens was used for the extraction of microorganism DNA. Amplified Sexually transmitted pathogen DNA, including Uuu, Uup (Uup1, Uup3, Uup6, and Uup14), CT, HSV-2, MH, MG and NG, was detected by the “flow-through hybridization” technique using the STD (sexually transmitted disease) 6 GenoArray Diagnostic kit [16]. Samples infected with any pathogen were assigned to the pathogen-infected group, those with no infection with any pathogen were assigned to the non-infected group. 
Statistical analysis Qualitative variables were described as frequencies (percentages), and quantitative variables were described as the mean ± standard deviation (SD) if normally distributed and medians (interquartile range, IQR) if not. Pearson’s chi-square test and Student’s t-test were used for parametric comparisons, and the Mann-Whitney U test was utilized for nonparametric comparisons. In addition, comparison of sexually transmitted pathogens (Uuu, Uup3, Uup6, MH, MG and CT) were performed by the Kruskal-Wallis test with multiple comparisons. Abnormal semen parameters according to the WHO 2010 recommended standards were listed as follows: semen volume < 1.5 mL, sperm concentration < 15 × 106/mL, progressive motility < 32 × 106/mL and motility morphology < 4 %. P value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 17 (SPSS Inc., Chicago, IL, USA). Qualitative variables were described as frequencies (percentages), and quantitative variables were described as the mean ± standard deviation (SD) if normally distributed and medians (interquartile range, IQR) if not. Pearson’s chi-square test and Student’s t-test were used for parametric comparisons, and the Mann-Whitney U test was utilized for nonparametric comparisons. In addition, comparison of sexually transmitted pathogens (Uuu, Uup3, Uup6, MH, MG and CT) were performed by the Kruskal-Wallis test with multiple comparisons. Abnormal semen parameters according to the WHO 2010 recommended standards were listed as follows: semen volume < 1.5 mL, sperm concentration < 15 × 106/mL, progressive motility < 32 × 106/mL and motility morphology < 4 %. P value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 17 (SPSS Inc., Chicago, IL, USA). Patients: A total of 195 men who sought a fertility evaluation at the Reproductive Center of The First Affiliated Hospital of University of Science and Technology of China (USTC) from July 2019 to July 2020 were included in this study. The exclusion criteria were men who were diagnosed with genetic defects related to reproductive tract, azoospermia, varicocele, testicular trauma, cryptorchidism, postmumps orchitis and chronic severe debilitating medical illnesses. This study was approved by The First Affiliated Hospital of USTC Ethical Committee, Anhui, China (No. 2019P040). Semen analysis: The semen volume, sperm concentration, total sperm count, motility, and morphological normality were determined according to the WHO 2010 guidelines for semen analysis (fifth edition, 2010) [1]. Briefly, semen samples were obtained after an abstinence period of 2–7 days. Semen quality was evaluated after liquefaction at 37 °C for at least 30 min. Semen volume was calculated by weighing the semen sample in a tube. Computer-assisted sperm analysis (CASA) was used to measure sperm concentration, progressive motility and total motility (SAS, Beijing, China). Sperm morphology was determined through Diff-Quick staining (Anke Biotechnology, Hefei, China). Leukocytes were stained by peroxidase test using benzidine (Anke Biotechnology, Hefei, China). Antisperm antibodies (AsAs) were investigated by the mixed antiglobulin reaction (MAR) method (Anke Biotechnology, Hefei, China). Laboratory personnel were trained in semen collecting and detecting procedures according to the National Research Institute for Family Planning. 
In addition, the laboratory participates in an external quality control (EQA) program by NHC Key Laboratory of Male Reproduction and Genetics. Detection of sexually transmitted pathogens: For each male patient, 200 µL of semen specimens was used for the extraction of microorganism DNA. Amplified Sexually transmitted pathogen DNA, including Uuu, Uup (Uup1, Uup3, Uup6, and Uup14), CT, HSV-2, MH, MG and NG, was detected by the “flow-through hybridization” technique using the STD (sexually transmitted disease) 6 GenoArray Diagnostic kit [16]. Samples infected with any pathogen were assigned to the pathogen-infected group, those with no infection with any pathogen were assigned to the non-infected group. Statistical analysis: Qualitative variables were described as frequencies (percentages), and quantitative variables were described as the mean ± standard deviation (SD) if normally distributed and medians (interquartile range, IQR) if not. Pearson’s chi-square test and Student’s t-test were used for parametric comparisons, and the Mann-Whitney U test was utilized for nonparametric comparisons. In addition, comparison of sexually transmitted pathogens (Uuu, Uup3, Uup6, MH, MG and CT) were performed by the Kruskal-Wallis test with multiple comparisons. Abnormal semen parameters according to the WHO 2010 recommended standards were listed as follows: semen volume < 1.5 mL, sperm concentration < 15 × 106/mL, progressive motility < 32 × 106/mL and motility morphology < 4 %. P value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 17 (SPSS Inc., Chicago, IL, USA). Results: Table 1 displays the mean (± SD) and median (25th, 75th) clinical characteristics and semen parameters from 195 men in the present study. The ages of patients with LCS and without LCS were 32.4 ± 6.9 and 31.6 ± 5.8 years, respectively. Of the studied population, more than 50 % of patients graduated from college/university. A total of 60.0 % of patients were diagnosed with primary infertility. Drinking and smoking were found in 70.3 and 42.6 % of patients, respectively. There was no significant difference between the two groups regarding age, clinical conditions (e.g., BMI, education, duration of infertility, and type of infertility) or lifestyle (e.g., smoking and drinking). 
Table 1Characteristics and descriptive statistics of the whole cohortClinical characteristicsTotal(n = 195)LCS(n = 88)Non-LCS(n = 107)PAge(year), mean ± s.d.31.9 ± 6.332.4 ± 6.931.6 ± 5.80.39BMI (kg/m2), mean ± s.d.24.6 ± 3.324.1 ± 3.024.8 ± 3.50.39Education, n (%)  primary school5 (2.6)1 (1.1)4 (3.7)0.32  junior high school40 (20.5)21 (23.9)19 (17.8)  High school43 (22.1)22 (25.0)21 (19.6)  College/University107 (54.8)44 (50.0)63 (58.9)Duration of infertility, n (%)  1 year106 (54.4)47 (53.4)59 (55.1)0.35  2 years47 (24.1)25 (28.4)22 (20.6)  ≥ 3 years42 (21.5)16 (18.2)26 (24.3)Type of infertility, n (%)  Primary117 (60.0)49 (55.7)68 (63.6)0.26  Secondary78 (40.0)39 (44.3)39 (36.4)Alcohol status, n (%)  Nondrinkers58 (29.7)24 (27.3)34 (31.8)0.49  drinkers137 (70.3)64 (72.7)73 (68.2)Smoking status, n (%)  Nonsmokers112 (57.4)48 (54.5)64 (59.8)0.46  Smokers83 (42.6)40 (45.5)43 (40.2)Semen parameters  Abstinence time (days), mean ± s.d4.3 ± 1.94.2 ± 1.94.3 ± 2.00.75  Semen volume (ml), median (Q1, Q3)2.9 (2.0, 4.0)2.8 (1.9, 3.7)3.0 (2.2,4.1)0.04  Sperm concentration (×106/ml), median (Q1, Q3)79.0 (37.0, 121.6)66.4 (28.2, 106.0)89.5 (54.5, 135.5)0.01  Progressive motility (%), median (Q1, Q3)38.0 (18.7, 51.0)33.8 (19.1, 45.1)41.6 (16.7, 54.6)0.02  Total motility (%), median (Q1, Q3)46.0 (22.4, 59.3)40.8 (22.5, 55.2)50.2 (21.4, 63.8)0.03  Normal morphology (%), median (Q1, Q3)5.0 (4.0, 8.0)5.0 (3.0, 7.0)5.0 (4.0, 8.0)0.35  leukocytes (×106/ml), median (Q1, Q3)0.4 (0.1, 4.6)4.3 (2.9, 6.5)0.2 (0.1, 0.3)< 0.001  AsAs (%), median (Q1, Q3)1.5 (0, 4.0) (n = 124)1.0 (0, 4.0) (n = 48)2.0 (1.0, 3.8) (n = 76)0.99BMI body mass index. Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson’s chi-square test and Student’s t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile Characteristics and descriptive statistics of the whole cohort BMI body mass index. Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson’s chi-square test and Student’s t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile As expected, the median leukocytes in the LCS and non-LCS groups were 4.3 and 0.2, respectively (Table 2). The median (25th, 75th) semen volume from LCS and non-LCS group were 2.8 (1.9, 3.7) and 3.0 (2.2, 4.1) (P = 0.04); sperm concentration were 66.4 (28.2, 106.0) and 89.5 (54.5, 135.5) (P = 0.01); progressive motility were 33.8 (19.1, 45.1) and 41.6 (16.7, 54.6) (P = 0.02); total motility was 40.8 (22.5, 55.2) and 50.2 (21.4, 63.8) (P = 0.03); and normal morphology was 5.0 (3.0, 7.0) and 5.0 (4.0, 8.0) (P = 0.35), respectively. The LCS group had significantly decreased semen volume, sperm concentration, progressive motility, total motility and normal morphology. In addition, there were no significant differences between the two groups for AsAs. 
Table 2Frequency of STI pathogens in semen samplesPathogenOverall (n = 195)LCS (n = 88)Non-LCS (n = 107)PUU58 (29.2)25 (28.4)33 (30.8)0.71Uuu, n (%)17 (8.7)7 (8.0)10 (9.3)0.73Uup, n (%)41 (21.0)18 (20.5)23 (21.5)0.86Uup1, n (%)4 (2.1)1 (1.1)3 (2.8)0.63*Uup3, n (%)19 (9.7)10 (11.4)9 (8.4)0.49Uup6, n (%)16 (8.2)7 (8.0)9 (8.4)0.91Uup14, n (%)2 (1.0)0 (0)2 (1.9)0.50*MH, n (%)16 (8.2)10 (11.4)6 (5.6)0.15MG, n (%)4 (2.1)4 (4.5)0 (0)0.04*CT, n (%)7 (3.6)5 (5.7)2 (1.9)0.25*HSV-2, n (%)2 (1.0)1 (1.1)1 (0.9)0.99*NG, n (%)0 (0)0 (0)0 (0)-Total87 (44.6)45 (52.3)42 (39.3)0.10Data are presented as frequency. P values were derived from Pearson’s chi-square test if not otherwise indicated*Fisher’s exact test Frequency of STI pathogens in semen samples Data are presented as frequency. P values were derived from Pearson’s chi-square test if not otherwise indicated *Fisher’s exact test As shown in Table 2, the STI-positive semen in subfertility patients was 45.1 %. The STI detection rates of patients with LCS were higher than those of the non-LCS group, (52.3 % vs. 39.3 %), although there was no statistically significant difference between the two groups (P = 0.07). Of 195 semen samples, UU accounted for 58 (29.2 %), which was the most prevalent pathogen detected, followed by MH (8.2 %), CT (3.6 %), MG (2.1 %) and HSV-2 (1.0 %). NG was not detected in any semen sample. Strikingly, the rate of MG-positive samples from the LCS group was significantly higher than that in the non-LCS group (P = 0.04), while other STI pathogens were similarly detected between the two groups. Table 3 shows that 13 of the 71 STI-positive samples contained more than one pathogen, including 7 and 6 coinfected samples in the LCS and non-LCS groups, respectively. For the LCS group, 4 and 3 semen samples had 2 and 3 pathogens detections, respectively. Specifically, 5 of 7 semen samples were positive for UU and MH, 3 were positive for UU and MG and 1 was positive for UU and CT. For the non-LCS group, 5 and 1 of these samples were positive for 2 and 3 pathogens, respectively. Table 3Total simultaneous STI pathogens detected in the LCS and non-LCS groupsPathogenTotal(n = 195)LCS(n = 88)Non-LCS(n = 107)PNo-infect, n (%)124 (63.6)52 (59.1)72 (67.3)0.24Infect, n (%)  158 (29.7)29 (33.0)29 (27.1)0.58  29 (4.6)4 (4.5)5 (4.7)  34 (2.1)3 (3.4)1 (0.9)Data are presented as frequency. P values were derived from Pearson’s chi-square test if not otherwise indicated Total simultaneous STI pathogens detected in the LCS and non-LCS groups Data are presented as frequency. P values were derived from Pearson’s chi-square test if not otherwise indicated The semen parameters from the LCS and non-LCS groups are summarized in Table 4. In both the LCS group and the non-LCS group, semen parameters were similar between patients with STI detection and those without STI detection. Moreover, the prevalence of STI pathogens in patients with normal semen parameters was not significantly different from that in patients with abnormal semen parameters (e.g., semen volume < 1.5 mL, sperm concentration < 15 × 106/mL, progressive motility < 32 × 106/mL, total motility < 40 × 106/mL and motility morphology < 4 %). 
Table 4Comparison of semen parameters in patients with LCS and non-LCSLCS (n = 88)Non-LCS (n = 107)Infected (n = 36)Non-infected (n = 52)PInfected (n = 35)Non-infected (n = 72)PAbstinence time (days), mean ± s.d4.2 ± 2.04.3 ± 1.80.814.4 ± 2.04.2 ± 2.00.59Semen volume (ml), median (Q1, Q3)2.5(1.7, 3.4)2.9(2.0, 4.0)0.182.9(2.1, 4.2)3.1(2.2, 4.1)0.92Semen volume < 1.5 (ml), n (%)6 (16.7)4 (7.7)0.192 (5.7)1 (1.4)0.25*Sperm concentration (×106/ml), median (Q1, Q3)53.8(24.3, 101.4)69.2(28.8, 106.8)0.6183.8(38.9, 133.2)90.2(55.4, 138.3)0.55Sperm concentration < 15 × 106/ml, n (%)4 (11.1)6 (11.5)0.952 (5.7)8 (11.1)0.49*Progressive motility (%), median (Q1, Q3)35.6(19.9, 45.3)31.5(19.0, 45.1)0.6546.5(25.4, 59.2)40.0(15.7, 54.4)0.39Progressive motility < 32 %, n (%)14 (38.9)27 (51.9)0.2310 (28.6)27 (37.5)0.36Total motility (%), median (Q1, Q3)41.2(23.2, 55.2)38.2(22.5, 55.6)0.7251.7(30.2, 63.9)48.5(19.4, 63.7)0.63Total motility < 40 %, n (%)16 (44.4)27 (51.9)0.4910 (28.6)27 (37.5)0.36Normal morphology (%), median (Q1, Q3)5.0(3.0, 6.8)5.0(3.3, 7.0)0.946.0(4.0, 8.0)5.0(4.0, 8.0)0.29Normal morphology < 4 %, n (%)10 (27.8)13 (25.0)0.778 (22.9)16 (22.2)0.94Leukocytes (×106/ml), median (Q1, Q3)6.4(3.0, 6.3)4.2(2.6, 7.0)0.730.2(0.1, 0.3)0.2(0.1, 0.3)0.21Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson’s chi-square test and Student’s t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile*Fisher’s exact test Comparison of semen parameters in patients with LCS and non-LCS 2.5 (1.7, 3.4) 2.9 (2.0, 4.0) 2.9 (2.1, 4.2) 3.1 (2.2, 4.1) 53.8 (24.3, 101.4) 69.2 (28.8, 106.8) 83.8 (38.9, 133.2) 90.2 (55.4, 138.3) 35.6 (19.9, 45.3) 31.5 (19.0, 45.1) 46.5 (25.4, 59.2) 40.0 (15.7, 54.4) 41.2 (23.2, 55.2) 38.2 (22.5, 55.6) 51.7 (30.2, 63.9) 48.5 (19.4, 63.7) 5.0 (3.0, 6.8) 5.0 (3.3, 7.0) 6.0 (4.0, 8.0) 5.0 (4.0, 8.0) 6.4 (3.0, 6.3) 4.2 (2.6, 7.0) 0.2 (0.1, 0.3) 0.2 (0.1, 0.3) Data are presented as frequency (percentage) for categorical variables and mean ± standard deviation (SD) for continuous variables if normally distributed and medians (interquartile range, IQR) if not. P values were derived from Pearson’s chi-square test and Student’s t-test for parametric comparisons and the Mann-Whitney U test for nonparametric comparisons. Q1: 25th percentile. Q3: 75th percentile *Fisher’s exact test Table 5 list the association of semen quality and STI pathogens in the LCS group. All semen parameters were not statistically significant for the UU-infected patients, particularly for Uuu-, Uup3- and Uup6-positive patients. Interestingly, multiple comparison analysis found that semen volume was significantly different among groups. Semen volume was lower in patients with MG infection than in those without any STI pathogen detection. Furthermore, leucocytes in semen were not significantly different between the individual STI-positive and STI-negative groups. 
Table 5Sexually transmitted infections on semen quality in patients with LCSUuuUup3Uup6MHMGCTNegativeP valueaSemen volume (ml), median (Q1, Q3)2.0(1.1, 2.9)3.0(2.7, 5.1)2.8(1.8, 3.6)2.1(1.6, 2.8)1.5(1.1, 1.8)b2.0(1.0, 2.5)2.9(2.0, 4.0)0.003Sperm concentration (× 106/ml), median (Q1, Q3)35.1(26.8, 62.0)60.9(27.7, 106.1)110.6(17.6, 254.6)45.1(22.8, 83.2)48.2(27.0, 97.7)46.7(30.8, 121.5)69.2(28.8, 106.8)0.65Progressive sperm motility (%), median (Q1, Q3)33.1(18.4, 43.2)30.3(21.3, 38.6)32.7(10.5, 58.0)34.7(15.8, 41.6)40.3(19.6, 50.1)39.5(36.5, 55.4)31.5(19.0, 45.1)0.69Total sperm motility (%), median (Q1, Q3)37.7(19.8, 50.0)39.6(27.0, 50.8)39.1(12.1, 68.1)40.2(19.2, 46.1)46.0(22.2, 55.5)55.2(44.3, 65.1)38.2(22.5, 55.6)0.56Normal sperm morphology (%), median (Q1, Q3)5.0(3.3, 6.0)5.5(2.8, 8.0)4.0(2.0, 11.0)5.0(4.0, 6.0)5.5(3.5, 6.0)6.0(3.5, 7.5)5.0(3.3, 7.0)0.99Leukocytes (×106/ml), median (Q1, Q3)6.2(4.1, 11.9)4.2(3.1, 6.0)2.9(2.4, 8.1)3.8(3.1, 6.2)4.3(2.4, 6.3)5.5(4.0, 9.1)4.2(2.6, 7.0)0.55Data are presented as medians (interquartile range, IQR). Q1: 25th percentile. Q3: 75th percentile. aP value according to the Kruskal-Wallis test. bP < 0.05 vs. negative group Sexually transmitted infections on semen quality in patients with LCS 2.0 (1.1, 2.9) 3.0 (2.7, 5.1) 2.8 (1.8, 3.6) 2.1 (1.6, 2.8) 1.5 (1.1, 1.8)b 2.0 (1.0, 2.5) 2.9 (2.0, 4.0) 35.1 (26.8, 62.0) 60.9 (27.7, 106.1) 110.6 (17.6, 254.6) 45.1 (22.8, 83.2) 48.2 (27.0, 97.7) 46.7 (30.8, 121.5) 69.2 (28.8, 106.8) 33.1 (18.4, 43.2) 30.3 (21.3, 38.6) 32.7 (10.5, 58.0) 34.7 (15.8, 41.6) 40.3 (19.6, 50.1) 39.5 (36.5, 55.4) 31.5 (19.0, 45.1) 37.7 (19.8, 50.0) 39.6 (27.0, 50.8) 39.1 (12.1, 68.1) 40.2 (19.2, 46.1) 46.0 (22.2, 55.5) 55.2 (44.3, 65.1) 38.2 (22.5, 55.6) 5.0 (3.3, 6.0) 5.5 (2.8, 8.0) 4.0 (2.0, 11.0) 5.0 (4.0, 6.0) 5.5 (3.5, 6.0) 6.0 (3.5, 7.5) 5.0 (3.3, 7.0) 6.2 (4.1, 11.9) 4.2 (3.1, 6.0) 2.9 (2.4, 8.1) 3.8 (3.1, 6.2) 4.3 (2.4, 6.3) 5.5 (4.0, 9.1) 4.2 (2.6, 7.0) Data are presented as medians (interquartile range, IQR). Q1: 25th percentile. Q3: 75th percentile. aP value according to the Kruskal-Wallis test. bP < 0.05 vs. negative group Discussion: During the past decade, the association of STI pathogens, leukocytes and semen quality in male infertility has remained a controversial topic [15]. In this context, we examined the effect of STI pathogenson semen quality in subfertile men with LCS and without LCS using the STD6 GenoArray Diagnostic assay. The method allows the detection of six STI pathogens (NG, CT, UU (Uuu, Uup), MH, MG and HSV-2). The prevalence of STI pathogens in semen samples from men with and without leukocytospermia was 52.3 and 39.3 %, respectively. We also showed that men with MG infection had lower semen volumes than men without infection in the LCS group. However, semen parameters were not significantly different between UU-positive and STI-negative patients. A wide range of STI-positive percentages were observed in semen samples from infertile men, and this wide range of variability may be due to different types of studied populations, sample sizes and detection methods for STI pathogens [17, 18]. The diagnosis of STIs and therefore therapy is difficult and debated in relation to its association with male fertility [19]. A number of studies have identified the presence of pathogenic bacteria in semen based on culture or PCR [18]. Due to the limitations of culture methods, such as time consumption and low specificity, PCR-based diagnostic methods present rapid and sensitive detection of STI pathogens [20, 21]. 
In this study, we detected STI pathogens from semen samples by the GenoArray assay, which is a qualitative PCR-based in vitro test and rapid identification of mutations related to STD6 in a single test. Hence, this assay could be a potential screening method for semen pathogens in infertility or STD clinics. HSV infections are associated with abnormal sperm parameters and male infertility by indirectly causing immune responses [22, 23]. Approximately 2–50 % of infertile men were positive for HSV-1/2 DNA [24, 25]. In the current study, the prevalence of HSV-2 pathogens in semen from subfertility patients was 1.0 %, with no difference between the LCS and non-LCS groups. As a low rate of HSV-2 positive semen samples was detected, we did not observe an association of HSV-2 with semen quality. CT infects the genital tract, resulting in a decrease in seminal volume, sperm concentration, motility and normal morphology [26]. Current studies show that the prevalence of CT is 0.4–42.3 % in asymptomatic males [9]. We observed that the rate of CT-positive semen samples was 3.6 % from subfertility patients. Additionally, CT was associated with a slight decrease in semen volume. The mechanism by which CT affects sperm quality has not yet been elucidated. NG infection has been associated with urethritis, epididymitis and prostatitis inflammation and thus can lead to infertility. However, the prevalence of NG pathogens was 0-6.5 %, with lower prevalence in infertile men than in men with other STI pathogens [18, 27]. Thus, few studies have investigated its correlation with semen parameters. Similar to previous studies, we detected no NG infection in semen samples from subfertility patients with and without LCS. UU is the most common microorganism of the lower genitourinary tract in males and is a potentially pathogenic species with an etiologic role in male infertility [28]. The UU-positive rate in the semen samples of infertile men ranges from 5 to 42 %. Historically, the association of CT with semen parameters has been inconsistent. Some of the studies pointed out that UU-positive semen had lower sperm concentrations and sperm motility than UU-negative semen. However, other studies did not observe any correlation between UU and semen parameters [29]. Of interest, our present study confirmed that UU was the most common widespread pathogen in semen (29.2 % of positive samples), with no difference in UU infection between subfertile men with LCS and without LCS. A previous study reported that the frequency of Uuu in the semen of infertile men was higher than that of Uup, indicating that Uup may have a lower correlation with infertility. In this study, we observed a higher prevalence of Uup compared with Uuu in sub-fertile men. Indeed, there was no significant association between semen quality and Uuu and Uup infections. Although we did not find an obvious impact of UU on semen quality, further work exploring the different UU species induced infertility is required. MH is an STI pathogen that often causes gynecologic and obstetric complications in females, such as postabortion fevers, rupture of fetal membranes, and pelvic inflammatory disease [30]. Although MH is commonly detected in men with nongonoccocal urethritis (NGU), its role in sperm function and male fertility remains unclear. The prevalence of MH in our study was comparable to that reported by Gdoura et al. (8.2 % vs. 9.2 %) [29]. 
The detection rate of MH was higher in patients with LCS than in those without LCS, although the difference was not statistically significant, suggesting a possible relationship between MH infection and male infertility. The bacterium MG was first isolated and identified as an STI pathogen in two urethral specimens from men with NGU. Studies also showed that the rate of MG infection was higher in symptomatic patients than in asymptomatic patients [31, 32]. Previously, up to 17.1 % of infertile men had MG infection [33], indicating that this pathogen may be important for male fertility outcomes. In line with another study, the prevalence of MG in asymptomatic patients was not high (2.1 %) [34]. In addition, we observed that the MG-positive rate of subfertile men with LCS was higher than that of men without LCS, suggesting a link between the presence of MG pathogens and male infertility. Regarding semen quality parameters, seminal volume was decreased in samples with MG infection compared with those without MG infection. Therefore, our results suggest that MG infection of the male genital tract could contribute to infertility. Given the high azithromycin treatment failure rate (37.4 %) for MG infection in men of infertile couples in China, it should be noted that inflammatory changes caused by long-term MG infection may result in infertility [33]. With respect to infertility, men with LCS had a reduced fructose concentration compared with non-LCS men, and LCS may therefore affect seminal vesicle function. STI pathogens, including MG and CT, are associated with low semen volume, suggesting that subclinical infections may affect epididymal function. There are several strengths to our findings. First, we showed a high rate of STI pathogens in asymptomatic subfertile men. The prevalence of STI pathogens was not associated with the leucocyte counts in semen, suggesting that the finding of more than 10^6 peroxidase-positive cells per milliliter cannot rule out STIs and asymptomatic genital tract infections. Second, in this cross-sectional study, we showed that subfertile men with STIs had worse semen volumes than those without infection. Because of the probable impact of infection on semen quality, restricting STI testing to men with leukocytospermia should be questioned. Thus, screening for STI pathogens could be part of the routine diagnostic workup of subfertile men. The present study has inherent limitations. First, there is a possible selection bias because this was a single-center cross-sectional study. Second, a control group of normal fertile men is lacking. Third, because antibiotic therapy against STI pathogens was not given, the potential impact of treating the infections could not be assessed. A final limitation is the small sample size; the results need to be validated in a larger number of patients. Nevertheless, our observations show the importance of investigating semen infection in the clinical diagnostic workup of subfertile men. Overall, we demonstrated that STI pathogens were detected at a high rate in semen samples from asymptomatic subfertile men using a sensitive molecular assay, with no difference in prevalence between the LCS and non-LCS groups. LCS was associated with a reduction in semen quality, but was not associated with STIs. Thus, the present findings may help clinicians assess the association between STIs and subfertility in men with LCS.
Background: The role of sexually transmitted infections (STIs) in semen parameters and male infertility is still a controversial area. Previous studies have found bacterial infection in a minority of infertile leukocytospermic males. This study aims to investigate the prevalence of STIs in semen from subfertile men with leukocytospermia (LCS) and without leukocytospermia (non-LCS) and their associations with sperm quality. Methods: Semen samples were collected from 195 men who asked for a fertility evaluation. Infection with the six STI pathogens was assessed in each sample. Sperm quality was compared in subfertile men with and without LCS. Results: The LCS group had significantly decreased semen volume, sperm concentration, progressive motility, total motility and normal morphology. The infection rates of Ureaplasma urealyticum (Uuu), Ureaplasma parvum (Uup), Mycoplasma hominis (MH), Mycoplasma genitalium (MG), Chlamydia trachomatis (CT), herpes simplex virus-2 (HSV-2) and Neisseria gonorrhoeae (NG) were 8.7 %, 21.0 %, 8.2 %, 2.1 %, 3.6 %, 1.0 % and 0 %, respectively. The STI detection rate in patients with LCS was higher than that in the non-LCS group (52.3 % vs. 39.3 %), although the difference was not statistically significant (P = 0.07). None of the semen parameters differed significantly between LCS patients with and without STIs, except that the semen volume of MG-infected patients with LCS was significantly lower than that of the noninfected group. Conclusions: LCS was associated with a reduction in semen quality, but was not associated with STIs.
null
null
7,048
319
[ 1255, 99, 216, 109, 195 ]
8
[ "semen", "lcs", "test", "men", "sperm", "motility", "sti", "pathogens", "patients", "106" ]
[ "hsv pathogens semen", "infections semen", "leukocytes ejaculate mainly", "wbcs semen leukocytes", "leukocytes semen quality" ]
null
null
null
[CONTENT] Leukocytospermia | Sexually transmitted infections | Semen parameters [SUMMARY]
null
[CONTENT] Leukocytospermia | Sexually transmitted infections | Semen parameters [SUMMARY]
null
[CONTENT] Leukocytospermia | Sexually transmitted infections | Semen parameters [SUMMARY]
null
[CONTENT] Adult | Cohort Studies | Cross-Sectional Studies | Humans | Infertility, Male | Leukocytes | Male | Semen | Semen Analysis | Sexually Transmitted Diseases [SUMMARY]
null
[CONTENT] Adult | Cohort Studies | Cross-Sectional Studies | Humans | Infertility, Male | Leukocytes | Male | Semen | Semen Analysis | Sexually Transmitted Diseases [SUMMARY]
null
[CONTENT] Adult | Cohort Studies | Cross-Sectional Studies | Humans | Infertility, Male | Leukocytes | Male | Semen | Semen Analysis | Sexually Transmitted Diseases [SUMMARY]
null
[CONTENT] hsv pathogens semen | infections semen | leukocytes ejaculate mainly | wbcs semen leukocytes | leukocytes semen quality [SUMMARY]
null
[CONTENT] hsv pathogens semen | infections semen | leukocytes ejaculate mainly | wbcs semen leukocytes | leukocytes semen quality [SUMMARY]
null
[CONTENT] hsv pathogens semen | infections semen | leukocytes ejaculate mainly | wbcs semen leukocytes | leukocytes semen quality [SUMMARY]
null
[CONTENT] semen | lcs | test | men | sperm | motility | sti | pathogens | patients | 106 [SUMMARY]
null
[CONTENT] semen | lcs | test | men | sperm | motility | sti | pathogens | patients | 106 [SUMMARY]
null
[CONTENT] semen | lcs | test | men | sperm | motility | sti | pathogens | patients | 106 [SUMMARY]
null
[CONTENT] studies | semen | uu | ureaplasma | quality | infertile | leukocytospermia | sperm | reduced | mycoplasma [SUMMARY]
null
[CONTENT] lcs | q1 | q3 | median | median q1 q3 | median q1 | q1 q3 | 55 | 19 | 22 [SUMMARY]
null
[CONTENT] semen | lcs | sperm | men | test | china | motility | pathogen | sti | pathogens [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] LCS ||| Ureaplasma | Mycoplasma | MH | Mycoplasma | Chlamydia | Neisseria | NG | 8.7 % | 21.0 % | 8.2 % | 2.1 % | 3.6 % | 1.0 | 0 % ||| STI | 52.3 % | 39.3 % | two | 0.07 ||| LCS [SUMMARY]
null
[CONTENT] ||| ||| ||| 195 ||| 6 ||| ||| ||| LCS ||| Ureaplasma | Mycoplasma | MH | Mycoplasma | Chlamydia | Neisseria | NG | 8.7 % | 21.0 % | 8.2 % | 2.1 % | 3.6 % | 1.0 | 0 % ||| STI | 52.3 % | 39.3 % | two | 0.07 ||| LCS ||| [SUMMARY]
null
Trend Analysis and Prediction of Hepatobiliary Pancreatic Cancer Incidence and Mortality in Korea.
35851861
This study aimed to analyze the current trends and predict the epidemiologic features of hepatobiliary and pancreatic (HBP) cancers according to the Korea Central Cancer Registry to provide insights into health policy.
BACKGROUND
Incidence data from 1999 to 2017 and mortality data from 2002 to 2018 were obtained from the Korea National Cancer Incidence Database and Statistics Korea, respectively. The future incidence rate from 2018 to 2040 and mortality rate from 2019 to 2040 of each HBP cancer were predicted using an age-period-cohort model. All analyses, including incidence and mortality, were stratified by sex.
METHODS
From 1999 to 2017, the age-standardized incidence rate (ASIR) of HBP cancers per 100,000 population had changed (liver, 25.8 to 13.5; gallbladder [GB], 2.9 to 2.6; bile ducts, 5.1 to 5.9; ampulla of Vater [AoV], 0.9 to 0.9; and pancreatic, 5.6 to 7.3). The age-standardized mortality rate (ASMR) per 100,000 population from 2002 to 2018 of each cancer had declined, excluding pancreatic cancer (5.5 to 5.6). The predicted ASIR of pancreatic cancer per 100,000 population from 2018 to 2040 increased (7.5 to 8.2), but that of other cancers decreased. Furthermore, the predicted ASMR per 100,000 population from 2019 to 2040 decreased in all types of cancers: liver (6.5 to 3.2), GB (1.4 to 0.9), bile ducts (4.3 to 2.9), AoV (0.3 to 0.2), and pancreas (5.4 to 4.7). However, in terms of sex, the predicted ASMR of pancreatic cancer per 100,000 population in females increased (3.8 to 4.9).
RESULTS
The annual numbers of incident cases and deaths from HBP cancers are generally predicted to increase. In particular, pancreatic cancer has an increasing incidence and will be the leading cause of cancer-related death among HBP cancers.
CONCLUSION
[ "Female", "Humans", "Incidence", "Pancreatic Neoplasms", "Registries", "Republic of Korea" ]
9294502
INTRODUCTION
Since 1983, when the cause of death statistics started to be published, cancer has been reported as the leading cause of death in Korea, making it the country’s major public health concern.1 In 2018, over 240,000 patients were newly diagnosed with cancer in Korea, and more than one-fourth of all-cause mortality cases resulted from cancer.2 Supported by a national policy on cancer prevention and screening in Korea, the incidence rate of cancer peaked in 2011 and gradually declined subsequently. The overall cancer mortality rate has also steadily decreased since 2002 owing to diagnostic and treatment modality advances. In 2018, liver, pancreatic, and biliary tract cancers were the 6th (6.5% of all new cancers), 8th (3.1%), and 9th (2.9%) most common cancers in Korea, respectively. However, when the incidence rates of all hepatobiliary and pancreatic (HBP) cancers were added, they comprise 12.5%, which is higher than that of stomach cancer (12.0%), the most common cancer in Korea. With the availability of vaccination and effective treatment for viral hepatitis, especially hepatitis B, the incidence of hepatocellular carcinoma (HCC) is decreasing; in contrast, the incidence rates of pancreatic and biliary tract cancers are rapidly increasing, and liver and pancreatic cancers ranked 2nd and 5th in the most common causes of cancer-related death in 2018.234567 Unfortunately, considering the relatively small number of patients with each type of HBP cancers, liver, pancreatic, and biliary tract cancers have not been prioritized in the national health policy, despite being fatal. Although studies from our institution have reported short-term future predictions of cancer incidence and mortality in Korea annually, no studies have focused on future cancer burdens regarding long-term changes in the incidence and mortality rates of HBP cancers.89 Moreover, although the cancer registration system in Korea is highly efficient in providing nationwide cancer statistics, a lag time of at least two years is required for data collection and analysis in a specific year.1011 In planning and implementing comprehensive cancer control programs, the number of new cases and deaths that are expected to occur in the future should be estimated. Therefore, this study aimed to analyze the current trends and predict the epidemiologic features of liver, pancreatic, and biliary tract cancers according to the Korea Central Cancer Registry data. The findings of this study could provide useful insights into health policy planning and budget allocation, including cancer prevention, diagnostic and treatment strategies.
METHODS
Data collection: The Korean Ministry of Health and Welfare initiated the Korea Central Cancer Registry (KCCR), a nationwide, hospital-based cancer registry, in 1980. Since 1999, the KCCR has collected cancer incidence data countrywide by integrating a nationwide hospital-based KCCR database with data from regional cancer registries. The KCCR built the Korea National Cancer Incidence Database (KNCI DB), which collects data from hospitals, 11 population-based registries, and additional medical record reviews since 2005. They have been providing the nationwide cancer incidence, survival, and prevalence statistics annually.12 This database contains information on age; sex; region; diagnosis; date; primary cancer site; histological type; most valid diagnostic method; summary stage of surveillance, epidemiology, and end results program (since 2005); and first treatment course within 4 months after diagnosis. In this study, we obtained cancer incidence data from 1999–2017 from the KNCI DB. Cancer cases were classified according to the International Classification of Diseases for Oncology (ICD-O), third edition, and converted according to the International Classification of Diseases, 10th edition (ICD-10).1314 We also collected cancer mortality data from 2002–2018 from Statistics Korea.1 The cause of death was coded and classified according to ICD-10.14 We obtained the estimated population from 1999–2040 from Statistics Korea as well. Cancer sites were categorized as follows: liver (C22.0, C22.9), gallbladder (GB) (C23), intra- and extrahepatic bile duct (bile duct [BD]; IBD and EBD, respectively) (C22.1, C24.0, C24.8, C24.9), ampulla of Vater (AoV) (C24.1), and pancreas (C25).
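To make the site grouping above concrete, here is a minimal sketch (my own illustration, not registry code) of how ICD-10 topography codes could be bucketed into the five HBP site groups; the function name and the 3-character prefix rule for C23/C25 are assumptions.

```python
# Minimal sketch (illustrative helper, not KCCR code): bucketing ICD-10
# topography codes into the HBP site groups listed above. The function name
# and the 3-character prefix rule for C23/C25 are my assumptions.
from typing import Optional

SITE_GROUPS = {
    "liver": {"C22.0", "C22.9"},
    "gallbladder (GB)": {"C23"},
    "bile duct (IBD/EBD)": {"C22.1", "C24.0", "C24.8", "C24.9"},
    "ampulla of Vater (AoV)": {"C24.1"},
    "pancreas": {"C25"},
}

def classify_hbp_site(icd10_code: str) -> Optional[str]:
    """Return the HBP site group for an ICD-10 code, or None if it is not an HBP site."""
    code = icd10_code.strip().upper()
    for site, codes in SITE_GROUPS.items():
        # C23 and C25 are given at the 3-character level, so also match subcodes
        # such as C23.9 or C25.0 by prefix.
        if code in codes or any(code.startswith(c) for c in codes if len(c) == 3):
            return site
    return None

for example in ["C22.0", "C22.1", "C24.1", "C25.0"]:
    print(example, "->", classify_hbp_site(example))
```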
Ethical statement: The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki, as reflected in a prior approval obtained from the institutional ethical committee (Institutional Review Board, National Cancer Center, Korea [NCC2020-0093]). Written informed consent from the participants was waived because of the retrospective design of the study. Statistical analysis: Rates were expressed as crude rate (CR) and age-standardized rates (ASR) per 100,000 individuals. The CR was calculated as the total number of incidence (crude incidence rate [CIR]) or mortality (crude mortality rate [CMR]) cases divided by the midyear population of the specified years. ASRs were standardized using the registered population of year 2000 and expressed per 100,000 persons; the incidence rate ratio using ASRs was also calculated.15 Based on the CIR and the age-standardized incidence rate (ASIR), which were calculated until 2017, the prediction was made for 2018–2040. Conversely, the CMR and age-standardized mortality rate (ASMR) were calculated until 2018, and a forecast was made for 2019–2040. Incidence and mortality were modeled using an age-period-cohort model and extrapolated to 2040. The age-period-cohort model was as follows: λ(age, period) = g[f_A(age) + f_P(period) + f_C(cohort)], where λ represents the incidence (mortality) rate according to age and calendar period; g is the “link” function; and f_A, f_P, and f_C represent the functions of age, period (year of incidence or mortality), and cohort (year of birth, i.e., cohort = period − age), respectively.16 We used the exponential function as the link function, and f_A, f_P, and f_C as natural cubic splines. We used the “apcspline” Stata ado-program to conduct annual percentage change (APC) modeling, with a default setting for internal knots for the spline (6 for the age variable, 5 for the period variable, and 3 for the cohort variable). When modeling using the spline, we absorbed the linear trends in period and cohort over the timespan with observed data into a drift component and attenuated the drift into the future. A function was developed in Stata to fit the model to the incidence/mortality of cancer for the individual years (1999 to 2017 for incidence, 2002 to 2018 for mortality) and 5-year age groups (0–4, 5–9, …, 74–79, 80+) by sex. To estimate age-specific cancer rates in future years, an APC model was fitted to age-specific rates for the 5-year age groups against their respective years, and the projected age-specific rates were multiplied by the age-specific population to obtain the projected number of cancer cases and deaths for the upcoming years. To adjust for the future demographic structure, we employed the population estimated by Statistics Korea to calculate the ASIR and ASMR.
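As an illustration of the rate definitions above, the following sketch shows direct age standardization to a reference population and the conversion of projected age-specific rates into expected case counts; the numbers are made up, and the three age strata stand in for the paper's seventeen 5-year groups.

```python
# Minimal sketch (made-up numbers, not the paper's data): direct age
# standardization of rates to a reference population, and conversion of
# projected age-specific rates into expected case counts.
import numpy as np

# Three age strata stand in for the seventeen 5-year groups (0-4, ..., 80+).
age_specific_rate = np.array([1.2, 15.4, 80.3])          # cases per 100,000 person-years
standard_pop = np.array([300_000, 250_000, 50_000])      # year-2000 reference population
projected_pop = np.array([220_000, 310_000, 120_000])    # estimated future population

# Age-standardized rate (ASR): age-specific rates weighted by the reference
# population's age distribution.
weights = standard_pop / standard_pop.sum()
asr = float(np.sum(age_specific_rate * weights))
print(f"ASR = {asr:.1f} per 100,000")

# Projected number of cases: projected age-specific rates applied to the
# projected population in each age group, then summed.
projected_cases = float(np.sum(age_specific_rate / 100_000 * projected_pop))
print(f"Projected annual cases = {projected_cases:.0f}")
```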
Linear regression models were used on logarithmically scaled ASRs and calendar years to identify time points with significant trend changes and to calculate the APC for each segment between these time points.17 The joinpoint regression analysis describes changes in data trends by connecting several different line segments on a log scale at “joinpoints,” and was performed to detect when significant changes occurred in cancer trends according to sex and cancer site. The linear regression model was applied to ASRs by 5-year age groups against their respective years, based on the observed cancer incidence data of the latest trends, to predict ASRs. Moreover, the APC, which is the average percentage change of the ASR, is calculated as follows: APC = [(R_(y+1) − R_y) / R_y] × 100 = (e^(b1) − 1) × 100, where log(R_y) = b0 + b1 × year, and log(R_y) is the natural log-transformed ASR.17 The weighted average of the APCs was taken as the average APC (AAPC), with 95% confidence intervals, for the whole period of interest.18 We also performed pairwise comparisons and tested the differences between sexes. All statistical tests were two-tailed, and P values < 0.05 were considered statistically significant. Incidence and trends were analyzed using SAS 9.4 (SAS Institute, Inc., Cary, NC, USA), Joinpoint 4.7.0.0 (National Cancer Institute, Bethesda, MD, USA), and Stata version 16.1 (StataCorp LP, College Station, TX, USA).
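For the single-segment case, the APC formula above can be reproduced with an ordinary log-linear fit. The sketch below is not the Joinpoint software (which additionally searches for change points) and uses an illustrative rate series that only roughly tracks the reported liver ASIR decline.

```python
# Minimal sketch (single-segment case with an illustrative rate series): the
# APC formula above, estimated by fitting log(R_y) = b0 + b1 * year and taking
# APC = (exp(b1) - 1) * 100. Joinpoint additionally searches for change points
# between segments; that search is not shown here.
import numpy as np

years = np.arange(1999, 2018)                       # 1999..2017
rates = np.array([25.8, 25.1, 24.3, 23.6, 22.8, 22.0, 21.2, 20.4, 19.7, 18.9,
                  18.2, 17.5, 16.8, 16.1, 15.4, 14.9, 14.4, 13.9, 13.5])
# The series only roughly tracks the reported liver ASIR decline (25.8 -> 13.5);
# the intermediate values are interpolated for illustration.

b1, b0 = np.polyfit(years, np.log(rates), deg=1)    # slope, intercept on the log scale
apc = (np.exp(b1) - 1) * 100
print(f"Estimated APC = {apc:.1f}% per year")

# The AAPC over several joinpoint segments is a length-weighted combination of
# the segment slopes: AAPC = (exp(sum(w_i * b1_i)) - 1) * 100, with w_i the
# share of years falling in segment i.
```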
RESULTS
Incidence and mortality rates of liver, pancreatic, and biliary tract cancers in Korea from 1999 to 2017, and from 2002 to 2018: From 1999 to 2017, HBP cancer cases had been increasing in number continuously. Compared with HCC, which showed a relatively slow increase in new cases, GB, BD, AoV, and pancreatic cancers had increased approximately 2–3 times within the same period. In ASIR analysis, the ASIRs of liver and GB cancers decreased, whereas those of BD and pancreatic cancer increased (Table 1). Supplementary Table 1 shows the trend of annual cancer incidence cases among HBP cancers. GB = gallbladder, BD (I+E) = bile duct (intrahepatic+extrahepatic), AoV = ampulla of Vater. During the same period, trends in the ASIR of each cancer were as follows: 25.8 to 13.5 in liver cancer (AAPC −3.6%; P < 0.001), 2.9 to 2.6 in GB cancer (AAPC −0.5%; P = 0.171), 5.1 to 5.9 in BD cancer (AAPC 1.1%; P = 0.084), 0.9 to 0.9 in AoV cancer (AAPC −0.6%; P = 0.217), and 5.6 to 7.3 in pancreatic cancer (AAPC 1.5%; P < 0.001). The APC of ASIR in liver and pancreatic cancers was statistically significant (Table 2).
In the subgroup analysis, the trends of change of ASIRs were similar between both sexes; APC of the liver was significantly low and that of the BD and pancreas was significantly high (Supplementary Table 2). In particular, the APC of ASIR in pancreatic cancer was significantly higher in females than in males (2.4% vs. 0.8%, P < 0.001). ASR = age-standardized rates, AAPC = average annual percent change, GB = gallbladder, BD (I+E) = bile duct (intrahepatic+extrahepatic), AoV = ampulla of vater. aObserved values; bEstimated values. From 2002 to 2018, the ASMR of each cancer decreased, except for pancreatic cancer (18.7 to 8.0 in liver [AAPC −5.2%; P < 0.001], 2.4 to 1.6 in GB [AAPC −2.5%; P < 0.001], 5.5 to 4.6 in IBD and EBD [AAPC −1.0%; P < 0.001], 0.4 to 0.4 in AoV [AAPC −0.2%; P = 0.654], and 5.5 to 5.6 in pancreatic [AAPC 0.3%; P = 0.091]) cancers per 100,000 persons. Thus, except for pancreatic cancer, the APC of all ASMRs in other HBP cancers was negative (Table 2). In the subgroup analysis, the trends of change of ASMRs in liver, GB, and BD cancers were decreasing, similar to those in males and females. The ASMR in pancreatic cancer had been decreased in males but increased in females. Meanwhile, the ASMR in AoV cancer showed no significant difference in the past in both sexes (Supplementary Table 2). Predicting future epidemiology of liver, pancreatic, and biliary tract cancer in Korea by 2040 Predicted annual cases of newly diagnosed cancer from 2018 to 2040 increased in all cancer types as follows: 12,157 to 13,089 (liver), 2,681 to 4,038 (GB), 6,304 to 7,964 (IBD and EBD), 844 to 1,376 (AoV), and 7,459 to 16,170 (pancreas) (Fig. 1). The CIR of pancreatic cancer is predicted to increase more rapidly than those of other cancers (Fig. 2A). Meanwhile, the predicted ASIR increased in pancreatic cancer (7.5 to 8.2 [AAPC 0.5%; P < 0.001]) but decreased in all other cancers (13.2 to 8.5 in liver [AAPC −2.0%; P < 0.001], 2.5 to 1.7 in GB [AAPC −1.6%; P < 0.001], 5.9 to 3.2 in IBD and EBD [AAPC −2.7%; P < 0.001], and 0.8 to 0.6 in AoV [AAPC −1.2%; P < 0.001]) (Table 2, Fig. 2B). Both sexes were predicted to have similar increasing trends of CIR (Supplementary Fig. 1A and B). In subgroup analysis in both sexes, the predicted ASIR of pancreatic cancer increased only in females (6.2 to 7.9 [AAPC 1.1%; P < 0.001] vs.8.6 to 8.5 [AAPC −0.1%; P < 0.001]). Meanwhile, the predicted ASIR in GB, IBD/EBD, and AoV cancers decreased slightly in both males and females. Moreover, the ASIR of liver cancer decreased more rapidly in males (21.8 to 12.5 [AAPC −2,5%; P < 0.001] than in females (5.3 to 4.5 [AAPC −0.8%; P < 0.001]) (Supplementary Table 3, Supplementary Fig. 1C and D). GB = gallbladder, BD = bile duct, AoV = ampulla of Vater. GB = gallbladder, BD = bile duct, AoV = ampulla of Vater. Furthermore, the predicted annual deaths from 2019 to 2040 decreased in liver cancer (7,551 to 6,037) but increased in other cancers (GB, 1,795 to 2,391; IBD and EBD, 5,313 to 7,928; AoV, 397 to 536; pancreas, 6,202 to 11,023). CMRs are predicted to increase markedly in pancreatic and BD cancers but decrease in liver cancer (Fig. 3A). However, the predicted ASMR decreased in all cancer types (7.5 to 3.2 in liver [AAPC −4.0%; P < 0.001], 1.5 to 0.9 in GB [AAPC −2.4%; P < 0.001], 4.5 to 2.9 in IBD and EBD [AAPC −2.1%; P < 0.001], 0.3 to 0.2 in AoV [AAPC −2.6%; P < 0.001], and 5.6 to 4.7 in pancreatic cancer [AAPC −0.8%; P < 0.001]) (Fig. 3B). 
Pancreatic cancer was predicted to be the leading cause of cancer-related death among HBP cancers. Similar CMR trends are expected in both sexes. Although CMR in liver cancer was predicted to be still higher in males than in females, it was predicted to decrease more markedly in males (22.8 to 18.7) than in females (6.4 to 5.1). Meanwhile, CMR in pancreatic cancer was predicted to increase more markedly (P < 0.001, data not shown) in females (12.1 to 25.1) than in males (11.8 to 18.2) (Supplementary Fig. 2A and B). The ASMR of pancreatic cancer was also predicted to increase in females (4.8 to 4.9) (P < 0.001) but decrease in males (6.3 to 4.5) (P < 0.001) (Supplementary Table 3). Furthermore, the ASMR (4.9) of pancreatic cancer in females in 2040 was predicted to be the highest among all HBP cancers (liver, 1.1; GB, 0.9; BD, 1.7; AoV, 0.1). Moreover, the ASMR of the liver (12.9 to 5.4 [AAPC −4.1%; P < 0.001]) was still expected to be the highest in males in 2040 (GB, 0.9; BD, 4.2; AoV, 0.3; pancreas, 4.5). However, the predicted ASMR of the other HBP cancers was decreased (P < 0.001) (Supplementary Table 3, Supplementary Fig. 2C and D). GB = gallbladder, BD = bile duct, AoV = ampulla of Vater.
null
null
[ "Data collection", "Ethical statement", "Statistical analysis", "Incidence and mortality rates of liver, pancreatic, and biliary tract cancers in Korea from 1999 to 2017, and from 2002 to 2018", "Predicting future epidemiology of liver, pancreatic, and biliary tract cancer in Korea by 2040" ]
[ "The Korean Ministry of Health and Welfare initiated the Korea Central Cancer Registry (KCCR), a nationwide, hospital-based cancer registry, in 1980. Since 1999, the KCCR has collected cancer incidence data countrywide by integrating a nationwide hospital-based KCCR database with data from regional cancer registries. The KCCR built the Korea National Cancer Incidence Database (KNCI DB), which collects data from hospitals, 11 population-based registries, and additional medical record reviews since 2005. They have been providing the nationwide cancer incidence, survival, and prevalence statistics annually.12 This database contains information on age; sex; region; diagnosis; date; primary cancer site; histological type; most valid diagnostic method; summary stage of surveillance, epidemiology, and end results program (since 2005); and first treatment course within 4 months after diagnosis.\nIn this study, we obtained cancer incidence data from 1999–2017 from the KNCI DB. Cancer cases were classified according to the International Classification of Diseases for Oncology (ICD-O), third edition, and converted according to the International Classification of Diseases, 10th edition (ICD-10).1314 We also collected cancer mortality data from 2002–2018 from Statistics Korea.1 The cause of death was coded and classified according to ICD-10.14 We obtained the estimated population from 1999–2040 from Statistics Korea as well.\nCancer sites were categorized as follows: liver (C22.0, C22.9), gallbladder (GB) (C23), Intra- and extrahepatic bile duct (bile duct [BD]; IBD and EBD, respectively) (C22.1, C24.0, C24.8, C24.9), ampulla of Vater (AoV) (C24.1), and pancreas (C25).", "The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki, as reflected in a prior approval obtained from the institutional ethical committee (Institutional Review Board, National Cancer Center, Korea [NCC2020-0093]), Written informed consent from the participants was waived because of the retrospective design of the study.", "Rates were expressed as crude rate (CR) and age-standardized rates (ASR) per 100,000 individuals. The CR was calculated as the total number of incidence (crude incidence rate [CIR]) or mortality (crude mortality rate [CMR]) cases divided by the midyear population of the specified years. ASRs were standardized using the registered population of year 2000 and expressed per 100,000 persons; the incidence rate ratio using ASRs was also calculated.15 Based on the CIR and the age-standardized incidence rate (ASIR), which were calculated until 2017, the prediction was made for 2018–2040. Conversely, the CMR and age-standardized mortality rate (ASMR) were calculated until 2018, and a forecast was made for 2019–2040.\nIncidence and mortality were modeled using an age-period-cohort model and extrapolated to 2040. 
The age-period-cohort model was as follows:\n\nλ (age, period) = g[fA (age) + fP (period) + fC (cohort)]\n\nwhere λ represents the incidence (mortality) rate according to age and calendar period; g is the “link” function; and fA, fP, and fC represent the functions of age, period (year of incidence or mortality), and cohort (year of birth, i.e., cohort = period–age), respectively.16 We used the exponential function as the link function, and fA, fP, and fC as natural cubic splines.\nWe used the “apcspline” Stata ado-program to conduct annual percentage change (APC) modeling, with a default setting for internal knots for the spline (6 for age variable, 5 for period variable, and 3 for cohort variable).\nWhen modeling using the spline, we absorbed the linear trends in period and cohort over the timespan with observed data into a drift component and attenuated the drift into the future. A function was developed in Stata to fit the model to the incidence/mortality of cancer in the individual (from 1999 to 2017 in incidence, 2002 to 2018 in mortality) and 5-year age groups (0–4, 5–9, …, 74–79, 80+) by sex. To estimate age-specific cancer rates in future years, an APC model was fitted to age-specific rates for the 5-year age groups against their respective years, and the projected age-specific rates were multiplied by the age-specific population to obtain the projected number of cancer cases and deaths for the upcoming years. To adjust the future demographic structure, we employed an estimated population by Statistics Korea to calculate the ASIR and ASMR. Linear regression models were used on logarithmic scaled ASRs and calendar years to identify time points with significant trend changes and to calculate the APC for each segment between these time points.17 The joinpoint regression analysis describes changes in data trends by connecting several different line segments on a log scale at “joinpoints,” which was performed to detect when significant changes occurred in cancer trends according to sex and cancer site. The linear regression model was applied to ASRs by 5-year age groups against their respective years, based on the observed cancer incidence data of the latest trends to predict ASRs.\nMoreover, APC, which is the average percentage change of ASRs, is calculated as follows:\n\n\nAPC=Ry+1−RyRy×100=(eb1−1)×100,\n\n\nwhere log(Ry) = b0 + b1y × year\nLog(Ry) is the natural log-transformed ASR.17\nThe weighted average of APCs was the average APC (AAPC) with 95% confidence intervals for the whole period of interest.18 We also performed a pairwise comparison, and tested the differences between sexes. All statistical tests were two-tailed, and P values < 0.05 were considered statistically significant. Incidence and trends were analyzed using SAS 9.4 (SAS Institute, Inc., Cary, NC, USA), Joinpoint 4.7.0.0 (National Cancer Institute, Bethesda, MD, USA), and Stata version 16.1 (StataCorp LP, College Station, TX, USA).", "From 1999 to 2017, HBP cancer cases had been increasing in number continuously. Compared with HCC, which showed a relatively slow increase of new cases GB, BD, AoV, and pancreatic cancers had increased for approximately 2–3 times within the same period. In ASIR analysis, the ASIRs of liver and GB cancers decreased, whereas those of BD and pancreatic cancer increased (Table 1). 
Supplementary Table 1 shows the trend of annual cancer incidence cases among HBP cancers.\nGB = gallbladder, BD (I+E) = bile duct (intrahepatic+extrahepatic), AoV = ampulla of Vater.\nDuring the same period, trends in the ASIR of each cancer were as follows: 25.8 to 13.5 in liver cancer (AAPC −3.6%; P < 0.001), 2.9 to 2.6 in GB cancer (AAPC −0.5%; P = 0.171), 5.1 to 5.9 in BD cancer (AAPC 1.1%; P = 0.084), 0.9 to 0.9 in AoV cancer (AAPC −0.6%; P = 0.217), and 5.6 to 7.3 in pancreatic cancer (AAPC 1.5%; P < 0.001). The APC of ASIR in liver and pancreatic cancers was statistically significant (Table 2). In the subgroup analysis, the trends of change of ASIRs were similar between both sexes; APC of the liver was significantly low and that of the BD and pancreas was significantly high (Supplementary Table 2). In particular, the APC of ASIR in pancreatic cancer was significantly higher in females than in males (2.4% vs. 0.8%, P < 0.001).\nASR = age-standardized rates, AAPC = average annual percent change, GB = gallbladder, BD (I+E) = bile duct (intrahepatic+extrahepatic), AoV = ampulla of vater.\naObserved values; bEstimated values.\nFrom 2002 to 2018, the ASMR of each cancer decreased, except for pancreatic cancer (18.7 to 8.0 in liver [AAPC −5.2%; P < 0.001], 2.4 to 1.6 in GB [AAPC −2.5%; P < 0.001], 5.5 to 4.6 in IBD and EBD [AAPC −1.0%; P < 0.001], 0.4 to 0.4 in AoV [AAPC −0.2%; P = 0.654], and 5.5 to 5.6 in pancreatic [AAPC 0.3%; P = 0.091]) cancers per 100,000 persons. Thus, except for pancreatic cancer, the APC of all ASMRs in other HBP cancers was negative (Table 2). In the subgroup analysis, the trends of change of ASMRs in liver, GB, and BD cancers were decreasing, similar to those in males and females. The ASMR in pancreatic cancer had been decreased in males but increased in females. Meanwhile, the ASMR in AoV cancer showed no significant difference in the past in both sexes (Supplementary Table 2).", "Predicted annual cases of newly diagnosed cancer from 2018 to 2040 increased in all cancer types as follows: 12,157 to 13,089 (liver), 2,681 to 4,038 (GB), 6,304 to 7,964 (IBD and EBD), 844 to 1,376 (AoV), and 7,459 to 16,170 (pancreas) (Fig. 1). The CIR of pancreatic cancer is predicted to increase more rapidly than those of other cancers (Fig. 2A). Meanwhile, the predicted ASIR increased in pancreatic cancer (7.5 to 8.2 [AAPC 0.5%; P < 0.001]) but decreased in all other cancers (13.2 to 8.5 in liver [AAPC −2.0%; P < 0.001], 2.5 to 1.7 in GB [AAPC −1.6%; P < 0.001], 5.9 to 3.2 in IBD and EBD [AAPC −2.7%; P < 0.001], and 0.8 to 0.6 in AoV [AAPC −1.2%; P < 0.001]) (Table 2, Fig. 2B). Both sexes were predicted to have similar increasing trends of CIR (Supplementary Fig. 1A and B). In subgroup analysis in both sexes, the predicted ASIR of pancreatic cancer increased only in females (6.2 to 7.9 [AAPC 1.1%; P < 0.001] vs.8.6 to 8.5 [AAPC −0.1%; P < 0.001]). Meanwhile, the predicted ASIR in GB, IBD/EBD, and AoV cancers decreased slightly in both males and females. Moreover, the ASIR of liver cancer decreased more rapidly in males (21.8 to 12.5 [AAPC −2,5%; P < 0.001] than in females (5.3 to 4.5 [AAPC −0.8%; P < 0.001]) (Supplementary Table 3, Supplementary Fig. 
1C and D).\nGB = gallbladder, BD = bile duct, AoV = ampulla of Vater.\nGB = gallbladder, BD = bile duct, AoV = ampulla of Vater.\nFurthermore, the predicted annual deaths from 2019 to 2040 decreased in liver cancer (7,551 to 6,037) but increased in other cancers (GB, 1,795 to 2,391; IBD and EBD, 5,313 to 7,928; AoV, 397 to 536; pancreas, 6,202 to 11,023). CMRs are predicted to increase markedly in pancreatic and BD cancers but decrease in liver cancer (Fig. 3A). However, the predicted ASMR decreased in all cancer types (7.5 to 3.2 in liver [AAPC −4.0%; P < 0.001], 1.5 to 0.9 in GB [AAPC −2.4%; P < 0.001], 4.5 to 2.9 in IBD and EBD [AAPC −2.1%; P < 0.001], 0.3 to 0.2 in AoV [AAPC −2.6%; P < 0.001], and 5.6 to 4.7 in pancreatic cancer [AAPC −0.8%; P < 0.001]) (Fig. 3B). Pancreatic cancer was predicted to be the leading cause of cancer-related death among HBP cancers. Similar CMR trends are expected in both sexes. Although CMR in liver cancer was predicted to be still higher in males than in females, it was predicted to decrease more markedly in males (22.8 to 18.7) than in females (6.4 to 5.1). Meanwhile, CMR in pancreatic cancer was predicted to increase more markedly (P < 0.001, data not shown) in females (12.1 to 25.1) than in males (11.8 to 18.2) (Supplementary Fig. 2A and B). The ASMR of pancreatic cancer was also predicted to increase in females (4.8 to 4.9) (P < 0.001) but decrease in males (6.3 to 4.5) (P < 0.001) (Supplementary Table 3). Furthermore, the ASMR (4.9) of pancreatic cancer in females in 2040 were predicted to be the highest among all HBP cancers (liver, 1.1; GB, 0.9; BD,1.7; AoV, 0.1). Moreover, the ASMR of the liver (12.9 to 5.4 [AAPC −4.1%; P < 0.001]) was still expected to be the highest in males in 2040 (GB, 0.9; BD, 4.2; AoV, 0.3; pancreas, 4.5). However, the predicted ASMR of other HBP cancers was decreased (P < 0.001) (Supplementary Table 3, Supplementary Fig. 2C and D).\nGB = gallbladder, BD = bile duct, AoV = ampulla of Vater." ]
[ null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Data collection", "Ethical statement", "Statistical analysis", "RESULTS", "Incidence and mortality rates of liver, pancreatic, and biliary tract cancers in Korea from 1999 to 2017, and from 2002 to 2018", "Predicting future epidemiology of liver, pancreatic, and biliary tract cancer in Korea by 2040", "DISCUSSION" ]
[ "Since 1983, when the cause of death statistics started to be published, cancer has been reported as the leading cause of death in Korea, making it the country’s major public health concern.1 In 2018, over 240,000 patients were newly diagnosed with cancer in Korea, and more than one-fourth of all-cause mortality cases resulted from cancer.2 Supported by a national policy on cancer prevention and screening in Korea, the incidence rate of cancer peaked in 2011 and gradually declined subsequently. The overall cancer mortality rate has also steadily decreased since 2002 owing to diagnostic and treatment modality advances. In 2018, liver, pancreatic, and biliary tract cancers were the 6th (6.5% of all new cancers), 8th (3.1%), and 9th (2.9%) most common cancers in Korea, respectively. However, when the incidence rates of all hepatobiliary and pancreatic (HBP) cancers were added, they comprise 12.5%, which is higher than that of stomach cancer (12.0%), the most common cancer in Korea. With the availability of vaccination and effective treatment for viral hepatitis, especially hepatitis B, the incidence of hepatocellular carcinoma (HCC) is decreasing; in contrast, the incidence rates of pancreatic and biliary tract cancers are rapidly increasing, and liver and pancreatic cancers ranked 2nd and 5th in the most common causes of cancer-related death in 2018.234567 Unfortunately, considering the relatively small number of patients with each type of HBP cancers, liver, pancreatic, and biliary tract cancers have not been prioritized in the national health policy, despite being fatal. Although studies from our institution have reported short-term future predictions of cancer incidence and mortality in Korea annually, no studies have focused on future cancer burdens regarding long-term changes in the incidence and mortality rates of HBP cancers.89 Moreover, although the cancer registration system in Korea is highly efficient in providing nationwide cancer statistics, a lag time of at least two years is required for data collection and analysis in a specific year.1011 In planning and implementing comprehensive cancer control programs, the number of new cases and deaths that are expected to occur in the future should be estimated. Therefore, this study aimed to analyze the current trends and predict the epidemiologic features of liver, pancreatic, and biliary tract cancers according to the Korea Central Cancer Registry data. The findings of this study could provide useful insights into health policy planning and budget allocation, including cancer prevention, diagnostic and treatment strategies.", " Data collection The Korean Ministry of Health and Welfare initiated the Korea Central Cancer Registry (KCCR), a nationwide, hospital-based cancer registry, in 1980. Since 1999, the KCCR has collected cancer incidence data countrywide by integrating a nationwide hospital-based KCCR database with data from regional cancer registries. The KCCR built the Korea National Cancer Incidence Database (KNCI DB), which collects data from hospitals, 11 population-based registries, and additional medical record reviews since 2005. 
They have been providing the nationwide cancer incidence, survival, and prevalence statistics annually.12 This database contains information on age; sex; region; diagnosis; date; primary cancer site; histological type; most valid diagnostic method; summary stage of surveillance, epidemiology, and end results program (since 2005); and first treatment course within 4 months after diagnosis.\nIn this study, we obtained cancer incidence data from 1999–2017 from the KNCI DB. Cancer cases were classified according to the International Classification of Diseases for Oncology (ICD-O), third edition, and converted according to the International Classification of Diseases, 10th edition (ICD-10).1314 We also collected cancer mortality data from 2002–2018 from Statistics Korea.1 The cause of death was coded and classified according to ICD-10.14 We obtained the estimated population from 1999–2040 from Statistics Korea as well.\nCancer sites were categorized as follows: liver (C22.0, C22.9), gallbladder (GB) (C23), Intra- and extrahepatic bile duct (bile duct [BD]; IBD and EBD, respectively) (C22.1, C24.0, C24.8, C24.9), ampulla of Vater (AoV) (C24.1), and pancreas (C25).\nThe Korean Ministry of Health and Welfare initiated the Korea Central Cancer Registry (KCCR), a nationwide, hospital-based cancer registry, in 1980. Since 1999, the KCCR has collected cancer incidence data countrywide by integrating a nationwide hospital-based KCCR database with data from regional cancer registries. The KCCR built the Korea National Cancer Incidence Database (KNCI DB), which collects data from hospitals, 11 population-based registries, and additional medical record reviews since 2005. They have been providing the nationwide cancer incidence, survival, and prevalence statistics annually.12 This database contains information on age; sex; region; diagnosis; date; primary cancer site; histological type; most valid diagnostic method; summary stage of surveillance, epidemiology, and end results program (since 2005); and first treatment course within 4 months after diagnosis.\nIn this study, we obtained cancer incidence data from 1999–2017 from the KNCI DB. 
Cancer cases were classified according to the International Classification of Diseases for Oncology (ICD-O), third edition, and converted according to the International Classification of Diseases, 10th edition (ICD-10).1314 We also collected cancer mortality data from 2002–2018 from Statistics Korea.1 The cause of death was coded and classified according to ICD-10.14 We obtained the estimated population from 1999–2040 from Statistics Korea as well.\nCancer sites were categorized as follows: liver (C22.0, C22.9), gallbladder (GB) (C23), Intra- and extrahepatic bile duct (bile duct [BD]; IBD and EBD, respectively) (C22.1, C24.0, C24.8, C24.9), ampulla of Vater (AoV) (C24.1), and pancreas (C25).\n Ethical statement The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki, as reflected in a prior approval obtained from the institutional ethical committee (Institutional Review Board, National Cancer Center, Korea [NCC2020-0093]), Written informed consent from the participants was waived because of the retrospective design of the study.\nThe study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki, as reflected in a prior approval obtained from the institutional ethical committee (Institutional Review Board, National Cancer Center, Korea [NCC2020-0093]), Written informed consent from the participants was waived because of the retrospective design of the study.\n Statistical analysis Rates were expressed as crude rate (CR) and age-standardized rates (ASR) per 100,000 individuals. The CR was calculated as the total number of incidence (crude incidence rate [CIR]) or mortality (crude mortality rate [CMR]) cases divided by the midyear population of the specified years. ASRs were standardized using the registered population of year 2000 and expressed per 100,000 persons; the incidence rate ratio using ASRs was also calculated.15 Based on the CIR and the age-standardized incidence rate (ASIR), which were calculated until 2017, the prediction was made for 2018–2040. Conversely, the CMR and age-standardized mortality rate (ASMR) were calculated until 2018, and a forecast was made for 2019–2040.\nIncidence and mortality were modeled using an age-period-cohort model and extrapolated to 2040. The age-period-cohort model was as follows:\n\nλ (age, period) = g[fA (age) + fP (period) + fC (cohort)]\n\nwhere λ represents the incidence (mortality) rate according to age and calendar period; g is the “link” function; and fA, fP, and fC represent the functions of age, period (year of incidence or mortality), and cohort (year of birth, i.e., cohort = period–age), respectively.16 We used the exponential function as the link function, and fA, fP, and fC as natural cubic splines.\nWe used the “apcspline” Stata ado-program to conduct annual percentage change (APC) modeling, with a default setting for internal knots for the spline (6 for age variable, 5 for period variable, and 3 for cohort variable).\nWhen modeling using the spline, we absorbed the linear trends in period and cohort over the timespan with observed data into a drift component and attenuated the drift into the future. A function was developed in Stata to fit the model to the incidence/mortality of cancer in the individual (from 1999 to 2017 in incidence, 2002 to 2018 in mortality) and 5-year age groups (0–4, 5–9, …, 74–79, 80+) by sex. 
To estimate age-specific cancer rates in future years, an APC model was fitted to the age-specific rates of the 5-year age groups against their respective years, and the projected age-specific rates were multiplied by the age-specific population to obtain the projected numbers of cancer cases and deaths for the upcoming years. To adjust for the future demographic structure, we used the population estimates of Statistics Korea to calculate the ASIR and ASMR. Linear regression models were fitted to log-transformed ASRs against calendar year to identify time points with significant trend changes and to calculate the APC for each segment between these time points.17 Joinpoint regression analysis, which describes changes in data trends by connecting several line segments on a log scale at "joinpoints," was performed to detect when significant changes occurred in cancer trends according to sex and cancer site. The linear regression model was applied to the ASRs of the 5-year age groups against their respective years, based on the most recently observed cancer incidence data, to predict ASRs.\nMoreover, the APC, which is the average percentage change of the ASR, was calculated as follows:\n\nAPC = [(R_(y+1) − R_y) / R_y] × 100 = (e^(b1) − 1) × 100,\n\nwhere log(R_y) = b0 + b1 × year and log(R_y) is the natural log-transformed ASR.17\nThe average APC (AAPC), the weighted average of the segment APCs, was reported with 95% confidence intervals for the whole period of interest.18 We also performed pairwise comparisons and tested the differences between sexes. All statistical tests were two-tailed, and P values < 0.05 were considered statistically significant. Incidence and trends were analyzed using SAS 9.4 (SAS Institute, Inc., Cary, NC, USA), Joinpoint 4.7.0.0 (National Cancer Institute, Bethesda, MD, USA), and Stata version 16.1 (StataCorp LP, College Station, TX, USA).", "The Korean Ministry of Health and Welfare initiated the Korea Central Cancer Registry (KCCR), a nationwide, hospital-based cancer registry, in 1980. Since 1999, the KCCR has collected cancer incidence data countrywide by integrating the nationwide hospital-based KCCR database with data from regional cancer registries. The KCCR built the Korea National Cancer Incidence Database (KNCI DB), which has collected data from hospitals, 11 population-based registries, and additional medical record reviews since 2005, and has provided nationwide cancer incidence, survival, and prevalence statistics annually.12 This database contains information on age; sex; region; date of diagnosis; primary cancer site; histological type; most valid diagnostic method; summary stage of the Surveillance, Epidemiology, and End Results program (since 2005); and first course of treatment within 4 months after diagnosis.\nIn this study, we obtained cancer incidence data for 1999–2017 from the KNCI DB.
Cancer cases were classified according to the International Classification of Diseases for Oncology (ICD-O), third edition, and converted according to the International Classification of Diseases, 10th edition (ICD-10).1314 We also collected cancer mortality data from 2002–2018 from Statistics Korea.1 The cause of death was coded and classified according to ICD-10.14 We obtained the estimated population from 1999–2040 from Statistics Korea as well.\nCancer sites were categorized as follows: liver (C22.0, C22.9), gallbladder (GB) (C23), Intra- and extrahepatic bile duct (bile duct [BD]; IBD and EBD, respectively) (C22.1, C24.0, C24.8, C24.9), ampulla of Vater (AoV) (C24.1), and pancreas (C25).", "The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki, as reflected in a prior approval obtained from the institutional ethical committee (Institutional Review Board, National Cancer Center, Korea [NCC2020-0093]), Written informed consent from the participants was waived because of the retrospective design of the study.", "Rates were expressed as crude rate (CR) and age-standardized rates (ASR) per 100,000 individuals. The CR was calculated as the total number of incidence (crude incidence rate [CIR]) or mortality (crude mortality rate [CMR]) cases divided by the midyear population of the specified years. ASRs were standardized using the registered population of year 2000 and expressed per 100,000 persons; the incidence rate ratio using ASRs was also calculated.15 Based on the CIR and the age-standardized incidence rate (ASIR), which were calculated until 2017, the prediction was made for 2018–2040. Conversely, the CMR and age-standardized mortality rate (ASMR) were calculated until 2018, and a forecast was made for 2019–2040.\nIncidence and mortality were modeled using an age-period-cohort model and extrapolated to 2040. The age-period-cohort model was as follows:\n\nλ (age, period) = g[fA (age) + fP (period) + fC (cohort)]\n\nwhere λ represents the incidence (mortality) rate according to age and calendar period; g is the “link” function; and fA, fP, and fC represent the functions of age, period (year of incidence or mortality), and cohort (year of birth, i.e., cohort = period–age), respectively.16 We used the exponential function as the link function, and fA, fP, and fC as natural cubic splines.\nWe used the “apcspline” Stata ado-program to conduct annual percentage change (APC) modeling, with a default setting for internal knots for the spline (6 for age variable, 5 for period variable, and 3 for cohort variable).\nWhen modeling using the spline, we absorbed the linear trends in period and cohort over the timespan with observed data into a drift component and attenuated the drift into the future. A function was developed in Stata to fit the model to the incidence/mortality of cancer in the individual (from 1999 to 2017 in incidence, 2002 to 2018 in mortality) and 5-year age groups (0–4, 5–9, …, 74–79, 80+) by sex. To estimate age-specific cancer rates in future years, an APC model was fitted to age-specific rates for the 5-year age groups against their respective years, and the projected age-specific rates were multiplied by the age-specific population to obtain the projected number of cancer cases and deaths for the upcoming years. To adjust the future demographic structure, we employed an estimated population by Statistics Korea to calculate the ASIR and ASMR. 
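The projection step described above — fitting a trend to the age-specific rates of each 5-year age group, carrying the fitted rates forward, multiplying them by the projected population, and standardizing to the year-2000 population — can be sketched as follows. This is a minimal illustration in Python, not the authors' Stata "apcspline" age-period-cohort code: each age group here gets a simple log-linear trend as a stand-in for the spline model, and the file and column names ("rates.csv", "pop_projection.csv", "std_pop_2000.csv", age_group, year, cases, population, weight) are assumptions made only for this example.

```python
# Minimal sketch (not the authors' code): project age-specific rates with a
# per-age-group log-linear trend, convert them to expected case counts using the
# projected population, and compute a year-2000 age-standardized rate (ASR).
import numpy as np
import pandas as pd

rates = pd.read_csv("rates.csv")              # assumed columns: age_group, year, cases, population
proj_pop = pd.read_csv("pop_projection.csv")  # assumed columns: age_group, year, population
std_pop = pd.read_csv("std_pop_2000.csv")     # assumed columns: age_group, weight (year-2000 population)

# observed age-specific rate per 100,000
rates["rate"] = rates["cases"] / rates["population"] * 1e5

future_years = np.arange(2018, 2041)
proj_rows = []
for age, g in rates.groupby("age_group"):
    # log-linear trend of the age-specific rate against calendar year
    b1, b0 = np.polyfit(g["year"], np.log(g["rate"].clip(lower=1e-9)), 1)
    proj_rows.append(pd.DataFrame({
        "age_group": age,
        "year": future_years,
        "rate": np.exp(b0 + b1 * future_years),   # projected rate per 100,000
    }))
proj = pd.concat(proj_rows, ignore_index=True)

# expected case counts = projected age-specific rate x projected age-specific population
proj = proj.merge(proj_pop, on=["age_group", "year"])
proj["expected_cases"] = proj["rate"] / 1e5 * proj["population"]

# direct age standardization with year-2000 weights, per 100,000
w = std_pop.set_index("age_group")["weight"]
asr = (proj.assign(weighted=proj["rate"] * proj["age_group"].map(w))
           .groupby("year")["weighted"].sum() / w.sum())

print(proj.groupby("year")["expected_cases"].sum().round(0).head())
print(asr.round(1).head())
```

In the paper's actual model the age, period, and cohort effects are natural cubic splines and the period/cohort drift is attenuated into the future, which damps the projected trends relative to a straight log-linear extrapolation like the one above.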
", " Incidence and mortality rates of liver, pancreatic, and biliary tract cancers in Korea from 1999 to 2017, and from 2002 to 2018 From 1999 to 2017, the number of HBP cancer cases increased continuously. Compared with HCC, which showed a relatively slow increase in new cases, GB, BD, AoV, and pancreatic cancers increased approximately 2- to 3-fold over the same period. In the ASIR analysis, the ASIRs of liver and GB cancers decreased, whereas those of BD and pancreatic cancers increased (Table 1). Supplementary Table 1 shows the trend of annual cancer incidence cases among HBP cancers.\nGB = gallbladder, BD (I+E) = bile duct (intrahepatic+extrahepatic), AoV = ampulla of Vater.\nDuring the same period, trends in the ASIR of each cancer were as follows: 25.8 to 13.5 in liver cancer (AAPC −3.6%; P < 0.001), 2.9 to 2.6 in GB cancer (AAPC −0.5%; P = 0.171), 5.1 to 5.9 in BD cancer (AAPC 1.1%; P = 0.084), 0.9 to 0.9 in AoV cancer (AAPC −0.6%; P = 0.217), and 5.6 to 7.3 in pancreatic cancer (AAPC 1.5%; P < 0.001). The APC of the ASIR in liver and pancreatic cancers was statistically significant (Table 2). In the subgroup analysis, the trends of change in the ASIRs were similar in both sexes; the APC for the liver was significantly low, and those for the BD and pancreas were significantly high (Supplementary Table 2). In particular, the APC of the ASIR in pancreatic cancer was significantly higher in females than in males (2.4% vs. 0.8%, P < 0.001).\nASR = age-standardized rates, AAPC = average annual percent change, GB = gallbladder, BD (I+E) = bile duct (intrahepatic+extrahepatic), AoV = ampulla of Vater.\naObserved values; bEstimated values.\nFrom 2002 to 2018, the ASMR of each cancer decreased, except for pancreatic cancer (18.7 to 8.0 in liver [AAPC −5.2%; P < 0.001], 2.4 to 1.6 in GB [AAPC −2.5%; P < 0.001], 5.5 to 4.6 in IBD and EBD [AAPC −1.0%; P < 0.001], 0.4 to 0.4 in AoV [AAPC −0.2%; P = 0.654], and 5.5 to 5.6 in pancreatic cancer [AAPC 0.3%; P = 0.091]) per 100,000 persons. Thus, except for pancreatic cancer, the APC of the ASMR in all other HBP cancers was negative (Table 2).
In the subgroup analysis, the ASMRs of liver, GB, and BD cancers showed similarly decreasing trends in males and females. The ASMR of pancreatic cancer had decreased in males but increased in females. Meanwhile, the ASMR of AoV cancer showed no significant change over the past period in either sex (Supplementary Table 2).\n Predicting future epidemiology of liver, pancreatic, and biliary tract cancer in Korea by 2040 Predicted annual cases of newly diagnosed cancer from 2018 to 2040 increased in all cancer types as follows: 12,157 to 13,089 (liver), 2,681 to 4,038 (GB), 6,304 to 7,964 (IBD and EBD), 844 to 1,376 (AoV), and 7,459 to 16,170 (pancreas) (Fig. 1). The CIR of pancreatic cancer is predicted to increase more rapidly than those of the other cancers (Fig. 2A). Meanwhile, the predicted ASIR increased in pancreatic cancer (7.5 to 8.2 [AAPC 0.5%; P < 0.001]) but decreased in all other cancers (13.2 to 8.5 in liver [AAPC −2.0%; P < 0.001], 2.5 to 1.7 in GB [AAPC −1.6%; P < 0.001], 5.9 to 3.2 in IBD and EBD [AAPC −2.7%; P < 0.001], and 0.8 to 0.6 in AoV [AAPC −1.2%; P < 0.001]) (Table 2, Fig. 2B). Both sexes were predicted to have similar increasing trends of CIR (Supplementary Fig. 1A and B).
In subgroup analysis by sex, the predicted ASIR of pancreatic cancer increased only in females (6.2 to 7.9 [AAPC 1.1%; P < 0.001] vs. 8.6 to 8.5 in males [AAPC −0.1%; P < 0.001]). Meanwhile, the predicted ASIRs of GB, IBD/EBD, and AoV cancers decreased slightly in both males and females. Moreover, the ASIR of liver cancer decreased more rapidly in males (21.8 to 12.5 [AAPC −2.5%; P < 0.001]) than in females (5.3 to 4.5 [AAPC −0.8%; P < 0.001]) (Supplementary Table 3, Supplementary Fig. 1C and D).\nGB = gallbladder, BD = bile duct, AoV = ampulla of Vater.\nGB = gallbladder, BD = bile duct, AoV = ampulla of Vater.\nFurthermore, the predicted annual deaths from 2019 to 2040 decreased in liver cancer (7,551 to 6,037) but increased in the other cancers (GB, 1,795 to 2,391; IBD and EBD, 5,313 to 7,928; AoV, 397 to 536; pancreas, 6,202 to 11,023). CMRs are predicted to increase markedly in pancreatic and BD cancers but to decrease in liver cancer (Fig. 3A). However, the predicted ASMR decreased in all cancer types (7.5 to 3.2 in liver [AAPC −4.0%; P < 0.001], 1.5 to 0.9 in GB [AAPC −2.4%; P < 0.001], 4.5 to 2.9 in IBD and EBD [AAPC −2.1%; P < 0.001], 0.3 to 0.2 in AoV [AAPC −2.6%; P < 0.001], and 5.6 to 4.7 in pancreatic cancer [AAPC −0.8%; P < 0.001]) (Fig. 3B). Pancreatic cancer was predicted to become the leading cause of cancer-related death among HBP cancers. Similar CMR trends are expected in both sexes. Although the CMR of liver cancer was predicted to remain higher in males than in females, it was predicted to decrease more markedly in males (22.8 to 18.7) than in females (6.4 to 5.1). Meanwhile, the CMR of pancreatic cancer was predicted to increase more markedly (P < 0.001, data not shown) in females (12.1 to 25.1) than in males (11.8 to 18.2) (Supplementary Fig. 2A and B). The ASMR of pancreatic cancer was also predicted to increase in females (4.8 to 4.9) (P < 0.001) but to decrease in males (6.3 to 4.5) (P < 0.001) (Supplementary Table 3). Furthermore, the ASMR of pancreatic cancer in females in 2040 (4.9) was predicted to be the highest among all HBP cancers (liver, 1.1; GB, 0.9; BD, 1.7; AoV, 0.1). Moreover, the ASMR of liver cancer (12.9 to 5.4 [AAPC −4.1%; P < 0.001]) was still expected to be the highest in males in 2040 (GB, 0.9; BD, 4.2; AoV, 0.3; pancreas, 4.5). However, the predicted ASMRs of the other HBP cancers decreased (P < 0.001) (Supplementary Table 3, Supplementary Fig. 2C and D).\nGB = gallbladder, BD = bile duct, AoV = ampulla of Vater.", "Changes in cancer incidence and mortality vary with geography, race, sex, and age; hence, establishing a nationwide healthcare policy requires identifying the epidemiologic features of cancer and predicting its future burden from the gathered data.21920 In Korea, under a national policy on cancer prevention and screening, the incidence of all cancers combined peaked in 2011 and then declined. The overall cancer mortality rate has also been declining since 2002, owing to advances in diagnostic and treatment modalities. However, these overall trends do not represent the trend of each individual cancer. In this study, we specifically assessed the epidemiological changes of HBP cancers (GB, IBD/EBD, liver, AoV, and pancreatic cancers). The results showed that the number of new HBP cancer cases increased from 1999 to 2017 and is predicted to continue increasing through 2040. For liver cancer, the incidence and mortality rates have decreased since 2011 and 2002, respectively, and the projected CIR, CMR, ASIR, and ASMR were predicted to decrease dramatically by 2040. For the other HBP cancers, especially pancreatic cancer, the incidence has increased, and the CIR and ASIR were predicted to increase further. The CMR and ASMR of pancreatic cancer were predicted to decrease only slightly, and pancreatic cancer would eventually become the leading cause of HBP cancer mortality by 2040.\nRahib et al.21 predicted that pancreatic and liver cancers would surpass breast, prostate, and colorectal cancers to become the second and third leading causes of cancer-related death in the United States by 2030. In a recently published report in Korea, the pancreas is predicted to become one of the five leading cancer sites causing mortality in both sexes in 2021.22 Our study showed that pancreatic cancer incidence has been increasing gradually since 1999. Although the ASMR of pancreatic cancer is expected to decrease slightly in the future, it is expected to increase in females, and the pancreas will be the leading site of HBP cancer mortality by 2030–2040. The increase in the predicted mortality of pancreatic cancer may result from the lack of effective treatment and the difficulty of early diagnosis despite the increasing incidence. Surgery is the only potentially curative option for pancreatic cancer, but less than 20% of patients are eligible for this treatment.
In addition, treatments for metastatic pancreatic cancer are only minimally effective.232425 Therefore, strategies for early detection and therapeutic targets that can be translated and tested in clinical trials should be identified as early as possible in preparation for the expected increase in pancreatic cancer cases over the next 10–20 years. Of note, an extensive process is required to validate an early-detection biomarker for clinical use, and approximately 7.9 years is needed for the clinical testing and approval of a new cancer therapy.2627\nIn particular, the increasing trends of pancreatic cancer incidence and mortality were observed, and are predicted, more clearly in females than in males. Although increases in females' social activity and in their smoking and drinking rates may be contributing causes, these rates have increased only slightly and remain considerably higher in males than in females.28 Therefore, these results are difficult to interpret, and additional research is needed to explain the different trends between sexes.\nIn contrast, the incidence and mortality rates of liver cancer were predicted to decrease more markedly in males than in females. This dramatic decrease may derive from the national policy of vaccination against hepatitis B virus (HBV). Approximately 80% of all primary liver cancers are diagnosed as HCC, and the most important cause of HCC in Korea is HBV infection.29 According to a random-selection registry study of the Korean Liver Cancer Association and the KCCR, 62.2% and 10.4% of patients diagnosed with HCC between 2008 and 2010 had HBV and hepatitis C virus infection, respectively; the etiology of the remaining 27.4% was considered unknown but was presumed to be liver cirrhosis caused by alcoholic and/or nonalcoholic fatty liver disease.3031 Since 1983, HBV vaccination has been recommended for all neonates in Korea. The percentage of vaccinated infants has surpassed 98.9% since 1990, and the HBsAg carrier rate in the general population decreased to 3.7% in 2007. In particular, HBsAg prevalence decreased to 0.44% in teenagers and 0.2% in children below 10 years of age.32 Aside from vaccination, advances in medical, interventional, and surgical treatment for liver disease can contribute to reduced HCC mortality.33343536\nAs shown in our results, the incidence and mortality rates of HBP cancers continue to change, so future predictions made at a particular point in time must be interpreted with caution. As cancer incidence and mortality increase, medical costs also increase; thus, researchers, including clinicians, need to conduct epidemiologic analyses and predictions regularly for efficient budgeting.3738 Based on the accumulated results, improved national cancer policies will be established and implemented in the future.\nThe annual numbers of HBP cancer cases and deaths are expected to increase in the future. However, the rates for liver cancer are decreasing and will decrease further, especially in males. In contrast, the rates for pancreatic cancer have been increasing and will continue to increase, becoming the most common HBP cancer and the leading cause of cancer-related death among HBP cancers. Therefore, more HBP specialists and improvements in health policy are needed in preparation for this situation." ]
[ "intro", "methods", null, null, null, "results", null, null, "discussion" ]
[ "Hepatobiliary Cancer", "Pancreatic Cancer", "Incidence", "Mortality", "Future Prediction" ]
INTRODUCTION: Since 1983, when the cause of death statistics started to be published, cancer has been reported as the leading cause of death in Korea, making it the country’s major public health concern.1 In 2018, over 240,000 patients were newly diagnosed with cancer in Korea, and more than one-fourth of all-cause mortality cases resulted from cancer.2 Supported by a national policy on cancer prevention and screening in Korea, the incidence rate of cancer peaked in 2011 and gradually declined subsequently. The overall cancer mortality rate has also steadily decreased since 2002 owing to diagnostic and treatment modality advances. In 2018, liver, pancreatic, and biliary tract cancers were the 6th (6.5% of all new cancers), 8th (3.1%), and 9th (2.9%) most common cancers in Korea, respectively. However, when the incidence rates of all hepatobiliary and pancreatic (HBP) cancers were added, they comprise 12.5%, which is higher than that of stomach cancer (12.0%), the most common cancer in Korea. With the availability of vaccination and effective treatment for viral hepatitis, especially hepatitis B, the incidence of hepatocellular carcinoma (HCC) is decreasing; in contrast, the incidence rates of pancreatic and biliary tract cancers are rapidly increasing, and liver and pancreatic cancers ranked 2nd and 5th in the most common causes of cancer-related death in 2018.234567 Unfortunately, considering the relatively small number of patients with each type of HBP cancers, liver, pancreatic, and biliary tract cancers have not been prioritized in the national health policy, despite being fatal. Although studies from our institution have reported short-term future predictions of cancer incidence and mortality in Korea annually, no studies have focused on future cancer burdens regarding long-term changes in the incidence and mortality rates of HBP cancers.89 Moreover, although the cancer registration system in Korea is highly efficient in providing nationwide cancer statistics, a lag time of at least two years is required for data collection and analysis in a specific year.1011 In planning and implementing comprehensive cancer control programs, the number of new cases and deaths that are expected to occur in the future should be estimated. Therefore, this study aimed to analyze the current trends and predict the epidemiologic features of liver, pancreatic, and biliary tract cancers according to the Korea Central Cancer Registry data. The findings of this study could provide useful insights into health policy planning and budget allocation, including cancer prevention, diagnostic and treatment strategies. METHODS: Data collection The Korean Ministry of Health and Welfare initiated the Korea Central Cancer Registry (KCCR), a nationwide, hospital-based cancer registry, in 1980. Since 1999, the KCCR has collected cancer incidence data countrywide by integrating a nationwide hospital-based KCCR database with data from regional cancer registries. The KCCR built the Korea National Cancer Incidence Database (KNCI DB), which collects data from hospitals, 11 population-based registries, and additional medical record reviews since 2005. 
They have been providing the nationwide cancer incidence, survival, and prevalence statistics annually.12 This database contains information on age; sex; region; diagnosis; date; primary cancer site; histological type; most valid diagnostic method; summary stage of surveillance, epidemiology, and end results program (since 2005); and first treatment course within 4 months after diagnosis. In this study, we obtained cancer incidence data from 1999–2017 from the KNCI DB. Cancer cases were classified according to the International Classification of Diseases for Oncology (ICD-O), third edition, and converted according to the International Classification of Diseases, 10th edition (ICD-10).1314 We also collected cancer mortality data from 2002–2018 from Statistics Korea.1 The cause of death was coded and classified according to ICD-10.14 We obtained the estimated population from 1999–2040 from Statistics Korea as well. Cancer sites were categorized as follows: liver (C22.0, C22.9), gallbladder (GB) (C23), Intra- and extrahepatic bile duct (bile duct [BD]; IBD and EBD, respectively) (C22.1, C24.0, C24.8, C24.9), ampulla of Vater (AoV) (C24.1), and pancreas (C25). The Korean Ministry of Health and Welfare initiated the Korea Central Cancer Registry (KCCR), a nationwide, hospital-based cancer registry, in 1980. Since 1999, the KCCR has collected cancer incidence data countrywide by integrating a nationwide hospital-based KCCR database with data from regional cancer registries. The KCCR built the Korea National Cancer Incidence Database (KNCI DB), which collects data from hospitals, 11 population-based registries, and additional medical record reviews since 2005. They have been providing the nationwide cancer incidence, survival, and prevalence statistics annually.12 This database contains information on age; sex; region; diagnosis; date; primary cancer site; histological type; most valid diagnostic method; summary stage of surveillance, epidemiology, and end results program (since 2005); and first treatment course within 4 months after diagnosis. In this study, we obtained cancer incidence data from 1999–2017 from the KNCI DB. Cancer cases were classified according to the International Classification of Diseases for Oncology (ICD-O), third edition, and converted according to the International Classification of Diseases, 10th edition (ICD-10).1314 We also collected cancer mortality data from 2002–2018 from Statistics Korea.1 The cause of death was coded and classified according to ICD-10.14 We obtained the estimated population from 1999–2040 from Statistics Korea as well. Cancer sites were categorized as follows: liver (C22.0, C22.9), gallbladder (GB) (C23), Intra- and extrahepatic bile duct (bile duct [BD]; IBD and EBD, respectively) (C22.1, C24.0, C24.8, C24.9), ampulla of Vater (AoV) (C24.1), and pancreas (C25). Ethical statement The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki, as reflected in a prior approval obtained from the institutional ethical committee (Institutional Review Board, National Cancer Center, Korea [NCC2020-0093]), Written informed consent from the participants was waived because of the retrospective design of the study. 
The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki, as reflected in a prior approval obtained from the institutional ethical committee (Institutional Review Board, National Cancer Center, Korea [NCC2020-0093]), Written informed consent from the participants was waived because of the retrospective design of the study. Statistical analysis Rates were expressed as crude rate (CR) and age-standardized rates (ASR) per 100,000 individuals. The CR was calculated as the total number of incidence (crude incidence rate [CIR]) or mortality (crude mortality rate [CMR]) cases divided by the midyear population of the specified years. ASRs were standardized using the registered population of year 2000 and expressed per 100,000 persons; the incidence rate ratio using ASRs was also calculated.15 Based on the CIR and the age-standardized incidence rate (ASIR), which were calculated until 2017, the prediction was made for 2018–2040. Conversely, the CMR and age-standardized mortality rate (ASMR) were calculated until 2018, and a forecast was made for 2019–2040. Incidence and mortality were modeled using an age-period-cohort model and extrapolated to 2040. The age-period-cohort model was as follows: λ (age, period) = g[fA (age) + fP (period) + fC (cohort)] where λ represents the incidence (mortality) rate according to age and calendar period; g is the “link” function; and fA, fP, and fC represent the functions of age, period (year of incidence or mortality), and cohort (year of birth, i.e., cohort = period–age), respectively.16 We used the exponential function as the link function, and fA, fP, and fC as natural cubic splines. We used the “apcspline” Stata ado-program to conduct annual percentage change (APC) modeling, with a default setting for internal knots for the spline (6 for age variable, 5 for period variable, and 3 for cohort variable). When modeling using the spline, we absorbed the linear trends in period and cohort over the timespan with observed data into a drift component and attenuated the drift into the future. A function was developed in Stata to fit the model to the incidence/mortality of cancer in the individual (from 1999 to 2017 in incidence, 2002 to 2018 in mortality) and 5-year age groups (0–4, 5–9, …, 74–79, 80+) by sex. To estimate age-specific cancer rates in future years, an APC model was fitted to age-specific rates for the 5-year age groups against their respective years, and the projected age-specific rates were multiplied by the age-specific population to obtain the projected number of cancer cases and deaths for the upcoming years. To adjust the future demographic structure, we employed an estimated population by Statistics Korea to calculate the ASIR and ASMR. Linear regression models were used on logarithmic scaled ASRs and calendar years to identify time points with significant trend changes and to calculate the APC for each segment between these time points.17 The joinpoint regression analysis describes changes in data trends by connecting several different line segments on a log scale at “joinpoints,” which was performed to detect when significant changes occurred in cancer trends according to sex and cancer site. The linear regression model was applied to ASRs by 5-year age groups against their respective years, based on the observed cancer incidence data of the latest trends to predict ASRs. 
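Before the segment-specific APCs described next can be computed, the joinpoint step has to decide where the trend changes. The NCI Joinpoint software does this with permutation-based model selection; the sketch below is only a simplified, single-joinpoint illustration of the idea (scan candidate break years, fit two log-linear segments, keep the break with the smallest residual error, and report each segment's APC). The input series is a placeholder interpolated between the published 1999 and 2017 liver-cancer ASIRs, not the actual annual registry data.

```python
# Simplified single-joinpoint illustration (the NCI Joinpoint software performs
# full model selection with permutation tests; this only scans candidate breaks).
import numpy as np

def segment_apc(years, log_asr):
    slope, _ = np.polyfit(years, log_asr, 1)
    return (np.exp(slope) - 1.0) * 100.0          # APC, % per year

def one_joinpoint(years, asr):
    y = np.log(asr)
    best = None
    for k in range(2, len(years) - 2):            # keep at least 3 points per segment
        sse = 0.0
        for seg in (slice(0, k + 1), slice(k, None)):
            coef = np.polyfit(years[seg], y[seg], 1)
            sse += np.sum((y[seg] - np.polyval(coef, years[seg])) ** 2)
        if best is None or sse < best[0]:
            best = (sse, k)
    k = best[1]
    return years[k], segment_apc(years[:k + 1], y[:k + 1]), segment_apc(years[k:], y[k:])

years = np.arange(1999, 2018)
# placeholder values interpolated between the published 1999 and 2017 liver ASIRs
asr = np.array([25.8, 25.1, 24.5, 23.9, 23.0, 22.4, 21.5, 20.8, 20.1, 19.2,
                18.5, 17.9, 17.2, 16.4, 15.7, 15.1, 14.5, 14.0, 13.5])
print(one_joinpoint(years, asr))                  # (break year, APC before, APC after)
```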
Moreover, the APC, which is the average percentage change of the ASR, was calculated as follows: APC = [(R_(y+1) − R_y) / R_y] × 100 = (e^(b1) − 1) × 100, where log(R_y) = b0 + b1 × year and log(R_y) is the natural log-transformed ASR.17 The average APC (AAPC), the weighted average of the segment APCs, was reported with 95% confidence intervals for the whole period of interest.18 We also performed pairwise comparisons and tested the differences between sexes. All statistical tests were two-tailed, and P values < 0.05 were considered statistically significant. Incidence and trends were analyzed using SAS 9.4 (SAS Institute, Inc., Cary, NC, USA), Joinpoint 4.7.0.0 (National Cancer Institute, Bethesda, MD, USA), and Stata version 16.1 (StataCorp LP, College Station, TX, USA).
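The APC and AAPC definitions above translate directly into a few lines of code. This is an illustrative sketch, not the Joinpoint/Stata implementation used in the paper: the AAPC here is weighted by segment length in years, which follows the usual convention, and the segment data are toy values.

```python
# APC for one segment and AAPC over several segments, following the formulas above.
# Segment boundaries are assumed to come from a joinpoint fit.
import numpy as np

def apc(years, asr):
    # log-linear model: log(R_y) = b0 + b1*year  ->  APC = (exp(b1) - 1) * 100
    b1, _b0 = np.polyfit(years, np.log(asr), 1)
    return (np.exp(b1) - 1.0) * 100.0

def aapc(segments):
    # weighted average of segment slopes; weights = number of year intervals per segment
    weights, slopes = [], []
    for years, asr in segments:
        b1, _ = np.polyfit(years, np.log(asr), 1)
        weights.append(years[-1] - years[0])
        slopes.append(b1)
    b = np.average(slopes, weights=weights)
    return (np.exp(b) - 1.0) * 100.0

# toy example: two hypothetical segments of a declining ASR series
seg1 = (np.arange(1999, 2010), 25.8 * np.exp(-0.02 * np.arange(11)))
seg2 = (np.arange(2009, 2018), seg1[1][-1] * np.exp(-0.06 * np.arange(9)))
print(round(apc(*seg1), 1), round(apc(*seg2), 1), round(aapc([seg1, seg2]), 1))
```

Confidence intervals and the between-sex comparisons reported in the paper come from the standard errors of the fitted slopes, which are omitted here for brevity.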
RESULTS: Incidence and mortality rates of liver, pancreatic, and biliary tract cancers in Korea from 1999 to 2017, and from 2002 to 2018 From 1999 to 2017, the number of HBP cancer cases increased continuously. Compared with HCC, which showed a relatively slow increase in new cases, GB, BD, AoV, and pancreatic cancers increased approximately 2- to 3-fold over the same period. In the ASIR analysis, the ASIRs of liver and GB cancers decreased, whereas those of BD and pancreatic cancers increased (Table 1). Supplementary Table 1 shows the trend of annual cancer incidence cases among HBP cancers. GB = gallbladder, BD (I+E) = bile duct (intrahepatic+extrahepatic), AoV = ampulla of Vater.
During the same period, trends in the ASIR of each cancer were as follows: 25.8 to 13.5 in liver cancer (AAPC −3.6%; P < 0.001), 2.9 to 2.6 in GB cancer (AAPC −0.5%; P = 0.171), 5.1 to 5.9 in BD cancer (AAPC 1.1%; P = 0.084), 0.9 to 0.9 in AoV cancer (AAPC −0.6%; P = 0.217), and 5.6 to 7.3 in pancreatic cancer (AAPC 1.5%; P < 0.001). The APC of ASIR in liver and pancreatic cancers was statistically significant (Table 2). In the subgroup analysis, the trends of change of ASIRs were similar between both sexes; APC of the liver was significantly low and that of the BD and pancreas was significantly high (Supplementary Table 2). In particular, the APC of ASIR in pancreatic cancer was significantly higher in females than in males (2.4% vs. 0.8%, P < 0.001). ASR = age-standardized rates, AAPC = average annual percent change, GB = gallbladder, BD (I+E) = bile duct (intrahepatic+extrahepatic), AoV = ampulla of vater. aObserved values; bEstimated values. From 2002 to 2018, the ASMR of each cancer decreased, except for pancreatic cancer (18.7 to 8.0 in liver [AAPC −5.2%; P < 0.001], 2.4 to 1.6 in GB [AAPC −2.5%; P < 0.001], 5.5 to 4.6 in IBD and EBD [AAPC −1.0%; P < 0.001], 0.4 to 0.4 in AoV [AAPC −0.2%; P = 0.654], and 5.5 to 5.6 in pancreatic [AAPC 0.3%; P = 0.091]) cancers per 100,000 persons. Thus, except for pancreatic cancer, the APC of all ASMRs in other HBP cancers was negative (Table 2). In the subgroup analysis, the trends of change of ASMRs in liver, GB, and BD cancers were decreasing, similar to those in males and females. The ASMR in pancreatic cancer had been decreased in males but increased in females. Meanwhile, the ASMR in AoV cancer showed no significant difference in the past in both sexes (Supplementary Table 2). Predicting future epidemiology of liver, pancreatic, and biliary tract cancer in Korea by 2040 Predicted annual cases of newly diagnosed cancer from 2018 to 2040 increased in all cancer types as follows: 12,157 to 13,089 (liver), 2,681 to 4,038 (GB), 6,304 to 7,964 (IBD and EBD), 844 to 1,376 (AoV), and 7,459 to 16,170 (pancreas) (Fig. 1). The CIR of pancreatic cancer is predicted to increase more rapidly than those of other cancers (Fig. 2A). Meanwhile, the predicted ASIR increased in pancreatic cancer (7.5 to 8.2 [AAPC 0.5%; P < 0.001]) but decreased in all other cancers (13.2 to 8.5 in liver [AAPC −2.0%; P < 0.001], 2.5 to 1.7 in GB [AAPC −1.6%; P < 0.001], 5.9 to 3.2 in IBD and EBD [AAPC −2.7%; P < 0.001], and 0.8 to 0.6 in AoV [AAPC −1.2%; P < 0.001]) (Table 2, Fig. 2B). Both sexes were predicted to have similar increasing trends of CIR (Supplementary Fig. 1A and B). In subgroup analysis in both sexes, the predicted ASIR of pancreatic cancer increased only in females (6.2 to 7.9 [AAPC 1.1%; P < 0.001] vs.8.6 to 8.5 [AAPC −0.1%; P < 0.001]). Meanwhile, the predicted ASIR in GB, IBD/EBD, and AoV cancers decreased slightly in both males and females. Moreover, the ASIR of liver cancer decreased more rapidly in males (21.8 to 12.5 [AAPC −2,5%; P < 0.001] than in females (5.3 to 4.5 [AAPC −0.8%; P < 0.001]) (Supplementary Table 3, Supplementary Fig. 1C and D). GB = gallbladder, BD = bile duct, AoV = ampulla of Vater. GB = gallbladder, BD = bile duct, AoV = ampulla of Vater. Furthermore, the predicted annual deaths from 2019 to 2040 decreased in liver cancer (7,551 to 6,037) but increased in other cancers (GB, 1,795 to 2,391; IBD and EBD, 5,313 to 7,928; AoV, 397 to 536; pancreas, 6,202 to 11,023). 
CMRs are predicted to increase markedly in pancreatic and BD cancers but to decrease in liver cancer (Fig. 3A). However, the predicted ASMR decreased in all cancer types (7.5 to 3.2 in liver [AAPC −4.0%; P < 0.001], 1.5 to 0.9 in GB [AAPC −2.4%; P < 0.001], 4.5 to 2.9 in IBD and EBD [AAPC −2.1%; P < 0.001], 0.3 to 0.2 in AoV [AAPC −2.6%; P < 0.001], and 5.6 to 4.7 in pancreatic cancer [AAPC −0.8%; P < 0.001]) (Fig. 3B). Pancreatic cancer was predicted to be the leading cause of cancer-related death among HBP cancers. Similar CMR trends are expected in both sexes. Although the CMR of liver cancer was predicted to remain higher in males than in females, it was predicted to decrease more markedly in males (22.8 to 18.7) than in females (6.4 to 5.1). Meanwhile, the CMR of pancreatic cancer was predicted to increase more markedly (P < 0.001, data not shown) in females (12.1 to 25.1) than in males (11.8 to 18.2) (Supplementary Fig. 2A and B). The ASMR of pancreatic cancer was also predicted to increase in females (4.8 to 4.9; P < 0.001) but to decrease in males (6.3 to 4.5; P < 0.001) (Supplementary Table 3). Furthermore, the ASMR of pancreatic cancer in females in 2040 (4.9) was predicted to be the highest among all HBP cancers (liver, 1.1; GB, 0.9; BD, 1.7; AoV, 0.1). Moreover, the ASMR of liver cancer (12.9 to 5.4 [AAPC −4.1%; P < 0.001]) was still expected to be the highest in males in 2040 (GB, 0.9; BD, 4.2; AoV, 0.3; pancreas, 4.5), although the predicted ASMRs of the other HBP cancers also decreased (P < 0.001) (Supplementary Table 3, Supplementary Fig. 2C and D). GB = gallbladder, BD = bile duct, AoV = ampulla of Vater.
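The ASIR and ASMR values throughout this section are directly age-standardized rates. The fragment below illustrates the general formula for direct standardization; the age bands, counts and standard-population weights are invented for illustration and are not the weights used in the study.

```python
# Direct age standardization: ASR = sum_i(w_i * r_i), where r_i is the age-specific rate
# (cases per person-year) and w_i the standard-population weight for age group i.
# All numbers are illustrative placeholders.
import numpy as np

cases = np.array([2, 10, 45, 120, 260])                 # hypothetical cases per age band
person_years = np.array([5e6, 4e6, 3e6, 2e6, 1e6])      # hypothetical person-years per age band
std_weights = np.array([0.30, 0.28, 0.22, 0.14, 0.06])  # hypothetical standard weights (sum to 1)

age_specific = cases / person_years * 100_000           # age-specific rates per 100,000
asr = float(np.sum(std_weights * age_specific))
print(f"Age-standardized rate: {asr:.1f} per 100,000")
```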
DISCUSSION: Changes in cancer incidence and mortality vary with geography, race, sex, and age; hence, establishing a nationwide healthcare policy requires identifying the epidemiologic features of each cancer and predicting its future course from the gathered data.21920 In Korea, under the national policy on cancer prevention and screening, the incidence of all cancer types combined peaked in 2011 and then declined. The overall cancer mortality rate has also been declining since 2002, owing to advances in diagnostic and treatment modalities. However, these overall trends do not reflect the trend of each individual cancer. In this study, we specifically assessed the epidemiological changes of HBP cancers (GB, IBD/EBD, liver, AoV, and pancreatic cancers). Results showed that the number of new HBP cancer cases increased from 1999 to 2017 and is predicted to continue increasing through 2040. For liver cancer, the incidence and mortality rates have decreased since 2011 and 2002, respectively, and the projected CIR, CMR, ASIR, and ASMR are predicted to decrease dramatically by 2040. For the other HBP cancers, especially pancreatic cancer, the incidence has increased, and the CIR and ASIR are predicted to keep increasing. The CMR and ASMR of pancreatic cancer are predicted to decrease only slightly, and pancreatic cancer will eventually become the leading cause of HBP cancer mortality by 2040. Rahib et al.21 predicted that pancreatic and liver cancers would surpass breast, prostate, and colorectal cancers to become the second and third leading causes of cancer-related death by 2030 in the United States. In a recently published report in Korea, the pancreas is predicted to become one of the five leading cancer sites causing mortality in both sexes in 2021.22 Our study showed that pancreatic cancer incidence has been increasing gradually since 1999. Although the ASMR of pancreatic cancer is expected to decrease slightly in the future, it is expected to increase in females, and the pancreas will be the leading site of HBP cancer mortality by 2030–2040. The increase in predicted mortality of pancreatic cancer may result from the lack of effective treatment and the difficulty of early diagnosis despite the increasing incidence. Surgery is the only potentially curative option for pancreatic cancer, but less than 20% of patients are eligible for this treatment. In addition, treatments for metastatic pancreatic cancer are minimally effective.232425 Therefore, strategies for early detection and therapeutic targets that can be translated and tested in clinical trials should be identified as early as possible in preparation for the expected increase in pancreatic cancer cases over the next 10–20 years. Of note, an extensive process is required to validate an early detection biomarker for clinical use, and approximately 7.9 years is needed for clinical testing and approval of a new cancer therapy.2627 Notably, the increasing trends of incidence and mortality in pancreatic cancer were observed, and are predicted, more clearly in females than in males.
Although increases in females' social activity and in female smoking and drinking rates could be contributing causes, smoking and drinking rates have risen only slightly in females and remain considerably higher in males.28 These results are therefore difficult to interpret, and additional research is needed to explain the differing trends between sexes. In contrast, the incidence and mortality rates of liver cancer were predicted to decrease more markedly in males than in females. This dramatic decrease may derive from hepatitis B virus (HBV) vaccination delivered as a national policy. Approximately 80% of all primary liver cancers are diagnosed as HCC, and the most important cause of HCC in Korea is HBV infection.29 According to a random-selection registry study of the Korean Liver Cancer Association and the KCCR, 62.2% and 10.4% of patients diagnosed with HCC between 2008 and 2010 had HBV and hepatitis C virus, respectively; the remaining 27.4% were of unknown cause but were presumed to reflect liver cirrhosis caused by alcoholic and/or nonalcoholic fatty liver disease.3031 Since 1983, HBV vaccination has been recommended for all neonates in Korea. The percentage of vaccinated infants has exceeded 98.9% since 1990, and the HBsAg carrier rate in the general population decreased to 3.7% in 2007. In particular, HBsAg prevalence decreased to 0.44% in teenagers and 0.2% in children below 10 years of age.32 Aside from vaccination, advances in medical, interventional, and surgical treatment for liver disease may have contributed to the reduced HCC mortality.33343536 As our results show, the incidence and mortality rates of HBP cancers continue to change, so future predictions made at a particular point in time must be interpreted with caution. As cancer incidence and mortality increase, medical costs also increase; thus, researchers, including clinicians, need to conduct epidemiologic analyses and predictions regularly to support efficient budgeting.3738 Based on the accumulated results, improved national cancer policies can be established and implemented in the future. The annual numbers of HBP cancer incident cases and deaths are expected to increase in the future. However, those of liver cancer are decreasing and will decrease further, especially in males. In contrast, those of pancreatic cancer have been increasing and will continue to increase, making it the most frequent and the leading cause of cancer-related death among HBP cancers. Therefore, more HBP specialists and improvements in health policy are needed in preparation for this situation.
Background: This study aimed to analyze the current trends and predict the epidemiologic features of hepatobiliary and pancreatic (HBP) cancers according to the Korea Central Cancer Registry to provide insights into health policy. Methods: Incidence data from 1999 to 2017 and mortality data from 2002 to 2018 were obtained from the Korea National Cancer Incidence Database and Statistics Korea, respectively. The future incidence rate from 2018 to 2040 and mortality rate from 2019 to 2040 of each HBP cancer were predicted using an age-period-cohort model. All analyses, including incidence and mortality, were stratified by sex. Results: From 1999 to 2017, the age-standardized incidence rate (ASIR) of HBP cancers per 100,000 population had changed (liver, 25.8 to 13.5; gallbladder [GB], 2.9 to 2.6; bile ducts, 5.1 to 5.9; ampulla of Vater [AoV], 0.9 to 0.9; and pancreatic, 5.6 to 7.3). The age-standardized mortality rate (ASMR) per 100,000 population from 2002 to 2018 of each cancer had declined, excluding pancreatic cancer (5.5 to 5.6). The predicted ASIR of pancreatic cancer per 100,000 population from 2018 to 2040 increased (7.5 to 8.2), but that of other cancers decreased. Furthermore, the predicted ASMR per 100,000 population from 2019 to 2040 decreased in all types of cancers: liver (6.5 to 3.2), GB (1.4 to 0.9), bile ducts (4.3 to 2.9), AoV (0.3 to 0.2), and pancreas (5.4 to 4.7). However, in terms of sex, the predicted ASMR of pancreatic cancer per 100,000 population in females increased (3.8 to 4.9). Conclusions: The annual incidence and mortality cases of HBP cancers are generally predicted to increase. Especially, pancreatic cancer has an increasing incidence and will be the leading cause of cancer-related death among HBP cancers.
null
null
9,036
362
[ 313, 61, 761, 531, 820 ]
9
[ "cancer", "aapc", "pancreatic", "001", "cancers", "incidence", "liver", "age", "aapc 001", "predicted" ]
[ "rates liver cancer", "common cancer korea", "liver pancreatic cancers", "incidence hepatocellular carcinoma", "cancer korea availability" ]
null
null
[CONTENT] Hepatobiliary Cancer | Pancreatic Cancer | Incidence | Mortality | Future Prediction [SUMMARY]
[CONTENT] Hepatobiliary Cancer | Pancreatic Cancer | Incidence | Mortality | Future Prediction [SUMMARY]
[CONTENT] Hepatobiliary Cancer | Pancreatic Cancer | Incidence | Mortality | Future Prediction [SUMMARY]
null
[CONTENT] Hepatobiliary Cancer | Pancreatic Cancer | Incidence | Mortality | Future Prediction [SUMMARY]
null
[CONTENT] Female | Humans | Incidence | Pancreatic Neoplasms | Registries | Republic of Korea [SUMMARY]
[CONTENT] Female | Humans | Incidence | Pancreatic Neoplasms | Registries | Republic of Korea [SUMMARY]
[CONTENT] Female | Humans | Incidence | Pancreatic Neoplasms | Registries | Republic of Korea [SUMMARY]
null
[CONTENT] Female | Humans | Incidence | Pancreatic Neoplasms | Registries | Republic of Korea [SUMMARY]
null
[CONTENT] rates liver cancer | common cancer korea | liver pancreatic cancers | incidence hepatocellular carcinoma | cancer korea availability [SUMMARY]
[CONTENT] rates liver cancer | common cancer korea | liver pancreatic cancers | incidence hepatocellular carcinoma | cancer korea availability [SUMMARY]
[CONTENT] rates liver cancer | common cancer korea | liver pancreatic cancers | incidence hepatocellular carcinoma | cancer korea availability [SUMMARY]
null
[CONTENT] rates liver cancer | common cancer korea | liver pancreatic cancers | incidence hepatocellular carcinoma | cancer korea availability [SUMMARY]
null
[CONTENT] cancer | aapc | pancreatic | 001 | cancers | incidence | liver | age | aapc 001 | predicted [SUMMARY]
[CONTENT] cancer | aapc | pancreatic | 001 | cancers | incidence | liver | age | aapc 001 | predicted [SUMMARY]
[CONTENT] cancer | aapc | pancreatic | 001 | cancers | incidence | liver | age | aapc 001 | predicted [SUMMARY]
null
[CONTENT] cancer | aapc | pancreatic | 001 | cancers | incidence | liver | age | aapc 001 | predicted [SUMMARY]
null
[CONTENT] cancer | cancers | korea | pancreatic | pancreatic biliary | biliary tract cancers | pancreatic biliary tract cancers | pancreatic biliary tract | biliary | biliary tract [SUMMARY]
[CONTENT] age | cancer | incidence | period | cohort | year | asrs | mortality | rate | calculated [SUMMARY]
[CONTENT] 001 | aapc | aapc 001 | cancer | predicted | pancreatic | pancreatic cancer | cancers | gb | aov [SUMMARY]
null
[CONTENT] cancer | 001 | pancreatic | aapc | cancers | aapc 001 | predicted | incidence | pancreatic cancer | age [SUMMARY]
null
[CONTENT] the Korea Central Cancer Registry [SUMMARY]
[CONTENT] 1999 | 2017 | 2002 | 2018 | the Korea National Cancer Incidence Database | Statistics Korea ||| 2018 | 2040 | 2019 | 2040 | HBP ||| [SUMMARY]
[CONTENT] 1999 to 2017 | ASIR | 100,000 | 25.8 | 13.5 ||| GB | 2.9 | 2.6 | 5.1 | 5.9 | Vater | 0.9 | 0.9 | 5.6 | 7.3 ||| 100,000 | 2002 | 2018 | 5.5 to 5.6 ||| ASIR | 100,000 | 2018 | 2040 | 7.5 to 8.2 ||| ASMR | 100,000 | 2019 | 2040 | 6.5 | 3.2 | GB | 1.4 | 0.9 | 4.3 to 2.9 | AoV | 0.3 | 5.4 | 4.7 ||| 100,000 | 3.8 [SUMMARY]
null
[CONTENT] the Korea Central Cancer Registry ||| 1999 | 2017 | 2002 | 2018 | the Korea National Cancer Incidence Database | Statistics Korea ||| 2018 | 2040 | 2019 | 2040 | HBP ||| ||| 1999 | 2017 | ASIR | 100,000 | 25.8 | 13.5 ||| GB | 2.9 | 2.6 | 5.1 | 5.9 | Vater | 0.9 | 0.9 | 5.6 | 7.3 ||| 100,000 | 2002 | 2018 | 5.5 to 5.6 ||| ASIR | 100,000 | 2018 | 2040 | 7.5 to 8.2 ||| ASMR | 100,000 | 2019 | 2040 | 6.5 | 3.2 | GB | 1.4 | 0.9 | 4.3 to 2.9 | AoV | 0.3 | 5.4 | 4.7 ||| 100,000 | 3.8 ||| annual ||| [SUMMARY]
null
Bacteraemia, antimicrobial susceptibility and treatment among Campylobacter-associated hospitalisations in the Australian Capital Territory: a review.
34419003
Campylobacter spp. cause mostly self-limiting enterocolitis, although a significant proportion of cases require hospitalisation highlighting potential for severe disease. Among people admitted, blood culture specimens are frequently collected and antibiotic treatment is initiated. We sought to understand clinical and host factors associated with bacteraemia, antibiotic treatment and isolate non-susceptibility among Campylobacter-associated hospitalisations.
BACKGROUND
Using linked hospital microbiology and administrative data we identified and reviewed Campylobacter-associated hospitalisations between 2004 and 2013. We calculated population-level incidence for Campylobacter bacteraemia and used logistic regression to examine factors associated with bacteraemia, antibiotic treatment and isolate non-susceptibility among Campylobacter-associated hospitalisations.
METHODS
Among 685 Campylobacter-associated hospitalisations, we identified 25 admissions for bacteraemia, an estimated incidence of 0.71 cases per 100,000 population per year. Around half of hospitalisations (333/685) had blood culturing performed. Factors associated with bacteraemia included underlying liver disease (aOR 48.89, 95% CI 7.03-340.22, p < 0.001), Haematology unit admission (aOR 14.67, 95% CI 2.99-72.07, p = 0.001) and age 70-79 years (aOR 4.93, 95% CI 1.57-15.49). Approximately one-third (219/685) of admissions received antibiotics with treatment rates increasing significantly over time (p < 0.05). Factors associated with antibiotic treatment included Gastroenterology unit admission (aOR 3.75, 95% CI 1.95-7.20, p < 0.001), having blood cultures taken (aOR 2.76, 95% CI 1.79-4.26, p < 0.001) and age 40-49 years (aOR 2.34, 95% CI 1.14-4.79, p = 0.02). Non-susceptibility of isolates to standard antimicrobials increased significantly over time (p = 0.01) and was associated with overseas travel (aOR 11.80 95% CI 3.18-43.83, p < 0.001) and negatively associated with tachycardia (aOR 0.48, 95%CI 0.26-0.88, p = 0.02), suggesting a healthy traveller effect.
RESULTS
Campylobacter infections result in considerable hospital burden. Among those admitted to hospital, an interplay of factors involving clinical presentation, presence of underlying comorbidities, complications and increasing age influence how a case is investigated and managed.
CONCLUSIONS
[ "Adult", "Aged", "Anti-Bacterial Agents", "Anti-Infective Agents", "Australia", "Australian Capital Territory", "Bacteremia", "Campylobacter", "Hospitalization", "Humans", "Middle Aged", "Risk Factors" ]
8379883
Background
Campylobacter spp. are internationally significant as a cause of infectious diarrhoeal disease [1]. Most cases of infectious diarrhoea, including those caused by Campylobacter spp., are self-limiting, with management focused on maintenance of hydration via fluid repletion [2]. However, for persons hospitalised with Campylobacter infection, clinical thresholds for exclusion of bacteraemia and the consideration of antimicrobial therapy differ due to symptom severity, risk of complications or the exacerbation of underlying co-morbidities [2]. While bacteraemia is an uncommon complication of campylobacteriosis, testing and diagnosis typically occur when a person is hospitalised [3]. Within a hospital setting, clinical tolerances and the challenge of predicting bacteraemia among febrile admissions with an enteric focus are important considerations [4]. Further, the likelihood of clinicians commencing antibiotic therapy may also increase among hospitalised cases, highlighting the importance of judicious prescribing and understanding of isolate susceptibility patterns to ensure viable treatment options remain [5]. We describe rates of bacteraemia, antimicrobial susceptibility and treatment among a cohort of Campylobacter-associated hospitalisations. We also examined clinical and host factors associated with the diagnosis of blood stream infections (BSI), isolate non-susceptibility and antibiotic treatment of hospitalised cases.
null
null
Results
Campylobacter bacteraemia

Out of 685 admissions, 333 (49%) had blood drawn for culture and 25 (7.5%) of these tested positive (Fig. 1). Speciation was performed on 21 (84%) of these case isolates, with 15 (71.4%) cases of C. jejuni, 5 (23.8%) cases of C. coli and a single (4.0%) case of C. lari bacteraemia. Amongst the 25 admissions with bacteraemia, 15 were male (60%) and 10 were female (40%), and the median age was 59.5 years (range 12 to 90 years). Amongst the 308 blood-culture-negative cases there were 177 males (57.5%) and 131 females (42.5%), with a median age of 38.8 years (range < 1.0 to 92 years). This age difference was significant (median test χ2 = 5.30, p = 0.02), but no evidence of a difference in sex composition was observed (χ2 = 0.06, p = 0.81). The age and sex of admissions in relation to blood culture status are shown in Fig. 2. We also compared those who had blood drawn for culture (whether positive or negative) with those who did not, to assess demographic differences. We saw no evidence of a difference in median age (median test χ2 = 0.18, p = 0.68) but did observe that males were more likely to have blood drawn (χ2 = 11.17, p = 0.001).
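The sex comparison above is a standard contingency-table test. Using the counts reported in this paragraph, a check of that kind can be reproduced as below; whether the original analysis applied a continuity correction is not stated, so the exact statistic may differ slightly.

```python
# Chi-squared test of sex composition: bacteraemic versus blood-culture-negative admissions.
# Counts are those reported in the text (15 M / 10 F positive; 177 M / 131 F negative).
from scipy.stats import chi2_contingency

table = [[15, 10],     # blood culture positive: males, females
         [177, 131]]   # blood culture negative: males, females

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")   # the text reports chi2 = 0.06, p = 0.81
```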
We observed no difference in the proportion of admissions with documented comorbidities who had blood specimens collected for culture compared with admissions without comorbidities (χ2 = 1.42, p = 0.23). Bacteraemia generally occurred in the context of antecedent diarrhoeal illness, although two admissions involved primary bacteraemia (i.e. without diarrhoea) in patients with haematological malignancies. Comorbidities, while common, were not a characteristic feature of cases hospitalised with bacteraemia, as shown in Table 1. No deaths were identified among cases with bacteraemia.

Fig. 1 Flow diagram for blood culture collection and subsequent Campylobacter spp. bacteraemia among a cohort of Campylobacter-associated hospitalisations.

Fig. 2 Jitter plot of age and sex by blood culture status among Campylobacter-associated hospitalisations.

Table 1 Characteristics of cases hospitalised with Campylobacter bacteraemia, 2004 to 2013
Year (case) | Age range/sex | Species | Bacteraemia source | Antimicrobial susceptibility | Antimicrobial treatment | Significant medical history and risk factors
2004 | 80+ M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Age, nil other significant
2005 (a) | 50–59 F | Campylobacter sp. | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Acute myeloid leukaemia
2005 (b) | 60–69 F | Campylobacter sp. | Enteric (secondary) | Fully sensitive | Nil | Nil significant
2005 (c) | 40–49 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | IV drug use, PUD on omeprazole
2006 (a) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Untreated stage III HIV
2008 (a) | 20–29 M | Campylobacter sp. | Enteric (secondary) | Fully sensitive | IV azithromycin 500 mg qd | Nil significant
2008 (b) | 60–69 M | C. coli | Primary bacteraemia | Ciprofloxacin and erythromycin resistant | Nil | Lymphocytic lymphoma
2009 | 30–39 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 250 mg bd | Renal transplant secondary to IgA nephropathy
2010 (a) | 60–69 M | Campylobacter sp. | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Bowel carcinoma, current chemotherapy
2010 (b) | 30–39 M | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Alcoholic liver disease with portal hypertension and bleeding varices
2010 (c) | 70–79 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (d) | 10–19 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO azithromycin 500 mg qd (upon discharge) | Nil significant
2010 (e) | 70–79 F | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (f) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO norfloxacin 400 mg bd | Irritable bowel syndrome
2011 (a) | 60–69 M | C. lari | Enteric (secondary) | Ciprofloxacin resistant | PO doxycycline 100 mg bd | Alcoholic liver disease with portal hypertension, recent intracerebral bleed
2011 (b) | 80+ M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil prescribed | Age, nil other significant
2011 (c) | 70–79 M | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Asplenic
2012 (a) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | IV ciprofloxacin 500 mg bd | Multiple sclerosis, current chemotherapy pre-stem cell transplantation, IDDM
2012 (b) | 30–39 F | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Pregnant (33/40), IDDM
2012 (c) | 70–79 M | C. jejuni | Enteric (secondary) | Ciprofloxacin and nalidixic acid resistant | Nil | Diabetic neuropathy, chronic renal failure
2012 (d) | 60–69 F | C. coli | Enteric (secondary) | Ciprofloxacin and nalidixic acid resistant | Nil | Nil significant
2013 (a) | 20–29 M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Nil significant
2013 (b) | 20–29 M | C. jejuni | Primary bacteraemia | Fully sensitive | PO ciprofloxacin 750 mg bd (upon discharge) | B cell leukaemia, AVN (on steroids), SIADH
2013 (c) | 50–59 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Liver failure with cirrhosis, portal hypertension secondary to hepatitis C, T2DM, hypothyroidism; awaiting transplant
2013 (d) | 70–79 M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Age, nil other significant
PO, per oral; IV, intravenous; PUD, peptic ulcer disease; HIV, human immunodeficiency virus; T2DM, type 2 diabetes mellitus; IDDM, insulin dependent diabetes mellitus; AVN, acute vascular necrosis; SIADH, syndrome of inappropriate antidiuretic hormone.

During the period 2004 to 2013, the mean incidence of Campylobacter bacteraemia in the host population was 0.71 cases per 100,000 population per year (95% CI 0.48–1.05 per 100,000 population) (Fig. 3). We did not observe temporal trends in the proportion of Campylobacter-associated hospitalisations undergoing blood collection for culture or in the proportion of positive blood cultures among Campylobacter-associated hospitalisations.

Fig. 3 Incidence of Campylobacter spp. bacteraemia among ACT residents, 2004 to 2013.
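The population-level incidence quoted above is a crude rate with an exact (Poisson) confidence interval. The sketch below shows the arithmetic; the person-years denominator is a hypothetical placeholder because the ACT population figures are not given in this excerpt, so the output will only approximate the published 0.71 (95% CI 0.48–1.05) per 100,000 per year.

```python
# Crude incidence with an exact (Garwood) Poisson 95% CI: rate = cases / person-years.
# The 25 bacteraemia cases come from the text; person_years is an assumed placeholder.
from scipy.stats import chi2

cases = 25
person_years = 3_520_000   # assumption: roughly 352,000 residents observed over 10 years

rate = cases / person_years * 100_000
lower = chi2.ppf(0.025, 2 * cases) / 2 / person_years * 100_000
upper = chi2.ppf(0.975, 2 * (cases + 1)) / 2 / person_years * 100_000
print(f"{rate:.2f} per 100,000 per year (95% CI {lower:.2f}-{upper:.2f})")
```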
Factors associated with the collection of blood samples for culture and subsequent isolation of Campylobacter spp. are shown in Tables 2 and 3.

Table 2 Factors associated with collection of blood for culture among a cohort of Campylobacter-associated hospitalisations (n = 663)
Predictor variable | Adjusted odds ratio (aOR) | 95% confidence interval | p-value
Infectious Diseases Unit admission | 8.00 | 1.58–40.61 | 0.01
Febrile during admission (≥ 38 °C) | 21.30 | 13.76–32.96 | < 0.001
Tachycardia | 1.75 | 1.13–2.72 | 0.01
Moderate to severe renal disease | 2.87 | 1.26–6.55 | 0.01
10–19 years age group | 0.36 | 0.19–0.71 | < 0.01
22 observations contained missing data. Hosmer and Lemeshow goodness of fit: χ2 = 2.70, p = 0.61.

Table 3 Factors associated with blood stream isolation of Campylobacter spp. among Campylobacter-associated hospitalisations (n = 333)
Predictor variable | Adjusted odds ratio (aOR) | 95% confidence interval | p-value
Moderate to severe liver disease | 48.89 | 7.03–340.22 | < 0.001
Haematology Unit admission | 14.67 | 2.99–72.07 | 0.001
Age group 70–79 years | 4.93 | 1.57–15.49 | < 0.01
Admission during summer months | 2.93 | 1.14–7.57 | 0.03
Indigenous Australian | 10.87 | 1.00–117.89 | 0.05
Hosmer and Lemeshow goodness of fit: χ2 = 0.71, p = 0.70.
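Tables 2 and 3 report adjusted odds ratios from multivariable logistic regression. The sketch below shows, in generic form, how such aORs and confidence intervals are obtained; the data frame, column names and simulated values are invented and are not the study's dataset or model specification.

```python
# Hypothetical sketch of a multivariable logistic regression producing adjusted odds
# ratios (aOR) with 95% CIs, in the spirit of Tables 2 and 3. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "bacteraemia":   rng.binomial(1, 0.08, 333),   # outcome: positive blood culture
    "liver_disease": rng.binomial(1, 0.05, 333),   # candidate predictors (0/1 indicators)
    "haematology":   rng.binomial(1, 0.04, 333),
    "age_70_79":     rng.binomial(1, 0.10, 333),
})

fit = smf.logit("bacteraemia ~ liver_disease + haematology + age_70_79", data=df).fit(disp=0)
aor = np.exp(fit.params).rename("aOR")                           # odds-ratio scale
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1))
```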
Antimicrobial susceptibility

We identified 548 Campylobacter-associated admissions with a primary isolation of Campylobacter spp. in which subsequent antimicrobial susceptibility testing was performed. For 526 (96%) of these cases, data were available for three standard antimicrobials: ciprofloxacin, nalidixic acid and erythromycin. Of the remainder, nalidixic acid data were unavailable for 21 isolates and erythromycin susceptibility was unavailable for a single isolate. During the study period, 13% (70/548) of primary isolates exhibited non-susceptibility to at least one standard antimicrobial. The proportion of non-susceptible isolates rose from ≤ 5.0% in 2004/05 to > 20.0% in 2012/13, a significant increase (χ2 = 6.12, p = 0.01) (Fig. 4). Among the individual antimicrobials, non-susceptibility was reported for 9% (49/527) of isolates tested against nalidixic acid, 7% (40/548) against ciprofloxacin and 4% (21/547) against erythromycin.

Fig. 4 Non-susceptibility to standard antimicrobials from Campylobacter isolates obtained during Campylobacter-associated hospitalisations, 2004 to 2013.

Figure 4 shows that lower rates of erythromycin non-susceptibility were observed, while significant increases in both ciprofloxacin (χ2 = 16.51, p < 0.001) and nalidixic acid (χ2 = 10.85, p = 0.001) non-susceptibility occurred during the study period. Factors associated with antimicrobial non-susceptibility among Campylobacter-associated hospitalisations are shown in Tables 4 and 5.
Table 4 Factors associated with ciprofloxacin non-susceptible isolates among Campylobacter-associated hospitalisations (n = 411)
Predictor variable | Adjusted odds ratio (aOR) | 95% confidence interval | p-value
Bloody diarrhoea | 0.30 | 0.09–1.03 | 0.06
Recent overseas travel | 15.53 | 2.86–84.18 | 0.001
Previous Campylobacter-associated hospitalisation | 12.87 | 1.72–96.12 | 0.01
Hosmer and Lemeshow goodness of fit: χ2 = 0.03, p = 0.86.

Table 5 Factors associated with non-susceptibility to ciprofloxacin, nalidixic acid or erythromycin among a cohort of Campylobacter-associated hospitalisations (n = 548)
Predictor variable | Adjusted odds ratio (aOR) | 95% confidence interval | p-value
Recent overseas travel | 11.80 | 3.18–43.83 | < 0.001
Previous Campylobacter-associated hospitalisation | 17.09 | 2.65–110.07 | < 0.01
Tachycardia | 0.48 | 0.26–0.89 | 0.02
Hosmer and Lemeshow goodness of fit: χ2 = 0.01, p = 0.92.
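The year-on-year rise in non-susceptibility reported above is a test for trend across annual proportions. A generic Cochran-Armitage-style trend test is sketched below; the year-by-year split of the 70/548 non-susceptible isolates is invented, so the output will not reproduce the published χ2 = 6.12, p = 0.01.

```python
# Cochran-Armitage-style test for a linear trend in the annual proportion of
# non-susceptible isolates. Totals match the reported 70/548 overall, but the
# year-by-year split below is a hypothetical placeholder.
import numpy as np
from scipy.stats import norm

years = np.arange(2004, 2014, dtype=float)                     # scores t_i
nonsusceptible = np.array([1, 2, 3, 3, 5, 6, 8, 10, 13, 19])   # hypothetical numerators (sum 70)
tested = np.array([40, 45, 50, 55, 55, 60, 60, 60, 62, 61])    # hypothetical denominators (sum 548)

p_bar = nonsusceptible.sum() / tested.sum()
stat = np.sum(years * (nonsusceptible - tested * p_bar))
var = p_bar * (1 - p_bar) * (np.sum(tested * years**2) - np.sum(tested * years) ** 2 / tested.sum())
z = stat / np.sqrt(var)
print(f"trend z = {z:.2f}, chi2 = {z**2:.2f}, two-sided p = {2 * norm.sf(abs(z)):.4f}")
```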
Antibiotic treatment

Antimicrobial treatment was provided for 32% (219/685) of Campylobacter-associated hospitalisations. Those receiving treatment were significantly older (median 48.5 years, range 8 to 92 years) than admissions where antibiotics were not administered (median 34.8 years, range < 1 to 91 years; χ² = 26.92, p < 0.001). No sex-based differences were observed. Treatment associated with bacteraemia is described in Table 1.

Second generation fluoroquinolones were used in 79% (172/219) of treated admissions. Ciprofloxacin was administered to 82.0% (141/172), with the remainder receiving norfloxacin. Those receiving ciprofloxacin were younger (median 47.2 years, range 12 to 90 years) than those receiving norfloxacin (median 64.3 years, range 19 to 92 years), but this difference was not significant. For admissions treated with ciprofloxacin, 91.4% (129/141) received treatment orally, with a median total daily dose of 1000 mg. For those receiving parenteral ciprofloxacin (n = 12), the median total daily dose was 800 mg. The median total daily dose for oral norfloxacin (n = 31) was 800 mg.

Macrolides (azithromycin or erythromycin) were administered to 21.0% (46/219) of admissions, with 63.0% (29/46) receiving azithromycin. No age or sex differences were observed between those receiving azithromycin versus erythromycin. For admissions receiving azithromycin, 72.4% (21/29) received treatment orally, with a median total daily dose of 500 mg for both oral and IV azithromycin. Oral administration was provided for 94.1% (16/17) of admissions receiving erythromycin, with a median total daily dose of 2000 mg.
One patient received an initial total daily dose of 3200 mg IV erythromycin, which was subsequently reduced to a total daily dose of 1600 mg. For two admissions (involving the same patient), a tetracycline (doxycycline) was used to treat a C. lari bacteraemia and enterocolitis.

A significant increase in the proportion of Campylobacter-associated hospitalisations administered antimicrobials was observed over time (χ² = 4.37, p = 0.04), rising from 27% (14/52) of admissions in 2004 to 38% (26/68) in 2013. No difference in the proportion of admissions treated with either fluoroquinolones or macrolides was observed. Among admissions receiving macrolides, a significant increase in the use of azithromycin over erythromycin was observed (χ² = 16.31, p < 0.001), most notably from 2011 onwards. No changes over time in the proportion of admissions treated with ciprofloxacin versus norfloxacin were observed. Factors associated with administration of antibiotics during Campylobacter-associated hospitalisations are shown in Table 6.

Table 6 Factors associated with antibiotic administration among a cohort of Campylobacter-associated hospitalisations (n = 607)

Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence Interval | p-value
Age 0–9 years | 0.07 | 0.01–0.57 | 0.01
Age 10–19 years | 0.44 | 0.20–0.97 | 0.04
Age 40–49 years | 2.34 | 1.14–4.79 | 0.02
Emergency Unit admission | 0.06 | 0.02–0.17 | < 0.001
Gastroenterology admission | 3.75 | 1.95–7.20 | < 0.001
Gen. Medicine admission | 2.02 | 1.22–3.35 | < 0.01
Infectious Diseases admission | 2.58 | 1.03–6.44 | 0.04
Vomiting | 1.70 | 1.11–2.61 | 0.02
Electrolyte imbalance | 1.73 | 1.12–2.67 | 0.01
Blood specimen for culture | 2.76 | 1.79–4.26 | < 0.001

Hosmer and Lemeshow goodness of fit χ² = 9.23, p = 0.32
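Each of the regression models above is reported with a Hosmer and Lemeshow goodness-of-fit statistic. The study's analyses were run in Stata, so the following is only an illustrative Python sketch of the standard decile-of-risk form of the test, assuming a fitted model that can produce predicted probabilities (the variable names continue the hypothetical example used earlier).

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Decile-of-risk Hosmer-Lemeshow statistic: compare observed and
    expected event counts across groups of predicted probability."""
    d = pd.DataFrame({"y": np.asarray(y), "p": np.asarray(p)})
    d["g"] = pd.qcut(d["p"], groups, labels=False, duplicates="drop")
    grouped = d.groupby("g")
    obs = grouped["y"].sum()      # observed events per risk group
    exp = grouped["p"].sum()      # expected events per risk group
    n = grouped["y"].count()      # group sizes
    stat = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    dof = obs.shape[0] - 2
    return stat, chi2.sf(stat, dof)

# Usage with the fitted model from the earlier sketch (names hypothetical):
# hl_stat, hl_p = hosmer_lemeshow(df["cipro_ns"], model.predict(df))
```

A small statistic with a non-significant p-value, as reported beneath Tables 2 to 6, indicates no evidence of poor fit.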
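Several results in this section are year-on-year trend tests, for example the proportion of treated admissions rising from 27% in 2004 to 38% in 2013 (χ² = 4.37). The exact Stata procedure is not stated in the text; the sketch below shows one common approach, a Cochran-Armitage-style chi-squared test for linear trend in annual proportions. Only the 2004 and 2013 counts are taken from the text; the intermediate yearly counts are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import chi2

def chi2_trend(events, totals, scores=None):
    """Chi-squared test for linear trend in proportions across ordered groups."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    scores = np.arange(len(events)) if scores is None else np.asarray(scores, float)
    N, R = totals.sum(), events.sum()
    p_bar = R / N
    t = events @ scores - R * (totals @ scores) / N
    var = p_bar * (1 - p_bar) * ((totals * scores**2).sum() - (totals @ scores)**2 / N)
    stat = t**2 / var
    return stat, chi2.sf(stat, df=1)

# Treated admissions per study year; only 14/52 (2004) and 26/68 (2013) are
# reported in the text, the other counts are hypothetical placeholders.
treated = [14, 16, 18, 20, 22, 21, 24, 25, 26, 26]
admitted = [52, 60, 63, 66, 70, 68, 72, 75, 70, 68]
stat, p = chi2_trend(treated, admitted)
print(f"chi2 trend = {stat:.2f}, p = {p:.3f}")
```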
Conclusions
Campylobacter infections cause a substantial disease burden, as reflected by the high number of hospitalisations and the high incidence of bacteraemia in our study. While a spectrum of illness is seen among hospitalised cases, many exhibit signs suggestive of systemic disease. Furthermore, both the proportion of cases receiving antibiotic treatment and the proportion with isolates non-susceptible to standard antimicrobials increased over time. Given the rising incidence of Campylobacter infections, particularly among older patients, understanding the hospitalisation burden becomes increasingly important. This study provides evidence on the clinical factors that influence the management of hospitalised cases in high-income settings.
[ "Background", "Methods", "Background and data", "Statistical analysis", "Campylobacter bacteraemia", "Antimicrobial susceptibility", "Antibiotic treatment", "Factors associated with a positive blood stream isolate (bacteraemia)", "Factors associated with non-susceptibility to antimicrobials", "Factors associated with antibiotic treatment during admission", "Limitations" ]
[ "Campylobacter spp. are internationally significant as a cause of infectious diarrhoeal disease [1]. Most cases of infectious diarrhoea, including those caused by Campylobacter spp., are self-limiting, with management focused on maintenance of hydration via fluid repletion [2]. However, for persons hospitalised with Campylobacter infection, clinical thresholds for exclusion of bacteraemia and the consideration of antimicrobial therapy differ due to symptom severity, risk of complications or the exacerbation of underlying co-morbidities [2].\nWhile bacteraemia is an uncommon complication of campylobacteriosis, testing and diagnosis typically occurs when a person is hospitalised [3]. Within a hospital setting, clinical tolerances and the challenge of predicting bacteraemia among febrile admissions with an enteric focus are important considerations [4]. Further, the likelihood of clinicians commencing antibiotic therapy may also increase among hospitalised cases, highlighting the importance of judicial prescribing and understanding of isolate susceptibility patterns to ensure viable treatment options remain [5].\nWe describe rates of bacteraemia, antimicrobial susceptibility and treatment among a cohort of Campylobacter-associated hospitalisations. We also examined clinical and host factors associated with the diagnosis of blood stream infections (BSI), isolate non-susceptibility and antibiotic treatment of hospitalised cases.", "Background and data This study forms part of a retrospective review of Campylobacter-associated hospital admissions in the Australian Capital Territory (ACT) between January 2004 and December 2013. A Campylobacter-associated hospital admission was defined as any episode of care for an admitted patient that was clinically and temporally linked to a Campylobacter isolate derived from the same patient. Full details of the setting, data sources, data collection and linkage are described elsewhere [6]. In summary, they include hospital generated admission data, hospital microbiology data and clinical data obtained via individual medical record review. Hospital admission details were obtained via a data extract that included patient demographics, admission and discharge dates and admission unit details. Record inclusion was determined by an inpatient admission having been assigned an International Classification of Diseases (ICD) diagnosis code ‘A045—Campylobacter enteritis’. We also obtained a separate microbiology data extract detailing inpatient isolations of Campylobacter spp., including specimen type, collection dates and antimicrobial susceptibility data. All laboratory diagnoses of Campylobacter infection were made via culture; speciation was not routinely performed on hospital isolates prior to 2013. Susceptibilities to ciprofloxacin, nalidixic acid and erythromycin were assessed using disk diffusion and according to Clinical and Laboratory Standards Institute breakpoints [7]. Intermediate and resistant isolates were grouped as non-susceptible to aid analysis. Due to the potential for multiple specimens being collected during an individual admission (or across multiple related admissions), we reported antimicrobial susceptibility using the earliest available data. Microbiology results were then linked to both admissions with and without ICD code ‘A045’. 
We undertook a review of medical records to collect additional details on illness presentation, associated complications, patient co-morbidities (as per the Charlson Co-morbidity Index) [8] and antibiotic treatment and prescribing details (e.g., antibiotic type, dosage, frequency and administration route). For antibiotic treatment, we defined a total daily dose as the dosage in milligrams (mg) multiplied by the frequency of administration per day, with the product expressed in milligrams per day.
Statistical analysis We calculated bacteraemia incidence per 100,000 persons using the ACT's mid-year estimated resident population for each year between 2004 and 2013 [9]. We used non-parametric methods, including median tests, to analyse non-normally distributed variables such as age. We used Pearson's chi-squared test and Fisher's exact tests to assess simple statistical associations between key outcome and independent variables of interest, while trends in proportions were assessed using chi-squared tests. 
Preliminary analyses included the estimation of relative risks (RRs), 95% confidence intervals (CIs) and p-values to assess potential predictors of blood culturing, antimicrobial susceptibility and antibiotic treatment. Variables examined included age, sex, country of birth, previous Campylobacter-associated hospitalisation, overseas travel history, signs and symptoms, admission unit, comorbidities and the presence of key signs of infection. Using a stepwise additive approach, with the most significant variables being added first, we constructed separate logistic regression models to identify predictors associated with (i) collection of blood specimens for culture, (ii) positive blood isolates, (iii) non-susceptibility to ciprofloxacin, (iv) any antimicrobial non-susceptibility and (v) antibiotic treatment during hospitalisation. Erythromycin non-susceptibility was not assessed due to the limited number of observations. Clinical relevance and statistical evidence were used to assist with variable selection for multivariable analysis. The significance level for removal from the models was set at p ≤ 0.05. We used likelihood-ratio tests to assess the explanatory power of the models, with the variable expressing the largest p-value being removed. Final results were expressed as adjusted odds ratios (aOR), with accompanying 95% confidence intervals and p-values. We used Hosmer-Lemeshow tests to assess the goodness-of-fit for each model. 
All statistical analyses were performed using Stata v.14 (StataCorp, USA).", "This study forms part of a retrospective review of Campylobacter-associated hospital admissions in the Australian Capital Territory (ACT) between January 2004 and December 2013. A Campylobacter-associated hospital admission was defined as any episode of care for an admitted patient that was clinically and temporally linked to a Campylobacter isolate derived from the same patient. Full details of the setting, data sources, data collection and linkage are described elsewhere [6]. In summary, they include hospital generated admission data, hospital microbiology data and clinical data obtained via individual medical record review. Hospital admission details were obtained via a data extract that included patient demographics, admission and discharge dates and admission unit details. Record inclusion was determined by an inpatient admission having been assigned an International Classification of Diseases (ICD) diagnosis code ‘A045—Campylobacter enteritis’. We also obtained a separate microbiology data extract detailing inpatient isolations of Campylobacter spp., including specimen type, collection dates and antimicrobial susceptibility data. All laboratory diagnoses of Campylobacter infection were made via culture; speciation was not routinely performed on hospital isolates prior to 2013. Susceptibilities to ciprofloxacin, nalidixic acid and erythromycin were assessed using disk diffusion and according to Clinical and Laboratory Standards Institute breakpoints [7]. Intermediate and resistant isolates were grouped as non-susceptible to aid analysis. Due to the potential for multiple specimens being collected during an individual admission (or across multiple related admissions), we reported antimicrobial susceptibility using the earliest available data. Microbiology results were then linked to both admissions with and without ICD code ‘A045’. We undertook a review of medical records to collect additional details on illness presentation, associated complications, patient co-morbidities (as per the Charlson Co-morbidity Index) [8] and antibiotic treatment and prescribing details (e.g., antibiotic type, dosage, frequency and administration route). For antibiotic treatment, we defined a total daily dose as the dosage in milligrams (mg) multiplied by the frequency of administration per day, with the product expressed in milligrams per day.", "We calculated bacteraemia incidence per 100,000 persons using the ACT’s mid-year estimated resident population for each year between 2004 and 2013 [9]. We used non-parametric methods, including median tests, to analyse non-normally distributed variables such as age. We used Pearson’s chi-squared test and Fisher’s exact tests to assess simple statistical associations between key outcome and independent variables of interest, while trends in proportions were assessed using chi-squared tests. Preliminary analyses included the estimation of relative risks (RRs), 95% confidence intervals (CIs) and p-values to assess potential predictors of blood culturing, antimicrobial susceptibility and antibiotic treatment. Variables examined included age, sex, country of birth, previous Campylobacter-associated hospitalisation, overseas travel history, signs and symptoms, admission unit, comorbidities and the presence of key signs of infection. 
Using a stepwise additive approach, with the most significant variables being added first, we constructed separate logistic regression models to identify predictors associated with (i) collection of blood specimens for culture, (ii) positive blood isolates, (iii) non-susceptibility to ciprofloxacin, (iv) any antimicrobial non-susceptibility and (v) antibiotic treatment during hospitalisation. Erythromycin non-susceptibility was not assessed due to the limited number of observations. Clinical relevance and statistical evidence were used to assist with variable selection for multivariable analysis. The significance level for removal from the models was set at p ≤ 0.05. We used likelihood-ratio tests to assess the explanatory power of the models, with the variable expressing the largest p-value being removed. Final results were expressed as adjusted odds ratios (aOR), with accompanying 95% confidence intervals and p-values. We used Hosmer-Lemeshow tests to assess the goodness-of-fit for each model. All statistical analyses were performed using Stata v.14 (StataCorp, USA).", "Out of 685 admissions, 333 (49%) had blood drawn for culture and 25 (7.5%) of these tested positive (Fig. 1). Speciation was performed on 21 (84%) of these case isolates with 15 (71.4%) cases of C. jejuni, 5 (23.8%) cases of C. coli and a single (4.0%) case of C. lari bacteraemia. Amongst the 25 admissions with bacteraemia, 15 were male (60%) and 10 were female (40%), while the median age was 59.5 years (range 12 to 90 years). Amongst 308 negative cases there were 177 males (57.5%) and 131 females (42.5%), with a median age of 38.8 years (age range < 1.0 to 92 years). 
This age difference was significant (median test χ² = 5.30, p = 0.02), but no evidence of a difference in sex composition was observed (χ² = 0.06, p = 0.81). The age and sex of admissions in relation to blood culture status are shown in Fig. 2. We also compared those who had blood drawn for culture (whether positive or negative) with those who did not, to assess demographic differences. We saw no evidence of a difference in median age (median test χ² = 0.18, p = 0.68) but did observe that males were more likely to have blood drawn (χ² = 11.17, p = 0.001). 
We observed no difference in the proportion of admissions with documented comorbidities who had blood specimens collected for culture compared with admissions without comorbidities who had blood taken for culture (χ² = 1.42, p = 0.23). Bacteraemia generally occurred in the context of antecedent diarrhoeal illness, although two admissions involved primary bacteraemia (i.e. without diarrhoea) in patients with haematological malignancies. Comorbidities, while common, were not a characteristic feature of cases hospitalised with bacteraemia, as shown in Table 1. No deaths were identified among cases with bacteraemia.

Fig. 1 Flow diagram for blood culture collection and subsequent Campylobacter spp. bacteraemia among a cohort of Campylobacter-associated hospitalisations

Fig. 2 Jitter plot of age and sex by blood culture status among Campylobacter-associated hospitalisations

Table 1 Characteristics of cases hospitalised with Campylobacter bacteraemia, 2004 to 2013

Year (case) | Age range/sex | Species | Bacteraemia—source | Antimicrobial susceptibility | Antimicrobial treatment | Significant medical history and risk factors
2004 | 80+ M | C. jejuni | Enteric—secondary | Fully sensitive | Nil | Age, nil other significant
2005 (a) | 50–59 F | Campylobacter sp. | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Acute myeloid leukaemia
2005 (b) | 60–69 F | Campylobacter sp. | Enteric—secondary | Fully sensitive | Nil | Nil significant
2005 (c) | 40–49 F | C. jejuni | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | IV drug use, PUD on omeprazole
2006 (a) | 40–49 M | C. jejuni | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Untreated Stage III HIV
2008 (a) | 20–29 M | Campylobacter sp. | Enteric—secondary | Fully sensitive | IV azithromycin 500 mg qd | Nil significant
2008 (b) | 60–69 M | C. coli | Primary bacteraemia | Ciprofloxacin-resistant, erythromycin-resistant | Nil | Lymphocytic lymphoma
2009 | 30–39 F | C. jejuni | Enteric—secondary | Fully sensitive | PO ciprofloxacin 250 mg bd | History of renal transplant secondary to IgA nephropathy
2010 (a) | 60–69 M | Campylobacter sp. | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Bowel carcinoma, current chemotherapy
2010 (b) | 30–39 M | C. coli | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Alcoholic liver disease with portal hypotension and bleeding varices
2010 (c) | 70–79 F | C. jejuni | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (d) | 10–19 F | C. jejuni | Enteric—secondary | Fully sensitive | PO azithromycin 500 mg qd (upon discharge) | Nil significant
2010 (e) | 70–79 F | C. coli | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (f) | 40–49 M | C. jejuni | Enteric—secondary | Fully sensitive | PO norfloxacin 400 mg bd | Irritable bowel syndrome
2011 (a) | 60–69 M | C. lari | Enteric—secondary | Ciprofloxacin-resistant | PO doxycycline 100 mg bd | Alcoholic liver disease with portal hypotension, recent intracerebral bleed
2011 (b) | 80+ M | C. jejuni | Enteric—secondary | Fully sensitive | Nil prescribed | Age, nil other significant
2011 (c) | 70–79 M | C. coli | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Asplenic
2012 (a) | 40–49 M | C. jejuni | Enteric—secondary | Fully sensitive | IV ciprofloxacin 500 mg bd | Multiple sclerosis, current chemotherapy pre-stem cell transplantation, IDDM
2012 (b) | 30–39 F | C. jejuni | Enteric—secondary | Fully sensitive | Nil | Pregnant 33/40, IDDM
2012 (c) | 70–79 M | C. jejuni | Enteric—secondary | Ciprofloxacin-resistant, nalidixic acid-resistant | Nil | Diabetic neuropathy, chronic renal failure
2012 (d) | 60–69 F | C. coli | Enteric—secondary | Ciprofloxacin-resistant, nalidixic acid-resistant | Nil | Nil significant
2013 (a) | 20–29 M | C. jejuni | Enteric—secondary | Fully sensitive | Nil | Nil significant
2013 (b) | 20–29 M | C. jejuni | Primary bacteraemia | Fully sensitive | PO ciprofloxacin 750 mg bd (upon discharge) | B cell leukaemia, AVN (on steroids), SIADH
2013 (c) | 50–59 M | C. jejuni | Enteric—secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Liver failure with cirrhosis, portal hypotension secondary to Hepatitis C, T2DM, hypothyroidism. Awaiting transplant.
2013 (d) | 70–79 M | C. jejuni | Enteric—secondary | Fully sensitive | Nil | Age, nil other significant
PO, per oral; IV, intravenous; PUD, peptic ulcer disease; HIV, human immunodeficiency virus; T2DM, type 2 diabetes mellitus; IDDM, insulin dependent diabetes mellitus; AVN, acute vascular necrosis; SIADH, syndrome of inappropriate antidiuretic hormone

During the period 2004 to 2013, the mean incidence of Campylobacter bacteraemia in the host population was 0.71 cases per 100,000 population per year (95% CI 0.48–1.05 per 100,000 population) (Fig. 3). We did not observe temporal trends in the proportion of Campylobacter-associated hospitalisations undergoing blood collection for culture or in the proportion of positive blood cultures among Campylobacter-associated hospitalisations.

Fig. 3 Incidence of Campylobacter spp. bacteraemia among ACT residents, 2004 to 2013
Factors associated with the collection of blood samples for culture and subsequent isolation of Campylobacter spp. are shown in Tables 2 and 3.

Table 2 Factors associated with collection of blood for culture among a cohort of Campylobacter-associated hospitalisations (n = 663)

Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence interval | p-value
Infectious Diseases Unit admission | 8.00 | 1.58–40.61 | 0.01
Febrile during admission (≥ 38 °C) | 21.30 | 13.76–32.96 | < 0.001
Tachycardia | 1.75 | 1.13–2.72 | 0.01
Moderate to severe renal disease | 2.87 | 1.26–6.55 | 0.01
10–19 years age group | 0.36 | 0.19–0.71 | < 0.01

22 observations contained missing data. Hosmer and Lemeshow goodness of fit χ² = 2.70, p = 0.61
Table 3 Factors associated with blood stream isolation of Campylobacter spp. among Campylobacter-associated hospitalisations (n = 333)

Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence interval | p-value
Moderate to severe liver disease | 48.89 | 7.03–340.22 | < 0.001
Haematology Unit admission | 14.67 | 2.99–72.07 | 0.001
Age group 70–79 years | 4.93 | 1.57–15.49 | < 0.01
Admission during summer months | 2.93 | 1.14–7.57 | 0.03
Indigenous Australian | 10.87 | 1.00–117.89 | 0.05

Hosmer and Lemeshow goodness of fit χ² = 0.71, p = 0.70", "We identified 548 Campylobacter-associated admissions with a primary isolation of Campylobacter spp. and where subsequent antimicrobial susceptibility testing was performed. For 526 (96%) of these cases there was available data for three standard antimicrobials, ciprofloxacin, nalidixic acid and erythromycin. Of the remainder, nalidixic acid data was unavailable for 21 isolates, with erythromycin susceptibility unavailable for a single isolate.
During the study period, 13% (70/548) of primary isolates exhibited non-susceptibility to at least one standard antimicrobial. 
The proportion of non-susceptible isolates ranged from ≤ 5.0% in 2004/05 to > 20.0% in 2012/13, with this increase being significant (χ² = 6.12, p = 0.01) (Fig. 4). Among the individual antimicrobials, nalidixic acid non-susceptibility was reported for 9% (49/527) of tested isolates, 7% (40/548) for ciprofloxacin and 4% (21/547) for erythromycin.

Fig. 4 Non-susceptibility to standard antimicrobials from Campylobacter isolates obtained during Campylobacter-associated hospitalisations, 2004 to 2013

Figure 4 shows that lower rates of erythromycin non-susceptibility were observed while significant increases in both ciprofloxacin (χ² = 16.51, p < 0.001) and nalidixic acid non-susceptibility occurred during the study period (χ² = 10.85, p = 0.001). Factors associated with antimicrobial non-susceptibility among Campylobacter-associated hospitalisations are shown in Tables 4 and 5.

Table 4 Factors associated with ciprofloxacin non-susceptible isolates among Campylobacter-associated hospitalisations (n = 411)

Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence interval | p-value
Bloody diarrhoea | 0.30 | 0.09–1.03 | 0.06
Recent overseas travel | 15.53 | 2.86–84.18 | 0.001
Previous Campylobacter-associated hospitalisation | 12.87 | 1.72–96.12 | 0.01

Hosmer and Lemeshow goodness of fit χ² = 0.03, p = 0.86

Table 5 Factors associated with non-susceptibility to ciprofloxacin, nalidixic acid or erythromycin among a cohort of Campylobacter-associated hospitalisations (n = 548)

Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence Interval | p-value
Recent overseas travel | 11.80 | 3.18–43.83 | < 0.001
Previous Campylobacter-associated hospitalisation | 17.09 | 2.65–110.07 | < 0.01
Tachycardia | 0.48 | 0.26–0.89 | 0.02

Hosmer and Lemeshow goodness of fit χ² = 0.01, p = 0.92", "Antimicrobial treatment was provided for 32% (219/685) of Campylobacter-associated hospitalisations. Those receiving treatment were observed to be significantly older (median 48.5 years, range 8 years to 92 years) than admissions where antibiotics were not administered (median 34.8 years, range < 1 years to 91 years, χ² = 26.92, p < 0.001). No sex-based differences were observed. Treatment associated with bacteraemia is described in Table 1.
Second generation fluoroquinolones were used in 79% (172/219) of treated admissions. 
Ciprofloxacin was administered to 82.0% (141/172), with the remainder receiving norfloxacin. Those receiving ciprofloxacin were younger (median 47.2 years, minimum 12 years, maximum 90 years) compared with those receiving norfloxacin (median age 64.3 years, minimum 19 years to maximum 92 years), but this difference was not significant. For admissions treated with ciprofloxacin, 91.4% (129/141) received treatment orally, with a median total daily dose of 1000 mg. For those receiving parenteral ciprofloxacin (n = 12), the median total daily dose was 800 mg. The median total daily dosage for oral norfloxacin (n = 31) was 800 mg.
Macrolides (azithromycin or erythromycin) were administered to 21.0% (46/219) of admissions, with 63.0% (29/46) receiving azithromycin. No age or sex differences were observed between those receiving azithromycin versus erythromycin. For admissions receiving azithromycin, 72.4% (21/29) received treatment orally, with a median total daily dose for both oral and IV azithromycin of 500 mg. Oral administration was provided for 94.1% (16/17) of admissions receiving erythromycin, with a median total daily dose of 2000 mg. One patient received an initial total daily dose of 3200 mg IV erythromycin, prior to this being reduced to a total daily dose of 1600 mg. For two admissions (involving the same patient), a tetracycline (doxycycline) was used to treat a C. lari bacteraemia and enterocolitis.
A significant increase in the proportion of Campylobacter-associated hospitalisations being administered antimicrobials was observed over time (χ² = 4.37, p = 0.04), rising from 27% (14/52) in 2004 to 38% (26/68) of admissions in 2013. No difference in the proportion of admissions treated with either fluoroquinolones or macrolides was observed. Among admissions receiving macrolides a significant increase in the administration of azithromycin over erythromycin was observed (χ² = 16.31, p < 0.001), most notably from 2011 onwards. No changes over time in the proportion of admissions treated with ciprofloxacin versus norfloxacin were observed. Factors associated with administration of antibiotics during Campylobacter-associated hospitalisations are shown in Table 6.

Table 6 Factors associated with antibiotic administration among a cohort of Campylobacter-associated hospitalisations (n = 607)

Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence Interval | p-value
Age 0–9 years | 0.07 | 0.01–0.57 | 0.01
Age 10–19 years | 0.44 | 0.20–0.97 | 0.04
Age 40–49 years | 2.34 | 1.14–4.79 | 0.02
Emergency Unit admission | 0.06 | 0.02–0.17 | < 0.001
Gastroenterology admission | 3.75 | 1.95–7.20 | < 0.001
Gen. Medicine admission | 2.02 | 1.22–3.35 | < 0.01
Infectious Diseases admission | 2.58 | 1.03–6.44 | 0.04
Vomiting | 1.70 | 1.11–2.61 | 0.02
Electrolyte imbalance | 1.73 | 1.12–2.67 | 0.01
Blood specimen for culture | 2.76 | 1.79–4.26 | < 0.001

Hosmer and Lemeshow goodness of fit χ² = 9.23, p = 0.32", "The statistical association we observed between cases with positive blood cultures and the presence of underlying liver disease and haematological malignancy is in keeping with hospital-based studies showing higher proportions of these conditions among cases with Campylobacter bacteraemia [11, 21, 22]. In addition, advanced age was also statistically associated with detection of bacteraemia, a characteristic seen in larger population-based studies of campylobacteriosis [3]. Several studies also report seasonality with Campylobacter bacteraemia [23, 24]. Our results show hospitalisation during the southern hemisphere summer to be associated with bacteraemia, a finding that aligns with the seasonality of Campylobacter enteritis in Australia [14]. A further association with blood culture positivity was Indigenous status. This result was, however, derived from a small number of observations, with Indigenous Australians comprising 1% of Campylobacter-associated hospitalisations in the ACT [6].", "We observed statistically significant evidence of an increase in the proportion of isolates exhibiting non-susceptibility to standard antimicrobials, with this increase driven primarily by non-susceptibility to fluoroquinolones (including nalidixic acid). In keeping with trends in comparable settings, only low rates of non-susceptibility to macrolides were observed [25, 26]. Internationally, fluoroquinolone resistance among Campylobacter spp. has become a major public health problem [5]. Increases in the proportion of clinical isolates demonstrating ciprofloxacin resistance have been observed in the United Kingdom (UK) and United States (US) [25, 27], while in the European Union (EU) more than half (54.6%) of human-associated C. jejuni and two-thirds (66.6%) of C. coli isolates were resistant to ciprofloxacin in 2013 [28].
Australia has previously reported low rates of fluoroquinolone non-susceptibility among clinical Campylobacter isolates (around 2% in 2006) [29]. This has been credited to a national pharmaceutical subsidy scheme that restricted human quinolone use and to regulation forbidding quinolone use in food-producing animals [30]. More recent studies reveal this situation has changed markedly, with current rates of ciprofloxacin resistance in clinically-derived Campylobacter isolates now ranging between 13 and 20% [31, 32].
Overseas travel is a well-established risk factor for the acquisition of ciprofloxacin-resistant Campylobacter infections [26, 33]. In our study, cases with ciprofloxacin resistance all reported travel to India and South-East Asia, destinations associated with high rates of antimicrobial resistance among enteric pathogens (including Campylobacter spp.) [34, 35]. Nevertheless, the majority of ciprofloxacin non-susceptible isolates in our study had no recent overseas travel identified, meaning factors associated with domestic acquisition of ciprofloxacin resistance require greater consideration. 
US data similarly shows increases in domestically acquired ciprofloxacin resistance [27].\nWe observed that ciprofloxacin non-susceptibility was also associated with previous hospitalisation with campylobacteriosis. There is a paucity of population-level data on recurrent hospitalisations involving non-susceptible Campylobacter isolates. However, rates of recurrent campylobacteriosis in community settings have been reported to be as high as 248 episodes per 100,000 cases per year in the five years following an initial infection [36]. Explanations for our finding could be either host or pathogen-related, including higher rates of humoral immunodeficiency in patients hospitalised with recurrent campylobacteriosis [11, 37] or because of de novo mutations or increases in resistant organisms already present at subclinical levels.\nWhile there has been debate around isolate non-susceptibility and disease severity, reanalysis of the issue appears to show no substantial clinical differences between resistant and susceptible Campylobacter isolates [38]. Consequently, our finding of reduced odds for bloody diarrhoea and tachycardia among admissions with ciprofloxacin non-susceptibility and any antimicrobial non-susceptibility respectively, most likely represents the so-called “healthy traveller” effect [39].", "During the study period we observed statistically significant evidence of an increase in the proportion of Campylobacter-associated hospitalisations receiving antibiotic treatment. Antimicrobial therapy for Campylobacter enterocolitis is not routinely advised but may be recommended for patients with or at risk of severe disease, including high volume or bloody diarrhoea, high fever, symptom duration greater than one week, pregnancy or immunocompromised status [2, 10, 40]. Given that hospitalisation can be viewed as a marker of disease severity [41], the rates of treatment within our study population might be expected to differ from those among non-hospitalised campylobacteriosis cases.\nOther research on campylobacteriosis in the ACT has found no concomitant increase in hospitalisations during the same period as the current study [6]. Antibiotic treatment rates may be a reflection of local treatment practices rather than a response to disease severity, with data showing rates of appropriate prescribing and compliance with antibiotic guidelines in ACT hospitals to be the lowest in Australia during the study period [42].\nOne-third of Campylobacter-associated hospitalisations received antibiotics, either empirically or as targeted therapy for confirmed campylobacteriosis. Second generation fluoroquinolones—mainly per oral ciprofloxacin—comprised 80% of treatment, with macrolides the remainder. Australia has been successful in efforts to limit use of quinolones in humans and to prohibit their use in food-producing animals [30]. This has preserved their clinical use in Australia, with ciprofloxacin and norfloxacin remaining as empirical treatment options for acute infectious diarrhoea, while being recommended alongside azithromycin for treatment of domestically acquired Campylobacter enteritis [40]. Conversely, the high rates of quinolone resistance experienced in the UK, EU and US has seen macrolide treatment recommended or a greater emphasis placed on travel history and knowledge of local resistance patterns to guide empirical prescribing [43, 44].\nWithin our hospitalised study population, the strongest predictor of antibiotic treatment was collection of a blood specimen for culture. 
Both antibiotic prescription and ordering of blood cultures are clinical decisions, suggesting that the underlying clinical context observed by treating clinicians impacts both practices inducing them to be positively associated. Admission under specific clinical units, including Gastroenterology, Infectious Diseases and General Medicine were also associated with increased odds of receiving antimicrobial therapy. Hospitalisation implies a higher level of morbidity, potentially explaining the higher likelihood of antibiotic administration. Variation in treatment focus could also be expected between subspecialties, especially when underlying comorbidities are exacerbated. Such decision making may be further influenced by routine clinical behaviour, unease regarding the consequences of BSI or by the acceptability of not obtaining blood cultures among particular specialities [4].\nAge was also found to be an important factor in treatment, with paediatric admissions being less likely to receive antibiotics. This finding reflects that fluoroquinolones—the most commonly prescribed antibiotic class— are not recommended for use in children due to safety concerns [45]. We also observed that patients aged 40–49 years were more likely to be prescribed antibiotics. Reasons for this are less certain, but around 20% of admissions to high prescribing units such as Gastroenterology and Infectious Diseases were in this age range.\nVomiting and electrolyte imbalance were also associated with provision of antibiotic therapy. Vomiting is a less frequently reported symptom of campylobacteriosis but serves as an indicator for disease severity [46] and a predictor of bacteraemia [47]. While we did not assess the severity of dehydration, it is likely that a population such as ours included a higher proportion of cases with more pronounced symptomatology and severity of symptoms compared with non-hospitalised cases.", "There are a number of potential limitations with our study. Firstly, our rates of bacteraemia may underestimate the true incidence, as we identified cases using only public hospital laboratory data. Other cases of Campylobacter bacteraemia may have been diagnosed and managed in the community or the private hospital sector, although the clinical significance of these is less certain. A second limitation relates to the precision of our model estimates, with the small numbers of observations for some outcomes making detection of clinically meaningful associations challenging. Despite this, the observed associations still align plausibly with Campylobacter’s epidemiology. A third limitation relates to the generalisation of our findings. Our study population was hospital-based and drawn from a single Australian territory, with regional and international differences in the epidemiology of campylobacteriosis being observed in high income settings [14, 22, 48]. Finally, antimicrobial susceptibility testing and bacterial speciation were not performed on all isolates, limiting exploration of species-specific features such as higher macrolide resistance rates among C. coli isolates [49]." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Background and data", "Statistical analysis", "Results", "Campylobacter bacteraemia", "Antimicrobial susceptibility", "Antibiotic treatment", "Discussion", "Factors associated with a positive blood stream isolate (bacteraemia)", "Factors associated with non-susceptibility to antimicrobials", "Factors associated with antibiotic treatment during admission", "Limitations", "Conclusions" ]
[ "Campylobacter spp. are internationally significant as a cause of infectious diarrhoeal disease [1]. Most cases of infectious diarrhoea, including those caused by Campylobacter spp., are self-limiting, with management focused on maintenance of hydration via fluid repletion [2]. However, for persons hospitalised with Campylobacter infection, clinical thresholds for exclusion of bacteraemia and the consideration of antimicrobial therapy differ due to symptom severity, risk of complications or the exacerbation of underlying co-morbidities [2].\nWhile bacteraemia is an uncommon complication of campylobacteriosis, testing and diagnosis typically occurs when a person is hospitalised [3]. Within a hospital setting, clinical tolerances and the challenge of predicting bacteraemia among febrile admissions with an enteric focus are important considerations [4]. Further, the likelihood of clinicians commencing antibiotic therapy may also increase among hospitalised cases, highlighting the importance of judicial prescribing and understanding of isolate susceptibility patterns to ensure viable treatment options remain [5].\nWe describe rates of bacteraemia, antimicrobial susceptibility and treatment among a cohort of Campylobacter-associated hospitalisations. We also examined clinical and host factors associated with the diagnosis of blood stream infections (BSI), isolate non-susceptibility and antibiotic treatment of hospitalised cases.", "Background and data This study forms part of a retrospective review of Campylobacter-associated hospital admissions in the Australian Capital Territory (ACT) between January 2004 and December 2013. A Campylobacter-associated hospital admission was defined as any episode of care for an admitted patient that was clinically and temporally linked to a Campylobacter isolate derived from the same patient. Full details of the setting, data sources, data collection and linkage are described elsewhere [6]. In summary, they include hospital generated admission data, hospital microbiology data and clinical data obtained via individual medical record review. Hospital admission details were obtained via a data extract that included patient demographics, admission and discharge dates and admission unit details. Record inclusion was determined by an inpatient admission having been assigned an International Classification of Diseases (ICD) diagnosis code ‘A045—Campylobacter enteritis’. We also obtained a separate microbiology data extract detailing inpatient isolations of Campylobacter spp., including specimen type, collection dates and antimicrobial susceptibility data. All laboratory diagnoses of Campylobacter infection were made via culture; speciation was not routinely performed on hospital isolates prior to 2013. Susceptibilities to ciprofloxacin, nalidixic acid and erythromycin were assessed using disk diffusion and according to Clinical and Laboratory Standards Institute breakpoints [7]. Intermediate and resistant isolates were grouped as non-susceptible to aid analysis. Due to the potential for multiple specimens being collected during an individual admission (or across multiple related admissions), we reported antimicrobial susceptibility using the earliest available data. Microbiology results were then linked to both admissions with and without ICD code ‘A045’. 
We undertook a review of medical records to collect additional details on illness presentation, associated complications, patient co-morbidities (as per the Charlson Co-morbidity Index) [8] and antibiotic treatment and prescribing details (e.g., antibiotic type, dosage, frequency and administration route). For antibiotic treatment, we defined a total daily dose as the dosage in milligrams (mg) multiplied by the frequency of administration per day, with the product expressed in milligrams per day.\nStatistical analysis We calculated bacteraemia incidence per 100,000 persons using the ACT’s mid-year estimated resident population for each year between 2004 and 2013 [9]. We used non-parametric methods, including median tests, to analyse non-normally distributed variables such as age. We used Pearson’s chi-squared test and Fisher’s exact tests to assess simple statistical associations between key outcome and independent variables of interest, while trends in proportions were assessed using chi-squared tests. Preliminary analyses included the estimation of relative risks (RRs), 95% confidence intervals (CIs) and p-values to assess potential predictors of blood culturing, antimicrobial susceptibility and antibiotic treatment. Variables examined included age, sex, country of birth, previous Campylobacter-associated hospitalisation, overseas travel history, signs and symptoms, admission unit, comorbidities and the presence of key signs of infection. Using a stepwise additive approach, with the most significant variables being added first, we constructed separate logistic regression models to identify predictors associated with (i) collection of blood specimens for culture, (ii) positive blood isolates, (iii) non-susceptibility to ciprofloxacin, (iv) any antimicrobial non-susceptibility and (v) antibiotic treatment during hospitalisation. Erythromycin non-susceptibility was not assessed due to the limited number of observations. Clinical relevance and statistical evidence were used to assist with variable selection for multivariable analysis. The significance level for removal from the models was set at p ≤ 0.05. We used likelihood-ratio tests to assess the explanatory power of the models, with the variable expressing the largest p-value being removed. Final results were expressed as adjusted odds ratios (aOR), with accompanying 95% confidence intervals and p-values. We used Hosmer-Lemeshow tests to assess the goodness-of-fit for each model. 
All statistical analyses were performed using Stata v.14 (StataCorp, USA).", "This study forms part of a retrospective review of Campylobacter-associated hospital admissions in the Australian Capital Territory (ACT) between January 2004 and December 2013. A Campylobacter-associated hospital admission was defined as any episode of care for an admitted patient that was clinically and temporally linked to a Campylobacter isolate derived from the same patient. Full details of the setting, data sources, data collection and linkage are described elsewhere [6]. In summary, they include hospital generated admission data, hospital microbiology data and clinical data obtained via individual medical record review. Hospital admission details were obtained via a data extract that included patient demographics, admission and discharge dates and admission unit details. Record inclusion was determined by an inpatient admission having been assigned an International Classification of Diseases (ICD) diagnosis code ‘A045—Campylobacter enteritis’. We also obtained a separate microbiology data extract detailing inpatient isolations of Campylobacter spp., including specimen type, collection dates and antimicrobial susceptibility data. All laboratory diagnoses of Campylobacter infection were made via culture; speciation was not routinely performed on hospital isolates prior to 2013. Susceptibilities to ciprofloxacin, nalidixic acid and erythromycin were assessed using disk diffusion and according to Clinical and Laboratory Standards Institute breakpoints [7]. Intermediate and resistant isolates were grouped as non-susceptible to aid analysis. Due to the potential for multiple specimens being collected during an individual admission (or across multiple related admissions), we reported antimicrobial susceptibility using the earliest available data. Microbiology results were then linked to both admissions with and without ICD code ‘A045’. We undertook a review of medical records to collect additional details on illness presentation, associated complications, patient co-morbidities (as per the Charlson Co-morbidity Index) [8] and antibiotic treatment and prescribing details (e.g., antibiotic type, dosage, frequency and administration route). For antibiotic treatment, we defined a total daily dose as the dosage in milligrams (mg) multiplied by the frequency of administration per day, with the product expressed in milligrams per day.", "We calculated bacteraemia incidence per 100,000 persons using the ACT’s mid-year estimated resident population for each year between 2004 and 2013 [9]. We used non-parametric methods, including median tests, to analyse non-normally distributed variables such as age. We used Pearson’s chi-squared test and Fisher’s exact tests to assess simple statistical associations between key outcome and independent variables of interest, while trends in proportions were assessed using chi-squared tests. Preliminary analyses included the estimation of relative risks (RRs), 95% confidence intervals (CIs) and p-values to assess potential predictors of blood culturing, antimicrobial susceptibility and antibiotic treatment. Variables examined included age, sex, country of birth, previous Campylobacter-associated hospitalisation, overseas travel history, signs and symptoms, admission unit, comorbidities and the presence of key signs of infection. 
Using a stepwise additive approach, with the most significant variables being added first, we constructed separate logistic regression models to identify predictors associated with (i) collection of blood specimens for culture, (ii) positive blood isolates, (iii) non-susceptibility to ciprofloxacin, (iv) any antimicrobial non-susceptibility and (v) antibiotic treatment during hospitalisation. Erythromycin non-susceptibility was not assessed due to the limited number of observations. Clinical relevance and statistical evidence were used to assist with variable selection for multivariable analysis. The significance level for removal from the models was set at p ≤ 0.05. We used likelihood-ratio tests to assess the explanatory power of the models, with the variable expressing the largest p-value being removed. Final results were expressed as adjusted odds ratios (aOR), with accompanying 95% confidence intervals and p-values. We used Hosmer-Lemeshow tests to assess the goodness-of-fit for each model. All statistical analyses were performed using Stata v.14 (StataCorp, USA)."
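The crude incidence calculation described in the statistical analysis is straightforward to reproduce. The following Python sketch is illustrative only: the case count matches the bacteraemia figure reported in the Results below, but the person-time denominator is an assumed round number standing in for the ACT mid-year resident population summed over 2004 to 2013, and the exact (Garwood) Poisson interval is one reasonable choice rather than necessarily the method used in the study.

```python
from scipy.stats import chi2

def poisson_exact_ci(events: int, person_years: float, level: float = 0.95):
    """Exact (Garwood) confidence interval for a crude incidence rate."""
    alpha = 1 - level
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return lower / person_years, upper / person_years

# Illustrative figures: 25 bacteraemia cases over an assumed ~3.5 million person-years
events, person_years = 25, 3.5e6
rate = events / person_years * 1e5  # cases per 100,000 population per year
lo, hi = (x * 1e5 for x in poisson_exact_ci(events, person_years))
print(f"{rate:.2f} per 100,000 (95% CI {lo:.2f}-{hi:.2f})")
```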
, "Campylobacter bacteraemia Out of 685 admissions, 333 (49%) had blood drawn for culture and 25 (7.5%) of these tested positive (Fig. 1). Speciation was performed on 21 (84%) of these case isolates, with 15 (71.4%) cases of C. jejuni, 5 (23.8%) cases of C. coli and a single (4.0%) case of C. lari bacteraemia. Amongst the 25 admissions with bacteraemia, 15 were male (60%) and 10 were female (40%), while the median age was 59.5 years (range 12 to 90 years). Amongst 308 negative cases there were 177 males (57.5%) and 131 females (42.5%), with a median age of 38.8 years (age range <1.0 to 92 years). This age difference was significant (median test χ2 = 5.30, p = 0.02) but no evidence of a difference in sex composition was observed (χ2 = 0.06, p = 0.81). The age and sex of admissions in relation to blood culture status are shown in Fig. 2. We also compared those who had blood drawn for culture (whether positive or negative) with those who did not, to assess demographic differences. We saw no evidence of a difference in median age (median test χ2 = 0.18, p = 0.68) but did observe that males were more likely to have blood drawn (χ2 = 11.17, p = 0.001). We observed no difference in the proportion of admissions with documented comorbidities who had blood specimens collected for culture compared with admissions without comorbidities (χ2 = 1.42, p = 0.23). Bacteraemia generally occurred in the context of antecedent diarrhoeal illness, although two admissions involved primary bacteraemia (i.e. without diarrhoea) in patients with haematological malignancies. Comorbidities, while common, were not a characteristic feature of cases hospitalised with bacteraemia, as shown in Table 1. No deaths were identified among cases with bacteraemia.
Fig. 1 Flow diagram for blood culture collection and subsequent Campylobacter spp. bacteraemia among a cohort of Campylobacter-associated hospitalisations
Fig. 2 Jitter plot of age and sex by blood culture status among Campylobacter-associated hospitalisations
Table 1 Characteristics of cases hospitalised with Campylobacter bacteraemia, 2004 to 2013
Year (case) | Age range/sex | Species | Bacteraemia source | Antimicrobial susceptibility | Antimicrobial treatment | Significant medical history and risk factors
2004 | 80+ M | C. jejuni | Enteric, secondary | Fully sensitive | Nil | Age, nil other significant
2005 (a) | 50–59 F | Campylobacter sp. | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Acute myeloid leukaemia
2005 (b) | 60–69 F | Campylobacter sp. | Enteric, secondary | Fully sensitive | Nil | Nil significant
2005 (c) | 40–49 F | C. jejuni | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | IV drug use, PUD on omeprazole
2006 (a) | 40–49 M | C. jejuni | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Untreated Stage III HIV
2008 (a) | 20–29 M | Campylobacter sp. | Enteric, secondary | Fully sensitive | IV azithromycin 500 mg qd | Nil significant
2008 (b) | 60–69 M | C. coli | Primary bacteraemia | Ciprofloxacin and erythromycin resistant | Nil | Lymphocytic lymphoma
2009 | 30–39 F | C. jejuni | Enteric, secondary | Fully sensitive | PO ciprofloxacin 250 mg bd | History of renal transplant secondary to IgA nephropathy
2010 (a) | 60–69 M | Campylobacter sp. | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Bowel carcinoma, current chemotherapy
2010 (b) | 30–39 M | C. coli | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Alcoholic liver disease with portal hypertension and bleeding varices
2010 (c) | 70–79 F | C. jejuni | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (d) | 10–19 F | C. jejuni | Enteric, secondary | Fully sensitive | PO azithromycin 500 mg qd (upon discharge) | Nil significant
2010 (e) | 70–79 F | C. coli | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (f) | 40–49 M | C. jejuni | Enteric, secondary | Fully sensitive | PO norfloxacin 400 mg bd | Irritable bowel syndrome
2011 (a) | 60–69 M | C. lari | Enteric, secondary | Ciprofloxacin resistant | PO doxycycline 100 mg bd | Alcoholic liver disease with portal hypertension, recent intracerebral bleed
2011 (b) | 80+ M | C. jejuni | Enteric, secondary | Fully sensitive | Nil prescribed | Age, nil other significant
2011 (c) | 70–79 M | C. coli | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Asplenic
2012 (a) | 40–49 M | C. jejuni | Enteric, secondary | Fully sensitive | IV ciprofloxacin 500 mg bd | Multiple sclerosis, current chemotherapy pre-stem cell transplantation, IDDM
2012 (b) | 30–39 F | C. jejuni | Enteric, secondary | Fully sensitive | Nil | Pregnant 33/40, IDDM
2012 (c) | 70–79 M | C. jejuni | Enteric, secondary | Ciprofloxacin and nalidixic acid resistant | Nil | Diabetic neuropathy, chronic renal failure
2012 (d) | 60–69 F | C. coli | Enteric, secondary | Ciprofloxacin and nalidixic acid resistant | Nil | Nil significant
2013 (a) | 20–29 M | C. jejuni | Enteric, secondary | Fully sensitive | Nil | Nil significant
2013 (b) | 20–29 M | C. jejuni | Primary bacteraemia | Fully sensitive | PO ciprofloxacin 750 mg bd (upon discharge) | B cell leukaemia, AVN (on steroids), SIADH
2013 (c) | 50–59 M | C. jejuni | Enteric, secondary | Fully sensitive | PO ciprofloxacin 500 mg bd | Liver failure with cirrhosis, portal hypertension secondary to hepatitis C, T2DM, hypothyroidism; awaiting transplant
2013 (d) | 70–79 M | C. jejuni | Enteric, secondary | Fully sensitive | Nil | Age, nil other significant
PO, per oral; IV, intravenous; PUD, peptic ulcer disease; HIV, human immunodeficiency virus; T2DM, type 2 diabetes mellitus; IDDM, insulin dependent diabetes mellitus; AVN, avascular necrosis; SIADH, syndrome of inappropriate antidiuretic hormone
During the period 2004 to 2013, the mean incidence of Campylobacter bacteraemia in the host population was 0.71 cases per 100,000 population per year (95% CI 0.48–1.05 per 100,000 population) (Fig. 3). We did not observe temporal trends in the proportion of Campylobacter-associated hospitalisations undergoing blood collection for culture or in the proportion of positive blood cultures among Campylobacter-associated hospitalisations.
Fig. 3 Incidence of Campylobacter spp. bacteraemia among ACT residents, 2004 to 2013
Factors associated with the collection of blood samples for culture and subsequent isolation of Campylobacter spp. are shown in Tables 2 and 3.
Table 2 Factors associated with collection of blood for culture among a cohort of Campylobacter-associated hospitalisations (n = 663)
Predictor variable | Estimated aOR | 95% confidence interval | p-value
Infectious Diseases Unit admission | 8.00 | 1.58–40.61 | 0.01
Febrile during admission (≥ 38 °C) | 21.30 | 13.76–32.96 | < 0.001
Tachycardia | 1.75 | 1.13–2.72 | 0.01
Moderate to severe renal disease | 2.87 | 1.26–6.55 | 0.01
10–19 years age group | 0.36 | 0.19–0.71 | < 0.01
22 observations contained missing data. Hosmer-Lemeshow goodness of fit: χ2 = 2.70, p = 0.61
Table 3 Factors associated with blood stream isolation of Campylobacter spp. among Campylobacter-associated hospitalisations (n = 333)
Predictor variable | Estimated aOR | 95% confidence interval | p-value
Moderate to severe liver disease | 48.89 | 7.03–340.22 | < 0.001
Haematology Unit admission | 14.67 | 2.99–72.07 | 0.001
Age group 70–79 years | 4.93 | 1.57–15.49 | < 0.01
Admission during summer months | 2.93 | 1.14–7.57 | 0.03
Indigenous Australian | 10.87 | 1.00–117.89 | 0.05
Hosmer-Lemeshow goodness of fit: χ2 = 0.71, p = 0.70
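The adjusted odds ratios in Tables 2 and 3 (and in the later susceptibility and treatment tables) come from multivariable logistic regression models of the kind described in the statistical analysis. The sketch below is a simplified Python illustration rather than a reproduction of the study's Stata workflow: the data frame, column names and single-step model are hypothetical, and the Hosmer-Lemeshow helper follows the standard decile-of-risk formulation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic over decile-of-risk groups."""
    order = np.argsort(y_prob)
    y = np.asarray(y_true)[order]
    p = np.asarray(y_prob)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        observed = y[idx].sum()
        expected = p[idx].sum()
        n_g = len(idx)
        p_bar = expected / n_g  # assumes 0 < p_bar < 1 in every group
        stat += (observed - expected) ** 2 / (n_g * p_bar * (1 - p_bar))
    return stat, chi2.sf(stat, groups - 2)

# Hypothetical admission-level data with a binary outcome (e.g. blood culture collected)
df = pd.DataFrame({
    "outcome": np.random.binomial(1, 0.5, 500),
    "febrile": np.random.binomial(1, 0.4, 500),
    "tachycardia": np.random.binomial(1, 0.3, 500),
    "age_group": np.random.choice(["0-9", "10-19", "20-64", "65+"], 500),
})

model = smf.logit("outcome ~ febrile + tachycardia + C(age_group)", data=df).fit(disp=False)
odds_ratios = pd.DataFrame({
    "aOR": np.exp(model.params),
    "ci_low": np.exp(model.conf_int()[0]),
    "ci_high": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(odds_ratios)
print(hosmer_lemeshow(df["outcome"], model.predict(df)))
```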
Antimicrobial susceptibility We identified 548 Campylobacter-associated admissions with a primary isolation of Campylobacter spp. and where subsequent antimicrobial susceptibility testing was performed. For 526 (96%) of these cases, data were available for all three standard antimicrobials: ciprofloxacin, nalidixic acid and erythromycin. Of the remainder, nalidixic acid data were unavailable for 21 isolates and erythromycin susceptibility was unavailable for a single isolate.
During the study period, 13% (70/548) of primary isolates exhibited non-susceptibility to at least one standard antimicrobial. The proportion of non-susceptible isolates ranged from ≤ 5.0% in 2004/05 to > 20.0% in 2012/13, with this increase being significant (χ2 = 6.12, p = 0.01) (Fig. 4). Among the individual antimicrobials, nalidixic acid non-susceptibility was reported for 9% (49/527) of tested isolates, ciprofloxacin non-susceptibility for 7% (40/548) and erythromycin non-susceptibility for 4% (21/547).
Fig. 4 Non-susceptibility to standard antimicrobials from Campylobacter isolates obtained during Campylobacter-associated hospitalisations, 2004 to 2013
Figure 4 shows that lower rates of erythromycin non-susceptibility were observed, while significant increases in both ciprofloxacin (χ2 = 16.51, p < 0.001) and nalidixic acid non-susceptibility (χ2 = 10.85, p = 0.001) occurred during the study period. Factors associated with antimicrobial non-susceptibility among Campylobacter-associated hospitalisations are shown in Tables 4 and 5.
Table 4 Factors associated with ciprofloxacin non-susceptible isolates among Campylobacter-associated hospitalisations (n = 411)
Predictor variable | Estimated aOR | 95% confidence interval | p-value
Bloody diarrhoea | 0.30 | 0.09–1.03 | 0.06
Recent overseas travel | 15.53 | 2.86–84.18 | 0.001
Previous Campylobacter-associated hospitalisation | 12.87 | 1.72–96.12 | 0.01
Hosmer-Lemeshow goodness of fit: χ2 = 0.03, p = 0.86
Table 5 Factors associated with non-susceptibility to ciprofloxacin, nalidixic acid or erythromycin among a cohort of Campylobacter-associated hospitalisations (n = 548)
Predictor variable | Estimated aOR | 95% confidence interval | p-value
Recent overseas travel | 11.80 | 3.18–43.83 | < 0.001
Previous Campylobacter-associated hospitalisation | 17.09 | 2.65–110.07 | < 0.01
Tachycardia | 0.48 | 0.26–0.89 | 0.02
Hosmer-Lemeshow goodness of fit: χ2 = 0.01, p = 0.92
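The year-on-year changes in non-susceptibility reported above were assessed with chi-squared tests for trend in proportions. As a rough, hedged illustration (not the study's actual code or data), a comparable trend assessment can be obtained by regressing the binary non-susceptibility outcome on calendar year; the yearly counts below are invented for the example, and the Wald test on the year coefficient only approximates a formal test for trend.

```python
import numpy as np
import statsmodels.api as sm

# Invented yearly data: (year, non-susceptible isolates, isolates tested)
yearly = [(2004, 2, 50), (2007, 5, 55), (2010, 9, 57), (2013, 14, 60)]

years = np.array([y for y, _, _ in yearly], dtype=float)
events = np.array([k for _, k, _ in yearly])
totals = np.array([n for _, _, n in yearly])

# Binomial GLM of the proportion non-susceptible on year; the year coefficient's
# Wald test approximates a chi-squared test for trend in proportions.
X = sm.add_constant(years - years.min())
model = sm.GLM(np.column_stack([events, totals - events]), X,
               family=sm.families.Binomial()).fit()
print(model.summary())
```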
Those receiving treatment were observed to be significantly older (median 48.5 years, range 8 years to 92 years) than admissions where antibiotics were not administered (median 34.8 years, range < 1 years to 91 years, χ2 = 26.92, p < 0.001). No sex-based differences were observed. Treatment associated with bacteraemia is described in Table 1.\nSecond generation fluoroquinolones were used in 79% (172/219) of treated admissions. Ciprofloxacin was administered to 82.0% (141/172), with the remainder receiving norfloxacin. Those receiving ciprofloxacin were younger (median 47.2 years, minimum 12 years, maximum 90 years) compared with those receiving norfloxacin (median age 64.3 years, minimum 19 years to maximum 92 years), but this difference was not significant. For admissions treated with ciprofloxacin, 91.4% (129/141) received treatment orally, with a median total daily dose of 1000 mg. For those receiving parenteral ciprofloxacin (\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$n$$\\end{document}n =12), the median total daily dose was 800 mg. The median total daily dosage for oral norfloxacin (\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$n$$\\end{document}n =31) was 800 mg.\nMacrolides (azithromycin or erythromycin) were administered to 21.0% (46/219) of admissions, with 63.0% (29/46) receiving azithromycin. No age or sex differences were observed between those receiving azithromycin versus erythromycin. For admissions receiving azithromycin, 72.4% (21/29) received treatment orally, with a median total daily dose for both oral and IV azithromycin of 500 mg. Oral administration was provided for 94.1% (16/17) of admissions receiving erythromycin, with a median total daily dose of 2000 mg. One patient received an initial total daily dose of 3200 mg IV erythromycin, prior to this being reduced to a total daily dose of 1600 mg. For two admissions (involving the same patient), a tetracycline (doxycycline) was used to treat a C. lari bacteraemia and enterocolitis.\nA significant increase in the proportion of Campylobacter-associated hospitalisations being administered antimicrobials was observed over time (χ2 = 4.37, p = 0.04), rising from 27% (14/52) in 2004 to 38% (26/68) of admissions in 2013. No difference in the proportion of admissions treated with either fluoroquinolones or macrolides was observed. Among admissions receiving macrolides a significant increase in the administration of azithromycin over erythromycin was observed (χ2 = 16.31, p < 0.001), most notably from 2011 onwards. No changes over time in the proportion of admissions treated with ciprofloxacin versus norfloxacin were observed. 
Factors associated with administration of antibiotics during Campylobacter-associated hospitalisations are shown in Table 6.

Table 6 Factors associated with antibiotic administration among a cohort of Campylobacter-associated hospitalisations (n = 607)
Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence Interval | p-value
Age 0–9 years | 0.07 | 0.01–0.57 | 0.01
Age 10–19 years | 0.44 | 0.20–0.97 | 0.04
Age 40–49 years | 2.34 | 1.14–4.79 | 0.02
Emergency Unit admission | 0.06 | 0.02–0.17 | < 0.001
Gastroenterology admission | 3.75 | 1.95–7.20 | < 0.001
General Medicine admission | 2.02 | 1.22–3.35 | < 0.01
Infectious Diseases admission | 2.58 | 1.03–6.44 | 0.04
Vomiting | 1.70 | 1.11–2.61 | 0.02
Electrolyte imbalance | 1.73 | 1.12–2.67 | 0.01
Blood specimen for culture | 2.76 | 1.79–4.26 | < 0.001
Hosmer and Lemeshow goodness of fit χ² = 9.23, p = 0.32
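Each regression table reports a Hosmer and Lemeshow goodness-of-fit statistic. This test is not built into the common Python statistics libraries, so a hand-rolled sketch is given below; the grouping into deciles of predicted risk and the synthetic inputs are assumptions for illustration only:

```python
# Illustrative sketch only: the Hosmer-Lemeshow goodness-of-fit statistic
# quoted under each regression table. `y` holds observed 0/1 outcomes and
# `p` the model's predicted probabilities; both arrays are assumed inputs.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Group observations by deciles of predicted risk and compare observed
    with expected event counts; returns (statistic, p-value)."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    order = np.argsort(p)
    bins = np.array_split(order, groups)   # roughly equal-sized risk groups
    stat = 0.0
    for idx in bins:
        obs = y[idx].sum()
        exp = p[idx].sum()
        n_g = len(idx)
        denom = exp * (1 - exp / n_g)      # N * pi_bar * (1 - pi_bar)
        if denom > 0:                      # guard against degenerate groups
            stat += (obs - exp) ** 2 / denom
    return stat, chi2.sf(stat, groups - 2)

# Example with purely synthetic inputs:
rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.6, 500)
y = rng.binomial(1, p)
print(hosmer_lemeshow(y, p))
```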
Out of 685 admissions, 333 (49%) had blood drawn for culture and 25 (7.5%) of these tested positive (Fig. 1). Speciation was performed on 21 (84%) of these case isolates, with 15 (71.4%) cases of C. jejuni, 5 (23.8%) cases of C. coli and a single (4.0%) case of C. lari bacteraemia. Amongst the 25 admissions with bacteraemia, 15 were male (60%) and 10 were female (40%), and the median age was 59.5 years (range 12 to 90 years). Amongst the 308 negative cases there were 177 males (57.5%) and 131 females (42.5%), with a median age of 38.8 years (range < 1.0 to 92 years).
This age difference was significant (median test χ² = 5.30, p = 0.02), but no evidence of a difference in sex composition was observed (χ² = 0.06, p = 0.81). The age and sex of admissions in relation to blood culture status are shown in Fig. 2. We also compared those who had blood drawn for culture (whether positive or negative) with those who did not, to assess demographic differences. We saw no evidence of a difference in median age (median test χ² = 0.18, p = 0.68) but did observe that males were more likely to have blood drawn (χ² = 11.17, p = 0.001).
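The age comparisons above are reported as median tests. Mood's median test, available in SciPy, is one way such a χ² statistic can be obtained; the samples below are synthetic placeholders, not patient data:

```python
# Illustrative sketch only: a Mood's median test of the kind reported for the
# age comparison between bacteraemic and non-bacteraemic admissions.
import numpy as np
from scipy.stats import median_test

rng = np.random.default_rng(2)
ages_bacteraemia = rng.normal(60, 18, 25).clip(1, 92)      # hypothetical ages
ages_no_bacteraemia = rng.normal(40, 22, 308).clip(0, 92)  # hypothetical ages

stat, p, grand_median, table = median_test(ages_bacteraemia, ages_no_bacteraemia)
print(f"median test chi2 = {stat:.2f}, p = {p:.3f}, grand median = {grand_median:.1f}")
```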
We observed no difference in the proportion of admissions with documented comorbidities who had blood collected for culture compared with admissions without comorbidities (χ² = 1.42, p = 0.23). Bacteraemia generally occurred in the context of antecedent diarrhoeal illness, although two admissions involved primary bacteraemia (i.e. without diarrhoea) in patients with haematological malignancies. Comorbidities, while common, were not a characteristic feature of cases hospitalised with bacteraemia, as shown in Table 1. No deaths were identified among cases with bacteraemia.

Fig. 1 Flow diagram for blood culture collection and subsequent Campylobacter spp. bacteraemia among a cohort of Campylobacter-associated hospitalisations

Fig. 2 Jitter plot of age and sex by blood culture status among Campylobacter-associated hospitalisations

Table 1 Characteristics of cases hospitalised with Campylobacter bacteraemia, 2004 to 2013
Year (case) | Age range/sex | Species | Bacteraemia source | Antimicrobial susceptibility | Antimicrobial treatment | Significant medical history and risk factors
2004 | 80+ M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Age, nil other significant
2005 (a) | 50–59 F | Campylobacter sp. | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Acute myeloid leukaemia
2005 (b) | 60–69 F | Campylobacter sp. | Enteric (secondary) | Fully sensitive | Nil | Nil significant
2005 (c) | 40–49 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | IV drug use, PUD on omeprazole
2006 (a) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Untreated Stage III HIV
2008 (a) | 20–29 M | Campylobacter sp. | Enteric (secondary) | Fully sensitive | IV azithromycin 500 mg qd | Nil significant
2008 (b) | 60–69 M | C. coli | Primary bacteraemia | Ciprofloxacin resistant; erythromycin resistant | Nil | Lymphocytic lymphoma
2009 | 30–39 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 250 mg bd | History of renal transplant secondary to IgA nephropathy
2010 (a) | 60–69 M | Campylobacter sp. | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Bowel carcinoma, current chemotherapy
2010 (b) | 30–39 M | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Alcoholic liver disease with portal hypertension and bleeding varices
2010 (c) | 70–79 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (d) | 10–19 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO azithromycin 500 mg qd (upon discharge) | Nil significant
2010 (e) | 70–79 F | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (f) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO norfloxacin 400 mg bd | Irritable bowel syndrome
2011 (a) | 60–69 M | C. lari | Enteric (secondary) | Ciprofloxacin resistant | PO doxycycline 100 mg bd | Alcoholic liver disease with portal hypertension, recent intracerebral bleed
2011 (b) | 80+ M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil prescribed | Age, nil other significant
2011 (c) | 70–79 M | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Asplenic
2012 (a) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | IV ciprofloxacin 500 mg bd | Multiple sclerosis, current chemotherapy pre-stem cell transplantation, IDDM
2012 (b) | 30–39 F | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Pregnant (33/40 weeks), IDDM
2012 (c) | 70–79 M | C. jejuni | Enteric (secondary) | Ciprofloxacin resistant; nalidixic acid resistant | Nil | Diabetic neuropathy, chronic renal failure
2012 (d) | 60–69 F | C. coli | Enteric (secondary) | Ciprofloxacin resistant; nalidixic acid resistant | Nil | Nil significant
2013 (a) | 20–29 M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Nil significant
2013 (b) | 20–29 M | C. jejuni | Primary bacteraemia | Fully sensitive | PO ciprofloxacin 750 mg bd (upon discharge) | B cell leukaemia, AVN (on steroids), SIADH
2013 (c) | 50–59 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Liver failure with cirrhosis, portal hypertension secondary to Hepatitis C, T2DM, hypothyroidism; awaiting transplant
2013 (d) | 70–79 M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Age, nil other significant
PO, per oral; IV, intravenous; PUD, peptic ulcer disease; HIV, human immunodeficiency virus; T2DM, type 2 diabetes mellitus; IDDM, insulin dependent diabetes mellitus; AVN, avascular necrosis; SIADH, syndrome of inappropriate antidiuretic hormone

During the period 2004 to 2013, the mean incidence of Campylobacter bacteraemia in the host population was 0.71 cases per 100,000 population per year (95% CI 0.48–1.05 per 100,000 population) (Fig. 3). We did not observe temporal trends in the proportion of Campylobacter-associated hospitalisations undergoing blood collection for culture or in the proportion of positive blood cultures among Campylobacter-associated hospitalisations.

Fig. 3 Incidence of Campylobacter spp. bacteraemia among ACT residents, 2004 to 2013
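The incidence estimate of 0.71 cases per 100,000 population per year and its 95% CI can be reproduced, at least approximately, from the case count and person-time using an exact Poisson interval. The person-time figure below is a hypothetical placeholder and the authors' exact method may differ:

```python
# Illustrative sketch only: crude incidence rate with an exact Poisson 95% CI,
# as reported for Campylobacter bacteraemia (0.71 per 100,000 per year).
from scipy.stats import chi2

cases = 25
person_years = 3_520_000  # hypothetical: roughly 352,000 residents over 10 years

rate = cases / person_years * 100_000
lower = chi2.ppf(0.025, 2 * cases) / 2 / person_years * 100_000
upper = chi2.ppf(0.975, 2 * (cases + 1)) / 2 / person_years * 100_000
print(f"incidence = {rate:.2f} (95% CI {lower:.2f}-{upper:.2f}) per 100,000 per year")
```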
Factors associated with the collection of blood samples for culture and subsequent isolation of Campylobacter spp. are shown in Tables 2 and 3.

Table 2 Factors associated with collection of blood for culture among a cohort of Campylobacter-associated hospitalisations (n = 663)
Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence interval | p-value
Infectious Diseases Unit admission | 8.00 | 1.58–40.61 | 0.01
Febrile during admission (≥ 38 °C) | 21.30 | 13.76–32.96 | < 0.001
Tachycardia | 1.75 | 1.13–2.72 | 0.01
Moderate to severe renal disease | 2.87 | 1.26–6.55 | 0.01
10–19 years age group | 0.36 | 0.19–0.71 | < 0.01
22 observations contained missing data
Hosmer and Lemeshow goodness of fit χ² = 2.70, p = 0.61

Table 3 Factors associated with blood stream isolation of Campylobacter spp. among Campylobacter-associated hospitalisations (n = 333)
Predictor variable | Estimated adjusted Odds Ratio (aOR) | 95% Confidence interval | p-value
Moderate to severe liver disease | 48.89 | 7.03–340.22 | < 0.001
Haematology Unit admission | 14.67 | 2.99–72.07 | 0.001
Age group 70–79 years | 4.93 | 1.57–15.49 | < 0.01
Admission during summer months | 2.93 | 1.14–7.57 | 0.03
Indigenous Australian | 10.87 | 1.00–117.89 | 0.05
Hosmer and Lemeshow goodness of fit χ² = 0.71, p = 0.70
We observed a high rate of bacteraemia in this study of Campylobacter-associated hospital admissions. Although blood cultures are not routinely performed for cases admitted with infectious gastroenteritis, we observed that, among Campylobacter-associated hospitalisations, the presence of fever, pre-existing kidney disease or admission to particular subspecialty units increased the likelihood of blood being collected. Comorbidity and advanced age were both associated with subsequent isolation of the organism from blood. One third of admissions received antibiotic treatment during the study period, with the proportion treated rising significantly over time. We also observed a temporal increase in the proportion of isolates exhibiting non-susceptibility to standard antimicrobials, notably fluoroquinolones, which is of concern given the clinical and public health implications of antimicrobial resistance. Factors associated with ciprofloxacin resistance included recent overseas travel and previous hospitalisation for campylobacteriosis. This carries important clinical consequences, as the option to treat with antibiotics may carry greater weight among hospitalised cases than among non-hospitalised cases.

Campylobacteriosis most commonly presents as a self-limiting enterocolitis, with secondary bacteraemia a recognised but relatively infrequent complication [10]. A number of studies have sought to determine the incidence of Campylobacter bacteraemia in high-income settings, estimating rates of between 0.20 and 0.47 cases per 100,000 population [3, 11, 12]. A more recent Swedish study [13] reported an incidence of 1.00 case per 100,000 population, linked to changes in automated blood culture collection systems. In our study we observed only 25 incident cases of bacteraemia, equating to a mean incidence of 0.71 cases per 100,000 population. While bacteraemia is relatively infrequent, our population rate appears high, likely reflecting the high background incidence of campylobacteriosis in the ACT [14] and a lower threshold for testing among hospitalised cases.

Blood cultures are the gold standard for the diagnosis of BSIs [4]. In our study, patients who had a measured fever (≥ 38 °C), who had underlying chronic kidney disease or who were admitted under the care of the Infectious Diseases Unit were more likely to have blood drawn for culture. Fever is a common prompt for blood cultures, with patients whose measured temperature is ≥ 38 °C having an increased likelihood of bacteraemia [15]. Similarly, tachycardia, another vital sign, has also been used in clinical prediction rules for bloodstream infection [16]. Although comorbidities do not feature prominently in BSI prediction rules, underlying kidney disease has been shown to be a risk factor for bloodstream infections in older patients [17]. Several consequences of kidney disease have been proposed to contribute to infection, including malnutrition, chronic inflammation, retained uremic solutes, trace element deficiencies and metabolic abnormalities [18].
The finding that blood culture is more likely to be ordered by Infectious Diseases clinicians is unsurprising given the clinical focus of this subspecialty. Conversely, there was significant evidence that those aged between 10 and 19 years were less likely to have blood cultures performed, a finding that likely reflects fewer hospitalisations due to the low incidence of campylobacteriosis in this age group [14].

Predicting BSIs is challenging, with numerous models developed for specific populations, settings and sources of infection [19]. The pre-test probability of bacteraemia will therefore vary considerably based upon the clinical context and source of infection [15]. Generally only 5–10% of blood cultures are positive, and of those positive results, between 30 and 50% represent contaminants [19]. In our study we observed blood cultures to be routinely requested, with 49% of acute admissions having blood drawn and 7% being positive for Campylobacter spp. These admissions involved both immunosuppressed and immunocompetent patients. The impact of comorbid and immunosuppressive conditions on clinical variables used in predicting bacteraemia is less well understood [15], although associations between campylobacteriosis and invasive disease in immunosuppressed patients have been described [3, 20]. Transient bacteraemia may also be common among immunocompetent hosts with Campylobacter enterocolitis but is less frequently detected, due to the bactericidal effect of human serum and the reduced frequency of blood culture among acute enterocolitis patients [10], although nearly half of acute admissions in our study had blood culturing performed. Notably, similar proportions of acute admissions with and without comorbidities had blood drawn for culture, suggesting that in our study population comorbidity exerted limited influence on decisions to request blood cultures. This is perhaps not surprising given that prediction rules for BSI focus on clinical signs [15].

Factors associated with a positive blood stream isolate (bacteraemia)
The statistical association we observed between cases with positive blood cultures and the presence of underlying liver disease and haematological malignancy is in keeping with hospital-based studies showing higher proportions of these conditions among cases with Campylobacter bacteraemia [11, 21, 22]. In addition, advanced age was statistically associated with detection of bacteraemia, a characteristic seen in larger population-based studies of campylobacteriosis [3]. Several studies also report seasonality of Campylobacter bacteraemia [23, 24]. Our results show hospitalisation during the southern hemisphere summer to be associated with bacteraemia, a finding that aligns with the seasonality of Campylobacter enteritis in Australia [14]. A further association with blood culture positivity was Indigenous status; however, this result was derived from a small number of observations, with Indigenous Australians comprising 1% of Campylobacter-associated hospitalisations in the ACT [6].

Factors associated with non-susceptibility to antimicrobials
We observed statistically significant evidence of an increase in the proportion of isolates exhibiting non-susceptibility to standard antimicrobials, with this increase driven primarily by non-susceptibility to fluoroquinolones (including nalidixic acid). In keeping with trends in comparable settings, only low rates of non-susceptibility to macrolides were observed [25, 26]. Internationally, fluoroquinolone resistance among Campylobacter spp. has become a major public health problem [5]. Increases in the proportion of clinical isolates demonstrating ciprofloxacin resistance have been observed in the United Kingdom (UK) and United States (US) [25, 27], while in the European Union (EU) more than half (54.6%) of human-associated C. jejuni and two-thirds (66.6%) of C. coli isolates were resistant to ciprofloxacin in 2013 [28].

Australia has previously reported low rates of fluoroquinolone non-susceptibility among clinical Campylobacter isolates (around 2% in 2006) [29]. This has been credited to a national pharmaceutical subsidy scheme that restricted human quinolone use and to regulation forbidding quinolone use in food-producing animals [30]. More recent studies reveal that this situation has changed markedly, with current rates of ciprofloxacin resistance in clinically derived Campylobacter isolates now ranging between 13 and 20% [31, 32].

Overseas travel is a well-established risk factor for the acquisition of ciprofloxacin-resistant Campylobacter infections [26, 33]. In our study, cases with ciprofloxacin resistance all reported travel to India and South-East Asia, destinations associated with high rates of antimicrobial resistance among enteric pathogens (including Campylobacter spp.) [34, 35]. Nevertheless, the majority of ciprofloxacin non-susceptible isolates in our study had no recent overseas travel identified, meaning factors associated with domestic acquisition of ciprofloxacin resistance require greater consideration. US data similarly show increases in domestically acquired ciprofloxacin resistance [27].

We observed that ciprofloxacin non-susceptibility was also associated with previous hospitalisation with campylobacteriosis. There is a paucity of population-level data on recurrent hospitalisations involving non-susceptible Campylobacter isolates. However, rates of recurrent campylobacteriosis in community settings have been reported to be as high as 248 episodes per 100,000 cases per year in the five years following an initial infection [36].
Explanations for our finding could be either host- or pathogen-related, including higher rates of humoral immunodeficiency in patients hospitalised with recurrent campylobacteriosis [11, 37], or de novo mutations or expansion of resistant organisms already present at subclinical levels.

While there has been debate around isolate non-susceptibility and disease severity, reanalysis of the issue appears to show no substantial clinical differences between resistant and susceptible Campylobacter isolates [38]. Consequently, our finding of reduced odds of bloody diarrhoea and tachycardia among admissions with ciprofloxacin non-susceptibility and any antimicrobial non-susceptibility, respectively, most likely represents the so-called "healthy traveller" effect [39].
Factors associated with antibiotic treatment during admission
During the study period we observed statistically significant evidence of an increase in the proportion of Campylobacter-associated hospitalisations receiving antibiotic treatment. Antimicrobial therapy for Campylobacter enterocolitis is not routinely advised but may be recommended for patients with, or at risk of, severe disease, including high volume or bloody diarrhoea, high fever, symptom duration greater than one week, pregnancy or immunocompromised status [2, 10, 40]. Given that hospitalisation can be viewed as a marker of disease severity [41], the rates of treatment within our study population might be expected to differ from those among non-hospitalised campylobacteriosis cases.

Other research on campylobacteriosis in the ACT has found no concomitant increase in hospitalisations during the same period as the current study [6]. Antibiotic treatment rates may therefore reflect local treatment practices rather than a response to disease severity, with data showing rates of appropriate prescribing and compliance with antibiotic guidelines in ACT hospitals to be the lowest in Australia during the study period [42].

One-third of Campylobacter-associated hospitalisations received antibiotics, either empirically or as targeted therapy for confirmed campylobacteriosis. Second generation fluoroquinolones, mainly oral ciprofloxacin, comprised 80% of treatment, with macrolides making up the remainder. Australia has been successful in efforts to limit the use of quinolones in humans and to prohibit their use in food-producing animals [30]. This has preserved their clinical utility in Australia, with ciprofloxacin and norfloxacin remaining empirical treatment options for acute infectious diarrhoea and being recommended alongside azithromycin for treatment of domestically acquired Campylobacter enteritis [40]. Conversely, the high rates of quinolone resistance experienced in the UK, EU and US have seen macrolide treatment recommended, or a greater emphasis placed on travel history and knowledge of local resistance patterns to guide empirical prescribing [43, 44].

Within our hospitalised study population, the strongest predictor of antibiotic treatment was collection of a blood specimen for culture. Both antibiotic prescription and the ordering of blood cultures are clinical decisions, suggesting that the underlying clinical context observed by treating clinicians influences both practices and induces them to be positively associated. Admission under specific clinical units, including Gastroenterology, Infectious Diseases and General Medicine, was also associated with increased odds of receiving antimicrobial therapy. Hospitalisation implies a higher level of morbidity, potentially explaining the higher likelihood of antibiotic administration. Variation in treatment focus could also be expected between subspecialties, especially when underlying comorbidities are exacerbated. Such decision making may be further influenced by routine clinical behaviour, unease regarding the consequences of BSI, or the acceptability of not obtaining blood cultures in particular specialities [4].

Age was also found to be an important factor in treatment, with paediatric admissions less likely to receive antibiotics. This finding reflects that fluoroquinolones, the most commonly prescribed antibiotic class, are not recommended for use in children due to safety concerns [45]. We also observed that patients aged 40–49 years were more likely to be prescribed antibiotics. Reasons for this are less certain, but around 20% of admissions to high prescribing units such as Gastroenterology and Infectious Diseases were in this age range.

Vomiting and electrolyte imbalance were also associated with provision of antibiotic therapy. Vomiting is a less frequently reported symptom of campylobacteriosis but serves as an indicator of disease severity [46] and a predictor of bacteraemia [47]. While we did not assess the severity of dehydration, it is likely that a population such as ours included a higher proportion of cases with more pronounced symptomatology and severity of symptoms compared with non-hospitalised cases.
This finding reflects the fact that fluoroquinolones, the most commonly prescribed antibiotic class, are not recommended for use in children due to safety concerns [45]. We also observed that patients aged 40–49 years were more likely to be prescribed antibiotics. Reasons for this are less certain, but around 20% of admissions to high-prescribing units such as Gastroenterology and Infectious Diseases were in this age range.\nVomiting and electrolyte imbalance were also associated with provision of antibiotic therapy. Vomiting is a less frequently reported symptom of campylobacteriosis but serves as an indicator of disease severity [46] and a predictor of bacteraemia [47]. While we did not assess the severity of dehydration, it is likely that a population such as ours included a higher proportion of cases with more pronounced symptoms than non-hospitalised cases.", "There are a number of potential limitations to our study. Firstly, our rates of bacteraemia may underestimate the true incidence, as we identified cases using only public hospital laboratory data. Other cases of Campylobacter bacteraemia may have been diagnosed and managed in the community or the private hospital sector, although the clinical significance of these is less certain. A second limitation relates to the precision of our model estimates, with the small numbers of observations for some outcomes making detection of clinically meaningful associations challenging. Despite this, the observed associations still align plausibly with the epidemiology of Campylobacter. A third limitation relates to the generalisability of our findings. Our study population was hospital-based and drawn from a single Australian territory, and regional and international differences in the epidemiology of campylobacteriosis have been observed in high-income settings [14, 22, 48]. Finally, antimicrobial susceptibility testing and bacterial speciation were not performed on all isolates, limiting exploration of species-specific features such as the higher macrolide resistance rates among C. coli isolates [49].", "Campylobacter infections cause a substantial disease burden, as reflected by the high number of hospitalisations and the high incidence of bacteraemia in our study. While a spectrum of illness can be observed among hospitalisations, many cases exhibit signs suggestive of systemic disease. Furthermore, both the proportion of cases receiving antibiotic treatment and the proportion with isolates non-susceptible to standard antimicrobials increased over time. Given the increasing incidence of Campylobacter infections, particularly among older patients, understanding the hospitalisation burden becomes increasingly important. This study provides evidence on clinical factors influencing the management of hospitalised cases in high-income settings." ]
[ null, null, null, null, "results", null, null, null, "discussion", null, null, null, null, "conclusion" ]
[ "Campylobacter infections", "Hospitalisation", "Bacteraemia", "Incidence", "Antimicrobial susceptibility", "Antimicrobial therapy", "Comorbidity", "Elderly" ]
Background: Campylobacter spp. are internationally significant as a cause of infectious diarrhoeal disease [1]. Most cases of infectious diarrhoea, including those caused by Campylobacter spp., are self-limiting, with management focused on maintenance of hydration via fluid repletion [2]. However, for persons hospitalised with Campylobacter infection, clinical thresholds for exclusion of bacteraemia and the consideration of antimicrobial therapy differ due to symptom severity, risk of complications or the exacerbation of underlying co-morbidities [2]. While bacteraemia is an uncommon complication of campylobacteriosis, testing and diagnosis typically occur when a person is hospitalised [3]. Within a hospital setting, clinical tolerances and the challenge of predicting bacteraemia among febrile admissions with an enteric focus are important considerations [4]. Further, the likelihood of clinicians commencing antibiotic therapy may also increase among hospitalised cases, highlighting the importance of judicious prescribing and an understanding of isolate susceptibility patterns to ensure viable treatment options remain [5]. We describe rates of bacteraemia, antimicrobial susceptibility and treatment among a cohort of Campylobacter-associated hospitalisations. We also examined clinical and host factors associated with the diagnosis of blood stream infections (BSI), isolate non-susceptibility and antibiotic treatment of hospitalised cases.
Methods:
Background and data
This study forms part of a retrospective review of Campylobacter-associated hospital admissions in the Australian Capital Territory (ACT) between January 2004 and December 2013. A Campylobacter-associated hospital admission was defined as any episode of care for an admitted patient that was clinically and temporally linked to a Campylobacter isolate derived from the same patient. Full details of the setting, data sources, data collection and linkage are described elsewhere [6]. In summary, they include hospital-generated admission data, hospital microbiology data and clinical data obtained via individual medical record review. Hospital admission details were obtained via a data extract that included patient demographics, admission and discharge dates and admission unit details. Record inclusion was determined by an inpatient admission having been assigned the International Classification of Diseases (ICD) diagnosis code ‘A045—Campylobacter enteritis’. We also obtained a separate microbiology data extract detailing inpatient isolations of Campylobacter spp., including specimen type, collection dates and antimicrobial susceptibility data. All laboratory diagnoses of Campylobacter infection were made via culture; speciation was not routinely performed on hospital isolates prior to 2013. Susceptibilities to ciprofloxacin, nalidixic acid and erythromycin were assessed using disk diffusion according to Clinical and Laboratory Standards Institute breakpoints [7]. Intermediate and resistant isolates were grouped as non-susceptible to aid analysis. Due to the potential for multiple specimens being collected during an individual admission (or across multiple related admissions), we reported antimicrobial susceptibility using the earliest available data. Microbiology results were then linked to admissions both with and without ICD code ‘A045’. We undertook a review of medical records to collect additional details on illness presentation, associated complications, patient co-morbidities (as per the Charlson Co-morbidity Index) [8] and antibiotic treatment and prescribing details (e.g., antibiotic type, dosage, frequency and administration route). For antibiotic treatment, we defined a total daily dose as the dosage in milligrams (mg) multiplied by the frequency of administration per day, with the product expressed in milligrams per day.
Statistical analysis
We calculated bacteraemia incidence per 100,000 persons using the ACT’s mid-year estimated resident population for each year between 2004 and 2013 [9]. We used non-parametric methods, including median tests, to analyse non-normally distributed variables such as age. We used Pearson’s chi-squared test and Fisher’s exact tests to assess simple statistical associations between key outcome and independent variables of interest, while trends in proportions were assessed using chi-squared tests.
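The incidence and trend calculations described here are simple to illustrate. The sketch below is not the authors' Stata code: the annual counts and mid-year population figures are placeholders, and the trend test shown is one common chi-squared test for trend in proportions (Cochran–Armitage form), which may differ from the exact routine the authors used.

# Minimal sketch (placeholder data): annual incidence per 100,000 and a
# chi-squared test for trend in proportions.
import numpy as np
from scipy.stats import norm

years = np.arange(2004, 2014)
cases = np.array([2, 3, 1, 2, 3, 1, 6, 3, 4, 4])              # hypothetical annual counts
population = np.array([325, 330, 334, 341, 347,
                       352, 358, 366, 374, 381]) * 1_000.0    # hypothetical mid-year populations

incidence = cases / population * 100_000                      # cases per 100,000 persons per year
for year, rate in zip(years, incidence):
    print(f"{year}: {rate:.2f} per 100,000")

def chi2_trend(successes, totals, scores=None):
    """Cochran-Armitage style chi-squared test for trend in proportions (1 df)."""
    r = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    s = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    p_bar = r.sum() / n.sum()                                  # pooled proportion
    s_bar = (s * n).sum() / n.sum()                            # weighted mean score
    t = (s * (r - n * p_bar)).sum()                            # trend statistic
    var = p_bar * (1 - p_bar) * ((s - s_bar) ** 2 * n).sum()   # its variance
    z = t / np.sqrt(var)
    return z ** 2, 2 * norm.sf(abs(z))

# e.g. yearly counts of non-susceptible isolates over isolates tested (placeholders)
chi2_stat, p_value = chi2_trend([1, 2, 2, 3, 4, 5, 6, 8, 10, 12],
                                [50, 52, 48, 55, 57, 54, 60, 58, 56, 58])
print(f"chi-squared test for trend: chi2 = {chi2_stat:.2f}, p = {p_value:.3f}")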
Preliminary analyses included the estimation of relative risks (RRs), 95% confidence intervals (CIs) and p-values to assess potential predictors of blood culturing, antimicrobial susceptibility and antibiotic treatment. Variables examined included age, sex, country of birth, previous Campylobacter-associated hospitalisation, overseas travel history, signs and symptoms, admission unit, comorbidities and the presence of key signs of infection. Using a stepwise additive approach, with the most significant variables being added first, we constructed separate logistic regression models to identify predictors associated with (i) collection of blood specimens for culture, (ii) positive blood isolates, (iii) non-susceptibility to ciprofloxacin, (iv) any antimicrobial non-susceptibility and (v) antibiotic treatment during hospitalisation. Erythromycin non-susceptibility was not assessed due to the limited number of observations. Clinical relevance and statistical evidence were used to assist with variable selection for multivariable analysis. The significance level for removal from the models was set at p ≤ 0.05. We used likelihood-ratio tests to assess the explanatory power of the models, with the variable with the largest p-value being removed at each step. Final results were expressed as adjusted odds ratios (aOR), with accompanying 95% confidence intervals and p-values. We used Hosmer-Lemeshow tests to assess the goodness-of-fit for each model. All statistical analyses were performed using Stata v.14 (StataCorp, USA).
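The multivariable procedure above was carried out in Stata. As an illustration only, the sketch below shows one way a comparable workflow could be written in Python with statsmodels: the data frame, outcome name and candidate column names are hypothetical, and the loop implements only the likelihood-ratio-based removal step, adjusted odds ratios and a simple Hosmer-Lemeshow check, not the authors' exact stepwise routine.

# Minimal sketch (hypothetical data frame and columns), not the authors' workflow.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def fit_logit(df, outcome, variables):
    exog = sm.add_constant(df[list(variables)], has_constant="add")
    return sm.Logit(df[outcome], exog).fit(disp=0)

def lr_pvalue(full, reduced):
    """Likelihood-ratio test p-value comparing two nested Logit fits."""
    stat = 2 * (full.llf - reduced.llf)
    return chi2.sf(stat, df=full.df_model - reduced.df_model)

def backward_eliminate(df, outcome, candidates, p_remove=0.05):
    """Drop, one at a time, the variable with the largest likelihood-ratio p-value."""
    keep = list(candidates)
    while keep:
        full = fit_logit(df, outcome, keep)
        pvals = {v: lr_pvalue(full, fit_logit(df, outcome, [c for c in keep if c != v]))
                 for v in keep}
        worst, p = max(pvals.items(), key=lambda kv: kv[1])
        if p <= p_remove:            # every remaining variable is retained
            break
        keep.remove(worst)
    return fit_logit(df, outcome, keep), keep

def adjusted_odds_ratios(fit):
    ci = np.exp(fit.conf_int())
    return pd.DataFrame({"aOR": np.exp(fit.params), "2.5%": ci[0],
                         "97.5%": ci[1], "p (Wald)": fit.pvalues})

def hosmer_lemeshow(y, p_hat, groups=10):
    """Hosmer-Lemeshow chi-squared over quantile groups of predicted risk."""
    y, p_hat = pd.Series(np.asarray(y)), pd.Series(np.asarray(p_hat))
    g = pd.qcut(p_hat, groups, duplicates="drop")
    obs, exp, n = y.groupby(g).sum(), p_hat.groupby(g).sum(), y.groupby(g).count()
    stat = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    return stat, chi2.sf(stat, df=len(obs) - 2)

# usage (hypothetical column names):
#   fit, kept = backward_eliminate(df, "treated", ["vomiting", "electrolyte_imbalance", "age_40_49"])
#   print(adjusted_odds_ratios(fit)); print(hosmer_lemeshow(df["treated"], fit.predict()))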
Results:
Campylobacter bacteraemia
Out of 685 admissions, 333 (49%) had blood drawn for culture and 25 (7.5%) of these tested positive (Fig. 1). Speciation was performed on 21 (84%) of these case isolates, with 15 (71.4%) cases of C. jejuni, 5 (23.8%) cases of C. coli and a single (4.0%) case of C. lari bacteraemia. Amongst the 25 admissions with bacteraemia, 15 were male (60%) and 10 were female (40%), while the median age was 59.5 years (range 12 to 90 years). Amongst the 308 negative cases there were 177 males (57.5%) and 131 females (42.5%), with a median age of 38.8 years (age range < 1.0 to 92 years). This age difference was significant (median test χ2 = 5.30, p = 0.02) but no evidence of a difference in sex composition was observed (χ2 = 0.06, p = 0.81). The age and sex of admissions in relation to blood culture status are shown in Fig. 2. We also compared those who had blood drawn for culture (whether positive or negative) with those that did not, to assess demographic differences.
We saw no evidence of a difference in median age (median test χ2 = 0.18, p = 0.68) but did observe males to be more likely to have blood drawn (χ2 = 11.17, p = 0.001). We observed no difference in the proportion of admissions with documented comorbidities who had blood specimens collected for culture when compared with admissions without comorbidities who had blood taken for culture (χ2 = 1.42, p = 0.23). Bacteraemia generally occurred in the context of antecedent diarrhoeal illness, although two admissions involved primary bacteraemia (i.e. without diarrhoea) in patients with haematological malignancies. Comorbidities, while common, were not a characteristic feature of cases hospitalised with bacteraemia, as shown in Table 1. No deaths were identified among cases with bacteraemia.
Fig. 1 Flow diagram for blood culture collection and subsequent Campylobacter spp. bacteraemia among a cohort of Campylobacter-associated hospitalisations
Fig. 2 Jitter plot of age and sex by blood culture status among Campylobacter-associated hospitalisations
Table 1 Characteristics of cases hospitalised with Campylobacter bacteraemia, 2004 to 2013
Year (case) | Age range/sex | Species | Bacteraemia source | Antimicrobial susceptibility | Antimicrobial treatment | Significant medical history and risk factors
2004 | 80+ M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Age, nil other significant
2005 (a) | 50–59 F | Campylobacter sp. | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Acute myeloid leukaemia
2005 (b) | 60–69 F | Campylobacter sp. | Enteric (secondary) | Fully sensitive | Nil | Nil significant
2005 (c) | 40–49 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | IV drug use, PUD on omeprazole
2006 (a) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Untreated Stage III HIV
2008 (a) | 20–29 M | Campylobacter sp. | Enteric (secondary) | Fully sensitive | IV azithromycin 500 mg qd | Nil significant
2008 (b) | 60–69 M | C. coli | Primary bacteraemia | Ciprofloxacin-resistant; erythromycin-resistant | Nil | Lymphocytic lymphoma
2009 | 30–39 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 250 mg bd | History of renal transplant secondary to IgA nephropathy
2010 (a) | 60–69 M | Campylobacter sp. | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Bowel carcinoma, current chemotherapy
2010 (b) | 30–39 M | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Alcoholic liver disease with portal hypotension and bleeding varices
2010 (c) | 70–79 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (d) | 10–19 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO azithromycin 500 mg qd (upon discharge) | Nil significant
2010 (e) | 70–79 F | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (f) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO norfloxacin 400 mg bd | Irritable bowel syndrome
2011 (a) | 60–69 M | C. lari | Enteric (secondary) | Ciprofloxacin-resistant | PO doxycycline 100 mg bd | Alcoholic liver disease with portal hypotension, recent intracerebral bleed
2011 (b) | 80+ M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil prescribed | Age, nil other significant
2011 (c) | 70–79 M | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Asplenic
2012 (a) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | IV ciprofloxacin 500 mg bd | Multiple sclerosis, current chemotherapy pre-stem cell transplantation, IDDM
2012 (b) | 30–39 F | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Pregnant 33/40, IDDM
2012 (c) | 70–79 M | C. jejuni | Enteric (secondary) | Ciprofloxacin-resistant; nalidixic acid-resistant | Nil | Diabetic neuropathy, chronic renal failure
2012 (d) | 60–69 F | C. coli | Enteric (secondary) | Ciprofloxacin-resistant; nalidixic acid-resistant | Nil | Nil significant
2013 (a) | 20–29 M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Nil significant
2013 (b) | 20–29 M | C. jejuni | Primary bacteraemia | Fully sensitive | PO ciprofloxacin 750 mg bd (upon discharge) | B cell leukaemia, AVN (on steroids), SIADH
2013 (c) | 50–59 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Liver failure with cirrhosis, portal hypotension secondary to Hepatitis C, T2DM, hypothyroidism; awaiting transplant
2013 (d) | 70–79 M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Age, nil other significant
PO, per oral; IV, intravenous; PUD, peptic ulcer disease; HIV, human immunodeficiency virus; T2DM, type 2 diabetes mellitus; IDDM, insulin dependent diabetes mellitus; AVN, acute vascular necrosis; SIADH, syndrome of inappropriate antidiuretic hormone
During the period 2004 to 2013, the mean incidence of Campylobacter bacteraemia in the host population was 0.71 cases per 100,000 population per year (95% CI 0.48–1.05 per 100,000 population) (Fig. 3).
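The pooled rate above is the number of bacteraemia cases divided by accumulated person-time. The paper does not state how the 95% confidence interval was derived, so the sketch below is an approximation under assumptions: it uses an exact Poisson (Garwood) interval and an illustrative person-time denominator rather than the actual ACT population figures.

# Minimal sketch (assumed exact Poisson interval; person-time is illustrative).
from scipy.stats import chi2

cases = 25                       # Campylobacter bacteraemia cases identified, 2004-2013
person_years = 3.5e6             # placeholder: roughly 10 years of ACT resident population

rate = cases / person_years * 100_000
lower = chi2.ppf(0.025, 2 * cases) / 2 / person_years * 100_000
upper = chi2.ppf(0.975, 2 * (cases + 1)) / 2 / person_years * 100_000
print(f"{rate:.2f} per 100,000 per year (95% CI {lower:.2f} to {upper:.2f})")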
We did not observe temporal trends in the proportion of Campylobacter-associated hospitalisations undergoing blood collection for culture or in the proportion of positive blood cultures among Campylobacter-associated hospitalisations.
Fig. 3 Incidence of Campylobacter spp. bacteraemia among ACT residents, 2004 to 2013
Factors associated with the collection of blood samples for culture and subsequent isolation of Campylobacter spp. are shown in Tables 2 and 3.
Table 2 Factors associated with collection of blood for culture among a cohort of Campylobacter-associated hospitalisations (n = 663)
Predictor variable | Estimated adjusted odds ratio (aOR) | 95% confidence interval | p-value
Infectious Diseases Unit admission | 8.00 | 1.58–40.61 | 0.01
Febrile during admission (≥ 38 °C) | 21.30 | 13.76–32.96 | < 0.001
Tachycardia | 1.75 | 1.13–2.72 | 0.01
Moderate to severe renal disease | 2.87 | 1.26–6.55 | 0.01
10–19 years age group | 0.36 | 0.19–0.71 | < 0.01
22 observations contained missing data. Hosmer and Lemeshow goodness of fit χ2 = 2.70, p = 0.61.
Table 3 Factors associated with blood stream isolation of Campylobacter spp. among Campylobacter-associated hospitalisations (n = 333)
Predictor variable | Estimated adjusted odds ratio (aOR) | 95% confidence interval | p-value
Moderate to severe liver disease | 48.89 | 7.03–340.22 | < 0.001
Haematology Unit admission | 14.67 | 2.99–72.07 | 0.001
Age group 70–79 years | 4.93 | 1.57–15.49 | < 0.01
Admission during summer months | 2.93 | 1.14–7.57 | 0.03
Indigenous Australian | 10.87 | 1.00–117.89 | 0.05
Hosmer and Lemeshow goodness of fit χ2 = 0.71, p = 0.70.
Antimicrobial susceptibility
We identified 548 Campylobacter-associated admissions with a primary isolation of Campylobacter spp. for which antimicrobial susceptibility testing was subsequently performed. For 526 (96%) of these cases, data were available for all three standard antimicrobials: ciprofloxacin, nalidixic acid and erythromycin. Of the remainder, nalidixic acid data were unavailable for 21 isolates, with erythromycin susceptibility unavailable for a single isolate. During the study period, 13% (70/548) of primary isolates exhibited non-susceptibility to at least one standard antimicrobial. The proportion of non-susceptible isolates ranged from ≤ 5.0% in 2004/05 to > 20.0% in 2012/13, with this increase being significant (χ2 = 6.12, p = 0.01) (Fig. 4). Among the individual antimicrobials, nalidixic acid non-susceptibility was reported for 9% (49/527) of tested isolates, 7% (40/548) for ciprofloxacin and 4% (21/547) for erythromycin.
Fig. 4 Non-susceptibility to standard antimicrobials from Campylobacter isolates obtained during Campylobacter-associated hospitalisations, 2004 to 2013
Figure 4 shows that low rates of erythromycin non-susceptibility were observed, while significant increases in both ciprofloxacin (χ2 = 16.51, p < 0.001) and nalidixic acid (χ2 = 10.85, p = 0.001) non-susceptibility occurred during the study period. Factors associated with antimicrobial non-susceptibility among Campylobacter-associated hospitalisations are shown in Tables 4 and 5.
Table 4 Factors associated with ciprofloxacin non-susceptible isolates among Campylobacter-associated hospitalisations (n = 411)
Predictor variable | Estimated adjusted odds ratio (aOR) | 95% confidence interval | p-value
Bloody diarrhoea | 0.30 | 0.09–1.03 | 0.06
Recent overseas travel | 15.53 | 2.86–84.18 | 0.001
Previous Campylobacter-associated hospitalisation | 12.87 | 1.72–96.12 | 0.01
Hosmer and Lemeshow goodness of fit χ2 = 0.03, p = 0.86.
Table 5 Factors associated with non-susceptibility to ciprofloxacin, nalidixic acid or erythromycin among a cohort of Campylobacter-associated hospitalisations (n = 548)
Predictor variable | Estimated adjusted odds ratio (aOR) | 95% confidence interval | p-value
Recent overseas travel | 11.80 | 3.18–43.83 | < 0.001
Previous Campylobacter-associated hospitalisation | 17.09 | 2.65–110.07 | < 0.01
Tachycardia | 0.48 | 0.26–0.89 | 0.02
Hosmer and Lemeshow goodness of fit χ2 = 0.01, p = 0.92.
Antibiotic treatment
Antimicrobial treatment was provided for 32% (219/685) of Campylobacter-associated hospitalisations. Those receiving treatment were significantly older (median 48.5 years, range 8 to 92 years) than admissions where antibiotics were not administered (median 34.8 years, range < 1 to 91 years; χ2 = 26.92, p < 0.001). No sex-based differences were observed. Treatment associated with bacteraemia is described in Table 1. Second-generation fluoroquinolones were used in 79% (172/219) of treated admissions. Ciprofloxacin was administered to 82.0% (141/172), with the remainder receiving norfloxacin. Those receiving ciprofloxacin were younger (median 47.2 years, range 12 to 90 years) than those receiving norfloxacin (median 64.3 years, range 19 to 92 years), but this difference was not significant. For admissions treated with ciprofloxacin, 91.4% (129/141) received treatment orally, with a median total daily dose of 1000 mg. For those receiving parenteral ciprofloxacin (n = 12), the median total daily dose was 800 mg. The median total daily dose for oral norfloxacin (n = 31) was 800 mg. Macrolides (azithromycin or erythromycin) were administered to 21.0% (46/219) of admissions, with 63.0% (29/46) receiving azithromycin. No age or sex differences were observed between those receiving azithromycin versus erythromycin. For admissions receiving azithromycin, 72.4% (21/29) received treatment orally, with a median total daily dose for both oral and IV azithromycin of 500 mg. Oral administration was provided for 94.1% (16/17) of admissions receiving erythromycin, with a median total daily dose of 2000 mg. One patient received an initial total daily dose of 3200 mg of IV erythromycin, which was later reduced to a total daily dose of 1600 mg. For two admissions (involving the same patient), a tetracycline (doxycycline) was used to treat a C. lari bacteraemia and enterocolitis. A significant increase in the proportion of Campylobacter-associated hospitalisations administered antimicrobials was observed over time (χ2 = 4.37, p = 0.04), rising from 27% (14/52) of admissions in 2004 to 38% (26/68) in 2013. No difference in the proportion of admissions treated with either fluoroquinolones or macrolides was observed. Among admissions receiving macrolides, a significant increase in the administration of azithromycin over erythromycin was observed (χ2 = 16.31, p < 0.001), most notably from 2011 onwards. No changes over time in the proportion of admissions treated with ciprofloxacin versus norfloxacin were observed. Factors associated with administration of antibiotics during Campylobacter-associated hospitalisations are shown in Table 6.
Table 6 Factors associated with antibiotic administration among a cohort of Campylobacter-associated hospitalisations (n = 607)
Predictor variable | Estimated adjusted odds ratio (aOR) | 95% confidence interval | p-value
Age 0–9 years | 0.07 | 0.01–0.57 | 0.01
Age 10–19 years | 0.44 | 0.20–0.97 | 0.04
Age 40–49 years | 2.34 | 1.14–4.79 | 0.02
Emergency Unit admission | 0.06 | 0.02–0.17 | < 0.001
Gastroenterology admission | 3.75 | 1.95–7.20 | < 0.001
Gen. Medicine admission | 2.02 | 1.22–3.35 | < 0.01
Infectious Diseases admission | 2.58 | 1.03–6.44 | 0.04
Vomiting | 1.70 | 1.11–2.61 | 0.02
Electrolyte imbalance | 1.73 | 1.12–2.67 | 0.01
Blood specimen for culture | 2.76 | 1.79–4.26 | < 0.001
Hosmer and Lemeshow goodness of fit χ2 = 9.23, p = 0.32.
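The dose summaries above follow from the total daily dose definition given in the Methods (dose in milligrams multiplied by the daily administration frequency). The sketch below is a hypothetical illustration of that bookkeeping, not the authors' data extraction; the example records, column names and doses are invented.

# Minimal sketch (invented records): total daily dose = dose_mg x doses per day,
# then the median by antibiotic and route, mirroring the summaries reported above.
import pandas as pd

rx = pd.DataFrame([
    {"antibiotic": "ciprofloxacin", "route": "PO", "dose_mg": 500, "freq_per_day": 2},
    {"antibiotic": "ciprofloxacin", "route": "IV", "dose_mg": 400, "freq_per_day": 2},
    {"antibiotic": "norfloxacin",   "route": "PO", "dose_mg": 400, "freq_per_day": 2},
    {"antibiotic": "azithromycin",  "route": "PO", "dose_mg": 500, "freq_per_day": 1},
    {"antibiotic": "erythromycin",  "route": "PO", "dose_mg": 500, "freq_per_day": 4},
])

rx["total_daily_dose_mg"] = rx["dose_mg"] * rx["freq_per_day"]
summary = (rx.groupby(["antibiotic", "route"])["total_daily_dose_mg"]
             .median()
             .rename("median_total_daily_dose_mg"))
print(summary)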
Campylobacter bacteraemia: Out of 685 admissions, 333 (49%) had blood drawn for culture and 25 (7.5%) of these tested positive (Fig. 1). Speciation was performed on 21 (84%) of these case isolates, identifying 15 (71.4%) cases of C. jejuni, 5 (23.8%) cases of C. coli and a single (4.8%) case of C. lari bacteraemia. Amongst the 25 admissions with bacteraemia, 15 were male (60%) and 10 were female (40%), and the median age was 59.5 years (range 12 to 90 years). Amongst the 308 negative cases there were 177 males (57.5%) and 131 females (42.5%), with a median age of 38.8 years (range < 1 to 92 years).
This age difference was significant (median test χ² = 5.30, p = 0.02), but there was no evidence of a difference in sex composition (χ² = 0.06, p = 0.81). The age and sex of admissions in relation to blood culture status are shown in Fig. 2. We also compared those who had blood drawn for culture (whether positive or negative) with those who did not, to assess demographic differences. We saw no evidence of a difference in median age (median test χ² = 0.18, p = 0.68), but males were more likely to have blood drawn (χ² = 11.17, p = 0.001). We observed no difference in the proportion of admissions with documented comorbidities who had blood specimens collected for culture compared with admissions without comorbidities (χ² = 1.42, p = 0.23). Bacteraemia generally occurred in the context of antecedent diarrhoeal illness, although two admissions involved primary bacteraemia (i.e. without diarrhoea) in patients with haematological malignancies.
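The comparisons above combine Mood's median test (for ages) with Pearson's chi-square test (for the sex-by-culture-status table). As an illustration only, since the patient-level data are not reproduced in this article, a minimal sketch using scipy; the age arrays are synthetic stand-ins, while the sex counts are taken from the text:

```python
# Illustrative only: the age arrays below are synthetic stand-ins for
# patient-level data that are not included in this article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age_positive = rng.normal(60, 15, 25)    # hypothetical ages, culture-positive
age_negative = rng.normal(39, 20, 308)   # hypothetical ages, culture-negative

# Mood's median test, as used for the age comparison (reported chi2 = 5.30, p = 0.02)
chi2_age, p_age, grand_median, table = stats.median_test(age_positive, age_negative)
print(f"median test: chi2 = {chi2_age:.2f}, p = {p_age:.3f}")

# Pearson chi-square on the 2x2 sex-by-culture-status table from the text:
# 15 M / 10 F positive, 177 M / 131 F negative (reported chi2 = 0.06, p = 0.81).
# Note: chi2_contingency applies Yates' continuity correction to 2x2 tables by
# default; whether the original analysis did so is not stated.
sex_table = np.array([[15, 10],
                      [177, 131]])
chi2_sex, p_sex, dof, expected = stats.chi2_contingency(sex_table)
print(f"sex comparison: chi2 = {chi2_sex:.2f}, p = {p_sex:.2f}")
```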
Comorbidities, while common, were not a characteristic feature of cases hospitalised with bacteraemia, as shown in Table 1. No deaths were identified among cases with bacteraemia.
Fig. 1. Flow diagram for blood culture collection and subsequent Campylobacter spp. bacteraemia among a cohort of Campylobacter-associated hospitalisations
Fig. 2. Jitter plot of age and sex by blood culture status among Campylobacter-associated hospitalisations
Table 1. Characteristics of cases hospitalised with Campylobacter bacteraemia, 2004 to 2013
Year (case) | Age range/sex | Species | Bacteraemia source | Antimicrobial susceptibility | Antimicrobial treatment | Significant medical history and risk factors
2004 | 80+ M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Age, nil other significant
2005 (a) | 50–59 F | Campylobacter sp. | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Acute myeloid leukaemia
2005 (b) | 60–69 F | Campylobacter sp. | Enteric (secondary) | Fully sensitive | Nil | Nil significant
2005 (c) | 40–49 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | IV drug use, PUD on omeprazole
2006 (a) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Untreated Stage III HIV
2008 (a) | 20–29 M | Campylobacter sp. | Enteric (secondary) | Fully sensitive | IV azithromycin 500 mg qd | Nil significant
2008 (b) | 60–69 M | C. coli | Primary bacteraemia | Ciprofloxacin and erythromycin resistant | Nil | Lymphocytic lymphoma
2009 | 30–39 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 250 mg bd | History of renal transplant secondary to IgA nephropathy
2010 (a) | 60–69 M | Campylobacter sp. | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Bowel carcinoma, current chemotherapy
2010 (b) | 30–39 M | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Alcoholic liver disease with portal hypertension and bleeding varices
2010 (c) | 70–79 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (d) | 10–19 F | C. jejuni | Enteric (secondary) | Fully sensitive | PO azithromycin 500 mg qd (upon discharge) | Nil significant
2010 (e) | 70–79 F | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | T2DM
2010 (f) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO norfloxacin 400 mg bd | Irritable bowel syndrome
2011 (a) | 60–69 M | C. lari | Enteric (secondary) | Ciprofloxacin resistant | PO doxycycline 100 mg bd | Alcoholic liver disease with portal hypertension, recent intracerebral bleed
2011 (b) | 80+ M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil prescribed | Age, nil other significant
2011 (c) | 70–79 M | C. coli | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Asplenic
2012 (a) | 40–49 M | C. jejuni | Enteric (secondary) | Fully sensitive | IV ciprofloxacin 500 mg bd | Multiple sclerosis, current chemotherapy pre-stem cell transplantation, IDDM
2012 (b) | 30–39 F | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Pregnant (33/40 weeks), IDDM
2012 (c) | 70–79 M | C. jejuni | Enteric (secondary) | Ciprofloxacin and nalidixic acid resistant | Nil | Diabetic neuropathy, chronic renal failure
2012 (d) | 60–69 F | C. coli | Enteric (secondary) | Ciprofloxacin and nalidixic acid resistant | Nil | Nil significant
2013 (a) | 20–29 M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Nil significant
2013 (b) | 20–29 M | C. jejuni | Primary bacteraemia | Fully sensitive | PO ciprofloxacin 750 mg bd (upon discharge) | B cell leukaemia, AVN (on steroids), SIADH
2013 (c) | 50–59 M | C. jejuni | Enteric (secondary) | Fully sensitive | PO ciprofloxacin 500 mg bd | Liver failure with cirrhosis, portal hypertension secondary to hepatitis C, T2DM, hypothyroidism; awaiting transplant
2013 (d) | 70–79 M | C. jejuni | Enteric (secondary) | Fully sensitive | Nil | Age, nil other significant
PO, per oral; IV, intravenous; PUD, peptic ulcer disease; HIV, human immunodeficiency virus; T2DM, type 2 diabetes mellitus; IDDM, insulin dependent diabetes mellitus; AVN, avascular necrosis; SIADH, syndrome of inappropriate antidiuretic hormone.
During the period 2004 to 2013, the mean incidence of Campylobacter bacteraemia in the host population was 0.71 cases per 100,000 population per year (95% CI 0.48–1.05 per 100,000 population) (Fig. 3). We did not observe temporal trends in the proportion of Campylobacter-associated hospitalisations undergoing blood collection for culture, or in the proportion of positive blood cultures among Campylobacter-associated hospitalisations.
Fig. 3. Incidence of Campylobacter spp. bacteraemia among ACT residents, 2004 to 2013
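The reported mean incidence (0.71 per 100,000 population per year, 95% CI 0.48–1.05) is what 25 cases over ten years yield for a population of roughly 350,000. A minimal sketch of that calculation with an exact (Garwood) Poisson confidence interval follows; the population denominator is an assumption for illustration and is not stated in this section, so the computed interval only approximates the one reported:

```python
# Sketch: incidence per 100,000 person-years with an exact Poisson (Garwood) CI.
# The population figure is an assumed average ACT population, not a value
# reported in this section.
from scipy.stats import chi2

cases = 25
person_years = 352_000 * 10          # assumed mean population x 10 study years

def poisson_exact_ci(k: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided CI for a Poisson count k, via the chi-square relationship."""
    lower = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

lo, hi = poisson_exact_ci(cases)
scale = 100_000 / person_years
print(f"incidence: {cases * scale:.2f} per 100,000 person-years "
      f"(95% CI {lo * scale:.2f}-{hi * scale:.2f})")
```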
Factors associated with the collection of blood samples for culture and subsequent isolation of Campylobacter spp. are shown in Tables 2 and 3.
Table 2. Factors associated with collection of blood for culture among a cohort of Campylobacter-associated hospitalisations (n = 663)
Predictor variable | aOR | 95% CI | p-value
Infectious Diseases Unit admission | 8.00 | 1.58–40.61 | 0.01
Febrile during admission (≥ 38 °C) | 21.30 | 13.76–32.96 | < 0.001
Tachycardia | 1.75 | 1.13–2.72 | 0.01
Moderate to severe renal disease | 2.87 | 1.26–6.55 | 0.01
10–19 years age group | 0.36 | 0.19–0.71 | < 0.01
22 observations contained missing data. Hosmer and Lemeshow goodness of fit χ² = 2.70, p = 0.61.
Table 3. Factors associated with blood stream isolation of Campylobacter spp. among Campylobacter-associated hospitalisations (n = 333)
Predictor variable | aOR | 95% CI | p-value
Moderate to severe liver disease | 48.89 | 7.03–340.22 | < 0.001
Haematology Unit admission | 14.67 | 2.99–72.07 | 0.001
Age group 70–79 years | 4.93 | 1.57–15.49 | < 0.01
Admission during summer months | 2.93 | 1.14–7.57 | 0.03
Indigenous Australian | 10.87 | 1.00–117.89 | 0.05
Hosmer and Lemeshow goodness of fit χ² = 0.71, p = 0.70.
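Tables 2 to 6 report adjusted odds ratios (aOR) from multivariable logistic regression together with Hosmer and Lemeshow goodness-of-fit statistics. The patient-level data are not available here, so the following is only a sketch of how estimates of this kind are typically produced, using statsmodels on a synthetic data frame with hypothetical column names (blood_culture, febrile, tachycardia):

```python
# Sketch only: synthetic data and hypothetical variable names, illustrating how
# adjusted odds ratios, 95% CIs and a Hosmer-Lemeshow statistic of the kind
# reported in Tables 2-6 can be computed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 663
df = pd.DataFrame({
    "febrile": rng.integers(0, 2, n),
    "tachycardia": rng.integers(0, 2, n),
})
logit_p = -2.0 + 3.0 * df["febrile"] + 0.6 * df["tachycardia"]
df["blood_culture"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = smf.logit("blood_culture ~ febrile + tachycardia", data=df).fit(disp=0)
aor = np.exp(model.params).rename("aOR")   # adjusted odds ratios
ci = np.exp(model.conf_int())              # 95% confidence intervals on the OR scale
ci.columns = ["2.5%", "97.5%"]
print(pd.concat([aor, ci], axis=1))

# Hosmer-Lemeshow statistic over deciles of predicted risk
pred = model.predict(df)
deciles = pd.qcut(pred, 10, duplicates="drop")
obs = df["blood_culture"].groupby(deciles).sum()
exp = pred.groupby(deciles).sum()
tot = pred.groupby(deciles).count()
hl = (((obs - exp) ** 2) / (exp * (1 - exp / tot))).sum()
dof = len(tot) - 2
print(f"Hosmer-Lemeshow chi2 = {hl:.2f}, p = {chi2.sf(hl, dof):.2f}")
```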
Antimicrobial susceptibility: We identified 548 Campylobacter-associated admissions with a primary isolation of Campylobacter spp. where subsequent antimicrobial susceptibility testing was performed. For 526 (96%) of these cases, data were available for three standard antimicrobials: ciprofloxacin, nalidixic acid and erythromycin. Of the remainder, nalidixic acid data were unavailable for 21 isolates and erythromycin susceptibility was unavailable for a single isolate. During the study period, 13% (70/548) of primary isolates exhibited non-susceptibility to at least one standard antimicrobial. The proportion of non-susceptible isolates ranged from ≤ 5.0% in 2004/05 to > 20.0% in 2012/13, a significant increase (χ² = 6.12, p = 0.01) (Fig. 4). Among the individual antimicrobials, non-susceptibility was reported for 9% (49/527) of isolates tested against nalidixic acid, 7% (40/548) against ciprofloxacin and 4% (21/547) against erythromycin.
Fig. 4. Non-susceptibility to standard antimicrobials from Campylobacter isolates obtained during Campylobacter-associated hospitalisations, 2004 to 2013
Figure 4 shows that rates of erythromycin non-susceptibility remained low, while significant increases in both ciprofloxacin (χ² = 16.51, p < 0.001) and nalidixic acid non-susceptibility (χ² = 10.85, p = 0.001) occurred during the study period. Factors associated with antimicrobial non-susceptibility among Campylobacter-associated hospitalisations are shown in Tables 4 and 5.
Table 4. Factors associated with ciprofloxacin non-susceptible isolates among Campylobacter-associated hospitalisations (n = 411)
Predictor variable | aOR | 95% CI | p-value
Bloody diarrhoea | 0.30 | 0.09–1.03 | 0.06
Recent overseas travel | 15.53 | 2.86–84.18 | 0.001
Previous Campylobacter-associated hospitalisation | 12.87 | 1.72–96.12 | 0.01
Hosmer and Lemeshow goodness of fit χ² = 0.03, p = 0.86.
Table 5. Factors associated with non-susceptibility to ciprofloxacin, nalidixic acid or erythromycin among a cohort of Campylobacter-associated hospitalisations (n = 548)
Predictor variable | aOR | 95% CI | p-value
Recent overseas travel | 11.80 | 3.18–43.83 | < 0.001
Previous Campylobacter-associated hospitalisation | 17.09 | 2.65–110.07 | < 0.01
Tachycardia | 0.48 | 0.26–0.89 | 0.02
Hosmer and Lemeshow goodness of fit χ² = 0.01, p = 0.92.
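The year-on-year increases described above (for non-susceptibility overall and for ciprofloxacin and nalidixic acid individually) are the kind of result a chi-square test for linear trend in proportions (Cochran-Armitage) provides. A minimal sketch follows; the yearly counts are illustrative placeholders, since the study's per-year figures are presented only graphically in Fig. 4:

```python
# Sketch: Cochran-Armitage chi-square test for linear trend in proportions,
# using made-up yearly counts; the study's per-year counts appear only in Fig. 4.
import numpy as np
from scipy.stats import chi2

years = np.arange(2004, 2014, dtype=float)                     # score for each year
non_susceptible = np.array([2, 2, 3, 5, 6, 7, 8, 9, 12, 16])   # hypothetical numerators
tested = np.array([52, 54, 55, 54, 56, 58, 55, 57, 56, 51])    # hypothetical denominators

N = tested.sum()
p_bar = non_susceptible.sum() / N
t = (years * (non_susceptible - tested * p_bar)).sum()         # trend statistic
var_t = p_bar * (1 - p_bar) * ((tested * years**2).sum() - (tested * years).sum() ** 2 / N)
chi2_trend = t**2 / var_t                                      # 1 degree of freedom
print(f"trend chi2 = {chi2_trend:.2f}, p = {chi2.sf(chi2_trend, df=1):.3f}")
```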
Antibiotic treatment: Antimicrobial treatment was provided for 32% (219/685) of Campylobacter-associated hospitalisations. Those receiving treatment were observed to be significantly older (median 48.5 years, range 8 years to 92 years) than admissions where antibiotics were not administered (median 34.8 years, range < 1 year to 91 years, χ² = 26.92, p < 0.001). No sex-based differences were observed. Treatment associated with bacteraemia is described in Table 1. Second generation fluoroquinolones were used in 79% (172/219) of treated admissions. Ciprofloxacin was administered to 82.0% (141/172), with the remainder receiving norfloxacin. Those receiving ciprofloxacin were younger (median 47.2 years, minimum 12 years, maximum 90 years) compared with those receiving norfloxacin (median age 64.3 years, minimum 19 years, maximum 92 years), but this difference was not significant. For admissions treated with ciprofloxacin, 91.4% (129/141) received treatment orally, with a median total daily dose of 1000 mg. For those receiving parenteral ciprofloxacin (n = 12), the median total daily dose was 800 mg. The median total daily dosage for oral norfloxacin (n = 31) was 800 mg. Macrolides (azithromycin or erythromycin) were administered to 21.0% (46/219) of admissions, with 63.0% (29/46) receiving azithromycin. No age or sex differences were observed between those receiving azithromycin versus erythromycin. For admissions receiving azithromycin, 72.4% (21/29) received treatment orally, with a median total daily dose for both oral and IV azithromycin of 500 mg. Oral administration was provided for 94.1% (16/17) of admissions receiving erythromycin, with a median total daily dose of 2000 mg.
One patient received an initial total daily dose of 3200 mg IV erythromycin, prior to this being reduced to a total daily dose of 1600 mg. For two admissions (involving the same patient), a tetracycline (doxycycline) was used to treat a C. lari bacteraemia and enterocolitis. A significant increase in the proportion of Campylobacter-associated hospitalisations being administered antimicrobials was observed over time (χ² = 4.37, p = 0.04), rising from 27% (14/52) of admissions in 2004 to 38% (26/68) in 2013. No difference in the proportion of admissions treated with either fluoroquinolones or macrolides was observed. Among admissions receiving macrolides, a significant increase in the administration of azithromycin over erythromycin was observed (χ² = 16.31, p < 0.001), most notably from 2011 onwards. No changes over time in the proportion of admissions treated with ciprofloxacin versus norfloxacin were observed. Factors associated with administration of antibiotics during Campylobacter-associated hospitalisations are shown in Table 6.
Table 6. Factors associated with antibiotic administration among a cohort of Campylobacter-associated hospitalisations (n = 607)
Predictor variable | aOR | 95% CI | p-value
Age 0–9 years | 0.07 | 0.01–0.57 | 0.01
Age 10–19 years | 0.44 | 0.20–0.97 | 0.04
Age 40–49 years | 2.34 | 1.14–4.79 | 0.02
Emergency Unit admission | 0.06 | 0.02–0.17 | < 0.001
Gastroenterology admission | 3.75 | 1.95–7.20 | < 0.001
General Medicine admission | 2.02 | 1.22–3.35 | < 0.01
Infectious Diseases admission | 2.58 | 1.03–6.44 | 0.04
Vomiting | 1.70 | 1.11–2.61 | 0.02
Electrolyte imbalance | 1.73 | 1.12–2.67 | 0.01
Blood specimen for culture | 2.76 | 1.79–4.26 | < 0.001
Hosmer and Lemeshow goodness of fit χ² = 9.23, p = 0.32.
Discussion: We observed a high rate of bacteraemia in this study of Campylobacter-associated hospital admissions. Although blood cultures are not routinely performed for cases admitted with infectious gastroenteritis, we observed that, among Campylobacter-associated hospitalisations, the presence of fever, pre-existing kidney disease or admission to particular subspecialty units increased the likelihood of blood being collected. Comorbidity and advanced age were both associated with subsequent isolation of Campylobacter from blood. One third of admissions received antibiotic treatment during the study period, with the proportion treated rising significantly over time. We also observed a temporal increase in the proportion of isolates exhibiting non-susceptibility to standard antimicrobials, particularly fluoroquinolones, which is concerning given the clinical and public health focus on the development of antimicrobial resistance. Factors associated with ciprofloxacin resistance included recent overseas travel and previous hospitalisation for campylobacteriosis. This has important clinical consequences, as the option to treat with antibiotics may be of greater importance among hospitalised cases than among non-hospitalised cases. Campylobacteriosis most commonly presents as a self-limiting enterocolitis, with secondary bacteraemia a recognised but relatively infrequent complication [10]. A number of studies have sought to determine the incidence of Campylobacter bacteraemia in high-income settings, estimating rates to be between 0.20 and 0.47 cases per 100,000 population [3, 11, 12].
A more recent Swedish study [13] has reported an incidence of 1.00 case per 100,000 population, with this linked to changes in automated blood culture collection systems. In our study we observed only 25 incident cases of bacteraemia, equating to a mean incidence of 0.71 cases per 100,000 population. While bacteraemia is relatively infrequent, our population rate appears high, likely reflecting the high background incidence of campylobacteriosis in the ACT [14] and a lower threshold for testing among hospitalised cases. Blood cultures are the gold standard for the diagnosis of BSIs [4]. In our study, patients who had a measured fever (≥ 38 °C), underlying chronic kidney disease or who were admitted under the care of the Infectious Diseases Unit were more likely to have blood drawn for culture. Fever is a common prompt for blood cultures, with patients whose measured temperatures are ≥ 38 °C having an increased likelihood of bacteraemia [15]. Similarly, tachycardia, another vital sign, has also been used in clinical prediction rules for bloodstream infection [16]. Although comorbidities do not typically feature in BSI prediction rules, underlying kidney disease has been shown to be a risk factor for bloodstream infections in older patients [17]. Several consequences of kidney disease have been proposed to contribute to infection, including malnutrition, chronic inflammation, retained uremic solutes, trace element deficiencies and metabolic abnormalities [18]. The finding that blood culture is more likely to be ordered by Infectious Diseases clinicians is unsurprising given the clinical focus of this subspecialty. Conversely, there was significant evidence that those aged between 10 and 19 years were less likely to have blood cultures performed. This finding likely reflects fewer hospitalisations being observed due to the low incidence of campylobacteriosis in this age group [14]. Predicting BSIs is challenging, with numerous models developed for specific populations, settings and sources of infection [19]. The pre-test probability of bacteraemia will therefore vary considerably based upon the clinical context and source of infection [15]. Generally only 5–10% of blood cultures are positive, and of those positive results, between 30 and 50% represent contaminants [19]. In our study we observed blood cultures to be routinely requested, with 49% of acute admissions having blood drawn and 7% being positive for Campylobacter spp. These admissions involved both immunosuppressed and immunocompetent patients. The impacts of comorbid and immunosuppressive conditions on the clinical variables used in predicting bacteraemia are less well understood [15], although associations between campylobacteriosis and invasive disease in immunosuppressed patients have been described [3, 20]. Transient bacteraemia may also be common among immunocompetent hosts with Campylobacter enterocolitis but is less frequently detected, due to the bactericidal effect of human serum and the reduced frequency of blood culture among acute enterocolitis patients [10], although nearly half of acute admissions in our study had blood culturing performed. Notably, similar proportions of acute admissions with and without comorbidities had blood drawn for culture, suggesting that, in our study population, comorbidity exerted limited influence on decisions to request blood cultures. This is perhaps not surprising given that prediction rules for BSI focus on clinical signs [15].
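For context on the positivity figures discussed here (49% of admissions cultured, 7% positive), an exact binomial confidence interval around the study's 25 positives out of 333 cultures can be computed directly. A small sketch; the interval itself is not reported in the article:

```python
# Sketch: exact (Clopper-Pearson) 95% CI for the proportion of blood cultures
# positive for Campylobacter spp. (25/333); this interval is not reported in the text.
from scipy.stats import binomtest

result = binomtest(k=25, n=333)
ci = result.proportion_ci(confidence_level=0.95, method="exact")
print(f"positivity: {25 / 333:.1%} (95% CI {ci.low:.1%} to {ci.high:.1%})")
```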
Factors associated with a positive blood stream isolate (bacteraemia): The statistical association we observed between positive blood cultures and the presence of underlying liver disease and haematological malignancy is in keeping with hospital-based studies showing higher proportions of these conditions among cases with Campylobacter bacteraemia [11, 21, 22]. In addition, advanced age was statistically associated with detection of bacteraemia, a characteristic seen in larger population-based studies of campylobacteriosis [3]. Several studies also report seasonality in Campylobacter bacteraemia [23, 24]. Our results show hospitalisation during the southern hemisphere summer to be associated with bacteraemia, a finding that aligns with the seasonality of Campylobacter enteritis in Australia [14]. A further association with blood culture positivity was Indigenous status. This result was, however, derived from a small number of observations, with Indigenous Australians comprising 1% of Campylobacter-associated hospitalisations in the ACT [6].
Factors associated with non-susceptibility to antimicrobials: We observed statistically significant evidence of an increase in the proportion of isolates exhibiting non-susceptibility to standard antimicrobials, with this increase driven primarily by non-susceptibility to fluoroquinolones (including nalidixic acid). In keeping with trends in comparable settings, only low rates of non-susceptibility to macrolides were observed [25, 26]. Internationally, fluoroquinolone resistance among Campylobacter spp. has become a major public health problem [5]. Increases in the proportion of clinical isolates demonstrating ciprofloxacin resistance have been observed in the United Kingdom (UK) and United States (US) [25, 27], while in the European Union (EU) more than half (54.6%) of human-associated C. jejuni and two-thirds (66.6%) of C. coli isolates were resistant to ciprofloxacin in 2013 [28]. Australia has previously reported low rates of fluoroquinolone non-susceptibility among clinical Campylobacter isolates (around 2% in 2006) [29]. This has been credited to a national pharmaceutical subsidy scheme that restricted human quinolone use and to regulation forbidding quinolone use in food-producing animals [30]. More recent studies reveal this situation has changed markedly, with current rates of ciprofloxacin resistance in clinically derived Campylobacter isolates now ranging between 13 and 20% [31, 32]. Overseas travel is a well-established risk factor for the acquisition of ciprofloxacin-resistant Campylobacter infections [26, 33]. In our study, cases with ciprofloxacin resistance who reported recent overseas travel had all travelled to India or South-East Asia, destinations associated with high rates of antimicrobial resistance among enteric pathogens (including Campylobacter spp.) [34, 35]. Nevertheless, the majority of ciprofloxacin non-susceptible isolates in our study had no recent overseas travel identified, meaning factors associated with domestic acquisition of ciprofloxacin resistance require greater consideration. US data similarly show increases in domestically acquired ciprofloxacin resistance [27]. We observed that ciprofloxacin non-susceptibility was also associated with previous hospitalisation with campylobacteriosis. There is a paucity of population-level data on recurrent hospitalisations involving non-susceptible Campylobacter isolates. However, rates of recurrent campylobacteriosis in community settings have been reported to be as high as 248 episodes per 100,000 cases per year in the five years following an initial infection [36]. Explanations for our finding could be either host- or pathogen-related, including higher rates of humoral immunodeficiency in patients hospitalised with recurrent campylobacteriosis [11, 37], or de novo mutations or increases in resistant organisms already present at subclinical levels. While there has been debate around isolate non-susceptibility and disease severity, reanalysis of the issue appears to show no substantial clinical differences between resistant and susceptible Campylobacter isolates [38]. Consequently, our finding of reduced odds of bloody diarrhoea and tachycardia among admissions with ciprofloxacin non-susceptibility and any antimicrobial non-susceptibility, respectively, most likely represents the so-called "healthy traveller" effect [39].
Factors associated with antibiotic treatment during admission: During the study period we observed statistically significant evidence of an increase in the proportion of Campylobacter-associated hospitalisations receiving antibiotic treatment. Antimicrobial therapy for Campylobacter enterocolitis is not routinely advised, but may be recommended for patients with or at risk of severe disease, including high volume or bloody diarrhoea, high fever, symptom duration greater than one week, pregnancy or immunocompromised status [2, 10, 40]. Given that hospitalisation can be viewed as a marker of disease severity [41], the rates of treatment within our study population might be expected to differ from those among non-hospitalised campylobacteriosis cases. Other research on campylobacteriosis in the ACT has found no concomitant increase in hospitalisations during the same period as the current study [6]. Antibiotic treatment rates may therefore reflect local treatment practices rather than a response to disease severity, with data showing rates of appropriate prescribing and compliance with antibiotic guidelines in ACT hospitals to be the lowest in Australia during the study period [42]. One-third of Campylobacter-associated hospitalisations received antibiotics, either empirically or as targeted therapy for confirmed campylobacteriosis. Second generation fluoroquinolones (mainly oral ciprofloxacin) comprised 80% of treatment, with macrolides the remainder. Australia has been successful in efforts to limit the use of quinolones in humans and to prohibit their use in food-producing animals [30]. This has preserved their clinical use in Australia, with ciprofloxacin and norfloxacin remaining empirical treatment options for acute infectious diarrhoea, while being recommended alongside azithromycin for treatment of domestically acquired Campylobacter enteritis [40]. Conversely, the high rates of quinolone resistance experienced in the UK, EU and US have seen macrolide treatment recommended, or a greater emphasis placed on travel history and knowledge of local resistance patterns to guide empirical prescribing [43, 44]. Within our hospitalised study population, the strongest predictor of antibiotic treatment was collection of a blood specimen for culture. Both antibiotic prescription and the ordering of blood cultures are clinical decisions, suggesting that the underlying clinical context observed by treating clinicians influences both practices, inducing them to be positively associated. Admission under specific clinical units, including Gastroenterology, Infectious Diseases and General Medicine, was also associated with increased odds of receiving antimicrobial therapy. Hospitalisation implies a higher level of morbidity, potentially explaining the higher likelihood of antibiotic administration. Variation in treatment focus could also be expected between subspecialties, especially when underlying comorbidities are exacerbated. Such decision making may be further influenced by routine clinical behaviour, unease regarding the consequences of BSI, or by the acceptability of not obtaining blood cultures among particular specialities [4]. Age was also an important factor in treatment, with paediatric admissions less likely to receive antibiotics. This reflects that fluoroquinolones, the most commonly prescribed antibiotic class, are not recommended for use in children due to safety concerns [45]. We also observed that patients aged 40–49 years were more likely to be prescribed antibiotics. Reasons for this are less certain, but around 20% of admissions to high-prescribing units such as Gastroenterology and Infectious Diseases were in this age range. Vomiting and electrolyte imbalance were also associated with provision of antibiotic therapy. Vomiting is a less frequently reported symptom of campylobacteriosis but serves as an indicator of disease severity [46] and a predictor of bacteraemia [47]. While we did not assess the severity of dehydration, it is likely that a population such as ours included a higher proportion of cases with more pronounced symptomatology compared with non-hospitalised cases.
Limitations: There are a number of potential limitations to our study. Firstly, our rates of bacteraemia may underestimate the true incidence, as we identified cases using only public hospital laboratory data. Other cases of Campylobacter bacteraemia may have been diagnosed and managed in the community or the private hospital sector, although the clinical significance of these is less certain. A second limitation relates to the precision of our model estimates, with the small numbers of observations for some outcomes making detection of clinically meaningful associations challenging. Despite this, the observed associations still align plausibly with the epidemiology of Campylobacter. A third limitation relates to the generalisability of our findings. Our study population was hospital-based and drawn from a single Australian territory, and regional and international differences in the epidemiology of campylobacteriosis have been observed in high-income settings [14, 22, 48]. Finally, antimicrobial susceptibility testing and bacterial speciation were not performed on all isolates, limiting exploration of species-specific features such as higher macrolide resistance rates among C. coli isolates [49].
Conclusions: Campylobacter infections cause a substantial disease burden, as reflected by the high number of hospitalisations and the high incidence of bacteraemia in our study. While a spectrum of illness can be observed among hospitalisations, many cases exhibit signs suggestive of systemic disease. Furthermore, both the proportion of cases receiving antibiotic treatment and the proportion with isolates non-susceptible to standard antimicrobials increased over time. Given the increasing incidence of Campylobacter infections, particularly among older patients, understanding the burden of hospitalisation becomes increasingly important.
This study provides some evidence in relation to the clinical factors influencing the management of hospitalised cases in high-income settings.
Background: Campylobacter spp. cause mostly self-limiting enterocolitis, although a significant proportion of cases require hospitalisation highlighting potential for severe disease. Among people admitted, blood culture specimens are frequently collected and antibiotic treatment is initiated. We sought to understand clinical and host factors associated with bacteraemia, antibiotic treatment and isolate non-susceptibility among Campylobacter-associated hospitalisations. Methods: Using linked hospital microbiology and administrative data we identified and reviewed Campylobacter-associated hospitalisations between 2004 and 2013. We calculated population-level incidence for Campylobacter bacteraemia and used logistic regression to examine factors associated with bacteraemia, antibiotic treatment and isolate non-susceptibility among Campylobacter-associated hospitalisations. Results: Among 685 Campylobacter-associated hospitalisations, we identified 25 admissions for bacteraemia, an estimated incidence of 0.71 cases per 100,000 population per year. Around half of hospitalisations (333/685) had blood culturing performed. Factors associated with bacteraemia included underlying liver disease (aOR 48.89, 95% CI 7.03-340.22, p < 0.001), Haematology unit admission (aOR 14.67, 95% CI 2.99-72.07, p = 0.001) and age 70-79 years (aOR 4.93, 95% CI 1.57-15.49). Approximately one-third (219/685) of admissions received antibiotics with treatment rates increasing significantly over time (p < 0.05). Factors associated with antibiotic treatment included Gastroenterology unit admission (aOR 3.75, 95% CI 1.95-7.20, p < 0.001), having blood cultures taken (aOR 2.76, 95% CI 1.79-4.26, p < 0.001) and age 40-49 years (aOR 2.34, 95% CI 1.14-4.79, p = 0.02). Non-susceptibility of isolates to standard antimicrobials increased significantly over time (p = 0.01) and was associated with overseas travel (aOR 11.80 95% CI 3.18-43.83, p < 0.001) and negatively associated with tachycardia (aOR 0.48, 95%CI 0.26-0.88, p = 0.02), suggesting a healthy traveller effect. Conclusions: Campylobacter infections result in considerable hospital burden. Among those admitted to hospital, an interplay of factors involving clinical presentation, presence of underlying comorbidities, complications and increasing age influence how a case is investigated and managed.
Background: Campylobacter spp. are internationally significant as a cause of infectious diarrhoeal disease [1]. Most cases of infectious diarrhoea, including those caused by Campylobacter spp., are self-limiting, with management focused on maintenance of hydration via fluid repletion [2]. However, for persons hospitalised with Campylobacter infection, clinical thresholds for exclusion of bacteraemia and the consideration of antimicrobial therapy differ due to symptom severity, risk of complications or the exacerbation of underlying co-morbidities [2]. While bacteraemia is an uncommon complication of campylobacteriosis, testing and diagnosis typically occur when a person is hospitalised [3]. Within a hospital setting, clinical tolerances and the challenge of predicting bacteraemia among febrile admissions with an enteric focus are important considerations [4]. Further, the likelihood of clinicians commencing antibiotic therapy may also increase among hospitalised cases, highlighting the importance of judicious prescribing and an understanding of isolate susceptibility patterns to ensure viable treatment options remain [5]. We describe rates of bacteraemia, antimicrobial susceptibility and treatment among a cohort of Campylobacter-associated hospitalisations. We also examine clinical and host factors associated with the diagnosis of bloodstream infections (BSI), isolate non-susceptibility and antibiotic treatment of hospitalised cases. Conclusions: Campylobacter infections cause a substantial disease burden, as reflected by the high number of hospitalisations and the high incidence of bacteraemia in our study. While a spectrum of illness can be observed among hospitalisations, many cases exhibit signs suggestive of systemic disease. Furthermore, both the proportion of cases receiving antibiotic treatment and the proportion with isolates non-susceptible to standard antimicrobials increased over time. Given the increasing incidence of Campylobacter infections, particularly among older patients, understanding the hospitalisation burden becomes increasingly important. This study provides evidence on clinical factors influencing the management of hospitalised cases in high-income settings.
17,912
441
[ 231, 1504, 385, 363, 2065, 567, 695, 163, 560, 669, 193 ]
14
[ "usepackage", "campylobacter", "associated", "document", "ciprofloxacin", "campylobacter associated", "blood", "non", "hospitalisations", "susceptibility" ]
[ "bacteraemia cohort campylobacter", "campylobacter infection clinical", "bacteraemia study campylobacter", "hospitalised campylobacter infection", "incidence campylobacter bacteraemia" ]
Oncofertility: combination of ovarian stimulation with subsequent ovarian tissue extraction on the day of oocyte retrieval.
23510640
New anticancer treatments have increased survival rates for cancer patients, but often at the cost of fertility. Several strategies are currently available for preserving fertility. However, the chances of achieving a pregnancy with any one technique are still limited. A combination of methods is therefore recommended in order to maximize women's chances of future fertility. In this retrospective study, ovarian stimulation was combined with ovarian tissue extraction on the day of oocyte retrieval, and the quality of the ovarian tissue, the number and quality of oocytes, the time requirements, and the safety of the strategy were examined.
BACKGROUND
Fourteen female patients suffering from malignant diseases underwent one in vitro fertilization cycle. Different stimulation protocols were used, depending on the menstrual cycle. Transvaginal oocyte retrieval was scheduled 34-36 h after human chorionic gonadotropin administration. Immediately afterwards, ovarian tissue was extracted laparoscopically.
METHODS
A mean of 10 oocytes were retrieved per patient, and 67% of the oocytes were successfully fertilized using intracytoplasmic sperm injection. No periprocedural complications and no complications leading to postponement of the start of chemotherapy occurred. The ovarian tissues were of good quality, with a normal age-related follicular distribution and without carcinoma cell invasion.
RESULTS
An approach using ovarian stimulation first, followed by laparoscopic collection of ovarian tissue, is a useful strategy for increasing the efficacy of fertility preservation techniques. The ovarian tissue is not affected by prior ovarian stimulation.
CONCLUSIONS
[ "Adult", "Chorionic Gonadotropin", "Female", "Fertility Preservation", "Fertilization in Vitro", "Humans", "Medical Oncology", "Neoplasms", "Oocyte Retrieval", "Ovary", "Ovulation Induction", "Reproducibility of Results", "Retrospective Studies", "Sperm Injections, Intracytoplasmic", "Time Factors", "Young Adult" ]
3599192
Background
In recent years, progress in the diagnosis and treatment of oncological diseases has led to considerable improvements in the survival prognosis, particularly in children and adolescent cancer patients. Unfortunately, aggressive chemotherapy and radiotherapy often cause infertility due to massive destruction of the ovarian reserve, resulting in premature ovarian failure. These women have to face years of hormone replacement therapy and the prospect of infertility, which causes psychological stress [1,2]. Several strategies are currently available for preserving fertility, depending on the risks and probability of gonadal failure, the patient’s general health at diagnosis, and the partner’s status. These strategies include transposition of the ovaries before radiotherapy, ovarian stimulation followed by cryopreservation of fertilized oocytes or unfertilized oocytes, cryopreservation of in vitro–matured oocytes, cryopreservation and transplantation of ovarian tissue, and administration of gonadotropin-releasing hormone (GnRH) agonists [3,4]. Although the variety of fertility preservation strategies available enables cancer patients to have children using their own gametes after overcoming their disease, most of these techniques are still experimental and the efficacy of the individual techniques is limited. A combination of methods is therefore recommended in order to maximize women’s chances of future fertility [5]. In this study, ovarian stimulation and ovarian tissue cryopreservation were combined as a strategy for fertility preservation in cancer patients. The aim was to evaluate whether ovarian stimulation affects the quality of ovarian tissue. The numbers and quality of oocytes, time requirements, and the safety of this strategy were also examined.
Methods
Fourteen patients between 24 and 35 years of age (median 29) were included in the retrospective study. They were all suffering from malignant diseases (Table 1) and wanted to preserve their fertility for a future pregnancy. None of the patients had been treated with chemotherapy or radiotherapy before the fertility preservation procedure. All patients provided informed consent for ovarian stimulation and ovarian tissue cryopreservation, after receiving counseling on alternative options for fertility preservation techniques. Three patients also wished to be treated with GnRH agonists during chemotherapy.

Table 1 (caption): Main characteristics and stimulation outcome in patients who provided informed consent for ovarian stimulation and ovarian tissue cryopreservation. Footnotes: a, stimulation started in the luteal phase; b, no ICSI, oocytes were cryopreserved in an unfertilized state; c, received gonadotropin-releasing hormone agonists during cancer treatment.

Ovarian stimulation: All of the patients had regular menstrual cycles before chemoradiotherapy. The phase of the menstrual cycle was evaluated using the onset of the last menstrual period, ultrasonography, and progesterone concentrations. Patients who were stimulated during the follicular phase received either a short “flare-up” protocol or a GnRH-antagonist protocol [6,7]. In the case of a single patient, stimulation was started in the luteal phase with a modified GnRH-antagonist protocol [8]. In one case, ovarian stimulation was carried out with letrozole in combination with a GnRH-antagonist protocol, due to estrogen receptor–positive breast cancer [9,10]. Follicular growth was monitored using vaginal ultrasound and measurement of 17β-estradiol (E2) levels. The gonadotropin dosage was adjusted according to the preantral follicle count and follicle growth. A single dose of recombinant human chorionic gonadotropin (hCG) was administered when the lead follicle had a mean diameter of 15–18 mm.

Oocyte and ovarian tissue collection: Transvaginal oocyte retrieval was scheduled 34–36 h after hCG administration and was performed with the patient under general anesthesia. Immediately afterwards, ovarian tissue was extracted laparoscopically; the stimulated ovary was divided along the longitudinal midline with scissors, without the use of any diathermy. During this procedure, an effort was made to avoid coming too close to the ovarian mesentery, containing the ovarian vessels. In this way, the anti-mesenteric half of one ovary was separated and retrieved for cryopreservation, while the other half was left in situ. For hemostasis after this, circumscribed bleeding from the ovarian tissue was coagulated using bipolar diathermy. This was necessary at points along the surface of the ovary and on the ovarian septa between the former follicles. The ovarian cortices were cryopreserved using a slow freezing protocol and an open freezing system [11]. Prior to freezing, a small biopsy of the ovarian tissue was examined histologically to assess follicle density and exclude involvement of the tissue by malignancy. The mature (metaphase II, MII) oocytes were fertilized by intracytoplasmic sperm injection (ICSI) to avoid the risk of fertilization failure with in vitro fertilization (IVF) and cryopreserved at the pronuclear (PN) stage using a slow freezing protocol, in accordance with the patient’s request and with German national law. If the patient did not have a partner, or if the patient requested it, all oocytes were cryopreserved in an unfertilized state using a slow freezing protocol.
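Because oocyte retrieval is timed 34–36 h after the hCG trigger, scheduling the procedure is simple clock arithmetic. The short sketch below illustrates this; the trigger time is a made-up example, not a value from the study.

```python
# Compute the oocyte retrieval window 34-36 h after hCG administration.
# The hCG time below is a hypothetical example for illustration only.
from datetime import datetime, timedelta

hcg_given = datetime(2024, 5, 3, 21, 30)          # example: hCG trigger at 21:30
window_start = hcg_given + timedelta(hours=34)    # earliest retrieval time
window_end = hcg_given + timedelta(hours=36)      # latest retrieval time
print(f"Retrieval window: {window_start:%a %d %b %H:%M} to {window_end:%H:%M}")
```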
Results
All of the women underwent one IVF cycle. The median duration of hormonal stimulation, from the start of stimulation to hCG administration, was 10 days (range 6–13 days). The average number of oocytes retrieved per patient was 10, and 67% of the oocytes were successfully fertilized using ICSI. In three cases, the oocytes were cryopreserved in an unfertilized state (mean number of unfertilized oocytes, 15) (Table 1). The transvaginal oocyte retrieval and the extraction of ovarian tissue immediately afterwards were uneventful in all cases. No perioperative complications such as severe bleeding occurred. The fertility preservation procedures did not lead to postponement of the start of chemotherapy for any of the patients; however, one patient developed mild ovarian hyperstimulation syndrome (OHSS). The prefreezing histological sections from the human ovarian grafts had a normal histological appearance. The follicular count showed a normal age-related follicular distribution, with the vast majority of follicles being primordial. No carcinoma cells were seen in any of the hematoxylin–eosin-stained slides examined (Figure 1). Figure 1 (caption): Histological analysis of ovarian tissue extracted immediately after ovarian stimulation. Ovarian tissue with (A) follicles and (B) one secondary follicle, stained with hematoxylin–eosin (original magnification: × 20).
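The per-cycle summaries reported here (median stimulation duration, mean oocyte yield, fertilization proportion) are straightforward descriptive statistics. The sketch below shows the calculations; all counts are hypothetical placeholders, not the study's patient-level data.

```python
# Descriptive statistics of the kind reported for the 14 IVF cycles.
# All values below are hypothetical placeholders, NOT study data.
from statistics import mean, median

stim_days = [10, 9, 11, 13, 6, 10, 12, 8, 10, 11, 9, 10, 7, 10]  # hypothetical days per patient
oocytes = [12, 8, 10, 9, 11, 10, 7, 13, 10, 9, 12, 10, 8, 11]    # hypothetical oocytes per patient
mii_injected, fertilised_2pn = 39, 26                            # hypothetical ICSI counts

print("median stimulation days:", median(stim_days), "range:", (min(stim_days), max(stim_days)))
print("mean oocytes per patient:", round(mean(oocytes), 1))
print(f"fertilisation rate: {fertilised_2pn / mii_injected:.0%}")  # fertilised / injected MII
```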
Conclusions
Fertility-preserving procedures should be offered to all patients facing fertility loss before cytotoxic treatment is administered. The decision as to which fertility preservation treatment is most suitable in the patient’s individual situation has to be made during a personal discussion with her and requires intensive interdisciplinary discussion, including oncologists, radiotherapists, and reproductive medicine specialists. A combination of fertility preservation techniques increases the efficacy of the procedure and gives young cancer patients the best chance for future fertility [5,34].
[ "Background", "Ovarian stimulation", "Oocyte and ovarian tissue collection", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "In recent years, progress in the diagnosis and treatment of oncological diseases has led to considerable improvements in the survival prognosis, particularly in children and adolescent cancer patients. Unfortunately, aggressive chemotherapy and radiotherapy often cause infertility due to massive destruction of the ovarian reserve, resulting in premature ovarian failure. These women have to face years of hormone replacement therapy and the prospect of infertility, which causes psychological stress [1,2].\nSeveral strategies are currently available for preserving fertility, depending on the risks and probability of gonadal failure, the patient’s general health at diagnosis, and the partner’s status. These strategies include transposition of the ovaries before radiotherapy, ovarian stimulation followed by cryopreservation of fertilized oocytes or unfertilized oocytes, cryopreservation of in vitro–matured oocytes, cryopreservation and transplantation of ovarian tissue, and administration of gonadotropin-releasing hormone (GnRH) agonists [3,4].\nAlthough the variety of fertility preservation strategies available enables cancer patients to have children using their own gametes after overcoming their disease, most of these techniques are still experimental and the efficacy of the individual techniques is limited. A combination of methods is therefore recommended in order to maximize women’s chances of future fertility [5].\nIn this study, ovarian stimulation and ovarian tissue cryopreservation were combined as a strategy for fertility preservation in cancer patients. The aim was to evaluate whether ovarian stimulation affects the quality of ovarian tissue. The numbers and quality of oocytes, time requirements, and the safety of this strategy were also examined.", "All of the patients had regular menstrual cycles before chemoradiotherapy. The phase of the menstrual cycle was evaluated using the onset of the last menstrual period, ultrasonography, and progesterone concentrations. Patients who were stimulated during the follicular phase received either a short “flare-up” protocol or a GnRH-antagonist protocol [6,7]. In the case of a single patient, stimulation was started in the luteal phase with a modified GnRH-antagonist protocol [8]. In one case, ovarian stimulation was carried out with letrozole in combination with a GnRH-antagonist protocol, due to estrogen receptor–positive breast cancer [9,10].\nFollicular growth was monitored using vaginal ultrasound and measurement of 17β-estradiol (E2) levels. The gonadotropin dosage was adjusted according to the preantral follicle count and follicle growth. A single dose of recombinant human chorionic gonadotropin (hCG) was administered when the lead follicle had a mean diameter of 15–18 mm.", "Transvaginal oocyte retrieval was scheduled 34–36 h after hCG administration and was performed with the patient under general anesthesia. Immediately afterwards, ovarian tissue was extracted laparoscopically; the stimulated ovary was divided along the longitudinal midline with scissors, without the use of any diathermy. During this procedure, an effort was made to avoid coming too close to the ovarian mesentery, containing the ovarian vessels. In this way, the anti-mesenteric half of one ovary was separated and retrieved for cryopreservation, while the other half was left in situ. For hemostasis after this, circumscribed bleeding from the ovarian tissue was coagulated using bipolar diathermy. 
This was necessary at points along the surface of the ovary and on the ovarian septa between the former follicles.\nThe ovarian cortices were cryopreserved using a slow freezing protocol and an open freezing system [11]. Prior to freezing, a small biopsy of the ovarian tissue was examined histologically to assess follicle density and exclude involvement of the tissue by malignancy.\nThe mature (metaphase II, MII) oocytes were fertilized by intracytoplasmic sperm injection (ICSI) to avoid the risk of fertilization failure with in vitro fertilization (IVF) and cryopreserved at the pronuclear (PN) stage using a slow freezing protocol in accordance with the patient’s request and with German national law. If the patient did not have a partner or the patient requested it, all oocytes were cryopreserved in an unfertilized state using a slow freezing protocol.", "GnRH: Gonadotropin-releasing hormone;hCG: human chorionic gonadotropin;ICSI: Intracytoplasmic sperm injection;IVF: in vitro fertilization;IVM: in vitro maturation;OHSS: Ovarian hyperstimulation syndrome;PN: Pronuclear (stage)", "The authors hereby declare that they have no competing interests.", "LL, AM, RD, and TH designed the study and analyzed the data. LL and RD wrote the manuscript. TH collected the data. AM, TH, and MWB carried out the operations. DW, LL, KUA, and IH carried out the histological work and analysis. MWB and RD supervised the study. All of the authors read and approved the final manuscript." ]
[ null, null, null, null, null, null ]
[ "Background", "Methods", "Ovarian stimulation", "Oocyte and ovarian tissue collection", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "In recent years, progress in the diagnosis and treatment of oncological diseases has led to considerable improvements in the survival prognosis, particularly in children and adolescent cancer patients. Unfortunately, aggressive chemotherapy and radiotherapy often cause infertility due to massive destruction of the ovarian reserve, resulting in premature ovarian failure. These women have to face years of hormone replacement therapy and the prospect of infertility, which causes psychological stress [1,2].\nSeveral strategies are currently available for preserving fertility, depending on the risks and probability of gonadal failure, the patient’s general health at diagnosis, and the partner’s status. These strategies include transposition of the ovaries before radiotherapy, ovarian stimulation followed by cryopreservation of fertilized oocytes or unfertilized oocytes, cryopreservation of in vitro–matured oocytes, cryopreservation and transplantation of ovarian tissue, and administration of gonadotropin-releasing hormone (GnRH) agonists [3,4].\nAlthough the variety of fertility preservation strategies available enables cancer patients to have children using their own gametes after overcoming their disease, most of these techniques are still experimental and the efficacy of the individual techniques is limited. A combination of methods is therefore recommended in order to maximize women’s chances of future fertility [5].\nIn this study, ovarian stimulation and ovarian tissue cryopreservation were combined as a strategy for fertility preservation in cancer patients. The aim was to evaluate whether ovarian stimulation affects the quality of ovarian tissue. The numbers and quality of oocytes, time requirements, and the safety of this strategy were also examined.", "Fourteen patients between 24 and 35 years of age (median 29) were included in the retrospective study. They were all suffering from malignant diseases (Table 1) and wanted to preserve their fertility for a future pregnancy. None of the patients had been treated with chemotherapy or radiotherapy before the fertility preservation procedure. All patients provided informed consent for ovarian stimulation and ovarian tissue cryopreservation, after receiving counseling on alternative options for fertility preservation techniques. Three patients also wished to be treated with GnRH agonists during chemotherapy.\nMain characteristics and stimulation outcome in patients who provided informed consent for ovarian stimulation and ovarian tissue cryopreservation\naStimulation started in the luteal phase.\nbNo ICSI; oocytes were cryopreserved in an unfertilized state.\ncReceived gonadotropin-releasing hormone agonists during cancer treatment.\n Ovarian stimulation All of the patients had regular menstrual cycles before chemoradiotherapy. The phase of the menstrual cycle was evaluated using the onset of the last menstrual period, ultrasonography, and progesterone concentrations. Patients who were stimulated during the follicular phase received either a short “flare-up” protocol or a GnRH-antagonist protocol [6,7]. In the case of a single patient, stimulation was started in the luteal phase with a modified GnRH-antagonist protocol [8]. In one case, ovarian stimulation was carried out with letrozole in combination with a GnRH-antagonist protocol, due to estrogen receptor–positive breast cancer [9,10].\nFollicular growth was monitored using vaginal ultrasound and measurement of 17β-estradiol (E2) levels. 
The gonadotropin dosage was adjusted according to the preantral follicle count and follicle growth. A single dose of recombinant human chorionic gonadotropin (hCG) was administered when the lead follicle had a mean diameter of 15–18 mm.\nAll of the patients had regular menstrual cycles before chemoradiotherapy. The phase of the menstrual cycle was evaluated using the onset of the last menstrual period, ultrasonography, and progesterone concentrations. Patients who were stimulated during the follicular phase received either a short “flare-up” protocol or a GnRH-antagonist protocol [6,7]. In the case of a single patient, stimulation was started in the luteal phase with a modified GnRH-antagonist protocol [8]. In one case, ovarian stimulation was carried out with letrozole in combination with a GnRH-antagonist protocol, due to estrogen receptor–positive breast cancer [9,10].\nFollicular growth was monitored using vaginal ultrasound and measurement of 17β-estradiol (E2) levels. The gonadotropin dosage was adjusted according to the preantral follicle count and follicle growth. A single dose of recombinant human chorionic gonadotropin (hCG) was administered when the lead follicle had a mean diameter of 15–18 mm.\n Oocyte and ovarian tissue collection Transvaginal oocyte retrieval was scheduled 34–36 h after hCG administration and was performed with the patient under general anesthesia. Immediately afterwards, ovarian tissue was extracted laparoscopically; the stimulated ovary was divided along the longitudinal midline with scissors, without the use of any diathermy. During this procedure, an effort was made to avoid coming too close to the ovarian mesentery, containing the ovarian vessels. In this way, the anti-mesenteric half of one ovary was separated and retrieved for cryopreservation, while the other half was left in situ. For hemostasis after this, circumscribed bleeding from the ovarian tissue was coagulated using bipolar diathermy. This was necessary at points along the surface of the ovary and on the ovarian septa between the former follicles.\nThe ovarian cortices were cryopreserved using a slow freezing protocol and an open freezing system [11]. Prior to freezing, a small biopsy of the ovarian tissue was examined histologically to assess follicle density and exclude involvement of the tissue by malignancy.\nThe mature (metaphase II, MII) oocytes were fertilized by intracytoplasmic sperm injection (ICSI) to avoid the risk of fertilization failure with in vitro fertilization (IVF) and cryopreserved at the pronuclear (PN) stage using a slow freezing protocol in accordance with the patient’s request and with German national law. If the patient did not have a partner or the patient requested it, all oocytes were cryopreserved in an unfertilized state using a slow freezing protocol.\nTransvaginal oocyte retrieval was scheduled 34–36 h after hCG administration and was performed with the patient under general anesthesia. Immediately afterwards, ovarian tissue was extracted laparoscopically; the stimulated ovary was divided along the longitudinal midline with scissors, without the use of any diathermy. During this procedure, an effort was made to avoid coming too close to the ovarian mesentery, containing the ovarian vessels. In this way, the anti-mesenteric half of one ovary was separated and retrieved for cryopreservation, while the other half was left in situ. For hemostasis after this, circumscribed bleeding from the ovarian tissue was coagulated using bipolar diathermy. 
This was necessary at points along the surface of the ovary and on the ovarian septa between the former follicles.\nThe ovarian cortices were cryopreserved using a slow freezing protocol and an open freezing system [11]. Prior to freezing, a small biopsy of the ovarian tissue was examined histologically to assess follicle density and exclude involvement of the tissue by malignancy.\nThe mature (metaphase II, MII) oocytes were fertilized by intracytoplasmic sperm injection (ICSI) to avoid the risk of fertilization failure with in vitro fertilization (IVF) and cryopreserved at the pronuclear (PN) stage using a slow freezing protocol in accordance with the patient’s request and with German national law. If the patient did not have a partner or the patient requested it, all oocytes were cryopreserved in an unfertilized state using a slow freezing protocol.", "All of the patients had regular menstrual cycles before chemoradiotherapy. The phase of the menstrual cycle was evaluated using the onset of the last menstrual period, ultrasonography, and progesterone concentrations. Patients who were stimulated during the follicular phase received either a short “flare-up” protocol or a GnRH-antagonist protocol [6,7]. In the case of a single patient, stimulation was started in the luteal phase with a modified GnRH-antagonist protocol [8]. In one case, ovarian stimulation was carried out with letrozole in combination with a GnRH-antagonist protocol, due to estrogen receptor–positive breast cancer [9,10].\nFollicular growth was monitored using vaginal ultrasound and measurement of 17β-estradiol (E2) levels. The gonadotropin dosage was adjusted according to the preantral follicle count and follicle growth. A single dose of recombinant human chorionic gonadotropin (hCG) was administered when the lead follicle had a mean diameter of 15–18 mm.", "Transvaginal oocyte retrieval was scheduled 34–36 h after hCG administration and was performed with the patient under general anesthesia. Immediately afterwards, ovarian tissue was extracted laparoscopically; the stimulated ovary was divided along the longitudinal midline with scissors, without the use of any diathermy. During this procedure, an effort was made to avoid coming too close to the ovarian mesentery, containing the ovarian vessels. In this way, the anti-mesenteric half of one ovary was separated and retrieved for cryopreservation, while the other half was left in situ. For hemostasis after this, circumscribed bleeding from the ovarian tissue was coagulated using bipolar diathermy. This was necessary at points along the surface of the ovary and on the ovarian septa between the former follicles.\nThe ovarian cortices were cryopreserved using a slow freezing protocol and an open freezing system [11]. Prior to freezing, a small biopsy of the ovarian tissue was examined histologically to assess follicle density and exclude involvement of the tissue by malignancy.\nThe mature (metaphase II, MII) oocytes were fertilized by intracytoplasmic sperm injection (ICSI) to avoid the risk of fertilization failure with in vitro fertilization (IVF) and cryopreserved at the pronuclear (PN) stage using a slow freezing protocol in accordance with the patient’s request and with German national law. If the patient did not have a partner or the patient requested it, all oocytes were cryopreserved in an unfertilized state using a slow freezing protocol.", "All of the women underwent one IVF cycle. 
The median period of the hormonal stimulation cycle, between the start of hormonal stimulation and hCG administration, was 10 days (range 6–13 days). The average number of oocytes retrieved per patient was 10, and 67% of the oocytes were successfully fertilized using ICSI. In three cases, the oocytes were cryopreserved in an unfertilized state (mean number of unfertilized oocytes 15) (Table 1).\nThe transvaginal oocyte retrieval and extraction of the ovarian tissue immediately afterward were uneventful in all cases. No perioperative complications such as severe bleeding occurred. The fertility preservation procedures did not lead to postponement of the start of chemotherapy for any of the patients; however, one patient developed a mild ovarian hyperstimulation syndrome (OHSS).\nThe prefreezing histological sections from the human ovarian grafts had a normal histological appearance. The follicular count showed a normal age-related follicular distribution, with the vast majority of follicles being primordial. No carcinoma cells were seen in any of the hematoxylin–eosin-stained slides examined (Figure 1).\nHistological analysis of ovarian tissue extracted immediately after ovarian stimulation. Ovarian tissue with (A) follicles and (B) one secondary follicle, stained with hematoxylin–eosin (original magnification: × 20).", "Ovarian stimulation, followed by intracytoplasmic sperm injection and cryopreservation of embryos, is currently the most successful procedure for fertility preservation in newly diagnosed cancer patients. Depending on the patient’s age, a survival rate of the embryos following thawing of 35–90%, an implantation rate of up to 30%, and a cumulative pregnancy rate of 30–40% can be achieved [12,13].\nFreezing unfertilized oocytes is also a promising option for preserving fertility today. Oocyte banking does not require any partner or sperm donor and it may also accord better with various religious or ethical considerations than embryo freezing. With recent improvements in freeze–thaw protocols such as vitrification, promising results with more than 60% of mature oocytes surviving after thawing and subsequent fertilization have been reported — rates comparable with fresh oocytes [14,15].\nFor either of these methods to be successful, however, appropriate quantities of oocytes have to be obtained. In addition, because the time frame up to the initiation of chemotherapy and/or radiotherapy is limited, usually only one IVF cycle can be carried out, and the numbers of oocytes or embryos cryopreserved are consequently often not sufficient for several transfer attempts. For maximum effectiveness, combinations with other fertility preservation techniques therefore need to be considered.\nCryopreservation of ovarian tissue offers an effective combination. Cryopreservation of ovarian tissue before oncologic treatment has recently become one of the most promising techniques for preserving fertility. It allows storage of a large number of primordial and primary follicles. It can be carried out rapidly at any time in the menstrual cycle without delaying the oncological treatment and provides a unique option for preserving fertility in prepubertal or premenarchal female patients [16]. However, the method is surgically invasive and there is a potential risk that malignant cells in the frozen tissue may lead to recurrence of the primary disease after transplantation. 
For most conditions, however, the risk is low and is presumably related to the stage of disease at the time of ovarian tissue cryopreservation, although considerable caution is advisable with cryoconserved tissue from patients with leukemia, borderline ovarian tumor, or with a high risk of ovarian metastases (e.g., in adenocarcinoma of the cervix or stage III–IV breast cancer) [17]. A total of 20 live births have been reported to date after orthotopic transplantation of cryopreserved ovarian tissue [18-21]. Although cryopreservation of ovarian tissue is still considered experimental, the technique is now gaining worldwide acceptance.\nIn the cancer patients included in the present study, ovarian stimulation was carried out first, followed by laparoscopic collection of the ovarian tissue. Although it has been reported that ovarian tissue is of poor quality after ovarian stimulation [22], no data on this topic have so far been published. Histological examination of the ovarian tissue showed a normal age-related follicle distribution. No histological differences were found from ovarian tissue from patients who underwent ovarian tissue cryopreservation in our department without prior ovarian stimulation. Nor was any correlation noted between the numbers of oocytes retrieved and the follicle distribution in the ovarian tissue. In patients with fewer retrieved oocytes, the numbers of follicles were similar to those in patients with a high response to ovarian stimulation.\nThe ovarian response to stimulation is crucial for successful fertility preservation, and there has been concern regarding the ovarian response to ovarian stimulation in cancer patients. In the present study, different stimulation protocols were used due to the different starting days for stimulation. Adequate numbers of oocytes were retrieved within 2 weeks. The average number of oocytes retrieved per patient was 10, and 67% of the oocytes were successfully fertilized. This is in accordance with recent studies that have reported no significant changes in the ovarian reserve or response to gonadotropins in patients with various types of cancer [19,20]. However, other studies have reported a poorer ovarian response in cancer patients undergoing IVF treatment protocols [17,18]. The published data on this topic are still inconsistent.\nThe present group of patients included five women with breast cancer, one of whom had estrogen receptor–positive breast cancer. Concerns have been raised regarding the use of controlled ovarian stimulation in patients with hormone-dependent tumors, due to inadequate data on short-term increases in hormonal effects on the tumor. Moreover, as animal models suggest, estrogen may also play a role in stimulating the growth of estrogen receptor–negative breast cancers [23]. Conventional stimulation protocols with gonadotropins are therefore modified to include administration of the aromatase inhibitor letrozole [9,24] or the selective estrogen modulator tamoxifen [25]. These protocols have been used with success in reducing the estradiol excesses that are normally seen with conventional protocols, and short-term follow-up data for these protocols have not shown any detrimental effects on survival [10].\nThe risk of ovarian hyperstimulation syndrome (OHSS) is a known complication of controlled ovarian stimulation. One patient in the present study developed a mild OHSS, but the start of cancer treatment did not have to be postponed in any of the patients. 
The overall risk of severe OHSS is low, and in cancer patients it is also reduced, given that pregnancy will not occur; however, the risk should not be underestimated. Careful selection of the gonadotropin starting dosage, close monitoring, and step-down dosing are critical for avoiding complications. Triggering using a GnRH agonist alone or together with low-dose hCG might potentially further reduce the risk of hyperstimulation [26,27].\nA potential side effect of the subsequent use of oocyte retrieval and ovarian tissue extraction may be bleeding in the residual ovarian tissue. Stimulated ovaries are more fragile than unstimulated ovarian tissue, which has a more compact structure. Stimulated ovaries have to be handled with greater care in comparison with unstimulated ovaries, to avoid injuries to the surface and to minimize possible tissue damage and bleeding. However, no side effects of this type were observed in any of the patients.\nSeveral attempts have been made to improve the effectiveness of fertility preservation programs by combining different techniques. Removing ovarian tissue first and starting ovarian stimulation approximately 1–2 days later is an effective alternative approach. The partial removal of ovarian tissue does not substantially affect the average number or quality of oocytes retrieved after ovarian stimulation [22].\nThe combination of cryopreservation of ovarian tissue before chemotherapy and ovarian stimulation after the start of chemotherapy should no longer be carried out, as the efficacy of IVF is dramatically reduced even after one round of chemotherapy and high rates of malformation of offspring after treatment with alkylating agents have been demonstrated experimentally [28,29].\nIf there is no time for ovarian stimulation, cryopreservation of oocytes retrieved during dissection of resected ovarian tissue has also been reported as a potential strategy for preserving fertility in patients with cancer. The combination of in vitro maturation (IVM) with oocyte cryopreservation prevents any delay in cancer treatment and avoids the risks associated with high estradiol levels in hormone-sensitive tumors [30]. Although healthy infants have been born following IVM, implantation and pregnancy rates are generally lower than for IVF with mature oocytes [31,32].\nAdministration of GnRH-agonist analogs, in an attempt to reduce the gonadotoxic effects of chemotherapy by simulating a prepubertal hormonal milieu, is another fertility preservation method and should be combined with other fertility-protecting measures as well if possible. Although conclusive proof is still awaited, there is increasing evidence that GnRH agonists are effective in protecting the ovaries [33]. Administration of these agents may be considered on an individual basis, as the method is safe, noninvasive, and easy to administer.", "Fertility-preserving procedures should be offered to all patients facing fertility loss before cytotoxic treatment is administered. The decision as to which fertility preservation treatment is most suitable in the patient’s individual situation has to be made during a personal discussion with her and requires intensive interdisciplinary discussion, including oncologists, radiotherapists, and reproductive medicine specialists. 
A combination of fertility preservation techniques increases the efficacy of the procedure and gives young cancer patients the best chance for future fertility [5,34].", "GnRH: Gonadotropin-releasing hormone;hCG: human chorionic gonadotropin;ICSI: Intracytoplasmic sperm injection;IVF: in vitro fertilization;IVM: in vitro maturation;OHSS: Ovarian hyperstimulation syndrome;PN: Pronuclear (stage)", "The authors hereby declare that they have no competing interests.", "LL, AM, RD, and TH designed the study and analyzed the data. LL and RD wrote the manuscript. TH collected the data. AM, TH, and MWB carried out the operations. DW, LL, KUA, and IH carried out the histological work and analysis. MWB and RD supervised the study. All of the authors read and approved the final manuscript." ]
[ null, "methods", null, null, "results", "discussion", "conclusions", null, null, null ]
[ "Fertility preservation", "Ovarian tissue", "Ovarian stimulation", "IVF", "Oocyte cryopreservation", "Ovarian tissue cryopreservation" ]
Background: In recent years, progress in the diagnosis and treatment of oncological diseases has led to considerable improvements in the survival prognosis, particularly in children and adolescent cancer patients. Unfortunately, aggressive chemotherapy and radiotherapy often cause infertility due to massive destruction of the ovarian reserve, resulting in premature ovarian failure. These women have to face years of hormone replacement therapy and the prospect of infertility, which causes psychological stress [1,2]. Several strategies are currently available for preserving fertility, depending on the risks and probability of gonadal failure, the patient’s general health at diagnosis, and the partner’s status. These strategies include transposition of the ovaries before radiotherapy, ovarian stimulation followed by cryopreservation of fertilized oocytes or unfertilized oocytes, cryopreservation of in vitro–matured oocytes, cryopreservation and transplantation of ovarian tissue, and administration of gonadotropin-releasing hormone (GnRH) agonists [3,4]. Although the variety of fertility preservation strategies available enables cancer patients to have children using their own gametes after overcoming their disease, most of these techniques are still experimental and the efficacy of the individual techniques is limited. A combination of methods is therefore recommended in order to maximize women’s chances of future fertility [5]. In this study, ovarian stimulation and ovarian tissue cryopreservation were combined as a strategy for fertility preservation in cancer patients. The aim was to evaluate whether ovarian stimulation affects the quality of ovarian tissue. The numbers and quality of oocytes, time requirements, and the safety of this strategy were also examined. Methods: Fourteen patients between 24 and 35 years of age (median 29) were included in the retrospective study. They were all suffering from malignant diseases (Table 1) and wanted to preserve their fertility for a future pregnancy. None of the patients had been treated with chemotherapy or radiotherapy before the fertility preservation procedure. All patients provided informed consent for ovarian stimulation and ovarian tissue cryopreservation, after receiving counseling on alternative options for fertility preservation techniques. Three patients also wished to be treated with GnRH agonists during chemotherapy. Main characteristics and stimulation outcome in patients who provided informed consent for ovarian stimulation and ovarian tissue cryopreservation aStimulation started in the luteal phase. bNo ICSI; oocytes were cryopreserved in an unfertilized state. cReceived gonadotropin-releasing hormone agonists during cancer treatment. Ovarian stimulation All of the patients had regular menstrual cycles before chemoradiotherapy. The phase of the menstrual cycle was evaluated using the onset of the last menstrual period, ultrasonography, and progesterone concentrations. Patients who were stimulated during the follicular phase received either a short “flare-up” protocol or a GnRH-antagonist protocol [6,7]. In the case of a single patient, stimulation was started in the luteal phase with a modified GnRH-antagonist protocol [8]. In one case, ovarian stimulation was carried out with letrozole in combination with a GnRH-antagonist protocol, due to estrogen receptor–positive breast cancer [9,10]. Follicular growth was monitored using vaginal ultrasound and measurement of 17β-estradiol (E2) levels. 
The gonadotropin dosage was adjusted according to the preantral follicle count and follicle growth. A single dose of recombinant human chorionic gonadotropin (hCG) was administered when the lead follicle had a mean diameter of 15–18 mm. Oocyte and ovarian tissue collection: Transvaginal oocyte retrieval was scheduled 34–36 h after hCG administration and was performed with the patient under general anesthesia. Immediately afterwards, ovarian tissue was extracted laparoscopically; the stimulated ovary was divided along the longitudinal midline with scissors, without the use of any diathermy. During this procedure, an effort was made to avoid coming too close to the ovarian mesentery, containing the ovarian vessels. In this way, the anti-mesenteric half of one ovary was separated and retrieved for cryopreservation, while the other half was left in situ. For hemostasis after this, circumscribed bleeding from the ovarian tissue was coagulated using bipolar diathermy. This was necessary at points along the surface of the ovary and on the ovarian septa between the former follicles. The ovarian cortices were cryopreserved using a slow freezing protocol and an open freezing system [11]. Prior to freezing, a small biopsy of the ovarian tissue was examined histologically to assess follicle density and exclude involvement of the tissue by malignancy. The mature (metaphase II, MII) oocytes were fertilized by intracytoplasmic sperm injection (ICSI) to avoid the risk of fertilization failure with in vitro fertilization (IVF) and cryopreserved at the pronuclear (PN) stage using a slow freezing protocol in accordance with the patient’s request and with German national law. If the patient did not have a partner or the patient requested it, all oocytes were cryopreserved in an unfertilized state using a slow freezing protocol. Results: All of the women underwent one IVF cycle.
The median duration of hormonal stimulation, from the start of stimulation to hCG administration, was 10 days (range 6–13 days). The average number of oocytes retrieved per patient was 10, and 67% of the oocytes were successfully fertilized using ICSI. In three cases, the oocytes were cryopreserved in an unfertilized state (mean number of unfertilized oocytes 15) (Table 1). The transvaginal oocyte retrieval and extraction of the ovarian tissue immediately afterward were uneventful in all cases. No perioperative complications such as severe bleeding occurred. The fertility preservation procedures did not lead to postponement of the start of chemotherapy for any of the patients; however, one patient developed mild ovarian hyperstimulation syndrome (OHSS). The prefreezing sections from the human ovarian grafts had a normal histological appearance. The follicular count showed a normal age-related follicular distribution, with the vast majority of follicles being primordial. No carcinoma cells were seen in any of the hematoxylin–eosin-stained slides examined (Figure 1). Figure 1 caption: Histological analysis of ovarian tissue extracted immediately after ovarian stimulation. Ovarian tissue with (A) follicles and (B) one secondary follicle, stained with hematoxylin–eosin (original magnification: ×20). Discussion: Ovarian stimulation, followed by intracytoplasmic sperm injection and cryopreservation of embryos, is currently the most successful procedure for fertility preservation in newly diagnosed cancer patients. Depending on the patient’s age, a survival rate of the embryos following thawing of 35–90%, an implantation rate of up to 30%, and a cumulative pregnancy rate of 30–40% can be achieved [12,13]. Freezing unfertilized oocytes is also a promising option for preserving fertility today. Oocyte banking does not require a partner or sperm donor, and it may also accord better with various religious or ethical considerations than embryo freezing. With recent improvements in freeze–thaw protocols such as vitrification, promising results have been reported, with more than 60% of mature oocytes surviving after thawing and subsequent fertilization, at rates comparable with those for fresh oocytes [14,15]. For either of these methods to be successful, however, appropriate quantities of oocytes have to be obtained. In addition, because the time frame up to the initiation of chemotherapy and/or radiotherapy is limited, usually only one IVF cycle can be carried out, and the numbers of oocytes or embryos cryopreserved are consequently often not sufficient for several transfer attempts. For maximum effectiveness, combinations with other fertility preservation techniques therefore need to be considered. Cryopreservation of ovarian tissue offers an effective combination; carried out before oncologic treatment, it has recently become one of the most promising techniques for preserving fertility. It allows storage of a large number of primordial and primary follicles. It can be carried out rapidly at any time in the menstrual cycle without delaying the oncological treatment and provides a unique option for preserving fertility in prepubertal or premenarchal female patients [16]. However, the method is surgically invasive and there is a potential risk that malignant cells in the frozen tissue may lead to recurrence of the primary disease after transplantation.
For most conditions, however, the risk is low and is presumably related to the stage of disease at the time of ovarian tissue cryopreservation, although considerable caution is advisable with cryoconserved tissue from patients with leukemia, borderline ovarian tumor, or with a high risk of ovarian metastases (e.g., in adenocarcinoma of the cervix or stage III–IV breast cancer) [17]. A total of 20 live births have been reported to date after orthotopic transplantation of cryopreserved ovarian tissue [18-21]. Although cryopreservation of ovarian tissue is still considered experimental, the technique is now gaining worldwide acceptance. In the cancer patients included in the present study, ovarian stimulation was carried out first, followed by laparoscopic collection of the ovarian tissue. Although it has been reported that ovarian tissue is of poor quality after ovarian stimulation [22], no data on this topic have so far been published. Histological examination of the ovarian tissue showed a normal age-related follicle distribution. No histological differences were found from ovarian tissue from patients who underwent ovarian tissue cryopreservation in our department without prior ovarian stimulation. Nor was any correlation noted between the numbers of oocytes retrieved and the follicle distribution in the ovarian tissue. In patients with fewer retrieved oocytes, the numbers of follicles were similar to those in patients with a high response to ovarian stimulation. The ovarian response to stimulation is crucial for successful fertility preservation, and there has been concern regarding the ovarian response to ovarian stimulation in cancer patients. In the present study, different stimulation protocols were used due to the different starting days for stimulation. Adequate numbers of oocytes were retrieved within 2 weeks. The average number of oocytes retrieved per patient was 10, and 67% of the oocytes were successfully fertilized. This is in accordance with recent studies that have reported no significant changes in the ovarian reserve or response to gonadotropins in patients with various types of cancer [19,20]. However, other studies have reported a poorer ovarian response in cancer patients undergoing IVF treatment protocols [17,18]. The published data on this topic are still inconsistent. The present group of patients included five women with breast cancer, one of whom had estrogen receptor–positive breast cancer. Concerns have been raised regarding the use of controlled ovarian stimulation in patients with hormone-dependent tumors, due to inadequate data on short-term increases in hormonal effects on the tumor. Moreover, as animal models suggest, estrogen may also play a role in stimulating the growth of estrogen receptor–negative breast cancers [23]. Conventional stimulation protocols with gonadotropins are therefore modified to include administration of the aromatase inhibitor letrozole [9,24] or the selective estrogen modulator tamoxifen [25]. These protocols have been used with success in reducing the estradiol excesses that are normally seen with conventional protocols, and short-term follow-up data for these protocols have not shown any detrimental effects on survival [10]. The risk of ovarian hyperstimulation syndrome (OHSS) is a known complication of controlled ovarian stimulation. One patient in the present study developed a mild OHSS, but the start of cancer treatment did not have to be postponed in any of the patients. 
The overall risk of severe OHSS is low, and in cancer patients it is further reduced, given that pregnancy will not occur; however, the risk should not be underestimated. Careful selection of the gonadotropin starting dosage, close monitoring, and step-down dosing are critical for avoiding complications. Triggering with a GnRH agonist alone or together with low-dose hCG might further reduce the risk of hyperstimulation [26,27]. A potential side effect of performing ovarian tissue extraction immediately after oocyte retrieval is bleeding in the residual ovarian tissue. Stimulated ovaries are more fragile than unstimulated ovarian tissue, which has a more compact structure, and therefore have to be handled with greater care to avoid injuries to the surface and to minimize possible tissue damage and bleeding. However, no side effects of this type were observed in any of the patients. Several attempts have been made to improve the effectiveness of fertility preservation programs by combining different techniques. Removing ovarian tissue first and starting ovarian stimulation approximately 1–2 days later is an effective alternative approach. The partial removal of ovarian tissue does not substantially affect the average number or quality of oocytes retrieved after ovarian stimulation [22]. The combination of cryopreservation of ovarian tissue before chemotherapy and ovarian stimulation after the start of chemotherapy should no longer be carried out, as the efficacy of IVF is dramatically reduced even after one round of chemotherapy and high rates of malformation of offspring after treatment with alkylating agents have been demonstrated experimentally [28,29]. If there is no time for ovarian stimulation, cryopreservation of oocytes retrieved during dissection of resected ovarian tissue has also been reported as a potential strategy for preserving fertility in patients with cancer. The combination of in vitro maturation (IVM) with oocyte cryopreservation prevents any delay in cancer treatment and avoids the risks associated with high estradiol levels in hormone-sensitive tumors [30]. Although healthy infants have been born following IVM, implantation and pregnancy rates are generally lower than for IVF with mature oocytes [31,32]. Administration of GnRH agonists, in an attempt to reduce the gonadotoxic effects of chemotherapy by simulating a prepubertal hormonal milieu, is another fertility preservation method and should, if possible, be combined with other fertility-protecting measures. Although conclusive proof is still awaited, there is increasing evidence that GnRH agonists are effective in protecting the ovaries [33]. Administration of these agents may be considered on an individual basis, as the method is safe, noninvasive, and easy to administer. Conclusions: Fertility-preserving procedures should be offered to all patients facing fertility loss before cytotoxic treatment is administered. The decision as to which fertility preservation treatment is most suitable in the patient’s individual situation has to be made during a personal discussion with her and requires intensive interdisciplinary discussion, including oncologists, radiotherapists, and reproductive medicine specialists. A combination of fertility preservation techniques increases the efficacy of the procedure and gives young cancer patients the best chance for future fertility [5,34].
Abbreviations: GnRH: gonadotropin-releasing hormone; hCG: human chorionic gonadotropin; ICSI: intracytoplasmic sperm injection; IVF: in vitro fertilization; IVM: in vitro maturation; OHSS: ovarian hyperstimulation syndrome; PN: pronuclear (stage). Competing interests: The authors hereby declare that they have no competing interests. Authors’ contributions: LL, AM, RD, and TH designed the study and analyzed the data. LL and RD wrote the manuscript. TH collected the data. AM, TH, and MWB carried out the operations. DW, LL, KUA, and IH carried out the histological work and analysis. MWB and RD supervised the study. All of the authors read and approved the final manuscript.
Background: New anticancer treatments have increased survival rates for cancer patients, but often at the cost of sterility. Several strategies are currently available for preserving fertility. However, the chances of achieving a pregnancy with one technique are still limited. A combination of methods is therefore recommended in order to maximize women's chances of future fertility. In this retrospective study, ovarian stimulation and subsequent ovarian tissue extraction on the day of oocyte retrieval were combined, and the quality of the ovarian tissue, the numbers and quality of oocytes, time requirements, and the safety of the strategy were examined. Methods: Fourteen female patients suffering from malignant diseases underwent one in vitro fertilization cycle. Different stimulation protocols were used, depending on the menstrual cycle. Transvaginal oocyte retrieval was scheduled 34–36 h after human chorionic gonadotropin administration. Immediately afterwards, ovarian tissue was extracted laparoscopically. Results: A mean of 10 oocytes were retrieved per patient, and 67% of the oocytes were successfully fertilized using intracytoplasmic sperm injection. No periprocedural complications and no complications leading to postponement of the start of chemotherapy occurred. The ovarian tissues were of good quality, with a normal age-related follicular distribution and without carcinoma cell invasion. Conclusions: An approach using ovarian stimulation first, followed by laparoscopic collection of ovarian tissue, is a useful strategy for increasing the efficacy of fertility preservation techniques. The ovarian tissue is not affected by prior ovarian stimulation.
Background: In recent years, progress in the diagnosis and treatment of oncological diseases has led to considerable improvements in the survival prognosis, particularly in children and adolescent cancer patients. Unfortunately, aggressive chemotherapy and radiotherapy often cause infertility due to massive destruction of the ovarian reserve, resulting in premature ovarian failure. These women have to face years of hormone replacement therapy and the prospect of infertility, which causes psychological stress [1,2]. Several strategies are currently available for preserving fertility, depending on the risks and probability of gonadal failure, the patient’s general health at diagnosis, and the partner’s status. These strategies include transposition of the ovaries before radiotherapy, ovarian stimulation followed by cryopreservation of fertilized oocytes or unfertilized oocytes, cryopreservation of in vitro–matured oocytes, cryopreservation and transplantation of ovarian tissue, and administration of gonadotropin-releasing hormone (GnRH) agonists [3,4]. Although the variety of fertility preservation strategies available enables cancer patients to have children using their own gametes after overcoming their disease, most of these techniques are still experimental and the efficacy of the individual techniques is limited. A combination of methods is therefore recommended in order to maximize women’s chances of future fertility [5]. In this study, ovarian stimulation and ovarian tissue cryopreservation were combined as a strategy for fertility preservation in cancer patients. The aim was to evaluate whether ovarian stimulation affects the quality of ovarian tissue. The numbers and quality of oocytes, time requirements, and the safety of this strategy were also examined. Conclusions: Fertility-preserving procedures should be offered to all patients facing fertility loss before cytotoxic treatment is administered. The decision as to which fertility preservation treatment is most suitable in the patient’s individual situation has to be made during a personal discussion with her and requires intensive interdisciplinary discussion, including oncologists, radiotherapists, and reproductive medicine specialists. A combination of fertility preservation techniques increases the efficacy of the procedure and gives young cancer patients the best chance for future fertility [5,34].
Background: New anticancer treatments have increased survival rates for cancer patients, but often at the cost of sterility. Several strategies are currently available for preserving fertility. However, the chances of achieving a pregnancy with one technique are still limited. A combination of methods is therefore recommended in order to maximize women's chances of future fertility. In this retrospective study, ovarian stimulation and subsequent ovarian tissue extraction on the day of oocyte retrieval were combined, and the quality of the ovarian tissue, the numbers and quality of oocytes, time requirements, and the safety of the strategy were examined. Methods: Fourteen female patients suffering from malignant diseases underwent one in vitro fertilization cycle. Different stimulation protocols were used, depending on the menstrual cycle. Transvaginal oocyte retrieval was scheduled 34–36 h after human chorionic gonadotropin administration. Immediately afterwards, ovarian tissue was extracted laparoscopically. Results: A mean of 10 oocytes were retrieved per patient, and 67% of the oocytes were successfully fertilized using intracytoplasmic sperm injection. No periprocedural complications and no complications leading to postponement of the start of chemotherapy occurred. The ovarian tissues were of good quality, with a normal age-related follicular distribution and without carcinoma cell invasion. Conclusions: An approach using ovarian stimulation first, followed by laparoscopic collection of ovarian tissue, is a useful strategy for increasing the efficacy of fertility preservation techniques. The ovarian tissue is not affected by prior ovarian stimulation.
3,738
274
[ 285, 180, 273, 31, 11, 73 ]
10
[ "ovarian", "tissue", "ovarian tissue", "patients", "stimulation", "oocytes", "fertility", "ovarian stimulation", "patient", "protocol" ]
[ "chemotherapy radiotherapy fertility", "therapy prospect infertility", "radiotherapy fertility preservation", "preserve fertility", "fertility preservation strategies" ]
[CONTENT] Fertility preservation | Ovarian tissue | Ovarian stimulation | IVF | Oocyte cryopreservation | Ovarian tissue cryopreservation [SUMMARY]
[CONTENT] Fertility preservation | Ovarian tissue | Ovarian stimulation | IVF | Oocyte cryopreservation | Ovarian tissue cryopreservation [SUMMARY]
[CONTENT] Fertility preservation | Ovarian tissue | Ovarian stimulation | IVF | Oocyte cryopreservation | Ovarian tissue cryopreservation [SUMMARY]
[CONTENT] Fertility preservation | Ovarian tissue | Ovarian stimulation | IVF | Oocyte cryopreservation | Ovarian tissue cryopreservation [SUMMARY]
[CONTENT] Fertility preservation | Ovarian tissue | Ovarian stimulation | IVF | Oocyte cryopreservation | Ovarian tissue cryopreservation [SUMMARY]
[CONTENT] Fertility preservation | Ovarian tissue | Ovarian stimulation | IVF | Oocyte cryopreservation | Ovarian tissue cryopreservation [SUMMARY]
[CONTENT] Adult | Chorionic Gonadotropin | Female | Fertility Preservation | Fertilization in Vitro | Humans | Medical Oncology | Neoplasms | Oocyte Retrieval | Ovary | Ovulation Induction | Reproducibility of Results | Retrospective Studies | Sperm Injections, Intracytoplasmic | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adult | Chorionic Gonadotropin | Female | Fertility Preservation | Fertilization in Vitro | Humans | Medical Oncology | Neoplasms | Oocyte Retrieval | Ovary | Ovulation Induction | Reproducibility of Results | Retrospective Studies | Sperm Injections, Intracytoplasmic | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adult | Chorionic Gonadotropin | Female | Fertility Preservation | Fertilization in Vitro | Humans | Medical Oncology | Neoplasms | Oocyte Retrieval | Ovary | Ovulation Induction | Reproducibility of Results | Retrospective Studies | Sperm Injections, Intracytoplasmic | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adult | Chorionic Gonadotropin | Female | Fertility Preservation | Fertilization in Vitro | Humans | Medical Oncology | Neoplasms | Oocyte Retrieval | Ovary | Ovulation Induction | Reproducibility of Results | Retrospective Studies | Sperm Injections, Intracytoplasmic | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adult | Chorionic Gonadotropin | Female | Fertility Preservation | Fertilization in Vitro | Humans | Medical Oncology | Neoplasms | Oocyte Retrieval | Ovary | Ovulation Induction | Reproducibility of Results | Retrospective Studies | Sperm Injections, Intracytoplasmic | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adult | Chorionic Gonadotropin | Female | Fertility Preservation | Fertilization in Vitro | Humans | Medical Oncology | Neoplasms | Oocyte Retrieval | Ovary | Ovulation Induction | Reproducibility of Results | Retrospective Studies | Sperm Injections, Intracytoplasmic | Time Factors | Young Adult [SUMMARY]
[CONTENT] chemotherapy radiotherapy fertility | therapy prospect infertility | radiotherapy fertility preservation | preserve fertility | fertility preservation strategies [SUMMARY]
[CONTENT] chemotherapy radiotherapy fertility | therapy prospect infertility | radiotherapy fertility preservation | preserve fertility | fertility preservation strategies [SUMMARY]
[CONTENT] chemotherapy radiotherapy fertility | therapy prospect infertility | radiotherapy fertility preservation | preserve fertility | fertility preservation strategies [SUMMARY]
[CONTENT] chemotherapy radiotherapy fertility | therapy prospect infertility | radiotherapy fertility preservation | preserve fertility | fertility preservation strategies [SUMMARY]
[CONTENT] chemotherapy radiotherapy fertility | therapy prospect infertility | radiotherapy fertility preservation | preserve fertility | fertility preservation strategies [SUMMARY]
[CONTENT] chemotherapy radiotherapy fertility | therapy prospect infertility | radiotherapy fertility preservation | preserve fertility | fertility preservation strategies [SUMMARY]
[CONTENT] ovarian | tissue | ovarian tissue | patients | stimulation | oocytes | fertility | ovarian stimulation | patient | protocol [SUMMARY]
[CONTENT] ovarian | tissue | ovarian tissue | patients | stimulation | oocytes | fertility | ovarian stimulation | patient | protocol [SUMMARY]
[CONTENT] ovarian | tissue | ovarian tissue | patients | stimulation | oocytes | fertility | ovarian stimulation | patient | protocol [SUMMARY]
[CONTENT] ovarian | tissue | ovarian tissue | patients | stimulation | oocytes | fertility | ovarian stimulation | patient | protocol [SUMMARY]
[CONTENT] ovarian | tissue | ovarian tissue | patients | stimulation | oocytes | fertility | ovarian stimulation | patient | protocol [SUMMARY]
[CONTENT] ovarian | tissue | ovarian tissue | patients | stimulation | oocytes | fertility | ovarian stimulation | patient | protocol [SUMMARY]
[CONTENT] ovarian | strategies | cryopreservation | oocytes | fertility | cancer patients | available | infertility | diagnosis | oocytes cryopreservation [SUMMARY]
[CONTENT] ovarian | protocol | freezing | tissue | phase | ovarian tissue | gnrh antagonist protocol | antagonist protocol | slow freezing protocol | slow freezing [SUMMARY]
[CONTENT] ovarian | oocytes | histological | hematoxylin eosin | hematoxylin | hormonal stimulation | stained | cases | eosin | tissue [SUMMARY]
[CONTENT] fertility | discussion | treatment | fertility preservation | preservation | patients | young cancer patients best | offered | preserving procedures | individual situation [SUMMARY]
[CONTENT] ovarian | tissue | ovarian tissue | fertility | protocol | stimulation | patients | oocytes | authors | freezing [SUMMARY]
[CONTENT] ovarian | tissue | ovarian tissue | fertility | protocol | stimulation | patients | oocytes | authors | freezing [SUMMARY]
[CONTENT] ||| ||| ||| ||| the day [SUMMARY]
[CONTENT] Fourteen ||| ||| 34-36 ||| [SUMMARY]
[CONTENT] 10 | 67% ||| ||| [SUMMARY]
[CONTENT] first ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| the day ||| Fourteen ||| ||| 34-36 ||| ||| 10 | 67% ||| ||| ||| first ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| the day ||| Fourteen ||| ||| 34-36 ||| ||| 10 | 67% ||| ||| ||| first ||| [SUMMARY]
Association of Osteoarthritis with Perfluorooctanoate and Perfluorooctane Sulfonate in NHANES 2003-2008.
23410534
Perfluorooctanoate (PFOA) and perfluorooctane sulfonate (PFOS) are persistent, synthetic industrial chemicals. Perfluorinated compounds are linked to health impacts that may be relevant to osteoarthritis, cartilage repair, and inflammatory responses.
BACKGROUND
We used multiple logistic regression to estimate associations between serum PFOA and PFOS concentrations and self-reported diagnosis of osteoarthritis in persons 20-84 years of age who participated in NHANES during 2003-2008. We adjusted for potential confounders including age, income, and race/ethnicity. Effects by sex were estimated using stratified models and interaction terms.
METHODS
Those in the highest exposure quartile had higher odds of osteoarthritis compared with those in the lowest quartile [odds ratio (OR) for PFOA = 1.55; 95% CI: 0.99, 2.43; OR for PFOS = 1.77; 95% CI: 1.05, 2.96]. When stratifying by sex, we found positive associations for women, but not men. Women in the highest quartiles of PFOA and PFOS exposure had higher odds of osteoarthritis compared with those in the lowest quartiles (OR for PFOA = 1.98; 95% CI: 1.24, 3.19 and OR for PFOS = 1.73; 95% CI: 0.97, 3.10).
RESULTS
Higher concentrations of serum PFOA were associated with osteoarthritis in women, but not men. PFOS was also associated with osteoarthritis in women only, though effect estimates for women were not significant. More research is needed to clarify potential differences in susceptibility between women and men with regard to possible effects of these and other endocrine-disrupting chemicals.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Alkanesulfonic Acids", "Caprylates", "Chromatography, High Pressure Liquid", "Endocrine Disruptors", "Environmental Exposure", "Female", "Fluorocarbons", "Humans", "Logistic Models", "Male", "Middle Aged", "Nutrition Surveys", "Osteoarthritis", "Prevalence", "Self Report", "Sex Characteristics", "Tandem Mass Spectrometry", "United States", "Young Adult" ]
3620767
null
null
Methods
NHANES is conducted by the National Center for Health Statistics [Centers for Disease Control and Prevention (CDC), Atlanta, GA], which selects approximately 5,000 representative study participants annually from the civilian, noninstitutionalized U.S. population. NHANES represents the most comprehensive attempt to understand human exposures to chemicals of concern (National Research Council 2006), and has been the sole data source for many cross-sectional studies of associations between chemical exposures and chronic disease. Participants are selected through a multistage probability sampling design. Each of the study participants undergoes a physical examination by a health professional, which includes measurement of height and weight, and completes a series of surveys to ascertain demographic, health, and nutrition information. Various biological samples are also collected for analysis from a random subset of study participants each year. Since 1999, NHANES has operated as a continuous annual survey with data released in 2-year cycles. Further details on study design are available from the CDC (2011a). NHANES was reviewed by the National Center for Health Statistics Ethics Review Board, and documented consent was obtained from participants. The variables used in our analysis are all publicly available through the CDC. Exposure. NHANES has annually assessed perfluorinated compounds since 2003 among a subsample of participants. Perfluorinated compound exposures are estimated by measuring the concentrations of 18 perfluorinated chemicals in serum samples collected from a random sample of one-third of the study participants ≥ 12 years of age (Kuklenyik et al. 2005). In summary, the CDC uses a solid-phase extraction method coupled to high-performance liquid chromatography–tandem mass spectrometry. The limits of detection for PFOA and PFOS are 0.1 and 0.2 μg/g, respectively. At the time of our analyses, laboratory data were available through the 2007–2008 NHANES cycle; we made use of information from 2003–2008 to increase the sample size. We restricted our analyses to persons 20–84 years of age, the group for which we had osteoarthritis status information and precise age information (in NHANES, the ages of all individuals ≥ 85 years of age are coded as 85 years to ensure anonymity). For categorical models of exposure to PFOA or PFOS, we assigned participants to four exposure categories based on distributions in the study population as a whole. The cut points for PFOA were as follows: quartile 1 (≤ 2.95 ng/mL), quartile 2 (> 2.95–4.22 ng/mL), quartile 3 (> 4.22–5.89 ng/mL), and quartile 4 (> 5.89 ng/mL). The cut points for PFOS were as follows: quartile 1 (≤ 8.56 ng/mL), quartile 2 (> 8.56–13.59 ng/mL), quartile 3 (> 13.59–20.97 ng/mL), and quartile 4 (> 20.97 ng/mL). Outcome. Information on the outcome of interest—osteoarthritis status—was collected by questionnaire via self-report. A previous study documented 81% agreement between a self-report of “definite” osteoarthritis and clinical confirmation (March et al. 1998), which suggests that osteoarthritis is likely to have been accurately reported in most cases. 
All NHANES participants ≥ 20 years of age were asked “Has a doctor or other health professional ever told you that you had arthritis?” Individuals who responded affirmatively were asked a follow-up question: “Which type of arthritis was it?” Possible answers to the latter question included rheumatoid arthritis, osteoarthritis, other type of arthritis, unknown type, and decline to answer the question. Individuals who indicated that a doctor had provided a diagnosis of arthritis, but who declined to answer the question about the type of arthritis or indicated that they did not know which type they had were classified as missing, and were excluded from the analyses. Those who indicated that they had rheumatoid arthritis or a form of arthritis other than rheumatoid or osteoarthritis were considered not to have osteoarthritis. Covariates. Information on potential confounders was obtained from publicly available NHANES data. Potential confounders were selected based on prior reports of associations with PFOA and PFOS exposure levels (Calafat et al. 2007; Nelson et al. 2010) and osteoarthritis (Anderson and Felson 1988; CDC 2011c). We assessed potential confounders as continuous variables unless otherwise noted, including age; sex (male vs. female); poverty status (a ratio of annual family income divided by the federal poverty threshold, calculated by the National Center for Health Statistics); self-reported race/ethnicity (Mexican American, non-Hispanic white, non-Hispanic black, or other, including other Hispanic and multiracial); daily fat and caloric intake (based on responses during the first of two 24-hr dietary recall surveys); body mass index [BMI; weight (kilograms)/height (meters) squared]; self-reported history of bone fractures of the hip, wrist, or spine (yes/no); self-reported participation in moderate or vigorous sports, fitness, or recreational physical activities (yes/no); self-reported smoking status (current, former, never); and for women, self-reported parity (0, 1, or ≥ 2 children). Interpretation of results should consider that further research is needed to disentangle the relationships among some of these covariates, exposure, and health outcome. For example, those with arthritis may engage in less physical activity. Statistical analysis. We used multivariable logistic regression to estimate associations between PFOA and PFOS and odds of osteoarthritis (yes/no). All analyses were conducted separately for PFOA and PFOS. First, we confirmed linear associations between exposure to PFOA or PFOS and odds of osteoarthritis using a test for linear trend. Then we developed models in which the exposures of interest, which were highly right-skewed, were treated as natural logarithm-transformed continuous variables. We developed separate models in which the exposures of interest were treated categorically. We first performed logistic regression with PFOA or PFOS and osteoarthritis without adjustment by any covariates to obtain crude estimates. We then adjusted for sociodemographic factors including age, poverty:income ratio, race/ethnicity, and sex. After eliminating highly correlated dietary and exercise variables, we performed backward model selection using likelihood ratio tests to build fully adjusted models including potential confounders that were statistically significant predictors of the outcome (p < 0.05). 
We present results for the association between PFOA or PFOS and osteoarthritis based on three models: a crude (unadjusted) model; a model adjusted for sociodemographic factors (age, race/ethnicity, and poverty:income ratio); and a fully adjusted model with adjustment for age, race/ethnicity, and poverty:income ratio, as well as variables found to be associated with osteoarthritis in the backward model selection. We used multiplicative interaction terms and stratified models to assess potential effect modification by sex, age (20–49 years or 50–84 years), and obesity status (BMI ≥ 30 or < 30). All models accounted for the complex, multistage sampling design of NHANES as recommended by the CDC (2011b). Stratum, cluster, and subsample weights were included in all logistic regression models using the SAS survey procedures applied in previous analyses of NHANES data (e.g., Meeker and Ferguson 2011; You et al. 2011). All analyses were performed using SAS statistical software version 9.2 (SAS Institute Inc., Cary, NC). Estimates were considered statistically significant based on two-tailed p-values < 0.05.
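To make the quartile coding and weighted modeling described above concrete, the following Python sketch (pandas/statsmodels) illustrates the general approach. It is a minimal illustration, not the authors' SAS code: the file name and column names (pfoa, osteoarthritis, race_ethnicity, sample_weight, and so on) are hypothetical placeholders, and plain frequency weights only approximate the stratified, clustered variance estimation of the SAS survey procedures used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical pre-merged analytic file; column names are illustrative, not NHANES variable names.
df = pd.read_csv("nhanes_2003_2008_subset.csv")

# PFOA quartile cut points reported in the Methods (ng/mL); quartile 1 is the reference group.
pfoa_bins = [-np.inf, 2.95, 4.22, 5.89, np.inf]
df["pfoa_q"] = pd.cut(df["pfoa"], bins=pfoa_bins, labels=["q1", "q2", "q3", "q4"])

# Design matrix: indicators for quartiles 2-4 plus the sociodemographic covariates
# of the partially adjusted model (age, poverty:income ratio, race/ethnicity).
X = pd.get_dummies(df["pfoa_q"], prefix="pfoa", drop_first=True).astype(float)
X["age"] = df["age"]
X["poverty_income_ratio"] = df["poverty_income_ratio"]
X = pd.concat([X, pd.get_dummies(df["race_ethnicity"], drop_first=True).astype(float)], axis=1)
X = sm.add_constant(X)

# Weighted logistic regression: freq_weights carries the subsample weights but does not
# reproduce the stratum/cluster design-based standard errors of the SAS survey procedures.
model = sm.GLM(df["osteoarthritis"].astype(float), X,
               family=sm.families.Binomial(),
               freq_weights=df["sample_weight"])
result = model.fit()
print(np.exp(result.params))  # exponentiated coefficients, i.e., odds ratios per term
```

The same pattern applies to PFOS with the cut points given above (8.56, 13.59, and 20.97 ng/mL), and the fully adjusted model would simply add the remaining covariates (smoking, BMI, physical activity, and fracture history) to the design matrix.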
Results
Of 15,562 individuals 20–84 years of age who participated in NHANES during 2003–2008, PFOA and PFOS exposure information was available for 4,562 individuals, and 4,102 of these individuals also had osteoarthritis status information. Participants with missing information for one or more model covariates (income, BMI, smoking, or history of bone fractures) were excluded. Approximately 6% (n = 243) of these 4,102 subjects had missing income information and were excluded from our analyses. BMI information was missing for about 1.3% of the remaining subjects (n = 55). Smoking information was missing for < 1% of subjects (n = 2, both of whom had already been excluded due to other missing information). Information on history of bone fractures was missing for one individual who had been excluded due to missing income information. Our study population included similar numbers of males and females and had a relatively even age distribution (Table 1); its characteristics were similar to those of the overall NHANES sample of 15,562 individuals who participated during the study time period (data not shown). Compared with females, males had higher exposures to both PFOA (33.4% higher, p < 0.001) and PFOS (38.1% higher, p < 0.001). Mean serum PFOA and PFOS concentrations also increased with age (p < 0.001), except for a small decline in PFOA in the oldest age group (70–84 years) compared with the next youngest group (Table 1). Exposures also differed by self-reported race/ethnicity for both PFOA and PFOS (p < 0.001), with the highest mean PFOA and PFOS concentrations in non-Hispanic whites and non-Hispanic blacks, respectively, and the lowest mean concentrations of both exposures in Mexican Americans. PFOA and PFOS exposures increased with socioeconomic status as indicated by the poverty:income ratio (PFOA: p = 0.012; PFOS: p = 0.202). Exposure levels generally also increased with BMI for both exposures, though average concentrations were lower in obese participants than in overweight participants. Differences in exposure by smoking status were small, with levels for current smokers 9.0% higher and 5.0% lower than for never-smokers for PFOA and PFOS, respectively. Table 1 title: Characteristics of the study population. Osteoarthritis cases were more likely to be female, older, non-Hispanic white, of higher income, and of higher BMI than noncases. Exposure to PFOA and PFOS differed by osteoarthritis status, with cases having higher levels than noncases. The survey-weighted mean PFOA exposures for cases and noncases were 5.39 ng/mL (95% CI: 4.91, 5.87 ng/mL) and 4.87 ng/mL (95% CI: 4.59, 5.15 ng/mL), respectively. For PFOS, the survey-weighted mean exposures for cases and noncases were 24.57 ng/mL (95% CI: 21.49, 27.65 ng/mL) and 21.32 ng/mL (95% CI: 20.05, 22.59 ng/mL), respectively. In logistic regression models of all participants (males and females), continuous natural logarithm-transformed PFOA and PFOS exposures were positively associated with osteoarthritis without adjustment (Tables 2 and 3). However, associations were not statistically significant after full adjustment, and the OR for PFOS was attenuated toward the null. Comparing subjects in the highest quartile to the lowest quartile of serum PFOA and PFOS, we found significantly higher odds of osteoarthritis in the crude (unadjusted) models (Tables 2 and 3). The crude model for PFOA showed increased odds of osteoarthritis with higher exposure.
Those in the fourth quartile of PFOA exposure had 62% higher odds [odds ratio (OR) = 1.62; 95% CI: 1.10, 2.39] of osteoarthritis than those in the first quartile. The unadjusted association for PFOS showed some evidence of a dose–response relationship. Study participants in the third and fourth quartiles of PFOS exposure had 2.00 and 2.16 times higher odds of osteoarthritis than those in the first quartile (95% CI: 1.27, 3.17 and 1.37, 3.39), respectively. Table 2 title: Weighted associations between PFOA exposure and self-reported osteoarthritis in U.S. adults 20–84 years of age. Table 3 title: Weighted associations between PFOS exposure and self-reported osteoarthritis in U.S. adults 20–84 years of age. These results were generally robust to adjustment by covariates in the partially adjusted model (adjusting for age, sex, race/ethnicity, and income) and the fully adjusted model (adjusting for covariates from the partially adjusted model as well as smoking, BMI, physical activity, and history of bone fractures), although some results lost statistical significance. In our partially and fully adjusted models, those in the fourth quartiles of PFOA and PFOS exposure continued to have elevated odds of osteoarthritis compared to those in the first quartiles of exposure (Tables 2 and 3). After full adjustment, those in the highest quartile of PFOA exposure had a nonsignificant 1.55 times higher odds of osteoarthritis compared with those in the lowest quartile (95% CI: 0.99, 2.43). After full adjustment, those in the highest quartile of PFOS exposure had a 1.77 times higher odds of osteoarthritis compared with those in the lowest quartile (95% CI: 1.05, 2.96). In general, fully adjusted ORs were stronger for obese participants compared with non-obese participants [see Supplemental Material, Table S1 (http://dx.doi.org/10.1289/ehp.1205673)], though differences were not statistically significant. Stratified models by sex showed slightly stronger associations in women than in men (Figure 1). Fully adjusted ORs comparing the highest with the lowest quartile of PFOA exposure were 1.98 (95% CI: 1.24, 3.19) for women and 0.82 (95% CI: 0.40, 1.70) for men (Table 2). Corresponding ORs for PFOS were 1.73 (95% CI: 0.97, 3.10) for women and 1.56 (95% CI: 0.54, 4.53) for men (Table 3). These results were consistent with models with an interaction term for sex and exposure as a continuous variable, which also indicated stronger associations for females. In these interaction models, the odds of osteoarthritis were 1.51 times higher (p = 0.030) for women than men for PFOA, and 1.33 times higher (p = 0.097) for PFOS in the fully adjusted model. Interaction models based on quartiles of exposure showed that the odds of osteoarthritis comparing the fourth and first quartiles of exposure were 1.93 times higher (p = 0.032) for women than men for PFOA, whereas for PFOS exposure, the odds for women were 1.27 times higher than for men, although not statistically different (p = 0.403). Figure 1 caption: Associations between PFOA (A) and PFOS (B) exposure quartile (compared with the first quartile; reference) and odds of osteoarthritis, by sex. Points and vertical lines represent effect estimates and 95% CIs from fully adjusted, sex-stratified models, adjusting for age, race/ethnicity, poverty:income ratio, smoking, BMI, vigorous recreational activity, and prior fracture (hip, wrist, or spine).
Models stratified by age suggested stronger associations among those 20–49 years of age compared with older participants (50–84 years), both for men and women combined and among women [see Supplemental Material, Table S2 (http://dx.doi.org/10.1289/ehp.1205673)]. The OR comparing the highest with the lowest quartile of PFOA was 4.95 (95% CI: 1.27, 19.4) in younger women and 1.33 (95% CI: 0.82, 1.16) in older women; the corresponding ORs for PFOS were 4.99 (95% CI: 1.61, 15.4) in younger women and 1.30 (95% CI: 0.65, 2.60) in older women. Results were not statistically different between older and younger women, or between men and women, and many strata had small sample sizes. Younger women in the highest quartile of PFOA exposure had 4.95 times higher odds of osteoarthritis than those in the lowest quartile after full adjustment (95% CI: 1.27, 19.4). For PFOS, younger women showed a similar increase in the odds of osteoarthritis when comparing those in the highest quartile of exposure with those in the lowest quartile (adjusted OR = 4.99; 95% CI: 1.61, 15.4).
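As a brief numerical illustration of how the reported odds ratios and 95% confidence intervals relate to the underlying log-odds coefficients, the short snippet below exponentiates a coefficient and its Wald interval. The beta and standard-error values are hypothetical, chosen only so that the output roughly reproduces the fully adjusted PFOA estimate of 1.55 (95% CI: 0.99, 2.43) reported above; they are not taken from the paper's model output.

```python
import numpy as np

# Hypothetical log-odds coefficient and standard error for a fourth-quartile indicator.
beta, se = 0.44, 0.23

odds_ratio = np.exp(beta)            # OR = exp(beta)
ci_low = np.exp(beta - 1.96 * se)    # lower 95% Wald limit
ci_high = np.exp(beta + 1.96 * se)   # upper 95% Wald limit
print(f"OR = {odds_ratio:.2f} (95% CI: {ci_low:.2f}, {ci_high:.2f})")
# Prints approximately: OR = 1.55 (95% CI: 0.99, 2.44)
```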
Conclusion
Although production and use of PFOA and PFOS have declined due to safety concerns, human and environmental exposure to these chemicals remains widespread. A better understanding of the health effects of these chemicals and identification of any susceptible subpopulations could help to inform public health policies aimed at reducing exposures or associated health impacts. In this cross-sectional study of a representative sample of the adult U.S. population, PFOA and PFOS exposures were associated with higher prevalence of osteoarthritis, particularly in women. To our knowledge, the present analyses represent the first study of the association between perfluorinated compounds and osteoarthritis in a study population representative of the United States. Future prospective studies are needed to establish temporality and elucidate possible biological mechanisms. Reasons for differences in these associations between men and women, if confirmed, also need further exploration. If replicated, these findings would support reducing exposures to PFOA and PFOS, and perhaps other perfluoroalkyl acids (PFAAs), to reduce the prevalence of osteoarthritis in women, a group that is disproportionately affected by this common chronic disease.
[ "Supplemental Material" ]
[ "Click here for additional data file." ]
[ null ]
[ "Methods", "Results", "Discussion", "Conclusion", "Supplemental Material" ]
[ "NHANES is conducted by the National Center for Health Statistics [Centers for Disease Control and Prevention (CDC), Atlanta, GA], which selects approximately 5,000 representative study participants annually from the civilian, noninstitutionalized U.S. population. NHANES represents the most comprehensive attempt to understand human exposures to chemicals of concern (National Research Council 2006), and has been the sole data source for many cross-sectional studies of associations between chemical exposures and chronic disease. Participants are selected through a multistage probability sampling design. Each of the study participants undergoes a physical examination by a health professional, which includes measurement of height and weight, and completes a series of surveys to ascertain demographic, health, and nutrition information. Various biological samples are also collected for analysis from a random subset of study participants each year. Since 1999, NHANES has operated as a continuous annual survey with data released in 2-year cycles. Further details on study design are available from the CDC (2011a). NHANES was reviewed by the National Center for Health Statistics Ethics Review Board, and documented consent was obtained from participants. The variables used in our analysis are all publicly available through the CDC.\nExposure. NHANES has annually assessed perfluorinated compounds since 2003 among a subsample of participants. Perfluorinated compound exposures are estimated by measuring the concentrations of 18 perfluorinated chemicals in serum samples collected from a random sample of one-third of the study participants ≥ 12 years of age (Kuklenyik et al. 2005). In summary, the CDC uses a solid-phase extraction method coupled to high-performance liquid chromatography–tandem mass spectrometry. The limits of detection for PFOA and PFOS are 0.1 and 0.2 μg/g, respectively. At the time of our analyses, laboratory data were available through the 2007–2008 NHANES cycle; we made use of information from 2003–2008 to increase the sample size. We restricted our analyses to persons 20–84 years of age, the group for which we had osteoarthritis status information and precise age information (in NHANES, the ages of all individuals ≥ 85 years of age are coded as 85 years to ensure anonymity). For categorical models of exposure to PFOA or PFOS, we assigned participants to four exposure categories based on distributions in the study population as a whole. The cut points for PFOA were as follows: quartile 1 (≤ 2.95 ng/mL), quartile 2 (> 2.95–4.22 ng/mL), quartile 3 (> 4.22–5.89 ng/mL), and quartile 4 (> 5.89 ng/mL). The cut points for PFOS were as follows: quartile 1 (≤ 8.56 ng/mL), quartile 2 (> 8.56–13.59 ng/mL), quartile 3 (> 13.59–20.97 ng/mL), and quartile 4 (> 20.97 ng/mL).\nOutcome. Information on the outcome of interest—osteoarthritis status—was collected by questionnaire via self-report. A previous study documented 81% agreement between a self-report of “definite” osteoarthritis and clinical confirmation (March et al. 1998), which suggests that osteoarthritis is likely to have been accurately reported in most cases. 
All NHANES participants ≥ 20 years of age were asked “Has a doctor or other health professional ever told you that you had arthritis?” Individuals who responded affirmatively were asked a follow-up question: “Which type of arthritis was it?” Possible answers to the latter question included rheumatoid arthritis, osteoarthritis, other type of arthritis, unknown type, and decline to answer the question. Individuals who indicated that a doctor had provided a diagnosis of arthritis, but who declined to answer the question about the type of arthritis or indicated that they did not know which type they had were classified as missing, and were excluded from the analyses. Those who indicated that they had rheumatoid arthritis or a form of arthritis other than rheumatoid or osteoarthritis were considered not to have osteoarthritis.\nCovariates. Information on potential confounders was obtained from publicly available NHANES data. Potential confounders were selected based on prior reports of associations with PFOA and PFOS exposure levels (Calafat et al. 2007; Nelson et al. 2010) and osteoarthritis (Anderson and Felson 1988; CDC 2011c). We assessed potential confounders as continuous variables unless otherwise noted, including age; sex (male vs. female); poverty status (a ratio of annual family income divided by the federal poverty threshold, calculated by the National Center for Health Statistics); self-reported race/ethnicity (Mexican American, non-Hispanic white, non-Hispanic black, or other, including other Hispanic and multiracial); daily fat and caloric intake (based on responses during the first of two 24-hr dietary recall surveys); body mass index [BMI; weight (kilograms)/height (meters) squared]; self-reported history of bone fractures of the hip, wrist, or spine (yes/no); self-reported participation in moderate or vigorous sports, fitness, or recreational physical activities (yes/no); self-reported smoking status (current, former, never); and for women, self-reported parity (0, 1, or ≥ 2 children). Interpretation of results should consider that further research is needed to disentangle the relationships among some of these covariates, exposure, and health outcome. For example, those with arthritis may engage in less physical activity.\nStatistical analysis. We used multivariable logistic regression to estimate associations between PFOA and PFOS and odds of osteoarthritis (yes/no). All analyses were conducted separately for PFOA and PFOS. First, we confirmed linear associations between exposure to PFOA or PFOS and odds of osteoarthritis using a test for linear trend. Then we developed models in which the exposures of interest, which were highly right-skewed, were treated as natural logarithm-transformed continuous variables. We developed separate models in which the exposures of interest were treated categorically. We first performed logistic regression with PFOA or PFOS and osteoarthritis without adjustment by any covariates to obtain crude estimates. We then adjusted for sociodemographic factors including age, poverty:income ratio, race/ethnicity, and sex. 
After eliminating highly correlated dietary and exercise variables, we performed backward model selection using likelihood ratio tests to build fully adjusted models including potential confounders that were statistically significant predictors of the outcome (p < 0.05).\nWe present results for the association between PFOA or PFOS and osteoarthritis based on three models: a crude (unadjusted) model; a model adjusted for sociodemographic factors (age, race/ethnicity, and poverty:income ratio); and a fully adjusted model with adjustment for age, race/ethnicity, and poverty:income ratio, as well as variables selected to be associated with osteoarthritis based on the backward model selection.\nWe used multiplicative interaction terms and stratified models to assess potential effect modification by sex, age (29–49 years or 50–84 years), and obesity status (BMI ≥ 30 or < 30). All models accounted for the complex, multistage sampling design of NHANES as recommended by the CDC (2011b). Stratum, cluster, and subsample weights were included in all logistic regression models using SAS statistical software survey procedures used in previous analyses of NHANES data (e.g., Meeker and Ferguson 2011; You et al. 2011). All analyses were performed using SAS statistical software version 9.2 (SAS Institute Inc., Cary, NC). Estimates were considered statistically significant based on two-tailed p-values < 0.05.", "Of 15,562 individuals 20–84 years of age who participated in NHANES during 2003–2008, PFOA and PFOS exposure information was available for 4,562 individuals, and 4,102 of these individuals also had osteoarthritis status information. Participants with missing information for one or more model covariates (income, BMI, smoking, or history of bone fractures) were excluded. Approximately 6% (n = 243) of these 4,102 subjects had missing income information, and were excluded from our analyses. BMI information was missing for about 1.3% of the remaining subjects (n = 55). Smoking information was missing for < 1% of subjects (n = 2, both of whom had already been excluded due to other missing information). Information on history of bone fractures was missing for one individual who had been excluded due to missing income information.\nOur study population included similar numbers of males and females, and had a relatively even age distribution (Table 1), and characteristics were similar to the overall NHANES sample of 15,562 individuals who participated during the study time period (data not shown). Compared with females, males had higher exposures to both PFOA (33.4% higher, p < 0.001) and PFOS (38.1% higher, p < 0.001). Mean serum PFOA and PFOS concentrations also increased with age (p < 0.001), except for a small decline in PFOA in the oldest age group (70–84 years) compared with the next youngest group (Table 1). Exposures also differed by self-reported race/ethnicity for both PFOA and PFOS (p < 0.001), with the highest mean PFOA and PFOS concentrations in non-Hispanic whites and non-Hispanic blacks, respectively, and the lowest mean concentrations of both exposures in Mexican Americans. PFOA and PFOS exposures increased with socioeconomic status as indicated by the poverty/income ratio (PFOA: p = 0.012; PFOS: p = 0.202). Exposure levels generally also increased with BMI for both exposures, though average concentrations were lower in obese participants than in overweight participants. 
Differences in exposure by smoking status were small, with levels for current smokers 9.0% higher and 5.0% lower than for never-smokers for PFOA and PFOS, respectively.\nCharacteristics of study population.\nOsteoarthritis cases were more likely to be female, older, non-Hispanic white, of higher income, and of higher BMI than controls. Exposure to PFOA and PFOS differed by osteoarthritis status, with cases having higher levels than noncases. The survey-weighted mean PFOA exposures for cases and noncases were 5.39 ng/mL (95% CI: 4.91, 5.87 ng/mL) and 4.87 ng/mL (95% CI: 4.59, 5.15 ng/mL), respectively. For PFOS, the survey-weighted mean exposures for cases and non-cases were 24.57 ng/mL (95% CI: 21.49, 27.65 ng/mL) and 21.32 ng/mL (20.05, 22.59 ng/mL), respectively.\nIn logistic regression models of all participants (males and females), continuous natural logarithm-transformed PFOA and PFOS exposures were positively associated with osteoarthritis without adjustment (Tables 2 and 3). However, associations were not statistically significant after full adjustment, and the OR for PFOS was attenuated toward the null. Comparing subjects in the highest quartile to the lowest quartile of serum PFOA and PFOS, we found statistically significant higher odds of osteoarthritis in the crude (unadjusted) models (Tables 2 and 3). The crude model for PFOA showed increased odds of osteoarthritis with higher exposure. Those in the fourth quartile of PFOA exposure had 62% higher odds [odds ratio (OR) = 1.62; 95% CI: 1.10, 2.39] of osteoarthritis than those in the first quartile. The unadjusted association for PFOS showed some evidence of a dose–response relationship. Study participants in the third and fourth quartiles of PFOS exposure had 2.00 and 2.16 times higher odds of osteoarthritis than those in the first quartile (95% CI: 1.27, 3.17 and 1.37, 3.39), respectively.\nWeighted associations between PFOA exposure and self-reported osteoarthritis in U.S. adults 20–84 years of age.\nWeighted associations between PFOS exposure and self-reported osteoarthritis in U.S. adults 20–84 years of age.\nThese results were generally robust to adjustment by covariates in the partially adjusted model (adjusting for age, sex, race/ethnicity, and income) and the fully adjusted model (adjusting for covariates from the partially adjusted model as well as smoking, BMI, physical activity, and history of bone fractures), although some results lost statistical significance. In our partially and fully adjusted models, those in the fourth quartiles of PFOA and PFOS exposure continued to have elevated odds of osteoarthritis compared to those in the first quartiles of exposure (Tables 2 and 3). After full adjustment, those in the highest quartile of PFOA exposure had a nonsignificant 1.55 times higher odds of osteoarthritis compared with those in the lowest quartile (95% CI: 0.99, 2.43). After full adjustment, those in the highest quartile of PFOS exposure had a 1.77 times higher odds of osteoarthritis compared with those in the lowest quartile (95% CI: 1.05, 2.96).\nIn general, fully adjusted ORs were stronger for obese participants compared with non-obese participants [see Supplemental Material, Table S1 (http://dx.doi.org/10.1289/ehp.1205673)] though differences were not statistically significant.\nStratified models by sex showed slightly stronger associations in women than in men (Figure 1). 
Fully adjusted ORs comparing the highest with the lowest quartile of PFOA exposure were 1.98 (95% CI: 1.24, 3.19) for women and 0.82 (95% CI: 0.40, 1.70) for men (Table 2). Corresponding ORs for PFOS were 1.73 (95% CI: 0.97, 3.10) for women and 1.56 (95% CI: 0.54, 4.53) for men (Table 3). These results were consistent with models with an interaction term for sex and exposure as a continuous variable, which also indicated stronger associations for females. In these interaction models, the odds of osteoarthritis were 1.51 times higher (p = 0.030) for women than men for PFOA, and 1.33 times higher (p = 0.097) for PFOS in the fully adjusted model. Interaction models based on quartiles of exposure showed that the odds of osteoarthritis comparing the fourth and first quartiles of exposure were 1.93 times higher (p = 0.032) for women than men for PFOA, whereas for PFOS exposure, the odds for women were 1.27 times higher than for men, although not statistically different (p = 0.403).\nAssociations between PFOA (A) and PFOS (B) exposure quartile (compared with the first quartile; reference) and odds of osteoarthritis, by sex. Points and vertical lines represent effect estimates and 95% CIs from fully adjusted, sex-stratified models, adjusting for age, race/ethnicity, poverty:income ratio, smoking, BMI, vigorous recreational activity, and prior fracture (hip, wrist, or spine).\nModels stratified by age suggest stronger associations among those 20–49 years of age compared with older participants (50–84 years) for men and women combined, and among women [see Supplemental Material, Table S2 (http://dx.doi.org/10.1289/ehp.1205673)]. The ORs comparing the highest to the lowest quartile of PFOA was 4.95 (95% CI: 1.27, 19.4) in younger women, and 1.33 (95% CI: 0.82, 1.16) in older women, and corresponding ORs for PFOS were 4.99 (95% CI: 1.61, 15.4) in younger women, and 1.30 (95% CI: 0.65, 2.60) in older women. Results were not statistically different between older and younger women, or between men and women, and many strata had small sample sizes. Younger women in the highest quartile of PFOA exposure had 4.95 times higher odds of osteoarthritis compared to those in the lowest quartile of PFOA exposure, after full adjustment (95% CI: 1.27, 19.4). For PFOS, younger women showed a similar increase in the odds of osteoarthritis when comparing those in the highest quartile of exposure with those in the lowest quartile (adjusted OR = 4.99; 95% CI: 1.61, 15.4).", "We found statistically significant associations between PFOA and PFOS and osteoarthritis. Positive associations between both chemicals and osteoarthritis were observed in females, but not males, both before and after adjustment for potential confounders. Women with the highest levels of PFOA and PFOS appeared to have 1.98 and 1.73 times higher odds of osteoarthritis, respectively, compared with women in the lowest quartiles of exposure to these chemicals. We estimated stronger associations for younger women (20–49 years) than older women (50–84 years) although these results should be interpreted with caution because of the small numbers of osteoarthritis cases when stratified by sex and age. Innes et al. (2011) reported a stronger relationship between PFOA exposure and osteoarthritis in younger men and women compared with older men and women, for whom a diagnosis would likely have taken place closer to the time of blood sampling. 
This result and our observation of the strongest associations in younger women suggest the need for follow-up in future studies that could better assess exposure before diagnosis and investigate differences in susceptibility to PFCs and other endocrine-disrupting compounds before and after menopause.\nDifferences in PFOA exposure levels and study population characteristics complicate the comparison of our results with those of Innes et al. (2011). The PFOA exposure reference categories used by Innes et al. encompass the exposures of most of our study participants and U.S. residents in general. Therefore, their results do not imply the absence of low-dose, potentially nonmonotonic effects of PFOA in the U.S. general population. In contrast to our findings, Innes et al. observed a negative association for PFOS at exposure levels that were quite consistent with those in our study. Although sample size limited our ability to test for effect modification by age and obesity status, which was reported by Innes et al., we did not observe statistically significant differences according to age or obesity in our study population; however, there was some suggestion of stronger associations in younger women than older women, and among obese compared with non-obese participants. Further research is needed to determine whether differences between study populations might be explained by differences in exposure, such that very high PFOA exposures might modify effects of PFOS, for example, or by differences related to race/ethnicity or other characteristics that might modify effects of exposure.\nWe focused on potential effect modification by sex because of previous animal literature suggesting that effects of PFOA and PFOS on osteoarthritis might be hormonally mediated, as well as evidence that the chemicals might be excreted differently by males and females (Betts 2007). However, other studies did not identify differences in PFOA excretion according to sex (Bartell et al. 2010; Brede et al. 2010).\nAlthough the previous study (Innes et al. 2011) was able to focus on age and BMI differences in a population with very high levels of exposure to PFOA, the present study evaluated the associations between PFOA and PFOS and osteoarthritis among a representative sample of the U.S. population. Our findings suggest that females may be more susceptible than males to effects of perfluorinated compounds.\nThe biological mechanism(s) by which PFOA and PFOS may cause osteoarthritis are not known, but experimental findings suggest they have the potential to mimic and interact with endogenous hormones, increase the expression of proinflammatory cytokines, and bind with PPARs, which are relevant to biological processes that might influence etiology and progression of osteoarthritis. In particular, PFOA and PFOS can bind to PPAR-α and PPAR-γ (DeWitt et al. 2009; Vanden Heuvel et al. 2006), which are involved with regulation of glucose homeostasis, inflammation, and lipid metabolism and storage (Kersten et al. 2000). Very few studies have reported on sex differences between the associations of PFOA or PFOS and health outcomes in humans. However, a recently published prospective cohort study from Denmark that examined the association between in utero PFOA exposure and risk of overweight at 20 years of age found a statistically significant association for females but not for males (Halldorsson et al. 
2012).\nLimitations of this research include the relatively small sample size, the cross-sectional study design, exposure assessment at a single time point for each participant, self-reported information on the outcome of interest and several covariates, missing data for some study participants, and the lack of information about the date of osteoarthritis onset. The use of exercise as a potential confounder, while included in our fully adjusted model and incorporated in the analysis by Innes et al. (2011), warrants further investigation because arthritis may affect the ability to exercise. Additionally, potential effect modifiers should be examined, such as diabetes and many of the characteristics investigated here (e.g., obesity), with a larger sample size. Due to the relatively long half-lives of PFOA and PFOS (Olsen et al. 2007), the single serum samples likely provide reasonable estimates of long-term exposure. Any exposure misclassification would be random and would be unlikely to differ based on disease status. Still, the single serum measurements could represent exposures following osteoarthritis onset, which could have occurred many years before the survey.\nAnother potential limitation of our work is that samples were taken at a single time point for each participant, and measured concentrations in these samples may not reflect exposures during etiologically relevant time periods. Evidence from NHANES suggests that PFOS levels decreased in the U.S. population during the study period, whereas levels of PFOA have essentially remained stable (Kato et al. 2011). Thus, if past exposures are more relevant to osteoarthritis than recent exposures, associations based on current PFOS levels may underestimate potential effects.\nAlthough the breadth of variables included in NHANES enabled us to examine and adjust for many potential confounders, residual confounding and possible overadjustment could be sources of bias. Our inclusion of BMI and prior history of bone fractures, which could be on a causal pathway between endocrine-system disruption and development of osteoarthritis, may have introduced bias toward the null.\nAs new information about the health consequences of PFAAs emerges, patterns of production and usage are changing. Global production of PFOS has dropped considerably compared with 1999 levels following the primary manufacturer’s agreement to end production of the chemical, whereas global production of PFOA has increased during the same period (Lau et al. 2007). Recognizing the growing importance of PFOA, the U.S. EPA launched the PFOA Stewardship Program in 2006: Working with the eight leading manufacturers of PFOA, the U.S. EPA developed a goal of eliminating usage and emissions of the chemical by 2015 (Lau et al. 2007). As these compounds are being used less, at least in some parts of the world, newer PFAAs are entering the global marketplace, dominated by molecules with shorter carbon chains that may be less persistent (Betts 2007). These substitute compounds may present their own health and environmental hazards, and new evidence shows that certain substitutes can undergo chemical transformations in the environment yielding PFOS and PFOA (Betts 2007). 
Given the ongoing use of PFAAs and the global scale of human and environmental contamination, better understanding of the potential health effects of these chemicals, and of factors that might be used to identify susceptible subpopulations, could help to inform public health policies aimed at reducing exposures or associated health impacts. Future research could investigate the health impacts of newer PFAAs and the degree to which certain groups, such as women, may be particularly susceptible.", "Although production and use of PFOA and PFOS have declined due to safety concerns, human and environmental exposure to these chemicals remains widespread. Better understanding of the health effects of these chemicals and identifying any susceptible subpopulations could help to inform public health policies aimed at reducing exposures or associated health impacts. In this cross-sectional study of a representative sample of the adult U.S. population, PFOA and PFOS exposures were associated with higher prevalence of osteoarthritis, particularly in women. To our knowledge, the present analyses represent the first study of the association between perfluorinated compounds and osteoarthritis in a study population representative of the United States. Future prospective studies are needed to establish temporality and elucidate possible biological mechanisms. Reasons for differences in these associations between men and women, if confirmed, also need further exploration. If replicated, these findings would support reducing exposures to PFOA and PFOS, and perhaps other PFAAs, to reduce the prevalence of osteoarthritis in women, a group that is disproportionately affected by this common chronic disease.", "Click here for additional data file." ]
[ "methods", "results", "discussion", "conclusions", null ]
[ "hazardous substances", "osteoarthritis", "perfluorooctane sulfonate", "perfluorooctanoate", "public health" ]
Methods: NHANES is conducted by the National Center for Health Statistics [Centers for Disease Control and Prevention (CDC), Atlanta, GA], which selects approximately 5,000 representative study participants annually from the civilian, noninstitutionalized U.S. population. NHANES represents the most comprehensive attempt to understand human exposures to chemicals of concern (National Research Council 2006), and has been the sole data source for many cross-sectional studies of associations between chemical exposures and chronic disease. Participants are selected through a multistage probability sampling design. Each of the study participants undergoes a physical examination by a health professional, which includes measurement of height and weight, and completes a series of surveys to ascertain demographic, health, and nutrition information. Various biological samples are also collected for analysis from a random subset of study participants each year. Since 1999, NHANES has operated as a continuous annual survey with data released in 2-year cycles. Further details on study design are available from the CDC (2011a). NHANES was reviewed by the National Center for Health Statistics Ethics Review Board, and documented consent was obtained from participants. The variables used in our analysis are all publicly available through the CDC. Exposure. NHANES has annually assessed perfluorinated compounds since 2003 among a subsample of participants. Perfluorinated compound exposures are estimated by measuring the concentrations of 18 perfluorinated chemicals in serum samples collected from a random sample of one-third of the study participants ≥ 12 years of age (Kuklenyik et al. 2005). In summary, the CDC uses a solid-phase extraction method coupled to high-performance liquid chromatography–tandem mass spectrometry. The limits of detection for PFOA and PFOS are 0.1 and 0.2 μg/g, respectively. At the time of our analyses, laboratory data were available through the 2007–2008 NHANES cycle; we made use of information from 2003–2008 to increase the sample size. We restricted our analyses to persons 20–84 years of age, the group for which we had osteoarthritis status information and precise age information (in NHANES, the ages of all individuals ≥ 85 years of age are coded as 85 years to ensure anonymity). For categorical models of exposure to PFOA or PFOS, we assigned participants to four exposure categories based on distributions in the study population as a whole. The cut points for PFOA were as follows: quartile 1 (≤ 2.95 ng/mL), quartile 2 (> 2.95–4.22 ng/mL), quartile 3 (> 4.22–5.89 ng/mL), and quartile 4 (> 5.89 ng/mL). The cut points for PFOS were as follows: quartile 1 (≤ 8.56 ng/mL), quartile 2 (> 8.56–13.59 ng/mL), quartile 3 (> 13.59–20.97 ng/mL), and quartile 4 (> 20.97 ng/mL). Outcome. Information on the outcome of interest—osteoarthritis status—was collected by questionnaire via self-report. A previous study documented 81% agreement between a self-report of “definite” osteoarthritis and clinical confirmation (March et al. 1998), which suggests that osteoarthritis is likely to have been accurately reported in most cases. 
All NHANES participants ≥ 20 years of age were asked “Has a doctor or other health professional ever told you that you had arthritis?” Individuals who responded affirmatively were asked a follow-up question: “Which type of arthritis was it?” Possible answers to the latter question included rheumatoid arthritis, osteoarthritis, other type of arthritis, unknown type, and decline to answer the question. Individuals who indicated that a doctor had provided a diagnosis of arthritis, but who declined to answer the question about the type of arthritis or indicated that they did not know which type they had were classified as missing, and were excluded from the analyses. Those who indicated that they had rheumatoid arthritis or a form of arthritis other than rheumatoid or osteoarthritis were considered not to have osteoarthritis. Covariates. Information on potential confounders was obtained from publicly available NHANES data. Potential confounders were selected based on prior reports of associations with PFOA and PFOS exposure levels (Calafat et al. 2007; Nelson et al. 2010) and osteoarthritis (Anderson and Felson 1988; CDC 2011c). We assessed potential confounders as continuous variables unless otherwise noted, including age; sex (male vs. female); poverty status (a ratio of annual family income divided by the federal poverty threshold, calculated by the National Center for Health Statistics); self-reported race/ethnicity (Mexican American, non-Hispanic white, non-Hispanic black, or other, including other Hispanic and multiracial); daily fat and caloric intake (based on responses during the first of two 24-hr dietary recall surveys); body mass index [BMI; weight (kilograms)/height (meters) squared]; self-reported history of bone fractures of the hip, wrist, or spine (yes/no); self-reported participation in moderate or vigorous sports, fitness, or recreational physical activities (yes/no); self-reported smoking status (current, former, never); and for women, self-reported parity (0, 1, or ≥ 2 children). Interpretation of results should consider that further research is needed to disentangle the relationships among some of these covariates, exposure, and health outcome. For example, those with arthritis may engage in less physical activity. Statistical analysis. We used multivariable logistic regression to estimate associations between PFOA and PFOS and odds of osteoarthritis (yes/no). All analyses were conducted separately for PFOA and PFOS. First, we confirmed linear associations between exposure to PFOA or PFOS and odds of osteoarthritis using a test for linear trend. Then we developed models in which the exposures of interest, which were highly right-skewed, were treated as natural logarithm-transformed continuous variables. We developed separate models in which the exposures of interest were treated categorically. We first performed logistic regression with PFOA or PFOS and osteoarthritis without adjustment by any covariates to obtain crude estimates. We then adjusted for sociodemographic factors including age, poverty:income ratio, race/ethnicity, and sex. After eliminating highly correlated dietary and exercise variables, we performed backward model selection using likelihood ratio tests to build fully adjusted models including potential confounders that were statistically significant predictors of the outcome (p < 0.05). 
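As a rough illustration of the quartile coding and weighted logistic regression described above, the sketch below shows one way such a model could be set up in Python. It is a minimal sketch, not the authors' SAS code: the column names (pfoa, oa, age, race, pir, smoking, bmi, wt) are hypothetical, and passing the subsample weight as freq_weights approximates the weighting but does not reproduce the stratum/cluster (design-based) variance estimation performed by the SAS survey procedures.

```python
# Minimal sketch of a quartile-based, covariate-adjusted logistic model.
# Hypothetical column names; weights applied via freq_weights only, so the
# stratum/cluster design of NHANES is NOT accounted for here.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def pfoa_quartile_model(df: pd.DataFrame):
    df = df.copy()
    # Reported PFOA cut points (ng/mL): <= 2.95, > 2.95-4.22, > 4.22-5.89, > 5.89
    df["pfoa_q"] = pd.cut(df["pfoa"], bins=[-np.inf, 2.95, 4.22, 5.89, np.inf],
                          labels=["Q1", "Q2", "Q3", "Q4"])
    fit = smf.glm("oa ~ C(pfoa_q) + age + C(race) + pir + C(smoking) + bmi",
                  data=df, family=sm.families.Binomial(),
                  freq_weights=np.asarray(df["wt"])).fit()
    # Exponentiate coefficients to obtain odds ratios and 95% CIs (Q1 = reference).
    return np.exp(fit.params), np.exp(fit.conf_int())
```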
We present results for the association between PFOA or PFOS and osteoarthritis based on three models: a crude (unadjusted) model; a model adjusted for sociodemographic factors (age, race/ethnicity, and poverty:income ratio); and a fully adjusted model with adjustment for age, race/ethnicity, and poverty:income ratio, as well as variables selected to be associated with osteoarthritis based on the backward model selection. We used multiplicative interaction terms and stratified models to assess potential effect modification by sex, age (29–49 years or 50–84 years), and obesity status (BMI ≥ 30 or < 30). All models accounted for the complex, multistage sampling design of NHANES as recommended by the CDC (2011b). Stratum, cluster, and subsample weights were included in all logistic regression models using SAS statistical software survey procedures used in previous analyses of NHANES data (e.g., Meeker and Ferguson 2011; You et al. 2011). All analyses were performed using SAS statistical software version 9.2 (SAS Institute Inc., Cary, NC). Estimates were considered statistically significant based on two-tailed p-values < 0.05. Results: Of 15,562 individuals 20–84 years of age who participated in NHANES during 2003–2008, PFOA and PFOS exposure information was available for 4,562 individuals, and 4,102 of these individuals also had osteoarthritis status information. Participants with missing information for one or more model covariates (income, BMI, smoking, or history of bone fractures) were excluded. Approximately 6% (n = 243) of these 4,102 subjects had missing income information, and were excluded from our analyses. BMI information was missing for about 1.3% of the remaining subjects (n = 55). Smoking information was missing for < 1% of subjects (n = 2, both of whom had already been excluded due to other missing information). Information on history of bone fractures was missing for one individual who had been excluded due to missing income information. Our study population included similar numbers of males and females, and had a relatively even age distribution (Table 1), and characteristics were similar to the overall NHANES sample of 15,562 individuals who participated during the study time period (data not shown). Compared with females, males had higher exposures to both PFOA (33.4% higher, p < 0.001) and PFOS (38.1% higher, p < 0.001). Mean serum PFOA and PFOS concentrations also increased with age (p < 0.001), except for a small decline in PFOA in the oldest age group (70–84 years) compared with the next youngest group (Table 1). Exposures also differed by self-reported race/ethnicity for both PFOA and PFOS (p < 0.001), with the highest mean PFOA and PFOS concentrations in non-Hispanic whites and non-Hispanic blacks, respectively, and the lowest mean concentrations of both exposures in Mexican Americans. PFOA and PFOS exposures increased with socioeconomic status as indicated by the poverty/income ratio (PFOA: p = 0.012; PFOS: p = 0.202). Exposure levels generally also increased with BMI for both exposures, though average concentrations were lower in obese participants than in overweight participants. Differences in exposure by smoking status were small, with levels for current smokers 9.0% higher and 5.0% lower than for never-smokers for PFOA and PFOS, respectively. Characteristics of study population. Osteoarthritis cases were more likely to be female, older, non-Hispanic white, of higher income, and of higher BMI than controls. 
Exposure to PFOA and PFOS differed by osteoarthritis status, with cases having higher levels than noncases. The survey-weighted mean PFOA exposures for cases and noncases were 5.39 ng/mL (95% CI: 4.91, 5.87 ng/mL) and 4.87 ng/mL (95% CI: 4.59, 5.15 ng/mL), respectively. For PFOS, the survey-weighted mean exposures for cases and non-cases were 24.57 ng/mL (95% CI: 21.49, 27.65 ng/mL) and 21.32 ng/mL (20.05, 22.59 ng/mL), respectively. In logistic regression models of all participants (males and females), continuous natural logarithm-transformed PFOA and PFOS exposures were positively associated with osteoarthritis without adjustment (Tables 2 and 3). However, associations were not statistically significant after full adjustment, and the OR for PFOS was attenuated toward the null. Comparing subjects in the highest quartile to the lowest quartile of serum PFOA and PFOS, we found statistically significant higher odds of osteoarthritis in the crude (unadjusted) models (Tables 2 and 3). The crude model for PFOA showed increased odds of osteoarthritis with higher exposure. Those in the fourth quartile of PFOA exposure had 62% higher odds [odds ratio (OR) = 1.62; 95% CI: 1.10, 2.39] of osteoarthritis than those in the first quartile. The unadjusted association for PFOS showed some evidence of a dose–response relationship. Study participants in the third and fourth quartiles of PFOS exposure had 2.00 and 2.16 times higher odds of osteoarthritis than those in the first quartile (95% CI: 1.27, 3.17 and 1.37, 3.39), respectively. Weighted associations between PFOA exposure and self-reported osteoarthritis in U.S. adults 20–84 years of age. Weighted associations between PFOS exposure and self-reported osteoarthritis in U.S. adults 20–84 years of age. These results were generally robust to adjustment by covariates in the partially adjusted model (adjusting for age, sex, race/ethnicity, and income) and the fully adjusted model (adjusting for covariates from the partially adjusted model as well as smoking, BMI, physical activity, and history of bone fractures), although some results lost statistical significance. In our partially and fully adjusted models, those in the fourth quartiles of PFOA and PFOS exposure continued to have elevated odds of osteoarthritis compared to those in the first quartiles of exposure (Tables 2 and 3). After full adjustment, those in the highest quartile of PFOA exposure had a nonsignificant 1.55 times higher odds of osteoarthritis compared with those in the lowest quartile (95% CI: 0.99, 2.43). After full adjustment, those in the highest quartile of PFOS exposure had a 1.77 times higher odds of osteoarthritis compared with those in the lowest quartile (95% CI: 1.05, 2.96). In general, fully adjusted ORs were stronger for obese participants compared with non-obese participants [see Supplemental Material, Table S1 (http://dx.doi.org/10.1289/ehp.1205673)] though differences were not statistically significant. Stratified models by sex showed slightly stronger associations in women than in men (Figure 1). Fully adjusted ORs comparing the highest with the lowest quartile of PFOA exposure were 1.98 (95% CI: 1.24, 3.19) for women and 0.82 (95% CI: 0.40, 1.70) for men (Table 2). Corresponding ORs for PFOS were 1.73 (95% CI: 0.97, 3.10) for women and 1.56 (95% CI: 0.54, 4.53) for men (Table 3). These results were consistent with models with an interaction term for sex and exposure as a continuous variable, which also indicated stronger associations for females. 
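A short sketch of how the sex-by-exposure interaction term mentioned here could be specified follows; it reuses the hypothetical column names and weighting caveat from the earlier sketch, with the continuous exposure natural log-transformed as in the models above.

```python
# Illustrative only: multiplicative interaction between sex and ln(PFOA),
# mirroring the continuous-exposure interaction models described in the text.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def sex_interaction_model(df):
    df = df.assign(log_pfoa=np.log(df["pfoa"]))
    # C(sex) * log_pfoa expands to both main effects plus the interaction term.
    return smf.glm("oa ~ C(sex) * log_pfoa + age + C(race) + pir + C(smoking) + bmi",
                   data=df, family=sm.families.Binomial(),
                   freq_weights=np.asarray(df["wt"])).fit()
```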
In these interaction models, the odds of osteoarthritis were 1.51 times higher (p = 0.030) for women than men for PFOA, and 1.33 times higher (p = 0.097) for PFOS in the fully adjusted model. Interaction models based on quartiles of exposure showed that the odds of osteoarthritis comparing the fourth and first quartiles of exposure were 1.93 times higher (p = 0.032) for women than men for PFOA, whereas for PFOS exposure, the odds for women were 1.27 times higher than for men, although not statistically different (p = 0.403). Associations between PFOA (A) and PFOS (B) exposure quartile (compared with the first quartile; reference) and odds of osteoarthritis, by sex. Points and vertical lines represent effect estimates and 95% CIs from fully adjusted, sex-stratified models, adjusting for age, race/ethnicity, poverty:income ratio, smoking, BMI, vigorous recreational activity, and prior fracture (hip, wrist, or spine). Models stratified by age suggest stronger associations among those 20–49 years of age compared with older participants (50–84 years) for men and women combined, and among women [see Supplemental Material, Table S2 (http://dx.doi.org/10.1289/ehp.1205673)]. The ORs comparing the highest to the lowest quartile of PFOA was 4.95 (95% CI: 1.27, 19.4) in younger women, and 1.33 (95% CI: 0.82, 1.16) in older women, and corresponding ORs for PFOS were 4.99 (95% CI: 1.61, 15.4) in younger women, and 1.30 (95% CI: 0.65, 2.60) in older women. Results were not statistically different between older and younger women, or between men and women, and many strata had small sample sizes. Younger women in the highest quartile of PFOA exposure had 4.95 times higher odds of osteoarthritis compared to those in the lowest quartile of PFOA exposure, after full adjustment (95% CI: 1.27, 19.4). For PFOS, younger women showed a similar increase in the odds of osteoarthritis when comparing those in the highest quartile of exposure with those in the lowest quartile (adjusted OR = 4.99; 95% CI: 1.61, 15.4). Discussion: We found statistically significant associations between PFOA and PFOS and osteoarthritis. Positive associations between both chemicals and osteoarthritis were observed in females, but not males, both before and after adjustment for potential confounders. Women with the highest levels of PFOA and PFOS appeared to have 1.98 and 1.73 times higher odds of osteoarthritis, respectively, compared with women in the lowest quartiles of exposure to these chemicals. We estimated stronger associations for younger women (20–49 years) than older women (50–84 years) although these results should be interpreted with caution because of the small numbers of osteoarthritis cases when stratified by sex and age. Innes et al. (2011) reported a stronger relationship between PFOA exposure and osteoarthritis in younger men and women compared with older men and women, for whom a diagnosis would likely have taken place closer to the time of blood sampling. This result and our observation of the strongest associations in younger women suggest the need for follow-up in future studies that could better assess exposure before diagnosis and investigate differences in susceptibility to PFCs and other endocrine-disrupting compounds before and after menopause. Differences in PFOA exposure levels and study population characteristics complicate the comparison of our results with those of Innes et al. (2011). The PFOA exposure reference categories used by Innes et al. 
encompass the exposures of most of our study participants and U.S. residents in general. Therefore, their results do not imply the absence of low-dose, potentially nonmonotonic effects of PFOA in the U.S. general population. In contrast to our findings, Innes et al. observed a negative association for PFOS at exposure levels that were quite consistent with those in our study. Although sample size limited our ability to test for effect modification by age and obesity status, which was reported by Innes et al., we did not observe statistically significant differences according to age or obesity in our study population; however, there was some suggestion of stronger associations in younger women than older women, and among obese compared with non-obese participants. Further research is needed to determine whether differences between study populations might be explained by differences in exposure, such that very high PFOA exposures might modify effects of PFOS, for example, or by differences related to race/ethnicity or other characteristics that might modify effects of exposure. We focused on potential effect modification by sex because of previous animal literature suggesting that effects of PFOA and PFOS on osteoarthritis might be hormonally mediated, as well as evidence that the chemicals might be excreted differently by males and females (Betts 2007). However, other studies did not identify differences in PFOA excretion according to sex (Bartell et al. 2010; Brede et al. 2010). Although the previous study (Innes et al. 2011) was able to focus on age and BMI differences in a population with very high levels of exposure to PFOA, the present study evaluated the associations between PFOA and PFOS and osteoarthritis among a representative sample of the U.S. population. Our findings suggest that females may be more susceptible than males to effects of perfluorinated compounds. The biological mechanism(s) by which PFOA and PFOS may cause osteoarthritis are not known, but experimental findings suggest they have the potential to mimic and interact with endogenous hormones, increase the expression of proinflammatory cytokines, and bind with PPARs, which are relevant to biological processes that might influence etiology and progression of osteoarthritis. In particular, PFOA and PFOS can bind to PPAR-α and PPAR-γ (DeWitt et al. 2009; Vanden Heuvel et al. 2006), which are involved with regulation of glucose homeostasis, inflammation, and lipid metabolism and storage (Kersten et al. 2000). Very few studies have reported on sex differences between the associations of PFOA or PFOS and health outcomes in humans. However, a recently published prospective cohort study from Denmark that examined the association between in utero PFOA exposure and risk of overweight at 20 years of age found a statistically significant association for females but not for males (Halldorsson et al. 2012). Limitations of this research include the relatively small sample size, the cross-sectional study design, exposure assessment at a single time point for each participant, self-reported information on the outcome of interest and several covariates, missing data for some study participants, and the lack of information about the date of osteoarthritis onset. The use of exercise as a potential confounder, while included in our fully adjusted model and incorporated in the analysis by Innes et al. (2011), warrants further investigation because arthritis may affect the ability to exercise. 
Additionally, potential effect modifiers should be examined, such as diabetes and many of the characteristics investigated here (e.g., obesity), with a larger sample size. Due to the relatively long half-lives of PFOA and PFOS (Olsen et al. 2007), the single serum samples likely provide reasonable estimates of long-term exposure. Any exposure misclassification would be random and would be unlikely to differ based on disease status. Still, the single serum measurements could represent exposures following osteoarthritis onset, which could have occurred many years before the survey. Another potential limitation of our work is that samples were taken at a single time point for each participant, and measured concentrations in these samples may not reflect exposures during etiologically relevant time periods. Evidence from NHANES suggests that PFOS levels decreased in the U.S. population during the study period, whereas levels of PFOA have essentially remained stable (Kato et al. 2011). Thus, if past exposures are more relevant to osteoarthritis than recent exposures, associations based on current PFOS levels may underestimate potential effects. Although the breadth of variables included in NHANES enabled us to examine and adjust for many potential confounders, residual confounding and possible overadjustment could be sources of bias. Our inclusion of BMI and prior history of bone fractures, which could be on a causal pathway between endocrine-system disruption and development of osteoarthritis, may have introduced bias toward the null. As new information about the health consequences of PFAAs emerges, patterns of production and usage are changing. Global production of PFOS has dropped considerably compared with 1999 levels following the primary manufacturer’s agreement to end production of the chemical, whereas global production of PFOA has increased during the same period (Lau et al. 2007). Recognizing the growing importance of PFOA, the U.S. EPA launched the PFOA Stewardship Program in 2006: Working with the eight leading manufacturers of PFOA, the U.S. EPA developed a goal of eliminating usage and emissions of the chemical by 2015 (Lau et al. 2007). As these compounds are being used less, at least in some parts of the world, newer PFAAs are entering the global marketplace, dominated by molecules with shorter carbon chains that may be less persistent (Betts 2007). These substitute compounds may present their own health and environmental hazards, and new evidence shows that certain substitutes can undergo chemical transformations in the environment yielding PFOS and PFOA (Betts 2007). Given the ongoing use of PFAAs and the global scale of human and environmental contamination, better understanding of the potential health effects of these chemicals, and of factors that might be used to identify susceptible subpopulations, could help to inform public health policies aimed at reducing exposures or associated health impacts. Future research could investigate the health impacts of newer PFAAs and the degree to which certain groups, such as women, may be particularly susceptible. Conclusion: Although production and use of PFOA and PFOS have declined due to safety concerns, human and environmental exposure to these chemicals remains widespread. Better understanding of the health effects of these chemicals and identifying any susceptible subpopulations could help to inform public health policies aimed at reducing exposures or associated health impacts. 
In this cross-sectional study of a representative sample of the adult U.S. population, PFOA and PFOS exposures were associated with higher prevalence of osteoarthritis, particularly in women. To our knowledge, the present analyses represent the first study of the association between perfluorinated compounds and osteoarthritis in a study population representative of the United States. Future prospective studies are needed to establish temporality and elucidate possible biological mechanisms. Reasons for differences in these associations between men and women, if confirmed, also need further exploration. If replicated, these findings would support reducing exposures to PFOA and PFOS, and perhaps other PFAAs, to reduce the prevalence of osteoarthritis in women, a group that is disproportionately affected by this common chronic disease. Supplemental Material: Click here for additional data file.
Background: Perfluorooctanoate (PFOA) and perfluorooctane sulfonate (PFOS) are persistent, synthetic industrial chemicals. Perfluorinated compounds are linked to health impacts that may be relevant to osteoarthritis, cartilage repair, and inflammatory responses. Methods: We used multiple logistic regression to estimate associations between serum PFOA and PFOS concentrations and self-reported diagnosis of osteoarthritis in persons 20-84 years of age who participated in NHANES during 2003-2008. We adjusted for potential confounders including age, income, and race/ethnicity. Effects by sex were estimated using stratified models and interaction terms. Results: Those in the highest exposure quartile had higher odds of osteoarthritis compared with those in the lowest quartile [odds ratio (OR) for PFOA = 1.55; 95% CI: 0.99, 2.43; OR for PFOS = 1.77; 95% CI: 1.05, 2.96]. When stratifying by sex, we found positive associations for women, but not men. Women in the highest quartiles of PFOA and PFOS exposure had higher odds of osteoarthritis compared with those in the lowest quartiles (OR for PFOA = 1.98; 95% CI: 1.24, 3.19 and OR for PFOS = 1.73; 95% CI: 0.97, 3.10). Conclusions: Higher concentrations of serum PFOA were associated with osteoarthritis in women, but not men. PFOS was also associated with osteoarthritis in women only, though effect estimates for women were not significant. More research is needed to clarify potential differences in susceptibility between women and men with regard to possible effects of these and other endocrine-disrupting chemicals.
null
null
4,626
301
[ 7 ]
5
[ "pfoa", "pfos", "osteoarthritis", "exposure", "pfoa pfos", "women", "study", "age", "quartile", "exposures" ]
[ "cdc 2011a nhanes", "overall nhanes sample", "nhanes annually assessed", "methods nhanes conducted", "exposure nhanes" ]
null
null
null
[CONTENT] hazardous substances | osteoarthritis | perfluorooctane sulfonate | perfluorooctanoate | public health [SUMMARY]
[CONTENT] hazardous substances | osteoarthritis | perfluorooctane sulfonate | perfluorooctanoate | public health [SUMMARY]
[CONTENT] hazardous substances | osteoarthritis | perfluorooctane sulfonate | perfluorooctanoate | public health [SUMMARY]
[CONTENT] hazardous substances | osteoarthritis | perfluorooctane sulfonate | perfluorooctanoate | public health [SUMMARY]
null
null
[CONTENT] Adult | Aged | Aged, 80 and over | Alkanesulfonic Acids | Caprylates | Chromatography, High Pressure Liquid | Endocrine Disruptors | Environmental Exposure | Female | Fluorocarbons | Humans | Logistic Models | Male | Middle Aged | Nutrition Surveys | Osteoarthritis | Prevalence | Self Report | Sex Characteristics | Tandem Mass Spectrometry | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Alkanesulfonic Acids | Caprylates | Chromatography, High Pressure Liquid | Endocrine Disruptors | Environmental Exposure | Female | Fluorocarbons | Humans | Logistic Models | Male | Middle Aged | Nutrition Surveys | Osteoarthritis | Prevalence | Self Report | Sex Characteristics | Tandem Mass Spectrometry | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Alkanesulfonic Acids | Caprylates | Chromatography, High Pressure Liquid | Endocrine Disruptors | Environmental Exposure | Female | Fluorocarbons | Humans | Logistic Models | Male | Middle Aged | Nutrition Surveys | Osteoarthritis | Prevalence | Self Report | Sex Characteristics | Tandem Mass Spectrometry | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Alkanesulfonic Acids | Caprylates | Chromatography, High Pressure Liquid | Endocrine Disruptors | Environmental Exposure | Female | Fluorocarbons | Humans | Logistic Models | Male | Middle Aged | Nutrition Surveys | Osteoarthritis | Prevalence | Self Report | Sex Characteristics | Tandem Mass Spectrometry | United States | Young Adult [SUMMARY]
null
null
[CONTENT] cdc 2011a nhanes | overall nhanes sample | nhanes annually assessed | methods nhanes conducted | exposure nhanes [SUMMARY]
[CONTENT] cdc 2011a nhanes | overall nhanes sample | nhanes annually assessed | methods nhanes conducted | exposure nhanes [SUMMARY]
[CONTENT] cdc 2011a nhanes | overall nhanes sample | nhanes annually assessed | methods nhanes conducted | exposure nhanes [SUMMARY]
[CONTENT] cdc 2011a nhanes | overall nhanes sample | nhanes annually assessed | methods nhanes conducted | exposure nhanes [SUMMARY]
null
null
[CONTENT] pfoa | pfos | osteoarthritis | exposure | pfoa pfos | women | study | age | quartile | exposures [SUMMARY]
[CONTENT] pfoa | pfos | osteoarthritis | exposure | pfoa pfos | women | study | age | quartile | exposures [SUMMARY]
[CONTENT] pfoa | pfos | osteoarthritis | exposure | pfoa pfos | women | study | age | quartile | exposures [SUMMARY]
[CONTENT] pfoa | pfos | osteoarthritis | exposure | pfoa pfos | women | study | age | quartile | exposures [SUMMARY]
null
null
[CONTENT] nhanes | osteoarthritis | arthritis | age | models | quartile | ng ml | ml | ng | participants [SUMMARY]
[CONTENT] 95 ci | ci | 95 | pfoa | quartile | pfos | exposure | higher | osteoarthritis | odds [SUMMARY]
[CONTENT] health | prevalence | prevalence osteoarthritis | study | osteoarthritis | pfoa | pfos | women | exposures | pfoa pfos [SUMMARY]
[CONTENT] pfoa | pfos | osteoarthritis | exposure | data file | file | additional data file | click | additional data | click additional [SUMMARY]
null
null
[CONTENT] PFOA | PFOS | 20-84 years of age | NHANES | 2003-2008 ||| ||| [SUMMARY]
[CONTENT] ||| PFOA | 1.55 | 95% | CI | 0.99 | 2.43 | PFOS | 1.77 | 95% | CI | 1.05 | 2.96 ||| ||| PFOA | PFOS | PFOA | 1.98 | 95% | CI | 1.24 | 3.19 | PFOS | 1.73 | 95% | CI | 0.97 | 3.10 [SUMMARY]
[CONTENT] PFOA ||| PFOS ||| [SUMMARY]
[CONTENT] PFOA | PFOS ||| ||| PFOA | PFOS | 20-84 years of age | NHANES | 2003-2008 ||| ||| ||| ||| PFOA | 1.55 | 95% | CI | 0.99 | 2.43 | PFOS | 1.77 | 95% | CI | 1.05 | 2.96 ||| ||| PFOA | PFOS | PFOA | 1.98 | 95% | CI | 1.24 | 3.19 | PFOS | 1.73 | 95% | CI | 0.97 | 3.10 ||| PFOA ||| PFOS ||| [SUMMARY]
null
Monitoring of Lassa virus infection in suspected and confirmed cases in Ondo State, Nigeria.
33014249
Lassa virus (LASV), the causative agent of Lassa fever (LF), an endemic acute viral haemorrhagic illness in Nigeria, is transmitted by direct contact with the rodent, contaminated food or household items. Person-to-person transmission also occurs and sexual transmission has been reported. Thus, this study investigated the presence of LASV in body fluids of suspected and confirmed cases.
INTRODUCTION
this was a cross-sectional study between March 2018 and April 2019 involving 112 consenting suspected and post ribavirin confirmed cases attending the Lassa fever treatment center in Ondo State. Whole blood was collected from 57 suspected and 29 confirmed cases. Other samples from confirmed cases were 5 each of High Vaginal Swab (HVS) and seminal fluid; 12 breast milk and 4 urine. All samples were analyzed using reverse transcription-PCR (RT-PCR) targeting the S-gene of LASV.
METHODS
analysis of whole blood by RT-PCR showed that 1/57 (1.8%) suspected and 1/29 (3.4%) confirmed post ribavirin treated cases were positive. While LASV was detected in 2/5 (40%) post ribavirin treated seminal fluids and 1/11 (8.3%) breast milk. However, LASV was not detected in any of the HVS and urine samples.
RESULTS
the detection of LASV in seminal fluid and breast milk of discharged post ribavirin treated cases suggests its persistence in these fluids of recovering Nigerians. The role of postnatal and sexual transmissions in the perennial outbreak of LF needs to be further evaluated.
CONCLUSION
[ "Adult", "Antiviral Agents", "Cross-Sectional Studies", "Disease Outbreaks", "Female", "Humans", "Lassa Fever", "Lassa virus", "Male", "Middle Aged", "Milk, Human", "Nigeria", "Reverse Transcriptase Polymerase Chain Reaction", "Ribavirin", "Semen" ]
7519794
Introduction
Lassa fever (LF) is an acute viral haemorrhagic illness endemic in the West African sub-region caused by Lassa virus (LASV) [1,2]. LASV is an enveloped, single-stranded RNA virus of the family Arenaviridae, genus Mammarenavirus, transmitted to humans via contact with food or household items contaminated with urine or faeces of multimammate rodents [1,3]. The natural host of LASV is the African rodent ‘Mastomys natalensis’, which lives close to human settlements [4]. However, evidence suggests that other rodent species such as the African wood mouse, ‘Hylomyscus pamfi’, captured from Nigeria and the Guinea mouse, ‘Mastomys erythroleucus’, captured from both Nigeria and Guinea may also be hosts for LASV [5]. The multimammate rodent quickly produces large numbers of offspring, tends to colonize human settlements, increasing the risk of rodent-human contact, and is found throughout the west, central and eastern parts of the African continent [6]. Once the rodent becomes a carrier, it excretes the virus throughout the rest of its lifetime through faeces and urine, creating ample opportunity for continued exposure within the environment where these rodents cohabit [7]. LASV nucleotide sequences have been established to cluster into six lineages (I-VI) based on geographical locations within the West African sub-region [8]. Lineages I, II and III circulate in Nigeria; lineage IV circulates in Guinea, Sierra Leone, Liberia, Mali, and Côte d'Ivoire [9-12]. Lineage V circulates in Mali/Ivory Coast [13] and lineage VI in Togo [8]. LASV is extremely virulent and highly infectious, affecting about 100,000-500,000 persons annually in West Africa, with approximately 5,000 deaths occurring yearly [14]. LF claims more lives than Ebola fever because its incidence is much higher [15]. About 80% of persons infected with the Lassa virus are asymptomatic [16]; in the remaining 20%, the illness manifests as a febrile illness of variable severity associated with multiple organ dysfunction, with or without haemorrhage. About 15-20% of hospitalized Lassa fever patients die from the illness. Ever since the identification of LASV in Nigeria in 1969, there have been several LF outbreaks in the country, with an increasing trend in morbidity/mortality, the number of states reporting the infection/disease and the case fatality rate (CFR) [17]. In 2016, 2017, 2018 and as at June 2019, incidences of LF were reported from 18 to 29 out of the 36 states, with CFRs of 12.7%, 9.7%, 24% and 22.6% respectively [17,18]. Yellow fever virus (YFV) and dengue virus (DENV) are also endemic in Nigeria, and the country is currently dealing with an active YFV outbreak, first reported from Kwara state in 2017. Three hundred and forty-one suspected cases of yellow fever were reported from 16 states in 2017, and six of these states had confirmed cases of yellow fever [19,20]. In September 2019 alone, the NCDC reported 243 suspected cases in 42 LGAs in five states, with 10 presumptive positives and one inconclusive [20]. Active dengue infections have also been documented from different parts of the country, such as Maiduguri, Ilorin, Ibadan, Ogbomoso, north-eastern Nigeria, and Jos [21-23]. Thus, it is important that every sample from suspected cases of VHFs is screened for YFV and DENV as differentials with LASV. Transmission of LASV from human to human has been established within the communities where the disease is endemic [24]. 
This also includes contact with the corpse of a LF case [24]. Few reports of nosocomial outbreaks in healthcare settings show a disease risk for healthcare workers, which probably were due to poor infection control practices. These include lack of appropriate personal protective equipment, use of contaminated items, failure to adequately disinfect between patients by hand washing. The poor practices facilitated transmission from patient-to-patient or to care givers resulting in high case fatality among healthcare workers [24,25]. It has been reported that the virus is excreted in urine for 3-9 weeks post infection, remains in the semen for up to 3 months after becoming infected and has also been detected 64 days after patient recovery and discharge from hospital [26,27]. Sexual transmission of LASV has also been reported. However, the extent of this transmission dynamic remains unknown and requires further evaluation in communities where this disease is endemic [16,28]. No study has proven the presence of LASV in breast milk, but the high level of viraemia suggests it may be possible [29]. Although, it was reported that the circulating strains of LASV appear to be very similar to those from previous years in the country [30], the transmission dynamics of this virus particularly in endemic regions are yet to be fully understood. Even though there is a risk for sexual and/or post-natal transmission of LASV, the persistence of LASV in semen, urine and other body fluids of Nigerians has not been reported. Therefore, this study investigated the presence of LASV, as well as YFV and DENV as differentials, in body fluids of suspected, confirmed and discharged cases to better understand its role in the perennial outbreaks of LF in Nigeria.
Methods
Study location/area: this study was carried out in Ondo State, Nigeria. The state has Akure as its capital and eighteen local government areas, the major ones being Akoko, Akure, Okitipupa, Ondo and Owo. Ondo State borders Ekiti State to the north, Kogi State to the northeast, Edo State to the east, Delta State to the southeast, Ogun State to the southwest and Osun State to the northwest, as highlighted in Figure 1 [31]. Ondo State has had repeated outbreaks with confirmed cases of Lassa fever within the study period, and confirmed cases from other states have also been traced to the state. All suspected cases from the different Local Government Areas (LGAs) within the state are referred to the Lassa fever treatment center at the Federal Medical Center (FMC), Owo for diagnosis and management. Study design and population: this was a cross-sectional study conducted between March 2018 and April 2019 with a total of 112 consenting individuals. These individuals comprised 57 (50.9%) suspected and 29 (25.9%) confirmed cases who were still on admission at the Lassa fever treatment center at the Federal Medical Center (FMC), Owo, Ondo State. An additional 26 individuals (23.2%) were outpatients living in the communities who had previously been treated with ribavirin and discharged from the Lassa fever treatment center but were still attending the hospital for post-discharge evaluations. Informed consent/ethical approval: ethical approval was obtained from the Institutional Review Board (IRB) of the Nigerian Institute of Medical Research (NIMR). Furthermore, approval was also obtained from the Ondo State Ministry of Health before commencement of the study. Only individuals who had given prior documented consent were included in this study. Specimen collection, transportation, handling and processing: the 112 consenting individuals who participated in this study comprised 66 (58.9%) males and 46 (41.1%) females with a mean age of 42.6 years (SD ± 6.8). Whole blood samples were collected from 57 suspected cases and 29 confirmed cases of Lassa fever. Other samples collected from discharged cases included 5 high vaginal swabs (HVS) and 5 seminal fluid samples, 12 breast milk samples and 4 urine samples. All samples were packaged in triplicate and transported in cold chain from Ondo State to the Centre for Human and Zoonotic Virology (CHAZVY), Central Research Laboratory, College of Medicine of the University of Lagos. Universal sample precautions and handling procedures were carried out as recommended by the United States Centers for Disease Control and Prevention [32]. All specimen transport containers were disinfected with 10% hypochlorite solution in an airtight glove box. Viral agents in specimen aliquots (undiluted and 1:10 dilution) were inactivated in guanidinium-thiocyanate-based lysis buffer at room temperature for 10 minutes before extraction of viral nucleic acid. Nucleic acid extraction and reverse transcriptase-polymerase chain reaction: viral nucleic acid from inactivated sample aliquots (undiluted and 1:10 dilution) was extracted using a mini spin column RNA extraction kit (Qiagen, Germantown, Maryland, United States) in a class IIA biological safety cabinet according to the manufacturer's instructions. After extraction of viral nucleic acid, the S segment of the RNA genome and the 3' and 5' non-coding regions of the nucleic acid of LASV, dengue and yellow fever viruses were amplified in discrete RT-PCRs with primers as listed in Table 1. 
Separate reaction mixtures for Lassa, dengue and yellow fever viruses were prepared and cycled as described in the Ambion AgPath-ID one-step RT-PCR kit protocol (Applied Bio-systems, Foster City, California, United States). The reaction was performed using the Applied Bio-systems 9700 thermal cycler with the following temperature profile: 50°C for 30 min and 95°C for 5 min, followed by 35 cycles of 95°C for 30 s, 55°C for 30 s, and 72°C for 30 s, with a final extension of 72°C for 5 min. Subsequently, PCR amplicons were subjected to 1.5% agarose gel electrophoresis with 1X SYBR® Safe DNA gel staining dye (Invitrogen, Carlsbad, California, United States) for 30 min at 120V/400mA, and images of amplicon bands under UV light were taken with BioDocAnalyze 2.0 (Biometra, Goettingen, Germany). The positive control used for the Lassa assays was a previously detected Lassa sample from Irrua, Edo State, Nigeria, with accession number GU481078 NIG 08-A47 2008 IRRUA, while those for the dengue and YFV assays were tissue culture-inactivated samples, both from the Virology Unit Laboratory of the Bernhard Nocht Institute of Tropical Medicine, Hamburg, Germany, through our collaborations.
Figure 1: map of Ondo State (study area) showing the two eco-climatic zones, local government areas and bordering states
Table 1: primers used for Lassa, dengue and yellow fevers investigation
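For orientation, the cycling profile quoted above can be written down as a simple data structure; the short sketch below is an illustration only (not instrument software) and estimates the programmed hold time, ignoring ramp rates.

```python
# RT-PCR temperature profile from the text, encoded as (temperature_C, seconds).
RT_AND_DENATURATION = [(50, 30 * 60), (95, 5 * 60)]
CYCLE = [(95, 30), (55, 30), (72, 30)]   # repeated for 35 cycles
FINAL_EXTENSION = [(72, 5 * 60)]

def nominal_run_minutes(n_cycles: int = 35) -> float:
    """Total programmed hold time in minutes (ramp times not included)."""
    seconds = sum(t for _, t in RT_AND_DENATURATION)
    seconds += n_cycles * sum(t for _, t in CYCLE)
    seconds += sum(t for _, t in FINAL_EXTENSION)
    return seconds / 60

# nominal_run_minutes() -> 92.5 minutes of programmed holds for a 35-cycle run.
```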
null
null
Conclusion
Data presented here established and confirmed the detection and persistence of LASV RNA in the seminal fluid and breast milk of discharged post ribavirin treated Lassa fever cases in Nigeria; The detection and persistence of LASV RNA in these body fluids of discharged cases in Nigeria is useful for clinical and public health reasons, based on the high mortality and morbidity rates and the perennial nature of LF in Nigeria; This study with the proven evidence of the detection of LASV RNA in breast milk of mothers who had recovered from LF disease, puts to rest the speculations on the detection of this virus in breast milk.
[ "Introduction", "Results", "Discussion", "Conclusion" ]
[ "Lassa fever (LF) is an acute viral haemorrhagic illness endemic in the West African sub-region caused by Lassa virus (LASV) [1,2]. LASV is an enveloped single stranded RNA virus of the family Arenaviridae, genus Mammarenavirus transmitted to humans via contact with food or household items contaminated with urine or faeces of multimmamate rodents [1,3]. The natural host of LASV is the African rodent ‘Mastomys natalensis’, which lives close to human settlements [4]. However, evidences suggest that other rodent species such as the African wood mouse, ‘Hylomyscus pamfi’ captured from Nigeria and the Guinea mouse, ‘Mastomys erythroleucus’ captured from both Nigeria and Guinea may also be hosts for LASV [5]. The multimammate rodent quickly produce a large number of offspring, tends to colonize human settlement increasing the risk of rodent - human contact and is found throughout the west, central and eastern parts of the African continent [6]. Once the rodent becomes a carrier, it will excrete the virus throughout the rest of its lifetime through faeces and urine creating ample opportunity for continued exposure within the environment where these rodents cohabit [7]. LASV nucleotide sequences have been established to cluster into six lineages (I-VI) based on geographical locations within the West African Sub-region [8]. Lineages I, II and III circulate in Nigeria, lineage IV circulates in Guinea, Sierra Leone, Liberia, Mali, and Côte d´Ivoire [9-12]. Lineage V circulates in Mali/Ivory Coast [13] and lineage VI in Togo [8].\nThe LASV is extremely virulent and highly infectious, affecting about 100,000-500,000 persons annually in West Africa with approximately 5,000 deaths occurring yearly [14]. LF claims more lives than Ebola fever because its incidence is much higher [15]. About 80% of persons infected with the Lassa virus are asymptomatic [16]; but in the remaining 20%, the illness manifests as a febrile illness of variable severity associated with multiple organ dysfunctions with or without hemorrhage. About 15-20% of hospitalized Lassa fever patients die from the illness. Ever since the identification of LASV in Nigeria in 1969, there have been several LF outbreaks in the country with an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) [17]. In year 2016, 2017, 2018 and as at June 2019, the incidences of LF were reported from 18 to 29 out of the 36 states with CFRs of 12.7%, 9.7%, 24% and 22.6% respectively [17,18]. Since yellow fever virus (YFV) and dengue virus (DENV) are also endemic in Nigeria with the country currently dealing with an active yellow fever virus (YFV) outbreak and this was first reported from Kwara state in 2017. Three hundred and forty-one suspected cases of yellow fever were reported from 16 states in 2017 and six of these states had confirmed cases of yellow fever [19,20].\nIn September 2019 alone, the NCDC reported 243 suspected cases in 42 LGA´s in five states with 10 presumptive positives and one inconclusive [20]. Active dengue infections have also been documented from different parts of the country such as Maiduguri, Ilorin, Ibadan, Ogbomoso, north-eastern Nigeria in Jos and Ibadan respectively [21-23]. Thus, it is important that every sample of suspected cases of VHFs is screened for YFV and DENV as differentials with LASV. Transmission of LASV from human to human has been established within the communities where the disease is endemic [24]. 
This also includes contact with the corpse of a LF case [24]. A few reports of nosocomial outbreaks in healthcare settings show a disease risk for healthcare workers, probably due to poor infection control practices. These include lack of appropriate personal protective equipment, use of contaminated items and failure to wash hands adequately between patients. These poor practices facilitated transmission from patient to patient or to caregivers, resulting in high case fatality among healthcare workers [24,25].\nIt has been reported that the virus is excreted in urine for 3-9 weeks post-infection, remains in the semen for up to 3 months after infection and has also been detected 64 days after patient recovery and discharge from hospital [26,27]. Sexual transmission of LASV has also been reported. However, the extent of this transmission dynamic remains unknown and requires further evaluation in communities where this disease is endemic [16,28]. No study has proven the presence of LASV in breast milk, but the high level of viraemia suggests it may be possible [29]. Although it was reported that the circulating strains of LASV appear to be very similar to those from previous years in the country [30], the transmission dynamics of this virus, particularly in endemic regions, are yet to be fully understood. Even though there is a risk for sexual and/or post-natal transmission of LASV, the persistence of LASV in semen, urine and other body fluids of Nigerians has not been reported. Therefore, this study investigated the presence of LASV, as well as YFV and DENV as differentials, in body fluids of suspected, confirmed and discharged cases to better understand its role in the perennial outbreaks of LF in Nigeria.", "Reverse transcriptase-polymerase chain reaction amplification and agarose gel analysis of Lassa, yellow fever and dengue viruses: all samples collected were analyzed for LASV RNA by RT-PCR. The expected amplicon band size of approximately 320 base pairs (bp) of the S segment of the RNA genome for LASV was detected by agarose gel electrophoresis analysis (Figure 2). The detected band size of the LASV amplicons matched that of the positive LASV controls, and no band was observed in the negative control lane (PCR-grade water) on the gel (Figure 2). However, none of the expected band sizes (~79 bp and ~405 bp) were detected for the dengue or yellow fever viruses (Figure 3, Figure 4). Analysis of whole blood by RT-PCR showed that 1/57 (1.8%) suspected and 1/29 (3.4%) confirmed ribavirin-treated cases still on admission at FMC, Owo were positive for LASV (Table 2). LASV was also detected in 1/11 (8.3%) breast milk samples and 2/5 (40%) post-ribavirin-treated seminal fluid samples from discharged individuals in the community. However, LASV was not detected in any of the HVS and urine samples (Table 2).\nRT-PCR Detection of S-Gene fragment of Lassa virus. The gel lanes represent neat (undiluted, N) and 1: 10 dilutions (D) of the RNA extracts used for RTPCR. Lassa positive samples were represented in lanes S1-S5. RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a 2008 outbreak positive sample (GU481078_NIG_08-A47_2008_IRRUA) was used as positive control (+ve CTRL)\nRT-PCR detection of dengue virus. The gel lanes represent neat (undiluted, N) and 1: 10 dilutions (D) of the RNA extracts used for RTPCR. Dengue negative samples were represented in lanes S1-S5.
RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a tissue culture inactivated sample of dengue virus from the virology unit laboratory of the Bernhard Nocht Institute of Tropical Medicine, Germany was used as positive control (+ve CTRL)\nRT-PCR detection of yellow fever virus. The gel lanes represent neat (undiluted, N) and 1:10 dilutions (D) of the RNA extracts used for RTPCR. Yellow fever negative samples were represented in lanes S1-S5. RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a tissue culture inactivated sample of 17D yellow fever strain from the virology unit laboratory of the Bernhard Nocht Institute of Tropical Medicine, Germany was used as positive control (+ve CTRL)\ndistribution of LASV positivity by category of cases", "Large scale outbreaks of LF have been reported in Nigeria since 2015 with the disease occurring all year round. The main preventive strategies in endemic areas are: effective rodent control in areas close to living environment; avoiding contacts with rodents and the consumption of it. Others include high index of suspicion for LF; prompt referral from such health facilities to treatment centers and proper use of infection control practices in health facilities [33-35]. Although LASV is found in virtually all body fluids and compartment during acute infection, little is known about the viral kinetics of LASV in body fluids other than blood after recovery from the disease [36]. The burden of the disease and the role of transmission through other body fluids other than blood after recovery still remains a puzzle in the country. The non-detection of both YFV and DENV from this set of suspected, confirmed and discharged cases of VHFs in Ondo State, implies that Lassa virus remains the major agent and driver of VHFs in the country. However, since there have been reported cases of YFV and DENV with documented evidences that these viruses are all endemic in the country, testing for these agents alongside with LASV should remain a continuum particularly as it has been documented that only 20% of the cases of VHF are confirmed to be caused by LASV [16]. The detection of LASV in seminal fluid and breast milk of discharged post ribavirin treated cases and its persistence in these fluids of recovering individuals has been established in this study as documented in previous studies.\nThe presence of LASV in the body fluids of discharged cases is useful for clinical and public health reasons in Nigeria. This is particularly important because of the high mortality and morbidity rates in addition to its perennial nature. Thus, counseling on the possibilities of LASV transmission from men after discharge from treatment centres to their sexual partners and from mothers to their offspring during breast feeding is strongly advocated. Safe sex practices including sexual abstinence and use of male or female latex condom as well as abstinence from breast feeding by nursing mothers after discharge should be encouraged among survivors. The implications of viral persistence in such immune sanctuaries are now being recognized as potential sources of new outbreaks through sexual transmission for a number of other emerging infectious viruses, including Lassa, ebola and Zika viruses [36-39]. Therefore, the identification of viral persistence in seminal fluid and breast milk draws critical attention to the need to follow LF survivors´ longitudinally after clearance of viremia. 
Thus, survivors should also be offered access to care and prevention, particularly with the possibility to test other body fluid other than blood after discharge from any LF treatment centre which provides them with possibilities to mitigate any risks that may occur. The testing of other body fluids of LF survivors should be considered as a possibility to be included for improved LF management and treatment network coordinated by the Nigerian Centre for Disease Control (NCDC).\nHowever, given these findings, the following questions still need to be addressed; are the LASV shed in seminal fluid and breast milk viable and infective, for how long and at what concentrations? It has been over 50 years since LASV was first identified during a 1969 outbreak in Nigeria, but the mechanisms underlying its pathogenesis and the role that host and viral factors play in the transmission dynamics and persistent outbreaks remain unclear. The answers to these questions have implications for ascertaining the risks for sexual and post-natal transmission and therefore, its effects on epidemiologic and transmission dynamics. Although, LASV has been detected in human semen and other body fluids, the extent to which LASV existence and replication occurs within these fluids remains unclear [34,35]. The shedding of LASV in the urine for 3 to 6 weeks and up to 3 months in seminal fluid, with risk for sexual transmission, prompting condom use in survivors had been well documented [26,28,40,41]. Though a previous study had no proven evidence of the presence of LASV in breast milk [29], this study documents a proven evidence of the detection of LASV in breast milk of mothers who had recovered from LF disease. This study therefore puts to rest the speculations on the detection of this virus in breast milk. Moreover, this study was limited by the small numbers of post ribavirin treated individuals who recovered and were discharged. Thus, the actual duration of shedding of LASV in the seminal fluid and breast milk of these individuals could not be ascertained with this study.", "Lassa fever outbreak remains a public health threat and it is a burden on the vulnerable population within and outside the endemic states in Nigeria. 
Thus, the continued monitoring of individuals that are discharged from the treatment centres in Nigeria and the role of postnatal and sexual transmissions in the perennial outbreak of LF in Nigeria needs to be further evaluated.\n What is known about this topic \n\n\nSeveral Lassa fever outbreaks had been witnessed in Nigeria since the discovery of the Lassa virus in 1969;\n\n\nThere has been an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) since year 2016;\n\n\nThe natural host of LASV is the multimammate African rodent ‘Mastomys natalensis’, which lives close to human settlement and transmitted to humans via contact with food or household items contaminated with urine or faeces of this rodent.\n What this study adds \n\n\nData presented here established and confirmed the detection and persistence of LASV RNA in the seminal fluid and breast milk of discharged post ribavirin treated Lassa fever cases in Nigeria;\n\n\nThe detection and persistence of LASV RNA in these body fluids of discharged cases in Nigeria is useful for clinical and public health reasons, based on the high mortality and morbidity rates and the perennial nature of LF in Nigeria;\n\n\nThis study with the proven evidence of the detection of LASV RNA in breast milk of mothers who had recovered from LF disease, puts to rest the speculations on the detection of this virus in breast milk." ]
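As a purely illustrative aside (not part of the original study), the gel readout described in the Results above reduces to checking an observed band against the expected amplicon size for each assay: approximately 320 bp for the LASV S segment, and roughly 79 bp and 405 bp for the dengue and yellow fever assays as listed in the text (the pairing of sizes with targets follows the order given there). The short Python sketch below encodes that check; the ±10% tolerance is an assumption made only for this example.

# Illustrative helper: does an observed gel band match a target's expected
# amplicon size? Sizes come from the Results text; the pairing of ~79 bp with
# DENV and ~405 bp with YFV follows the order given there, and the 10%
# tolerance is an assumption made for this example.
EXPECTED_BP = {
    "LASV (S segment)": 320,
    "DENV": 79,
    "YFV": 405,
}

def matches_target(observed_bp, target, tolerance=0.10):
    """Return True if an observed band lies within `tolerance` of the target's expected size."""
    expected = EXPECTED_BP[target]
    return abs(observed_bp - expected) <= tolerance * expected

if __name__ == "__main__":
    print(matches_target(320, "LASV (S segment)"))  # True, consistent with the LASV control in Figure 2
    print(matches_target(320, "YFV"))               # False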
[ null, null, null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion" ]
[ "Lassa fever (LF) is an acute viral haemorrhagic illness endemic in the West African sub-region caused by Lassa virus (LASV) [1,2]. LASV is an enveloped single stranded RNA virus of the family Arenaviridae, genus Mammarenavirus transmitted to humans via contact with food or household items contaminated with urine or faeces of multimmamate rodents [1,3]. The natural host of LASV is the African rodent ‘Mastomys natalensis’, which lives close to human settlements [4]. However, evidences suggest that other rodent species such as the African wood mouse, ‘Hylomyscus pamfi’ captured from Nigeria and the Guinea mouse, ‘Mastomys erythroleucus’ captured from both Nigeria and Guinea may also be hosts for LASV [5]. The multimammate rodent quickly produce a large number of offspring, tends to colonize human settlement increasing the risk of rodent - human contact and is found throughout the west, central and eastern parts of the African continent [6]. Once the rodent becomes a carrier, it will excrete the virus throughout the rest of its lifetime through faeces and urine creating ample opportunity for continued exposure within the environment where these rodents cohabit [7]. LASV nucleotide sequences have been established to cluster into six lineages (I-VI) based on geographical locations within the West African Sub-region [8]. Lineages I, II and III circulate in Nigeria, lineage IV circulates in Guinea, Sierra Leone, Liberia, Mali, and Côte d´Ivoire [9-12]. Lineage V circulates in Mali/Ivory Coast [13] and lineage VI in Togo [8].\nThe LASV is extremely virulent and highly infectious, affecting about 100,000-500,000 persons annually in West Africa with approximately 5,000 deaths occurring yearly [14]. LF claims more lives than Ebola fever because its incidence is much higher [15]. About 80% of persons infected with the Lassa virus are asymptomatic [16]; but in the remaining 20%, the illness manifests as a febrile illness of variable severity associated with multiple organ dysfunctions with or without hemorrhage. About 15-20% of hospitalized Lassa fever patients die from the illness. Ever since the identification of LASV in Nigeria in 1969, there have been several LF outbreaks in the country with an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) [17]. In year 2016, 2017, 2018 and as at June 2019, the incidences of LF were reported from 18 to 29 out of the 36 states with CFRs of 12.7%, 9.7%, 24% and 22.6% respectively [17,18]. Since yellow fever virus (YFV) and dengue virus (DENV) are also endemic in Nigeria with the country currently dealing with an active yellow fever virus (YFV) outbreak and this was first reported from Kwara state in 2017. Three hundred and forty-one suspected cases of yellow fever were reported from 16 states in 2017 and six of these states had confirmed cases of yellow fever [19,20].\nIn September 2019 alone, the NCDC reported 243 suspected cases in 42 LGA´s in five states with 10 presumptive positives and one inconclusive [20]. Active dengue infections have also been documented from different parts of the country such as Maiduguri, Ilorin, Ibadan, Ogbomoso, north-eastern Nigeria in Jos and Ibadan respectively [21-23]. Thus, it is important that every sample of suspected cases of VHFs is screened for YFV and DENV as differentials with LASV. Transmission of LASV from human to human has been established within the communities where the disease is endemic [24]. 
This also includes contact with the corpse of a LF case [24]. A few reports of nosocomial outbreaks in healthcare settings show a disease risk for healthcare workers, probably due to poor infection control practices. These include lack of appropriate personal protective equipment, use of contaminated items and failure to wash hands adequately between patients. These poor practices facilitated transmission from patient to patient or to caregivers, resulting in high case fatality among healthcare workers [24,25].\nIt has been reported that the virus is excreted in urine for 3-9 weeks post-infection, remains in the semen for up to 3 months after infection and has also been detected 64 days after patient recovery and discharge from hospital [26,27]. Sexual transmission of LASV has also been reported. However, the extent of this transmission dynamic remains unknown and requires further evaluation in communities where this disease is endemic [16,28]. No study has proven the presence of LASV in breast milk, but the high level of viraemia suggests it may be possible [29]. Although it was reported that the circulating strains of LASV appear to be very similar to those from previous years in the country [30], the transmission dynamics of this virus, particularly in endemic regions, are yet to be fully understood. Even though there is a risk for sexual and/or post-natal transmission of LASV, the persistence of LASV in semen, urine and other body fluids of Nigerians has not been reported. Therefore, this study investigated the presence of LASV, as well as YFV and DENV as differentials, in body fluids of suspected, confirmed and discharged cases to better understand its role in the perennial outbreaks of LF in Nigeria.", "Study location/area: this study was carried out in Ondo State, Nigeria. The state has Akure as its capital and eighteen local government areas, the major ones being Akoko, Akure, Okitipupa, Ondo and Owo. Ondo State borders Ekiti State to the north, Kogi State to the northeast, Edo State to the east, Delta State to the southeast, Ogun State to the southwest and Osun State to the northwest, as highlighted in Figure 1 [31]. Ondo State has had repeated outbreaks with confirmed cases of Lassa fever within the study period, and confirmed cases from other states have also been traced to the state. All suspected cases from the different Local Government Areas (LGAs) within the state are referred to the Lassa fever treatment center at the Federal Medical Center (FMC), Owo for diagnosis and management.\nStudy design and population: this was a cross-sectional study conducted between March 2018 and April 2019 with a total of 112 consenting individuals. These individuals comprised 57 (50.9%) suspected and 29 (25.9%) confirmed cases who were still on admission at the Lassa fever treatment center at the Federal Medical Center (FMC), Owo, Ondo State. An additional 26 individuals (23.2%) were outpatients living in the communities who had previously been treated with ribavirin and discharged from the Lassa fever treatment center but were still attending the hospital for post-discharge evaluations.\nInformed consent/ethical approval: ethical approval was obtained from the Institutional Review Board (IRB) of the Nigerian Institute of Medical Research (NIMR). Furthermore, approval was also obtained from the Ondo State Ministry of Health before commencement of the study.
Only individuals who had given prior documented consent were included in this study.\nSpecimen collection, transportation, handling and processing: the total number of one hundred and twelve (112) consenting individuals who participated in this study comprised of 66(58.9%) males and 46(41.1%) females with a mean age of 42.6 years (SD ± 6.8). Whole blood samples were collected from 57 suspected cases and 29 confirmed cases of Lassa fever. Other samples collected from discharged cases include 5 high vaginal swab (HVS) and seminal fluid each; 12 breast milk and 4 urine samples. All samples were packaged in triplicates and transported in cold-chain from Ondo State to the Centre for Human and Zoonotic Virology (CHAZVY), Central Research Laboratory, College of Medicine of the University of Lagos. Universal sample precautions and handling procedures were carried out as recommended by the United States Centers for Disease Control and Prevention [32]. All specimen transport containers were disinfected with 10% hypochlorite solution in an airtight glove box. Viral agents in specimen aliquots (undiluted and 1: 10 dilution) were inactivated in guanidinium-thiocyanate-based lysis buffer at room temperature for 10 minutes before extraction of viral nucleic acid.\nNucleic acid extraction and reverse transcriptase-polymerase chain reaction: the viral nucleic acid from inactivated sample aliquots (undiluted and 1: 10 dilution) were extracted using a mini spin column RNA extraction kit by Qiagen (Qiagen, Germantown, Maryland, United States) in a class IIA biological safety cabinet according to the manufacturer´s instructions. After the extraction of viral nucleic acid, S segment of the RNA genome, 3´ and 5´ non-coding regions of the nucleic acid of LASV, dengue and yellow fever viruses were amplified in discrete RT-PCRs with primers as listed in Table 1. Separate reaction mixtures for Lassa, dengue and yellow fever viruses were prepared and cycled as described in the one-step RT-PCR kit by AmbionAgPath-ID protocol (Applied Bio-systems, Foster City, California, United States). The reaction was performed using the 9700 applied bio-systems thermal cycler with the following temperature profile: 50°C for 30 min and 95°C for 5 min, followed by 35 cycles of 95°C for 30 s, 55°C for 30s, and 72°C for 30s with a final extension of 72°C for 5 min. Subsequently, PCR amplicons were subjected to 1.5% agarose gel electrophoresis with 1X SYBR® safe DNA gel staining dye (Invitrogen, Carlsbad, California, United States) for 30 min at 120V/400mA and images of amplicon bands under UV light were taken with BioDocAnalyze 2.0 (Biometra, Goettingen, Germany). The positive control used for Lassa assays were previously detected Lassa samples from Irrua, Edo State, Nigeria with accession number GU481078 NIG 08-A47 2008 IRRUA, while those for dengue and YFV assays were both tissue culture inactivated samples both from the Virology Unit Laboratory of the Bernhard Nocht Institute of Tropical Medicine, Hamburg, Germany through our collaborations.\nmap of Ondo State (study area) showing the two eco-climatic zones, local government areas and bordering states\nprimers used for Lassa, dengue and yellow fevers investigation", "Reverse transcriptase-polymerase chain reaction amplification and agarose gel analysis of Lassa, yellow fever and dengue viruses: all samples collected were analyzed for LASV RNA by RT-PCR. 
The expected amplicon band size of approximately 320 base pairs (bp) of the S segment of the RNA genome for LASV was detected by the agarose gel electrophoresis analysis (Figure 2). The detected band size of the LASV amplicons was at par with the positive LASV controls and no band was observed in the negative control lane on the gel picture (PCR-grade water) (Figure 2). However, none of the expected band sizes (~79bp) and (~405bp) were detected for dengue viruses or yellow fever (Figure 3,Figure 4). Analysis of whole blood by RT-PCR showed that 1/57(1.8%) suspected and 1/29(3.4%) confirmed ribavirin treated case still on admission at FMC, Owo were positive for LASV (Table 2). While LASV was still detected in 1/11(8.3%) breast milk and 2/5(40%) post ribavirin treated seminal fluids from discharged individuals in the community. However, LASV was not detected in any of the HVS and urine samples (Table 2).\nRT-PCR Detection of S-Gene fragment of Lassa virus. The gel lanes represent neat (undiluted, N) and 1: 10 dilutions (D) of the RNA extracts used for RTPCR. Lassa positive samples were represented in lanes S1-S5. RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a 2008 outbreak positive sample (GU481078_NIG_08-A47_2008_IRRUA) was used as positive control (+ve CTRL)\nRT-PCR detection of dengue virus. The gel lanes represent neat (undiluted, N) and 1: 10 dilutions (D) of the RNA extracts used for RTPCR. Dengue negative samples were represented in lanes S1-S5. RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a tissue culture inactivated sample of dengue virus from the virology unit laboratory of the Bernhard Nocht Institute of Tropical Medicine, Germany was used as positive control (+ve CTRL)\nRT-PCR detection of yellow fever virus. The gel lanes represent neat (undiluted, N) and 1:10 dilutions (D) of the RNA extracts used for RTPCR. Yellow fever negative samples were represented in lanes S1-S5. RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a tissue culture inactivated sample of 17D yellow fever strain from the virology unit laboratory of the Bernhard Nocht Institute of Tropical Medicine, Germany was used as positive control (+ve CTRL)\ndistribution of LASV positivity by category of cases", "Large scale outbreaks of LF have been reported in Nigeria since 2015 with the disease occurring all year round. The main preventive strategies in endemic areas are: effective rodent control in areas close to living environment; avoiding contacts with rodents and the consumption of it. Others include high index of suspicion for LF; prompt referral from such health facilities to treatment centers and proper use of infection control practices in health facilities [33-35]. Although LASV is found in virtually all body fluids and compartment during acute infection, little is known about the viral kinetics of LASV in body fluids other than blood after recovery from the disease [36]. The burden of the disease and the role of transmission through other body fluids other than blood after recovery still remains a puzzle in the country. The non-detection of both YFV and DENV from this set of suspected, confirmed and discharged cases of VHFs in Ondo State, implies that Lassa virus remains the major agent and driver of VHFs in the country. 
However, since there have been reported cases of YFV and DENV with documented evidences that these viruses are all endemic in the country, testing for these agents alongside with LASV should remain a continuum particularly as it has been documented that only 20% of the cases of VHF are confirmed to be caused by LASV [16]. The detection of LASV in seminal fluid and breast milk of discharged post ribavirin treated cases and its persistence in these fluids of recovering individuals has been established in this study as documented in previous studies.\nThe presence of LASV in the body fluids of discharged cases is useful for clinical and public health reasons in Nigeria. This is particularly important because of the high mortality and morbidity rates in addition to its perennial nature. Thus, counseling on the possibilities of LASV transmission from men after discharge from treatment centres to their sexual partners and from mothers to their offspring during breast feeding is strongly advocated. Safe sex practices including sexual abstinence and use of male or female latex condom as well as abstinence from breast feeding by nursing mothers after discharge should be encouraged among survivors. The implications of viral persistence in such immune sanctuaries are now being recognized as potential sources of new outbreaks through sexual transmission for a number of other emerging infectious viruses, including Lassa, ebola and Zika viruses [36-39]. Therefore, the identification of viral persistence in seminal fluid and breast milk draws critical attention to the need to follow LF survivors´ longitudinally after clearance of viremia. Thus, survivors should also be offered access to care and prevention, particularly with the possibility to test other body fluid other than blood after discharge from any LF treatment centre which provides them with possibilities to mitigate any risks that may occur. The testing of other body fluids of LF survivors should be considered as a possibility to be included for improved LF management and treatment network coordinated by the Nigerian Centre for Disease Control (NCDC).\nHowever, given these findings, the following questions still need to be addressed; are the LASV shed in seminal fluid and breast milk viable and infective, for how long and at what concentrations? It has been over 50 years since LASV was first identified during a 1969 outbreak in Nigeria, but the mechanisms underlying its pathogenesis and the role that host and viral factors play in the transmission dynamics and persistent outbreaks remain unclear. The answers to these questions have implications for ascertaining the risks for sexual and post-natal transmission and therefore, its effects on epidemiologic and transmission dynamics. Although, LASV has been detected in human semen and other body fluids, the extent to which LASV existence and replication occurs within these fluids remains unclear [34,35]. The shedding of LASV in the urine for 3 to 6 weeks and up to 3 months in seminal fluid, with risk for sexual transmission, prompting condom use in survivors had been well documented [26,28,40,41]. Though a previous study had no proven evidence of the presence of LASV in breast milk [29], this study documents a proven evidence of the detection of LASV in breast milk of mothers who had recovered from LF disease. This study therefore puts to rest the speculations on the detection of this virus in breast milk. 
Moreover, this study was limited by the small numbers of post ribavirin treated individuals who recovered and were discharged. Thus, the actual duration of shedding of LASV in the seminal fluid and breast milk of these individuals could not be ascertained with this study.", "Lassa fever outbreak remains a public health threat and it is a burden on the vulnerable population within and outside the endemic states in Nigeria. Thus, the continued monitoring of individuals that are discharged from the treatment centres in Nigeria and the role of postnatal and sexual transmissions in the perennial outbreak of LF in Nigeria needs to be further evaluated.\n What is known about this topic \n\n\nSeveral Lassa fever outbreaks had been witnessed in Nigeria since the discovery of the Lassa virus in 1969;\n\n\nThere has been an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) since year 2016;\n\n\nThe natural host of LASV is the multimammate African rodent ‘Mastomys natalensis’, which lives close to human settlement and transmitted to humans via contact with food or household items contaminated with urine or faeces of this rodent.\n What this study adds \n\n\nData presented here established and confirmed the detection and persistence of LASV RNA in the seminal fluid and breast milk of discharged post ribavirin treated Lassa fever cases in Nigeria;\n\n\nThe detection and persistence of LASV RNA in these body fluids of discharged cases in Nigeria is useful for clinical and public health reasons, based on the high mortality and morbidity rates and the perennial nature of LF in Nigeria;\n\n\nThis study with the proven evidence of the detection of LASV RNA in breast milk of mothers who had recovered from LF disease, puts to rest the speculations on the detection of this virus in breast milk." ]
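As a hedged illustration only (not part of the original study), the one-step RT-PCR temperature profile reported in the Methods above (50°C for 30 min, 95°C for 5 min, then 35 cycles of 95°C, 55°C and 72°C for 30 s each, with a final 72°C extension for 5 min) can be written down as data and used to estimate the nominal programmed run time, ignoring ramp rates and instrument overhead.

# Illustrative only: nominal run-time estimate for the one-step RT-PCR profile
# reported in the Methods (ramp rates and instrument overhead are ignored).
# Each step is (label, temperature in °C, hold time in seconds, repeats).
PROFILE = [
    ("reverse transcription", 50, 30 * 60, 1),
    ("initial denaturation", 95, 5 * 60, 1),
    ("denaturation", 95, 30, 35),
    ("annealing", 55, 30, 35),
    ("extension", 72, 30, 35),
    ("final extension", 72, 5 * 60, 1),
]

def nominal_hold_minutes(profile):
    """Sum programmed hold times across all steps and cycles, in minutes."""
    return sum(seconds * repeats for _, _, seconds, repeats in profile) / 60

if __name__ == "__main__":
    # 30 + 5 + 35 * (0.5 + 0.5 + 0.5) + 5 = 92.5 minutes of programmed holds
    print(f"Nominal programmed hold time: {nominal_hold_minutes(PROFILE):.1f} min")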
[ null, "methods", null, null, null ]
[ "Lassa Virus", "Lassa fever", "reverse transcription polymerase chain reaction", "transmission", "Ondo" ]
Introduction: Lassa fever (LF) is an acute viral haemorrhagic illness endemic in the West African sub-region caused by Lassa virus (LASV) [1,2]. LASV is an enveloped single stranded RNA virus of the family Arenaviridae, genus Mammarenavirus transmitted to humans via contact with food or household items contaminated with urine or faeces of multimmamate rodents [1,3]. The natural host of LASV is the African rodent ‘Mastomys natalensis’, which lives close to human settlements [4]. However, evidences suggest that other rodent species such as the African wood mouse, ‘Hylomyscus pamfi’ captured from Nigeria and the Guinea mouse, ‘Mastomys erythroleucus’ captured from both Nigeria and Guinea may also be hosts for LASV [5]. The multimammate rodent quickly produce a large number of offspring, tends to colonize human settlement increasing the risk of rodent - human contact and is found throughout the west, central and eastern parts of the African continent [6]. Once the rodent becomes a carrier, it will excrete the virus throughout the rest of its lifetime through faeces and urine creating ample opportunity for continued exposure within the environment where these rodents cohabit [7]. LASV nucleotide sequences have been established to cluster into six lineages (I-VI) based on geographical locations within the West African Sub-region [8]. Lineages I, II and III circulate in Nigeria, lineage IV circulates in Guinea, Sierra Leone, Liberia, Mali, and Côte d´Ivoire [9-12]. Lineage V circulates in Mali/Ivory Coast [13] and lineage VI in Togo [8]. The LASV is extremely virulent and highly infectious, affecting about 100,000-500,000 persons annually in West Africa with approximately 5,000 deaths occurring yearly [14]. LF claims more lives than Ebola fever because its incidence is much higher [15]. About 80% of persons infected with the Lassa virus are asymptomatic [16]; but in the remaining 20%, the illness manifests as a febrile illness of variable severity associated with multiple organ dysfunctions with or without hemorrhage. About 15-20% of hospitalized Lassa fever patients die from the illness. Ever since the identification of LASV in Nigeria in 1969, there have been several LF outbreaks in the country with an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) [17]. In year 2016, 2017, 2018 and as at June 2019, the incidences of LF were reported from 18 to 29 out of the 36 states with CFRs of 12.7%, 9.7%, 24% and 22.6% respectively [17,18]. Since yellow fever virus (YFV) and dengue virus (DENV) are also endemic in Nigeria with the country currently dealing with an active yellow fever virus (YFV) outbreak and this was first reported from Kwara state in 2017. Three hundred and forty-one suspected cases of yellow fever were reported from 16 states in 2017 and six of these states had confirmed cases of yellow fever [19,20]. In September 2019 alone, the NCDC reported 243 suspected cases in 42 LGA´s in five states with 10 presumptive positives and one inconclusive [20]. Active dengue infections have also been documented from different parts of the country such as Maiduguri, Ilorin, Ibadan, Ogbomoso, north-eastern Nigeria in Jos and Ibadan respectively [21-23]. Thus, it is important that every sample of suspected cases of VHFs is screened for YFV and DENV as differentials with LASV. Transmission of LASV from human to human has been established within the communities where the disease is endemic [24]. 
This also includes contact with the corpse of a LF case [24]. Few reports of nosocomial outbreaks in healthcare settings show a disease risk for healthcare workers, which probably were due to poor infection control practices. These include lack of appropriate personal protective equipment, use of contaminated items, failure to adequately disinfect between patients by hand washing. The poor practices facilitated transmission from patient-to-patient or to care givers resulting in high case fatality among healthcare workers [24,25]. It has been reported that the virus is excreted in urine for 3-9 weeks post infection, remains in the semen for up to 3 months after becoming infected and has also been detected 64 days after patient recovery and discharge from hospital [26,27]. Sexual transmission of LASV has also been reported. However, the extent of this transmission dynamic remains unknown and requires further evaluation in communities where this disease is endemic [16,28]. No study has proven the presence of LASV in breast milk, but the high level of viraemia suggests it may be possible [29]. Although, it was reported that the circulating strains of LASV appear to be very similar to those from previous years in the country [30], the transmission dynamics of this virus particularly in endemic regions are yet to be fully understood. Even though there is a risk for sexual and/or post-natal transmission of LASV, the persistence of LASV in semen, urine and other body fluids of Nigerians has not been reported. Therefore, this study investigated the presence of LASV, as well as YFV and DENV as differentials, in body fluids of suspected, confirmed and discharged cases to better understand its role in the perennial outbreaks of LF in Nigeria. Methods: Study location/area: this study was carried out in Ondo State, Nigeria. The state has Akure as its state capital with eighteen local government areas, the major ones being Akoko, Akure, Okitipupa, Ondo and Owo. Ondo State borders Ekiti state to the north, Kogi State to the northeast, Edo State to the east, Delta State to the southeast, Ogun State to the southwest and Osun State to the northwest as highlighted in Figure 1 [31]. Ondo State has had repeated outbreaks with confirmed cases of Lassa fever within the study period and confirmed cases from other states had also been traced to the state. All suspected cases from the different Local Government Areas (LGAs) within the state, are referred to the Lassa fever treatment center at the Federal Medical Center (FMC), Owo for diagnosis and management. Study design and population: this was a cross-sectional study conducted between March 2018 and April 2019 with a total of 112 consenting individuals. These individuals comprised of 57(25.9%) suspected and 29(25.9%) confirmed cases who were still on admission at the Lassa fever treatment center at the Federal Medical Center (FMC), Owo, Ondo State. Additional 26 individuals (23.2%) were outpatients living in the communities who had previously been treated with ribavirin and discharged from the Lassa fever treatment center but were still attending the hospital for post discharge evaluations. Informed consent/ethical approval: an ethical approval was obtained from the Institutional Review Board (IRB) of the Nigerian Institute of Medical Research (NIMR). Furthermore, approval was also obtained from the Ondo State Ministry of Health before commencement of study. Only individuals who had given prior documented consent were included in this study. 
Specimen collection, transportation, handling and processing: the total number of one hundred and twelve (112) consenting individuals who participated in this study comprised of 66(58.9%) males and 46(41.1%) females with a mean age of 42.6 years (SD ± 6.8). Whole blood samples were collected from 57 suspected cases and 29 confirmed cases of Lassa fever. Other samples collected from discharged cases include 5 high vaginal swab (HVS) and seminal fluid each; 12 breast milk and 4 urine samples. All samples were packaged in triplicates and transported in cold-chain from Ondo State to the Centre for Human and Zoonotic Virology (CHAZVY), Central Research Laboratory, College of Medicine of the University of Lagos. Universal sample precautions and handling procedures were carried out as recommended by the United States Centers for Disease Control and Prevention [32]. All specimen transport containers were disinfected with 10% hypochlorite solution in an airtight glove box. Viral agents in specimen aliquots (undiluted and 1: 10 dilution) were inactivated in guanidinium-thiocyanate-based lysis buffer at room temperature for 10 minutes before extraction of viral nucleic acid. Nucleic acid extraction and reverse transcriptase-polymerase chain reaction: the viral nucleic acid from inactivated sample aliquots (undiluted and 1: 10 dilution) were extracted using a mini spin column RNA extraction kit by Qiagen (Qiagen, Germantown, Maryland, United States) in a class IIA biological safety cabinet according to the manufacturer´s instructions. After the extraction of viral nucleic acid, S segment of the RNA genome, 3´ and 5´ non-coding regions of the nucleic acid of LASV, dengue and yellow fever viruses were amplified in discrete RT-PCRs with primers as listed in Table 1. Separate reaction mixtures for Lassa, dengue and yellow fever viruses were prepared and cycled as described in the one-step RT-PCR kit by AmbionAgPath-ID protocol (Applied Bio-systems, Foster City, California, United States). The reaction was performed using the 9700 applied bio-systems thermal cycler with the following temperature profile: 50°C for 30 min and 95°C for 5 min, followed by 35 cycles of 95°C for 30 s, 55°C for 30s, and 72°C for 30s with a final extension of 72°C for 5 min. Subsequently, PCR amplicons were subjected to 1.5% agarose gel electrophoresis with 1X SYBR® safe DNA gel staining dye (Invitrogen, Carlsbad, California, United States) for 30 min at 120V/400mA and images of amplicon bands under UV light were taken with BioDocAnalyze 2.0 (Biometra, Goettingen, Germany). The positive control used for Lassa assays were previously detected Lassa samples from Irrua, Edo State, Nigeria with accession number GU481078 NIG 08-A47 2008 IRRUA, while those for dengue and YFV assays were both tissue culture inactivated samples both from the Virology Unit Laboratory of the Bernhard Nocht Institute of Tropical Medicine, Hamburg, Germany through our collaborations. map of Ondo State (study area) showing the two eco-climatic zones, local government areas and bordering states primers used for Lassa, dengue and yellow fevers investigation Results: Reverse transcriptase-polymerase chain reaction amplification and agarose gel analysis of Lassa, yellow fever and dengue viruses: all samples collected were analyzed for LASV RNA by RT-PCR. The expected amplicon band size of approximately 320 base pairs (bp) of the S segment of the RNA genome for LASV was detected by the agarose gel electrophoresis analysis (Figure 2). 
The detected band size of the LASV amplicons was at par with the positive LASV controls and no band was observed in the negative control lane on the gel picture (PCR-grade water) (Figure 2). However, none of the expected band sizes (~79bp) and (~405bp) were detected for dengue viruses or yellow fever (Figure 3,Figure 4). Analysis of whole blood by RT-PCR showed that 1/57(1.8%) suspected and 1/29(3.4%) confirmed ribavirin treated case still on admission at FMC, Owo were positive for LASV (Table 2). While LASV was still detected in 1/11(8.3%) breast milk and 2/5(40%) post ribavirin treated seminal fluids from discharged individuals in the community. However, LASV was not detected in any of the HVS and urine samples (Table 2). RT-PCR Detection of S-Gene fragment of Lassa virus. The gel lanes represent neat (undiluted, N) and 1: 10 dilutions (D) of the RNA extracts used for RTPCR. Lassa positive samples were represented in lanes S1-S5. RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a 2008 outbreak positive sample (GU481078_NIG_08-A47_2008_IRRUA) was used as positive control (+ve CTRL) RT-PCR detection of dengue virus. The gel lanes represent neat (undiluted, N) and 1: 10 dilutions (D) of the RNA extracts used for RTPCR. Dengue negative samples were represented in lanes S1-S5. RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a tissue culture inactivated sample of dengue virus from the virology unit laboratory of the Bernhard Nocht Institute of Tropical Medicine, Germany was used as positive control (+ve CTRL) RT-PCR detection of yellow fever virus. The gel lanes represent neat (undiluted, N) and 1:10 dilutions (D) of the RNA extracts used for RTPCR. Yellow fever negative samples were represented in lanes S1-S5. RNase/DNase free water was used as negative extraction/RTPCR control (-ve CTRL) while a tissue culture inactivated sample of 17D yellow fever strain from the virology unit laboratory of the Bernhard Nocht Institute of Tropical Medicine, Germany was used as positive control (+ve CTRL) distribution of LASV positivity by category of cases Discussion: Large scale outbreaks of LF have been reported in Nigeria since 2015 with the disease occurring all year round. The main preventive strategies in endemic areas are: effective rodent control in areas close to living environment; avoiding contacts with rodents and the consumption of it. Others include high index of suspicion for LF; prompt referral from such health facilities to treatment centers and proper use of infection control practices in health facilities [33-35]. Although LASV is found in virtually all body fluids and compartment during acute infection, little is known about the viral kinetics of LASV in body fluids other than blood after recovery from the disease [36]. The burden of the disease and the role of transmission through other body fluids other than blood after recovery still remains a puzzle in the country. The non-detection of both YFV and DENV from this set of suspected, confirmed and discharged cases of VHFs in Ondo State, implies that Lassa virus remains the major agent and driver of VHFs in the country. However, since there have been reported cases of YFV and DENV with documented evidences that these viruses are all endemic in the country, testing for these agents alongside with LASV should remain a continuum particularly as it has been documented that only 20% of the cases of VHF are confirmed to be caused by LASV [16]. 
The detection of LASV in seminal fluid and breast milk of discharged post ribavirin treated cases and its persistence in these fluids of recovering individuals has been established in this study as documented in previous studies. The presence of LASV in the body fluids of discharged cases is useful for clinical and public health reasons in Nigeria. This is particularly important because of the high mortality and morbidity rates in addition to its perennial nature. Thus, counseling on the possibilities of LASV transmission from men after discharge from treatment centres to their sexual partners and from mothers to their offspring during breast feeding is strongly advocated. Safe sex practices including sexual abstinence and use of male or female latex condom as well as abstinence from breast feeding by nursing mothers after discharge should be encouraged among survivors. The implications of viral persistence in such immune sanctuaries are now being recognized as potential sources of new outbreaks through sexual transmission for a number of other emerging infectious viruses, including Lassa, ebola and Zika viruses [36-39]. Therefore, the identification of viral persistence in seminal fluid and breast milk draws critical attention to the need to follow LF survivors´ longitudinally after clearance of viremia. Thus, survivors should also be offered access to care and prevention, particularly with the possibility to test other body fluid other than blood after discharge from any LF treatment centre which provides them with possibilities to mitigate any risks that may occur. The testing of other body fluids of LF survivors should be considered as a possibility to be included for improved LF management and treatment network coordinated by the Nigerian Centre for Disease Control (NCDC). However, given these findings, the following questions still need to be addressed; are the LASV shed in seminal fluid and breast milk viable and infective, for how long and at what concentrations? It has been over 50 years since LASV was first identified during a 1969 outbreak in Nigeria, but the mechanisms underlying its pathogenesis and the role that host and viral factors play in the transmission dynamics and persistent outbreaks remain unclear. The answers to these questions have implications for ascertaining the risks for sexual and post-natal transmission and therefore, its effects on epidemiologic and transmission dynamics. Although, LASV has been detected in human semen and other body fluids, the extent to which LASV existence and replication occurs within these fluids remains unclear [34,35]. The shedding of LASV in the urine for 3 to 6 weeks and up to 3 months in seminal fluid, with risk for sexual transmission, prompting condom use in survivors had been well documented [26,28,40,41]. Though a previous study had no proven evidence of the presence of LASV in breast milk [29], this study documents a proven evidence of the detection of LASV in breast milk of mothers who had recovered from LF disease. This study therefore puts to rest the speculations on the detection of this virus in breast milk. Moreover, this study was limited by the small numbers of post ribavirin treated individuals who recovered and were discharged. Thus, the actual duration of shedding of LASV in the seminal fluid and breast milk of these individuals could not be ascertained with this study. 
Conclusion: Lassa fever outbreak remains a public health threat and it is a burden on the vulnerable population within and outside the endemic states in Nigeria. Thus, the continued monitoring of individuals that are discharged from the treatment centres in Nigeria and the role of postnatal and sexual transmissions in the perennial outbreak of LF in Nigeria needs to be further evaluated. What is known about this topic Several Lassa fever outbreaks had been witnessed in Nigeria since the discovery of the Lassa virus in 1969; There has been an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) since year 2016; The natural host of LASV is the multimammate African rodent ‘Mastomys natalensis’, which lives close to human settlement and transmitted to humans via contact with food or household items contaminated with urine or faeces of this rodent. Several Lassa fever outbreaks had been witnessed in Nigeria since the discovery of the Lassa virus in 1969; There has been an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) since year 2016; The natural host of LASV is the multimammate African rodent ‘Mastomys natalensis’, which lives close to human settlement and transmitted to humans via contact with food or household items contaminated with urine or faeces of this rodent. Several Lassa fever outbreaks had been witnessed in Nigeria since the discovery of the Lassa virus in 1969; There has been an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) since year 2016; The natural host of LASV is the multimammate African rodent ‘Mastomys natalensis’, which lives close to human settlement and transmitted to humans via contact with food or household items contaminated with urine or faeces of this rodent. Several Lassa fever outbreaks had been witnessed in Nigeria since the discovery of the Lassa virus in 1969; There has been an increasing trend of morbidity/mortality, number of states reporting the infection/disease and case fatality rate (CFR) since year 2016; The natural host of LASV is the multimammate African rodent ‘Mastomys natalensis’, which lives close to human settlement and transmitted to humans via contact with food or household items contaminated with urine or faeces of this rodent. What this study adds Data presented here established and confirmed the detection and persistence of LASV RNA in the seminal fluid and breast milk of discharged post ribavirin treated Lassa fever cases in Nigeria; The detection and persistence of LASV RNA in these body fluids of discharged cases in Nigeria is useful for clinical and public health reasons, based on the high mortality and morbidity rates and the perennial nature of LF in Nigeria; This study with the proven evidence of the detection of LASV RNA in breast milk of mothers who had recovered from LF disease, puts to rest the speculations on the detection of this virus in breast milk. 
Data presented here established and confirmed the detection and persistence of LASV RNA in the seminal fluid and breast milk of discharged, post-ribavirin-treated Lassa fever cases in Nigeria; the detection and persistence of LASV RNA in these body fluids of discharged cases is useful for clinical and public health reasons, given the high mortality and morbidity rates and the perennial nature of LF in Nigeria; by providing proven evidence of LASV RNA in the breast milk of mothers who had recovered from LF, this study puts to rest the speculation about the detection of this virus in breast milk.
Background: Lassa virus (LASV), the causative agent of Lassa fever (LF), an endemic acute viral haemorrhagic illness in Nigeria, is transmitted by direct contact with the rodent, contaminated food or household items. Person-to-person transmission also occurs and sexual transmission has been reported. Thus, this study investigated the presence of LASV in body fluids of suspected and confirmed cases. Methods: this was a cross-sectional study between March 2018 and April 2019 involving 112 consenting suspected and post ribavirin confirmed cases attending the Lassa fever treatment center in Ondo State. Whole blood was collected from 57 suspected and 29 confirmed cases. Other samples from confirmed cases were 5 each of High Vaginal Swab (HVS) and seminal fluid; 12 breast milk and 4 urine. All samples were analyzed using reverse transcription-PCR (RT-PCR) targeting the S-gene of LASV. Results: analysis of whole blood by RT-PCR showed that 1/57 (1.8%) suspected and 1/29 (3.4%) confirmed post ribavirin treated cases were positive, while LASV was detected in 2/5 (40%) post ribavirin treated seminal fluid samples and 1/11 (8.3%) breast milk samples. However, LASV was not detected in any of the HVS or urine samples. Conclusions: the detection of LASV in seminal fluid and breast milk of discharged post ribavirin treated cases suggests its persistence in these fluids of recovering Nigerians. The role of postnatal and sexual transmissions in the perennial outbreak of LF needs to be further evaluated.
Introduction: Lassa fever (LF) is an acute viral haemorrhagic illness endemic in the West African sub-region and caused by Lassa virus (LASV) [1,2]. LASV is an enveloped, single-stranded RNA virus of the family Arenaviridae, genus Mammarenavirus, transmitted to humans via contact with food or household items contaminated with the urine or faeces of multimammate rodents [1,3]. The natural host of LASV is the African rodent ‘Mastomys natalensis’, which lives close to human settlements [4]. However, evidence suggests that other rodent species, such as the African wood mouse ‘Hylomyscus pamfi’ captured in Nigeria and the Guinea mouse ‘Mastomys erythroleucus’ captured in both Nigeria and Guinea, may also be hosts for LASV [5]. The multimammate rodent quickly produces a large number of offspring, tends to colonize human settlements, thereby increasing the risk of rodent-human contact, and is found throughout the western, central and eastern parts of the African continent [6]. Once the rodent becomes a carrier, it excretes the virus through faeces and urine for the rest of its lifetime, creating ample opportunity for continued exposure within the environments where these rodents cohabit with humans [7]. LASV nucleotide sequences have been established to cluster into six lineages (I-VI) based on geographical location within the West African sub-region [8]. Lineages I, II and III circulate in Nigeria; lineage IV circulates in Guinea, Sierra Leone, Liberia, Mali, and Côte d´Ivoire [9-12]; lineage V circulates in Mali/Ivory Coast [13]; and lineage VI circulates in Togo [8]. LASV is extremely virulent and highly infectious, affecting about 100,000-500,000 persons annually in West Africa, with approximately 5,000 deaths occurring yearly [14]. LF claims more lives than Ebola fever because its incidence is much higher [15]. About 80% of persons infected with the Lassa virus are asymptomatic [16], but in the remaining 20% the illness manifests as a febrile illness of variable severity associated with multiple organ dysfunction, with or without haemorrhage. About 15-20% of hospitalized Lassa fever patients die from the illness. Since the identification of LASV in Nigeria in 1969, there have been several LF outbreaks in the country, with an increasing trend in morbidity/mortality, in the number of states reporting the infection/disease, and in the case fatality rate (CFR) [17]. In 2016, 2017, 2018 and as at June 2019, LF was reported from 18 to 29 of the 36 states, with CFRs of 12.7%, 9.7%, 24% and 22.6%, respectively [17,18]. Yellow fever virus (YFV) and dengue virus (DENV) are also endemic in Nigeria, and the country is currently dealing with an active YFV outbreak, first reported from Kwara State in 2017. Three hundred and forty-one suspected cases of yellow fever were reported from 16 states in 2017, and six of these states had confirmed cases [19,20]. In September 2019 alone, the NCDC reported 243 suspected cases in 42 LGAs in five states, with 10 presumptive positives and one inconclusive result [20]. Active dengue infections have also been documented in different parts of the country, including Maiduguri, Ilorin, Ibadan, Ogbomoso and Jos [21-23]. Thus, it is important that every sample from suspected cases of VHFs is screened for YFV and DENV as differentials alongside LASV. Transmission of LASV from human to human has been established within the communities where the disease is endemic [24].
This also includes contact with the corpse of an LF case [24]. A few reports of nosocomial outbreaks in healthcare settings show a disease risk for healthcare workers, probably due to poor infection control practices, including the lack of appropriate personal protective equipment, the use of contaminated items, and failure to adequately disinfect hands between patients. These poor practices facilitated transmission from patient to patient or to caregivers, resulting in high case fatality among healthcare workers [24,25]. It has been reported that the virus is excreted in urine for 3-9 weeks post infection, remains in semen for up to 3 months after infection, and has been detected 64 days after patient recovery and discharge from hospital [26,27]. Sexual transmission of LASV has also been reported; however, the extent of this transmission dynamic remains unknown and requires further evaluation in communities where the disease is endemic [16,28]. No study has proven the presence of LASV in breast milk, but the high level of viraemia suggests it may be possible [29]. Although it has been reported that the circulating strains of LASV appear to be very similar to those from previous years in the country [30], the transmission dynamics of this virus, particularly in endemic regions, are yet to be fully understood. Even though there is a risk of sexual and/or post-natal transmission of LASV, the persistence of LASV in the semen, urine and other body fluids of Nigerians has not been reported. Therefore, this study investigated the presence of LASV, as well as YFV and DENV as differentials, in the body fluids of suspected, confirmed and discharged cases to better understand its role in the perennial outbreaks of LF in Nigeria. Conclusion: Data presented here established and confirmed the detection and persistence of LASV RNA in the seminal fluid and breast milk of discharged, post-ribavirin-treated Lassa fever cases in Nigeria; the detection and persistence of LASV RNA in these body fluids of discharged cases is useful for clinical and public health reasons, given the high mortality and morbidity rates and the perennial nature of LF in Nigeria; by providing proven evidence of LASV RNA in the breast milk of mothers who had recovered from LF, this study puts to rest the speculation about the detection of this virus in breast milk.
Background: Lassa virus (LASV), the causative agent of Lassa fever (LF), an endemic acute viral haemorrhagic illness in Nigeria, is transmitted by direct contact with the rodent, contaminated food or household items. Person-to-person transmission also occurs and sexual transmission has been reported. Thus, this study investigated the presence of LASV in body fluids of suspected and confirmed cases. Methods: this was a cross-sectional study between March 2018 and April 2019 involving 112 consenting suspected and post ribavirin confirmed cases attending the Lassa fever treatment center in Ondo State. Whole blood was collected from 57 suspected and 29 confirmed cases. Other samples from confirmed cases were 5 each of High Vaginal Swab (HVS) and seminal fluid; 12 breast milk and 4 urine. All samples were analyzed using reverse transcription-PCR (RT-PCR) targeting the S-gene of LASV. Results: analysis of whole blood by RT-PCR showed that 1/57 (1.8%) suspected and 1/29 (3.4%) confirmed post ribavirin treated cases were positive, while LASV was detected in 2/5 (40%) post ribavirin treated seminal fluid samples and 1/11 (8.3%) breast milk samples. However, LASV was not detected in any of the HVS or urine samples. Conclusions: the detection of LASV in seminal fluid and breast milk of discharged post ribavirin treated cases suggests its persistence in these fluids of recovering Nigerians. The role of postnatal and sexual transmissions in the perennial outbreak of LF needs to be further evaluated.
4,235
294
[ 1019, 517, 843, 908 ]
5
[ "lasv", "nigeria", "lassa", "fever", "cases", "breast", "virus", "detection", "study", "milk" ]
[ "rodent species african", "rodents cohabit lasv", "infected lassa virus", "viruses including lassa", "lasv african rodent" ]
null
[CONTENT] Lassa Virus | Lassa fever | reverse transcription polymerase chain reaction | transmission | Ondo [SUMMARY]
[CONTENT] Lassa Virus | Lassa fever | reverse transcription polymerase chain reaction | transmission | Ondo [SUMMARY]
null
[CONTENT] Lassa Virus | Lassa fever | reverse transcription polymerase chain reaction | transmission | Ondo [SUMMARY]
[CONTENT] Lassa Virus | Lassa fever | reverse transcription polymerase chain reaction | transmission | Ondo [SUMMARY]
[CONTENT] Lassa Virus | Lassa fever | reverse transcription polymerase chain reaction | transmission | Ondo [SUMMARY]
[CONTENT] Adult | Antiviral Agents | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Lassa Fever | Lassa virus | Male | Middle Aged | Milk, Human | Nigeria | Reverse Transcriptase Polymerase Chain Reaction | Ribavirin | Semen [SUMMARY]
[CONTENT] Adult | Antiviral Agents | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Lassa Fever | Lassa virus | Male | Middle Aged | Milk, Human | Nigeria | Reverse Transcriptase Polymerase Chain Reaction | Ribavirin | Semen [SUMMARY]
null
[CONTENT] Adult | Antiviral Agents | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Lassa Fever | Lassa virus | Male | Middle Aged | Milk, Human | Nigeria | Reverse Transcriptase Polymerase Chain Reaction | Ribavirin | Semen [SUMMARY]
[CONTENT] Adult | Antiviral Agents | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Lassa Fever | Lassa virus | Male | Middle Aged | Milk, Human | Nigeria | Reverse Transcriptase Polymerase Chain Reaction | Ribavirin | Semen [SUMMARY]
[CONTENT] Adult | Antiviral Agents | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Lassa Fever | Lassa virus | Male | Middle Aged | Milk, Human | Nigeria | Reverse Transcriptase Polymerase Chain Reaction | Ribavirin | Semen [SUMMARY]
[CONTENT] rodent species african | rodents cohabit lasv | infected lassa virus | viruses including lassa | lasv african rodent [SUMMARY]
[CONTENT] rodent species african | rodents cohabit lasv | infected lassa virus | viruses including lassa | lasv african rodent [SUMMARY]
null
[CONTENT] rodent species african | rodents cohabit lasv | infected lassa virus | viruses including lassa | lasv african rodent [SUMMARY]
[CONTENT] rodent species african | rodents cohabit lasv | infected lassa virus | viruses including lassa | lasv african rodent [SUMMARY]
[CONTENT] rodent species african | rodents cohabit lasv | infected lassa virus | viruses including lassa | lasv african rodent [SUMMARY]
[CONTENT] lasv | nigeria | lassa | fever | cases | breast | virus | detection | study | milk [SUMMARY]
[CONTENT] lasv | nigeria | lassa | fever | cases | breast | virus | detection | study | milk [SUMMARY]
null
[CONTENT] lasv | nigeria | lassa | fever | cases | breast | virus | detection | study | milk [SUMMARY]
[CONTENT] lasv | nigeria | lassa | fever | cases | breast | virus | detection | study | milk [SUMMARY]
[CONTENT] lasv | nigeria | lassa | fever | cases | breast | virus | detection | study | milk [SUMMARY]
[CONTENT] lasv | reported | virus | transmission | african | lf | illness | west | 24 | fever [SUMMARY]
[CONTENT] state | ondo | ondo state | study | acid | nucleic | nucleic acid | center | samples | lassa [SUMMARY]
null
[CONTENT] detection | nigeria | lasv rna | cases nigeria | persistence lasv rna | detection persistence | detection persistence lasv | detection persistence lasv rna | lasv | rna [SUMMARY]
[CONTENT] lasv | fever | nigeria | lf | state | lassa | detection | virus | study | transmission [SUMMARY]
[CONTENT] lasv | fever | nigeria | lf | state | lassa | detection | virus | study | transmission [SUMMARY]
[CONTENT] Lassa | LASV | Lassa | Nigeria ||| ||| LASV [SUMMARY]
[CONTENT] between March 2018 | April 2019 | 112 | Lassa | Ondo State ||| 57 | 29 ||| 5 | HVS | 12 | 4 ||| transcription-PCR | RT-PCR | LASV [SUMMARY]
null
[CONTENT] LASV | Nigerians ||| LF [SUMMARY]
[CONTENT] Lassa | LASV | Lassa | Nigeria ||| ||| LASV ||| between March 2018 | April 2019 | 112 | Lassa | Ondo State ||| 57 | 29 ||| 5 | HVS | 12 | 4 ||| transcription-PCR | RT-PCR | LASV ||| RT-PCR | 1/57 | 1.8% | 1/29 | 3.4% ||| LASV | 2/5 | 40% | 1/11 | 8.3% ||| LASV | HVS ||| Nigerians ||| LF [SUMMARY]
[CONTENT] Lassa | LASV | Lassa | Nigeria ||| ||| LASV ||| between March 2018 | April 2019 | 112 | Lassa | Ondo State ||| 57 | 29 ||| 5 | HVS | 12 | 4 ||| transcription-PCR | RT-PCR | LASV ||| RT-PCR | 1/57 | 1.8% | 1/29 | 3.4% ||| LASV | 2/5 | 40% | 1/11 | 8.3% ||| LASV | HVS ||| Nigerians ||| LF [SUMMARY]
Associations between mass media exposure and birth preparedness among women in southwestern Uganda: a community-based survey.
24433945
Exposure to mass media provides increased awareness and knowledge, as well as changes in attitudes, social norms and behaviors that may lead to positive public health outcomes. Birth preparedness (i.e. the preparations for childbirth made by pregnant women, their families, and communities) increases the use of skilled birth attendants (SBAs) and hence reduces maternal morbidity and mortality.
BACKGROUND
A total of 765 recently delivered women from 120 villages in the Mbarara District of southwest Uganda were selected for a community-based survey using two-stage cluster sampling. Univariate and multivariate logistic regression was performed with generalized linear mixed models using SPSS 21.
METHOD
We found that 88.6% of the women surveyed listened to the radio and 33.9% read newspapers. Birth preparedness actions included were money saved (87.8%), identified SBA (64.3%), identified transport (60.1%), and purchased childbirth materials (20.7%). Women who had taken three or more actions were coded as well birth prepared (53.9%). Women who read newspapers were more likely to be birth prepared (adjusted OR 2.2, 95% CI 1.5-3.2). High media exposure, i.e. regular exposure to radio, newspaper, or television, showed no significant association with birth preparedness (adjusted OR 1.3, 95% CI 0.9-2.0).
RESULTS
Our results indicate that increased reading of newspapers can enhance birth preparedness and skilled birth attendance. Apart from general literacy skills, this requires newspapers to be accessible in terms of language, dissemination, and cost.
CONCLUSION
[ "Adult", "Data Collection", "Delivery, Obstetric", "Female", "Health Knowledge, Attitudes, Practice", "Humans", "Interviews as Topic", "Mass Media", "Pregnancy", "Uganda", "Young Adult" ]
3888909
null
null
Method
Setting: The study was conducted in the Mbarara District in southwest Uganda. The district is divided into Mbarara Municipality, which is the urban center, and Kashari and Rwampara Counties, which are mainly rural. The total population is 436,400 (24). The area contains 46 health centers with various levels of health services provision (II–IV) that all provide antenatal and emergency obstetric care, although Mbarara Regional Referral Hospital is the only one providing comprehensive obstetric care. Mbarara Municipality also has four private hospitals. Eighty percent of the Ugandan population lives in rural areas where the economy is predominately agricultural. According to the World Bank, 38% live in extreme poverty, i.e. below 1.25 USD per day in purchasing power parity (25). According to the 2012 UDHS, only 5% of the rural population has electricity in their homes compared to 55% of the urban population. The literacy rate among women in Uganda is 64.2%, but higher in the southwest (75.5%) (10). The study was given ethical clearance by the Uganda National Council of Science and Technology and Lund University. Permission to conduct the study was also given by local leaders at the district, county, and village levels. A written consent form was read to each participant and a signature or thumbprint was obtained before the interviews.
Sample and data collection: Participants in the community survey were selected using a two-stage cluster sampling technique. In the first stage, 120 of the 699 villages in the study area were randomly chosen. The average population of a village is approximately 500. In each village, a starting point was alternately identified at the center or on the periphery with the help of a Village Health Team member. Two research assistants moved in opposite directions through the village, stopping at every second household until 10 women who were either pregnant or had delivered within the last 12 months were identified. A total of 1,199 women were interviewed. Out of these, 765 had delivered during the last 12 months and were included in the study. The participants were equally divided between Kashari (50.8%) and Rwampara Counties (49.2%). The women were asked about their knowledge, perceptions, and experiences regarding pregnancy, childbirth, and the postpartum period and about their media consumption and exposure to media interventions. The information was obtained by means of a Women's Safe Motherhood Questionnaire developed by the Johns Hopkins Program for International Education in Gynecology and Obstetrics (JHPIEGO) (11) that had been adapted to the Ugandan context and pretested in a neighboring district. After the pretesting, a question was added about the purchase of childbirth materials, since it is an important preparation for childbirth in Uganda. A total of 12 research assistants were recruited, each having a bachelor's degree in social science and previous experience in survey data collection. They all were trained for 1 week on the administration of the data instrument. The data collection took place from September to December 2010 in Kashari County and from April to May 2011 in Rwampara County under the supervision of the principal investigator. The questionnaires were continuously verified for completeness and consistency by the field supervisors.
Definition of variables. Socio-demographic and reproductive variables: Area of residence was coded into ‘Kashari’ or ‘Rwampara’ and ‘rural’ for villages or ‘semi-urban’ for trading centers; no urban areas existed in the two counties. Age was coded into ‘<20 years’, ‘20–24 years’ and ‘≥25 years’. Marital status was coded as ‘married’ for women who were married or in union and ‘not married’ for single, widowed, divorced or separated. Religion was coded as ‘Christians’ for women belonging to the Roman Catholic, Church of Uganda or Seventh-day Adventists; the remaining women were coded as ‘Other’. Highest education level completed was coded into ‘< primary school’, ‘primary school’ or ‘≥ secondary school’. Reading ability was categorized based on the question ‘Can you read a letter, Bible, or a newspaper easily, with difficulty, or not at all?’; easily was coded as ‘yes’ and with difficulty/not at all as ‘no’. Travel time to health facility was coded as ‘<1 hour’ and ‘≥1 hour’. Parity was calculated by adding the reported number of live births and stillbirths and coded as ‘primipara’, ‘2–4’ or ‘≥5’ births. ANC visits was coded as ‘<4’ or ‘≥4’ visits.
Media exposure variables: Read a newspaper/listen to radio/watch television was coded ‘yes’ or ‘no’ based on the question ‘Do you ever read newspaper/listen to radio/watch television?’. Frequency of exposure was coded as ‘almost every day’, ‘at least once a week’ or ‘less than once a week’. Media exposure was defined as ‘high’ if the women were exposed to radio or television ‘almost every day’ or read a newspaper ‘at least once a week’; the regional newspaper in the local language is a weekly paper, and so weekly newspaper exposure was considered ‘high media exposure’. Read/heard/saw information on birth preparedness in the past 6 months was coded ‘yes’, ‘no’, or ‘don't know’. Exposure to birth preparedness information through newspaper/radio/television was inquired into within those groups exposed to the separate media.
Birth preparedness variables: Four variables were used as measures for birth preparedness. Bought childbirth materials was determined by asking ‘which arrangements did you or your family make for the birth of this child?’; women who had bought a complete ‘Mama kit’ with the necessary materials for childbirth, or any of these materials separately, were coded as having bought childbirth materials. Saved money was determined by asking ‘Did you or your family save money for the birth of this child?’. Identified transport to place of delivery was determined by asking ‘Did you or your family identify transport for the birth of this child?’. Identified SBA for delivery was determined by asking ‘Did you or your family identify a skilled provider for the birth of this child?’; women who delivered in a health facility and who, in a subsequent question, stated that this place of delivery was planned during the pregnancy were also coded as having identified a SBA for delivery. Well birth prepared was defined as having taken at least three of the four actions above.
Statistical analysis: Sample size was predetermined by the existing database. Statistical analyses were conducted with SPSS Version 21 and all analyses accounted for the intra-cluster correlation. Generalized linear mixed models were used to calculate the odds ratios (OR) and 95% confidence intervals (CI) for the associations between media exposure and being well birth prepared (i.e. bought childbirth materials, saved money, identified transport to place of delivery, and identified SBA for delivery). Multivariate analyses included adjustments for age, education, location of residence, parity, travel time to health facility, and ANC visits.
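As a rough illustration of this analysis pipeline, the sketch below codes the composite ‘well birth prepared’ indicator (at least three of the four actions) and fits a cluster-aware logistic model, exponentiating the coefficients to obtain adjusted odds ratios with 95% confidence intervals. The study itself used generalized linear mixed models in SPSS 21; the GEE population-averaged model, the file name, and all column names used here are assumptions made for the example, not the authors' implementation.

```python
# Illustrative sketch only: the study used SPSS GLMMs; here a GEE logit with an
# exchangeable working correlation stands in for the village-level clustering.
# All column names (well_prepared, reads_newspaper, village, ...) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical survey extract, one row per woman

# Composite outcome: at least three of the four preparedness actions (0/1 columns).
actions = ["bought_materials", "saved_money", "identified_transport", "identified_sba"]
df["well_prepared"] = (df[actions].sum(axis=1) >= 3).astype(int)

model = smf.gee(
    "well_prepared ~ reads_newspaper + C(age_group) + C(education)"
    " + C(residence) + C(parity_group) + travel_1h_plus + anc_4plus",
    groups="village",                      # accounts for intra-cluster correlation
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Exponentiate coefficients to report adjusted odds ratios with 95% CIs.
ci = np.exp(result.conf_int())
odds_ratios = pd.DataFrame({
    "OR": np.exp(result.params),
    "CI 2.5%": ci.iloc[:, 0],
    "CI 97.5%": ci.iloc[:, 1],
})
print(odds_ratios.round(2))
```

A GEE yields population-averaged estimates rather than the subject-specific estimates of a true mixed model, so its numbers would not be expected to match the published ORs exactly; the point is only to show where the clustering adjustment and the OR/CI transformation enter the workflow.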
Results
The socio-demographic and reproductive variables are presented in Table 1. A majority of the women (83.9%) lived in a rural area, and 44.7% had a travel time of 1 hour or more to a health facility. One-third (34%) could not read at all or only with difficulty and 29.3% had not completed primary school. A total of 23% were primipara and 30.2% had a parity of five or more. Two-thirds (68.9%) had attended the recommended minimum of four ANC visits. Socio-demographic and reproductive characteristics of recently delivered Ugandan women (n=765) Exposure to mass media is shown in Table 2. Although 33.9% of the women indicated they read newspapers, only 6.5% read a newspaper ‘almost every day’ and 41.2% read a newspaper ‘at least once a week’. By far the most popular form of media was radio, to which 88.6% of the women were exposed. In this group 90.3% listened to the radio almost every day, while only 4.9% reported ever watching television. In total 81.8% were exposed to radio or television almost every day or read a newspaper at least once a week and were therefore coded as having a high media exposure. The vast majority of this group consisted of radio listeners. Almost half (46.3%) said they had heard or seen some information on birth preparedness in the past 6 months. Among the radio listeners, 43.9% were exposed to birth preparedness information on the radio; among women reading newspapers the corresponding number was 11.4%. Proportion of recently delivered women reporting media exposure and exposure to birth preparedness information (n=765) *Defined as exposure to radio or television almost every day or newspaper at least once a week. Table 3 shows the distribution of the variables included in birth preparedness. One-fifth of the women (20.7%) had bought childbirth materials, 87.8% had saved money, 60.1% had identified transport for delivery, and 64.3% had identified a skilled provider or health facility for delivery. This resulted in a total of 53.9% of the women being well birth prepared. Reports of birth preparedness by recently delivered Ugandan women (n=765) Defined as having taken at least three of the four actions above. Table 4 shows the results of univariate logistic regression analysis for the associations between socio-demographic and reproductive factors on the one hand, and birth preparedness on the other. The women who had completed secondary school were more likely to be well birth prepared (OR 1.9, 95% CI 1.2–3.0). Birth preparedness decreased with increasing parity: women who had given birth to five or more children were significantly less birth prepared than primiparas (OR 0.6, 95% CI 0.4–0.99). The women who attended at least four ANC visits were more birth prepared (OR 1.5, 95% CI 1.1–2.1). Associations (OR, 95% CI) between socio-demographic and reproductive variables, and being well birth prepared in a sample of recently delivered Ugandan women (n=765) Table 5 provides an analysis of the associations between media exposure and birth preparedness with unadjusted and adjusted ORs. An association was found between reading newspapers and being well birth prepared (OR 2.2, 95% CI 1.5–3.2). Exposure to a newspaper at least once a week did not strengthen the association (OR 1.7, 95% CI 1.03–2.8). Listening to the radio did not have a significant effect on birth preparedness (OR 1.3, 95% CI 0.8–2.2), nor did watching television (OR 0.7, 95% CI 0.3–1.5). Women with high media exposure were not birth prepared to a higher extent (OR 1.3, 95% CI 0.9–2.0).
Associations (OR, 95% CI) between media exposure and being well birth prepared in a sample of recently delivered Ugandan women (n=765) Adjusted for age, education, location of residence, parity, travel time to health facility, and ANC visits. Defined as exposure to radio or television almost every day or newspaper at least once a week.
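For readers who want to see where an unadjusted odds ratio such as 2.2 (95% CI 1.5–3.2) comes from, the sketch below computes a crude OR and its confidence interval from a 2×2 cross-tabulation of newspaper reading against being well birth prepared. The cell counts are invented for illustration; they are chosen only to be consistent with the reported marginal percentages (33.9% newspaper readers, 53.9% well prepared among n=765), since the paper reports the ORs but not the underlying table.

```python
# Hypothetical 2x2 table (counts invented for illustration only):
# rows = reads newspaper (yes/no), columns = well birth prepared (yes/no).
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

table = Table2x2(np.array([
    [170, 89],   # newspaper readers: prepared / not prepared
    [242, 264],  # non-readers:       prepared / not prepared
]))

print(f"crude OR = {table.oddsratio:.2f}")
low, high = table.oddsratio_confint(alpha=0.05)
print(f"95% CI = {low:.2f} to {high:.2f}")
```

The adjusted estimates in the paper additionally condition on age, education, residence, parity, travel time, and ANC visits, so they would differ from such a crude calculation.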
Conclusion
Our results indicate that birth preparedness, and ultimately skilled birth attendance, can be enhanced through increased reading of newspapers. Apart from requiring general literacy skills, this means that newspapers have to be accessible with regard to language, dissemination, and cost.
[ "Setting", "Sample and data collection", "Definition of variables", "Socio-demographic and reproductive variables", "Media exposure variables", "Birth preparedness variables", "Statistical analysis", "Strengths and limitations of this study", "Conclusion" ]
[ "The study was conducted in the Mbarara District in southwest Uganda. The district is divided in Mbarara Municipality, which is the urban center, and Kashari and Rwampara Counties which are mainly rural. The total population is 436,400 (24). The area contains 46 health centers with various levels of health services provision (II–IV) that all provide antenatal and emergency obstetric care, although Mbarara Regional Referral Hospital is the only one providing comprehensive obstetric care. Mbarara Municipality also has four private hospitals. Eighty percent of the Ugandan population lives in rural areas where the economy is predominately agricultural. According to the World Bank, 38% live in extreme poverty, i.e. below 1.25 USD per day in purchasing power parity (25). According to the 2012 UDHS, only 5% of the rural population has electricity in their homes compared to 55% of the urban population. The literacy rate among women in Uganda is 64.2%, but higher in the southwest (75.5%) (10).\nThe study was given ethical clearance by the Uganda National Council of Science and Technology and Lund University. Permission to conduct the study was also given by local leaders at the district, county, and village levels. A written consent form was read to each participant and a signature or thumbprint was obtained before the interviews.", "Participants in the community survey were selected by using a two-stage cluster sampling technique. In the first stage, 120 of the 699 villages in the study area were randomly chosen. The average population of a village is approximately 500. In each village, a starting point was alternately identified at the center or on the periphery with the help of a Village Health Team member. Two research assistants moved in opposite directions through the village stopping at every second household until 10 women who were either pregnant or had delivered within the last 12 months were identified. A total of 1,199 women were interviewed. Out of these, 765 had delivered during the last 12 months and were included in the study. The participants were equally divided between Kashari (50.8%) and Rwampara Counties (49.2%).\nThe women were asked about their knowledge, perceptions, and experiences regarding pregnancy, childbirth, and the postpartum period and about their media consumption and exposure to media interventions. The information was obtained by means of a Women's Safe Motherhood Questionnaire developed by the Johns Hopkins Program for International Education in Gynecology and Obstetrics (JHPIEGO) (11) that had been adapted to the Ugandan context and pretested in a neighboring district. After the pretesting, a question was added about the purchase of childbirth materials since it is an important preparation for childbirth in Ugandan. A total of 12 research assistants were recruited, each having a bachelor's degree in social science and previous experience in survey data collection. They all were trained for 1 week on the administration of the data instrument.\nThe data collection took place from September to December 2010 in Kashari County and from April to May 2011 in Rwampara County under the supervision of the principal investigator. The questionnaires were continuously verified for completeness and consistency by the field supervisors.", " Socio-demographic and reproductive variables \nArea of residence was coded into ‘Kashari’ or ‘Rwampara’ and ‘rural’ for villages or ‘semi-urban’ for trading centers. 
No urban areas existed in the two counties.\n\nAge was coded into ‘<20 years’, ‘20–24 years’ and ‘≥25 years’.\n\nMarital status was coded as ‘married’ for women who were married or in union and ‘not married’ for single, widowed, divorced or separated.\n\nReligion was coded as ‘Christians’ for women belonging to the Roman Catholic, Church of Uganda or Seventh-day Adventists. The remaining women were coded as ‘Other’.\n\nHighest education level completed was coded into ‘< primary school’, ‘primary school’ or ‘≥ secondary school’.\n\nReading ability was categorized based on the question ‘Can you read a letter, Bible, or a newspaper easily, with difficulty, or not at all?’ Easily was coded as ‘yes’ and with difficulty/not at all was coded as ‘no’.\n\nTravel time to health facility was coded as ‘<1 hour’ and ‘≥1 hour’.\n\nParity was calculated by adding the reported number of live births and stillbirths and coded as ‘primipara’, ‘2–4’ or ‘≥5’ births.\n\nANC visits was coded as ‘<4’ or ‘≥4’ visits.\n\nArea of residence was coded into ‘Kashari’ or ‘Rwampara’ and ‘rural’ for villages or ‘semi-urban’ for trading centers. No urban areas existed in the two counties.\n\nAge was coded into ‘<20 years’, ‘20–24 years’ and ‘≥25 years’.\n\nMarital status was coded as ‘married’ for women who were married or in union and ‘not married’ for single, widowed, divorced or separated.\n\nReligion was coded as ‘Christians’ for women belonging to the Roman Catholic, Church of Uganda or Seventh-day Adventists. The remaining women were coded as ‘Other’.\n\nHighest education level completed was coded into ‘< primary school’, ‘primary school’ or ‘≥ secondary school’.\n\nReading ability was categorized based on the question ‘Can you read a letter, Bible, or a newspaper easily, with difficulty, or not at all?’ Easily was coded as ‘yes’ and with difficulty/not at all was coded as ‘no’.\n\nTravel time to health facility was coded as ‘<1 hour’ and ‘≥1 hour’.\n\nParity was calculated by adding the reported number of live births and stillbirths and coded as ‘primipara’, ‘2–4’ or ‘≥5’ births.\n\nANC visits was coded as ‘<4’ or ‘≥4’ visits.\n Media exposure variables \nRead a newspaper/listen to radio/watch television was coded ‘yes’ or ‘no’ based on the question ‘Do you ever read newspaper/listen to radio/watch television?’\n\nFrequency of exposure was coded as; ‘almost every day’, ‘at least once a week’ or ‘less than once a week’.\n\nMedia exposure was defined as ‘high’ if the women were exposed to radio or television ‘almost every day’ or read a newspaper ‘at least once a week’. The regional newspaper in the local language is a weekly paper, and so weekly newspaper exposure was considered ‘high media exposure’.\n\nRead/heard/saw information on birth preparedness in the past 6 months was coded ‘yes’, ‘no’, or ‘don't know’.\n\nExposure to birth preparedness information through newspaper/radio/television was inquired into within those groups exposed to the separate media.\n\nRead a newspaper/listen to radio/watch television was coded ‘yes’ or ‘no’ based on the question ‘Do you ever read newspaper/listen to radio/watch television?’\n\nFrequency of exposure was coded as; ‘almost every day’, ‘at least once a week’ or ‘less than once a week’.\n\nMedia exposure was defined as ‘high’ if the women were exposed to radio or television ‘almost every day’ or read a newspaper ‘at least once a week’. 
The regional newspaper in the local language is a weekly paper, and so weekly newspaper exposure was considered ‘high media exposure’.\n\nRead/heard/saw information on birth preparedness in the past 6 months was coded ‘yes’, ‘no’, or ‘don't know’.\n\nExposure to birth preparedness information through newspaper/radio/television was inquired into within those groups exposed to the separate media.\n Birth preparedness variables Four variables were used as measures for birth preparedness, as follows:\n\nBought childbirth materials was determined by asking ‘which arrangements did you or your family make for the birth of this child?’ and women who had bought a complete ‘Mama kit’ with the necessary materials for childbirth or any of these materials separately were coded as having bought childbirth materials.\n\nSaved money was determined by asking ‘Did you or your family save money for the birth of this child?’.\n\nIdentified transport to place of delivery was determined by asking the question ‘Did you or your family identify transport for the birth of this child?’.\n\nIdentified SBA for delivery was determined by asking ‘Did you or your family identify a skilled provider for the birth of this child?’. Those women who delivered in a health facility and who, in a subsequent question, stated that this place of delivery was planned during the pregnancy, were also coded as having identified a SBA for delivery.\n\nWell birth prepared was defined as having taken at least three of the four actions above.\nFour variables were used as measures for birth preparedness, as follows:\n\nBought childbirth materials was determined by asking ‘which arrangements did you or your family make for the birth of this child?’ and women who had bought a complete ‘Mama kit’ with the necessary materials for childbirth or any of these materials separately were coded as having bought childbirth materials.\n\nSaved money was determined by asking ‘Did you or your family save money for the birth of this child?’.\n\nIdentified transport to place of delivery was determined by asking the question ‘Did you or your family identify transport for the birth of this child?’.\n\nIdentified SBA for delivery was determined by asking ‘Did you or your family identify a skilled provider for the birth of this child?’. Those women who delivered in a health facility and who, in a subsequent question, stated that this place of delivery was planned during the pregnancy, were also coded as having identified a SBA for delivery.\n\nWell birth prepared was defined as having taken at least three of the four actions above.", "\nArea of residence was coded into ‘Kashari’ or ‘Rwampara’ and ‘rural’ for villages or ‘semi-urban’ for trading centers. No urban areas existed in the two counties.\n\nAge was coded into ‘<20 years’, ‘20–24 years’ and ‘≥25 years’.\n\nMarital status was coded as ‘married’ for women who were married or in union and ‘not married’ for single, widowed, divorced or separated.\n\nReligion was coded as ‘Christians’ for women belonging to the Roman Catholic, Church of Uganda or Seventh-day Adventists. 
The remaining women were coded as ‘Other’.\n\nHighest education level completed was coded into ‘< primary school’, ‘primary school’ or ‘≥ secondary school’.\n\nReading ability was categorized based on the question ‘Can you read a letter, Bible, or a newspaper easily, with difficulty, or not at all?’ Easily was coded as ‘yes’ and with difficulty/not at all was coded as ‘no’.\n\nTravel time to health facility was coded as ‘<1 hour’ and ‘≥1 hour’.\n\nParity was calculated by adding the reported number of live births and stillbirths and coded as ‘primipara’, ‘2–4’ or ‘≥5’ births.\n\nANC visits was coded as ‘<4’ or ‘≥4’ visits.", "\nRead a newspaper/listen to radio/watch television was coded ‘yes’ or ‘no’ based on the question ‘Do you ever read newspaper/listen to radio/watch television?’\n\nFrequency of exposure was coded as; ‘almost every day’, ‘at least once a week’ or ‘less than once a week’.\n\nMedia exposure was defined as ‘high’ if the women were exposed to radio or television ‘almost every day’ or read a newspaper ‘at least once a week’. The regional newspaper in the local language is a weekly paper, and so weekly newspaper exposure was considered ‘high media exposure’.\n\nRead/heard/saw information on birth preparedness in the past 6 months was coded ‘yes’, ‘no’, or ‘don't know’.\n\nExposure to birth preparedness information through newspaper/radio/television was inquired into within those groups exposed to the separate media.", "Four variables were used as measures for birth preparedness, as follows:\n\nBought childbirth materials was determined by asking ‘which arrangements did you or your family make for the birth of this child?’ and women who had bought a complete ‘Mama kit’ with the necessary materials for childbirth or any of these materials separately were coded as having bought childbirth materials.\n\nSaved money was determined by asking ‘Did you or your family save money for the birth of this child?’.\n\nIdentified transport to place of delivery was determined by asking the question ‘Did you or your family identify transport for the birth of this child?’.\n\nIdentified SBA for delivery was determined by asking ‘Did you or your family identify a skilled provider for the birth of this child?’. Those women who delivered in a health facility and who, in a subsequent question, stated that this place of delivery was planned during the pregnancy, were also coded as having identified a SBA for delivery.\n\nWell birth prepared was defined as having taken at least three of the four actions above.", "Sample size was predetermined by the existing database. Statistical analyses were conducted with SPSS Version 21 and all analyses accounted for the intra-cluster correlation. Generalized linear mixed models were used to calculate the odds ratios (OR) and 95% confidence intervals (CI) for the associations between media exposure and being well birth prepared (i.e. bought childbirth materials, saved money, identified transport to place of delivery, and identified SBA for delivery). Multivariate analyses included adjustments for age, education, location of residence, parity, travel time to health facility, and ANC visits.", "By using two-stage cluster sampling and interviewing a relatively large number of recently delivered women for this study, a good estimate of birth preparedness and mass media exposure among women in rural southwest Uganda was obtained. 
Ideally a birth preparedness questionnaire should be given to women in late pregnancy in order to capture all preparations made prior to childbirth and avoid recall problems (38). However, this would lead to difficulties in obtaining the necessary sample size. A limit of 1 year was set in this study to minimize recall problems.\nIn this paper, we have only looked at the effect of direct exposure to mass media on our study group. However, communication researchers recognize the effect of exposure on the community as a whole. For some behaviors, a change can occur when a certain proportion of the community, rather than separate individuals, is exposed (1). When considering the effect of indirect exposure to mass media, a husband's exposure is likely to have the single most important influence on a woman. The importance of the husband in maternal and child health has been increasingly acknowledged and research has shown that a husband who takes an active role in this regard will increase his wife's birth preparedness (15, 39). We did not look at the husband's exposure to mass media, although it might have had an independent effect on birth preparedness.", "Our results indicate that birth preparedness, and ultimately skilled birth attendance, can be enhanced through increased reading of newspapers. Apart from requiring general literacy skills this mean that newspapers have to be accessible with regard to language, dissemination and cost." ]
[ "Method", "Setting", "Sample and data collection", "Definition of variables", "Socio-demographic and reproductive variables", "Media exposure variables", "Birth preparedness variables", "Statistical analysis", "Results", "Discussion", "Strengths and limitations of this study", "Conclusion" ]
[ " Setting The study was conducted in the Mbarara District in southwest Uganda. The district is divided into Mbarara Municipality, which is the urban center, and Kashari and Rwampara Counties, which are mainly rural. The total population is 436,400 (24). The area contains 46 health centers with various levels of health services provision (II–IV) that all provide antenatal and emergency obstetric care, although Mbarara Regional Referral Hospital is the only one providing comprehensive obstetric care. Mbarara Municipality also has four private hospitals. Eighty percent of the Ugandan population lives in rural areas where the economy is predominantly agricultural. According to the World Bank, 38% live in extreme poverty, i.e. below 1.25 USD per day in purchasing power parity (25). According to the 2012 UDHS, only 5% of the rural population has electricity in their homes compared to 55% of the urban population. The literacy rate among women in Uganda is 64.2%, but higher in the southwest (75.5%) (10).\nThe study was given ethical clearance by the Uganda National Council of Science and Technology and Lund University. Permission to conduct the study was also given by local leaders at the district, county, and village levels. A written consent form was read to each participant and a signature or thumbprint was obtained before the interviews.\n Sample and data collection Participants in the community survey were selected by using a two-stage cluster sampling technique. In the first stage, 120 of the 699 villages in the study area were randomly chosen. The average population of a village is approximately 500. In each village, a starting point was alternately identified at the center or on the periphery with the help of a Village Health Team member. Two research assistants moved in opposite directions through the village, stopping at every second household until 10 women who were either pregnant or had delivered within the last 12 months were identified. A total of 1,199 women were interviewed. Out of these, 765 had delivered during the last 12 months and were included in the study. 
The participants were equally divided between Kashari (50.8%) and Rwampara Counties (49.2%).\nThe women were asked about their knowledge, perceptions, and experiences regarding pregnancy, childbirth, and the postpartum period and about their media consumption and exposure to media interventions. The information was obtained by means of a Women's Safe Motherhood Questionnaire developed by the Johns Hopkins Program for International Education in Gynecology and Obstetrics (JHPIEGO) (11) that had been adapted to the Ugandan context and pretested in a neighboring district. After the pretesting, a question was added about the purchase of childbirth materials since it is an important preparation for childbirth in Uganda. A total of 12 research assistants were recruited, each having a bachelor's degree in social science and previous experience in survey data collection. They were all trained for 1 week on the administration of the data instrument.\nThe data collection took place from September to December 2010 in Kashari County and from April to May 2011 in Rwampara County under the supervision of the principal investigator. The questionnaires were continuously verified for completeness and consistency by the field supervisors.
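The household walk described above can be made concrete with a small simulation. The sketch below is only an illustration of the sampling logic (a random draw of 120 of the 699 villages, then a walk that visits every second household until 10 eligible women are found); the village identifiers and the eligibility rate are hypothetical placeholders, not values from the study.

```python
import random

random.seed(1)

# Stage 1: simple random sample of 120 of the 699 villages (hypothetical IDs).
villages = [f"village_{i:03d}" for i in range(1, 700)]
sampled_villages = random.sample(villages, 120)

# Stage 2: walk through every second household until 10 eligible women
# (pregnant or delivered within the last 12 months) are identified.
def survey_village(village, n_target=10, eligibility_rate=0.15):
    women, household = [], 0
    while len(women) < n_target:
        household += 2                             # every second household
        if random.random() < eligibility_rate:     # assumed eligibility rate
            women.append((village, household))
    return women

sample = [woman for v in sampled_villages for woman in survey_village(v)]
print(f"{len(sampled_villages)} villages, {len(sample)} women interviewed")
```

Because women from the same village tend to resemble each other more than women from different villages, this clustered design is the reason the statistical analysis below has to account for intra-cluster correlation.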
 Definition of variables Socio-demographic and reproductive variables \nArea of residence was coded into ‘Kashari’ or ‘Rwampara’ and ‘rural’ for villages or ‘semi-urban’ for trading centers. No urban areas existed in the two counties.\n\nAge was coded into ‘<20 years’, ‘20–24 years’ and ‘≥25 years’.\n\nMarital status was coded as ‘married’ for women who were married or in union and ‘not married’ for single, widowed, divorced or separated.\n\nReligion was coded as ‘Christians’ for women belonging to the Roman Catholic, Church of Uganda or Seventh-day Adventists. The remaining women were coded as ‘Other’.\n\nHighest education level completed was coded into ‘< primary school’, ‘primary school’ or ‘≥ secondary school’.\n\nReading ability was categorized based on the question ‘Can you read a letter, Bible, or a newspaper easily, with difficulty, or not at all?’ Easily was coded as ‘yes’ and with difficulty/not at all was coded as ‘no’.\n\nTravel time to health facility was coded as ‘<1 hour’ and ‘≥1 hour’.\n\nParity was calculated by adding the reported number of live births and stillbirths and coded as ‘primipara’, ‘2–4’ or ‘≥5’ births.\n\nANC visits was coded as ‘<4’ or ‘≥4’ visits.\n Media exposure variables \nRead a newspaper/listen to radio/watch television was coded ‘yes’ or ‘no’ based on the question ‘Do you ever read newspaper/listen to radio/watch television?’\n\nFrequency of exposure was coded as: ‘almost every day’, ‘at least once a week’ or ‘less than once a week’.\n\nMedia exposure was defined as ‘high’ if the women were exposed to radio or television ‘almost every day’ or read a newspaper ‘at least once a week’. The regional newspaper in the local language is a weekly paper, and so weekly newspaper exposure was considered ‘high media exposure’.\n\nRead/heard/saw information on birth preparedness in the past 6 months was coded ‘yes’, ‘no’, or ‘don't know’.\n\nExposure to birth preparedness information through newspaper/radio/television was asked only within the groups exposed to the respective medium.
 Birth preparedness variables Four variables were used as measures for birth preparedness, as follows:\n\nBought childbirth materials was determined by asking ‘which arrangements did you or your family make for the birth of this child?’ and women who had bought a complete ‘Mama kit’ with the necessary materials for childbirth or any of these materials separately were coded as having bought childbirth materials.\n\nSaved money was determined by asking ‘Did you or your family save money for the birth of this child?’.\n\nIdentified transport to place of delivery was determined by asking the question ‘Did you or your family identify transport for the birth of this child?’.\n\nIdentified SBA for delivery was determined by asking ‘Did you or your family identify a skilled provider for the birth of this child?’. Those women who delivered in a health facility and who, in a subsequent question, stated that this place of delivery was planned during the pregnancy, were also coded as having identified a SBA for delivery.\n\nWell birth prepared was defined as having taken at least three of the four actions above.
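As a concrete illustration of how the exposure and outcome composites defined above can be derived from the individual items, the sketch below recodes a hypothetical survey extract with pandas. The column names and values are illustrative assumptions only and do not correspond to the actual questionnaire variables.

```python
import pandas as pd

# Hypothetical extract; column names and values are illustrative only.
df = pd.DataFrame({
    "newspaper_freq": ["at least once a week", "less than once a week", "never"],
    "radio_freq":     ["almost every day", "almost every day", "never"],
    "tv_freq":        ["never", "never", "never"],
    "bought_materials":     [1, 0, 1],
    "saved_money":          [1, 1, 0],
    "identified_transport": [1, 0, 0],
    "identified_sba":       [1, 1, 0],
})

# High media exposure: radio or TV almost every day, or a newspaper at least
# once a week (the regional local-language paper is a weekly).
weekly_or_more = ["almost every day", "at least once a week"]
df["high_media_exposure"] = (
    df["radio_freq"].eq("almost every day")
    | df["tv_freq"].eq("almost every day")
    | df["newspaper_freq"].isin(weekly_or_more)
).astype(int)

# Well birth prepared: at least three of the four preparedness actions.
actions = ["bought_materials", "saved_money", "identified_transport", "identified_sba"]
df["well_birth_prepared"] = (df[actions].sum(axis=1) >= 3).astype(int)

print(df[["high_media_exposure", "well_birth_prepared"]])
```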
 Statistical analysis Sample size was predetermined by the existing database. Statistical analyses were conducted with SPSS Version 21 and all analyses accounted for the intra-cluster correlation. Generalized linear mixed models were used to calculate the odds ratios (OR) and 95% confidence intervals (CI) for the associations between media exposure and being well birth prepared (i.e. bought childbirth materials, saved money, identified transport to place of delivery, and identified SBA for delivery). Multivariate analyses included adjustments for age, education, location of residence, parity, travel time to health facility, and ANC visits.
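The original analysis fitted generalized linear mixed models in SPSS. As one possible open-source analogue, the sketch below fits a population-averaged logistic model (GEE) with an exchangeable working correlation within villages; this is an approximation of the village-level clustering adjustment, not a reproduction of the authors' random-effects model. Odds ratios are obtained by exponentiating the coefficients, with 95% confidence intervals from exponentiating the coefficient plus or minus 1.96 standard errors. All column names are hypothetical and assume a data frame like the one sketched above, extended with the adjustment covariates and a village identifier.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def media_exposure_model(df: pd.DataFrame) -> pd.DataFrame:
    """Logistic GEE with the village as the cluster unit (GLMM approximation)."""
    model = smf.gee(
        "well_birth_prepared ~ reads_newspaper + C(age_group) + C(education)"
        " + C(residence) + C(parity_group) + C(travel_time) + C(anc_visits)",
        groups="village",                        # hypothetical cluster column
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()
    ci = result.conf_int()
    return pd.DataFrame({
        "OR": np.exp(result.params),             # odds ratio = exp(beta)
        "CI 2.5%": np.exp(ci[0]),
        "CI 97.5%": np.exp(ci[1]),
    })
```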
", "The socio-demographic and reproductive variables are presented in Table 1. A majority of the women (83.9%) lived in a rural area, and 44.7% had a travel time of 1 hour or more to a health facility. One-third (34%) could not read at all or only with difficulty and 29.3% had not completed primary school. A total of 23% were primipara and 30.2% had a parity of five or more. Two-thirds (68.9%) had attended the recommended minimum of four ANC visits.\nTable 1. Socio-demographic and reproductive characteristics of recently delivered Ugandan women (n=765)\nExposure to mass media is shown in Table 2. Although 33.9% of the women indicated they read newspapers, only 6.5% read a newspaper ‘almost every day’ and 41.2% read a newspaper ‘at least once a week’. By far the most popular form of media was radio, to which 88.6% of the women were exposed. In this group 90.3% listened to the radio almost every day, while only 4.9% reported ever watching television. In total 81.8% were exposed to radio or television almost every day or read a newspaper at least once a week and were therefore coded as having a high media exposure. The vast majority of this group consisted of radio listeners. Almost half (46.3%) said they had heard or seen some information on birth preparedness in the past 6 months. Among the radio listeners, 43.9% were exposed to birth preparedness information on the radio; among women reading newspapers the corresponding number was 11.4%.\nTable 2. Proportion of recently delivered women reporting media exposure and exposure to birth preparedness information (n=765)\n*Defined as exposure to radio or television almost every day or newspaper at least once a week.\n\nTable 3 shows the distribution of the variables included in birth preparedness. One-fifth of the women (20.7%) had bought childbirth materials, 87.8% had saved money, 60.1% had identified transport for delivery, and 64.3% had identified a skilled provider or health facility for delivery. This resulted in a total of 53.9% of the women being well birth prepared.\nTable 3. Reports of birth preparedness by recently delivered Ugandan women (n=765)\nDefined as having taken at least three of the four actions above.\n\nTable 4 shows the results of univariate logistic regression analysis for the associations between socio-demographic and reproductive factors on the one hand, and birth preparedness on the other. The women who had completed secondary school were more likely to be well birth prepared (OR 1.9, 95% CI 1.2–3.0). Birth preparedness decreased with increasing parity: women who had given birth to five or more children were significantly less birth prepared than primiparas (OR 0.6, 95% CI 0.4–0.99). The women who attended at least four ANC visits were more birth prepared (OR 1.5, 95% CI 1.1–2.1).\nTable 4. Associations (OR, 95% CI) between socio-demographic and reproductive variables, and being well birth prepared in a sample of recently delivered Ugandan women (n=765)\n\nTable 5 provides an analysis of the associations between media exposure and birth preparedness with unadjusted and adjusted ORs. An association was found between reading newspapers and being well birth prepared (OR 2.2, 95% CI 1.5–3.2). Exposure to a newspaper at least once a week did not strengthen the association (OR 1.7, 95% CI 1.03–2.8). Listening to the radio did not have a significant effect on birth preparedness (OR 1.3, 95% CI 0.8–2.2), nor did watching television (OR 0.7, 95% CI 0.3–1.5). Women with high media exposure were not more likely to be well birth prepared (OR 1.3, 95% CI 0.9–2.0).\nTable 5. Associations (OR, 95% CI) between media exposure and being well birth prepared in a sample of recently delivered Ugandan women (n=765)\nAdjusted for age, education, location of residence, parity, travel time to health facility, and ANC visits.\nDefined as exposure to radio or television almost every day or newspaper at least once a week.
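The odds ratios and confidence intervals reported above are back-transformations from the log-odds scale of the regression model. This relationship is not spelled out in the paper, but it allows a rough consistency check of the reported figures; for example, the reported newspaper estimate of OR 2.2 (95% CI 1.5–3.2) corresponds approximately to a coefficient of 0.79 with a standard error of about 0.19 on the log-odds scale:

$$\mathrm{OR} = e^{\hat\beta}, \qquad \mathrm{CI}_{95\%} = e^{\hat\beta \pm 1.96\,\mathrm{SE}}, \qquad e^{0.79 \pm 1.96 \times 0.19} \approx (1.5,\ 3.2).$$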
", "Our findings show a significant relationship between reading newspapers and being birth prepared among rural women in southwest Uganda, regardless of the frequency of exposure. The women listening to the radio or watching television were not significantly more birth prepared. When comparing our results with the exposure to mass media for rural women reported in the 2012 UDHS, we found that a higher proportion of women in our study were exposed at least once a week to radio (85.5% vs. 73.2%) and newspapers (15.7% vs. 10.0%). Exposure to television on a weekly basis, on the other hand, was lower (4.5% vs. 9.8%) (10). Additionally, our data provided more detailed information on the frequency of exposure than the UDHS and could thus be used for the high vs. low media exposure variable. Since the newspaper in the local language is a weekly, there is a strong contextual indication to define reading a newspaper at least once a week as high media exposure. For television and radio, we required an almost daily exposure to be included among those highly exposed. However, because of the high proportion of women who listened to the radio almost every day, this definition resulted in a large group of highly media-exposed women that mainly consisted of radio listeners. We retained this definition since we wanted to study those who were highly media exposed as a collective group. With this definition we are acknowledging that, with the current accessibility of radio in low-income countries, a great majority of the world's population is exposed to traditional media.\nThe most common preparation for childbirth in our study was saving money in anticipation of the birth. This result is similar to findings from Burkina Faso, Ethiopia, and India (83.3, 68.9 and 76.9%, respectively) (16, 17, 26). Approximately two-thirds of the women surveyed had identified a skilled provider or health facility for delivery, which is similar to findings from India (69.6%) but higher than Burkina Faso (43.9%) or Ethiopia (34.8%). Women in Uganda are instructed to bring a ‘mama kit’ or childbirth material to the health facility, but only one-fifth of the women in our study had bought childbirth material. This raises questions regarding its availability and accessibility. Further assessment of the situation is required in order to verify this finding. The proportion of women who arranged for transportation shows large differences between SSA (Burkina Faso 46.1%, Nigeria 62.3%) and Asia (India 29.5%, Nepal 28%) (16, 17, 27, 28). Hence, birth preparedness and the practice of its different components vary considerably between continents, countries, and regions. As with all behavior change programs, it is crucial to know one's audience and their challenges so that one may adopt interventions relevant for a specific setting (29).\nDespite an extensive literature search, no studies were identified that had researched possible associations between media exposure and birth preparedness. Our findings should therefore be considered in relation to other factors associated with media exposure. The exposure rate for different media varies greatly between continents and countries, so that findings from one study cannot be easily translated to another setting. Most studies from Asia do not include newspapers in their media exposure variable. The findings of the Bangladesh Demographic and Health Survey show that the weekly exposure to radio (4.7%) and newspapers (6.3%) among women is substantially lower than in our study population, but the weekly exposure to television is 10 times higher (48.4%) (30). Not surprisingly, a study from Bangladesh exploring the association between media exposure and knowledge of HIV/AIDS reported that television was the most influential source of knowledge about AIDS transmission and prevention. Exposure to newspapers also had a significant association with people hearing about AIDS, and was more strongly associated with myth rejection than television. Radio, however, had a low impact on knowledge of HIV/AIDS in Bangladesh (2).\nFindings from Ghana, with exposure to radio and newspapers similar to Uganda but a 10 times higher exposure to television (29), indicated that deliveries assisted by an SBA increased with the frequency of exposure to television. The same association appeared with regard to reading newspapers, although the significance did not remain after adjusting for confounders. However, as the variables adjusted for are not presented in the Ghanaian study, it is difficult to draw any conclusions regarding the association. The same study showed that exposure to radio had no association with being delivered by an SBA (3).\nOur study and those from multiple other settings (17, 26) have demonstrated the positive effect of education and literacy on birth preparedness, identifying education as a major social determinant of health (31). However, no earlier studies on birth preparedness have shown the added value of reading newspapers. According to our findings, birth preparedness in Uganda is not discussed to any greater extent in the newspapers than on the radio, and this therefore cannot explain the differences between the two groups. As stated in the African Media Barometer Uganda 2012, newspapers are expensive when viewed in relation to the income of many Ugandans (32). The level of household income might be an important confounder of this association. People who read newspapers might also be more likely to discuss issues with others, and such interpersonal interactions are considered important when initiating behavioral change (33). Further research is needed to explore how reading a newspaper may be associated with birth preparedness. A content analysis of Ugandan newspapers, preferably both quantitative and qualitative, would be needed for an in-depth understanding of the influence of newspapers (34). It would also facilitate further improvements in the maternal health information provided by mass media, as requested by the Ugandan government (23).\nAlthough birth preparedness information was more frequent on the radio, listening to the radio did not make women significantly more birth prepared. 
On the other hand, as stated in the concept of health literacy, mere access to information is not enough: it needs to be understood and assimilated in order to lead to behavior change (35). This is especially true when the information conflicts with traditional practices and social norms, such as home deliveries, to which Ugandan women are accustomed (36). Radio generally has wider coverage and reaches vulnerable populations to a greater extent than other forms of media and may therefore be better used for the successful dissemination of interventions in the form of edutainment programs (37). However, despite the many registered FM stations in Uganda (276 in 2011) (32), the country faces another challenge in attaining broad coverage with such interventions: the numerous local languages.\nThe increasing numbers of deliveries that take place in Uganda with the assistance of an SBA indicate an important change, although the maternal mortality ratio remains high (10). The ‘Road Map for Accelerating the Reduction of Maternal and Neonatal Mortality and Morbidity in Uganda’ (23) is a powerful document that has the potential to improve maternal health if appropriately implemented and adequately funded by the Ugandan government.\n Strengths and limitations of this study By using two-stage cluster sampling and interviewing a relatively large number of recently delivered women for this study, a good estimate of birth preparedness and mass media exposure among women in rural southwest Uganda was obtained. Ideally a birth preparedness questionnaire should be given to women in late pregnancy in order to capture all preparations made prior to childbirth and avoid recall problems (38). However, this would lead to difficulties in obtaining the necessary sample size. A limit of 1 year since delivery was set in this study to minimize recall problems.\nIn this paper, we have only looked at the effect of direct exposure to mass media on our study group. However, communication researchers recognize the effect of exposure on the community as a whole. For some behaviors, a change can occur when a certain proportion of the community, rather than separate individuals, is exposed (1). When considering the effect of indirect exposure to mass media, a husband's exposure is likely to have the single most important influence on a woman. The importance of the husband in maternal and child health has been increasingly acknowledged and research has shown that a husband who takes an active role in this regard will increase his wife's birth preparedness (15, 39). We did not look at the husband's exposure to mass media, although it might have had an independent effect on birth preparedness.", "Our results indicate that birth preparedness, and ultimately skilled birth attendance, can be enhanced through increased reading of newspapers. Apart from requiring general literacy skills, this means that newspapers have to be accessible with regard to language, dissemination, and cost." ]
[ "methods", null, null, null, null, null, null, null, "results", "discussion", null, null ]
[ "birth preparedness", "skilled birth attendant", "mass media exposure", "newspaper", "radio", "Uganda", "low-income country" ]
Method: Setting The study was conducted in the Mbarara District in southwest Uganda. The district is divided into Mbarara Municipality, which is the urban center, and Kashari and Rwampara Counties, which are mainly rural. The total population is 436,400 (24). The area contains 46 health centers with various levels of health services provision (II–IV) that all provide antenatal and emergency obstetric care, although Mbarara Regional Referral Hospital is the only one providing comprehensive obstetric care. Mbarara Municipality also has four private hospitals. Eighty percent of the Ugandan population lives in rural areas where the economy is predominantly agricultural. According to the World Bank, 38% live in extreme poverty, i.e. below 1.25 USD per day in purchasing power parity (25). According to the 2012 UDHS, only 5% of the rural population has electricity in their homes compared to 55% of the urban population. The literacy rate among women in Uganda is 64.2%, but higher in the southwest (75.5%) (10). The study was given ethical clearance by the Uganda National Council of Science and Technology and Lund University. Permission to conduct the study was also given by local leaders at the district, county, and village levels. A written consent form was read to each participant and a signature or thumbprint was obtained before the interviews. Sample and data collection Participants in the community survey were selected by using a two-stage cluster sampling technique. In the first stage, 120 of the 699 villages in the study area were randomly chosen. The average population of a village is approximately 500. In each village, a starting point was alternately identified at the center or on the periphery with the help of a Village Health Team member. Two research assistants moved in opposite directions through the village, stopping at every second household until 10 women who were either pregnant or had delivered within the last 12 months were identified. A total of 1,199 women were interviewed. Out of these, 765 had delivered during the last 12 months and were included in the study. The participants were equally divided between Kashari (50.8%) and Rwampara Counties (49.2%). 
The women were asked about their knowledge, perceptions, and experiences regarding pregnancy, childbirth, and the postpartum period and about their media consumption and exposure to media interventions. The information was obtained by means of a Women's Safe Motherhood Questionnaire developed by the Johns Hopkins Program for International Education in Gynecology and Obstetrics (JHPIEGO) (11) that had been adapted to the Ugandan context and pretested in a neighboring district. After the pretesting, a question was added about the purchase of childbirth materials since it is an important preparation for childbirth in Uganda. A total of 12 research assistants were recruited, each having a bachelor's degree in social science and previous experience in survey data collection. They were all trained for 1 week on the administration of the data instrument. The data collection took place from September to December 2010 in Kashari County and from April to May 2011 in Rwampara County under the supervision of the principal investigator. The questionnaires were continuously verified for completeness and consistency by the field supervisors. Definition of variables Socio-demographic and reproductive variables Area of residence was coded into ‘Kashari’ or ‘Rwampara’ and ‘rural’ for villages or ‘semi-urban’ for trading centers. No urban areas existed in the two counties. Age was coded into ‘<20 years’, ‘20–24 years’ and ‘≥25 years’. Marital status was coded as ‘married’ for women who were married or in union and ‘not married’ for single, widowed, divorced or separated. 
Definition of variables
Socio-demographic and reproductive variables
Area of residence was coded as ‘Kashari’ or ‘Rwampara’, and as ‘rural’ for villages or ‘semi-urban’ for trading centers; no urban areas existed in the two counties. Age was coded as ‘<20 years’, ‘20–24 years’, or ‘≥25 years’. Marital status was coded as ‘married’ for women who were married or in union and ‘not married’ for women who were single, widowed, divorced, or separated. Religion was coded as ‘Christian’ for women belonging to the Roman Catholic Church, the Church of Uganda, or the Seventh-day Adventists; the remaining women were coded as ‘Other’. Highest education level completed was coded as ‘< primary school’, ‘primary school’, or ‘≥ secondary school’. Reading ability was categorized based on the question ‘Can you read a letter, Bible, or a newspaper easily, with difficulty, or not at all?’; ‘easily’ was coded as ‘yes’, and ‘with difficulty’ or ‘not at all’ as ‘no’. Travel time to health facility was coded as ‘<1 hour’ or ‘≥1 hour’. Parity was calculated by adding the reported numbers of live births and stillbirths and was coded as ‘primipara’, ‘2–4’, or ‘≥5’ births. ANC visits were coded as ‘<4’ or ‘≥4’ visits.
Media exposure variables
Read a newspaper/listen to radio/watch television was coded ‘yes’ or ‘no’ based on the question ‘Do you ever read a newspaper/listen to the radio/watch television?’. Frequency of exposure was coded as ‘almost every day’, ‘at least once a week’, or ‘less than once a week’. Media exposure was defined as ‘high’ if the women were exposed to radio or television ‘almost every day’ or read a newspaper ‘at least once a week’; since the regional newspaper in the local language is a weekly paper, weekly newspaper exposure was considered ‘high media exposure’. Read/heard/saw information on birth preparedness in the past 6 months was coded ‘yes’, ‘no’, or ‘don't know’. Exposure to birth preparedness information through newspaper/radio/television was asked only of the groups exposed to the respective medium.
Birth preparedness variables
Four variables were used as measures of birth preparedness. Bought childbirth materials was determined by asking ‘Which arrangements did you or your family make for the birth of this child?’; women who had bought a complete ‘Mama kit’ with the necessary materials for childbirth, or any of these materials separately, were coded as having bought childbirth materials. Saved money was determined by asking ‘Did you or your family save money for the birth of this child?’. Identified transport to place of delivery was determined by asking ‘Did you or your family identify transport for the birth of this child?’. Identified SBA for delivery was determined by asking ‘Did you or your family identify a skilled provider for the birth of this child?’; women who delivered in a health facility and who, in a subsequent question, stated that this place of delivery was planned during the pregnancy were also coded as having identified an SBA for delivery. Well birth prepared was defined as having taken at least three of the four actions above.
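These coding rules translate directly into a small data-preparation step. The sketch below shows one way this could look in Python with pandas; the DataFrame and all column names (age_years, radio_freq, bought_materials, and so on) are hypothetical, as the original analysis was performed in SPSS.

```python
import pandas as pd

def code_variables(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()

    # Socio-demographic coding (category labels follow the definitions above)
    out["age_group"] = pd.cut(out["age_years"], bins=[0, 19, 24, 200],
                              labels=["<20 years", "20-24 years", ">=25 years"])
    out["reading_ability"] = out["reading_response"].map(
        {"easily": "yes", "with difficulty": "no", "not at all": "no"})
    out["parity_group"] = pd.cut(out["live_births"] + out["stillbirths"],
                                 bins=[0, 1, 4, 100],
                                 labels=["primipara", "2-4", ">=5"])

    # Media exposure: 'high' if radio or TV almost every day, or a newspaper
    # at least once a week (the regional local-language paper is a weekly)
    out["high_media_exposure"] = (
        out["radio_freq"].eq("almost every day")
        | out["tv_freq"].eq("almost every day")
        | out["newspaper_freq"].isin(["almost every day", "at least once a week"])
    )

    # Birth preparedness: well prepared = at least 3 of the 4 actions (each 0/1)
    actions = ["bought_materials", "saved_money",
               "identified_transport", "identified_sba"]
    out["well_birth_prepared"] = (out[actions].sum(axis=1) >= 3).astype(int)
    return out
```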
Statistical analysis
Sample size was predetermined by the existing database. Statistical analyses were conducted with SPSS Version 21, and all analyses accounted for the intra-cluster correlation. Generalized linear mixed models were used to calculate odds ratios (OR) and 95% confidence intervals (CI) for the associations between media exposure and being well birth prepared (i.e. bought childbirth materials, saved money, identified transport to place of delivery, and identified SBA for delivery). Multivariate analyses included adjustments for age, education, location of residence, parity, travel time to health facility, and ANC visits.
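A minimal sketch of this kind of model in Python is shown below. It approximates the village-level clustering of the generalized linear mixed model with cluster-robust standard errors in statsmodels, which is an approximation rather than the authors' exact implementation; all column names are hypothetical, and the model variables are assumed to have no missing values.

```python
import numpy as np
import statsmodels.formula.api as smf

# Logistic regression with cluster-robust standard errors by village;
# the outcome 'well_birth_prepared' is assumed to be coded 0/1.
model = smf.logit(
    "well_birth_prepared ~ reads_newspaper + age_group + education"
    " + residence + parity_group + travel_time + anc_visits",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["village_id"]})

print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```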
Results
The socio-demographic and reproductive variables are presented in Table 1 (Socio-demographic and reproductive characteristics of recently delivered Ugandan women, n=765). A majority of the women (83.9%) lived in a rural area, and 44.7% had a travel time of 1 hour or more to a health facility. One-third (34%) could not read at all or could read only with difficulty, and 29.3% had not completed primary school. A total of 23% were primipara, and 30.2% had a parity of five or more. Two-thirds (68.9%) had attended the recommended minimum of four ANC visits.
Exposure to mass media is shown in Table 2 (Proportion of recently delivered women reporting media exposure and exposure to birth preparedness information, n=765; high media exposure defined as exposure to radio or television almost every day or a newspaper at least once a week). Although 33.9% of the women indicated that they read newspapers, only 6.5% of these read a newspaper ‘almost every day’ and 41.2% ‘at least once a week’. By far the most popular form of media was radio, to which 88.6% of the women were exposed. In this group, 90.3% listened to the radio almost every day, while only 4.9% reported ever watching television. In total, 81.8% were exposed to radio or television almost every day or read a newspaper at least once a week and were therefore coded as having high media exposure; the vast majority of this group consisted of radio listeners. Almost half (46.3%) said they had heard or seen some information on birth preparedness in the past 6 months. Among the radio listeners, 43.9% had been exposed to birth preparedness information on the radio; among women reading newspapers, the corresponding figure was 11.4%.
Table 3 shows the distribution of the variables included in birth preparedness.
One-fifth of the women (20.7%) had bought childbirth materials, 87.8% had saved money, 60.1% had identified transport for delivery, and 64.3% had identified a skilled provider or health facility for delivery (Table 3: Reports of birth preparedness by recently delivered Ugandan women, n=765; well birth prepared defined as having taken at least three of the four actions). This resulted in a total of 53.9% of the women being well birth prepared.
Table 4 shows the results of the univariate logistic regression analysis of the associations between socio-demographic and reproductive factors on the one hand and birth preparedness on the other (Associations, OR and 95% CI, between socio-demographic and reproductive variables and being well birth prepared in a sample of recently delivered Ugandan women, n=765). Women who had completed secondary school were more likely to be well birth prepared (OR 1.9, 95% CI 1.2–3.0). Birth preparedness decreased with increasing parity: women who had given birth to five or more children were significantly less birth prepared than primiparas (OR 0.6, 95% CI 0.4–0.99). Women who attended at least four ANC visits were more birth prepared (OR 1.5, 95% CI 1.1–2.1).
Table 5 provides an analysis of the associations between media exposure and birth preparedness with unadjusted and adjusted ORs (Associations, OR and 95% CI, between media exposure and being well birth prepared, n=765; adjusted for age, education, location of residence, parity, travel time to health facility, and ANC visits; high media exposure defined as exposure to radio or television almost every day or a newspaper at least once a week). An association was found between reading newspapers and being well birth prepared (OR 2.2, 95% CI 1.5–3.2). Exposure to a newspaper at least once a week did not strengthen the association (OR 1.7, 95% CI 1.03–2.8). Listening to the radio did not have a significant effect on birth preparedness (OR 1.3, 95% CI 0.8–2.2), nor did watching television (OR 0.7, 95% CI 0.3–1.5). Women with high media exposure were not birth prepared to a higher extent (OR 1.3, 95% CI 0.9–2.0).
Discussion
Our findings show a significant relationship between reading newspapers and being birth prepared among rural women in southwest Uganda, regardless of the frequency of exposure. Women listening to the radio or watching television were not significantly more birth prepared. When comparing our results with the exposure to mass media reported for rural women in the 2012 UDHS, we found that a higher proportion of women in our study were exposed at least once a week to radio (85.5% vs. 73.2%) and newspapers (15.7% vs. 10.0%), whereas weekly exposure to television was lower (4.5% vs. 9.8%) (10). Additionally, our data provided more detailed information on the frequency of exposure than the UDHS and could thus be used for the high vs. low media exposure variable. Since the newspaper in the local language is a weekly, there is a strong contextual indication for defining reading a newspaper at least once a week as high media exposure. For television and radio, we required almost daily exposure for inclusion among those highly exposed. However, because of the high proportion of women who listened to the radio almost every day, this definition resulted in a large group of highly media-exposed women that mainly consisted of radio listeners. We retained this definition since we wanted to study those who were highly media exposed as a collective group.
With this definition we acknowledge that, given the current accessibility of radio in low-income countries, a great majority of the world's population is exposed to traditional media. The most common preparation for childbirth in our study was saving money in anticipation of the birth. This result is similar to findings from Burkina Faso, Ethiopia, and India (83.3, 68.9, and 76.9%, respectively) (16, 17, 26). Approximately two-thirds of the women surveyed had identified a skilled provider or health facility for delivery, which is similar to findings from India (69.6%) but higher than Burkina Faso (43.9%) or Ethiopia (34.8%). Women in Uganda are instructed to bring a ‘mama kit’ or childbirth material to the health facility, but only one-fifth of the women in our study had bought childbirth material. This raises questions regarding its availability and accessibility, and further assessment is required to verify this finding. The proportion of women who arranged for transportation shows large differences between SSA (Burkina Faso 46.1%, Nigeria 62.3%) and Asia (India 29.5%, Nepal 28%) (16, 17, 27, 28). Hence, birth preparedness and the practice of its different components vary considerably between continents, countries, and regions. As with all behavior change programs, it is crucial to know one's audience and their challenges so that interventions relevant for a specific setting can be adopted (29). Despite an extensive literature search, no studies were identified that had researched possible associations between media exposure and birth preparedness. Our findings should therefore be considered in relation to other factors associated with media exposure. The exposure rate for different media varies greatly between continents and countries, so findings from one study cannot easily be translated to another setting. Most studies from Asia do not include newspapers in their media exposure variable. The findings of the Bangladesh Demographic and Health Survey show that weekly exposure to radio (4.7%) and newspapers (6.3%) among women is substantially lower than in our study population, but weekly exposure to television is 10 times higher (48.4%) (30). Not surprisingly, a study from Bangladesh exploring the association between media exposure and knowledge of HIV/AIDS reported that television was the most influential source of knowledge about AIDS transmission and prevention. Exposure to newspapers also had a significant association with people having heard about AIDS, and was more strongly associated with myth rejection than television. Radio, however, had a low impact on knowledge of HIV/AIDS in Bangladesh (2). Findings from Ghana, with similar exposure to radio and newspapers as Uganda but a 10 times higher exposure to television (29), indicated that deliveries assisted by an SBA increased with the frequency of exposure to television. The same association appeared with regard to reading newspapers, although the significance did not remain after adjusting for confounders. However, as the variables adjusted for are not presented in the Ghanaian study, it is difficult to draw any conclusions regarding this association. The same study showed that exposure to radio had no association with being delivered by an SBA (3). Our study and those from multiple other settings (17, 26) have demonstrated the positive effect of education and literacy on birth preparedness, identifying education as a major social determinant of health (31).
However, no earlier studies on birth preparedness have shown the added value of reading newspapers. According to our findings, birth preparedness in Uganda is not discussed to any greater extent in newspapers than on the radio, and this therefore cannot explain the differences between the two groups. As stated in the African Media Barometer Uganda 2012, newspapers are expensive in relation to the income of many Ugandans (32), and the level of household income might be an important confounder of this association. People who read newspapers might also be more likely to discuss issues with others, and such interpersonal interactions are considered important when initiating behavioral change (33). Further research is needed to explore how reading a newspaper may be associated with birth preparedness. A content analysis of Ugandan newspapers, preferably both quantitative and qualitative, would be needed for an in-depth understanding of the influence of newspapers (34). It would also facilitate further improvements in the maternal health information provided by mass media, as requested by the Ugandan government (23). Although birth preparedness information was more frequent on the radio, listening to the radio did not make women significantly more birth prepared. On the other hand, as stated in the concept of health literacy, mere access to information is not enough: it needs to be understood and assimilated in order to lead to behavior change (35). This is especially true when the information conflicts with traditional practices and social norms, such as home deliveries, to which Ugandan women are accustomed (36). Radio generally has wider coverage and reaches vulnerable populations to a greater extent than other forms of media, and may therefore be better suited for the successful dissemination of interventions in the form of edutainment programs (37). However, apart from the many registered FM stations in Uganda (276 in 2011) (32), the country faces another challenge in attaining broad coverage with such interventions because of its numerous local languages. The increasing number of deliveries in Uganda that take place with the assistance of an SBA indicates an important change, although the maternal mortality ratio remains high (10). The ‘Road Map for Accelerating the Reduction of Maternal and Neonatal Mortality and Morbidity in Uganda’ (23) is a powerful document that has the potential to improve maternal health if appropriately implemented and adequately funded by the Ugandan government.
Strengths and limitations of this study
By using two-stage cluster sampling and interviewing a relatively large number of recently delivered women, a good estimate of birth preparedness and mass media exposure among women in rural southwest Uganda was obtained. Ideally, a birth preparedness questionnaire should be given to women in late pregnancy in order to capture all preparations made prior to childbirth and to avoid recall problems (38); however, this would lead to difficulties in obtaining the necessary sample size. A limit of 1 year since delivery was therefore set in this study to minimize recall problems. In this paper, we have only looked at the effect of direct exposure to mass media on our study group. However, communication researchers recognize the effect of exposure on the community as a whole: for some behaviors, a change can occur when a certain proportion of the community, rather than separate individuals, is exposed (1). When considering the effect of indirect exposure to mass media, a husband's exposure is likely to have the single most important influence on a woman. The importance of the husband in maternal and child health has been increasingly acknowledged, and research has shown that a husband who takes an active role in this regard will increase his wife's birth preparedness (15, 39). We did not look at the husband's exposure to mass media, although it might have had an independent effect on birth preparedness.
Conclusion
Our results indicate that birth preparedness, and ultimately skilled birth attendance, can be enhanced through increased reading of newspapers.
Apart from requiring general literacy skills, this means that newspapers have to be accessible with regard to language, dissemination, and cost.
Background: Exposure to mass media provides increased awareness and knowledge, as well as changes in attitudes, social norms and behaviors that may lead to positive public health outcomes. Birth preparedness (i.e. the preparations for childbirth made by pregnant women, their families, and communities) increases the use of skilled birth attendants (SBAs) and hence reduces maternal morbidity and mortality. Methods: A total of 765 recently delivered women from 120 villages in the Mbarara District of southwest Uganda were selected for a community-based survey using two-stage cluster sampling. Univariate and multivariate logistic regression was performed with generalized linear mixed models using SPSS 21. Results: We found that 88.6% of the women surveyed listened to the radio and 33.9% read newspapers. Birth preparedness actions included money saved (87.8%), identified SBA (64.3%), identified transport (60.1%), and purchased childbirth materials (20.7%). Women who had taken three or more actions were coded as well birth prepared (53.9%). Women who read newspapers were more likely to be birth prepared (adjusted OR 2.2, 95% CI 1.5-3.2). High media exposure, i.e. regular exposure to radio, newspaper, or television, showed no significant association with birth preparedness (adjusted OR 1.3, 95% CI 0.9-2.0). Conclusions: Our results indicate that increased reading of newspapers can enhance birth preparedness and skilled birth attendance. Apart from general literacy skills, this requires newspapers to be accessible in terms of language, dissemination, and cost.
null
null
9,884
301
[ 250, 342, 1347, 272, 185, 208, 109, 261, 45 ]
12
[ "coded", "birth", "women", "exposure", "media", "newspaper", "preparedness", "birth preparedness", "radio", "read" ]
[ "mbarara municipality private", "income ugandans", "percent ugandan population", "care mbarara municipality", "study conducted mbarara" ]
null
null
null
[CONTENT] birth preparedness | skilled birth attendant | mass media exposure | newspaper | radio | Uganda | low-income country [SUMMARY]
[CONTENT] birth preparedness | skilled birth attendant | mass media exposure | newspaper | radio | Uganda | low-income country [SUMMARY]
[CONTENT] birth preparedness | skilled birth attendant | mass media exposure | newspaper | radio | Uganda | low-income country [SUMMARY]
[CONTENT] birth preparedness | skilled birth attendant | mass media exposure | newspaper | radio | Uganda | low-income country [SUMMARY]
null
null
[CONTENT] Adult | Data Collection | Delivery, Obstetric | Female | Health Knowledge, Attitudes, Practice | Humans | Interviews as Topic | Mass Media | Pregnancy | Uganda | Young Adult [SUMMARY]
[CONTENT] Adult | Data Collection | Delivery, Obstetric | Female | Health Knowledge, Attitudes, Practice | Humans | Interviews as Topic | Mass Media | Pregnancy | Uganda | Young Adult [SUMMARY]
[CONTENT] Adult | Data Collection | Delivery, Obstetric | Female | Health Knowledge, Attitudes, Practice | Humans | Interviews as Topic | Mass Media | Pregnancy | Uganda | Young Adult [SUMMARY]
[CONTENT] Adult | Data Collection | Delivery, Obstetric | Female | Health Knowledge, Attitudes, Practice | Humans | Interviews as Topic | Mass Media | Pregnancy | Uganda | Young Adult [SUMMARY]
null
null
[CONTENT] mbarara municipality private | income ugandans | percent ugandan population | care mbarara municipality | study conducted mbarara [SUMMARY]
[CONTENT] mbarara municipality private | income ugandans | percent ugandan population | care mbarara municipality | study conducted mbarara [SUMMARY]
[CONTENT] mbarara municipality private | income ugandans | percent ugandan population | care mbarara municipality | study conducted mbarara [SUMMARY]
[CONTENT] mbarara municipality private | income ugandans | percent ugandan population | care mbarara municipality | study conducted mbarara [SUMMARY]
null
null
[CONTENT] coded | birth | women | exposure | media | newspaper | preparedness | birth preparedness | radio | read [SUMMARY]
[CONTENT] coded | birth | women | exposure | media | newspaper | preparedness | birth preparedness | radio | read [SUMMARY]
[CONTENT] coded | birth | women | exposure | media | newspaper | preparedness | birth preparedness | radio | read [SUMMARY]
[CONTENT] coded | birth | women | exposure | media | newspaper | preparedness | birth preparedness | radio | read [SUMMARY]
null
null
[CONTENT] coded | birth | newspaper | women | exposure | birth child | determined | asking | family | determined asking [SUMMARY]
[CONTENT] 95 ci | birth | 95 | ci | women | radio | table | exposure | birth prepared | prepared [SUMMARY]
[CONTENT] newspapers | regard language dissemination | accessible regard language | literacy skills | literacy skills mean | literacy skills mean newspapers | dissemination cost | enhanced increased reading newspapers | enhanced increased reading | enhanced increased [SUMMARY]
[CONTENT] coded | birth | exposure | women | newspaper | media | radio | birth preparedness | preparedness | television [SUMMARY]
null
null
[CONTENT] 765 | 120 | the Mbarara District | Uganda | two ||| linear | SPSS 21 [SUMMARY]
[CONTENT] 88.6% | 33.9% ||| 87.8% | SBA | 64.3% | 60.1% | 20.7% ||| three | 53.9% ||| 2.2 | 95% | CI | 1.5-3.2 ||| 1.3 | 95% | CI | 0.9-2.0 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| 765 | 120 | the Mbarara District | Uganda | two ||| linear | SPSS 21 ||| ||| 88.6% | 33.9% ||| 87.8% | SBA | 64.3% | 60.1% | 20.7% ||| three | 53.9% ||| 2.2 | 95% | CI | 1.5-3.2 ||| 1.3 | 95% | CI | 0.9-2.0 ||| ||| [SUMMARY]
null
Prognostic significance of changes in heart rate following uptitration of beta-blockers in patients with sub-optimally treated heart failure with reduced ejection fraction in sinus rhythm versus atrial fibrillation.
30610382
In patients with heart failure with reduced ejection fraction (HFrEF) on sub-optimal doses of beta-blockers, it is conceivable that changes in heart rate following treatment intensification might be important regardless of underlying heart rhythm. We aimed to compare the prognostic significance of both achieved heart rate and change in heart rate following beta-blocker uptitration in patients with HFrEF either in sinus rhythm (SR) or atrial fibrillation (AF).
BACKGROUND
We performed a post hoc analysis of the BIOSTAT-CHF study. We evaluated 1548 patients with HFrEF (mean age 67 years, 35% AF). Median follow-up was 21 months. Patients were evaluated at baseline and at 9 months. The combined primary outcome was all-cause mortality and heart failure hospitalisation stratified by heart rhythm and heart rate at baseline.
METHODS
Despite similar changes in heart rate and beta-blocker dose, a decrease in heart rate at 9 months was associated with reduced incidence of the primary outcome in both SR and AF patients [HR per 10 bpm decrease-SR: 0.83 (0.75-0.91), p < 0.001; AF: 0.89 (0.81-0.98), p = 0.018], whereas the relationship was less strong for achieved heart rate in AF [HR per 10 bpm higher-SR: 1.26 (1.10-1.46), p = 0.001; AF: 1.08 (0.94-1.23), p = 0.18]. Achieved heart rate at 9 months was only prognostically significant in AF patients with high baseline heart rates (p for interaction 0.017 vs. low).
RESULTS
Following beta-blocker uptitration, both achieved and change in heart rate were prognostically significant regardless of starting heart rate in SR, however, they were only significant in AF patients with high baseline heart rate.
CONCLUSIONS
[ "Adrenergic beta-Antagonists", "Aged", "Atrial Fibrillation", "Dose-Response Relationship, Drug", "Female", "Follow-Up Studies", "Heart Failure", "Heart Rate", "Humans", "Male", "Prognosis", "Prospective Studies", "Stroke Volume", "Treatment Outcome" ]
6584244
Introduction
Heart rate is a risk factor in patients with heart failure with reduced ejection fraction (HFrEF) that, when reduced, provides outcome benefits [1, 2]. However, the benefit of heart rate-mediated reduction is less clear in atrial fibrillation (AF). Studies in patients with HFrEF and AF have provided conflicting results, with some suggesting that elevated heart rate is associated with adverse outcome in HFrEF patients in AF, while others found no significant relationship [3–5]. Conceptually, reducing heart rate should have prognostic benefit in HFrEF patients in AF. Randomised controlled trials evaluating rate control strategies in patients with AF have only included small numbers of patients with HFrEF [6]. Additionally, very few studies have evaluated the importance of changes in heart rate over time [7, 8]. Despite the lack of data, current guidelines recommend an optimal heart rate between 60 and 100 bpm in patients with AF and HFrEF, while studies evaluating rate control in patients with AF (but not necessarily HFrEF) suggest that rates up to 110 bpm may be acceptable [6, 9]. One strategy for reducing heart rate is the use of beta-blockers, a mainstay of therapy in HFrEF [9, 10]. Although beta-blockers are prognostically beneficial in patients with HFrEF, it is unclear whether the beta-blocker-mediated reduction in heart rate directly affects prognosis, with several studies reporting conflicting results [11–17]. Furthermore, questions have recently been raised about the prognostic benefits of beta-blocker therapy in HFrEF patients with AF [18, 19]. In particular, there is very little information about whether increasing beta-blocker therapy in patients on sub-optimal doses might derive greater benefit from any associated heart rate reduction [20]. Despite the current uncertainty over the benefits of beta-blockers in HFrEF patients in AF, current guidelines recommend uptitration of beta-blocker therapy to the same target doses irrespective of the underlying heart rhythm. To the best of our knowledge, the relative effects of change in heart rate following intensification of beta-blocker therapy have not been previously examined. Given the frequent co-existence of AF and HFrEF, it is important to determine whether patients in AF derive the same benefit from heart rate reduction and beta-blocker uptitration as those in SR, and whether this effect is modulated by changes in beta-blocker dose. We utilised the systems BIOlogy Study to Tailored Treatment in Chronic Heart Failure (BIOSTAT-CHF) dataset to compare the prognostic importance of changes in heart rate following beta-blocker uptitration in HFrEF patients in AF versus those in sinus rhythm (SR).
null
null
Results
Baseline characteristics
The baseline characteristics of the BIOSTAT-CHF study have been reported previously [21]. Median follow-up in BIOSTAT-CHF was 21 months. Derivation of the cohort for this study is shown in Fig. 1. In total, following exclusion of patients with LVEF ≥ 40% and paced or undetermined ECG rhythms, we included 1548 patients from the BIOSTAT-CHF index cohort (Table 1). 535 patients (34.6%) were in AF on their baseline ECG.
Fig. 1 Cohort derivation: derivation of the cohort from the BIOSTAT-CHF study.
Table 1 Baseline cohort characteristics according to heart rhythm at baseline. Values are given as total cohort (n = 1548) | sinus rhythm (n = 1013) | atrial fibrillation (n = 535) | p value between SR and AF.
Age (years): 67 ± 12 | 65 ± 13 | 71 ± 10 | <0.001
Men: 1175 (75.9) | 750 (74.0) | 425 (79.4) | 0.018
SBP (mmHg): 124 ± 21 | 124 ± 21 | 124 ± 21 | 0.55
DBP (mmHg): 76 ± 12 | 75 ± 12 | 76 ± 12 | 0.14
Heart rate (bpm): 83 ± 21 | 78 ± 17 | 93 ± 24 | <0.001
QRS duration (ms): 112 ± 29 | 113 ± 29 | 112 ± 28 | 0.56
NYHA class (a): p < 0.001
  I: 37 (2.4) | 30 (3.0) | 7 (1.3)
  II: 557 (36.7) | 400 (40.5) | 157 (29.7)
  III: 734 (48.4) | 448 (45.3) | 286 (54.2)
  IV: 188 (12.4) | 110 (11.1) | 78 (14.8)
Ischaemic aetiology: 718 (47.4) | 510 (51.4) | 208 (39.8) | <0.001
Hypertension: 935 (60.4) | 609 (60.1) | 326 (60.9) | 0.76
Current smoker: 252 (16.3) | 201 (19.9) | 51 (9.6) | <0.001
Diabetes mellitus: 490 (31.7) | 322 (31.8) | 168 (31.4) | 0.88
COPD: 259 (16.7) | 163 (16.1) | 96 (17.9) | 0.35
Renal impairment: 357 (23.1) | 193 (19.1) | 165 (30.8) | <0.001
ACEI/ARB: 1158 (74.8) | 770 (76.0) | 388 (72.5) | 0.13
Beta-blocker: 1299 (83.9) | 853 (84.2) | 446 (83.4) | 0.67
Beta-blocker dose %: p < 0.001
  0: 250 (16.1) | 161 (15.9) | 89 (16.6)
  1–49: 938 (60.6) | 644 (63.6) | 294 (55.0)
  50–99: 292 (18.9) | 176 (17.4) | 116 (21.7)
  ≥ 100: 68 (4.4) | 32 (3.2) | 36 (6.7)
MRA: 860 (55.6) | 575 (56.8) | 285 (53.3) | 0.19
Digoxin: 284 (18.3) | 86 (8.5) | 198 (37.0) | <0.001
LVEF (%): 27.3 ± 6.9 | 27.1 ± 7.0 | 27.8 ± 6.9 | 0.07
Table 1 notes: bold values indicate p < 0.05; 32 patients (2.1%) did not have NYHA class recorded. SBP systolic blood pressure, DBP diastolic blood pressure, COPD chronic obstructive pulmonary disease, ACEI angiotensin-converting enzyme inhibitor, ARB angiotensin receptor blocker, LVEF left ventricular ejection fraction, NT-proBNP N-terminal pro B-type natriuretic peptide. (a) Median (interquartile range).
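The paper does not state which statistical tests produced the between-group p values in Table 1, so the following Python sketch is only a plausible reconstruction, using Welch's t-test for continuous variables and a chi-square test for categorical ones; the DataFrame and column names are hypothetical.

```python
from scipy import stats
import pandas as pd

def compare_groups(df: pd.DataFrame) -> dict:
    sr = df[df["rhythm"] == "SR"]
    af = df[df["rhythm"] == "AF"]

    # Continuous variable (e.g. age): Welch's t-test
    _, p_age = stats.ttest_ind(sr["age"], af["age"], equal_var=False)

    # Categorical variable (e.g. male sex): chi-square test on the 2x2 table
    table = pd.crosstab(df["rhythm"], df["male"])
    _, p_male, _, _ = stats.chi2_contingency(table)

    return {"age": p_age, "male": p_male}
```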
Table 2 Cox regression analyses of baseline heart rate on the primary outcome of mortality and heart failure hospitalisation

Hazard ratios are per 10 bpm higher baseline heart rate.

Endpoint | Sinus rhythm (n = 1013): events (%) | SR multivariable HR (95% CI) | SR p value | Atrial fibrillation (n = 535): events (%) | AF multivariable HR (95% CI) | AF p value | Interaction p value
Mortality or heart failure hospitalisation | 323 (31.9) | 1.02 (0.96–1.08) | 0.60 | 231 (43.2) | 0.91 (0.86–0.96) | 0.001 | 0.011
Mortality | 212 (20.9) | 0.97 (0.90–1.05) | 0.50 | 112 (20.9) | 0.96 (0.89–1.04) | 0.40 | 0.75
HF hospitalisation^a | 198 (19.5) | 1.02 (0.94–1.11) | 0.62 | 139 (26.0) | 0.95 (0.88–1.02) | 0.13 | 0.20

Bold values indicate p < 0.05. Multivariable model adjusted for the BIOSTAT-CHF risk prediction model.
BIOSTAT-CHF risk prediction model for the combined endpoint of mortality and HF hospitalisation: age, HF hospitalisation in the previous year, peripheral oedema, systolic blood pressure, log-NT-proBNP, haemoglobin, HDL cholesterol, sodium, beta-blocker use at baseline.
BIOSTAT-CHF risk prediction model for heart failure hospitalisation alone: age, previous HF hospitalisation, presence of oedema, systolic blood pressure and estimated glomerular filtration rate.
BIOSTAT-CHF risk prediction model for mortality alone: age, blood urea nitrogen, NT-proBNP, haemoglobin and beta-blocker use at baseline.
^a Competing risk of death

Baseline heart rate was not a significant predictor of the primary outcome in SR patients (HR per 10 bpm higher: 1.02; 95% CI 0.96–1.08, p = 0.60); however, higher baseline heart rate was significantly associated with improved outcome in patients with AF (HR per 10 bpm higher: 0.91; 95% CI 0.86–0.96, p = 0.001; p for interaction vs. sinus rhythm 0.011). There were no significant associations for the individual endpoints of mortality and HF hospitalisation (Table 2).
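These estimates come from Cox proportional hazards models of the combined endpoint with heart rate scaled per 10 bpm, adjustment for the BIOSTAT-CHF risk prediction model, and an SR-versus-AF interaction test. The sketch below is a minimal, hypothetical R illustration of that kind of model, not the study's actual code: the data are simulated, the variable names are placeholders, and the published risk model is represented by a single pre-computed risk score for brevity.

```r
# Illustrative sketch only: baseline heart rate per 10 bpm, adjusted Cox model,
# with a rhythm interaction term. Data and variable names are hypothetical.
library(survival)

set.seed(2)
n <- 300
d <- data.frame(
  time_days  = rexp(n, rate = 1 / 500),                     # follow-up time
  event      = rbinom(n, 1, 0.4),                           # death or HF hospitalisation
  heart_rate = rnorm(n, mean = 83, sd = 21),                # baseline heart rate (bpm)
  rhythm     = factor(sample(c("SR", "AF"), n, TRUE)),      # baseline ECG rhythm
  risk_score = rnorm(n)                                     # stand-in for the risk model
)

d$hr10 <- d$heart_rate / 10   # so exp(coef) is the hazard ratio per 10 bpm higher

fit <- coxph(Surv(time_days, event) ~ hr10 * rhythm + risk_score, data = d)
summary(fit)  # hr10 term: HR per 10 bpm; hr10:rhythm term: SR-vs-AF interaction test
```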
Relationship between achieved heart rate at 9 months, change in heart rate at 9 months and outcome

ECGs at the 9-month visit were available for 1155 patients: 198 patients died prior to their 9-month visit, while 195 patients did not have an ECG available. After exclusion of 125 patients with paced rhythms, 1030 patients remained for analysis, of whom 734 (71.3%) were in sinus rhythm and 296 (28.7%) were in AF. Heart rate-lowering medication use at 9 months is shown in Table 3. AF on the 9-month ECG was associated with an increased likelihood of the primary outcome compared to SR when added to the BIOSTAT risk prediction model (HR 1.63; 95% CI 1.18–2.23, p = 0.003).
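Adding rhythm at 9 months to the risk-adjusted model amounts to a Cox regression with a binary AF indicator alongside the risk-model adjustment. A minimal, hypothetical R sketch of that step is shown below; the data are simulated and the names (`d9`, `af_9m`, `risk_lp`) are placeholders rather than the study's variables.

```r
# Illustrative sketch only: AF vs SR on the 9-month ECG as a binary predictor,
# adjusted for a hypothetical summary of the BIOSTAT-CHF risk model (risk_lp).
library(survival)

set.seed(3)
n <- 150
d9 <- data.frame(
  time_days = rexp(n, rate = 1 / 500),
  event     = rbinom(n, 1, 0.35),
  af_9m     = rbinom(n, 1, 0.3),   # 1 = AF on the 9-month ECG, 0 = sinus rhythm
  risk_lp   = rnorm(n)
)

fit_af9 <- coxph(Surv(time_days, event) ~ af_9m + risk_lp, data = d9)
summary(fit_af9)  # exp(coef) for af_9m corresponds to an AF-vs-SR hazard ratio
```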
Table 3 Heart rate-controlling medication prescription at 9 months

Medication | Sinus rhythm (n = 734) | Atrial fibrillation (n = 296)
Beta-blocker | 691 (94.1) | 276 (93.2)
Digoxin | 208 (28.3) | 201 (67.9)
Verapamil/diltiazem | 8 (1.1) | 8 (2.7)

Mean achieved heart rate at 9 months was significantly lower in SR patients compared to AF (67 ± 13 versus 81 ± 18 bpm, respectively, p < 0.001). Higher baseline heart rate was significantly associated with a greater reduction in heart rate at 9 months (r = − 0.77, p < 0.001) and an increase in beta-blocker dose at 9 months (r = 0.12, p < 0.001). After adjustment for the BIOSTAT risk prediction model and likelihood of uptitration, a higher achieved heart rate at 9 months was significantly associated with increased likelihood of the primary outcome in SR patients (HR 1.26 per 10 bpm higher; 95% CI 1.10–1.46, p = 0.001) but not in AF (HR 1.08 per 10 bpm higher; 95% CI 0.94–1.23, p = 0.18; p for interaction vs. SR 0.26) (Table 4). There were no significant associations between achieved heart rate and the individual endpoints.

Table 4 Cox regression analyses of achieved heart rate and change in heart rate at 9 months on clinical outcomes

Achieved heart rate (hazard ratio per 10 bpm higher)^a:
Endpoint | Sinus rhythm (n = 734): events (%) | SR multivariable HR (95% CI) | SR p value | Atrial fibrillation (n = 296): events (%) | AF multivariable HR (95% CI) | AF p value | Interaction p value
Mortality or heart failure hospitalisation | 168 (22.9) | 1.29 (1.10–1.46) | 0.001 | 115 (38.9) | 1.08 (0.94–1.23) | 0.18 | 0.26
Mortality |  | 1.00 (0.87–1.15) | 0.96 |  | 1.02 (0.88–1.18) | 0.77 | 0.20
HF hospitalisation^b |  | 1.07 (0.91–1.27) | 0.42 |  | 0.84 (0.65–1.07) | 0.16 | 0.99

Change in heart rate (hazard ratio per 10 bpm decrease)^a:
Endpoint | Sinus rhythm (n = 734): events (%) | SR multivariable HR (95% CI) | SR p value | Atrial fibrillation (n = 296): events (%) | AF multivariable HR (95% CI) | AF p value | Interaction p value
Mortality or heart failure hospitalisation | 168 (22.9) | 0.83 (0.75–0.91) | < 0.001 | 115 (38.9) | 0.89 (0.81–0.98) | 0.018 | 0.97
Mortality |  | 0.95 (0.88–1.03) | 0.23 |  | 0.92 (0.84–1.02) | 0.11 | 0.50
HF hospitalisation^b |  | 0.88 (0.77–1.00) | 0.046 |  | 0.93 (0.85–1.01) | 0.10 | 0.91

Bold values indicate p < 0.05.
BIOSTAT-CHF risk prediction model for mortality and HF hospitalisation includes: age, HF hospitalisation in the previous year, peripheral oedema, systolic blood pressure, NT-proBNP, haemoglobin, HDL cholesterol, sodium, beta-blocker use at baseline.
BIOSTAT-CHF risk prediction model for heart failure hospitalisation alone: age, previous HF hospitalisation, presence of oedema, systolic blood pressure and estimated glomerular filtration rate.
BIOSTAT-CHF risk prediction model for mortality alone: age, blood urea nitrogen, NT-proBNP, haemoglobin and beta-blocker use at baseline.
^a Adjusted for likelihood of uptitration and the BIOSTAT-CHF risk prediction model
^b Competing risk of death

There was no significant difference in change in heart rate at 9 months between SR and AF patients (− 11.5 ± 21.9 bpm versus − 9.1 ± 25.9 bpm, respectively; p = 0.12). In multivariable analysis, a decrease in heart rate was significantly associated with reduced likelihood of the primary outcome in both SR and AF (SR: HR 0.83 per 10 bpm decrease; 95% CI 0.75–0.91, p < 0.001; AF: HR 0.89 per 10 bpm decrease; 95% CI 0.81–0.98, p = 0.018; p for interaction vs. SR 0.97) (Table 4). A decrease in heart rate at 9 months was also significantly associated with reduced HF hospitalisation in patients in SR (HR 0.88 per 10 bpm decrease; 95% CI 0.77–1.00, p = 0.046).
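For the HF-hospitalisation-only endpoint, death was treated as a competing risk (footnote b in Tables 2 and 4). As a rough, hypothetical illustration of that kind of analysis, the sketch below fits a Fine-Gray competing-risks regression in R with the cmprsk package; the data are simulated and the single summary risk covariate stands in for the full adjustment set, so this is not the study's actual model.

```r
# Illustrative sketch only: change in heart rate and HF hospitalisation,
# with death treated as a competing risk (Fine-Gray model via cmprsk::crr).
library(cmprsk)

set.seed(4)
n <- 200
dat <- data.frame(
  time_days   = rexp(n, rate = 1 / 500),
  status      = sample(0:2, n, replace = TRUE,     # 0 = censored,
                       prob = c(0.5, 0.3, 0.2)),   # 1 = HF hospitalisation, 2 = death
  hr_change10 = rnorm(n, mean = -1, sd = 2),       # change in heart rate, per 10 bpm
  risk_lp     = rnorm(n)                           # stand-in for risk-model adjustment
)

cov1 <- as.matrix(dat[, c("hr_change10", "risk_lp")])

fit_cr <- crr(ftime = dat$time_days, fstatus = dat$status, cov1 = cov1,
              failcode = 1, cencode = 0)
summary(fit_cr)  # exp(coef) for hr_change10: subdistribution HR per 10 bpm change
```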
Effects of changes in heart rate in patients stratified by baseline heart rate

Among the patients assessed at 9 months, median baseline heart rate was 77 bpm in those in SR and 85 bpm in those in AF. Higher achieved heart rate and change in heart rate were significantly associated with outcome regardless of baseline heart rate in sinus rhythm (Fig. 2). A different pattern was seen in patients in AF, with higher achieved heart rate only being associated with worse outcome in patients with higher baseline heart rates (baseline heart rate > 85 bpm: HR 1.37 per 10 bpm higher; 95% CI 1.16–1.61, p < 0.001; ≤ 85 bpm: HR 0.99 per 10 bpm higher; 95% CI 0.80–1.22, p = 0.94; p for interaction 0.017) (Fig. 2). A similar pattern was seen with change in heart rate.

Fig. 2 The relationship between achieved heart rate and change in heart rate at 9 months stratified by baseline heart rate. Association of achieved heart rate (left) and change in heart rate (right) with the primary outcome in sinus rhythm and atrial fibrillation stratified by baseline heart rate above and below the median
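The stratified estimates in Fig. 2 correspond to testing whether the effect of achieved heart rate differs above versus below the median baseline heart rate within each rhythm group. The sketch below is a minimal, hypothetical R illustration of such an effect-modification test in the AF subgroup; the data are simulated and the variable names are placeholders, not the study's analysis code.

```r
# Illustrative sketch only: does the association between achieved heart rate and the
# primary outcome differ by baseline heart rate above vs below the median (AF subgroup)?
library(survival)

set.seed(5)
n <- 150
af <- data.frame(
  time_days   = rexp(n, rate = 1 / 500),
  event       = rbinom(n, 1, 0.4),
  achieved_hr = rnorm(n, mean = 81, sd = 18),   # heart rate at 9 months (bpm)
  baseline_hr = rnorm(n, mean = 85, sd = 20),   # baseline heart rate (bpm)
  risk_lp     = rnorm(n)                        # stand-in for risk-model adjustment
)

af$achieved_hr10   <- af$achieved_hr / 10                      # HR per 10 bpm higher
af$above_median_hr <- af$baseline_hr > median(af$baseline_hr)  # e.g. > 85 bpm in AF

fit_strat <- coxph(Surv(time_days, event) ~ achieved_hr10 * above_median_hr + risk_lp,
                   data = af)
summary(fit_strat)  # achieved_hr10:above_median_hr term gives the interaction p value
```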
Conclusions
In HFrEF patients in SR, both lower achieved heart rate and a greater reduction in heart rate following beta-blocker uptitration were associated with improved outcomes, regardless of heart rate at baseline. Despite a similar increase in beta-blocker dose and a similar reduction in heart rate from baseline, in HFrEF patients in AF achieved heart rate and heart rate reduction were prognostically significant only in patients with higher baseline heart rates.
[ "Heart rate is a risk factor in patients with heart failure with reduced ejection fraction (HFrEF) that, when reduced, provides outcome benefits [1, 2]. However, the benefit of heart rate-mediated reduction is less clear in atrial fibrillation (AF). Studies in patients with HFrEF and AF have provided conflicting results, with some suggesting that elevated heart rate is associated with adverse outcome in HFrEF patients in AF, while others found no significant relationship [3–5]. Conceptually, reducing heart rate should have prognostic benefit in HFrEF patients in AF. Randomised controlled trials evaluating rate control strategies in patients with AF have only included small numbers of patients with HFrEF [6]. Additionally, very few studies have evaluated the importance of changes in heart rate over time [7, 8]. Despite the lack of data, current guidelines recommend an optimal heart rate between 60 and 100 bpm in patients with AF and HFrEF, while studies evaluating rate control in patients with AF (but not necessarily HFrEF) suggest that rates up to 110 bpm may be acceptable [6, 9].\nOne strategy for reducing heart rate is the use of beta-blockers, a mainstay of therapy in HFrEF [9, 10]. Although beta-blockers are prognostically beneficial in patients with HFrEF, it is unclear whether the beta-blocker-mediated reduction in heart rate directly affects prognosis, with several studies reporting conflicting results [11–17]. Furthermore, questions have recently been raised about the prognostic benefits of beta-blocker therapy in HFrEF patients with AF [18, 19]. In particular, there is very little information about whether increasing beta-blocker therapy in patients on sub-optimal doses might derive greater benefit from any associated heart rate reduction [20]. Despite the current uncertainty over the benefits of beta-blockers in HFrEF patients in AF, current guidelines recommend uptitration of beta-blocker therapy to the same target doses irrespective of the underlying heart rhythm.\nTo the best of our knowledge, the relative effects of change in heart rate following intensification of beta-blocker therapy have not been previously examined. Given the frequent co-existence of AF and HFrEF, it is important to determine whether patients in AF derive the same benefit from heart rate reduction and beta-blocker uptitration as those in SR, and whether this effect is modulated by changes in beta-blocker dose. We utilised the systems BIOlogy Study to Tailored Treatment in Chronic Heart Failure (BIOSTAT-CHF) dataset to compare the prognostic importance of changes in heart rate following beta-blocker uptitration in HFrEF patients in AF versus those in sinus rhythm (SR).", " Patient selection The BIOSTAT-CHF study design has been published previously [21]. Briefly, BIOSTAT-CHF was a large European, multi-center, multi-national, prospective, observational study of 2516 patients with new onset or worsening HF with either a left ventricular ejection fraction (LVEF) of ≤ 40% or plasma concentrations of Brain Natriuretic Peptide (BNP) > 400 pg/ml and/or N-terminal pro Brain Natriuretic Peptide (NT-proBNP) > 2000 pg/ml, who were being treated with furosemide ≥ 40 mg/day (or equivalent) and were on ≤ 50% of the target dose of angiotensin-converting enzyme inhibitor (ACEI)/angiotensin II receptor blocker (ARB) or beta-blocker therapy. Patients were recruited from both the in-patient and out-patient settings. 
Patients were classified as having AF if they had AF on their electrocardiogram (ECG) at their baseline visit and were reclassified at the second visit ECG. We excluded patients with paced or undetermined ECG rhythms and those with LVEF ≥ 40%.\nIn the first 3 months after recruitment, treating clinicians aimed to initiate and/or uptitrate ACEI/ARBs and beta-blockers to recommended target doses which have been previously published by the European Society of Cardiology [22]. Reasons for failure to successfully uptitrate and side effects have been previously published [23]. Following the 3-month uptitration period, patients entered a 6-month maintenance period where no further uptitration was mandated unless clinically indicated. Patients then were invited for a second visit at 9 months. The trial was approved by the local ethics committee of the participating centers and all patients provided written informed consent. The study complied with the Declaration of Helsinki.\nThe BIOSTAT-CHF study design has been published previously [21]. Briefly, BIOSTAT-CHF was a large European, multi-center, multi-national, prospective, observational study of 2516 patients with new onset or worsening HF with either a left ventricular ejection fraction (LVEF) of ≤ 40% or plasma concentrations of Brain Natriuretic Peptide (BNP) > 400 pg/ml and/or N-terminal pro Brain Natriuretic Peptide (NT-proBNP) > 2000 pg/ml, who were being treated with furosemide ≥ 40 mg/day (or equivalent) and were on ≤ 50% of the target dose of angiotensin-converting enzyme inhibitor (ACEI)/angiotensin II receptor blocker (ARB) or beta-blocker therapy. Patients were recruited from both the in-patient and out-patient settings. Patients were classified as having AF if they had AF on their electrocardiogram (ECG) at their baseline visit and were reclassified at the second visit ECG. We excluded patients with paced or undetermined ECG rhythms and those with LVEF ≥ 40%.\nIn the first 3 months after recruitment, treating clinicians aimed to initiate and/or uptitrate ACEI/ARBs and beta-blockers to recommended target doses which have been previously published by the European Society of Cardiology [22]. Reasons for failure to successfully uptitrate and side effects have been previously published [23]. Following the 3-month uptitration period, patients entered a 6-month maintenance period where no further uptitration was mandated unless clinically indicated. Patients then were invited for a second visit at 9 months. The trial was approved by the local ethics committee of the participating centers and all patients provided written informed consent. The study complied with the Declaration of Helsinki.\n Clinical outcomes Heart rate and rhythm were assessed by ECG with all patients supine and rested for at least 5 min. In BIOSTAT-CHF, all patients were followed up for clinical outcomes. After the scheduled visits at baseline and 9 months, patients were contacted by telephone every 6 months.\nThe primary outcome for this study was the combined endpoint of all-cause mortality or HF hospitalisation. HF hospitalisation was determined as admission to hospital ≥ 24 h due to worsening HF requiring either intravenous or increased dose of oral diuretics.\nHeart rate and rhythm were assessed by ECG with all patients supine and rested for at least 5 min. In BIOSTAT-CHF, all patients were followed up for clinical outcomes. 
After the scheduled visits at baseline and 9 months, patients were contacted by telephone every 6 months.\nThe primary outcome for this study was the combined endpoint of all-cause mortality or HF hospitalisation. HF hospitalisation was determined as admission to hospital ≥ 24 h due to worsening HF requiring either intravenous or increased dose of oral diuretics.\n Statistical analysis Clinical, ECG and echocardiographic data was obtained at baseline, with clinical and ECG data obtained at 9 months. Normally distributed continuous variables were reported as mean ± SD and categorical data, as number with percentage in brackets. Comparisons between continuous variables were carried out using a two-tailed Student t test and categorical variables were tested using the Chi square test. Heart rate and beta-blocker dose at baseline and 9 months were analysed for their association with the primary outcome and all-cause mortality using the Cox proportional hazard model and Kaplan–Meier analysis. Competing risks regression with death as a competing risk was used to determine hazard ratios for hospitalisation alone. To adjust for treatment indication bias inverse probability weighting was used, the method of which has been explained in detail previously [23]. Variables included in the inverse probability weighting were age, baseline heart rate and country of origin.\nVariables were tested for univariable significance and were then included in a multivariable model with the BIOSTAT-CHF risk score [23] to assess their independent association with outcome. SR and AF patients were evaluated separately and interaction testing between SR and AF was also performed within the whole cohort. Heart rate in increments of 10 bpm and beta-blocker dose as a percentage of target dose were examined. Increments of 12.5% of target beta-blocker dose were chosen to reflect clinically used dosages—for example, bisoprolol has a target dose of 10 mg, and is commonly increased in doses of 1.25 mg (12.5% of target dose). Nine-month outcomes only included patients who did not have an event in the first 9 months and those who had ECG data available. Correlations were assessed using Pearson correlation. A p value < 0.05 was considered significant throughout. Statistical analysis was performed using R version 3.4.1.\nClinical, ECG and echocardiographic data was obtained at baseline, with clinical and ECG data obtained at 9 months. Normally distributed continuous variables were reported as mean ± SD and categorical data, as number with percentage in brackets. Comparisons between continuous variables were carried out using a two-tailed Student t test and categorical variables were tested using the Chi square test. Heart rate and beta-blocker dose at baseline and 9 months were analysed for their association with the primary outcome and all-cause mortality using the Cox proportional hazard model and Kaplan–Meier analysis. Competing risks regression with death as a competing risk was used to determine hazard ratios for hospitalisation alone. To adjust for treatment indication bias inverse probability weighting was used, the method of which has been explained in detail previously [23]. Variables included in the inverse probability weighting were age, baseline heart rate and country of origin.\nVariables were tested for univariable significance and were then included in a multivariable model with the BIOSTAT-CHF risk score [23] to assess their independent association with outcome. 
SR and AF patients were evaluated separately and interaction testing between SR and AF was also performed within the whole cohort. Heart rate in increments of 10 bpm and beta-blocker dose as a percentage of target dose were examined. Increments of 12.5% of target beta-blocker dose were chosen to reflect clinically used dosages—for example, bisoprolol has a target dose of 10 mg, and is commonly increased in doses of 1.25 mg (12.5% of target dose). Nine-month outcomes only included patients who did not have an event in the first 9 months and those who had ECG data available. Correlations were assessed using Pearson correlation. A p value < 0.05 was considered significant throughout. Statistical analysis was performed using R version 3.4.1.", "The BIOSTAT-CHF study design has been published previously [21]. Briefly, BIOSTAT-CHF was a large European, multi-center, multi-national, prospective, observational study of 2516 patients with new onset or worsening HF with either a left ventricular ejection fraction (LVEF) of ≤ 40% or plasma concentrations of Brain Natriuretic Peptide (BNP) > 400 pg/ml and/or N-terminal pro Brain Natriuretic Peptide (NT-proBNP) > 2000 pg/ml, who were being treated with furosemide ≥ 40 mg/day (or equivalent) and were on ≤ 50% of the target dose of angiotensin-converting enzyme inhibitor (ACEI)/angiotensin II receptor blocker (ARB) or beta-blocker therapy. Patients were recruited from both the in-patient and out-patient settings. Patients were classified as having AF if they had AF on their electrocardiogram (ECG) at their baseline visit and were reclassified at the second visit ECG. We excluded patients with paced or undetermined ECG rhythms and those with LVEF ≥ 40%.\nIn the first 3 months after recruitment, treating clinicians aimed to initiate and/or uptitrate ACEI/ARBs and beta-blockers to recommended target doses which have been previously published by the European Society of Cardiology [22]. Reasons for failure to successfully uptitrate and side effects have been previously published [23]. Following the 3-month uptitration period, patients entered a 6-month maintenance period where no further uptitration was mandated unless clinically indicated. Patients then were invited for a second visit at 9 months. The trial was approved by the local ethics committee of the participating centers and all patients provided written informed consent. The study complied with the Declaration of Helsinki.", "Heart rate and rhythm were assessed by ECG with all patients supine and rested for at least 5 min. In BIOSTAT-CHF, all patients were followed up for clinical outcomes. After the scheduled visits at baseline and 9 months, patients were contacted by telephone every 6 months.\nThe primary outcome for this study was the combined endpoint of all-cause mortality or HF hospitalisation. HF hospitalisation was determined as admission to hospital ≥ 24 h due to worsening HF requiring either intravenous or increased dose of oral diuretics.", "Clinical, ECG and echocardiographic data was obtained at baseline, with clinical and ECG data obtained at 9 months. Normally distributed continuous variables were reported as mean ± SD and categorical data, as number with percentage in brackets. Comparisons between continuous variables were carried out using a two-tailed Student t test and categorical variables were tested using the Chi square test. 
Results

Baseline characteristics
The baseline characteristics of the BIOSTAT-CHF study have been reported previously [21]. Median follow-up in BIOSTAT-CHF was 21 months. Derivation of the cohort for this study is shown in Fig. 1. In total, following exclusion of patients with LVEF ≥ 40% and paced or undetermined ECG rhythms, we included 1548 patients from the BIOSTAT-CHF index cohort (Table 1); 535 patients (34.6%) were in AF on their baseline ECG.

Fig. 1 Cohort derivation. Derivation of the cohort from the BIOSTAT-CHF study.

Table 1 Baseline cohort characteristics according to heart rhythm at baseline
Columns: total cohort (n = 1548); sinus rhythm (n = 1013); atrial fibrillation (n = 535); p value between SR and AF
Age (years): 67 ± 12; 65 ± 13; 71 ± 10; p < 0.001
Men: 1175 (75.9); 750 (74.0); 425 (79.4); p = 0.018
SBP (mmHg): 124 ± 21; 124 ± 21; 124 ± 21; p = 0.55
DBP (mmHg): 76 ± 12; 75 ± 12; 76 ± 12; p = 0.14
Heart rate (bpm): 83 ± 21; 78 ± 17; 93 ± 24; p < 0.001
QRS duration (ms): 112 ± 29; 113 ± 29; 112 ± 28; p = 0.56
NYHA class(a): p < 0.001
  I: 37 (2.4); 30 (3.0); 7 (1.3)
  II: 557 (36.7); 400 (40.5); 157 (29.7)
  III: 734 (48.4); 448 (45.3); 286 (54.2)
  IV: 188 (12.4); 110 (11.1); 78 (14.8)
Ischaemic aetiology: 718 (47.4); 510 (51.4); 208 (39.8); p < 0.001
Hypertension: 935 (60.4); 609 (60.1); 326 (60.9); p = 0.76
Current smoker: 252 (16.3); 201 (19.9); 51 (9.6); p < 0.001
Diabetes mellitus: 490 (31.7); 322 (31.8); 168 (31.4); p = 0.88
COPD: 259 (16.7); 163 (16.1); 96 (17.9); p = 0.35
Renal impairment: 357 (23.1); 193 (19.1); 165 (30.8); p < 0.001
ACEI/ARB: 1158 (74.8); 770 (76.0); 388 (72.5); p = 0.13
Beta-blocker: 1299 (83.9); 853 (84.2); 446 (83.4); p = 0.67
Beta-blocker dose (% of target): p < 0.001
  0: 250 (16.1); 161 (15.9); 89 (16.6)
  1–49: 938 (60.6); 644 (63.6); 294 (55.0)
  50–99: 292 (18.9); 176 (17.4); 116 (21.7)
  ≥ 100: 68 (4.4); 32 (3.2); 36 (6.7)
MRA: 860 (55.6); 575 (56.8); 285 (53.3); p = 0.19
Digoxin: 284 (18.3); 86 (8.5); 198 (37.0); p < 0.001
LVEF (%): 27.3 ± 6.9; 27.1 ± 7.0; 27.8 ± 6.9; p = 0.07
32 patients (2.1%) did not have NYHA class recorded.
SBP systolic blood pressure, DBP diastolic blood pressure, COPD chronic obstructive pulmonary disease, ACEI angiotensin-converting enzyme inhibitor, ARB angiotensin receptor blocker, LVEF left ventricular ejection fraction, NT-proBNP N-terminal pro B-type natriuretic peptide
(a) Median (interquartile range)
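The SR-versus-AF p values in Table 1 follow from the two-tailed Student t test and Chi square test described in the methods; a brief, hypothetical sketch using the same assumed data frame and placeholder column names as above:

```r
# Illustrative group comparisons behind Table 1 (hypothetical column names)
t.test(age ~ af, data = dat, var.equal = TRUE)        # two-tailed Student t test
t.test(lvef ~ af, data = dat, var.equal = TRUE)
chisq.test(table(dat$af, dat$ischaemic_aetiology))    # Chi square test
chisq.test(table(dat$af, dat$nyha_class))
```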
Relationship between baseline heart rate and outcome
In total, the primary outcome occurred in 554 patients [35.8% of the total cohort; 323 (31.8%) in SR and 231 (43.2%) in AF], including 324 deaths [20.9% of the total cohort; 212 (18.6%) in SR and 112 (28.0%) in AF] and 337 hospitalisations [21.8% of the total cohort; 198 (19.5%) in SR and 139 (26.0%) in AF] (Table 2).

Table 2 Cox regression analyses of baseline heart rate on the primary outcome of mortality and heart failure hospitalisation
Hazard ratios per 10 bpm higher baseline heart rate. For each rhythm group: number of events (%), multivariable hazard ratio (95% CI), p value; final column: interaction p value. Sinus rhythm n = 1013; atrial fibrillation n = 535.
Mortality or heart failure hospitalisation: SR 323 (31.9), HR 1.02 (0.96–1.08), p = 0.60; AF 231 (43.2), HR 0.91 (0.86–0.96), p = 0.001; interaction p = 0.011
Mortality: SR 212 (20.9), HR 0.97 (0.90–1.05), p = 0.50; AF 112 (20.9), HR 0.96 (0.89–1.04), p = 0.40; interaction p = 0.75
HF hospitalisation(a): SR 198 (19.5), HR 1.02 (0.94–1.11), p = 0.62; AF 139 (26.0), HR 0.95 (0.88–1.02), p = 0.13; interaction p = 0.20
Multivariable models adjusted for the BIOSTAT-CHF risk prediction model.
BIOSTAT-CHF risk prediction model for the combined endpoint of mortality and HF hospitalisation: age, HF hospitalisation in the previous year, peripheral oedema, systolic blood pressure, log-NT-proBNP, haemoglobin, HDL cholesterol, sodium, beta-blocker use at baseline.
BIOSTAT-CHF risk prediction model for heart failure hospitalisation alone: age, previous HF hospitalisation, presence of oedema, systolic blood pressure and estimated glomerular filtration rate.
BIOSTAT-CHF risk prediction model for mortality alone: age, blood urea nitrogen, NT-proBNP, haemoglobin and beta-blocker use at baseline.
(a) Competing risk of death

Baseline heart rate was not a significant predictor of the primary outcome in SR patients (HR per 10 bpm higher: 1.02; 95% CI 0.96–1.08, p = 0.60). In contrast, a higher baseline heart rate was significantly associated with improved outcome in patients with AF (HR per 10 bpm higher: 0.91; 95% CI 0.86–0.96, p = 0.001; p for interaction vs. sinus rhythm 0.011). There were no significant associations for the individual endpoints of mortality and HF hospitalisation (Table 2).
Relationship between achieved heart rate at 9 months, change in heart rate at 9 months and outcome
ECGs at the 9-month visit were available for 1155 patients; 198 patients died prior to their 9-month visit and 195 patients did not have an ECG available. After exclusion of 125 patients with paced rhythms, 1030 patients remained for analysis, of whom 734 (71.3%) were in sinus rhythm and 296 (28.7%) were in AF. Heart rate-lowering medication use at 9 months is shown in Table 3. AF on the 9-month ECG was associated with an increased likelihood of the primary outcome compared to SR when added to the BIOSTAT risk prediction model (HR 1.63; 95% CI 1.18–2.23, p = 0.003).

Table 3 Heart rate controlling medication prescription at 9 months
Columns: sinus rhythm (n = 734); atrial fibrillation (n = 296)
Beta-blocker: 691 (94.1); 276 (93.2)
Digoxin: 208 (28.3); 201 (67.9)
Verapamil/diltiazem: 8 (1.1); 8 (2.7)

Mean achieved heart rate at 9 months was significantly lower in SR patients than in AF patients (67 ± 13 versus 81 ± 18 bpm, p < 0.001). Higher baseline heart rate was significantly associated with a greater reduction in heart rate at 9 months (r = − 0.77, p < 0.001) and with an increase in beta-blocker dose at 9 months (r = 0.12, p < 0.001). After adjustment for the BIOSTAT risk prediction model and likelihood of uptitration, a higher achieved heart rate at 9 months was significantly associated with an increased likelihood of the primary outcome in SR patients (HR 1.26 per 10 bpm higher; 95% CI 1.10–1.46, p = 0.001) but not in AF (HR 1.08 per 10 bpm higher; 95% CI 0.94–1.23, p = 0.18; p for interaction vs. SR 0.26) (Table 4). There were no significant associations between achieved heart rate and the individual endpoints.

Table 4 Cox regression analyses of achieved heart rate and change in heart rate at 9 months on clinical outcomes
For each rhythm group: number of events (%), multivariable hazard ratio (95% CI), p value; final column: interaction p value. Sinus rhythm n = 734; atrial fibrillation n = 296.
Achieved heart rate; hazard ratio per 10 bpm higher(a)
  Mortality or heart failure hospitalisation: SR 168 (22.9), HR 1.29 (1.10–1.46), p = 0.001; AF 115 (38.9), HR 1.08 (0.94–1.23), p = 0.18; interaction p = 0.26
  Mortality: SR HR 1.00 (0.87–1.15), p = 0.96; AF HR 1.02 (0.88–1.18), p = 0.77; interaction p = 0.20
  HF hospitalisation(b): SR HR 1.07 (0.91–1.27), p = 0.42; AF HR 0.84 (0.65–1.07), p = 0.16; interaction p = 0.99
Change in heart rate; hazard ratio per 10 bpm decrease(a)
  Mortality or heart failure hospitalisation: SR 168 (22.9), HR 0.83 (0.75–0.91), p < 0.001; AF 115 (38.9), HR 0.89 (0.81–0.98), p = 0.018; interaction p = 0.97
  Mortality: SR HR 0.95 (0.88–1.03), p = 0.23; AF HR 0.92 (0.84–1.02), p = 0.11; interaction p = 0.50
  HF hospitalisation(b): SR HR 0.88 (0.77–1.00), p = 0.046; AF HR 0.93 (0.85–1.01), p = 0.10; interaction p = 0.91
BIOSTAT-CHF risk prediction model for mortality and HF hospitalisation: age, HF hospitalisation in the previous year, peripheral oedema, systolic blood pressure, NT-proBNP, haemoglobin, HDL cholesterol, sodium, beta-blocker use at baseline.
BIOSTAT-CHF risk prediction model for heart failure hospitalisation alone: age, previous HF hospitalisation, presence of oedema, systolic blood pressure and estimated glomerular filtration rate.
BIOSTAT-CHF risk prediction model for mortality alone: age, blood urea nitrogen, NT-proBNP, haemoglobin and beta-blocker use at baseline.
(a) Adjusted for likelihood of uptitration and the BIOSTAT-CHF risk prediction model
(b) Competing risk of death

There was no significant difference in the change in heart rate at 9 months between SR and AF patients (− 11.5 ± 21.9 bpm versus − 9.1 ± 25.9 bpm; p = 0.12). In multivariable analysis, a decrease in heart rate was significantly associated with a reduced likelihood of the primary outcome in both SR and AF (SR: HR 0.83 per 10 bpm decrease; 95% CI 0.75–0.91, p < 0.001; AF: HR 0.89 per 10 bpm decrease; 95% CI 0.81–0.98, p = 0.018; p for interaction vs. SR 0.97) (Table 4). A decrease in heart rate at 9 months was also significantly associated with reduced HF hospitalisation in patients in SR (HR 0.88 per 10 bpm decrease; 95% CI 0.77–1.00, p = 0.046).
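For clarity, the 9-month derived variables and the Pearson correlations reported above could be obtained along these lines; this is a sketch with assumed placeholder columns (for example `hr_9m` and `bb_pct_9m`), not the original code.

```r
# Sketch of the 9-month derived variables (hypothetical columns hr_9m, bb_pct_9m)
dat9 <- subset(dat, !is.na(hr_9m))                  # patients with a 9-month ECG

dat9$hr_change <- dat9$hr_9m - dat9$hr_baseline     # negative = heart rate fell
dat9$bb_change <- dat9$bb_pct_9m - dat9$bb_pct      # change in % of target dose

# Pearson correlations of baseline heart rate with the change variables
cor.test(dat9$hr_baseline, dat9$hr_change)
cor.test(dat9$hr_baseline, dat9$bb_change)
```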
Effects of changes in heart rate in patients stratified by baseline heart rate
Among the patients assessed at 9 months, median baseline heart rate was 77 bpm in those in SR and 85 bpm in those in AF. In patients in sinus rhythm, both achieved heart rate and change in heart rate were significantly associated with outcome regardless of baseline heart rate (Fig. 2). A different pattern was seen in patients in AF, in whom a higher achieved heart rate was associated with worse outcome only in those with higher baseline heart rates (baseline heart rate > 85 bpm: HR 1.37 per 10 bpm higher; 95% CI 1.16–1.61, p < 0.001; ≤ 85 bpm: HR 0.99 per 10 bpm higher; 95% CI 0.80–1.22, p = 0.94; p for interaction 0.017) (Fig. 2). A similar pattern was seen with change in heart rate.

Fig. 2 The relationship between achieved heart rate and change in heart rate at 9 months stratified by baseline heart rate. Association of achieved heart rate (left) and change in heart rate (right) with the primary outcome in sinus rhythm and atrial fibrillation, stratified by baseline heart rate above and below the median.
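The stratified analysis summarised in Fig. 2 can be thought of as the interaction sketched below, shown here for the AF subgroup and achieved heart rate; the landmark time-to-event columns (`time9`, `event9`) and the rest of the data frame are hypothetical assumptions, not the study's actual variables.

```r
# Sketch of the stratified analysis: achieved heart rate at 9 months within
# strata of baseline heart rate above/below the median, AF subgroup shown
library(survival)

af9 <- subset(dat9, af == 1)
af9$high_baseline <- af9$hr_baseline > median(af9$hr_baseline)   # > 85 bpm here

# Interaction between achieved heart rate (per 10 bpm) and baseline stratum
coxph(Surv(time9, event9) ~ I(hr_9m / 10) * high_baseline + biostat_risk, data = af9)

# Stratum-specific hazard ratios
coxph(Surv(time9, event9) ~ I(hr_9m / 10) + biostat_risk, data = subset(af9,  high_baseline))
coxph(Surv(time9, event9) ~ I(hr_9m / 10) + biostat_risk, data = subset(af9, !high_baseline))
```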
Discussion
In this multi-national, multi-centre contemporary study of HF patients with left ventricular systolic dysfunction who were on sub-optimal doses of beta-blocker therapy and underwent treatment intensification, we found that both achieved heart rate and change in heart rate at 9 months were strongly associated with outcome in HFrEF patients in SR, regardless of baseline resting heart rate. In contrast, only a decrease in heart rate was significantly associated with improved outcome in AF patients, and in particular only in those with higher baseline heart rates.
HF and AF frequently co-exist and present an additional layer of complexity in management [24]. Established markers of prognosis, such as baseline heart rate, appear to be less informative, and established therapies, such as beta-blockers, less effective, in HF patients in AF compared to those in SR. Our results align with the increasing evidence from observational studies [3, 4], randomised controlled trials [7] and meta-analysis [5] suggesting that baseline heart rate is not an important prognostic marker in HFrEF patients in AF. Very few studies, however, have examined the prognostic significance of follow-up heart rate in patients with HFrEF in SR and in AF, particularly in the setting of treatment change. Cullington et al. found that heart rate at 1 year was a significant independent predictor of outcome in SR patients but not in AF [3]. In contrast, in an analysis of the Candesartan in heart failure: assessment of reduction in mortality and morbidity (CHARM) programme, Vazir et al. found that change in heart rate was an independent predictor of poor outcome in both SR and AF patients, although the prognostic significance was smaller in AF patients [7]. Our study differs from these, however, because we evaluated a cohort of patients who were not receiving target doses of beta-blockers. A recent large meta-analysis of beta-blocker HF trials reported that a lower achieved heart rate was associated with improved outcome only in SR patients [25].
It is noteworthy that many of these beta-blocker trials were conducted in patients who had not been treated with contemporary heart failure therapy. Our study provides new evidence from contemporary clinical practice.
A major aim of our study was to examine the effect of a change in heart rate in conjunction with changes in beta-blocker dose. We found that, despite similar reductions in heart rate and similar increases in beta-blocker dose, the prognostic significance of both achieved heart rate and change in heart rate was seen only in those AF patients with higher baseline heart rates. It is not completely clear why heart rate reduction using beta-blockers should not be associated with improved outcome in HFrEF patients in AF regardless of baseline heart rate, as it is in SR. It has been postulated that a higher heart rate in patients with AF is beneficial to compensate for the loss of atrial ejection and reduced left ventricular diastolic filling [26]. It may also be that reduction in heart rate using increased doses of beta-blockade is not a beneficial strategy, perhaps due to the potential for ventricular pauses that might be associated with adverse outcome [27].
We found, however, that there were benefits in targeting heart rate in AF patients with higher baseline heart rates. A common clinical question in the treatment of HFrEF in patients with AF is whether the AF is secondary to HF or vice versa [28]. It may be that in some HFrEF patients in AF, the presence of AF is a reflection of HF severity [29]. However, it is also possible that the AF is driving the HF, and that control of the heart rate in this setting ("tachycardiomyopathy") might improve HF outcome [30]. This might also explain our somewhat surprising finding that a higher baseline heart rate was significantly associated with improved outcome following treatment intensification over 9 months. Noting that the median heart rate was significantly higher in patients with AF at baseline, we observed that a higher baseline heart rate was significantly associated with an increase in beta-blocker dose (i.e., a greater likelihood of uptitration) and a greater reduction in heart rate at 9 months. While we cannot determine causality due to the nature of our study, this suggests that some of these patients with higher baseline heart rates may have benefited from intensified therapy, possibly reflecting an element of tachycardiomyopathy. While it is often difficult to diagnose tachycardiomyopathy prospectively, this might account for this unexpected finding.
Heart rate reduction by other mechanisms generally appears to have limited benefit in AF-HFrEF patients. Digoxin is, at best, neutral in terms of clinical outcome in AF patients with HFrEF, though it might provide some symptomatic benefit, while non-dihydropyridine calcium channel blockers are contra-indicated in HF [31]. Alternative strategies may be more beneficial. There may be a role for AV nodal ablation and cardiac resynchronisation device implantation; however, no large randomised trials have yet been conducted to confirm this [32]. Another proposed strategy is AF ablation, with recent data reporting improved outcomes in patients with AF and HFrEF [33].
Indeed, these results are particularly pertinent given the recent results of the CASTLE-AF trial [33], as they suggest that persisting with beta-blocker uptitration to maximal target doses with the aim of lowering heart rate may not provide any mortality benefit in HFrEF patients in AF, and that other strategies, such as pulmonary vein isolation or pacemaker implantation with AV node ablation, may prove to offer greater prognostic benefit by removing the burden of AF.
Our study has some limitations. First, this is a post hoc analysis of a prospective study. However, one of the strengths of the study is that, as well as being observational, the protocol mandated uptitration of HF therapy, adding some of the features of a clinical trial. Second, we only obtained resting heart rhythm and rate at two separate time points. It is possible that patients were in paroxysmal AF at the time of their visit while in SR for the majority of the interim period, or vice versa. Further insights into the effect of heart rate on prognosis might have been obtained by more frequent heart rate monitoring. Third, we did not have any information on changes in heart rate or beta-blocker dose beyond 9 months, which might have had an impact on clinical outcomes. Additionally, despite the overall size of this study, there was a relatively low number of patients in AF at 9 months; thus, we cannot exclude that some interactions may have become significant with larger numbers. Further studies specifically examining beta-blocker uptitration are required to confirm these findings. Finally, due to the number of patients, we did not further stratify the cohort based on the cardio-selectivity of the prescribed beta-blocker. Larger cohorts should be evaluated with the specific aim of determining whether heart rate reduction mediated by cardio-selective beta-blockers is more beneficial in AF patients with HFrEF.

Conclusion
In HFrEF patients in SR, both a lower achieved heart rate and a decrease in heart rate following beta-blocker uptitration were associated with improved outcomes, regardless of heart rate at baseline.
Despite a similar increase in beta-blocker dose and a similar reduction in heart rate from baseline in HFrEF patients in AF, achieved heart rate and the decrease in heart rate from baseline were prognostically significant only in patients with higher baseline heart rates.
[ "introduction", null, null, null, null, "results", null, null, null, null, "discussion", "conclusion" ]
[ "Heart failure", "Heart rate", "Atrial fibrillation", "Beta-blockers" ]
Introduction: Heart rate is a risk factor in patients with heart failure with reduced ejection fraction (HFrEF) that, when reduced, provides outcome benefits [1, 2]. However, the benefit of heart rate-mediated reduction is less clear in atrial fibrillation (AF). Studies in patients with HFrEF and AF have provided conflicting results, with some suggesting that elevated heart rate is associated with adverse outcome in HFrEF patients in AF, while others found no significant relationship [3–5]. Conceptually, reducing heart rate should have prognostic benefit in HFrEF patients in AF. Randomised controlled trials evaluating rate control strategies in patients with AF have only included small numbers of patients with HFrEF [6]. Additionally, very few studies have evaluated the importance of changes in heart rate over time [7, 8]. Despite the lack of data, current guidelines recommend an optimal heart rate between 60 and 100 bpm in patients with AF and HFrEF, while studies evaluating rate control in patients with AF (but not necessarily HFrEF) suggest that rates up to 110 bpm may be acceptable [6, 9]. One strategy for reducing heart rate is the use of beta-blockers, a mainstay of therapy in HFrEF [9, 10]. Although beta-blockers are prognostically beneficial in patients with HFrEF, it is unclear whether the beta-blocker-mediated reduction in heart rate directly affects prognosis, with several studies reporting conflicting results [11–17]. Furthermore, questions have recently been raised about the prognostic benefits of beta-blocker therapy in HFrEF patients with AF [18, 19]. In particular, there is very little information about whether increasing beta-blocker therapy in patients on sub-optimal doses might derive greater benefit from any associated heart rate reduction [20]. Despite the current uncertainty over the benefits of beta-blockers in HFrEF patients in AF, current guidelines recommend uptitration of beta-blocker therapy to the same target doses irrespective of the underlying heart rhythm. To the best of our knowledge, the relative effects of change in heart rate following intensification of beta-blocker therapy have not been previously examined. Given the frequent co-existence of AF and HFrEF, it is important to determine whether patients in AF derive the same benefit from heart rate reduction and beta-blocker uptitration as those in SR, and whether this effect is modulated by changes in beta-blocker dose. We utilised the systems BIOlogy Study to Tailored Treatment in Chronic Heart Failure (BIOSTAT-CHF) dataset to compare the prognostic importance of changes in heart rate following beta-blocker uptitration in HFrEF patients in AF versus those in sinus rhythm (SR). Methods: Patient selection The BIOSTAT-CHF study design has been published previously [21]. Briefly, BIOSTAT-CHF was a large European, multi-center, multi-national, prospective, observational study of 2516 patients with new onset or worsening HF with either a left ventricular ejection fraction (LVEF) of ≤ 40% or plasma concentrations of Brain Natriuretic Peptide (BNP) > 400 pg/ml and/or N-terminal pro Brain Natriuretic Peptide (NT-proBNP) > 2000 pg/ml, who were being treated with furosemide ≥ 40 mg/day (or equivalent) and were on ≤ 50% of the target dose of angiotensin-converting enzyme inhibitor (ACEI)/angiotensin II receptor blocker (ARB) or beta-blocker therapy. Patients were recruited from both the in-patient and out-patient settings. 
Patients were classified as having AF if they had AF on their electrocardiogram (ECG) at their baseline visit and were reclassified at the second visit ECG. We excluded patients with paced or undetermined ECG rhythms and those with LVEF ≥ 40%. In the first 3 months after recruitment, treating clinicians aimed to initiate and/or uptitrate ACEI/ARBs and beta-blockers to recommended target doses which have been previously published by the European Society of Cardiology [22]. Reasons for failure to successfully uptitrate and side effects have been previously published [23]. Following the 3-month uptitration period, patients entered a 6-month maintenance period where no further uptitration was mandated unless clinically indicated. Patients then were invited for a second visit at 9 months. The trial was approved by the local ethics committee of the participating centers and all patients provided written informed consent. The study complied with the Declaration of Helsinki. The BIOSTAT-CHF study design has been published previously [21]. Briefly, BIOSTAT-CHF was a large European, multi-center, multi-national, prospective, observational study of 2516 patients with new onset or worsening HF with either a left ventricular ejection fraction (LVEF) of ≤ 40% or plasma concentrations of Brain Natriuretic Peptide (BNP) > 400 pg/ml and/or N-terminal pro Brain Natriuretic Peptide (NT-proBNP) > 2000 pg/ml, who were being treated with furosemide ≥ 40 mg/day (or equivalent) and were on ≤ 50% of the target dose of angiotensin-converting enzyme inhibitor (ACEI)/angiotensin II receptor blocker (ARB) or beta-blocker therapy. Patients were recruited from both the in-patient and out-patient settings. Patients were classified as having AF if they had AF on their electrocardiogram (ECG) at their baseline visit and were reclassified at the second visit ECG. We excluded patients with paced or undetermined ECG rhythms and those with LVEF ≥ 40%. In the first 3 months after recruitment, treating clinicians aimed to initiate and/or uptitrate ACEI/ARBs and beta-blockers to recommended target doses which have been previously published by the European Society of Cardiology [22]. Reasons for failure to successfully uptitrate and side effects have been previously published [23]. Following the 3-month uptitration period, patients entered a 6-month maintenance period where no further uptitration was mandated unless clinically indicated. Patients then were invited for a second visit at 9 months. The trial was approved by the local ethics committee of the participating centers and all patients provided written informed consent. The study complied with the Declaration of Helsinki. Clinical outcomes Heart rate and rhythm were assessed by ECG with all patients supine and rested for at least 5 min. In BIOSTAT-CHF, all patients were followed up for clinical outcomes. After the scheduled visits at baseline and 9 months, patients were contacted by telephone every 6 months. The primary outcome for this study was the combined endpoint of all-cause mortality or HF hospitalisation. HF hospitalisation was determined as admission to hospital ≥ 24 h due to worsening HF requiring either intravenous or increased dose of oral diuretics. Heart rate and rhythm were assessed by ECG with all patients supine and rested for at least 5 min. In BIOSTAT-CHF, all patients were followed up for clinical outcomes. After the scheduled visits at baseline and 9 months, patients were contacted by telephone every 6 months. 
The primary outcome for this study was the combined endpoint of all-cause mortality or HF hospitalisation. HF hospitalisation was determined as admission to hospital ≥ 24 h due to worsening HF requiring either intravenous or increased dose of oral diuretics. Statistical analysis Clinical, ECG and echocardiographic data was obtained at baseline, with clinical and ECG data obtained at 9 months. Normally distributed continuous variables were reported as mean ± SD and categorical data, as number with percentage in brackets. Comparisons between continuous variables were carried out using a two-tailed Student t test and categorical variables were tested using the Chi square test. Heart rate and beta-blocker dose at baseline and 9 months were analysed for their association with the primary outcome and all-cause mortality using the Cox proportional hazard model and Kaplan–Meier analysis. Competing risks regression with death as a competing risk was used to determine hazard ratios for hospitalisation alone. To adjust for treatment indication bias inverse probability weighting was used, the method of which has been explained in detail previously [23]. Variables included in the inverse probability weighting were age, baseline heart rate and country of origin. Variables were tested for univariable significance and were then included in a multivariable model with the BIOSTAT-CHF risk score [23] to assess their independent association with outcome. SR and AF patients were evaluated separately and interaction testing between SR and AF was also performed within the whole cohort. Heart rate in increments of 10 bpm and beta-blocker dose as a percentage of target dose were examined. Increments of 12.5% of target beta-blocker dose were chosen to reflect clinically used dosages—for example, bisoprolol has a target dose of 10 mg, and is commonly increased in doses of 1.25 mg (12.5% of target dose). Nine-month outcomes only included patients who did not have an event in the first 9 months and those who had ECG data available. Correlations were assessed using Pearson correlation. A p value < 0.05 was considered significant throughout. Statistical analysis was performed using R version 3.4.1. Clinical, ECG and echocardiographic data was obtained at baseline, with clinical and ECG data obtained at 9 months. Normally distributed continuous variables were reported as mean ± SD and categorical data, as number with percentage in brackets. Comparisons between continuous variables were carried out using a two-tailed Student t test and categorical variables were tested using the Chi square test. Heart rate and beta-blocker dose at baseline and 9 months were analysed for their association with the primary outcome and all-cause mortality using the Cox proportional hazard model and Kaplan–Meier analysis. Competing risks regression with death as a competing risk was used to determine hazard ratios for hospitalisation alone. To adjust for treatment indication bias inverse probability weighting was used, the method of which has been explained in detail previously [23]. Variables included in the inverse probability weighting were age, baseline heart rate and country of origin. Variables were tested for univariable significance and were then included in a multivariable model with the BIOSTAT-CHF risk score [23] to assess their independent association with outcome. SR and AF patients were evaluated separately and interaction testing between SR and AF was also performed within the whole cohort. 
Heart rate in increments of 10 bpm and beta-blocker dose as a percentage of target dose were examined. Increments of 12.5% of target beta-blocker dose were chosen to reflect clinically used dosages—for example, bisoprolol has a target dose of 10 mg, and is commonly increased in doses of 1.25 mg (12.5% of target dose). Nine-month outcomes only included patients who did not have an event in the first 9 months and those who had ECG data available. Correlations were assessed using Pearson correlation. A p value < 0.05 was considered significant throughout. Statistical analysis was performed using R version 3.4.1. Patient selection: The BIOSTAT-CHF study design has been published previously [21]. Briefly, BIOSTAT-CHF was a large European, multi-center, multi-national, prospective, observational study of 2516 patients with new onset or worsening HF with either a left ventricular ejection fraction (LVEF) of ≤ 40% or plasma concentrations of Brain Natriuretic Peptide (BNP) > 400 pg/ml and/or N-terminal pro Brain Natriuretic Peptide (NT-proBNP) > 2000 pg/ml, who were being treated with furosemide ≥ 40 mg/day (or equivalent) and were on ≤ 50% of the target dose of angiotensin-converting enzyme inhibitor (ACEI)/angiotensin II receptor blocker (ARB) or beta-blocker therapy. Patients were recruited from both the in-patient and out-patient settings. Patients were classified as having AF if they had AF on their electrocardiogram (ECG) at their baseline visit and were reclassified at the second visit ECG. We excluded patients with paced or undetermined ECG rhythms and those with LVEF ≥ 40%. In the first 3 months after recruitment, treating clinicians aimed to initiate and/or uptitrate ACEI/ARBs and beta-blockers to recommended target doses which have been previously published by the European Society of Cardiology [22]. Reasons for failure to successfully uptitrate and side effects have been previously published [23]. Following the 3-month uptitration period, patients entered a 6-month maintenance period where no further uptitration was mandated unless clinically indicated. Patients then were invited for a second visit at 9 months. The trial was approved by the local ethics committee of the participating centers and all patients provided written informed consent. The study complied with the Declaration of Helsinki. Clinical outcomes: Heart rate and rhythm were assessed by ECG with all patients supine and rested for at least 5 min. In BIOSTAT-CHF, all patients were followed up for clinical outcomes. After the scheduled visits at baseline and 9 months, patients were contacted by telephone every 6 months. The primary outcome for this study was the combined endpoint of all-cause mortality or HF hospitalisation. HF hospitalisation was determined as admission to hospital ≥ 24 h due to worsening HF requiring either intravenous or increased dose of oral diuretics. Statistical analysis: Clinical, ECG and echocardiographic data was obtained at baseline, with clinical and ECG data obtained at 9 months. Normally distributed continuous variables were reported as mean ± SD and categorical data, as number with percentage in brackets. Comparisons between continuous variables were carried out using a two-tailed Student t test and categorical variables were tested using the Chi square test. Heart rate and beta-blocker dose at baseline and 9 months were analysed for their association with the primary outcome and all-cause mortality using the Cox proportional hazard model and Kaplan–Meier analysis. 
Results

Baseline characteristics
The baseline characteristics of the BIOSTAT-CHF study have been reported previously [21]. Median follow-up in BIOSTAT-CHF was 21 months. Derivation of the cohort for this study is shown in Fig. 1. In total, following exclusion of patients with LVEF ≥ 40% and paced or undetermined ECG rhythms, we included 1548 patients from the BIOSTAT-CHF index cohort (Table 1). 535 patients (34.6%) were in AF on their baseline ECG.

Fig. 1 Cohort derivation: derivation of the cohort from the BIOSTAT-CHF study.
Table 1 Baseline cohort characteristics according to heart rhythm at baseline
Characteristic | Total cohort (n = 1548) | Sinus rhythm (n = 1013) | Atrial fibrillation (n = 535) | p value between SR and AF
Age (years) | 67 ± 12 | 65 ± 13 | 71 ± 10 | < 0.001
Men | 1175 (75.9) | 750 (74.0) | 425 (79.4) | 0.018
SBP (mmHg) | 124 ± 21 | 124 ± 21 | 124 ± 21 | 0.55
DBP (mmHg) | 76 ± 12 | 75 ± 12 | 76 ± 12 | 0.14
Heart rate (bpm) | 83 ± 21 | 78 ± 17 | 93 ± 24 | < 0.001
QRS duration (ms) | 112 ± 29 | 113 ± 29 | 112 ± 28 | 0.56
NYHA class^a | | | | < 0.001
  I | 37 (2.4) | 30 (3.0) | 7 (1.3) |
  II | 557 (36.7) | 400 (40.5) | 157 (29.7) |
  III | 734 (48.4) | 448 (45.3) | 286 (54.2) |
  IV | 188 (12.4) | 110 (11.1) | 78 (14.8) |
Ischaemic aetiology | 718 (47.4) | 510 (51.4) | 208 (39.8) | < 0.001
Hypertension | 935 (60.4) | 609 (60.1) | 326 (60.9) | 0.76
Current smoker | 252 (16.3) | 201 (19.9) | 51 (9.6) | < 0.001
Diabetes mellitus | 490 (31.7) | 322 (31.8) | 168 (31.4) | 0.88
COPD | 259 (16.7) | 163 (16.1) | 96 (17.9) | 0.35
Renal impairment | 357 (23.1) | 193 (19.1) | 165 (30.8) | < 0.001
ACEI/ARB | 1158 (74.8) | 770 (76.0) | 388 (72.5) | 0.13
Beta-blocker | 1299 (83.9) | 853 (84.2) | 446 (83.4) | 0.67
Beta-blocker dose (% of target) | | | | < 0.001
  0 | 250 (16.1) | 161 (15.9) | 89 (16.6) |
  1–49 | 938 (60.6) | 644 (63.6) | 294 (55.0) |
  50–99 | 292 (18.9) | 176 (17.4) | 116 (21.7) |
  ≥ 100 | 68 (4.4) | 32 (3.2) | 36 (6.7) |
MRA | 860 (55.6) | 575 (56.8) | 285 (53.3) | 0.19
Digoxin | 284 (18.3) | 86 (8.5) | 198 (37.0) | < 0.001
LVEF (%) | 27.3 ± 6.9 | 27.1 ± 7.0 | 27.8 ± 6.9 | 0.07
Bold values indicate p < 0.05. 32 patients (2.1%) did not have NYHA class recorded. SBP systolic blood pressure, DBP diastolic blood pressure, COPD chronic obstructive pulmonary disease, ACEI angiotensin-converting enzyme inhibitor, ARB angiotensin receptor blocker, LVEF left ventricular ejection fraction, NT-proBNP N-terminal pro B-type natriuretic peptide. ^a Median (interquartile range)
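The between-rhythm p values in Table 1 come from the two-tailed Student t test (continuous variables) and the chi-square test (categorical variables) described in the statistical analysis. The short fragment below is a hypothetical illustration of those comparisons using scipy; it reuses the made-up column names from the earlier sketch and is not the authors' code.

```python
# Hypothetical illustration of the Table 1 style comparisons (not the authors' code).
import pandas as pd
from scipy import stats

df = pd.read_csv("biostat_chf_subset.csv")  # hypothetical file, as in the sketch above
sr = df[df["rhythm"] == "SR"]
af = df[df["rhythm"] == "AF"]

# Continuous variable (e.g. age): two-tailed Student t test, reported as mean +/- SD.
t_stat, p_age = stats.ttest_ind(sr["age"], af["age"])
print(f"Age: {sr['age'].mean():.0f} +/- {sr['age'].std():.0f} vs "
      f"{af['age'].mean():.0f} +/- {af['age'].std():.0f}, p = {p_age:.3f}")

# Categorical variable (e.g. hypertension, coded 0/1): chi-square test on the 2x2 table.
chi2, p_htn, dof, expected = stats.chi2_contingency(pd.crosstab(df["rhythm"], df["hypertension"]))
print(f"Hypertension: p = {p_htn:.3f}")
```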
Relationship between baseline heart rate and outcome
In total, the primary outcome occurred in 554 patients [35.8% of the total cohort; 323 (31.8%) in SR and 231 (43.2%) in AF], including 324 deaths [20.9% of the total cohort; 212 (18.6%) in SR and 112 (28.0%) in AF] and 337 hospitalisations [21.8% of the total cohort; 198 (19.5%) in SR and 139 (26.0%) in AF] (Table 2).

Table 2 Cox regression analyses of baseline heart rate on the primary outcome of mortality and heart failure hospitalisation (hazard ratio per 10 bpm higher baseline heart rate)
Endpoint | SR (n = 1013): events (%) | SR: multivariable HR (95% CI) | SR: p value | AF (n = 535): events (%) | AF: multivariable HR (95% CI) | AF: p value | Interaction p value
Mortality or HF hospitalisation | 323 (31.9) | 1.02 (0.96–1.08) | 0.60 | 231 (43.2) | 0.91 (0.86–0.96) | 0.001 | 0.011
Mortality | 212 (20.9) | 0.97 (0.90–1.05) | 0.50 | 112 (20.9) | 0.96 (0.89–1.04) | 0.40 | 0.75
HF hospitalisation^a | 198 (19.5) | 1.02 (0.94–1.11) | 0.62 | 139 (26.0) | 0.95 (0.88–1.02) | 0.13 | 0.20
Bold values indicate p < 0.05. The multivariable model was adjusted for the BIOSTAT-CHF risk prediction model. BIOSTAT-CHF risk prediction model for the combined endpoint of mortality and HF hospitalisation: age, HF hospitalisation in the previous year, peripheral oedema, systolic blood pressure, log-NT-proBNP, haemoglobin, HDL cholesterol, sodium, beta-blocker use at baseline. BIOSTAT-CHF risk prediction model for HF hospitalisation alone: age, previous HF hospitalisation, presence of oedema, systolic blood pressure and estimated glomerular filtration rate. BIOSTAT-CHF risk prediction model for mortality alone: age, blood urea nitrogen, NT-proBNP, haemoglobin and beta-blocker use at baseline. ^a Competing risk of death

Baseline heart rate was not a significant predictor of the primary outcome in SR patients (HR per 10 bpm higher: 1.02; 95% CI 0.96–1.08, p = 0.60); however, higher baseline heart rate was significantly associated with improved outcome in patients with AF (HR per 10 bpm higher: 0.91; 95% CI 0.86–0.96, p = 0.001; p for interaction vs. sinus rhythm 0.011). There were no significant associations for the individual endpoints of mortality and HF hospitalisation (Table 2).
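For readers unfamiliar with per-increment hazard ratios, the 10 bpm scaling is simply an exponent on the per-unit Cox coefficient. The brief sketch below illustrates this standard rescaling using the AF estimate reported above (0.91 per 10 bpm); it is an arithmetic illustration only, not a reproduction of the authors' calculation.

```python
import numpy as np

hr_per_10bpm = 0.91                    # reported HR per 10 bpm higher (AF, Table 2)
beta_per_10bpm = np.log(hr_per_10bpm)  # Cox log-hazard coefficient on the 10 bpm scale

# The same association per 1 bpm: divide the coefficient by 10 before exponentiating.
print(np.exp(beta_per_10bpm / 10))     # ~0.9906 per 1 bpm higher

# And per 20 bpm: square the per-10-bpm hazard ratio.
print(hr_per_10bpm ** 2)               # ~0.828 per 20 bpm higher
```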
Relationship between achieved heart rate at 9 months, change in heart rate at 9 months and outcome
ECGs at the 9-month visit were available for 1155 patients. 198 patients died prior to their 9-month visit, while 195 patients did not have an ECG available. After exclusion of 125 patients with paced rhythms, 1030 patients remained for analysis, of whom 734 (71.3%) were in sinus rhythm and 296 (28.7%) were in AF. Heart rate-lowering medication use at 9 months is shown in Table 3. AF on the 9-month ECG was associated with an increased likelihood of the primary outcome compared to SR when added to the BIOSTAT risk prediction model (HR 1.63; 95% CI 1.18–2.23, p = 0.003).

Table 3 Heart rate-controlling medication prescription at 9 months
Medication | Sinus rhythm (n = 734) | Atrial fibrillation (n = 296)
Beta-blocker | 691 (94.1) | 276 (93.2)
Digoxin | 208 (28.3) | 201 (67.9)
Verapamil/diltiazem | 8 (1.1) | 8 (2.7)

Mean achieved heart rate at 9 months was significantly lower in SR patients compared to AF (67 ± 13 versus 81 ± 18 bpm, respectively, p < 0.001). Higher baseline heart rate was significantly associated with a greater reduction in heart rate at 9 months (r = − 0.77, p < 0.001) and an increase in beta-blocker dose at 9 months (r = 0.12, p < 0.001). After adjustment for the BIOSTAT risk prediction model and likelihood of uptitration, a higher achieved heart rate at 9 months was significantly associated with an increased likelihood of the primary outcome in SR patients (HR 1.26 per 10 bpm higher; 95% CI 1.10–1.46, p = 0.001) but not in AF (HR 1.08 per 10 bpm higher; 95% CI 0.94–1.23, p = 0.18; p for interaction vs. SR 0.26) (Table 4). There were no significant associations between achieved heart rate and the individual endpoints.

Table 4 Cox regression analyses of achieved heart rate and change in heart rate at 9 months on clinical outcomes
Endpoint | SR (n = 734): events (%) | SR: multivariable HR (95% CI) | SR: p value | AF (n = 296): events (%) | AF: multivariable HR (95% CI) | AF: p value | Interaction p value
Achieved heart rate; hazard ratio per 10 bpm higher^a
  Mortality or HF hospitalisation | 168 (22.9) | 1.29 (1.10–1.46) | 0.001 | 115 (38.9) | 1.08 (0.94–1.23) | 0.18 | 0.26
  Mortality | | 1.00 (0.87–1.15) | 0.96 | | 1.02 (0.88–1.18) | 0.77 | 0.20
  HF hospitalisation^b | | 1.07 (0.91–1.27) | 0.42 | | 0.84 (0.65–1.07) | 0.16 | 0.99
Change in heart rate; hazard ratio per 10 bpm decrease^a
  Mortality or HF hospitalisation | 168 (22.9) | 0.83 (0.75–0.91) | < 0.001 | 115 (38.9) | 0.89 (0.81–0.98) | 0.018 | 0.97
  Mortality | | 0.95 (0.88–1.03) | 0.23 | | 0.92 (0.84–1.02) | 0.11 | 0.50
  HF hospitalisation^b | | 0.88 (0.77–1.00) | 0.046 | | 0.93 (0.85–1.01) | 0.10 | 0.91
Bold values indicate p < 0.05. BIOSTAT-CHF risk prediction model for mortality and HF hospitalisation includes: age, HF hospitalisation in the previous year, peripheral oedema, systolic blood pressure, NT-proBNP, haemoglobin, HDL cholesterol, sodium, beta-blocker use at baseline. BIOSTAT-CHF risk prediction model for HF hospitalisation alone: age, previous HF hospitalisation, presence of oedema, systolic blood pressure and estimated glomerular filtration rate. BIOSTAT-CHF risk prediction model for mortality alone: age, blood urea nitrogen, NT-proBNP, haemoglobin and beta-blocker use at baseline. ^a Adjusted for likelihood of uptitration and the BIOSTAT-CHF risk prediction model. ^b Competing risk of death

There was no significant difference in change in heart rate at 9 months between SR and AF patients (− 11.5 ± 21.9 bpm versus − 9.1 ± 25.9 bpm, respectively; p = 0.12). In multivariable analysis, a decrease in heart rate was significantly associated with a reduced likelihood of the primary outcome in both SR and AF (SR: HR 0.83 per 10 bpm decrease; 95% CI 0.75–0.91, p < 0.001; AF: HR 0.89 per 10 bpm decrease; 95% CI 0.81–0.98, p = 0.018; p for interaction vs. SR 0.97) (Table 4). A decrease in heart rate at 9 months was also significantly associated with reduced HF hospitalisation in patients in SR (HR 0.88 per 10 bpm decrease; 95% CI 0.77–1.00, p = 0.046).
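The achieved and change-in-heart-rate analyses in Table 4 reuse the Cox framework from the earlier sketch, with the exposure recoded from the baseline and 9-month ECGs. The fragment below shows one hypothetical way to derive the per-10-bpm variables and to test the SR vs. AF interaction reported in the table; the column names are again placeholders rather than the authors' variable names.

```python
# Continuation of the hypothetical sketch above (all column names remain placeholders).
from lifelines import CoxPHFitter

# Achieved heart rate at 9 months, and change coded so that a positive value is a
# decrease from baseline; both divided by 10 so exp(coef) reads as a HR per 10 bpm.
df["achieved_per10"] = df["hr_9m_bpm"] / 10.0
df["decrease_per10"] = (df["baseline_hr_bpm"] - df["hr_9m_bpm"]) / 10.0

# Interaction test across the whole cohort: rhythm indicator plus a product term.
df["is_af"] = (df["rhythm"] == "AF").astype(int)
df["af_x_decrease"] = df["is_af"] * df["decrease_per10"]

cph = CoxPHFitter()
cph.fit(
    df[["time_from_9m", "event_after_9m", "decrease_per10", "is_af",
        "af_x_decrease", "biostat_risk_score", "ipw"]],
    duration_col="time_from_9m",
    event_col="event_after_9m",
    weights_col="ipw",
    robust=True,
)
print(cph.summary.loc["af_x_decrease", "p"])  # p value for the SR vs. AF interaction
```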
Effects of changes in heart rate in patients stratified by baseline heart rate
Among the patients assessed at 9 months, median baseline heart rate was 77 bpm in those in SR and 85 bpm in those in AF. Higher achieved heart rate and change in heart rate were significantly associated with outcome regardless of baseline heart rate in sinus rhythm (Fig. 2). A different pattern was seen in patients in AF, however, with higher achieved heart rate being associated with worse outcome only in patients with higher baseline heart rates (baseline heart rate > 85 bpm: HR 1.37 per 10 bpm higher; 95% CI 1.16–1.61, p < 0.001; ≤ 85 bpm: HR 0.99 per 10 bpm higher; 95% CI 0.80–1.22, p = 0.94; p for interaction 0.017) (Fig. 2). A similar pattern was seen with change in heart rate.

Fig. 2 The relationship between achieved heart rate and change in heart rate at 9 months stratified by baseline heart rate. Association of achieved heart rate (left) and change in heart rate (right) with the primary outcome in sinus rhythm and atrial fibrillation, stratified by baseline heart rate above and below the median.

Discussion
In this multi-national, multi-centre contemporary study of HF patients with left ventricular systolic dysfunction on sub-optimal doses of beta-blocker therapy subjected to treatment intensification, we found that both achieved heart rate and change in heart rate at 9 months are strongly associated with outcome in HFrEF patients in SR regardless of baseline resting heart rate. In contrast, only a decrease in heart rate was significantly associated with improved outcome in AF patients, and in particular only in those with higher baseline heart rates. HF and AF frequently co-exist and present an additional layer of complexity in management [24]. Established markers of prognosis, such as baseline heart rate, and established therapies, such as beta-blockers, appear to be less effective in HF patients in AF compared to those in SR. Our results align with the increasing evidence from observational studies [3, 4], randomised controlled trials [7] and meta-analyses [5] suggesting that baseline heart rate is not an important prognostic marker in HFrEF patients in AF. Very few studies, however, have examined the prognostic significance of follow-up heart rate in patients with HFrEF in SR and in AF, particularly in the setting of treatment change. Cullington et al.
found that heart rate at 1 year was a significant independent predictor of outcome in SR patients but not in AF [3], while in contrast, in an analysis of the Candesartan in heart failure: assessment of reduction in mortality and morbidity (CHARM) programme, Vazir et al. found that change in heart rate was also an independent predictor of poor outcome in both SR and AF patients, although the prognostic significance was lower in AF patients [7]. Our study differs from these, however, as we evaluated a cohort of patients who were not receiving target doses of beta-blockers. A recent large meta-analysis of beta-blocker HF trials reported that a lower achieved heart rate was associated with improved outcome only in SR patients [25]. It is noteworthy that many of these beta-blocker trials were conducted in patients who had not been treated with contemporary heart failure therapy. Our study provides new evidence involving contemporary clinical practice. A major part of our study was to examine the effect of a change in heart rate in conjunction with changes in beta-blocker dose. We found that despite similar reductions in heart rate and similar increases in beta-blocker dose, the prognostic significance of both achieved heart rate and change in heart rate was only seen in AF patients at higher baseline heart rates. It is not completely clear why heart rate reduction using beta-blockers should not be associated with improved outcome in HFrEF patients in AF regardless of baseline heart rate, as it is in SR. It has been postulated that a higher heart rate in patients with AF is beneficial to compensate for the loss of atrial ejection and reduced left ventricular diastolic filling [26]. It may also be that reduction in heart rate using increased doses of beta-blockade is not a beneficial strategy, perhaps due to the potential for ventricular pauses that might be associated with adverse outcome [27]. We found, however, that there were benefits in targeting heart rate in AF patients with higher baseline heart rates. A common clinical question in the treatment of HFrEF in patients with AF is whether the AF is secondary to HF or vice versa [28]. It may be that in some HFrEF patients in AF, the presence of AF may be a reflection of HF severity [29]. However, it is also possible that the AF is driving the HF, and that control of the heart rate in this setting ("tachycardiomyopathy") might improve HF outcome [30]. This might also explain our somewhat surprising finding that increased baseline heart rate was significantly associated with improved outcome following treatment intensification over 9 months. Being aware that the median heart rate was significantly higher in patients with AF at baseline, we noted that a higher baseline heart rate was significantly associated with an increase in beta-blocker dose (i.e., a greater likelihood of uptitration) and a greater reduction in heart rate at 9 months. While we cannot determine causality due to the nature of our study, this does suggest that some of these patients with higher baseline heart rates may have benefited from intensified therapy, which may reflect an element of "tachycardiomyopathy". While it is often difficult to diagnose tachycardiomyopathy prospectively, this might account for this unexpected finding. Heart rate reduction by other mechanisms generally appears to have limited benefit in AF-HFrEF patients.
Digoxin is, at best, neutral in terms of clinical outcome in AF patients with HFrEF, though it might provide some symptomatic benefit, while non-dihydropyridine calcium channel blockers are contraindicated in HF [31]. Alternative strategies may be more beneficial. There may be a role for AV nodal ablation and cardiac resynchronisation device implantation; however, no large randomised trials have been conducted to confirm this as yet [32]. Another strategy that has been proposed is AF ablation, with recent data reporting improved outcomes in patients with AF and HFrEF [33]. Indeed, these results are particularly prescient given the recent results of the CASTLE-AF trial [33], as they suggest that persisting with beta-blocker dose uptitration to maximal targets with the aim of lowering heart rate may not provide any mortality benefit in HFrEF patients in AF, and perhaps other strategies such as pulmonary vein isolation, or pacemaker implantation with AV node ablation, may prove to have more prognostic benefit by removing the burden of AF. Our study has some limitations. First, this is a post hoc analysis of a prospective study. However, one of the strengths of the study was that, as well as being observational, the protocol also mandated uptitration of HF therapy, thus adding some of the benefits of a clinical trial element. Second, we only obtained resting heart rhythm and rate at two separate time points. It is possible that patients were in paroxysmal AF at the time of their visit while being in SR for the majority of the interim period, or vice versa. Further insights into the effect of heart rate on prognosis may have been obtained by more frequent heart rate monitoring. Third, we did not have any information on changes in heart rate or beta-blocker dose beyond 9 months, which might have had an impact on clinical outcomes. Additionally, despite the overall size of this study, there was a relatively low number of patients in AF at 9 months; thus, we cannot exclude that interactions may have become significant with larger numbers. Further studies specifically examining beta-blocker uptitration are required to confirm these findings. Finally, due to the number of patients, we did not further stratify the cohort based on the cardio-selectivity of the prescribed beta-blocker. Larger cohorts should be evaluated with the specific aim of determining whether heart rate reduction mediated by cardio-selective beta-blockers is more beneficial in AF patients with HFrEF.

Conclusions
In HFrEF patients in SR, both achieved heart rate and change in heart rate following beta-blocker uptitration were associated with improved outcomes, regardless of heart rate at baseline. Despite a similar increase in beta-blocker dose and reduction in heart rate from baseline in HFrEF patients in AF, achieved heart rate and decrease in heart rate from baseline were only prognostically significant in patients with higher baseline heart rates.
Background: In patients with heart failure with reduced ejection fraction (HFrEF) on sub-optimal doses of beta-blockers, it is conceivable that changes in heart rate following treatment intensification might be important regardless of underlying heart rhythm. We aimed to compare the prognostic significance of both achieved heart rate and change in heart rate following beta-blocker uptitration in patients with HFrEF either in sinus rhythm (SR) or atrial fibrillation (AF). Methods: We performed a post hoc analysis of the BIOSTAT-CHF study. We evaluated 1548 patients with HFrEF (mean age 67 years, 35% AF). Median follow-up was 21 months. Patients were evaluated at baseline and at 9 months. The combined primary outcome was all-cause mortality and heart failure hospitalisation stratified by heart rhythm and heart rate at baseline. Results: Despite similar changes in heart rate and beta-blocker dose, a decrease in heart rate at 9 months was associated with reduced incidence of the primary outcome in both SR and AF patients [HR per 10 bpm decrease-SR: 0.83 (0.75-0.91), p < 0.001; AF: 0.89 (0.81-0.98), p = 0.018], whereas the relationship was less strong for achieved heart rate in AF [HR per 10 bpm higher-SR: 1.26 (1.10-1.46), p = 0.001; AF: 1.08 (0.94-1.23), p = 0.18]. Achieved heart rate at 9 months was only prognostically significant in AF patients with high baseline heart rates (p for interaction 0.017 vs. low). Conclusions: Following beta-blocker uptitration, both achieved and change in heart rate were prognostically significant regardless of starting heart rate in SR, however, they were only significant in AF patients with high baseline heart rate.
Introduction
Heart rate is a risk factor in patients with heart failure with reduced ejection fraction (HFrEF) that, when reduced, provides outcome benefits [1, 2]. However, the benefit of heart rate reduction is less clear in atrial fibrillation (AF). Studies in patients with HFrEF and AF have provided conflicting results, with some suggesting that elevated heart rate is associated with adverse outcome in HFrEF patients in AF, while others found no significant relationship [3–5]. Conceptually, reducing heart rate should have prognostic benefit in HFrEF patients in AF. Randomised controlled trials evaluating rate control strategies in patients with AF have only included small numbers of patients with HFrEF [6]. Additionally, very few studies have evaluated the importance of changes in heart rate over time [7, 8]. Despite the lack of data, current guidelines recommend an optimal heart rate between 60 and 100 bpm in patients with AF and HFrEF, while studies evaluating rate control in patients with AF (but not necessarily HFrEF) suggest that rates up to 110 bpm may be acceptable [6, 9]. One strategy for reducing heart rate is the use of beta-blockers, a mainstay of therapy in HFrEF [9, 10]. Although beta-blockers are prognostically beneficial in patients with HFrEF, it is unclear whether the beta-blocker-mediated reduction in heart rate directly affects prognosis, with several studies reporting conflicting results [11–17]. Furthermore, questions have recently been raised about the prognostic benefits of beta-blocker therapy in HFrEF patients with AF [18, 19]. In particular, there is very little information on whether patients on sub-optimal doses of beta-blockers derive greater benefit from the heart rate reduction associated with increasing beta-blocker therapy [20]. Despite the current uncertainty over the benefits of beta-blockers in HFrEF patients in AF, current guidelines recommend uptitration of beta-blocker therapy to the same target doses irrespective of the underlying heart rhythm. To the best of our knowledge, the relative effects of change in heart rate following intensification of beta-blocker therapy have not been previously examined. Given the frequent co-existence of AF and HFrEF, it is important to determine whether patients in AF derive the same benefit from heart rate reduction and beta-blocker uptitration as those in SR, and whether this effect is modulated by changes in beta-blocker dose. We utilised the systems BIOlogy Study to Tailored Treatment in Chronic Heart Failure (BIOSTAT-CHF) dataset to compare the prognostic importance of changes in heart rate following beta-blocker uptitration in HFrEF patients in AF versus those in sinus rhythm (SR).
Keywords: Heart failure | Heart rate | Atrial fibrillation | Beta-blockers
MeSH terms: Adrenergic beta-Antagonists | Aged | Atrial Fibrillation | Dose-Response Relationship, Drug | Female | Follow-Up Studies | Heart Failure | Heart Rate | Humans | Male | Prognosis | Prospective Studies | Stroke Volume | Treatment Outcome
Long-Term Prognosis in Young Patients with Acute Coronary Syndrome Treated with Percutaneous Coronary Intervention.
33907409
Acute coronary syndrome (ACS) at a young age is uncommon. Limited data regarding the long-term follow-up and prognosis in this population are available. Our objectives were to evaluate the long-term clinical outcomes of patients presenting with ACS at a young age and to assess factors that predict long-term prognosis.
BACKGROUND
A retrospective analysis of consecutive young patients (males below 40 and females below 50 years of age) who were admitted with ACS and underwent percutaneous coronary intervention (PCI) between the years 1997 and 2009. Demographics, clinical characteristics, and clinical outcomes, including major cardiovascular (CV) events and mortality, were analyzed. A multivariable Cox proportional hazards model was used to identify predictors of long-term prognosis.
METHODS
One hundred sixty-five patients were included, with a mean follow-up of 9.1±4.6 years. Most patients were men (88%), and the mean age was 36.8±4.2 years. During follow-up, 15 (9.1%) patients died, 98 (59.4%) had at least one major CV event, 22 (13.3%) had more than two CV events, and the mean number of recurrent CV events was 1.4±1.48 per patient. In multivariate analysis, the strongest predictors of major CV events and/or mortality were coronary intervention without stent insertion (HR 1.77; 95% CI 1.09-2.9), LAD artery involvement (HR 1.59; 95% CI 1.04-2.44) and hypertension (HR 1.6; 95% CI 1.0-2.6).
RESULTS
Patients with ACS at a young age are at high risk for major CV events and/or mortality during long-term follow-up, with a high rate of recurrent CV events. Close follow-up and risk factor management for secondary prevention have a major role, particularly in this population.
CONCLUSION
[ "Acute Coronary Syndrome", "Adult", "Age of Onset", "Angina, Unstable", "Databases, Factual", "Female", "Humans", "Male", "Non-ST Elevated Myocardial Infarction", "Percutaneous Coronary Intervention", "Recurrence", "Retrospective Studies", "Risk Assessment", "Risk Factors", "ST Elevation Myocardial Infarction", "Time Factors", "Treatment Outcome" ]
8064716
Introduction
Coronary artery disease (CAD) is a leading cause of morbidity and mortality worldwide.1,2 Several studies have shown that coronary atherosclerosis begins in the second or third decade of life, with an increasing prevalence with age in both males and females.3–5 However, the clinical manifestations of acute coronary syndrome (ACS) in most cases occur later, during the fifth to seventh decade of life,6,7 and only 2–10% of all patients with ACS are younger than 40 years old.8,9 Young patients with ACS have unique characteristics, with distinct risk factors and clinical manifestations. Prior studies have shown that family history, hypercholesterolemia, sedentary lifestyle, obesity and smoking are common risk factors in young patients with ACS compared with older age groups.8,10–15 Diabetes and smoking in young patients are significant risk factors for recurrent coronary events and interventions, as well as mortality.14,15 In addition, there is an association between ethnicity and geographical location and the incidence of ACS events at a young age.16,17 Compared with an older population, ST-segment elevation myocardial infarction (STEMI) is more common in young patients than non-ST-segment elevation myocardial infarction (NSTEMI).12,13,18 Young ACS patients have a lower incidence of multi-vessel (MVD) and left main (LM) disease,16,17 whereas involvement of the left anterior descending (LAD) artery is more common.14 Previous studies in this population mainly focused on risk factors and the unique clinical characteristics of this group,11–18 while others focused on in-hospital and short-term outcomes and demonstrated improved outcomes in the young ACS population.13,18,19 To date, only a limited number of studies have evaluated the long-term outcomes of this population, most of which were performed before the widespread use of the invasive strategy in patients with ACS.10,11,20 The goal of this study was to evaluate the long-term clinical outcomes of young patients with ACS who underwent percutaneous coronary intervention (PCI) and to elucidate predictive factors affecting long-term prognosis in this population.
null
null
null
null
Discussion
We present an analysis of 165 young patients with ACS who underwent PCI at a young age, with a long-term mean follow-up of 9.1±4.6 years. The main finding of this study is the remarkably high rate of recurrent CV events in the years following the initial ACS. During follow-up, approximately 65% of the patients reached at least one endpoint of mortality or major CV event. The average number of recurrent events per patient was 1.85, with an event rate of 0.3 per patient per year. These percentages are higher than in previously published studies, in which the proportion of recurrent events was less than 30%.21 Moreover, in a more recent study with a shorter follow-up of 3–5 years, only 11.2% of patients younger than 45 years old had recurrent events, with a mortality rate of only 0.9%.14 The longer follow-up in our study and better recording of events may explain the differences between our findings and prior studies. Another possible explanation is that our population may have lower compliance with optimal medical treatment and lifestyle modifications and thus a relatively high recurrent event rate. It might be expected that in a more contemporary revascularization era the prognosis of young ACS patients would be better. However, our results do not support this assumption. One explanation is that the invasive approach mainly influences the short-term outcome, up to 1 year, whereas thereafter factors such as lifestyle and compliance are more dominant for long-term prognosis. Furthermore, an invasive approach may be harmful in patients with poor compliance because of late stent thrombosis. Young patients who experience coronary events may have trouble adhering to medications and risk factor management and are thus exposed to recurrent adverse events.22 Hypertension was present in only 20% of the patients at the first event. However, among patients who died during follow-up or experienced MACCE, the percentages were much higher. We found that patients with hypertension had a significantly shorter time to recurrent events (Figure 2) and that hypertension was the strongest risk factor for recurrent CV events. This may be because hypertension is less well controlled among young patients, who in many cases are not compliant with antihypertensive medications.22 We postulate that early intervention focusing on the treatment of hypertension may reduce the risk of recurrent events. Another finding of this study was that PCI without stent insertion was an independent predictor of worse outcomes. A possible explanation is that patients with complex or diffuse ectatic disease not suitable for stent implantation have a worse prognosis.23 Our study indicates that patients who had a coronary event at a young age are at remarkably high risk for recurrent events during long-term follow-up. While the short-term prognosis of these young patients is relatively good, as they have fewer comorbidities, the long-term prognosis might be unfavorable. Health systems should therefore put more focus on the population of young ACS patients in order to improve compliance and, potentially, prognosis. The present study has several limitations. First, the study is based on a retrospective administrative database that contains discharge-level records and as such is susceptible to reporting errors and missing data. To minimize those errors, we cross-checked our data in several major databases, and we believe that we were able to obtain high-quality long-term data that allowed a consistent analysis. 
Another limitation is the lack of a control group of older patients; these data were not available in this cohort. However, data on the expected outcomes of general ACS populations, both old and young, are abundant, and we used them for reference as discussed. In conclusion, the rate of mortality and recurrent CV events in patients who presented with ACS at a young age is relatively high during long-term follow-up. Hypertension, LAD disease and coronary intervention without stenting are important negative prognostic factors in this population. Early interventions to reduce risk factors and to improve compliance, particularly for hypertensive patients, may lead to a better prognosis in this unique population.
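The hypertension finding in the recurrent-event analysis comes from a multivariate linear model of the annual event rate. As a rough, hedged illustration only (the authors used SAS 9.4; the file name and column names below are hypothetical and not from the study database), an analogous model could be fitted in Python with statsmodels:

```python
# Sketch of a multivariate linear regression for the annual CV event rate,
# analogous to the model described in this record. All names are hypothetical;
# this is not the authors' SAS code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("young_acs_followup.csv")            # hypothetical extract
df["annual_event_rate"] = df["n_events"] / df["followup_years"]

model = smf.ols(
    "annual_event_rate ~ sex + race + htn + dm + family_history_ihd"
    " + smoking + lad_involvement + stent_inserted",
    data=df,
).fit()

# In the record's Table 4 the HTN coefficient is 0.31 (p=0.004); the summary
# below would report the corresponding coefficients for this sketch.
print(model.summary())
```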
[ "Methods", "Population and Data Collection", "Outcome Variables", "Statistical Analysis", "Results", "MACCE and Mortality Analysis", "Recurrent Event Analysis", "Discussion" ]
[ "A retrospective cohort study that was approved by the Institutional Research Ethics Board (IRB) at Hadassah Medical Center approval number 0685-17-HMO). The IRB approved that patients’ consent was not required for this historical retrospective study that includes only de-identified data. Our study complied with the Declaration of Helsinki.\nPopulation and Data Collection We included consecutive patients that were admitted with a diagnosis of first ACS event in young age and underwent PCI at a tertiary medical center between the years 1997–2009. Young patients were defined as ≤40 years for males, and ≤50 years for females at the time of the presenting coronary event.\nDemographic and clinical data were collected from the hospitals’ electronic medical files system including hospitalization summary, cardiac catheterization and echocardiography reports as well as outpatient clinic reports. Demographic variables included age, sex, and ethnic origin. Risk factors variables included hypertension (HTN), diabetes mellitus (DM), dyslipidemia, smoking and family history of ischemic heart disease (IHD) and clinical characteristics included ACS presentation type (unstable angina, NSTEMI, STEMI).\nThe angiographic and PCI data were retrieved from the procedural report and was reviewed by an expert interventional cardiologist. Procedural variables included location of culprit lesion, presence of MVD and PCI type (stent insertion vs plain old balloon angioplasty (POBA)).\nWe included consecutive patients that were admitted with a diagnosis of first ACS event in young age and underwent PCI at a tertiary medical center between the years 1997–2009. Young patients were defined as ≤40 years for males, and ≤50 years for females at the time of the presenting coronary event.\nDemographic and clinical data were collected from the hospitals’ electronic medical files system including hospitalization summary, cardiac catheterization and echocardiography reports as well as outpatient clinic reports. Demographic variables included age, sex, and ethnic origin. Risk factors variables included hypertension (HTN), diabetes mellitus (DM), dyslipidemia, smoking and family history of ischemic heart disease (IHD) and clinical characteristics included ACS presentation type (unstable angina, NSTEMI, STEMI).\nThe angiographic and PCI data were retrieved from the procedural report and was reviewed by an expert interventional cardiologist. Procedural variables included location of culprit lesion, presence of MVD and PCI type (stent insertion vs plain old balloon angioplasty (POBA)).\nOutcome Variables Cardiovascular events were collected from the hospital database and from the patient’s own medical files. Mortality was determined from hospital chart review and by matching identification numbers of patients with the Israeli National Population Register.\nThe clinical outcomes were defined as mortality and major CV event that included hospitalization for the following events: myocardial infarction (MI), cerebrovascular accident (CVA), percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG) and exacerbation of congestive heart failure (CHF).\nCardiovascular events were collected from the hospital database and from the patient’s own medical files. 
Mortality was determined from hospital chart review and by matching identification numbers of patients with the Israeli National Population Register.\nThe clinical outcomes were defined as mortality and major CV event that included hospitalization for the following events: myocardial infarction (MI), cerebrovascular accident (CVA), percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG) and exacerbation of congestive heart failure (CHF).\nStatistical Analysis Differences in baseline characteristics were compared using unpaired t-test for continuous variables, χ2-distribution or Fisher’s exact test for categorical variables. Differences between the frequencies of the different events; mortality and major events according to baseline characteristics of population were compared using χ2-distribution or Fisher’s as appropriate. The event rate over time was displayed using the Kaplan–Meier method, with comparison between groups by Log rank test.\nTo examine the association between baseline characteristics and outcome, a multivariable cox proportional hazard model was used. For the mortality analysis, we used the total cohort, 165 patients as we have reliable information about the mortality date in all patients with matching identification numbers of patients with the Israeli National Population Register. For time for first CV event analysis and for the number of recurrent events analysis, we excluded patients that were lost to complete long-term follow-up. The final analysis for CV events included a cohort of 145 patients. The association between baseline characteristics and the annual event rate was examined by multivariate linear regression.\nFor all analyses, we used SAS software version 9.4 (SAS Institute Inc., Cary, NC). A p<0.05 was considered statistically significant.\nDifferences in baseline characteristics were compared using unpaired t-test for continuous variables, χ2-distribution or Fisher’s exact test for categorical variables. Differences between the frequencies of the different events; mortality and major events according to baseline characteristics of population were compared using χ2-distribution or Fisher’s as appropriate. The event rate over time was displayed using the Kaplan–Meier method, with comparison between groups by Log rank test.\nTo examine the association between baseline characteristics and outcome, a multivariable cox proportional hazard model was used. For the mortality analysis, we used the total cohort, 165 patients as we have reliable information about the mortality date in all patients with matching identification numbers of patients with the Israeli National Population Register. For time for first CV event analysis and for the number of recurrent events analysis, we excluded patients that were lost to complete long-term follow-up. The final analysis for CV events included a cohort of 145 patients. The association between baseline characteristics and the annual event rate was examined by multivariate linear regression.\nFor all analyses, we used SAS software version 9.4 (SAS Institute Inc., Cary, NC). A p<0.05 was considered statistically significant.", "We included consecutive patients that were admitted with a diagnosis of first ACS event in young age and underwent PCI at a tertiary medical center between the years 1997–2009. 
Young patients were defined as ≤40 years for males, and ≤50 years for females at the time of the presenting coronary event.\nDemographic and clinical data were collected from the hospitals’ electronic medical files system including hospitalization summary, cardiac catheterization and echocardiography reports as well as outpatient clinic reports. Demographic variables included age, sex, and ethnic origin. Risk factors variables included hypertension (HTN), diabetes mellitus (DM), dyslipidemia, smoking and family history of ischemic heart disease (IHD) and clinical characteristics included ACS presentation type (unstable angina, NSTEMI, STEMI).\nThe angiographic and PCI data were retrieved from the procedural report and was reviewed by an expert interventional cardiologist. Procedural variables included location of culprit lesion, presence of MVD and PCI type (stent insertion vs plain old balloon angioplasty (POBA)).", "Cardiovascular events were collected from the hospital database and from the patient’s own medical files. Mortality was determined from hospital chart review and by matching identification numbers of patients with the Israeli National Population Register.\nThe clinical outcomes were defined as mortality and major CV event that included hospitalization for the following events: myocardial infarction (MI), cerebrovascular accident (CVA), percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG) and exacerbation of congestive heart failure (CHF).", "Differences in baseline characteristics were compared using unpaired t-test for continuous variables, χ2-distribution or Fisher’s exact test for categorical variables. Differences between the frequencies of the different events; mortality and major events according to baseline characteristics of population were compared using χ2-distribution or Fisher’s as appropriate. The event rate over time was displayed using the Kaplan–Meier method, with comparison between groups by Log rank test.\nTo examine the association between baseline characteristics and outcome, a multivariable cox proportional hazard model was used. For the mortality analysis, we used the total cohort, 165 patients as we have reliable information about the mortality date in all patients with matching identification numbers of patients with the Israeli National Population Register. For time for first CV event analysis and for the number of recurrent events analysis, we excluded patients that were lost to complete long-term follow-up. The final analysis for CV events included a cohort of 145 patients. The association between baseline characteristics and the annual event rate was examined by multivariate linear regression.\nFor all analyses, we used SAS software version 9.4 (SAS Institute Inc., Cary, NC). A p<0.05 was considered statistically significant.", "A total of 165 patients who had ACS and PCI at a young age were included in the study. The mean follow-up was 9.1±4.6 years and median follow-up was 10 years (IQR: 5.67, 10.75). The baseline and clinical characteristics of the study population are summarized in Table 1. 
Most of the patients (88%) were males and 95% of the patients had at least one risk factor, the most common risk factor in our population was smoking.Table 1Baseline CharacteristicsVariableN=165Age (years) mean ± SD36.81 ± 4.24Male sex, n (%)145 (88)Origin, n (%) Arabs83 (50) Jews82 (50)Risk factors, n (%) Family history of IHD76 (46) Hypertension33 (20) Hyperlipidemia94 (57) Diabetes mellitus25 (15) Smoking110 (67)ACS presentation, n (%) Unstable angina54 (33) NSTEMI31 (19) STEMI80 (48)LV functiona (assessed by echocardiogram), n (%) Normal49 (46.5) Mild to moderately reduced47 (45) Severely reduced9 (8.5)MVD70 (42)Culprit lesion (%) LM1 (0.6) LAD73 (44) LCX27 (16) RCA47 (29) Indeterminate17 (28)PCI ≥ 1 stent130 (82)Note:\na105 Patients with available data.Abbreviations: ACS, acute coronary syndrome; IHD, ischemic heart disease; IQR, interquartile; NSTEMI, non-ST elevation segment myocardial Infraction; STEMI, ST elevation segment myocardial Infraction; LV, left ventricle; LAD, left anterior descending; LCX, left circumflex; LM, left main; MVD, multi-vessel disease; PCI, percutaneous coronary intervention; RCA, right coronary artery.\n\nBaseline Characteristics\nNote:\na105 Patients with available data.\nAbbreviations: ACS, acute coronary syndrome; IHD, ischemic heart disease; IQR, interquartile; NSTEMI, non-ST elevation segment myocardial Infraction; STEMI, ST elevation segment myocardial Infraction; LV, left ventricle; LAD, left anterior descending; LCX, left circumflex; LM, left main; MVD, multi-vessel disease; PCI, percutaneous coronary intervention; RCA, right coronary artery.\nNearly half of the patients (48%) presented with ST-segment elevation myocardial infraction (STEMI) and 52% non -STE-ACS. Surprisingly, the proportion of unstable angina was greater than that of NSTEMI (33% vs 19%, respectively); this may be dependent on the definition of UAP/MI that was not based on the high-sensitive troponin assays which were not available at the years of the study. The most common vessel involved was the LAD (44.2%), while 42% had multi-vessel coronary disease (MVD). Most patients had good left ventricular function (Table 1).\nMACCE and Mortality Analysis Fifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). 
Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nFigure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nPredictors for Mortality and/or Major CV Events (Univariate Analysis)\nAbbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nKaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).\nKaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nIn the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\n\nMultivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality\nAbbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\nFifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). 
Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nFigure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nPredictors for Mortality and/or Major CV Events (Univariate Analysis)\nAbbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nKaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).\nKaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nIn the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\n\nMultivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality\nAbbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\nRecurrent Event Analysis The recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. 
In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.\n\nLinear Model for Recurrent Annual Event (N=145)\nAbbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.\nThe recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.\n\nLinear Model for Recurrent Annual Event (N=145)\nAbbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.", "Fifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). 
Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nFigure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nPredictors for Mortality and/or Major CV Events (Univariate Analysis)\nAbbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nKaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).\nKaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nIn the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\n\nMultivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality\nAbbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.", "The recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. 
In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.\n\nLinear Model for Recurrent Annual Event (N=145)\nAbbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.", "We present an analysis of 165 young patients with ACS who underwent PCI at a young age, with a long-term mean follow-up of 9.1±4.6 years. The main finding of this study is the remarkably high rate of recurrent CV events in the years following initial ACS. During follow-up, approximately 65% of the patients had at least one endpoint of mortality or major CV event. The average number of recurrent events per patient was 1.85, with an event rate of 0.3 per patient per year. These percentages are higher than previously published studies, in which the percentage of recurrent events was less than 30%.21 Moreover, in a more recent study with shorter follow-up of 3–5 years, only 11.2% patients younger than 45 years old had recurrent events with mortality rate of only 0.9%.14\nThe longer follow-up in our study and better recoding of events may explain the differences between our findings and prior studies. Another possible explanation is that our population may have lower compliance to optimal medical treatments and lifestyle modifications and thus a relatively high recurrent event rate.\nIt may be expected that in a more contemporary revascularization era the prognosis of young ACS patients will be better. However, our results do not support this assumption. One explanation is that the invasive approach influences mainly the short-term outcome, up to 1 year but thereafter factors such as lifestyle and compliance are more dominant for long-term prognosis. Furthermore, an invasive approach may be harmful in patients with poor compliance due to late stent thrombosis. Young patients who experience coronary events may have trouble to adhere with medications and risk factors’ management and thus are exposed to recurrent adverse events.22 Hypertension prevalence in the first event was found only in 20% of the patients. However, in patients who died during follow-up or experienced MACCE, the percentages were much higher. We found that patients with hypertension had significantly shorter time to recurrent event (Figure 2). We found that hypertension was the strongest risk factor for recurrent CV events. This may be because hypertension is a less controlled among young patients who in many cases do not compliant with antihypertensive medications.22 We postulate that early intervention focusing on treatment of hypertension may reduce the risk for recurrent events.\nAnother finding in this study was that PCI without stent insertion was found to be an independent predictive factor for worse outcomes. Possible explanation may be the fact that patients who had complex or diffuse ectatic disease not suitable for stent implantation, have worse prognosis.23\nOur study indicates that patients who had a coronary event at a young age are in a remarkably high risk for recurrent event during long-term follow-up. 
While the short-term prognosis of these young patients is relatively good as they have less comorbidities, the long-term prognosis might be unfavorable. It seems that health systems should put more focus on the population of young ACS patients in order to improve compliance and potentially, prognosis.\nThe present study has several limitations. First, the study based on a retrospective administrative database that contains discharge-level records and as such is susceptible to reporting errors and missing data. In order to minimize those errors, we have crosschecked our data in several major databases and we believe that we were able to obtain high-quality long-term data which allowed a consistent analysis. Another limitation is the lack of a control group of older patients, these data were not available in this cohort. However, data on the expected outcome of general ACS populations both old and young are abundant and we used it for reference as discussed.\nIn conclusion, the rate of mortality and recurrent CV events in patients who presented with ACS at a young age is relatively high during long-term follow-up. Hypertension, LAD disease and coronary intervention without stenting are important negative prognostic factors in this population. Early interventions to reduce risk factors and to improve compliance, particularly for hypertensive patients may lead to a better prognosis in this unique population." ]
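The methods and results reproduced above describe Kaplan–Meier survival curves compared between groups with the log-rank test (for example, patients with versus without hypertension, as in Figure 2). A minimal sketch of that kind of comparison, assuming a Python/lifelines workflow with hypothetical column names (the original analysis used SAS 9.4):

```python
# Illustrative Kaplan-Meier curves and log-rank test, hypertension vs none,
# mirroring the comparison described in the record; columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("young_acs_cohort.csv")          # hypothetical extract
htn = df["htn"] == 1

ax = plt.subplot(111)
for label, mask in [("Hypertension", htn), ("No hypertension", ~htn)]:
    kmf = KaplanMeierFitter()
    kmf.fit(df.loc[mask, "time_to_event_years"],
            df.loc[mask, "event"], label=label)
    kmf.plot_survival_function(ax=ax)             # cumulative event-free survival

result = logrank_test(df.loc[htn, "time_to_event_years"],
                      df.loc[~htn, "time_to_event_years"],
                      event_observed_A=df.loc[htn, "event"],
                      event_observed_B=df.loc[~htn, "event"])
print(result.p_value)                             # log-rank comparison of the two curves
plt.show()
```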
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Population and Data Collection", "Outcome Variables", "Statistical Analysis", "Results", "MACCE and Mortality Analysis", "Recurrent Event Analysis", "Discussion" ]
[ "Coronary artery disease (CAD) is a leading cause of morbidity and mortality worldwide.1,2 Several studies have shown that coronary atherosclerosis begins in the second or third decade of life with an increased prevalence with age in both males and females.3–5 However, the clinical manifestations of acute coronary syndrome (ACS) in most cases occur later, during the fifth to seventh decade of life6,7 and only 2–10% of all patients with ACS, are younger than 40 years old.8,9\nYoung patients with ACS have unique characteristics with distinct risk factors and clinical manifestations. Prior studies have shown that family history, hypercholesterolemia, sedentary lifestyles, obesity and smoking were common risk factors in young patients with ACS compared to older age groups.8,10–15 Diabetes and smoking in young patients is a significant risk factor for recurrent coronary events and interventions, as well as mortality.14,15 In addition, there is an association between ethnicity and geographical location and the incidence of ACS events at a young age.16,17\nWhen compared to an older population, ST-segment elevation myocardial infarction (STEMI) is more common in young patients compared to non-ST-segment elevation myocardial infraction (NSTEMI).12,13,18 Young ACS patients have a lower incidence of multi-vessel (MVD) and left main (LM) disease,16,17 whereas involvement of the left anterior descending (LAD) artery is more common.14\nPrevious studies in this population mainly focused on risk factors and the unique clinical characteristics of this group,11–18 while others focused on in-hospital and short-term outcomes which demonstrated improved outcomes in the young ACS population.13,18,19 To date, only a limited number of studies have evaluated the long-term outcomes of this population, most of which were performed prior to the widespread use of the invasive strategy in patients with ACS.10,11,20 The goal of this study was to evaluate the long-term clinical outcomes of young patients with ACS who underwent percutaneous coronary intervention (PCI) and to elucidate predictive factors affecting long-term prognosis in this population.", "A retrospective cohort study that was approved by the Institutional Research Ethics Board (IRB) at Hadassah Medical Center approval number 0685-17-HMO). The IRB approved that patients’ consent was not required for this historical retrospective study that includes only de-identified data. Our study complied with the Declaration of Helsinki.\nPopulation and Data Collection We included consecutive patients that were admitted with a diagnosis of first ACS event in young age and underwent PCI at a tertiary medical center between the years 1997–2009. Young patients were defined as ≤40 years for males, and ≤50 years for females at the time of the presenting coronary event.\nDemographic and clinical data were collected from the hospitals’ electronic medical files system including hospitalization summary, cardiac catheterization and echocardiography reports as well as outpatient clinic reports. Demographic variables included age, sex, and ethnic origin. Risk factors variables included hypertension (HTN), diabetes mellitus (DM), dyslipidemia, smoking and family history of ischemic heart disease (IHD) and clinical characteristics included ACS presentation type (unstable angina, NSTEMI, STEMI).\nThe angiographic and PCI data were retrieved from the procedural report and was reviewed by an expert interventional cardiologist. 
Procedural variables included location of culprit lesion, presence of MVD and PCI type (stent insertion vs plain old balloon angioplasty (POBA)).\nWe included consecutive patients that were admitted with a diagnosis of first ACS event in young age and underwent PCI at a tertiary medical center between the years 1997–2009. Young patients were defined as ≤40 years for males, and ≤50 years for females at the time of the presenting coronary event.\nDemographic and clinical data were collected from the hospitals’ electronic medical files system including hospitalization summary, cardiac catheterization and echocardiography reports as well as outpatient clinic reports. Demographic variables included age, sex, and ethnic origin. Risk factors variables included hypertension (HTN), diabetes mellitus (DM), dyslipidemia, smoking and family history of ischemic heart disease (IHD) and clinical characteristics included ACS presentation type (unstable angina, NSTEMI, STEMI).\nThe angiographic and PCI data were retrieved from the procedural report and was reviewed by an expert interventional cardiologist. Procedural variables included location of culprit lesion, presence of MVD and PCI type (stent insertion vs plain old balloon angioplasty (POBA)).\nOutcome Variables Cardiovascular events were collected from the hospital database and from the patient’s own medical files. Mortality was determined from hospital chart review and by matching identification numbers of patients with the Israeli National Population Register.\nThe clinical outcomes were defined as mortality and major CV event that included hospitalization for the following events: myocardial infarction (MI), cerebrovascular accident (CVA), percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG) and exacerbation of congestive heart failure (CHF).\nCardiovascular events were collected from the hospital database and from the patient’s own medical files. Mortality was determined from hospital chart review and by matching identification numbers of patients with the Israeli National Population Register.\nThe clinical outcomes were defined as mortality and major CV event that included hospitalization for the following events: myocardial infarction (MI), cerebrovascular accident (CVA), percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG) and exacerbation of congestive heart failure (CHF).\nStatistical Analysis Differences in baseline characteristics were compared using unpaired t-test for continuous variables, χ2-distribution or Fisher’s exact test for categorical variables. Differences between the frequencies of the different events; mortality and major events according to baseline characteristics of population were compared using χ2-distribution or Fisher’s as appropriate. The event rate over time was displayed using the Kaplan–Meier method, with comparison between groups by Log rank test.\nTo examine the association between baseline characteristics and outcome, a multivariable cox proportional hazard model was used. For the mortality analysis, we used the total cohort, 165 patients as we have reliable information about the mortality date in all patients with matching identification numbers of patients with the Israeli National Population Register. For time for first CV event analysis and for the number of recurrent events analysis, we excluded patients that were lost to complete long-term follow-up. The final analysis for CV events included a cohort of 145 patients. 
The association between baseline characteristics and the annual event rate was examined by multivariate linear regression.\nFor all analyses, we used SAS software version 9.4 (SAS Institute Inc., Cary, NC). A p<0.05 was considered statistically significant.\nDifferences in baseline characteristics were compared using unpaired t-test for continuous variables, χ2-distribution or Fisher’s exact test for categorical variables. Differences between the frequencies of the different events; mortality and major events according to baseline characteristics of population were compared using χ2-distribution or Fisher’s as appropriate. The event rate over time was displayed using the Kaplan–Meier method, with comparison between groups by Log rank test.\nTo examine the association between baseline characteristics and outcome, a multivariable cox proportional hazard model was used. For the mortality analysis, we used the total cohort, 165 patients as we have reliable information about the mortality date in all patients with matching identification numbers of patients with the Israeli National Population Register. For time for first CV event analysis and for the number of recurrent events analysis, we excluded patients that were lost to complete long-term follow-up. The final analysis for CV events included a cohort of 145 patients. The association between baseline characteristics and the annual event rate was examined by multivariate linear regression.\nFor all analyses, we used SAS software version 9.4 (SAS Institute Inc., Cary, NC). A p<0.05 was considered statistically significant.", "We included consecutive patients that were admitted with a diagnosis of first ACS event in young age and underwent PCI at a tertiary medical center between the years 1997–2009. Young patients were defined as ≤40 years for males, and ≤50 years for females at the time of the presenting coronary event.\nDemographic and clinical data were collected from the hospitals’ electronic medical files system including hospitalization summary, cardiac catheterization and echocardiography reports as well as outpatient clinic reports. Demographic variables included age, sex, and ethnic origin. Risk factors variables included hypertension (HTN), diabetes mellitus (DM), dyslipidemia, smoking and family history of ischemic heart disease (IHD) and clinical characteristics included ACS presentation type (unstable angina, NSTEMI, STEMI).\nThe angiographic and PCI data were retrieved from the procedural report and was reviewed by an expert interventional cardiologist. Procedural variables included location of culprit lesion, presence of MVD and PCI type (stent insertion vs plain old balloon angioplasty (POBA)).", "Cardiovascular events were collected from the hospital database and from the patient’s own medical files. Mortality was determined from hospital chart review and by matching identification numbers of patients with the Israeli National Population Register.\nThe clinical outcomes were defined as mortality and major CV event that included hospitalization for the following events: myocardial infarction (MI), cerebrovascular accident (CVA), percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG) and exacerbation of congestive heart failure (CHF).", "Differences in baseline characteristics were compared using unpaired t-test for continuous variables, χ2-distribution or Fisher’s exact test for categorical variables. 
Differences between the frequencies of the different events; mortality and major events according to baseline characteristics of population were compared using χ2-distribution or Fisher’s as appropriate. The event rate over time was displayed using the Kaplan–Meier method, with comparison between groups by Log rank test.\nTo examine the association between baseline characteristics and outcome, a multivariable cox proportional hazard model was used. For the mortality analysis, we used the total cohort, 165 patients as we have reliable information about the mortality date in all patients with matching identification numbers of patients with the Israeli National Population Register. For time for first CV event analysis and for the number of recurrent events analysis, we excluded patients that were lost to complete long-term follow-up. The final analysis for CV events included a cohort of 145 patients. The association between baseline characteristics and the annual event rate was examined by multivariate linear regression.\nFor all analyses, we used SAS software version 9.4 (SAS Institute Inc., Cary, NC). A p<0.05 was considered statistically significant.", "A total of 165 patients who had ACS and PCI at a young age were included in the study. The mean follow-up was 9.1±4.6 years and median follow-up was 10 years (IQR: 5.67, 10.75). The baseline and clinical characteristics of the study population are summarized in Table 1. Most of the patients (88%) were males and 95% of the patients had at least one risk factor, the most common risk factor in our population was smoking.Table 1Baseline CharacteristicsVariableN=165Age (years) mean ± SD36.81 ± 4.24Male sex, n (%)145 (88)Origin, n (%) Arabs83 (50) Jews82 (50)Risk factors, n (%) Family history of IHD76 (46) Hypertension33 (20) Hyperlipidemia94 (57) Diabetes mellitus25 (15) Smoking110 (67)ACS presentation, n (%) Unstable angina54 (33) NSTEMI31 (19) STEMI80 (48)LV functiona (assessed by echocardiogram), n (%) Normal49 (46.5) Mild to moderately reduced47 (45) Severely reduced9 (8.5)MVD70 (42)Culprit lesion (%) LM1 (0.6) LAD73 (44) LCX27 (16) RCA47 (29) Indeterminate17 (28)PCI ≥ 1 stent130 (82)Note:\na105 Patients with available data.Abbreviations: ACS, acute coronary syndrome; IHD, ischemic heart disease; IQR, interquartile; NSTEMI, non-ST elevation segment myocardial Infraction; STEMI, ST elevation segment myocardial Infraction; LV, left ventricle; LAD, left anterior descending; LCX, left circumflex; LM, left main; MVD, multi-vessel disease; PCI, percutaneous coronary intervention; RCA, right coronary artery.\n\nBaseline Characteristics\nNote:\na105 Patients with available data.\nAbbreviations: ACS, acute coronary syndrome; IHD, ischemic heart disease; IQR, interquartile; NSTEMI, non-ST elevation segment myocardial Infraction; STEMI, ST elevation segment myocardial Infraction; LV, left ventricle; LAD, left anterior descending; LCX, left circumflex; LM, left main; MVD, multi-vessel disease; PCI, percutaneous coronary intervention; RCA, right coronary artery.\nNearly half of the patients (48%) presented with ST-segment elevation myocardial infraction (STEMI) and 52% non -STE-ACS. Surprisingly, the proportion of unstable angina was greater than that of NSTEMI (33% vs 19%, respectively); this may be dependent on the definition of UAP/MI that was not based on the high-sensitive troponin assays which were not available at the years of the study. The most common vessel involved was the LAD (44.2%), while 42% had multi-vessel coronary disease (MVD). 
Most patients had good left ventricular function (Table 1).\nMACCE and Mortality Analysis Fifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nFigure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nPredictors for Mortality and/or Major CV Events (Univariate Analysis)\nAbbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nKaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).\nKaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nIn the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\n\nMultivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality\nAbbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\nFifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). 
Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nFigure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nPredictors for Mortality and/or Major CV Events (Univariate Analysis)\nAbbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nKaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).\nKaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nIn the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\n\nMultivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality\nAbbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\nRecurrent Event Analysis The recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. 
In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.\n\nLinear Model for Recurrent Annual Event (N=145)\nAbbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.\nThe recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.\n\nLinear Model for Recurrent Annual Event (N=145)\nAbbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.", "Fifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). 
Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nFigure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nPredictors for Mortality and/or Major CV Events (Univariate Analysis)\nAbbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction.\nKaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).\nKaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension.\nIn the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.\n\nMultivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality\nAbbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease.", "The recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. 
In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.\n\nLinear Model for Recurrent Annual Event (N=145)\nAbbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending.", "We present an analysis of 165 young patients with ACS who underwent PCI at a young age, with a long-term mean follow-up of 9.1±4.6 years. The main finding of this study is the remarkably high rate of recurrent CV events in the years following initial ACS. During follow-up, approximately 65% of the patients had at least one endpoint of mortality or major CV event. The average number of recurrent events per patient was 1.85, with an event rate of 0.3 per patient per year. These percentages are higher than previously published studies, in which the percentage of recurrent events was less than 30%.21 Moreover, in a more recent study with shorter follow-up of 3–5 years, only 11.2% patients younger than 45 years old had recurrent events with mortality rate of only 0.9%.14\nThe longer follow-up in our study and better recoding of events may explain the differences between our findings and prior studies. Another possible explanation is that our population may have lower compliance to optimal medical treatments and lifestyle modifications and thus a relatively high recurrent event rate.\nIt may be expected that in a more contemporary revascularization era the prognosis of young ACS patients will be better. However, our results do not support this assumption. One explanation is that the invasive approach influences mainly the short-term outcome, up to 1 year but thereafter factors such as lifestyle and compliance are more dominant for long-term prognosis. Furthermore, an invasive approach may be harmful in patients with poor compliance due to late stent thrombosis. Young patients who experience coronary events may have trouble to adhere with medications and risk factors’ management and thus are exposed to recurrent adverse events.22 Hypertension prevalence in the first event was found only in 20% of the patients. However, in patients who died during follow-up or experienced MACCE, the percentages were much higher. We found that patients with hypertension had significantly shorter time to recurrent event (Figure 2). We found that hypertension was the strongest risk factor for recurrent CV events. This may be because hypertension is a less controlled among young patients who in many cases do not compliant with antihypertensive medications.22 We postulate that early intervention focusing on treatment of hypertension may reduce the risk for recurrent events.\nAnother finding in this study was that PCI without stent insertion was found to be an independent predictive factor for worse outcomes. Possible explanation may be the fact that patients who had complex or diffuse ectatic disease not suitable for stent implantation, have worse prognosis.23\nOur study indicates that patients who had a coronary event at a young age are in a remarkably high risk for recurrent event during long-term follow-up. 
While the short-term prognosis of these young patients is relatively good because they have fewer comorbidities, the long-term prognosis might be unfavorable. It seems that health systems should put more focus on the population of young ACS patients in order to improve compliance and, potentially, prognosis.
The present study has several limitations. First, the study is based on a retrospective administrative database that contains discharge-level records and as such is susceptible to reporting errors and missing data. To minimize those errors, we crosschecked our data against several major databases, and we believe that we were able to obtain high-quality long-term data that allowed a consistent analysis. Another limitation is the lack of a control group of older patients; these data were not available in this cohort. However, data on the expected outcomes of general ACS populations, both old and young, are abundant, and we used them for reference as discussed.
In conclusion, the rate of mortality and recurrent CV events in patients who presented with ACS at a young age is relatively high during long-term follow-up. Hypertension, LAD disease and coronary intervention without stenting are important negative prognostic factors in this population. Early interventions to reduce risk factors and to improve compliance, particularly for hypertensive patients, may lead to a better prognosis in this unique population." ]
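As a rough illustration of the survival analysis behind the Kaplan–Meier curves and the Cox model presented above: the authors report using SAS 9.4, so the following is only a hedged Python sketch with the lifelines library. The data frame, column names (time_years, event, htn, lad, stent) and values are hypothetical, not the study dataset.

```python
# Hedged sketch of a Kaplan-Meier curve, a log-rank comparison, and a multivariable
# Cox proportional hazards model, in the spirit of the analysis above.
# Data and column names are illustrative only; the original analysis used SAS 9.4.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy per-patient data: follow-up time (years), event indicator, binary covariates.
df = pd.DataFrame({
    "time_years": [10.2, 3.5, 7.8, 1.2, 9.9, 6.4, 4.1, 8.7, 2.9, 10.8],
    "event":      [0,    1,   1,   1,   0,   1,   0,   1,   1,   0],
    "htn":        [0,    1,   1,   0,   1,   1,   0,   1,   0,   0],
    "lad":        [1,    1,   0,   1,   0,   1,   0,   0,   1,   0],
    "stent":      [1,    0,   1,   0,   1,   1,   1,   1,   1,   0],
})

# Kaplan-Meier estimate of event-free survival (as in Figures 1-2).
kmf = KaplanMeierFitter()
kmf.fit(durations=df["time_years"], event_observed=df["event"], label="all patients")
print(kmf.survival_function_)

# Log-rank comparison of hypertensive vs non-hypertensive patients.
htn, no_htn = df[df["htn"] == 1], df[df["htn"] == 0]
result = logrank_test(htn["time_years"], no_htn["time_years"],
                      event_observed_A=htn["event"], event_observed_B=no_htn["event"])
print(result.p_value)

# Multivariable Cox proportional hazards model over the binary covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs and p-values
```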
[ "intro", null, null, null, null, null, null, null, null ]
[ "acute coronary syndrome", "ACS", "NSTEMI", "STEMI", "young population", "outcomes" ]
Introduction: Coronary artery disease (CAD) is a leading cause of morbidity and mortality worldwide.1,2 Several studies have shown that coronary atherosclerosis begins in the second or third decade of life with an increased prevalence with age in both males and females.3–5 However, the clinical manifestations of acute coronary syndrome (ACS) in most cases occur later, during the fifth to seventh decade of life6,7 and only 2–10% of all patients with ACS, are younger than 40 years old.8,9 Young patients with ACS have unique characteristics with distinct risk factors and clinical manifestations. Prior studies have shown that family history, hypercholesterolemia, sedentary lifestyles, obesity and smoking were common risk factors in young patients with ACS compared to older age groups.8,10–15 Diabetes and smoking in young patients is a significant risk factor for recurrent coronary events and interventions, as well as mortality.14,15 In addition, there is an association between ethnicity and geographical location and the incidence of ACS events at a young age.16,17 When compared to an older population, ST-segment elevation myocardial infarction (STEMI) is more common in young patients compared to non-ST-segment elevation myocardial infraction (NSTEMI).12,13,18 Young ACS patients have a lower incidence of multi-vessel (MVD) and left main (LM) disease,16,17 whereas involvement of the left anterior descending (LAD) artery is more common.14 Previous studies in this population mainly focused on risk factors and the unique clinical characteristics of this group,11–18 while others focused on in-hospital and short-term outcomes which demonstrated improved outcomes in the young ACS population.13,18,19 To date, only a limited number of studies have evaluated the long-term outcomes of this population, most of which were performed prior to the widespread use of the invasive strategy in patients with ACS.10,11,20 The goal of this study was to evaluate the long-term clinical outcomes of young patients with ACS who underwent percutaneous coronary intervention (PCI) and to elucidate predictive factors affecting long-term prognosis in this population. Methods: A retrospective cohort study that was approved by the Institutional Research Ethics Board (IRB) at Hadassah Medical Center approval number 0685-17-HMO). The IRB approved that patients’ consent was not required for this historical retrospective study that includes only de-identified data. Our study complied with the Declaration of Helsinki. Population and Data Collection We included consecutive patients that were admitted with a diagnosis of first ACS event in young age and underwent PCI at a tertiary medical center between the years 1997–2009. Young patients were defined as ≤40 years for males, and ≤50 years for females at the time of the presenting coronary event. Demographic and clinical data were collected from the hospitals’ electronic medical files system including hospitalization summary, cardiac catheterization and echocardiography reports as well as outpatient clinic reports. Demographic variables included age, sex, and ethnic origin. Risk factors variables included hypertension (HTN), diabetes mellitus (DM), dyslipidemia, smoking and family history of ischemic heart disease (IHD) and clinical characteristics included ACS presentation type (unstable angina, NSTEMI, STEMI). The angiographic and PCI data were retrieved from the procedural report and was reviewed by an expert interventional cardiologist. 
Procedural variables included the location of the culprit lesion, the presence of MVD, and PCI type (stent insertion vs plain old balloon angioplasty (POBA)).

Outcome Variables
Cardiovascular events were collected from the hospital database and from the patients' own medical files. Mortality was determined from hospital chart review and by matching patient identification numbers with the Israeli National Population Register. The clinical outcomes were defined as mortality and major CV events, the latter comprising hospitalization for the following events: myocardial infarction (MI), cerebrovascular accident (CVA), percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG) and exacerbation of congestive heart failure (CHF).

Statistical Analysis
Differences in baseline characteristics were compared using the unpaired t-test for continuous variables and the χ2 or Fisher's exact test for categorical variables. Differences in the frequencies of the different events (mortality and major events) according to baseline characteristics of the population were compared using the χ2 or Fisher's exact test, as appropriate. The event rate over time was displayed using the Kaplan–Meier method, with comparison between groups by the log-rank test. To examine the association between baseline characteristics and outcome, a multivariable Cox proportional hazards model was used. For the mortality analysis, we used the total cohort of 165 patients, as we had reliable information on the date of death for all patients by matching patient identification numbers with the Israeli National Population Register. For the time-to-first-CV-event analysis and for the recurrent-event analysis, we excluded patients who were lost to long-term follow-up; the final analysis for CV events therefore included a cohort of 145 patients. The association between baseline characteristics and the annual event rate was examined by multivariate linear regression. For all analyses, we used SAS software version 9.4 (SAS Institute Inc., Cary, NC). A p<0.05 was considered statistically significant. Population and Data Collection: We included consecutive patients who were admitted with a diagnosis of a first ACS event at a young age and underwent PCI at a tertiary medical center between the years 1997–2009. Young patients were defined as ≤40 years for males and ≤50 years for females at the time of the presenting coronary event. Demographic and clinical data were collected from the hospital's electronic medical records system, including hospitalization summaries, cardiac catheterization and echocardiography reports, as well as outpatient clinic reports. Demographic variables included age, sex, and ethnic origin. Risk factor variables included hypertension (HTN), diabetes mellitus (DM), dyslipidemia, smoking and family history of ischemic heart disease (IHD), and clinical characteristics included ACS presentation type (unstable angina, NSTEMI, STEMI). The angiographic and PCI data were retrieved from the procedural report and reviewed by an expert interventional cardiologist. Procedural variables included the location of the culprit lesion, the presence of MVD, and PCI type (stent insertion vs plain old balloon angioplasty (POBA)). Outcome Variables: Cardiovascular events were collected from the hospital database and from the patients' own medical files. Mortality was determined from hospital chart review and by matching patient identification numbers with the Israeli National Population Register. The clinical outcomes were defined as mortality and major CV events, the latter comprising hospitalization for the following events: myocardial infarction (MI), cerebrovascular accident (CVA), percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG) and exacerbation of congestive heart failure (CHF). Statistical Analysis: Differences in baseline characteristics were compared using the unpaired t-test for continuous variables and the χ2 or Fisher's exact test for categorical variables. 
Differences in the frequencies of the different events (mortality and major events) according to baseline characteristics of the population were compared using the χ2 or Fisher's exact test, as appropriate. The event rate over time was displayed using the Kaplan–Meier method, with comparison between groups by the log-rank test. To examine the association between baseline characteristics and outcome, a multivariable Cox proportional hazards model was used. For the mortality analysis, we used the total cohort of 165 patients, as we had reliable information on the date of death for all patients by matching patient identification numbers with the Israeli National Population Register. For the time-to-first-CV-event analysis and for the recurrent-event analysis, we excluded patients who were lost to long-term follow-up. The final analysis for CV events included a cohort of 145 patients. The association between baseline characteristics and the annual event rate was examined by multivariate linear regression. For all analyses, we used SAS software version 9.4 (SAS Institute Inc., Cary, NC). A p<0.05 was considered statistically significant. Results: A total of 165 patients who had ACS and PCI at a young age were included in the study. The mean follow-up was 9.1±4.6 years and the median follow-up was 10 years (IQR: 5.67, 10.75). The baseline and clinical characteristics of the study population are summarized in Table 1. Most of the patients (88%) were males, and 95% of the patients had at least one risk factor; the most common risk factor in our population was smoking.

Table 1. Baseline Characteristics (N=165)
Age (years), mean ± SD: 36.81 ± 4.24
Male sex, n (%): 145 (88)
Origin, n (%): Arabs 83 (50); Jews 82 (50)
Risk factors, n (%): Family history of IHD 76 (46); Hypertension 33 (20); Hyperlipidemia 94 (57); Diabetes mellitus 25 (15); Smoking 110 (67)
ACS presentation, n (%): Unstable angina 54 (33); NSTEMI 31 (19); STEMI 80 (48)
LV function (assessed by echocardiogram)a, n (%): Normal 49 (46.5); Mild to moderately reduced 47 (45); Severely reduced 9 (8.5)
MVD, n (%): 70 (42)
Culprit lesion, n (%): LM 1 (0.6); LAD 73 (44); LCX 27 (16); RCA 47 (29); Indeterminate 17 (28)
PCI with ≥1 stent, n (%): 130 (82)
Note: a105 patients with available data.
Abbreviations: ACS, acute coronary syndrome; IHD, ischemic heart disease; IQR, interquartile range; NSTEMI, non-ST-segment elevation myocardial infarction; STEMI, ST-segment elevation myocardial infarction; LV, left ventricle; LAD, left anterior descending; LCX, left circumflex; LM, left main; MVD, multi-vessel disease; PCI, percutaneous coronary intervention; RCA, right coronary artery.

Nearly half of the patients (48%) presented with ST-segment elevation myocardial infarction (STEMI) and 52% with non-ST-elevation ACS. Surprisingly, the proportion of unstable angina was greater than that of NSTEMI (33% vs 19%, respectively); this may be related to the definition of UAP/MI, which was not based on high-sensitivity troponin assays, as these were not available during the study years. The most common vessel involved was the LAD (44.2%), while 42% had multi-vessel coronary disease (MVD). 
Most patients had good left ventricular function (Table 1). MACCE and Mortality Analysis Fifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction. Figure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension. Predictors for Mortality and/or Major CV Events (Univariate Analysis) Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction. Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B). Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension. In the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease. Multivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease. Fifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). 
Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction. Figure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension. Predictors for Mortality and/or Major CV Events (Univariate Analysis) Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction. Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B). Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension. In the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease. Multivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease. Recurrent Event Analysis The recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending. 
Linear Model for Recurrent Annual Event (N=145) Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending. The recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending. Linear Model for Recurrent Annual Event (N=145) Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending. MACCE and Mortality Analysis: Fifteen (9.1%) patients died during follow-up while 98 (59.4%) patients had at least one major CV event and 107 (64.9%) patients had at least one major CV event and/or mortality (Figure 1). Hypertension, family history of ischemic heart disease and PCI without stent insertion were found to be predictors for major CV and/or mortality in the univariate analysis (Table 2). Survival curves of hypertensive versus non-hypertensive patients are presented in Figure 2.Table 2Predictors for Mortality and/or Major CV Events (Univariate Analysis)ParameterHazard Ratio (95% CI)P-valueSex1.55 (0.81–2.97)0.19Race1.26 (0.86–1.85)0.23HTN1.84 (1.18–2.85)0.007DM1.08 (0.65–1.80)0.76Smoking1.05 (0.70–1.57)0.83Family history of IHD0.63 (0.43–0.94)0.02LAD lesion1.38 (0.94–2.01)0.09MVD1.17 (0.79–1.72)0.42Stent; none vs ≥12.00 (1.26–3.18)0.003Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction. Figure 1Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B).Figure 2Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension. Predictors for Mortality and/or Major CV Events (Univariate Analysis) Abbreviations: CV, cardiovascular; DM, diabetes mellitus; HLP, hyperlipidemia, HTN, hypertension; LAD, left anterior descending; MVD, multi-vessel disease; STEMI, ST-segment elevation myocardial infraction. Kaplan–Meier survival curve showing cumulative survival free from mortality event (A) and from mortality and/or major CV event (B). Kaplan–Meier survival curve showing cumulative survival free from mortality and major CV event in patient with and without hypertension. 
In the multivariate regression analysis, the strongest predictors for MACCE and/or mortality were coronary intervention without stent insertion, LAD artery involvement and hypertension (Table 3).Table 3Multivariable Cox Proportional Hazard Model for Major CV Event and/or MortalityParameterHazard Ratio (95% CI)P-valueSex1.18 (0.57–2.42)0.65Race1.13 (0.74–1.73)0.58HTN1.61 (1.0–2.6)0.05DM0.75 (0.44–1.30)0.31Smoking0.77 (0.49–1.23)0.27Family history of IHD0.67 (0.45–1.02)0.06LAD lesion1.59 (1.04–2.44)0.03MVD1.26 (0.83–1.91)0.28Stent; none vs ≥11.77 (1.09–2.9)0.02Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease. Multivariable Cox Proportional Hazard Model for Major CV Event and/or Mortality Abbreviations: CC, cardiovascular; DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending; MVD, multi-vessel disease. Recurrent Event Analysis: The recurrent event analysis was performed for 145 patients with complete follow-up at the end of study follow-up (1/2018). Twenty-two patients (13.3%) had three or more events during follow-up, the mean number of events was 1.4±1.48 per patient and the mean event rate per patient per year was 0.3±0.8. The median number of events per patients was 1 (IQR 0.2) and the median event rate per patient per year was 0.14 (IQR: 0, 0.27). Recurrent events according to baseline characteristics are presented in Appendix Table 1. In a linear multivariate regression model, the strongest predictor for recurrent event was hypertension (Table 4).Table 4Linear Model for Recurrent Annual Event (N=145)VariableParameterStandard ErrorP-valueSex−0.0900.140.51Race−0.0430.090.62HTN0.3100.110.004DM0.0620.120.59Family history of IHD−0.1440.080.09Smoking−0.0280.100.77LAD involvement0.0720.090.41Stent insertion ≥ 10.0380.100.72Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending. Linear Model for Recurrent Annual Event (N=145) Abbreviations: DM, diabetes mellitus; HTN, hypertension; IHD, ischemic heart disease; LAD, left anterior descending. Discussion: We present an analysis of 165 young patients with ACS who underwent PCI at a young age, with a long-term mean follow-up of 9.1±4.6 years. The main finding of this study is the remarkably high rate of recurrent CV events in the years following initial ACS. During follow-up, approximately 65% of the patients had at least one endpoint of mortality or major CV event. The average number of recurrent events per patient was 1.85, with an event rate of 0.3 per patient per year. These percentages are higher than previously published studies, in which the percentage of recurrent events was less than 30%.21 Moreover, in a more recent study with shorter follow-up of 3–5 years, only 11.2% patients younger than 45 years old had recurrent events with mortality rate of only 0.9%.14 The longer follow-up in our study and better recoding of events may explain the differences between our findings and prior studies. Another possible explanation is that our population may have lower compliance to optimal medical treatments and lifestyle modifications and thus a relatively high recurrent event rate. It may be expected that in a more contemporary revascularization era the prognosis of young ACS patients will be better. However, our results do not support this assumption. 
One explanation is that the invasive approach influences mainly the short-term outcome, up to 1 year, but thereafter factors such as lifestyle and compliance are more dominant for long-term prognosis. Furthermore, an invasive approach may be harmful in patients with poor compliance because of late stent thrombosis. Young patients who experience coronary events may have trouble adhering to medications and risk factor management and are thus exposed to recurrent adverse events.22 Hypertension was present at the first event in only 20% of the patients. However, in patients who died during follow-up or experienced MACCE, the percentages were much higher. We found that patients with hypertension had a significantly shorter time to recurrent events (Figure 2) and that hypertension was the strongest risk factor for recurrent CV events. This may be because hypertension is less well controlled among young patients, who in many cases are not compliant with antihypertensive medications.22 We postulate that early intervention focusing on the treatment of hypertension may reduce the risk for recurrent events. Another finding in this study was that PCI without stent insertion was an independent predictive factor for worse outcomes. A possible explanation is that patients with complex or diffuse ectatic disease not suitable for stent implantation have a worse prognosis.23 Our study indicates that patients who had a coronary event at a young age are at a remarkably high risk for recurrent events during long-term follow-up. While the short-term prognosis of these young patients is relatively good because they have fewer comorbidities, the long-term prognosis might be unfavorable. It seems that health systems should put more focus on the population of young ACS patients in order to improve compliance and, potentially, prognosis. The present study has several limitations. First, the study is based on a retrospective administrative database that contains discharge-level records and as such is susceptible to reporting errors and missing data. To minimize those errors, we crosschecked our data against several major databases, and we believe that we were able to obtain high-quality long-term data that allowed a consistent analysis. Another limitation is the lack of a control group of older patients; these data were not available in this cohort. However, data on the expected outcomes of general ACS populations, both old and young, are abundant, and we used them for reference as discussed. In conclusion, the rate of mortality and recurrent CV events in patients who presented with ACS at a young age is relatively high during long-term follow-up. Hypertension, LAD disease and coronary intervention without stenting are important negative prognostic factors in this population. Early interventions to reduce risk factors and to improve compliance, particularly for hypertensive patients, may lead to a better prognosis in this unique population.
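The recurrent-event analysis reported in the Results above (per-patient event counts, annual event rates, and the linear model of Table 4) can be illustrated with a small sketch. This is not the authors' code (they report using SAS 9.4); it is a minimal Python illustration with hypothetical column names (events, followup_years, htn, lad, stent) and made-up values, showing how an annual event rate per patient could be derived and regressed on baseline characteristics.

```python
# Minimal sketch (not the study's SAS code): annual recurrent-event rate per patient
# and a linear model on baseline characteristics. Data and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-patient data: event counts, follow-up duration, binary covariates.
df = pd.DataFrame({
    "events":         [0, 1, 3, 2, 0, 4, 1, 2, 0, 1],
    "followup_years": [10.0, 8.5, 9.2, 7.0, 10.5, 6.8, 9.8, 8.1, 7.5, 10.2],
    "htn":            [0, 0, 1, 1, 0, 1, 0, 1, 0, 0],
    "lad":            [1, 0, 1, 0, 0, 1, 1, 0, 0, 1],
    "stent":          [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
})

# Annual event rate per patient = number of events / years of follow-up.
df["annual_rate"] = df["events"] / df["followup_years"]

# Linear model: annual event rate regressed on baseline characteristics.
X = sm.add_constant(df[["htn", "lad", "stent"]])
model = sm.OLS(df["annual_rate"], X).fit()
print(model.summary())  # coefficients, standard errors and p-values, analogous to Table 4
```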
Background: Acute coronary syndrome (ACS) at a young age is uncommon. Limited data regarding the long-term follow-up and prognosis in this population are available. Our objectives were to evaluate the long-term clinical outcomes of patients presenting with ACS at a young age and to assess factors that predict long-term prognosis. Methods: A retrospective analysis of consecutive young patients (male below 40 and female below 50 years old) that were admitted with ACS and underwent percutaneous coronary intervention (PCI) between the years 1997 and 2009. Demographics, clinical characteristics, and clinical outcomes including major cardiovascular (CV) events and mortality were analyzed. Multivariable cox proportional hazard model was performed to identify predictors of long-term prognosis. Results: One-hundred sixty-five patients were included with a mean follow-up of 9.1±4.6 years. Most patients were men (88%), and mean age (years) was 36.8±4.2. During follow-up, 15 (9.1%) died, 98 (59.4%) patients had at least one major CV event, 22 (13.3%) patients had more than two CV events, and the mean number of recurrent CV events was 1.4±1.48 events per patient. In multivariate analysis, the strongest predictors of major CV events and/or mortality were coronary intervention without stent insertion (HR1.77; 95% CI 1.09-2.9), LAD artery involvement (HR 1.59; 95% CI 1.04-2.44) and hypertension (HR 1.6; 95% CI 1.0-2.6). Conclusions: Patients with ACS in young age are at high risk for major CV and/or mortality in long-term follow-up with a high rate of recurrent CV events. Close follow-up and risk factor management for secondary prevention have a major role, particularly in this population.
Introduction: Coronary artery disease (CAD) is a leading cause of morbidity and mortality worldwide.1,2 Several studies have shown that coronary atherosclerosis begins in the second or third decade of life with an increased prevalence with age in both males and females.3–5 However, the clinical manifestations of acute coronary syndrome (ACS) in most cases occur later, during the fifth to seventh decade of life6,7 and only 2–10% of all patients with ACS, are younger than 40 years old.8,9 Young patients with ACS have unique characteristics with distinct risk factors and clinical manifestations. Prior studies have shown that family history, hypercholesterolemia, sedentary lifestyles, obesity and smoking were common risk factors in young patients with ACS compared to older age groups.8,10–15 Diabetes and smoking in young patients is a significant risk factor for recurrent coronary events and interventions, as well as mortality.14,15 In addition, there is an association between ethnicity and geographical location and the incidence of ACS events at a young age.16,17 When compared to an older population, ST-segment elevation myocardial infarction (STEMI) is more common in young patients compared to non-ST-segment elevation myocardial infraction (NSTEMI).12,13,18 Young ACS patients have a lower incidence of multi-vessel (MVD) and left main (LM) disease,16,17 whereas involvement of the left anterior descending (LAD) artery is more common.14 Previous studies in this population mainly focused on risk factors and the unique clinical characteristics of this group,11–18 while others focused on in-hospital and short-term outcomes which demonstrated improved outcomes in the young ACS population.13,18,19 To date, only a limited number of studies have evaluated the long-term outcomes of this population, most of which were performed prior to the widespread use of the invasive strategy in patients with ACS.10,11,20 The goal of this study was to evaluate the long-term clinical outcomes of young patients with ACS who underwent percutaneous coronary intervention (PCI) and to elucidate predictive factors affecting long-term prognosis in this population. Discussion: We present an analysis of 165 young patients with ACS who underwent PCI at a young age, with a long-term mean follow-up of 9.1±4.6 years. The main finding of this study is the remarkably high rate of recurrent CV events in the years following initial ACS. During follow-up, approximately 65% of the patients had at least one endpoint of mortality or major CV event. The average number of recurrent events per patient was 1.85, with an event rate of 0.3 per patient per year. These percentages are higher than previously published studies, in which the percentage of recurrent events was less than 30%.21 Moreover, in a more recent study with shorter follow-up of 3–5 years, only 11.2% patients younger than 45 years old had recurrent events with mortality rate of only 0.9%.14 The longer follow-up in our study and better recoding of events may explain the differences between our findings and prior studies. Another possible explanation is that our population may have lower compliance to optimal medical treatments and lifestyle modifications and thus a relatively high recurrent event rate. It may be expected that in a more contemporary revascularization era the prognosis of young ACS patients will be better. However, our results do not support this assumption. 
One explanation is that the invasive approach influences mainly the short-term outcome, up to 1 year but thereafter factors such as lifestyle and compliance are more dominant for long-term prognosis. Furthermore, an invasive approach may be harmful in patients with poor compliance due to late stent thrombosis. Young patients who experience coronary events may have trouble to adhere with medications and risk factors’ management and thus are exposed to recurrent adverse events.22 Hypertension prevalence in the first event was found only in 20% of the patients. However, in patients who died during follow-up or experienced MACCE, the percentages were much higher. We found that patients with hypertension had significantly shorter time to recurrent event (Figure 2). We found that hypertension was the strongest risk factor for recurrent CV events. This may be because hypertension is a less controlled among young patients who in many cases do not compliant with antihypertensive medications.22 We postulate that early intervention focusing on treatment of hypertension may reduce the risk for recurrent events. Another finding in this study was that PCI without stent insertion was found to be an independent predictive factor for worse outcomes. Possible explanation may be the fact that patients who had complex or diffuse ectatic disease not suitable for stent implantation, have worse prognosis.23 Our study indicates that patients who had a coronary event at a young age are in a remarkably high risk for recurrent event during long-term follow-up. While the short-term prognosis of these young patients is relatively good as they have less comorbidities, the long-term prognosis might be unfavorable. It seems that health systems should put more focus on the population of young ACS patients in order to improve compliance and potentially, prognosis. The present study has several limitations. First, the study based on a retrospective administrative database that contains discharge-level records and as such is susceptible to reporting errors and missing data. In order to minimize those errors, we have crosschecked our data in several major databases and we believe that we were able to obtain high-quality long-term data which allowed a consistent analysis. Another limitation is the lack of a control group of older patients, these data were not available in this cohort. However, data on the expected outcome of general ACS populations both old and young are abundant and we used it for reference as discussed. In conclusion, the rate of mortality and recurrent CV events in patients who presented with ACS at a young age is relatively high during long-term follow-up. Hypertension, LAD disease and coronary intervention without stenting are important negative prognostic factors in this population. Early interventions to reduce risk factors and to improve compliance, particularly for hypertensive patients may lead to a better prognosis in this unique population.
Background: Acute coronary syndrome (ACS) at a young age is uncommon. Limited data regarding the long-term follow-up and prognosis in this population are available. Our objectives were to evaluate the long-term clinical outcomes of patients presenting with ACS at a young age and to assess factors that predict long-term prognosis. Methods: A retrospective analysis of consecutive young patients (male below 40 and female below 50 years old) that were admitted with ACS and underwent percutaneous coronary intervention (PCI) between the years 1997 and 2009. Demographics, clinical characteristics, and clinical outcomes including major cardiovascular (CV) events and mortality were analyzed. Multivariable cox proportional hazard model was performed to identify predictors of long-term prognosis. Results: One-hundred sixty-five patients were included with a mean follow-up of 9.1±4.6 years. Most patients were men (88%), and mean age (years) was 36.8±4.2. During follow-up, 15 (9.1%) died, 98 (59.4%) patients had at least one major CV event, 22 (13.3%) patients had more than two CV events, and the mean number of recurrent CV events was 1.4±1.48 events per patient. In multivariate analysis, the strongest predictors of major CV events and/or mortality were coronary intervention without stent insertion (HR1.77; 95% CI 1.09-2.9), LAD artery involvement (HR 1.59; 95% CI 1.04-2.44) and hypertension (HR 1.6; 95% CI 1.0-2.6). Conclusions: Patients with ACS in young age are at high risk for major CV and/or mortality in long-term follow-up with a high rate of recurrent CV events. Close follow-up and risk factor management for secondary prevention have a major role, particularly in this population.
5,375
351
[ 1097, 188, 95, 228, 1918, 480, 202, 758 ]
9
[ "patients", "event", "mortality", "cv", "events", "hypertension", "major", "disease", "major cv", "analysis" ]
[ "disease coronary intervention", "coronary syndrome acs", "coronary event demographic", "coronary atherosclerosis", "acute coronary syndrome" ]
null
null
[CONTENT] acute coronary syndrome | ACS | NSTEMI | STEMI | young population | outcomes [SUMMARY]
null
null
[CONTENT] acute coronary syndrome | ACS | NSTEMI | STEMI | young population | outcomes [SUMMARY]
[CONTENT] acute coronary syndrome | ACS | NSTEMI | STEMI | young population | outcomes [SUMMARY]
[CONTENT] acute coronary syndrome | ACS | NSTEMI | STEMI | young population | outcomes [SUMMARY]
[CONTENT] Acute Coronary Syndrome | Adult | Age of Onset | Angina, Unstable | Databases, Factual | Female | Humans | Male | Non-ST Elevated Myocardial Infarction | Percutaneous Coronary Intervention | Recurrence | Retrospective Studies | Risk Assessment | Risk Factors | ST Elevation Myocardial Infarction | Time Factors | Treatment Outcome [SUMMARY]
null
null
[CONTENT] Acute Coronary Syndrome | Adult | Age of Onset | Angina, Unstable | Databases, Factual | Female | Humans | Male | Non-ST Elevated Myocardial Infarction | Percutaneous Coronary Intervention | Recurrence | Retrospective Studies | Risk Assessment | Risk Factors | ST Elevation Myocardial Infarction | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Acute Coronary Syndrome | Adult | Age of Onset | Angina, Unstable | Databases, Factual | Female | Humans | Male | Non-ST Elevated Myocardial Infarction | Percutaneous Coronary Intervention | Recurrence | Retrospective Studies | Risk Assessment | Risk Factors | ST Elevation Myocardial Infarction | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Acute Coronary Syndrome | Adult | Age of Onset | Angina, Unstable | Databases, Factual | Female | Humans | Male | Non-ST Elevated Myocardial Infarction | Percutaneous Coronary Intervention | Recurrence | Retrospective Studies | Risk Assessment | Risk Factors | ST Elevation Myocardial Infarction | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] disease coronary intervention | coronary syndrome acs | coronary event demographic | coronary atherosclerosis | acute coronary syndrome [SUMMARY]
null
null
[CONTENT] disease coronary intervention | coronary syndrome acs | coronary event demographic | coronary atherosclerosis | acute coronary syndrome [SUMMARY]
[CONTENT] disease coronary intervention | coronary syndrome acs | coronary event demographic | coronary atherosclerosis | acute coronary syndrome [SUMMARY]
[CONTENT] disease coronary intervention | coronary syndrome acs | coronary event demographic | coronary atherosclerosis | acute coronary syndrome [SUMMARY]
[CONTENT] patients | event | mortality | cv | events | hypertension | major | disease | major cv | analysis [SUMMARY]
null
null
[CONTENT] patients | event | mortality | cv | events | hypertension | major | disease | major cv | analysis [SUMMARY]
[CONTENT] patients | event | mortality | cv | events | hypertension | major | disease | major cv | analysis [SUMMARY]
[CONTENT] patients | event | mortality | cv | events | hypertension | major | disease | major cv | analysis [SUMMARY]
[CONTENT] acs | young | patients acs | studies | young patients | patients | population | outcomes | term | common [SUMMARY]
null
null
[CONTENT] patients | prognosis | young | recurrent | term | compliance | events | high | study | long term [SUMMARY]
[CONTENT] patients | event | mortality | events | cv | young | hypertension | included | acs | recurrent [SUMMARY]
[CONTENT] patients | event | mortality | events | cv | young | hypertension | included | acs | recurrent [SUMMARY]
[CONTENT] ||| ||| ACS [SUMMARY]
null
null
[CONTENT] CV | CV ||| [SUMMARY]
[CONTENT] ||| ||| ACS ||| 50 years old | ACS | between the years 1997 and 2009 ||| CV ||| ||| One-hundred sixty-five | years ||| 88% | age (years | 36.8±4.2 ||| 15 | 9.1% | 98 | 59.4% ||| at least one | CV | 22 | 13.3% | more than two | CV | CV | 1.4±1.48 ||| CV | 95% | CI | 1.09-2.9 | LAD | 1.59 | 95% | CI | 1.04-2.44 | 1.6 | 95% | CI | 1.0-2.6 ||| CV | CV ||| [SUMMARY]
[CONTENT] ||| ||| ACS ||| 50 years old | ACS | between the years 1997 and 2009 ||| CV ||| ||| One-hundred sixty-five | years ||| 88% | age (years | 36.8±4.2 ||| 15 | 9.1% | 98 | 59.4% ||| at least one | CV | 22 | 13.3% | more than two | CV | CV | 1.4±1.48 ||| CV | 95% | CI | 1.09-2.9 | LAD | 1.59 | 95% | CI | 1.04-2.44 | 1.6 | 95% | CI | 1.0-2.6 ||| CV | CV ||| [SUMMARY]
Antinociceptive effect of some extracts from Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. aerial parts.
25022284
The genus Ajuga is used for the treatment of joint pain, gout, and jaundice in traditional Iranian medicine (TIM). Ajuga chamaecistus ssp. tomentella is an exclusive subspecies of Ajuga chamaecistus in the flora of Iran. The aim of this study was to evaluate antinociceptive properties of some extracts from aerial parts of A. chamaecistus ssp. tomentella.
BACKGROUND
Antinociceptive activities of total water and 80% methanol extracts, hexane, diethyl ether and n-butanolic partition fractions of the methanolic extract were analyzed using the formalin test in mice. Indomethacin (10 mg/kg) and normal saline were employed as positive and negative controls, respectively.
METHODS
Oral administration of all extracts (200, 400 and 600 mg/kg) 30 min before formalin injection had no effect against the acute phase (0-5 min after formalin injection) of the formalin-induced licking time, but hexane fraction (200 mg/kg) caused a significant effect (p < 0.001) on the chronic phase (15-60 min after formalin injection). Total water and diethyl ether extracts at a dose of 400 mg/kg showed a very significant analgesic activity on the chronic phase (p < 0.001 and p < 0.01, respectively).
RESULTS
The results of this study suggest that the extracts of A. chamaecistus ssp. tomentella have an analgesic property that supports traditional use of Ajuga genus for joint pain and other inflammatory diseases.
CONCLUSIONS
[ "Ajuga", "Analgesics", "Animals", "Ether", "Formaldehyde", "Hexanes", "Male", "Methanol", "Mice", "Pain", "Phytotherapy", "Plant Components, Aerial", "Plant Extracts", "Solvents", "Water" ]
4230404
Background
Five species of the genus Ajuga (Lamiaceae) are found in the flora of Iran, among which Ajuga chamaecistus contains several endemic subspecies, including A. chamaecistus ssp. tomentella [1]. Some species of this annual and perennial genus are used as medicinal plants in the traditional medicine of several countries, mostly in Africa, Asia, and China, for wound healing; as anthelmintic, antifungal, antifebrile, antitumor, antimicrobial, and diuretic agents; and for the treatment of hypertension, hyperglycemia, joint pain, etc. [2-4]. Ajuga chamaepitys (L.) Schreb., which grows in the Middle East and Asia, has been used in the treatment of rheumatism, gout, dropsy, jaundice, and sclerosis. A. decumbens Thunb., which originally grows in East Asia, is used for analgesia, inflammation, fever, and joint pain [5]. Moreover, in Iranian traditional medicine, the genus Ajuga (Kamaphytus) has been used for the treatment of joint pain, gout, and jaundice [6]. In addition, several biological studies performed on many species of this genus have confirmed their ethnopharmacological properties, such as hypoglycemic [7], anti-inflammatory [8], anabolic, analgesic, anti-arthritis, antipyretic, hepatoprotective, antibacterial, antifungal, antioxidant, cardiotonic [5], and antimalarial [9] properties, as well as their application in the treatment of joint diseases [10]. Likewise, many phytochemical studies on Ajuga species have led to the isolation of phytoecdysteroids [11,12], neo-clerodane diterpenoids [13], phenylethyl glycosides [3], withanolides [2], iridoids and flavonoids [14], and essential oils [15]. Prior to this study, we had isolated 10 compounds from the diethyl ether and n-butanolic fractions of Ajuga chamaecistus ssp. tomentella: 20-hydroxyecdysone, cyasterone, ajugalactone, makisterone A, and 24-dehydroprecyasterone (phytoecdysteroids); 8-acetylharpagide (iridoid); and cis- and trans-melilotoside, lavandulifolioside, leonoside B, and martynoside (phenylethanoid glycosides). Cytotoxicity evaluation of some fractions of this plant showed cytotoxicity of the hexane fraction against normal and cancer cell lines, whereas most of the isolated compounds were inactive in the cytotoxicity assay [16,17]. The aim of this study was to evaluate the antinociceptive effects of oral administration of the total water and 80% methanolic extracts, and of the hexane, diethyl ether and n-butanol partition fractions obtained from the methanolic extract, of aerial parts of Ajuga chamaecistus ssp. tomentella, in an attempt to validate the traditional use of plants belonging to the genus Ajuga.
Methods
Plant material Aerial parts of Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. were collected from “Sorkhe Hesar”, east of Tehran, Iran, in June 2008 and verified by Prof. G. Amin. A voucher specimen (THE-6697) was deposited in the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran. Methanolic extraction The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with 80% methanol (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively with n-hexane, diethyl ether, and n-butanol. Removal of the solvents on a rotary evaporator gave the n-hexane, diethyl ether, and n-butanol fractions. Water extraction Two hundred and fifty grams of the powdered aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator followed by freeze drying to give an extract (30 g). Administration The extracts were dissolved in normal saline to achieve the working concentrations. Extracts, the standard drug (indomethacin, 10 mg/kg), and normal saline were administered by oral gavage. Three doses (200, 400 and 600 mg/kg) of each extract were examined. Animals Male albino mice weighing 25–30 g were obtained from the Pasteur Institute and housed in groups of 7 with a 12 h light–dark cycle at a constant temperature (22°C). Mice were allowed to acclimatize to the laboratory for 30 min before the experiments began. This study was approved by the ethics committee of the Pharmaceutical Science Research Center of TUMS. Formalin test in mice Twenty microliters of formalin (0.5%) was injected subcutaneously into the paw as described previously [18]. The total time (in seconds) spent licking the injected paw in the acute phase (0–5 min) and the chronic phase (15–60 min) after formalin injection was measured as a pain indicator. Statistical analysis Results are expressed as mean ± standard error of the mean (S.E.M). Statistical differences between the treatment and control groups were evaluated by one-way ANOVA followed by the Newman–Keuls post hoc test. p < 0.05 was considered significant.
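A minimal sketch of the statistical workflow described above (one-way ANOVA on licking times followed by pairwise post hoc comparisons) is given below. Note the assumptions: the group labels and licking-time values are hypothetical placeholders rather than data from this study, and Tukey's HSD is used as a stand-in because SciPy and statsmodels do not provide a Student-Newman-Keuls test.

# One-way ANOVA on chronic-phase licking times, then pairwise post hoc tests.
# Placeholder values only; Tukey's HSD stands in for Newman-Keuls here.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

licking = {  # chronic-phase licking time (s) per mouse, n = 7 per group
    "saline":       [182, 175, 190, 168, 201, 177, 185],
    "indomethacin": [ 90,  84, 101,  95,  88,  99,  92],
    "hexane_200":   [ 70,  65,  82,  77,  60,  74,  69],
}

f_stat, p_value = f_oneway(*licking.values())        # overall group effect
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(licking.values()))      # flatten all measurements
groups = np.repeat(list(licking.keys()), [len(v) for v in licking.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05)) # pairwise comparisons at alpha = 0.05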
Results
In the present study, the antinociceptive activity of the total water and 80% methanol extracts and of three fractions of the methanolic extract from the aerial parts of Ajuga chamaecistus ssp. tomentella was evaluated in a nociception model in mice. All extracts were administered by oral gavage at different doses (200, 400, and 600 mg/kg) 30 min before intraplantar injection of formalin (20 μl, 0.5%). The licking time was 40.29 ± 5.76 s for indomethacin (10 mg/kg) in the acute phase, and there was no significant difference between indomethacin and any of the treated extracts (Table 1). Figure 1 shows the antinociceptive effects of all extracts at different doses on the chronic phase of pain induction. The hexane fraction (200 mg/kg), as well as the diethyl ether fraction and the total water extract at a dose of 400 mg/kg, significantly (p < 0.001) reduced the duration of licking in the chronic phase, an effect comparable to indomethacin (10 mg/kg). The total methanol extract (600 mg/kg) and the n-butanol fraction (600 mg/kg) also reduced the duration of licking in the chronic phase (p < 0.01). Comparison of the antinociceptive activity of all extracts showed that the maximum inhibitory response was obtained with 200 mg/kg of the hexane fraction. There was no significant difference between the analgesic effects of the total water extract and hexane fraction at 400 mg/kg, the diethyl ether fraction (600 mg/kg), the n-butanol fraction (600 mg/kg), and indomethacin (10 mg/kg) as an NSAID (Figure 2). Table 1. Influence of Ajuga chamaecistus ssp. tomentella extracts on the acute phase of formalin-induced pain in mice. aThe amount of time spent licking the injected paw recorded in 0–5 min (acute phase). bControl: normal saline, TW: total water extract, TM: total methanol extract, HEX: hexane fraction, DEE: diethyl ether fraction, NB: n-butanol fraction. cResults are expressed as the mean ± S.E.M (n = 7). Figure 1. Effects of different extracts of Ajuga chamaecistus ssp. tomentella on the chronic phase of formalin-induced pain. Different doses (200, 400, 600 mg/kg) of a) the total water extract, b) the total methanol extract, c) the hexane fraction, d) the diethyl ether fraction, and e) the n-butanol fraction were administered to mice by oral gavage. The control group received normal saline. All extracts were administered 30 min before formalin injection, and antinociception was recorded 15–60 min (chronic phase) after formalin injection. Each point is the mean ± S.E.M of at least 7 animals. Control (normal saline), IND (indomethacin). *p < 0.05, **p < 0.01, ***p < 0.001 (compared with normal saline); #p < 0.05, ##p < 0.01, ###p < 0.001 (compared with indomethacin). Figure 2. Comparison of different treatments of Ajuga chamaecistus ssp. tomentella on the chronic phase of formalin-induced pain. All samples were administered to mice by oral gavage. The control group received normal saline. All extracts were administered 30 min before formalin injection, and antinociception was recorded 15–60 min (chronic phase) after formalin injection. Each point is the mean ± S.E.M of at least 7 animals. Control (normal saline), TW (total water), TM (total methanol), NB (n-butanol), HEX (hexane fraction), DEE (diethyl ether fraction), IND (indomethacin). **p < 0.01, ***p < 0.001 (compared with normal saline); #p < 0.05, ##p < 0.01, ###p < 0.001 (compared with indomethacin).
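The chronic-phase effects reported above are often also expressed as percent inhibition of licking relative to the saline control. A minimal sketch of that calculation follows; the two input values are hypothetical placeholders, not numbers reported in this study.

# Percent inhibition of formalin-induced licking relative to the vehicle control:
#   inhibition (%) = (control - treated) / control * 100
# Placeholder values only, not data from this study.
def percent_inhibition(control_licking_s: float, treated_licking_s: float) -> float:
    """Percent reduction in licking time versus the control group."""
    return (control_licking_s - treated_licking_s) / control_licking_s * 100.0

control_chronic = 180.0  # hypothetical mean chronic-phase licking time (s), saline group
hexane_200 = 70.0        # hypothetical mean for the hexane fraction at 200 mg/kg

print(f"Inhibition: {percent_inhibition(control_chronic, hexane_200):.1f} %")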
Conclusion
In conclusion, the hexane and diethyl ether fractions obtained from the 80% methanolic extract, as well as the total water extract of the aerial parts of Ajuga chamaecistus ssp. tomentella, possess significant and promising antinociceptive properties. The mechanism is presumed to involve inhibition of the release of endogenous mediators such as prostaglandins. Based on our previous study, the isolated compound 8-acetylharpagide could be responsible for the antinociceptive effect of the total water extract and the diethyl ether fraction. This study confirmed the traditional use of some Ajuga plants for the treatment of joint pain and other inflammatory diseases.
[ "Background", "Plant material", "Methanolic extraction", "Water extraction", "Administration", "Animals", "Formalin test in mice", "Statistical analysis", "Competing interests", "Authors' contributions" ]
[ "Five species of genus Ajuga (Lamiaceae) are found in the flora of Iran in which Ajuga chamaecistus has been contained several endemic subspecies including A. chamaecistus ssp. tomentella[1]. Some species which belong to this annual and perennial genus are used as the medicinal plant in the traditional medicine of several countries mostly in Africa, Asia, and China as for wound healing; anthelmintic, antifungal, antifebrile, antitumor, antimicrobial, and diuretic agent, and for the treatment of hypertension, hyperglycemia, joint pain, etc. [2-4]. Ajuga chamaepitys (L.) Schreb. which grows in the Middle East and Asia has been used in the treatment of rheumatism, gout, dropsy, jaundice, and sclerosis. A. decombens Thunb. that originally grows in East Asia is used for analgesia, inflammation, fever, and joint pain [5]. Moreover in Iranian traditional medicine, the genus Ajuga (Kamaphytus) has been used for treatment of joint pain, gout, and jaundice [6].\nAlso, several biological studies have been performed on many species of this genus which have confirmed their ethno pharmacological properties such as hypoglycemic [7], anti-inflammatory [8], anabolic, analgesic, anti-arthritis, antipyretic, hepatoprotective, antibacterial, antifungal, antioxidant, cardiotonic [5], and antimalarial [9] properties and their application in the treatment of joint diseases [10].\nAs well, many phytochemical studies on Ajuga species have been performed which have led to the isolation of phytoecdysteroids [11,12], neo- clerodanediterpenoids [13], phenylethyl glycosides [3], withanolides [2], iridoids and flavonoids [14], and essential oils [15].\nPrior to this study, we have isolated 10 compounds; 20-hydroxyecdysone, cyasterone, ajugalactone, makisterone A, and 24-dehydroprecyasterone (phytoecdysteroids), 8-acetylharpagide (iridoid), cis- and trans-melilotoside, lavandulifolioside, leonoside B, and martynoside (phenylethanoid glycosides), from diethyl ether and n-butanolic fractions of Ajuga chamaecistus ssp. tomentella. Cytotoxicity evaluation of some fractions of this plant showed the cytotoxicity of hexane fraction against normal and cancer cell lines. Most of the isolated compounds were inactive in the cytotoxicity assay [16,17].\nThe aim of this study was to evaluate antinociceptive effects of oral administration of total water and 80% methanolic extracts and partition fractions of hexane, diethyl ether and n- butanol obtained from methanolic extract of aerial parts of Ajuga chamaecistus ssp. tomentella in an attempt to validate the traditional use of the plants belonging to genus Ajuga.", "Aerial parts of Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. were collected from “Sorkhe Hesar”, east of Tehran, Iran, in June 2008 and verified by Prof. G. Amin. A voucher specimen (THE-6697) was deposited in the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran.\n Methanolic extraction The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. 
Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.\nThe air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.\n Water extraction Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).\nTwo hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).", "The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.", "Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).", "The extracts were dissolved in normal saline to achieve the working concentrations. Extracts, standard drug (Indomethacin 10 mg/kg), and normal saline were administered by oral gavage. Three doses of 200, 400 and 600 mg/kg of all extracts were examined.", "Male albino mice weighing 25–30 g were obtained from Pasteur institute and housed in groups of 7 with a 12 h light–dark cycle and constant temperature (22°C). Mice were allowed to acclimatize to the laboratory for 30 min before the experiments began. This study was approved by the ethics committee of the Pharmaceutical Science Research Center of TUMS.", "Twenty microliters of formalin (0.5%) was injected subcutaneously according to the previous study [18]. The total time (second) spent on licking in response to the injected paw in the acute phase (0–5 min) and chronic phase (15–60 min) after formalin injection was measured as a pain indicator.", "Results are expressed as mean ± standard error of mean (S.E.M). Statistical differences between the treatment and control groups were evaluated by one-way ANOVA, followed by Newman–Keuls post hoc test. p < 0.05 was considered significant.", "The authors declare that they have no competing interests.", "MK: Participated in introducing the plant and designing the study, AD: carried out the antinociception effect, NS: Participated in extraction of the plant and drafting the manuscript, MRS: Helped to revise the manuscript, MS: Participated in design and interpreting of the data. 
All authors read and approve the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Plant material", "Methanolic extraction", "Water extraction", "Administration", "Animals", "Formalin test in mice", "Statistical analysis", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Five species of genus Ajuga (Lamiaceae) are found in the flora of Iran in which Ajuga chamaecistus has been contained several endemic subspecies including A. chamaecistus ssp. tomentella[1]. Some species which belong to this annual and perennial genus are used as the medicinal plant in the traditional medicine of several countries mostly in Africa, Asia, and China as for wound healing; anthelmintic, antifungal, antifebrile, antitumor, antimicrobial, and diuretic agent, and for the treatment of hypertension, hyperglycemia, joint pain, etc. [2-4]. Ajuga chamaepitys (L.) Schreb. which grows in the Middle East and Asia has been used in the treatment of rheumatism, gout, dropsy, jaundice, and sclerosis. A. decombens Thunb. that originally grows in East Asia is used for analgesia, inflammation, fever, and joint pain [5]. Moreover in Iranian traditional medicine, the genus Ajuga (Kamaphytus) has been used for treatment of joint pain, gout, and jaundice [6].\nAlso, several biological studies have been performed on many species of this genus which have confirmed their ethno pharmacological properties such as hypoglycemic [7], anti-inflammatory [8], anabolic, analgesic, anti-arthritis, antipyretic, hepatoprotective, antibacterial, antifungal, antioxidant, cardiotonic [5], and antimalarial [9] properties and their application in the treatment of joint diseases [10].\nAs well, many phytochemical studies on Ajuga species have been performed which have led to the isolation of phytoecdysteroids [11,12], neo- clerodanediterpenoids [13], phenylethyl glycosides [3], withanolides [2], iridoids and flavonoids [14], and essential oils [15].\nPrior to this study, we have isolated 10 compounds; 20-hydroxyecdysone, cyasterone, ajugalactone, makisterone A, and 24-dehydroprecyasterone (phytoecdysteroids), 8-acetylharpagide (iridoid), cis- and trans-melilotoside, lavandulifolioside, leonoside B, and martynoside (phenylethanoid glycosides), from diethyl ether and n-butanolic fractions of Ajuga chamaecistus ssp. tomentella. Cytotoxicity evaluation of some fractions of this plant showed the cytotoxicity of hexane fraction against normal and cancer cell lines. Most of the isolated compounds were inactive in the cytotoxicity assay [16,17].\nThe aim of this study was to evaluate antinociceptive effects of oral administration of total water and 80% methanolic extracts and partition fractions of hexane, diethyl ether and n- butanol obtained from methanolic extract of aerial parts of Ajuga chamaecistus ssp. tomentella in an attempt to validate the traditional use of the plants belonging to genus Ajuga.", " Plant material Aerial parts of Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. were collected from “Sorkhe Hesar”, east of Tehran, Iran, in June 2008 and verified by Prof. G. Amin. A voucher specimen (THE-6697) was deposited in the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran.\n Methanolic extraction The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. 
Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.\nThe air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.\n Water extraction Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).\nTwo hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).\nAerial parts of Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. were collected from “Sorkhe Hesar”, east of Tehran, Iran, in June 2008 and verified by Prof. G. Amin. A voucher specimen (THE-6697) was deposited in the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran.\n Methanolic extraction The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.\nThe air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.\n Water extraction Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).\nTwo hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).\n Administration The extracts were dissolved in normal saline to achieve the working concentrations. Extracts, standard drug (Indomethacin 10 mg/kg), and normal saline were administered by oral gavage. 
Three doses of 200, 400 and 600 mg/kg of all extracts were examined.\nThe extracts were dissolved in normal saline to achieve the working concentrations. Extracts, standard drug (Indomethacin 10 mg/kg), and normal saline were administered by oral gavage. Three doses of 200, 400 and 600 mg/kg of all extracts were examined.\n Animals Male albino mice weighing 25–30 g were obtained from Pasteur institute and housed in groups of 7 with a 12 h light–dark cycle and constant temperature (22°C). Mice were allowed to acclimatize to the laboratory for 30 min before the experiments began. This study was approved by the ethics committee of the Pharmaceutical Science Research Center of TUMS.\nMale albino mice weighing 25–30 g were obtained from Pasteur institute and housed in groups of 7 with a 12 h light–dark cycle and constant temperature (22°C). Mice were allowed to acclimatize to the laboratory for 30 min before the experiments began. This study was approved by the ethics committee of the Pharmaceutical Science Research Center of TUMS.\n Formalin test in mice Twenty microliters of formalin (0.5%) was injected subcutaneously according to the previous study [18]. The total time (second) spent on licking in response to the injected paw in the acute phase (0–5 min) and chronic phase (15–60 min) after formalin injection was measured as a pain indicator.\nTwenty microliters of formalin (0.5%) was injected subcutaneously according to the previous study [18]. The total time (second) spent on licking in response to the injected paw in the acute phase (0–5 min) and chronic phase (15–60 min) after formalin injection was measured as a pain indicator.\n Statistical analysis Results are expressed as mean ± standard error of mean (S.E.M). Statistical differences between the treatment and control groups were evaluated by one-way ANOVA, followed by Newman–Keuls post hoc test. p < 0.05 was considered significant.\nResults are expressed as mean ± standard error of mean (S.E.M). Statistical differences between the treatment and control groups were evaluated by one-way ANOVA, followed by Newman–Keuls post hoc test. p < 0.05 was considered significant.", "Aerial parts of Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. were collected from “Sorkhe Hesar”, east of Tehran, Iran, in June 2008 and verified by Prof. G. Amin. A voucher specimen (THE-6697) was deposited in the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran.\n Methanolic extraction The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.\nThe air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. 
Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.\n Water extraction Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).\nTwo hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).", "The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions.", "Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g).", "The extracts were dissolved in normal saline to achieve the working concentrations. Extracts, standard drug (Indomethacin 10 mg/kg), and normal saline were administered by oral gavage. Three doses of 200, 400 and 600 mg/kg of all extracts were examined.", "Male albino mice weighing 25–30 g were obtained from Pasteur institute and housed in groups of 7 with a 12 h light–dark cycle and constant temperature (22°C). Mice were allowed to acclimatize to the laboratory for 30 min before the experiments began. This study was approved by the ethics committee of the Pharmaceutical Science Research Center of TUMS.", "Twenty microliters of formalin (0.5%) was injected subcutaneously according to the previous study [18]. The total time (second) spent on licking in response to the injected paw in the acute phase (0–5 min) and chronic phase (15–60 min) after formalin injection was measured as a pain indicator.", "Results are expressed as mean ± standard error of mean (S.E.M). Statistical differences between the treatment and control groups were evaluated by one-way ANOVA, followed by Newman–Keuls post hoc test. p < 0.05 was considered significant.", "In the present study, antinociceptive activity of total water, 80% methanol extracts, and three fractions of methanolic extract from the aerial parts of Ajuga chamaecistus ssp. tomentella were evaluated in a nociception model in mice. All extracts were administered by a gastrointestinal tube at different doses (200, 400, and 600 mg/kg) 30 min before intraplantar injection of formalin (20 μl, 0.5%). The licking time was 40.29 ± 5.76 for indomethacin (10 mg/kg) in the acute phase. There was no significant difference between indomethacin and all treated extracts (Table 1).Figure 1 shows the antinociceptive effects of all extracts at different doses on the chronic phase of pain induction. 
Results demonstrated that hexane fraction (200 mg/kg), diethyl ether fraction and total water extract at a dose of 400 mg/kg significantly (p < 0.001) affected the duration of licking in the chronic phase, which was comparable to indomethacin (10 mg/kg). The total methanol extract (600 mg/kg) and n-butanolic (600 mg/kg) fraction also exhibited a reduction on the duration of licking in the chronic phase (p < 0.01).Comparison of the antinociceptive activity of all extracts showed that the maximum inhibitory response was obtained with 200 mg/kg of hexane fraction. According to results, there was no significant difference between the analgesic effect of total water extract and hexane fraction at a dose of 400 mg/kg, diethyl ether fraction (600 mg/kg), n-butanol fraction (600 mg/kg), and indomethacin (10 mg/kg) as an NSAID (Figure 2).\n\nInfluence of \n\nAjuga chamaecistus \n\nssp\n\n. tomentella \n\nextracts on acute phase of formalin- induced pain in mice\n\naThe amount of time spent licking the injected paw recorded in 0–5 min (acute phase).\nbControl: Normal saline, TW: Total Water extract, TM: Total Methanol extract, HEX: Hexane fraction, DEE: Diethyl Ether fraction, NB: N-Butanol fraction.\ncResults are expressed as the mean ± S.E.M (n = 7).\nEffects of different extracts of Ajuga chamaecistus ssp. tomentella on chronic phase of formalin- induced pain. Different doses of all extracts (200, 400, 600 mg/kg), a) Total water extract, b) Total methanol extract, c) Hexane fraction, d) Diethyl ether fraction, and e) N-Butanol fraction, were administered to mice by oral tube. Control group received normal saline. All extracts of the plant were administered 30 min before formalin injection. Antinociception was recorded 15–60 min (Chronic phase), after formalin injection. Each point is the mean ± S.E.M of at least 7 animals. Control (normal saline), IND (Indomethacin). *p<0.05,** p < 0.01, ***p < 0.001 (as compared with normal saline), #p<0.05, ##p < 0.01, ###p < 0.001 (as compared with Indomethacin).\nComparision of different treatments of Ajuga chamaecistus ssp. tomentella on chronic phase of formalin- induced pain. All samples were administered to mice by oral tube. Control group received normal saline. All extracts of the plant were administered 30 min before formalin injection. Antinociception was recorded 15–60 min (Chronic phase), after formalin injection. Each point is the mean ± S.E.M of at least 7 animals. Control (normal saline), TW (total water), TM (total methanol), NB (n-butanol), HEX (hexane fraction), DEE (diethyl ether fraction). IND (Indomethacin). **p < 0.01, ***p < 0.001 (as compared with normal saline), #p<0.05, ##p < 0.01, ###p < 0.001 (as compared with Indomethacin).", "The formalin test provides a moderate and continuous pain because of tissue injury in the animal, which is a better approach to clinical conditions than more traditional tests of nociception [19]. Subcutaneous injection of formalin induces two distinctive periods of response. The early phase is explained as a direct stimulation of nociceptive neurons, and the late phase occurs secondary to the inflammatory reactions [20,21]. Drugs which affect the central nervous system such as opioids inhibit both phases equally while peripherally acting drugs such as aspirin and indomethacin inhibit the late phase [22]. The early phase of response to the formalin test is insensitive to anti-inflammatory drugs [23]. 
Inflammation in the late phase is due to the release of chemical mediators, such as serotonin, histamine, bradykinin and prostaglandins and at least to some degree, the sensitization of central nociceptive neurons [24].\nThe results showed that hexane (200 mg/kg), diethyl ether (400 mg/kg), and n-butanol fractions (600 mg/kg) from the methanolic extract (80%) in addition to total water extract (400 mg/kg) significantly decreased the pain related to the late phase (inflammatory agents) of the formalin test. Since the analgesic properties of effective extracts were observed in late phase like NSAIDs, the antinociceptive activities of the extracts are apparently mediated by interactions with inflammatory mediators especially arachidonic acid metabolites. Recently, [9] reported that 70% ethanol extract of whole plants of Ajuga bracteosa showed a significant topical anti-inflammatory activity and a strong in vitro COX-1 and COX-2 inhibitory effect. Among the isolated compounds from this extract, lupulin A (a clerodane diterpenes) exhibited the highest inhibition of COX-1and 6-deoxyharpagide (an iridoid) showed the highest COX-2 inhibition [8]. Cyclooxygenase (COX) catalyzes the biosynthesis of prostaglandin G2 and H2 from arachidonic acid. The cyclooxygenase isoforms (COX-1 and COX-2) are the target of the non-steroidal anti-inflammatory drugs (NSAIDs), which provide therapeutic effects in the treatment of pain, fever, and inflammation [25].\nAnother herb that is used in treatment of pain and arthritis is Harpagophytum procumbens, commonly known as devil’s claw. The total water extract of this plant possesses anti-inflammatory and analgesic effects by suppressing cyclooxygenase-2 and inducible nitric oxide synthase expressions. Iridoid glycosides such as harpagoside and harpagide are principal constituent of devil’s claw [26,27]. In our previous study, 8-acethyharpagide, an iridoid glycoside, was isolated in a large amount from the diethyl ether fraction [16]. Many studies have exhibited the biological effects of iridoids such as antioxidant, cytotoxic [9], chemoprotective [28], cardiovascular, hypoglycemic and hypolipidemic properties [29]. Li et al. [30] showed that the iridoid glycosides extract of Lamiophlomis rotate has a significant antinociceptive effect that was better at the second phase than the first phase of the formalin test [30].", "In conclusion, hexane and diethyl ether fractions obtained from the methanolic extract 80%, and total water extract of aerial parts of Ajuga chamaecistus ssp. tomentella possess significant and promising antinociceptive properties. The mechanism is supposed to be mediated through the inhibition of endogenous mediators release like prostaglandins. According to the previous study, the isolated compound 8-acethylharpagide could be responsible for the antinociceptive effect of the total water extract and diethyl ether fraction. This study confirmed the traditional use of some Ajuga plants for the treatment of joint pain and other inflammatory diseases.", "The authors declare that they have no competing interests.", "MK: Participated in introducing the plant and designing the study, AD: carried out the antinociception effect, NS: Participated in extraction of the plant and drafting the manuscript, MRS: Helped to revise the manuscript, MS: Participated in design and interpreting of the data. All authors read and approve the final manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, "results", "discussion", "conclusions", null, null ]
[ "Ajuga chamaecistus ssp. tomentella", "Antinociceptive effect", "Analgesic activity", "Formalin test", "Mice" ]
Background: Five species of genus Ajuga (Lamiaceae) are found in the flora of Iran in which Ajuga chamaecistus has been contained several endemic subspecies including A. chamaecistus ssp. tomentella[1]. Some species which belong to this annual and perennial genus are used as the medicinal plant in the traditional medicine of several countries mostly in Africa, Asia, and China as for wound healing; anthelmintic, antifungal, antifebrile, antitumor, antimicrobial, and diuretic agent, and for the treatment of hypertension, hyperglycemia, joint pain, etc. [2-4]. Ajuga chamaepitys (L.) Schreb. which grows in the Middle East and Asia has been used in the treatment of rheumatism, gout, dropsy, jaundice, and sclerosis. A. decombens Thunb. that originally grows in East Asia is used for analgesia, inflammation, fever, and joint pain [5]. Moreover in Iranian traditional medicine, the genus Ajuga (Kamaphytus) has been used for treatment of joint pain, gout, and jaundice [6]. Also, several biological studies have been performed on many species of this genus which have confirmed their ethno pharmacological properties such as hypoglycemic [7], anti-inflammatory [8], anabolic, analgesic, anti-arthritis, antipyretic, hepatoprotective, antibacterial, antifungal, antioxidant, cardiotonic [5], and antimalarial [9] properties and their application in the treatment of joint diseases [10]. As well, many phytochemical studies on Ajuga species have been performed which have led to the isolation of phytoecdysteroids [11,12], neo- clerodanediterpenoids [13], phenylethyl glycosides [3], withanolides [2], iridoids and flavonoids [14], and essential oils [15]. Prior to this study, we have isolated 10 compounds; 20-hydroxyecdysone, cyasterone, ajugalactone, makisterone A, and 24-dehydroprecyasterone (phytoecdysteroids), 8-acetylharpagide (iridoid), cis- and trans-melilotoside, lavandulifolioside, leonoside B, and martynoside (phenylethanoid glycosides), from diethyl ether and n-butanolic fractions of Ajuga chamaecistus ssp. tomentella. Cytotoxicity evaluation of some fractions of this plant showed the cytotoxicity of hexane fraction against normal and cancer cell lines. Most of the isolated compounds were inactive in the cytotoxicity assay [16,17]. The aim of this study was to evaluate antinociceptive effects of oral administration of total water and 80% methanolic extracts and partition fractions of hexane, diethyl ether and n- butanol obtained from methanolic extract of aerial parts of Ajuga chamaecistus ssp. tomentella in an attempt to validate the traditional use of the plants belonging to genus Ajuga. Methods: Plant material Aerial parts of Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. were collected from “Sorkhe Hesar”, east of Tehran, Iran, in June 2008 and verified by Prof. G. Amin. A voucher specimen (THE-6697) was deposited in the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran. Methanolic extraction The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions. 
The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions. Water extraction Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g). Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g). Aerial parts of Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. were collected from “Sorkhe Hesar”, east of Tehran, Iran, in June 2008 and verified by Prof. G. Amin. A voucher specimen (THE-6697) was deposited in the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran. Methanolic extraction The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions. The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions. Water extraction Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g). Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g). Administration The extracts were dissolved in normal saline to achieve the working concentrations. Extracts, standard drug (Indomethacin 10 mg/kg), and normal saline were administered by oral gavage. Three doses of 200, 400 and 600 mg/kg of all extracts were examined. The extracts were dissolved in normal saline to achieve the working concentrations. Extracts, standard drug (Indomethacin 10 mg/kg), and normal saline were administered by oral gavage. 
Three doses of 200, 400 and 600 mg/kg of all extracts were examined. Animals Male albino mice weighing 25–30 g were obtained from Pasteur institute and housed in groups of 7 with a 12 h light–dark cycle and constant temperature (22°C). Mice were allowed to acclimatize to the laboratory for 30 min before the experiments began. This study was approved by the ethics committee of the Pharmaceutical Science Research Center of TUMS. Male albino mice weighing 25–30 g were obtained from Pasteur institute and housed in groups of 7 with a 12 h light–dark cycle and constant temperature (22°C). Mice were allowed to acclimatize to the laboratory for 30 min before the experiments began. This study was approved by the ethics committee of the Pharmaceutical Science Research Center of TUMS. Formalin test in mice Twenty microliters of formalin (0.5%) was injected subcutaneously according to the previous study [18]. The total time (second) spent on licking in response to the injected paw in the acute phase (0–5 min) and chronic phase (15–60 min) after formalin injection was measured as a pain indicator. Twenty microliters of formalin (0.5%) was injected subcutaneously according to the previous study [18]. The total time (second) spent on licking in response to the injected paw in the acute phase (0–5 min) and chronic phase (15–60 min) after formalin injection was measured as a pain indicator. Statistical analysis Results are expressed as mean ± standard error of mean (S.E.M). Statistical differences between the treatment and control groups were evaluated by one-way ANOVA, followed by Newman–Keuls post hoc test. p < 0.05 was considered significant. Results are expressed as mean ± standard error of mean (S.E.M). Statistical differences between the treatment and control groups were evaluated by one-way ANOVA, followed by Newman–Keuls post hoc test. p < 0.05 was considered significant. Plant material: Aerial parts of Ajuga chamaecistus Ging. ssp. tomentella (Boiss.) Rech. f. were collected from “Sorkhe Hesar”, east of Tehran, Iran, in June 2008 and verified by Prof. G. Amin. A voucher specimen (THE-6697) was deposited in the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran. Methanolic extraction The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions. The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions. Water extraction Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. 
The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g). Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g). Methanolic extraction: The air-dried and ground aerial parts of A. chamaecistus ssp. tomentella (250 g) were extracted with methanol 80% (3 × 0.5 L) at room temperature. The solvent was evaporated on a rotary evaporator and in a vacuum oven to give a dark brown extract (45 g). The extract (30 g) was suspended in 80% methanol and partitioned successively between 80% methanol, n-hexane, diethyl ether, and n-butanol. Removal of the solvents with a rotary evaporator resulted in the production of n-hexane, diethyl ether, and n-butanol fractions. Water extraction: Two hundred and fifty grams of the powdered plant from the aerial parts of A. chamaecistus ssp. tomentella were extracted with distilled water (3 × 0.5 L) at room temperature. The solvent was removed with a rotary evaporator and freeze drying process to give an extract (30 g). Administration: The extracts were dissolved in normal saline to achieve the working concentrations. Extracts, standard drug (Indomethacin 10 mg/kg), and normal saline were administered by oral gavage. Three doses of 200, 400 and 600 mg/kg of all extracts were examined. Animals: Male albino mice weighing 25–30 g were obtained from Pasteur institute and housed in groups of 7 with a 12 h light–dark cycle and constant temperature (22°C). Mice were allowed to acclimatize to the laboratory for 30 min before the experiments began. This study was approved by the ethics committee of the Pharmaceutical Science Research Center of TUMS. Formalin test in mice: Twenty microliters of formalin (0.5%) was injected subcutaneously according to the previous study [18]. The total time (second) spent on licking in response to the injected paw in the acute phase (0–5 min) and chronic phase (15–60 min) after formalin injection was measured as a pain indicator. Statistical analysis: Results are expressed as mean ± standard error of mean (S.E.M). Statistical differences between the treatment and control groups were evaluated by one-way ANOVA, followed by Newman–Keuls post hoc test. p < 0.05 was considered significant. Results: In the present study, antinociceptive activity of total water, 80% methanol extracts, and three fractions of methanolic extract from the aerial parts of Ajuga chamaecistus ssp. tomentella were evaluated in a nociception model in mice. All extracts were administered by a gastrointestinal tube at different doses (200, 400, and 600 mg/kg) 30 min before intraplantar injection of formalin (20 μl, 0.5%). The licking time was 40.29 ± 5.76 for indomethacin (10 mg/kg) in the acute phase. There was no significant difference between indomethacin and all treated extracts (Table 1).Figure 1 shows the antinociceptive effects of all extracts at different doses on the chronic phase of pain induction. Results demonstrated that hexane fraction (200 mg/kg), diethyl ether fraction and total water extract at a dose of 400 mg/kg significantly (p < 0.001) affected the duration of licking in the chronic phase, which was comparable to indomethacin (10 mg/kg). 
The total methanol extract (600 mg/kg) and n-butanolic (600 mg/kg) fraction also exhibited a reduction on the duration of licking in the chronic phase (p < 0.01).Comparison of the antinociceptive activity of all extracts showed that the maximum inhibitory response was obtained with 200 mg/kg of hexane fraction. According to results, there was no significant difference between the analgesic effect of total water extract and hexane fraction at a dose of 400 mg/kg, diethyl ether fraction (600 mg/kg), n-butanol fraction (600 mg/kg), and indomethacin (10 mg/kg) as an NSAID (Figure 2). Influence of Ajuga chamaecistus ssp . tomentella extracts on acute phase of formalin- induced pain in mice aThe amount of time spent licking the injected paw recorded in 0–5 min (acute phase). bControl: Normal saline, TW: Total Water extract, TM: Total Methanol extract, HEX: Hexane fraction, DEE: Diethyl Ether fraction, NB: N-Butanol fraction. cResults are expressed as the mean ± S.E.M (n = 7). Effects of different extracts of Ajuga chamaecistus ssp. tomentella on chronic phase of formalin- induced pain. Different doses of all extracts (200, 400, 600 mg/kg), a) Total water extract, b) Total methanol extract, c) Hexane fraction, d) Diethyl ether fraction, and e) N-Butanol fraction, were administered to mice by oral tube. Control group received normal saline. All extracts of the plant were administered 30 min before formalin injection. Antinociception was recorded 15–60 min (Chronic phase), after formalin injection. Each point is the mean ± S.E.M of at least 7 animals. Control (normal saline), IND (Indomethacin). *p<0.05,** p < 0.01, ***p < 0.001 (as compared with normal saline), #p<0.05, ##p < 0.01, ###p < 0.001 (as compared with Indomethacin). Comparision of different treatments of Ajuga chamaecistus ssp. tomentella on chronic phase of formalin- induced pain. All samples were administered to mice by oral tube. Control group received normal saline. All extracts of the plant were administered 30 min before formalin injection. Antinociception was recorded 15–60 min (Chronic phase), after formalin injection. Each point is the mean ± S.E.M of at least 7 animals. Control (normal saline), TW (total water), TM (total methanol), NB (n-butanol), HEX (hexane fraction), DEE (diethyl ether fraction). IND (Indomethacin). **p < 0.01, ***p < 0.001 (as compared with normal saline), #p<0.05, ##p < 0.01, ###p < 0.001 (as compared with Indomethacin). Discussion: The formalin test provides a moderate and continuous pain because of tissue injury in the animal, which is a better approach to clinical conditions than more traditional tests of nociception [19]. Subcutaneous injection of formalin induces two distinctive periods of response. The early phase is explained as a direct stimulation of nociceptive neurons, and the late phase occurs secondary to the inflammatory reactions [20,21]. Drugs which affect the central nervous system such as opioids inhibit both phases equally while peripherally acting drugs such as aspirin and indomethacin inhibit the late phase [22]. The early phase of response to the formalin test is insensitive to anti-inflammatory drugs [23]. Inflammation in the late phase is due to the release of chemical mediators, such as serotonin, histamine, bradykinin and prostaglandins and at least to some degree, the sensitization of central nociceptive neurons [24]. 
The results showed that hexane (200 mg/kg), diethyl ether (400 mg/kg), and n-butanol fractions (600 mg/kg) from the methanolic extract (80%) in addition to total water extract (400 mg/kg) significantly decreased the pain related to the late phase (inflammatory agents) of the formalin test. Since the analgesic properties of effective extracts were observed in late phase like NSAIDs, the antinociceptive activities of the extracts are apparently mediated by interactions with inflammatory mediators especially arachidonic acid metabolites. Recently, [9] reported that 70% ethanol extract of whole plants of Ajuga bracteosa showed a significant topical anti-inflammatory activity and a strong in vitro COX-1 and COX-2 inhibitory effect. Among the isolated compounds from this extract, lupulin A (a clerodane diterpenes) exhibited the highest inhibition of COX-1and 6-deoxyharpagide (an iridoid) showed the highest COX-2 inhibition [8]. Cyclooxygenase (COX) catalyzes the biosynthesis of prostaglandin G2 and H2 from arachidonic acid. The cyclooxygenase isoforms (COX-1 and COX-2) are the target of the non-steroidal anti-inflammatory drugs (NSAIDs), which provide therapeutic effects in the treatment of pain, fever, and inflammation [25]. Another herb that is used in treatment of pain and arthritis is Harpagophytum procumbens, commonly known as devil’s claw. The total water extract of this plant possesses anti-inflammatory and analgesic effects by suppressing cyclooxygenase-2 and inducible nitric oxide synthase expressions. Iridoid glycosides such as harpagoside and harpagide are principal constituent of devil’s claw [26,27]. In our previous study, 8-acethyharpagide, an iridoid glycoside, was isolated in a large amount from the diethyl ether fraction [16]. Many studies have exhibited the biological effects of iridoids such as antioxidant, cytotoxic [9], chemoprotective [28], cardiovascular, hypoglycemic and hypolipidemic properties [29]. Li et al. [30] showed that the iridoid glycosides extract of Lamiophlomis rotate has a significant antinociceptive effect that was better at the second phase than the first phase of the formalin test [30]. Conclusion: In conclusion, hexane and diethyl ether fractions obtained from the methanolic extract 80%, and total water extract of aerial parts of Ajuga chamaecistus ssp. tomentella possess significant and promising antinociceptive properties. The mechanism is supposed to be mediated through the inhibition of endogenous mediators release like prostaglandins. According to the previous study, the isolated compound 8-acethylharpagide could be responsible for the antinociceptive effect of the total water extract and diethyl ether fraction. This study confirmed the traditional use of some Ajuga plants for the treatment of joint pain and other inflammatory diseases. Competing interests: The authors declare that they have no competing interests. Authors' contributions: MK: Participated in introducing the plant and designing the study, AD: carried out the antinociception effect, NS: Participated in extraction of the plant and drafting the manuscript, MRS: Helped to revise the manuscript, MS: Participated in design and interpreting of the data. All authors read and approve the final manuscript.
Background: The genus Ajuga is used for the treatment of joint pain, gout, and jaundice in traditional Iranian medicine (TIM). Ajuga chamaecistus ssp. tomentella is an exclusive subspecies of Ajuga chamaecistus in the flora of Iran. The aim of this study was to evaluate antinociceptive properties of some extracts from aerial parts of A. chamaecistus ssp. tomentella. Methods: Antinociceptive activities of total water and 80% methanol extracts, hexane, diethyl ether and n-butanolic partition fractions of the methanolic extract were analyzed using the formalin test in mice. Indomethacin (10 mg/kg) and normal saline were employed as positive and negative controls, respectively. Results: Oral administration of all extracts (200, 400 and 600 mg/kg) 30 min before formalin injection had no effect against the acute phase (0-5 min after formalin injection) of the formalin-induced licking time, but hexane fraction (200 mg/kg) caused a significant effect (p < 0.001) on the chronic phase (15-60 min after formalin injection). Total water and diethyl ether extracts at a dose of 400 mg/kg showed a very significant analgesic activity on the chronic phase (p < 0.001 and p < 0.01, respectively). Conclusions: The results of this study suggest that the extracts of A. chamaecistus ssp. tomentella have an analgesic property that supports traditional use of Ajuga genus for joint pain and other inflammatory diseases.
Background: Five species of the genus Ajuga (Lamiaceae) are found in the flora of Iran, among which Ajuga chamaecistus comprises several endemic subspecies, including A. chamaecistus ssp. tomentella [1]. Some species of this annual and perennial genus are used as medicinal plants in the traditional medicine of several countries, mostly in Africa and Asia (including China), for wound healing; as anthelmintic, antifungal, antifebrile, antitumor, antimicrobial, and diuretic agents; and for the treatment of hypertension, hyperglycemia, joint pain, etc. [2-4]. Ajuga chamaepitys (L.) Schreb., which grows in the Middle East and Asia, has been used in the treatment of rheumatism, gout, dropsy, jaundice, and sclerosis. A. decumbens Thunb., which originally grows in East Asia, is used for analgesia, inflammation, fever, and joint pain [5]. Moreover, in Iranian traditional medicine, the genus Ajuga (Kamaphytus) has been used for the treatment of joint pain, gout, and jaundice [6]. In addition, several biological studies have been performed on many species of this genus and have confirmed their ethnopharmacological properties, such as hypoglycemic [7], anti-inflammatory [8], anabolic, analgesic, anti-arthritis, antipyretic, hepatoprotective, antibacterial, antifungal, antioxidant, cardiotonic [5], and antimalarial [9] properties, and their application in the treatment of joint diseases [10]. Likewise, many phytochemical studies on Ajuga species have been performed, which have led to the isolation of phytoecdysteroids [11,12], neo-clerodane diterpenoids [13], phenylethyl glycosides [3], withanolides [2], iridoids and flavonoids [14], and essential oils [15]. Prior to this study, we had isolated 10 compounds: 20-hydroxyecdysone, cyasterone, ajugalactone, makisterone A, and 24-dehydroprecyasterone (phytoecdysteroids), 8-acetylharpagide (iridoid), cis- and trans-melilotoside, lavandulifolioside, leonoside B, and martynoside (phenylethanoid glycosides), from the diethyl ether and n-butanolic fractions of Ajuga chamaecistus ssp. tomentella. Cytotoxicity evaluation of some fractions of this plant showed cytotoxicity of the hexane fraction against normal and cancer cell lines, while most of the isolated compounds were inactive in the cytotoxicity assay [16,17]. The aim of this study was to evaluate the antinociceptive effects of oral administration of the total water and 80% methanolic extracts, and of the hexane, diethyl ether and n-butanol partition fractions obtained from the methanolic extract of aerial parts of Ajuga chamaecistus ssp. tomentella, in an attempt to validate the traditional use of plants belonging to the genus Ajuga. Conclusion: In conclusion, the hexane and diethyl ether fractions obtained from the 80% methanolic extract, and the total water extract of aerial parts of Ajuga chamaecistus ssp. tomentella, possess significant and promising antinociceptive properties. The mechanism is presumed to be mediated through inhibition of the release of endogenous mediators such as prostaglandins. According to the previous study, the isolated compound 8-acetylharpagide could be responsible for the antinociceptive effect of the total water extract and the diethyl ether fraction. This study confirmed the traditional use of some Ajuga plants for the treatment of joint pain and other inflammatory diseases.
Background: The genus Ajuga is used for the treatment of joint pain, gout, and jaundice in traditional Iranian medicine (TIM). Ajuga chamaecistus ssp. tomentella is an exclusive subspecies of Ajuga chamaecistus in the flora of Iran. The aim of this study was to evaluate antinociceptive properties of some extracts from aerial parts of A. chamaecistus ssp. tomentella. Methods: Antinociceptive activities of total water and 80% methanol extracts, hexane, diethyl ether and n-butanolic partition fractions of the methanolic extract were analyzed using the formalin test in mice. Indomethacin (10 mg/kg) and normal saline were employed as positive and negative controls, respectively. Results: Oral administration of all extracts (200, 400 and 600 mg/kg) 30 min before formalin injection had no effect against the acute phase (0-5 min after formalin injection) of the formalin-induced licking time, but hexane fraction (200 mg/kg) caused a significant effect (p < 0.001) on the chronic phase (15-60 min after formalin injection). Total water and diethyl ether extracts at a dose of 400 mg/kg showed a very significant analgesic activity on the chronic phase (p < 0.001 and p < 0.01, respectively). Conclusions: The results of this study suggest that the extracts of A. chamaecistus ssp. tomentella have an analgesic property that supports traditional use of Ajuga genus for joint pain and other inflammatory diseases.
4,329
286
[ 494, 444, 122, 59, 53, 70, 62, 50, 10, 61 ]
14
[ "extract", "chamaecistus", "methanol", "tomentella", "80", "ether", "ssp tomentella", "ssp", "phase", "diethyl" ]
[ "ajuga lamiaceae found", "traditional use ajuga", "treatments ajuga chamaecistus", "phytochemical studies ajuga", "different extracts ajuga" ]
[CONTENT] Ajuga chamaecistus ssp. tomentella | Antinociceptive effect | Analgesic activity | Formalin test | Mice [SUMMARY]
[CONTENT] Ajuga chamaecistus ssp. tomentella | Antinociceptive effect | Analgesic activity | Formalin test | Mice [SUMMARY]
[CONTENT] Ajuga chamaecistus ssp. tomentella | Antinociceptive effect | Analgesic activity | Formalin test | Mice [SUMMARY]
[CONTENT] Ajuga chamaecistus ssp. tomentella | Antinociceptive effect | Analgesic activity | Formalin test | Mice [SUMMARY]
[CONTENT] Ajuga chamaecistus ssp. tomentella | Antinociceptive effect | Analgesic activity | Formalin test | Mice [SUMMARY]
[CONTENT] Ajuga chamaecistus ssp. tomentella | Antinociceptive effect | Analgesic activity | Formalin test | Mice [SUMMARY]
[CONTENT] Ajuga | Analgesics | Animals | Ether | Formaldehyde | Hexanes | Male | Methanol | Mice | Pain | Phytotherapy | Plant Components, Aerial | Plant Extracts | Solvents | Water [SUMMARY]
[CONTENT] Ajuga | Analgesics | Animals | Ether | Formaldehyde | Hexanes | Male | Methanol | Mice | Pain | Phytotherapy | Plant Components, Aerial | Plant Extracts | Solvents | Water [SUMMARY]
[CONTENT] Ajuga | Analgesics | Animals | Ether | Formaldehyde | Hexanes | Male | Methanol | Mice | Pain | Phytotherapy | Plant Components, Aerial | Plant Extracts | Solvents | Water [SUMMARY]
[CONTENT] Ajuga | Analgesics | Animals | Ether | Formaldehyde | Hexanes | Male | Methanol | Mice | Pain | Phytotherapy | Plant Components, Aerial | Plant Extracts | Solvents | Water [SUMMARY]
[CONTENT] Ajuga | Analgesics | Animals | Ether | Formaldehyde | Hexanes | Male | Methanol | Mice | Pain | Phytotherapy | Plant Components, Aerial | Plant Extracts | Solvents | Water [SUMMARY]
[CONTENT] Ajuga | Analgesics | Animals | Ether | Formaldehyde | Hexanes | Male | Methanol | Mice | Pain | Phytotherapy | Plant Components, Aerial | Plant Extracts | Solvents | Water [SUMMARY]
[CONTENT] ajuga lamiaceae found | traditional use ajuga | treatments ajuga chamaecistus | phytochemical studies ajuga | different extracts ajuga [SUMMARY]
[CONTENT] ajuga lamiaceae found | traditional use ajuga | treatments ajuga chamaecistus | phytochemical studies ajuga | different extracts ajuga [SUMMARY]
[CONTENT] ajuga lamiaceae found | traditional use ajuga | treatments ajuga chamaecistus | phytochemical studies ajuga | different extracts ajuga [SUMMARY]
[CONTENT] ajuga lamiaceae found | traditional use ajuga | treatments ajuga chamaecistus | phytochemical studies ajuga | different extracts ajuga [SUMMARY]
[CONTENT] ajuga lamiaceae found | traditional use ajuga | treatments ajuga chamaecistus | phytochemical studies ajuga | different extracts ajuga [SUMMARY]
[CONTENT] ajuga lamiaceae found | traditional use ajuga | treatments ajuga chamaecistus | phytochemical studies ajuga | different extracts ajuga [SUMMARY]
[CONTENT] extract | chamaecistus | methanol | tomentella | 80 | ether | ssp tomentella | ssp | phase | diethyl [SUMMARY]
[CONTENT] extract | chamaecistus | methanol | tomentella | 80 | ether | ssp tomentella | ssp | phase | diethyl [SUMMARY]
[CONTENT] extract | chamaecistus | methanol | tomentella | 80 | ether | ssp tomentella | ssp | phase | diethyl [SUMMARY]
[CONTENT] extract | chamaecistus | methanol | tomentella | 80 | ether | ssp tomentella | ssp | phase | diethyl [SUMMARY]
[CONTENT] extract | chamaecistus | methanol | tomentella | 80 | ether | ssp tomentella | ssp | phase | diethyl [SUMMARY]
[CONTENT] extract | chamaecistus | methanol | tomentella | 80 | ether | ssp tomentella | ssp | phase | diethyl [SUMMARY]
[CONTENT] genus | ajuga | species | joint | genus ajuga | asia | cytotoxicity | joint pain | treatment | traditional [SUMMARY]
[CONTENT] rotary | rotary evaporator | methanol | evaporator | 80 | 30 | temperature | extract | extracted | extract 30 [SUMMARY]
[CONTENT] fraction | mg kg | mg | kg | phase | extracts | total | indomethacin | formalin | saline [SUMMARY]
[CONTENT] total water extract | water extract | extract | total water | antinociceptive | ajuga | total | ether | diethyl ether | diethyl [SUMMARY]
[CONTENT] extract | methanol | phase | evaporator | rotary | rotary evaporator | 30 | chamaecistus | extracts | mg kg [SUMMARY]
[CONTENT] extract | methanol | phase | evaporator | rotary | rotary evaporator | 30 | chamaecistus | extracts | mg kg [SUMMARY]
[CONTENT] Ajuga | Iranian | TIM ||| Ajuga ||| Ajuga | Iran ||| A. chamaecistus ssp ||| [SUMMARY]
[CONTENT] 80% ||| 10 mg/kg [SUMMARY]
[CONTENT] 200 | 400 | 600 mg/kg | 30 | 0-5 | hexane fraction | 200 mg/kg | 0.001 | 15-60 ||| 400 mg/kg | 0.001 | 0.01 [SUMMARY]
[CONTENT] A. chamaecistus ssp ||| Ajuga [SUMMARY]
[CONTENT] Ajuga | Iranian | TIM ||| Ajuga ||| Ajuga | Iran ||| A. chamaecistus ssp ||| ||| 80% ||| 10 mg/kg ||| 200 | 400 | 600 mg/kg | 30 | 0-5 | hexane fraction | 200 mg/kg | 0.001 | 15-60 ||| 400 mg/kg | 0.001 | 0.01 ||| A. chamaecistus ssp ||| Ajuga [SUMMARY]
[CONTENT] Ajuga | Iranian | TIM ||| Ajuga ||| Ajuga | Iran ||| A. chamaecistus ssp ||| ||| 80% ||| 10 mg/kg ||| 200 | 400 | 600 mg/kg | 30 | 0-5 | hexane fraction | 200 mg/kg | 0.001 | 15-60 ||| 400 mg/kg | 0.001 | 0.01 ||| A. chamaecistus ssp ||| Ajuga [SUMMARY]
Low positivity rates for HBeAg and HBV DNA in rheumatoid arthritis patients: a case-control study.
35751011
The reported rates of hepatitis B virus (HBV) infection in rheumatoid arthritis (RA) patients are controversial. It was speculated that HBV infection status was altered after the onset of RA, and that variations in HBV infection rates became apparent.
BACKGROUND
To compare the positive proportions of hepatitis B e antigen (HBeAg) and HBV DNA, a retrospective case-control study was performed between 27 chronic hepatitis B (CHB) patients with RA and 108 age- and gender-matched CHB patients. In addition, the positivity rates of hepatitis B surface antigen (HBsAg) and hepatitis B core antibody (anti-HBc) were surveyed among the 892 RA patients.
METHODS
Compared to CHB patients, CHB patients with RA exhibited lower rates of HBeAg positivity (11.1% vs. 35.2%, P = 0.003), HBV DNA positivity (37.0% vs. 63.9%, P = 0.007) and ALT elevation (11.1% vs. 35.2%, P = 0.024). In the 892 RA patients, the prevalence of HBsAg (3.0%) was lower than that reported in the Chinese national data (7.2%), whereas the anti-HBc positivity rate of 44.6% was higher than that of 34.1%.
RESULTS
HBV infection status was altered after suffering from RA. Compared to the matched CHB patients, low positive proportions of HBeAg and HBV DNA were observed for CHB patients with RA.
CONCLUSION
[ "Arthritis, Rheumatoid", "Case-Control Studies", "DNA, Viral", "Hepatitis B", "Hepatitis B Antibodies", "Hepatitis B Surface Antigens", "Hepatitis B e Antigens", "Hepatitis B virus", "Hepatitis B, Chronic", "Humans", "Retrospective Studies" ]
9229421
Introduction
It was observed that HBV infection affected rheumatoid arthritis (RA), where HBV was considered the suspected trigger for arthritis in genetically susceptible individuals [1]. The rates of positivity for RF and ACPA were as high as 14.4% and 4.1%, respectively, in patients with chronic hepatitis B (CHB) [2]. Hepatitis B core antigen (HBcAg) was found in the synovium of RA patients with CHB, indicating that HBV may be involved in the pathogenesis of local lesions [3]. In RA patients, immune dysregulation and immunosuppressive therapies also influence HBV infection [4, 5]. HBV infection with a high endemicity was reported in various regions of the Asia–Pacific and sub-Saharan Africa [6, 7]. It also affects approximately 10 million people in China [8]. Unfortunately, the HBV infection rates have been reported to be different for RA patients in previous studies. Yilmaz et al. reported a lower HBV infection prevalence in RA patients according to Turkish national data in comparison with the general population [9]. Mahroum et al. performed a case–control study and found that RA patients had a greater proportion of chronic HBV infection than age- and sex-matched controls [10]. Hsu et al. observed that RA patients were characterized by an increased risk of HBV infection when compared with that of the ≥ 18 years-old non-RA cohort [11]. The reasons for these differences were complicated, especially when there were no studies assessing the HBV infection status in RA patients, including HBeAg-positivity, HBV DNA load and ALT level. Herein, a case–control study was performed to clarify the effect of RA on HBV infection status. The positivity rates of hepatitis B e antigen (HBeAg) were compared between the RA patients with CHB and the age- and gender-matched general CHB patients, in addition to the positivity rates of HBV DNA.
null
null
Results
Basic characteristics of the CHB patients with RA: A total of 27 (3.0%) RA patients were HBsAg-positive and enrolled in this study. The baseline data were collected before disease-modifying antirheumatic drug (DMARD) treatment in 26 of the patients. One patient had received methotrexate and hydroxychloroquine before data collection but had discontinued for 11 months. Among the 27 patients, seven (25.9%) had a HBV family history, seventeen (63.0%) had a HBV infection duration of more than 10 years, and one (3.7%) had a RA duration of more than 10 years (Table 1). Table 1. Basic characteristics of the RA patients with HBV infection: Age, years* 52.0 (28.0–74.0); Gender, female (%) 19 (70.4); Duration of RA, years* 3.0 (0.2–18.0) [> 10: 1 (3.7); 1–10: 19 (70.4); < 1: 7 (25.9)]; HBV family history (%) 7 (25.9); Duration of HBV infection, years [> 20: 11 (40.7); 10–20: 6 (22.2); 1–10: 2 (7.4); unknown: 8 (29.6)]; ALT, U/L* 15.0 (7.0–476.0); HBV DNA > 10^2 IU/mL (%) 10 (37.0); HBeAg-positive (%) 3 (11.1). ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV hepatitis B virus; RA rheumatoid arthritis. *The values were expressed as the median (range). Low proportions of HBeAg-positive and HBV DNA-positive in patients with CHB and RA: As shown in the methods, 108 age- and gender-matched CHB outpatients were enrolled from the Department of Infectious Disease over the same period in this case–control study. In CHB patients with RA, the proportion of HBeAg(+) patients was 11.1% (3/27), which was much lower than that of the matched CHB patients (11.1% vs. 35.2%; OR 0.128; 95% CI 0.033–0.493; P = 0.003; Fig. 1A). The titers of HBeAg were lower in the RA patients than in the CHB patients [− 0.5 (− 0.6 to 3.2) vs. − 0.4 (− 0.6 to 3.2), Z = − 4.517, P < 0.001]. For the three HBeAg(+) patients, the titers of HBeAg were 0.2, 1.4 and 3.2 log10 s/co, respectively. The corresponding value for the 38 HBeAg(+) CHB patients was 1.6 (0.02–3.2) log10 s/co (Fig. 1B). Fig. 1. The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradient rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis. The HBV DNA load represents the degree of HBV replication. The positivity rate of HBV DNA in the RA patients was less than that in the matched CHB patients (37.0% vs. 63.9%; OR 0.244; 95% CI 0.089–0.674; P = 0.007; Fig. 1C). In addition, the HBV DNA load in the RA patients was lower than that in the CHB patients (1.5 ± 2.4 vs. 4.4 ± 3.3 log10 IU/mL, t = − 3.859, P = 0.001, Fig. 1D). Elevated ALT indicates hepatitis activity. Compared to the matched CHB patients, the proportion of elevated ALT (> 40 U/L) was significantly lower in the RA patients (11.1% vs. 35.2%; OR 0.233; 95% CI 0.066–0.824; P = 0.024; Fig. 1E), together with the level of ALT [15.0 (7.0–97.0) vs. 22.0 (10.0–476.0), Z = − 2.066, P = 0.039, Fig. 1F]. Low prevalence of HBsAg and high prevalence of anti-HBc in RA patients: With respect to the rate of HBsAg positivity, the second Chinese National Hepatitis Seroepidemiological Survey demonstrated that the rate was 7.2% in 2006 [8]. Then, the Polaris Observatory Collaborators developed models for 120 countries and estimated that the prevalence of HBsAg in China in 2016 was 6.1% (5.5–6.9%) [14]. Based on 27 included studies, Wang et al. estimated a prevalence of 6.89% (5.84–7.95%) for HBV infection in the general population of China from 2013 to 2017 [15]. In the present work, 3.0% of RA patients (27/892) were HBsAg-positive, which is lower than the above reported data (Fig. 2A). Fig. 2. The comparison of HBsAg(+) rates and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis. Anti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. The anti-HBc(+) rate was 44.6% (398/892) in RA patients, higher than that of the Chinese National Hepatitis Seroepidemiological Survey (44.6% vs. 34.1%, Fig. 2B).
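The odds ratios above (for HBeAg positivity, HBV DNA positivity and elevated ALT) come from conditional logistic regression on the 1:4 age- and gender-matched sets. As a rough cross-check of the arithmetic, a crude (unmatched) 2x2 odds ratio with a Wald 95% CI can be computed from the reported counts; it is not expected to reproduce the matched estimates exactly. A minimal sketch, using the HBeAg counts (3/27 vs. 38/108) given in the text:

```python
# Rough cross-check of the HBeAg comparison using a crude (unmatched) 2x2 odds
# ratio with a Wald 95% CI. The paper's OR of 0.128 comes from conditional
# logistic regression on 1:4 matched sets, so the crude value (about 0.23 here)
# is not expected to match it exactly.
import math

# Counts reported in the results: 3/27 HBeAg(+) in CHB patients with RA,
# 38/108 HBeAg(+) in the matched CHB controls.
a, b = 3, 27 - 3      # CHB with RA: HBeAg positive, HBeAg negative
c, d = 38, 108 - 38   # matched CHB: HBeAg positive, HBeAg negative

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"crude OR = {odds_ratio:.3f}, Wald 95% CI {lo:.3f}-{hi:.3f}")
```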
null
null
[ "Methods", "Study design", "Laboratory methods", "Statistical analysis", "Basic characteristics of the CHB patients with RA", "Low proportions of HBeAg-positive and HBV DNA-positive in patients with CHB and RA", "Low prevalence of HBsAg and high prevalence of anti-HBc in RA patients" ]
[ "Study design This was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 2017.120), and the patients gave their written informed consent.\nThe medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment.\nThe primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP.\nThis was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. 
The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 2017.120), and the patients gave their written informed consent.\nThe medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment.\nThe primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP.\nLaboratory methods The titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan).\nThe titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. 
Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan).\nStatistical analysis The analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. A P value < 0.05 was considered statistically significant.\nThe analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. A P value < 0.05 was considered statistically significant.", "This was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 2017.120), and the patients gave their written informed consent.\nThe medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. 
All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment.\nThe primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP.", "The titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan).", "The analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. A P value < 0.05 was considered statistically significant.", "A total of 27 (3.0%) RA patients were HBsAg-positive and enrolled in this study. The baseline data were collected before disease modifying antirheumatic drugs (DMARDs) treatment in the 26 patients. One patient had methotrexate and hydroxychloroquine treatment before data collection but had discontinued for 11 months. Among the 27 patients, seven patients (25.9%) had a HBV family history, seventeen patients (63.0%) had a HBV infection duration of more than 10 years, and one patient (3.7%) had a RA duration of more than 10 years (Table 1).Table 1Basic characteristics of the RA patients with HBV infectionVariableValueAge, years*52.0 (28.0–74.0)Gender, female (%)19 (70.4)Duration of RA, years*3.0 (0.2–18.0) > 101 (3.7) 1–1019 (70.4) < 17 (25.9)HBV family history (%)7 (25.9)Duration of HBV infection, years > 2011 (40.7) 10–206 (22.2) 1–102 (7.4) Unknown8 (29.6)ALT, U/L*15.0 (7.0–476.0)HBV DNA > 102 IU/mL (%)10 (37.0)HBeAg-positive (%)3 (11.1)ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV Hepatitis B virus; RA rheumatoid arthritis*The values were expressed as the median (range)\nBasic characteristics of the RA patients with HBV infection\nALT alanine transaminase; HBeAg hepatitis B e antigen; HBV Hepatitis B virus; RA rheumatoid arthritis\n*The values were expressed as the median (range)", "As shown in the methods, 108 age- and gender-matched CHB outpatients were enrolled from the Department of Infectious Disease over the same period in this case–control study. In CHB patients with RA, the proportion of HBeAg(+) patients was 11.1% (3/27), which was much lower than that of the matched CHB patients (11.1% vs. 35.2%; OR 0.128; 95% CI 0.033–0.493; P = 0.003; Fig. 1A). 
The titers of HBeAg were lower in the RA patients than in the CHB patients [− 0.5 (− 0.6 to 3.2) vs. − 0.4 (− 0.6 to 3.2), Z = − 4.517, P < 0.001]. For three HBeAg(+) patients, the titers of HBeAg were 0.2, 1.4 and 3.2 log10 s/co, respectively. The corresponding value for the 38 CHB patients was 1.6 (0.02–3.2) log10 s/co (Fig. 1B).Fig. 1The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradients rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic Hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis\nThe comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradients rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic Hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis\nThe HBV DNA load represents the degree of HBV replication. The positivity rate of HBV DNA in the RA patients was less than that in the matched CHB patients (37.0% vs. 63.9%; OR 0.244; 95% CI 0.089–0.674; P = 0.007; Fig. 1C). In addition, the HBV DNA load in the RA patients was lower than that in the CHB patients (1.5 ± 2.4 vs. 4.4 ± 3.3 log10 IU/mL, t = − 3.859, P = 0.001, Fig. 1D).\nThe elevated ALT showed hepatitis activity. Compared to the matched CHB patients, the proportion of elevated ALT (> 40 U/L) was significantly lower in the RA patients (11.1% vs. 35.2%; OR 0.233; 95% CI 0.066–0.824; P = 0.024; Fig. 1E), together with the level of ALT [15.0 (7.0–97.0) vs. 22.0 (10.0–476.0), Z = − 2.066, P = 0.039, Fig. 1F].", "With respect to the rate of HBsAg-positivity, the second Chinese National Hepatitis Seroepidemiological Survey demonstrated that the rate was 7.2% in 2006 (Fig. 1A) [8]. Then, the Polaris Observatory Collaborators developed models for 120 countries, and estimated that the prevalence of HBsAg in China in 2016 was 6.1% (5.5–6.9%) [14]. Based on the 27 included studies, Wang et al. estimated prevalence of 6.89% (5.84–7.95%) for HBV infection in the general population of China from 2013 to 2017 [15]. In the present work, 3.0% of RA patients (27/892) were HBsAg-positive. This is lower than the above reported data (Fig. 2A).Fig. 2The comparison of HBsAg(+) rate and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis\nThe comparison of HBsAg(+) rate and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. 
anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis\nAnti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. The anti-HBc(+) rate was 44.6% (398/892) in RA patients, higher than the data of the Chinese National Hepatitis Seroepidemiological Survey (44.6% vs. 34.1%, Fig. 2B)." ]
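The Statistical analysis text in the methods above (conditional logistic regression for the proportions; Shapiro–Wilk and Levene checks; then a paired-samples t-test or Wilcoxon signed-rank test, all run in SPSS 13.0) maps onto standard library calls. The following is a minimal Python sketch of that test-selection logic only; the arrays are placeholder values, not study data, and conditional logistic regression is omitted, so this is not a reproduction of the study's analysis.

```python
# Illustrative Python analogue of the test-selection logic described in the
# Statistical analysis subsection (the study itself used SPSS 13.0). The two
# arrays are placeholder values, not study data; they stand for one measurement
# per matched pair (e.g. log10 HBV DNA in an RA case and a matched control).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ra_cases = rng.normal(loc=1.5, scale=2.0, size=27)           # placeholder values
matched_controls = rng.normal(loc=4.4, scale=3.0, size=27)   # placeholder values

# Normality (Shapiro-Wilk) and homogeneity of variance (Levene), as in the paper.
normal = all(stats.shapiro(x).pvalue > 0.05 for x in (ra_cases, matched_controls))
equal_var = stats.levene(ra_cases, matched_controls).pvalue > 0.05

if normal and equal_var:
    stat, p = stats.ttest_rel(ra_cases, matched_controls)    # paired-samples t-test
    test = "paired t-test"
else:
    stat, p = stats.wilcoxon(ra_cases, matched_controls)     # signed-rank Wilcoxon test
    test = "Wilcoxon signed-rank"
print(f"{test}: statistic = {stat:.3f}, p = {p:.4f}")
```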
[ null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Study design", "Laboratory methods", "Statistical analysis", "Results", "Basic characteristics of the CHB patients with RA", "Low proportions of HBeAg-positive and HBV DNA-positive in patients with CHB and RA", "Low prevalence of HBsAg and high prevalence of anti-HBc in RA patients", "Discussion" ]
[ "It was observed that HBV infection affected rheumatoid arthritis (RA), where HBV was considered the suspected trigger for arthritis in genetically susceptible individuals [1]. The rates of positivity for RF and ACPA were as high as 14.4% and 4.1%, respectively, in patients with chronic hepatitis B (CHB) [2]. Hepatitis B core antigen (HBcAg) was found in the synovium of RA patients with CHB, indicating that HBV may be involved in the pathogenesis of local lesions [3]. In RA patients, immune dysregulation and immunosuppressive therapies also influence HBV infection [4, 5].\nHBV infection with a high endemicity was reported in various regions of the Asia–Pacific and sub-Saharan Africa [6, 7]. It also affects approximately 10 million people in China [8]. Unfortunately, the HBV infection rates have been reported to be different for RA patients in previous studies. Yilmaz et al. reported a lower HBV infection prevalence in RA patients according to Turkish national data in comparison with the general population [9]. Mahroum et al. performed a case–control study and found that RA patients had a greater proportion of chronic HBV infection than age- and sex-matched controls [10]. Hsu et al. observed that RA patients were characterized by an increased risk of HBV infection when compared with that of the ≥ 18 years-old non-RA cohort [11]. The reasons for these differences were complicated, especially when there were no studies assessing the HBV infection status in RA patients, including HBeAg-positivity, HBV DNA load and ALT level.\nHerein, a case–control study was performed to clarify the effect of RA on HBV infection status. The positivity rates of hepatitis B e antigen (HBeAg) were compared between the RA patients with CHB and the age- and gender-matched general CHB patients, in addition to the positivity rates of HBV DNA.", "Study design This was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. 
In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 2017.120), and the patients gave their written informed consent.\nThe medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment.\nThe primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP.\nThis was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 
2017.120), and the patients gave their written informed consent.\nThe medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment.\nThe primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP.\nLaboratory methods The titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan).\nThe titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan).\nStatistical analysis The analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. A P value < 0.05 was considered statistically significant.\nThe analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. A P value < 0.05 was considered statistically significant.", "This was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. 
The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 2017.120), and the patients gave their written informed consent.\nThe medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment.\nThe primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP.", "The titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan).", "The analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. 
A P value < 0.05 was considered statistically significant.", "Basic characteristics of the CHB patients with RA: A total of 27 (3.0%) RA patients were HBsAg-positive and enrolled in this study. The baseline data were collected before disease-modifying antirheumatic drug (DMARD) treatment in 26 of the 27 patients; one patient had received methotrexate and hydroxychloroquine before data collection but had discontinued them for 11 months. Among the 27 patients, seven (25.9%) had a HBV family history, seventeen (63.0%) had a HBV infection duration of more than 10 years, and one (3.7%) had a RA duration of more than 10 years (Table 1). Table 1 Basic characteristics of the RA patients with HBV infection: Age, years*: 52.0 (28.0–74.0); Gender, female (%): 19 (70.4); Duration of RA, years*: 3.0 (0.2–18.0), of which > 10 years: 1 (3.7), 1–10 years: 19 (70.4), < 1 year: 7 (25.9); HBV family history (%): 7 (25.9); Duration of HBV infection, years: > 20: 11 (40.7), 10–20: 6 (22.2), 1–10: 2 (7.4), unknown: 8 (29.6); ALT, U/L*: 15.0 (7.0–476.0); HBV DNA > 10^2 IU/mL (%): 10 (37.0); HBeAg-positive (%): 3 (11.1). ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV hepatitis B virus; RA rheumatoid arthritis. *Values are expressed as the median (range).\nLow proportions of HBeAg-positive and HBV DNA-positive in patients with CHB and RA: As shown in the methods, 108 age- and gender-matched CHB outpatients were enrolled from the Department of Infectious Disease over the same period in this case–control study. In CHB patients with RA, the proportion of HBeAg(+) patients was 11.1% (3/27), which was much lower than that of the matched CHB patients (11.1% vs. 35.2%; OR 0.128; 95% CI 0.033–0.493; P = 0.003; Fig. 1A). The titers of HBeAg were lower in the RA patients than in the CHB patients [− 0.5 (− 0.6 to 3.2) vs. − 0.4 (− 0.6 to 3.2), Z = − 4.517, P < 0.001]. For the three HBeAg(+) RA patients, the titers of HBeAg were 0.2, 1.4 and 3.2 log10 s/co, respectively; the corresponding value for the 38 HBeAg(+) CHB patients was 1.6 (0.02–3.2) log10 s/co (Fig. 1B).
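For orientation only, the snippet below recomputes a crude (unmatched) odds ratio and 95% CI from the counts just reported (3/27 HBeAg-positive CHB patients with RA vs. 38/108 matched CHB patients). This is not the analysis used in the paper: the reported OR of 0.128 comes from conditional logistic regression on the matched sets, so the crude value differs.

```python
# Crude odds ratio with a Woolf (log-OR) 95% confidence interval, for illustration only.
import math

a, b = 3, 27 - 3      # HBeAg-positive / HBeAg-negative among CHB patients with RA
c, d = 38, 108 - 38   # HBeAg-positive / HBeAg-negative among matched CHB patients

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"crude OR = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
# Prints roughly 0.230 (0.065-0.815); the matched analysis in the paper gives 0.128 (0.033-0.493).
```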
Fig. 1 The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of the rates of different HBV DNA gradients; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis\nThe HBV DNA load represents the degree of HBV replication. The positivity rate of HBV DNA in the RA patients was lower than that in the matched CHB patients (37.0% vs. 63.9%; OR 0.244; 95% CI 0.089–0.674; P = 0.007; Fig. 1C). In addition, the HBV DNA load in the RA patients was lower than that in the CHB patients (1.5 ± 2.4 vs. 4.4 ± 3.3 log10 IU/mL, t = − 3.859, P = 0.001, Fig. 1D).\nElevated ALT indicates hepatitis activity. Compared to the matched CHB patients, the proportion of elevated ALT (> 40 U/L) was significantly lower in the RA patients (11.1% vs. 35.2%; OR 0.233; 95% CI 0.066–0.824; P = 0.024; Fig. 1E), as was the ALT level [15.0 (7.0–97.0) vs. 22.0 (10.0–476.0) U/L, Z = − 2.066, P = 0.039, Fig. 1F].\nLow prevalence of HBsAg and high prevalence of anti-HBc in RA patients: With respect to the rate of HBsAg positivity, the second Chinese National Hepatitis Seroepidemiological Survey reported a rate of 7.2% in 2006 (Fig. 2A) [8]. The Polaris Observatory Collaborators then developed models for 120 countries and estimated that the prevalence of HBsAg in China in 2016 was 6.1% (5.5–6.9%) [14]. Based on 27 included studies, Wang et al. estimated a prevalence of 6.89% (5.84–7.95%) for HBV infection in the general population of China from 2013 to 2017 [15]. In the present work, 3.0% of the RA patients (27/892) were HBsAg-positive, which is lower than the rates reported above (Fig. 2A).
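The HBsAg and anti-HBc contrasts with the general population are presented descriptively in the paper. Purely as an illustration (and assuming scipy >= 1.7 for binomtest), the snippet below shows how the observed 27/892 HBsAg-positive RA patients could be tested against a fixed reference prevalence such as the 7.2% figure from the 2006 national survey; this test is not part of the published analysis.

```python
# Hypothetical one-sample check of the RA cohort's HBsAg positivity against a fixed reference rate.
from scipy import stats

positives, n_ra = 27, 892
reference_rate = 0.072  # Chinese general population, 2006 survey [8]

test = stats.binomtest(positives, n_ra, reference_rate, alternative="two-sided")
ci = test.proportion_ci(confidence_level=0.95)
print(f"observed rate = {positives / n_ra:.1%}, 95% CI = ({ci.low:.1%}, {ci.high:.1%}), p = {test.pvalue:.3g}")
```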
Fig. 2 The comparison of HBsAg(+) and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis\nAnti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. The anti-HBc(+) rate was 44.6% (398/892) in the RA patients, higher than that reported by the Chinese National Hepatitis Seroepidemiological Survey (44.6% vs. 34.1%, Fig. 2B).", "A total of 27 (3.0%) RA patients were HBsAg-positive and enrolled in this study. The baseline data were collected before disease-modifying antirheumatic drug (DMARD) treatment in 26 of the 27 patients; one patient had received methotrexate and hydroxychloroquine before data collection but had discontinued them for 11 months. Among the 27 patients, seven (25.9%) had a HBV family history, seventeen (63.0%) had a HBV infection duration of more than 10 years, and one (3.7%) had a RA duration of more than 10 years (Table 1). Table 1 Basic characteristics of the RA patients with HBV infection: Age, years*: 52.0 (28.0–74.0); Gender, female (%): 19 (70.4); Duration of RA, years*: 3.0 (0.2–18.0), of which > 10 years: 1 (3.7), 1–10 years: 19 (70.4), < 1 year: 7 (25.9); HBV family history (%): 7 (25.9); Duration of HBV infection, years: > 20: 11 (40.7), 10–20: 6 (22.2), 1–10: 2 (7.4), unknown: 8 (29.6); ALT, U/L*: 15.0 (7.0–476.0); HBV DNA > 10^2 IU/mL (%): 10 (37.0); HBeAg-positive (%): 3 (11.1). ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV hepatitis B virus; RA rheumatoid arthritis. *Values are expressed as the median (range).", "As shown in the methods, 108 age- and gender-matched CHB outpatients were enrolled from the Department of Infectious Disease over the same period in this case–control study. In CHB patients with RA, the proportion of HBeAg(+) patients was 11.1% (3/27), which was much lower than that of the matched CHB patients (11.1% vs. 35.2%; OR 0.128; 95% CI 0.033–0.493; P = 0.003; Fig. 1A). The titers of HBeAg were lower in the RA patients than in the CHB patients [− 0.5 (− 0.6 to 3.2) vs. − 0.4 (− 0.6 to 3.2), Z = − 4.517, P < 0.001]. For the three HBeAg(+) RA patients, the titers of HBeAg were 0.2, 1.4 and 3.2 log10 s/co, respectively; the corresponding value for the 38 HBeAg(+) CHB patients was 1.6 (0.02–3.2) log10 s/co (Fig. 1B).
Fig. 1 The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of the rates of different HBV DNA gradients; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis\nThe HBV DNA load represents the degree of HBV replication. The positivity rate of HBV DNA in the RA patients was lower than that in the matched CHB patients (37.0% vs. 63.9%; OR 0.244; 95% CI 0.089–0.674; P = 0.007; Fig. 1C). In addition, the HBV DNA load in the RA patients was lower than that in the CHB patients (1.5 ± 2.4 vs. 4.4 ± 3.3 log10 IU/mL, t = − 3.859, P = 0.001, Fig. 1D).\nElevated ALT indicates hepatitis activity. Compared to the matched CHB patients, the proportion of elevated ALT (> 40 U/L) was significantly lower in the RA patients (11.1% vs. 35.2%; OR 0.233; 95% CI 0.066–0.824; P = 0.024; Fig. 1E), as was the ALT level [15.0 (7.0–97.0) vs. 22.0 (10.0–476.0) U/L, Z = − 2.066, P = 0.039, Fig. 1F].", "With respect to the rate of HBsAg positivity, the second Chinese National Hepatitis Seroepidemiological Survey reported a rate of 7.2% in 2006 (Fig. 2A) [8]. The Polaris Observatory Collaborators then developed models for 120 countries and estimated that the prevalence of HBsAg in China in 2016 was 6.1% (5.5–6.9%) [14]. Based on 27 included studies, Wang et al. estimated a prevalence of 6.89% (5.84–7.95%) for HBV infection in the general population of China from 2013 to 2017 [15]. In the present work, 3.0% of the RA patients (27/892) were HBsAg-positive, which is lower than the rates reported above (Fig. 2A). Fig. 2 The comparison of HBsAg(+) and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis\nAnti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. The anti-HBc(+) rate was 44.6% (398/892) in the RA patients, higher than that reported by the Chinese National Hepatitis Seroepidemiological Survey (44.6% vs. 34.1%, Fig. 2B).", "For RA patients, the reported differences in HBV infection rates may be associated with alteration of the HBV infection status after the onset of RA. To elucidate this issue, the current case–control study was performed.
Compared to the age- and gender-matched general CHB patients, low proportions of positivity for HBeAg and HBV DNA were observed in the CHB patients with RA.\nHBeAg positivity often represents a highly replicative phase of chronic HBV infection, and HBeAg loss is considered partial immune control of chronic HBV infection [17]. In this study, CHB patients with RA exhibited lower HBeAg positivity rates and HBeAg titers than the matched CHB patients. HBV DNA directly indicates HBV replication. The positivity rate of HBV DNA in the CHB patients with RA was lower than that of the matched CHB patients, as was the HBV DNA load. This demonstrated that CHB patients with RA had a higher probability of HBeAg seroconversion and HBV DNA load decline, which may be associated with immune control after the onset of RA.\nImmune dysregulation is a characteristic of RA. It plays a complicated role in HBV infection through the abnormal innate and adaptive immune activation of RA patients. First, type I interferons (IFNs) play a critical role in defending against HBV, and the type I interferon signature is detectable in the peripheral blood of RA patients [18]. Second, CD8+ T cells are capable of controlling HBV infection and eliminating HBV-infected cells [19]. In RA patients, CD8+ T cells are abundant and associated with disease activity, owing to pro-inflammatory cytokine production [20] and responses to self-antigens upon cross-presentation [21]. Third, the humoral immune response has a protective role against pathogens [22]. Abnormalities in B cells not only participate in the pathogenesis of RA [23] (including the production of autoantibodies, presentation of autoantigens and secretion of proinflammatory cytokines) [24], but also affect HBV elimination.\nElevated ALT is an important characteristic of immune clearance [25], and an inactive HBsAg carrier status can be attained after immune clearance. Compared to the matched CHB patients, a low proportion of ALT > 40 U/L and low ALT levels were found in the CHB patients with RA. This suggests that CHB patients with RA are more likely to achieve immune control of HBV after immune clearance.\nHBsAg positivity is a definite marker of HBV infection. In the present work, a low HBsAg(+) rate of 3.0% was found in RA patients, which is consistent with a previous study [9]. However, this prevalence of HBsAg positivity was lower than in other studies [10, 11, 26], which may be associated with differences in patient age, region, sample size and HBsAg testing methods. Our data indicated that RA patients exhibited a low HBsAg(+) rate relative to the second Chinese National Hepatitis Seroepidemiological Survey [8] and the estimated prevalence of HBsAg in China [14, 15]. Hepatitis B core antigen (HBcAg) is an inner nucleocapsid component, and the production of anti-HBc is induced by the cellular and humoral immune response to HBcAg during natural HBV infection. Anti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. We found that the positivity rate of anti-HBc was 44.6% in the RA patients, higher than the rate in the Chinese general population reported in the national data [8]. Consistent with previous studies [10, 11], RA patients may have a higher risk of HBV infection than the general population, owing to the receipt of disease-modifying antirheumatic drugs and the complicated immunity related to the disease itself [4, 5]. During the natural history of HBV, HBsAg seroclearance can emerge in 0.5–1.0% of patients per year after the immune clearance phase [27].
A low HBsAg positivity rate was thus expected to indicate that HBsAg seroclearance was more common in RA patients. The ages of susceptibility differ between RA and CHB. Mother-to-infant transmission has been the main route of HBV infection in China; HBV acquired early in life confers a high risk of chronicity [28]. In contrast, RA occurs much more frequently in elderly women. Hence, RA onset was presumed to occur later than HBV infection in most patients. We speculated that the HBV infection status was altered after the onset of RA. Among the 27 patients in this study, 63.0% had a HBV infection duration of more than 10 years, and 96.3% had a RA duration of less than 10 years, indicating that the majority developed RA after HBV infection. After the onset of RA, HBV DNA declines, and HBeAg and even HBsAg may be lost.\nThis study has limitations: the number of patients was limited, and a prospective cohort study with a large sample size is necessary to evaluate the difference in the natural history of chronic HBV infection between RA patients and general CHB patients.\nIn conclusion, the HBV infection status was altered after the onset of RA. Compared to the matched CHB patients, the variations were significant, including low positive proportions of HBeAg and HBV DNA, due to the immune dysregulation of RA patients." ]
[ "introduction", null, null, null, null, "results", null, null, null, "discussion" ]
[ "Rheumatoid arthritis", "HBV", "Chronic hepatitis B", "HBeAg", "HBV DNA" ]
Introduction: It was observed that HBV infection affected rheumatoid arthritis (RA), where HBV was considered the suspected trigger for arthritis in genetically susceptible individuals [1]. The rates of positivity for RF and ACPA were as high as 14.4% and 4.1%, respectively, in patients with chronic hepatitis B (CHB) [2]. Hepatitis B core antigen (HBcAg) was found in the synovium of RA patients with CHB, indicating that HBV may be involved in the pathogenesis of local lesions [3]. In RA patients, immune dysregulation and immunosuppressive therapies also influence HBV infection [4, 5]. HBV infection with a high endemicity was reported in various regions of the Asia–Pacific and sub-Saharan Africa [6, 7]. It also affects approximately 10 million people in China [8]. Unfortunately, the HBV infection rates have been reported to be different for RA patients in previous studies. Yilmaz et al. reported a lower HBV infection prevalence in RA patients according to Turkish national data in comparison with the general population [9]. Mahroum et al. performed a case–control study and found that RA patients had a greater proportion of chronic HBV infection than age- and sex-matched controls [10]. Hsu et al. observed that RA patients were characterized by an increased risk of HBV infection when compared with that of the ≥ 18 years-old non-RA cohort [11]. The reasons for these differences were complicated, especially when there were no studies assessing the HBV infection status in RA patients, including HBeAg-positivity, HBV DNA load and ALT level. Herein, a case–control study was performed to clarify the effect of RA on HBV infection status. The positivity rates of hepatitis B e antigen (HBeAg) were compared between the RA patients with CHB and the age- and gender-matched general CHB patients, in addition to the positivity rates of HBV DNA. Methods: Study design This was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. 
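As a side note for readers who want to replicate the 1:4 age- and gender-matching, the sketch below shows one possible greedy nearest-age strategy in Python/pandas. The data frame layout, column names and matching rule are illustrative assumptions; the paper does not describe the exact matching procedure used.

```python
# Illustrative greedy 1:4 matching on gender and nearest age (not the authors' procedure).
import pandas as pd

def match_controls(cases: pd.DataFrame, controls: pd.DataFrame, ratio: int = 4) -> pd.DataFrame:
    """For each case, select `ratio` unused controls of the same gender with the closest age."""
    available = controls.copy()
    matched = []
    for _, case in cases.iterrows():
        pool = available[available["gender"] == case["gender"]].copy()
        pool["age_diff"] = (pool["age"] - case["age"]).abs()
        chosen = pool.nsmallest(ratio, "age_diff")
        matched.append(chosen.assign(matched_case_id=case["case_id"]))
        available = available.drop(chosen.index)  # each control is used at most once
    return pd.concat(matched, ignore_index=True)

# Toy usage with made-up records.
cases = pd.DataFrame({"case_id": [1, 2], "age": [52, 61], "gender": ["F", "F"]})
controls = pd.DataFrame({
    "control_id": range(100, 120),
    "age": [50, 53, 55, 60, 62, 58, 49, 51, 64, 59] * 2,
    "gender": ["F"] * 10 + ["M"] * 10,
})
print(match_controls(cases, controls))
```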
In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 2017.120), and the patients gave their written informed consent. The medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment. The primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP. This was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 2017.120), and the patients gave their written informed consent. 
The medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment. The primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP. Laboratory methods The titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan). The titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan). Statistical analysis The analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. A P value < 0.05 was considered statistically significant. The analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. A P value < 0.05 was considered statistically significant. Study design: This was a retrospective case–control study. A total of 27 CHB patients with RA were enrolled from the Department of Rheumatology and Immunology, First Affiliated Hospital of Xi’an Jiaotong University, from January 1st 2016 to December 31st 2019. Inclusion criteria: (i) HBsAg was positive for more than 6 months; (ii) patients fulfilled ACR/EULAR 2010 rheumatoid arthritis classification criteria. 
The exclusion criteria were serologic human immunodeficiency virus (HIV), hepatitis C (HCV) or hepatitis D virus (HDV) positivity and cirrhosis, liver cancer or fatty liver disease. The diagnosis of cirrhosis was based on a physical examination, biochemical parameters (liver function test, full blood count and prothrombin time) and imageological examination (ultrasonic tests, CT, MR imaging or liver stiffness measurements). A liver biopsy was implemented in cases where the above tests reveal inconclusive results. The fatty liver disease was determined by the evidence of hepatic steatosis, either by imaging or by histology [12]. Arthritis is one of the extrahepatic manifestations in the patients with HBV infection [13]. We can identify RA from arthritis associated with HBV infection by the history of HBV infection, joint deformities, other organ/tissue lesions or deposition in the synovium of circulating immune complexes containing HBsAg-anti-HBs. To exclude the effect of antiviral therapy on HBV infection status, RA patients who accepted antiviral treatment were not included in the matched case–control study. During the corresponding period, age- and gender-matched CHB outpatients were enrolled at a 1:4 ratio from the Department of Infectious Disease. In addition, the positivity rates of HBsAg and anti-HBc were surveyed among the 892 RA patients over the corresponding period. This study was conducted in accordance with the Declaration of Helsinki, the protocol was approved by the Ethics Committee of First Affiliated Hospital of Xi’an Jiaotong University (No. 2017.120), and the patients gave their written informed consent. The medical records of patients were reviewed, and the data of the following variables were collected: age, sex, diagnosis, duration of disease, HBsAg, HBeAg, anti-HBc, ALT, and HBV DNA load. All the test data of RA patients were collected at the first visit, and the data of CHB patients were collected before their antiviral treatment. The primary outcome was the comparison of the positivity rates of HBeAg and HBV DNA between the CHB patients with RA and the age- and gender-matched CHB patients. The secondary outcomes were the following: (i) the comparison of the elevated ALT rates between the CHB patients with RA and the CHB patients; (ii) the comparison of the HBeAg titer, HBV DNA load and ALT levels between CHB patients with RA and the CHB patients; (iii) the comparison of HBsAg-positivity rates between the RA patients and Chinese general population (CGP); and (iv) the comparison of anti-HBc-positivity rates between the RA patients and CGP. Laboratory methods: The titers of HBsAg, HBeAg and anti-HBc were quantified by Abbott ARCHITECT assays (Abbott Laboratories, Chicago, IL, USA). The lower limit of detection for HBsAg was 0.05 IU/mL, HBeAg was 1 s/co and anti-HBc was 1 s/co. The HBV DNA load was measured by the Roche COBAS AmpliPrep/COBAS TaqMan HBV test (Roche Molecular Systems, California, IL, USA), and the lower limit of detection was 12 IU/mL. Liver function tests were performed with an automated bioanalyzer (Olympus AU5400, Japan). Statistical analysis: The analyses were performed by SPSS software 13.0 (SPSS Inc. Chicago, IL, USA). Conditional logistic regression was used to compare the proportions between the two groups. Quantitative data were analyzed with the Shapiro–Wilk test and Levene statistic for normality and homogeneity of variance, respectively. 
According to the situation, the paired-samples t-test or signed rank Wilcoxon test was used to evaluate differences between two groups. A P value < 0.05 was considered statistically significant. Results: Basic characteristics of the CHB patients with RA A total of 27 (3.0%) RA patients were HBsAg-positive and enrolled in this study. The baseline data were collected before disease modifying antirheumatic drugs (DMARDs) treatment in the 26 patients. One patient had methotrexate and hydroxychloroquine treatment before data collection but had discontinued for 11 months. Among the 27 patients, seven patients (25.9%) had a HBV family history, seventeen patients (63.0%) had a HBV infection duration of more than 10 years, and one patient (3.7%) had a RA duration of more than 10 years (Table 1).Table 1Basic characteristics of the RA patients with HBV infectionVariableValueAge, years*52.0 (28.0–74.0)Gender, female (%)19 (70.4)Duration of RA, years*3.0 (0.2–18.0) > 101 (3.7) 1–1019 (70.4) < 17 (25.9)HBV family history (%)7 (25.9)Duration of HBV infection, years > 2011 (40.7) 10–206 (22.2) 1–102 (7.4) Unknown8 (29.6)ALT, U/L*15.0 (7.0–476.0)HBV DNA > 102 IU/mL (%)10 (37.0)HBeAg-positive (%)3 (11.1)ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV Hepatitis B virus; RA rheumatoid arthritis*The values were expressed as the median (range) Basic characteristics of the RA patients with HBV infection ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV Hepatitis B virus; RA rheumatoid arthritis *The values were expressed as the median (range) A total of 27 (3.0%) RA patients were HBsAg-positive and enrolled in this study. The baseline data were collected before disease modifying antirheumatic drugs (DMARDs) treatment in the 26 patients. One patient had methotrexate and hydroxychloroquine treatment before data collection but had discontinued for 11 months. Among the 27 patients, seven patients (25.9%) had a HBV family history, seventeen patients (63.0%) had a HBV infection duration of more than 10 years, and one patient (3.7%) had a RA duration of more than 10 years (Table 1).Table 1Basic characteristics of the RA patients with HBV infectionVariableValueAge, years*52.0 (28.0–74.0)Gender, female (%)19 (70.4)Duration of RA, years*3.0 (0.2–18.0) > 101 (3.7) 1–1019 (70.4) < 17 (25.9)HBV family history (%)7 (25.9)Duration of HBV infection, years > 2011 (40.7) 10–206 (22.2) 1–102 (7.4) Unknown8 (29.6)ALT, U/L*15.0 (7.0–476.0)HBV DNA > 102 IU/mL (%)10 (37.0)HBeAg-positive (%)3 (11.1)ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV Hepatitis B virus; RA rheumatoid arthritis*The values were expressed as the median (range) Basic characteristics of the RA patients with HBV infection ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV Hepatitis B virus; RA rheumatoid arthritis *The values were expressed as the median (range) Low proportions of HBeAg-positive and HBV DNA-positive in patients with CHB and RA As shown in the methods, 108 age- and gender-matched CHB outpatients were enrolled from the Department of Infectious Disease over the same period in this case–control study. In CHB patients with RA, the proportion of HBeAg(+) patients was 11.1% (3/27), which was much lower than that of the matched CHB patients (11.1% vs. 35.2%; OR 0.128; 95% CI 0.033–0.493; P = 0.003; Fig. 1A). The titers of HBeAg were lower in the RA patients than in the CHB patients [− 0.5 (− 0.6 to 3.2) vs. − 0.4 (− 0.6 to 3.2), Z = − 4.517, P < 0.001]. 
For three HBeAg(+) patients, the titers of HBeAg were 0.2, 1.4 and 3.2 log10 s/co, respectively. The corresponding value for the 38 CHB patients was 1.6 (0.02–3.2) log10 s/co (Fig. 1B).Fig. 1The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradients rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic Hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradients rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic Hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis The HBV DNA load represents the degree of HBV replication. The positivity rate of HBV DNA in the RA patients was less than that in the matched CHB patients (37.0% vs. 63.9%; OR 0.244; 95% CI 0.089–0.674; P = 0.007; Fig. 1C). In addition, the HBV DNA load in the RA patients was lower than that in the CHB patients (1.5 ± 2.4 vs. 4.4 ± 3.3 log10 IU/mL, t = − 3.859, P = 0.001, Fig. 1D). The elevated ALT showed hepatitis activity. Compared to the matched CHB patients, the proportion of elevated ALT (> 40 U/L) was significantly lower in the RA patients (11.1% vs. 35.2%; OR 0.233; 95% CI 0.066–0.824; P = 0.024; Fig. 1E), together with the level of ALT [15.0 (7.0–97.0) vs. 22.0 (10.0–476.0), Z = − 2.066, P = 0.039, Fig. 1F]. As shown in the methods, 108 age- and gender-matched CHB outpatients were enrolled from the Department of Infectious Disease over the same period in this case–control study. In CHB patients with RA, the proportion of HBeAg(+) patients was 11.1% (3/27), which was much lower than that of the matched CHB patients (11.1% vs. 35.2%; OR 0.128; 95% CI 0.033–0.493; P = 0.003; Fig. 1A). The titers of HBeAg were lower in the RA patients than in the CHB patients [− 0.5 (− 0.6 to 3.2) vs. − 0.4 (− 0.6 to 3.2), Z = − 4.517, P < 0.001]. For three HBeAg(+) patients, the titers of HBeAg were 0.2, 1.4 and 3.2 log10 s/co, respectively. The corresponding value for the 38 CHB patients was 1.6 (0.02–3.2) log10 s/co (Fig. 1B).Fig. 1The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradients rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic Hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradients rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. 
ALT alanine transaminase; CHB chronic Hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis The HBV DNA load represents the degree of HBV replication. The positivity rate of HBV DNA in the RA patients was less than that in the matched CHB patients (37.0% vs. 63.9%; OR 0.244; 95% CI 0.089–0.674; P = 0.007; Fig. 1C). In addition, the HBV DNA load in the RA patients was lower than that in the CHB patients (1.5 ± 2.4 vs. 4.4 ± 3.3 log10 IU/mL, t = − 3.859, P = 0.001, Fig. 1D). The elevated ALT showed hepatitis activity. Compared to the matched CHB patients, the proportion of elevated ALT (> 40 U/L) was significantly lower in the RA patients (11.1% vs. 35.2%; OR 0.233; 95% CI 0.066–0.824; P = 0.024; Fig. 1E), together with the level of ALT [15.0 (7.0–97.0) vs. 22.0 (10.0–476.0), Z = − 2.066, P = 0.039, Fig. 1F]. Low prevalence of HBsAg and high prevalence of anti-HBc in RA patients With respect to the rate of HBsAg-positivity, the second Chinese National Hepatitis Seroepidemiological Survey demonstrated that the rate was 7.2% in 2006 (Fig. 1A) [8]. Then, the Polaris Observatory Collaborators developed models for 120 countries, and estimated that the prevalence of HBsAg in China in 2016 was 6.1% (5.5–6.9%) [14]. Based on the 27 included studies, Wang et al. estimated prevalence of 6.89% (5.84–7.95%) for HBV infection in the general population of China from 2013 to 2017 [15]. In the present work, 3.0% of RA patients (27/892) were HBsAg-positive. This is lower than the above reported data (Fig. 2A).Fig. 2The comparison of HBsAg(+) rate and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis The comparison of HBsAg(+) rate and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis Anti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. The anti-HBc(+) rate was 44.6% (398/892) in RA patients, higher than the data of the Chinese National Hepatitis Seroepidemiological Survey (44.6% vs. 34.1%, Fig. 2B). With respect to the rate of HBsAg-positivity, the second Chinese National Hepatitis Seroepidemiological Survey demonstrated that the rate was 7.2% in 2006 (Fig. 1A) [8]. Then, the Polaris Observatory Collaborators developed models for 120 countries, and estimated that the prevalence of HBsAg in China in 2016 was 6.1% (5.5–6.9%) [14]. Based on the 27 included studies, Wang et al. estimated prevalence of 6.89% (5.84–7.95%) for HBV infection in the general population of China from 2013 to 2017 [15]. In the present work, 3.0% of RA patients (27/892) were HBsAg-positive. This is lower than the above reported data (Fig. 2A).Fig. 2The comparison of HBsAg(+) rate and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. 
anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis The comparison of HBsAg(+) rate and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis Anti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. The anti-HBc(+) rate was 44.6% (398/892) in RA patients, higher than the data of the Chinese National Hepatitis Seroepidemiological Survey (44.6% vs. 34.1%, Fig. 2B). Basic characteristics of the CHB patients with RA: A total of 27 (3.0%) RA patients were HBsAg-positive and enrolled in this study. The baseline data were collected before disease modifying antirheumatic drugs (DMARDs) treatment in the 26 patients. One patient had methotrexate and hydroxychloroquine treatment before data collection but had discontinued for 11 months. Among the 27 patients, seven patients (25.9%) had a HBV family history, seventeen patients (63.0%) had a HBV infection duration of more than 10 years, and one patient (3.7%) had a RA duration of more than 10 years (Table 1).Table 1Basic characteristics of the RA patients with HBV infectionVariableValueAge, years*52.0 (28.0–74.0)Gender, female (%)19 (70.4)Duration of RA, years*3.0 (0.2–18.0) > 101 (3.7) 1–1019 (70.4) < 17 (25.9)HBV family history (%)7 (25.9)Duration of HBV infection, years > 2011 (40.7) 10–206 (22.2) 1–102 (7.4) Unknown8 (29.6)ALT, U/L*15.0 (7.0–476.0)HBV DNA > 102 IU/mL (%)10 (37.0)HBeAg-positive (%)3 (11.1)ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV Hepatitis B virus; RA rheumatoid arthritis*The values were expressed as the median (range) Basic characteristics of the RA patients with HBV infection ALT alanine transaminase; HBeAg hepatitis B e antigen; HBV Hepatitis B virus; RA rheumatoid arthritis *The values were expressed as the median (range) Low proportions of HBeAg-positive and HBV DNA-positive in patients with CHB and RA: As shown in the methods, 108 age- and gender-matched CHB outpatients were enrolled from the Department of Infectious Disease over the same period in this case–control study. In CHB patients with RA, the proportion of HBeAg(+) patients was 11.1% (3/27), which was much lower than that of the matched CHB patients (11.1% vs. 35.2%; OR 0.128; 95% CI 0.033–0.493; P = 0.003; Fig. 1A). The titers of HBeAg were lower in the RA patients than in the CHB patients [− 0.5 (− 0.6 to 3.2) vs. − 0.4 (− 0.6 to 3.2), Z = − 4.517, P < 0.001]. For three HBeAg(+) patients, the titers of HBeAg were 0.2, 1.4 and 3.2 log10 s/co, respectively. The corresponding value for the 38 CHB patients was 1.6 (0.02–3.2) log10 s/co (Fig. 1B).Fig. 1The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradients rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. 
ALT alanine transaminase; CHB chronic Hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis The comparison of HBV infection status between the RA patients and the age- and sex-matched CHB patients: A The comparison of HBeAg(+) rates; B The comparison of HBeAg titers; C The comparison of different HBV DNA gradients rates; D The comparison of HBV DNA load; E The comparison of elevated ALT rates; F The comparison of ALT levels. ALT alanine transaminase; CHB chronic Hepatitis B; HBeAg hepatitis B e antigen; HBsAg hepatitis B surface antigen; HBV hepatitis B virus; RA rheumatoid arthritis The HBV DNA load represents the degree of HBV replication. The positivity rate of HBV DNA in the RA patients was less than that in the matched CHB patients (37.0% vs. 63.9%; OR 0.244; 95% CI 0.089–0.674; P = 0.007; Fig. 1C). In addition, the HBV DNA load in the RA patients was lower than that in the CHB patients (1.5 ± 2.4 vs. 4.4 ± 3.3 log10 IU/mL, t = − 3.859, P = 0.001, Fig. 1D). The elevated ALT showed hepatitis activity. Compared to the matched CHB patients, the proportion of elevated ALT (> 40 U/L) was significantly lower in the RA patients (11.1% vs. 35.2%; OR 0.233; 95% CI 0.066–0.824; P = 0.024; Fig. 1E), together with the level of ALT [15.0 (7.0–97.0) vs. 22.0 (10.0–476.0), Z = − 2.066, P = 0.039, Fig. 1F]. Low prevalence of HBsAg and high prevalence of anti-HBc in RA patients: With respect to the rate of HBsAg-positivity, the second Chinese National Hepatitis Seroepidemiological Survey demonstrated that the rate was 7.2% in 2006 (Fig. 1A) [8]. Then, the Polaris Observatory Collaborators developed models for 120 countries, and estimated that the prevalence of HBsAg in China in 2016 was 6.1% (5.5–6.9%) [14]. Based on the 27 included studies, Wang et al. estimated prevalence of 6.89% (5.84–7.95%) for HBV infection in the general population of China from 2013 to 2017 [15]. In the present work, 3.0% of RA patients (27/892) were HBsAg-positive. This is lower than the above reported data (Fig. 2A).Fig. 2The comparison of HBsAg(+) rate and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis The comparison of HBsAg(+) rate and anti-HBc(+) rates between the RA patients and the Chinese general population: A The comparison of HBsAg(+) rates; B The comparison of anti-HBc(+) rates. *The CGP data: CGP in 2006 [8], CGP in 2016 [14] and CGP from 2013 to 2017 [15]. anti-HBc hepatitis B core antibody; CGP Chinese general population; HBsAg hepatitis B surface antigen; RA rheumatoid arthritis Anti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. The anti-HBc(+) rate was 44.6% (398/892) in RA patients, higher than the data of the Chinese National Hepatitis Seroepidemiological Survey (44.6% vs. 34.1%, Fig. 2B). Discussion: For RA patients, the reported difference in HBV infection rates may be associated with the alteration of HBV infection status after suffering from RA. To elucidate this issue, the current case–control study was performed. Compared to the age- and gender-matched general CHB patients, low proportions of positivity for HBeAg and HBV DNA were observed for the CHB patients with RA. 
HBeAg positivity often represents a high replicative phase of chronic HBV infection, and HBeAg loss is considered partial immune control of chronic HBV infection [17]. In this study, CHB patients with RA exhibited lower positivity rates of HBeAg and titers of HBeAg than matched CHB patients. HBV DNA directly indicates HBV replication. The positivity rate of HBV DNA in the CHB patients with RA was less than that of the matched CHB patients, as was the HBV DNA load. It demonstrated that CHB patients with RA had a higher probability of HBeAg seroconversion and HBV DNA load decline, which may be associated with immune control after suffering from RA. Immune dysregulation is the a characteristic of RA. It plays a complicated role in HBV infection for abnormal innate and adaptive immune activation in RA patients. First, type I interferons (IFNs) play a critical role in defending against HBV, and the type I interferon signature is detectable in the peripheral blood of RA patients [18]. Second, CD8+ T cells are capable of controlling HBV infection and eliminating HBV infected cells [19]. For RA patients, CD8+ T cells are abundant and associated with disease activity, due to pro-inflammatory cytokine production [20] and self-antigens responses upon cross-presentation [21]. Third, the humoral immune response has a protective role against pathogens [22]. Abnormalities in B cells not only participate in the pathogenesis of RA [23] (including the production of autoantibodies, presentation of autoantigens and secretion of proinflammatory cytokines) [24], but also affect HBV elimination. The elevated ALT is an important characteristic of immune clearance [25], and inactive HBsAg carrying status can be obtained after immune clearance. Compared to the matched CHB patients, low proportion of ALT > 40 U/L and low ALT levels were found for the CHB patients with RA. This suggested that CHB patients with RA were more prone to obtain immune control for HBV after immune clearance. HBsAg positivity is a definite HBV infection marker. In the present work, low HBsAg(+) rate of 3.0% was found in RA patients, which was consistent with a previous study [9]. However, the prevalence of HBsAg-positive was lower than other studies [10, 11, 26], which may be associated with the different ages, regions of patients, sample sizes and methods of HBsAg testing. Our data indicated that RA patients exhibited a low HBsAg(+) rate according to the second Chinese National Hepatitis Seroepidemiological Survey [8] and the estimated prevalence of HBsAg in China [14, 15]. Hepatitis B core antigen (HBcAg) is an inner nucleocapsid component, and the production of anti-HBc is induced by a cellular and humoral immune response to HBcAg during natural HBV infection. Anti-HBc positivity mostly occurs in chronic HBV infection or resolved infection [16]. We found that the positivity rate of anti-HBc was 44.6% in the RA patients, higher than the rate in the Chinese general population from the China national data [8]. Consistent with the previous studies [10, 11], RA patients may have a higher risk of HBV infection than the general population, due to receiving disease modifying antirheumatic drugs and complicated immunity related to the disease itself [4, 5]. During the natural history of HBV, HBsAg seroclearance can emerge in 0.5–1.0% patients per year after immune clearance phase [27]. It was expected that a low positivity rate of HBsAg would indicate that HBsAg seroclearance was more common in RA patients. 
The ages of susceptibility differ between RA and CHB. Mother-to-infant transmission has been the main route of HBV infection in China; HBV acquired early in life confers a high risk of chronicity [28]. In contrast, RA occurs much more frequently in elderly women. Hence, RA onset was presumed to occur later than HBV infection in most patients. We speculated that the HBV infection status was altered after the onset of RA. Among the 27 patients in this study, 63.0% had a HBV infection duration of more than 10 years, and 96.3% had a RA duration of less than 10 years, indicating that the majority developed RA after HBV infection. After the onset of RA, HBV DNA declines, and HBeAg and even HBsAg may be lost. This study has limitations: the number of patients was limited, and a prospective cohort study with a large sample size is necessary to evaluate the difference in the natural history of chronic HBV infection between RA patients and general CHB patients. In conclusion, the HBV infection status was altered after the onset of RA. Compared to the matched CHB patients, the variations were significant, including low positive proportions of HBeAg and HBV DNA, due to the immune dysregulation of RA patients.
Background: Reported rates of hepatitis B virus (HBV) infection in rheumatoid arthritis (RA) patients are conflicting. It was speculated that HBV infection status is altered after the onset of RA, and that variations in HBV infection rates then become apparent. Methods: To compare the positive proportions of hepatitis B e antigen (HBeAg) and HBV DNA, a retrospective case-control study was performed between 27 chronic hepatitis B (CHB) patients with RA and 108 age- and gender-matched CHB patients. In addition, the positivity rates of hepatitis B surface antigen (HBsAg) and hepatitis B core antibody (anti-HBc) were surveyed among 892 RA patients. Results: Compared to CHB patients, CHB patients with RA exhibited lower rates of HBeAg positivity (11.1% vs. 35.2%, P = 0.003), HBV DNA positivity (37.0% vs. 63.9%, P = 0.007) and ALT elevation (11.1% vs. 35.2%, P = 0.024). In the 892 RA patients, the prevalence of HBsAg (3.0%) was lower than that reported in the Chinese national data (7.2%), whereas the anti-HBc positivity rate of 44.6% was higher than the national rate of 34.1%. Conclusions: HBV infection status was altered after the onset of RA. Compared to the matched CHB patients, lower positive proportions of HBeAg and HBV DNA were observed in CHB patients with RA.
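The comparisons reported above are proportion comparisons between two groups of unequal size (27 CHB patients with RA vs. 108 matched CHB patients). As a hedged illustration only, the sketch below back-calculates approximate counts from the reported percentages (e.g., 11.1% of 27 ≈ 3 and 35.2% of 108 ≈ 38 for HBeAg positivity; these counts are assumptions, not figures taken from the study) and compares them with chi-square and Fisher's exact tests in Python. The original analysis may have used a different test or correction, so the resulting P values are not expected to reproduce those in the abstract.

```python
# Hedged sketch: comparing positivity proportions between two groups.
# Counts are back-calculated from the reported percentages and group
# sizes, so they are illustrative assumptions rather than source data.
from scipy.stats import chi2_contingency, fisher_exact

n_ra, n_chb = 27, 108  # CHB patients with RA vs. matched CHB patients
markers = {
    # marker: (positives in CHB+RA group, positives in matched CHB group)
    "HBeAg":   (3, 38),   # ~11.1% vs. ~35.2%
    "HBV DNA": (10, 69),  # ~37.0% vs. ~63.9%
}

for marker, (pos_ra, pos_chb) in markers.items():
    table = [[pos_ra, n_ra - pos_ra],
             [pos_chb, n_chb - pos_chb]]
    chi2, p_chi2, _, _ = chi2_contingency(table)  # Yates-corrected for 2x2 tables
    _, p_fisher = fisher_exact(table)             # exact test for small expected counts
    print(f"{marker}: {pos_ra}/{n_ra} vs {pos_chb}/{n_chb}; "
          f"chi-square p={p_chi2:.3f}, Fisher p={p_fisher:.3f}")
```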
null
null
7,607
281
[ 1557, 567, 114, 92, 288, 597, 378 ]
10
[ "patients", "hbv", "ra", "chb", "comparison", "ra patients", "hbsag", "hepatitis", "chb patients", "hbeag" ]
[ "hepatitis antigen hbv", "hbv infection prevalence", "ra patients hbv", "arthritis ra hbv", "arthritis comparison hbv" ]
null
null
null
[CONTENT] Rheumatoid arthritis | HBV | Chronic hepatitis B | HBeAg | HBV DNA [SUMMARY]
null
[CONTENT] Rheumatoid arthritis | HBV | Chronic hepatitis B | HBeAg | HBV DNA [SUMMARY]
null
[CONTENT] Rheumatoid arthritis | HBV | Chronic hepatitis B | HBeAg | HBV DNA [SUMMARY]
null
[CONTENT] Arthritis, Rheumatoid | Case-Control Studies | DNA, Viral | Hepatitis B | Hepatitis B Antibodies | Hepatitis B Surface Antigens | Hepatitis B e Antigens | Hepatitis B virus | Hepatitis B, Chronic | Humans | Retrospective Studies [SUMMARY]
null
[CONTENT] Arthritis, Rheumatoid | Case-Control Studies | DNA, Viral | Hepatitis B | Hepatitis B Antibodies | Hepatitis B Surface Antigens | Hepatitis B e Antigens | Hepatitis B virus | Hepatitis B, Chronic | Humans | Retrospective Studies [SUMMARY]
null
[CONTENT] Arthritis, Rheumatoid | Case-Control Studies | DNA, Viral | Hepatitis B | Hepatitis B Antibodies | Hepatitis B Surface Antigens | Hepatitis B e Antigens | Hepatitis B virus | Hepatitis B, Chronic | Humans | Retrospective Studies [SUMMARY]
null
[CONTENT] hepatitis antigen hbv | hbv infection prevalence | ra patients hbv | arthritis ra hbv | arthritis comparison hbv [SUMMARY]
null
[CONTENT] hepatitis antigen hbv | hbv infection prevalence | ra patients hbv | arthritis ra hbv | arthritis comparison hbv [SUMMARY]
null
[CONTENT] hepatitis antigen hbv | hbv infection prevalence | ra patients hbv | arthritis ra hbv | arthritis comparison hbv [SUMMARY]
null
[CONTENT] patients | hbv | ra | chb | comparison | ra patients | hbsag | hepatitis | chb patients | hbeag [SUMMARY]
null
[CONTENT] patients | hbv | ra | chb | comparison | ra patients | hbsag | hepatitis | chb patients | hbeag [SUMMARY]
null
[CONTENT] patients | hbv | ra | chb | comparison | ra patients | hbsag | hepatitis | chb patients | hbeag [SUMMARY]
null
[CONTENT] hbv | ra | patients | infection | hbv infection | ra patients | chb | reported | rates | positivity [SUMMARY]
null
[CONTENT] patients | comparison | ra | hbv | hepatitis | fig | chb | cgp | hbeag | ra patients [SUMMARY]
null
[CONTENT] patients | ra | hbv | chb | comparison | ra patients | chb patients | hbsag | infection | hbeag [SUMMARY]
null
[CONTENT] RA ||| HBV | RA | HBV [SUMMARY]
null
[CONTENT] CHB | CHB | RA | HBeAg | 11.1% | 35.2% | 0.003 | 37.0% | 63.9% | 0.007 | ALT | 11.1% | 35.2% | 0.024 ||| 892 | RA | HBsAg | 3.0% | Chinese | 7.2% | 44.6% | 34.1% [SUMMARY]
null
[CONTENT] RA ||| HBV | RA | HBV ||| HBeAg | between 27 | CHB | RA | 108 | CHB ||| 892 | RA ||| CHB | CHB | RA | HBeAg | 11.1% | 35.2% | 0.003 | 37.0% | 63.9% | 0.007 | ALT | 11.1% | 35.2% | 0.024 ||| 892 | RA | HBsAg | 3.0% | Chinese | 7.2% | 44.6% | 34.1% ||| RA ||| CHB | HBeAg | CHB | RA [SUMMARY]
null
A comparison of the Nottingham Health Profile and Short Form 36 Health Survey in patients with chronic lower limb ischaemia in a longitudinal perspective.
14969590
Generic quality of life instruments such as the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) have yielded conflicting results regarding their psychometric attributes in short-term evaluations of patients with chronic lower limb ischaemia. The aim of this study was to compare the NHP and the SF-36 with respect to internal consistency reliability, validity, responsiveness and suitability as outcome measures in patients with lower limb ischaemia in a longitudinal perspective.
BACKGROUND
Forty-eight patients with intermittent claudication and 42 with critical ischaemia were included. Assessments were made before and one year after revascularization using comparable domains of the NHP and SF-36 questionnaires.
METHODS
The SF-36 was less skewed and more homogeneous than the NHP. There was acceptable convergent validity in three of the five comparable domains one year postoperatively. The SF-36 showed higher internal consistency, except for social functioning one year postoperatively, and was more responsive in detecting changes over time in patients with intermittent claudication. The NHP was more sensitive in discriminating among levels of ischaemia with regard to pain and better able to detect changes in the critical ischaemia group.
RESULTS
Both the SF-36 and the NHP showed acceptable reliability for group-level comparisons, as well as convergent and construct validity, one year postoperatively. Nevertheless, the SF-36 had superior psychometric properties and was more suitable for patients with intermittent claudication. The NHP, however, discriminated better among severities of ischaemia and was more responsive in patients with critical ischaemia.
CONCLUSION
[ "Aged", "Female", "Humans", "Intermittent Claudication", "Ischemia", "Longitudinal Studies", "Lower Extremity", "Male", "Middle Aged", "Outcome and Process Assessment, Health Care", "Psychometrics", "Quality of Life", "Reproducibility of Results", "Sensitivity and Specificity", "Sickness Impact Profile", "Surveys and Questionnaires", "Sweden" ]
385253
Background
During the past few decades quality of life assessment has become one central outcome in treatment of patients with chronic lower limb ischaemia. Different generic quality of life instruments such as the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) [1,2] have previously been tested, revealing conflicting results in these patients according to psychometric attributes in short-term evaluations. The strengths and weakness of the NHP and the SF-36 scales are not extensively examined and further research is needed to establish which is the more appropriate and responsive quality of life instrument for patients with chronic lower limb ischaemia in the long term. The main goal of vascular surgical treatment is the relief of symptoms and improvement in patients quality of life. A majority of the patients are elderly and have generally widespread arterial disease with numbers of symptoms due to the chronic lower limb ischaemia, which may affect the patients' quality of life [3-5]. Intermittent claudication (IC) means leg pain constantly produced by walking or muscular activity and is relieved by rest, while critical leg ischaemia (CLI) means pain even at rest and problems with non-healing ulcers or gangrene [6]. It is important to identify dimensions which are influenced by the severity and nature of the disease when selecting a suitable quality of life instrument [7]. The World Health Organization QOL group [8] has identified and recommended five broad dimensions – physical and psychological health, social relationship perceptions, function and well-being – which should be included in a generic quality of life instrument. Generic instruments cover a broad range of dimensions and allow comparisons between different groups of patients. Disease-specific instruments, on the other hand, are specially designed for a particular disease, patient group or areas of function [9]. The functional scale, Walking Impairment Questionnaire (WIQ) [10] and quality of life instruments such as Intermittent Claudication Questionnaire (ICQ) [11] and Claudication Scale (CLAU-S) [12] are examples of disease-specific instruments which have been developed in recent years for patients with IC. However, at present there is no accepted disease-specific questionnaire for quality of life assessment in patients with CLI. Nevertheless, the TransAtlantic Inter-Society Consensus (TASC) [6] recommended that quality of life instruments should be used in all clinical trials and preferably include both generic and disease-specific quality of life measures. Outcome measures need to satisfy different criteria to be useful as a suitable health outcome instrument in clinical practice. Construct validity is one of the most important characteristics and is a lengthy and ongoing process [13]. An essential consideration is the instrument's ability to discriminate between different levels of the disease; another consideration is its reliability, which means the degree to which the instrument is free from random error and all items measure the same underlying attribute [14]. Further, the requirement for a useful outcome measure is the responsiveness in detecting small but important clinical changes of quality of life in patients following vascular interventions [13]. Finally the ideal quality of life instrument must also be acceptable to patients, simple and easy to use and preferably short. 
Comparisons among quality of life instruments and their psychometric characteristics and performance are needed to provide recommendations about their usefulness as outcome measures for these particular groups of patients. The aim of this study was to compare two generic quality of life questionnaires, the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) regarding the internal consistency reliability, validity, responsiveness and suitability as outcome measures in patients with lower limb ischaemia in a longitudinal perspective.
Methods
Patients Ninety consecutive patients from a Swedish vascular unit in southern Sweden were invited to participate in this study. The assessment took place before and 12 months after revascularization. Out of 90 patients, 24 (27%) dropped out during the follow-up period, of whom 14 suffered from CLI. Six patients (7%) died, 15 (17%) did not wish to participate and 3 (3%) had other concurrent diseases. The inclusion criteria were patients admitted for active treatment of documented lower limb ischaemia, having no communication problems and having no other disease restricting their walking capacity [1]. The severity of ischaemia was graded according to suggested standards for grading lower limb ischaemia [15]. Sixty-two (68.8%) patients were treated with a surgical bypass, 24 (26.6%) had a percutaneous angioplasty (PTA) and 4 (4.6%) had a surgical thromboendatherectomy (TEA) (Table 1). Routine medical history, risk factors and clinical examinations, which included ankle blood pressure (ABP) and ankle-brachial pressure index (ABPI), were obtained before and one year after revascularization in accordance with the Swedish Vascular Registry (Swedvasc) [16]. The questionnaire also contains questions about sex, age, housing and civil status. Demographic characteristics and clinical data were obtained from the patients' medical records. Demographic characteristics of the patient groups before revascularization (n = 90). 1Mann-Whitney U-test 2Chi-square test *Include the patients (n = 66) who completed the study one year postoperatively. P-value = <0.05 Ninety consecutive patients from a Swedish vascular unit in southern Sweden were invited to participate in this study. The assessment took place before and 12 months after revascularization. Out of 90 patients, 24 (27%) dropped out during the follow-up period, of whom 14 suffered from CLI. Six patients (7%) died, 15 (17%) did not wish to participate and 3 (3%) had other concurrent diseases. The inclusion criteria were patients admitted for active treatment of documented lower limb ischaemia, having no communication problems and having no other disease restricting their walking capacity [1]. The severity of ischaemia was graded according to suggested standards for grading lower limb ischaemia [15]. Sixty-two (68.8%) patients were treated with a surgical bypass, 24 (26.6%) had a percutaneous angioplasty (PTA) and 4 (4.6%) had a surgical thromboendatherectomy (TEA) (Table 1). Routine medical history, risk factors and clinical examinations, which included ankle blood pressure (ABP) and ankle-brachial pressure index (ABPI), were obtained before and one year after revascularization in accordance with the Swedish Vascular Registry (Swedvasc) [16]. The questionnaire also contains questions about sex, age, housing and civil status. Demographic characteristics and clinical data were obtained from the patients' medical records. Demographic characteristics of the patient groups before revascularization (n = 90). 1Mann-Whitney U-test 2Chi-square test *Include the patients (n = 66) who completed the study one year postoperatively. P-value = <0.05 Nottingham Health Profile The Nottingham Health Profile (NHP) was developed to be used in epidemiological studies of health and disease [17]. It consists of two parts. Part I contains 38 yes/no items in 6 dimensions: pain, physical mobility, emotional reactions, energy, social isolation and sleep. Part II contains 7 general yes/no questions concerning daily living problems. 
The two parts may be used independently and part II is not analysed in this study. Part I is scored using weighted values which give a range of possible scores from zero (no problems at all) to 100 (presence of all problems within a dimension). Swedish weights have been developed and used in this study [18]. The Swedish version has proved to be valid and reliable, for example, in patients with arthrosis of the hip joint [19] and in patients suffering from grave ventricular arrhythmias [20]. The NHP scale has also proved capable of measuring changes in perceived health following different treatments such as radical surgery for colorectal cancer [21] and after vascular interventions in lower limb ischaemia patients [4,22,23]. The Nottingham Health Profile (NHP) was developed to be used in epidemiological studies of health and disease [17]. It consists of two parts. Part I contains 38 yes/no items in 6 dimensions: pain, physical mobility, emotional reactions, energy, social isolation and sleep. Part II contains 7 general yes/no questions concerning daily living problems. The two parts may be used independently and part II is not analysed in this study. Part I is scored using weighted values which give a range of possible scores from zero (no problems at all) to 100 (presence of all problems within a dimension). Swedish weights have been developed and used in this study [18]. The Swedish version has proved to be valid and reliable, for example, in patients with arthrosis of the hip joint [19] and in patients suffering from grave ventricular arrhythmias [20]. The NHP scale has also proved capable of measuring changes in perceived health following different treatments such as radical surgery for colorectal cancer [21] and after vascular interventions in lower limb ischaemia patients [4,22,23]. Short Form 36 Health Survey The Short Form 36 Health Survey (SF-36) was developed by Ware et al [24] and designed to provide assessments involving generic health concepts that are not specific to age, disease or treatment group. It includes 36 items covering eight health concepts: bodily pain, physical functioning, role limitations due to physical problems, mental health, vitality, social functioning, role limitations due to emotional problems and general health. The response format is yes or no or in a three-to-six response scale. For each health concept questions scores are coded, summed and transformed on a scale from zero (worst health) to 100 (best health). In this study, the standard Swedish version was used [25]. The SF-36 has shown acceptable validity and reliability in population studies [26,27] and in various groups of patients, for example after stroke [28] and in patients with rheumatoid arthritis [29]. The SF-36 scale has also shown responsiveness to changes in health status over time in patients with critical ischaemia [30-32] and in patients with intermittent claudication following a revascularization [33-35]. The Short Form 36 Health Survey (SF-36) was developed by Ware et al [24] and designed to provide assessments involving generic health concepts that are not specific to age, disease or treatment group. It includes 36 items covering eight health concepts: bodily pain, physical functioning, role limitations due to physical problems, mental health, vitality, social functioning, role limitations due to emotional problems and general health. The response format is yes or no or in a three-to-six response scale. 
For each health concept questions scores are coded, summed and transformed on a scale from zero (worst health) to 100 (best health). In this study, the standard Swedish version was used [25]. The SF-36 has shown acceptable validity and reliability in population studies [26,27] and in various groups of patients, for example after stroke [28] and in patients with rheumatoid arthritis [29]. The SF-36 scale has also shown responsiveness to changes in health status over time in patients with critical ischaemia [30-32] and in patients with intermittent claudication following a revascularization [33-35]. Procedure The patients were asked by the head nurse to fill out the NHP and the SF-36 questionnaire during their admission before treatment. At the one-year follow-up, the questionnaire was sent home to the patients with a covering letter and a prepaid envelope. The Ethics Committee of Lund University approved the study (LU 470-98, Gbg M 098-98). The patients were asked by the head nurse to fill out the NHP and the SF-36 questionnaire during their admission before treatment. At the one-year follow-up, the questionnaire was sent home to the patients with a covering letter and a prepaid envelope. The Ethics Committee of Lund University approved the study (LU 470-98, Gbg M 098-98). Statistical analysis Differences in characteristics between patients with IC and with CLI before revascularization were analysed using Chi-squared test and Mann-Whitney U-test. The prevalence of the lowest ("floor" effect) and highest ("ceiling" effect) possible quality of life score in NHP and SF-36 was also calculated. Construct validity was evaluated for aspects of convergent and discriminant validity by the Multitrait-Multimethod matrix (MTMM) [13] based on five comparable domains, including pain, physical mobility, emotional reactions, energy and social isolation for the NHP and bodily pain, physical functioning, mental health, vitality and social functioning for the SF-36 (Table 2). Further, the Mann-Whitney U-test was used to examine the relative ability of the two instruments to discriminate among the degrees of severity of the peripheral vascular disease. Spearman's rank correlation coefficient was used to express the correlation between quality of life scores, ABPI, type of intervention and age. The internal consistency based on correlations between items for each scale was assessed with Cronbach's alpha [36]. The recommended reliability standard for group-level comparisons is a reliability coefficient of 0.70, while comparisons between individuals demands a reliability coefficient of 0.90 [25]. Comparable domains between the Nottingham Health Profile (NHP) and the Short Form 36 (SF-36) and number of items. The Wilcoxon Signed Ranks test was used to detect the responsiveness of within-subjects changes over time, before and one year after revascularization, in patients with IC and CLI. Data analysis was performed for overall comparisons using the statistical package SPSS 11.0 and a P value of <.05 was taken as statistically significant. Differences in characteristics between patients with IC and with CLI before revascularization were analysed using Chi-squared test and Mann-Whitney U-test. The prevalence of the lowest ("floor" effect) and highest ("ceiling" effect) possible quality of life score in NHP and SF-36 was also calculated. 
Construct validity was evaluated for aspects of convergent and discriminant validity by the Multitrait-Multimethod matrix (MTMM) [13] based on five comparable domains, including pain, physical mobility, emotional reactions, energy and social isolation for the NHP and bodily pain, physical functioning, mental health, vitality and social functioning for the SF-36 (Table 2). Further, the Mann-Whitney U-test was used to examine the relative ability of the two instruments to discriminate among the degrees of severity of the peripheral vascular disease. Spearman's rank correlation coefficient was used to express the correlation between quality of life scores, ABPI, type of intervention and age. The internal consistency based on correlations between items for each scale was assessed with Cronbach's alpha [36]. The recommended reliability standard for group-level comparisons is a reliability coefficient of 0.70, while comparisons between individuals demands a reliability coefficient of 0.90 [25]. Comparable domains between the Nottingham Health Profile (NHP) and the Short Form 36 (SF-36) and number of items. The Wilcoxon Signed Ranks test was used to detect the responsiveness of within-subjects changes over time, before and one year after revascularization, in patients with IC and CLI. Data analysis was performed for overall comparisons using the statistical package SPSS 11.0 and a P value of <.05 was taken as statistically significant.
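The scoring and analysis steps described in this Methods section (transforming domain sums to a 0–100 scale, quantifying floor and ceiling effects, Cronbach's alpha against the 0.70 group-level and 0.90 individual-level benchmarks, Mann-Whitney U tests between severity groups, Spearman correlations with ABPI, and Wilcoxon signed-rank tests for within-patient change) translate naturally into a short script. The sketch below is a minimal illustration on synthetic data with hypothetical variable names (`group`, `pre`, `post`, `abpi` are assumptions); it is not the authors' analysis code, and the SF-36/NHP scoring rules are only approximated here by a generic linear 0–100 transformation.

```python
# Minimal sketch of the analyses described in the Methods, run on
# synthetic data with hypothetical variable names (not the study's script).
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, wilcoxon, spearmanr

rng = np.random.default_rng(0)
n = 66  # number of patients completing the one-year follow-up

df = pd.DataFrame({
    "group": rng.choice(["IC", "CLI"], size=n),           # severity of ischaemia
    "pre":   rng.integers(0, 101, size=n).astype(float),  # domain score before revascularization
    "post":  rng.integers(0, 101, size=n).astype(float),  # domain score one year later
    "abpi":  rng.uniform(0.3, 1.1, size=n),               # ankle-brachial pressure index
})

def transform_0_100(raw_sum, raw_min, raw_max):
    """Linear transformation of a raw domain sum to a 0-100 scale."""
    return (raw_sum - raw_min) / (raw_max - raw_min) * 100

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents x items matrix (columns = items)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def floor_ceiling(scores: pd.Series, lo=0, hi=100):
    """Proportions of respondents at the lowest ('floor') and highest ('ceiling') possible score."""
    return (scores == lo).mean(), (scores == hi).mean()

# Example of the 0-100 transform: five 1-5 items summing to 12 -> 35.0
example_score = transform_0_100(raw_sum=12, raw_min=5, raw_max=25)

# Internal consistency on synthetic item responses (five 1-5 items);
# benchmark from the text: >=0.70 for group-level, >=0.90 for individual-level use.
items = pd.DataFrame(rng.integers(1, 6, size=(n, 5)),
                     columns=[f"item{i}" for i in range(1, 6)])
alpha = cronbach_alpha(items)

# Discriminative validity: IC vs CLI at baseline
_, p_discr = mannwhitneyu(df.loc[df.group == "IC", "pre"],
                          df.loc[df.group == "CLI", "pre"],
                          alternative="two-sided")

# Responsiveness: within-patient change before vs one year after revascularization
_, p_change = wilcoxon(df["pre"], df["post"])

# Convergence with a clinical measure
rho, p_rho = spearmanr(df["post"], df["abpi"])

print(f"example transform={example_score:.1f}, alpha={alpha:.2f}, "
      f"floor/ceiling={floor_ceiling(df['post'])}, IC vs CLI p={p_discr:.3f}, "
      f"pre vs post p={p_change:.3f}, rho(ABPI)={rho:.2f} (p={p_rho:.3f})")
```

Note that the NHP and SF-36 run in opposite directions (NHP 0 = no problems, SF-36 100 = best health), so which end counts as a "floor" or "ceiling" depends on the instrument; the Results reverse the NHP scores for consistency when making these comparisons.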
null
null
Conclusion
The findings indicate that both the SF-36 and the NHP have acceptable degrees of reliability for group-level comparisons, as well as convergent and construct validity, one year after revascularization. Nevertheless, the SF-36 seems generally to have superior psychometric properties and was more suitable than the NHP for evaluating quality of life in patients with intermittent claudication. The NHP, however, discriminated better among severities of ischaemia and was more responsive in detecting changes over time in patients with critical leg ischaemia.
[ "Background", "Patients", "Nottingham Health Profile", "Short Form 36 Health Survey", "Procedure", "Statistical analysis", "Results", "Validity", "Internal consistency", "Responsiveness", "Discussion", "Conclusion" ]
[ "During the past few decades quality of life assessment has become one central outcome in treatment of patients with chronic lower limb ischaemia. Different generic quality of life instruments such as the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) [1,2] have previously been tested, revealing conflicting results in these patients according to psychometric attributes in short-term evaluations. The strengths and weakness of the NHP and the SF-36 scales are not extensively examined and further research is needed to establish which is the more appropriate and responsive quality of life instrument for patients with chronic lower limb ischaemia in the long term. The main goal of vascular surgical treatment is the relief of symptoms and improvement in patients quality of life. A majority of the patients are elderly and have generally widespread arterial disease with numbers of symptoms due to the chronic lower limb ischaemia, which may affect the patients' quality of life [3-5]. Intermittent claudication (IC) means leg pain constantly produced by walking or muscular activity and is relieved by rest, while critical leg ischaemia (CLI) means pain even at rest and problems with non-healing ulcers or gangrene [6]. It is important to identify dimensions which are influenced by the severity and nature of the disease when selecting a suitable quality of life instrument [7].\nThe World Health Organization QOL group [8] has identified and recommended five broad dimensions – physical and psychological health, social relationship perceptions, function and well-being – which should be included in a generic quality of life instrument. Generic instruments cover a broad range of dimensions and allow comparisons between different groups of patients. Disease-specific instruments, on the other hand, are specially designed for a particular disease, patient group or areas of function [9]. The functional scale, Walking Impairment Questionnaire (WIQ) [10] and quality of life instruments such as Intermittent Claudication Questionnaire (ICQ) [11] and Claudication Scale (CLAU-S) [12] are examples of disease-specific instruments which have been developed in recent years for patients with IC. However, at present there is no accepted disease-specific questionnaire for quality of life assessment in patients with CLI. Nevertheless, the TransAtlantic Inter-Society Consensus (TASC) [6] recommended that quality of life instruments should be used in all clinical trials and preferably include both generic and disease-specific quality of life measures.\nOutcome measures need to satisfy different criteria to be useful as a suitable health outcome instrument in clinical practice. Construct validity is one of the most important characteristics and is a lengthy and ongoing process [13]. An essential consideration is the instrument's ability to discriminate between different levels of the disease; another consideration is its reliability, which means the degree to which the instrument is free from random error and all items measure the same underlying attribute [14]. Further, the requirement for a useful outcome measure is the responsiveness in detecting small but important clinical changes of quality of life in patients following vascular interventions [13]. Finally the ideal quality of life instrument must also be acceptable to patients, simple and easy to use and preferably short. 
Comparisons among quality of life instruments and their psychometric characteristics and performance are needed to provide recommendations about their usefulness as outcome measures for these particular groups of patients.\nThe aim of this study was to compare two generic quality of life questionnaires, the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) regarding the internal consistency reliability, validity, responsiveness and suitability as outcome measures in patients with lower limb ischaemia in a longitudinal perspective.", "Ninety consecutive patients from a Swedish vascular unit in southern Sweden were invited to participate in this study. The assessment took place before and 12 months after revascularization. Out of 90 patients, 24 (27%) dropped out during the follow-up period, of whom 14 suffered from CLI. Six patients (7%) died, 15 (17%) did not wish to participate and 3 (3%) had other concurrent diseases. The inclusion criteria were patients admitted for active treatment of documented lower limb ischaemia, having no communication problems and having no other disease restricting their walking capacity [1]. The severity of ischaemia was graded according to suggested standards for grading lower limb ischaemia [15]. Sixty-two (68.8%) patients were treated with a surgical bypass, 24 (26.6%) had a percutaneous angioplasty (PTA) and 4 (4.6%) had a surgical thromboendatherectomy (TEA) (Table 1). Routine medical history, risk factors and clinical examinations, which included ankle blood pressure (ABP) and ankle-brachial pressure index (ABPI), were obtained before and one year after revascularization in accordance with the Swedish Vascular Registry (Swedvasc) [16]. The questionnaire also contains questions about sex, age, housing and civil status. Demographic characteristics and clinical data were obtained from the patients' medical records.\nDemographic characteristics of the patient groups before revascularization (n = 90).\n1Mann-Whitney U-test 2Chi-square test *Include the patients (n = 66) who completed the study one year postoperatively. P-value = <0.05", "The Nottingham Health Profile (NHP) was developed to be used in epidemiological studies of health and disease [17]. It consists of two parts. Part I contains 38 yes/no items in 6 dimensions: pain, physical mobility, emotional reactions, energy, social isolation and sleep. Part II contains 7 general yes/no questions concerning daily living problems. The two parts may be used independently and part II is not analysed in this study. Part I is scored using weighted values which give a range of possible scores from zero (no problems at all) to 100 (presence of all problems within a dimension). Swedish weights have been developed and used in this study [18]. The Swedish version has proved to be valid and reliable, for example, in patients with arthrosis of the hip joint [19] and in patients suffering from grave ventricular arrhythmias [20]. The NHP scale has also proved capable of measuring changes in perceived health following different treatments such as radical surgery for colorectal cancer [21] and after vascular interventions in lower limb ischaemia patients [4,22,23].", "The Short Form 36 Health Survey (SF-36) was developed by Ware et al [24] and designed to provide assessments involving generic health concepts that are not specific to age, disease or treatment group. 
It includes 36 items covering eight health concepts: bodily pain, physical functioning, role limitations due to physical problems, mental health, vitality, social functioning, role limitations due to emotional problems and general health. The response format is yes or no or in a three-to-six response scale. For each health concept questions scores are coded, summed and transformed on a scale from zero (worst health) to 100 (best health). In this study, the standard Swedish version was used [25]. The SF-36 has shown acceptable validity and reliability in population studies [26,27] and in various groups of patients, for example after stroke [28] and in patients with rheumatoid arthritis [29]. The SF-36 scale has also shown responsiveness to changes in health status over time in patients with critical ischaemia [30-32] and in patients with intermittent claudication following a revascularization [33-35].", "The patients were asked by the head nurse to fill out the NHP and the SF-36 questionnaire during their admission before treatment. At the one-year follow-up, the questionnaire was sent home to the patients with a covering letter and a prepaid envelope. The Ethics Committee of Lund University approved the study (LU 470-98, Gbg M 098-98).", "Differences in characteristics between patients with IC and with CLI before revascularization were analysed using Chi-squared test and Mann-Whitney U-test. The prevalence of the lowest (\"floor\" effect) and highest (\"ceiling\" effect) possible quality of life score in NHP and SF-36 was also calculated.\nConstruct validity was evaluated for aspects of convergent and discriminant validity by the Multitrait-Multimethod matrix (MTMM) [13] based on five comparable domains, including pain, physical mobility, emotional reactions, energy and social isolation for the NHP and bodily pain, physical functioning, mental health, vitality and social functioning for the SF-36 (Table 2). Further, the Mann-Whitney U-test was used to examine the relative ability of the two instruments to discriminate among the degrees of severity of the peripheral vascular disease. Spearman's rank correlation coefficient was used to express the correlation between quality of life scores, ABPI, type of intervention and age. The internal consistency based on correlations between items for each scale was assessed with Cronbach's alpha [36]. The recommended reliability standard for group-level comparisons is a reliability coefficient of 0.70, while comparisons between individuals demands a reliability coefficient of 0.90 [25].\nComparable domains between the Nottingham Health Profile (NHP) and the Short Form 36 (SF-36) and number of items.\nThe Wilcoxon Signed Ranks test was used to detect the responsiveness of within-subjects changes over time, before and one year after revascularization, in patients with IC and CLI. Data analysis was performed for overall comparisons using the statistical package SPSS 11.0 and a P value of <.05 was taken as statistically significant.", "Forty-eight (53.3%) patients had IC of whom 26 (54%) were men. The remaining 42 (46.7%) suffered from CLI and 22 (52%) of them were men. There was a significant difference in age between the two groups with a mean age of 67 and 71 respectively (Table 1). One year postoperatively, sixty-six (73%) patients (38 with IC and 28 with CLI) remained in the study and secondary reconstructions were made on 7 (10%) patients during the follow-up. 
There were no significant differences at baseline in sex, age, ABPI and quality of life scores between the drop-out patients and the patients who completed the study. Further, there were no significant differences between the drop-outs and the remaining patients regarding the method of treatment or severity of ischaemia.\nAnalyses between the comparable domains showed that the NHP scores were more skewed than the SF-36 scores, especially in emotional reactions, energy and social isolation (Figure 1). The \"floor effect\", the proportion of individuals having the lowest possible scores (SF-36 = 0, NHP = 100), was larger for the NHP scale in energy one year (19.7%) after revascularization than for the SF-36. The \"ceiling effect\", the proportion of individuals having the best possible scores (SF-36 = 100, NHP = 0), was also larger for the NHP scale in emotional reactions (50.0%), energy (42.4%) and social isolation (71.2%) one year after revascularization (Table 3).\nFrequency distribution of scores on the NHP (left side) and comparable dimensions on the SF-36 (right side) one year after revascularization. NHP scores had 100 subtracted for consistency with SF-36\nReliability and \"floor\" and \"ceiling\" effects in comparable NHP and SF-36 scales before and one year after revascularization.\na The NHP scores are reversed for consistency with the SF-36. b Cronbach's α\n Validity The average convergent validity coefficients exceeded 0.5 one year postoperatively except for physical mobility and physical functioning (r = -0.46) and for social isolation and social functioning (r = -0.32), indicating a considerable convergence of the SF-36 and NHP (Table 4). One year postoperatively significant correlations between ABPI and physical functioning (r = 0.29) (SF-36), physical mobility (r = 0.42) and pain (r = 0.42) (NHP) were found. The severity of the ischaemia had a significant influence in the NHP-measured domain of pain (P < .003) and physical mobility (P < .03), indicating lower quality of life scores in patients with critical ischaemia. In the ability to discriminate between levels of ischaemia in the other comparable quality of life domains, no significant differences were found (Table 5).\nMultitrait-Multimethod matrix of correlation coefficient for the comparable scales of the SF-36 and NHP in patients with varying degrees of lower limb ischaemia one year after revascularization.\nCorrelation coefficients for comparable domains of the two questionnaires are shown in bold type Correlation coefficients are negative because the two scales run in opposite directions.\nDifferences in comparable domains in the NHP and the SF-36 between claudicants and patients with critical ischaemia before revascularization.\n*A higher score (100) indicates more perceived problems. ** A higher score (100) indicates fewer perceived problems. a P-value, claudicants versus critical ischaemia patients before revascularization. Tested by Mann-Whitney U-test. p-value = <0.05\nThe average convergent validity coefficients exceeded 0.5 one year postoperatively except for physical mobility and physical functioning (r = -0.46) and for social isolation and social functioning (r = -0.32), indicating a considerable convergence of the SF-36 and NHP (Table 4). One year postoperatively significant correlations between ABPI and physical functioning (r = 0.29) (SF-36), physical mobility (r = 0.42) and pain (r = 0.42) (NHP) were found. 
The severity of the ischaemia had a significant influence in the NHP-measured domain of pain (P < .003) and physical mobility (P < .03), indicating lower quality of life scores in patients with critical ischaemia. In the ability to discriminate between levels of ischaemia in the other comparable quality of life domains, no significant differences were found (Table 5).\nMultitrait-Multimethod matrix of correlation coefficient for the comparable scales of the SF-36 and NHP in patients with varying degrees of lower limb ischaemia one year after revascularization.\nCorrelation coefficients for comparable domains of the two questionnaires are shown in bold type Correlation coefficients are negative because the two scales run in opposite directions.\nDifferences in comparable domains in the NHP and the SF-36 between claudicants and patients with critical ischaemia before revascularization.\n*A higher score (100) indicates more perceived problems. ** A higher score (100) indicates fewer perceived problems. a P-value, claudicants versus critical ischaemia patients before revascularization. Tested by Mann-Whitney U-test. p-value = <0.05\n Internal consistency Physical functioning (α = 0.82), mental health (α = 0.76) and vitality (α = 0.70) for the SF-36 and pain (α = 0.71), emotional reactions (α = 0.76) and energy (α = 0.71) for the NHP scale were reliable, with coefficients >0.70 before revascularization. For the SF-36, all of the comparable domains except for social functioning (α = 0.64) exceeded the Cronbach's alpha value of 0.8 at the one-year follow-up. For the NHP the internal consistency coefficient was less than 0.8 but still exceeded 0.70 (Table 3).\nPhysical functioning (α = 0.82), mental health (α = 0.76) and vitality (α = 0.70) for the SF-36 and pain (α = 0.71), emotional reactions (α = 0.76) and energy (α = 0.71) for the NHP scale were reliable, with coefficients >0.70 before revascularization. For the SF-36, all of the comparable domains except for social functioning (α = 0.64) exceeded the Cronbach's alpha value of 0.8 at the one-year follow-up. For the NHP the internal consistency coefficient was less than 0.8 but still exceeded 0.70 (Table 3).\n Responsiveness The NHP scale and SF-36 were not equally good at detecting within-patient changes over time. In patients with IC the SF-36 scale showed significant improvements in bodily pain (P < .01) and in physical functioning (P < .001) and for the patients with CLI there were significant improvements in bodily pain (P < .004) at the one-year follow-up (Figure 2). The NHP scale showed no significant improvements in patients with IC, while in patients with CLI, significant improvements in pain (P < .001) and physical mobility (P < .03) were found (Figure 3).\nChanges in median score for the SF-36 in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates fewer perceived problems. BP = Bodily pain, PF = Physical functioning, MH = Mental health, VT = Vitality and SF = Social functioning.\nChanges in median score for the NHP in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates more perceived problems. P = Pain, PM = Physical mobility, EM = Emotional reactions, E = Energy and SO = Social isolation.\nThe NHP scale and SF-36 were not equally good at detecting within-patient changes over time. 
In patients with IC the SF-36 scale showed significant improvements in bodily pain (P < .01) and in physical functioning (P < .001) and for the patients with CLI there were significant improvements in bodily pain (P < .004) at the one-year follow-up (Figure 2). The NHP scale showed no significant improvements in patients with IC, while in patients with CLI, significant improvements in pain (P < .001) and physical mobility (P < .03) were found (Figure 3).\nChanges in median score for the SF-36 in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates fewer perceived problems. BP = Bodily pain, PF = Physical functioning, MH = Mental health, VT = Vitality and SF = Social functioning.\nChanges in median score for the NHP in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates more perceived problems. P = Pain, PM = Physical mobility, EM = Emotional reactions, E = Energy and SO = Social isolation.", "The average convergent validity coefficients exceeded 0.5 one year postoperatively except for physical mobility and physical functioning (r = -0.46) and for social isolation and social functioning (r = -0.32), indicating a considerable convergence of the SF-36 and NHP (Table 4). One year postoperatively significant correlations between ABPI and physical functioning (r = 0.29) (SF-36), physical mobility (r = 0.42) and pain (r = 0.42) (NHP) were found. The severity of the ischaemia had a significant influence in the NHP-measured domain of pain (P < .003) and physical mobility (P < .03), indicating lower quality of life scores in patients with critical ischaemia. In the ability to discriminate between levels of ischaemia in the other comparable quality of life domains, no significant differences were found (Table 5).\nMultitrait-Multimethod matrix of correlation coefficient for the comparable scales of the SF-36 and NHP in patients with varying degrees of lower limb ischaemia one year after revascularization.\nCorrelation coefficients for comparable domains of the two questionnaires are shown in bold type Correlation coefficients are negative because the two scales run in opposite directions.\nDifferences in comparable domains in the NHP and the SF-36 between claudicants and patients with critical ischaemia before revascularization.\n*A higher score (100) indicates more perceived problems. ** A higher score (100) indicates fewer perceived problems. a P-value, claudicants versus critical ischaemia patients before revascularization. Tested by Mann-Whitney U-test. p-value = <0.05", "Physical functioning (α = 0.82), mental health (α = 0.76) and vitality (α = 0.70) for the SF-36 and pain (α = 0.71), emotional reactions (α = 0.76) and energy (α = 0.71) for the NHP scale were reliable, with coefficients >0.70 before revascularization. For the SF-36, all of the comparable domains except for social functioning (α = 0.64) exceeded the Cronbach's alpha value of 0.8 at the one-year follow-up. For the NHP the internal consistency coefficient was less than 0.8 but still exceeded 0.70 (Table 3).", "The NHP scale and SF-36 were not equally good at detecting within-patient changes over time. 
In patients with IC the SF-36 scale showed significant improvements in bodily pain (P < .01) and in physical functioning (P < .001) and for the patients with CLI there were significant improvements in bodily pain (P < .004) at the one-year follow-up (Figure 2). The NHP scale showed no significant improvements in patients with IC, while in patients with CLI, significant improvements in pain (P < .001) and physical mobility (P < .03) were found (Figure 3).\nChanges in median score for the SF-36 in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates fewer perceived problems. BP = Bodily pain, PF = Physical functioning, MH = Mental health, VT = Vitality and SF = Social functioning.\nChanges in median score for the NHP in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates more perceived problems. P = Pain, PM = Physical mobility, EM = Emotional reactions, E = Energy and SO = Social isolation.", "The result showed that the SF-36 was less skewed and more homogeneous with lower \"floor\" and \"ceiling\" effects than the NHP. A considerable convergence in three of the five comparable domains one year postoperatively indicates an average convergent validity. The SF-36 showed a higher internal consistency except for social functioning one-year postoperatively and was more responsive in detecting changes over time in the IC group. The NHP was more sensitive in discriminating among levels of ischaemia regarding pain and more able to detect changes in the CLI group.\nThe attrition or loss of subjects (27%) in this study could have affected the outcome. Further analysis showed that there were no significant differences in quality of life, sex, age, method of treatment and severity of disease between the attrition group and those who completed the study. The fact that the NHP and SF-36 differ in their nature and content may limit the study design. Therefore the analyses in this study focused only on the comparable domains of the two instruments, including the basic domains of physical, mental and social health identified by the WHOQOL group [7]. A suitable quality of life instrument for patients with chronic lower limb ischaemia should not only be valid, reliable and responsive but also simple for elderly people to understand and complete. In this study there was no difference in response rate between the two instruments and both seemed to be user-friendly and took about 5–10 minutes to complete. The findings strengthen earlier results suggesting that both scales are practical and acceptable to use in elderly patients [37,38].\nA generic quality of life instrument, designed for a variety of populations and measuring a comprehensive set of health concepts, is likely to have problems with \"ceiling\" and \"floor\" effects [24]. In this study the NHP showed higher \"ceiling\" effects in all dimensions than the SF-36. There were minor \"floor\" effects in both the NHP and SF-36, indicating the lowest possible quality of life. This is in accordance to Klevsgård et al, [1] who also showed higher \"ceiling\" effects in social isolation, emotional reactions and energy for the NHP than the SF-36. Other studies have also reported fewer \"ceiling\" and \"floor\" effects in the SF-36 than in the NHP in patients with chronic obstructive pulmonary disease [39] and after a myocardial infarction [37]. 
The advantage of the SF-36 may be due to its use of a Likert-type response format with a number of possible different scores and its ability to detect positive as well negative states of health, whereas the NHP items are dichotomous and state more extreme ends of ill health [39]. This could mean that a patient with acceptable initial scores fails to improve even if the improvement is obvious. Furthermore, a false negative response will be more likely when a patient perceives to having perfect functioning on a measure that only assesses severe dysfunction. The result confirms the importance of finding a quality of life instrument that does have a spectrum of dimensions which match the patients with chronic lower limb ischaemia related to the presence of numerous and often severe comorbid conditions.\nIn this study the internal consistency was not as high as desirable for any of the instruments before revascularization, but both instruments exceeded the minimum internal consistency value of 0.7, except for social functioning in the SF-36 one year postoperatively. The SF-36, however, had considerably higher α-values in all other dimensions. Several studies have previously reported that the SF-36 has higher Cronbach's α values than the NHP, but the domains in which the highest and lowest values were estimated differ [37-40]. The findings suggest that it is not only the magnitude of the correlation among items, but also the number of items in the scale that affects the internal consistency. For example, the domains of pain and social functioning in the NHP contain 8 and 5 items respectively, while bodily pain and social functioning in the SF-36 contain only 2 items. This is further strengthened by the fact that both the scales were not sensitive enough to identify significant within-patients changes in social isolation and social functioning. Another explanation could be that the patients were a more homogeneous group before treatment, with similar problems which affected the quality of life, but one year postoperatively they have become more heterogeneous and represent different states of recovery [13]. Both instruments meet the reliability standards for group-level application in most respects, although none of them achieved the degree of reliability that be would be desirable in individual-based assessment.\nThe result in this study showed significant convergent correlation coefficients between scores of the comparable dimensions except for physical activity and social activity, indicating a considerable convergence of the NHP and SF-36 scale. Prieto et al [39] and Meyer-Rosberg et al [40] demonstrated similar results with an average convergent validity. Thus the NHP and SF-36 are relatively equal in the validity and corroborate that the subscales probably reflect similar impacts of chronic lower limb ischaemia. However, social isolation in the NHP showed a higher correlation with mental health in the SF-36 and might measure more psychological aspects of social life, whilst social functioning in the SF-36 tends to assess social activities according to the higher correlation with energy in NHP. Similarly the physical functioning in the SF-36 showed a higher correlation with energy and may reflect physical activities of daily living rather than physical mobility. 
This suggests that the SF-36 and NHP measure different aspects of physical and social activities.\nValidity in terms of the instruments' relative ability to discriminate among different levels of the ischaemia could only demonstrate that patients with CLI had significantly more problems with pain and physical mobility before treatment than patients with IC measured by the NHP. Klevsgård et al [1] showed similar results, that the NHP was more sensitive in discriminating deterioration in pain and physical mobility than the SF-36. In contrast, Brown et al [37] demonstrated that the SF-36 was more sensitive than the NHP for identifying people still troubled with angina or breathlessness after a myocardial infarction. Despite the lack of significant differences between patients with IC and patients with CLI, the NHP scale tends to be more sensitive in explaining the quality of life in this group of patients with regard to the dimension of pain and physical mobility. The important issue thus is to consider how well the measurement method explains health-related phenomena which are significant for the particular targeted disease or group of patients.\nThe SF-36 was the more responsive instrument in detecting changes in quality of life over time in patients with IC, including bodily pain and physical functioning one year postoperatively. However, in patients with CLI, the NHP was the more responsive instrument, with significant changes in pain and physical mobility, while the SF-36 showed a significant change only in bodily pain. Falcoz et al [38] also demonstrated that the SF-36 was more responsive than the NHP in detecting changes five weeks after cardiac surgery. In contrast, Klevsgård et al [1] showed that the NHP was more responsive in patients with chronic lower limb ischaemia one month after revascularization. The result of the present study supports the TransAtlantic Inter-Society Consensus (TASC) [12] recommendation that SF-36 should be used as a generic health outcome measure in patients with chronic lower limb ischaemia. It seems to be more sensitive for detecting changes in quality of life than the NHP in patients with IC. In the group of CLI patients who have more problems with mobility and pain, however, it is harder to evaluate whether the one questionnaire is superior to the other, the NHP could be a preferable instrument in this group of patients.", "The findings indicate that both the SF-36 and the NHP have acceptable degrees of reliability for group-level comparisons, convergent and construct validity one year after revascularization. Nevertheless, the SF-36 seems generally to have more superior psychometric properties and was more suitable than the NHP for evaluating quality of life in patients with intermittent claudication. The NHP, however, discriminated better among severity of ischaemia and was more responsive in detecting changes over time in patients with critical leg ischaemia." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Nottingham Health Profile", "Short Form 36 Health Survey", "Procedure", "Statistical analysis", "Results", "Validity", "Internal consistency", "Responsiveness", "Discussion", "Conclusion" ]
[ "During the past few decades quality of life assessment has become one central outcome in treatment of patients with chronic lower limb ischaemia. Different generic quality of life instruments such as the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) [1,2] have previously been tested, revealing conflicting results in these patients according to psychometric attributes in short-term evaluations. The strengths and weakness of the NHP and the SF-36 scales are not extensively examined and further research is needed to establish which is the more appropriate and responsive quality of life instrument for patients with chronic lower limb ischaemia in the long term. The main goal of vascular surgical treatment is the relief of symptoms and improvement in patients quality of life. A majority of the patients are elderly and have generally widespread arterial disease with numbers of symptoms due to the chronic lower limb ischaemia, which may affect the patients' quality of life [3-5]. Intermittent claudication (IC) means leg pain constantly produced by walking or muscular activity and is relieved by rest, while critical leg ischaemia (CLI) means pain even at rest and problems with non-healing ulcers or gangrene [6]. It is important to identify dimensions which are influenced by the severity and nature of the disease when selecting a suitable quality of life instrument [7].\nThe World Health Organization QOL group [8] has identified and recommended five broad dimensions – physical and psychological health, social relationship perceptions, function and well-being – which should be included in a generic quality of life instrument. Generic instruments cover a broad range of dimensions and allow comparisons between different groups of patients. Disease-specific instruments, on the other hand, are specially designed for a particular disease, patient group or areas of function [9]. The functional scale, Walking Impairment Questionnaire (WIQ) [10] and quality of life instruments such as Intermittent Claudication Questionnaire (ICQ) [11] and Claudication Scale (CLAU-S) [12] are examples of disease-specific instruments which have been developed in recent years for patients with IC. However, at present there is no accepted disease-specific questionnaire for quality of life assessment in patients with CLI. Nevertheless, the TransAtlantic Inter-Society Consensus (TASC) [6] recommended that quality of life instruments should be used in all clinical trials and preferably include both generic and disease-specific quality of life measures.\nOutcome measures need to satisfy different criteria to be useful as a suitable health outcome instrument in clinical practice. Construct validity is one of the most important characteristics and is a lengthy and ongoing process [13]. An essential consideration is the instrument's ability to discriminate between different levels of the disease; another consideration is its reliability, which means the degree to which the instrument is free from random error and all items measure the same underlying attribute [14]. Further, the requirement for a useful outcome measure is the responsiveness in detecting small but important clinical changes of quality of life in patients following vascular interventions [13]. Finally the ideal quality of life instrument must also be acceptable to patients, simple and easy to use and preferably short. 
Comparisons among quality of life instruments and their psychometric characteristics and performance are needed to provide recommendations about their usefulness as outcome measures for these particular groups of patients.\nThe aim of this study was to compare two generic quality of life questionnaires, the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36), regarding internal consistency reliability, validity, responsiveness and suitability as outcome measures in patients with lower limb ischaemia in a longitudinal perspective.", " Patients Ninety consecutive patients from a Swedish vascular unit in southern Sweden were invited to participate in this study. The assessment took place before and 12 months after revascularization. Out of 90 patients, 24 (27%) dropped out during the follow-up period, of whom 14 suffered from CLI. Six patients (7%) died, 15 (17%) did not wish to participate and 3 (3%) had other concurrent diseases. The inclusion criteria were patients admitted for active treatment of documented lower limb ischaemia, having no communication problems and having no other disease restricting their walking capacity [1]. The severity of ischaemia was graded according to suggested standards for grading lower limb ischaemia [15]. Sixty-two (68.8%) patients were treated with a surgical bypass, 24 (26.6%) had a percutaneous angioplasty (PTA) and 4 (4.6%) had a surgical thromboendarterectomy (TEA) (Table 1). Routine medical history, risk factors and clinical examinations, which included ankle blood pressure (ABP) and ankle-brachial pressure index (ABPI), were obtained before and one year after revascularization in accordance with the Swedish Vascular Registry (Swedvasc) [16]. The questionnaire also contained questions about sex, age, housing and civil status. Demographic characteristics and clinical data were obtained from the patients' medical records.\nDemographic characteristics of the patient groups before revascularization (n = 90).\n1Mann-Whitney U-test 2Chi-square test *Include the patients (n = 66) who completed the study one year postoperatively. P-value = <0.05\n Nottingham Health Profile The Nottingham Health Profile (NHP) was developed to be used in epidemiological studies of health and disease [17]. It consists of two parts. Part I contains 38 yes/no items in 6 dimensions: pain, physical mobility, emotional reactions, energy, social isolation and sleep. Part II contains 7 general yes/no questions concerning daily living problems. The two parts may be used independently and part II is not analysed in this study. Part I is scored using weighted values which give a range of possible scores from zero (no problems at all) to 100 (presence of all problems within a dimension). Swedish weights have been developed and used in this study [18]. The Swedish version has proved to be valid and reliable, for example, in patients with arthrosis of the hip joint [19] and in patients suffering from grave ventricular arrhythmias [20]. The NHP scale has also proved capable of measuring changes in perceived health following different treatments such as radical surgery for colorectal cancer [21] and after vascular interventions in lower limb ischaemia patients [4,22,23].\n Short Form 36 Health Survey The Short Form 36 Health Survey (SF-36) was developed by Ware et al [24] and designed to provide assessments involving generic health concepts that are not specific to age, disease or treatment group. It includes 36 items covering eight health concepts: bodily pain, physical functioning, role limitations due to physical problems, mental health, vitality, social functioning, role limitations due to emotional problems and general health. The response format is yes/no or a three-to-six-point response scale. For each health concept question scores are coded, summed and transformed on a scale from zero (worst health) to 100 (best health). In this study, the standard Swedish version was used [25]. The SF-36 has shown acceptable validity and reliability in population studies [26,27] and in various groups of patients, for example after stroke [28] and in patients with rheumatoid arthritis [29]. The SF-36 scale has also shown responsiveness to changes in health status over time in patients with critical ischaemia [30-32] and in patients with intermittent claudication following a revascularization [33-35].\n Procedure The patients were asked by the head nurse to fill out the NHP and the SF-36 questionnaire during their admission before treatment. At the one-year follow-up, the questionnaire was sent home to the patients with a covering letter and a prepaid envelope. The Ethics Committee of Lund University approved the study (LU 470-98, Gbg M 098-98).\n Statistical analysis Differences in characteristics between patients with IC and with CLI before revascularization were analysed using the Chi-squared test and the Mann-Whitney U-test. The prevalence of the lowest (\"floor\" effect) and highest (\"ceiling\" effect) possible quality of life score in the NHP and SF-36 was also calculated.\nConstruct validity was evaluated for aspects of convergent and discriminant validity by the Multitrait-Multimethod matrix (MTMM) [13] based on five comparable domains, including pain, physical mobility, emotional reactions, energy and social isolation for the NHP and bodily pain, physical functioning, mental health, vitality and social functioning for the SF-36 (Table 2). Further, the Mann-Whitney U-test was used to examine the relative ability of the two instruments to discriminate among the degrees of severity of the peripheral vascular disease. Spearman's rank correlation coefficient was used to express the correlation between quality of life scores, ABPI, type of intervention and age. The internal consistency based on correlations between items for each scale was assessed with Cronbach's alpha [36]. The recommended reliability standard for group-level comparisons is a reliability coefficient of 0.70, while comparisons between individuals demand a reliability coefficient of 0.90 [25].\nComparable domains between the Nottingham Health Profile (NHP) and the Short Form 36 (SF-36) and number of items.\nThe Wilcoxon Signed Ranks test was used to detect the responsiveness of within-subjects changes over time, before and one year after revascularization, in patients with IC and CLI. Data analysis was performed for overall comparisons using the statistical package SPSS 11.0 and a P value of <.05 was taken as statistically significant (illustrative code sketches of these calculations follow the section texts below).", "Ninety consecutive patients from a Swedish vascular unit in southern Sweden were invited to participate in this study. The assessment took place before and 12 months after revascularization. Out of 90 patients, 24 (27%) dropped out during the follow-up period, of whom 14 suffered from CLI. Six patients (7%) died, 15 (17%) did not wish to participate and 3 (3%) had other concurrent diseases. The inclusion criteria were patients admitted for active treatment of documented lower limb ischaemia, having no communication problems and having no other disease restricting their walking capacity [1]. The severity of ischaemia was graded according to suggested standards for grading lower limb ischaemia [15]. Sixty-two (68.8%) patients were treated with a surgical bypass, 24 (26.6%) had a percutaneous angioplasty (PTA) and 4 (4.6%) had a surgical thromboendarterectomy (TEA) (Table 1). 
Routine medical history, risk factors and clinical examinations, which included ankle blood pressure (ABP) and ankle-brachial pressure index (ABPI), were obtained before and one year after revascularization in accordance with the Swedish Vascular Registry (Swedvasc) [16]. The questionnaire also contains questions about sex, age, housing and civil status. Demographic characteristics and clinical data were obtained from the patients' medical records.\nDemographic characteristics of the patient groups before revascularization (n = 90).\n1Mann-Whitney U-test 2Chi-square test *Include the patients (n = 66) who completed the study one year postoperatively. P-value = <0.05", "The Nottingham Health Profile (NHP) was developed to be used in epidemiological studies of health and disease [17]. It consists of two parts. Part I contains 38 yes/no items in 6 dimensions: pain, physical mobility, emotional reactions, energy, social isolation and sleep. Part II contains 7 general yes/no questions concerning daily living problems. The two parts may be used independently and part II is not analysed in this study. Part I is scored using weighted values which give a range of possible scores from zero (no problems at all) to 100 (presence of all problems within a dimension). Swedish weights have been developed and used in this study [18]. The Swedish version has proved to be valid and reliable, for example, in patients with arthrosis of the hip joint [19] and in patients suffering from grave ventricular arrhythmias [20]. The NHP scale has also proved capable of measuring changes in perceived health following different treatments such as radical surgery for colorectal cancer [21] and after vascular interventions in lower limb ischaemia patients [4,22,23].", "The Short Form 36 Health Survey (SF-36) was developed by Ware et al [24] and designed to provide assessments involving generic health concepts that are not specific to age, disease or treatment group. It includes 36 items covering eight health concepts: bodily pain, physical functioning, role limitations due to physical problems, mental health, vitality, social functioning, role limitations due to emotional problems and general health. The response format is yes or no or in a three-to-six response scale. For each health concept questions scores are coded, summed and transformed on a scale from zero (worst health) to 100 (best health). In this study, the standard Swedish version was used [25]. The SF-36 has shown acceptable validity and reliability in population studies [26,27] and in various groups of patients, for example after stroke [28] and in patients with rheumatoid arthritis [29]. The SF-36 scale has also shown responsiveness to changes in health status over time in patients with critical ischaemia [30-32] and in patients with intermittent claudication following a revascularization [33-35].", "The patients were asked by the head nurse to fill out the NHP and the SF-36 questionnaire during their admission before treatment. At the one-year follow-up, the questionnaire was sent home to the patients with a covering letter and a prepaid envelope. The Ethics Committee of Lund University approved the study (LU 470-98, Gbg M 098-98).", "Differences in characteristics between patients with IC and with CLI before revascularization were analysed using Chi-squared test and Mann-Whitney U-test. 
The prevalence of the lowest (\"floor\" effect) and highest (\"ceiling\" effect) possible quality of life score in NHP and SF-36 was also calculated.\nConstruct validity was evaluated for aspects of convergent and discriminant validity by the Multitrait-Multimethod matrix (MTMM) [13] based on five comparable domains, including pain, physical mobility, emotional reactions, energy and social isolation for the NHP and bodily pain, physical functioning, mental health, vitality and social functioning for the SF-36 (Table 2). Further, the Mann-Whitney U-test was used to examine the relative ability of the two instruments to discriminate among the degrees of severity of the peripheral vascular disease. Spearman's rank correlation coefficient was used to express the correlation between quality of life scores, ABPI, type of intervention and age. The internal consistency based on correlations between items for each scale was assessed with Cronbach's alpha [36]. The recommended reliability standard for group-level comparisons is a reliability coefficient of 0.70, while comparisons between individuals demands a reliability coefficient of 0.90 [25].\nComparable domains between the Nottingham Health Profile (NHP) and the Short Form 36 (SF-36) and number of items.\nThe Wilcoxon Signed Ranks test was used to detect the responsiveness of within-subjects changes over time, before and one year after revascularization, in patients with IC and CLI. Data analysis was performed for overall comparisons using the statistical package SPSS 11.0 and a P value of <.05 was taken as statistically significant.", "Forty-eight (53.3%) patients had IC of whom 26 (54%) were men. The remaining 42 (46.7%) suffered from CLI and 22 (52%) of them were men. There was a significant difference in age between the two groups with a mean age of 67 and 71 respectively (Table 1). One year postoperatively, sixty-six (73%) patients (38 with IC and 28 with CLI) remained in the study and secondary reconstructions were made on 7 (10%) patients during the follow-up. There were no significant differences at baseline in sex, age, ABPI and quality of life scores between the drop-out patients and the patients who completed the study. Further, there were no significant differences between the drop-outs and the remaining patients regarding the method of treatment or severity of ischaemia.\nAnalyses between the comparable domains showed that the NHP scores were more skewed than the SF-36 scores, especially in emotional reactions, energy and social isolation (Figure 1). The \"floor effect\", the proportion of individuals having the lowest possible scores (SF-36 = 0, NHP = 100), was larger for the NHP scale in energy one year (19.7%) after revascularization than for the SF-36. The \"ceiling effect\", the proportion of individuals having the best possible scores (SF-36 = 100, NHP = 0), was also larger for the NHP scale in emotional reactions (50.0%), energy (42.4%) and social isolation (71.2%) one year after revascularization (Table 3).\nFrequency distribution of scores on the NHP (left side) and comparable dimensions on the SF-36 (right side) one year after revascularization. NHP scores had 100 subtracted for consistency with SF-36\nReliability and \"floor\" and \"ceiling\" effects in comparable NHP and SF-36 scales before and one year after revascularization.\na The NHP scores are reversed for consistency with the SF-36. 
b Cronbach's α\n Validity The average convergent validity coefficients exceeded 0.5 one year postoperatively except for physical mobility and physical functioning (r = -0.46) and for social isolation and social functioning (r = -0.32), indicating a considerable convergence of the SF-36 and NHP (Table 4). One year postoperatively significant correlations between ABPI and physical functioning (r = 0.29) (SF-36), physical mobility (r = 0.42) and pain (r = 0.42) (NHP) were found. The severity of the ischaemia had a significant influence on the NHP-measured domains of pain (P < .003) and physical mobility (P < .03), indicating lower quality of life scores in patients with critical ischaemia. In the ability to discriminate between levels of ischaemia in the other comparable quality of life domains, no significant differences were found (Table 5).\nMultitrait-Multimethod matrix of correlation coefficients for the comparable scales of the SF-36 and NHP in patients with varying degrees of lower limb ischaemia one year after revascularization.\nCorrelation coefficients for comparable domains of the two questionnaires are shown in bold type. Correlation coefficients are negative because the two scales run in opposite directions.\nDifferences in comparable domains in the NHP and the SF-36 between claudicants and patients with critical ischaemia before revascularization.\n*A higher score (100) indicates more perceived problems. ** A higher score (100) indicates fewer perceived problems. a P-value, claudicants versus critical ischaemia patients before revascularization. Tested by Mann-Whitney U-test. p-value = <0.05\n Internal consistency Physical functioning (α = 0.82), mental health (α = 0.76) and vitality (α = 0.70) for the SF-36 and pain (α = 0.71), emotional reactions (α = 0.76) and energy (α = 0.71) for the NHP scale were reliable, with coefficients >0.70 before revascularization. For the SF-36, all of the comparable domains except for social functioning (α = 0.64) exceeded the Cronbach's alpha value of 0.8 at the one-year follow-up. For the NHP the internal consistency coefficient was less than 0.8 but still exceeded 0.70 (Table 3).\n Responsiveness The NHP scale and SF-36 were not equally good at detecting within-patient changes over time. In patients with IC the SF-36 scale showed significant improvements in bodily pain (P < .01) and in physical functioning (P < .001) and for the patients with CLI there were significant improvements in bodily pain (P < .004) at the one-year follow-up (Figure 2). The NHP scale showed no significant improvements in patients with IC, while in patients with CLI, significant improvements in pain (P < .001) and physical mobility (P < .03) were found (Figure 3).\nChanges in median score for the SF-36 in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates fewer perceived problems. BP = Bodily pain, PF = Physical functioning, MH = Mental health, VT = Vitality and SF = Social functioning.\nChanges in median score for the NHP in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates more perceived problems. P = Pain, PM = Physical mobility, EM = Emotional reactions, E = Energy and SO = Social isolation.", "The average convergent validity coefficients exceeded 0.5 one year postoperatively except for physical mobility and physical functioning (r = -0.46) and for social isolation and social functioning (r = -0.32), indicating a considerable convergence of the SF-36 and NHP (Table 4). 
One year postoperatively significant correlations between ABPI and physical functioning (r = 0.29) (SF-36), physical mobility (r = 0.42) and pain (r = 0.42) (NHP) were found. The severity of the ischaemia had a significant influence in the NHP-measured domain of pain (P < .003) and physical mobility (P < .03), indicating lower quality of life scores in patients with critical ischaemia. In the ability to discriminate between levels of ischaemia in the other comparable quality of life domains, no significant differences were found (Table 5).\nMultitrait-Multimethod matrix of correlation coefficient for the comparable scales of the SF-36 and NHP in patients with varying degrees of lower limb ischaemia one year after revascularization.\nCorrelation coefficients for comparable domains of the two questionnaires are shown in bold type Correlation coefficients are negative because the two scales run in opposite directions.\nDifferences in comparable domains in the NHP and the SF-36 between claudicants and patients with critical ischaemia before revascularization.\n*A higher score (100) indicates more perceived problems. ** A higher score (100) indicates fewer perceived problems. a P-value, claudicants versus critical ischaemia patients before revascularization. Tested by Mann-Whitney U-test. p-value = <0.05", "Physical functioning (α = 0.82), mental health (α = 0.76) and vitality (α = 0.70) for the SF-36 and pain (α = 0.71), emotional reactions (α = 0.76) and energy (α = 0.71) for the NHP scale were reliable, with coefficients >0.70 before revascularization. For the SF-36, all of the comparable domains except for social functioning (α = 0.64) exceeded the Cronbach's alpha value of 0.8 at the one-year follow-up. For the NHP the internal consistency coefficient was less than 0.8 but still exceeded 0.70 (Table 3).", "The NHP scale and SF-36 were not equally good at detecting within-patient changes over time. In patients with IC the SF-36 scale showed significant improvements in bodily pain (P < .01) and in physical functioning (P < .001) and for the patients with CLI there were significant improvements in bodily pain (P < .004) at the one-year follow-up (Figure 2). The NHP scale showed no significant improvements in patients with IC, while in patients with CLI, significant improvements in pain (P < .001) and physical mobility (P < .03) were found (Figure 3).\nChanges in median score for the SF-36 in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates fewer perceived problems. BP = Bodily pain, PF = Physical functioning, MH = Mental health, VT = Vitality and SF = Social functioning.\nChanges in median score for the NHP in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates more perceived problems. P = Pain, PM = Physical mobility, EM = Emotional reactions, E = Energy and SO = Social isolation.", "The result showed that the SF-36 was less skewed and more homogeneous with lower \"floor\" and \"ceiling\" effects than the NHP. A considerable convergence in three of the five comparable domains one year postoperatively indicates an average convergent validity. The SF-36 showed a higher internal consistency except for social functioning one-year postoperatively and was more responsive in detecting changes over time in the IC group. 
The NHP was more sensitive in discriminating among levels of ischaemia regarding pain and more able to detect changes in the CLI group.\nThe attrition or loss of subjects (27%) in this study could have affected the outcome. Further analysis showed that there were no significant differences in quality of life, sex, age, method of treatment and severity of disease between the attrition group and those who completed the study. The fact that the NHP and SF-36 differ in their nature and content may limit the study design. Therefore the analyses in this study focused only on the comparable domains of the two instruments, including the basic domains of physical, mental and social health identified by the WHOQOL group [7]. A suitable quality of life instrument for patients with chronic lower limb ischaemia should not only be valid, reliable and responsive but also simple for elderly people to understand and complete. In this study there was no difference in response rate between the two instruments; both seemed to be user-friendly and took about 5–10 minutes to complete. The findings strengthen earlier results suggesting that both scales are practical and acceptable for use in elderly patients [37,38].\nA generic quality of life instrument, designed for a variety of populations and measuring a comprehensive set of health concepts, is likely to have problems with \"ceiling\" and \"floor\" effects [24]. In this study the NHP showed higher \"ceiling\" effects in all dimensions than the SF-36. There were minor \"floor\" effects in both the NHP and SF-36, indicating the lowest possible quality of life. This is in accordance with Klevsgård et al [1], who also showed higher \"ceiling\" effects in social isolation, emotional reactions and energy for the NHP than the SF-36. Other studies have also reported fewer \"ceiling\" and \"floor\" effects in the SF-36 than in the NHP in patients with chronic obstructive pulmonary disease [39] and after a myocardial infarction [37]. The advantage of the SF-36 may be due to its use of a Likert-type response format with a number of possible different scores and its ability to detect positive as well as negative states of health, whereas the NHP items are dichotomous and state more extreme ends of ill health [39]. This could mean that a patient with acceptable initial scores shows no measurable improvement even when the improvement is clinically obvious. Furthermore, a false negative response will be more likely when a patient perceives themselves as having perfect functioning on a measure that only assesses severe dysfunction. The result confirms the importance of finding a quality of life instrument whose spectrum of dimensions matches patients with chronic lower limb ischaemia, given their numerous and often severe comorbid conditions.\nIn this study the internal consistency was not as high as desirable for either of the instruments before revascularization, but both instruments exceeded the minimum internal consistency value of 0.7, except for social functioning in the SF-36 one year postoperatively. The SF-36, however, had considerably higher α-values in all other dimensions. Several studies have previously reported that the SF-36 has higher Cronbach's α values than the NHP, but the domains in which the highest and lowest values were estimated differ [37-40]. The findings suggest that it is not only the magnitude of the correlation among items, but also the number of items in the scale, that affects the internal consistency. 
For example, the domains of pain and social isolation in the NHP contain 8 and 5 items respectively, while bodily pain and social functioning in the SF-36 contain only 2 items each. This is further supported by the fact that neither scale was sensitive enough to identify significant within-patient changes in social isolation and social functioning. Another explanation could be that the patients were a more homogeneous group before treatment, with similar problems which affected their quality of life, but one year postoperatively they had become more heterogeneous and represented different states of recovery [13]. Both instruments meet the reliability standards for group-level application in most respects, although neither of them achieved the degree of reliability that would be desirable in individual-based assessment.\nThe result in this study showed significant convergent correlation coefficients between scores of the comparable dimensions except for the physical and social domains, indicating a considerable convergence of the NHP and SF-36 scales. Prieto et al [39] and Meyer-Rosberg et al [40] demonstrated similar results with an average convergent validity. Thus the NHP and SF-36 are relatively equal in validity and corroborate that the subscales probably reflect similar impacts of chronic lower limb ischaemia. However, social isolation in the NHP showed a higher correlation with mental health in the SF-36 and might measure more psychological aspects of social life, whilst social functioning in the SF-36 tends to assess social activities, judging by its higher correlation with energy in the NHP. Similarly, physical functioning in the SF-36 showed a higher correlation with energy and may reflect physical activities of daily living rather than physical mobility. This suggests that the SF-36 and NHP measure different aspects of physical and social activities.\nValidity in terms of the instruments' relative ability to discriminate among different levels of ischaemia could only be demonstrated for the NHP, which showed that patients with CLI had significantly more problems with pain and physical mobility before treatment than patients with IC. Klevsgård et al [1] showed similar results, that the NHP was more sensitive in discriminating deterioration in pain and physical mobility than the SF-36. In contrast, Brown et al [37] demonstrated that the SF-36 was more sensitive than the NHP for identifying people still troubled with angina or breathlessness after a myocardial infarction. Despite the lack of significant differences between patients with IC and patients with CLI in the other domains, the NHP scale tends to be more sensitive in explaining the quality of life in this group of patients with regard to the dimensions of pain and physical mobility. The important issue is thus to consider how well the measurement method explains health-related phenomena which are significant for the particular targeted disease or group of patients.\nThe SF-36 was the more responsive instrument in detecting changes in quality of life over time in patients with IC, including bodily pain and physical functioning one year postoperatively. However, in patients with CLI, the NHP was the more responsive instrument, with significant changes in pain and physical mobility, while the SF-36 showed a significant change only in bodily pain. Falcoz et al [38] also demonstrated that the SF-36 was more responsive than the NHP in detecting changes five weeks after cardiac surgery. 
In contrast, Klevsgård et al [1] showed that the NHP was more responsive in patients with chronic lower limb ischaemia one month after revascularization. The result of the present study supports the TransAtlantic Inter-Society Consensus (TASC) [6] recommendation that the SF-36 should be used as a generic health outcome measure in patients with chronic lower limb ischaemia. It seems to be more sensitive for detecting changes in quality of life than the NHP in patients with IC. However, in the group of CLI patients, who have more problems with mobility and pain, it is harder to evaluate whether one questionnaire is superior to the other; the NHP could be the preferable instrument in this group of patients.", "The findings indicate that both the SF-36 and the NHP have acceptable reliability for group-level comparisons and acceptable convergent and construct validity one year after revascularization. Nevertheless, the SF-36 seems generally to have superior psychometric properties and was more suitable than the NHP for evaluating quality of life in patients with intermittent claudication. The NHP, however, discriminated better among levels of ischaemia severity and was more responsive in detecting changes over time in patients with critical leg ischaemia." ]
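The internal consistency (Cronbach's alpha) and the "floor"/"ceiling" analyses described in the Statistical analysis and Results sections above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the study's actual analysis code: the item data, column names and 0-100 scoring are hypothetical, and only the general formulas follow the text.

```python
"""
Minimal sketch (not the study's analysis code): Cronbach's alpha and
floor/ceiling effects for a single questionnaire domain. The item data,
column names and 0-100 scoring used here are hypothetical.
"""
import numpy as np
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of one scale."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


def floor_ceiling(scores: pd.Series, floor_value: float, ceiling_value: float):
    """Percentage of respondents at the worst (floor) and best (ceiling) possible score."""
    scores = scores.dropna()
    return (scores == floor_value).mean() * 100, (scores == ceiling_value).mean() * 100


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated 4-item domain for 66 respondents: a latent trait plus item noise,
    # so the items are correlated and alpha comes out well above zero.
    latent = rng.uniform(0, 100, size=(66, 1))
    items = pd.DataFrame(np.clip(latent + rng.normal(0, 15, size=(66, 4)), 0, 100),
                         columns=[f"item_{i}" for i in range(1, 5)])
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

    # Domain score as the item mean; with SF-36-style scoring, 0 is the floor
    # and 100 the ceiling.
    domain_score = items.mean(axis=1)
    floor_pct, ceiling_pct = floor_ceiling(domain_score, floor_value=0, ceiling_value=100)
    print(f"Floor: {floor_pct:.1f}%  Ceiling: {ceiling_pct:.1f}%")
```

Because the NHP runs in the opposite direction to the SF-36, the same helper would be called with floor_value=100 and ceiling_value=0 for an NHP dimension.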
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null ]
[ "Lower limb ischaemia", "Nottingham Health Profile", "Reliability", "Responsiveness", "Short Form 36", "Validity" ]
Background: During the past few decades quality of life assessment has become one central outcome in treatment of patients with chronic lower limb ischaemia. Different generic quality of life instruments such as the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) [1,2] have previously been tested, revealing conflicting results in these patients according to psychometric attributes in short-term evaluations. The strengths and weakness of the NHP and the SF-36 scales are not extensively examined and further research is needed to establish which is the more appropriate and responsive quality of life instrument for patients with chronic lower limb ischaemia in the long term. The main goal of vascular surgical treatment is the relief of symptoms and improvement in patients quality of life. A majority of the patients are elderly and have generally widespread arterial disease with numbers of symptoms due to the chronic lower limb ischaemia, which may affect the patients' quality of life [3-5]. Intermittent claudication (IC) means leg pain constantly produced by walking or muscular activity and is relieved by rest, while critical leg ischaemia (CLI) means pain even at rest and problems with non-healing ulcers or gangrene [6]. It is important to identify dimensions which are influenced by the severity and nature of the disease when selecting a suitable quality of life instrument [7]. The World Health Organization QOL group [8] has identified and recommended five broad dimensions – physical and psychological health, social relationship perceptions, function and well-being – which should be included in a generic quality of life instrument. Generic instruments cover a broad range of dimensions and allow comparisons between different groups of patients. Disease-specific instruments, on the other hand, are specially designed for a particular disease, patient group or areas of function [9]. The functional scale, Walking Impairment Questionnaire (WIQ) [10] and quality of life instruments such as Intermittent Claudication Questionnaire (ICQ) [11] and Claudication Scale (CLAU-S) [12] are examples of disease-specific instruments which have been developed in recent years for patients with IC. However, at present there is no accepted disease-specific questionnaire for quality of life assessment in patients with CLI. Nevertheless, the TransAtlantic Inter-Society Consensus (TASC) [6] recommended that quality of life instruments should be used in all clinical trials and preferably include both generic and disease-specific quality of life measures. Outcome measures need to satisfy different criteria to be useful as a suitable health outcome instrument in clinical practice. Construct validity is one of the most important characteristics and is a lengthy and ongoing process [13]. An essential consideration is the instrument's ability to discriminate between different levels of the disease; another consideration is its reliability, which means the degree to which the instrument is free from random error and all items measure the same underlying attribute [14]. Further, the requirement for a useful outcome measure is the responsiveness in detecting small but important clinical changes of quality of life in patients following vascular interventions [13]. Finally the ideal quality of life instrument must also be acceptable to patients, simple and easy to use and preferably short. 
Comparisons among quality of life instruments and their psychometric characteristics and performance are needed to provide recommendations about their usefulness as outcome measures for these particular groups of patients. The aim of this study was to compare two generic quality of life questionnaires, the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) regarding the internal consistency reliability, validity, responsiveness and suitability as outcome measures in patients with lower limb ischaemia in a longitudinal perspective. Methods: Patients Ninety consecutive patients from a Swedish vascular unit in southern Sweden were invited to participate in this study. The assessment took place before and 12 months after revascularization. Out of 90 patients, 24 (27%) dropped out during the follow-up period, of whom 14 suffered from CLI. Six patients (7%) died, 15 (17%) did not wish to participate and 3 (3%) had other concurrent diseases. The inclusion criteria were patients admitted for active treatment of documented lower limb ischaemia, having no communication problems and having no other disease restricting their walking capacity [1]. The severity of ischaemia was graded according to suggested standards for grading lower limb ischaemia [15]. Sixty-two (68.8%) patients were treated with a surgical bypass, 24 (26.6%) had a percutaneous angioplasty (PTA) and 4 (4.6%) had a surgical thromboendatherectomy (TEA) (Table 1). Routine medical history, risk factors and clinical examinations, which included ankle blood pressure (ABP) and ankle-brachial pressure index (ABPI), were obtained before and one year after revascularization in accordance with the Swedish Vascular Registry (Swedvasc) [16]. The questionnaire also contains questions about sex, age, housing and civil status. Demographic characteristics and clinical data were obtained from the patients' medical records. Demographic characteristics of the patient groups before revascularization (n = 90). 1Mann-Whitney U-test 2Chi-square test *Include the patients (n = 66) who completed the study one year postoperatively. P-value = <0.05 Ninety consecutive patients from a Swedish vascular unit in southern Sweden were invited to participate in this study. The assessment took place before and 12 months after revascularization. Out of 90 patients, 24 (27%) dropped out during the follow-up period, of whom 14 suffered from CLI. Six patients (7%) died, 15 (17%) did not wish to participate and 3 (3%) had other concurrent diseases. The inclusion criteria were patients admitted for active treatment of documented lower limb ischaemia, having no communication problems and having no other disease restricting their walking capacity [1]. The severity of ischaemia was graded according to suggested standards for grading lower limb ischaemia [15]. Sixty-two (68.8%) patients were treated with a surgical bypass, 24 (26.6%) had a percutaneous angioplasty (PTA) and 4 (4.6%) had a surgical thromboendatherectomy (TEA) (Table 1). Routine medical history, risk factors and clinical examinations, which included ankle blood pressure (ABP) and ankle-brachial pressure index (ABPI), were obtained before and one year after revascularization in accordance with the Swedish Vascular Registry (Swedvasc) [16]. The questionnaire also contains questions about sex, age, housing and civil status. Demographic characteristics and clinical data were obtained from the patients' medical records. 
Demographic characteristics of the patient groups before revascularization (n = 90). 1Mann-Whitney U-test 2Chi-square test *Include the patients (n = 66) who completed the study one year postoperatively. P-value = <0.05 Nottingham Health Profile The Nottingham Health Profile (NHP) was developed to be used in epidemiological studies of health and disease [17]. It consists of two parts. Part I contains 38 yes/no items in 6 dimensions: pain, physical mobility, emotional reactions, energy, social isolation and sleep. Part II contains 7 general yes/no questions concerning daily living problems. The two parts may be used independently and part II is not analysed in this study. Part I is scored using weighted values which give a range of possible scores from zero (no problems at all) to 100 (presence of all problems within a dimension). Swedish weights have been developed and used in this study [18]. The Swedish version has proved to be valid and reliable, for example, in patients with arthrosis of the hip joint [19] and in patients suffering from grave ventricular arrhythmias [20]. The NHP scale has also proved capable of measuring changes in perceived health following different treatments such as radical surgery for colorectal cancer [21] and after vascular interventions in lower limb ischaemia patients [4,22,23]. The Nottingham Health Profile (NHP) was developed to be used in epidemiological studies of health and disease [17]. It consists of two parts. Part I contains 38 yes/no items in 6 dimensions: pain, physical mobility, emotional reactions, energy, social isolation and sleep. Part II contains 7 general yes/no questions concerning daily living problems. The two parts may be used independently and part II is not analysed in this study. Part I is scored using weighted values which give a range of possible scores from zero (no problems at all) to 100 (presence of all problems within a dimension). Swedish weights have been developed and used in this study [18]. The Swedish version has proved to be valid and reliable, for example, in patients with arthrosis of the hip joint [19] and in patients suffering from grave ventricular arrhythmias [20]. The NHP scale has also proved capable of measuring changes in perceived health following different treatments such as radical surgery for colorectal cancer [21] and after vascular interventions in lower limb ischaemia patients [4,22,23]. Short Form 36 Health Survey The Short Form 36 Health Survey (SF-36) was developed by Ware et al [24] and designed to provide assessments involving generic health concepts that are not specific to age, disease or treatment group. It includes 36 items covering eight health concepts: bodily pain, physical functioning, role limitations due to physical problems, mental health, vitality, social functioning, role limitations due to emotional problems and general health. The response format is yes or no or in a three-to-six response scale. For each health concept questions scores are coded, summed and transformed on a scale from zero (worst health) to 100 (best health). In this study, the standard Swedish version was used [25]. The SF-36 has shown acceptable validity and reliability in population studies [26,27] and in various groups of patients, for example after stroke [28] and in patients with rheumatoid arthritis [29]. 
The SF-36 scale has also shown responsiveness to changes in health status over time in patients with critical ischaemia [30-32] and in patients with intermittent claudication following a revascularization [33-35]. The Short Form 36 Health Survey (SF-36) was developed by Ware et al [24] and designed to provide assessments involving generic health concepts that are not specific to age, disease or treatment group. It includes 36 items covering eight health concepts: bodily pain, physical functioning, role limitations due to physical problems, mental health, vitality, social functioning, role limitations due to emotional problems and general health. The response format is yes or no or in a three-to-six response scale. For each health concept questions scores are coded, summed and transformed on a scale from zero (worst health) to 100 (best health). In this study, the standard Swedish version was used [25]. The SF-36 has shown acceptable validity and reliability in population studies [26,27] and in various groups of patients, for example after stroke [28] and in patients with rheumatoid arthritis [29]. The SF-36 scale has also shown responsiveness to changes in health status over time in patients with critical ischaemia [30-32] and in patients with intermittent claudication following a revascularization [33-35]. Procedure The patients were asked by the head nurse to fill out the NHP and the SF-36 questionnaire during their admission before treatment. At the one-year follow-up, the questionnaire was sent home to the patients with a covering letter and a prepaid envelope. The Ethics Committee of Lund University approved the study (LU 470-98, Gbg M 098-98). The patients were asked by the head nurse to fill out the NHP and the SF-36 questionnaire during their admission before treatment. At the one-year follow-up, the questionnaire was sent home to the patients with a covering letter and a prepaid envelope. The Ethics Committee of Lund University approved the study (LU 470-98, Gbg M 098-98). Statistical analysis Differences in characteristics between patients with IC and with CLI before revascularization were analysed using Chi-squared test and Mann-Whitney U-test. The prevalence of the lowest ("floor" effect) and highest ("ceiling" effect) possible quality of life score in NHP and SF-36 was also calculated. Construct validity was evaluated for aspects of convergent and discriminant validity by the Multitrait-Multimethod matrix (MTMM) [13] based on five comparable domains, including pain, physical mobility, emotional reactions, energy and social isolation for the NHP and bodily pain, physical functioning, mental health, vitality and social functioning for the SF-36 (Table 2). Further, the Mann-Whitney U-test was used to examine the relative ability of the two instruments to discriminate among the degrees of severity of the peripheral vascular disease. Spearman's rank correlation coefficient was used to express the correlation between quality of life scores, ABPI, type of intervention and age. The internal consistency based on correlations between items for each scale was assessed with Cronbach's alpha [36]. The recommended reliability standard for group-level comparisons is a reliability coefficient of 0.70, while comparisons between individuals demands a reliability coefficient of 0.90 [25]. Comparable domains between the Nottingham Health Profile (NHP) and the Short Form 36 (SF-36) and number of items. 
The Wilcoxon Signed Ranks test was used to detect the responsiveness of within-subjects changes over time, before and one year after revascularization, in patients with IC and CLI. Data analysis was performed for overall comparisons using the statistical package SPSS 11.0 and a P value of <.05 was taken as statistically significant. Differences in characteristics between patients with IC and with CLI before revascularization were analysed using Chi-squared test and Mann-Whitney U-test. The prevalence of the lowest ("floor" effect) and highest ("ceiling" effect) possible quality of life score in NHP and SF-36 was also calculated. Construct validity was evaluated for aspects of convergent and discriminant validity by the Multitrait-Multimethod matrix (MTMM) [13] based on five comparable domains, including pain, physical mobility, emotional reactions, energy and social isolation for the NHP and bodily pain, physical functioning, mental health, vitality and social functioning for the SF-36 (Table 2). Further, the Mann-Whitney U-test was used to examine the relative ability of the two instruments to discriminate among the degrees of severity of the peripheral vascular disease. Spearman's rank correlation coefficient was used to express the correlation between quality of life scores, ABPI, type of intervention and age. The internal consistency based on correlations between items for each scale was assessed with Cronbach's alpha [36]. The recommended reliability standard for group-level comparisons is a reliability coefficient of 0.70, while comparisons between individuals demands a reliability coefficient of 0.90 [25]. Comparable domains between the Nottingham Health Profile (NHP) and the Short Form 36 (SF-36) and number of items. The Wilcoxon Signed Ranks test was used to detect the responsiveness of within-subjects changes over time, before and one year after revascularization, in patients with IC and CLI. Data analysis was performed for overall comparisons using the statistical package SPSS 11.0 and a P value of <.05 was taken as statistically significant. Patients: Ninety consecutive patients from a Swedish vascular unit in southern Sweden were invited to participate in this study. The assessment took place before and 12 months after revascularization. Out of 90 patients, 24 (27%) dropped out during the follow-up period, of whom 14 suffered from CLI. Six patients (7%) died, 15 (17%) did not wish to participate and 3 (3%) had other concurrent diseases. The inclusion criteria were patients admitted for active treatment of documented lower limb ischaemia, having no communication problems and having no other disease restricting their walking capacity [1]. The severity of ischaemia was graded according to suggested standards for grading lower limb ischaemia [15]. Sixty-two (68.8%) patients were treated with a surgical bypass, 24 (26.6%) had a percutaneous angioplasty (PTA) and 4 (4.6%) had a surgical thromboendatherectomy (TEA) (Table 1). Routine medical history, risk factors and clinical examinations, which included ankle blood pressure (ABP) and ankle-brachial pressure index (ABPI), were obtained before and one year after revascularization in accordance with the Swedish Vascular Registry (Swedvasc) [16]. The questionnaire also contains questions about sex, age, housing and civil status. Demographic characteristics and clinical data were obtained from the patients' medical records. Demographic characteristics of the patient groups before revascularization (n = 90). 
1Mann-Whitney U-test 2Chi-square test *Include the patients (n = 66) who completed the study one year postoperatively. P-value = <0.05 Nottingham Health Profile: The Nottingham Health Profile (NHP) was developed to be used in epidemiological studies of health and disease [17]. It consists of two parts. Part I contains 38 yes/no items in 6 dimensions: pain, physical mobility, emotional reactions, energy, social isolation and sleep. Part II contains 7 general yes/no questions concerning daily living problems. The two parts may be used independently and part II is not analysed in this study. Part I is scored using weighted values which give a range of possible scores from zero (no problems at all) to 100 (presence of all problems within a dimension). Swedish weights have been developed and used in this study [18]. The Swedish version has proved to be valid and reliable, for example, in patients with arthrosis of the hip joint [19] and in patients suffering from grave ventricular arrhythmias [20]. The NHP scale has also proved capable of measuring changes in perceived health following different treatments such as radical surgery for colorectal cancer [21] and after vascular interventions in lower limb ischaemia patients [4,22,23]. Short Form 36 Health Survey: The Short Form 36 Health Survey (SF-36) was developed by Ware et al [24] and designed to provide assessments involving generic health concepts that are not specific to age, disease or treatment group. It includes 36 items covering eight health concepts: bodily pain, physical functioning, role limitations due to physical problems, mental health, vitality, social functioning, role limitations due to emotional problems and general health. The response format is yes or no or in a three-to-six response scale. For each health concept questions scores are coded, summed and transformed on a scale from zero (worst health) to 100 (best health). In this study, the standard Swedish version was used [25]. The SF-36 has shown acceptable validity and reliability in population studies [26,27] and in various groups of patients, for example after stroke [28] and in patients with rheumatoid arthritis [29]. The SF-36 scale has also shown responsiveness to changes in health status over time in patients with critical ischaemia [30-32] and in patients with intermittent claudication following a revascularization [33-35]. Procedure: The patients were asked by the head nurse to fill out the NHP and the SF-36 questionnaire during their admission before treatment. At the one-year follow-up, the questionnaire was sent home to the patients with a covering letter and a prepaid envelope. The Ethics Committee of Lund University approved the study (LU 470-98, Gbg M 098-98). Statistical analysis: Differences in characteristics between patients with IC and with CLI before revascularization were analysed using Chi-squared test and Mann-Whitney U-test. The prevalence of the lowest ("floor" effect) and highest ("ceiling" effect) possible quality of life score in NHP and SF-36 was also calculated. Construct validity was evaluated for aspects of convergent and discriminant validity by the Multitrait-Multimethod matrix (MTMM) [13] based on five comparable domains, including pain, physical mobility, emotional reactions, energy and social isolation for the NHP and bodily pain, physical functioning, mental health, vitality and social functioning for the SF-36 (Table 2). 
Further, the Mann-Whitney U-test was used to examine the relative ability of the two instruments to discriminate among the degrees of severity of the peripheral vascular disease. Spearman's rank correlation coefficient was used to express the correlation between quality of life scores, ABPI, type of intervention and age. The internal consistency based on correlations between items for each scale was assessed with Cronbach's alpha [36]. The recommended reliability standard for group-level comparisons is a reliability coefficient of 0.70, while comparisons between individuals demand a reliability coefficient of 0.90 [25]. Comparable domains between the Nottingham Health Profile (NHP) and the Short Form 36 (SF-36) and number of items. The Wilcoxon Signed Ranks test was used to detect the responsiveness of within-subject changes over time, before and one year after revascularization, in patients with IC and CLI. Data analysis was performed for overall comparisons using the statistical package SPSS 11.0 and a P value of <.05 was taken as statistically significant. Results: Forty-eight (53.3%) patients had IC, of whom 26 (54%) were men. The remaining 42 (46.7%) suffered from CLI and 22 (52%) of them were men. There was a significant difference in age between the two groups, with a mean age of 67 and 71 respectively (Table 1). One year postoperatively, sixty-six (73%) patients (38 with IC and 28 with CLI) remained in the study and secondary reconstructions were made on 7 (10%) patients during the follow-up. There were no significant differences at baseline in sex, age, ABPI and quality of life scores between the drop-out patients and the patients who completed the study. Further, there were no significant differences between the drop-outs and the remaining patients regarding the method of treatment or severity of ischaemia. Analyses between the comparable domains showed that the NHP scores were more skewed than the SF-36 scores, especially in emotional reactions, energy and social isolation (Figure 1). The "floor effect", the proportion of individuals having the lowest possible scores (SF-36 = 0, NHP = 100), was larger for the NHP scale in energy (19.7%) one year after revascularization than for the SF-36. The "ceiling effect", the proportion of individuals having the best possible scores (SF-36 = 100, NHP = 0), was also larger for the NHP scale in emotional reactions (50.0%), energy (42.4%) and social isolation (71.2%) one year after revascularization (Table 3). Frequency distribution of scores on the NHP (left side) and comparable dimensions on the SF-36 (right side) one year after revascularization. NHP scores had 100 subtracted for consistency with the SF-36. Reliability and "floor" and "ceiling" effects in comparable NHP and SF-36 scales before and one year after revascularization. a The NHP scores are reversed for consistency with the SF-36. b Cronbach's α. Validity: The average convergent validity coefficients exceeded 0.5 one year postoperatively, except for physical mobility and physical functioning (r = -0.46) and for social isolation and social functioning (r = -0.32), indicating a considerable convergence of the SF-36 and NHP (Table 4). One year postoperatively, significant correlations between ABPI and physical functioning (r = 0.29) (SF-36), physical mobility (r = 0.42) and pain (r = 0.42) (NHP) were found.
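The severity comparison that follows (IC versus CLI domain scores) rests on the Mann-Whitney U-test; a minimal sketch, again with assumed column names rather than the study's actual data, is:

```python
# Minimal sketch of the severity comparison: domain scores of claudicants (IC)
# versus critical ischaemia (CLI) patients compared with the Mann-Whitney U-test.
# The 'group' and domain column names are assumptions for the example.
import pandas as pd
from scipy.stats import mannwhitneyu

def discriminates_severity(df: pd.DataFrame, domain: str) -> float:
    """Two-sided P value for the difference in one quality of life domain
    between the IC and CLI groups."""
    ic = df.loc[df["group"] == "IC", domain].dropna()
    cli = df.loc[df["group"] == "CLI", domain].dropna()
    return mannwhitneyu(ic, cli, alternative="two-sided").pvalue
```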
The severity of the ischaemia had a significant influence on the NHP-measured domains of pain (P < .003) and physical mobility (P < .03), indicating lower quality of life scores in patients with critical ischaemia. In the ability to discriminate between levels of ischaemia in the other comparable quality of life domains, no significant differences were found (Table 5). Multitrait-Multimethod matrix of correlation coefficients for the comparable scales of the SF-36 and NHP in patients with varying degrees of lower limb ischaemia one year after revascularization. Correlation coefficients for comparable domains of the two questionnaires are shown in bold type. Correlation coefficients are negative because the two scales run in opposite directions. Differences in comparable domains in the NHP and the SF-36 between claudicants and patients with critical ischaemia before revascularization. *A higher score (100) indicates more perceived problems. **A higher score (100) indicates fewer perceived problems. a P-value, claudicants versus critical ischaemia patients before revascularization. Tested by Mann-Whitney U-test. P-value < 0.05. Internal consistency: Physical functioning (α = 0.82), mental health (α = 0.76) and vitality (α = 0.70) for the SF-36 and pain (α = 0.71), emotional reactions (α = 0.76) and energy (α = 0.71) for the NHP scale were reliable, with coefficients >0.70 before revascularization. For the SF-36, all of the comparable domains except for social functioning (α = 0.64) exceeded the Cronbach's alpha value of 0.8 at the one-year follow-up. For the NHP the internal consistency coefficient was less than 0.8 but still exceeded 0.70 (Table 3).
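The internal consistency coefficients reported above follow the standard Cronbach's alpha formula; a minimal implementation is sketched below (the item-level data layout is an assumption, since only scale-level alphas are reported in the study).

```python
# Illustrative implementation of Cronbach's alpha; the item-level layout is assumed.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with one row per respondent and one column per item of a scale.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Benchmarks cited in the methods: >= 0.70 for group-level comparisons,
# >= 0.90 for comparisons between individuals.
```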
Responsiveness: The NHP scale and SF-36 were not equally good at detecting within-patient changes over time. In patients with IC the SF-36 scale showed significant improvements in bodily pain (P < .01) and in physical functioning (P < .001), and for the patients with CLI there were significant improvements in bodily pain (P < .004) at the one-year follow-up (Figure 2). The NHP scale showed no significant improvements in patients with IC, while in patients with CLI, significant improvements in pain (P < .001) and physical mobility (P < .03) were found (Figure 3). Changes in median score for the SF-36 in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates fewer perceived problems. BP = Bodily pain, PF = Physical functioning, MH = Mental health, VT = Vitality and SF = Social functioning. Changes in median score for the NHP in claudicants and critical ischaemia patients before and one year after revascularization. Tested by Wilcoxon Signed Ranks test. A higher score indicates more perceived problems. P = Pain, PM = Physical mobility, EM = Emotional reactions, E = Energy and SO = Social isolation.
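The within-patient changes summarised above were tested with the Wilcoxon Signed Ranks test; an illustrative sketch of such a paired comparison, with assumed variable names, is:

```python
# Illustrative sketch of the responsiveness analysis: within-patient change in a
# domain score tested with the Wilcoxon signed-rank test. Variable names are assumed.
import pandas as pd
from scipy.stats import wilcoxon

def responsiveness_p(before: pd.Series, after: pd.Series) -> float:
    """Two-sided Wilcoxon signed-rank P value for paired pre/post scores,
    restricted to patients with both a baseline and a one-year measurement."""
    paired = pd.concat({"before": before, "after": after}, axis=1).dropna()
    return wilcoxon(paired["before"], paired["after"]).pvalue
```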
Discussion: The results showed that the SF-36 was less skewed and more homogeneous, with lower "floor" and "ceiling" effects, than the NHP. A considerable convergence in three of the five comparable domains one year postoperatively indicates an average convergent validity. The SF-36 showed a higher internal consistency, except for social functioning one year postoperatively, and was more responsive in detecting changes over time in the IC group. The NHP was more sensitive in discriminating among levels of ischaemia regarding pain and more able to detect changes in the CLI group. The attrition or loss of subjects (27%) in this study could have affected the outcome. Further analysis showed that there were no significant differences in quality of life, sex, age, method of treatment and severity of disease between the attrition group and those who completed the study. The fact that the NHP and SF-36 differ in their nature and content may limit the study design.
Therefore, the analyses in this study focused only on the comparable domains of the two instruments, including the basic domains of physical, mental and social health identified by the WHOQOL group [7]. A suitable quality of life instrument for patients with chronic lower limb ischaemia should not only be valid, reliable and responsive but also simple for elderly people to understand and complete. In this study there was no difference in response rate between the two instruments, and both seemed to be user-friendly and took about 5–10 minutes to complete. The findings strengthen earlier results suggesting that both scales are practical and acceptable for use in elderly patients [37,38]. A generic quality of life instrument, designed for a variety of populations and measuring a comprehensive set of health concepts, is likely to have problems with "ceiling" and "floor" effects [24]. In this study the NHP showed higher "ceiling" effects in all dimensions than the SF-36. There were minor "floor" effects, indicating the lowest possible quality of life, in both the NHP and SF-36. This is in accordance with Klevsgård et al [1], who also showed higher "ceiling" effects in social isolation, emotional reactions and energy for the NHP than the SF-36. Other studies have also reported fewer "ceiling" and "floor" effects in the SF-36 than in the NHP in patients with chronic obstructive pulmonary disease [39] and after a myocardial infarction [37]. The advantage of the SF-36 may be due to its use of a Likert-type response format with a number of possible different scores and its ability to detect positive as well as negative states of health, whereas the NHP items are dichotomous and state more extreme ends of ill health [39]. This could mean that a patient with acceptable initial scores fails to show improvement on the scale even if the clinical improvement is obvious. Furthermore, a false negative response will be more likely when a patient perceives himself or herself as having perfect functioning on a measure that only assesses severe dysfunction. The results confirm the importance of finding a quality of life instrument with a spectrum of dimensions that matches patients with chronic lower limb ischaemia, who often present with numerous and severe comorbid conditions. In this study the internal consistency was not as high as desirable for either instrument before revascularization, but both instruments exceeded the minimum internal consistency value of 0.7, except for social functioning in the SF-36 one year postoperatively. The SF-36, however, had considerably higher α-values in all other dimensions. Several studies have previously reported that the SF-36 has higher Cronbach's α values than the NHP, but the domains in which the highest and lowest values were estimated differ [37-40]. The findings suggest that it is not only the magnitude of the correlation among items, but also the number of items in the scale, that affects the internal consistency. For example, the domains of pain and social isolation in the NHP contain 8 and 5 items respectively, while bodily pain and social functioning in the SF-36 contain only 2 items. This is further strengthened by the fact that neither scale was sensitive enough to identify significant within-patient changes in social isolation and social functioning.
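The dependence of Cronbach's alpha on the number of items can be made concrete with the standardised form of alpha, α = k·r/(1 + (k − 1)·r), where k is the number of items and r the mean inter-item correlation. The correlation value in the short sketch below is purely illustrative, while the item counts mirror the example above.

```python
# Standardised Cronbach's alpha: alpha = k*r / (1 + (k - 1)*r), with k items and
# mean inter-item correlation r. r = 0.35 is purely illustrative; the item counts
# (2, 5 and 8) mirror the SF-36 and NHP examples discussed above.
def standardised_alpha(k: int, r: float) -> float:
    return k * r / (1 + (k - 1) * r)

for k in (2, 5, 8):
    print(f"{k}-item scale, mean inter-item r = 0.35: alpha = {standardised_alpha(k, 0.35):.2f}")
# 2 items -> 0.52, 5 items -> 0.73, 8 items -> 0.81
```

At a fixed inter-item correlation, alpha therefore rises with the number of items, which supports the point that item count, and not only item homogeneity, drives the coefficients reported here.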
Another explanation could be that the patients were a more homogeneous group before treatment, with similar problems affecting their quality of life, but one year postoperatively they had become more heterogeneous and represented different states of recovery [13]. Both instruments meet the reliability standards for group-level application in most respects, although neither achieved the degree of reliability that would be desirable in individual-based assessment. The results in this study showed significant convergent correlation coefficients between scores of the comparable dimensions, except for physical activity and social activity, indicating a considerable convergence of the NHP and SF-36 scales. Prieto et al [39] and Meyer-Rosberg et al [40] demonstrated similar results with an average convergent validity. Thus the NHP and SF-36 are relatively equal in validity, corroborating that the subscales probably reflect similar impacts of chronic lower limb ischaemia. However, social isolation in the NHP showed a higher correlation with mental health in the SF-36 and might measure more psychological aspects of social life, whilst social functioning in the SF-36 tends to assess social activities, as indicated by its higher correlation with energy in the NHP. Similarly, physical functioning in the SF-36 showed a higher correlation with energy and may reflect physical activities of daily living rather than physical mobility. This suggests that the SF-36 and NHP measure different aspects of physical and social activities. Analysis of the instruments' relative ability to discriminate among different levels of ischaemia could only demonstrate that patients with CLI had significantly more problems with pain and physical mobility before treatment than patients with IC, as measured by the NHP. Klevsgård et al [1] showed similar results, in that the NHP was more sensitive in discriminating deterioration in pain and physical mobility than the SF-36. In contrast, Brown et al [37] demonstrated that the SF-36 was more sensitive than the NHP for identifying people still troubled with angina or breathlessness after a myocardial infarction. Despite the lack of significant differences between patients with IC and patients with CLI, the NHP scale tends to be more sensitive in explaining the quality of life in this group of patients with regard to the dimensions of pain and physical mobility. The important issue is thus to consider how well the measurement method explains health-related phenomena which are significant for the particular targeted disease or group of patients. The SF-36 was the more responsive instrument in detecting changes in quality of life over time in patients with IC, including bodily pain and physical functioning one year postoperatively. However, in patients with CLI, the NHP was the more responsive instrument, with significant changes in pain and physical mobility, while the SF-36 showed a significant change only in bodily pain. Falcoz et al [38] also demonstrated that the SF-36 was more responsive than the NHP in detecting changes five weeks after cardiac surgery. In contrast, Klevsgård et al [1] showed that the NHP was more responsive in patients with chronic lower limb ischaemia one month after revascularization. The results of the present study support the TransAtlantic Inter-Society Consensus (TASC) [12] recommendation that the SF-36 should be used as a generic health outcome measure in patients with chronic lower limb ischaemia.
The SF-36 seems to be more sensitive than the NHP for detecting changes in quality of life in patients with IC. In the group of CLI patients, who have more problems with mobility and pain, it is harder to establish whether one questionnaire is superior to the other; the NHP could be the preferable instrument in this group of patients. Conclusion: The findings indicate that both the SF-36 and the NHP have acceptable degrees of reliability for group-level comparisons, and acceptable convergent and construct validity, one year after revascularization. Nevertheless, the SF-36 seems generally to have superior psychometric properties and was more suitable than the NHP for evaluating quality of life in patients with intermittent claudication. The NHP, however, discriminated better among levels of severity of ischaemia and was more responsive in detecting changes over time in patients with critical leg ischaemia.
Background: Different generic quality of life instruments such as the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) have revealed conflicting results in patients with chronic lower limb ischaemia in psychometric attributes in short-term evaluations. The aim of this study was to compare the NHP and the SF-36 regarding internal consistency reliability, validity, responsiveness and suitability as outcome measures in patients with lower limb ischaemia in a longitudinal perspective. Methods: 48 patients with intermittent claudication and 42 with critical ischaemia were included. Assessment was made before and one year after revascularization using comparable domains of the NHP and the SF-36 questionnaires. Results: The SF-36 was less skewed and more homogeneous than the NHP. There was an average convergent validity in three of the five comparable domains one year postoperatively. The SF-36 showed a higher internal consistency except for social functioning one-year postoperatively and was more responsive in detecting changes over time in patients with intermittent claudication. The NHP was more sensitive in discriminating among levels of ischaemia regarding pain and more able to detect changes in the critical ischaemia group. Conclusions: Both SF-36 and NHP have acceptable degrees of reliability for group-level comparisons, convergent and construct validity one year postoperatively. Nevertheless, the SF-36 has superior psychometric properties and was more suitable in patients with intermittent claudication. The NHP however, discriminated better among severity of ischaemia and was more responsive in patients with critical ischaemia.
Background: During the past few decades quality of life assessment has become one central outcome in treatment of patients with chronic lower limb ischaemia. Different generic quality of life instruments such as the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) [1,2] have previously been tested, revealing conflicting results in these patients according to psychometric attributes in short-term evaluations. The strengths and weakness of the NHP and the SF-36 scales are not extensively examined and further research is needed to establish which is the more appropriate and responsive quality of life instrument for patients with chronic lower limb ischaemia in the long term. The main goal of vascular surgical treatment is the relief of symptoms and improvement in patients quality of life. A majority of the patients are elderly and have generally widespread arterial disease with numbers of symptoms due to the chronic lower limb ischaemia, which may affect the patients' quality of life [3-5]. Intermittent claudication (IC) means leg pain constantly produced by walking or muscular activity and is relieved by rest, while critical leg ischaemia (CLI) means pain even at rest and problems with non-healing ulcers or gangrene [6]. It is important to identify dimensions which are influenced by the severity and nature of the disease when selecting a suitable quality of life instrument [7]. The World Health Organization QOL group [8] has identified and recommended five broad dimensions – physical and psychological health, social relationship perceptions, function and well-being – which should be included in a generic quality of life instrument. Generic instruments cover a broad range of dimensions and allow comparisons between different groups of patients. Disease-specific instruments, on the other hand, are specially designed for a particular disease, patient group or areas of function [9]. The functional scale, Walking Impairment Questionnaire (WIQ) [10] and quality of life instruments such as Intermittent Claudication Questionnaire (ICQ) [11] and Claudication Scale (CLAU-S) [12] are examples of disease-specific instruments which have been developed in recent years for patients with IC. However, at present there is no accepted disease-specific questionnaire for quality of life assessment in patients with CLI. Nevertheless, the TransAtlantic Inter-Society Consensus (TASC) [6] recommended that quality of life instruments should be used in all clinical trials and preferably include both generic and disease-specific quality of life measures. Outcome measures need to satisfy different criteria to be useful as a suitable health outcome instrument in clinical practice. Construct validity is one of the most important characteristics and is a lengthy and ongoing process [13]. An essential consideration is the instrument's ability to discriminate between different levels of the disease; another consideration is its reliability, which means the degree to which the instrument is free from random error and all items measure the same underlying attribute [14]. Further, the requirement for a useful outcome measure is the responsiveness in detecting small but important clinical changes of quality of life in patients following vascular interventions [13]. Finally the ideal quality of life instrument must also be acceptable to patients, simple and easy to use and preferably short. 
Comparisons among quality of life instruments and their psychometric characteristics and performance are needed to provide recommendations about their usefulness as outcome measures for these particular groups of patients. The aim of this study was to compare two generic quality of life questionnaires, the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36), regarding internal consistency reliability, validity, responsiveness and suitability as outcome measures in patients with lower limb ischaemia in a longitudinal perspective. Conclusion: The findings indicate that both the SF-36 and the NHP have acceptable degrees of reliability for group-level comparisons, and acceptable convergent and construct validity, one year after revascularization. Nevertheless, the SF-36 seems generally to have superior psychometric properties and was more suitable than the NHP for evaluating quality of life in patients with intermittent claudication. The NHP, however, discriminated better among levels of severity of ischaemia and was more responsive in detecting changes over time in patients with critical leg ischaemia.
Background: Different generic quality of life instruments such as the Nottingham Health Profile (NHP) and the Short Form 36 Health Survey (SF-36) have revealed conflicting results in patients with chronic lower limb ischaemia in psychometric attributes in short-term evaluations. The aim of this study was to compare the NHP and the SF-36 regarding internal consistency reliability, validity, responsiveness and suitability as outcome measures in patients with lower limb ischaemia in a longitudinal perspective. Methods: 48 patients with intermittent claudication and 42 with critical ischaemia were included. Assessment was made before and one year after revascularization using comparable domains of the NHP and the SF-36 questionnaires. Results: The SF-36 was less skewed and more homogeneous than the NHP. There was an average convergent validity in three of the five comparable domains one year postoperatively. The SF-36 showed a higher internal consistency except for social functioning one-year postoperatively and was more responsive in detecting changes over time in patients with intermittent claudication. The NHP was more sensitive in discriminating among levels of ischaemia regarding pain and more able to detect changes in the critical ischaemia group. Conclusions: Both SF-36 and NHP have acceptable degrees of reliability for group-level comparisons, convergent and construct validity one year postoperatively. Nevertheless, the SF-36 has superior psychometric properties and was more suitable in patients with intermittent claudication. The NHP however, discriminated better among severity of ischaemia and was more responsive in patients with critical ischaemia.
8,034
275
[ 697, 309, 211, 216, 71, 320, 1689, 293, 116, 237, 1448, 87 ]
13
[ "patients", "36", "sf", "nhp", "sf 36", "health", "physical", "ischaemia", "pain", "functioning" ]
[ "limb ischaemia sensitive", "leg ischaemia cli", "critical leg ischaemia", "limb ischaemia different", "limb ischaemia longitudinal" ]
null
[CONTENT] Lower limb ischaemia | Nottingham Health Profile | Reliability | Responsiveness | Short Form 36 | Validity [SUMMARY]
[CONTENT] Lower limb ischaemia | Nottingham Health Profile | Reliability | Responsiveness | Short Form 36 | Validity [SUMMARY]
null
[CONTENT] Lower limb ischaemia | Nottingham Health Profile | Reliability | Responsiveness | Short Form 36 | Validity [SUMMARY]
[CONTENT] Lower limb ischaemia | Nottingham Health Profile | Reliability | Responsiveness | Short Form 36 | Validity [SUMMARY]
[CONTENT] Lower limb ischaemia | Nottingham Health Profile | Reliability | Responsiveness | Short Form 36 | Validity [SUMMARY]
[CONTENT] Aged | Female | Humans | Intermittent Claudication | Ischemia | Longitudinal Studies | Lower Extremity | Male | Middle Aged | Outcome and Process Assessment, Health Care | Psychometrics | Quality of Life | Reproducibility of Results | Sensitivity and Specificity | Sickness Impact Profile | Surveys and Questionnaires | Sweden [SUMMARY]
[CONTENT] Aged | Female | Humans | Intermittent Claudication | Ischemia | Longitudinal Studies | Lower Extremity | Male | Middle Aged | Outcome and Process Assessment, Health Care | Psychometrics | Quality of Life | Reproducibility of Results | Sensitivity and Specificity | Sickness Impact Profile | Surveys and Questionnaires | Sweden [SUMMARY]
null
[CONTENT] Aged | Female | Humans | Intermittent Claudication | Ischemia | Longitudinal Studies | Lower Extremity | Male | Middle Aged | Outcome and Process Assessment, Health Care | Psychometrics | Quality of Life | Reproducibility of Results | Sensitivity and Specificity | Sickness Impact Profile | Surveys and Questionnaires | Sweden [SUMMARY]
[CONTENT] Aged | Female | Humans | Intermittent Claudication | Ischemia | Longitudinal Studies | Lower Extremity | Male | Middle Aged | Outcome and Process Assessment, Health Care | Psychometrics | Quality of Life | Reproducibility of Results | Sensitivity and Specificity | Sickness Impact Profile | Surveys and Questionnaires | Sweden [SUMMARY]
[CONTENT] Aged | Female | Humans | Intermittent Claudication | Ischemia | Longitudinal Studies | Lower Extremity | Male | Middle Aged | Outcome and Process Assessment, Health Care | Psychometrics | Quality of Life | Reproducibility of Results | Sensitivity and Specificity | Sickness Impact Profile | Surveys and Questionnaires | Sweden [SUMMARY]
[CONTENT] limb ischaemia sensitive | leg ischaemia cli | critical leg ischaemia | limb ischaemia different | limb ischaemia longitudinal [SUMMARY]
[CONTENT] limb ischaemia sensitive | leg ischaemia cli | critical leg ischaemia | limb ischaemia different | limb ischaemia longitudinal [SUMMARY]
null
[CONTENT] limb ischaemia sensitive | leg ischaemia cli | critical leg ischaemia | limb ischaemia different | limb ischaemia longitudinal [SUMMARY]
[CONTENT] limb ischaemia sensitive | leg ischaemia cli | critical leg ischaemia | limb ischaemia different | limb ischaemia longitudinal [SUMMARY]
[CONTENT] limb ischaemia sensitive | leg ischaemia cli | critical leg ischaemia | limb ischaemia different | limb ischaemia longitudinal [SUMMARY]
[CONTENT] patients | 36 | sf | nhp | sf 36 | health | physical | ischaemia | pain | functioning [SUMMARY]
[CONTENT] patients | 36 | sf | nhp | sf 36 | health | physical | ischaemia | pain | functioning [SUMMARY]
null
[CONTENT] patients | 36 | sf | nhp | sf 36 | health | physical | ischaemia | pain | functioning [SUMMARY]
[CONTENT] patients | 36 | sf | nhp | sf 36 | health | physical | ischaemia | pain | functioning [SUMMARY]
[CONTENT] patients | 36 | sf | nhp | sf 36 | health | physical | ischaemia | pain | functioning [SUMMARY]
[CONTENT] quality | life | quality life | instrument | outcome | instruments | patients | disease | measures | life instruments [SUMMARY]
[CONTENT] health | patients | 36 | test | swedish | study | sf | sf 36 | problems | revascularization [SUMMARY]
null
[CONTENT] nhp | superior psychometric properties | level comparisons convergent construct | evaluating quality | nhp acceptable | claudication nhp | claudication nhp discriminated | claudication nhp discriminated better | nhp acceptable degrees | nhp acceptable degrees reliability [SUMMARY]
[CONTENT] patients | 36 | nhp | sf | sf 36 | health | ischaemia | physical | functioning | pain [SUMMARY]
[CONTENT] patients | 36 | nhp | sf | sf 36 | health | ischaemia | physical | functioning | pain [SUMMARY]
[CONTENT] the Nottingham Health Profile (NHP | the Short Form 36 | SF-36 ||| NHP | SF-36 [SUMMARY]
[CONTENT] 48 | 42 ||| before and one year | NHP | SF-36 [SUMMARY]
null
[CONTENT] SF-36 | NHP | one year ||| SF-36 ||| NHP [SUMMARY]
[CONTENT] the Nottingham Health Profile (NHP | the Short Form 36 | SF-36 ||| NHP | SF-36 ||| 48 | 42 ||| before and one year | NHP | SF-36 ||| ||| SF-36 | NHP ||| three | five | one year ||| SF-36 | one-year ||| NHP ||| SF-36 | NHP | one year ||| SF-36 ||| NHP [SUMMARY]
[CONTENT] the Nottingham Health Profile (NHP | the Short Form 36 | SF-36 ||| NHP | SF-36 ||| 48 | 42 ||| before and one year | NHP | SF-36 ||| ||| SF-36 | NHP ||| three | five | one year ||| SF-36 | one-year ||| NHP ||| SF-36 | NHP | one year ||| SF-36 ||| NHP [SUMMARY]
Early immunisation with dendritic cells after allogeneic bone marrow transplantation elicits graft vs tumour reactivity.
23511628
Perspectives of immunotherapy to cancer mediated by bone marrow transplantation (BMT) in conjunction with dendritic cell (DC)-mediated immune sensitisation have yielded modest success so far. In this study, we assessed the impact of DC on graft vs tumour (GvT) reactions triggered by allogeneic BMT.
BACKGROUND
H2K(a) mice implanted with congenic subcutaneous Neuro-2a neuroblastoma (NB, H2K(a)) tumours were irradiated and grafted with allogeneic H2K(b) bone marrow cells (BMC) followed by immunisation with tumour-inexperienced or tumour-pulsed DC.
METHODS
Immunisation with tumour-pulsed donor DC after allogeneic BMT suppressed tumour growth through induction of T cell-mediated NB cell lysis. Early post-transplant administration of DC was more effective than delayed immunisation, with similar efficacy of DC inoculated into the tumour and intravenously. In addition, tumour-inexperienced DC were as effective as tumour-pulsed DC in suppression of tumour growth. Immunisation with DC did not impact quantitative immune reconstitution; however, it enhanced T-cell maturation, as evident from interferon-γ (IFN-γ) secretion, proliferation in response to mitogenic stimulation and tumour cell lysis in vitro. Dendritic cells potentiate GvT reactivity both through activation of T cells and through specific sensitisation against tumour antigens. We found that during pulsing with tumour lysate DC also elaborate a factor that selectively inhibits lymphocyte proliferation, which is, however, abolished by humoral and DC-mediated lymphocyte activation.
RESULTS
These data reveal complex involvement of antigen-presenting cells in GvT reactions, suggesting that the limited success in clinical application is not a result of limited efficacy but suboptimal implementation. Although DC can amplify soluble signals from NB lysates that inhibit lymphocyte proliferation, early administration of DC is a dominant factor in suppression of tumour growth.
CONCLUSION
[ "Animals", "Antigen-Presenting Cells", "Bone Marrow Transplantation", "Dendritic Cells", "Graft vs Tumor Effect", "Immunization, Passive", "Lymphocyte Activation", "Mice", "Mice, Inbred C57BL", "Neuroblastoma", "T-Lymphocytes, Cytotoxic", "Transplantation, Homologous" ]
3619065
null
null
null
null
Results
Impact of DC administration on tumour growth: In the first stage, we assessed the impact of donor DC on graft vs tumour reactivity induced by allogeneic BMT using the experimental setting delineated in Figure 1A: Neuro-2a cells were implanted subcutaneously and after 5 days the mice were irradiated at 700 rad and grafted with allogeneic (H2Kb→H2Ka) BMC. Dendritic cells (H2Kb) were derived in vitro from donor bone marrow and were pulsed with Neuro-2a lysates (DCneuro2a) before infusion adjacent to the subcutaneous tumours. Inoculation of DC on day +7 post transplant caused a significant reduction in tumour growth rates (P<0.01 vs allo-BMT; Figure 1B), indicating effective immunisation against the tumour. In this analysis, we excluded tumours that regressed completely to facilitate the statistical analysis of growth rates. Immunisation with DC resulted in complete regression of 3 out of 17 tumours (18%), whereas only 1 of 21 tumours subsided completely after allogeneic BMT. The allografted mice displayed no signs of GVHD irrespective of DC inoculation, as determined by clinical monitoring and validated by liver histology. To test whether DC immunisation was mediated by initiation and/or propagation of tumour-reactive T cells, splenic lymphocytes were assessed for direct lysis of Neuro-2a cells in vitro. Lymphocytes from mice immunised with tumour-pulsed DC (DCNeuro2a) displayed more potent cytolytic activity as compared with allografted mice (P<0.005 at the lowest ratio), decreasing the effective target:effector ratio (Figure 1C). These data confirmed that tumour growth suppression is mediated by direct tumour-lytic activity of lymphocytes in the allografted mice. Qualitative effects of DC on immune reconstitution after transplantation: To evaluate the mechanism of action of immunisation with DC, we monitored quantitative and qualitative immune reconstitution at 4 weeks after transplantation.
Dendritic cells had little impact on reconstitution of spleen cellularity, which included delayed recovery of T cells (Figure 2A). Immaturity of T cells after allogeneic BMT was recognised from the low output of interferon-γ (IFN-γ), which was, however, somewhat elevated following DC immunisation (Figure 2B). The superior cytokine responses following administration of donor DC prompted more detailed functional analysis of the lymphocytes. At 4 weeks post transplant, splenic lymphocytes from allografted mice immunised with DC displayed higher proliferative responses under mitogenic stimulation than those from naïve (unmanipulated) B6 mice and allografted mice (P<0.01, Figure 2C). Thus, IFN-γ production and proliferation were consistently improved by immunisation with tumour-pulsed donor DC. To further refine the specificity of DC-mediated stimulation, lymphocytes were restimulated in vitro with naïve (antigen-inexperienced) and tumour-pulsed donor DC. Lymphocytes from both control (naïve) and allografted mice were equally stimulated by both types of DC. However, the proliferation of lymphocytes from mice immunised with tumour-pulsed donor DC was further amplified by restimulation with tumour-pulsed DC as compared with tumour-inexperienced DC in vitro (P<0.05), demonstrating enhanced antigen-specific responsiveness of the GvT effectors. These data delineate a scenario where DC participate in maturation of the reconstituted immune system (non-specific mitogenic stimulation), and also in specific sensitisation against tumour cells (restimulation with tumour-pulsed DC). The most specific test of cytotoxic GvT effectors is the capacity to lyse tumour cells in vitro. As expected, lymphocytes from allografted mice with and without DC immunisation displayed more potent tumour-lytic activity in vitro than tumour-inexperienced lymphocytes from naïve mice (Figure 2D). However, restimulation with tumour-pulsed DC in vitro elicited more potent tumour lysis than restimulation with naïve DC, pointing to an antigen-specific reaction against the tumour. Therefore, lymphocytes had acquired antigen specificity in allografted mice even in the absence of donor DC infusion, and immunisation with tumour-pulsed donor DC caused only a marginal increase in tumour cell lysis. These data suggest that amplification of immune reactivity mediated by DC is more significant than sensitisation to tumour antigens.
Optimal conditions of DC infusion: In view of the beneficial effect of DC immunisation on GvT effectors recovering from lymphopenia following stem cell transplantation, we sought to characterise several parameters. We reasoned that delayed infusion of DC after transplantation might be more effective in stimulation of GvT reactivity, under conditions of superior quantitative recovery and maturation of the lymphocytes. This assumption was not substantiated, as early immunisation with tumour-pulsed DC on day +7 was more potent in inhibition of tumour growth than late DC administration on day +14 (Figure 3A). To determine the basis of this differential effect of the time of immunisation, we assessed the proliferative responses of splenic lymphocytes from the immunised mice. Non-specific mitogenic stimulation resulted in higher proliferation rates of lymphocytes from mice immunised on day +7 (Figure 3B), indicating a general activation state. Furthermore, restimulation of the lymphocytes with tumour-pulsed DC in vitro resulted in robust proliferative responses, evidence of tumour-specific sensitisation of lymphocytes in the allografted mice. These tumour growth curves also emphasise that naïve (antigen-inexperienced) DC have suppressive effects similar to those of tumour-pulsed DC at both times of immunisation (Figure 3A). Consistently, lymphocytes from mice immunised with naïve and tumour-pulsed DC showed similar cytolytic activity against Neuro-2a cells in vitro (Figure 3C), indicating effective tumour-specific sensitisation mediated by naïve DC. The effective uptake and presentation of tumour antigens by naïve DC in vivo was affirmed by the robust tumour-lytic activity of lymphocytes upon restimulation with tumour-sensitised DC in vitro. Taken together, these data demonstrate efficient tumour antigen uptake and presentation by naïve donor DC, suggesting that presensitisation to the tumour is not required to induce lymphotoxic anti-tumour reactivity.
The experimental setting used so far relied on inoculation of the donor DC adjacent to the tumour; in the clinical setting, however, DC are required to migrate to and operate at remote sites. We therefore assessed the effect of local vs systemic (intravenous) immunisation with donor DC on tumour growth rates. Tumour suppression was equally affected by DC inoculation adjacent to the tumours and intravenously (Figure 3D), emphasising effective DC migration to distant sites. Tumour-associated suppressive mechanisms: In a previous study, we reported an interesting decrease in proliferation of lymphocytes from tumour-bearing mice exposed to tumour lysate in vitro, suggesting the presence of an inhibitory factor in the soluble fraction of the Neuro-2a extract (Ash et al, 2009). The proliferation of splenocytes from allografted mice immunised with tumour-pulsed DC on day +7 was markedly increased as compared with all other experimental groups, including allografted mice, recipients of DCnaive and infusion of DC on day +14 (Figure 4A).
To determine whether this inhibitory mechanism is modulated by administration of donor DC, the responses of lymphocytes to mitogenic stimulation were assessed in the presence of tumour lysates. Exposure to Neuro-2a lysate in vitro resulted in suppression of proliferation in one particular group (Figure 4B), suggesting that a tumour-derived inhibitory signal was amplified by immunisation with tumour-pulsed DC early after transplantation. Notably, this effect was restricted to lymphocytes from mice immunised with tumour-pulsed DC and was not seen in lymphocytes of mice immunised with tumour-inexperienced DC. These data suggest that a soluble inhibitory factor was included in the repertoire of antigens processed by DC during pulsing with tumour lysate, and was presented to newly developed T cells in the allografted mice. However, restimulation of lymphocytes with unstimulated and tumour-pulsed DC ex vivo amplified antigen-specific responses (Figures 2C and 3B), indicating that this proliferation-inhibitory mechanism does not affect the process of tumour antigen processing and presentation. In addition to reversal of this inhibitory mechanism by restimulation with DC in vitro, proliferative anergy induced by tumour lysate was also reversed by non-specific stimulation of the lymphocytes with LPS in the absence of DC (Figure 4C). Therefore, immune activation overcomes the proliferation-inhibitory signal that is processed by DC during pulsing with tumour antigens.
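For illustration, the tumour measurements underlying the growth comparisons above reduce to the volume formula quoted in the methods below (width squared times length times 0.4). The following sketch shows that calculation together with a simple between-group test; the group data are hypothetical, and the one-way ANOVA is only a stand-in for the post-hoc Scheffé comparison the authors report.

```python
# Hypothetical sketch: caliper measurements converted to tumour volume (mm^3) with
# the formula quoted in the methods (width^2 x length x 0.4), followed by a simple
# between-group comparison. Group values are invented for illustration, and the
# one-way ANOVA is only a stand-in for the post-hoc Scheffe test the authors used.
from scipy.stats import f_oneway

def tumour_volume(width_mm: float, length_mm: float) -> float:
    """Volume estimate from two perpendicular caliper measurements."""
    return width_mm ** 2 * length_mm * 0.4

allo_bmt = [tumour_volume(8.0, 11.0), tumour_volume(7.5, 10.0), tumour_volume(9.0, 12.0)]
allo_bmt_dc = [tumour_volume(5.0, 7.0), tumour_volume(4.5, 6.0), tumour_volume(6.0, 8.0)]

f_stat, p_value = f_oneway(allo_bmt, allo_bmt_dc)
print(f"Group comparison of end-point volumes: F = {f_stat:.2f}, P = {p_value:.3f}")
```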
null
null
[ "Animal model", "Tumour cells", "Conditioning, transplantation and assessment of reconstitution", "Cell preparation", "Dendritic cells", "Flow cytometry", "Lymphocyte functional assays", "Cytotoxic assays", "Statistical analysis", "Impact of DC administration on tumour growth", "Qualitative effects of DC on immune reconstitution after transplantation", "Optimal conditions of DC infusion", "Tumour-associated suppressive mechanisms" ]
[ "Mice used in this study were A/J (H2Ka, CD45.2), C57BL/6 (H2Kb, CD45.2) and B6.SJL-Ptprca Pepcb/BoyJ (H2Kb, CD45.1) purchased from Jackson Laboratories (Bar Harbor, ME, USA). Mice were housed in a barrier facility in accordance with the guidelines of the Institutional Animal Care and Use Committee.", "Neuro-2a wild-type cells, a mouse NB of strain A origin (H2Ka), were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured to a maximum of 12 passages in Minimal Essential Medium (Gibco, Grand Island, NY, USA), supplemented with 10% fetal bovine serum (FBS), MEM non-essential amino acids, 100 units per ml penicillin, and 100 mg ml−1 streptomycin (Biological Industries, Beit Haemek, Israel). Tumours were induced by subcutaneous (s.c.) implantation of 10^6 Neuro-2a cells suspended in 100 μl of phosphate-buffered saline (PBS). Tumour growth was measured with a caliper and the volume (mm^3) was calculated as width^2 × length × 0.4. Tumour lysates were prepared by repeated cycles (× 3) of freezing in liquid nitrogen and thawing of Neuro-2a cells (Ash et al, 2009). Tumours that regressed were excluded from tumour growth curves, to allow separate evaluation of the rates of growth.", "Recipients were conditioned with total body irradiation (TBI) at 700 rad using an X-ray irradiator (Rad Source Technologies Inc, Alpharetta, GA, USA) at a rate of 106 rad per min (Kaminitz et al, 2009). After 6 h, cells were injected into the lateral tail vein in 200 μl PBS. GVHD was assessed using a semiquantitative clinical scale including weight loss, posture (hyperkeratosis of the foot pads impairs movement), activity, diffuse erythema (particularly of the ear) or dermatitis, and diarrhoea (Ash et al, 2010). Spleen reconstitution was quantified from the total numbers of splenocytes and the fractions of different subsets measured by flow cytometry (Ash et al, 2009).", "Whole BMC (wBMC) were harvested from femurs and tibia of donors, and low-density cells were collected as previously described (Kaminitz et al, 2009). Spleens were harvested from mice, minced, passed through 40 μm mesh and dispersed into a single-cell suspension in PBS (Ash et al, 2009). Red blood cells were lysed with medium containing 0.83% ammonium chloride, 0.1% potassium bicarbonate, 0.03% disodium EDTA (Biological Industries). After 4 min, the reaction was arrested with an excess of ice-cold PBS. T cells were enriched by elution through a cotton wool column, which preferentially retains B lymphocytes and myeloid cells that differ in charge from the eluted T cells, or by immunomagnetic depletion using hybridoma-derived antibodies against GR-1, Mac-1 and B220 (Dynal Inc, Lake Success, NY, USA).", "Mononuclear cells were harvested from bone marrow samples by gradient centrifugation over murine lympholyte (Cedarlane, Burlington, ON, Canada) and cultured (10^6 cells per ml) in low-LPS RPMI-1640 (<10 pg ml−1) supplemented with 10% FBS, 1% L-glutamine, 1% sodium pyruvate, 1% α-MEM non-essential amino acids, 0.1% Hepes buffer and 1% Pen/Strep (Biological Industries and Gibco) (Ash et al, 2010). The medium was supplemented with 50 ng ml−1 mouse recombinant (mr)GM-CSF and 10 ng ml−1 mrIL-4 on intermittent days (PeproTech, Rocky Hill, NJ, USA). Immunophenotypic characterisation after 6 days of culture showed expression of CD11c=85±4%, CD86=65±6% and CD205=21±3% (n=5). Control DC were incubated for an additional 24 h in DC medium without growth factors. 
Pulsing with tumour antigens was performed by co-incubation of DC with tumour lysate at an approximate DC:tumour cell ratio of 3:1.", "Measurements were performed with a Vantage SE flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA). Positive staining was determined on a log scale, normalised with control cells stained with isotype control mAb. Donor chimerism was determined from the percentage of donor and host peripheral blood lymphocytes (PBL) and splenocytes (Kaminitz et al, 2009). Blood was collected in heparinised serum vials in 200 μl M199, centrifuged over 1.5 ml lymphocyte separation medium (Cedarlane), and red blood cells were lysed. Nucleated cells were incubated for 45 min at 4°C with phycoerythrin (PE)-anti-H2Kb (Caltag, Carlsbad, CA, USA) and fluorescein isothiocyanate (FITC)-anti-H2Kk mAb (eBioscience, San Diego, CA, USA), cross-reactive against H2Ka (Ash et al, 2009). Minor antigen disparity was assessed using CD45.1-PE and CD45.2-FITC antibodies (eBioscience). Lymphocytes were quantified using CD4-allophycocyanin and CD8-FITC antibodies, B220-PerCP and NK1.1-PE, and DC were characterised using CD86-PE and CD11c-PerCP antibodies (BD Pharmingen, San Diego, CA, USA).", "For measurements of proliferation, cells were labelled with 2.5 μM 5-(and-6-)-carboxyfluorescein diacetate succinimidyl ester (CFSE, Molecular Probes, Eugene, OR, USA) and plated on petri dishes (5 × 10⁷ cells) for 45 min to enrich for lymphocytes. After 45 min, non-adherent cells were collected, washed and incubated in DMEM (Ash et al, 2009). Triplicate cultures were harvested after 3–5 days and the dilution of CFSE was analysed by flow cytometry by gating on the live lymphocytes. Data were analysed using the ModFit software (Verity Software House, Topsham, ME, USA).\nInterferon-γ was determined in supernatants of lymphocyte suspensions in 96-well microtiter plates using the murine IFN-γ ELISA kit (BD Pharmingen). Absorbance was read in triplicate in an ELISA plate reader (Biotec Inc, Suffolk, UK). Appropriate dilutions within the measurement range were quantified according to a standard calibration curve (pg ml⁻¹).", "Effector splenocytes harvested from naïve mice and chimeras were depleted of red blood cells and passed through a wool mesh to enrich for T lymphocytes (∼70%). These cells were incubated with 5 × 10⁵ Neuro-2a target cells for 7 h at 37°C in 150 μl at target:effector ratios of 1:10 to 1:100. Cytolysis was quantified by lactate dehydrogenase (LDH) release and normalised for background values (Ash et al, 2009).", "Data are presented as means±standard deviations for each experimental protocol. Results in each experimental group were evaluated for reproducibility by linear regression of duplicate measurements. Differences between the experimental protocols were estimated with a post-hoc Scheffé test and significance was considered at P<0.05.", "In the first stage, we assessed the impact of donor DC on graft vs tumour reactivity induced by allogeneic BMT using the experimental setting delineated in Figure 1A: Neuro-2a cells were implanted subcutaneously and after 5 days the mice were irradiated at 700 rad and grafted with allogeneic (H2Kb→H2Ka) BMC. Dendritic cells (H2Kb) were derived in vitro from donor bone marrow and were pulsed with Neuro-2a lysates (DCNeuro2a) before infusion adjacent to the subcutaneous tumours. Inoculation of DC on day +7 post transplant caused a significant reduction in tumour growth rates (P<0.01 vs allo-BMT; Figure 1B), indicating effective immunisation against the tumour.
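The cytotoxic assay above quantifies lysis from LDH release "normalised for background values" without spelling out the arithmetic; assuming the conventional background-corrected LDH formula (an assumption, not a statement from the authors), a minimal sketch with hypothetical absorbance readings looks like this.

```python
def percent_specific_lysis(experimental, effector_spontaneous,
                           target_spontaneous, target_maximum):
    """Background-corrected LDH release as a percentage of maximal target release:
    100 * (exp - effector spont - target spont) / (target max - target spont)."""
    return 100.0 * (experimental - effector_spontaneous - target_spontaneous) \
        / (target_maximum - target_spontaneous)

# Hypothetical absorbance readings for one target:effector ratio
print(round(percent_specific_lysis(0.92, 0.10, 0.20, 1.60), 1))  # 44.3
```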
In this analysis, we excluded tumours that regressed completely to facilitate the statistical analysis of growth rates. Immunisation with DC resulted in complete regression of 3 out of 17 tumours (18%), whereas only 1 of 21 tumours regressed completely after allogeneic BMT alone. The allografted mice displayed no signs of GVHD irrespective of DC inoculation, as determined by clinical monitoring and validated by liver histology.\nTo test whether DC immunisation was mediated by initiation and/or propagation of tumour-reactive T cells, splenic lymphocytes were assessed for direct lysis of Neuro-2a cells in vitro. Lymphocytes from mice immunised with tumour-pulsed DC (DCNeuro2a) displayed more potent cytolytic activity than those from allografted mice (P<0.005 at the lowest ratio), decreasing the effective target:effector ratio (Figure 1C). These data confirmed that tumour growth suppression is mediated by direct tumour-lytic activity of lymphocytes in the allografted mice.", "To evaluate the mechanism of action of immunisation with DC, we monitored quantitative and qualitative immune reconstitution at 4 weeks after transplantation. Dendritic cells had little impact on reconstitution of spleen cellularity, which included delayed recovery of T cells (Figure 2A). Immaturity of T cells after allogeneic BMT was recognised from the low output of interferon-γ (IFN-γ), which was, however, somewhat elevated following DC immunisation (Figure 2B). The superior cytokine responses following administration of donor DC prompted more detailed functional analysis of the lymphocytes.\nAt 4 weeks post transplant, splenic lymphocytes from allografted mice immunised with DC displayed higher proliferative responses under mitogenic stimulation than those from naïve (unmanipulated) B6 mice and allografted mice (P<0.01, Figure 2C). Thus, IFN-γ production and proliferation were consistently improved by immunisation with tumour-pulsed donor DC. To further refine the specificity of DC-mediated stimulation, lymphocytes were restimulated in vitro with naïve (antigen-inexperienced) and tumour-pulsed donor DC. Lymphocytes from both control (naïve) and allografted mice were equally stimulated by both types of DC. However, the proliferation of lymphocytes from mice immunised with tumour-pulsed donor DC was further amplified by restimulation with tumour-pulsed DC as compared with tumour-inexperienced DC in vitro (P<0.05), demonstrating enhanced antigen-specific responsiveness of the GvT effectors. These data delineate a scenario where DC participate in maturation of the reconstituted immune system (non-specific mitogenic stimulation) and also in specific sensitisation against tumour cells (restimulation with tumour-pulsed DC).\nThe most specific test of cytotoxic GvT effectors is their capacity to lyse tumour cells in vitro. As expected, lymphocytes from allografted mice with and without DC immunisation displayed more potent tumour-lytic activity in vitro than tumour-inexperienced lymphocytes from naïve mice (Figure 2D). However, restimulation with tumour-pulsed DC in vitro elicited more potent tumour lysis than restimulation with naïve DC, pointing to an antigen-specific reaction against the tumour. Therefore, lymphocytes acquired antigen specificity in allografted mice even in the absence of donor DC infusion, and immunisation with tumour-pulsed donor DC caused only a marginal increase in tumour cell lysis.
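The IFN-γ concentrations reported above are read off a standard calibration curve; since the curve model is not specified in the methods, the sketch below assumes a plain linear fit (many ELISA kits use a four-parameter logistic instead) and uses hypothetical standards and sample readings.

```python
import numpy as np

def interpolate_ifng_pg_per_ml(sample_od, standard_od, standard_pg, dilution_factor=1.0):
    """Fit a straight line to the calibration points (OD vs pg/ml) and
    convert sample absorbances to concentrations, correcting for dilution."""
    slope, intercept = np.polyfit(standard_od, standard_pg, 1)
    return (slope * np.asarray(sample_od, dtype=float) + intercept) * dilution_factor

standard_od = [0.05, 0.15, 0.35, 0.70, 1.40]   # hypothetical calibration absorbances
standard_pg = [0, 62.5, 250, 500, 1000]        # hypothetical standard concentrations
print(interpolate_ifng_pg_per_ml([0.40, 0.42, 0.45], standard_od, standard_pg, dilution_factor=2))
```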
These data suggest that amplification of immune reactivity mediated by DC is more significant than sensitisation to tumour antigens.", "In view of the beneficial effect of DC immunisation on GvT effectors recovering from lymphopenia after stem cell transplantation, we sought to characterise several parameters of the procedure. We reasoned that delayed infusion of DC after transplantation might be more effective in stimulation of GvT reactivity, under conditions of superior quantitative recovery and maturation of the lymphocytes. This assumption was not substantiated, as early immunisation with tumour-pulsed DC on day +7 was more potent in inhibition of tumour growth than late DC administration on day +14 (Figure 3A). To determine the basis of this differential effect of the time of immunisation, we assessed the proliferative responses of splenic lymphocytes from the immunised mice. Non-specific mitogenic stimulation resulted in higher proliferation rates of lymphocytes from mice immunised on day +7 (Figure 3B), indicating a general activation state. Furthermore, restimulation of the lymphocytes with tumour-pulsed DC in vitro resulted in robust proliferative responses, evidence of tumour-specific sensitisation of lymphocytes in the allografted mice.\nThese tumour growth curves also emphasise that naïve (antigen-inexperienced) DC have suppressive effects similar to those of tumour-pulsed DC at both times of immunisation (Figure 3A). Consistently, lymphocytes from mice immunised with naïve and tumour-pulsed DC showed similar cytolytic activity against Neuro-2a cells in vitro (Figure 3C), indicating effective tumour-specific sensitisation mediated by naïve DC. The effective uptake and presentation of tumour antigens by naïve DC in vivo was affirmed by the robust tumour-lytic activity of lymphocytes upon restimulation with tumour-sensitised DC in vitro. Taken together, these data demonstrate efficient tumour antigen uptake and presentation by naïve donor DC, suggesting that presensitisation to the tumour is not required to induce lymphotoxic anti-tumour reactivity.\nThe experimental setting used so far relied on inoculation of the donor DC adjacent to the tumour; in the clinical setting, however, DC are required to migrate to and operate at remote sites. We therefore assessed the effect of local vs systemic (intravenous) immunisation with donor DC on tumour growth rates. Tumour suppression was equally effective whether DC were inoculated adjacent to the tumours or intravenously (Figure 3D), emphasising effective DC migration to distant sites.", "In a previous study, we reported a decrease in the proliferation of lymphocytes from tumour-bearing mice upon exposure to tumour lysate in vitro, suggesting the presence of an inhibitory factor in the soluble fraction of the Neuro-2a extract (Ash et al, 2009). The proliferation of splenocytes from allografted mice immunised with tumour-pulsed DC on day +7 was markedly increased as compared with all other experimental groups, including allografted mice, recipients of DCnaive and recipients of DC on day +14 (Figure 4A). To determine whether this inhibitory mechanism is modulated by administration of donor DC, the responses of lymphocytes to mitogenic stimulation were assessed in the presence of tumour lysates. Exposure to Neuro-2a lysate in vitro resulted in suppression of proliferation in this particular group (Figure 4B), suggesting that a tumour-derived inhibitory signal was amplified by immunisation with tumour-pulsed DC early after transplantation.
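Group comparisons above are screened with a post-hoc Scheffé procedure at P<0.05; the following from-scratch sketch of the Scheffé criterion (an illustrative implementation with made-up tumour volumes, not the authors' analysis code) shows how each pairwise difference can be compared against the Scheffé critical value.

```python
import numpy as np
from scipy import stats

def scheffe_pairwise(groups, alpha=0.05):
    """Scheffé post-hoc screening of all pairwise group differences after a
    one-way ANOVA. `groups` maps a label to a 1-D array of measurements."""
    names = list(groups)
    data = [np.asarray(groups[name], dtype=float) for name in names]
    k = len(data)
    n_total = sum(len(d) for d in data)
    # Within-group (error) mean square from the one-way ANOVA decomposition
    ss_within = sum(((d - d.mean()) ** 2).sum() for d in data)
    ms_within = ss_within / (n_total - k)
    f_crit = stats.f.ppf(1 - alpha, k - 1, n_total - k)
    results = []
    for i in range(k):
        for j in range(i + 1, k):
            diff = data[i].mean() - data[j].mean()
            # Scheffé criterion: diff^2 > (k-1) * F_crit * MSW * (1/ni + 1/nj)
            crit = (k - 1) * f_crit * ms_within * (1.0 / len(data[i]) + 1.0 / len(data[j]))
            results.append((names[i], names[j], round(diff, 1), diff ** 2 > crit))
    return results

# Made-up tumour volumes (mm^3) for three hypothetical treatment arms
example = {
    "allo-BMT": [850, 910, 780, 990],
    "allo-BMT + DC day +7": [420, 510, 460, 380],
    "allo-BMT + DC day +14": [700, 760, 690, 810],
}
for a, b, diff, significant in scheffe_pairwise(example):
    print(a, "vs", b, diff, "significant" if significant else "n.s.")
```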
Notably, this effect was restricted to lymphocytes from mice immunised with tumour-pulsed DC and was not observed in lymphocytes from mice immunised with tumour-inexperienced DC. These data suggest that a soluble inhibitory factor was included in the repertoire of antigens processed by DC during pulsing with tumour lysate, and was presented to the newly developed T cells in the allografted mice. However, restimulation of lymphocytes with unstimulated and tumour-pulsed DC ex vivo amplified antigen-specific responses (Figures 2C and 3B), indicating that this proliferation-inhibitory mechanism does not affect the process of tumour antigen processing and presentation. In addition to reversal of this inhibitory mechanism by restimulation with DC in vitro, proliferative anergy induced by tumour lysate was also reversed by non-specific stimulation of the lymphocytes with LPS in the absence of DC (Figure 4C). Therefore, immune activation overcomes the proliferation-inhibitory signal that is processed by DC during pulsing with tumour antigens." ]
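Donor chimerism in the flow-cytometry methods above is expressed as the percentage of donor-type cells among gated donor plus host lymphocytes; the trivial sketch below (hypothetical gate counts, not data from the study) makes that calculation explicit.

```python
def donor_chimerism_percent(donor_events: int, host_events: int) -> float:
    """Per cent donor-type (e.g. H2Kb+) cells among all gated donor and host lymphocytes."""
    total = donor_events + host_events
    if total == 0:
        raise ValueError("no gated events")
    return 100.0 * donor_events / total

# Hypothetical gate counts from one blood sample
print(donor_chimerism_percent(9400, 600))  # 94.0
```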
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
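Proliferation above is read out as CFSE dilution deconvoluted in ModFit; as a rough, assumed stand-in for that analysis (not the authors' method), the fraction of cells that have divided at least once can be approximated by counting events below the undivided-generation gate, as in this sketch with simulated intensities.

```python
import numpy as np

def fraction_divided(cfse_intensities, undivided_gate_low):
    """Fraction of live-gated lymphocytes whose CFSE signal has diluted below
    the undivided-generation gate (a crude stand-in for ModFit deconvolution)."""
    cfse = np.asarray(cfse_intensities, dtype=float)
    return float((cfse < undivided_gate_low).mean())

# Simulated intensities: an undivided parent peak plus a CFSE-diluted population
rng = np.random.default_rng(0)
undivided = rng.normal(10_000, 1_000, 4_000)
divided = rng.normal(2_500, 500, 6_000)
print(round(fraction_divided(np.concatenate([undivided, divided]), 7_000), 2))  # ~0.6
```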
[ "Materials and methods", "Animal model", "Tumour cells", "Conditioning, transplantation and assessment of reconstitution", "Cell preparation", "Dendritic cells", "Flow cytometry", "Lymphocyte functional assays", "Cytotoxic assays", "Statistical analysis", "Results", "Impact of DC administration on tumour growth", "Qualitative effects of DC on immune reconstitution after transplantation", "Optimal conditions of DC infusion", "Tumour-associated suppressive mechanisms", "Discussion" ]
[ " Animal model Mice used in this study were A/J (H2Ka, CD45.2), C57BL/6 (H2Kb, CD45.2) and B6.SJL-Ptprca Pepcb/BoyJ (H2Kb, CD45.1) purchased from Jackson Laboratories (Bar Harbor, ME, USA). Mice were housed in a barrier facility in accordance with the guidelines of the Institutional Animal Care and Use Committee.\nMice used in this study were A/J (H2Ka, CD45.2), C57BL/6 (H2Kb, CD45.2) and B6.SJL-Ptprca Pepcb/BoyJ (H2Kb, CD45.1) purchased from Jackson Laboratories (Bar Harbor, ME, USA). Mice were housed in a barrier facility in accordance with the guidelines of the Institutional Animal Care and Use Committee.\n Tumour cells Neuro-2a wild-type cells, a mouse NB of strain A origin (H2Ka), were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured to maximum 12 passages in Minimal Essential Medium (Gibco, Grand Island, NY, USA), supplemented with 10% fetal bovine serum (FBS), MEM non-essential amino acids, 100 units per ml penicillin, and 100 mg ml−1 streptomycin (Biological Industries, Beit Haemek, Israel). Tumours were induced by subcutaneous (s.c.) implantation of 106 Neuro-2a cells suspended in 100 μl of phosphate-buffered saline (PBS). Tumour growth was measured with a caliper and the volume (mm3) was calculated according to: (width2 × length × 0.4). Tumour lysates were prepared by repeated cycles ( × 3) of freezing in liquid nitrogen and thawing of Neuro-2a cells (Ash et al, 2009). Tumours that regressed were excluded from tumour growth curves, to allow separate evaluation of the rates of growth.\nNeuro-2a wild-type cells, a mouse NB of strain A origin (H2Ka), were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured to maximum 12 passages in Minimal Essential Medium (Gibco, Grand Island, NY, USA), supplemented with 10% fetal bovine serum (FBS), MEM non-essential amino acids, 100 units per ml penicillin, and 100 mg ml−1 streptomycin (Biological Industries, Beit Haemek, Israel). Tumours were induced by subcutaneous (s.c.) implantation of 106 Neuro-2a cells suspended in 100 μl of phosphate-buffered saline (PBS). Tumour growth was measured with a caliper and the volume (mm3) was calculated according to: (width2 × length × 0.4). Tumour lysates were prepared by repeated cycles ( × 3) of freezing in liquid nitrogen and thawing of Neuro-2a cells (Ash et al, 2009). Tumours that regressed were excluded from tumour growth curves, to allow separate evaluation of the rates of growth.\n Conditioning, transplantation and assessment of reconstitution Recipients were conditioned with total body irradiation (TBI) at 700 rad using an X-ray irradiator (Rad Source Technologies Inc, Alpharetta, GA, USA) at a rate of 106 rad per min (Kaminitz et al, 2009). After 6 h, cells were injected into the lateral tail vein in 200 μl PBS. GVHD was assessed using a semiquantitative clinical scale including weight loss, posture (hyperkeratosis of the foot pads impairs movement), activity, diffuse erythema (particularly of the ear) or dermatitis, and diarrhoea (Ash et al, 2010). Spleen reconstitution was quantified from the total numbers of splenocytes and the fractions of different subsets measured by flow cytometry (Ash et al, 2009).\nRecipients were conditioned with total body irradiation (TBI) at 700 rad using an X-ray irradiator (Rad Source Technologies Inc, Alpharetta, GA, USA) at a rate of 106 rad per min (Kaminitz et al, 2009). After 6 h, cells were injected into the lateral tail vein in 200 μl PBS. 
GVHD was assessed using a semiquantitative clinical scale including weight loss, posture (hyperkeratosis of the foot pads impairs movement), activity, diffuse erythema (particularly of the ear) or dermatitis, and diarrhoea (Ash et al, 2010). Spleen reconstitution was quantified from the total numbers of splenocytes and the fractions of different subsets measured by flow cytometry (Ash et al, 2009).\n Cell preparation Whole BMC (wBMC) were harvested from femurs and tibia of donors, and low-density cells were collected as previously described (Kaminitz et al, 2009). Spleens were harvested from mice, minced, passed through 40 μm mesh and dispersed into a single-cell suspension in PBS (Ash et al, 2009). Red blood cells were lysed with medium containing 0.83% ammonium chloride, 0.1% potassium bicarbonate, 0.03% disodium EDTA (Biological Industries). After 4 min, the reaction was arrested with excess of ice-cold PBS. T cells were enriched by elution through a cotton wool column that preferentially retains B lymphocytes and myeloid cells by differential charge than the eluted T cells, or immunomagnetic depletion using hybridoma-derived antibodies against GR-1, Mac-1 and B220 (Dynal Inc, Lake Success, NY, USA).\nWhole BMC (wBMC) were harvested from femurs and tibia of donors, and low-density cells were collected as previously described (Kaminitz et al, 2009). Spleens were harvested from mice, minced, passed through 40 μm mesh and dispersed into a single-cell suspension in PBS (Ash et al, 2009). Red blood cells were lysed with medium containing 0.83% ammonium chloride, 0.1% potassium bicarbonate, 0.03% disodium EDTA (Biological Industries). After 4 min, the reaction was arrested with excess of ice-cold PBS. T cells were enriched by elution through a cotton wool column that preferentially retains B lymphocytes and myeloid cells by differential charge than the eluted T cells, or immunomagnetic depletion using hybridoma-derived antibodies against GR-1, Mac-1 and B220 (Dynal Inc, Lake Success, NY, USA).\n Dendritic cells Mononuclear cells were harvested from bone marrow samples by gradient centrifugation over murine lympholyte (Cedarlane, Burlington, ON, Canada) and cultured (106 cells per ml) in low-LPS RPMI-1640 (<10 pg ml−1) supplemented with 10% FBS, 1% ℒ-glutamine, 1% sodium pyruvate, 1% α-MEM non-essential amino acids, 0.1% Hepes buffer and 1% Pen/Strep (Biological Industries and Gibco) (Ash et al, 2010). The medium was supplemented with 50 ng ml−1 mouse recombinant (mr)GM-CSF and 10 ng ml−1 mrIL-4 on intermittent days (PeproTech, Rocky Hill, NJ, USA). Immunophenotypic characterisation after 6 days of culture showed expression of CD11c=85±4%, CD86=65±6% and CD205=21±3% (n=5). Control DC were incubated for additional 24 h in DC medium without growth factors. Pulsing with tumour antigens was performed by co-incubation of DC with tumour lysate at an approximate DC:tumour cell ratio of 3:1.\nMononuclear cells were harvested from bone marrow samples by gradient centrifugation over murine lympholyte (Cedarlane, Burlington, ON, Canada) and cultured (106 cells per ml) in low-LPS RPMI-1640 (<10 pg ml−1) supplemented with 10% FBS, 1% ℒ-glutamine, 1% sodium pyruvate, 1% α-MEM non-essential amino acids, 0.1% Hepes buffer and 1% Pen/Strep (Biological Industries and Gibco) (Ash et al, 2010). The medium was supplemented with 50 ng ml−1 mouse recombinant (mr)GM-CSF and 10 ng ml−1 mrIL-4 on intermittent days (PeproTech, Rocky Hill, NJ, USA). 
Immunophenotypic characterisation after 6 days of culture showed expression of CD11c=85±4%, CD86=65±6% and CD205=21±3% (n=5). Control DC were incubated for additional 24 h in DC medium without growth factors. Pulsing with tumour antigens was performed by co-incubation of DC with tumour lysate at an approximate DC:tumour cell ratio of 3:1.\n Flow cytometry Measurements were performed with a Vantage SE flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA). Positive staining was determined on a log scale, normalised with control cells stained with isotype control mAb. Donor chimerism was determined from the percentage of donor and host peripheral blood lymphocytes (PBL) and splenocytes (Kaminitz et al, 2009). Blood was collected in heparinised serum vials in 200 μl M199, centrifuged over 1.5 ml lymphocyte separation media (Cedarlane), and red blood cells were lysed. Nucleated cells were incubated for 45 min at 4°C with phycoerythrin (PE)-anti-H2Kb (Caltag, Carlsbad, CA, USA) and fluorescein isothiocyanate (FITC)-anti-H2Kk mAb (eBioscience, San Diego, CA, USA), cross reactive against H2Ka (Ash et al, 2009). Minor antigen disparity was assed using CD45.1-PE and CD45.2-FITC antibodies (eBioscience). Lymphocytes were quantified using CD4-allophycocyanin, CD8-FITC antibodies, B220-PerCp and NK1.1-PE and DC were characterised using CD86-PE and CD11c-PerCp antibodies (BD Pharmingen, San Diego, CA, USA).\nMeasurements were performed with a Vantage SE flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA). Positive staining was determined on a log scale, normalised with control cells stained with isotype control mAb. Donor chimerism was determined from the percentage of donor and host peripheral blood lymphocytes (PBL) and splenocytes (Kaminitz et al, 2009). Blood was collected in heparinised serum vials in 200 μl M199, centrifuged over 1.5 ml lymphocyte separation media (Cedarlane), and red blood cells were lysed. Nucleated cells were incubated for 45 min at 4°C with phycoerythrin (PE)-anti-H2Kb (Caltag, Carlsbad, CA, USA) and fluorescein isothiocyanate (FITC)-anti-H2Kk mAb (eBioscience, San Diego, CA, USA), cross reactive against H2Ka (Ash et al, 2009). Minor antigen disparity was assed using CD45.1-PE and CD45.2-FITC antibodies (eBioscience). Lymphocytes were quantified using CD4-allophycocyanin, CD8-FITC antibodies, B220-PerCp and NK1.1-PE and DC were characterised using CD86-PE and CD11c-PerCp antibodies (BD Pharmingen, San Diego, CA, USA).\n Lymphocyte functional assays For measurements of proliferation, cells were labelled with 2.5 μℳ 5-(and-6-)-carboxyfluorescein diacetate succinimidyl ester (CFSE, Molecular Probes, Eugene, OR, USA) and plated on petri dishes (5 × 107) for 45 min to enrich for lymphocytes. After 45 min, non-adherent cells were collected, washed and incubated in DMEM (Ash et al, 2009). Triplicate cultures were harvested after 3–5 days and the dilution of CFSE was analysed by flow cytometry by gating on the live lymphocytes. Data were analysed using the ModFit software (Verity Software House, Topsham, ME, USA).\nInterferon-γ was determined in supernatant of lymphocyte suspensions in 96-well microtiter plates using the murine IFN-γ ELISA kit (BD Pharmingen). Differences absorbance was read in triplicates in an ELISA plate reader (Biotec Inc, Suffolk, UK). 
Appropriate dilutions within the measurement range were quantified according to a standard calibration curve (pg ml−1).\nFor measurements of proliferation, cells were labelled with 2.5 μℳ 5-(and-6-)-carboxyfluorescein diacetate succinimidyl ester (CFSE, Molecular Probes, Eugene, OR, USA) and plated on petri dishes (5 × 107) for 45 min to enrich for lymphocytes. After 45 min, non-adherent cells were collected, washed and incubated in DMEM (Ash et al, 2009). Triplicate cultures were harvested after 3–5 days and the dilution of CFSE was analysed by flow cytometry by gating on the live lymphocytes. Data were analysed using the ModFit software (Verity Software House, Topsham, ME, USA).\nInterferon-γ was determined in supernatant of lymphocyte suspensions in 96-well microtiter plates using the murine IFN-γ ELISA kit (BD Pharmingen). Differences absorbance was read in triplicates in an ELISA plate reader (Biotec Inc, Suffolk, UK). Appropriate dilutions within the measurement range were quantified according to a standard calibration curve (pg ml−1).\n Cytotoxic assays Effector splenocytes harvested from naïve mice and chimeras were lysed and passed through wool mesh to enrich for T lymphocytes (∼70%). These cells were incubated with 5 × 105 Neuro-2a target cells for 7 h at 37°C in 150 μl at 1 : 10 to 1:100 target:effector ratios. Cytolysis was quantified by lactate dehydrogenase (LDH) release and normalised for background values (Ash et al, 2009).\nEffector splenocytes harvested from naïve mice and chimeras were lysed and passed through wool mesh to enrich for T lymphocytes (∼70%). These cells were incubated with 5 × 105 Neuro-2a target cells for 7 h at 37°C in 150 μl at 1 : 10 to 1:100 target:effector ratios. Cytolysis was quantified by lactate dehydrogenase (LDH) release and normalised for background values (Ash et al, 2009).\n Statistical analysis Data are presented as means±standard deviations for each experimental protocol. Results in each experimental group were evaluated for reproducibility by linear regression of duplicate measurements. Differences between the experimental protocols were estimated with a post-hoc Scheffe t-test and significance was considered at P<0.05.\nData are presented as means±standard deviations for each experimental protocol. Results in each experimental group were evaluated for reproducibility by linear regression of duplicate measurements. Differences between the experimental protocols were estimated with a post-hoc Scheffe t-test and significance was considered at P<0.05.", "Mice used in this study were A/J (H2Ka, CD45.2), C57BL/6 (H2Kb, CD45.2) and B6.SJL-Ptprca Pepcb/BoyJ (H2Kb, CD45.1) purchased from Jackson Laboratories (Bar Harbor, ME, USA). Mice were housed in a barrier facility in accordance with the guidelines of the Institutional Animal Care and Use Committee.", "Neuro-2a wild-type cells, a mouse NB of strain A origin (H2Ka), were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured to maximum 12 passages in Minimal Essential Medium (Gibco, Grand Island, NY, USA), supplemented with 10% fetal bovine serum (FBS), MEM non-essential amino acids, 100 units per ml penicillin, and 100 mg ml−1 streptomycin (Biological Industries, Beit Haemek, Israel). Tumours were induced by subcutaneous (s.c.) implantation of 106 Neuro-2a cells suspended in 100 μl of phosphate-buffered saline (PBS). Tumour growth was measured with a caliper and the volume (mm3) was calculated according to: (width2 × length × 0.4). 
Tumour lysates were prepared by repeated cycles ( × 3) of freezing in liquid nitrogen and thawing of Neuro-2a cells (Ash et al, 2009). Tumours that regressed were excluded from tumour growth curves, to allow separate evaluation of the rates of growth.", "Recipients were conditioned with total body irradiation (TBI) at 700 rad using an X-ray irradiator (Rad Source Technologies Inc, Alpharetta, GA, USA) at a rate of 106 rad per min (Kaminitz et al, 2009). After 6 h, cells were injected into the lateral tail vein in 200 μl PBS. GVHD was assessed using a semiquantitative clinical scale including weight loss, posture (hyperkeratosis of the foot pads impairs movement), activity, diffuse erythema (particularly of the ear) or dermatitis, and diarrhoea (Ash et al, 2010). Spleen reconstitution was quantified from the total numbers of splenocytes and the fractions of different subsets measured by flow cytometry (Ash et al, 2009).", "Whole BMC (wBMC) were harvested from femurs and tibia of donors, and low-density cells were collected as previously described (Kaminitz et al, 2009). Spleens were harvested from mice, minced, passed through 40 μm mesh and dispersed into a single-cell suspension in PBS (Ash et al, 2009). Red blood cells were lysed with medium containing 0.83% ammonium chloride, 0.1% potassium bicarbonate, 0.03% disodium EDTA (Biological Industries). After 4 min, the reaction was arrested with excess of ice-cold PBS. T cells were enriched by elution through a cotton wool column that preferentially retains B lymphocytes and myeloid cells by differential charge than the eluted T cells, or immunomagnetic depletion using hybridoma-derived antibodies against GR-1, Mac-1 and B220 (Dynal Inc, Lake Success, NY, USA).", "Mononuclear cells were harvested from bone marrow samples by gradient centrifugation over murine lympholyte (Cedarlane, Burlington, ON, Canada) and cultured (106 cells per ml) in low-LPS RPMI-1640 (<10 pg ml−1) supplemented with 10% FBS, 1% ℒ-glutamine, 1% sodium pyruvate, 1% α-MEM non-essential amino acids, 0.1% Hepes buffer and 1% Pen/Strep (Biological Industries and Gibco) (Ash et al, 2010). The medium was supplemented with 50 ng ml−1 mouse recombinant (mr)GM-CSF and 10 ng ml−1 mrIL-4 on intermittent days (PeproTech, Rocky Hill, NJ, USA). Immunophenotypic characterisation after 6 days of culture showed expression of CD11c=85±4%, CD86=65±6% and CD205=21±3% (n=5). Control DC were incubated for additional 24 h in DC medium without growth factors. Pulsing with tumour antigens was performed by co-incubation of DC with tumour lysate at an approximate DC:tumour cell ratio of 3:1.", "Measurements were performed with a Vantage SE flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA). Positive staining was determined on a log scale, normalised with control cells stained with isotype control mAb. Donor chimerism was determined from the percentage of donor and host peripheral blood lymphocytes (PBL) and splenocytes (Kaminitz et al, 2009). Blood was collected in heparinised serum vials in 200 μl M199, centrifuged over 1.5 ml lymphocyte separation media (Cedarlane), and red blood cells were lysed. Nucleated cells were incubated for 45 min at 4°C with phycoerythrin (PE)-anti-H2Kb (Caltag, Carlsbad, CA, USA) and fluorescein isothiocyanate (FITC)-anti-H2Kk mAb (eBioscience, San Diego, CA, USA), cross reactive against H2Ka (Ash et al, 2009). Minor antigen disparity was assed using CD45.1-PE and CD45.2-FITC antibodies (eBioscience). 
Lymphocytes were quantified using CD4-allophycocyanin, CD8-FITC antibodies, B220-PerCp and NK1.1-PE and DC were characterised using CD86-PE and CD11c-PerCp antibodies (BD Pharmingen, San Diego, CA, USA).", "For measurements of proliferation, cells were labelled with 2.5 μℳ 5-(and-6-)-carboxyfluorescein diacetate succinimidyl ester (CFSE, Molecular Probes, Eugene, OR, USA) and plated on petri dishes (5 × 107) for 45 min to enrich for lymphocytes. After 45 min, non-adherent cells were collected, washed and incubated in DMEM (Ash et al, 2009). Triplicate cultures were harvested after 3–5 days and the dilution of CFSE was analysed by flow cytometry by gating on the live lymphocytes. Data were analysed using the ModFit software (Verity Software House, Topsham, ME, USA).\nInterferon-γ was determined in supernatant of lymphocyte suspensions in 96-well microtiter plates using the murine IFN-γ ELISA kit (BD Pharmingen). Differences absorbance was read in triplicates in an ELISA plate reader (Biotec Inc, Suffolk, UK). Appropriate dilutions within the measurement range were quantified according to a standard calibration curve (pg ml−1).", "Effector splenocytes harvested from naïve mice and chimeras were lysed and passed through wool mesh to enrich for T lymphocytes (∼70%). These cells were incubated with 5 × 105 Neuro-2a target cells for 7 h at 37°C in 150 μl at 1 : 10 to 1:100 target:effector ratios. Cytolysis was quantified by lactate dehydrogenase (LDH) release and normalised for background values (Ash et al, 2009).", "Data are presented as means±standard deviations for each experimental protocol. Results in each experimental group were evaluated for reproducibility by linear regression of duplicate measurements. Differences between the experimental protocols were estimated with a post-hoc Scheffe t-test and significance was considered at P<0.05.", " Impact of DC administration on tumour growth In first stage, we assessed the impact of donor DC on graft vs tumour reactivity induced by allogeneic BMT using the experimental setting delineated in Figure 1A: Neuro-2a cells were implanted subcutaneously and after 5 days the mice were irradiated at 700 rad and grafted with allogeneic (H2Kb→H2Ka) BMC. Dendritic cells (H2Kb) were derived in vitro from donor bone marrow and were pulsed with Neuro-2a lysates (DCneuro2a) before infusion adjacent to the subcutaneous tumours. Inoculation of DC on day +7 post transplant caused a significant reduction in tumour growth rates (P<0.01 vs allo-BMT; Figure 1B), indicating effective immunisation against the tumour. In this analysis, we excluded tumours that regressed completely to facilitate the statistical analysis of growth rates. Immunisation of DC resulted in complete regression of 3 out of 17 tumours (18%), whereas only 1 of 21 tumours subsided completely after allogeneic BMT. The allografted mice displayed no signs of GVHD irrespective of DC inoculation, as determined by clinical monitoring and validated by liver histology.\nTo test whether DC immunisation was mediated by initiation and/or propagation of tumour-reactive T cells, splenic lymphocytes were assessed for direct lysis of Neuro-2a cells in vitro. Lymphocytes from mice immunised with tumour-pulsed DC (DCNeuro2a) displayed more potent cytolytic activity as compared with allografted mice (P<0.005 at lowest ratio), decreasing the effective target:effector ratio (Figure 1C). 
These data confirmed that tumour growth suppression is mediated by direct tumour lytic activity of lymphocytes in the allografted mice.\nIn first stage, we assessed the impact of donor DC on graft vs tumour reactivity induced by allogeneic BMT using the experimental setting delineated in Figure 1A: Neuro-2a cells were implanted subcutaneously and after 5 days the mice were irradiated at 700 rad and grafted with allogeneic (H2Kb→H2Ka) BMC. Dendritic cells (H2Kb) were derived in vitro from donor bone marrow and were pulsed with Neuro-2a lysates (DCneuro2a) before infusion adjacent to the subcutaneous tumours. Inoculation of DC on day +7 post transplant caused a significant reduction in tumour growth rates (P<0.01 vs allo-BMT; Figure 1B), indicating effective immunisation against the tumour. In this analysis, we excluded tumours that regressed completely to facilitate the statistical analysis of growth rates. Immunisation of DC resulted in complete regression of 3 out of 17 tumours (18%), whereas only 1 of 21 tumours subsided completely after allogeneic BMT. The allografted mice displayed no signs of GVHD irrespective of DC inoculation, as determined by clinical monitoring and validated by liver histology.\nTo test whether DC immunisation was mediated by initiation and/or propagation of tumour-reactive T cells, splenic lymphocytes were assessed for direct lysis of Neuro-2a cells in vitro. Lymphocytes from mice immunised with tumour-pulsed DC (DCNeuro2a) displayed more potent cytolytic activity as compared with allografted mice (P<0.005 at lowest ratio), decreasing the effective target:effector ratio (Figure 1C). These data confirmed that tumour growth suppression is mediated by direct tumour lytic activity of lymphocytes in the allografted mice.\n Qualitative effects of DC on immune reconstitution after transplantation To evaluate the mechanism of action of immunisation with DC, we monitored quantitative and qualitative immune reconstitution at 4 weeks after transplantation. Dendritic cells had little impact on reconstitution of spleen cellularity, which includes delayed recovery of T cells (Figure 2A). Immaturity of T cells after allogeneic BMT was recognised from the low output of interferon-γ (IFN-γ), which was however somewhat elevated following DC immunisation (Figure 2B). The superior cytokine responses following administration of donor DC prompted more detailed functional analysis of the lymphocytes.\nAt 4 weeks post transplant, splenic lymphocytes from allografted mice immunised with DC displayed higher proliferative responses under mitogenic stimulation than naïve (unmanipulated) B6 mice and allografted mice (P<0.01, Figure 2C). Thus, IFN-γ production and proliferation were consistently improved by immunisation with tumour-pulsed donor DC. To further refine the specificity of DC-mediated stimulation, lymphocytes were restimulated in vitro with naïve (antigen-inexperienced) and tumour-pulsed donor DC. Lymphocytes from both control (naïve) and allografted mice were equally stimulated by both types of DC. However, the proliferation of lymphocytes from mice immunised with tumour-pulsed donor DC was further amplified by restimulation with tumour-pulsed DC as compared with tumour-inexperienced DC in vitro (P<0.05), demonstrating enhanced antigen-specific responsiveness of the GvT effectors. 
These data delineate a scenario where DC participate in maturation of the reconstituted immune system (non-specific mitogenic stimulation), and also in specific sensitisation against tumour cells (restimulation with tumour-pulsed DC).\nThe most specific tests of cytotoxic GvT effectors are the capacity to lyse tumour cells in vitro. As expected, lymphocytes from allografted mice with and without DC immunisation displayed more potent tumour-lytic activity in vitro than tumour-inexperienced lymphocytes from naïve mice (Figure 2D). However, restimulation with tumour-pulsed DC in vitro elicited more potent tumour lysis than restimulation with naïve DC, pointing to an antigen-specific reaction against the tumour. Therefore, lymphocytes have acquired antigen specificity in allografted mice even in the absence of donor DC infusion, and immunisation with tumour-pulsed donor DC caused only marginal increase in tumour cell lysis. These data suggest that amplification of immune reactivity mediated by DC is more significant than sensitisation to tumour antigens.\nTo evaluate the mechanism of action of immunisation with DC, we monitored quantitative and qualitative immune reconstitution at 4 weeks after transplantation. Dendritic cells had little impact on reconstitution of spleen cellularity, which includes delayed recovery of T cells (Figure 2A). Immaturity of T cells after allogeneic BMT was recognised from the low output of interferon-γ (IFN-γ), which was however somewhat elevated following DC immunisation (Figure 2B). The superior cytokine responses following administration of donor DC prompted more detailed functional analysis of the lymphocytes.\nAt 4 weeks post transplant, splenic lymphocytes from allografted mice immunised with DC displayed higher proliferative responses under mitogenic stimulation than naïve (unmanipulated) B6 mice and allografted mice (P<0.01, Figure 2C). Thus, IFN-γ production and proliferation were consistently improved by immunisation with tumour-pulsed donor DC. To further refine the specificity of DC-mediated stimulation, lymphocytes were restimulated in vitro with naïve (antigen-inexperienced) and tumour-pulsed donor DC. Lymphocytes from both control (naïve) and allografted mice were equally stimulated by both types of DC. However, the proliferation of lymphocytes from mice immunised with tumour-pulsed donor DC was further amplified by restimulation with tumour-pulsed DC as compared with tumour-inexperienced DC in vitro (P<0.05), demonstrating enhanced antigen-specific responsiveness of the GvT effectors. These data delineate a scenario where DC participate in maturation of the reconstituted immune system (non-specific mitogenic stimulation), and also in specific sensitisation against tumour cells (restimulation with tumour-pulsed DC).\nThe most specific tests of cytotoxic GvT effectors are the capacity to lyse tumour cells in vitro. As expected, lymphocytes from allografted mice with and without DC immunisation displayed more potent tumour-lytic activity in vitro than tumour-inexperienced lymphocytes from naïve mice (Figure 2D). However, restimulation with tumour-pulsed DC in vitro elicited more potent tumour lysis than restimulation with naïve DC, pointing to an antigen-specific reaction against the tumour. Therefore, lymphocytes have acquired antigen specificity in allografted mice even in the absence of donor DC infusion, and immunisation with tumour-pulsed donor DC caused only marginal increase in tumour cell lysis. 
These data suggest that amplification of immune reactivity mediated by DC is more significant than sensitisation to tumour antigens.\n Optimal conditions of DC infusion In view of the beneficial effect on DC immunisation on GvT effectors recovering from lymphopenia following stem cell transplant, we sought to characterise several parameters. We reasoned that delayed infusion of DC after transplantation might be more effective in stimulation of GvT reactivity, under conditions of superior quantitative recovery and maturation of the lymphocytes. This assumption was not substantiated, as early immunisation with tumour-pulsed DC on day +7 was more potent in inhibition of tumour growth than late DC administration on day +14 (Figure 3A). To determine the basis of this differential effect of the time of immunisation, we assessed the proliferative responses of splenic lymphocytes from the immunised mice. Non-specific mitogenic stimulation resulted in higher proliferation rates of lymphocytes from mice immunised on day +7 (Figure 3B), indicating a general activation state. Furthermore, restimulation of the lymphocytes with tumour-pulsed DC in vitro resulted in robust proliferative responses, evidence of tumour-specific sensitisation of lymphocytes in the allografted mice.\nThese tumour growth curves also emphasise that naïve (antigen-inexperienced) DC have similar suppressive effects as tumour-pulsed DC at both times of immunisation (Figure 3A). Consistently, lymphocytes from mice immunised with naïve and tumour-pulsed DC showed similar cytolytic activity against Neuro-2a cells in vitro (Figure 3C), indicating effective tumour-specific sensitisation mediated by naïve DC. The effective uptake and presentation of tumour antigens by naïve DC in vivo was affirmed by the robust tumour-lytic activity of lymphocytes upon restimulation with tumour-sensitised DC in vitro. Taken together, these data demonstrate efficient tumour antigen uptake and presentation by naïve donor DC, suggesting that presensitisation to the tumour is not required to induce lymphotoxic anti-tumour reactivity.\nThe experimental setting used in so far relied on inoculation of the donor DC adjacent to the tumour, however, in the clinical setting DC are required to migrate to and operate at remote sites. We therefore assessed the effect of local vs systemic (intravenous) immunisation with donor DC on tumour growth rates. Tumour suppression was equally affected by DC inoculation adjacent to the tumours and intravenously (Figure 3D), emphasising effective DC migration to distant sites.\nIn view of the beneficial effect on DC immunisation on GvT effectors recovering from lymphopenia following stem cell transplant, we sought to characterise several parameters. We reasoned that delayed infusion of DC after transplantation might be more effective in stimulation of GvT reactivity, under conditions of superior quantitative recovery and maturation of the lymphocytes. This assumption was not substantiated, as early immunisation with tumour-pulsed DC on day +7 was more potent in inhibition of tumour growth than late DC administration on day +14 (Figure 3A). To determine the basis of this differential effect of the time of immunisation, we assessed the proliferative responses of splenic lymphocytes from the immunised mice. Non-specific mitogenic stimulation resulted in higher proliferation rates of lymphocytes from mice immunised on day +7 (Figure 3B), indicating a general activation state. 
Furthermore, restimulation of the lymphocytes with tumour-pulsed DC in vitro resulted in robust proliferative responses, evidence of tumour-specific sensitisation of lymphocytes in the allografted mice.\nThese tumour growth curves also emphasise that naïve (antigen-inexperienced) DC have similar suppressive effects as tumour-pulsed DC at both times of immunisation (Figure 3A). Consistently, lymphocytes from mice immunised with naïve and tumour-pulsed DC showed similar cytolytic activity against Neuro-2a cells in vitro (Figure 3C), indicating effective tumour-specific sensitisation mediated by naïve DC. The effective uptake and presentation of tumour antigens by naïve DC in vivo was affirmed by the robust tumour-lytic activity of lymphocytes upon restimulation with tumour-sensitised DC in vitro. Taken together, these data demonstrate efficient tumour antigen uptake and presentation by naïve donor DC, suggesting that presensitisation to the tumour is not required to induce lymphotoxic anti-tumour reactivity.\nThe experimental setting used in so far relied on inoculation of the donor DC adjacent to the tumour, however, in the clinical setting DC are required to migrate to and operate at remote sites. We therefore assessed the effect of local vs systemic (intravenous) immunisation with donor DC on tumour growth rates. Tumour suppression was equally affected by DC inoculation adjacent to the tumours and intravenously (Figure 3D), emphasising effective DC migration to distant sites.\n Tumour-associated suppressive mechanisms In a previous study, we reported an interesting decrease in proliferation of lymphocytes from mice bearing tumours exposed to tumour lysate in vitro, suggesting the presence of an inhibitory factor in the soluble fraction of the Neuro-2a extract (Ash et al, 2009). The proliferation of splenocytes from allografted mice immunised with tumour-pulsed DC on day +7 was markedly increased as compared with all other experimental groups, including allografted mice, recipients of DCnaive and infusion of DC on day +14 (Figure 4A). To determine whether this inhibitory mechanism is modulated by administration of donor DC, the responses of lymphocytes to mitogenic stimulation were assessed in the presence of tumour lysates. Exposure to Neuro-2a lysate in vitro resulted in suppression of proliferation in one particular group (Figure 4B), suggesting that a tumour-derived inhibitory signal was amplified by immunisation with tumour-pulsed DC early after transplantation. Notably, this effect was restricted to lymphocytes from mice immunised with tumour-pulsed DC and not in lymphocytes of mice immunised with tumour-inexperienced DC. These data suggest that a soluble inhibitory factor was included in the repertoire of antigens processed by DC during pulsing with tumour lysate, and was presented to newly developed T cells in the allografted mice. However, restimulation of lymphocytes with unstimulated and tumour-pulsed DC ex vivo amplified antigen-specific responses (Figures 2C and 3B), indicating that this proliferation-inhibitory mechanism does not affect the process of tumour antigen processing and presentation. In addition to reversal of this inhibitory mechanism by restimulation with DC in vitro, proliferative anergy induced by tumour lysate was also reversed by non-specific stimulation of the lymphocytes with LPS in the absence of DC (Figure 4C). 
Therefore, immune activation overcomes the proliferation-inhibitory signal that is processed by DC during pulsing with tumour antigens.\nIn a previous study, we reported an interesting decrease in proliferation of lymphocytes from mice bearing tumours exposed to tumour lysate in vitro, suggesting the presence of an inhibitory factor in the soluble fraction of the Neuro-2a extract (Ash et al, 2009). The proliferation of splenocytes from allografted mice immunised with tumour-pulsed DC on day +7 was markedly increased as compared with all other experimental groups, including allografted mice, recipients of DCnaive and infusion of DC on day +14 (Figure 4A). To determine whether this inhibitory mechanism is modulated by administration of donor DC, the responses of lymphocytes to mitogenic stimulation were assessed in the presence of tumour lysates. Exposure to Neuro-2a lysate in vitro resulted in suppression of proliferation in one particular group (Figure 4B), suggesting that a tumour-derived inhibitory signal was amplified by immunisation with tumour-pulsed DC early after transplantation. Notably, this effect was restricted to lymphocytes from mice immunised with tumour-pulsed DC and not in lymphocytes of mice immunised with tumour-inexperienced DC. These data suggest that a soluble inhibitory factor was included in the repertoire of antigens processed by DC during pulsing with tumour lysate, and was presented to newly developed T cells in the allografted mice. However, restimulation of lymphocytes with unstimulated and tumour-pulsed DC ex vivo amplified antigen-specific responses (Figures 2C and 3B), indicating that this proliferation-inhibitory mechanism does not affect the process of tumour antigen processing and presentation. In addition to reversal of this inhibitory mechanism by restimulation with DC in vitro, proliferative anergy induced by tumour lysate was also reversed by non-specific stimulation of the lymphocytes with LPS in the absence of DC (Figure 4C). Therefore, immune activation overcomes the proliferation-inhibitory signal that is processed by DC during pulsing with tumour antigens.", "In first stage, we assessed the impact of donor DC on graft vs tumour reactivity induced by allogeneic BMT using the experimental setting delineated in Figure 1A: Neuro-2a cells were implanted subcutaneously and after 5 days the mice were irradiated at 700 rad and grafted with allogeneic (H2Kb→H2Ka) BMC. Dendritic cells (H2Kb) were derived in vitro from donor bone marrow and were pulsed with Neuro-2a lysates (DCneuro2a) before infusion adjacent to the subcutaneous tumours. Inoculation of DC on day +7 post transplant caused a significant reduction in tumour growth rates (P<0.01 vs allo-BMT; Figure 1B), indicating effective immunisation against the tumour. In this analysis, we excluded tumours that regressed completely to facilitate the statistical analysis of growth rates. Immunisation of DC resulted in complete regression of 3 out of 17 tumours (18%), whereas only 1 of 21 tumours subsided completely after allogeneic BMT. The allografted mice displayed no signs of GVHD irrespective of DC inoculation, as determined by clinical monitoring and validated by liver histology.\nTo test whether DC immunisation was mediated by initiation and/or propagation of tumour-reactive T cells, splenic lymphocytes were assessed for direct lysis of Neuro-2a cells in vitro. 
Lymphocytes from mice immunised with tumour-pulsed DC (DCNeuro2a) displayed more potent cytolytic activity as compared with allografted mice (P<0.005 at lowest ratio), decreasing the effective target:effector ratio (Figure 1C). These data confirmed that tumour growth suppression is mediated by direct tumour lytic activity of lymphocytes in the allografted mice.", "To evaluate the mechanism of action of immunisation with DC, we monitored quantitative and qualitative immune reconstitution at 4 weeks after transplantation. Dendritic cells had little impact on reconstitution of spleen cellularity, which includes delayed recovery of T cells (Figure 2A). Immaturity of T cells after allogeneic BMT was recognised from the low output of interferon-γ (IFN-γ), which was however somewhat elevated following DC immunisation (Figure 2B). The superior cytokine responses following administration of donor DC prompted more detailed functional analysis of the lymphocytes.\nAt 4 weeks post transplant, splenic lymphocytes from allografted mice immunised with DC displayed higher proliferative responses under mitogenic stimulation than naïve (unmanipulated) B6 mice and allografted mice (P<0.01, Figure 2C). Thus, IFN-γ production and proliferation were consistently improved by immunisation with tumour-pulsed donor DC. To further refine the specificity of DC-mediated stimulation, lymphocytes were restimulated in vitro with naïve (antigen-inexperienced) and tumour-pulsed donor DC. Lymphocytes from both control (naïve) and allografted mice were equally stimulated by both types of DC. However, the proliferation of lymphocytes from mice immunised with tumour-pulsed donor DC was further amplified by restimulation with tumour-pulsed DC as compared with tumour-inexperienced DC in vitro (P<0.05), demonstrating enhanced antigen-specific responsiveness of the GvT effectors. These data delineate a scenario where DC participate in maturation of the reconstituted immune system (non-specific mitogenic stimulation), and also in specific sensitisation against tumour cells (restimulation with tumour-pulsed DC).\nThe most specific tests of cytotoxic GvT effectors are the capacity to lyse tumour cells in vitro. As expected, lymphocytes from allografted mice with and without DC immunisation displayed more potent tumour-lytic activity in vitro than tumour-inexperienced lymphocytes from naïve mice (Figure 2D). However, restimulation with tumour-pulsed DC in vitro elicited more potent tumour lysis than restimulation with naïve DC, pointing to an antigen-specific reaction against the tumour. Therefore, lymphocytes have acquired antigen specificity in allografted mice even in the absence of donor DC infusion, and immunisation with tumour-pulsed donor DC caused only marginal increase in tumour cell lysis. These data suggest that amplification of immune reactivity mediated by DC is more significant than sensitisation to tumour antigens.", "In view of the beneficial effect on DC immunisation on GvT effectors recovering from lymphopenia following stem cell transplant, we sought to characterise several parameters. We reasoned that delayed infusion of DC after transplantation might be more effective in stimulation of GvT reactivity, under conditions of superior quantitative recovery and maturation of the lymphocytes. This assumption was not substantiated, as early immunisation with tumour-pulsed DC on day +7 was more potent in inhibition of tumour growth than late DC administration on day +14 (Figure 3A). 
To determine the basis of this differential effect of the time of immunisation, we assessed the proliferative responses of splenic lymphocytes from the immunised mice. Non-specific mitogenic stimulation resulted in higher proliferation rates of lymphocytes from mice immunised on day +7 (Figure 3B), indicating a general activation state. Furthermore, restimulation of the lymphocytes with tumour-pulsed DC in vitro resulted in robust proliferative responses, evidence of tumour-specific sensitisation of lymphocytes in the allografted mice.\nThese tumour growth curves also emphasise that naïve (antigen-inexperienced) DC have similar suppressive effects as tumour-pulsed DC at both times of immunisation (Figure 3A). Consistently, lymphocytes from mice immunised with naïve and tumour-pulsed DC showed similar cytolytic activity against Neuro-2a cells in vitro (Figure 3C), indicating effective tumour-specific sensitisation mediated by naïve DC. The effective uptake and presentation of tumour antigens by naïve DC in vivo was affirmed by the robust tumour-lytic activity of lymphocytes upon restimulation with tumour-sensitised DC in vitro. Taken together, these data demonstrate efficient tumour antigen uptake and presentation by naïve donor DC, suggesting that presensitisation to the tumour is not required to induce lymphotoxic anti-tumour reactivity.\nThe experimental setting used in so far relied on inoculation of the donor DC adjacent to the tumour, however, in the clinical setting DC are required to migrate to and operate at remote sites. We therefore assessed the effect of local vs systemic (intravenous) immunisation with donor DC on tumour growth rates. Tumour suppression was equally affected by DC inoculation adjacent to the tumours and intravenously (Figure 3D), emphasising effective DC migration to distant sites.", "In a previous study, we reported an interesting decrease in proliferation of lymphocytes from mice bearing tumours exposed to tumour lysate in vitro, suggesting the presence of an inhibitory factor in the soluble fraction of the Neuro-2a extract (Ash et al, 2009). The proliferation of splenocytes from allografted mice immunised with tumour-pulsed DC on day +7 was markedly increased as compared with all other experimental groups, including allografted mice, recipients of DCnaive and infusion of DC on day +14 (Figure 4A). To determine whether this inhibitory mechanism is modulated by administration of donor DC, the responses of lymphocytes to mitogenic stimulation were assessed in the presence of tumour lysates. Exposure to Neuro-2a lysate in vitro resulted in suppression of proliferation in one particular group (Figure 4B), suggesting that a tumour-derived inhibitory signal was amplified by immunisation with tumour-pulsed DC early after transplantation. Notably, this effect was restricted to lymphocytes from mice immunised with tumour-pulsed DC and not in lymphocytes of mice immunised with tumour-inexperienced DC. These data suggest that a soluble inhibitory factor was included in the repertoire of antigens processed by DC during pulsing with tumour lysate, and was presented to newly developed T cells in the allografted mice. However, restimulation of lymphocytes with unstimulated and tumour-pulsed DC ex vivo amplified antigen-specific responses (Figures 2C and 3B), indicating that this proliferation-inhibitory mechanism does not affect the process of tumour antigen processing and presentation. 
In addition to reversal of this inhibitory mechanism by restimulation with DC in vitro, proliferative anergy induced by tumour lysate was also reversed by non-specific stimulation of the lymphocytes with LPS in the absence of DC (Figure 4C). Therefore, immune activation overcomes the proliferation-inhibitory signal that is processed by DC during pulsing with tumour antigens.

T cells are effective mediators of graft vs NB reactions (Ash et al, 2009, 2010); however, this lineage develops relatively slowly and consists of immature cells early after transplantation. A single bolus administration of donor DC elicits potent GvT reactivity of effector lymphocytes that translates into tumour growth inhibition. Immunisation with DC had no significant effect on quantitative immune reconstitution, but improved the function of the reconstituted lymphoid compartment. The salutary effect of immunisation with donor DC is mediated by enhanced maturation of the developing lymphocytes, including non-specific activation as well as tumour antigen-specific sensitisation. The current data indicate that DC should be administered early after transplantation, that they may be administered either locally or systemically, and that immature bone marrow-derived DC do not require pulsing with tumour antigens.

Intravenous and intratumour DC inoculation yielded similar tumour-suppressive activity, demonstrating efficient migration of circulating APC to the tumour. Early immunisation with donor DC (1 week after transplant) improved all functional parameters of the developing T cells without affecting quantitative recovery of the lymphoid compartment. Antigen-presenting cells improved IFN-γ secretion, responsiveness to mitogen-induced proliferation and direct anti-tumour cytotoxicity, which converged to reduce tumour growth rates. The superior outcome of DC administration in early stages of lymphoid reconstitution suggests that the immunological impact of these cells is primarily mediated by education of the developing donor lymphocytes. Measurements of proliferation and tumour cytotoxicity of lymphocytes from the allografted mice upon restimulation with DC in vitro revealed two activities (Figure 2). The first mechanism involves generalised and non-specific lymphocyte activation, as revealed by restimulation with naïve DC in vitro. The second mechanism involves selective immunisation against tumour antigens, as evident from superior proliferative and cytolytic responses upon restimulation with tumour-pulsed DC in vitro. Tumour antigen-specific immunisation supports our contention that murine NB is immunogenic and is targeted by tumour-specific immune responses after allogeneic BMT (Ash et al, 2009, 2010).

Furthermore, our current data confirm a significant role of DC in stimulation of lymphocytes and facilitation of tumour antigen recognition, as demonstrated by vaccine approaches to induction of anti-tumour immunity using tumour cells modified to express costimulatory molecules and cytokines (Bausero et al, 1996; Johnson et al, 2005) and allogenisation of tumour MHC (Kobayashi et al, 2007).

Naïve DC process antigens following radiation-induced necrosis and apoptosis of tumour cells (Sauter et al, 2000; Tatsuta et al, 2009) and therefore DC interventions are beneficial after irradiation (Tatsuta et al, 2009), syngeneic (Asavaroengchai et al, 2002; Jing et al, 2007) and allogeneic BMT (Ash et al, 2010). This raises the question of whether donor DC should be pulsed with tumour antigens before immunisation in the transplant setting.
Effective antigen uptake, processing and presentation by naïve (tumour-inexperienced) DC were demonstrated by specific cytolytic responses and tumour growth inhibition similar to those of mice treated with tumour-pulsed DC following transplantation (Figure 3C). Efficient uptake of tumour antigens may be explained by the relatively immature nature of DC used in this study, which have superior antigen uptake and presentation capacity as compared with more mature DC (Wilson et al, 2004). We used bone marrow-derived immature DC because, unlike mature DC (Lutz and Schuler, 2002), immature cells have reduced capacity to induce regulatory T cells and to impose anergy on cytotoxic T cells (Huang et al, 2000). It is therefore apparent that there is no need for pulsing of immature donor DC with tumour antigens for effective post-transplant immunisation. One of the potential drawbacks of DC exposure to tumour lysates is transduction of inhibitory signals derived from the tumour to effector lymphocytes.

We have previously reported that lymphocytes from mice bearing tumours display proliferative anergy under mitogenic stimulation (Ash et al, 2009). The same hyporesponsiveness to mitogens was observed, upon exposure to tumour lysate, in lymphocytes from reconstituted mice immunised with tumour-pulsed DC. Inhibition pertains primarily to the mitotic function of lymphocytes, as other parameters of maturation, such as IFN-γ secretion and tumour cytolysis, were markedly improved by DC therapy. The inhibitory signal revealed here originates from the tumour lysate and is not an intrinsic suppressor activity of DC, because it did not affect lymphocytes of mice inoculated with naïve DC. These data depict a scenario where donor DC engulf and process an inhibitory signal during pulsing with tumour lysate, which is transferred to the emerging lymphocytes along with other tumour-specific antigens. Loss of inhibition of proliferation when DC were administered at later times after transplantation suggests that this signal primarily affects developing T cells.

Suppression of T-cell proliferation was reversed both by restimulation with DC (Figure 2C) and by non-specific stimulation with LPS (Figure 4C), demonstrating that lymphocyte stimulation overrides the induced proliferative anergy. Current data point to an inhibitory signal that specifically affects effector lymphocyte proliferation, and is distinct from tumour-associated mechanisms capable of inactivating DC. Prior studies revealed several mechanisms by which NB downregulates GvT reactivity; however, most of these factors display different characteristics from those observed under our experimental conditions. A possible candidate expressed at high levels by NB is migration inhibitory factor (MIF), which suppresses lymphocyte proliferation; however, work with MIF-deficient tumours showed that this chemokine is not directly responsible for suppression of T cell-mediated immunity in vivo (Zhou et al, 2008). Other factors were shown to suppress DC function with variable effects on T cells: IL-27 promotes the evolution of cytotoxic lymphocytes endowed with anti-tumour activity (Shinozaki et al, 2009) and Galectin-1 inhibits cytotoxic T-cell activity (Soldati et al, 2012).

Enhanced GvT activity induced by DC-mediated activation of the reconstituted T-cell subset overcomes specific barriers imposed by neuroblastoma.
The stroma of this tumour causes gradual dysfunction of DC (Chen et al, 2003; Redlinger et al, 2003; Walker et al, 2005; Zou, 2005) through downregulation of survival factors and impairment of migration (Kanto et al, 2001; Redlinger et al, 2004). Dysfunction of DC has been effectively reversed by overexpression of proinflammatory cytokines and costimulatory molecules, and downregulation of the sensitivity to suppressor cytokines and chemokines (Zhou et al, 2008; Shinozaki et al, 2009). Additional humoral factors contribute to DC inactivation, for example, high ganglioside-2 (GD2) concentrations in NB patients (McKallip et al, 1999) suppress DC development (Shurin et al, 2001) and function (Sietsma et al, 1998). Repression of anti-tumour immunity by GD2 is attributed to defective cytokine production by DC (Shurin et al, 2001; Redlinger et al, 2004), which impacts negatively on the development of anti-tumour immunity after syngeneic BMT (Jing et al, 2007).

In summary, DC foster graft vs NB reactivity through three main mechanisms: facilitation of immune maturation, non-specific activation and induction of antigen-specific GvT reactions. From the clinical point of view, it is important to underline that effective GvT reactivity can already be attained at early stages after transplantation by supplementation of DC, fostering the maturation of bone marrow-derived lymphocytes that are largely devoid of GvT reactivity. Although DC are also shown to process and transduce signals that inhibit lymphocyte proliferation, they reduce tumour growth and even abrogate established tumours through induction of antigen-specific cytolytic responses.
Keywords: dendritic cells, graft vs tumour reaction, allogeneic bone marrow transplantation, immunisation, cytotoxic T cells
Materials and methods:

Animal model: Mice used in this study were A/J (H2Ka, CD45.2), C57BL/6 (H2Kb, CD45.2) and B6.SJL-Ptprca Pepcb/BoyJ (H2Kb, CD45.1) purchased from Jackson Laboratories (Bar Harbor, ME, USA). Mice were housed in a barrier facility in accordance with the guidelines of the Institutional Animal Care and Use Committee.

Tumour cells: Neuro-2a wild-type cells, a mouse NB of strain A origin (H2Ka), were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured to a maximum of 12 passages in Minimal Essential Medium (Gibco, Grand Island, NY, USA), supplemented with 10% fetal bovine serum (FBS), MEM non-essential amino acids, 100 units per ml penicillin, and 100 mg ml⁻¹ streptomycin (Biological Industries, Beit Haemek, Israel). Tumours were induced by subcutaneous (s.c.) implantation of 10⁶ Neuro-2a cells suspended in 100 μl of phosphate-buffered saline (PBS). Tumour growth was measured with a caliper and the volume (mm³) was calculated as width² × length × 0.4. Tumour lysates were prepared by repeated cycles (×3) of freezing in liquid nitrogen and thawing of Neuro-2a cells (Ash et al, 2009). Tumours that regressed were excluded from tumour growth curves, to allow separate evaluation of the rates of growth.

Conditioning, transplantation and assessment of reconstitution: Recipients were conditioned with total body irradiation (TBI) at 700 rad using an X-ray irradiator (Rad Source Technologies Inc, Alpharetta, GA, USA) at a rate of 106 rad per min (Kaminitz et al, 2009). After 6 h, cells were injected into the lateral tail vein in 200 μl PBS. GVHD was assessed using a semiquantitative clinical scale including weight loss, posture (hyperkeratosis of the foot pads impairs movement), activity, diffuse erythema (particularly of the ear) or dermatitis, and diarrhoea (Ash et al, 2010). Spleen reconstitution was quantified from the total numbers of splenocytes and the fractions of different subsets measured by flow cytometry (Ash et al, 2009).
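The caliper-based volume estimate in the tumour cell section above is a simple arithmetic formula; the short Python sketch below restates it for clarity (the helper name and example measurements are illustrative, not taken from the paper).

```python
def tumour_volume_mm3(width_mm: float, length_mm: float) -> float:
    """Caliper-based volume estimate described above: width^2 x length x 0.4."""
    return (width_mm ** 2) * length_mm * 0.4

# Example: an 8 mm x 12 mm subcutaneous Neuro-2a tumour
print(tumour_volume_mm3(8.0, 12.0))  # 307.2 mm^3
```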
Cell preparation: Whole BMC (wBMC) were harvested from femurs and tibia of donors, and low-density cells were collected as previously described (Kaminitz et al, 2009). Spleens were harvested from mice, minced, passed through 40 μm mesh and dispersed into a single-cell suspension in PBS (Ash et al, 2009). Red blood cells were lysed with medium containing 0.83% ammonium chloride, 0.1% potassium bicarbonate and 0.03% disodium EDTA (Biological Industries). After 4 min, the reaction was arrested with an excess of ice-cold PBS. T cells were enriched either by elution through a cotton wool column, which preferentially retains B lymphocytes and myeloid cells by differential charge while T cells are eluted, or by immunomagnetic depletion using hybridoma-derived antibodies against GR-1, Mac-1 and B220 (Dynal Inc, Lake Success, NY, USA).

Dendritic cells: Mononuclear cells were harvested from bone marrow samples by gradient centrifugation over murine lympholyte (Cedarlane, Burlington, ON, Canada) and cultured (10⁶ cells per ml) in low-LPS RPMI-1640 (<10 pg ml⁻¹) supplemented with 10% FBS, 1% L-glutamine, 1% sodium pyruvate, 1% α-MEM non-essential amino acids, 0.1% Hepes buffer and 1% Pen/Strep (Biological Industries and Gibco) (Ash et al, 2010). The medium was supplemented with 50 ng ml⁻¹ mouse recombinant (mr)GM-CSF and 10 ng ml⁻¹ mrIL-4 on intermittent days (PeproTech, Rocky Hill, NJ, USA). Immunophenotypic characterisation after 6 days of culture showed expression of CD11c=85±4%, CD86=65±6% and CD205=21±3% (n=5). Control DC were incubated for an additional 24 h in DC medium without growth factors. Pulsing with tumour antigens was performed by co-incubation of DC with tumour lysate at an approximate DC:tumour cell ratio of 3:1.
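The pulsing step above fixes an approximate 3:1 DC:tumour-cell ratio; as a minimal worked example (the helper name is hypothetical and not from the paper), the amount of lysate to add can be expressed as tumour-cell equivalents per DC culture:

```python
def lysate_cell_equivalents(n_dc: int, dc_to_tumour_ratio: float = 3.0) -> int:
    """Tumour-cell equivalents of lysate needed so that DC:tumour is ~3:1."""
    return round(n_dc / dc_to_tumour_ratio)

# Example: pulsing 3 x 10**6 cultured DC uses lysate from ~1 x 10**6 Neuro-2a cells
print(lysate_cell_equivalents(3_000_000))  # 1000000
```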
Flow cytometry: Measurements were performed with a Vantage SE flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA). Positive staining was determined on a log scale, normalised with control cells stained with isotype control mAb. Donor chimerism was determined from the percentage of donor and host peripheral blood lymphocytes (PBL) and splenocytes (Kaminitz et al, 2009). Blood was collected in heparinised serum vials in 200 μl M199, centrifuged over 1.5 ml lymphocyte separation media (Cedarlane), and red blood cells were lysed. Nucleated cells were incubated for 45 min at 4°C with phycoerythrin (PE)-anti-H2Kb (Caltag, Carlsbad, CA, USA) and fluorescein isothiocyanate (FITC)-anti-H2Kk mAb (eBioscience, San Diego, CA, USA), which is cross-reactive against H2Ka (Ash et al, 2009). Minor antigen disparity was assessed using CD45.1-PE and CD45.2-FITC antibodies (eBioscience). Lymphocytes were quantified using CD4-allophycocyanin, CD8-FITC, B220-PerCP and NK1.1-PE antibodies, and DC were characterised using CD86-PE and CD11c-PerCP antibodies (BD Pharmingen, San Diego, CA, USA).

Lymphocyte functional assays: For measurements of proliferation, cells were labelled with 2.5 μM 5-(and-6-)-carboxyfluorescein diacetate succinimidyl ester (CFSE, Molecular Probes, Eugene, OR, USA) and plated on petri dishes (5 × 10⁷) for 45 min to enrich for lymphocytes. After 45 min, non-adherent cells were collected, washed and incubated in DMEM (Ash et al, 2009). Triplicate cultures were harvested after 3–5 days and the dilution of CFSE was analysed by flow cytometry by gating on the live lymphocytes. Data were analysed using the ModFit software (Verity Software House, Topsham, ME, USA). Interferon-γ was determined in the supernatant of lymphocyte suspensions in 96-well microtiter plates using the murine IFN-γ ELISA kit (BD Pharmingen). Absorbance differences were read in triplicate in an ELISA plate reader (Biotec Inc, Suffolk, UK). Appropriate dilutions within the measurement range were quantified according to a standard calibration curve (pg ml⁻¹).
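The IFN-γ readout above is converted from absorbance to pg ml⁻¹ via a standard calibration curve; the exact curve-fitting procedure is not described, so the sketch below assumes a simple log-linear fit over a serial standard dilution (function name, standard concentrations and optical densities are illustrative, not from the paper).

```python
import numpy as np

def ifn_gamma_pg_per_ml(sample_od, standard_conc_pg_ml, standard_od):
    """Interpolate sample concentrations from a log-linear standard curve.

    Fits log10(concentration) against optical density of the standards and
    applies the fit to sample ODs that fall within the standard range.
    """
    coeffs = np.polyfit(standard_od, np.log10(standard_conc_pg_ml), deg=1)
    return 10 ** np.polyval(coeffs, np.asarray(sample_od))

# Illustrative two-fold standard series (pg/ml) and background-corrected ODs
standards = np.array([2000, 1000, 500, 250, 125, 62.5])
ods = np.array([1.90, 1.35, 0.88, 0.52, 0.30, 0.17])
print(ifn_gamma_pg_per_ml([0.75, 0.40], standards, ods))
```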
Cytotoxic assays: Effector splenocytes harvested from naïve mice and chimeras were depleted of red blood cells by lysis and passed through wool mesh to enrich for T lymphocytes (∼70%). These cells were incubated with 5 × 10⁵ Neuro-2a target cells for 7 h at 37°C in 150 μl at 1:10 to 1:100 target:effector ratios. Cytolysis was quantified by lactate dehydrogenase (LDH) release and normalised for background values (Ash et al, 2009).

Statistical analysis: Data are presented as means±standard deviations for each experimental protocol. Results in each experimental group were evaluated for reproducibility by linear regression of duplicate measurements. Differences between the experimental protocols were estimated with a post-hoc Scheffé t-test and significance was considered at P<0.05.
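The cytotoxic assay above quantifies lysis from LDH release "normalised for background values" without spelling out the normalisation; the sketch below therefore assumes the conventional specific-lysis formula with spontaneous and maximum release controls (an assumption, not the authors' stated method; the function name and absorbance values are illustrative).

```python
def percent_specific_lysis(experimental, effector_spont, target_spont, target_max):
    """Conventional LDH cytotoxicity formula (assumed, not stated in the paper):
    100 * (E - Es - Ts) / (Tmax - Ts), using background-corrected absorbances."""
    return 100.0 * (experimental - effector_spont - target_spont) / (target_max - target_spont)

# Example at a 1:10 target:effector ratio (illustrative absorbance values)
print(percent_specific_lysis(0.95, 0.20, 0.15, 1.60))  # ~41.4% specific lysis
```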
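The statistical analysis above relies on post-hoc Scheffé comparisons at P<0.05; no analysis code accompanies the paper, so the following is only a sketch of one way to run the equivalent pairwise Scheffé contrasts after a one-way ANOVA with SciPy (group labels and values are illustrative).

```python
import numpy as np
from scipy import stats

def scheffe_pairwise(groups):
    """Pairwise Scheffe comparisons after one-way ANOVA.

    `groups` is a list of 1-D arrays of replicate measurements. Returns a dict
    mapping group-index pairs to (mean difference, Scheffe p-value).
    """
    k = len(groups)
    n = np.array([len(g) for g in groups])
    means = np.array([np.mean(g) for g in groups])
    N = n.sum()
    # Within-group (error) mean square from the one-way ANOVA decomposition
    sse = sum(((np.asarray(g) - m) ** 2).sum() for g, m in zip(groups, means))
    mse = sse / (N - k)
    results = {}
    for i in range(k):
        for j in range(i + 1, k):
            diff = means[i] - means[j]
            f_contrast = diff ** 2 / (mse * (1 / n[i] + 1 / n[j]))
            # Scheffe: contrast F divided by (k - 1) is referred to F(k-1, N-k)
            p = stats.f.sf(f_contrast / (k - 1), k - 1, N - k)
            results[(i, j)] = (diff, p)
    return results

# Illustrative proliferation indices for three experimental groups
naive = np.array([1.0, 1.2, 0.9, 1.1])
allo_bmt = np.array([1.4, 1.6, 1.5, 1.3])
allo_bmt_dc = np.array([2.4, 2.7, 2.5, 2.6])
for pair, (diff, p) in scheffe_pairwise([naive, allo_bmt, allo_bmt_dc]).items():
    print(pair, round(diff, 2), "P=%.4g" % p)
```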
Results:

Impact of DC administration on tumour growth: In the first stage, we assessed the impact of donor DC on graft vs tumour reactivity induced by allogeneic BMT using the experimental setting delineated in Figure 1A: Neuro-2a cells were implanted subcutaneously and after 5 days the mice were irradiated at 700 rad and grafted with allogeneic (H2Kb→H2Ka) BMC. Dendritic cells (H2Kb) were derived in vitro from donor bone marrow and were pulsed with Neuro-2a lysates (DCNeuro2a) before infusion adjacent to the subcutaneous tumours. Inoculation of DC on day +7 post transplant caused a significant reduction in tumour growth rates (P<0.01 vs allo-BMT; Figure 1B), indicating effective immunisation against the tumour. In this analysis, we excluded tumours that regressed completely to facilitate the statistical analysis of growth rates. Immunisation with DC resulted in complete regression of 3 out of 17 tumours (18%), whereas only 1 of 21 tumours subsided completely after allogeneic BMT. The allografted mice displayed no signs of GVHD irrespective of DC inoculation, as determined by clinical monitoring and validated by liver histology. To test whether DC immunisation was mediated by initiation and/or propagation of tumour-reactive T cells, splenic lymphocytes were assessed for direct lysis of Neuro-2a cells in vitro.
Lymphocytes from mice immunised with tumour-pulsed DC (DCNeuro2a) displayed more potent cytolytic activity as compared with allografted mice (P<0.005 at the lowest ratio), decreasing the effective target:effector ratio (Figure 1C). These data confirmed that tumour growth suppression is mediated by direct tumour-lytic activity of lymphocytes in the allografted mice.

Qualitative effects of DC on immune reconstitution after transplantation: To evaluate the mechanism of action of immunisation with DC, we monitored quantitative and qualitative immune reconstitution at 4 weeks after transplantation. Dendritic cells had little impact on reconstitution of spleen cellularity, including delayed recovery of T cells (Figure 2A). Immaturity of T cells after allogeneic BMT was recognised from the low output of interferon-γ (IFN-γ), which was, however, somewhat elevated following DC immunisation (Figure 2B). The superior cytokine responses following administration of donor DC prompted more detailed functional analysis of the lymphocytes. At 4 weeks post transplant, splenic lymphocytes from allografted mice immunised with DC displayed higher proliferative responses under mitogenic stimulation than naïve (unmanipulated) B6 mice and allografted mice (P<0.01, Figure 2C). Thus, IFN-γ production and proliferation were consistently improved by immunisation with tumour-pulsed donor DC. To further refine the specificity of DC-mediated stimulation, lymphocytes were restimulated in vitro with naïve (antigen-inexperienced) and tumour-pulsed donor DC. Lymphocytes from both control (naïve) and allografted mice were equally stimulated by both types of DC.
However, the proliferation of lymphocytes from mice immunised with tumour-pulsed donor DC was further amplified by restimulation with tumour-pulsed DC as compared with tumour-inexperienced DC in vitro (P<0.05), demonstrating enhanced antigen-specific responsiveness of the GvT effectors. These data delineate a scenario where DC participate in maturation of the reconstituted immune system (non-specific mitogenic stimulation), and also in specific sensitisation against tumour cells (restimulation with tumour-pulsed DC). The most specific test of cytotoxic GvT effectors is the capacity to lyse tumour cells in vitro. As expected, lymphocytes from allografted mice with and without DC immunisation displayed more potent tumour-lytic activity in vitro than tumour-inexperienced lymphocytes from naïve mice (Figure 2D). However, restimulation with tumour-pulsed DC in vitro elicited more potent tumour lysis than restimulation with naïve DC, pointing to an antigen-specific reaction against the tumour. Therefore, lymphocytes acquired antigen specificity in allografted mice even in the absence of donor DC infusion, and immunisation with tumour-pulsed donor DC caused only a marginal increase in tumour cell lysis. These data suggest that amplification of immune reactivity mediated by DC is more significant than sensitisation to tumour antigens.
Optimal conditions of DC infusion: In view of the beneficial effect of DC immunisation on GvT effectors recovering from lymphopenia following stem cell transplant, we sought to characterise several parameters. We reasoned that delayed infusion of DC after transplantation might be more effective in stimulation of GvT reactivity, under conditions of superior quantitative recovery and maturation of the lymphocytes. This assumption was not substantiated, as early immunisation with tumour-pulsed DC on day +7 was more potent in inhibition of tumour growth than late DC administration on day +14 (Figure 3A). To determine the basis of this differential effect of the time of immunisation, we assessed the proliferative responses of splenic lymphocytes from the immunised mice. Non-specific mitogenic stimulation resulted in higher proliferation rates of lymphocytes from mice immunised on day +7 (Figure 3B), indicating a general activation state. Furthermore, restimulation of the lymphocytes with tumour-pulsed DC in vitro resulted in robust proliferative responses, evidence of tumour-specific sensitisation of lymphocytes in the allografted mice. These tumour growth curves also emphasise that naïve (antigen-inexperienced) DC have similar suppressive effects to tumour-pulsed DC at both times of immunisation (Figure 3A). Consistently, lymphocytes from mice immunised with naïve and tumour-pulsed DC showed similar cytolytic activity against Neuro-2a cells in vitro (Figure 3C), indicating effective tumour-specific sensitisation mediated by naïve DC. The effective uptake and presentation of tumour antigens by naïve DC in vivo was affirmed by the robust tumour-lytic activity of lymphocytes upon restimulation with tumour-sensitised DC in vitro. Taken together, these data demonstrate efficient tumour antigen uptake and presentation by naïve donor DC, suggesting that presensitisation to the tumour is not required to induce lymphotoxic anti-tumour reactivity. The experimental setting used so far relied on inoculation of the donor DC adjacent to the tumour; in the clinical setting, however, DC are required to migrate to and operate at remote sites. We therefore assessed the effect of local vs systemic (intravenous) immunisation with donor DC on tumour growth rates. Tumour suppression was equally affected by DC inoculation adjacent to the tumours and intravenously (Figure 3D), emphasising effective DC migration to distant sites.
Tumour-associated suppressive mechanisms: In a previous study, we reported an interesting decrease in the proliferation of lymphocytes from tumour-bearing mice upon exposure to tumour lysate in vitro, suggesting the presence of an inhibitory factor in the soluble fraction of the Neuro-2a extract (Ash et al, 2009). The proliferation of splenocytes from allografted mice immunised with tumour-pulsed DC on day +7 was markedly increased as compared with all other experimental groups, including allografted mice, recipients of DCnaïve and recipients of DC on day +14 (Figure 4A). To determine whether this inhibitory mechanism is modulated by administration of donor DC, the responses of lymphocytes to mitogenic stimulation were assessed in the presence of tumour lysates. Exposure to Neuro-2a lysate in vitro resulted in suppression of proliferation only in the group immunised with tumour-pulsed DC (Figure 4B), suggesting that a tumour-derived inhibitory signal was amplified by immunisation with tumour-pulsed DC early after transplantation. Notably, this effect was restricted to lymphocytes from mice immunised with tumour-pulsed DC and was not observed in lymphocytes of mice immunised with tumour-inexperienced DC. These data suggest that a soluble inhibitory factor was included in the repertoire of antigens processed by DC during pulsing with tumour lysate, and was presented to newly developed T cells in the allografted mice. However, restimulation of lymphocytes with unstimulated and tumour-pulsed DC ex vivo amplified antigen-specific responses (Figures 2C and 3B), indicating that this proliferation-inhibitory mechanism does not affect the process of tumour antigen processing and presentation.
In addition to reversal of this inhibitory mechanism by restimulation with DC in vitro, proliferative anergy induced by tumour lysate was also reversed by non-specific stimulation of the lymphocytes with LPS in the absence of DC (Figure 4C). Therefore, immune activation overcomes the proliferation-inhibitory signal that is processed by DC during pulsing with tumour antigens.
Discussion:

T cells are effective mediators of graft vs NB reactions (Ash et al, 2009, 2010); however, this lineage develops relatively slowly and consists of immature cells early after transplantation. A single bolus administration of donor DC elicits potent GvT reactivity of effector lymphocytes that translates into tumour growth inhibition. Immunisation with DC had no significant effect on quantitative immune reconstitution, but improved the function of the reconstituted lymphoid compartment. The salutary effect of immunisation with donor DC is mediated by enhanced maturation of the developing lymphocytes, including non-specific activation as well as tumour antigen-specific sensitisation. The current data indicate that DC should be administered early after transplantation, that they may be administered either locally or systemically, and that immature bone marrow-derived DC do not require pulsing with tumour antigens. Intravenous and intratumour DC inoculation yielded similar tumour-suppressive activity, demonstrating efficient migration of circulating APC to the tumour. Early immunisation with donor DC (1 week after transplant) improved all functional parameters of the developing T cells without affecting quantitative recovery of the lymphoid compartment. Antigen-presenting cells improved IFN-γ secretion, responsiveness to mitogen-induced proliferation and direct anti-tumour cytotoxicity, which converged to reduce tumour growth rates. The superior outcome of DC administration in early stages of lymphoid reconstitution suggests that the immunological impact of these cells is primarily mediated by education of the developing donor lymphocytes. Measurements of proliferation and tumour cytotoxicity of lymphocytes from the allografted mice upon restimulation with DC in vitro revealed two activities (Figure 2). The first mechanism involves generalised and non-specific lymphocyte activation, as revealed by restimulation with naïve DC in vitro. The second mechanism involves selective immunisation against tumour antigens, as evident from superior proliferative and cytolytic responses upon restimulation with tumour-pulsed DC in vitro. Tumour antigen-specific immunisation supports our contention that murine NB is immunogenic and is targeted by tumour-specific immune responses after allogeneic BMT (Ash et al, 2009, 2010).
Furthermore, our current data confirm a significant role of DC in stimulation of lymphocytes and facilitation of tumour antigen recognition, as demonstrated by vaccine approaches to induction of anti-tumour immunity using tumour cells modified to express costimulatory molecules and cytokines (Bausero et al, 1996; Johnson et al, 2005) and allogenisation of tumour MHC (Kobayashi et al, 2007). Naïve DC process antigens following radiation-induced necrosis and apoptosis of tumour cells (Sauter et al, 2000; Tatsuta et al, 2009) and therefore DC interventions are beneficial after irradiation (Tatsuta et al, 2009), syngeneic (Asavaroengchai et al, 2002; Jing et al, 2007) and allogeneic BMT (Ash et al, 2010). This raises the question of whether donor DC should be pulsed with tumour antigens before immunisation in the transplant setting. Effective antigen uptake, processing and presentation by naïve (tumour-inexperienced) DC were demonstrated by specific cytolytic responses and tumour growth inhibition similar to those of mice treated with tumour-pulsed DC following transplantation (Figure 3C). Efficient uptake of tumour antigens may be explained by the relatively immature nature of DC used in this study, which have superior antigen uptake and presentation capacity as compared with more mature DC (Wilson et al, 2004). We used bone marrow-derived immature DC because, unlike mature DC (Lutz and Schuler, 2002), immature cells have reduced capacity to induce regulatory T cells and to impose anergy on cytotoxic T cells (Huang et al, 2000). It is therefore apparent that there is no need for pulsing of immature donor DC with tumour antigens for effective post-transplant immunisation. One of the potential drawbacks of DC exposure to tumour lysates is transduction of inhibitory signals derived from the tumour to effector lymphocytes. We have previously reported that lymphocytes from mice bearing tumours display proliferative anergy under mitogenic stimulation (Ash et al, 2009). The same hyporesponsiveness to mitogens was observed, upon exposure to tumour lysate, in lymphocytes from reconstituted mice immunised with tumour-pulsed DC. Inhibition pertains primarily to the mitotic function of lymphocytes, as other parameters of maturation, such as IFN-γ secretion and tumour cytolysis, were markedly improved by DC therapy. The inhibitory signal revealed here originates from the tumour lysate and is not an intrinsic suppressor activity of DC, because it did not affect lymphocytes of mice inoculated with naïve DC. These data depict a scenario where donor DC engulf and process an inhibitory signal during pulsing with tumour lysate, which is transferred to the emerging lymphocytes along with other tumour-specific antigens. Loss of inhibition of proliferation when DC were administered at later times after transplantation suggests that this signal primarily affects developing T cells. Suppression of T-cell proliferation was reversed both by restimulation with DC (Figure 2C) and by non-specific stimulation with LPS (Figure 4C), demonstrating that lymphocyte stimulation overrides the induced proliferative anergy. Current data point to an inhibitory signal that specifically affects effector lymphocyte proliferation, and is distinct from tumour-associated mechanisms capable of inactivating DC. Prior studies revealed several mechanisms by which NB downregulates GvT reactivity; however, most of these factors display different characteristics from those observed under our experimental conditions.
A possible candidate expressed at high levels by NB is migration inhibitory factor (MIF), which suppresses lymphocyte proliferation; however, work with MIF-deficient tumours showed that this chemokine is not directly responsible for suppression of T cell-mediated immunity in vivo (Zhou et al, 2008). Other factors were shown to suppress DC function with variable effects on T cells: IL-27 promotes the evolution of cytotoxic lymphocytes endowed with anti-tumour activity (Shinozaki et al, 2009) and Galectin-1 inhibits cytotoxic T-cell activity (Soldati et al, 2012). Enhanced GvT activity induced by DC-mediated activation of the reconstituted T-cell subset overcomes specific barriers imposed by neuroblastoma. The stroma of this tumour causes gradual dysfunction of DC (Chen et al, 2003; Redlinger et al, 2003; Walker et al, 2005; Zou, 2005) through downregulation of survival factors and impairment of migration (Kanto et al, 2001; Redlinger et al, 2004). Dysfunction of DC has been effectively reversed by overexpression of proinflammatory cytokines and costimulatory molecules, and by downregulation of the sensitivity to suppressor cytokines and chemokines (Zhou et al, 2008; Shinozaki et al, 2009). Additional humoral factors contribute to DC inactivation; for example, high ganglioside-2 (GD2) concentrations in NB patients (McKallip et al, 1999) suppress DC development (Shurin et al, 2001) and function (Sietsma et al, 1998). Repression of anti-tumour immunity by GD2 is attributed to defective cytokine production by DC (Shurin et al, 2001; Redlinger et al, 2004), which impacts negatively on the development of anti-tumour immunity after syngeneic BMT (Jing et al, 2007). In summary, DC foster graft vs NB reactivity through three main mechanisms: facilitation of immune maturation, non-specific activation and induction of antigen-specific GvT reactions. From the clinical point of view, it is important to underline that effective GvT reactivity can be attained even at early stages after transplantation by supplementation of DC, fostering the maturation of bone marrow-derived lymphocytes that are otherwise largely devoid of GvT reactivity. Although DC were also shown to process and transduce signals that inhibit lymphocyte proliferation, DC reduce tumour growth and even abrogate established tumours through induction of antigen-specific cytolytic responses.
Background: Perspectives of immunotherapy for cancer mediated by bone marrow transplantation (BMT) in conjunction with dendritic cell (DC)-mediated immune sensitisation have yielded modest success so far. In this study, we assessed the impact of DC on graft vs tumour (GvT) reactions triggered by allogeneic BMT. Methods: H2K(a) mice implanted with congenic subcutaneous Neuro-2a neuroblastoma (NB, H2K(a)) tumours were irradiated and grafted with allogeneic H2K(b) bone marrow cells (BMC), followed by immunisation with tumour-inexperienced or tumour-pulsed DC. Results: Immunisation with tumour-pulsed donor DC after allogeneic BMT suppressed tumour growth through induction of T cell-mediated NB cell lysis. Early post-transplant administration of DC was more effective than delayed immunisation, with similar efficacy of DC inoculated into the tumour and intravenously. In addition, tumour-inexperienced DC were as effective as tumour-pulsed DC in suppression of tumour growth. Immunisation with DC did not impact quantitative immune reconstitution; however, it enhanced T-cell maturation as evident from interferon-γ (IFN-γ) secretion, proliferation in response to mitogenic stimulation and tumour cell lysis in vitro. Dendritic cells potentiate GvT reactivity both through activation of T cells and through specific sensitisation against tumour antigens. We found that, during pulsing with tumour lysate, DC also elaborate a factor that selectively inhibits lymphocyte proliferation, which is, however, abolished by humoral and DC-mediated lymphocyte activation. Conclusions: These data reveal complex involvement of antigen-presenting cells in GvT reactions, suggesting that the limited success in clinical application is not a result of limited efficacy but of suboptimal implementation. Although DC can amplify soluble signals from NB lysates that inhibit lymphocyte proliferation, early administration of DC is a dominant factor in suppression of tumour growth.
null
null
9,948
335
[ 67, 202, 142, 162, 195, 220, 185, 84, 50, 286, 439, 409, 341 ]
16
[ "dc", "tumour", "cells", "lymphocytes", "mice", "pulsed", "tumour pulsed", "figure", "immunisation", "donor" ]
[ "unmanipulated b6 mice", "cells mouse nb", "mice immunised tumour", "mice tumour growth", "mice study h2ka" ]
null
null
null
null
null
null
[CONTENT] dendritic cells | graft vs tumour reaction | allogeneic bone marrow transplantation | immunisation | cytotoxic T cells [SUMMARY]
null
[CONTENT] dendritic cells | graft vs tumour reaction | allogeneic bone marrow transplantation | immunisation | cytotoxic T cells [SUMMARY]
null
null
null
[CONTENT] Animals | Antigen-Presenting Cells | Bone Marrow Transplantation | Dendritic Cells | Graft vs Tumor Effect | Immunization, Passive | Lymphocyte Activation | Mice | Mice, Inbred C57BL | Neuroblastoma | T-Lymphocytes, Cytotoxic | Transplantation, Homologous [SUMMARY]
null
[CONTENT] Animals | Antigen-Presenting Cells | Bone Marrow Transplantation | Dendritic Cells | Graft vs Tumor Effect | Immunization, Passive | Lymphocyte Activation | Mice | Mice, Inbred C57BL | Neuroblastoma | T-Lymphocytes, Cytotoxic | Transplantation, Homologous [SUMMARY]
null
null
null
[CONTENT] unmanipulated b6 mice | cells mouse nb | mice immunised tumour | mice tumour growth | mice study h2ka [SUMMARY]
null
[CONTENT] unmanipulated b6 mice | cells mouse nb | mice immunised tumour | mice tumour growth | mice study h2ka [SUMMARY]
null
null
null
[CONTENT] dc | tumour | cells | lymphocytes | mice | pulsed | tumour pulsed | figure | immunisation | donor [SUMMARY]
null
[CONTENT] dc | tumour | cells | lymphocytes | mice | pulsed | tumour pulsed | figure | immunisation | donor [SUMMARY]
null
null
null
[CONTENT] dc | tumour | mice | lymphocytes | pulsed | tumour pulsed | figure | immunisation | vitro | tumour pulsed dc [SUMMARY]
null
[CONTENT] tumour | dc | cells | lymphocytes | mice | pulsed | usa | tumour pulsed | figure | immunisation [SUMMARY]
null
null
null
[CONTENT] DC | BMT | NB ||| DC | DC ||| DC | DC ||| IFN ||| ||| DC | DC [SUMMARY]
null
[CONTENT] BMT ||| DC | BMT ||| Neuro-2a | NB | H2K(a | BMC | DC ||| DC | BMT | NB ||| DC | DC ||| DC | DC ||| IFN ||| ||| DC | DC ||| ||| DC | NB | DC [SUMMARY]
null
High level of protection against COVID-19 after two doses of BNT162b2 vaccine in the working age population - first results from a cohort study in Southern Sweden.
34586934
Vaccine effectiveness against COVID-19 needs to be assessed in diverse real-world population settings.
BACKGROUND
A cohort study of 805,741 residents in Skåne county, Southern Sweden, aged 18-64 years, of whom 26,587 received at least one dose of the BNT162b2 vaccine. Incidence rates of COVID-19 were estimated in sex- and age-adjusted analysis and stratified in two-week periods with substantial community spread of the disease.
METHODS
The estimated vaccine effectiveness in preventing infection ≥7 days after second dose was 86% (95% CI 72-94%) but only 42% (95% CI 14-63%) ≥14 days after a single dose. No difference in vaccine effectiveness was observed between females and males. Having a prior positive test was associated with 91% (95% CI 85-94%) effectiveness against new infection among the unvaccinated.
RESULTS
A satisfactory effectiveness of BNT162b2 after the second dose was suggested, but with possibly substantially lower effect before the second dose.
CONCLUSION
[ "BNT162 Vaccine", "COVID-19", "COVID-19 Vaccines", "Cohort Studies", "Female", "Humans", "Male", "SARS-CoV-2", "Sweden", "Vaccine Efficacy", "Vaccines" ]
8500302
Introduction
There has been a very rapid development of vaccines against SARS-CoV-2 and mass vaccination campaigns have been launched worldwide [1,2]. To date, four different vaccines have been licenced in the European Union; BNT162b2 mRNA COVID-19 Vaccine (Pfizer-BioNTech), mRNA-1273 (Moderna Vaccine), ChAdOx1 nCoV-19 (AstraZeneca) and Ad26.COV2-S (Jansen). In Skåne, a county in Southern Sweden with approximately 1.4 million inhabitants, the vaccination campaign started on 27 December 2020. The first to be vaccinated were nursing home residents and their caregivers as well as frontline health care workers. The aim of this study was to evaluate vaccine effectiveness (VE) of the BNT16b2 mRNA (Pfizer-BioNTech) COVID-19 vaccine in preventing SARS-CoV-2 infection in people of working age.
null
null
Results
The study cohort comprised 805,741 individuals on 27 December 2020, of whom 26,587 (3.3%) received at least one dose of the BNT16b2 mRNA vaccine before 28 February 2021 (Table 1). The incidence of SARS-CoV-2 infection in the study cohort peaked in mid-January 2021 with a weekly incidence of 880 cases per 100,000 individuals, and then decreased gradually to 250 cases per 100,000 individuals in the last week of February. The vaccinated cohort had a higher proportion of females (80% vs. 52%) and was older (median age 47 vs. 40 years) than the unvaccinated cohort. The median time between dose 1 and dose 2 was 28 days, and 99% of the vaccinated had an interval of ≥21 days between the doses. There were 8 observed cases of SARS-CoV-2 infection 7 days or more after the second dose among subjects with no prior positive test during period 4 (Feb 15–28), whereas 56 cases would have been expected if they had had the same incidence rate as the unvaccinated during that period. Thus, the estimated VE in preventing infection 7 days or more after the second dose among subjects with no prior positive test was 86% (95% CI 72–94%) during period 4 (Table 2 and Figure 1). Similar but more statistically uncertain VE (93%; 95% CI 59–100%) was observed in period 3 (Figure 1 and Supplementary Table S1). VE was similar among females and males, but more statistically uncertain among males owing to the smaller number of vaccinated males (Supplementary Table S2). No deaths within 30 days of a positive test were observed among the vaccinated (Supplementary Table S3). The corresponding case fatality risk was 0.1% among the unvaccinated (27 deaths among 25,970 cases that were followed for 30 days). Having a prior positive test was associated with 91% (95% CI 85–94%) effectiveness against new infection among the unvaccinated during period 4 (Table 2). This protective effect was similarly high during period 3 (Supplementary Table S1), and remained high when the analysis was restricted to individuals with a prior positive test more than three months before baseline (83%, 95% CI 51–97%; not in tables). Figure 1. Effectiveness with 95% confidence interval (CI) of the BNT16b2 mRNA (Pfizer-BioNTech) vaccine in preventing SARS-CoV-2 infection during period 3 (1–14 February 2021) and period 4 (15–28 February 2021). Table 1. Baseline characteristics of the study cohort on 27 December 2020, when COVID-19 vaccination started, stratified by vaccination status on 28 February 2021. Table 2. Effectiveness of the BNT16b2 mRNA (Pfizer-BioNTech) vaccine in preventing SARS-CoV-2 infection during period 4 (15–28 February 2021). (a) Cases per 100,000 persons and week (95% confidence interval); results from the statistical analysis were weighted with respect to the sex and age distribution of the vaccinated cohort. (b) Reference category in the calculation of vaccine effectiveness. (c) The number of vaccinated individuals with a prior positive test was too small to permit evaluation of vaccine effectiveness.
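The headline period-4 point estimate can be sanity-checked from the observed-versus-expected counts quoted above. The sketch below is a minimal, unadjusted back-of-envelope calculation only; the published 86% additionally weights person-time by sex and age, so the simple ratio merely approximates the reported figure.

```python
# Unadjusted sanity check of the period-4 point estimate reported above.
# Assumes VE ≈ 1 - observed/expected; the published analysis also weights
# person-time by the sex and age distribution of the vaccinated cohort.
observed_cases = 8    # infections >=7 days after dose 2, no prior positive test
expected_cases = 56   # expected at the unvaccinated incidence rate

ve = 1 - observed_cases / expected_cases
print(f"Approximate vaccine effectiveness: {ve:.0%}")  # ~86%
```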
Conclusion
In conclusion, we found a vaccine effectiveness of 86% in preventing infection 7 days or more after second dose of BNT16b2 mRNA vaccine, in adults of working age during a period of high circulation of SARS-CoV-2. The observed vaccine effectiveness was not satisfactory after a first dose only.
[ "Data sources", "Study cohort", "Outcomes", "Statistical analysis" ]
[ "This cohort study was based on registers kept for administrative purposes at the Skåne county council, Sweden. Data sources were the total population register used for individual-level data on residency and vital status, and health care registers used for individual-level data on vaccinations and positive COVID-19 test results. Linkage between the different data sources was facilitated using the personal identification number assigned to all Swedish citizens at birth or immigration.", "The study cohort included all persons aged 18–64 years residing in Skåne county, Sweden, on 27 December 2020 when vaccinations started. The cohort was followed until 28 February 2021. Data on vaccination including type of vaccine and dose were linked to the cohort, together with data on prior positive COVID-19 tests at any time point from March 2020 until 26 December 2020. Individuals who during follow up were vaccinated with other COVID-19 vaccines than BNT16b2 mRNA were excluded at baseline due to too small numbers to permit evaluation (1.0% of the population). Individuals moving out from the region during follow up were censored on the date of relocation.", "The primary outcome was the first positive SARS-CoV2 test result received from 27 December 2020 to 28 February 2021, hereafter called SARS-CoV-2-infection. During the study period, the Regional Centre for Disease Control recommended individuals of >6 years old with symptoms of COVID-19 to get tested. Additionally, test recommendations were from 21 January 2021 given to persons living in the same household as a person with a confirmed infection, irrespective of own symptoms, five days after the index case. Sampling was performed mainly from nasopharynx and analysed by RT-PCR at the Regional Laboratory of Clinical Microbiology or through a combined sampling from pharynx, nose and saliva through RT-PCR at laboratories assigned by the Swedish Board of Health and Welfare: Dynamic Code AB, Linköping, Sweden and Eurofins LifeCodexx GmbH, Germany. Moreover, some patients and health care workers were tested using antigen tests (PANBIOTM, Abbot) from nasopharynx samples, within both primary and secondary care. Result from all diagnostic modalities and laboratories were available for the study. As secondary outcome, we used death with COVID-19, defined as death within 30 days of a positive test.", "Statistical analyses were conducted in Stata SE 14.2 (Stata Corp.) and IBM SPSS Statistics 26 (SPSS Corp.). The number of the positive COVID-19 tests was calculated in relation to person-weeks of follow up, separately for unvaccinated and vaccinated follow up time, and stratified on prior COVID-19 positivity. Further stratifications were done according to (i) no dose or 0–13 days after the first dose, (ii) at least 14 days after the first dose but before second dose, (iii) 0–6 days after the date of the second dose, (iv) at least 7 days after the second dose. To account for variations in community spread during follow up, the counting of cases and person-weeks was done separately in four two-week periods (period 1: Dec 27–Jan 17, period 2: Jan 18–31, period 3: Feb 1–14 and period 4: Feb 15–28). We estimated the VE overall and stratified by sex among individuals with no prior positive test at baseline as (IRR–1)/IRR together with 95% confidence interval (CI), where IRR represents the incidence rate ratio contrasting unvaccinated with vaccinated person-time. 
Main VE results were reported for period 4 with the longest follow up, and we also present results for period 3 as comparison whereas only incidence rates are presented for the two prior periods where no effect can be assumed. As a further reference, we calculated effectiveness associated with a prior positive test at baseline. All statistical analyses were weighted to account for differences in sex and age distribution (five groups: 18–44, 45–49, 50–54, 55–59 and 60–64 years old) among vaccinated and unvaccinated." ]
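The estimator described in the statistical-analysis text above, VE = (IRR − 1)/IRR with IRR contrasting unvaccinated and vaccinated person-time, can be sketched as follows. This is an illustrative sketch, not the study code: the input counts and person-weeks are hypothetical placeholders, and the published analysis additionally applies sex- and age-standardisation weights and reports 95% confidence intervals, which are omitted here.

```python
# Minimal sketch of the vaccine-effectiveness estimator described above:
# VE = (IRR - 1) / IRR, where IRR is the incidence rate ratio contrasting
# unvaccinated with vaccinated person-time within one calendar period.
# Counts and person-weeks below are hypothetical placeholders, not study data;
# the published analysis also weights by sex/age and computes 95% CIs.

def vaccine_effectiveness(cases_unvax: int, pweeks_unvax: float,
                          cases_vax: int, pweeks_vax: float) -> float:
    rate_unvax = cases_unvax / pweeks_unvax   # cases per person-week, unvaccinated
    rate_vax = cases_vax / pweeks_vax         # cases per person-week, vaccinated
    irr = rate_unvax / rate_vax               # incidence rate ratio
    return (irr - 1) / irr                    # equivalently 1 - rate_vax / rate_unvax

# Example with made-up period-4-style inputs:
ve = vaccine_effectiveness(cases_unvax=4000, pweeks_unvax=1_500_000,
                           cases_vax=8, pweeks_vax=20_000)
print(f"VE: {ve:.0%}")
```

The same function would be applied separately within each two-week period to mirror the stratification by calendar time described in the text.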
[ null, null, null, null ]
[ "Introduction", "Materials and methods", "Data sources", "Study cohort", "Outcomes", "Statistical analysis", "Results", "Discussion", "Conclusion", "Supplementary Material" ]
[ "There has been a very rapid development of vaccines against SARS-CoV-2 and mass vaccination campaigns have been launched worldwide [1,2]. To date, four different vaccines have been licenced in the European Union; BNT162b2 mRNA COVID-19 Vaccine (Pfizer-BioNTech), mRNA-1273 (Moderna Vaccine), ChAdOx1 nCoV-19 (AstraZeneca) and Ad26.COV2-S (Jansen). In Skåne, a county in Southern Sweden with approximately 1.4 million inhabitants, the vaccination campaign started on 27 December 2020. The first to be vaccinated were nursing home residents and their caregivers as well as frontline health care workers. The aim of this study was to evaluate vaccine effectiveness (VE) of the BNT16b2 mRNA (Pfizer-BioNTech) COVID-19 vaccine in preventing SARS-CoV-2 infection in people of working age.", "Data sources This cohort study was based on registers kept for administrative purposes at the Skåne county council, Sweden. Data sources were the total population register used for individual-level data on residency and vital status, and health care registers used for individual-level data on vaccinations and positive COVID-19 test results. Linkage between the different data sources was facilitated using the personal identification number assigned to all Swedish citizens at birth or immigration.\nThis cohort study was based on registers kept for administrative purposes at the Skåne county council, Sweden. Data sources were the total population register used for individual-level data on residency and vital status, and health care registers used for individual-level data on vaccinations and positive COVID-19 test results. Linkage between the different data sources was facilitated using the personal identification number assigned to all Swedish citizens at birth or immigration.\nStudy cohort The study cohort included all persons aged 18–64 years residing in Skåne county, Sweden, on 27 December 2020 when vaccinations started. The cohort was followed until 28 February 2021. Data on vaccination including type of vaccine and dose were linked to the cohort, together with data on prior positive COVID-19 tests at any time point from March 2020 until 26 December 2020. Individuals who during follow up were vaccinated with other COVID-19 vaccines than BNT16b2 mRNA were excluded at baseline due to too small numbers to permit evaluation (1.0% of the population). Individuals moving out from the region during follow up were censored on the date of relocation.\nThe study cohort included all persons aged 18–64 years residing in Skåne county, Sweden, on 27 December 2020 when vaccinations started. The cohort was followed until 28 February 2021. Data on vaccination including type of vaccine and dose were linked to the cohort, together with data on prior positive COVID-19 tests at any time point from March 2020 until 26 December 2020. Individuals who during follow up were vaccinated with other COVID-19 vaccines than BNT16b2 mRNA were excluded at baseline due to too small numbers to permit evaluation (1.0% of the population). Individuals moving out from the region during follow up were censored on the date of relocation.\nOutcomes The primary outcome was the first positive SARS-CoV2 test result received from 27 December 2020 to 28 February 2021, hereafter called SARS-CoV-2-infection. During the study period, the Regional Centre for Disease Control recommended individuals of >6 years old with symptoms of COVID-19 to get tested. 
Additionally, test recommendations were from 21 January 2021 given to persons living in the same household as a person with a confirmed infection, irrespective of own symptoms, five days after the index case. Sampling was performed mainly from nasopharynx and analysed by RT-PCR at the Regional Laboratory of Clinical Microbiology or through a combined sampling from pharynx, nose and saliva through RT-PCR at laboratories assigned by the Swedish Board of Health and Welfare: Dynamic Code AB, Linköping, Sweden and Eurofins LifeCodexx GmbH, Germany. Moreover, some patients and health care workers were tested using antigen tests (PANBIOTM, Abbot) from nasopharynx samples, within both primary and secondary care. Result from all diagnostic modalities and laboratories were available for the study. As secondary outcome, we used death with COVID-19, defined as death within 30 days of a positive test.\nThe primary outcome was the first positive SARS-CoV2 test result received from 27 December 2020 to 28 February 2021, hereafter called SARS-CoV-2-infection. During the study period, the Regional Centre for Disease Control recommended individuals of >6 years old with symptoms of COVID-19 to get tested. Additionally, test recommendations were from 21 January 2021 given to persons living in the same household as a person with a confirmed infection, irrespective of own symptoms, five days after the index case. Sampling was performed mainly from nasopharynx and analysed by RT-PCR at the Regional Laboratory of Clinical Microbiology or through a combined sampling from pharynx, nose and saliva through RT-PCR at laboratories assigned by the Swedish Board of Health and Welfare: Dynamic Code AB, Linköping, Sweden and Eurofins LifeCodexx GmbH, Germany. Moreover, some patients and health care workers were tested using antigen tests (PANBIOTM, Abbot) from nasopharynx samples, within both primary and secondary care. Result from all diagnostic modalities and laboratories were available for the study. As secondary outcome, we used death with COVID-19, defined as death within 30 days of a positive test.\nStatistical analysis Statistical analyses were conducted in Stata SE 14.2 (Stata Corp.) and IBM SPSS Statistics 26 (SPSS Corp.). The number of the positive COVID-19 tests was calculated in relation to person-weeks of follow up, separately for unvaccinated and vaccinated follow up time, and stratified on prior COVID-19 positivity. Further stratifications were done according to (i) no dose or 0–13 days after the first dose, (ii) at least 14 days after the first dose but before second dose, (iii) 0–6 days after the date of the second dose, (iv) at least 7 days after the second dose. To account for variations in community spread during follow up, the counting of cases and person-weeks was done separately in four two-week periods (period 1: Dec 27–Jan 17, period 2: Jan 18–31, period 3: Feb 1–14 and period 4: Feb 15–28). We estimated the VE overall and stratified by sex among individuals with no prior positive test at baseline as (IRR–1)/IRR together with 95% confidence interval (CI), where IRR represents the incidence rate ratio contrasting unvaccinated with vaccinated person-time. Main VE results were reported for period 4 with the longest follow up, and we also present results for period 3 as comparison whereas only incidence rates are presented for the two prior periods where no effect can be assumed. As a further reference, we calculated effectiveness associated with a prior positive test at baseline. 
All statistical analyses were weighted to account for differences in sex and age distribution (five groups: 18–44, 45–49, 50–54, 55–59 and 60–64 years old) among vaccinated and unvaccinated.\nStatistical analyses were conducted in Stata SE 14.2 (Stata Corp.) and IBM SPSS Statistics 26 (SPSS Corp.). The number of the positive COVID-19 tests was calculated in relation to person-weeks of follow up, separately for unvaccinated and vaccinated follow up time, and stratified on prior COVID-19 positivity. Further stratifications were done according to (i) no dose or 0–13 days after the first dose, (ii) at least 14 days after the first dose but before second dose, (iii) 0–6 days after the date of the second dose, (iv) at least 7 days after the second dose. To account for variations in community spread during follow up, the counting of cases and person-weeks was done separately in four two-week periods (period 1: Dec 27–Jan 17, period 2: Jan 18–31, period 3: Feb 1–14 and period 4: Feb 15–28). We estimated the VE overall and stratified by sex among individuals with no prior positive test at baseline as (IRR–1)/IRR together with 95% confidence interval (CI), where IRR represents the incidence rate ratio contrasting unvaccinated with vaccinated person-time. Main VE results were reported for period 4 with the longest follow up, and we also present results for period 3 as comparison whereas only incidence rates are presented for the two prior periods where no effect can be assumed. As a further reference, we calculated effectiveness associated with a prior positive test at baseline. All statistical analyses were weighted to account for differences in sex and age distribution (five groups: 18–44, 45–49, 50–54, 55–59 and 60–64 years old) among vaccinated and unvaccinated.", "This cohort study was based on registers kept for administrative purposes at the Skåne county council, Sweden. Data sources were the total population register used for individual-level data on residency and vital status, and health care registers used for individual-level data on vaccinations and positive COVID-19 test results. Linkage between the different data sources was facilitated using the personal identification number assigned to all Swedish citizens at birth or immigration.", "The study cohort included all persons aged 18–64 years residing in Skåne county, Sweden, on 27 December 2020 when vaccinations started. The cohort was followed until 28 February 2021. Data on vaccination including type of vaccine and dose were linked to the cohort, together with data on prior positive COVID-19 tests at any time point from March 2020 until 26 December 2020. Individuals who during follow up were vaccinated with other COVID-19 vaccines than BNT16b2 mRNA were excluded at baseline due to too small numbers to permit evaluation (1.0% of the population). Individuals moving out from the region during follow up were censored on the date of relocation.", "The primary outcome was the first positive SARS-CoV2 test result received from 27 December 2020 to 28 February 2021, hereafter called SARS-CoV-2-infection. During the study period, the Regional Centre for Disease Control recommended individuals of >6 years old with symptoms of COVID-19 to get tested. Additionally, test recommendations were from 21 January 2021 given to persons living in the same household as a person with a confirmed infection, irrespective of own symptoms, five days after the index case. 
Sampling was performed mainly from nasopharynx and analysed by RT-PCR at the Regional Laboratory of Clinical Microbiology or through a combined sampling from pharynx, nose and saliva through RT-PCR at laboratories assigned by the Swedish Board of Health and Welfare: Dynamic Code AB, Linköping, Sweden and Eurofins LifeCodexx GmbH, Germany. Moreover, some patients and health care workers were tested using antigen tests (PANBIOTM, Abbot) from nasopharynx samples, within both primary and secondary care. Result from all diagnostic modalities and laboratories were available for the study. As secondary outcome, we used death with COVID-19, defined as death within 30 days of a positive test.", "Statistical analyses were conducted in Stata SE 14.2 (Stata Corp.) and IBM SPSS Statistics 26 (SPSS Corp.). The number of the positive COVID-19 tests was calculated in relation to person-weeks of follow up, separately for unvaccinated and vaccinated follow up time, and stratified on prior COVID-19 positivity. Further stratifications were done according to (i) no dose or 0–13 days after the first dose, (ii) at least 14 days after the first dose but before second dose, (iii) 0–6 days after the date of the second dose, (iv) at least 7 days after the second dose. To account for variations in community spread during follow up, the counting of cases and person-weeks was done separately in four two-week periods (period 1: Dec 27–Jan 17, period 2: Jan 18–31, period 3: Feb 1–14 and period 4: Feb 15–28). We estimated the VE overall and stratified by sex among individuals with no prior positive test at baseline as (IRR–1)/IRR together with 95% confidence interval (CI), where IRR represents the incidence rate ratio contrasting unvaccinated with vaccinated person-time. Main VE results were reported for period 4 with the longest follow up, and we also present results for period 3 as comparison whereas only incidence rates are presented for the two prior periods where no effect can be assumed. As a further reference, we calculated effectiveness associated with a prior positive test at baseline. All statistical analyses were weighted to account for differences in sex and age distribution (five groups: 18–44, 45–49, 50–54, 55–59 and 60–64 years old) among vaccinated and unvaccinated.", "The study cohort comprised 805,741 individuals on 27 December 2020, of whom 26,587 (3.3%) received at least one dose of the BNT16b2 mRNA vaccine before 28 February 2021 (Table 1). The incidence of SARS-CoV-2 infection in the study cohort peaked in mid-January 2021 with a weekly incidence of 880 cases per 100,000 individuals, and then decreased gradually to 250 cases per 100,000 individuals in the last week of February. The vaccinated cohort had a higher proportion of females (80% vs. 52%) and was older (median age 47 vs. 40 years) than the unvaccinated cohort. The median time between dose 1 and dose 2 was 28 days and 99% of the vaccinated had an interval of ≥21 days between the doses. There were 8 observed cases of SARS-CoV-2 infection 7 days or more after the second dose among subjects with no prior positive test during period 4 (Feb 15–28), whereas 56 cases were expected if they have had the same incidence rated as the unvaccinated during that period. Thus, the estimated VE in preventing infection 7 days or more after second dose among subjects with no prior positive test was 86% (95% CI 72–94%) during period 4 (Table 2 and Figure 1). 
Similar but more statistically uncertain VE (93%; 95% CI 59–100%) was observed in period 3 (Figure 1 and Supplementary Table S1). VE was similar among females and males, but more statistically uncertain among males due to fewer vaccinated (Supplementary Table S2). No deaths within 30 days of a positive test were observed among the vaccinated (Supplementary Table S3). The corresponding case fatality risk was 0.1% among the unvaccinated (27 deaths among 25,970 cases that were followed 30 days). Having a prior positive test was associated with 91% (95% CI 85–94%) effectiveness against new infection among the unvaccinated during period 4 (Table 2). This protective effect was similarly high during period 3 (Supplementary Table S1), and still high when restricting the analysis to individuals with a prior positive test more than three months before baseline (83%, 95% CI 51–97%; not in tables).\nEffectiveness with 95% confidence interval (CI) of the BNT16b2 mRNA (Pfizer-BioNTech) vaccine in preventing SARS-CoV-2 infection during period 3 (1–14 February 2021) and 4 (15–28 February 2021).\nBaseline characteristics of the study cohort on 27 December 2020 when COVID-19 vaccination started, stratified by vaccination status on 28 February 2021.\nEffectiveness of the BNT16b2 mRNA (Pfizer-BioNTech) vaccine in preventing SARS-CoV-2 infection during period 4 (15–28 February 2021).\naCases per 100,000 persons and week (95% confidence interval). Results from statistical analysis were weighted with respect to sex and age distribution of the vaccinated cohort.\nbReference category in the calculation of vaccine effectiveness.\ncThe number of vaccinated with prior positive test was too few to permit evaluation of vaccine effectiveness.", "The most salient finding was the satisfactory VE in preventing SARS-CoV-2 infection seven days or more after the second dose of the BNT16b2 mRNA vaccine, observed in a working age population. A major strength of the study was the rapid evaluation of vaccine effectiveness in a real-world Scandinavian setting with substantial and prevailing community spread of the virus. The circulation of SARS-CoV-2 in the region was among the highest in Europe during the study period with incidence rates between 300 and 900 new cases per week and 100,000 population, why the vaccinated cohort most likely had considerable exposure to the virus during follow up.\nA limitation was the short follow up time, and the current lack of data to evaluate effects on disease severity and hospitalisations and effects of specific virus variants. Surveillance data compiled by the Public Health Agency of Sweden suggest that 32–50% of the positive tests were of the B.1.1.7 variant in the study region during the last follow up period [3]. We also lacked data on disease history and co-existing conditions in the study population, preventing a detailed stratification (or matching) of vaccinated and unvaccinated beyond sex, age and follow up period. The main reason for vaccination in the study cohort was working in the health care sector with frontline personnel given highest priority, but individuals aged up to 64 years who were vaccinated due to their residence in special homes were also included. As we could not account for differences in health related to occupational status and residence across cohorts, we decided not to evaluate effects on all-cause mortality. 
However, we observed no deaths related to COVID-19 among the vaccinated, but it should be noted that the case fatality risk was low also among unvaccinated in this working age cohort. The median interval between doses was in this study 28 days and thus a bit longer than the 21 days in the clinical trials and to what was recommended. This was most likely due to a relative shortage of vaccine. Since only persons without a previously verified SARS-CoV-2 infection was included in the cohort when estimating VE we minimised the risk of detecting persistent RNA shedding. However, we cannot exclude the possibility that a person with a previously asymptomatic infection with continued RNA shedding gets tested again and thus adjudicated as a new case. As a final limitation, we may have some bias in the estimated VE due to unknown prior infections, especially as COVID-19 testing was limited in this population during the spring 2020.\nSeveral reports on VE of the BNT162b2 mRNA vaccine have emerged since the launch of large vaccination campaigns in many parts of the world, Although we estimated the VE after 14 days after the first dose, we also studied the effect 0–6 days after the second dose with a comparably low estimated VE (60%) where the effect is probably still due to the first dose. A cohort study in health care workers in UK demonstrated a VE against SARS-CoV-2 infection after first dose that was higher than in our study (72% after 21 days), whereas they found similar VE as we after second dose (86% after 7 days) [4]. Other studies have also reported higher VE after the first dose [5,6], and reduced risk of severe COVID-19 that required hospitalisation [7]. However, a cohort study from Israel with detailed matching on demographic and clinical characteristics in a diverse population showed similar evolvement of VE after first and second dose as in our study when evaluated against symptomatic infection (57% 14–20 days after first dose and 94% 7 days after second dose [8].\nThe suggested high protection (91%–94% depending on level of community spread) by a previous infection in our study is in line with recently published studies. A study from Denmark suggested an overall protection against reinfection of 81% during the second surge of the COVID-19 epidemic, but with markedly diminishing protection of individuals ≥65 years old [9]. Among health care workers in UK the estimated protection of a previous infection was 94% against a probable or possible symptomatic infection and 83% against all probable and possible infections (our calculations based on reported odds ratios) [10].\nAs our results suggest that vaccine effectiveness may not be satisfactory until seven days after the second dose, it is prudent to inform the public about the importance of maintaining social distancing and complying with other recommendations until full vaccine effect can be expected. Compliance with recommendations is likely to be especially important in regions where the exposure to the virus is still considerable. Another aspect of the present findings, especially when making priorities in the vaccination programs for the general population, is the strong protective effect associated with documented prior infection. It is important to continue to monitor VE for longer periods and to compare VE of different vaccines, and also carefully monitoring risk of adverse events. 
Sweden, with its combination of register infrastructure for population studies and prevailing community spread of the SARS-CoV-2 virus, constitutes a suitable setting for such further studies.", "In conclusion, we found a vaccine effectiveness of 86% in preventing infection 7 days or more after second dose of BNT16b2 mRNA vaccine, in adults of working age during a period of high circulation of SARS-CoV-2. The observed vaccine effectiveness was not satisfactory after a first dose only.", "Click here for additional data file." ]
[ "intro", "materials", null, null, null, null, "results", "discussion", "conclusions", "supplementary-material" ]
[ "SARS-CoV2", "COVID-19 vaccines", "COVID-19 testing", "vaccine effectiveness" ]
Introduction: There has been a very rapid development of vaccines against SARS-CoV-2 and mass vaccination campaigns have been launched worldwide [1,2]. To date, four different vaccines have been licenced in the European Union; BNT162b2 mRNA COVID-19 Vaccine (Pfizer-BioNTech), mRNA-1273 (Moderna Vaccine), ChAdOx1 nCoV-19 (AstraZeneca) and Ad26.COV2-S (Jansen). In Skåne, a county in Southern Sweden with approximately 1.4 million inhabitants, the vaccination campaign started on 27 December 2020. The first to be vaccinated were nursing home residents and their caregivers as well as frontline health care workers. The aim of this study was to evaluate vaccine effectiveness (VE) of the BNT16b2 mRNA (Pfizer-BioNTech) COVID-19 vaccine in preventing SARS-CoV-2 infection in people of working age. Materials and methods: Data sources This cohort study was based on registers kept for administrative purposes at the Skåne county council, Sweden. Data sources were the total population register used for individual-level data on residency and vital status, and health care registers used for individual-level data on vaccinations and positive COVID-19 test results. Linkage between the different data sources was facilitated using the personal identification number assigned to all Swedish citizens at birth or immigration. This cohort study was based on registers kept for administrative purposes at the Skåne county council, Sweden. Data sources were the total population register used for individual-level data on residency and vital status, and health care registers used for individual-level data on vaccinations and positive COVID-19 test results. Linkage between the different data sources was facilitated using the personal identification number assigned to all Swedish citizens at birth or immigration. Study cohort The study cohort included all persons aged 18–64 years residing in Skåne county, Sweden, on 27 December 2020 when vaccinations started. The cohort was followed until 28 February 2021. Data on vaccination including type of vaccine and dose were linked to the cohort, together with data on prior positive COVID-19 tests at any time point from March 2020 until 26 December 2020. Individuals who during follow up were vaccinated with other COVID-19 vaccines than BNT16b2 mRNA were excluded at baseline due to too small numbers to permit evaluation (1.0% of the population). Individuals moving out from the region during follow up were censored on the date of relocation. The study cohort included all persons aged 18–64 years residing in Skåne county, Sweden, on 27 December 2020 when vaccinations started. The cohort was followed until 28 February 2021. Data on vaccination including type of vaccine and dose were linked to the cohort, together with data on prior positive COVID-19 tests at any time point from March 2020 until 26 December 2020. Individuals who during follow up were vaccinated with other COVID-19 vaccines than BNT16b2 mRNA were excluded at baseline due to too small numbers to permit evaluation (1.0% of the population). Individuals moving out from the region during follow up were censored on the date of relocation. Outcomes The primary outcome was the first positive SARS-CoV2 test result received from 27 December 2020 to 28 February 2021, hereafter called SARS-CoV-2-infection. During the study period, the Regional Centre for Disease Control recommended individuals of >6 years old with symptoms of COVID-19 to get tested. 
Additionally, test recommendations were from 21 January 2021 given to persons living in the same household as a person with a confirmed infection, irrespective of own symptoms, five days after the index case. Sampling was performed mainly from nasopharynx and analysed by RT-PCR at the Regional Laboratory of Clinical Microbiology or through a combined sampling from pharynx, nose and saliva through RT-PCR at laboratories assigned by the Swedish Board of Health and Welfare: Dynamic Code AB, Linköping, Sweden and Eurofins LifeCodexx GmbH, Germany. Moreover, some patients and health care workers were tested using antigen tests (PANBIOTM, Abbot) from nasopharynx samples, within both primary and secondary care. Result from all diagnostic modalities and laboratories were available for the study. As secondary outcome, we used death with COVID-19, defined as death within 30 days of a positive test. The primary outcome was the first positive SARS-CoV2 test result received from 27 December 2020 to 28 February 2021, hereafter called SARS-CoV-2-infection. During the study period, the Regional Centre for Disease Control recommended individuals of >6 years old with symptoms of COVID-19 to get tested. Additionally, test recommendations were from 21 January 2021 given to persons living in the same household as a person with a confirmed infection, irrespective of own symptoms, five days after the index case. Sampling was performed mainly from nasopharynx and analysed by RT-PCR at the Regional Laboratory of Clinical Microbiology or through a combined sampling from pharynx, nose and saliva through RT-PCR at laboratories assigned by the Swedish Board of Health and Welfare: Dynamic Code AB, Linköping, Sweden and Eurofins LifeCodexx GmbH, Germany. Moreover, some patients and health care workers were tested using antigen tests (PANBIOTM, Abbot) from nasopharynx samples, within both primary and secondary care. Result from all diagnostic modalities and laboratories were available for the study. As secondary outcome, we used death with COVID-19, defined as death within 30 days of a positive test. Statistical analysis Statistical analyses were conducted in Stata SE 14.2 (Stata Corp.) and IBM SPSS Statistics 26 (SPSS Corp.). The number of the positive COVID-19 tests was calculated in relation to person-weeks of follow up, separately for unvaccinated and vaccinated follow up time, and stratified on prior COVID-19 positivity. Further stratifications were done according to (i) no dose or 0–13 days after the first dose, (ii) at least 14 days after the first dose but before second dose, (iii) 0–6 days after the date of the second dose, (iv) at least 7 days after the second dose. To account for variations in community spread during follow up, the counting of cases and person-weeks was done separately in four two-week periods (period 1: Dec 27–Jan 17, period 2: Jan 18–31, period 3: Feb 1–14 and period 4: Feb 15–28). We estimated the VE overall and stratified by sex among individuals with no prior positive test at baseline as (IRR–1)/IRR together with 95% confidence interval (CI), where IRR represents the incidence rate ratio contrasting unvaccinated with vaccinated person-time. Main VE results were reported for period 4 with the longest follow up, and we also present results for period 3 as comparison whereas only incidence rates are presented for the two prior periods where no effect can be assumed. As a further reference, we calculated effectiveness associated with a prior positive test at baseline. 
All statistical analyses were weighted to account for differences in sex and age distribution (five groups: 18–44, 45–49, 50–54, 55–59 and 60–64 years old) among vaccinated and unvaccinated. Statistical analyses were conducted in Stata SE 14.2 (Stata Corp.) and IBM SPSS Statistics 26 (SPSS Corp.). The number of the positive COVID-19 tests was calculated in relation to person-weeks of follow up, separately for unvaccinated and vaccinated follow up time, and stratified on prior COVID-19 positivity. Further stratifications were done according to (i) no dose or 0–13 days after the first dose, (ii) at least 14 days after the first dose but before second dose, (iii) 0–6 days after the date of the second dose, (iv) at least 7 days after the second dose. To account for variations in community spread during follow up, the counting of cases and person-weeks was done separately in four two-week periods (period 1: Dec 27–Jan 17, period 2: Jan 18–31, period 3: Feb 1–14 and period 4: Feb 15–28). We estimated the VE overall and stratified by sex among individuals with no prior positive test at baseline as (IRR–1)/IRR together with 95% confidence interval (CI), where IRR represents the incidence rate ratio contrasting unvaccinated with vaccinated person-time. Main VE results were reported for period 4 with the longest follow up, and we also present results for period 3 as comparison whereas only incidence rates are presented for the two prior periods where no effect can be assumed. As a further reference, we calculated effectiveness associated with a prior positive test at baseline. All statistical analyses were weighted to account for differences in sex and age distribution (five groups: 18–44, 45–49, 50–54, 55–59 and 60–64 years old) among vaccinated and unvaccinated. Data sources: This cohort study was based on registers kept for administrative purposes at the Skåne county council, Sweden. Data sources were the total population register used for individual-level data on residency and vital status, and health care registers used for individual-level data on vaccinations and positive COVID-19 test results. Linkage between the different data sources was facilitated using the personal identification number assigned to all Swedish citizens at birth or immigration. Study cohort: The study cohort included all persons aged 18–64 years residing in Skåne county, Sweden, on 27 December 2020 when vaccinations started. The cohort was followed until 28 February 2021. Data on vaccination including type of vaccine and dose were linked to the cohort, together with data on prior positive COVID-19 tests at any time point from March 2020 until 26 December 2020. Individuals who during follow up were vaccinated with other COVID-19 vaccines than BNT16b2 mRNA were excluded at baseline due to too small numbers to permit evaluation (1.0% of the population). Individuals moving out from the region during follow up were censored on the date of relocation. Outcomes: The primary outcome was the first positive SARS-CoV2 test result received from 27 December 2020 to 28 February 2021, hereafter called SARS-CoV-2-infection. During the study period, the Regional Centre for Disease Control recommended individuals of >6 years old with symptoms of COVID-19 to get tested. Additionally, test recommendations were from 21 January 2021 given to persons living in the same household as a person with a confirmed infection, irrespective of own symptoms, five days after the index case. 
Sampling was performed mainly from nasopharynx and analysed by RT-PCR at the Regional Laboratory of Clinical Microbiology or through a combined sampling from pharynx, nose and saliva through RT-PCR at laboratories assigned by the Swedish Board of Health and Welfare: Dynamic Code AB, Linköping, Sweden and Eurofins LifeCodexx GmbH, Germany. Moreover, some patients and health care workers were tested using antigen tests (PANBIOTM, Abbot) from nasopharynx samples, within both primary and secondary care. Result from all diagnostic modalities and laboratories were available for the study. As secondary outcome, we used death with COVID-19, defined as death within 30 days of a positive test. Statistical analysis: Statistical analyses were conducted in Stata SE 14.2 (Stata Corp.) and IBM SPSS Statistics 26 (SPSS Corp.). The number of the positive COVID-19 tests was calculated in relation to person-weeks of follow up, separately for unvaccinated and vaccinated follow up time, and stratified on prior COVID-19 positivity. Further stratifications were done according to (i) no dose or 0–13 days after the first dose, (ii) at least 14 days after the first dose but before second dose, (iii) 0–6 days after the date of the second dose, (iv) at least 7 days after the second dose. To account for variations in community spread during follow up, the counting of cases and person-weeks was done separately in four two-week periods (period 1: Dec 27–Jan 17, period 2: Jan 18–31, period 3: Feb 1–14 and period 4: Feb 15–28). We estimated the VE overall and stratified by sex among individuals with no prior positive test at baseline as (IRR–1)/IRR together with 95% confidence interval (CI), where IRR represents the incidence rate ratio contrasting unvaccinated with vaccinated person-time. Main VE results were reported for period 4 with the longest follow up, and we also present results for period 3 as comparison whereas only incidence rates are presented for the two prior periods where no effect can be assumed. As a further reference, we calculated effectiveness associated with a prior positive test at baseline. All statistical analyses were weighted to account for differences in sex and age distribution (five groups: 18–44, 45–49, 50–54, 55–59 and 60–64 years old) among vaccinated and unvaccinated. Results: The study cohort comprised 805,741 individuals on 27 December 2020, of whom 26,587 (3.3%) received at least one dose of the BNT16b2 mRNA vaccine before 28 February 2021 (Table 1). The incidence of SARS-CoV-2 infection in the study cohort peaked in mid-January 2021 with a weekly incidence of 880 cases per 100,000 individuals, and then decreased gradually to 250 cases per 100,000 individuals in the last week of February. The vaccinated cohort had a higher proportion of females (80% vs. 52%) and was older (median age 47 vs. 40 years) than the unvaccinated cohort. The median time between dose 1 and dose 2 was 28 days and 99% of the vaccinated had an interval of ≥21 days between the doses. There were 8 observed cases of SARS-CoV-2 infection 7 days or more after the second dose among subjects with no prior positive test during period 4 (Feb 15–28), whereas 56 cases were expected if they have had the same incidence rated as the unvaccinated during that period. Thus, the estimated VE in preventing infection 7 days or more after second dose among subjects with no prior positive test was 86% (95% CI 72–94%) during period 4 (Table 2 and Figure 1). 
Similar but more statistically uncertain VE (93%; 95% CI 59–100%) was observed in period 3 (Figure 1 and Supplementary Table S1). VE was similar among females and males, but more statistically uncertain among males due to fewer vaccinated (Supplementary Table S2). No deaths within 30 days of a positive test were observed among the vaccinated (Supplementary Table S3). The corresponding case fatality risk was 0.1% among the unvaccinated (27 deaths among 25,970 cases that were followed 30 days). Having a prior positive test was associated with 91% (95% CI 85–94%) effectiveness against new infection among the unvaccinated during period 4 (Table 2). This protective effect was similarly high during period 3 (Supplementary Table S1), and still high when restricting the analysis to individuals with a prior positive test more than three months before baseline (83%, 95% CI 51–97%; not in tables). Effectiveness with 95% confidence interval (CI) of the BNT16b2 mRNA (Pfizer-BioNTech) vaccine in preventing SARS-CoV-2 infection during period 3 (1–14 February 2021) and 4 (15–28 February 2021). Baseline characteristics of the study cohort on 27 December 2020 when COVID-19 vaccination started, stratified by vaccination status on 28 February 2021. Effectiveness of the BNT16b2 mRNA (Pfizer-BioNTech) vaccine in preventing SARS-CoV-2 infection during period 4 (15–28 February 2021). aCases per 100,000 persons and week (95% confidence interval). Results from statistical analysis were weighted with respect to sex and age distribution of the vaccinated cohort. bReference category in the calculation of vaccine effectiveness. cThe number of vaccinated with prior positive test was too few to permit evaluation of vaccine effectiveness. Discussion: The most salient finding was the satisfactory VE in preventing SARS-CoV-2 infection seven days or more after the second dose of the BNT16b2 mRNA vaccine, observed in a working age population. A major strength of the study was the rapid evaluation of vaccine effectiveness in a real-world Scandinavian setting with substantial and prevailing community spread of the virus. The circulation of SARS-CoV-2 in the region was among the highest in Europe during the study period with incidence rates between 300 and 900 new cases per week and 100,000 population, why the vaccinated cohort most likely had considerable exposure to the virus during follow up. A limitation was the short follow up time, and the current lack of data to evaluate effects on disease severity and hospitalisations and effects of specific virus variants. Surveillance data compiled by the Public Health Agency of Sweden suggest that 32–50% of the positive tests were of the B.1.1.7 variant in the study region during the last follow up period [3]. We also lacked data on disease history and co-existing conditions in the study population, preventing a detailed stratification (or matching) of vaccinated and unvaccinated beyond sex, age and follow up period. The main reason for vaccination in the study cohort was working in the health care sector with frontline personnel given highest priority, but individuals aged up to 64 years who were vaccinated due to their residence in special homes were also included. As we could not account for differences in health related to occupational status and residence across cohorts, we decided not to evaluate effects on all-cause mortality. 
However, we observed no deaths related to COVID-19 among the vaccinated, but it should be noted that the case fatality risk was low also among unvaccinated in this working age cohort. The median interval between doses was in this study 28 days and thus a bit longer than the 21 days in the clinical trials and to what was recommended. This was most likely due to a relative shortage of vaccine. Since only persons without a previously verified SARS-CoV-2 infection was included in the cohort when estimating VE we minimised the risk of detecting persistent RNA shedding. However, we cannot exclude the possibility that a person with a previously asymptomatic infection with continued RNA shedding gets tested again and thus adjudicated as a new case. As a final limitation, we may have some bias in the estimated VE due to unknown prior infections, especially as COVID-19 testing was limited in this population during the spring 2020. Several reports on VE of the BNT162b2 mRNA vaccine have emerged since the launch of large vaccination campaigns in many parts of the world, Although we estimated the VE after 14 days after the first dose, we also studied the effect 0–6 days after the second dose with a comparably low estimated VE (60%) where the effect is probably still due to the first dose. A cohort study in health care workers in UK demonstrated a VE against SARS-CoV-2 infection after first dose that was higher than in our study (72% after 21 days), whereas they found similar VE as we after second dose (86% after 7 days) [4]. Other studies have also reported higher VE after the first dose [5,6], and reduced risk of severe COVID-19 that required hospitalisation [7]. However, a cohort study from Israel with detailed matching on demographic and clinical characteristics in a diverse population showed similar evolvement of VE after first and second dose as in our study when evaluated against symptomatic infection (57% 14–20 days after first dose and 94% 7 days after second dose [8]. The suggested high protection (91%–94% depending on level of community spread) by a previous infection in our study is in line with recently published studies. A study from Denmark suggested an overall protection against reinfection of 81% during the second surge of the COVID-19 epidemic, but with markedly diminishing protection of individuals ≥65 years old [9]. Among health care workers in UK the estimated protection of a previous infection was 94% against a probable or possible symptomatic infection and 83% against all probable and possible infections (our calculations based on reported odds ratios) [10]. As our results suggest that vaccine effectiveness may not be satisfactory until seven days after the second dose, it is prudent to inform the public about the importance of maintaining social distancing and complying with other recommendations until full vaccine effect can be expected. Compliance with recommendations is likely to be especially important in regions where the exposure to the virus is still considerable. Another aspect of the present findings, especially when making priorities in the vaccination programs for the general population, is the strong protective effect associated with documented prior infection. It is important to continue to monitor VE for longer periods and to compare VE of different vaccines, and also carefully monitoring risk of adverse events. 
Sweden, with its combination of register infrastructure for population studies and prevailing community spread of the SARS-CoV-2 virus, constitutes a suitable setting for such further studies. Conclusion: In conclusion, we found a vaccine effectiveness of 86% in preventing infection 7 days or more after the second dose of the BNT162b2 mRNA vaccine, in adults of working age during a period of high circulation of SARS-CoV-2. The observed vaccine effectiveness was not satisfactory after a first dose only. Supplementary Material: additional data file.
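The VE figures quoted above are, at their core, one minus an incidence rate ratio (VE = 1 − IRR), with the confidence interval formed on the log scale. The sketch below illustrates only that arithmetic; it does not reproduce the study's sex- and age-weighted analysis, and the case counts and person-time are placeholders rather than data from the paper.

# Minimal sketch: VE and 95% CI from case counts and person-time via an
# unweighted incidence rate ratio (the study's own analysis was weighted
# by sex and age; the numbers below are placeholders, not study data).
import math

def ve_from_rates(cases_vax, pt_vax, cases_unvax, pt_unvax, z=1.96):
    """Return VE and its 95% CI based on the log of the incidence rate ratio."""
    irr = (cases_vax / pt_vax) / (cases_unvax / pt_unvax)
    se_log_irr = math.sqrt(1 / cases_vax + 1 / cases_unvax)  # Poisson approximation
    irr_lo = math.exp(math.log(irr) - z * se_log_irr)
    irr_hi = math.exp(math.log(irr) + z * se_log_irr)
    # VE = 1 - IRR, so the CI bounds flip when converting from IRR to VE
    return 1 - irr, (1 - irr_hi, 1 - irr_lo)

# Placeholder example: 5 cases over 20,000 person-weeks among the vaccinated
# versus 700 cases over 400,000 person-weeks among the unvaccinated.
ve, (ci_low, ci_high) = ve_from_rates(5, 20_000, 700, 400_000)
print(f"VE = {ve:.0%} (95% CI {ci_low:.0%}-{ci_high:.0%})")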
Background: Vaccine effectiveness against COVID-19 needs to be assessed in diverse real-world population settings. Methods: A cohort study of 805,741 residents in Skåne county, Southern Sweden, aged 18-64 years, of whom 26,587 received at least one dose of the BNT162b2 vaccine. Incidence rates of COVID-19 were estimated in sex- and age-adjusted analysis and stratified in two-week periods with substantial community spread of the disease. Results: The estimated vaccine effectiveness in preventing infection ≥7 days after second dose was 86% (95% CI 72-94%) but only 42% (95% CI 14-63%) ≥14 days after a single dose. No difference in vaccine effectiveness was observed between females and males. Having a prior positive test was associated with 91% (95% CI 85-94%) effectiveness against new infection among the unvaccinated. Conclusions: A satisfactory effectiveness of BNT162b2 after the second dose was suggested, but with possibly substantially lower effect before the second dose.
Introduction: There has been a very rapid development of vaccines against SARS-CoV-2 and mass vaccination campaigns have been launched worldwide [1,2]. To date, four different vaccines have been licensed in the European Union: BNT162b2 mRNA COVID-19 Vaccine (Pfizer-BioNTech), mRNA-1273 (Moderna), ChAdOx1 nCoV-19 (AstraZeneca) and Ad26.COV2-S (Janssen). In Skåne, a county in Southern Sweden with approximately 1.4 million inhabitants, the vaccination campaign started on 27 December 2020. The first to be vaccinated were nursing home residents and their caregivers as well as frontline health care workers. The aim of this study was to evaluate the vaccine effectiveness (VE) of the BNT162b2 mRNA (Pfizer-BioNTech) COVID-19 vaccine in preventing SARS-CoV-2 infection in people of working age. Conclusion: In conclusion, we found a vaccine effectiveness of 86% in preventing infection 7 days or more after the second dose of the BNT162b2 mRNA vaccine, in adults of working age during a period of high circulation of SARS-CoV-2. The observed vaccine effectiveness was not satisfactory after a first dose only.
Background: Vaccine effectiveness against COVID-19 needs to be assessed in diverse real-world population settings. Methods: A cohort study of 805,741 residents in Skåne county, Southern Sweden, aged 18-64 years, of whom 26,587 received at least one dose of the BNT162b2 vaccine. Incidence rates of COVID-19 were estimated in sex- and age-adjusted analysis and stratified in two-week periods with substantial community spread of the disease. Results: The estimated vaccine effectiveness in preventing infection ≥7 days after second dose was 86% (95% CI 72-94%) but only 42% (95% CI 14-63%) ≥14 days after a single dose. No difference in vaccine effectiveness was observed between females and males. Having a prior positive test was associated with 91% (95% CI 85-94%) effectiveness against new infection among the unvaccinated. Conclusions: A satisfactory effectiveness of BNT162b2 after the second dose was suggested, but with possibly substantially lower effect before the second dose.
4,016
200
[ 79, 120, 218, 319 ]
10
[ "dose", "days", "period", "study", "19", "covid", "covid 19", "positive", "cohort", "test" ]
[ "bnt162b2 mrna vaccine", "covid 19 vaccines", "vaccines sars cov", "vaccination study cohort", "vaccinations started cohort" ]
null
[CONTENT] SARS-CoV2 | COVID-19 vaccines | COVID-19 testing | vaccine effectiveness [SUMMARY]
null
[CONTENT] SARS-CoV2 | COVID-19 vaccines | COVID-19 testing | vaccine effectiveness [SUMMARY]
[CONTENT] SARS-CoV2 | COVID-19 vaccines | COVID-19 testing | vaccine effectiveness [SUMMARY]
[CONTENT] SARS-CoV2 | COVID-19 vaccines | COVID-19 testing | vaccine effectiveness [SUMMARY]
[CONTENT] SARS-CoV2 | COVID-19 vaccines | COVID-19 testing | vaccine effectiveness [SUMMARY]
[CONTENT] BNT162 Vaccine | COVID-19 | COVID-19 Vaccines | Cohort Studies | Female | Humans | Male | SARS-CoV-2 | Sweden | Vaccine Efficacy | Vaccines [SUMMARY]
null
[CONTENT] BNT162 Vaccine | COVID-19 | COVID-19 Vaccines | Cohort Studies | Female | Humans | Male | SARS-CoV-2 | Sweden | Vaccine Efficacy | Vaccines [SUMMARY]
[CONTENT] BNT162 Vaccine | COVID-19 | COVID-19 Vaccines | Cohort Studies | Female | Humans | Male | SARS-CoV-2 | Sweden | Vaccine Efficacy | Vaccines [SUMMARY]
[CONTENT] BNT162 Vaccine | COVID-19 | COVID-19 Vaccines | Cohort Studies | Female | Humans | Male | SARS-CoV-2 | Sweden | Vaccine Efficacy | Vaccines [SUMMARY]
[CONTENT] BNT162 Vaccine | COVID-19 | COVID-19 Vaccines | Cohort Studies | Female | Humans | Male | SARS-CoV-2 | Sweden | Vaccine Efficacy | Vaccines [SUMMARY]
[CONTENT] bnt162b2 mrna vaccine | covid 19 vaccines | vaccines sars cov | vaccination study cohort | vaccinations started cohort [SUMMARY]
null
[CONTENT] bnt162b2 mrna vaccine | covid 19 vaccines | vaccines sars cov | vaccination study cohort | vaccinations started cohort [SUMMARY]
[CONTENT] bnt162b2 mrna vaccine | covid 19 vaccines | vaccines sars cov | vaccination study cohort | vaccinations started cohort [SUMMARY]
[CONTENT] bnt162b2 mrna vaccine | covid 19 vaccines | vaccines sars cov | vaccination study cohort | vaccinations started cohort [SUMMARY]
[CONTENT] bnt162b2 mrna vaccine | covid 19 vaccines | vaccines sars cov | vaccination study cohort | vaccinations started cohort [SUMMARY]
[CONTENT] dose | days | period | study | 19 | covid | covid 19 | positive | cohort | test [SUMMARY]
null
[CONTENT] dose | days | period | study | 19 | covid | covid 19 | positive | cohort | test [SUMMARY]
[CONTENT] dose | days | period | study | 19 | covid | covid 19 | positive | cohort | test [SUMMARY]
[CONTENT] dose | days | period | study | 19 | covid | covid 19 | positive | cohort | test [SUMMARY]
[CONTENT] dose | days | period | study | 19 | covid | covid 19 | positive | cohort | test [SUMMARY]
[CONTENT] vaccine | 19 vaccine | covid 19 vaccine | biontech | pfizer biontech | pfizer | mrna | 19 | vaccines | vaccination [SUMMARY]
null
[CONTENT] table | 95 | period | 95 ci | supplementary | supplementary table | positive test | february | 2021 | ci [SUMMARY]
[CONTENT] vaccine | vaccine effectiveness | effectiveness | dose | adults | vaccine adults | vaccine adults working | vaccine adults working age | vaccine effectiveness 86 | vaccine effectiveness 86 preventing [SUMMARY]
[CONTENT] data | dose | vaccine | days | period | cohort | 19 | test | follow | study [SUMMARY]
[CONTENT] data | dose | vaccine | days | period | cohort | 19 | test | follow | study [SUMMARY]
[CONTENT] Vaccine | COVID-19 [SUMMARY]
null
[CONTENT] days | second | 86% | 95% | CI | 72-94% | only 42% | 95% | CI | 14-63% | days ||| ||| 91% | 95% | CI | 85-94% [SUMMARY]
[CONTENT] BNT162b2 | second | second [SUMMARY]
[CONTENT] Vaccine | COVID-19 ||| 805,741 | Skåne county | Southern Sweden | 18-64 years | 26,587 | at least one ||| COVID-19 | two-week ||| ||| days | second | 86% | 95% | CI | 72-94% | only 42% | 95% | CI | 14-63% | days ||| ||| 91% | 95% | CI | 85-94% ||| BNT162b2 | second | second [SUMMARY]
[CONTENT] Vaccine | COVID-19 ||| 805,741 | Skåne county | Southern Sweden | 18-64 years | 26,587 | at least one ||| COVID-19 | two-week ||| ||| days | second | 86% | 95% | CI | 72-94% | only 42% | 95% | CI | 14-63% | days ||| ||| 91% | 95% | CI | 85-94% ||| BNT162b2 | second | second [SUMMARY]
Bacterial agents of the discharging middle ear among children seen at the University of Nigeria Teaching Hospital, Enugu.
28491218
Discharging middle ear continues to be one of the commonest problems seen in the developing world. There is an ever-growing need to carry out studies periodically to determine the common bacterial agents responsible for discharging otitis media and their antibiotic sensitivity, especially in settings characterized by minimal laboratory services. The study sought to determine the common bacterial agents causing discharging middle ear among children presenting at the University of Nigeria Teaching Hospital, Enugu, and their sensitivity to the commonly available antibiotics.
INTRODUCTION
Middle ear swabs were collected from 100 children aged 1 month to 17 years at the Children Out-Patient and Otorhinolaryngology Clinics of the University of Nigeria Teaching Hospital, Enugu, Nigeria. The specimens were cultured for aerobic bacterial organisms and their sensitivity determined.
METHODS
Among those with acute discharge, Staphylococcus aureus was isolated in 31.3% and Proteus species in 25.0%. In chronically discharging ears, Proteus species dominated (39.1%), followed by Staphylococcus aureus (28.3%).
RESULTS
Staphylococcus aureus and Proteus species were the commonest bacterial agents in acute and chronic otitis media, respectively. Most isolates showed high sensitivity to the fluoroquinolone antibiotics.
CONCLUSION
[ "Acute Disease", "Adolescent", "Anti-Bacterial Agents", "Bacteria, Aerobic", "Bacterial Infections", "Child", "Child, Preschool", "Chronic Disease", "Female", "Hospitals, Teaching", "Humans", "Infant", "Male", "Microbial Sensitivity Tests", "Nigeria", "Otitis Media, Suppurative", "Pilot Projects" ]
5410015
Introduction
Otitis media is one of the most common infectious diseases of childhood worldwide [1]. It is the inflammation of the mucous membrane of the middle ear cleft which includes the middle ear cavity, mastoid antrum, the mastoid air cells and the Eustachian tube [2]. When the inflammation is associated with a discharge from the ear through a perforation in the tympanic membrane or via a ventilating tube, suppurative (or discharging) otitis media results. Otitis media may be acute (less than 6 weeks) or chronic (at least 6 weeks) [1]. Discharging ear or otorrhea is drainage exiting the ear which may be serous, serosanguineous, or purulent [3]. Bacteria have remained the most important aetiological agents in suppurative or discharging otitis media. While non-typable Haemophilus influenzae, Streptococcus pneumoniae and Moraxella catarrhalis are commonly reported as aetiological agents in acutely discharging ears in the developed countries [4], local studies done in Nigeria suggest that Moraxella catarrhalis is not a predominant organism in acutely discharging otitis media in Nigerian children [5]. Pseudomonas aeruginosa is commonly implicated in chronic discharging otitis media in Nigerian children [5, 6]. Complications of discharging otitis media are numerous and include hearing impairment, mastoiditis, facial nerve paralysis, cholesteatoma, tympanosclerosis, bacterial meningitis and brain abscess, to mention a few. Treatment of otitis media is usually based on empiric knowledge of aetiologic organisms and their sensitivity pattern. There is emerging evidence of multi-drug resistance of bacterial isolates with reduction in antibiotic efficacy [7, 8]. Many paediatricians and general practitioners base their treatment of otitis media on empiric evidence of the aetiologic agents and their sensitivities to various antimicrobial agents. In view of these, it is important to document the trend in Enugu, to aid in appropriate treatment of this condition and help prevent its complications which may arise if otitis media is not treated or is improperly treated. This paper, therefore, aims to present the aerobic bacteriological agents (and their antibiotic susceptibilities) implicated in discharging ears of children in Enugu.
Methods
Study site: The study was conducted at the Children Out-Patient and Otorhinolaryngology Clinics of the University of Nigeria Teaching Hospital, Enugu, Nigeria. Sampling method: The convenience sampling technique was used: each consecutive patient seen at the Paediatric and Otorhinolaryngology clinics with ear discharge, with or without other symptoms of otitis media, was recruited. Data were collected between July and September 2007. Sample size calculation: The sample size was calculated using the formula N = Z²p(100 − p)/D², where N = minimum sample size, Z = the standard normal value for a 95% confidence level (1.96), P = prevalence with reference to a previous study (6%) and D = standard error (5%). Substituting these values gives a minimum sample size of 87 participants; adding an attrition rate of 10% brings the minimum sample size to 96 participants, rounded up to 100 participants. Subject recruitment: The participants were 100 children aged 0 to 17 years who presented with a discharging middle ear. Children with foreign bodies in, or infection of, the external auditory meatus were excluded, as were those who had taken antibiotics within the two weeks preceding their presentation. Data collection: A structured questionnaire designed for this study was used to record information on participants by two of the researchers. A pilot study to test the information collection tool was conducted on 35 patients who met the inclusion criteria. Information collected included the child's name, age, sex, place of abode, parents' educational level and occupation, presenting symptoms of the patient, duration of ear discharge and immunization status of the patient. After otoscopic examination to document the presence of a perforated tympanic membrane and pus in the middle ear, the external ears were cleaned with normal saline and 70% alcohol solution and allowed to dry for 1-2 minutes. After this, an ear swab from the discharging ear was taken with the aid of sterile cotton-tipped applicators, taking care not to touch the skin of the external auditory meatus to limit contamination of the specimen. In patients with bilateral discharging otitis media, samples were collected from only one ear, as preferred by the patient or mother. The swabs were then quickly transported to the microbiology laboratory of the Eastern Nigeria Medical Center Enugu, a specialist center located very close to the University of Nigeria Teaching Hospital Enugu. Sample processing and analysis: Samples of the ear discharge were promptly plated onto freshly prepared chocolate, blood and cystine-lactose-electrolyte-deficient (CLED) agar. The blood and cystine-lactose-electrolyte-deficient (CLED) agar plates were incubated aerobically at 37°C for 24 hours while the chocolate agar plates were incubated microaerobically using the candle jar extinction technique [9]. After the 24 hour incubation period, the plates were examined for growth and individual colonies were further analysed for their physical characteristics such as morphology, colour (pigmentation), odour and, in the case of blood agar, haemolysis. No anaerobic cultures were carried out and mycobacteria were not searched for because of unavailability of the required laboratory materials. A Gram stain was carried out on all cultured isolates. Thereafter, characterization to species level was carried out using standard bacteriological methods [10]. Antibiotic susceptibility tests with multi-discs were carried out for various drugs using the disc diffusion method [10].
Choice of the antibiotics to be tested was based primarily on knowledge of the commonly available antibiotic discs. Zones of inhibition for each antibiotic, if formed, were then measured to the nearest millimeter and documented. Interpretation of these results in terms of resistance or susceptibility was according to accepted protocol [10]. Ethical approval: Ethical clearance for this study was obtained from the ethical committee of the University of Nigeria Teaching Hospital Enugu before the study was commenced. Informed written consent was obtained from the parents and guardians of the participants, and assent was obtained from children where appropriate. Data analysis: Data were entered into a computer database and were analyzed using SPSS version 11.0. Chi-square tests (χ²) were used to test for significance, and a probability value (p-value) of less than 0.05 was taken as statistically significant.
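As a quick check of the sample size calculation described above, the formula N = Z²p(100 − p)/D² with the stated inputs reproduces the reported figures. The short sketch below simply re-runs that arithmetic; it is an illustration of the calculation as described in the methods, not part of the study's own analysis.

# Worked check of the sample-size formula from the Methods:
# N = Z^2 * p * (100 - p) / D^2, with Z = 1.96, p = 6 (%) and D = 5 (%),
# followed by the 10% attrition allowance described in the text.
import math

z, p, d = 1.96, 6.0, 5.0
n_minimum = math.ceil(z**2 * p * (100 - p) / d**2)   # 86.7 -> 87 participants
n_with_attrition = math.ceil(n_minimum * 1.10)       # 95.7 -> 96 participants
print(n_minimum, n_with_attrition)                   # the authors then rounded 96 up to 100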
Results
The subjects were aged 1 month to 17 years (median 3 years). The male:female ratio was 1:0.9. The majority of the participants (70%) were aged 1 month to 5 years while 30 (30%) were between 6 and 17 years. Fifteen (15.0%) participants were from the upper socio-economic class, while 32 (32.0%) and 53 (53.0%) participants were from the middle and lower socio-economic classes respectively. Ninety-one (91.0%) ear swab samples grew isolates on culture while 9 (9.0%) were sterile. Out of these 91 samples with positive bacterial yield, 3 (3.3%) samples yielded multiple isolates while the remaining 88 (96.7%) yielded single isolates on culture. In all, 94 isolates were obtained from the bacterial cultures. Forty (42.6%) were gram positive, whereas 54 (57.4%) were gram negative. Among the isolates from patients with acute discharging otitis media, gram positives constituted 43.8% while gram negatives formed 56.3% of them. The leading organisms in acutely discharging ears were Staphylococcus aureus 15 (31.3%), followed by Proteus species 12 (25.0%) and then Pseudomonas aeruginosa 11 (22.9%) (Table 1). Frequency distribution of cultured isolates in participants with acute and chronic discharging otitis media and their p-values Among the 49 cases of chronically discharging otitis media, gram positive and gram negative organisms accounted for 41.3% and 58.7% of them respectively. The commonest causative isolated agent in these chronic cases was Proteus species 18 (39.1%), followed by Staphylococcus aureus 13 (28.3%). Others were Pseudomonas aeruginosa 6 (13.0%), Streptococcus pneumoniae 5 (10.9%), Klebsiella species 3 (6.5%) and non-haemolytic Streptococcus 1 (2.2%) (Table 1). No significant difference was found among the isolates cultured in acute and chronic discharging ears (Table 1). As otitis media in children is documented to be commonest in children less than 5 years [8], the pattern of bacterial isolates in those less than 5 years of age and in those at least 5 years of age was sought, as depicted in Table 2. No relationship of statistical significance was observed when chi-square (with Yates correction for continuity) was calculated for the different organisms. Staphylococcus aureus and Streptococcus pneumoniae showed 95.7% and 100% sensitivity respectively to ofloxacin, and 75.0% and 100.0% sensitivity respectively to gentamycin. Cloxacillin and the amoxicillin-clavulanate combination were also seen to show good in vitro activity against Staphylococcus aureus. However, Staphylococcus aureus and Streptococcus pneumoniae were poorly sensitive to some other commonly used antibiotics like cefuroxime, co-trimoxazole and amoxicillin (Table 3). Proteus species and Pseudomonas aeruginosa showed 95.5% and 100.0% sensitivity respectively to ofloxacin; on the other extreme, they were both 100.0% resistant to cloxacillin. However, gentamycin and, to a lesser extent, chloramphenicol had good antibacterial activity against them. While Proteus was 100.0% sensitive to ceftriaxone, Pseudomonas aeruginosa was only 66.7% sensitive to it (Table 3). In general, all the major isolates showed excellent in vitro sensitivity to ofloxacin and ciprofloxacin. Pattern of bacterial isolates in different age groups and their probability values Antibiotic sensitivity and resistance of principal isolates to some common antibiotics
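The acute-versus-chronic comparison summarised in Table 1 is a chi-square test on a contingency table of isolate counts. The sketch below illustrates the test using only the counts quoted in the text above; the published table contains additional organisms, so this is an illustration of the method rather than a re-analysis of the full data.

# Chi-square comparison of isolate distributions in acute vs chronic ears,
# using only the counts quoted in the Results text (Table 1 has more
# categories, so this is illustrative only).
from scipy.stats import chi2_contingency

observed = [
    # [acute, chronic]
    [15, 13],  # Staphylococcus aureus
    [12, 18],  # Proteus species
    [11, 6],   # Pseudomonas aeruginosa
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
# The p-value here comes out well above 0.05, consistent with the paper's
# report of no significant difference between acute and chronic isolate patterns.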
Conclusion
Staphylococcus aureus and Proteus species were the most common bacterial agents in acute and chronic otitis media respectively. Fluoroquinolones were found to be effective in their treatment. What is known about this topic: Bacterial agents are known causes of both acute and chronic otitis media. Antibiotic therapy is key in the management of otitis media. In resource-poor countries like Nigeria, empirical therapy is not uncommon, hence the need to identify the implicated agents and the antibiotics to which they are susceptible. What this study adds: This study shows that the leading organisms in both acutely and chronically discharging otitis media among Nigerian children were Staphylococcus aureus and Proteus species; these organisms were most sensitive to the fluoroquinolone group of antibiotics, which should now be the first-line antibiotic management of otitis media among Nigerian children.
[ "What is known about this topic", "What this study adds" ]
[ "Bacterial agents are known causes of both acute and chronic otitis media. Antibiotic therapy is key in the management of otitis media. In resource poor countries like Nigeria, empirical therapy is not uncommon, hence the need for implicating agents and susceptible antibiotics.", "This study shows that the leading organisms in both acutely and chronically discharging otitis media among Nigerian children were Staphylococcal aureus and Proteus species, these organisms were more sensitive to fluoroquinolones group of antibiotics and should now be the first-line antibiotic management of otitis media among Nigerian children." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds" ]
[ "Otitis media is one of the most common infectious diseases of childhood worldwide [1]. It is the inflammation of the mucous membrane of the middle ear cleft which includes the middle ear cavity, mastoid antrum, the mastoid air cells and the Eustachian tube [2]. When the inflammation is associated with a discharge from the ear through a perforation in the tympanic membrane or via a ventilatinig tube, suppurative (or discharging) otitis media results. Otitis media may be acute (less than 6 weeks) or chronic (at least 6 weeks) [1]. Discharging ear or otorrhea is drainage exiting the ear which may be serous, serosanguineous, or purulent [3]. Bacteria have remained the most important aetiological agents in suppurative or discharging otitis media. While non-typable Haemophilus influenzae, Streptooccus pneumonia and Moraxella catarrhalis are commonly reported as aetiological agents in acutely discharging ears in the developed countries [4], local studies done in Nigeria suggest that Moraxella catarrhalis is not a predominant organism in acutely discharging otitis media in Nigerian children [5]. Pseudomonas aeruginosa is commonly implicated in chronic discharging otitis media in Nigerian children [5, 6]. Complications of discharging otitis media are numerous and include hearing impairment, mastoiditis, facial nerve paralysis, cholesteatoma, tympanosclerosis, bacterial meningitis and brain abscess, to mention a few. Treatment of otitis media is usually based on empiric knowledge of aetiologic organisms and their sensitivity pattern. There is emerging evidence of multi-drug resistance of bacterial isolates with reduction in antibiotic efficacy [7, 8]. Many paediatricians and general practitioners base their treatment of otitis media on empiric evidence of the aetiologic agents and their sensitivities to various antimicrobial agents. In view of these, it is important to document the trend in Enugu, to aid in appropriate treatment of this condition and help prevent its complications which may arise if otitis media is not treated or is improperly treated. This paper therefore, aims to present the aerobic bacteriological agents (and their antibiotic susceptibilities) implicated in discharging ears of children in Enugu.", "Study site: Study was conducted at the Children Out-Patient and Otorhinolaryngology Clinics of the University of Nigeria Teaching Hospital, Enugu, Nigeria.\nSampling method: The convenience sampling technique was used: each consecutive patient seen at the Paediatric and Otorhinolaryngology clinics with ear discharge with or without other symptoms of otitis media was recruited. Data were collected between July and September 2007.\nSample size calculation: The sample size was calculated using the formula: N= Z2 p(100-p)/D2. Where N = minimum sample size, Z = confidence interval (1.96), P = prevalence with reference to a previous study (6%), D = standard error (5%). Substitutions in the above formula give a minimum sample size of 87 participants. Adding an attrition rate of 10% will bring the minimum sample size to 96 participants, rounded up to 100 participants\nSubject recruitment: The participants were 100 children aged 0 to 17 years who presented who presented with discharging middle ear. 
Children with foreign bodies in or infection of the external auditory meatus were excluded; as well as those who had taken antibiotics within the preceding two weeks to their presentation.\nData collection: A structured questionnaire designed for this study was used to record information on participants by two of the researchers. A pilot study to test information collection tool was conducted on 35 patients who qualified for the inclusion criteria. Information collected included the child's name, age, sex, place of abode, parents' educational level and occupation, presenting symptoms of the patient, duration of ear discharge and immunization status of the patient. After otoscopic examination to document presence of perforated tympanic membrane and pus in the middle ear, the external ears were cleaned with normal saline and 70% alcohol solution and allowed to dry for 1-2 minutes. After this, an ear swab from the discharging ear was taken with the aid of sterile cotton-tipped applicators taking care not to touch the skin of the external auditory meatus to limit contamination of the specimen. In patients with bilateral discharging otitis media, samples were collected from only one ear which was preferred by the patient or mother. The swabs were then quickly transported to the microbiology laboratory of the Eastern Nigeria Medical Center Enugu, a nearby specialist center located very close to the University of Nigeria Teaching Hospital Enugu.\nSample processing and analysis: Samples of the ear discharge were promptly plated onto freshly prepared chocolate, blood and cystine-lactose-electrolyte-deficient (CLED) agar. The blood and cystine-lactose-electrolyte-deficient (CLED) agar plates were incubated aerobically at 37°C for 24 hours while the chocolate agar plates were incubated microaerobically using the candle jar extinction technique [9]. After the 24 hour incubation period, the plates were examined for growth and individual colonies were further analysed for their physical characteristics such as morphology, colour (pigmentation), odour and in case of blood agar, haemolysis. No anaerobic cultures were carried out and mycobacteria were not searched for because of unavailability of the required laboratory materials. A Gram stain was carried out on all cultured isolates. Thereafter, characterization to species level was carried out using standard bacteriological methods [10]. Antibiotic susceptibility tests with multi-discs were carried out for various drugs using the disc diffusion method [10]. Choice of the antibiotics to be tested for was based primarily on knowledge of commonly available antibiotic discs. Zones of inhibition for each antibiotic, if formed were then measured to the nearest millimeter and documented. Interpretation of these results in terms of resistance or susceptibility was according to accepted protocol [10].\nEthical approval: Ethical Clearance for this study was obtained from the ethical committee of the University of Nigeria Teaching Hospital Enugu before the study was commenced. Informed written consent was obtained from parents and guardian of the participants and assent was obtained from children where appropriate.\nData analysis: Data were entered into a computer database and were analyzed using SPSS version 11.0. Chi-square tests (χ2) were used to test for significance; and probability value (p-value) of less than 0.05 was taken as being statistically significant.", "The subject were aged 1 month to 17years (median 3 years). The male: female ratio was 1:0.9. 
Majority of the participants (70%) were aged 1month to 5years while 30 (30%) were between 6 and 17years. Fifteen (15.0%) participants were from the upper socio-economic class, while 32 (32.0%) and 53 (53.0%) participants were from the middle and lower socio-economic class respectively. Ninety-one (91.0%) ear swab samples grew isolates on culture while 9(9.0%) were sterile. Out of these 91 samples with positive bacterial yield, 3(3.3%) samples yielded multiple isolates while the rest 88(96.7%) yielded single isolates on culture. On the whole, 94 isolates were got from the bacterial cultures. Forty (42.6%) were gram positive; whereas 54(57.4%) were gram negative. Among the isolates from patients with acute discharging otitis media, gram positives constituted 43.8% while gram negatives formed 56.3% of them. The leading organisms in acutely discharging ears were: staphylococcus aureus15(31.3%) followed by Proteus species 12(25.0%), and then by Pseudomonas aeruginosa11(22.9%) (Table 1).\nFrequency distribution of cultured isolates in participants with acute and chronic discharging otitis media and their p-values\nAmong the 49 cases of chronically discharging otitis media, gram positive and gram negative organisms accounted for 41.3% and 58.7% of them respectively. The commonest causative isolated agent in these chronic cases was Proteus species 18 (39.1%), followed by Staphylococcus aureus13(28.3%). Others were Pseudomonas aeruginosa6(13.0%), Streptococcus pneumoniae5(10.9%), Klebsiella species 3(6.5%) and Non haemolytic Streptococcus 1(2.2%) (Table 1). No significant difference was found among the isolates cultured in acute and chronic discharging ears (Table 1). As otitis media in children is documented to be commonest in children less than 5 years [8], the pattern of bacterial isolates in those less than 5 years of age and in those at least 5 years of age was sought as is depicted in Table 2. No relationship of statistical significance was observed when chi-square (with Yates correction for continuity) was calculated for the different organisms. Staphylococcus aureusand Streptococcus pneumoniae each showed 95.7% and 100% sensitivity respectively to ofloxacin; and 75.0% and 100.0% sensitivity respectively to gentamycin. Cloxacillin, and amoxicillin-clavulanate combination were also seen to show good in vitro activity against Staphylococcus aureus. However, Staphylococcus aureusand Streptococcus pneumonia were poorly sensitive to some other commonly used antibiotics like cefuroxime, co-trimoxazole and amoxicillin (Table 3). Proteus species and Pseudomonas aeruginosa showed 95.5% and 100.0% sensitivity respectively to ofloxacin; and on the other extreme, they were both 100.0% resistant to cloxacillin. However, gentamycin and to a lesser extent, chloramphenicol had good antibacterial activity against them. While Proteus was 100.0% sensitive to ceftriaxone, Pseudomonas aeruginosawas only 66.7% sensitive to it (Table 3). In general, all the major isolates showed excellent in vitrosensitivity to ofloxacin, and ciprofloxacin.\nPattern of bacterial isolates in different age groups and their probability values\nAntibiotic sensitivity and resistance of principal isolates to some common antibiotics", "It was noted in this study that Staphylococcus aureuswas the leading isolate among the acute cases of discharging otitis media, closely followed by Proteus species and Pseudomonas aeruginosa. 
This pattern seems to resemble observations by different researchers in Nigeria [5, 7] but does not mimic the trend in the developed world where non-typable Haemophilus influenzae, Streptococcus pneumoniae and Moraxella catarrhalis assume important predominant roles in acute otitis media [4, 11, 12]. Among the chronic cases of discharging otitis media, Proteus species was the leading causative isolate, followed closely by Staphylococcus aureus, and Pseudomonas aeruginosa. These three organisms seem to be the predominant ones in Europe, Middle East, [13] United States of America and within Nigeria [5, 14, 15]. Streptococcus pneumoniae was a less common isolate in this study; and this was likewise observed in Lagos [6]. The pattern of cultured isolates in chronically discharging ears is nearly similar to that observed by Tiedt and colleagues [16] in South Africa where the commonest isolates were Proteus Mirabilis, Pseudomonas aeruginosa and Haemophilus influenzae. The overall picture of bacteriology in this study also compares with the study carried out by Abera and Kibret in Ethiopia [17] and suggests that geographical location has a large part in the aetiology of discharging otitis media. Type of ear discharge did not seem to affect or influence the causative agent. Again, age of participants did not seem to have any bearing on the pattern of bacterial isolates. There is paucity of information in the literature regarding the possible role of age on the bacteriology of discharging otitis media. All the major isolates showed excellent in vitro sensitivity to the fluoroquinolone group of antibiotics (which in this study are ofloxacin and ciprofloxacin). This observation is same as that noted by Oni and fellow workers [5] in Ibadan. Although gentamycin and chloramphenicol had moderately good in vitro activity against Staphylococcus aureus and Streptococcus pneumoniae in this study, the research by Ako-Nai and colleagues [7] in Ile-Ife showed that these gram positive organisms were only weakly susceptible to gentamycin and chloramphenicol in vitro.\nHowever, the number of isolates tested for sensitivity to gentamycin were five and three for Staphylococcus aureus and Streptococcus pneumoniae respectively, compared with twenty-eight and eight in this index study. This difference in number of isolates tested could well be a potential source of bias. On the other hand, in Ibadan (Nigeria) [5] and in Ethiopia [17], gentamycin was noted to have good antibacterial effectiveness against Staphylococcus aureus. The varying sensitivity of isolates to particular antibiotics found within a country and intracontinentally may well be a result of chronic and perhaps appropriateness of exposure of organisms to antibiotics. The fluoroquinolone group of antibiotics is a broad-spectrum class, which acts by inhibiting Deoxyribonucleic acid (DNA) gyrase. Its coverage includes the organisms most commonly associated with otitis media (Streptococcus pneumoniae regardless of susceptibility to penicillin, Haemophilus influenzae, Moraxella catarrhalis regardless of β-lactamase status, Pseudomonas aeruginosa and Staphylococcus aureus) [18]. Resistant strains are however emerging [19]. 
Systemic use of this group of antibiotics has been limited in children because of the observation that these antibiotics, when administered systemically, may have an adverse effect on the development of weight-bearing joints in juvenile animals [18, 19].\nOhyama and co-workers [20] in Japan demonstrated that 0.3% ofloxacin otic solution was efficacious without ototoxic effects in discharging otitis media of the chronic variety. Again, Dohar and fellow workers [18] showed that otic formulation of 0.3% ofloxacin in a dose of 0.25ml twice daily for 10 days eradicated cultured isolates in 96.3% of the participants tested without much adverse effects. Many researchers have documented good in vitro efficacy of gentamycin against Pseudomonas aeruginosa, Klebsiella species and Proteus species [5–7]. Indeed, Coker and colleagues [6] had observed that gentamycin appeared to be the most effective antibiotic against strains of Pseudomonas aeruginosa, Proteus and Klebsiella species and went on to recommend its topical application in chronic forms of middle ear discharge. Wariso and Ibe [21] in Port Harcourt and some other researchers in Ethiopia [17] also recommended it as the first drug of choice in treating chronic otitis media.\nThe results of this study suggest that there exists a high level of resistance of bacterial isolates to a number of commonly used antibiotics like cloxacillin, amoxicillin, erythromycin, ampiclox, co-trimoxazole, amoxicillin-clavulanate and even cefuroxime in in vitro testing. The increasing prevalence of multi-drug resistant bacteria is of epidemiological importance especially in attempts to control infection in the event of an epidemic caused by these agents. Such resistance may have arisen due to injudicious use of antibiotics especially as they are commonly used as 'over-the-counter'drugs with no qualified medical personnel's prescription. It is therefore being recommended that the Ministry of Health should restrict injudicious sale of antibiotics. They should only be sold on qualified health personnel's prescription to minimize their abuse. We also recommend that for acutely discharging ears, systemic Ofloxacin or ciprofloxacin could be used as the drug of choice until ear swab bacteriologic results are out. Gentamycin or ofloxacin or ciprofloxacin could serve as logical choice in chronically discharging ears until swab culture and sensitivity results are out.", "Staphylococcal aureus and Proteus species were the most common bacterial agents in acute and chronic otitis media respectively. Fluoroquinolones were found to be effective in their treatment.\n What is known about this topic Bacterial agents are known causes of both acute and chronic otitis media. Antibiotic therapy is key in the management of otitis media. In resource poor countries like Nigeria, empirical therapy is not uncommon, hence the need for implicating agents and susceptible antibiotics.\nBacterial agents are known causes of both acute and chronic otitis media. Antibiotic therapy is key in the management of otitis media. 
In resource poor countries like Nigeria, empirical therapy is not uncommon, hence the need for implicating agents and susceptible antibiotics.\n What this study adds This study shows that the leading organisms in both acutely and chronically discharging otitis media among Nigerian children were Staphylococcal aureus and Proteus species, these organisms were more sensitive to fluoroquinolones group of antibiotics and should now be the first-line antibiotic management of otitis media among Nigerian children.\nThis study shows that the leading organisms in both acutely and chronically discharging otitis media among Nigerian children were Staphylococcal aureus and Proteus species, these organisms were more sensitive to fluoroquinolones group of antibiotics and should now be the first-line antibiotic management of otitis media among Nigerian children.", "Bacterial agents are known causes of both acute and chronic otitis media. Antibiotic therapy is key in the management of otitis media. In resource poor countries like Nigeria, empirical therapy is not uncommon, hence the need for implicating agents and susceptible antibiotics.", "This study shows that the leading organisms in both acutely and chronically discharging otitis media among Nigerian children were Staphylococcal aureus and Proteus species, these organisms were more sensitive to fluoroquinolones group of antibiotics and should now be the first-line antibiotic management of otitis media among Nigerian children." ]
[ "intro", "methods", "results", "discussion", "conclusion", null, null ]
[ "Bacterial agents", "discharging middle ear", "Nigerian children" ]
Introduction: Otitis media is one of the most common infectious diseases of childhood worldwide [1]. It is the inflammation of the mucous membrane of the middle ear cleft which includes the middle ear cavity, mastoid antrum, the mastoid air cells and the Eustachian tube [2]. When the inflammation is associated with a discharge from the ear through a perforation in the tympanic membrane or via a ventilatinig tube, suppurative (or discharging) otitis media results. Otitis media may be acute (less than 6 weeks) or chronic (at least 6 weeks) [1]. Discharging ear or otorrhea is drainage exiting the ear which may be serous, serosanguineous, or purulent [3]. Bacteria have remained the most important aetiological agents in suppurative or discharging otitis media. While non-typable Haemophilus influenzae, Streptooccus pneumonia and Moraxella catarrhalis are commonly reported as aetiological agents in acutely discharging ears in the developed countries [4], local studies done in Nigeria suggest that Moraxella catarrhalis is not a predominant organism in acutely discharging otitis media in Nigerian children [5]. Pseudomonas aeruginosa is commonly implicated in chronic discharging otitis media in Nigerian children [5, 6]. Complications of discharging otitis media are numerous and include hearing impairment, mastoiditis, facial nerve paralysis, cholesteatoma, tympanosclerosis, bacterial meningitis and brain abscess, to mention a few. Treatment of otitis media is usually based on empiric knowledge of aetiologic organisms and their sensitivity pattern. There is emerging evidence of multi-drug resistance of bacterial isolates with reduction in antibiotic efficacy [7, 8]. Many paediatricians and general practitioners base their treatment of otitis media on empiric evidence of the aetiologic agents and their sensitivities to various antimicrobial agents. In view of these, it is important to document the trend in Enugu, to aid in appropriate treatment of this condition and help prevent its complications which may arise if otitis media is not treated or is improperly treated. This paper therefore, aims to present the aerobic bacteriological agents (and their antibiotic susceptibilities) implicated in discharging ears of children in Enugu. Methods: Study site: Study was conducted at the Children Out-Patient and Otorhinolaryngology Clinics of the University of Nigeria Teaching Hospital, Enugu, Nigeria. Sampling method: The convenience sampling technique was used: each consecutive patient seen at the Paediatric and Otorhinolaryngology clinics with ear discharge with or without other symptoms of otitis media was recruited. Data were collected between July and September 2007. Sample size calculation: The sample size was calculated using the formula: N= Z2 p(100-p)/D2. Where N = minimum sample size, Z = confidence interval (1.96), P = prevalence with reference to a previous study (6%), D = standard error (5%). Substitutions in the above formula give a minimum sample size of 87 participants. Adding an attrition rate of 10% will bring the minimum sample size to 96 participants, rounded up to 100 participants Subject recruitment: The participants were 100 children aged 0 to 17 years who presented who presented with discharging middle ear. Children with foreign bodies in or infection of the external auditory meatus were excluded; as well as those who had taken antibiotics within the preceding two weeks to their presentation. 
Data collection: A structured questionnaire designed for this study was used to record information on participants by two of the researchers. A pilot study to test information collection tool was conducted on 35 patients who qualified for the inclusion criteria. Information collected included the child's name, age, sex, place of abode, parents' educational level and occupation, presenting symptoms of the patient, duration of ear discharge and immunization status of the patient. After otoscopic examination to document presence of perforated tympanic membrane and pus in the middle ear, the external ears were cleaned with normal saline and 70% alcohol solution and allowed to dry for 1-2 minutes. After this, an ear swab from the discharging ear was taken with the aid of sterile cotton-tipped applicators taking care not to touch the skin of the external auditory meatus to limit contamination of the specimen. In patients with bilateral discharging otitis media, samples were collected from only one ear which was preferred by the patient or mother. The swabs were then quickly transported to the microbiology laboratory of the Eastern Nigeria Medical Center Enugu, a nearby specialist center located very close to the University of Nigeria Teaching Hospital Enugu. Sample processing and analysis: Samples of the ear discharge were promptly plated onto freshly prepared chocolate, blood and cystine-lactose-electrolyte-deficient (CLED) agar. The blood and cystine-lactose-electrolyte-deficient (CLED) agar plates were incubated aerobically at 37°C for 24 hours while the chocolate agar plates were incubated microaerobically using the candle jar extinction technique [9]. After the 24 hour incubation period, the plates were examined for growth and individual colonies were further analysed for their physical characteristics such as morphology, colour (pigmentation), odour and in case of blood agar, haemolysis. No anaerobic cultures were carried out and mycobacteria were not searched for because of unavailability of the required laboratory materials. A Gram stain was carried out on all cultured isolates. Thereafter, characterization to species level was carried out using standard bacteriological methods [10]. Antibiotic susceptibility tests with multi-discs were carried out for various drugs using the disc diffusion method [10]. Choice of the antibiotics to be tested for was based primarily on knowledge of commonly available antibiotic discs. Zones of inhibition for each antibiotic, if formed were then measured to the nearest millimeter and documented. Interpretation of these results in terms of resistance or susceptibility was according to accepted protocol [10]. Ethical approval: Ethical Clearance for this study was obtained from the ethical committee of the University of Nigeria Teaching Hospital Enugu before the study was commenced. Informed written consent was obtained from parents and guardian of the participants and assent was obtained from children where appropriate. Data analysis: Data were entered into a computer database and were analyzed using SPSS version 11.0. Chi-square tests (χ2) were used to test for significance; and probability value (p-value) of less than 0.05 was taken as being statistically significant. Results: The subject were aged 1 month to 17years (median 3 years). The male: female ratio was 1:0.9. Majority of the participants (70%) were aged 1month to 5years while 30 (30%) were between 6 and 17years. 
Fifteen (15.0%) participants were from the upper socio-economic class, while 32 (32.0%) and 53 (53.0%) participants were from the middle and lower socio-economic class respectively. Ninety-one (91.0%) ear swab samples grew isolates on culture while 9(9.0%) were sterile. Out of these 91 samples with positive bacterial yield, 3(3.3%) samples yielded multiple isolates while the rest 88(96.7%) yielded single isolates on culture. On the whole, 94 isolates were got from the bacterial cultures. Forty (42.6%) were gram positive; whereas 54(57.4%) were gram negative. Among the isolates from patients with acute discharging otitis media, gram positives constituted 43.8% while gram negatives formed 56.3% of them. The leading organisms in acutely discharging ears were: staphylococcus aureus15(31.3%) followed by Proteus species 12(25.0%), and then by Pseudomonas aeruginosa11(22.9%) (Table 1). Frequency distribution of cultured isolates in participants with acute and chronic discharging otitis media and their p-values Among the 49 cases of chronically discharging otitis media, gram positive and gram negative organisms accounted for 41.3% and 58.7% of them respectively. The commonest causative isolated agent in these chronic cases was Proteus species 18 (39.1%), followed by Staphylococcus aureus13(28.3%). Others were Pseudomonas aeruginosa6(13.0%), Streptococcus pneumoniae5(10.9%), Klebsiella species 3(6.5%) and Non haemolytic Streptococcus 1(2.2%) (Table 1). No significant difference was found among the isolates cultured in acute and chronic discharging ears (Table 1). As otitis media in children is documented to be commonest in children less than 5 years [8], the pattern of bacterial isolates in those less than 5 years of age and in those at least 5 years of age was sought as is depicted in Table 2. No relationship of statistical significance was observed when chi-square (with Yates correction for continuity) was calculated for the different organisms. Staphylococcus aureusand Streptococcus pneumoniae each showed 95.7% and 100% sensitivity respectively to ofloxacin; and 75.0% and 100.0% sensitivity respectively to gentamycin. Cloxacillin, and amoxicillin-clavulanate combination were also seen to show good in vitro activity against Staphylococcus aureus. However, Staphylococcus aureusand Streptococcus pneumonia were poorly sensitive to some other commonly used antibiotics like cefuroxime, co-trimoxazole and amoxicillin (Table 3). Proteus species and Pseudomonas aeruginosa showed 95.5% and 100.0% sensitivity respectively to ofloxacin; and on the other extreme, they were both 100.0% resistant to cloxacillin. However, gentamycin and to a lesser extent, chloramphenicol had good antibacterial activity against them. While Proteus was 100.0% sensitive to ceftriaxone, Pseudomonas aeruginosawas only 66.7% sensitive to it (Table 3). In general, all the major isolates showed excellent in vitrosensitivity to ofloxacin, and ciprofloxacin. Pattern of bacterial isolates in different age groups and their probability values Antibiotic sensitivity and resistance of principal isolates to some common antibiotics Discussion: It was noted in this study that Staphylococcus aureuswas the leading isolate among the acute cases of discharging otitis media, closely followed by Proteus species and Pseudomonas aeruginosa. 
This pattern seems to resemble observations by different researchers in Nigeria [5, 7] but does not mimic the trend in the developed world where non-typable Haemophilus influenzae, Streptococcus pneumoniae and Moraxella catarrhalis assume important predominant roles in acute otitis media [4, 11, 12]. Among the chronic cases of discharging otitis media, Proteus species was the leading causative isolate, followed closely by Staphylococcus aureus, and Pseudomonas aeruginosa. These three organisms seem to be the predominant ones in Europe, Middle East, [13] United States of America and within Nigeria [5, 14, 15]. Streptococcus pneumoniae was a less common isolate in this study; and this was likewise observed in Lagos [6]. The pattern of cultured isolates in chronically discharging ears is nearly similar to that observed by Tiedt and colleagues [16] in South Africa where the commonest isolates were Proteus Mirabilis, Pseudomonas aeruginosa and Haemophilus influenzae. The overall picture of bacteriology in this study also compares with the study carried out by Abera and Kibret in Ethiopia [17] and suggests that geographical location has a large part in the aetiology of discharging otitis media. Type of ear discharge did not seem to affect or influence the causative agent. Again, age of participants did not seem to have any bearing on the pattern of bacterial isolates. There is paucity of information in the literature regarding the possible role of age on the bacteriology of discharging otitis media. All the major isolates showed excellent in vitro sensitivity to the fluoroquinolone group of antibiotics (which in this study are ofloxacin and ciprofloxacin). This observation is same as that noted by Oni and fellow workers [5] in Ibadan. Although gentamycin and chloramphenicol had moderately good in vitro activity against Staphylococcus aureus and Streptococcus pneumoniae in this study, the research by Ako-Nai and colleagues [7] in Ile-Ife showed that these gram positive organisms were only weakly susceptible to gentamycin and chloramphenicol in vitro. However, the number of isolates tested for sensitivity to gentamycin were five and three for Staphylococcus aureus and Streptococcus pneumoniae respectively, compared with twenty-eight and eight in this index study. This difference in number of isolates tested could well be a potential source of bias. On the other hand, in Ibadan (Nigeria) [5] and in Ethiopia [17], gentamycin was noted to have good antibacterial effectiveness against Staphylococcus aureus. The varying sensitivity of isolates to particular antibiotics found within a country and intracontinentally may well be a result of chronic and perhaps appropriateness of exposure of organisms to antibiotics. The fluoroquinolone group of antibiotics is a broad-spectrum class, which acts by inhibiting Deoxyribonucleic acid (DNA) gyrase. Its coverage includes the organisms most commonly associated with otitis media (Streptococcus pneumoniae regardless of susceptibility to penicillin, Haemophilus influenzae, Moraxella catarrhalis regardless of β-lactamase status, Pseudomonas aeruginosa and Staphylococcus aureus) [18]. Resistant strains are however emerging [19]. Systemic use of this group of antibiotics has been limited in children because of the observation that these antibiotics, when administered systemically, may have an adverse effect on the development of weight-bearing joints in juvenile animals [18, 19]. 
Ohyama and co-workers [20] in Japan demonstrated that 0.3% ofloxacin otic solution was efficacious without ototoxic effects in discharging otitis media of the chronic variety. Again, Dohar and fellow workers [18] showed that otic formulation of 0.3% ofloxacin in a dose of 0.25ml twice daily for 10 days eradicated cultured isolates in 96.3% of the participants tested without much adverse effects. Many researchers have documented good in vitro efficacy of gentamycin against Pseudomonas aeruginosa, Klebsiella species and Proteus species [5–7]. Indeed, Coker and colleagues [6] had observed that gentamycin appeared to be the most effective antibiotic against strains of Pseudomonas aeruginosa, Proteus and Klebsiella species and went on to recommend its topical application in chronic forms of middle ear discharge. Wariso and Ibe [21] in Port Harcourt and some other researchers in Ethiopia [17] also recommended it as the first drug of choice in treating chronic otitis media. The results of this study suggest that there exists a high level of resistance of bacterial isolates to a number of commonly used antibiotics like cloxacillin, amoxicillin, erythromycin, ampiclox, co-trimoxazole, amoxicillin-clavulanate and even cefuroxime in in vitro testing. The increasing prevalence of multi-drug resistant bacteria is of epidemiological importance especially in attempts to control infection in the event of an epidemic caused by these agents. Such resistance may have arisen due to injudicious use of antibiotics especially as they are commonly used as 'over-the-counter'drugs with no qualified medical personnel's prescription. It is therefore being recommended that the Ministry of Health should restrict injudicious sale of antibiotics. They should only be sold on qualified health personnel's prescription to minimize their abuse. We also recommend that for acutely discharging ears, systemic Ofloxacin or ciprofloxacin could be used as the drug of choice until ear swab bacteriologic results are out. Gentamycin or ofloxacin or ciprofloxacin could serve as logical choice in chronically discharging ears until swab culture and sensitivity results are out. Conclusion: Staphylococcal aureus and Proteus species were the most common bacterial agents in acute and chronic otitis media respectively. Fluoroquinolones were found to be effective in their treatment. What is known about this topic Bacterial agents are known causes of both acute and chronic otitis media. Antibiotic therapy is key in the management of otitis media. In resource poor countries like Nigeria, empirical therapy is not uncommon, hence the need for implicating agents and susceptible antibiotics. Bacterial agents are known causes of both acute and chronic otitis media. Antibiotic therapy is key in the management of otitis media. In resource poor countries like Nigeria, empirical therapy is not uncommon, hence the need for implicating agents and susceptible antibiotics. What this study adds This study shows that the leading organisms in both acutely and chronically discharging otitis media among Nigerian children were Staphylococcal aureus and Proteus species, these organisms were more sensitive to fluoroquinolones group of antibiotics and should now be the first-line antibiotic management of otitis media among Nigerian children. 
Background: Discharging middle ear continues to be one of the commonest problems seen in the developing world. There is an ever-growing need to carry out studies periodically to determine the common bacterial agents responsible for discharging otitis media and their antibiotic sensitivity, especially in settings with minimal laboratory services. The study sought to determine the common bacterial agents causing discharging middle ear among children presenting at the University of Nigeria Teaching Hospital, Enugu, and their sensitivity to the commonly available antibiotics. Methods: Middle ear swabs were collected from 100 children aged 1 month to 17 years at the Children Out-Patient and Otorhinolaryngology Clinics of the University of Nigeria Teaching Hospital, Enugu, Nigeria. The specimens were cultured for aerobic bacterial organisms and their sensitivity determined. Results: Among those with acute discharge, Staphylococcus aureus was isolated in 31.3% and Proteus species in 25.0%. In chronically discharging ears, Proteus species dominated (39.1%), followed by Staphylococcus aureus (28.3%). Conclusions: Staphylococcus aureus and Proteus species were the commonest bacterial agents in acute and chronic otitis media respectively. Most isolates showed high sensitivity to the fluoroquinolone antibiotics.
Introduction: Otitis media is one of the most common infectious diseases of childhood worldwide [1]. It is the inflammation of the mucous membrane of the middle ear cleft, which includes the middle ear cavity, mastoid antrum, the mastoid air cells and the Eustachian tube [2]. When the inflammation is associated with a discharge from the ear through a perforation in the tympanic membrane or via a ventilating tube, suppurative (or discharging) otitis media results. Otitis media may be acute (less than 6 weeks) or chronic (at least 6 weeks) [1]. Discharging ear, or otorrhea, is drainage exiting the ear which may be serous, serosanguineous, or purulent [3]. Bacteria have remained the most important aetiological agents in suppurative or discharging otitis media. While non-typable Haemophilus influenzae, Streptococcus pneumoniae and Moraxella catarrhalis are commonly reported as aetiological agents in acutely discharging ears in the developed countries [4], local studies done in Nigeria suggest that Moraxella catarrhalis is not a predominant organism in acutely discharging otitis media in Nigerian children [5]. Pseudomonas aeruginosa is commonly implicated in chronic discharging otitis media in Nigerian children [5, 6]. Complications of discharging otitis media are numerous and include hearing impairment, mastoiditis, facial nerve paralysis, cholesteatoma, tympanosclerosis, bacterial meningitis and brain abscess, to mention a few. Treatment of otitis media is usually based on empiric knowledge of aetiologic organisms and their sensitivity pattern. There is emerging evidence of multi-drug resistance of bacterial isolates with reduction in antibiotic efficacy [7, 8]. Many paediatricians and general practitioners base their treatment of otitis media on empiric evidence of the aetiologic agents and their sensitivities to various antimicrobial agents. In view of this, it is important to document the trend in Enugu, to aid appropriate treatment of this condition and to help prevent the complications which may arise if otitis media is not treated or is improperly treated. This paper, therefore, aims to present the aerobic bacteriological agents (and their antibiotic susceptibilities) implicated in discharging ears of children in Enugu. Conclusion: Staphylococcus aureus and Proteus species were the most common bacterial agents in acute and chronic otitis media respectively. Fluoroquinolones were found to be effective in their treatment. What is known about this topic: Bacterial agents are known causes of both acute and chronic otitis media; antibiotic therapy is key in the management of otitis media; in resource-poor countries like Nigeria, empirical therapy is not uncommon, hence the need to identify the implicated agents and their antibiotic susceptibilities. What this study adds: This study shows that the leading organisms in acutely and chronically discharging otitis media among Nigerian children were Staphylococcus aureus and Proteus species respectively; these organisms were most sensitive to the fluoroquinolone group of antibiotics, which should now be first-line in the antibiotic management of otitis media among Nigerian children.
Background: Discharging middle ear continues to be one of the commonest problems seen in the developing world. There is an ever-growing need to carry out studies periodically to determine the common bacterial agents responsible for discharging otitis media and their antibiotic sensitivity, especially in settings with minimal laboratory services. The study sought to determine the common bacterial agents causing discharging middle ear among children presenting at the University of Nigeria Teaching Hospital, Enugu, and their sensitivity to the commonly available antibiotics. Methods: Middle ear swabs were collected from 100 children aged 1 month to 17 years at the Children Out-Patient and Otorhinolaryngology Clinics of the University of Nigeria Teaching Hospital, Enugu, Nigeria. The specimens were cultured for aerobic bacterial organisms and their sensitivity determined. Results: Among those with acute discharge, Staphylococcus aureus was isolated in 31.3% and Proteus species in 25.0%. In chronically discharging ears, Proteus species dominated (39.1%), followed by Staphylococcus aureus (28.3%). Conclusions: Staphylococcus aureus and Proteus species were the commonest bacterial agents in acute and chronic otitis media respectively. Most isolates showed high sensitivity to the fluoroquinolone antibiotics.
3,170
221
[ 47, 52 ]
7
[ "media", "otitis media", "otitis", "discharging", "isolates", "study", "antibiotics", "discharging otitis", "ear", "discharging otitis media" ]
[ "suppurative discharging otitis", "aetiology discharging otitis", "bacteriology discharging otitis", "otitis media acute", "acute otitis media" ]
[CONTENT] Bacterial agents | discharging middle ear | Nigerian children [SUMMARY]
[CONTENT] Bacterial agents | discharging middle ear | Nigerian children [SUMMARY]
[CONTENT] Bacterial agents | discharging middle ear | Nigerian children [SUMMARY]
[CONTENT] Bacterial agents | discharging middle ear | Nigerian children [SUMMARY]
[CONTENT] Bacterial agents | discharging middle ear | Nigerian children [SUMMARY]
[CONTENT] Bacterial agents | discharging middle ear | Nigerian children [SUMMARY]
[CONTENT] Acute Disease | Adolescent | Anti-Bacterial Agents | Bacteria, Aerobic | Bacterial Infections | Child | Child, Preschool | Chronic Disease | Female | Hospitals, Teaching | Humans | Infant | Male | Microbial Sensitivity Tests | Nigeria | Otitis Media, Suppurative | Pilot Projects [SUMMARY]
[CONTENT] Acute Disease | Adolescent | Anti-Bacterial Agents | Bacteria, Aerobic | Bacterial Infections | Child | Child, Preschool | Chronic Disease | Female | Hospitals, Teaching | Humans | Infant | Male | Microbial Sensitivity Tests | Nigeria | Otitis Media, Suppurative | Pilot Projects [SUMMARY]
[CONTENT] Acute Disease | Adolescent | Anti-Bacterial Agents | Bacteria, Aerobic | Bacterial Infections | Child | Child, Preschool | Chronic Disease | Female | Hospitals, Teaching | Humans | Infant | Male | Microbial Sensitivity Tests | Nigeria | Otitis Media, Suppurative | Pilot Projects [SUMMARY]
[CONTENT] Acute Disease | Adolescent | Anti-Bacterial Agents | Bacteria, Aerobic | Bacterial Infections | Child | Child, Preschool | Chronic Disease | Female | Hospitals, Teaching | Humans | Infant | Male | Microbial Sensitivity Tests | Nigeria | Otitis Media, Suppurative | Pilot Projects [SUMMARY]
[CONTENT] Acute Disease | Adolescent | Anti-Bacterial Agents | Bacteria, Aerobic | Bacterial Infections | Child | Child, Preschool | Chronic Disease | Female | Hospitals, Teaching | Humans | Infant | Male | Microbial Sensitivity Tests | Nigeria | Otitis Media, Suppurative | Pilot Projects [SUMMARY]
[CONTENT] Acute Disease | Adolescent | Anti-Bacterial Agents | Bacteria, Aerobic | Bacterial Infections | Child | Child, Preschool | Chronic Disease | Female | Hospitals, Teaching | Humans | Infant | Male | Microbial Sensitivity Tests | Nigeria | Otitis Media, Suppurative | Pilot Projects [SUMMARY]
[CONTENT] suppurative discharging otitis | aetiology discharging otitis | bacteriology discharging otitis | otitis media acute | acute otitis media [SUMMARY]
[CONTENT] suppurative discharging otitis | aetiology discharging otitis | bacteriology discharging otitis | otitis media acute | acute otitis media [SUMMARY]
[CONTENT] suppurative discharging otitis | aetiology discharging otitis | bacteriology discharging otitis | otitis media acute | acute otitis media [SUMMARY]
[CONTENT] suppurative discharging otitis | aetiology discharging otitis | bacteriology discharging otitis | otitis media acute | acute otitis media [SUMMARY]
[CONTENT] suppurative discharging otitis | aetiology discharging otitis | bacteriology discharging otitis | otitis media acute | acute otitis media [SUMMARY]
[CONTENT] suppurative discharging otitis | aetiology discharging otitis | bacteriology discharging otitis | otitis media acute | acute otitis media [SUMMARY]
[CONTENT] media | otitis media | otitis | discharging | isolates | study | antibiotics | discharging otitis | ear | discharging otitis media [SUMMARY]
[CONTENT] media | otitis media | otitis | discharging | isolates | study | antibiotics | discharging otitis | ear | discharging otitis media [SUMMARY]
[CONTENT] media | otitis media | otitis | discharging | isolates | study | antibiotics | discharging otitis | ear | discharging otitis media [SUMMARY]
[CONTENT] media | otitis media | otitis | discharging | isolates | study | antibiotics | discharging otitis | ear | discharging otitis media [SUMMARY]
[CONTENT] media | otitis media | otitis | discharging | isolates | study | antibiotics | discharging otitis | ear | discharging otitis media [SUMMARY]
[CONTENT] media | otitis media | otitis | discharging | isolates | study | antibiotics | discharging otitis | ear | discharging otitis media [SUMMARY]
[CONTENT] otitis | otitis media | media | discharging | ear | agents | treatment | discharging otitis media | discharging otitis | evidence [SUMMARY]
[CONTENT] sample | sample size | patient | size | ear | study | participants | agar | data | enugu [SUMMARY]
[CONTENT] isolates | table | gram | 100 | staphylococcus | respectively | years | streptococcus | 100 sensitivity respectively | sensitivity respectively [SUMMARY]
[CONTENT] media | otitis | otitis media | therapy | agents | management otitis media | nigerian | nigerian children | media nigerian children | management otitis [SUMMARY]
[CONTENT] media | otitis | otitis media | discharging | agents | study | isolates | otitis media nigerian | otitis media nigerian children | media nigerian children [SUMMARY]
[CONTENT] media | otitis | otitis media | discharging | agents | study | isolates | otitis media nigerian | otitis media nigerian children | media nigerian children [SUMMARY]
[CONTENT] ||| ||| the University of Nigeria Teaching Hospital, Enugu [SUMMARY]
[CONTENT] 100 | 1 month to 17 years | the Children Out-Patient | Otorhinolaryngology Clinics of | the University of Nigeria Teaching Hospital, Enugu, Nigeria ||| [SUMMARY]
[CONTENT] 31.3% | Proteus | 25.0% ||| Proteus Species | 39.1% | 28.3% [SUMMARY]
[CONTENT] Proteus ||| [SUMMARY]
[CONTENT] ||| ||| the University of Nigeria Teaching Hospital, Enugu ||| 100 | 1 month to 17 years | the Children Out-Patient | Otorhinolaryngology Clinics of | the University of Nigeria Teaching Hospital, Enugu, Nigeria ||| ||| ||| 31.3% | Proteus | 25.0% ||| Proteus Species | 39.1% | 28.3% ||| Proteus ||| [SUMMARY]
[CONTENT] ||| ||| the University of Nigeria Teaching Hospital, Enugu ||| 100 | 1 month to 17 years | the Children Out-Patient | Otorhinolaryngology Clinics of | the University of Nigeria Teaching Hospital, Enugu, Nigeria ||| ||| ||| 31.3% | Proteus | 25.0% ||| Proteus Species | 39.1% | 28.3% ||| Proteus ||| [SUMMARY]
Macroscopic hematuria in wasp sting patients: a retrospective study.
33706645
Macroscopic hematuria after wasp sting has been reported in Asia to occur before acute kidney injury (AKI), and is often used by clinicians as a sign indicating the need for intensive care and blood purification therapy. However, there is no study on the clinical characteristics and prognosis of this symptom.
BACKGROUND
The clinical data of 363 patients with wasp sting admitted to Suining Central Hospital from January 2016 to December 2018 were retrospectively analyzed. At admission, the poisoning severity score (PSS) was used as the criterion for severity classification. According to the presence of macroscopic hematuria, the patients were divided into a macroscopic hematuria group and a non-macroscopic hematuria group.
METHODS
Of the 363 wasp sting patients, 219 were male and 144 were female, with a mean age of 55.9 ± 16.3 years. Fifty-one (14%) had macroscopic hematuria, 39 (10.7%) had AKI, 105 (28.9%) had rhabdomyolysis, 61 (16.8%) had hemolysis, 45 (12.4%) went on to receive hemodialysis, and 14 (3.9%) died. The incidence of AKI in the macroscopic hematuria group was 70.6%, and oliguric renal failure accounted for 72.2%. Patients with macroscopic hematuria had a significantly higher PSS (2.2 ± 0.5 vs. 1.1 ± 0.3, p < .001).
RESULTS
Macroscopic hematuria can be regarded as a surrogate marker of deteriorating clinical outcome following wasp stings. In wasp sting patients with symptoms of macroscopic hematuria or serum LDH higher than 463.5 u/L upon admission, the risk of AKI increases significantly, therefore hemodialysis should be considered. The PSS is helpful in early assessment of the severity of wasp sting patients.
CONCLUSION
[ "Acute Kidney Injury", "Adult", "Aged", "Animals", "China", "Female", "Hematuria", "Humans", "Insect Bites and Stings", "L-Lactate Dehydrogenase", "Logistic Models", "Male", "Middle Aged", "Multivariate Analysis", "Renal Dialysis", "Retrospective Studies", "Rhabdomyolysis", "Severity of Illness Index", "Wasp Venoms", "Wasps" ]
7971319
Introduction
Wasps belong to the order Hymenoptera in the animal kingdom [1]. Currently, there are more than 6,000 species of wasps in the world, and more than 200 species have been recorded in China [2,3]. Wasp stings are common worldwide. Epidemiological surveys in the United States show that wasp stings account for 27.4%–29.7% of all animal injuries and that the annual mortality is 0.14–0.74 per million population [4,5]. In July–October 2013, 1,675 cases of wasp stings occurred in Shaanxi Province of China, resulting in 42 deaths [6]. However, this global public health problem has not drawn adequate attention, and there has been no epidemiological survey of wasp stings in China [7]. There is a paucity of existing guidelines on the diagnosis and treatment of wasp stings. The Chinese Society of Toxicology has prepared a consensus statement on the standardized diagnosis and treatment of wasp stings [7]. Nevertheless, wider application of this consensus is likely limited by its complex evaluation criteria. In Europe, the poisoning severity score (PSS) is used to guide initial assessment and appropriate medical care assignment [8–11]. However, the PSS has not been reported specifically for evaluating the severity of wasp sting patients. While local symptoms of wasp sting include redness, swelling, and pain, severe wasp sting can lead to systemic allergic reaction, acute kidney injury (AKI), rhabdomyolysis, hemolysis, shock, and even death [3,12,13]. Xie et al. [14] reported the occurrence rate of macroscopic hematuria after wasp sting to be 10.1%. Macroscopic hematuria is visible to the naked eye and may appear pink, bright red, brown, or the classic finding of tea-colored urine [15,16]. Macroscopic hematuria occurring after wasp sting often alerts clinicians to the likelihood of a serious medical condition which may require intensive care admission, and in some cases even plasmapheresis [7]. However, there has not been any study looking at the clinical characteristics and prognosis of wasp sting patients complicated by macroscopic hematuria in particular. Suining Central Hospital of Sichuan Province, China, is the only tertiary grade A general hospital in the interior areas of Sichuan Province, and most patients with severe wasp stings were treated in our nephrology department and ICU. Therefore, we undertook this study of 363 patients with wasp sting treated at Suining Central Hospital from January 2016 to December 2018 and analyzed the clinical characteristics of those with macroscopic hematuria. We used the PSS to grade severity upon admission, so as to assess whether macroscopic hematuria and the PSS are valuable in predicting the prognosis of severe wasp sting patients.
null
null
Results
Demographic characteristics: From January 2016 to December 2018, 390 patients with wasp sting presented to Suining Central Hospital. Three hundred sixty-three were included in the study. Twenty-four patients refused to be hospitalized and three patients were excluded from the study as they already had CKD based on their medical history and ultrasonography. Sixty percent of patients were male, with a mean age of 55.9 ± 16.3 years (Table 1). The time interval between sting and admission was 3 (0.5–144) hours, and the PSS was 1.2 ± 0.5 points on admission. Fourteen patients died, a mortality of 3.9%. Eight patients died on the first day of admission, followed by three each on the second and third days. The most common cause of death was ARDS (9/14) followed by MODS (4/14) (Table 1). Figure 1 shows the monthly distribution of patients with macroscopic hematuria, rhabdomyolysis, hemolysis, AKI, ICU admission and death after wasp sting. Monthly distribution of patients with severe complication. Clinical data of wasp sting patients (n = 363). *The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.
Clinical manifestations: Almost all patients had local symptoms such as redness, swelling, and pain at the sting site. Thirty-nine patients (10.7%) had AKI: 7 (17.9%) in stage 1, 9 (23.1%) in stage 2, and 23 (59%) in stage 3. Fifty-one patients (14%) had macroscopic hematuria, 61 patients (16.8%) developed hemolysis, and 105 patients (28.9%) developed rhabdomyolysis. Forty-five patients (12.4%) underwent blood purification therapy, and 7 patients (1.9%) were admitted to the ICU. Of the 7 patients admitted to the ICU, five were directly admitted for need of endotracheal intubation, and 2 were transferred from the nephrology department (Table 1).
Comparison of clinical data between the two groups: There was no statistical difference in gender composition between patients with macroscopic hematuria and patients with non-macroscopic hematuria (p = .089). The average age of patients with macroscopic hematuria was higher than that of patients with non-macroscopic hematuria (65 years vs 54 years, p < .001). The number of stings, the time interval between sting and admission, and the PSS on admission were higher in the macroscopic hematuria group than in the non-macroscopic hematuria group, and hospitalization days were significantly prolonged (p < .001). Thirteen patients (25.5%) died in the macroscopic hematuria group, and 1 patient (0.3%) died in the non-macroscopic hematuria group (p < .001). The patient in the non-macroscopic hematuria group died of respiratory failure in the setting of chronic obstructive pulmonary disease. Although there was no statistical difference in the duration of blood purification therapy between these two groups, the time of blood purification therapy in the macroscopic hematuria group was significantly prolonged (Table 2). Clinical data between non-macroscopic hematuria and macroscopic hematuria group. *The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.
Comparison of complications between the two groups: No allergic rash was seen in the macroscopic hematuria group (p = .001). One patient developed hypotension in the macroscopic hematuria group compared to 19 in the non-macroscopic hematuria group (p = .332). The incidence of oliguria (or anuria), rhabdomyolysis, hemolysis, coagulation abnormalities, liver damage, MODS, and ARDS in the macroscopic hematuria group was higher than that in the non-macroscopic hematuria group. Seven patients (13.7%) in the macroscopic hematuria group were admitted to the ICU, while no patients in the non-macroscopic hematuria group were admitted to the ICU (p < .001). Thirty-six (70.6%) patients with macroscopic hematuria developed AKI, compared to 3 (1%) patients with non-macroscopic hematuria (p < .001). Forty patients (78.4%) in the macroscopic hematuria group received hemodialysis compared to 5 (1.6%) in the non-macroscopic hematuria group (p < .001) (Table 2).
Comparison of laboratory examination results between the two groups: At the time of admission and on the 2nd–3rd day after admission, the serum creatinine, creatine kinase (CK), aspartate aminotransferase (AST), indirect bilirubin (IBIL), alanine transaminase (ALT), prothrombin time (PT), activated partial thromboplastin time (APTT), lactate dehydrogenase (LDH) and leukocyte (WBC) values in the macroscopic hematuria group were higher than those in the non-macroscopic hematuria group (p < .001). Serum creatinine and LDH tested before discharge were still significantly higher in the macroscopic hematuria group than in the non-macroscopic hematuria group (p < .001) (Table 3). Lab examination between non-macroscopic hematuria and macroscopic hematuria group. *Aspartate aminotransferase, **Alanine transaminase, #Prothrombin time, ##Activated partial thromboplastin time.
Spearman analysis between macroscopic hematuria and patients' outcome: A Spearman correlation analysis indicated that macroscopic hematuria was related to the patients' outcome (r = 0.454, p < .001) (Table 4). Spearman analysis between macroscopic hematuria and the patients' outcome. Spearman analysis, r = 0.454, p < .001.
Multivariate logistic regression analysis: The PSS was one risk factor for death in wasp sting patients (OR = 6.768, 95%CI 1.981–23.120, p = .002) (Table 5). Multivariate logistic regression analysis of death in wasp sting patients. H-L test X² = 0.826, p = .999. We took the laboratory data with significant differences at admission between the AKI and non-AKI groups (Table 6) as independent variables and performed a multivariate logistic regression analysis of AKI in wasp sting patients. The results showed that LDH was an independent risk factor for AKI (OR = 1.006, 95%CI 1.004–1.007, p < .001) (Table 7). Lab examination on admission between non-AKI and AKI group. *Aspartate aminotransferase, **Alanine transaminase. Multivariate logistic regression analysis of AKI in wasp sting patients. H-L test X² = 14.555, p = .068.
ROC curve analysis: The AUC of the PSS on admission to predict death of wasp sting patients was 0.911 (95%CI 0.870–0.952, p < .001) (Figure 2). The poisoning severity score on admission to predict death of wasp sting patients. The AUC of serum LDH on admission to predict AKI of wasp sting patients was 0.980 (95%CI 0.966–0.995, p < .001), with a cutoff of 463.5 U/L and a Youden index of 0.906 (Figure 3). The AUC of the number of stings to predict AKI of wasp sting patients was 0.875 (95%CI 0.826–0.925, p < .001), with a cutoff of 11.5 and a Youden index of 0.663 (Figure 3). LDH on admission and number of stings to predict AKI of wasp sting patients.
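The multivariate logistic regression and ROC analyses reported above (LDH as an independent risk factor for AKI; AUC 0.980 with a Youden-index cutoff of 463.5 U/L) were performed in SPSS. The sketch below shows, purely for illustration, how an equivalent analysis could be reproduced in Python; the DataFrame `df` and the column names (`ldh_admission`, `num_stings`, `aki`) are hypothetical and are not part of the study's data.

```python
# Illustrative re-implementation of the reported analysis (a sketch, not the authors' SPSS procedure).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, roc_curve

def logistic_or(df: pd.DataFrame, outcome: str, predictors: list[str]) -> pd.DataFrame:
    """Fit a multivariate logistic regression and report odds ratios with 95% CIs."""
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df[outcome], X).fit(disp=False)
    table = pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI_low": np.exp(fit.conf_int()[0]),
        "CI_high": np.exp(fit.conf_int()[1]),
        "p": fit.pvalues,
    })
    return table.drop(index="const")

def youden_cutoff(y_true: pd.Series, score: pd.Series) -> tuple[float, float, float]:
    """Return (AUC, best cutoff, Youden index) for a continuous predictor of a binary outcome."""
    auc = roc_auc_score(y_true, score)
    fpr, tpr, thresholds = roc_curve(y_true, score)
    youden = tpr - fpr
    best = int(np.argmax(youden))
    return float(auc), float(thresholds[best]), float(youden[best])

# Example usage on a hypothetical data frame:
# print(logistic_or(df, "aki", ["ldh_admission", "num_stings"]))
# print(youden_cutoff(df["aki"], df["ldh_admission"]))  # the study reports AUC 0.980, cutoff 463.5 U/L
```

Group comparisons in the study (chi-square for rates, Mann-Whitney U for non-normal variables, Spearman correlation) could likewise be run with scipy.stats if one were re-analyzing similar data.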
null
null
[ "Research subjects", "Data collection", "Therapeutic schedule", "Statistical methods", "Demographic characteristics", "Clinical manifestations", "Comparison of clinical data between the two groups", "Comparison of complications between the two groups", "Comparison of laboratory examination results between the two groups", "Spearman analysis between macroscopic hematuria and patients' outcome", "Multivariate logistic regression analysis", "ROC curve analysis" ]
[ "The research subjects were patients with wasp sting and hospitalized from 1 January 2016 to 31 December 2018 in the nephrology department or the Intensive Care Unit (ICU) of Suining Central Hospital in Sichuan province, China. The exclusion criteria were chronic kidney disease (CKD), age less than 14 years old, and death upon admission. CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health [17]. For the diagnosis of CKD, we did not only rely on the patients’ medical history, but also kidney ultrasonography and baseline serum creatinine with presence of proteinuria/hematuria. AKI was defined by KDIGO definition criteria as any of the following: increase in SCr by ≥ 0.3 mg/dL within 48 h; or increase in SCr ≥1.5 times baseline, which is known or presumed to have occurred within the prior 7 days; or Urine volume < 0.5 mL/kg/h for 6 h [18]. Creatine kinase (CK) levels of 1000 U/L, exceeding five times the upper limit of normal, was used for diagnosing rhabdomyolysis [19]. We categorized them into macroscopic hematuria group (n = 51) and non- macroscopic hematuria group (n = 312) according to the presence of macroscopic hematuria. Microscopic hematuria was not included in this study. The study was approved by the Ethics Committee of Suining Central Hospital (LLSNCH20200022).", "EpiData3.1 software was used to input the following data: 363 patients' demographic indicators, including gender, age, number of stings, the time interval between sting and admission, and previous medical history; main symptoms and signs such as hypotension, allergic rash, macroscopic hematuria, and oliguria (or anuria); the PSS at admission: The PSS grades severity as (0) none, (1) minor, (2) moderate, (3) severe, and (4) fatal poisoning. The severity grading takes into account only the observed clinical symptoms and signs, it does not estimate risks or hazards on the basis of information such as the amounts ingested or serum concentrations of the toxic agent [11]. According to the exclusion criteria, patients with no symptoms (0) and those who died upon admission (4) were excluded, thus the PSS graded the patients into minor, moderate, and severe poisoning levels; laboratory data of patients on admission, the 2nd and the 3rd day after admission, and the day before discharge; severe complications such as AKI, rhabdomyolysis, hemolysis, coagulation disorder, liver dysfunction, MODS, and ARDS; procedures including hemodialysis and plasmapheresis, state at discharge (death or survival), and length of hospital stay.", "Patients with wasp stings are rarely treated in outpatient clinics. During pre-hospital first aid: given 0.9% sodium chloride to hydration, and glucocorticoid or epinephrine were given for allergic reactions. After admission, the stinger that could be found were removed and the number of stings was counted. 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine.\nTherapeutic schedule of Rhabdomyolysis: 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. When serum CK was over 1000 U/L, diuretics were given to achieve at least 300 mL/h urine output.\nTherapeutic schedule of AKI: Intravenous infusion of glucocorticoid (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days, and the dosage was gradually reduced and discontinued for 7–10 days. 
hydration and diuresis for patients without oliguria or anuria, to achieve at least 100–200 mL/h urine excretion.\nBlood purification treatment: We provided blood purification therapy when patients with wasp sting had macroscopic hematuria (14%) and if they had AKI (oliguria or KDIGO stage 3), and in some patients with CK over 10000 u/L in the absence of AKI.\nOur patients were treated with glucocorticoids (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days in the presence of macroscopic hematuria, AKI, or anaphylaxis, and the dosage was gradually reduced and discontinued for 7–10 days.", "The data were analyzed by SPSS software version 19.0. The enumeration data are represented by rate, and the chi-square test was used for comparison between the two groups. The measurement data conforming to the normal distribution are represented by mean ± standard deviation (x¯ ± s), and the non-conforming normal distribution by median M (P25, P75). Variables of the two groups were compared by Mann-Whitney U test. Spearman analysis was performed for the correlation between macroscopic hematuria and the patients’ outcome. Multivariate logistic regression model was used to screen the risk factors of AKI, and then ROC curve analysis was performed on the selected risk factors. A p-value of less than .05 for 95% confidence was set and used as a cutoff point to examine the statistical association between the variables.", "From January 2016 to December 2018, 390 patients with wasp sting presented to Suining Central Hospital. Three hundred sixty-three were included in the study. Twenty-four patients refused to be hospitalized and three patients were excluded from the study as they already had CKD based on their medical history and ultrasonography. Sixty percent patients were male with a mean age of 55.9 ± 16.3 years (Table 1). The time interval between sting and admission was 3(0.5–144) hours, and the PSS was 1.2 ± 0.5 points on admission. Fourteen patients died with a mortality of 3.9%. Eight patients died on the first day of admission, followed by three each on second and third day. The most common cause of death was ARDS (9/14) followed by MODS (4/14) (Table 1). Figure 1 shows the monthly distribution of patients with macroscopic hematuria, rhabdomyolysis, hemolysis, AKI, ICU and death after wasp sting.\nMonthly distribution of patients with severe complication.\nClinical data of wasp sting patients (n = 363).\n*The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.", "Almost all patients had local symptoms such as redness, swelling, and pain at the sting site. Thirty-nine patients (10.7%) had AKI, 7(17.9%) patients in stage1, 9 (23.1%) in stage2, 23 (59%) in stage 3. Fifty-one patients (14%) had macroscopic hematuria, 61 patients (16.8%) developed hemolysis, and 105 patients (28.9%) developed rhabdomyolysis. Forty-five patients (12.4%) underwent blood purification therapy, and 7 patients (1.9%) were admitted to ICU. Of the 7 patients admitted to ICU, five were directly admitted to ICU for need of endotracheal intubation, and 2 were transferred from nephrology department (Table 1).", "There was no statistical difference in gender composition between patients with macroscopic hematuria and patients with non-macroscopic hematuria (p = .089). The average age of patients with macroscopic hematuria was higher than that of patients with non-macroscopic hematuria (65 years vs 54 years, p < .001). 
The number of stings, the time interval between stings and admission, and the PSS on admission in patients with macroscopic hematuria group were higher than those in patients with non-macroscopic hematuria group, and the hospitalization days were significantly prolonged (p < .001). Thirteen patients (25.5%) died in the macroscopic hematuria group, and 1 patient (0.3%) died in the non-macroscopic hematuria group (p < .001). The patient in the non-macroscopic hematuria group died of respiratory failure in the setting of chronic obstructive pulmonary disease. Although there was no statistical difference in the duration of blood purification therapy between these two groups, the time of blood purification therapy in the macroscopic hematuria group was significantly prolonged (Table 2).\nClinical data between non-macroscopic hematuria and macroscopic hematuria group.\n*The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.", "No allergic rash was seen in the macroscopic hematuria group (p = .001). One patient developed hypotension in the macroscopic hematuria group compared to 19 in the non-macroscopic hematuria group (p = .332). The incidence of oliguria (or anuria), rhabdomyolysis, hemolysis, coagulation abnormalities, liver damage, MODS, ARDS in the macroscopic hematuria group was higher than that in the non-macroscopic hematuria group. Seven patients (13.7%) in the macroscopic hematuria group were admitted to ICU, while no patients in the non-macroscopic hematuria group were admitted to ICU (p < .001). Thirty-six (70.6%) patients with macroscopic hematuria developed AKI, compared to 3 (1%) patients with non-macroscopic hematuria (p < .001). Forty patients (78.4%) in the macroscopic hematuria group received hemodialysis compared to 5 (1.6%) in the non-macroscopic hematuria group (p < .001) (Table 2).", "At the time of admission and the 2nd–3rd day after admission, the serum creatinine, creatine kinase (CK), aspartate aminotransferase (AST), indirect bilirubin (IBIL), alanine transaminase (ALT), prothrombin time (PT), activated partial thromboplastin time (APTT), lactate dehydrogenase (LDH), leukocyte (WBC) values in the macroscopic hematuria group were higher than those in the non-macroscopic hematuria group (p < .001). Serum creatinine and LDH tested before discharge were still significantly higher in the macroscopic hematuria group than that in the non-macroscopic hematuria group (p < .001) (Table 3).\nLab examination between non-macroscopic hematuria and macroscopic hematuria group.\n*Aspartate aminotransferase, **Alanine transaminase, #Prothrombin time, ##Activated partial thromboplastin time.", "A Spearman correlation analysis indicated that macroscopic hematuria was related to the patients' outcome (r = 0.454, p < .001) (Table 4).\nSpearman analysis between macroscopic hematuria and the patients’ outcome.\nSpearman analysis, r = 0.454 p < .001.", "The PSS was one risk factor for death in wasp sting patients (OR = 6.768, 95%CI 1.981–23.120, p = .002) (Table 5).\nMultivariate logistic regression analysis of death in wasp sting patients.\nH-L test X² = 0.826, p = .999.\nWe took the data of laboratory examination with significant difference at admission (AKI group and non-AKI group) (Table 6) as independent variables and performed a multivariate logistic regression analysis of AKI in wasp sting patients. 
The results showed that LDH was an independent risk factor for AKI (OR = 1.006, 95%CI 1.004–1.007, p < .001) (Table 7).\nLab examination on admission between non-AKI and AKI group.\n*Aspartate aminotransferase, **Alanine transaminase.\nMultivariate logistic regression analysis of AKI in wasp sting patients.\nH-L test X² = 14.555, p = .068.", "The AUC of the PSS on admission to predict death of wasp sting patients was 0.911(95%CI 0.870–0.952, p < .001) (Figure 2).\nThe poisoning severity score on admission to predict death of wasp sting patients.\nThe AUC of serum LDH on admission to predict AKI of wasp sting patients was 0.980(95%CI 0.966–0.995, p < .001) (Figure 3).\nLDH on admission and number of stings to predict AKI of wasp sting patients. LDH on admission AUC = 0.980(95%CI 0.966–0.995), CUTOFF = 463.5u/L Youden index = 0.906 Number of stings AUC = 0.875(95%CI 0.826–0.925), CUTOFF = 11.5, Youden index = 0.663.\nThe AUC of number of stings to predict AKI of wasp sting patients was 0.875(95%CI 0.826–0.925, p < .001) (Figure 3)." ]
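The case definitions in the "Research subjects" and "Therapeutic schedule" texts above (KDIGO-based AKI, CK of at least 1000 U/L for rhabdomyolysis, and the stated indications for blood purification) amount to simple classification rules. The following is a minimal, illustrative sketch of those rules in Python; the field names and helper functions are hypothetical, the timing conditions of the KDIGO criteria are assumed to be met, and the source is ambiguous about how its blood-purification criteria combine, so they are treated here as alternative indications.

```python
# Illustrative encoding of the study's stated case definitions (a sketch, not clinical software).
from dataclasses import dataclass

@dataclass
class Labs:
    scr_now: float        # serum creatinine, mg/dL
    scr_baseline: float   # baseline serum creatinine, mg/dL
    urine_ml_kg_h: float  # urine output over the assessment window, mL/kg/h
    ck_u_l: float         # creatine kinase, U/L

def has_aki(l: Labs) -> bool:
    """AKI per the KDIGO-based definition quoted in the Research subjects text."""
    return (l.scr_now - l.scr_baseline >= 0.3      # rise of >= 0.3 mg/dL (within 48 h, assumed)
            or l.scr_now >= 1.5 * l.scr_baseline   # >= 1.5 x baseline (within 7 days, assumed)
            or l.urine_ml_kg_h < 0.5)              # < 0.5 mL/kg/h (for 6 h, assumed)

def has_rhabdomyolysis(l: Labs) -> bool:
    """CK of 1000 U/L (five times the upper limit of normal) as the diagnostic threshold."""
    return l.ck_u_l >= 1000

def blood_purification_indicated(l: Labs, macroscopic_hematuria: bool,
                                 oliguric_or_kdigo_stage3: bool) -> bool:
    """Indications for blood purification as stated in the Therapeutic schedule text."""
    return (macroscopic_hematuria
            or (has_aki(l) and oliguric_or_kdigo_stage3)
            or (l.ck_u_l > 10000 and not has_aki(l)))
```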
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Research subjects", "Data collection", "Therapeutic schedule", "Statistical methods", "Results", "Demographic characteristics", "Clinical manifestations", "Comparison of clinical data between the two groups", "Comparison of complications between the two groups", "Comparison of laboratory examination results between the two groups", "Spearman analysis between macroscopic hematuria and patients' outcome", "Multivariate logistic regression analysis", "ROC curve analysis", "Discussion" ]
[ "Wasps belong to the Order Hymenoptera in the Animal Kingdom [1]. Currently, there are more than 6,000 species of wasps in the world and more than 200 species have been recorded in China [2,3]. Wasp stings are common in the world. Epidemiological surveys in the United States show that wasp stings account for 27.4%–29.7% of all animal injuries and the annual mortality is 0.14–0.74/million population [4,5]. In July–October 2013, 1,675 cases of wasp stings occurred in Shaanxi Province of China, resulting in 42 deaths [6]. However, this global public health problem hasn’t drawn adequate attention as there has no epidemiological survey on wasp stings in China [7]. There is a paucity of existing guidelines on the diagnosis and treatment of wasp stings. Chinese Society of Toxicology has prepared a consensus statement on the standardized diagnosis and treatment of wasp stings [7]. Nevertheless, a wider application of this consensus criteria is likely limited by the complex evaluation criteria. In Europe, the poisoning severity score (PSS) is used to guide initial assessment and appropriate medical care assignment [8–11]. However, the PSS has not been reported specifically for evaluating the severity of wasp sting patients. While local symptoms of wasp sting include redness, swelling, and pain, severe wasp sting can lead to systemic allergic reaction, acute kidney injury (AKI), rhabdomyolysis, hemolysis, shock, and even death [3,12,13]. Xie, etc. [14] reported the occurrence rate of macroscopic hematuria after wasp sting to be 10.1%. Macroscopic hematuria is visible to the naked eye and may appear pink, bright red, brown, or the classic finding of tea colored urine [15,16]. Macroscopic hematuria occurring after wasp sting often alerts clinicians to the likelihood of a serious medical condition which may require intensive care admission, and in some cases even plasmapheresis [7]. However, there has not been any study looking at the clinical characteristics and prognosis of wasp sting patients complicated with macroscopic hematuria in particular. Suining Central Hospital of Sichuan Province, China, is the only tertiary grade A general hospital in the interior areas of Sichuan Province, and most patients with severe wasp stings were treated in our nephrology department and ICU. Therefore, we undertook this study in which we looked at the 363 patients with wasp sting from January 2016 to December 2018 in Suining Central Hospital, and we analyzed the clinical characteristics of patients with macroscopic hematuria. In our study, we used the PSS to evaluate the severity of wasp sting patients upon admission so as to evaluate whether the macroscopic hematuria and the PSS would be valuable to predict the prognosis of severe wasp sting patients.", "Research subjects The research subjects were patients with wasp sting and hospitalized from 1 January 2016 to 31 December 2018 in the nephrology department or the Intensive Care Unit (ICU) of Suining Central Hospital in Sichuan province, China. The exclusion criteria were chronic kidney disease (CKD), age less than 14 years old, and death upon admission. CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health [17]. For the diagnosis of CKD, we did not only rely on the patients’ medical history, but also kidney ultrasonography and baseline serum creatinine with presence of proteinuria/hematuria. 
AKI was defined by KDIGO definition criteria as any of the following: increase in SCr by ≥ 0.3 mg/dL within 48 h; or increase in SCr ≥1.5 times baseline, which is known or presumed to have occurred within the prior 7 days; or Urine volume < 0.5 mL/kg/h for 6 h [18]. Creatine kinase (CK) levels of 1000 U/L, exceeding five times the upper limit of normal, was used for diagnosing rhabdomyolysis [19]. We categorized them into macroscopic hematuria group (n = 51) and non- macroscopic hematuria group (n = 312) according to the presence of macroscopic hematuria. Microscopic hematuria was not included in this study. The study was approved by the Ethics Committee of Suining Central Hospital (LLSNCH20200022).\nThe research subjects were patients with wasp sting and hospitalized from 1 January 2016 to 31 December 2018 in the nephrology department or the Intensive Care Unit (ICU) of Suining Central Hospital in Sichuan province, China. The exclusion criteria were chronic kidney disease (CKD), age less than 14 years old, and death upon admission. CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health [17]. For the diagnosis of CKD, we did not only rely on the patients’ medical history, but also kidney ultrasonography and baseline serum creatinine with presence of proteinuria/hematuria. AKI was defined by KDIGO definition criteria as any of the following: increase in SCr by ≥ 0.3 mg/dL within 48 h; or increase in SCr ≥1.5 times baseline, which is known or presumed to have occurred within the prior 7 days; or Urine volume < 0.5 mL/kg/h for 6 h [18]. Creatine kinase (CK) levels of 1000 U/L, exceeding five times the upper limit of normal, was used for diagnosing rhabdomyolysis [19]. We categorized them into macroscopic hematuria group (n = 51) and non- macroscopic hematuria group (n = 312) according to the presence of macroscopic hematuria. Microscopic hematuria was not included in this study. The study was approved by the Ethics Committee of Suining Central Hospital (LLSNCH20200022).\nData collection EpiData3.1 software was used to input the following data: 363 patients' demographic indicators, including gender, age, number of stings, the time interval between sting and admission, and previous medical history; main symptoms and signs such as hypotension, allergic rash, macroscopic hematuria, and oliguria (or anuria); the PSS at admission: The PSS grades severity as (0) none, (1) minor, (2) moderate, (3) severe, and (4) fatal poisoning. The severity grading takes into account only the observed clinical symptoms and signs, it does not estimate risks or hazards on the basis of information such as the amounts ingested or serum concentrations of the toxic agent [11]. 
According to the exclusion criteria, patients with no symptoms (0) and those who died upon admission (4) were excluded, thus the PSS graded the patients into minor, moderate, and severe poisoning levels; laboratory data of patients on admission, the 2nd and the 3rd day after admission, and the day before discharge; severe complications such as AKI, rhabdomyolysis, hemolysis, coagulation disorder, liver dysfunction, MODS, and ARDS; procedures including hemodialysis and plasmapheresis, state at discharge (death or survival), and length of hospital stay.\nEpiData3.1 software was used to input the following data: 363 patients' demographic indicators, including gender, age, number of stings, the time interval between sting and admission, and previous medical history; main symptoms and signs such as hypotension, allergic rash, macroscopic hematuria, and oliguria (or anuria); the PSS at admission: The PSS grades severity as (0) none, (1) minor, (2) moderate, (3) severe, and (4) fatal poisoning. The severity grading takes into account only the observed clinical symptoms and signs, it does not estimate risks or hazards on the basis of information such as the amounts ingested or serum concentrations of the toxic agent [11]. According to the exclusion criteria, patients with no symptoms (0) and those who died upon admission (4) were excluded, thus the PSS graded the patients into minor, moderate, and severe poisoning levels; laboratory data of patients on admission, the 2nd and the 3rd day after admission, and the day before discharge; severe complications such as AKI, rhabdomyolysis, hemolysis, coagulation disorder, liver dysfunction, MODS, and ARDS; procedures including hemodialysis and plasmapheresis, state at discharge (death or survival), and length of hospital stay.\nTherapeutic schedule Patients with wasp stings are rarely treated in outpatient clinics. During pre-hospital first aid: given 0.9% sodium chloride to hydration, and glucocorticoid or epinephrine were given for allergic reactions. After admission, the stinger that could be found were removed and the number of stings was counted. 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine.\nTherapeutic schedule of Rhabdomyolysis: 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. When serum CK was over 1000 U/L, diuretics were given to achieve at least 300 mL/h urine output.\nTherapeutic schedule of AKI: Intravenous infusion of glucocorticoid (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days, and the dosage was gradually reduced and discontinued for 7–10 days. hydration and diuresis for patients without oliguria or anuria, to achieve at least 100–200 mL/h urine excretion.\nBlood purification treatment: We provided blood purification therapy when patients with wasp sting had macroscopic hematuria (14%) and if they had AKI (oliguria or KDIGO stage 3), and in some patients with CK over 10000 u/L in the absence of AKI.\nOur patients were treated with glucocorticoids (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days in the presence of macroscopic hematuria, AKI, or anaphylaxis, and the dosage was gradually reduced and discontinued for 7–10 days.\nPatients with wasp stings are rarely treated in outpatient clinics. During pre-hospital first aid: given 0.9% sodium chloride to hydration, and glucocorticoid or epinephrine were given for allergic reactions. 
After admission, the stinger that could be found were removed and the number of stings was counted. 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine.\nTherapeutic schedule of Rhabdomyolysis: 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. When serum CK was over 1000 U/L, diuretics were given to achieve at least 300 mL/h urine output.\nTherapeutic schedule of AKI: Intravenous infusion of glucocorticoid (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days, and the dosage was gradually reduced and discontinued for 7–10 days. hydration and diuresis for patients without oliguria or anuria, to achieve at least 100–200 mL/h urine excretion.\nBlood purification treatment: We provided blood purification therapy when patients with wasp sting had macroscopic hematuria (14%) and if they had AKI (oliguria or KDIGO stage 3), and in some patients with CK over 10000 u/L in the absence of AKI.\nOur patients were treated with glucocorticoids (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days in the presence of macroscopic hematuria, AKI, or anaphylaxis, and the dosage was gradually reduced and discontinued for 7–10 days.\nStatistical methods The data were analyzed by SPSS software version 19.0. The enumeration data are represented by rate, and the chi-square test was used for comparison between the two groups. The measurement data conforming to the normal distribution are represented by mean ± standard deviation (x¯ ± s), and the non-conforming normal distribution by median M (P25, P75). Variables of the two groups were compared by Mann-Whitney U test. Spearman analysis was performed for the correlation between macroscopic hematuria and the patients’ outcome. Multivariate logistic regression model was used to screen the risk factors of AKI, and then ROC curve analysis was performed on the selected risk factors. A p-value of less than .05 for 95% confidence was set and used as a cutoff point to examine the statistical association between the variables.\nThe data were analyzed by SPSS software version 19.0. The enumeration data are represented by rate, and the chi-square test was used for comparison between the two groups. The measurement data conforming to the normal distribution are represented by mean ± standard deviation (x¯ ± s), and the non-conforming normal distribution by median M (P25, P75). Variables of the two groups were compared by Mann-Whitney U test. Spearman analysis was performed for the correlation between macroscopic hematuria and the patients’ outcome. Multivariate logistic regression model was used to screen the risk factors of AKI, and then ROC curve analysis was performed on the selected risk factors. A p-value of less than .05 for 95% confidence was set and used as a cutoff point to examine the statistical association between the variables.", "The research subjects were patients with wasp sting and hospitalized from 1 January 2016 to 31 December 2018 in the nephrology department or the Intensive Care Unit (ICU) of Suining Central Hospital in Sichuan province, China. The exclusion criteria were chronic kidney disease (CKD), age less than 14 years old, and death upon admission. CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health [17]. 
"The research subjects were patients with wasp stings hospitalized from 1 January 2016 to 31 December 2018 in the nephrology department or the Intensive Care Unit (ICU) of Suining Central Hospital in Sichuan Province, China. The exclusion criteria were chronic kidney disease (CKD), age less than 14 years, and death upon admission. CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health [17]. For the diagnosis of CKD, we relied not only on the patients' medical history but also on kidney ultrasonography and baseline serum creatinine together with the presence of proteinuria/hematuria. AKI was defined by the KDIGO criteria as any of the following: an increase in SCr by ≥ 0.3 mg/dL within 48 h; an increase in SCr to ≥ 1.5 times baseline, known or presumed to have occurred within the prior 7 days; or urine volume < 0.5 mL/kg/h for 6 h [18]. A creatine kinase (CK) level over 1000 U/L, exceeding five times the upper limit of normal, was used for diagnosing rhabdomyolysis [19]. We categorized the patients into a macroscopic hematuria group (n = 51) and a non-macroscopic hematuria group (n = 312) according to the presence of macroscopic hematuria. Microscopic hematuria was not included in this study. The study was approved by the Ethics Committee of Suining Central Hospital (LLSNCH20200022).", "EpiData3.1 software was used to input the following data: the 363 patients' demographic indicators, including gender, age, number of stings, the time interval between sting and admission, and previous medical history; main symptoms and signs such as hypotension, allergic rash, macroscopic hematuria, and oliguria (or anuria); the PSS at admission: the PSS grades severity as (0) none, (1) minor, (2) moderate, (3) severe, and (4) fatal poisoning. The severity grading takes into account only the observed clinical symptoms and signs; it does not estimate risks or hazards on the basis of information such as the amounts ingested or serum concentrations of the toxic agent [11]. According to the exclusion criteria, patients with no symptoms (0) and those who died upon admission (4) were excluded, so the PSS classified the included patients into minor, moderate, and severe poisoning levels; laboratory data of patients on admission, on the 2nd and 3rd day after admission, and on the day before discharge; severe complications such as AKI, rhabdomyolysis, hemolysis, coagulation disorder, liver dysfunction, MODS, and ARDS; procedures including hemodialysis and plasmapheresis; state at discharge (death or survival); and length of hospital stay.", "Patients with wasp stings are rarely treated in outpatient clinics. During pre-hospital first aid, 0.9% sodium chloride was given for hydration, and a glucocorticoid or epinephrine was given for allergic reactions. After admission, any stingers that could be found were removed and the number of stings was counted. 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine.\nTherapeutic schedule of rhabdomyolysis: 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. When serum CK was over 1000 U/L, diuretics were given to achieve a urine output of at least 300 mL/h.\nTherapeutic schedule of AKI: intravenous glucocorticoid (methylprednisolone 40 mg/d) was given for 3–5 days, and the dosage was gradually reduced and discontinued over 7–10 days;
hydration and diuresis for patients without oliguria or anuria, to achieve at least 100–200 mL/h urine excretion.\nBlood purification treatment: We provided blood purification therapy when patients with wasp sting had macroscopic hematuria (14%) and if they had AKI (oliguria or KDIGO stage 3), and in some patients with CK over 10000 u/L in the absence of AKI.\nOur patients were treated with glucocorticoids (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days in the presence of macroscopic hematuria, AKI, or anaphylaxis, and the dosage was gradually reduced and discontinued for 7–10 days.", "The data were analyzed by SPSS software version 19.0. The enumeration data are represented by rate, and the chi-square test was used for comparison between the two groups. The measurement data conforming to the normal distribution are represented by mean ± standard deviation (x¯ ± s), and the non-conforming normal distribution by median M (P25, P75). Variables of the two groups were compared by Mann-Whitney U test. Spearman analysis was performed for the correlation between macroscopic hematuria and the patients’ outcome. Multivariate logistic regression model was used to screen the risk factors of AKI, and then ROC curve analysis was performed on the selected risk factors. A p-value of less than .05 for 95% confidence was set and used as a cutoff point to examine the statistical association between the variables.", "Demographic characteristics From January 2016 to December 2018, 390 patients with wasp sting presented to Suining Central Hospital. Three hundred sixty-three were included in the study. Twenty-four patients refused to be hospitalized and three patients were excluded from the study as they already had CKD based on their medical history and ultrasonography. Sixty percent patients were male with a mean age of 55.9 ± 16.3 years (Table 1). The time interval between sting and admission was 3(0.5–144) hours, and the PSS was 1.2 ± 0.5 points on admission. Fourteen patients died with a mortality of 3.9%. Eight patients died on the first day of admission, followed by three each on second and third day. The most common cause of death was ARDS (9/14) followed by MODS (4/14) (Table 1). Figure 1 shows the monthly distribution of patients with macroscopic hematuria, rhabdomyolysis, hemolysis, AKI, ICU and death after wasp sting.\nMonthly distribution of patients with severe complication.\nClinical data of wasp sting patients (n = 363).\n*The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.\nFrom January 2016 to December 2018, 390 patients with wasp sting presented to Suining Central Hospital. Three hundred sixty-three were included in the study. Twenty-four patients refused to be hospitalized and three patients were excluded from the study as they already had CKD based on their medical history and ultrasonography. Sixty percent patients were male with a mean age of 55.9 ± 16.3 years (Table 1). The time interval between sting and admission was 3(0.5–144) hours, and the PSS was 1.2 ± 0.5 points on admission. Fourteen patients died with a mortality of 3.9%. Eight patients died on the first day of admission, followed by three each on second and third day. The most common cause of death was ARDS (9/14) followed by MODS (4/14) (Table 1). 
Figure 1 shows the monthly distribution of patients with macroscopic hematuria, rhabdomyolysis, hemolysis, AKI, ICU and death after wasp sting.\nMonthly distribution of patients with severe complication.\nClinical data of wasp sting patients (n = 363).\n*The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.\nClinical manifestations Almost all patients had local symptoms such as redness, swelling, and pain at the sting site. Thirty-nine patients (10.7%) had AKI, 7(17.9%) patients in stage1, 9 (23.1%) in stage2, 23 (59%) in stage 3. Fifty-one patients (14%) had macroscopic hematuria, 61 patients (16.8%) developed hemolysis, and 105 patients (28.9%) developed rhabdomyolysis. Forty-five patients (12.4%) underwent blood purification therapy, and 7 patients (1.9%) were admitted to ICU. Of the 7 patients admitted to ICU, five were directly admitted to ICU for need of endotracheal intubation, and 2 were transferred from nephrology department (Table 1).\nAlmost all patients had local symptoms such as redness, swelling, and pain at the sting site. Thirty-nine patients (10.7%) had AKI, 7(17.9%) patients in stage1, 9 (23.1%) in stage2, 23 (59%) in stage 3. Fifty-one patients (14%) had macroscopic hematuria, 61 patients (16.8%) developed hemolysis, and 105 patients (28.9%) developed rhabdomyolysis. Forty-five patients (12.4%) underwent blood purification therapy, and 7 patients (1.9%) were admitted to ICU. Of the 7 patients admitted to ICU, five were directly admitted to ICU for need of endotracheal intubation, and 2 were transferred from nephrology department (Table 1).\nComparison of clinical data between the two groups There was no statistical difference in gender composition between patients with macroscopic hematuria and patients with non-macroscopic hematuria (p = .089). The average age of patients with macroscopic hematuria was higher than that of patients with non-macroscopic hematuria (65 years vs 54 years, p < .001). The number of stings, the time interval between stings and admission, and the PSS on admission in patients with macroscopic hematuria group were higher than those in patients with non-macroscopic hematuria group, and the hospitalization days were significantly prolonged (p < .001). Thirteen patients (25.5%) died in the macroscopic hematuria group, and 1 patient (0.3%) died in the non-macroscopic hematuria group (p < .001). The patient in the non-macroscopic hematuria group died of respiratory failure in the setting of chronic obstructive pulmonary disease. Although there was no statistical difference in the duration of blood purification therapy between these two groups, the time of blood purification therapy in the macroscopic hematuria group was significantly prolonged (Table 2).\nClinical data between non-macroscopic hematuria and macroscopic hematuria group.\n*The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.\nThere was no statistical difference in gender composition between patients with macroscopic hematuria and patients with non-macroscopic hematuria (p = .089). The average age of patients with macroscopic hematuria was higher than that of patients with non-macroscopic hematuria (65 years vs 54 years, p < .001). 
The number of stings, the time interval between stings and admission, and the PSS on admission in patients with macroscopic hematuria group were higher than those in patients with non-macroscopic hematuria group, and the hospitalization days were significantly prolonged (p < .001). Thirteen patients (25.5%) died in the macroscopic hematuria group, and 1 patient (0.3%) died in the non-macroscopic hematuria group (p < .001). The patient in the non-macroscopic hematuria group died of respiratory failure in the setting of chronic obstructive pulmonary disease. Although there was no statistical difference in the duration of blood purification therapy between these two groups, the time of blood purification therapy in the macroscopic hematuria group was significantly prolonged (Table 2).\nClinical data between non-macroscopic hematuria and macroscopic hematuria group.\n*The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.\nComparison of complications between the two groups No allergic rash was seen in the macroscopic hematuria group (p = .001). One patient developed hypotension in the macroscopic hematuria group compared to 19 in the non-macroscopic hematuria group (p = .332). The incidence of oliguria (or anuria), rhabdomyolysis, hemolysis, coagulation abnormalities, liver damage, MODS, ARDS in the macroscopic hematuria group was higher than that in the non-macroscopic hematuria group. Seven patients (13.7%) in the macroscopic hematuria group were admitted to ICU, while no patients in the non-macroscopic hematuria group were admitted to ICU (p < .001). Thirty-six (70.6%) patients with macroscopic hematuria developed AKI, compared to 3 (1%) patients with non-macroscopic hematuria (p < .001). Forty patients (78.4%) in the macroscopic hematuria group received hemodialysis compared to 5 (1.6%) in the non-macroscopic hematuria group (p < .001) (Table 2).\nNo allergic rash was seen in the macroscopic hematuria group (p = .001). One patient developed hypotension in the macroscopic hematuria group compared to 19 in the non-macroscopic hematuria group (p = .332). The incidence of oliguria (or anuria), rhabdomyolysis, hemolysis, coagulation abnormalities, liver damage, MODS, ARDS in the macroscopic hematuria group was higher than that in the non-macroscopic hematuria group. Seven patients (13.7%) in the macroscopic hematuria group were admitted to ICU, while no patients in the non-macroscopic hematuria group were admitted to ICU (p < .001). Thirty-six (70.6%) patients with macroscopic hematuria developed AKI, compared to 3 (1%) patients with non-macroscopic hematuria (p < .001). Forty patients (78.4%) in the macroscopic hematuria group received hemodialysis compared to 5 (1.6%) in the non-macroscopic hematuria group (p < .001) (Table 2).\nComparison of laboratory examination results between the two groups At the time of admission and the 2nd–3rd day after admission, the serum creatinine, creatine kinase (CK), aspartate aminotransferase (AST), indirect bilirubin (IBIL), alanine transaminase (ALT), prothrombin time (PT), activated partial thromboplastin time (APTT), lactate dehydrogenase (LDH), leukocyte (WBC) values in the macroscopic hematuria group were higher than those in the non-macroscopic hematuria group (p < .001). 
Serum creatinine and LDH tested before discharge were still significantly higher in the macroscopic hematuria group than that in the non-macroscopic hematuria group (p < .001) (Table 3).\nLab examination between non-macroscopic hematuria and macroscopic hematuria group.\n*Aspartate aminotransferase, **Alanine transaminase, #Prothrombin time, ##Activated partial thromboplastin time.\nAt the time of admission and the 2nd–3rd day after admission, the serum creatinine, creatine kinase (CK), aspartate aminotransferase (AST), indirect bilirubin (IBIL), alanine transaminase (ALT), prothrombin time (PT), activated partial thromboplastin time (APTT), lactate dehydrogenase (LDH), leukocyte (WBC) values in the macroscopic hematuria group were higher than those in the non-macroscopic hematuria group (p < .001). Serum creatinine and LDH tested before discharge were still significantly higher in the macroscopic hematuria group than that in the non-macroscopic hematuria group (p < .001) (Table 3).\nLab examination between non-macroscopic hematuria and macroscopic hematuria group.\n*Aspartate aminotransferase, **Alanine transaminase, #Prothrombin time, ##Activated partial thromboplastin time.\nSpearman analysis between macroscopic hematuria and patients' outcome A Spearman correlation analysis indicated that macroscopic hematuria was related to the patients' outcome (r = 0.454, p < .001) (Table 4).\nSpearman analysis between macroscopic hematuria and the patients’ outcome.\nSpearman analysis, r = 0.454 p < .001.\nA Spearman correlation analysis indicated that macroscopic hematuria was related to the patients' outcome (r = 0.454, p < .001) (Table 4).\nSpearman analysis between macroscopic hematuria and the patients’ outcome.\nSpearman analysis, r = 0.454 p < .001.\nMultivariate logistic regression analysis The PSS was one risk factor for death in wasp sting patients (OR = 6.768, 95%CI 1.981–23.120, p = .002) (Table 5).\nMultivariate logistic regression analysis of death in wasp sting patients.\nH-L test X² = 0.826, p = .999.\nWe took the data of laboratory examination with significant difference at admission (AKI group and non-AKI group) (Table 6) as independent variables and performed a multivariate logistic regression analysis of AKI in wasp sting patients. The results showed that LDH was an independent risk factor for AKI (OR = 1.006, 95%CI 1.004–1.007, p < .001) (Table 7).\nLab examination on admission between non-AKI and AKI group.\n*Aspartate aminotransferase, **Alanine transaminase.\nMultivariate logistic regression analysis of AKI in wasp sting patients.\nH-L test X² = 14.555, p = .068.\nThe PSS was one risk factor for death in wasp sting patients (OR = 6.768, 95%CI 1.981–23.120, p = .002) (Table 5).\nMultivariate logistic regression analysis of death in wasp sting patients.\nH-L test X² = 0.826, p = .999.\nWe took the data of laboratory examination with significant difference at admission (AKI group and non-AKI group) (Table 6) as independent variables and performed a multivariate logistic regression analysis of AKI in wasp sting patients. 
The results showed that LDH was an independent risk factor for AKI (OR = 1.006, 95%CI 1.004–1.007, p < .001) (Table 7).\nLab examination on admission between non-AKI and AKI group.\n*Aspartate aminotransferase, **Alanine transaminase.\nMultivariate logistic regression analysis of AKI in wasp sting patients.\nH-L test X² = 14.555, p = .068.\nROC curve analysis The AUC of the PSS on admission to predict death of wasp sting patients was 0.911(95%CI 0.870–0.952, p < .001) (Figure 2).\nThe poisoning severity score on admission to predict death of wasp sting patients.\nThe AUC of serum LDH on admission to predict AKI of wasp sting patients was 0.980(95%CI 0.966–0.995, p < .001) (Figure 3).\nLDH on admission and number of stings to predict AKI of wasp sting patients. LDH on admission AUC = 0.980(95%CI 0.966–0.995), CUTOFF = 463.5u/L Youden index = 0.906 Number of stings AUC = 0.875(95%CI 0.826–0.925), CUTOFF = 11.5, Youden index = 0.663.\nThe AUC of number of stings to predict AKI of wasp sting patients was 0.875(95%CI 0.826–0.925, p < .001) (Figure 3).\nThe AUC of the PSS on admission to predict death of wasp sting patients was 0.911(95%CI 0.870–0.952, p < .001) (Figure 2).\nThe poisoning severity score on admission to predict death of wasp sting patients.\nThe AUC of serum LDH on admission to predict AKI of wasp sting patients was 0.980(95%CI 0.966–0.995, p < .001) (Figure 3).\nLDH on admission and number of stings to predict AKI of wasp sting patients. LDH on admission AUC = 0.980(95%CI 0.966–0.995), CUTOFF = 463.5u/L Youden index = 0.906 Number of stings AUC = 0.875(95%CI 0.826–0.925), CUTOFF = 11.5, Youden index = 0.663.\nThe AUC of number of stings to predict AKI of wasp sting patients was 0.875(95%CI 0.826–0.925, p < .001) (Figure 3).", "From January 2016 to December 2018, 390 patients with wasp sting presented to Suining Central Hospital. Three hundred sixty-three were included in the study. Twenty-four patients refused to be hospitalized and three patients were excluded from the study as they already had CKD based on their medical history and ultrasonography. Sixty percent patients were male with a mean age of 55.9 ± 16.3 years (Table 1). The time interval between sting and admission was 3(0.5–144) hours, and the PSS was 1.2 ± 0.5 points on admission. Fourteen patients died with a mortality of 3.9%. Eight patients died on the first day of admission, followed by three each on second and third day. The most common cause of death was ARDS (9/14) followed by MODS (4/14) (Table 1). Figure 1 shows the monthly distribution of patients with macroscopic hematuria, rhabdomyolysis, hemolysis, AKI, ICU and death after wasp sting.\nMonthly distribution of patients with severe complication.\nClinical data of wasp sting patients (n = 363).\n*The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.", "Almost all patients had local symptoms such as redness, swelling, and pain at the sting site. Thirty-nine patients (10.7%) had AKI, 7(17.9%) patients in stage1, 9 (23.1%) in stage2, 23 (59%) in stage 3. Fifty-one patients (14%) had macroscopic hematuria, 61 patients (16.8%) developed hemolysis, and 105 patients (28.9%) developed rhabdomyolysis. Forty-five patients (12.4%) underwent blood purification therapy, and 7 patients (1.9%) were admitted to ICU. 
Of the 7 patients admitted to ICU, five were directly admitted to ICU for need of endotracheal intubation, and 2 were transferred from nephrology department (Table 1).", "There was no statistical difference in gender composition between patients with macroscopic hematuria and patients with non-macroscopic hematuria (p = .089). The average age of patients with macroscopic hematuria was higher than that of patients with non-macroscopic hematuria (65 years vs 54 years, p < .001). The number of stings, the time interval between stings and admission, and the PSS on admission in patients with macroscopic hematuria group were higher than those in patients with non-macroscopic hematuria group, and the hospitalization days were significantly prolonged (p < .001). Thirteen patients (25.5%) died in the macroscopic hematuria group, and 1 patient (0.3%) died in the non-macroscopic hematuria group (p < .001). The patient in the non-macroscopic hematuria group died of respiratory failure in the setting of chronic obstructive pulmonary disease. Although there was no statistical difference in the duration of blood purification therapy between these two groups, the time of blood purification therapy in the macroscopic hematuria group was significantly prolonged (Table 2).\nClinical data between non-macroscopic hematuria and macroscopic hematuria group.\n*The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit.", "No allergic rash was seen in the macroscopic hematuria group (p = .001). One patient developed hypotension in the macroscopic hematuria group compared to 19 in the non-macroscopic hematuria group (p = .332). The incidence of oliguria (or anuria), rhabdomyolysis, hemolysis, coagulation abnormalities, liver damage, MODS, ARDS in the macroscopic hematuria group was higher than that in the non-macroscopic hematuria group. Seven patients (13.7%) in the macroscopic hematuria group were admitted to ICU, while no patients in the non-macroscopic hematuria group were admitted to ICU (p < .001). Thirty-six (70.6%) patients with macroscopic hematuria developed AKI, compared to 3 (1%) patients with non-macroscopic hematuria (p < .001). Forty patients (78.4%) in the macroscopic hematuria group received hemodialysis compared to 5 (1.6%) in the non-macroscopic hematuria group (p < .001) (Table 2).", "At the time of admission and the 2nd–3rd day after admission, the serum creatinine, creatine kinase (CK), aspartate aminotransferase (AST), indirect bilirubin (IBIL), alanine transaminase (ALT), prothrombin time (PT), activated partial thromboplastin time (APTT), lactate dehydrogenase (LDH), leukocyte (WBC) values in the macroscopic hematuria group were higher than those in the non-macroscopic hematuria group (p < .001). 
Serum creatinine and LDH tested before discharge were still significantly higher in the macroscopic hematuria group than that in the non-macroscopic hematuria group (p < .001) (Table 3).\nLab examination between non-macroscopic hematuria and macroscopic hematuria group.\n*Aspartate aminotransferase, **Alanine transaminase, #Prothrombin time, ##Activated partial thromboplastin time.", "A Spearman correlation analysis indicated that macroscopic hematuria was related to the patients' outcome (r = 0.454, p < .001) (Table 4).\nSpearman analysis between macroscopic hematuria and the patients’ outcome.\nSpearman analysis, r = 0.454 p < .001.", "The PSS was one risk factor for death in wasp sting patients (OR = 6.768, 95%CI 1.981–23.120, p = .002) (Table 5).\nMultivariate logistic regression analysis of death in wasp sting patients.\nH-L test X² = 0.826, p = .999.\nWe took the data of laboratory examination with significant difference at admission (AKI group and non-AKI group) (Table 6) as independent variables and performed a multivariate logistic regression analysis of AKI in wasp sting patients. The results showed that LDH was an independent risk factor for AKI (OR = 1.006, 95%CI 1.004–1.007, p < .001) (Table 7).\nLab examination on admission between non-AKI and AKI group.\n*Aspartate aminotransferase, **Alanine transaminase.\nMultivariate logistic regression analysis of AKI in wasp sting patients.\nH-L test X² = 14.555, p = .068.", "The AUC of the PSS on admission to predict death of wasp sting patients was 0.911(95%CI 0.870–0.952, p < .001) (Figure 2).\nThe poisoning severity score on admission to predict death of wasp sting patients.\nThe AUC of serum LDH on admission to predict AKI of wasp sting patients was 0.980(95%CI 0.966–0.995, p < .001) (Figure 3).\nLDH on admission and number of stings to predict AKI of wasp sting patients. LDH on admission AUC = 0.980(95%CI 0.966–0.995), CUTOFF = 463.5u/L Youden index = 0.906 Number of stings AUC = 0.875(95%CI 0.826–0.925), CUTOFF = 11.5, Youden index = 0.663.\nThe AUC of number of stings to predict AKI of wasp sting patients was 0.875(95%CI 0.826–0.925, p < .001) (Figure 3).", "The current study is a retrospective analysis showing clinical features of patients with wasp sting in Suining Central Hospital, Sichuan, China. Macroscopic hematuria after wasp sting is not rare in Asian countries [6,14,20]. Our study shows that 14% of the patients with wasp sting had macroscopic hematuria, and this was only seen in the summer and fall months from July through December. More than half of patients with macroscopic hematuria (51%) developed oliguria (or anuria). More than 70% of patients with macroscopic hematuria had AKI, and the mortality was 25.5%. The poisoning severity score (PSS) in patients with macroscopic hematuria was significantly higher than that in patients with non-macroscopic hematuria (2.2 ± 0.5 vs. 1.1 ± 0.3, p < .001). The PSS was one risk factor for death in wasp sting patients.\nWasp venom contains a variety of bioactive components, such as enzymes (including phospholipase, hyaluronic acid), amines (including histamine, serotonin, catecholamine), peptides (including wasp venom peptide, wasp kinin) [12]. Phospholipase damages the cell membrane by attacking the phospholipid structure, which has a toxic effect on skeletal muscle and erythrocyte membrane, leading to rhabdomyolysis and intravascular hemolysis [21–23]. Wasp venom peptide can also independently cause muscle necrosis and cell apoptosis [12,24]. 
Venom-induced rhabdomyolysis and intravascular hemolysis lead to the release of muscle enzymes such as creatine kinase and of muscle proteins such as myoglobin, together with free hemoglobin (from red blood cells), into the circulation [25]. Once in the circulation, these heme proteins are freely filtered through the glomeruli and eventually exceed the reabsorption capacity of the renal tubules, resulting in macroscopic hematuria [16]. Kidney biopsy studies in wasp sting-induced kidney injury have demonstrated deposition of myoglobin and hemoglobin in the renal tubules [26–28].\nIn developed countries, wasp stings mainly manifest as allergic reactions of varying degrees, so treatment focuses mainly on desensitization and antiallergic therapy [5,29,30]. In China, on the other hand, wasp sting patients are mainly characterized by more severe and life-threatening presentations such as MODS, ARDS, and non-allergic shock [13,14], which was also seen in our cohort. Population-based data from the United States and Sweden show that wasp stings mostly occur in the summer and autumn months, when the climate is warm, outdoor activity increases, and exposure to wasps therefore rises [31,32]. In our study as well, 95.6% of wasp stings occurred in the months of July through November. Severe complications such as AKI, rhabdomyolysis, hemolysis, and admission to the ICU mainly occurred in July–December, but deaths were documented only in September–November. This regional and seasonal difference may be related to the different wasp species found in different regions and seasons, and to the different components and virulence of wasp venom [21,33,34].\nA prior study has shown that macroscopic hematuria associated with wasp stings generally occurs 4–12 h after the sting and tends to occur earlier than AKI [35]. In our cohort, 14% of wasp sting patients presented with macroscopic hematuria, and the incidence of serious complications such as AKI, MODS, and ARDS in these patients was significantly higher than that reported by Xie et al. [14]. ICU admissions occurred only in the macroscopic hematuria group, and 92.8% of the deaths were in the macroscopic hematuria group. Spearman analysis also demonstrated that macroscopic hematuria was related to patients' outcome (r = 0.454, p < .001). Thus, we suggest that macroscopic hematuria may be regarded as a surrogate marker of worse clinical outcomes following wasp stings. These patients also had a higher PSS at the time of admission. The PSS is already used in Europe to assess the severity of poisoned patients (including those exposed to environmental toxins) and is simple and accurate [9,36]. Multivariate logistic regression analysis showed that the PSS was also an independent risk factor for death in wasp sting patients (OR = 6.768, 95%CI 1.981–23.120, p = .002). ROC analysis of the PSS for predicting death in wasp sting patients showed that when the PSS was over 1.5, the risk of death increased, with high accuracy (AUC = 0.911). In brief, the PSS is helpful in the early assessment of the severity of wasp sting patients and is worth promoting in clinical practice.\n
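To make the reported odds ratio concrete, the sketch below shows how a multivariate logistic regression of death on the PSS and other covariates yields odds ratios with 95% confidence intervals. It is a minimal illustration on simulated data with hypothetical column names, assuming statsmodels is available; it is not the study's analysis or dataset.

```python
# A sketch of the type of multivariate logistic regression reported above,
# using statsmodels; the data and column names are hypothetical, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "pss": rng.integers(1, 4, n),            # 1 = minor, 2 = moderate, 3 = severe
    "age": rng.normal(56, 16, n).round(),
    "stings": rng.integers(1, 40, n),
})
# Simulated outcome: death risk increases with PSS (toy relationship only)
logit_p = -6 + 1.8 * df["pss"] + 0.02 * (df["age"] - 56)
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["pss", "age", "stings"]])
model = sm.Logit(df["died"], X).fit(disp=False)

odds_ratios = np.exp(model.params)
ci = np.exp(model.conf_int())                # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```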
The serum creatine kinase, aspartate aminotransferase, lactate dehydrogenase, and indirect bilirubin of patients with macroscopic hematuria were significantly higher than those of patients with non-macroscopic hematuria on admission and on the 2nd–3rd day after admission, indicating that patients with macroscopic hematuria experienced more severe and prolonged rhabdomyolysis and intravascular hemolysis. Prior studies have suggested that rhabdomyolysis, intravascular hemolysis, and the direct nephrotoxicity of wasp venom are the main causes of AKI in patients with wasp stings [37–41], and 77–90% of the reported pathology was acute tubular necrosis (ATN) [42,43]. The kidney injury is exacerbated in the low-volume states that accompany severe wasp stings owing to the development of shock [16,44]. AKI is the most common and serious outcome following wasp sting [12,45,46]. In our cohort, 10.7% of patients developed AKI, and 82% of patients with AKI had stage 2/3 kidney injury by KDIGO criteria. The incidence of oliguric AKI was as high as 72.2% in the macroscopic hematuria group, and most of these patients required blood purification treatment. In Xie's report, the overall incidence of AKI was higher in the group with more than 10 stings [14]. ROC analysis of our patients showed that the incidence of AKI increased when the number of stings was higher than 11 (AUC = 0.875, 95%CI 0.826–0.925, p < .001). Among biochemical parameters, serum LDH on admission was found to be an independent marker for AKI. Zhang et al. have shown that elevated serum LDH is associated with AKI in wasp sting patients [13,35]. In our study, ROC analysis of LDH for predicting AKI showed that when serum LDH was over 463.5 U/L, the risk of AKI increased, with high accuracy (AUC = 0.980, 95%CI 0.966–0.995, p < .001). Thus, a greater number of stings and an elevated LDH on admission (>463.5 U/L) are associated with a greater risk of AKI and should alert clinicians to more serious outcomes such as the need for hemodialysis.\nIn the macroscopic hematuria group, the serum creatinine of 25 patients complicated with AKI had not returned to normal at discharge (1.3–10.5 mg/dL). Only 8 patients were followed up in the outpatient clinic for 2–8 months, and renal function had not fully recovered in 4 patients (creatinine 1.28–2.65 mg/dL). According to Zhang's report, 10.7% of patients with wasp stings complicated by AKI progress to CKD [13], and there is evidence from population-based studies that a subset of patients with AKI progress to CKD [47]. It would therefore be advisable for such patients to be followed in a nephrology clinic after discharge. However, this result may also be related to the short hospitalization time of our patients (an average of 11 days), because, according to Ambarsari's report, recovery from acute kidney injury after a wasp sting takes 3–6 weeks [42].\nOur research also has limitations. Ours is a retrospective study; hence there may be selection bias in addition to possible confounding. The study data came from a single center, and we did not have complete information on wasp species, sting site, or the prognosis and follow-up of patients with AKI. We did not consider the possible effect of microscopic hematuria, and we did not distinguish between hemoglobinuria and myoglobinuria. 
Although the PSS has high accuracy in predicting the prognosis of wasp sting patients, our findings highlighted that the number of stings and the time of year also correlate with worse outcomes, so the PSS criteria may benefit from incorporating these variables. Twelve patients had diabetes, but no evidence of diabetic nephropathy was found, and our study did not adjust the risk of AKI for preexisting diabetes. To our knowledge, this is the largest study, in terms of the number of subjects, to review the clinical features and outcomes of macroscopic hematuria in wasp sting patients.\nIn conclusion, macroscopic hematuria is one of the early and important markers of adverse outcomes in patients with wasp stings and can thus alert clinicians to initiate prompt and careful monitoring of such patients. In wasp sting patients with macroscopic hematuria or a serum LDH higher than 463.5 U/L on admission, the risk of AKI increases significantly, and hemodialysis should therefore be considered. The poisoning severity score is helpful in the early assessment of the severity of wasp sting patients." ]
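The LDH cutoff of 463.5 U/L and the sting-count cutoff of 11.5 reported above are the kind of thresholds obtained from an ROC curve via the Youden index. The sketch below shows that mechanism on simulated data; the variable names and distributions are assumptions for illustration only and do not reproduce the study's results.

```python
# A minimal sketch of how an ROC-derived cutoff (such as the LDH threshold
# reported above) can be obtained with the Youden index. Simulated data and
# hypothetical variable names; not the study's dataset.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
n = 363
aki = rng.binomial(1, 0.11, n)               # roughly the AKI rate in the cohort
ldh = np.where(aki == 1,
               rng.normal(900, 250, n),      # toy distributions only
               rng.normal(300, 80, n))

auc = roc_auc_score(aki, ldh)
fpr, tpr, thresholds = roc_curve(aki, ldh)
youden_j = tpr - fpr                          # Youden index J = sensitivity + specificity - 1
best = np.argmax(youden_j)
print(f"AUC = {auc:.3f}; Youden-optimal LDH cutoff ≈ {thresholds[best]:.1f} U/L "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```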
[ "intro", "materials", null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion" ]
[ "Wasp sting", "macroscopic hematuria", "poisoning severity score", "AKI", "rhabdomyolysis" ]
Introduction: Wasps belong to the Order Hymenoptera in the Animal Kingdom [1]. Currently, there are more than 6,000 species of wasps in the world and more than 200 species have been recorded in China [2,3]. Wasp stings are common in the world. Epidemiological surveys in the United States show that wasp stings account for 27.4%–29.7% of all animal injuries and the annual mortality is 0.14–0.74/million population [4,5]. In July–October 2013, 1,675 cases of wasp stings occurred in Shaanxi Province of China, resulting in 42 deaths [6]. However, this global public health problem hasn’t drawn adequate attention as there has no epidemiological survey on wasp stings in China [7]. There is a paucity of existing guidelines on the diagnosis and treatment of wasp stings. Chinese Society of Toxicology has prepared a consensus statement on the standardized diagnosis and treatment of wasp stings [7]. Nevertheless, a wider application of this consensus criteria is likely limited by the complex evaluation criteria. In Europe, the poisoning severity score (PSS) is used to guide initial assessment and appropriate medical care assignment [8–11]. However, the PSS has not been reported specifically for evaluating the severity of wasp sting patients. While local symptoms of wasp sting include redness, swelling, and pain, severe wasp sting can lead to systemic allergic reaction, acute kidney injury (AKI), rhabdomyolysis, hemolysis, shock, and even death [3,12,13]. Xie, etc. [14] reported the occurrence rate of macroscopic hematuria after wasp sting to be 10.1%. Macroscopic hematuria is visible to the naked eye and may appear pink, bright red, brown, or the classic finding of tea colored urine [15,16]. Macroscopic hematuria occurring after wasp sting often alerts clinicians to the likelihood of a serious medical condition which may require intensive care admission, and in some cases even plasmapheresis [7]. However, there has not been any study looking at the clinical characteristics and prognosis of wasp sting patients complicated with macroscopic hematuria in particular. Suining Central Hospital of Sichuan Province, China, is the only tertiary grade A general hospital in the interior areas of Sichuan Province, and most patients with severe wasp stings were treated in our nephrology department and ICU. Therefore, we undertook this study in which we looked at the 363 patients with wasp sting from January 2016 to December 2018 in Suining Central Hospital, and we analyzed the clinical characteristics of patients with macroscopic hematuria. In our study, we used the PSS to evaluate the severity of wasp sting patients upon admission so as to evaluate whether the macroscopic hematuria and the PSS would be valuable to predict the prognosis of severe wasp sting patients. Materials and methods: Research subjects The research subjects were patients with wasp sting and hospitalized from 1 January 2016 to 31 December 2018 in the nephrology department or the Intensive Care Unit (ICU) of Suining Central Hospital in Sichuan province, China. The exclusion criteria were chronic kidney disease (CKD), age less than 14 years old, and death upon admission. CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health [17]. For the diagnosis of CKD, we did not only rely on the patients’ medical history, but also kidney ultrasonography and baseline serum creatinine with presence of proteinuria/hematuria. 
AKI was defined by KDIGO definition criteria as any of the following: increase in SCr by ≥ 0.3 mg/dL within 48 h; or increase in SCr ≥1.5 times baseline, which is known or presumed to have occurred within the prior 7 days; or Urine volume < 0.5 mL/kg/h for 6 h [18]. Creatine kinase (CK) levels of 1000 U/L, exceeding five times the upper limit of normal, was used for diagnosing rhabdomyolysis [19]. We categorized them into macroscopic hematuria group (n = 51) and non- macroscopic hematuria group (n = 312) according to the presence of macroscopic hematuria. Microscopic hematuria was not included in this study. The study was approved by the Ethics Committee of Suining Central Hospital (LLSNCH20200022). The research subjects were patients with wasp sting and hospitalized from 1 January 2016 to 31 December 2018 in the nephrology department or the Intensive Care Unit (ICU) of Suining Central Hospital in Sichuan province, China. The exclusion criteria were chronic kidney disease (CKD), age less than 14 years old, and death upon admission. CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health [17]. For the diagnosis of CKD, we did not only rely on the patients’ medical history, but also kidney ultrasonography and baseline serum creatinine with presence of proteinuria/hematuria. AKI was defined by KDIGO definition criteria as any of the following: increase in SCr by ≥ 0.3 mg/dL within 48 h; or increase in SCr ≥1.5 times baseline, which is known or presumed to have occurred within the prior 7 days; or Urine volume < 0.5 mL/kg/h for 6 h [18]. Creatine kinase (CK) levels of 1000 U/L, exceeding five times the upper limit of normal, was used for diagnosing rhabdomyolysis [19]. We categorized them into macroscopic hematuria group (n = 51) and non- macroscopic hematuria group (n = 312) according to the presence of macroscopic hematuria. Microscopic hematuria was not included in this study. The study was approved by the Ethics Committee of Suining Central Hospital (LLSNCH20200022). Data collection EpiData3.1 software was used to input the following data: 363 patients' demographic indicators, including gender, age, number of stings, the time interval between sting and admission, and previous medical history; main symptoms and signs such as hypotension, allergic rash, macroscopic hematuria, and oliguria (or anuria); the PSS at admission: The PSS grades severity as (0) none, (1) minor, (2) moderate, (3) severe, and (4) fatal poisoning. The severity grading takes into account only the observed clinical symptoms and signs, it does not estimate risks or hazards on the basis of information such as the amounts ingested or serum concentrations of the toxic agent [11]. According to the exclusion criteria, patients with no symptoms (0) and those who died upon admission (4) were excluded, thus the PSS graded the patients into minor, moderate, and severe poisoning levels; laboratory data of patients on admission, the 2nd and the 3rd day after admission, and the day before discharge; severe complications such as AKI, rhabdomyolysis, hemolysis, coagulation disorder, liver dysfunction, MODS, and ARDS; procedures including hemodialysis and plasmapheresis, state at discharge (death or survival), and length of hospital stay. 
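Because the case definitions quoted above are rule-based, they can be written as simple predicates. The sketch below is an illustrative Python rendering of the KDIGO AKI criteria and the CK threshold for rhabdomyolysis; the function and argument names are hypothetical, and this is not the study's data-entry or analysis code.

```python
# A sketch of the case definitions quoted above (KDIGO AKI criteria and the
# CK > 1000 U/L rhabdomyolysis threshold). Names are hypothetical illustrations.

def meets_kdigo_aki(scr_now_mg_dl, scr_baseline_mg_dl, scr_rise_48h_mg_dl,
                    urine_ml_kg_h_6h):
    """True if any of the KDIGO criteria listed in the text is met."""
    return (
        scr_rise_48h_mg_dl >= 0.3                      # SCr rise >= 0.3 mg/dL within 48 h
        or scr_now_mg_dl >= 1.5 * scr_baseline_mg_dl   # SCr >= 1.5x baseline within 7 days
        or urine_ml_kg_h_6h < 0.5                      # urine < 0.5 mL/kg/h for 6 h
    )

def has_rhabdomyolysis(ck_u_per_l):
    """CK over 1000 U/L (about five times the upper limit of normal)."""
    return ck_u_per_l > 1000

print(meets_kdigo_aki(scr_now_mg_dl=2.4, scr_baseline_mg_dl=0.9,
                      scr_rise_48h_mg_dl=0.8, urine_ml_kg_h_6h=0.3))  # True
print(has_rhabdomyolysis(ck_u_per_l=8600))                            # True
```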
EpiData3.1 software was used to input the following data: 363 patients' demographic indicators, including gender, age, number of stings, the time interval between sting and admission, and previous medical history; main symptoms and signs such as hypotension, allergic rash, macroscopic hematuria, and oliguria (or anuria); the PSS at admission: The PSS grades severity as (0) none, (1) minor, (2) moderate, (3) severe, and (4) fatal poisoning. The severity grading takes into account only the observed clinical symptoms and signs, it does not estimate risks or hazards on the basis of information such as the amounts ingested or serum concentrations of the toxic agent [11]. According to the exclusion criteria, patients with no symptoms (0) and those who died upon admission (4) were excluded, thus the PSS graded the patients into minor, moderate, and severe poisoning levels; laboratory data of patients on admission, the 2nd and the 3rd day after admission, and the day before discharge; severe complications such as AKI, rhabdomyolysis, hemolysis, coagulation disorder, liver dysfunction, MODS, and ARDS; procedures including hemodialysis and plasmapheresis, state at discharge (death or survival), and length of hospital stay. Therapeutic schedule Patients with wasp stings are rarely treated in outpatient clinics. During pre-hospital first aid: given 0.9% sodium chloride to hydration, and glucocorticoid or epinephrine were given for allergic reactions. After admission, the stinger that could be found were removed and the number of stings was counted. 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. Therapeutic schedule of Rhabdomyolysis: 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. When serum CK was over 1000 U/L, diuretics were given to achieve at least 300 mL/h urine output. Therapeutic schedule of AKI: Intravenous infusion of glucocorticoid (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days, and the dosage was gradually reduced and discontinued for 7–10 days. hydration and diuresis for patients without oliguria or anuria, to achieve at least 100–200 mL/h urine excretion. Blood purification treatment: We provided blood purification therapy when patients with wasp sting had macroscopic hematuria (14%) and if they had AKI (oliguria or KDIGO stage 3), and in some patients with CK over 10000 u/L in the absence of AKI. Our patients were treated with glucocorticoids (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days in the presence of macroscopic hematuria, AKI, or anaphylaxis, and the dosage was gradually reduced and discontinued for 7–10 days. Patients with wasp stings are rarely treated in outpatient clinics. During pre-hospital first aid: given 0.9% sodium chloride to hydration, and glucocorticoid or epinephrine were given for allergic reactions. After admission, the stinger that could be found were removed and the number of stings was counted. 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. Therapeutic schedule of Rhabdomyolysis: 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. When serum CK was over 1000 U/L, diuretics were given to achieve at least 300 mL/h urine output. Therapeutic schedule of AKI: Intravenous infusion of glucocorticoid (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days, and the dosage was gradually reduced and discontinued for 7–10 days. 
hydration and diuresis for patients without oliguria or anuria, to achieve at least 100–200 mL/h urine excretion. Blood purification treatment: We provided blood purification therapy when patients with wasp sting had macroscopic hematuria (14%) and if they had AKI (oliguria or KDIGO stage 3), and in some patients with CK over 10000 u/L in the absence of AKI. Our patients were treated with glucocorticoids (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days in the presence of macroscopic hematuria, AKI, or anaphylaxis, and the dosage was gradually reduced and discontinued for 7–10 days. Statistical methods The data were analyzed by SPSS software version 19.0. The enumeration data are represented by rate, and the chi-square test was used for comparison between the two groups. The measurement data conforming to the normal distribution are represented by mean ± standard deviation (x¯ ± s), and the non-conforming normal distribution by median M (P25, P75). Variables of the two groups were compared by Mann-Whitney U test. Spearman analysis was performed for the correlation between macroscopic hematuria and the patients’ outcome. Multivariate logistic regression model was used to screen the risk factors of AKI, and then ROC curve analysis was performed on the selected risk factors. A p-value of less than .05 for 95% confidence was set and used as a cutoff point to examine the statistical association between the variables. The data were analyzed by SPSS software version 19.0. The enumeration data are represented by rate, and the chi-square test was used for comparison between the two groups. The measurement data conforming to the normal distribution are represented by mean ± standard deviation (x¯ ± s), and the non-conforming normal distribution by median M (P25, P75). Variables of the two groups were compared by Mann-Whitney U test. Spearman analysis was performed for the correlation between macroscopic hematuria and the patients’ outcome. Multivariate logistic regression model was used to screen the risk factors of AKI, and then ROC curve analysis was performed on the selected risk factors. A p-value of less than .05 for 95% confidence was set and used as a cutoff point to examine the statistical association between the variables. Research subjects: The research subjects were patients with wasp sting and hospitalized from 1 January 2016 to 31 December 2018 in the nephrology department or the Intensive Care Unit (ICU) of Suining Central Hospital in Sichuan province, China. The exclusion criteria were chronic kidney disease (CKD), age less than 14 years old, and death upon admission. CKD is defined as abnormalities of kidney structure or function, present for >3 months, with implications for health [17]. For the diagnosis of CKD, we did not only rely on the patients’ medical history, but also kidney ultrasonography and baseline serum creatinine with presence of proteinuria/hematuria. AKI was defined by KDIGO definition criteria as any of the following: increase in SCr by ≥ 0.3 mg/dL within 48 h; or increase in SCr ≥1.5 times baseline, which is known or presumed to have occurred within the prior 7 days; or Urine volume < 0.5 mL/kg/h for 6 h [18]. Creatine kinase (CK) levels of 1000 U/L, exceeding five times the upper limit of normal, was used for diagnosing rhabdomyolysis [19]. We categorized them into macroscopic hematuria group (n = 51) and non- macroscopic hematuria group (n = 312) according to the presence of macroscopic hematuria. Microscopic hematuria was not included in this study. 
The study was approved by the Ethics Committee of Suining Central Hospital (LLSNCH20200022). Data collection: EpiData3.1 software was used to input the following data: 363 patients' demographic indicators, including gender, age, number of stings, the time interval between sting and admission, and previous medical history; main symptoms and signs such as hypotension, allergic rash, macroscopic hematuria, and oliguria (or anuria); the PSS at admission: The PSS grades severity as (0) none, (1) minor, (2) moderate, (3) severe, and (4) fatal poisoning. The severity grading takes into account only the observed clinical symptoms and signs, it does not estimate risks or hazards on the basis of information such as the amounts ingested or serum concentrations of the toxic agent [11]. According to the exclusion criteria, patients with no symptoms (0) and those who died upon admission (4) were excluded, thus the PSS graded the patients into minor, moderate, and severe poisoning levels; laboratory data of patients on admission, the 2nd and the 3rd day after admission, and the day before discharge; severe complications such as AKI, rhabdomyolysis, hemolysis, coagulation disorder, liver dysfunction, MODS, and ARDS; procedures including hemodialysis and plasmapheresis, state at discharge (death or survival), and length of hospital stay. Therapeutic schedule: Patients with wasp stings are rarely treated in outpatient clinics. During pre-hospital first aid: given 0.9% sodium chloride to hydration, and glucocorticoid or epinephrine were given for allergic reactions. After admission, the stinger that could be found were removed and the number of stings was counted. 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. Therapeutic schedule of Rhabdomyolysis: 0.9% sodium chloride was given for hydration, and sodium bicarbonate was used to alkalize the urine. When serum CK was over 1000 U/L, diuretics were given to achieve at least 300 mL/h urine output. Therapeutic schedule of AKI: Intravenous infusion of glucocorticoid (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days, and the dosage was gradually reduced and discontinued for 7–10 days. hydration and diuresis for patients without oliguria or anuria, to achieve at least 100–200 mL/h urine excretion. Blood purification treatment: We provided blood purification therapy when patients with wasp sting had macroscopic hematuria (14%) and if they had AKI (oliguria or KDIGO stage 3), and in some patients with CK over 10000 u/L in the absence of AKI. Our patients were treated with glucocorticoids (Methylprednisolone 40 mg/d intravenous infusion) for 3–5 days in the presence of macroscopic hematuria, AKI, or anaphylaxis, and the dosage was gradually reduced and discontinued for 7–10 days. Statistical methods: The data were analyzed by SPSS software version 19.0. The enumeration data are represented by rate, and the chi-square test was used for comparison between the two groups. The measurement data conforming to the normal distribution are represented by mean ± standard deviation (x¯ ± s), and the non-conforming normal distribution by median M (P25, P75). Variables of the two groups were compared by Mann-Whitney U test. Spearman analysis was performed for the correlation between macroscopic hematuria and the patients’ outcome. Multivariate logistic regression model was used to screen the risk factors of AKI, and then ROC curve analysis was performed on the selected risk factors. 
A p-value of less than .05 for 95% confidence was set and used as a cutoff point to examine the statistical association between the variables. Results: Demographic characteristics From January 2016 to December 2018, 390 patients with wasp sting presented to Suining Central Hospital. Three hundred sixty-three were included in the study. Twenty-four patients refused to be hospitalized and three patients were excluded from the study as they already had CKD based on their medical history and ultrasonography. Sixty percent patients were male with a mean age of 55.9 ± 16.3 years (Table 1). The time interval between sting and admission was 3(0.5–144) hours, and the PSS was 1.2 ± 0.5 points on admission. Fourteen patients died with a mortality of 3.9%. Eight patients died on the first day of admission, followed by three each on second and third day. The most common cause of death was ARDS (9/14) followed by MODS (4/14) (Table 1). Figure 1 shows the monthly distribution of patients with macroscopic hematuria, rhabdomyolysis, hemolysis, AKI, ICU and death after wasp sting. Monthly distribution of patients with severe complication. Clinical data of wasp sting patients (n = 363). *The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit. From January 2016 to December 2018, 390 patients with wasp sting presented to Suining Central Hospital. Three hundred sixty-three were included in the study. Twenty-four patients refused to be hospitalized and three patients were excluded from the study as they already had CKD based on their medical history and ultrasonography. Sixty percent patients were male with a mean age of 55.9 ± 16.3 years (Table 1). The time interval between sting and admission was 3(0.5–144) hours, and the PSS was 1.2 ± 0.5 points on admission. Fourteen patients died with a mortality of 3.9%. Eight patients died on the first day of admission, followed by three each on second and third day. The most common cause of death was ARDS (9/14) followed by MODS (4/14) (Table 1). Figure 1 shows the monthly distribution of patients with macroscopic hematuria, rhabdomyolysis, hemolysis, AKI, ICU and death after wasp sting. Monthly distribution of patients with severe complication. Clinical data of wasp sting patients (n = 363). *The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit. Clinical manifestations Almost all patients had local symptoms such as redness, swelling, and pain at the sting site. Thirty-nine patients (10.7%) had AKI, 7(17.9%) patients in stage1, 9 (23.1%) in stage2, 23 (59%) in stage 3. Fifty-one patients (14%) had macroscopic hematuria, 61 patients (16.8%) developed hemolysis, and 105 patients (28.9%) developed rhabdomyolysis. Forty-five patients (12.4%) underwent blood purification therapy, and 7 patients (1.9%) were admitted to ICU. Of the 7 patients admitted to ICU, five were directly admitted to ICU for need of endotracheal intubation, and 2 were transferred from nephrology department (Table 1). Almost all patients had local symptoms such as redness, swelling, and pain at the sting site. Thirty-nine patients (10.7%) had AKI, 7(17.9%) patients in stage1, 9 (23.1%) in stage2, 23 (59%) in stage 3. Fifty-one patients (14%) had macroscopic hematuria, 61 patients (16.8%) developed hemolysis, and 105 patients (28.9%) developed rhabdomyolysis. 
Forty-five patients (12.4%) underwent blood purification therapy, and 7 patients (1.9%) were admitted to ICU. Of the 7 patients admitted to ICU, five were directly admitted to ICU for need of endotracheal intubation, and 2 were transferred from nephrology department (Table 1). Comparison of clinical data between the two groups There was no statistical difference in gender composition between patients with macroscopic hematuria and patients with non-macroscopic hematuria (p = .089). The average age of patients with macroscopic hematuria was higher than that of patients with non-macroscopic hematuria (65 years vs 54 years, p < .001). The number of stings, the time interval between stings and admission, and the PSS on admission in patients with macroscopic hematuria group were higher than those in patients with non-macroscopic hematuria group, and the hospitalization days were significantly prolonged (p < .001). Thirteen patients (25.5%) died in the macroscopic hematuria group, and 1 patient (0.3%) died in the non-macroscopic hematuria group (p < .001). The patient in the non-macroscopic hematuria group died of respiratory failure in the setting of chronic obstructive pulmonary disease. Although there was no statistical difference in the duration of blood purification therapy between these two groups, the time of blood purification therapy in the macroscopic hematuria group was significantly prolonged (Table 2). Clinical data between non-macroscopic hematuria and macroscopic hematuria group. *The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit. There was no statistical difference in gender composition between patients with macroscopic hematuria and patients with non-macroscopic hematuria (p = .089). The average age of patients with macroscopic hematuria was higher than that of patients with non-macroscopic hematuria (65 years vs 54 years, p < .001). The number of stings, the time interval between stings and admission, and the PSS on admission in patients with macroscopic hematuria group were higher than those in patients with non-macroscopic hematuria group, and the hospitalization days were significantly prolonged (p < .001). Thirteen patients (25.5%) died in the macroscopic hematuria group, and 1 patient (0.3%) died in the non-macroscopic hematuria group (p < .001). The patient in the non-macroscopic hematuria group died of respiratory failure in the setting of chronic obstructive pulmonary disease. Although there was no statistical difference in the duration of blood purification therapy between these two groups, the time of blood purification therapy in the macroscopic hematuria group was significantly prolonged (Table 2). Clinical data between non-macroscopic hematuria and macroscopic hematuria group. *The time interval between sting and admission, **Multiple Organ Dysfunction Syndrome, #Acute Respiratory Distress Syndrome, ##Intensive Care Unit. Comparison of complications between the two groups No allergic rash was seen in the macroscopic hematuria group (p = .001). One patient developed hypotension in the macroscopic hematuria group compared to 19 in the non-macroscopic hematuria group (p = .332). The incidence of oliguria (or anuria), rhabdomyolysis, hemolysis, coagulation abnormalities, liver damage, MODS, ARDS in the macroscopic hematuria group was higher than that in the non-macroscopic hematuria group. 
Comparison of complications between the two groups: No allergic rash was seen in the macroscopic hematuria group (p = .001). One patient in the macroscopic hematuria group developed hypotension, compared with 19 in the non-macroscopic hematuria group (p = .332). The incidences of oliguria (or anuria), rhabdomyolysis, hemolysis, coagulation abnormalities, liver damage, MODS, and ARDS were all higher in the macroscopic hematuria group than in the non-macroscopic hematuria group. Seven patients (13.7%) in the macroscopic hematuria group were admitted to the ICU, whereas no patients in the non-macroscopic hematuria group required ICU admission (p < .001). Thirty-six patients (70.6%) with macroscopic hematuria developed AKI, compared with 3 (1%) without macroscopic hematuria (p < .001). Forty patients (78.4%) in the macroscopic hematuria group received hemodialysis, compared with 5 (1.6%) in the non-macroscopic hematuria group (p < .001) (Table 2). Comparison of laboratory examination results between the two groups: At admission and on the 2nd–3rd day after admission, serum creatinine, creatine kinase (CK), aspartate aminotransferase (AST), indirect bilirubin (IBIL), alanine transaminase (ALT), prothrombin time (PT), activated partial thromboplastin time (APTT), lactate dehydrogenase (LDH), and leukocyte (WBC) count were all higher in the macroscopic hematuria group than in the non-macroscopic hematuria group (p < .001). Serum creatinine and LDH measured before discharge remained significantly higher in the macroscopic hematuria group than in the non-macroscopic hematuria group (p < .001) (Table 3). Table 3: Laboratory examinations of the non-macroscopic hematuria and macroscopic hematuria groups. *Aspartate aminotransferase, **Alanine transaminase, #Prothrombin time, ##Activated partial thromboplastin time. Spearman analysis between macroscopic hematuria and patients' outcome: A Spearman correlation analysis indicated that macroscopic hematuria was related to the patients' outcome (r = 0.454, p < .001) (Table 4). Table 4: Spearman analysis between macroscopic hematuria and the patients' outcome (r = 0.454, p < .001).
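As a minimal illustration of the statistic behind Table 4, the snippet below computes a Spearman correlation between a binary exposure (macroscopic hematuria) and a binary outcome (death) with SciPy. The coding and data are hypothetical; this is not the study's analysis code.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical coding: 1 = macroscopic hematuria present, 0 = absent
hematuria = np.array([1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])
# Hypothetical outcome coding: 1 = died, 0 = survived
outcome   = np.array([1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])

r, p = spearmanr(hematuria, outcome)
print(f"Spearman r = {r:.3f}, p = {p:.4f}")
```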
Multivariate logistic regression analysis: The PSS was a risk factor for death in wasp sting patients (OR = 6.768, 95% CI 1.981–23.120, p = .002) (Table 5). Table 5: Multivariate logistic regression analysis of death in wasp sting patients (Hosmer–Lemeshow test: χ² = 0.826, p = .999). Laboratory values that differed significantly at admission between the AKI and non-AKI groups (Table 6) were entered as independent variables in a multivariate logistic regression analysis of AKI in wasp sting patients. The results showed that LDH was an independent risk factor for AKI (OR = 1.006, 95% CI 1.004–1.007, p < .001) (Table 7). Table 6: Laboratory examinations on admission in the non-AKI and AKI groups. *Aspartate aminotransferase, **Alanine transaminase. Table 7: Multivariate logistic regression analysis of AKI in wasp sting patients (Hosmer–Lemeshow test: χ² = 14.555, p = .068). ROC curve analysis: The AUC of the PSS on admission for predicting death in wasp sting patients was 0.911 (95% CI 0.870–0.952, p < .001) (Figure 2). Figure 2: The poisoning severity score on admission for predicting death in wasp sting patients. The AUC of serum LDH on admission for predicting AKI was 0.980 (95% CI 0.966–0.995, p < .001), and the AUC of the number of stings for predicting AKI was 0.875 (95% CI 0.826–0.925, p < .001) (Figure 3). Figure 3: LDH on admission and number of stings for predicting AKI in wasp sting patients. LDH on admission: AUC = 0.980 (95% CI 0.966–0.995), cutoff = 463.5 U/L, Youden index = 0.906; number of stings: AUC = 0.875 (95% CI 0.826–0.925), cutoff = 11.5, Youden index = 0.663.
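The cutoffs quoted above (463.5 U/L for LDH and 11.5 stings) are the points on the respective ROC curves that maximize the Youden index (sensitivity + specificity - 1). The sketch below shows how such a cutoff is commonly derived with scikit-learn; the data are invented for illustration and this is not the study's actual code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: admission LDH (U/L) and AKI status (1 = AKI, 0 = no AKI)
ldh = np.array([210, 180, 950, 300, 1200, 260, 800, 150, 640, 220, 410, 530])
aki = np.array([  0,   0,   1,   0,    1,   0,   1,   0,   1,   0,   0,   1])

auc = roc_auc_score(aki, ldh)
fpr, tpr, thresholds = roc_curve(aki, ldh)
youden = tpr - fpr                      # Youden index at each candidate threshold
best = int(np.argmax(youden))

print(f"AUC = {auc:.3f}")
print(f"optimal cutoff = {thresholds[best]:.1f} U/L (Youden index = {youden[best]:.3f})")
```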
Discussion: The current study is a retrospective analysis of the clinical features of patients with wasp stings at Suining Central Hospital, Sichuan, China. Macroscopic hematuria after wasp sting is not rare in Asian countries [6,14,20]. Our study shows that 14% of patients with wasp stings had macroscopic hematuria, and this was seen only in the summer and fall months, from July through December.
More than half of the patients with macroscopic hematuria (51%) developed oliguria (or anuria). More than 70% of patients with macroscopic hematuria had AKI, and their mortality was 25.5%. The poisoning severity score (PSS) of patients with macroscopic hematuria was significantly higher than that of patients without macroscopic hematuria (2.2 ± 0.5 vs. 1.1 ± 0.3, p < .001), and the PSS was a risk factor for death in wasp sting patients. Wasp venom contains a variety of bioactive components, including enzymes (such as phospholipase and hyaluronidase), amines (such as histamine, serotonin, and catecholamines), and peptides (such as wasp venom peptide and wasp kinin) [12]. Phospholipase damages cell membranes by attacking their phospholipid structure, with toxic effects on skeletal muscle and the erythrocyte membrane that lead to rhabdomyolysis and intravascular hemolysis [21–23]. Wasp venom peptide can also independently cause muscle necrosis and cell apoptosis [12,24]. Venom-induced rhabdomyolysis and intravascular hemolysis release muscle enzymes such as creatine kinase, muscle proteins such as myoglobin, and free hemoglobin (from red blood cells) into the circulation [25]. Once in the circulation, these heme proteins are freely filtered through the glomeruli and eventually exceed the reabsorption capacity of the renal tubules, resulting in macroscopic hematuria [16]. Kidney biopsy studies in wasp sting-induced kidney injury have demonstrated deposition of myoglobin and hemoglobin in the renal tubules [26–28]. In developed countries, wasp stings mainly manifest as varying degrees of allergic reaction, so treatment focuses on desensitization and antiallergic therapy [5,29,30]. In China, by contrast, wasp sting patients more often present with severe, life-threatening conditions such as MODS, ARDS, and non-allergic shock [13,14], as was also seen in our cohort. Population-based data from the United States and Sweden show that wasp stings mostly occur in the summer and autumn months, when the climate is warm, outdoor activity increases, and exposure to wasps is therefore greater [31,32]. In our study as well, 95.6% of wasp stings occurred from July through November. Severe complications such as AKI, rhabdomyolysis, hemolysis, and ICU admission mainly occurred from July through December, but deaths were documented only from September through November. These regional and seasonal differences may be related to the different wasp species present in different regions and seasons, and to the differing components and virulence of wasp venom [21,33,34]. A prior study has shown that macroscopic hematuria associated with wasp sting generally occurs 4–12 h after the sting and tends to occur earlier than AKI [35]. In our cohort, 14% of wasp sting patients presented with macroscopic hematuria, and the incidences of serious complications such as AKI, MODS, and ARDS in these patients were significantly higher than those reported by Xie et al. [14]. ICU admissions occurred only in the macroscopic hematuria group, and 92.8% of the deaths were in the macroscopic hematuria group. Spearman analysis also demonstrated that macroscopic hematuria was related to the patients' outcome (r = 0.454, p < .001). Thus, we suggest that macroscopic hematuria may be regarded as a surrogate marker of worse clinical outcomes following wasp stings. These patients also had a higher PSS at the time of admission.
The PSS is already used in Europe to assess the severity of poisoning (including by environmental toxins) and is simple and accurate [9,36]. Multivariate logistic regression analysis showed that the PSS was also an independent risk factor for death in wasp sting patients (OR = 6.768, 95% CI 1.981–23.120, p = .002). ROC analysis of the PSS for predicting death showed that a PSS above 1.5 identified patients at increased risk of death with high accuracy (AUC = 0.911). In brief, the PSS is helpful for early assessment of the severity of wasp sting patients and is worth promoting in clinical practice. Serum creatine kinase, aspartate aminotransferase, lactate dehydrogenase, and indirect bilirubin were significantly higher in patients with macroscopic hematuria than in those without, both on admission and on the 2nd–3rd day after admission, indicating that patients with macroscopic hematuria experienced more severe and prolonged rhabdomyolysis and intravascular hemolysis. Prior studies have suggested that rhabdomyolysis, intravascular hemolysis, and the direct nephrotoxicity of wasp venom are the main causes of AKI in patients with wasp stings [37–41], and that acute tubular necrosis accounts for 77–90% of the biopsy findings [42,43]. Kidney injury is exacerbated by the low-volume states that accompany severe wasp stings owing to the development of shock [16,44]. AKI is the most common and serious outcome following wasp sting [12,45,46]. In our cohort, 10.7% of patients developed AKI, and 82% of those with AKI had stage 2 or 3 kidney injury by KDIGO criteria. The incidence of oliguric AKI was as high as 72.2% in the macroscopic hematuria group, and most of these patients required blood purification treatment. In Xie's report, the overall incidence of AKI was higher in the group with more than 10 stings [14]. ROC analysis of our patients showed that the incidence of AKI increased when the number of stings exceeded 11 (AUC = 0.875, 95% CI 0.826–0.925, p < .001). Among the biochemical parameters, serum LDH on admission was found to be an independent marker of AKI. Zhang et al. have shown that elevated serum LDH is associated with AKI in wasp sting patients [13,35]. In our study, ROC analysis of LDH for predicting AKI showed that a serum LDH above 463.5 U/L identified patients at increased risk of AKI with high accuracy (AUC = 0.980, 95% CI 0.966–0.995, p < .001). Thus, a greater number of stings and an elevated LDH on admission (>463.5 U/L) are associated with a greater risk of AKI and should alert clinicians to more serious outcomes, such as the need for hemodialysis. In the macroscopic hematuria group, the serum creatinine of 25 patients with AKI had not returned to normal at discharge (1.3–10.5 mg/dL). Only 8 patients were followed up as outpatients for 2–8 months, and renal function had not fully recovered in 4 of them (creatinine 1.28–2.65 mg/dL). According to Zhang's report, 10.7% of wasp sting patients with AKI progress to CKD [13], and population-based studies provide evidence that a subset of patients with AKI progress to CKD [47]. It would therefore be advisable for such patients to be followed in a nephrology clinic after discharge. However, this finding may also be related to the short hospitalization of our patients (average 11 days), because, according to Ambarsari's report, recovery from acute kidney injury after wasp sting takes 3–6 weeks [42]. Our research also has limitations.
Ours is a retrospective study, so there may be selection bias in addition to possible confounding. The data came from a single center, and we did not have complete information on wasp species, sting site, or the prognosis and follow-up of patients with AKI. We did not consider the possible effect of microscopic hematuria, and we did not distinguish between hemoglobinuria and myoglobinuria. Although the PSS predicted the prognosis of wasp sting patients with high accuracy, our study highlighted that the number of stings and the time of year also correlate with worse outcomes; the PSS criteria might benefit from incorporating these variables. Twelve patients had diabetes, but no evidence of diabetic nephropathy was found; our study did not adjust for the risk of AKI associated with preexisting diabetes. To our knowledge, this is the largest study to date of the clinical features and outcomes of macroscopic hematuria in wasp sting patients. In conclusion, macroscopic hematuria is an early and important marker of adverse outcomes in patients with wasp stings and can thus alert clinicians to initiate prompt and careful monitoring of such patients. In wasp sting patients with macroscopic hematuria or a serum LDH higher than 463.5 U/L on admission, the risk of AKI increases significantly, and hemodialysis should therefore be considered. The poisoning severity score is helpful for the early assessment of the severity of wasp sting patients.
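To make the screening message of this conclusion concrete, the fragment below sketches how the admission red flags discussed in the paper (macroscopic hematuria, serum LDH above 463.5 U/L, or more than 11 stings) could be combined into a simple monitoring flag. The thresholds come from the study's ROC analysis, but the function itself, including its name and the idea of simply OR-ing the criteria, is a hypothetical illustration rather than a validated clinical decision rule.

```python
def flag_high_aki_risk(macroscopic_hematuria: bool,
                       ldh_u_per_l: float,
                       n_stings: int) -> bool:
    """Return True if a wasp sting patient meets any admission red flag.

    Thresholds (LDH > 463.5 U/L, > 11 stings) are taken from the study's
    ROC cutoffs; combining them with a simple OR is an illustrative
    simplification, not a validated score.
    """
    return macroscopic_hematuria or ldh_u_per_l > 463.5 or n_stings > 11


# Hypothetical examples
print(flag_high_aki_risk(False, 520.0, 3))   # True: LDH above cutoff
print(flag_high_aki_risk(False, 300.0, 15))  # True: many stings
print(flag_high_aki_risk(False, 250.0, 2))   # False: no red flag
```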
Background: Macroscopic hematuria after wasp sting has been reported in Asia to occur before acute kidney injury (AKI) and is often used by clinicians as a sign indicating the need for intensive care and blood purification therapy. However, there has been no study of the clinical characteristics and prognosis of this symptom. Methods: The clinical data of 363 patients with wasp stings admitted to Suining Central Hospital from January 2016 to December 2018 were retrospectively analyzed. At admission, the poisoning severity score (PSS) was used as the criterion for severity classification. According to the presence of macroscopic hematuria, the patients were divided into macroscopic hematuria and non-macroscopic hematuria groups. Results: Of the 363 wasp sting patients, 219 were male and 144 were female, with a mean age of 55.9 ± 16.3 years. Fifty-one (14%) had macroscopic hematuria, 39 (10.7%) had AKI, 105 (28.9%) had rhabdomyolysis, 61 (16.8%) had hemolysis, 45 (12.4%) went on to receive hemodialysis, and 14 (3.9%) died. The incidence of AKI in the macroscopic hematuria group was 70.6%, and oliguric renal failure accounted for 72.2%. Patients with macroscopic hematuria had a significantly higher PSS (2.2 ± 0.5 vs. 1.1 ± 0.3, p < .001). Conclusions: Macroscopic hematuria can be regarded as a surrogate marker of deteriorating clinical outcome following wasp stings. In wasp sting patients with macroscopic hematuria or a serum LDH higher than 463.5 U/L on admission, the risk of AKI increases significantly, and hemodialysis should therefore be considered. The PSS is helpful in the early assessment of the severity of wasp sting patients.
null
null
9,441
334
[ 283, 243, 284, 160, 235, 142, 252, 196, 160, 60, 182, 155 ]
16
[ "patients", "hematuria", "macroscopic hematuria", "macroscopic", "wasp", "sting", "aki", "group", "admission", "wasp sting" ]
[ "severity wasp sting", "epidemiological survey wasp", "prognosis wasp sting", "china wasp sting", "wasp stings chinese" ]
null
null
null
[CONTENT] Wasp sting | macroscopic hematuria | poisoning severity score | AKI | rhabdomyolysis [SUMMARY]
null
[CONTENT] Wasp sting | macroscopic hematuria | poisoning severity score | AKI | rhabdomyolysis [SUMMARY]
null
[CONTENT] Wasp sting | macroscopic hematuria | poisoning severity score | AKI | rhabdomyolysis [SUMMARY]
null
[CONTENT] Acute Kidney Injury | Adult | Aged | Animals | China | Female | Hematuria | Humans | Insect Bites and Stings | L-Lactate Dehydrogenase | Logistic Models | Male | Middle Aged | Multivariate Analysis | Renal Dialysis | Retrospective Studies | Rhabdomyolysis | Severity of Illness Index | Wasp Venoms | Wasps [SUMMARY]
null
[CONTENT] Acute Kidney Injury | Adult | Aged | Animals | China | Female | Hematuria | Humans | Insect Bites and Stings | L-Lactate Dehydrogenase | Logistic Models | Male | Middle Aged | Multivariate Analysis | Renal Dialysis | Retrospective Studies | Rhabdomyolysis | Severity of Illness Index | Wasp Venoms | Wasps [SUMMARY]
null
[CONTENT] Acute Kidney Injury | Adult | Aged | Animals | China | Female | Hematuria | Humans | Insect Bites and Stings | L-Lactate Dehydrogenase | Logistic Models | Male | Middle Aged | Multivariate Analysis | Renal Dialysis | Retrospective Studies | Rhabdomyolysis | Severity of Illness Index | Wasp Venoms | Wasps [SUMMARY]
null
[CONTENT] severity wasp sting | epidemiological survey wasp | prognosis wasp sting | china wasp sting | wasp stings chinese [SUMMARY]
null
[CONTENT] severity wasp sting | epidemiological survey wasp | prognosis wasp sting | china wasp sting | wasp stings chinese [SUMMARY]
null
[CONTENT] severity wasp sting | epidemiological survey wasp | prognosis wasp sting | china wasp sting | wasp stings chinese [SUMMARY]
null
[CONTENT] patients | hematuria | macroscopic hematuria | macroscopic | wasp | sting | aki | group | admission | wasp sting [SUMMARY]
null
[CONTENT] patients | hematuria | macroscopic hematuria | macroscopic | wasp | sting | aki | group | admission | wasp sting [SUMMARY]
null
[CONTENT] patients | hematuria | macroscopic hematuria | macroscopic | wasp | sting | aki | group | admission | wasp sting [SUMMARY]
null
[CONTENT] wasp | wasp stings | wasp sting | stings | sting | china | severe wasp | wasp sting patients | sting patients | patients [SUMMARY]
null
[CONTENT] patients | group | hematuria | macroscopic hematuria | macroscopic | macroscopic hematuria group | hematuria group | non macroscopic hematuria | non macroscopic | 001 [SUMMARY]
null
[CONTENT] patients | hematuria | macroscopic | macroscopic hematuria | wasp | group | macroscopic hematuria group | hematuria group | sting | wasp sting [SUMMARY]
null
[CONTENT] Macroscopic hematuria | Asia | clinicians ||| [SUMMARY]
null
[CONTENT] 363 | 219 | 144 | 55.9 ± | 16.3 years ||| Fifty-one | 14% | hematuria | 39 | 10.7% | AKI | 105 | 28.9% | 61 | 16.8% | 45 | 12.4% | 14 | 3.9% ||| AKI | hematuria | 70.6% | 72.2% ||| hematuria | PSS | 2.2 ± 0.5 | 1.1 | 0.3 | .001 [SUMMARY]
null
[CONTENT] Macroscopic hematuria | Asia | clinicians ||| ||| 363 | Suining Central Hospital | January 2016 to December 2018 ||| PSS ||| hematuria | hematuria | hematuria ||| 363 | 219 | 144 | 55.9 ± | 16.3 years ||| Fifty-one | 14% | hematuria | 39 | 10.7% | AKI | 105 | 28.9% | 61 | 16.8% | 45 | 12.4% | 14 | 3.9% ||| AKI | hematuria | 70.6% | 72.2% ||| hematuria | PSS | 2.2 ± 0.5 | 1.1 | 0.3 | .001 ||| ||| Macroscopic hematuria ||| hematuria | AKI ||| PSS [SUMMARY]
null
Haematogenous osteoarticular infections in paediatric sickle cell trait patients: A reality in a tertiary centre in West Africa.
33595545
Sickle cell trait (SCT) affects at least 5.2% of the world population, and it is considered asymptomatic by medical practitioners. There is a paucity of data regarding SCT paediatric patients and haematogenous osteoarticular infections (HOAIs). In our practice, some children with SCT presented with HOAIs. This study aims to describe the pattern of HOAIs in children with SCT admitted to our unit.
BACKGROUND
A single-centre retrospective study of medical records of SCT paediatric patients treated for HOAIs between January 2012 and June 2019 was performed. The data extracted were epidemiologic (gender, age at diagnosis, history of haemoglobinopathy and ethnic group), diagnostic (time to diagnosis, type of infection and fraction of haemoglobin S [HbS] at standard electrophoresis of Hb), germs and complications.
MATERIALS AND METHODS
Among 149 patients with haemoglobinopathy treated for HOAIs, 52 had SCT. The prevalence of SCT patients was 34.9%. Thirty-nine (n = 39) records were retained for the study. The average age at diagnosis was 7.18 ± 4.59 years (7 months-15 years). The Malinké ethnic group was found in 22 (56.4%) cases. The mean HbS fraction was 37.2% ± 4.3% (30%-46%). Septic arthritis and osteoarthritis involved the hip in 11 cases, the shoulder in 4 cases and the knee in 2. Osteomyelitis was acute in 5 cases (11.1%) and chronic in 16 (35.5%). None of the patients had multifocal involvement. Bacterial identification was positive in 17 cases (37.8%). Staphylococcus aureus was involved in 9 cases (52.9%), and in one case, it was Mycobacterium tuberculosis. This patient had a psoas abscess. No patient was infected by human immunodeficiency virus. The sequelae were joint destruction (n = 2), epiphysiodesis (n = 5) and retractile scars (n = 2).
RESULTS
Although relatively infrequent in our daily practice, HOAIs do occur in SCT patients. These infections had characteristics not very different from those reported in the literature.
CONCLUSION
[ "Adolescent", "Africa, Western", "Arthritis, Infectious", "Child", "Child, Preschool", "Female", "Humans", "Male", "Osteomyelitis", "Prevalence", "Retrospective Studies", "Sickle Cell Trait", "Tertiary Care Centers" ]
8109757
INTRODUCTION
Sickle cell disease (SCD), a hereditary disease with recessive autosomal transmission, is the most common haemoglobinopathy encountered worldwide, and sub-Saharan Africa bears the greatest load. Sickle cell trait (SCT) is the heterozygous condition (haemoglobin AS [HbAS]), with one copy of the normal haemoglobin gene (HbA) and one copy of the sickle cell gene (HbS).[1,2] SCT affects at least 5.2% of the world population.[3] Its incidence varies greatly among races and ethnic groups[4,5] and from state to state: 13.3% in Uganda,[6] around 20% in Cameroon,[7] 0.49% in the southern suburb of Beirut and 1.1%–9.8% in Brazil.[5] Three types of manifestations are seen in SCD patients: chronic haemolytic anaemia, vaso-occlusive crises and haematogenous infections. These infections affect many organs, including bones and joints, which are major targets in children. Carriers of SCT are said to be asymptomatic by medical practitioners.[8–10] Because the condition is considered benign, there is a paucity of data available regarding haematogenous osteoarticular infections (HOAIs) in SCT patients. Few published studies have reported cases of HOAIs in SCT paediatric patients among their study populations.[11–13] In our practice, some SCT patients present with HOAIs. This study therefore aims to describe the characteristics of HOAIs in children with SCT admitted to Yopougon Teaching Hospital.
null
null
RESULTS
Among 149 patients with haemoglobinopathy treated for HOAIs, 52 had SCT. The prevalence of SCT among sickle cell patients was 34.9%. The prevalence among patients with HOAIs was 10.3% (52/502). Thirty-nine records were retained for the study [Figure 1]. The average age at diagnosis was 7.18 ± 4.59 years (range: 7 months to 15 years), with a male-to-female ratio of 1.8:1. The Malinké ethnic group was found in 22 (56.4%) cases. The mean HbS fraction was 37.2% ± 4.3% (range: 30%–46%). Septic arthritis was predominant in the age group of 0–5 years (n = 9). The patient characteristics are listed in Table 1. Septic arthritis and osteoarthritis involved the hip in 11 cases, the shoulder in four cases and the knee in two cases. Osteomyelitis was acute in 5 cases (11.1%) and chronic in 16 cases (35.5%) [Figure 2]. Osteomyelitis involved the femur (n = 7), tibia (n = 9), humerus (n = 4) and radius (n = 1). None of the patients had multifocal involvement. A pathogen was isolated in 17 cases (37.8%) [Table 2]. Staphylococcus aureus was involved in 9 cases (52.9%), and in one case, the organism was Mycobacterium tuberculosis. The patient with M. tuberculosis had developed a psoas abscess with vertebral osteolysis; he had a history of tuberculosis contact and was fitted with a corset after surgical drainage [Figure 3]. No patient was infected with HIV. The treatment was surgical in 34 cases (75.5%). Among the patients with septic arthritis and osteoarthritis, 13 underwent surgical drainage, whereas the remaining 4 underwent needle aspiration. Chronic osteomyelitis was treated surgically, either by sequestrectomy-drainage or by fistulectomy-drainage. The sequelae were joint destruction (n = 2), epiphysiodesis (n = 5), joint stiffness (n = 3), residual pain (n = 5), retractile scars (n = 2) and keloids (n = 3). Figure 1: Flow chart of file selection. Figure 2: Chronic osteomyelitis of the tibia with sequestrum and proximal epiphysiodesis. Table 2: Causative microorganisms. Figure 3: (a and b) A 13-year-old boy with spondylodiscitis (computed tomography scans show abscess of the psoas and vertebral osteolysis).
CONCLUSION
Although relatively infrequent in our daily practice, HOAIs do occur in SCT patients. These HOAIs had characteristics not very different from those of series in less developed countries in terms of late diagnosis and difficulty in identifying the causative organism. First-line antibiotics must be directed against S. aureus. Further cross-sectional and prospective studies are necessary to better define the characteristics of these infections in the setting of SCT. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
[ "Financial support and sponsorship" ]
[ "Nil." ]
[ null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "RESULTS", "DISCUSSION", "CONCLUSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Sickle cell disease (SCD), a hereditary disease with recessive autosomal transmission, is the most common haemoglobinopathy encountered worldwide, and sub-Saharan Africa bears the greatest load. Sickle cell trait (SCT) is the heterozygous conditions (haemoglobin AS [HbAS]) with one copy of the normal haemoglobin gene (HbA) and one copy of the sickle cell gene (HbS).[12] SCT affects at least 5.2% of the world population.[3] Its incidence greatly varies among different races and ethnic groups[45] and from state to state: 13.3% in Uganda,[6] around 20% in Cameroun,[7] 0.49% in the southern suburb of Beirut and 1.1%–9.8% in Brazil.[5] Three types of manifestations are seen in SCD patients: chronic haemolytic anaemia, vaso-occlusive crises and haematogenous infections. These infections affect many organs such as bones and joints, which are major targets in children. Carriers of the SCT are said to be asymptomatic by medical practitioners.[8910] Because of this benign condition, there is a paucity of data available regarding haematogenous osteoarticular infections (HOAIs) in SCT patients. Few published studies reported some cases of HOAIs in SCT paediatric patients among their study population.[111213] In our practice, some SCT patients are encountered presenting HOAIs. This study, therefore, aims at describing the characteristics of HOAIs in children with SCT admitted at Yopougon Teaching Hospital.", "A single-centre retrospective study of medical records of children with haemoglobinopathy treated for HOAIs between January 2012 and June 2019 was performed. Included were all records of children with SCT (HbAS with A > S >30%). Incomplete or missing records and those of other genotypes were excluded. HOAI diagnosis was based on clinical, laboratory tests, imaging and bacteriological data. Acute osteomyelitis was associated with bone pains, fever and absence of chronic suppuration, whereas chronic osteomyelitis was associated with chronic suppuration with or without sequestrum or pathologic fracture or fistulae. Association of fever, pains, swelling of the joints, hypomobility of the limb, with the presence of suppurative fluid in the joints cavity defined septic arthritis. Osteoarthritis was associated with fever, pain, swelling, loss of joint flexibility, suppurative fluid in the joint cavity and metaphyseal or epiphyseal bone lesions. The data extracted were epidemiologic (gender, age at diagnosis, history of haemoglobinopathy and ethnic group), diagnostic (time to diagnosis, type of HOAI and fraction of HbS at standard electrophoresis of Hb), germs and complications. Bacteriological research included blood culture, needle aspiration or biopsy, swab of entry portal and cytobacteriological examination of pus [Table 1]. Testing for germs was done by standard techniques. For technical reasons, anaerobic germs were not tested. The polymerase chain reaction (PCR) was performed by five patients. Human immunodeficiency virus (HIV) serology was performed in all cases of HOAIs. The antibiotherapy strategy consisted of administered intravenous Amoxiclav® (100–150 mg/kg/24 h) or Ceftriaxone® (100–150 mg/kg/24 h) with or without Gentamicin® (3–5 mg/kg/24 h during 5 days via intramuscular route). The concerned member was also immobilised. Upper-limb immobilisation was the Mayo Clinic splint for the shoulder and a splint or cast for the forearm. For hip involvement, axial skin traction was performed, followed by hip cast for 4–6 weeks. 
In the lower limb, a splint or cast was applied, and the cast was windowed for local care or monitoring. Data were processed using Excel 2010 (Microsoft Office® 2010), and quantitative variables were presented as mean ± standard deviation.\nDemographic and clinical data\nSD: Standard deviation", "Among 149 patients with haemoglobinopathy treated for HOAIs, 52 have SCT. The prevalence of SCT among sickle cell patients was 34.9%. The prevalence among patients with HOAI was 10.3% (52/502). Records were retained for the study [Figure 1]. The average age at diagnosis was 7.18 ± 4.59 years (range: 7 months and 15 years) with a male-to-female ratio of 1.8:1. The Malinké ethnic group was found in 22 (56.4%) cases. The mean HbS fraction was 37.2% ± 4.3% (range: 30%–46%). Septic arthritis was predominant in the age group of 0–5 years (n = 9). The patient characteristics are listed in Table 1. Septic arthritis and osteoarthritis involved the hip in 11 cases, the shoulder in four cases and the knee in two cases. Osteomyelitis was acute in 5 cases (11.1%) and chronic in 16 cases (35.5%) [Figure 2]. Osteomyelitis involved the femur (n = 7), tibia (n = 9), humerus (n = 4) and radius (n = 1). None of the patients has multifocal involvements. A pathogen was isolated in 17 cases (37.8%) [Table 2]. Staphylococcus aureus was involved in 9 cases (52.9%), and in one case, it was Mycobacterium tuberculosis. The patient with M. tuberculosis has developed the psoas abscess with vertebral osteolysis. He had a history of tuberculosis contagion and had a corset after surgical drainage [Figure 3]. No patient was infected by HIV. The treatment was surgical in 34 cases (75.5%). Among the patients with septic arthritis and osteoarthritis, 13 underwent surgical drainage, whereas the remaining 4 patients have needle aspiration. Chronic osteomyelitis cases were treated surgically either by sequestrectomy-drainage or by fistulectomy-drainage. The sequelae were joint destruction (n = 2), epiphysiodesis (n = 5), joint stiffness (n = 3), residual pain (n = 5), retractile scars (n = 2) and keloids (n = 3).\nFlow charts of file selection\nChronic osteomyelitis of the tibia with sequestrum and proximal epiphysiodesis\nCausative microorganisms\n(a and b) A 13-year-old boy with spondylodiscitis (computed tomography scans show abscess of the psoas and vertebral osteolysis)", "The objective of this study was to describe the characteristics of HOAIs in SCT patients. This study has several limitations due to its retrospective nature. It was monocentric and the sample size was small. Identification of germs was limited. Anaerobic organisms have not been tested for technical reasons. Due to its high cost, the PCR, which has more diagnostic value than standard techniques, was only performed by five patients. Follow-up remains difficult because patients do not return for consultation once the infection has subsided.\nHOAIs are one of the major causes of hospitalisation and morbidity among paediatric SCD patients.[457] However, there is a paucity of data of these HOAIs in SCT patients. The incidence of SCT varies from state to state.[45] It varies from 7% to 10% among African descendants in the USA[8911141516] and 1%–40% in Africa.[1117] There are no national data of SCT in Côte d’Ivoire.\nThe slave trade and migration of population were the common factors for SCT dissemination in America and Europe.[18] In our country, this trait remains disseminated, particularly in the Malinké ethnic group in which consanguineous marriages are cultural. 
These consanguineous marriages are favoured to eliminate social risk and to provide security to women and children by strengthening family ties and preserving wealth. Our findings are consistent with previous studies reporting that SCT varies among races and ethnic group.[4519]\nThe lack of a national screening program and poor community awareness about SCT in our health system explains the fact that only a third of our patients had a known Hb status on admission. This was similar to the 29% reported by Nwadiaro et al. in Nigeria.[13] However, in the USA, despite neonatal screening, only 16% of Americans with SCT know their status in Ohio State.[220]\nThe early screening techniques such as isoelectric focusing or high-performance liquid chromatography are of very recent introduction in our hospital.\nTraditionally, SCT has been viewed as a benign condition, a non-disease, partially protective against death from falciparum malaria[2122] and without any of the painful episodes characteristic of the homozygous SCD.[23] However, most adverse events or complications have been reported: (1) fatal exertional heat illness with exercise; (2) sudden idiopathic death with exercise; (3) glaucoma or recurrent hyphema following the first episode of hyphema; (4) splenic infarction at high altitude, with exercise or with hypoxemia; (5) renal medullary carcinoma in young people; (6) isosthenuria with loss of maximal renal concentrating ability; (7) haematuria secondary to renal papillary necrosis; (8) bacteriuria in women; (9) bacteriuria or pyelonephritis associated with pregnancy and (10) early onset of end-stage renal disease from autosomal dominant polycystic kidney.[114212324] SCT might also increase the risk of venous thromboembolism.[25] Certain authors have reported that the risk of sudden death due to exercise stress was 30 times higher among black American soldiers with SCT.[1] Most are case reports and range from firemen dying during training exercises, a 6-year-old playing in the park, a teenager running forced participation in a juvenile justice boot camp and a 13-year-old running from the police.[1]\nHowever, literature is poorer about data of HOAIs in SCT carriers. Few published studies reported some cases of HOAIs in SCT patients in their study population.[111213] In Akakpo-Numado et al.[11] study in Togo, among 43 children who developed osteomyelitis, 11 (25.6%) had SCT. In the Tambo's series in Cameroon, of the 25 patients treated for septic arthritis, six (24%) had SCT.[12] Nwadiaro et al. have also reported musculoskeletal infections in SCT patients (8/24; 34%) in their study population.[13]\nThe clinical features and complications in SCT patients are not very different from those of common HOAIs in children. These HOAIs were characterised by a delayed diagnosis which remains common and multifactorial in sub-Saharan Africa: (a) first-intention trend to use prayer houses or traditional healers, (b) self-medication with street drugs, (c) diagnostic errors, (d) delayed referrals to definitive care centres and (e) lack of medical expertise outside tertiary centres. Because of late referrals, we resorted to surgery for fistulas and bone sequesters for all cases of chronic osteomyelitis. This management has a significant social and economic impact affecting families in our area where health insurance is lacking. 
Contrary to homozygous forms, none of our patients presented multifocal involvements.\nThe predominance of septic arthritis in the 0–5 years’ age group confirmed the literature data, and the hip was more concerned.[26] On these toddlers, the metaphyseal vessel loop and epiphyseal vessel are connected via transphyseal vessels traversing across the growth plate. Therefore, spread of metaphyseal infection to the epiphysis and joints can occur via transphyseal vessels. This septic arthritis predominated where bone proximal metaphysis was intra-articular such as femur, humerus and radius.\nOsteomyelitis predominated on the lower limbs and the most commonly involved sites are those with the fastest-growing metaphysis, where blood flow is rich but sluggish. Our findings are consistent with the literature data.[27]\nIn this series, the identification of germs was limited for practical and technical reasons. S. aureus was the predominant germ. SCD has classically been associated with Salmonella infections, but more series[262728] noted a predominance of staphylococcal infections in both osteomyelitis and septic arthritis. This is believed to be due to its high adhesion capacity to bone through its surface receptors. The isolation of Proteus mirabilis, a commensal germ from the digestive tract and generally responsible for urinary and skin infections, could be explained by the presence of a skin portal of entry in this patient. The patient with M. tuberculosis had a history of tuberculosis contagion.\nEarly diagnosis and quick management is the best way to prevent extended bone destruction, joint damage and premature epiphysiodesis observed in this study. Early screening followed by family and patient education may provide additional value beyond the potential to inform or change reproduction behaviours, because these children are at risk of having a child with SCT or SCD.[2] Furthermore, we think that the possible occurrence of these adverse events above requires periodic monitoring such as annual ophthalmologist or nephrologist visit.", "Relatively infrequent in our daily practice, SCT patients present with HOAIs. These HOAIs had characteristics that are not very different from the series in less developed countries in terms of late diagnosis and difficulties of germ research. First-line antibiotic must be directed against S. aureus. Further cross-sectional and prospective studies are necessary to well define the characteristics of these infections on SCT ground.\nFinancial support and sponsorship Nil.\nNil.\nConflicts of interest There are no conflicts of interest.\nThere are no conflicts of interest.", "Nil.", "There are no conflicts of interest." ]
[ "intro", "materials|methods", "results", "discussion", "conclusion", null, "COI-statement" ]
[ "Children", "osteoarticular infection", "sickle cell trait" ]
INTRODUCTION: Sickle cell disease (SCD), a hereditary disease with recessive autosomal transmission, is the most common haemoglobinopathy encountered worldwide, and sub-Saharan Africa bears the greatest load. Sickle cell trait (SCT) is the heterozygous conditions (haemoglobin AS [HbAS]) with one copy of the normal haemoglobin gene (HbA) and one copy of the sickle cell gene (HbS).[12] SCT affects at least 5.2% of the world population.[3] Its incidence greatly varies among different races and ethnic groups[45] and from state to state: 13.3% in Uganda,[6] around 20% in Cameroun,[7] 0.49% in the southern suburb of Beirut and 1.1%–9.8% in Brazil.[5] Three types of manifestations are seen in SCD patients: chronic haemolytic anaemia, vaso-occlusive crises and haematogenous infections. These infections affect many organs such as bones and joints, which are major targets in children. Carriers of the SCT are said to be asymptomatic by medical practitioners.[8910] Because of this benign condition, there is a paucity of data available regarding haematogenous osteoarticular infections (HOAIs) in SCT patients. Few published studies reported some cases of HOAIs in SCT paediatric patients among their study population.[111213] In our practice, some SCT patients are encountered presenting HOAIs. This study, therefore, aims at describing the characteristics of HOAIs in children with SCT admitted at Yopougon Teaching Hospital. MATERIALS AND METHODS: A single-centre retrospective study of medical records of children with haemoglobinopathy treated for HOAIs between January 2012 and June 2019 was performed. Included were all records of children with SCT (HbAS with A > S >30%). Incomplete or missing records and those of other genotypes were excluded. HOAI diagnosis was based on clinical, laboratory tests, imaging and bacteriological data. Acute osteomyelitis was associated with bone pains, fever and absence of chronic suppuration, whereas chronic osteomyelitis was associated with chronic suppuration with or without sequestrum or pathologic fracture or fistulae. Association of fever, pains, swelling of the joints, hypomobility of the limb, with the presence of suppurative fluid in the joints cavity defined septic arthritis. Osteoarthritis was associated with fever, pain, swelling, loss of joint flexibility, suppurative fluid in the joint cavity and metaphyseal or epiphyseal bone lesions. The data extracted were epidemiologic (gender, age at diagnosis, history of haemoglobinopathy and ethnic group), diagnostic (time to diagnosis, type of HOAI and fraction of HbS at standard electrophoresis of Hb), germs and complications. Bacteriological research included blood culture, needle aspiration or biopsy, swab of entry portal and cytobacteriological examination of pus [Table 1]. Testing for germs was done by standard techniques. For technical reasons, anaerobic germs were not tested. The polymerase chain reaction (PCR) was performed by five patients. Human immunodeficiency virus (HIV) serology was performed in all cases of HOAIs. The antibiotherapy strategy consisted of administered intravenous Amoxiclav® (100–150 mg/kg/24 h) or Ceftriaxone® (100–150 mg/kg/24 h) with or without Gentamicin® (3–5 mg/kg/24 h during 5 days via intramuscular route). The concerned member was also immobilised. Upper-limb immobilisation was the Mayo Clinic splint for the shoulder and a splint or cast for the forearm. For hip involvement, axial skin traction was performed, followed by hip cast for 4–6 weeks. 
In the lower limb, a splint or cast was applied, and the cast was windowed for local care or monitoring. Data were processed using Excel 2010 (Microsoft Office® 2010), and quantitative variables were presented as mean ± standard deviation. Demographic and clinical data SD: Standard deviation RESULTS: Among 149 patients with haemoglobinopathy treated for HOAIs, 52 have SCT. The prevalence of SCT among sickle cell patients was 34.9%. The prevalence among patients with HOAI was 10.3% (52/502). Records were retained for the study [Figure 1]. The average age at diagnosis was 7.18 ± 4.59 years (range: 7 months and 15 years) with a male-to-female ratio of 1.8:1. The Malinké ethnic group was found in 22 (56.4%) cases. The mean HbS fraction was 37.2% ± 4.3% (range: 30%–46%). Septic arthritis was predominant in the age group of 0–5 years (n = 9). The patient characteristics are listed in Table 1. Septic arthritis and osteoarthritis involved the hip in 11 cases, the shoulder in four cases and the knee in two cases. Osteomyelitis was acute in 5 cases (11.1%) and chronic in 16 cases (35.5%) [Figure 2]. Osteomyelitis involved the femur (n = 7), tibia (n = 9), humerus (n = 4) and radius (n = 1). None of the patients has multifocal involvements. A pathogen was isolated in 17 cases (37.8%) [Table 2]. Staphylococcus aureus was involved in 9 cases (52.9%), and in one case, it was Mycobacterium tuberculosis. The patient with M. tuberculosis has developed the psoas abscess with vertebral osteolysis. He had a history of tuberculosis contagion and had a corset after surgical drainage [Figure 3]. No patient was infected by HIV. The treatment was surgical in 34 cases (75.5%). Among the patients with septic arthritis and osteoarthritis, 13 underwent surgical drainage, whereas the remaining 4 patients have needle aspiration. Chronic osteomyelitis cases were treated surgically either by sequestrectomy-drainage or by fistulectomy-drainage. The sequelae were joint destruction (n = 2), epiphysiodesis (n = 5), joint stiffness (n = 3), residual pain (n = 5), retractile scars (n = 2) and keloids (n = 3). Flow charts of file selection Chronic osteomyelitis of the tibia with sequestrum and proximal epiphysiodesis Causative microorganisms (a and b) A 13-year-old boy with spondylodiscitis (computed tomography scans show abscess of the psoas and vertebral osteolysis) DISCUSSION: The objective of this study was to describe the characteristics of HOAIs in SCT patients. This study has several limitations due to its retrospective nature. It was monocentric and the sample size was small. Identification of germs was limited. Anaerobic organisms have not been tested for technical reasons. Due to its high cost, the PCR, which has more diagnostic value than standard techniques, was only performed by five patients. Follow-up remains difficult because patients do not return for consultation once the infection has subsided. HOAIs are one of the major causes of hospitalisation and morbidity among paediatric SCD patients.[457] However, there is a paucity of data of these HOAIs in SCT patients. The incidence of SCT varies from state to state.[45] It varies from 7% to 10% among African descendants in the USA[8911141516] and 1%–40% in Africa.[1117] There are no national data of SCT in Côte d’Ivoire. 
The slave trade and migration of population were the common factors for SCT dissemination in America and Europe.[18] In our country, this trait remains disseminated, particularly in the Malinké ethnic group in which consanguineous marriages are cultural. These consanguineous marriages are favoured to eliminate social risk and to provide security to women and children by strengthening family ties and preserving wealth. Our findings are consistent with previous studies reporting that SCT varies among races and ethnic group.[4519] The lack of a national screening program and poor community awareness about SCT in our health system explains the fact that only a third of our patients had a known Hb status on admission. This was similar to the 29% reported by Nwadiaro et al. in Nigeria.[13] However, in the USA, despite neonatal screening, only 16% of Americans with SCT know their status in Ohio State.[220] The early screening techniques such as isoelectric focusing or high-performance liquid chromatography are of very recent introduction in our hospital. Traditionally, SCT has been viewed as a benign condition, a non-disease, partially protective against death from falciparum malaria[2122] and without any of the painful episodes characteristic of the homozygous SCD.[23] However, most adverse events or complications have been reported: (1) fatal exertional heat illness with exercise; (2) sudden idiopathic death with exercise; (3) glaucoma or recurrent hyphema following the first episode of hyphema; (4) splenic infarction at high altitude, with exercise or with hypoxemia; (5) renal medullary carcinoma in young people; (6) isosthenuria with loss of maximal renal concentrating ability; (7) haematuria secondary to renal papillary necrosis; (8) bacteriuria in women; (9) bacteriuria or pyelonephritis associated with pregnancy and (10) early onset of end-stage renal disease from autosomal dominant polycystic kidney.[114212324] SCT might also increase the risk of venous thromboembolism.[25] Certain authors have reported that the risk of sudden death due to exercise stress was 30 times higher among black American soldiers with SCT.[1] Most are case reports and range from firemen dying during training exercises, a 6-year-old playing in the park, a teenager running forced participation in a juvenile justice boot camp and a 13-year-old running from the police.[1] However, literature is poorer about data of HOAIs in SCT carriers. Few published studies reported some cases of HOAIs in SCT patients in their study population.[111213] In Akakpo-Numado et al.[11] study in Togo, among 43 children who developed osteomyelitis, 11 (25.6%) had SCT. In the Tambo's series in Cameroon, of the 25 patients treated for septic arthritis, six (24%) had SCT.[12] Nwadiaro et al. have also reported musculoskeletal infections in SCT patients (8/24; 34%) in their study population.[13] The clinical features and complications in SCT patients are not very different from those of common HOAIs in children. These HOAIs were characterised by a delayed diagnosis which remains common and multifactorial in sub-Saharan Africa: (a) first-intention trend to use prayer houses or traditional healers, (b) self-medication with street drugs, (c) diagnostic errors, (d) delayed referrals to definitive care centres and (e) lack of medical expertise outside tertiary centres. Because of late referrals, we resorted to surgery for fistulas and bone sequesters for all cases of chronic osteomyelitis. 
This management has a significant social and economic impact on families in our area, where health insurance is lacking. Contrary to homozygous forms, none of our patients presented with multifocal involvement. The predominance of septic arthritis in the 0–5 years age group confirms the literature, and the hip was the joint most often affected.[26] In these toddlers, the metaphyseal vessel loops and epiphyseal vessels are connected by transphyseal vessels that cross the growth plate; metaphyseal infection can therefore spread to the epiphysis and joint via these transphyseal vessels. Septic arthritis predominated where the proximal metaphysis is intra-articular, as in the femur, humerus and radius. Osteomyelitis predominated in the lower limbs, and the most commonly involved sites were those with the fastest-growing metaphyses, where blood flow is rich but sluggish. Our findings are consistent with the literature.[27] In this series, identification of causative organisms was limited for practical and technical reasons. S. aureus was the predominant organism. SCD has classically been associated with Salmonella infections, but several series[262728] have noted a predominance of staphylococcal infections in both osteomyelitis and septic arthritis. This is believed to be due to the high adhesion capacity of S. aureus to bone through its surface receptors. The isolation of Proteus mirabilis, a commensal organism of the digestive tract generally responsible for urinary and skin infections, could be explained by the presence of a cutaneous portal of entry in this patient. The patient with M. tuberculosis had a history of tuberculosis contact. Early diagnosis and prompt management are the best ways to prevent the extensive bone destruction, joint damage and premature epiphysiodesis observed in this study. Early screening followed by family and patient education may provide additional value beyond the potential to inform or change reproductive behaviour, because these children are at risk of having a child with SCT or SCD.[2] Furthermore, we think that the possible occurrence of the adverse events listed above warrants periodic monitoring, such as an annual ophthalmology or nephrology visit. CONCLUSION: Although relatively infrequent in our daily practice, SCT patients do present with HOAIs. These HOAIs had characteristics that are not very different from series in other less developed countries in terms of late diagnosis and difficulty in identifying the causative organism. First-line antibiotic therapy should be directed against S. aureus. Further cross-sectional and prospective studies are needed to better define the characteristics of these infections in the setting of SCT. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: Sickle cell trait (SCT) affects at least 5.2% of the world population, and it is considered asymptomatic by medical practitioners. There is a paucity of data regarding SCT paediatric patients and haematogenous osteoarticular infections (HOAIs). In our practice, some children with SCT presented with HOAIs. This study aims to describe the pattern of HOAIs in children with SCT admitted to our unit. Methods: A single-centre retrospective study of medical records of SCT paediatric patients treated for HOAIs between January 2012 and June 2019 was performed. The data extracted were epidemiologic (gender, age at diagnosis, history of haemoglobinopathy and ethnic group), diagnostic (time to diagnosis, type of infection and fraction of haemoglobin S [HbS] at standard electrophoresis of Hb), causative organisms and complications. Results: Among 149 patients with haemoglobinopathy treated for HOAIs, 52 had SCT. The prevalence of SCT among these patients was 34.9%. Thirty-nine (n = 39) records were retained for the study. The average age at diagnosis was 7.18 ± 4.59 years (7 months-15 years). The Malinké ethnic group was found in 22 (56.4%) cases. The mean HbS fraction was 37.2% ± 4.3% (30%-46%). Septic arthritis and osteoarthritis involved the hip in 11 cases, the shoulder in 4 and the knee in 2. Osteomyelitis was acute in 5 cases (11.1%) and chronic in 16 (35.5%). None of the patients had multifocal involvement. Bacterial identification was positive in 17 cases (37.8%). Staphylococcus aureus was involved in 9 cases (52.9%), and in one case, it was Mycobacterium tuberculosis. This patient had a psoas abscess. No patient was infected with human immunodeficiency virus. The sequelae were joint destruction (n = 2), epiphysiodesis (n = 5) and retractile scars (n = 2). Conclusions: Although relatively infrequent in our daily practice, SCT patients do present with HOAIs. These infections had characteristics that are not very different from those reported in the literature.
INTRODUCTION: Sickle cell disease (SCD), a hereditary disease with recessive autosomal transmission, is the most common haemoglobinopathy encountered worldwide, and sub-Saharan Africa bears the greatest burden. Sickle cell trait (SCT) is the heterozygous condition (haemoglobin AS [HbAS]) with one copy of the normal haemoglobin gene (HbA) and one copy of the sickle cell gene (HbS).[12] SCT affects at least 5.2% of the world population.[3] Its incidence varies greatly among different races and ethnic groups[45] and from country to country: 13.3% in Uganda,[6] around 20% in Cameroon,[7] 0.49% in the southern suburb of Beirut and 1.1%–9.8% in Brazil.[5] Three types of manifestations are seen in SCD patients: chronic haemolytic anaemia, vaso-occlusive crises and haematogenous infections. These infections affect many organs, including bones and joints, which are major targets in children. Carriers of SCT are said to be asymptomatic by medical practitioners.[8910] Because of this presumed benign condition, there is a paucity of data available regarding haematogenous osteoarticular infections (HOAIs) in SCT patients. Few published studies have reported cases of HOAIs in SCT paediatric patients among their study populations.[111213] In our practice, some SCT patients present with HOAIs. This study, therefore, aims to describe the characteristics of HOAIs in children with SCT admitted at Yopougon Teaching Hospital. CONCLUSION: Although relatively infrequent in our daily practice, SCT patients do present with HOAIs. These HOAIs had characteristics that are not very different from series in other less developed countries in terms of late diagnosis and difficulty in identifying the causative organism. First-line antibiotic therapy should be directed against S. aureus. Further cross-sectional and prospective studies are needed to better define the characteristics of these infections in the setting of SCT. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: Sickle cell trait (SCT) affects at least 5.2% of the world population, and it is considered asymptomatic by medical practitioners. There is a paucity of data regarding SCT paediatric patients and haematogenous osteoarticular infections (HOAIs). In our practice, some children with SCT presented with HOAIs. This study aims to describe the pattern of HOAIs in children with SCT admitted to our unit. Methods: A single-centre retrospective study of medical records of SCT paediatric patients treated for HOAIs between January 2012 and June 2019 was performed. The data extracted were epidemiologic (gender, age at diagnosis, history of haemoglobinopathy and ethnic group), diagnostic (time to diagnosis, type of infection and fraction of haemoglobin S [HbS] at standard electrophoresis of Hb), causative organisms and complications. Results: Among 149 patients with haemoglobinopathy treated for HOAIs, 52 had SCT. The prevalence of SCT among these patients was 34.9%. Thirty-nine (n = 39) records were retained for the study. The average age at diagnosis was 7.18 ± 4.59 years (7 months-15 years). The Malinké ethnic group was found in 22 (56.4%) cases. The mean HbS fraction was 37.2% ± 4.3% (30%-46%). Septic arthritis and osteoarthritis involved the hip in 11 cases, the shoulder in 4 and the knee in 2. Osteomyelitis was acute in 5 cases (11.1%) and chronic in 16 (35.5%). None of the patients had multifocal involvement. Bacterial identification was positive in 17 cases (37.8%). Staphylococcus aureus was involved in 9 cases (52.9%), and in one case, it was Mycobacterium tuberculosis. This patient had a psoas abscess. No patient was infected with human immunodeficiency virus. The sequelae were joint destruction (n = 2), epiphysiodesis (n = 5) and retractile scars (n = 2). Conclusions: Although relatively infrequent in our daily practice, SCT patients do present with HOAIs. These infections had characteristics that are not very different from those reported in the literature.
2,449
396
[ 2 ]
7
[ "sct", "patients", "hoais", "cases", "study", "data", "osteomyelitis", "chronic", "arthritis", "children" ]
[ "normal haemoglobin gene", "chronic haemolytic anaemia", "haemoglobinopathy encountered worldwide", "introduction sickle cell", "prevalence sct sickle" ]
null
[CONTENT] Children | osteoarticular infection | sickle cell trait [SUMMARY]
null
[CONTENT] Children | osteoarticular infection | sickle cell trait [SUMMARY]
[CONTENT] Children | osteoarticular infection | sickle cell trait [SUMMARY]
[CONTENT] Children | osteoarticular infection | sickle cell trait [SUMMARY]
[CONTENT] Children | osteoarticular infection | sickle cell trait [SUMMARY]
[CONTENT] Adolescent | Africa, Western | Arthritis, Infectious | Child | Child, Preschool | Female | Humans | Male | Osteomyelitis | Prevalence | Retrospective Studies | Sickle Cell Trait | Tertiary Care Centers [SUMMARY]
null
[CONTENT] Adolescent | Africa, Western | Arthritis, Infectious | Child | Child, Preschool | Female | Humans | Male | Osteomyelitis | Prevalence | Retrospective Studies | Sickle Cell Trait | Tertiary Care Centers [SUMMARY]
[CONTENT] Adolescent | Africa, Western | Arthritis, Infectious | Child | Child, Preschool | Female | Humans | Male | Osteomyelitis | Prevalence | Retrospective Studies | Sickle Cell Trait | Tertiary Care Centers [SUMMARY]
[CONTENT] Adolescent | Africa, Western | Arthritis, Infectious | Child | Child, Preschool | Female | Humans | Male | Osteomyelitis | Prevalence | Retrospective Studies | Sickle Cell Trait | Tertiary Care Centers [SUMMARY]
[CONTENT] Adolescent | Africa, Western | Arthritis, Infectious | Child | Child, Preschool | Female | Humans | Male | Osteomyelitis | Prevalence | Retrospective Studies | Sickle Cell Trait | Tertiary Care Centers [SUMMARY]
[CONTENT] normal haemoglobin gene | chronic haemolytic anaemia | haemoglobinopathy encountered worldwide | introduction sickle cell | prevalence sct sickle [SUMMARY]
null
[CONTENT] normal haemoglobin gene | chronic haemolytic anaemia | haemoglobinopathy encountered worldwide | introduction sickle cell | prevalence sct sickle [SUMMARY]
[CONTENT] normal haemoglobin gene | chronic haemolytic anaemia | haemoglobinopathy encountered worldwide | introduction sickle cell | prevalence sct sickle [SUMMARY]
[CONTENT] normal haemoglobin gene | chronic haemolytic anaemia | haemoglobinopathy encountered worldwide | introduction sickle cell | prevalence sct sickle [SUMMARY]
[CONTENT] normal haemoglobin gene | chronic haemolytic anaemia | haemoglobinopathy encountered worldwide | introduction sickle cell | prevalence sct sickle [SUMMARY]
[CONTENT] sct | patients | hoais | cases | study | data | osteomyelitis | chronic | arthritis | children [SUMMARY]
null
[CONTENT] sct | patients | hoais | cases | study | data | osteomyelitis | chronic | arthritis | children [SUMMARY]
[CONTENT] sct | patients | hoais | cases | study | data | osteomyelitis | chronic | arthritis | children [SUMMARY]
[CONTENT] sct | patients | hoais | cases | study | data | osteomyelitis | chronic | arthritis | children [SUMMARY]
[CONTENT] sct | patients | hoais | cases | study | data | osteomyelitis | chronic | arthritis | children [SUMMARY]
[CONTENT] sct | sickle cell | cell | sickle | patients | hoais | infections | haematogenous | haemoglobin | encountered [SUMMARY]
null
[CONTENT] cases | drainage | patients | surgical | 52 | figure | osteomyelitis | patient | years | tuberculosis [SUMMARY]
[CONTENT] interest | conflicts interest | conflicts | interest conflicts | conflicts interest conflicts interest | conflicts interest conflicts | interest conflicts interest | nil | characteristics | sct [SUMMARY]
[CONTENT] nil | conflicts interest | conflicts | interest | sct | patients | hoais | cases | infections | osteomyelitis [SUMMARY]
[CONTENT] nil | conflicts interest | conflicts | interest | sct | patients | hoais | cases | infections | osteomyelitis [SUMMARY]
[CONTENT] SCT | at least 5.2% ||| SCT ||| SCT ||| SCT [SUMMARY]
null
[CONTENT] 149 | 52 | SCT ||| SCT | 34.9% ||| Thirty-nine | 39 ||| 7.18 | 4.59 years | 7 months-15 years ||| Malinké | 22 | 56.4% ||| HbS | 37.2% | 4.3% | 30%-46% ||| 11 | 4 | 2 ||| 5 | 11.1% | 16 | 35.5% ||| ||| 17 | 37.8% ||| 9 | 52.9% | one ||| ||| ||| 2 | 5 | 2 [SUMMARY]
[CONTENT] daily | SCT ||| [SUMMARY]
[CONTENT] SCT | at least 5.2% ||| SCT ||| SCT ||| SCT ||| SCT | between January 2012 and June 2019 ||| ||| ||| Hb ||| ||| 149 | 52 | SCT ||| SCT | 34.9% ||| Thirty-nine | 39 ||| 7.18 | 4.59 years | 7 months-15 years ||| Malinké | 22 | 56.4% ||| HbS | 37.2% | 4.3% | 30%-46% ||| 11 | 4 | 2 ||| 5 | 11.1% | 16 | 35.5% ||| ||| 17 | 37.8% ||| 9 | 52.9% | one ||| ||| ||| 2 | 5 | 2 ||| daily | SCT ||| [SUMMARY]
[CONTENT] SCT | at least 5.2% ||| SCT ||| SCT ||| SCT ||| SCT | between January 2012 and June 2019 ||| ||| ||| Hb ||| ||| 149 | 52 | SCT ||| SCT | 34.9% ||| Thirty-nine | 39 ||| 7.18 | 4.59 years | 7 months-15 years ||| Malinké | 22 | 56.4% ||| HbS | 37.2% | 4.3% | 30%-46% ||| 11 | 4 | 2 ||| 5 | 11.1% | 16 | 35.5% ||| ||| 17 | 37.8% ||| 9 | 52.9% | one ||| ||| ||| 2 | 5 | 2 ||| daily | SCT ||| [SUMMARY]
Quantity and Source of Protein during Complementary Feeding and Infant Growth: Evidence from a Population Facing Double Burden of Malnutrition.
36235599
While high protein intake during infancy may increase obesity risk, low protein quality and quantity contribute to undernutrition. This study aimed to investigate the impact of the amount and source of protein on infant growth during complementary feeding (CF) in a country where under- and overnutrition co-exist as the so-called double burden of malnutrition.
BACKGROUND
A multicenter, prospective cohort was conducted. Healthy term infants were enrolled with dietary and anthropometric assessments at 6, 9 and 12 months (M). Blood samples were collected at 12M for IGF-1, IGFBP-3 and insulin analyses.
METHODS
A total of 145 infants were enrolled (49.7% female). Animal source foods (ASFs) were the main protein source and showed a positive, dose-response relationship with weight-for-age, weight-for-length and BMI z-scores after adjusting for potential confounders. However, dairy protein had a greater impact on those parameters than non-dairy ASFs, while plant-based protein had no effect. These findings were supported by higher levels of IGF-1, IGFBP-3 and insulin following a higher intake of dairy protein. None of the protein sources were associated with linear growth.
RESULTS
This study showed the distinctive impact of different protein sources during CF on infant growth. A high intake of dairy protein, mainly from infant formula, had a greater impact on weight gain and growth-related hormones.
CONCLUSIONS
[ "Animals", "Dietary Proteins", "Hormones", "Infant", "Infant Formula", "Infant Nutritional Physiological Phenomena", "Insulin-Like Growth Factor Binding Protein 3", "Insulin-Like Growth Factor I", "Insulins", "Malnutrition", "Prospective Studies" ]
9572535
1. Introduction
The double burden of malnutrition (DBM)—the coexistence of under- and overnutrition—represents an emerging public health problem globally, especially for those in lower- and middle-income countries (LMICs) where eating habits are being transformed towards Westernized diets and lifestyles [1]. The World Health Organization (WHO) has emphasized the need for “double-duty actions” to prevent both forms of malnutrition [2]. While undernutrition and overweight were initially considered to affect different groups, they are increasingly recognized to occur within individuals through the life course [3]. This might potentially affect younger age groups, resulting in the combination of poor linear growth and overweight; however, evidence is lacking. This framework also recognizes that the two forms of malnutrition may have common risk factors in the form of unhealthy diets and environments [3,4]. Optimizing early-life nutrition by improving maternal nutrition, encouraging exclusive breastfeeding, and promoting appropriate complementary feeding (CF) are among the important actions identified to overcome the DBM [5]. The CF period is one of great change, when infants are introduced to foods other than milk [6] and protein intake increases; the percent of energy from protein (%PE) typically rises from around 5% to 15% when complementary foods become the major energy source for breastfed infants [7]. Protein is a key macronutrient promoting growth. However, to our knowledge, no studies have yet focused on the association between protein intake and growth in the context of the DBM. Previous studies have typically investigated the impact of protein intake in early life in populations facing either undernutrition or overnutrition [8], but not both. In high-income settings, research has highlighted the association between “too much” dietary protein in early life and the increased risk of overweight/obesity in later childhood, while previous studies in resource-limited countries have focused on the association of “too little” high-quality protein with undernutrition and, particularly, stunting. There are no studies from LMICs investigating the effect of high protein intake in early life on the growth of infants and young children [8]. Furthermore, it is unclear whether all protein sources (i.e., dairy protein, non-dairy animal-based protein (ABP) and plant-based protein (PBP)) have the same effect on growth [9,10,11,12]. The most robust evidence, from a large, multi-center randomized controlled trial (RCT) in five European countries, reported that high protein intake from formula during infancy significantly increased weight gain, but not linear growth, in children aged 2 and 6 years [13,14]. Additionally, this RCT demonstrated that infants who received high-protein formula had significantly higher plasma insulin-like growth factor-1 (IGF-1) and urine C-peptide, an insulin derivative, at 6 months of age compared to those fed with low-protein formula or that were breastfed. However, the impact of other protein sources during CF was not investigated. During infancy, nutrition is the most important factor promoting growth via the GH–IGF axis [15], while amino acids derived from dietary protein are associated with IGF-1 and insulin secretion [16,17,18,19,20]. The aim of this study was to investigate associations of the amount and source of protein intake during the CF period with the growth of infants in Thailand, where the DBM is prevalent.
Furthermore, plasma insulin, serum IGF-1 and insulin-like growth factor binding protein 3 (IGFBP-3) were also measured to support the clinical outcomes. We aimed to tackle a key question in the context of the DBM: how a specific component of infant diet may relate to markers of both undernutrition and overweight in early life.
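Since %PE is the central exposure in what follows, a minimal worked example of the calculation may help; the 4 kcal/g energy factor for protein is the standard Atwater value, while the intake figures used here are hypothetical and not taken from the cohort.

```python
def percent_energy_from_protein(protein_g: float, total_kcal: float) -> float:
    """Share of total daily energy supplied by protein, in percent (4 kcal per g of protein)."""
    return protein_g * 4.0 / total_kcal * 100.0

# Hypothetical 9-month-old: 25 g protein/day out of 800 kcal/day
print(f"{percent_energy_from_protein(25, 800):.1f} %PE")  # 12.5 %PE
```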
null
null
3. Results
3.1. Demographic Data One hundred and fifty healthy term infants were enrolled. There were four dropouts, and one infant was excluded due to developing a multiple food-protein allergy during the study period (Figure S2). Data from 145 infants (96.7%) were thus available for analysis. As shown in Table 1, there were almost equal numbers of male and female infants, while nearly two thirds of the infants were first-born. Mean parental age was around 30 years, and more than 50% attained at least a college degree. The majority of infants were living in extended, middle-class families where most families received a higher monthly income than the minimum wage in Thailand.
3.2. Prevalence of Malnutrition in the Study Population At 12 months of age, the percentages of infants with wasting, underweight and stunting were 3.5, 4.1 and 4.8%, respectively, while only one infant (0.7%) was overweight. No infants in this cohort were classified as obese, or both wasted and stunted. According to the parental reports, over one-third of mothers and nearly two-thirds of fathers had overweight or obesity. Therefore, the prevalence of DBM at household level where underweight infants lived with parents who had overweight/obesity was 6.2% of all families.
3.3. Complementary Feeding Practices and Nutrient Intakes Notably, 44.1% of infants were exclusively breastfed until 6 months of age, while 36.6% of all infants continued to receive only breast milk along with complementary foods until 12 months of age (Table 2). The mean age of introduction of CF was 5.7 ± 0.6 months. The most common first complementary food was rice with cooked egg yolk, while other non-dairy proteins such as meats and organ meats were introduced later. Mean protein intake during CF rapidly increased and reached its highest value at 12 months. In general, infants consumed more dietary protein than the Dietary Reference Intake for Thais 2020 (Thai DRI), as well as the intake recommended by the WHO. At 9 and 12 months of age, protein intakes were 2 to 3 times higher than the Thai and international recommendations (Figure 1). The average percentage of energy from protein (%PE) was 7.8, 12.6 and 15.6% at 6, 9 and 12 months of age, respectively. Figure 1 shows mean protein intake (g/kg/day) of infants during complementary feeding. Compared to the recommendations suggested by the Thai dietary recommended intake (Thai DRI), the Institute of Medicine (IOM), the World Health Organization (WHO)/Food and Agriculture Organization (FAO)/United Nations International Children’s Emergency Fund (UNICEF) and the European Food Safety Authority (ESFA), the protein intake of this study population rapidly increased from 6 to 12 months and was higher than all recommendations at 9 and 12 months of age.
3.4. Association between Dietary Protein and Growth Outcomes Infants were categorized into three groups based on the average %PE from 6 to 12 months of age; those in the highest and lowest quartiles had %PE ≥12.9% and ≤10.9%, respectively, while the median group received protein between these values. Infants in the highest quartile had significantly higher WAZ, WLZ and BMIZ at 12 months, while there was no significant difference in LAZ between groups (Table 3). Conditional weight-related z-scores (i.e., WAZ, WLZ and BMIZ) of infants in the high protein intake group were significantly higher compared to the median and low protein intake groups (Figure 2: 95%CIs shown in Table S1), indicating that infants in the high protein intake group gained more weight than expected, given their baseline z-score at 6 months of age. However, there was no difference in the prevalence of all forms of malnutrition (i.e., underweight, wasting, stunting and overweight/obesity) between protein intake groups. Figure 2 illustrates conditional growth status at 12 months. Infants who consumed protein in the highest quartile (black bar) had significantly higher conditional WAZ, WLZ and BMIZ compared to infants receiving protein in the median (dark grey bar) and lowest quartile (light grey bar). Although conditional LAZ was higher in the high protein intake group compared with other groups, there was no significant difference.
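The conditional z-scores used here follow the approach described in the methods: the 12-month z-score is regressed on the 6-month z-score, and the residual (how much larger or smaller a child is than expected given earlier size) is the outcome. The sketch below illustrates one common implementation on synthetic data; it is not the authors' SPSS code, and whether the residual was standardised is an assumption.

```python
# Conditional growth sketch: residual of 12-month WAZ after regressing on 6-month WAZ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
waz_6m = rng.normal(0.0, 1.0, 145)                  # baseline WAZ at 6 months (synthetic)
waz_12m = 0.7 * waz_6m + rng.normal(0.0, 0.6, 145)  # follow-up WAZ at 12 months (synthetic)

X = sm.add_constant(waz_6m)                         # intercept + baseline size
fit = sm.OLS(waz_12m, X).fit()
residual = waz_12m - fit.predict(X)                 # deviation from the size expected at 12 months
conditional_waz = residual / residual.std(ddof=1)   # > 0: grew more than expected

print(conditional_waz[:5].round(2))
```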
According to CF recommendations in Thailand [27] (Table S2), infants should be given three main meals (i.e., breakfast, lunch, and dinner) from 9 months; thus, protein intakes during the early (6–9 months old) and later stages (9–12 months old) of CF were expected to be quite different. Therefore, average %PE during the early and later CF periods were separated for univariate analyses investigating the association of protein intake with conditional growth outcomes. The results in Table 4 indicate that protein intake from 9–12 months was significantly associated with conditional growth outcomes, whilst protein intake from 6–9 months of age was not. Thus, only protein intake from 9–12 months of age was included in the subsequent analyses. According to the DAGs (Figure 3), the suggested covariates for the multiple linear regression model investigating the association of protein intake with linear growth were duration of predominant breastfeeding, type of milk feeding, non-protein energy intake at 6–12 months, maternal education, frequency of illness and family income. For ponderal growth including WAZ, WLZ and BMIZ, the DAG suggested duration of predominant breastfeeding, type of milk feeding, non-protein energy, maternal education, frequency of illness, maternal BMI and maternal age as covariates. Figure 3 illustrates DAGs predicting conditional growth status (for either linear or ponderal growth) by protein intake during the CF period (main predictor). Black arrows indicate causal paths between the main predictors and outcomes. Dashed-grey arrows represent bias paths. Boxes with black frames show potential confounders. All covariates suggested by the DAGs were included in the multiple regression models. There was no association between conditional LAZ and %PE from 9 to 12 months, or other covariates (Table 5). However, %PE from 9 to 12 months was associated with conditional WAZ, WLZ, and BMIZ (95%CI varied between 0.02 and 0.20). Considering different protein sources, only %PE from milk/dairy and non-dairy protein from 9 to 12 months were significantly associated with the weight-related parameters, while PBP was not (Table 5). Protein intake from milk had a stronger association with conditional weight-related parameters compared to other protein sources based on effect size (regression co-efficient (β)). To differentiate the effect of milk protein from breast milk and that from dairy/infant formula on conditional growth outcomes, %PE from breast milk was subtracted from the %PE from formula, cow’s milk and other dairy products. The resulting variable was called “%PE from dairy vs. breast milk”. As shown in Figure 4, this variable was directly associated with weight-related parameters, suggesting that greater %PE from formula milk and dairy rather than breast milk was significantly associated with higher conditional WAZ, WLZ and BMIZ. The findings suggest that a 1% increase in daily %PE from formula and dairy from 9 to 12 months of age was associated with a 0.18 (95%CI, 0.03, 0.32) and 0.16 (95%CI, 0.01, 0.30) standard deviation score (SDS) increase in conditional WAZ and WLZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI, and frequency of illness. 
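For readers who want to see the shape of the adjusted models behind these estimates, the sketch below fits a linear model of conditional WAZ on %PE at 9–12 months together with the DAG-selected covariates. The data are synthetic and the variable names are placeholders; the study's actual covariate coding is not reported here.

```python
# Sketch of an adjusted model in the spirit of Table 5 (synthetic data, placeholder variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 145
df = pd.DataFrame({
    "pe_9_12m": rng.normal(12, 2, n),                      # %PE from 9 to 12 months
    "bf_months": rng.uniform(0, 6, n),                     # duration of predominant breastfeeding
    "milk_type": rng.choice(["breast", "formula", "mixed"], n),
    "nonprotein_kcal": rng.normal(700, 80, n),
    "maternal_edu": rng.choice(["school", "college"], n),
    "illness_episodes": rng.poisson(1, n),
    "maternal_bmi": rng.normal(24, 3, n),
    "maternal_age": rng.normal(30, 5, n),
})
df["conditional_waz"] = 0.1 * (df["pe_9_12m"] - 12) + rng.normal(0, 1, n)

fit = smf.ols(
    "conditional_waz ~ pe_9_12m + bf_months + C(milk_type) + nonprotein_kcal"
    " + C(maternal_edu) + illness_episodes + maternal_bmi + maternal_age",
    data=df,
).fit()
# Coefficient = SDS change in conditional WAZ per 1-point increase in %PE, with its 95% CI
print(fit.params["pe_9_12m"], fit.conf_int().loc["pe_9_12m"].values)
```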
A 1% increase in daily %PE from non-dairy ASFs from 9 to 12 months was also associated with a 0.10 (95%CI 0.02, 0.18), 0.12 (95%CI 0.04, 0.20) and 0.10 (95%CI 0.01, 0.18) SDS increase in conditional WAZ, WLZ and BMIZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI and frequency of illness. Scatter plots demonstrate associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk), subtracted from the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and conditional growth at 12 months. The scatter plots show dose–response, positive associations between %PE dairy source vs. breast milk, and conditional WAZ, WLZ and BMIZ, but not conditional LAZ.
3.5. Association between Dietary Protein Intake and Blood Levels of IGF-1, IGFBP-3 and Insulin at 12 Months of Age In order to investigate the association between dietary protein and weight-related growth parameters, IGF-1, IGFBP-3 and insulin were investigated at 12 months of age. Milk protein was the only food source that showed a significantly positive association with circulating IGF-1, IGFBP-3 and insulin (Table 6). However, a stronger association was found between “%PE from dairy vs. breast milk” and the IGF-1 level, suggesting the consumption of more %PE from formula and dairy than from breast milk was associated more strongly with IGF-1 level. As shown in Figure 5, there were positive dose–response relationships of “%PE from dairy vs. breast milk” with all growth-related hormones after adjusting for sex. A 1% greater %PE from formula and dairy was associated with increasing blood concentrations of IGF-1, IGFBP-3 and insulin by 2.34 (95%CI 1.44, 3.23) ng/mL, 33.41 (95%CI 9.46, 57.37) ng/mL and 4 (95%CI 1, 7) %, respectively. Mean IGF-1, IGFBP-3 and insulin stratified by sex are given in Table S3. Scatter plots illustrate the associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk) subtracted from the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and blood concentrations of IGF-1, IGFBP-3 and insulin at 12 months. The scatter plots show dose–response, positive associations between %PE dairy source vs. breast milk, and all laboratory markers after controlling for sex.
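To make the hormone coefficients concrete, the short calculation below translates them into predicted differences for an arbitrary 5-percentage-point higher %PE from dairy versus breast milk; treating the insulin estimate as multiplicative assumes the insulin model was fitted on the log scale, which the percentage effect and the Ln-transformation described in the methods suggest but do not state explicitly for this outcome.

```python
# Worked use of the reported dose-response estimates (coefficients taken from the text above).
delta_pe = 5               # hypothetical difference in %PE from dairy vs. breast milk (percentage points)

igf1_per_pe = 2.34         # ng/mL higher IGF-1 per 1 %PE
igfbp3_per_pe = 33.41      # ng/mL higher IGFBP-3 per 1 %PE
insulin_pct_per_pe = 0.04  # 4% higher insulin per 1 %PE (treated as multiplicative)

print(f"IGF-1:   +{igf1_per_pe * delta_pe:.1f} ng/mL")          # +11.7 ng/mL
print(f"IGFBP-3: +{igfbp3_per_pe * delta_pe:.0f} ng/mL")        # +167 ng/mL
print(f"Insulin: x{(1 + insulin_pct_per_pe) ** delta_pe:.2f}")  # about 1.22-fold
```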
5. Conclusions
The present cohort provides evidence from a middle-income country that different protein sources may have contrasting influences on infant growth. While high protein intake from ASFs, especially formula and cow’s milk, during the CF period was associated with higher weight gain in a dose–response manner, the study did not find an effect on linear growth. Importantly, higher levels of IGF-1, IGFBP-3 and insulin in infants consuming higher amounts of protein from formula and cow’s milk provided mechanistic support for the clinical findings. However, further studies in populations facing the DBM and nutritional transition are needed to confirm the key findings from this cohort and to investigate the relationship between dietary protein and body composition. A longer follow-up period is also needed to determine whether the infants in this population who consumed higher protein intakes have a greater risk of overweight/obesity at later ages.
[ "3.1. Demographic Data", "3.2. Prevalence of Malnutrition in the Study Population", "3.3. Complementary Feeding Practices and Nutrient Intakes", "3.4. Association between Dietary Protein and Growth Outcomes", "3.5. Association between Dietary Protein Intake and Blood Levels of IGF-1, IGFBP-3 and Insulin at 12 Months of Age" ]
[ "One hundred and fifty healthy term infants were enrolled. There were four dropouts, and one infant was excluded due to developing a multiple food-protein allergy during the study period (Figure S2). Data from 145 infants (96.7%) were thus available for analysis. As shown in Table 1, there were almost equal numbers of male and female infants, while nearly two thirds of the infants were first-born. Mean parental age was around 30 years, and more than 50% attained at least a college degree. The majority of infants were living in extended, middle-class families where most families received a higher monthly income than the minimum wage in Thailand.", "At 12 months of age, the percentages of infants with wasting, underweight and stunting were 3.5, 4.1 and 4.8%, respectively, while only one infant (0.7%) was overweight. No infants in this cohort were classified as obese, or both wasted and stunted. According to the parental reports, over one-third of mothers and nearly two-thirds of fathers had overweight or obesity. Therefore, the prevalence of DBM at household level where underweight infants lived with parents who had overweight/obesity was 6.2% of all families.", "Notably, 44.1% of infants were exclusively breastfed until 6 months of age, while 36.6% of all infants continued to receive only breast milk along with complementary foods until 12 months of age (Table 2). The mean age of introduction of CF was 5.7 ± 0.6 months. The most common first complementary food was rice with cooked egg yolk, while other non-dairy proteins such as meats and organ meats were introduced later. Mean protein intake during CF rapidly increased and reached its highest value at 12 months. In general, infants consumed more dietary protein than the Dietary Reference Intake for Thais 2020 (Thai DRI), as well as the intake recommended by the WHO. At 9 and 12 months of age, protein intakes were 2 to 3 times higher than the Thai and international recommendations (Figure 1). The average percentage of energy from protein (%PE) was 7.8, 12.6 and 15.6% at 6, 9 and 12 months of age, respectively.\nFigure 1 shows mean protein intake (g/kg/day) of infants during complementary feeding. Compared to the recommendations suggested by the Thai dietary recommended intake (Thai DRI), the Institute of Medicine (IOM), the World Health Organization (WHO)/Food and Agriculture Organization (FAO)/United Nations International Children’s Emergency Fund (UNICEF) and the European Food Safety Authority (ESFA), the protein intake of this study population rapidly increased from 6 to 12 months and was higher than all recommendations at 9 and 12 months of age.", "Infants were categorized into three groups based on the average %PE from 6 to 12 months of age; those in the highest and lowest quartiles had %PE ≥12.9% and ≤10.9%, respectively, while the median group received protein between these values. Infants in the highest quartile had significantly higher WAZ, WLZ and BMIZ at 12 months, while there was no significant difference in LAZ between groups (Table 3). Conditional weight-related z-scores (i.e., WAZ, WLZ and BMIZ) of infants in the high protein intake group were significantly higher compared to the median and low protein intake groups (Figure 2: 95%CIs shown in Table S1), indicating that infants in the high protein intake group gained more weight than expected, given their baseline z-score at 6 months of age. 
However, there was no difference in the prevalence of all forms of malnutrition (i.e., underweight, wasting, stunting and overweight/obesity) between protein intake groups.\nFigure 2 illustrates conditional growth status at 12 months. Infants who consumed protein in the highest quartile (black bar) had significantly higher conditional WAZ, WLZ and BMIZ compared to infants receiving protein in the median (dark grey bar) and lowest quartile (light grey bar). Although conditional LAZ was higher in the high protein intake group compared with other groups, there was no significant difference.\nAccording to CF recommendations in Thailand [27] (Table S2), infants should be given three main meals (i.e., breakfast, lunch, and dinner) from 9 months; thus, protein intakes during the early (6–9 months old) and later stages (9–12 months old) of CF were expected to be quite different. Therefore, average %PE during the early and later CF periods were separated for univariate analyses investigating the association of protein intake with conditional growth outcomes. The results in Table 4 indicate that protein intake from 9–12 months was significantly associated with conditional growth outcomes, whilst protein intake from 6–9 months of age was not. Thus, only protein intake from 9–12 months of age was included in the subsequent analyses.\nAccording to the DAGs (Figure 3), the suggested covariates for the multiple linear regression model investigating the association of protein intake with linear growth were duration of predominant breastfeeding, type of milk feeding, non-protein energy intake at 6–12 months, maternal education, frequency of illness and family income. For ponderal growth including WAZ, WLZ and BMIZ, the DAG suggested duration of predominant breastfeeding, type of milk feeding, non-protein energy, maternal education, frequency of illness, maternal BMI and maternal age as covariates.\nFigure 3 illustrates DAGs predicting conditional growth status (for either linear or ponderal growth) by protein intake during the CF period (main predictor). Black arrows indicate causal paths between the main predictors and outcomes. Dashed-grey arrows represent bias paths. Boxes with black frames show potential confounders.\nAll covariates suggested by the DAGs were included in the multiple regression models. There was no association between conditional LAZ and %PE from 9 to 12 months, or other covariates (Table 5). However, %PE from 9 to 12 months was associated with conditional WAZ, WLZ, and BMIZ (95%CI varied between 0.02 and 0.20). Considering different protein sources, only %PE from milk/dairy and non-dairy protein from 9 to 12 months were significantly associated with the weight-related parameters, while PBP was not (Table 5). Protein intake from milk had a stronger association with conditional weight-related parameters compared to other protein sources based on effect size (regression co-efficient (β)).\nTo differentiate the effect of milk protein from breast milk and that from dairy/infant formula on conditional growth outcomes, %PE from breast milk was subtracted from the %PE from formula, cow’s milk and other dairy products. The resulting variable was called “%PE from dairy vs. breast milk”. 
As shown in Figure 4, this variable was directly associated with weight-related parameters, suggesting that greater %PE from formula milk and dairy rather than breast milk was significantly associated with higher conditional WAZ, WLZ and BMIZ.\nThe findings suggest that a 1% increase in daily %PE from formula and dairy from 9 to 12 months of age was associated with a 0.18 (95%CI, 0.03, 0.32) and 0.16 (95%CI, 0.01, 0.30) standard deviation score (SDS) increase in conditional WAZ and WLZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI, and frequency of illness.\nA 1% increase in daily %PE from non-dairy ASFs from 9 to 12 months was also associated with a 0.10 (95%CI 0.02, 0.18), 0.12 (95%CI 0.04, 0.20) and 0.10 (95%CI 0.01, 0.18) SDS increase in conditional WAZ, WLZ and BMIZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI and frequency of illness.\nScatter plots demonstrate associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk), subtracted from the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and conditional growth at 12 months. The scatter plots show dose–response, positive associations between %PE dairy source vs. breast milk, and conditional WAZ, WLZ and BMIZ, but not conditional LAZ.", "In order to investigate the association between dietary protein and weight-related growth parameters, IGF-1, IGFBP-3 and insulin were investigated at 12 months of age. Milk protein was the only food source that showed a significantly positive association with circulating IGF-1, IGFBP-3 and insulin (Table 6). However, a stronger association was found between “%PE from dairy vs. breast milk” and the IGF-1 level, suggesting the consumption of more %PE from formula and dairy than from breast milk was associated more strongly with IGF-1 level.\nAs shown in Figure 5, there were positive dose–response relationships of “%PE from dairy vs. breast milk” with all growth-related hormones after adjusting for sex. A 1% greater %PE from formula and dairy was associated with increasing blood concentrations of IGF-1, IGFBP-3 and insulin by 2.34 (95%CI 1.44, 3.23) ng/mL, 33.41 (95%CI 9.46, 57.37) ng/mL and 4 (95%CI 1, 7) %, respectively. Mean IGF-1, IGFBP-3 and insulin stratified by sex are given in Table S3.\nScatter plots illustrate the associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk) subtracted from the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and blood concentrations of IGF-1, IGFBP-3 and insulin at 12 months. The scatter plots show dose–response, positive associations between %PE dairy source vs. breast milk, and all laboratory markers after controlling of sex." ]
[ null, null, null, null, null ]
[ "1. Introduction", "2. Subjects and Methods", "3. Results", "3.1. Demographic Data", "3.2. Prevalence of Malnutrition in the Study Population", "3.3. Complementary Feeding Practices and Nutrient Intakes", "3.4. Association between Dietary Protein and Growth Outcomes", "3.5. Association between Dietary Protein Intake and Blood Levels of IGF-1, IGFBP-3 and Insulin at 12 Months of Age", "4. Discussion", "5. Conclusions" ]
[ "The double burden of malnutrition (DBM)—the coexistence of under- and overnutrition—represents an emerging public health problem globally, especially for those in lower- and middle-income countries (LMICs) where eating habits are being transformed towards Westernized diets and lifestyles [1]. The World Health Organization (WHO) has emphasized the need for “double-duty actions” to prevent both forms of malnutrition [2]. While undernutrition and overweight were initially considered to affect different groups, they are increasingly recognized to occur within individuals through the life course3. This might potentially affect younger age groups, resulting in the combination of poor linear growth and overweight; however, evidence is lacking. This framework also recognizes that the two forms of malnutrition may have common risk factors in the form of unhealthy diets and environments [3,4]. Optimizing early-life nutrition by improving maternal nutrition, encouraging exclusive breastfeeding, and promoting appropriate complementary feeding (CF) are among the important actions identified to overcome the DBM [5].\nThe CF period is one of great change, when infants are introduced to foods other than milk [6] and protein intake increases; the percent of energy from protein (%PE) typically rises from around 5% to 15% when complementary foods become the major energy source for breastfed infants [7]. Protein is a key macronutrient promoting growth. However, to our knowledge, no studies have yet focused on the association between protein intake and growth in the context of the DBM. Previous studies have typically investigated the impact of protein intake in early life in populations facing either undernutrition or overnutrition [8], but not both.\nIn high-income settings, research has highlighted the association between “too much” dietary protein in early life and the increased risk of overweight/obesity in later childhood, while previous studies in resource-limited countries have focused on the association of “too little” high-quality protein with undernutrition and, particularly, stunting. There are no studies from LMICs investigating the effect of high protein intake in early life on the growth of infants and young children [8]. Furthermore, it is unclear whether all protein sources (i.e., dairy protein, non-dairy animal-based protein (ABP) and plant-based protein (PBP)) have the same effect on growth [9,10,11,12]. The most robust evidence, from a large, multi-center randomized controlled trial (RCT) in five European countries, reported that high protein intake from formula during infancy significantly increased weight gain, but not linear growth, in children aged 2 and 6 years [13,14].\nAdditionally, this RCT demonstrated that infants who received high-protein formula had significantly higher plasma insulin-like growth factor-1 (IGF-1) and urine C-peptide, an insulin derivative, at 6 months of age compared to those fed with low-protein formula or that were breastfed. However, the impact of other protein sources during CF was not investigated. During infancy, nutrition is the most important factor promoting growth via the GH–IGF axis [15], while amino acids derived from dietary protein are associated with IGF-1 and insulin secretion [16,17,18,19,20].\nThe aim of this study was to investigate associations of the amount and source of protein intake during the CF period with the growth of infants in Thailand, where the DBM is prevalent. 
Furthermore, plasma insulin, serum IGF-1 and insulin-like growth factor binding protein 3 (IGFBP-3) were also measured to support the clinical outcomes. We aimed to tackle a key question in the context of the DBM: how a specific component of infant diet may relate to markers of both undernutrition and overweight in early life.", "A multicenter, prospective, cohort study was conducted at three well-baby clinics in Chiang Mai, Thailand, between June 2018 and May 2019. Healthy term infants with birth weight ≥2500 g were recruited at age 4–6 months. Exclusion criteria were: infants with any underlying or chronic diseases; known cases of, or recovery from, protein-energy malnutrition; or those who regularly received medication except mineral and vitamin supplementation. Parents and legal guardians were provided with study information and gave written informed consent before enrollment. Ethics approval was obtained from the University College London Ethics committee, United Kingdom (Approval ID: 12551/001), and the Ethics Committee of the Faculty of Medicine, Chiang Mai University, Thailand (Approval ID: PED-2561-05287).\nData including demographics of infants, family characteristics, growth measurements and dietary assessments were collected at 6, 9 and 12 months (M) during routine child health surveillance clinic visits. Body weight and the recumbent length of infants were measured by trained health professionals using an electronic scale (TScale Electronics Mfg. Co., Ltd., Kunshan, Taiwan, precision ± 5 g) and a standard wooden measuring board (precision ± 0.1 cm). The weight-for-age (WAZ), weight-for-length (WLZ), BMI (BMIZ) and length-for-age (LAZ) z-scores (standard deviation scores) were calculated using WHO Anthro version 3.2.2 [21]. Stunting, wasting and underweight were defined as LAZ, WLZ or WAZ <−2 standard deviation score (SDS), respectively. Overweight and obesity were defined as WLZ more than +2 SDS [22]. The primary outcome was conditional growth at 12 months (see below).\nDietary intake was estimated using a food frequency questionnaire (FFQ) for the semi-quantitative estimation of habitual intake alongside a 24 h recall interview (24-HR) at all time points, and a 3-day food record (3-DFR) was also collected at 9 and 12 months for quantitative estimation. Initially, dietary data from the 3-DFRs were used to estimate the average energy and nutrient intakes at 9 and 12 months of age, while the 24-HRs were used to estimate those intakes at 6 months of age. However, in cases where the-3DFR was missing, dietary intake from the 24-HR was used instead. The FFQ was used to confirm the portion size if data from either the 3-DFR or 24-HR were unclear. Dietary intakes were converted to energy and nutrients using the Institute of Nutrition Mahidol University Calculation (INMUCAL)-Nutrient program version 4.0 (2018) developed by the Institute of Nutrition, Mahidol University, Bangkok, Thailand [23]. This programme provided information on total energy consumption (kcal/day), crude intakes of all macronutrients (g/day) and 8 micronutrients, as well as the caloric distribution from each macronutrient. In addition, the program also separately reported protein and iron intake from ASFs and plant-based foods (Figure S1).\nVenous blood samples were obtained at 12 months of age. In total, approximately 2 mL of venous blood was obtained and kept at −20 °C until analyses were undertaken. 
Serum IGF-1 and IGFBP-3 were analyzed by a solid-phase, enzyme-labeled chemiluminescent immunometric assay using the IMMULITE® 2000 system (Siemens Healthcare Diagnostics Products Inc., Devault, PA, USA). The intra- and inter-assay variation in these tests was less than 8%. Plasma insulin was analyzed by an electrochemiluminescence technique using the COBAS® e411 analyzer (Roche Diagnostics Inc., Basel, Switzerland). The repeatability and intermediate precision of this technique were less than 5%.\nStatistical analyses were performed using SPSS (IBM Corp. Released 2019. IBM SPSS Statistics for Windows, Version 26.0. Armonk, NY, USA: IBM Corp). The sample size calculation showed that at least 126 infants were needed to detect a difference of 0.5 z-score in WLZ at 12 months of age between infants who regularly received red meat and those who did not [24]. For analyses, non-normally distributed data were natural log (Ln)-transformed prior to use in the regression models. Conditional growth status was calculated as the z-score deviation from the average size of the study population at 12 months of age, controlling for baseline size at 6 months. Simple linear regression was used to develop a formula predicting the average size of the study population at 12 months; a positive or negative result indicated larger or smaller size than expected at follow-up, respectively, given an infant's earlier size [25]. Demographic data, prevalence of malnutrition, CF practices and nutrient intake are described as means ± standard deviation (SD) and percentages depending on data characteristics. To investigate associations between protein intake and outcomes of interest, bivariate correlation and general linear models were performed. Pearson’s correlations were used to demonstrate relationships between the variables. Regression analysis was used to investigate the association between the main predictor (protein intake) and the primary outcome (conditional growth at 12 months old) and secondary outcomes, including insulin, IGF-1 and IGFBP-3. In order to investigate the effect of different protein sources, protein intakes were also divided into three groups: (1) milk protein—breast milk, formula, cow’s milk and other dairy products; (2) non-dairy ABP—meats, eggs and meat products; (3) PBP—cereals and legumes. Covariates in the regression models were selected using a directed acyclic graph (DAG). A DAG is a graphical approach used to identify confounding variables and reduce the risk of selection bias when estimating causal effects in observational studies. The DAG applied in this study was created using DAGitty.net version 3.0, 2020 [26]. To demonstrate the magnitude of effect, both correlation and regression coefficients were also reported with their 95% confidence intervals (CIs). Statistical significance was defined by a p-value less than 0.05.", "3.1. Demographic Data\nOne hundred and fifty healthy term infants were enrolled. There were four dropouts, and one infant was excluded due to developing a multiple food-protein allergy during the study period (Figure S2). Data from 145 infants (96.7%) were thus available for analysis. As shown in Table 1, there were almost equal numbers of male and female infants, while nearly two thirds of the infants were first-born. Mean parental age was around 30 years, and more than 50% attained at least a college degree. 
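Two elements of the statistical-analysis description above can be made concrete with a short sketch: the sample-size statement and the conditional-growth outcome (the standardized deviation of 12-month size from the value predicted by 6-month size). The data and the power assumptions (80% power, two-sided alpha of 0.05) below are illustrative assumptions only; the published analysis was run in SPSS.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.power import TTestIndPower

    # Rough check of the stated sample size: ~64 infants per group (about 128 in total)
    # are needed to detect a 0.5 z-score difference under the assumptions above,
    # which lands close to the "at least 126" quoted in the text.
    print(round(TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)))

    # Conditional growth on simulated data: regress 12-month WAZ on 6-month WAZ and
    # take the standardized residual as the conditional z-score.
    rng = np.random.default_rng(0)
    n = 145
    waz_6m = rng.normal(0, 1, n)
    waz_12m = 0.8 * waz_6m + rng.normal(0, 0.6, n)

    X = sm.add_constant(waz_6m)
    expected_12m = sm.OLS(waz_12m, X).fit().predict(X)

    resid = waz_12m - expected_12m
    conditional_waz = resid / resid.std(ddof=1)  # >0 = larger than expected for baseline size
    print(conditional_waz[:5].round(2))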
The majority of infants were living in extended, middle-class families where most families received a higher monthly income than the minimum wage in Thailand.\n3.2. Prevalence of Malnutrition in the Study Population\nAt 12 months of age, the percentages of infants with wasting, underweight and stunting were 3.5, 4.1 and 4.8%, respectively, while only one infant (0.7%) was overweight. No infants in this cohort were classified as obese, or both wasted and stunted. According to the parental reports, over one-third of mothers and nearly two-thirds of fathers had overweight or obesity. Therefore, the prevalence of DBM at the household level, defined as an underweight infant living with parents who had overweight/obesity, was 6.2% of all families.\n3.3. Complementary Feeding Practices and Nutrient Intakes\nNotably, 44.1% of infants were exclusively breastfed until 6 months of age, while 36.6% of all infants continued to receive only breast milk along with complementary foods until 12 months of age (Table 2). The mean age of introduction of CF was 5.7 ± 0.6 months. The most common first complementary food was rice with cooked egg yolk, while other non-dairy proteins such as meats and organ meats were introduced later. Mean protein intake during CF rapidly increased and reached its highest value at 12 months. In general, infants consumed more dietary protein than the Dietary Reference Intake for Thais 2020 (Thai DRI), as well as the intake recommended by the WHO. At 9 and 12 months of age, protein intakes were 2 to 3 times higher than the Thai and international recommendations (Figure 1). The average percentage of energy from protein (%PE) was 7.8, 12.6 and 15.6% at 6, 9 and 12 months of age, respectively.\nFigure 1 shows mean protein intake (g/kg/day) of infants during complementary feeding. Compared to the recommendations suggested by the Thai DRI, the Institute of Medicine (IOM), the World Health Organization (WHO)/Food and Agriculture Organization (FAO)/United Nations International Children’s Emergency Fund (UNICEF) and the European Food Safety Authority (EFSA), the protein intake of this study population rapidly increased from 6 to 12 months and was higher than all recommendations at 9 and 12 months of age.\n3.4. Association between Dietary Protein and Growth Outcomes\nInfants were categorized into three groups based on the average %PE from 6 to 12 months of age; those in the highest and lowest quartiles had %PE ≥12.9% and ≤10.9%, respectively, while the median group received protein between these values. Infants in the highest quartile had significantly higher WAZ, WLZ and BMIZ at 12 months, while there was no significant difference in LAZ between groups (Table 3). Conditional weight-related z-scores (i.e., WAZ, WLZ and BMIZ) of infants in the high protein intake group were significantly higher compared to the median and low protein intake groups (Figure 2; 95%CIs shown in Table S1), indicating that infants in the high protein intake group gained more weight than expected, given their baseline z-score at 6 months of age. However, there was no difference in the prevalence of all forms of malnutrition (i.e., underweight, wasting, stunting and overweight/obesity) between protein intake groups.\nFigure 2 illustrates conditional growth status at 12 months. Infants who consumed protein in the highest quartile (black bar) had significantly higher conditional WAZ, WLZ and BMIZ compared to infants receiving protein in the median (dark grey bar) and lowest quartile (light grey bar). 
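The quartile-based grouping just described (lowest quartile, middle 50%, highest quartile of average %PE) might be implemented along the following lines; the DataFrame, its values and the one-way ANOVA used for the comparison are illustrative assumptions, not the authors' code.

    import numpy as np
    import pandas as pd
    from scipy.stats import f_oneway

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "pe_pct_6_12m":    rng.normal(11.9, 1.5, 145),  # average %PE, hypothetical values
        "conditional_wlz": rng.normal(0, 1, 145),
    })

    # Lowest quartile / middle 50% / highest quartile of average %PE.
    df["pe_group"] = pd.qcut(df["pe_pct_6_12m"], q=[0, 0.25, 0.75, 1.0],
                             labels=["low", "median", "high"])

    # Compare conditional growth across the three intake groups.
    groups = [g["conditional_wlz"].to_numpy()
              for _, g in df.groupby("pe_group", observed=True)]
    print(f_oneway(*groups))
    print(df.groupby("pe_group", observed=True)["pe_pct_6_12m"].agg(["min", "max"]).round(1))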
Although conditional LAZ was higher in the high protein intake group compared with other groups, there was no significant difference.\nAccording to CF recommendations in Thailand [27] (Table S2), infants should be given three main meals (i.e., breakfast, lunch, and dinner) from 9 months; thus, protein intakes during the early (6–9 months old) and later stages (9–12 months old) of CF were expected to be quite different. Therefore, average %PE during the early and later CF periods were separated for univariate analyses investigating the association of protein intake with conditional growth outcomes. The results in Table 4 indicate that protein intake from 9–12 months was significantly associated with conditional growth outcomes, whilst protein intake from 6–9 months of age was not. Thus, only protein intake from 9–12 months of age was included in the subsequent analyses.\nAccording to the DAGs (Figure 3), the suggested covariates for the multiple linear regression model investigating the association of protein intake with linear growth were duration of predominant breastfeeding, type of milk feeding, non-protein energy intake at 6–12 months, maternal education, frequency of illness and family income. For ponderal growth including WAZ, WLZ and BMIZ, the DAG suggested duration of predominant breastfeeding, type of milk feeding, non-protein energy, maternal education, frequency of illness, maternal BMI and maternal age as covariates.\nFigure 3 illustrates DAGs predicting conditional growth status (for either linear or ponderal growth) by protein intake during the CF period (main predictor). Black arrows indicate causal paths between the main predictors and outcomes. Dashed-grey arrows represent bias paths. Boxes with black frames show potential confounders.\nAll covariates suggested by the DAGs were included in the multiple regression models. There was no association between conditional LAZ and %PE from 9 to 12 months, or other covariates (Table 5). However, %PE from 9 to 12 months was associated with conditional WAZ, WLZ, and BMIZ (95%CI varied between 0.02 and 0.20). Considering different protein sources, only %PE from milk/dairy and non-dairy protein from 9 to 12 months were significantly associated with the weight-related parameters, while PBP was not (Table 5). Protein intake from milk had a stronger association with conditional weight-related parameters compared to other protein sources based on effect size (regression co-efficient (β)).\nTo differentiate the effect of milk protein from breast milk and that from dairy/infant formula on conditional growth outcomes, %PE from breast milk was subtracted from the %PE from formula, cow’s milk and other dairy products. The resulting variable was called “%PE from dairy vs. breast milk”. 
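A sketch of how the derived exposure just defined (“%PE from dairy vs. breast milk”) and the DAG-suggested covariates could enter an adjusted regression for conditional weight gain is shown below. All column names and simulated values are hypothetical; with random data the coefficient will not reproduce the published estimates, but with real data the coefficient on the derived variable would correspond to the SDS change per 1% of %PE reported in the text.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 145
    df = pd.DataFrame({
        "pe_formula_dairy": rng.uniform(0, 10, n),  # %PE from formula, cow's milk, dairy
        "pe_breast_milk":   rng.uniform(0, 8, n),   # %PE from breast milk
        "pe_nondairy_abp":  rng.uniform(0, 8, n),
        "pe_pbp":           rng.uniform(0, 4, n),
        "nonprotein_kcal":  rng.normal(700, 100, n),
        "predominant_bf_m": rng.uniform(0, 6, n),
        "milk_type":        rng.choice(["breast", "formula", "mixed"], n),
        "maternal_age":     rng.normal(30, 5, n),
        "maternal_educ":    rng.choice(["secondary", "college"], n),
        "maternal_bmi":     rng.normal(24, 4, n),
        "illness_episodes": rng.poisson(2, n),
        "conditional_waz":  rng.normal(0, 1, n),
    })

    # Derived exposure: %PE from formula/dairy minus %PE from breast milk.
    df["pe_dairy_vs_bm"] = df["pe_formula_dairy"] - df["pe_breast_milk"]

    # Adjusted model with the DAG-suggested covariates for ponderal growth.
    model = smf.ols(
        "conditional_waz ~ pe_dairy_vs_bm + pe_nondairy_abp + pe_pbp + nonprotein_kcal"
        " + predominant_bf_m + C(milk_type) + maternal_age + C(maternal_educ)"
        " + maternal_bmi + illness_episodes",
        data=df,
    ).fit()
    print(model.params["pe_dairy_vs_bm"].round(3),
          model.conf_int().loc["pe_dairy_vs_bm"].round(3).tolist())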
As shown in Figure 4, this variable was directly associated with weight-related parameters, suggesting that greater %PE from formula milk and dairy rather than breast milk was significantly associated with higher conditional WAZ, WLZ and BMIZ.\nThe findings suggest that a 1% increase in daily %PE from formula and dairy from 9 to 12 months of age was associated with a 0.18 (95%CI 0.03, 0.32) and 0.16 (95%CI 0.01, 0.30) standard deviation score (SDS) increase in conditional WAZ and WLZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI, and frequency of illness.\nA 1% increase in daily %PE from non-dairy ASFs from 9 to 12 months was also associated with a 0.10 (95%CI 0.02, 0.18), 0.12 (95%CI 0.04, 0.20) and 0.10 (95%CI 0.01, 0.18) SDS increase in conditional WAZ, WLZ and BMIZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI and frequency of illness.\nScatter plots demonstrate associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk) minus the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and conditional growth at 12 months. The scatter plots show positive, dose–response associations between %PE dairy source vs. breast milk and conditional WAZ, WLZ and BMIZ, but not conditional LAZ.\n3.5. Association between Dietary Protein Intake and Blood Levels of IGF-1, IGFBP-3 and Insulin at 12 Months of Age\nIn order to investigate the association between dietary protein and weight-related growth parameters, IGF-1, IGFBP-3 and insulin were investigated at 12 months of age. Milk protein was the only food source that showed a significantly positive association with circulating IGF-1, IGFBP-3 and insulin (Table 6). However, a stronger association was found between “%PE from dairy vs. breast milk” and the IGF-1 level, suggesting that the consumption of more %PE from formula and dairy than from breast milk was associated more strongly with IGF-1 level.\nAs shown in Figure 5, there were positive dose–response relationships of “%PE from dairy vs. breast milk” with all growth-related hormones after adjusting for sex. A 1% greater %PE from formula and dairy was associated with increases in blood concentrations of IGF-1, IGFBP-3 and insulin of 2.34 (95%CI 1.44, 3.23) ng/mL, 33.41 (95%CI 9.46, 57.37) ng/mL and 4 (95%CI 1, 7)%, respectively. Mean IGF-1, IGFBP-3 and insulin stratified by sex are given in Table S3.\nScatter plots illustrate the associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk) minus the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and blood concentrations of IGF-1, IGFBP-3 and insulin at 12 months. The scatter plots show positive, dose–response associations between %PE dairy source vs. breast milk and all laboratory markers after controlling for sex.", "One hundred and fifty healthy term infants were enrolled. There were four dropouts, and one infant was excluded due to developing a multiple food-protein allergy during the study period (Figure S2). Data from 145 infants (96.7%) were thus available for analysis. 
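Relating back to the hormone analyses in Section 3.5 above: insulin is reported there as a percentage change, which is consistent with the natural-log transformation mentioned in the statistical methods, so the coefficient from a model of Ln(insulin) can be read as an approximate percent difference per 1% of %PE. The sketch below illustrates that interpretation on simulated data; variable names and values are hypothetical, not the authors' code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 145
    df = pd.DataFrame({
        "pe_dairy_vs_bm": rng.uniform(-8, 10, n),         # %PE dairy source vs. breast milk
        "sex":            rng.choice(["male", "female"], n),
    })
    # Simulated outcomes, loosely shaped like IGF-1 (ng/mL) and insulin.
    df["igf1"] = 60 + 2.3 * df["pe_dairy_vs_bm"] + rng.normal(0, 15, n)
    df["insulin"] = np.exp(1.5 + 0.04 * df["pe_dairy_vs_bm"] + rng.normal(0, 0.4, n))

    # IGF-1 modelled on its original scale, adjusted for sex.
    igf1_fit = smf.ols("igf1 ~ pe_dairy_vs_bm + C(sex)", data=df).fit()
    print("IGF-1 ng/mL per 1 %PE:", round(igf1_fit.params["pe_dairy_vs_bm"], 2))

    # Insulin modelled on the log scale; exp(beta) - 1 approximates the percent change
    # per 1 %PE, matching how the insulin result is expressed in the text.
    ins_fit = smf.ols("np.log(insulin) ~ pe_dairy_vs_bm + C(sex)", data=df).fit()
    beta = ins_fit.params["pe_dairy_vs_bm"]
    print("Insulin % change per 1 %PE:", round(100 * (np.exp(beta) - 1), 1))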
As shown in Table 1, there were almost equal numbers of male and female infants, while nearly two thirds of the infants were first-born. Mean parental age was around 30 years, and more than 50% attained at least a college degree. The majority of infants were living in extended, middle-class families where most families received a higher monthly income than the minimum wage in Thailand.", "At 12 months of age, the percentages of infants with wasting, underweight and stunting were 3.5, 4.1 and 4.8%, respectively, while only one infant (0.7%) was overweight. No infants in this cohort were classified as obese, or both wasted and stunted. According to the parental reports, over one-third of mothers and nearly two-thirds of fathers had overweight or obesity. Therefore, the prevalence of DBM at household level where underweight infants lived with parents who had overweight/obesity was 6.2% of all families.", "Notably, 44.1% of infants were exclusively breastfed until 6 months of age, while 36.6% of all infants continued to receive only breast milk along with complementary foods until 12 months of age (Table 2). The mean age of introduction of CF was 5.7 ± 0.6 months. The most common first complementary food was rice with cooked egg yolk, while other non-dairy proteins such as meats and organ meats were introduced later. Mean protein intake during CF rapidly increased and reached its highest value at 12 months. In general, infants consumed more dietary protein than the Dietary Reference Intake for Thais 2020 (Thai DRI), as well as the intake recommended by the WHO. At 9 and 12 months of age, protein intakes were 2 to 3 times higher than the Thai and international recommendations (Figure 1). The average percentage of energy from protein (%PE) was 7.8, 12.6 and 15.6% at 6, 9 and 12 months of age, respectively.\nFigure 1 shows mean protein intake (g/kg/day) of infants during complementary feeding. Compared to the recommendations suggested by the Thai dietary recommended intake (Thai DRI), the Institute of Medicine (IOM), the World Health Organization (WHO)/Food and Agriculture Organization (FAO)/United Nations International Children’s Emergency Fund (UNICEF) and the European Food Safety Authority (ESFA), the protein intake of this study population rapidly increased from 6 to 12 months and was higher than all recommendations at 9 and 12 months of age.", "Infants were categorized into three groups based on the average %PE from 6 to 12 months of age; those in the highest and lowest quartiles had %PE ≥12.9% and ≤10.9%, respectively, while the median group received protein between these values. Infants in the highest quartile had significantly higher WAZ, WLZ and BMIZ at 12 months, while there was no significant difference in LAZ between groups (Table 3). Conditional weight-related z-scores (i.e., WAZ, WLZ and BMIZ) of infants in the high protein intake group were significantly higher compared to the median and low protein intake groups (Figure 2: 95%CIs shown in Table S1), indicating that infants in the high protein intake group gained more weight than expected, given their baseline z-score at 6 months of age. However, there was no difference in the prevalence of all forms of malnutrition (i.e., underweight, wasting, stunting and overweight/obesity) between protein intake groups.\nFigure 2 illustrates conditional growth status at 12 months. 
Infants who consumed protein in the highest quartile (black bar) had significantly higher conditional WAZ, WLZ and BMIZ compared to infants receiving protein in the median (dark grey bar) and lowest quartile (light grey bar). Although conditional LAZ was higher in the high protein intake group compared with other groups, there was no significant difference.\nAccording to CF recommendations in Thailand [27] (Table S2), infants should be given three main meals (i.e., breakfast, lunch, and dinner) from 9 months; thus, protein intakes during the early (6–9 months old) and later stages (9–12 months old) of CF were expected to be quite different. Therefore, average %PE during the early and later CF periods were separated for univariate analyses investigating the association of protein intake with conditional growth outcomes. The results in Table 4 indicate that protein intake from 9–12 months was significantly associated with conditional growth outcomes, whilst protein intake from 6–9 months of age was not. Thus, only protein intake from 9–12 months of age was included in the subsequent analyses.\nAccording to the DAGs (Figure 3), the suggested covariates for the multiple linear regression model investigating the association of protein intake with linear growth were duration of predominant breastfeeding, type of milk feeding, non-protein energy intake at 6–12 months, maternal education, frequency of illness and family income. For ponderal growth including WAZ, WLZ and BMIZ, the DAG suggested duration of predominant breastfeeding, type of milk feeding, non-protein energy, maternal education, frequency of illness, maternal BMI and maternal age as covariates.\nFigure 3 illustrates DAGs predicting conditional growth status (for either linear or ponderal growth) by protein intake during the CF period (main predictor). Black arrows indicate causal paths between the main predictors and outcomes. Dashed-grey arrows represent bias paths. Boxes with black frames show potential confounders.\nAll covariates suggested by the DAGs were included in the multiple regression models. There was no association between conditional LAZ and %PE from 9 to 12 months, or other covariates (Table 5). However, %PE from 9 to 12 months was associated with conditional WAZ, WLZ, and BMIZ (95%CI varied between 0.02 and 0.20). Considering different protein sources, only %PE from milk/dairy and non-dairy protein from 9 to 12 months were significantly associated with the weight-related parameters, while PBP was not (Table 5). Protein intake from milk had a stronger association with conditional weight-related parameters compared to other protein sources based on effect size (regression co-efficient (β)).\nTo differentiate the effect of milk protein from breast milk and that from dairy/infant formula on conditional growth outcomes, %PE from breast milk was subtracted from the %PE from formula, cow’s milk and other dairy products. The resulting variable was called “%PE from dairy vs. breast milk”. 
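The covariate sets quoted above were read off the DAG in Figure 3, which is not reproduced here. As a rough illustration of how such a graph can be written down and sanity-checked in code, the sketch below encodes a simplified, assumed set of edges (chosen for illustration, not copied from the published figure) and flags variables with a directed path to both the exposure and the outcome as candidate confounders; the actual graph was built and analysed with DAGitty.

    import networkx as nx

    # Illustrative edges only; Figure 3 defines the real structure.
    dag = nx.DiGraph([
        ("protein_intake_9_12m", "conditional_growth"),
        ("non_protein_energy", "protein_intake_9_12m"),
        ("non_protein_energy", "conditional_growth"),
        ("type_of_milk_feeding", "protein_intake_9_12m"),
        ("type_of_milk_feeding", "conditional_growth"),
        ("predominant_bf_duration", "type_of_milk_feeding"),
        ("maternal_education", "protein_intake_9_12m"),
        ("maternal_education", "conditional_growth"),
        ("frequency_of_illness", "conditional_growth"),
        ("maternal_bmi", "conditional_growth"),
    ])
    assert nx.is_directed_acyclic_graph(dag), "the assumed graph must be acyclic"

    exposure, outcome = "protein_intake_9_12m", "conditional_growth"
    # Simple heuristic: nodes with a directed path to both the exposure and the outcome.
    candidate_confounders = sorted(
        node for node in dag.nodes
        if node not in (exposure, outcome)
        and nx.has_path(dag, node, exposure)
        and nx.has_path(dag, node, outcome)
    )
    print(candidate_confounders)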
As shown in Figure 4, this variable was directly associated with weight-related parameters, suggesting that greater %PE from formula milk and dairy rather than breast milk was significantly associated with higher conditional WAZ, WLZ and BMIZ.\nThe findings suggest that a 1% increase in daily %PE from formula and dairy from 9 to 12 months of age was associated with a 0.18 (95%CI 0.03, 0.32) and 0.16 (95%CI 0.01, 0.30) standard deviation score (SDS) increase in conditional WAZ and WLZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI, and frequency of illness.\nA 1% increase in daily %PE from non-dairy ASFs from 9 to 12 months was also associated with a 0.10 (95%CI 0.02, 0.18), 0.12 (95%CI 0.04, 0.20) and 0.10 (95%CI 0.01, 0.18) SDS increase in conditional WAZ, WLZ and BMIZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI and frequency of illness.\nScatter plots demonstrate associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk) minus the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and conditional growth at 12 months. The scatter plots show positive, dose–response associations between %PE dairy source vs. breast milk and conditional WAZ, WLZ and BMIZ, but not conditional LAZ.", "In order to investigate the association between dietary protein and weight-related growth parameters, IGF-1, IGFBP-3 and insulin were investigated at 12 months of age. Milk protein was the only food source that showed a significantly positive association with circulating IGF-1, IGFBP-3 and insulin (Table 6). However, a stronger association was found between “%PE from dairy vs. breast milk” and the IGF-1 level, suggesting that the consumption of more %PE from formula and dairy than from breast milk was associated more strongly with IGF-1 level.\nAs shown in Figure 5, there were positive dose–response relationships of “%PE from dairy vs. breast milk” with all growth-related hormones after adjusting for sex. A 1% greater %PE from formula and dairy was associated with increases in blood concentrations of IGF-1, IGFBP-3 and insulin of 2.34 (95%CI 1.44, 3.23) ng/mL, 33.41 (95%CI 9.46, 57.37) ng/mL and 4 (95%CI 1, 7)%, respectively. Mean IGF-1, IGFBP-3 and insulin stratified by sex are given in Table S3.\nScatter plots illustrate the associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk) minus the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and blood concentrations of IGF-1, IGFBP-3 and insulin at 12 months. The scatter plots show positive, dose–response associations between %PE dairy source vs. breast milk and all laboratory markers after controlling for sex.", "This cohort study demonstrated that infants living in Chiang Mai, Thailand, consumed more dietary protein, mainly from ASFs, than Thai and WHO recommendations during the CF period. More importantly, the main results indicated that protein intake was significantly associated with weight-related parameters (i.e., WAZ, WLZ, and BMIZ) during the CF period after adjusting for potential confounders. Considering protein sources, the results showed a different impact of protein from dairy and non-dairy ASFs. 
The predominant association with weight gain was from dairy protein—mainly formula and unfortified cow’s milk—whereas non-dairy ABP showed a lesser impact. Protein intake from formula and unfortified cow’s milk also showed positive associations with circulating IGF-1, IGFBP-3 and insulin at 12 months of age in a dose–response manner, independent of infant sex. There was no association of protein intake with linear growth markers, or of PBP with conditional growth outcomes in this cohort.\nIn contrast to a recent review highlighting that infants and young children in LMICs consumed less ABP compared with those from high-income settings [28], this cohort showed that infants living in northern Thailand consumed more dietary protein from ASFs than from plant-based foods during the CF period. It could be assumed that, in some LMICs, especially upper-middle-income countries such as Thailand, CF is now shifting towards a “Western style” diet. Recently, a cross-sectional study [29] and data from the national survey [30] in Thailand also reported that over 80% of protein in complementary foods came from ASFs. These findings are relevant to the current global situation in which many LMIC countries are transitioning to Western diets with high amounts of ASFs, even though this change may occur at very different rates across different countries [31,32].\nConsidering the relation between protein intake and growth outcomes, this study found that infants consuming protein in the highest quartile, with a median %PE of nearly 13%, had significantly higher weight-related z-scores at 12 months of age compared to those who had lower protein intakes. Interestingly, the median %PE of the high protein intake group was similar to a report based on European populations [33]. Michaelsen et al. [33] found that most studies in European countries showing a significant association between high protein intake and BMI at 12 months reported a %PE around 13%. Therefore, some experts agreed to recommend an upper limit of protein intake around 15%, with the aim of reducing the risk of childhood obesity in their populations [4,34,35]. In contrast, current international recommendations only recommend safe levels: lower limits of protein intake considered necessary to adequately support the normal growth of infants/children [36,37,38]. Given the dramatically increasing prevalence of overweight/obesity in young children in many LMICs, an upper limit of protein intake should be considered for international recommendations, and more studies in this specific context should be encouraged.\nFurthermore, our findings also indicated dose–response associations between ABP and weight-related parameters regardless of the type of milk received or how much energy was provided from carbohydrate and fat, although the effects of dairy and non-dairy protein were different to some extent. When considering the concept of conditional growth [39], the outcomes can be interpreted as indicating that every 1% increase in %PE from either dairy or non-dairy ABP at 9–12 months of age is associated with a positive deviation in WAZ, WLZ and BMIZ from the expected values at 12 months, based on growth parameters at 6 months. Thus, these results suggest that higher protein intake from ASFs is associated with more rapid weight gain than the infant’s expected growth trajectory. 
Underpinning these clinical findings, our laboratory results showed that the higher consumption of dairy protein, mainly from formula and cow’s milk, significantly increased levels of circulating IGF-1, IGFBP-3 and insulin, which are the main hormonal regulators of human growth and may relate to increased adiposity [40,41]. A possible mechanism explaining the greater effect of dairy protein over other protein sources is the high proportion of leucine, a potent factor stimulating IGF-1 secretion in dairy protein compared to other food sources (14% vs. 8% of amino acids in dairy and meats, respectively) [42]. In addition, some evidence indicates that leucine also plays an essential role in the activation of the mammalian target of rapamycin (mTOR), which is the major regulator of growth and metabolism homeostasis in humans [43].\nTo our knowledge, this is the first evidence from an LMIC demonstrating an association between high protein intake and rapid weight gain, and the possible mechanism of this association through IGF-1, IGFBP-3 and insulin. More importantly, this cohort also showed the distinctive effect of different protein sources on infant growth, as previous evidence on this issue was inconclusive. The latest systematic review and meta-analysis examining the relationship between high protein intake and growth and risk of childhood overweight/obesity included no studies from LMICs [44]. This systematic review concluded that there is adequate evidence supporting a possibly causal effect of high protein, especially ABP, on BMI (dose–response effect), while limited evidence suggests that high protein intake may affect weight gain/weight-for-age score and the risk of childhood obesity. However, there were several inconclusive results, including the effect of high protein on linear growth and body composition [44]. The present study did not demonstrate a relationship between high protein intake and overweight/obesity due to the very small number of infants who were overweight/obese at 12 months old. However, there is evidence justifying the concern about the potential impact of high protein intake on overweight/obesity in this population. In 2019, the prevalence of overweight/obesity among Thai infants and young children aged less than 5 years rose to 12.7% [45] compared to the previous national surveys in 2009 and 2016 (8.5% and 8.2%, respectively) [46,47]. The daily protein intake reported in 2013 was similar to the present study; infants and young children aged 6 to less than 36 months had dietary protein intakes about 3 times higher than the Thai recommendations and nearly 80% of the protein was derived from ASFs, while total energy consumption was not different from the recommendation [30,48,49].\nNotably, the literature from LMICs generally considers ABP as a preferred protein source due to its beneficial effect in preventing undernutrition [44,50,51]. Theoretically, protein from ASFs should provide adequate amounts of essential amino acids to meet the requirements of infants and children in order to prevent stunting [52]. A recent systematic review of studies on infants and children aged 6–60 months in LMICs did not find any significant associations between the consumption of ASFs and growth outcomes including weight, length/height and head circumference, though the included studies showed high heterogeneity [53].\nThe literature thus illustrates how ‘optimal’ protein intakes and sources during CF may differ in high-income and low-income settings. 
Reducing protein in complementary foods in European countries and the United States may help prevent childhood overweight/obesity, while promoting the consumption of ABPs in many low-income countries might mitigate the burden of wasting, stunting and micronutrient deficiencies. Nonetheless, for countries such as Thailand facing the DBM and nutritional transition, using either approach could be problematic. Therefore, such countries should ideally adopt recommendations related to dietary protein based on data from their population and avoid making assumptions by using dietary data from other countries.\nMore importantly, the distinctive effects of different protein sources should be taken into account when considering recommendations for dietary protein during the CF period. Current evidence suggests that dairy protein from formula and cow’s milk can promote rapid weight gain and could contribute to childhood obesity [54], while non-dairy ABP has a lesser impact on weight gain according to this cohort and other studies [55,56]. Therefore, to optimize protein intake during the CF period, nutritional policies focused on decreasing the intake of dairy protein, such as reducing the protein content in infant and follow-on formula and avoiding cow’s milk whilst encouraging mothers to continue breastfeeding throughout the first year of life, should be integrated into CF practices. In addition, non-dairy ABP enriched with essential micronutrients such as iron, zinc, iodine, and vitamin A should be promoted to provide adequate micronutrients whilst avoiding a high intake of dairy protein.\nFinally, limitations of this study should be noted. First, the results from the present cohort cannot infer causality between dietary protein and rapid weight gain due to the observational study design. The association could be interpreted either way, and it is not possible to conclude whether dietary protein contributes to greater weight gain or whether parents of faster-growing infants provide more food, including protein. However, DAGs were applied to appropriately identify potential confounders and to avoid overadjustment and selection bias [57]. Second, the null effect of PBP on growth outcomes should be interpreted with caution because the PBP consumed by infants in this cohort was mainly cereals, whereas legumes and grains containing higher protein quantity and quality, which may be more frequently used in other LMICs, were rarely consumed. Third, the lack of a significant association between protein intake and linear growth may be due to lack of statistical power, as the sample size was calculated based on the expected difference in WLZ at 12 months between infants consuming ASF regularly and those who did not. Fourth, by assessing change in size between 6 and 12 months, and assessing complementary feeding during this period, some of the variability in growth that we quantified may have occurred prior to the dietary exposure. However, this makes any associations of growth and complementary feeding that we detect conservative. Lastly, it should be noted that “extra” weight gain from increasing intake of ABP cannot be assumed to indicate higher body fatness without additional evidence from body composition analysis, and we do not yet know how our findings will translate into the risk of overweight/obesity at later ages.", "The present cohort provides evidence from a middle-income country that different protein sources may have contrasting influences on infant growth. 
While high protein intake from ASFs, especially formula and cow’s milk, during the CF period was associated with higher weight gain in a dose–response manner, the study did not find an association with linear growth. Importantly, higher levels of IGF-1, IGFBP-3 and insulin in infants consuming higher amounts of protein from formula and cow’s milk provided mechanistic support for the clinical findings. However, further studies in populations facing the DBM and nutritional transition are needed to confirm the key findings from this cohort and to investigate the relationship between dietary protein and body composition. A longer follow-up period is also needed to see whether infants consuming higher protein intakes have a greater risk of overweight/obesity at later ages." ]
[ "intro", "subjects", "results", null, null, null, null, null, "discussion", "conclusions" ]
[ "protein intake", "early-life nutrition", "complementary feeding", "animal source foods", "double burden of malnutrition", "infant growth", "insulin", "insulin-like growth factor-1" ]
1. Introduction: The double burden of malnutrition (DBM)—the coexistence of under- and overnutrition—represents an emerging public health problem globally, especially for those in lower- and middle-income countries (LMICs) where eating habits are being transformed towards Westernized diets and lifestyles [1]. The World Health Organization (WHO) has emphasized the need for “double-duty actions” to prevent both forms of malnutrition [2]. While undernutrition and overweight were initially considered to affect different groups, they are increasingly recognized to occur within individuals through the life course3. This might potentially affect younger age groups, resulting in the combination of poor linear growth and overweight; however, evidence is lacking. This framework also recognizes that the two forms of malnutrition may have common risk factors in the form of unhealthy diets and environments [3,4]. Optimizing early-life nutrition by improving maternal nutrition, encouraging exclusive breastfeeding, and promoting appropriate complementary feeding (CF) are among the important actions identified to overcome the DBM [5]. The CF period is one of great change, when infants are introduced to foods other than milk [6] and protein intake increases; the percent of energy from protein (%PE) typically rises from around 5% to 15% when complementary foods become the major energy source for breastfed infants [7]. Protein is a key macronutrient promoting growth. However, to our knowledge, no studies have yet focused on the association between protein intake and growth in the context of the DBM. Previous studies have typically investigated the impact of protein intake in early life in populations facing either undernutrition or overnutrition [8], but not both. In high-income settings, research has highlighted the association between “too much” dietary protein in early life and the increased risk of overweight/obesity in later childhood, while previous studies in resource-limited countries have focused on the association of “too little” high-quality protein with undernutrition and, particularly, stunting. There are no studies from LMICs investigating the effect of high protein intake in early life on the growth of infants and young children [8]. Furthermore, it is unclear whether all protein sources (i.e., dairy protein, non-dairy animal-based protein (ABP) and plant-based protein (PBP)) have the same effect on growth [9,10,11,12]. The most robust evidence, from a large, multi-center randomized controlled trial (RCT) in five European countries, reported that high protein intake from formula during infancy significantly increased weight gain, but not linear growth, in children aged 2 and 6 years [13,14]. Additionally, this RCT demonstrated that infants who received high-protein formula had significantly higher plasma insulin-like growth factor-1 (IGF-1) and urine C-peptide, an insulin derivative, at 6 months of age compared to those fed with low-protein formula or that were breastfed. However, the impact of other protein sources during CF was not investigated. During infancy, nutrition is the most important factor promoting growth via the GH–IGF axis [15], while amino acids derived from dietary protein are associated with IGF-1 and insulin secretion [16,17,18,19,20]. The aim of this study was to investigate associations of the amount and source of protein intake during the CF period with the growth of infants in Thailand, where the DBM is prevalent. 
Furthermore, plasma insulin, serum IGF-1 and insulin-like growth factor binding protein 3 (IGFBP-3) were also measured to support the clinical outcomes. We aimed to tackle a key question in the context of the DBM: how a specific component of infant diet may relate to markers of both undernutrition and overweight in early life. 2. Subjects and Methods: A multicenter, prospective, cohort study was conducted at three well-baby clinics in Chiang Mai, Thailand, between June 2018 and May 2019. Healthy term infants with birth weight ≥2500 g were recruited at age 4–6 months. Exclusion criteria were: infants with any underlying or chronic diseases; known cases of, or recovery from, protein-energy malnutrition; or those who regularly received medication except mineral and vitamin supplementation. Parents and legal guardians were provided with study information and gave written informed consent before enrollment. Ethics approval was obtained from the University College London Ethics committee, United Kingdom (Approval ID: 12551/001), and the Ethics Committee of the Faculty of Medicine, Chiang Mai University, Thailand (Approval ID: PED-2561-05287). Data including demographics of infants, family characteristics, growth measurements and dietary assessments were collected at 6, 9 and 12 months (M) during routine child health surveillance clinic visits. Body weight and the recumbent length of infants were measured by trained health professionals using an electronic scale (TScale Electronics Mfg. Co., Ltd., Kunshan, Taiwan, precision ± 5 g) and a standard wooden measuring board (precision ± 0.1 cm). The weight-for-age (WAZ), weight-for-length (WLZ), BMI (BMIZ) and length-for-age (LAZ) z-scores (standard deviation scores) were calculated using WHO Anthro version 3.2.2 [21]. Stunting, wasting and underweight were defined as LAZ, WLZ or WAZ <−2 standard deviation score (SDS), respectively. Overweight and obesity were defined as WLZ more than +2 SDS [22]. The primary outcome was conditional growth at 12 months (see below). Dietary intake was estimated using a food frequency questionnaire (FFQ) for the semi-quantitative estimation of habitual intake alongside a 24 h recall interview (24-HR) at all time points, and a 3-day food record (3-DFR) was also collected at 9 and 12 months for quantitative estimation. Initially, dietary data from the 3-DFRs were used to estimate the average energy and nutrient intakes at 9 and 12 months of age, while the 24-HRs were used to estimate those intakes at 6 months of age. However, in cases where the-3DFR was missing, dietary intake from the 24-HR was used instead. The FFQ was used to confirm the portion size if data from either the 3-DFR or 24-HR were unclear. Dietary intakes were converted to energy and nutrients using the Institute of Nutrition Mahidol University Calculation (INMUCAL)-Nutrient program version 4.0 (2018) developed by the Institute of Nutrition, Mahidol University, Bangkok, Thailand [23]. This programme provided information on total energy consumption (kcal/day), crude intakes of all macronutrients (g/day) and 8 micronutrients, as well as the caloric distribution from each macronutrient. In addition, the program also separately reported protein and iron intake from ASFs and plant-based foods (Figure S1). Venous blood samples were obtained at 12 months of age. In total, approximately 2 mL of venous blood was obtained and kept at −20 °C until analyses were undertaken. 
Serum IGF-1 and IGFBP-3 were analyzed by a solid-phase, enzyme-labeled chemiluminescent immunometric assay using the IMMULITE® 2000 system (Siemens Healthcare Diagnostics Products Inc., Devault, PA, USA). The intra- and inter-assay variation in these tests was less than 8%. Plasma insulin was analyzed by an electrochemiluminescence technique using the COBAS® e411 analyzer (Roche Diagnostics Inc., Basel, Switzerland). The repeatability and intermediate precision of this technique were less than 5%. Statistical analyses were performed using SPSS (IBM Corp. Released 2019. IBM SPSS Statistics for Windows, Version 26.0. Armonk, NY, USA: IBM Corp). The sample size calculation showed that at least 126 infants were needed to see differences of 0.5 z-score in WLZ at 12 months old between infants who regularly received red meat and those who did not [24]. For analyses, non-parametric data were natural log (Ln)-transformed prior to use in the regression models. Conditional growth status was calculated as z-score deviation from average size of the study population at 12 months of age, controlling for baseline size at 6 months. Simple linear regression was used to develop a formula predicting the average size of the study population at 12 months, while a positive and negative result indicated larger or smaller size than expected at follow-up, respectively, given their earlier size [25]. Demographic data, prevalence of malnutrition, CF practices and nutrient intake are described as means ± standard deviation (SD) and percentages depending on data characteristics. To investigate associations between protein intake and outcomes of interest, bivariate correlation and general linear models were performed. Pearson’s correlations were used to demonstrate relationships between the variables. Regression analysis was used to investigate the association between the main predictor (protein intake) and the primary outcome (conditional growth at 12 months old) and secondary outcomes, including insulin, IGF-1 and IGFBP-3. In order to investigate the effect of different protein sources, protein intakes were also divided into 3 groups: (1) milk protein—breast milk, formula, cow’s milk and other dairy products; (2) non-dairy ABP—meats, eggs and meat products; (3) PBP—cereals and legumes. Covariates in the regression models were selected using a directed acyclic graph (DAG). The DAG is considered as a statistical approach to identify confounding variables and reduce the risk of selection bias when estimating causality in observational studies. The DAG applied in this study was created using DAGitty.net version 3.0, 2020 [26]. To demonstrate the magnitude of effect, both correlation and regression coefficients were also reported with their 95% confidence intervals (CIs). Statistical significance was defined by a p-value less than 0.05. 3. Results: 3.1. Demographic Data One hundred and fifty healthy term infants were enrolled. There were four dropouts, and one infant was excluded due to developing a multiple food-protein allergy during the study period (Figure S2). Data from 145 infants (96.7%) were thus available for analysis. As shown in Table 1, there were almost equal numbers of male and female infants, while nearly two thirds of the infants were first-born. Mean parental age was around 30 years, and more than 50% attained at least a college degree. The majority of infants were living in extended, middle-class families where most families received a higher monthly income than the minimum wage in Thailand. 
One hundred and fifty healthy term infants were enrolled. There were four dropouts, and one infant was excluded due to developing a multiple food-protein allergy during the study period (Figure S2). Data from 145 infants (96.7%) were thus available for analysis. As shown in Table 1, there were almost equal numbers of male and female infants, while nearly two thirds of the infants were first-born. Mean parental age was around 30 years, and more than 50% attained at least a college degree. The majority of infants were living in extended, middle-class families where most families received a higher monthly income than the minimum wage in Thailand. 3.2. Prevalence of Malnutrition in the Study Population At 12 months of age, the percentages of infants with wasting, underweight and stunting were 3.5, 4.1 and 4.8%, respectively, while only one infant (0.7%) was overweight. No infants in this cohort were classified as obese, or both wasted and stunted. According to the parental reports, over one-third of mothers and nearly two-thirds of fathers had overweight or obesity. Therefore, the prevalence of DBM at household level where underweight infants lived with parents who had overweight/obesity was 6.2% of all families. At 12 months of age, the percentages of infants with wasting, underweight and stunting were 3.5, 4.1 and 4.8%, respectively, while only one infant (0.7%) was overweight. No infants in this cohort were classified as obese, or both wasted and stunted. According to the parental reports, over one-third of mothers and nearly two-thirds of fathers had overweight or obesity. Therefore, the prevalence of DBM at household level where underweight infants lived with parents who had overweight/obesity was 6.2% of all families. 3.3. Complementary Feeding Practices and Nutrient Intakes Notably, 44.1% of infants were exclusively breastfed until 6 months of age, while 36.6% of all infants continued to receive only breast milk along with complementary foods until 12 months of age (Table 2). The mean age of introduction of CF was 5.7 ± 0.6 months. The most common first complementary food was rice with cooked egg yolk, while other non-dairy proteins such as meats and organ meats were introduced later. Mean protein intake during CF rapidly increased and reached its highest value at 12 months. In general, infants consumed more dietary protein than the Dietary Reference Intake for Thais 2020 (Thai DRI), as well as the intake recommended by the WHO. At 9 and 12 months of age, protein intakes were 2 to 3 times higher than the Thai and international recommendations (Figure 1). The average percentage of energy from protein (%PE) was 7.8, 12.6 and 15.6% at 6, 9 and 12 months of age, respectively. Figure 1 shows mean protein intake (g/kg/day) of infants during complementary feeding. Compared to the recommendations suggested by the Thai dietary recommended intake (Thai DRI), the Institute of Medicine (IOM), the World Health Organization (WHO)/Food and Agriculture Organization (FAO)/United Nations International Children’s Emergency Fund (UNICEF) and the European Food Safety Authority (ESFA), the protein intake of this study population rapidly increased from 6 to 12 months and was higher than all recommendations at 9 and 12 months of age. Notably, 44.1% of infants were exclusively breastfed until 6 months of age, while 36.6% of all infants continued to receive only breast milk along with complementary foods until 12 months of age (Table 2). The mean age of introduction of CF was 5.7 ± 0.6 months. 
3.4. Association between Dietary Protein and Growth Outcomes Infants were categorized into three groups based on the average %PE from 6 to 12 months of age; those in the highest and lowest quartiles had %PE ≥12.9% and ≤10.9%, respectively, while the median group received protein between these values. Infants in the highest quartile had significantly higher WAZ, WLZ and BMIZ at 12 months, while there was no significant difference in LAZ between groups (Table 3). Conditional weight-related z-scores (i.e., WAZ, WLZ and BMIZ) of infants in the high protein intake group were significantly higher compared to the median and low protein intake groups (Figure 2: 95%CIs shown in Table S1), indicating that infants in the high protein intake group gained more weight than expected, given their baseline z-score at 6 months of age. However, there was no difference in the prevalence of all forms of malnutrition (i.e., underweight, wasting, stunting and overweight/obesity) between protein intake groups. Figure 2 illustrates conditional growth status at 12 months. Infants who consumed protein in the highest quartile (black bar) had significantly higher conditional WAZ, WLZ and BMIZ compared to infants receiving protein in the median (dark grey bar) and lowest quartile (light grey bar). Although conditional LAZ was higher in the high protein intake group compared with other groups, there was no significant difference. According to CF recommendations in Thailand [27] (Table S2), infants should be given three main meals (i.e., breakfast, lunch, and dinner) from 9 months; thus, protein intakes during the early (6–9 months old) and later stages (9–12 months old) of CF were expected to be quite different. Therefore, average %PE during the early and later CF periods were separated for univariate analyses investigating the association of protein intake with conditional growth outcomes. The results in Table 4 indicate that protein intake from 9–12 months was significantly associated with conditional growth outcomes, whilst protein intake from 6–9 months of age was not. Thus, only protein intake from 9–12 months of age was included in the subsequent analyses.
According to the DAGs (Figure 3), the suggested covariates for the multiple linear regression model investigating the association of protein intake with linear growth were duration of predominant breastfeeding, type of milk feeding, non-protein energy intake at 6–12 months, maternal education, frequency of illness and family income. For ponderal growth including WAZ, WLZ and BMIZ, the DAG suggested duration of predominant breastfeeding, type of milk feeding, non-protein energy, maternal education, frequency of illness, maternal BMI and maternal age as covariates. Figure 3 illustrates DAGs predicting conditional growth status (for either linear or ponderal growth) by protein intake during the CF period (main predictor). Black arrows indicate causal paths between the main predictors and outcomes. Dashed-grey arrows represent bias paths. Boxes with black frames show potential confounders. All covariates suggested by the DAGs were included in the multiple regression models. There was no association between conditional LAZ and %PE from 9 to 12 months, or other covariates (Table 5). However, %PE from 9 to 12 months was associated with conditional WAZ, WLZ, and BMIZ (95%CI varied between 0.02 and 0.20). Considering different protein sources, only %PE from milk/dairy and non-dairy protein from 9 to 12 months were significantly associated with the weight-related parameters, while PBP was not (Table 5). Protein intake from milk had a stronger association with conditional weight-related parameters compared to other protein sources based on effect size (regression co-efficient (β)). To differentiate the effect of milk protein from breast milk and that from dairy/infant formula on conditional growth outcomes, %PE from breast milk was subtracted from the %PE from formula, cow’s milk and other dairy products. The resulting variable was called “%PE from dairy vs. breast milk”. As shown in Figure 4, this variable was directly associated with weight-related parameters, suggesting that greater %PE from formula milk and dairy rather than breast milk was significantly associated with higher conditional WAZ, WLZ and BMIZ. The findings suggest that a 1% increase in daily %PE from formula and dairy from 9 to 12 months of age was associated with a 0.18 (95%CI, 0.03, 0.32) and 0.16 (95%CI, 0.01, 0.30) standard deviation score (SDS) increase in conditional WAZ and WLZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI, and frequency of illness. A 1% increase in daily %PE from non-dairy ASFs from 9 to 12 months was also associated with a 0.10 (95%CI 0.02, 0.18), 0.12 (95%CI 0.04, 0.20) and 0.10 (95%CI 0.01, 0.18) SDS increase in conditional WAZ, WLZ and BMIZ, respectively, after adjusting for other protein sources, duration of predominant BF, non-protein energy consumption, type of milk feeding, maternal age, maternal education, maternal BMI and frequency of illness. Scatter plots demonstrate associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk), subtracted from the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and conditional growth at 12 months. The scatter plots show dose–response, positive associations between %PE dairy source vs. breast milk, and conditional WAZ, WLZ and BMIZ, but not conditional LAZ. 
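To make the model specification above concrete, the following is a minimal sketch, not the authors' SPSS analysis, of how the derived exposure (“%PE from dairy vs. breast milk”) and a covariate-adjusted linear model for a ponderal outcome could be set up with the statsmodels formula API. All column names and the synthetic data are hypothetical placeholders for the study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 145

# Hypothetical stand-ins for the study variables (names are assumptions).
df = pd.DataFrame({
    "cond_waz": rng.normal(0, 1, n),            # conditional WAZ at 12 months
    "pe_dairy": rng.uniform(0, 10, n),          # %PE from formula, cow's milk and dairy
    "pe_breastmilk": rng.uniform(0, 10, n),     # %PE from breast milk
    "pe_nondairy_abp": rng.uniform(0, 8, n),    # %PE from meats, eggs, meat products
    "pe_pbp": rng.uniform(0, 5, n),             # %PE from cereals and legumes
    "predom_bf_months": rng.uniform(0, 6, n),
    "nonprotein_energy": rng.normal(600, 80, n),
    "milk_type": rng.choice(["breast", "formula", "mixed"], n),
    "maternal_age": rng.normal(30, 5, n),
    "maternal_education": rng.choice(["school", "college"], n),
    "maternal_bmi": rng.normal(24, 4, n),
    "illness_episodes": rng.poisson(2, n),
})

# Derived exposure: %PE from dairy sources minus %PE from breast milk.
df["pe_dairy_vs_bm"] = df["pe_dairy"] - df["pe_breastmilk"]

# Covariate-adjusted model for a ponderal outcome, with the DAG-suggested covariates.
model = smf.ols(
    "cond_waz ~ pe_dairy_vs_bm + pe_nondairy_abp + pe_pbp + predom_bf_months"
    " + nonprotein_energy + C(milk_type) + maternal_age + C(maternal_education)"
    " + maternal_bmi + illness_episodes",
    data=df,
).fit()
print(model.params["pe_dairy_vs_bm"], model.conf_int().loc["pe_dairy_vs_bm"])
```

In this specification, the coefficient on pe_dairy_vs_bm plays the role of the reported change in conditional z-score per 1% of energy from dairy protein relative to breast-milk protein; for the linear-growth model, the DAG-suggested covariate set would swap maternal BMI and age for family income, as described above.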
3.5. Association between Dietary Protein Intake and Blood Levels of IGF-1, IGFBP-3 and Insulin at 12 Months of Age In order to explore the association between dietary protein and weight-related growth parameters, IGF-1, IGFBP-3 and insulin were examined at 12 months of age. Milk protein was the only food source that showed a significantly positive association with circulating IGF-1, IGFBP-3 and insulin (Table 6). However, a stronger association was found between “%PE from dairy vs. breast milk” and the IGF-1 level, suggesting the consumption of more %PE from formula and dairy than from breast milk was associated more strongly with IGF-1 level. As shown in Figure 5, there were positive dose–response relationships of “%PE from dairy vs. breast milk” with all growth-related hormones after adjusting for sex. A 1% greater %PE from formula and dairy was associated with increasing blood concentrations of IGF-1, IGFBP-3 and insulin by 2.34 (95%CI 1.44, 3.23) ng/mL, 33.41 (95%CI 9.46, 57.37) ng/mL and 4 (95%CI 1, 7) %, respectively. Mean IGF-1, IGFBP-3 and insulin stratified by sex are given in Table S3. Scatter plots illustrate the associations between the percentage of protein energy from dairy sources (i.e., formula and cow’s milk) subtracted from the percentage of protein energy from breast milk (%PE dairy source vs. breast milk) and blood concentrations of IGF-1, IGFBP-3 and insulin at 12 months. The scatter plots show dose–response, positive associations between %PE dairy source vs. breast milk, and all laboratory markers after controlling for sex.
4. Discussion: This cohort study demonstrated that infants living in Chiang Mai, Thailand, consumed more dietary protein, mainly from ASFs, than Thai and WHO recommendations during the CF period. More importantly, the main results indicated that protein intake was significantly associated with weight-related parameters (i.e., WAZ, WLZ, and BMIZ) during the CF period after adjusting for potential confounders. Considering protein sources, the results showed a different impact of protein from dairy and non-dairy ASFs. The predominant association with weight gain was from dairy protein—mainly formula and unfortified cow’s milk—whereas non-dairy ABP showed a lesser impact. Protein intake from formula and unfortified cow’s milk also showed positive associations with circulating IGF-1, IGFBP-3 and insulin at 12 months of age in a dose–response manner, independent of infant sex. There was no association of protein intake with linear growth markers, or of PBP with conditional growth outcomes in this cohort. In contrast to a recent review highlighting that infants and young children in LMICs consumed less ABP compared with those from high-income settings [28], this cohort showed that infants living in northern Thailand consumed more dietary protein from ASFs than from plant-based foods during the CF period. It could be assumed that, in some LMICs, especially upper-middle-income countries such as Thailand, CF is now shifting towards a “Western style” diet. Recently, a cross-sectional study [29] and data from the national survey [30] in Thailand also reported that over 80% of protein in complementary foods came from ASFs. These findings are relevant to the current global situation in which many LMICs are transitioning to Western diets with high amounts of ASFs, even though this change may occur at very different rates across different countries [31,32]. Considering the relation between protein intake and growth outcomes, this study found that infants consuming protein in the highest quartile, with a median %PE of nearly 13%, had significantly higher weight-related z-scores at 12 months of age compared to those who had lower protein intakes. Interestingly, the median %PE of the high protein intake group was similar to a report based on European populations [33]. Michaelsen et al. [33] found that most studies in European countries showing a significant association between high protein intake and BMI at 12 months reported a %PE around 13%.
Therefore, some experts agreed to recommend an upper limit of protein intake around 15%, with the aim of reducing the risk of childhood obesity in their populations [4,34,35]. In contrast, current international recommendations only recommend safe levels: lower limits of protein intake considered necessary to adequately support the normal growth of infants/children [36,37,38]. Given the dramatically increasing prevalence of overweight/obesity in young children in many LMICs, an upper limit of protein intake should be considered for international recommendations, and more studies in this specific context should be encouraged. Furthermore, our findings also indicated dose–response associations between ABP and weight-related parameters regardless of the type of milk received or how much energy was provided from carbohydrate and fat, although the effects of dairy and non-dairy protein were different to some extent. When considering the concept of conditional growth [39], the outcomes can be interpreted as indicating that every 1% increase in %PE from either dairy or non-dairy ABP at 9–12 months of age is associated with a positive deviation in WAZ, WLZ and BMIZ from the expected values at 12 months, based on growth parameters at 6 months. Thus, these results suggest that higher protein intake from ASFs is associated with more rapid weight gain than the infant’s expected growth trajectory. Underpinning these clinical findings, our laboratory results showed that the higher consumption of dairy protein, mainly from formula and cow’s milk, significantly increased levels of circulating IGF-1, IGFBP-3 and insulin, which are the main hormonal regulators of human growth and may relate to increased adiposity [40,41]. A possible mechanism explaining the greater effect of dairy protein over other protein sources is the high proportion of leucine, a potent factor stimulating IGF-1 secretion in dairy protein compared to other food sources (14% vs. 8% of amino acids in dairy and meats, respectively) [42]. In addition, some evidence indicates that leucine also plays an essential role in the activation of the mammalian target of rapamycin (mTOR), which is the major regulator of growth and metabolism homeostasis in humans [43]. To our knowledge, this is the first evidence from an LMIC demonstrating an association between high protein intake and rapid weight gain, and the possible mechanism of this association through IGF-1, IGFBP-3 and insulin. More importantly, this cohort also showed the distinctive effect of different protein sources on infant growth, as previous evidence on this issue was inconclusive. The latest systematic review and meta-analysis examining the relationship between high protein intake and growth and risk of childhood overweight/obesity included no studies from LMICs [44]. This systematic review concluded that there is adequate evidence supporting a possibly causal effect of high protein, especially ABP, on BMI (dose–response effect), while limited evidence suggests that high protein intake may affect weight gain/weight-for-age score and the risk of childhood obesity. However, there were several inconclusive results, including the effect of high protein on linear growth and body composition [44]. The present study did not demonstrate a relationship between high protein intake and overweight/obesity due to the very small number of infants who were overweight/obese at 12 months old. 
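To put these effect sizes in concrete terms, the calculation below applies the reported coefficient for conditional WAZ; the 3-percentage-point difference in %PE from dairy is a hypothetical value chosen only for illustration.

```latex
% Reported coefficient: 0.18 SDS per 1% of energy from dairy protein (vs. breast milk).
\[
\Delta\,\text{conditional WAZ} \;\approx\; 0.18 \times 3 \;=\; 0.54\ \text{SDS}
\]
```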
However, there is evidence justifying the concern about the potential impact of high protein intake on overweight/obesity in this population. In 2019, the prevalence of overweight/obesity among Thai infants and young children aged less than 5 years rose to 12.7% [45] compared to the previous national surveys in 2009 and 2016 (8.5% and 8.2%, respectively) [46,47]. The daily protein intake reported in 2013 was similar to the present study; infants and young children aged 6 to less than 36 months had dietary protein intakes about 3 times higher than the Thai recommendations and nearly 80% of the protein was derived from ASFs, while total energy consumption was not different from the recommendation [30,48,49]. Notably, the literature from LMICs generally considers ABP as a preferred protein source due to its beneficial effect in preventing undernutrition [44,50,51]. Theoretically, protein from ASFs should provide adequate amounts of essential amino acids to meet the requirements of infants and children in order to prevent stunting [52]. A recent systematic review of studies on infants and children aged 6–60 months in LMICs did not find any significant associations between the consumption of ASFs and growth outcomes including weight, length/height and head circumference, though the included studies showed high heterogeneity [53]. The literature thus illustrates how ‘optimal’ protein intakes and sources during CF may differ in high-income and low-income settings. Reducing protein in complementary foods in European countries and the United States may help prevent childhood overweight/obesity, while promoting the consumption of ABPs in many low-income countries might mitigate the burden of wasting, stunting and micronutrient deficiencies. Nonetheless, for countries such as Thailand facing the DBM and nutritional transition, using either approach could be problematic. Therefore, such countries should ideally adopt recommendations related to dietary protein based on data from their population and avoid making assumptions by using dietary data from other countries. More importantly, the distinctive effects of different protein sources should be taken into account when considering recommendations for dietary protein during the CF period. Current evidence suggests that dairy protein from formula and cow’s milk can promote rapid weight gain and could contribute to childhood obesity [54], while non-dairy ABP has a lesser impact on weight gain according to this cohort and other studies [55,56]. Therefore, to optimize protein intake during the CF period, nutritional policies focused on decreasing the intake of dairy protein, such as reducing the protein content in infant and follow-on formula and avoiding cow’s milk whilst encouraging mothers to continue breastfeeding throughout the first year of life, should be integrated into CF practices. In addition, non-dairy ABP enriched with essential micronutrients such as iron, zinc, iodine, and vitamin A should be promoted to provide adequate micronutrients whilst avoiding a high intake of dairy protein. Finally, limitations of this study should be noted. First, the results from the present cohort cannot infer causality between dietary protein and rapid weight gain due to the observational study design. The association could be interpreted either way, and it is not possible to conclude whether dietary protein contributes to greater weight gain or whether parents of faster-growing infants provide more food, including protein. 
However, DAGs were applied to appropriately identify potential confounders and to avoid overadjustment and selection bias [57]. Second, the null effect of PBP on growth outcomes should be interpreted with caution because the PBP consumed by infants in this cohort was mainly cereals, whereas legumes and grains containing higher protein quantity and quality, which may be more frequently used in other LMICs, were rarely consumed. Third, the lack of a significant association between protein intake and linear growth may be due to lack of statistical power, as the sample size was calculated based on the expected difference in WLZ at 12 months between infants consuming ASF regularly and those who did not. Fourth, by assessing change in size between 6 and 12 months, and assessing complementary feeding during this period, some of the variability in growth that we quantified may have occurred prior to the dietary exposure. However, this makes any associations of growth and complementary feeding that we detect conservative. Lastly, it should be noted that “extra” weight gain from increasing intake of ABP cannot be assumed to indicate higher body fatness without additional evidence from body composition analysis, and we do not yet know how our findings will translate into the risk of overweight/obesity at later ages. 5. Conclusions: The present cohort provides evidence from a middle-income country that different protein sources may have contrasting influences on infant growth. While high protein intake from ASFs, especially formula and cow’s milk, during the CF period was associated with higher weight gain in a dose–response manner, the study did not find an effect on linear growth. Importantly, higher levels of IGF-1, IGFBP-3 and insulin in infants consuming higher amounts of protein from formula and cow’s milk provided mechanistic support for the clinical findings. However, further studies in populations facing the DBM and nutritional transition are needed to confirm the key findings from this cohort and to investigate the relationship between dietary protein and body composition. A longer follow-up period is also needed to see whether the study population consuming higher protein have a greater risk of overweight/obesity at later ages.
Background: While high protein intake during infancy may increase obesity risk, low protein quality and quantity contribute to undernutrition. This study aimed to investigate the impact of the amount and source of protein on infant growth during complementary feeding (CF) in a country where under- and overnutrition co-exist as the so-called double burden of malnutrition. Methods: A multicenter, prospective cohort study was conducted. Healthy term infants were enrolled with dietary and anthropometric assessments at 6, 9 and 12 months (M). Blood samples were collected at 12M for IGF-1, IGFBP-3 and insulin analyses. Results: A total of 145 infants were enrolled (49.7% female). Animal source foods (ASFs) were the main protein source and showed a positive, dose-response relationship with weight-for-age, weight-for-length and BMI z-scores after adjusting for potential confounders. However, dairy protein had a greater impact on those parameters than non-dairy ASFs, while plant-based protein had no effect. These findings were supported by higher levels of IGF-1, IGFBP-3 and insulin following a higher intake of dairy protein. None of the protein sources were associated with linear growth. Conclusions: This study showed the distinctive impact of different protein sources during CF on infant growth. A high intake of dairy protein, mainly from infant formula, had a greater impact on weight gain and growth-related hormones.
1. Introduction: The double burden of malnutrition (DBM)—the coexistence of under- and overnutrition—represents an emerging public health problem globally, especially for those in lower- and middle-income countries (LMICs) where eating habits are being transformed towards Westernized diets and lifestyles [1]. The World Health Organization (WHO) has emphasized the need for “double-duty actions” to prevent both forms of malnutrition [2]. While undernutrition and overweight were initially considered to affect different groups, they are increasingly recognized to occur within individuals through the life course3. This might potentially affect younger age groups, resulting in the combination of poor linear growth and overweight; however, evidence is lacking. This framework also recognizes that the two forms of malnutrition may have common risk factors in the form of unhealthy diets and environments [3,4]. Optimizing early-life nutrition by improving maternal nutrition, encouraging exclusive breastfeeding, and promoting appropriate complementary feeding (CF) are among the important actions identified to overcome the DBM [5]. The CF period is one of great change, when infants are introduced to foods other than milk [6] and protein intake increases; the percent of energy from protein (%PE) typically rises from around 5% to 15% when complementary foods become the major energy source for breastfed infants [7]. Protein is a key macronutrient promoting growth. However, to our knowledge, no studies have yet focused on the association between protein intake and growth in the context of the DBM. Previous studies have typically investigated the impact of protein intake in early life in populations facing either undernutrition or overnutrition [8], but not both. In high-income settings, research has highlighted the association between “too much” dietary protein in early life and the increased risk of overweight/obesity in later childhood, while previous studies in resource-limited countries have focused on the association of “too little” high-quality protein with undernutrition and, particularly, stunting. There are no studies from LMICs investigating the effect of high protein intake in early life on the growth of infants and young children [8]. Furthermore, it is unclear whether all protein sources (i.e., dairy protein, non-dairy animal-based protein (ABP) and plant-based protein (PBP)) have the same effect on growth [9,10,11,12]. The most robust evidence, from a large, multi-center randomized controlled trial (RCT) in five European countries, reported that high protein intake from formula during infancy significantly increased weight gain, but not linear growth, in children aged 2 and 6 years [13,14]. Additionally, this RCT demonstrated that infants who received high-protein formula had significantly higher plasma insulin-like growth factor-1 (IGF-1) and urine C-peptide, an insulin derivative, at 6 months of age compared to those fed with low-protein formula or that were breastfed. However, the impact of other protein sources during CF was not investigated. During infancy, nutrition is the most important factor promoting growth via the GH–IGF axis [15], while amino acids derived from dietary protein are associated with IGF-1 and insulin secretion [16,17,18,19,20]. The aim of this study was to investigate associations of the amount and source of protein intake during the CF period with the growth of infants in Thailand, where the DBM is prevalent. 
Furthermore, plasma insulin, serum IGF-1 and insulin-like growth factor binding protein 3 (IGFBP-3) were also measured to support the clinical outcomes. We aimed to tackle a key question in the context of the DBM: how a specific component of infant diet may relate to markers of both undernutrition and overweight in early life.
9,769
279
[ 129, 104, 289, 1079, 289 ]
10
[ "protein", "months", "12", "intake", "milk", "12 months", "infants", "dairy", "protein intake", "growth" ]
[ "double burden malnutrition", "maternal nutrition encouraging", "undernutrition overweight early", "infancy nutrition important", "effect preventing undernutrition" ]
null
[CONTENT] protein intake | early-life nutrition | complementary feeding | animal source foods | double burden of malnutrition | infant growth | insulin | insulin-like growth factor-1 [SUMMARY]
null
[CONTENT] protein intake | early-life nutrition | complementary feeding | animal source foods | double burden of malnutrition | infant growth | insulin | insulin-like growth factor-1 [SUMMARY]
[CONTENT] protein intake | early-life nutrition | complementary feeding | animal source foods | double burden of malnutrition | infant growth | insulin | insulin-like growth factor-1 [SUMMARY]
[CONTENT] protein intake | early-life nutrition | complementary feeding | animal source foods | double burden of malnutrition | infant growth | insulin | insulin-like growth factor-1 [SUMMARY]
[CONTENT] protein intake | early-life nutrition | complementary feeding | animal source foods | double burden of malnutrition | infant growth | insulin | insulin-like growth factor-1 [SUMMARY]
[CONTENT] Animals | Dietary Proteins | Hormones | Infant | Infant Formula | Infant Nutritional Physiological Phenomena | Insulin-Like Growth Factor Binding Protein 3 | Insulin-Like Growth Factor I | Insulins | Malnutrition | Prospective Studies [SUMMARY]
null
[CONTENT] Animals | Dietary Proteins | Hormones | Infant | Infant Formula | Infant Nutritional Physiological Phenomena | Insulin-Like Growth Factor Binding Protein 3 | Insulin-Like Growth Factor I | Insulins | Malnutrition | Prospective Studies [SUMMARY]
[CONTENT] Animals | Dietary Proteins | Hormones | Infant | Infant Formula | Infant Nutritional Physiological Phenomena | Insulin-Like Growth Factor Binding Protein 3 | Insulin-Like Growth Factor I | Insulins | Malnutrition | Prospective Studies [SUMMARY]
[CONTENT] Animals | Dietary Proteins | Hormones | Infant | Infant Formula | Infant Nutritional Physiological Phenomena | Insulin-Like Growth Factor Binding Protein 3 | Insulin-Like Growth Factor I | Insulins | Malnutrition | Prospective Studies [SUMMARY]
[CONTENT] Animals | Dietary Proteins | Hormones | Infant | Infant Formula | Infant Nutritional Physiological Phenomena | Insulin-Like Growth Factor Binding Protein 3 | Insulin-Like Growth Factor I | Insulins | Malnutrition | Prospective Studies [SUMMARY]
[CONTENT] double burden malnutrition | maternal nutrition encouraging | undernutrition overweight early | infancy nutrition important | effect preventing undernutrition [SUMMARY]
null
[CONTENT] double burden malnutrition | maternal nutrition encouraging | undernutrition overweight early | infancy nutrition important | effect preventing undernutrition [SUMMARY]
[CONTENT] double burden malnutrition | maternal nutrition encouraging | undernutrition overweight early | infancy nutrition important | effect preventing undernutrition [SUMMARY]
[CONTENT] double burden malnutrition | maternal nutrition encouraging | undernutrition overweight early | infancy nutrition important | effect preventing undernutrition [SUMMARY]
[CONTENT] double burden malnutrition | maternal nutrition encouraging | undernutrition overweight early | infancy nutrition important | effect preventing undernutrition [SUMMARY]
[CONTENT] protein | months | 12 | intake | milk | 12 months | infants | dairy | protein intake | growth [SUMMARY]
null
[CONTENT] protein | months | 12 | intake | milk | 12 months | infants | dairy | protein intake | growth [SUMMARY]
[CONTENT] protein | months | 12 | intake | milk | 12 months | infants | dairy | protein intake | growth [SUMMARY]
[CONTENT] protein | months | 12 | intake | milk | 12 months | infants | dairy | protein intake | growth [SUMMARY]
[CONTENT] protein | months | 12 | intake | milk | 12 months | infants | dairy | protein intake | growth [SUMMARY]
[CONTENT] protein | life | early life | growth | early | undernutrition | dbm | high | intake | protein intake [SUMMARY]
null
[CONTENT] protein | months | milk | 12 | pe | conditional | 12 months | intake | dairy | breast milk [SUMMARY]
[CONTENT] protein | consuming higher | higher | consuming | needed | findings | cohort | period | cow milk | cow [SUMMARY]
[CONTENT] protein | months | infants | intake | 12 | 12 months | milk | dairy | growth | protein intake [SUMMARY]
[CONTENT] protein | months | infants | intake | 12 | 12 months | milk | dairy | growth | protein intake [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] 145 | 49.7% ||| BMI ||| ||| IGF-1 | IGFBP-3 ||| linear [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| 6 | 9 | 12 months ||| 12 | IGF-1 | IGFBP-3 ||| ||| 145 | 49.7% ||| BMI ||| ||| IGF-1 | IGFBP-3 ||| linear ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| 6 | 9 | 12 months ||| 12 | IGF-1 | IGFBP-3 ||| ||| 145 | 49.7% ||| BMI ||| ||| IGF-1 | IGFBP-3 ||| linear ||| ||| [SUMMARY]
The Prevalence and Associated Factors of Cancer Screening Uptake Among a National Population-Based Sample of Adults in Marshall Islands.
33890501
The study aimed to estimate the prevalence and associated factors of cancer screening among men and women in the general population in Marshall Islands.
BACKGROUND
The national cross-sectional sub-study population consisted of 2,813 persons aged 21-75 years (Median = 37.4 years) from the "2017/2018 Marshall Islands STEPS survey". Information about cancer screening uptake included Pap smear or Vaginal Inspection with Acetic Acid (=VIA), clinical breast examination, mammography, faecal occult blood test (FOBT), and colonoscopy.
METHODS
The prevalence of past 2 years mammography screening was 21.7% among women aged 50-74 years, past year CBE 15.9% among women aged 40 years and older, past 3 years Pap smear or VIA 32.6% among women 21-65 years, past year FOBT 21.8% among women and 22.3% among men aged 50-75 years, and past 10 years colonoscopy 9.1% among women and 7.3% among men aged 50-75 years. In adjusted logistic regression, cholesterol screening (AOR: 1.91, 95% CI: 1.07-3.41) was associated with past 2 years mammography screening among women aged 50-74 years. Blood pressure screening (AOR: 2.39, 95% CI: 1.71-3.35), glucose screening (AOR: 1.59, 95% CI: 1.13-2.23), dental visit in the past year (AOR: 1.51, 95% CI: 1.17, 1.96), binge drinking (AOR: 1.88, 95% CI: 1.07-3.30), and 2-3 servings of fruit and vegetable consumption a day (AOR: 1.42, 95% CI: 1.03-1.95) were positively and high physical activity (30 days a month) (AOR: 0.56, 95% CI: 0.41-0.76) was negatively associated with Pap smear or VIA screening among women aged 21-65 years. Higher education (AOR: 2.58, 95% CI: 1.02-6.58), and cholesterol screening (AOR: 2.87, 95% CI: 1.48-5.59), were positively and current smoking (AOR: 0.09, 95% CI: 0.01-0.65) was negatively associated with past 10 years colonoscopy uptake among 50-75 year-olds.
RESULTS
The study showed a low cancer screening uptake, and several factors were identified that can assist in promoting cancer screening in Marshall Islands.
CONCLUSION
[ "Adult", "Aged", "Breast Neoplasms", "Colonic Neoplasms", "Cross-Sectional Studies", "Early Detection of Cancer", "Female", "Humans", "Male", "Micronesia", "Middle Aged", "Uterine Cervical Neoplasms", "Young Adult" ]
8204481
Introduction
Globally, “cancer is the second leading cause of death; 70% of deaths from cancer occur in low- and middle-income countries.” 1 Some of the most common cancers are lung, breast, colorectal, and prostate cancer. 1 The “cervical cancer burden in the Pacific Region is substantial, with age-standardized incidence rates ranging from 8.2 to 50.7 and age-standardized mortality rates from 2.7 to 23.9 per 100,000 women per year.” 2 In the Marshall Islands (an upper-middle-income country in the Pacific), cancer is the second leading cause of death (after diabetes), 3,4 and the most prevalent cancers are cervical, lung, and breast cancers. 4 The “age-standardized rate of cervical cancer was 74 per 100,000” (the highest in the world) in Marshall Islands. 4 The breast cancer and colorectal cancer incidence rates for 2007-2012 were 28.6 and 5.5, respectively, in Marshall Islands. 4 Due to low screening rates (7.8% in 2013) and late diagnosis, breast cancer in the Marshall Islands is associated with high mortality rates. 4 Some cancers can be fatal if not identified and treated early, emphasizing the importance of organized cancer screening. 5 The Marshall Islands offers various cancer screening modalities, e.g. breast cancer screening (clinical breast examination = CBE, with no specific recommendations on age and frequency, and mammography, recommended every 2 years in women 50-74 years), cervical cancer screening (Vaginal Inspection with Acetic Acid = VIA or Pap Smear/Cytology, and others, recommended every 3 years in women 21-65 years) and colorectal cancer screening (fecal occult blood examination = FOBT, recommended every year in women and men 50-75 years, and colonoscopy, recommended every 10 years in women and men 50-75 years). 4 The Marshall Islands developed a national Comprehensive Cancer Control Plan 2017-2022, including cancer screening uptake targets as follows: 30% updated breast cancer screening in women ages 20-75 years, 60% cervical cancer screening at least once in the past 3 years in women 21-65 years, and 20% updated colorectal cancer screening in men and women aged 50-75 years. 4 Cancer screening efforts in Marshall Islands include skills-training on clinical breast examination and Pap smear testing, the purchase of new mammogram machines, and the introduction of VIA as a core option for cervical cancer screening. 4 Screening rates are believed to be below the recommended targets, but no national study has reported on the current cancer screening uptake in Marshall Islands. In addition, it would be of utmost importance to understand the possible facilitators of and barriers to cancer screening in Marshall Islands. Knowing the national prevalence and the possible factors associated with different cancer screening methods could help in developing programs to improve cancer screening in Marshall Islands. As part of the national Demographic and Health Surveys in 18 countries (women 21-49 years), the prevalence of ever having had cervical cancer screening was 29.2% across the 18 countries, 6 27.2% in India, 6 and 10.6% in Tajikistan, 6 while in Turkey ever cervical cancer screening (women 30 years and older) was 22.0%. 7 In an analysis of nationally representative household surveys in 55 low- and middle-income countries, the country-level median of lifetime cervical cancer screening among women aged 30-49 years was 43.6%. 8 The prevalence of breast cancer screening (mammography in the past 5 years) among women aged 40 years and older was 38.4% in China, 10.8% in India, and 15.6% in South Africa. 9
In Italy, the uptake of a Pap smear or HPV test in the past 3 years was 77% (women aged 25-64 years), past 2 years mammography was 70% (women aged 50-69 years), and past 2 years colorectal screening was 38% (50-69 year olds). 10 In Brunei Darussalam, the prevalence of a Pap smear test (women 18-69 years) was 56.5%, mammography (women 18-69 years) 11.3%, clinical breast examination ( = CBE) (women 18-69 years) 56.2%, fecal occult blood test (men and women 18-69 years) 20.0%, and colonoscopy (men and women 18-69 years) 7.9%. 11 Factors associated with cancer screening may include sociodemographic factors, health system factors, health status, and lifestyle factors. 5,12 Sociodemographic factors associated with cancer screening uptake include higher socioeconomic position, 7,13-15 older age, 16 younger age, 17 and urban living. 18,19 Health system factors associated with cancer screening uptake include increased access to health care, 20 having health insurance, 18,21 having had a blood cholesterol test, 22 general health care utilization, 13,16,17,21 and complementary medicine use. 23 Health status factors associated with cancer screening uptake include having chronic conditions, 24 having multimorbidity or comorbidity, 11,25 a family history of non-communicable diseases, 11 not having mental distress or illness or having depressive symptoms, 17,26,27 not having obesity, 17,28 good self-rated health status, 29 and better physical functioning. 17 Positive lifestyle behaviors associated with cancer screening uptake include physical activity, fruit and vegetable consumption, not smoking, 7,17,30 and not consuming alcohol. 11 The study aimed to estimate the prevalence and associated factors of cancer screening uptake among adults in a national population-based survey in Marshall Islands.
Methods
Sample and Procedures Cross-sectional data from the 2017/2018 Marshall Islands STEPwise approach to Surveillance (STEPS) survey were analyzed. 31 Individuals aged 18 years or older participated in the survey from the islands of Majuro, Kwajalein, Arno, Jaluit, Wotje, and Kili, making up 83% of the overall population of the Marshall Islands; the final response rate was 92.3%. 32 “Sample size was determined based on overall adult populations on selected islands in the Republic of the Marshall Islands (Majuro = 1659; Ebeye = 627; Kili = 200; Wotje = 207; Jaluit = 207; Arno = 207).” 32 A multi-stage sampling design was used. Stage 1: households were identified at random according to geographical stratification in Majuro and Ebeye. 32 The country was stratified into 2 major groups, urban (Majuro and Ebeye) and rural (all outer islands). 32 In Majuro and Ebeye, household cluster sampling was used to randomly select households. 32 Stage 2: in Majuro and Ebeye, 1 individual was selected at random from each household. All adults on Kili, Arno, Wotje, and Jabwor (Jaluit) atolls were included in the sample because the adult population is about 200 on each of these atolls. 32 “Participants eligible for the survey were Marshall Islands residents aged 18 years and older, and who were able to comprehend either English or Marshallese and provide consent.” 32 Data were collected electronically on tablets by trained surveyors who conducted face-to-face administration of structured questionnaires and anthropometric measurements. 32 Quality control of completed questionnaires was ensured at different stages during the questionnaire-processing phase. 32 The total sample included 3,029 persons aged 18 years and older, but since we were investigating cancer screening, the sample was restricted to individuals aged 21-75 years; this subsample comprised 2,813 persons.
Sample size calculation for cancer screening: with an acceptable margin of error of 5% and a 95% confidence level, the minimum sample size is 316 for cervical cancer screening (based on a prevalence of 29.2% across 18 countries 6), 261 for mammography (based on a prevalence of 21.6%, the average of China, India, and South Africa 9), 378 for CBE (based on a prevalence of 56.2% in Brunei Darussalam 11), 246 for FOBT (based on a prevalence of 20.0% in Brunei Darussalam 11), and 113 for colonoscopy (based on a prevalence of 7.9% in Brunei Darussalam 11).
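The minimum sample sizes above are consistent with the standard single-proportion formula n = z²·p·(1−p)/e². The sketch below assumes that formula; the function name and labels are illustrative (not from the survey protocol), and the rounded results can differ from the reported minima by a unit or two depending on the rounding convention used.

```python
from math import ceil

def min_sample_size(expected_prevalence: float, margin_of_error: float = 0.05,
                    z: float = 1.96) -> int:
    """Minimum sample size for estimating a single proportion.

    Uses n = z^2 * p * (1 - p) / e^2, with z = 1.96 for a 95% confidence level.
    """
    p = expected_prevalence
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# Expected prevalences taken from the studies cited in the text.
targets = {
    "Pap smear/VIA (29.2%)": 0.292,
    "Mammography (21.6%)": 0.216,
    "CBE (56.2%)": 0.562,
    "FOBT (20.0%)": 0.200,
    "Colonoscopy (7.9%)": 0.079,
}
for label, p in targets.items():
    print(f"{label}: n >= {min_sample_size(p)}")
```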
Results
Sample Characteristics The study population consisted of 2,813 persons aged 21-75 years (median = 37.4 years, IQR = 28.7-59.5); 1,329 (47.2%) were men and 1,484 (52.8%) were women, 74.0% had high school or higher education, and 38.3% had an annual household income of less than US$10,000. More than half of the participants had ever had their blood pressure checked (55.3%) and had ever undergone glucose (diabetes) screening (54.9%) by a health care provider, 21.5% had ever had their blood cholesterol checked, and 36.2% had a dental visit in the past 12 months. About 1 in 4 persons (24.5%) were current smokers (45.3% among men), 11.2% were currently chewing tobacco, 16.4% engaged in binge drinking in the past month, 71.5% had 0-1 servings of fruit and vegetables per day, and 35.0% engaged daily in physical activities or exercises. Almost 1 in 10 participants (8.7%) were currently using traditional medicine for diabetes, hypertension, or high cholesterol, 4.6% had some form of cardiovascular disorder (coronary heart disease, angina, heart attack, or any other kind of heart condition or heart disease), and 47.2% had obesity (see Table 1). Sample Characteristics of Men and Women Aged 21-75 Years, Marshall Islands, STEPS Survey, 2017. Prevalence of Cancer Screening The prevalence of past 2 years mammography screening was 21.7% among women aged 50-74 years, past year CBE 15.9% among women aged 40 years and older, past 3 years Pap smear or VIA 32.6% among women aged 21-65 years, past year FOBT 21.8% among women and 22.3% among men aged 50-75 years, and past 10 years colonoscopy 9.1% among women and 7.3% among men aged 50-75 years (see Table 2). Cancer Screening. VIA = Vaginal Inspection with Acetic Acid; CI = Confidence Interval.
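Table 2 reports each prevalence with a 95% confidence interval. As a rough illustration only, a Wald interval for a simple random sample can be computed as below; the counts are hypothetical (chosen only to mirror the reported mammography figure), and the survey's stratified cluster design would normally call for design-based (weighted) estimation, such as Stata's svy commands, which this sketch does not attempt to reproduce.

```python
from math import sqrt

def prevalence_ci(n_positive: int, n_total: int, z: float = 1.96):
    """Point prevalence with a normal-approximation (Wald) 95% CI.

    Ignores the survey's stratification, clustering, and weights, which a
    design-based analysis would account for; illustration only.
    """
    p = n_positive / n_total
    se = sqrt(p * (1 - p) / n_total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts, not taken from the survey data: roughly 21.7% of the
# 323 women aged 50-74 years in the mammography analysis.
p, lo, hi = prevalence_ci(70, 323)
print(f"prevalence = {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```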
Associations With Mammography Screening In adjusted logistic regression analysis, cholesterol screening (AOR: 1.91, 95% CI: 1.07-3.41) was associated with past 2 years mammography screening among women aged 50-74 years. In addition, in univariate analysis, higher education, blood pressure screening, and glucose screening were associated with mammography screening (see Table 3). Associations With Past 2 Years Mammography Screening (Women 50-74 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 323.
Associations With Clinical Breast Examination In the adjusted logistic regression analysis, glucose screening (AOR: 5.47, 95% CI: 1.94-15.43), cholesterol screening (AOR: 2.20, 95% CI: 1.35-3.57), and a dental visit in the past year (AOR: 1.63, 95% CI: 1.03-2.60) were associated with CBE among women aged 40 years and older. In addition, in univariate analysis, blood pressure screening, 2-3 servings of fruit and vegetables a day, and the use of traditional medicine were associated with CBE (see Table 4). Associations With Past Year Clinical Breast Examination (Women 40+ Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 655.
Associations With Pap Smear or VIA Screening In the adjusted logistic regression analysis, blood pressure screening (AOR: 2.39, 95% CI: 1.71-3.35), glucose screening (AOR: 1.59, 95% CI: 1.13-2.23), a dental visit in the past year (AOR: 1.51, 95% CI: 1.17-1.96), binge drinking (AOR: 1.88, 95% CI: 1.07-3.30), and 2-3 servings of fruit and vegetables a day (AOR: 1.42, 95% CI: 1.03-1.95) were positively associated, and high physical activity (AOR: 0.56, 95% CI: 0.41-0.76) was negatively associated, with Pap smear or VIA screening among women aged 21-65 years. In addition, in univariate analysis, higher education, cholesterol screening, and the use of traditional medicine were associated with Pap smear or VIA screening (see Table 5). Associations With Past 3 Years Pap Smear or VIA (Women 21-65 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 1367.
Associations With Fecal Occult Blood Examination In adjusted logistic regression analysis, glucose screening (AOR: 3.63, 95% CI: 1.43-9.22), cholesterol screening (AOR: 2.04, 95% CI: 1.32-3.15), a dental visit in the past year (AOR: 1.78, 95% CI: 1.17-2.71), intake of 2-3 servings of fruit and vegetables a day (AOR: 1.83, 95% CI: 1.13-2.98), the use of traditional medicine (AOR: 1.77, 95% CI: 1.12-2.80), and cardiovascular disorder (AOR: 2.26, 95% CI: 1.35-4.83) were associated with FOBT among 50-75 year-olds. In addition, in univariate analysis, blood pressure screening was associated with FOBT (see Table 6). Associations With Past Year Fecal Occult Blood Test (50-75 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 676.
Associations With Colonoscopy In adjusted logistic regression analysis, higher education (AOR: 2.58, 95% CI: 1.02-6.58) and cholesterol screening (AOR: 2.87, 95% CI: 1.48-5.59) were positively associated, and current smoking (AOR: 0.09, 95% CI: 0.01-0.65) was negatively associated, with past 10 years colonoscopy uptake among 50-75 year-olds. In addition, in univariate analysis, glucose screening, 1-29 days of physical activity, and the use of traditional medicine were associated with past 10 years colonoscopy uptake (see Table 7). Associations With Past 10 Years Colonoscopy (50-75 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 685.
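The crude and adjusted odds ratios in Tables 3-7 come from unadjusted and adjusted logistic regression models (fitted in Stata 14 according to the Methods). The Python sketch below illustrates the same idea with statsmodels on simulated data; the data frame, variable names, and covariate set are hypothetical and do not reproduce the survey's variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis data set; column names are illustrative only.
rng = np.random.default_rng(0)
n = 655
df = pd.DataFrame({
    "cbe_past_year": rng.integers(0, 2, n),
    "glucose_screening": rng.integers(0, 2, n),
    "cholesterol_screening": rng.integers(0, 2, n),
    "dental_visit_past_year": rng.integers(0, 2, n),
})

# Crude (unadjusted) odds ratio for a single covariate.
crude = smf.logit("cbe_past_year ~ cholesterol_screening", data=df).fit(disp=False)
print("COR:", np.exp(crude.params["cholesterol_screening"]))

# Adjusted model: covariates significant in univariate analysis entered together.
adjusted = smf.logit(
    "cbe_past_year ~ glucose_screening + cholesterol_screening + dental_visit_past_year",
    data=df,
).fit(disp=False)

# Adjusted odds ratios with 95% CIs (exponentiated coefficients and bounds).
or_table = pd.concat([adjusted.params, adjusted.conf_int()], axis=1)
or_table.columns = ["AOR", "2.5%", "97.5%"]
print(np.exp(or_table))
```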
Conclusion
The study showed low cancer screening uptake (mammography, clinical breast examination, Pap smear or VIA, FOBT, and colonoscopy). Several facilitating factors for cancer screening were identified, such as higher education, other health screening (blood pressure, glucose, or cholesterol), health behaviors (dental visits, fruit and vegetable consumption, and not smoking), and the use of traditional medicine, which could inform programs promoting cancer screening in the Marshall Islands. In addition, cancer awareness campaigns, expansion of skills training for health care providers, and improvements to cancer screening infrastructure could help increase the uptake of cancer screening.
[ "Sample and Procedures", "Measures", "Cancer Screening", "Data Analysis", "Ethical Consideration", "Sample Characteristics", "Prevalence of Cancer Screening", "Associations With Mammography Screening", "Associations With Clinical Breast Examination", "Associations With Pap Smear or VIA Screening", "Associations With Fecal Occult Blood Examination", "Associations With Colonoscopy" ]
[ "Cross-sectional data from the 2017/2018 Marshall Islands STEPwise approach to Surveillance (STEPS) survey were analyzed.\n31\n Individuals aged 18 years or older participated in the survey from the islands of Majuro, Kwajalein, Arno, Jaluit, Wotje, and Kili, making up 83% of the overall population of Marshall Islands; the final response rate was 92.3%.\n32\n “Sample size was determined based on overall adult populations on selected islands in the Republic of the Marshall Islands (Majuro = 1659; Ebeye = 627; Kili = 200; Wotje = 207; Jaluit = 207; Arno = 207).”32 A multi-stage sampling design was used: Stage 1: Households were identified at random according to geographical stratification in Majuro and Ebeye.\n32\n The country was stratified into 2 major groups, Urban (Majuro and Ebeye) and Rural (all outer islands).\n32\n In Majuro and Ebeye, household cluster sampling was used to randomly select households in these areas.\n32\n Stage 2: In Majuro and Ebeye, 1 individual was selected at random from each household. All adults in Kili, Arno, Wotje, and Jabwor, Jaluit atolls were included in the sample because the adult populations are about 200 each on these atolls.\n32\n “Participants eligible for the survey were Marshall Islands residents aged 18 years and older, and who were able to comprehend either English or Marshallese and provide consent.”\n32\n Data were collected electronically using a tablet by trained surveyors that conducted face-to-face administration of structured questionnaires and anthropometric measurements.\n32\n Quality control of completed questionnaires was ensured at different stages during the questionnaire-processing phase.\n32\n The total sample included 3,029 persons 18 years and older, but since we were investigating cancer screening, the sample was restricted to individuals aged 21-75 years. The sample size of this subsample was 2,813.\nSample size calculation for cancer screening. All sample sizes calculated with acceptable margin 5%, 95% confidence level, the minimum sample size for cervical cancer screening is 316 (estimated based on a prevalence of 29.2% in 18 countries6), for mammography 261 (estimated based on a prevalence of 21.6%, average of 3 countries, China, India and South Africa9), for CBE 378 (based on prevalence of 56.2% in Brunei Durassalam11), for FBOT 246 (based on 20.0% in Brunei Durassalam11) and for colonoscopy 113 (based on prevalence of 7.9% in Brunei Durassalam11)", "Cancer Screening Colon cancer screening (use showcard): 1) “Have you ever had a colonoscopy? (Yes, No, Don’t know, Refuse) (time since last colonoscopy, 1 = within the past year to 6 = 10 or more years ago)” and 2) “A blood stool test is a test that determines whether the stool contains blood. Have you ever had this test? (Yes, No, Don’t know, Refuse) (time since last blood stool test, 1 = within the past year to 5 = 5 or more years ago)”.32 Outcome variables were defined as past 10 years colonoscopy uptake in 50-75 year-olds, and past year FOBT uptake in 50-75 year-olds.\nWomen cancer screening (use showcard): 1) “Have you ever had a mammogram? A mammogram is done with a machine (Yes, No, Don’t know, Refuse) (time since the last mammogram, 1 = within the past year to 5 = 5 or more years ago)”, 2) “A clinical breast exam is when a doctor, nurse, or other health professional feels the breasts for lumps. Have you ever had a clinical breast exam? 
(Yes, No, Don’t know, Refuse) (time since last clinical breast exam, 1 = within the past year to 5 = 5 or more years ago)”, and 3) “Have you ever had a Pap or VIA test? (Yes, No, Don’t know, Refuse) (time since last Pap or VIA test, 1 = within the past year to 5 = 5 or more years ago)”.\n32\n Those who responded “don’t know” or “refuse” were excluded from the analysis. Outcome variables were defined as past 2 years mammography in women 50-74 years, past year CBE in women 40 years and older, and past 3 years Pap smear or VIA in women 21-65 years.\n\nSocio-demographic factor questions included age (years), sex (male, female), highest level of education (1 = never attended school to 6 = college or university completed), and past year household income (1 = less than US$ 5000 to 5 = US$ 20000 or more, Don’t know, Refused to answer).32\n\nOther health screenings and visits included blood pressure, blood sugar, cholesterol, and dental visits, as follows: 1) “Have you ever had your blood pressure checked by a doctor, nurse, or other health worker?” (Yes, No) 2) “Have you ever had your blood sugar checked by a doctor, nurse, or other health worker?” (Yes, No) 3) “Blood cholesterol is a fatty substance found in the blood. Have you ever had your blood cholesterol checked by a doctor, nurse, or other health worker?” (Yes, No) 4) “How long has it been since you last visited a dentist or a dental clinic for any reason? Include visits to dental specialists, such as orthodontists.” (1 = within the past year to 5 = 5 or more years ago).\n32\n\n\nThe health risk behavior variables included current smoking (Yes, No), current chewing tobacco (Yes, No), past month binge drinking (≥5 units for men and ≥4 units for women), consumption of fruit and vegetables per day (number of days in a week and number of servings a day), and physical activity (“During the past 30 days, other than your regular job, on how many days did you participate in any physical activities or exercises such as running, sports, walking, or going to the gym, specifically for exercise?”).\n32\n Cronbach alpha for the fruit and vegetable consumption measure was 0.82 in this study. Body Mass Index was measured: “<18.5kg/m2 underweight, 18.5-24.4kg/m2 normal weight, 25-29.9kg/m2 overweight and ≥30 kg/m2 obesity.”\n32\n\n\nCardiovascular disorders included self-reported “coronary heart disease; angina, also called angina pectoris; a heart attack (also called myocardial infarction); any kind of heart condition or heart disease (other than the ones I just asked about) (Yes, No)”\n32\n\n\nThe utilization of traditional medicine included 3 questions on currently taking any herbal or traditional remedy for your high blood sugar or diabetes, or high blood pressure or hypertension or high cholesterol (Yes, No).\n32\n\n\nOverall, the “STEPS protocols can be utilized to provide aggregate data for valid between-population comparisons.”\n33\n\n\nColon cancer screening (use showcard): 1) “Have you ever had a colonoscopy? (Yes, No, Don’t know, Refuse) (time since last colonoscopy, 1 = within the past year to 6 = 10 or more years ago)” and 2) “A blood stool test is a test that determines whether the stool contains blood. Have you ever had this test? 
(Yes, No, Don’t know, Refuse) (time since last blood stool test, 1 = within the past year to 5 = 5 or more years ago)”.32 Outcome variables were defined as past 10 years colonoscopy uptake in 50-75 year-olds, and past year FOBT uptake in 50-75 year-olds.\nWomen cancer screening (use showcard): 1) “Have you ever had a mammogram? A mammogram is done with a machine (Yes, No, Don’t know, Refuse) (time since the last mammogram, 1 = within the past year to 5 = 5 or more years ago)”, 2) “A clinical breast exam is when a doctor, nurse, or other health professional feels the breasts for lumps. Have you ever had a clinical breast exam? (Yes, No, Don’t know, Refuse) (time since last clinical breast exam, 1 = within the past year to 5 = 5 or more years ago)”, and 3) “Have you ever had a Pap or VIA test? (Yes, No, Don’t know, Refuse) (time since last Pap or VIA test, 1 = within the past year to 5 = 5 or more years ago)”.\n32\n Those who responded “don’t know” or “refuse” were excluded from the analysis. Outcome variables were defined as past 2 years mammography in women 50-74 years, past year CBE in women 40 years and older, and past 3 years Pap smear or VIA in women 21-65 years.\n\nSocio-demographic factor questions included age (years), sex (male, female), highest level of education (1 = never attended school to 6 = college or university completed), and past year household income (1 = less than US$ 5000 to 5 = US$ 20000 or more, Don’t know, Refused to answer).32\n\nOther health screenings and visits included blood pressure, blood sugar, cholesterol, and dental visits, as follows: 1) “Have you ever had your blood pressure checked by a doctor, nurse, or other health worker?” (Yes, No) 2) “Have you ever had your blood sugar checked by a doctor, nurse, or other health worker?” (Yes, No) 3) “Blood cholesterol is a fatty substance found in the blood. Have you ever had your blood cholesterol checked by a doctor, nurse, or other health worker?” (Yes, No) 4) “How long has it been since you last visited a dentist or a dental clinic for any reason? Include visits to dental specialists, such as orthodontists.” (1 = within the past year to 5 = 5 or more years ago).\n32\n\n\nThe health risk behavior variables included current smoking (Yes, No), current chewing tobacco (Yes, No), past month binge drinking (≥5 units for men and ≥4 units for women), consumption of fruit and vegetables per day (number of days in a week and number of servings a day), and physical activity (“During the past 30 days, other than your regular job, on how many days did you participate in any physical activities or exercises such as running, sports, walking, or going to the gym, specifically for exercise?”).\n32\n Cronbach alpha for the fruit and vegetable consumption measure was 0.82 in this study. 
Body Mass Index was measured: “<18.5kg/m2 underweight, 18.5-24.4kg/m2 normal weight, 25-29.9kg/m2 overweight and ≥30 kg/m2 obesity.”\n32\n\n\nCardiovascular disorders included self-reported “coronary heart disease; angina, also called angina pectoris; a heart attack (also called myocardial infarction); any kind of heart condition or heart disease (other than the ones I just asked about) (Yes, No)”\n32\n\n\nThe utilization of traditional medicine included 3 questions on currently taking any herbal or traditional remedy for your high blood sugar or diabetes, or high blood pressure or hypertension or high cholesterol (Yes, No).\n32\n\n\nOverall, the “STEPS protocols can be utilized to provide aggregate data for valid between-population comparisons.”\n33\n\n\nData Analysis Descriptive statistics were used to summarize sample and cancer screening prevalence characteristics. Unadjusted and adjusted (including variables significant p < 0.05 at univariate analysis) logistic regression analyses were used to predict the prevalence of mammography, CBE, Pap smear or VIA, FOBT and colonoscopy. Covariates, based on literature review,\n5,\n7,\n12\n\n\n\n-\n17,\n21,\n22,\n24,\n25,\n28,\n30\n included sociodemographic factors, health care utilization, health risk behaviors, cardiovascular disorder, body mass index, and use of traditional medicine for all outcome variables. Explanatory variables are statistically significant at p < 0.05 and are free from multicollinearity as measured by the variance inflation factor (VIF <  1.8). Model assumptions were checked with residual plots, and the overall fitness of the models was checked with the Hosmer-Lemeshow goodness-of-fit test. Missing values (<2% for all variables except for body weight, 6.1%) were excluded. All statistical analyses were conducted using STATA software version 14.0 (Stata Corporation, College Station, TX, USA).\nDescriptive statistics were used to summarize sample and cancer screening prevalence characteristics. Unadjusted and adjusted (including variables significant p < 0.05 at univariate analysis) logistic regression analyses were used to predict the prevalence of mammography, CBE, Pap smear or VIA, FOBT and colonoscopy. Covariates, based on literature review,\n5,\n7,\n12\n\n\n\n-\n17,\n21,\n22,\n24,\n25,\n28,\n30\n included sociodemographic factors, health care utilization, health risk behaviors, cardiovascular disorder, body mass index, and use of traditional medicine for all outcome variables. Explanatory variables are statistically significant at p < 0.05 and are free from multicollinearity as measured by the variance inflation factor (VIF <  1.8). Model assumptions were checked with residual plots, and the overall fitness of the models was checked with the Hosmer-Lemeshow goodness-of-fit test. Missing values (<2% for all variables except for body weight, 6.1%) were excluded. All statistical analyses were conducted using STATA software version 14.0 (Stata Corporation, College Station, TX, USA).\nEthical Consideration The Marshall Islands Ministry of Health & Human Services provided ethics approval of the study, and written informed consent was obtained from study participants.\n32\n\n\nThe Marshall Islands Ministry of Health & Human Services provided ethics approval of the study, and written informed consent was obtained from study participants.\n32\n\n", "Colon cancer screening (use showcard): 1) “Have you ever had a colonoscopy? 
(Yes, No, Don’t know, Refuse) (time since last colonoscopy, 1 = within the past year to 6 = 10 or more years ago)” and 2) “A blood stool test is a test that determines whether the stool contains blood. Have you ever had this test? (Yes, No, Don’t know, Refuse) (time since last blood stool test, 1 = within the past year to 5 = 5 or more years ago)”.32 Outcome variables were defined as past 10 years colonoscopy uptake in 50-75 year-olds, and past year FOBT uptake in 50-75 year-olds.\nWomen cancer screening (use showcard): 1) “Have you ever had a mammogram? A mammogram is done with a machine (Yes, No, Don’t know, Refuse) (time since the last mammogram, 1 = within the past year to 5 = 5 or more years ago)”, 2) “A clinical breast exam is when a doctor, nurse, or other health professional feels the breasts for lumps. Have you ever had a clinical breast exam? (Yes, No, Don’t know, Refuse) (time since last clinical breast exam, 1 = within the past year to 5 = 5 or more years ago)”, and 3) “Have you ever had a Pap or VIA test? (Yes, No, Don’t know, Refuse) (time since last Pap or VIA test, 1 = within the past year to 5 = 5 or more years ago)”.\n32\n Those who responded “don’t know” or “refuse” were excluded from the analysis. Outcome variables were defined as past 2 years mammography in women 50-74 years, past year CBE in women 40 years and older, and past 3 years Pap smear or VIA in women 21-65 years.\n\nSocio-demographic factor questions included age (years), sex (male, female), highest level of education (1 = never attended school to 6 = college or university completed), and past year household income (1 = less than US$ 5000 to 5 = US$ 20000 or more, Don’t know, Refused to answer).32\n\nOther health screenings and visits included blood pressure, blood sugar, cholesterol, and dental visits, as follows: 1) “Have you ever had your blood pressure checked by a doctor, nurse, or other health worker?” (Yes, No) 2) “Have you ever had your blood sugar checked by a doctor, nurse, or other health worker?” (Yes, No) 3) “Blood cholesterol is a fatty substance found in the blood. Have you ever had your blood cholesterol checked by a doctor, nurse, or other health worker?” (Yes, No) 4) “How long has it been since you last visited a dentist or a dental clinic for any reason? Include visits to dental specialists, such as orthodontists.” (1 = within the past year to 5 = 5 or more years ago).\n32\n\n\nThe health risk behavior variables included current smoking (Yes, No), current chewing tobacco (Yes, No), past month binge drinking (≥5 units for men and ≥4 units for women), consumption of fruit and vegetables per day (number of days in a week and number of servings a day), and physical activity (“During the past 30 days, other than your regular job, on how many days did you participate in any physical activities or exercises such as running, sports, walking, or going to the gym, specifically for exercise?”).\n32\n Cronbach alpha for the fruit and vegetable consumption measure was 0.82 in this study. 
Body Mass Index was measured: “<18.5kg/m2 underweight, 18.5-24.4kg/m2 normal weight, 25-29.9kg/m2 overweight and ≥30 kg/m2 obesity.”\n32\n\n\nCardiovascular disorders included self-reported “coronary heart disease; angina, also called angina pectoris; a heart attack (also called myocardial infarction); any kind of heart condition or heart disease (other than the ones I just asked about) (Yes, No)”\n32\n\n\nThe utilization of traditional medicine included 3 questions on currently taking any herbal or traditional remedy for your high blood sugar or diabetes, or high blood pressure or hypertension or high cholesterol (Yes, No).\n32\n\n\nOverall, the “STEPS protocols can be utilized to provide aggregate data for valid between-population comparisons.”\n33\n\n", "Descriptive statistics were used to summarize sample and cancer screening prevalence characteristics. Unadjusted and adjusted (including variables significant p < 0.05 at univariate analysis) logistic regression analyses were used to predict the prevalence of mammography, CBE, Pap smear or VIA, FOBT and colonoscopy. Covariates, based on literature review,\n5,\n7,\n12\n\n\n\n-\n17,\n21,\n22,\n24,\n25,\n28,\n30\n included sociodemographic factors, health care utilization, health risk behaviors, cardiovascular disorder, body mass index, and use of traditional medicine for all outcome variables. Explanatory variables are statistically significant at p < 0.05 and are free from multicollinearity as measured by the variance inflation factor (VIF <  1.8). Model assumptions were checked with residual plots, and the overall fitness of the models was checked with the Hosmer-Lemeshow goodness-of-fit test. Missing values (<2% for all variables except for body weight, 6.1%) were excluded. All statistical analyses were conducted using STATA software version 14.0 (Stata Corporation, College Station, TX, USA).", "The Marshall Islands Ministry of Health & Human Services provided ethics approval of the study, and written informed consent was obtained from study participants.\n32\n\n", "The study population consisted of 2,813 persons aged 21-75 years (Median = 37.4 years, IQR = 28.7-59.5), 1,329 (47.2%) were men and 1,484 (52.8%) were women, 74.0% had high school or higher education, and 38.3% had an annual household income of lower than 10000$. More than half of the participants (55.3%) had ever their blood pressure checked and had ever undergone glucose (diabetes) screening (54.9%) by a health care provider, 21.5% had ever their blood cholesterol checked, and 36.2% had a dental visit in the past 12 months.\nOne in 5 persons (24.5%) were current smokers (45.3% among men), 11.2% were currently chewing tobacco, 16.4% engaged in binge drinking in the past month, 71.5% had 0-1 servings of fruit and vegetables per day, and 35.0% engaged daily in physical activities or exercises. 
Almost 1 in 10 participants (8.7%) were currently using traditional medicine for diabetes, or hypertension or high cholesterol, 4.6% had some form of cardiovascular disorder (coronary heart disease, angina, heart attack, any kind of heart condition, or heart disease), and 47.2% had obesity (see Table 1).\nSample Characteristics of Men and Women Aged 21-75 years, Marshall Islands, STEPS Survey, 2017.", "The prevalence of past 2 years mammography screening was 21.7% among women aged 50-74 years, past year CBE 15.9% among women aged 40 years and older, past 3 years Pap smear or VIA 32.6% among women 21-65 years, past year FOBT 21.8% among women aged 50-75 years, past year FOBT 22.3% among men aged 50-75 years, past 10 years colonoscopy 9.1% among women aged 50-75 years and past 10 years colonoscopy 7.3% among men aged 50-75 years (see Table 2).\nCancer Screening.\nVIA = Vaginal Inspection with Acetic Acid; CI = Confidence Interval.", "In adjusted logistic regression, cholesterol screening (AOR: 1.91, 95% CI: 1.07-3.41) was associated with past 2 years mammography screening among women aged 50-74 years. In addition, in univariate analysis, higher education, blood pressure, and glucose screening were associated with mammography screening (see Table 3).\nAssociations With Past 2 Years Mammography Screening (Women 50-74 Years).\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval.\nN = 323.", "In the adjusted logistic regression analysis, glucose screening (AOR: 5.47, 95% CI: 1.94-15.43), cholesterol screening (AOR: 2.20, 95% CI: 1.35-3.57), and dental visit in the past year (AOR: 1.63, 95% CI: 1.03-2.60) were associated with CBE among women aged 40 years and older. In addition, in univariate analysis, blood pressure screening, 2-3 servings of fruit and vegetable consumption a day, and the use of traditional medicine were associated with CBE (see Table 4).\nAssociations With Past Year Clinical Breast Examination (Women 40+ years).\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval.\nN = 655.", "In the adjusted logistic regression analysis, blood pressure screening (AOR: 2.39, 95% CI: 1.71-3.35), glucose screening (AOR: 1.59, 95% CI: 1.13-2.23), dental visit in the past year (AOR: 1.51, 95% CI: 1.17, 1.96), binge drinking (AOR: 1.88, 95% CI: 1.07-3.30), and 2-3 servings of fruit and vegetable consumption a day (AOR: 1.42, 95% CI: 1.03-1.95) were positively and high physical activity (AOR: 0.56, 95% CI: 0.41-0.76) was negatively associated with Pap smear or VIA screening among women aged 21-65 years. In addition, in univariate analysis, higher education, cholesterol screening, and the use of traditional medicine were associated with Pap smear or VIA screening (see Table 5).\nAssociations With Past 3 Years Pap Smear or VIA (Women 21-65 Years).\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval.\nN = 1367.", "In adjusted logistic regression analysis, glucose screening (AOR: 3.63, 95% CI: 1.43-9.22), cholesterol screening (AOR: 2.04, 95% CI: 1.32-3.15), dental visit in the past year (AOR: 1.78, 95% CI: 1.17-2.71), intake of 2-3 servings of fruit and vegetables a day (AOR: 1.83, 95% CI: 1.13-2.98), the use of traditional medicine (AOR: 1.77, 95% CI: 1.12-2.80), and cardiovascular disorder (AOR: 2.26, 95% CI: 1.35, 4.83) were associated with FOBE among 50-75 year-olds. 
In addition, in univariate analysis, blood pressure screening was associated with FOBE (see Table 6).\nAssociations With Past Year Fecal Occult Blood Test (50-75 Years).\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval.\nN = 676.", "In adjusted logistic regression analysis, higher education (AOR: 2.58, 95% CI: 1.02-6.58), and cholesterol screening (AOR: 2.87, 95% CI: 1.48-5.59), were positively and current smoking (AOR: 0.09, 95% CI: 0.01-0.65) was negatively associated with past 10 years colonoscopy uptake among 50-75 year-olds. In addition, in univariate analysis, glucose screening, 1-29 days physical activity, and the use of traditional medicine were associated with past 10 years colonoscopy uptake (see Table 7).\nAssociations With Past 10 Years Colonoscopy (50-75 years).\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval.\nN = 685." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Sample and Procedures", "Measures", "Cancer Screening", "Data Analysis", "Ethical Consideration", "Results", "Sample Characteristics", "Prevalence of Cancer Screening", "Associations With Mammography Screening", "Associations With Clinical Breast Examination", "Associations With Pap Smear or VIA Screening", "Associations With Fecal Occult Blood Examination", "Associations With Colonoscopy", "Discussion", "Conclusion" ]
[ "Globally, “cancer is the second leading cause of death; 70% of deaths from cancer occur in low- and middle-income countries.”\n1\n Some of the most common cancers are lung, breast, colorectal, and prostate cancer.\n1\n The “cervical cancer burden in the Pacific Region is substantial, with age-standardized incidence rates ranging from 8.2 to 50.7 and age-standardized mortality rates from 2.7 to 23.9 per 100,000 women per year.”\n2\n In the Marshall Island’s (an upper middle income country in the Pacific) cancer is the second leading cause of death (after diabetes),\n3,4\n and the most prevalent cancers are cervical, lung, and breast cancers.\n4\n The “age-standardized rate of cervical cancer was 74 per 100,000” (the highest in the world) in Marshall Islands.\n4\n The breast cancer and colorectal cancer incidence rates 2007-2012 were 28.6 and 5.5, respectively, in Marshall Islands.\n4\n Due to low screening rates (7.8% in 2013) and late diagnosis, breast cancer in the Marshall Islands is associated with high mortality rates.\n4\n\n\nSome cancers can be fatal if not identified and treated early, emphasizing the importance of organized cancer screening.\n5\n Marshall Islands offer various cancer screening modalities, e.g. breast cancer screening (clinical breast examination = CBE with no specific recommendations on age and frequency and mammography, recommended every 2 years in women 50-74 years), cervical cancer screening (Vaginal Inspection with Acetic Acid = VIA or Pap Smear/Cytology, and others, recommended every 3 years in women 21-65 years) and colorectal cancer screening (fecal occult blood examination = FOBT, recommended every year in women and men 50-75 years and colonoscopy, recommended every 10 years in women and men 50-75 years).\n4\n Marshall Islands developed a national Comprehensive Cancer Control Plan 2017-2022, including cancer screening uptake targets as follows: 30% updated breast cancer screening in women ages 20-75 years, 60% cervical cancer screening at least once in the past 3 years in women 21-65 years, and 20% updated colorectal cancer screening in men and women aged 50-75 years.4 Cancer screening efforts in Marshall Islands include skills-training on clinical breast examination and Pap smear testing, purchase of new mammogram machines, and VIA was introduced as a core option for cervical cancer screening.\n4\n It is believed that screening rates are below the recommended targets but no national study has reported on the current cancer screening uptake in Marshall Islands. In addition, it would be of outmost importance to have an understanding of the possible facilitators and barriers of cancer screening in Marshall Islands. 
Knowing the national prevalence and possible factors associated with different cancer screening methods could help in developing programs to improve cancer screening in Marshall Islands.\nAs part of the national Demographic and Health Surveys in 18 countries (women 21-49 years), the prevalence of the utilization of ever cervical cancer screening was 29.2% in 18 countries,\n6\n in India 27.2%,\n6\n in Tajikistan 10.6%,\n6\n and in Turkey ever cervical cancer screening (women 30 years and older) 22.0%.\n7\n In an analysis of nationally representative household surveys in 55 low- and middle-income countries, the country level median of lifetime cervical cancer screening among women aged 30-49 years 43.6%.\n8\n The prevalence of breast cancer screening (mammography in the past 5 years) among women aged 40 years and older was 38.4% in China, 10.8% in India, and 15.6% in South Africa.\n9\n In Italy, the uptake of past 3 years Pap smear or HPV test was 77% (25-64 years old women), past 2 years mammography was 70% (50-69 years old women), and past 2 years colorectal screening was 38% (50-69 year olds).\n10\n In Brunei Darussalam, the prevalence of a pap smear test (women 18-69 years) was 56.5%, mammography (women 18-69 years) 11.3%, clinical breast examination ( = CBE)(women 18-69 years) 56.2%, fecal occult blood test (men and women 18-69 years) 20.0%, and colonoscopy (men and women 18-69 years) 7.9%.\n11\n\n\nFactors associated with cancer screening may include sociodemographic factors, health system factors, health status, and lifestyle factors.\n5,12\n Sociodemographic factors associated with cancer screening uptake include higher socioeconomic position, \n7,13\n-15\n older age,\n16\n younger age,\n17\n and urban living.\n18,19\n Health system issues associated with cancer screening uptake include increased access to health care,\n20\n had health insurance,\n18,21\n having had blood cholesterol test,\n22\n general health care utilization,\n13,16,17,21\n and complementary medicine use.\n23\n Health status associated with cancer screening uptake include having chronic conditions,\n24\n having multimorbidity or comorbidity,\n11,25\n family history of non-communicable diseases,\n11\n not having mental distress or illness or having depressive symptoms,\n17,26,27\n not having obesity,\n17,28\n good self-rated health status,\n29\n and better physical functioning.\n17\n Positive lifestyle behaviors associated with cancer screening uptake include such as physical activity, fruit and vegetable consumption, and not smoking,\n7,17,30\n and not consuming alcohol.\n11\n The study aimed to estimate the prevalence and associated factors of cancer screening uptake among adults a national population-based survey in Marshall Islands.", "Sample and Procedures Cross-sectional data from the 2017/2018 Marshall Islands STEPwise approach to Surveillance (STEPS) survey were analyzed.\n31\n Individuals aged 18 years or older participated in the survey from the islands of Majuro, Kwajalein, Arno, Jaluit, Wotje, and Kili, making up 83% of the overall population of Marshall Islands; the final response rate was 92.3%.\n32\n “Sample size was determined based on overall adult populations on selected islands in the Republic of the Marshall Islands (Majuro = 1659; Ebeye = 627; Kili = 200; Wotje = 207; Jaluit = 207; Arno = 207).”32 A multi-stage sampling design was used: Stage 1: Households were identified at random according to geographical stratification in Majuro and Ebeye.\n32\n The country was stratified into 2 major groups, 
Urban (Majuro and Ebeye) and Rural (all outer islands).\n32\n In Majuro and Ebeye, household cluster sampling was used to randomly select households in these areas.\n32\n Stage 2: In Majuro and Ebeye, 1 individual was selected at random from each household. All adults in Kili, Arno, Wotje, and Jabwor, Jaluit atolls were included in the sample because the adult populations are about 200 each on these atolls.\n32\n “Participants eligible for the survey were Marshall Islands residents aged 18 years and older, and who were able to comprehend either English or Marshallese and provide consent.”\n32\n Data were collected electronically using a tablet by trained surveyors that conducted face-to-face administration of structured questionnaires and anthropometric measurements.\n32\n Quality control of completed questionnaires was ensured at different stages during the questionnaire-processing phase.\n32\n The total sample included 3,029 persons 18 years and older, but since we were investigating cancer screening, the sample was restricted to individuals aged 21-75 years. The sample size of this subsample was 2,813.\nSample size calculation for cancer screening. All sample sizes calculated with acceptable margin 5%, 95% confidence level, the minimum sample size for cervical cancer screening is 316 (estimated based on a prevalence of 29.2% in 18 countries6), for mammography 261 (estimated based on a prevalence of 21.6%, average of 3 countries, China, India and South Africa9), for CBE 378 (based on prevalence of 56.2% in Brunei Durassalam11), for FBOT 246 (based on 20.0% in Brunei Durassalam11) and for colonoscopy 113 (based on prevalence of 7.9% in Brunei Durassalam11)\nCross-sectional data from the 2017/2018 Marshall Islands STEPwise approach to Surveillance (STEPS) survey were analyzed.\n31\n Individuals aged 18 years or older participated in the survey from the islands of Majuro, Kwajalein, Arno, Jaluit, Wotje, and Kili, making up 83% of the overall population of Marshall Islands; the final response rate was 92.3%.\n32\n “Sample size was determined based on overall adult populations on selected islands in the Republic of the Marshall Islands (Majuro = 1659; Ebeye = 627; Kili = 200; Wotje = 207; Jaluit = 207; Arno = 207).”32 A multi-stage sampling design was used: Stage 1: Households were identified at random according to geographical stratification in Majuro and Ebeye.\n32\n The country was stratified into 2 major groups, Urban (Majuro and Ebeye) and Rural (all outer islands).\n32\n In Majuro and Ebeye, household cluster sampling was used to randomly select households in these areas.\n32\n Stage 2: In Majuro and Ebeye, 1 individual was selected at random from each household. All adults in Kili, Arno, Wotje, and Jabwor, Jaluit atolls were included in the sample because the adult populations are about 200 each on these atolls.\n32\n “Participants eligible for the survey were Marshall Islands residents aged 18 years and older, and who were able to comprehend either English or Marshallese and provide consent.”\n32\n Data were collected electronically using a tablet by trained surveyors that conducted face-to-face administration of structured questionnaires and anthropometric measurements.\n32\n Quality control of completed questionnaires was ensured at different stages during the questionnaire-processing phase.\n32\n The total sample included 3,029 persons 18 years and older, but since we were investigating cancer screening, the sample was restricted to individuals aged 21-75 years. 
The sample size of this subsample was 2,813.\nSample size calculation for cancer screening. All sample sizes calculated with acceptable margin 5%, 95% confidence level, the minimum sample size for cervical cancer screening is 316 (estimated based on a prevalence of 29.2% in 18 countries6), for mammography 261 (estimated based on a prevalence of 21.6%, average of 3 countries, China, India and South Africa9), for CBE 378 (based on prevalence of 56.2% in Brunei Durassalam11), for FBOT 246 (based on 20.0% in Brunei Durassalam11) and for colonoscopy 113 (based on prevalence of 7.9% in Brunei Durassalam11)", "Cross-sectional data from the 2017/2018 Marshall Islands STEPwise approach to Surveillance (STEPS) survey were analyzed.\n31\n Individuals aged 18 years or older participated in the survey from the islands of Majuro, Kwajalein, Arno, Jaluit, Wotje, and Kili, making up 83% of the overall population of Marshall Islands; the final response rate was 92.3%.\n32\n “Sample size was determined based on overall adult populations on selected islands in the Republic of the Marshall Islands (Majuro = 1659; Ebeye = 627; Kili = 200; Wotje = 207; Jaluit = 207; Arno = 207).”32 A multi-stage sampling design was used: Stage 1: Households were identified at random according to geographical stratification in Majuro and Ebeye.\n32\n The country was stratified into 2 major groups, Urban (Majuro and Ebeye) and Rural (all outer islands).\n32\n In Majuro and Ebeye, household cluster sampling was used to randomly select households in these areas.\n32\n Stage 2: In Majuro and Ebeye, 1 individual was selected at random from each household. All adults in Kili, Arno, Wotje, and Jabwor, Jaluit atolls were included in the sample because the adult populations are about 200 each on these atolls.\n32\n “Participants eligible for the survey were Marshall Islands residents aged 18 years and older, and who were able to comprehend either English or Marshallese and provide consent.”\n32\n Data were collected electronically using a tablet by trained surveyors that conducted face-to-face administration of structured questionnaires and anthropometric measurements.\n32\n Quality control of completed questionnaires was ensured at different stages during the questionnaire-processing phase.\n32\n The total sample included 3,029 persons 18 years and older, but since we were investigating cancer screening, the sample was restricted to individuals aged 21-75 years. The sample size of this subsample was 2,813.\nSample size calculation for cancer screening. All sample sizes calculated with acceptable margin 5%, 95% confidence level, the minimum sample size for cervical cancer screening is 316 (estimated based on a prevalence of 29.2% in 18 countries6), for mammography 261 (estimated based on a prevalence of 21.6%, average of 3 countries, China, India and South Africa9), for CBE 378 (based on prevalence of 56.2% in Brunei Durassalam11), for FBOT 246 (based on 20.0% in Brunei Durassalam11) and for colonoscopy 113 (based on prevalence of 7.9% in Brunei Durassalam11)", "Cancer Screening Colon cancer screening (use showcard): 1) “Have you ever had a colonoscopy? (Yes, No, Don’t know, Refuse) (time since last colonoscopy, 1 = within the past year to 6 = 10 or more years ago)” and 2) “A blood stool test is a test that determines whether the stool contains blood. Have you ever had this test? 
(Yes, No, Don’t know, Refuse) (time since last blood stool test, 1 = within the past year to 5 = 5 or more years ago)”.32 Outcome variables were defined as past 10 years colonoscopy uptake in 50-75 year-olds, and past year FOBT uptake in 50-75 year-olds.\nWomen cancer screening (use showcard): 1) “Have you ever had a mammogram? A mammogram is done with a machine (Yes, No, Don’t know, Refuse) (time since the last mammogram, 1 = within the past year to 5 = 5 or more years ago)”, 2) “A clinical breast exam is when a doctor, nurse, or other health professional feels the breasts for lumps. Have you ever had a clinical breast exam? (Yes, No, Don’t know, Refuse) (time since last clinical breast exam, 1 = within the past year to 5 = 5 or more years ago)”, and 3) “Have you ever had a Pap or VIA test? (Yes, No, Don’t know, Refuse) (time since last Pap or VIA test, 1 = within the past year to 5 = 5 or more years ago)”.\n32\n Those who responded “don’t know” or “refuse” were excluded from the analysis. Outcome variables were defined as past 2 years mammography in women 50-74 years, past year CBE in women 40 years and older, and past 3 years Pap smear or VIA in women 21-65 years.\n\nSocio-demographic factor questions included age (years), sex (male, female), highest level of education (1 = never attended school to 6 = college or university completed), and past year household income (1 = less than US$ 5000 to 5 = US$ 20000 or more, Don’t know, Refused to answer).32\n\nOther health screenings and visits included blood pressure, blood sugar, cholesterol, and dental visits, as follows: 1) “Have you ever had your blood pressure checked by a doctor, nurse, or other health worker?” (Yes, No) 2) “Have you ever had your blood sugar checked by a doctor, nurse, or other health worker?” (Yes, No) 3) “Blood cholesterol is a fatty substance found in the blood. Have you ever had your blood cholesterol checked by a doctor, nurse, or other health worker?” (Yes, No) 4) “How long has it been since you last visited a dentist or a dental clinic for any reason? Include visits to dental specialists, such as orthodontists.” (1 = within the past year to 5 = 5 or more years ago).\n32\n\n\nThe health risk behavior variables included current smoking (Yes, No), current chewing tobacco (Yes, No), past month binge drinking (≥5 units for men and ≥4 units for women), consumption of fruit and vegetables per day (number of days in a week and number of servings a day), and physical activity (“During the past 30 days, other than your regular job, on how many days did you participate in any physical activities or exercises such as running, sports, walking, or going to the gym, specifically for exercise?”).\n32\n Cronbach alpha for the fruit and vegetable consumption measure was 0.82 in this study. 
Body Mass Index was measured: “<18.5kg/m2 underweight, 18.5-24.4kg/m2 normal weight, 25-29.9kg/m2 overweight and ≥30 kg/m2 obesity.”\n32\n\n\nCardiovascular disorders included self-reported “coronary heart disease; angina, also called angina pectoris; a heart attack (also called myocardial infarction); any kind of heart condition or heart disease (other than the ones I just asked about) (Yes, No)”\n32\n\n\nThe utilization of traditional medicine included 3 questions on currently taking any herbal or traditional remedy for your high blood sugar or diabetes, or high blood pressure or hypertension or high cholesterol (Yes, No).\n32\n\n\nOverall, the “STEPS protocols can be utilized to provide aggregate data for valid between-population comparisons.”\n33\n\n\nColon cancer screening (use showcard): 1) “Have you ever had a colonoscopy? (Yes, No, Don’t know, Refuse) (time since last colonoscopy, 1 = within the past year to 6 = 10 or more years ago)” and 2) “A blood stool test is a test that determines whether the stool contains blood. Have you ever had this test? (Yes, No, Don’t know, Refuse) (time since last blood stool test, 1 = within the past year to 5 = 5 or more years ago)”.32 Outcome variables were defined as past 10 years colonoscopy uptake in 50-75 year-olds, and past year FOBT uptake in 50-75 year-olds.\nWomen cancer screening (use showcard): 1) “Have you ever had a mammogram? A mammogram is done with a machine (Yes, No, Don’t know, Refuse) (time since the last mammogram, 1 = within the past year to 5 = 5 or more years ago)”, 2) “A clinical breast exam is when a doctor, nurse, or other health professional feels the breasts for lumps. Have you ever had a clinical breast exam? (Yes, No, Don’t know, Refuse) (time since last clinical breast exam, 1 = within the past year to 5 = 5 or more years ago)”, and 3) “Have you ever had a Pap or VIA test? (Yes, No, Don’t know, Refuse) (time since last Pap or VIA test, 1 = within the past year to 5 = 5 or more years ago)”.\n32\n Those who responded “don’t know” or “refuse” were excluded from the analysis. Outcome variables were defined as past 2 years mammography in women 50-74 years, past year CBE in women 40 years and older, and past 3 years Pap smear or VIA in women 21-65 years.\n\nSocio-demographic factor questions included age (years), sex (male, female), highest level of education (1 = never attended school to 6 = college or university completed), and past year household income (1 = less than US$ 5000 to 5 = US$ 20000 or more, Don’t know, Refused to answer).32\n\nOther health screenings and visits included blood pressure, blood sugar, cholesterol, and dental visits, as follows: 1) “Have you ever had your blood pressure checked by a doctor, nurse, or other health worker?” (Yes, No) 2) “Have you ever had your blood sugar checked by a doctor, nurse, or other health worker?” (Yes, No) 3) “Blood cholesterol is a fatty substance found in the blood. Have you ever had your blood cholesterol checked by a doctor, nurse, or other health worker?” (Yes, No) 4) “How long has it been since you last visited a dentist or a dental clinic for any reason? 
Data Analysis: Descriptive statistics were used to summarize the sample and the prevalence of cancer screening. Unadjusted and adjusted logistic regression analyses (the adjusted models including the variables significant at p < 0.05 in univariate analysis) were used to identify factors associated with mammography, CBE, Pap smear or VIA, FOBT, and colonoscopy. Covariates, selected on the basis of the literature,5,7,12-17,21,22,24,25,28,30 included sociodemographic factors, health care utilization, health risk behaviors, cardiovascular disorder, body mass index, and use of traditional medicine for all outcome variables. Explanatory variables retained in the models were statistically significant at p < 0.05 and free from multicollinearity as measured by the variance inflation factor (VIF < 1.8). Model assumptions were checked with residual plots, and overall model fit was checked with the Hosmer-Lemeshow goodness-of-fit test. Missing values (<2% for all variables except body weight, 6.1%) were excluded. All statistical analyses were conducted using Stata version 14.0 (Stata Corporation, College Station, TX, USA).
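The analyses were run in Stata 14; the snippet below is only a rough Python/statsmodels analogue of the adjusted step (logit model plus a VIF screen), included to make the workflow concrete. The outcome and covariate names are hypothetical placeholders, and the Hosmer-Lemeshow check is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def adjusted_logit(df: pd.DataFrame, outcome: str, covariates: list):
    """Fit an adjusted logistic regression and report VIFs for the covariates."""
    X = sm.add_constant(df[covariates].astype(float))
    # Multicollinearity screen (the paper reports all VIFs < 1.8).
    vifs = {name: variance_inflation_factor(X.values, i)
            for i, name in enumerate(X.columns) if name != "const"}
    model = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    return model, vifs

# Example call with made-up variable names:
# model, vifs = adjusted_logit(women_50_74, "mammography_2y",
#                              ["education", "cholesterol_screen", "bp_screen"])
```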
Ethical Consideration: The Marshall Islands Ministry of Health & Human Services provided ethics approval for the study, and written informed consent was obtained from study participants.32
Results

Sample Characteristics: The study population consisted of 2,813 persons aged 21-75 years (median = 37.4 years, IQR = 28.7-59.5); 1,329 (47.2%) were men and 1,484 (52.8%) were women, 74.0% had high school or higher education, and 38.3% had an annual household income of less than US$10,000.
More than half of the participants had ever had their blood pressure checked (55.3%) or had ever undergone glucose (diabetes) screening (54.9%) by a health care provider, 21.5% had ever had their blood cholesterol checked, and 36.2% had a dental visit in the past 12 months. One in 5 persons (24.5%) were current smokers (45.3% among men), 11.2% currently chewed tobacco, 16.4% had engaged in binge drinking in the past month, 71.5% had 0-1 servings of fruit and vegetables per day, and 35.0% engaged daily in physical activities or exercises. Almost 1 in 10 participants (8.7%) were currently using traditional medicine for diabetes, hypertension, or high cholesterol; 4.6% had some form of cardiovascular disorder (coronary heart disease, angina, heart attack, or any other kind of heart condition or heart disease); and 47.2% had obesity (see Table 1).

Table 1. Sample Characteristics of Men and Women Aged 21-75 Years, Marshall Islands, STEPS Survey, 2017.

Prevalence of Cancer Screening: The prevalence of past 2 years mammography screening was 21.7% among women aged 50-74 years; past year CBE, 15.9% among women aged 40 years and older; past 3 years Pap smear or VIA, 32.6% among women aged 21-65 years; past year FOBT, 21.8% among women and 22.3% among men aged 50-75 years; and past 10 years colonoscopy, 9.1% among women and 7.3% among men aged 50-75 years (see Table 2).

Table 2. Cancer Screening. VIA = Visual Inspection with Acetic Acid; CI = Confidence Interval.
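Table 2 reports each screening prevalence with a 95% confidence interval. The interval method is not stated in the text; the snippet below, using a Wilson score interval and purely illustrative counts, shows how such an estimate could be reproduced.

```python
from statsmodels.stats.proportion import proportion_confint

# Illustrative numbers only: roughly 21.7% of the 323 women aged 50-74 years
# reporting mammography in the past 2 years (the exact counts are in Table 2).
n_screened, n_total = 70, 323
prevalence = n_screened / n_total
low, high = proportion_confint(n_screened, n_total, alpha=0.05, method="wilson")
print(f"prevalence {prevalence:.1%} (95% CI {low:.1%}-{high:.1%})")
```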
Associations With Mammography Screening: In the adjusted logistic regression analysis, cholesterol screening (AOR: 1.91, 95% CI: 1.07-3.41) was associated with past 2 years mammography screening among women aged 50-74 years. In addition, in univariate analysis, higher education, blood pressure screening, and glucose screening were associated with mammography screening (see Table 3).

Table 3. Associations With Past 2 Years Mammography Screening (Women 50-74 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 323.

Associations With Clinical Breast Examination: In the adjusted logistic regression analysis, glucose screening (AOR: 5.47, 95% CI: 1.94-15.43), cholesterol screening (AOR: 2.20, 95% CI: 1.35-3.57), and a dental visit in the past year (AOR: 1.63, 95% CI: 1.03-2.60) were associated with CBE among women aged 40 years and older. In addition, in univariate analysis, blood pressure screening, consumption of 2-3 servings of fruit and vegetables a day, and the use of traditional medicine were associated with CBE (see Table 4).

Table 4. Associations With Past Year Clinical Breast Examination (Women 40+ Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 655.

Associations With Pap Smear or VIA Screening: In the adjusted logistic regression analysis, blood pressure screening (AOR: 2.39, 95% CI: 1.71-3.35), glucose screening (AOR: 1.59, 95% CI: 1.13-2.23), a dental visit in the past year (AOR: 1.51, 95% CI: 1.17-1.96), binge drinking (AOR: 1.88, 95% CI: 1.07-3.30), and consumption of 2-3 servings of fruit and vegetables a day (AOR: 1.42, 95% CI: 1.03-1.95) were positively associated, and high physical activity (AOR: 0.56, 95% CI: 0.41-0.76) was negatively associated, with Pap smear or VIA screening among women aged 21-65 years.
In addition, in univariate analysis, higher education, cholesterol screening, and the use of traditional medicine were associated with Pap smear or VIA screening (see Table 5).

Table 5. Associations With Past 3 Years Pap Smear or VIA (Women 21-65 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 1,367.

Associations With Fecal Occult Blood Test: In the adjusted logistic regression analysis, glucose screening (AOR: 3.63, 95% CI: 1.43-9.22), cholesterol screening (AOR: 2.04, 95% CI: 1.32-3.15), a dental visit in the past year (AOR: 1.78, 95% CI: 1.17-2.71), consumption of 2-3 servings of fruit and vegetables a day (AOR: 1.83, 95% CI: 1.13-2.98), the use of traditional medicine (AOR: 1.77, 95% CI: 1.12-2.80), and cardiovascular disorder (AOR: 2.26, 95% CI: 1.35-4.83) were associated with FOBT among 50-75 year-olds. In addition, in univariate analysis, blood pressure screening was associated with FOBT (see Table 6).

Table 6. Associations With Past Year Fecal Occult Blood Test (50-75 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 676.

Associations With Colonoscopy: In the adjusted logistic regression analysis, higher education (AOR: 2.58, 95% CI: 1.02-6.58) and cholesterol screening (AOR: 2.87, 95% CI: 1.48-5.59) were positively associated, and current smoking (AOR: 0.09, 95% CI: 0.01-0.65) was negatively associated, with past 10 years colonoscopy uptake among 50-75 year-olds. In addition, in univariate analysis, glucose screening, 1-29 days of physical activity, and the use of traditional medicine were associated with past 10 years colonoscopy uptake (see Table 7).

Table 7. Associations With Past 10 Years Colonoscopy (50-75 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 685.
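The AOR and 95% CI columns in Tables 3-7 are the exponentiated coefficients and confidence limits of the adjusted logistic models. Continuing the hypothetical statsmodels sketch from the Data Analysis section, a table row of this form could be produced as follows.

```python
import numpy as np
import pandas as pd

def odds_ratio_table(fitted_logit) -> pd.DataFrame:
    """Exponentiate a fitted statsmodels Logit result into AORs with 95% CIs."""
    ci = np.exp(fitted_logit.conf_int())  # columns 0 and 1 hold the CI bounds
    return pd.DataFrame({
        "AOR": np.exp(fitted_logit.params).round(2),
        "95% CI low": ci[0].round(2),
        "95% CI high": ci[1].round(2),
        "p-value": fitted_logit.pvalues.round(3),
    })

# Example: odds_ratio_table(model) for the model returned by adjusted_logit() above.
```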
Discussion

This national study showed that the prevalence of past 2 years mammography screening in Marshall Islands (21.7%, women 50-74 years) was higher than in India (10.8% in the past 5 years, 40 years and older)9 and South Africa (15.6% in the past 5 years, 40 years and older),9 but lower than in China (38.4% in the past 5 years, 40 years and older)9 and Italy (70%, women 50-69 years).10 The prevalence of past 3 years Pap smear or VIA (32.6%, women 21-65 years) in Marshall Islands was higher than in 18 national Demographic and Health Surveys (ever screened, 29.2%),6 including India (27.2%),6 Tajikistan (10.6%)6 and Turkey (22.0%),7 and similar to a 55-country study in low- and middle-income countries (ever screened, 43.6%, 30-49 years),8 but lower than in Brunei Darussalam (ever screened, 56.5%, women 18-69 years)11 and Italy (77% in the past 2 years, 25-64 years).10 The prevalence of past year CBE (15.9%, 40 years and older) was lower than in the Brunei Darussalam STEPS survey (ever 56.2%, 18-69 years),11 while the prevalence of past year FOBT (22.0%, 50-75 years) was higher than in Brunei (ever 20.0%, 18-69 years);11 the prevalence of past 10 years colonoscopy (7.9%, 50-75 years) was similar to Brunei (7.9%, 18-69 years).11 Because different countries use different age groups for cancer screening, these comparisons should be interpreted with caution. Cancer screening uptake still falls short of the 2017-2022 targets in Marshall Islands, i.e.
30% updated breast cancer screening in women aged 20-75 years, 60% cervical cancer screening at least once in the past 3 years in women aged 21-65 years, and 20% updated colorectal cancer screening in men and women aged 50-75 years.4 Possible barriers to cancer screening uptake in Marshall Islands may lie in public policy, organizational systems and practice settings, health care providers, and the patients themselves, e.g. lack of awareness or understanding and misperceptions about the benefits.16,34-37 Various interventions targeting each of these factors can improve cancer screening rates.38

Consistent with some previous research,7,13-15,39-41 this study showed that a higher socioeconomic position (higher education) increased the odds of colonoscopy uptake and, in univariate analysis, of cervical cancer screening uptake. It is possible that women with higher education have better knowledge of the health risks related to cancer and therefore engage more readily in cancer screening.40 Also consistent with previous research,22 having ever had blood pressure, glucose, and/or blood cholesterol screening was associated with mammography, CBE, Pap smear or VIA, FOBT, and/or colonoscopy. This finding may be explained by the increased possibility of referral to cancer screening during other clinical health screenings and should be used to promote the integration of cancer screening into routine health services. Furthermore, the use of traditional medicine for diabetes, hypertension, or high cholesterol was associated with a higher uptake of FOBT and, in univariate analysis, with CBE and colonoscopy. Similarly, in the US 2017 National Health Interview Survey, individuals who consulted complementary medicine providers, such as chiropractors, naturopaths, or mind-body medicine practitioners, were more likely to take up Pap smear testing, mammography, and/or colorectal cancer screening.23 Persons who use traditional medicine in Marshall Islands may have higher health literacy than those who do not, which may explain their higher uptake of cancer screening.42,43

The study also found an association between other chronic conditions, such as cardiovascular disorders, and cancer screening (FOBT), in line with some previous research.11,25 It is possible that persons with cardiovascular disorders access health care services more often than those without, giving them more opportunities to take up FOBT.24 Contrary to a previous review,28 which found a negative association between overweight/obesity and cervical cancer screening, this study did not find any significant association between body weight status and cancer screening uptake.

Some previous research has shown that other positive lifestyle behaviors increase the odds of cancer screening,7,11,12,44 which was confirmed in this study for not currently smoking (colonoscopy), consuming 2-3 servings of fruit and vegetables a day (FOBT and Pap smear or VIA), and a dental visit in the past 12 months (CBE, Pap smear or VIA, and FOBT). However, contrary to expectations, this study found a positive association between binge drinking and Pap smear or VIA, and a negative association between physical activity and Pap smear or VIA.
These findings may help health planners to improve cancer screening uptake by targeting and promoting the facilitators of cancer screening identified in this study.

Study limitations include the self-reported data and the cross-sectional survey design. Self-report may lead to recall bias, resulting in overestimation or underestimation of true screening rates. Because of the cross-sectional design, no causal conclusions can be drawn about the relationships between explanatory and outcome variables. An additional limitation is that the STEPS survey in Marshall Islands did not assess urban or rural residence, knowledge and perceptions about cancer screening, family history of cancer, accessibility of cancer screening, or prostate cancer screening, which should be included in future studies.

Conclusion

The study showed a low uptake of cancer screening (mammography, clinical breast examination, Pap smear or VIA, FOBT, and colonoscopy). Several protective factors for cancer screening were identified, including higher education, other health screening (blood pressure, glucose, or cholesterol), health behaviors (dental visits, fruit and vegetable consumption, and nonsmoking), and the use of traditional medicine, which could inform programs promoting cancer screening in Marshall Islands. In addition, cancer awareness campaigns, expanded skills training for health care providers, and improved cancer screening infrastructure could help increase the uptake of cancer screening.
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion", "conclusions" ]
[ "cancer screening", "men", "women", "determinants", "Marshall islands" ]
Introduction

Globally, “cancer is the second leading cause of death; 70% of deaths from cancer occur in low- and middle-income countries.”1 Some of the most common cancers are lung, breast, colorectal, and prostate cancer.1 The “cervical cancer burden in the Pacific Region is substantial, with age-standardized incidence rates ranging from 8.2 to 50.7 and age-standardized mortality rates from 2.7 to 23.9 per 100,000 women per year.”2 In Marshall Islands (an upper-middle-income country in the Pacific), cancer is the second leading cause of death (after diabetes),3,4 and the most prevalent cancers are cervical, lung, and breast cancers.4 The “age-standardized rate of cervical cancer was 74 per 100,000,” the highest in the world.4 The breast cancer and colorectal cancer incidence rates for 2007-2012 were 28.6 and 5.5, respectively.4 Due to low screening rates (7.8% in 2013) and late diagnosis, breast cancer in Marshall Islands is associated with high mortality.4 Some cancers can be fatal if not identified and treated early, emphasizing the importance of organized cancer screening.5

Marshall Islands offers several cancer screening modalities: breast cancer screening (clinical breast examination [CBE], with no specific recommendations on age and frequency, and mammography, recommended every 2 years in women aged 50-74 years), cervical cancer screening (Visual Inspection with Acetic Acid [VIA] or Pap smear/cytology, among others, recommended every 3 years in women aged 21-65 years), and colorectal cancer screening (fecal occult blood test [FOBT], recommended every year, and colonoscopy, recommended every 10 years, both in women and men aged 50-75 years).4 Marshall Islands developed a national Comprehensive Cancer Control Plan 2017-2022 with the following screening uptake targets: 30% updated breast cancer screening in women aged 20-75 years, 60% cervical cancer screening at least once in the past 3 years in women aged 21-65 years, and 20% updated colorectal cancer screening in men and women aged 50-75 years.4 Cancer screening efforts in Marshall Islands include skills training on clinical breast examination and Pap smear testing, the purchase of new mammography machines, and the introduction of VIA as a core option for cervical cancer screening.4 Screening rates are believed to be below the recommended targets, but no national study has reported on current cancer screening uptake in Marshall Islands. It is therefore of utmost importance to understand the possible facilitators and barriers of cancer screening in the country; knowing the national prevalence and the factors associated with different cancer screening methods could help in developing programs to improve cancer screening in Marshall Islands.

As part of national Demographic and Health Surveys in 18 countries (women 21-49 years), the prevalence of ever having had cervical cancer screening was 29.2% overall,6 27.2% in India,6 10.6% in Tajikistan,6 and 22.0% in Turkey (women 30 years and older).7 In an analysis of nationally representative household surveys in 55 low- and middle-income countries, the country-level median of lifetime cervical cancer screening among women aged 30-49 years was 43.6%.8 The prevalence of breast cancer screening (mammography in the past 5 years) among women aged 40 years and older was 38.4% in China, 10.8% in India, and 15.6% in South Africa.9
In Italy, the uptake of past 3 years Pap smear or HPV testing was 77% (women 25-64 years), past 2 years mammography 70% (women 50-69 years), and past 2 years colorectal screening 38% (50-69 year-olds).10 In Brunei Darussalam, the prevalence of Pap smear testing (women 18-69 years) was 56.5%, mammography (women 18-69 years) 11.3%, clinical breast examination (women 18-69 years) 56.2%, fecal occult blood testing (men and women 18-69 years) 20.0%, and colonoscopy (men and women 18-69 years) 7.9%.11

Factors associated with cancer screening may include sociodemographic factors, health system factors, health status, and lifestyle factors.5,12 Sociodemographic factors associated with cancer screening uptake include higher socioeconomic position,7,13-15 older age,16 younger age,17 and urban living.18,19 Health system factors include increased access to health care,20 having health insurance,18,21 having had a blood cholesterol test,22 general health care utilization,13,16,17,21 and complementary medicine use.23 Health status factors include having chronic conditions,24 multimorbidity or comorbidity,11,25 a family history of non-communicable diseases,11 not having mental distress or illness or having depressive symptoms,17,26,27 not having obesity,17,28 good self-rated health status,29 and better physical functioning.17 Positive lifestyle behaviors associated with cancer screening uptake include physical activity, fruit and vegetable consumption, not smoking,7,17,30 and not consuming alcohol.11

This study aimed to estimate the prevalence and associated factors of cancer screening uptake among adults in a national population-based survey in Marshall Islands.

Methods

Sample and Procedures: Cross-sectional data from the 2017/2018 Marshall Islands STEPwise approach to Surveillance (STEPS) survey were analyzed.31 Individuals aged 18 years or older participated in the survey from the islands of Majuro, Kwajalein, Arno, Jaluit, Wotje, and Kili, which together make up 83% of the overall population of Marshall Islands; the final response rate was 92.3%.32 “Sample size was determined based on overall adult populations on selected islands in the Republic of the Marshall Islands (Majuro = 1659; Ebeye = 627; Kili = 200; Wotje = 207; Jaluit = 207; Arno = 207).”32 A multi-stage sampling design was used. Stage 1: the country was stratified into 2 major groups, urban (Majuro and Ebeye) and rural (all outer islands), and in Majuro and Ebeye household cluster sampling was used to randomly select households according to geographical stratification.32 Stage 2: in Majuro and Ebeye, 1 individual was selected at random from each household; all adults on the Kili, Arno, Wotje, and Jabwor (Jaluit) atolls were included in the sample because the adult population is only about 200 on each of these atolls.32 “Participants eligible for the survey were Marshall Islands residents aged 18 years and older, and who were able to comprehend either English or Marshallese and provide consent.”32 Data were collected electronically on tablets by trained surveyors who conducted face-to-face administration of structured questionnaires and anthropometric measurements.32 Quality control of completed questionnaires was ensured at different stages during the questionnaire-processing phase.32
The total sample included 3,029 persons aged 18 years and older; because the focus was cancer screening, the analysis was restricted to individuals aged 21-75 years, giving a subsample of 2,813. Sample sizes for cancer screening were calculated with a 5% margin of error at a 95% confidence level: the minimum sample size was 316 for cervical cancer screening (based on a prevalence of 29.2% across 18 countries6), 261 for mammography (based on a prevalence of 21.6%, the average of China, India, and South Africa9), 378 for CBE (based on a prevalence of 56.2% in Brunei Darussalam11), 246 for FOBT (based on 20.0% in Brunei Darussalam11), and 113 for colonoscopy (based on a prevalence of 7.9% in Brunei Darussalam11).
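The reported minimum sample sizes are consistent with the standard single-proportion formula n = z^2 p(1 - p) / d^2 with d = 0.05 and z = 1.96; this is an inference from the numbers above rather than a formula quoted in the survey report, and rounding explains the small differences. A quick check:

```python
import math

def min_sample_size(p: float, d: float = 0.05, z: float = 1.96) -> int:
    """Minimum n to estimate a proportion p with absolute precision d (95% CL)."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Approximately reproduces the figures quoted above (316, 261, 378, 246, 113).
for label, p in [("cervical (Pap/VIA)", 0.292), ("mammography", 0.216),
                 ("CBE", 0.562), ("FOBT", 0.20), ("colonoscopy", 0.079)]:
    print(label, min_sample_size(p))
```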
31 Individuals aged 18 years or older participated in the survey from the islands of Majuro, Kwajalein, Arno, Jaluit, Wotje, and Kili, making up 83% of the overall population of Marshall Islands; the final response rate was 92.3%. 32 “Sample size was determined based on overall adult populations on selected islands in the Republic of the Marshall Islands (Majuro = 1659; Ebeye = 627; Kili = 200; Wotje = 207; Jaluit = 207; Arno = 207).”32 A multi-stage sampling design was used: Stage 1: Households were identified at random according to geographical stratification in Majuro and Ebeye. 32 The country was stratified into 2 major groups, Urban (Majuro and Ebeye) and Rural (all outer islands). 32 In Majuro and Ebeye, household cluster sampling was used to randomly select households in these areas. 32 Stage 2: In Majuro and Ebeye, 1 individual was selected at random from each household. All adults in Kili, Arno, Wotje, and Jabwor, Jaluit atolls were included in the sample because the adult populations are about 200 each on these atolls. 32 “Participants eligible for the survey were Marshall Islands residents aged 18 years and older, and who were able to comprehend either English or Marshallese and provide consent.” 32 Data were collected electronically using a tablet by trained surveyors that conducted face-to-face administration of structured questionnaires and anthropometric measurements. 32 Quality control of completed questionnaires was ensured at different stages during the questionnaire-processing phase. 32 The total sample included 3,029 persons 18 years and older, but since we were investigating cancer screening, the sample was restricted to individuals aged 21-75 years. The sample size of this subsample was 2,813. Sample size calculation for cancer screening. All sample sizes calculated with acceptable margin 5%, 95% confidence level, the minimum sample size for cervical cancer screening is 316 (estimated based on a prevalence of 29.2% in 18 countries6), for mammography 261 (estimated based on a prevalence of 21.6%, average of 3 countries, China, India and South Africa9), for CBE 378 (based on prevalence of 56.2% in Brunei Durassalam11), for FBOT 246 (based on 20.0% in Brunei Durassalam11) and for colonoscopy 113 (based on prevalence of 7.9% in Brunei Durassalam11) Measures: Cancer Screening Colon cancer screening (use showcard): 1) “Have you ever had a colonoscopy? (Yes, No, Don’t know, Refuse) (time since last colonoscopy, 1 = within the past year to 6 = 10 or more years ago)” and 2) “A blood stool test is a test that determines whether the stool contains blood. Have you ever had this test? (Yes, No, Don’t know, Refuse) (time since last blood stool test, 1 = within the past year to 5 = 5 or more years ago)”.32 Outcome variables were defined as past 10 years colonoscopy uptake in 50-75 year-olds, and past year FOBT uptake in 50-75 year-olds. Women cancer screening (use showcard): 1) “Have you ever had a mammogram? A mammogram is done with a machine (Yes, No, Don’t know, Refuse) (time since the last mammogram, 1 = within the past year to 5 = 5 or more years ago)”, 2) “A clinical breast exam is when a doctor, nurse, or other health professional feels the breasts for lumps. Have you ever had a clinical breast exam? (Yes, No, Don’t know, Refuse) (time since last clinical breast exam, 1 = within the past year to 5 = 5 or more years ago)”, and 3) “Have you ever had a Pap or VIA test? 
(Yes, No, Don’t know, Refuse) (time since last Pap or VIA test, 1 = within the past year to 5 = 5 or more years ago)”. 32 Those who responded “don’t know” or “refuse” were excluded from the analysis. Outcome variables were defined as past 2 years mammography in women 50-74 years, past year CBE in women 40 years and older, and past 3 years Pap smear or VIA in women 21-65 years. Socio-demographic factor questions included age (years), sex (male, female), highest level of education (1 = never attended school to 6 = college or university completed), and past year household income (1 = less than US$ 5000 to 5 = US$ 20000 or more, Don’t know, Refused to answer).32 Other health screenings and visits included blood pressure, blood sugar, cholesterol, and dental visits, as follows: 1) “Have you ever had your blood pressure checked by a doctor, nurse, or other health worker?” (Yes, No) 2) “Have you ever had your blood sugar checked by a doctor, nurse, or other health worker?” (Yes, No) 3) “Blood cholesterol is a fatty substance found in the blood. Have you ever had your blood cholesterol checked by a doctor, nurse, or other health worker?” (Yes, No) 4) “How long has it been since you last visited a dentist or a dental clinic for any reason? Include visits to dental specialists, such as orthodontists.” (1 = within the past year to 5 = 5 or more years ago). 32 The health risk behavior variables included current smoking (Yes, No), current chewing tobacco (Yes, No), past month binge drinking (≥5 units for men and ≥4 units for women), consumption of fruit and vegetables per day (number of days in a week and number of servings a day), and physical activity (“During the past 30 days, other than your regular job, on how many days did you participate in any physical activities or exercises such as running, sports, walking, or going to the gym, specifically for exercise?”). 32 Cronbach alpha for the fruit and vegetable consumption measure was 0.82 in this study. Body Mass Index was measured: “<18.5kg/m2 underweight, 18.5-24.4kg/m2 normal weight, 25-29.9kg/m2 overweight and ≥30 kg/m2 obesity.” 32 Cardiovascular disorders included self-reported “coronary heart disease; angina, also called angina pectoris; a heart attack (also called myocardial infarction); any kind of heart condition or heart disease (other than the ones I just asked about) (Yes, No)” 32 The utilization of traditional medicine included 3 questions on currently taking any herbal or traditional remedy for your high blood sugar or diabetes, or high blood pressure or hypertension or high cholesterol (Yes, No). 32 Overall, the “STEPS protocols can be utilized to provide aggregate data for valid between-population comparisons.” 33 Colon cancer screening (use showcard): 1) “Have you ever had a colonoscopy? (Yes, No, Don’t know, Refuse) (time since last colonoscopy, 1 = within the past year to 6 = 10 or more years ago)” and 2) “A blood stool test is a test that determines whether the stool contains blood. Have you ever had this test? (Yes, No, Don’t know, Refuse) (time since last blood stool test, 1 = within the past year to 5 = 5 or more years ago)”.32 Outcome variables were defined as past 10 years colonoscopy uptake in 50-75 year-olds, and past year FOBT uptake in 50-75 year-olds. Women cancer screening (use showcard): 1) “Have you ever had a mammogram? 
A mammogram is done with a machine (Yes, No, Don’t know, Refuse) (time since the last mammogram, 1 = within the past year to 5 = 5 or more years ago)”, 2) “A clinical breast exam is when a doctor, nurse, or other health professional feels the breasts for lumps. Have you ever had a clinical breast exam? (Yes, No, Don’t know, Refuse) (time since last clinical breast exam, 1 = within the past year to 5 = 5 or more years ago)”, and 3) “Have you ever had a Pap or VIA test? (Yes, No, Don’t know, Refuse) (time since last Pap or VIA test, 1 = within the past year to 5 = 5 or more years ago)”. 32 Those who responded “don’t know” or “refuse” were excluded from the analysis. Outcome variables were defined as past 2 years mammography in women 50-74 years, past year CBE in women 40 years and older, and past 3 years Pap smear or VIA in women 21-65 years. Socio-demographic factor questions included age (years), sex (male, female), highest level of education (1 = never attended school to 6 = college or university completed), and past year household income (1 = less than US$ 5000 to 5 = US$ 20000 or more, Don’t know, Refused to answer).32 Other health screenings and visits included blood pressure, blood sugar, cholesterol, and dental visits, as follows: 1) “Have you ever had your blood pressure checked by a doctor, nurse, or other health worker?” (Yes, No) 2) “Have you ever had your blood sugar checked by a doctor, nurse, or other health worker?” (Yes, No) 3) “Blood cholesterol is a fatty substance found in the blood. Have you ever had your blood cholesterol checked by a doctor, nurse, or other health worker?” (Yes, No) 4) “How long has it been since you last visited a dentist or a dental clinic for any reason? Include visits to dental specialists, such as orthodontists.” (1 = within the past year to 5 = 5 or more years ago). 32 The health risk behavior variables included current smoking (Yes, No), current chewing tobacco (Yes, No), past month binge drinking (≥5 units for men and ≥4 units for women), consumption of fruit and vegetables per day (number of days in a week and number of servings a day), and physical activity (“During the past 30 days, other than your regular job, on how many days did you participate in any physical activities or exercises such as running, sports, walking, or going to the gym, specifically for exercise?”). 32 Cronbach alpha for the fruit and vegetable consumption measure was 0.82 in this study. Body Mass Index was measured: “<18.5kg/m2 underweight, 18.5-24.4kg/m2 normal weight, 25-29.9kg/m2 overweight and ≥30 kg/m2 obesity.” 32 Cardiovascular disorders included self-reported “coronary heart disease; angina, also called angina pectoris; a heart attack (also called myocardial infarction); any kind of heart condition or heart disease (other than the ones I just asked about) (Yes, No)” 32 The utilization of traditional medicine included 3 questions on currently taking any herbal or traditional remedy for your high blood sugar or diabetes, or high blood pressure or hypertension or high cholesterol (Yes, No). 32 Overall, the “STEPS protocols can be utilized to provide aggregate data for valid between-population comparisons.” 33 Data Analysis Descriptive statistics were used to summarize sample and cancer screening prevalence characteristics. Unadjusted and adjusted (including variables significant p < 0.05 at univariate analysis) logistic regression analyses were used to predict the prevalence of mammography, CBE, Pap smear or VIA, FOBT and colonoscopy. 
Covariates, based on literature review, 5, 7, 12 - 17, 21, 22, 24, 25, 28, 30 included sociodemographic factors, health care utilization, health risk behaviors, cardiovascular disorder, body mass index, and use of traditional medicine for all outcome variables. Explanatory variables are statistically significant at p < 0.05 and are free from multicollinearity as measured by the variance inflation factor (VIF <  1.8). Model assumptions were checked with residual plots, and the overall fitness of the models was checked with the Hosmer-Lemeshow goodness-of-fit test. Missing values (<2% for all variables except for body weight, 6.1%) were excluded. All statistical analyses were conducted using STATA software version 14.0 (Stata Corporation, College Station, TX, USA). Descriptive statistics were used to summarize sample and cancer screening prevalence characteristics. Unadjusted and adjusted (including variables significant p < 0.05 at univariate analysis) logistic regression analyses were used to predict the prevalence of mammography, CBE, Pap smear or VIA, FOBT and colonoscopy. Covariates, based on literature review, 5, 7, 12 - 17, 21, 22, 24, 25, 28, 30 included sociodemographic factors, health care utilization, health risk behaviors, cardiovascular disorder, body mass index, and use of traditional medicine for all outcome variables. Explanatory variables are statistically significant at p < 0.05 and are free from multicollinearity as measured by the variance inflation factor (VIF <  1.8). Model assumptions were checked with residual plots, and the overall fitness of the models was checked with the Hosmer-Lemeshow goodness-of-fit test. Missing values (<2% for all variables except for body weight, 6.1%) were excluded. All statistical analyses were conducted using STATA software version 14.0 (Stata Corporation, College Station, TX, USA). Ethical Consideration The Marshall Islands Ministry of Health & Human Services provided ethics approval of the study, and written informed consent was obtained from study participants. 32 The Marshall Islands Ministry of Health & Human Services provided ethics approval of the study, and written informed consent was obtained from study participants. 32 Cancer Screening: Colon cancer screening (use showcard): 1) “Have you ever had a colonoscopy? (Yes, No, Don’t know, Refuse) (time since last colonoscopy, 1 = within the past year to 6 = 10 or more years ago)” and 2) “A blood stool test is a test that determines whether the stool contains blood. Have you ever had this test? (Yes, No, Don’t know, Refuse) (time since last blood stool test, 1 = within the past year to 5 = 5 or more years ago)”.32 Outcome variables were defined as past 10 years colonoscopy uptake in 50-75 year-olds, and past year FOBT uptake in 50-75 year-olds. Women cancer screening (use showcard): 1) “Have you ever had a mammogram? A mammogram is done with a machine (Yes, No, Don’t know, Refuse) (time since the last mammogram, 1 = within the past year to 5 = 5 or more years ago)”, 2) “A clinical breast exam is when a doctor, nurse, or other health professional feels the breasts for lumps. Have you ever had a clinical breast exam? (Yes, No, Don’t know, Refuse) (time since last clinical breast exam, 1 = within the past year to 5 = 5 or more years ago)”, and 3) “Have you ever had a Pap or VIA test? (Yes, No, Don’t know, Refuse) (time since last Pap or VIA test, 1 = within the past year to 5 = 5 or more years ago)”. 32 Those who responded “don’t know” or “refuse” were excluded from the analysis. 
Outcome variables were defined as past 2 years mammography in women 50-74 years, past year CBE in women 40 years and older, and past 3 years Pap smear or VIA in women 21-65 years. Socio-demographic factor questions included age (years), sex (male, female), highest level of education (1 = never attended school to 6 = college or university completed), and past year household income (1 = less than US$ 5000 to 5 = US$ 20000 or more, Don’t know, Refused to answer).32 Other health screenings and visits included blood pressure, blood sugar, cholesterol, and dental visits, as follows: 1) “Have you ever had your blood pressure checked by a doctor, nurse, or other health worker?” (Yes, No) 2) “Have you ever had your blood sugar checked by a doctor, nurse, or other health worker?” (Yes, No) 3) “Blood cholesterol is a fatty substance found in the blood. Have you ever had your blood cholesterol checked by a doctor, nurse, or other health worker?” (Yes, No) 4) “How long has it been since you last visited a dentist or a dental clinic for any reason? Include visits to dental specialists, such as orthodontists.” (1 = within the past year to 5 = 5 or more years ago). 32 The health risk behavior variables included current smoking (Yes, No), current chewing tobacco (Yes, No), past month binge drinking (≥5 units for men and ≥4 units for women), consumption of fruit and vegetables per day (number of days in a week and number of servings a day), and physical activity (“During the past 30 days, other than your regular job, on how many days did you participate in any physical activities or exercises such as running, sports, walking, or going to the gym, specifically for exercise?”). 32 Cronbach alpha for the fruit and vegetable consumption measure was 0.82 in this study. Body Mass Index was measured: “<18.5kg/m2 underweight, 18.5-24.4kg/m2 normal weight, 25-29.9kg/m2 overweight and ≥30 kg/m2 obesity.” 32 Cardiovascular disorders included self-reported “coronary heart disease; angina, also called angina pectoris; a heart attack (also called myocardial infarction); any kind of heart condition or heart disease (other than the ones I just asked about) (Yes, No)” 32 The utilization of traditional medicine included 3 questions on currently taking any herbal or traditional remedy for your high blood sugar or diabetes, or high blood pressure or hypertension or high cholesterol (Yes, No). 32 Overall, the “STEPS protocols can be utilized to provide aggregate data for valid between-population comparisons.” 33 Data Analysis: Descriptive statistics were used to summarize sample and cancer screening prevalence characteristics. Unadjusted and adjusted (including variables significant p < 0.05 at univariate analysis) logistic regression analyses were used to predict the prevalence of mammography, CBE, Pap smear or VIA, FOBT and colonoscopy. Covariates, based on literature review, 5, 7, 12 - 17, 21, 22, 24, 25, 28, 30 included sociodemographic factors, health care utilization, health risk behaviors, cardiovascular disorder, body mass index, and use of traditional medicine for all outcome variables. Explanatory variables are statistically significant at p < 0.05 and are free from multicollinearity as measured by the variance inflation factor (VIF <  1.8). Model assumptions were checked with residual plots, and the overall fitness of the models was checked with the Hosmer-Lemeshow goodness-of-fit test. Missing values (<2% for all variables except for body weight, 6.1%) were excluded. 
Results: Sample Characteristics The study population consisted of 2,813 persons aged 21-75 years (Median = 37.4 years, IQR = 28.7-59.5); 1,329 (47.2%) were men and 1,484 (52.8%) were women, 74.0% had high school or higher education, and 38.3% had an annual household income lower than US$10,000. More than half of the participants had ever had their blood pressure checked (55.3%) and had ever undergone glucose (diabetes) screening (54.9%) by a health care provider, 21.5% had ever had their blood cholesterol checked, and 36.2% had a dental visit in the past 12 months. About one in four persons (24.5%) were current smokers (45.3% among men), 11.2% were currently chewing tobacco, 16.4% engaged in binge drinking in the past month, 71.5% had 0-1 servings of fruit and vegetables per day, and 35.0% engaged daily in physical activities or exercises. Almost 1 in 10 participants (8.7%) were currently using traditional medicine for diabetes, hypertension or high cholesterol, 4.6% had some form of cardiovascular disorder (coronary heart disease, angina, heart attack, any kind of heart condition, or heart disease), and 47.2% had obesity (see Table 1). Sample Characteristics of Men and Women Aged 21-75 years, Marshall Islands, STEPS Survey, 2017. Prevalence of Cancer Screening The prevalence of past 2 years mammography screening was 21.7% among women aged 50-74 years, past year CBE 15.9% among women aged 40 years and older, past 3 years Pap smear or VIA 32.6% among women 21-65 years, past year FOBT 21.8% among women aged 50-75 years, past year FOBT 22.3% among men aged 50-75 years, past 10 years colonoscopy 9.1% among women aged 50-75 years, and past 10 years colonoscopy 7.3% among men aged 50-75 years (see Table 2). Cancer Screening. VIA = Vaginal Inspection with Acetic Acid; CI = Confidence Interval.
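For readers who want to see how prevalence figures of this kind are paired with 95% confidence intervals, here is a minimal sketch using a binomial (Wilson) interval. The counts are purely illustrative (roughly 21.7% of the N = 323 women aged 50-74 shown in Table 3), not the survey's weighted estimates.

```python
# Minimal sketch: a 95% Wilson confidence interval for a screening prevalence.
# The count and denominator below are illustrative, not the survey's weighted figures.
from statsmodels.stats.proportion import proportion_confint

screened = 70    # hypothetical number reporting mammography in the past 2 years
eligible = 323   # hypothetical number of eligible women aged 50-74
prevalence = screened / eligible
low, high = proportion_confint(screened, eligible, alpha=0.05, method="wilson")
print(f"Prevalence {prevalence:.1%} (95% CI {low:.1%}-{high:.1%})")
```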
Associations With Mammography Screening In adjusted logistic regression, cholesterol screening (AOR: 1.91, 95% CI: 1.07-3.41) was associated with past 2 years mammography screening among women aged 50-74 years. In addition, in univariate analysis, higher education, blood pressure screening, and glucose screening were associated with mammography screening (see Table 3). Associations With Past 2 Years Mammography Screening (Women 50-74 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 323. Associations With Clinical Breast Examination In the adjusted logistic regression analysis, glucose screening (AOR: 5.47, 95% CI: 1.94-15.43), cholesterol screening (AOR: 2.20, 95% CI: 1.35-3.57), and dental visit in the past year (AOR: 1.63, 95% CI: 1.03-2.60) were associated with CBE among women aged 40 years and older. In addition, in univariate analysis, blood pressure screening, 2-3 servings of fruit and vegetable consumption a day, and the use of traditional medicine were associated with CBE (see Table 4). Associations With Past Year Clinical Breast Examination (Women 40+ years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 655. Associations With Pap Smear or VIA Screening In the adjusted logistic regression analysis, blood pressure screening (AOR: 2.39, 95% CI: 1.71-3.35), glucose screening (AOR: 1.59, 95% CI: 1.13-2.23), dental visit in the past year (AOR: 1.51, 95% CI: 1.17-1.96), binge drinking (AOR: 1.88, 95% CI: 1.07-3.30), and 2-3 servings of fruit and vegetable consumption a day (AOR: 1.42, 95% CI: 1.03-1.95) were positively associated, and high physical activity (AOR: 0.56, 95% CI: 0.41-0.76) was negatively associated, with Pap smear or VIA screening among women aged 21-65 years.
In addition, in univariate analysis, higher education, cholesterol screening, and the use of traditional medicine were associated with Pap smear or VIA screening (see Table 5). Associations With Past 3 Years Pap Smear or VIA (Women 21-65 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 1367. Associations With Fecal Occult Blood Examination In the adjusted logistic regression analysis, glucose screening (AOR: 3.63, 95% CI: 1.43-9.22), cholesterol screening (AOR: 2.04, 95% CI: 1.32-3.15), dental visit in the past year (AOR: 1.78, 95% CI: 1.17-2.71), intake of 2-3 servings of fruit and vegetables a day (AOR: 1.83, 95% CI: 1.13-2.98), the use of traditional medicine (AOR: 1.77, 95% CI: 1.12-2.80), and cardiovascular disorder (AOR: 2.26, 95% CI: 1.35-4.83) were associated with FOBT among 50-75 year-olds. In addition, in univariate analysis, blood pressure screening was associated with FOBT (see Table 6). Associations With Past Year Fecal Occult Blood Test (50-75 Years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 676. Associations With Colonoscopy In the adjusted logistic regression analysis, higher education (AOR: 2.58, 95% CI: 1.02-6.58) and cholesterol screening (AOR: 2.87, 95% CI: 1.48-5.59) were positively associated, and current smoking (AOR: 0.09, 95% CI: 0.01-0.65) was negatively associated, with past 10 years colonoscopy uptake among 50-75 year-olds. In addition, in univariate analysis, glucose screening, 1-29 days of physical activity, and the use of traditional medicine were associated with past 10 years colonoscopy uptake (see Table 7). Associations With Past 10 Years Colonoscopy (50-75 years). COR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval. N = 685.
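A brief note on how the crude and adjusted odds ratios in Tables 3-7 relate to the fitted logistic models; this is the standard relationship, stated here for clarity rather than taken from the paper. For a coefficient estimate with standard error SE,

$$\mathrm{OR} = e^{\hat{\beta}}, \qquad 95\%\ \mathrm{CI} = \left(e^{\hat{\beta} - 1.96\,\mathrm{SE}},\; e^{\hat{\beta} + 1.96\,\mathrm{SE}}\right)$$

so an AOR above 1 with a confidence interval excluding 1 (e.g., cholesterol screening and colonoscopy) indicates higher adjusted odds of screening uptake, while an AOR below 1 with a confidence interval excluding 1 (e.g., current smoking) indicates lower odds.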
Discussion: The national study in Marshall Islands showed that the prevalence of past 2 years mammography screening (21.7%, women 50-74 years) was higher than in India (10.8% in the past 5 years, 40 years and older) 9 and South Africa (15.6% in the past 5 years, 40 years and older), 9 but lower than in China (38.4% in the past 5 years, 40 years and older) 9 and Italy (70%, women 50-69 years). 10 The prevalence of past 3 years Pap smear or VIA (32.6%, women 21-65 years) in Marshall Islands was higher than in 18 national Demographic and Health Surveys (ever 29.2%), 6 including India (ever 27.2%), 6 Tajikistan (ever 10.6%), 6 and Turkey (ever 22.0%), 7 similar to a 55-country study in low- and middle-income countries (ever 43.6%, 30-49 years), 8 but lower than in Brunei Darussalam (ever 56.5%, women 18-69 years) 11 and Italy (77% past 2 years, 25-64 years). 10
The prevalence of past year CBE (15.9%, 40 years and older) in this study was lower than in the Brunei Darussalam STEPS survey (ever 56.2%, 18-69 years), 11 while the prevalence of past year FOBT (22.0%, 50-75 years) was higher than in Brunei (ever 20.0%, 18-69 years); 11 however, the prevalence of past 10 years colonoscopy (7.9%, 50-75 years) was similar to Brunei (7.9%, 18-69 years). 11 Because different countries use different age groups for cancer screening, these comparisons should be interpreted with caution. The cancer screening uptake still falls short of the 2017-2022 targets in Marshall Islands, i.e., 30% updated breast cancer screening in women aged 20-75 years, 60% cervical cancer screening at least once in the past 3 years in women 21-65 years, and 20% updated colorectal cancer screening in men and women aged 50-75 years. 4 Possible reasons for the low cancer screening uptake in Marshall Islands may include barriers at the level of public policy, organizational systems and practice settings, health care providers, and the patients themselves, e.g., lack of awareness or understanding and misperceptions about the benefits. 16,34-37 Various interventions targeting each of these factors can improve cancer screening rates. 38 Consistent with some previous research, 7,13-15,39-41 this study showed that a higher socioeconomic position (higher education) increased the odds of colonoscopy uptake and, in univariate analysis, of cervical cancer screening uptake. It is possible that women with higher education have better knowledge of the health risks related to cancer and are therefore more likely to engage in cancer screening. 40 Consistent with previous research, 22 this study showed that having ever had blood pressure, glucose, and/or blood cholesterol screening was associated with mammography, CBE, Pap smear or VIA, FOBT and/or colonoscopy. This finding may be explained by the increased possibility of referral to cancer screening during other clinical health screenings and should be leveraged by integrating cancer screening into routine health services. Furthermore, the study found that the use of traditional medicine for diabetes, hypertension or high cholesterol was associated with a higher uptake of FOBT, and in univariate analysis with CBE and colonoscopy. Similarly, in the US 2017 National Health Interview Survey, individuals who consulted complementary medicine approaches, such as chiropractic, naturopathy, or mind-body medicine, were more likely to take up Pap smear testing, mammography, and/or colorectal cancer screening. 23 Persons who used traditional medicine in Marshall Islands may have higher health literacy than those who do not use traditional medicine, which may explain the higher uptake of cancer screening. 42,43 The study found an association between other chronic conditions, such as cardiovascular disorders, and cancer screening (FOBT), which is in line with the findings of some previous research. 11,25 It is possible that persons with cardiovascular disorders access health care services more often than those without cardiovascular disorders, which may have led to more opportunities for the uptake of FOBT. 24 Contrary to a previous review, 28 which found a negative association between having overweight/obesity and cervical cancer screening, this study did not find any significant association between body weight status and cancer screening uptake.
Some previous research showed that the practice of other positive lifestyle behaviors apart from cancer screening increased the odds of cancer screening, 7,11,12,44 which was confirmed in this study for non-current smoking with colonoscopy, intake of 2-3 servings of fruit and vegetables a day with FOBT and Pap smear or VIA, and a past 12-month dental visit with CBE, Pap smear or VIA, and FOBT. However, contrary to expectations, this study found a positive association between binge drinking and Pap smear or VIA, and a negative association between physical activity and Pap smear or VIA. These findings may help health planners to improve cancer screening uptake by targeting or promoting the facilitators of cancer screening identified in this study. Study limitations include the reliance on self-reported data and the cross-sectional survey design. Self-report could lead to recall bias, resulting in overestimation or underestimation of true screening rates. Due to the cross-sectional design, no causal conclusions can be drawn on the relationship between explanatory and outcome variables. An additional limitation was that the STEPS survey in Marshall Islands did not assess urban or rural residence, knowledge and perceptions about cancer screening, family history of cancer, accessibility of cancer screening, or prostate cancer screening, which should be included in future studies. Conclusion: The study showed a low cancer screening uptake (mammography, clinical breast examination, Pap smear or VIA, FOBT, and colonoscopy). Several protective factors were identified for cancer screening, such as higher education, other health screening (blood pressure, glucose, or cholesterol), health behaviors (dental visit, fruit and vegetable consumption, and nonsmoking) and the use of traditional medicine, which could assist in programs promoting cancer screening in Marshall Islands. In addition, cancer awareness campaigns, expansion of skills training of health care providers, and improving cancer screening infrastructure could help in improving the uptake of cancer screening.
Background: The study aimed to estimate the prevalence and associated factors of cancer screening among men and women in the general population in Marshall Islands. Methods: The national cross-sectional sub-study population consisted of 2,813 persons aged 21-75 years (Median = 37.4 years) from the "2017/2018 Marshall Islands STEPS survey". Information about cancer screening uptake included Pap smear or Vaginal Inspection with Acetic Acid (=VIA), clinical breast examination, mammography, faecal occult blood test (FOBT), and colonoscopy. Results: The prevalence of past 2 years mammography screening was 21.7% among women aged 50-74 years, past year CBE 15.9% among women aged 40 years and older, past 3 years Pap smear or VIA 32.6% among women 21-65 years, past year FOBT 21.8% among women and 22.3% among men aged 50-75 years, and past 10 years colonoscopy 9.1% among women and 7.3% among men aged 50-75 years. In adjusted logistic regression, cholesterol screening (AOR: 1.91, 95% CI: 1.07-3.41) was associated with past 2 years mammography screening among women aged 50-74 years. Blood pressure screening (AOR: 2.39, 95% CI: 1.71-3.35), glucose screening (AOR: 1.59, 95% CI: 1.13-2.23), dental visit in the past year (AOR: 1.51, 95% CI: 1.17, 1.96), binge drinking (AOR: 1.88, 95% CI: 1.07-3.30), and 2-3 servings of fruit and vegetable consumption a day (AOR: 1.42, 95% CI: 1.03-1.95) were positively and high physical activity (30 days a month) (AOR: 0.56, 95% CI: 0.41-0.76) was negatively associated with Pap smear or VIA screening among women aged 21-65 years. Higher education (AOR: 2.58, 95% CI: 1.02-6.58), and cholesterol screening (AOR: 2.87, 95% CI: 1.48-5.59), were positively and current smoking (AOR: 0.09, 95% CI: 0.01-0.65) was negatively associated with past 10 years colonoscopy uptake among 50-75 year-olds. Conclusions: The study showed a low cancer screening uptake, and several factors were identified that can assist in promoting cancer screening in Marshall Islands.
Introduction: Globally, “cancer is the second leading cause of death; 70% of deaths from cancer occur in low- and middle-income countries.” 1 Some of the most common cancers are lung, breast, colorectal, and prostate cancer. 1 The “cervical cancer burden in the Pacific Region is substantial, with age-standardized incidence rates ranging from 8.2 to 50.7 and age-standardized mortality rates from 2.7 to 23.9 per 100,000 women per year.” 2 In the Marshall Islands (an upper-middle-income country in the Pacific), cancer is the second leading cause of death (after diabetes), 3,4 and the most prevalent cancers are cervical, lung, and breast cancers. 4 The “age-standardized rate of cervical cancer was 74 per 100,000” (the highest in the world) in Marshall Islands. 4 The breast cancer and colorectal cancer incidence rates in 2007-2012 were 28.6 and 5.5, respectively, in Marshall Islands. 4 Due to low screening rates (7.8% in 2013) and late diagnosis, breast cancer in the Marshall Islands is associated with high mortality rates. 4 Some cancers can be fatal if not identified and treated early, emphasizing the importance of organized cancer screening. 5 Marshall Islands offers various cancer screening modalities, e.g., breast cancer screening (clinical breast examination = CBE, with no specific recommendations on age and frequency, and mammography, recommended every 2 years in women 50-74 years), cervical cancer screening (Vaginal Inspection with Acetic Acid = VIA or Pap Smear/Cytology, and others, recommended every 3 years in women 21-65 years) and colorectal cancer screening (fecal occult blood examination = FOBT, recommended every year in women and men 50-75 years, and colonoscopy, recommended every 10 years in women and men 50-75 years). 4 Marshall Islands developed a national Comprehensive Cancer Control Plan 2017-2022, including cancer screening uptake targets as follows: 30% updated breast cancer screening in women aged 20-75 years, 60% cervical cancer screening at least once in the past 3 years in women 21-65 years, and 20% updated colorectal cancer screening in men and women aged 50-75 years. 4 Cancer screening efforts in Marshall Islands include skills training on clinical breast examination and Pap smear testing, the purchase of new mammogram machines, and the introduction of VIA as a core option for cervical cancer screening. 4 It is believed that screening rates are below the recommended targets, but no national study has reported on the current cancer screening uptake in Marshall Islands. In addition, it would be of utmost importance to have an understanding of the possible facilitators and barriers of cancer screening in Marshall Islands. Knowing the national prevalence and possible factors associated with different cancer screening methods could help in developing programs to improve cancer screening in Marshall Islands. Across the national Demographic and Health Surveys in 18 countries (women 21-49 years), the prevalence of ever having had cervical cancer screening was 29.2%, 6 including 27.2% in India 6 and 10.6% in Tajikistan, 6 while in Turkey ever cervical cancer screening (women 30 years and older) was 22.0%. 7 In an analysis of nationally representative household surveys in 55 low- and middle-income countries, the country-level median of lifetime cervical cancer screening among women aged 30-49 years was 43.6%. 8 The prevalence of breast cancer screening (mammography in the past 5 years) among women aged 40 years and older was 38.4% in China, 10.8% in India, and 15.6% in South Africa. 9
In Italy, the uptake of past 3 years Pap smear or HPV testing was 77% (women 25-64 years), past 2 years mammography was 70% (women 50-69 years), and past 2 years colorectal screening was 38% (50-69 year-olds). 10 In Brunei Darussalam, the prevalence of a Pap smear test (women 18-69 years) was 56.5%, mammography (women 18-69 years) 11.3%, clinical breast examination (CBE) (women 18-69 years) 56.2%, fecal occult blood test (men and women 18-69 years) 20.0%, and colonoscopy (men and women 18-69 years) 7.9%. 11 Factors associated with cancer screening may include sociodemographic factors, health system factors, health status, and lifestyle factors. 5,12 Sociodemographic factors associated with cancer screening uptake include higher socioeconomic position, 7,13-15 older age, 16 younger age, 17 and urban living. 18,19 Health system factors associated with cancer screening uptake include increased access to health care, 20 having health insurance, 18,21 having had a blood cholesterol test, 22 general health care utilization, 13,16,17,21 and complementary medicine use. 23 Health status factors associated with cancer screening uptake include having chronic conditions, 24 having multimorbidity or comorbidity, 11,25 a family history of non-communicable diseases, 11 not having mental distress or illness or having depressive symptoms, 17,26,27 not having obesity, 17,28 good self-rated health status, 29 and better physical functioning. 17 Positive lifestyle behaviors associated with cancer screening uptake include physical activity, fruit and vegetable consumption, not smoking, 7,17,30 and not consuming alcohol. 11 The study aimed to estimate the prevalence and associated factors of cancer screening uptake among adults in a national population-based survey in Marshall Islands.
10,968
460
[ 482, 2310, 906, 216, 29, 265, 125, 102, 148, 211, 190, 148 ]
17
[ "years", "screening", "past", "cancer", "women", "cancer screening", "ci", "aor", "year", "blood" ]
[ "country pacific cancer", "rates cancers fatal", "cancer incidence rates", "cancer marshall islands", "islands breast cancer" ]
[CONTENT] cancer screening | men | women | determinants | Marshall islands [SUMMARY]
[CONTENT] cancer screening | men | women | determinants | Marshall islands [SUMMARY]
[CONTENT] cancer screening | men | women | determinants | Marshall islands [SUMMARY]
[CONTENT] cancer screening | men | women | determinants | Marshall islands [SUMMARY]
[CONTENT] cancer screening | men | women | determinants | Marshall islands [SUMMARY]
[CONTENT] cancer screening | men | women | determinants | Marshall islands [SUMMARY]
[CONTENT] Adult | Aged | Breast Neoplasms | Colonic Neoplasms | Cross-Sectional Studies | Early Detection of Cancer | Female | Humans | Male | Micronesia | Middle Aged | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Breast Neoplasms | Colonic Neoplasms | Cross-Sectional Studies | Early Detection of Cancer | Female | Humans | Male | Micronesia | Middle Aged | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Breast Neoplasms | Colonic Neoplasms | Cross-Sectional Studies | Early Detection of Cancer | Female | Humans | Male | Micronesia | Middle Aged | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Breast Neoplasms | Colonic Neoplasms | Cross-Sectional Studies | Early Detection of Cancer | Female | Humans | Male | Micronesia | Middle Aged | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Breast Neoplasms | Colonic Neoplasms | Cross-Sectional Studies | Early Detection of Cancer | Female | Humans | Male | Micronesia | Middle Aged | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Breast Neoplasms | Colonic Neoplasms | Cross-Sectional Studies | Early Detection of Cancer | Female | Humans | Male | Micronesia | Middle Aged | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
[CONTENT] country pacific cancer | rates cancers fatal | cancer incidence rates | cancer marshall islands | islands breast cancer [SUMMARY]
[CONTENT] country pacific cancer | rates cancers fatal | cancer incidence rates | cancer marshall islands | islands breast cancer [SUMMARY]
[CONTENT] country pacific cancer | rates cancers fatal | cancer incidence rates | cancer marshall islands | islands breast cancer [SUMMARY]
[CONTENT] country pacific cancer | rates cancers fatal | cancer incidence rates | cancer marshall islands | islands breast cancer [SUMMARY]
[CONTENT] country pacific cancer | rates cancers fatal | cancer incidence rates | cancer marshall islands | islands breast cancer [SUMMARY]
[CONTENT] country pacific cancer | rates cancers fatal | cancer incidence rates | cancer marshall islands | islands breast cancer [SUMMARY]
[CONTENT] years | screening | past | cancer | women | cancer screening | ci | aor | year | blood [SUMMARY]
[CONTENT] years | screening | past | cancer | women | cancer screening | ci | aor | year | blood [SUMMARY]
[CONTENT] years | screening | past | cancer | women | cancer screening | ci | aor | year | blood [SUMMARY]
[CONTENT] years | screening | past | cancer | women | cancer screening | ci | aor | year | blood [SUMMARY]
[CONTENT] years | screening | past | cancer | women | cancer screening | ci | aor | year | blood [SUMMARY]
[CONTENT] years | screening | past | cancer | women | cancer screening | ci | aor | year | blood [SUMMARY]
[CONTENT] cancer | cancer screening | years | screening | women | cervical | 69 | cervical cancer | breast | screening uptake [SUMMARY]
[CONTENT] majuro | sample | 32 | ebeye | based | majuro ebeye | sample size | based prevalence | size | islands [SUMMARY]
[CONTENT] aor | ci | 95 ci | 95 | years | past | screening | ratio | odds ratio | women [SUMMARY]
[CONTENT] cancer | cancer screening | improving | screening | health | uptake | health screening blood pressure | training health care providers | identified cancer screening | identified cancer screening higher [SUMMARY]
[CONTENT] years | aor | screening | ci | past | 95 ci | 95 | cancer | women | cancer screening [SUMMARY]
[CONTENT] years | aor | screening | ci | past | 95 ci | 95 | cancer | women | cancer screening [SUMMARY]
[CONTENT] Marshall Islands [SUMMARY]
[CONTENT] 2,813 | 21-75 years | Median | 37.4 years | 2017/2018 | Marshall ||| Pap | Vaginal Inspection with Acetic Acid [SUMMARY]
[CONTENT] 21.7% | 50-74 years | past year | CBE | 15.9% | 40 years | VIA | 32.6% | 21-65 years | past year | 21.8% | 22.3% | 50-75 years | 9.1% | 7.3% | 50-75 years ||| 1.91 | 95% | CI | 1.07-3.41 | 2 years | 50-74 years ||| 2.39 | 95% | CI | 1.71-3.35 | 1.59 | 95% | CI | 1.13 | the past year | 1.51 | 95% | CI | 1.17 | 1.96 | 1.88 | 95% | CI | 1.07-3.30 | 2-3 | a day | 1.42 | 95% | CI | 1.03-1.95 | 30 days | 0.56 | 95% | CI | 0.41-0.76 | Pap | VIA | 21-65 years ||| 2.58 | 95% | CI | 1.02-6.58 | 2.87 | 95% | CI | 1.48-5.59 | 0.09 | 95% | CI | 0.01-0.65 | 10 years | 50-75 year-olds [SUMMARY]
[CONTENT] Marshall Islands [SUMMARY]
[CONTENT] Marshall Islands ||| 2,813 | 21-75 years | Median | 37.4 years | 2017/2018 | Marshall ||| Pap | Vaginal Inspection with Acetic Acid ||| ||| 21.7% | 50-74 years | past year | CBE | 15.9% | 40 years | VIA | 32.6% | 21-65 years | past year | 21.8% | 22.3% | 50-75 years | 9.1% | 7.3% | 50-75 years ||| 1.91 | 95% | CI | 1.07-3.41 | 2 years | 50-74 years ||| 2.39 | 95% | CI | 1.71-3.35 | 1.59 | 95% | CI | 1.13 | the past year | 1.51 | 95% | CI | 1.17 | 1.96 | 1.88 | 95% | CI | 1.07-3.30 | 2-3 | a day | 1.42 | 95% | CI | 1.03-1.95 | 30 days | 0.56 | 95% | CI | 0.41-0.76 | Pap | VIA | 21-65 years ||| 2.58 | 95% | CI | 1.02-6.58 | 2.87 | 95% | CI | 1.48-5.59 | 0.09 | 95% | CI | 0.01-0.65 | 10 years | 50-75 year-olds ||| Marshall Islands [SUMMARY]
[CONTENT] Marshall Islands ||| 2,813 | 21-75 years | Median | 37.4 years | 2017/2018 | Marshall ||| Pap | Vaginal Inspection with Acetic Acid ||| ||| 21.7% | 50-74 years | past year | CBE | 15.9% | 40 years | VIA | 32.6% | 21-65 years | past year | 21.8% | 22.3% | 50-75 years | 9.1% | 7.3% | 50-75 years ||| 1.91 | 95% | CI | 1.07-3.41 | 2 years | 50-74 years ||| 2.39 | 95% | CI | 1.71-3.35 | 1.59 | 95% | CI | 1.13 | the past year | 1.51 | 95% | CI | 1.17 | 1.96 | 1.88 | 95% | CI | 1.07-3.30 | 2-3 | a day | 1.42 | 95% | CI | 1.03-1.95 | 30 days | 0.56 | 95% | CI | 0.41-0.76 | Pap | VIA | 21-65 years ||| 2.58 | 95% | CI | 1.02-6.58 | 2.87 | 95% | CI | 1.48-5.59 | 0.09 | 95% | CI | 0.01-0.65 | 10 years | 50-75 year-olds ||| Marshall Islands [SUMMARY]
Pre-Clinical Investigation of Keratose as an Excipient of Drug Coated Balloons.
32244375
Drug-coated balloons (DCBs), which deliver anti-proliferative drugs with the aid of excipients, have emerged as a new endovascular therapy for the treatment of peripheral arterial disease. In this study, we evaluated the use of keratose (KOS) as a novel DCB-coating excipient to deliver and retain paclitaxel.
BACKGROUND
A custom coating method was developed to deposit KOS and paclitaxel on uncoated angioplasty balloons. The retention of the KOS-paclitaxel coating, in comparison to a commercially available DCB, was evaluated using a novel vascular-motion simulating ex vivo flow model at 1 h and 3 days. Additionally, the locoregional biological response of the KOS-paclitaxel coating was evaluated in a rabbit ilio-femoral injury model at 14 days.
METHODS
The KOS coating exhibited greater retention of the paclitaxel at 3 days under pulsatile conditions with vascular motion as compared to the commercially available DCB (14.89 ± 4.12 ng/mg vs. 0.60 ± 0.26 ng/mg, p = 0.018). Histological analysis of the KOS-paclitaxel-treated arteries demonstrated a significant reduction in neointimal thickness as compared to the uncoated balloons, KOS-only balloon and paclitaxel-only balloon.
RESULTS
The ability to enhance drug delivery and retention in targeted arterial segments can ultimately improve clinical peripheral endovascular outcomes.
CONCLUSIONS
[ "Angioplasty, Balloon", "Animals", "Antineoplastic Agents", "Cardiovascular Agents", "Coated Materials, Biocompatible", "Drug Carriers", "Drug Delivery Systems", "Drug Evaluation, Preclinical", "Immunohistochemistry", "Keratosis", "Paclitaxel", "Peripheral Arterial Disease" ]
7180741
1. Introduction
Drug-coated balloons (DCBs) represent a new therapeutic approach to treat peripheral arterial disease (PAD) [1,2,3,4,5]. In the United States, PAD affects more than eight million people, with an annual cost of roughly $21 billion [6]. Traditionally, endovascular treatment of PAD has been performed by balloon angioplasty or the placement of a permanent metallic stent [7,8]. However, results are poor, with 50–85% of patients developing hemodynamically significant restenosis (re-occlusion), and 16–65% developing occlusions within 2 years post-treatment [9,10]. The use of anti-proliferative drugs in combination with bare metal stents, i.e., drug-eluting stents (DES), was a major breakthrough and highly successful in treating coronary artery disease [11,12]. However, stents have shown very poor clinical outcomes in treating PAD, as they are subjected to biomechanical stress and severe artery deformation (twisting, bending, and shortening), leading to high fracture rates (up to 68%) and restenosis [13]. DCBs, which were FDA-approved for the treatment of PAD in late 2014, provide a new therapeutic approach for interventionalists to practice a ‘leave nothing behind’ procedure, preserving future treatment options. DCBs are angioplasty balloons directly coated with an anti-proliferative therapeutic drug and an excipient (drug carrier) [1,14,15,16,17,18]. The excipient enhances the adhesion of the drug to the balloon surface, increases the stability of the drug coating during handling and delivery, and maximizes drug retention in the targeted arterial segment [18,19,20,21,22,23,24]. Current DCB excipients include polysorbate and sorbitol, urea, polyethylene glycol (PEG) and butyryl-tri-hexyl citrate (BTHC). The rationale for the selection of these various excipients varies. For example, excipients such as polysorbate and PEG are known cosolvents of paclitaxel [25,26], which can alter the vessel interaction of the drug with the DCB device. Conversely, urea acts to increase paclitaxel release at the lesion [18], and PEG has been shown to bind to hydroxylapatite, a primary component of calcified atherosclerotic lesions [17,19,23,24], thereby improving local pharmacodynamics. However, more recent pre-clinical studies have demonstrated the potential of DCB excipients to embolize and travel downstream to distal tissue post-treatment [27,28]. As peripheral arteries undergo severe mechanical deformation, excipients should aid in maintaining drug residency on the luminal surface, in particular in the early phase after delivery, prior to the buildup of tissue on the luminal surface of the artery. Therefore, novel excipients that are capable of maintaining drug residency while minimizing downstream or off-target effects are needed. Keratins are a class of proteins that can be derived from numerous sources, including human hair. Keratins have been shown to achieve the sustained release of small-molecule drugs and growth factors [29,30]. Further, keratin films have been reported for use in vascular grafts to reduce thrombosis, suggesting their utility in cardiovascular applications [31]. The goal of this study was thus to examine the ability to use an oxidized form of keratin (known as keratose (KOS)) as a new drug carrier excipient to aid in the delivery and retention of the anti-proliferative drug, paclitaxel. Specifically, the mobility, retention and biological impact of a KOS–paclitaxel-coated DCB were determined using ex vivo and in vivo models.
null
null
2. Results
2.1. Ex Vivo Results The non-coated angioplasty balloons were successfully coated with the KOS–paclitaxel mixture (Figure 1D,E). To determine the impact of vascular deformation on DCB retention, both the KOS–paclitaxel DCB and the commercially available DCB were tested under physiological pulsatile conditions alone (no twisting or shortening) and under physiological pulsatile conditions with vascular deformation. The pulsatile flow conditions consisted of pressures ranging from 70 to 120 mmHg with a mean flow rate of 120 mL/min at 60 beats per minute. The vascular deformation conditions consisted of shortening of the artery by 10% in the axial direction and twisting of the artery at 15°/cm. The frequency of the artery twisting and shortening was 0.05 Hz (3 cycles/min). All DCBs were inserted through a 6 Fr sheath into the closed-circulatory system under the physiological pulsatile conditions. The treated sections of the artery were marked during inflation of the DCBs. Vascular deformation (twisting and shortening) was initiated after 4 h of physiological pulsatile conditions following DCB deployment. At timepoints of 1 h and 3 days, the treated sections of the arteries were removed and analyzed for arterial tissue paclitaxel concentration (Figure 2). There was a reduction in arterial paclitaxel levels from 1 h to 3 days post-treatment for both the KOS–paclitaxel and the commercially available DCB under physiological pulsatile conditions (3 days—pulse only: KOS-PXL: 17.56 ± 7.19 ng/mg vs. commercial DCB: 24.44 ± 27.03 ng/mg, p = 0.96, Table 1). However, under pulsatile conditions with vascular deformation, paclitaxel was significantly better retained within the artery treated with the KOS coating (3 days—pulse and vascular deformation: KOS-PXL: 14.89 ± 4.12 ng/mg vs. commercial DCB: 0.60 ± 0.26 ng/mg, p = 0.018).
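To make the loading protocol above easier to picture, the sketch below reconstructs the setpoint waveforms it implies. This is illustrative only: the actual rig was driven by a custom LabVIEW program and an Arduino-based motor controller whose code is not reproduced here, and the sinusoidal waveform shapes and the 8 cm mounted segment length (stated in the Methods) are assumptions.

```python
# Illustrative reconstruction (not the rig's LabVIEW/Arduino code) of the setpoint
# waveforms implied by the protocol above: pulsatile pressure of 70-120 mmHg at
# 60 beats/min, with 10% axial shortening and 15 deg/cm twist cycled at 0.05 Hz.
# Sinusoidal shapes and the 8 cm mounted segment length are assumptions.
import numpy as np

segment_length_cm = 8.0      # mounted artery length (from the Methods)
pulse_hz = 1.0               # 60 beats per minute
motion_hz = 0.05             # 3 twist/shorten cycles per minute

t = np.linspace(0.0, 60.0, 6001)   # one minute of simulated setpoints, in seconds

# Pressure setpoint oscillating between roughly 70 and 120 mmHg
pressure_mmHg = 95.0 + 25.0 * np.sin(2.0 * np.pi * pulse_hz * t)

# Axial shortening setpoint: 0% to 10% of the mounted length
shortening_fraction = 0.05 * (1.0 - np.cos(2.0 * np.pi * motion_hz * t))
axial_displacement_cm = shortening_fraction * segment_length_cm

# Twist setpoint: 0 to 15 degrees per cm, expressed over the whole segment
twist_deg = 7.5 * (1.0 - np.cos(2.0 * np.pi * motion_hz * t)) * segment_length_cm

print(f"Pressure range: {pressure_mmHg.min():.0f}-{pressure_mmHg.max():.0f} mmHg")
print(f"Peak shortening: {axial_displacement_cm.max():.2f} cm "
      f"({100 * shortening_fraction.max():.0f}% of segment length)")
print(f"Peak twist: {twist_deg.max():.0f} degrees over {segment_length_cm:.0f} cm")
```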
2.2. Histomorphometric Results Following the ex vivo studies, in vivo studies were performed using the rabbit ilio–femoral injury model to determine the impact of the KOS excipient on vascular remodeling. The animals were treated with a KOS–paclitaxel balloon (n = 4), a KOS-only balloon (n = 4), a paclitaxel-only balloon (n = 4) or an uncoated balloon (n = 4). All arteries were treated successfully without any signs of dissection or thrombosis, and all animals survived the duration of the study. At 7 days, morphometric analysis demonstrated similar area measurements, including the EEL, IEL, lumen and media, for all treatment groups (Table 2). Neointimal thickness was significantly different between the groups (no coating: 0.10 ± 0.011 mm vs. KOS-only: 0.069 ± 0.022 mm vs. PXL-only: 0.066 ± 0.018 mm vs. KOS-PXL: 0.53 ± 0.003 mm, p = 0.005, Figure 3). Although percent area stenosis was lowest in the KOS-PXL group, differences between the groups were non-significant (no coating: 10.88% ± 4.52% vs. KOS-only: 9.99% ± 3.78% vs. PXL-only: 7.92% ± 3.84% vs. KOS-PXL: 6.80% ± 2.74%, p = 0.45). Histological analysis demonstrated minimal injury for all groups at 7 days, with the greatest endothelial cell loss in the KOS–paclitaxel-treated arteries (no coating: 0.25 ± 0.50 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.58 vs. KOS-PXL: 1.50 ± 0.58, p = 0.013). Inflammation was minimal for all groups and there was a trend towards greater SMC loss in the KOS–paclitaxel group (no coating: 0.00 ± 0.00 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.50 vs. KOS-PXL: 1.00 ± 0.82, p = 0.45). No aneurysmal dilatation or thrombosis was observed in any treated artery.
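For readers less familiar with the morphometric endpoints above, the sketch below applies conventional area-based definitions (neointimal area as IEL minus lumen area, medial area as EEL minus IEL area, and percent area stenosis as neointimal area relative to the IEL area). These formulas are a standard assumption for illustration (the paper does not spell out its exact calculations), and the example numbers are hypothetical, not values from Table 2.

```python
# Conventional morphometric derivations from vessel cross-section areas. The formulas
# are standard assumptions for illustration (the paper does not state its exact
# calculations), and the example numbers are hypothetical, not values from Table 2.
def morphometry(eel_area_mm2: float, iel_area_mm2: float, lumen_area_mm2: float) -> dict:
    neointimal_area = iel_area_mm2 - lumen_area_mm2    # tissue inside the IEL
    medial_area = eel_area_mm2 - iel_area_mm2          # tissue between EEL and IEL
    percent_area_stenosis = 100.0 * neointimal_area / iel_area_mm2
    return {
        "neointimal_area_mm2": round(neointimal_area, 3),
        "medial_area_mm2": round(medial_area, 3),
        "percent_area_stenosis": round(percent_area_stenosis, 1),
    }

# Hypothetical section: EEL 2.4 mm^2, IEL 1.8 mm^2, lumen 1.6 mm^2 -> ~11% area stenosis
print(morphometry(2.4, 1.8, 1.6))
```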
5. Conclusions
This study provides evidence supporting the use of keratose as an excipient for peripheral drug-coated balloon applications. The ex vivo results showed a potential benefit of the coating in minimizing the adverse impact of vascular motion on drug mobility, and a favorable biological response was observed in the pre-clinical model. Additional studies are warranted to further demonstrate the safety and efficacy profile of the keratose coating in larger animal models and over longer durations. Overall, this approach has the potential to improve interventional outcomes and the quality of life of millions of patients suffering from PAD.
[ "2.1. Ex Vivo Results", "2.2. Histomorphometric Results", "4. Materials and Methods ", "4.1. Keratose-Coated Balloons", "4.2. Peripheral-Simulating Bioreactor", "4.3. Ex Vivo DCB Testing and Arterial Time Drug Concentration", "4.4. Rabbit Injury Model", "4.5. Arterial Sections", "4.6. Histomorphometric Analysis", "4.7. Statistical Analysis" ]
[ "The non-coated angioplasty balloons were successfully coated with the KOS–paclitaxel mixture (Figure 1D,E). To determine the impact of vascular deformation on DCB retention, both the KOS–paclitaxel DCB and the commercially available DCB were tested under only physiological pulsatile conditions (no twisting or shortening) and physiological pulsatile conditions with vascular deformation conditions. The pulsatile flow conditions consisted of pressures ranging from 70 to 120 mmHg with a mean flow rate of 120 mL/min at 60 beats per minute. The vascular deformation conditions consisted of the artery shortening 10% in the axial direction and twisting of the artery at 15°/cm. The frequency of the artery twisting and shortening was 0.05 Hz (3 cycles/min). All DCBs were inserted through a 6 Fr sheath into the closed-circulatory system under the physiological pulsatile conditions. The treated sections of the artery were marked during inflation of the DCBs. It is noted that vascular deformation (twisting and shortening) occurred following 4 h of physiological pulsatile conditions of DCB deployment. At timepoints of 1 h and 3 days, the treated section of the arteries were removed and analyzed for arterial tissue paclitaxel concentration (Figure 2). There was a reduction in arterial paclitaxel levels from 1 h to 3 days post-treatment for both the KOS–paclitaxel and the commercially available DCB under physiological pulsatile conditions (3 days—pulse only: KOS-PXL: 17.56 ± 7.19 ng/mg vs. commercial DCB: 24.44 ± 27.03, p = 0.96, Table 1). However, under pulsatile and vascular deformation, paclitaxel was significantly retained within the treated artery (3 days—pulse and vascular deformation: KOS-PXL: 14.89 ± 4.12 ng/mg vs. commercial DCB: 0.60 ± 0.26, p = 0.018). ", "Following ex vivo studies, in vivo studies were performed using the rabbit ilio–femoral injury model to determine the impact of the KOS excipients on vascular remodeling. The animals were treated with a KOS-paclitaxel (n = 4), KOS-only balloon (n = 4), paclitaxel-only balloon (n = 4) or an uncoated balloon (n = 4). All arteries were treated successfully without any signs of dissection or thrombosis, and all animals survived the duration of the study. At 7 days, morphometric analysis demonstrated similar area measurements, including the EEL, IEL, lumen and media, for all treatment groups (Table 2). Neointimal thickness was significantly different between the varying groups (no coating: 0.10 ± 0.011 mm vs. KOS-only: 0.069 ± 0.022 mm vs. PXL-only: 0.066 ± 0.018 mm vs. KOS-PXL: 0.53 ± 0.003 mm, p = 0.005, Figure 3). Although percent area stenosis was the least in the KOS-PXL group, differences between the group were non-significant (no coating: 10.88% ± 4.52% vs. KOS-only: 9.99% ± 3.78% vs. PXL-only: 7.92% ± 3.84% vs. KOS-PXL: 6.80% ± 2.74%, p = 0.45). \nHistological analysis demonstrated minimal injury for all groups at 7 days with the greatest endothelial cell loss in the KOS–paclitaxel treated arteries (no coating: 0.25 ± 0.50 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.58 vs. KOS-PXL: 1.50 ± 0.58, p = 0.013). Inflammation was minimal for all groups and there was a trend towards greater SMC loss in the KOS–paclitaxel group (no coating: 0.00 ± 0.00 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.50 vs. KOS-PXL: 1.00 ± 0.82, p = 0.45). No aneurysmal dilatation or thrombosis was observed in any treated artery.", " 4.1. Keratose-Coated Balloons The KOS-coated balloons were prepared as previously described [44,45]. 
Briefly, paclitaxel (LC Laboratories, Woburn, MA, USA) was prepared by dissolving paclitaxel in absolute ethanol followed by sonication, at a final concentration of 40 mg/mL. Keratose (KeraNetics LLC, Winston-Salem, NC, USA) solution was prepared by dissolving lyophilized keratose in iohexol (GE Healthcare, Little Chalfont, UK) at a 6% weight-to-volume ratio. An in-house air spray coating method was used to deposit keratose and paclitaxel in a layered approach on uncoated angioplasty balloons (Abbott Vascular, Abbott Park, IL, USA) [45]. Coated balloons were then sterilized by UV irradiation. \n 4.2. Peripheral-Simulating Bioreactor The peripheral-simulating bioreactor was designed to shorten and twist two harvested porcine carotid arteries subjected to pulsatile flow conditions (Figure 1A). The overall system (46 × 19 × 19 cm) was designed to fit inside a standard CO2 incubator by arranging the arteries in a parallel configuration. The system uses one stepper motor per artery for rotational motion and one stepper motor for the translational motion of both arteries. Custom connectors were machined to mount the arteries to the stepper motors. The motion of the stepper motors was measured using rotary encoders (CUI AMT11, Tualatin, OR, USA) mounted on the shaft of each motor.
An Arduino microcontroller with two motor shields was used to control the motors along with an LCD keypad module to provide an intuitive user experience, displaying time and cycles remaining for each test and providing physical inputs to start, stop, input artery length and the duration of testing. \nThe carotid arteries, positioned within the vascular-simulating bioreactor, were harvested from large pigs (250–350 lbs.) from a local abattoir and transferred in sterile PBS with 1% antibiotic-antimitotic (Gibco, Grand Island, NY, USA). The arteries were then rinsed in sterile PBS in a culture hood and trimmed. Eight-cm-long segments were cut and tied with sutures onto fittings within the ex vivo setup. The circulating medium consisted of the system made up of Dulbecco’s modified eagle’s medium containing 10% fetal bovine serum and 1% antibiotic–antimycotic. \n 4.3. Ex Vivo DCB Testing and Arterial Time Drug Concentration Prior to any vascular motion (twisting and shortening), all arteries were subjected to pulsatile flow for 1 h, as defined by a custom LabVIEW program as previously described [42] The pressure was monitored via a pressure catheter transducer (Millar Instruments, Houston, TX). Flow was monitored by an ultrasonic flow meter. Following this pre-conditioning phase, the vessel diameter was measured by ultrasound (Figure 1F,G). Harvested arteries were then treated by either the KOS–paclitaxel-coated balloon or a commercially available DCB (In.PACT Admiral DCB, Medtronic, Santa Rosa, CA, USA). The delivery pressure of the DCB was determined by the manufacturers’ specification at a 10–20% overstretch. At timepoints of one hour and three-days, flow was ceased, and the treated portion of the vessel was removed. Excised vessels were flash frozen, stored at −80 °C and shipped on dry ice to iC42 Clinical Research and Development (Aurora, CO, USA) for the quantification of arterial paclitaxel. Quantification of arterial paclitaxel levels was performed using a validated high-performance liquid chromatography (HPLC)-electrospray ionization- tandem mass spectrometry assay (LC-MS/MS) [44,45,46]. In brief, the LC-MS/MS system was a series 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA) linked to a Sciex 5000 triple-stage quadrupole mass spectrometer (MS/MS, Sciex, Concord, ON, USA) via a turbo-flow electrospray ionization source. The artery tissue samples were homogenized using an electric wand homogenizer (VWR 200, VWR International, Radnor, PA, USA) after the addition of 1 mL of phosphate buffer. Eight hundred (800) μL of 0.2 M ZnSO4 30% water/70% methanol v/v protein precipitation solution containing the internal standard (paclitaxel-D5, 10 ng/mL) was added. Samples were vortexed for 5 min, centrifuged (16,000, 4 °C, 15 min) and transferred into glass HPLC vials. Study samples were diluted as necessary for detector signals to fall within the dynamic MS/MS detector range. One hundred (100) μL of the samples was injected onto a 4.6 × 12.5 mm extraction column (Eclipse XDB C8, 5 μm particle size, Agilent Technologies, Palo Alto, CA, USA). Samples were washed with a mobile phase of 15% methanol and 85% 0.1% formic acid using a flow of 3 mL/min. The temperature for the extraction column was 65 °C. After 1 min, the switching valve was activated and the analytes were eluted in the backflush mode from the extraction column onto a 150 × 4.6 mm analytical column (Zorbax XDB C8, 3.5 µm particle size, Agilent). 
The analytes were eluted from the analytical column using a gradient of methanol/acetonitrile (1/1 v/v) plus 0.1% formic acid (solvent B) and 0.1% formic acid in HPLC grade water (solvent A). The MS/MS was run in the positive multi-reaction mode and the following ion transitions were monitored: m/z = 876.6 [M + Na]+ → 308.2 (paclitaxel) and m/z = 881.6 [M + Na]+ → 313.1 (the internal standard paclitaxel-D5). Paclitaxel tissue concentrations were calculated based on paclitaxel/paclitaxel-D5 peak area ratios using a quadratic regression equation with 1/x weighting. The range of reliable response was 0.5–100 ng/mL tissue homogenate. Inter-day imprecision was less than 15% and accuracy was within 85–115% of the nominal concentrations. There were no significant matrix interferences, carry-over or matrix effects. For more details, please see the aforementioned publications [42,44,45].\nPrior to any vascular motion (twisting and shortening), all arteries were subjected to pulsatile flow for 1 h, as defined by a custom LabVIEW program as previously described [42] The pressure was monitored via a pressure catheter transducer (Millar Instruments, Houston, TX). Flow was monitored by an ultrasonic flow meter. Following this pre-conditioning phase, the vessel diameter was measured by ultrasound (Figure 1F,G). Harvested arteries were then treated by either the KOS–paclitaxel-coated balloon or a commercially available DCB (In.PACT Admiral DCB, Medtronic, Santa Rosa, CA, USA). The delivery pressure of the DCB was determined by the manufacturers’ specification at a 10–20% overstretch. At timepoints of one hour and three-days, flow was ceased, and the treated portion of the vessel was removed. Excised vessels were flash frozen, stored at −80 °C and shipped on dry ice to iC42 Clinical Research and Development (Aurora, CO, USA) for the quantification of arterial paclitaxel. Quantification of arterial paclitaxel levels was performed using a validated high-performance liquid chromatography (HPLC)-electrospray ionization- tandem mass spectrometry assay (LC-MS/MS) [44,45,46]. In brief, the LC-MS/MS system was a series 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA) linked to a Sciex 5000 triple-stage quadrupole mass spectrometer (MS/MS, Sciex, Concord, ON, USA) via a turbo-flow electrospray ionization source. The artery tissue samples were homogenized using an electric wand homogenizer (VWR 200, VWR International, Radnor, PA, USA) after the addition of 1 mL of phosphate buffer. Eight hundred (800) μL of 0.2 M ZnSO4 30% water/70% methanol v/v protein precipitation solution containing the internal standard (paclitaxel-D5, 10 ng/mL) was added. Samples were vortexed for 5 min, centrifuged (16,000, 4 °C, 15 min) and transferred into glass HPLC vials. Study samples were diluted as necessary for detector signals to fall within the dynamic MS/MS detector range. One hundred (100) μL of the samples was injected onto a 4.6 × 12.5 mm extraction column (Eclipse XDB C8, 5 μm particle size, Agilent Technologies, Palo Alto, CA, USA). Samples were washed with a mobile phase of 15% methanol and 85% 0.1% formic acid using a flow of 3 mL/min. The temperature for the extraction column was 65 °C. After 1 min, the switching valve was activated and the analytes were eluted in the backflush mode from the extraction column onto a 150 × 4.6 mm analytical column (Zorbax XDB C8, 3.5 µm particle size, Agilent). 
The analytes were eluted from the analytical column using a gradient of methanol/acetonitrile (1/1 v/v) plus 0.1% formic acid (solvent B) and 0.1% formic acid in HPLC grade water (solvent A). The MS/MS was run in the positive multi-reaction mode and the following ion transitions were monitored: m/z = 876.6 [M + Na]+ → 308.2 (paclitaxel) and m/z = 881.6 [M + Na]+ → 313.1 (the internal standard paclitaxel-D5). Paclitaxel tissue concentrations were calculated based on paclitaxel/paclitaxel-D5 peak area ratios using a quadratic regression equation with 1/x weighting. The range of reliable response was 0.5–100 ng/mL tissue homogenate. Inter-day imprecision was less than 15% and accuracy was within 85–115% of the nominal concentrations. There were no significant matrix interferences, carry-over or matrix effects. For more details, please see the aforementioned publications [42,44,45].\n 4.4. Rabbit Injury Model This study was approved by the Institutional Animal Care and Use Committee and conformed to the position of the American Heart Association on use of animals in research. The experimental preparation of the animal model has been previously reported [42,46]. Under fluoroscopic guidance, eight anesthetized adult male New Zealand White rabbits underwent endothelial denudation of both iliac arteries using an angioplasty balloon catheter (3.0 × 8 mm). Subsequently, arteries were treated by either KOS–paclitaxel (3.0 × 15 mm), KOS-only balloon (3.0 × 15 mm), paclitaxel-only balloon (3.0 × 15 mm), or an uncoated balloon (3.0 × 15 mm) at a delivery pressure of 8 atm for two minutes. Anti-platelet therapy consisted of aspirin (40 mg/day) given orally 24 h before catheterization, with continued dosing throughout the in-life-phase of the study, while single-dose intra-arterial heparin (150 IU/kg) and lidocaine were administered at the time of catheterization. The animals survived for 7 days and subsequent histological evaluations were performed. \nThis study was approved by the Institutional Animal Care and Use Committee and conformed to the position of the American Heart Association on use of animals in research. The experimental preparation of the animal model has been previously reported [42,46]. Under fluoroscopic guidance, eight anesthetized adult male New Zealand White rabbits underwent endothelial denudation of both iliac arteries using an angioplasty balloon catheter (3.0 × 8 mm). Subsequently, arteries were treated by either KOS–paclitaxel (3.0 × 15 mm), KOS-only balloon (3.0 × 15 mm), paclitaxel-only balloon (3.0 × 15 mm), or an uncoated balloon (3.0 × 15 mm) at a delivery pressure of 8 atm for two minutes. Anti-platelet therapy consisted of aspirin (40 mg/day) given orally 24 h before catheterization, with continued dosing throughout the in-life-phase of the study, while single-dose intra-arterial heparin (150 IU/kg) and lidocaine were administered at the time of catheterization. The animals survived for 7 days and subsequent histological evaluations were performed. \n 4.5. Arterial Sections Following the duration of the study, animals were anesthetized and euthanized, and the treated artery segments were removed based on landmarks identified by angiography. The arteries were perfused with saline and formalin-fixed under physiological pressure prior to removal. 
The segments were stored in 10% formalin at room temperature and then processed to paraffin blocks, sectioned, and stained with Hematoxylin and Eosin (H&E) or Verhoeff’s elastin stain (VEG).\nFollowing the duration of the study, animals were anesthetized and euthanized, and the treated artery segments were removed based on landmarks identified by angiography. The arteries were perfused with saline and formalin-fixed under physiological pressure prior to removal. The segments were stored in 10% formalin at room temperature and then processed to paraffin blocks, sectioned, and stained with Hematoxylin and Eosin (H&E) or Verhoeff’s elastin stain (VEG).\n 4.6. Histomorphometric Analysis Histological sections were digitized and measurements performed using ImageJ software (NIH). Cross-sectional area measurements included the external elastic lamina (EEL), internal elastic lamina (IEL), and lumen area of each section. Using these measurements, the medial area, neointimal area and percent area stenosis were calculated as previously described [46,47]. \nMorphological analysis was performed by light microscopy using grading criteria as previously published [46,47]. The parameters assessed included intimal healing as judged by injury, endothelial cell loss and inflammation. The medial wall was also assessed for drug-induced biological effect, specifically looking at smooth muscle cell loss. These parameters were semi-quantified using a scoring system of 0 (none), 1 (minimal), 2 (mild), 3 (moderate) and 4 (severe) as previously described [47].\nHistological sections were digitized and measurements performed using ImageJ software (NIH). Cross-sectional area measurements included the external elastic lamina (EEL), internal elastic lamina (IEL), and lumen area of each section. Using these measurements, the medial area, neointimal area and percent area stenosis were calculated as previously described [46,47]. \nMorphological analysis was performed by light microscopy using grading criteria as previously published [46,47]. The parameters assessed included intimal healing as judged by injury, endothelial cell loss and inflammation. The medial wall was also assessed for drug-induced biological effect, specifically looking at smooth muscle cell loss. These parameters were semi-quantified using a scoring system of 0 (none), 1 (minimal), 2 (mild), 3 (moderate) and 4 (severe) as previously described [47].\n 4.7. Statistical Analysis Results are reported as mean ± standard deviation. Data were compared with analysis of variance (ANOVA) using GraphPad Prism 7 (GraphPad Software, La Jolla, CA, USA). The comparison of quantitative data of multiple groups was performed by Tukey’s multiple comparisons post hoc test. Significance is reported as p < 0.05. \nResults are reported as mean ± standard deviation. Data were compared with analysis of variance (ANOVA) using GraphPad Prism 7 (GraphPad Software, La Jolla, CA, USA). The comparison of quantitative data of multiple groups was performed by Tukey’s multiple comparisons post hoc test. Significance is reported as p < 0.05. ", "The KOS-coated balloons were prepared as previously described [44,45]. Briefly, paclitaxel (LC Laboratories, Woburn, MA, USA) was prepared by dissolving paclitaxel in absolute ethanol followed by sonication at a final concentration of 40 mg/mL. Keratose (KeraNetics LLC, Winston-Salem, NC, USA) solution was prepared by dissolving lyophilized keratose in iohexol (GE Healthcare, Little Chalfont, UK) at a 6% weight-to-volume ratio. 
An in-house air spray coating method was used to deposit keratose and paclitaxel in a layered approach on uncoated angioplasty balloons (Abbott Vascular, Abbott Park, IL, USA) [45]. Coated balloons were then sterilized by UV irradiation. ", "The peripheral-simulating bioreactor was designed to shorten and twist two harvest porcine carotid arteries subjected to pulsatile flow conditions (Figure 1A). The overall system (46 × 19 × 19 cm) was designed to fit inside of a standard CO2 incubator by arranging the arteries in a parallel configuration. The system utilizes one stepper motor per artery for rotational motion and one stepper motor for the translational motion of both arteries. Custom connectors were machined to mount the arteries to the stepper motors. The motion of the stepper motors was measured using rotary encoders (CUI AMT11, Tualatin, OR, USA) mounted on the shaft of each motor. An Arduino microcontroller with two motor shields was used to control the motors along with an LCD keypad module to provide an intuitive user experience, displaying time and cycles remaining for each test and providing physical inputs to start, stop, input artery length and the duration of testing. \nThe carotid arteries, positioned within the vascular-simulating bioreactor, were harvested from large pigs (250–350 lbs.) from a local abattoir and transferred in sterile PBS with 1% antibiotic-antimitotic (Gibco, Grand Island, NY, USA). The arteries were then rinsed in sterile PBS in a culture hood and trimmed. Eight-cm-long segments were cut and tied with sutures onto fittings within the ex vivo setup. The circulating medium consisted of the system made up of Dulbecco’s modified eagle’s medium containing 10% fetal bovine serum and 1% antibiotic–antimycotic. ", "Prior to any vascular motion (twisting and shortening), all arteries were subjected to pulsatile flow for 1 h, as defined by a custom LabVIEW program as previously described [42] The pressure was monitored via a pressure catheter transducer (Millar Instruments, Houston, TX). Flow was monitored by an ultrasonic flow meter. Following this pre-conditioning phase, the vessel diameter was measured by ultrasound (Figure 1F,G). Harvested arteries were then treated by either the KOS–paclitaxel-coated balloon or a commercially available DCB (In.PACT Admiral DCB, Medtronic, Santa Rosa, CA, USA). The delivery pressure of the DCB was determined by the manufacturers’ specification at a 10–20% overstretch. At timepoints of one hour and three-days, flow was ceased, and the treated portion of the vessel was removed. Excised vessels were flash frozen, stored at −80 °C and shipped on dry ice to iC42 Clinical Research and Development (Aurora, CO, USA) for the quantification of arterial paclitaxel. Quantification of arterial paclitaxel levels was performed using a validated high-performance liquid chromatography (HPLC)-electrospray ionization- tandem mass spectrometry assay (LC-MS/MS) [44,45,46]. In brief, the LC-MS/MS system was a series 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA) linked to a Sciex 5000 triple-stage quadrupole mass spectrometer (MS/MS, Sciex, Concord, ON, USA) via a turbo-flow electrospray ionization source. The artery tissue samples were homogenized using an electric wand homogenizer (VWR 200, VWR International, Radnor, PA, USA) after the addition of 1 mL of phosphate buffer. 
Eight hundred (800) μL of 0.2 M ZnSO4 30% water/70% methanol v/v protein precipitation solution containing the internal standard (paclitaxel-D5, 10 ng/mL) was added. Samples were vortexed for 5 min, centrifuged (16,000, 4 °C, 15 min) and transferred into glass HPLC vials. Study samples were diluted as necessary for detector signals to fall within the dynamic MS/MS detector range. One hundred (100) μL of the samples was injected onto a 4.6 × 12.5 mm extraction column (Eclipse XDB C8, 5 μm particle size, Agilent Technologies, Palo Alto, CA, USA). Samples were washed with a mobile phase of 15% methanol and 85% 0.1% formic acid using a flow of 3 mL/min. The temperature for the extraction column was 65 °C. After 1 min, the switching valve was activated and the analytes were eluted in the backflush mode from the extraction column onto a 150 × 4.6 mm analytical column (Zorbax XDB C8, 3.5 µm particle size, Agilent). The analytes were eluted from the analytical column using a gradient of methanol/acetonitrile (1/1 v/v) plus 0.1% formic acid (solvent B) and 0.1% formic acid in HPLC grade water (solvent A). The MS/MS was run in the positive multi-reaction mode and the following ion transitions were monitored: m/z = 876.6 [M + Na]+ → 308.2 (paclitaxel) and m/z = 881.6 [M + Na]+ → 313.1 (the internal standard paclitaxel-D5). Paclitaxel tissue concentrations were calculated based on paclitaxel/paclitaxel-D5 peak area ratios using a quadratic regression equation with 1/x weighting. The range of reliable response was 0.5–100 ng/mL tissue homogenate. Inter-day imprecision was less than 15% and accuracy was within 85–115% of the nominal concentrations. There were no significant matrix interferences, carry-over or matrix effects. For more details, please see the aforementioned publications [42,44,45].", "This study was approved by the Institutional Animal Care and Use Committee and conformed to the position of the American Heart Association on use of animals in research. The experimental preparation of the animal model has been previously reported [42,46]. Under fluoroscopic guidance, eight anesthetized adult male New Zealand White rabbits underwent endothelial denudation of both iliac arteries using an angioplasty balloon catheter (3.0 × 8 mm). Subsequently, arteries were treated by either KOS–paclitaxel (3.0 × 15 mm), KOS-only balloon (3.0 × 15 mm), paclitaxel-only balloon (3.0 × 15 mm), or an uncoated balloon (3.0 × 15 mm) at a delivery pressure of 8 atm for two minutes. Anti-platelet therapy consisted of aspirin (40 mg/day) given orally 24 h before catheterization, with continued dosing throughout the in-life-phase of the study, while single-dose intra-arterial heparin (150 IU/kg) and lidocaine were administered at the time of catheterization. The animals survived for 7 days and subsequent histological evaluations were performed. ", "Following the duration of the study, animals were anesthetized and euthanized, and the treated artery segments were removed based on landmarks identified by angiography. The arteries were perfused with saline and formalin-fixed under physiological pressure prior to removal. The segments were stored in 10% formalin at room temperature and then processed to paraffin blocks, sectioned, and stained with Hematoxylin and Eosin (H&E) or Verhoeff’s elastin stain (VEG).", "Histological sections were digitized and measurements performed using ImageJ software (NIH). 
Cross-sectional area measurements included the external elastic lamina (EEL), internal elastic lamina (IEL), and lumen area of each section. Using these measurements, the medial area, neointimal area and percent area stenosis were calculated as previously described [46,47]. \nMorphological analysis was performed by light microscopy using grading criteria as previously published [46,47]. The parameters assessed included intimal healing as judged by injury, endothelial cell loss and inflammation. The medial wall was also assessed for drug-induced biological effect, specifically looking at smooth muscle cell loss. These parameters were semi-quantified using a scoring system of 0 (none), 1 (minimal), 2 (mild), 3 (moderate) and 4 (severe) as previously described [47].", "Results are reported as mean ± standard deviation. Data were compared with analysis of variance (ANOVA) using GraphPad Prism 7 (GraphPad Software, La Jolla, CA, USA). The comparison of quantitative data of multiple groups was performed by Tukey’s multiple comparisons post hoc test. Significance is reported as p < 0.05. " ]
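The histomorphometric and statistical subsections above (4.6 and 4.7) only reference the derived quantities and the ANOVA/Tukey workflow. The sketch below illustrates one way these calculations could be carried out, assuming the conventional definitions (medial area = EEL area − IEL area; neointimal area = IEL area − lumen area; percent area stenosis = neointimal area / IEL area × 100); the exact formulas should be confirmed against references [46,47]. All numeric inputs are hypothetical placeholders, not study data.

```python
# Minimal sketch of the Section 4.6/4.7 calculations.
# Assumptions: conventional morphometric definitions (medial = EEL - IEL,
# neointima = IEL - lumen, % area stenosis = neointima / IEL * 100);
# confirm against refs [46,47]. All values below are hypothetical placeholders.
from scipy import stats

def morphometry(eel_area: float, iel_area: float, lumen_area: float) -> dict:
    """Derive medial area, neointimal area and percent area stenosis."""
    medial_area = eel_area - iel_area
    neointimal_area = iel_area - lumen_area
    pct_area_stenosis = 100.0 * neointimal_area / iel_area
    return {"medial_area": medial_area,
            "neointimal_area": neointimal_area,
            "pct_area_stenosis": pct_area_stenosis}

# Example with hypothetical cross-sectional areas in mm^2.
print(morphometry(eel_area=3.2, iel_area=2.9, lumen_area=2.6))

# Hypothetical per-animal percent-stenosis values for the four treatment groups.
no_coating = [10.1, 14.2, 7.9, 11.3]
kos_only   = [9.2, 12.5, 6.8, 11.5]
pxl_only   = [7.1, 11.0, 4.9, 8.7]
kos_pxl    = [6.2, 9.5, 4.1, 7.4]

# One-way ANOVA across groups followed by Tukey's HSD pairwise comparisons,
# mirroring the "ANOVA with Tukey post hoc" analysis described in Section 4.7.
f_stat, p_value = stats.f_oneway(no_coating, kos_only, pxl_only, kos_pxl)
tukey = stats.tukey_hsd(no_coating, kos_only, pxl_only, kos_pxl)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
print(tukey)
```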
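The peripheral-simulating bioreactor (Sections 2.1 and 4.2) imposes 10% axial shortening and 15°/cm twist on 8 cm segments at 0.05 Hz, but the text does not specify the commanded waveform or the stepper hardware parameters. The sketch below assumes a smooth raised-cosine motion profile and hypothetical steps-per-revolution and lead-screw values; it is meant only to illustrate how the stated deformation targets could be converted into motor step commands, not to reproduce the authors' Arduino firmware.

```python
# Minimal sketch of a commanded motion profile for the peripheral-simulating
# bioreactor. Assumption: a raised-cosine (smooth 0 -> peak -> 0) waveform,
# since the paper does not specify one. Segment length, shortening, twist rate
# and frequency come from Sections 2.1/4.2; STEPS_PER_REV and the lead-screw
# pitch are hypothetical hardware parameters, not values from the paper.
import numpy as np

SEGMENT_LENGTH_CM = 8.0      # artery segment length (Section 4.2)
AXIAL_SHORTENING = 0.10      # 10% axial shortening
TWIST_DEG_PER_CM = 15.0      # 15 deg/cm twist
FREQ_HZ = 0.05               # 3 cycles/min deformation frequency

STEPS_PER_REV = 200 * 16     # hypothetical: 1.8-degree stepper, 16x microstepping
LEADSCREW_PITCH_CM = 0.2     # hypothetical linear travel per motor revolution

def motion_profile(t: np.ndarray):
    """Return (axial displacement in cm, twist angle in degrees) versus time."""
    phase = 0.5 * (1.0 - np.cos(2.0 * np.pi * FREQ_HZ * t))  # 0 -> 1 -> 0 each cycle
    displacement_cm = AXIAL_SHORTENING * SEGMENT_LENGTH_CM * phase
    twist_deg = TWIST_DEG_PER_CM * SEGMENT_LENGTH_CM * phase
    return displacement_cm, twist_deg

def to_step_targets(displacement_cm: np.ndarray, twist_deg: np.ndarray):
    """Convert the profile into step targets for the translational and rotary motors."""
    linear_steps = np.round(displacement_cm / LEADSCREW_PITCH_CM * STEPS_PER_REV)
    rotary_steps = np.round(twist_deg / 360.0 * STEPS_PER_REV)
    return linear_steps.astype(int), rotary_steps.astype(int)

t = np.linspace(0.0, 1.0 / FREQ_HZ, 201)        # one 20 s deformation cycle
disp, twist = motion_profile(t)
lin_steps, rot_steps = to_step_targets(disp, twist)
print(f"peak shortening: {disp.max():.2f} cm, peak twist: {twist.max():.0f} deg")
```

With these inputs the peak targets are 0.8 cm of shortening and 120° of twist per cycle, which is simply the stated 10% of an 8 cm segment and 15°/cm over 8 cm.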
[ null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Results", "2.1. Ex Vivo Results", "2.2. Histomorphometric Results", "3. Discussion", "4. Materials and Methods ", "4.1. Keratose-Coated Balloons", "4.2. Peripheral-Simulating Bioreactor", "4.3. Ex Vivo DCB Testing and Arterial Time Drug Concentration", "4.4. Rabbit Injury Model", "4.5. Arterial Sections", "4.6. Histomorphometric Analysis", "4.7. Statistical Analysis", "5. Conclusions" ]
[ "Drug-coated balloons (DCBs) represent a new therapeutic approach to treat peripheral arterial disease (PAD) [1,2,3,4,5]. In the United States, PAD affects more than eight million people, with an annual cost of roughly $21 billion [6]. Traditionally, endovascular treatment of PAD has been performed by balloon angioplasty or the placement of a permanent metallic stent [7,8]. However, results are poor, with 50–85% of patients developing hemodynamically significant restenosis (re-occlusion), and 16–65% developing occlusions within 2 years post-treatment [9,10]. The use of anti-proliferative drugs in combination with bare metal stents, i.e., drug-eluting stents (DES), was a major breakthrough and highly successful in treating coronary artery disease [11,12]. However, stents have shown very poor clinical outcomes in treating PAD, as they are subjected to biomechanical stress and severe artery deformation (twisting, bending, and shortening), leading to high fracture rates (up to 68%) and restenosis [13].\nDCBs, which were FDA-approved for the treatment of PAD in late 2014, provide a new therapeutic approach for interventionalists to practice a ‘leave nothing behind’ procedure, preserving future treatment options DCBs are angioplasty balloons directly coated with an anti-proliferative therapeutic drug and an excipient (drug carrier) [1,14,15,16,17,18]. The excipient enhances the adhesion of the drug to the balloon surface, increases the stability of the drug coating during handling and delivery, and maximizes drug retention to the targeted arterial segment. [18,19,20,21,22,23,24] Current DCBs excipients include polysorbate and sorbitol, urea, polyethylene glycol (PEG) and butyryl-tri-hexyl citrate (BTHC). The rationale for the selection of these various excipients varies. For example, excipients such as polysorbate and PEG are known cosolvents of paclitaxel [25,26], which can alter the vessel interaction of the drug with the DCB device. Conversely, urea acts to increase paclitaxel release at the lesion [18] and PEG has been shown to bind to hydroxylapatite, a primary component of calcified atherosclerotic lesions [17,19,23,24], thereby improving local pharmacodynamics.\nHowever, more recent pre-clinical studies have demonstrated the potential of DCB excipients to embolize and travel downstream to distal tissue post-treatment [27,28]. As peripheral arteries undergo severe mechanical deformation, excipients should aid in maintaining drug residency on the luminal surface, in particular at the early time phase, prior to the buildup of tissue, following delivery onto the luminal surface of the artery. Therefore, novel excipients that are capable of maintaining drug residency while minimizing downstream or off-target effects are needed. Keratins are a class of proteins that can be derived from numerous sources, including from human hair. Keratins have been shown to achieve the sustained release of small-molecule drugs and growth factors [29,30]. Further, keratin films have been reported for use in vascular grafts to reduce thrombosis, suggesting their utility in cardiovascular applications [31]. The goal of this study was thus to examine the ability to use an oxidized form of keratin (known as keratose (KOS)) as a new drug carrier excipient to aid in the delivery and retention of the anti-proliferative drug, paclitaxel. Specifically, the mobility, retention and biological impact of a KOS–paclitaxel-coated DCB was determined using ex vivo and in vivo models. ", " 2.1. 
Ex Vivo Results The non-coated angioplasty balloons were successfully coated with the KOS–paclitaxel mixture (Figure 1D,E). To determine the impact of vascular deformation on DCB retention, both the KOS–paclitaxel DCB and the commercially available DCB were tested under only physiological pulsatile conditions (no twisting or shortening) and physiological pulsatile conditions with vascular deformation conditions. The pulsatile flow conditions consisted of pressures ranging from 70 to 120 mmHg with a mean flow rate of 120 mL/min at 60 beats per minute. The vascular deformation conditions consisted of the artery shortening 10% in the axial direction and twisting of the artery at 15°/cm. The frequency of the artery twisting and shortening was 0.05 Hz (3 cycles/min). All DCBs were inserted through a 6 Fr sheath into the closed-circulatory system under the physiological pulsatile conditions. The treated sections of the artery were marked during inflation of the DCBs. It is noted that vascular deformation (twisting and shortening) occurred following 4 h of physiological pulsatile conditions of DCB deployment. At timepoints of 1 h and 3 days, the treated section of the arteries were removed and analyzed for arterial tissue paclitaxel concentration (Figure 2). There was a reduction in arterial paclitaxel levels from 1 h to 3 days post-treatment for both the KOS–paclitaxel and the commercially available DCB under physiological pulsatile conditions (3 days—pulse only: KOS-PXL: 17.56 ± 7.19 ng/mg vs. commercial DCB: 24.44 ± 27.03, p = 0.96, Table 1). However, under pulsatile and vascular deformation, paclitaxel was significantly retained within the treated artery (3 days—pulse and vascular deformation: KOS-PXL: 14.89 ± 4.12 ng/mg vs. commercial DCB: 0.60 ± 0.26, p = 0.018). \nThe non-coated angioplasty balloons were successfully coated with the KOS–paclitaxel mixture (Figure 1D,E). To determine the impact of vascular deformation on DCB retention, both the KOS–paclitaxel DCB and the commercially available DCB were tested under only physiological pulsatile conditions (no twisting or shortening) and physiological pulsatile conditions with vascular deformation conditions. The pulsatile flow conditions consisted of pressures ranging from 70 to 120 mmHg with a mean flow rate of 120 mL/min at 60 beats per minute. The vascular deformation conditions consisted of the artery shortening 10% in the axial direction and twisting of the artery at 15°/cm. The frequency of the artery twisting and shortening was 0.05 Hz (3 cycles/min). All DCBs were inserted through a 6 Fr sheath into the closed-circulatory system under the physiological pulsatile conditions. The treated sections of the artery were marked during inflation of the DCBs. It is noted that vascular deformation (twisting and shortening) occurred following 4 h of physiological pulsatile conditions of DCB deployment. At timepoints of 1 h and 3 days, the treated section of the arteries were removed and analyzed for arterial tissue paclitaxel concentration (Figure 2). There was a reduction in arterial paclitaxel levels from 1 h to 3 days post-treatment for both the KOS–paclitaxel and the commercially available DCB under physiological pulsatile conditions (3 days—pulse only: KOS-PXL: 17.56 ± 7.19 ng/mg vs. commercial DCB: 24.44 ± 27.03, p = 0.96, Table 1). However, under pulsatile and vascular deformation, paclitaxel was significantly retained within the treated artery (3 days—pulse and vascular deformation: KOS-PXL: 14.89 ± 4.12 ng/mg vs. 
commercial DCB: 0.60 ± 0.26, p = 0.018). \n 2.2. Histomorphometric Results Following ex vivo studies, in vivo studies were performed using the rabbit ilio–femoral injury model to determine the impact of the KOS excipients on vascular remodeling. The animals were treated with a KOS-paclitaxel (n = 4), KOS-only balloon (n = 4), paclitaxel-only balloon (n = 4) or an uncoated balloon (n = 4). All arteries were treated successfully without any signs of dissection or thrombosis, and all animals survived the duration of the study. At 7 days, morphometric analysis demonstrated similar area measurements, including the EEL, IEL, lumen and media, for all treatment groups (Table 2). Neointimal thickness was significantly different between the varying groups (no coating: 0.10 ± 0.011 mm vs. KOS-only: 0.069 ± 0.022 mm vs. PXL-only: 0.066 ± 0.018 mm vs. KOS-PXL: 0.53 ± 0.003 mm, p = 0.005, Figure 3). Although percent area stenosis was the least in the KOS-PXL group, differences between the group were non-significant (no coating: 10.88% ± 4.52% vs. KOS-only: 9.99% ± 3.78% vs. PXL-only: 7.92% ± 3.84% vs. KOS-PXL: 6.80% ± 2.74%, p = 0.45). \nHistological analysis demonstrated minimal injury for all groups at 7 days with the greatest endothelial cell loss in the KOS–paclitaxel treated arteries (no coating: 0.25 ± 0.50 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.58 vs. KOS-PXL: 1.50 ± 0.58, p = 0.013). Inflammation was minimal for all groups and there was a trend towards greater SMC loss in the KOS–paclitaxel group (no coating: 0.00 ± 0.00 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.50 vs. KOS-PXL: 1.00 ± 0.82, p = 0.45). No aneurysmal dilatation or thrombosis was observed in any treated artery.\nFollowing ex vivo studies, in vivo studies were performed using the rabbit ilio–femoral injury model to determine the impact of the KOS excipients on vascular remodeling. The animals were treated with a KOS-paclitaxel (n = 4), KOS-only balloon (n = 4), paclitaxel-only balloon (n = 4) or an uncoated balloon (n = 4). All arteries were treated successfully without any signs of dissection or thrombosis, and all animals survived the duration of the study. At 7 days, morphometric analysis demonstrated similar area measurements, including the EEL, IEL, lumen and media, for all treatment groups (Table 2). Neointimal thickness was significantly different between the varying groups (no coating: 0.10 ± 0.011 mm vs. KOS-only: 0.069 ± 0.022 mm vs. PXL-only: 0.066 ± 0.018 mm vs. KOS-PXL: 0.53 ± 0.003 mm, p = 0.005, Figure 3). Although percent area stenosis was the least in the KOS-PXL group, differences between the group were non-significant (no coating: 10.88% ± 4.52% vs. KOS-only: 9.99% ± 3.78% vs. PXL-only: 7.92% ± 3.84% vs. KOS-PXL: 6.80% ± 2.74%, p = 0.45). \nHistological analysis demonstrated minimal injury for all groups at 7 days with the greatest endothelial cell loss in the KOS–paclitaxel treated arteries (no coating: 0.25 ± 0.50 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.58 vs. KOS-PXL: 1.50 ± 0.58, p = 0.013). Inflammation was minimal for all groups and there was a trend towards greater SMC loss in the KOS–paclitaxel group (no coating: 0.00 ± 0.00 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.50 vs. KOS-PXL: 1.00 ± 0.82, p = 0.45). No aneurysmal dilatation or thrombosis was observed in any treated artery.", "The non-coated angioplasty balloons were successfully coated with the KOS–paclitaxel mixture (Figure 1D,E). 
To determine the impact of vascular deformation on DCB retention, both the KOS–paclitaxel DCB and the commercially available DCB were tested under only physiological pulsatile conditions (no twisting or shortening) and physiological pulsatile conditions with vascular deformation conditions. The pulsatile flow conditions consisted of pressures ranging from 70 to 120 mmHg with a mean flow rate of 120 mL/min at 60 beats per minute. The vascular deformation conditions consisted of the artery shortening 10% in the axial direction and twisting of the artery at 15°/cm. The frequency of the artery twisting and shortening was 0.05 Hz (3 cycles/min). All DCBs were inserted through a 6 Fr sheath into the closed-circulatory system under the physiological pulsatile conditions. The treated sections of the artery were marked during inflation of the DCBs. It is noted that vascular deformation (twisting and shortening) occurred following 4 h of physiological pulsatile conditions of DCB deployment. At timepoints of 1 h and 3 days, the treated section of the arteries were removed and analyzed for arterial tissue paclitaxel concentration (Figure 2). There was a reduction in arterial paclitaxel levels from 1 h to 3 days post-treatment for both the KOS–paclitaxel and the commercially available DCB under physiological pulsatile conditions (3 days—pulse only: KOS-PXL: 17.56 ± 7.19 ng/mg vs. commercial DCB: 24.44 ± 27.03, p = 0.96, Table 1). However, under pulsatile and vascular deformation, paclitaxel was significantly retained within the treated artery (3 days—pulse and vascular deformation: KOS-PXL: 14.89 ± 4.12 ng/mg vs. commercial DCB: 0.60 ± 0.26, p = 0.018). ", "Following ex vivo studies, in vivo studies were performed using the rabbit ilio–femoral injury model to determine the impact of the KOS excipients on vascular remodeling. The animals were treated with a KOS-paclitaxel (n = 4), KOS-only balloon (n = 4), paclitaxel-only balloon (n = 4) or an uncoated balloon (n = 4). All arteries were treated successfully without any signs of dissection or thrombosis, and all animals survived the duration of the study. At 7 days, morphometric analysis demonstrated similar area measurements, including the EEL, IEL, lumen and media, for all treatment groups (Table 2). Neointimal thickness was significantly different between the varying groups (no coating: 0.10 ± 0.011 mm vs. KOS-only: 0.069 ± 0.022 mm vs. PXL-only: 0.066 ± 0.018 mm vs. KOS-PXL: 0.53 ± 0.003 mm, p = 0.005, Figure 3). Although percent area stenosis was the least in the KOS-PXL group, differences between the group were non-significant (no coating: 10.88% ± 4.52% vs. KOS-only: 9.99% ± 3.78% vs. PXL-only: 7.92% ± 3.84% vs. KOS-PXL: 6.80% ± 2.74%, p = 0.45). \nHistological analysis demonstrated minimal injury for all groups at 7 days with the greatest endothelial cell loss in the KOS–paclitaxel treated arteries (no coating: 0.25 ± 0.50 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.58 vs. KOS-PXL: 1.50 ± 0.58, p = 0.013). Inflammation was minimal for all groups and there was a trend towards greater SMC loss in the KOS–paclitaxel group (no coating: 0.00 ± 0.00 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.50 vs. KOS-PXL: 1.00 ± 0.82, p = 0.45). 
No aneurysmal dilatation or thrombosis was observed in any treated artery.", "This study was designed to evaluate the use of keratose as a novel excipient for peripheral applications and, specifically, to determine the feasibility of the keratose excipient to retain paclitaxel under peripheral vascular mechanical environments. This was accomplished by developing a novel vascular-simulating ex vivo flow system and testing in a clinically relevant pre-clinical model. Furthermore, the vascular biological response to the keratose excipient was also investigated in the pre-clinical model. The ex vivo model arterial drug concentration results demonstrated that keratose significantly improves the retention of paclitaxel as compared to a commercially available DCB. Histomorphometric results of rabbit arteries treated by keratose demonstrated the safety and efficacy of the excipient in the delivery of paclitaxel. Overall, these results demonstrate the potential of the keratose as a DCB excipient for peripheral applications.\nDrug-coated balloons are the next-generation treatment for PAD. Approved in the US since late 2014, DCB represented a shift in the approach to treating peripheral artery disease. While the DES provides a scaffold for long-term drug release, DCBs are limited in the time they can interact with the target lesion (~30 s to 2 min). Therefore, a major goal of any excipient is to support the retention of the therapeutic agent to the arterial wall surface, even under vascular deformation conditions. In two recent studies, the embolization of release particulates from all currently FDA-approved DCB coatings was investigated [27,28]. Twenty-eight days post-delivery, their results also demonstrated evidence of distal embolization, including embolic crystalline material, in downstream tissue. Remarkably, pharmacokinetic analysis of the distal tissue showed similar or higher levels of paclitaxel concentration as compared to the arterial treatment site, in particular for the IN.PACT DCB. These results indicate the mobility of the DCB coating following deployment, although, to date, no studies have directly investigated the impact of vascular deformation on DCB performance.\nThe vascular-mimicking ex vivo system, to our knowledge, is the first system that can evaluate the acute drug-loading of arteries treated by endovascular devices under pulsatile and vascular deformation conditions using explanted pig arteries. Our testing of the KOS coating was performed under vascular deformation conditions of 10% artery shortening, 15°/cm twisting at a frequency of 0.05 Hz (3 cycles/min). These conditions were selected to replicate the human periphery motion of the femoral artery (shortening lengths of 7% and twisting at 11.5°/cm) and the popliteal–tibial artery motion (shortening of 15% and twisting at 19.9°/cm) [32,33,34,35]. The frequency of the peripheral movement will be 0.06 Hz (5184 cycles/day or 3.6 cycles/min), which is based upon the average steps per day of adults in the US [36]. Our results indicated that the KOS coating maintained paclitaxel tissue levels under physiological pulsatile and vascular motion conditions 3 days post-delivery.\nTo further evaluate the DCB coating, we fluorescently tagged (NHS-Fluorescein, Thermo Scientific) the KOS to visualize the presence of the coating acutely (1 h) and three days post-delivery in arteries undergoing vascular deformation. The presence of the KOS was confirmed by confocal microscopy (Figure 4). 
The mechanism by which this process occurs is not fully elucidated in these studies. In drug-release experiments with small molecule drugs such as ciprofloxacin from hydrogel (rather than coating) forms of KOS, we have demonstrated that the rate of drug release correlates with the degradation rate of the hydrogel material [30]. We note that this degradation process does not refer to the breaking of peptide (amide) bonds in the keratin, but rather the dissolution of the keratin hydrogels. This correlation between drug release and KOS dissolution (or degradation) suggested an interaction between keratin and the drug. In the case of ciprofloxacin, this was found to be associated with electrostatic interactions. While the physiochemical characteristics of paclitaxel are different than ciprofloxacin, such interactions (or others, such as hydrophobic interactions) could be in play and are an area for further study. \nThis previous finding of an interaction between KOS and small molecule drugs is noteworthy due to the findings of paclitaxel retention in the vessel at 3 days with vascular motion compared to DCB (Figure 2) and the presence of (fluorescently labeled) KOS on the vascular walls (Figure 4). That is, it is possible that paclitaxel remains associated with the KOS in a manner not possible with other synthetic polymers (e.g., PEG) or other (e.g., urea) excipients due to the properties of keratin. In particular, KOS has been shown to contain RGD and other integrin-binding sequences which may allow it to bind to the vascular cells [37,38]. Thus, KOS may have a unique ability to associate with the lumen through integrin-binding with vascular cells while simultaneously retaining the paclitaxel through electrostatic or hydrophobic interactions.\nWhile it is well-recognized that arterial repair after balloon injury occurs more rapidly in animals than in humans, animal models still hold a predictive value for the observation of biological effects that may be associated with drug delivery [39]. In this study, histopathologic evaluation of the KOS–paclitaxel DCB, along with uncoated balloons, KOS-only balloons and paclitaxel-only DCBs were performed in a rabbit ilio–femoral injury model, which has been shown to be an appropriate model for the evaluation of endovascular devices [40,41,42,43]. Overall, the morphometric results demonstrated minimal neointimal growth, as percent area stenoses were less than eleven percent for all groups at the 7-day time point. These results were expected as, in general, peripheral rabbit arteries appear to be resistant to the development of aggressive neointimal growth with mild balloon to artery ratio (1.1–1.2:1), especially with plain balloon angioplasty [39,43]. Furthermore, as expected, injury scores were mild, ranging from 0.50 to 1.13 in all groups. However, by histologic evaluation, the safety and effectiveness of the KOS–paclitaxel DCB was still evident, based on vascular remodeling and healing. Specifically, neointimal thickness was significantly reduced in the KOS–paclitaxel DCB treatment group (no coating: 0.10 ± 0.011 mm vs. KOS-only: 0.069 ± 0.022 mm vs. PXL-only: 0.066 ± 0.018 mm vs. KOS-PXL: 0.53 ± 0.003 mm, p = 0.005). Importantly, the endothelization score was significantly reduced in the KOS–paclitaxel treated arteries, indicative of drug retention (Table 1). 
Additionally, there was a trend towards a lower neointimal area and higher loss of smooth muscle cells (SMCs) in the KOS–paclitaxel DCB group as compared to all others, indicative of drug effect (no coating: 0.00 ± 0.00 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.50 vs. KOS-PXL: 1.00 ± 0.82, p = 0.081). Overall, the in vivo data demonstrate the safety of the keratose coating and a reduction in neointimal growth by the keratose–paclitaxel DCB. \nWhile our results support the concept of a keratose coating to deliver anti-proliferative drugs to arterial segments, the study was limited to a healthy animal model and thus did not take into consideration diseased arteries, as observed in patients with PAD. For the ex vivo studies, further characterization of paclitaxel delivery via the drug-coated balloon is warranted to quantify the amount of drug remaining on the balloon following delivery and to quantify circulating paclitaxel levels. We also recognize that human lesions are more complex and often include fibrosis, calcification, hemorrhage and, in most cases, require de-bulking using balloons and atherectomy devices, which may alter drug transfer and retention. While preclinical studies involving healthy arteries are the standard model to determine the arterial time drug concentration of cardiac and stent-based intervention devices, further improvement may be found with a KOS-paclitaxel coating in injury models.", " 4.1. Keratose-Coated Balloons The KOS-coated balloons were prepared as previously described [44,45]. Briefly, paclitaxel (LC Laboratories, Woburn, MA, USA) was prepared by dissolving paclitaxel in absolute ethanol followed by sonication at a final concentration of 40 mg/mL. Keratose (KeraNetics LLC, Winston-Salem, NC, USA) solution was prepared by dissolving lyophilized keratose in iohexol (GE Healthcare, Little Chalfont, UK) at a 6% weight-to-volume ratio. An in-house air spray coating method was used to deposit keratose and paclitaxel in a layered approach on uncoated angioplasty balloons (Abbott Vascular, Abbott Park, IL, USA) [45]. Coated balloons were then sterilized by UV irradiation. \nThe KOS-coated balloons were prepared as previously described [44,45]. Briefly, paclitaxel (LC Laboratories, Woburn, MA, USA) was prepared by dissolving paclitaxel in absolute ethanol followed by sonication at a final concentration of 40 mg/mL. Keratose (KeraNetics LLC, Winston-Salem, NC, USA) solution was prepared by dissolving lyophilized keratose in iohexol (GE Healthcare, Little Chalfont, UK) at a 6% weight-to-volume ratio. An in-house air spray coating method was used to deposit keratose and paclitaxel in a layered approach on uncoated angioplasty balloons (Abbott Vascular, Abbott Park, IL, USA) [45]. Coated balloons were then sterilized by UV irradiation. \n 4.2. Peripheral-Simulating Bioreactor The peripheral-simulating bioreactor was designed to shorten and twist two harvested porcine carotid arteries subjected to pulsatile flow conditions (Figure 1A). The overall system (46 × 19 × 19 cm) was designed to fit inside a standard CO2 incubator by arranging the arteries in a parallel configuration. The system utilizes one stepper motor per artery for rotational motion and one stepper motor for the translational motion of both arteries. Custom connectors were machined to mount the arteries to the stepper motors. The motion of the stepper motors was measured using rotary encoders (CUI AMT11, Tualatin, OR, USA) mounted on the shaft of each motor. 
An Arduino microcontroller with two motor shields was used to control the motors along with an LCD keypad module to provide an intuitive user experience, displaying time and cycles remaining for each test and providing physical inputs to start, stop, input artery length and the duration of testing. \nThe carotid arteries, positioned within the vascular-simulating bioreactor, were harvested from large pigs (250–350 lbs.) from a local abattoir and transferred in sterile PBS with 1% antibiotic-antimitotic (Gibco, Grand Island, NY, USA). The arteries were then rinsed in sterile PBS in a culture hood and trimmed. Eight-cm-long segments were cut and tied with sutures onto fittings within the ex vivo setup. The circulating medium consisted of the system made up of Dulbecco’s modified eagle’s medium containing 10% fetal bovine serum and 1% antibiotic–antimycotic. \nThe peripheral-simulating bioreactor was designed to shorten and twist two harvest porcine carotid arteries subjected to pulsatile flow conditions (Figure 1A). The overall system (46 × 19 × 19 cm) was designed to fit inside of a standard CO2 incubator by arranging the arteries in a parallel configuration. The system utilizes one stepper motor per artery for rotational motion and one stepper motor for the translational motion of both arteries. Custom connectors were machined to mount the arteries to the stepper motors. The motion of the stepper motors was measured using rotary encoders (CUI AMT11, Tualatin, OR, USA) mounted on the shaft of each motor. An Arduino microcontroller with two motor shields was used to control the motors along with an LCD keypad module to provide an intuitive user experience, displaying time and cycles remaining for each test and providing physical inputs to start, stop, input artery length and the duration of testing. \nThe carotid arteries, positioned within the vascular-simulating bioreactor, were harvested from large pigs (250–350 lbs.) from a local abattoir and transferred in sterile PBS with 1% antibiotic-antimitotic (Gibco, Grand Island, NY, USA). The arteries were then rinsed in sterile PBS in a culture hood and trimmed. Eight-cm-long segments were cut and tied with sutures onto fittings within the ex vivo setup. The circulating medium consisted of the system made up of Dulbecco’s modified eagle’s medium containing 10% fetal bovine serum and 1% antibiotic–antimycotic. \n 4.3. Ex Vivo DCB Testing and Arterial Time Drug Concentration Prior to any vascular motion (twisting and shortening), all arteries were subjected to pulsatile flow for 1 h, as defined by a custom LabVIEW program as previously described [42] The pressure was monitored via a pressure catheter transducer (Millar Instruments, Houston, TX). Flow was monitored by an ultrasonic flow meter. Following this pre-conditioning phase, the vessel diameter was measured by ultrasound (Figure 1F,G). Harvested arteries were then treated by either the KOS–paclitaxel-coated balloon or a commercially available DCB (In.PACT Admiral DCB, Medtronic, Santa Rosa, CA, USA). The delivery pressure of the DCB was determined by the manufacturers’ specification at a 10–20% overstretch. At timepoints of one hour and three-days, flow was ceased, and the treated portion of the vessel was removed. Excised vessels were flash frozen, stored at −80 °C and shipped on dry ice to iC42 Clinical Research and Development (Aurora, CO, USA) for the quantification of arterial paclitaxel. 
Quantification of arterial paclitaxel levels was performed using a validated high-performance liquid chromatography (HPLC)-electrospray ionization- tandem mass spectrometry assay (LC-MS/MS) [44,45,46]. In brief, the LC-MS/MS system was a series 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA) linked to a Sciex 5000 triple-stage quadrupole mass spectrometer (MS/MS, Sciex, Concord, ON, USA) via a turbo-flow electrospray ionization source. The artery tissue samples were homogenized using an electric wand homogenizer (VWR 200, VWR International, Radnor, PA, USA) after the addition of 1 mL of phosphate buffer. Eight hundred (800) μL of 0.2 M ZnSO4 30% water/70% methanol v/v protein precipitation solution containing the internal standard (paclitaxel-D5, 10 ng/mL) was added. Samples were vortexed for 5 min, centrifuged (16,000, 4 °C, 15 min) and transferred into glass HPLC vials. Study samples were diluted as necessary for detector signals to fall within the dynamic MS/MS detector range. One hundred (100) μL of the samples was injected onto a 4.6 × 12.5 mm extraction column (Eclipse XDB C8, 5 μm particle size, Agilent Technologies, Palo Alto, CA, USA). Samples were washed with a mobile phase of 15% methanol and 85% 0.1% formic acid using a flow of 3 mL/min. The temperature for the extraction column was 65 °C. After 1 min, the switching valve was activated and the analytes were eluted in the backflush mode from the extraction column onto a 150 × 4.6 mm analytical column (Zorbax XDB C8, 3.5 µm particle size, Agilent). The analytes were eluted from the analytical column using a gradient of methanol/acetonitrile (1/1 v/v) plus 0.1% formic acid (solvent B) and 0.1% formic acid in HPLC grade water (solvent A). The MS/MS was run in the positive multi-reaction mode and the following ion transitions were monitored: m/z = 876.6 [M + Na]+ → 308.2 (paclitaxel) and m/z = 881.6 [M + Na]+ → 313.1 (the internal standard paclitaxel-D5). Paclitaxel tissue concentrations were calculated based on paclitaxel/paclitaxel-D5 peak area ratios using a quadratic regression equation with 1/x weighting. The range of reliable response was 0.5–100 ng/mL tissue homogenate. Inter-day imprecision was less than 15% and accuracy was within 85–115% of the nominal concentrations. There were no significant matrix interferences, carry-over or matrix effects. For more details, please see the aforementioned publications [42,44,45].\nPrior to any vascular motion (twisting and shortening), all arteries were subjected to pulsatile flow for 1 h, as defined by a custom LabVIEW program as previously described [42] The pressure was monitored via a pressure catheter transducer (Millar Instruments, Houston, TX). Flow was monitored by an ultrasonic flow meter. Following this pre-conditioning phase, the vessel diameter was measured by ultrasound (Figure 1F,G). Harvested arteries were then treated by either the KOS–paclitaxel-coated balloon or a commercially available DCB (In.PACT Admiral DCB, Medtronic, Santa Rosa, CA, USA). The delivery pressure of the DCB was determined by the manufacturers’ specification at a 10–20% overstretch. At timepoints of one hour and three-days, flow was ceased, and the treated portion of the vessel was removed. Excised vessels were flash frozen, stored at −80 °C and shipped on dry ice to iC42 Clinical Research and Development (Aurora, CO, USA) for the quantification of arterial paclitaxel. 
Quantification of arterial paclitaxel levels was performed using a validated high-performance liquid chromatography (HPLC)-electrospray ionization- tandem mass spectrometry assay (LC-MS/MS) [44,45,46]. In brief, the LC-MS/MS system was a series 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA) linked to a Sciex 5000 triple-stage quadrupole mass spectrometer (MS/MS, Sciex, Concord, ON, USA) via a turbo-flow electrospray ionization source. The artery tissue samples were homogenized using an electric wand homogenizer (VWR 200, VWR International, Radnor, PA, USA) after the addition of 1 mL of phosphate buffer. Eight hundred (800) μL of 0.2 M ZnSO4 30% water/70% methanol v/v protein precipitation solution containing the internal standard (paclitaxel-D5, 10 ng/mL) was added. Samples were vortexed for 5 min, centrifuged (16,000, 4 °C, 15 min) and transferred into glass HPLC vials. Study samples were diluted as necessary for detector signals to fall within the dynamic MS/MS detector range. One hundred (100) μL of the samples was injected onto a 4.6 × 12.5 mm extraction column (Eclipse XDB C8, 5 μm particle size, Agilent Technologies, Palo Alto, CA, USA). Samples were washed with a mobile phase of 15% methanol and 85% 0.1% formic acid using a flow of 3 mL/min. The temperature for the extraction column was 65 °C. After 1 min, the switching valve was activated and the analytes were eluted in the backflush mode from the extraction column onto a 150 × 4.6 mm analytical column (Zorbax XDB C8, 3.5 µm particle size, Agilent). The analytes were eluted from the analytical column using a gradient of methanol/acetonitrile (1/1 v/v) plus 0.1% formic acid (solvent B) and 0.1% formic acid in HPLC grade water (solvent A). The MS/MS was run in the positive multi-reaction mode and the following ion transitions were monitored: m/z = 876.6 [M + Na]+ → 308.2 (paclitaxel) and m/z = 881.6 [M + Na]+ → 313.1 (the internal standard paclitaxel-D5). Paclitaxel tissue concentrations were calculated based on paclitaxel/paclitaxel-D5 peak area ratios using a quadratic regression equation with 1/x weighting. The range of reliable response was 0.5–100 ng/mL tissue homogenate. Inter-day imprecision was less than 15% and accuracy was within 85–115% of the nominal concentrations. There were no significant matrix interferences, carry-over or matrix effects. For more details, please see the aforementioned publications [42,44,45].\n 4.4. Rabbit Injury Model This study was approved by the Institutional Animal Care and Use Committee and conformed to the position of the American Heart Association on use of animals in research. The experimental preparation of the animal model has been previously reported [42,46]. Under fluoroscopic guidance, eight anesthetized adult male New Zealand White rabbits underwent endothelial denudation of both iliac arteries using an angioplasty balloon catheter (3.0 × 8 mm). Subsequently, arteries were treated by either KOS–paclitaxel (3.0 × 15 mm), KOS-only balloon (3.0 × 15 mm), paclitaxel-only balloon (3.0 × 15 mm), or an uncoated balloon (3.0 × 15 mm) at a delivery pressure of 8 atm for two minutes. Anti-platelet therapy consisted of aspirin (40 mg/day) given orally 24 h before catheterization, with continued dosing throughout the in-life-phase of the study, while single-dose intra-arterial heparin (150 IU/kg) and lidocaine were administered at the time of catheterization. The animals survived for 7 days and subsequent histological evaluations were performed. 
\nThis study was approved by the Institutional Animal Care and Use Committee and conformed to the position of the American Heart Association on use of animals in research. The experimental preparation of the animal model has been previously reported [42,46]. Under fluoroscopic guidance, eight anesthetized adult male New Zealand White rabbits underwent endothelial denudation of both iliac arteries using an angioplasty balloon catheter (3.0 × 8 mm). Subsequently, arteries were treated by either KOS–paclitaxel (3.0 × 15 mm), KOS-only balloon (3.0 × 15 mm), paclitaxel-only balloon (3.0 × 15 mm), or an uncoated balloon (3.0 × 15 mm) at a delivery pressure of 8 atm for two minutes. Anti-platelet therapy consisted of aspirin (40 mg/day) given orally 24 h before catheterization, with continued dosing throughout the in-life-phase of the study, while single-dose intra-arterial heparin (150 IU/kg) and lidocaine were administered at the time of catheterization. The animals survived for 7 days and subsequent histological evaluations were performed. \n 4.5. Arterial Sections Following the duration of the study, animals were anesthetized and euthanized, and the treated artery segments were removed based on landmarks identified by angiography. The arteries were perfused with saline and formalin-fixed under physiological pressure prior to removal. The segments were stored in 10% formalin at room temperature and then processed to paraffin blocks, sectioned, and stained with Hematoxylin and Eosin (H&E) or Verhoeff’s elastin stain (VEG).\nFollowing the duration of the study, animals were anesthetized and euthanized, and the treated artery segments were removed based on landmarks identified by angiography. The arteries were perfused with saline and formalin-fixed under physiological pressure prior to removal. The segments were stored in 10% formalin at room temperature and then processed to paraffin blocks, sectioned, and stained with Hematoxylin and Eosin (H&E) or Verhoeff’s elastin stain (VEG).\n 4.6. Histomorphometric Analysis Histological sections were digitized and measurements performed using ImageJ software (NIH). Cross-sectional area measurements included the external elastic lamina (EEL), internal elastic lamina (IEL), and lumen area of each section. Using these measurements, the medial area, neointimal area and percent area stenosis were calculated as previously described [46,47]. \nMorphological analysis was performed by light microscopy using a grading criterion as previously published [46,47]. The parameters assessed included intimal healing as judged by injury, endothelial cell loss and inflammation. The medial wall was also assessed for drug-induced biological effect, specifically looking at smooth muscle cell loss. These parameters were semi-quantified using a scoring a system of 0 (none), 1 (minimal), 2 (mild), 3 (moderate) and 4 (severe) as previously described [47].\nHistological sections were digitized and measurements performed using ImageJ software (NIH). Cross-sectional area measurements included the external elastic lamina (EEL), internal elastic lamina (IEL), and lumen area of each section. Using these measurements, the medial area, neointimal area and percent area stenosis were calculated as previously described [46,47]. \nMorphological analysis was performed by light microscopy using a grading criterion as previously published [46,47]. The parameters assessed included intimal healing as judged by injury, endothelial cell loss and inflammation. 
\n 4.7. Statistical Analysis Results are reported as mean ± standard deviation. Data were compared with analysis of variance (ANOVA) using GraphPad Prism 7 (GraphPad Software, La Jolla, CA, USA). The comparison of quantitative data of multiple groups was performed by Tukey’s multiple comparisons post hoc test. Significance is reported as p < 0.05. ", "The KOS-coated balloons were prepared as previously described [44,45]. Briefly, paclitaxel (LC Laboratories, Woburn, MA, USA) was dissolved in absolute ethanol and sonicated to a final concentration of 40 mg/mL. Keratose (KeraNetics LLC, Winston-Salem, NC, USA) solution was prepared by dissolving lyophilized keratose in iohexol (GE Healthcare, Little Chalfont, UK) at a 6% weight-to-volume ratio. An in-house air spray coating method was used to deposit keratose and paclitaxel in a layered approach on uncoated angioplasty balloons (Abbott Vascular, Abbott Park, IL, USA) [45]. Coated balloons were then sterilized by UV irradiation. ", "The peripheral-simulating bioreactor was designed to shorten and twist two harvested porcine carotid arteries subjected to pulsatile flow conditions (Figure 1A). The overall system (46 × 19 × 19 cm) was designed to fit inside of a standard CO2 incubator by arranging the arteries in a parallel configuration. The system utilizes one stepper motor per artery for rotational motion and one stepper motor for the translational motion of both arteries. Custom connectors were machined to mount the arteries to the stepper motors. The motion of the stepper motors was measured using rotary encoders (CUI AMT11, Tualatin, OR, USA) mounted on the shaft of each motor. An Arduino microcontroller with two motor shields was used to control the motors along with an LCD keypad module to provide an intuitive user experience, displaying time and cycles remaining for each test and providing physical inputs to start, stop, input artery length and the duration of testing. \nThe carotid arteries, positioned within the vascular-simulating bioreactor, were harvested from large pigs (250–350 lbs.) from a local abattoir and transferred in sterile PBS with 1% antibiotic–antimycotic (Gibco, Grand Island, NY, USA). The arteries were then rinsed in sterile PBS in a culture hood and trimmed. Eight-cm-long segments were cut and tied with sutures onto fittings within the ex vivo setup. The circulating medium consisted of Dulbecco’s Modified Eagle’s Medium containing 10% fetal bovine serum and 1% antibiotic–antimycotic. ", "Prior to any vascular motion (twisting and shortening), all arteries were subjected to pulsatile flow for 1 h, as defined by a custom LabVIEW program as previously described [42]. The pressure was monitored via a pressure catheter transducer (Millar Instruments, Houston, TX, USA). Flow was monitored by an ultrasonic flow meter. 
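The pre-conditioning flow loop above was driven by a custom LabVIEW program; as a language-agnostic illustration, the Python sketch below generates a simple sinusoidal pressure setpoint of the kind such a controller might track, using the 70–120 mmHg, 60 beats-per-minute envelope reported in the Results. The waveform shape and the 100 Hz update rate are assumptions for illustration only, not the actual control code.

```python
# Hedged sketch: a sinusoidal pulsatile pressure setpoint (not the actual LabVIEW controller).
# The 70-120 mmHg / 60 bpm envelope is taken from the reported test conditions;
# the waveform shape and 100 Hz update rate are assumptions.
import numpy as np

def pressure_setpoint(t: np.ndarray, p_dia: float = 70.0, p_sys: float = 120.0, bpm: float = 60.0) -> np.ndarray:
    """Return a pressure setpoint (mmHg) oscillating between diastolic and systolic values."""
    mean_p = 0.5 * (p_sys + p_dia)
    amp = 0.5 * (p_sys - p_dia)
    freq_hz = bpm / 60.0
    return mean_p + amp * np.sin(2.0 * np.pi * freq_hz * t)

t = np.arange(0.0, 5.0, 0.01)          # 5 s of setpoints at 100 Hz
setpoints = pressure_setpoint(t)
print(f"min {setpoints.min():.1f} mmHg, max {setpoints.max():.1f} mmHg")
```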
Following this pre-conditioning phase, the vessel diameter was measured by ultrasound (Figure 1F,G). Harvested arteries were then treated by either the KOS–paclitaxel-coated balloon or a commercially available DCB (In.PACT Admiral DCB, Medtronic, Santa Rosa, CA, USA). The delivery pressure of the DCB was determined by the manufacturer’s specification at a 10–20% overstretch. At timepoints of one hour and three days, flow was ceased, and the treated portion of the vessel was removed. Excised vessels were flash frozen, stored at −80 °C and shipped on dry ice to iC42 Clinical Research and Development (Aurora, CO, USA) for the quantification of arterial paclitaxel. Quantification of arterial paclitaxel levels was performed using a validated high-performance liquid chromatography (HPLC)–electrospray ionization–tandem mass spectrometry (LC-MS/MS) assay [44,45,46]. In brief, the LC-MS/MS system was a series 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA) linked to a Sciex 5000 triple-stage quadrupole mass spectrometer (MS/MS, Sciex, Concord, ON, USA) via a turbo-flow electrospray ionization source. The artery tissue samples were homogenized using an electric wand homogenizer (VWR 200, VWR International, Radnor, PA, USA) after the addition of 1 mL of phosphate buffer. Eight hundred (800) μL of 0.2 M ZnSO4 in 30% water/70% methanol (v/v) protein precipitation solution containing the internal standard (paclitaxel-D5, 10 ng/mL) was added. Samples were vortexed for 5 min, centrifuged (16,000, 4 °C, 15 min) and transferred into glass HPLC vials. Study samples were diluted as necessary for detector signals to fall within the dynamic MS/MS detector range. One hundred (100) μL of the samples was injected onto a 4.6 × 12.5 mm extraction column (Eclipse XDB C8, 5 μm particle size, Agilent Technologies, Palo Alto, CA, USA). Samples were washed with a mobile phase of 15% methanol and 85% 0.1% formic acid using a flow of 3 mL/min. The temperature for the extraction column was 65 °C. After 1 min, the switching valve was activated and the analytes were eluted in the backflush mode from the extraction column onto a 150 × 4.6 mm analytical column (Zorbax XDB C8, 3.5 µm particle size, Agilent). The analytes were eluted from the analytical column using a gradient of methanol/acetonitrile (1/1 v/v) plus 0.1% formic acid (solvent B) and 0.1% formic acid in HPLC grade water (solvent A). The MS/MS was run in the positive multiple reaction monitoring (MRM) mode and the following ion transitions were monitored: m/z = 876.6 [M + Na]+ → 308.2 (paclitaxel) and m/z = 881.6 [M + Na]+ → 313.1 (the internal standard paclitaxel-D5). Paclitaxel tissue concentrations were calculated based on paclitaxel/paclitaxel-D5 peak area ratios using a quadratic regression equation with 1/x weighting. The range of reliable response was 0.5–100 ng/mL tissue homogenate. Inter-day imprecision was less than 15% and accuracy was within 85–115% of the nominal concentrations. There were no significant matrix interferences, carry-over or matrix effects. For more details, please see the aforementioned publications [42,44,45].", "This study was approved by the Institutional Animal Care and Use Committee and conformed to the position of the American Heart Association on use of animals in research. The experimental preparation of the animal model has been previously reported [42,46]. 
Under fluoroscopic guidance, eight anesthetized adult male New Zealand White rabbits underwent endothelial denudation of both iliac arteries using an angioplasty balloon catheter (3.0 × 8 mm). Subsequently, arteries were treated by either KOS–paclitaxel (3.0 × 15 mm), KOS-only balloon (3.0 × 15 mm), paclitaxel-only balloon (3.0 × 15 mm), or an uncoated balloon (3.0 × 15 mm) at a delivery pressure of 8 atm for two minutes. Anti-platelet therapy consisted of aspirin (40 mg/day) given orally 24 h before catheterization, with continued dosing throughout the in-life phase of the study, while single-dose intra-arterial heparin (150 IU/kg) and lidocaine were administered at the time of catheterization. The animals survived for 7 days and subsequent histological evaluations were performed. ", "Following the duration of the study, animals were anesthetized and euthanized, and the treated artery segments were removed based on landmarks identified by angiography. The arteries were perfused with saline and formalin-fixed under physiological pressure prior to removal. The segments were stored in 10% formalin at room temperature and then processed to paraffin blocks, sectioned, and stained with Hematoxylin and Eosin (H&E) or Verhoeff’s elastin stain (VEG).", "Histological sections were digitized and measurements performed using ImageJ software (NIH). Cross-sectional area measurements included the external elastic lamina (EEL), internal elastic lamina (IEL), and lumen area of each section. Using these measurements, the medial area, neointimal area and percent area stenosis were calculated as previously described [46,47]. \nMorphological analysis was performed by light microscopy using grading criteria as previously published [46,47]. The parameters assessed included intimal healing as judged by injury, endothelial cell loss and inflammation. The medial wall was also assessed for drug-induced biological effect, specifically looking at smooth muscle cell loss. These parameters were semi-quantified using a scoring system of 0 (none), 1 (minimal), 2 (mild), 3 (moderate) and 4 (severe) as previously described [47].", "Results are reported as mean ± standard deviation. Data were compared with analysis of variance (ANOVA) using GraphPad Prism 7 (GraphPad Software, La Jolla, CA, USA). The comparison of quantitative data of multiple groups was performed by Tukey’s multiple comparisons post hoc test. Significance is reported as p < 0.05. ", "This study provides evidence of the use of keratose as an excipient for peripheral applications. The ex vivo results showed a potential benefit of the coating in minimizing the adverse impact of vascular motion on drug mobility, as well as a favorable biological response in the pre-clinical model. Additional studies are warranted to further demonstrate the safety and efficacy profile of the keratose coating in larger animal models and over longer durations. Overall, this approach has the potential to improve interventional outcomes and quality of life of millions of patients suffering from PAD." ]
[ "intro", "results", null, null, "discussion", null, null, null, null, null, null, null, null, "conclusions" ]
[ "Keratose", "drug-coated balloon", "paclitaxel", "drug delivery", "pre-clinical", "peripheral arterial disease", "endovascular" ]
1. Introduction: Drug-coated balloons (DCBs) represent a new therapeutic approach to treat peripheral arterial disease (PAD) [1,2,3,4,5]. In the United States, PAD affects more than eight million people, with an annual cost of roughly $21 billion [6]. Traditionally, endovascular treatment of PAD has been performed by balloon angioplasty or the placement of a permanent metallic stent [7,8]. However, results are poor, with 50–85% of patients developing hemodynamically significant restenosis (re-occlusion), and 16–65% developing occlusions within 2 years post-treatment [9,10]. The use of anti-proliferative drugs in combination with bare metal stents, i.e., drug-eluting stents (DES), was a major breakthrough and highly successful in treating coronary artery disease [11,12]. However, stents have shown very poor clinical outcomes in treating PAD, as they are subjected to biomechanical stress and severe artery deformation (twisting, bending, and shortening), leading to high fracture rates (up to 68%) and restenosis [13]. DCBs, which were FDA-approved for the treatment of PAD in late 2014, provide a new therapeutic approach for interventionalists to practice a ‘leave nothing behind’ procedure, preserving future treatment options. DCBs are angioplasty balloons directly coated with an anti-proliferative therapeutic drug and an excipient (drug carrier) [1,14,15,16,17,18]. The excipient enhances the adhesion of the drug to the balloon surface, increases the stability of the drug coating during handling and delivery, and maximizes drug retention in the targeted arterial segment [18,19,20,21,22,23,24]. Current DCB excipients include polysorbate and sorbitol, urea, polyethylene glycol (PEG) and butyryl-tri-hexyl citrate (BTHC). The rationale for the selection of these various excipients varies. For example, excipients such as polysorbate and PEG are known cosolvents of paclitaxel [25,26], which can alter the vessel interaction of the drug with the DCB device. Conversely, urea acts to increase paclitaxel release at the lesion [18] and PEG has been shown to bind to hydroxylapatite, a primary component of calcified atherosclerotic lesions [17,19,23,24], thereby improving local pharmacodynamics. However, more recent pre-clinical studies have demonstrated the potential of DCB excipients to embolize and travel downstream to distal tissue post-treatment [27,28]. As peripheral arteries undergo severe mechanical deformation, excipients should aid in maintaining drug residency on the luminal surface of the artery, in particular during the early phase after delivery, prior to the buildup of tissue. Therefore, novel excipients that are capable of maintaining drug residency while minimizing downstream or off-target effects are needed. Keratins are a class of proteins that can be derived from numerous sources, including human hair. Keratins have been shown to achieve the sustained release of small-molecule drugs and growth factors [29,30]. Further, keratin films have been reported for use in vascular grafts to reduce thrombosis, suggesting their utility in cardiovascular applications [31]. The goal of this study was thus to examine the ability to use an oxidized form of keratin (known as keratose (KOS)) as a new drug carrier excipient to aid in the delivery and retention of the anti-proliferative drug, paclitaxel. Specifically, the mobility, retention and biological impact of a KOS–paclitaxel-coated DCB were determined using ex vivo and in vivo models. 2. Results: 2.1. 
Ex Vivo Results The non-coated angioplasty balloons were successfully coated with the KOS–paclitaxel mixture (Figure 1D,E). To determine the impact of vascular deformation on DCB retention, both the KOS–paclitaxel DCB and the commercially available DCB were tested under physiological pulsatile conditions alone (no twisting or shortening) and under physiological pulsatile conditions with vascular deformation. The pulsatile flow conditions consisted of pressures ranging from 70 to 120 mmHg with a mean flow rate of 120 mL/min at 60 beats per minute. The vascular deformation conditions consisted of the artery shortening 10% in the axial direction and twisting of the artery at 15°/cm. The frequency of the artery twisting and shortening was 0.05 Hz (3 cycles/min). All DCBs were inserted through a 6 Fr sheath into the closed-circulatory system under the physiological pulsatile conditions. The treated sections of the artery were marked during inflation of the DCBs. It is noted that vascular deformation (twisting and shortening) was initiated after 4 h of physiological pulsatile flow following DCB deployment. At timepoints of 1 h and 3 days, the treated sections of the arteries were removed and analyzed for arterial tissue paclitaxel concentration (Figure 2). There was a reduction in arterial paclitaxel levels from 1 h to 3 days post-treatment for both the KOS–paclitaxel and the commercially available DCB under physiological pulsatile conditions (3 days—pulse only: KOS-PXL: 17.56 ± 7.19 ng/mg vs. commercial DCB: 24.44 ± 27.03 ng/mg, p = 0.96, Table 1). However, under pulsatile and vascular deformation conditions, paclitaxel was retained within the treated artery to a significantly greater extent with the KOS coating (3 days—pulse and vascular deformation: KOS-PXL: 14.89 ± 4.12 ng/mg vs. commercial DCB: 0.60 ± 0.26 ng/mg, p = 0.018).
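For readers reproducing the motion protocol, the sketch below converts the deformation parameters stated above (10% axial shortening and 15°/cm twist cycled at 0.05 Hz) into time-varying displacement and rotation setpoints for an 8 cm mounted segment. The sinusoidal profile is an assumption for illustration; the actual bioreactor used Arduino-driven stepper motors with its own firmware.

```python
# Hedged sketch: deformation setpoints for the ex vivo protocol (sinusoidal profile assumed).
import numpy as np

SEGMENT_LENGTH_CM = 8.0     # mounted artery length reported in the Methods
SHORTENING_FRACTION = 0.10  # 10% axial shortening
TWIST_DEG_PER_CM = 15.0     # 15 degrees per cm of artery
FREQ_HZ = 0.05              # 3 cycles/min

def setpoints(t: np.ndarray):
    """Return (axial displacement in cm, rotation in degrees) at times t (seconds)."""
    phase = 0.5 * (1.0 - np.cos(2.0 * np.pi * FREQ_HZ * t))   # 0 -> 1 -> 0 each cycle
    displacement_cm = SHORTENING_FRACTION * SEGMENT_LENGTH_CM * phase
    rotation_deg = TWIST_DEG_PER_CM * SEGMENT_LENGTH_CM * phase
    return displacement_cm, rotation_deg

t = np.linspace(0.0, 1.0 / FREQ_HZ, 201)   # one 20 s deformation cycle
d, r = setpoints(t)
print(f"peak shortening: {d.max():.2f} cm, peak twist: {r.max():.0f} degrees over {SEGMENT_LENGTH_CM:.0f} cm")
```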
2.2. Histomorphometric Results Following ex vivo studies, in vivo studies were performed using the rabbit ilio–femoral injury model to determine the impact of the KOS excipients on vascular remodeling. The animals were treated with a KOS–paclitaxel balloon (n = 4), KOS-only balloon (n = 4), paclitaxel-only balloon (n = 4) or an uncoated balloon (n = 4). All arteries were treated successfully without any signs of dissection or thrombosis, and all animals survived the duration of the study. At 7 days, morphometric analysis demonstrated similar area measurements, including the EEL, IEL, lumen and media, for all treatment groups (Table 2). Neointimal thickness was significantly different between the varying groups (no coating: 0.10 ± 0.011 mm vs. KOS-only: 0.069 ± 0.022 mm vs. PXL-only: 0.066 ± 0.018 mm vs. KOS-PXL: 0.53 ± 0.003 mm, p = 0.005, Figure 3). Although percent area stenosis was lowest in the KOS-PXL group, differences between the groups were non-significant (no coating: 10.88% ± 4.52% vs. KOS-only: 9.99% ± 3.78% vs. PXL-only: 7.92% ± 3.84% vs. KOS-PXL: 6.80% ± 2.74%, p = 0.45). Histological analysis demonstrated minimal injury for all groups at 7 days with the greatest endothelial cell loss in the KOS–paclitaxel treated arteries (no coating: 0.25 ± 0.50 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.58 vs. KOS-PXL: 1.50 ± 0.58, p = 0.013). Inflammation was minimal for all groups and there was a trend towards greater SMC loss in the KOS–paclitaxel group (no coating: 0.00 ± 0.00 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.50 vs. KOS-PXL: 1.00 ± 0.82, p = 0.45). No aneurysmal dilatation or thrombosis was observed in any treated artery.
3. 
Discussion: This study was designed to evaluate the use of keratose as a novel excipient for peripheral applications and, specifically, to determine the feasibility of the keratose excipient to retain paclitaxel under peripheral vascular mechanical environments. This was accomplished by developing a novel vascular-simulating ex vivo flow system and testing in a clinically relevant pre-clinical model. Furthermore, the vascular biological response to the keratose excipient was also investigated in the pre-clinical model. The ex vivo arterial drug concentration results demonstrated that keratose significantly improves the retention of paclitaxel as compared to a commercially available DCB. Histomorphometric results of rabbit arteries treated by keratose demonstrated the safety and efficacy of the excipient in the delivery of paclitaxel. Overall, these results demonstrate the potential of keratose as a DCB excipient for peripheral applications. Drug-coated balloons are the next-generation treatment for PAD. Approved in the US since late 2014, DCBs represented a shift in the approach to treating peripheral artery disease. While the DES provides a scaffold for long-term drug release, DCBs are limited in the time they can interact with the target lesion (~30 s to 2 min). Therefore, a major goal of any excipient is to support the retention of the therapeutic agent at the arterial wall surface, even under vascular deformation conditions. In two recent studies, the embolization of released particulates from all currently FDA-approved DCB coatings was investigated [27,28]. Twenty-eight days post-delivery, their results demonstrated evidence of distal embolization, including embolic crystalline material, in downstream tissue. Remarkably, pharmacokinetic analysis of the distal tissue showed similar or higher levels of paclitaxel concentration as compared to the arterial treatment site, in particular for the IN.PACT DCB. These results indicate the mobility of the DCB coating following deployment, although, to date, no studies have directly investigated the impact of vascular deformation on DCB performance. The vascular-mimicking ex vivo system, to our knowledge, is the first system that can evaluate the acute drug-loading of arteries treated by endovascular devices under pulsatile and vascular deformation conditions using explanted pig arteries. Our testing of the KOS coating was performed under vascular deformation conditions of 10% artery shortening and 15°/cm twisting at a frequency of 0.05 Hz (3 cycles/min). These conditions were selected to replicate the human periphery motion of the femoral artery (shortening lengths of 7% and twisting at 11.5°/cm) and the popliteal–tibial artery motion (shortening of 15% and twisting at 19.9°/cm) [32,33,34,35]. The frequency of peripheral movement is approximately 0.06 Hz (5184 cycles/day, or 3.6 cycles/min), based upon the average steps per day of adults in the US [36]. Our results indicated that the KOS coating maintained paclitaxel tissue levels under physiological pulsatile and vascular motion conditions 3 days post-delivery. To further evaluate the DCB coating, we fluorescently tagged (NHS-Fluorescein, Thermo Scientific) the KOS to visualize the presence of the coating acutely (1 h) and three days post-delivery in arteries undergoing vascular deformation. The presence of the KOS was confirmed by confocal microscopy (Figure 4). The mechanism by which this process occurs is not fully elucidated in these studies. 
In drug-release experiments with small molecule drugs such as ciprofloxacin from hydrogel (rather than coating) forms of KOS, we have demonstrated that the rate of drug release correlates with the degradation rate of the hydrogel material [30]. We note that this degradation process does not refer to the breaking of peptide (amide) bonds in the keratin, but rather the dissolution of the keratin hydrogels. This correlation between drug release and KOS dissolution (or degradation) suggested an interaction between keratin and the drug. In the case of ciprofloxacin, this was found to be associated with electrostatic interactions. While the physicochemical characteristics of paclitaxel are different from those of ciprofloxacin, such interactions (or others, such as hydrophobic interactions) could be in play and are an area for further study. This previous finding of an interaction between KOS and small molecule drugs is noteworthy due to the findings of paclitaxel retention in the vessel at 3 days with vascular motion compared to the commercial DCB (Figure 2) and the presence of (fluorescently labeled) KOS on the vascular walls (Figure 4). That is, it is possible that paclitaxel remains associated with the KOS in a manner not possible with other synthetic polymers (e.g., PEG) or other (e.g., urea) excipients due to the properties of keratin. In particular, KOS has been shown to contain RGD and other integrin-binding sequences which may allow it to bind to the vascular cells [37,38]. Thus, KOS may have a unique ability to associate with the lumen through integrin-binding with vascular cells while simultaneously retaining the paclitaxel through electrostatic or hydrophobic interactions. While it is well-recognized that arterial repair after balloon injury occurs more rapidly in animals than in humans, animal models still hold predictive value for the observation of biological effects that may be associated with drug delivery [39]. In this study, histopathologic evaluation of the KOS–paclitaxel DCB, along with uncoated balloons, KOS-only balloons and paclitaxel-only DCBs, was performed in a rabbit ilio–femoral injury model, which has been shown to be an appropriate model for the evaluation of endovascular devices [40,41,42,43]. Overall, the morphometric results demonstrated minimal neointimal growth, as percent area stenoses were less than eleven percent for all groups at the 7-day time point. These results were expected as, in general, peripheral rabbit arteries appear to be resistant to the development of aggressive neointimal growth with a mild balloon-to-artery ratio (1.1–1.2:1), especially with plain balloon angioplasty [39,43]. Furthermore, as expected, injury scores were mild, ranging from 0.50 to 1.13 in all groups. However, by histologic evaluation, the safety and effectiveness of the KOS–paclitaxel DCB was still evident, based on vascular remodeling and healing. Specifically, neointimal thickness was significantly reduced in the KOS–paclitaxel DCB treatment group (no coating: 0.10 ± 0.011 mm vs. KOS-only: 0.069 ± 0.022 mm vs. PXL-only: 0.066 ± 0.018 mm vs. KOS-PXL: 0.53 ± 0.003 mm, p = 0.005). Importantly, the endothelialization score was significantly reduced in the KOS–paclitaxel treated arteries, indicative of drug retention (Table 1). Additionally, there was a trend towards a lower neointimal area and higher loss of smooth muscle cells (SMCs) in the KOS–paclitaxel DCB group as compared to all others, indicative of drug effect (no coating: 0.00 ± 0.00 vs. KOS-only: 0.00 ± 0.00 vs. PXL-only: 0.25 ± 0.50 vs. 
KOS-PXL: 1.00 ± 0.82, p = 0.081). Overall, the in vivo data demonstrate the safety of the keratose coating and a reduction in neointimal growth by the keratose–paclitaxel DCB. While our results support the concept of a keratose coating to deliver anti-proliferative drugs to arterial segments, the study was limited to a healthy animal model and thus did not take into consideration diseased arteries, as observed in patients with PAD. For the ex vivo studies, further characterization of paclitaxel delivery via the drug-coated balloon is warranted to quantify the amount of drug remaining on the balloon following delivery and to quantify circulating paclitaxel levels. We also recognize that human lesions are more complex and often include fibrosis, calcification, hemorrhage and, in most cases, require de-bulking using balloons and atherectomy devices, which may alter drug transfer and retention. While preclinical studies involving healthy arteries are the standard model to determine the arterial drug concentration over time for cardiac and stent-based intervention devices, further improvement may be found with a KOS–paclitaxel coating in injury models. 4. Materials and Methods: 4.1. Keratose-Coated Balloons The KOS-coated balloons were prepared as previously described [44,45]. Briefly, paclitaxel (LC Laboratories, Woburn, MA, USA) was dissolved in absolute ethanol and sonicated to a final concentration of 40 mg/mL. Keratose (KeraNetics LLC, Winston-Salem, NC, USA) solution was prepared by dissolving lyophilized keratose in iohexol (GE Healthcare, Little Chalfont, UK) at a 6% weight-to-volume ratio. An in-house air spray coating method was used to deposit keratose and paclitaxel in a layered approach on uncoated angioplasty balloons (Abbott Vascular, Abbott Park, IL, USA) [45]. Coated balloons were then sterilized by UV irradiation. 4.2. Peripheral-Simulating Bioreactor The peripheral-simulating bioreactor was designed to shorten and twist two harvested porcine carotid arteries subjected to pulsatile flow conditions (Figure 1A). The overall system (46 × 19 × 19 cm) was designed to fit inside of a standard CO2 incubator by arranging the arteries in a parallel configuration. The system utilizes one stepper motor per artery for rotational motion and one stepper motor for the translational motion of both arteries. Custom connectors were machined to mount the arteries to the stepper motors. The motion of the stepper motors was measured using rotary encoders (CUI AMT11, Tualatin, OR, USA) mounted on the shaft of each motor. 
An Arduino microcontroller with two motor shields was used to control the motors along with an LCD keypad module to provide an intuitive user experience, displaying time and cycles remaining for each test and providing physical inputs to start, stop, input artery length and the duration of testing. The carotid arteries, positioned within the vascular-simulating bioreactor, were harvested from large pigs (250–350 lbs.) from a local abattoir and transferred in sterile PBS with 1% antibiotic–antimycotic (Gibco, Grand Island, NY, USA). The arteries were then rinsed in sterile PBS in a culture hood and trimmed. Eight-cm-long segments were cut and tied with sutures onto fittings within the ex vivo setup. The circulating medium consisted of Dulbecco’s Modified Eagle’s Medium containing 10% fetal bovine serum and 1% antibiotic–antimycotic. 4.3. Ex Vivo DCB Testing and Arterial Time Drug Concentration Prior to any vascular motion (twisting and shortening), all arteries were subjected to pulsatile flow for 1 h, as defined by a custom LabVIEW program as previously described [42]. The pressure was monitored via a pressure catheter transducer (Millar Instruments, Houston, TX, USA). Flow was monitored by an ultrasonic flow meter. Following this pre-conditioning phase, the vessel diameter was measured by ultrasound (Figure 1F,G). Harvested arteries were then treated by either the KOS–paclitaxel-coated balloon or a commercially available DCB (In.PACT Admiral DCB, Medtronic, Santa Rosa, CA, USA). The delivery pressure of the DCB was determined by the manufacturer’s specification at a 10–20% overstretch. At timepoints of one hour and three days, flow was ceased, and the treated portion of the vessel was removed. Excised vessels were flash frozen, stored at −80 °C and shipped on dry ice to iC42 Clinical Research and Development (Aurora, CO, USA) for the quantification of arterial paclitaxel. 
Quantification of arterial paclitaxel levels was performed using a validated high-performance liquid chromatography (HPLC)–electrospray ionization–tandem mass spectrometry (LC-MS/MS) assay [44,45,46]. In brief, the LC-MS/MS system was a series 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA) linked to a Sciex 5000 triple-stage quadrupole mass spectrometer (MS/MS, Sciex, Concord, ON, USA) via a turbo-flow electrospray ionization source. The artery tissue samples were homogenized using an electric wand homogenizer (VWR 200, VWR International, Radnor, PA, USA) after the addition of 1 mL of phosphate buffer. Eight hundred (800) μL of 0.2 M ZnSO4 in 30% water/70% methanol (v/v) protein precipitation solution containing the internal standard (paclitaxel-D5, 10 ng/mL) was added. Samples were vortexed for 5 min, centrifuged (16,000, 4 °C, 15 min) and transferred into glass HPLC vials. Study samples were diluted as necessary for detector signals to fall within the dynamic MS/MS detector range. One hundred (100) μL of the samples was injected onto a 4.6 × 12.5 mm extraction column (Eclipse XDB C8, 5 μm particle size, Agilent Technologies, Palo Alto, CA, USA). Samples were washed with a mobile phase of 15% methanol and 85% 0.1% formic acid using a flow of 3 mL/min. The temperature for the extraction column was 65 °C. After 1 min, the switching valve was activated and the analytes were eluted in the backflush mode from the extraction column onto a 150 × 4.6 mm analytical column (Zorbax XDB C8, 3.5 µm particle size, Agilent). The analytes were eluted from the analytical column using a gradient of methanol/acetonitrile (1/1 v/v) plus 0.1% formic acid (solvent B) and 0.1% formic acid in HPLC grade water (solvent A). The MS/MS was run in the positive multiple reaction monitoring (MRM) mode and the following ion transitions were monitored: m/z = 876.6 [M + Na]+ → 308.2 (paclitaxel) and m/z = 881.6 [M + Na]+ → 313.1 (the internal standard paclitaxel-D5). Paclitaxel tissue concentrations were calculated based on paclitaxel/paclitaxel-D5 peak area ratios using a quadratic regression equation with 1/x weighting. The range of reliable response was 0.5–100 ng/mL tissue homogenate. Inter-day imprecision was less than 15% and accuracy was within 85–115% of the nominal concentrations. There were no significant matrix interferences, carry-over or matrix effects. For more details, please see the aforementioned publications [42,44,45]. 
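To make the back-calculation step above concrete, the sketch below shows one way peak-area ratios could be converted to concentrations with a 1/x-weighted quadratic calibration. It is a minimal illustration in Python, not the validated iC42 workflow, and the calibration points and sample ratio are hypothetical.

```python
# Minimal sketch of a 1/x-weighted quadratic calibration for LC-MS/MS data.
# Calibration values are hypothetical; they only illustrate the fit and back-calculation.
import numpy as np

# Hypothetical calibrators (ng/mL homogenate) and measured analyte/IS peak-area ratios
conc = np.array([0.5, 1, 5, 10, 25, 50, 100], dtype=float)
ratio = np.array([0.012, 0.024, 0.118, 0.232, 0.571, 1.115, 2.180], dtype=float)

# np.polyfit minimizes sum((w_i * residual_i)^2), so 1/x weighting of the squared
# residuals corresponds to w = 1/sqrt(x).
weights = 1.0 / np.sqrt(conc)
a, b, c = np.polyfit(conc, ratio, deg=2, w=weights)

def back_calculate(measured_ratio: float) -> float:
    """Solve a*x^2 + b*x + (c - ratio) = 0 and return the root within the calibrated range."""
    roots = np.roots([a, b, c - measured_ratio])
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= conc.max() * 1.2]
    if not real:
        raise ValueError("Ratio outside the reliable 0.5-100 ng/mL range; dilute and re-assay.")
    return min(real)

print(f"Fit: ratio = {a:.3e}*x^2 + {b:.3e}*x + {c:.3e}")
print(f"Back-calculated concentration for ratio 0.45: {back_calculate(0.45):.2f} ng/mL")
```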
4.4. Rabbit Injury Model This study was approved by the Institutional Animal Care and Use Committee and conformed to the position of the American Heart Association on use of animals in research. The experimental preparation of the animal model has been previously reported [42,46]. Under fluoroscopic guidance, eight anesthetized adult male New Zealand White rabbits underwent endothelial denudation of both iliac arteries using an angioplasty balloon catheter (3.0 × 8 mm). Subsequently, arteries were treated by either KOS–paclitaxel (3.0 × 15 mm), KOS-only balloon (3.0 × 15 mm), paclitaxel-only balloon (3.0 × 15 mm), or an uncoated balloon (3.0 × 15 mm) at a delivery pressure of 8 atm for two minutes. Anti-platelet therapy consisted of aspirin (40 mg/day) given orally 24 h before catheterization, with continued dosing throughout the in-life phase of the study, while single-dose intra-arterial heparin (150 IU/kg) and lidocaine were administered at the time of catheterization. The animals survived for 7 days and subsequent histological evaluations were performed. 
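To relate the device sizes above to the target vessel, the following sketch computes the balloon-to-artery ratio and nominal overstretch from a reference vessel diameter; these are the quantities behind the 10–20% overstretch rule used in the ex vivo testing (Section 4.3) and the 1.1–1.2:1 ratios noted in the Discussion. The vessel diameter in the example is hypothetical.

```python
# Hedged sketch: balloon sizing arithmetic (the reference vessel diameter is hypothetical).
def sizing(balloon_diameter_mm: float, vessel_diameter_mm: float):
    """Return (balloon-to-artery ratio, nominal percent overstretch)."""
    ratio = balloon_diameter_mm / vessel_diameter_mm
    overstretch_pct = (ratio - 1.0) * 100.0
    return ratio, overstretch_pct

# 3.0 mm balloon (as used above) against a hypothetical 2.6 mm reference vessel
ratio, over = sizing(balloon_diameter_mm=3.0, vessel_diameter_mm=2.6)
print(f"balloon-to-artery ratio {ratio:.2f}:1, nominal overstretch {over:.0f}%")
```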
4.5. Arterial Sections Following the duration of the study, animals were anesthetized and euthanized, and the treated artery segments were removed based on landmarks identified by angiography. The arteries were perfused with saline and formalin-fixed under physiological pressure prior to removal. The segments were stored in 10% formalin at room temperature and then processed to paraffin blocks, sectioned, and stained with Hematoxylin and Eosin (H&E) or Verhoeff’s elastin stain (VEG). 4.6. Histomorphometric Analysis Histological sections were digitized and measurements performed using ImageJ software (NIH). Cross-sectional area measurements included the external elastic lamina (EEL), internal elastic lamina (IEL), and lumen area of each section. Using these measurements, the medial area, neointimal area and percent area stenosis were calculated as previously described [46,47]. Morphological analysis was performed by light microscopy using grading criteria as previously published [46,47]. The parameters assessed included intimal healing as judged by injury, endothelial cell loss and inflammation. The medial wall was also assessed for drug-induced biological effect, specifically looking at smooth muscle cell loss. These parameters were semi-quantified using a scoring system of 0 (none), 1 (minimal), 2 (mild), 3 (moderate) and 4 (severe) as previously described [47]. 
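The 0–4 semi-quantitative scores above are typically summarized per treatment group as mean ± standard deviation, as in the Results tables. The sketch below illustrates that summary step with hypothetical scores, not the study data.

```python
# Hedged sketch: summarizing 0-4 semi-quantitative histology scores per treatment group.
# Scores below are hypothetical placeholders.
import statistics

scores = {
    "uncoated": [0, 0, 1, 0],
    "KOS-only": [0, 0, 0, 0],
    "PXL-only": [0, 1, 0, 0],
    "KOS-PXL":  [1, 2, 1, 2],
}

for group, vals in scores.items():
    mean = statistics.mean(vals)
    sd = statistics.stdev(vals) if len(vals) > 1 else 0.0
    print(f"{group:>9}: {mean:.2f} ± {sd:.2f} (n={len(vals)})")
```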
4.7. Statistical Analysis Results are reported as mean ± standard deviation. Data were compared with analysis of variance (ANOVA) using GraphPad Prism 7 (GraphPad Software, La Jolla, CA, USA). The comparison of quantitative data of multiple groups was performed by Tukey’s multiple comparisons post hoc test. Significance is reported as p < 0.05. 
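The study performed this analysis in GraphPad Prism 7; the sketch below shows an equivalent open-source workflow (one-way ANOVA followed by Tukey's HSD) for readers without Prism. The neointimal-thickness values are hypothetical placeholders, not the published data.

```python
# Hedged sketch: one-way ANOVA with Tukey's HSD post hoc test (open-source equivalent
# of the reported GraphPad Prism analysis). Group values are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "uncoated": [0.09, 0.11, 0.10, 0.10],
    "KOS-only": [0.05, 0.09, 0.07, 0.07],
    "PXL-only": [0.05, 0.08, 0.06, 0.07],
    "KOS-PXL":  [0.05, 0.05, 0.06, 0.05],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```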
5. Conclusions: This study provides evidence of the use of keratose as an excipient for peripheral applications. The ex vivo results showed a potential benefit of the coating in minimizing the adverse impact of vascular motion on drug mobility, as well as a favorable biological response in the pre-clinical model. Additional studies are warranted to further demonstrate the safety and efficacy profile of the keratose coating in larger animal models and over longer durations. Overall, this approach has the potential to improve interventional outcomes and quality of life of millions of patients suffering from PAD.
Background: Drug-coated balloons (DCBs), which deliver anti-proliferative drugs with the aid of excipients, have emerged as a new endovascular therapy for the treatment of peripheral arterial disease. In this study, we evaluated the use of keratose (KOS) as a novel DCB-coating excipient to deliver and retain paclitaxel. Methods: A custom coating method was developed to deposit KOS and paclitaxel on uncoated angioplasty balloons. The retention of the KOS-paclitaxel coating, in comparison to a commercially available DCB, was evaluated using a novel vascular-motion simulating ex vivo flow model at 1 h and 3 days. Additionally, the locoregional biological response of the KOS-paclitaxel coating was evaluated in a rabbit ilio-femoral injury model at 14 days. Results: The KOS coating exhibited greater retention of the paclitaxel at 3 days under pulsatile conditions with vascular motion as compared to the commercially available DCB (14.89 ± 4.12 ng/mg vs. 0.60 ± 0.26 ng/mg, p = 0.018). Histological analysis of the KOS-paclitaxel-treated arteries demonstrated a significant reduction in neointimal thickness as compared to the uncoated balloons, KOS-only balloon and paclitaxel-only balloon. Conclusions: The ability to enhance drug delivery and retention in targeted arterial segments can ultimately improve clinical peripheral endovascular outcomes.
1. Introduction: Drug-coated balloons (DCBs) represent a new therapeutic approach to treat peripheral arterial disease (PAD) [1,2,3,4,5]. In the United States, PAD affects more than eight million people, with an annual cost of roughly $21 billion [6]. Traditionally, endovascular treatment of PAD has been performed by balloon angioplasty or the placement of a permanent metallic stent [7,8]. However, results are poor, with 50–85% of patients developing hemodynamically significant restenosis (re-occlusion), and 16–65% developing occlusions within 2 years post-treatment [9,10]. The use of anti-proliferative drugs in combination with bare metal stents, i.e., drug-eluting stents (DES), was a major breakthrough and highly successful in treating coronary artery disease [11,12]. However, stents have shown very poor clinical outcomes in treating PAD, as they are subjected to biomechanical stress and severe artery deformation (twisting, bending, and shortening), leading to high fracture rates (up to 68%) and restenosis [13]. DCBs, which were FDA-approved for the treatment of PAD in late 2014, provide a new therapeutic approach for interventionalists to practice a ‘leave nothing behind’ procedure, preserving future treatment options. DCBs are angioplasty balloons directly coated with an anti-proliferative therapeutic drug and an excipient (drug carrier) [1,14,15,16,17,18]. The excipient enhances the adhesion of the drug to the balloon surface, increases the stability of the drug coating during handling and delivery, and maximizes drug retention in the targeted arterial segment [18,19,20,21,22,23,24]. Current DCB excipients include polysorbate and sorbitol, urea, polyethylene glycol (PEG) and butyryl-tri-hexyl citrate (BTHC). The rationale for the selection of these various excipients varies. For example, excipients such as polysorbate and PEG are known cosolvents of paclitaxel [25,26], which can alter the interaction of the drug with the vessel after delivery from the DCB device. Conversely, urea acts to increase paclitaxel release at the lesion [18], and PEG has been shown to bind to hydroxylapatite, a primary component of calcified atherosclerotic lesions [17,19,23,24], thereby improving local pharmacodynamics. However, more recent pre-clinical studies have demonstrated the potential of DCB excipients to embolize and travel downstream to distal tissue post-treatment [27,28]. As peripheral arteries undergo severe mechanical deformation, excipients should aid in maintaining drug residency on the luminal surface of the artery, in particular during the early phase after delivery, prior to the buildup of tissue. Therefore, novel excipients that are capable of maintaining drug residency while minimizing downstream or off-target effects are needed. Keratins are a class of proteins that can be derived from numerous sources, including human hair. Keratins have been shown to achieve the sustained release of small-molecule drugs and growth factors [29,30]. Further, keratin films have been reported for use in vascular grafts to reduce thrombosis, suggesting their utility in cardiovascular applications [31]. The goal of this study was thus to examine the ability of an oxidized form of keratin, known as keratose (KOS), to serve as a new drug-carrier excipient that aids in the delivery and retention of the anti-proliferative drug paclitaxel. Specifically, the mobility, retention and biological impact of a KOS–paclitaxel-coated DCB were determined using ex vivo and in vivo models.
Conclusions: This study provides evidence supporting the use of keratose as an excipient for peripheral applications. The ex vivo results showed a potential benefit of the coating in minimizing the adverse impact of vascular motion on drug retention, and a favorable biological response was observed in the pre-clinical model. Additional studies are warranted to further demonstrate the safety and efficacy profile of the keratose coating in larger animal models and over longer durations. Overall, this approach has the potential to improve interventional outcomes and the quality of life of the millions of patients suffering from PAD.
Background: Drug-coated balloons (DCBs), which deliver anti-proliferative drugs with the aid of excipients, have emerged as a new endovascular therapy for the treatment of peripheral arterial disease. In this study, we evaluated the use of keratose (KOS) as a novel DCB-coating excipient to deliver and retain paclitaxel. Methods: A custom coating method was developed to deposit KOS and paclitaxel on uncoated angioplasty balloons. The retention of the KOS-paclitaxel coating, in comparison to a commercially available DCB, was evaluated using a novel vascular-motion simulating ex vivo flow model at 1 h and 3 days. Additionally, the locoregional biological response of the KOS-paclitaxel coating was evaluated in a rabbit ilio-femoral injury model at 14 days. Results: The KOS coating exhibited greater retention of the paclitaxel at 3 days under pulsatile conditions with vascular motion as compared to the commercially available DCB (14.89 ± 4.12 ng/mg vs. 0.60 ± 0.26 ng/mg, p = 0.018). Histological analysis of the KOS-paclitaxel-treated arteries demonstrated a significant reduction in neointimal thickness as compared to the uncoated balloons, KOS-only balloon and paclitaxel-only balloon. Conclusions: The ability to enhance drug delivery and retention in targeted arterial segments can ultimately improve clinical peripheral endovascular outcomes.
9,537
254
[ 338, 379, 3367, 137, 286, 715, 206, 82, 165, 62 ]
14
[ "paclitaxel", "kos", "arteries", "dcb", "vs", "vascular", "artery", "mm", "pxl", "usa" ]
[ "angioplasty balloons directly", "balloon arteries treated", "stent based intervention", "dcbs angioplasty balloons", "stents drug eluting" ]
null
[CONTENT] Keratose | drug-coated balloon | paclitaxel | drug delivery | pre-clinical | peripheral arterial disease | endovascular [SUMMARY]
null
[CONTENT] Keratose | drug-coated balloon | paclitaxel | drug delivery | pre-clinical | peripheral arterial disease | endovascular [SUMMARY]
[CONTENT] Keratose | drug-coated balloon | paclitaxel | drug delivery | pre-clinical | peripheral arterial disease | endovascular [SUMMARY]
[CONTENT] Keratose | drug-coated balloon | paclitaxel | drug delivery | pre-clinical | peripheral arterial disease | endovascular [SUMMARY]
[CONTENT] Keratose | drug-coated balloon | paclitaxel | drug delivery | pre-clinical | peripheral arterial disease | endovascular [SUMMARY]
[CONTENT] Angioplasty, Balloon | Animals | Antineoplastic Agents | Cardiovascular Agents | Coated Materials, Biocompatible | Drug Carriers | Drug Delivery Systems | Drug Evaluation, Preclinical | Immunohistochemistry | Keratosis | Paclitaxel | Peripheral Arterial Disease [SUMMARY]
null
[CONTENT] Angioplasty, Balloon | Animals | Antineoplastic Agents | Cardiovascular Agents | Coated Materials, Biocompatible | Drug Carriers | Drug Delivery Systems | Drug Evaluation, Preclinical | Immunohistochemistry | Keratosis | Paclitaxel | Peripheral Arterial Disease [SUMMARY]
[CONTENT] Angioplasty, Balloon | Animals | Antineoplastic Agents | Cardiovascular Agents | Coated Materials, Biocompatible | Drug Carriers | Drug Delivery Systems | Drug Evaluation, Preclinical | Immunohistochemistry | Keratosis | Paclitaxel | Peripheral Arterial Disease [SUMMARY]
[CONTENT] Angioplasty, Balloon | Animals | Antineoplastic Agents | Cardiovascular Agents | Coated Materials, Biocompatible | Drug Carriers | Drug Delivery Systems | Drug Evaluation, Preclinical | Immunohistochemistry | Keratosis | Paclitaxel | Peripheral Arterial Disease [SUMMARY]
[CONTENT] Angioplasty, Balloon | Animals | Antineoplastic Agents | Cardiovascular Agents | Coated Materials, Biocompatible | Drug Carriers | Drug Delivery Systems | Drug Evaluation, Preclinical | Immunohistochemistry | Keratosis | Paclitaxel | Peripheral Arterial Disease [SUMMARY]
[CONTENT] angioplasty balloons directly | balloon arteries treated | stent based intervention | dcbs angioplasty balloons | stents drug eluting [SUMMARY]
null
[CONTENT] angioplasty balloons directly | balloon arteries treated | stent based intervention | dcbs angioplasty balloons | stents drug eluting [SUMMARY]
[CONTENT] angioplasty balloons directly | balloon arteries treated | stent based intervention | dcbs angioplasty balloons | stents drug eluting [SUMMARY]
[CONTENT] angioplasty balloons directly | balloon arteries treated | stent based intervention | dcbs angioplasty balloons | stents drug eluting [SUMMARY]
[CONTENT] angioplasty balloons directly | balloon arteries treated | stent based intervention | dcbs angioplasty balloons | stents drug eluting [SUMMARY]
[CONTENT] paclitaxel | kos | arteries | dcb | vs | vascular | artery | mm | pxl | usa [SUMMARY]
null
[CONTENT] paclitaxel | kos | arteries | dcb | vs | vascular | artery | mm | pxl | usa [SUMMARY]
[CONTENT] paclitaxel | kos | arteries | dcb | vs | vascular | artery | mm | pxl | usa [SUMMARY]
[CONTENT] paclitaxel | kos | arteries | dcb | vs | vascular | artery | mm | pxl | usa [SUMMARY]
[CONTENT] paclitaxel | kos | arteries | dcb | vs | vascular | artery | mm | pxl | usa [SUMMARY]
[CONTENT] drug | excipients | pad | treatment | stents | 18 | dcbs | therapeutic | shown | proliferative [SUMMARY]
null
[CONTENT] vs | kos | pxl | vs kos | 00 | conditions | kos pxl | paclitaxel | vascular deformation | pulsatile conditions [SUMMARY]
[CONTENT] potential | keratose | coating | potential benefit coating minimize | overall approach potential improve | mobility favorable biological response | mobility favorable biological | potential improve interventional outcomes | potential improve interventional | potential improve [SUMMARY]
[CONTENT] kos | paclitaxel | vs | dcb | pxl | usa | arteries | drug | mm | vs kos [SUMMARY]
[CONTENT] kos | paclitaxel | vs | dcb | pxl | usa | arteries | drug | mm | vs kos [SUMMARY]
[CONTENT] ||| KOS | DCB [SUMMARY]
null
[CONTENT] KOS | 3 days | DCB ||| 0.60 | 0.26 ng | 0.018 ||| KOS | KOS [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| KOS | DCB ||| KOS ||| KOS | DCB | 1 h and 3 days ||| KOS | 14 days ||| ||| KOS | 3 days | DCB ||| 0.60 | 0.26 ng | 0.018 ||| KOS | KOS ||| [SUMMARY]
[CONTENT] ||| KOS | DCB ||| KOS ||| KOS | DCB | 1 h and 3 days ||| KOS | 14 days ||| ||| KOS | 3 days | DCB ||| 0.60 | 0.26 ng | 0.018 ||| KOS | KOS ||| [SUMMARY]
The epidemiological and clinical profile of COVID-19 in children: Moroccan experience of the Cheikh Khalifa University Center.
33623582
COVID-19 is an infectious disease caused by a new coronavirus. The first cases were identified in Wuhan. It spread rapidly, causing a worldwide pandemic. The incidence and severity of this disease are likely to be different in children compared with adults. Few studies of COVID-19 in children have been published. Our Moroccan paediatric series is among the first studies on this disease in Africa.
INTRODUCTION
We included all children with COVID-19 who were admitted and treated at the hospital from March 25 to April 26, 2020. We collected information, including demographic data, symptoms, imaging data, laboratory results, treatments and clinical progress, from patients with COVID-19.
METHODS
Since the outbreak of the 2019 novel coronavirus infection (2019-nCoV) in Morocco, a total of 145 confirmed COVID-19 cases have been reported at Cheikh Khalifa Hospital. Among these cases, 15 children were registered. The median age of the patients was 13 years. There were 7 boys and 8 girls. Five children were asymptomatic, 8 had mild symptoms and 2 had moderate respiratory difficulty. The RT-PCR test results were positive in all patients. Radiologically, we found multiple nodules with ground-glass opacities on the chest CT scan in 2 cases. The treatment was based on the combination of hydroxychloroquine and azithromycin. The evolution under treatment was good for all patients.
RESULTS
This study describes the profile of COVID-19 in children in a Moroccan hospital and confirms that the severity of illness in children with COVID-19 is far less than in adults.
CONCLUSION
[ "Adolescent", "Azithromycin", "COVID-19", "COVID-19 Nucleic Acid Testing", "Child", "Child, Preschool", "Female", "Humans", "Hydroxychloroquine", "Male", "Morocco", "Retrospective Studies", "Severity of Illness Index", "Treatment Outcome", "COVID-19 Drug Treatment" ]
7875728
Introduction
The world is currently experiencing a pandemic of infectious disease called COVID-19 (coronavirus disease 2019), caused by a new coronavirus: SARS-CoV-2. Coronaviruses are a large family of enveloped RNA viruses that cause illnesses ranging from a common cold to more severe diseases such as MERS or SARS. The first cases in Wuhan, China, did not involve children, which suggested that the disease was not symptomatic in children [1]. Now that the outbreak is global, with more than 3.3 million confirmed cases and 238 000 deaths (as of May 01, 2020) [2], we can evaluate more accurately the epidemiological profile of this disease in children. The Korean Center for Disease Control and Prevention reported that, up to 20 March, 6.3% of all cases that tested positive for COVID-19 were children under 19 years of age [3]. Worldwide, the confirmed cases were mostly among adults, with paediatric cases rarely reported [4]. Therefore, little is known about the clinical profile of infected children around the world. Here, we report our experience, among the first studies in Africa, detailing the epidemiological and clinical profile of children with COVID-19 in a Moroccan university hospital. As of April 26, according to the Directorate of Epidemiology and Disease Control, Morocco had identified about 4900 confirmed COVID-19 cases, including more than 390 paediatric cases [5]. In Casablanca, the economic capital of the country, the same source reported 97 paediatric cases, corresponding to a cumulative incidence of 4.8 per 100,000 children [5]. Context: Cheikh Khalifa Hospital (CKH) is an international university and multidisciplinary center. Since the start of this pandemic in Morocco, the Ministry of Health has designated several hospitals to handle COVID-19 cases, including the CKH. Cases rose rapidly, with many affected families received and cared for in our hospital, followed by a gradual and steady decrease. Starting from the first confirmed paediatric case admitted to our center, we conducted a retrospective study focused on the different characteristics of this population. All the hospitalised children received the same therapeutic protocol, codified by the Ministry of Health and applied throughout the country.
Methods
To analyse COVID-19 in children, we conducted a retrospective study including all children hospitalised for COVID-19 over a period of 1 month (from March 25 to April 26, 2020) at the Cheikh Khalifa International University Hospital of Casablanca, Morocco. Children and adolescents were defined as being under 19 years old. Demographic information and all characteristics, including exposure, medical history, symptoms, chest computed tomographic (CT) scan, laboratory tests, treatments and evolution of each patient, were obtained from the electronic medical record system of Cheikh Khalifa Hospital. Suspected cases were identified if a child had 2 of the 4 following conditions: close contact with a confirmed case within the last 2 weeks; fever and/or respiratory signs (cough, difficulty breathing, anosmia) and/or digestive symptoms (vomiting, diarrhea) and/or asthenia and myalgia; abnormal laboratory tests (white blood cell count, lymphocyte count, levels of C-reactive protein, procalcitonin, lactate dehydrogenase, ferritin and D-dimer); or an abnormal chest radiograph imaging result. COVID-19 was confirmed in all patients by positive results of the real-time reverse transcription polymerase chain reaction (RT-PCR) test on nasopharyngeal swab specimens. The exclusion criteria were all cases over 19 years of age and/or not meeting the previously detailed criteria for confirmation of the disease. The inclusion criteria were: age under 19; ≥ 2 of the above 4 conditions for suspicion of COVID-19; and a positive RT-PCR test on nasopharyngeal swab specimens. All confirmed children were hospitalised and treated with regular check-ups. The treatment given to symptomatic children is codified nationally by the Ministry of Health. Patients were declared cured if they were doing well clinically with 2 negative RT-PCRs. All ethical principles were considered in this article. The participants were informed about the purpose of the research and its implementation stages; they were also assured of the confidentiality of their information.
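The case-definition logic above (a suspected case requires at least 2 of the 4 listed conditions; inclusion additionally requires age under 19 and a positive RT-PCR) can be expressed as a short sketch. This is a hedged illustration only: the field names and the example record are hypothetical and are not drawn from the hospital's electronic medical record system.

```python
# Hedged sketch of the suspected-case and inclusion rules described above.
# Field names and the example record are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ChildRecord:
    age_years: int
    close_contact_last_2_weeks: bool
    fever_or_respiratory_or_digestive_or_general_signs: bool
    abnormal_laboratory_test: bool
    abnormal_chest_imaging: bool
    rt_pcr_positive: bool

def is_suspected(c: ChildRecord) -> bool:
    # Suspected case: at least 2 of the 4 conditions are present
    criteria = [
        c.close_contact_last_2_weeks,
        c.fever_or_respiratory_or_digestive_or_general_signs,
        c.abnormal_laboratory_test,
        c.abnormal_chest_imaging,
    ]
    return sum(criteria) >= 2

def is_included(c: ChildRecord) -> bool:
    # Inclusion: under 19, suspected, and RT-PCR confirmed
    return c.age_years < 19 and is_suspected(c) and c.rt_pcr_positive

example = ChildRecord(
    age_years=13,
    close_contact_last_2_weeks=True,
    fever_or_respiratory_or_digestive_or_general_signs=True,
    abnormal_laboratory_test=False,
    abnormal_chest_imaging=False,
    rt_pcr_positive=True,
)
print(is_suspected(example), is_included(example))
```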
Results
Since the outbreak of the new coronavirus infection 2019 (COVID-19) in Morocco, from March 25 to April 26, 2020, more than 400 suspected cases were admitted to Cheikh Khalifa University Hospital. Of the 145 laboratory-confirmed cases, 15 (10.3%) were children. The median age of the patients was 13 years (interquartile range: 5 - 19 years). There were 7 boys and 8 girls (sex ratio: 0.87). All children were Moroccan, came from the city of Casablanca and had family exposure. All children were vaccinated according to the national vaccination program, including the tuberculosis vaccine. Among these children, two had uncontrolled asthma and one had allergic rhinitis. The average time to consultation after the onset of symptoms was 3 days (range: 1 to 7 days). Regarding the severity of confirmed cases, 5 (33.4%), 8 (53.4%) and 2 (13.2%) cases were diagnosed as asymptomatic, mild and moderate, respectively. Among symptomatic children, 9 had fever and dry cough, with moderate breathing difficulty in 2 cases, 2 had watery diarrhea and 3 had anosmia (Table 1). The 2 children with respiratory difficulty had sibilant rales on auscultation and oxygen saturation between 92 and 95%. The RT-PCR test results were positive in all patients. In all cases, the values of CRP, procalcitonin, white blood cells, lactate dehydrogenase, kidney and liver function were normal. Table 1 summarizes the clinical symptoms of the 10 symptomatic patients. The D-dimer levels were high in 3 patients (greater than 500 μg/L). Radiologically, we found in 2 cases multiple nodules with ground-glass opacities in both lungs on the chest CT scan (Figure 1, Figure 2). Treatment modalities were focused on symptomatic and specific support. Symptomatic treatment was based on bed rest, vitamin C and paracetamol in case of fever. A beta-2 agonist inhaler with a valved holding chamber and oral corticosteroids were administered to the 2 asthmatic children. Low-molecular-weight heparin was used in adolescents with high D-dimer levels. Following the ministerial recommendations, we adopted hydroxychloroquine-based therapy, preceded by an electrocardiogram to screen for heart rhythm disorders. All our symptomatic patients received bi-therapy based on hydroxychloroquine and azithromycin at respective doses of 10 mg/kg/day for 10 days and 20 mg/kg/day for 7 days. No children received antivirals. The response to treatment was good for all patients. No side effects were noted. As of April 26, 2020, two patients were stable and still hospitalized in the paediatric service under supervision, while the other thirteen had recovered clinically. Their RT-PCRs, performed at a two-day interval, were negative and they returned home. Figure 1: Chest CT scan in axial sections showing bilateral multiple nodules with ground-glass opacities (patient 2). Figure 2: Chest CT scan in axial sections noting ground-glass opacities (patient 10).
Conclusion
To our knowledge, our study is one of the first paediatric series in Africa to describe the clinical, radiobiological and evolutionary characteristics of COVID-19 in children. The clinical presentation was mild to moderate, without mortality. While the initial data reported so far on children are reassuring, paediatricians need to continually update their knowledge and be aware of the risks, especially in infants and children with comorbidities. What is known about this topic: COVID-19 is particularly rare and less severe in immunocompetent children than in adults; few studies (mainly among Asian and European populations) have been published on COVID-19 in this age group. What this study adds: The literature on COVID-19 is developing day by day all over the world, and our African series would contribute to increasing knowledge; this study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children.
[ "What is known about this topic", "What this study adds" ]
[ "COVID-19 is particularly rare and less severe in immunocompetent children than in adults;\nFew studies (mainly among Asian and European populations) have been published on COVID-19 in this age group.", "The literature on COVID-19 is developing day by day all over the world and our African series would contribute to increase knowledge;\nThis study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "The world is currently experiencing a pandemic of infectious disease called COVID-19 (coronavirus disease 2019), caused by a new virus coronavirus: SARS-CoV-2. Coronaviruses are a large family of enveloped RNA viruses that cause illnesses ranging from a common cold to more severe diseases such as MERS or SARS. The first cases in Wuhan, China, did not involve children, which suggested that the disease was not symptomatic in children [1]. Now, that the outbreak is global with more than 3.3 million confirmed cases and 238 000 deaths (as of May 01, 2020) [2], we can evaluate more accurately the epidemiological profile of this disease in children. The Korean center for disease control and prevention reported that, up to 20 March, 6.3% of all cases that tested positive for COVID-19 were children under 19 years of age [3]. Worldwide, the confirmed cases were mostly among adults, with paediatric cases rarely reported [4]. Therefore, little is known about the clinical profile of infected children around the world. Here, we report our experience, among the first studies in Africa, detailing the epidemiological and clinical profile in children with COVID-19 in University Moroccan Hospital. Morocco has identified until April 26, according to the direction of epidemiology and fight against diseases, about 4900 cases of confirmed COVID-19 including more than 390 paediatric cases [5]. In Casablanca, the economic capital of the country and according to the same source, there have been 97 cases of children affected with a cumulative incidence of 4.8 per 100,000 children [5].\nContext: Cheikh Khalifa hospital (CKH) is an international university and multidisciplinary center. Since the start of this pandemic in Morocco, the ministry of health has designated several hospitals to handle COVID-19 cases, including the CKH. There was a rapid increase in the disease with the reception and care of many affected families in our hospital, then there was a gradual and regular decrease. From the first confirmed case of children admitted to our structure, we conducted a retrospective study focused on different characteristics of this category. All the hospitalised children received the same therapeutic protocol, codified by the ministry of health and generalized to all the country.", "In order to analyse the COVID-19 disease in children, we conducted a retrospective study and we included all children hospitalised for COVID-19 during a period of 1 month (from March 25 to April 26, 2020) within the Hospital International University Cheikh Khalifa of Casablanca city in Morocco. Children and adolescents were defined as being under 19 years old. Demographic information and all characteristics including exposure, medical history, symptoms, chest computed tomographic (CT) scan, laboratory tests, treatments and evolution of each patient were obtained from the electronic medical record system of Cheikh Khalifa´s Hospital. Suspected cases were identified if a child had 2 of the 4 following conditions: close contact with a confirmed case within the 2 last weeks; fever and or respiratory signs (cough, difficulty breathing, anosmia) and or digestive symptoms (vomiting, diarrhea) and or asthenia, myalgia; abnormal laboratory test (white blood cell count, lymphocyte count, level of C-reactive protein procalcitonin, lactate dehydrogenase, ferritin and D-dimer); or abnormal chest radiograph imaging result. 
COVID-19 was confirmed in all patients with positive results in the real time reverse transcription polymerase chain reaction (RT-PCR) test on nasopharyngeal swab specimens. The exclusion criteria are all cases who are over 19 years of age and/or do not meet the previously detailed criteria for confirmation of the disease. The inclusion criteria are: age under 19; ≥ 2 of the above 4 conditions of suspicion of COVID-19; positive RT-PCR test on nasopharyngeal swab specimens. All confirmed children were hospitalised and treated with regular check-ups. The treatment given, in symptomatic children, is codified nationally by the ministry of health. Patients are declared cured if they are doing well clinically with 2 negative RT-PCRs. All ethical principles were considered in this article. The participants were informed about the purpose of the research and its implementation stages; they were also assured about the confidentiality of their information.", "Since the outbreak of the new coronavirus infection 2019 (COVID-19) in Morocco, from March 25 to April 26, 2020, more than 400 suspected cases have been admitted to Cheikh Khalifa University Hospital. Of the 145 laboratory-confirmed cases, 15 children were reported with a percentage of 10.3%. The median age of patients was 13 years (interquartile range: 5 - 19 years). There were 7 boys and 8 girls (sex ration: 0.87). All children were Moroccan, coming from the Casablanca city and had family exposure. All children were vaccinated according to national vaccination program including tuberculosis vaccine. Among these children, two had uncontrolled asthma and one has allergic rhinitis. The average consultation time after onset of symptoms is 3 days (extremes 1 and 7 days). Regarding the severity of confirmed cases, 5 (33.4%), 8 (53.4%) and 2 (13.2%) cases were diagnosed as asymptomatic, mild and moderate respectively. Among symptomatic children, 9 had fever and dry cough with moderate breathing difficulty in 2 cases, 2 had watery diarrhea and 3 had anosmia (Table 1). The 2 children who were respiratory difficulty had sibilant rales at auscultation and had saturation between 92 and 95%. The RT-PCR test results were positive in all patients. In all cases, the values of CRP, procalcitonin, white blood cells, lactate dehydrogenase, kidney and liver function were normal.\nTable summarizing the clinical symptoms of the 10 symptomatic patients\nThe D-dimer levels were high in 3 patients (greater than 500μg/l). Radiologically, we found in 2 cases, multiple nodules with ground-glass opacities in both lungs on the chest CT scan (Figure 1, Figure 2). Treatment modalities were focused on symptomatic and specific support. Symptomatic treatment was based on bed rest, vitamin C and paracetamol if fever. Beta 2 agonist inhaler with a valved chamber and oral corticoids were administered in the 2 asthmatic children. Low molecular weight heparin was used in adolescents with high D dimers levels. We adopted, following the ministerial recommendations, hydroxychloroquine-based therapy preceded by an electrocardiogram in search of heart rhythm disorder. All our symptomatic patients received bi-therapy based on hydroxychloroquine and azithromycin at the respective dose of 10mg/Kg/Day for 10 days and 20mg/Kg/Day for 7 days. No children received antivirals. The response to treatment was good for all patients. No side effects were noted. 
Until April 26, 2020, two patients are stable and still hospitalized in the paediatric service under supervision, the other thirteen have recovered clinically. Their RT-PCRs performed at two days´ interval was negative and they returned home.\nChest CT scan in axial sections showing bilateral multiple nodules with ground-glass opacities (patient 2)\nChest CT scan in axial sections noting ground-glass opacities (patient 10)", "Since December, 2019, COVID-19 has spread rapidly around China and the world. The world health organization (WHO) has declared the infection a global pandemic. Nationwide case series of 2135 paediatric patients with COVID-19 reported to the chinese center for disease control and prevention from January 16, 2020, to February 8, 2020 were included [6]. The United States of America released a study, which looked at more than 2,500 coronavirus cases among children younger than 18 in the US between February 12 and April 2. That’s the largest research sample of children with the coronavirus to date. The data suggested that children are less likely to develop symptoms than adults. Of all reported cases, only 1.7% were children [7]. During the first 2 weeks of the epidemic in Madrid region (Spain), 41 of the 4695 confirmed cases (0.8%) were children younger than 18 years [8]. The reason for having a higher rate of infected children (10%) than other studies is the hospitalization of five sick families containing 2 or 3 confirmed children for each of the families. The COVID-19 incubation period is approximately 2 to 14 days (but most often between three to five days) and transmitted by respiratory droplets and close contact with an infected person. Since most children have been exposed to family members and/or other children with COVID-19, this clearly indicates person-to-person transmission [6].\nTo fight against human-to-human transmission and the spread of this pandemic, Morocco implemented rigorous preventive interventions very early on, such as home confinement, social isolation, school closings, compulsory wearing of masks and search for contacts. Although clinical manifestations of children´s COVID-19 cases were generally less severe than those of adult patients, young children, particularly infants and those with chronic disease were vulnerable to infection and were more likely to have severe clinical manifestations [9]. This means that children who do not have underlying conditions, such as chronic heart or lung disease, diabetes mellitus or immunosuppression (chemotherapy, high doses of corticosteroids), have a much lower risk of severe forms of COVID-19 than other age groups [10]. In our study, we did not find any severe or unstable paediatric case, unlike adults where there were about twenty serious cases with 14 deaths. We also found, in our series, that the 2 children with uncontrolled asthma presented a more telling respiratory symptoms than the others. However, we noted that atopy represents only 3 out of 15 patients (20%), which leads to a discussion on the real role of atopy in COVID-19.\nIs atopy a risk factor for the seriousness of the disease or, on the contrary, does it play a preventive role because of the immune imbalance weighted towards the TH2 response instead of TH1? This hypothesis must be a research soon. The symptoms of COVID-19 are similar to those of a simple flu: infected children often have fever, dry cough and rarely difficulty breathing. Digestive symptoms (vomiting, diarrhea), asthenia and myalgia are also described in the different series. 
However, the infection can progress more seriously and lead to pneumonia, failure of several organs, severe acute respiratory syndrome, or even death in the most serious cases [11]. We present the findings of Dong et al. who reported in china, a series of 731 children with laboratory-confirmed COVID-19 disease, approximately 13% of cases had asymptomatic infection, 42% were mild, 40% were moderate (pneumonia without hypoxemia), only 5% of cases were severe (dyspnea and hypoxemia) and 0.6% progressed to acute respiratory distress syndrome (ARDS) or multiorgan dysfunction, a rate that is significaly lower than that seen in adults [11].\nRegarding the results of our study, we have 33.4% of asymptomatic children and more than half of the cases had a mild form of the disease. We conclude that the children may play a major role in community-based viral transmission. Since the majority of available data suggest that children can be asymptomatic and have nasopharyngeal carriage, rather than lower respiratory tract infection [6]. There is also evidence of faecal shedding in the stool for several weeks after diagnosis [4,12]. Radiological semiology in children is still poorly described. The first publications describe lesions similar to those of adults, although less severe. They show ground glass under pleural uni or bilateral and in half of the cases, zones of consolidation with halo sign [13]. A recent study in China analysed the CT images of 15 children with COVID-19, 10 of whom were asymptomatic and 5 had mild symptoms. Among the 15 children, pulmonary inflammatory lesions were observed in 9. Nodular ground glass opacity was the most common finding and sub pleural patchy opacities were also observable, with all lesions limited to a single lung segment [14].\nChest CT is a highly sensitive diagnostic tool to detect pneumonia and the sensitivity for COVID-19 is reported to be 97.5% [15]. In our series, we had 2 pathological chest CT scanners with bilateral multiple ground glass opacity (Figure 1, Figure 2). The 1st case had a dry cough without dyspnea as long as the 2nd case was asymptomatic. The 2 cases which presented a moderate respiratory gene had a completely normal CT scan. Biologically, white blood cell count is normal or decreased, with decreased lymphocyte count; most patients display normal C-reactive protein and procalcitonin levels. Severe cases show high D-dimer levels and ferritin rate. Samples from nasopharyngeal swabs, sputum, lower respiratory tract secretions, stool and blood are tested positive for 2019-nCoV PCR [12,16]. We present in table comparative results between different series (Table 2) [4,14,17]. Some institutions have developed algorithms to determine when treatment with hydroxychloroquine or remdesivir may be warranted [11]. Children with underlying conditions or with immunocompromising treatment and children younger than three years appear to be at the greatest risk for progression of COVID-19 disease and therefore indication of the specific treatment mentioned above [11].\nMain comparative data between the series\nIn an Iranian series (9 cases), most of patients were treated with administrating oxygen, nebulizing with β2-agonist, using probabilistic antibiotics, hydroxychloroquine and oseltamivir [17]. Sometimes antibiotic treatment of bacterial superinfection may be necessary. According to the ministerial and national therapeutic protocol, we put symptomatic children under the association hydroxychloroquine and azithromycin with adapted paediatric dosages. 
Hydroxychloroquine, an antimalarial drug, known to have anti-inflammatory and antiviral effects, with low cost and easy availability, is a promising practice to treat COVID-19 infection under reasonable managements. One of the major French studies suggested that hydroxychloroquine, in combination with the antibiotic azithromycin, could work as a treatment for COVID-19 [18]. The results of this study should not be conclusive given its observational design. According to a more recent American study, the administration of hydroxychloroquine in patients with COVID-19 was not associated with a significantly reduced risk of complication or death [19].\nRandomized controlled trials of hydroxychloroquine efficacy in patients with COVID-19 are needed. The Indian medical research council (IMRC) recommended the use of hydroxychloroquine as a prophylactic against COVID-19 in health care workers and asymptomatic contacts of laboratory-confirmed cases of COVID [20]. Why not follow this prophylactic practice among health professionals and fragile people to protect them now and in the future from other seasonal outbreaks? The number of COVID-19 cases in countries with fragile health systems is lower than expected [21]. Among the hypotheses mentioned, it is that the tuberculosis vaccine could have a protective effect against the coronavirus, either by reducing the risk of being infected, or by limiting the severity of the symptoms [21]. Note that all our patients are vaccinated against tuberculosis. Until a specific vaccine is developed, COVID-19 vulnerable populations and health personnel could be immunized with BCG vaccines [21].", "To our knowledge, our study is one of the first paediatric series in Africa to describe the clinical, radiobiological and evolutionary characteristics of COVID-19 in children. the clinical presentation was mild to moderate without mortality. While the initial data reported so far on children is reassuring, paediatricians need to continually update their knowledge and be aware of the risks, especially in infants and children with comorbidities.\nWhat is known about this topic COVID-19 is particularly rare and less severe in immunocompetent children than in adults;\nFew studies (mainly among Asian and European populations) have been published on COVID-19 in this age group.\nCOVID-19 is particularly rare and less severe in immunocompetent children than in adults;\nFew studies (mainly among Asian and European populations) have been published on COVID-19 in this age group.\nWhat this study adds The literature on COVID-19 is developing day by day all over the world and our African series would contribute to increase knowledge;\nThis study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children.\nThe literature on COVID-19 is developing day by day all over the world and our African series would contribute to increase knowledge;\nThis study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children.", "COVID-19 is particularly rare and less severe in immunocompetent children than in adults;\nFew studies (mainly among Asian and European populations) have been published on COVID-19 in this age group.", "The literature on COVID-19 is developing day by day all over the world and our African series would contribute to increase knowledge;\nThis study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children.", "The author declares no competing interests." ]
[ "intro", "methods", "results", "discussion", "conclusion", null, null, "COI-statement" ]
[ "COVID-19", "children", "coronavirus" ]
Introduction: The world is currently experiencing a pandemic of infectious disease called COVID-19 (coronavirus disease 2019), caused by a new virus coronavirus: SARS-CoV-2. Coronaviruses are a large family of enveloped RNA viruses that cause illnesses ranging from a common cold to more severe diseases such as MERS or SARS. The first cases in Wuhan, China, did not involve children, which suggested that the disease was not symptomatic in children [1]. Now, that the outbreak is global with more than 3.3 million confirmed cases and 238 000 deaths (as of May 01, 2020) [2], we can evaluate more accurately the epidemiological profile of this disease in children. The Korean center for disease control and prevention reported that, up to 20 March, 6.3% of all cases that tested positive for COVID-19 were children under 19 years of age [3]. Worldwide, the confirmed cases were mostly among adults, with paediatric cases rarely reported [4]. Therefore, little is known about the clinical profile of infected children around the world. Here, we report our experience, among the first studies in Africa, detailing the epidemiological and clinical profile in children with COVID-19 in University Moroccan Hospital. Morocco has identified until April 26, according to the direction of epidemiology and fight against diseases, about 4900 cases of confirmed COVID-19 including more than 390 paediatric cases [5]. In Casablanca, the economic capital of the country and according to the same source, there have been 97 cases of children affected with a cumulative incidence of 4.8 per 100,000 children [5]. Context: Cheikh Khalifa hospital (CKH) is an international university and multidisciplinary center. Since the start of this pandemic in Morocco, the ministry of health has designated several hospitals to handle COVID-19 cases, including the CKH. There was a rapid increase in the disease with the reception and care of many affected families in our hospital, then there was a gradual and regular decrease. From the first confirmed case of children admitted to our structure, we conducted a retrospective study focused on different characteristics of this category. All the hospitalised children received the same therapeutic protocol, codified by the ministry of health and generalized to all the country. Methods: In order to analyse the COVID-19 disease in children, we conducted a retrospective study and we included all children hospitalised for COVID-19 during a period of 1 month (from March 25 to April 26, 2020) within the Hospital International University Cheikh Khalifa of Casablanca city in Morocco. Children and adolescents were defined as being under 19 years old. Demographic information and all characteristics including exposure, medical history, symptoms, chest computed tomographic (CT) scan, laboratory tests, treatments and evolution of each patient were obtained from the electronic medical record system of Cheikh Khalifa´s Hospital. Suspected cases were identified if a child had 2 of the 4 following conditions: close contact with a confirmed case within the 2 last weeks; fever and or respiratory signs (cough, difficulty breathing, anosmia) and or digestive symptoms (vomiting, diarrhea) and or asthenia, myalgia; abnormal laboratory test (white blood cell count, lymphocyte count, level of C-reactive protein procalcitonin, lactate dehydrogenase, ferritin and D-dimer); or abnormal chest radiograph imaging result. 
COVID-19 was confirmed in all patients with positive results in the real time reverse transcription polymerase chain reaction (RT-PCR) test on nasopharyngeal swab specimens. The exclusion criteria are all cases who are over 19 years of age and/or do not meet the previously detailed criteria for confirmation of the disease. The inclusion criteria are: age under 19; ≥ 2 of the above 4 conditions of suspicion of COVID-19; positive RT-PCR test on nasopharyngeal swab specimens. All confirmed children were hospitalised and treated with regular check-ups. The treatment given, in symptomatic children, is codified nationally by the ministry of health. Patients are declared cured if they are doing well clinically with 2 negative RT-PCRs. All ethical principles were considered in this article. The participants were informed about the purpose of the research and its implementation stages; they were also assured about the confidentiality of their information. Results: Since the outbreak of the new coronavirus infection 2019 (COVID-19) in Morocco, from March 25 to April 26, 2020, more than 400 suspected cases have been admitted to Cheikh Khalifa University Hospital. Of the 145 laboratory-confirmed cases, 15 children were reported with a percentage of 10.3%. The median age of patients was 13 years (interquartile range: 5 - 19 years). There were 7 boys and 8 girls (sex ration: 0.87). All children were Moroccan, coming from the Casablanca city and had family exposure. All children were vaccinated according to national vaccination program including tuberculosis vaccine. Among these children, two had uncontrolled asthma and one has allergic rhinitis. The average consultation time after onset of symptoms is 3 days (extremes 1 and 7 days). Regarding the severity of confirmed cases, 5 (33.4%), 8 (53.4%) and 2 (13.2%) cases were diagnosed as asymptomatic, mild and moderate respectively. Among symptomatic children, 9 had fever and dry cough with moderate breathing difficulty in 2 cases, 2 had watery diarrhea and 3 had anosmia (Table 1). The 2 children who were respiratory difficulty had sibilant rales at auscultation and had saturation between 92 and 95%. The RT-PCR test results were positive in all patients. In all cases, the values of CRP, procalcitonin, white blood cells, lactate dehydrogenase, kidney and liver function were normal. Table summarizing the clinical symptoms of the 10 symptomatic patients The D-dimer levels were high in 3 patients (greater than 500μg/l). Radiologically, we found in 2 cases, multiple nodules with ground-glass opacities in both lungs on the chest CT scan (Figure 1, Figure 2). Treatment modalities were focused on symptomatic and specific support. Symptomatic treatment was based on bed rest, vitamin C and paracetamol if fever. Beta 2 agonist inhaler with a valved chamber and oral corticoids were administered in the 2 asthmatic children. Low molecular weight heparin was used in adolescents with high D dimers levels. We adopted, following the ministerial recommendations, hydroxychloroquine-based therapy preceded by an electrocardiogram in search of heart rhythm disorder. All our symptomatic patients received bi-therapy based on hydroxychloroquine and azithromycin at the respective dose of 10mg/Kg/Day for 10 days and 20mg/Kg/Day for 7 days. No children received antivirals. The response to treatment was good for all patients. No side effects were noted. 
As of April 26, 2020, two patients were stable and still hospitalized in the paediatric service under supervision, while the other thirteen had recovered clinically. Their RT-PCRs, performed at a two-day interval, were negative and they returned home. Figure 1: Chest CT scan in axial sections showing bilateral multiple nodules with ground-glass opacities (patient 2). Figure 2: Chest CT scan in axial sections noting ground-glass opacities (patient 10). Discussion: Since December 2019, COVID-19 has spread rapidly around China and the world. The World Health Organization (WHO) has declared the infection a global pandemic. A nationwide case series of 2135 paediatric patients with COVID-19 reported to the Chinese Center for Disease Control and Prevention from January 16, 2020, to February 8, 2020, was included [6]. The United States of America released a study which looked at more than 2,500 coronavirus cases among children younger than 18 in the US between February 12 and April 2. That is the largest research sample of children with the coronavirus to date. The data suggested that children are less likely to develop symptoms than adults. Of all reported cases, only 1.7% were children [7]. During the first 2 weeks of the epidemic in the Madrid region (Spain), 41 of the 4695 confirmed cases (0.8%) were children younger than 18 years [8]. The reason our rate of infected children (10%) is higher than in other studies is the hospitalization of five sick families, each including 2 or 3 confirmed children. The COVID-19 incubation period is approximately 2 to 14 days (but most often between three and five days), and the virus is transmitted by respiratory droplets and close contact with an infected person. Since most children have been exposed to family members and/or other children with COVID-19, this clearly indicates person-to-person transmission [6]. To fight against human-to-human transmission and the spread of this pandemic, Morocco implemented rigorous preventive interventions very early on, such as home confinement, social isolation, school closings, compulsory wearing of masks and contact tracing. Although the clinical manifestations of children's COVID-19 cases were generally less severe than those of adult patients, young children, particularly infants and those with chronic disease, were vulnerable to infection and were more likely to have severe clinical manifestations [9]. This means that children who do not have underlying conditions, such as chronic heart or lung disease, diabetes mellitus or immunosuppression (chemotherapy, high doses of corticosteroids), have a much lower risk of severe forms of COVID-19 than other age groups [10]. In our study, we did not find any severe or unstable paediatric case, unlike adults, where there were about twenty serious cases with 14 deaths. We also found, in our series, that the 2 children with uncontrolled asthma presented more pronounced respiratory symptoms than the others. However, we noted that atopy was present in only 3 out of 15 patients (20%), which leads to a discussion on the real role of atopy in COVID-19. Is atopy a risk factor for the seriousness of the disease or, on the contrary, does it play a preventive role because of the immune imbalance weighted towards the TH2 response instead of TH1? This hypothesis should be investigated in future research. The symptoms of COVID-19 are similar to those of a simple flu: infected children often have fever, dry cough and rarely difficulty breathing. 
Digestive symptoms (vomiting, diarrhea), asthenia and myalgia are also described in the different series. However, the infection can progress more seriously and lead to pneumonia, failure of several organs, severe acute respiratory syndrome, or even death in the most serious cases [11]. We present the findings of Dong et al., who reported a series of 731 children with laboratory-confirmed COVID-19 in China: approximately 13% of cases had asymptomatic infection, 42% were mild, 40% were moderate (pneumonia without hypoxemia), only 5% were severe (dyspnea and hypoxemia) and 0.6% progressed to acute respiratory distress syndrome (ARDS) or multiorgan dysfunction, a rate that is significantly lower than that seen in adults [11]. Regarding the results of our study, 33.4% of the children were asymptomatic and more than half of the cases had a mild form of the disease. We conclude that children may play a major role in community-based viral transmission, since the majority of available data suggest that children can be asymptomatic and have nasopharyngeal carriage rather than lower respiratory tract infection [6]. There is also evidence of faecal shedding in the stool for several weeks after diagnosis [4,12]. Radiological semiology in children is still poorly described. The first publications describe lesions similar to those of adults, although less severe. They show unilateral or bilateral subpleural ground-glass opacities and, in half of the cases, zones of consolidation with a halo sign [13]. A recent study in China analysed the CT images of 15 children with COVID-19, 10 of whom were asymptomatic and 5 had mild symptoms. Among the 15 children, pulmonary inflammatory lesions were observed in 9. Nodular ground-glass opacity was the most common finding and subpleural patchy opacities were also observable, with all lesions limited to a single lung segment [14]. Chest CT is a highly sensitive diagnostic tool to detect pneumonia, and the sensitivity for COVID-19 is reported to be 97.5% [15]. In our series, we had 2 pathological chest CT scans with bilateral multiple ground-glass opacities (Figure 1, Figure 2). The 1st case had a dry cough without dyspnea, while the 2nd case was asymptomatic. The 2 cases that presented moderate respiratory discomfort had a completely normal CT scan. Biologically, the white blood cell count is normal or decreased, with a decreased lymphocyte count; most patients display normal C-reactive protein and procalcitonin levels. Severe cases show high D-dimer and ferritin levels. Samples from nasopharyngeal swabs, sputum, lower respiratory tract secretions, stool and blood can test positive for 2019-nCoV by PCR [12,16]. We present comparative results between the different series in Table 2 [4,14,17]. Some institutions have developed algorithms to determine when treatment with hydroxychloroquine or remdesivir may be warranted [11]. Children with underlying conditions or with immunocompromising treatment and children younger than three years appear to be at the greatest risk for progression of COVID-19 disease and therefore for indication of the specific treatments mentioned above [11]. Table 2: Main comparative data between the series. In an Iranian series (9 cases), most patients were treated with oxygen administration, nebulized β2-agonists, probabilistic antibiotics, hydroxychloroquine and oseltamivir [17]. Sometimes antibiotic treatment of bacterial superinfection may be necessary. 
According to the ministerial and national therapeutic protocol, we put symptomatic children under the association hydroxychloroquine and azithromycin with adapted paediatric dosages. Hydroxychloroquine, an antimalarial drug, known to have anti-inflammatory and antiviral effects, with low cost and easy availability, is a promising practice to treat COVID-19 infection under reasonable managements. One of the major French studies suggested that hydroxychloroquine, in combination with the antibiotic azithromycin, could work as a treatment for COVID-19 [18]. The results of this study should not be conclusive given its observational design. According to a more recent American study, the administration of hydroxychloroquine in patients with COVID-19 was not associated with a significantly reduced risk of complication or death [19]. Randomized controlled trials of hydroxychloroquine efficacy in patients with COVID-19 are needed. The Indian medical research council (IMRC) recommended the use of hydroxychloroquine as a prophylactic against COVID-19 in health care workers and asymptomatic contacts of laboratory-confirmed cases of COVID [20]. Why not follow this prophylactic practice among health professionals and fragile people to protect them now and in the future from other seasonal outbreaks? The number of COVID-19 cases in countries with fragile health systems is lower than expected [21]. Among the hypotheses mentioned, it is that the tuberculosis vaccine could have a protective effect against the coronavirus, either by reducing the risk of being infected, or by limiting the severity of the symptoms [21]. Note that all our patients are vaccinated against tuberculosis. Until a specific vaccine is developed, COVID-19 vulnerable populations and health personnel could be immunized with BCG vaccines [21]. Conclusion: To our knowledge, our study is one of the first paediatric series in Africa to describe the clinical, radiobiological and evolutionary characteristics of COVID-19 in children. the clinical presentation was mild to moderate without mortality. While the initial data reported so far on children is reassuring, paediatricians need to continually update their knowledge and be aware of the risks, especially in infants and children with comorbidities. What is known about this topic COVID-19 is particularly rare and less severe in immunocompetent children than in adults; Few studies (mainly among Asian and European populations) have been published on COVID-19 in this age group. COVID-19 is particularly rare and less severe in immunocompetent children than in adults; Few studies (mainly among Asian and European populations) have been published on COVID-19 in this age group. What this study adds The literature on COVID-19 is developing day by day all over the world and our African series would contribute to increase knowledge; This study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children. The literature on COVID-19 is developing day by day all over the world and our African series would contribute to increase knowledge; This study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children. What is known about this topic: COVID-19 is particularly rare and less severe in immunocompetent children than in adults; Few studies (mainly among Asian and European populations) have been published on COVID-19 in this age group. 
What this study adds: The literature on COVID-19 is developing day by day all over the world and our African series will contribute to increasing knowledge; this study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children. Competing interests: The author declares no competing interests.
Background: COVID-19 is an infectious disease caused by a new coronavirus. The first cases were identified in Wuhan. It rapidly spread, causing a worldwide pandemic. The incidence and severity of this disease are likely to differ in children compared with adults. Few publications on COVID-19 in children are available. Our Moroccan paediatric series is among the first studies on this disease in Africa. Methods: We included all children with COVID-19 who were admitted and treated at the hospital from March 25 to April 26, 2020. We collected information including demographic data, symptoms, imaging data, laboratory results, treatments and clinical progress from patients with COVID-19. Results: Since the outbreak of the 2019 novel coronavirus infection (2019-nCoV) in Morocco, a total of 145 confirmed COVID-19 cases have been reported at the Cheikh Khalifa Hospital. Among these cases, 15 children were registered. The median age of patients was 13 years. There were 7 boys and 8 girls. Five children were asymptomatic, 8 had mild symptoms and 2 had moderate respiratory difficulty. The RT-PCR test results were positive in all patients. Radiologically, we found multiple nodules with ground-glass opacities on the chest CT scan in 2 cases. The treatment was based on the combination of hydroxychloroquine and azithromycin. Evolution under treatment was favourable in all patients. Conclusions: This study describes the profile of COVID-19 in children in a Moroccan hospital and confirms that the severity of illness in children with COVID-19 is far lower than in adults.
Introduction: The world is currently experiencing a pandemic of an infectious disease called COVID-19 (coronavirus disease 2019), caused by a new coronavirus: SARS-CoV-2. Coronaviruses are a large family of enveloped RNA viruses that cause illnesses ranging from the common cold to more severe diseases such as MERS or SARS. The first cases in Wuhan, China, did not involve children, which suggested that the disease was not symptomatic in children [1]. Now that the outbreak is global, with more than 3.3 million confirmed cases and 238 000 deaths (as of May 01, 2020) [2], we can evaluate more accurately the epidemiological profile of this disease in children. The Korea Centers for Disease Control and Prevention reported that, up to 20 March, 6.3% of all cases that tested positive for COVID-19 were children under 19 years of age [3]. Worldwide, the confirmed cases were mostly among adults, with paediatric cases rarely reported [4]. Therefore, little is known about the clinical profile of infected children around the world. Here, we report our experience, among the first studies in Africa, detailing the epidemiological and clinical profile of children with COVID-19 in a Moroccan university hospital. As of April 26, according to the Directorate of Epidemiology and Disease Control, Morocco had identified about 4,900 confirmed COVID-19 cases, including more than 390 paediatric cases [5]. In Casablanca, the economic capital of the country, the same source reported 97 paediatric cases, a cumulative incidence of 4.8 per 100,000 children [5]. Context: Cheikh Khalifa hospital (CKH) is an international university and multidisciplinary center. Since the start of this pandemic in Morocco, the ministry of health has designated several hospitals to handle COVID-19 cases, including the CKH. There was a rapid increase in cases, with the reception and care of many affected families in our hospital, followed by a gradual and steady decrease. From the first confirmed paediatric case admitted to our structure, we conducted a retrospective study focused on the different characteristics of this category of patients. All the hospitalised children received the same therapeutic protocol, codified by the ministry of health and generalized to the whole country. Conclusion: To our knowledge, our study is one of the first paediatric series in Africa to describe the clinical, radiobiological and evolutionary characteristics of COVID-19 in children. The clinical presentation was mild to moderate, without mortality. While the initial data reported so far on children are reassuring, paediatricians need to continually update their knowledge and be aware of the risks, especially in infants and children with comorbidities. What is known about this topic: COVID-19 is particularly rare and less severe in immunocompetent children than in adults; few studies (mainly among Asian and European populations) have been published on COVID-19 in this age group. What this study adds: The literature on COVID-19 is developing day by day all over the world and our African series will contribute to increasing knowledge; this study helps to identify the clinical characteristics and the management of COVID-19 in Moroccan children. 
Background: COVID-19 is an infectious disease caused by a new coronavirus. The first cases were identified in Wuhan. It rapidly spread, causing a worldwide pandemic. The incidence and severity of this disease are likely to differ in children compared with adults. Few publications on COVID-19 in children are available. Our Moroccan paediatric series is among the first studies on this disease in Africa. Methods: We included all children with COVID-19 who were admitted and treated at the hospital from March 25 to April 26, 2020. We collected information including demographic data, symptoms, imaging data, laboratory results, treatments and clinical progress from patients with COVID-19. Results: Since the outbreak of the 2019 novel coronavirus infection (2019-nCoV) in Morocco, a total of 145 confirmed COVID-19 cases have been reported at the Cheikh Khalifa Hospital. Among these cases, 15 children were registered. The median age of patients was 13 years. There were 7 boys and 8 girls. Five children were asymptomatic, 8 had mild symptoms and 2 had moderate respiratory difficulty. The RT-PCR test results were positive in all patients. Radiologically, we found multiple nodules with ground-glass opacities on the chest CT scan in 2 cases. The treatment was based on the combination of hydroxychloroquine and azithromycin. Evolution under treatment was favourable in all patients. Conclusions: This study describes the profile of COVID-19 in children in a Moroccan hospital and confirms that the severity of illness in children with COVID-19 is far lower than in adults.
3,216
294
[ 35, 41 ]
8
[ "children", "19", "covid", "covid 19", "cases", "patients", "disease", "study", "confirmed", "series" ]
[ "coronavirus disease 2019", "manifestations children covid", "children coronavirus date", "outbreak new coronavirus", "covid 19 children" ]
[CONTENT] COVID-19 | children | coronavirus [SUMMARY]
[CONTENT] COVID-19 | children | coronavirus [SUMMARY]
[CONTENT] COVID-19 | children | coronavirus [SUMMARY]
[CONTENT] COVID-19 | children | coronavirus [SUMMARY]
[CONTENT] COVID-19 | children | coronavirus [SUMMARY]
[CONTENT] COVID-19 | children | coronavirus [SUMMARY]
[CONTENT] Adolescent | Azithromycin | COVID-19 | COVID-19 Nucleic Acid Testing | Child | Child, Preschool | Female | Humans | Hydroxychloroquine | Male | Morocco | Retrospective Studies | Severity of Illness Index | Treatment Outcome | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] Adolescent | Azithromycin | COVID-19 | COVID-19 Nucleic Acid Testing | Child | Child, Preschool | Female | Humans | Hydroxychloroquine | Male | Morocco | Retrospective Studies | Severity of Illness Index | Treatment Outcome | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] Adolescent | Azithromycin | COVID-19 | COVID-19 Nucleic Acid Testing | Child | Child, Preschool | Female | Humans | Hydroxychloroquine | Male | Morocco | Retrospective Studies | Severity of Illness Index | Treatment Outcome | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] Adolescent | Azithromycin | COVID-19 | COVID-19 Nucleic Acid Testing | Child | Child, Preschool | Female | Humans | Hydroxychloroquine | Male | Morocco | Retrospective Studies | Severity of Illness Index | Treatment Outcome | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] Adolescent | Azithromycin | COVID-19 | COVID-19 Nucleic Acid Testing | Child | Child, Preschool | Female | Humans | Hydroxychloroquine | Male | Morocco | Retrospective Studies | Severity of Illness Index | Treatment Outcome | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] Adolescent | Azithromycin | COVID-19 | COVID-19 Nucleic Acid Testing | Child | Child, Preschool | Female | Humans | Hydroxychloroquine | Male | Morocco | Retrospective Studies | Severity of Illness Index | Treatment Outcome | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] coronavirus disease 2019 | manifestations children covid | children coronavirus date | outbreak new coronavirus | covid 19 children [SUMMARY]
[CONTENT] coronavirus disease 2019 | manifestations children covid | children coronavirus date | outbreak new coronavirus | covid 19 children [SUMMARY]
[CONTENT] coronavirus disease 2019 | manifestations children covid | children coronavirus date | outbreak new coronavirus | covid 19 children [SUMMARY]
[CONTENT] coronavirus disease 2019 | manifestations children covid | children coronavirus date | outbreak new coronavirus | covid 19 children [SUMMARY]
[CONTENT] coronavirus disease 2019 | manifestations children covid | children coronavirus date | outbreak new coronavirus | covid 19 children [SUMMARY]
[CONTENT] coronavirus disease 2019 | manifestations children covid | children coronavirus date | outbreak new coronavirus | covid 19 children [SUMMARY]
[CONTENT] children | 19 | covid | covid 19 | cases | patients | disease | study | confirmed | series [SUMMARY]
[CONTENT] children | 19 | covid | covid 19 | cases | patients | disease | study | confirmed | series [SUMMARY]
[CONTENT] children | 19 | covid | covid 19 | cases | patients | disease | study | confirmed | series [SUMMARY]
[CONTENT] children | 19 | covid | covid 19 | cases | patients | disease | study | confirmed | series [SUMMARY]
[CONTENT] children | 19 | covid | covid 19 | cases | patients | disease | study | confirmed | series [SUMMARY]
[CONTENT] children | 19 | covid | covid 19 | cases | patients | disease | study | confirmed | series [SUMMARY]
[CONTENT] cases | children | disease | profile | 19 | confirmed | covid | covid 19 | hospital | country [SUMMARY]
[CONTENT] 19 | criteria | rt | test | children | children hospitalised | rt pcr test nasopharyngeal | swab | swab specimens | abnormal [SUMMARY]
[CONTENT] patients | cases | days | children | 10 | symptomatic | glass opacities | ground glass opacities | chest ct scan | ground glass [SUMMARY]
[CONTENT] covid 19 | 19 | covid | knowledge | children | day | knowledge study | clinical | study | series [SUMMARY]
[CONTENT] 19 | children | covid | covid 19 | cases | day | interests | author declares competing interests | author declares competing | author declares [SUMMARY]
[CONTENT] 19 | children | covid | covid 19 | cases | day | interests | author declares competing interests | author declares competing | author declares [SUMMARY]
[CONTENT] COVID-19 ||| first | Wuhan ||| ||| ||| COVID-19 ||| Moroccan | first | Africa [SUMMARY]
[CONTENT] COVID-19 | March 25 to April 26, 2020 ||| COVID-19 [SUMMARY]
[CONTENT] 2019 | 2019 | Morocco | 145 | COVID-19 | the Cheikh Khalifa's Hospital ||| 15 ||| 13 years ||| 7 | 8 ||| Five | 8 | 2 ||| ||| 2 ||| ||| [SUMMARY]
[CONTENT] COVID-19 | Moroccan | COVID-19 [SUMMARY]
[CONTENT] COVID-19 ||| first | Wuhan ||| ||| ||| COVID-19 ||| Moroccan | first | Africa ||| COVID-19 | March 25 to April 26, 2020 ||| COVID-19 ||| 2019 | 2019 | Morocco | 145 | COVID-19 | the Cheikh Khalifa's Hospital ||| 15 ||| 13 years ||| 7 | 8 ||| Five | 8 | 2 ||| ||| 2 ||| ||| ||| COVID-19 | Moroccan | COVID-19 [SUMMARY]
[CONTENT] COVID-19 ||| first | Wuhan ||| ||| ||| COVID-19 ||| Moroccan | first | Africa ||| COVID-19 | March 25 to April 26, 2020 ||| COVID-19 ||| 2019 | 2019 | Morocco | 145 | COVID-19 | the Cheikh Khalifa's Hospital ||| 15 ||| 13 years ||| 7 | 8 ||| Five | 8 | 2 ||| ||| 2 ||| ||| ||| COVID-19 | Moroccan | COVID-19 [SUMMARY]
Prototype of biliary drug-eluting stent with photodynamic and chemotherapy using electrospinning.
25138739
The combination of a biliary stent with photodynamic therapy and chemotherapy seems to be a beneficial palliative treatment for unresectable cholangiocarcinoma. However, with intravenous delivery, drug distribution to the target tumor is limited and causes serious side effects on non-target organs. Therefore, in this study, we aimed to develop a localized drug-eluting stent, named the PDT-chemo stent, covered with gemcitabine (GEM) and hematoporphyrin (HP).
BACKGROUND
The prototype of the PDT-chemo stent was made through dual electrospinning and electrospraying processes, in which an electrical charge is used to cover the stent with a drug-storing membrane formed from a polymer solution. The design used PU as the material of the backing layer, and PCL/PEG blends at molar ratios of 9:1 and 1:4 were used as the two drug-storing layers, loaded with GEM and HP respectively.
METHODS
Optical microscopy revealed that the backing layer was formed of fine fibers by electrospinning, while the drug-storing layers consisted of droplets produced by the electrospraying process. The covered membrane, the morphology of which was observed by scanning electron microscopy (SEM), covered the stent surface homogeneously without the appearance of cracks. GEM showed almost 100% electrospraying efficiency, compared with 70% for HP loaded on the covered membrane, owing to the different solubilities of the drugs in the PEG/PCL blends. The drug release study confirmed a two-phase drug release pattern achieved by regulating the molar ratio of the PEG/PCL blend polymers.
RESULTS
The results show that the PDT-chemo stent exhibits a first burst-release phase from HP and a later slow-release phase from GEM elution. This two-phase drug-eluting stent may provide a new prospect for localized, controlled-release treatment of cholangiocarcinoma.
CONCLUSIONS
[ "Deoxycytidine", "Drug-Eluting Stents", "Equipment Design", "Microscopy, Electron, Scanning", "Photochemotherapy", "Polymers", "Gemcitabine" ]
4155126
Background
Cholangiocarcinoma is the second most common hepatobiliary tumor; it is generally a locally invasive tumor that occludes the biliary tree and leads to cholangitis and liver failure. Until now, tumor resection has been the only potential cure for cholangiocarcinoma [1, 2]. Unfortunately, even with resection, the five-year survival rate is at most 11%, and more than 50% of patients present at an unresectable stage [3, 4]. Inoperable patients with advanced cholangiocarcinoma typically have obstructive cholestasis. So far, the primary standard method of treatment has been biliary stenting [5]. However, this treatment prolongs survival only slightly, by providing temporary biliary drainage. Therefore, a secondary treatment is required to prolong survival by reducing tumor burden. Chemotherapy and radiotherapy are classical treatments, but their results are also disappointing [3, 5]. Photodynamic therapy (PDT) is a new and promising treatment option, which requires a photosensitizer, a light source, and oxygen [6]. The concept of PDT is based on a photosensitizer exposed to a specific wavelength of light, which generates cytotoxic reactive oxygen species (ROS) to kill tumor cells [7]. Additionally, previous studies have shown that PDT can also inhibit P-glycoprotein efflux of drugs. A combination of PDT and chemotherapy can improve the accumulation of the chemo-drug in tumor cells and reduce chemo-drug resistance arising from P-glycoprotein efflux [8]. Another advantage of combination therapy with PDT and a chemo-drug is the capability to induce antitumor immunity [9]. However, with intravenous delivery to the target tumor, the distribution of the drug is limited and causes serious side effects on non-target organs. After receiving PDT treatment, patients have to stay indoors, away from bright light, for 3 to 4 days to avoid skin photosensitivity as a side effect [10]. In order to decrease side effects during treatment, the aim of this study is to develop a localized drug-eluting stent, named the PDT-chemo stent, by incorporating gemcitabine (GEM) with hematoporphyrin (HP) to cover the stent surface. A drug-eluting stent is considered a way to maximize the drug concentration in the localized tumor environment while minimizing exposure of non-target organs [11, 12]. In clinical practice, this PDT-chemo stent could be inserted at the tumor site via endoscopic retrograde cholangiography [13], followed by a simultaneous specific light source from the endoscope to activate the photosensitizer for PDT. Meanwhile, the chemo-drug GEM would be released continuously as the second step, for chemotherapy. The multimodal function of the PDT-chemo stent aims not only to increase the accumulation of drug within the neoplastic tissue, but also to decrease side effects on non-target tissues. In the past decade, stents with a drug-incorporated covered membrane have received increasing attention as drug-eluting stents because they provide mechanical support and release sufficient drug to prevent restenosis or treat malignancy [12, 14, 15]. Historically, several techniques have been developed and used to manufacture the covered membrane on the stent surface. These diverse techniques include dip coating [12, 16], the compression technique [14] and electrospinning [17, 18]. Concerned about the adherence and flexibility problems of the membrane covering the stent [19], we used the electrospinning technique. 
Electrospinning is a process that uses voltage to extrude a polymer solution into fine fibers for the production of micro/nano-fiber-covered stents [20]. The electrospinning technique allows the covered membrane to adhere uniformly to the stent and the thickness to be easily regulated according to clinical needs [18, 19]. Therefore, we selected electrospinning to construct the backing layer on the 316 L stainless steel stent and used the electrospraying process to deposit GEM and HP on the covered membrane. To provide flexibility of the membrane, polyurethane (PU), with sufficient elasticity, was used as the material of the drug-free backing layer, which effectively directs the majority of the inner drugs towards the tissue-contacting side and provides additional supporting force for the main drug layers [18]. Polycaprolactone (PCL) and polyethylene glycol (PEG) blends in different molar ratios were selected as the two drug-storing layers to control the drug-releasing rate. At a ratio of 1:4, PCL/PEG blends were used as the outer releasing layer loaded with HP, while at a ratio of 9:1, PCL/PEG blends were used as the inner releasing layer loaded with GEM (Figure 1). To our knowledge, this is the first study to show that a biliary stent can combine PDT and chemotherapy for localized cholangiocarcinoma treatment. Figure 1: Schematic illustration of the PDT-chemo stent as a biliary stent for cholangiocarcinoma treatment. The covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, and PCL/PEG blends were used in the drug-storing layers with GEM and HP loaded.
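For readers who prefer a compact summary of the layer design described above, here is a minimal, illustrative sketch of the tri-layer stack. The field names and the `Layer` type are our own; the materials, molar ratios and drug assignments follow the text and Figure 1.

```python
# Illustrative summary of the tri-layered covered membrane (not the authors' code).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Layer:
    role: str                                        # "backing" or "drug-storing"
    material: str                                    # polymer(s) used
    pcl_peg_molar_ratio: Optional[Tuple[int, int]]   # None for the PU backing layer
    drug: Optional[str]                              # loaded drug, if any

PDT_CHEMO_STENT_MEMBRANE = [
    Layer("backing",      "PU",      None,   None),   # innermost, drug-free, elastic support
    Layer("drug-storing", "PCL/PEG", (9, 1), "GEM"),  # inner layer, slower release
    Layer("drug-storing", "PCL/PEG", (1, 4), "HP"),   # outer layer, burst release
]

for layer in PDT_CHEMO_STENT_MEMBRANE:
    print(layer)
```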
Methods
Materials: Polyurethane (PU), Polycaprolactone (PCL, Mw = 80,000), Polyethylene glycol (PEG, Mw = 20,000), Tetrahydrofuran (THF), 1,1,1,3,3,3-Hexafluoro-2-propanol (HFIP) and Hematoporphyrin (HP) were purchased from Sigma-Aldrich (St Louis, MO). Gemcitabine (GEM) of clinical grade was supplied by National Taiwan University Hospital. All other chemicals were of analytical grade and used as received. Preparation of PDT-chemo stent: The PDT-chemo stent, consisting of a tri-layered membrane, was made by dual electrospinning and electrospraying processes. The metallic stent was provided by the laboratory of Dr. Fuh-Yu Chang at National Taiwan University of Science and Technology, and a femtosecond laser was used to carve the 316 L stainless steel tube. The electrospinning unit contained a high-voltage power supply, a motor to rotate the stent, a syringe pump, and a 19G needle connected by a tube to a syringe. The metallic stent, rotated by the motor, was placed horizontally 15 cm away from the needle. A solution of PU in HFIP (10 m/v%) was used for the electrospinning process, while PCL/PEG blends were mixed in HFIP/THF (1:1) solution for the electrospraying process. Both the PU and the PCL/PEG blend solutions were extruded from the syringe at a rate of 5 μL/min. The PU backing layer was first electrospun at 14 kV, followed by the PCL/PEG drug-storing layers electrosprayed at a higher voltage of 22 kV. The drug-storing layer of PCL/PEG blends at a molar ratio of 9:1, loaded with GEM, covered the backing layer, followed by the HP coating on top at a PCL/PEG molar ratio of 1:4 (Figure 1). The extruded polymer from the electrospinning/electrospraying syringe was collected for a short period on a cover glass for optical microscopy (OM, Leica, Germany). Covered-stent samples were prepared by coating with a thin gold film by sputtering PVD and visualized by scanning electron microscopy (SEM, JSM-7000F, Japan) operated at 15 kV. Drug electrosprayed efficiency: To confirm the electrospraying efficiency of drug loading in the covered membrane, the membrane was collected from the stent and completely dissolved in dimethyl sulfoxide (DMSO) solution. High-performance liquid chromatography (HPLC, Waters e2695, USA) and ultraviolet-visible spectroscopy (UV/vis, JASCO V-550, USA) were then used to quantify the GEM and HP extracted from the covered membrane, respectively. Drug release: The covered membrane was incubated in a sealed glass bottle with 0.5 ml phosphate-buffered saline (PBS) as the release medium. The bottle was placed in a shaking incubator at 37°C at a shaking speed of 50 rpm. At each predetermined time, a 0.5 ml sample was withdrawn and replaced with the same volume of fresh medium. The residual drug concentration in the membrane was determined by dissolving the membrane in DMSO solution as the eluting medium. Samples were collected and analyzed by UV-vis spectrometry and HPLC. The morphology of the membrane after 72 h of release was assessed by SEM imaging. Values are presented as mean ± standard error (STD) in triplicate. Statistical analysis was performed using Student's t-test; p < 0.05 was considered statistically significant.
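Because the protocol withdraws 0.5 ml of medium at each time point and replaces it with fresh PBS, cumulative release must account for drug already removed in earlier samples. The sketch below shows that standard correction under the stated volumes; the concentration values are placeholders, not measured data, and this is not the authors' analysis code.

```python
# Cumulative drug release with correction for medium replacement.
# At each time point a sample of volume v_sample_ml is withdrawn and replaced
# with fresh medium, so drug removed with earlier samples is added back.

def cumulative_release(concs_ug_per_ml, v_medium_ml=0.5, v_sample_ml=0.5):
    """Return cumulative released drug mass (ug) at each sampling time."""
    released = []
    removed_so_far = 0.0
    for c in concs_ug_per_ml:
        mass_in_vessel = c * v_medium_ml           # drug currently in the bottle
        released.append(mass_in_vessel + removed_so_far)
        removed_so_far += c * v_sample_ml          # drug leaving with this sample
    return released

if __name__ == "__main__":
    hypothetical_concs = [40.0, 8.0, 4.0, 1.0]     # ug/ml at t1..t4 (illustrative)
    print(cumulative_release(hypothetical_concs))  # [20.0, 24.0, 26.0, 26.5]
```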
null
null
Conclusions
In this preliminary study, we have successfully developed a prototype of a tri-layered covered stent combining PDT and chemotherapy. This PDT-chemo stent was prepared by dual electrospinning and electrospraying processes. The membrane is composed of a PU backing layer as the base and PCL/PEG drug-storing layers loaded with GEM and HP on top. Mixing the drugs with different PCL/PEG compositions proved an effective strategy for regulating drug release from the membrane. The release study confirmed a two-phase drug release pattern, which provides proof of concept for the hypothesis that the PDT-chemo stent combines a first burst-release phase from HP with a later slow-release phase from GEM elution. This two-phase drug-eluting stent may provide a new prospect for localized, controlled-release treatment of cholangiocarcinoma.
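As a purely conceptual illustration of the two-phase pattern stated in this conclusion (not the authors' model or data), the overall cumulative release can be sketched as the sum of a fast burst component (HP-like) and a slower sustained component (GEM-like); the fractions and rate constants below are illustrative only.

```python
# Conceptual two-phase release curve: fast burst fraction plus slow sustained
# fraction. Parameters are illustrative assumptions, not fitted values.
import math

def two_phase_release(t_hours, burst_fraction=0.8, k_burst=2.0, k_slow=0.08):
    """Cumulative fraction released at time t (0..1)."""
    fast = burst_fraction * (1 - math.exp(-k_burst * t_hours))
    slow = (1 - burst_fraction) * (1 - math.exp(-k_slow * t_hours))
    return fast + slow

if __name__ == "__main__":
    for t in (1, 24, 72):
        print(f"t = {t:>2} h -> cumulative fraction released: {two_phase_release(t):.2f}")
```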
[ "Background", "Materials", "Preparation of PDT-chemo stent", "Drug electrosprayed efficiency", "Drug release", "Results and discussion", "Prototype of PDT-chemo stent", "Effect of drug release" ]
[ "Cholangiocarcinoma is the second most common hepatobiliary tumor, which is generally a locally invasive tumor that occludes the biliary tree and leads to cholangitis and liver failure. Until now, tumor resection has been the only potential cure for cholangiocarcinoma\n[1, 2]. Unfortunately, even with resection, the survival rate with five years can decrease to 11% at most and more than 50% of patients still remained at unresectable stage\n[3, 4]. Inoperable patients with advanced cholangiocarcinoma typically have obstructive cholestasis. So far, the primary standard method of treatment has been biliary stenting\n[5]. However, this treatment can prolong survival time slightly by providing temporary biliary drainage. Therefore, the secondary method of treatment is required to prolong the survival time by reducing tumor burden. Chemotherapy and radiotherapy are classical treatments but their results are also disappointing\n[3, 5].\nPhotodynamic therapy (PDT) is a new and promising treatment option, which contains a photosensitizer, light source, and oxygen\n[6]. The concept of PDT is based on a photosensitizer exposed to the specific wavelength of light, which can generate cytotoxic reactive oxygen species (ROS) to kill tumor cells\n[7]. Additionally, previous studies have shown that PDT could also inhibit the P-glycoprotein efflux of drug. A combination of PDT and chemotherapy can improve the accumulation of chemo-drug in tumor cells, and reduce the chemo-drug resistance from the P-glycoprotein efflux\n[8]. Another advantage of combination therapy with PDT and chemo-drug is the capability to induce antitumor immunity\n[9]. However, by intravenous delivery to target tumor, the distribution of the drug had its limitations and caused serious side effect on non-target organs. After receiving PDT treatment, patients have to stay indoors, away from bright light for 3 to 4 days to avoid the skin photosensitivity from the side effect\n[10].\nIn order to decrease the side effect during the treatment, the aim of this study is to develop a localized drug eluting stent, named PDT-chemo stent, by incorporating gemcitabine (GEM) with hematoporphyrin (HP) to cover the stent surface. Drug-eluting stent has been considered a method to maximize the drug concentration immediately on the localized tumor environment, while minimizing the non-target organs exposure\n[11, 12]. In clinical practice, this PDT-chemo stent could be inserted to the tumor area via endoscopic retrograde cholangiography\n[13], followed by the simultaneous specific light source from endoscopy to activate the photosensitizer for PDT. Meanwhile, the chemo-drug of GEM will be released continuously as the second step for chemotherapy. The multimodal function of PDT-chemo stent will not only aim to increase the accumulation of drug within the neoplastic tissue, but also decrease the side effect on non-target tissues.\nIn the past decade, the stent with drug-incorporated covered membrane has been received increasing attention as drug-eluting stent due to its functions of providing mechanic support and releasing sufficient drug to prevent restenosis or treat malignant\n[12, 14, 15]. Historically, several techniques were developed and have been used to manufacture the covered membrane on stent surface. These diverse techniques include dip coating\n[12, 16], compression technique\n[14] and electrospinning\n[17, 18]. 
Concerned about the adherence and flexibility problems of the membrane covering the stent\n[19], we used the electrospinning technique. Electrospinning is the process with voltage to extrude polymer solution into fine fibers for the production of micro/nano-fiber-covered stents\n[20]. The electrospinning technique could support the covered membrane uniformly adhered to the stent and easily regulate the thickness according to the clinical needs\n[18, 19]. Therefore, we selected the electrospinning to construct the backing layer on 316 L stainless stent and used the electrospraying process to regulate GEM and HP on the covered membrane. To provide the flexibility of the membrane, polyurethane (PU), with sufficient elastic property, was used as the material of drug-free backing layer, which could effectively control majority of the inner drugs released to the tissue-contacting side and additional supporting force for the main drug layer\n[18]. Polycaprolactone (PCL) and Polyethylene glycol (PEG) blends in different molar ratio were selected as two drug-storing layers to control the drug-releasing rate. In the ratio of 1:4, PCL/PEG blends were used as the outer releasing layer with HP loaded, while in the ratio of 9:1, PCL/PEG blends were used as the inner releasing layer with GEM loaded (Figure \n1). To our knowledge, this is the first study to show that the biliary stent could be accompanied with PDT and chemotherapy for localized cholangiocarcinoma treatment.Figure 1\nSchematic illustration of the PDT-chemo stent as a biliary stent for cholangiocarcinoma treatment. The covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, and PCL/PEG blends were used in drug-storing layers with GEM and HP loaded.\n\nSchematic illustration of the PDT-chemo stent as a biliary stent for cholangiocarcinoma treatment. The covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, and PCL/PEG blends were used in drug-storing layers with GEM and HP loaded.", "Polyurethane (PU), Polycaprolactone (PCL, Mw = 80,000), Polyethylene glycol (PEG, Mw = 20,000), Tetrahydrofuran (THF), 1,1,1,3,3,3-Hexafluro-2-propanel (HFIP) and Hematoporphyrin (HP) were purchased from Sigma-Aldrich (St Louis, MO). Gemcitabine (GEM) of clinical grade was supplied by National Taiwan University Hospital. All other chemicals were of analytical grade and used as received.", "The PDT-chemo stent, consisting of tri-layers of membranes, was made by electrospinning and electrospraying dual-processes. The metallic stent was provided from the laboratory of Dr. Fuh-Yu Chang in National Taiwan University of Science and Technology, and the femtosecond laser was used to carve the 316 L stainless metallic tube. The unit of electrospinning contained a high-voltage power supply, a motor to rotate the stent, a syringe pump, and a 19G-needle that was connected by a tube to a syringe. The metallic stent rotated by a motor was horizontally placed 15 cm away from the needle. The solution of PU in HFIP (10 m/v%) was used for electrospinning process, while PCL/PEG blends were mixed in HFIP/THF (1:1) solution for electrospraying process. Both PU and PCL/PEG blends solution were extruded from the syringe at a rate of 5 μL/min. 
The backing layer of PU was first electrospun with voltage 14 kV, and was followed by the drug-storing layer of PCL/PEG blends elecrosprayed with higher voltage 22 kV. Herein, the drug-storing layer of PCL/PEG blends in molar ratio of 9:1 was loaded with GEM covering the backing layer, followed by the HP coating on the top in PCL/PEG molar ratio of 1:4 (Figure \n1). The extruded polymer from the syringe of electrspinning/electrospraying was collected for only a short period time on cover glass for optical microscopy (OM, Leica, Germany). Samples collected covered stent were prepared by coating with thin gold film by sputtering PVD and visualized by scanning electron microscopy (SEM, JSM-7000 F, Japan) operated at 15 KV.", "To further confirm the electrsprayed efficiency of loading drug in state of covered membrane, the membrane was collected from the stent and absolutely dissolved in dimethyl sulfoxide (DMSO) solution. After that, high-performance liquid chromatography (HPLC, Waters e2695, USA) and ultraviolet–visible spectroscopy (UV/vis, JASCO V-550, USA) were used to examine the extruded drugs of GEM and HP from the covered membrane respectively.", "The covered membrane was incubated in a sealed glass bottle with 0.5 ml phosphate-buffered saline (PBS) as the releasing medium. The bottle was placed in a shaking incubator at 37°C at a shaking speed of 50 rpm. At the predetermined time, 0.5 ml sample was withdrawn and replaced with the same volume of fresh medium. Residual concentration of drug in the membrane was counted by dissolving the membrane in DMSO solution as the eluting medium. Samples were collected and analyzed under the UV–vis spectrometer and HPLC. The morphology of membrane after 72 h of release was assessed by SEM imaging. The values were presented as mean ± standard error (STD) in triplicate. Statistical analysis was performed using the analysis student’s t-test. Values of p < 0.05 was considered being statistically significant.", " Prototype of PDT-chemo stent To make the stent with covered membrane more uniformly adhered to the surface and easily regulate the thickness according to the clinical needs, electrospinning/electrospraying has been regarded as the appropriate means for demonstrating the PDT-chemo stent\n[18, 19]. The covered stent can be manufactured by several electrospinning methods: post-spinning modification, drug/polymer blends, emulsion electrospinning and core-shell electrospinning\n[21]. Drug/polymer blends technique could easily mix the drug with polymer directly and form a layer of membrane to achieve sustained drug release. Therefore, in this study, the prototype of PDT-chemo stent was constructed by drug/polymer blends technique via electrospinning and electrospraying dual-processes. The backing layer was electrospun first from PU polymer solution, followed by the electrospraying process from PCL/PEG blends solution with drug loaded. Electrospinning is the process with voltage to extrude polymer solution into fine fibers for the production of fine-fibers-covered stents. In our case, fine fibers could support the superior mechanical properties of the membrane and be introduced as the backing layer for effectively controlling majority of the inner drugs released to the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, will be completely evaporated due to its high volatility\n[22]. 
Electrospraying has similar preparation process to electrospinning but is usually used with higher voltage and lower polymer density, which makes the polymer solution more easily broken up into droplets\n[23, 24]. The concept of electrospraying process was used to increase layer-to-layer adhesion, which could avoid drug-storing layer cracking and separating from backing layer during the stent expending.\nThe covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, while PCL/PEG blends were used in drug-storing layers with GEM and HP loaded (Figure \n1). Several materials (eg. Silicone, PTFE, and PU) have been approved by US Food and Drug Administration (FDA) as the covered membrane on the stent\n[13]. Silicone was reported to cause the acute inflammation to the surrounding tissue\n[25]. The PTFE, however, could not be dissolved into any solvent\n[26]. Therefore, as the material of the backing layer in this study, PU has been found able to be formed sufficiently thin and flat on the metallic stent by electrospinning process and with elastic properties to allow the covered stent to be homogeneously expanded\n[18]. Additionally, PU membrane has been proved to prevent the tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stent in patients with malignant biliary obstruction\n[15]. The distinct biodegradable properties of PCL and PEG blending were regarded as an approach to controlling the drug-releasing rate in different blending ratio. The selection of PCL is due to its good biocompatibility, drug permeability and relatively slow degradation rate\n[27, 28]. The hydrophilic PEG was selected to play a role in resulting in regulating the drug-releasing rate, due to its easily acting on aqueous solution\n[14, 29].\nThe prototype of PDT-chemo stent was imaged by photography and optical microscopy, which proved that the membrane was homogeneously covered on the stent surface in each electrospun process (Figure \n2). The optical microscopy (Figure \n2b) revealed that PU was formed in fine fibers with width around 5 to 10 μm; thus PCL/PEG blends solution was favorable for generating submicron droplets (Figure \n2c, d). The photograph of membrane appeared brown (Figure \n2d) due to the homogeneously dispersed of HP coated on the top. Finally, the prototype of PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure \n2e). The surface morphology and cross section of the film was further investigated by SEM imaging (Figure \n3), which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. The architecture of the backing layer was constructed of fine fibers in networks structure, while drug-storing layer, attributed to the droplets from electrospraying process was coated on the backing layer roughly. The cross section of the membrane was smoothly with width in range from 170 to 190 μm. The thickness of the membrane could be optimized for the most favorable thickness according to the clinical needs via regulating time during the electrospun process.Figure 2\nThe photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. 
(a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.Figure 3\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer.\n\nThe photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.\n\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer.\nTo further confirm the electrosprayed amount of the drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO solution for further analysis. The results revealed that, with the increasing concentration of the HP in PEG/PCL solution from 0.67 mg/ml to 6.67 mg/ml, the density of the HP from the electrosprayed membrane increased gradually from 8.72 μg/cm2 to 63.59 μg/cm2 at most, which was around 0.7 fold to the HP containing solution, as seemed to demonstrate 70% of electrosprayed efficiency (Figure \n4a). By the same method, GEM had almost 100% electrosprayed efficiency (Figure \n4b) and had a tendency of increasing electrosprayed efficiency by mixing with PCL/PEG blends due to its high hydrophilicity\n[30]. The corresponding results of the concentration of the drug in the solution and the density of the covered membrane could be an useful information to further imitate a clinical dose regimen for cholangiocarcinoma treatment.Figure 4\nThe electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis.\n\nThe electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis.\nTo make the stent with covered membrane more uniformly adhered to the surface and easily regulate the thickness according to the clinical needs, electrospinning/electrospraying has been regarded as the appropriate means for demonstrating the PDT-chemo stent\n[18, 19]. 
The covered stent can be manufactured by several electrospinning methods: post-spinning modification, drug/polymer blends, emulsion electrospinning and core-shell electrospinning\n[21]. Drug/polymer blends technique could easily mix the drug with polymer directly and form a layer of membrane to achieve sustained drug release. Therefore, in this study, the prototype of PDT-chemo stent was constructed by drug/polymer blends technique via electrospinning and electrospraying dual-processes. The backing layer was electrospun first from PU polymer solution, followed by the electrospraying process from PCL/PEG blends solution with drug loaded. Electrospinning is the process with voltage to extrude polymer solution into fine fibers for the production of fine-fibers-covered stents. In our case, fine fibers could support the superior mechanical properties of the membrane and be introduced as the backing layer for effectively controlling majority of the inner drugs released to the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, will be completely evaporated due to its high volatility\n[22]. Electrospraying has similar preparation process to electrospinning but is usually used with higher voltage and lower polymer density, which makes the polymer solution more easily broken up into droplets\n[23, 24]. The concept of electrospraying process was used to increase layer-to-layer adhesion, which could avoid drug-storing layer cracking and separating from backing layer during the stent expending.\nThe covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, while PCL/PEG blends were used in drug-storing layers with GEM and HP loaded (Figure \n1). Several materials (eg. Silicone, PTFE, and PU) have been approved by US Food and Drug Administration (FDA) as the covered membrane on the stent\n[13]. Silicone was reported to cause the acute inflammation to the surrounding tissue\n[25]. The PTFE, however, could not be dissolved into any solvent\n[26]. Therefore, as the material of the backing layer in this study, PU has been found able to be formed sufficiently thin and flat on the metallic stent by electrospinning process and with elastic properties to allow the covered stent to be homogeneously expanded\n[18]. Additionally, PU membrane has been proved to prevent the tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stent in patients with malignant biliary obstruction\n[15]. The distinct biodegradable properties of PCL and PEG blending were regarded as an approach to controlling the drug-releasing rate in different blending ratio. The selection of PCL is due to its good biocompatibility, drug permeability and relatively slow degradation rate\n[27, 28]. The hydrophilic PEG was selected to play a role in resulting in regulating the drug-releasing rate, due to its easily acting on aqueous solution\n[14, 29].\nThe prototype of PDT-chemo stent was imaged by photography and optical microscopy, which proved that the membrane was homogeneously covered on the stent surface in each electrospun process (Figure \n2). The optical microscopy (Figure \n2b) revealed that PU was formed in fine fibers with width around 5 to 10 μm; thus PCL/PEG blends solution was favorable for generating submicron droplets (Figure \n2c, d). The photograph of membrane appeared brown (Figure \n2d) due to the homogeneously dispersed of HP coated on the top. 
Finally, the prototype of PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure \n2e). The surface morphology and cross section of the film was further investigated by SEM imaging (Figure \n3), which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. The architecture of the backing layer was constructed of fine fibers in networks structure, while drug-storing layer, attributed to the droplets from electrospraying process was coated on the backing layer roughly. The cross section of the membrane was smoothly with width in range from 170 to 190 μm. The thickness of the membrane could be optimized for the most favorable thickness according to the clinical needs via regulating time during the electrospun process.Figure 2\nThe photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.Figure 3\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer.\n\nThe photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.\n\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer.\nTo further confirm the electrosprayed amount of the drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO solution for further analysis. The results revealed that, with the increasing concentration of the HP in PEG/PCL solution from 0.67 mg/ml to 6.67 mg/ml, the density of the HP from the electrosprayed membrane increased gradually from 8.72 μg/cm2 to 63.59 μg/cm2 at most, which was around 0.7 fold to the HP containing solution, as seemed to demonstrate 70% of electrosprayed efficiency (Figure \n4a). By the same method, GEM had almost 100% electrosprayed efficiency (Figure \n4b) and had a tendency of increasing electrosprayed efficiency by mixing with PCL/PEG blends due to its high hydrophilicity\n[30]. 
The corresponding results of the concentration of the drug in the solution and the density of the covered membrane could be an useful information to further imitate a clinical dose regimen for cholangiocarcinoma treatment.Figure 4\nThe electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis.\n\nThe electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis.\n Effect of drug release As illustrated in Figure \n1, this PDT-chmo stent was designed to obtain a two-phased drug release pattern, which is composed of a first burst-releasing phase and a later slow-releasing phase with combination therapy for cholangiocarcinoma treatment. Hydrophilic PEG was used as a regulator of drug release from PCL/PEG blends\n[31]. Figure \n5 displays the drug releasing profiles from the covered membrane. At each releasing time, the cumulative amounts of HP from PCL/PEG = 1:4 membrane, released in the first hour (P < 0.01 relative to GEM), had a high initial burst 80% and reached maximal cumulative release of nearly 98% within 24 h. Whereas, compared to HP in the first hour, GEM within PCL/PEG = 9:1 membrane showed relatively slow drug release in the first hour (50%). The significant difference of releasing kinetics between HP and GEM was observed within 24 h, as indicated drug-releasing rate can be regulated within 24 h by adjusting PEG and PCL compositions, then both drugs complete releasing occurred during the time span between 24 h and 72 h. The mechanism of drug release was reported by Liu et al.\n[32] and Lei et al.\n[14] that PEG acted as a pore former in PCL/PEG blends, where the releasing rate from co-localization of protein/drug and blends were proportional to PEG content. The hydrophilic PEG in PCL/PEG blends easily acted on aqueous solution, which resulted in the formation of swollen structure and subsequently increased the drug-releasing rate, as indicated the kinetics of drug releasing was mainly due to the degradation of the PCL/PEG blends\n[14]. Although the slow releasing rate of GEM may not exclude the possibility of outer drug-storing layer (PCL/PEG = 1:4) to delay parts of GEM release, we consider that PCL/PEG = 1:4 membrane with high concentration of hydrophilic PEG polymer will interact fast with aqueous solution, leading the membrane to degrade quickly within the first few minutes\n[14]. Therefore, we suggest PCL and PEG with different molar ratio for controlling the polymer degradation rate be the major factor to regulate drugs releasing rate. In order to better meet the needs of clinical application, the thickness of covered membrane and membrane composition could be further easily improved according to our requirements by electrospinning/electrospraying technique\n[18, 19].\nAfter drug eluting in PBS solution, the surface morphology of the covered membrane was investigated by SEM imaging (Figure \n6). Generally, fine fiber architecture of the backing layer was covered with roughness of the drug-storing layer (Figure \n6a). 
After 72 h of drug elution, the rough surface of the drug-storing layer became aligned (Figure 6b). This orderly structure indicates degradation of the PCL/PEG blends from the surface, leaving only the PU backing layer and part of the PCL/PEG blends on the membrane. The thickness of the cross section was reduced from around 180 μm (Figure 6c) to 120 μm (Figure 6d). The SEM imaging further confirmed that the drug release kinetics were governed mainly by degradation of the PCL/PEG membrane rather than by diffusion or permeation of the drugs through the membrane.\nFigure 5\nThe profile of drug release. Release of HP and GEM from the covered membrane at each predetermined time in PBS solution (mean ± STD, n = 3). Values of p < 0.01 are considered statistically significant.\nFigure 6\nSEM imaging of the eluted covered membrane. After 72 h of drug elution in PBS solution, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging.\nHerein, GEM, one of the first-line chemo-drugs for advanced cholangiocarcinoma, is a prodrug and an analog of deoxycytidine. Once transported into the cell, GEM is phosphorylated to its active form, which inhibits DNA synthesis[33]. Therefore, the low initial burst of GEM may help to prevent the undesired toxicity associated with high GEM concentrations, while the burst release of HP can provide simultaneous PDT, triggered by the endoscopic light source once the stent is localized in the bile duct. Overall, the prototype PDT-chemo stent has demonstrated the proof of concept of localized combination therapy for cholangiocarcinoma. On this basis, the drug-release rate could be further regulated by changing the initial electrospraying blend polymer solution, its concentration, the structure and type of fibers, and the amount of additives to suit clinical needs[34, 35].", "To make the covered membrane adhere more uniformly to the stent surface and to allow its thickness to be regulated easily according to clinical needs, electrospinning/electrospraying was regarded as the appropriate means of fabricating the PDT-chemo stent[18, 19]. The covered stent can be manufactured by several electrospinning methods: post-spinning modification, drug/polymer blending, emulsion electrospinning and core-shell electrospinning[21]. The drug/polymer blending technique can simply mix the drug directly with the polymer and form a membrane layer that achieves sustained drug release. Therefore, in this study, the prototype PDT-chemo stent was constructed by the drug/polymer blending technique via dual electrospinning and electrospraying processes. The backing layer was first electrospun from PU polymer solution, followed by electrospraying of the drug-loaded PCL/PEG blend solution. Electrospinning uses voltage to extrude a polymer solution into fine fibers for the production of fine-fiber-covered stents. In our case, the fine fibers provide the superior mechanical properties of the membrane and serve as the backing layer, effectively ensuring that the majority of the inner drugs is released toward the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, is completely evaporated owing to its high volatility[22]. Electrospraying has a preparation process similar to electrospinning but usually uses a higher voltage and a lower polymer density, so the polymer solution is more easily broken up into droplets[23, 24]. The electrospraying process was used to increase layer-to-layer adhesion, which could prevent the drug-storing layer from cracking and separating from the backing layer during stent expansion.\nThe covered membrane is composed of three layers: PU, PCL/PEG = 9:1 blend and PCL/PEG = 1:4 blend. PU was employed as the material of the backing layer, while the PCL/PEG blends were used in the drug-storing layers loaded with GEM and HP (Figure 1). Several materials (e.g., silicone, PTFE and PU) have been approved by the US Food and Drug Administration (FDA) as covering membranes on stents[13]. Silicone was reported to cause acute inflammation in the surrounding tissue[25], whereas PTFE cannot be dissolved in any solvent[26]. Therefore, PU was chosen as the backing layer material in this study: it can be formed sufficiently thin and flat on the metallic stent by electrospinning and has the elastic properties that allow the covered stent to expand homogeneously[18]. Additionally, PU membranes have been proved to prevent tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stents in patients with malignant biliary obstruction[15]. The distinct biodegradable properties of PCL and PEG blends were regarded as an approach to controlling the drug-release rate through different blending ratios. PCL was selected for its good biocompatibility, drug permeability and relatively slow degradation rate[27, 28].
The hydrophilic PEG was selected to regulate the drug-release rate because it interacts readily with aqueous solution[14, 29].\nThe prototype PDT-chemo stent was imaged by photography and optical microscopy, which showed that the membrane was homogeneously covered on the stent surface at each electrospinning step (Figure 2). Optical microscopy (Figure 2b) revealed that PU formed fine fibers with widths of around 5 to 10 μm, whereas the PCL/PEG blend solution was favorable for generating submicron droplets (Figure 2c, d). The membrane appeared brown in the photograph (Figure 2d) because of the homogeneously dispersed HP coated on top. Finally, the prototype PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure 2e). The surface morphology and cross section of the film were further investigated by SEM imaging (Figure 3), which showed that the membrane consisted of two sides, the backing layer and the drug-storing layer. The backing layer was constructed of fine fibers in a network structure, while the drug-storing layer, formed from the droplets of the electrospraying process, was coated roughly onto the backing layer. The cross section of the membrane was smooth, with a thickness ranging from 170 to 190 μm. The membrane thickness could be optimized to the most favorable value for clinical needs by regulating the duration of the electrospinning process.\nFigure 2\nThe photography and microscopic imaging of the PDT-chemo stent. The prototype PDT-chemo stent at each step of the electrospinning/electrospraying process is presented under photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with the PU backing layer; (c) covered with the inner drug-storing layer of PCL/PEG = 9:1 loaded with GEM; (d) covered with the outer drug-storing layer of PCL/PEG = 1:4 loaded with HP; (e) the prototype PDT-chemo stent collected from the cylindrical collector. Optical microscopy shows the polymer extruded from the syringe during the electrospinning/electrospraying processes.\nFigure 3\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from the PDT-chemo stent were observed by SEM imaging, which showed that the membrane consisted of two sides, the backing layer and the drug-storing layer.\nTo further confirm the electrosprayed amount of drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO solution for further analysis. The results revealed that, as the concentration of HP in the PCL/PEG solution increased from 0.67 mg/ml to 6.67 mg/ml, the density of HP on the electrosprayed membrane increased gradually from 8.72 μg/cm2 to at most 63.59 μg/cm2, around 0.7-fold of that expected from the HP-containing solution, corresponding to roughly 70% electrospraying efficiency (Figure 4a). By the same method, GEM showed almost 100% electrospraying efficiency (Figure 4b), with a tendency toward higher efficiency when mixed with the PCL/PEG blends owing to its high hydrophilicity[30]. The correspondence between the drug concentration in the solution and the resulting drug density on the covered membrane provides useful information for further tailoring a clinical dose regimen for cholangiocarcinoma treatment.\nFigure 4\nThe electrosprayed amount of drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is plotted on the vertical axis, while the concentration of HP and GEM in the PCL/PEG blend solution, respectively, is plotted on the horizontal axis.", "As illustrated in Figure 1, this PDT-chemo stent was designed to give a two-phase drug release pattern, composed of an initial burst-release phase and a later slow-release phase, for combination therapy of cholangiocarcinoma. Hydrophilic PEG was used as a regulator of drug release from the PCL/PEG blends[31]. Figure 5 displays the drug release profiles from the covered membrane. The cumulative amount of HP released from the PCL/PEG = 1:4 membrane showed a high initial burst of 80% in the first hour (P < 0.01 relative to GEM) and reached a maximal cumulative release of nearly 98% within 24 h. In contrast, GEM within the PCL/PEG = 9:1 membrane showed relatively slow release in the first hour (50%). The significant difference in release kinetics between HP and GEM within 24 h indicates that the drug-release rate can be regulated over this period by adjusting the PEG and PCL compositions; both drugs were then released completely between 24 h and 72 h. Liu et al.[32] and Lei et al.[14] reported that PEG acts as a pore former in PCL/PEG blends, with the release rate of the co-localized protein/drug proportional to the PEG content. The hydrophilic PEG in the PCL/PEG blends interacts readily with aqueous solution, forming a swollen structure that increases the drug-release rate, indicating that the release kinetics are governed mainly by degradation of the PCL/PEG blends[14]. Although the slow release of GEM does not exclude the possibility that the outer drug-storing layer (PCL/PEG = 1:4) delayed part of the GEM release, we consider that the PCL/PEG = 1:4 membrane, with its high content of hydrophilic PEG, interacts rapidly with aqueous solution and degrades within the first few minutes[14]. We therefore suggest that the PCL/PEG molar ratio, which controls the polymer degradation rate, is the major factor regulating the drug-release rate. To better meet the needs of clinical application, the thickness and composition of the covered membrane can be readily adjusted to requirements by the electrospinning/electrospraying technique[18, 19].\nAfter drug elution in PBS solution, the surface morphology of the covered membrane was investigated by SEM imaging (Figure 6). In general, the fine fiber architecture of the backing layer was covered by the rough drug-storing layer (Figure 6a). After 72 h of drug elution, the rough surface of the drug-storing layer became aligned (Figure 6b). This orderly structure indicates degradation of the PCL/PEG blends from the surface, leaving only the PU backing layer and part of the PCL/PEG blends on the membrane. The thickness of the cross section was reduced from around 180 μm (Figure 6c) to 120 μm (Figure 6d). The SEM imaging further confirmed that the drug release kinetics were governed mainly by degradation of the PCL/PEG membrane rather than by diffusion or permeation of the drugs through the membrane.\nFigure 5\nThe profile of drug release. Release of HP and GEM from the covered membrane at each predetermined time in PBS solution (mean ± STD, n = 3). Values of p < 0.01 are considered statistically significant.\nFigure 6\nSEM imaging of the eluted covered membrane. After 72 h of drug elution in PBS solution, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging.\nHerein, GEM, one of the first-line chemo-drugs for advanced cholangiocarcinoma, is a prodrug and an analog of deoxycytidine. Once transported into the cell, GEM is phosphorylated to its active form, which inhibits DNA synthesis[33]. Therefore, the low initial burst of GEM may help to prevent the undesired toxicity associated with high GEM concentrations, while the burst release of HP can provide simultaneous PDT, triggered by the endoscopic light source once the stent is localized in the bile duct. Overall, the prototype PDT-chemo stent has demonstrated the proof of concept of localized combination therapy for cholangiocarcinoma. On this basis, the drug-release rate could be further regulated by changing the initial electrospraying blend polymer solution, its concentration, the structure and type of fibers, and the amount of additives to suit clinical needs[34, 35]." ]
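The two-phase release profile described in the "Effect of drug release" text above (HP: roughly 80% in the first hour and ~98% by 24 h from the PCL/PEG = 1:4 layer; GEM: roughly 50% in the first hour from the PCL/PEG = 9:1 layer, with both drugs fully released between 24 h and 72 h) can be summarized with a simple first-order approximation. The sketch below is illustrative only and is not taken from the study: the first-order model and the helper functions are assumptions introduced here to show how the reported 1-h fractions translate into apparent rate constants, and the model's overshoot at later times is consistent with the paper's point that release is governed by polymer degradation rather than simple diffusion.

```python
# Illustrative first-order summary of the reported cumulative release (an
# assumption for this sketch; the study itself attributes release to polymer
# degradation and does not fit a kinetic model).
import math

def rate_constant_from_fraction(fraction_at_t: float, t_hours: float) -> float:
    """Apparent first-order rate constant (1/h) from one cumulative-release point."""
    return -math.log(1.0 - fraction_at_t) / t_hours

def cumulative_fraction(k: float, t_hours: float) -> float:
    """Cumulative release fraction at time t under f(t) = 1 - exp(-k*t)."""
    return 1.0 - math.exp(-k * t_hours)

# Reported 1-h cumulative fractions: HP ~80% (PCL/PEG = 1:4), GEM ~50% (PCL/PEG = 9:1).
k_hp = rate_constant_from_fraction(0.80, 1.0)   # ~1.61 per hour
k_gem = rate_constant_from_fraction(0.50, 1.0)  # ~0.69 per hour

for label, k in (("HP", k_hp), ("GEM", k_gem)):
    profile = {t: round(cumulative_fraction(k, t), 2) for t in (1, 6, 24, 72)}
    print(label, profile)

# Note: this simple model predicts near-complete release well before 24 h for both
# drugs, whereas the measured GEM release finishes between 24 h and 72 h -- one way
# to see that degradation-controlled release is slower than a single exponential.
```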
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Materials", "Preparation of PDT-chemo stent", "Drug electrosprayed efficiency", "Drug release", "Results and discussion", "Prototype of PDT-chemo stent", "Effect of drug release", "Conclusions" ]
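As a companion to the loading data reported in the "Prototype of PDT-chemo stent" text (HP at 0.67-6.67 mg/ml in the blend solution giving 8.72-63.59 μg/cm2 on the membrane, about 70% electrospraying efficiency), a minimal dose-planning sketch is given below. It is not part of the original work: the linear interpolation, the required_concentration helper and the 40 μg/cm2 target are all assumptions used purely for illustration.

```python
# Minimal dose-planning sketch (illustrative only, not from the original study).
# It interpolates the reported HP loading endpoints (solution concentration vs.
# areal density on the membrane) to estimate the solution concentration needed
# for a target areal dose; the target value below is hypothetical.
import numpy as np

conc_mg_per_ml = np.array([0.67, 6.67])       # HP in the PCL/PEG blend solution
density_ug_per_cm2 = np.array([8.72, 63.59])  # HP measured on the covered membrane

# Fit the (approximately linear) density-vs-concentration relation.
slope, intercept = np.polyfit(conc_mg_per_ml, density_ug_per_cm2, 1)

def required_concentration(target_ug_per_cm2: float) -> float:
    """Solution concentration (mg/ml) estimated to give the target areal density."""
    return (target_ug_per_cm2 - intercept) / slope

target = 40.0  # ug/cm^2, hypothetical target used only for illustration
print(f"~{required_concentration(target):.1f} mg/ml HP for {target} ug/cm^2")
```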
[ "Cholangiocarcinoma is the second most common hepatobiliary tumor. It is generally a locally invasive tumor that occludes the biliary tree and leads to cholangitis and liver failure. Until now, tumor resection has been the only potential cure for cholangiocarcinoma[1, 2]. Unfortunately, even with resection, the five-year survival rate is at most 11%, and more than 50% of patients remain at an unresectable stage[3, 4]. Inoperable patients with advanced cholangiocarcinoma typically have obstructive cholestasis. So far, the primary standard treatment has been biliary stenting[5]. However, this treatment prolongs survival only slightly, by providing temporary biliary drainage. Therefore, a secondary treatment is required to prolong survival by reducing the tumor burden. Chemotherapy and radiotherapy are classical treatments, but their results are also disappointing[3, 5].\nPhotodynamic therapy (PDT) is a new and promising treatment option that requires a photosensitizer, a light source and oxygen[6]. The concept of PDT is that a photosensitizer exposed to light of a specific wavelength generates cytotoxic reactive oxygen species (ROS) that kill tumor cells[7]. Additionally, previous studies have shown that PDT can also inhibit P-glycoprotein-mediated drug efflux. A combination of PDT and chemotherapy can improve the accumulation of chemo-drugs in tumor cells and reduce the chemo-drug resistance caused by P-glycoprotein efflux[8]. Another advantage of combination therapy with PDT and a chemo-drug is its capability to induce antitumor immunity[9]. However, with intravenous delivery to the target tumor, the distribution of the drug is limited and causes serious side effects in non-target organs. After receiving PDT treatment, patients have to stay indoors, away from bright light, for 3 to 4 days to avoid skin photosensitivity as a side effect[10].\nTo decrease the side effects during treatment, the aim of this study was to develop a localized drug-eluting stent, named the PDT-chemo stent, by incorporating gemcitabine (GEM) and hematoporphyrin (HP) into the membrane covering the stent surface. A drug-eluting stent has been considered a way to maximize the drug concentration immediately in the localized tumor environment while minimizing the exposure of non-target organs[11, 12]. In clinical practice, this PDT-chemo stent could be inserted into the tumor area via endoscopic retrograde cholangiography[13], followed by application of a specific light source from the endoscope to activate the photosensitizer for PDT. Meanwhile, the chemo-drug GEM would be released continuously as the second step, for chemotherapy. The multimodal function of the PDT-chemo stent aims not only to increase the accumulation of drug within the neoplastic tissue, but also to decrease side effects on non-target tissues.\nIn the past decade, stents with drug-incorporated covering membranes have received increasing attention as drug-eluting stents because they provide mechanical support and release sufficient drug to prevent restenosis or treat malignancy[12, 14, 15]. Historically, several techniques have been developed and used to manufacture the covering membrane on the stent surface. These techniques include dip coating[12, 16], the compression technique[14] and electrospinning[17, 18].
Concerned about the adherence and flexibility problems of membranes covering stents[19], we used the electrospinning technique. Electrospinning is a process that uses voltage to extrude a polymer solution into fine fibers for the production of micro/nano-fiber-covered stents[20]. The electrospinning technique allows the covered membrane to adhere uniformly to the stent, and the membrane thickness can easily be regulated according to clinical needs[18, 19]. Therefore, we selected electrospinning to construct the backing layer on a 316 L stainless stent and used the electrospraying process to deposit GEM and HP in the covered membrane. To provide flexibility, polyurethane (PU), which has sufficient elasticity, was used as the material of the drug-free backing layer; it effectively directs the majority of the inner drugs toward the tissue-contacting side and provides additional support for the main drug layer[18]. Polycaprolactone (PCL) and polyethylene glycol (PEG) blends in different molar ratios were selected as the two drug-storing layers to control the drug-release rate. PCL/PEG blends in a ratio of 1:4 were used as the outer releasing layer loaded with HP, while blends in a ratio of 9:1 were used as the inner releasing layer loaded with GEM (Figure 1). To our knowledge, this is the first study to show that a biliary stent can be combined with PDT and chemotherapy for localized cholangiocarcinoma treatment.\nFigure 1\nSchematic illustration of the PDT-chemo stent as a biliary stent for cholangiocarcinoma treatment. The covered membrane is composed of three layers: PU, PCL/PEG = 9:1 blend and PCL/PEG = 1:4 blend. PU was employed as the material of the backing layer, and the PCL/PEG blends were used in the drug-storing layers loaded with GEM and HP.", " Materials Polyurethane (PU), Polycaprolactone (PCL, Mw = 80,000), Polyethylene glycol (PEG, Mw = 20,000), Tetrahydrofuran (THF), 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP) and Hematoporphyrin (HP) were purchased from Sigma-Aldrich (St Louis, MO). Gemcitabine (GEM) of clinical grade was supplied by National Taiwan University Hospital. All other chemicals were of analytical grade and used as received.\n Preparation of PDT-chemo stent The PDT-chemo stent, consisting of three layers of membranes, was made by dual electrospinning and electrospraying processes. The metallic stent was provided by the laboratory of Dr. Fuh-Yu Chang at National Taiwan University of Science and Technology, where a femtosecond laser was used to carve the 316 L stainless steel metallic tube. The electrospinning unit contained a high-voltage power supply, a motor to rotate the stent, a syringe pump, and a 19 G needle connected by a tube to a syringe. The metallic stent, rotated by the motor, was placed horizontally 15 cm away from the needle. A solution of PU in HFIP (10 m/v%) was used for the electrospinning process, while the PCL/PEG blends were mixed in HFIP/THF (1:1) solution for the electrospraying process. Both the PU and the PCL/PEG blend solutions were extruded from the syringe at a rate of 5 μL/min. The backing layer of PU was first electrospun at a voltage of 14 kV, followed by the drug-storing layer of PCL/PEG blends electrosprayed at a higher voltage of 22 kV. Herein, the drug-storing layer of PCL/PEG blends in a molar ratio of 9:1, loaded with GEM, covered the backing layer, followed by the HP coating on top in a PCL/PEG molar ratio of 1:4 (Figure 1). The polymer extruded from the syringe during electrospinning/electrospraying was collected for only a short period on a cover glass for optical microscopy (OM, Leica, Germany). Samples of the collected covered stent were prepared by coating with a thin gold film by sputter deposition (PVD) and visualized by scanning electron microscopy (SEM, JSM-7000 F, Japan) operated at 15 kV.\n Drug electrosprayed efficiency To further confirm the electrospraying efficiency of drug loading in the covered membrane, the membrane was collected from the stent and completely dissolved in dimethyl sulfoxide (DMSO) solution. After that, high-performance liquid chromatography (HPLC, Waters e2695, USA) and ultraviolet–visible spectroscopy (UV/Vis, JASCO V-550, USA) were used to quantify the GEM and HP extracted from the covered membrane, respectively.\n Drug release The covered membrane was incubated in a sealed glass bottle with 0.5 ml phosphate-buffered saline (PBS) as the release medium. The bottle was placed in a shaking incubator at 37°C at a shaking speed of 50 rpm. At each predetermined time, a 0.5 ml sample was withdrawn and replaced with the same volume of fresh medium. The residual drug concentration in the membrane was determined by dissolving the membrane in DMSO solution as the eluting medium. Samples were collected and analyzed by UV–Vis spectrometry and HPLC. The morphology of the membrane after 72 h of release was assessed by SEM imaging. Values are presented as mean ± standard error (STD) in triplicate. Statistical analysis was performed using Student's t-test. Values of p < 0.05 were considered statistically significant.", "Polyurethane (PU), Polycaprolactone (PCL, Mw = 80,000), Polyethylene glycol (PEG, Mw = 20,000), Tetrahydrofuran (THF), 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP) and Hematoporphyrin (HP) were purchased from Sigma-Aldrich (St Louis, MO). Gemcitabine (GEM) of clinical grade was supplied by National Taiwan University Hospital. All other chemicals were of analytical grade and used as received.", "The PDT-chemo stent, consisting of three layers of membranes, was made by dual electrospinning and electrospraying processes. The metallic stent was provided by the laboratory of Dr. Fuh-Yu Chang at National Taiwan University of Science and Technology, where a femtosecond laser was used to carve the 316 L stainless steel metallic tube. The electrospinning unit contained a high-voltage power supply, a motor to rotate the stent, a syringe pump, and a 19 G needle connected by a tube to a syringe. The metallic stent, rotated by the motor, was placed horizontally 15 cm away from the needle. A solution of PU in HFIP (10 m/v%) was used for the electrospinning process, while the PCL/PEG blends were mixed in HFIP/THF (1:1) solution for the electrospraying process. Both the PU and the PCL/PEG blend solutions were extruded from the syringe at a rate of 5 μL/min. The backing layer of PU was first electrospun at a voltage of 14 kV, followed by the drug-storing layer of PCL/PEG blends electrosprayed at a higher voltage of 22 kV. Herein, the drug-storing layer of PCL/PEG blends in a molar ratio of 9:1, loaded with GEM, covered the backing layer, followed by the HP coating on top in a PCL/PEG molar ratio of 1:4 (Figure 1). The polymer extruded from the syringe during electrospinning/electrospraying was collected for only a short period on a cover glass for optical microscopy (OM, Leica, Germany). Samples of the collected covered stent were prepared by coating with a thin gold film by sputter deposition (PVD) and visualized by scanning electron microscopy (SEM, JSM-7000 F, Japan) operated at 15 kV.", "To further confirm the electrospraying efficiency of drug loading in the covered membrane, the membrane was collected from the stent and completely dissolved in dimethyl sulfoxide (DMSO) solution. After that, high-performance liquid chromatography (HPLC, Waters e2695, USA) and ultraviolet–visible spectroscopy (UV/Vis, JASCO V-550, USA) were used to quantify the GEM and HP extracted from the covered membrane, respectively.", "The covered membrane was incubated in a sealed glass bottle with 0.5 ml phosphate-buffered saline (PBS) as the release medium. The bottle was placed in a shaking incubator at 37°C at a shaking speed of 50 rpm. At each predetermined time, a 0.5 ml sample was withdrawn and replaced with the same volume of fresh medium. The residual drug concentration in the membrane was determined by dissolving the membrane in DMSO solution as the eluting medium. Samples were collected and analyzed by UV–Vis spectrometry and HPLC. The morphology of the membrane after 72 h of release was assessed by SEM imaging. Values are presented as mean ± standard error (STD) in triplicate. Statistical analysis was performed using Student's t-test. Values of p < 0.05 were considered statistically significant.", " Prototype of PDT-chemo stent To make the covered membrane adhere more uniformly to the stent surface and to allow its thickness to be regulated easily according to clinical needs, electrospinning/electrospraying was regarded as the appropriate means of fabricating the PDT-chemo stent[18, 19]. The covered stent can be manufactured by several electrospinning methods: post-spinning modification, drug/polymer blending, emulsion electrospinning and core-shell electrospinning[21]. The drug/polymer blending technique can simply mix the drug directly with the polymer and form a membrane layer that achieves sustained drug release. Therefore, in this study, the prototype PDT-chemo stent was constructed by the drug/polymer blending technique via dual electrospinning and electrospraying processes. The backing layer was first electrospun from PU polymer solution, followed by electrospraying of the drug-loaded PCL/PEG blend solution. Electrospinning uses voltage to extrude a polymer solution into fine fibers for the production of fine-fiber-covered stents. In our case, the fine fibers provide the superior mechanical properties of the membrane and serve as the backing layer, effectively ensuring that the majority of the inner drugs is released toward the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, is completely evaporated owing to its high volatility[22]. Electrospraying has a preparation process similar to electrospinning but usually uses a higher voltage and a lower polymer density, so the polymer solution is more easily broken up into droplets[23, 24].
The concept of electrospraying process was used to increase layer-to-layer adhesion, which could avoid drug-storing layer cracking and separating from backing layer during the stent expending.\nThe covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, while PCL/PEG blends were used in drug-storing layers with GEM and HP loaded (Figure \n1). Several materials (eg. Silicone, PTFE, and PU) have been approved by US Food and Drug Administration (FDA) as the covered membrane on the stent\n[13]. Silicone was reported to cause the acute inflammation to the surrounding tissue\n[25]. The PTFE, however, could not be dissolved into any solvent\n[26]. Therefore, as the material of the backing layer in this study, PU has been found able to be formed sufficiently thin and flat on the metallic stent by electrospinning process and with elastic properties to allow the covered stent to be homogeneously expanded\n[18]. Additionally, PU membrane has been proved to prevent the tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stent in patients with malignant biliary obstruction\n[15]. The distinct biodegradable properties of PCL and PEG blending were regarded as an approach to controlling the drug-releasing rate in different blending ratio. The selection of PCL is due to its good biocompatibility, drug permeability and relatively slow degradation rate\n[27, 28]. The hydrophilic PEG was selected to play a role in resulting in regulating the drug-releasing rate, due to its easily acting on aqueous solution\n[14, 29].\nThe prototype of PDT-chemo stent was imaged by photography and optical microscopy, which proved that the membrane was homogeneously covered on the stent surface in each electrospun process (Figure \n2). The optical microscopy (Figure \n2b) revealed that PU was formed in fine fibers with width around 5 to 10 μm; thus PCL/PEG blends solution was favorable for generating submicron droplets (Figure \n2c, d). The photograph of membrane appeared brown (Figure \n2d) due to the homogeneously dispersed of HP coated on the top. Finally, the prototype of PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure \n2e). The surface morphology and cross section of the film was further investigated by SEM imaging (Figure \n3), which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. The architecture of the backing layer was constructed of fine fibers in networks structure, while drug-storing layer, attributed to the droplets from electrospraying process was coated on the backing layer roughly. The cross section of the membrane was smoothly with width in range from 170 to 190 μm. The thickness of the membrane could be optimized for the most favorable thickness according to the clinical needs via regulating time during the electrospun process.Figure 2\nThe photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. 
Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.Figure 3\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer.\n\nThe photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.\n\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer.\nTo further confirm the electrosprayed amount of the drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO solution for further analysis. The results revealed that, with the increasing concentration of the HP in PEG/PCL solution from 0.67 mg/ml to 6.67 mg/ml, the density of the HP from the electrosprayed membrane increased gradually from 8.72 μg/cm2 to 63.59 μg/cm2 at most, which was around 0.7 fold to the HP containing solution, as seemed to demonstrate 70% of electrosprayed efficiency (Figure \n4a). By the same method, GEM had almost 100% electrosprayed efficiency (Figure \n4b) and had a tendency of increasing electrosprayed efficiency by mixing with PCL/PEG blends due to its high hydrophilicity\n[30]. The corresponding results of the concentration of the drug in the solution and the density of the covered membrane could be an useful information to further imitate a clinical dose regimen for cholangiocarcinoma treatment.Figure 4\nThe electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis.\n\nThe electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis.\nTo make the stent with covered membrane more uniformly adhered to the surface and easily regulate the thickness according to the clinical needs, electrospinning/electrospraying has been regarded as the appropriate means for demonstrating the PDT-chemo stent\n[18, 19]. The covered stent can be manufactured by several electrospinning methods: post-spinning modification, drug/polymer blends, emulsion electrospinning and core-shell electrospinning\n[21]. Drug/polymer blends technique could easily mix the drug with polymer directly and form a layer of membrane to achieve sustained drug release. 
Therefore, in this study, the prototype of PDT-chemo stent was constructed by drug/polymer blends technique via electrospinning and electrospraying dual-processes. The backing layer was electrospun first from PU polymer solution, followed by the electrospraying process from PCL/PEG blends solution with drug loaded. Electrospinning is the process with voltage to extrude polymer solution into fine fibers for the production of fine-fibers-covered stents. In our case, fine fibers could support the superior mechanical properties of the membrane and be introduced as the backing layer for effectively controlling majority of the inner drugs released to the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, will be completely evaporated due to its high volatility\n[22]. Electrospraying has similar preparation process to electrospinning but is usually used with higher voltage and lower polymer density, which makes the polymer solution more easily broken up into droplets\n[23, 24]. The concept of electrospraying process was used to increase layer-to-layer adhesion, which could avoid drug-storing layer cracking and separating from backing layer during the stent expending.\nThe covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, while PCL/PEG blends were used in drug-storing layers with GEM and HP loaded (Figure \n1). Several materials (eg. Silicone, PTFE, and PU) have been approved by US Food and Drug Administration (FDA) as the covered membrane on the stent\n[13]. Silicone was reported to cause the acute inflammation to the surrounding tissue\n[25]. The PTFE, however, could not be dissolved into any solvent\n[26]. Therefore, as the material of the backing layer in this study, PU has been found able to be formed sufficiently thin and flat on the metallic stent by electrospinning process and with elastic properties to allow the covered stent to be homogeneously expanded\n[18]. Additionally, PU membrane has been proved to prevent the tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stent in patients with malignant biliary obstruction\n[15]. The distinct biodegradable properties of PCL and PEG blending were regarded as an approach to controlling the drug-releasing rate in different blending ratio. The selection of PCL is due to its good biocompatibility, drug permeability and relatively slow degradation rate\n[27, 28]. The hydrophilic PEG was selected to play a role in resulting in regulating the drug-releasing rate, due to its easily acting on aqueous solution\n[14, 29].\nThe prototype of PDT-chemo stent was imaged by photography and optical microscopy, which proved that the membrane was homogeneously covered on the stent surface in each electrospun process (Figure \n2). The optical microscopy (Figure \n2b) revealed that PU was formed in fine fibers with width around 5 to 10 μm; thus PCL/PEG blends solution was favorable for generating submicron droplets (Figure \n2c, d). The photograph of membrane appeared brown (Figure \n2d) due to the homogeneously dispersed of HP coated on the top. Finally, the prototype of PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure \n2e). 
The surface morphology and cross section of the film was further investigated by SEM imaging (Figure \n3), which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. The architecture of the backing layer was constructed of fine fibers in networks structure, while drug-storing layer, attributed to the droplets from electrospraying process was coated on the backing layer roughly. The cross section of the membrane was smoothly with width in range from 170 to 190 μm. The thickness of the membrane could be optimized for the most favorable thickness according to the clinical needs via regulating time during the electrospun process.Figure 2\nThe photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.Figure 3\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer.\n\nThe photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.\n\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer.\nTo further confirm the electrosprayed amount of the drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO solution for further analysis. The results revealed that, with the increasing concentration of the HP in PEG/PCL solution from 0.67 mg/ml to 6.67 mg/ml, the density of the HP from the electrosprayed membrane increased gradually from 8.72 μg/cm2 to 63.59 μg/cm2 at most, which was around 0.7 fold to the HP containing solution, as seemed to demonstrate 70% of electrosprayed efficiency (Figure \n4a). By the same method, GEM had almost 100% electrosprayed efficiency (Figure \n4b) and had a tendency of increasing electrosprayed efficiency by mixing with PCL/PEG blends due to its high hydrophilicity\n[30]. The corresponding results of the concentration of the drug in the solution and the density of the covered membrane could be an useful information to further imitate a clinical dose regimen for cholangiocarcinoma treatment.Figure 4\nThe electrosprayed amount of the drug on the covered membrane. 
The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis.\n\nThe electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis.\n Effect of drug release As illustrated in Figure \n1, this PDT-chmo stent was designed to obtain a two-phased drug release pattern, which is composed of a first burst-releasing phase and a later slow-releasing phase with combination therapy for cholangiocarcinoma treatment. Hydrophilic PEG was used as a regulator of drug release from PCL/PEG blends\n[31]. Figure \n5 displays the drug releasing profiles from the covered membrane. At each releasing time, the cumulative amounts of HP from PCL/PEG = 1:4 membrane, released in the first hour (P < 0.01 relative to GEM), had a high initial burst 80% and reached maximal cumulative release of nearly 98% within 24 h. Whereas, compared to HP in the first hour, GEM within PCL/PEG = 9:1 membrane showed relatively slow drug release in the first hour (50%). The significant difference of releasing kinetics between HP and GEM was observed within 24 h, as indicated drug-releasing rate can be regulated within 24 h by adjusting PEG and PCL compositions, then both drugs complete releasing occurred during the time span between 24 h and 72 h. The mechanism of drug release was reported by Liu et al.\n[32] and Lei et al.\n[14] that PEG acted as a pore former in PCL/PEG blends, where the releasing rate from co-localization of protein/drug and blends were proportional to PEG content. The hydrophilic PEG in PCL/PEG blends easily acted on aqueous solution, which resulted in the formation of swollen structure and subsequently increased the drug-releasing rate, as indicated the kinetics of drug releasing was mainly due to the degradation of the PCL/PEG blends\n[14]. Although the slow releasing rate of GEM may not exclude the possibility of outer drug-storing layer (PCL/PEG = 1:4) to delay parts of GEM release, we consider that PCL/PEG = 1:4 membrane with high concentration of hydrophilic PEG polymer will interact fast with aqueous solution, leading the membrane to degrade quickly within the first few minutes\n[14]. Therefore, we suggest PCL and PEG with different molar ratio for controlling the polymer degradation rate be the major factor to regulate drugs releasing rate. In order to better meet the needs of clinical application, the thickness of covered membrane and membrane composition could be further easily improved according to our requirements by electrospinning/electrospraying technique\n[18, 19].\nAfter drug eluting in PBS solution, the surface morphology of the covered membrane was investigated by SEM imaging (Figure \n6). Generally, fine fiber architecture of the backing layer was covered with roughness of the drug-storing layer (Figure \n6a). After 72 h of drug eluting, rough surface of the drug-storing layer was converted to alignment (Figure \n6b). This orderly structure indicates the degradation of PCL/PEG blends from the surface, and only PU backing layer and partial PCL/PEG blends left on the layer can be observed. The thickness of the cross section was reduced from around 180 μm (Figure \n6c) to 120 μm (Figure \n6d). 
The SEM imaging further confirmed the drug release kinetics was mainly because of the degradation of PCL/PEG membrane, not due to the diffusion or permeation of drugs through the membrane.Figure 5\nThe profile of drug release. Release of HP and GEM from covered membrane at each predetermined time in PBS solution (mean ± STD, n = 3). Values of p < 0.01 is considered being statistically significant.Figure 6\nSEM imaging of eluted covered membrane. After 72 h drug eluting in PBS solution, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging.\n\nThe profile of drug release. Release of HP and GEM from covered membrane at each predetermined time in PBS solution (mean ± STD, n = 3). Values of p < 0.01 is considered being statistically significant.\n\nSEM imaging of eluted covered membrane. After 72 h drug eluting in PBS solution, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging.\nHerein, GEM is one of the first line chemo-drugs in the treatment of advanced cholangiocarcinoma, which is a prodrug belonging to an analog of deoxycytidine. Once GEM is transported into the cell, it will be phosphorylated to an active form to inhibit DNA synthesis\n[33]. Therefore, low initial burst of GEM may help to prevent undesired toxicity associated with high concentration of GEM, and the burst release of HP can provide a simultaneous treatment for PDT, triggered by the light source from endoscopy when the stent is localized in bile duct. Overall, the prototype of PDT-chemo stent has demonstrated the proof of concept of localized combination therapy for cholangiocarcinoma. Based on the theory, the drug-releasing rate could be further regulated by changing the initial electrospraying blend polymer solution, concentration, structure and type of fibers and the amount of additives for the clinical needs\n[34, 35].\nAs illustrated in Figure \n1, this PDT-chmo stent was designed to obtain a two-phased drug release pattern, which is composed of a first burst-releasing phase and a later slow-releasing phase with combination therapy for cholangiocarcinoma treatment. Hydrophilic PEG was used as a regulator of drug release from PCL/PEG blends\n[31]. Figure \n5 displays the drug releasing profiles from the covered membrane. At each releasing time, the cumulative amounts of HP from PCL/PEG = 1:4 membrane, released in the first hour (P < 0.01 relative to GEM), had a high initial burst 80% and reached maximal cumulative release of nearly 98% within 24 h. Whereas, compared to HP in the first hour, GEM within PCL/PEG = 9:1 membrane showed relatively slow drug release in the first hour (50%). The significant difference of releasing kinetics between HP and GEM was observed within 24 h, as indicated drug-releasing rate can be regulated within 24 h by adjusting PEG and PCL compositions, then both drugs complete releasing occurred during the time span between 24 h and 72 h. The mechanism of drug release was reported by Liu et al.\n[32] and Lei et al.\n[14] that PEG acted as a pore former in PCL/PEG blends, where the releasing rate from co-localization of protein/drug and blends were proportional to PEG content. The hydrophilic PEG in PCL/PEG blends easily acted on aqueous solution, which resulted in the formation of swollen structure and subsequently increased the drug-releasing rate, as indicated the kinetics of drug releasing was mainly due to the degradation of the PCL/PEG blends\n[14]. 
Although the slow releasing rate of GEM may not exclude the possibility of outer drug-storing layer (PCL/PEG = 1:4) to delay parts of GEM release, we consider that PCL/PEG = 1:4 membrane with high concentration of hydrophilic PEG polymer will interact fast with aqueous solution, leading the membrane to degrade quickly within the first few minutes\n[14]. Therefore, we suggest PCL and PEG with different molar ratio for controlling the polymer degradation rate be the major factor to regulate drugs releasing rate. In order to better meet the needs of clinical application, the thickness of covered membrane and membrane composition could be further easily improved according to our requirements by electrospinning/electrospraying technique\n[18, 19].\nAfter drug eluting in PBS solution, the surface morphology of the covered membrane was investigated by SEM imaging (Figure \n6). Generally, fine fiber architecture of the backing layer was covered with roughness of the drug-storing layer (Figure \n6a). After 72 h of drug eluting, rough surface of the drug-storing layer was converted to alignment (Figure \n6b). This orderly structure indicates the degradation of PCL/PEG blends from the surface, and only PU backing layer and partial PCL/PEG blends left on the layer can be observed. The thickness of the cross section was reduced from around 180 μm (Figure \n6c) to 120 μm (Figure \n6d). The SEM imaging further confirmed the drug release kinetics was mainly because of the degradation of PCL/PEG membrane, not due to the diffusion or permeation of drugs through the membrane.Figure 5\nThe profile of drug release. Release of HP and GEM from covered membrane at each predetermined time in PBS solution (mean ± STD, n = 3). Values of p < 0.01 is considered being statistically significant.Figure 6\nSEM imaging of eluted covered membrane. After 72 h drug eluting in PBS solution, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging.\n\nThe profile of drug release. Release of HP and GEM from covered membrane at each predetermined time in PBS solution (mean ± STD, n = 3). Values of p < 0.01 is considered being statistically significant.\n\nSEM imaging of eluted covered membrane. After 72 h drug eluting in PBS solution, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging.\nHerein, GEM is one of the first line chemo-drugs in the treatment of advanced cholangiocarcinoma, which is a prodrug belonging to an analog of deoxycytidine. Once GEM is transported into the cell, it will be phosphorylated to an active form to inhibit DNA synthesis\n[33]. Therefore, low initial burst of GEM may help to prevent undesired toxicity associated with high concentration of GEM, and the burst release of HP can provide a simultaneous treatment for PDT, triggered by the light source from endoscopy when the stent is localized in bile duct. Overall, the prototype of PDT-chemo stent has demonstrated the proof of concept of localized combination therapy for cholangiocarcinoma. 
In principle, the drug-releasing rate could be further tuned to clinical needs by changing the composition and concentration of the electrosprayed blend solution, the structure and type of the fibers, and the amount of additives\n[34, 35].", "To make the covered membrane adhere uniformly to the stent surface and to allow the thickness to be regulated easily according to clinical needs, electrospinning/electrospraying was regarded as the appropriate means of fabricating the PDT-chemo stent\n[18, 19]. A covered stent can be manufactured by several electrospinning approaches: post-spinning modification, drug/polymer blends, emulsion electrospinning and core-shell electrospinning\n[21]. The drug/polymer blends technique mixes the drug directly with the polymer and forms a membrane layer that provides sustained drug release. Therefore, in this study, the prototype PDT-chemo stent was constructed by the drug/polymer blends technique via electrospinning and electrospraying dual processes. The backing layer was first electrospun from PU solution, followed by electrospraying of the drug-loaded PCL/PEG blend solutions. Electrospinning uses an applied voltage to extrude polymer solution into fine fibers for the production of fine-fiber-covered stents. In our case, the fine fibers provide the superior mechanical properties of the membrane and serve as the backing layer that directs the majority of the inner drugs toward the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, evaporates completely owing to its high volatility\n[22]. Electrospraying is prepared similarly to electrospinning but typically uses a higher voltage and a lower polymer concentration, so the polymer solution breaks up more easily into droplets\n[23, 24]. The electrospraying step was used to increase layer-to-layer adhesion and thereby prevent the drug-storing layer from cracking and separating from the backing layer during stent expansion.\nThe covered membrane is composed of three layers: PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the backing layer, while the PCL/PEG blends were used as drug-storing layers loaded with GEM and HP (Figure \n1). Several materials (e.g., silicone, PTFE and PU) have been approved by the US Food and Drug Administration (FDA) as covered membranes on stents\n[13]. Silicone has been reported to cause acute inflammation of the surrounding tissue\n[25], whereas PTFE cannot be dissolved in any solvent\n[26]. PU, in contrast, can be formed sufficiently thin and flat on the metallic stent by electrospinning, and its elastic properties allow the covered stent to expand homogeneously\n[18]; it was therefore chosen as the backing-layer material in this study. Additionally, PU membranes have been shown to prevent tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stents in patients with malignant biliary obstruction\n[15]. The distinct biodegradable properties of PCL and PEG blends were exploited to control the drug-releasing rate through the blending ratio. PCL was selected for its good biocompatibility, drug permeability and relatively slow degradation rate\n[27, 28]. 
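As a compact reference, the tri-layer design described in this passage can be written down as a small data structure. This is only a descriptive sketch: the layer order, blend ratios, drugs and fabrication processes come from the text, while per-layer thicknesses are not reported individually and are therefore omitted.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Descriptive sketch of the tri-layer covered membrane, outermost layer last.

@dataclass
class Layer:
    name: str
    polymers: Dict[str, float]  # polymer -> share of the blend (molar ratio)
    drug: Optional[str]
    process: str

membrane_stack = [
    Layer("backing",            {"PU": 1.0},          None,  "electrospinning"),
    Layer("inner drug-storing", {"PCL": 9, "PEG": 1}, "GEM", "electrospraying"),
    Layer("outer drug-storing", {"PCL": 1, "PEG": 4}, "HP",  "electrospraying"),
]

for layer in membrane_stack:
    print(f"{layer.name:<20} {layer.polymers}  drug={layer.drug}  via {layer.process}")
```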
The hydrophilic PEG was selected to regulate the drug-releasing rate because it interacts readily with aqueous solution\n[14, 29].\nThe prototype PDT-chemo stent was imaged by photography and optical microscopy, which showed that the membrane was homogeneously covered on the stent surface at each electrospun step (Figure \n2). Optical microscopy (Figure \n2b) revealed that the PU formed fine fibers about 5 to 10 μm wide, while the PCL/PEG blend solutions were favorable for generating submicron droplets (Figure \n2c, d). The membrane appeared brown in the photograph (Figure \n2d) owing to the homogeneously dispersed HP coated on top. Finally, the prototype PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure \n2e). The surface morphology and cross section of the film were further investigated by SEM imaging (Figure \n3), which showed that the membrane consisted of two sides, the backing layer and the drug-storing layer. The backing layer was built of fine fibers in a network structure, whereas the drug-storing layer, formed from the electrosprayed droplets, was coated roughly onto the backing layer. The cross section of the membrane was smooth, with a thickness ranging from 170 to 190 μm. The membrane thickness could be optimized to the most favorable value for clinical needs by regulating the deposition time during the electrospun process.Figure 2\nThe photography and microscopic imaging of the PDT-chemo stent. The prototype PDT-chemo stent at each step of the electrospinning/electrospraying process is presented under photography and optical microscopy. (a) 316 L stainless bare stent; (b) covered with the PU backing layer; (c) covered with the inner drug-storing layer of PCL/PEG = 9:1 loaded with GEM; (d) covered with the outer drug-storing layer of PCL/PEG = 1:4 loaded with HP; (e) the prototype PDT-chemo stent collected from the cylindrical collector. Optical microscopy shows the polymer extruded from the syringe during the electrospinning/electrospraying processes.Figure 3\nSEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from the PDT-chemo stent were observed by SEM imaging, which showed that the membrane consisted of two sides, the backing layer and the drug-storing layer.\nTo further confirm the electrosprayed amount of drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO for further analysis. 
The results revealed that, as the HP concentration in the PCL/PEG solution increased from 0.67 mg/ml to 6.67 mg/ml, the density of HP on the electrosprayed membrane increased gradually from 8.72 μg/cm2 to a maximum of 63.59 μg/cm2, about 0.7-fold the value expected from the HP-containing solution, corresponding to roughly 70% electrospraying efficiency (Figure \n4a). Measured in the same way, GEM showed almost 100% electrospraying efficiency (Figure \n4b), with a tendency toward higher efficiency when mixed with the PCL/PEG blends owing to its high hydrophilicity\n[30]. The relationship between the drug concentration in the solution and the drug density on the covered membrane provides useful information for approximating a clinical dose regimen for cholangiocarcinoma treatment.Figure 4\nThe electrosprayed amount of drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is plotted on the vertical axis, and the corresponding concentration of HP and GEM in the PCL/PEG blend solution on the horizontal axis.", "As illustrated in Figure \n1, the PDT-chemo stent was designed to give a two-phased drug release pattern, composed of an initial burst-releasing phase and a later slow-releasing phase, for combination therapy of cholangiocarcinoma. Hydrophilic PEG was used as a regulator of drug release from the PCL/PEG blends\n[31]. Figure \n5 displays the release profiles from the covered membrane. HP from the PCL/PEG = 1:4 membrane showed a high initial burst of about 80% in the first hour (P < 0.01 relative to GEM) and reached a maximal cumulative release of nearly 98% within 24 h, whereas GEM in the PCL/PEG = 9:1 membrane released more slowly, about 50% in the first hour. The significant difference in release kinetics between HP and GEM within 24 h indicates that the release rate can be regulated over this period by adjusting the PEG and PCL compositions; both drugs then released completely between 24 h and 72 h. Liu et al.\n[32] and Lei et al.\n[14] reported that PEG acts as a pore former in PCL/PEG blends, with the release rate of co-localized protein/drug proportional to the PEG content. The hydrophilic PEG in the blends interacts readily with aqueous solution, forming a swollen structure that accelerates drug release, indicating that the release kinetics are governed mainly by degradation of the PCL/PEG blends\n[14]. Although the slow release of GEM does not exclude the possibility that the outer drug-storing layer (PCL/PEG = 1:4) delays part of the GEM release, we consider that the PCL/PEG = 1:4 membrane, with its high content of hydrophilic PEG, interacts rapidly with aqueous solution and degrades within the first few minutes\n[14]. We therefore suggest that the PCL/PEG molar ratio, which controls the polymer degradation rate, is the major factor regulating the drug release rate. 
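The reported coating densities can be turned into a rough per-stent dose estimate, which is the kind of calculation needed when approximating a clinical dose regimen. The sketch below uses the highest reported HP density (63.59 µg/cm2) and the ~70% efficiency figure; the stent geometry is a hypothetical example, since stent dimensions are not given in the text.

```python
import math

# Illustrative sketch: from the reported HP coating density to an estimated total
# dose per stent. The "expected" density is back-calculated from the ~70% figure,
# and the stent dimensions are assumptions for demonstration only.

hp_density_ug_cm2 = 63.59                      # measured density at the highest HP loading
expected_density_ug_cm2 = 63.59 / 0.7          # theoretical density implied by 70% efficiency

efficiency = hp_density_ug_cm2 / expected_density_ug_cm2
print(f"electrospraying efficiency (HP): {efficiency:.0%}")

# Hypothetical stent geometry: a cylinder 8 cm long and 1 cm in diameter.
stent_length_cm = 8.0
stent_diameter_cm = 1.0
membrane_area_cm2 = math.pi * stent_diameter_cm * stent_length_cm

total_hp_ug = hp_density_ug_cm2 * membrane_area_cm2
print(f"membrane area: {membrane_area_cm2:.1f} cm^2, estimated HP load: {total_hp_ug / 1000:.2f} mg")
```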
To better meet clinical needs, the thickness and composition of the covered membrane could readily be adjusted by the electrospinning/electrospraying technique\n[18, 19].\nAfter drug elution in PBS, the surface morphology of the covered membrane was examined by SEM imaging (Figure \n6). Before elution, the fine fiber architecture of the backing layer was covered by the rough drug-storing layer (Figure \n6a). After 72 h of elution, the rough surface of the drug-storing layer had become aligned (Figure \n6b). This ordered structure indicates degradation of the PCL/PEG blends from the surface, leaving only the PU backing layer and part of the PCL/PEG blends. The thickness of the cross section was reduced from around 180 μm (Figure \n6c) to 120 μm (Figure \n6d). The SEM imaging further confirmed that the release kinetics were governed mainly by degradation of the PCL/PEG membrane rather than by diffusion or permeation of the drugs through it.Figure 5\nThe profile of drug release. Release of HP and GEM from the covered membrane at each predetermined time point in PBS (mean ± STD, n = 3). Values of p < 0.01 were considered statistically significant.Figure 6\nSEM imaging of the eluted covered membrane. After 72 h of drug elution in PBS, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging.\nHerein, GEM, a prodrug and deoxycytidine analog, is one of the first-line chemotherapeutic drugs for advanced cholangiocarcinoma. Once transported into the cell, it is phosphorylated to its active form, which inhibits DNA synthesis\n[33]. Therefore, the low initial burst of GEM may help prevent the toxicity associated with high GEM concentrations, while the burst release of HP can provide simultaneous PDT, triggered by the endoscopic light source once the stent is placed in the bile duct. Overall, the prototype PDT-chemo stent demonstrates the proof of concept of localized combination therapy for cholangiocarcinoma. In principle, the drug-releasing rate could be further tuned to clinical needs by changing the composition and concentration of the electrosprayed blend solution, the structure and type of the fibers, and the amount of additives\n[34, 35].", "In this preliminary study, we have successfully developed a prototype of a tri-layered covered stent combining PDT and chemotherapy. The PDT-chemo stent was prepared by electrospinning and electrospraying dual processes. The membrane is composed of a PU backing layer as the base and PCL/PEG drug-storing layers carrying GEM and HP on top. Mixing the drugs with different PCL/PEG compositions proved an effective strategy for regulating drug release from the membrane. The release study confirmed a two-phased drug release pattern, providing proof of concept for the hypothesis that the PDT-chemo stent combines an initial burst-releasing phase from HP and a later slow-releasing phase from GEM. 
This two-phased drug-eluting stent may offer a new prospect for localized, controlled-release treatment of cholangiocarcinoma." ]
[ null, "methods", null, null, null, null, null, null, null, "conclusions" ]
[ "Cholangiocarcinoma", "Photodynamic therapy", "Chemotherapy", "Biliary drug-eluting stent", "Electrospinning and Electrospraying" ]
Background: Cholangiocarcinoma is the second most common hepatobiliary tumor, which is generally a locally invasive tumor that occludes the biliary tree and leads to cholangitis and liver failure. Until now, tumor resection has been the only potential cure for cholangiocarcinoma [1, 2]. Unfortunately, even with resection, the survival rate with five years can decrease to 11% at most and more than 50% of patients still remained at unresectable stage [3, 4]. Inoperable patients with advanced cholangiocarcinoma typically have obstructive cholestasis. So far, the primary standard method of treatment has been biliary stenting [5]. However, this treatment can prolong survival time slightly by providing temporary biliary drainage. Therefore, the secondary method of treatment is required to prolong the survival time by reducing tumor burden. Chemotherapy and radiotherapy are classical treatments but their results are also disappointing [3, 5]. Photodynamic therapy (PDT) is a new and promising treatment option, which contains a photosensitizer, light source, and oxygen [6]. The concept of PDT is based on a photosensitizer exposed to the specific wavelength of light, which can generate cytotoxic reactive oxygen species (ROS) to kill tumor cells [7]. Additionally, previous studies have shown that PDT could also inhibit the P-glycoprotein efflux of drug. A combination of PDT and chemotherapy can improve the accumulation of chemo-drug in tumor cells, and reduce the chemo-drug resistance from the P-glycoprotein efflux [8]. Another advantage of combination therapy with PDT and chemo-drug is the capability to induce antitumor immunity [9]. However, by intravenous delivery to target tumor, the distribution of the drug had its limitations and caused serious side effect on non-target organs. After receiving PDT treatment, patients have to stay indoors, away from bright light for 3 to 4 days to avoid the skin photosensitivity from the side effect [10]. In order to decrease the side effect during the treatment, the aim of this study is to develop a localized drug eluting stent, named PDT-chemo stent, by incorporating gemcitabine (GEM) with hematoporphyrin (HP) to cover the stent surface. Drug-eluting stent has been considered a method to maximize the drug concentration immediately on the localized tumor environment, while minimizing the non-target organs exposure [11, 12]. In clinical practice, this PDT-chemo stent could be inserted to the tumor area via endoscopic retrograde cholangiography [13], followed by the simultaneous specific light source from endoscopy to activate the photosensitizer for PDT. Meanwhile, the chemo-drug of GEM will be released continuously as the second step for chemotherapy. The multimodal function of PDT-chemo stent will not only aim to increase the accumulation of drug within the neoplastic tissue, but also decrease the side effect on non-target tissues. In the past decade, the stent with drug-incorporated covered membrane has been received increasing attention as drug-eluting stent due to its functions of providing mechanic support and releasing sufficient drug to prevent restenosis or treat malignant [12, 14, 15]. Historically, several techniques were developed and have been used to manufacture the covered membrane on stent surface. These diverse techniques include dip coating [12, 16], compression technique [14] and electrospinning [17, 18]. Concerned about the adherence and flexibility problems of the membrane covering the stent [19], we used the electrospinning technique. 
Electrospinning is the process with voltage to extrude polymer solution into fine fibers for the production of micro/nano-fiber-covered stents [20]. The electrospinning technique could support the covered membrane uniformly adhered to the stent and easily regulate the thickness according to the clinical needs [18, 19]. Therefore, we selected the electrospinning to construct the backing layer on 316 L stainless stent and used the electrospraying process to regulate GEM and HP on the covered membrane. To provide the flexibility of the membrane, polyurethane (PU), with sufficient elastic property, was used as the material of drug-free backing layer, which could effectively control majority of the inner drugs released to the tissue-contacting side and additional supporting force for the main drug layer [18]. Polycaprolactone (PCL) and Polyethylene glycol (PEG) blends in different molar ratio were selected as two drug-storing layers to control the drug-releasing rate. In the ratio of 1:4, PCL/PEG blends were used as the outer releasing layer with HP loaded, while in the ratio of 9:1, PCL/PEG blends were used as the inner releasing layer with GEM loaded (Figure  1). To our knowledge, this is the first study to show that the biliary stent could be accompanied with PDT and chemotherapy for localized cholangiocarcinoma treatment.Figure 1 Schematic illustration of the PDT-chemo stent as a biliary stent for cholangiocarcinoma treatment. The covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, and PCL/PEG blends were used in drug-storing layers with GEM and HP loaded. Schematic illustration of the PDT-chemo stent as a biliary stent for cholangiocarcinoma treatment. The covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, and PCL/PEG blends were used in drug-storing layers with GEM and HP loaded. Methods: Materials Polyurethane (PU), Polycaprolactone (PCL, Mw = 80,000), Polyethylene glycol (PEG, Mw = 20,000), Tetrahydrofuran (THF), 1,1,1,3,3,3-Hexafluro-2-propanel (HFIP) and Hematoporphyrin (HP) were purchased from Sigma-Aldrich (St Louis, MO). Gemcitabine (GEM) of clinical grade was supplied by National Taiwan University Hospital. All other chemicals were of analytical grade and used as received. Polyurethane (PU), Polycaprolactone (PCL, Mw = 80,000), Polyethylene glycol (PEG, Mw = 20,000), Tetrahydrofuran (THF), 1,1,1,3,3,3-Hexafluro-2-propanel (HFIP) and Hematoporphyrin (HP) were purchased from Sigma-Aldrich (St Louis, MO). Gemcitabine (GEM) of clinical grade was supplied by National Taiwan University Hospital. All other chemicals were of analytical grade and used as received. Preparation of PDT-chemo stent The PDT-chemo stent, consisting of tri-layers of membranes, was made by electrospinning and electrospraying dual-processes. The metallic stent was provided from the laboratory of Dr. Fuh-Yu Chang in National Taiwan University of Science and Technology, and the femtosecond laser was used to carve the 316 L stainless metallic tube. The unit of electrospinning contained a high-voltage power supply, a motor to rotate the stent, a syringe pump, and a 19G-needle that was connected by a tube to a syringe. The metallic stent rotated by a motor was horizontally placed 15 cm away from the needle. 
The solution of PU in HFIP (10 m/v%) was used for the electrospinning process, while the PCL/PEG blends were dissolved in HFIP/THF (1:1) for the electrospraying process. Both the PU and the PCL/PEG blend solutions were extruded from the syringe at a rate of 5 μL/min. The PU backing layer was first electrospun at 14 kV, followed by the drug-storing layers of PCL/PEG blends electrosprayed at a higher voltage of 22 kV. The drug-storing layer of PCL/PEG blends at a molar ratio of 9:1 was loaded with GEM and deposited over the backing layer, followed by the HP-loaded coating of PCL/PEG at a molar ratio of 1:4 on top (Figure  1). The polymer extruded from the syringe during electrospinning/electrospraying was collected for a short period on a cover glass for optical microscopy (OM, Leica, Germany). Samples of the covered stent were coated with a thin gold film by sputtering PVD and visualized by scanning electron microscopy (SEM, JSM-7000 F, Japan) operated at 15 kV. Drug electrosprayed efficiency To further confirm the electrospraying efficiency of drug loading in the covered membrane, the membrane was collected from the stent and completely dissolved in dimethyl sulfoxide (DMSO). High-performance liquid chromatography (HPLC, Waters e2695, USA) and ultraviolet–visible spectroscopy (UV/vis, JASCO V-550, USA) were then used to quantify GEM and HP extracted from the covered membrane, respectively. 
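For reference, the dual-process parameters just described can be collected into a single configuration block. All values below are taken from the Methods; deposition times per layer are not reported in the text and are therefore not included.

```python
# Electrospinning/electrospraying parameters as reported in the Methods.
process_params = {
    "collector": {"stent": "316 L stainless, femtosecond-laser cut", "needle_distance_cm": 15},
    "feed_rate_uL_per_min": 5,
    "backing_layer": {
        "process": "electrospinning",
        "solution": "PU 10 m/v% in HFIP",
        "voltage_kV": 14,
    },
    "inner_drug_layer": {
        "process": "electrospraying",
        "solution": "PCL/PEG = 9:1 in HFIP/THF (1:1) + GEM",
        "voltage_kV": 22,
    },
    "outer_drug_layer": {
        "process": "electrospraying",
        "solution": "PCL/PEG = 1:4 in HFIP/THF (1:1) + HP",
        "voltage_kV": 22,
    },
}

for name, cfg in process_params.items():
    print(name, "->", cfg)
```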
Drug release The covered membrane was incubated in a sealed glass bottle with 0.5 ml of phosphate-buffered saline (PBS) as the releasing medium. The bottle was placed in a shaking incubator at 37°C at a shaking speed of 50 rpm. At each predetermined time point, a 0.5 ml sample was withdrawn and replaced with the same volume of fresh medium. The residual drug concentration in the membrane was determined by dissolving the membrane in DMSO as the eluting medium. Samples were collected and analyzed by UV–vis spectrometry and HPLC. The morphology of the membrane after 72 h of release was assessed by SEM imaging. Values are presented as mean ± standard error (STD) of triplicate measurements. Statistical analysis was performed using Student's t-test, and p < 0.05 was considered statistically significant. Materials: Polyurethane (PU), Polycaprolactone (PCL, Mw = 80,000), Polyethylene glycol (PEG, Mw = 20,000), Tetrahydrofuran (THF), 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP) and Hematoporphyrin (HP) were purchased from Sigma-Aldrich (St Louis, MO). Gemcitabine (GEM) of clinical grade was supplied by National Taiwan University Hospital. All other chemicals were of analytical grade and used as received. Preparation of PDT-chemo stent: The PDT-chemo stent, consisting of tri-layered membranes, was made by electrospinning and electrospraying dual processes. The metallic stent was provided by the laboratory of Dr. Fuh-Yu Chang at National Taiwan University of Science and Technology, and a femtosecond laser was used to carve the 316 L stainless metallic tube. The electrospinning unit contained a high-voltage power supply, a motor to rotate the stent, a syringe pump, and a 19G needle connected by a tube to a syringe. The metallic stent, rotated by the motor, was placed horizontally 15 cm away from the needle. 
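The release-study statistics described in the Methods above (triplicate measurements, Student's t-test, significance at p < 0.05) amount to a simple two-sample comparison at each time point. The sketch below shows that comparison for the 1 h HP versus GEM release; the replicate values are hypothetical, chosen only to mirror the reported roughly 80% versus 50% first-hour release.

```python
import numpy as np
from scipy import stats

# Illustrative sketch of the statistical comparison: an unpaired Student's t-test
# on the 1 h cumulative release of HP vs GEM (n = 3 per group, hypothetical data).
hp_1h = np.array([79.0, 81.5, 80.2])    # % released at 1 h
gem_1h = np.array([49.1, 51.8, 50.3])

t_stat, p_value = stats.ttest_ind(hp_1h, gem_1h)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at p < 0.05" if p_value < 0.05 else "not significant")
```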
Samples collected covered stent were prepared by coating with thin gold film by sputtering PVD and visualized by scanning electron microscopy (SEM, JSM-7000 F, Japan) operated at 15 KV. Drug electrosprayed efficiency: To further confirm the electrsprayed efficiency of loading drug in state of covered membrane, the membrane was collected from the stent and absolutely dissolved in dimethyl sulfoxide (DMSO) solution. After that, high-performance liquid chromatography (HPLC, Waters e2695, USA) and ultraviolet–visible spectroscopy (UV/vis, JASCO V-550, USA) were used to examine the extruded drugs of GEM and HP from the covered membrane respectively. Drug release: The covered membrane was incubated in a sealed glass bottle with 0.5 ml phosphate-buffered saline (PBS) as the releasing medium. The bottle was placed in a shaking incubator at 37°C at a shaking speed of 50 rpm. At the predetermined time, 0.5 ml sample was withdrawn and replaced with the same volume of fresh medium. Residual concentration of drug in the membrane was counted by dissolving the membrane in DMSO solution as the eluting medium. Samples were collected and analyzed under the UV–vis spectrometer and HPLC. The morphology of membrane after 72 h of release was assessed by SEM imaging. The values were presented as mean ± standard error (STD) in triplicate. Statistical analysis was performed using the analysis student’s t-test. Values of p < 0.05 was considered being statistically significant. Results and discussion: Prototype of PDT-chemo stent To make the stent with covered membrane more uniformly adhered to the surface and easily regulate the thickness according to the clinical needs, electrospinning/electrospraying has been regarded as the appropriate means for demonstrating the PDT-chemo stent [18, 19]. The covered stent can be manufactured by several electrospinning methods: post-spinning modification, drug/polymer blends, emulsion electrospinning and core-shell electrospinning [21]. Drug/polymer blends technique could easily mix the drug with polymer directly and form a layer of membrane to achieve sustained drug release. Therefore, in this study, the prototype of PDT-chemo stent was constructed by drug/polymer blends technique via electrospinning and electrospraying dual-processes. The backing layer was electrospun first from PU polymer solution, followed by the electrospraying process from PCL/PEG blends solution with drug loaded. Electrospinning is the process with voltage to extrude polymer solution into fine fibers for the production of fine-fibers-covered stents. In our case, fine fibers could support the superior mechanical properties of the membrane and be introduced as the backing layer for effectively controlling majority of the inner drugs released to the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, will be completely evaporated due to its high volatility [22]. Electrospraying has similar preparation process to electrospinning but is usually used with higher voltage and lower polymer density, which makes the polymer solution more easily broken up into droplets [23, 24]. The concept of electrospraying process was used to increase layer-to-layer adhesion, which could avoid drug-storing layer cracking and separating from backing layer during the stent expending. The covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. 
PU was employed as the material of the backing layer, while PCL/PEG blends were used in drug-storing layers with GEM and HP loaded (Figure  1). Several materials (eg. Silicone, PTFE, and PU) have been approved by US Food and Drug Administration (FDA) as the covered membrane on the stent [13]. Silicone was reported to cause the acute inflammation to the surrounding tissue [25]. The PTFE, however, could not be dissolved into any solvent [26]. Therefore, as the material of the backing layer in this study, PU has been found able to be formed sufficiently thin and flat on the metallic stent by electrospinning process and with elastic properties to allow the covered stent to be homogeneously expanded [18]. Additionally, PU membrane has been proved to prevent the tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stent in patients with malignant biliary obstruction [15]. The distinct biodegradable properties of PCL and PEG blending were regarded as an approach to controlling the drug-releasing rate in different blending ratio. The selection of PCL is due to its good biocompatibility, drug permeability and relatively slow degradation rate [27, 28]. The hydrophilic PEG was selected to play a role in resulting in regulating the drug-releasing rate, due to its easily acting on aqueous solution [14, 29]. The prototype of PDT-chemo stent was imaged by photography and optical microscopy, which proved that the membrane was homogeneously covered on the stent surface in each electrospun process (Figure  2). The optical microscopy (Figure  2b) revealed that PU was formed in fine fibers with width around 5 to 10 μm; thus PCL/PEG blends solution was favorable for generating submicron droplets (Figure  2c, d). The photograph of membrane appeared brown (Figure  2d) due to the homogeneously dispersed of HP coated on the top. Finally, the prototype of PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure  2e). The surface morphology and cross section of the film was further investigated by SEM imaging (Figure  3), which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. The architecture of the backing layer was constructed of fine fibers in networks structure, while drug-storing layer, attributed to the droplets from electrospraying process was coated on the backing layer roughly. The cross section of the membrane was smoothly with width in range from 170 to 190 μm. The thickness of the membrane could be optimized for the most favorable thickness according to the clinical needs via regulating time during the electrospun process.Figure 2 The photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.Figure 3 SEM imaging of the covered membrane. 
The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. The photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes. SEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. To further confirm the electrosprayed amount of the drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO solution for further analysis. The results revealed that, with the increasing concentration of the HP in PEG/PCL solution from 0.67 mg/ml to 6.67 mg/ml, the density of the HP from the electrosprayed membrane increased gradually from 8.72 μg/cm2 to 63.59 μg/cm2 at most, which was around 0.7 fold to the HP containing solution, as seemed to demonstrate 70% of electrosprayed efficiency (Figure  4a). By the same method, GEM had almost 100% electrosprayed efficiency (Figure  4b) and had a tendency of increasing electrosprayed efficiency by mixing with PCL/PEG blends due to its high hydrophilicity [30]. The corresponding results of the concentration of the drug in the solution and the density of the covered membrane could be an useful information to further imitate a clinical dose regimen for cholangiocarcinoma treatment.Figure 4 The electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis. The electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis. To make the stent with covered membrane more uniformly adhered to the surface and easily regulate the thickness according to the clinical needs, electrospinning/electrospraying has been regarded as the appropriate means for demonstrating the PDT-chemo stent [18, 19]. The covered stent can be manufactured by several electrospinning methods: post-spinning modification, drug/polymer blends, emulsion electrospinning and core-shell electrospinning [21]. Drug/polymer blends technique could easily mix the drug with polymer directly and form a layer of membrane to achieve sustained drug release. Therefore, in this study, the prototype of PDT-chemo stent was constructed by drug/polymer blends technique via electrospinning and electrospraying dual-processes. 
The backing layer was electrospun first from PU polymer solution, followed by the electrospraying process from PCL/PEG blends solution with drug loaded. Electrospinning is the process with voltage to extrude polymer solution into fine fibers for the production of fine-fibers-covered stents. In our case, fine fibers could support the superior mechanical properties of the membrane and be introduced as the backing layer for effectively controlling majority of the inner drugs released to the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, will be completely evaporated due to its high volatility [22]. Electrospraying has similar preparation process to electrospinning but is usually used with higher voltage and lower polymer density, which makes the polymer solution more easily broken up into droplets [23, 24]. The concept of electrospraying process was used to increase layer-to-layer adhesion, which could avoid drug-storing layer cracking and separating from backing layer during the stent expending. The covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, while PCL/PEG blends were used in drug-storing layers with GEM and HP loaded (Figure  1). Several materials (eg. Silicone, PTFE, and PU) have been approved by US Food and Drug Administration (FDA) as the covered membrane on the stent [13]. Silicone was reported to cause the acute inflammation to the surrounding tissue [25]. The PTFE, however, could not be dissolved into any solvent [26]. Therefore, as the material of the backing layer in this study, PU has been found able to be formed sufficiently thin and flat on the metallic stent by electrospinning process and with elastic properties to allow the covered stent to be homogeneously expanded [18]. Additionally, PU membrane has been proved to prevent the tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stent in patients with malignant biliary obstruction [15]. The distinct biodegradable properties of PCL and PEG blending were regarded as an approach to controlling the drug-releasing rate in different blending ratio. The selection of PCL is due to its good biocompatibility, drug permeability and relatively slow degradation rate [27, 28]. The hydrophilic PEG was selected to play a role in resulting in regulating the drug-releasing rate, due to its easily acting on aqueous solution [14, 29]. The prototype of PDT-chemo stent was imaged by photography and optical microscopy, which proved that the membrane was homogeneously covered on the stent surface in each electrospun process (Figure  2). The optical microscopy (Figure  2b) revealed that PU was formed in fine fibers with width around 5 to 10 μm; thus PCL/PEG blends solution was favorable for generating submicron droplets (Figure  2c, d). The photograph of membrane appeared brown (Figure  2d) due to the homogeneously dispersed of HP coated on the top. Finally, the prototype of PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure  2e). The surface morphology and cross section of the film was further investigated by SEM imaging (Figure  3), which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. 
The architecture of the backing layer was constructed of fine fibers in networks structure, while drug-storing layer, attributed to the droplets from electrospraying process was coated on the backing layer roughly. The cross section of the membrane was smoothly with width in range from 170 to 190 μm. The thickness of the membrane could be optimized for the most favorable thickness according to the clinical needs via regulating time during the electrospun process.Figure 2 The photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes.Figure 3 SEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. The photography and microscopic imaging of PDT-chemo stent. The prototype of PDT-chemo stent in each step of electrospun/electrosprayed process is presented under the photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer by PCL/PEG = 9:1 with GEM loaded; (d) covered with outer drug-storing layer by PCL/PEG = 1:4 with HP loaded; (e) the prototype of PDT-chemo stent collected from cylindrical collector. Optical microscopy shows the extruded polymer from the syringe of electrspinning/electrospraying processes. SEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from PDT-chemo stent was observed by SEM imaging, which showed that the membrane was constructed of two sides with the backing layer and drug-storing layer. To further confirm the electrosprayed amount of the drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO solution for further analysis. The results revealed that, with the increasing concentration of the HP in PEG/PCL solution from 0.67 mg/ml to 6.67 mg/ml, the density of the HP from the electrosprayed membrane increased gradually from 8.72 μg/cm2 to 63.59 μg/cm2 at most, which was around 0.7 fold to the HP containing solution, as seemed to demonstrate 70% of electrosprayed efficiency (Figure  4a). By the same method, GEM had almost 100% electrosprayed efficiency (Figure  4b) and had a tendency of increasing electrosprayed efficiency by mixing with PCL/PEG blends due to its high hydrophilicity [30]. The corresponding results of the concentration of the drug in the solution and the density of the covered membrane could be an useful information to further imitate a clinical dose regimen for cholangiocarcinoma treatment.Figure 4 The electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis. 
The electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is illustrated on the vertical axis of this graph, while the drug concentration of HP and GEM in PCL/PEG blends solution respectively is illustrated on the horizontal axis. Effect of drug release As illustrated in Figure  1, this PDT-chmo stent was designed to obtain a two-phased drug release pattern, which is composed of a first burst-releasing phase and a later slow-releasing phase with combination therapy for cholangiocarcinoma treatment. Hydrophilic PEG was used as a regulator of drug release from PCL/PEG blends [31]. Figure  5 displays the drug releasing profiles from the covered membrane. At each releasing time, the cumulative amounts of HP from PCL/PEG = 1:4 membrane, released in the first hour (P < 0.01 relative to GEM), had a high initial burst 80% and reached maximal cumulative release of nearly 98% within 24 h. Whereas, compared to HP in the first hour, GEM within PCL/PEG = 9:1 membrane showed relatively slow drug release in the first hour (50%). The significant difference of releasing kinetics between HP and GEM was observed within 24 h, as indicated drug-releasing rate can be regulated within 24 h by adjusting PEG and PCL compositions, then both drugs complete releasing occurred during the time span between 24 h and 72 h. The mechanism of drug release was reported by Liu et al. [32] and Lei et al. [14] that PEG acted as a pore former in PCL/PEG blends, where the releasing rate from co-localization of protein/drug and blends were proportional to PEG content. The hydrophilic PEG in PCL/PEG blends easily acted on aqueous solution, which resulted in the formation of swollen structure and subsequently increased the drug-releasing rate, as indicated the kinetics of drug releasing was mainly due to the degradation of the PCL/PEG blends [14]. Although the slow releasing rate of GEM may not exclude the possibility of outer drug-storing layer (PCL/PEG = 1:4) to delay parts of GEM release, we consider that PCL/PEG = 1:4 membrane with high concentration of hydrophilic PEG polymer will interact fast with aqueous solution, leading the membrane to degrade quickly within the first few minutes [14]. Therefore, we suggest PCL and PEG with different molar ratio for controlling the polymer degradation rate be the major factor to regulate drugs releasing rate. In order to better meet the needs of clinical application, the thickness of covered membrane and membrane composition could be further easily improved according to our requirements by electrospinning/electrospraying technique [18, 19]. After drug eluting in PBS solution, the surface morphology of the covered membrane was investigated by SEM imaging (Figure  6). Generally, fine fiber architecture of the backing layer was covered with roughness of the drug-storing layer (Figure  6a). After 72 h of drug eluting, rough surface of the drug-storing layer was converted to alignment (Figure  6b). This orderly structure indicates the degradation of PCL/PEG blends from the surface, and only PU backing layer and partial PCL/PEG blends left on the layer can be observed. The thickness of the cross section was reduced from around 180 μm (Figure  6c) to 120 μm (Figure  6d). The SEM imaging further confirmed the drug release kinetics was mainly because of the degradation of PCL/PEG membrane, not due to the diffusion or permeation of drugs through the membrane.Figure 5 The profile of drug release. 
Release of HP and GEM from the covered membrane at each predetermined time point in PBS (mean ± STD, n = 3). Values of p < 0.01 were considered statistically significant.Figure 6 SEM imaging of the eluted covered membrane. After 72 h of drug elution in PBS, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging. Herein, GEM, a prodrug and deoxycytidine analog, is one of the first-line chemotherapeutic drugs for advanced cholangiocarcinoma. Once transported into the cell, it is phosphorylated to its active form, which inhibits DNA synthesis [33]. Therefore, the low initial burst of GEM may help prevent the toxicity associated with high GEM concentrations, while the burst release of HP can provide simultaneous PDT, triggered by the endoscopic light source once the stent is placed in the bile duct. Overall, the prototype PDT-chemo stent demonstrates the proof of concept of localized combination therapy for cholangiocarcinoma. In principle, the drug-releasing rate could be further tuned to clinical needs by changing the composition and concentration of the electrosprayed blend solution, the structure and type of the fibers, and the amount of additives [34, 35]. 
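The discussion above attributes the release kinetics to membrane degradation rather than diffusion. A common way to probe this distinction, although not performed in the paper, is to fit the early portion of the release curve to the Korsmeyer-Peppas power law F = k·t^n and inspect the exponent n. The data points below are hypothetical and the fit is only a sketch of the approach.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch (not an analysis from the paper): Korsmeyer-Peppas fit to
# early-time release data (valid for roughly the first 60% of release). For a
# thin film, n ~ 0.5 suggests Fickian diffusion, while n approaching 1 points
# toward erosion/degradation-controlled (Case II) release.

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

t_h = np.array([0.125, 0.25, 0.5, 1.0])          # early time points (h), hypothetical
frac_released = np.array([0.09, 0.15, 0.26, 0.45])  # hypothetical fractional release

(k, n), _ = curve_fit(korsmeyer_peppas, t_h, frac_released, p0=(0.5, 0.5))
print(f"k = {k:.2f}, n = {n:.2f}")  # n well above 0.5 would be consistent with erosion control
```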
Prototype of PDT-chemo stent: To make the stent with covered membrane more uniformly adhered to the surface and easily regulate the thickness according to the clinical needs, electrospinning/electrospraying has been regarded as the appropriate means for demonstrating the PDT-chemo stent [18, 19]. The covered stent can be manufactured by several electrospinning methods: post-spinning modification, drug/polymer blends, emulsion electrospinning and core-shell electrospinning [21]. Drug/polymer blends technique could easily mix the drug with polymer directly and form a layer of membrane to achieve sustained drug release. Therefore, in this study, the prototype of PDT-chemo stent was constructed by drug/polymer blends technique via electrospinning and electrospraying dual-processes. The backing layer was electrospun first from PU polymer solution, followed by the electrospraying process from PCL/PEG blends solution with drug loaded. Electrospinning is the process with voltage to extrude polymer solution into fine fibers for the production of fine-fibers-covered stents. In our case, fine fibers could support the superior mechanical properties of the membrane and be introduced as the backing layer for effectively controlling majority of the inner drugs released to the surrounding tissue. During electrospinning, the organic solvent, which could be toxic to cells, will be completely evaporated due to its high volatility [22]. Electrospraying has similar preparation process to electrospinning but is usually used with higher voltage and lower polymer density, which makes the polymer solution more easily broken up into droplets [23, 24]. The concept of electrospraying process was used to increase layer-to-layer adhesion, which could avoid drug-storing layer cracking and separating from backing layer during the stent expending. The covered membrane is composed of three layers with PU, PCL/PEG = 9:1 blends and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, while PCL/PEG blends were used in drug-storing layers with GEM and HP loaded (Figure  1). Several materials (eg. Silicone, PTFE, and PU) have been approved by US Food and Drug Administration (FDA) as the covered membrane on the stent [13]. Silicone was reported to cause the acute inflammation to the surrounding tissue [25]. The PTFE, however, could not be dissolved into any solvent [26]. Therefore, as the material of the backing layer in this study, PU has been found able to be formed sufficiently thin and flat on the metallic stent by electrospinning process and with elastic properties to allow the covered stent to be homogeneously expanded [18]. Additionally, PU membrane has been proved to prevent the tumor ingrowth effectively and to reduce the occlusion rate of expandable metal stent in patients with malignant biliary obstruction [15]. The distinct biodegradable properties of PCL and PEG blending were regarded as an approach to controlling the drug-releasing rate in different blending ratio. The selection of PCL is due to its good biocompatibility, drug permeability and relatively slow degradation rate [27, 28]. The hydrophilic PEG was selected to play a role in resulting in regulating the drug-releasing rate, due to its easily acting on aqueous solution [14, 29]. The prototype of PDT-chemo stent was imaged by photography and optical microscopy, which proved that the membrane was homogeneously covered on the stent surface in each electrospun process (Figure  2). 
Optical microscopy (Figure 2b) revealed that PU formed fine fibers around 5 to 10 μm wide, whereas the PCL/PEG blend solution favored the generation of submicron droplets (Figure 2c, d). In the photograph, the membrane appears brown (Figure 2d) because of the homogeneously dispersed HP coated on top. Finally, the prototype PDT-chemo stent could be easily removed from the cylindrical collector without any surface damage (Figure 2e). The surface morphology and cross section of the film were further investigated by SEM imaging (Figure 3), which showed that the membrane consists of two sides: the backing layer and the drug-storing layer. The backing layer was built of fine fibers in a network structure, while the drug-storing layer, formed from electrosprayed droplets, roughly coated the backing layer. The cross section of the membrane was smooth, with a thickness ranging from 170 to 190 μm. The membrane thickness can be optimized to clinical needs by adjusting the duration of the electrospinning process. Figure 2 The photography and microscopic imaging of the PDT-chemo stent. The prototype of the PDT-chemo stent at each step of the electrospinning/electrospraying process is presented under photography and optical microscopic imaging. (a) 316 L stainless bare stent; (b) covered with PU backing layer; (c) covered with inner drug-storing layer of PCL/PEG = 9:1 loaded with GEM; (d) covered with outer drug-storing layer of PCL/PEG = 1:4 loaded with HP; (e) the prototype of the PDT-chemo stent collected from the cylindrical collector. Optical microscopy shows the polymer extruded from the syringe during the electrospinning/electrospraying processes. Figure 3 SEM imaging of the covered membrane. The surface morphology and cross section of the covered membrane from the PDT-chemo stent were observed by SEM imaging, which showed that the membrane is constructed of two sides: the backing layer and the drug-storing layer. To further confirm the electrosprayed amount of drug on the covered membrane, the membrane removed from the stent was completely dissolved in DMSO for analysis. The results revealed that, as the HP concentration in the PCL/PEG solution increased from 0.67 mg/ml to 6.67 mg/ml, the HP density on the electrosprayed membrane increased gradually from 8.72 μg/cm2 to at most 63.59 μg/cm2, around 0.7-fold of the amount in the HP-containing solution, corresponding to an electrospraying efficiency of roughly 70% (Figure 4a).
By the same method, GEM showed almost 100% electrospraying efficiency (Figure 4b), with a tendency toward higher efficiency when mixed with the PCL/PEG blends owing to its high hydrophilicity [30]. The relationship between the drug concentration in the solution and the drug density on the covered membrane provides useful information for approximating a clinical dose regimen for cholangiocarcinoma treatment (a simple worked example of this efficiency calculation is sketched after this section). Figure 4 The electrosprayed amount of the drug on the covered membrane. The density of (a) HP and (b) GEM coated on the covered membrane is shown on the vertical axis, while the concentration of HP and GEM in the PCL/PEG blend solution is shown on the horizontal axis. Effect of drug release: As illustrated in Figure 1, this PDT-chemo stent was designed to give a two-phase drug release pattern, composed of an initial burst-releasing phase and a later slow-releasing phase, for combination therapy of cholangiocarcinoma. Hydrophilic PEG was used as a regulator of drug release from the PCL/PEG blends [31]. Figure 5 displays the drug release profiles from the covered membrane. HP from the PCL/PEG = 1:4 membrane showed a high initial burst of 80% in the first hour (p < 0.01 relative to GEM) and reached a maximal cumulative release of nearly 98% within 24 h. By contrast, GEM within the PCL/PEG = 9:1 membrane was released relatively slowly in the first hour (50%). The significant difference in release kinetics between HP and GEM was observed within 24 h, indicating that the release rate can be regulated within this window by adjusting the PEG and PCL composition; both drugs were then completely released between 24 h and 72 h. Liu et al. [32] and Lei et al. [14] reported that PEG acts as a pore former in PCL/PEG blends, with the release rate of co-localized protein/drug proportional to the PEG content. The hydrophilic PEG in the blends readily interacts with aqueous solution, forming a swollen structure and thereby increasing the drug release rate, indicating that the release kinetics is governed mainly by degradation of the PCL/PEG blends [14]. Although the slow release rate of GEM does not exclude the possibility that the outer drug-storing layer (PCL/PEG = 1:4) delays part of the GEM release, we consider that the PCL/PEG = 1:4 membrane, with its high proportion of hydrophilic PEG, interacts rapidly with aqueous solution and therefore degrades within the first few minutes [14]. We therefore suggest that the PCL/PEG molar ratio, which controls the polymer degradation rate, is the major factor regulating the drug release rate. To better meet clinical needs, the thickness and composition of the covered membrane can be readily adjusted by the electrospinning/electrospraying technique [18, 19]. After drug elution in PBS, the surface morphology of the covered membrane was examined by SEM imaging (Figure 6).
Before elution, the fine fiber architecture of the backing layer was covered by the rough drug-storing layer (Figure 6a). After 72 h of drug elution, the rough surface of the drug-storing layer had become aligned (Figure 6b). This orderly structure indicates degradation of the PCL/PEG blends from the surface, leaving only the PU backing layer and part of the PCL/PEG blends. The thickness of the cross section was reduced from around 180 μm (Figure 6c) to 120 μm (Figure 6d). SEM imaging thus further confirmed that the drug release kinetics was governed mainly by degradation of the PCL/PEG membrane rather than by diffusion or permeation of the drugs through the membrane. Figure 5 The profile of drug release. Release of HP and GEM from the covered membrane at each predetermined time in PBS solution (mean ± STD, n = 3). Values of p < 0.01 were considered statistically significant. Figure 6 SEM imaging of the eluted covered membrane. After 72 h of drug elution in PBS solution, the surface morphology (a, b) and cross section (c, d) of the covered membrane were observed by SEM imaging. GEM, a prodrug and deoxycytidine analog, is one of the first-line chemotherapeutic drugs for advanced cholangiocarcinoma. Once transported into the cell, GEM is phosphorylated to an active form that inhibits DNA synthesis [33]. Therefore, the low initial burst of GEM may help to prevent the toxicity associated with high GEM concentrations, while the burst release of HP can provide simultaneous PDT, triggered by the endoscopic light source once the stent is placed in the bile duct. Overall, the prototype PDT-chemo stent demonstrates the proof of concept of localized combination therapy for cholangiocarcinoma. On this basis, the drug release rate could be further tuned by changing the electrosprayed blend polymer solution, its concentration, the structure and type of fibers, and the amount of additives to meet clinical needs [34, 35]. Conclusions: In this preliminary study, we have successfully developed a prototype of a tri-layered covered stent combining PDT and chemotherapy. This PDT-chemo stent was prepared by dual electrospinning and electrospraying processes. The membrane is composed of a PU backing layer as the base and PCL/PEG drug-storing layers with GEM and HP on top. Mixing the drugs with different PCL and PEG compositions proved an effective strategy for regulating drug release from the membrane. The release study confirmed a two-phase drug release pattern, supporting the hypothesis that the PDT-chemo stent provides a first burst-releasing phase from HP and a later slow-releasing phase from GEM. This two-phase drug-eluting stent may provide a new prospect for localized, controlled-release treatment of cholangiocarcinoma.
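As a companion to the loading data above (Figure 4), the following sketch shows how an electrospraying (coating) efficiency of the kind quoted in the text (~70% for HP, ~100% for GEM) can be estimated from the measured surface density and the drug content of the sprayed solution. The membrane area, sprayed volume, and function name are illustrative assumptions, not values reported in the study; only the HP surface density and solution concentration come from the text.

```python
# Minimal sketch (assumptions: membrane_area_cm2 and sprayed_volume_ml are
# hypothetical; the HP loading of 63.59 ug/cm2 from a 6.67 mg/ml solution is
# the value quoted in the text).
def coating_efficiency(measured_density_ug_cm2: float,
                       membrane_area_cm2: float,
                       solution_conc_mg_ml: float,
                       sprayed_volume_ml: float) -> float:
    """Ratio of drug recovered on the membrane to drug in the sprayed solution."""
    drug_on_membrane_ug = measured_density_ug_cm2 * membrane_area_cm2
    drug_in_solution_ug = solution_conc_mg_ml * 1000.0 * sprayed_volume_ml
    return drug_on_membrane_ug / drug_in_solution_ug

# Example with assumed geometry: an 11 cm2 membrane sprayed from 0.15 ml of
# 6.67 mg/ml HP solution, reaching the reported 63.59 ug/cm2 loading.
eff = coating_efficiency(63.59, 11.0, 6.67, 0.15)
print(f"estimated HP electrospraying efficiency: {eff:.0%}")
```

In the paper itself the efficiency is reported simply as the ~0.7-fold ratio between the membrane loading and the drug content of the sprayed solution; the assumed geometry above only makes that ratio explicit.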
Background: Combining a biliary stent with photodynamic therapy and chemotherapy appears to be a beneficial palliative treatment for unresectable cholangiocarcinoma. However, intravenous delivery to the target tumor limits drug distribution and causes serious side effects in non-target organs. Therefore, in this study we developed a localized drug-eluting stent, named the PDT-chemo stent, covered with gemcitabine (GEM) and hematoporphyrin (HP). Methods: The prototype PDT-chemo stent was made through dual electrospinning and electrospraying processes, in which an electrical charge drives a polymer liquid to cover the stent with a drug-storing membrane. The design used PU as the backing-layer material, and PCL/PEG blends at molar ratios of 9:1 and 1:4 were used in two drug-storing layers loaded with GEM and HP, respectively. Results: Optical microscopy revealed that the backing layer was formed of fine fibers from electrospinning, while the drug-storing layers were formed from droplets produced by electrospraying. The covered membrane, whose morphology was observed by scanning electron microscopy (SEM), covered the stent surface homogeneously without cracks. GEM showed almost 100% electrospraying efficiency versus 70% for HP loaded on the covered membrane, owing to the different solubility of the drugs in the PCL/PEG blends. The drug release study confirmed a two-phase drug release pattern regulated by the different molar ratios of the PCL/PEG blend polymer. Conclusions: The results show that the PDT-chemo stent provides a first burst-releasing phase from HP and a later slow-releasing phase from GEM. This two-phase drug-eluting stent may provide a new prospect for localized, controlled-release treatment of cholangiocarcinoma.
Background: Cholangiocarcinoma is the second most common hepatobiliary tumor. It is generally locally invasive, occluding the biliary tree and leading to cholangitis and liver failure. Until now, tumor resection has been the only potential cure for cholangiocarcinoma [1, 2]. Unfortunately, even with resection the five-year survival rate is at most 11%, and more than 50% of patients remain unresectable at diagnosis [3, 4]. Inoperable patients with advanced cholangiocarcinoma typically have obstructive cholestasis, and the primary standard treatment has been biliary stenting [5]. However, this treatment prolongs survival only slightly, by providing temporary biliary drainage, so a secondary treatment is required to prolong survival by reducing tumor burden. Chemotherapy and radiotherapy are classical treatments, but their results are also disappointing [3, 5]. Photodynamic therapy (PDT) is a new and promising treatment option that requires a photosensitizer, a light source, and oxygen [6]. The concept of PDT is that a photosensitizer exposed to a specific wavelength of light generates cytotoxic reactive oxygen species (ROS) that kill tumor cells [7]. Additionally, previous studies have shown that PDT can inhibit P-glycoprotein-mediated drug efflux; a combination of PDT and chemotherapy can therefore improve the accumulation of chemo-drug in tumor cells and reduce chemo-drug resistance caused by P-glycoprotein efflux [8]. Another advantage of combining PDT with chemo-drugs is the capacity to induce antitumor immunity [9]. However, intravenous delivery to the target tumor limits drug distribution and causes serious side effects in non-target organs. After receiving PDT, patients have to stay indoors, away from bright light, for 3 to 4 days to avoid skin photosensitivity [10]. To decrease these side effects, the aim of this study was to develop a localized drug-eluting stent, named the PDT-chemo stent, by incorporating gemcitabine (GEM) and hematoporphyrin (HP) into the stent covering. A drug-eluting stent is considered a way to maximize the drug concentration in the localized tumor environment while minimizing exposure of non-target organs [11, 12]. In clinical practice, this PDT-chemo stent could be inserted into the tumor area via endoscopic retrograde cholangiography [13], followed by a specific light source delivered through the endoscope to activate the photosensitizer for PDT. Meanwhile, the chemo-drug GEM would be released continuously as the second step, providing chemotherapy. The multimodal PDT-chemo stent thus aims not only to increase drug accumulation within the neoplastic tissue but also to decrease side effects on non-target tissues. In the past decade, stents with drug-incorporated covered membranes have received increasing attention as drug-eluting stents because they provide mechanical support and release sufficient drug to prevent restenosis or treat malignancy [12, 14, 15]. Several techniques have been developed to manufacture the covered membrane on the stent surface, including dip coating [12, 16], compression [14], and electrospinning [17, 18]. Because of concerns about the adherence and flexibility of the membrane covering the stent [19], we used the electrospinning technique.
Electrospinning uses a high voltage to extrude a polymer solution into fine fibers for the production of micro/nano-fiber-covered stents [20]. The technique allows the covered membrane to adhere uniformly to the stent and the thickness to be regulated easily according to clinical needs [18, 19]. Therefore, we selected electrospinning to construct the backing layer on a 316 L stainless steel stent and used electrospraying to deposit GEM and HP onto the covered membrane. To provide membrane flexibility, polyurethane (PU), with sufficient elasticity, was used as the drug-free backing layer; it directs the majority of the inner drug release toward the tissue-contacting side and gives additional mechanical support to the main drug layers [18]. Polycaprolactone (PCL) and polyethylene glycol (PEG) blends at different molar ratios were selected as the two drug-storing layers to control the drug release rate. PCL/PEG at 1:4 was used as the outer releasing layer loaded with HP, while PCL/PEG at 9:1 was used as the inner releasing layer loaded with GEM (Figure 1). To our knowledge, this is the first study to show that a biliary stent can be combined with PDT and chemotherapy for localized cholangiocarcinoma treatment. Figure 1 Schematic illustration of the PDT-chemo stent as a biliary stent for cholangiocarcinoma treatment. The covered membrane is composed of three layers: PU, PCL/PEG = 9:1 blends, and PCL/PEG = 1:4 blends. PU was employed as the material of the backing layer, and PCL/PEG blends were used in the drug-storing layers loaded with GEM and HP. Conclusions: In this preliminary study, we have successfully developed a prototype of a tri-layered covered stent combining PDT and chemotherapy. This PDT-chemo stent was prepared by dual electrospinning and electrospraying processes. The membrane is composed of a PU backing layer as the base and PCL/PEG drug-storing layers with GEM and HP on top. Mixing the drugs with different PCL and PEG compositions proved an effective strategy for regulating drug release from the membrane. The release study confirmed a two-phase drug release pattern, supporting the hypothesis that the PDT-chemo stent provides a first burst-releasing phase from HP and a later slow-releasing phase from GEM. This two-phase drug-eluting stent may provide a new prospect for localized, controlled-release treatment of cholangiocarcinoma.
Background: Combining a biliary stent with photodynamic therapy and chemotherapy appears to be a beneficial palliative treatment for unresectable cholangiocarcinoma. However, intravenous delivery to the target tumor limits drug distribution and causes serious side effects in non-target organs. Therefore, in this study we developed a localized drug-eluting stent, named the PDT-chemo stent, covered with gemcitabine (GEM) and hematoporphyrin (HP). Methods: The prototype PDT-chemo stent was made through dual electrospinning and electrospraying processes, in which an electrical charge drives a polymer liquid to cover the stent with a drug-storing membrane. The design used PU as the backing-layer material, and PCL/PEG blends at molar ratios of 9:1 and 1:4 were used in two drug-storing layers loaded with GEM and HP, respectively. Results: Optical microscopy revealed that the backing layer was formed of fine fibers from electrospinning, while the drug-storing layers were formed from droplets produced by electrospraying. The covered membrane, whose morphology was observed by scanning electron microscopy (SEM), covered the stent surface homogeneously without cracks. GEM showed almost 100% electrospraying efficiency versus 70% for HP loaded on the covered membrane, owing to the different solubility of the drugs in the PCL/PEG blends. The drug release study confirmed a two-phase drug release pattern regulated by the different molar ratios of the PCL/PEG blend polymer. Conclusions: The results show that the PDT-chemo stent provides a first burst-releasing phase from HP and a later slow-releasing phase from GEM. This two-phase drug-eluting stent may provide a new prospect for localized, controlled-release treatment of cholangiocarcinoma.
11,033
340
[ 1086, 86, 330, 81, 162, 5160, 1554, 1018 ]
10
[ "drug", "membrane", "peg", "stent", "covered", "pcl", "layer", "pcl peg", "covered membrane", "solution" ]
[ "potential cure cholangiocarcinoma", "cholangiocarcinoma treatment covered", "treatment advanced cholangiocarcinoma", "therapy cholangiocarcinoma based", "regimen cholangiocarcinoma treatment" ]
null
[CONTENT] Cholangiocarcinoma | Photodynamic therapy | Chemotherapy | Biliary drug-eluting stent | Electrospinning and Electrospraying [SUMMARY]
[CONTENT] Cholangiocarcinoma | Photodynamic therapy | Chemotherapy | Biliary drug-eluting stent | Electrospinning and Electrospraying [SUMMARY]
null
[CONTENT] Cholangiocarcinoma | Photodynamic therapy | Chemotherapy | Biliary drug-eluting stent | Electrospinning and Electrospraying [SUMMARY]
[CONTENT] Cholangiocarcinoma | Photodynamic therapy | Chemotherapy | Biliary drug-eluting stent | Electrospinning and Electrospraying [SUMMARY]
[CONTENT] Cholangiocarcinoma | Photodynamic therapy | Chemotherapy | Biliary drug-eluting stent | Electrospinning and Electrospraying [SUMMARY]
[CONTENT] Deoxycytidine | Drug-Eluting Stents | Equipment Design | Microscopy, Electron, Scanning | Photochemotherapy | Polymers | Gemcitabine [SUMMARY]
[CONTENT] Deoxycytidine | Drug-Eluting Stents | Equipment Design | Microscopy, Electron, Scanning | Photochemotherapy | Polymers | Gemcitabine [SUMMARY]
null
[CONTENT] Deoxycytidine | Drug-Eluting Stents | Equipment Design | Microscopy, Electron, Scanning | Photochemotherapy | Polymers | Gemcitabine [SUMMARY]
[CONTENT] Deoxycytidine | Drug-Eluting Stents | Equipment Design | Microscopy, Electron, Scanning | Photochemotherapy | Polymers | Gemcitabine [SUMMARY]
[CONTENT] Deoxycytidine | Drug-Eluting Stents | Equipment Design | Microscopy, Electron, Scanning | Photochemotherapy | Polymers | Gemcitabine [SUMMARY]
[CONTENT] potential cure cholangiocarcinoma | cholangiocarcinoma treatment covered | treatment advanced cholangiocarcinoma | therapy cholangiocarcinoma based | regimen cholangiocarcinoma treatment [SUMMARY]
[CONTENT] potential cure cholangiocarcinoma | cholangiocarcinoma treatment covered | treatment advanced cholangiocarcinoma | therapy cholangiocarcinoma based | regimen cholangiocarcinoma treatment [SUMMARY]
null
[CONTENT] potential cure cholangiocarcinoma | cholangiocarcinoma treatment covered | treatment advanced cholangiocarcinoma | therapy cholangiocarcinoma based | regimen cholangiocarcinoma treatment [SUMMARY]
[CONTENT] potential cure cholangiocarcinoma | cholangiocarcinoma treatment covered | treatment advanced cholangiocarcinoma | therapy cholangiocarcinoma based | regimen cholangiocarcinoma treatment [SUMMARY]
[CONTENT] potential cure cholangiocarcinoma | cholangiocarcinoma treatment covered | treatment advanced cholangiocarcinoma | therapy cholangiocarcinoma based | regimen cholangiocarcinoma treatment [SUMMARY]
[CONTENT] drug | membrane | peg | stent | covered | pcl | layer | pcl peg | covered membrane | solution [SUMMARY]
[CONTENT] drug | membrane | peg | stent | covered | pcl | layer | pcl peg | covered membrane | solution [SUMMARY]
null
[CONTENT] drug | membrane | peg | stent | covered | pcl | layer | pcl peg | covered membrane | solution [SUMMARY]
[CONTENT] drug | membrane | peg | stent | covered | pcl | layer | pcl peg | covered membrane | solution [SUMMARY]
[CONTENT] drug | membrane | peg | stent | covered | pcl | layer | pcl peg | covered membrane | solution [SUMMARY]
[CONTENT] drug | stent | tumor | pdt | treatment | blends | peg blends | biliary | chemo | pcl peg blends [SUMMARY]
[CONTENT] membrane | stent | peg | pcl | syringe | kv | medium | pcl peg | hfip | solution [SUMMARY]
null
[CONTENT] phase | release | stent | releasing phase | pdt | study | drug | composed | eluting | hypothesis pdt [SUMMARY]
[CONTENT] membrane | drug | stent | peg | pcl | pcl peg | layer | covered | covered membrane | blends [SUMMARY]
[CONTENT] membrane | drug | stent | peg | pcl | pcl peg | layer | covered | covered membrane | blends [SUMMARY]
[CONTENT] ||| ||| GEM | hematoporphyrin [SUMMARY]
[CONTENT] ||| PU | PCL/PEG | 9:1 | 1:4 | two | GEM | HP [SUMMARY]
null
[CONTENT] first | HP | GEM ||| two [SUMMARY]
[CONTENT] ||| ||| GEM | hematoporphyrin ||| ||| PU | PCL/PEG | 9:1 | 1:4 | two | GEM | HP ||| ||| ||| ||| GEM | almost 100% | 70% | PEG/PCL ||| two | PEG/PCL ||| first | HP | GEM ||| two [SUMMARY]
[CONTENT] ||| ||| GEM | hematoporphyrin ||| ||| PU | PCL/PEG | 9:1 | 1:4 | two | GEM | HP ||| ||| ||| ||| GEM | almost 100% | 70% | PEG/PCL ||| two | PEG/PCL ||| first | HP | GEM ||| two [SUMMARY]
Tanshinone IIA inhibits metastasis after palliative resection of hepatocellular carcinoma and prolongs survival in part via vascular normalization.
23137165
Promotion of endothelial normalization restores tumor oxygenation and obstructs tumor cells invasion, intravasation, and metastasis. We therefore investigated whether a vasoactive drug, tanshinone IIA, could inhibit metastasis by inducing vascular normalization after palliative resection (PR) of hepatocellular carcinoma (HCC).
BACKGROUND
A liver orthotopic double-tumor xenograft model in nude mouse was established by implantation of HCCLM3 (high metastatic potential) and HepG2 tumor cells. After removal of one tumor by PR, the effects of tanshinone IIA administration on metastasis, tumor vascularization, and survival were evaluated. Tube formation was examined in mouse tumor-derived endothelial cells (TECs) treated with tanshinone IIA.
METHODS
PR significantly accelerated residual hepatoma metastases. Tanshinone IIA did not inhibit growth of single-xenotransplanted tumors, but it did reduce the occurrence of metastases. Moreover, it inhibited PR-enhanced metastases and, more importantly, prolonged host survival. Tanshinone IIA alleviated residual tumor hypoxia and suppressed epithelial-mesenchymal transition (EMT) in vivo; however, it did not downregulate hypoxia-inducible factor 1α (HIF-1α) or reverse EMT of tumor cells under hypoxic conditions in vitro. Tanshinone IIA directly strengthened tube formation of TECs, associated with vascular endothelial cell growth factor receptor 1/platelet derived growth factor receptor (VEGFR1/PDGFR) upregulation. Although the microvessel density (MVD) of residual tumor tissue increased after PR, the microvessel integrity (MVI) was still low. While tanshinone IIA did not inhibit MVD, it did dramatically increase MVI, leading to vascular normalization.
RESULTS
Our results demonstrate that tanshinone IIA can inhibit the enhanced HCC metastasis associated with PR. Inhibition results from promoting VEGFR1/PDGFR-related vascular normalization. This application demonstrates the potential clinical benefit of preventing postsurgical recurrence.
CONCLUSIONS
[ "Abietanes", "Animals", "Antineoplastic Agents, Phytogenic", "Carcinoma, Hepatocellular", "Cell Growth Processes", "Cell Line, Tumor", "Humans", "Liver Neoplasms", "Male", "Mice", "Mice, Inbred BALB C", "Mice, Nude", "Neoplasm Metastasis", "Random Allocation", "Xenograft Model Antitumor Assays" ]
3506473
Background
Surgical resection is the most promising strategy for early-stage hepatocellular carcinoma (HCC); however, the 5-year risk of recurrence is as high as 70% [1]. The surgery is actually palliative resection (PR), owing to the existence of satellites and microvascular invasion, and these residual tumor nests can actually be stimulated to grow by the PR [2,3]. Although several treatments, such as interferon-alpha and sorafenib, have been proposed to diminish relapse [4,5], prometastatic side effects of these options have also been observed [6,7]. Residual tumor cells may stimulate angiogenesis, which is needed for tumor growth [8-10]. However, the resulting neovessels may be disordered and inefficiently perfused, resulting in hypoxic conditions [10,11]. Both abnormal endothelium and pericytes integrated into the capillary wall, along with deficient coverage, could be responsible for the vascular architectural abnormalities [12]. The resulting hypoxia creates a hostile tumor milieu in which tumor cells may migrate via intra- or extravasation through a leaky vessel [9,13]. In effect, surgery-induced hypoxia unfavorably impacts the prognosis of cancer patients by inducing angiogenesis [14]. Therefore, restoring oxygen supply via vascular normalization may reduce metastasis, even tumor growth. Mazzone et al. [13] showed that downregulation of the oxygen sensing molecule PHD2 can restore tumor oxygenation and inhibit metastasis via endothelial normalization, where endothelial cells form a protective phalanx that blocks metastasis. Although several methods have been shown experimentally to promote vessel remodeling, only seldom has any of them found use in the clinic [9,10,15,16]. Tanshinone IIA (Tan IIA) is an herbal monomer with a clear chemical structure, isolated from Salvia miltiorrhiza. In Chinese traditional medicine, S. miltiorrhiza is considered to promote blood circulation for removing blood stasis and improve microcirculation. Some of these effects could include vessel normalization. We have reported that an herbal formula, Songyou Yin, can attenuate HCC metastases [17], and S. miltiorrhiza is one of the five constituents of the formula [18]. Tan IIA exhibits direct vasoactive [19,20] and certain antitumor properties [21]. It is possible that Tan IIA may indirectly decrease metastasis in HCC following PR by promoting blood vessel normalization; however, there is to date no evidence supporting this hypothesis. We aimed to identify inhibitory effects of Tan IIA on HCC metastasis for delineating a possible mechanism of action of the compound, with a main focus on tumor vessel maturity as a potential marker for evaluating Tan IIA treatment responses.
Methods
Cell lines, animal model, and drug: The human HCC cell lines HCCLM3-RFP, which has high metastatic potential [25], and HepG2-GFP [26], as well as HUVECs and TECs [27], were used in the studies. Male, athymic BALB/c nu/nu mice of 4–6 weeks of age and weighing approximately 20 g were used as host animals. A metastatic human HCC animal model was established by orthotopic implantation of histologically intact tumor tissue into the nude mouse liver [28]. To explore the protumoral effects of PR, we constructed an orthotopic double-tumor xenograft model, in which two tumor pieces were simultaneously inoculated into the left liver lobe; the inoculation method was as described [28]. After 2 weeks, partial hepatectomy [29] was performed to excise one tumor. The Sham hepatectomy cohort was handled like the PR cohort, but without tumor resection. Tan IIA (sulfotanshinone sodium injection, 5 mg/ml), commercially available from the First Biochemical Pharmaceutical Co. Ltd., Shanghai, China, was used in the in vivo experiments. Tan IIA monomer (Sigma, St. Louis, MO), a reddish lyophilized powder of 99.99% purity, was first dissolved in dimethyl sulfoxide and then diluted with NS to the required concentration for the in vitro studies. Experimental groups and assessment parameters: For IE (1), 30 double-tumor-bearing mice were randomly divided into Sham and PR groups (each n=15) and scheduled to be observed after 35 d. In IE (2), the single-tumor xenograft model was used. Mice were divided into four groups (each n=18) and received daily injections of NS or Tan IIA (1, 5, or 10 mg/kg/d). Tan IIA was diluted with NS. We took 20 g as the average mouse weight (25 g after 21 d), and each mouse received 0.2 mL of solution intraperitoneally for 35 d. In IE (3), the double-tumor xenograft plus PR model was used to examine the effects of Tan IIA on the residual tumor. Mice were divided into two groups (each n=15) after PR and received daily injections of NS or 10 mg Tan IIA/kg/d for 35 d. Mouse weight was measured once every 7 d. After 35 d, six mice from each group were retained to observe survival, and the remainder were sacrificed to measure TV [30], LM, IHM, AM [26], and CTCs, and to perform SEM of tumor vessels. CTCs were enumerated by flow cytometry and expressed as percent CTCs/TV (%) [6]. Twelve mice (IE 1, n = 6) were sacrificed 2 d after resection to examine CTCs shortly after PR.
Cell proliferation and invasion: A Cell Counting Kit-8 (Dojindo, Kumamoto, Japan) was used to assay cell proliferation. The final concentration of Tan IIA was 0.01–1000 μM. Results were expressed as OD at 490 nm. Cell invasiveness was assayed in Matrigel-coated Transwell Invasion Chambers (Corning, Cambridge, MA). Tan IIA was added to cells at final concentrations of 1, 5, or 10 μM, and these cultures were incubated for 48 h. Cells that passed through the chamber membranes were counted. Hypoxia evaluation: Cells were cultured in a Bugbox Hypoxic Workstation (Ruskinn, Mid Glamorgan, UK; 1% O2, 5% CO2, and 94% N2 atmosphere) and incubated with Tan IIA at 1, 5, or 10 μM for 48 h. Normoxic conditions (20% O2, 5% CO2, and 75% N2) were set as control. Pimonidazole immunostaining and HIF-1α expression were defined as hypoxia biomarkers. A Hypoxyprobe™-1 Kit (Hypoxyprobe Inc., Burlington, MA) was used [6]. Isolation of TECs, flow cytometry, and tube formation: Eight tumors from the Sham, PR, or PR + Tan IIA groups were collected. The TECs were isolated by use of anti-CD31 antibody (AB)-coupled magnetic beads (Miltenyi Biotec, Cologne, Germany) and a magnetic cell-sorting system [27], and they were divided into Sham, PR, PR + Tan IIA (in vivo), and PR + Tan IIA (in vitro) groups; TECs isolated from the PR group were incubated with Tan IIA for 48 h.
TEC surface expression of VEGFR1, VEGFR2, EGFR, PDGFR, FGFR1, and CD31 was determined by flow cytometric analysis (R&D, Minneapolis, MN). Receptor density was calculated as the relative fluorescence intensity. In another set of experiments, TECs from the PR + Tan IIA (in vivo) cohort were divided into control and SU6668 (Sigma; a VEGFR1/PDGFR-selective receptor inhibitor) treatment groups. TECs from the PR cohort were also divided into PR, PR + SU6668, PR + Tan IIA (in vitro), and PR + SU6668 + Tan IIA groups. HUVECs and human TECs were separated into control, normoxia + Tan IIA, and hypoxia + Tan IIA groups. Formation of capillary-like structures was observed as described [27]. Immunohistochemistry, immunofluorescence, western blot, and quantitative real-time polymerase chain reaction: Immunohistochemistry [31] of Pimonidazole, CD31, and NG2 [15] was performed on paraffin sections on slides. Primary antibodies to Pimonidazole (1:100), CD31 (1:100; Abcam, Cambridge, MA), and NG2 (1:200; Millipore, Billerica, MA) were used. The integrated optical density (IOD; for Pimonidazole) or the area of positive staining/total area (for CD31 and NG2) was quantified with Image-Pro Plus software [31].
IF double-staining [31] of CD31 (1:50) and NG2 (1:50), and of NG2 and Pimonidazole (1:80), was done on frozen sections and observed under a laser confocal microscope. IF of E-cadherin in cells was also determined. Protein levels of HIF-1α, N-cadherin, E-cadherin, and vimentin were determined by immunoblot analysis. Primary antibodies against HIF-1α (1:1000; Sigma), β-actin, N-cadherin (1:1000; Abcam), E-cadherin (1:400; Santa Cruz Biotechnology, Santa Cruz, CA), and vimentin (1:800; Cell Signaling Technology, Beverly, MA) were used. Levels of mRNA were assessed by polymerase chain reaction (Additional file 1: Table S1) and normalized to the corresponding internal β-actin signal (ΔCt). Relative gene expression values were expressed as 2^−ΔΔCt [30]. Statistical analysis: All statistical analyses were performed with SPSS 16.0 software. The Pearson chi-square test was applied to compare qualitative variables. Quantitative variables were expressed as mean ± standard deviation and analyzed by t-test or one-way analysis of variance followed by the least significant difference test. The Kaplan–Meier method with log-rank test was used for survival analysis. A p value of <0.05 was considered statistically significant. Ethics approval: Animal care and experimental protocols were approved by the Shanghai Medical Experimental Animal Care Commission.
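For readers less familiar with the 2^−ΔΔCt convention mentioned above, the sketch below shows the calculation on hypothetical Ct values. The gene choice and the numbers are made up for illustration only and are not data from this study; β-actin is the internal reference, as stated in the text.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-expression calculation
# (hypothetical Ct values; β-actin is the internal reference gene).
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    delta_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt, treated group
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt, control group
    delta_delta_ct = delta_ct_sample - delta_ct_control     # ΔΔCt
    return 2 ** (-delta_delta_ct)                            # fold change vs. control

# Hypothetical example: a target gene vs. β-actin, treated vs. control tumors
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=16.0,
                           ct_target_control=26.3, ct_ref_control=16.2)
print(f"fold change (2^-ΔΔCt): {fold:.2f}")   # prints 4.00 for these made-up values
```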
Results
PR-induced residual tumor growth and metastasis: As shown for in vivo experiment 1 (IE 1) in Table 1 and Additional file 1: Table S2, the tumor volume (TV) was greater in the PR than in the Sham group (p<0.05, Additional file 1: Figure S1A). Compared with the Sham group, lung metastasis (LM) (HCCLM3) in the PR group significantly increased (p<0.001, Figure 1A and S1B); both intrahepatic metastasis (IHM) and abdominal metastasis (AM) also increased (p<0.01 for IHM, Figures 1B, Additional file 1: Figure S1C, and Additional file 1: Figure S2A; p<0.05 for AM, Figures 1C, Additional file 1: Figure S1D, and Additional file 1: Figure S2B); and circulating tumor cells (CTCs) were elevated both at 2 and 35 d postresection (p<0.001, Additional file 1: Figure S1E). Table 1 Summary of tumor growth, metastasis, and mouse survival from three animal experiments of HCCLM3. a Student's t-test, equal variances assumed. b One-way analysis of variance. Abbreviations: PR, palliative resection; NS, normal saline; CTCs, circulating tumor cells; IE, in vivo experiment; LM, lung metastasis; ND, not done; TV, tumor volume. Figure 1 Schematic diagram of metastases and cumulative survival of tumor-bearing nude mice in the animal experiments. Occurrence of lung (A), intrahepatic (B), and abdominal metastases (C) significantly increased after PR. Tan IIA decreased the numbers of the various metastatic lesions. (D) Survival curves showed that the life span of test mice was significantly shortened by PR and markedly prolonged by Tan IIA. Red: HCCLM3. Green: HepG2. Tan IIA does not directly inhibit tumor growth but reduces metastasis: Results of IE (2), shown in Table 1 and Additional file 1: Table S2, indicate that there were no differences in TV among the four groups (Additional file 1: Figure S1A). Compared with normal saline (NS), LM in the Tan IIA-treated groups (5 or 10 mg/kg/d) decreased (p=0.046 and p<0.001, Additional file 1: Figure S1B).
Compared with Tan IIA treatment at 1 or 5 mg/kg/d, LM in the 10 mg/kg/d group also decreased (p=0.003 and p=0.046, Additional file 1: Figure S1B). Both the IHM and AM rates of the 5 and 10 mg Tan IIA/kg/d treatment groups were significantly reduced (p<0.05, Additional file 1: Figure S1C and D). No AM deriving from HepG2 cells was found. The greatest inhibitory effects were seen at a dosage of 10 mg/kg/d, which was chosen as the intervention dosage in IE (3). Tan IIA inhibits the PR-enhanced residual tumor metastasis: Results of IE (3), summarized in Table 1 and Additional file 1: Table S2, show that administration of Tan IIA after PR resulted in decreased residual TV compared with NS (p<0.05, Additional file 1: Figure S1A). Compared with the PR + NS group, LM (HCCLM3) in the Tan IIA treatment group was significantly decreased (p<0.001, Figure 1A and Additional file 1: Figure S1B); both IHM and AM also decreased (p<0.01 for IHM, Figures 1B, Additional file 1: Figure S1C, and S2A; p<0.01 for AM, Figures 1C, Additional file 1: Figure S1D, and S2B), and CTCs were relatively decreased (p<0.001, Additional file 1: Figure S1E). Tan IIA prolongs host survival: Tan IIA treatment retarded the weight loss of mice after PR (Additional file 1: Figure S3). The estimated survival of PR mice was significantly shorter than that of Sham mice in IE (1) (p<0.01, Table 1 and Additional file 1: Table S2, Figure 1D). In IE (2), compared with NS, Tan IIA prolonged survival by up to 16 d for HCCLM3 (87.000 ± 3.804 vs. 102.667 ± 3.201 d, p=0.004) and 19 d for HepG2 (p<0.001, Table 1 and Additional file 1: Table S2, Figure 1D). The same prolongation of post-PR survival was seen in IE (3) (p=0.001 for both, Table 1 and Additional file 1: Table S2, Figure 1D).
Tan IIA does not inhibit proliferation but minimizes invasiveness of tumor cells: Compared with the dimethylsulfoxide control, the OD values of the 0.01–100 μM Tan IIA treatment groups showed no change (Figure 2A). The number of invasive cells in the 5 and 10 μM Tan IIA groups was significantly reduced (p<0.01), whereas no significant difference was seen for the 1 μM Tan IIA group (Figure 2B and C). Figure 2 Effects of Tan IIA on tumor cell proliferation and invasion. (A) Tan IIA treatment at 0.01–100 μM did not inhibit HCCLM3 and HepG2 cell proliferation, except in the 1000-μM dosage group. (B), (C) Tan IIA treatment with 5 or 10 μM for 48 h inhibited tumor cell invasion (200×). Tan IIA alleviates residual tumor hypoxia in vivo but does not downregulate HIF-1α of tumor cells under hypoxic conditions in vitro: Levels of the immunohistochemical hypoxia marker Pimonidazole and of HIF-1α were significantly increased after PR and were reduced by Tan IIA. In addition, residual tumor epithelial-mesenchymal transition (EMT) was enhanced (N-cadherin and vimentin were both upregulated, while E-cadherin was downregulated), and this effect could be reversed by Tan IIA (Figures 3A, C, and D, and Additional file 1: Figure S4A). These results indicate that PR aggravated residual tumor hypoxia and promoted EMT, and that Tan IIA treatment was able to alleviate hypoxia and inhibit EMT in vivo. Under hypoxic conditions in vitro, HIF-1α, N-cadherin, and vimentin were upregulated in tumor cells and E-cadherin was downregulated; however, Tan IIA did not downregulate these molecules in vitro as might have been anticipated (Figures 3B and Additional file 1: Figure S4B). We did observe that E-cadherin expression could be upregulated by Tan IIA independently of the hypoxia effect (Figures 3E and Additional file 1: Figure S4C). The protein levels observed were consistent with the corresponding mRNA levels (Figures 3 and Additional file 1: Figure S4). Figure 3 Effects of Tan IIA on residual tumor, cell hypoxia, and epithelial-mesenchymal transition. (A) PR promoted HIF-1α expression and caused EMT (upregulated N-cadherin and vimentin, downregulated E-cadherin); Tan IIA reversed EMT in vivo. (B) Hypoxia induced HIF-1α and promoted EMT; Tan IIA had no effect on these parameters in vitro. (C), (D) Tan IIA diminished the enlarged Pimonidazole-positive area of residual tumor after PR (50×). *Compared with Sham group, **compared with PR + NS group; p<0.001. (E) Tan IIA upregulated E-cadherin in tumor cells (200×).
Tan IIA does not affect microvessel density but promotes microvessel integrity: Studies of mouse microvessel density (MVD) used the marker CD31, and NG2 proteoglycan, a marker of vascular pericytes [12,15], was adopted to evaluate microvessel integrity (MVI). The CD31 levels of the PR group were higher than those of the Sham group (p<0.001), and there was no statistically significant difference between the PR + NS and Tan IIA groups (Figures 4A and Additional file 1: Figure S5A). Although NG2 proteoglycan levels showed no change after PR, they were significantly elevated in the PR + Tan IIA group (p<0.01, Figures 4A and Additional file 1: Figure S5B). Immunohistochemistry of CD31, NG2, and Pimonidazole in serial sections showed that in the PR + NS group CD31 was high, NG2 was low, and hypoxia (Pimonidazole) was severe, whereas in the PR + Tan IIA group CD31 was high, NG2 was also high, and hypoxia was slight (Figure 4B). These results indicate that residual tumor MVD increased after PR while MVI remained low; Tan IIA did not inhibit MVD but markedly improved MVI, promoting vessel integrity. Scanning electron microscopy (SEM) further revealed that the vascular wall of PR + Tan IIA tumors was more integrated than in the NS tumors (Figure 4C). Immunofluorescence (IF) results verified that Tan IIA did not affect CD31 levels, elevated NG2, and decreased hypoxia levels (Figure 4D and E). Figure 4 Effects of Tan IIA on tumor microvessel density, microvessel integrity, and hypoxia. (A) CD31 microvessel density (MVD) increased, and NG2 microvessel integrity (MVI) showed no change, after PR. Tan IIA did not affect MVD, but it did elevate MVI. *p<0.01. (B) PR + NS group: CD31 was high, NG2 was low, and Pimonidazole was high.
PR + Tan IIA group: CD31 was high, NG2 was also high, and thus hypoxia was slight (400×). (C) The vascular wall of Tan IIA tumor tissue was more integrated than NS tissue (1200×). (D), (E) Tan IIA increased NG2 levels and reduced residual tumor hypoxia (200×). Tan IIA enhances tube formation and is associated with vascular endothelial growth factor receptor 1 (VEGFR1) and platelet-derived growth factor receptor (PDGFR) upregulation: Tube formation of human umbilical vein endothelial cells (HUVECs) and human tumor-derived endothelial cells (TECs) was enhanced by Tan IIA (Figure 5A and B), and tube formation of TECs from the PR + Tan IIA (in vivo) mouse group was strengthened relative to the PR group (Figure 5D). Using TECs from the PR group, we further found that tube formation was enhanced in vitro by Tan IIA to a degree roughly equivalent to that of the PR + Tan IIA (in vivo) group (Figure 5D). Subsequent flow cytometric analysis of VEGFR1 and PDGFR in TECs indicated that both the percentage of positive cells and the relative cellular fluorescence intensities were significantly higher in the PR + Tan IIA group than in the NS group (p<0.05), with no changes in VEGFR2, EGFR, or FGFR1 levels (Figure 5C). Treatment of cells with the VEGFR1/PDGFR inhibitor SU6668 weakened the Tan IIA-dependent enhancement of tube formation, whereas a low SU6668 concentration did not inhibit tube formation in the absence of Tan IIA (Figure 5D).
(C) The percentage of VEGFR1- and PDGFR-positive TECs and the relative fluorescence intensities. Values were higher in the PR + Tan IIA group than NS group. *p<0.05, **p<0.01. (D) Tube formation was enhanced by Tan IIA in vivo, similar to results seen with Tan IIA incubation in vitro; the addition of the VEGFR1/PDGFR inhibitor SU6668 weakened this enhancement effect. Tube formation of human umbilical vein endothelial cells (HUVECs) and human tumor-derived endothelial cells (TECs) was enhanced by Tan IIA (Figure 5A and B). And tube formation of TECs from the PR + Tan IIA (in vivo) mouse group was strengthened relative to the PR group (Figure 5D). Using TECs from the PR group, we further found that tube formation was enhanced in vitro by Tan IIA, and it was roughly equivalent to the PR + Tan IIA (in vivo) group (Figure 5D). Subsequent flow cytometric analysis of VEGFR1 and PDGFR in TECs indicated that both the percentage of positive mice and relative cellular fluorescence intensities were significantly higher in the PR + Tan IIA group than the NS group (p<0.05), and no changes were seen in VEGFR2, EGFR, and FGFR1 levels (Figure 5C). Treatment of cells with the VEGFR1/PDGFR inhibitor SU6668 weakened the Tan IIA-dependent enhanced tube formation; whereas, low SU6668 concentration was not seen to inhibit tube formation without Tan IIA (Figure 5D). Effects of Tan IIA on endothelial cells tube formation and TEC surface expression of growth factor receptors. (A, B) Tan IIA enhanced tube formation of HUVECs and human TECs (50×). (C) The percentage of VEGFR1- and PDGFR-positive TECs and the relative fluorescence intensities. Values were higher in the PR + Tan IIA group than NS group. *p<0.05, **p<0.01. (D) Tube formation was enhanced by Tan IIA in vivo, similar to results seen with Tan IIA incubation in vitro; the addition of the VEGFR1/PDGFR inhibitor SU6668 weakened this enhancement effect.
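The group comparisons above (for example, NG2-positive area fractions or relative fluorescence intensities in PR + NS versus PR + Tan IIA tumors) amount to a two-sample Student's t-test on per-tumor quantifications. The following is a minimal Python sketch with invented values and hypothetical variable names, not the authors' analysis code; it only illustrates the kind of comparison reported.

```python
# Minimal sketch of a per-group comparison of quantified staining, assuming
# hypothetical per-tumor positive-area fractions (positive area / total area).
# These values are invented for illustration; they are not data from the study.
import numpy as np
from scipy import stats

pr_ns_ng2 = np.array([0.011, 0.014, 0.010, 0.013, 0.012, 0.015])   # PR + NS group
pr_tan_ng2 = np.array([0.024, 0.028, 0.022, 0.030, 0.026, 0.025])  # PR + Tan IIA group

# Student's t-test with equal variances assumed, matching the article's statistics.
t_stat, p_value = stats.ttest_ind(pr_ns_ng2, pr_tan_ng2, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```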
Conclusions
Our findings demonstrate that Tan IIA can inhibit the enhanced metastasis induced by PR, and that it may do so in part via VEGFR1/PDGFR-related vascular normalization. This work carries an important implication: the malignant phenotype of a tumor may be manipulated through vascular pathways, offering an alternative to simple eradication. Our results highlight the potential of proangiogenic, “vessel normalizing” treatment strategies to silence metastasis and prolong patient survival.
[ "Background", "PR-induced residual tumor growth and metastasis", "Tan IIA does not directly inhibit tumor growth but reduces metastasis", "Tan IIA inhibits the PR-enhanced residual tumor metastasis", "Tan IIA prolongs host survival", "Tan IIA does not inhibit proliferation but minimizes invasiveness of tumor cells", "Tan IIA alleviates residual tumor hypoxia in vivo but does not downregulate HIF-1 of tumor cells under hypoxic conditions in vitro", "Tan IIA does not affect microvessel density but promotes microvessel integrity", "Tan IIA enhances tube formation and is associated with vascular endothelial cell growth factor receptor 1 (VEGFR1) and platelet derived growth factor receptor (PDGFR) upregulation", "Cell lines, animal model, and drug", "Experimental groups and assessment parameters", "Cell proliferation and invasion", "Hypoxia evaluation", "Isolation of TECs, flow cytometry, and tube formation", "Immunohistochemistry, immunofluorescence, western blot, and quantitative real-time polymerase chain reaction", "Statistical analysis", "Ethics approval", "Abbreviations", "Competing interests", "Authors’ contributions", "Authors’ information" ]
[ "Surgical resection is the most promising strategy for early-stage hepatocellular carcinoma (HCC); however, the 5-year risk of recurrence is as high as 70% [1]. The surgery is actually palliative resection (PR), owing to the existence of satellites and microvascular invasion, and these residual tumor nests can actually be stimulated to grow by the PR [2,3]. Although several treatments, such as interferon-alpha and sorafenib, have been proposed to diminish relapse [4,5], prometastatic side effects of these options have also been observed [6,7].\nResidual tumor cells may stimulate angiogenesis, which is needed for tumor growth [8-10]. However, the resulting neovessels may be disordered and inefficiently perfused, resulting in hypoxic conditions [10,11]. Both abnormal endothelium and pericytes integrated into the capillary wall, along with deficient coverage, could be responsible for the vascular architectural abnormalities [12]. The resulting hypoxia creates a hostile tumor milieu in which tumor cells may migrate via intra- or extravasation through a leaky vessel [9,13]. In effect, surgery-induced hypoxia unfavorably impacts the prognosis of cancer patients by inducing angiogenesis [14]. Therefore, restoring oxygen supply via vascular normalization may reduce metastasis, even tumor growth. Mazzone et al. [13] showed that downregulation of the oxygen sensing molecule PHD2 can restore tumor oxygenation and inhibit metastasis via endothelial normalization, where endothelial cells form a protective phalanx that blocks metastasis. Although several methods have been shown experimentally to promote vessel remodeling, only seldom has any of them found use in the clinic [9,10,15,16].\nTanshinone IIA (Tan IIA) is an herbal monomer with a clear chemical structure, isolated from Salvia miltiorrhiza. In Chinese traditional medicine, S. miltiorrhiza is considered to promote blood circulation for removing blood stasis and improve microcirculation. Some of these effects could include vessel normalization. We have reported that an herbal formula, Songyou Yin, can attenuate HCC metastases [17], and S. miltiorrhiza is one of the five constituents of the formula [18]. Tan IIA exhibits direct vasoactive [19,20] and certain antitumor properties [21]. It is possible that Tan IIA may indirectly decrease metastasis in HCC following PR by promoting blood vessel normalization; however, there is to date no evidence supporting this hypothesis.\nWe aimed to identify inhibitory effects of Tan IIA on HCC metastasis for delineating a possible mechanism of action of the compound, with a main focus on tumor vessel maturity as a potential marker for evaluating Tan IIA treatment responses.", "As shown in the in vivo experiment 1 (IE 1) in Tables 1 and Additional file 1: Table S2, the tumor volume (TV) was greater in the PR than Sham groups (p<0.05, Additional file 1: Figure S1A). 
Compared with the Sham group, the lung metastasis (LM) (HCCLM3) of the PR group significantly increased (p<0.001, Figure 1A and S1B); both intrahepatic metastasis (IHM) and abdominal metastasis (AM) also increased (p<0.01 for IHM, in Figures 1B, Additional file 1: Figure S1C, and Additional file 1: Figure S2A; p<0.05 for AM, in Figures 1C, Additional file 1: Figure S1D, and Additional file 1: Figure S2B); and circulating tumor cells (CTCs) were elevated both at 2 and 35 d postresection (p<0.001, Additional file 1: Figure S1E).\nSummary of tumor growth, metastasis, and mice’s survival from three animal experiments of HCCLM3\na Student’s t-test, equal variances assumed.\nb One-way analysis of variance.\nAbbreviations: PR, palliative resection; NS, normal saline; CTCs, circulating tumor cells; IE, in vivo experiment; LM, lung metastasis; ND, not done; TV, tumor volume.\nSchematic diagram of metastases and cumulative survival of tumor-bearing nude mice in animal experiments. Occurrence of lung (A), intrahepatic (B), and abdomen metastases (C) significantly increased after PR. Tan IIA decreased the numbers of various metastatic lesions. (D) Survival curves showed that the life-span of test mice was significantly shortened by PR and was markedly prolonged by Tan IIA. Red: HCCLM3. Green: HepG2.", "Results of IE (2), shown in Tables 1 and Additional file 1: Table S2, indicate there were no differences in TV between the four groups observed (Additional file 1: Figure S1A). Compared with normal saline (NS), the LM of the Tan IIA-treated groups (5 or 10 mg/kg/d) decreased (p=0.046 and p<0.001, Additional file 1: Figure S1B). Compared with Tan IIA treatment of 1 or 5 mg/kg/d, the LM of the 10 mg/kg/d group also decreased (p=0.003 and p=0.046, Additional file 1: Figure S1B). Both the IHM and AM rates of the 5/10 mg Tan IIA/kg/d treatment groups were significantly reduced (p<0.05, Additional file 1: Figure S1C and D). No AM deriving from HepG2 cells was found. The greatest inhibitory effects were seen at a dosage of 10 mg/kg/d, which was chosen as the intervention dosage in IE (3).", "Results of IE (3), summarized in Tables 1 and Additional file 1: Table S2, show that administration of Tan IIA after PR resulted in decreased residual TV compared to NS (p<0.05, Additional file 1: Figure S1A). Compared with the PR + NS group, the LM (HCCLM3) of the Tan IIA treatment group was significantly decreased (p<0.001, Figures 1A and Additional file 1: Figure S1B); both IHM and AM also decreased (p<0.01 for IHM, Figures 1B, Additional file 1: Figure S1C, and S2A; p<0.01 for AM, Figures 1C, Additional file 1: Figure S1D, and S2B), and the CTCs were relatively decreased (p<0.001, Additional file 1: Figure S1E).", "Tan IIA treatment retarded the weight loss of mice after PR (Additional file 1: Figure S3). The estimated survival of PR mice was significantly shorter than of Sham mice in IE (1) (p<0.01, Tables 1 and Additional file 1: Table S2, Figure 1D). In IE (2), Tan IIA prolonged the mice’s survival up to 16 d for HCCLM3 (87.000 ± 3.804 vs. 102.667 ± 3.201, p=0.004) and 19 d for HepG2 (p<0.001, Tables 1 and Additional file 1: Table S2, Figure 1D), compared with NS. The same effect on prolongation of post-PR survival was seen in IE (3) (p=0.001 for both, Tables 1 and Additional file 1: Table S2, Figure 1D).", "Compared with dimethylsulfoxide treatment control, the OD value of Tan IIA 0.01–100-μM treatment groups showed no change (Figure 2A). 
The number of invasive cells in the 5/10-μM Tan IIA groups was significantly reduced (p<0.01). No significant differences were seen for the 1-μM Tan IIA group (Figure 2B and C).\nEffects of Tan IIA on tumor cell proliferation and invasion. (A) Tan IIA treatment at 0.01–100 μM did not inhibit HCCLM3 and HepG2 cell proliferation, except in the 1000-μM dosage group. (B), (C) Tan IIA treatment with 5 or 10 μM for 48 h inhibited tumor cell invasion (200×).", "The immunohistochemical marker for tissue hypoxia Pimonidazole and HIF-1α levels were significantly increased after PR, and they were reduced by Tan IIA. In addition, the residual tumor epithelial-mesenchymal transition (EMT) was enhanced (N-cadherin and Vimentin were both upregulated, but E-cadherin was downregulated), and this effect could be reversed by Tan IIA (Figures 3A,C, and D, and Additional file 1: Figure S4A). These results indicate that PR aggravated residual tumor hypoxia and promoted EMT, and Tan IIA treatment was able to alleviate hypoxia and inhibit EMT in vivo. Levels of HIF-1α, N-cadherin, and vimentin were upregulated in tumor cells, and E-cadherin was downregulated, under conditions of hypoxia. Other molecules were not downregulated as anticipated from results of Tan IIA experiments in vitro (Figures 3B and Additional file 1: Figure S4B). We did observe that E-cadherin expression could indeed be upregulated by Tan IIA, independent of the hypoxia effect (Figures 3E and Additional file 1: Figure S4C). Levels of proteins observed were consistent with their corresponding mRNA levels (Figures 3 and Additional file 1: Figure S4).\nEffects of Tan IIA on residual tumor, cell hypoxia and epithelial-mesenchymal transition. (A) PR promoted HIF-1α expression and caused epithelial-mesenchymal transition (EMT; upregulated N-cadherin and vimentin, downregulated E-cadherin); Tan IIA reversed EMT in vivo. (B) Hypoxia-induced HIF-1α and promotion of EMT; Tan IIA had no effect on these parameters in vitro. (C), (D) Tan IIA diminished the enlarged Pimonidazole area of residual tumor after PR (50×). *Compared with Sham group, **compared with PR + NS group; p<0.001. (E) Tan IIA upregulated E-cadherin in tumor cells (200×).", "Studies of mouse microvessel density (MVD) used the marker CD31. NG2 proteoglycan, the marker of vascular pericytes [12,15], was adopted to evaluate microvessel intensity (MVI). The CD31 levels of the PR group were higher than of the Sham (p<0.001), and there was no statistical significance between the PR + NS and Tan IIA groups (Figures 4A and Additional file 1: Figure S5A). Although the NG2 proteoglycan levels showed no change after PR, its level was significantly elevated in the PR + Tan IIA group (p<0.01, Figures 4A and Additional file 1: Figure S5B). Immunohistochemistry of CD31, NG2, and Pimonidazole in serial sections showed that in the PR + NS group, CD31 was high, NG2 was low, and the hypoxia levels (Pimonidazole) were seriously high. In the PR + Tan IIA group, CD31 was high, NG2 was also high, and hypoxia levels were slight (Figure 4B). These results indicate that the residual tumor MVD increased after PR, but the MVI was low. Tan IIA did not inhibit MVD but markedly improved MVI, promoting vessel integrity. Scanning electron microscopy (SEM) further revealed that the vascular wall of PR + Tan IIA tumors was more integrated than in the NS tumors (Figure 4C). 
Immunofluorescence (IF) results verified that Tan IIA did not affect CD31 levels, elevated NG2, and decreased hypoxia levels (Figure 4D and E).\nEffects of Tan IIA on tumor microvessel density, microvessel integrity, and hypoxia. (A) CD31 microvessel density (MVD) increased, and NG2 microvessel integrity (MVI) showed no change after PR. Tan IIA did not affect MVD, but it did elevate MVI. *p<0.01. (B) PR + NS group: CD31 was high, NG2 was low, and Pimonidazole was high. PR + Tan IIA group: CD31 was high, NG2 was also high, and thus hypoxia was slight (400×). (C) The vascular wall of Tan IIA tumor tissue was more integrated than NS tissue (1200×). (D), (E) Tan IIA increased NG2 levels and reduced residual tumor hypoxia (200×).", "Tube formation of human umbilical vein endothelial cells (HUVECs) and human tumor-derived endothelial cells (TECs) was enhanced by Tan IIA (Figure 5A and B). And tube formation of TECs from the PR + Tan IIA (in vivo) mouse group was strengthened relative to the PR group (Figure 5D). Using TECs from the PR group, we further found that tube formation was enhanced in vitro by Tan IIA, and it was roughly equivalent to the PR + Tan IIA (in vivo) group (Figure 5D). Subsequent flow cytometric analysis of VEGFR1 and PDGFR in TECs indicated that both the percentage of positive mice and relative cellular fluorescence intensities were significantly higher in the PR + Tan IIA group than the NS group (p<0.05), and no changes were seen in VEGFR2, EGFR, and FGFR1 levels (Figure 5C). Treatment of cells with the VEGFR1/PDGFR inhibitor SU6668 weakened the Tan IIA-dependent enhanced tube formation; whereas, low SU6668 concentration was not seen to inhibit tube formation without Tan IIA (Figure 5D).\nEffects of Tan IIA on endothelial cells tube formation and TEC surface expression of growth factor receptors. (A, B) Tan IIA enhanced tube formation of HUVECs and human TECs (50×). (C) The percentage of VEGFR1- and PDGFR-positive TECs and the relative fluorescence intensities. Values were higher in the PR + Tan IIA group than NS group. *p<0.05, **p<0.01. (D) Tube formation was enhanced by Tan IIA in vivo, similar to results seen with Tan IIA incubation in vitro; the addition of the VEGFR1/PDGFR inhibitor SU6668 weakened this enhancement effect.", "The human HCC cell lines HCCLM3-RFP, which has high metastatic potential [25], and HepG2-GFP [26], HUVECs, and TECs [27] were used in the studies. Male, athymic BALB/c nu/nu mice of 4–6 weeks of age and weighing approximately 20 g were used as host animals.\nA metastatic human HCC animal model was established by orthotopic implantation of histologically intact tumor tissue into the nude mouse liver [28]. To explore the protumoral effects of PR, we constructed an orthotopic double-tumor xenograft model, in which two tumor pieces were simultaneously inoculated into the left liver lobe; the inoculation method was as described [28]. After 2 weeks, partial hepatectomy [29] was performed to excise one tumor. The Sham hepatectomy cohort was handled like the PR cohort, but without tumor resection.\nTan IIA (sulfotanshinone sodium injection, 5 mg/ml), commercially available from the first Biochemical Pharmaceutical Co. Ltd., Shanghai, China, was used in in vivo experiment. Tan IIA monomer (Sigma, St. 
Louis, MO), a reddish lyophilized powder with the purity 99.99%, firstly dissolved in dimethyl sulfoxide and then diluted with NS to the required concentration, was used in in vitro study.", "For IE (1), 30 double-tumor-bearing mice were randomly divided into Sham and PR groups (each of n=15) and scheduled to be observed after 35 d. In IE (2), the single-tumor xenograft model was used. Mice were divided into four groups (each n=18) and received daily injections of NS or Tan IIA (1, 5, or 10 mg/kg/d). Tan IIA was diluted with NS. We took 20 g as the average mouse weight (25 g after 21 d), and each mouse received 0.2 mL solution intraperitoneally for 35 d. In IE (3), the double-tumor xenograft plus PR model was used to examine effects of Tan IIA on residual tumor. Mice were divided into two groups (each n=15) after PR and received daily injections of NS or 10 mg Tan IIA/kg/d for 35 d.\nThe mouse weight was measured once every 7 d. After 35 d, six mice from each group were retained to observe survival, and the remaining were sacrificed to measure TV [30], LM, IHM, AM [26], CTCs, and to perform SEM of tumor vessels. CTCs were enumerated by flow cytometry and expressed as percent CTCs/TV (%) [6]. Twelve mice (IE 1, n = 6) were sacrificed 2 d after resection to examine CTCs shortly after PR.", "A Cell Counting Kit-8 (Dojindo, Kumamoto, Japan) was used to assay cell proliferation. The final concentration of Tan IIA was 0.01–1000 μM. Results were expressed as OD at 490 nm. Cell invasiveness was assayed in Matrigel-coated Transwell Invasion Chambers (Corning, Cambridge, MA). Tan IIA was added to cells at final concentrations of 1, 5, or 10 μM, and these cultures were incubated for 48 h. Cells that passed through the chamber membranes were counted.", "Cells were cultured in a Bugbox Hypoxic Workstation (Ruskinn, Mid Glamorgan, UK; 1% O2, 5% CO2, and 94% N2 atmosphere) and incubated with Tan IIA at 1, 5, or 10 μM for 48 h. Normoxic conditions (20% O2, 5% CO2, and 75% N2) were set as control. Pimonidazole immunostaining and HIF-1α expression were defined as hypoxia biomarkers. A Hypoxyprobe™-1 Kit (Hypoxyprobe Inc., Burlington, MA) was used [6].", "Eight tumors from Sham, PR, or PR + Tan IIA groups were collected. The TECs were isolated by use of anti-CD31 antibody (AB)-coupled magnetic beads (Miltenyi Biotec, Cologne, Germany) and magnetic cell-sorting system [27], and they were divided into Sham, PR, PR + Tan IIA (in vivo), and PR + Tan IIA (in vitro) groups; TECs isolated from the PR group were incubated with Tan IIA for 48 h. TEC surface expression of VEGFR1, VEGFR2, EGFR, PDGFR, FGFR1, and CD31 was determined by flow cytometric analysis (R&D, Minneapolis, MN). Receptor density was calculated as the relative fluorescence intensity. In another set of experiments, TECs from the PR + Tan IIA (in vivo) cohort were divided into control and SU6668 (Sigma) (VEGFR1/PDGFR selective receptor inhibitor) treatment groups. TECs from the PR cohort were also divided into PR, PR + SU6668, PR + Tan IIA (in vitro), and PR + SU6668 + Tan IIA groups. HUVECs and human TECs were separated into control, normoxia + Tan IIA, and hypoxia + Tan IIA groups. Formation of capillary-like structures was observed as described [27].", "Immunohistochemistry [31] of Pimonidazole, CD31, and NG2 [15] was performed in paraffin sections on slides. The primary antibodies to Pimonidazole (1:100), CD31 (1:100; Abcam, Cambridge, MA), and NG2 (1:200; Millipore, Billerica, MA) were selected. 
The integrated optical density (IOD; for Pimonidazole) or area (for CD31 and NG2) of positive staining/total area was quantified by Image-Pro Plus software [31]. IF double-staining [31] of CD31 (1:50) and NG2 (1:50), and NG2 and Pimonidazole (1:80) was done in frozen sections and observed under laser confocal microscope. IF of E-cadherin in cells was also determined.\nProtein levels of HIF-1α, N-cadherin, E-cadherin, and vimentin were determined by immunoblot analysis. Primary antibodies against HIF-1α (1:1000; Sigma), β-actin, N-cadherin (1:1000; Abcam), E-cadherin (1:400; Santa Cruz Biotechnology, Santa Cruz, CA), and vimentin (1:800; Cell Signaling Technology, Beverly, MA) were used. Levels of mRNA were assessed by polymerase chain reaction (Additional file 1: Table S1) and normalized to the corresponding internal β-actin signal (ΔCt). Relative gene expression values were expressed as 2−ΔΔCt[30].", "All statistical analyses were performed with the SPSS 16.0 software. The Pearson chi-square test was applied to compare qualitative variables. Quantitative variables were expressed as mean ± standard deviations and analyzed by t-test or one-way analysis of variance followed by least significant difference test. The Kaplan–Meier method with log-rank test was used for survival analysis. A p value of <0.05 was considered to be statistically significant.", "Animal care and experimental protocols were approved by the Shanghai Medical Experimental Animal Care Commission.", "AM: Abdominal metastasis; CTCs: Circulating tumor cells; EMT: Epithelial-mesenchymal transition; HCC: Hepatocellular carcinoma; HUVEC: Human umbilical vein endothelial cells; IE: In vivo experiment; IF: Immunofluorescence; IHM: Intrahepatic metastasis; IOD: Integrated optical density; LM: Lung metastasis; MVD: Microvessel density; MVI: Microvessel integrity; NS: Normal saline; PDGFR: Platelet derived growth factor receptor; PR: Palliative resection; SEM: Scanning electron microscopy; Tan IIA: Tanshinone IIA; TEC: Tumor-derived endothelial cells; TV: Tumor volume; VEGFR1: Vascular endothelial cell growth factor receptor 1.", "The authors declare there are no competing interests.", "WWQ designed the study, established the animal model, carried out the immunoassays, performed the statistical analysis, and drafted the manuscript. LL and SHC participated in the design of the study, data analysis, and drafting the manuscript. FYL, XHX, CZT, ZQB, KLQ, ZXD, and LL helped to acquire experimental data. RZG reviewed the manuscript. TZY conceived the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.", "WWQ, M.D., Ph.D., in Cancer Surgery. Graduated from Liver Cancer Institute, Zhongshan Hospital, Fudan University; Key Laboratory for Carcinogenesis & Cancer Invasion (Fudan University), the Chinese Ministry of Education. WWQ is now working in the Department of Pancreatic and Hepatobiliary Surgery, Fudan University, Shanghai Cancer Center; the Department of Oncology, Shanghai Medical College, Fudan University; and the Pancreatic Cancer Institute, Fudan University." ]
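The quantitative real-time PCR methods listed above express relative gene expression as 2^−ΔΔCt after normalizing each target to the internal β-actin signal (ΔCt). The following is a minimal Python sketch of that calculation, using invented Ct values and a hypothetical helper function rather than the authors' actual pipeline.

```python
# Minimal sketch of the 2^-ΔΔCt relative expression calculation (Livak method).
# All Ct values below are invented for illustration; they are not study data.

def fold_change(ct_target_treated, ct_actin_treated,
                ct_target_control, ct_actin_control):
    """Fold change of a target gene, normalized to beta-actin, in a treated
    sample relative to a control sample (2^-ΔΔCt)."""
    delta_ct_treated = ct_target_treated - ct_actin_treated    # ΔCt (treated)
    delta_ct_control = ct_target_control - ct_actin_control    # ΔCt (control)
    delta_delta_ct = delta_ct_treated - delta_ct_control       # ΔΔCt
    return 2 ** (-delta_delta_ct)

# Example: ΔCt = 7.1 in the treated sample and 9.1 in the control gives
# ΔΔCt = -2.0, i.e. a 4-fold upregulation of the target gene.
print(fold_change(24.1, 17.0, 26.3, 17.2))  # 4.0
```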
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Results", "PR-induced residual tumor growth and metastasis", "Tan IIA does not directly inhibit tumor growth but reduces metastasis", "Tan IIA inhibits the PR-enhanced residual tumor metastasis", "Tan IIA prolongs host survival", "Tan IIA does not inhibit proliferation but minimizes invasiveness of tumor cells", "Tan IIA alleviates residual tumor hypoxia in vivo but does not downregulate HIF-1 of tumor cells under hypoxic conditions in vitro", "Tan IIA does not affect microvessel density but promotes microvessel integrity", "Tan IIA enhances tube formation and is associated with vascular endothelial cell growth factor receptor 1 (VEGFR1) and platelet derived growth factor receptor (PDGFR) upregulation", "Discussion", "Conclusions", "Methods", "Cell lines, animal model, and drug", "Experimental groups and assessment parameters", "Cell proliferation and invasion", "Hypoxia evaluation", "Isolation of TECs, flow cytometry, and tube formation", "Immunohistochemistry, immunofluorescence, western blot, and quantitative real-time polymerase chain reaction", "Statistical analysis", "Ethics approval", "Abbreviations", "Competing interests", "Authors’ contributions", "Authors’ information", "Supplementary Material" ]
[ "Surgical resection is the most promising strategy for early-stage hepatocellular carcinoma (HCC); however, the 5-year risk of recurrence is as high as 70% [1]. The surgery is actually palliative resection (PR), owing to the existence of satellites and microvascular invasion, and these residual tumor nests can actually be stimulated to grow by the PR [2,3]. Although several treatments, such as interferon-alpha and sorafenib, have been proposed to diminish relapse [4,5], prometastatic side effects of these options have also been observed [6,7].\nResidual tumor cells may stimulate angiogenesis, which is needed for tumor growth [8-10]. However, the resulting neovessels may be disordered and inefficiently perfused, resulting in hypoxic conditions [10,11]. Both abnormal endothelium and pericytes integrated into the capillary wall, along with deficient coverage, could be responsible for the vascular architectural abnormalities [12]. The resulting hypoxia creates a hostile tumor milieu in which tumor cells may migrate via intra- or extravasation through a leaky vessel [9,13]. In effect, surgery-induced hypoxia unfavorably impacts the prognosis of cancer patients by inducing angiogenesis [14]. Therefore, restoring oxygen supply via vascular normalization may reduce metastasis, even tumor growth. Mazzone et al. [13] showed that downregulation of the oxygen sensing molecule PHD2 can restore tumor oxygenation and inhibit metastasis via endothelial normalization, where endothelial cells form a protective phalanx that blocks metastasis. Although several methods have been shown experimentally to promote vessel remodeling, only seldom has any of them found use in the clinic [9,10,15,16].\nTanshinone IIA (Tan IIA) is an herbal monomer with a clear chemical structure, isolated from Salvia miltiorrhiza. In Chinese traditional medicine, S. miltiorrhiza is considered to promote blood circulation for removing blood stasis and improve microcirculation. Some of these effects could include vessel normalization. We have reported that an herbal formula, Songyou Yin, can attenuate HCC metastases [17], and S. miltiorrhiza is one of the five constituents of the formula [18]. Tan IIA exhibits direct vasoactive [19,20] and certain antitumor properties [21]. It is possible that Tan IIA may indirectly decrease metastasis in HCC following PR by promoting blood vessel normalization; however, there is to date no evidence supporting this hypothesis.\nWe aimed to identify inhibitory effects of Tan IIA on HCC metastasis for delineating a possible mechanism of action of the compound, with a main focus on tumor vessel maturity as a potential marker for evaluating Tan IIA treatment responses.", " PR-induced residual tumor growth and metastasis As shown in the in vivo experiment 1 (IE 1) in Tables 1 and Additional file 1: Table S2, the tumor volume (TV) was greater in the PR than Sham groups (p<0.05, Additional file 1: Figure S1A). 
Compared with the Sham group, the lung metastasis (LM) (HCCLM3) of the PR group significantly increased (p<0.001, Figure 1A and S1B); both intrahepatic metastasis (IHM) and abdominal metastasis (AM) also increased (p<0.01 for IHM, in Figures 1B, Additional file 1: Figure S1C, and Additional file 1: Figure S2A; p<0.05 for AM, in Figures 1C, Additional file 1: Figure S1D, and Additional file 1: Figure S2B); and circulating tumor cells (CTCs) were elevated both at 2 and 35 d postresection (p<0.001, Additional file 1: Figure S1E).\nSummary of tumor growth, metastasis, and mice’s survival from three animal experiments of HCCLM3\na Student’s t-test, equal variances assumed.\nb One-way analysis of variance.\nAbbreviations: PR, palliative resection; NS, normal saline; CTCs, circulating tumor cells; IE, in vivo experiment; LM, lung metastasis; ND, not done; TV, tumor volume.\nSchematic diagram of metastases and cumulative survival of tumor-bearing nude mice in animal experiments. Occurrence of lung (A), intrahepatic (B), and abdomen metastases (C) significantly increased after PR. Tan IIA decreased the numbers of various metastatic lesions. (D) Survival curves showed that the life-span of test mice was significantly shortened by PR and was markedly prolonged by Tan IIA. Red: HCCLM3. Green: HepG2.\nAs shown in the in vivo experiment 1 (IE 1) in Tables 1 and Additional file 1: Table S2, the tumor volume (TV) was greater in the PR than Sham groups (p<0.05, Additional file 1: Figure S1A). Compared with the Sham group, the lung metastasis (LM) (HCCLM3) of the PR group significantly increased (p<0.001, Figure 1A and S1B); both intrahepatic metastasis (IHM) and abdominal metastasis (AM) also increased (p<0.01 for IHM, in Figures 1B, Additional file 1: Figure S1C, and Additional file 1: Figure S2A; p<0.05 for AM, in Figures 1C, Additional file 1: Figure S1D, and Additional file 1: Figure S2B); and circulating tumor cells (CTCs) were elevated both at 2 and 35 d postresection (p<0.001, Additional file 1: Figure S1E).\nSummary of tumor growth, metastasis, and mice’s survival from three animal experiments of HCCLM3\na Student’s t-test, equal variances assumed.\nb One-way analysis of variance.\nAbbreviations: PR, palliative resection; NS, normal saline; CTCs, circulating tumor cells; IE, in vivo experiment; LM, lung metastasis; ND, not done; TV, tumor volume.\nSchematic diagram of metastases and cumulative survival of tumor-bearing nude mice in animal experiments. Occurrence of lung (A), intrahepatic (B), and abdomen metastases (C) significantly increased after PR. Tan IIA decreased the numbers of various metastatic lesions. (D) Survival curves showed that the life-span of test mice was significantly shortened by PR and was markedly prolonged by Tan IIA. Red: HCCLM3. Green: HepG2.\n Tan IIA does not directly inhibit tumor growth but reduces metastasis Results of IE (2), shown in Tables 1 and Additional file 1: Table S2, indicate there were no differences in TV between the four groups observed (Additional file 1: Figure S1A). Compared with normal saline (NS), the LM of the Tan IIA-treated groups (5 or 10 mg/kg/d) decreased (p=0.046 and p<0.001, Additional file 1: Figure S1B). Compared with Tan IIA treatment of 1 or 5 mg/kg/d, the LM of the 10 mg/kg/d group also decreased (p=0.003 and p=0.046, Additional file 1: Figure S1B). Both the IHM and AM rates of the 5/10 mg Tan IIA/kg/d treatment groups were significantly reduced (p<0.05, Additional file 1: Figure S1C and D). No AM deriving from HepG2 cells was found. 
The greatest inhibitory effects were seen at a dosage of 10 mg/kg/d, which was chosen as the intervention dosage in IE (3).\nResults of IE (2), shown in Tables 1 and Additional file 1: Table S2, indicate there were no differences in TV between the four groups observed (Additional file 1: Figure S1A). Compared with normal saline (NS), the LM of the Tan IIA-treated groups (5 or 10 mg/kg/d) decreased (p=0.046 and p<0.001, Additional file 1: Figure S1B). Compared with Tan IIA treatment of 1 or 5 mg/kg/d, the LM of the 10 mg/kg/d group also decreased (p=0.003 and p=0.046, Additional file 1: Figure S1B). Both the IHM and AM rates of the 5/10 mg Tan IIA/kg/d treatment groups were significantly reduced (p<0.05, Additional file 1: Figure S1C and D). No AM deriving from HepG2 cells was found. The greatest inhibitory effects were seen at a dosage of 10 mg/kg/d, which was chosen as the intervention dosage in IE (3).\n Tan IIA inhibits the PR-enhanced residual tumor metastasis Results of IE (3), summarized in Tables 1 and Additional file 1: Table S2, show that administration of Tan IIA after PR resulted in decreased residual TV compared to NS (p<0.05, Additional file 1: Figure S1A). Compared with the PR + NS group, the LM (HCCLM3) of the Tan IIA treatment group was significantly decreased (p<0.001, Figures 1A and Additional file 1: Figure S1B); both IHM and AM also decreased (p<0.01 for IHM, Figures 1B, Additional file 1: Figure S1C, and S2A; p<0.01 for AM, Figures 1C, Additional file 1: Figure S1D, and S2B), and the CTCs were relatively decreased (p<0.001, Additional file 1: Figure S1E).\nResults of IE (3), summarized in Tables 1 and Additional file 1: Table S2, show that administration of Tan IIA after PR resulted in decreased residual TV compared to NS (p<0.05, Additional file 1: Figure S1A). Compared with the PR + NS group, the LM (HCCLM3) of the Tan IIA treatment group was significantly decreased (p<0.001, Figures 1A and Additional file 1: Figure S1B); both IHM and AM also decreased (p<0.01 for IHM, Figures 1B, Additional file 1: Figure S1C, and S2A; p<0.01 for AM, Figures 1C, Additional file 1: Figure S1D, and S2B), and the CTCs were relatively decreased (p<0.001, Additional file 1: Figure S1E).\n Tan IIA prolongs host survival Tan IIA treatment retarded the weight loss of mice after PR (Additional file 1: Figure S3). The estimated survival of PR mice was significantly shorter than of Sham mice in IE (1) (p<0.01, Tables 1 and Additional file 1: Table S2, Figure 1D). In IE (2), Tan IIA prolonged the mice’s survival up to 16 d for HCCLM3 (87.000 ± 3.804 vs. 102.667 ± 3.201, p=0.004) and 19 d for HepG2 (p<0.001, Tables 1 and Additional file 1: Table S2, Figure 1D), compared with NS. The same effect on prolongation of post-PR survival was seen in IE (3) (p=0.001 for both, Tables 1 and Additional file 1: Table S2, Figure 1D).\nTan IIA treatment retarded the weight loss of mice after PR (Additional file 1: Figure S3). The estimated survival of PR mice was significantly shorter than of Sham mice in IE (1) (p<0.01, Tables 1 and Additional file 1: Table S2, Figure 1D). In IE (2), Tan IIA prolonged the mice’s survival up to 16 d for HCCLM3 (87.000 ± 3.804 vs. 102.667 ± 3.201, p=0.004) and 19 d for HepG2 (p<0.001, Tables 1 and Additional file 1: Table S2, Figure 1D), compared with NS. 
The same effect on prolongation of post-PR survival was seen in IE (3) (p=0.001 for both, Tables 1 and Additional file 1: Table S2, Figure 1D).\n Tan IIA does not inhibit proliferation but minimizes invasiveness of tumor cells Compared with dimethylsulfoxide treatment control, the OD value of Tan IIA 0.01–100-μM treatment groups showed no change (Figure 2A). The number of invasive cells in the 5/10-μM Tan IIA groups was significantly reduced (p<0.01). No significant differences were seen for the 1-μM Tan IIA group (Figure 2B and C).\nEffects of Tan IIA on tumor cell proliferation and invasion. (A) Tan IIA treatment at 0.01–100 μM did not inhibit HCCLM3 and HepG2 cell proliferation, except in the 1000-μM dosage group. (B), (C) Tan IIA treatment with 5 or 10 μM for 48 h inhibited tumor cell invasion (200×).\nCompared with dimethylsulfoxide treatment control, the OD value of Tan IIA 0.01–100-μM treatment groups showed no change (Figure 2A). The number of invasive cells in the 5/10-μM Tan IIA groups was significantly reduced (p<0.01). No significant differences were seen for the 1-μM Tan IIA group (Figure 2B and C).\nEffects of Tan IIA on tumor cell proliferation and invasion. (A) Tan IIA treatment at 0.01–100 μM did not inhibit HCCLM3 and HepG2 cell proliferation, except in the 1000-μM dosage group. (B), (C) Tan IIA treatment with 5 or 10 μM for 48 h inhibited tumor cell invasion (200×).\n Tan IIA alleviates residual tumor hypoxia in vivo but does not downregulate HIF-1 of tumor cells under hypoxic conditions in vitro The immunohistochemical marker for tissue hypoxia Pimonidazole and HIF-1α levels were significantly increased after PR, and they were reduced by Tan IIA. In addition, the residual tumor epithelial-mesenchymal transition (EMT) was enhanced (N-cadherin and Vimentin were both upregulated, but E-cadherin was downregulated), and this effect could be reversed by Tan IIA (Figures 3A,C, and D, and Additional file 1: Figure S4A). These results indicate that PR aggravated residual tumor hypoxia and promoted EMT, and Tan IIA treatment was able to alleviate hypoxia and inhibit EMT in vivo. Levels of HIF-1α, N-cadherin, and vimentin were upregulated in tumor cells, and E-cadherin was downregulated, under conditions of hypoxia. Other molecules were not downregulated as anticipated from results of Tan IIA experiments in vitro (Figures 3B and Additional file 1: Figure S4B). We did observe that E-cadherin expression could indeed be upregulated by Tan IIA, independent of the hypoxia effect (Figures 3E and Additional file 1: Figure S4C). Levels of proteins observed were consistent with their corresponding mRNA levels (Figures 3 and Additional file 1: Figure S4).\nEffects of Tan IIA on residual tumor, cell hypoxia and epithelial-mesenchymal transition. (A) PR promoted HIF-1α expression and caused epithelial-mesenchymal transition (EMT; upregulated N-cadherin and vimentin, downregulated E-cadherin); Tan IIA reversed EMT in vivo. (B) Hypoxia-induced HIF-1α and promotion of EMT; Tan IIA had no effect on these parameters in vitro. (C), (D) Tan IIA diminished the enlarged Pimonidazole area of residual tumor after PR (50×). *Compared with Sham group, **compared with PR + NS group; p<0.001. (E) Tan IIA upregulated E-cadherin in tumor cells (200×).\nThe immunohistochemical marker for tissue hypoxia Pimonidazole and HIF-1α levels were significantly increased after PR, and they were reduced by Tan IIA. 
In addition, the residual tumor epithelial-mesenchymal transition (EMT) was enhanced (N-cadherin and Vimentin were both upregulated, but E-cadherin was downregulated), and this effect could be reversed by Tan IIA (Figures 3A,C, and D, and Additional file 1: Figure S4A). These results indicate that PR aggravated residual tumor hypoxia and promoted EMT, and Tan IIA treatment was able to alleviate hypoxia and inhibit EMT in vivo. Levels of HIF-1α, N-cadherin, and vimentin were upregulated in tumor cells, and E-cadherin was downregulated, under conditions of hypoxia. Other molecules were not downregulated as anticipated from results of Tan IIA experiments in vitro (Figures 3B and Additional file 1: Figure S4B). We did observe that E-cadherin expression could indeed be upregulated by Tan IIA, independent of the hypoxia effect (Figures 3E and Additional file 1: Figure S4C). Levels of proteins observed were consistent with their corresponding mRNA levels (Figures 3 and Additional file 1: Figure S4).\nEffects of Tan IIA on residual tumor, cell hypoxia and epithelial-mesenchymal transition. (A) PR promoted HIF-1α expression and caused epithelial-mesenchymal transition (EMT; upregulated N-cadherin and vimentin, downregulated E-cadherin); Tan IIA reversed EMT in vivo. (B) Hypoxia-induced HIF-1α and promotion of EMT; Tan IIA had no effect on these parameters in vitro. (C), (D) Tan IIA diminished the enlarged Pimonidazole area of residual tumor after PR (50×). *Compared with Sham group, **compared with PR + NS group; p<0.001. (E) Tan IIA upregulated E-cadherin in tumor cells (200×).\n Tan IIA does not affect microvessel density but promotes microvessel integrity Studies of mouse microvessel density (MVD) used the marker CD31. NG2 proteoglycan, the marker of vascular pericytes [12,15], was adopted to evaluate microvessel intensity (MVI). The CD31 levels of the PR group were higher than of the Sham (p<0.001), and there was no statistical significance between the PR + NS and Tan IIA groups (Figures 4A and Additional file 1: Figure S5A). Although the NG2 proteoglycan levels showed no change after PR, its level was significantly elevated in the PR + Tan IIA group (p<0.01, Figures 4A and Additional file 1: Figure S5B). Immunohistochemistry of CD31, NG2, and Pimonidazole in serial sections showed that in the PR + NS group, CD31 was high, NG2 was low, and the hypoxia levels (Pimonidazole) were seriously high. In the PR + Tan IIA group, CD31 was high, NG2 was also high, and hypoxia levels were slight (Figure 4B). These results indicate that the residual tumor MVD increased after PR, but the MVI was low. Tan IIA did not inhibit MVD but markedly improved MVI, promoting vessel integrity. Scanning electron microscopy (SEM) further revealed that the vascular wall of PR + Tan IIA tumors was more integrated than in the NS tumors (Figure 4C). Immunofluorescence (IF) results verified that Tan IIA did not affect CD31 levels, elevated NG2, and decreased hypoxia levels (Figure 4D and E).\nEffects of Tan IIA on tumor microvessel density, microvessel integrity, and hypoxia. (A) CD31 microvessel density (MVD) increased, and NG2 microvessel integrity (MVI) showed no change after PR. Tan IIA did not affect MVD, but it did elevate MVI. *p<0.01. (B) PR + NS group: CD31 was high, NG2 was low, and Pimonidazole was high. PR + Tan IIA group: CD31 was high, NG2 was also high, and thus hypoxia was slight (400×). (C) The vascular wall of Tan IIA tumor tissue was more integrated than NS tissue (1200×). 
(D), (E) Tan IIA increased NG2 levels and reduced residual tumor hypoxia (200×).\nStudies of mouse microvessel density (MVD) used the marker CD31. NG2 proteoglycan, the marker of vascular pericytes [12,15], was adopted to evaluate microvessel intensity (MVI). The CD31 levels of the PR group were higher than of the Sham (p<0.001), and there was no statistical significance between the PR + NS and Tan IIA groups (Figures 4A and Additional file 1: Figure S5A). Although the NG2 proteoglycan levels showed no change after PR, its level was significantly elevated in the PR + Tan IIA group (p<0.01, Figures 4A and Additional file 1: Figure S5B). Immunohistochemistry of CD31, NG2, and Pimonidazole in serial sections showed that in the PR + NS group, CD31 was high, NG2 was low, and the hypoxia levels (Pimonidazole) were seriously high. In the PR + Tan IIA group, CD31 was high, NG2 was also high, and hypoxia levels were slight (Figure 4B). These results indicate that the residual tumor MVD increased after PR, but the MVI was low. Tan IIA did not inhibit MVD but markedly improved MVI, promoting vessel integrity. Scanning electron microscopy (SEM) further revealed that the vascular wall of PR + Tan IIA tumors was more integrated than in the NS tumors (Figure 4C). Immunofluorescence (IF) results verified that Tan IIA did not affect CD31 levels, elevated NG2, and decreased hypoxia levels (Figure 4D and E).\nEffects of Tan IIA on tumor microvessel density, microvessel integrity, and hypoxia. (A) CD31 microvessel density (MVD) increased, and NG2 microvessel integrity (MVI) showed no change after PR. Tan IIA did not affect MVD, but it did elevate MVI. *p<0.01. (B) PR + NS group: CD31 was high, NG2 was low, and Pimonidazole was high. PR + Tan IIA group: CD31 was high, NG2 was also high, and thus hypoxia was slight (400×). (C) The vascular wall of Tan IIA tumor tissue was more integrated than NS tissue (1200×). (D), (E) Tan IIA increased NG2 levels and reduced residual tumor hypoxia (200×).\n Tan IIA enhances tube formation and is associated with vascular endothelial cell growth factor receptor 1 (VEGFR1) and platelet derived growth factor receptor (PDGFR) upregulation Tube formation of human umbilical vein endothelial cells (HUVECs) and human tumor-derived endothelial cells (TECs) was enhanced by Tan IIA (Figure 5A and B). And tube formation of TECs from the PR + Tan IIA (in vivo) mouse group was strengthened relative to the PR group (Figure 5D). Using TECs from the PR group, we further found that tube formation was enhanced in vitro by Tan IIA, and it was roughly equivalent to the PR + Tan IIA (in vivo) group (Figure 5D). Subsequent flow cytometric analysis of VEGFR1 and PDGFR in TECs indicated that both the percentage of positive mice and relative cellular fluorescence intensities were significantly higher in the PR + Tan IIA group than the NS group (p<0.05), and no changes were seen in VEGFR2, EGFR, and FGFR1 levels (Figure 5C). Treatment of cells with the VEGFR1/PDGFR inhibitor SU6668 weakened the Tan IIA-dependent enhanced tube formation; whereas, low SU6668 concentration was not seen to inhibit tube formation without Tan IIA (Figure 5D).\nEffects of Tan IIA on endothelial cells tube formation and TEC surface expression of growth factor receptors. (A, B) Tan IIA enhanced tube formation of HUVECs and human TECs (50×). (C) The percentage of VEGFR1- and PDGFR-positive TECs and the relative fluorescence intensities. Values were higher in the PR + Tan IIA group than NS group. *p<0.05, **p<0.01. 
(D) Tube formation was enhanced by Tan IIA in vivo, similar to results seen with Tan IIA incubation in vitro; the addition of the VEGFR1/PDGFR inhibitor SU6668 weakened this enhancement effect.\nTube formation of human umbilical vein endothelial cells (HUVECs) and human tumor-derived endothelial cells (TECs) was enhanced by Tan IIA (Figure 5A and B). And tube formation of TECs from the PR + Tan IIA (in vivo) mouse group was strengthened relative to the PR group (Figure 5D). Using TECs from the PR group, we further found that tube formation was enhanced in vitro by Tan IIA, and it was roughly equivalent to the PR + Tan IIA (in vivo) group (Figure 5D). Subsequent flow cytometric analysis of VEGFR1 and PDGFR in TECs indicated that both the percentage of positive mice and relative cellular fluorescence intensities were significantly higher in the PR + Tan IIA group than the NS group (p<0.05), and no changes were seen in VEGFR2, EGFR, and FGFR1 levels (Figure 5C). Treatment of cells with the VEGFR1/PDGFR inhibitor SU6668 weakened the Tan IIA-dependent enhanced tube formation; whereas, low SU6668 concentration was not seen to inhibit tube formation without Tan IIA (Figure 5D).\nEffects of Tan IIA on endothelial cells tube formation and TEC surface expression of growth factor receptors. (A, B) Tan IIA enhanced tube formation of HUVECs and human TECs (50×). (C) The percentage of VEGFR1- and PDGFR-positive TECs and the relative fluorescence intensities. Values were higher in the PR + Tan IIA group than NS group. *p<0.05, **p<0.01. (D) Tube formation was enhanced by Tan IIA in vivo, similar to results seen with Tan IIA incubation in vitro; the addition of the VEGFR1/PDGFR inhibitor SU6668 weakened this enhancement effect.", "As shown in the in vivo experiment 1 (IE 1) in Tables 1 and Additional file 1: Table S2, the tumor volume (TV) was greater in the PR than Sham groups (p<0.05, Additional file 1: Figure S1A). Compared with the Sham group, the lung metastasis (LM) (HCCLM3) of the PR group significantly increased (p<0.001, Figure 1A and S1B); both intrahepatic metastasis (IHM) and abdominal metastasis (AM) also increased (p<0.01 for IHM, in Figures 1B, Additional file 1: Figure S1C, and Additional file 1: Figure S2A; p<0.05 for AM, in Figures 1C, Additional file 1: Figure S1D, and Additional file 1: Figure S2B); and circulating tumor cells (CTCs) were elevated both at 2 and 35 d postresection (p<0.001, Additional file 1: Figure S1E).\nSummary of tumor growth, metastasis, and mice’s survival from three animal experiments of HCCLM3\na Student’s t-test, equal variances assumed.\nb One-way analysis of variance.\nAbbreviations: PR, palliative resection; NS, normal saline; CTCs, circulating tumor cells; IE, in vivo experiment; LM, lung metastasis; ND, not done; TV, tumor volume.\nSchematic diagram of metastases and cumulative survival of tumor-bearing nude mice in animal experiments. Occurrence of lung (A), intrahepatic (B), and abdomen metastases (C) significantly increased after PR. Tan IIA decreased the numbers of various metastatic lesions. (D) Survival curves showed that the life-span of test mice was significantly shortened by PR and was markedly prolonged by Tan IIA. Red: HCCLM3. Green: HepG2.", "Results of IE (2), shown in Tables 1 and Additional file 1: Table S2, indicate there were no differences in TV between the four groups observed (Additional file 1: Figure S1A). 
Compared with normal saline (NS), the LM of the Tan IIA-treated groups (5 or 10 mg/kg/d) decreased (p=0.046 and p<0.001, Additional file 1: Figure S1B). Compared with Tan IIA treatment of 1 or 5 mg/kg/d, the LM of the 10 mg/kg/d group also decreased (p=0.003 and p=0.046, Additional file 1: Figure S1B). Both the IHM and AM rates of the 5/10 mg Tan IIA/kg/d treatment groups were significantly reduced (p<0.05, Additional file 1: Figure S1C and D). No AM deriving from HepG2 cells was found. The greatest inhibitory effects were seen at a dosage of 10 mg/kg/d, which was chosen as the intervention dosage in IE (3).", "Results of IE (3), summarized in Tables 1 and Additional file 1: Table S2, show that administration of Tan IIA after PR resulted in decreased residual TV compared to NS (p<0.05, Additional file 1: Figure S1A). Compared with the PR + NS group, the LM (HCCLM3) of the Tan IIA treatment group was significantly decreased (p<0.001, Figures 1A and Additional file 1: Figure S1B); both IHM and AM also decreased (p<0.01 for IHM, Figures 1B, Additional file 1: Figure S1C, and S2A; p<0.01 for AM, Figures 1C, Additional file 1: Figure S1D, and S2B), and the CTCs were relatively decreased (p<0.001, Additional file 1: Figure S1E).", "Tan IIA treatment retarded the weight loss of mice after PR (Additional file 1: Figure S3). The estimated survival of PR mice was significantly shorter than of Sham mice in IE (1) (p<0.01, Tables 1 and Additional file 1: Table S2, Figure 1D). In IE (2), Tan IIA prolonged the mice’s survival up to 16 d for HCCLM3 (87.000 ± 3.804 vs. 102.667 ± 3.201, p=0.004) and 19 d for HepG2 (p<0.001, Tables 1 and Additional file 1: Table S2, Figure 1D), compared with NS. The same effect on prolongation of post-PR survival was seen in IE (3) (p=0.001 for both, Tables 1 and Additional file 1: Table S2, Figure 1D).", "Compared with dimethylsulfoxide treatment control, the OD value of Tan IIA 0.01–100-μM treatment groups showed no change (Figure 2A). The number of invasive cells in the 5/10-μM Tan IIA groups was significantly reduced (p<0.01). No significant differences were seen for the 1-μM Tan IIA group (Figure 2B and C).\nEffects of Tan IIA on tumor cell proliferation and invasion. (A) Tan IIA treatment at 0.01–100 μM did not inhibit HCCLM3 and HepG2 cell proliferation, except in the 1000-μM dosage group. (B), (C) Tan IIA treatment with 5 or 10 μM for 48 h inhibited tumor cell invasion (200×).", "The immunohistochemical marker for tissue hypoxia Pimonidazole and HIF-1α levels were significantly increased after PR, and they were reduced by Tan IIA. In addition, the residual tumor epithelial-mesenchymal transition (EMT) was enhanced (N-cadherin and Vimentin were both upregulated, but E-cadherin was downregulated), and this effect could be reversed by Tan IIA (Figures 3A,C, and D, and Additional file 1: Figure S4A). These results indicate that PR aggravated residual tumor hypoxia and promoted EMT, and Tan IIA treatment was able to alleviate hypoxia and inhibit EMT in vivo. Levels of HIF-1α, N-cadherin, and vimentin were upregulated in tumor cells, and E-cadherin was downregulated, under conditions of hypoxia. Other molecules were not downregulated as anticipated from results of Tan IIA experiments in vitro (Figures 3B and Additional file 1: Figure S4B). We did observe that E-cadherin expression could indeed be upregulated by Tan IIA, independent of the hypoxia effect (Figures 3E and Additional file 1: Figure S4C). 
Levels of proteins observed were consistent with their corresponding mRNA levels (Figures 3 and Additional file 1: Figure S4).\nEffects of Tan IIA on residual tumor, cell hypoxia and epithelial-mesenchymal transition. (A) PR promoted HIF-1α expression and caused epithelial-mesenchymal transition (EMT; upregulated N-cadherin and vimentin, downregulated E-cadherin); Tan IIA reversed EMT in vivo. (B) Hypoxia-induced HIF-1α and promotion of EMT; Tan IIA had no effect on these parameters in vitro. (C), (D) Tan IIA diminished the enlarged Pimonidazole area of residual tumor after PR (50×). *Compared with Sham group, **compared with PR + NS group; p<0.001. (E) Tan IIA upregulated E-cadherin in tumor cells (200×).", "Studies of mouse microvessel density (MVD) used the marker CD31. NG2 proteoglycan, the marker of vascular pericytes [12,15], was adopted to evaluate microvessel intensity (MVI). The CD31 levels of the PR group were higher than of the Sham (p<0.001), and there was no statistical significance between the PR + NS and Tan IIA groups (Figures 4A and Additional file 1: Figure S5A). Although the NG2 proteoglycan levels showed no change after PR, its level was significantly elevated in the PR + Tan IIA group (p<0.01, Figures 4A and Additional file 1: Figure S5B). Immunohistochemistry of CD31, NG2, and Pimonidazole in serial sections showed that in the PR + NS group, CD31 was high, NG2 was low, and the hypoxia levels (Pimonidazole) were seriously high. In the PR + Tan IIA group, CD31 was high, NG2 was also high, and hypoxia levels were slight (Figure 4B). These results indicate that the residual tumor MVD increased after PR, but the MVI was low. Tan IIA did not inhibit MVD but markedly improved MVI, promoting vessel integrity. Scanning electron microscopy (SEM) further revealed that the vascular wall of PR + Tan IIA tumors was more integrated than in the NS tumors (Figure 4C). Immunofluorescence (IF) results verified that Tan IIA did not affect CD31 levels, elevated NG2, and decreased hypoxia levels (Figure 4D and E).\nEffects of Tan IIA on tumor microvessel density, microvessel integrity, and hypoxia. (A) CD31 microvessel density (MVD) increased, and NG2 microvessel integrity (MVI) showed no change after PR. Tan IIA did not affect MVD, but it did elevate MVI. *p<0.01. (B) PR + NS group: CD31 was high, NG2 was low, and Pimonidazole was high. PR + Tan IIA group: CD31 was high, NG2 was also high, and thus hypoxia was slight (400×). (C) The vascular wall of Tan IIA tumor tissue was more integrated than NS tissue (1200×). (D), (E) Tan IIA increased NG2 levels and reduced residual tumor hypoxia (200×).", "Tube formation of human umbilical vein endothelial cells (HUVECs) and human tumor-derived endothelial cells (TECs) was enhanced by Tan IIA (Figure 5A and B). And tube formation of TECs from the PR + Tan IIA (in vivo) mouse group was strengthened relative to the PR group (Figure 5D). Using TECs from the PR group, we further found that tube formation was enhanced in vitro by Tan IIA, and it was roughly equivalent to the PR + Tan IIA (in vivo) group (Figure 5D). Subsequent flow cytometric analysis of VEGFR1 and PDGFR in TECs indicated that both the percentage of positive mice and relative cellular fluorescence intensities were significantly higher in the PR + Tan IIA group than the NS group (p<0.05), and no changes were seen in VEGFR2, EGFR, and FGFR1 levels (Figure 5C). 
Treatment of cells with the VEGFR1/PDGFR inhibitor SU6668 weakened the Tan IIA-dependent enhanced tube formation; whereas, low SU6668 concentration was not seen to inhibit tube formation without Tan IIA (Figure 5D).\nEffects of Tan IIA on endothelial cells tube formation and TEC surface expression of growth factor receptors. (A, B) Tan IIA enhanced tube formation of HUVECs and human TECs (50×). (C) The percentage of VEGFR1- and PDGFR-positive TECs and the relative fluorescence intensities. Values were higher in the PR + Tan IIA group than NS group. *p<0.05, **p<0.01. (D) Tube formation was enhanced by Tan IIA in vivo, similar to results seen with Tan IIA incubation in vitro; the addition of the VEGFR1/PDGFR inhibitor SU6668 weakened this enhancement effect.", "In the present study of postsurgical residual tumors, we established a double-tumor xenograft HCC model and found that PR accelerated local aggressivity and distant metastasis. Administration of Tan IIA after PR significantly inhibited metastases and prolonged survival of nude mice bearing residual tumor tissue, and the effect was closely associated with VEGFR1/PDGFR-related vascular normalization.\nEarly in 1959, Fisher et al. [22] found that partial hepatectomy elicited hepatic metastases. Our results confirm the view that incomplete surgical resection of primary tumor may well induce the metastatic potential of residual tumor tissue. Currently, there is no evidence indicating that presurgical primary tumor could govern postsurgical residual tumor when growing in the same liver lobe. Interestingly, we found that residual tumor hypoxia was aggravated, HIF-1α was upregulated, and EMT was induced after surgical removal of “primary tumor.” Van der Bilt et al. [14] have reported that surgery-induced tumor hypoxia can stimulate abnormal angiogenesis. Our findings that neovascular abnormality is significantly augmented following PR are consistent with that report. Severe tumor hypoxia might have effects on abnormalities in the vasculature by promoting release of angiogenic cytokines [10], which would enhance metastasis.\nTumor vessel abnormalities could promote metastasis through mechanical penetration and hypoxia-induced EMT. We selected Tan IIA, known to possess potential vascular activity, to investigate inhibition of metastasis and the association with vascular normalization. We observed that Tan IIA could inhibit post-PR enhanced metastases and, more importantly, prolong survival. At a cellular level, Tan IIA showed no effect on tumor cell proliferation, but it minimized invasion. These effects were consistent with in vivo results showing that Tan IIA did not inhibit single-xenograft tumor growth but decreased metastases. A possible mechanism may be related by correlation with the observed E-cadherin upregulation. Furthermore, we observed that Tan IIA significantly alleviates residual tumor hypoxia and inhibits EMT in vivo. However, it did not downregulate HIF-1α or reverse EMT of tumor cells under hypoxic conditions in vitro. Therefore, we propose that the main inhibition of metastasis by Tan IIA is indirect.\nWe also observed that the tumor MVD increased after PR, but the MVI was low, suggesting that the surgery-induced angiogenesis was related to structural abnormalities [14]. Tumor cells would metastasize more easily through such a leaky vascular wall, causing an increase of CTCs. Tan IIA did not inhibit MVD but dramatically improved MVI, which is probably related to its underlying vasoactivity. 
Tan IIA also enhanced tube formation of endothelial cells under hypoxic conditions in vitro. Additionally, we found that tube formation of mouse TECs treated with Tan IIA, whether in vivo or in vitro, was similar, further indicating a direct effect of Tan IIA on the endothelium. How Tan IIA may promote vascular normalization is not entirely clear because its receptor on endothelial cells is unknown. Our results have shown a possible correlation with VEGFR1 and PDGFR upregulation. Recently, it has been reported that inhibition of VEGFR1 and PDGFR signaling in several tumors causes pericyte detachment and vessel regression, leading to vascular abnormalities [23,24]. This implies that upregulation of this signaling might produce a beneficial effect. Of relevance is the observation that the compound AZD2171 inhibits VEGFR while promoting vessel normalization [16]. The mechanism of Tan IIA action requires further investigation.\nVascular normalization has effects on two major processes: (i) mechanical prevention of tumor cell migration via intra/extravasation; and (ii) restoration of oxygen and nutritional supply. However, recovery of tumor blood supply may have mixed effects by contributing to progression while also suppressing tumor growth [9]. The latter effect is likely to depend on a combination of factors. The normalization of vessels from both direct and indirect Tan IIA effects appears to be involved in the inhibition of residual tumor growth, invasion, and metastasis.", "Our findings demonstrate that Tan IIA can inhibit the enhanced metastasis induced by PR and may do so in part via VEGFR1/PDGFR-related vascular normalization. This work has an important implication: that the malignant phenotype of a tumor may be manipulated through vascular pathways, which could be an alternative to simple eradication. Our results highlight the potential of proangiogenic “vessel normalizing” treatment strategies to silence metastasis and prolong patient survival.", " Cell lines, animal model, and drug The human HCC cell lines HCCLM3-RFP (which has high metastatic potential [25]) and HepG2-GFP [26], as well as HUVECs and TECs [27], were used in the studies. Male, athymic BALB/c nu/nu mice of 4–6 weeks of age and weighing approximately 20 g were used as host animals.\nA metastatic human HCC animal model was established by orthotopic implantation of histologically intact tumor tissue into the nude mouse liver [28]. To explore the protumoral effects of PR, we constructed an orthotopic double-tumor xenograft model, in which two tumor pieces were simultaneously inoculated into the left liver lobe; the inoculation method was as described [28]. After 2 weeks, partial hepatectomy [29] was performed to excise one tumor. The Sham hepatectomy cohort was handled like the PR cohort, but without tumor resection.\nTan IIA (sulfotanshinone sodium injection, 5 mg/ml), commercially available from the First Biochemical Pharmaceutical Co., Ltd., Shanghai, China, was used in the in vivo experiments. Tan IIA monomer (Sigma, St. Louis, MO), a reddish lyophilized powder of 99.99% purity, was first dissolved in dimethyl sulfoxide and then diluted with NS to the required concentration for the in vitro studies.\n Experimental groups and assessment parameters For IE (1), 30 double-tumor-bearing mice were randomly divided into Sham and PR groups (each n=15) and scheduled to be observed after 35 d. In IE (2), the single-tumor xenograft model was used. Mice were divided into four groups (each n=18) and received daily injections of NS or Tan IIA (1, 5, or 10 mg/kg/d). Tan IIA was diluted with NS. We took 20 g as the average mouse weight (25 g after 21 d), and each mouse received 0.2 mL of solution intraperitoneally each day for 35 d. In IE (3), the double-tumor xenograft plus PR model was used to examine the effects of Tan IIA on residual tumor. Mice were divided into two groups (each n=15) after PR and received daily injections of NS or 10 mg Tan IIA/kg/d for 35 d.\nMouse weight was measured once every 7 d. After 35 d, six mice from each group were retained to observe survival, and the remaining mice were sacrificed to measure TV [30], LM, IHM, AM [26], and CTCs, and to perform SEM of tumor vessels. CTCs were enumerated by flow cytometry and expressed as percent CTCs/TV (%) [6]. Twelve mice (IE 1, n = 6) were sacrificed 2 d after resection to examine CTCs shortly after PR.
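To make the dosing arithmetic above concrete, the sketch below reproduces the reported numbers (20 g average mouse weight, 1-10 mg/kg/d dose groups, 5 mg/mL sulfotanshinone sodium stock, fixed 0.2 mL intraperitoneal injection). It is an illustrative calculation only; the helper names are ours and not from the study.

```python
# Illustrative dosing arithmetic for the intraperitoneal Tan IIA injections.
# Values are taken from the text (20 g mouse, 1/5/10 mg/kg/d, 5 mg/mL stock,
# 0.2 mL injection volume); the function names are hypothetical.

STOCK_MG_PER_ML = 5.0      # commercial sulfotanshinone sodium injection
INJECTION_VOLUME_ML = 0.2  # fixed volume given to each mouse per day


def daily_dose_mg(weight_g: float, dose_mg_per_kg: float) -> float:
    """Absolute daily dose (mg) for a mouse of the given weight."""
    return dose_mg_per_kg * weight_g / 1000.0


def working_concentration_mg_per_ml(weight_g: float, dose_mg_per_kg: float) -> float:
    """Concentration the NS-diluted working solution needs so that the
    fixed injection volume delivers the full daily dose."""
    return daily_dose_mg(weight_g, dose_mg_per_kg) / INJECTION_VOLUME_ML


def dilution_factor(weight_g: float, dose_mg_per_kg: float) -> float:
    """How many-fold the 5 mg/mL stock is diluted with normal saline."""
    return STOCK_MG_PER_ML / working_concentration_mg_per_ml(weight_g, dose_mg_per_kg)


if __name__ == "__main__":
    for dose in (1, 5, 10):  # mg/kg/d groups used in IE (2)
        mg = daily_dose_mg(20.0, dose)
        conc = working_concentration_mg_per_ml(20.0, dose)
        print(f"{dose} mg/kg/d -> {mg:.2f} mg/day in 0.2 mL "
              f"({conc:.2f} mg/mL, ~{dilution_factor(20.0, dose):.0f}x dilution of stock)")
```

For the 10 mg/kg/d group this works out to 0.2 mg per day delivered in 0.2 mL, i.e., a 1 mg/mL working solution, a 5-fold dilution of the stock.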
\n Cell proliferation and invasion A Cell Counting Kit-8 (Dojindo, Kumamoto, Japan) was used to assay cell proliferation. The final concentration of Tan IIA was 0.01–1000 μM. Results were expressed as OD at 490 nm. Cell invasiveness was assayed in Matrigel-coated Transwell Invasion Chambers (Corning, Cambridge, MA). Tan IIA was added to cells at final concentrations of 1, 5, or 10 μM, and these cultures were incubated for 48 h. Cells that passed through the chamber membranes were counted.\n Hypoxia evaluation Cells were cultured in a Bugbox Hypoxic Workstation (Ruskinn, Mid Glamorgan, UK; 1% O2, 5% CO2, and 94% N2 atmosphere) and incubated with Tan IIA at 1, 5, or 10 μM for 48 h. Normoxic conditions (20% O2, 5% CO2, and 75% N2) were set as the control. Pimonidazole immunostaining and HIF-1α expression were defined as hypoxia biomarkers. A Hypoxyprobe™-1 Kit (Hypoxyprobe Inc., Burlington, MA) was used [6].\n Isolation of TECs, flow cytometry, and tube formation Eight tumors from the Sham, PR, or PR + Tan IIA groups were collected. The TECs were isolated by use of anti-CD31 antibody (AB)-coupled magnetic beads (Miltenyi Biotec, Cologne, Germany) and a magnetic cell-sorting system [27], and they were divided into Sham, PR, PR + Tan IIA (in vivo), and PR + Tan IIA (in vitro) groups; TECs isolated from the PR group were incubated with Tan IIA for 48 h. TEC surface expression of VEGFR1, VEGFR2, EGFR, PDGFR, FGFR1, and CD31 was determined by flow cytometric analysis (R&D, Minneapolis, MN). Receptor density was calculated as the relative fluorescence intensity. In another set of experiments, TECs from the PR + Tan IIA (in vivo) cohort were divided into control and SU6668 (Sigma; a selective VEGFR1/PDGFR inhibitor) treatment groups. TECs from the PR cohort were also divided into PR, PR + SU6668, PR + Tan IIA (in vitro), and PR + SU6668 + Tan IIA groups. HUVECs and human TECs were separated into control, normoxia + Tan IIA, and hypoxia + Tan IIA groups. Formation of capillary-like structures was observed as described [27].\n Immunohistochemistry, immunofluorescence, western blot, and quantitative real-time polymerase chain reaction Immunohistochemistry [31] of Pimonidazole, CD31, and NG2 [15] was performed in paraffin sections on slides. The primary antibodies to Pimonidazole (1:100), CD31 (1:100; Abcam, Cambridge, MA), and NG2 (1:200; Millipore, Billerica, MA) were selected. The integrated optical density (IOD; for Pimonidazole) or area (for CD31 and NG2) of positive staining/total area was quantified with Image-Pro Plus software [31]. IF double-staining [31] of CD31 (1:50) and NG2 (1:50), and of NG2 and Pimonidazole (1:80), was done in frozen sections and observed under a laser confocal microscope. IF of E-cadherin in cells was also determined.\nProtein levels of HIF-1α, N-cadherin, E-cadherin, and vimentin were determined by immunoblot analysis. Primary antibodies against HIF-1α (1:1000; Sigma), β-actin, N-cadherin (1:1000; Abcam), E-cadherin (1:400; Santa Cruz Biotechnology, Santa Cruz, CA), and vimentin (1:800; Cell Signaling Technology, Beverly, MA) were used. Levels of mRNA were assessed by quantitative real-time polymerase chain reaction (Additional file 1: Table S1) and normalized to the corresponding internal β-actin signal (ΔCt). Relative gene expression values were expressed as 2−ΔΔCt [30].
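The 2−ΔΔCt formula above can be written out explicitly. The snippet below is a minimal illustration only, assuming β-actin as the internal reference and a calibrator sample as described; the Ct values and function names are hypothetical and not taken from the study.

```python
# Minimal 2^-ΔΔCt calculation, as used for the qPCR data (normalized to β-actin).
# Illustrative only: the Ct values below are made up; only the formula follows the text.

def delta_ct(ct_target: float, ct_reference: float) -> float:
    """ΔCt = Ct(target gene) - Ct(β-actin) for one sample."""
    return ct_target - ct_reference


def relative_expression(ct_target: float, ct_ref: float,
                        calib_ct_target: float, calib_ct_ref: float) -> float:
    """Fold change = 2^-ΔΔCt, with ΔΔCt = ΔCt(sample) - ΔCt(calibrator)."""
    ddct = delta_ct(ct_target, ct_ref) - delta_ct(calib_ct_target, calib_ct_ref)
    return 2.0 ** (-ddct)


# Hypothetical example: E-cadherin in a treated sample vs. a calibrator sample.
print(relative_expression(ct_target=24.1, ct_ref=16.3,
                          calib_ct_target=26.0, calib_ct_ref=16.5))  # ~3.2-fold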
\n Statistical analysis All statistical analyses were performed with SPSS 16.0 software. The Pearson chi-square test was applied to compare qualitative variables. Quantitative variables were expressed as mean ± standard deviation and analyzed by t-test or one-way analysis of variance followed by the least significant difference (LSD) test. The Kaplan–Meier method with the log-rank test was used for survival analysis. A p value of <0.05 was considered statistically significant.
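As a rough, non-authoritative illustration of this workflow (not the authors' SPSS analysis), the sketch below runs a two-group t-test, a one-way ANOVA, and a Kaplan–Meier/log-rank comparison on synthetic data using SciPy and the lifelines package; a significant ANOVA would then be followed by pairwise LSD comparisons.

```python
# Illustrative skeleton mirroring the described statistics (t-test, one-way ANOVA,
# Kaplan-Meier with log-rank). All data below are synthetic; this is not the
# study's analysis. Requires numpy, scipy, and lifelines.
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical tumor volumes for two groups -> Student's t-test (equal variances)
tv_pr_ns = rng.normal(3.0, 0.5, 9)
tv_pr_tan = rng.normal(2.4, 0.5, 9)
print(stats.ttest_ind(tv_pr_ns, tv_pr_tan, equal_var=True))

# Hypothetical lung-metastasis counts for four dose groups -> one-way ANOVA
groups = [rng.normal(mean, 2.0, 12) for mean in (10, 9, 7, 5)]
print(stats.f_oneway(*groups))
# A significant ANOVA would be followed by LSD post-hoc comparisons,
# i.e., unadjusted pairwise t-tests using the pooled within-group error.

# Hypothetical survival times (days) -> Kaplan-Meier curves + log-rank test
surv_ns = rng.normal(87, 8, 6)
surv_tan = rng.normal(103, 8, 6)
events = np.ones(6)  # all deaths observed (no censoring)
kmf_ns = KaplanMeierFitter().fit(surv_ns, events, label="PR + NS")
kmf_tan = KaplanMeierFitter().fit(surv_tan, events, label="PR + Tan IIA")
print(kmf_ns.median_survival_time_, kmf_tan.median_survival_time_)
result = logrank_test(surv_ns, surv_tan, event_observed_A=events, event_observed_B=events)
print(result.p_value)
```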
\n Ethics approval Animal care and experimental protocols were approved by the Shanghai Medical Experimental Animal Care Commission.", "AM: Abdominal metastasis; CTCs: Circulating tumor cells; EMT: Epithelial-mesenchymal transition; HCC: Hepatocellular carcinoma; HUVEC: Human umbilical vein endothelial cells; IE: In vivo experiment; IF: Immunofluorescence; IHM: Intrahepatic metastasis; IOD: Integrated optical density; LM: Lung metastasis; MVD: Microvessel density; MVI: Microvessel integrity; NS: Normal saline; PDGFR: Platelet-derived growth factor receptor; PR: Palliative resection; SEM: Scanning electron microscopy; Tan IIA: Tanshinone IIA; TEC: Tumor-derived endothelial cells; TV: Tumor volume; VEGFR1: Vascular endothelial cell growth factor receptor 1.", "The authors declare that there are no competing interests.", "WWQ designed the study, established the animal model, carried out the immunoassays, performed the statistical analysis, and drafted the manuscript. LL and SHC participated in the design of the study, data analysis, and drafting of the manuscript. FYL, XHX, CZT, ZQB, KLQ, ZXD, and LL helped to acquire experimental data. RZG reviewed the manuscript. TZY conceived the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.", "WWQ, M.D., Ph.D. in Cancer Surgery, graduated from the Liver Cancer Institute, Zhongshan Hospital, Fudan University, and the Key Laboratory for Carcinogenesis & Cancer Invasion (Fudan University), the Chinese Ministry of Education. WWQ is now working in the Department of Pancreatic and Hepatobiliary Surgery, Fudan University Shanghai Cancer Center; the Department of Oncology, Shanghai Medical College, Fudan University; and the Pancreatic Cancer Institute, Fudan University.", "Table S1. The primer sequences for amplification of human HIF-1α, N-cadherin, E-cadherin, vimentin, and β-actin. Table S2. Summary of tumor growth, metastasis, and survival of host mice in the three animal experiments. Figure S1. Statistical charts of tumor volume (A), lung metastases (B), intrahepatic metastases (C), abdominal metastases (D), and circulating tumor cells (E) in the three animal experiments. *For Student's t-test, equal variances were assumed. Abbreviations: CTCs, circulating tumor cells; IHM, intrahepatic metastasis. Figure S2. Schematic diagram of intrahepatic and abdominal metastases. (A) Intrahepatic metastatic lesions shown by HE staining (50×). (B) Typical abdominal metastases of residual HCCLM3 tumor 35 d after palliative resection. Panels a and b show intrahepatic, peritoneal, and diaphragmatic metastatic lesions. Panels c and d show intrahepatic and diaphragmatic metastatic lesions; d shows the area where the tumor in c was removed. Figure S3. Monitoring of body weight of experimental mice in the three in vivo experiments. (A) In vivo experiment 1 (IE 1). (B) IE (2). (C) IE (3). Figure S4. Intratumoral and intracellular mRNA levels of HIF-1α, N-cadherin, E-cadherin, and vimentin. (A) *Compared with Sham group, **compared with PR + NS group; p<0.05. (B) *Compared with dimethylsulfoxide (DMSO) group, **compared with hypoxia group; p<0.05. (C) *Compared with DMSO group; p<0.05. Figure S5. Immunohistochemical staining of CD31 (A) and NG2 (B) in tumor samples." ]
[ null, "results", null, null, null, null, null, null, null, null, "discussion", "conclusions", "methods", null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[ "Tanshinone IIA", "Vascular normalization", "Palliative resection", "Hepatocellular carcinoma", "Metastasis" ]
Background: Surgical resection is the most promising strategy for early-stage hepatocellular carcinoma (HCC); however, the 5-year risk of recurrence is as high as 70% [1]. The surgery is in effect palliative resection (PR), owing to the existence of satellite lesions and microvascular invasion, and these residual tumor nests can be stimulated to grow by the PR itself [2,3]. Although several treatments, such as interferon-alpha and sorafenib, have been proposed to diminish relapse [4,5], prometastatic side effects of these options have also been observed [6,7]. Residual tumor cells may stimulate angiogenesis, which is needed for tumor growth [8-10]. However, the resulting neovessels may be disordered and inefficiently perfused, resulting in hypoxic conditions [10,11]. Both abnormal endothelium and poorly integrated pericytes in the capillary wall, along with deficient pericyte coverage, could be responsible for the vascular architectural abnormalities [12]. The resulting hypoxia creates a hostile tumor milieu in which tumor cells may migrate via intra- or extravasation through leaky vessels [9,13]. In effect, surgery-induced hypoxia unfavorably impacts the prognosis of cancer patients by inducing angiogenesis [14]. Therefore, restoring the oxygen supply via vascular normalization may reduce metastasis and even tumor growth. Mazzone et al. [13] showed that downregulation of the oxygen-sensing molecule PHD2 can restore tumor oxygenation and inhibit metastasis via endothelial normalization, whereby endothelial cells form a protective phalanx that blocks metastasis. Although several methods have been shown experimentally to promote vessel remodeling, few of them have found use in the clinic [9,10,15,16]. Tanshinone IIA (Tan IIA) is an herbal monomer with a defined chemical structure, isolated from Salvia miltiorrhiza. In traditional Chinese medicine, S. miltiorrhiza is considered to promote blood circulation, remove blood stasis, and improve microcirculation. Some of these effects could include vessel normalization. We have reported that an herbal formula, Songyou Yin, can attenuate HCC metastases [17], and S. miltiorrhiza is one of the five constituents of the formula [18]. Tan IIA exhibits direct vasoactive [19,20] and certain antitumor [21] properties. It is possible that Tan IIA may indirectly decrease metastasis in HCC following PR by promoting blood vessel normalization; however, there is to date no evidence supporting this hypothesis. We aimed to identify inhibitory effects of Tan IIA on HCC metastasis and to delineate a possible mechanism of action of the compound, with a main focus on tumor vessel maturity as a potential marker for evaluating Tan IIA treatment responses. Results: PR-induced residual tumor growth and metastasis As shown for in vivo experiment 1 (IE 1) in Table 1 and Additional file 1: Table S2, the tumor volume (TV) was greater in the PR group than in the Sham group (p<0.05, Additional file 1: Figure S1A). Compared with the Sham group, the lung metastasis (LM) (HCCLM3) of the PR group significantly increased (p<0.001, Figure 1A and S1B); both intrahepatic metastasis (IHM) and abdominal metastasis (AM) also increased (p<0.01 for IHM, in Figures 1B, Additional file 1: Figure S1C, and Additional file 1: Figure S2A; p<0.05 for AM, in Figures 1C, Additional file 1: Figure S1D, and Additional file 1: Figure S2B); and circulating tumor cells (CTCs) were elevated both at 2 and 35 d postresection (p<0.001, Additional file 1: Figure S1E).
Summary of tumor growth, metastasis, and mouse survival from the three animal experiments with HCCLM3. a Student's t-test, equal variances assumed. b One-way analysis of variance. Abbreviations: PR, palliative resection; NS, normal saline; CTCs, circulating tumor cells; IE, in vivo experiment; LM, lung metastasis; ND, not done; TV, tumor volume. Schematic diagram of metastases and cumulative survival of tumor-bearing nude mice in the animal experiments. Occurrence of lung (A), intrahepatic (B), and abdominal metastases (C) significantly increased after PR. Tan IIA decreased the numbers of various metastatic lesions. (D) Survival curves showed that the life-span of test mice was significantly shortened by PR and was markedly prolonged by Tan IIA. Red: HCCLM3. Green: HepG2. Tan IIA does not directly inhibit tumor growth but reduces metastasis Results of IE (2), shown in Table 1 and Additional file 1: Table S2, indicate that there were no differences in TV among the four groups (Additional file 1: Figure S1A). Compared with normal saline (NS), the LM of the Tan IIA-treated groups (5 or 10 mg/kg/d) decreased (p=0.046 and p<0.001, Additional file 1: Figure S1B). Compared with Tan IIA treatment at 1 or 5 mg/kg/d, the LM of the 10 mg/kg/d group also decreased (p=0.003 and p=0.046, Additional file 1: Figure S1B). Both the IHM and AM rates of the 5/10 mg Tan IIA/kg/d treatment groups were significantly reduced (p<0.05, Additional file 1: Figure S1C and D). No AM deriving from HepG2 cells was found. The greatest inhibitory effects were seen at a dosage of 10 mg/kg/d, which was chosen as the intervention dosage in IE (3). Tan IIA inhibits the PR-enhanced residual tumor metastasis Results of IE (3), summarized in Table 1 and Additional file 1: Table S2, show that administration of Tan IIA after PR resulted in decreased residual TV compared with NS (p<0.05, Additional file 1: Figure S1A). Compared with the PR + NS group, the LM (HCCLM3) of the Tan IIA treatment group was significantly decreased (p<0.001, Figures 1A and Additional file 1: Figure S1B); both IHM and AM also decreased (p<0.01 for IHM, Figures 1B, Additional file 1: Figure S1C, and S2A; p<0.01 for AM, Figures 1C, Additional file 1: Figure S1D, and S2B), and the CTCs also decreased (p<0.001, Additional file 1: Figure S1E). Tan IIA prolongs host survival Tan IIA treatment retarded the weight loss of mice after PR (Additional file 1: Figure S3). The estimated survival of PR mice was significantly shorter than that of Sham mice in IE (1) (p<0.01, Table 1 and Additional file 1: Table S2, Figure 1D). In IE (2), compared with NS, Tan IIA prolonged mouse survival by up to 16 d for HCCLM3 (87.000 ± 3.804 vs. 102.667 ± 3.201 d, p=0.004) and 19 d for HepG2 (p<0.001, Table 1 and Additional file 1: Table S2, Figure 1D). The same prolongation of post-PR survival was seen in IE (3) (p=0.001 for both, Table 1 and Additional file 1: Table S2, Figure 1D). Tan IIA does not inhibit proliferation but minimizes invasiveness of tumor cells Compared with the dimethylsulfoxide control, the OD values of the 0.01–100-μM Tan IIA treatment groups showed no change (Figure 2A). The number of invasive cells in the 5/10-μM Tan IIA groups was significantly reduced (p<0.01). No significant differences were seen for the 1-μM Tan IIA group (Figure 2B and C). Effects of Tan IIA on tumor cell proliferation and invasion. (A) Tan IIA treatment at 0.01–100 μM did not inhibit HCCLM3 and HepG2 cell proliferation, except in the 1000-μM dosage group. (B), (C) Tan IIA treatment with 5 or 10 μM for 48 h inhibited tumor cell invasion (200×). Tan IIA alleviates residual tumor hypoxia in vivo but does not downregulate HIF-1α of tumor cells under hypoxic conditions in vitro Levels of the immunohistochemical tissue-hypoxia marker Pimonidazole and of HIF-1α were significantly increased after PR, and they were reduced by Tan IIA. In addition, the residual tumor epithelial-mesenchymal transition (EMT) was enhanced (N-cadherin and vimentin were both upregulated, but E-cadherin was downregulated), and this effect could be reversed by Tan IIA (Figures 3A, C, and D, and Additional file 1: Figure S4A). These results indicate that PR aggravated residual tumor hypoxia and promoted EMT, and that Tan IIA treatment was able to alleviate hypoxia and inhibit EMT in vivo. Levels of HIF-1α, N-cadherin, and vimentin were upregulated in tumor cells, and E-cadherin was downregulated, under conditions of hypoxia. However, these molecules were not downregulated by Tan IIA in vitro as might have been anticipated (Figures 3B and Additional file 1: Figure S4B). We did observe that E-cadherin expression could indeed be upregulated by Tan IIA, independent of the hypoxia effect (Figures 3E and Additional file 1: Figure S4C). Levels of proteins observed were consistent with their corresponding mRNA levels (Figure 3 and Additional file 1: Figure S4). Effects of Tan IIA on residual tumor, cell hypoxia and epithelial-mesenchymal transition. (A) PR promoted HIF-1α expression and caused epithelial-mesenchymal transition (EMT; upregulated N-cadherin and vimentin, downregulated E-cadherin); Tan IIA reversed EMT in vivo. (B) Hypoxia induced HIF-1α and promoted EMT; Tan IIA had no effect on these parameters in vitro. (C), (D) Tan IIA diminished the enlarged Pimonidazole-positive area of residual tumor after PR (50×). *Compared with Sham group, **compared with PR + NS group; p<0.001. (E) Tan IIA upregulated E-cadherin in tumor cells (200×). Tan IIA does not affect microvessel density but promotes microvessel integrity Studies of mouse microvessel density (MVD) used the marker CD31. NG2 proteoglycan, a marker of vascular pericytes [12,15], was adopted to evaluate microvessel integrity (MVI). The CD31 levels of the PR group were higher than those of the Sham group (p<0.001), and there was no statistically significant difference between the PR + NS and PR + Tan IIA groups (Figures 4A and Additional file 1: Figure S5A). Although the NG2 proteoglycan levels showed no change after PR, they were significantly elevated in the PR + Tan IIA group (p<0.01, Figures 4A and Additional file 1: Figure S5B). Immunohistochemistry of CD31, NG2, and Pimonidazole in serial sections showed that in the PR + NS group, CD31 was high, NG2 was low, and the hypoxia level (Pimonidazole) was markedly high. In the PR + Tan IIA group, CD31 was high, NG2 was also high, and hypoxia was mild (Figure 4B). These results indicate that the residual tumor MVD increased after PR, but the MVI was low. Tan IIA did not inhibit MVD but markedly improved MVI, promoting vessel integrity. Scanning electron microscopy (SEM) further revealed that the vascular wall of PR + Tan IIA tumors was more intact than that of the NS tumors (Figure 4C). Immunofluorescence (IF) results verified that Tan IIA did not affect CD31 levels but elevated NG2 and decreased hypoxia levels (Figure 4D and E). Effects of Tan IIA on tumor microvessel density, microvessel integrity, and hypoxia. (A) CD31 microvessel density (MVD) increased, and NG2 microvessel integrity (MVI) showed no change after PR. Tan IIA did not affect MVD, but it did elevate MVI. *p<0.01. (B) PR + NS group: CD31 was high, NG2 was low, and Pimonidazole was high. PR + Tan IIA group: CD31 was high, NG2 was also high, and thus hypoxia was mild (400×). (C) The vascular wall of Tan IIA tumor tissue was more intact than that of NS tissue (1200×). (D), (E) Tan IIA increased NG2 levels and reduced residual tumor hypoxia (200×). Tan IIA enhances tube formation and is associated with vascular endothelial cell growth factor receptor 1 (VEGFR1) and platelet-derived growth factor receptor (PDGFR) upregulation Tube formation of human umbilical vein endothelial cells (HUVECs) and human tumor-derived endothelial cells (TECs) was enhanced by Tan IIA (Figure 5A and B). Tube formation of TECs from the PR + Tan IIA (in vivo) mouse group was also strengthened relative to the PR group (Figure 5D).
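The receptor readout behind this subsection (Figure 5C) is a relative fluorescence intensity from flow cytometry. The paper does not spell out the normalization, so the sketch below illustrates only one common convention under stated assumptions (stained-sample median fluorescence relative to an isotype/unstained control, plus a control-based positivity gate); all names and values are hypothetical.

```python
# Hypothetical illustration of a "relative fluorescence intensity" readout for
# TEC surface receptors. The normalization shown here (median of stained sample
# over control, and percent of cells above a control-based threshold) is an
# assumed convention, not the study's documented procedure.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-cell fluorescence values (arbitrary units)
isotype_control = rng.lognormal(mean=2.0, sigma=0.4, size=5000)
vegfr1_stained = rng.lognormal(mean=2.6, sigma=0.5, size=5000)


def relative_fluorescence_intensity(sample: np.ndarray, control: np.ndarray) -> float:
    """Median fluorescence of the stained sample divided by that of the control."""
    return float(np.median(sample) / np.median(control))


def percent_positive(sample: np.ndarray, control: np.ndarray, quantile: float = 0.99) -> float:
    """Percent of cells brighter than the 99th percentile of the control."""
    threshold = np.quantile(control, quantile)
    return float(100.0 * np.mean(sample > threshold))


print(f"relative MFI: {relative_fluorescence_intensity(vegfr1_stained, isotype_control):.2f}")
print(f"% positive:   {percent_positive(vegfr1_stained, isotype_control):.1f}%")
```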
Using TECs from the PR group, we further found that tube formation was enhanced in vitro by Tan IIA, and it was roughly equivalent to the PR + Tan IIA (in vivo) group (Figure 5D). Subsequent flow cytometric analysis of VEGFR1 and PDGFR in TECs indicated that both the percentage of positive mice and relative cellular fluorescence intensities were significantly higher in the PR + Tan IIA group than the NS group (p<0.05), and no changes were seen in VEGFR2, EGFR, and FGFR1 levels (Figure 5C). Treatment of cells with the VEGFR1/PDGFR inhibitor SU6668 weakened the Tan IIA-dependent enhanced tube formation; whereas, low SU6668 concentration was not seen to inhibit tube formation without Tan IIA (Figure 5D). Effects of Tan IIA on endothelial cells tube formation and TEC surface expression of growth factor receptors. (A, B) Tan IIA enhanced tube formation of HUVECs and human TECs (50×). (C) The percentage of VEGFR1- and PDGFR-positive TECs and the relative fluorescence intensities. Values were higher in the PR + Tan IIA group than NS group. *p<0.05, **p<0.01. (D) Tube formation was enhanced by Tan IIA in vivo, similar to results seen with Tan IIA incubation in vitro; the addition of the VEGFR1/PDGFR inhibitor SU6668 weakened this enhancement effect. PR-induced residual tumor growth and metastasis: As shown in the in vivo experiment 1 (IE 1) in Tables 1 and Additional file 1: Table S2, the tumor volume (TV) was greater in the PR than Sham groups (p<0.05, Additional file 1: Figure S1A). Compared with the Sham group, the lung metastasis (LM) (HCCLM3) of the PR group significantly increased (p<0.001, Figure 1A and S1B); both intrahepatic metastasis (IHM) and abdominal metastasis (AM) also increased (p<0.01 for IHM, in Figures 1B, Additional file 1: Figure S1C, and Additional file 1: Figure S2A; p<0.05 for AM, in Figures 1C, Additional file 1: Figure S1D, and Additional file 1: Figure S2B); and circulating tumor cells (CTCs) were elevated both at 2 and 35 d postresection (p<0.001, Additional file 1: Figure S1E). Summary of tumor growth, metastasis, and mice’s survival from three animal experiments of HCCLM3 a Student’s t-test, equal variances assumed. b One-way analysis of variance. Abbreviations: PR, palliative resection; NS, normal saline; CTCs, circulating tumor cells; IE, in vivo experiment; LM, lung metastasis; ND, not done; TV, tumor volume. Schematic diagram of metastases and cumulative survival of tumor-bearing nude mice in animal experiments. Occurrence of lung (A), intrahepatic (B), and abdomen metastases (C) significantly increased after PR. Tan IIA decreased the numbers of various metastatic lesions. (D) Survival curves showed that the life-span of test mice was significantly shortened by PR and was markedly prolonged by Tan IIA. Red: HCCLM3. Green: HepG2. Tan IIA does not directly inhibit tumor growth but reduces metastasis: Results of IE (2), shown in Tables 1 and Additional file 1: Table S2, indicate there were no differences in TV between the four groups observed (Additional file 1: Figure S1A). Compared with normal saline (NS), the LM of the Tan IIA-treated groups (5 or 10 mg/kg/d) decreased (p=0.046 and p<0.001, Additional file 1: Figure S1B). Compared with Tan IIA treatment of 1 or 5 mg/kg/d, the LM of the 10 mg/kg/d group also decreased (p=0.003 and p=0.046, Additional file 1: Figure S1B). Both the IHM and AM rates of the 5/10 mg Tan IIA/kg/d treatment groups were significantly reduced (p<0.05, Additional file 1: Figure S1C and D). No AM deriving from HepG2 cells was found. 
The greatest inhibitory effects were seen at a dosage of 10 mg/kg/d, which was chosen as the intervention dosage in IE (3). Tan IIA inhibits the PR-enhanced residual tumor metastasis: Results of IE (3), summarized in Tables 1 and Additional file 1: Table S2, show that administration of Tan IIA after PR resulted in decreased residual TV compared to NS (p<0.05, Additional file 1: Figure S1A). Compared with the PR + NS group, the LM (HCCLM3) of the Tan IIA treatment group was significantly decreased (p<0.001, Figures 1A and Additional file 1: Figure S1B); both IHM and AM also decreased (p<0.01 for IHM, Figures 1B, Additional file 1: Figure S1C, and S2A; p<0.01 for AM, Figures 1C, Additional file 1: Figure S1D, and S2B), and the CTCs were relatively decreased (p<0.001, Additional file 1: Figure S1E). Tan IIA prolongs host survival: Tan IIA treatment retarded the weight loss of mice after PR (Additional file 1: Figure S3). The estimated survival of PR mice was significantly shorter than of Sham mice in IE (1) (p<0.01, Tables 1 and Additional file 1: Table S2, Figure 1D). In IE (2), Tan IIA prolonged the mice’s survival up to 16 d for HCCLM3 (87.000 ± 3.804 vs. 102.667 ± 3.201, p=0.004) and 19 d for HepG2 (p<0.001, Tables 1 and Additional file 1: Table S2, Figure 1D), compared with NS. The same effect on prolongation of post-PR survival was seen in IE (3) (p=0.001 for both, Tables 1 and Additional file 1: Table S2, Figure 1D). Tan IIA does not inhibit proliferation but minimizes invasiveness of tumor cells: Compared with dimethylsulfoxide treatment control, the OD value of Tan IIA 0.01–100-μM treatment groups showed no change (Figure 2A). The number of invasive cells in the 5/10-μM Tan IIA groups was significantly reduced (p<0.01). No significant differences were seen for the 1-μM Tan IIA group (Figure 2B and C). Effects of Tan IIA on tumor cell proliferation and invasion. (A) Tan IIA treatment at 0.01–100 μM did not inhibit HCCLM3 and HepG2 cell proliferation, except in the 1000-μM dosage group. (B), (C) Tan IIA treatment with 5 or 10 μM for 48 h inhibited tumor cell invasion (200×). Tan IIA alleviates residual tumor hypoxia in vivo but does not downregulate HIF-1 of tumor cells under hypoxic conditions in vitro: The immunohistochemical marker for tissue hypoxia Pimonidazole and HIF-1α levels were significantly increased after PR, and they were reduced by Tan IIA. In addition, the residual tumor epithelial-mesenchymal transition (EMT) was enhanced (N-cadherin and Vimentin were both upregulated, but E-cadherin was downregulated), and this effect could be reversed by Tan IIA (Figures 3A,C, and D, and Additional file 1: Figure S4A). These results indicate that PR aggravated residual tumor hypoxia and promoted EMT, and Tan IIA treatment was able to alleviate hypoxia and inhibit EMT in vivo. Levels of HIF-1α, N-cadherin, and vimentin were upregulated in tumor cells, and E-cadherin was downregulated, under conditions of hypoxia. Other molecules were not downregulated as anticipated from results of Tan IIA experiments in vitro (Figures 3B and Additional file 1: Figure S4B). We did observe that E-cadherin expression could indeed be upregulated by Tan IIA, independent of the hypoxia effect (Figures 3E and Additional file 1: Figure S4C). Levels of proteins observed were consistent with their corresponding mRNA levels (Figures 3 and Additional file 1: Figure S4). Effects of Tan IIA on residual tumor, cell hypoxia and epithelial-mesenchymal transition. 
(A) PR promoted HIF-1α expression and caused epithelial-mesenchymal transition (EMT; upregulated N-cadherin and vimentin, downregulated E-cadherin); Tan IIA reversed EMT in vivo. (B) Hypoxia-induced HIF-1α and promotion of EMT; Tan IIA had no effect on these parameters in vitro. (C), (D) Tan IIA diminished the enlarged Pimonidazole area of residual tumor after PR (50×). *Compared with Sham group, **compared with PR + NS group; p<0.001. (E) Tan IIA upregulated E-cadherin in tumor cells (200×). Tan IIA does not affect microvessel density but promotes microvessel integrity: Studies of mouse microvessel density (MVD) used the marker CD31. NG2 proteoglycan, the marker of vascular pericytes [12,15], was adopted to evaluate microvessel intensity (MVI). The CD31 levels of the PR group were higher than of the Sham (p<0.001), and there was no statistical significance between the PR + NS and Tan IIA groups (Figures 4A and Additional file 1: Figure S5A). Although the NG2 proteoglycan levels showed no change after PR, its level was significantly elevated in the PR + Tan IIA group (p<0.01, Figures 4A and Additional file 1: Figure S5B). Immunohistochemistry of CD31, NG2, and Pimonidazole in serial sections showed that in the PR + NS group, CD31 was high, NG2 was low, and the hypoxia levels (Pimonidazole) were seriously high. In the PR + Tan IIA group, CD31 was high, NG2 was also high, and hypoxia levels were slight (Figure 4B). These results indicate that the residual tumor MVD increased after PR, but the MVI was low. Tan IIA did not inhibit MVD but markedly improved MVI, promoting vessel integrity. Scanning electron microscopy (SEM) further revealed that the vascular wall of PR + Tan IIA tumors was more integrated than in the NS tumors (Figure 4C). Immunofluorescence (IF) results verified that Tan IIA did not affect CD31 levels, elevated NG2, and decreased hypoxia levels (Figure 4D and E). Effects of Tan IIA on tumor microvessel density, microvessel integrity, and hypoxia. (A) CD31 microvessel density (MVD) increased, and NG2 microvessel integrity (MVI) showed no change after PR. Tan IIA did not affect MVD, but it did elevate MVI. *p<0.01. (B) PR + NS group: CD31 was high, NG2 was low, and Pimonidazole was high. PR + Tan IIA group: CD31 was high, NG2 was also high, and thus hypoxia was slight (400×). (C) The vascular wall of Tan IIA tumor tissue was more integrated than NS tissue (1200×). (D), (E) Tan IIA increased NG2 levels and reduced residual tumor hypoxia (200×). Tan IIA enhances tube formation and is associated with vascular endothelial cell growth factor receptor 1 (VEGFR1) and platelet derived growth factor receptor (PDGFR) upregulation: Tube formation of human umbilical vein endothelial cells (HUVECs) and human tumor-derived endothelial cells (TECs) was enhanced by Tan IIA (Figure 5A and B). And tube formation of TECs from the PR + Tan IIA (in vivo) mouse group was strengthened relative to the PR group (Figure 5D). Using TECs from the PR group, we further found that tube formation was enhanced in vitro by Tan IIA, and it was roughly equivalent to the PR + Tan IIA (in vivo) group (Figure 5D). Subsequent flow cytometric analysis of VEGFR1 and PDGFR in TECs indicated that both the percentage of positive mice and relative cellular fluorescence intensities were significantly higher in the PR + Tan IIA group than the NS group (p<0.05), and no changes were seen in VEGFR2, EGFR, and FGFR1 levels (Figure 5C). 
Treatment of cells with the VEGFR1/PDGFR inhibitor SU6668 weakened the Tan IIA-dependent enhanced tube formation; whereas, low SU6668 concentration was not seen to inhibit tube formation without Tan IIA (Figure 5D). Effects of Tan IIA on endothelial cells tube formation and TEC surface expression of growth factor receptors. (A, B) Tan IIA enhanced tube formation of HUVECs and human TECs (50×). (C) The percentage of VEGFR1- and PDGFR-positive TECs and the relative fluorescence intensities. Values were higher in the PR + Tan IIA group than NS group. *p<0.05, **p<0.01. (D) Tube formation was enhanced by Tan IIA in vivo, similar to results seen with Tan IIA incubation in vitro; the addition of the VEGFR1/PDGFR inhibitor SU6668 weakened this enhancement effect. Discussion: In the present study of postsurgical residual tumors, we established a double-tumor xenograft HCC model and found that PR accelerated local aggressivity and distant metastasis. Administration of Tan IIA after PR significantly inhibited metastases and prolonged survival of nude mice bearing residual tumor tissue, and the effect was closely associated with VEGFR1/PDGFR-related vascular normalization. Early in 1959, Fisher et al. [22] found that partial hepatectomy elicited hepatic metastases. Our results confirm the view that incomplete surgical resection of primary tumor may well induce the metastatic potential of residual tumor tissue. Currently, there is no evidence indicating that presurgical primary tumor could govern postsurgical residual tumor when growing in the same liver lobe. Interestingly, we found that residual tumor hypoxia was aggravated, HIF-1α was upregulated, and EMT was induced after surgical removal of “primary tumor.” Van der Bilt et al. [14] have reported that surgery-induced tumor hypoxia can stimulate abnormal angiogenesis. Our findings that neovascular abnormality is significantly augmented following PR are consistent with that report. Severe tumor hypoxia might have effects on abnormalities in the vasculature by promoting release of angiogenic cytokines [10], which would enhance metastasis. Tumor vessel abnormalities could promote metastasis through mechanical penetration and hypoxia-induced EMT. We selected Tan IIA, known to possess potential vascular activity, to investigate inhibition of metastasis and the association with vascular normalization. We observed that Tan IIA could inhibit post-PR enhanced metastases and, more importantly, prolong survival. At a cellular level, Tan IIA showed no effect on tumor cell proliferation, but it minimized invasion. These effects were consistent with in vivo results showing that Tan IIA did not inhibit single-xenograft tumor growth but decreased metastases. A possible mechanism may be related by correlation with the observed E-cadherin upregulation. Furthermore, we observed that Tan IIA significantly alleviates residual tumor hypoxia and inhibits EMT in vivo. However, it did not downregulate HIF-1α or reverse EMT of tumor cells under hypoxic conditions in vitro. Therefore, we propose that the main inhibition of metastasis by Tan IIA is indirect. We also observed that the tumor MVD increased after PR, but the MVI was low, suggesting that the surgery-induced angiogenesis was related to structural abnormalities [14]. Tumor cells would metastasize more easily through such a leaky vascular wall, causing an increase of CTCs. Tan IIA did not inhibit MVD but dramatically improved MVI, which is probably related to its underlying vasoactivity. 
Tan IIA also enhanced tube formation of endothelial cells under hypoxic conditions in vitro. Additionally, tube formation of mouse TECs treated with Tan IIA was similar whether the treatment was given in vivo or in vitro, further indicating a direct effect of Tan IIA on the endothelium. How Tan IIA may promote vascular normalization is not entirely clear, because its receptor on endothelial cells is unknown. Our results have shown a possible correlation with VEGFR1 and PDGFR upregulation. Recently, it has been reported that inhibition of VEGFR1 and PDGFR signaling in several tumors causes pericyte detachment and vessel regression, leading to vascular abnormalities [23,24]. This implies that upregulation of this signaling might produce a beneficial effect. Of relevance is the observation that the compound AZD2171 inhibits VEGFR while promoting vessel normalization [16]. The mechanism of Tan IIA action requires further investigation. Vascular normalization affects two major processes: (i) mechanical prevention of tumor cell migration via intra/extravasation; and (ii) restoration of oxygen and nutrient supply. However, recovery of the tumor blood supply may have mixed effects, contributing to progression while also suppressing tumor growth [9]. The latter effect is likely to depend on a combination of factors. The normalization of vessels resulting from both direct and indirect Tan IIA effects appears to be involved in the inhibition of residual tumor growth, invasion, and metastasis. Conclusions: Our findings demonstrate that Tan IIA can inhibit the enhanced metastasis induced by PR and may do so in part via VEGFR1/PDGFR-related vascular normalization. This work has an important implication: the malignant phenotype of a tumor may be manipulated through vascular pathways, which could be an alternative to simple eradication. Our results highlight the potential of proangiogenic “vessel normalizing” treatment strategies to silence metastasis and prolong patient survival. Methods: Cell lines, animal model, and drug: The human HCC cell lines HCCLM3-RFP, which has high metastatic potential [25], and HepG2-GFP [26], as well as HUVECs and TECs [27], were used in the studies. Male, athymic BALB/c nu/nu mice, 4–6 weeks of age and weighing approximately 20 g, were used as host animals. A metastatic human HCC animal model was established by orthotopic implantation of histologically intact tumor tissue into the nude mouse liver [28]. To explore the protumoral effects of PR, we constructed an orthotopic double-tumor xenograft model in which two tumor pieces were simultaneously inoculated into the left liver lobe; the inoculation method was as described [28]. After 2 weeks, partial hepatectomy [29] was performed to excise one tumor. The Sham hepatectomy cohort was handled like the PR cohort, but without tumor resection. Tan IIA (sulfotanshinone sodium injection, 5 mg/ml), commercially available from the First Biochemical Pharmaceutical Co. Ltd., Shanghai, China, was used in the in vivo experiments. Tan IIA monomer (Sigma, St. Louis, MO), a reddish lyophilized powder of 99.99% purity, was first dissolved in dimethyl sulfoxide and then diluted with NS to the required concentration for the in vitro studies. Experimental groups and assessment parameters: For IE (1), 30 double-tumor-bearing mice were randomly divided into Sham and PR groups (n=15 each) and scheduled to be observed after 35 d. In IE (2), the single-tumor xenograft model was used. Mice were divided into four groups (n=18 each) and received daily injections of NS or Tan IIA (1, 5, or 10 mg/kg/d). Tan IIA was diluted with NS. We took 20 g as the average mouse weight (25 g after 21 d), and each mouse received 0.2 mL of solution intraperitoneally for 35 d. In IE (3), the double-tumor xenograft plus PR model was used to examine the effects of Tan IIA on residual tumor. Mice were divided into two groups (n=15 each) after PR and received daily injections of NS or 10 mg Tan IIA/kg/d for 35 d. Mouse weight was measured once every 7 d. After 35 d, six mice from each group were retained to observe survival, and the remaining mice were sacrificed to measure TV [30], LM, IHM, AM [26], and CTCs, and to perform SEM of tumor vessels. CTCs were enumerated by flow cytometry and expressed as percent CTCs/TV (%) [6]. Twelve mice (IE 1, n=6 per group) were sacrificed 2 d after resection to examine CTCs shortly after PR. Cell proliferation and invasion: A Cell Counting Kit-8 (Dojindo, Kumamoto, Japan) was used to assay cell proliferation. The final concentration of Tan IIA was 0.01–1000 μM. Results were expressed as OD at 490 nm. Cell invasiveness was assayed in Matrigel-coated Transwell Invasion Chambers (Corning, Cambridge, MA). Tan IIA was added to cells at final concentrations of 1, 5, or 10 μM, and the cultures were incubated for 48 h. Cells that passed through the chamber membranes were counted. Hypoxia evaluation: Cells were cultured in a Bugbox Hypoxic Workstation (Ruskinn, Mid Glamorgan, UK; 1% O2, 5% CO2, and 94% N2 atmosphere) and incubated with Tan IIA at 1, 5, or 10 μM for 48 h. Normoxic conditions (20% O2, 5% CO2, and 75% N2) served as the control. Pimonidazole immunostaining and HIF-1α expression were defined as hypoxia biomarkers. A Hypoxyprobe™-1 Kit (Hypoxyprobe Inc., Burlington, MA) was used [6]. Isolation of TECs, flow cytometry, and tube formation: Eight tumors from the Sham, PR, or PR + Tan IIA groups were collected. The TECs were isolated using anti-CD31 antibody (AB)-coupled magnetic beads (Miltenyi Biotec, Cologne, Germany) and a magnetic cell-sorting system [27], and they were divided into Sham, PR, PR + Tan IIA (in vivo), and PR + Tan IIA (in vitro) groups; TECs isolated from the PR group were incubated with Tan IIA for 48 h. TEC surface expression of VEGFR1, VEGFR2, EGFR, PDGFR, FGFR1, and CD31 was determined by flow cytometric analysis (R&D, Minneapolis, MN). Receptor density was calculated as the relative fluorescence intensity. In another set of experiments, TECs from the PR + Tan IIA (in vivo) cohort were divided into control and SU6668 (Sigma; a selective VEGFR1/PDGFR inhibitor) treatment groups. TECs from the PR cohort were also divided into PR, PR + SU6668, PR + Tan IIA (in vitro), and PR + SU6668 + Tan IIA groups. HUVECs and human TECs were separated into control, normoxia + Tan IIA, and hypoxia + Tan IIA groups. Formation of capillary-like structures was observed as described [27]. 
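As a rough illustration (not the authors' pipeline) of how the two flow-cytometry read-outs reported above, the percentage of receptor-positive TECs and the relative fluorescence intensity, are commonly derived, the following Python sketch uses an isotype-control gate and a median-intensity ratio; all values and the gating rule are invented assumptions.

    # Hypothetical per-cell fluorescence values; not data from the study.
    import numpy as np

    rng = np.random.default_rng(0)
    isotype_control = rng.lognormal(mean=4.0, sigma=0.4, size=5000)   # background staining
    vegfr1_stained  = rng.lognormal(mean=4.6, sigma=0.5, size=5000)   # anti-VEGFR1 signal

    threshold = np.percentile(isotype_control, 99)                    # simple isotype-based gate
    percent_positive = 100.0 * np.mean(vegfr1_stained > threshold)

    # "Relative fluorescence intensity": stained median over isotype median (one common definition)
    relative_mfi = np.median(vegfr1_stained) / np.median(isotype_control)

    print(f"VEGFR1-positive cells: {percent_positive:.1f}%  relative MFI: {relative_mfi:.2f}")

The exact gating strategy and normalization used in the study are not specified, so this is only a sketch of the general approach.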
Immunohistochemistry, immunofluorescence, western blot, and quantitative real-time polymerase chain reaction: Immunohistochemistry [31] of Pimonidazole, CD31, and NG2 [15] was performed on paraffin sections. The primary antibodies were against Pimonidazole (1:100), CD31 (1:100; Abcam, Cambridge, MA), and NG2 (1:200; Millipore, Billerica, MA). The integrated optical density (IOD; for Pimonidazole) or area (for CD31 and NG2) of positive staining relative to the total area was quantified with Image-Pro Plus software [31]. IF double-staining [31] of CD31 (1:50) and NG2 (1:50), and of NG2 and Pimonidazole (1:80), was performed on frozen sections and observed under a laser confocal microscope. IF of E-cadherin in cells was also determined. Protein levels of HIF-1α, N-cadherin, E-cadherin, and vimentin were determined by immunoblot analysis. Primary antibodies against HIF-1α (1:1000; Sigma), β-actin, N-cadherin (1:1000; Abcam), E-cadherin (1:400; Santa Cruz Biotechnology, Santa Cruz, CA), and vimentin (1:800; Cell Signaling Technology, Beverly, MA) were used. Levels of mRNA were assessed by quantitative real-time polymerase chain reaction (Additional file 1: Table S1) and normalized to the corresponding internal β-actin signal (ΔCt). Relative gene expression values were expressed as 2^(−ΔΔCt) [30]. 
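As a minimal sketch of the 2^(−ΔΔCt) calculation described above, the following Python snippet shows the arithmetic; the Ct values are invented placeholders, not data from the study.

    def delta_delta_ct(ct_target_treated, ct_actin_treated, ct_target_control, ct_actin_control):
        """Return relative expression (fold change) by the 2^-ddCt method."""
        d_ct_treated = ct_target_treated - ct_actin_treated   # delta-Ct, treated sample
        d_ct_control = ct_target_control - ct_actin_control   # delta-Ct, control sample
        dd_ct = d_ct_treated - d_ct_control                   # delta-delta-Ct
        return 2 ** (-dd_ct)

    # Example with hypothetical Ct values for a target gene vs. beta-actin
    fold_change = delta_delta_ct(24.1, 17.8, 26.0, 17.9)
    print(round(fold_change, 2))  # ~3.48-fold higher than control

A fold change above 1 indicates higher expression in the treated sample than in the control after β-actin normalization.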
Statistical analysis: All statistical analyses were performed with SPSS 16.0 software. The Pearson chi-square test was used to compare qualitative variables. Quantitative variables were expressed as mean ± standard deviation and analyzed by t-test or one-way analysis of variance followed by the least significant difference test. The Kaplan–Meier method with the log-rank test was used for survival analysis. A p value of <0.05 was considered statistically significant. Ethics approval: Animal care and experimental protocols were approved by the Shanghai Medical Experimental Animal Care Commission. 
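As a rough illustration of the survival and group comparisons described in the Statistical analysis paragraph above, the sketch below uses Python (scipy and the lifelines package) rather than SPSS; all survival times, event flags, and tumor volumes are invented placeholders.

    from scipy import stats
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Hypothetical survival times (days) and event flags (1 = death observed, 0 = censored)
    surv_ns      = [38, 45, 52, 60, 66, 70]
    event_ns     = [1, 1, 1, 1, 1, 1]
    surv_taniia  = [55, 63, 78, 85, 90, 95]
    event_taniia = [1, 1, 1, 0, 1, 0]

    km = KaplanMeierFitter()
    km.fit(surv_taniia, event_observed=event_taniia, label="PR + Tan IIA")
    print("median survival (PR + Tan IIA):", km.median_survival_time_)

    lr = logrank_test(surv_ns, surv_taniia,
                      event_observed_A=event_ns, event_observed_B=event_taniia)
    print("log-rank p =", lr.p_value)

    # Hypothetical tumor volumes compared by a two-sample t-test
    tv_ns, tv_taniia = [2.1, 2.4, 1.9, 2.6, 2.2], [1.8, 2.0, 1.7, 2.3, 1.9]
    print("t-test p =", stats.ttest_ind(tv_ns, tv_taniia).pvalue)

This assumes the lifelines package is installed; it is used here only as a stand-in for the SPSS procedures named in the methods.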
Abbreviations: AM: Abdominal metastasis; CTCs: Circulating tumor cells; EMT: Epithelial-mesenchymal transition; HCC: Hepatocellular carcinoma; HUVEC: Human umbilical vein endothelial cells; IE: In vivo experiment; IF: Immunofluorescence; IHM: Intrahepatic metastasis; IOD: Integrated optical density; LM: Lung metastasis; MVD: Microvessel density; MVI: Microvessel integrity; NS: Normal saline; PDGFR: Platelet derived growth factor receptor; PR: Palliative resection; SEM: Scanning electron microscopy; Tan IIA: Tanshinone IIA; TEC: Tumor-derived endothelial cells; TV: Tumor volume; VEGFR1: Vascular endothelial cell growth factor receptor 1. Competing interests: The authors declare there are no competing interests. Authors’ contributions: WWQ designed the study, established the animal model, carried out the immunoassays, performed the statistical analysis, and drafted the manuscript. LL and SHC participated in the design of the study, data analysis, and drafting of the manuscript. FYL, XHX, CZT, ZQB, KLQ, ZXD, and LL helped to acquire experimental data. RZG reviewed the manuscript. TZY conceived the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript. Authors’ information: WWQ, M.D., Ph.D., in Cancer Surgery. Graduated from the Liver Cancer Institute, Zhongshan Hospital, Fudan University; Key Laboratory for Carcinogenesis & Cancer Invasion (Fudan University), the Chinese Ministry of Education. WWQ is now working in the Department of Pancreatic and Hepatobiliary Surgery, Fudan University Shanghai Cancer Center; the Department of Oncology, Shanghai Medical College, Fudan University; and the Pancreatic Cancer Institute, Fudan University. Supplementary Material: Table S1. The primer sequences for amplification of human HIF-1α, N-cadherin, E-cadherin, Vimentin, and β-actin. Table S2. Summary of tumor growth, metastasis, and survival of host mice in three animal experiments. Figure S1. Statistical chart of tumor volume (A), metastases to lung (B), intrahepatic (C), and to abdomen (D), and circulating tumor cells (E) in three animal experiments. *For Student’s t-test, equal variances were assumed. Abbreviations: CTCs, circulating tumor cells; IHM, intrahepatic metastasis. Figure S2. Schematic diagram of intrahepatic and abdominal metastases. (A) Intrahepatic metastatic lesions shown by HE staining (50×). (B) Typical abdominal metastasis of residual HCCLM3 tumor 35 d after palliative resection. Panels a and b show intrahepatic, peritoneal, and diaphragmatic metastatic lesions; c and d show intrahepatic and diaphragmatic metastatic lesions; d shows the area where the tumor in c was removed. Figure S3. Monitoring of body weight of experimental mice in three in vivo experiments. (A) In vivo experiment 1 (IE 1). (B) IE (2). (C) IE (3). Figure S4. Intratumoral and intracellular mRNA levels of HIF-1α, N-cadherin, E-cadherin, and Vimentin. (A) *Compared with Sham group, **compared with PR + NS group; p<0.05. (B) *Compared with dimethylsulfoxide (DMSO) group, **compared with hypoxia group; p<0.05. (C) *Compared with DMSO group; p<0.05. Figure S5. Immunohistochemical staining of CD31 (A) and NG2 (B) in tumor samples.
Background: Promotion of endothelial normalization restores tumor oxygenation and obstructs tumor cells invasion, intravasation, and metastasis. We therefore investigated whether a vasoactive drug, tanshinone IIA, could inhibit metastasis by inducing vascular normalization after palliative resection (PR) of hepatocellular carcinoma (HCC). Methods: A liver orthotopic double-tumor xenograft model in nude mouse was established by implantation of HCCLM3 (high metastatic potential) and HepG2 tumor cells. After removal of one tumor by PR, the effects of tanshinone IIA administration on metastasis, tumor vascularization, and survival were evaluated. Tube formation was examined in mouse tumor-derived endothelial cells (TECs) treated with tanshinone IIA. Results: PR significantly accelerated residual hepatoma metastases. Tanshinone IIA did not inhibit growth of single-xenotransplanted tumors, but it did reduce the occurrence of metastases. Moreover, it inhibited PR-enhanced metastases and, more importantly, prolonged host survival. Tanshinone IIA alleviated residual tumor hypoxia and suppressed epithelial-mesenchymal transition (EMT) in vivo; however, it did not downregulate hypoxia-inducible factor 1α (HIF-1α) or reverse EMT of tumor cells under hypoxic conditions in vitro. Tanshinone IIA directly strengthened tube formation of TECs, associated with vascular endothelial cell growth factor receptor 1/platelet derived growth factor receptor (VEGFR1/PDGFR) upregulation. Although the microvessel density (MVD) of residual tumor tissue increased after PR, the microvessel integrity (MVI) was still low. While tanshinone IIA did not inhibit MVD, it did dramatically increase MVI, leading to vascular normalization. Conclusions: Our results demonstrate that tanshinone IIA can inhibit the enhanced HCC metastasis associated with PR. Inhibition results from promoting VEGFR1/PDGFR-related vascular normalization. This application demonstrates the potential clinical benefit of preventing postsurgical recurrence.
Background: Surgical resection is the most promising strategy for early-stage hepatocellular carcinoma (HCC); however, the 5-year risk of recurrence is as high as 70% [1]. The surgery is actually palliative resection (PR), owing to the existence of satellites and microvascular invasion, and these residual tumor nests can actually be stimulated to grow by the PR [2,3]. Although several treatments, such as interferon-alpha and sorafenib, have been proposed to diminish relapse [4,5], prometastatic side effects of these options have also been observed [6,7]. Residual tumor cells may stimulate angiogenesis, which is needed for tumor growth [8-10]. However, the resulting neovessels may be disordered and inefficiently perfused, resulting in hypoxic conditions [10,11]. Both abnormal endothelium and pericytes integrated into the capillary wall, along with deficient coverage, could be responsible for the vascular architectural abnormalities [12]. The resulting hypoxia creates a hostile tumor milieu in which tumor cells may migrate via intra- or extravasation through a leaky vessel [9,13]. In effect, surgery-induced hypoxia unfavorably impacts the prognosis of cancer patients by inducing angiogenesis [14]. Therefore, restoring oxygen supply via vascular normalization may reduce metastasis, even tumor growth. Mazzone et al. [13] showed that downregulation of the oxygen sensing molecule PHD2 can restore tumor oxygenation and inhibit metastasis via endothelial normalization, where endothelial cells form a protective phalanx that blocks metastasis. Although several methods have been shown experimentally to promote vessel remodeling, only seldom has any of them found use in the clinic [9,10,15,16]. Tanshinone IIA (Tan IIA) is an herbal monomer with a clear chemical structure, isolated from Salvia miltiorrhiza. In Chinese traditional medicine, S. miltiorrhiza is considered to promote blood circulation for removing blood stasis and improve microcirculation. Some of these effects could include vessel normalization. We have reported that an herbal formula, Songyou Yin, can attenuate HCC metastases [17], and S. miltiorrhiza is one of the five constituents of the formula [18]. Tan IIA exhibits direct vasoactive [19,20] and certain antitumor properties [21]. It is possible that Tan IIA may indirectly decrease metastasis in HCC following PR by promoting blood vessel normalization; however, there is to date no evidence supporting this hypothesis. We aimed to identify inhibitory effects of Tan IIA on HCC metastasis for delineating a possible mechanism of action of the compound, with a main focus on tumor vessel maturity as a potential marker for evaluating Tan IIA treatment responses. Conclusions: Our findings demonstrate that Tan IIA can inhibit the enhanced metastasis induced by PR and may do so in part via VEGFR1/PDGFR-related vascular normalization. This work has an important implication: that the malignant phenotype of a tumor may be manipulated through vascular pathways, which could be an alternative to simple eradication. Our results highlight the potential of proangiogenic “vessel normalizing” treatment strategies to silence metastasis and prolong patient survival.
Background: Promotion of endothelial normalization restores tumor oxygenation and obstructs tumor cells invasion, intravasation, and metastasis. We therefore investigated whether a vasoactive drug, tanshinone IIA, could inhibit metastasis by inducing vascular normalization after palliative resection (PR) of hepatocellular carcinoma (HCC). Methods: A liver orthotopic double-tumor xenograft model in nude mouse was established by implantation of HCCLM3 (high metastatic potential) and HepG2 tumor cells. After removal of one tumor by PR, the effects of tanshinone IIA administration on metastasis, tumor vascularization, and survival were evaluated. Tube formation was examined in mouse tumor-derived endothelial cells (TECs) treated with tanshinone IIA. Results: PR significantly accelerated residual hepatoma metastases. Tanshinone IIA did not inhibit growth of single-xenotransplanted tumors, but it did reduce the occurrence of metastases. Moreover, it inhibited PR-enhanced metastases and, more importantly, prolonged host survival. Tanshinone IIA alleviated residual tumor hypoxia and suppressed epithelial-mesenchymal transition (EMT) in vivo; however, it did not downregulate hypoxia-inducible factor 1α (HIF-1α) or reverse EMT of tumor cells under hypoxic conditions in vitro. Tanshinone IIA directly strengthened tube formation of TECs, associated with vascular endothelial cell growth factor receptor 1/platelet derived growth factor receptor (VEGFR1/PDGFR) upregulation. Although the microvessel density (MVD) of residual tumor tissue increased after PR, the microvessel integrity (MVI) was still low. While tanshinone IIA did not inhibit MVD, it did dramatically increase MVI, leading to vascular normalization. Conclusions: Our results demonstrate that tanshinone IIA can inhibit the enhanced HCC metastasis associated with PR. Inhibition results from promoting VEGFR1/PDGFR-related vascular normalization. This application demonstrates the potential clinical benefit of preventing postsurgical recurrence.
12,547
339
[ 484, 336, 201, 145, 153, 135, 359, 423, 323, 247, 277, 95, 101, 239, 258, 82, 16, 120, 9, 97, 81 ]
26
[ "iia", "tan iia", "tan", "pr", "tumor", "figure", "file", "additional file", "additional", "group" ]
[ "tumor derived endothelial", "angiogenesis needed tumor", "tumor hypoxia effects", "tumor manipulated vascular", "residual tumor hypoxia" ]
[CONTENT] Tanshinone IIA | Vascular normalization | Palliative resection | Hepatocellular carcinoma | Metastasis [SUMMARY]
[CONTENT] Tanshinone IIA | Vascular normalization | Palliative resection | Hepatocellular carcinoma | Metastasis [SUMMARY]
[CONTENT] Tanshinone IIA | Vascular normalization | Palliative resection | Hepatocellular carcinoma | Metastasis [SUMMARY]
[CONTENT] Tanshinone IIA | Vascular normalization | Palliative resection | Hepatocellular carcinoma | Metastasis [SUMMARY]
[CONTENT] Tanshinone IIA | Vascular normalization | Palliative resection | Hepatocellular carcinoma | Metastasis [SUMMARY]
[CONTENT] Tanshinone IIA | Vascular normalization | Palliative resection | Hepatocellular carcinoma | Metastasis [SUMMARY]
[CONTENT] Abietanes | Animals | Antineoplastic Agents, Phytogenic | Carcinoma, Hepatocellular | Cell Growth Processes | Cell Line, Tumor | Humans | Liver Neoplasms | Male | Mice | Mice, Inbred BALB C | Mice, Nude | Neoplasm Metastasis | Random Allocation | Xenograft Model Antitumor Assays [SUMMARY]
[CONTENT] Abietanes | Animals | Antineoplastic Agents, Phytogenic | Carcinoma, Hepatocellular | Cell Growth Processes | Cell Line, Tumor | Humans | Liver Neoplasms | Male | Mice | Mice, Inbred BALB C | Mice, Nude | Neoplasm Metastasis | Random Allocation | Xenograft Model Antitumor Assays [SUMMARY]
[CONTENT] Abietanes | Animals | Antineoplastic Agents, Phytogenic | Carcinoma, Hepatocellular | Cell Growth Processes | Cell Line, Tumor | Humans | Liver Neoplasms | Male | Mice | Mice, Inbred BALB C | Mice, Nude | Neoplasm Metastasis | Random Allocation | Xenograft Model Antitumor Assays [SUMMARY]
[CONTENT] Abietanes | Animals | Antineoplastic Agents, Phytogenic | Carcinoma, Hepatocellular | Cell Growth Processes | Cell Line, Tumor | Humans | Liver Neoplasms | Male | Mice | Mice, Inbred BALB C | Mice, Nude | Neoplasm Metastasis | Random Allocation | Xenograft Model Antitumor Assays [SUMMARY]
[CONTENT] Abietanes | Animals | Antineoplastic Agents, Phytogenic | Carcinoma, Hepatocellular | Cell Growth Processes | Cell Line, Tumor | Humans | Liver Neoplasms | Male | Mice | Mice, Inbred BALB C | Mice, Nude | Neoplasm Metastasis | Random Allocation | Xenograft Model Antitumor Assays [SUMMARY]
[CONTENT] Abietanes | Animals | Antineoplastic Agents, Phytogenic | Carcinoma, Hepatocellular | Cell Growth Processes | Cell Line, Tumor | Humans | Liver Neoplasms | Male | Mice | Mice, Inbred BALB C | Mice, Nude | Neoplasm Metastasis | Random Allocation | Xenograft Model Antitumor Assays [SUMMARY]
[CONTENT] tumor derived endothelial | angiogenesis needed tumor | tumor hypoxia effects | tumor manipulated vascular | residual tumor hypoxia [SUMMARY]
[CONTENT] tumor derived endothelial | angiogenesis needed tumor | tumor hypoxia effects | tumor manipulated vascular | residual tumor hypoxia [SUMMARY]
[CONTENT] tumor derived endothelial | angiogenesis needed tumor | tumor hypoxia effects | tumor manipulated vascular | residual tumor hypoxia [SUMMARY]
[CONTENT] tumor derived endothelial | angiogenesis needed tumor | tumor hypoxia effects | tumor manipulated vascular | residual tumor hypoxia [SUMMARY]
[CONTENT] tumor derived endothelial | angiogenesis needed tumor | tumor hypoxia effects | tumor manipulated vascular | residual tumor hypoxia [SUMMARY]
[CONTENT] tumor derived endothelial | angiogenesis needed tumor | tumor hypoxia effects | tumor manipulated vascular | residual tumor hypoxia [SUMMARY]
[CONTENT] iia | tan iia | tan | pr | tumor | figure | file | additional file | additional | group [SUMMARY]
[CONTENT] iia | tan iia | tan | pr | tumor | figure | file | additional file | additional | group [SUMMARY]
[CONTENT] iia | tan iia | tan | pr | tumor | figure | file | additional file | additional | group [SUMMARY]
[CONTENT] iia | tan iia | tan | pr | tumor | figure | file | additional file | additional | group [SUMMARY]
[CONTENT] iia | tan iia | tan | pr | tumor | figure | file | additional file | additional | group [SUMMARY]
[CONTENT] iia | tan iia | tan | pr | tumor | figure | file | additional file | additional | group [SUMMARY]
[CONTENT] tumor | vessel | normalization | metastasis | miltiorrhiza | resulting | hcc | blood | iia | 13 [SUMMARY]
[CONTENT] pr | iia | tan | tan iia | groups | divided | tumor | tecs | cd31 | ma [SUMMARY]
[CONTENT] figure | tan | tan iia | iia | additional file | additional | file | additional file figure | file figure | pr [SUMMARY]
[CONTENT] vascular | metastasis | alternative simple | vascular normalization work important | proangiogenic vessel normalizing | proangiogenic vessel | proangiogenic | vascular pathways alternative simple | vascular pathways alternative | vascular pathways [SUMMARY]
[CONTENT] iia | tan iia | tan | tumor | pr | figure | additional | additional file | file | file figure [SUMMARY]
[CONTENT] iia | tan iia | tan | tumor | pr | figure | additional | additional file | file | file figure [SUMMARY]
[CONTENT] ||| IIA [SUMMARY]
[CONTENT] HCCLM3 ||| one ||| IIA [SUMMARY]
[CONTENT] ||| Tanshinone IIA ||| ||| Tanshinone IIA | EMT | 1α | HIF-1α | EMT ||| Tanshinone IIA | 1 | VEGFR1 ||| MVD | MVI ||| IIA | MVD | MVI [SUMMARY]
[CONTENT] IIA ||| VEGFR1 ||| [SUMMARY]
[CONTENT] ||| IIA ||| HCCLM3 ||| one ||| IIA ||| ||| ||| Tanshinone IIA ||| ||| Tanshinone IIA | EMT | 1α | HIF-1α | EMT ||| Tanshinone IIA | 1 | VEGFR1 ||| MVD | MVI ||| IIA | MVD | MVI ||| IIA ||| VEGFR1 ||| [SUMMARY]
[CONTENT] ||| IIA ||| HCCLM3 ||| one ||| IIA ||| ||| ||| Tanshinone IIA ||| ||| Tanshinone IIA | EMT | 1α | HIF-1α | EMT ||| Tanshinone IIA | 1 | VEGFR1 ||| MVD | MVI ||| IIA | MVD | MVI ||| IIA ||| VEGFR1 ||| [SUMMARY]
Development of betulinic acid as an agonist of TGR5 receptor using a new in vitro assay.
27578964
G-protein-coupled bile acid receptor 1, also known as TGR5 is known to be involved in glucose homeostasis. In animal models, treatment with a TGR5 agonist induces incretin secretion to reduce hyperglycemia. Betulinic acid, a triterpenoid present in the leaves of white birch, has been introduced as a selective TGR5 agonist. However, direct activation of TGR5 by betulinic acid has not yet been reported.
BACKGROUND
Transfection of TGR5 into cultured Chinese hamster ovary (CHO-K1) cells was performed to establish the presence of TGR5. Additionally, TGR5-specific small interfering RNA was employed to silence TGR5 in cells (NCI-H716 cells) that secreted incretins. Uptake of glucose by CHO-K1 cells was evaluated using a fluorescent indicator. Amounts of cyclic adenosine monophosphate and glucagon-like peptide were quantified using enzyme-linked immunosorbent assay kits.
METHODS
Betulinic acid dose-dependently increases glucose uptake by CHO-K1 cells transfected with TGR5 only, which can be considered an alternative to the radioligand binding assay. Additionally, signals coupled to TGR5 activation are also increased by betulinic acid in cells transfected with TGR5. In NCI-H716 cells, which endogenously express TGR5, betulinic acid induces glucagon-like peptide secretion by increasing calcium levels. However, the actions of betulinic acid were markedly reduced in NCI-H716 cells that received TGR5-silencing treatment. Therefore, the present study demonstrates the activation of TGR5 by betulinic acid for the first time.
RESULTS
Similar to the positive control lithocholic acid, which is the established agonist of TGR5, betulinic acid has been characterized as a useful agonist of TGR5 and can be used to activate TGR5 in the future.
CONCLUSION
[ "Animals", "CHO Cells", "Cricetinae", "Cricetulus", "Dose-Response Relationship, Drug", "Humans", "Pentacyclic Triterpenes", "Receptors, G-Protein-Coupled", "Structure-Activity Relationship", "Triterpenes", "Tumor Cells, Cultured", "Betulinic Acid" ]
5001664
Introduction
It has been established that G-protein-coupled bile acid (BA) receptor 1 (also known as TGR5) agonists show potential for treating diabetic disorders.1,2 Essentially, BA induces TGR5 receptor internalization, activation of extracellular signal-regulated kinase, and intracellular cyclic adenosine monophosphate (cAMP) production in cells.1,2 In animal models, treatment with TGR5 agonist(s) produces glucagon-like peptide-1 (GLP-1) secretion, which in turn induces insulin secretion to lower blood glucose and/or increase the basal energy expenditure.3 Therefore, TGR5 has been recognized as a target for developing new antidiabetic agents.4 In the gut, BAs are also modified by the gut flora to produce the secondary BAs deoxycholic acid and lithocholic acid (LCA). BAs can be divided into hydrophobic and hydrophilic subgroups. LCA is hydrophobic and could activate TGR5,5 but was highly toxic to the liver.6 Betulinic acid (Figure 1), a triterpenoid present in the leaves of white birch, has been introduced as a selective TGR5 agonist with moderate potency to produce antihyperglycemic actions.7,8 Due to its potential as a TGR5 agonist, derivatives of betulinic acid have been widely studied.9,10 Recently, the antidiabetic action of betulinic acid has been reviewed in detail together with two other triterpenic acids, oleanolic acid and ursolic acid.11 However, the activation of TGR5 was not shown in that report, probably due to poor evidence. Therefore, data showing the direct effect of betulinic acid on TGR5 are likely to be helpful. In the present study, we used cells transfected with TGR5 to identify the effect of betulinic acid. Additionally, TGR5-silenced cells were applied to confirm the loss of betulinic acid-induced actions. Therefore, the TGR5-mediated actions of betulinic acid can be observed directly.
Statistical analysis
Results are presented as the mean ± standard error of the mean for the sample number (n) in each group. Data were analyzed by one-way analysis of variance followed by post hoc Tukey’s test, or by t-test, using SPSS for Windows, version 17 (SPSS Inc., Chicago, IL, USA). A two-tailed P≤0.05 was considered significant.
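As a minimal sketch of the one-way ANOVA with Tukey post hoc comparison described above, the snippet below uses Python (scipy and statsmodels) instead of SPSS; the measurements and group labels are invented placeholders.

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    control = [1.00, 1.05, 0.95, 1.02]
    ba_1um  = [1.30, 1.25, 1.40, 1.35]    # e.g. betulinic acid 1 uM (hypothetical)
    ba_10um = [1.80, 1.75, 1.90, 1.85]    # e.g. betulinic acid 10 uM (hypothetical)

    # Omnibus one-way ANOVA across the three groups
    print("ANOVA p =", f_oneway(control, ba_1um, ba_10um).pvalue)

    # Pairwise Tukey HSD post hoc comparisons
    values = np.concatenate([control, ba_1um, ba_10um])
    groups = (["control"] * 4) + (["BA 1 uM"] * 4) + (["BA 10 uM"] * 4)
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))

The statsmodels routine is used here only as a stand-in for the SPSS procedure named in the text.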
Results
Betulinic acid increases glucose uptake in TGR5-transfected CHO cells: The successful transfection of the TGR5 receptor into CHO-K1 cells was confirmed using Western blots, as shown in Figure 2A. Then, we investigated the functional response of TGR5-transfected CHO-K1 cells using glucose uptake as an indicator. Similar to the action of LCA, betulinic acid increases glucose uptake in a dose-dependent manner. However, glucose uptake was not induced by betulinic acid (Figure 2B) or LCA (Figure 2C) in CHO-K1 cells transfected with empty vector. We also evaluated the possibility of cytotoxicity induced by betulinic acid in CHO-K1 cells transfected with the TGR5 receptor or with empty vector. Betulinic acid at the highest dose (0.1 mM) did not influence the viability of TGR5-transfected CHO-K1 cells or of CHO-K1 cells transfected with empty vector only in the preliminary experiments. Similar results of the MTT assay were observed in LCA-treated TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only (data not shown). Betulinic acid increases signals in TGR5-transfected CHO-K1 cells: It has been established that cAMP is the main signal coupled to TGR5 receptor activation.15 Therefore, we determined the change in cAMP levels in CHO-K1 cells expressing TGR5. Betulinic acid induced a marked increase in cAMP levels in cells transfected with the TGR5 receptor, but not in cells transfected with empty vector (Figure 3A). The 50% effective dose value of betulinic acid was ~0.5 µM. The effect of betulinic acid was produced in a dose-dependent fashion over the same range as that used to increase the glucose uptake. Similarly, LCA (the well-known agonist of TGR5) induced a dose-dependent increase in cAMP levels in CHO-K1 cells transfected with the TGR5 receptor (Figure 3B). LCA is known to increase intracellular cAMP levels,3 and it was used as a positive control in this study. Additionally, the increase in glucose uptake produced by betulinic acid was markedly reduced by blockade of protein kinase A using a protein kinase A inhibitor (PKI) in a dose-related manner (Figure 4A). Similar results were observed in LCA-treated TGR5-transfected CHO-K1 cells, as shown in Figure 4B. Betulinic acid increases GLP-1 secretion in NCI-H716 cells: Glucose from the digestion of carbohydrates in foods is the well-known stimulator of GLP-1 secretion in the intestine.16 NCI-H716 cells are widely used in the study of GLP-1 secretion,17 and the presence of the TGR5 receptor in NCI-H716 cells has been characterized.10 Therefore, we used NCI-H716 cells to investigate the effect of betulinic acid on GLP-1 secretion in vitro. Additionally, as shown in Figure 5A, we applied siRNA to silence the TGR5 receptor in NCI-H716 cells. Betulinic acid increases GLP-1 secretion in a dose-dependent manner after incubation with NCI-H716 cells (Figure 5B). Similar changes were also observed in LCA-treated NCI-H716 cells (Figure 5C). The effectiveness of betulinic acid was markedly attenuated once the TGR5 receptor was silenced by siRNA (Figure 5B). Secretion of GLP-1 induced by betulinic acid was reduced in the absence of the TGR5 receptor in NCI-H716 cells. Similar results were also observed in LCA-treated NCI-H716 cells (Figure 5C). This shows that the TGR5 receptor mediates the secretion of GLP-1. Betulinic acid increases calcium levels in NCI-H716 cells: We used a nonradioactive assay to determine intracellular calcium levels in NCI-H716 cells because GLP-1 secretion from cells is associated with calcium levels.18 Betulinic acid also dose-dependently increases calcium levels in NCI-H716 cells (Figure 6A). Additionally, the same changes were obtained in LCA-treated NCI-H716 cells. However, the effect of betulinic acid was markedly reduced after the TGR5 receptor was silenced by siRNA (Figure 6A). The calcium increase produced by betulinic acid was attenuated after silencing of the TGR5 receptor in NCI-H716 cells. Similar results were also produced in LCA-treated NCI-H716 cells (Figure 6B).
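The ~0.5 µM 50% effective dose reported above is the kind of value typically obtained by fitting a sigmoidal (Hill) curve to the concentration-response data. The sketch below shows such a fit with scipy; the cAMP responses are invented placeholders, not the study's data.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, bottom, top, ec50, n):
        """Four-parameter Hill (log-logistic) concentration-response model."""
        return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

    conc_um = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])   # betulinic acid, uM
    camp    = np.array([1.1, 1.4, 2.0, 3.1, 4.2, 4.8, 5.0])      # fold over basal (invented)

    params, _ = curve_fit(hill, conc_um, camp, p0=[1.0, 5.0, 0.5, 1.0], maxfev=10000)
    print(f"estimated EC50 ~ {params[2]:.2f} uM")

How the authors actually derived the ED50 is not stated, so this is only an illustration of the standard approach.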
Conclusion
In the present study, we demonstrated that betulinic acid is an agonist of human TGR5 receptor. Betulinic acid increases glucose uptake and stimulates GLP-1 secretion in cells via TGR5 receptor activation. Therefore, betulinic acid is a potential agent for activating TGR5 receptor, which has multiple actions in vivo.
[ "Cell cultures", "Transfection of TGR5 in CHO-K1 cells", "Uptake of 2-NBDG into CHO-K1 cells", "Small interfering RNA in NCI-H716 cells", "Western blotting analysis", "Measurement of intracellular calcium", "GLP-1 secretion from NCI-H716 cells", "Measurement of intracellular cAMP levels", "Betulinic acid increases glucose uptake in TGR5-transfected CHO cells", "Betulinic acid increases signals in TGR5-transfected CHO-K1 cells", "Betulinic acid increases GLP-1 secretion in NCI-H716 cells", "Betulinic acid increases calcium levels in NCI-H716 cells", "Conclusion" ]
[ "The commercial human NCI-H716 cells (BCRC No CCL-251) obtained from the Culture Collection and Research Center of the Food Industry Institute (Hsin-Chiu, Taiwan) were maintained in medium supplemented with 10% (v/v) fetal bovine serum and 2 mM l-glutamine at 5% CO2. Additionally, Chinese hamster ovary (CHO-K1) cells (BCRC No CCL-61) were maintained in growth medium composed of F-12K supplemented with 10% fetal bovine serum. Cells were subcultured once every 3 days by trypsinization (GIBCO-BRL Life Technologies, Gaithersburg, MD, USA), and the medium was changed every 2–3 days.", "As described in a previous report,12 CHO-K1 cells were transiently transfected with human G-protein-coupled BA receptor 1 and an expression vector (pCMV6-Entry; OriGene, Rockville, MD, USA). We used the TurboFect transfection reagent (Thermo Fisher Scientific, Pittsburgh, PA, USA) to transfect the cells, which were seeded at 5×104 cells per well in six-well plates. Twenty-four hours later, the success of transfection was confirmed using the Western blotting analysis. Then, the TGR5-CHO-K1 cells were incubated with betulinic acid or LCA at the indicated concentrations.", "Similar to our previous report,13 glucose uptake into cells was investigated using 2-(N-[7-nitrobenz-2-oxa-1,3-diazol-4-yl] amino)-2-deoxyglucose (2-NBDG) as a fluorescent indicator. Each assay used 1×106 cells/mL. The medium was removed, and the cells were washed gently with phosphate-buffered solution (PBS). Cells were detached from the dish by trypsinization, suspended in 0.2 mM 2-NBDG and the testing agent at the indicated concentration in PBS, and then incubated in a 37°C water bath for 60 minutes in the dark. The cells were centrifuged (4°C, 5,000× g, 10 minutes), and the supernatant was discarded. The pellet was washed three times with cold PBS and cooled on ice. The pellet was suspended in 1 mL of PBS. The fluorescence intensity in cell suspension was evaluated using a fluorescence spectrofluorometer (Hitachi F-2000, Tokyo, Japan), with excitation and emission wavelengths of 488 and 520 nm, respectively. The intensity of fluorescence showed the uptake of 2-NBDG in the cells that were incubated with betulinic acid or LCA at the indicated concentrations for suitable time.", "Based on a previous method,12 we purchased a validated small interference RNA (siRNA) targeted against TGR5 from a commercial source (Dharmacon RNA Technology, Lafayette, CO, USA). The validated siRNAs were as follows: ON-TARGETplus SMARTpool siTAS1R3 (human NM_152228 sequence), ON-TARGETplus SMARTpool siGNAT3 (human NM_001102386 sequence), and ON-TARGETplus SMARTpool siGPBAR1 (TGR5) (human NM_170699 sequence). ON-TARGETplus SMARTpool non-targeting siRNA pool was used as a negative control to distinguish sequence-specific silencing from nonspecific effects. TurboFect transfection reagent (Thermo Fisher Scientific) was used to transfer siRNA. Success of silencing was evaluated using the Western blotting analysis after the transfection. The NCI-H716 cells were transfected with siRNA and differentiated for another 24 hours before the assay.", "Cells were harvested 24 hours after transfection and were lysed with ice-cold lysis buffer containing 300 mmol/L NaCl, 20 mmol/L Tris-HCl (pH 7.8), 2 mmol/L ethylenediaminetetraacetic acid, 2 mmol/L dithiothreitol, 2% Nonidet P-40, 0.2% sodium lauryl sulfate, and a cocktail of protease inhibitors (Sigma-Aldrich). Then, 30 μg of cell lysate was separated by 10% sodium dodecyl sulfate–polyacryl-amide gel electrophoresis. 
The blots were then transferred to polyvinylidene difluoride membranes (Millipore, Billerica, MA, USA). After blocking with 10% skim milk for 1 hour, the blots were developed using a primary antibody against TGR5 (Abcam, Rockville, MD, USA). The blots were subsequently hybridized using horseradish peroxidase-conjugated goat antirabbit or antimouse immunoglobulin G (Calbiochem, San Diego, CA, USA) and developed using a chemiluminescence kit (PerkinElmer, Waltham, MA, USA). The optical densities of the bands for TGR5 (32 kDa) and β-actin (43 kDa) were determined using GEL-PRO ANALYSER software 4.0 (Media Cybernetics, Silver Spring, MD, USA), and quantified as the ratio to β-actin.", "Changes in intracellular calcium concentrations were detected using the fluorescent probe fura-2. In brief, NCI-H716 cells were placed in a buffered physiological saline solution (PSS), to which 5 mmol/L fura-2 was added. Cells were then incubated for 1 hour in humidified 5% CO2 and 95% air at 37°C. Then, cells were washed and incubated for an additional 30 minutes in PSS. The cells were inserted into a preheated (37°C) cuvette containing 2 mL PSS, and the testing agent at the indicated concentration for the mentioned time. In addition, in some experiments, TGR5-deleted NCI-H716 cells were incubated with the testing agent in the same manner. Fluorescence was recorded continuously using a fluorescence spectrofluorometer (F-2000; Hitachi, Tokyo, Japan). Values of [Ca2+]i were determined, as described in our previous report.14 Background autofluorescence was measured in unloaded cells and was subtracted from all measurements.", "After being cultured for 48 hours, NCI-H716 cells (5×105 cells per well) were placed in buffered PSS containing betulinic acid or positive control (LCA) at the indicated concentration to incubate under 37°C for 1 hour. The supernatants were collected and immediately assayed using a GLP-1 active enzyme-linked immunosorbent assay kit (EZGLP1T-36K, EMD Millipore Co., Billerica, MA, USA). Experiments were performed in duplicate from the indicated samples.", "The TGR5-CHO-K1 cells were plated at 5×104 cells/well in 96-well plates and incubated with betulinic acid or LCA at the indicated concentrations for 72 hours before cAMP measurements. Intracellular cAMP accumulation was measured using a cAMP enzyme-linked immunosorbent assay kit (ADI-900-066, Enzo Life Sciences, Farmingdale, NY, USA). Experiments were performed in duplicate from the indicated samples.", "The successful transfection of TGR receptor into CHO-K1 cells was confirmed using Western blots, as shown in Figure 2A. Then, we investigated the functional response of TGR5-transfected CHO-K1 cells using glucose uptake as an indicator. Similar to the action of LCA, betulinic acid increases glucose uptake in a dose-dependent manner. However, glucose uptake was not induced by betulinic acid (Figure 2B) or LCA (Figure 2C) in CHO-K1 cells transfected with empty vector. We also evaluated the possibility of cytotoxicity induced by betulinic acid in CHO-K1 cells transfected with TGR5 receptor or with empty vector. Betulinic acid at the highest dose (0.1 mM) did not influence the viability of TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only in the preliminary experiments. 
Similar results of the MTT assay were observed in LCA-treated TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only (data not shown).", "It has been established that cAMP is the main signal coupled to TGR5 receptor activation.15 Therefore, we determined the change of cAMP levels in CHO-K1 cells-expressed TGR5. Betulinic acid induced a marked increase in cAMP levels in cells transfected with TGR5 receptor, but not in cells transfected with empty vector (Figure 3A). The 50% effective dose value of betulinic acid was ~0.5 µM. The effect of betulinic acid was produced in a dose-dependent fashion over the same range as that used to increase the glucose uptake. Similarly, LCA (the well-known agonist of TGR5) induced a dose-dependent increase in cAMP levels in CHO-K1 cells transfected with TGR5 receptor (Figure 3B). LCA is known to increase intracellular cAMP levels,3 and it was used as a positive control in this study.\nAdditionally, glucose uptake increased by betulinic acid was markedly reduced by blockade of protein kinase A using protein kinase A inhibitor (PKI) in a dose-related manner (Figure 4A). Similar results were observed in LCA-treated TGR5-transfected CHO-K1 cells as shown in Figure 4B.", "Glucose from the digestion of carbohydrates in foods is the well-known stimulator of GLP-1 secretion in the intestine.16 NCI-H716 cells are widely used in the study of GLP-1 secretion,17 and the presence of TGR5 receptor in NCI-H716 cells has been characterized.10 Therefore, we used NCI-H716 cells to investigate the effect of betulinic acid on GLP-1 secretion in vitro. Additionally, as shown in Figure 5A, we applied siRNA to silence TGR5 receptor in NCI-H716 cells.\nBetulinic acid increases GLP-1 secretion in a dose-dependent manner after incubation with NCI-H716 cells (Figure 5B). Similar changes were also observed in LCA-treated NCI-H716 cells (Figure 5C).\nThe effectiveness of betulinic acid was markedly attenuated once TGR5 receptor was silenced by siRNA (Figure 5B). Secretion of GLP-1 by betulinic acid was reduced with the absence of TGR5 receptor in NCI-H716 cells. Similar results were also observed in LCA-treated NCI-H716 cells (Figure 5C). This shows that TGR5 receptor mediates the secretion of GLP-1.", "We used a nonradioactive assay to determine intracellular calcium levels in NCI-H716 cells because GLP-1 secretion from cells is associated with calcium levels.18\nBetulinic acid also dose-dependently increases calcium levels in NCI-H716 cells (Figure 6A). Additionally, the same changes were obtained in LCA-treated NCI-H716 cells. However, the effect of betulinic acid was markedly reduced after TGR5 receptor was silenced by siRNA (Figure 6A). Calcium level increased by betulinic acid was attenuated after the silencing of TGR5 receptor in NCI-H716 cells. Similar results were also produced in LCA-treated NCI-H716 cells (Figure 6B).", "In the present study, we demonstrated that betulinic acid is an agonist of human TGR5 receptor. Betulinic acid increases glucose uptake and stimulates GLP-1 secretion in cells via TGR5 receptor activation. Therefore, betulinic acid is a potential agent for activating TGR5 receptor, which has multiple actions in vivo." ]
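The fura-2 protocol in the methods excerpts above converts a 340/380 nm fluorescence ratio into [Ca2+]i "as described in our previous report [14]". Ratiometric fura-2 measurements are conventionally converted with the Grynkiewicz equation; whether the cited protocol uses exactly this form is an assumption, and the calibration constants below are invented examples (224 nM is a commonly cited fura-2 Kd).

    def calcium_from_fura2(R, Rmin, Rmax, sf2_sb2, Kd_nM=224.0):
        """[Ca2+]i (nM) = Kd * (R - Rmin) / (Rmax - R) * (Sf2/Sb2), the Grynkiewicz equation."""
        return Kd_nM * (R - Rmin) / (Rmax - R) * sf2_sb2

    # Hypothetical measured ratio and calibration values
    print(calcium_from_fura2(R=1.2, Rmin=0.4, Rmax=6.0, sf2_sb2=4.5))  # ~168 nM

Rmin and Rmax come from zero-calcium and calcium-saturated calibrations, and Sf2/Sb2 is the 380 nm fluorescence ratio of free to bound indicator.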
[ null, null, null, null, "methods", null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Materials", "Cell cultures", "Transfection of TGR5 in CHO-K1 cells", "Uptake of 2-NBDG into CHO-K1 cells", "Small interfering RNA in NCI-H716 cells", "Western blotting analysis", "Measurement of intracellular calcium", "GLP-1 secretion from NCI-H716 cells", "Measurement of intracellular cAMP levels", "Statistical analysis", "Results", "Betulinic acid increases glucose uptake in TGR5-transfected CHO cells", "Betulinic acid increases signals in TGR5-transfected CHO-K1 cells", "Betulinic acid increases GLP-1 secretion in NCI-H716 cells", "Betulinic acid increases calcium levels in NCI-H716 cells", "Discussion", "Conclusion" ]
[ "It has been established that G-protein-coupled bile acid (BA) receptor 1 (also known as TGR5) agonists show potential for treating diabetic disorders.1,2 Essentially, BA induces TGR5 receptor internalization, activation of extracellular signal-regulated kinase, and intracellular cyclic adenosine monophosphate (cAMP) production in cells.1,2 In animal models, treatment with TGR5 agonist(s) produces glucagon-like peptide (GLP-1) secretion, which in turn induces insulin secretion to lower blood glucose and/or increase the basal energy expenditure.3 Therefore, TGR5 has been recognized as a target for developing new antidiabetic agents.4 In the gut, BAs are also modified by the gut flora to produce secondary BAs, deoxycholic acid, and lithocholic acid (LCA). BAs can be divided into hydrophobic and hydrophilic subgroups. LCA is hydrophobic and could activate TGR55 but was highly toxic to liver.6\nBetulinic acid, a triterpenoid, as shown in Figure 1, that is present in the leaves of white birch, has been introduced as a selective TGR5 agonist with moderate potency to produce antihyperglycemic actions.7,8 Due to its potential as a TGR5 agonist, derivatives of betulinic acid have widely been studied.9,10 Recently, the antidiabetic action of betulinic acid has been reviewed with two other triterpenic acids, oleanolic acid, and ursolic acid, in detail.11 However, the activation of TGR5 was not shown in that report, probably due to poor evidence. Therefore, data showing the direct effect of betulinic acid on TGR5 are likely to be helpful.\nIn the present study, we used cells transfected with TGR5 to identify the effect of betulinic acid. Additionally, TGR5-silenced cells were applied to confirm the deletion of betulinic acid-induced actions. Therefore, the TGR5-mediated actions of betulinic acid can be observed directly.", " Materials Betulinic acid (Tokyo Chemical Institute, Tokyo, Japan) and LCA (Sigma-Aldrich Chemical Co., St Louis, MO, USA) were dissolved in dimethyl sulfoxide. Additionally, myristoylated PKI 14-22 amide (Tocris, Avonmouth, Bristol, UK), an inhibitor of protein kinase A, was dissolved in an aqueous solution. Protein was assayed using a bicinchoninic acid assay kit (Thermo Sci., Rockford, IL, USA).\nBetulinic acid (Tokyo Chemical Institute, Tokyo, Japan) and LCA (Sigma-Aldrich Chemical Co., St Louis, MO, USA) were dissolved in dimethyl sulfoxide. Additionally, myristoylated PKI 14-22 amide (Tocris, Avonmouth, Bristol, UK), an inhibitor of protein kinase A, was dissolved in an aqueous solution. Protein was assayed using a bicinchoninic acid assay kit (Thermo Sci., Rockford, IL, USA).\n Cell cultures The commercial human NCI-H716 cells (BCRC No CCL-251) obtained from the Culture Collection and Research Center of the Food Industry Institute (Hsin-Chiu, Taiwan) were maintained in medium supplemented with 10% (v/v) fetal bovine serum and 2 mM l-glutamine at 5% CO2. Additionally, Chinese hamster ovary (CHO-K1) cells (BCRC No CCL-61) were maintained in growth medium composed of F-12K supplemented with 10% fetal bovine serum. Cells were subcultured once every 3 days by trypsinization (GIBCO-BRL Life Technologies, Gaithersburg, MD, USA), and the medium was changed every 2–3 days.\nThe commercial human NCI-H716 cells (BCRC No CCL-251) obtained from the Culture Collection and Research Center of the Food Industry Institute (Hsin-Chiu, Taiwan) were maintained in medium supplemented with 10% (v/v) fetal bovine serum and 2 mM l-glutamine at 5% CO2. 
Additionally, Chinese hamster ovary (CHO-K1) cells (BCRC No CCL-61) were maintained in growth medium composed of F-12K supplemented with 10% fetal bovine serum. Cells were subcultured once every 3 days by trypsinization (GIBCO-BRL Life Technologies, Gaithersburg, MD, USA), and the medium was changed every 2–3 days.\n Transfection of TGR5 in CHO-K1 cells As described in a previous report,12 CHO-K1 cells were transiently transfected with human G-protein-coupled BA receptor 1 and an expression vector (pCMV6-Entry; OriGene, Rockville, MD, USA). We used the TurboFect transfection reagent (Thermo Fisher Scientific, Pittsburgh, PA, USA) to transfect the cells, which were seeded at 5×104 cells per well in six-well plates. Twenty-four hours later, the success of transfection was confirmed using the Western blotting analysis. Then, the TGR5-CHO-K1 cells were incubated with betulinic acid or LCA at the indicated concentrations.\nAs described in a previous report,12 CHO-K1 cells were transiently transfected with human G-protein-coupled BA receptor 1 and an expression vector (pCMV6-Entry; OriGene, Rockville, MD, USA). We used the TurboFect transfection reagent (Thermo Fisher Scientific, Pittsburgh, PA, USA) to transfect the cells, which were seeded at 5×104 cells per well in six-well plates. Twenty-four hours later, the success of transfection was confirmed using the Western blotting analysis. Then, the TGR5-CHO-K1 cells were incubated with betulinic acid or LCA at the indicated concentrations.\n Uptake of 2-NBDG into CHO-K1 cells Similar to our previous report,13 glucose uptake into cells was investigated using 2-(N-[7-nitrobenz-2-oxa-1,3-diazol-4-yl] amino)-2-deoxyglucose (2-NBDG) as a fluorescent indicator. Each assay used 1×106 cells/mL. The medium was removed, and the cells were washed gently with phosphate-buffered solution (PBS). Cells were detached from the dish by trypsinization, suspended in 0.2 mM 2-NBDG and the testing agent at the indicated concentration in PBS, and then incubated in a 37°C water bath for 60 minutes in the dark. The cells were centrifuged (4°C, 5,000× g, 10 minutes), and the supernatant was discarded. The pellet was washed three times with cold PBS and cooled on ice. The pellet was suspended in 1 mL of PBS. The fluorescence intensity in cell suspension was evaluated using a fluorescence spectrofluorometer (Hitachi F-2000, Tokyo, Japan), with excitation and emission wavelengths of 488 and 520 nm, respectively. The intensity of fluorescence showed the uptake of 2-NBDG in the cells that were incubated with betulinic acid or LCA at the indicated concentrations for suitable time.\nSimilar to our previous report,13 glucose uptake into cells was investigated using 2-(N-[7-nitrobenz-2-oxa-1,3-diazol-4-yl] amino)-2-deoxyglucose (2-NBDG) as a fluorescent indicator. Each assay used 1×106 cells/mL. The medium was removed, and the cells were washed gently with phosphate-buffered solution (PBS). Cells were detached from the dish by trypsinization, suspended in 0.2 mM 2-NBDG and the testing agent at the indicated concentration in PBS, and then incubated in a 37°C water bath for 60 minutes in the dark. The cells were centrifuged (4°C, 5,000× g, 10 minutes), and the supernatant was discarded. The pellet was washed three times with cold PBS and cooled on ice. The pellet was suspended in 1 mL of PBS. 
The fluorescence intensity in cell suspension was evaluated using a fluorescence spectrofluorometer (Hitachi F-2000, Tokyo, Japan), with excitation and emission wavelengths of 488 and 520 nm, respectively. The intensity of fluorescence showed the uptake of 2-NBDG in the cells that were incubated with betulinic acid or LCA at the indicated concentrations for suitable time.\n Small interfering RNA in NCI-H716 cells Based on a previous method,12 we purchased a validated small interference RNA (siRNA) targeted against TGR5 from a commercial source (Dharmacon RNA Technology, Lafayette, CO, USA). The validated siRNAs were as follows: ON-TARGETplus SMARTpool siTAS1R3 (human NM_152228 sequence), ON-TARGETplus SMARTpool siGNAT3 (human NM_001102386 sequence), and ON-TARGETplus SMARTpool siGPBAR1 (TGR5) (human NM_170699 sequence). ON-TARGETplus SMARTpool non-targeting siRNA pool was used as a negative control to distinguish sequence-specific silencing from nonspecific effects. TurboFect transfection reagent (Thermo Fisher Scientific) was used to transfer siRNA. Success of silencing was evaluated using the Western blotting analysis after the transfection. The NCI-H716 cells were transfected with siRNA and differentiated for another 24 hours before the assay.\nBased on a previous method,12 we purchased a validated small interference RNA (siRNA) targeted against TGR5 from a commercial source (Dharmacon RNA Technology, Lafayette, CO, USA). The validated siRNAs were as follows: ON-TARGETplus SMARTpool siTAS1R3 (human NM_152228 sequence), ON-TARGETplus SMARTpool siGNAT3 (human NM_001102386 sequence), and ON-TARGETplus SMARTpool siGPBAR1 (TGR5) (human NM_170699 sequence). ON-TARGETplus SMARTpool non-targeting siRNA pool was used as a negative control to distinguish sequence-specific silencing from nonspecific effects. TurboFect transfection reagent (Thermo Fisher Scientific) was used to transfer siRNA. Success of silencing was evaluated using the Western blotting analysis after the transfection. The NCI-H716 cells were transfected with siRNA and differentiated for another 24 hours before the assay.\n Western blotting analysis Cells were harvested 24 hours after transfection and were lysed with ice-cold lysis buffer containing 300 mmol/L NaCl, 20 mmol/L Tris-HCl (pH 7.8), 2 mmol/L ethylenediaminetetraacetic acid, 2 mmol/L dithiothreitol, 2% Nonidet P-40, 0.2% sodium lauryl sulfate, and a cocktail of protease inhibitors (Sigma-Aldrich). Then, 30 μg of cell lysate was separated by 10% sodium dodecyl sulfate–polyacryl-amide gel electrophoresis. The blots were then transferred to polyvinylidene difluoride membranes (Millipore, Billerica, MA, USA). After blocking with 10% skim milk for 1 hour, the blots were developed using a primary antibody against TGR5 (Abcam, Rockville, MD, USA). The blots were subsequently hybridized using horseradish peroxidase-conjugated goat antirabbit or antimouse immunoglobulin G (Calbiochem, San Diego, CA, USA) and developed using a chemiluminescence kit (PerkinElmer, Waltham, MA, USA). 
The optical densities of the bands for TGR5 (32 kDa) and β-actin (43 kDa) were determined using GEL-PRO ANALYSER software 4.0 (Media Cybernetics, Silver Spring, MD, USA), and quantified as the ratio to β-actin.\nCells were harvested 24 hours after transfection and were lysed with ice-cold lysis buffer containing 300 mmol/L NaCl, 20 mmol/L Tris-HCl (pH 7.8), 2 mmol/L ethylenediaminetetraacetic acid, 2 mmol/L dithiothreitol, 2% Nonidet P-40, 0.2% sodium lauryl sulfate, and a cocktail of protease inhibitors (Sigma-Aldrich). Then, 30 μg of cell lysate was separated by 10% sodium dodecyl sulfate–polyacryl-amide gel electrophoresis. The blots were then transferred to polyvinylidene difluoride membranes (Millipore, Billerica, MA, USA). After blocking with 10% skim milk for 1 hour, the blots were developed using a primary antibody against TGR5 (Abcam, Rockville, MD, USA). The blots were subsequently hybridized using horseradish peroxidase-conjugated goat antirabbit or antimouse immunoglobulin G (Calbiochem, San Diego, CA, USA) and developed using a chemiluminescence kit (PerkinElmer, Waltham, MA, USA). The optical densities of the bands for TGR5 (32 kDa) and β-actin (43 kDa) were determined using GEL-PRO ANALYSER software 4.0 (Media Cybernetics, Silver Spring, MD, USA), and quantified as the ratio to β-actin.\n Measurement of intracellular calcium Changes in intracellular calcium concentrations were detected using the fluorescent probe fura-2. In brief, NCI-H716 cells were placed in a buffered physiological saline solution (PSS), to which 5 mmol/L fura-2 was added. Cells were then incubated for 1 hour in humidified 5% CO2 and 95% air at 37°C. Then, cells were washed and incubated for an additional 30 minutes in PSS. The cells were inserted into a preheated (37°C) cuvette containing 2 mL PSS, and the testing agent at the indicated concentration for the mentioned time. In addition, in some experiments, TGR5-deleted NCI-H716 cells were incubated with the testing agent in the same manner. Fluorescence was recorded continuously using a fluorescence spectrofluorometer (F-2000; Hitachi, Tokyo, Japan). Values of [Ca2+]i were determined, as described in our previous report.14 Background autofluorescence was measured in unloaded cells and was subtracted from all measurements.\nChanges in intracellular calcium concentrations were detected using the fluorescent probe fura-2. In brief, NCI-H716 cells were placed in a buffered physiological saline solution (PSS), to which 5 mmol/L fura-2 was added. Cells were then incubated for 1 hour in humidified 5% CO2 and 95% air at 37°C. Then, cells were washed and incubated for an additional 30 minutes in PSS. The cells were inserted into a preheated (37°C) cuvette containing 2 mL PSS, and the testing agent at the indicated concentration for the mentioned time. In addition, in some experiments, TGR5-deleted NCI-H716 cells were incubated with the testing agent in the same manner. Fluorescence was recorded continuously using a fluorescence spectrofluorometer (F-2000; Hitachi, Tokyo, Japan). Values of [Ca2+]i were determined, as described in our previous report.14 Background autofluorescence was measured in unloaded cells and was subtracted from all measurements.\n GLP-1 secretion from NCI-H716 cells After being cultured for 48 hours, NCI-H716 cells (5×105 cells per well) were placed in buffered PSS containing betulinic acid or positive control (LCA) at the indicated concentration to incubate under 37°C for 1 hour. 
The supernatants were collected and immediately assayed using a GLP-1 active enzyme-linked immunosorbent assay kit (EZGLP1T-36K, EMD Millipore Co., Billerica, MA, USA). Experiments were performed in duplicate from the indicated samples.\nAfter being cultured for 48 hours, NCI-H716 cells (5×105 cells per well) were placed in buffered PSS containing betulinic acid or positive control (LCA) at the indicated concentration to incubate under 37°C for 1 hour. The supernatants were collected and immediately assayed using a GLP-1 active enzyme-linked immunosorbent assay kit (EZGLP1T-36K, EMD Millipore Co., Billerica, MA, USA). Experiments were performed in duplicate from the indicated samples.\n Measurement of intracellular cAMP levels The TGR5-CHO-K1 cells were plated at 5×104 cells/well in 96-well plates and incubated with betulinic acid or LCA at the indicated concentrations for 72 hours before cAMP measurements. Intracellular cAMP accumulation was measured using a cAMP enzyme-linked immunosorbent assay kit (ADI-900-066, Enzo Life Sciences, Farmingdale, NY, USA). Experiments were performed in duplicate from the indicated samples.\nThe TGR5-CHO-K1 cells were plated at 5×104 cells/well in 96-well plates and incubated with betulinic acid or LCA at the indicated concentrations for 72 hours before cAMP measurements. Intracellular cAMP accumulation was measured using a cAMP enzyme-linked immunosorbent assay kit (ADI-900-066, Enzo Life Sciences, Farmingdale, NY, USA). Experiments were performed in duplicate from the indicated samples.\n Statistical analysis Results are presented as the mean±standard error of the mean from the sample number(n) of each group. One-way analysis of variance was followed by post hoc Tukey’s test and t-test using SPSS for Windows, version 17 (SPSS Inc., Chicago, IL, USA). Two-tailed P≤0.05 was considered significant.\nResults are presented as the mean±standard error of the mean from the sample number(n) of each group. One-way analysis of variance was followed by post hoc Tukey’s test and t-test using SPSS for Windows, version 17 (SPSS Inc., Chicago, IL, USA). Two-tailed P≤0.05 was considered significant.", "Betulinic acid (Tokyo Chemical Institute, Tokyo, Japan) and LCA (Sigma-Aldrich Chemical Co., St Louis, MO, USA) were dissolved in dimethyl sulfoxide. Additionally, myristoylated PKI 14-22 amide (Tocris, Avonmouth, Bristol, UK), an inhibitor of protein kinase A, was dissolved in an aqueous solution. Protein was assayed using a bicinchoninic acid assay kit (Thermo Sci., Rockford, IL, USA).", "The commercial human NCI-H716 cells (BCRC No CCL-251) obtained from the Culture Collection and Research Center of the Food Industry Institute (Hsin-Chiu, Taiwan) were maintained in medium supplemented with 10% (v/v) fetal bovine serum and 2 mM l-glutamine at 5% CO2. Additionally, Chinese hamster ovary (CHO-K1) cells (BCRC No CCL-61) were maintained in growth medium composed of F-12K supplemented with 10% fetal bovine serum. Cells were subcultured once every 3 days by trypsinization (GIBCO-BRL Life Technologies, Gaithersburg, MD, USA), and the medium was changed every 2–3 days.", "As described in a previous report,12 CHO-K1 cells were transiently transfected with human G-protein-coupled BA receptor 1 and an expression vector (pCMV6-Entry; OriGene, Rockville, MD, USA). We used the TurboFect transfection reagent (Thermo Fisher Scientific, Pittsburgh, PA, USA) to transfect the cells, which were seeded at 5×104 cells per well in six-well plates. 
Twenty-four hours later, the success of transfection was confirmed using the Western blotting analysis. Then, the TGR5-CHO-K1 cells were incubated with betulinic acid or LCA at the indicated concentrations.", "Similar to our previous report,13 glucose uptake into cells was investigated using 2-(N-[7-nitrobenz-2-oxa-1,3-diazol-4-yl] amino)-2-deoxyglucose (2-NBDG) as a fluorescent indicator. Each assay used 1×106 cells/mL. The medium was removed, and the cells were washed gently with phosphate-buffered solution (PBS). Cells were detached from the dish by trypsinization, suspended in 0.2 mM 2-NBDG and the testing agent at the indicated concentration in PBS, and then incubated in a 37°C water bath for 60 minutes in the dark. The cells were centrifuged (4°C, 5,000× g, 10 minutes), and the supernatant was discarded. The pellet was washed three times with cold PBS and cooled on ice. The pellet was suspended in 1 mL of PBS. The fluorescence intensity in cell suspension was evaluated using a fluorescence spectrofluorometer (Hitachi F-2000, Tokyo, Japan), with excitation and emission wavelengths of 488 and 520 nm, respectively. The intensity of fluorescence showed the uptake of 2-NBDG in the cells that were incubated with betulinic acid or LCA at the indicated concentrations for suitable time.", "Based on a previous method,12 we purchased a validated small interference RNA (siRNA) targeted against TGR5 from a commercial source (Dharmacon RNA Technology, Lafayette, CO, USA). The validated siRNAs were as follows: ON-TARGETplus SMARTpool siTAS1R3 (human NM_152228 sequence), ON-TARGETplus SMARTpool siGNAT3 (human NM_001102386 sequence), and ON-TARGETplus SMARTpool siGPBAR1 (TGR5) (human NM_170699 sequence). ON-TARGETplus SMARTpool non-targeting siRNA pool was used as a negative control to distinguish sequence-specific silencing from nonspecific effects. TurboFect transfection reagent (Thermo Fisher Scientific) was used to transfer siRNA. Success of silencing was evaluated using the Western blotting analysis after the transfection. The NCI-H716 cells were transfected with siRNA and differentiated for another 24 hours before the assay.", "Cells were harvested 24 hours after transfection and were lysed with ice-cold lysis buffer containing 300 mmol/L NaCl, 20 mmol/L Tris-HCl (pH 7.8), 2 mmol/L ethylenediaminetetraacetic acid, 2 mmol/L dithiothreitol, 2% Nonidet P-40, 0.2% sodium lauryl sulfate, and a cocktail of protease inhibitors (Sigma-Aldrich). Then, 30 μg of cell lysate was separated by 10% sodium dodecyl sulfate–polyacryl-amide gel electrophoresis. The blots were then transferred to polyvinylidene difluoride membranes (Millipore, Billerica, MA, USA). After blocking with 10% skim milk for 1 hour, the blots were developed using a primary antibody against TGR5 (Abcam, Rockville, MD, USA). The blots were subsequently hybridized using horseradish peroxidase-conjugated goat antirabbit or antimouse immunoglobulin G (Calbiochem, San Diego, CA, USA) and developed using a chemiluminescence kit (PerkinElmer, Waltham, MA, USA). The optical densities of the bands for TGR5 (32 kDa) and β-actin (43 kDa) were determined using GEL-PRO ANALYSER software 4.0 (Media Cybernetics, Silver Spring, MD, USA), and quantified as the ratio to β-actin.", "Changes in intracellular calcium concentrations were detected using the fluorescent probe fura-2. In brief, NCI-H716 cells were placed in a buffered physiological saline solution (PSS), to which 5 mmol/L fura-2 was added. 
Cells were then incubated for 1 hour in humidified 5% CO2 and 95% air at 37°C. Then, cells were washed and incubated for an additional 30 minutes in PSS. The cells were inserted into a preheated (37°C) cuvette containing 2 mL PSS, and the testing agent at the indicated concentration for the mentioned time. In addition, in some experiments, TGR5-deleted NCI-H716 cells were incubated with the testing agent in the same manner. Fluorescence was recorded continuously using a fluorescence spectrofluorometer (F-2000; Hitachi, Tokyo, Japan). Values of [Ca2+]i were determined, as described in our previous report.14 Background autofluorescence was measured in unloaded cells and was subtracted from all measurements.", "After being cultured for 48 hours, NCI-H716 cells (5×105 cells per well) were placed in buffered PSS containing betulinic acid or positive control (LCA) at the indicated concentration to incubate under 37°C for 1 hour. The supernatants were collected and immediately assayed using a GLP-1 active enzyme-linked immunosorbent assay kit (EZGLP1T-36K, EMD Millipore Co., Billerica, MA, USA). Experiments were performed in duplicate from the indicated samples.", "The TGR5-CHO-K1 cells were plated at 5×104 cells/well in 96-well plates and incubated with betulinic acid or LCA at the indicated concentrations for 72 hours before cAMP measurements. Intracellular cAMP accumulation was measured using a cAMP enzyme-linked immunosorbent assay kit (ADI-900-066, Enzo Life Sciences, Farmingdale, NY, USA). Experiments were performed in duplicate from the indicated samples.", "Results are presented as the mean±standard error of the mean from the sample number(n) of each group. One-way analysis of variance was followed by post hoc Tukey’s test and t-test using SPSS for Windows, version 17 (SPSS Inc., Chicago, IL, USA). Two-tailed P≤0.05 was considered significant.", " Betulinic acid increases glucose uptake in TGR5-transfected CHO cells The successful transfection of TGR receptor into CHO-K1 cells was confirmed using Western blots, as shown in Figure 2A. Then, we investigated the functional response of TGR5-transfected CHO-K1 cells using glucose uptake as an indicator. Similar to the action of LCA, betulinic acid increases glucose uptake in a dose-dependent manner. However, glucose uptake was not induced by betulinic acid (Figure 2B) or LCA (Figure 2C) in CHO-K1 cells transfected with empty vector. We also evaluated the possibility of cytotoxicity induced by betulinic acid in CHO-K1 cells transfected with TGR5 receptor or with empty vector. Betulinic acid at the highest dose (0.1 mM) did not influence the viability of TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only in the preliminary experiments. Similar results of the MTT assay were observed in LCA-treated TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only (data not shown).\nThe successful transfection of TGR receptor into CHO-K1 cells was confirmed using Western blots, as shown in Figure 2A. Then, we investigated the functional response of TGR5-transfected CHO-K1 cells using glucose uptake as an indicator. Similar to the action of LCA, betulinic acid increases glucose uptake in a dose-dependent manner. However, glucose uptake was not induced by betulinic acid (Figure 2B) or LCA (Figure 2C) in CHO-K1 cells transfected with empty vector. We also evaluated the possibility of cytotoxicity induced by betulinic acid in CHO-K1 cells transfected with TGR5 receptor or with empty vector. 
Betulinic acid at the highest dose (0.1 mM) did not influence the viability of TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only in the preliminary experiments. Similar results of the MTT assay were observed in LCA-treated TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only (data not shown).\n Betulinic acid increases signals in TGR5-transfected CHO-K1 cells It has been established that cAMP is the main signal coupled to TGR5 receptor activation.15 Therefore, we determined the change of cAMP levels in CHO-K1 cells-expressed TGR5. Betulinic acid induced a marked increase in cAMP levels in cells transfected with TGR5 receptor, but not in cells transfected with empty vector (Figure 3A). The 50% effective dose value of betulinic acid was ~0.5 µM. The effect of betulinic acid was produced in a dose-dependent fashion over the same range as that used to increase the glucose uptake. Similarly, LCA (the well-known agonist of TGR5) induced a dose-dependent increase in cAMP levels in CHO-K1 cells transfected with TGR5 receptor (Figure 3B). LCA is known to increase intracellular cAMP levels,3 and it was used as a positive control in this study.\nAdditionally, glucose uptake increased by betulinic acid was markedly reduced by blockade of protein kinase A using protein kinase A inhibitor (PKI) in a dose-related manner (Figure 4A). Similar results were observed in LCA-treated TGR5-transfected CHO-K1 cells as shown in Figure 4B.\nIt has been established that cAMP is the main signal coupled to TGR5 receptor activation.15 Therefore, we determined the change of cAMP levels in CHO-K1 cells-expressed TGR5. Betulinic acid induced a marked increase in cAMP levels in cells transfected with TGR5 receptor, but not in cells transfected with empty vector (Figure 3A). The 50% effective dose value of betulinic acid was ~0.5 µM. The effect of betulinic acid was produced in a dose-dependent fashion over the same range as that used to increase the glucose uptake. Similarly, LCA (the well-known agonist of TGR5) induced a dose-dependent increase in cAMP levels in CHO-K1 cells transfected with TGR5 receptor (Figure 3B). LCA is known to increase intracellular cAMP levels,3 and it was used as a positive control in this study.\nAdditionally, glucose uptake increased by betulinic acid was markedly reduced by blockade of protein kinase A using protein kinase A inhibitor (PKI) in a dose-related manner (Figure 4A). Similar results were observed in LCA-treated TGR5-transfected CHO-K1 cells as shown in Figure 4B.\n Betulinic acid increases GLP-1 secretion in NCI-H716 cells Glucose from the digestion of carbohydrates in foods is the well-known stimulator of GLP-1 secretion in the intestine.16 NCI-H716 cells are widely used in the study of GLP-1 secretion,17 and the presence of TGR5 receptor in NCI-H716 cells has been characterized.10 Therefore, we used NCI-H716 cells to investigate the effect of betulinic acid on GLP-1 secretion in vitro. Additionally, as shown in Figure 5A, we applied siRNA to silence TGR5 receptor in NCI-H716 cells.\nBetulinic acid increases GLP-1 secretion in a dose-dependent manner after incubation with NCI-H716 cells (Figure 5B). Similar changes were also observed in LCA-treated NCI-H716 cells (Figure 5C).\nThe effectiveness of betulinic acid was markedly attenuated once TGR5 receptor was silenced by siRNA (Figure 5B). Secretion of GLP-1 by betulinic acid was reduced with the absence of TGR5 receptor in NCI-H716 cells. 
Similar results were also observed in LCA-treated NCI-H716 cells (Figure 5C). This shows that TGR5 receptor mediates the secretion of GLP-1.\nGlucose from the digestion of carbohydrates in foods is the well-known stimulator of GLP-1 secretion in the intestine.16 NCI-H716 cells are widely used in the study of GLP-1 secretion,17 and the presence of TGR5 receptor in NCI-H716 cells has been characterized.10 Therefore, we used NCI-H716 cells to investigate the effect of betulinic acid on GLP-1 secretion in vitro. Additionally, as shown in Figure 5A, we applied siRNA to silence TGR5 receptor in NCI-H716 cells.\nBetulinic acid increases GLP-1 secretion in a dose-dependent manner after incubation with NCI-H716 cells (Figure 5B). Similar changes were also observed in LCA-treated NCI-H716 cells (Figure 5C).\nThe effectiveness of betulinic acid was markedly attenuated once TGR5 receptor was silenced by siRNA (Figure 5B). Secretion of GLP-1 by betulinic acid was reduced with the absence of TGR5 receptor in NCI-H716 cells. Similar results were also observed in LCA-treated NCI-H716 cells (Figure 5C). This shows that TGR5 receptor mediates the secretion of GLP-1.\n Betulinic acid increases calcium levels in NCI-H716 cells We used a nonradioactive assay to determine intracellular calcium levels in NCI-H716 cells because GLP-1 secretion from cells is associated with calcium levels.18\nBetulinic acid also dose-dependently increases calcium levels in NCI-H716 cells (Figure 6A). Additionally, the same changes were obtained in LCA-treated NCI-H716 cells. However, the effect of betulinic acid was markedly reduced after TGR5 receptor was silenced by siRNA (Figure 6A). Calcium level increased by betulinic acid was attenuated after the silencing of TGR5 receptor in NCI-H716 cells. Similar results were also produced in LCA-treated NCI-H716 cells (Figure 6B).\nWe used a nonradioactive assay to determine intracellular calcium levels in NCI-H716 cells because GLP-1 secretion from cells is associated with calcium levels.18\nBetulinic acid also dose-dependently increases calcium levels in NCI-H716 cells (Figure 6A). Additionally, the same changes were obtained in LCA-treated NCI-H716 cells. However, the effect of betulinic acid was markedly reduced after TGR5 receptor was silenced by siRNA (Figure 6A). Calcium level increased by betulinic acid was attenuated after the silencing of TGR5 receptor in NCI-H716 cells. Similar results were also produced in LCA-treated NCI-H716 cells (Figure 6B).", "The successful transfection of TGR receptor into CHO-K1 cells was confirmed using Western blots, as shown in Figure 2A. Then, we investigated the functional response of TGR5-transfected CHO-K1 cells using glucose uptake as an indicator. Similar to the action of LCA, betulinic acid increases glucose uptake in a dose-dependent manner. However, glucose uptake was not induced by betulinic acid (Figure 2B) or LCA (Figure 2C) in CHO-K1 cells transfected with empty vector. We also evaluated the possibility of cytotoxicity induced by betulinic acid in CHO-K1 cells transfected with TGR5 receptor or with empty vector. Betulinic acid at the highest dose (0.1 mM) did not influence the viability of TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only in the preliminary experiments. 
Similar results of the MTT assay were observed in LCA-treated TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only (data not shown).", "It has been established that cAMP is the main signal coupled to TGR5 receptor activation.15 Therefore, we determined the change of cAMP levels in CHO-K1 cells-expressed TGR5. Betulinic acid induced a marked increase in cAMP levels in cells transfected with TGR5 receptor, but not in cells transfected with empty vector (Figure 3A). The 50% effective dose value of betulinic acid was ~0.5 µM. The effect of betulinic acid was produced in a dose-dependent fashion over the same range as that used to increase the glucose uptake. Similarly, LCA (the well-known agonist of TGR5) induced a dose-dependent increase in cAMP levels in CHO-K1 cells transfected with TGR5 receptor (Figure 3B). LCA is known to increase intracellular cAMP levels,3 and it was used as a positive control in this study.\nAdditionally, glucose uptake increased by betulinic acid was markedly reduced by blockade of protein kinase A using protein kinase A inhibitor (PKI) in a dose-related manner (Figure 4A). Similar results were observed in LCA-treated TGR5-transfected CHO-K1 cells as shown in Figure 4B.", "Glucose from the digestion of carbohydrates in foods is the well-known stimulator of GLP-1 secretion in the intestine.16 NCI-H716 cells are widely used in the study of GLP-1 secretion,17 and the presence of TGR5 receptor in NCI-H716 cells has been characterized.10 Therefore, we used NCI-H716 cells to investigate the effect of betulinic acid on GLP-1 secretion in vitro. Additionally, as shown in Figure 5A, we applied siRNA to silence TGR5 receptor in NCI-H716 cells.\nBetulinic acid increases GLP-1 secretion in a dose-dependent manner after incubation with NCI-H716 cells (Figure 5B). Similar changes were also observed in LCA-treated NCI-H716 cells (Figure 5C).\nThe effectiveness of betulinic acid was markedly attenuated once TGR5 receptor was silenced by siRNA (Figure 5B). Secretion of GLP-1 by betulinic acid was reduced with the absence of TGR5 receptor in NCI-H716 cells. Similar results were also observed in LCA-treated NCI-H716 cells (Figure 5C). This shows that TGR5 receptor mediates the secretion of GLP-1.", "We used a nonradioactive assay to determine intracellular calcium levels in NCI-H716 cells because GLP-1 secretion from cells is associated with calcium levels.18\nBetulinic acid also dose-dependently increases calcium levels in NCI-H716 cells (Figure 6A). Additionally, the same changes were obtained in LCA-treated NCI-H716 cells. However, the effect of betulinic acid was markedly reduced after TGR5 receptor was silenced by siRNA (Figure 6A). Calcium level increased by betulinic acid was attenuated after the silencing of TGR5 receptor in NCI-H716 cells. Similar results were also produced in LCA-treated NCI-H716 cells (Figure 6B).", "In this study, we demonstrated that the natural product betulinic acid is an agonist of TGR5 receptor. Using CHO-K1 cells transfected with human TGR5 receptor, we found for the first time that betulinic acid increases glucose uptake through activation of TGR5 receptor. A similar result was also produced using LCA, a well-known agonist of TGR5 receptor and the positive control in this study.\nIn CHO-K1 cells transfected with TGR5 receptor, betulinic acid increases glucose uptake in a dose-dependent manner. However, similar changes were not observed in CHO-K1 cells transfected with empty vector only. 
That the same results were observed in LCA-treated cells provides additional support for the mediation of betulinic acid-induced glucose uptake by TGR5 receptor. Moreover, betulinic acid dose-dependently increases the cAMP level in CHO-K1 cells transfected with TGR5 receptor. A previous report identified cAMP as the subcellular signal coupled to TGR5 receptor activation,6 further supporting the involvement of TGR5 receptor in the betulinic acid-induced action. Finally, we applied an inhibitor of protein kinase A to block the betulinic acid-induced glucose uptake in CHO-K1 cells transfected with TGR5 receptor. It is thus plausible that TGR5 receptors are activated by betulinic acid to enhance glucose uptake through a cAMP-dependent pathway. Notably, this view has not been reported previously.\nNCI-H716 cells have been introduced as a useful in vitro tool for studying GLP-1 secretion from the human intestine,17 in which GLP-1 secretion has been reported to be linked to increased levels of intracellular calcium.18 We found that betulinic acid dose-dependently increased GLP-1 secretion in parallel with the increase in calcium levels in NCI-H716 cells. Similar changes were induced by LCA, the well-known agonist of TGR5 receptor that was used as a positive control in this study. Interestingly, the actions of betulinic acid were markedly reduced once TGR5 receptor was silenced by siRNA in NCI-H716 cells. To produce stable silencing, the siRNA pool used in this study included three targets: TAS1R3 and α-gustducin in addition to TGR5 receptor.12 However, it has been demonstrated that neither siRNA targeting TAS1R3 nor siRNA targeting α-gustducin in NCI-H716 cells influenced the GLP-1 secretion that was induced by a TGR5 receptor agonist.12 TGR5 receptor is known to be a Gαs-coupled receptor.19 TGR5 receptor activation stimulates GLP-1 secretion via Gαs, which increases the intracellular calcium level in NCI-H716 cells.20 Our data are consistent with this view, supporting the idea that the action of betulinic acid in NCI-H716 cells to increase GLP-1 secretion occurs mainly through TGR5 receptor activation. Some reports have indicated that betulinic acid is able to induce TGR5 receptor activation.9,10 However, direct stimulation of TGR5 receptor by betulinic acid using a radioligand binding assay or similar approaches has still not been reported. Using an alternative method, the present study demonstrated for the first time that betulinic acid-induced action is mediated by TGR5 receptor activation. However, chemical ligands are known to produce nonspecific actions, particularly at other receptor sites such as the farnesoid X receptor. This is a limitation of the present report, and further studies are required to clarify the specificity of betulinic acid.\nGLP-1 plays an essential role in the maintenance of glucose homeostasis by its multiple actions, which include stimulation of glucose-dependent insulin secretion, proinsulin gene expression, and β-cell proliferative and antiapoptotic pathways.21 Moreover, GLP-1 regulates energy homeostasis via its neural actions, which include inhibiting glucagon release, gastric emptying, food intake, and appetite.22,23 Betulinic acid appears to regulate energy homeostasis via GLP-1 secretion. In addition, betulinic acid has been characterized as an agonist of the human TGR5 receptor, which is expressed in the intestine, brown adipose tissue, skeletal muscle, and selected regions of the central nervous system. 
TGR5 receptor activation is known to regulate glucose and/or energy homeostasis, immune function, and liver function.24 Therefore, betulinic acid can produce multiple actions in vivo.", "In the present study, we demonstrated that betulinic acid is an agonist of human TGR5 receptor. Betulinic acid increases glucose uptake and stimulates GLP-1 secretion in cells via TGR5 receptor activation. Therefore, betulinic acid is a potential agent for activating TGR5 receptor, which has multiple actions in vivo." ]
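The cAMP results above quote a 50% effective dose of roughly 0.5 µM for betulinic acid, but the fitting procedure behind that estimate is not described. As a hedged illustration, the sketch below fits a four-parameter logistic curve to made-up dose-response data with SciPy; the concentrations, responses, and starting guesses are placeholders, and this is one common approach rather than necessarily the one used by the authors.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    # Four-parameter logistic (Hill) dose-response curve.
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])   # µM, placeholder doses
resp = np.array([1.1, 1.3, 2.0, 3.4, 4.8, 5.6, 5.8])      # cAMP signal, placeholder

p0 = [resp.min(), resp.max(), 0.5, 1.0]                    # rough initial guesses
params, _ = curve_fit(four_pl, conc, resp, p0=p0, maxfev=10000)
bottom, top, ec50, hill = params
print(f"estimated EC50 ~ {ec50:.2f} µM, Hill slope ~ {hill:.2f}")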
[ "intro", "materials|methods", "materials", null, null, null, null, "methods", null, null, null, "methods", "results", null, null, null, null, "discussion", null ]
[ "CHO-K1 cell", "lithocholic acid", "NCI-H716 cell", "transfection", "siRNA" ]
Introduction: It has been established that G-protein-coupled bile acid (BA) receptor 1 (also known as TGR5) agonists show potential for treating diabetic disorders.1,2 Essentially, BA induces TGR5 receptor internalization, activation of extracellular signal-regulated kinase, and intracellular cyclic adenosine monophosphate (cAMP) production in cells.1,2 In animal models, treatment with TGR5 agonist(s) produces glucagon-like peptide (GLP-1) secretion, which in turn induces insulin secretion to lower blood glucose and/or increase the basal energy expenditure.3 Therefore, TGR5 has been recognized as a target for developing new antidiabetic agents.4 In the gut, BAs are also modified by the gut flora to produce secondary BAs, deoxycholic acid, and lithocholic acid (LCA). BAs can be divided into hydrophobic and hydrophilic subgroups. LCA is hydrophobic and could activate TGR55 but was highly toxic to liver.6 Betulinic acid, a triterpenoid, as shown in Figure 1, that is present in the leaves of white birch, has been introduced as a selective TGR5 agonist with moderate potency to produce antihyperglycemic actions.7,8 Due to its potential as a TGR5 agonist, derivatives of betulinic acid have widely been studied.9,10 Recently, the antidiabetic action of betulinic acid has been reviewed with two other triterpenic acids, oleanolic acid, and ursolic acid, in detail.11 However, the activation of TGR5 was not shown in that report, probably due to poor evidence. Therefore, data showing the direct effect of betulinic acid on TGR5 are likely to be helpful. In the present study, we used cells transfected with TGR5 to identify the effect of betulinic acid. Additionally, TGR5-silenced cells were applied to confirm the deletion of betulinic acid-induced actions. Therefore, the TGR5-mediated actions of betulinic acid can be observed directly. Materials and methods: Materials Betulinic acid (Tokyo Chemical Institute, Tokyo, Japan) and LCA (Sigma-Aldrich Chemical Co., St Louis, MO, USA) were dissolved in dimethyl sulfoxide. Additionally, myristoylated PKI 14-22 amide (Tocris, Avonmouth, Bristol, UK), an inhibitor of protein kinase A, was dissolved in an aqueous solution. Protein was assayed using a bicinchoninic acid assay kit (Thermo Sci., Rockford, IL, USA). Betulinic acid (Tokyo Chemical Institute, Tokyo, Japan) and LCA (Sigma-Aldrich Chemical Co., St Louis, MO, USA) were dissolved in dimethyl sulfoxide. Additionally, myristoylated PKI 14-22 amide (Tocris, Avonmouth, Bristol, UK), an inhibitor of protein kinase A, was dissolved in an aqueous solution. Protein was assayed using a bicinchoninic acid assay kit (Thermo Sci., Rockford, IL, USA). Cell cultures The commercial human NCI-H716 cells (BCRC No CCL-251) obtained from the Culture Collection and Research Center of the Food Industry Institute (Hsin-Chiu, Taiwan) were maintained in medium supplemented with 10% (v/v) fetal bovine serum and 2 mM l-glutamine at 5% CO2. Additionally, Chinese hamster ovary (CHO-K1) cells (BCRC No CCL-61) were maintained in growth medium composed of F-12K supplemented with 10% fetal bovine serum. Cells were subcultured once every 3 days by trypsinization (GIBCO-BRL Life Technologies, Gaithersburg, MD, USA), and the medium was changed every 2–3 days. The commercial human NCI-H716 cells (BCRC No CCL-251) obtained from the Culture Collection and Research Center of the Food Industry Institute (Hsin-Chiu, Taiwan) were maintained in medium supplemented with 10% (v/v) fetal bovine serum and 2 mM l-glutamine at 5% CO2. 
Additionally, Chinese hamster ovary (CHO-K1) cells (BCRC No CCL-61) were maintained in growth medium composed of F-12K supplemented with 10% fetal bovine serum. Cells were subcultured once every 3 days by trypsinization (GIBCO-BRL Life Technologies, Gaithersburg, MD, USA), and the medium was changed every 2–3 days. Transfection of TGR5 in CHO-K1 cells As described in a previous report,12 CHO-K1 cells were transiently transfected with human G-protein-coupled BA receptor 1 and an expression vector (pCMV6-Entry; OriGene, Rockville, MD, USA). We used the TurboFect transfection reagent (Thermo Fisher Scientific, Pittsburgh, PA, USA) to transfect the cells, which were seeded at 5×104 cells per well in six-well plates. Twenty-four hours later, the success of transfection was confirmed using the Western blotting analysis. Then, the TGR5-CHO-K1 cells were incubated with betulinic acid or LCA at the indicated concentrations. As described in a previous report,12 CHO-K1 cells were transiently transfected with human G-protein-coupled BA receptor 1 and an expression vector (pCMV6-Entry; OriGene, Rockville, MD, USA). We used the TurboFect transfection reagent (Thermo Fisher Scientific, Pittsburgh, PA, USA) to transfect the cells, which were seeded at 5×104 cells per well in six-well plates. Twenty-four hours later, the success of transfection was confirmed using the Western blotting analysis. Then, the TGR5-CHO-K1 cells were incubated with betulinic acid or LCA at the indicated concentrations. Uptake of 2-NBDG into CHO-K1 cells Similar to our previous report,13 glucose uptake into cells was investigated using 2-(N-[7-nitrobenz-2-oxa-1,3-diazol-4-yl] amino)-2-deoxyglucose (2-NBDG) as a fluorescent indicator. Each assay used 1×106 cells/mL. The medium was removed, and the cells were washed gently with phosphate-buffered solution (PBS). Cells were detached from the dish by trypsinization, suspended in 0.2 mM 2-NBDG and the testing agent at the indicated concentration in PBS, and then incubated in a 37°C water bath for 60 minutes in the dark. The cells were centrifuged (4°C, 5,000× g, 10 minutes), and the supernatant was discarded. The pellet was washed three times with cold PBS and cooled on ice. The pellet was suspended in 1 mL of PBS. The fluorescence intensity in cell suspension was evaluated using a fluorescence spectrofluorometer (Hitachi F-2000, Tokyo, Japan), with excitation and emission wavelengths of 488 and 520 nm, respectively. The intensity of fluorescence showed the uptake of 2-NBDG in the cells that were incubated with betulinic acid or LCA at the indicated concentrations for suitable time. Similar to our previous report,13 glucose uptake into cells was investigated using 2-(N-[7-nitrobenz-2-oxa-1,3-diazol-4-yl] amino)-2-deoxyglucose (2-NBDG) as a fluorescent indicator. Each assay used 1×106 cells/mL. The medium was removed, and the cells were washed gently with phosphate-buffered solution (PBS). Cells were detached from the dish by trypsinization, suspended in 0.2 mM 2-NBDG and the testing agent at the indicated concentration in PBS, and then incubated in a 37°C water bath for 60 minutes in the dark. The cells were centrifuged (4°C, 5,000× g, 10 minutes), and the supernatant was discarded. The pellet was washed three times with cold PBS and cooled on ice. The pellet was suspended in 1 mL of PBS. 
The fluorescence intensity in cell suspension was evaluated using a fluorescence spectrofluorometer (Hitachi F-2000, Tokyo, Japan), with excitation and emission wavelengths of 488 and 520 nm, respectively. The intensity of fluorescence showed the uptake of 2-NBDG in the cells that were incubated with betulinic acid or LCA at the indicated concentrations for suitable time. Small interfering RNA in NCI-H716 cells Based on a previous method,12 we purchased a validated small interference RNA (siRNA) targeted against TGR5 from a commercial source (Dharmacon RNA Technology, Lafayette, CO, USA). The validated siRNAs were as follows: ON-TARGETplus SMARTpool siTAS1R3 (human NM_152228 sequence), ON-TARGETplus SMARTpool siGNAT3 (human NM_001102386 sequence), and ON-TARGETplus SMARTpool siGPBAR1 (TGR5) (human NM_170699 sequence). ON-TARGETplus SMARTpool non-targeting siRNA pool was used as a negative control to distinguish sequence-specific silencing from nonspecific effects. TurboFect transfection reagent (Thermo Fisher Scientific) was used to transfer siRNA. Success of silencing was evaluated using the Western blotting analysis after the transfection. The NCI-H716 cells were transfected with siRNA and differentiated for another 24 hours before the assay. Based on a previous method,12 we purchased a validated small interference RNA (siRNA) targeted against TGR5 from a commercial source (Dharmacon RNA Technology, Lafayette, CO, USA). The validated siRNAs were as follows: ON-TARGETplus SMARTpool siTAS1R3 (human NM_152228 sequence), ON-TARGETplus SMARTpool siGNAT3 (human NM_001102386 sequence), and ON-TARGETplus SMARTpool siGPBAR1 (TGR5) (human NM_170699 sequence). ON-TARGETplus SMARTpool non-targeting siRNA pool was used as a negative control to distinguish sequence-specific silencing from nonspecific effects. TurboFect transfection reagent (Thermo Fisher Scientific) was used to transfer siRNA. Success of silencing was evaluated using the Western blotting analysis after the transfection. The NCI-H716 cells were transfected with siRNA and differentiated for another 24 hours before the assay. Western blotting analysis Cells were harvested 24 hours after transfection and were lysed with ice-cold lysis buffer containing 300 mmol/L NaCl, 20 mmol/L Tris-HCl (pH 7.8), 2 mmol/L ethylenediaminetetraacetic acid, 2 mmol/L dithiothreitol, 2% Nonidet P-40, 0.2% sodium lauryl sulfate, and a cocktail of protease inhibitors (Sigma-Aldrich). Then, 30 μg of cell lysate was separated by 10% sodium dodecyl sulfate–polyacryl-amide gel electrophoresis. The blots were then transferred to polyvinylidene difluoride membranes (Millipore, Billerica, MA, USA). After blocking with 10% skim milk for 1 hour, the blots were developed using a primary antibody against TGR5 (Abcam, Rockville, MD, USA). The blots were subsequently hybridized using horseradish peroxidase-conjugated goat antirabbit or antimouse immunoglobulin G (Calbiochem, San Diego, CA, USA) and developed using a chemiluminescence kit (PerkinElmer, Waltham, MA, USA). The optical densities of the bands for TGR5 (32 kDa) and β-actin (43 kDa) were determined using GEL-PRO ANALYSER software 4.0 (Media Cybernetics, Silver Spring, MD, USA), and quantified as the ratio to β-actin. 
Cells were harvested 24 hours after transfection and were lysed with ice-cold lysis buffer containing 300 mmol/L NaCl, 20 mmol/L Tris-HCl (pH 7.8), 2 mmol/L ethylenediaminetetraacetic acid, 2 mmol/L dithiothreitol, 2% Nonidet P-40, 0.2% sodium lauryl sulfate, and a cocktail of protease inhibitors (Sigma-Aldrich). Then, 30 μg of cell lysate was separated by 10% sodium dodecyl sulfate–polyacryl-amide gel electrophoresis. The blots were then transferred to polyvinylidene difluoride membranes (Millipore, Billerica, MA, USA). After blocking with 10% skim milk for 1 hour, the blots were developed using a primary antibody against TGR5 (Abcam, Rockville, MD, USA). The blots were subsequently hybridized using horseradish peroxidase-conjugated goat antirabbit or antimouse immunoglobulin G (Calbiochem, San Diego, CA, USA) and developed using a chemiluminescence kit (PerkinElmer, Waltham, MA, USA). The optical densities of the bands for TGR5 (32 kDa) and β-actin (43 kDa) were determined using GEL-PRO ANALYSER software 4.0 (Media Cybernetics, Silver Spring, MD, USA), and quantified as the ratio to β-actin. Measurement of intracellular calcium Changes in intracellular calcium concentrations were detected using the fluorescent probe fura-2. In brief, NCI-H716 cells were placed in a buffered physiological saline solution (PSS), to which 5 mmol/L fura-2 was added. Cells were then incubated for 1 hour in humidified 5% CO2 and 95% air at 37°C. Then, cells were washed and incubated for an additional 30 minutes in PSS. The cells were inserted into a preheated (37°C) cuvette containing 2 mL PSS, and the testing agent at the indicated concentration for the mentioned time. In addition, in some experiments, TGR5-deleted NCI-H716 cells were incubated with the testing agent in the same manner. Fluorescence was recorded continuously using a fluorescence spectrofluorometer (F-2000; Hitachi, Tokyo, Japan). Values of [Ca2+]i were determined, as described in our previous report.14 Background autofluorescence was measured in unloaded cells and was subtracted from all measurements. Changes in intracellular calcium concentrations were detected using the fluorescent probe fura-2. In brief, NCI-H716 cells were placed in a buffered physiological saline solution (PSS), to which 5 mmol/L fura-2 was added. Cells were then incubated for 1 hour in humidified 5% CO2 and 95% air at 37°C. Then, cells were washed and incubated for an additional 30 minutes in PSS. The cells were inserted into a preheated (37°C) cuvette containing 2 mL PSS, and the testing agent at the indicated concentration for the mentioned time. In addition, in some experiments, TGR5-deleted NCI-H716 cells were incubated with the testing agent in the same manner. Fluorescence was recorded continuously using a fluorescence spectrofluorometer (F-2000; Hitachi, Tokyo, Japan). Values of [Ca2+]i were determined, as described in our previous report.14 Background autofluorescence was measured in unloaded cells and was subtracted from all measurements. GLP-1 secretion from NCI-H716 cells After being cultured for 48 hours, NCI-H716 cells (5×105 cells per well) were placed in buffered PSS containing betulinic acid or positive control (LCA) at the indicated concentration to incubate under 37°C for 1 hour. The supernatants were collected and immediately assayed using a GLP-1 active enzyme-linked immunosorbent assay kit (EZGLP1T-36K, EMD Millipore Co., Billerica, MA, USA). Experiments were performed in duplicate from the indicated samples. 
After being cultured for 48 hours, NCI-H716 cells (5×105 cells per well) were placed in buffered PSS containing betulinic acid or positive control (LCA) at the indicated concentration to incubate under 37°C for 1 hour. The supernatants were collected and immediately assayed using a GLP-1 active enzyme-linked immunosorbent assay kit (EZGLP1T-36K, EMD Millipore Co., Billerica, MA, USA). Experiments were performed in duplicate from the indicated samples. Measurement of intracellular cAMP levels The TGR5-CHO-K1 cells were plated at 5×104 cells/well in 96-well plates and incubated with betulinic acid or LCA at the indicated concentrations for 72 hours before cAMP measurements. Intracellular cAMP accumulation was measured using a cAMP enzyme-linked immunosorbent assay kit (ADI-900-066, Enzo Life Sciences, Farmingdale, NY, USA). Experiments were performed in duplicate from the indicated samples. The TGR5-CHO-K1 cells were plated at 5×104 cells/well in 96-well plates and incubated with betulinic acid or LCA at the indicated concentrations for 72 hours before cAMP measurements. Intracellular cAMP accumulation was measured using a cAMP enzyme-linked immunosorbent assay kit (ADI-900-066, Enzo Life Sciences, Farmingdale, NY, USA). Experiments were performed in duplicate from the indicated samples. Statistical analysis Results are presented as the mean±standard error of the mean from the sample number(n) of each group. One-way analysis of variance was followed by post hoc Tukey’s test and t-test using SPSS for Windows, version 17 (SPSS Inc., Chicago, IL, USA). Two-tailed P≤0.05 was considered significant. Results are presented as the mean±standard error of the mean from the sample number(n) of each group. One-way analysis of variance was followed by post hoc Tukey’s test and t-test using SPSS for Windows, version 17 (SPSS Inc., Chicago, IL, USA). Two-tailed P≤0.05 was considered significant. Materials: Betulinic acid (Tokyo Chemical Institute, Tokyo, Japan) and LCA (Sigma-Aldrich Chemical Co., St Louis, MO, USA) were dissolved in dimethyl sulfoxide. Additionally, myristoylated PKI 14-22 amide (Tocris, Avonmouth, Bristol, UK), an inhibitor of protein kinase A, was dissolved in an aqueous solution. Protein was assayed using a bicinchoninic acid assay kit (Thermo Sci., Rockford, IL, USA). Cell cultures: The commercial human NCI-H716 cells (BCRC No CCL-251) obtained from the Culture Collection and Research Center of the Food Industry Institute (Hsin-Chiu, Taiwan) were maintained in medium supplemented with 10% (v/v) fetal bovine serum and 2 mM l-glutamine at 5% CO2. Additionally, Chinese hamster ovary (CHO-K1) cells (BCRC No CCL-61) were maintained in growth medium composed of F-12K supplemented with 10% fetal bovine serum. Cells were subcultured once every 3 days by trypsinization (GIBCO-BRL Life Technologies, Gaithersburg, MD, USA), and the medium was changed every 2–3 days. Transfection of TGR5 in CHO-K1 cells: As described in a previous report,12 CHO-K1 cells were transiently transfected with human G-protein-coupled BA receptor 1 and an expression vector (pCMV6-Entry; OriGene, Rockville, MD, USA). We used the TurboFect transfection reagent (Thermo Fisher Scientific, Pittsburgh, PA, USA) to transfect the cells, which were seeded at 5×104 cells per well in six-well plates. Twenty-four hours later, the success of transfection was confirmed using the Western blotting analysis. Then, the TGR5-CHO-K1 cells were incubated with betulinic acid or LCA at the indicated concentrations. 
Uptake of 2-NBDG into CHO-K1 cells: Similar to our previous report,13 glucose uptake into cells was investigated using 2-(N-[7-nitrobenz-2-oxa-1,3-diazol-4-yl] amino)-2-deoxyglucose (2-NBDG) as a fluorescent indicator. Each assay used 1×106 cells/mL. The medium was removed, and the cells were washed gently with phosphate-buffered solution (PBS). Cells were detached from the dish by trypsinization, suspended in 0.2 mM 2-NBDG and the testing agent at the indicated concentration in PBS, and then incubated in a 37°C water bath for 60 minutes in the dark. The cells were centrifuged (4°C, 5,000× g, 10 minutes), and the supernatant was discarded. The pellet was washed three times with cold PBS and cooled on ice. The pellet was suspended in 1 mL of PBS. The fluorescence intensity in cell suspension was evaluated using a fluorescence spectrofluorometer (Hitachi F-2000, Tokyo, Japan), with excitation and emission wavelengths of 488 and 520 nm, respectively. The intensity of fluorescence showed the uptake of 2-NBDG in the cells that were incubated with betulinic acid or LCA at the indicated concentrations for suitable time. Small interfering RNA in NCI-H716 cells: Based on a previous method,12 we purchased a validated small interference RNA (siRNA) targeted against TGR5 from a commercial source (Dharmacon RNA Technology, Lafayette, CO, USA). The validated siRNAs were as follows: ON-TARGETplus SMARTpool siTAS1R3 (human NM_152228 sequence), ON-TARGETplus SMARTpool siGNAT3 (human NM_001102386 sequence), and ON-TARGETplus SMARTpool siGPBAR1 (TGR5) (human NM_170699 sequence). ON-TARGETplus SMARTpool non-targeting siRNA pool was used as a negative control to distinguish sequence-specific silencing from nonspecific effects. TurboFect transfection reagent (Thermo Fisher Scientific) was used to transfer siRNA. Success of silencing was evaluated using the Western blotting analysis after the transfection. The NCI-H716 cells were transfected with siRNA and differentiated for another 24 hours before the assay. Western blotting analysis: Cells were harvested 24 hours after transfection and were lysed with ice-cold lysis buffer containing 300 mmol/L NaCl, 20 mmol/L Tris-HCl (pH 7.8), 2 mmol/L ethylenediaminetetraacetic acid, 2 mmol/L dithiothreitol, 2% Nonidet P-40, 0.2% sodium lauryl sulfate, and a cocktail of protease inhibitors (Sigma-Aldrich). Then, 30 μg of cell lysate was separated by 10% sodium dodecyl sulfate–polyacryl-amide gel electrophoresis. The blots were then transferred to polyvinylidene difluoride membranes (Millipore, Billerica, MA, USA). After blocking with 10% skim milk for 1 hour, the blots were developed using a primary antibody against TGR5 (Abcam, Rockville, MD, USA). The blots were subsequently hybridized using horseradish peroxidase-conjugated goat antirabbit or antimouse immunoglobulin G (Calbiochem, San Diego, CA, USA) and developed using a chemiluminescence kit (PerkinElmer, Waltham, MA, USA). The optical densities of the bands for TGR5 (32 kDa) and β-actin (43 kDa) were determined using GEL-PRO ANALYSER software 4.0 (Media Cybernetics, Silver Spring, MD, USA), and quantified as the ratio to β-actin. Measurement of intracellular calcium: Changes in intracellular calcium concentrations were detected using the fluorescent probe fura-2. In brief, NCI-H716 cells were placed in a buffered physiological saline solution (PSS), to which 5 mmol/L fura-2 was added. Cells were then incubated for 1 hour in humidified 5% CO2 and 95% air at 37°C. 
Measurement of intracellular calcium: Changes in intracellular calcium concentrations were detected using the fluorescent probe fura-2. In brief, NCI-H716 cells were placed in a buffered physiological saline solution (PSS), to which 5 mmol/L fura-2 was added. Cells were then incubated for 1 hour in humidified 5% CO2 and 95% air at 37°C. Then, cells were washed and incubated for an additional 30 minutes in PSS. The cells were inserted into a preheated (37°C) cuvette containing 2 mL PSS and the testing agent at the indicated concentration for the indicated time. In addition, in some experiments, TGR5-deleted NCI-H716 cells were incubated with the testing agent in the same manner. Fluorescence was recorded continuously using a fluorescence spectrofluorometer (F-2000; Hitachi, Tokyo, Japan). Values of [Ca2+]i were determined, as described in our previous report.14 Background autofluorescence was measured in unloaded cells and was subtracted from all measurements. GLP-1 secretion from NCI-H716 cells: After being cultured for 48 hours, NCI-H716 cells (5×10⁵ cells per well) were placed in buffered PSS containing betulinic acid or the positive control (LCA) at the indicated concentration and incubated at 37°C for 1 hour. The supernatants were collected and immediately assayed using a GLP-1 active enzyme-linked immunosorbent assay kit (EZGLP1T-36K, EMD Millipore Co., Billerica, MA, USA). Experiments were performed in duplicate from the indicated samples. Measurement of intracellular cAMP levels: The TGR5-CHO-K1 cells were plated at 5×10⁴ cells/well in 96-well plates and incubated with betulinic acid or LCA at the indicated concentrations for 72 hours before cAMP measurements. Intracellular cAMP accumulation was measured using a cAMP enzyme-linked immunosorbent assay kit (ADI-900-066, Enzo Life Sciences, Farmingdale, NY, USA). Experiments were performed in duplicate from the indicated samples. Statistical analysis: Results are presented as the mean±standard error of the mean from the sample number (n) of each group. One-way analysis of variance was followed by post hoc Tukey’s test and t-test using SPSS for Windows, version 17 (SPSS Inc., Chicago, IL, USA). Two-tailed P≤0.05 was considered significant. Results: Betulinic acid increases glucose uptake in TGR5-transfected CHO cells: The successful transfection of TGR5 receptor into CHO-K1 cells was confirmed using Western blots, as shown in Figure 2A. Then, we investigated the functional response of TGR5-transfected CHO-K1 cells using glucose uptake as an indicator. Similar to the action of LCA, betulinic acid increases glucose uptake in a dose-dependent manner. However, glucose uptake was not induced by betulinic acid (Figure 2B) or LCA (Figure 2C) in CHO-K1 cells transfected with empty vector. We also evaluated the possibility of cytotoxicity induced by betulinic acid in CHO-K1 cells transfected with TGR5 receptor or with empty vector. Betulinic acid at the highest dose (0.1 mM) did not influence the viability of TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only in the preliminary experiments. Similar results of the MTT assay were observed in LCA-treated TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only (data not shown).
Betulinic acid increases signals in TGR5-transfected CHO-K1 cells: It has been established that cAMP is the main signal coupled to TGR5 receptor activation.15 Therefore, we determined the change in cAMP levels in CHO-K1 cells expressing TGR5. Betulinic acid induced a marked increase in cAMP levels in cells transfected with TGR5 receptor, but not in cells transfected with empty vector (Figure 3A). The 50% effective dose value of betulinic acid was ~0.5 µM. The effect of betulinic acid was produced in a dose-dependent fashion over the same range as that used to increase the glucose uptake. Similarly, LCA (the well-known agonist of TGR5) induced a dose-dependent increase in cAMP levels in CHO-K1 cells transfected with TGR5 receptor (Figure 3B). LCA is known to increase intracellular cAMP levels,3 and it was used as a positive control in this study. Additionally, the increase in glucose uptake induced by betulinic acid was markedly reduced by blockade of protein kinase A using a protein kinase A inhibitor (PKI) in a dose-related manner (Figure 4A). Similar results were observed in LCA-treated TGR5-transfected CHO-K1 cells, as shown in Figure 4B.
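The 50% effective dose (~0.5 µM) quoted above is the concentration producing a half-maximal cAMP response; one common way to estimate such a value is to fit a four-parameter logistic (Hill) curve to concentration-response data. The sketch below is a hypothetical illustration with invented data points; the study does not state which fitting procedure was actually used.

```python
# Hypothetical EC50 estimation from concentration-response data by fitting
# a four-parameter logistic (Hill) equation. Data points are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, hill_slope):
    """Four-parameter logistic concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill_slope)

conc_uM = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])   # agonist, µM
camp_fold = np.array([1.05, 1.1, 1.4, 2.2, 3.1, 3.6, 3.7])   # fold of basal cAMP

p0 = [1.0, 4.0, 0.5, 1.0]  # initial guesses: bottom, top, EC50, Hill slope
params, _ = curve_fit(hill, conc_uM, camp_fold, p0=p0, maxfev=10000)
bottom, top, ec50, slope = params
print(f"Estimated EC50 ≈ {ec50:.2f} µM (Hill slope {slope:.2f})")
```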
Betulinic acid increases GLP-1 secretion in NCI-H716 cells: Glucose from the digestion of carbohydrates in foods is the well-known stimulator of GLP-1 secretion in the intestine.16 NCI-H716 cells are widely used in the study of GLP-1 secretion,17 and the presence of TGR5 receptor in NCI-H716 cells has been characterized.10 Therefore, we used NCI-H716 cells to investigate the effect of betulinic acid on GLP-1 secretion in vitro. Additionally, as shown in Figure 5A, we applied siRNA to silence TGR5 receptor in NCI-H716 cells. Betulinic acid increases GLP-1 secretion in a dose-dependent manner after incubation with NCI-H716 cells (Figure 5B). Similar changes were also observed in LCA-treated NCI-H716 cells (Figure 5C). The effectiveness of betulinic acid was markedly attenuated once TGR5 receptor was silenced by siRNA (Figure 5B). Secretion of GLP-1 by betulinic acid was reduced in the absence of TGR5 receptor in NCI-H716 cells. Similar results were also observed in LCA-treated NCI-H716 cells (Figure 5C). This shows that TGR5 receptor mediates the secretion of GLP-1. Betulinic acid increases calcium levels in NCI-H716 cells: We used a nonradioactive assay to determine intracellular calcium levels in NCI-H716 cells because GLP-1 secretion from cells is associated with calcium levels.18 Betulinic acid also dose-dependently increases calcium levels in NCI-H716 cells (Figure 6A). Additionally, the same changes were obtained in LCA-treated NCI-H716 cells. However, the effect of betulinic acid was markedly reduced after TGR5 receptor was silenced by siRNA (Figure 6A). The calcium increase induced by betulinic acid was attenuated after the silencing of TGR5 receptor in NCI-H716 cells. Similar results were also produced in LCA-treated NCI-H716 cells (Figure 6B). Betulinic acid increases glucose uptake in TGR5-transfected CHO cells: The successful transfection of TGR5 receptor into CHO-K1 cells was confirmed using Western blots, as shown in Figure 2A. Then, we investigated the functional response of TGR5-transfected CHO-K1 cells using glucose uptake as an indicator. Similar to the action of LCA, betulinic acid increases glucose uptake in a dose-dependent manner. However, glucose uptake was not induced by betulinic acid (Figure 2B) or LCA (Figure 2C) in CHO-K1 cells transfected with empty vector. We also evaluated the possibility of cytotoxicity induced by betulinic acid in CHO-K1 cells transfected with TGR5 receptor or with empty vector. Betulinic acid at the highest dose (0.1 mM) did not influence the viability of TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only in the preliminary experiments. Similar results of the MTT assay were observed in LCA-treated TGR5-transfected CHO-K1 cells or CHO-K1 cells transfected with empty vector only (data not shown).
Betulinic acid increases signals in TGR5-transfected CHO-K1 cells: It has been established that cAMP is the main signal coupled to TGR5 receptor activation.15 Therefore, we determined the change in cAMP levels in CHO-K1 cells expressing TGR5. Betulinic acid induced a marked increase in cAMP levels in cells transfected with TGR5 receptor, but not in cells transfected with empty vector (Figure 3A). The 50% effective dose value of betulinic acid was ~0.5 µM. The effect of betulinic acid was produced in a dose-dependent fashion over the same range as that used to increase the glucose uptake. Similarly, LCA (the well-known agonist of TGR5) induced a dose-dependent increase in cAMP levels in CHO-K1 cells transfected with TGR5 receptor (Figure 3B). LCA is known to increase intracellular cAMP levels,3 and it was used as a positive control in this study. Additionally, the increase in glucose uptake induced by betulinic acid was markedly reduced by blockade of protein kinase A using a protein kinase A inhibitor (PKI) in a dose-related manner (Figure 4A). Similar results were observed in LCA-treated TGR5-transfected CHO-K1 cells, as shown in Figure 4B. Betulinic acid increases GLP-1 secretion in NCI-H716 cells: Glucose from the digestion of carbohydrates in foods is the well-known stimulator of GLP-1 secretion in the intestine.16 NCI-H716 cells are widely used in the study of GLP-1 secretion,17 and the presence of TGR5 receptor in NCI-H716 cells has been characterized.10 Therefore, we used NCI-H716 cells to investigate the effect of betulinic acid on GLP-1 secretion in vitro. Additionally, as shown in Figure 5A, we applied siRNA to silence TGR5 receptor in NCI-H716 cells. Betulinic acid increases GLP-1 secretion in a dose-dependent manner after incubation with NCI-H716 cells (Figure 5B). Similar changes were also observed in LCA-treated NCI-H716 cells (Figure 5C). The effectiveness of betulinic acid was markedly attenuated once TGR5 receptor was silenced by siRNA (Figure 5B). Secretion of GLP-1 by betulinic acid was reduced in the absence of TGR5 receptor in NCI-H716 cells. Similar results were also observed in LCA-treated NCI-H716 cells (Figure 5C). This shows that TGR5 receptor mediates the secretion of GLP-1. Betulinic acid increases calcium levels in NCI-H716 cells: We used a nonradioactive assay to determine intracellular calcium levels in NCI-H716 cells because GLP-1 secretion from cells is associated with calcium levels.18 Betulinic acid also dose-dependently increases calcium levels in NCI-H716 cells (Figure 6A). Additionally, the same changes were obtained in LCA-treated NCI-H716 cells. However, the effect of betulinic acid was markedly reduced after TGR5 receptor was silenced by siRNA (Figure 6A). The calcium increase induced by betulinic acid was attenuated after the silencing of TGR5 receptor in NCI-H716 cells. Similar results were also produced in LCA-treated NCI-H716 cells (Figure 6B). Discussion: In this study, we demonstrated that the natural product betulinic acid is an agonist of TGR5 receptor. Using CHO-K1 cells transfected with human TGR5 receptor, we found for the first time that betulinic acid increases glucose uptake through activation of TGR5 receptor. A similar result was also produced using LCA, a well-known agonist of TGR5 receptor and the positive control in this study. In CHO-K1 cells transfected with TGR5 receptor, betulinic acid increases glucose uptake in a dose-dependent manner. However, similar changes were not observed in CHO-K1 cells transfected with empty vector only.
That the same results were observed in LCA-treated cells provides additional support for betulinic acid-induced glucose uptake occurring through TGR5 receptor. Moreover, betulinic acid dose-dependently increases the cAMP level in CHO-K1 cells transfected with TGR5 receptor. A previous report has identified cAMP as the subcellular signal coupled to TGR5 receptor activation,6 further supporting the mediation of TGR5 receptor in betulinic acid-induced action. Finally, we applied an inhibitor of protein kinase A to block the betulinic acid-induced glucose uptake in CHO-K1 cells transfected with TGR5 receptor. It is thus plausible that TGR5 receptors are activated by betulinic acid to enhance glucose uptake through a cAMP-dependent pathway. Notably, this view has not been reported previously. NCI-H716 cells have been introduced as a useful in vitro tool in the study of GLP-1 secretion from the human intestine,17 in which GLP-1 secretion has been reported to be linked to increased levels of intracellular calcium.18 We found that betulinic acid dose-dependently increased GLP-1 secretion in parallel with the increase in calcium levels in NCI-H716 cells. Similar changes were induced by LCA, the well-known agonist of TGR5 receptor that was used as a positive control in this study. Interestingly, the actions of betulinic acid were markedly reduced once TGR5 receptor was silenced by siRNA in NCI-H716 cells. To produce stable silencing, the siRNA used for TGR5 receptor included three targets: TAS1R3 and α-gustducin in addition to TGR5 receptor.12 However, it has been demonstrated that neither siRNA targeting TAS1R3 nor siRNA targeting α-gustducin in NCI-H716 cells influenced the GLP-1 secretion that was induced by a TGR5 receptor agonist.12 TGR5 receptor is known to be a Gαs-coupled receptor.19 TGR5 receptor activation stimulates GLP-1 secretion via Gαs, which increases the intracellular calcium level in NCI-H716 cells.20 Our data are consistent with this view, supporting the idea that the action of betulinic acid in NCI-H716 cells to increase GLP-1 secretion occurs mainly through TGR5 receptor activation. Some reports have indicated that betulinic acid is able to induce TGR5 receptor activation.9,10 However, direct stimulation of TGR5 receptor by betulinic acid using a radioligand binding assay or similar approaches has still not been reported. Using an alternative method, the present study demonstrated for the first time that betulinic acid-induced action is mediated by TGR5 receptor activation. However, chemical ligands are known to produce nonspecific actions, particularly at other receptor sites such as the farnesoid X receptor. This is a limitation of the present report, and more studies are required to clarify the specificity of betulinic acid. GLP-1 plays an essential role in the maintenance of glucose homeostasis by its multiple actions, which include stimulation of glucose-dependent insulin secretion, proinsulin gene expression, and β-cell proliferative and antiapoptotic pathways.21 Moreover, GLP-1 regulates energy homeostasis via its neural actions, which include inhibiting glucagon release, gastric emptying, food intake, and appetite.22,23 Betulinic acid appears to regulate energy homeostasis via GLP-1 secretion. In addition, betulinic acid has been characterized as an agonist of human TGR5 receptor, which is expressed in the intestine, brown adipose tissue, skeletal muscle, and selected regions of the central nervous system.
TGR5 receptor activation is known to regulate glucose and/or energy homeostasis, immune function, and liver function.24 Therefore, betulinic acid can produce multiple actions in vivo. Conclusion: In the present study, we demonstrated that betulinic acid is an agonist of human TGR5 receptor. Betulinic acid increases glucose uptake and stimulates GLP-1 secretion in cells via TGR5 receptor activation. Therefore, betulinic acid is a potential agent for activating TGR5 receptor, which has multiple actions in vivo.
Background: G-protein-coupled bile acid receptor 1, also known as TGR5 is known to be involved in glucose homeostasis. In animal models, treatment with a TGR5 agonist induces incretin secretion to reduce hyperglycemia. Betulinic acid, a triterpenoid present in the leaves of white birch, has been introduced as a selective TGR5 agonist. However, direct activation of TGR5 by betulinic acid has not yet been reported. Methods: Transfection of TGR5 into cultured Chinese hamster ovary (CHO-K1) cells was performed to establish the presence of TGR5. Additionally, TGR5-specific small interfering RNA was employed to silence TGR5 in cells (NCI-H716 cells) that secreted incretins. Uptake of glucose by CHO-K1 cells was evaluated using a fluorescent indicator. Amounts of cyclic adenosine monophosphate and glucagon-like peptide were quantified using enzyme-linked immunosorbent assay kits. Results: Betulinic acid dose-dependently increases glucose uptake by CHO-K1 cells transfected with TGR5 only, which can be considered an alternative method instead of radioligand binding assay. Additionally, signals coupled to TGR5 activation are also increased by betulinic acid in cells transfected with TGR5. In NCI-H716 cells, which endogenously express TGR5, betulinic acid induces glucagon-like peptide secretion via increasing calcium levels. However, the actions of betulinic acid were markedly reduced in NCI-H716 cells that received TGR5-silencing treatment. Therefore, the present study demonstrates the activation of TGR5 by betulinic acid for the first time. Conclusions: Similar to the positive control lithocholic acid, which is the established agonist of TGR5, betulinic acid has been characterized as a useful agonist of TGR5 and can be used to activate TGR5 in the future.
Introduction: It has been established that G-protein-coupled bile acid (BA) receptor 1 (also known as TGR5) agonists show potential for treating diabetic disorders.1,2 Essentially, BA induces TGR5 receptor internalization, activation of extracellular signal-regulated kinase, and intracellular cyclic adenosine monophosphate (cAMP) production in cells.1,2 In animal models, treatment with TGR5 agonist(s) produces glucagon-like peptide (GLP-1) secretion, which in turn induces insulin secretion to lower blood glucose and/or increase the basal energy expenditure.3 Therefore, TGR5 has been recognized as a target for developing new antidiabetic agents.4 In the gut, BAs are also modified by the gut flora to produce secondary BAs, deoxycholic acid, and lithocholic acid (LCA). BAs can be divided into hydrophobic and hydrophilic subgroups. LCA is hydrophobic and could activate TGR55 but was highly toxic to liver.6 Betulinic acid, a triterpenoid, as shown in Figure 1, that is present in the leaves of white birch, has been introduced as a selective TGR5 agonist with moderate potency to produce antihyperglycemic actions.7,8 Due to its potential as a TGR5 agonist, derivatives of betulinic acid have widely been studied.9,10 Recently, the antidiabetic action of betulinic acid has been reviewed with two other triterpenic acids, oleanolic acid, and ursolic acid, in detail.11 However, the activation of TGR5 was not shown in that report, probably due to poor evidence. Therefore, data showing the direct effect of betulinic acid on TGR5 are likely to be helpful. In the present study, we used cells transfected with TGR5 to identify the effect of betulinic acid. Additionally, TGR5-silenced cells were applied to confirm the deletion of betulinic acid-induced actions. Therefore, the TGR5-mediated actions of betulinic acid can be observed directly. Conclusion: In the present study, we demonstrated that betulinic acid is an agonist of human TGR5 receptor. Betulinic acid increases glucose uptake and stimulates GLP-1 secretion in cells via TGR5 receptor activation. Therefore, betulinic acid is a potential agent for activating TGR5 receptor, which has multiple actions in vivo.
Background: G-protein-coupled bile acid receptor 1, also known as TGR5 is known to be involved in glucose homeostasis. In animal models, treatment with a TGR5 agonist induces incretin secretion to reduce hyperglycemia. Betulinic acid, a triterpenoid present in the leaves of white birch, has been introduced as a selective TGR5 agonist. However, direct activation of TGR5 by betulinic acid has not yet been reported. Methods: Transfection of TGR5 into cultured Chinese hamster ovary (CHO-K1) cells was performed to establish the presence of TGR5. Additionally, TGR5-specific small interfering RNA was employed to silence TGR5 in cells (NCI-H716 cells) that secreted incretins. Uptake of glucose by CHO-K1 cells was evaluated using a fluorescent indicator. Amounts of cyclic adenosine monophosphate and glucagon-like peptide were quantified using enzyme-linked immunosorbent assay kits. Results: Betulinic acid dose-dependently increases glucose uptake by CHO-K1 cells transfected with TGR5 only, which can be considered an alternative method instead of radioligand binding assay. Additionally, signals coupled to TGR5 activation are also increased by betulinic acid in cells transfected with TGR5. In NCI-H716 cells, which endogenously express TGR5, betulinic acid induces glucagon-like peptide secretion via increasing calcium levels. However, the actions of betulinic acid were markedly reduced in NCI-H716 cells that received TGR5-silencing treatment. Therefore, the present study demonstrates the activation of TGR5 by betulinic acid for the first time. Conclusions: Similar to the positive control lithocholic acid, which is the established agonist of TGR5, betulinic acid has been characterized as a useful agonist of TGR5 and can be used to activate TGR5 in the future.
7,590
326
[ 124, 116, 216, 150, 236, 179, 87, 77, 191, 212, 201, 121, 54 ]
19
[ "cells", "tgr5", "acid", "betulinic", "betulinic acid", "receptor", "h716", "nci h716", "h716 cells", "nci" ]
[ "tgr5 receptor betulinic", "acid agonist tgr5", "acid enhance glucose", "liver betulinic acid", "antidiabetic agents gut" ]
[CONTENT] CHO-K1 cell | lithocholic acid | NCI-H716 cell | transfection | siRNA [SUMMARY]
[CONTENT] CHO-K1 cell | lithocholic acid | NCI-H716 cell | transfection | siRNA [SUMMARY]
[CONTENT] CHO-K1 cell | lithocholic acid | NCI-H716 cell | transfection | siRNA [SUMMARY]
[CONTENT] CHO-K1 cell | lithocholic acid | NCI-H716 cell | transfection | siRNA [SUMMARY]
[CONTENT] CHO-K1 cell | lithocholic acid | NCI-H716 cell | transfection | siRNA [SUMMARY]
[CONTENT] CHO-K1 cell | lithocholic acid | NCI-H716 cell | transfection | siRNA [SUMMARY]
[CONTENT] Animals | CHO Cells | Cricetinae | Cricetulus | Dose-Response Relationship, Drug | Humans | Pentacyclic Triterpenes | Receptors, G-Protein-Coupled | Structure-Activity Relationship | Triterpenes | Tumor Cells, Cultured | Betulinic Acid [SUMMARY]
[CONTENT] Animals | CHO Cells | Cricetinae | Cricetulus | Dose-Response Relationship, Drug | Humans | Pentacyclic Triterpenes | Receptors, G-Protein-Coupled | Structure-Activity Relationship | Triterpenes | Tumor Cells, Cultured | Betulinic Acid [SUMMARY]
[CONTENT] Animals | CHO Cells | Cricetinae | Cricetulus | Dose-Response Relationship, Drug | Humans | Pentacyclic Triterpenes | Receptors, G-Protein-Coupled | Structure-Activity Relationship | Triterpenes | Tumor Cells, Cultured | Betulinic Acid [SUMMARY]
[CONTENT] Animals | CHO Cells | Cricetinae | Cricetulus | Dose-Response Relationship, Drug | Humans | Pentacyclic Triterpenes | Receptors, G-Protein-Coupled | Structure-Activity Relationship | Triterpenes | Tumor Cells, Cultured | Betulinic Acid [SUMMARY]
[CONTENT] Animals | CHO Cells | Cricetinae | Cricetulus | Dose-Response Relationship, Drug | Humans | Pentacyclic Triterpenes | Receptors, G-Protein-Coupled | Structure-Activity Relationship | Triterpenes | Tumor Cells, Cultured | Betulinic Acid [SUMMARY]
[CONTENT] Animals | CHO Cells | Cricetinae | Cricetulus | Dose-Response Relationship, Drug | Humans | Pentacyclic Triterpenes | Receptors, G-Protein-Coupled | Structure-Activity Relationship | Triterpenes | Tumor Cells, Cultured | Betulinic Acid [SUMMARY]
[CONTENT] tgr5 receptor betulinic | acid agonist tgr5 | acid enhance glucose | liver betulinic acid | antidiabetic agents gut [SUMMARY]
[CONTENT] tgr5 receptor betulinic | acid agonist tgr5 | acid enhance glucose | liver betulinic acid | antidiabetic agents gut [SUMMARY]
[CONTENT] tgr5 receptor betulinic | acid agonist tgr5 | acid enhance glucose | liver betulinic acid | antidiabetic agents gut [SUMMARY]
[CONTENT] tgr5 receptor betulinic | acid agonist tgr5 | acid enhance glucose | liver betulinic acid | antidiabetic agents gut [SUMMARY]
[CONTENT] tgr5 receptor betulinic | acid agonist tgr5 | acid enhance glucose | liver betulinic acid | antidiabetic agents gut [SUMMARY]
[CONTENT] tgr5 receptor betulinic | acid agonist tgr5 | acid enhance glucose | liver betulinic acid | antidiabetic agents gut [SUMMARY]
[CONTENT] cells | tgr5 | acid | betulinic | betulinic acid | receptor | h716 | nci h716 | h716 cells | nci [SUMMARY]
[CONTENT] cells | tgr5 | acid | betulinic | betulinic acid | receptor | h716 | nci h716 | h716 cells | nci [SUMMARY]
[CONTENT] cells | tgr5 | acid | betulinic | betulinic acid | receptor | h716 | nci h716 | h716 cells | nci [SUMMARY]
[CONTENT] cells | tgr5 | acid | betulinic | betulinic acid | receptor | h716 | nci h716 | h716 cells | nci [SUMMARY]
[CONTENT] cells | tgr5 | acid | betulinic | betulinic acid | receptor | h716 | nci h716 | h716 cells | nci [SUMMARY]
[CONTENT] cells | tgr5 | acid | betulinic | betulinic acid | receptor | h716 | nci h716 | h716 cells | nci [SUMMARY]
[CONTENT] acid | tgr5 | tgr5 agonist | bas | betulinic | betulinic acid | actions | agonist | antidiabetic | hydrophobic [SUMMARY]
[CONTENT] test | spss | mean | followed post | tailed | chicago il usa | chicago il | version | version 17 | version 17 spss [SUMMARY]
[CONTENT] cells | figure | nci | h716 | h716 cells | nci h716 | nci h716 cells | tgr5 | betulinic acid | betulinic [SUMMARY]
[CONTENT] tgr5 receptor | receptor | tgr5 | betulinic acid | betulinic | acid | acid agonist human | activating tgr5 receptor multiple | acid potential agent | acid potential [SUMMARY]
[CONTENT] cells | tgr5 | acid | betulinic acid | betulinic | receptor | tgr5 receptor | nci h716 cells | nci h716 | h716 [SUMMARY]
[CONTENT] cells | tgr5 | acid | betulinic acid | betulinic | receptor | tgr5 receptor | nci h716 cells | nci h716 | h716 [SUMMARY]
[CONTENT] 1 | TGR5 ||| ||| ||| [SUMMARY]
[CONTENT] Chinese | CHO-K1 | TGR5 ||| RNA | NCI-H716 ||| CHO-K1 ||| [SUMMARY]
[CONTENT] CHO-K1 ||| ||| NCI ||| NCI-H716 ||| first [SUMMARY]
[CONTENT] TGR5 | TGR5 [SUMMARY]
[CONTENT] 1 | TGR5 ||| ||| ||| ||| Chinese | CHO-K1 | TGR5 ||| RNA | NCI-H716 ||| CHO-K1 ||| ||| ||| CHO-K1 ||| ||| NCI ||| NCI-H716 ||| first ||| TGR5 | TGR5 [SUMMARY]
[CONTENT] 1 | TGR5 ||| ||| ||| ||| Chinese | CHO-K1 | TGR5 ||| RNA | NCI-H716 ||| CHO-K1 ||| ||| ||| CHO-K1 ||| ||| NCI ||| NCI-H716 ||| first ||| TGR5 | TGR5 [SUMMARY]
Progression of cerebral infarction before and after thrombectomy is modified by prehospital pathways.
33986107
Evidence of the consequences of different prehospital pathways before mechanical thrombectomy (MT) in large vessel occlusion stroke is inconclusive. The aim of this study was to investigate the infarct extent and progression before and after MT in directly admitted (mothership) versus transferred (drip and ship) patients using the Alberta Stroke Program Early CT Score (ASPECTS).
BACKGROUND
ASPECTS of 535 consecutive large vessel occlusion stroke patients eligible for MT between 2015 to 2019 were retrospectively analyzed for differences in the extent of baseline, post-referral, and post-recanalization infarction between the mothership and drip and ship pathways. Time intervals and transport distances of both pathways were analyzed. Multiple linear regression was used to examine the association between infarct progression (baseline to post-recanalization ASPECTS decline), patient characteristics, and logistic key figures.
METHODS
ASPECTS declined during transfer (9 (8-10) vs 7 (6-9), p<0.0001), resulting in lower ASPECTS at stroke center presentation (mothership 9 (7-10) vs drip and ship 7 (6-9), p<0.0001) and on follow-up imaging (mothership 7 (4-8) vs drip and ship 6 (3-7), p=0.001) compared with mothership patients. Infarct progression was significantly higher in transferred patients (points lost, mothership 2 (0-3) vs drip and ship 3 (2-6), p<0.0001). After multivariable adjustment, only interfacility transfer, preinterventional clinical stroke severity, the degree of angiographic recanalization, and the duration of the thrombectomy procedure remained predictors of infarct progression (R²=0.209, p<0.0001).
RESULTS
Infarct progression and postinterventional infarct extent, as assessed by ASPECTS, varied between the drip and ship and mothership pathway, leading to more pronounced infarction in transferred patients. ASPECTS may serve as a radiological measure to monitor the benefit or harm of different prehospital pathways for MT.
CONCLUSIONS
[ "Arterial Occlusive Diseases", "Brain Ischemia", "Cerebral Infarction", "Emergency Medical Services", "Humans", "Ischemic Stroke", "Retrospective Studies", "Stroke", "Thrombectomy", "Treatment Outcome" ]
9016250
Introduction
In many regions of the world, the two most frequent treatment pathways in the management of acute large vessel occlusion strokes are intravenous thrombolysis in the nearest thrombolysis facility (drip and ship concept) prior to mechanical thrombectomy in endovascular capable comprehensive stroke centers or, as the main alternative, direct transfer to a comprehensive stroke center for mechanical thrombectomy (mothership concept).1 The population based benefit or harm of either scenario is region specific and not externally generalizable.2 Although certain stroke networks have evaluated prehospital triage systems based on stroke severity scales, it will likely remain elusive, from a methodological perspective, to study optimal prehospital routing strategies by performing trials where the treatment pathway is randomized in the field and which at the same time will account for region specific factors or by performing such trials separately by region.2 3 Functional clinical measures are the standard of reference as primary endpoints in randomized stroke trials. However, observational investigation using standardized and widely used radiological measures, such as the Alberta Stroke Program Early CT Score (ASPECTS), may be valuable to indicate benefit or harm in a setting where randomization of different treatment pathways for acute large vessel occlusion stroke remains a serious obstacle or is not feasible across multiple centers, each with distinct region specific features.1 2 4 Recently, ASPECTS has been proposed as an imaging tool to assess the dynamics of infarction during interfacility transfer.5 6 We add to previous studies on ASPECTS by investigating whether the infarct extent before and after mechanical thrombectomy differs between the mothership and drip and ship pathways, and aim to quantify the imaging defined effect size of pre- to postinterventional worsening of stroke over time for both pathways.
Methods
This retrospective observational study was approved by the local ethics committee. The period of observation was from May 2015 to July 2019. A total of 535 consecutive anterior circulation ischemic stroke patients intended to be treated by mechanical thrombectomy at an academic comprehensive stroke center in our regiopolitan area were analyzed to investigate whether preinterventional and early postinterventional follow-up ASPECTS at 24–48 hour after symptom onset differed between the mothership and drip and ship treatment pathways for mechanical thrombectomy (online supplemental figure 1). In addition, we compared the course of decline in ASPECTS over time (difference in follow-up ASPECTS to baseline) between the two treatment pathways, and assessed the association of key patient, interventional, and logistic variables with ASPECTS-defined infarct progression. The stroke network of the regiopolitan area surveyed covers a population of ~1.4 million inhabitants and is composed of one academic comprehensive stroke center and 11 associated hospitals within a maximum ground transfer distance of 149 km.7 To ensure guideline compliant neurointerventional and pharmacological stroke treatment in a 24/7 setting,1 the academic comprehensive stroke center provides teleradiological and teleneurological support on request (ie, the assessment of thrombectomy eligibility), coordination of patient transfer from associated hospitals (individual decision making), and, following transfer, subsequent mechanical thrombectomy if indicated.7 All symptomatic acute ischemic stroke patients with proven occlusion of the distal internal carotid artery, middle cerebral artery M1 segment, and proximal M2 segment were eligible for study entry. Of those, we included patients either directly admitted or transferred from associated referring hospitals for which initial non-invasive imaging by means of non-contrast CT and CT angiography were available for both the comparison with non-contrast CT (Somatom Definition AS, Siemens Healthcare, Erlangen, Germany) at stroke center presentation and at 24–48 hours (postinterventional imaging, cut-off for contrast agent resorption).8 Thrombectomy eligibility was guided using consensus recommendations or current guidelines if available.1 The following demographic, clinical, radiological, interventional, and logistical data were collected according to the available medical records: age, sex, baseline medication and medical history, time of symptom onset, heart rate and blood pressure at presentation, National Institutes of Health Stroke Scale (NIHSS) score at presentation and at 24 hours, time of non-invasive and angiographic image acquisition, occlusion location, baseline ASPECTS (referring hospital/comprehensive stroke center), ASPECTS after patient transfer and at 24–48 hour, and alteplase administration. Imaging data were stored in the respective picture archiving and communication system and were available via teleradiological access. ASPECTS values were independently evaluated by two of the authors (FC/JH) trained in ASPECTS scoring (http://www.aspectsinstroke.com), both blinded to clinical information. Interobserver agreement was assessed by the intraclass correlation coefficient (ICC). Recanalization success was assessed by the modified Treatment in Cerebral Ischemia Scale (mTICI) and determined by finding a consensus between two examiners with longstanding expertise in vascular neurointervention (AMK/MP). A mTICI score of ≥2 b was defined as successful recanalization. 
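Two derived variables defined in the methods above are the ASPECTS decline (baseline minus follow-up score) and successful recanalization (mTICI ≥2b). A hypothetical pandas sketch with invented rows and column names, not the authors' actual data handling:

```python
# Hypothetical sketch of the two derived variables described above:
# ASPECTS decline (baseline minus follow-up score) and successful
# recanalization (mTICI >= 2b). Column names and rows are invented.
import pandas as pd

patients = pd.DataFrame({
    "pathway": ["mothership", "drip_and_ship", "drip_and_ship"],
    "aspects_baseline": [9, 9, 8],     # first CT (referring hospital or CSC)
    "aspects_followup": [7, 5, 6],     # CT at 24-48 hours
    "mtici": ["2b", "3", "2a"],        # final angiographic result
})

patients["aspects_decline"] = (
    patients["aspects_baseline"] - patients["aspects_followup"]
)
# Grades 2b, 2c and 3 count as successful recanalization
successful_grades = {"2b", "2c", "3"}
patients["successful_recanalization"] = patients["mtici"].isin(successful_grades)

print(patients[["pathway", "aspects_decline", "successful_recanalization"]])
```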
Logistic key figures were obtained as described previously.7 Briefly, the overall treatment chain ranging from symptom onset to stroke center admission and initiation of mechanical thrombectomy was divided into chronological intervals. For each patient, we reconstructed and geocoded the actual transport routes by time and distance between symptom onset, initial hospital, and the comprehensive stroke center, applying Google’s Distance Matrix Application Programming Interface (Mountain View, California, USA). All calculations were carried out under the assumption that the documented time of image acquisition (referring hospital/comprehensive stroke center) approximates the time of clinical presentation. The time interval between non-invasive imaging (non-contrast CT) at the referring facility and the subsequent CT at the comprehensive stroke center was defined as a time surrogate for the duration of the overall transfer process. The manuscript was prepared according to the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement for observational studies.9 Statistical analysis: All data were stored and processed in Microsoft Office 365 ProPlus Excel (V.2102, Microsoft Corporation, Redmond, Washington, USA). Statistical analyses were performed using GraphPad Prism (GraphPad Prism 9.1, GraphPad Software, San Diego, California, USA) and MedCalc 19.7.2 (MedCalc Software, Ostend, Belgium). Gaussian distribution was tested by the D'Agostino and Pearson omnibus normality test. Standard descriptive statistical measures were used, including median (IQR) for numerical data or absolute and relative frequency distribution for categorial variables. ICC was used as a statistical measure of interobserver agreement for radiological scoring of ASPECTS and is given with 95% CI. Single comparison between two groups was done by χ2 test for categorial variables and by unpaired t test or Mann–Whitney U test for parametric and non-parametric data. The Kruskal–Wallis test was used for non-parametric repeated measures and followed by Dunn’s multiple comparison test. Variables of univariate association (p<0.1) were chosen for entry into multiple linear regression to evaluate the association between ASPECTS decline over time, patient characteristics, and logistic key figures. Stepwise backwards selection procedures were used to fit a model to these variables. A two sided p value <0.05 was predetermined as the threshold for statistical significance.
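The transport routes described above were geocoded by time and distance with Google's Distance Matrix API. A minimal, hypothetical query is sketched below; the API key, addresses, and surrounding tooling are placeholders, as the authors' actual implementation is not specified beyond the API name.

```python
# Hypothetical query to the Google Distance Matrix API to estimate driving
# time and distance between a referring hospital and the stroke center.
# The API key and addresses are placeholders.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
url = "https://maps.googleapis.com/maps/api/distancematrix/json"
params = {
    "origins": "Referring Hospital, Example Town",
    "destinations": "Comprehensive Stroke Center, Example City",
    "mode": "driving",
    "key": API_KEY,
}

response = requests.get(url, params=params, timeout=30)
element = response.json()["rows"][0]["elements"][0]
distance_km = element["distance"]["value"] / 1000.0   # metres -> km
duration_min = element["duration"]["value"] / 60.0    # seconds -> minutes
print(f"Ground transfer: {distance_km:.1f} km, about {duration_min:.0f} min")
```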
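The regression strategy described above (univariate screening at p<0.1, then multiple linear regression with stepwise backwards elimination of non-significant predictors) can be illustrated as follows. This is a hypothetical statsmodels sketch with invented column names; the original analysis was run in the commercial packages named above.

```python
# Hypothetical sketch of multiple linear regression with stepwise backwards
# elimination, as described above. Column names and data are invented.
import pandas as pd
import statsmodels.api as sm

def backward_select(df, outcome, candidates, threshold=0.05):
    """Drop the least significant predictor until all remaining p-values
    fall below the threshold, then return the fitted OLS model."""
    predictors = list(candidates)
    while predictors:
        X = sm.add_constant(df[predictors])
        model = sm.OLS(df[outcome], X).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < threshold:
            return model
        predictors.remove(worst)
    return None

# Example usage with a hypothetical cohort file and column names:
# df = pd.read_csv("cohort.csv")
# final_model = backward_select(
#     df, outcome="aspects_decline",
#     candidates=["interfacility_transfer", "nihss_csc", "mtici_grade",
#                 "mt_duration_min", "onset_to_puncture_min"],
# )
# print(final_model.summary())
```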
Results
Between May 2015 and July 2019, 535 consecutive patients with large vessel occlusion of the anterior circulation were primarily admitted (mothership) or secondarily referred (drip and ship) to our comprehensive stroke center with the intention to be treated by mechanical thrombectomy. The demographic, clinical, and radiological characteristics according to the prehospital pathway (mothership vs drip and ship) are given in table 1. [Table 1: Patient characteristics. Values are number (%) for categorial variables and median (IQR) for continuous variables. *Including combined vessel occlusions. CSC, comprehensive stroke center; DS, drip and ship; ICA, internal carotid artery; IV, intravenous; M1/M2, middle cerebral artery segment; MS, mothership; MT, mechanical thrombectomy; mTICI, modified Treatment in Cerebral Ischemia Scale; NIHSS, National Institutes of Health Stroke Scale.] There were no differences in patient age (mothership 77 (66-83) vs drip and ship 77 (66-82) years, p=0.912) or gender distribution (mothership 47% men vs 42% for drip and ship, p=0.433). Hypertension was the most prevalent comorbid disease and similarly distributed across both groups (mothership 81% vs drip and ship 82%, p=0.659). The rate of atrial fibrillation was significantly higher in transferred patients (mothership 31% vs drip and ship 42%, p=0.014). No differences were observed for the use of antithrombotic (mothership 50% vs drip and ship 50%, p=0.733) or antihypertensive drugs (mothership 71% vs drip and ship 72%, p=0.667). We found significantly more patients with unknown time of symptom onset (mothership 17% vs drip and ship 11%, p=0.025) and higher degrees of stroke severity in the mothership group (NIHSS, mothership 16 (11-20) vs drip and ship 14 (10-14), p=0.016). Both groups showed elevated systolic blood pressure levels at the time of stroke center presentation (mothership 158 (137-179) vs drip and ship 160 (144-225) mm Hg, p=0.125). The rate of intravenous alteplase administration was almost 50% in both groups (mothership 49% vs drip and ship 52%, p=0.508). In 34 patients (11%) after interfacility transfer, eventual mechanical thrombectomy was not performed because of significant infarct progression on arrival. Transport distances and quantitative time metrics are given in table 2. In the drip and ship group, interfacility transfer delayed the initiation of mechanical thrombectomy by 319% (picture to puncture time mothership 57 (47-75) vs drip and ship 182 (144-217) min, p<0.0001). [Table 2: Logistic key figures. *Including all transfer routes. †Picture to puncture time=time interval between initial CT scan and groin puncture at the CSC. CSC, comprehensive stroke center; DS, drip and ship; MS, mothership; RH, referring hospital.] The middle cerebral artery M1 segment (mothership 59% vs drip and ship 62%, p=0.509) and the distal internal carotid artery (mothership 41% vs drip and ship 39%, p=0.61) were the most common sites of vascular occlusion in both groups (table 1). In both groups, a median of two retrieval maneuvers (mothership 2 (1–3) vs drip and ship 2 (1–3), p=0.779) were necessary to achieve successful recanalization (mTICI ≥2b). Although statistically significant, the rate of successful recanalization of the infarct related artery was only slightly higher in transferred patients (mothership 82% vs drip and ship 87%, p=0.001).
Interfacility transfer extended the time interval from symptom onset to recanalization by 136 min (onset to recanalization time mothership 210 (168-271) vs drip and ship 346 (296-416) min, p<0.0001), while the overall duration of thrombectomy procedures (puncture to final recanalization result) did not differ between the two groups (mothership 70 (45-111) vs drip and ship 75 (49-99), p=0.947). No differences were observed regarding the occurrence of tandem lesions represented by cervical internal carotid artery stenosis of ≥50% requiring angioplasty with or without stenting (mothership 16% vs drip and ship 17%, p=0.934). Short term clinical improvement in the mothership group did not significantly differ from the drip and ship group (NIHSS 24 hours post intervention mothership 6 (1–15) vs drip and ship 5 (2-13), p=0.854). There were no intergroup differences with regard to intracranial hemorrhage (mothership 16% vs drip and ship 13%, p=0.425) or death during hospital stay (mothership 19% vs drip and ship 16%, p=0.486). Serial ASPECTS are given in figure 1A and online supplemental table 1. The ICC for interobserver agreement of ASPECTS was good to excellent in both groups (mothership 0.876 (95% CI 0.788 to 0.924); drip and ship 0.958 (0.9381 to 0.971)).10 Interfacility transfer in the drip and ship pathway was associated with a significant drop in median ASPECTS on stroke center admission by 2 points (preinterventional ASPECTS, comprehensive stroke center 7 (6–9) vs external ASPECTS 9 (8–10), p<0.0001). Preinterventional ASPECTS on stroke center admission was significantly lower in drip and ship (7 (6–9)) versus mothership (9 (7–10)) patients (p<0.0001). Infarct extension on follow-up imaging was significantly more favorable in the mothership pathway (mothership 7 (4–8) vs drip and ship 6 (3–7), p=0.001). Consistently, the decline in ASPECTS between baseline (CT at remote hospitals for drip and ship patients and at stroke center presentation for mothership patients) and postinterventional follow-up imaging (figure 1B) was significantly steeper in transferred patients (points lost, mothership 2 (0–3) vs drip and ship 3 (2–6), p<0.0001). (A) Serial Alberta Stroke Program Early CT Score (ASPECTS) of directly admitted (mothership (MS)) and referred patients (drip and ship (DS)) at the time of initial imaging, at admission to the comprehensive stroke center (CSC), and at 24–48 hours after symptom onset. (B) Decline in ASPECTS between baseline and follow-up imaging according to treatment pathway. Data are median (IQR). RH, referring hospital. Multiple linear regression was performed to identify relevant recorded patient characteristics (ie, age, sex, unknown time of symptom onset, baseline ASPECTS, NIHSS at presentation (comprehensive stroke center), blood pressure, heart rate, hypertension, diabetes mellitus, hyperlipidemia, atrial fibrillation, smoking, antithrombotic medication and use of antihypertensive drugs, intravenous thrombolysis, time interval from symptom onset to groin puncture, site of occlusion, stent retrieval maneuvers, angiographic degree of recanalization, duration of thrombectomy, time interval from symptom onset to recanalization, and stenting) and logistic key figures (ie, interfacility transfer, air transport, transfer time, and ground bound transport distance) associated with ASPECTS decline (online supplemental table 2). 
A model covering all variables of univariate association (p<0.1) is given in table 3, revealing that only interfacility transfer (drip and ship), NIHSS at presentation, degree of angiographic recanalization (mTICI), and duration of the thrombectomy procedure were significant predictors of infarct progression (adjusted R²=0.209, p<0.0001). [Table 3: Multiple regression model of ASPECTS decline between baseline and postinterventional follow-up imaging. R²=0.231, adjusted R²=0.209, F=10.501, p<0.0001. ASPECTS, Alberta Stroke Program Early CT Score; CSC, comprehensive stroke center; MT, mechanical thrombectomy; mTICI, modified Treatment in Cerebral Ischemia Scale; NIHSS, National Institutes of Health Stroke Scale.]
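For reference, the adjusted R² reported for the model follows from the unadjusted R² through the standard degrees-of-freedom correction, where n is the number of observations entering the model and p the number of predictors (the exact values used in the final model are not restated here):

```latex
% Standard adjusted R^2 correction (general definition, not study-specific):
\[
R^{2}_{\text{adj}} = 1 - \bigl(1 - R^{2}\bigr)\,\frac{n - 1}{n - p - 1}
\]
```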
Conclusion
Infarct progression and postinterventional infarct extent, as assessed by ASPECTS, varied between the drip and ship and mothership pathways for mechanical thrombectomy, leading to more pronounced infarction in transferred patients. ASPECTS may serve as a radiological measure to monitor benefit or harm of different prehospital pathways and to advocate for the direct transfer to a comprehensive stroke center in regions with similar landscapes and infrastructure.
[ "Statistical analysis" ]
[ "All data were stored and processed in Microsoft Office 365 ProPlus Excel (V.2102, Microsoft Corporation, Redmond, Washington, USA). Statistical analyses were performed using GraphPad Prism (GraphPad Prism 9.1, GraphPad Software, San Diego, California, USA) and MedCalc 19.7.2 (MedCalc Software, Ostend, Belgium). Gaussian distribution was tested by the D'Agostino and Pearson omnibus normality test. Standard descriptive statistical measures were used, including median (IQR) for numerical data or absolute and relative frequency distribution for categorial variables. ICC was used as a statistical measure of interobserver agreement for radiological scoring of ASPECTS and is given with 95% CI. Single comparison between two groups was done by χ2 test for categorial variables and by unpaired t test or Mann–Whitney U test for parametric and non-parametric data. The Kruskal–Wallis test was used for non-parametric repeated measures and followed by Dunn’s multiple comparison test. Variables of univariate association (p<0.1) were chosen for entry into multiple linear regression to evaluate the association between ASPECTS decline over time, patient characteristics, and logistic key figures. Stepwise backwards selection procedures were used to fit a model to these variables. A two sided p value <0.05 was predetermined as the threshold for statistical significance." ]
[ null ]
[ "Introduction", "Methods", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "In many regions of the world, the two most frequent treatment pathways in the management of acute large vessel occlusion strokes are intravenous thrombolysis in the nearest thrombolysis facility (drip and ship concept) prior to mechanical thrombectomy in endovascular capable comprehensive stroke centers or, as the main alternative, direct transfer to a comprehensive stroke center for mechanical thrombectomy (mothership concept).1 The population based benefit or harm of either scenario is region specific and not externally generalizable.2 Although certain stroke networks have evaluated prehospital triage systems based on stroke severity scales, it will likely remain elusive, from a methodological perspective, to study optimal prehospital routing strategies by performing trials where the treatment pathway is randomized in the field and which at the same time will account for region specific factors or by performing such trials separately by region.2 3\n\nFunctional clinical measures are the standard of reference as primary endpoints in randomized stroke trials. However, observational investigation using standardized and widely used radiological measures, such as the Alberta Stroke Program Early CT Score (ASPECTS), may be valuable to indicate benefit or harm in a setting where randomization of different treatment pathways for acute large vessel occlusion stroke remains a serious obstacle or is not feasible across multiple centers, each with distinct region specific features.1 2 4 Recently, ASPECTS has been proposed as an imaging tool to assess the dynamics of infarction during interfacility transfer.5 6 We add to previous studies on ASPECTS by investigating whether the infarct extent before and after mechanical thrombectomy differs between the mothership and drip and ship pathways, and aim to quantify the imaging defined effect size of pre- to postinterventional worsening of stroke over time for both pathways.", "This retrospective observational study was approved by the local ethics committee. The period of observation was from May 2015 to July 2019. A total of 535 consecutive anterior circulation ischemic stroke patients intended to be treated by mechanical thrombectomy at an academic comprehensive stroke center in our regiopolitan area were analyzed to investigate whether preinterventional and early postinterventional follow-up ASPECTS at 24–48 hour after symptom onset differed between the mothership and drip and ship treatment pathways for mechanical thrombectomy (online supplemental figure 1). 
In addition, we compared the course of decline in ASPECTS over time (difference in follow-up ASPECTS to baseline) between the two treatment pathways, and assessed the association of key patient, interventional, and logistic variables with ASPECTS-defined infarct progression.\n\n\n\nThe stroke network of the regiopolitan area surveyed covers a population of ~1.4 million inhabitants and is composed of one academic comprehensive stroke center and 11 associated hospitals within a maximum ground transfer distance of 149 km.7 To ensure guideline compliant neurointerventional and pharmacological stroke treatment in a 24/7 setting,1 the academic comprehensive stroke center provides teleradiological and teleneurological support on request (ie, the assessment of thrombectomy eligibility), coordination of patient transfer from associated hospitals (individual decision making), and, following transfer, subsequent mechanical thrombectomy if indicated.7\n\nAll symptomatic acute ischemic stroke patients with proven occlusion of the distal internal carotid artery, middle cerebral artery M1 segment, and proximal M2 segment were eligible for study entry. Of those, we included patients either directly admitted or transferred from associated referring hospitals for which initial non-invasive imaging by means of non-contrast CT and CT angiography were available for both the comparison with non-contrast CT (Somatom Definition AS, Siemens Healthcare, Erlangen, Germany) at stroke center presentation and at 24–48 hours (postinterventional imaging, cut-off for contrast agent resorption).8 Thrombectomy eligibility was guided using consensus recommendations or current guidelines if available.1\n\nThe following demographic, clinical, radiological, interventional, and logistical data were collected according to the available medical records: age, sex, baseline medication and medical history, time of symptom onset, heart rate and blood pressure at presentation, National Institutes of Health Stroke Scale (NIHSS) score at presentation and at 24 hours, time of non-invasive and angiographic image acquisition, occlusion location, baseline ASPECTS (referring hospital/comprehensive stroke center), ASPECTS after patient transfer and at 24–48 hour, and alteplase administration. Imaging data were stored in the respective picture archiving and communication system and were available via teleradiological access. ASPECTS values were independently evaluated by two of the authors (FC/JH) trained in ASPECTS scoring (http://www.aspectsinstroke.com), both blinded to clinical information. Interobserver agreement was assessed by the intraclass correlation coefficient (ICC). Recanalization success was assessed by the modified Treatment in Cerebral Ischemia Scale (mTICI) and determined by finding a consensus between two examiners with longstanding expertise in vascular neurointervention (AMK/MP). A mTICI score of ≥2 b was defined as successful recanalization.\nLogistic key figures were obtained as described previously.7 Briefly, the overall treatment chain ranging from symptom onset to stroke center admission and initiation of mechanical thrombectomy was divided into chronological intervals. For each patient, we reconstructed and geocoded the actual transport routes by time and distance between symptom onset, initial hospital, and the comprehensive stroke center, applying Google’s Distance Matrix Application Programming Interface (Mountain View, California, USA). 
All calculations were carried out under the assumption that the documented time of image acquisition (referring hospital/comprehensive stroke center) approximates the time of clinical presentation. The time interval between non-invasive imaging (non-contrast CT) at the referring facility and the subsequent CT at the comprehensive stroke center was defined as time surrogate for the duration of the overall transfer process.\nThe manuscript was prepared according to the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement for observational studies.9\n\nStatistical analysis All data were stored and processed in Microsoft Office 365 ProPlus Excel (V.2102, Microsoft Corporation, Redmond, Washington, USA). Statistical analyses were performed using GraphPad Prism (GraphPad Prism 9.1, GraphPad Software, San Diego, California, USA) and MedCalc 19.7.2 (MedCalc Software, Ostend, Belgium). Gaussian distribution was tested by the D'Agostino and Pearson omnibus normality test. Standard descriptive statistical measures were used, including median (IQR) for numerical data or absolute and relative frequency distribution for categorial variables. ICC was used as a statistical measure of interobserver agreement for radiological scoring of ASPECTS and is given with 95% CI. Single comparison between two groups was done by χ2 test for categorial variables and by unpaired t test or Mann–Whitney U test for parametric and non-parametric data. The Kruskal–Wallis test was used for non-parametric repeated measures and followed by Dunn’s multiple comparison test. Variables of univariate association (p<0.1) were chosen for entry into multiple linear regression to evaluate the association between ASPECTS decline over time, patient characteristics, and logistic key figures. Stepwise backwards selection procedures were used to fit a model to these variables. A two sided p value <0.05 was predetermined as the threshold for statistical significance.\nAll data were stored and processed in Microsoft Office 365 ProPlus Excel (V.2102, Microsoft Corporation, Redmond, Washington, USA). Statistical analyses were performed using GraphPad Prism (GraphPad Prism 9.1, GraphPad Software, San Diego, California, USA) and MedCalc 19.7.2 (MedCalc Software, Ostend, Belgium). Gaussian distribution was tested by the D'Agostino and Pearson omnibus normality test. Standard descriptive statistical measures were used, including median (IQR) for numerical data or absolute and relative frequency distribution for categorial variables. ICC was used as a statistical measure of interobserver agreement for radiological scoring of ASPECTS and is given with 95% CI. Single comparison between two groups was done by χ2 test for categorial variables and by unpaired t test or Mann–Whitney U test for parametric and non-parametric data. The Kruskal–Wallis test was used for non-parametric repeated measures and followed by Dunn’s multiple comparison test. Variables of univariate association (p<0.1) were chosen for entry into multiple linear regression to evaluate the association between ASPECTS decline over time, patient characteristics, and logistic key figures. Stepwise backwards selection procedures were used to fit a model to these variables. A two sided p value <0.05 was predetermined as the threshold for statistical significance.", "All data were stored and processed in Microsoft Office 365 ProPlus Excel (V.2102, Microsoft Corporation, Redmond, Washington, USA). 
Statistical analyses were performed using GraphPad Prism (GraphPad Prism 9.1, GraphPad Software, San Diego, California, USA) and MedCalc 19.7.2 (MedCalc Software, Ostend, Belgium). Gaussian distribution was tested by the D'Agostino and Pearson omnibus normality test. Standard descriptive statistical measures were used, including median (IQR) for numerical data or absolute and relative frequency distribution for categorial variables. ICC was used as a statistical measure of interobserver agreement for radiological scoring of ASPECTS and is given with 95% CI. Single comparison between two groups was done by χ2 test for categorial variables and by unpaired t test or Mann–Whitney U test for parametric and non-parametric data. The Kruskal–Wallis test was used for non-parametric repeated measures and followed by Dunn’s multiple comparison test. Variables of univariate association (p<0.1) were chosen for entry into multiple linear regression to evaluate the association between ASPECTS decline over time, patient characteristics, and logistic key figures. Stepwise backwards selection procedures were used to fit a model to these variables. A two sided p value <0.05 was predetermined as the threshold for statistical significance.", "Between May 2015 and July 2019, 535 consecutive patients with large vessel occlusion of the anterior circulation were primarily admitted (mothership) or secondarily referred (drip and ship) to our comprehensive stroke center with the intention to be treated by mechanical thrombectomy. The demographic, clinical, and radiological characteristics according to the prehospital pathway (mothership vs drip and ship) are given in table 1.\nPatient characteristics\nValues are number (%) for categorial variables and median (IQR) for continuous variables.\n*Including combined vessel occlusions.\nCSC, comprehensive stroke center; DS, drip and ship; ICA, internal carotid artery; IV, intravenous; M1/M2, middle cerebral artery segment; MS, mothership; MT, mechanical thrombectomy; mTICI, modified Treatment in Cerebral Ischemia Scale; NIHSS, National Institutes of Health Stroke Scale.\nThere were no differences in patient age (mothership 77 (66-83) vs drip and ship 77 (66-82) years, p=0.912) or gender distribution (mothership 47% men vs 42% for drip and ship, p=0.433). Hypertension was the most prevalent comorbid disease and similarly distributed across both groups (mothership 81% vs drip and ship 82%, p=0.659). The rate of atrial fibrillation was significantly higher in transferred patients (mothership 31% vs drip and ship 42%, p=0.014). No differences were observed for the use of antithrombotic (mothership 50% vs drip and ship 50%, p=0.733) or antihypertensive drugs (mothership 71% vs drip and ship 72%, p=0.667). We found significantly more patients with unknown time of symptom onset (mothership 17% vs drip and ship 11%, p=0.025) and higher degrees of stroke severity in the mothership group (NIHSS, mothership 16 (11-20) vs drip and ship 14 (10-14), p=0.016). Both groups showed elevated systolic blood pressure levels at the time of stroke center presentation (mothership 158 (137-179) vs drip and ship 160 (144-225) mm Hg, p=0.125). The rate of intravenous alteplase administration was almost 50% in both groups (mothership 49% vs drip and ship 52%, p=0.508). In 34 patients (11%) after interfacility transfer, eventual mechanical thrombectomy was not performed because of significant infarct progression on arrival.\nTransport distances and quantitative time metrics are given in table 2. 
In the drip and ship group, interfacility transfer delayed the initiation of mechanical thrombectomy by 319% (picture to puncture time mothership 57 (47-75) vs drip and ship 182 (144-217) min, p<0.0001).\nLogistic key figures\n*Including all transfer routes.\n†Picture to puncture time=time interval between initial CT scan and groin puncture at the CSC.\nCSC, comprehensive stroke center; DS, drip and ship; MS, mothership; RH, referring hospital.\nThe middle cerebral artery M1 segment (mothership 59% vs drip and ship 62%, p=0.509) and the distal internal carotid artery (mothership 41% vs drip and ship 39%, p=0.61) were the most common sites of vascular occlusion in both groups (table 1). In both groups, a median of two retrieval maneuvers (mothership 2 (1–3) vs drip and ship 2 (1–3), p=0.779) were necessary to achieve successful recanalization (mTICI ≥2 b). Although statistically significant, the rate of successful recanalization of the infarct related artery was only slightly higher in transferred patients (mothership 82% vs drip and ship 87%, p=0.001). Interfacility transfer extended the time interval from symptom onset to recanalization by 136 min (onset to recanalization time mothership 210 (168-271) vs drip and ship 346 (296-416) min, p<0.0001), while the overall duration of thrombectomy procedures (puncture to final recanalization result) did not differ between the two groups (mothership 70 (45-111) vs drip and ship 75 (49-99), p=0.947). No differences were observed regarding the occurrence of tandem lesions represented by cervical internal carotid artery stenosis of ≥50% requiring angioplasty with or without stenting (mothership 16% vs drip and ship 17%, p=0.934). Short term clinical improvement in the mothership group did not significantly differ from the drip and ship group (NIHSS 24 hours post intervention mothership 6 (1–15) vs drip and ship 5 (2-13), p=0.854). There were no intergroup differences with regard to intracranial hemorrhage (mothership 16% vs drip and ship 13%, p=0.425) or death during hospital stay (mothership 19% vs drip and ship 16%, p=0.486).\nSerial ASPECTS are given in figure 1A and online supplemental table 1. The ICC for interobserver agreement of ASPECTS was good to excellent in both groups (mothership 0.876 (95% CI 0.788 to 0.924); drip and ship 0.958 (0.9381 to 0.971)).10 Interfacility transfer in the drip and ship pathway was associated with a significant drop in median ASPECTS on stroke center admission by 2 points (preinterventional ASPECTS, comprehensive stroke center 7 (6–9) vs external ASPECTS 9 (8–10), p<0.0001). Preinterventional ASPECTS on stroke center admission was significantly lower in drip and ship (7 (6–9)) versus mothership (9 (7–10)) patients (p<0.0001). Infarct extension on follow-up imaging was significantly more favorable in the mothership pathway (mothership 7 (4–8) vs drip and ship 6 (3–7), p=0.001). Consistently, the decline in ASPECTS between baseline (CT at remote hospitals for drip and ship patients and at stroke center presentation for mothership patients) and postinterventional follow-up imaging (figure 1B) was significantly steeper in transferred patients (points lost, mothership 2 (0–3) vs drip and ship 3 (2–6), p<0.0001).\n(A) Serial Alberta Stroke Program Early CT Score (ASPECTS) of directly admitted (mothership (MS)) and referred patients (drip and ship (DS)) at the time of initial imaging, at admission to the comprehensive stroke center (CSC), and at 24–48 hours after symptom onset. 
(B) Decline in ASPECTS between baseline and follow-up imaging according to treatment pathway. Data are median (IQR). RH, referring hospital.\nMultiple linear regression was performed to identify relevant recorded patient characteristics (ie, age, sex, unknown time of symptom onset, baseline ASPECTS, NIHSS at presentation (comprehensive stroke center), blood pressure, heart rate, hypertension, diabetes mellitus, hyperlipidemia, atrial fibrillation, smoking, antithrombotic medication and use of antihypertensive drugs, intravenous thrombolysis, time interval from symptom onset to groin puncture, site of occlusion, stent retrieval maneuvers, angiographic degree of recanalization, duration of thrombectomy, time interval from symptom onset to recanalization, and stenting) and logistic key figures (ie, interfacility transfer, air transport, transfer time, and ground bound transport distance) associated with ASPECTS decline (online supplemental table 2). A model covering all variables of univariate association (p<0.1) is given in table 3, revealing that only interfacility transfer (drip and ship), NIHSS at presentation, degree of angiographic recanalization (mTICI), and duration of the thrombectomy procedure were significant predictors of infarct progression (adjusted R\n2=0.209, p<0.0001).\nMultiple regression model of ASPECTS decline between baseline and postinterventional follow-up imaging\n\nR\n2=0.231, adjusted R\n2=0.209, F=10.501, p<0.0001\nASPECTS, Alberta Stroke Program Early CT Score; CSC, comprehensive stroke center; MT, mechanical thrombectomy; mTICI, modified Treatment in Cerebral Ischemia Scale; NIHSS, National Institutes of Health Stroke Scale.", "Prehospital triage and interfacility transfer are critical factors that potentially worsen clinical outcome in acute large vessel occlusion stroke.11 12 So far, there are no data on the extent to which different prehospital pathways affect radiological measures of structural damage and the dynamic progression of cerebral infarction before and after mechanical thrombectomy, which can be semiquantitatively assessed through radiological scoring of ASPECTS.4–6 13–15 Our study addressed this issue with the following main results. Preinterventional ASPECTS declined significantly during interfacility transfer, resulting in significantly lower ASPECTS at stroke center presentation and on follow-up imaging in the drip and ship group compared with the mothership group. Infarct progression from baseline to postinterventional imaging was significantly higher in transferred patients. 
In the multivariable model, only interfacility transfer, NIHSS at presentation, degree of angiographic recanalization, and duration of the thrombectomy procedure proved to be independent predictors of structural infarct progression.\nASPECTS is a 10 point non-contrast CT scoring system of early ischemic changes within the middle cerebral artery territory.4 For large vessel occlusion stroke, ASPECTS has been increasingly exposed to internal validation with good estimates of internal consistency (Crohnbach’s alpha=0.859) and varying interobserver reliability (ICC=0.672–0.834).1 16 17 In our study, good to excellent levels of inter-rater agreement could be achieved, further supporting that ASPECTS is a reliable tool to assess the extent of cerebral infarction across different CT scanners, scan acquisition protocols, and time points.5 10\n\nRecently, ASPECTS has been used to investigate the dynamics of infarct progression during interfacility transfer, revealing that one of three patients becomes ineligible for mechanical thrombectomy based on ASPECTS imaging criteria,6 13 and that every 1 point increase in ASPECTS decline per hour correlates with a 23 fold lower probability of good functional outcome (modified Rankin Scale score of 0–2) at 90 days.5 In our study, the ASPECTS decline of 2 points that occurred during interfacility transfer was more severe than observed previously by others (1 point), although transfer times were nearly identical.5 14 This discrepancy in early ASPECTS decline may be best explained by the rate of distal internal carotid artery occlusions which was three times higher in our observation. This highlights that not rigid absolute time windows but rather local cerebral pathophysiology and individual collateral recruitment, which critically depend on precise occlusion location, likely affects the dynamics of how the penumbra is transformed into infarction.15 18–20 Prospective registry data revealed that significant treatment delays in the drip and ship treatment pathway (onset to recanalization time, mothership 202 (160-265) vs drip and ship 312 (255-286) min, p<0.0001) lead to worse clinical outcome (modified Rankin Scale score of 0–1 at 90 days, mothership 47,4% vs drip and ship 38%, p=0.005).21 This is in line with the fact that a significant proportion of large vessel occlusion stroke patients (≥70%) can be assigned to the categories of rapid and intermediate infarct progressors,18 22 in whom the frequency of good outcome (modified Rankin Scale score of 0–2) is highly time dependent and declines from 64.1% for those recanalized within 180 min to 46.1% for those recanalized within 480 min.23 In our study, multimodal imaging selection was applied to identify recanalization responders in the extended or unknown time window,24 25 which consequently dissolved the association of time and imaging defined pre- to postinterventional worsening of stroke. Importantly, interfacility transfer remained the most powerful predictor of structural infarct progression.\nAlthough ASPECTS values following interfacility transfer were significantly lower in our cohort, short term functional outcomes at 24 hours were not different between the two prehospital pathways. 
Each ASPECTS value, by definition, is associated with a spectrum of different infarct volumes for which a strong correlation with NIHSS has been reported (r=0.79, p<0.0001).4 26 However, for small infarct volumes, this correlation does not hold.27 There are various explanations as to why a greater infarct extent may remain neutral with regard to functional outcome, as was observed in our cohort. The clinical endpoint of NIHSS may be biased towards good functional outcome as it does not reflect more subtle neuropsychological deficits.28 Alternatively, or in addition, NIHSS may have underestimated the degree of favorable outcome due to the short follow-up period. Also, the beta error of statistical decision making may have occurred for which this retrospective study may have been at risk because statistical power was not adjustable a priori.\nThere are certain limitations to this study. First, all referring hospitals were heterogeneously located in peri-urban, urban, and rural areas. This reflects specific geographical and organizational characteristics of our region but limits external generalizability. Second, ASPECTS is an established marker of baseline infarct extent but is not established for the quantification of infarct extent on postinterventional follow-up imaging. Third, ASPECTS is unable to detect volumetric infarct progression due to edema in already affected ASPECTS regions, which may have reached substantial levels at the time of follow-up imaging.29\n", "Infarct progression and postinterventional infarct extent, as assessed by ASPECTS, varied between the drip and ship and mothership pathways for mechanical thrombectomy, leading to more pronounced infarction in transferred patients. ASPECTS may serve as a radiological measure to monitor benefit or harm of different prehospital pathways and to advocate for the direct transfer to a comprehensive stroke center in regions with similar landscapes and infrastructure." ]
[ "intro", "methods", null, "results", "discussion", "conclusions" ]
[ "stroke", "CT", "intervention" ]
Introduction: In many regions of the world, the two most frequent treatment pathways in the management of acute large vessel occlusion strokes are intravenous thrombolysis in the nearest thrombolysis facility (drip and ship concept) prior to mechanical thrombectomy in endovascular capable comprehensive stroke centers or, as the main alternative, direct transfer to a comprehensive stroke center for mechanical thrombectomy (mothership concept).1 The population based benefit or harm of either scenario is region specific and not externally generalizable.2 Although certain stroke networks have evaluated prehospital triage systems based on stroke severity scales, it will likely remain elusive, from a methodological perspective, to study optimal prehospital routing strategies by performing trials where the treatment pathway is randomized in the field and which at the same time will account for region specific factors or by performing such trials separately by region.2 3 Functional clinical measures are the standard of reference as primary endpoints in randomized stroke trials. However, observational investigation using standardized and widely used radiological measures, such as the Alberta Stroke Program Early CT Score (ASPECTS), may be valuable to indicate benefit or harm in a setting where randomization of different treatment pathways for acute large vessel occlusion stroke remains a serious obstacle or is not feasible across multiple centers, each with distinct region specific features.1 2 4 Recently, ASPECTS has been proposed as an imaging tool to assess the dynamics of infarction during interfacility transfer.5 6 We add to previous studies on ASPECTS by investigating whether the infarct extent before and after mechanical thrombectomy differs between the mothership and drip and ship pathways, and aim to quantify the imaging defined effect size of pre- to postinterventional worsening of stroke over time for both pathways. Methods: This retrospective observational study was approved by the local ethics committee. The period of observation was from May 2015 to July 2019. A total of 535 consecutive anterior circulation ischemic stroke patients intended to be treated by mechanical thrombectomy at an academic comprehensive stroke center in our regiopolitan area were analyzed to investigate whether preinterventional and early postinterventional follow-up ASPECTS at 24–48 hour after symptom onset differed between the mothership and drip and ship treatment pathways for mechanical thrombectomy (online supplemental figure 1). In addition, we compared the course of decline in ASPECTS over time (difference in follow-up ASPECTS to baseline) between the two treatment pathways, and assessed the association of key patient, interventional, and logistic variables with ASPECTS-defined infarct progression. 
The stroke network of the regiopolitan area surveyed covers a population of ~1.4 million inhabitants and is composed of one academic comprehensive stroke center and 11 associated hospitals within a maximum ground transfer distance of 149 km.7 To ensure guideline compliant neurointerventional and pharmacological stroke treatment in a 24/7 setting,1 the academic comprehensive stroke center provides teleradiological and teleneurological support on request (ie, the assessment of thrombectomy eligibility), coordination of patient transfer from associated hospitals (individual decision making), and, following transfer, subsequent mechanical thrombectomy if indicated.7 All symptomatic acute ischemic stroke patients with proven occlusion of the distal internal carotid artery, middle cerebral artery M1 segment, and proximal M2 segment were eligible for study entry. Of those, we included patients either directly admitted or transferred from associated referring hospitals for which initial non-invasive imaging by means of non-contrast CT and CT angiography were available for both the comparison with non-contrast CT (Somatom Definition AS, Siemens Healthcare, Erlangen, Germany) at stroke center presentation and at 24–48 hours (postinterventional imaging, cut-off for contrast agent resorption).8 Thrombectomy eligibility was guided using consensus recommendations or current guidelines if available.1 The following demographic, clinical, radiological, interventional, and logistical data were collected according to the available medical records: age, sex, baseline medication and medical history, time of symptom onset, heart rate and blood pressure at presentation, National Institutes of Health Stroke Scale (NIHSS) score at presentation and at 24 hours, time of non-invasive and angiographic image acquisition, occlusion location, baseline ASPECTS (referring hospital/comprehensive stroke center), ASPECTS after patient transfer and at 24–48 hour, and alteplase administration. Imaging data were stored in the respective picture archiving and communication system and were available via teleradiological access. ASPECTS values were independently evaluated by two of the authors (FC/JH) trained in ASPECTS scoring (http://www.aspectsinstroke.com), both blinded to clinical information. Interobserver agreement was assessed by the intraclass correlation coefficient (ICC). Recanalization success was assessed by the modified Treatment in Cerebral Ischemia Scale (mTICI) and determined by finding a consensus between two examiners with longstanding expertise in vascular neurointervention (AMK/MP). A mTICI score of ≥2 b was defined as successful recanalization. Logistic key figures were obtained as described previously.7 Briefly, the overall treatment chain ranging from symptom onset to stroke center admission and initiation of mechanical thrombectomy was divided into chronological intervals. For each patient, we reconstructed and geocoded the actual transport routes by time and distance between symptom onset, initial hospital, and the comprehensive stroke center, applying Google’s Distance Matrix Application Programming Interface (Mountain View, California, USA). All calculations were carried out under the assumption that the documented time of image acquisition (referring hospital/comprehensive stroke center) approximates the time of clinical presentation. 
The time interval between non-invasive imaging (non-contrast CT) at the referring facility and the subsequent CT at the comprehensive stroke center was defined as time surrogate for the duration of the overall transfer process. The manuscript was prepared according to the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement for observational studies.9 Statistical analysis All data were stored and processed in Microsoft Office 365 ProPlus Excel (V.2102, Microsoft Corporation, Redmond, Washington, USA). Statistical analyses were performed using GraphPad Prism (GraphPad Prism 9.1, GraphPad Software, San Diego, California, USA) and MedCalc 19.7.2 (MedCalc Software, Ostend, Belgium). Gaussian distribution was tested by the D'Agostino and Pearson omnibus normality test. Standard descriptive statistical measures were used, including median (IQR) for numerical data or absolute and relative frequency distribution for categorial variables. ICC was used as a statistical measure of interobserver agreement for radiological scoring of ASPECTS and is given with 95% CI. Single comparison between two groups was done by χ2 test for categorial variables and by unpaired t test or Mann–Whitney U test for parametric and non-parametric data. The Kruskal–Wallis test was used for non-parametric repeated measures and followed by Dunn’s multiple comparison test. Variables of univariate association (p<0.1) were chosen for entry into multiple linear regression to evaluate the association between ASPECTS decline over time, patient characteristics, and logistic key figures. Stepwise backwards selection procedures were used to fit a model to these variables. A two sided p value <0.05 was predetermined as the threshold for statistical significance. Statistical analysis: All data were stored and processed in Microsoft Office 365 ProPlus Excel (V.2102, Microsoft Corporation, Redmond, Washington, USA). Statistical analyses were performed using GraphPad Prism (GraphPad Prism 9.1, GraphPad Software, San Diego, California, USA) and MedCalc 19.7.2 (MedCalc Software, Ostend, Belgium). 
Gaussian distribution was tested by the D'Agostino and Pearson omnibus normality test. Standard descriptive statistical measures were used, including median (IQR) for numerical data or absolute and relative frequency distribution for categorial variables. ICC was used as a statistical measure of interobserver agreement for radiological scoring of ASPECTS and is given with 95% CI. Single comparison between two groups was done by χ2 test for categorial variables and by unpaired t test or Mann–Whitney U test for parametric and non-parametric data. The Kruskal–Wallis test was used for non-parametric repeated measures and followed by Dunn’s multiple comparison test. Variables of univariate association (p<0.1) were chosen for entry into multiple linear regression to evaluate the association between ASPECTS decline over time, patient characteristics, and logistic key figures. Stepwise backwards selection procedures were used to fit a model to these variables. A two sided p value <0.05 was predetermined as the threshold for statistical significance. Results: Between May 2015 and July 2019, 535 consecutive patients with large vessel occlusion of the anterior circulation were primarily admitted (mothership) or secondarily referred (drip and ship) to our comprehensive stroke center with the intention to be treated by mechanical thrombectomy. The demographic, clinical, and radiological characteristics according to the prehospital pathway (mothership vs drip and ship) are given in table 1. Patient characteristics Values are number (%) for categorial variables and median (IQR) for continuous variables. *Including combined vessel occlusions. CSC, comprehensive stroke center; DS, drip and ship; ICA, internal carotid artery; IV, intravenous; M1/M2, middle cerebral artery segment; MS, mothership; MT, mechanical thrombectomy; mTICI, modified Treatment in Cerebral Ischemia Scale; NIHSS, National Institutes of Health Stroke Scale. There were no differences in patient age (mothership 77 (66-83) vs drip and ship 77 (66-82) years, p=0.912) or gender distribution (mothership 47% men vs 42% for drip and ship, p=0.433). Hypertension was the most prevalent comorbid disease and similarly distributed across both groups (mothership 81% vs drip and ship 82%, p=0.659). The rate of atrial fibrillation was significantly higher in transferred patients (mothership 31% vs drip and ship 42%, p=0.014). No differences were observed for the use of antithrombotic (mothership 50% vs drip and ship 50%, p=0.733) or antihypertensive drugs (mothership 71% vs drip and ship 72%, p=0.667). We found significantly more patients with unknown time of symptom onset (mothership 17% vs drip and ship 11%, p=0.025) and higher degrees of stroke severity in the mothership group (NIHSS, mothership 16 (11-20) vs drip and ship 14 (10-14), p=0.016). Both groups showed elevated systolic blood pressure levels at the time of stroke center presentation (mothership 158 (137-179) vs drip and ship 160 (144-225) mm Hg, p=0.125). The rate of intravenous alteplase administration was almost 50% in both groups (mothership 49% vs drip and ship 52%, p=0.508). In 34 patients (11%) after interfacility transfer, eventual mechanical thrombectomy was not performed because of significant infarct progression on arrival. Transport distances and quantitative time metrics are given in table 2. 
In the drip and ship group, interfacility transfer delayed the initiation of mechanical thrombectomy by 319% (picture to puncture time mothership 57 (47-75) vs drip and ship 182 (144-217) min, p<0.0001). Logistic key figures *Including all transfer routes. †Picture to puncture time=time interval between initial CT scan and groin puncture at the CSC. CSC, comprehensive stroke center; DS, drip and ship; MS, mothership; RH, referring hospital. The middle cerebral artery M1 segment (mothership 59% vs drip and ship 62%, p=0.509) and the distal internal carotid artery (mothership 41% vs drip and ship 39%, p=0.61) were the most common sites of vascular occlusion in both groups (table 1). In both groups, a median of two retrieval maneuvers (mothership 2 (1–3) vs drip and ship 2 (1–3), p=0.779) were necessary to achieve successful recanalization (mTICI ≥2 b). Although statistically significant, the rate of successful recanalization of the infarct related artery was only slightly higher in transferred patients (mothership 82% vs drip and ship 87%, p=0.001). Interfacility transfer extended the time interval from symptom onset to recanalization by 136 min (onset to recanalization time mothership 210 (168-271) vs drip and ship 346 (296-416) min, p<0.0001), while the overall duration of thrombectomy procedures (puncture to final recanalization result) did not differ between the two groups (mothership 70 (45-111) vs drip and ship 75 (49-99), p=0.947). No differences were observed regarding the occurrence of tandem lesions represented by cervical internal carotid artery stenosis of ≥50% requiring angioplasty with or without stenting (mothership 16% vs drip and ship 17%, p=0.934). Short term clinical improvement in the mothership group did not significantly differ from the drip and ship group (NIHSS 24 hours post intervention mothership 6 (1–15) vs drip and ship 5 (2-13), p=0.854). There were no intergroup differences with regard to intracranial hemorrhage (mothership 16% vs drip and ship 13%, p=0.425) or death during hospital stay (mothership 19% vs drip and ship 16%, p=0.486). Serial ASPECTS are given in figure 1A and online supplemental table 1. The ICC for interobserver agreement of ASPECTS was good to excellent in both groups (mothership 0.876 (95% CI 0.788 to 0.924); drip and ship 0.958 (0.9381 to 0.971)).10 Interfacility transfer in the drip and ship pathway was associated with a significant drop in median ASPECTS on stroke center admission by 2 points (preinterventional ASPECTS, comprehensive stroke center 7 (6–9) vs external ASPECTS 9 (8–10), p<0.0001). Preinterventional ASPECTS on stroke center admission was significantly lower in drip and ship (7 (6–9)) versus mothership (9 (7–10)) patients (p<0.0001). Infarct extension on follow-up imaging was significantly more favorable in the mothership pathway (mothership 7 (4–8) vs drip and ship 6 (3–7), p=0.001). Consistently, the decline in ASPECTS between baseline (CT at remote hospitals for drip and ship patients and at stroke center presentation for mothership patients) and postinterventional follow-up imaging (figure 1B) was significantly steeper in transferred patients (points lost, mothership 2 (0–3) vs drip and ship 3 (2–6), p<0.0001). (A) Serial Alberta Stroke Program Early CT Score (ASPECTS) of directly admitted (mothership (MS)) and referred patients (drip and ship (DS)) at the time of initial imaging, at admission to the comprehensive stroke center (CSC), and at 24–48 hours after symptom onset. 
(B) Decline in ASPECTS between baseline and follow-up imaging according to treatment pathway. Data are median (IQR). RH, referring hospital. Multiple linear regression was performed to identify relevant recorded patient characteristics (ie, age, sex, unknown time of symptom onset, baseline ASPECTS, NIHSS at presentation (comprehensive stroke center), blood pressure, heart rate, hypertension, diabetes mellitus, hyperlipidemia, atrial fibrillation, smoking, antithrombotic medication and use of antihypertensive drugs, intravenous thrombolysis, time interval from symptom onset to groin puncture, site of occlusion, stent retrieval maneuvers, angiographic degree of recanalization, duration of thrombectomy, time interval from symptom onset to recanalization, and stenting) and logistic key figures (ie, interfacility transfer, air transport, transfer time, and ground bound transport distance) associated with ASPECTS decline (online supplemental table 2). A model covering all variables of univariate association (p<0.1) is given in table 3, revealing that only interfacility transfer (drip and ship), NIHSS at presentation, degree of angiographic recanalization (mTICI), and duration of the thrombectomy procedure were significant predictors of infarct progression (adjusted R 2=0.209, p<0.0001). Multiple regression model of ASPECTS decline between baseline and postinterventional follow-up imaging R 2=0.231, adjusted R 2=0.209, F=10.501, p<0.0001 ASPECTS, Alberta Stroke Program Early CT Score; CSC, comprehensive stroke center; MT, mechanical thrombectomy; mTICI, modified Treatment in Cerebral Ischemia Scale; NIHSS, National Institutes of Health Stroke Scale. Discussion: Prehospital triage and interfacility transfer are critical factors that potentially worsen clinical outcome in acute large vessel occlusion stroke.11 12 So far, there are no data on the extent to which different prehospital pathways affect radiological measures of structural damage and the dynamic progression of cerebral infarction before and after mechanical thrombectomy, which can be semiquantitatively assessed through radiological scoring of ASPECTS.4–6 13–15 Our study addressed this issue with the following main results. Preinterventional ASPECTS declined significantly during interfacility transfer, resulting in significantly lower ASPECTS at stroke center presentation and on follow-up imaging in the drip and ship group compared with the mothership group. Infarct progression from baseline to postinterventional imaging was significantly higher in transferred patients. In the multivariable model, only interfacility transfer, NIHSS at presentation, degree of angiographic recanalization, and duration of the thrombectomy procedure proved to be independent predictors of structural infarct progression. 
ASPECTS is a 10 point non-contrast CT scoring system of early ischemic changes within the middle cerebral artery territory.4 For large vessel occlusion stroke, ASPECTS has been increasingly exposed to internal validation with good estimates of internal consistency (Crohnbach’s alpha=0.859) and varying interobserver reliability (ICC=0.672–0.834).1 16 17 In our study, good to excellent levels of inter-rater agreement could be achieved, further supporting that ASPECTS is a reliable tool to assess the extent of cerebral infarction across different CT scanners, scan acquisition protocols, and time points.5 10 Recently, ASPECTS has been used to investigate the dynamics of infarct progression during interfacility transfer, revealing that one of three patients becomes ineligible for mechanical thrombectomy based on ASPECTS imaging criteria,6 13 and that every 1 point increase in ASPECTS decline per hour correlates with a 23 fold lower probability of good functional outcome (modified Rankin Scale score of 0–2) at 90 days.5 In our study, the ASPECTS decline of 2 points that occurred during interfacility transfer was more severe than observed previously by others (1 point), although transfer times were nearly identical.5 14 This discrepancy in early ASPECTS decline may be best explained by the rate of distal internal carotid artery occlusions which was three times higher in our observation. This highlights that not rigid absolute time windows but rather local cerebral pathophysiology and individual collateral recruitment, which critically depend on precise occlusion location, likely affects the dynamics of how the penumbra is transformed into infarction.15 18–20 Prospective registry data revealed that significant treatment delays in the drip and ship treatment pathway (onset to recanalization time, mothership 202 (160-265) vs drip and ship 312 (255-286) min, p<0.0001) lead to worse clinical outcome (modified Rankin Scale score of 0–1 at 90 days, mothership 47,4% vs drip and ship 38%, p=0.005).21 This is in line with the fact that a significant proportion of large vessel occlusion stroke patients (≥70%) can be assigned to the categories of rapid and intermediate infarct progressors,18 22 in whom the frequency of good outcome (modified Rankin Scale score of 0–2) is highly time dependent and declines from 64.1% for those recanalized within 180 min to 46.1% for those recanalized within 480 min.23 In our study, multimodal imaging selection was applied to identify recanalization responders in the extended or unknown time window,24 25 which consequently dissolved the association of time and imaging defined pre- to postinterventional worsening of stroke. Importantly, interfacility transfer remained the most powerful predictor of structural infarct progression. Although ASPECTS values following interfacility transfer were significantly lower in our cohort, short term functional outcomes at 24 hours were not different between the two prehospital pathways. Each ASPECTS value, by definition, is associated with a spectrum of different infarct volumes for which a strong correlation with NIHSS has been reported (r=0.79, p<0.0001).4 26 However, for small infarct volumes, this correlation does not hold.27 There are various explanations as to why a greater infarct extent may remain neutral with regard to functional outcome, as was observed in our cohort. 
The clinical endpoint of NIHSS may be biased towards good functional outcome as it does not reflect more subtle neuropsychological deficits.28 Alternatively, or in addition, NIHSS may have underestimated the degree of favorable outcome due to the short follow-up period. Also, the beta error of statistical decision making may have occurred for which this retrospective study may have been at risk because statistical power was not adjustable a priori. There are certain limitations to this study. First, all referring hospitals were heterogeneously located in peri-urban, urban, and rural areas. This reflects specific geographical and organizational characteristics of our region but limits external generalizability. Second, ASPECTS is an established marker of baseline infarct extent but is not established for the quantification of infarct extent on postinterventional follow-up imaging. Third, ASPECTS is unable to detect volumetric infarct progression due to edema in already affected ASPECTS regions, which may have reached substantial levels at the time of follow-up imaging.29 Conclusion: Infarct progression and postinterventional infarct extent, as assessed by ASPECTS, varied between the drip and ship and mothership pathways for mechanical thrombectomy, leading to more pronounced infarction in transferred patients. ASPECTS may serve as a radiological measure to monitor benefit or harm of different prehospital pathways and to advocate for the direct transfer to a comprehensive stroke center in regions with similar landscapes and infrastructure.
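For readers who want to see the kind of model reported in table 3 in code form, the sketch below outlines a multiple linear regression of ASPECTS decline with stepwise backward selection using Python and statsmodels; it is an assumption-laden illustration, not the study's analysis software, and all variable names, the data file, and the removal threshold are placeholders.

```python
# Hedged sketch of backward-elimination linear regression for ASPECTS decline.
# Not the study's actual code; predictor and file names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

def backward_select(df: pd.DataFrame, outcome: str, predictors: list, p_remove: float = 0.05):
    """Iteratively drop the predictor with the largest p-value until all remain below p_remove."""
    selected = list(predictors)
    while selected:
        X = sm.add_constant(df[selected])
        model = sm.OLS(df[outcome], X, missing="drop").fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_remove:
            return model, selected
        selected.remove(worst)
    return None, []

df = pd.read_csv("aspects_cohort.csv")  # hypothetical data export
candidates = ["interfacility_transfer", "nihss_admission", "mtici_grade", "mt_duration_min"]
model, kept = backward_select(df, outcome="aspects_decline", predictors=candidates)
if model is not None:
    print(model.summary())  # coefficients, p-values, adjusted R-squared
```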
Background: Evidence of the consequences of different prehospital pathways before mechanical thrombectomy (MT) in large vessel occlusion stroke is inconclusive. The aim of this study was to investigate the infarct extent and progression before and after MT in directly admitted (mothership) versus transferred (drip and ship) patients using the Alberta Stroke Program Early CT Score (ASPECTS). Methods: ASPECTS of 535 consecutive large vessel occlusion stroke patients eligible for MT between 2015 to 2019 were retrospectively analyzed for differences in the extent of baseline, post-referral, and post-recanalization infarction between the mothership and drip and ship pathways. Time intervals and transport distances of both pathways were analyzed. Multiple linear regression was used to examine the association between infarct progression (baseline to post-recanalization ASPECTS decline), patient characteristics, and logistic key figures. Results: ASPECTS declined during transfer (9 (8-10) vs 7 (6-9), p<0.0001), resulting in lower ASPECTS at stroke center presentation (mothership 9 (7-10) vs drip and ship 7 (6-9), p<0.0001) and on follow-up imaging (mothership 7 (4-8) vs drip and ship 6 (3-7), p=0.001) compared with mothership patients. Infarct progression was significantly higher in transferred patients (points lost, mothership 2 (0-3) vs drip and ship 3 (2-6), p<0.0001). After multivariable adjustment, only interfacility transfer, preinterventional clinical stroke severity, the degree of angiographic recanalization, and the duration of the thrombectomy procedure remained predictors of infarct progression (R 2=0.209, p<0.0001). Conclusions: Infarct progression and postinterventional infarct extent, as assessed by ASPECTS, varied between the drip and ship and mothership pathway, leading to more pronounced infarction in transferred patients. ASPECTS may serve as a radiological measure to monitor the benefit or harm of different prehospital pathways for MT.
Introduction: In many regions of the world, the two most frequent treatment pathways in the management of acute large vessel occlusion strokes are intravenous thrombolysis in the nearest thrombolysis facility (drip and ship concept) prior to mechanical thrombectomy in endovascular capable comprehensive stroke centers or, as the main alternative, direct transfer to a comprehensive stroke center for mechanical thrombectomy (mothership concept).1 The population based benefit or harm of either scenario is region specific and not externally generalizable.2 Although certain stroke networks have evaluated prehospital triage systems based on stroke severity scales, it will likely remain elusive, from a methodological perspective, to study optimal prehospital routing strategies by performing trials where the treatment pathway is randomized in the field and which at the same time will account for region specific factors or by performing such trials separately by region.2 3 Functional clinical measures are the standard of reference as primary endpoints in randomized stroke trials. However, observational investigation using standardized and widely used radiological measures, such as the Alberta Stroke Program Early CT Score (ASPECTS), may be valuable to indicate benefit or harm in a setting where randomization of different treatment pathways for acute large vessel occlusion stroke remains a serious obstacle or is not feasible across multiple centers, each with distinct region specific features.1 2 4 Recently, ASPECTS has been proposed as an imaging tool to assess the dynamics of infarction during interfacility transfer.5 6 We add to previous studies on ASPECTS by investigating whether the infarct extent before and after mechanical thrombectomy differs between the mothership and drip and ship pathways, and aim to quantify the imaging defined effect size of pre- to postinterventional worsening of stroke over time for both pathways. Conclusion: Infarct progression and postinterventional infarct extent, as assessed by ASPECTS, varied between the drip and ship and mothership pathways for mechanical thrombectomy, leading to more pronounced infarction in transferred patients. ASPECTS may serve as a radiological measure to monitor benefit or harm of different prehospital pathways and to advocate for the direct transfer to a comprehensive stroke center in regions with similar landscapes and infrastructure.
Background: Evidence of the consequences of different prehospital pathways before mechanical thrombectomy (MT) in large vessel occlusion stroke is inconclusive. The aim of this study was to investigate the infarct extent and progression before and after MT in directly admitted (mothership) versus transferred (drip and ship) patients using the Alberta Stroke Program Early CT Score (ASPECTS). Methods: ASPECTS of 535 consecutive large vessel occlusion stroke patients eligible for MT between 2015 to 2019 were retrospectively analyzed for differences in the extent of baseline, post-referral, and post-recanalization infarction between the mothership and drip and ship pathways. Time intervals and transport distances of both pathways were analyzed. Multiple linear regression was used to examine the association between infarct progression (baseline to post-recanalization ASPECTS decline), patient characteristics, and logistic key figures. Results: ASPECTS declined during transfer (9 (8-10) vs 7 (6-9), p<0.0001), resulting in lower ASPECTS at stroke center presentation (mothership 9 (7-10) vs drip and ship 7 (6-9), p<0.0001) and on follow-up imaging (mothership 7 (4-8) vs drip and ship 6 (3-7), p=0.001) compared with mothership patients. Infarct progression was significantly higher in transferred patients (points lost, mothership 2 (0-3) vs drip and ship 3 (2-6), p<0.0001). After multivariable adjustment, only interfacility transfer, preinterventional clinical stroke severity, the degree of angiographic recanalization, and the duration of the thrombectomy procedure remained predictors of infarct progression (R 2=0.209, p<0.0001). Conclusions: Infarct progression and postinterventional infarct extent, as assessed by ASPECTS, varied between the drip and ship and mothership pathway, leading to more pronounced infarction in transferred patients. ASPECTS may serve as a radiological measure to monitor the benefit or harm of different prehospital pathways for MT.
4,241
370
[ 237 ]
6
[ "aspects", "stroke", "drip", "ship", "drip ship", "mothership", "time", "vs", "vs drip ship", "vs drip" ]
[ "stroke center admission", "strokes intravenous thrombolysis", "stroke trials observational", "alberta stroke program", "stroke severity scales" ]
[CONTENT] stroke | CT | intervention [SUMMARY]
[CONTENT] stroke | CT | intervention [SUMMARY]
[CONTENT] stroke | CT | intervention [SUMMARY]
[CONTENT] stroke | CT | intervention [SUMMARY]
[CONTENT] stroke | CT | intervention [SUMMARY]
[CONTENT] stroke | CT | intervention [SUMMARY]
[CONTENT] Arterial Occlusive Diseases | Brain Ischemia | Cerebral Infarction | Emergency Medical Services | Humans | Ischemic Stroke | Retrospective Studies | Stroke | Thrombectomy | Treatment Outcome [SUMMARY]
[CONTENT] Arterial Occlusive Diseases | Brain Ischemia | Cerebral Infarction | Emergency Medical Services | Humans | Ischemic Stroke | Retrospective Studies | Stroke | Thrombectomy | Treatment Outcome [SUMMARY]
[CONTENT] Arterial Occlusive Diseases | Brain Ischemia | Cerebral Infarction | Emergency Medical Services | Humans | Ischemic Stroke | Retrospective Studies | Stroke | Thrombectomy | Treatment Outcome [SUMMARY]
[CONTENT] Arterial Occlusive Diseases | Brain Ischemia | Cerebral Infarction | Emergency Medical Services | Humans | Ischemic Stroke | Retrospective Studies | Stroke | Thrombectomy | Treatment Outcome [SUMMARY]
[CONTENT] Arterial Occlusive Diseases | Brain Ischemia | Cerebral Infarction | Emergency Medical Services | Humans | Ischemic Stroke | Retrospective Studies | Stroke | Thrombectomy | Treatment Outcome [SUMMARY]
[CONTENT] Arterial Occlusive Diseases | Brain Ischemia | Cerebral Infarction | Emergency Medical Services | Humans | Ischemic Stroke | Retrospective Studies | Stroke | Thrombectomy | Treatment Outcome [SUMMARY]
[CONTENT] stroke center admission | strokes intravenous thrombolysis | stroke trials observational | alberta stroke program | stroke severity scales [SUMMARY]
[CONTENT] stroke center admission | strokes intravenous thrombolysis | stroke trials observational | alberta stroke program | stroke severity scales [SUMMARY]
[CONTENT] stroke center admission | strokes intravenous thrombolysis | stroke trials observational | alberta stroke program | stroke severity scales [SUMMARY]
[CONTENT] stroke center admission | strokes intravenous thrombolysis | stroke trials observational | alberta stroke program | stroke severity scales [SUMMARY]
[CONTENT] stroke center admission | strokes intravenous thrombolysis | stroke trials observational | alberta stroke program | stroke severity scales [SUMMARY]
[CONTENT] stroke center admission | strokes intravenous thrombolysis | stroke trials observational | alberta stroke program | stroke severity scales [SUMMARY]
[CONTENT] aspects | stroke | drip | ship | drip ship | mothership | time | vs | vs drip ship | vs drip [SUMMARY]
[CONTENT] aspects | stroke | drip | ship | drip ship | mothership | time | vs | vs drip ship | vs drip [SUMMARY]
[CONTENT] aspects | stroke | drip | ship | drip ship | mothership | time | vs | vs drip ship | vs drip [SUMMARY]
[CONTENT] aspects | stroke | drip | ship | drip ship | mothership | time | vs | vs drip ship | vs drip [SUMMARY]
[CONTENT] aspects | stroke | drip | ship | drip ship | mothership | time | vs | vs drip ship | vs drip [SUMMARY]
[CONTENT] aspects | stroke | drip | ship | drip ship | mothership | time | vs | vs drip ship | vs drip [SUMMARY]
[CONTENT] stroke | region | trials | region specific | specific | pathways | performing | randomized | centers | concept [SUMMARY]
[CONTENT] test | stroke | non | variables | statistical | aspects | time | parametric | graphpad | data [SUMMARY]
[CONTENT] vs | vs drip | vs drip ship | drip ship | drip | ship | mothership | stroke | time | table [SUMMARY]
[CONTENT] pathways | infarct | varied drip ship | radiological measure | thrombectomy leading pronounced infarction | thrombectomy leading pronounced | thrombectomy leading | progression postinterventional | radiological measure monitor benefit | radiological measure monitor [SUMMARY]
[CONTENT] aspects | stroke | test | drip ship | drip | ship | mothership | time | vs | statistical [SUMMARY]
[CONTENT] aspects | stroke | test | drip ship | drip | ship | mothership | time | vs | statistical [SUMMARY]
[CONTENT] ||| MT | the Alberta Stroke Program Early CT Score (ASPECTS [SUMMARY]
[CONTENT] 535 | MT | between 2015 to 2019 ||| ||| [SUMMARY]
[CONTENT] 9 | 7 | 6-9 | 9 | 7 | 6-9 | 7 | 4-8) | 6 | 3-7 ||| 2 | 0-3 | 3 ||| [SUMMARY]
[CONTENT] ||| MT [SUMMARY]
[CONTENT] ||| MT | the Alberta Stroke Program Early CT Score (ASPECTS ||| 535 | MT | between 2015 to 2019 ||| ||| ||| 9 | 7 | 6-9 | 9 | 7 | 6-9 | 7 | 4-8) | 6 | 3-7 ||| 2 | 0-3 | 3 ||| ||| ||| MT [SUMMARY]
[CONTENT] ||| MT | the Alberta Stroke Program Early CT Score (ASPECTS ||| 535 | MT | between 2015 to 2019 ||| ||| ||| 9 | 7 | 6-9 | 9 | 7 | 6-9 | 7 | 4-8) | 6 | 3-7 ||| 2 | 0-3 | 3 ||| ||| ||| MT [SUMMARY]
Early Access to Oral Antivirals in High-Risk Outpatients: Good Weapons to Fight COVID-19.
36423123
Molnupiravir and Nirmatrelvir/r (NMV-r) have been proven to reduce severe Coronavirus Disease 2019 (COVID-19) in unvaccinated high-risk individuals. Data regarding their impact in fully vaccinated vulnerable subjects with mild-to-moderate COVID-19 are still limited, particularly in the era of Omicron and sub-variants.
INTRODUCTION
Our retrospective study aimed to compare the safety profile and effectiveness of the two antivirals in all consecutive high-risk outpatients between 11 January and 10 July 2022. A logistic regression model was carried out to assess factors associated with the composite outcome defined as all-cause hospitalization and/or death at 30 days.
METHODS
A total of 719 individuals were included: 554 (77%) received Molnupiravir, whereas 165 (23%) were NMV-r users. Overall, 43 all-cause hospitalizations (5.9%) and 13 (1.8%) deaths were observed at 30 days. A composite outcome occurred in 47 (6.5%) individuals. At multivariate analysis, male sex [OR 3.785; p = 0.0021], age ≥ 75 [OR 2.647; p = 0.0124], moderate illness [OR 16.75; p < 0.001], and treatment discontinuation after medical decision [OR 8.148; p = 0.0123] remained independently associated with the composite outcome.
RESULTS
No differences between the two antivirals were observed. In this real-life setting, the early use of both of the oral antivirals helped limit composite outcome at 30 days among subjects who were at high risk of disease progression.
CONCLUSIONS
[ "Male", "Humans", "Antiviral Agents", "Outpatients", "Retrospective Studies", "COVID-19 Drug Treatment" ]
9695104
1. Introduction
Since its rapid spread began in December 2019, the COVID-19 pandemic has caused over 624 million confirmed cases and over 6.5 million deaths globally as of 23 October 2022 [1]. After its entry, mainly through the respiratory tract, the severe acute respiratory syndrome Coronavirus-2 (SARS-CoV-2) is capable of inducing a vehement inflammatory response, which is considered the hallmark of the infection. In fact, its various structural and non-structural proteins can directly or indirectly stimulate the uncontrolled activation of harmful inflammatory pathways, causing cytokine storm, tissue damage, increased pulmonary edema, acute respiratory distress syndrome (ARDS), and mortality [2]. COVID-19 still represents a threat to healthcare systems and has significant social implications in terms of morbidity, absence due to illness in the workplace, and direct and indirect costs [3]. Numerous therapeutic strategies have been developed to prevent the infection (vaccines and monoclonal antibodies) and to slow down the progression to severe COVID-19 (anti-inflammatory molecules, steroids, heparin, and antivirals) [4]. Moreover, the dominant variants of SARS-CoV-2 are constantly evolving. Data regarding variants of SARS-CoV-2 in Italy are periodically updated by the National Healthcare Institute [5]. As of January 2022, Omicron and its subvariants constituted over 90% of all SARS-CoV-2 infections in Italy. In the current scenario, circulating Omicron variants (BA.2, BA.4, and BA.5) have significantly reduced susceptibility to several monoclonal antibodies (mAbs), including casirivimab–imdevimab, bamlanivimab–etesevimab, and sotrovimab [6,7]. The combination tixagevimab–cilgavimab, initially authorized only for pre-exposure prophylaxis, appears to be still active toward BA.2 and, albeit to a lesser extent, toward BA.1, BA.1.1, and the recent BA.4 and BA.5 [6,7,8]. Importantly, anti-SARS-CoV-2 vaccination still represents the main means of limiting the spread of infection and reducing the risk of worse outcomes. However, the effectiveness of vaccines tends to decrease over time, due, on the one hand, to the ability of SARS-CoV-2 to modify itself [9], and, on the other hand, to the impaired immune system, particularly among fragile and high-risk individuals [10]. At the end of December 2021, the European Medicines Agency (EMA) authorized the emergency use of two antivirals against SARS-CoV-2, Molnupiravir and Nirmatrelvir/r (NMV-r), to prevent severe illness in high-risk individuals who are not hospitalized for COVID-19 and have no need for supplemental oxygen [11]. Molnupiravir is a prodrug of beta-d-N4-hydroxycytidine acting as an oral inhibitor of RNA-dependent RNA polymerase that increases viral RNA mutations, thus impairing the replication of SARS-CoV-2 [12]. NMV-r is an orally administered antiviral agent targeting the SARS-CoV-2 3-chymotrypsin-like cysteine protease enzyme (Mpro), which is essential in the viral replication cycle [13]. Although their efficacy has been demonstrated in randomized trials [14,15], real-life data regarding their impact on fully vaccinated vulnerable subjects with mild-to-moderate COVID-19 are still limited, particularly in the era of Omicron and subvariants. Hence, we aim to assess the safety and effectiveness of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days.
null
null
3. Results
A total of 719 individuals (51.7% female) were included during the study phase. Among them, 554 (77%) received Molnupiravir, whereas 165 (23%) were NMV-r(NVM) users. The baseline characteristics of the patients are summarized in Table 1. The median age was 71 years (interquartile range, IQR, 61–80). All subjects were Caucasian. Compared with NMV-r users, subjects receiving Molnupiravir were older (median age 73 vs. 61, p < 0.0001), had more comorbidities (p = 0.001), suffered mostly from cardiovascular diseases (p < 0.001), chronic renal failure with eGFR ≥ 30 mL/min/1.73 m2 (p = 0.005), and were more likely to take anticoagulants (p < 0.001) and antipsychotics/antidepressants (p = 0.008). Overall, 669 (93%) individuals out of 719 had received complete vaccination: 581 (86.8%) received BNT162b2 (Comirnaty), 73 (11%) Moderna m-RNA vaccine, and 15 (2.2%) AstraZeneca. At least one booster dose was administered in 89.1% of patients (89% Comirnaty and 11% Moderna). A total of 31 subjects received a fourth dose. 3.1. Safety Profile Oral antivirals were safe and well-tolerated. The safety profile, according to the antiviral therapy, is reported in Table 2. During antiviral therapy, 85 (11.8%) individuals experienced at least one adverse event. Compared with Molnupiravir users, those receiving NMV-r were more likely to have a bitter mouth (p < 0.001), dysgeusia (p < 0.001), nausea (p = 0.001), and epigastric burning (p = 0.02). Only three serious adverse events (SAEs) were reported: extensive rash in an 80-year-old man taking Molnupiravir, whilst, among the NMV-r users, reversible bradycardia in a 70-year-old woman and an extensive rash in a 57-year-old man. Treatment discontinuation occurred in 30 individuals (4.1%): 19 (63%) by voluntary decision and 11 (37%) by medical decision. Among the latter, seven subjects developed adverse events, including the three SAEs described above, whereas four individuals, who required hospitalization for pneumonia and oxygen therapy, replaced the oral antiviral treatment with the antiviral Remdesivir. Oral antivirals were safe and well-tolerated. The safety profile, according to the antiviral therapy, is reported in Table 2. During antiviral therapy, 85 (11.8%) individuals experienced at least one adverse event. Compared with Molnupiravir users, those receiving NMV-r were more likely to have a bitter mouth (p < 0.001), dysgeusia (p < 0.001), nausea (p = 0.001), and epigastric burning (p = 0.02). Only three serious adverse events (SAEs) were reported: extensive rash in an 80-year-old man taking Molnupiravir, whilst, among the NMV-r users, reversible bradycardia in a 70-year-old woman and an extensive rash in a 57-year-old man. Treatment discontinuation occurred in 30 individuals (4.1%): 19 (63%) by voluntary decision and 11 (37%) by medical decision. Among the latter, seven subjects developed adverse events, including the three SAEs described above, whereas four individuals, who required hospitalization for pneumonia and oxygen therapy, replaced the oral antiviral treatment with the antiviral Remdesivir. 3.2. Clinical Outcomes at 30 Days Overall, 47 (6.5%) individuals were hospitalized and/or died at 30 days. They formed Group A. Clinical characteristics among the outpatients with (Group A) or without (Group B) composite outcomes are described in Table 3. Subjects in Group A were more likely to have an age ≥ 75 (59.57% vs. 38.24%, p = 0.005), a chronic renal failure (21.28% vs. 
3.2. Clinical Outcomes at 30 Days

Overall, 47 (6.5%) individuals were hospitalized and/or died within 30 days; they formed Group A. The clinical characteristics of outpatients with (Group A) or without (Group B) the composite outcome are described in Table 3. Subjects in Group A were more likely to be aged ≥ 75 years (59.57% vs. 38.24%, p = 0.005), to have chronic renal failure (21.28% vs. 9.25%, p = 0.019), and to have discontinued antiviral treatment after a medical decision (8.51% vs. 1.05%, p = 0.004). No significant differences between the two antivirals were observed in terms of the composite outcome.

A total of 43 (5.9%) hospitalizations at 30 days were reported; 20 of 43 were caused directly by COVID-19 pneumonia requiring oxygen support, whereas the remaining 23 were due to heart failure (4), abdominal pain (1), surgical interventions (2), social reasons (3), stroke (1), and deterioration of general conditions (12), including severe dehydration, senile cachexia, and feeding difficulties. All-cause hospitalization was associated with age ≥ 75 (OR 2.73; 95% CI 1.445–5.172; p = 0.002), moderate illness (OR 14.13; 95% CI 6.258–31.902; p < 0.001), chronic renal failure (OR 2.57; 95% CI 1.178–5.596; p = 0.010), and discontinuation of antiviral treatment after medical decision (OR 6.24; 95% CI 1.593–24.409; p = 0.008) (Supplementary Table S1).

NMV-r users had a shorter median time to a negative test compared with Molnupiravir users (9 vs. 12 days, p < 0.0001). The time from the first positive test to viral clearance was not associated with the composite outcome.

Thirteen deaths were observed: seven due to acute respiratory failure related to COVID-19, three due to advanced malignancies, one due to acute myocardial infarction, and two due to senile cachexia. Among the individuals who died, ten were Molnupiravir users and two received NMV-r, including a 63-year-old woman with breast cancer and multiple metastases and a 73-year-old man with acute myocardial infarction (Supplementary Table S2).
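The time-to-negativity comparison above (median 9 vs. 12 days) relies on the Mann–Whitney U test described in Section 2.5. The short sketch below shows how such a comparison is computed; the day values are simulated and do not reproduce the study data.

# Simulated time-to-negative-swab comparison (illustrative only).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
nmv_r_days = rng.normal(9, 3, 165).round().clip(3, 30)
molnu_days = rng.normal(12, 3, 554).round().clip(3, 30)
stat, p = mannwhitneyu(nmv_r_days, molnu_days, alternative="two-sided")
print(f"median NMV-r = {np.median(nmv_r_days):.0f} d, "
      f"median Molnupiravir = {np.median(molnu_days):.0f} d, p = {p:.2e}")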
3.3. Factors Associated with the Composite Outcome

As shown in Table 4, a logistic regression model was used to assess factors associated with the composite outcome. At univariate analysis, male sex (OR 1.79; 95% CI 0.977–3.292; p = 0.050), age ≥ 75 (OR 2.38; 95% CI 1.302–4.349; p = 0.005), moderate illness at the time of prescription (OR 12.44; 95% CI 5.557–27.84; p < 0.001), treatment discontinuation after medical decision (OR 8.79; 95% CI 2.479–31.221; p = 0.001), and a greater number of comorbidities (OR 1.51; 95% CI 1.084–2.111; p = 0.010) were associated with all-cause hospitalization and/or death at 30 days.

In the multivariate analysis, after adjusting for age and sex, male sex (OR 3.78; 95% CI 1.622–8.836; p = 0.002), age ≥ 75 (OR 2.65; 95% CI 1.245–5.628; p = 0.012), moderate illness (OR 16.75; 95% CI 6.17–45.48; p < 0.001), and treatment discontinuation after medical decision (OR 8.15; 95% CI 1.577–42.114; p = 0.012) remained independently associated with the composite outcome. No differences between the two antiviral regimens were observed in terms of the composite outcome.

3.4. Factors Associated with 30-Day Progression-Free Survival toward the Composite Endpoint

As shown in Table 5, a Cox regression model was used to assess factors associated with 30-day progression-free survival toward the composite endpoint (PFSCE). At univariate analysis, age over 75 years (HR 2.80; 95% CI 1.426–5.821; p = 0.003), moderate illness at the time of prescription (HR 13.83; 95% CI 6.727–28.451; p < 0.001), treatment discontinuation after medical decision (HR 10.19; 95% CI 3.585–28.972; p < 0.001), and a greater number of comorbidities (HR 1.69; 95% CI 1.193–2.389; p = 0.003) were associated with 30-day PFSCE. In the multivariate analysis, after adjusting for age and sex, age ≥ 75 (HR 2.76; 95% CI 1.339–5.672; p = 0.005), moderate illness (HR 10.97; 95% CI 5.184–23.207; p < 0.001), treatment discontinuation after medical decision (HR 9.42; 95% CI 3.157–28.1; p < 0.001), and the number of comorbidities (HR 1.67; 95% CI 1.178–2.379; p = 0.004) remained associated with 30-day PFSCE.

Figure 1 shows the survival curves drawn with the Kaplan–Meier method. In the curve stratified by COVID-19 symptoms, the mean (± standard error) survival of patients with moderate symptoms was shorter than that of patients with mild symptoms (12.17 ± 1.33 vs. 29.30 ± 0.16 days). Patients who discontinued antiviral therapy after a medical decision had a shorter mean survival than those who continued (6.45 ± 1.06 vs. 29.06 ± 0.18 days).
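As a cross-check on the univariate estimate for age ≥ 75, the unadjusted odds ratio can be reconstructed from the group sizes reported in Section 3.2 (47 in Group A, 672 in Group B) and the rounded percentages in Table 3, which correspond approximately to 28/47 and 257/672 subjects aged ≥ 75. Assuming these integer counts, the short calculation below reproduces OR ≈ 2.38 with a Wald 95% CI of about 1.30–4.35, consistent with Table 4.

# Counts back-calculated from rounded percentages; they may differ slightly from the source data.
import math

a, b = 28, 47 - 28      # Group A (composite outcome): age >= 75 / age < 75
c, d = 257, 672 - 257   # Group B (no composite outcome): age >= 75 / age < 75

or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
low = math.exp(math.log(or_hat) - 1.96 * se_log_or)
high = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f}, 95% CI {low:.2f}-{high:.2f}")  # OR = 2.38, 95% CI 1.30-4.35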
5. Conclusions
Early use of oral antivirals may limit hospital admissions, reduce COVID-19-related morbidity and mortality, and lower healthcare costs. It is therefore necessary to increase efforts to establish and strengthen a network between COVID-19 referral hubs and GPs in order to identify individuals who may benefit from these therapies.
[ "2. Materials and Methods", "2.1. Clinical Setting", "2.2. Criteria Inclusion", "2.3. Criteria of Exclusion", "2.4. Endpoints", "2.5. Statistics", "3.1. Safety Profile", "3.2. Clinical Outcomes at 30 Days", "3.3. Factors Associated with the Composite Outcome", "3.4. Factors Associated with 30-Day Progression-Free Survival toward Composite Endpoint" ]
[ " 2.1. Clinical Setting Our hospital, “San Giuseppe Moscati” of Taranto, is a COVID-19 referral hub in Apulia, Southern Italy. We included in this retrospective study all consecutive individuals with confirmed COVID-19 and mild-to-moderate illness who received an oral antiviral prescription in Taranto and its Province between 11 January and 10 July 2022. Sociodemographic, as well as clinical, data were collected in a dedicated database that included comorbidities, daily taken drugs, time from the onset of symptoms to antiviral prescription, date of COVID-19 vaccinations, side effects in the course of treatment, and clinical outcomes at 30 days after the treatment initiation.\nGeneral Practitioners and Special Units for Continuity of Care (USCA) identified high-risk patients with COVID-19 and sent a formal request for eligibility for antiviral therapy. We assessed each patient according to the Italian Medicine Agency (AIFA) criteria [16]. Therefore, antiviral therapy was selected after carefully evaluating drug–drug interactions by consulting a dedicated website https://www.covid19-druginteractions.org (accessed on 3 November 2022) [17]. Before starting treatment, all subjects received an information form on the prescribed antiviral and signed informed consent. In addition, women with childbearing potential were advised to use an effective method of contraception, which necessarily includes a barrier method, for the whole duration of the treatment and at least four days after the end of Molnupiravir treatment. Male partners of women with childbearing potential were required to ensure contraception for the total treatment duration and at least three months after the end of Molnupiravir treatment.\nIn the presence of clinical signs of worsening (persistent fever, onset of breathlessness, reduced oxygen saturation, etc.), patients themselves or their caregivers were asked to contact the GPs who activated the Special Units for Continuity of Care (USCA) or, in severe cases, the Italian emergency telephone number, 118. Alternatively, our team was contacted directly and provided clinical suggestions.\nOur hospital, “San Giuseppe Moscati” of Taranto, is a COVID-19 referral hub in Apulia, Southern Italy. We included in this retrospective study all consecutive individuals with confirmed COVID-19 and mild-to-moderate illness who received an oral antiviral prescription in Taranto and its Province between 11 January and 10 July 2022. Sociodemographic, as well as clinical, data were collected in a dedicated database that included comorbidities, daily taken drugs, time from the onset of symptoms to antiviral prescription, date of COVID-19 vaccinations, side effects in the course of treatment, and clinical outcomes at 30 days after the treatment initiation.\nGeneral Practitioners and Special Units for Continuity of Care (USCA) identified high-risk patients with COVID-19 and sent a formal request for eligibility for antiviral therapy. We assessed each patient according to the Italian Medicine Agency (AIFA) criteria [16]. Therefore, antiviral therapy was selected after carefully evaluating drug–drug interactions by consulting a dedicated website https://www.covid19-druginteractions.org (accessed on 3 November 2022) [17]. Before starting treatment, all subjects received an information form on the prescribed antiviral and signed informed consent. 
In addition, women with childbearing potential were advised to use an effective method of contraception, which necessarily includes a barrier method, for the whole duration of the treatment and at least four days after the end of Molnupiravir treatment. Male partners of women with childbearing potential were required to ensure contraception for the total treatment duration and at least three months after the end of Molnupiravir treatment.\nIn the presence of clinical signs of worsening (persistent fever, onset of breathlessness, reduced oxygen saturation, etc.), patients themselves or their caregivers were asked to contact the GPs who activated the Special Units for Continuity of Care (USCA) or, in severe cases, the Italian emergency telephone number, 118. Alternatively, our team was contacted directly and provided clinical suggestions.\n 2.2. Criteria Inclusion Criteria inclusion of the study were (1) age ≥ 18 years, (2) COVID-19 confirmed by antigenic or molecular swab, (3) subjects who have taken at least one dose of antiviral, (4) onset of symptoms within five days, and (5) at least one of the following comorbidities: obesity (body mass index ≥ 30); diabetes mellitus with organ damage or HBa1c > 7.5%; chronic renal failure; chronic respiratory diseases; severe cardiovascular disease; primary or secondary immunodeficiency; malignancies; neurological disease; and age ≥ 65.\nCriteria inclusion of the study were (1) age ≥ 18 years, (2) COVID-19 confirmed by antigenic or molecular swab, (3) subjects who have taken at least one dose of antiviral, (4) onset of symptoms within five days, and (5) at least one of the following comorbidities: obesity (body mass index ≥ 30); diabetes mellitus with organ damage or HBa1c > 7.5%; chronic renal failure; chronic respiratory diseases; severe cardiovascular disease; primary or secondary immunodeficiency; malignancies; neurological disease; and age ≥ 65.\n 2.3. Criteria of Exclusion The criteria for exclusion were (1) pregnancy, (2) patients who refused to take the therapy, (3) severe illness requiring oxygen support and/or hospitalization due to COVID-19, (4) patients already hospitalized, (5) severe liver impairment, and (6) severe renal impairment (eGFR < 30 mL/min/1.73 m2).\nMild-to-moderate illness was defined as reported in the COVID-19 Treatment Guidelines Panel [18]. Both antivirals were administered for five days according to the dosage recommended by the manufacturers [16].\nThe criteria for exclusion were (1) pregnancy, (2) patients who refused to take the therapy, (3) severe illness requiring oxygen support and/or hospitalization due to COVID-19, (4) patients already hospitalized, (5) severe liver impairment, and (6) severe renal impairment (eGFR < 30 mL/min/1.73 m2).\nMild-to-moderate illness was defined as reported in the COVID-19 Treatment Guidelines Panel [18]. Both antivirals were administered for five days according to the dosage recommended by the manufacturers [16].\n 2.4. Endpoints The first endpoint was to assess of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days, as reported. The second endpoint was to compare their safety profile. Third, we aimed to identify factors associated with the composite outcome.\nThe first endpoint was to assess of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days, as reported. The second endpoint was to compare their safety profile. 
Third, we aimed to identify factors associated with the composite outcome.\n 2.5. Statistics Quantitative data were shown as means and standard deviation (SD) if normally distributed, and as median and interquartile range (IQR) if assumption of normality was not acceptable. Shapiro–Wilk’s statistics was used to test normality. Differences in continuous variables between two groups defined by the primary (composite outcome) or secondary (antiviral therapy) endpoint were compared by using Student’s t-test for normally distributed parameters, or the nonparametric Mann–Whitney U test otherwise. Categorical data were expressed as frequency and percentage, and the Chi-square test or Fischer’s exact test was used to compare the groups. Univariate and multivariable logistic regression models were applied to evaluate the effect of the parameters (age, sex, comorbidities, antiviral therapy, severity of symptoms, time from the first test to the first negative test, side effect of antiviral therapy, number of comorbidities, discontinuation of therapy after medical decision, suspension of therapy by voluntary decision, COVID-19 vaccination, days after last vaccination, and days from the onset of symptoms to prescription of antiviral therapy) on the probability of being hospitalized and/or death at 30 days. Using the p-values criterion (p < 0.25), a stepwise selection was used to estimate the final model. The results of the logistic models are expressed by the Odds Ratios (OR), their 95% Confidence Interval (95% CI), and the p-values of the Wald’s tests. Thirty-day progression-free survival toward the composite endpoint (PFSCE) was defined as the time interval between the date of positivity to COVID-19 and the date of hospitalization and/or death within 30 days. Univariate and multivariable Cox-regression models were performed to define associations between 30-day PFSCE and the other parameters. The proportional hazard assumptions for the Cox model were checked, and the results were expressed as hazard ratios (HR) and their 95% Confidence Interval. The survival curves of someone with the significant parameters were drawn with the Kaplan–Meier method.\nA p-value < 0.05 was considered statistically significant. Statistical analyses were performed by using the SAS/STAT® Statistics version 9.4 (SAS Institute, Cary, NC, USA).\nQuantitative data were shown as means and standard deviation (SD) if normally distributed, and as median and interquartile range (IQR) if assumption of normality was not acceptable. Shapiro–Wilk’s statistics was used to test normality. Differences in continuous variables between two groups defined by the primary (composite outcome) or secondary (antiviral therapy) endpoint were compared by using Student’s t-test for normally distributed parameters, or the nonparametric Mann–Whitney U test otherwise. Categorical data were expressed as frequency and percentage, and the Chi-square test or Fischer’s exact test was used to compare the groups. 
Univariate and multivariable logistic regression models were applied to evaluate the effect of the parameters (age, sex, comorbidities, antiviral therapy, severity of symptoms, time from the first test to the first negative test, side effect of antiviral therapy, number of comorbidities, discontinuation of therapy after medical decision, suspension of therapy by voluntary decision, COVID-19 vaccination, days after last vaccination, and days from the onset of symptoms to prescription of antiviral therapy) on the probability of being hospitalized and/or death at 30 days. Using the p-values criterion (p < 0.25), a stepwise selection was used to estimate the final model. The results of the logistic models are expressed by the Odds Ratios (OR), their 95% Confidence Interval (95% CI), and the p-values of the Wald’s tests. Thirty-day progression-free survival toward the composite endpoint (PFSCE) was defined as the time interval between the date of positivity to COVID-19 and the date of hospitalization and/or death within 30 days. Univariate and multivariable Cox-regression models were performed to define associations between 30-day PFSCE and the other parameters. The proportional hazard assumptions for the Cox model were checked, and the results were expressed as hazard ratios (HR) and their 95% Confidence Interval. The survival curves of someone with the significant parameters were drawn with the Kaplan–Meier method.\nA p-value < 0.05 was considered statistically significant. Statistical analyses were performed by using the SAS/STAT® Statistics version 9.4 (SAS Institute, Cary, NC, USA).", "Our hospital, “San Giuseppe Moscati” of Taranto, is a COVID-19 referral hub in Apulia, Southern Italy. We included in this retrospective study all consecutive individuals with confirmed COVID-19 and mild-to-moderate illness who received an oral antiviral prescription in Taranto and its Province between 11 January and 10 July 2022. Sociodemographic, as well as clinical, data were collected in a dedicated database that included comorbidities, daily taken drugs, time from the onset of symptoms to antiviral prescription, date of COVID-19 vaccinations, side effects in the course of treatment, and clinical outcomes at 30 days after the treatment initiation.\nGeneral Practitioners and Special Units for Continuity of Care (USCA) identified high-risk patients with COVID-19 and sent a formal request for eligibility for antiviral therapy. We assessed each patient according to the Italian Medicine Agency (AIFA) criteria [16]. Therefore, antiviral therapy was selected after carefully evaluating drug–drug interactions by consulting a dedicated website https://www.covid19-druginteractions.org (accessed on 3 November 2022) [17]. Before starting treatment, all subjects received an information form on the prescribed antiviral and signed informed consent. In addition, women with childbearing potential were advised to use an effective method of contraception, which necessarily includes a barrier method, for the whole duration of the treatment and at least four days after the end of Molnupiravir treatment. 
Male partners of women with childbearing potential were required to ensure contraception for the total treatment duration and at least three months after the end of Molnupiravir treatment.\nIn the presence of clinical signs of worsening (persistent fever, onset of breathlessness, reduced oxygen saturation, etc.), patients themselves or their caregivers were asked to contact the GPs who activated the Special Units for Continuity of Care (USCA) or, in severe cases, the Italian emergency telephone number, 118. Alternatively, our team was contacted directly and provided clinical suggestions.", "Criteria inclusion of the study were (1) age ≥ 18 years, (2) COVID-19 confirmed by antigenic or molecular swab, (3) subjects who have taken at least one dose of antiviral, (4) onset of symptoms within five days, and (5) at least one of the following comorbidities: obesity (body mass index ≥ 30); diabetes mellitus with organ damage or HBa1c > 7.5%; chronic renal failure; chronic respiratory diseases; severe cardiovascular disease; primary or secondary immunodeficiency; malignancies; neurological disease; and age ≥ 65.", "The criteria for exclusion were (1) pregnancy, (2) patients who refused to take the therapy, (3) severe illness requiring oxygen support and/or hospitalization due to COVID-19, (4) patients already hospitalized, (5) severe liver impairment, and (6) severe renal impairment (eGFR < 30 mL/min/1.73 m2).\nMild-to-moderate illness was defined as reported in the COVID-19 Treatment Guidelines Panel [18]. Both antivirals were administered for five days according to the dosage recommended by the manufacturers [16].", "The first endpoint was to assess of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days, as reported. The second endpoint was to compare their safety profile. Third, we aimed to identify factors associated with the composite outcome.", "Quantitative data were shown as means and standard deviation (SD) if normally distributed, and as median and interquartile range (IQR) if assumption of normality was not acceptable. Shapiro–Wilk’s statistics was used to test normality. Differences in continuous variables between two groups defined by the primary (composite outcome) or secondary (antiviral therapy) endpoint were compared by using Student’s t-test for normally distributed parameters, or the nonparametric Mann–Whitney U test otherwise. Categorical data were expressed as frequency and percentage, and the Chi-square test or Fischer’s exact test was used to compare the groups. Univariate and multivariable logistic regression models were applied to evaluate the effect of the parameters (age, sex, comorbidities, antiviral therapy, severity of symptoms, time from the first test to the first negative test, side effect of antiviral therapy, number of comorbidities, discontinuation of therapy after medical decision, suspension of therapy by voluntary decision, COVID-19 vaccination, days after last vaccination, and days from the onset of symptoms to prescription of antiviral therapy) on the probability of being hospitalized and/or death at 30 days. Using the p-values criterion (p < 0.25), a stepwise selection was used to estimate the final model. The results of the logistic models are expressed by the Odds Ratios (OR), their 95% Confidence Interval (95% CI), and the p-values of the Wald’s tests. 
Thirty-day progression-free survival toward the composite endpoint (PFSCE) was defined as the time interval between the date of positivity to COVID-19 and the date of hospitalization and/or death within 30 days. Univariate and multivariable Cox-regression models were performed to define associations between 30-day PFSCE and the other parameters. The proportional hazard assumptions for the Cox model were checked, and the results were expressed as hazard ratios (HR) and their 95% Confidence Interval. The survival curves of someone with the significant parameters were drawn with the Kaplan–Meier method.\nA p-value < 0.05 was considered statistically significant. Statistical analyses were performed by using the SAS/STAT® Statistics version 9.4 (SAS Institute, Cary, NC, USA).", "Oral antivirals were safe and well-tolerated. The safety profile, according to the antiviral therapy, is reported in Table 2. During antiviral therapy, 85 (11.8%) individuals experienced at least one adverse event. Compared with Molnupiravir users, those receiving NMV-r were more likely to have a bitter mouth (p < 0.001), dysgeusia (p < 0.001), nausea (p = 0.001), and epigastric burning (p = 0.02).\nOnly three serious adverse events (SAEs) were reported: extensive rash in an 80-year-old man taking Molnupiravir, whilst, among the NMV-r users, reversible bradycardia in a 70-year-old woman and an extensive rash in a 57-year-old man.\nTreatment discontinuation occurred in 30 individuals (4.1%): 19 (63%) by voluntary decision and 11 (37%) by medical decision. Among the latter, seven subjects developed adverse events, including the three SAEs described above, whereas four individuals, who required hospitalization for pneumonia and oxygen therapy, replaced the oral antiviral treatment with the antiviral Remdesivir.", "Overall, 47 (6.5%) individuals were hospitalized and/or died at 30 days. They formed Group A. Clinical characteristics among the outpatients with (Group A) or without (Group B) composite outcomes are described in Table 3. Subjects in Group A were more likely to have an age ≥ 75 (59.57% vs. 38.24%, p = 0.005), a chronic renal failure (21.28% vs. 9.25%, p = 0.019), and discontinuation of antiviral treatment after a medical decision (8.51% vs. 1.05%, p = 0.004). No significant differences between the two antivirals were observed in terms of the composite outcome.\nA total of 43 (5.9%) hospitalizations at 30 days were reported; 20 out of 43 were caused directly by COVID-19 pneumonia, requiring oxygen support, whereas the remaining 23 included 4 to heart failure; 1 to abdominal pain; 2 to surgical interventions; 3 to social reasons; 1 to a stroke; and 12 to the expiry of the general conditions, including severe dehydration, senile cachexia, and feeding difficulties. All-cause hospitalization was associated with age ≥ 75 (OR 2.73; 95% CI 1.445–5.172; p = 0.002), moderate illness (OR 14.13; 95% CI 6.258–31.902; p < 0.001), chronic renal failure (OR 2.57; 95% CI 1.178–5.596; p = 0.010), and a discontinuation in antiviral treatment after medical decision (OR 6.24; 95% CI 1.593–24.409; p = 0.008) (Supplementary Table S1).\nNMR/r users had a shorter median time to negative test compared with Molnupiravir users (9 days vs. 12 days, p < 0.0001). 
The time from the first positive test to viral clearance was not associated with the composite outcome.\nThirteen deaths were observed: seven due to acute respiratory failure related to COVID-19, three because of advanced malignancies, one for an acute myocardial infarction, and two because of senile cachexia. Among the individuals who died, ten were Molnupiravir users, and two received NMV-r, including a 63-year-old woman suffering from breast cancer with multiple metastases and a 73-year-old man with acute myocardial infarction (Supplementary Table S2).", "As shown in Table 4, a logistic regression model was performed to assess factors associated with the composite outcome. At the univariate analysis, male sex (OR 1.79; 95% CI 0.977–3.292; p = 0.050), age ≥ 75 (OR 2.38; 95% CI 1.302–4.349; p = 0.005), moderate illness at time of prescription (OR 12.44; 95% CI 5.557–27.84; p < 0.001), treatment discontinuation after medical decision (OR 8.79; 95% CI 2.479–31.221; p = 0.001), and a greater number of comorbidities (OR 1.51; 95% CI 1.084–2.111; p = 0.010) were associated with all-cause hospitalization and/or death at 30 days.\nIn multivariate analysis, after adjusting for age and sex, male sex (OR 3.78; 95% CI 1.622–8.836; p = 0.002), age ≥75 (OR 2.65; 95% CI 1.245–5.628; p = 0.012), moderate illness (OR 16.75; 95% CI 6.17–45.48; p < 0.001), and treatment discontinuation after medical decision (OR 8.15; 95% CI 1.577–42.114; p = 0.012) remained independently associated with the composite outcome. In terms of the composite outcome, no differences between the two antiviral regimens were observed.", "As shown in Table 5, a Cox regression model was performed to assess factors associated with a 30-day progression-free survival composite endpoint (PFSCE). At univariate analysis, an age over 75 years (HR 2.80; 95%CI 1.426–5.821; p = 0.003), moderate illness at time of prescription (HR 13.83; 94%CI 6.727–28.451; p < 0.001), treatment discontinuation after medical decision (HR 10.19; 95% CI 3.585–28.972; p < 0.001), and a greater number of comorbidities (HR 1.69; 95%CI 1.193–2.389; p = 0.003) were associated with 30-day PFSCE. In the multivariate analysis, after adjusting for age and sex, age ≥ 75 (HR 2.76; 95%CI 1.339–5.672; p = 0.005), moderate illness (HR 10.97; 95%CI 5.184–23.207; p < 0.001), treatment discontinuation after medical decision (HR 9.42; 95%CI 3.157–28.1; p < 0.001), and number of comorbidities (HR 1.67; 95%CI 1.178–2.379; p = 0.004) were associated with 30-day PFSCE.\nFigure 1 shows the survival curves drawn with the Kaplan–Meier method. In the curve stratified by COVID-19 symptoms, the mean (±standard error) of survival of patients with moderate symptoms was less compared with the patients with mild symptoms (12.17 ± 1.33 vs. 29.30 ± 0.16). Patients who discontinued antiviral therapy after a medical decision survived less often than those who continued (6.45 ± 1.06 vs. 29.06 ± 0.18)." ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Clinical Setting", "2.2. Criteria Inclusion", "2.3. Criteria of Exclusion", "2.4. Endpoints", "2.5. Statistics", "3. Results", "3.1. Safety Profile", "3.2. Clinical Outcomes at 30 Days", "3.3. Factors Associated with the Composite Outcome", "3.4. Factors Associated with 30-Day Progression-Free Survival toward Composite Endpoint", "4. Discussion", "5. Conclusions" ]
[ "Since its rapid spread starting in December 2019, as of 23 October 2022, the COVID-19 pandemic caused over 624 million confirmed cases and over 6.5 million deaths have been reported globally [1]. The severe acute respiratory syndrome Coronavirus-2 (SARS-CoV-2) after its entry mainly through the respiratory tract is capable of inducing a vehement inflammatory response, which is considered the hallmark of the infection. In fact, its various structural and non-structural proteins can directly or indirectly stimulate the uncontrolled activation of harmful inflammatory pathways causing cytokine storm, tissue damage, increased pulmonary edema, acute respiratory distress syndrome (ARDS) and mortality [2].\nCOVID-19 still represents a threat to the healthcare systems and has significant social implications in terms of morbidity, absence due to illness in the workplace, and direct and indirect costs [3]. Numerous therapeutic strategies have been developed to prevent the infection (vaccines and monoclonal antibodies) and to slow down the progression to severe COVID-19 (anti-inflammatory molecules, steroids, heparin, and antivirals) [4].\nMoreover, the dominant variants of SARS-CoV-2 are constantly evolving. Data regarding variants of the SARS-CoV-2 in Italy are periodically updated by the National healthcare Institute [5]. As of January 2022, Omicron and subvariants constituted over 90% of all SARS-Cov2 infections in Italy.\nIn the current scenario, circulating Omicron variants (BA.2, BA.4, and BA.5) have significantly reduced susceptibility to several monoclonal antibodies (mAbs), including casirivimab–imdevimab, bamlanivimab–etesevimab, and sotrovimab [6,7]. The combination tixagevimab–cilgavimab, initially authorized only in pre-exposure prophylaxis, appears to be still active toward BA.2 and, albeit to a lesser extent, toward BA.1, BA.1.1, and the recent BA.4 and BA.5 [6,7,8].\nImportantly, anti-SARS-CoV-2 vaccination still represents the main mean for limiting the spread of infection and reducing the risk of worse outcomes. However, the effectiveness of vaccines tends to decrease over time, due, on the one hand, to the ability of SARS-CoV-2 to modify itself [9], and on the other hand, to the impaired immune system, particularly among fragile and high-risk individuals [10]. At the end of December 2021, European Medicines Agency (EMA) authorized the emergency use of two antivirals against SARS-CoV-2, Molnupiravir, and Nirmatrelvir/r (NMV-r) to prevent severe illness in high-risk individuals who are not hospitalized for COVID-19 and with no need for supplemental oxygen [11]. Molnupiravir is a prodrug of beta-d-N4-hydroxyxcytidine acting as an oral inhibitor of RNA-dependent RNA polymerase that can increase the viral RNA mutations, thus impairing the replication of SARS-COV2 [12]. NMV-r is an orally administered antiviral agent targeting the SARS-CoV-2 3-chymotrypsin-like cysteine protease enzyme (Mpro) that is essential in the viral replication cycle [13]. Despite the fact that their efficacy has been demonstrated in randomized trials [14,15], real-life data regarding their impact on fully vaccinated vulnerable subjects with mild-to-moderate COVID-19 are still limited, particularly in the era of Omicron and subvariants. Hence, we aim to assess the safety and effectiveness of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days.", " 2.1. 
Clinical Setting Our hospital, “San Giuseppe Moscati” of Taranto, is a COVID-19 referral hub in Apulia, Southern Italy. We included in this retrospective study all consecutive individuals with confirmed COVID-19 and mild-to-moderate illness who received an oral antiviral prescription in Taranto and its Province between 11 January and 10 July 2022. Sociodemographic, as well as clinical, data were collected in a dedicated database that included comorbidities, daily taken drugs, time from the onset of symptoms to antiviral prescription, date of COVID-19 vaccinations, side effects in the course of treatment, and clinical outcomes at 30 days after the treatment initiation.\nGeneral Practitioners and Special Units for Continuity of Care (USCA) identified high-risk patients with COVID-19 and sent a formal request for eligibility for antiviral therapy. We assessed each patient according to the Italian Medicine Agency (AIFA) criteria [16]. Therefore, antiviral therapy was selected after carefully evaluating drug–drug interactions by consulting a dedicated website https://www.covid19-druginteractions.org (accessed on 3 November 2022) [17]. Before starting treatment, all subjects received an information form on the prescribed antiviral and signed informed consent. In addition, women with childbearing potential were advised to use an effective method of contraception, which necessarily includes a barrier method, for the whole duration of the treatment and at least four days after the end of Molnupiravir treatment. Male partners of women with childbearing potential were required to ensure contraception for the total treatment duration and at least three months after the end of Molnupiravir treatment.\nIn the presence of clinical signs of worsening (persistent fever, onset of breathlessness, reduced oxygen saturation, etc.), patients themselves or their caregivers were asked to contact the GPs who activated the Special Units for Continuity of Care (USCA) or, in severe cases, the Italian emergency telephone number, 118. Alternatively, our team was contacted directly and provided clinical suggestions.\nOur hospital, “San Giuseppe Moscati” of Taranto, is a COVID-19 referral hub in Apulia, Southern Italy. We included in this retrospective study all consecutive individuals with confirmed COVID-19 and mild-to-moderate illness who received an oral antiviral prescription in Taranto and its Province between 11 January and 10 July 2022. Sociodemographic, as well as clinical, data were collected in a dedicated database that included comorbidities, daily taken drugs, time from the onset of symptoms to antiviral prescription, date of COVID-19 vaccinations, side effects in the course of treatment, and clinical outcomes at 30 days after the treatment initiation.\nGeneral Practitioners and Special Units for Continuity of Care (USCA) identified high-risk patients with COVID-19 and sent a formal request for eligibility for antiviral therapy. We assessed each patient according to the Italian Medicine Agency (AIFA) criteria [16]. Therefore, antiviral therapy was selected after carefully evaluating drug–drug interactions by consulting a dedicated website https://www.covid19-druginteractions.org (accessed on 3 November 2022) [17]. Before starting treatment, all subjects received an information form on the prescribed antiviral and signed informed consent. 
In addition, women with childbearing potential were advised to use an effective method of contraception, which necessarily includes a barrier method, for the whole duration of the treatment and at least four days after the end of Molnupiravir treatment. Male partners of women with childbearing potential were required to ensure contraception for the total treatment duration and at least three months after the end of Molnupiravir treatment.\nIn the presence of clinical signs of worsening (persistent fever, onset of breathlessness, reduced oxygen saturation, etc.), patients themselves or their caregivers were asked to contact the GPs who activated the Special Units for Continuity of Care (USCA) or, in severe cases, the Italian emergency telephone number, 118. Alternatively, our team was contacted directly and provided clinical suggestions.\n 2.2. Criteria Inclusion Criteria inclusion of the study were (1) age ≥ 18 years, (2) COVID-19 confirmed by antigenic or molecular swab, (3) subjects who have taken at least one dose of antiviral, (4) onset of symptoms within five days, and (5) at least one of the following comorbidities: obesity (body mass index ≥ 30); diabetes mellitus with organ damage or HBa1c > 7.5%; chronic renal failure; chronic respiratory diseases; severe cardiovascular disease; primary or secondary immunodeficiency; malignancies; neurological disease; and age ≥ 65.\nCriteria inclusion of the study were (1) age ≥ 18 years, (2) COVID-19 confirmed by antigenic or molecular swab, (3) subjects who have taken at least one dose of antiviral, (4) onset of symptoms within five days, and (5) at least one of the following comorbidities: obesity (body mass index ≥ 30); diabetes mellitus with organ damage or HBa1c > 7.5%; chronic renal failure; chronic respiratory diseases; severe cardiovascular disease; primary or secondary immunodeficiency; malignancies; neurological disease; and age ≥ 65.\n 2.3. Criteria of Exclusion The criteria for exclusion were (1) pregnancy, (2) patients who refused to take the therapy, (3) severe illness requiring oxygen support and/or hospitalization due to COVID-19, (4) patients already hospitalized, (5) severe liver impairment, and (6) severe renal impairment (eGFR < 30 mL/min/1.73 m2).\nMild-to-moderate illness was defined as reported in the COVID-19 Treatment Guidelines Panel [18]. Both antivirals were administered for five days according to the dosage recommended by the manufacturers [16].\nThe criteria for exclusion were (1) pregnancy, (2) patients who refused to take the therapy, (3) severe illness requiring oxygen support and/or hospitalization due to COVID-19, (4) patients already hospitalized, (5) severe liver impairment, and (6) severe renal impairment (eGFR < 30 mL/min/1.73 m2).\nMild-to-moderate illness was defined as reported in the COVID-19 Treatment Guidelines Panel [18]. Both antivirals were administered for five days according to the dosage recommended by the manufacturers [16].\n 2.4. Endpoints The first endpoint was to assess of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days, as reported. The second endpoint was to compare their safety profile. Third, we aimed to identify factors associated with the composite outcome.\nThe first endpoint was to assess of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days, as reported. The second endpoint was to compare their safety profile. 
Third, we aimed to identify factors associated with the composite outcome.\n 2.5. Statistics Quantitative data were shown as means and standard deviation (SD) if normally distributed, and as median and interquartile range (IQR) if assumption of normality was not acceptable. Shapiro–Wilk’s statistics was used to test normality. Differences in continuous variables between two groups defined by the primary (composite outcome) or secondary (antiviral therapy) endpoint were compared by using Student’s t-test for normally distributed parameters, or the nonparametric Mann–Whitney U test otherwise. Categorical data were expressed as frequency and percentage, and the Chi-square test or Fischer’s exact test was used to compare the groups. Univariate and multivariable logistic regression models were applied to evaluate the effect of the parameters (age, sex, comorbidities, antiviral therapy, severity of symptoms, time from the first test to the first negative test, side effect of antiviral therapy, number of comorbidities, discontinuation of therapy after medical decision, suspension of therapy by voluntary decision, COVID-19 vaccination, days after last vaccination, and days from the onset of symptoms to prescription of antiviral therapy) on the probability of being hospitalized and/or death at 30 days. Using the p-values criterion (p < 0.25), a stepwise selection was used to estimate the final model. The results of the logistic models are expressed by the Odds Ratios (OR), their 95% Confidence Interval (95% CI), and the p-values of the Wald’s tests. Thirty-day progression-free survival toward the composite endpoint (PFSCE) was defined as the time interval between the date of positivity to COVID-19 and the date of hospitalization and/or death within 30 days. Univariate and multivariable Cox-regression models were performed to define associations between 30-day PFSCE and the other parameters. The proportional hazard assumptions for the Cox model were checked, and the results were expressed as hazard ratios (HR) and their 95% Confidence Interval. The survival curves of someone with the significant parameters were drawn with the Kaplan–Meier method.\nA p-value < 0.05 was considered statistically significant. Statistical analyses were performed by using the SAS/STAT® Statistics version 9.4 (SAS Institute, Cary, NC, USA).\nQuantitative data were shown as means and standard deviation (SD) if normally distributed, and as median and interquartile range (IQR) if assumption of normality was not acceptable. Shapiro–Wilk’s statistics was used to test normality. Differences in continuous variables between two groups defined by the primary (composite outcome) or secondary (antiviral therapy) endpoint were compared by using Student’s t-test for normally distributed parameters, or the nonparametric Mann–Whitney U test otherwise. Categorical data were expressed as frequency and percentage, and the Chi-square test or Fischer’s exact test was used to compare the groups. 
Univariate and multivariable logistic regression models were applied to evaluate the effect of the parameters (age, sex, comorbidities, antiviral therapy, severity of symptoms, time from the first test to the first negative test, side effect of antiviral therapy, number of comorbidities, discontinuation of therapy after medical decision, suspension of therapy by voluntary decision, COVID-19 vaccination, days after last vaccination, and days from the onset of symptoms to prescription of antiviral therapy) on the probability of being hospitalized and/or death at 30 days. Using the p-values criterion (p < 0.25), a stepwise selection was used to estimate the final model. The results of the logistic models are expressed by the Odds Ratios (OR), their 95% Confidence Interval (95% CI), and the p-values of the Wald’s tests. Thirty-day progression-free survival toward the composite endpoint (PFSCE) was defined as the time interval between the date of positivity to COVID-19 and the date of hospitalization and/or death within 30 days. Univariate and multivariable Cox-regression models were performed to define associations between 30-day PFSCE and the other parameters. The proportional hazard assumptions for the Cox model were checked, and the results were expressed as hazard ratios (HR) and their 95% Confidence Interval. The survival curves of someone with the significant parameters were drawn with the Kaplan–Meier method.\nA p-value < 0.05 was considered statistically significant. Statistical analyses were performed by using the SAS/STAT® Statistics version 9.4 (SAS Institute, Cary, NC, USA).", "Our hospital, “San Giuseppe Moscati” of Taranto, is a COVID-19 referral hub in Apulia, Southern Italy. We included in this retrospective study all consecutive individuals with confirmed COVID-19 and mild-to-moderate illness who received an oral antiviral prescription in Taranto and its Province between 11 January and 10 July 2022. Sociodemographic, as well as clinical, data were collected in a dedicated database that included comorbidities, daily taken drugs, time from the onset of symptoms to antiviral prescription, date of COVID-19 vaccinations, side effects in the course of treatment, and clinical outcomes at 30 days after the treatment initiation.\nGeneral Practitioners and Special Units for Continuity of Care (USCA) identified high-risk patients with COVID-19 and sent a formal request for eligibility for antiviral therapy. We assessed each patient according to the Italian Medicine Agency (AIFA) criteria [16]. Therefore, antiviral therapy was selected after carefully evaluating drug–drug interactions by consulting a dedicated website https://www.covid19-druginteractions.org (accessed on 3 November 2022) [17]. Before starting treatment, all subjects received an information form on the prescribed antiviral and signed informed consent. In addition, women with childbearing potential were advised to use an effective method of contraception, which necessarily includes a barrier method, for the whole duration of the treatment and at least four days after the end of Molnupiravir treatment. 
Male partners of women with childbearing potential were required to ensure contraception for the total treatment duration and at least three months after the end of Molnupiravir treatment.\nIn the presence of clinical signs of worsening (persistent fever, onset of breathlessness, reduced oxygen saturation, etc.), patients themselves or their caregivers were asked to contact the GPs who activated the Special Units for Continuity of Care (USCA) or, in severe cases, the Italian emergency telephone number, 118. Alternatively, our team was contacted directly and provided clinical suggestions.", "Criteria inclusion of the study were (1) age ≥ 18 years, (2) COVID-19 confirmed by antigenic or molecular swab, (3) subjects who have taken at least one dose of antiviral, (4) onset of symptoms within five days, and (5) at least one of the following comorbidities: obesity (body mass index ≥ 30); diabetes mellitus with organ damage or HBa1c > 7.5%; chronic renal failure; chronic respiratory diseases; severe cardiovascular disease; primary or secondary immunodeficiency; malignancies; neurological disease; and age ≥ 65.", "The criteria for exclusion were (1) pregnancy, (2) patients who refused to take the therapy, (3) severe illness requiring oxygen support and/or hospitalization due to COVID-19, (4) patients already hospitalized, (5) severe liver impairment, and (6) severe renal impairment (eGFR < 30 mL/min/1.73 m2).\nMild-to-moderate illness was defined as reported in the COVID-19 Treatment Guidelines Panel [18]. Both antivirals were administered for five days according to the dosage recommended by the manufacturers [16].", "The first endpoint was to assess of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days, as reported. The second endpoint was to compare their safety profile. Third, we aimed to identify factors associated with the composite outcome.", "Quantitative data were shown as means and standard deviation (SD) if normally distributed, and as median and interquartile range (IQR) if assumption of normality was not acceptable. Shapiro–Wilk’s statistics was used to test normality. Differences in continuous variables between two groups defined by the primary (composite outcome) or secondary (antiviral therapy) endpoint were compared by using Student’s t-test for normally distributed parameters, or the nonparametric Mann–Whitney U test otherwise. Categorical data were expressed as frequency and percentage, and the Chi-square test or Fischer’s exact test was used to compare the groups. Univariate and multivariable logistic regression models were applied to evaluate the effect of the parameters (age, sex, comorbidities, antiviral therapy, severity of symptoms, time from the first test to the first negative test, side effect of antiviral therapy, number of comorbidities, discontinuation of therapy after medical decision, suspension of therapy by voluntary decision, COVID-19 vaccination, days after last vaccination, and days from the onset of symptoms to prescription of antiviral therapy) on the probability of being hospitalized and/or death at 30 days. Using the p-values criterion (p < 0.25), a stepwise selection was used to estimate the final model. The results of the logistic models are expressed by the Odds Ratios (OR), their 95% Confidence Interval (95% CI), and the p-values of the Wald’s tests. 
Thirty-day progression-free survival toward the composite endpoint (PFSCE) was defined as the time interval between the date of positivity to COVID-19 and the date of hospitalization and/or death within 30 days. Univariate and multivariable Cox regression models were used to assess associations between 30-day PFSCE and the other parameters. The proportional hazards assumption for the Cox models was checked, and the results were expressed as hazard ratios (HR) with their 95% Confidence Intervals. Survival curves for the significant parameters were drawn with the Kaplan–Meier method.\nA p-value < 0.05 was considered statistically significant. Statistical analyses were performed using SAS/STAT® version 9.4 (SAS Institute, Cary, NC, USA).", "A total of 719 individuals (51.7% female) were included during the study period. Among them, 554 (77%) received Molnupiravir, whereas 165 (23%) received NMV-r. The baseline characteristics of the patients are summarized in Table 1. The median age was 71 years (interquartile range, IQR, 61–80). All subjects were Caucasian.\nCompared with NMV-r users, subjects receiving Molnupiravir were older (median age 73 vs. 61, p < 0.0001), had more comorbidities (p = 0.001), more frequently suffered from cardiovascular diseases (p < 0.001) and chronic renal failure with eGFR ≥ 30 mL/min/1.73 m2 (p = 0.005), and were more likely to take anticoagulants (p < 0.001) and antipsychotics/antidepressants (p = 0.008).\nOverall, 669 (93%) of the 719 individuals had received complete vaccination: 581 (86.8%) received BNT162b2 (Comirnaty), 73 (11%) the Moderna mRNA vaccine, and 15 (2.2%) AstraZeneca. At least one booster dose was administered in 89.1% of patients (89% Comirnaty and 11% Moderna). A total of 31 subjects received a fourth dose.\n 3.1. Safety Profile Oral antivirals were safe and well tolerated. The safety profile, according to the antiviral therapy, is reported in Table 2. During antiviral therapy, 85 (11.8%) individuals experienced at least one adverse event. Compared with Molnupiravir users, those receiving NMV-r were more likely to report a bitter mouth (p < 0.001), dysgeusia (p < 0.001), nausea (p = 0.001), and epigastric burning (p = 0.02).\nOnly three serious adverse events (SAEs) were reported: an extensive rash in an 80-year-old man taking Molnupiravir and, among the NMV-r users, reversible bradycardia in a 70-year-old woman and an extensive rash in a 57-year-old man.\nTreatment discontinuation occurred in 30 individuals (4.1%): 19 (63%) by voluntary decision and 11 (37%) by medical decision. Among the latter, seven subjects developed adverse events, including the three SAEs described above, whereas four individuals, who required hospitalization for pneumonia and oxygen therapy, switched from the oral antiviral to Remdesivir.\n 3.2. Clinical Outcomes at 30 Days Overall, 47 (6.5%) individuals were hospitalized and/or died at 30 days; they formed Group A. Clinical characteristics of the outpatients with (Group A) or without (Group B) the composite outcome are described in Table 3. Subjects in Group A were more likely to be aged ≥ 75 (59.57% vs. 38.24%, p = 0.005), to have chronic renal failure (21.28% vs. 9.25%, p = 0.019), and to have discontinued antiviral treatment after a medical decision (8.51% vs. 1.05%, p = 0.004). No significant differences between the two antivirals were observed in terms of the composite outcome.\nA total of 43 (5.9%) hospitalizations at 30 days were reported; 20 of the 43 were caused directly by COVID-19 pneumonia requiring oxygen support, whereas the remaining 23 were attributable to heart failure (4), abdominal pain (1), surgical interventions (2), social reasons (3), stroke (1), and deterioration of the general condition (12), including severe dehydration, senile cachexia, and feeding difficulties. All-cause hospitalization was associated with age ≥ 75 (OR 2.73; 95% CI 1.445–5.172; p = 0.002), moderate illness (OR 14.13; 95% CI 6.258–31.902; p < 0.001), chronic renal failure (OR 2.57; 95% CI 1.178–5.596; p = 0.010), and discontinuation of antiviral treatment after a medical decision (OR 6.24; 95% CI 1.593–24.409; p = 0.008) (Supplementary Table S1).\nNMV-r users had a shorter median time to a negative test than Molnupiravir users (9 days vs. 12 days, p < 0.0001). The time from the first positive test to viral clearance was not associated with the composite outcome.\nThirteen deaths were observed: seven due to acute respiratory failure related to COVID-19, three due to advanced malignancies, one due to an acute myocardial infarction, and two due to senile cachexia. Among the individuals who died, ten were Molnupiravir users and two received NMV-r, including a 63-year-old woman suffering from breast cancer with multiple metastases and a 73-year-old man with an acute myocardial infarction (Supplementary Table S2).\n 3.3. Factors Associated with the Composite Outcome As shown in Table 4, a logistic regression model was used to assess factors associated with the composite outcome. At univariate analysis, male sex (OR 1.79; 95% CI 0.977–3.292; p = 0.050), age ≥ 75 (OR 2.38; 95% CI 1.302–4.349; p = 0.005), moderate illness at the time of prescription (OR 12.44; 95% CI 5.557–27.84; p < 0.001), treatment discontinuation after a medical decision (OR 8.79; 95% CI 2.479–31.221; p = 0.001), and a greater number of comorbidities (OR 1.51; 95% CI 1.084–2.111; p = 0.010) were associated with all-cause hospitalization and/or death at 30 days.\nIn the multivariate analysis, after adjusting for age and sex, male sex (OR 3.78; 95% CI 1.622–8.836; p = 0.002), age ≥ 75 (OR 2.65; 95% CI 1.245–5.628; p = 0.012), moderate illness (OR 16.75; 95% CI 6.17–45.48; p < 0.001), and treatment discontinuation after a medical decision (OR 8.15; 95% CI 1.577–42.114; p = 0.012) remained independently associated with the composite outcome. No differences between the two antiviral regimens were observed in terms of the composite outcome.\n 3.4. Factors Associated with 30-Day Progression-Free Survival toward the Composite Endpoint As shown in Table 5, a Cox regression model was used to assess factors associated with 30-day progression-free survival toward the composite endpoint (PFSCE). At univariate analysis, age over 75 years (HR 2.80; 95% CI 1.426–5.821; p = 0.003), moderate illness at the time of prescription (HR 13.83; 95% CI 6.727–28.451; p < 0.001), treatment discontinuation after a medical decision (HR 10.19; 95% CI 3.585–28.972; p < 0.001), and a greater number of comorbidities (HR 1.69; 95% CI 1.193–2.389; p = 0.003) were associated with 30-day PFSCE. In the multivariate analysis, after adjusting for age and sex, age ≥ 75 (HR 2.76; 95% CI 1.339–5.672; p = 0.005), moderate illness (HR 10.97; 95% CI 5.184–23.207; p < 0.001), treatment discontinuation after a medical decision (HR 9.42; 95% CI 3.157–28.1; p < 0.001), and the number of comorbidities (HR 1.67; 95% CI 1.178–2.379; p = 0.004) were associated with 30-day PFSCE.\nFigure 1 shows the survival curves drawn with the Kaplan–Meier method. In the curves stratified by COVID-19 symptoms, the mean (± standard error) survival time of patients with moderate symptoms was shorter than that of patients with mild symptoms (12.17 ± 1.33 vs. 29.30 ± 0.16 days). Patients who discontinued antiviral therapy after a medical decision had a shorter mean survival time than those who continued treatment (6.45 ± 1.06 vs. 29.06 ± 0.18 days).
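To illustrate how the 30-day composite endpoint and the Kaplan–Meier curves described above could be reproduced, a minimal sketch is given below. It is not the authors' analysis code (the original analysis was performed in SAS/STAT); the `lifelines` package and the column names (`positivity_date`, `event_date`, `symptom_severity`) are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' SAS code): derive the 30-day PFSCE variables
# and plot Kaplan-Meier curves stratified by symptom severity.
# All column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter

def add_pfsce_columns(df: pd.DataFrame, horizon_days: int = 30) -> pd.DataFrame:
    """Compute duration/event indicators for the 30-day composite endpoint.

    Assumed columns:
      positivity_date : date of the first positive SARS-CoV-2 test
      event_date      : date of hospitalization or death (NaT if neither occurred)
    """
    df = df.copy()
    days_to_event = (df["event_date"] - df["positivity_date"]).dt.days
    # An event is counted only if hospitalization/death occurred within the horizon.
    df["event"] = days_to_event.notna() & (days_to_event <= horizon_days)
    # Follow-up is censored at 30 days for subjects without an event.
    df["duration"] = days_to_event.where(df["event"], horizon_days)
    return df

def plot_km_by_severity(df: pd.DataFrame):
    """Overlay Kaplan-Meier survival curves for mild vs. moderate illness."""
    kmf = KaplanMeierFitter()
    ax = None
    for severity, group in df.groupby("symptom_severity"):
        kmf.fit(group["duration"], event_observed=group["event"], label=str(severity))
        ax = kmf.plot_survival_function(ax=ax)
    return ax
```

Censoring follow-up at 30 days for subjects without hospitalization or death mirrors the PFSCE definition used in the statistical analysis.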
", "SARS-CoV-2 treatment with antivirals has recently been enriched by the introduction of Molnupiravir and NMV-r. Although a short three-day course of Remdesivir has also been introduced to prevent severe COVID-19 [19], oral antivirals have undoubted advantages in limiting organizational issues and healthcare costs [4,11]. NMV-r is considered by the current international guidelines to be the first-line antiviral treatment and has obtained EMA approval [18]. Molnupiravir, on the other hand, should be prescribed only when the use of NMV-r is contraindicated.\nFurthermore, since new monoclonal antibodies that are effective against the circulating variants are not immediately available [7,20], in many countries, including Italy, oral antivirals, along with Remdesivir, are currently the main treatments available to stem severe COVID-19 in high-risk individuals [21]. In addition, anti-SARS-CoV-2 antivirals may be useful in reducing the symptoms of COVID-19 that can compromise quality of life among individuals with a recent infection [3].\nWe reported our real-life experience with oral antivirals in a setting of high-risk outpatients during a period when numerous variants of concern were circulating, including Omicron and its subvariants. 
Additionally, we compared the two oral antivirals in terms of effectiveness and safety profile.\nRandomized trials of NMV-r and Molnupiravir demonstrated reductions in the risk of progression to severe COVID-19 of 89% and 31%, respectively, compared with placebo [14,15]. Of note, the MOVe-OUT and EPIC-HR trials were conducted in young, unvaccinated individuals.\nBy contrast, our study evaluated older subjects (median age 71 years) who were either fully vaccinated (93%) or had received at least one booster dose (89%).\nTwo recent real-life studies evaluated the efficacy of NMV-r and Molnupiravir; in both, the authors used a propensity-score-matched analysis [22,23]. In the NMV-r study, the authors demonstrated a reduced risk of the composite outcome among NMV-r users compared with non-users (7.8% vs. 14.4%), corresponding to a 45% relative risk reduction [22]; in the Molnupiravir study, a non-significantly reduced risk of the composite outcome was observed. However, the authors reported a significant decrease in the risk of disease progression in specific subgroups, such as older patients, females, and patients with inadequate COVID-19 vaccination [23].\nAnother real-life study reported a rapid improvement in COVID-19 symptoms at a phone follow-up 5 days after the initial evaluation and the initiation of Molnupiravir [24].\nIn our study, the composite outcome occurred in 47 individuals (6.5%), similar to the rate reported in the Ganatra et al. study. However, unlike that study, our cohort included both NMV-r users and Molnupiravir users, without an untreated group. We did not observe significant differences between the two antivirals in the composite outcome (6.8% in Molnupiravir users vs. 5.4% in NMV-r users), even among the 50 (7%) unvaccinated patients. The lack of difference between the two regimens might be explained by several factors. First, the two groups differed at baseline: Molnupiravir users were older, whereas the NMV-r group had a higher percentage of active malignancies; with both categories being at high risk of hospitalization, the composite outcome could have been balanced. Another reason might be the smaller number of patients in the NMV-r group. Furthermore, although the median time since the last vaccine dose was not particularly short (median, 132 days), the protective effect of vaccination against severe COVID-19 was likely maintained, thus helping to limit hospitalizations and deaths. Finally, some evidence suggests that the clinical features of Omicron and its subvariants, although more likely to evade the vaccine, may be milder than those caused by the alpha and delta variants [25].\nAlthough viral clearance was not an endpoint of our study, we found that NMV-r users had a shorter time to a negative test than Molnupiravir users (9 days vs. 12 days, p < 0.0001). However, numerous biases should be considered. First, Molnupiravir users were older, sometimes bedridden at home, and had a greater number of comorbidities; thus, the time from the first positive test to viral clearance could be longer in this group, partly because of the difficulty of carrying out the test. Second, the types of COVID-19 tests (antigenic or molecular), as well as the timing of the swab performed to confirm recovery, could have differed. 
At any rate, the time to viral clearance was not associated with the composite outcome.\nWe decided to include all causes of hospitalization and death in the analysis, as we believe that COVID-19 can have both direct consequences (interstitial pneumonia and acute respiratory failure) and indirect consequences, including an increased risk of cardiovascular events, exacerbation of heart failure, respiratory and renal diseases, risk of falling, and difficulty in hydration and feeding, particularly among elderly and frail individuals [26,27].\nMale sex, age ≥ 75 years, moderate illness at the time of prescription, and treatment discontinuation after a medical decision were independently associated with the composite outcome. While the first three factors have been widely associated with severe COVID-19 in several studies [28,29,30,31], the discontinuation of treatment after a medical decision deserves an explanation. We reported eleven discontinuations on medical advice: seven were determined by side effects and were not followed by hospitalization or death at 30 days. The remaining four were due to hospitalizations, including two subjects with advanced malignancies who stopped the oral antiviral, received Remdesivir after developing respiratory failure, and subsequently died. Thus, we assume that antiviral discontinuation on medical advice might be an effect of hospitalization rather than a cause. In our study, 19 subjects voluntarily stopped treatment. Possible reasons included a perceived lack of need for antiviral therapy due to subjective clinical improvement and the fear of developing side effects during treatment.\nBoth antivirals were well tolerated, and serious adverse events leading to treatment discontinuation were rare (0.4%). Most of the self-reported side effects were mild and could be partly attributable to the disease itself. Interestingly, they were more frequent in NMV-r users than in those receiving Molnupiravir. A possible explanation is that NMV-r users were overall younger and consequently better able to report the effects experienced during treatment. Another reason may be the presence of Ritonavir, which is notoriously associated with the reported side effects, including dysgeusia, diarrhea, and nausea. In addition, since Ritonavir is a CYP3A4 inhibitor, it is also involved in interactions that alter the efficacy of numerous other drugs, which makes managing co-administration somewhat difficult [17,32].\nMoreover, a careful evaluation of drug–drug interactions was essential for choosing the most appropriate antiviral therapy. Consequently, for many frail subjects taking life-saving drugs, such as anticoagulants, antiarrhythmics, or immunosuppressants, the choice of antiviral treatment was directed toward Molnupiravir, which, unlike NMV-r, has a favorable drug–drug interaction profile and does not require dose adjustments based on renal function.\nOur study population, mainly consisting of elderly and vulnerable subjects, was treated predominantly with Molnupiravir (77%). Molnupiravir was more frequently prescribed partly for the above reasons and partly because of the therapeutic choices current in Italy (as reported in the AIFA monitoring records) and the unavailability of NMV-r in January and February 2022. In addition, from May onward, NMV-r could be prescribed directly by GPs, who must complete a treatment plan.\nOur study has several limitations. First, this is a single-center retrospective observational study. 
Second, we did not have a matched control group of untreated subjects. Third, since the study population included only outpatients, biochemical and radiological data during the illness were unavailable, except for hospitalized individuals. A further limitation is that we cannot provide clear information on whether any previous COVID-19 infections had occurred.\nTo our knowledge, this is the first European study comparing the two regimens in terms of safety profile and efficacy in a real-life setting. Furthermore, since the role of oral antivirals in preventing severe COVID-19 among fully vaccinated high-risk subjects is still under investigation, our observations provide valuable insights from clinical practice.\nAlthough, consistently with recent studies, we did not observe any differences between the two antivirals in terms of clinical outcomes, we found that older people with multiple comorbidities were more likely to be hospitalized and/or die at 30 days than younger patients, regardless of vaccination status. Thus, timely antiviral treatment in these categories should be prioritized and pursued as soon as possible. Finally, in high-risk and fragile individuals with an impaired response to COVID-19 vaccination, a series of measures might be envisaged, including periodic booster doses, the use of effective monoclonal antibodies, and prolonged antiviral therapy, as well as combinations of antivirals, to be considered in future research.", "Early use of oral antivirals may limit hospital admissions, reduce COVID-19-related morbidity or mortality, and reduce healthcare costs. Therefore, it also becomes necessary to increase efforts to establish and strengthen a network between COVID-19 referral hubs and GPs in order to identify individuals who may benefit from these therapies." ]
[ "intro", null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusions" ]
[ "COVID-19", "antivirals", "SARS-CoV-2", "Molnupiravir", "Nirmatrelvir" ]
1. Introduction: Since its rapid spread began in December 2019, the COVID-19 pandemic has grown such that, as of 23 October 2022, over 624 million confirmed cases and over 6.5 million deaths had been reported globally [1]. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), after entering mainly through the respiratory tract, is capable of inducing a vehement inflammatory response, which is considered the hallmark of the infection. In fact, its various structural and non-structural proteins can directly or indirectly stimulate the uncontrolled activation of harmful inflammatory pathways, causing cytokine storm, tissue damage, increased pulmonary edema, acute respiratory distress syndrome (ARDS), and mortality [2]. COVID-19 still represents a threat to healthcare systems and has significant social implications in terms of morbidity, workplace absence due to illness, and direct and indirect costs [3]. Numerous therapeutic strategies have been developed to prevent the infection (vaccines and monoclonal antibodies) and to slow the progression to severe COVID-19 (anti-inflammatory molecules, steroids, heparin, and antivirals) [4]. Moreover, the dominant variants of SARS-CoV-2 are constantly evolving. Data regarding SARS-CoV-2 variants in Italy are periodically updated by the Italian National Institute of Health [5]. As of January 2022, Omicron and its subvariants constituted over 90% of all SARS-CoV-2 infections in Italy. In the current scenario, circulating Omicron variants (BA.2, BA.4, and BA.5) have significantly reduced susceptibility to several monoclonal antibodies (mAbs), including casirivimab–imdevimab, bamlanivimab–etesevimab, and sotrovimab [6,7]. The combination tixagevimab–cilgavimab, initially authorized only for pre-exposure prophylaxis, appears to remain active against BA.2 and, albeit to a lesser extent, against BA.1, BA.1.1, and the recent BA.4 and BA.5 [6,7,8]. Importantly, anti-SARS-CoV-2 vaccination still represents the main means of limiting the spread of infection and reducing the risk of worse outcomes. However, the effectiveness of vaccines tends to decrease over time, due, on the one hand, to the ability of SARS-CoV-2 to mutate [9] and, on the other, to impaired immune responses, particularly among fragile and high-risk individuals [10]. At the end of December 2021, the European Medicines Agency (EMA) authorized the emergency use of two antivirals against SARS-CoV-2, Molnupiravir and Nirmatrelvir/r (NMV-r), to prevent severe illness in high-risk individuals who are not hospitalized for COVID-19 and do not need supplemental oxygen [11]. Molnupiravir is a prodrug of beta-d-N4-hydroxycytidine acting as an oral inhibitor of the RNA-dependent RNA polymerase that increases viral RNA mutations, thus impairing the replication of SARS-CoV-2 [12]. NMV-r is an orally administered antiviral agent targeting the SARS-CoV-2 3-chymotrypsin-like cysteine protease enzyme (Mpro), which is essential in the viral replication cycle [13]. Although their efficacy has been demonstrated in randomized trials [14,15], real-life data regarding their impact on fully vaccinated vulnerable subjects with mild-to-moderate COVID-19 are still limited, particularly in the era of Omicron and its subvariants. Hence, we aimed to assess the safety and effectiveness of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days. 2. Materials and Methods: 2.1.
Clinical Setting. Our hospital, “San Giuseppe Moscati” of Taranto, is a COVID-19 referral hub in Apulia, Southern Italy. We included in this retrospective study all consecutive individuals with confirmed COVID-19 and mild-to-moderate illness who received an oral antiviral prescription in Taranto and its Province between 11 January and 10 July 2022. Sociodemographic and clinical data were collected in a dedicated database that included comorbidities, daily medications, time from the onset of symptoms to antiviral prescription, dates of COVID-19 vaccinations, side effects during treatment, and clinical outcomes at 30 days after treatment initiation. General Practitioners (GPs) and Special Units for Continuity of Care (USCA) identified high-risk patients with COVID-19 and sent a formal request for eligibility for antiviral therapy. We assessed each patient according to the Italian Medicines Agency (AIFA) criteria [16]. Antiviral therapy was then selected after carefully evaluating drug–drug interactions by consulting a dedicated website, https://www.covid19-druginteractions.org (accessed on 3 November 2022) [17]. Before starting treatment, all subjects received an information form on the prescribed antiviral and signed informed consent. In addition, women of childbearing potential were advised to use an effective method of contraception, necessarily including a barrier method, for the whole duration of treatment and for at least four days after the end of Molnupiravir treatment. Male partners of women of childbearing potential were required to ensure contraception for the total treatment duration and for at least three months after the end of Molnupiravir treatment. In the presence of clinical signs of worsening (persistent fever, onset of breathlessness, reduced oxygen saturation, etc.), patients or their caregivers were asked to contact their GPs, who activated the Special Units for Continuity of Care (USCA) or, in severe cases, the Italian emergency telephone number, 118. Alternatively, our team was contacted directly and provided clinical advice. 2.2. Inclusion Criteria. The inclusion criteria were (1) age ≥ 18 years; (2) COVID-19 confirmed by an antigenic or molecular swab; (3) having taken at least one dose of antiviral; (4) onset of symptoms within five days; and (5) at least one of the following conditions: obesity (body mass index ≥ 30); diabetes mellitus with organ damage or HbA1c > 7.5%; chronic renal failure; chronic respiratory diseases; severe cardiovascular disease; primary or secondary immunodeficiency; malignancies; neurological disease; or age ≥ 65. 2.3. Exclusion Criteria. The exclusion criteria were (1) pregnancy; (2) refusal to take the therapy; (3) severe illness requiring oxygen support and/or hospitalization due to COVID-19; (4) patients already hospitalized; (5) severe liver impairment; and (6) severe renal impairment (eGFR < 30 mL/min/1.73 m2). Mild-to-moderate illness was defined as reported by the COVID-19 Treatment Guidelines Panel [18]. Both antivirals were administered for five days according to the dosages recommended by the manufacturers [16]. 2.4. Endpoints. The first endpoint was to assess the two antivirals in terms of the composite outcome, defined as all-cause hospitalization and/or death at 30 days. The second endpoint was to compare their safety profiles. Third, we aimed to identify factors associated with the composite outcome.
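To make the eligibility rules in Sections 2.2 and 2.3 concrete, the snippet below encodes them as a simple screening function. It is an illustrative sketch only: the fields of the hypothetical `Patient` record are ours, and the actual assessment was performed manually against the AIFA criteria [16].

```python
# Hedged illustration of the inclusion/exclusion rules described above.
# The Patient fields are hypothetical; real screening followed the AIFA criteria.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Patient:
    age: int
    confirmed_by_swab: bool            # antigenic or molecular test
    days_since_symptom_onset: int
    comorbidities: List[str] = field(default_factory=list)
    pregnant: bool = False
    needs_oxygen_or_admission: bool = False
    already_hospitalized: bool = False
    severe_liver_impairment: bool = False
    egfr_ml_min: float = 90.0

HIGH_RISK_CONDITIONS = {
    "obesity", "diabetes_with_organ_damage", "chronic_renal_failure",
    "chronic_respiratory_disease", "severe_cardiovascular_disease",
    "immunodeficiency", "malignancy", "neurological_disease",
}

def is_eligible(p: Patient) -> bool:
    """Return True if the inclusion criteria are met and no exclusion criterion applies.

    Note: study inclusion additionally required having taken at least one antiviral dose,
    which is a post-prescription condition and is not modeled here.
    """
    included = (
        p.age >= 18
        and p.confirmed_by_swab
        and p.days_since_symptom_onset <= 5
        and (p.age >= 65 or any(c in HIGH_RISK_CONDITIONS for c in p.comorbidities))
    )
    excluded = (
        p.pregnant
        or p.needs_oxygen_or_admission
        or p.already_hospitalized
        or p.severe_liver_impairment
        or p.egfr_ml_min < 30  # severe renal impairment
    )
    return included and not excluded
```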
2.5. Statistics. Quantitative data are shown as means and standard deviations (SD) if normally distributed, and as medians and interquartile ranges (IQR) if the assumption of normality was not acceptable. The Shapiro–Wilk test was used to assess normality. Differences in continuous variables between the two groups defined by the primary (composite outcome) or secondary (antiviral therapy) endpoint were compared using Student’s t-test for normally distributed parameters or the nonparametric Mann–Whitney U test otherwise. Categorical data are expressed as frequencies and percentages, and the Chi-square test or Fisher’s exact test was used to compare groups. Univariate and multivariable logistic regression models were applied to evaluate the effect of the parameters (age, sex, comorbidities, antiviral therapy, severity of symptoms, time from the first test to the first negative test, side effects of antiviral therapy, number of comorbidities, discontinuation of therapy after a medical decision, suspension of therapy by voluntary decision, COVID-19 vaccination, days since the last vaccination, and days from the onset of symptoms to prescription of antiviral therapy) on the probability of being hospitalized and/or dying at 30 days. Using a p-value criterion (p < 0.25), stepwise selection was used to estimate the final model. The results of the logistic models are expressed as Odds Ratios (OR), their 95% Confidence Intervals (95% CI), and the p-values of Wald’s tests. Thirty-day progression-free survival toward the composite endpoint (PFSCE) was defined as the time interval between the date of positivity to COVID-19 and the date of hospitalization and/or death within 30 days. Univariate and multivariable Cox regression models were performed to define associations between 30-day PFSCE and the other parameters. The proportional hazards assumption for the Cox models was checked, and the results are expressed as hazard ratios (HR) with their 95% Confidence Intervals. Survival curves for the significant parameters were drawn with the Kaplan–Meier method. A p-value < 0.05 was considered statistically significant. Statistical analyses were performed using SAS/STAT® version 9.4 (SAS Institute, Cary, NC, USA).
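As a rough illustration of the modelling strategy described above (univariate screening followed by a multivariable model of the composite outcome), the snippet below fits a logistic regression with a few of the covariates mentioned in the text. It is a hedged sketch, not the authors' SAS/STAT code; the data frame and column names (`male_sex`, `age_ge_75`, `moderate_illness`, `n_comorbidities`, `composite_outcome`) are hypothetical.

```python
# Illustrative only: a multivariable logistic model for the 30-day composite
# outcome, analogous in spirit to the SAS/STAT analysis described in the text.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_composite_outcome_model(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a logistic regression for the 30-day composite outcome and return
    odds ratios with 95% confidence intervals and p-values.

    Assumed (hypothetical) columns:
      composite_outcome : 1 if hospitalized and/or died within 30 days, else 0
      male_sex, age_ge_75, moderate_illness : 0/1 indicators
      n_comorbidities   : number of comorbidities
    """
    predictors = ["male_sex", "age_ge_75", "moderate_illness", "n_comorbidities"]
    X = sm.add_constant(df[predictors].astype(float))
    y = df["composite_outcome"].astype(float)
    fit = sm.Logit(y, X).fit(disp=False)
    ci = fit.conf_int()  # log-odds scale; columns 0 and 1 are lower/upper bounds
    out = pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
        "p_value": fit.pvalues,
    })
    return out.drop(index="const")
```

A stepwise selection with a p < 0.25 entry criterion, as described above, could be layered on top of this by iteratively adding or removing predictors; it is omitted here for brevity.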
At univariate analysis, an age over 75 years (HR 2.80; 95%CI 1.426–5.821; p = 0.003), moderate illness at time of prescription (HR 13.83; 95%CI 6.727–28.451; p < 0.001), treatment discontinuation after medical decision (HR 10.19; 95% CI 3.585–28.972; p < 0.001), and a greater number of comorbidities (HR 1.69; 95%CI 1.193–2.389; p = 0.003) were associated with 30-day PFSCE. In the multivariate analysis, after adjusting for age and sex, age ≥ 75 (HR 2.76; 95%CI 1.339–5.672; p = 0.005), moderate illness (HR 10.97; 95%CI 5.184–23.207; p < 0.001), treatment discontinuation after medical decision (HR 9.42; 95%CI 3.157–28.1; p < 0.001), and number of comorbidities (HR 1.67; 95%CI 1.178–2.379; p = 0.004) were associated with 30-day PFSCE. Figure 1 shows the survival curves drawn with the Kaplan–Meier method. In the curve stratified by COVID-19 symptoms, the mean (± standard error) survival time of patients with moderate symptoms was shorter than that of patients with mild symptoms (12.17 ± 1.33 vs. 29.30 ± 0.16). Patients who discontinued antiviral therapy after a medical decision had a shorter mean survival time than those who continued (6.45 ± 1.06 vs. 29.06 ± 0.18). 4. Discussion: SARS-CoV-2 treatment with antivirals has recently been enriched by the introduction of Molnupiravir and NMV-r. Although a short three-day Remdesivir treatment was added to prevent severe COVID-19 [19], oral antivirals have undoubted advantages in limiting organizational issues and healthcare costs [4,11]. NMV-r is considered by the current international guidelines to be the first-line antiviral treatment and has obtained approval from the EMA [18]. Molnupiravir, on the other hand, should be prescribed only when the use of NMV-r is contraindicated. Furthermore, since new monoclonal antibodies that are effective against circulating variants are not expected to be immediately available [7,20], in many countries, including Italy, oral antivirals, along with Remdesivir, are currently the effective treatments available to stem severe COVID-19 in high-risk individuals [21]. In addition, anti-SARS-CoV-2 antivirals may be useful in reducing the symptoms of COVID-19 that can compromise the quality of life among individuals with a recent infection [3]. We reported our real-life experience with oral antivirals in a setting of high-risk outpatients in a period when numerous variants of concern were circulating, including Omicron and its subvariants. Additionally, a comparison of the two oral antivirals in terms of effectiveness and safety profile was performed. Randomized trials of NMV-r and Molnupiravir demonstrated reductions in the risk of progression to severe COVID-19 of 89% and 31%, respectively, compared with the placebo groups [14,15]. Of note, the MOVe-OUT and EPIC-HR trials were conducted in young and unvaccinated individuals. In contrast, our study evaluated older subjects (median age of 71 years) who were either fully vaccinated (93%) or received at least one booster dose (89%). Two recent real-life studies evaluated the efficacy of NMV-r and Molnupiravir. In both studies, the authors used a propensity-score-matched analysis [22,23]. In particular, in the NMV-r study, the authors demonstrated a reduced risk of composite outcome among NMV-r users compared with non-users (7.8% vs. 14.4%), thus resulting in a 45% relative risk reduction [22]; in the Molnupiravir study, a non-significant reduction in the risk of the composite outcome was observed.
However, the authors reported a significant decrease in the risk of disease progression in specific subgroups, such as older patients, females, and patients with inadequate COVID-19 vaccination [23]. Another real-life study reported a rapid improvement in COVID-19 symptoms at phone follow-up 5 days after the initial evaluation and initiation of Molnupiravir [24]. In our study, a composite outcome occurred in 47 individuals (6.5%), similar to that reported in the Ganatra et al. study. However, unlike that study, our data included both NMV-r users and Molnupiravir users, without an untreated group. We did not observe significant differences between the two antivirals in the composite outcome (6.8% in Molnupiravir users vs. 5.4% in NMV-r users), even among the 50 (7%) unvaccinated patients. The lack of difference between the two regimens might be explained by several factors. First, the two groups differed in important ways, such as the older age of the Molnupiravir group and the higher percentage of active malignancies in the NMV-r group; since both of these categories are at high risk of hospitalization, the composite outcome could be balanced between the groups. Another reason might be the smaller number of patients in the NMV-r group. Furthermore, although the median time since the last vaccine dose was not particularly short (median, 132 days), its protective effect in preventing severe COVID-19 was likely to be maintained, thus helping to limit hospitalizations and deaths. Finally, some evidence suggests that the clinical features of Omicron and subvariants, although more capable of evading vaccine-induced immunity, may be milder than those caused by the alpha and delta variants [25]. Although viral clearance was not an endpoint of our study, we found that NMV-r users had a shorter time to negative test compared with Molnupiravir users (9 days vs. 12 days, p < 0.0001). However, numerous biases should be considered. First, Molnupiravir users were older subjects, sometimes bedridden at home, and with a greater number of comorbidities. Thus, the amount of time from the first positive test to viral clearance could be longer in this group, partly due to the difficulties of carrying out the test. Second, the types of COVID-19 tests (antigenic or molecular), as well as the timing of performing the swab to ascertain recovery, could differ. At any rate, the time to viral clearance was not associated with the composite outcome. We decided to include in the analysis all the causes of hospitalization and death, as we believe that COVID-19 can cause direct consequences (interstitial pneumonia and acute respiratory failure) and indirect consequences, including an increased risk of cardiovascular events, exacerbation of cardiac, respiratory, and renal diseases, risk of falling, and difficulty in hydration and feeding, particularly among elderly and frail individuals [26,27]. Male sex, age ≥ 75 years, moderate illness at the time of prescription, and treatment discontinuation after medical decision were factors independently associated with the composite outcome. Unlike the first three factors, which have been widely associated with severe COVID-19 in several studies [28,29,30,31], the discontinuation of treatment after a medical decision deserves an explanation. We reported eleven discontinuations on medical advice; in seven of these cases the decision was driven by side effects and was not followed by hospitalization or death at 30 days.
The remaining four discontinuations were due to hospitalizations, including two subjects with advanced malignancies who stopped the oral antiviral, received Remdesivir after developing respiratory failure, and subsequently died. Thus, we assume that antiviral discontinuation due to medical advice might be an effect of hospitalization rather than a cause. In our study, 19 subjects voluntarily stopped the treatment. Possible reasons included the perceived lack of need for antiviral therapy due to subjective clinical improvement and the fear of developing side effects in the course of treatment. Both antivirals were well tolerated, and serious adverse events leading to treatment discontinuation were rare (0.4%). Most of the self-reported side effects were mild and could be partly attributable to the disease itself. Interestingly, they were more frequent in NMV-r users than in those receiving Molnupiravir. A possible explanation may be that NMV-r users were overall younger and consequently were able to report in more detail the effects experienced during treatment. Another reason may be the presence of Ritonavir, which is notoriously associated with the reported side effects, including dysgeusia, diarrhea, and nausea. In addition, since Ritonavir is a CYP3A4 inhibitor, it also interacts with and alters the efficacy of numerous other drugs, which makes managing co-administration difficult [17,32]. Moreover, a careful evaluation of drug–drug interactions was essential for choosing the most appropriate antiviral therapy. Consequently, for many frail subjects who took life-saving drugs, such as anticoagulants, antiarrhythmics, or immunosuppressants, the choice of antiviral treatment was directed toward Molnupiravir, which, unlike NMV-r, has a good profile of drug–drug interactions and does not require dose adjustments based on renal function. Our study population, mainly consisting of elderly and vulnerable subjects, was treated predominantly with Molnupiravir (77%). Molnupiravir was more frequently prescribed partly for the above reasons and partly because of the current therapeutic choices in Italy (as reported in the AIFA monitoring records) and the unavailability of NMV-r in January and February 2022. In addition, from May onward, NMV-r could be prescribed directly by GPs, who had to complete a treatment plan. Our study presents several limitations. First, this is a single-center retrospective observational study. Second, we did not have a matched control group including untreated subjects. Third, since the study population assessed only outpatients, biochemical and radiological data during the illness were unavailable, except for hospitalized individuals. A further limitation is that we cannot provide clear information on whether any COVID-19 infections had occurred in the past. To our knowledge, this is the first European study comparing the two regimens in terms of safety profile and efficacy in a real-life setting. Furthermore, since the role of oral antivirals in preventing severe COVID-19 among fully vaccinated high-risk subjects is still under investigation, our observations provide valuable insights from clinical practice. Notwithstanding these limitations, and consistent with recent studies, we did not observe any differences between the two antivirals in terms of clinical outcomes; we also found that older people with multiple comorbidities were more likely to be hospitalized and/or die at 30 days than younger patients, regardless of vaccination status.
Thus, a timely antiviral treatment in these categories should be prioritized and pursued as soon as possible. Finally, in high-risk and fragile individuals with an impaired response to COVID-19 vaccination, a series of measures might be envisaged, including periodic booster doses, the use of effective monoclonal antibodies, prolonged antiviral therapy, and combinations of antivirals, which should be considered in future research. 5. Conclusions: Early use of oral antivirals may limit hospital admissions, reduce COVID-19-related morbidity or mortality, and reduce healthcare costs. Therefore, efforts should be increased to establish and strengthen a network between COVID-19 referral hubs and GPs in order to identify individuals who may benefit from these therapies.
Background: Molnupiravir and Nirmatrelvir/r (NMV-r) have been proven to reduce severe Coronavirus Disease 2019 (COVID-19) in unvaccinated high-risk individuals. Data regarding their impact in fully vaccinated vulnerable subjects with mild-to-moderate COVID-19 are still limited, particularly in the era of Omicron and sub-variants. Methods: Our retrospective study aimed to compare the safety profile and effectiveness of the two antivirals in all consecutive high-risk outpatients between 11 January and 10 July 2022. A logistic regression model was carried out to assess factors associated with the composite outcome defined as all-cause hospitalization and/or death at 30 days. Results: A total of 719 individuals were included: 554 (77%) received Molnupiravir, whereas 165 (23%) were NMV-r users. Overall, 43 all-cause hospitalizations (5.9%) and 13 (1.8%) deaths were observed at 30 days. A composite outcome occurred in 47 (6.5%) individuals. At multivariate analysis, male sex [OR 3.785; p = 0.0021], age ≥ 75 [OR 2.647; p = 0.0124], moderate illness [OR 16.75; p < 0.001], and treatment discontinuation after medical decision [OR 8.148; p = 0.0123] remained independently associated with the composite outcome. Conclusions: No differences between the two antivirals were observed. In this real-life setting, the early use of both oral antivirals helped limit the composite outcome at 30 days among subjects who were at high risk of disease progression.
1. Introduction: Since its rapid spread starting in December 2019, the COVID-19 pandemic has caused over 624 million confirmed cases and over 6.5 million deaths globally as of 23 October 2022 [1]. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), after entering mainly through the respiratory tract, is capable of inducing a vehement inflammatory response, which is considered the hallmark of the infection. In fact, its various structural and non-structural proteins can directly or indirectly stimulate the uncontrolled activation of harmful inflammatory pathways causing cytokine storm, tissue damage, increased pulmonary edema, acute respiratory distress syndrome (ARDS) and mortality [2]. COVID-19 still represents a threat to healthcare systems and has significant social implications in terms of morbidity, absence due to illness in the workplace, and direct and indirect costs [3]. Numerous therapeutic strategies have been developed to prevent the infection (vaccines and monoclonal antibodies) and to slow down the progression to severe COVID-19 (anti-inflammatory molecules, steroids, heparin, and antivirals) [4]. Moreover, the dominant variants of SARS-CoV-2 are constantly evolving. Data regarding SARS-CoV-2 variants in Italy are periodically updated by the National Healthcare Institute [5]. As of January 2022, Omicron and subvariants constituted over 90% of all SARS-CoV-2 infections in Italy. In the current scenario, circulating Omicron variants (BA.2, BA.4, and BA.5) have significantly reduced susceptibility to several monoclonal antibodies (mAbs), including casirivimab–imdevimab, bamlanivimab–etesevimab, and sotrovimab [6,7]. The combination tixagevimab–cilgavimab, initially authorized only in pre-exposure prophylaxis, appears to be still active toward BA.2 and, albeit to a lesser extent, toward BA.1, BA.1.1, and the recent BA.4 and BA.5 [6,7,8]. Importantly, anti-SARS-CoV-2 vaccination still represents the main means of limiting the spread of infection and reducing the risk of worse outcomes. However, the effectiveness of vaccines tends to decrease over time, due, on the one hand, to the ability of SARS-CoV-2 to mutate [9] and, on the other, to impaired immune responses, particularly among fragile and high-risk individuals [10]. At the end of December 2021, the European Medicines Agency (EMA) authorized the emergency use of two antivirals against SARS-CoV-2, Molnupiravir and Nirmatrelvir/r (NMV-r), to prevent severe illness in high-risk individuals who are not hospitalized for COVID-19 and do not need supplemental oxygen [11]. Molnupiravir is a prodrug of beta-d-N4-hydroxycytidine acting as an oral inhibitor of RNA-dependent RNA polymerase that increases viral RNA mutations, thus impairing the replication of SARS-CoV-2 [12]. NMV-r is an orally administered antiviral agent targeting the SARS-CoV-2 3-chymotrypsin-like cysteine protease enzyme (Mpro) that is essential in the viral replication cycle [13]. Although their efficacy has been demonstrated in randomized trials [14,15], real-life data regarding their impact on fully vaccinated vulnerable subjects with mild-to-moderate COVID-19 are still limited, particularly in the era of Omicron and subvariants. Hence, we aim to assess the safety and effectiveness of the two antivirals in terms of the composite outcome defined as all-cause hospitalization and/or death at 30 days.
5. Conclusions: Early use of oral antivirals may limit hospital admissions, reduce COVID-19-related morbidity or mortality, and reduce healthcare costs. Therefore, efforts should be increased to establish and strengthen a network between COVID-19 referral hubs and GPs in order to identify individuals who may benefit from these therapies.
Background: Molnupiravir and Nirmatrelvir/r (NMV-r) have been proven to reduce severe Coronavirus Disease 2019 (COVID-19) in unvaccinated high-risk individuals. Data regarding their impact in fully vaccinated vulnerable subjects with mild-to-moderate COVID-19 are still limited, particularly in the era of Omicron and sub-variants. Methods: Our retrospective study aimed to compare the safety profile and effectiveness of the two antivirals in all consecutive high-risk outpatients between 11 January and 10 July 2022. A logistic regression model was carried out to assess factors associated with the composite outcome defined as all-cause hospitalization and/or death at 30 days. Results: A total of 719 individuals were included: 554 (77%) received Molnupiravir, whereas 165 (23%) were NMV-r users. Overall, 43 all-cause hospitalizations (5.9%) and 13 (1.8%) deaths were observed at 30 days. A composite outcome occurred in 47 (6.5%) individuals. At multivariate analysis, male sex [OR 3.785; p = 0.0021], age ≥ 75 [OR 2.647; p = 0.0124], moderate illness [OR 16.75; p < 0.001], and treatment discontinuation after medical decision [OR 8.148; p = 0.0123] remained independently associated with the composite outcome. Conclusions: No differences between the two antivirals were observed. In this real-life setting, the early use of both oral antivirals helped limit the composite outcome at 30 days among subjects who were at high risk of disease progression.
9,484
301
[ 2128, 360, 108, 106, 54, 419, 215, 423, 234, 276 ]
14
[ "95", "ci", "19", "antiviral", "95 ci", "treatment", "covid", "covid 19", "30", "days" ]
[ "antivirals sars cov", "syndrome coronavirus", "coronavirus", "sars cov2 treatment", "respiratory syndrome coronavirus" ]
null
[CONTENT] COVID-19 | antivirals | SARS-CoV-2 | Molnupiravir | Nirmatrelvir [SUMMARY]
null
[CONTENT] COVID-19 | antivirals | SARS-CoV-2 | Molnupiravir | Nirmatrelvir [SUMMARY]
[CONTENT] COVID-19 | antivirals | SARS-CoV-2 | Molnupiravir | Nirmatrelvir [SUMMARY]
[CONTENT] COVID-19 | antivirals | SARS-CoV-2 | Molnupiravir | Nirmatrelvir [SUMMARY]
[CONTENT] COVID-19 | antivirals | SARS-CoV-2 | Molnupiravir | Nirmatrelvir [SUMMARY]
[CONTENT] Male | Humans | Antiviral Agents | Outpatients | Retrospective Studies | COVID-19 Drug Treatment [SUMMARY]
null
[CONTENT] Male | Humans | Antiviral Agents | Outpatients | Retrospective Studies | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] Male | Humans | Antiviral Agents | Outpatients | Retrospective Studies | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] Male | Humans | Antiviral Agents | Outpatients | Retrospective Studies | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] Male | Humans | Antiviral Agents | Outpatients | Retrospective Studies | COVID-19 Drug Treatment [SUMMARY]
[CONTENT] antivirals sars cov | syndrome coronavirus | coronavirus | sars cov2 treatment | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] antivirals sars cov | syndrome coronavirus | coronavirus | sars cov2 treatment | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] antivirals sars cov | syndrome coronavirus | coronavirus | sars cov2 treatment | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] antivirals sars cov | syndrome coronavirus | coronavirus | sars cov2 treatment | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] antivirals sars cov | syndrome coronavirus | coronavirus | sars cov2 treatment | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] 95 | ci | 19 | antiviral | 95 ci | treatment | covid | covid 19 | 30 | days [SUMMARY]
null
[CONTENT] 95 | ci | 19 | antiviral | 95 ci | treatment | covid | covid 19 | 30 | days [SUMMARY]
[CONTENT] 95 | ci | 19 | antiviral | 95 ci | treatment | covid | covid 19 | 30 | days [SUMMARY]
[CONTENT] 95 | ci | 19 | antiviral | 95 ci | treatment | covid | covid 19 | 30 | days [SUMMARY]
[CONTENT] 95 | ci | 19 | antiviral | 95 ci | treatment | covid | covid 19 | 30 | days [SUMMARY]
[CONTENT] ba | sars | cov | sars cov | ba ba | inflammatory | variants | infection | rna | omicron [SUMMARY]
null
[CONTENT] ci | 95 | 95 ci | 001 | hr | associated | vs | age | decision | 75 [SUMMARY]
[CONTENT] reduce | hubs gps | reduce covid | establish strengthen | establish | healthcare costs necessary | admissions | admissions reduce | reduce covid 19 | establish strengthen network covid [SUMMARY]
[CONTENT] 95 | ci | 95 ci | 19 | treatment | covid | covid 19 | antiviral | composite | days [SUMMARY]
[CONTENT] 95 | ci | 95 ci | 19 | treatment | covid | covid 19 | antiviral | composite | days [SUMMARY]
[CONTENT] Molnupiravir | Nirmatrelvir | 2019 | COVID-19 ||| COVID-19 | Omicron [SUMMARY]
null
[CONTENT] 719 | 554 | 77% | Molnupiravir | 165 | 23% | NMV ||| 43 | 5.9% | 13 | 1.8% | 30 days ||| 47 | 6.5% ||| ||| 3.785 | 0.0021 | ≥ | 75 ||| 2.647 | 0.0124 | 16.75 | p &lt | 0.001 | 8.148 | 0.0123 [SUMMARY]
[CONTENT] two ||| 30 days [SUMMARY]
[CONTENT] Molnupiravir | Nirmatrelvir | 2019 | COVID-19 ||| COVID-19 | Omicron ||| two | between 11 January and 10 July 2022 ||| 30 days ||| 719 | 554 | 77% | Molnupiravir | 165 | 23% | NMV ||| 43 | 5.9% | 13 | 1.8% | 30 days ||| 47 | 6.5% ||| ||| 3.785 | 0.0021 | ≥ | 75 ||| 2.647 | 0.0124 | 16.75 | p &lt | 0.001 | 8.148 | 0.0123 ||| two ||| 30 days [SUMMARY]
[CONTENT] Molnupiravir | Nirmatrelvir | 2019 | COVID-19 ||| COVID-19 | Omicron ||| two | between 11 January and 10 July 2022 ||| 30 days ||| 719 | 554 | 77% | Molnupiravir | 165 | 23% | NMV ||| 43 | 5.9% | 13 | 1.8% | 30 days ||| 47 | 6.5% ||| ||| 3.785 | 0.0021 | ≥ | 75 ||| 2.647 | 0.0124 | 16.75 | p &lt | 0.001 | 8.148 | 0.0123 ||| two ||| 30 days [SUMMARY]
N-terminal pro-brain-type natriuretic peptide (NT-pro-BNP) and mortality risk in early inflammatory polyarthritis: results from the Norfolk Arthritis Registry (NOAR).
23511225
We measured N-terminal pro-brain natriuretic peptide (NT-pro-BNP), a marker of cardiac dysfunction, in an inception cohort with early inflammatory polyarthritis (IP) and assessed its association with disease phenotype, cardiovascular disease (CVD), all-cause and CVD related mortality.
BACKGROUND
Subjects with early IP were recruited to the Norfolk Arthritis Register from January 2000 to December 2008 and followed up to death or until March 2010 including any data from the national death register. The associations of baseline NT-pro-BNP with IP related factors and CVD were assessed by linear regression. Cox proportional hazards models examined the independent association of baseline NT-pro-BNP with all-cause and CVD mortality.
METHODS
We studied 960 early IP subjects; 163 (17%) had prior CVD. 373 (39%) patients had a baseline NT-pro-BNP levels ≥ 100 pg/ml. NT-pro-BNP was associated with age, female gender, HAQ score, CRP, current smoking, history of hypertension, prior CVD and the presence of carotid plaque. 92 (10%) IP subjects died including 31 (3%) from CVD. In an age and gender adjusted analysis, having a raised NT-pro-BNP level (≥ 100 pg/ml) was associated with both all-cause and CVD mortality (adjusted HR (95% CI) 2.36 (1.42 to 3.94) and 3.40 (1.28 to 9.03), respectively). These findings were robust to adjustment for conventional CVD risk factors and prevalent CVD.
RESULTS
In early IP patients, elevated NT-pro-BNP is related to HAQ and CRP and predicts all-cause and CVD mortality independently of conventional CVD risk factors. Further study is required to identify whether NT-pro-BNP may be clinically useful in targeting intensive interventions to IP patients at greatest risk of CVD.
CONCLUSIONS
[ "Adult", "Aged", "Arthritis", "Biomarkers", "Cardiovascular Diseases", "Cross-Sectional Studies", "England", "Female", "Humans", "Male", "Middle Aged", "Natriuretic Peptide, Brain", "Peptide Fragments", "Phenotype", "Prognosis", "Registries", "Risk Factors", "Severity of Illness Index" ]
3963600
Background
N-terminal pro-brain-type natriuretic peptide (NT-pro-BNP) is an inactive and stable amino acid fragment cosecreted, alongside the neuroendocrine peptide BNP, from the ventricular cardiac myocytes. It is released in response to left ventricular strain or ischaemia and has been found to be an important biomarker for left ventricular systolic dysfunction and left ventricular stress in the general population.1 Levels are also elevated even in those without clinical cardiovascular disease (CVD) and even minimally elevated NT-pro-BNP levels are predictive of future CVD and mortality in a range of cohort and general healthy population studies.2–5 These observations have also been confirmed in a meta-analysis of 40 studies involving 87 474 subjects and 10 625 incident CVD outcomes.6 In clinical practice, an NT-pro-BNP level of ≥100 pg/ml has high sensitivity and specificity for congestive heart failure (CHF) and can be used to stratify patients at high risk of cardiac failure in the acute setting.7–9 Circulating NT-pro-BNP is also associated with pro-inflammatory cytokines such as tumour necrosis factor (TNF) and interleukin 6 (IL-6).10 Acute inflammatory conditions including pneumonia and septic shock are also associated with raised NT-pro-BNP levels11–17 and TNF blockade in rheumatoid arthritis (RA) lowers NT-pro-BNP, suggesting a link between inflammation and cardiac stress. This has raised interest in the role of NT-pro-BNP in chronic inflammatory conditions such as inflammatory polyarthritis (IP). IP and RA are associated with an excess CVD risk including coronary heart disease, stroke and CHF.18–21 In two recent meta-analyses, the standardised mortality ratio (95% CI) for coronary heart disease and stroke in RA was 1.50 (1.39 to 1.61)22 and 1.46 (1.31 to 1.63),23 respectively. CHF has also been noted to be a common cause of death in RA and chronic inflammation may contribute to this outcome.24 A small number of studies have examined the potential relevance of NT-pro-BNP levels in RA patients. Crowson et al25 noted higher NT-pro-BNP levels in RA compared with population controls. In established RA, NT-pro-BNP was associated with disease activity, disease duration as well as with C reactive protein (CRP), TNF and IL-6 concentrations.26–29 One study from our group found that a reduction in NT-pro-BNP correlated with the reduction in erythrocyte sedimentation rate observed after starting adalimumab therapy.30 A further study also noted an association between NT-pro-BNP and carotid intima-medial thickness (cIMT).28 To date, only one study has looked at the predictive value of NT-pro-BNP levels with future CVD mortality in established RA;29 however, little is known about NT-pro-BNP in early IP patients. The aim of this study was to measure serum NT-pro-BNP levels in a large, well characterised inception cohort of patients with early IP. First, we aimed to examine the baseline association of NT-pro-BNP levels with IP disease phenotype, clinical CVD risk markers and subclinical atherosclerosis surrogates. Second, we also examined the predictive value of raised NT-pro-BNP levels for future all-cause and CVD related mortality.
Methods
Setting: Subjects aged 16 years and older with early IP (≥2 joints swollen for ≥4 weeks) were recruited to the Norfolk Arthritis Register (NOAR) from January 2000 to December 2008. In this study, we restricted inclusion to subjects with ≤24 months’ symptom duration.
Data collection at recruitment: All subjects underwent a clinical assessment including swollen and tender joint count (51 joints), measurement of height and weight to calculate body mass index (BMI). Smoking status was recorded and patients were categorised into never smokers, previous smokers or current smokers. Subjects were specifically asked about past history of CVD events such as angina, heart attacks and heart failure, as well as current prescribed and over-the-counter medications and any history of disease modifying antirheumatic drug use. All patients completed the Health Assessment Questionnaire (HAQ). Hypertension was defined as being present if patients reported a physician diagnosis of hypertension or if they were taking antihypertensive therapy at the baseline clinical assessment. Diabetes mellitus was also defined by patient self-report of a physician diagnosis or if they were taking oral hypoglycaemic agents or injected insulin.
Blood sample analysis: At recruitment blood samples were collected, aliquoted and frozen at −80°C in Norfolk before being transported to the Arthritis Research UK Epidemiology Unit in Manchester. A Nephrostar Galaxy automated analyser was used to determine CRP concentration (BMG Labtech, Aylesbury, UK); this was used to calculate the 28-joint disease activity score (DAS28CRP). Rheumatoid factor (RF) was measured using a particle enhanced immunoturbidimetric assay where >40 iU/ml was considered positive for RF (BMG Labtech). Antibodies to citrullinated protein antigens (ACPA) were measured using the Axis-Shield DIASTAT kit (Axis-Shield, Dundee, UK) where >5 U/ml was considered positive for ACPA. Patients were considered ‘seropositive’ if they had a positive RF and/or ACPA. Serum NT-pro-BNP concentrations were determined using the Elecsys 2010 electrochemiluminescence method (Roche Diagnostics, Burgess Hill, UK), calibrated using the manufacturer's reagents at the University of Glasgow (PW, NS). Manufacturer's controls were used with limits of acceptability defined by the manufacturer. The limit of detection was 5 pg/ml. Low control coefficient of variance (CV) was 6.7% and high control CV was 4.9%. Patients were classified as having a raised NT-pro-BNP level if their NT-pro-BNP measurement was ≥100 pg/ml as per the local clinically used threshold and published data.7–9
Cardiovascular substudy: In a subset of the main cohort, consecutive patients (n=327) recruited between 2004 and 2008 and aged 18–65 years at cohort entry were invited to participate in a detailed cardiovascular assessment. A fasting blood sample was used to measure the lipid profile and plasma glucose. Blood pressure was measured in each arm and the higher of the two measurements was used in analyses. For the purposes of this study, however, hypertension was defined in all cases as described above. They also underwent a B-mode Doppler carotid ultrasound examination at the Department of Vascular Radiology at the Norfolk and Norwich University Hospital. The images were stored on super video home system (VHS) video and analysed at The University of Manchester, Department of Vascular Surgery using a standardised research protocol. Preliminary validation has revealed good intraobserver correlation using this technique.31 Briefly, the right and left common carotid artery, carotid bulb and the first 1.5 cm of the internal and external carotid arteries were examined in longitudinal and cross-sectional planes. Carotid IMT measurements were made in a longitudinal plane at a point of maximum thickness on the far wall of the common carotid artery along a 1 cm section of the artery proximal to the carotid bulb. Measurements were repeated three times on each side, unfreezing the image on each occasion and relocating the maximal cIMT. The average of these three measurements was then calculated and used as the mean cIMT.32 The higher of the right or left mean cIMT was used as the cIMT for analyses. Carotid plaque was defined as present if any of the following parameters was met and then graded according to standard definitions:33 plaque present with <30% vessel diameter loss; 30%–50% vessel diameter loss; or >50% vessel diameter loss. Total cholesterol, high density lipoprotein (HDL) and triglycerides were assayed on fresh fasting serum using cholesterol oxidase phenol aminophenazone (CHOD-PAP), a homogenous direct method (Abbott Diagnostics, Berkshire, UK), and glycerol-3-phosphate oxidase (GPO-PAP) methods respectively at the Norfolk and Norwich University Hospital. Low density lipoprotein (LDL) levels were mathematically derived using the Friedewald equation.
Follow-up and mortality: In NOAR all patients are flagged with the National Health Service Information Service (NHS-IS). Patients were followed up until March 2010, death or embarkation, whichever came first. Death certificates were provided by the NHS-IS for all those who had died. For this study we took the underlying cause of death, coded using ICD10,34 as the main cause of death. CV deaths were defined as those where the underlying cause of death fell within chapter I of ICD10. All patients provided written informed consent to take part and the study was approved by the Norfolk Research Ethics Committee (REC no. 2003/075).
Statistical analyses: As the NT-pro-BNP levels were positively skewed, we log-transformed these values for our analysis. The associations among log-transformed NT-pro-BNP and baseline demographic, IP and CVD parameters including the presence or absence of carotid plaque at recruitment were assessed using linear regression. These analyses were subsequently adjusted for age at recruitment and gender. Forward stepwise multivariable regression analysis was carried out that included age and gender to assess key IP and CVD related factors associated with log NT-pro-BNP levels at baseline on univariate analyses (p<0.05). We examined the association of raised NT-pro-BNP (≥100 pg/ml) with mortality (all-cause and CVD) using Cox proportional hazards regression. Analyses were adjusted for key CVD risk factors, namely, age, gender, BMI, prior CVD, diabetes, smoking status and hypertension. Subsequently, analyses were adjusted for IP related factors associated with log NT-pro-BNP in our cross-sectional analysis. Incremental prognostic information was calculated using Harrell's C-statistic (C-index) and these values were compared using the χ2 test. Sensitivity analyses were also undertaken in the subsets who (i) were negative for ACPA and RF and (ii) had no reported history of prior CVD at baseline recruitment.
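As a rough illustration of the analysis pipeline described above (log-transformation of NT-pro-BNP, age- and gender-adjusted linear regression, and a Cox model for mortality), the sketch below uses hypothetical column names and an assumed input file; it is not the authors' code and does not reproduce NOAR data.

```python
# Illustrative sketch only -- 'ntprobnp', 'age', 'female', 'haq', 'time_years'
# and 'died' are hypothetical column names, not NOAR variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("noar_baseline.csv")          # hypothetical file
df["log_ntprobnp"] = np.log(df["ntprobnp"])    # NT-pro-BNP is right-skewed

# Age- and gender-adjusted linear regression of one candidate factor (here HAQ)
lm = smf.ols("log_ntprobnp ~ haq + age + female", data=df).fit()
print(lm.params["haq"], lm.conf_int().loc["haq"])   # beta-coefficient and 95% CI

# Cox proportional hazards model for all-cause mortality, >=100 pg/ml threshold
df["raised_bnp"] = (df["ntprobnp"] >= 100).astype(int)
cph = CoxPHFitter()
cph.fit(df[["time_years", "died", "raised_bnp", "age", "female"]],
        duration_col="time_years", event_col="died")
cph.print_summary()                            # HRs with 95% CIs
```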
Results
Cross-sectional association of baseline NT-pro-BNP concentration with IP disease phenotype and CVD risk markers: We studied 960 IP subjects with a median (IQR) age and symptom duration of 58 (47–68) years and 5.9 (3.2–10.5) months, respectively. The cohort included 617 (64%) female subjects; there were 211 (25%) current smokers. In all, 484/823 (59%) were positive for RF and/or ACPA and 436 (45%) fulfilled the 1987 American College for Rheumatology Criteria for RA at first assessment (table 1). A history of prior CVD was reported by 163 (17%) patients (table 1). The demographic, IP phenotype and CVD parameters were comparable in the subset of 327 patients who had carotid Doppler scans relative to the entire cohort, with the exception of a younger median age at recruitment into this substudy as per protocol (table 1).
Table 1: Baseline characteristics of the IP cohort and the CVD substudy population. *Statistically significantly different at p<0.05. ACPA, anticitrullinated protein antibody; ACR, American College for Rheumatology; BMI, body mass index; BP, blood pressure; cIMT, carotid intima-medial thickness; COX-2, cyclo-oxygenase-2 inhibitor; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; IP, inflammatory polyarthritis; mm Hg, millimetres of mercury; NA, data not available; NSAID, non-steroidal anti-inflammatory drug; RA, rheumatoid arthritis; RF, rheumatoid factor.
NT-pro-BNP levels were positively skewed with a median (IQR) of 74 (38–153) pg/ml; 373 (39%) patients had NT-pro-BNP levels ≥100 pg/ml, including 89 of 163 (55%) patients with a prior history of CVD. In univariate linear regression analyses, higher log NT-pro-BNP levels were associated with being older at recruitment (β-coefficient (95% CI) 0.035 (0.031 to 0.039) per year) and female (β-coefficient (95% CI) 0.211 (0.085 to 0.336)). In univariate age and gender adjusted analyses (table 1), a number of factors were associated with log NT-pro-BNP including prior CVD, classic risk factors such as previous smoking and hypertension and several IP related factors, including HAQ score and CRP. Both BMI and tender joint counts were negatively associated with log NT-pro-BNP (table 2).
Table 2: Cross-sectional association between log NT-pro-BNP and demographic and disease related factors measured at baseline (n=960). *Log NT-pro-BNP, NT-pro-BNP levels that have been log transformed. †Linear regression models adjusted for age and gender. ‡Stepwise regression includes age, gender, HAQ score, tender joint count, CRP, hypertension, BMI, smoking status and prior CVD in the entire cohort. ACPA, antibodies to citrullinated protein antigens; ACR, American College for Rheumatology; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; NA, not applicable; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.
In a multivariable forward stepwise linear regression analysis containing all risk markers showing significance on univariate analyses, age, prior CVD and hypertension all remained positively associated and BMI remained negatively associated with log NT-pro-BNP (table 2). IP related factors that remained associated on multivariable analysis were HAQ score (β-coefficient (95% CI) 0.250 (0.149 to 0.351)), CRP (β-coefficient (95% CI) 0.002 (0.001 to 0.004)) and tender joint count (β-coefficient (95% CI) −0.021 (−0.032 to −0.010)).
Cross-sectional association of baseline NT-pro-BNP concentration with subclinical atherosclerosis: The presence of plaque was significantly associated with log NT-pro-BNP levels (β-coefficient (95% CI) 0.253 (0.063 to 0.442)). This association did not however persist after age and gender adjustment (β-coefficient (95% CI) 0.109 (−0.100 to 0.319)). Carotid IMT was not significantly associated with NT-pro-BNP levels on univariate analysis.
NT-pro-BNP levels and mortality: During 5526 patient years of follow-up over a median (IQR) follow-up period of 5.5 (3.7–7.7) years, 92 (10%) IP subjects died. Of the 92 deaths, 31 (34%) had the main underlying cause of death as CVD (Chapter I of ICD10), 31 (34%) died due to neoplastic conditions (Chapter II of ICD10), 15 (16%) due to respiratory disease (Chapter X of ICD10) and the remaining 15 (16%) had other causes of death cited. The overall and CVD related mortality rates were 16.8 (95% CI 16.7 to 16.9) and 5.8 per 1000 patient years (95% CI 5.6 to 5.9), respectively. Of the subjects with a baseline NT-pro-BNP level <100 pg/ml, 25 (4%) died during the follow-up period. This accounted for 27% of all deaths. In subjects with NT-pro-BNP ≥100 pg/ml, 67 (18%) died, which accounted for 73% of all deaths in the cohort. In a Cox proportional hazards model adjusted for age and gender (table 3, model A), NT-pro-BNP ≥100 pg/ml was associated with all-cause (HR (95% CI) 2.36 (1.42 to 3.94)) and CVD mortality (HR (95% CI) 3.40 (1.28 to 9.03)). These associations remained when we adjusted for additional CVD risk factors (model B) or additional IP related factors (model C) (table 3). Our final adjusted model (model D) also demonstrated an independent association between NT-pro-BNP and all-cause mortality (HR (95% CI) 2.15 (1.19 to 3.88)). The association between NT-pro-BNP and CVD mortality was no longer significant in the model (model D) that included both CVD and IP related factors (HR (95% CI) 2.18 (0.76 to 6.27)) (table 3).
Table 3: Raised NT-pro-BNP (≥100 pg/ml) level associations with increased all-cause and cardiovascular mortality in early inflammatory polyarthritis. *Fully adjusted model included age at recruitment, gender, hypertension, BMI, diabetes, smoking status, prior CVD, RF and/or ACPA status, HAQ score, tender joint count and CRP. ACPA, antibodies to citrullinated protein antigens; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; HAQ, Health Assessment Questionnaire; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.
We examined the additional prognostic value of NT-pro-BNP on mortality. The model predicting overall mortality using only age and gender had a C-index of 0.787 which increased to 0.796 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.15). The addition of NT-pro-BNP to standard CVD risk factors increased the C-index to 0.812, which was a statistically significant increase from the model including CVD risk factors (model B vs B plus NT-pro-BNP p=0.0014). The model predicting CVD mortality using only age and gender had a C-index of 0.817 which increased to 0.831 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.17). The addition of NT-pro-BNP to model B of standard CVD risk factors increased the C-index to 0.848, which was a statistically significant increase from the model including CVD risk factors alone (model B vs B plus NT-pro-BNP p=0.04).
Additional sensitivity analyses: The association of raised NT-pro-BNP with all-cause mortality remained in an analysis restricted to patients who were seronegative for RF and ACPA (n=339) (HR (95% CI) 6.36 (3.01 to 13.44), fully adjusted HR (95% CI) 2.94 (1.04 to 8.23)) and in those patients with no prior CVD (n=721) (HR (95% CI) 3.97 (2.38 to 6.62), fully adjusted HR (95% CI) 1.93 (1.00 to 3.72)), respectively. In these smaller subsets, the association with CVD mortality was not statistically significant (data on file).
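The incremental-value comparison above rests on Harrell's C-statistic. Below is a minimal sketch of how a C-index could be computed for two nested Cox models; the χ2 comparison between C-indices is not included, since the paper does not detail that procedure here, and all variable names and the input file are hypothetical.

```python
# Illustrative sketch only: Harrell's C-index for two nested Cox models.
# Column names and the file are hypothetical; this is not the authors' code.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

df = pd.read_csv("noar_followup.csv")   # hypothetical file

base_cols = ["time_years", "died", "age", "female", "bmi", "hypertension",
             "diabetes", "smoker", "prior_cvd"]
model_b = CoxPHFitter().fit(df[base_cols], "time_years", event_col="died")
model_b_bnp = CoxPHFitter().fit(df[base_cols + ["raised_bnp"]],
                                "time_years", event_col="died")

# A higher partial hazard means higher risk, so it is negated to give a score
# that is concordant with longer survival times.
for name, m in [("CVD risk factors", model_b), ("plus NT-pro-BNP", model_b_bnp)]:
    c = concordance_index(df["time_years"], -m.predict_partial_hazard(df), df["died"])
    print(name, round(c, 3))
```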
null
null
[ "Background", "Setting", "Data collection at recruitment", "Blood sample analysis", "Cardiovascular substudy", "Follow-up and mortality", "Statistical analyses", "Cross-sectional association of baseline NT-pro-BNP concentration with IP disease phenotype and CVD risk markers", "Cross-sectional association of baseline NT-pro-BNP concentration with subclinical atherosclerosis", "NT-pro-BNP levels and mortality", "Additional sensitivity analyses" ]
[ "N-terminal pro-brain-type natriuretic peptide (NT-pro-BNP) is an inactive and stable amino acid fragment cosecreted, alongside the neuroendocrine peptide BNP, from the ventricular cardiac myocytes. It is released in response to left ventricular strain or ischaemia and has been found to be an important biomarker for left ventricular systolic dysfunction and left ventricular stress in the general population.1 Levels are also elevated even in those without clinical cardiovascular disease (CVD) and even minimally elevated NT-pro-BNP levels are predictive of future CVD and mortality in a range of cohort and general healthy population studies.2–5 These observations have also been confirmed in a meta-analysis of 40 studies involving 87 474 subjects and 10 625 incident CVD outcomes.6 In clinical practice, an NT-pro-BNP level of ≥100 pg/ml has high sensitivity and specificity for congestive heart failure (CHF) and can be used to stratify patients at high risk of cardiac failure in the acute setting.7–9 Circulating NT-pro-BNP is also associated with pro-inflammatory cytokines such as tumour necrosis factor (TNF) and interleukin 6 (IL-6).10 Acute inflammatory conditions including pneumonia and septic shock are also associated with raised NT-pro-BNP levels11–17 and TNF blockade in rheumatoid arthritis (RA) lowers NT-pro-BNP, suggesting a link between inflammation and cardiac stress. This has raised interest in the role of NT-pro-BNP in chronic inflammatory conditions such as inflammatory polyarthritis (IP).\nIP and RA are associated with an excess CVD risk including coronary heart disease, stroke and CHF.18–21 In two recent meta-analyses, the standardised mortality ratio (95% CI) for coronary heart disease and stroke in RA was 1.50 (1.39 to 1.61)22 and 1.46 (1.31 to 1.63),23 respectively. CHF has also been noted to be a common cause of death in RA and chronic inflammation may contribute to this outcome.24\nA small number of studies have examined the potential relevance of NT-pro-BNP levels in RA patients. Crowson et al25 noted higher NT-pro-BNP levels in RA compared with population controls. In established RA, NT-pro-BNP was associated with disease activity, disease duration as well as with C reactive protein (CRP), TNF and IL-6 concentrations.26–29 One study from our group found that a reduction in NT-pro-BNP correlated with the reduction in erythrocyte sedimentation rate observed after starting adalimumab therapy.30 A further study also noted an association between NT-pro-BNP and carotid intima-medial thickness (cIMT).28 To date, only one study has looked at the predictive value of NT-pro-BNP levels with future CVD mortality in established RA;29 however, little is known about NT-pro-BNP in early IP patients.\nThe aim of this study was to measure serum NT-pro-BNP levels in a large, well characterised inception cohort of patients with early IP. First, we aimed to examine the baseline association of NT-pro-BNP levels with IP disease phenotype, clinical CVD risk markers and subclinical atherosclerosis surrogates. Second, we also examined the predictive value of raised NT-pro-BNP levels for future all-cause and CVD related mortality.", "Subjects aged 16 years and older with early IP (≥2 joints swollen for ≥4 weeks) were recruited to the Norfolk Arthritis Register (NOAR) from January 2000 to December 2008. 
In this study, we restricted inclusion to subjects with ≤24 months’ symptom duration.", "All subjects underwent a clinical assessment including swollen and tender joint count (51 joints), measurement of height and weight to calculate body mass index (BMI). Smoking status was recorded and patients were categorised into never smokers, previous smokers or current smokers. Subjects were specifically asked about past history of CVD events such as angina, heart attacks and heart failure, as well as current prescribed and over-the-counter medications and any history of disease modifying antirheumatic drugs use. All patients completed the Health Assessment Questionnaire (HAQ). Hypertension was defined as being present if patients reported a physician diagnosis of hypertension or if they were taking antihypertensive therapy at the baseline clinical assessment. Diabetes mellitus was also defined by patient self-report of a physician diagnosis or if they were taking oral hypoglycaemic agents or injected insulin.", "At recruitment blood samples were collected, aliquoted and frozen at −80°C in Norfolk before being transported to the Arthritis Research UK Epidemiology Unit in Manchester. A Nephrostar Galaxy automated analyser was used to determine CRP concentration (BMG Labtech, Aylesbury, UK); this was used to calculate the 28-joint disease activity score (DAS28CRP). Rheumatoid factor (RF) was measured using a particle enhanced immunoturbidimetric assay where >40 iU/ml was considered positive for RF (BMG Labtech). Antibodies to citrullinated protein antigens (ACPA) were measured using the Axis-Shield DIASTAT kit (Axis-Shield, Dundee, UK) where >5 U/ml was considered positive for ACPA. Patients were considered ‘seropositive’ if they had a positive RF and/or ACPA.\nSerum NT-pro-BNP concentrations were determined using the Elecsys 2010 electrochemiluminescence method (Roche Diagnostics, Burgess Hill, UK), calibrated using the manufacturer's reagents at the University of Glasgow (PW, NS). Manufacturer's controls were used with limits of acceptability defined by the manufacturer. The limit of detection was 5 pg/ml. Low control coefficient of variance (CV) was 6.7% and high control CV was 4.9%. Patients were classified as having a raised NT-pro-BNP level if their NT-pro-BNP measurement was ≥100 pg/ml as per local clinically used threshold and published data.7–9", "In a subset of the main cohort, consecutive patients (n=327) recruited between 2004 and 2008 and aged 18–65 years at cohort entry were invited to participate in a detailed cardiovascular assessment. A fasting blood sample was used to measure the lipid profile and plasma glucose. Blood pressure was measured in each arm and the higher of the two measurements was used in analyses. For the purposes of this study, however, hypertension was defined in all cases as described above. They also underwent a B-mode Doppler carotid ultrasound examination at the Department of Vascular Radiology at the Norfolk and Norwich University Hospital. The images were stored on super video home system (VHS) video and analysed at The University of Manchester, Department of Vascular Surgery using a standardised research protocol. Preliminary validation has revealed good intraobserver correlation using this technique.31 Briefly, the right and left common carotid artery, carotid bulb and the first 1.5 cm of the internal and external carotid arteries were examined in longitudinal and cross-sectional planes. 
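The serological and biomarker classifications above (RF >40 iU/ml, ACPA >5 U/ml, NT-pro-BNP ≥100 pg/ml) are simple threshold checks. The following minimal Python sketch shows how the derived flags might be constructed from raw assay values; the DataFrame and column names are illustrative assumptions, not the study's actual data handling.

```python
# Minimal sketch (not the authors' code): deriving the binary markers defined
# above from raw assay values. Column names and example values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "rf_iu_ml":       [12.0, 85.0, 55.0],   # rheumatoid factor, iU/ml
    "acpa_u_ml":      [1.2, 3.0, 9.8],      # ACPA, U/ml
    "ntprobnp_pg_ml": [42.0, 160.0, 95.0],  # NT-pro-BNP, pg/ml
})

df["rf_positive"]     = df["rf_iu_ml"] > 40          # >40 iU/ml considered RF positive
df["acpa_positive"]   = df["acpa_u_ml"] > 5          # >5 U/ml considered ACPA positive
df["seropositive"]    = df["rf_positive"] | df["acpa_positive"]
df["raised_ntprobnp"] = df["ntprobnp_pg_ml"] >= 100  # clinical threshold used in the study

print(df[["seropositive", "raised_ntprobnp"]])
```

The same pattern extends to any other dichotomised marker used in the later models.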
Carotid IMT measurements were made in a longitudinal plane at a point of maximum thickness on the far wall of the common carotid artery along a 1 cm section of the artery proximal to the carotid bulb. Measurements were repeated three times on each side, unfreezing the image on each occasion and relocating the maximal cIMT. The average of these three measurements was then calculated and used as the mean cIMT.32 The higher of the right or left mean cIMT was used as the cIMT for analyses. Carotid plaque was defined as present if any of the following parameters was met and then graded according to standard definitions:33\nplaque present with <30% vessel diameter loss\n30%–50% vessel diameter loss\n>50% vessel diameter loss.\nTotal cholesterol, high density lipoprotein (HDL) and triglycerides were assayed on fresh fasting serum using the cholesterol oxidase phenol aminophenazone (CHOD-PAP), homogeneous direct (Abbott Diagnostics, Berkshire, UK) and glycerol-3-phosphate oxidase (GPO-PAP) methods, respectively, at the Norfolk and Norwich University Hospital. Low density lipoprotein (LDL) levels were mathematically derived using the Friedewald equation.", "In NOAR all patients are flagged with the National Health Service Information Service (NHS-IS). Patients were followed up until March 2010, death or embarkation, whichever came first. Death certificates were provided by the NHS-IS for all those who had died. For this study we took the underlying cause of death, coded using ICD10,34 as the main cause of death. CV deaths were defined as those where the underlying cause of death fell within chapter I of ICD10. All patients provided written informed consent to take part and the study was approved by the Norfolk Research Ethics Committee (REC no. 2003/075).", "As the NT-pro-BNP levels were positively skewed, we log-transformed these values for our analysis. The associations between log-transformed NT-pro-BNP and baseline demographic, IP and CVD parameters, including the presence or absence of carotid plaque at recruitment, were assessed using linear regression. These analyses were subsequently adjusted for age at recruitment and gender. Forward stepwise multivariable regression analysis that included age and gender was carried out to assess the key IP and CVD related factors associated with log NT-pro-BNP levels at baseline on univariate analyses (p<0.05).\nWe examined the association of raised NT-pro-BNP (≥100 pg/ml) with mortality (all-cause and CVD) using Cox proportional hazards regression. Analyses were adjusted for key CVD risk factors, namely, age, gender, BMI, prior CVD, diabetes, smoking status and hypertension. Subsequently, analyses were adjusted for IP related factors associated with log NT-pro-BNP in our cross-sectional analysis. Incremental prognostic information was calculated using Harrell's C-statistic (C-index) and these values were compared using the χ2 test. Sensitivity analyses were also undertaken in the subsets who (i) were negative for ACPA and RF and (ii) had no reported history of prior CVD at baseline recruitment.", "We studied 960 IP subjects with a median (IQR) age and symptom duration of 58 (47–68) years and 5.9 (3.2–10.5) months, respectively. The cohort included 617 (64%) female subjects; there were 211 (25%) current smokers.
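Two of the derived quantities described in the cardiovascular substudy above are simple arithmetic: LDL cholesterol via the Friedewald equation and the plaque grading bands. A hedged sketch follows; the function names, the mmol/l units and the ~4.5 mmol/l triglyceride cut-off for abandoning Friedewald are conventions assumed here rather than details stated in the text.

```python
# Illustrative sketch only: the Friedewald derivation of LDL cholesterol and the
# plaque grading bands described above. For mg/dl inputs the conventional
# triglyceride divisor is 5 rather than 2.2.

def friedewald_ldl(total_chol, hdl, triglycerides):
    """LDL-C = TC - HDL-C - TG/2.2 (all in mmol/l)."""
    if triglycerides > 4.5:
        return None  # Friedewald is generally considered unreliable here; measure directly
    return total_chol - hdl - triglycerides / 2.2

def plaque_grade(diameter_loss_pct):
    """Grade carotid plaque by percentage vessel diameter loss."""
    if diameter_loss_pct is None:
        return "no plaque"
    if diameter_loss_pct < 30:
        return "plaque, <30% diameter loss"
    if diameter_loss_pct <= 50:
        return "plaque, 30-50% diameter loss"
    return "plaque, >50% diameter loss"

print(friedewald_ldl(5.2, 1.3, 1.8))   # ~3.08 mmol/l
print(plaque_grade(35))                # "plaque, 30-50% diameter loss"
```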
In all, 484/823 (59%) were positive for RF and/or ACPA and 436 (45%) fulfilled the 1987 American College for Rheumatology Criteria for RA at first assessment (table 1). A history of prior CVD was reported by 163 (17%) patients (table 1). The demographic, IP phenotype and CVD parameters were comparable in the subset of 327 patients who had carotid Doppler scans relative to the entire cohort, with the exception of a younger median age at recruitment into this substudy as per protocol (table 1).\nBaseline characteristics of the IP cohort and the CVD substudy population\n*Statistically significantly different at p<0.05.\nACPA, anticitrulinated protein antibody; ACR, American College for Rheumatology; BMI, body mass index; BP, blood pressure; cIMT, carotid intima-medial thickness; COX-2, cyclo-oxygenase-2 inhibitor; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; IP, inflammatory polyarthritis; mm Hg, millimetres of mercury; NA, data not available; NSAID, non-steroidal anti-inflammatory drug; RA, rheumatoid arthritis; RF, rheumatoid factor.\nNT-pro-BNP levels were positively skewed with a median (IQR) of 74 (38–153) pg/ml; 373 (39%) patients had NT-pro-BNP levels ≥100 pg/ml, including 89 of 163 (55%) patients with a prior history of CVD. In univariate linear regression analyses, higher log NT-pro-BNP levels were associated with being older at recruitment (β-coefficient (95% CI) 0.035 (0.031 to 0.039) per year) and female (β-coefficient (95% CI) 0.211 (0.085 to 0.336)). In univariate age and gender adjusted analyses (table 1), a number of factors were associated with log NT-pro-BNP including prior CVD, classic risk factors such as previous smoking and hypertension and several IP related factors, including HAQ score, and CRP. Both BMI and tender joint counts were negatively associated with log NT-pro-BNP (table 2).\nCross-sectional association between log NT-pro-BNP and demographic and disease related factors measured at baseline n=960\n*Log NT-pro-BNP, NT-pro-BNP levels that have been log transformed.\n†Linear regression models adjusted for age and gender.\n‡Stepwise regression includes age, gender, HAQ score, tender joint count, CRP, hypertension, BMI, smoking status and prior CVD in the entire cohort.\nACPA, antibodies to citrullinated protein antigens; ACR, American College for Rheumatology; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; NA, not applicable; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.\nIn a multivariable forward stepwise linear regression analysis containing all risk markers showing significance on univariate analyses, age, prior CVD and hypertension all remained positively associated and BMI remained negatively associated with log NT-pro-BNP (table 2). IP related factors that remained associated on multivariable analysis were HAQ score (β-coefficient (95% CI) 0.250 (0.149 to 0.351)), CRP (β-coefficient (95% CI) 0.002 (0.001 to 0.004)) and tender joint count (β-coefficient (95% CI) −0.021 (−0.032 to −0.010)).", "The presence of plaque was significantly associated with log NT-pro-BNP levels (β-coefficient (95% CI) 0.253 (0.063 to 0.442)). 
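The age- and gender-adjusted linear regressions on log-transformed NT-pro-BNP reported above can be sketched as follows with statsmodels. The simulated data, the column names and the choice of CRP as the example covariate are assumptions for illustration only.

```python
# Sketch of the cross-sectional modelling described above (not the authors' code):
# log-transform NT-pro-BNP, then regress it on a risk marker adjusted for age and
# gender. The DataFrame is simulated and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 960
df = pd.DataFrame({
    "ntprobnp": np.exp(rng.normal(4.3, 1.0, n)),  # positively skewed, pg/ml
    "age":      rng.integers(18, 90, n),
    "female":   rng.integers(0, 2, n),
    "crp":      rng.gamma(2.0, 10.0, n),          # mg/l
})
df["log_ntprobnp"] = np.log(df["ntprobnp"])

# Age- and gender-adjusted association of CRP with log NT-pro-BNP
model = smf.ols("log_ntprobnp ~ crp + age + female", data=df).fit()
print(model.params["crp"], model.conf_int().loc["crp"].tolist())
```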
This association did not however persist after age and gender adjustment (β-coefficient (95% CI) 0.109 (−0.100 to 0.319)). Carotid IMT was not significantly associated with NT-pro-BNP levels on univariate analysis.", "During 5526 patient years of follow-up over a median (IQR) follow-up period of 5.5 (3.7–7.7) years, 92 (10%) IP subjects died. Of the 92 deaths, 31 (34%) had the main underlying cause of death as CVD (Chapter I of ICD10), 31 (34%) died due to neoplastic conditions (Chapter II of ICD10), 15 (16%) due to respiratory disease (Chapter X of ICD10) and the remaining 15 (16%) had other causes of death cited.\nThe overall and CVD related mortality rates were 16.8 (95% CI 16.7 to 16.9) and 5.8 per 1000 patient years (95% CI 5.6 to 5.9), respectively. Of the subjects with a baseline NT-pro-BNP level <100 pg/ml, 25 (4%) died during the follow-up period. This accounted for 27% of all deaths. In subjects with NT-pro-BNP ≥100 pg/ml, 67 (18%) died which accounted for 73% of all deaths in the cohort.\nIn a Cox proportional hazards model adjusted for age and gender (table 3, model A), NT-pro-BNP ≥100 pg/ml was associated with all-cause (HR (95% CI) 2.36 (1.42 to 3.94)) and CVD mortality (HR (95% CI) 3.40 (1.28 to 9.03)). These associations remained when we adjusted for additional CVD risk factors (model B) or additional IP related factors (model C) (table 3). Our final adjusted model (model D) also demonstrated an independent association between NT-pro-BNP and all-cause mortality (HR (95% CI) 2.15 (1.19 to 3.88)). The association between NT-pro-BNP and CVD mortality was no longer significant in the model (model D) that included both CVD and IP related factors (HR (95% CI) 2.18 (0.76 to 6.27)) (table 3).\nRaised NT-pro-BNP (≥100 pg/ml) level associations with increased all-cause and cardiovascular mortality in early inflammatory polyarthritis\n*Fully adjusted model included age at recruitment, gender, hypertension, BMI, diabetes, smoking status, prior CVD, RF and/or ACPA status, HAQ score, tender joint count and CRP.\nACPA, antibodies to citrullinated protein antigens; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; HAQ, Health Assessment Questionnaire; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.\nWe examined the additional prognostic value of NT-pro-BNP on mortality. The model predicting overall mortality using only age and gender had a C-index of 0.787 which increased to 0.796 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.15). The addition of NT-pro-BNP to standard CVD risk factors increased the C-index to 0.812 which was a statistically significant increase from the model including CVD risk factors (model B vs B plus NT-pro-BNP p=0.0014). The model predicting CVD mortality using only age and gender had a C-index of 0.817 which increased to 0.831 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.17). 
The addition of NT-pro-BNP to model B of standard CVD risk factors increased the C-index to 0.848 which was a statistically significant increase from the model including CVD risk factors alone (model B vs B plus NT-pro-BNP p=0.04).", "The association of raised NT-pro-BNP with all-cause mortality remained in an analysis restricted to patients who were seronegative for RF and ACPA (n=339) (HR (95% CI) 6.36 (3.01 to 13.44), fully adjusted HR (95% CI) 2.94 (1.04 to 8.23)) and in those patients with no prior CVD (n=721) (HR (95% CI) 3.97 (2.38 to 6.62), fully adjusted HR (95% CI) 1.93 (1.00 to 3.72)), respectively. In these smaller subsets, the association with CVD mortality was not statistically significant (data on file)." ]
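The survival modelling described in the statistical analyses above could be reproduced along these lines. The lifelines package is used purely for illustration (the paper does not state its software), and the simulated cohort and column names are assumptions; 'raised_ntprobnp' stands for the ≥100 pg/ml flag.

```python
# Rough sketch, not the authors' implementation: Cox proportional hazards model of
# all-cause mortality adjusted for standard CVD risk factors plus the NT-pro-BNP flag.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 960
df = pd.DataFrame({
    "followup_years":  rng.uniform(0.5, 10.0, n),
    "died":            rng.integers(0, 2, n),
    "raised_ntprobnp": rng.integers(0, 2, n),
    "age":             rng.integers(18, 90, n),
    "female":          rng.integers(0, 2, n),
    "bmi":             rng.normal(27, 4, n),
    "prior_cvd":       rng.integers(0, 2, n),
    "diabetes":        rng.integers(0, 2, n),
    "current_smoker":  rng.integers(0, 2, n),
    "hypertension":    rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()                                  # full table of hazard ratios
print(cph.hazard_ratios_["raised_ntprobnp"])         # HR for the NT-pro-BNP flag
```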
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Setting", "Data collection at recruitment", "Blood sample analysis", "Cardiovascular substudy", "Follow-up and mortality", "Statistical analyses", "Results", "Cross-sectional association of baseline NT-pro-BNP concentration with IP disease phenotype and CVD risk markers", "Cross-sectional association of baseline NT-pro-BNP concentration with subclinical atherosclerosis", "NT-pro-BNP levels and mortality", "Additional sensitivity analyses", "Discussion" ]
[ "N-terminal pro-brain-type natriuretic peptide (NT-pro-BNP) is an inactive and stable amino acid fragment cosecreted, alongside the neuroendocrine peptide BNP, from the ventricular cardiac myocytes. It is released in response to left ventricular strain or ischaemia and has been found to be an important biomarker for left ventricular systolic dysfunction and left ventricular stress in the general population.1 Levels are also elevated even in those without clinical cardiovascular disease (CVD) and even minimally elevated NT-pro-BNP levels are predictive of future CVD and mortality in a range of cohort and general healthy population studies.2–5 These observations have also been confirmed in a meta-analysis of 40 studies involving 87 474 subjects and 10 625 incident CVD outcomes.6 In clinical practice, an NT-pro-BNP level of ≥100 pg/ml has high sensitivity and specificity for congestive heart failure (CHF) and can be used to stratify patients at high risk of cardiac failure in the acute setting.7–9 Circulating NT-pro-BNP is also associated with pro-inflammatory cytokines such as tumour necrosis factor (TNF) and interleukin 6 (IL-6).10 Acute inflammatory conditions including pneumonia and septic shock are also associated with raised NT-pro-BNP levels11–17 and TNF blockade in rheumatoid arthritis (RA) lowers NT-pro-BNP, suggesting a link between inflammation and cardiac stress. This has raised interest in the role of NT-pro-BNP in chronic inflammatory conditions such as inflammatory polyarthritis (IP).\nIP and RA are associated with an excess CVD risk including coronary heart disease, stroke and CHF.18–21 In two recent meta-analyses, the standardised mortality ratio (95% CI) for coronary heart disease and stroke in RA was 1.50 (1.39 to 1.61)22 and 1.46 (1.31 to 1.63),23 respectively. CHF has also been noted to be a common cause of death in RA and chronic inflammation may contribute to this outcome.24\nA small number of studies have examined the potential relevance of NT-pro-BNP levels in RA patients. Crowson et al25 noted higher NT-pro-BNP levels in RA compared with population controls. In established RA, NT-pro-BNP was associated with disease activity, disease duration as well as with C reactive protein (CRP), TNF and IL-6 concentrations.26–29 One study from our group found that a reduction in NT-pro-BNP correlated with the reduction in erythrocyte sedimentation rate observed after starting adalimumab therapy.30 A further study also noted an association between NT-pro-BNP and carotid intima-medial thickness (cIMT).28 To date, only one study has looked at the predictive value of NT-pro-BNP levels with future CVD mortality in established RA;29 however, little is known about NT-pro-BNP in early IP patients.\nThe aim of this study was to measure serum NT-pro-BNP levels in a large, well characterised inception cohort of patients with early IP. First, we aimed to examine the baseline association of NT-pro-BNP levels with IP disease phenotype, clinical CVD risk markers and subclinical atherosclerosis surrogates. Second, we also examined the predictive value of raised NT-pro-BNP levels for future all-cause and CVD related mortality.", " Setting Subjects aged 16 years and older with early IP (≥2 joints swollen for ≥4 weeks) were recruited to the Norfolk Arthritis Register (NOAR) from January 2000 to December 2008. 
In this study, we restricted inclusion to subjects with ≤24 months’ symptom duration.\nSubjects aged 16 years and older with early IP (≥2 joints swollen for ≥4 weeks) were recruited to the Norfolk Arthritis Register (NOAR) from January 2000 to December 2008. In this study, we restricted inclusion to subjects with ≤24 months’ symptom duration.\n Data collection at recruitment All subjects underwent a clinical assessment including swollen and tender joint count (51 joints), measurement of height and weight to calculate body mass index (BMI). Smoking status was recorded and patients were categorised into never smokers, previous smokers or current smokers. Subjects were specifically asked about past history of CVD events such as angina, heart attacks and heart failure, as well as current prescribed and over-the-counter medications and any history of disease modifying antirheumatic drugs use. All patients completed the Health Assessment Questionnaire (HAQ). Hypertension was defined as being present if patients reported a physician diagnosis of hypertension or if they were taking antihypertensive therapy at the baseline clinical assessment. Diabetes mellitus was also defined by patient self-report of a physician diagnosis or if they were taking oral hypoglycaemic agents or injected insulin.\nAll subjects underwent a clinical assessment including swollen and tender joint count (51 joints), measurement of height and weight to calculate body mass index (BMI). Smoking status was recorded and patients were categorised into never smokers, previous smokers or current smokers. Subjects were specifically asked about past history of CVD events such as angina, heart attacks and heart failure, as well as current prescribed and over-the-counter medications and any history of disease modifying antirheumatic drugs use. All patients completed the Health Assessment Questionnaire (HAQ). Hypertension was defined as being present if patients reported a physician diagnosis of hypertension or if they were taking antihypertensive therapy at the baseline clinical assessment. Diabetes mellitus was also defined by patient self-report of a physician diagnosis or if they were taking oral hypoglycaemic agents or injected insulin.\n Blood sample analysis At recruitment blood samples were collected, aliquoted and frozen at −80°C in Norfolk before being transported to the Arthritis Research UK Epidemiology Unit in Manchester. A Nephrostar Galaxy automated analyser was used to determine CRP concentration (BMG Labtech, Aylesbury, UK); this was used to calculate the 28-joint disease activity score (DAS28CRP). Rheumatoid factor (RF) was measured using a particle enhanced immunoturbidimetric assay where >40 iU/ml was considered positive for RF (BMG Labtech). Antibodies to citrullinated protein antigens (ACPA) were measured using the Axis-Shield DIASTAT kit (Axis-Shield, Dundee, UK) where >5 U/ml was considered positive for ACPA. Patients were considered ‘seropositive’ if they had a positive RF and/or ACPA.\nSerum NT-pro-BNP concentrations were determined using the Elecsys 2010 electrochemiluminescence method (Roche Diagnostics, Burgess Hill, UK), calibrated using the manufacturer's reagents at the University of Glasgow (PW, NS). Manufacturer's controls were used with limits of acceptability defined by the manufacturer. The limit of detection was 5 pg/ml. Low control coefficient of variance (CV) was 6.7% and high control CV was 4.9%. 
Patients were classified as having a raised NT-pro-BNP level if their NT-pro-BNP measurement was ≥100 pg/ml as per local clinically used threshold and published data.7–9\nAt recruitment blood samples were collected, aliquoted and frozen at −80°C in Norfolk before being transported to the Arthritis Research UK Epidemiology Unit in Manchester. A Nephrostar Galaxy automated analyser was used to determine CRP concentration (BMG Labtech, Aylesbury, UK); this was used to calculate the 28-joint disease activity score (DAS28CRP). Rheumatoid factor (RF) was measured using a particle enhanced immunoturbidimetric assay where >40 iU/ml was considered positive for RF (BMG Labtech). Antibodies to citrullinated protein antigens (ACPA) were measured using the Axis-Shield DIASTAT kit (Axis-Shield, Dundee, UK) where >5 U/ml was considered positive for ACPA. Patients were considered ‘seropositive’ if they had a positive RF and/or ACPA.\nSerum NT-pro-BNP concentrations were determined using the Elecsys 2010 electrochemiluminescence method (Roche Diagnostics, Burgess Hill, UK), calibrated using the manufacturer's reagents at the University of Glasgow (PW, NS). Manufacturer's controls were used with limits of acceptability defined by the manufacturer. The limit of detection was 5 pg/ml. Low control coefficient of variance (CV) was 6.7% and high control CV was 4.9%. Patients were classified as having a raised NT-pro-BNP level if their NT-pro-BNP measurement was ≥100 pg/ml as per local clinically used threshold and published data.7–9\n Cardiovascular substudy In a subset of the main cohort, consecutive patients (n=327) recruited between 2004 and 2008 and aged 18–65 years at cohort entry were invited to participate in a detailed cardiovascular assessment. A fasting blood sample was used to measure the lipid profile and plasma glucose. Blood pressure was measured in each arm and the higher of the two measurements was used in analyses. For the purposes of this study, however, hypertension was defined in all cases as described above. They also underwent a B-mode Doppler carotid ultrasound examination at the Department of Vascular Radiology at the Norfolk and Norwich University Hospital. The images were stored on super video home system (VHS) video and analysed at The University of Manchester, Department of Vascular Surgery using a standardised research protocol. Preliminary validation has revealed good intraobserver correlation using this technique.31 Briefly, the right and left common carotid artery, carotid bulb and the first 1.5 cm of the internal and external carotid arteries were examined in longitudinal and cross-sectional planes. Carotid IMT measurements were made in a longitudinal plane at a point of maximum thickness on the far wall of the common carotid artery along a 1 cm section of the artery proximal to the carotid bulb. Measurements were repeated three times on each side, unfreezing the image on each occasion and relocating the maximal cIMT. The average of these three measurements was then calculated and used as the mean cIMT.32 The higher of the right or left mean cIMT was used as the cIMT for analyses. 
Carotid plaque was defined as present if any of the following parameters was met and then graded according to standard definitions:33\nplaque present with <30% vessel diameter loss\n30%–50% vessel diameter loss\n>50% vessel diameter loss.\nTotal cholesterol, high density lipoprotein (HDL) and triglycerides were assayed on fresh fasting serum using the cholesterol oxidase phenol aminophenazone (CHOD-PAP), homogeneous direct (Abbott Diagnostics, Berkshire, UK) and glycerol-3-phosphate oxidase (GPO-PAP) methods, respectively, at the Norfolk and Norwich University Hospital. Low density lipoprotein (LDL) levels were mathematically derived using the Friedewald equation.\n Follow-up and mortality In NOAR all patients are flagged with the National Health Service Information Service (NHS-IS). Patients were followed up until March 2010, death or embarkation, whichever came first. Death certificates were provided by the NHS-IS for all those who had died. For this study we took the underlying cause of death, coded using ICD10,34 as the main cause of death.
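A hedged sketch of the follow-up bookkeeping described above: censoring at March 2010, death or embarkation (whichever came first) and flagging cardiovascular deaths from the underlying ICD-10 cause. 'Chapter I of ICD10' is interpreted here as underlying-cause codes beginning with the letter I, that is, diseases of the circulatory system; the dates and column names are invented for illustration.

```python
# Illustrative sketch, not the registry's actual pipeline.
import pandas as pd

ADMIN_CENSOR = pd.Timestamp("2010-03-31")  # administrative censoring date

df = pd.DataFrame({
    "recruited":        pd.to_datetime(["2001-05-10", "2004-11-02", "2007-03-20"]),
    "death_date":       pd.to_datetime(["2008-01-15", None, None]),
    "embarked":         pd.to_datetime([None, "2009-06-01", None]),
    "underlying_cause": ["I21.9", None, None],   # ICD-10 underlying cause of death
})

# End of follow-up: death, embarkation or administrative censoring, whichever first
end = df[["death_date", "embarked"]].min(axis=1).fillna(ADMIN_CENSOR)
end = end.where(end <= ADMIN_CENSOR, ADMIN_CENSOR)

df["followup_years"] = (end - df["recruited"]).dt.days / 365.25
df["died"] = df["death_date"].notna() & (df["death_date"] <= ADMIN_CENSOR)
df["cvd_death"] = df["died"] & df["underlying_cause"].str.startswith("I", na=False)

print(df[["followup_years", "died", "cvd_death"]])
```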
CV deaths were defined as those where the underlying cause of death fell within chapter I of ICD10. All patients provided written informed consent to take part and the study was approved by the Norfolk Research Ethics Committee (REC no. 2003/075).\nIn NOAR all patients are flagged with the National Health Service Information Service (NHS-IS). Patients were followed up until March 2010, death or embarkation, whichever came first. Death certificates were provided by the NHS-IS for all those who had died. For this study we took the underlying cause of death, coded using ICD10,34 as the main cause of death. CV deaths were defined as those where the underlying cause of death fell within chapter I of ICD10. All patients provided written informed consent to take part and the study was approved by the Norfolk Research Ethics Committee (REC no. 2003/075).\n Statistical analyses As the NT-pro-BNP levels were positively skewed, we log-transformed these values for our analysis. The associations among log-transformed NT-pro-BNP and baseline demographic, IP and CVD parameters including the presence or absence of carotid plaque at recruitment were assessed using linear regression. These analyses were subsequently adjusted for age at recruitment and gender. Forward stepwise multivariable regression analysis was carried out that included age and gender to assess key IP and CVD related factors associated with log NT-pro-BNP levels at baseline on univariate analyses (p<0.05).\nWe examined the association of raised NT-pro-BNP (≥100 pg/ml) with mortality (all-cause and CVD) using Cox proportional hazards regression. Analyses were adjusted for key CVD risk factors, namely, age, gender, BMI, prior CVD, diabetes, smoking status and hypertension. Subsequently analyses were adjusted for IP related factors associated with log NT-pro-BNP in our cross-sectional analysis. Incremental prognostic information was calculated using the Harrell's C-statistic (C-index) and these values were compared using the χ2 test. Sensitivity analyses were also undertaken in the subsets who (i) were negative for ACPA and RF and (ii) had no reported history of prior CVD at baseline recruitment.\nAs the NT-pro-BNP levels were positively skewed, we log-transformed these values for our analysis. The associations among log-transformed NT-pro-BNP and baseline demographic, IP and CVD parameters including the presence or absence of carotid plaque at recruitment were assessed using linear regression. These analyses were subsequently adjusted for age at recruitment and gender. Forward stepwise multivariable regression analysis was carried out that included age and gender to assess key IP and CVD related factors associated with log NT-pro-BNP levels at baseline on univariate analyses (p<0.05).\nWe examined the association of raised NT-pro-BNP (≥100 pg/ml) with mortality (all-cause and CVD) using Cox proportional hazards regression. Analyses were adjusted for key CVD risk factors, namely, age, gender, BMI, prior CVD, diabetes, smoking status and hypertension. Subsequently analyses were adjusted for IP related factors associated with log NT-pro-BNP in our cross-sectional analysis. Incremental prognostic information was calculated using the Harrell's C-statistic (C-index) and these values were compared using the χ2 test. 
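Harrell's C-statistic, used above to quantify incremental prognostic information, can be read directly off a fitted Cox model. The sketch below compares a model of standard CVD risk factors with and without the raised NT-pro-BNP flag on simulated data; the formal chi-squared comparison of the two C-indices used in the paper is not reproduced here.

```python
# Illustrative only: C-index with and without the NT-pro-BNP flag (lifelines).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 960
df = pd.DataFrame({
    "followup_years":  rng.uniform(0.5, 10.0, n),
    "died":            rng.integers(0, 2, n),
    "raised_ntprobnp": rng.integers(0, 2, n),
    "age":             rng.integers(18, 90, n),
    "female":          rng.integers(0, 2, n),
    "bmi":             rng.normal(27, 4, n),
    "prior_cvd":       rng.integers(0, 2, n),
    "diabetes":        rng.integers(0, 2, n),
    "current_smoker":  rng.integers(0, 2, n),
    "hypertension":    rng.integers(0, 2, n),
})

base_covs = ["age", "female", "bmi", "prior_cvd", "diabetes", "current_smoker", "hypertension"]

def c_index(data, covariates):
    cols = covariates + ["followup_years", "died"]
    model = CoxPHFitter().fit(data[cols], duration_col="followup_years", event_col="died")
    return model.concordance_index_

print("C-index, CVD risk factors only:      ", round(c_index(df, base_covs), 3))
print("C-index, plus raised NT-pro-BNP flag:", round(c_index(df, base_covs + ["raised_ntprobnp"]), 3))
```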
Sensitivity analyses were also undertaken in the subsets who (i) were negative for ACPA and RF and (ii) had no reported history of prior CVD at baseline recruitment.", "Subjects aged 16 years and older with early IP (≥2 joints swollen for ≥4 weeks) were recruited to the Norfolk Arthritis Register (NOAR) from January 2000 to December 2008. In this study, we restricted inclusion to subjects with ≤24 months’ symptom duration.", "All subjects underwent a clinical assessment including swollen and tender joint count (51 joints), measurement of height and weight to calculate body mass index (BMI). Smoking status was recorded and patients were categorised into never smokers, previous smokers or current smokers. Subjects were specifically asked about past history of CVD events such as angina, heart attacks and heart failure, as well as current prescribed and over-the-counter medications and any history of disease modifying antirheumatic drugs use. All patients completed the Health Assessment Questionnaire (HAQ). Hypertension was defined as being present if patients reported a physician diagnosis of hypertension or if they were taking antihypertensive therapy at the baseline clinical assessment. Diabetes mellitus was also defined by patient self-report of a physician diagnosis or if they were taking oral hypoglycaemic agents or injected insulin.", "At recruitment blood samples were collected, aliquoted and frozen at −80°C in Norfolk before being transported to the Arthritis Research UK Epidemiology Unit in Manchester. A Nephrostar Galaxy automated analyser was used to determine CRP concentration (BMG Labtech, Aylesbury, UK); this was used to calculate the 28-joint disease activity score (DAS28CRP). Rheumatoid factor (RF) was measured using a particle enhanced immunoturbidimetric assay where >40 iU/ml was considered positive for RF (BMG Labtech). Antibodies to citrullinated protein antigens (ACPA) were measured using the Axis-Shield DIASTAT kit (Axis-Shield, Dundee, UK) where >5 U/ml was considered positive for ACPA. Patients were considered ‘seropositive’ if they had a positive RF and/or ACPA.\nSerum NT-pro-BNP concentrations were determined using the Elecsys 2010 electrochemiluminescence method (Roche Diagnostics, Burgess Hill, UK), calibrated using the manufacturer's reagents at the University of Glasgow (PW, NS). Manufacturer's controls were used with limits of acceptability defined by the manufacturer. The limit of detection was 5 pg/ml. Low control coefficient of variance (CV) was 6.7% and high control CV was 4.9%. Patients were classified as having a raised NT-pro-BNP level if their NT-pro-BNP measurement was ≥100 pg/ml as per local clinically used threshold and published data.7–9", "In a subset of the main cohort, consecutive patients (n=327) recruited between 2004 and 2008 and aged 18–65 years at cohort entry were invited to participate in a detailed cardiovascular assessment. A fasting blood sample was used to measure the lipid profile and plasma glucose. Blood pressure was measured in each arm and the higher of the two measurements was used in analyses. For the purposes of this study, however, hypertension was defined in all cases as described above. They also underwent a B-mode Doppler carotid ultrasound examination at the Department of Vascular Radiology at the Norfolk and Norwich University Hospital. The images were stored on super video home system (VHS) video and analysed at The University of Manchester, Department of Vascular Surgery using a standardised research protocol. 
Preliminary validation has revealed good intraobserver correlation using this technique.31 Briefly, the right and left common carotid artery, carotid bulb and the first 1.5 cm of the internal and external carotid arteries were examined in longitudinal and cross-sectional planes. Carotid IMT measurements were made in a longitudinal plane at a point of maximum thickness on the far wall of the common carotid artery along a 1 cm section of the artery proximal to the carotid bulb. Measurements were repeated three times on each side, unfreezing the image on each occasion and relocating the maximal cIMT. The average of these three measurements was then calculated and used as the mean cIMT.32 The higher of the right or left mean cIMT was used as the cIMT for analyses. Carotid plaque was defined as present if any of the following parameters was met and then graded according to standard definitions:33\nplaque present with <30% vessel diameter loss\n30%–50% vessel diameter loss\n>50% vessel diameter loss.\nTotal cholesterol, high density lipoprotein (HDL) and triglycerides were assayed on fresh fasting serum using the cholesterol oxidase phenol aminophenazone (CHOD-PAP), homogeneous direct (Abbott Diagnostics, Berkshire, UK) and glycerol-3-phosphate oxidase (GPO-PAP) methods, respectively, at the Norfolk and Norwich University Hospital. Low density lipoprotein (LDL) levels were mathematically derived using the Friedewald equation.", "In NOAR all patients are flagged with the National Health Service Information Service (NHS-IS). Patients were followed up until March 2010, death or embarkation, whichever came first. Death certificates were provided by the NHS-IS for all those who had died. For this study we took the underlying cause of death, coded using ICD10,34 as the main cause of death. CV deaths were defined as those where the underlying cause of death fell within chapter I of ICD10. All patients provided written informed consent to take part and the study was approved by the Norfolk Research Ethics Committee (REC no. 2003/075).", "As the NT-pro-BNP levels were positively skewed, we log-transformed these values for our analysis. The associations between log-transformed NT-pro-BNP and baseline demographic, IP and CVD parameters, including the presence or absence of carotid plaque at recruitment, were assessed using linear regression. These analyses were subsequently adjusted for age at recruitment and gender. Forward stepwise multivariable regression analysis that included age and gender was carried out to assess the key IP and CVD related factors associated with log NT-pro-BNP levels at baseline on univariate analyses (p<0.05).\nWe examined the association of raised NT-pro-BNP (≥100 pg/ml) with mortality (all-cause and CVD) using Cox proportional hazards regression. Analyses were adjusted for key CVD risk factors, namely, age, gender, BMI, prior CVD, diabetes, smoking status and hypertension. Subsequently, analyses were adjusted for IP related factors associated with log NT-pro-BNP in our cross-sectional analysis. Incremental prognostic information was calculated using Harrell's C-statistic (C-index) and these values were compared using the χ2 test.
Sensitivity analyses were also undertaken in the subsets who (i) were negative for ACPA and RF and (ii) had no reported history of prior CVD at baseline recruitment.", " Cross-sectional association of baseline NT-pro-BNP concentration with IP disease phenotype and CVD risk markers We studied 960 IP subjects with a median (IQR) age and symptom duration of 58 (47–68) years and 5.9 (3.2–10.5) months, respectively. The cohort included 617 (64%) female subjects; there were 211 (25%) current smokers. In all, 484/823 (59%) were positive for RF and/or ACPA and 436 (45%) fulfilled the 1987 American College for Rheumatology Criteria for RA at first assessment (table 1). A history of prior CVD was reported by 163 (17%) patients (table 1). The demographic, IP phenotype and CVD parameters were comparable in the subset of 327 patients who had carotid Doppler scans relative to the entire cohort, with the exception of a younger median age at recruitment into this substudy as per protocol (table 1).\nBaseline characteristics of the IP cohort and the CVD substudy population\n*Statistically significantly different at p<0.05.\nACPA, anticitrulinated protein antibody; ACR, American College for Rheumatology; BMI, body mass index; BP, blood pressure; cIMT, carotid intima-medial thickness; COX-2, cyclo-oxygenase-2 inhibitor; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; IP, inflammatory polyarthritis; mm Hg, millimetres of mercury; NA, data not available; NSAID, non-steroidal anti-inflammatory drug; RA, rheumatoid arthritis; RF, rheumatoid factor.\nNT-pro-BNP levels were positively skewed with a median (IQR) of 74 (38–153) pg/ml; 373 (39%) patients had NT-pro-BNP levels ≥100 pg/ml, including 89 of 163 (55%) patients with a prior history of CVD. In univariate linear regression analyses, higher log NT-pro-BNP levels were associated with being older at recruitment (β-coefficient (95% CI) 0.035 (0.031 to 0.039) per year) and female (β-coefficient (95% CI) 0.211 (0.085 to 0.336)). In univariate age and gender adjusted analyses (table 1), a number of factors were associated with log NT-pro-BNP including prior CVD, classic risk factors such as previous smoking and hypertension and several IP related factors, including HAQ score, and CRP. 
Both BMI and tender joint counts were negatively associated with log NT-pro-BNP (table 2).\nCross-sectional association between log NT-pro-BNP and demographic and disease related factors measured at baseline n=960\n*Log NT-pro-BNP, NT-pro-BNP levels that have been log transformed.\n†Linear regression models adjusted for age and gender.\n‡Stepwise regression includes age, gender, HAQ score, tender joint count, CRP, hypertension, BMI, smoking status and prior CVD in the entire cohort.\nACPA, antibodies to citrullinated protein antigens; ACR, American College for Rheumatology; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; NA, not applicable; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.\nIn a multivariable forward stepwise linear regression analysis containing all risk markers showing significance on univariate analyses, age, prior CVD and hypertension all remained positively associated and BMI remained negatively associated with log NT-pro-BNP (table 2). IP related factors that remained associated on multivariable analysis were HAQ score (β-coefficient (95% CI) 0.250 (0.149 to 0.351)), CRP (β-coefficient (95% CI) 0.002 (0.001 to 0.004)) and tender joint count (β-coefficient (95% CI) −0.021 (−0.032 to −0.010)).\nWe studied 960 IP subjects with a median (IQR) age and symptom duration of 58 (47–68) years and 5.9 (3.2–10.5) months, respectively. The cohort included 617 (64%) female subjects; there were 211 (25%) current smokers. In all, 484/823 (59%) were positive for RF and/or ACPA and 436 (45%) fulfilled the 1987 American College for Rheumatology Criteria for RA at first assessment (table 1). A history of prior CVD was reported by 163 (17%) patients (table 1). The demographic, IP phenotype and CVD parameters were comparable in the subset of 327 patients who had carotid Doppler scans relative to the entire cohort, with the exception of a younger median age at recruitment into this substudy as per protocol (table 1).\nBaseline characteristics of the IP cohort and the CVD substudy population\n*Statistically significantly different at p<0.05.\nACPA, anticitrulinated protein antibody; ACR, American College for Rheumatology; BMI, body mass index; BP, blood pressure; cIMT, carotid intima-medial thickness; COX-2, cyclo-oxygenase-2 inhibitor; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; IP, inflammatory polyarthritis; mm Hg, millimetres of mercury; NA, data not available; NSAID, non-steroidal anti-inflammatory drug; RA, rheumatoid arthritis; RF, rheumatoid factor.\nNT-pro-BNP levels were positively skewed with a median (IQR) of 74 (38–153) pg/ml; 373 (39%) patients had NT-pro-BNP levels ≥100 pg/ml, including 89 of 163 (55%) patients with a prior history of CVD. In univariate linear regression analyses, higher log NT-pro-BNP levels were associated with being older at recruitment (β-coefficient (95% CI) 0.035 (0.031 to 0.039) per year) and female (β-coefficient (95% CI) 0.211 (0.085 to 0.336)). In univariate age and gender adjusted analyses (table 1), a number of factors were associated with log NT-pro-BNP including prior CVD, classic risk factors such as previous smoking and hypertension and several IP related factors, including HAQ score, and CRP. 
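The forward stepwise selection behind the multivariable model reported above can be approximated with a simple p-value-driven loop that always retains age and gender. The routine and the simulated data below are assumptions, since the exact selection algorithm and software used by the authors are not specified.

```python
# Hedged sketch of forward stepwise OLS selection by p-value (entry at p < 0.05),
# keeping age and gender in every model. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 960
df = pd.DataFrame({
    "age": rng.integers(18, 90, n), "female": rng.integers(0, 2, n),
    "haq": rng.uniform(0, 3, n), "tjc": rng.integers(0, 52, n),
    "crp": rng.gamma(2.0, 10.0, n), "bmi": rng.normal(27, 4, n),
    "hypertension": rng.integers(0, 2, n), "prior_cvd": rng.integers(0, 2, n),
})
df["log_ntprobnp"] = (0.035 * df["age"] + 0.25 * df["haq"] - 0.03 * df["bmi"]
                      + rng.normal(0, 1, n))

forced = ["age", "female"]
candidates = ["haq", "tjc", "crp", "hypertension", "bmi", "prior_cvd"]
selected = []

while candidates:
    pvals = {}
    for var in candidates:
        formula = "log_ntprobnp ~ " + " + ".join(forced + selected + [var])
        pvals[var] = smf.ols(formula, data=df).fit().pvalues[var]
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:
        break
    selected.append(best)
    candidates.remove(best)

print("retained alongside age and gender:", selected)
```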
Both BMI and tender joint counts were negatively associated with log NT-pro-BNP (table 2).\nCross-sectional association between log NT-pro-BNP and demographic and disease related factors measured at baseline n=960\n*Log NT-pro-BNP, NT-pro-BNP levels that have been log transformed.\n†Linear regression models adjusted for age and gender.\n‡Stepwise regression includes age, gender, HAQ score, tender joint count, CRP, hypertension, BMI, smoking status and prior CVD in the entire cohort.\nACPA, antibodies to citrullinated protein antigens; ACR, American College for Rheumatology; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; NA, not applicable; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.\nIn a multivariable forward stepwise linear regression analysis containing all risk markers showing significance on univariate analyses, age, prior CVD and hypertension all remained positively associated and BMI remained negatively associated with log NT-pro-BNP (table 2). IP related factors that remained associated on multivariable analysis were HAQ score (β-coefficient (95% CI) 0.250 (0.149 to 0.351)), CRP (β-coefficient (95% CI) 0.002 (0.001 to 0.004)) and tender joint count (β-coefficient (95% CI) −0.021 (−0.032 to −0.010)).\n Cross-sectional association of baseline NT-pro-BNP concentration with subclinical atherosclerosis The presence of plaque was significantly associated with log NT-pro-BNP levels (β-coefficient (95% CI) 0.253 (0.063 to 0.442)). This association did not however persist after age and gender adjustment (β-coefficient (95% CI) 0.109 (−0.100 to 0.319)). Carotid IMT was not significantly associated with NT-pro-BNP levels on univariate analysis.\nThe presence of plaque was significantly associated with log NT-pro-BNP levels (β-coefficient (95% CI) 0.253 (0.063 to 0.442)). This association did not however persist after age and gender adjustment (β-coefficient (95% CI) 0.109 (−0.100 to 0.319)). Carotid IMT was not significantly associated with NT-pro-BNP levels on univariate analysis.\n NT-pro-BNP levels and mortality During 5526 patient years of follow-up over a median (IQR) follow-up period of 5.5 (3.7–7.7) years, 92 (10%) IP subjects died. Of the 92 deaths, 31 (34%) had the main underlying cause of death as CVD (Chapter I of ICD10), 31 (34%) died due to neoplastic conditions (Chapter II of ICD10), 15 (16%) due to respiratory disease (Chapter X of ICD10) and the remaining 15 (16%) had other causes of death cited.\nThe overall and CVD related mortality rates were 16.8 (95% CI 16.7 to 16.9) and 5.8 per 1000 patient years (95% CI 5.6 to 5.9), respectively. Of the subjects with a baseline NT-pro-BNP level <100 pg/ml, 25 (4%) died during the follow-up period. This accounted for 27% of all deaths. In subjects with NT-pro-BNP ≥100 pg/ml, 67 (18%) died which accounted for 73% of all deaths in the cohort.\nIn a Cox proportional hazards model adjusted for age and gender (table 3, model A), NT-pro-BNP ≥100 pg/ml was associated with all-cause (HR (95% CI) 2.36 (1.42 to 3.94)) and CVD mortality (HR (95% CI) 3.40 (1.28 to 9.03)). These associations remained when we adjusted for additional CVD risk factors (model B) or additional IP related factors (model C) (table 3). 
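As an illustrative companion to the mortality split above (<100 vs ≥100 pg/ml), Kaplan-Meier curves by baseline NT-pro-BNP category could be drawn as follows. The paper itself reports Cox models rather than survival curves, and the data and column names here are simulated assumptions.

```python
# Illustration only: Kaplan-Meier survival by baseline NT-pro-BNP category.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(4)
n = 960
df = pd.DataFrame({
    "followup_years":  rng.uniform(0.5, 10.0, n),
    "died":            rng.integers(0, 2, n),
    "raised_ntprobnp": rng.integers(0, 2, n),
})

fig, ax = plt.subplots()
for flag, label in [(0, "NT-pro-BNP <100 pg/ml"), (1, "NT-pro-BNP >=100 pg/ml")]:
    grp = df[df["raised_ntprobnp"] == flag]
    KaplanMeierFitter().fit(grp["followup_years"], grp["died"], label=label) \
                       .plot_survival_function(ax=ax)
ax.set_xlabel("Years since recruitment")
ax.set_ylabel("Survival probability")
plt.show()
```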
Our final adjusted model (model D) also demonstrated an independent association between NT-pro-BNP and all-cause mortality (HR (95% CI) 2.15 (1.19 to 3.88)). The association between NT-pro-BNP and CVD mortality was no longer significant in the model (model D) that included both CVD and IP related factors (HR (95% CI) 2.18 (0.76 to 6.27)) (table 3).\nRaised NT-pro-BNP (≥100 pg/ml) level associations with increased all-cause and cardiovascular mortality in early inflammatory polyarthritis\n*Fully adjusted model included age at recruitment, gender, hypertension, BMI, diabetes, smoking status, prior CVD, RF and/or ACPA status, HAQ score, tender joint count and CRP.\nACPA, antibodies to citrullinated protein antigens; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; HAQ, Health Assessment Questionnaire; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.\nWe examined the additional prognostic value of NT-pro-BNP on mortality. The model predicting overall mortality using only age and gender had a C-index of 0.787 which increased to 0.796 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.15). The addition of NT-pro-BNP to standard CVD risk factors increased the C-index to 0.812 which was a statistically significant increase from the model including CVD risk factors (model B vs B plus NT-pro-BNP p=0.0014). The model predicting CVD mortality using only age and gender had a C-index of 0.817 which increased to 0.831 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.17). The addition of NT-pro-BNP to model B of standard CVD risk factors increased the C-index to 0.848 which was a statistically significant increase from the model including CVD risk factors alone (model B vs B plus NT-pro-BNP p=0.04).\nDuring 5526 patient years of follow-up over a median (IQR) follow-up period of 5.5 (3.7–7.7) years, 92 (10%) IP subjects died. Of the 92 deaths, 31 (34%) had the main underlying cause of death as CVD (Chapter I of ICD10), 31 (34%) died due to neoplastic conditions (Chapter II of ICD10), 15 (16%) due to respiratory disease (Chapter X of ICD10) and the remaining 15 (16%) had other causes of death cited.\nThe overall and CVD related mortality rates were 16.8 (95% CI 16.7 to 16.9) and 5.8 per 1000 patient years (95% CI 5.6 to 5.9), respectively. Of the subjects with a baseline NT-pro-BNP level <100 pg/ml, 25 (4%) died during the follow-up period. This accounted for 27% of all deaths. In subjects with NT-pro-BNP ≥100 pg/ml, 67 (18%) died which accounted for 73% of all deaths in the cohort.\nIn a Cox proportional hazards model adjusted for age and gender (table 3, model A), NT-pro-BNP ≥100 pg/ml was associated with all-cause (HR (95% CI) 2.36 (1.42 to 3.94)) and CVD mortality (HR (95% CI) 3.40 (1.28 to 9.03)). These associations remained when we adjusted for additional CVD risk factors (model B) or additional IP related factors (model C) (table 3). Our final adjusted model (model D) also demonstrated an independent association between NT-pro-BNP and all-cause mortality (HR (95% CI) 2.15 (1.19 to 3.88)). 
The association between NT-pro-BNP and CVD mortality was no longer significant in the model (model D) that included both CVD and IP related factors (HR (95% CI) 2.18 (0.76 to 6.27)) (table 3).\nRaised NT-pro-BNP (≥100 pg/ml) level associations with increased all-cause and cardiovascular mortality in early inflammatory polyarthritis\n*Fully adjusted model included age at recruitment, gender, hypertension, BMI, diabetes, smoking status, prior CVD, RF and/or ACPA status, HAQ score, tender joint count and CRP.\nACPA, antibodies to citrullinated protein antigens; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; HAQ, Health Assessment Questionnaire; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.\nWe examined the additional prognostic value of NT-pro-BNP on mortality. The model predicting overall mortality using only age and gender had a C-index of 0.787 which increased to 0.796 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.15). The addition of NT-pro-BNP to standard CVD risk factors increased the C-index to 0.812 which was a statistically significant increase from the model including CVD risk factors (model B vs B plus NT-pro-BNP p=0.0014). The model predicting CVD mortality using only age and gender had a C-index of 0.817 which increased to 0.831 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.17). The addition of NT-pro-BNP to model B of standard CVD risk factors increased the C-index to 0.848 which was a statistically significant increase from the model including CVD risk factors alone (model B vs B plus NT-pro-BNP p=0.04).\n Additional sensitivity analyses The association of raised NT-pro-BNP with all-cause mortality remained in an analysis restricted to patients who were seronegative for RF and ACPA (n=339) (HR (95% CI) 6.36 (3.01 to 13.44), fully adjusted HR (95% CI) 2.94 (1.04 to 8.23)) and in those patients with no prior CVD (n=721) (HR (95% CI) 3.97 (2.38 to 6.62), fully adjusted HR (95% CI) 1.93 (1.00 to 3.72)), respectively. In these smaller subsets, the association with CVD mortality was not statistically significant (data on file).\nThe association of raised NT-pro-BNP with all-cause mortality remained in an analysis restricted to patients who were seronegative for RF and ACPA (n=339) (HR (95% CI) 6.36 (3.01 to 13.44), fully adjusted HR (95% CI) 2.94 (1.04 to 8.23)) and in those patients with no prior CVD (n=721) (HR (95% CI) 3.97 (2.38 to 6.62), fully adjusted HR (95% CI) 1.93 (1.00 to 3.72)), respectively. In these smaller subsets, the association with CVD mortality was not statistically significant (data on file).", "We studied 960 IP subjects with a median (IQR) age and symptom duration of 58 (47–68) years and 5.9 (3.2–10.5) months, respectively. The cohort included 617 (64%) female subjects; there were 211 (25%) current smokers. In all, 484/823 (59%) were positive for RF and/or ACPA and 436 (45%) fulfilled the 1987 American College for Rheumatology Criteria for RA at first assessment (table 1). A history of prior CVD was reported by 163 (17%) patients (table 1). 
The demographic, IP phenotype and CVD parameters were comparable in the subset of 327 patients who had carotid Doppler scans relative to the entire cohort, with the exception of a younger median age at recruitment into this substudy as per protocol (table 1).\nBaseline characteristics of the IP cohort and the CVD substudy population\n*Statistically significantly different at p<0.05.\nACPA, anticitrulinated protein antibody; ACR, American College for Rheumatology; BMI, body mass index; BP, blood pressure; cIMT, carotid intima-medial thickness; COX-2, cyclo-oxygenase-2 inhibitor; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; IP, inflammatory polyarthritis; mm Hg, millimetres of mercury; NA, data not available; NSAID, non-steroidal anti-inflammatory drug; RA, rheumatoid arthritis; RF, rheumatoid factor.\nNT-pro-BNP levels were positively skewed with a median (IQR) of 74 (38–153) pg/ml; 373 (39%) patients had NT-pro-BNP levels ≥100 pg/ml, including 89 of 163 (55%) patients with a prior history of CVD. In univariate linear regression analyses, higher log NT-pro-BNP levels were associated with being older at recruitment (β-coefficient (95% CI) 0.035 (0.031 to 0.039) per year) and female (β-coefficient (95% CI) 0.211 (0.085 to 0.336)). In univariate age and gender adjusted analyses (table 1), a number of factors were associated with log NT-pro-BNP including prior CVD, classic risk factors such as previous smoking and hypertension and several IP related factors, including HAQ score, and CRP. Both BMI and tender joint counts were negatively associated with log NT-pro-BNP (table 2).\nCross-sectional association between log NT-pro-BNP and demographic and disease related factors measured at baseline n=960\n*Log NT-pro-BNP, NT-pro-BNP levels that have been log transformed.\n†Linear regression models adjusted for age and gender.\n‡Stepwise regression includes age, gender, HAQ score, tender joint count, CRP, hypertension, BMI, smoking status and prior CVD in the entire cohort.\nACPA, antibodies to citrullinated protein antigens; ACR, American College for Rheumatology; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; DAS28, disease activity score based on 28 joint count; DMARD, disease modifying antirheumatic drug; HAQ, Health Assessment Questionnaire; NA, not applicable; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.\nIn a multivariable forward stepwise linear regression analysis containing all risk markers showing significance on univariate analyses, age, prior CVD and hypertension all remained positively associated and BMI remained negatively associated with log NT-pro-BNP (table 2). IP related factors that remained associated on multivariable analysis were HAQ score (β-coefficient (95% CI) 0.250 (0.149 to 0.351)), CRP (β-coefficient (95% CI) 0.002 (0.001 to 0.004)) and tender joint count (β-coefficient (95% CI) −0.021 (−0.032 to −0.010)).", "The presence of plaque was significantly associated with log NT-pro-BNP levels (β-coefficient (95% CI) 0.253 (0.063 to 0.442)). This association did not however persist after age and gender adjustment (β-coefficient (95% CI) 0.109 (−0.100 to 0.319)). 
Cross-sectional association of baseline NT-pro-BNP concentration with subclinical atherosclerosis

The presence of plaque was significantly associated with log NT-pro-BNP levels (β-coefficient (95% CI) 0.253 (0.063 to 0.442)). This association did not, however, persist after age and gender adjustment (β-coefficient (95% CI) 0.109 (−0.100 to 0.319)). Carotid IMT was not significantly associated with NT-pro-BNP levels on univariate analysis.

NT-pro-BNP levels and mortality

During 5526 patient years of follow-up over a median (IQR) follow-up period of 5.5 (3.7–7.7) years, 92 (10%) IP subjects died. Of the 92 deaths, 31 (34%) had CVD recorded as the main underlying cause of death (Chapter I of ICD10), 31 (34%) died due to neoplastic conditions (Chapter II of ICD10), 15 (16%) due to respiratory disease (Chapter X of ICD10) and the remaining 15 (16%) had other causes of death cited.

The overall and CVD related mortality rates were 16.8 (95% CI 16.7 to 16.9) and 5.8 (95% CI 5.6 to 5.9) per 1000 patient years, respectively. Of the subjects with a baseline NT-pro-BNP level <100 pg/ml, 25 (4%) died during the follow-up period, accounting for 27% of all deaths. In subjects with NT-pro-BNP ≥100 pg/ml, 67 (18%) died, accounting for 73% of all deaths in the cohort.

In a Cox proportional hazards model adjusted for age and gender (table 3, model A), NT-pro-BNP ≥100 pg/ml was associated with all-cause (HR (95% CI) 2.36 (1.42 to 3.94)) and CVD mortality (HR (95% CI) 3.40 (1.28 to 9.03)). These associations remained when we adjusted for additional CVD risk factors (model B) or additional IP related factors (model C) (table 3). Our final adjusted model (model D) also demonstrated an independent association between NT-pro-BNP and all-cause mortality (HR (95% CI) 2.15 (1.19 to 3.88)). The association between NT-pro-BNP and CVD mortality was no longer significant in the model (model D) that included both CVD and IP related factors (HR (95% CI) 2.18 (0.76 to 6.27)) (table 3).

Raised NT-pro-BNP (≥100 pg/ml) level associations with increased all-cause and cardiovascular mortality in early inflammatory polyarthritis
*Fully adjusted model included age at recruitment, gender, hypertension, BMI, diabetes, smoking status, prior CVD, RF and/or ACPA status, HAQ score, tender joint count and CRP.
ACPA, antibodies to citrullinated protein antigens; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; HAQ, Health Assessment Questionnaire; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor.

We examined the additional prognostic value of NT-pro-BNP on mortality. The model predicting overall mortality using only age and gender had a C-index of 0.787, which increased to 0.796 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B, p=0.15). The addition of NT-pro-BNP to standard CVD risk factors increased the C-index to 0.812, a statistically significant increase over the model including CVD risk factors alone (model B vs B plus NT-pro-BNP, p=0.0014). The model predicting CVD mortality using only age and gender had a C-index of 0.817, which increased to 0.831 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B, p=0.17). The addition of NT-pro-BNP to model B of standard CVD risk factors increased the C-index to 0.848, a statistically significant increase over the model including CVD risk factors alone (model B vs B plus NT-pro-BNP, p=0.04).
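As a rough illustration of the modelling strategy reported above (nested Cox proportional hazards models compared by Harrell's C-index), the sketch below uses the lifelines package on simulated data with invented column names. It is not the authors' code, the covariate set is simplified, and the formal χ2 comparison of C-indices reported in the paper is not reproduced here.

```python
# Illustrative only: nested Cox models and their concordance (Harrell's C-index).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 960
df = pd.DataFrame({
    "age": rng.normal(58, 12, n),
    "female": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "raised_ntprobnp": rng.integers(0, 2, n),   # NT-pro-BNP >= 100 pg/ml (simulated)
    "followup_years": rng.exponential(5.5, n),
    "died": rng.integers(0, 2, n),
})

def c_index(covariates):
    # Fit a Cox proportional hazards model and return its in-sample concordance index
    cph = CoxPHFitter()
    cph.fit(df[covariates + ["followup_years", "died"]],
            duration_col="followup_years", event_col="died")
    return cph.concordance_index_

model_a = c_index(["age", "female"])                                    # age and gender only
model_b = c_index(["age", "female", "hypertension"])                    # plus a CVD risk factor
model_b_bnp = c_index(["age", "female", "hypertension", "raised_ntprobnp"])  # plus NT-pro-BNP
print(model_a, model_b, model_b_bnp)
```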
Additional sensitivity analyses

The association of raised NT-pro-BNP with all-cause mortality remained in an analysis restricted to patients who were seronegative for RF and ACPA (n=339) (HR (95% CI) 6.36 (3.01 to 13.44); fully adjusted HR (95% CI) 2.94 (1.04 to 8.23)) and in those patients with no prior CVD (n=721) (HR (95% CI) 3.97 (2.38 to 6.62); fully adjusted HR (95% CI) 1.93 (1.00 to 3.72)). In these smaller subsets, the association with CVD mortality was not statistically significant (data on file).

Discussion

In a large inception cohort of patients with early IP we found that NT-pro-BNP was associated with both a prior history of CVD and with higher CRP and HAQ scores. More importantly, NT-pro-BNP was independently associated with all-cause and CVD mortality even after adjustment for conventional CVD risk factors. As far as we are aware, this is the first study to address these questions in early IP, and our results are therefore of importance in further examining the value of NT-pro-BNP measurement in stratifying IP patients for future mortality risk from the outset of their disease.

A number of previous studies have shown that NT-pro-BNP concentrations are elevated in RA compared with population controls.26 27 Some of these have also noted an association between inflammation and NT-pro-BNP levels.25–27 Inflammation has been proposed as an important contributor to the pathogenesis of cardiac dysfunction in RA patients. Maradit-Kremers et al24 showed that in a study of 575 patients with RA, episodes of heart failure were often temporally associated with elevations in erythrocyte sedimentation rate. They hypothesised that inflammation directly compromised left ventricular function, akin to the model of myocardial depression and dysfunction seen in acute sepsis.35 Our data support this paradigm, as raised NT-pro-BNP has previously been associated with IL-6 levels in other studies26 and in our population was independently associated with higher CRP. The association with HAQ scores was also significant in our multivariate analysis.

In early IP, HAQ scores tend to be closely associated with inflammatory disease activity.36 Disability and functional status may also be associated with a more sedentary lifestyle and may in part reflect unmeasured ‘frailty’ or other comorbidities, which may add to the association between HAQ and NT-pro-BNP that we observed.

No prior studies have reported on the association of NT-pro-BNP and BMI in early arthritis. We found log NT-pro-BNP levels to be associated with lower BMI in early IP, which accords with the results of Wang et al, who found that in the Framingham cohort individuals with a high BMI had low NT-pro-BNP levels.37 This association is broadly in agreement with recent data showing that genetic predisposition to slightly elevated circulating NT-pro-BNP is protective of diabetes.38 In the patients who had a more detailed cardiovascular assessment, we found that log NT-pro-BNP was not significantly associated with atherosclerotic plaque or carotid IMT after age and gender adjustment. This is also in agreement with data in patients with diabetes.39 This may be partially because carotid IMT is a relatively weak surrogate of CVD risk, although plaque presence may be a more credible marker of vascular risk.40 41 Recent evidence indicates that NT-pro-BNP predicts risk of CVD even in middle-aged men without minor ECG abnormalities, and thus the association of NT-pro-BNP with CVD appears to extend beyond abnormalities in cardiac function and atherosclerosis.42

From a prognostic viewpoint, our raw data demonstrated a much higher mortality rate among patients with elevated NT-pro-BNP at baseline; indeed, 73% of all deaths over a median follow-up of 5.5 (3.7–7.7) years were in this subset. Following adjustment for age, sex and IP related covariates or CVD related covariates, this association was attenuated but remained statistically significant.
The lack of association with CVD mortality after full adjustment may reflect the smaller numbers in this subgroup relative to the number of dependent variables we adjusted for. Overall, our study had 69% power to detect a doubling of overall mortality and 19% power to detect a doubling of CVD mortality. Nevertheless, our observations have prognostic relevance, as NT-pro-BNP assessment in early IP may identify patients with a particularly high risk of future mortality and, in particular, cardiovascular death. Our study extends and confirms the previous study by Provan et al,29 who studied 182 patients from the EURIDISS cohort with 5–9-year disease duration. In this cohort, NT-pro-BNP was independently associated with all-cause mortality over a 10-year follow-up period. Previous work has noted that the anti-TNF agent adalimumab is associated with a reduction in NT-pro-BNP concentrations and may also improve pulse pressure in RA.30 It will be interesting to determine whether there is added value in using either anti-TNF drugs or IL-6 blockade in this particular subset of early RA patients, not only to improve inflammation but also to improve cardiac dysfunction.

This study has a number of strengths. It is a prospective study of a large, well described inception cohort with comprehensive data on mortality. The clinical and laboratory data collections were standardised and well established. There are some limitations that are also worth considering. A limitation of the study is that we do not have available to us a large control group of healthy individuals drawn from the same population with prospective data in which to examine the influence of NT-pro-BNP on future mortality. Despite this, the findings in our early IP cohort can be compared with those of similar studies in the general population. In agreement with our study, population studies have shown that NT-pro-BNP is associated with worse CVD outcomes. A previous study from a Dutch population found that 16.6% of individuals had raised NT-pro-BNP, compared with 39% in our study. As expected, the mortality in the general population was lower than in our study.4 The NT-pro-BNP levels were measured on serum samples collected at baseline recruitment into NOAR. We only included subjects with ≤24 months of symptom duration, but it is nevertheless possible that subjects had subclinical inflammatory disease for a longer, albeit unspecified, period of time prior to being recruited. It is also possible that some patients had received anti-inflammatory treatments which may have altered their NT-pro-BNP levels, although raised NT-pro-BNP levels were not associated with the duration of symptoms or with disease modifying antirheumatic drug and/or steroid use. Another potential limitation of the study was that certain comorbid conditions which may influence NT-pro-BNP levels were not included and adjusted for in our analysis. The major variables that are known to affect NT-pro-BNP levels, such as age, gender, diabetes, prior CVD, hypertension, smoking status and BMI, were however included. Echocardiographic and electrocardiographic data were not available for this study; therefore, it is uncertain whether the increase in NT-pro-BNP is solely due to increased cardiac strain.
Raised NT-pro-BNP levels are also associated with other variables contributing to mortality in patients with IP which we have not measured, such as anaemia of chronic disease, renal impairment, impaired glucose tolerance and concomitant infection, as found in patients affected with HIV.43 Studies are also needed to investigate why these patients have apparently increased cardiac stress. Advanced atherosclerosis, both that which is clinically evident and subclinical cardiac disease, is arguably the principal contributor to the left ventricular strain. Crowson et al25 noted that echocardiographic evidence of left ventricular diastolic dysfunction was not as strongly associated with raised NT-pro-BNP in RA as in population controls. Therefore, while the prognostic value of NT-pro-BNP in RA is clear from our study, further work is necessary to dissect the precise mechanisms by which this association is mediated. Certainly, our data suggest NT-pro-BNP may also capture some aspects of a chronic inflammatory state and in this way pick up vascular risk from both conventional and RA related risk pathways.

In conclusion, our results suggest NT-pro-BNP may have utility as part of a screening strategy to identify IP patients who may benefit from a more aggressive risk management programme for CVD. In particular, better understanding of the relationship between NT-pro-BNP and cardiac/vascular outcomes in patients with inflammatory disease may help achieve more targeted anti-inflammatory and cardioprotective approaches in this high risk population.
[ null, "methods", null, null, null, null, null, null, "results", null, null, null, null, "discussion" ]
[ "Rheumatoid Arthritis", "Cardiovascular Disease", "Atherosclerosis" ]
Background

N-terminal pro-brain-type natriuretic peptide (NT-pro-BNP) is an inactive and stable amino acid fragment cosecreted, alongside the neuroendocrine peptide BNP, from the ventricular cardiac myocytes. It is released in response to left ventricular strain or ischaemia and has been found to be an important biomarker for left ventricular systolic dysfunction and left ventricular stress in the general population.1 Levels are also elevated even in those without clinical cardiovascular disease (CVD), and even minimally elevated NT-pro-BNP levels are predictive of future CVD and mortality in a range of cohort and general healthy population studies.2–5 These observations have also been confirmed in a meta-analysis of 40 studies involving 87 474 subjects and 10 625 incident CVD outcomes.6 In clinical practice, an NT-pro-BNP level of ≥100 pg/ml has high sensitivity and specificity for congestive heart failure (CHF) and can be used to stratify patients at high risk of cardiac failure in the acute setting.7–9 Circulating NT-pro-BNP is also associated with pro-inflammatory cytokines such as tumour necrosis factor (TNF) and interleukin 6 (IL-6).10 Acute inflammatory conditions including pneumonia and septic shock are also associated with raised NT-pro-BNP levels,11–17 and TNF blockade in rheumatoid arthritis (RA) lowers NT-pro-BNP, suggesting a link between inflammation and cardiac stress. This has raised interest in the role of NT-pro-BNP in chronic inflammatory conditions such as inflammatory polyarthritis (IP).

IP and RA are associated with an excess CVD risk including coronary heart disease, stroke and CHF.18–21 In two recent meta-analyses, the standardised mortality ratio (95% CI) for coronary heart disease and stroke in RA was 1.50 (1.39 to 1.61)22 and 1.46 (1.31 to 1.63),23 respectively. CHF has also been noted to be a common cause of death in RA, and chronic inflammation may contribute to this outcome.24 A small number of studies have examined the potential relevance of NT-pro-BNP levels in RA patients. Crowson et al25 noted higher NT-pro-BNP levels in RA compared with population controls. In established RA, NT-pro-BNP was associated with disease activity and disease duration as well as with C reactive protein (CRP), TNF and IL-6 concentrations.26–29 One study from our group found that a reduction in NT-pro-BNP correlated with the reduction in erythrocyte sedimentation rate observed after starting adalimumab therapy.30 A further study also noted an association between NT-pro-BNP and carotid intima-medial thickness (cIMT).28 To date, only one study has looked at the predictive value of NT-pro-BNP levels for future CVD mortality in established RA;29 however, little is known about NT-pro-BNP in early IP patients.

The aim of this study was to measure serum NT-pro-BNP levels in a large, well characterised inception cohort of patients with early IP. First, we aimed to examine the baseline association of NT-pro-BNP levels with IP disease phenotype, clinical CVD risk markers and subclinical atherosclerosis surrogates. Second, we also examined the predictive value of raised NT-pro-BNP levels for future all-cause and CVD related mortality.

Methods

Setting: Subjects aged 16 years and older with early IP (≥2 joints swollen for ≥4 weeks) were recruited to the Norfolk Arthritis Register (NOAR) from January 2000 to December 2008. In this study, we restricted inclusion to subjects with ≤24 months’ symptom duration.
Data collection at recruitment: All subjects underwent a clinical assessment including swollen and tender joint count (51 joints), measurement of height and weight to calculate body mass index (BMI). Smoking status was recorded and patients were categorised into never smokers, previous smokers or current smokers. Subjects were specifically asked about past history of CVD events such as angina, heart attacks and heart failure, as well as current prescribed and over-the-counter medications and any history of disease modifying antirheumatic drug use. All patients completed the Health Assessment Questionnaire (HAQ). Hypertension was defined as being present if patients reported a physician diagnosis of hypertension or if they were taking antihypertensive therapy at the baseline clinical assessment. Diabetes mellitus was also defined by patient self-report of a physician diagnosis or if they were taking oral hypoglycaemic agents or injected insulin.

Blood sample analysis: At recruitment blood samples were collected, aliquoted and frozen at −80°C in Norfolk before being transported to the Arthritis Research UK Epidemiology Unit in Manchester. A Nephrostar Galaxy automated analyser was used to determine CRP concentration (BMG Labtech, Aylesbury, UK); this was used to calculate the 28-joint disease activity score (DAS28CRP). Rheumatoid factor (RF) was measured using a particle enhanced immunoturbidimetric assay where >40 iU/ml was considered positive for RF (BMG Labtech). Antibodies to citrullinated protein antigens (ACPA) were measured using the Axis-Shield DIASTAT kit (Axis-Shield, Dundee, UK) where >5 U/ml was considered positive for ACPA. Patients were considered ‘seropositive’ if they had a positive RF and/or ACPA. Serum NT-pro-BNP concentrations were determined using the Elecsys 2010 electrochemiluminescence method (Roche Diagnostics, Burgess Hill, UK), calibrated using the manufacturer's reagents at the University of Glasgow (PW, NS). Manufacturer's controls were used with limits of acceptability defined by the manufacturer. The limit of detection was 5 pg/ml. Low control coefficient of variance (CV) was 6.7% and high control CV was 4.9%. Patients were classified as having a raised NT-pro-BNP level if their NT-pro-BNP measurement was ≥100 pg/ml as per local clinically used threshold and published data.7–9

Cardiovascular substudy: In a subset of the main cohort, consecutive patients (n=327) recruited between 2004 and 2008 and aged 18–65 years at cohort entry were invited to participate in a detailed cardiovascular assessment. A fasting blood sample was used to measure the lipid profile and plasma glucose. Blood pressure was measured in each arm and the higher of the two measurements was used in analyses. For the purposes of this study, however, hypertension was defined in all cases as described above. They also underwent a B-mode Doppler carotid ultrasound examination at the Department of Vascular Radiology at the Norfolk and Norwich University Hospital. The images were stored on super video home system (VHS) video and analysed at The University of Manchester, Department of Vascular Surgery, using a standardised research protocol. Preliminary validation has revealed good intraobserver correlation using this technique.31 Briefly, the right and left common carotid artery, carotid bulb and the first 1.5 cm of the internal and external carotid arteries were examined in longitudinal and cross-sectional planes.
Carotid IMT measurements were made in a longitudinal plane at a point of maximum thickness on the far wall of the common carotid artery along a 1 cm section of the artery proximal to the carotid bulb. Measurements were repeated three times on each side, unfreezing the image on each occasion and relocating the maximal cIMT. The average of these three measurements was then calculated and used as the mean cIMT.32 The higher of the right or left mean cIMT was used as the cIMT for analyses. Carotid plaque was defined as present if any of the following parameters was met and then graded according to standard definitions:33 plaque present with <30% vessel diameter loss; 30%–50% vessel diameter loss; >50% vessel diameter loss. Total cholesterol, high density lipoprotein (HDL) and triglycerides were assayed on fresh fasting serum at the Norfolk and Norwich University Hospital using cholesterol oxidase phenol aminophenazone (CHOD-PAP), a homogeneous direct method (Abbott Diagnostics, Berkshire, UK), and glycerol-3-phosphate oxidase (GPO-PAP) methods, respectively. Low density lipoprotein (LDL) levels were mathematically derived using the Friedewald equation.

Follow-up and mortality: In NOAR all patients are flagged with the National Health Service Information Service (NHS-IS). Patients were followed up until March 2010, death or embarkation, whichever came first. Death certificates were provided by the NHS-IS for all those who had died. For this study we took the underlying cause of death, coded using ICD10,34 as the main cause of death. CV deaths were defined as those where the underlying cause of death fell within chapter I of ICD10. All patients provided written informed consent to take part and the study was approved by the Norfolk Research Ethics Committee (REC no. 2003/075).

Statistical analyses: As the NT-pro-BNP levels were positively skewed, we log-transformed these values for our analysis. The associations among log-transformed NT-pro-BNP and baseline demographic, IP and CVD parameters, including the presence or absence of carotid plaque at recruitment, were assessed using linear regression. These analyses were subsequently adjusted for age at recruitment and gender. Forward stepwise multivariable regression analysis that included age and gender was carried out to assess key IP and CVD related factors associated with log NT-pro-BNP levels at baseline on univariate analyses (p<0.05). We examined the association of raised NT-pro-BNP (≥100 pg/ml) with mortality (all-cause and CVD) using Cox proportional hazards regression. Analyses were adjusted for key CVD risk factors, namely age, gender, BMI, prior CVD, diabetes, smoking status and hypertension. Analyses were subsequently adjusted for IP related factors associated with log NT-pro-BNP in our cross-sectional analysis. Incremental prognostic information was calculated using Harrell's C-statistic (C-index) and these values were compared using the χ2 test. Sensitivity analyses were also undertaken in the subsets who (i) were negative for ACPA and RF and (ii) had no reported history of prior CVD at baseline recruitment.
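A minimal sketch of the derived variables and the age- and gender-adjusted linear regression described in this Methods section: the log transformation of NT-pro-BNP, the ≥100 pg/ml flag, and an LDL estimate from the Friedewald equation (in mmol/l, so triglycerides are divided by 2.2; the formula is only valid at triglyceride levels below roughly 4.5 mmol/l). Variable names and data are hypothetical, and the original analyses were presumably run in a dedicated statistics package.

```python
# Illustrative only: derived variables and an age/gender-adjusted regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 960
df = pd.DataFrame({
    "ntprobnp": rng.lognormal(4.3, 1.0, n),   # pg/ml (simulated)
    "age": rng.normal(58, 12, n),
    "female": rng.integers(0, 2, n),
    "crp": rng.exponential(15, n),            # mg/l (simulated)
    "total_chol": rng.normal(5.5, 1.0, n),    # mmol/l
    "hdl": rng.normal(1.4, 0.3, n),
    "trig": rng.lognormal(0.3, 0.4, n),
})

df["log_ntprobnp"] = np.log(df["ntprobnp"])               # positively skewed, so log-transform
df["raised_ntprobnp"] = (df["ntprobnp"] >= 100).astype(int)
df["ldl"] = df["total_chol"] - df["hdl"] - df["trig"] / 2.2   # Friedewald estimate, mmol/l

# Age- and gender-adjusted association of CRP with log NT-pro-BNP
model = smf.ols("log_ntprobnp ~ age + female + crp", data=df).fit()
print(model.params["crp"], model.conf_int().loc["crp"].tolist())
```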
ACPA, antibodies to citrullinated protein antigens; BMI, body mass index; CRP, C reactive protein; CVD, cardiovascular disease; HAQ, Health Assessment Questionnaire; NT-pro-BNP, N-terminal pro-brain natriuretic peptide; RF, rheumatoid factor. We examined the additional prognostic value of NT-pro-BNP on mortality. The model predicting overall mortality using only age and gender had a C-index of 0.787 which increased to 0.796 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.15). The addition of NT-pro-BNP to standard CVD risk factors increased the C-index to 0.812 which was a statistically significant increase from the model including CVD risk factors (model B vs B plus NT-pro-BNP p=0.0014). The model predicting CVD mortality using only age and gender had a C-index of 0.817 which increased to 0.831 with the addition of standard CVD risk parameters; this increase was not significant (model A vs B p=0.17). The addition of NT-pro-BNP to model B of standard CVD risk factors increased the C-index to 0.848 which was a statistically significant increase from the model including CVD risk factors alone (model B vs B plus NT-pro-BNP p=0.04). Additional sensitivity analyses: The association of raised NT-pro-BNP with all-cause mortality remained in an analysis restricted to patients who were seronegative for RF and ACPA (n=339) (HR (95% CI) 6.36 (3.01 to 13.44), fully adjusted HR (95% CI) 2.94 (1.04 to 8.23)) and in those patients with no prior CVD (n=721) (HR (95% CI) 3.97 (2.38 to 6.62), fully adjusted HR (95% CI) 1.93 (1.00 to 3.72)), respectively. In these smaller subsets, the association with CVD mortality was not statistically significant (data on file). Discussion: In a large inception cohort of patients with early IP we found that NT-pro-BNP was associated with both a prior history of CVD and also with higher CRP and HAQ scores. More importantly, NT-pro-BNP was independently associated with all-cause and CVD mortality even after adjustment for conventional CVD risk factors. As far as we are aware this is the first study to address these questions in early IP and therefore our results are of importance in further examining the value of NT-pro-BNP measurement in stratifying IP patients for future mortality risk from the outset of their disease. A number of previous studies have shown that NT-pro-BNP concentrations are elevated in RA compared with population controls.26 27 Some of these have also noted an association between inflammation and NT-pro-BNP levels.25–27 Inflammation has been proposed as an important contributor to the pathogenesis of cardiac dysfunction in RA patients. Maradit-Kremers et al24 showed that in a study of 575 patients with RA, episodes of heart failure were often temporally associated with elevations in erythrocyte sedimentation rate. They hypothesised that inflammation directly compromised left ventricular function, akin to the model of myocardial depression and dysfunction seen in acute sepsis.35 Our data support this paradigm as raised NT-pro-BNP has previously been associated with IL-6 levels in other studies26 and in our population was independently associated with higher CRP. The association with HAQ scores was also significant in our multivariate analysis. 
In early IP, HAQ scores tend to be closely associated with inflammatory disease activity.36 Disability and functional status may also be associated with a more sedentary lifestyle and may also in part reflect unmeasured ‘frailty’ or other comorbidities which may add to the association between HAQ and NT-pro-BNP that we observed. No prior studies have reported on the association of NT-pro-BNP and BMI in early arthritis. We found log NT-pro-BNP levels to be associated with lower BMI in early IP which accords with the results of Wang et al who found that in the Framingham cohort individuals with a high BMI had low NT-pro-BNP levels.37 This association is broadly in agreement with recent data showing that genetic predisposition to slightly elevated circulating NT-pro-BNP is protective of diabetes.38 In the patients who had a more detailed cardiovascular assessment, we found that log NT-pro-BNP was not significantly associated with atherosclerotic plaque or carotid IMT after age and gender adjustment. This is also in agreement with data in patients with diabetes.39 This may be partially because carotid IMT is a relatively weak surrogate of CVD risk, although plaque presence may be a more credible marker of vascular risk.40 41 Recent evidence indicates that NT-pro-BNP predicts risk of CVD even in middle-aged men without minor ECG abnormalities and thus the association of NT-pro-BNP with CVD appears to extend beyond abnormalities in cardiac function and atherosclerosis.42 From a prognostic viewpoint, our raw data demonstrated a much higher mortality rate among patients with elevated NT-pro-BNP at baseline; indeed, 73% of all deaths over a median follow-up of 5.5 (3.7–7.7) years were in this subset. Following adjustment for age, sex and IP related covariates or CVD related covariates this association was attenuated but remained statistically significant. The lack of association with CVD mortality after full adjustment may reflect the smaller numbers in this subgroup relative to the number of dependent variables we adjusted for. Overall our study had 69% power to detect a doubling of overall mortality and 19% power to detect a doubling of CVD mortality. Nevertheless, our observations have prognostic relevance as NT-pro-BNP assessment in early IP may identify patients with a particularly high risk of future mortality and in particular cardiovascular death. Our study extends and confirms the previous study by Provan et al29 who studied 182 patients from the EURIDISS cohort with 5–9-year disease duration. In this cohort, NT-pro-BNP was independently associated with all-cause mortality over a 10-year follow-up period. Previous work has noted that the anti-TNF agent adalimumab is associated with a reduction in NT-pro-BNP concentrations and may also improve pulse pressure in RA.30 It will be interesting to determine whether there will be added value in using either anti-TNF drugs or IL-6 blockade in this particular subset of early RA patients not only to improve inflammation but improve cardiac dysfunction. This study has a number of strengths. It is a prospective study of a large well described inception cohort with comprehensive data on mortality. The clinical and laboratory data collections were standardised and well established. There are some limitations that are also worth considering. 
A limitation of the study is that we do not have available to us a large control group of healthy individuals drawn from the same population who have prospective data in which to examine the influence of NT-pro-BNP on future mortality. Despite this, the findings in our early IP cohort can be compared with those in similar studies in the general population. In agreement with our study, population studies have shown that NT-pro-BNP is associated with worse CVD outcomes. A previous study from a Dutch population found that 16.6% of individuals had raised NT-pro-BNP compared with 39% in our study. As expected, the mortality in the general population was lower than in our study.4 The NT-pro-BNP levels were measured on serum samples collected at baseline recruitment into NOAR. We have only included subjects with ≤24 months of symptom duration, but nevertheless it is possible that subjects had subclinical inflammatory disease for a longer, albeit unspecified, period of time prior to being recruited. It is also possible that some patients will have received anti-inflammatory treatments which may have altered their NT-pro-BNP levels, although raised NT-pro-BNP levels were not associated with the duration of symptoms or disease modifying antirheumatic drug and/or steroid use. Another potential limitation of the study was that certain comorbid conditions which may influence NT-pro-BNP levels were not included or adjusted for in our analysis. The major variables that are known to affect NT-pro-BNP levels, such as age, gender, diabetes, prior CVD, hypertension, smoking status and BMI, were however included. Echocardiographic and electrocardiographic data were not available for this study; therefore, it is uncertain whether the increase in NT-pro-BNP is solely due to increased cardiac strain. Raised NT-pro-BNP levels are also associated with other variables contributing to mortality in patients with IP which we have not measured, such as anaemia of chronic disease, renal impairment, impaired glucose tolerance and concomitant infection, as found in patients affected with HIV.43 Studies are also needed to investigate why these patients have apparently increased cardiac stress. Advanced atherosclerosis, both that which is clinically evident and subclinical cardiac disease, is arguably the principal contributor to the left ventricular strain. Crowson et al25 noted that echocardiographic evidence of left ventricular diastolic dysfunction was not as strongly associated with raised NT-pro-BNP in RA as in population controls. Therefore, while the prognostic value of NT-pro-BNP in RA is clear from our study, further work is necessary to dissect the precise mechanisms by which this association is mediated. Certainly, our data suggest NT-pro-BNP may also capture some aspects of a chronic inflammatory state and in this way pick up vascular risk from both conventional and RA related risk pathways. In conclusion, our results suggest NT-pro-BNP may have utility as part of a screening strategy to identify IP patients who may benefit from a more aggressive risk management programme for CVD. In particular, better understanding of the relationship between NT-pro-BNP and cardiac/vascular outcomes in patients with inflammatory disease may help achieve more targeted anti-inflammatory and cardioprotective approaches in this high risk population.
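The survival analyses reported above (Cox proportional hazards models for a binary NT-pro-BNP ≥100 pg/ml flag adjusted for age and gender, and the C-index comparison of nested models) can be illustrated with a short sketch. This is not the authors' code and does not use the NOAR data; it is a minimal example on synthetic data with the Python lifelines library, and the column names (ntprobnp_high, age, female, time_years, died) are hypothetical stand-ins for the study variables.

```python
# Minimal sketch (not the authors' code): an age- and gender-adjusted Cox model
# for a binary NT-pro-BNP >= 100 pg/ml flag, plus a concordance-index comparison
# of nested models, fitted on synthetic data. All column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 960
df = pd.DataFrame({
    "age": rng.normal(58, 12, n),
    "female": rng.integers(0, 2, n),
    "ntprobnp_high": rng.integers(0, 2, n),   # 1 if NT-pro-BNP >= 100 pg/ml
})

# Synthetic follow-up: exponential event times with a higher hazard for the flagged
# group, censored administratively between 3 and 8 years of follow-up.
hazard = 0.02 * np.exp(0.03 * (df["age"] - 58) + 0.8 * df["ntprobnp_high"])
event_time = rng.exponential(1.0 / hazard)
censor_time = rng.uniform(3, 8, n)
df["time_years"] = np.minimum(event_time, censor_time)
df["died"] = (event_time <= censor_time).astype(int)

# Model A: age and gender only.
cph_a = CoxPHFitter().fit(df[["age", "female", "time_years", "died"]],
                          duration_col="time_years", event_col="died")
# Model A plus the NT-pro-BNP flag.
cph_b = CoxPHFitter().fit(df, duration_col="time_years", event_col="died")

print(np.exp(cph_b.params_["ntprobnp_high"]))               # hazard ratio for the flag
print(cph_a.concordance_index_, cph_b.concordance_index_)   # C-index without vs with NT-pro-BNP
```

The study additionally tested whether the C-index increase was statistically significant; the sketch only prints the two concordance values to show where such a comparison would start.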
Background: We measured N-terminal pro-brain natriuretic peptide (NT-pro-BNP), a marker of cardiac dysfunction, in an inception cohort with early inflammatory polyarthritis (IP) and assessed its association with disease phenotype, cardiovascular disease (CVD), all-cause and CVD related mortality. Methods: Subjects with early IP were recruited to the Norfolk Arthritis Register from January 2000 to December 2008 and followed up to death or until March 2010 including any data from the national death register. The associations of baseline NT-pro-BNP with IP related factors and CVD were assessed by linear regression. Cox proportional hazards models examined the independent association of baseline NT-pro-BNP with all-cause and CVD mortality. Results: We studied 960 early IP subjects; 163 (17%) had prior CVD. 373 (39%) patients had a baseline NT-pro-BNP level ≥ 100 pg/ml. NT-pro-BNP was associated with age, female gender, HAQ score, CRP, current smoking, history of hypertension, prior CVD and the presence of carotid plaque. 92 (10%) IP subjects died, including 31 (3%) from CVD. In an age and gender adjusted analysis, having a raised NT-pro-BNP level (≥ 100 pg/ml) was associated with both all-cause and CVD mortality (adjusted HR (95% CI) 2.36 (1.42 to 3.94) and 3.40 (1.28 to 9.03), respectively). These findings were robust to adjustment for conventional CVD risk factors and prevalent CVD. Conclusions: In early IP patients, elevated NT-pro-BNP is related to HAQ and CRP and predicts all-cause and CVD mortality independently of conventional CVD risk factors. Further study is required to identify whether NT-pro-BNP may be clinically useful in targeting intensive interventions to IP patients at greatest risk of CVD.
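The cross-sectional analyses described earlier in this record follow a common pattern for a right-skewed biomarker: log-transform NT-pro-BNP, then regress the log values on a factor of interest while adjusting for age and gender. A minimal sketch of that pattern is shown below, again on synthetic data with hypothetical column names (ntprobnp, haq, age, female); it is illustrative only and is not the study's analysis code.

```python
# Sketch of the cross-sectional approach described above: log-transform the skewed
# NT-pro-BNP values, then fit an age- and gender-adjusted linear regression.
# Synthetic data and hypothetical column names; not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 960
df = pd.DataFrame({
    "age": rng.normal(58, 12, n),
    "female": rng.integers(0, 2, n),
    "haq": rng.uniform(0, 3, n),              # Health Assessment Questionnaire score
})
# Log-normal NT-pro-BNP with age and HAQ effects, to mimic a right-skewed marker.
df["ntprobnp"] = np.exp(3.5 + 0.035 * (df["age"] - 58) + 0.25 * df["haq"]
                        + rng.normal(0, 0.8, n))
df["log_ntprobnp"] = np.log(df["ntprobnp"])

# Age- and gender-adjusted association of HAQ with log NT-pro-BNP.
model = smf.ols("log_ntprobnp ~ haq + age + female", data=df).fit()
print(model.params["haq"], model.conf_int().loc["haq"].tolist())  # beta and 95% CI
```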
null
null
11,159
374
[ 609, 53, 155, 272, 424, 118, 252, 759, 77, 718, 123 ]
14
[ "pro", "bnp", "nt", "nt pro bnp", "pro bnp", "nt pro", "cvd", "patients", "ip", "model" ]
[ "brain natriuretic peptide", "bnp cvd mortality", "natriuretic peptide nt", "bnp ventricular cardiac", "pro bnp cardiac" ]
null
null
[CONTENT] Rheumatoid Arthritis | Cardiovascular Disease | Atherosclerosis [SUMMARY]
[CONTENT] Rheumatoid Arthritis | Cardiovascular Disease | Atherosclerosis [SUMMARY]
[CONTENT] Rheumatoid Arthritis | Cardiovascular Disease | Atherosclerosis [SUMMARY]
null
[CONTENT] Rheumatoid Arthritis | Cardiovascular Disease | Atherosclerosis [SUMMARY]
null
[CONTENT] Adult | Aged | Arthritis | Biomarkers | Cardiovascular Diseases | Cross-Sectional Studies | England | Female | Humans | Male | Middle Aged | Natriuretic Peptide, Brain | Peptide Fragments | Phenotype | Prognosis | Registries | Risk Factors | Severity of Illness Index [SUMMARY]
[CONTENT] Adult | Aged | Arthritis | Biomarkers | Cardiovascular Diseases | Cross-Sectional Studies | England | Female | Humans | Male | Middle Aged | Natriuretic Peptide, Brain | Peptide Fragments | Phenotype | Prognosis | Registries | Risk Factors | Severity of Illness Index [SUMMARY]
[CONTENT] Adult | Aged | Arthritis | Biomarkers | Cardiovascular Diseases | Cross-Sectional Studies | England | Female | Humans | Male | Middle Aged | Natriuretic Peptide, Brain | Peptide Fragments | Phenotype | Prognosis | Registries | Risk Factors | Severity of Illness Index [SUMMARY]
null
[CONTENT] Adult | Aged | Arthritis | Biomarkers | Cardiovascular Diseases | Cross-Sectional Studies | England | Female | Humans | Male | Middle Aged | Natriuretic Peptide, Brain | Peptide Fragments | Phenotype | Prognosis | Registries | Risk Factors | Severity of Illness Index [SUMMARY]
null
[CONTENT] brain natriuretic peptide | bnp cvd mortality | natriuretic peptide nt | bnp ventricular cardiac | pro bnp cardiac [SUMMARY]
[CONTENT] brain natriuretic peptide | bnp cvd mortality | natriuretic peptide nt | bnp ventricular cardiac | pro bnp cardiac [SUMMARY]
[CONTENT] brain natriuretic peptide | bnp cvd mortality | natriuretic peptide nt | bnp ventricular cardiac | pro bnp cardiac [SUMMARY]
null
[CONTENT] brain natriuretic peptide | bnp cvd mortality | natriuretic peptide nt | bnp ventricular cardiac | pro bnp cardiac [SUMMARY]
null
[CONTENT] pro | bnp | nt | nt pro bnp | pro bnp | nt pro | cvd | patients | ip | model [SUMMARY]
[CONTENT] pro | bnp | nt | nt pro bnp | pro bnp | nt pro | cvd | patients | ip | model [SUMMARY]
[CONTENT] pro | bnp | nt | nt pro bnp | pro bnp | nt pro | cvd | patients | ip | model [SUMMARY]
null
[CONTENT] pro | bnp | nt | nt pro bnp | pro bnp | nt pro | cvd | patients | ip | model [SUMMARY]
null
[CONTENT] pro | bnp | nt | nt pro | nt pro bnp | pro bnp | ra | levels | bnp levels | nt pro bnp levels [SUMMARY]
[CONTENT] diameter | vessel diameter | vessel | carotid | analyses | diameter loss | loss | vessel diameter loss | patients | uk [SUMMARY]
[CONTENT] model | pro | cvd | nt | nt pro bnp | bnp | pro bnp | nt pro | 95 ci | ci [SUMMARY]
null
[CONTENT] pro | bnp | nt | pro bnp | nt pro | nt pro bnp | cvd | patients | ci | 95 [SUMMARY]
null
[CONTENT] IP | CVD [SUMMARY]
[CONTENT] IP | the Norfolk Arthritis Register | January 2000 to December 2008 | March 2010 ||| IP | CVD | linear ||| CVD [SUMMARY]
[CONTENT] 960 | IP | 163 | 17% ||| 373 | 39% ||| ≥ 100 ||| HAQ | CRP | CVD ||| 92 | 10% ||| IP | 31 | 3% | CVD ||| ≥ | 100 pg/ml | CVD | 95% | CI | 2.36 | 1.42 | 3.94 | 3.40 | 1.28 | 9.03 ||| CVD [SUMMARY]
null
[CONTENT] IP | CVD ||| IP | the Norfolk Arthritis Register | January 2000 to December 2008 | March 2010 ||| IP | CVD | linear ||| CVD ||| ||| 960 | IP | 163 | 17% ||| 373 | 39% ||| ≥ 100 ||| HAQ | CRP | CVD ||| 92 | 10% ||| IP | 31 | 3% | CVD ||| ≥ | 100 pg/ml | CVD | 95% | CI | 2.36 | 1.42 | 3.94 | 3.40 | 1.28 | 9.03 ||| CVD ||| IP | HAQ | CRP | CVD ||| IP | CVD [SUMMARY]
null
Combined submandibular gland flap and sternocleidomastoid musculocutaneous flap for postoperative reconstruction in older aged patients with oral cavity and oropharyngeal cancers.
25127876
The growth of aging populations in an increasing number of countries has led to a concomitant increase in the incidence of chronic diseases. Accordingly, the proportion of older aged patients with oral cavity and oropharyngeal cancers and comorbidities has also increased. Thus, improvements must be made in the tolerance and safety of surgical procedures for these patients with complex medical conditions. In this study, we investigated combined submandibular gland flap and sternocleidomastoid musculocutaneous flap for postoperative reconstruction in older aged patients with oral cavity and oropharyngeal cancers in terms of surgical methods, safety, and clinical outcome.
BACKGROUND
Between January 2011 and May 2012, 8 patients over the age of 65 years (7 men, 1 woman; aged 66 to 75 years (median, 69.6)) with oral cavity and oropharyngeal cancers underwent combined submandibular gland and sternocleidomastoid myocutaneous flaps for postoperative reconstruction at Ganzhou Tumor Hospital. All eight patients had comorbid cardiovascular, cerebrovascular, or chronic respiratory disease or diabetes. Clinical outcomes, complications, and tolerance to surgical treatment were observed.
METHODS
Surgical treatment was successful in all eight patients. All submandibular gland flaps survived with well-mucosalized surfaces and with no complications. During the postoperative follow-up period of 12 to 28 months, no patient developed local recurrence or distant metastasis, and all had good recovery of function and local contour.
RESULTS
This combined reconstruction technique enables appropriate restoration of oral function, facial aesthetics and improved quality of life. Further, this technique has several advantages: it is easier to perform, reduces operation time and surgical risk, causes less surgical injury, and has minor impact on contour. The technique provides a new and safe reconstruction option for older aged patients with oral cavity and oropharyngeal cancers.
CONCLUSIONS
[ "Aged", "Carcinoma, Squamous Cell", "Female", "Follow-Up Studies", "Humans", "Male", "Mouth Neoplasms", "Myocutaneous Flap", "Oropharyngeal Neoplasms", "Postoperative Period", "Prognosis", "Quality of Life", "Plastic Surgery Procedures", "Retrospective Studies", "Sternoclavicular Joint", "Submandibular Gland", "Surgical Flaps" ]
4138394
Background
A number of major organs are concentrated in the oral cavity and oropharynx. Surgical removal of oral cavity and oropharyngeal cancers can lead to facial injury or even disfigurement and decreased oral functions, including speech, mastication, and swallowing. Surgical reconstruction with skin flap, musculocutaneous flap, and/or osteo-myocutaneous flap enables oral cancer patients to achieve appropriate restoration of oral function and facial aesthetics and a greatly improved quality of life [1, 2]. With the gradual aging of society, the proportion of older aged patients with oral cavity and oropharyngeal cancers and comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease has been markedly increasing. Older aged patients with complex medical conditions may have poor surgical tolerance and be unable to undergo lengthy and risky operative procedures. Conventional reconstructive surgeries with skin, musculocutaneous, and osteo-myocutaneous flaps tend to be time-consuming and cause relatively large operative trauma, thus potentially increasing surgical risk. Moreover, the postoperative recovery of older aged patients, especially those with comorbid systemic disease, may be delayed. Therefore, there is an increasing need for a new reconstruction method to improve the tolerance and safety of reconstructive surgery [3]. Oral cavity and oropharyngeal cancers most often metastasize to neck lymph nodes, and neck lymphatic metastases are most frequently observed in levels I, II, and III [4]. The submandibular glands are located in level Ib, where they are surrounded by rich lymphatic tissues. However, the involvement of submandibular glands in oral cavity and oropharyngeal cancers is quite rare, especially in early oral cancers [5, 6]. Additionally, the existence of intraglandular lymph nodes within submandibular glands continues to be debated [7, 8]. Studies have demonstrated that preservation of the submandibular gland during neck dissection is oncologically safe in patients with early oral cancers unless the primary tumor or metastatic regional lymphadenopathy is adherent to the gland [9, 10]. It has also been reported that the submandibular gland can survive and maintain its integrity and function following surgical transfer and reimplantation [11, 12]. These studies suggest that the submandibular gland can be used in the surgical reconstruction of oral defects. Therefore, it is of clinical significance to explore further the feasibility of the submandibular gland flap for postoperative reconstruction in patients with early oral cavity and oropharyngeal cancers. Between January 2011 and May 2012, we performed reconstructive surgeries for oral defects using combined submandibular gland and sternocleidomastoid musculocutaneous flaps in eight older aged patients with oral cavity and oropharyngeal cancers at our facility. In this study, we conducted a retrospective analysis of these patients and report on the safety, practicality, and clinical effects of this new reconstruction method.
Methods
Ethics statement The ethics committee of the Tumor Hospital of Ganzhou Review Board approved the study protocol (20101201), and the study was conducted in accordance with the principles of the Declaration of Helsinki regarding research involving human subjects. Each of the patients provided written informed consent to participate after the nature of the study had been explained to them. Inclusion and exclusion criteria Inclusion criteria: (1) over 65 years old; (2) pathologically confirmed diagnosis of oral cavity cancer or oropharyngeal cancer; (3) no cervical lymph node metastasis or no level I lymphatic metastasis; (4) tumor staging: < T3; (5) comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease; (6) Karnofsky score: ≥ 80; (7) expected survival: over 1 year; (8) voluntarily signed informed consent. Exclusion criteria: (1) bilateral cervical lymph node metastasis; (2) bilateral level I cervical lymph node metastasis; (3) diseased submandibular gland; (4) prior neck surgery or neck radiotherapy; (5) distant metastasis; (6) unable to tolerate surgical operation due to poor clinical condition. Clinical data Between January 2011 and May 2012, combined submandibular gland and sternocleidomastoid myocutaneous flap postoperative reconstruction was performed in eight patients over 65 years of age (seven men, one woman; age, 66 to 75 years (median, 69.6)) with oral cavity and oropharyngeal cancers at Ganzhou Tumor Hospital. Comorbidities included three cases of hypertension, two cases of chronic bronchitis, two cases of diabetes mellitus, and one case of history of cerebral infarction.
Detailed clinical data are presented in Table 1.

Table 1. Clinical data (case number; sex; age; site of primary tumor; tumor dimensions; pathology; extent of defect; comorbidities; procedure; repair method)
Case 1: male; 68 years; lateral border of tongue; 4 × 3 cm; squamous; 6 × 5 cm; primary hypertension; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.
Case 2: female; 66 years; lateral border of tongue; 3 × 3 cm; squamous; 5 × 5 cm; diabetes mellitus; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.
Case 3: male; 75 years; lateral border of tongue; 3 × 2 cm; squamous; 5 × 4 cm; chronic bronchitis; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.
Case 4: male; 71 years; root of tongue; 2 × 2 cm; squamous; 4 × 4 cm; primary hypertension; root-of-tongue carcinoma resection plus whole neck dissection; tongue-root defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.
Case 5: male; 67 years; root of tongue; 3 × 2 cm; squamous; 5 × 4 cm; cerebral infarction; root-of-tongue carcinoma resection plus whole neck dissection; tongue-root defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.
Case 6: male; 70 years; floor of mouth; 2 × 2 cm; squamous; 4 × 4 cm; primary hypertension; floor-of-mouth cancer resection plus whole neck dissection; floor-of-mouth mucosal defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.
Case 7: male; 78 years; gingiva; 3 × 3 cm; squamous; 5 × 5 cm; chronic bronchitis; mandible segmental resection plus total neck dissection; mandibular defects repaired with submandibular flap, floor of mouth repaired with sternocleidomastoid muscle flap.
Case 8: male; 72 years; gingiva; 4 × 3 cm; squamous; 6 × 5 cm; diabetes mellitus; mandible segmental resection plus total neck dissection; mandibular defects repaired with submandibular flap, floor of mouth repaired with sternocleidomastoid muscle flap.

Surgical procedure The surgeries were performed under general anesthesia. Ipsilateral primary tumor resection and total neck level I to V lymph node dissection were performed, including level Ia, ipsilateral floor of mouth, sublingual gland, and mandibular lingual-side periosteum. Local mandibular resection was performed in two patients with gingival carcinoma. Following the confirmation of no residual tumor and no cervical lymph node metastases by intraoperative frozen-section pathology, submandibular gland flap was performed to reconstruct the oral defect, and sternocleidomastoid myocutaneous flap was used to reconstruct the floor of the mouth.

Design and preparation of combined submandibular gland and sternocleidomastoid myocutaneous flap and reconstruction During routine neck dissection, a horizontal skin incision was first made 2 cm below the mandible and carried down through the subcutaneous tissue and platysma. The submandibular gland was then dissected free of the surrounding structures cranially and caudally, up to the horizontal branch of the mandible and down to the level of the hyoid bone. During the preparation of the submandibular gland flap, the capsule of the submandibular gland was well-protected. During level I neck dissection, all lymphatic tissue and fatty tissue at levels I to V were dissected out, and frozen-section and paraffin-section pathological examinations for cancer metastases were subsequently conducted. The inferior border of the submandibular gland was released and separated from the digastric muscle. The proximal ends of the external maxillary artery and anterior facial vein were identified in the superior border of the deep surface of the posterior belly of the digastric muscle and were well-protected. The posterior border of the submandibular gland was then separated, and the distal ends of the external maxillary artery and anterior facial vein were both ligated with no damage to the marginal mandibular branches of the facial nerve. The superior border of the submandibular gland was released with its duct severed. Removal of the primary tumor was routinely performed. The released submandibular gland was then transferred to the site of the defect for reconstruction, and the proximal ends of the external maxillary artery and anterior facial vein were used as pedicles (Figure 1). The clavicular end of the sternocleidomastoid was dissected to produce a sternocleidomastoid myocutaneous flap, and the clavicular end of sternocleidomastoid was then sutured to the anterior belly of the digastric muscle and mylohyoid muscle to reconstruct the floor of the mouth (Figure 2). Figure 1: Submandibular gland flap for reconstruction of root-of-tongue carcinoma, intraoperative view. Figure 2: Sternocleidomastoid myocutaneous flap for reconstruction of roof of mouth, intraoperative view.

Radiotherapy Postoperative radiotherapy was conducted in two patients with tumors 4 × 3 cm using a linear accelerator (Siemens AG, Munich, Germany) with 6-MV x-rays. Patients were treated in a thermoplastic mask with isocenter irradiation. The non-irradiated area was shielded by an individualized low-melting-point lead shielding. Conventional fractionated radiotherapy was carried out at a dose of 60 Gy for the primary tumor site and a prophylactic dose of 44 Gy for cervical lymph nodes in 2-Gy fractions once daily five times per week.

Clinical observations Detailed clinical observations included: postoperative wound healing, complications, pathological examinations of submandibular, submental and cervical lymph nodes, postoperative contours of tongue, root of tongue and floor of mouth, postoperative contour and occlusion of mandible, restoration of breathing, phonation, swallowing, dehiscence and occlusion, and postoperative shoulder function, as well as relapse of primary lesion and cervical lymph nodes.
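The radiotherapy fractionation schedule described in the methods above (60 Gy to the primary tumor site and a prophylactic 44 Gy to the cervical lymph nodes, delivered in 2-Gy fractions once daily, five days per week) implies a simple calculation of fraction count and overall treatment time. The short sketch below is illustrative arithmetic only and is not part of the study's methods.

```python
# Illustrative arithmetic for the fractionation schedule described above
# (not part of the study): 2-Gy fractions, once daily, five fractions per week.
def fractionation(total_dose_gy: float, dose_per_fraction_gy: float = 2.0,
                  fractions_per_week: int = 5) -> tuple[int, float]:
    """Return (number of fractions, treatment duration in weeks)."""
    n_fractions = round(total_dose_gy / dose_per_fraction_gy)
    weeks = n_fractions / fractions_per_week
    return n_fractions, weeks

print(fractionation(60.0))  # primary tumor site: 30 fractions over 6.0 weeks
print(fractionation(44.0))  # prophylactic neck dose: 22 fractions over 4.4 weeks
```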
Results
Postoperative conditions No patients died from the surgery. All submandibular gland flaps survived and no complications were observed. Postoperatively, the submandibular glands developed well-mucosalized surfaces and no significant pain from gland distension was noted. The operative incisions healed by first intention in all cases. Both intraoperative frozen sections and postoperative paraffin sections for all patients confirmed pathologically no cancer metastasis to any of the 27 dissected level I lymph nodes. Pathological examination also confirmed no lymphatic metastasis in the remaining levels in all cases.

Appearance and function After reconstruction, tongue function recovered well and the contour was pleasing. No significant impact on phonation was observed. For reconstruction of the tongue root, no negative impact on breathing or swallowing and no significant swelling were observed. Tongue movement was normal, with clear phonation. After floor-of-mouth reconstruction, no swelling or significant impact on swallowing, tongue movement, or phonation was observed. Mandibular reconstruction resulted in basically satisfactory appearance. Range of mouth-opening was normal, but slight malocclusion was observed. Swallowing and phonation were both normal. No postoperative shoulder dysfunction was found, indicating that separation of the sternocleidomastoid from the clavicle had not noticeably altered shoulder function (Figures 3 and 4). Figure 3: Submandibular gland flap for reconstruction of tongue cancer. One week postoperatively, surface mucosalization of the submandibular gland is bright red. Figure 4: Submandibular gland flap for reconstruction of gingival cancer. Three months postoperatively, submandibular gland is essentially the same color as the oral mucosa.

Follow-up Postoperative follow-up began at the completion of treatment and ended 31 May 2013. No patient was lost to follow-up. During the follow-up period of 12 to 28 months, none of the patients developed local recurrence or lymphatic or distant metastases.
Conclusions
Skin flap, musculocutaneous flap, and osteo-myocutaneous flap are conventional methods of postoperative reconstruction in patients with oral cavity and oropharyngeal cancers. However, older aged patients with oral cavity and oropharyngeal cancers tend to have comorbid conditions. Older aged patients with this complex medical condition have poor surgical tolerance and thus cannot undergo a lengthy and potentially risky surgical operation. Our combined reconstruction technique with submandibular gland flap and sternocleidomastoid myocutaneous flap has the advantages of being easier to perform, causing less surgical injury and only minor impact on contour, and greatly reducing operation time and surgical risk. This technique can facilitate the restoration of oral function and facial aesthetics and greatly improve the quality of life of older aged patients. Hence, this technique can provide a new and safe reconstruction option for older aged cancer patients with complex medical conditions. If we can verify at the molecular level the absence of intraglandular lymph nodes and that there is no lymphatic metastasis to the submandibular glands, this combined reconstruction technique may gain popularity.
[ "Background", "Ethics statement", "Inclusion and exclusion criteria", "Clinical data", "Surgical procedure", "Design and preparation of combined submandibular gland and sternocleidomastoid myocutaneous flap and reconstruction", "Radiotherapy", "Clinical observations", "Postoperative conditions", "Appearance and function", "Follow-up" ]
[ "A number of major organs are concentrated in the oral cavity and oropharynx. Surgical removal of oral cavity and oropharyngeal cancers can lead to facial injury or even disfigurement and decreased oral functions, including speech, mastication, and swallowing. Surgical reconstruction with skin flap, musculocutaneous flap, and/or osteo-myocutaneous flap enables oral cancer patients to achieve appropriate restoration of oral function and facial aesthetics and a greatly improved quality of life [1, 2]. With the gradual aging of society, the proportion of older aged patients with oral cavity and oropharyngeal cancers and comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease has been markedly increasing. Older aged patients with complex medical conditions may have poor surgical tolerance and be unable to undergo lengthy and risky operative procedures. Conventional reconstructive surgeries with skin, musculocutaneous, and osteo-myocutaneous flaps tend to be time-consuming and cause relatively large operative trauma, thus potentially increasing surgical risk. Moreover, the postoperative recovery of older aged patients, especially those with comorbid systemic disease, may be delayed. Therefore, there is an increasing need for a new reconstruction method to improve the tolerance and safety of reconstructive surgery [3].\nOral cavity and oropharyngeal cancers most often metastasize to neck lymph nodes, and neck lymphatic metastases are most frequently observed in levels I, II, and III [4]. The submandibular glands are located in level Ib, where they are surrounded by rich lymphatic tissues. However, the involvement of submandibular glands in oral cavity and oropharyngeal cancers is quite rare, especially in early oral cancers [5, 6]. Additionally, the existence of intraglandular lymph nodes within submandibular glands continues to be debated [7, 8]. Studies have demonstrated that preservation of the submandibular gland during neck dissection is oncologically safe in patients with early oral cancers unless the primary tumor or metastatic regional lymphadenopathy is adherent to the gland [9, 10]. It has also been reported that the submandibular gland can survive and maintain its integrity and function following surgical transfer and reimplantation [11, 12]. These studies suggest that the submandibular gland can be used in the surgical reconstruction of oral defects. Therefore, it is of clinical significance to explore further the feasibility of the submandibular gland flap for postoperative reconstruction in patients with early oral cavity and oropharyngeal cancers.\nBetween January 2011 and May 2012, we performed reconstructive surgeries for oral defects using combined submandibular gland and sternocleidomastoid musculocutaneous flaps in eight older aged patients with oral cavity and oropharyngeal cancers at our facility. In this study, we conducted a retrospective analysis of these patients and report on the safety, practicality, and clinical effects of this new reconstruction method.", "The ethics committee of the Tumor Hospital of Ganzhou Review Board approved the study protocol (20101201), and the study was conducted in accordance with the principles of the Declaration of Helsinki regarding research involving human subjects. 
Each of the patients provided written informed consent to participate after the nature of the study had been explained to them.", "Inclusion criteria: (1) over 65 years old; (2) pathologically confirmed diagnosis of oral cavity cancer or oropharyngeal cancer; (3) no cervical lymph node metastasis or no level I lymphatic metastasis; (4) tumor staging: < T3; (5) comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease; (6) Karnofsky score: ≥ 80; (7) expected survival: over 1 year; (8) voluntarily signed informed consent. Exclusion criteria: (1) bilateral cervical lymph node metastasis; (2) bilateral level I cervical lymph node metastasis; (3) diseased submandibular gland; (4) prior neck surgery or neck radiotherapy; (5) distant metastasis; (6) unable to tolerate surgical operation due to poor clinical condition.", "Between January 2011 and May 2012, combined submandibular gland and sternocleidomastoid myocutaneous flap postoperative reconstruction was performed in eight patients over 65 years of age (seven men, one woman; age, 66 to 75 years (median, 69.6)) with oral cavity and oropharyngeal cancers at Ganzhou Tumor Hospital. Comorbidities included three cases of hypertension, two cases of chronic bronchitis, two cases of diabetes mellitus, and one case of history of cerebral infarction. Detailed clinical data are presented in Table 1.Table 1\nClinical data\nCase numberSexAge (years)Site of primary tumorTumor dimensions (cm)PathologyExtent of defect (cm)ComorbiditiesProcedureRepair method1Male68Lateral border of tongue4 × 3Squamous6 × 5Primary hypertensionTongue carcinoma resection plus total neck dissectionTongue defects repaired with submandibular gland flap; floor of mouth repaired with sternocleidomastoid muscle flap2Female66Lateral border of tongue3 × 3Squamous5 × 5Diabetes mellitusTongue carcinoma resection plus total neck dissectionTongue defects repaired with submandibular gland flap; floor of mouth repaired with sternocleidomastoid muscle flap3Male75Lateral border of tongue3 × 2Squamous5 × 4Chronic bronchitisTongue carcinoma resection plus total neck dissectionTongue defects repaired with submandibular gland flap; floor of mouth repaired with sternocleidomastoid muscle flap4Male71Root of tongue2 × 2Squamous4 × 4Primary hypertensionRoot-of-tongue carcinoma resection plus whole neck dissectionTongue-root defects repaired with submandibular gland flap; floor of mouth repaired with sternocleidomastoid muscle flap5Male67Root of tongue3 × 2Squamous5 × 4Cerebral infarctionRoot-of-tongue carcinoma resection plus whole neck dissectionTongue-root defects repaired with submandibular gland flap; floor of mouth repaired with sternocleidomastoid muscle flap6Male70Floor of mouth2 × 2Squamous4 × 4Primary hypertensionFloor-of-mouth cancer resection plus whole neck dissectionFloor-of-mouth mucosal defects repaired with submandibular gland flap; floor of mouth repaired with sternocleidomastoid muscle flap7Male78Gingiva3 × 3Squamous5 × 5Chronic bronchitisMandible segmental resection plus total neck dissectionMandibular defects repaired with submandibular flap; floor of mouth repaired with sternocleidomastoid muscle flap8Male72Gingiva4 × 3Squamous6 × 5Diabetes mellitusMandible segmental resection plus total neck dissectionMandibular defects repaired with submandibular flap; floor of mouth repaired with sternocleidomastoid muscle flap\n\nClinical data\n", "The surgeries were performed under general anesthesia. 
Ipsilateral primary tumor resection and total neck level I to V lymph node dissection were performed, including level Ia, ipsilateral floor of mouth, sublingual gland, and mandibular lingual-side periosteum. Local mandibular resection was performed in two patients with gingival carcinoma. Following the confirmation of no residual tumor and no cervical lymph node metastases by intraoperative frozen-section pathology, submandibular gland flap was performed to reconstruct the oral defect, and sternocleidomastoid myocutaneous flap was used to reconstruct the floor of the mouth.", "During routine neck dissection, a horizontal skin incision was first made 2 cm below the mandible and carried down through the subcutaneous tissue and platysma. The submandibular gland was then dissected free of the surrounding structures cranially and caudally, up to the horizontal branch of the mandible and down to the level of the hyoid bone. During the preparation of the submandibular gland flap, the capsule of the submandibular gland was well-protected.\nDuring level I neck dissection, all lymphatic tissue and fatty tissue at levels I to V were dissected out, and frozen-section and paraffin-section pathological examinations for cancer metastases were subsequently conducted.\nThe inferior border of the submandibular gland was released and separated from the digastric muscle. The proximal ends of the external maxillary artery and anterior facial vein were identified in the superior border of the deep surface of the posterior belly of the digastric muscle and were well-protected. The posterior border of the submandibular gland was then separated, and the distal ends of the external maxillary artery and anterior facial vein were both ligated with no damage to the marginal mandibular branches of the facial nerve. The superior border of the submandibular gland was released with its duct severed.\nRemoval of the primary tumor was routinely performed. The released submandibular gland was then transferred to the site of the defect for reconstruction, and the proximal ends of the external maxillary artery and anterior facial vein were used as pedicles (Figure 1).\nThe clavicular end of the sternocleidomastoid was dissected to produce a sternocleidomastoid myocutaneous flap, and the clavicular end of sternocleidomastoid was then sutured to the anterior belly of the digastric muscle and mylohyoid muscle to reconstruct the floor of the mouth (Figure 2).Figure 1\nSubmandibular gland flap for reconstruction of root-of-tongue carcinoma, intraoperative view.\nFigure 2\nSternocleidomastoid myocutaneous flap for reconstruction of roof of mouth, intraoperative view.\n\n\nSubmandibular gland flap for reconstruction of root-of-tongue carcinoma, intraoperative view.\n\n\nSternocleidomastoid myocutaneous flap for reconstruction of roof of mouth, intraoperative view.\n", "Postoperative radiotherapy was conducted in two patients with tumors 4 × 3 cm using a linear accelerator (Siemens AG, Munich, Germany) with 6-MV x-rays. Patients were treated in a thermoplastic mask with isocenter irradiation. The non-irradiated area was shielded by an individualized low-melting-point lead shielding. 
Conventional fractionated radiotherapy was carried out at a dose of 60 Gy for the primary tumor site and a prophylactic dose of 44 Gy for cervical lymph nodes in 2-Gy fractions once daily five times per week.", "Detailed clinical observations included: postoperative wound healing, complications, pathological examinations of submandibular, submental and cervical lymph nodes, postoperative contours of tongue, root of tongue and floor of mouth, postoperative contour and occlusion of mandible, restoration of breathing, phonation, swallowing, dehiscence and occlusion, and postoperative shoulder function, as well as relapse of primary lesion and cervical lymph nodes.", "No patients died from the surgery. All submandibular gland flaps survived and no complications were observed. Postoperatively, the submandibular glands developed well-mucosalized surfaces and no significant pain from gland distension was noted. The operative incisions healed by first intention in all cases. Both intraoperative frozen sections and postoperative paraffin sections for all patients confirmed pathologically no cancer metastasis to any of the 27 dissected level I lymph nodes. Pathological examination also confirmed no lymphatic metastasis in the remaining levels in all cases.", "After reconstruction, tongue function recovered well and the contour was pleasing. No significant impact on phonation was observed. For reconstruction of the tongue root, no negative impact on breathing or swallowing and no significant swelling were observed. Tongue movement was normal, with clear phonation. After floor-of-mouth reconstruction, no swelling or significant impact on swallowing, tongue movement, or phonation was observed. Mandibular reconstruction resulted in basically satisfactory appearance. Range of mouth-opening was normal, but slight malocclusion was observed. Swallowing and phonation were both normal. No postoperative shoulder dysfunction was found, indicating that separation of the sternocleidomastoid from the clavicle had not noticeably altered shoulder function (Figures 3 and 4).Figure 3\nSubmandibular gland flap for reconstruction of tongue cancer. One week postoperatively, surface mucosalization of the submandibular gland is bright red.Figure 4\nSubmandibular gland flap for reconstruction of gingival cancer. Three months postoperatively, submandibular gland is essentially the same color as the oral mucosa.\n\nSubmandibular gland flap for reconstruction of tongue cancer. One week postoperatively, surface mucosalization of the submandibular gland is bright red.\n\nSubmandibular gland flap for reconstruction of gingival cancer. Three months postoperatively, submandibular gland is essentially the same color as the oral mucosa.", "Postoperative follow-up began at the completion of treatment and ended 31 May 2013. No patient was lost to follow-up. During the follow-up period of 12 to 28 months, none of the patients developed local recurrence or lymphatic or distant metastases." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Ethics statement", "Inclusion and exclusion criteria", "Clinical data", "Surgical procedure", "Design and preparation of combined submandibular gland and sternocleidomastoid myocutaneous flap and reconstruction", "Radiotherapy", "Clinical observations", "Results", "Postoperative conditions", "Appearance and function", "Follow-up", "Discussion", "Conclusions" ]
[ "A number of major organs are concentrated in the oral cavity and oropharynx. Surgical removal of oral cavity and oropharyngeal cancers can lead to facial injury or even disfigurement and decreased oral functions, including speech, mastication, and swallowing. Surgical reconstruction with skin flap, musculocutaneous flap, and/or osteo-myocutaneous flap enables oral cancer patients to achieve appropriate restoration of oral function and facial aesthetics and a greatly improved quality of life [1, 2]. With the gradual aging of society, the proportion of older aged patients with oral cavity and oropharyngeal cancers and comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease has been markedly increasing. Older aged patients with complex medical conditions may have poor surgical tolerance and be unable to undergo lengthy and risky operative procedures. Conventional reconstructive surgeries with skin, musculocutaneous, and osteo-myocutaneous flaps tend to be time-consuming and cause relatively large operative trauma, thus potentially increasing surgical risk. Moreover, the postoperative recovery of older aged patients, especially those with comorbid systemic disease, may be delayed. Therefore, there is an increasing need for a new reconstruction method to improve the tolerance and safety of reconstructive surgery [3].\nOral cavity and oropharyngeal cancers most often metastasize to neck lymph nodes, and neck lymphatic metastases are most frequently observed in levels I, II, and III [4]. The submandibular glands are located in level Ib, where they are surrounded by rich lymphatic tissues. However, the involvement of submandibular glands in oral cavity and oropharyngeal cancers is quite rare, especially in early oral cancers [5, 6]. Additionally, the existence of intraglandular lymph nodes within submandibular glands continues to be debated [7, 8]. Studies have demonstrated that preservation of the submandibular gland during neck dissection is oncologically safe in patients with early oral cancers unless the primary tumor or metastatic regional lymphadenopathy is adherent to the gland [9, 10]. It has also been reported that the submandibular gland can survive and maintain its integrity and function following surgical transfer and reimplantation [11, 12]. These studies suggest that the submandibular gland can be used in the surgical reconstruction of oral defects. Therefore, it is of clinical significance to explore further the feasibility of the submandibular gland flap for postoperative reconstruction in patients with early oral cavity and oropharyngeal cancers.\nBetween January 2011 and May 2012, we performed reconstructive surgeries for oral defects using combined submandibular gland and sternocleidomastoid musculocutaneous flaps in eight older aged patients with oral cavity and oropharyngeal cancers at our facility. In this study, we conducted a retrospective analysis of these patients and report on the safety, practicality, and clinical effects of this new reconstruction method.", " Ethics statement The ethics committee of the Tumor Hospital of Ganzhou Review Board approved the study protocol (20101201), and the study was conducted in accordance with the principles of the Declaration of Helsinki regarding research involving human subjects. 
Each of the patients provided written informed consent to participate after the nature of the study had been explained to them.\n Inclusion and exclusion criteria Inclusion criteria: (1) over 65 years old; (2) pathologically confirmed diagnosis of oral cavity cancer or oropharyngeal cancer; (3) no cervical lymph node metastasis or no level I lymphatic metastasis; (4) tumor staging: < T3; (5) comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease; (6) Karnofsky score: ≥ 80; (7) expected survival: over 1 year; (8) voluntarily signed informed consent. Exclusion criteria: (1) bilateral cervical lymph node metastasis; (2) bilateral level I cervical lymph node metastasis; (3) diseased submandibular gland; (4) prior neck surgery or neck radiotherapy; (5) distant metastasis; (6) unable to tolerate surgical operation due to poor clinical condition.\n Clinical data Between January 2011 and May 2012, combined submandibular gland and sternocleidomastoid myocutaneous flap postoperative reconstruction was performed in eight patients over 65 years of age (seven men, one woman; age, 66 to 75 years (median, 69.6)) with oral cavity and oropharyngeal cancers at Ganzhou Tumor Hospital. Comorbidities included three cases of hypertension, two cases of chronic bronchitis, two cases of diabetes mellitus, and one case of history of cerebral infarction. 
Detailed clinical data are presented in Table 1.\nTable 1. Clinical data (case number: sex, age in years; site of primary tumor; tumor dimensions in cm; pathology; extent of defect in cm; comorbidities; procedure; repair method)\nCase 1: male, 68; lateral border of tongue; 4 × 3; squamous; 6 × 5; primary hypertension; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 2: female, 66; lateral border of tongue; 3 × 3; squamous; 5 × 5; diabetes mellitus; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 3: male, 75; lateral border of tongue; 3 × 2; squamous; 5 × 4; chronic bronchitis; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 4: male, 71; root of tongue; 2 × 2; squamous; 4 × 4; primary hypertension; root-of-tongue carcinoma resection plus whole neck dissection; tongue-root defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 5: male, 67; root of tongue; 3 × 2; squamous; 5 × 4; cerebral infarction; root-of-tongue carcinoma resection plus whole neck dissection; tongue-root defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 6: male, 70; floor of mouth; 2 × 2; squamous; 4 × 4; primary hypertension; floor-of-mouth cancer resection plus whole neck dissection; floor-of-mouth mucosal defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 7: male, 78; gingiva; 3 × 3; squamous; 5 × 5; chronic bronchitis; mandible segmental resection plus total neck dissection; mandibular defects repaired with submandibular flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 8: male, 72; gingiva; 4 × 3; squamous; 6 × 5; diabetes mellitus; mandible segmental resection plus total neck dissection; mandibular defects repaired with submandibular flap, floor of mouth repaired with sternocleidomastoid muscle flap.\n Surgical procedure The surgeries were performed under general anesthesia. Ipsilateral primary tumor resection and total neck level I to V lymph node dissection were performed, including level Ia, ipsilateral floor of mouth, sublingual gland, and mandibular lingual-side periosteum. Local mandibular resection was performed in two patients with gingival carcinoma. 
Following the confirmation of no residual tumor and no cervical lymph node metastases by intraoperative frozen-section pathology, submandibular gland flap was performed to reconstruct the oral defect, and sternocleidomastoid myocutaneous flap was used to reconstruct the floor of the mouth.\n Design and preparation of combined submandibular gland and sternocleidomastoid myocutaneous flap and reconstruction During routine neck dissection, a horizontal skin incision was first made 2 cm below the mandible and carried down through the subcutaneous tissue and platysma. The submandibular gland was then dissected free of the surrounding structures cranially and caudally, up to the horizontal branch of the mandible and down to the level of the hyoid bone. During the preparation of the submandibular gland flap, the capsule of the submandibular gland was well-protected.\nDuring level I neck dissection, all lymphatic tissue and fatty tissue at levels I to V were dissected out, and frozen-section and paraffin-section pathological examinations for cancer metastases were subsequently conducted.\nThe inferior border of the submandibular gland was released and separated from the digastric muscle. The proximal ends of the external maxillary artery and anterior facial vein were identified in the superior border of the deep surface of the posterior belly of the digastric muscle and were well-protected. The posterior border of the submandibular gland was then separated, and the distal ends of the external maxillary artery and anterior facial vein were both ligated with no damage to the marginal mandibular branches of the facial nerve. The superior border of the submandibular gland was released with its duct severed.\nRemoval of the primary tumor was routinely performed. The released submandibular gland was then transferred to the site of the defect for reconstruction, and the proximal ends of the external maxillary artery and anterior facial vein were used as pedicles (Figure 1).\nThe clavicular end of the sternocleidomastoid was dissected to produce a sternocleidomastoid myocutaneous flap, and the clavicular end of sternocleidomastoid was then sutured to the anterior belly of the digastric muscle and mylohyoid muscle to reconstruct the floor of the mouth (Figure 2).\nFigure 1. Submandibular gland flap for reconstruction of root-of-tongue carcinoma, intraoperative view.\nFigure 2. Sternocleidomastoid myocutaneous flap for reconstruction of the floor of the mouth, intraoperative view.\n Radiotherapy Postoperative radiotherapy was conducted in two patients with tumors 4 × 3 cm using a linear accelerator (Siemens AG, Munich, Germany) with 6-MV x-rays. Patients were treated in a thermoplastic mask with isocenter irradiation. The non-irradiated area was shielded by an individualized low-melting-point lead shielding. 
Conventional fractionated radiotherapy was carried out at a dose of 60 Gy for the primary tumor site and a prophylactic dose of 44 Gy for cervical lymph nodes in 2-Gy fractions once daily five times per week.\n Clinical observations Detailed clinical observations included: postoperative wound healing, complications, pathological examinations of submandibular, submental and cervical lymph nodes, postoperative contours of tongue, root of tongue and floor of mouth, postoperative contour and occlusion of mandible, restoration of breathing, phonation, swallowing, dehiscence and occlusion, and postoperative shoulder function, as well as relapse of primary lesion and cervical lymph nodes.", "The ethics committee of the Tumor Hospital of Ganzhou Review Board approved the study protocol (20101201), and the study was conducted in accordance with the principles of the Declaration of Helsinki regarding research involving human subjects. Each of the patients provided written informed consent to participate after the nature of the study had been explained to them.", "Inclusion criteria: (1) over 65 years old; (2) pathologically confirmed diagnosis of oral cavity cancer or oropharyngeal cancer; (3) no cervical lymph node metastasis or no level I lymphatic metastasis; (4) tumor staging: < T3; (5) comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease; (6) Karnofsky score: ≥ 80; (7) expected survival: over 1 year; (8) voluntarily signed informed consent. Exclusion criteria: (1) bilateral cervical lymph node metastasis; (2) bilateral level I cervical lymph node metastasis; (3) diseased submandibular gland; (4) prior neck surgery or neck radiotherapy; (5) distant metastasis; (6) unable to tolerate surgical operation due to poor clinical condition.", "Between January 2011 and May 2012, combined submandibular gland and sternocleidomastoid myocutaneous flap postoperative reconstruction was performed in eight patients over 65 years of age (seven men, one woman; age, 66 to 75 years (median, 69.6)) with oral cavity and oropharyngeal cancers at Ganzhou Tumor Hospital. Comorbidities included three cases of hypertension, two cases of chronic bronchitis, two cases of diabetes mellitus, and one case of history of cerebral infarction. 
Detailed clinical data are presented in Table 1.\nTable 1. Clinical data (case number: sex, age in years; site of primary tumor; tumor dimensions in cm; pathology; extent of defect in cm; comorbidities; procedure; repair method)\nCase 1: male, 68; lateral border of tongue; 4 × 3; squamous; 6 × 5; primary hypertension; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 2: female, 66; lateral border of tongue; 3 × 3; squamous; 5 × 5; diabetes mellitus; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 3: male, 75; lateral border of tongue; 3 × 2; squamous; 5 × 4; chronic bronchitis; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 4: male, 71; root of tongue; 2 × 2; squamous; 4 × 4; primary hypertension; root-of-tongue carcinoma resection plus whole neck dissection; tongue-root defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 5: male, 67; root of tongue; 3 × 2; squamous; 5 × 4; cerebral infarction; root-of-tongue carcinoma resection plus whole neck dissection; tongue-root defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 6: male, 70; floor of mouth; 2 × 2; squamous; 4 × 4; primary hypertension; floor-of-mouth cancer resection plus whole neck dissection; floor-of-mouth mucosal defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 7: male, 78; gingiva; 3 × 3; squamous; 5 × 5; chronic bronchitis; mandible segmental resection plus total neck dissection; mandibular defects repaired with submandibular flap, floor of mouth repaired with sternocleidomastoid muscle flap.\nCase 8: male, 72; gingiva; 4 × 3; squamous; 6 × 5; diabetes mellitus; mandible segmental resection plus total neck dissection; mandibular defects repaired with submandibular flap, floor of mouth repaired with sternocleidomastoid muscle flap.\n", "The surgeries were performed under general anesthesia. Ipsilateral primary tumor resection and total neck level I to V lymph node dissection were performed, including level Ia, ipsilateral floor of mouth, sublingual gland, and mandibular lingual-side periosteum. Local mandibular resection was performed in two patients with gingival carcinoma. Following the confirmation of no residual tumor and no cervical lymph node metastases by intraoperative frozen-section pathology, submandibular gland flap was performed to reconstruct the oral defect, and sternocleidomastoid myocutaneous flap was used to reconstruct the floor of the mouth.", "During routine neck dissection, a horizontal skin incision was first made 2 cm below the mandible and carried down through the subcutaneous tissue and platysma. The submandibular gland was then dissected free of the surrounding structures cranially and caudally, up to the horizontal branch of the mandible and down to the level of the hyoid bone. During the preparation of the submandibular gland flap, the capsule of the submandibular gland was well-protected.\nDuring level I neck dissection, all lymphatic tissue and fatty tissue at levels I to V were dissected out, and frozen-section and paraffin-section pathological examinations for cancer metastases were subsequently conducted.\nThe inferior border of the submandibular gland was released and separated from the digastric muscle. 
The proximal ends of the external maxillary artery and anterior facial vein were identified in the superior border of the deep surface of the posterior belly of the digastric muscle and were well-protected. The posterior border of the submandibular gland was then separated, and the distal ends of the external maxillary artery and anterior facial vein were both ligated with no damage to the marginal mandibular branches of the facial nerve. The superior border of the submandibular gland was released with its duct severed.\nRemoval of the primary tumor was routinely performed. The released submandibular gland was then transferred to the site of the defect for reconstruction, and the proximal ends of the external maxillary artery and anterior facial vein were used as pedicles (Figure 1).\nThe clavicular end of the sternocleidomastoid was dissected to produce a sternocleidomastoid myocutaneous flap, and the clavicular end of sternocleidomastoid was then sutured to the anterior belly of the digastric muscle and mylohyoid muscle to reconstruct the floor of the mouth (Figure 2).\nFigure 1. Submandibular gland flap for reconstruction of root-of-tongue carcinoma, intraoperative view.\nFigure 2. Sternocleidomastoid myocutaneous flap for reconstruction of the floor of the mouth, intraoperative view.\n", "Postoperative radiotherapy was conducted in two patients with tumors 4 × 3 cm using a linear accelerator (Siemens AG, Munich, Germany) with 6-MV x-rays. Patients were treated in a thermoplastic mask with isocenter irradiation. The non-irradiated area was shielded by an individualized low-melting-point lead shielding. Conventional fractionated radiotherapy was carried out at a dose of 60 Gy for the primary tumor site and a prophylactic dose of 44 Gy for cervical lymph nodes in 2-Gy fractions once daily five times per week.", "Detailed clinical observations included: postoperative wound healing, complications, pathological examinations of submandibular, submental and cervical lymph nodes, postoperative contours of tongue, root of tongue and floor of mouth, postoperative contour and occlusion of mandible, restoration of breathing, phonation, swallowing, dehiscence and occlusion, and postoperative shoulder function, as well as relapse of primary lesion and cervical lymph nodes.", " Postoperative conditions No patients died from the surgery. All submandibular gland flaps survived and no complications were observed. Postoperatively, the submandibular glands developed well-mucosalized surfaces and no significant pain from gland distension was noted. The operative incisions healed by first intention in all cases. Both intraoperative frozen sections and postoperative paraffin sections for all patients confirmed pathologically no cancer metastasis to any of the 27 dissected level I lymph nodes. Pathological examination also confirmed no lymphatic metastasis in the remaining levels in all cases.\n Appearance and function After reconstruction, tongue function recovered well and the contour was pleasing. No significant impact on phonation was observed. For reconstruction of the tongue root, no negative impact on breathing or swallowing and no significant swelling were observed. Tongue movement was normal, with clear phonation. After floor-of-mouth reconstruction, no swelling or significant impact on swallowing, tongue movement, or phonation was observed. Mandibular reconstruction resulted in basically satisfactory appearance. Range of mouth-opening was normal, but slight malocclusion was observed. Swallowing and phonation were both normal. No postoperative shoulder dysfunction was found, indicating that separation of the sternocleidomastoid from the clavicle had not noticeably altered shoulder function (Figures 3 and 4).\nFigure 3. Submandibular gland flap for reconstruction of tongue cancer. One week postoperatively, surface mucosalization of the submandibular gland is bright red.\nFigure 4. Submandibular gland flap for reconstruction of gingival cancer. Three months postoperatively, submandibular gland is essentially the same color as the oral mucosa.\n Follow-up Postoperative follow-up began at the completion of treatment and ended 31 May 2013. No patient was lost to follow-up. During the follow-up period of 12 to 28 months, none of the patients developed local recurrence or lymphatic or distant metastases.", "No patients died from the surgery. All submandibular gland flaps survived and no complications were observed. Postoperatively, the submandibular glands developed well-mucosalized surfaces and no significant pain from gland distension was noted. The operative incisions healed by first intention in all cases. Both intraoperative frozen sections and postoperative paraffin sections for all patients confirmed pathologically no cancer metastasis to any of the 27 dissected level I lymph nodes. Pathological examination also confirmed no lymphatic metastasis in the remaining levels in all cases.", "After reconstruction, tongue function recovered well and the contour was pleasing. No significant impact on phonation was observed. For reconstruction of the tongue root, no negative impact on breathing or swallowing and no significant swelling were observed. Tongue movement was normal, with clear phonation. After floor-of-mouth reconstruction, no swelling or significant impact on swallowing, tongue movement, or phonation was observed. Mandibular reconstruction resulted in basically satisfactory appearance. Range of mouth-opening was normal, but slight malocclusion was observed. Swallowing and phonation were both normal. No postoperative shoulder dysfunction was found, indicating that separation of the sternocleidomastoid from the clavicle had not noticeably altered shoulder function (Figures 3 and 4).\nFigure 3. Submandibular gland flap for reconstruction of tongue cancer. One week postoperatively, surface mucosalization of the submandibular gland is bright red.\nFigure 4. Submandibular gland flap for reconstruction of gingival cancer. Three months postoperatively, submandibular gland is essentially the same color as the oral mucosa.", "Postoperative follow-up began at the completion of treatment and ended 31 May 2013. No patient was lost to follow-up. During the follow-up period of 12 to 28 months, none of the patients developed local recurrence or lymphatic or distant metastases.", "To date there has been no consensus about preservation of the submandibular gland during neck dissection in oral cavity and oropharyngeal cancers. Traditional surgical treatment includes excision of the submandibular gland and the surrounding lymphatic tissue in addition to removal of the primary lesion. Submandibular glands are frequently excised as part of neck dissection. With the development of functional surgery, an increasing number of investigations have demonstrated that cancer metastasis to the submandibular gland is quite rare and that it is feasible and oncologically safe to preserve the submandibular gland with no involvement of cancer metastases during neck dissection followed by close postoperative follow-up. 
Whether to preserve the submandibular gland depends on the following conditions: (1) whether there are intraglandular submandibular lymph nodes; (2) whether it is involved in the primary cancer; and (3) whether the vessels of the submandibular gland are left intact during level I neck dissection.\nThe existence of intraglandular lymph nodes within the submandibular gland currently remains controversial [5, 6]. Because of their doubts about the existence of intraglandular lymph nodes, Rouviere and Tobies as well as DiNardo performed pathological studies to investigate intraglandular lymph nodes using human submandibular gland tissue sections from cadavers. In both studies, no intraglandular lymph nodes were identified in the submandibular gland samples [4, 6]. Other investigations have reported similar results [13, 14]. Even if there were intraglandular lymph nodes, the lymphatic drainage of intraglandular lymph nodes might not be truly identical to that of the lymph nodes outside the submandibular gland capsule because of the difference in anatomic structure between the submandibular gland and submandibular triangle. Moreover, whether lymph from surrounding lymphatic tissue can be directed into intraglandular lymph nodes is less clear and requires further study. Conversely, numerous studies have suggested that although metastatic lymph nodes rarely invade the gland, metastasis to level Ib occurs frequently in oral cavity cancer [15, 16]. Hence, although there is no universal consensus about the existence of intraglandular lymph nodes thus far, it is generally believed that they do not exist within the submandibular gland.\nHead and neck carcinoma, especially oral cancer, seldom metastasizes to the salivary glands. If it occurs, the parotid gland is more likely to be affected than the submandibular gland [17]. Although oral cancer most often metastasizes to neck lymph nodes and levels I, II, and III are the most frequently involved [4], the involvement of the submandibular gland in oral cancers is extremely rare [5, 6]. In a study investigating whether and how head and neck squamous carcinoma metastasizes to the submandibular gland, Spiegel et al. demonstrated that, because of the absence of intraglandular lymph nodes among 196 examined submandibular glands, in no case did a squamous cell carcinoma metastasize to the submandibular gland [8]. The authors found that the submandibular gland was involved only in cases in which the primary tumor was in close proximity to the gland, or when metastasis to level I of the neck had occurred by extension from a locally affected lymph node into the submandibular gland. Byeon et al. reported in 2009 that of 201 cases of oral cavity squamous cell carcinoma, only two (1%) had carcinomatous involvement in the submandibular gland through direct extension from a primary lesion and no submandibular glands showed pathological evidence of isolated metastasis or local extension of metastatic lymph nodes. Based on their study findings, the authors thought that it might be feasible and safe to preserve an oncologically sound submandibular gland during neck dissection in patients with early-stage oral cancer [18]. Other studies revealed similar results, demonstrating that submandibular gland metastasis from head and neck primary squamous cell carcinoma was extremely rare and that direct invasion as a primary mechanism of metastasis led to most of the metastases [6, 7, 9, 10]. 
These studies concluded that preservation of the submandibular gland might be oncologically safe and feasible unless there was evidence of direct invasion of the gland or close proximity of the cancer to it. Taken together, head and neck carcinoma, especially early-stage oral cancer, can be characterized by extremely rare metastasis to the submandibular glands. This metastasis profile provides a solid oncological basis for the use of the preserved submandibular gland as reconstruction material.\nSubmandibular glands are located in level Ib, surrounded by rich lymphatic tissue. The important question is whether this preparatory procedure in creation of a submandibular gland flap is technically feasible; that is whether all lymph nodes in level Ib can be eradicated while leaving the submandibular gland intact for reconstruction during neck dissection. More recently, Dhiwakar et al. performed an anatomic pathological study to determine whether all lymph nodes in level Ib can be extirpated without removing the submandibular gland [19]. The authors reported that complete removal of lymph nodes in level Ib was achieved in all 30 neck dissections and that the submandibular gland and surgical bed contained no residual lymph nodes. They concluded that, in suitable cases, it was technically feasible to remove all level Ib lymph nodes and preserve the submandibular gland.\nThe next question is whether the submandibular gland can survive and maintain its integrity and function following transplantation. Survival of the transplanted submandibular gland is crucial to this reconstruction method. It is clearly unwise to select a submandibular gland as a candidate for reconstruction material if it fails to survive following transplantation. To prevent xerostomia and protect the submandibular gland from radiation damage, Seikaly et al. performed surgical transfer of the submandibular gland to the submental space outside the proposed radiation field in patients with head and neck carcinoma and carried out long-term postoperative follow-up. The authors reported that the transferred submandibular glands survived and maintained salivary function for statistically significant prevention of xerostomia, and that there were no disease recurrences on the side of the transferred gland or in the submental space [11]. Al-Qahtani K et al. conducted a prospective clinical trial in which the submandibular gland was similarly transferred to the submental space. Their results also suggested that the submandibular gland can survive with its function preserved following surgical transplantation [20].\nTherefore, based on these above-mentioned findings, we believe that using the submandibular gland flap to reconstruct oral defects is oncologically safe and feasible in patients with early oral cavity and oropharyngeal cancers. In this study, we preserved the ipsilateral submandibular gland during neck dissection and reconstructed oral defects using the transferred submandibular gland in eight patients. Surgeries were successful in all eight patients and there were no resulting deaths. All submandibular gland flaps survived with well-mucosalized surfaces. No complications were observed. The operative incisions healed by primary intention in all eight cases. Twenty-seven lymph nodes were dissected out of level I in all patients. Pathological examinations of intraoperative frozen sections and postoperative paraffin sections confirmed no cancer metastases to any of the 27 lymph nodes. 
No cancer metastasis was found in the remaining levels in any patient. No patient developed local relapse or lymphatic or distant metastasis during the follow-up period of 12 to 28 months.\nAs a reconstruction material, the submandibular gland has several major advantages. First, its abundant blood supply and a consistent, bulky vascular pedicle make it difficult for the flap to develop necrosis following transplantation. Second, a submandibular gland flap is easy to obtain and prepare. Its preparation requires no special surgical skills and can be completed during neck dissection. A submandibular gland flap can be easily prepared for use upon completion of neck dissection. The key points of the operation are to use care to keep the capsule of the submandibular gland intact and to protect the proximal and distal ends of the external maxillary artery and anterior facial vein during preparation. Third, this method does not require vascular anastomosis. The relatively simple anatomic structure of the submandibular gland facilitates the preparation of a submandibular gland flap in less time than conventional methods. Additionally, neck dissection and flap preparation can be performed as a single-stage procedure, which simplifies the operation and reduces operative duration. Finally, the simplified procedure and shorter duration result in fewer traumas, lower complication rates, and, consequently, decreased surgical risk.\nGiven that a submandibular gland flap cannot completely fill the oral defect because of its limited size, in this study we conducted a combined reconstruction using a submandibular gland flap for oral and oropharyngeal defects and a sternocleidomastoid myocutaneous flap for floor-of-mouth defects. Similar to a submandibular gland flap, a sternocleidomastoid myocutaneous flap is easy to obtain and prepare and does not require a complex surgical procedure. Therefore, this combined reconstructive technique can greatly minimize surgical trauma and operating time.\nThese advantages suggest that this reconstructive technique may be the preferred treatment option in suitable cases, especially in older aged patients with complicated systemic disease. In the present study, all eight patients had comorbid cardiovascular disease, cerebrovascular disease, diabetes, or chronic respiratory disease that made our patients poor candidates for lengthy and risky surgical procedures. Thus, we performed reconstructive surgery using combined submandibular gland and sternocleidomastoid myocutaneous flaps.\nOur preliminary results were encouraging. Surgery was successful in all eight patients, with no surgical deaths and no complications. Compared with the conventional method we previously used, this combined reconstruction method shortened operative duration by about 30% on average. The cosmetic and functional outcomes were satisfactory in all eight patients. Reconstruction of the tongue resulted in good recovery of tongue function, a pleasing contour, and no significant impact on phonation. After reconstruction of the tongue root, no negative impact on breathing or swallowing and no significant swelling were observed. Tongue movement was normal and phonation was clear. Floor-of-mouth reconstruction resulted in no swelling and no significant impact on swallowing, tongue movement, or phonation. Appearance after mandibular reconstruction was basically satisfactory; range of mouth opening was normal although there was slight malocclusion. However, swallowing and phonation were both normal. 
No significant postoperative shoulder dysfunction was noted, suggesting that the separation of the sternocleidomastoid from the clavicle did not greatly alter this. Appropriate restoration of oral function and facial aesthetics greatly improved the quality of life of these older aged patients.\nDespite its advantages, there are several disadvantages to this reconstruction method. First, safety is an underlying concern. To the best of our knowledge, no large multicenter clinical study - particularly a prospective study - has been conducted to prove that intraglandular lymph nodes do not exist within the submandibular glands or that the submandibular gland cannot be involved in oral cavity and oropharyngeal cancers through lymphatic metastasis. Thus, only in cases in which no neck or level I lymphatic metastases are confirmed pathologically should the submandibular gland be used for reconstruction. This limits the clinical application of this technique in a considerable number of patients. A second disadvantage is the limited length of the vascular pedicle. If the proximal ends of the anterior facial artery and vein are used as vascular pedicles, the submandibular gland flap can extend only 2.0 ± 7.0 cm away from donor site because of the length limitation of the pedicle [21]. If the distal ends of the anterior facial artery and vein are used as vascular pedicles, the reach of this flap can be improved to some extent with blood supply to the transferred submandibular gland from the reverse flow of the external maxillary artery and anterior facial vein. A third disadvantage is the reconstruction range. The maximal reconstruction range is equal to the maximal cross-section of the submandibular gland [22], so the submandibular gland flap cannot cover a larger defect [15]. A final disadvantage concerns submandibular gland secretions. It has been reported that the denervated submandibular gland maintains long-term secretory function [13, 16]. In this reconstruction method, the transferred submandibular gland is denervated once it is completely detached from surrounding tissue during flap preparation, and about half of the submandibular gland is exposed in the oral cavity after completion of the reconstruction. Little is known about whether the exposure in the oral cavity has an impact on secretory function of the denervated gland. If the exposure has little or no impact and salivary function after denervation is preserved, the submandibular gland may develop swelling and pain after reconstructive surgery since the submandibular ducts have been severed during flap preparation. In this study, because of the limitations in lengths of vascular pedicles and ranges of reconstruction, we performed the reconstruction only for oral cancers in the posterior two-thirds of the tongue and both sides of the floor of mouth, gingival cancer adjacent to the molars, and oropharyngeal cancer close to the tongue. The maximal reconstruction range was 6 × 5 cm.\nWe note several limitations to this study. The number of patients in this study was relatively small and the follow-up period relatively short, and thus far no prospective randomized comparative clinical study has been conducted to confirm the feasibility of this combined reconstruction technique in a large number of patients. 
Therefore, the long-term clinical effects, safety, and application scope of this technique need to be further evaluated with adequate statistical power in future clinical practice.", "Skin flap, musculocutaneous flap, and osteo-myocutaneous flap are conventional methods of postoperative reconstruction in patients with oral cavity and oropharyngeal cancers. However, older aged patients with oral cavity and oropharyngeal cancers tend to have comorbid conditions. Older aged patients with this complex medical condition have poor surgical tolerance and thus cannot undergo a lengthy and potentially risky surgical operation. Our combined reconstruction technique with submandibular gland flap and sternocleidomastoid myocutaneous flap has the advantages of being easier to perform, causing less surgical injury and only minor impact on contour, and greatly reducing operation time and surgical risk. This technique can facilitate the restoration of oral function and facial aesthetics and greatly improve the quality of life of older aged patients. Hence, this technique can provide a new and safe reconstruction option for older aged cancer patients with complex medical conditions. If we can verify at the molecular level the absence of intraglandular lymph nodes and that there is no lymphatic metastasis to the submandibular glands, this combined reconstruction technique may gain popularity." ]
[ null, "methods", null, null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusions" ]
[ "Submandibular gland flap", "Sternocleidomastoid musculocutaneous flap", "Oral cavity cancer", "Oropharyngeal cancer", "Reconstruction" ]
Background: A number of major organs are concentrated in the oral cavity and oropharynx. Surgical removal of oral cavity and oropharyngeal cancers can lead to facial injury or even disfigurement and decreased oral functions, including speech, mastication, and swallowing. Surgical reconstruction with skin flap, musculocutaneous flap, and/or osteo-myocutaneous flap enables oral cancer patients to achieve appropriate restoration of oral function and facial aesthetics and a greatly improved quality of life [1, 2]. With the gradual aging of society, the proportion of older aged patients with oral cavity and oropharyngeal cancers and comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease has been markedly increasing. Older aged patients with complex medical conditions may have poor surgical tolerance and be unable to undergo lengthy and risky operative procedures. Conventional reconstructive surgeries with skin, musculocutaneous, and osteo-myocutaneous flaps tend to be time-consuming and cause relatively large operative trauma, thus potentially increasing surgical risk. Moreover, the postoperative recovery of older aged patients, especially those with comorbid systemic disease, may be delayed. Therefore, there is an increasing need for a new reconstruction method to improve the tolerance and safety of reconstructive surgery [3]. Oral cavity and oropharyngeal cancers most often metastasize to neck lymph nodes, and neck lymphatic metastases are most frequently observed in levels I, II, and III [4]. The submandibular glands are located in level Ib, where they are surrounded by rich lymphatic tissues. However, the involvement of submandibular glands in oral cavity and oropharyngeal cancers is quite rare, especially in early oral cancers [5, 6]. Additionally, the existence of intraglandular lymph nodes within submandibular glands continues to be debated [7, 8]. Studies have demonstrated that preservation of the submandibular gland during neck dissection is oncologically safe in patients with early oral cancers unless the primary tumor or metastatic regional lymphadenopathy is adherent to the gland [9, 10]. It has also been reported that the submandibular gland can survive and maintain its integrity and function following surgical transfer and reimplantation [11, 12]. These studies suggest that the submandibular gland can be used in the surgical reconstruction of oral defects. Therefore, it is of clinical significance to explore further the feasibility of the submandibular gland flap for postoperative reconstruction in patients with early oral cavity and oropharyngeal cancers. Between January 2011 and May 2012, we performed reconstructive surgeries for oral defects using combined submandibular gland and sternocleidomastoid musculocutaneous flaps in eight older aged patients with oral cavity and oropharyngeal cancers at our facility. In this study, we conducted a retrospective analysis of these patients and report on the safety, practicality, and clinical effects of this new reconstruction method. Methods: Ethics statement The ethics committee of the Tumor Hospital of Ganzhou Review Board approved the study protocol (20101201), and the study was conducted in accordance with the principles of the Declaration of Helsinki regarding research involving human subjects. Each of the patients provided written informed consent to participate after the nature of the study had been explained to them. 
Inclusion and exclusion criteria Inclusion criteria: (1) over 65 years old; (2) pathologically confirmed diagnosis of oral cavity cancer or oropharyngeal cancer; (3) no cervical lymph node metastasis or no level I lymphatic metastasis; (4) tumor staging: < T3; (5) comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease; (6) Karnofsky score: ≥ 80; (7) expected survival: over 1 year; (8) voluntarily signed informed consent. Exclusion criteria: (1) bilateral cervical lymph node metastasis; (2) bilateral level I cervical lymph node metastasis; (3) diseased submandibular gland; (4) prior neck surgery or neck radiotherapy; (5) distant metastasis; (6) unable to tolerate surgical operation due to poor clinical condition. Clinical data Between January 2011 and May 2012, combined submandibular gland and sternocleidomastoid myocutaneous flap postoperative reconstruction was performed in eight patients over 65 years of age (seven men, one woman; age, 66 to 75 years (median, 69.6)) with oral cavity and oropharyngeal cancers at Ganzhou Tumor Hospital. Comorbidities included three cases of hypertension, two cases of chronic bronchitis, two cases of diabetes mellitus, and one case of history of cerebral infarction. 
Detailed clinical data are presented in Table 1. Table 1. Clinical data (case number: sex, age in years; site of primary tumor; tumor dimensions in cm; pathology; extent of defect in cm; comorbidities; procedure; repair method). Case 1: male, 68; lateral border of tongue; 4 × 3; squamous; 6 × 5; primary hypertension; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap. Case 2: female, 66; lateral border of tongue; 3 × 3; squamous; 5 × 5; diabetes mellitus; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap. Case 3: male, 75; lateral border of tongue; 3 × 2; squamous; 5 × 4; chronic bronchitis; tongue carcinoma resection plus total neck dissection; tongue defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap. Case 4: male, 71; root of tongue; 2 × 2; squamous; 4 × 4; primary hypertension; root-of-tongue carcinoma resection plus whole neck dissection; tongue-root defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap. Case 5: male, 67; root of tongue; 3 × 2; squamous; 5 × 4; cerebral infarction; root-of-tongue carcinoma resection plus whole neck dissection; tongue-root defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap. Case 6: male, 70; floor of mouth; 2 × 2; squamous; 4 × 4; primary hypertension; floor-of-mouth cancer resection plus whole neck dissection; floor-of-mouth mucosal defects repaired with submandibular gland flap, floor of mouth repaired with sternocleidomastoid muscle flap. Case 7: male, 78; gingiva; 3 × 3; squamous; 5 × 5; chronic bronchitis; mandible segmental resection plus total neck dissection; mandibular defects repaired with submandibular flap, floor of mouth repaired with sternocleidomastoid muscle flap. Case 8: male, 72; gingiva; 4 × 3; squamous; 6 × 5; diabetes mellitus; mandible segmental resection plus total neck dissection; mandibular defects repaired with submandibular flap, floor of mouth repaired with sternocleidomastoid muscle flap. Surgical procedure The surgeries were performed under general anesthesia. Ipsilateral primary tumor resection and total neck level I to V lymph node dissection were performed, including level Ia, ipsilateral floor of mouth, sublingual gland, and mandibular lingual-side periosteum. Local mandibular resection was performed in two patients with gingival carcinoma. Following the confirmation of no residual tumor and no cervical lymph node metastases by intraoperative frozen-section pathology, submandibular gland flap was performed to reconstruct the oral defect, and sternocleidomastoid myocutaneous flap was used to reconstruct the floor of the mouth. 
Design and preparation of the combined submandibular gland and sternocleidomastoid myocutaneous flap, and reconstruction: During routine neck dissection, a horizontal skin incision was first made 2 cm below the mandible and carried down through the subcutaneous tissue and platysma. The submandibular gland was then dissected free of the surrounding structures cranially and caudally, up to the horizontal branch of the mandible and down to the level of the hyoid bone. During preparation of the submandibular gland flap, the capsule of the gland was carefully protected. During level I neck dissection, all lymphatic and fatty tissue at levels I to V was dissected out, and frozen-section and paraffin-section pathological examinations for cancer metastases were subsequently conducted. The inferior border of the submandibular gland was released and separated from the digastric muscle. The proximal ends of the external maxillary artery and anterior facial vein were identified at the superior border of the deep surface of the posterior belly of the digastric muscle and were carefully protected. The posterior border of the submandibular gland was then separated, and the distal ends of the external maxillary artery and anterior facial vein were ligated without damaging the marginal mandibular branch of the facial nerve. The superior border of the submandibular gland was released and its duct severed. Removal of the primary tumor was performed in the routine manner. The released submandibular gland was then transferred to the site of the defect for reconstruction, with the proximal ends of the external maxillary artery and anterior facial vein serving as the pedicle (Figure 1). The clavicular end of the sternocleidomastoid was dissected to produce a sternocleidomastoid myocutaneous flap, and this end was then sutured to the anterior belly of the digastric muscle and the mylohyoid muscle to reconstruct the floor of the mouth (Figure 2).
Figure 1. Submandibular gland flap for reconstruction of root-of-tongue carcinoma, intraoperative view.
Figure 2. Sternocleidomastoid myocutaneous flap for reconstruction of the floor of the mouth, intraoperative view.
Radiotherapy: Postoperative radiotherapy was given to the two patients with 4 × 3 cm tumors using a linear accelerator (Siemens AG, Munich, Germany) delivering 6-MV x-rays. Patients were treated in a thermoplastic mask with isocenter irradiation, and the non-irradiated area was shielded with individualized low-melting-point lead shielding. Conventional fractionated radiotherapy was delivered at a dose of 60 Gy to the primary tumor site and a prophylactic dose of 44 Gy to the cervical lymph nodes, in 2-Gy fractions once daily, five times per week (the fraction numbers and overall treatment times implied by this schedule are worked out below).
Clinical observations: Detailed clinical observations included postoperative wound healing; complications; pathological examination of the submandibular, submental, and cervical lymph nodes; postoperative contours of the tongue, root of tongue, and floor of mouth; postoperative contour and occlusion of the mandible; restoration of breathing, phonation, swallowing, mouth opening, and occlusion; postoperative shoulder function; and relapse of the primary lesion and cervical lymph nodes.
Ethics statement: The ethics committee of the Tumor Hospital of Ganzhou Review Board approved the study protocol (20101201), and the study was conducted in accordance with the principles of the Declaration of Helsinki regarding research involving human subjects.
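As a quick arithmetic check of the radiotherapy schedule described above (a sketch based only on the stated doses and the 2-Gy, five-fractions-per-week scheme; the patients' actual treatment calendars were not reported):

```latex
% Fraction numbers and overall treatment times implied by the stated schedule
n_{\text{primary}} = \frac{60\ \text{Gy}}{2\ \text{Gy/fraction}} = 30\ \text{fractions},
\qquad T_{\text{primary}} \approx \frac{30\ \text{fractions}}{5\ \text{fractions/week}} = 6\ \text{weeks};
\qquad
n_{\text{nodes}} = \frac{44\ \text{Gy}}{2\ \text{Gy/fraction}} = 22\ \text{fractions},
\qquad T_{\text{nodes}} \approx \frac{22}{5} \approx 4.4\ \text{weeks}.
```

In other words, the primary-site course corresponds to roughly six weeks of daily treatment and the prophylactic nodal course to about four and a half weeks.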
Each of the patients provided written informed consent to participate after the nature of the study had been explained to them.
Inclusion and exclusion criteria: Inclusion criteria were: (1) age over 65 years; (2) pathologically confirmed diagnosis of oral cavity or oropharyngeal cancer; (3) no cervical lymph node metastasis, or no level I lymphatic metastasis; (4) tumor stage below T3; (5) comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease; (6) Karnofsky score ≥ 80; (7) expected survival over 1 year; and (8) voluntarily signed informed consent. Exclusion criteria were: (1) bilateral cervical lymph node metastasis; (2) bilateral level I cervical lymph node metastasis; (3) diseased submandibular gland; (4) prior neck surgery or neck radiotherapy; (5) distant metastasis; and (6) inability to tolerate surgery because of poor clinical condition.
Results
Postoperative conditions: No patients died from the surgery. All submandibular gland flaps survived and no complications were observed. Postoperatively, the submandibular glands developed well-mucosalized surfaces, and no significant pain from gland distension was noted. The operative incisions healed by first intention in all cases. In all patients, both intraoperative frozen sections and postoperative paraffin sections confirmed pathologically that there was no cancer metastasis to any of the 27 dissected level I lymph nodes. Pathological examination also confirmed no lymphatic metastasis in the remaining levels in all cases.
Appearance and function: After tongue reconstruction, tongue function recovered well and the contour was pleasing, with no significant impact on phonation. After reconstruction of the tongue root, there was no negative impact on breathing or swallowing and no significant swelling; tongue movement was normal, with clear phonation. After floor-of-mouth reconstruction, no swelling or significant impact on swallowing, tongue movement, or phonation was observed. Mandibular reconstruction resulted in a basically satisfactory appearance; the range of mouth opening was normal, although slight malocclusion was observed, and swallowing and phonation were both normal. No postoperative shoulder dysfunction was found, indicating that separation of the sternocleidomastoid from the clavicle had not noticeably altered shoulder function (Figures 3 and 4).
Figure 3. Submandibular gland flap for reconstruction of tongue cancer.
One week postoperatively, surface mucosalization of the submandibular gland is bright red. Submandibular gland flap for reconstruction of gingival cancer. Three months postoperatively, submandibular gland is essentially the same color as the oral mucosa. Follow-up: Postoperative follow-up began at the completion of treatment and ended 31 May 2013. No patient was lost to follow-up. During the follow-up period of 12 to 28 months, none of the patients developed local recurrence or lymphatic or distant metastases. Discussion: To date there has been no consensus about preservation of the submandibular gland during neck dissection in oral cavity and oropharyngeal cancers. Traditional surgical treatment includes excision of the submandibular gland and the surrounding lymphatic tissue in addition to removal of the primary lesion. Submandibular glands are frequently excised as part of neck dissection. With the development of functional surgery, an increasing number of investigations have demonstrated that cancer metastasis to the submandibular gland is quite rare and that it is feasible and oncologically safe to preserve the submandibular gland with no involvement of cancer metastases during neck dissection followed by close postoperative follow-up. Whether to preserve the submandibular gland depends on the following conditions: (1) whether there are intraglandular submandibular lymph nodes; (2) whether it is involved in the primary cancer; and (3) whether the vessels of the submandibular gland are left intact during level I neck dissection. The existence of intraglandular lymph nodes within the submandibular gland currently remains controversial [5, 6]. Because of their doubts about the existence of intraglandular lymph nodes, Rouviere and Tobies as well as DiNardo performed pathological studies to investigate intraglandular lymph nodes using human submandibular gland tissue sections from cadavers. In both studies, no intraglandular lymph nodes were identified in the submandibular gland samples [4, 6]. Other investigations have reported similar results [13, 14]. Even if there were intraglandular lymph nodes, the lymphatic drainage of intraglandular lymph nodes might not be truly identical to that of the lymph nodes outside the submandibular gland capsule because of the difference in anatomic structure between the submandibular gland and submandibular triangle. Moreover, whether lymph from surrounding lymphatic tissue can be directed into intraglandular lymph nodes is less clear and requires further study. Conversely, numerous studies have suggested that although metastatic lymph nodes rarely invade the gland, metastasis to level Ib occurs frequently in oral cavity cancer [15, 16]. Hence, although there is no universal consensus about the existence of intraglandular lymph nodes thus far, it is generally believed that they do not exist within the submandibular gland. Head and neck carcinoma, especially oral cancer, seldom metastasizes to the salivary glands. If it occurs, the parotid gland is more likely to be affected than the submandibular gland [17]. Although oral cancer most often metastasizes to neck lymph nodes and levels I, II, and III are the most frequently involved [4], the involvement of the submandibular gland in oral cancers is extremely rare [5, 6]. In a study investigating whether and how head and neck squamous carcinoma metastasizes to the submandibular gland, Spiegel et al. 
demonstrated that, because of the absence of intraglandular lymph nodes among 196 examined submandibular glands, in no case did a squamous cell carcinoma metastasize to the submandibular gland [8]. The authors found that the submandibular gland was involved only in cases in which the primary tumor was in close proximity to the gland, or when metastasis to level I of the neck had occurred by extension from a locally affected lymph node into the submandibular gland. Byeon et al. reported in 2009 that of 201 cases of oral cavity squamous cell carcinoma, only two (1%) had carcinomatous involvement in the submandibular gland through direct extension from a primary lesion and no submandibular glands showed pathological evidence of isolated metastasis or local extension of metastatic lymph nodes. Based on their study findings, the authors thought that it might be feasible and safe to preserve an oncologically sound submandibular gland during neck dissection in patients with early-stage oral cancer [18]. Other studies revealed similar results, demonstrating that submandibular gland metastasis from head and neck primary squamous cell carcinoma was extremely rare and that direct invasion as a primary mechanism of metastasis led to most of the metastases [6, 7, 9, 10]. These studies concluded that preservation of the submandibular gland might be oncologically safe and feasible unless there was evidence of direct invasion of the gland or close proximity of the cancer to it. Taken together, head and neck carcinoma, especially early-stage oral cancer, can be characterized by extremely rare metastasis to the submandibular glands. This metastasis profile provides a solid oncological basis for the use of the preserved submandibular gland as reconstruction material. Submandibular glands are located in level Ib, surrounded by rich lymphatic tissue. The important question is whether this preparatory procedure in creation of a submandibular gland flap is technically feasible; that is whether all lymph nodes in level Ib can be eradicated while leaving the submandibular gland intact for reconstruction during neck dissection. More recently, Dhiwakar et al. performed an anatomic pathological study to determine whether all lymph nodes in level Ib can be extirpated without removing the submandibular gland [19]. The authors reported that complete removal of lymph nodes in level Ib was achieved in all 30 neck dissections and that the submandibular gland and surgical bed contained no residual lymph nodes. They concluded that, in suitable cases, it was technically feasible to remove all level Ib lymph nodes and preserve the submandibular gland. The next question is whether the submandibular gland can survive and maintain its integrity and function following transplantation. Survival of the transplanted submandibular gland is crucial to this reconstruction method. It is clearly unwise to select a submandibular gland as a candidate for reconstruction material if it fails to survive following transplantation. To prevent xerostomia and protect the submandibular gland from radiation damage, Seikaly et al. performed surgical transfer of the submandibular gland to the submental space outside the proposed radiation field in patients with head and neck carcinoma and carried out long-term postoperative follow-up. 
The authors reported that the transferred submandibular glands survived and maintained salivary function for statistically significant prevention of xerostomia, and that there were no disease recurrences on the side of the transferred gland or in the submental space [11]. Al-Qahtani K et al. conducted a prospective clinical trial in which the submandibular gland was similarly transferred to the submental space. Their results also suggested that the submandibular gland can survive with its function preserved following surgical transplantation [20]. Therefore, based on these above-mentioned findings, we believe that using the submandibular gland flap to reconstruct oral defects is oncologically safe and feasible in patients with early oral cavity and oropharyngeal cancers. In this study, we preserved the ipsilateral submandibular gland during neck dissection and reconstructed oral defects using the transferred submandibular gland in eight patients. Surgeries were successful in all eight patients and there were no resulting deaths. All submandibular gland flaps survived with well-mucosalized surfaces. No complications were observed. The operative incisions healed by primary intention in all eight cases. Twenty-seven lymph nodes were dissected out of level I in all patients. Pathological examinations of intraoperative frozen sections and postoperative paraffin sections confirmed no cancer metastases to any of the 27 lymph nodes. No cancer metastasis was found in the remaining levels in any patient. No patient developed local relapse or lymphatic or distant metastasis during the follow-up period of 12 to 28 months. As a reconstruction material, the submandibular gland has several major advantages. First, its abundant blood supply and a consistent, bulky vascular pedicle make it difficult for the flap to develop necrosis following transplantation. Second, a submandibular gland flap is easy to obtain and prepare. Its preparation requires no special surgical skills and can be completed during neck dissection. A submandibular gland flap can be easily prepared for use upon completion of neck dissection. The key points of the operation are to use care to keep the capsule of the submandibular gland intact and to protect the proximal and distal ends of the external maxillary artery and anterior facial vein during preparation. Third, this method does not require vascular anastomosis. The relatively simple anatomic structure of the submandibular gland facilitates the preparation of a submandibular gland flap in less time than conventional methods. Additionally, neck dissection and flap preparation can be performed as a single-stage procedure, which simplifies the operation and reduces operative duration. Finally, the simplified procedure and shorter duration result in fewer traumas, lower complication rates, and, consequently, decreased surgical risk. Given that a submandibular gland flap cannot completely fill the oral defect because of its limited size, in this study we conducted a combined reconstruction using a submandibular gland flap for oral and oropharyngeal defects and a sternocleidomastoid myocutaneous flap for floor-of-mouth defects. Similar to a submandibular gland flap, a sternocleidomastoid myocutaneous flap is easy to obtain and prepare and does not require a complex surgical procedure. Therefore, this combined reconstructive technique can greatly minimize surgical trauma and operating time. 
These advantages suggest that this reconstructive technique may be the preferred treatment option in suitable cases, especially in older aged patients with complicated systemic disease. In the present study, all eight patients had comorbid cardiovascular disease, cerebrovascular disease, diabetes, or chronic respiratory disease, which made them poor candidates for lengthy and risky surgical procedures. We therefore performed reconstructive surgery using combined submandibular gland and sternocleidomastoid myocutaneous flaps, and our preliminary results were encouraging. Surgery was successful in all eight patients, with no surgical deaths and no complications. Compared with the conventional method we previously used, this combined reconstruction method shortened operative duration by about 30% on average. The cosmetic and functional outcomes were satisfactory in all eight patients. Reconstruction of the tongue resulted in good recovery of tongue function, a pleasing contour, and no significant impact on phonation. After reconstruction of the tongue root, no negative impact on breathing or swallowing and no significant swelling were observed; tongue movement was normal and phonation was clear. Floor-of-mouth reconstruction resulted in no swelling and no significant impact on swallowing, tongue movement, or phonation. Appearance after mandibular reconstruction was basically satisfactory; the range of mouth opening was normal, although there was slight malocclusion, and swallowing and phonation were both normal. No significant postoperative shoulder dysfunction was noted, suggesting that separation of the sternocleidomastoid from the clavicle did not greatly alter shoulder function. Appropriate restoration of oral function and facial aesthetics greatly improved the quality of life of these older aged patients. Despite its advantages, this reconstruction method has several disadvantages. First, safety is an underlying concern. To the best of our knowledge, no large multicenter clinical study, particularly a prospective study, has been conducted to prove that intraglandular lymph nodes do not exist within the submandibular glands or that the submandibular gland cannot be involved in oral cavity and oropharyngeal cancers through lymphatic metastasis. Thus, the submandibular gland should be used for reconstruction only in cases in which no neck or level I lymphatic metastases are confirmed pathologically, which limits the clinical application of this technique in a considerable number of patients. A second disadvantage is the limited length of the vascular pedicle. If the proximal ends of the anterior facial artery and vein are used as the vascular pedicle, the submandibular gland flap can extend only 2.0 to 7.0 cm away from the donor site because of the length limitation of the pedicle [21]. If the distal ends of the anterior facial artery and vein are used as the vascular pedicle, the reach of the flap can be improved to some extent, with blood supply to the transferred submandibular gland provided by reverse flow in the external maxillary artery and anterior facial vein. A third disadvantage is the reconstruction range: the maximal reconstruction range is equal to the maximal cross-section of the submandibular gland [22], so the submandibular gland flap cannot cover larger defects [15]. A final disadvantage concerns submandibular gland secretions. It has been reported that the denervated submandibular gland maintains long-term secretory function [13, 16].
In this reconstruction method, the transferred submandibular gland is denervated once it is completely detached from surrounding tissue during flap preparation, and about half of the submandibular gland is exposed in the oral cavity after completion of the reconstruction. Little is known about whether the exposure in the oral cavity has an impact on secretory function of the denervated gland. If the exposure has little or no impact and salivary function after denervation is preserved, the submandibular gland may develop swelling and pain after reconstructive surgery since the submandibular ducts have been severed during flap preparation. In this study, because of the limitations in lengths of vascular pedicles and ranges of reconstruction, we performed the reconstruction only for oral cancers in the posterior two-thirds of the tongue and both sides of the floor of mouth, gingival cancer adjacent to the molars, and oropharyngeal cancer close to the tongue. The maximal reconstruction range was 6 × 5 cm. We note several limitations to this study. The number of patients in this study was relatively small and the follow-up period relatively short, and thus far no prospective randomized comparative clinical study has been conducted to confirm the feasibility of this combined reconstruction technique in a large number of patients. Therefore, the long-term clinical effects, safety, and application scope of this technique need to be further evaluated with adequate statistical power in future clinical practice. Conclusions: Skin flap, musculocutaneous flap, and osteo-myocutaneous flap are conventional methods of postoperative reconstruction in patients with oral cavity and oropharyngeal cancers. However, older aged patients with oral cavity and oropharyngeal cancers tend to have comorbid conditions. Older aged patients with this complex medical condition have poor surgical tolerance and thus cannot undergo a lengthy and potentially risky surgical operation. Our combined reconstruction technique with submandibular gland flap and sternocleidomastoid myocutaneous flap has the advantages of being easier to perform, causing less surgical injury and only minor impact on contour, and greatly reducing operation time and surgical risk. This technique can facilitate the restoration of oral function and facial aesthetics and greatly improve the quality of life of older aged patients. Hence, this technique can provide a new and safe reconstruction option for older aged cancer patients with complex medical conditions. If we can verify at the molecular level the absence of intraglandular lymph nodes and that there is no lymphatic metastasis to the submandibular glands, this combined reconstruction technique may gain popularity.
Background: The growth of aging populations in an increasing number of countries has led to a concomitant increase in the incidence of chronic diseases. Accordingly, the proportion of older aged patients with oral cavity and oropharyngeal cancers and comorbidities has also increased. Thus, improvements must be made in the tolerance and safety of surgical procedures for these patients with complex medical conditions. In this study, we investigated combined submandibular gland flap and sternocleidomastoid musculocutaneous flap for postoperative reconstruction in older aged patients with oral cavity and oropharyngeal cancers in terms of surgical methods, safety, and clinical outcome. Methods: Between January 2011 and May 2012, 8 patients over the age of 65 years (7 men, 1 woman; aged 66 to 75 years (median, 69.6)) with oral cavity and oropharyngeal cancers underwent combined submandibular gland and sternocleidomastoid myocutaneous flaps for postoperative reconstruction at Ganzhou Tumor Hospital. All eight patients had comorbid cardiovascular, cerebrovascular, or chronic respiratory disease or diabetes. Clinical outcomes, complications, and tolerance to surgical treatment were observed. Results: Surgical treatment was successful in all eight patients. All submandibular gland flaps survived with well-mucosalized surfaces and with no complications. During the postoperative follow-up period of 12 to 28 months, no patient developed local recurrence or distant metastasis, and all had good recovery of function and local contour. Conclusions: This combined reconstruction technique enables appropriate restoration of oral function, facial aesthetics and improved quality of life. Further, this technique has several advantages: it is easier to perform, reduces operation time and surgical risk, causes less surgical injury, and has minor impact on contour. The technique provides a new and safe reconstruction option for older aged patients with oral cavity and oropharyngeal cancers.
Background: A number of major organs are concentrated in the oral cavity and oropharynx. Surgical removal of oral cavity and oropharyngeal cancers can lead to facial injury or even disfigurement and decreased oral functions, including speech, mastication, and swallowing. Surgical reconstruction with skin flap, musculocutaneous flap, and/or osteo-myocutaneous flap enables oral cancer patients to achieve appropriate restoration of oral function and facial aesthetics and a greatly improved quality of life [1, 2]. With the gradual aging of society, the proportion of older aged patients with oral cavity and oropharyngeal cancers and comorbid cardiovascular disease, diabetes, chronic respiratory disease, or cerebrovascular disease has been markedly increasing. Older aged patients with complex medical conditions may have poor surgical tolerance and be unable to undergo lengthy and risky operative procedures. Conventional reconstructive surgeries with skin, musculocutaneous, and osteo-myocutaneous flaps tend to be time-consuming and cause relatively large operative trauma, thus potentially increasing surgical risk. Moreover, the postoperative recovery of older aged patients, especially those with comorbid systemic disease, may be delayed. Therefore, there is an increasing need for a new reconstruction method to improve the tolerance and safety of reconstructive surgery [3]. Oral cavity and oropharyngeal cancers most often metastasize to neck lymph nodes, and neck lymphatic metastases are most frequently observed in levels I, II, and III [4]. The submandibular glands are located in level Ib, where they are surrounded by rich lymphatic tissues. However, the involvement of submandibular glands in oral cavity and oropharyngeal cancers is quite rare, especially in early oral cancers [5, 6]. Additionally, the existence of intraglandular lymph nodes within submandibular glands continues to be debated [7, 8]. Studies have demonstrated that preservation of the submandibular gland during neck dissection is oncologically safe in patients with early oral cancers unless the primary tumor or metastatic regional lymphadenopathy is adherent to the gland [9, 10]. It has also been reported that the submandibular gland can survive and maintain its integrity and function following surgical transfer and reimplantation [11, 12]. These studies suggest that the submandibular gland can be used in the surgical reconstruction of oral defects. Therefore, it is of clinical significance to explore further the feasibility of the submandibular gland flap for postoperative reconstruction in patients with early oral cavity and oropharyngeal cancers. Between January 2011 and May 2012, we performed reconstructive surgeries for oral defects using combined submandibular gland and sternocleidomastoid musculocutaneous flaps in eight older aged patients with oral cavity and oropharyngeal cancers at our facility. In this study, we conducted a retrospective analysis of these patients and report on the safety, practicality, and clinical effects of this new reconstruction method.
8,265
336
[ 507, 62, 154, 393, 101, 390, 107, 71, 91, 235, 51 ]
15
[ "submandibular", "gland", "submandibular gland", "flap", "reconstruction", "neck", "mouth", "sternocleidomastoid", "submandibular gland flap", "gland flap" ]
[ "reconstruction oral cancers", "flap reconstruct oral", "mouth surgeries performed", "oropharyngeal cancers older", "flap oral oropharyngeal" ]
[CONTENT] Submandibular gland flap | Sternocleidomastoid musculocutaneous flap | Oral cavity cancer | Oropharyngeal cancer | Reconstruction [SUMMARY]
[CONTENT] Submandibular gland flap | Sternocleidomastoid musculocutaneous flap | Oral cavity cancer | Oropharyngeal cancer | Reconstruction [SUMMARY]
[CONTENT] Submandibular gland flap | Sternocleidomastoid musculocutaneous flap | Oral cavity cancer | Oropharyngeal cancer | Reconstruction [SUMMARY]
[CONTENT] Submandibular gland flap | Sternocleidomastoid musculocutaneous flap | Oral cavity cancer | Oropharyngeal cancer | Reconstruction [SUMMARY]
[CONTENT] Submandibular gland flap | Sternocleidomastoid musculocutaneous flap | Oral cavity cancer | Oropharyngeal cancer | Reconstruction [SUMMARY]
[CONTENT] Submandibular gland flap | Sternocleidomastoid musculocutaneous flap | Oral cavity cancer | Oropharyngeal cancer | Reconstruction [SUMMARY]
[CONTENT] Aged | Carcinoma, Squamous Cell | Female | Follow-Up Studies | Humans | Male | Mouth Neoplasms | Myocutaneous Flap | Oropharyngeal Neoplasms | Postoperative Period | Prognosis | Quality of Life | Plastic Surgery Procedures | Retrospective Studies | Sternoclavicular Joint | Submandibular Gland | Surgical Flaps [SUMMARY]
[CONTENT] Aged | Carcinoma, Squamous Cell | Female | Follow-Up Studies | Humans | Male | Mouth Neoplasms | Myocutaneous Flap | Oropharyngeal Neoplasms | Postoperative Period | Prognosis | Quality of Life | Plastic Surgery Procedures | Retrospective Studies | Sternoclavicular Joint | Submandibular Gland | Surgical Flaps [SUMMARY]
[CONTENT] Aged | Carcinoma, Squamous Cell | Female | Follow-Up Studies | Humans | Male | Mouth Neoplasms | Myocutaneous Flap | Oropharyngeal Neoplasms | Postoperative Period | Prognosis | Quality of Life | Plastic Surgery Procedures | Retrospective Studies | Sternoclavicular Joint | Submandibular Gland | Surgical Flaps [SUMMARY]
[CONTENT] Aged | Carcinoma, Squamous Cell | Female | Follow-Up Studies | Humans | Male | Mouth Neoplasms | Myocutaneous Flap | Oropharyngeal Neoplasms | Postoperative Period | Prognosis | Quality of Life | Plastic Surgery Procedures | Retrospective Studies | Sternoclavicular Joint | Submandibular Gland | Surgical Flaps [SUMMARY]
[CONTENT] Aged | Carcinoma, Squamous Cell | Female | Follow-Up Studies | Humans | Male | Mouth Neoplasms | Myocutaneous Flap | Oropharyngeal Neoplasms | Postoperative Period | Prognosis | Quality of Life | Plastic Surgery Procedures | Retrospective Studies | Sternoclavicular Joint | Submandibular Gland | Surgical Flaps [SUMMARY]
[CONTENT] Aged | Carcinoma, Squamous Cell | Female | Follow-Up Studies | Humans | Male | Mouth Neoplasms | Myocutaneous Flap | Oropharyngeal Neoplasms | Postoperative Period | Prognosis | Quality of Life | Plastic Surgery Procedures | Retrospective Studies | Sternoclavicular Joint | Submandibular Gland | Surgical Flaps [SUMMARY]
[CONTENT] reconstruction oral cancers | flap reconstruct oral | mouth surgeries performed | oropharyngeal cancers older | flap oral oropharyngeal [SUMMARY]
[CONTENT] reconstruction oral cancers | flap reconstruct oral | mouth surgeries performed | oropharyngeal cancers older | flap oral oropharyngeal [SUMMARY]
[CONTENT] reconstruction oral cancers | flap reconstruct oral | mouth surgeries performed | oropharyngeal cancers older | flap oral oropharyngeal [SUMMARY]
[CONTENT] reconstruction oral cancers | flap reconstruct oral | mouth surgeries performed | oropharyngeal cancers older | flap oral oropharyngeal [SUMMARY]
[CONTENT] reconstruction oral cancers | flap reconstruct oral | mouth surgeries performed | oropharyngeal cancers older | flap oral oropharyngeal [SUMMARY]
[CONTENT] reconstruction oral cancers | flap reconstruct oral | mouth surgeries performed | oropharyngeal cancers older | flap oral oropharyngeal [SUMMARY]
[CONTENT] submandibular | gland | submandibular gland | flap | reconstruction | neck | mouth | sternocleidomastoid | submandibular gland flap | gland flap [SUMMARY]
[CONTENT] submandibular | gland | submandibular gland | flap | reconstruction | neck | mouth | sternocleidomastoid | submandibular gland flap | gland flap [SUMMARY]
[CONTENT] submandibular | gland | submandibular gland | flap | reconstruction | neck | mouth | sternocleidomastoid | submandibular gland flap | gland flap [SUMMARY]
[CONTENT] submandibular | gland | submandibular gland | flap | reconstruction | neck | mouth | sternocleidomastoid | submandibular gland flap | gland flap [SUMMARY]
[CONTENT] submandibular | gland | submandibular gland | flap | reconstruction | neck | mouth | sternocleidomastoid | submandibular gland flap | gland flap [SUMMARY]
[CONTENT] submandibular | gland | submandibular gland | flap | reconstruction | neck | mouth | sternocleidomastoid | submandibular gland flap | gland flap [SUMMARY]
[CONTENT] oral | cancers | cavity | oral cavity | surgical | oropharyngeal cancers | cavity oropharyngeal | cavity oropharyngeal cancers | oral cavity oropharyngeal | oral cavity oropharyngeal cancers [SUMMARY]
[CONTENT] repaired | muscle | flap | submandibular | mouth | gland | resection | submandibular gland | sternocleidomastoid | neck [SUMMARY]
[CONTENT] gland | reconstruction | submandibular | postoperatively | submandibular gland | tongue | observed | reconstruction tongue | gland flap reconstruction | flap reconstruction [SUMMARY]
[CONTENT] technique | older aged | aged | older | surgical | flap | older aged patients | aged patients | patients | reconstruction [SUMMARY]
[CONTENT] submandibular | gland | submandibular gland | flap | reconstruction | patients | lymph | oral | mouth | tongue [SUMMARY]
[CONTENT] submandibular | gland | submandibular gland | flap | reconstruction | patients | lymph | oral | mouth | tongue [SUMMARY]
[CONTENT] ||| ||| ||| sternocleidomastoid [SUMMARY]
[CONTENT] Between January 2011 | May 2012 | 8 | the age of 65 years | 7 | 1 | 66 to 75 years | 69.6 | sternocleidomastoid | Ganzhou Tumor Hospital ||| eight ||| [SUMMARY]
[CONTENT] eight ||| ||| 12 to [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| sternocleidomastoid ||| Between January 2011 | May 2012 | 8 | the age of 65 years | 7 | 1 | 66 to 75 years | 69.6 | sternocleidomastoid | Ganzhou Tumor Hospital ||| eight ||| ||| ||| eight ||| ||| 12 to ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| sternocleidomastoid ||| Between January 2011 | May 2012 | 8 | the age of 65 years | 7 | 1 | 66 to 75 years | 69.6 | sternocleidomastoid | Ganzhou Tumor Hospital ||| eight ||| ||| ||| eight ||| ||| 12 to ||| ||| ||| [SUMMARY]
Epidemiology of occult hepatitis B and C in Africa: A systematic review and meta-analysis.
36395668
Occult hepatitis B (OBI) and C (OCI) infections can lead to serious liver disease, including liver cirrhosis and even hepatocellular carcinoma (HCC). OBI and OCI also pose a significant problem because of their transmissibility. This study aimed to assess the overall prevalence of OBI and OCI on the African continent, a region highly endemic for classical hepatitis B and C viruses.
BACKGROUND
For this systematic review and meta-analysis, we searched PubMed, Web of Science, African Journal Online, and African Index Medicus for published studies on the prevalence of OBI and OCI in Africa. Study selection and data extraction were performed by at least two independent investigators. Heterogeneity (I²) was assessed using the χ² test on Cochran's Q statistic and the H parameter. Sources of heterogeneity were explored by subgroup analyses. This study was registered in PROSPERO under reference number CRD42021252772.
METHODS
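The heterogeneity measures named above (Cochran's Q, I², and H) are not defined in the abstract itself; as a point of reference, the standard formulation, which is presumably what the authors applied, is:

```latex
% Standard heterogeneity statistics for k studies with (transformed) prevalence
% estimates \hat{\theta}_i and inverse-variance weights w_i; \hat{\theta} is the
% fixed-effect pooled estimate. These are the conventional Higgins-Thompson
% definitions, not a quotation from the paper.
Q = \sum_{i=1}^{k} w_i \left(\hat{\theta}_i - \hat{\theta}\right)^2, \qquad
I^2 = \max\!\left(0,\ \frac{Q - (k-1)}{Q}\right) \times 100\%, \qquad
H^2 = \frac{Q}{k-1}.
```

Under the null hypothesis of homogeneity, Q approximately follows a χ² distribution with k − 1 degrees of freedom, which is the χ² test referred to in the methods.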
We obtained 157 prevalence estimates for this meta-analysis, from 134 studies on OBI prevalence, 23 studies on OCI prevalence, and a single study on the OBI case fatality rate. The overall estimate of OBI prevalence was 14.8% [95% CI = 12.2-17.7] among 18579 participants. The prevalence of seronegative OBI and seropositive OBI was 7.4% [95% CI = 3.8-11.8] and 20.0% [95% CI = 15.3-25.1], respectively. The overall estimate of OCI prevalence was 10.7% [95% CI = 6.6-15.4] among 2865 participants. The pooled prevalence of seronegative OCI was estimated at 10.7% [95% CI = 4.8-18.3] and that of seropositive OCI at 14.4% [95% CI = 5.2-22.1]. In subgroup analysis, patients with malignancies, chronic hepatitis C, or on hemodialysis had a higher OCI prevalence, while those with malignancies, liver disorders, or HIV infection registered the highest OBI prevalence.
RESULTS
This review shows a high prevalence of OBI and OCI in Africa, with variable prevalence between countries and population groups.
CONCLUSION
[ "Humans", "Carcinoma, Hepatocellular", "Liver Neoplasms", "Hepatitis B", "Liver Cirrhosis", "Africa" ]
7613883
Introduction
In the early 1980s, a new form of clinical hepatitis B virus (HBV) infection was described, corresponding to the presence of HBV DNA in the liver and/or serum of patients in whom hepatitis B surface antigen (HBsAg) is undetectable by the usual serological tests. This was called occult hepatitis B infection (OBI) [1–3]. Based on the HBV antibody profile, OBI can be divided into seropositive OBI (anti-HBc and/or anti-HBs positive) and seronegative OBI (anti-HBc and anti-HBs negative) [4]. A similar entity was described in 2005 by Castillo et al., who reported the ability of hepatitis C virus (HCV) to replicate in peripheral blood mononuclear cells (PBMCs) of patients in the absence of viral RNA in serum and of detectable anti-HCV antibodies, thereby defining occult HCV infection (OCI) [5,6]. Two types of OCI are recognized: seronegative OCI (anti-HCV antibody-negative and serum HCV RNA-negative) and seropositive OCI (anti-HCV antibody-positive and serum HCV RNA-negative) [7]. These occult forms of hepatitis B and C can lead to hepatic complications, including chronic liver infection and liver cirrhosis [8], and OBI is known to play a role in the development of hepatocellular carcinoma [9]. These entities raise multiple concerns, including the potential for transmission of this form of infection through blood transfusion, hemodialysis, or from mother to child [10]. OBI and OCI have been described in a variety of individuals, including hemodialysis patients, patients with a sustained virologic response, immunocompromised individuals, patients with abnormal liver function, and apparently healthy subjects. Multiple meta-analyses of the prevalence of OBI and OCI have shown great variability in estimates according to population category, region, type of test used, and the level of endemicity of the disease [9,11–17]. Two global meta-analyses with partial analyses for Africa reported the prevalence of OBI at 3.7% [15], of seronegative OCI at 9.6%, and of seropositive OCI at 13.3% [17]. The highest prevalence of OBI was in countries with high endemicity of classical hepatitis B, including Africa, with a prevalence of up to 35.6% in the general population in Uganda [15]. Similar to classical hepatitis C infection, which has its highest prevalence in Egypt, this meta-analysis showed that OCI had the highest prevalence in North Africa, represented only by Egypt [17]. Other meta-analyses have estimated the prevalence of OBI in Africa among people living with HIV at 11.2% [16] and in Sudan at 15.5% [12]. In a regional meta-analysis of the Middle East and Eastern Mediterranean countries, the prevalence of OCI was estimated at 10.0%, with the highest prevalence recorded in Egypt [14]. A high OBI prevalence of 34% has been reported in Western Europe and Northern America [18], and a lower OBI prevalence of 4% has been observed in Asia [19]. However, there has been no systematic synthesis of OBI and OCI data for Africa, a region highly endemic for classical HBV and HCV infections. In this meta-analysis, we compiled all available evidence on OBI and OCI in Africa, with differences across populations and regions. The results of this review may help ensure that OBI and OCI are considered in the WHO goal of eliminating viral hepatitis by 2030. They will also help to sound the alarm on the need to strengthen diagnostic capacity for OBI and OCI in Africa, which has limited access to molecular testing.
null
null
Results
Selection of studies As of 07/18/2022, 496 studies were analysed with no additional studies added from the manual search. After removing the duplicates (152), 344 articles were screened. The full text evaluation led to the elimination of 78 studies for several reasons, including lack of data on OBI and/or OCI prevalence or case fatality rate, or small sample size (S4 Table). In all, a total of 109 articles reporting the prevalence and/or case fatality rate of OBI and/or OCI in Africa were included in this synthesis (Fig. 1) [3,31–138].
Study characteristics Overall, we obtained 157 prevalence data points for this meta-analysis, including 133 studies for OBI prevalence, 23 studies for OCI prevalence, and a single study for the OBI case fatality rate (S5, S6, and S7 Tables). The OBI case fatality rate study reported a total of 6 deaths among 72 OBI and HIV infected patients in Botswana [108]. The studies were published between 2006 and 2022 overall, from 2006 to 2022 for OBI and from 2010 to 2022 for OCI. Participants were recruited from 1997 to 2021. The majority of studies were cross-sectional (84.7%) with non-probability sampling (88.5%). The study setting was mainly hospital-based (93.6%). The studies were carried out in 21 African countries, mainly in Egypt (44.6%) and South Africa (13.4%). The largest number of studies were from lower-middle-income countries (68.2%). Most of the studies recruited adults (61.8%). Of the included studies, 39 (24.8%) were carried out in HIV infected patients, 27 (16.6%) in patients with chronic hepatitis C infection, 19 (12.1%) in blood donors, and 19 (12.1%) in hemodialysis patients. The diagnosis of OCI was performed using either conventional RT-PCR (5.7%) or real-time RT-PCR (8.3%). OBI diagnosis was made using conventional PCR (46.5%) or real-time PCR (39.5%). The type of sample used was blood for OBI (100%) and peripheral blood mononuclear cells (PBMCs) for OCI (100%). Most of the prevalence data were at moderate risk of bias (59.2%).
The prevalence of occult C infection In total, the twenty-three studies reporting OCI prevalence data in Africa were conducted only in Egypt. The pooled OCI prevalence was estimated to be 10.7% [95% CI = 6.6-15.4] on a sample of 2865 participants with substantial heterogeneity (I2 = 90.7% [95% CI = 87.4-93.2], p < 0.001) (Table 1, Figs. 2 and 3). Two studies presented the prevalence of seropositive and/or seronegative OCI without distinction, with an estimate of 5.8% [95% CI = 0.8-13.9] in a total of 100 participants. A total of 14 studies reported seronegative OCI prevalence; the pooled seronegative OCI prevalence was estimated to be 10.7% [95% CI = 4.8-18.3] in a sample of 908 participants. A total of 7 studies reported seropositive OCI prevalence; the overall seropositive OCI prevalence was estimated to be 14.4% [95% CI = 5.2-22.1], with a total of 1857 participants.
The prevalence of occult B infection A total of 133 studies reporting prevalence data on OBI were conducted in 5 UNSD regions of Africa: Central Africa, Eastern Africa, West Africa, Northern Africa, and Southern Africa (Fig. 2). The OBI prevalence was estimated at 14.8% [95% CI = 12.2-17.7] in a sample of 18579 participants with significant heterogeneity (I2 = 96.2% [95% CI = 95.8-96.5], p < 0.0001) (Table 1, Figs. 2, 4, and S1 Fig). The pooled prevalence of seropositive OBI was estimated at 20.0% [95% CI = 15.3-25.1] among 6873 participants. Prevalence data on seronegative OBI were obtained from 32 studies and the prevalence was estimated at 7.4% [95% CI = 3.8-11.8] with 2896 participants. A total of 44 studies reported on seropositive and/or seronegative OBI, with an estimated prevalence of 14.3% [95% CI = 10.1-19.2] in 8810 participants. Few studies (only 9) provided data on the variation of seropositive or seronegative OBI with anti-HBs. The distribution of OBI prevalence according to anti-HBs was as follows: 17.32% [4.37; 35.69] for anti-HBc negative/anti-HBs negative, 30.95% [17.10; 46.62] for anti-HBc positive/anti-HBs negative, 27.41% [8.89; 50.70] for anti-HBc positive/anti-HBs positive, and 0% for anti-HBc negative/anti-HBs positive (S3 Fig).
Subgroup analyses Occult hepatitis C infection The OCI prevalence was statistically different (p = 0.005) according to the population categories, with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), or diagnostic method (p = 0.067). Occult hepatitis B infection Variation of OBI prevalence was statistically significant by country (p < 0.001), category of population (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalence rates were in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). Patients with malignancies, patients with liver disorders, and HIV positive patients were the three population categories with the highest OBI prevalence, at 41.9%, 33.1%, and 17.0% respectively. The seropositive OBI prevalence (20.1%) was significantly higher than that of seronegative OBI (8.0%). The OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), or diagnostic method (p = 0.659).
Publication bias Egger’s tests were significant for the OCI prevalence (P < 0.001) and the OBI prevalence in Africa (P = 0.014), suggesting the presence of publication bias. Funnel plots confirmed the results of publication bias obtained by Egger’s test (S2 and S4 Figs).
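The pooled estimates and heterogeneity statistics above come from a DerSimonian–Laird random-effects model (described in the Statistical analysis text further below, which names R 4.1.0 as the software). A minimal sketch of how such a pooled prevalence, its 95% CI, and I2 might be reproduced, assuming the `meta` package (not named in the article) and hypothetical study-level counts:

```r
# Minimal sketch, not the authors' script: pooling study-level prevalence
# with a DerSimonian-Laird random-effects model, as described in the record.
# The `meta` package and the event/n values are assumptions for illustration
# only; they are not the studies included in the review.
library(meta)

events <- c(12, 30, 7, 45)      # hypothetical OBI-positive counts per study
totals <- c(100, 250, 80, 400)  # hypothetical HBsAg-negative participants tested

m <- metaprop(event = events, n = totals,
              sm = "PLOGIT",       # pool logit-transformed proportions
              method = "Inverse",  # inverse-variance weighting
              method.tau = "DL")   # DerSimonian-Laird between-study variance

summary(m)  # pooled prevalence, 95% CI, I^2 and the Q-test p-value
forest(m)   # forest plot analogous to the figures cited in the text
```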
Conclusion
This review presents a summary of the prevalence of OBI and OCI in Africa. It shows that, despite great variability in prevalence by population category and geographic region, the prevalence of OBI and OCI in Africa is generally high, and that patients with malignancies, hemodialysis patients, and HIV-infected patients are groups at high risk for these infections. This highlights the need for more appropriate OBI and OCI screening programs that target those at high risk of infection. In addition, the lack of data in several African countries shows the need for researchers to investigate this field, particularly in countries with high endemicity for HBV/HCV.
[ "Registration", "Search strategy", "Inclusion and exclusion criteria", "Study selection and data extraction process", "Quality assessment", "Statistical analysis", "Selection of studies", "Study characteristics", "The prevalence of occult C infection", "The prevalence of occult B infection", "Subgroup analyses", "Occult hepatitis C infection", "Occult hepatitis B infection", "Publication bias" ]
[ "This systematic review was conducted in full compliance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta - Analyses) guidelines (S1 Table) [20]. The protocol of this systematic review was developed and registered on PROSPERO (International prospective register of systematic reviews) on June 03, 2021 under the digital identifier: https://www.crd.york.ac.uk/pros-pero/display_record.php? ID = CRD42021252772.", "In May 2021, an electronic literature search was performed and the following databases were used: PubMed, Web of Science, African Journal Online and African Index Medicus. A second update search was performed in July 2022 to search for additional articles. We used a combination of the keywords covering OBI, OCI, the names of African countries and regions as well as the boolean operators OR and AND as illustrated in detail in S2 Table. This search strategy was also adapted to identify relevant articles from other databases. To complement the bibliographic database search and identify potential additional data sources, we reviewed the reference list of all relevant articles. All search results were managed using Endnote X9 credential management software, and all duplicates were removed using this same software.", "We considered published studies, without any restrictions in time, population category or detection assay. We classified populations for OBI as: 1) HBsAg negative 2) HBsAg negative and Anti-HBc positive, and 3) HBsAg negative and Anti-HBc negative [21,22], and for OCI as 1) seronegative OCI (anti HCV antibody-negative and serum HCV RNA-negative), 2) seropositive OCI (anti HCV antibody-positive and serum HCV RNA-negative), and 3) seropositive and/or seronegative OCI [7]. Only reports written in English and/or French were included. Exclusion criteria included all of the following: case reports, reviews and articles for which: studies were conducted outside Africa, sample size ≤ 10 participants, no baseline data for the longitudinal study, no data on OBI and/or OCI prevalence or case fatality rate, and no data on number of negative HBsAg tested.", "Duplicates identified in the full list of studies were removed. The titles and abstracts of articles retrieved from the electronic literature search were independently reviewed by four researchers, and the full texts of potentially eligible articles were obtained and further assessed for final inclusion. Articles were selected based on the availability of data needed to calculate the prevalence of OBI and/or OCI in the study. Data from the included studies were extracted using a Google Form by 18 of the study authors and verified by SK. Data extracted were first author name, year of publication, design of study, WHO region, UNSD region, country, country’s income level, sampling method, time of data collection, period of study, age of study participants, recruiting framework, population categories, OBI and/or OCI diagnostic method, and prevalence of OBI and/or OCI in Africa. We considered as the high-risk groups of OBI: HIV-positive patients, patients with chronic liver disease, patients on hemodialysis, patients with hematological disorders, patients with malignancies, organ recipients, healthcare workers, patients who inject drugs and men who have sex with men [12,19,23]. 
We grouped apparently healthy individuals, blood donors, the general population, and pregnant women as the low-risk group for OBI.\nDisagreements observed during study selection and data extraction were resolved by discussion and consensus.", "The tool developed by Hoy et al. for cross-sectional studies was used to assess the methodological quality of the included studies (S3 Table) [24]. Discussion and consensus were used to resolve disagreements.", "Study-specific estimates were pooled using DerSimonian and Laird’s random-effects model meta-analysis [25]. This model provided the overall prevalence, the 95% confidence interval and the prediction interval which informs about the values of future studies. Heterogeneity was assessed by the Cochrane statistical test, and the p value of the Q test (< 0.05) was used to indicate significant heterogeneity and quantified by I2 values, assuming I2 values of 25%, 50% and 75% represent respectively low, moderate and high heterogeneity[26,27]. Publication bias was assessed using Egger’s test and funnel plot [28]. Subgroup analyses were performed based on study design, sampling approach, setting, timing of sample collection, countries, WHO region, UNSD region, country income level, age range, population categories, and OBI and/or OCI diagnostic assays. We considered p values < 0.05, to be statistically significant. R software version 4.1.0 was used to perform the analyses[29,30].", "As of 07/18/2022, 496 studies were analysed with no additional studies added from the manual search. After removing the duplicates (152), 344 articles were screened. The full text evaluation led to the elimination of 78 studies for several reasons, including lack of data on OBI and/or OCI prevalence or case fatality rate, or small sample size (S4 Table). In all, a total of 109 articles reporting the prevalence and/or case fatality rate of OBI and/or OCI in Africa were included in this synthesis (Fig. 1) [3,31–138].", "Overall, we obtained 157 prevalence data for this meta-analysis, including 133 studies for OBI prevalence; 23 studies for OCI prevalence, and a single study for OBI case fatality rate (S5, S6, and S7 Tables). The OBI case fatality rate study reported a total of 6 deaths among 72 OBI and HIV infected patients in Botswana [108]. The studies were published between 2006 and 2022 in general and respectively from 2006 to 2022 and 2010-2022 for OBI and OCI. Participants were registered from 1997 to 2021. The majority of studies were cross-sectional (84.7%) with non-probability sampling (88.5%). The study setting was mainly in the hospital (93.6%). The studies were carried out in 21 African countries, mainly in Egypt (44.6%) and South Africa (13.4%). The largest number of studies were from lower-middle-income countries (68.2%). Most of the studies recruited adults (61.8%). Of the included studies, 39 (24.8%) were carried out in HIV infected patients, 27 (16.6%) in patients with chronic hepatitis C infection, 19 (12.1%) in blood donors, and 19 (12.1%) in hemodialysis patients. The diagnosis of OCI was either performed using conventional RT-PCR (5.7%) or real-time RT-PCR (8.3%). OBI diagnosis was made using conventional PCR (46.5%) or real-time PCR (39.5%). The type of sample used was blood for OBI (100%) and peripheral blood mononuclear cells (PBMCs) for OCI (100%). Most of the prevalence data were at moderate risk of bias (59.2%).", "In total, the twenty-three studies reporting OCI prevalence data in Africa were conducted only in Egypt. 
The pooled OCI prevalence was estimated to be 10.7% [95% CI = 6.6-15.4] on a sample of 2865 participants with substantial heterogeneity (I2 = 90.7% [95% CI = 87.4-93.2], p < 0.001) (Table 1, Figs. 2 and 3). Two studies presented the prevalence of seropositive and/or seronegative OCI without distinction, with an estimate of 5.8% [95% CI = 0.8-13.9] in a total of 100 participants. There was a total of 14 studies reporting seronegative OCI prevalence. The pooled seronegative OCI prevalence was estimated to be 10.7% [95% CI = 4.8-18.3] in a sample of 908 participants. There was a total of 7 studies reporting seropositive OCI prevalence. The overall seropositive OCI prevalence was estimated to be 14.4% [95% CI = 5.2-22.1], with a total of 1857 participants.", "A total of 133 studies reporting prevalence data on OBI were conducted in 5 UNSD regions of Africa: Central Africa, Eastern Africa, West Africa, Northern Africa, and Southern Africa (Fig. 2). The OBI prevalence was estimated at 14.8% [95% CI = 12.2-17.7] in a sample of 18579 participants with significant heterogeneity (I2 = 96.2% [95% CI = 95.8-96.5], p < 0.0001) (Table 1, Figs. 2, 4, and S1 Fig). The pooled prevalence of seropositive OBI was estimated at 20.0% [95% CI = 15.3-25.1] among 6873 participants. Prevalence data on seronegative OBI were obtained from 32 studies and the prevalence was estimated at 7.4% [95% CI = 3.8-11.8] with 2896 participants. A total of 44 studies reported on seropositive and/or seronegative OBI with an estimated prevalence of 14.3% [95% CI = 10.1-19.2] in 8810 participants. Few studies provided data on the variation of seropositive or seronegative OBI with anti-HBs (only 9 studies). The distribution of OBI prevalence according to anti-HBs was as follows: 17.32% [4.37; 35.69] for anti-HBc negative/anti-HBs negative, 30.95% [17.10; 46.62] for anti-HBc positive/anti-HBs negative, 27.41% [8.89; 50.70] for anti-HBc positive/anti-HBs positive, and 0% for anti-HBc negative/anti-HBs positive (S3 Fig).", "Occult hepatitis C infection The OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067).\nOccult hepatitis B infection Variation of OBI prevalence was statistically significant by country (p < 0.001), categories of population (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalence rates were in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). Patients with malignancies, patients with liver disorders, and HIV positive patients were the three population categories with the highest OBI prevalence, 41.9%, 33.1%, and 17.0% respectively. The seropositive OBI prevalence (20.1%) was significantly higher than that of seronegative OBI (8.0%). The OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), and diagnostic method (p = 0.659).\nPublication bias Egger’s tests were significant for the OCI prevalence (P < 0.001) and OBI prevalence in Africa (P = 0.014), suggesting the presence of a publication bias. Funnel plots confirmed the results of publication bias obtained by Egger’s test (S2 and S4 Figs).", "The OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067).", "Variation of OBI prevalence was statistically significant by country (p < 0.001), categories of population (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalence rates were in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). Patients with malignancies, patients with liver disorders, and HIV positive patients were the three population categories with the highest OBI prevalence, 41.9%, 33.1%, and 17.0% respectively. The seropositive OBI prevalence (20.1%) was significantly higher than that of seronegative OBI (8.0%). The OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), and diagnostic method (p = 0.659).", "Egger’s tests were significant for the OCI prevalence (P < 0.001) and OBI prevalence in Africa (P = 0.014), suggesting the presence of a publication bias. 
Funnel plots confirmed the results of publication bias obtained by Egger’s test (S2 and S4 Figs)." ]
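The Statistical analysis text above specifies Egger's test and a funnel plot for the publication-bias assessment. Continuing the earlier sketch (same assumptions, same hypothetical `m` object; not the authors' script), these checks might look as follows in R:

```r
# Sketch of the publication-bias checks named in the Statistical analysis text,
# assuming the metaprop object `m` from the previous example.
library(meta)

metabias(m, method.bias = "linreg", k.min = 4)  # Egger's regression test
                                                # (k.min lowered only because the
                                                #  toy example has few studies)
funnel(m)                                       # funnel plot of study estimates
```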
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Registration", "Search strategy", "Inclusion and exclusion criteria", "Study selection and data extraction process", "Quality assessment", "Statistical analysis", "Results", "Selection of studies", "Study characteristics", "The prevalence of occult C infection", "The prevalence of occult B infection", "Subgroup analyses", "Occult hepatitis C infection", "Occult hepatitis B infection", "Publication bias", "Discussion", "Conclusion", "Supplementary Material" ]
[ "In the early 1980 s, a new form of clinical hepatitis B virus (HBV) infection was described, corresponding to the presence of HBV DNA in the liver and/or serum of patients in whom the HBs antigen (Ag) is undetectable by the usual serology tests. This was called occult hepatitis B (OBI) [1–3]. Based on the antibody profile of HBV, OBI can be distinguished into seropositive-OBI (anti-HBc and/or anti-HBs positive) and seronegative-OBI (anti-HBc and anti-HBs negative)[4]. A similar entity was studied in 2005, by Castillo et al., on the ability of hepatitis C virus (HCV) to replicate in peripheral blood mono-nuclear cells (PBMCs) of patients in the absence of viral RNA in serum and detectable anti-HCV antibodies, thus describing an occult HCV infection[5,6]. Two types of OCI are recognized: seronegative OCI (anti HCV antibody-negative and serum HCV RNA-negative) and seropositive OCI (anti HCV antibody-positive and serum HCV RNA-negative)[7]. These occult forms of hepatitis B and C can lead to hepatic crises including cases of chronic liver infection and liver cirrhosis [8]. It is known that OBI plays a role in the development of hepatocellular carcinoma [9]. These entities highlight multiple concerns, including the potential for transmission of this form of infection through blood transfusion, hemodialysis, or from mother to child [10]. The OBI and OCI have been described in a variety of individuals, including hemodialysis patients, patients with sustained virologic response, immunocompromised individuals, patients with abnormal liver function, and apparently healthy subjects. Multiple meta-analyses on the prevalence of OBI and OCI have shown great variability in estimates according to population categories, regions, type of tests used, and the level of endemicity of the disease [9,11–17]. Two global meta-analyses with partial analyses for Africa reported the prevalence of OBI at 3.7% [15], seronegative OCI at 9.6% and seropositive OCI at 13.3% [17]. The highest prevalence of OBI was in countries with high endemicity of classical hepatitis B, including Africa, with prevalence of up to 35.6% in the general population in Uganda [15]. Similar to classical hepatitis C infection, which has the highest prevalence in Egypt, this meta-analysis showed that OCI had the highest prevalence in North Africa, represented only by Egypt [17]. Other meta-analyses have estimated the prevalence of OBI in Africa among people living with HIV at 11.2% [16] and in Sudan at 15.5% [12]. In a regional meta-analysis in the Middle East and Eastern Mediterranean countries the prevalence of OCI was estimated at 10.0% with the highest prevalence recorded in Egypt [14]. Estimates of a high OBI prevalence of 34% was reported in Western Europe and Northern America [18] and a lower OBI prevalence of 4% was observed in Asia [19]. Thus, there are no systematic syntheses for OBI and OCI for Africa which is a highly endemic region for classical HBV and HCV infections. In this meta-analysis, we compiled all available evidence for OBI and OCI in Africa with differences across populations and regions. The results of this review may help consider OBI and OCI in the WHO goal of eliminating viral hepatitis by 2030. 
It will also help to sound the alarm on the need to strengthen the diagnostic capacities of OBI and OCI in Africa which has limited access to molecular testing.", "Registration This systematic review was conducted in full compliance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta - Analyses) guidelines (S1 Table) [20]. The protocol of this systematic review was developed and registered on PROSPERO (International prospective register of systematic reviews) on June 03, 2021 under the digital identifier: https://www.crd.york.ac.uk/pros-pero/display_record.php? ID = CRD42021252772.\nThis systematic review was conducted in full compliance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta - Analyses) guidelines (S1 Table) [20]. The protocol of this systematic review was developed and registered on PROSPERO (International prospective register of systematic reviews) on June 03, 2021 under the digital identifier: https://www.crd.york.ac.uk/pros-pero/display_record.php? ID = CRD42021252772.\nSearch strategy In May 2021, an electronic literature search was performed and the following databases were used: PubMed, Web of Science, African Journal Online and African Index Medicus. A second update search was performed in July 2022 to search for additional articles. We used a combination of the keywords covering OBI, OCI, the names of African countries and regions as well as the boolean operators OR and AND as illustrated in detail in S2 Table. This search strategy was also adapted to identify relevant articles from other databases. To complement the bibliographic database search and identify potential additional data sources, we reviewed the reference list of all relevant articles. All search results were managed using Endnote X9 credential management software, and all duplicates were removed using this same software.\nIn May 2021, an electronic literature search was performed and the following databases were used: PubMed, Web of Science, African Journal Online and African Index Medicus. A second update search was performed in July 2022 to search for additional articles. We used a combination of the keywords covering OBI, OCI, the names of African countries and regions as well as the boolean operators OR and AND as illustrated in detail in S2 Table. This search strategy was also adapted to identify relevant articles from other databases. To complement the bibliographic database search and identify potential additional data sources, we reviewed the reference list of all relevant articles. All search results were managed using Endnote X9 credential management software, and all duplicates were removed using this same software.\nInclusion and exclusion criteria We considered published studies, without any restrictions in time, population category or detection assay. We classified populations for OBI as: 1) HBsAg negative 2) HBsAg negative and Anti-HBc positive, and 3) HBsAg negative and Anti-HBc negative [21,22], and for OCI as 1) seronegative OCI (anti HCV antibody-negative and serum HCV RNA-negative), 2) seropositive OCI (anti HCV antibody-positive and serum HCV RNA-negative), and 3) seropositive and/or seronegative OCI [7]. Only reports written in English and/or French were included. 
Exclusion criteria included all of the following: case reports, reviews and articles for which: studies were conducted outside Africa, sample size ≤ 10 participants, no baseline data for the longitudinal study, no data on OBI and/or OCI prevalence or case fatality rate, and no data on number of negative HBsAg tested.\nWe considered published studies, without any restrictions in time, population category or detection assay. We classified populations for OBI as: 1) HBsAg negative 2) HBsAg negative and Anti-HBc positive, and 3) HBsAg negative and Anti-HBc negative [21,22], and for OCI as 1) seronegative OCI (anti HCV antibody-negative and serum HCV RNA-negative), 2) seropositive OCI (anti HCV antibody-positive and serum HCV RNA-negative), and 3) seropositive and/or seronegative OCI [7]. Only reports written in English and/or French were included. Exclusion criteria included all of the following: case reports, reviews and articles for which: studies were conducted outside Africa, sample size ≤ 10 participants, no baseline data for the longitudinal study, no data on OBI and/or OCI prevalence or case fatality rate, and no data on number of negative HBsAg tested.\nStudy selection and data extraction process Duplicates identified in the full list of studies were removed. The titles and abstracts of articles retrieved from the electronic literature search were independently reviewed by four researchers, and the full texts of potentially eligible articles were obtained and further assessed for final inclusion. Articles were selected based on the availability of data needed to calculate the prevalence of OBI and/or OCI in the study. Data from the included studies were extracted using a Google Form by 18 of the study authors and verified by SK. Data extracted were first author name, year of publication, design of study, WHO region, UNSD region, country, country’s income level, sampling method, time of data collection, period of study, age of study participants, recruiting framework, population categories, OBI and/or OCI diagnostic method, and prevalence of OBI and/or OCI in Africa. We considered as the high-risk groups of OBI: HIV-positive patients, patients with chronic liver disease, patients on hemodialysis, patients with hematological disorders, patients with malignancies, organ recipients, healthcare workers, patients who inject drugs and men who have sex with men [12,19,23]. We grouped apparently healthy individuals, blood donors, the general population, and pregnant women as the low-risk group for OBI.\nDisagreements observed during study selection and data extraction were resolved by discussion and consensus.\nDuplicates identified in the full list of studies were removed. The titles and abstracts of articles retrieved from the electronic literature search were independently reviewed by four researchers, and the full texts of potentially eligible articles were obtained and further assessed for final inclusion. Articles were selected based on the availability of data needed to calculate the prevalence of OBI and/or OCI in the study. Data from the included studies were extracted using a Google Form by 18 of the study authors and verified by SK. Data extracted were first author name, year of publication, design of study, WHO region, UNSD region, country, country’s income level, sampling method, time of data collection, period of study, age of study participants, recruiting framework, population categories, OBI and/or OCI diagnostic method, and prevalence of OBI and/or OCI in Africa. 
We considered as the high-risk groups of OBI: HIV-positive patients, patients with chronic liver disease, patients on hemodialysis, patients with hematological disorders, patients with malignancies, organ recipients, healthcare workers, patients who inject drugs and men who have sex with men [12,19,23]. We grouped apparently healthy individuals, blood donors, the general population, and pregnant women as the low-risk group for OBI.\nDisagreements observed during study selection and data extraction were resolved by discussion and consensus.\nQuality assessment The tool developed by Hoy et al. for cross-sectional studies was used to assess the methodological quality of the included studies (S3 Table) [24]. Discussion and consensus were used to resolve disagreements.\nThe tool developed by Hoy et al. for cross-sectional studies was used to assess the methodological quality of the included studies (S3 Table) [24]. Discussion and consensus were used to resolve disagreements.\nStatistical analysis Study-specific estimates were pooled using DerSimonian and Laird’s random-effects model meta-analysis [25]. This model provided the overall prevalence, the 95% confidence interval and the prediction interval which informs about the values of future studies. Heterogeneity was assessed by the Cochrane statistical test, and the p value of the Q test (< 0.05) was used to indicate significant heterogeneity and quantified by I2 values, assuming I2 values of 25%, 50% and 75% represent respectively low, moderate and high heterogeneity[26,27]. Publication bias was assessed using Egger’s test and funnel plot [28]. Subgroup analyses were performed based on study design, sampling approach, setting, timing of sample collection, countries, WHO region, UNSD region, country income level, age range, population categories, and OBI and/or OCI diagnostic assays. We considered p values < 0.05, to be statistically significant. R software version 4.1.0 was used to perform the analyses[29,30].\nStudy-specific estimates were pooled using DerSimonian and Laird’s random-effects model meta-analysis [25]. This model provided the overall prevalence, the 95% confidence interval and the prediction interval which informs about the values of future studies. Heterogeneity was assessed by the Cochrane statistical test, and the p value of the Q test (< 0.05) was used to indicate significant heterogeneity and quantified by I2 values, assuming I2 values of 25%, 50% and 75% represent respectively low, moderate and high heterogeneity[26,27]. Publication bias was assessed using Egger’s test and funnel plot [28]. Subgroup analyses were performed based on study design, sampling approach, setting, timing of sample collection, countries, WHO region, UNSD region, country income level, age range, population categories, and OBI and/or OCI diagnostic assays. We considered p values < 0.05, to be statistically significant. R software version 4.1.0 was used to perform the analyses[29,30].", "This systematic review was conducted in full compliance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta - Analyses) guidelines (S1 Table) [20]. The protocol of this systematic review was developed and registered on PROSPERO (International prospective register of systematic reviews) on June 03, 2021 under the digital identifier: https://www.crd.york.ac.uk/pros-pero/display_record.php? 
ID = CRD42021252772.", "In May 2021, an electronic literature search was performed and the following databases were used: PubMed, Web of Science, African Journal Online and African Index Medicus. A second update search was performed in July 2022 to search for additional articles. We used a combination of the keywords covering OBI, OCI, the names of African countries and regions as well as the boolean operators OR and AND as illustrated in detail in S2 Table. This search strategy was also adapted to identify relevant articles from other databases. To complement the bibliographic database search and identify potential additional data sources, we reviewed the reference list of all relevant articles. All search results were managed using Endnote X9 credential management software, and all duplicates were removed using this same software.", "We considered published studies, without any restrictions in time, population category or detection assay. We classified populations for OBI as: 1) HBsAg negative 2) HBsAg negative and Anti-HBc positive, and 3) HBsAg negative and Anti-HBc negative [21,22], and for OCI as 1) seronegative OCI (anti HCV antibody-negative and serum HCV RNA-negative), 2) seropositive OCI (anti HCV antibody-positive and serum HCV RNA-negative), and 3) seropositive and/or seronegative OCI [7]. Only reports written in English and/or French were included. Exclusion criteria included all of the following: case reports, reviews and articles for which: studies were conducted outside Africa, sample size ≤ 10 participants, no baseline data for the longitudinal study, no data on OBI and/or OCI prevalence or case fatality rate, and no data on number of negative HBsAg tested.", "Duplicates identified in the full list of studies were removed. The titles and abstracts of articles retrieved from the electronic literature search were independently reviewed by four researchers, and the full texts of potentially eligible articles were obtained and further assessed for final inclusion. Articles were selected based on the availability of data needed to calculate the prevalence of OBI and/or OCI in the study. Data from the included studies were extracted using a Google Form by 18 of the study authors and verified by SK. Data extracted were first author name, year of publication, design of study, WHO region, UNSD region, country, country’s income level, sampling method, time of data collection, period of study, age of study participants, recruiting framework, population categories, OBI and/or OCI diagnostic method, and prevalence of OBI and/or OCI in Africa. We considered as the high-risk groups of OBI: HIV-positive patients, patients with chronic liver disease, patients on hemodialysis, patients with hematological disorders, patients with malignancies, organ recipients, healthcare workers, patients who inject drugs and men who have sex with men [12,19,23]. We grouped apparently healthy individuals, blood donors, the general population, and pregnant women as the low-risk group for OBI.\nDisagreements observed during study selection and data extraction were resolved by discussion and consensus.", "The tool developed by Hoy et al. for cross-sectional studies was used to assess the methodological quality of the included studies (S3 Table) [24]. Discussion and consensus were used to resolve disagreements.", "Study-specific estimates were pooled using DerSimonian and Laird’s random-effects model meta-analysis [25]. 
This model provided the overall prevalence, the 95% confidence interval and the prediction interval which informs about the values of future studies. Heterogeneity was assessed by the Cochrane statistical test, and the p value of the Q test (< 0.05) was used to indicate significant heterogeneity and quantified by I2 values, assuming I2 values of 25%, 50% and 75% represent respectively low, moderate and high heterogeneity[26,27]. Publication bias was assessed using Egger’s test and funnel plot [28]. Subgroup analyses were performed based on study design, sampling approach, setting, timing of sample collection, countries, WHO region, UNSD region, country income level, age range, population categories, and OBI and/or OCI diagnostic assays. We considered p values < 0.05, to be statistically significant. R software version 4.1.0 was used to perform the analyses[29,30].", "Selection of studies As of 07/18/2022, 496 studies were analysed with no additional studies added from the manual search. After removing the duplicates (152), 344 articles were screened. The full text evaluation led to the elimination of 78 studies for several reasons, including lack of data on OBI and/or OCI prevalence or case fatality rate, or small sample size (S4 Table). In all, a total of 109 articles reporting the prevalence and/or case fatality rate of OBI and/or OCI in Africa were included in this synthesis (Fig. 1) [3,31–138].\nAs of 07/18/2022, 496 studies were analysed with no additional studies added from the manual search. After removing the duplicates (152), 344 articles were screened. The full text evaluation led to the elimination of 78 studies for several reasons, including lack of data on OBI and/or OCI prevalence or case fatality rate, or small sample size (S4 Table). In all, a total of 109 articles reporting the prevalence and/or case fatality rate of OBI and/or OCI in Africa were included in this synthesis (Fig. 1) [3,31–138].\nStudy characteristics Overall, we obtained 157 prevalence data for this meta-analysis, including 133 studies for OBI prevalence; 23 studies for OCI prevalence, and a single study for OBI case fatality rate (S5, S6, and S7 Tables). The OBI case fatality rate study reported a total of 6 deaths among 72 OBI and HIV infected patients in Botswana [108]. The studies were published between 2006 and 2022 in general and respectively from 2006 to 2022 and 2010-2022 for OBI and OCI. Participants were registered from 1997 to 2021. The majority of studies were cross-sectional (84.7%) with non-probability sampling (88.5%). The study setting was mainly in the hospital (93.6%). The studies were carried out in 21 African countries, mainly in Egypt (44.6%) and South Africa (13.4%). The largest number of studies were from lower-middle-income countries (68.2%). Most of the studies recruited adults (61.8%). Of the included studies, 39 (24.8%) were carried out in HIV infected patients, 27 (16.6%) in patients with chronic hepatitis C infection, 19 (12.1%) in blood donors, and 19 (12.1%) in hemodialysis patients. The diagnosis of OCI was either performed using conventional RT-PCR (5.7%) or real-time RT-PCR (8.3%). OBI diagnosis was made using conventional PCR (46.5%) or real-time PCR (39.5%). The type of sample used was blood for OBI (100%) and peripheral blood mononuclear cells (PBMCs) for OCI (100%). 
Most of the prevalence data were at moderate risk of bias (59.2%).\nOverall, we obtained 157 prevalence data for this meta-analysis, including 133 studies for OBI prevalence; 23 studies for OCI prevalence, and a single study for OBI case fatality rate (S5, S6, and S7 Tables). The OBI case fatality rate study reported a total of 6 deaths among 72 OBI and HIV infected patients in Botswana [108]. The studies were published between 2006 and 2022 in general and respectively from 2006 to 2022 and 2010-2022 for OBI and OCI. Participants were registered from 1997 to 2021. The majority of studies were cross-sectional (84.7%) with non-probability sampling (88.5%). The study setting was mainly in the hospital (93.6%). The studies were carried out in 21 African countries, mainly in Egypt (44.6%) and South Africa (13.4%). The largest number of studies were from lower-middle-income countries (68.2%). Most of the studies recruited adults (61.8%). Of the included studies, 39 (24.8%) were carried out in HIV infected patients, 27 (16.6%) in patients with chronic hepatitis C infection, 19 (12.1%) in blood donors, and 19 (12.1%) in hemodialysis patients. The diagnosis of OCI was either performed using conventional RT-PCR (5.7%) or real-time RT-PCR (8.3%). OBI diagnosis was made using conventional PCR (46.5%) or real-time PCR (39.5%). The type of sample used was blood for OBI (100%) and peripheral blood mononuclear cells (PBMCs) for OCI (100%). Most of the prevalence data were at moderate risk of bias (59.2%).\nThe prevalence of occult C infection In total, the twenty-three studies reporting OCI prevalence data in Africa were conducted only in Egypt. The pooled OCI prevalence was estimated to be 10.7% [95% CI = 6.6-15.4] on a sample of 2865 participants with substantial heterogeneity (I2 = 90.7% [95% CI = 87.4-93.2], p < 0.001) (Table 1, Figs. 2 and 3). Two studies presented the prevalence of seropositive and/or seronegative OCI without distinction with an estimated at 5.8% [95% CI = 0.8-13.9] in a total of 100 participants. There was a total of 14 studies reporting seronegative OCI prevalence. The pooled seronegative OCI prevalence was estimated to be 10.7% [95% CI = 4.8-18.3] in a sample of 908 participants. There was a total of 7 studies reporting seropositive OCI prevalence. The overall seropositive OCI prevalence was estimated to be 14.4% [95% CI = 5.2-22.1], with a total of 1857 participants.\nIn total, the twenty-three studies reporting OCI prevalence data in Africa were conducted only in Egypt. The pooled OCI prevalence was estimated to be 10.7% [95% CI = 6.6-15.4] on a sample of 2865 participants with substantial heterogeneity (I2 = 90.7% [95% CI = 87.4-93.2], p < 0.001) (Table 1, Figs. 2 and 3). Two studies presented the prevalence of seropositive and/or seronegative OCI without distinction with an estimated at 5.8% [95% CI = 0.8-13.9] in a total of 100 participants. There was a total of 14 studies reporting seronegative OCI prevalence. The pooled seronegative OCI prevalence was estimated to be 10.7% [95% CI = 4.8-18.3] in a sample of 908 participants. There was a total of 7 studies reporting seropositive OCI prevalence. The overall seropositive OCI prevalence was estimated to be 14.4% [95% CI = 5.2-22.1], with a total of 1857 participants.\nThe prevalence of occult B infection A total of 133 studies reporting prevalence data on OBI were conducted in 5 UNSD regions of Africa: Central Africa, Eastern Africa, West Africa, Northern Africa, and Southern Africa (Fig. 2). 
The OBI prevalence was estimated at 14.8% [95% CI = 12.2-17.7] in a sample of 18579 participants with significant heterogeneity (I2 = 96.2% [95% CI = 95.8-96.5], p < 0.0001) (Table 1, Figs. 2, 4, and S1 Fig). The pooled prevalence of seropositive OBI was estimated at 20.0% [95% CI = 15.3-25.1] among 6873 participants. Prevalence data on seronegative OBI were obtained from 32 studies and the prevalence was estimated at 7.4% [95% CI = 3.8-11,8] with 2896 participants. A total of 44 studies reported on seropositive and/or seronegative OBI with an estimated prevalence of 14.3% [95% CI = 10.1-19.2] in 8810 participants. Few studies provided data on the variation of seropositive or seronegative OBI with anti-HBs (only 9 studies). The distribution of OBI prevalence according to anti-HBs was as follows: 17.32% [4.37; 35.69] for anti-HBc negative/ anti-HBs negative, 30.95% [17.10; 46.62] for anti-HBc positive/ anti-HBs negative, 27.41% [ 8.89; 50.70] anti-HBc positive/ anti-HBs positive, and 0% for anti-HBc negative/ anti-HBs positive (S3 Fig).\nA total of 133 studies reporting prevalence data on OBI were conducted in 5 UNSD regions of Africa: Central Africa, Eastern Africa, West Africa, Northern Africa, and Southern Africa (Fig. 2). The OBI prevalence was estimated at 14.8% [95% CI = 12.2-17.7] in a sample of 18579 participants with significant heterogeneity (I2 = 96.2% [95% CI = 95.8-96.5], p < 0.0001) (Table 1, Figs. 2, 4, and S1 Fig). The pooled prevalence of seropositive OBI was estimated at 20.0% [95% CI = 15.3-25.1] among 6873 participants. Prevalence data on seronegative OBI were obtained from 32 studies and the prevalence was estimated at 7.4% [95% CI = 3.8-11,8] with 2896 participants. A total of 44 studies reported on seropositive and/or seronegative OBI with an estimated prevalence of 14.3% [95% CI = 10.1-19.2] in 8810 participants. Few studies provided data on the variation of seropositive or seronegative OBI with anti-HBs (only 9 studies). The distribution of OBI prevalence according to anti-HBs was as follows: 17.32% [4.37; 35.69] for anti-HBc negative/ anti-HBs negative, 30.95% [17.10; 46.62] for anti-HBc positive/ anti-HBs negative, 27.41% [ 8.89; 50.70] anti-HBc positive/ anti-HBs positive, and 0% for anti-HBc negative/ anti-HBs positive (S3 Fig).\nSubgroup analyses Occult hepatitis C infection The OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067).\nThe OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067).\nOccult hepatitis B infection Variation of OBI prevalence was statistically significant by country (p < 0.001), categories of population (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). 
Subgroup analyses:
Occult hepatitis C infection: The OCI prevalence differed significantly across population categories (p = 0.005), with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in patients with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), or diagnostic method (p = 0.067).
Occult hepatitis B infection: OBI prevalence varied significantly by country (p < 0.001), population category (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalence rates were observed in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). Patients with malignancies, patients with liver disorders, and HIV-positive patients were the three population categories with the highest OBI prevalence, at 41.9%, 33.1%, and 17.0%, respectively. The seropositive OBI prevalence (20.1%) was significantly higher than the seronegative OBI prevalence (8.0%). OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), or diagnostic method (p = 0.659).
Publication bias: Egger's tests were significant for both the OCI prevalence (p < 0.001) and the OBI prevalence in Africa (p = 0.014), suggesting the presence of publication bias. Funnel plots were consistent with the results of Egger's tests (S2 and S4 Figs).
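Egger's test, as used here, regresses the standardized effect (estimate divided by its standard error) on precision (the reciprocal of the standard error) and tests whether the intercept differs from zero. The sketch below is an illustrative Python re-implementation with made-up inputs; it is not the review's R code or data.

```python
import math
from scipy.stats import t as t_dist

def egger_test(effects, std_errors):
    """Egger's regression asymmetry test.

    Regresses the standardized effect (effect / SE) on precision (1 / SE) and
    returns the intercept with a two-sided p-value; an intercept that differs
    from zero suggests funnel-plot asymmetry (possible publication bias).
    """
    x = [1 / se for se in std_errors]                     # precision
    z = [e / se for e, se in zip(effects, std_errors)]    # standardized effect
    n = len(x)
    xbar, zbar = sum(x) / n, sum(z) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z)) / sxx
    intercept = zbar - slope * xbar
    rss = sum((zi - (intercept + slope * xi)) ** 2 for xi, zi in zip(x, z))
    s2 = rss / (n - 2)                                    # residual variance
    se_intercept = math.sqrt(s2 * (1 / n + xbar ** 2 / sxx))
    t_stat = intercept / se_intercept
    p_value = 2 * t_dist.sf(abs(t_stat), n - 2)
    return intercept, p_value

# Hypothetical study-level prevalences and standard errors -- not the review's data.
intercept, p = egger_test([0.10, 0.15, 0.22, 0.08, 0.30], [0.02, 0.03, 0.05, 0.015, 0.06])
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```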
Discussion:
We performed a meta-analysis to provide an estimate of the prevalence and case fatality rate of OBI and OCI in Africa. A total of 109 studies were reviewed, covering 18579 participants assessed for OBI and 2865 assessed for OCI. Only one study reported deaths (6/72), in HIV-positive subjects with OBI in Botswana. The prevalence of OBI ranged from 0% to 100% and the prevalence of OCI from 0% to 60%. Pooled prevalence estimates of 10.7% for OCI and 14.8% for OBI were obtained from relevant articles published between 2006 and 2022 in Africa. Overall, we found a higher prevalence of OCI in patients with malignancies, patients with chronic hepatitis C, and hemodialysis patients. Patients with malignancies, patients with liver disorders, and HIV-positive patients were the population groups with the highest OBI prevalence.
Our results show that, apart from Egypt, studies on OCI in Africa are very rare. Although Egypt has the highest prevalence of HCV in the world [139], many other African countries also have a substantial HCV prevalence, including Cameroon, Burundi, and Morocco [140]. Additional studies on OCI are therefore needed in African countries other than Egypt. Studies have shown a risk of OCI reactivation in subjects with a sustained virological response to HCV [17]. With the arrival of new direct-acting HCV treatments [141], OCI studies in patients with a sustained virological response are needed. Our systematic review found a combined OCI prevalence of 10.7% (14.4% for seropositive OCI and 10.7% for seronegative OCI). Related studies have reported a similar OCI prevalence of 10% in Middle Eastern and Eastern Mediterranean countries [14]. In a global review, the prevalence of seropositive OCI was estimated at 13.3% and that of seronegative OCI at 9.6% [17]. These related reviews suggest that the prevalence of OCI is high in subjects with chronic liver disease, people who inject drugs, HIV-positive subjects, and patients on hemodialysis [14,17]. We found similar results in the present work, with a higher prevalence in patients with malignancies, patients with hepatological disorders, HIV-positive patients, and patients on hemodialysis. So far, no studies were found on OCI in people at high risk of HCV, such as people who inject drugs or men who have sex with men, so such studies are needed in Africa.
The prevalence of OBI is known to be highly variable across population categories.
The high-risk groups reported in the literature include HIV-positive patients, patients with chronic liver disease, patients on hemodialysis, patients with hematological disorders, patients with malignancies, organ recipients, healthcare workers, people who inject drugs, and men who have sex with men [12,19,23]. The geographical context is also an important source of variability in the prevalence of OBI. The main factors behind this variability include the endemicity of classical hepatitis B circulation (low, intermediate, or high), the socio-demographic index, and hepatitis B vaccination coverage [15,23]. Diagnostic factors, such as the type of sample (blood, peripheral blood mononuclear cells, or liver tissue) and the sensitivity and specificity of the assays for the targets (DNA, HBsAg, anti-HBc, anti-HBs), also have a significant influence on the reported prevalence of OBI [18,23]. Because of all these sources of variability, objective comparisons between estimates from studies with different methodological approaches are difficult. Prevalences similar to the one found in our review were reported in a national review in Sudan (15.5%) and in a second review conducted in Africa among HIV-positive patients (11.2%) [12,16]. The prevalence of HBV in HIV-positive patients in Africa is known to be high because of the high endemicity of both infections and their shared routes of transmission [142]. Therefore, the high prevalence of OBI also found in HIV-positive subjects in this study suggests the need for HBV DNA testing in these high-risk patients in Africa. Africa is a highly endemic area for classical HBV; thus, in the hope of eliminating HBV as recommended by the WHO, strengthening HBV vaccination policies is necessary, particularly for patients with malignancies and HIV-positive patients [143]. Compared with the OBI prevalence in this work, Xie et al. estimated a lower pooled prevalence of 4% from 70 studies conducted in Asia [19]. Surprisingly, the highest prevalences have been estimated for the areas of the world with the lowest endemicity, notably 36% and 25% for Western Europe and North America, respectively [18].
Our study is mainly limited by the representativeness of the included studies: all OCI prevalence studies were conducted in Egypt, OCI data were unavailable elsewhere, and OBI prevalence data were usually collected in particular regions, which might not represent national prevalence. Only 15 of the 55 African countries were represented in this review; the data may therefore not be sufficient to represent the whole continent for OBI and OCI. In addition, substantial residual statistical heterogeneity in the prevalence estimates was identified in the overall and subgroup meta-analyses for OBI and OCI. Despite these limitations, the main strength of our review is that we identified a very large number of studies, covering multiple categories of symptomatic and apparently healthy populations at high risk of HCV and HBV infection. We also took into account the variability of prevalence according to anti-HCV serological status as well as anti-HBc status.

Conclusion:
This review presents a summary of the prevalence of OBI and OCI in Africa.
It shows that, despite the great variability of prevalence across population categories and geographic regions, the prevalence of OBI and OCI in Africa is generally high, and that patients with malignancies, hemodialysis patients, and HIV-positive patients are groups at high risk for these infections. This highlights the need for more appropriate OCI and OBI screening programs targeting those at high risk of infection. In addition, the lack of data in several African countries shows the need for researchers to investigate this field, particularly in countries with high endemicity for HBV and HCV.

Supplementary material: Supplementary data associated with this article can be found in the online version at 10.1016/j.jiph.2022.11.008.
[ "intro", "materials | methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion", "conclusions", "supplementary-material" ]
[ "Occult hepatitis B", "Occult hepatitis C", "Prevalence", "Africa" ]
Introduction:
In the early 1980s, a new form of clinical hepatitis B virus (HBV) infection was described, corresponding to the presence of HBV DNA in the liver and/or serum of patients in whom the HBs antigen (HBsAg) is undetectable by the usual serological tests. This was called occult hepatitis B infection (OBI) [1–3]. Based on the HBV antibody profile, OBI can be distinguished into seropositive OBI (anti-HBc and/or anti-HBs positive) and seronegative OBI (anti-HBc and anti-HBs negative) [4]. A similar entity was studied in 2005 by Castillo et al., based on the ability of hepatitis C virus (HCV) to replicate in peripheral blood mononuclear cells (PBMCs) of patients in the absence of viral RNA in serum and of detectable anti-HCV antibodies, thus describing occult HCV infection (OCI) [5,6]. Two types of OCI are recognized: seronegative OCI (anti-HCV antibody-negative and serum HCV RNA-negative) and seropositive OCI (anti-HCV antibody-positive and serum HCV RNA-negative) [7]. These occult forms of hepatitis B and C can lead to hepatic complications, including chronic liver infection and liver cirrhosis [8]. OBI is also known to play a role in the development of hepatocellular carcinoma [9]. These entities raise multiple concerns, including the potential for transmission of this form of infection through blood transfusion, hemodialysis, or from mother to child [10]. OBI and OCI have been described in a variety of individuals, including hemodialysis patients, patients with a sustained virologic response, immunocompromised individuals, patients with abnormal liver function, and apparently healthy subjects. Multiple meta-analyses of the prevalence of OBI and OCI have shown great variability in estimates according to population categories, regions, the type of tests used, and the level of endemicity of the disease [9,11–17]. Two global meta-analyses with partial analyses for Africa reported an OBI prevalence of 3.7% [15], a seronegative OCI prevalence of 9.6%, and a seropositive OCI prevalence of 13.3% [17]. The highest prevalence of OBI was in countries with high endemicity of classical hepatitis B, including in Africa, with a prevalence of up to 35.6% in the general population in Uganda [15]. Similar to classical hepatitis C infection, which has its highest prevalence in Egypt, this meta-analysis showed that OCI had the highest prevalence in North Africa, represented only by Egypt [17]. Other meta-analyses have estimated the prevalence of OBI in Africa at 11.2% among people living with HIV [16] and at 15.5% in Sudan [12]. In a regional meta-analysis of the Middle East and Eastern Mediterranean countries, the prevalence of OCI was estimated at 10.0%, with the highest prevalence recorded in Egypt [14]. A high OBI prevalence of 34% was reported in Western Europe and Northern America [18], and a lower OBI prevalence of 4% was observed in Asia [19]. Thus, there is no systematic synthesis of OBI and OCI for Africa, which is a highly endemic region for classical HBV and HCV infections. In this meta-analysis, we compiled all the available evidence on OBI and OCI in Africa, examining differences across populations and regions. The results of this review may help consider OBI and OCI in the WHO goal of eliminating viral hepatitis by 2030. They will also help sound the alarm on the need to strengthen the diagnostic capacity for OBI and OCI in Africa, which has limited access to molecular testing.
Materials and methods:
Registration: This systematic review was conducted in full compliance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines (S1 Table) [20]. The protocol of this systematic review was developed and registered on PROSPERO (International prospective register of systematic reviews) on June 03, 2021 under the identifier CRD42021252772 (https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021252772).
Search strategy: In May 2021, an electronic literature search was performed in the following databases: PubMed, Web of Science, African Journal Online, and African Index Medicus. A second update search was performed in July 2022 to identify additional articles. We used a combination of keywords covering OBI, OCI, and the names of African countries and regions, together with the boolean operators OR and AND, as detailed in S2 Table. This search strategy was adapted to each database. To complement the bibliographic database search and identify potential additional data sources, we reviewed the reference lists of all relevant articles. All search results were managed using EndNote X9 reference management software, and all duplicates were removed using the same software.
Inclusion and exclusion criteria: We considered published studies without any restriction on time, population category, or detection assay. We classified populations for OBI as: 1) HBsAg negative, 2) HBsAg negative and anti-HBc positive, and 3) HBsAg negative and anti-HBc negative [21,22]; and for OCI as: 1) seronegative OCI (anti-HCV antibody-negative and serum HCV RNA-negative), 2) seropositive OCI (anti-HCV antibody-positive and serum HCV RNA-negative), and 3) seropositive and/or seronegative OCI [7]. Only reports written in English and/or French were included. Exclusion criteria were: case reports, reviews, studies conducted outside Africa, sample size ≤ 10 participants, no baseline data for longitudinal studies, no data on OBI and/or OCI prevalence or case fatality rate, and no data on the number of HBsAg-negative subjects tested.
Study selection and data extraction process: Duplicates identified in the full list of studies were removed. The titles and abstracts of articles retrieved from the electronic literature search were independently reviewed by four researchers, and the full texts of potentially eligible articles were obtained and further assessed for final inclusion. Articles were selected based on the availability of the data needed to calculate the prevalence of OBI and/or OCI. Data from the included studies were extracted using a Google Form by 18 of the study authors and verified by SK. The data extracted were: first author name, year of publication, study design, WHO region, UNSD region, country, country income level, sampling method, time of data collection, period of study, age of study participants, recruitment setting, population category, OBI and/or OCI diagnostic method, and prevalence of OBI and/or OCI in Africa. We considered the following as high-risk groups for OBI: HIV-positive patients, patients with chronic liver disease, patients on hemodialysis, patients with hematological disorders, patients with malignancies, organ recipients, healthcare workers, people who inject drugs, and men who have sex with men [12,19,23]. We grouped apparently healthy individuals, blood donors, the general population, and pregnant women as the low-risk group for OBI. Disagreements observed during study selection and data extraction were resolved by discussion and consensus.
Quality assessment: The tool developed by Hoy et al. for cross-sectional studies was used to assess the methodological quality of the included studies (S3 Table) [24]. Disagreements were resolved by discussion and consensus.
Statistical analysis: Study-specific estimates were pooled using DerSimonian and Laird's random-effects model meta-analysis [25]. This model provided the overall prevalence, the 95% confidence interval, and the prediction interval, which indicates the range of prevalence expected in future studies. Heterogeneity was assessed with Cochran's Q test, with a p value < 0.05 indicating significant heterogeneity, and quantified by the I2 statistic, with I2 values of 25%, 50%, and 75% representing low, moderate, and high heterogeneity, respectively [26,27]. Publication bias was assessed using Egger's test and funnel plots [28]. Subgroup analyses were performed by study design, sampling approach, setting, timing of sample collection, country, WHO region, UNSD region, country income level, age range, population category, and OBI and/or OCI diagnostic assay. p values < 0.05 were considered statistically significant. R software version 4.1.0 was used to perform the analyses [29,30].
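For reference, the quantities named above can be written out explicitly; this is the standard DerSimonian-Laird formulation rather than notation taken from the original article. With study prevalences p_i, within-study variances v_i, fixed-effect weights w_i = 1/v_i, and k studies:

```latex
% Standard DerSimonian-Laird random-effects quantities (illustrative, not from the article).
\begin{align*}
\hat{p}_{\mathrm{FE}} &= \frac{\sum_i w_i \hat{p}_i}{\sum_i w_i}, \qquad
Q = \sum_{i=1}^{k} w_i \left(\hat{p}_i - \hat{p}_{\mathrm{FE}}\right)^2,\\[4pt]
\hat{\tau}^2 &= \max\!\left(0,\ \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right), \qquad
I^2 = \max\!\left(0,\ \frac{Q - (k-1)}{Q}\right)\times 100\%,\\[4pt]
\hat{p}_{\mathrm{RE}} &= \frac{\sum_i \hat{p}_i/(v_i+\hat{\tau}^2)}{\sum_i 1/(v_i+\hat{\tau}^2)}, \qquad
\mathrm{SE}\!\left(\hat{p}_{\mathrm{RE}}\right) = \left(\sum_i \frac{1}{v_i+\hat{\tau}^2}\right)^{-1/2}.
\end{align*}
```

The 95% confidence interval is then obtained as the pooled estimate plus or minus 1.96 times its standard error.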
The OBI case fatality rate study reported a total of 6 deaths among 72 OBI and HIV infected patients in Botswana [108]. The studies were published between 2006 and 2022 in general and respectively from 2006 to 2022 and 2010-2022 for OBI and OCI. Participants were registered from 1997 to 2021. The majority of studies were cross-sectional (84.7%) with non-probability sampling (88.5%). The study setting was mainly in the hospital (93.6%). The studies were carried out in 21 African countries, mainly in Egypt (44.6%) and South Africa (13.4%). The largest number of studies were from lower-middle-income countries (68.2%). Most of the studies recruited adults (61.8%). Of the included studies, 39 (24.8%) were carried out in HIV infected patients, 27 (16.6%) in patients with chronic hepatitis C infection, 19 (12.1%) in blood donors, and 19 (12.1%) in hemodialysis patients. The diagnosis of OCI was either performed using conventional RT-PCR (5.7%) or real-time RT-PCR (8.3%). OBI diagnosis was made using conventional PCR (46.5%) or real-time PCR (39.5%). The type of sample used was blood for OBI (100%) and peripheral blood mononuclear cells (PBMCs) for OCI (100%). Most of the prevalence data were at moderate risk of bias (59.2%). The prevalence of occult C infection In total, the twenty-three studies reporting OCI prevalence data in Africa were conducted only in Egypt. The pooled OCI prevalence was estimated to be 10.7% [95% CI = 6.6-15.4] on a sample of 2865 participants with substantial heterogeneity (I2 = 90.7% [95% CI = 87.4-93.2], p < 0.001) (Table 1, Figs. 2 and 3). Two studies presented the prevalence of seropositive and/or seronegative OCI without distinction with an estimated at 5.8% [95% CI = 0.8-13.9] in a total of 100 participants. There was a total of 14 studies reporting seronegative OCI prevalence. The pooled seronegative OCI prevalence was estimated to be 10.7% [95% CI = 4.8-18.3] in a sample of 908 participants. There was a total of 7 studies reporting seropositive OCI prevalence. The overall seropositive OCI prevalence was estimated to be 14.4% [95% CI = 5.2-22.1], with a total of 1857 participants. In total, the twenty-three studies reporting OCI prevalence data in Africa were conducted only in Egypt. The pooled OCI prevalence was estimated to be 10.7% [95% CI = 6.6-15.4] on a sample of 2865 participants with substantial heterogeneity (I2 = 90.7% [95% CI = 87.4-93.2], p < 0.001) (Table 1, Figs. 2 and 3). Two studies presented the prevalence of seropositive and/or seronegative OCI without distinction with an estimated at 5.8% [95% CI = 0.8-13.9] in a total of 100 participants. There was a total of 14 studies reporting seronegative OCI prevalence. The pooled seronegative OCI prevalence was estimated to be 10.7% [95% CI = 4.8-18.3] in a sample of 908 participants. There was a total of 7 studies reporting seropositive OCI prevalence. The overall seropositive OCI prevalence was estimated to be 14.4% [95% CI = 5.2-22.1], with a total of 1857 participants. The prevalence of occult B infection A total of 133 studies reporting prevalence data on OBI were conducted in 5 UNSD regions of Africa: Central Africa, Eastern Africa, West Africa, Northern Africa, and Southern Africa (Fig. 2). The OBI prevalence was estimated at 14.8% [95% CI = 12.2-17.7] in a sample of 18579 participants with significant heterogeneity (I2 = 96.2% [95% CI = 95.8-96.5], p < 0.0001) (Table 1, Figs. 2, 4, and S1 Fig). 
The pooled prevalence of seropositive OBI was estimated at 20.0% [95% CI = 15.3-25.1] among 6873 participants. Prevalence data on seronegative OBI were obtained from 32 studies and the prevalence was estimated at 7.4% [95% CI = 3.8-11,8] with 2896 participants. A total of 44 studies reported on seropositive and/or seronegative OBI with an estimated prevalence of 14.3% [95% CI = 10.1-19.2] in 8810 participants. Few studies provided data on the variation of seropositive or seronegative OBI with anti-HBs (only 9 studies). The distribution of OBI prevalence according to anti-HBs was as follows: 17.32% [4.37; 35.69] for anti-HBc negative/ anti-HBs negative, 30.95% [17.10; 46.62] for anti-HBc positive/ anti-HBs negative, 27.41% [ 8.89; 50.70] anti-HBc positive/ anti-HBs positive, and 0% for anti-HBc negative/ anti-HBs positive (S3 Fig). A total of 133 studies reporting prevalence data on OBI were conducted in 5 UNSD regions of Africa: Central Africa, Eastern Africa, West Africa, Northern Africa, and Southern Africa (Fig. 2). The OBI prevalence was estimated at 14.8% [95% CI = 12.2-17.7] in a sample of 18579 participants with significant heterogeneity (I2 = 96.2% [95% CI = 95.8-96.5], p < 0.0001) (Table 1, Figs. 2, 4, and S1 Fig). The pooled prevalence of seropositive OBI was estimated at 20.0% [95% CI = 15.3-25.1] among 6873 participants. Prevalence data on seronegative OBI were obtained from 32 studies and the prevalence was estimated at 7.4% [95% CI = 3.8-11,8] with 2896 participants. A total of 44 studies reported on seropositive and/or seronegative OBI with an estimated prevalence of 14.3% [95% CI = 10.1-19.2] in 8810 participants. Few studies provided data on the variation of seropositive or seronegative OBI with anti-HBs (only 9 studies). The distribution of OBI prevalence according to anti-HBs was as follows: 17.32% [4.37; 35.69] for anti-HBc negative/ anti-HBs negative, 30.95% [17.10; 46.62] for anti-HBc positive/ anti-HBs negative, 27.41% [ 8.89; 50.70] anti-HBc positive/ anti-HBs positive, and 0% for anti-HBc negative/ anti-HBs positive (S3 Fig). Subgroup analyses Occult hepatitis C infection The OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067). The OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067). Occult hepatitis B infection Variation of OBI prevalence was statistically significant by country (p < 0.001), categories of population (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalence rates were in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). 
Patients with malignancies, patients with liver disorders, and HIV positive patients were the three population categories with the highest OBI prevalence, 41.9%, 33.1%, and 17.0% respectively. The seropositive OBI prevalence (20.1%) was significantly higher than that of seronegative OBI (8.0%). The OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), and diagnostic method (p = 0.659). Variation of OBI prevalence was statistically significant by country (p < 0.001), categories of population (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalence rates were in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). Patients with malignancies, patients with liver disorders, and HIV positive patients were the three population categories with the highest OBI prevalence, 41.9%, 33.1%, and 17.0% respectively. The seropositive OBI prevalence (20.1%) was significantly higher than that of seronegative OBI (8.0%). The OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), and diagnostic method (p = 0.659). Publication bias Egger’s tests were significant for the OCI prevalence (P < 0.001) and OBI prevalence in Africa (P = 0.014), suggesting the presence of a publication bias. Funnel plots confirmed the results of publication bias obtained by Egger’s test (S2 and S4 Figs). Egger’s tests were significant for the OCI prevalence (P < 0.001) and OBI prevalence in Africa (P = 0.014), suggesting the presence of a publication bias. Funnel plots confirmed the results of publication bias obtained by Egger’s test (S2 and S4 Figs). Occult hepatitis C infection The OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067). The OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067). Occult hepatitis B infection Variation of OBI prevalence was statistically significant by country (p < 0.001), categories of population (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalence rates were in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). 
Patients with malignancies, patients with liver disorders, and HIV positive patients were the three population categories with the highest OBI prevalence, 41.9%, 33.1%, and 17.0% respectively. The seropositive OBI prevalence (20.1%) was significantly higher than that of seronegative OBI (8.0%). The OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), and diagnostic method (p = 0.659). Variation of OBI prevalence was statistically significant by country (p < 0.001), categories of population (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalence rates were in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). Patients with malignancies, patients with liver disorders, and HIV positive patients were the three population categories with the highest OBI prevalence, 41.9%, 33.1%, and 17.0% respectively. The seropositive OBI prevalence (20.1%) was significantly higher than that of seronegative OBI (8.0%). The OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), and diagnostic method (p = 0.659). Publication bias Egger’s tests were significant for the OCI prevalence (P < 0.001) and OBI prevalence in Africa (P = 0.014), suggesting the presence of a publication bias. Funnel plots confirmed the results of publication bias obtained by Egger’s test (S2 and S4 Figs). Egger’s tests were significant for the OCI prevalence (P < 0.001) and OBI prevalence in Africa (P = 0.014), suggesting the presence of a publication bias. Funnel plots confirmed the results of publication bias obtained by Egger’s test (S2 and S4 Figs). Selection of studies: As of 07/18/2022, 496 studies were analysed with no additional studies added from the manual search. After removing the duplicates (152), 344 articles were screened. The full text evaluation led to the elimination of 78 studies for several reasons, including lack of data on OBI and/or OCI prevalence or case fatality rate, or small sample size (S4 Table). In all, a total of 109 articles reporting the prevalence and/or case fatality rate of OBI and/or OCI in Africa were included in this synthesis (Fig. 1) [3,31–138]. Study characteristics: Overall, we obtained 157 prevalence data for this meta-analysis, including 133 studies for OBI prevalence; 23 studies for OCI prevalence, and a single study for OBI case fatality rate (S5, S6, and S7 Tables). The OBI case fatality rate study reported a total of 6 deaths among 72 OBI and HIV infected patients in Botswana [108]. The studies were published between 2006 and 2022 in general and respectively from 2006 to 2022 and 2010-2022 for OBI and OCI. Participants were registered from 1997 to 2021. The majority of studies were cross-sectional (84.7%) with non-probability sampling (88.5%). The study setting was mainly in the hospital (93.6%). The studies were carried out in 21 African countries, mainly in Egypt (44.6%) and South Africa (13.4%). The largest number of studies were from lower-middle-income countries (68.2%). Most of the studies recruited adults (61.8%). 
Of the included studies, 39 (24.8%) were carried out in HIV infected patients, 27 (16.6%) in patients with chronic hepatitis C infection, 19 (12.1%) in blood donors, and 19 (12.1%) in hemodialysis patients. The diagnosis of OCI was either performed using conventional RT-PCR (5.7%) or real-time RT-PCR (8.3%). OBI diagnosis was made using conventional PCR (46.5%) or real-time PCR (39.5%). The type of sample used was blood for OBI (100%) and peripheral blood mononuclear cells (PBMCs) for OCI (100%). Most of the prevalence data were at moderate risk of bias (59.2%). The prevalence of occult C infection: In total, the twenty-three studies reporting OCI prevalence data in Africa were conducted only in Egypt. The pooled OCI prevalence was estimated to be 10.7% [95% CI = 6.6-15.4] on a sample of 2865 participants with substantial heterogeneity (I2 = 90.7% [95% CI = 87.4-93.2], p < 0.001) (Table 1, Figs. 2 and 3). Two studies presented the prevalence of seropositive and/or seronegative OCI without distinction with an estimated at 5.8% [95% CI = 0.8-13.9] in a total of 100 participants. There was a total of 14 studies reporting seronegative OCI prevalence. The pooled seronegative OCI prevalence was estimated to be 10.7% [95% CI = 4.8-18.3] in a sample of 908 participants. There was a total of 7 studies reporting seropositive OCI prevalence. The overall seropositive OCI prevalence was estimated to be 14.4% [95% CI = 5.2-22.1], with a total of 1857 participants. The prevalence of occult B infection: A total of 133 studies reporting prevalence data on OBI were conducted in 5 UNSD regions of Africa: Central Africa, Eastern Africa, West Africa, Northern Africa, and Southern Africa (Fig. 2). The OBI prevalence was estimated at 14.8% [95% CI = 12.2-17.7] in a sample of 18579 participants with significant heterogeneity (I2 = 96.2% [95% CI = 95.8-96.5], p < 0.0001) (Table 1, Figs. 2, 4, and S1 Fig). The pooled prevalence of seropositive OBI was estimated at 20.0% [95% CI = 15.3-25.1] among 6873 participants. Prevalence data on seronegative OBI were obtained from 32 studies and the prevalence was estimated at 7.4% [95% CI = 3.8-11,8] with 2896 participants. A total of 44 studies reported on seropositive and/or seronegative OBI with an estimated prevalence of 14.3% [95% CI = 10.1-19.2] in 8810 participants. Few studies provided data on the variation of seropositive or seronegative OBI with anti-HBs (only 9 studies). The distribution of OBI prevalence according to anti-HBs was as follows: 17.32% [4.37; 35.69] for anti-HBc negative/ anti-HBs negative, 30.95% [17.10; 46.62] for anti-HBc positive/ anti-HBs negative, 27.41% [ 8.89; 50.70] anti-HBc positive/ anti-HBs positive, and 0% for anti-HBc negative/ anti-HBs positive (S3 Fig). Subgroup analyses: Occult hepatitis C infection The OCI prevalence was statistically different (p = 0.005) according to the population categories with an estimated prevalence of 34.9% in patients with malignancies, 12.3% in those with chronic hepatitis C, 7.2% in hemodialysis patients, and 1.1% in apparently healthy individuals (S8 Table). OCI prevalence did not vary significantly by study design (p = 0.380), sampling approach (p = 0.460), type of OCI (seropositive or seronegative) (p = 0.819), and diagnostic method (p = 0.067). 
Occult hepatitis B infection: The variation in OBI prevalence was statistically significant by country (p < 0.001), population category (p < 0.001), and type of OBI (seropositive or seronegative) (p < 0.001) (S8 Table). The highest OBI prevalences were in Morocco (42.8%), Gabon (22.6%), and South Africa (20.2%). Patients with malignancies, patients with liver disorders, and HIV-positive patients were the three population categories with the highest OBI prevalence (41.9%, 33.1%, and 17.0% respectively). The seropositive OBI prevalence (20.1%) was significantly higher than the seronegative OBI prevalence (8.0%). OBI prevalence did not vary significantly by study design (p = 0.360), sampling approach (p = 0.117), setting (hospital- or community-based) (p = 0.360), timing of data collection (p = 0.875), WHO region (p = 0.544), UNSD region (p = 0.384), country income level (p = 0.395), age range (p = 0.799), or diagnostic method (p = 0.659). Publication bias: Egger's tests were significant for OCI prevalence (p < 0.001) and OBI prevalence in Africa (p = 0.014), suggesting the presence of publication bias. Funnel plots were consistent with the results of Egger's tests (S2 and S4 Figs).
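For the publication-bias assessment described above, Egger's test is commonly implemented as a weighted regression of the standardized effect on its precision. The sketch below is illustrative only, with placeholder effect sizes and standard errors rather than the review's study-level data; the authors' exact implementation may differ:

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry,
# assuming effect sizes on a transformed (e.g. logit) scale with their
# standard errors. The arrays below are illustrative placeholders, not
# the study-level estimates from this review.
import numpy as np
import statsmodels.api as sm

effect = np.array([-1.8, -2.2, -1.1, -2.6, -1.5, -2.0])  # hypothetical logit prevalences
se = np.array([0.30, 0.25, 0.40, 0.20, 0.35, 0.28])       # hypothetical standard errors

# Egger's test: regress the standardized effect (effect / SE) on precision (1 / SE);
# an intercept that differs significantly from zero suggests small-study effects,
# one common signal of publication bias.
z = effect / se
precision = 1.0 / se
model = sm.OLS(z, sm.add_constant(precision)).fit()
intercept, p_value = model.params[0], model.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_value:.3f}")

# A funnel plot (effect vs. SE) is usually inspected alongside the test, e.g. with
# matplotlib: plt.scatter(effect, se); plt.gca().invert_yaxis()
```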
Discussion: We performed a meta-analysis to estimate the prevalence and case fatality rate of OBI and OCI in Africa. A total of 109 studies were reviewed, and 18579 and 2865 participants were assessed for OBI and OCI respectively. Only one study, conducted in Botswana, reported deaths (6/72) among HIV-positive subjects with OBI. The prevalence of OBI ranged from 0% to 100% and the prevalence of OCI ranged from 0% to 60%. Pooled prevalences of OCI (10.7%) and OBI (14.8%) were obtained from relevant articles published between 2010 and 2022 in Africa. Overall, we found a higher prevalence of OCI in patients with malignancies, patients with chronic hepatitis C, and hemodialysis patients. Patients with malignancies, patients with liver disorders, and HIV-positive patients were the population groups with the highest OBI prevalence. Our results show that, apart from Egypt, studies on OCI in Africa are very rare. Although Egypt has the highest prevalence of HCV in the world [139], many other African countries also have a substantial HCV prevalence, including Cameroon, Burundi and Morocco [140]. Additional studies on OCI are therefore needed in African countries other than Egypt. Studies have shown a risk of OCI reactivation in subjects with a sustained virological response to HCV [17]. With the arrival of new direct-acting HCV treatments [141], studies of OCI in patients with a sustained virological response are needed. Our systematic review found a combined OCI prevalence of 10.7% (14.4% for seropositive OCI and 10.7% for seronegative OCI). Related studies have reported a similar OCI prevalence of 10% in Middle Eastern and Eastern Mediterranean countries [14]. In a global review, the prevalence of seropositive OCI was estimated at 13.3% and that of seronegative OCI at 9.6% [17]. These related reviews suggest that the prevalence of OCI is high in subjects with chronic liver disease, people who inject drugs, HIV-positive subjects and patients on hemodialysis [14,17]. We found similar results in the present work, with a higher prevalence in patients with malignancies, patients with liver disorders, HIV-positive patients and patients on hemodialysis.
To date, no studies of OCI in people at high risk of HCV, such as people who inject drugs or men who have sex with men, were identified; such studies are needed in Africa. The prevalence of OBI is known to vary widely across population categories. Reported high-risk groups include HIV-positive patients, patients with chronic liver disease, patients on hemodialysis, patients with hematological disorders, patients with malignancies, organ recipients, healthcare workers, people who inject drugs, and men who have sex with men [12,19,23]. The geographical context is also an important source of variability in OBI prevalence. The main factors underlying this variability include the level of endemicity of classical hepatitis B (low, intermediate or high), the socio-demographic index, and hepatitis B vaccination coverage [15,23]. Diagnostic factors, such as the type of sample (blood, peripheral blood mononuclear cells or liver tissue) and the sensitivity and specificity of the diagnostic techniques for the targets (DNA, HBsAg, anti-HBc, anti-HBs), also have a significant influence on the reported prevalence of OBI [18,23]. Because of all these sources of variability, objective comparisons between estimates from studies with different methodological approaches are difficult. Prevalences similar to the one found in our review were reported in a national review in Sudan (15.5%) and in a second review conducted in Africa among HIV-positive patients (11.2%) [12,16]. The prevalence of HBV in HIV-positive patients in Africa is known to be high owing to the high endemicity of both diseases and their common routes of transmission [142]. The high prevalence of OBI found in HIV-positive subjects in this study therefore suggests the need for HBV DNA testing in these high-risk patients in Africa. Africa is a highly endemic area for classical HBV; to achieve the elimination of HBV recommended by the WHO, vaccination policies against HBV need to be strengthened, particularly for patients with malignancies and HIV-positive patients [143]. Compared with the OBI prevalence in this work, Xie et al. estimated a lower pooled prevalence of 4% from 70 studies conducted in Asia [19]. Surprisingly, the highest prevalences have been estimated for areas of the world with the lowest endemicity, notably 36% and 25% for Western Europe and North America, respectively [18]. Our study is mainly limited by the representativeness of the included studies. All OCI prevalence studies were conducted in Egypt, and OBI prevalence data were usually collected in particular regions, which may not represent national prevalence. Only 15/55 African countries were represented in this review; the data may therefore not be sufficient to represent the whole continent for OBI and OCI. In addition, substantial residual statistical heterogeneity in prevalence estimates was identified in the overall and subgroup meta-analyses for OBI and OCI. Despite these limitations, the main strength of our review is that we identified a very large number of studies covering multiple population categories, including symptomatic patients, apparently healthy individuals, and groups at high risk of HCV and HBV infection. We also took into account in our estimates the variability of prevalence according to anti-HCV and anti-HBc serological status. Conclusion: This review presents a summary of the prevalence of OBI and OCI in Africa.
It shows that, despite great variability in prevalence according to population category and geographic region, the prevalence of OBI and OCI in Africa is generally high, and that patients with malignancies, hemodialysis patients and HIV-positive patients are groups at high risk for these infections. This highlights the need for more appropriate OCI and OBI screening programs targeting those at high risk of infection. In addition, the lack of data in several African countries shows the need for researchers to investigate this field, particularly in countries with high endemicity for HBV/HCV. Supplementary Material: Supplementary data associated with this article can be found in the online version at 10.1016/j.jiph.2022.11.008.
Background: Occult hepatitis B (OBI) and C (OCI) infections lead to hepatic crises, including cases of liver cirrhosis and even hepatocellular carcinoma (HCC). OBI and OCI also pose a significant problem because of their transmissibility. This study aimed to assess the overall prevalence of OBI and OCI in the African continent, a region highly endemic for classical hepatitis B and C viruses. Methods: For this systematic review and meta-analysis, we searched PubMed, Web of Science, African Journal Online and African Index Medicus for published studies on the prevalence of OBI and OCI in Africa. Study selection and data extraction were performed by at least two independent investigators. Heterogeneity (I²) was assessed using the χ² test on the Cochran Q statistic and H parameters. Sources of heterogeneity were explored by subgroup analyses. This study was registered in PROSPERO, with reference number CRD42021252772. Results: We obtained 157 prevalence data points for this meta-analysis, from 134 studies of OBI prevalence, 23 studies of OCI prevalence, and a single study of the OBI case fatality rate. The overall estimate of OBI prevalence was 14.8% [95% CI = 12.2-17.7] among 18579 participants. The prevalence of seronegative OBI and seropositive OBI was 7.4% [95% CI = 3.8-11.8] and 20.0% [95% CI = 15.3-25.1], respectively. The overall estimate of OCI prevalence was 10.7% [95% CI = 6.6-15.4] among 2865 participants. The pooled prevalence of seronegative OCI was estimated at 10.7% [95% CI = 4.8-18.3] and that of seropositive OCI at 14.4% [95% CI = 5.2-22.1]. In subgroup analyses, patients with malignancies, patients with chronic hepatitis C, and hemodialysis patients had a higher OCI prevalence, while patients with malignancies, patients with liver disorders, and HIV-positive patients had the highest OBI prevalence. Conclusions: This review shows a high prevalence of OBI and OCI in Africa, with variable prevalence between countries and population groups.
Introduction: In the early 1980s, a new form of clinical hepatitis B virus (HBV) infection was described, corresponding to the presence of HBV DNA in the liver and/or serum of patients in whom hepatitis B surface antigen (HBsAg) is undetectable by the usual serological tests. This was called occult hepatitis B infection (OBI) [1–3]. Based on the HBV antibody profile, OBI can be divided into seropositive OBI (anti-HBc and/or anti-HBs positive) and seronegative OBI (anti-HBc and anti-HBs negative) [4]. A similar entity was described in 2005 by Castillo et al., who showed that hepatitis C virus (HCV) can replicate in the peripheral blood mononuclear cells (PBMCs) of patients in the absence of viral RNA in serum and of detectable anti-HCV antibodies, thus describing occult HCV infection [5,6]. Two types of OCI are recognized: seronegative OCI (anti-HCV antibody-negative and serum HCV RNA-negative) and seropositive OCI (anti-HCV antibody-positive and serum HCV RNA-negative) [7]. These occult forms of hepatitis B and C can lead to hepatic crises, including cases of chronic liver infection and liver cirrhosis [8]. OBI is also known to play a role in the development of hepatocellular carcinoma [9]. These entities raise multiple concerns, including the potential for transmission of this form of infection through blood transfusion, hemodialysis, or from mother to child [10]. OBI and OCI have been described in a variety of individuals, including hemodialysis patients, patients with a sustained virological response, immunocompromised individuals, patients with abnormal liver function, and apparently healthy subjects. Multiple meta-analyses on the prevalence of OBI and OCI have shown great variability in estimates according to population category, region, type of test used, and the level of endemicity of the disease [9,11–17]. Two global meta-analyses with partial analyses for Africa reported the prevalence of OBI at 3.7% [15], and of seronegative OCI at 9.6% and seropositive OCI at 13.3% [17]. The highest prevalence of OBI was in countries with high endemicity of classical hepatitis B, including Africa, with a prevalence of up to 35.6% in the general population in Uganda [15]. Similar to classical hepatitis C infection, which has its highest prevalence in Egypt, this meta-analysis showed that OCI had the highest prevalence in North Africa, represented only by Egypt [17]. Other meta-analyses have estimated the prevalence of OBI in Africa among people living with HIV at 11.2% [16] and in Sudan at 15.5% [12]. In a regional meta-analysis in the Middle East and Eastern Mediterranean countries, the prevalence of OCI was estimated at 10.0%, with the highest prevalence recorded in Egypt [14]. A high OBI prevalence of 34% was reported in Western Europe and Northern America [18], and a lower OBI prevalence of 4% was observed in Asia [19]. However, there are no systematic syntheses of OBI and OCI for Africa, which is a highly endemic region for classical HBV and HCV infections. In this meta-analysis, we compiled all available evidence on OBI and OCI in Africa, examining differences across populations and regions. The results of this review may help ensure that OBI and OCI are considered within the WHO goal of eliminating viral hepatitis by 2030. They also highlight the need to strengthen diagnostic capacity for OBI and OCI in Africa, where access to molecular testing is limited.
A clinician-nurse model to reduce early mortality and increase clinic retention among high-risk HIV-infected patients initiating combination antiretroviral treatment.
22340703
In resource-poor settings, mortality is at its highest during the first 3 months after combination antiretroviral treatment (cART) initiation. A clear predictor of mortality during this period is having a low CD4 count at the time of treatment initiation. The objective of this study was to evaluate the effect on survival and clinic retention of a nurse-based rapid assessment clinic for high-risk individuals initiating cART in a resource-constrained setting.
BACKGROUND
The USAID-AMPATH Partnership has enrolled more than 140,000 patients at 25 clinics throughout western Kenya. High Risk Express Care (HREC) provides weekly or bi-weekly rapid contacts with nurses for individuals initiating cART with CD4 counts of ≤100 cells/mm3. All HIV-infected individuals aged 14 years or older initiating cART with CD4 counts of ≤100 cells/mm3 were eligible for enrolment into HREC and for analysis. Adjusted hazard ratios (AHRs) control for potential confounding using propensity score methods.
METHODS
Between March 2007 and March 2009, 4,958 patients initiated cART with CD4 counts of ≤100 cells/mm3. After adjusting for age, sex, CD4 count, use of cotrimoxazole, treatment for tuberculosis, travel time to clinic and type of clinic, individuals in HREC had reduced mortality (AHR: 0.59; 95% confidence interval: 0.45-0.77), and reduced loss to follow up (AHR: 0.62; 95% CI: 0.55-0.70) compared with individuals in routine care. Overall, patients in HREC were much more likely to be alive and in care after a median of nearly 11 months of follow up (AHR: 0.62; 95% CI: 0.57-0.67).
RESULTS
Frequent monitoring by dedicated nurses in the early months of cART can significantly reduce mortality and loss to follow up among high-risk patients initiating treatment in resource-constrained settings.
CONCLUSIONS
[ "Adolescent", "Adult", "Ambulatory Care", "Ambulatory Care Facilities", "Anti-HIV Agents", "CD4 Lymphocyte Count", "Female", "Follow-Up Studies", "HIV Infections", "House Calls", "Humans", "Kenya", "Male", "Middle Aged", "Models, Nursing", "Nurse Clinicians", "Patient Compliance", "Prospective Studies", "Retrospective Studies", "Young Adult" ]
3297518
Background
Combination antiretroviral treatment (cART) has proven to be an effective therapeutic mechanism for suppressing viral replication and enabling reconstitution of the immune system, thus allowing patients to recover and live with HIV disease as a chronic illness [1-3]. If adherence to the medications is high, severe immune suppression is not present at cART initiation, and no significant co-morbidities, such as hepatitis C infection, exist, projections suggest that people living with HIV/AIDS have a greatly improved long-term prognosis [4]. Despite the proven effectiveness of cART in low-income countries [5-9], mortality rates among patients in these settings are higher than those seen in high-income environments [10]. In resource-poor settings, mortality is at its highest during the first 3 months after cART initiation [9-12]; in the first month of treatment it is at least four times higher than rates in high-income countries [10]. Why mortality is at its highest during this period has been the subject of much debate and speculation. These differences have been attributed to the non-use of cotrimoxazole prophylaxis [13,14], tuberculosis-associated immune reconstitution inflammatory syndrome (IRIS) [15-17], IRIS due to other opportunistic infections [18], and hepatotoxicity related to antiretroviral agents [19]. A consistently clear predictor of mortality during this period is a low CD4 count at the time of treatment initiation [10,20]. Recent estimates by the World Health Organization (WHO) indicate that although 6.7 million individuals in low- and middle-income settings are receiving cART, this represents only 47% coverage of individuals in clinical need [21]. The massive scale-up of HIV care and treatment programmes has required enormous investments, and there is still a substantial unmet need. Thus, the challenge presented to HIV care programmes operating in resource-poor settings is how to continue scaling up while simultaneously improving the outcomes of those enrolling in treatment programmes. As such, novel models of care, such as task shifting [22-24], which increase healthcare efficiency and improve patient outcomes, clearly need to be designed and tested. Here, we describe the impact of a nurse-clinician approach [25] on mortality and patient retention among severely immune-suppressed HIV-infected adults initiating cART within a large multi-centre HIV/AIDS care and treatment programme in western Kenya.
Methods
Study design: This was a retrospective analysis of prospectively collected routine clinical data. The study was approved by the Indiana University School of Medicine Institutional Review Board and the Moi University School of Medicine Institutional Review and Ethics Committee. The programme: The Academic Model Providing Access to Healthcare (AMPATH) was initiated in 2001 as a joint partnership between Moi University School of Medicine in Kenya, the Indiana University School of Medicine, and the Moi Teaching and Referral Hospital. The USAID-AMPATH Partnership was initiated in 2004 when AMPATH received ongoing funding through the United States Agency for International Development (USAID) and the United States Presidential Emergency Plan for AIDS Relief (PEPFAR). The initial goal of AMPATH was to establish an HIV care system to serve the needs of both urban and rural patients and to assess the barriers to and outcomes of antiretroviral therapy. Details of the development of this programme have been described elsewhere [26]. The first urban and rural HIV clinics were opened in November 2001. Since then, the programme has enrolled more than 140,000 HIV-infected adults and children in 25 Ministry of Health (MOH) facilities and numerous satellite clinics in western Kenya (data for satellite clinics are incorporated into their "parent" clinic). Although located within the MOH facilities, the AMPATH clinics are dedicated to HIV and HIV/TB care, treatment and support. All HIV- and tuberculosis-related care and treatment are provided free at the point of care. Clinical procedures: express care and routine care: The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, under the Routine Care protocol, patients receiving cART are seen by a clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities. During these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse, who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or a home visit conducted by trained peers. Standard first-line antiretroviral regimens are either nevirapine-based or efavirenz-based [28]. Beginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics.
The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are a CD4 count of 100 cells/mm3 or less and initiation of cART. Once a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits, either in person or by telephone, for a period of 3 months. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the "Express Care room", which provides "one-stop care". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per the routine protocol. The HREC visit for high-risk patients is focused on identifying co-morbidities and complications of cART and on reinforcing medication adherence. The nurse assesses adherence by asking whether the patient has missed any of his or her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or "any other problem that you feel you need a doctor for") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°C) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit, as these are prescribed during the monthly clinical officer or physician visits. A summary of the similarities and differences between HREC and Routine Care can be found in Table 1 (Summary of programme characteristics in High Risk Express Care (HREC) versus Routine Care).
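To make the nurse-led referral rule concrete, the sketch below encodes the checklist and thresholds described above as a simple decision function. The function and data structure are hypothetical illustrations, not the programme's actual tooling:

```python
# Illustrative sketch (not the programme's actual software) of the HREC
# nurse-visit referral rule described above: any reported symptom, imperfect
# adherence, temperature >= 37.2 C, or oxygen saturation <= 93% triggers
# immediate referral to the clinical officer or physician.
from dataclasses import dataclass, field
from typing import List

SYMPTOM_CHECKLIST = [
    "new cough", "breathlessness", "rash", "jaundiced eyes", "vomiting",
    "diarrhoea", "severe headache", "fever", "other problem needing a doctor",
]

@dataclass
class HrecVisit:                      # hypothetical record for one nurse visit
    reported_symptoms: List[str] = field(default_factory=list)
    missed_doses_last_7_days: int = 0
    temperature_c: float = 36.8
    oxygen_saturation_pct: int = 98

def needs_clinician_referral(visit: HrecVisit) -> bool:
    if visit.reported_symptoms:                 # any checklist symptom reported
        return True
    if visit.missed_doses_last_7_days > 0:      # adherence not perfect
        return True
    if visit.temperature_c >= 37.2:             # temperature threshold from the text
        return True
    if visit.oxygen_saturation_pct <= 93:       # SpO2 threshold from the text
        return True
    return False

print(needs_clinician_referral(HrecVisit(temperature_c=37.6)))  # True
print(needs_clinician_referral(HrecVisit()))                    # False
```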
Data collection: Clinicians complete standardized forms capturing demographic, clinical and pharmacologic information at each patient visit. These data are then hand-entered into the AMPATH Medical Record System, a secure computerized database designed for clinical management, with data entry validated by random review of 10% of the forms entered [29]. At the time of registration, patients are provided with a unique identifying number. For this study, all data were stripped of identifying information prior to analysis. Study population: The analysis included all patients aged 14 years or older who initiated cART with CD4 counts of 100 cells/mm3 or less in one of the USAID-AMPATH clinics between March 2007 and March 2009.
Outcomes, explanatory variables and confounders: The primary goal of the HREC system is to prevent mortality during the first 3 months following cART initiation. Therefore, the primary outcome for this analysis was all-cause mortality. Two secondary outcomes were analyzed: loss to follow up (LTFU), defined as being absent from the clinic for at least 3 months with no information regarding vital status, and a composite outcome defined as LTFU or death. The rationale for the composite outcome is that loss to follow up in such a high-risk population is likely to mean that a patient has died, even if the death has not yet been reported to the clinic [10,30-34]. Analysis of the composite outcome can be viewed as a sensitivity analysis for the mortality rate, as it provides an upper bound of the mortality estimate. The primary explanatory variable is enrolment in the HREC programme (versus remaining in Routine Care). Our analyses quantify the effect of HREC on mortality, loss to follow up, and the composite outcome of death or loss to follow up using crude and adjusted hazard rate (HR) ratios. Our adjusted HR ratios control for the following potential confounding variables, all measured at the time of cART initiation and selected a priori based on their potential to independently affect the risk of mortality and/or loss to follow up: CD4 count (analyzed as a continuous variable); receipt of treatment for tuberculosis at the time of cART initiation (yes/no); receipt of cotrimoxazole or dapsone prophylaxis within 28 days of cART initiation (yes/no) [13,17]; travel time to clinic (dichotomized as up to one hour vs. more than one hour); type of clinic (referral hospital, district or sub-district hospital, or rural health centre); age; and sex (male/female). CD4 count and age are centred at their mean values. WHO clinical stage was not included in the final model because: a) it was not significantly associated with the outcomes of interest in bivariable analyses; and b) there were missing data in one of the two groups.
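As an illustration of how these outcome definitions could be operationalized, the following pandas sketch derives death, LTFU (dated 90 days after the last visit) and the composite event time from visit-level dates. Column names, the database closure date and the example rows are hypothetical:

```python
# Minimal sketch (with hypothetical column names and example rows) of deriving
# the outcomes described above: death, loss to follow up (no visit for > 90 days,
# dated 90 days after the last visit), and the composite of death or LTFU.
import pandas as pd

DB_CLOSE = pd.Timestamp("2010-01-31")   # assumed database closure date

patients = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "cart_start": pd.to_datetime(["2008-01-10", "2008-03-05", "2008-06-20"]),
    "last_visit": pd.to_datetime(["2009-11-02", "2008-05-14", "2009-12-30"]),
    "death_date": pd.to_datetime([None, "2008-06-01", None]),
})

def derive_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    died = out["death_date"].notna()
    # LTFU: no recorded death and no clinic visit within 90 days of database closure
    ltfu = ~died & ((DB_CLOSE - out["last_visit"]).dt.days > 90)
    out["ltfu"] = ltfu
    out["ltfu_date"] = out["last_visit"] + pd.Timedelta(days=90)
    # Composite event time: death date if dead, LTFU date if lost, else censored at last visit
    out["event"] = died | ltfu
    out["event_date"] = out["death_date"].where(
        died, out["ltfu_date"].where(ltfu, out["last_visit"]))
    # Follow-up time in days; zero-day follow-up is set to 1 day, as in the text
    out["followup_days"] = (out["event_date"] - out["cart_start"]).dt.days.clip(lower=1)
    return out

print(derive_outcomes(patients)[["patient_id", "event", "followup_days"]])
```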
Analysis: We included all eligible patients and categorized them as having been enrolled into HREC at initiation of cART or having remained in Routine Care. The distributions of the time-to-event outcomes are summarized using Kaplan-Meier curves. Event times and censoring times are defined as follows: for the analysis of mortality, the event time is the date of death; others are censored at the time of their last clinic visit. For the analysis of loss to follow up, patients were defined as lost if they had not returned to clinic for more than three consecutive months; for these patients, the event time is the 90th day following the most recent visit to the clinic. Individuals whose reported follow-up time was zero days had 1 day added for the purpose of analysis. Those who died were censored on their death date, and patients whose most recent clinic visit was less than 90 days prior to the close of the database were censored at the date of their last clinic visit. For the loss to follow up analysis, the earliest possible event time is 90 days after the initiation of cART; hence our tests and regression models use a start time of day 90. Finally, for the composite outcome, the event time is the earliest of the death date or the loss to follow up date, where the LTFU date is defined as explained above. The adjusted HR ratios control for potential confounding variables using inverse weighting by the treatment propensity score [35]. For each individual, the propensity score is the probability of receiving HREC as a function of individual-level characteristics. The propensity score, denoted by p(x), is estimated by fitting a logistic regression model of HREC status (yes/no) on the potential confounding variables x. The propensity score model is checked for lack of fit using the Hosmer-Lemeshow goodness-of-fit test [36]. The adjusted HR ratios are calculated by fitting a weighted, stratified proportional hazards regression of the event time on HREC status. The weights are proportional to 1/p(x) for those who receive HREC, and to 1/{1 - p(x)} for those who do not. Following Hernan et al. [27], stabilized weights were used in the estimation (full details are available upon request). The stratification variable is clinic type. Robust standard errors are used to account for clustering by clinic and for the correlation induced by the use of inverse probability weights.
Because the propensity scores represent the probability of receiving HREC as a function of individual-level covariates (listed above), the weight for each individual is inversely proportional to the (estimated) probability of his or her actual HREC status. The weighted sample can therefore be viewed as one in which differential selection into HREC attributable to x has been eliminated; hence, to the extent that x contains all relevant confounders, the adjusted HR is equivalent to the exposure effect from a marginal structural proportional hazards model and can be interpreted as the causal effect of HREC on the event of interest [37,38]. All analyses were done using STATA Version SE/10 (College Station, Texas, USA).
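The analyses were carried out in Stata; purely as an illustration of the approach described above (propensity scores from logistic regression, stabilized inverse-probability weights, and a weighted proportional hazards model with robust standard errors), here is a rough Python sketch on simulated data. Variable names and the data are hypothetical, stratification by clinic type is omitted for brevity, and this is not the authors' code:

```python
# Rough Python sketch (the authors used Stata) of stabilized inverse-probability-
# of-treatment weights followed by a weighted Cox model, as described above.
# Column names and the simulated data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "hrec": rng.integers(0, 2, n),                  # 1 = enrolled in HREC
    "cd4": rng.integers(1, 101, n),                 # baseline CD4 (cells/mm3)
    "age": rng.normal(36, 9, n),
    "male": rng.integers(0, 2, n),
    "time": rng.exponential(300, n).clip(1),        # follow-up days
    "event": rng.integers(0, 2, n),                 # 1 = death
})

covars = ["cd4", "age", "male"]

# Propensity score: P(HREC = 1 | covariates), estimated by logistic regression
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["hrec"])
ps = ps_model.predict_proba(df[covars])[:, 1]

# Stabilized weights: marginal treatment probability in the numerator
p_treat = df["hrec"].mean()
df["sw"] = np.where(df["hrec"] == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

# Weighted Cox regression of the event time on HREC status, with robust variance
cph = CoxPHFitter()
cph.fit(df[["time", "event", "hrec", "sw"]], duration_col="time",
        event_col="event", weights_col="sw", robust=True)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```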
The weighted sample can therefore be viewed as one where differential selection into HREC attributable to x has been eliminated; hence, to the extent that x contains all relevant confounders, the adjusted HR is equivalent to the exposure effect from a marginal structural proportional hazards model and can be interpreted as the causal effect of HREC on the event of interest [37,38]. All analyses were done using STATA Version SE/10 (College Station, Texas, USA).
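The weighting and modelling steps described above can be sketched as follows: a logistic propensity model, stabilized inverse-probability weights, and a weighted proportional hazards fit stratified by clinic type with robust standard errors clustered by clinic. This is a sketch of the general approach under assumed column names (hrec, pscore, ipw, clinic_type, clinic_id, death_time, death_event), with a simple decile implementation of the Hosmer-Lemeshow check; it is not the study's Stata code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from lifelines import CoxPHFitter

def stabilized_ipw(df, confounders):
    """Fit a logistic propensity model for HREC and attach stabilized inverse-probability weights."""
    df = df.copy()
    X = sm.add_constant(df[confounders].astype(float))
    pscore = sm.Logit(df["hrec"], X).fit(disp=False).predict(X)
    p_treated = df["hrec"].mean()  # marginal treatment probability used to stabilize the weights
    df["pscore"] = pscore
    df["ipw"] = np.where(df["hrec"] == 1,
                         p_treated / pscore,
                         (1 - p_treated) / (1 - pscore))
    return df

def hosmer_lemeshow(y, p, groups=10):
    """Decile-based Hosmer-Lemeshow goodness-of-fit p-value for the propensity model."""
    d = pd.DataFrame({"y": np.asarray(y, dtype=float), "p": np.asarray(p, dtype=float)})
    d["decile"] = pd.qcut(d["p"], groups, duplicates="drop")
    g = d.groupby("decile", observed=True)
    obs, exp, n = g["y"].sum(), g["p"].sum(), g.size()
    chi2 = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    return float(1 - stats.chi2.cdf(chi2, len(n) - 2))

def weighted_cox(df):
    """Weighted proportional hazards model of time to death on HREC status,
    stratified by clinic type, with robust standard errors clustered by clinic."""
    cph = CoxPHFitter()
    cph.fit(
        df[["death_time", "death_event", "hrec", "ipw", "clinic_type", "clinic_id"]],
        duration_col="death_time",
        event_col="death_event",
        weights_col="ipw",
        strata=["clinic_type"],
        cluster_col="clinic_id",
        robust=True,
    )
    return cph

# Example usage (hypothetical confounder names):
# df = stabilized_ipw(df, ["cd4", "age", "male", "tb_treatment", "ctx_or_dapsone", "travel_over_1h"])
# print("Hosmer-Lemeshow p-value:", hosmer_lemeshow(df["hrec"], df["pscore"]))
# weighted_cox(df).print_summary()
```

The same weighted fit would be repeated with the LTFU and composite event times to obtain the corresponding adjusted hazard ratios.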
Results
There were 4,958 patients aged 14 years or older with CD4 counts of ≤100 cells/mm3 who initiated cART at one of the USAID-AMPATH clinics during the study period. Of these, 635 were enrolled into HREC. Reasons patients were not enrolled into HREC included the programme not yet having been rolled out to their clinic, living too far away to attend clinic weekly or bi-weekly, and a lack of space in the clinic for expansion of the HREC programme.
As summarized in Table 2, patients in Routine Care and HREC were similar with regard to gender distribution and age: 40% male with a median age of approximately 36 years. There were no significant differences between the groups with regard to baseline CD4 count or the proportion receiving treatment for tuberculosis. Patients in HREC were slightly less likely to be WHO Stage III/IV at cART initiation (66% vs. 69%) and to have to travel at least 1 h to clinic (71% vs. 77%). They were more likely to be attending an urban clinic (61% vs. 52%) and to be using cotrimoxazole or dapsone prophylaxis at cART initiation (100% vs. 95%). The median follow-up time was 318 days (interquartile range 147-533).
Table 2: Socio-demographic and clinical characteristics of patients in High Risk Express Care (HREC) versus Routine Care. N.B. The only variable for which there were missing data was WHO Stage, as described in the table; this variable was not included in the propensity score model as it was not a significant predictor of the outcomes of interest.
There were 426 deaths among the study population: 39 in HREC and 387 in Routine Care. The crude incidence rate of death during the follow-up period was 5.7 per 100 person-years in HREC compared with 10.6 per 100 person-years in Routine Care (incidence rate ratio, IRR: 0.54; 95% confidence interval, CI: 0.38-0.75) (Figure 1a). After adjustment for potential confounders, the HREC programme was associated with an approximately 40% reduced risk of death (adjusted HR: 0.59; 95% CI: 0.45-0.77).
Figure 1: Kaplan-Meier curves demonstrating the effect of High Risk Express Care compared with Routine Care among all high-risk patients (n = 4,958) on: a) the probability of remaining alive after cART initiation; b) the probability of remaining in care after cART initiation (i.e., loss to follow up); and c) the probability of remaining alive and in care after cART initiation.
There were also 1,299 patients lost to follow up during the same period, including 134 in HREC and 1,165 in Routine Care. The crude incidence rate of LTFU in HREC was 18.7 per 100 person-years versus 29.7 in Routine Care (IRR: 0.63; 95% CI: 0.52-0.76) (Figure 1b). After adjustment, patients in HREC were also much less likely to become lost to follow up (AHR 0.62; 95% CI: 0.55-0.70).
When we assessed the combined endpoint of LTFU and death, there were 1,725 events over 4,639.5 person-years of follow-up; the crude incidence rate was 24.2 per 100 person-years in HREC versus 39.5 in Routine Care (IRR: 0.61; 95% CI: 0.52-0.72) (Figure 1c). Overall, patients in HREC were much more likely to be alive and in care after starting cART by the end of the study follow-up period (AHR 0.62; 95% CI: 0.57-0.67) (Table 3).
Table 3: Adjusted Hazard Ratios (HR)* and 95% Confidence Intervals (CI) of the effect of High Risk Express Care (HREC) vs.
Routine Care on: a) Death, b) Loss to Follow-up (LTFU), and c) Death or LTFU (combined endpoint) following cART initiation.
*Adjusted using propensity scores for clinic, use of cotrimoxazole or dapsone, CD4 count closest to cART initiation, receipt of treatment for tuberculosis, gender, age, clinic type (Referral Hospital, Sub-District and District Hospitals, and Rural Health Centres), and travel time to clinic.
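The crude incidence rate ratios reported above can be reproduced from event counts and person-time alone. The sketch below uses the reported death counts and back-calculates approximate per-group person-years from the published rates, since per-group person-years are not reported directly; those person-year figures are illustrative approximations, and the Wald interval on the log rate ratio is the standard large-sample approach rather than the study's exact calculation.

```python
import math

def irr_with_ci(events_a, py_a, events_b, py_b):
    """Incidence rate ratio (group A vs. group B) with a Wald 95% CI on the log scale."""
    rate_a, rate_b = events_a / py_a, events_b / py_b
    irr = rate_a / rate_b
    se_log = math.sqrt(1 / events_a + 1 / events_b)   # SE of log(IRR) for Poisson counts
    z = 1.96                                          # ~97.5th percentile of the standard normal
    lo, hi = math.exp(math.log(irr) - z * se_log), math.exp(math.log(irr) + z * se_log)
    return rate_a, rate_b, irr, (lo, hi)

# Mortality example: 39 deaths in HREC, 387 in Routine Care. Person-years are
# approximated from the reported rates (5.7 and 10.6 per 100 person-years).
py_hrec, py_routine = 39 / 0.057, 387 / 0.106         # roughly 684 and 3,651 person-years
print(irr_with_ci(39, py_hrec, 387, py_routine))       # IRR ~0.54, consistent with the reported value
```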
Conclusions
In conclusion, we have demonstrated that weekly rapid assessments by nurses, either by phone or in person, with immediate referrals to clinical officers or physicians if needed, can significantly improve survival among high-risk HIV-infected patients initiating cART in a sub-Saharan African setting. Although the cost effectiveness of the Express Care model needs to be thoroughly evaluated, our experience and findings suggest that this may be an innovative way of increasing patient volume, improving the quality of care, and greatly improving patient outcomes in the short term.
[ "Background", "Study design", "The programme", "Clinical procedures: express care and routine care", "Data collection", "Study population", "Outcomes, explanatory variables and confounders", "Analysis", "Competing interests", "Authors' contributions" ]
[ "Combination antiretroviral treatment (cART) has proven itself to be an effective therapeutic mechanism for suppressing viral replication and enabling reconstitution of the immune system, thus allowing patients to recover and live with HIV disease as a chronic illness [1-3]. If adherence to the medications is high, severe immune-suppression is not present at cART initiation, and no significant co-morbidities, such as hepatitis C infection, exist, projections suggest that people living with HIV/AIDS have greatly improved long-term prognosis [4]. Despite the proven effectiveness of cART in low-income countries [5-9], mortality rates among patients in these settings are higher than those seen in high-income environments [10].\nIn resource-poor settings, mortality is at its highest during the first 3 months after cART initiation [9-12]. It is at least four times higher than rates in high-income countries in the first month of treatment [10]. Why mortality is at its highest during this period has been the subject of much debate and speculation. Reasons for these differences have been attributed to the non-use of cotrimoxazole prophylaxis [13,14], tuberculosis-associated immune reconstitution inflammatory syndrome (IRIS) [15-17], IRIS due to other opportunistic infections [18], and hepatotoxicity related to antiretroviral agents [19]. A consistently clear predictor of mortality during this period is having a low CD4 count at the time of treatment initiation [10,20].\nRecent estimates by the World Health Organization (WHO) indicate that although 6.7 million individuals in low- and middle-income settings are receiving cART, this represents only 47% coverage of individuals who are in clinical need [21]. The massive scale up of HIV care and treatment programmes has required enormous investments, and still there is a substantial unmet need. Thus, the challenge presented to HIV care programmes operating in resource-poor settings is how to continue scaling up while simultaneously improving the outcomes of those enrolling in treatment programmes. As such, novel models of care, such as task shifting [22-24], which increase healthcare efficiency and improve patient outcomes, clearly need to be designed and tested.\nHere, we describe the impact of a nurse-clinician approach [25] on mortality and patient retention among severely immune-suppressed HIV-infected adults initiating cART within a large multi-centre HIV/AIDS care and treatment programme in western Kenya.", "This was a retrospective analysis of prospectively collected routine clinical data. The study was approved by the Indiana University School of Medicine Institutional Review Board and the Moi University School of Medicine Institutional Review and Ethics Committee.", "The Academic Model Providing Access to Healthcare (AMPATH) was initiated in 2001 as a joint partnership between Moi University School of Medicine in Kenya, the Indiana University School of Medicine, and the Moi Teaching and Referral Hospital. The USAID-AMPATH Partnership was initiated in 2004 when AMPATH received ongoing funding through the United States Agency for International Development (USAID) and the United States Presidential Emergency Plan for AIDS Relief (PEPFAR). The initial goal of AMPATH was to establish an HIV care system to serve the needs of both urban and rural patients and to assess the barriers to and outcomes of antiretroviral therapy. 
Details of the development of this programme have been described in detail elsewhere [26].\nThe first urban and rural HIV clinics were opened in November 2001. Since then, the programme has enrolled more than 140,000 HIV-infected adults and children in 25 Ministry of Health (MOH) facilities and numerous satellite clinics in western Kenya (data for satellite clinics are incorporated into their \"parent\" clinic). Although located within the MOH facilities, the AMPATH clinics are dedicated to HIV and HIV/TB care, treatment and support. All HIV- and tuberculosis-related care and treatment are provided free at the point of care.\n Clinical procedures: express care and routine care The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". 
The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care\nThe HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). 
The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care", "The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. 
The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care", "Clinicians complete standardized forms capturing demographic, clinical and pharmacologic information at each patient visit. These data are then hand-entered into the AMPATH Medical Record System, a secure computerized database designed for clinical management, with data entry validated by random review of 10% of the forms entered [29]. At the time of registration, patients are provided with a unique identifying number. For this study, all data were stripped of identifying information prior to analysis.", "The analysis included all patients aged 14 years or older who were initiating cART with CD4 counts less than or equal to 100 cells/mm3 in one of the USAID-AMPATH clinics from March 2007 until March 2009.", "The primary goal of the HREC system is to prevent mortality during the first 3 months following cART initiation. Therefore, the primary outcome for this analysis was all-cause mortality. 
Two secondary outcomes were analyzed: loss to follow up (LTFU), defined as being absent from the clinic for at least 3 months with no information regarding vital status, and a composite outcome defined as LTFU or death. The rationale for the composite outcome is that loss to follow up in such a high-risk population is likely to mean that a patient has died, even if the death has not yet been reported to the clinic [10,30-34]. Analysis of the composite outcome can be viewed as a sensitivity analysis for the mortality rate as it provides an upper bound of the mortality estimate.\nThe primary explanatory variable is being in the HREC programme (versus remaining in Routine Care). Our analyses quantify the effect of HREC on mortality, loss to follow up, and the composite outcome of death or loss to follow up using crude and adjusted hazard rate (HR) ratios.\nOur adjusted HR ratios control for the following potential confounding variables, all measured at time of cART initiation and selected a priori based on their potential to independently affect changes in risk of mortality and/or loss to follow up: CD4 count (analyzed as a continuous variable); receipt of treatment for tuberculosis at the time of cART initiation (yes/no); receipt of cotrimoxazole or dapsone prophylaxis within 28 days of cART initiation (yes/no) [13,17]; travel time to clinic (dichotomized as up to one hour vs. more than one hour); type of clinic (referral hospital, district or sub-district hospital, or rural health centre); age; and sex (male/female). CD4 and age are centred at their mean values. WHO clinical stage was not included in the final model because: a) it was non-significantly associated with the outcomes of interest in bivariable analyses; and b) there was missing data in one of the two groups.", "We included all eligible patients and categorized them as having been enrolled into HREC at initiation of cART or having remained in Routine Care. The distributions of the time to event outcomes are summarized using Kaplan-Meier curves. Event times and censoring times are defined as follows: for analysis of mortality, the event time is the date of death; others are censored at the time of their last clinic visit. For the analysis of loss to follow up, patients were defined as lost if they had not returned to clinic for more than three consecutive months; for these patients, the event time is the 90th day following the most recent visit to the clinic.\nIndividuals whose reported follow-up time is zero days had 1 day added for the purpose of analysis. Those who died were censored on their death date, and patients whose most recent clinic was less than 90 days prior to the close of the database were censored at the date of their last clinic visit. For the loss to follow up analysis, the earliest possible event time is 90 days after the initiation of cART; hence our tests and regression models use a start time of day 90. Finally, for the composite outcome, the event time is the earliest of death date or loss to follow up date, where the LTFU date is defined as we have explained.\nThe adjusted HR ratios control for potential confounding variables using inverse weighting by the treatment propensity score method [35]. For each individual, the propensity score is the probability of receiving HREC as a function of individual level characteristics. The propensity score, denoted by p(x), is estimated by fitting a logistic regression model of HREC status (yes/no) on potential confounding variables x. 
The propensity score model is checked for lack of fit using the Hosmer-Lemeshow goodness-of-fit test [36]. The adjusted HR ratios are calculated by fitting a weighted, stratified proportional hazards regression of the event time on HREC status. The weights are proportional to 1/p(x) for those who receive HREC, and to 1/{1 - p(x)} for those who do not. Following Hernan et al. [27], stabilized weights were used in the estimation (full details are available upon request). The stratification variable is clinic type. Robust standard errors are used to account for clustering by clinic and for correlation induced by the use of inverse probability weights.\nBecause the propensity scores represent the probability of receiving HREC as a function of individual-level covariates (listed here), the weight for each individual is inversely proportional to the (estimated) probability of his or her actual HREC status. The weighted sample can therefore be viewed as one where differential selection into HREC attributable to x has been eliminated; hence, to the extent that x contains all relevant confounders, the adjusted HR is equivalent to the exposure effect from a marginal structural proportional hazards model and can be interpreted as the causal effect of HREC on the event of interest [37,38].\nAll analyses were done using STATA Version SE/10 (College Station, Texas, USA).", "The authors declare that they have no competing interests.", "PB was primarily responsible for the writing of the manuscript. AS, RK, JS, JM, and SK are practicing physicians in Kenya who developed the Express Care programme and contributed significantly to the generation of hypotheses and interpretation of results for the manuscript. JH is the biostatistician on record for this analysis and provided significant input and technical assistance in the methodological approaches used. AK was the analyst for the study. KWK critically reviewed the manuscript for issues of design and interpretation. ES was the data manager responsible for providing and overseeing the data. All authors have read and approved the final version of this manuscript." ]
[ "Background", "Methods", "Study design", "The programme", "Clinical procedures: express care and routine care", "Data collection", "Study population", "Outcomes, explanatory variables and confounders", "Analysis", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Combination antiretroviral treatment (cART) has proven itself to be an effective therapeutic mechanism for suppressing viral replication and enabling reconstitution of the immune system, thus allowing patients to recover and live with HIV disease as a chronic illness [1-3]. If adherence to the medications is high, severe immune-suppression is not present at cART initiation, and no significant co-morbidities, such as hepatitis C infection, exist, projections suggest that people living with HIV/AIDS have greatly improved long-term prognosis [4]. Despite the proven effectiveness of cART in low-income countries [5-9], mortality rates among patients in these settings are higher than those seen in high-income environments [10].\nIn resource-poor settings, mortality is at its highest during the first 3 months after cART initiation [9-12]. It is at least four times higher than rates in high-income countries in the first month of treatment [10]. Why mortality is at its highest during this period has been the subject of much debate and speculation. Reasons for these differences have been attributed to the non-use of cotrimoxazole prophylaxis [13,14], tuberculosis-associated immune reconstitution inflammatory syndrome (IRIS) [15-17], IRIS due to other opportunistic infections [18], and hepatotoxicity related to antiretroviral agents [19]. A consistently clear predictor of mortality during this period is having a low CD4 count at the time of treatment initiation [10,20].\nRecent estimates by the World Health Organization (WHO) indicate that although 6.7 million individuals in low- and middle-income settings are receiving cART, this represents only 47% coverage of individuals who are in clinical need [21]. The massive scale up of HIV care and treatment programmes has required enormous investments, and still there is a substantial unmet need. Thus, the challenge presented to HIV care programmes operating in resource-poor settings is how to continue scaling up while simultaneously improving the outcomes of those enrolling in treatment programmes. As such, novel models of care, such as task shifting [22-24], which increase healthcare efficiency and improve patient outcomes, clearly need to be designed and tested.\nHere, we describe the impact of a nurse-clinician approach [25] on mortality and patient retention among severely immune-suppressed HIV-infected adults initiating cART within a large multi-centre HIV/AIDS care and treatment programme in western Kenya.", " Study design This was a retrospective analysis of prospectively collected routine clinical data. The study was approved by the Indiana University School of Medicine Institutional Review Board and the Moi University School of Medicine Institutional Review and Ethics Committee.\nThis was a retrospective analysis of prospectively collected routine clinical data. The study was approved by the Indiana University School of Medicine Institutional Review Board and the Moi University School of Medicine Institutional Review and Ethics Committee.\n The programme The Academic Model Providing Access to Healthcare (AMPATH) was initiated in 2001 as a joint partnership between Moi University School of Medicine in Kenya, the Indiana University School of Medicine, and the Moi Teaching and Referral Hospital. The USAID-AMPATH Partnership was initiated in 2004 when AMPATH received ongoing funding through the United States Agency for International Development (USAID) and the United States Presidential Emergency Plan for AIDS Relief (PEPFAR). 
The initial goal of AMPATH was to establish an HIV care system to serve the needs of both urban and rural patients and to assess the barriers to and outcomes of antiretroviral therapy. Details of the development of this programme have been described in detail elsewhere [26].\nThe first urban and rural HIV clinics were opened in November 2001. Since then, the programme has enrolled more than 140,000 HIV-infected adults and children in 25 Ministry of Health (MOH) facilities and numerous satellite clinics in western Kenya (data for satellite clinics are incorporated into their \"parent\" clinic). Although located within the MOH facilities, the AMPATH clinics are dedicated to HIV and HIV/TB care, treatment and support. All HIV- and tuberculosis-related care and treatment are provided free at the point of care.\n Clinical procedures: express care and routine care The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. 
Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care\nThe HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. 
The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care\nThe Academic Model Providing Access to Healthcare (AMPATH) was initiated in 2001 as a joint partnership between Moi University School of Medicine in Kenya, the Indiana University School of Medicine, and the Moi Teaching and Referral Hospital. The USAID-AMPATH Partnership was initiated in 2004 when AMPATH received ongoing funding through the United States Agency for International Development (USAID) and the United States Presidential Emergency Plan for AIDS Relief (PEPFAR). The initial goal of AMPATH was to establish an HIV care system to serve the needs of both urban and rural patients and to assess the barriers to and outcomes of antiretroviral therapy. Details of the development of this programme have been described in detail elsewhere [26].\nThe first urban and rural HIV clinics were opened in November 2001. Since then, the programme has enrolled more than 140,000 HIV-infected adults and children in 25 Ministry of Health (MOH) facilities and numerous satellite clinics in western Kenya (data for satellite clinics are incorporated into their \"parent\" clinic). Although located within the MOH facilities, the AMPATH clinics are dedicated to HIV and HIV/TB care, treatment and support. 
All HIV- and tuberculosis-related care and treatment are provided free at the point of care.\n Clinical procedures: express care and routine care The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. 
The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care\nThe HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". 
The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care\n Data collection Clinicians complete standardized forms capturing demographic, clinical and pharmacologic information at each patient visit. These data are then hand-entered into the AMPATH Medical Record System, a secure computerized database designed for clinical management, with data entry validated by random review of 10% of the forms entered [29]. At the time of registration, patients are provided with a unique identifying number. For this study, all data were stripped of identifying information prior to analysis.\nClinicians complete standardized forms capturing demographic, clinical and pharmacologic information at each patient visit. These data are then hand-entered into the AMPATH Medical Record System, a secure computerized database designed for clinical management, with data entry validated by random review of 10% of the forms entered [29]. At the time of registration, patients are provided with a unique identifying number. For this study, all data were stripped of identifying information prior to analysis.\n Study population The analysis included all patients aged 14 years or older who were initiating cART with CD4 counts less than or equal to 100 cells/mm3 in one of the USAID-AMPATH clinics from March 2007 until March 2009.\nThe analysis included all patients aged 14 years or older who were initiating cART with CD4 counts less than or equal to 100 cells/mm3 in one of the USAID-AMPATH clinics from March 2007 until March 2009.\n Outcomes, explanatory variables and confounders The primary goal of the HREC system is to prevent mortality during the first 3 months following cART initiation. Therefore, the primary outcome for this analysis was all-cause mortality. Two secondary outcomes were analyzed: loss to follow up (LTFU), defined as being absent from the clinic for at least 3 months with no information regarding vital status, and a composite outcome defined as LTFU or death. The rationale for the composite outcome is that loss to follow up in such a high-risk population is likely to mean that a patient has died, even if the death has not yet been reported to the clinic [10,30-34]. 
Analysis of the composite outcome can be viewed as a sensitivity analysis for the mortality rate as it provides an upper bound of the mortality estimate.\nThe primary explanatory variable is being in the HREC programme (versus remaining in Routine Care). Our analyses quantify the effect of HREC on mortality, loss to follow up, and the composite outcome of death or loss to follow up using crude and adjusted hazard rate (HR) ratios.\nOur adjusted HR ratios control for the following potential confounding variables, all measured at time of cART initiation and selected a priori based on their potential to independently affect changes in risk of mortality and/or loss to follow up: CD4 count (analyzed as a continuous variable); receipt of treatment for tuberculosis at the time of cART initiation (yes/no); receipt of cotrimoxazole or dapsone prophylaxis within 28 days of cART initiation (yes/no) [13,17]; travel time to clinic (dichotomized as up to one hour vs. more than one hour); type of clinic (referral hospital, district or sub-district hospital, or rural health centre); age; and sex (male/female). CD4 and age are centred at their mean values. WHO clinical stage was not included in the final model because: a) it was non-significantly associated with the outcomes of interest in bivariable analyses; and b) there was missing data in one of the two groups.\nThe primary goal of the HREC system is to prevent mortality during the first 3 months following cART initiation. Therefore, the primary outcome for this analysis was all-cause mortality. Two secondary outcomes were analyzed: loss to follow up (LTFU), defined as being absent from the clinic for at least 3 months with no information regarding vital status, and a composite outcome defined as LTFU or death. The rationale for the composite outcome is that loss to follow up in such a high-risk population is likely to mean that a patient has died, even if the death has not yet been reported to the clinic [10,30-34]. Analysis of the composite outcome can be viewed as a sensitivity analysis for the mortality rate as it provides an upper bound of the mortality estimate.\nThe primary explanatory variable is being in the HREC programme (versus remaining in Routine Care). Our analyses quantify the effect of HREC on mortality, loss to follow up, and the composite outcome of death or loss to follow up using crude and adjusted hazard rate (HR) ratios.\nOur adjusted HR ratios control for the following potential confounding variables, all measured at time of cART initiation and selected a priori based on their potential to independently affect changes in risk of mortality and/or loss to follow up: CD4 count (analyzed as a continuous variable); receipt of treatment for tuberculosis at the time of cART initiation (yes/no); receipt of cotrimoxazole or dapsone prophylaxis within 28 days of cART initiation (yes/no) [13,17]; travel time to clinic (dichotomized as up to one hour vs. more than one hour); type of clinic (referral hospital, district or sub-district hospital, or rural health centre); age; and sex (male/female). CD4 and age are centred at their mean values. WHO clinical stage was not included in the final model because: a) it was non-significantly associated with the outcomes of interest in bivariable analyses; and b) there was missing data in one of the two groups.\n Analysis We included all eligible patients and categorized them as having been enrolled into HREC at initiation of cART or having remained in Routine Care. 
The distributions of the time to event outcomes are summarized using Kaplan-Meier curves. Event times and censoring times are defined as follows: for analysis of mortality, the event time is the date of death; others are censored at the time of their last clinic visit. For the analysis of loss to follow up, patients were defined as lost if they had not returned to clinic for more than three consecutive months; for these patients, the event time is the 90th day following the most recent visit to the clinic.\nIndividuals whose reported follow-up time is zero days had 1 day added for the purpose of analysis. Those who died were censored on their death date, and patients whose most recent clinic was less than 90 days prior to the close of the database were censored at the date of their last clinic visit. For the loss to follow up analysis, the earliest possible event time is 90 days after the initiation of cART; hence our tests and regression models use a start time of day 90. Finally, for the composite outcome, the event time is the earliest of death date or loss to follow up date, where the LTFU date is defined as we have explained.\nThe adjusted HR ratios control for potential confounding variables using inverse weighting by the treatment propensity score method [35]. For each individual, the propensity score is the probability of receiving HREC as a function of individual level characteristics. The propensity score, denoted by p(x), is estimated by fitting a logistic regression model of HREC status (yes/no) on potential confounding variables x. The propensity score model is checked for lack of fit using the Hosmer-Lemeshow goodness-of-fit test [36]. The adjusted HR ratios are calculated by fitting a weighted, stratified proportional hazards regression of the event time on HREC status. The weights are proportional to 1/p(x) for those who receive HREC, and to 1/{1 - p(x)} for those who do not. Following Hernan et al. [27], stabilized weights were used in the estimation (full details are available upon request). The stratification variable is clinic type. Robust standard errors are used to account for clustering by clinic and for correlation induced by the use of inverse probability weights.\nBecause the propensity scores represent the probability of receiving HREC as a function of individual-level covariates (listed here), the weight for each individual is inversely proportional to the (estimated) probability of his or her actual HREC status. The weighted sample can therefore be viewed as one where differential selection into HREC attributable to x has been eliminated; hence, to the extent that x contains all relevant confounders, the adjusted HR is equivalent to the exposure effect from a marginal structural proportional hazards model and can be interpreted as the causal effect of HREC on the event of interest [37,38].\nAll analyses were done using STATA Version SE/10 (College Station, Texas, USA).\nWe included all eligible patients and categorized them as having been enrolled into HREC at initiation of cART or having remained in Routine Care. The distributions of the time to event outcomes are summarized using Kaplan-Meier curves. Event times and censoring times are defined as follows: for analysis of mortality, the event time is the date of death; others are censored at the time of their last clinic visit. 
, "This was a retrospective analysis of prospectively collected routine clinical data. The study was approved by the Indiana University School of Medicine Institutional Review Board and the Moi University School of Medicine Institutional Review and Ethics Committee.", "The Academic Model Providing Access to Healthcare (AMPATH) was initiated in 2001 as a joint partnership between Moi University School of Medicine in Kenya, the Indiana University School of Medicine, and the Moi Teaching and Referral Hospital. The USAID-AMPATH Partnership was initiated in 2004 when AMPATH received ongoing funding through the United States Agency for International Development (USAID) and the United States Presidential Emergency Plan for AIDS Relief (PEPFAR).
The initial goal of AMPATH was to establish an HIV care system to serve the needs of both urban and rural patients and to assess the barriers to and outcomes of antiretroviral therapy. Details of the development of this programme have been described in detail elsewhere [26].\nThe first urban and rural HIV clinics were opened in November 2001. Since then, the programme has enrolled more than 140,000 HIV-infected adults and children in 25 Ministry of Health (MOH) facilities and numerous satellite clinics in western Kenya (data for satellite clinics are incorporated into their \"parent\" clinic). Although located within the MOH facilities, the AMPATH clinics are dedicated to HIV and HIV/TB care, treatment and support. All HIV- and tuberculosis-related care and treatment are provided free at the point of care.\n Clinical procedures: express care and routine care The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. 
Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care"
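To make the referral rules above concrete, here is a minimal sketch of the decision logic applied at each nurse-led HREC visit, using only the thresholds stated in the text (any reported checklist symptom, imperfect adherence, temperature of 37.2°C or higher, or oxygen saturation of 93% or lower triggers immediate referral to the clinical officer or physician). It is written in Python purely for illustration and is not the clinical tool or record system used in the programme.

def needs_clinician_referral(symptoms_reported, missed_doses_last_7_days,
                             temperature_c, spo2_percent):
    # Illustrative encoding of the HREC nurse checklist described above.
    # symptoms_reported: list of checklist symptoms the patient reports
    # missed_doses_last_7_days: missed doses by self-report and pill count
    # temperature_c: measured temperature in degrees Celsius
    # spo2_percent: transcutaneous oxygen saturation (%)
    if symptoms_reported:                 # any checklist symptom
        return True
    if missed_doses_last_7_days > 0:      # not perfectly adherent
        return True
    if temperature_c >= 37.2:             # pre-set temperature threshold
        return True
    if spo2_percent <= 93:                # pre-set oxygen saturation threshold
        return True
    return False

# Example: an afebrile, fully adherent patient with no symptoms stays in Express Care.
print(needs_clinician_referral([], 0, 36.8, 97))             # False
print(needs_clinician_referral(["new cough"], 0, 36.8, 97))  # True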
, "The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities.\nDuring these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers.
Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28].\nBeginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART.\nOnce a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the \"Express Care room\", which provides \"one-stop care\". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol.\nThe HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or \"any other problem that you feel you need a doctor for\") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1.\nSummary of programme characteristics in High Risk Express Care (HREC) versus Routine Care", "Clinicians complete standardized forms capturing demographic, clinical and pharmacologic information at each patient visit. These data are then hand-entered into the AMPATH Medical Record System, a secure computerized database designed for clinical management, with data entry validated by random review of 10% of the forms entered [29]. At the time of registration, patients are provided with a unique identifying number. 
For this study, all data were stripped of identifying information prior to analysis.", "The analysis included all patients aged 14 years or older who were initiating cART with CD4 counts less than or equal to 100 cells/mm3 in one of the USAID-AMPATH clinics from March 2007 until March 2009.", "The primary goal of the HREC system is to prevent mortality during the first 3 months following cART initiation. Therefore, the primary outcome for this analysis was all-cause mortality. Two secondary outcomes were analyzed: loss to follow up (LTFU), defined as being absent from the clinic for at least 3 months with no information regarding vital status, and a composite outcome defined as LTFU or death. The rationale for the composite outcome is that loss to follow up in such a high-risk population is likely to mean that a patient has died, even if the death has not yet been reported to the clinic [10,30-34]. Analysis of the composite outcome can be viewed as a sensitivity analysis for the mortality rate as it provides an upper bound of the mortality estimate.\nThe primary explanatory variable is being in the HREC programme (versus remaining in Routine Care). Our analyses quantify the effect of HREC on mortality, loss to follow up, and the composite outcome of death or loss to follow up using crude and adjusted hazard rate (HR) ratios.\nOur adjusted HR ratios control for the following potential confounding variables, all measured at time of cART initiation and selected a priori based on their potential to independently affect changes in risk of mortality and/or loss to follow up: CD4 count (analyzed as a continuous variable); receipt of treatment for tuberculosis at the time of cART initiation (yes/no); receipt of cotrimoxazole or dapsone prophylaxis within 28 days of cART initiation (yes/no) [13,17]; travel time to clinic (dichotomized as up to one hour vs. more than one hour); type of clinic (referral hospital, district or sub-district hospital, or rural health centre); age; and sex (male/female). CD4 and age are centred at their mean values. WHO clinical stage was not included in the final model because: a) it was non-significantly associated with the outcomes of interest in bivariable analyses; and b) there was missing data in one of the two groups.", "We included all eligible patients and categorized them as having been enrolled into HREC at initiation of cART or having remained in Routine Care. The distributions of the time to event outcomes are summarized using Kaplan-Meier curves. Event times and censoring times are defined as follows: for analysis of mortality, the event time is the date of death; others are censored at the time of their last clinic visit. For the analysis of loss to follow up, patients were defined as lost if they had not returned to clinic for more than three consecutive months; for these patients, the event time is the 90th day following the most recent visit to the clinic.\nIndividuals whose reported follow-up time is zero days had 1 day added for the purpose of analysis. Those who died were censored on their death date, and patients whose most recent clinic was less than 90 days prior to the close of the database were censored at the date of their last clinic visit. For the loss to follow up analysis, the earliest possible event time is 90 days after the initiation of cART; hence our tests and regression models use a start time of day 90. 
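As a sketch of how the event and censoring times just described might be derived from the visit records, assuming one summary row per patient (Python/pandas; the column names cart_start, last_visit and death_date and the database closure date are hypothetical placeholders rather than actual AMPATH field names):

import pandas as pd

DB_CLOSE = pd.Timestamp("2009-03-31")   # assumed database closure date
df = pd.read_csv("visit_summary.csv", parse_dates=["cart_start", "last_visit", "death_date"])

# Mortality: event at the death date, otherwise censored at the last clinic visit;
# zero-day follow up is set to 1 day.
df["died"] = df["death_date"].notna().astype(int)
end_death = df["death_date"].fillna(df["last_visit"])
df["time_death"] = (end_death - df["cart_start"]).dt.days.clip(lower=1)

# Loss to follow up: lost if the last visit was 90 days or more before database closure
# and the patient is not known to have died; the event time is the 90th day after the
# last visit. Deaths are censored at the death date, everyone else at the last visit.
gap_days = (DB_CLOSE - df["last_visit"]).dt.days
df["ltfu"] = ((gap_days >= 90) & (df["died"] == 0)).astype(int)
end_ltfu = (df["last_visit"] + pd.Timedelta(days=90)).where(df["ltfu"] == 1, end_death)
df["time_ltfu"] = (end_ltfu - df["cart_start"]).dt.days.clip(lower=1)
# The regression models for this outcome start the clock at day 90 after cART initiation.
# The composite endpoint, described next, takes the death time for deaths and the
# loss to follow up time otherwise.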
Finally, for the composite outcome, the event time is the earliest of death date or loss to follow up date, where the LTFU date is defined as we have explained.\nThe adjusted HR ratios control for potential confounding variables using inverse weighting by the treatment propensity score method [35]. For each individual, the propensity score is the probability of receiving HREC as a function of individual level characteristics. The propensity score, denoted by p(x), is estimated by fitting a logistic regression model of HREC status (yes/no) on potential confounding variables x. The propensity score model is checked for lack of fit using the Hosmer-Lemeshow goodness-of-fit test [36]. The adjusted HR ratios are calculated by fitting a weighted, stratified proportional hazards regression of the event time on HREC status. The weights are proportional to 1/p(x) for those who receive HREC, and to 1/{1 - p(x)} for those who do not. Following Hernan et al. [27], stabilized weights were used in the estimation (full details are available upon request). The stratification variable is clinic type. Robust standard errors are used to account for clustering by clinic and for correlation induced by the use of inverse probability weights.\nBecause the propensity scores represent the probability of receiving HREC as a function of individual-level covariates (listed here), the weight for each individual is inversely proportional to the (estimated) probability of his or her actual HREC status. The weighted sample can therefore be viewed as one where differential selection into HREC attributable to x has been eliminated; hence, to the extent that x contains all relevant confounders, the adjusted HR is equivalent to the exposure effect from a marginal structural proportional hazards model and can be interpreted as the causal effect of HREC on the event of interest [37,38].\nAll analyses were done using STATA Version SE/10 (College Station, Texas, USA).", "There were 4,958 patients aged 14 years or older with CD4 counts of ≤100 cells/mm3 who initiated cART at one of the USAID-AMPATH clinics during the study period. Of these, 635 were enrolled into HREC. Reasons why patients were not enrolled into HREC included that the HREC programme had not yet been rolled out to a particular clinic, that the patient lived too far to allow them to attend clinic weekly or bi-weekly, and lack of available space in the clinic for expansion of the HREC programme.\nAs summarized in Table 2 patients in Routine Care and HREC were similar with regard to gender distribution and age: 40% male with a median age of approximately 36 years. There were no significant differences between the groups with regard to baseline CD4 count or proportion receiving treatment for tuberculosis. Patients in HREC were slightly less likely to be WHO Stage III/IV at cART initiation (66% vs. 69%) and to have to travel at least 1 h to clinic (71% vs. 77%). They were more likely to be attending an urban clinic (61% vs. 52%), and using cotrimoxazole or dapsone prophylaxis at cART initiation (100% vs. 95%). The median follow-up time was 318 days (interquartile range 147-533).\nSocio-demographic and clinical characteristics of patients in High Risk Express Care (HREC) versus Routine Care\nN.B. The only variable for which there were missing data was WHO Stage, as described in the table. 
This variable was not included in the propensity score model as it was not a significant predictor of the outcomes of interest.\nThere were 426 deaths among the study population: 39 in HREC and 387 in Routine Care. The crude incidence rate of death during the follow-up period was 5.7 per 100 person-years in HREC compared with 10.6 per 100 person-years in Routine Care (incidence rate ratio, IRR: 0.54; 95% confidence interval, CI: 0.38-0.75) (Figure 1a). After adjustment for potential confounders, the HREC programme was associated with a 40% reduced risk of death (adjusted HR: 0.59; 95% CI: 0.45-0.77).\nKaplan-Meier curves demonstrating the effect of the High Risk Express Care compared with Routine Care among all high-risk patients on: a) their probability of survival; b) their probability of remaining in care (i.e., loss to follow up); and c) their probability of remaining alive and in care (n = 4958). a Probability of remaining alive after cART initiation. b Probability of remaining in care after cART initiation. c Probability of remaining alive and in care after cART.\nThere were also 1,299 patients lost to follow up during the same period, including 134 in HREC and 1,165 in Routine Care. The crude incidence rate of LTFU among HREC was 18.7 per 100 person-years versus 29.7 in Routine Care (IRR: 0.63; 95% CI: 0.52-0.76) (Figure 1b). After adjustment, patients in HREC were also much less likely to become lost to follow up (AHR 0.62; 95% CI: 0.55-0.70).\nWhen we assessed the combined endpoint of LTFU and death, there were 1,725 events in 4639.5 person-years of follow up, for a crude incidence rate of 24.2 per 100 person-years in HREC versus 39.5 in Routine Care (IRR: 0.61; 95% CI: 0.52-0.72) (Figure 1c). Overall, the HREC patients were much more likely to be alive and in care after starting cART by the end of the study follow-up period (AHR 0.62; 95% CI: 0.57-0.67) (Table 3).\nAdjusted Hazard Ratios (HR)* and 95% Confidence Intervals (CI) of the effect of High Risk Express Care (HREC) vs. Routine Care on: a) Death, b) Lost to Follow-up (LTFU), and c) Death or LTFU (combined endpoint) following cART initiation\n*Adjusted using propensity scores for clinic, the use of cotrimoxazole or dapsone, CD4 count closest to cART initiation, receiving treatment for tuberculosis, gender, age, clinic type (Referral Hospital, Sub-District and District Hospitals, and Rural Health Centres), and travel time to clinic."
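The crude comparison reported above follows the standard person-time formulas. The short Python sketch below reproduces it using the published death counts together with person-year values that are only approximations backed out from the rounded rates (39/0.057 and 387/0.106), not figures taken from the dataset.

import math

deaths_hrec, deaths_routine = 39, 387
py_hrec, py_routine = 684.0, 3651.0   # approximate person-years implied by 5.7 and 10.6 per 100 py

rate_hrec = 100 * deaths_hrec / py_hrec           # about 5.7 per 100 person-years
rate_routine = 100 * deaths_routine / py_routine  # about 10.6 per 100 person-years
irr = rate_hrec / rate_routine                    # about 0.54

# Large-sample 95% CI on the log scale: se(log IRR) = sqrt(1/d1 + 1/d0)
se_log_irr = math.sqrt(1 / deaths_hrec + 1 / deaths_routine)
ci_low = irr * math.exp(-1.96 * se_log_irr)
ci_high = irr * math.exp(1.96 * se_log_irr)
print(f"IRR {irr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")   # close to the reported 0.54 (0.38-0.75)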
, "The first few months following initiation of cART are a critical time for severely immune-suppressed HIV-infected patients. These data suggest that more frequent monitoring of patients in the early months by a dedicated nurse can significantly improve survival and retention in care among these high-risk patients. Although further evaluation is needed, this intervention may be relatively easy to implement in other resource-constrained environments.\nTo our knowledge, the concept of frequent nurse-based rapid assessments is among the few interventions other than cotrimoxazole prophylaxis that has been associated with a profound reduction in early mortality among high-risk HIV-infected patients initiating cART in low-income settings [13,14]. We believe the effects of HREC are a combination of the rapid and the frequent assessments: rapid, because this makes healthcare more accessible to patients (by not having to wait as long and by not having to spend as much time in the clinic); frequent, because it makes it more likely that early warning symptoms (e.g., fever, rash) can be identified within days, as opposed to weeks, of their onset. If, for example, the monthly standard of care visits were simply made more rapid, symptoms such as fever or rash would go unattended for potentially weeks, thereby increasing the risk of full-blown immune-reconstitution disease, more severe toxicities, etc.\nWe also postulate that more frequent monitoring is effective at improving early patient outcomes through both direct and indirect mechanisms. For example, earlier identification of the signs and symptoms of drug toxicity, opportunistic infections and immune reconstitution syndrome likely leads to earlier interventions to address these issues; thus patients should experience a direct survival advantage in the short term.\nIndirectly, HREC may have improved adherence to cART and thus improved short-term outcomes. Adherence may be improved in the HREC population because of the weekly contact, reminders and support; as a result of improved adherence, patients will be more likely to experience complete virologic suppression, have an improved immune response, and be less likely to develop resistance, thereby indirectly contributing to survival benefits over the short and long term. HREC may also have improved retention by enabling patients to be seen quickly, without having to wait in long queues and without having to pass through multiple stations (i.e., spending much of the day at the clinic), thereby making their healthcare more accessible.\nThere are some additional costs associated with HREC. For example, there are added direct costs to patients arising from the increased transportation required to and from the clinic. Patients may also have to miss more work because of more frequent clinic visits, translating into increased opportunity costs. The nurses working on HREC were hired explicitly for that purpose, and this certainly adds to the programme's overall personnel expense. Whether these additional costs and expenses are justified when weighed against the costs associated with increased morbidity, mortality and loss to follow up is the subject of a detailed cost-effectiveness analysis that is beyond the scope of the present evaluation.\nThere are two key strengths to these findings. First, the standard of care in AMPATH is already to provide cotrimoxazole or dapsone until a patient's CD4 count is above 200 cells/mm3. These data have therefore been able to assess the effect of more frequent monitoring in a setting where the vast majority of patients were already receiving cotrimoxazole or dapsone. Second, the effect of HREC was strong across three different versions of the primary outcome, suggesting a robust effect. By evaluating the impact of HREC on both mortality and LTFU, we took account of the two crucial factors determining the success of an HIV treatment programme: keeping patients alive and in care. Moreover, we used statistical methods that appropriately and robustly adjusted for measured confounders.\nThere are also limitations to this analysis. First, the choice of the clinics in which HREC was first rolled out may have created a variety of potential biases in our analysis related to a possibly higher quality of care offered at those clinics.
For example, this may have created some selection bias (improved patient outcomes at those clinics irrespective of HREC, e.g., because of lower patient volumes or higher functioning staff) and ascertainment bias (better ascertainment of death and other clinical outcomes at the higher functioning clinics). Similarly, although all patients were eligible to be enrolled into HREC, only a small proportion were enrolled. If the providers who were more likely to refer patients to HREC were also the ones more likely to be current with clinical protocols (i.e., to prescribe cotrimoxazole) and/or more likely to be better clinicians, those patients referred by them may have been more likely to have better outcomes anyway.\nHowever, for those patients referred, it was the HREC nurse who had the majority of clinical contact with them, thereby reducing any potential provider bias on the part of the referring clinician. Similarly, a slightly smaller proportion of patients enrolled into HREC were WHO Stage III/IV at treatment initiation. Although these issues may have biased the findings favourably towards HREC, the use of propensity score methods helps to overcome this possible bias because weighting in inverse proportion to the treatment propensity score creates a pseudo-sample wherein allocation to HREC is independent of the confounders that have been included in the propensity score. Hence the weighted dataset can be analyzed as if the group allocation were random. A limitation of this method is that allocation to HREC may remain non-random to the extent that receipt of HREC depends on unmeasured factors. We acknowledge that residual bias may therefore remain due to potential unmeasured confounders, including adherence to cART (not accounted for in this analysis due to unreliability of the data).", "In conclusion, we have demonstrated that weekly rapid assessments by nurses, either by phone or in person, with immediate referrals to clinical officers or physicians if needed, can significantly improve survival among high-risk HIV-infected patients initiating cART in a sub-Saharan African setting. Although the cost effectiveness of the Express Care model needs to be thoroughly evaluated, our experience and findings suggest that this may be an innovative way of increasing patient volume, improving the quality of care, and greatly improving patient outcomes in the short term.", "The authors declare that they have no competing interests.", "PB was primarily responsible for the writing of the manuscript. AS, RK, JS, JM, and SK are practicing physicians in Kenya who developed the Express Care programme and contributed significantly to the generation of hypotheses and interpretation of results for the manuscript. JH is the biostatistician on record for this analysis and provided significant input and technical assistance in the methodological approaches used. AK was the analyst for the study. KWK critically reviewed the manuscript for issues of design and interpretation. ES was the data manager responsible for providing and overseeing the data. All authors have read and approved the final version of this manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, "results", "discussion", "conclusions", null, null ]
[ "Antiretrovirals", "Mortality", "Losses to follow up", "Adherence", "Models of care", "Africa" ]
Background: Combination antiretroviral treatment (cART) has proven itself to be an effective therapeutic mechanism for suppressing viral replication and enabling reconstitution of the immune system, thus allowing patients to recover and live with HIV disease as a chronic illness [1-3]. If adherence to the medications is high, severe immune-suppression is not present at cART initiation, and no significant co-morbidities, such as hepatitis C infection, exist, projections suggest that people living with HIV/AIDS have greatly improved long-term prognosis [4]. Despite the proven effectiveness of cART in low-income countries [5-9], mortality rates among patients in these settings are higher than those seen in high-income environments [10]. In resource-poor settings, mortality is at its highest during the first 3 months after cART initiation [9-12]. It is at least four times higher than rates in high-income countries in the first month of treatment [10]. Why mortality is at its highest during this period has been the subject of much debate and speculation. Reasons for these differences have been attributed to the non-use of cotrimoxazole prophylaxis [13,14], tuberculosis-associated immune reconstitution inflammatory syndrome (IRIS) [15-17], IRIS due to other opportunistic infections [18], and hepatotoxicity related to antiretroviral agents [19]. A consistently clear predictor of mortality during this period is having a low CD4 count at the time of treatment initiation [10,20]. Recent estimates by the World Health Organization (WHO) indicate that although 6.7 million individuals in low- and middle-income settings are receiving cART, this represents only 47% coverage of individuals who are in clinical need [21]. The massive scale up of HIV care and treatment programmes has required enormous investments, and still there is a substantial unmet need. Thus, the challenge presented to HIV care programmes operating in resource-poor settings is how to continue scaling up while simultaneously improving the outcomes of those enrolling in treatment programmes. As such, novel models of care, such as task shifting [22-24], which increase healthcare efficiency and improve patient outcomes, clearly need to be designed and tested. Here, we describe the impact of a nurse-clinician approach [25] on mortality and patient retention among severely immune-suppressed HIV-infected adults initiating cART within a large multi-centre HIV/AIDS care and treatment programme in western Kenya. Methods: Study design This was a retrospective analysis of prospectively collected routine clinical data. The study was approved by the Indiana University School of Medicine Institutional Review Board and the Moi University School of Medicine Institutional Review and Ethics Committee. The programme The Academic Model Providing Access to Healthcare (AMPATH) was initiated in 2001 as a joint partnership between Moi University School of Medicine in Kenya, the Indiana University School of Medicine, and the Moi Teaching and Referral Hospital. The USAID-AMPATH Partnership was initiated in 2004 when AMPATH received ongoing funding through the United States Agency for International Development (USAID) and the United States Presidential Emergency Plan for AIDS Relief (PEPFAR).
The initial goal of AMPATH was to establish an HIV care system to serve the needs of both urban and rural patients and to assess the barriers to and outcomes of antiretroviral therapy. Details of the development of this programme have been described in detail elsewhere [26]. The first urban and rural HIV clinics were opened in November 2001. Since then, the programme has enrolled more than 140,000 HIV-infected adults and children in 25 Ministry of Health (MOH) facilities and numerous satellite clinics in western Kenya (data for satellite clinics are incorporated into their "parent" clinic). Although located within the MOH facilities, the AMPATH clinics are dedicated to HIV and HIV/TB care, treatment and support. All HIV- and tuberculosis-related care and treatment are provided free at the point of care. Clinical procedures: express care and routine care The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities. During these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28]. Beginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART. Once a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. 
Instead, such patients go directly to the "Express Care room", which provides "one-stop care". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol. The HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or "any other problem that you feel you need a doctor for") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1. Summary of programme characteristics in High Risk Express Care (HREC) versus Routine Care
Data collection Clinicians complete standardized forms capturing demographic, clinical and pharmacologic information at each patient visit. These data are then hand-entered into the AMPATH Medical Record System, a secure computerized database designed for clinical management, with data entry validated by random review of 10% of the forms entered [29]. At the time of registration, patients are provided with a unique identifying number. For this study, all data were stripped of identifying information prior to analysis. Study population The analysis included all patients aged 14 years or older who were initiating cART with CD4 counts less than or equal to 100 cells/mm3 in one of the USAID-AMPATH clinics from March 2007 until March 2009. Outcomes, explanatory variables and confounders The primary goal of the HREC system is to prevent mortality during the first 3 months following cART initiation. Therefore, the primary outcome for this analysis was all-cause mortality. Two secondary outcomes were analyzed: loss to follow up (LTFU), defined as being absent from the clinic for at least 3 months with no information regarding vital status, and a composite outcome defined as LTFU or death. The rationale for the composite outcome is that loss to follow up in such a high-risk population is likely to mean that a patient has died, even if the death has not yet been reported to the clinic [10,30-34].
Analysis of the composite outcome can be viewed as a sensitivity analysis for the mortality rate as it provides an upper bound of the mortality estimate. The primary explanatory variable is being in the HREC programme (versus remaining in Routine Care). Our analyses quantify the effect of HREC on mortality, loss to follow up, and the composite outcome of death or loss to follow up using crude and adjusted hazard rate (HR) ratios. Our adjusted HR ratios control for the following potential confounding variables, all measured at time of cART initiation and selected a priori based on their potential to independently affect changes in risk of mortality and/or loss to follow up: CD4 count (analyzed as a continuous variable); receipt of treatment for tuberculosis at the time of cART initiation (yes/no); receipt of cotrimoxazole or dapsone prophylaxis within 28 days of cART initiation (yes/no) [13,17]; travel time to clinic (dichotomized as up to one hour vs. more than one hour); type of clinic (referral hospital, district or sub-district hospital, or rural health centre); age; and sex (male/female). CD4 and age are centred at their mean values. WHO clinical stage was not included in the final model because: a) it was non-significantly associated with the outcomes of interest in bivariable analyses; and b) there was missing data in one of the two groups. The primary goal of the HREC system is to prevent mortality during the first 3 months following cART initiation. Therefore, the primary outcome for this analysis was all-cause mortality. Two secondary outcomes were analyzed: loss to follow up (LTFU), defined as being absent from the clinic for at least 3 months with no information regarding vital status, and a composite outcome defined as LTFU or death. The rationale for the composite outcome is that loss to follow up in such a high-risk population is likely to mean that a patient has died, even if the death has not yet been reported to the clinic [10,30-34]. Analysis of the composite outcome can be viewed as a sensitivity analysis for the mortality rate as it provides an upper bound of the mortality estimate. The primary explanatory variable is being in the HREC programme (versus remaining in Routine Care). Our analyses quantify the effect of HREC on mortality, loss to follow up, and the composite outcome of death or loss to follow up using crude and adjusted hazard rate (HR) ratios. Our adjusted HR ratios control for the following potential confounding variables, all measured at time of cART initiation and selected a priori based on their potential to independently affect changes in risk of mortality and/or loss to follow up: CD4 count (analyzed as a continuous variable); receipt of treatment for tuberculosis at the time of cART initiation (yes/no); receipt of cotrimoxazole or dapsone prophylaxis within 28 days of cART initiation (yes/no) [13,17]; travel time to clinic (dichotomized as up to one hour vs. more than one hour); type of clinic (referral hospital, district or sub-district hospital, or rural health centre); age; and sex (male/female). CD4 and age are centred at their mean values. WHO clinical stage was not included in the final model because: a) it was non-significantly associated with the outcomes of interest in bivariable analyses; and b) there was missing data in one of the two groups. Analysis We included all eligible patients and categorized them as having been enrolled into HREC at initiation of cART or having remained in Routine Care. 
The distributions of the time to event outcomes are summarized using Kaplan-Meier curves. Event times and censoring times are defined as follows: for analysis of mortality, the event time is the date of death; others are censored at the time of their last clinic visit. For the analysis of loss to follow up, patients were defined as lost if they had not returned to clinic for more than three consecutive months; for these patients, the event time is the 90th day following the most recent visit to the clinic. Individuals whose reported follow-up time is zero days had 1 day added for the purpose of analysis. Those who died were censored on their death date, and patients whose most recent clinic was less than 90 days prior to the close of the database were censored at the date of their last clinic visit. For the loss to follow up analysis, the earliest possible event time is 90 days after the initiation of cART; hence our tests and regression models use a start time of day 90. Finally, for the composite outcome, the event time is the earliest of death date or loss to follow up date, where the LTFU date is defined as we have explained. The adjusted HR ratios control for potential confounding variables using inverse weighting by the treatment propensity score method [35]. For each individual, the propensity score is the probability of receiving HREC as a function of individual level characteristics. The propensity score, denoted by p(x), is estimated by fitting a logistic regression model of HREC status (yes/no) on potential confounding variables x. The propensity score model is checked for lack of fit using the Hosmer-Lemeshow goodness-of-fit test [36]. The adjusted HR ratios are calculated by fitting a weighted, stratified proportional hazards regression of the event time on HREC status. The weights are proportional to 1/p(x) for those who receive HREC, and to 1/{1 - p(x)} for those who do not. Following Hernan et al. [27], stabilized weights were used in the estimation (full details are available upon request). The stratification variable is clinic type. Robust standard errors are used to account for clustering by clinic and for correlation induced by the use of inverse probability weights. Because the propensity scores represent the probability of receiving HREC as a function of individual-level covariates (listed here), the weight for each individual is inversely proportional to the (estimated) probability of his or her actual HREC status. The weighted sample can therefore be viewed as one where differential selection into HREC attributable to x has been eliminated; hence, to the extent that x contains all relevant confounders, the adjusted HR is equivalent to the exposure effect from a marginal structural proportional hazards model and can be interpreted as the causal effect of HREC on the event of interest [37,38]. All analyses were done using STATA Version SE/10 (College Station, Texas, USA). We included all eligible patients and categorized them as having been enrolled into HREC at initiation of cART or having remained in Routine Care. The distributions of the time to event outcomes are summarized using Kaplan-Meier curves. Event times and censoring times are defined as follows: for analysis of mortality, the event time is the date of death; others are censored at the time of their last clinic visit. 
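To make these event-time and censoring rules concrete, the following sketch shows one way the mortality and loss-to-follow-up outcomes could be derived for each patient. It is an illustration only; the function and field names are hypothetical and do not come from the study's analysis code, and dates are assumed to have been extracted from the record system already.

```python
# Illustrative derivation of the event times and censoring indicators described above.
from datetime import date, timedelta
from typing import Optional, Tuple

LTFU_GAP = timedelta(days=90)  # "more than three consecutive months" without a visit

def mortality_outcome(cart_start: date, death_date: Optional[date],
                      last_visit: date) -> Tuple[int, int]:
    """(time in days, event indicator) for the all-cause mortality analysis."""
    if death_date is not None:
        return (max((death_date - cart_start).days, 1), 1)   # event at the date of death
    return (max((last_visit - cart_start).days, 1), 0)       # otherwise censor at last clinic visit

def ltfu_outcome(cart_start: date, death_date: Optional[date],
                 last_visit: date, db_close: date) -> Tuple[int, int]:
    """(time in days, event indicator) for the loss-to-follow-up analysis."""
    if death_date is not None:                                # deaths are censored at the death date
        return (max((death_date - cart_start).days, 1), 0)
    if db_close - last_visit > LTFU_GAP:                      # lost: event on day 90 after last visit
        return (max((last_visit + LTFU_GAP - cart_start).days, 1), 1)
    return (max((last_visit - cart_start).days, 1), 0)        # still in care: censor at last visit
```

Zero-day follow-up times are set to 1 day, matching the convention stated in the text.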
For the analysis of loss to follow up, patients were defined as lost if they had not returned to clinic for more than three consecutive months; for these patients, the event time is the 90th day following the most recent visit to the clinic. Individuals whose reported follow-up time is zero days had 1 day added for the purpose of analysis. Those who died were censored on their death date, and patients whose most recent clinic was less than 90 days prior to the close of the database were censored at the date of their last clinic visit. For the loss to follow up analysis, the earliest possible event time is 90 days after the initiation of cART; hence our tests and regression models use a start time of day 90. Finally, for the composite outcome, the event time is the earliest of death date or loss to follow up date, where the LTFU date is defined as we have explained. The adjusted HR ratios control for potential confounding variables using inverse weighting by the treatment propensity score method [35]. For each individual, the propensity score is the probability of receiving HREC as a function of individual level characteristics. The propensity score, denoted by p(x), is estimated by fitting a logistic regression model of HREC status (yes/no) on potential confounding variables x. The propensity score model is checked for lack of fit using the Hosmer-Lemeshow goodness-of-fit test [36]. The adjusted HR ratios are calculated by fitting a weighted, stratified proportional hazards regression of the event time on HREC status. The weights are proportional to 1/p(x) for those who receive HREC, and to 1/{1 - p(x)} for those who do not. Following Hernan et al. [27], stabilized weights were used in the estimation (full details are available upon request). The stratification variable is clinic type. Robust standard errors are used to account for clustering by clinic and for correlation induced by the use of inverse probability weights. Because the propensity scores represent the probability of receiving HREC as a function of individual-level covariates (listed here), the weight for each individual is inversely proportional to the (estimated) probability of his or her actual HREC status. The weighted sample can therefore be viewed as one where differential selection into HREC attributable to x has been eliminated; hence, to the extent that x contains all relevant confounders, the adjusted HR is equivalent to the exposure effect from a marginal structural proportional hazards model and can be interpreted as the causal effect of HREC on the event of interest [37,38]. All analyses were done using STATA Version SE/10 (College Station, Texas, USA). Study design: This was a retrospective analysis of prospectively collected routine clinical data. The study was approved by the Indiana University School of Medicine Institutional Review Board and the Moi University School of Medicine Institutional Review and Ethics Committee. The programme: The Academic Model Providing Access to Healthcare (AMPATH) was initiated in 2001 as a joint partnership between Moi University School of Medicine in Kenya, the Indiana University School of Medicine, and the Moi Teaching and Referral Hospital. The USAID-AMPATH Partnership was initiated in 2004 when AMPATH received ongoing funding through the United States Agency for International Development (USAID) and the United States Presidential Emergency Plan for AIDS Relief (PEPFAR). 
The initial goal of AMPATH was to establish an HIV care system to serve the needs of both urban and rural patients and to assess the barriers to and outcomes of antiretroviral therapy. Details of the development of this programme have been described in detail elsewhere [26]. The first urban and rural HIV clinics were opened in November 2001. Since then, the programme has enrolled more than 140,000 HIV-infected adults and children in 25 Ministry of Health (MOH) facilities and numerous satellite clinics in western Kenya (data for satellite clinics are incorporated into their "parent" clinic). Although located within the MOH facilities, the AMPATH clinics are dedicated to HIV and HIV/TB care, treatment and support. All HIV- and tuberculosis-related care and treatment are provided free at the point of care. Clinical procedures: express care and routine care The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities. During these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28]. Beginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART. Once a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. 
Instead, such patients go directly to the "Express Care room", which provides "one-stop care". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol. The HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or "any other problem that you feel you need a doctor for") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1. Summary of programme characteristics in High Risk Express Care (HREC) versus Routine Care The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities. During these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28]. Beginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART. 
Once a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the "Express Care room", which provides "one-stop care". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol. The HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or "any other problem that you feel you need a doctor for") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1. Summary of programme characteristics in High Risk Express Care (HREC) versus Routine Care Clinical procedures: express care and routine care: The HIV clinical care protocols used by the USAID-AMPATH Partnership are consistent with those recommended by WHO and have been described in detail elsewhere [27]. Briefly, the Routine Care protocol for patients receiving cART is that patients are seen by the clinician (clinical officer or physician) 2 weeks after initiating treatment, and then monthly thereafter. Those who have not initiated cART return every 1 to 3 months depending on their clinical status and co-morbidities. During these visits patients are seen by multiple care providers, including nurses, clinicians, pharmacy technicians, nutritionists, peer outreach workers and social workers. For new patients, clinical contact begins at registration, followed by the nurse who checks vital signs. The patient also sees a peer outreach worker for documentation of locator information, then goes on to see the doctor/clinical officer. Returning patients go directly to the nurse and then follow a course similar to that established for new patients. All patients newly initiated on cART who miss a scheduled clinic visit trigger an outreach attempt within 24 h, through either a telephone contact or home visit conducted by trained peers. Standard first-line antiretroviral regimens used are either nevirapine-based or efavirenz-based [28]. 
Beginning in March 2007, the High Risk Express Care (HREC) programme was implemented in a step-wise fashion in USAID-AMPATH Partnership clinics. As of May 2008, HREC had been rolled out to all parent clinics. The selection of clinics to pilot the programme was based primarily on space availability, patient volume and clinic congestion, and the general capacity of healthcare personnel to pilot a new programme. Once a clinic had implemented HREC, all patients meeting the criteria for HREC were eligible for referral to the programme. The criteria for referral to HREC are having a CD4 count of 100 cells/mm3 or less and initiating cART. Once a patient is identified as eligible for HREC, the clinical officer or physician prescribes a two-week supply of cART and then refers the patient to HREC (within the same clinic location). The patient is seen by a clinical officer or physician 2 weeks after antiretroviral initiation and then monthly. The HREC nurse is then responsible for interim weekly visits either physically or by telephone for a period of 3 months. The patient is seen monthly by a clinical officer or physician. In HREC, returning patients do not queue in the waiting bay or go through the nursing station, clinician room, pharmacy or any other referral points within the clinic. Instead, such patients go directly to the "Express Care room", which provides "one-stop care". The HREC nurse maintains a list of scheduled return visits, and if a patient misses a clinic appointment, the outreach team is activated as per routine protocol. The HREC visit for the high-risk patients is focused on identifying co-morbidities and complications of cART and reinforcing medication adherence. The nurse asks about adherence to medication by asking whether the patient has missed any of his/her medications in the previous 7 days and then conducting a pill count. If the patient is not perfectly adherent, he or she is referred to the clinical officer or physician. The nurse reviews a symptom checklist (new cough, breathlessness, rash, jaundiced eyes, vomiting, diarrhoea, severe headache, fever or "any other problem that you feel you need a doctor for") and measures temperature and transcutaneous oxygen saturation. If the patient reports any symptoms or meets the pre-set threshold for either temperature (≥37.2°Celsius) or oxygen saturation (O2 ≤ 93%), the patient is referred immediately to the clinical officer or physician. The nurse does not dispense drugs during an HREC visit as these are prescribed during monthly clinical officer or physician visits. A summary of similarities and differences between HREC and Routine Care can be found in Table 1. Summary of programme characteristics in High Risk Express Care (HREC) versus Routine Care Data collection: Clinicians complete standardized forms capturing demographic, clinical and pharmacologic information at each patient visit. These data are then hand-entered into the AMPATH Medical Record System, a secure computerized database designed for clinical management, with data entry validated by random review of 10% of the forms entered [29]. At the time of registration, patients are provided with a unique identifying number. For this study, all data were stripped of identifying information prior to analysis. Study population: The analysis included all patients aged 14 years or older who were initiating cART with CD4 counts less than or equal to 100 cells/mm3 in one of the USAID-AMPATH clinics from March 2007 until March 2009. 
Outcomes, explanatory variables and confounders: The primary goal of the HREC system is to prevent mortality during the first 3 months following cART initiation. Therefore, the primary outcome for this analysis was all-cause mortality. Two secondary outcomes were analyzed: loss to follow up (LTFU), defined as being absent from the clinic for at least 3 months with no information regarding vital status, and a composite outcome defined as LTFU or death. The rationale for the composite outcome is that loss to follow up in such a high-risk population is likely to mean that a patient has died, even if the death has not yet been reported to the clinic [10,30-34]. Analysis of the composite outcome can be viewed as a sensitivity analysis for the mortality rate as it provides an upper bound of the mortality estimate. The primary explanatory variable is being in the HREC programme (versus remaining in Routine Care). Our analyses quantify the effect of HREC on mortality, loss to follow up, and the composite outcome of death or loss to follow up using crude and adjusted hazard rate (HR) ratios. Our adjusted HR ratios control for the following potential confounding variables, all measured at time of cART initiation and selected a priori based on their potential to independently affect changes in risk of mortality and/or loss to follow up: CD4 count (analyzed as a continuous variable); receipt of treatment for tuberculosis at the time of cART initiation (yes/no); receipt of cotrimoxazole or dapsone prophylaxis within 28 days of cART initiation (yes/no) [13,17]; travel time to clinic (dichotomized as up to one hour vs. more than one hour); type of clinic (referral hospital, district or sub-district hospital, or rural health centre); age; and sex (male/female). CD4 and age are centred at their mean values. WHO clinical stage was not included in the final model because: a) it was non-significantly associated with the outcomes of interest in bivariable analyses; and b) there was missing data in one of the two groups. Analysis: We included all eligible patients and categorized them as having been enrolled into HREC at initiation of cART or having remained in Routine Care. The distributions of the time to event outcomes are summarized using Kaplan-Meier curves. Event times and censoring times are defined as follows: for analysis of mortality, the event time is the date of death; others are censored at the time of their last clinic visit. For the analysis of loss to follow up, patients were defined as lost if they had not returned to clinic for more than three consecutive months; for these patients, the event time is the 90th day following the most recent visit to the clinic. Individuals whose reported follow-up time is zero days had 1 day added for the purpose of analysis. Those who died were censored on their death date, and patients whose most recent clinic was less than 90 days prior to the close of the database were censored at the date of their last clinic visit. For the loss to follow up analysis, the earliest possible event time is 90 days after the initiation of cART; hence our tests and regression models use a start time of day 90. Finally, for the composite outcome, the event time is the earliest of death date or loss to follow up date, where the LTFU date is defined as we have explained. The adjusted HR ratios control for potential confounding variables using inverse weighting by the treatment propensity score method [35]. 
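A minimal sketch of this inverse-probability-of-treatment weighting approach is given below, using open-source libraries (scikit-learn and lifelines) as stand-ins for the STATA routines actually used in the study. The DataFrame and its column names are hypothetical, confounder columns are assumed to be numerically coded, and the details of the weighting are spelled out in the paragraph that follows.

```python
# Sketch of propensity-score (IPW) adjustment with a weighted, stratified Cox model.
# Not the study's actual code: 'df' and its columns (hrec, time, event, clinic_type, ...) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

CONFOUNDERS = ["cd4", "tb_treatment", "ctx_or_dapsone", "travel_over_1h", "age", "sex"]

def ipw_cox(df: pd.DataFrame) -> CoxPHFitter:
    # 1) Propensity score: probability of being in HREC given baseline confounders.
    ps_model = LogisticRegression(max_iter=1000).fit(df[CONFOUNDERS], df["hrec"])
    p = ps_model.predict_proba(df[CONFOUNDERS])[:, 1]

    # 2) Stabilized inverse-probability-of-treatment weights:
    #    marginal probability of the received exposure divided by its conditional probability.
    p_hrec = df["hrec"].mean()
    df = df.assign(sw=df["hrec"] * p_hrec / p + (1 - df["hrec"]) * (1 - p_hrec) / (1 - p))

    # 3) Weighted Cox proportional hazards model of time-to-event on HREC status,
    #    stratified by clinic type, with robust standard errors.
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "hrec", "clinic_type", "sw"]],
            duration_col="time", event_col="event",
            weights_col="sw", strata=["clinic_type"], robust=True)
    return cph
```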
For each individual, the propensity score is the probability of receiving HREC as a function of individual level characteristics. The propensity score, denoted by p(x), is estimated by fitting a logistic regression model of HREC status (yes/no) on potential confounding variables x. The propensity score model is checked for lack of fit using the Hosmer-Lemeshow goodness-of-fit test [36]. The adjusted HR ratios are calculated by fitting a weighted, stratified proportional hazards regression of the event time on HREC status. The weights are proportional to 1/p(x) for those who receive HREC, and to 1/{1 - p(x)} for those who do not. Following Hernan et al. [27], stabilized weights were used in the estimation (full details are available upon request). The stratification variable is clinic type. Robust standard errors are used to account for clustering by clinic and for correlation induced by the use of inverse probability weights. Because the propensity scores represent the probability of receiving HREC as a function of individual-level covariates (listed here), the weight for each individual is inversely proportional to the (estimated) probability of his or her actual HREC status. The weighted sample can therefore be viewed as one where differential selection into HREC attributable to x has been eliminated; hence, to the extent that x contains all relevant confounders, the adjusted HR is equivalent to the exposure effect from a marginal structural proportional hazards model and can be interpreted as the causal effect of HREC on the event of interest [37,38]. All analyses were done using STATA Version SE/10 (College Station, Texas, USA). Results: There were 4,958 patients aged 14 years or older with CD4 counts of ≤100 cells/mm3 who initiated cART at one of the USAID-AMPATH clinics during the study period. Of these, 635 were enrolled into HREC. Reasons why patients were not enrolled into HREC included that the HREC programme had not yet been rolled out to a particular clinic, that the patient lived too far to allow them to attend clinic weekly or bi-weekly, and lack of available space in the clinic for expansion of the HREC programme. As summarized in Table 2 patients in Routine Care and HREC were similar with regard to gender distribution and age: 40% male with a median age of approximately 36 years. There were no significant differences between the groups with regard to baseline CD4 count or proportion receiving treatment for tuberculosis. Patients in HREC were slightly less likely to be WHO Stage III/IV at cART initiation (66% vs. 69%) and to have to travel at least 1 h to clinic (71% vs. 77%). They were more likely to be attending an urban clinic (61% vs. 52%), and using cotrimoxazole or dapsone prophylaxis at cART initiation (100% vs. 95%). The median follow-up time was 318 days (interquartile range 147-533). Socio-demographic and clinical characteristics of patients in High Risk Express Care (HREC) versus Routine Care N.B. The only variable for which there were missing data was WHO Stage, as described in the table. This variable was not included in the propensity score model as it was not a significant predictor of the outcomes of interest There were 426 deaths among the study population: 39 in HREC and 387 in Routine Care. The crude incidence rate of death during the follow-up period was 5.7 per 100 person-years in HREC compared with 10.6 per 100 person-years in Routine Care (incidence rate ratio, IRR: 0.54; 95% confidence interval, CI: 0.38-0.75) (Figure 1a). 
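As a quick arithmetic check, the crude incidence rate ratios quoted in this and the following paragraph can be reproduced directly from the reported rates per 100 person-years:

```python
# Crude incidence rate ratios recomputed from the rates quoted in the Results text
# (HREC rate divided by Routine Care rate, per 100 person-years).
print(round(5.7 / 10.6, 2))    # death: 0.54
print(round(18.7 / 29.7, 2))   # loss to follow up: 0.63
print(round(24.2 / 39.5, 2))   # death or LTFU: 0.61
```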
After adjustment for potential confounders, the HREC programme was associated with an approximately 40% reduced risk of death (adjusted HR: 0.59; 95% CI: 0.45-0.77). Figure 1. Kaplan-Meier curves demonstrating the effect of the High Risk Express Care compared with Routine Care among all high-risk patients on: a) their probability of survival; b) their probability of remaining in care (i.e., loss to follow up); and c) their probability of remaining alive and in care (n = 4958). a Probability of remaining alive after cART initiation. b Probability of remaining in care after cART initiation. c Probability of remaining alive and in care after cART. There were also 1,299 patients lost to follow up during the same period, including 134 in HREC and 1,165 in Routine Care. The crude incidence rate of LTFU among HREC was 18.7 per 100 person-years versus 29.7 in Routine Care (IRR: 0.63; 95% CI: 0.52-0.76) (Figure 1b). After adjustment, patients in HREC were also much less likely to become lost to follow up (AHR 0.62; 95% CI: 0.55-0.70). When we assessed the combined endpoint of LTFU and death, there were 1,725 events in 4639.5 person-years of follow up, for a crude incidence rate of 24.2 per 100 person-years in HREC versus 39.5 in Routine Care (IRR: 0.61; 95% CI: 0.52-0.72) (Figure 1c). Overall, the HREC patients were much more likely to be alive and in care after starting cART by the end of the study follow-up period (AHR 0.62; 95% CI: 0.57-0.67) (Table 3). Table 3. Adjusted Hazard Ratios (HR)* and 95% Confidence Intervals (CI) of the effect of High Risk Express Care (HREC) vs. Routine Care on: a) Death, b) Lost to Follow-up (LTFU), and c) Death or LTFU (combined endpoint) following cART initiation. *Adjusted using propensity scores for clinic, the use of cotrimoxazole or dapsone, CD4 count closest to cART initiation, receiving treatment for tuberculosis, gender, age, clinic type (Referral Hospital, Sub-District and District Hospitals, and Rural Health Centres), and travel time to clinic. Discussion: The first few months following initiation of cART are a critical time for severely immune-suppressed HIV-infected patients. These data suggest that more frequent monitoring of patients in the early months by a dedicated nurse can significantly improve both survival and retention in care among these high-risk patients. Although further evaluation is needed, this intervention may be relatively easy to implement in other resource-constrained environments. To our knowledge, frequent nurse-based rapid assessment is among the few interventions other than cotrimoxazole prophylaxis that have been associated with a profound reduction in early mortality among high-risk HIV-infected patients initiating cART in low-income settings [13,14]. We believe the effects of HREC stem from the combination of rapid and frequent assessments. Rapid, because it makes healthcare more accessible to patients (they do not have to wait as long or spend as much time in the clinic); frequent, because it makes it more likely that early warning symptoms (e.g., fever, rash) can be identified within days, as opposed to weeks, of their onset. If, for example, the monthly standard of care visits were simply made more rapid, symptoms such as fever or rash could go unattended for potentially weeks, thereby increasing the risk of full-blown immune-reconstitution disease, more severe toxicities, and other complications.
We also postulate that more frequent monitoring is effective at improving early patient outcomes through both direct and indirect mechanisms. For example, earlier identification of the signs and symptoms of drug toxicity, opportunistic infections and immune reconstitution syndrome likely leads to earlier interventions to address these issues; thus patients should experience a direct survival advantage in the short term. Indirectly, HREC may have improved adherence to cART and thus improved short-term outcomes. Adherence may be improved in the HREC population because of the weekly contact, reminders and support; as a result of improved adherence, patients will be more likely to experience complete virologic suppression, have an improved immune response, and be less likely to develop resistance, thereby indirectly contributing to survival benefits over the short and long term. HREC may have improved retention by enabling patients to be seen quickly without having to wait in long queues and without having to pass through multiple stations (i.e., spending much of the day at the clinic), thereby making their healthcare more accessible. There are some additional costs associated with HREC. For example, there are added direct costs to patients arising from increased transportation to and from the clinic. Patients may also have to miss more work because of more frequent clinic visits, translating into increased opportunity costs. The nurses working in HREC were hired explicitly for that purpose, which adds to the overall programme expense on personnel. Whether these additional costs are justified when weighed against the costs associated with increased morbidity, mortality and loss to follow up is the subject of a detailed cost-effectiveness analysis that is beyond the scope of the present evaluation. There are two key strengths to these findings. First, the standard of care in AMPATH is already to provide cotrimoxazole or dapsone until a patient's CD4 count is above 200 cells/mm3. These data have therefore been able to assess the effect of more frequent monitoring in a setting where the vast majority of patients were already receiving cotrimoxazole or dapsone. Second, the effect of HREC was strong across three different versions of the primary outcome, suggesting a robust effect. By evaluating the impact of HREC on both mortality and LTFU, we took account of the two crucial factors determining the success of an HIV treatment programme: keeping patients alive and in care. Moreover, we used statistical methods that appropriately and robustly adjusted for measured confounders. There are also limitations to this analysis. First, the choice of clinics in which HREC was first rolled out may have introduced a variety of potential biases into our analysis, related to a possibly higher quality of care offered at those clinics. For example, this may have created some selection bias (improved patient outcomes at those clinics irrespective of HREC, e.g., because of lower patient volumes or higher-functioning staff) and ascertainment bias (better ascertainment of death and other clinical outcomes at the higher-functioning clinics). Similarly, although all patients were eligible to be enrolled into HREC, only a small proportion were enrolled.
If the providers who were more likely to refer patients to HREC were also more likely to be current with clinical protocols (i.e., to prescribe cotrimoxazole) and/or more likely to be better clinicians, the patients they referred may have been more likely to have better outcomes anyway. However, for those patients referred, it was the HREC nurse who had the majority of clinical contact with them, thereby reducing any potential provider bias on the part of the referring clinician. Similarly, a slightly smaller proportion of patients enrolled into HREC were WHO Stage III/IV at treatment initiation. Although these issues may have biased the findings favourably towards HREC, the use of propensity score methods helps to overcome this possible bias because weighting in inverse proportion to the treatment propensity score creates a pseudo-sample wherein allocation to HREC is independent of the confounders that have been included in the propensity score. Hence the weighted dataset can be analyzed as if the group allocation were random. A limitation of this method is that allocation to HREC may remain non-random to the extent that receipt of HREC depended on unmeasured factors. We acknowledge that residual bias may therefore remain due to potential unmeasured confounders, including adherence to cART (not accounted for in this analysis due to unreliability of the data). Conclusions: In conclusion, we have demonstrated that weekly rapid assessments by nurses, either by phone or in person, with immediate referrals to clinical officers or physicians if needed, can significantly improve survival among high-risk HIV-infected patients initiating cART in a sub-Saharan African setting. Although the cost effectiveness of the Express Care model needs to be thoroughly evaluated, our experience and findings suggest that this may be an innovative way of increasing patient volume, improving the quality of care, and greatly improving patient outcomes in the short term. Competing interests: The authors declare that they have no competing interests. Authors' contributions: PB was primarily responsible for the writing of the manuscript. AS, RK, JS, JM, and SK are practicing physicians in Kenya who developed the Express Care programme and contributed significantly to the generation of hypotheses and interpretation of results for the manuscript. JH is the biostatistician on record for this analysis and provided significant input and technical assistance in the methodological approaches used. AK was the analyst for the study. KWK critically reviewed the manuscript for issues of design and interpretation. ES was the data manager responsible for providing and overseeing the data. All authors have read and approved the final version of this manuscript.
Background: In resource-poor settings, mortality is at its highest during the first 3 months after combination antiretroviral treatment (cART) initiation. A clear predictor of mortality during this period is having a low CD4 count at the time of treatment initiation. The objective of this study was to evaluate the effect on survival and clinic retention of a nurse-based rapid assessment clinic for high-risk individuals initiating cART in a resource-constrained setting. Methods: The USAID-AMPATH Partnership has enrolled more than 140,000 patients at 25 clinics throughout western Kenya. High Risk Express Care (HREC) provides weekly or bi-weekly rapid contacts with nurses for individuals initiating cART with CD4 counts of ≤100 cells/mm3. All HIV-infected individuals aged 14 years or older initiating cART with CD4 counts of ≤100 cells/mm3 were eligible for enrolment into HREC and for analysis. Adjusted hazard ratios (AHRs) control for potential confounding using propensity score methods. Results: Between March 2007 and March 2009, 4,958 patients initiated cART with CD4 counts of ≤100 cells/mm3. After adjusting for age, sex, CD4 count, use of cotrimoxazole, treatment for tuberculosis, travel time to clinic and type of clinic, individuals in HREC had reduced mortality (AHR: 0.59; 95% confidence interval: 0.45-0.77), and reduced loss to follow up (AHR: 0.62; 95% CI: 0.55-0.70) compared with individuals in routine care. Overall, patients in HREC were much more likely to be alive and in care after a median of nearly 11 months of follow up (AHR: 0.62; 95% CI: 0.57-0.67). Conclusions: Frequent monitoring by dedicated nurses in the early months of cART can significantly reduce mortality and loss to follow up among high-risk patients initiating treatment in resource-constrained settings.
Background: Combination antiretroviral treatment (cART) has proven itself to be an effective therapeutic mechanism for suppressing viral replication and enabling reconstitution of the immune system, thus allowing patients to recover and live with HIV disease as a chronic illness [1-3]. If adherence to the medications is high, severe immune-suppression is not present at cART initiation, and no significant co-morbidities, such as hepatitis C infection, exist, projections suggest that people living with HIV/AIDS have greatly improved long-term prognosis [4]. Despite the proven effectiveness of cART in low-income countries [5-9], mortality rates among patients in these settings are higher than those seen in high-income environments [10]. In resource-poor settings, mortality is at its highest during the first 3 months after cART initiation [9-12]. It is at least four times higher than rates in high-income countries in the first month of treatment [10]. Why mortality is at its highest during this period has been the subject of much debate and speculation. Reasons for these differences have been attributed to the non-use of cotrimoxazole prophylaxis [13,14], tuberculosis-associated immune reconstitution inflammatory syndrome (IRIS) [15-17], IRIS due to other opportunistic infections [18], and hepatotoxicity related to antiretroviral agents [19]. A consistently clear predictor of mortality during this period is having a low CD4 count at the time of treatment initiation [10,20]. Recent estimates by the World Health Organization (WHO) indicate that although 6.7 million individuals in low- and middle-income settings are receiving cART, this represents only 47% coverage of individuals who are in clinical need [21]. The massive scale up of HIV care and treatment programmes has required enormous investments, and still there is a substantial unmet need. Thus, the challenge presented to HIV care programmes operating in resource-poor settings is how to continue scaling up while simultaneously improving the outcomes of those enrolling in treatment programmes. As such, novel models of care, such as task shifting [22-24], which increase healthcare efficiency and improve patient outcomes, clearly need to be designed and tested. Here, we describe the impact of a nurse-clinician approach [25] on mortality and patient retention among severely immune-suppressed HIV-infected adults initiating cART within a large multi-centre HIV/AIDS care and treatment programme in western Kenya. Conclusions: In conclusion, we have demonstrated that weekly rapid assessments by nurses, either by phone or in person, with immediate referrals to clinical officers or physicians if needed, can significantly improve survival among high-risk HIV-infected patients initiating cART in a sub-Saharan African setting. Although the cost effectiveness of the Express Care model needs to be thoroughly evaluated, our experience and findings suggest that this may be an innovative way of increasing patient volume, improving the quality of care, and greatly improving patient outcomes in the short term.
Background: In resource-poor settings, mortality is at its highest during the first 3 months after combination antiretroviral treatment (cART) initiation. A clear predictor of mortality during this period is having a low CD4 count at the time of treatment initiation. The objective of this study was to evaluate the effect on survival and clinic retention of a nurse-based rapid assessment clinic for high-risk individuals initiating cART in a resource-constrained setting. Methods: The USAID-AMPATH Partnership has enrolled more than 140,000 patients at 25 clinics throughout western Kenya. High Risk Express Care (HREC) provides weekly or bi-weekly rapid contacts with nurses for individuals initiating cART with CD4 counts of ≤100 cells/mm3. All HIV-infected individuals aged 14 years or older initiating cART with CD4 counts of ≤100 cells/mm3 were eligible for enrolment into HREC and for analysis. Adjusted hazard ratios (AHRs) control for potential confounding using propensity score methods. Results: Between March 2007 and March 2009, 4,958 patients initiated cART with CD4 counts of ≤100 cells/mm3. After adjusting for age, sex, CD4 count, use of cotrimoxazole, treatment for tuberculosis, travel time to clinic and type of clinic, individuals in HREC had reduced mortality (AHR: 0.59; 95% confidence interval: 0.45-0.77), and reduced loss to follow up (AHR: 0.62; 95% CI: 0.55-0.70) compared with individuals in routine care. Overall, patients in HREC were much more likely to be alive and in care after a median of nearly 11 months of follow up (AHR: 0.62; 95% CI: 0.57-0.67). Conclusions: Frequent monitoring by dedicated nurses in the early months of cART can significantly reduce mortality and loss to follow up among high-risk patients initiating treatment in resource-constrained settings.
12,254
357
[ 474, 39, 1762, 758, 87, 41, 398, 592, 10, 116 ]
14
[ "hrec", "patients", "care", "clinical", "patient", "clinic", "cart", "clinical officer", "officer", "programme" ]
[ "barriers outcomes antiretroviral", "outcomes antiretroviral", "live hiv", "support hiv tuberculosis", "antiretroviral treatment cart" ]
[CONTENT] Antiretrovirals | Mortality | Losses to follow up | Adherence | Models of care | Africa [SUMMARY]
[CONTENT] Antiretrovirals | Mortality | Losses to follow up | Adherence | Models of care | Africa [SUMMARY]
[CONTENT] Antiretrovirals | Mortality | Losses to follow up | Adherence | Models of care | Africa [SUMMARY]
[CONTENT] Antiretrovirals | Mortality | Losses to follow up | Adherence | Models of care | Africa [SUMMARY]
[CONTENT] Antiretrovirals | Mortality | Losses to follow up | Adherence | Models of care | Africa [SUMMARY]
[CONTENT] Antiretrovirals | Mortality | Losses to follow up | Adherence | Models of care | Africa [SUMMARY]
[CONTENT] Adolescent | Adult | Ambulatory Care | Ambulatory Care Facilities | Anti-HIV Agents | CD4 Lymphocyte Count | Female | Follow-Up Studies | HIV Infections | House Calls | Humans | Kenya | Male | Middle Aged | Models, Nursing | Nurse Clinicians | Patient Compliance | Prospective Studies | Retrospective Studies | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Ambulatory Care | Ambulatory Care Facilities | Anti-HIV Agents | CD4 Lymphocyte Count | Female | Follow-Up Studies | HIV Infections | House Calls | Humans | Kenya | Male | Middle Aged | Models, Nursing | Nurse Clinicians | Patient Compliance | Prospective Studies | Retrospective Studies | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Ambulatory Care | Ambulatory Care Facilities | Anti-HIV Agents | CD4 Lymphocyte Count | Female | Follow-Up Studies | HIV Infections | House Calls | Humans | Kenya | Male | Middle Aged | Models, Nursing | Nurse Clinicians | Patient Compliance | Prospective Studies | Retrospective Studies | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Ambulatory Care | Ambulatory Care Facilities | Anti-HIV Agents | CD4 Lymphocyte Count | Female | Follow-Up Studies | HIV Infections | House Calls | Humans | Kenya | Male | Middle Aged | Models, Nursing | Nurse Clinicians | Patient Compliance | Prospective Studies | Retrospective Studies | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Ambulatory Care | Ambulatory Care Facilities | Anti-HIV Agents | CD4 Lymphocyte Count | Female | Follow-Up Studies | HIV Infections | House Calls | Humans | Kenya | Male | Middle Aged | Models, Nursing | Nurse Clinicians | Patient Compliance | Prospective Studies | Retrospective Studies | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Ambulatory Care | Ambulatory Care Facilities | Anti-HIV Agents | CD4 Lymphocyte Count | Female | Follow-Up Studies | HIV Infections | House Calls | Humans | Kenya | Male | Middle Aged | Models, Nursing | Nurse Clinicians | Patient Compliance | Prospective Studies | Retrospective Studies | Young Adult [SUMMARY]
[CONTENT] barriers outcomes antiretroviral | outcomes antiretroviral | live hiv | support hiv tuberculosis | antiretroviral treatment cart [SUMMARY]
[CONTENT] barriers outcomes antiretroviral | outcomes antiretroviral | live hiv | support hiv tuberculosis | antiretroviral treatment cart [SUMMARY]
[CONTENT] barriers outcomes antiretroviral | outcomes antiretroviral | live hiv | support hiv tuberculosis | antiretroviral treatment cart [SUMMARY]
[CONTENT] barriers outcomes antiretroviral | outcomes antiretroviral | live hiv | support hiv tuberculosis | antiretroviral treatment cart [SUMMARY]
[CONTENT] barriers outcomes antiretroviral | outcomes antiretroviral | live hiv | support hiv tuberculosis | antiretroviral treatment cart [SUMMARY]
[CONTENT] barriers outcomes antiretroviral | outcomes antiretroviral | live hiv | support hiv tuberculosis | antiretroviral treatment cart [SUMMARY]
[CONTENT] hrec | patients | care | clinical | patient | clinic | cart | clinical officer | officer | programme [SUMMARY]
[CONTENT] hrec | patients | care | clinical | patient | clinic | cart | clinical officer | officer | programme [SUMMARY]
[CONTENT] hrec | patients | care | clinical | patient | clinic | cart | clinical officer | officer | programme [SUMMARY]
[CONTENT] hrec | patients | care | clinical | patient | clinic | cart | clinical officer | officer | programme [SUMMARY]
[CONTENT] hrec | patients | care | clinical | patient | clinic | cart | clinical officer | officer | programme [SUMMARY]
[CONTENT] hrec | patients | care | clinical | patient | clinic | cart | clinical officer | officer | programme [SUMMARY]
[CONTENT] hiv | settings | immune | income | mortality | treatment | programmes | cart | low | need [SUMMARY]
[CONTENT] hrec | clinic | patients | officer | clinical officer | clinical | care | patient | officer physician | physician [SUMMARY]
[CONTENT] hrec | 95 | care | ci | years | routine care | probability remaining | 95 ci | person years | routine [SUMMARY]
[CONTENT] improving | initiating cart sub | setting cost | weekly rapid assessments | weekly rapid | patient volume improving | patient volume improving quality | immediate referrals clinical | care greatly | care greatly improving [SUMMARY]
[CONTENT] hrec | patients | care | clinic | cart | clinical | patient | clinical officer | officer | analysis [SUMMARY]
[CONTENT] hrec | patients | care | clinic | cart | clinical | patient | clinical officer | officer | analysis [SUMMARY]
[CONTENT] the first 3 months ||| ||| [SUMMARY]
[CONTENT] more than 140,000 | 25 | Kenya ||| High Risk Express Care ( | weekly | weekly ||| 14 years or | HREC ||| [SUMMARY]
[CONTENT] Between March 2007 and March 2009 | 4,958 ||| HREC | AHR | 0.59 | 95% | 0.45-0.77 | AHR | 0.62 | 95% | CI | 0.55 ||| HREC | nearly 11 months | AHR | 0.62 | 95% | CI | 0.57-0.67 [SUMMARY]
[CONTENT] the early months [SUMMARY]
[CONTENT] the first 3 months ||| ||| ||| more than 140,000 | 25 | Kenya ||| High Risk Express Care ( | weekly | weekly ||| 14 years or | HREC ||| ||| Between March 2007 and March 2009 | 4,958 ||| HREC | AHR | 0.59 | 95% | 0.45-0.77 | AHR | 0.62 | 95% | CI | 0.55 ||| HREC | nearly 11 months | AHR | 0.62 | 95% | CI | 0.57-0.67 ||| the early months [SUMMARY]
[CONTENT] the first 3 months ||| ||| ||| more than 140,000 | 25 | Kenya ||| High Risk Express Care ( | weekly | weekly ||| 14 years or | HREC ||| ||| Between March 2007 and March 2009 | 4,958 ||| HREC | AHR | 0.59 | 95% | 0.45-0.77 | AHR | 0.62 | 95% | CI | 0.55 ||| HREC | nearly 11 months | AHR | 0.62 | 95% | CI | 0.57-0.67 ||| the early months [SUMMARY]
An outpatient telehealth elective for displaced clinical learners during the COVID-19 pandemic.
33743676
In response to the COVID-19 pandemic, medical schools suspended clinical rotations. This displacement of medical students from wards has limited experiential learning. Concurrently, outpatient practices are experiencing reduced volumes of in-person visits and are shifting towards virtual healthcare, a transition that comes with its own logistical challenges. This article describes a workflow that enabled medical students to engage in meaningful clinical education while helping an institution's outpatient practices implement remote telemedicine visits.
BACKGROUND
A 4-week virtual elective was designed to allow clinical learners to participate in virtual telemedicine patient encounters. Students were prepared with EMR training and introduced to a novel workflow that supported healthcare providers in the outpatient setting. Patients were consented to telehealth services before encounters with medical students. All collected clinical information was documented in the EMR, after which students transitioned patients to a virtual Doxy.me video appointment. Surveys were used to evaluate clinical and educational outcomes of students' participation. Elective evaluations and student reflections were also collected.
METHODS
Survey results showed students felt well-prepared to initiate patient encounters. They expressed comfort while engaging with patients virtually during telemedicine appointments. Students identified clinical educational value, citing opportunities to develop patient management plans consistent with in-person experiences. A significant healthcare burden was also alleviated by student involvement. Over 1000 total scheduled appointments were serviced by students who transitioned more than 80 % of patients into virtual attending provider waiting rooms.
RESULTS
After piloting this elective with fourth-year students, pre-clerkship students were also recruited to act in a role normally associated with clinical learners (e.g., elicit patient histories, conduct a review of systems, etc.). Furthermore, additional telemedicine electives are being designed so medical students can contribute to patient care without risk of exposure to COVID-19. These efforts will allow students to continue with their clinical education during the pandemic. Medical educators can adopt a similar workflow to suit evolving remote learning needs.
CONCLUSIONS
[ "Ambulatory Care", "COVID-19", "Clinical Competence", "Curriculum", "Education, Distance", "Education, Medical, Undergraduate", "Humans", "Pandemics", "Pilot Projects", "SARS-CoV-2", "Telemedicine", "Workflow" ]
7980791
Background
On March 17th, 2020, the Association of American Medical Colleges (AAMC) issued strong guidance that medical schools suspend clinical rotations during the coronavirus pandemic, effective immediately [1]. A temporary and unified suspension of clinical teaching by all institutions would ensure medical student safety as new information about COVID-19 continued to emerge. Medical schools following this guidance developed innovative curricular modifications to deliver uncompromised education during the ongoing pandemic. This article reports the successful implementation of a new telehealth elective course designed to introduce fourth-year medical students to the use of technologies for telehealth at the Rutgers Robert Wood Johnson Medical School (RWJMS) in response to the crisis. The elective course was designed not only to provide students with a unique educational opportunity, but also to help patients of the Robert Wood Johnson Medical Group (RWJMG) transition rapidly from in-person to remote consultations. The Commonwealth Fund estimated outpatient practices scheduled less than half their regular volume of in-person visits throughout March to mitigate virus transmission [2]. For the numerous practices that struggle to care for their patient population, the ability to extend patient-provider relationships beyond traditional in-person visits is invaluable. Under normal circumstances, the RWJMG outpatient practices care for approximately 15,000 patients each week. Total patient volume decreased nearly 33 % in the last two weeks of March 2020 before remote visits were possible. By the end of April, the outpatient practices scheduled 6800 telemedicine visits per week. This transition was not without challenges, as provider illness and post-exposure quarantine requirements exacerbated an already stressed system during the pandemic’s initial wave. The 2015–2016 Liaison Committee on Medical Education (LCME) Annual Medical School Questionnaire reported heterogenous use of telemedicine in undergraduate medical education by one-half of medical schools surveyed [3]. A 2020 review of telemedicine in undergraduate medical education revealed varied integration of telemedicine into medical school curricula. 71 % of respondents discussed the basics of telemedicine during didactics. Telemedicine featured more practically during standardized patient encounters (59 %), patient encounters (53 %), interprofessional training (40 %), and scholarly projects (29 %) [3]. The pandemic has stimulated a surge in telehealth integration for undergraduate medical education, providing quality education while maintaining social distancing policies [4]. A telehealth workflow was developed in coordination with RWJMG practice managers to facilitate efficient diagnosis, treatment, management, and education virtually. Students participating in an elective course were instrumental during this transition, reducing the workforce burden while gaining valuable exposure to a modern healthcare intervention. The aim of this elective was to provide students with opportunities to utilize telemedicine technologies for authentic patient encounters and enable them to understand telehealth applications and factors affecting the ability of patients to connect remotely.
null
null
Results
Educational outcomes Over a four-week period (April 13 to May 8), 64 fourth-year medical students (38.1 % of the graduating class) completed the mandatory training session and logged contributed hours within the program. 29 (45.3 %) students contributed enough hours to qualify for elective credit, with an average of 35.4 h logged (median of 26.8 h). Three students logged over 80 h. Surveys identified many barriers to patient encounter initiation that were used by practice managers for process improvement. Students learned to troubleshoot with patients when appropriate and provided detailed solutions thereafter to ensure future telehealth appointments were successful. Social determinants of health were frequently recognized as barriers to successful telehealth delivery for underserved patients. Technological illiteracy was often cited in association with elderly patients. Despite these challenges, collected evaluation forms revealed students gained substantial experience with telehealth technologies. Many students received practical instruction about various aspects of telehealth and felt they were able to explore virtual patient encounter capabilities simultaneously. Students also felt they could connect with patients, understand patient perspectives, and determine patients’ understanding of their conditions (Fig. 2). Fig. 2Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding Students’ reflections provided narrative affirmation of objective data collected on the evaluation forms. Confidence was a recurrent motif, as students honed their abilities to elicit a history and counsel patients via video conference or telephone. Other topics discussed in students’ reflections include navigating technology, heightened awareness of barriers to telehealth use, and telehealth in medical education. Excerpts from students’ reflections are provided in Table 1 to highlight some of their experiences with telehealth. These excerpts are representative of the overwhelmingly positive response to the elective. For balance, some negative remarks were extracted from what otherwise were positive reflections. Table 1Student ExcerptsStudent 1This experience has been very enlightening regarding the strengths and limitations of telemedicine. Telemedicine’s strength, with the current level of technology, means that almost all patients will be able to connect with their physician at any time or place. Patients are usually more comfortable being interviewed in their own home and are more patient if they need to wait for their physician, as they can generally continue with their daily lives. This experience has been helpful for developing my ability to establish rapport while making my interviewing style more efficient. There have been many instances where I have been able to soothe anxious patients or family members with a few words or change in tone, thus leading to a better care experience.Student 2Like any experience for me, the patients were always the best part. 
I surprisingly was able to spend a lot of time talking to patients. Similar to how I have been feeling in the pandemic, my patient’s families made me realize that this is not easy for anyone. I will never forget this experience in my many years to come as a resident, attending and beyond.Student 3Participating in this telemedicine elective allowed me to spend time interacting with patients, learning the intricacies of telemedicine, and continuing to practice my history-taking skills in a novel situation. I truly enjoyed my time with patients. I worked with amazing physicians who demonstrated their flexibility and adaptability through hosting online, video appointments with their patients.Student 4Partaking in the telemedicine elective was a phenomenal experience for a number of reasons. First, it enabled us to help physicians meet an urgent demand and gave us an active role in rapidly developing a much-needed platform. Second, it gave us medical students early exposure into something that I feel is going to be a greater part of our careers as physicians, as the healthcare industry moves in a more virtual-platform-friendly direction. Lastly, it helped us hone our skills in taking patient histories, connecting with patients, and making them feel heard, as well as learning about the unique diagnoses and care plans that supersede patient’s chief complaints.Student 5Telehealth served an integral purpose for all patients who needed to access healthcare during the pandemic. Many physician-patient relationships will continue to be defined by the necessity of face-to-face visits, yet telehealth has established itself as a critical platform for increasing accessibility to health care. It was an incredible opportunity to be a part of this healthcare transformation, and it allowed me to remain connected to both my medical school and the local community in its time of need.Student 6Through our courses in medical school, we often discussed different approaches to patient-centered care and how to optimize our services. It was great to finally apply some of these principles to clinical practice, and I now feel comfortable connecting with patients through various platforms.Student 7Through this experience, I learned that much of the progress towards healing begins outside of the hospital, when patients are equipped with medical, social, and financial resources to address their health needs. I am grateful that the Telehealth Elective allowed me to develop virtual relationships with patients and support them during this unprecedented pandemic.Student 8Being able to participate in telehealth was a great experience and gave me an opportunity to understand what virtual patient care entails. In school, we often hear the word telehealth…but we don’t really get a sense of what that means or how it works. We have always exposed to the traditional method of in-person patient care. This elective allowed me to understand what exactly telehealth is, how it is setup, and the pros/cons of this method of patient care.Student 9Another aspect of the Telehealth Elective that was helpful to me was gaining experience in speaking to patients on the phone. Oftentimes this can be more challenging than speaking to patients in person, as there is less information to work with when you are on the phone with patients. 
Developing the skill of speaking to patients on the phone is something that takes practice and this elective helped me to do that.Student 10An area that I found irritating was that I wanted to be more involved in the care of these patients. Many of the physicians would get the patient in the room and tell me to move on to the next patient. However, one of the physicians allowed me to partake in one of her calls and listen to her counseling and management for the patients. Being able to take part in the call was much more enjoyable than simply rooming the patient and moving on to the next one. …Being confined at home for many months, I was not getting much clinical experience and I craved more.Student 11These patient visits taught me how much of clinical differential and plan can be shaped from the patient’s history alone. However, other visits would have benefited from the ability to use a stethoscope or explain pertinent information in-person. Telehealth was particularly challenging for patients who were unfamiliar with technology, as well as for patients who were less forthcoming with their acute concerns. There is a connection that forms from simply sharing the same space as another human being — it is one that fills the room with compassion and ensures the patient knows that they have the physician’s undivided attention.Student 12This experience highlighted the vast differences in technological knowledge/access to technology and how adding language challenges to this amplified the differences even more. …Nothing can replace the in-person office visit as there are inherent shortcomings to not being physically in the same room as the patient. …However, utilizing the telehealth platform was absolutely more beneficial…and allowed for the physician-patient relationship to continue even during these times. Student Excerpts Reflections also lent insight to the educational values of the elective. Interviewing patients, collecting the history, and documenting the encounter all provided significant educational value for students. Students who supported attending physicians with fewer scheduled patients were able to develop patient management plans and participate extensively during the Doxy.me patient encounter. Of the 34 students for whom reflections were received, 11 (32.4 %) mention debrief sessions with an attending physician after observation of and/or participation in patient-attending encounters. Unfortunately, this statistic does not capture the true frequency of student observation and participation – nor debrief sessions – as the majority of students chose to reflect upon other aspects of their experience.
Clinical outcomes Given the large selection of participating specialty and subspecialty attending physicians at RWJMG, students were able to serve a broad spectrum of patients. During the elective, 58 attending physicians from 3 specialties (internal medicine, pediatrics, neurology; inclusive of 17 total subspecialties) participated. The 64 fourth-year medical students serviced 1411 total scheduled appointments. The number of student-initiated patient encounters was high from the outset and increased significantly in subsequent weeks, with the percentage of patients successfully transitioned to a Doxy.me virtual encounter trending higher over time (Fig. 3). Fig. 3Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue) Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue) Several barriers (Fig. 4a, b) to successful telehealth utilization were encountered during the pilot period. Together, 197 (13.9 %) of 1411 total scheduled appointments were unable to be initiated for two primary reasons: patients either asked to reschedule (80, 41.5 %) or simply did not answer the phone (79, 40.9 %). 
Of the 1214 patient encounters that students were able to initiate, 992 (81.7 %) were successfully transitioned to a Doxy.me video call. Most patients from the remaining 222 encounters specifically requested transition to a traditional phone call (101, 46.5 %) or reported inability to access a compatible smartphone or computer at the time (71, 32.7 %). Despite assistance from students, 40 (18.4 %) patients were not adept enough with technology to enter the Doxy.me virtual waiting room. Student surveys contained valuable workflow feedback, which allowed clinic teams to modify the workflow where necessary to better meet patient needs. Fig. 4Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call
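The clinical outcomes above reduce to simple proportions of the reported encounter counts. The short sketch below recomputes the headline rates; the variable names and helper functions are illustrative only and do not reflect how the study team actually captured or tabulated the survey data.

```python
# Recompute the headline clinical-outcome rates from the counts reported above.
# Names are illustrative; they do not mirror the study's survey or EMR fields.

def pct(part: int, whole: int) -> float:
    """Return `part` as a percentage of `whole`."""
    return 100 * part / whole

scheduled = 1411                        # telehealth appointments scheduled, April 13 to May 8
not_initiated = 197                     # encounters students could not initiate
initiated = scheduled - not_initiated   # 1214 encounters reached by phone
video = 992                             # initiated encounters moved onto a Doxy.me video call

print(f"{pct(not_initiated, scheduled):.1f}%")  # ~14.0%; reported as 13.9 % in the text
print(f"{pct(video, initiated):.1f}%")          # 81.7% of initiated encounters reached video
print(f"{pct(29, 64):.1f}%")                    # 45.3% of participating students earned credit

def credit_weeks(hours_logged: float) -> int:
    """One week of elective credit was granted per 20 h of support contributed."""
    return int(hours_logged // 20)

print(credit_weeks(35.4))  # 1: a student logging the average 35.4 h earned one credit week
```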
Conclusions
During these unprecedented times, medical schools are developing innovative methods of remote educational instruction. This pilot study provides a workflow for displaced clinical learners to engage with patients via telehealth technologies, allowing them to remain clinically active through authentic patient encounters. The workflow was well received by students and enabled a large number of them to support the medical school's outpatient practices. With support from governing bodies, telehealth endeavors can become a permanent feature of medical school curricula.
[ "Background", "Methods", "Setting and objectives", "Outpatient workflow", "Educational outcomes", "Clinical outcomes", "Limitations" ]
[ "On March 17th, 2020, the Association of American Medical Colleges (AAMC) issued strong guidance that medical schools suspend clinical rotations during the coronavirus pandemic, effective immediately [1]. A temporary and unified suspension of clinical teaching by all institutions would ensure medical student safety as new information about COVID-19 continued to emerge. Medical schools following this guidance developed innovative curricular modifications to deliver uncompromised education during the ongoing pandemic. This article reports the successful implementation of a new telehealth elective course designed to introduce fourth-year medical students to the use of technologies for telehealth at the Rutgers Robert Wood Johnson Medical School (RWJMS) in response to the crisis.\nThe elective course was designed not only to provide students with a unique educational opportunity, but also to help patients of the Robert Wood Johnson Medical Group (RWJMG) transition rapidly from in-person to remote consultations. The Commonwealth Fund estimated outpatient practices scheduled less than half their regular volume of in-person visits throughout March to mitigate virus transmission [2]. For the numerous practices that struggle to care for their patient population, the ability to extend patient-provider relationships beyond traditional in-person visits is invaluable. Under normal circumstances, the RWJMG outpatient practices care for approximately 15,000 patients each week. Total patient volume decreased nearly 33 % in the last two weeks of March 2020 before remote visits were possible. By the end of April, the outpatient practices scheduled 6800 telemedicine visits per week. This transition was not without challenges, as provider illness and post-exposure quarantine requirements exacerbated an already stressed system during the pandemic’s initial wave.\nThe 2015–2016 Liaison Committee on Medical Education (LCME) Annual Medical School Questionnaire reported heterogenous use of telemedicine in undergraduate medical education by one-half of medical schools surveyed [3]. A 2020 review of telemedicine in undergraduate medical education revealed varied integration of telemedicine into medical school curricula. 71 % of respondents discussed the basics of telemedicine during didactics. Telemedicine featured more practically during standardized patient encounters (59 %), patient encounters (53 %), interprofessional training (40 %), and scholarly projects (29 %) [3]. The pandemic has stimulated a surge in telehealth integration for undergraduate medical education, providing quality education while maintaining social distancing policies [4]. A telehealth workflow was developed in coordination with RWJMG practice managers to facilitate efficient diagnosis, treatment, management, and education virtually. Students participating in an elective course were instrumental during this transition, reducing the workforce burden while gaining valuable exposure to a modern healthcare intervention.\nThe aim of this elective was to provide students with opportunities to utilize telemedicine technologies for authentic patient encounters and enable them to understand telehealth applications and factors affecting the ability of patients to connect remotely.", "Setting and objectives Fourth-year medical students were recruited throughout March and April 2020 to pilot the four-week service elective. 
To ensure satisfaction of curricular competencies, several educational objectives were defined for students to: (1) understand the broad applications of telehealth interventions in the outpatient setting, (2) develop fluency with telehealth technologies to support virtual appointments, (3) learn from clinicians and implement techniques to interact with patients during virtual appointments, (4) use telehealth modalities for assessment of clinical status and risk of decompensation, (5) develop a plan of care and counsel patients throughout virtual appointments, and (6) communicate with patients about health literacy and other social determinants of health affecting virtual appointment success. These educational objectives aligned with competencies subsequently published by the AAMC in their Competencies Across the Learning Continuum Series that addresses telehealth in September 2020 [5].\nOutpatient workflow Students attended one mandatory training session – conducted via Zoom video conference – where an overview of the teleconsultation workflow (Fig. 1), the electronic medical record (EMR) was demonstrated, and the Doximity and Doxy.me applications were introduced. Shift schedule logistics, use of interpretation services for non-English speaking patients, and common technology and connectivity issues were also discussed. All training sessions were recorded for students to review later.\nFig. 1Outpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nOutpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nFor attending physicians participating in the elective, a Microsoft Excel workbook was populated with their contact information and deidentified appointment times in half-day increments each time they requested student support. Automated emails were generated throughout each week to alert students as the Excel workbook populates with new attending physicians and their respective appointment times. Students could indicate availability to support attending physicians via a hyperlink provided in these emails. Students were required to contact each respective attending physician and confirm their desire for student support before servicing any patients. 
Attending physicians indicated whether students could observe or participate in patient encounters – as permitted by time and technology constraints – through subsequent correspondences.\nPatients scheduling a telemedicine appointment were told to expect a phone call from the office in lieu of standard check-in procedures. It was made clear by office staff that a smartphone or computer was required to complete video visits. On the day of patients’ scheduled appointments, students initiated contact by phone call approximately 30 min before their scheduled appointment time. The dialer feature of the Doximity application was used to protect personal phone numbers, and the subsequent patient encounter was documented in the EMR via remote access [6]. A new document type was created in the EMR for telehealth patient encounters to satisfy documentation requirements. This telehealth note requires information about the visit type (phone call or video call), the method used to connect the patient (phone, Doxy.me, or other), the patient’s location (state), and the attending physician’s location (clinical office, location other than clinical office, home or other non-affiliated location). Before collecting any clinical information, the patient was provided with information about the risks and benefits of telehealth services. Their verbal consent to participate in telehealth services was obtained and students proceeded with the patient encounter thereafter. Vital signs were then obtained from personal home health monitoring equipment if available (e.g., scales, thermometers, blood pressure monitors, etc.). All clinical information including history of current illness, review of systems, medication reconciliation, allergies, and any interval surgical, medical, or social history was entered into the telemedicine note where applicable.\nNext, students transitioned the patient from the initial phone call to a Doxy.me video call. This HIPAA-compliant telehealth system provides a cloud-based video conferencing platform with a virtual waiting room feature for providers to triage and connect with patients at their appointment time [7]. Students used skills attained from the training sessions to troubleshoot technological difficulties and other issues that may arise during the transition. Students held their preliminary note in the EMR for use by the attending physician once the transition to Doxy.me was successful. Attending physician schedules influenced whether a student joined the subsequent patient-attending consultation or instead exited the patient encounter. Students could also participate via telephone to preserve bandwidth needed for a three-person Doxy.me video conference. In rare cases, the patient-attending consultation was conducted through an alternate HIPAA-compliant video conference platform that also allowed student participation. Debrief sessions were held by attending physicians either after certain patient encounters or after a completed shift to provide feedback and education.\nTo be eligible for course credit, students had to complete a survey each time they logged contributed hours. The survey assessed barriers to successful telehealth utilization. Various appointment and workflow data were collected to enable retrospective analysis and real-time process improvement. One week of elective course credit was granted per 20 h of support contributed. 
Students wrote a one-page reflection on their experience with telehealth and virtual patient care and completed an elective evaluation form.
", "Fourth-year medical students were recruited throughout March and April 2020 to pilot the four-week service elective. To ensure satisfaction of curricular competencies, several educational objectives were defined for students to: (1) understand the broad applications of telehealth interventions in the outpatient setting, (2) develop fluency with telehealth technologies to support virtual appointments, (3) learn from clinicians and implement techniques to interact with patients during virtual appointments, (4) use telehealth modalities for assessment of clinical status and risk of decompensation, (5) develop a plan of care and counsel patients throughout virtual appointments, and (6) communicate with patients about health literacy and other social determinants of health affecting virtual appointment success. These educational objectives aligned with competencies subsequently published by the AAMC in their Competencies Across the Learning Continuum Series that addresses telehealth in September 2020 [5].", "Students attended one mandatory training session – conducted via Zoom video conference – where an overview of the teleconsultation workflow (Fig. 1), the electronic medical record (EMR) was demonstrated, and the Doximity and Doxy.me applications were introduced. Shift schedule logistics, use of interpretation services for non-English speaking patients, and common technology and connectivity issues were also discussed. All training sessions were recorded for students to review later.\nFig. 1Outpatient Workflow for Virtual Telehealth Patient Encounters. 
Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nOutpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nFor attending physicians participating in the elective, a Microsoft Excel workbook was populated with their contact information and deidentified appointment times in half-day increments each time they requested student support. Automated emails were generated throughout each week to alert students as the Excel workbook populates with new attending physicians and their respective appointment times. Students could indicate availability to support attending physicians via a hyperlink provided in these emails. Students were required to contact each respective attending physician and confirm their desire for student support before servicing any patients. Attending physicians indicated whether students could observe or participate in patient encounters – as permitted by time and technology constraints – through subsequent correspondences.\nPatients scheduling a telemedicine appointment were told to expect a phone call from the office in lieu of standard check-in procedures. It was made clear by office staff that a smartphone or computer was required to complete video visits. On the day of patients’ scheduled appointments, students initiated contact by phone call approximately 30 min before their scheduled appointment time. The dialer feature of the Doximity application was used to protect personal phone numbers, and the subsequent patient encounter was documented in the EMR via remote access [6]. A new document type was created in the EMR for telehealth patient encounters to satisfy documentation requirements. This telehealth note requires information about the visit type (phone call or video call), the method used to connect the patient (phone, Doxy.me, or other), the patient’s location (state), and the attending physician’s location (clinical office, location other than clinical office, home or other non-affiliated location). Before collecting any clinical information, the patient was provided with information about the risks and benefits of telehealth services. Their verbal consent to participate in telehealth services was obtained and students proceeded with the patient encounter thereafter. Vital signs were then obtained from personal home health monitoring equipment if available (e.g., scales, thermometers, blood pressure monitors, etc.). All clinical information including history of current illness, review of systems, medication reconciliation, allergies, and any interval surgical, medical, or social history was entered into the telemedicine note where applicable.\nNext, students transitioned the patient from the initial phone call to a Doxy.me video call. This HIPAA-compliant telehealth system provides a cloud-based video conferencing platform with a virtual waiting room feature for providers to triage and connect with patients at their appointment time [7]. Students used skills attained from the training sessions to troubleshoot technological difficulties and other issues that may arise during the transition. Students held their preliminary note in the EMR for use by the attending physician once the transition to Doxy.me was successful. Attending physician schedules influenced whether a student joined the subsequent patient-attending consultation or instead exited the patient encounter. 
Students could also participate via telephone to preserve bandwidth needed for a three-person Doxy.me video conference. In rare cases, the patient-attending consultation was conducted through an alternate HIPAA-compliant video conference platform that also allowed student participation. Debrief sessions were held by attending physicians either after certain patient encounters or after a completed shift to provide feedback and education.\nTo be eligible for course credit, students had to complete a survey each time they logged contributed hours. The survey assessed barriers to successful telehealth utilization. Various appointment and workflow data were collected to enable retrospective analysis and real-time process improvement. One week of elective course credit was granted per 20 h of support contributed. Students wrote a one-page reflection on their experience with telehealth and virtual patient care and completed an elective evaluation form.", "Over a four-week period (April 13 to May 8), 64 fourth-year medical students (38.1 % of the graduating class) completed the mandatory training session and logged contributed hours within the program. 29 (45.3 %) students contributed enough hours to qualify for elective credit, with an average of 35.4 h logged (median of 26.8 h). Three students logged over 80 h.\nSurveys identified many barriers to patient encounter initiation that were used by practice managers for process improvement. Students learned to troubleshoot with patients when appropriate and provided detailed solutions thereafter to ensure future telehealth appointments were successful. Social determinants of health were frequently recognized as barriers to successful telehealth delivery for underserved patients. Technological illiteracy was often cited in association with elderly patients. Despite these challenges, collected evaluation forms revealed students gained substantial experience with telehealth technologies. Many students received practical instruction about various aspects of telehealth and felt they were able to explore virtual patient encounter capabilities simultaneously. Students also felt they could connect with patients, understand patient perspectives, and determine patients’ understanding of their conditions (Fig. 2).\nFig. 2Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding\nMetrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding\nStudents’ reflections provided narrative affirmation of objective data collected on the evaluation forms. Confidence was a recurrent motif, as students honed their abilities to elicit a history and counsel patients via video conference or telephone. Other topics discussed in students’ reflections include navigating technology, heightened awareness of barriers to telehealth use, and telehealth in medical education. Excerpts from students’ reflections are provided in Table 1 to highlight some of their experiences with telehealth. These excerpts are representative of the overwhelmingly positive response to the elective. 
For balance, some negative remarks were extracted from what otherwise were positive reflections.\nTable 1Student ExcerptsStudent 1This experience has been very enlightening regarding the strengths and limitations of telemedicine. Telemedicine’s strength, with the current level of technology, means that almost all patients will be able to connect with their physician at any time or place. Patients are usually more comfortable being interviewed in their own home and are more patient if they need to wait for their physician, as they can generally continue with their daily lives. This experience has been helpful for developing my ability to establish rapport while making my interviewing style more efficient. There have been many instances where I have been able to soothe anxious patients or family members with a few words or change in tone, thus leading to a better care experience.Student 2Like any experience for me, the patients were always the best part. I surprisingly was able to spend a lot of time talking to patients. Similar to how I have been feeling in the pandemic, my patient’s families made me realize that this is not easy for anyone. I will never forget this experience in my many years to come as a resident, attending and beyond.Student 3Participating in this telemedicine elective allowed me to spend time interacting with patients, learning the intricacies of telemedicine, and continuing to practice my history-taking skills in a novel situation. I truly enjoyed my time with patients. I worked with amazing physicians who demonstrated their flexibility and adaptability through hosting online, video appointments with their patients.Student 4Partaking in the telemedicine elective was a phenomenal experience for a number of reasons. First, it enabled us to help physicians meet an urgent demand and gave us an active role in rapidly developing a much-needed platform. Second, it gave us medical students early exposure into something that I feel is going to be a greater part of our careers as physicians, as the healthcare industry moves in a more virtual-platform-friendly direction. Lastly, it helped us hone our skills in taking patient histories, connecting with patients, and making them feel heard, as well as learning about the unique diagnoses and care plans that supersede patient’s chief complaints.Student 5Telehealth served an integral purpose for all patients who needed to access healthcare during the pandemic. Many physician-patient relationships will continue to be defined by the necessity of face-to-face visits, yet telehealth has established itself as a critical platform for increasing accessibility to health care. It was an incredible opportunity to be a part of this healthcare transformation, and it allowed me to remain connected to both my medical school and the local community in its time of need.Student 6Through our courses in medical school, we often discussed different approaches to patient-centered care and how to optimize our services. It was great to finally apply some of these principles to clinical practice, and I now feel comfortable connecting with patients through various platforms.Student 7Through this experience, I learned that much of the progress towards healing begins outside of the hospital, when patients are equipped with medical, social, and financial resources to address their health needs. 
I am grateful that the Telehealth Elective allowed me to develop virtual relationships with patients and support them during this unprecedented pandemic.Student 8Being able to participate in telehealth was a great experience and gave me an opportunity to understand what virtual patient care entails. In school, we often hear the word telehealth…but we don’t really get a sense of what that means or how it works. We have always exposed to the traditional method of in-person patient care. This elective allowed me to understand what exactly telehealth is, how it is setup, and the pros/cons of this method of patient care.Student 9Another aspect of the Telehealth Elective that was helpful to me was gaining experience in speaking to patients on the phone. Oftentimes this can be more challenging than speaking to patients in person, as there is less information to work with when you are on the phone with patients. Developing the skill of speaking to patients on the phone is something that takes practice and this elective helped me to do that.Student 10An area that I found irritating was that I wanted to be more involved in the care of these patients. Many of the physicians would get the patient in the room and tell me to move on to the next patient. However, one of the physicians allowed me to partake in one of her calls and listen to her counseling and management for the patients. Being able to take part in the call was much more enjoyable than simply rooming the patient and moving on to the next one. …Being confined at home for many months, I was not getting much clinical experience and I craved more.Student 11These patient visits taught me how much of clinical differential and plan can be shaped from the patient’s history alone. However, other visits would have benefited from the ability to use a stethoscope or explain pertinent information in-person. Telehealth was particularly challenging for patients who were unfamiliar with technology, as well as for patients who were less forthcoming with their acute concerns. There is a connection that forms from simply sharing the same space as another human being — it is one that fills the room with compassion and ensures the patient knows that they have the physician’s undivided attention.Student 12This experience highlighted the vast differences in technological knowledge/access to technology and how adding language challenges to this amplified the differences even more. …Nothing can replace the in-person office visit as there are inherent shortcomings to not being physically in the same room as the patient. …However, utilizing the telehealth platform was absolutely more beneficial…and allowed for the physician-patient relationship to continue even during these times.\nStudent Excerpts\nReflections also lent insight to the educational values of the elective. Interviewing patients, collecting the history, and documenting the encounter all provided significant educational value for students. Students who supported attending physicians with fewer scheduled patients were able to develop patient management plans and participate extensively during the Doxy.me patient encounter. Of the 34 students for whom reflections were received, 11 (32.4 %) mention debrief sessions with an attending physician after observation of and/or participation in patient-attending encounters. 
Unfortunately, this statistic does not capture the true frequency of student observation and participation – nor debrief sessions – as the majority of students chose to reflect upon other aspects of their experience.", "Given the large selection of participating specialty and subspecialty attending physicians at RWJMG, students were able to serve a broad spectrum of patients. During the elective, 58 attending physicians from 3 specialties (internal medicine, pediatrics, neurology; inclusive of 17 total subspecialties) participated. The 64 fourth-year medical students serviced 1411 total scheduled appointments. The number of student-initiated patient encounters was high from the outset and increased significantly in subsequent weeks, with the percentage of patients successfully transitioned to a Doxy.me virtual encounter trending higher over time (Fig. 3).\nFig. 3Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue)\nMetrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue)\nSeveral barriers (Fig. 4a, b) to successful telehealth utilization were encountered during the pilot period. Together, 197 (13.9 %) of 1411 total scheduled appointments were unable to be initiated for two primary reasons: patients either asked to reschedule (80, 41.5 %) or simply did not answer the phone (79, 40.9 %). Of the 1214 patient encounters that students were able to initiate, 992 (81.7 %) were successfully transitioned to a Doxy.me video call. Most patients from the remaining 222 encounters specifically requested transition to a traditional phone call (101, 46.5 %) or reported inability to access a compatible smartphone or computer at the time (71, 32.7 %). Despite assistance from students, 40 (18.4 %) patients were not adept enough with technology to enter the Doxy.me virtual waiting room. Student surveys contained valuable workflow feedback, which allowed clinic teams to modify the workflow where necessary to better meet patient needs.\nFig. 4Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call\nBarriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call", "Elective evaluations were only completed by those students who requested elective credit. Of the 64 fourth-year medical students, only 29 completed enough sessions to receive credit. Many of the other students who participated significantly did not require credit and did not submit an evaluation.\nAmong the measured clinical outcomes, data collected about barriers to telehealth implementation are likely skewed. 
The pilot was conducted while the RWJMG outpatient practices were in an unprecedented state of flux, operating without their usual automated scheduling systems. Instances in which students were unable to initiate the patient encounter by phone became less frequent toward the end of the pilot as these RWJMG systems were reimplemented. Unfortunately, it was not possible to follow up with the patients who rescheduled to confirm that they eventually received care." ]
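As a quick sanity check on the clinical outcome counts reported above, the short Python sketch below re-derives the top-level funnel from scheduled appointments to completed Doxy.me video calls. The counts are copied from the results; the percentages are recomputed here, so rounding may differ slightly from the figures quoted in the text.

# Counts taken from the reported clinical outcomes; percentages recomputed below.
scheduled = 1411                       # total scheduled appointments serviced by students
not_initiated = 197                    # encounters that could not be initiated by phone
initiated = scheduled - not_initiated  # 1214 encounters reached by phone
transitioned = 992                     # encounters moved to a Doxy.me video call

print(f"initiated: {initiated} ({initiated / scheduled:.1%} of scheduled)")
print(f"not initiated: {not_initiated} ({not_initiated / scheduled:.1%} of scheduled)")
print(f"transitioned to video: {transitioned} ({transitioned / initiated:.1%} of initiated)")
print(f"phone-only or unsuccessful: {initiated - transitioned} "
      f"({(initiated - transitioned) / initiated:.1%} of initiated)")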
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Setting and objectives", "Outpatient workflow", "Results", "Educational outcomes", "Clinical outcomes", "Discussion", "Limitations", "Conclusions" ]
[ "On March 17th, 2020, the Association of American Medical Colleges (AAMC) issued strong guidance that medical schools suspend clinical rotations during the coronavirus pandemic, effective immediately [1]. A temporary and unified suspension of clinical teaching by all institutions would ensure medical student safety as new information about COVID-19 continued to emerge. Medical schools following this guidance developed innovative curricular modifications to deliver uncompromised education during the ongoing pandemic. This article reports the successful implementation of a new telehealth elective course designed to introduce fourth-year medical students to the use of technologies for telehealth at the Rutgers Robert Wood Johnson Medical School (RWJMS) in response to the crisis.\nThe elective course was designed not only to provide students with a unique educational opportunity, but also to help patients of the Robert Wood Johnson Medical Group (RWJMG) transition rapidly from in-person to remote consultations. The Commonwealth Fund estimated outpatient practices scheduled less than half their regular volume of in-person visits throughout March to mitigate virus transmission [2]. For the numerous practices that struggle to care for their patient population, the ability to extend patient-provider relationships beyond traditional in-person visits is invaluable. Under normal circumstances, the RWJMG outpatient practices care for approximately 15,000 patients each week. Total patient volume decreased nearly 33 % in the last two weeks of March 2020 before remote visits were possible. By the end of April, the outpatient practices scheduled 6800 telemedicine visits per week. This transition was not without challenges, as provider illness and post-exposure quarantine requirements exacerbated an already stressed system during the pandemic’s initial wave.\nThe 2015–2016 Liaison Committee on Medical Education (LCME) Annual Medical School Questionnaire reported heterogenous use of telemedicine in undergraduate medical education by one-half of medical schools surveyed [3]. A 2020 review of telemedicine in undergraduate medical education revealed varied integration of telemedicine into medical school curricula. 71 % of respondents discussed the basics of telemedicine during didactics. Telemedicine featured more practically during standardized patient encounters (59 %), patient encounters (53 %), interprofessional training (40 %), and scholarly projects (29 %) [3]. The pandemic has stimulated a surge in telehealth integration for undergraduate medical education, providing quality education while maintaining social distancing policies [4]. A telehealth workflow was developed in coordination with RWJMG practice managers to facilitate efficient diagnosis, treatment, management, and education virtually. Students participating in an elective course were instrumental during this transition, reducing the workforce burden while gaining valuable exposure to a modern healthcare intervention.\nThe aim of this elective was to provide students with opportunities to utilize telemedicine technologies for authentic patient encounters and enable them to understand telehealth applications and factors affecting the ability of patients to connect remotely.", "Setting and objectives Fourth-year medical students were recruited throughout March and April 2020 to pilot the four-week service elective. 
To ensure satisfaction of curricular competencies, several educational objectives were defined for students to: (1) understand the broad applications of telehealth interventions in the outpatient setting, (2) develop fluency with telehealth technologies to support virtual appointments, (3) learn from clinicians and implement techniques to interact with patients during virtual appointments, (4) use telehealth modalities for assessment of clinical status and risk of decompensation, (5) develop a plan of care and counsel patients throughout virtual appointments, and (6) communicate with patients about health literacy and other social determinants of health affecting virtual appointment success. These educational objectives aligned with competencies subsequently published by the AAMC in their Competencies Across the Learning Continuum Series that addresses telehealth in September 2020 [5].\nFourth-year medical students were recruited throughout March and April 2020 to pilot the four-week service elective. To ensure satisfaction of curricular competencies, several educational objectives were defined for students to: (1) understand the broad applications of telehealth interventions in the outpatient setting, (2) develop fluency with telehealth technologies to support virtual appointments, (3) learn from clinicians and implement techniques to interact with patients during virtual appointments, (4) use telehealth modalities for assessment of clinical status and risk of decompensation, (5) develop a plan of care and counsel patients throughout virtual appointments, and (6) communicate with patients about health literacy and other social determinants of health affecting virtual appointment success. These educational objectives aligned with competencies subsequently published by the AAMC in their Competencies Across the Learning Continuum Series that addresses telehealth in September 2020 [5].\nOutpatient workflow Students attended one mandatory training session – conducted via Zoom video conference – where an overview of the teleconsultation workflow (Fig. 1), the electronic medical record (EMR) was demonstrated, and the Doximity and Doxy.me applications were introduced. Shift schedule logistics, use of interpretation services for non-English speaking patients, and common technology and connectivity issues were also discussed. All training sessions were recorded for students to review later.\nFig. 1Outpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nOutpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nFor attending physicians participating in the elective, a Microsoft Excel workbook was populated with their contact information and deidentified appointment times in half-day increments each time they requested student support. Automated emails were generated throughout each week to alert students as the Excel workbook populates with new attending physicians and their respective appointment times. Students could indicate availability to support attending physicians via a hyperlink provided in these emails. Students were required to contact each respective attending physician and confirm their desire for student support before servicing any patients. 
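For readers who want a concrete picture of the scheduling step described above, the following is a minimal, hypothetical sketch in Python of how a workbook of attending availability could drive automated student alerts. The file name, column names, addresses, and sign-up URL are illustrative placeholders; the actual RWJMG workbook and email tooling are not specified in the article, and a CSV export is assumed in place of the Excel file.

import csv
from email.message import EmailMessage

# Hypothetical CSV export of the scheduling workbook; the column names below are
# illustrative and are not the actual RWJMG workbook schema.
WORKBOOK_CSV = "attending_availability.csv"   # columns: attending_name, attending_email, half_day_slot
STUDENT_EMAILS = ["student1@example.edu", "student2@example.edu"]   # placeholder recipients
SIGNUP_URL = "https://example.org/telehealth-signup"                # placeholder hyperlink

def load_new_slots(path):
    """Read the half-day slots that attending physicians have posted for student support."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def build_alert(slots, recipient):
    """Compose one alert email summarizing the newly posted half-day slots."""
    msg = EmailMessage()
    msg["Subject"] = "Telehealth elective: new half-day slots available"
    msg["To"] = recipient
    listing = "\n".join(f"- {s['attending_name']}: {s['half_day_slot']}" for s in slots)
    msg.set_content(
        "New attending physicians have requested student support:\n"
        f"{listing}\n\n"
        f"Indicate your availability here: {SIGNUP_URL}"
    )
    return msg

if __name__ == "__main__":
    slots = load_new_slots(WORKBOOK_CSV)
    for student in STUDENT_EMAILS:
        print(build_alert(slots, student))   # in practice, hand the message to an SMTP relay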
Attending physicians indicated whether students could observe or participate in patient encounters – as permitted by time and technology constraints – through subsequent correspondences.\nPatients scheduling a telemedicine appointment were told to expect a phone call from the office in lieu of standard check-in procedures. It was made clear by office staff that a smartphone or computer was required to complete video visits. On the day of patients’ scheduled appointments, students initiated contact by phone call approximately 30 min before their scheduled appointment time. The dialer feature of the Doximity application was used to protect personal phone numbers, and the subsequent patient encounter was documented in the EMR via remote access [6]. A new document type was created in the EMR for telehealth patient encounters to satisfy documentation requirements. This telehealth note requires information about the visit type (phone call or video call), the method used to connect the patient (phone, Doxy.me, or other), the patient’s location (state), and the attending physician’s location (clinical office, location other than clinical office, home or other non-affiliated location). Before collecting any clinical information, the patient was provided with information about the risks and benefits of telehealth services. Their verbal consent to participate in telehealth services was obtained and students proceeded with the patient encounter thereafter. Vital signs were then obtained from personal home health monitoring equipment if available (e.g., scales, thermometers, blood pressure monitors, etc.). All clinical information including history of current illness, review of systems, medication reconciliation, allergies, and any interval surgical, medical, or social history was entered into the telemedicine note where applicable.\nNext, students transitioned the patient from the initial phone call to a Doxy.me video call. This HIPAA-compliant telehealth system provides a cloud-based video conferencing platform with a virtual waiting room feature for providers to triage and connect with patients at their appointment time [7]. Students used skills attained from the training sessions to troubleshoot technological difficulties and other issues that may arise during the transition. Students held their preliminary note in the EMR for use by the attending physician once the transition to Doxy.me was successful. Attending physician schedules influenced whether a student joined the subsequent patient-attending consultation or instead exited the patient encounter. Students could also participate via telephone to preserve bandwidth needed for a three-person Doxy.me video conference. In rare cases, the patient-attending consultation was conducted through an alternate HIPAA-compliant video conference platform that also allowed student participation. Debrief sessions were held by attending physicians either after certain patient encounters or after a completed shift to provide feedback and education.\nTo be eligible for course credit, students had to complete a survey each time they logged contributed hours. The survey assessed barriers to successful telehealth utilization. Various appointment and workflow data were collected to enable retrospective analysis and real-time process improvement. One week of elective course credit was granted per 20 h of support contributed. 
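To make the documentation requirements above more concrete, here is a minimal sketch, in Python, of a record capturing the fields the new telehealth note type requires (visit type, connection method, patient and attending locations, verbal consent) together with the clinical elements students entered. The class and field names are illustrative only and do not correspond to the actual EMR document type.

from dataclasses import dataclass, field
from enum import Enum

class VisitType(Enum):
    PHONE = "phone call"
    VIDEO = "video call"

class ConnectionMethod(Enum):
    PHONE = "phone"
    DOXY_ME = "Doxy.me"
    OTHER = "other"

class AttendingLocation(Enum):
    CLINICAL_OFFICE = "clinical office"
    OTHER_CLINICAL = "location other than clinical office"
    HOME_OR_NON_AFFILIATED = "home or other non-affiliated location"

@dataclass
class TelehealthNote:
    """Minimal stand-in for the telehealth encounter document described above."""
    visit_type: VisitType
    connection_method: ConnectionMethod
    patient_state: str                          # patient's location (U.S. state)
    attending_location: AttendingLocation
    verbal_consent_obtained: bool               # documented before collecting clinical information
    vitals: dict = field(default_factory=dict)  # from home monitoring equipment, if available
    history_of_present_illness: str = ""
    review_of_systems: str = ""
    medication_reconciliation: str = ""
    allergies: str = ""
    interval_history: str = ""                  # interval surgical, medical, or social history

# Example: an encounter successfully transitioned to a Doxy.me video call.
note = TelehealthNote(
    visit_type=VisitType.VIDEO,
    connection_method=ConnectionMethod.DOXY_ME,
    patient_state="NJ",
    attending_location=AttendingLocation.HOME_OR_NON_AFFILIATED,
    verbal_consent_obtained=True,
    vitals={"temperature_f": 98.6, "blood_pressure": "118/76"},
)
print(note.visit_type.value, note.connection_method.value)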
Students wrote a one-page reflection on their experience with telehealth and virtual patient care and completed an elective evaluation form.\nStudents attended one mandatory training session – conducted via Zoom video conference – where an overview of the teleconsultation workflow (Fig. 1), the electronic medical record (EMR) was demonstrated, and the Doximity and Doxy.me applications were introduced. Shift schedule logistics, use of interpretation services for non-English speaking patients, and common technology and connectivity issues were also discussed. All training sessions were recorded for students to review later.\nFig. 1Outpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nOutpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nFor attending physicians participating in the elective, a Microsoft Excel workbook was populated with their contact information and deidentified appointment times in half-day increments each time they requested student support. Automated emails were generated throughout each week to alert students as the Excel workbook populates with new attending physicians and their respective appointment times. Students could indicate availability to support attending physicians via a hyperlink provided in these emails. Students were required to contact each respective attending physician and confirm their desire for student support before servicing any patients. Attending physicians indicated whether students could observe or participate in patient encounters – as permitted by time and technology constraints – through subsequent correspondences.\nPatients scheduling a telemedicine appointment were told to expect a phone call from the office in lieu of standard check-in procedures. It was made clear by office staff that a smartphone or computer was required to complete video visits. On the day of patients’ scheduled appointments, students initiated contact by phone call approximately 30 min before their scheduled appointment time. The dialer feature of the Doximity application was used to protect personal phone numbers, and the subsequent patient encounter was documented in the EMR via remote access [6]. A new document type was created in the EMR for telehealth patient encounters to satisfy documentation requirements. This telehealth note requires information about the visit type (phone call or video call), the method used to connect the patient (phone, Doxy.me, or other), the patient’s location (state), and the attending physician’s location (clinical office, location other than clinical office, home or other non-affiliated location). Before collecting any clinical information, the patient was provided with information about the risks and benefits of telehealth services. Their verbal consent to participate in telehealth services was obtained and students proceeded with the patient encounter thereafter. Vital signs were then obtained from personal home health monitoring equipment if available (e.g., scales, thermometers, blood pressure monitors, etc.). 
All clinical information including history of current illness, review of systems, medication reconciliation, allergies, and any interval surgical, medical, or social history was entered into the telemedicine note where applicable.\nNext, students transitioned the patient from the initial phone call to a Doxy.me video call. This HIPAA-compliant telehealth system provides a cloud-based video conferencing platform with a virtual waiting room feature for providers to triage and connect with patients at their appointment time [7]. Students used skills attained from the training sessions to troubleshoot technological difficulties and other issues that may arise during the transition. Students held their preliminary note in the EMR for use by the attending physician once the transition to Doxy.me was successful. Attending physician schedules influenced whether a student joined the subsequent patient-attending consultation or instead exited the patient encounter. Students could also participate via telephone to preserve bandwidth needed for a three-person Doxy.me video conference. In rare cases, the patient-attending consultation was conducted through an alternate HIPAA-compliant video conference platform that also allowed student participation. Debrief sessions were held by attending physicians either after certain patient encounters or after a completed shift to provide feedback and education.\nTo be eligible for course credit, students had to complete a survey each time they logged contributed hours. The survey assessed barriers to successful telehealth utilization. Various appointment and workflow data were collected to enable retrospective analysis and real-time process improvement. One week of elective course credit was granted per 20 h of support contributed. Students wrote a one-page reflection on their experience with telehealth and virtual patient care and completed an elective evaluation form.", "Fourth-year medical students were recruited throughout March and April 2020 to pilot the four-week service elective. To ensure satisfaction of curricular competencies, several educational objectives were defined for students to: (1) understand the broad applications of telehealth interventions in the outpatient setting, (2) develop fluency with telehealth technologies to support virtual appointments, (3) learn from clinicians and implement techniques to interact with patients during virtual appointments, (4) use telehealth modalities for assessment of clinical status and risk of decompensation, (5) develop a plan of care and counsel patients throughout virtual appointments, and (6) communicate with patients about health literacy and other social determinants of health affecting virtual appointment success. These educational objectives aligned with competencies subsequently published by the AAMC in their Competencies Across the Learning Continuum Series that addresses telehealth in September 2020 [5].", "Students attended one mandatory training session – conducted via Zoom video conference – where an overview of the teleconsultation workflow (Fig. 1), the electronic medical record (EMR) was demonstrated, and the Doximity and Doxy.me applications were introduced. Shift schedule logistics, use of interpretation services for non-English speaking patients, and common technology and connectivity issues were also discussed. All training sessions were recorded for students to review later.\nFig. 1Outpatient Workflow for Virtual Telehealth Patient Encounters. 
Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nOutpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers\nFor attending physicians participating in the elective, a Microsoft Excel workbook was populated with their contact information and deidentified appointment times in half-day increments each time they requested student support. Automated emails were generated throughout each week to alert students as the Excel workbook populates with new attending physicians and their respective appointment times. Students could indicate availability to support attending physicians via a hyperlink provided in these emails. Students were required to contact each respective attending physician and confirm their desire for student support before servicing any patients. Attending physicians indicated whether students could observe or participate in patient encounters – as permitted by time and technology constraints – through subsequent correspondences.\nPatients scheduling a telemedicine appointment were told to expect a phone call from the office in lieu of standard check-in procedures. It was made clear by office staff that a smartphone or computer was required to complete video visits. On the day of patients’ scheduled appointments, students initiated contact by phone call approximately 30 min before their scheduled appointment time. The dialer feature of the Doximity application was used to protect personal phone numbers, and the subsequent patient encounter was documented in the EMR via remote access [6]. A new document type was created in the EMR for telehealth patient encounters to satisfy documentation requirements. This telehealth note requires information about the visit type (phone call or video call), the method used to connect the patient (phone, Doxy.me, or other), the patient’s location (state), and the attending physician’s location (clinical office, location other than clinical office, home or other non-affiliated location). Before collecting any clinical information, the patient was provided with information about the risks and benefits of telehealth services. Their verbal consent to participate in telehealth services was obtained and students proceeded with the patient encounter thereafter. Vital signs were then obtained from personal home health monitoring equipment if available (e.g., scales, thermometers, blood pressure monitors, etc.). All clinical information including history of current illness, review of systems, medication reconciliation, allergies, and any interval surgical, medical, or social history was entered into the telemedicine note where applicable.\nNext, students transitioned the patient from the initial phone call to a Doxy.me video call. This HIPAA-compliant telehealth system provides a cloud-based video conferencing platform with a virtual waiting room feature for providers to triage and connect with patients at their appointment time [7]. Students used skills attained from the training sessions to troubleshoot technological difficulties and other issues that may arise during the transition. Students held their preliminary note in the EMR for use by the attending physician once the transition to Doxy.me was successful. Attending physician schedules influenced whether a student joined the subsequent patient-attending consultation or instead exited the patient encounter. 
Students could also participate via telephone to preserve bandwidth needed for a three-person Doxy.me video conference. In rare cases, the patient-attending consultation was conducted through an alternate HIPAA-compliant video conference platform that also allowed student participation. Debrief sessions were held by attending physicians either after certain patient encounters or after a completed shift to provide feedback and education.\nTo be eligible for course credit, students had to complete a survey each time they logged contributed hours. The survey assessed barriers to successful telehealth utilization. Various appointment and workflow data were collected to enable retrospective analysis and real-time process improvement. One week of elective course credit was granted per 20 h of support contributed. Students wrote a one-page reflection on their experience with telehealth and virtual patient care and completed an elective evaluation form.", "Educational outcomes Over a four-week period (April 13 to May 8), 64 fourth-year medical students (38.1 % of the graduating class) completed the mandatory training session and logged contributed hours within the program. 29 (45.3 %) students contributed enough hours to qualify for elective credit, with an average of 35.4 h logged (median of 26.8 h). Three students logged over 80 h.\nSurveys identified many barriers to patient encounter initiation that were used by practice managers for process improvement. Students learned to troubleshoot with patients when appropriate and provided detailed solutions thereafter to ensure future telehealth appointments were successful. Social determinants of health were frequently recognized as barriers to successful telehealth delivery for underserved patients. Technological illiteracy was often cited in association with elderly patients. Despite these challenges, collected evaluation forms revealed students gained substantial experience with telehealth technologies. Many students received practical instruction about various aspects of telehealth and felt they were able to explore virtual patient encounter capabilities simultaneously. Students also felt they could connect with patients, understand patient perspectives, and determine patients’ understanding of their conditions (Fig. 2).\nFig. 2Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding\nMetrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding\nStudents’ reflections provided narrative affirmation of objective data collected on the evaluation forms. Confidence was a recurrent motif, as students honed their abilities to elicit a history and counsel patients via video conference or telephone. Other topics discussed in students’ reflections include navigating technology, heightened awareness of barriers to telehealth use, and telehealth in medical education. Excerpts from students’ reflections are provided in Table 1 to highlight some of their experiences with telehealth. These excerpts are representative of the overwhelmingly positive response to the elective. 
For balance, some negative remarks were extracted from what otherwise were positive reflections.\nTable 1Student ExcerptsStudent 1This experience has been very enlightening regarding the strengths and limitations of telemedicine. Telemedicine’s strength, with the current level of technology, means that almost all patients will be able to connect with their physician at any time or place. Patients are usually more comfortable being interviewed in their own home and are more patient if they need to wait for their physician, as they can generally continue with their daily lives. This experience has been helpful for developing my ability to establish rapport while making my interviewing style more efficient. There have been many instances where I have been able to soothe anxious patients or family members with a few words or change in tone, thus leading to a better care experience.Student 2Like any experience for me, the patients were always the best part. I surprisingly was able to spend a lot of time talking to patients. Similar to how I have been feeling in the pandemic, my patient’s families made me realize that this is not easy for anyone. I will never forget this experience in my many years to come as a resident, attending and beyond.Student 3Participating in this telemedicine elective allowed me to spend time interacting with patients, learning the intricacies of telemedicine, and continuing to practice my history-taking skills in a novel situation. I truly enjoyed my time with patients. I worked with amazing physicians who demonstrated their flexibility and adaptability through hosting online, video appointments with their patients.Student 4Partaking in the telemedicine elective was a phenomenal experience for a number of reasons. First, it enabled us to help physicians meet an urgent demand and gave us an active role in rapidly developing a much-needed platform. Second, it gave us medical students early exposure into something that I feel is going to be a greater part of our careers as physicians, as the healthcare industry moves in a more virtual-platform-friendly direction. Lastly, it helped us hone our skills in taking patient histories, connecting with patients, and making them feel heard, as well as learning about the unique diagnoses and care plans that supersede patient’s chief complaints.Student 5Telehealth served an integral purpose for all patients who needed to access healthcare during the pandemic. Many physician-patient relationships will continue to be defined by the necessity of face-to-face visits, yet telehealth has established itself as a critical platform for increasing accessibility to health care. It was an incredible opportunity to be a part of this healthcare transformation, and it allowed me to remain connected to both my medical school and the local community in its time of need.Student 6Through our courses in medical school, we often discussed different approaches to patient-centered care and how to optimize our services. It was great to finally apply some of these principles to clinical practice, and I now feel comfortable connecting with patients through various platforms.Student 7Through this experience, I learned that much of the progress towards healing begins outside of the hospital, when patients are equipped with medical, social, and financial resources to address their health needs. 
I am grateful that the Telehealth Elective allowed me to develop virtual relationships with patients and support them during this unprecedented pandemic.Student 8Being able to participate in telehealth was a great experience and gave me an opportunity to understand what virtual patient care entails. In school, we often hear the word telehealth…but we don’t really get a sense of what that means or how it works. We have always exposed to the traditional method of in-person patient care. This elective allowed me to understand what exactly telehealth is, how it is setup, and the pros/cons of this method of patient care.Student 9Another aspect of the Telehealth Elective that was helpful to me was gaining experience in speaking to patients on the phone. Oftentimes this can be more challenging than speaking to patients in person, as there is less information to work with when you are on the phone with patients. Developing the skill of speaking to patients on the phone is something that takes practice and this elective helped me to do that.Student 10An area that I found irritating was that I wanted to be more involved in the care of these patients. Many of the physicians would get the patient in the room and tell me to move on to the next patient. However, one of the physicians allowed me to partake in one of her calls and listen to her counseling and management for the patients. Being able to take part in the call was much more enjoyable than simply rooming the patient and moving on to the next one. …Being confined at home for many months, I was not getting much clinical experience and I craved more.Student 11These patient visits taught me how much of clinical differential and plan can be shaped from the patient’s history alone. However, other visits would have benefited from the ability to use a stethoscope or explain pertinent information in-person. Telehealth was particularly challenging for patients who were unfamiliar with technology, as well as for patients who were less forthcoming with their acute concerns. There is a connection that forms from simply sharing the same space as another human being — it is one that fills the room with compassion and ensures the patient knows that they have the physician’s undivided attention.Student 12This experience highlighted the vast differences in technological knowledge/access to technology and how adding language challenges to this amplified the differences even more. …Nothing can replace the in-person office visit as there are inherent shortcomings to not being physically in the same room as the patient. …However, utilizing the telehealth platform was absolutely more beneficial…and allowed for the physician-patient relationship to continue even during these times.\nStudent Excerpts\nReflections also lent insight to the educational values of the elective. Interviewing patients, collecting the history, and documenting the encounter all provided significant educational value for students. Students who supported attending physicians with fewer scheduled patients were able to develop patient management plans and participate extensively during the Doxy.me patient encounter. Of the 34 students for whom reflections were received, 11 (32.4 %) mention debrief sessions with an attending physician after observation of and/or participation in patient-attending encounters. 
Unfortunately, this statistic does not capture the true frequency of student observation and participation – nor debrief sessions – as the majority of students chose to reflect upon other aspects of their experience.\nOver a four-week period (April 13 to May 8), 64 fourth-year medical students (38.1 % of the graduating class) completed the mandatory training session and logged contributed hours within the program. 29 (45.3 %) students contributed enough hours to qualify for elective credit, with an average of 35.4 h logged (median of 26.8 h). Three students logged over 80 h.\nSurveys identified many barriers to patient encounter initiation that were used by practice managers for process improvement. Students learned to troubleshoot with patients when appropriate and provided detailed solutions thereafter to ensure future telehealth appointments were successful. Social determinants of health were frequently recognized as barriers to successful telehealth delivery for underserved patients. Technological illiteracy was often cited in association with elderly patients. Despite these challenges, collected evaluation forms revealed students gained substantial experience with telehealth technologies. Many students received practical instruction about various aspects of telehealth and felt they were able to explore virtual patient encounter capabilities simultaneously. Students also felt they could connect with patients, understand patient perspectives, and determine patients’ understanding of their conditions (Fig. 2).\nFig. 2Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding\nMetrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding\nStudents’ reflections provided narrative affirmation of objective data collected on the evaluation forms. Confidence was a recurrent motif, as students honed their abilities to elicit a history and counsel patients via video conference or telephone. Other topics discussed in students’ reflections include navigating technology, heightened awareness of barriers to telehealth use, and telehealth in medical education. Excerpts from students’ reflections are provided in Table 1 to highlight some of their experiences with telehealth. These excerpts are representative of the overwhelmingly positive response to the elective. For balance, some negative remarks were extracted from what otherwise were positive reflections.\nTable 1Student ExcerptsStudent 1This experience has been very enlightening regarding the strengths and limitations of telemedicine. Telemedicine’s strength, with the current level of technology, means that almost all patients will be able to connect with their physician at any time or place. Patients are usually more comfortable being interviewed in their own home and are more patient if they need to wait for their physician, as they can generally continue with their daily lives. This experience has been helpful for developing my ability to establish rapport while making my interviewing style more efficient. 
There have been many instances where I have been able to soothe anxious patients or family members with a few words or change in tone, thus leading to a better care experience.Student 2Like any experience for me, the patients were always the best part. I surprisingly was able to spend a lot of time talking to patients. Similar to how I have been feeling in the pandemic, my patient’s families made me realize that this is not easy for anyone. I will never forget this experience in my many years to come as a resident, attending and beyond.Student 3Participating in this telemedicine elective allowed me to spend time interacting with patients, learning the intricacies of telemedicine, and continuing to practice my history-taking skills in a novel situation. I truly enjoyed my time with patients. I worked with amazing physicians who demonstrated their flexibility and adaptability through hosting online, video appointments with their patients.Student 4Partaking in the telemedicine elective was a phenomenal experience for a number of reasons. First, it enabled us to help physicians meet an urgent demand and gave us an active role in rapidly developing a much-needed platform. Second, it gave us medical students early exposure into something that I feel is going to be a greater part of our careers as physicians, as the healthcare industry moves in a more virtual-platform-friendly direction. Lastly, it helped us hone our skills in taking patient histories, connecting with patients, and making them feel heard, as well as learning about the unique diagnoses and care plans that supersede patient’s chief complaints.Student 5Telehealth served an integral purpose for all patients who needed to access healthcare during the pandemic. Many physician-patient relationships will continue to be defined by the necessity of face-to-face visits, yet telehealth has established itself as a critical platform for increasing accessibility to health care. It was an incredible opportunity to be a part of this healthcare transformation, and it allowed me to remain connected to both my medical school and the local community in its time of need.Student 6Through our courses in medical school, we often discussed different approaches to patient-centered care and how to optimize our services. It was great to finally apply some of these principles to clinical practice, and I now feel comfortable connecting with patients through various platforms.Student 7Through this experience, I learned that much of the progress towards healing begins outside of the hospital, when patients are equipped with medical, social, and financial resources to address their health needs. I am grateful that the Telehealth Elective allowed me to develop virtual relationships with patients and support them during this unprecedented pandemic.Student 8Being able to participate in telehealth was a great experience and gave me an opportunity to understand what virtual patient care entails. In school, we often hear the word telehealth…but we don’t really get a sense of what that means or how it works. We have always exposed to the traditional method of in-person patient care. This elective allowed me to understand what exactly telehealth is, how it is setup, and the pros/cons of this method of patient care.Student 9Another aspect of the Telehealth Elective that was helpful to me was gaining experience in speaking to patients on the phone. 
Oftentimes this can be more challenging than speaking to patients in person, as there is less information to work with when you are on the phone with patients. Developing the skill of speaking to patients on the phone is something that takes practice and this elective helped me to do that.Student 10An area that I found irritating was that I wanted to be more involved in the care of these patients. Many of the physicians would get the patient in the room and tell me to move on to the next patient. However, one of the physicians allowed me to partake in one of her calls and listen to her counseling and management for the patients. Being able to take part in the call was much more enjoyable than simply rooming the patient and moving on to the next one. …Being confined at home for many months, I was not getting much clinical experience and I craved more.Student 11These patient visits taught me how much of clinical differential and plan can be shaped from the patient’s history alone. However, other visits would have benefited from the ability to use a stethoscope or explain pertinent information in-person. Telehealth was particularly challenging for patients who were unfamiliar with technology, as well as for patients who were less forthcoming with their acute concerns. There is a connection that forms from simply sharing the same space as another human being — it is one that fills the room with compassion and ensures the patient knows that they have the physician’s undivided attention.Student 12This experience highlighted the vast differences in technological knowledge/access to technology and how adding language challenges to this amplified the differences even more. …Nothing can replace the in-person office visit as there are inherent shortcomings to not being physically in the same room as the patient. …However, utilizing the telehealth platform was absolutely more beneficial…and allowed for the physician-patient relationship to continue even during these times.\nStudent Excerpts\nReflections also lent insight to the educational values of the elective. Interviewing patients, collecting the history, and documenting the encounter all provided significant educational value for students. Students who supported attending physicians with fewer scheduled patients were able to develop patient management plans and participate extensively during the Doxy.me patient encounter. Of the 34 students for whom reflections were received, 11 (32.4 %) mention debrief sessions with an attending physician after observation of and/or participation in patient-attending encounters. Unfortunately, this statistic does not capture the true frequency of student observation and participation – nor debrief sessions – as the majority of students chose to reflect upon other aspects of their experience.\nClinical outcomes Given the large selection of participating specialty and subspecialty attending physicians at RWJMG, students were able to serve a broad spectrum of patients. During the elective, 58 attending physicians from 3 specialties (internal medicine, pediatrics, neurology; inclusive of 17 total subspecialties) participated. The 64 fourth-year medical students serviced 1411 total scheduled appointments. The number of student-initiated patient encounters was high from the outset and increased significantly in subsequent weeks, with the percentage of patients successfully transitioned to a Doxy.me virtual encounter trending higher over time (Fig. 3).\nFig. 
3Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue)\nMetrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue)\nSeveral barriers (Fig. 4a, b) to successful telehealth utilization were encountered during the pilot period. Together, 197 (13.9 %) of 1411 total scheduled appointments were unable to be initiated for two primary reasons: patients either asked to reschedule (80, 41.5 %) or simply did not answer the phone (79, 40.9 %). Of the 1214 patient encounters that students were able to initiate, 992 (81.7 %) were successfully transitioned to a Doxy.me video call. Most patients from the remaining 222 encounters specifically requested transition to a traditional phone call (101, 46.5 %) or reported inability to access a compatible smartphone or computer at the time (71, 32.7 %). Despite assistance from students, 40 (18.4 %) patients were not adept enough with technology to enter the Doxy.me virtual waiting room. Student surveys contained valuable workflow feedback, which allowed clinic teams to modify the workflow where necessary to better meet patient needs.\nFig. 4Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call\nBarriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call\nGiven the large selection of participating specialty and subspecialty attending physicians at RWJMG, students were able to serve a broad spectrum of patients. During the elective, 58 attending physicians from 3 specialties (internal medicine, pediatrics, neurology; inclusive of 17 total subspecialties) participated. The 64 fourth-year medical students serviced 1411 total scheduled appointments. The number of student-initiated patient encounters was high from the outset and increased significantly in subsequent weeks, with the percentage of patients successfully transitioned to a Doxy.me virtual encounter trending higher over time (Fig. 3).\nFig. 3Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue)\nMetrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). 
All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue)\nSeveral barriers (Fig. 4a, b) to successful telehealth utilization were encountered during the pilot period. Together, 197 (13.9 %) of 1411 total scheduled appointments were unable to be initiated for two primary reasons: patients either asked to reschedule (80, 41.5 %) or simply did not answer the phone (79, 40.9 %). Of the 1214 patient encounters that students were able to initiate, 992 (81.7 %) were successfully transitioned to a Doxy.me video call. Most patients from the remaining 222 encounters specifically requested transition to a traditional phone call (101, 46.5 %) or reported inability to access a compatible smartphone or computer at the time (71, 32.7 %). Despite assistance from students, 40 (18.4 %) patients were not adept enough with technology to enter the Doxy.me virtual waiting room. Student surveys contained valuable workflow feedback, which allowed clinic teams to modify the workflow where necessary to better meet patient needs.\nFig. 4Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call\nBarriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call", "Over a four-week period (April 13 to May 8), 64 fourth-year medical students (38.1 % of the graduating class) completed the mandatory training session and logged contributed hours within the program. 29 (45.3 %) students contributed enough hours to qualify for elective credit, with an average of 35.4 h logged (median of 26.8 h). Three students logged over 80 h.\nSurveys identified many barriers to patient encounter initiation that were used by practice managers for process improvement. Students learned to troubleshoot with patients when appropriate and provided detailed solutions thereafter to ensure future telehealth appointments were successful. Social determinants of health were frequently recognized as barriers to successful telehealth delivery for underserved patients. Technological illiteracy was often cited in association with elderly patients. Despite these challenges, collected evaluation forms revealed students gained substantial experience with telehealth technologies. Many students received practical instruction about various aspects of telehealth and felt they were able to explore virtual patient encounter capabilities simultaneously. Students also felt they could connect with patients, understand patient perspectives, and determine patients’ understanding of their conditions (Fig. 2).\nFig. 2Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding\nMetrics of Medical Student Comfort with Patients. 
Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding\nStudents’ reflections provided narrative affirmation of objective data collected on the evaluation forms. Confidence was a recurrent motif, as students honed their abilities to elicit a history and counsel patients via video conference or telephone. Other topics discussed in students’ reflections include navigating technology, heightened awareness of barriers to telehealth use, and telehealth in medical education. Excerpts from students’ reflections are provided in Table 1 to highlight some of their experiences with telehealth. These excerpts are representative of the overwhelmingly positive response to the elective. For balance, some negative remarks were extracted from what otherwise were positive reflections.\nTable 1Student ExcerptsStudent 1This experience has been very enlightening regarding the strengths and limitations of telemedicine. Telemedicine’s strength, with the current level of technology, means that almost all patients will be able to connect with their physician at any time or place. Patients are usually more comfortable being interviewed in their own home and are more patient if they need to wait for their physician, as they can generally continue with their daily lives. This experience has been helpful for developing my ability to establish rapport while making my interviewing style more efficient. There have been many instances where I have been able to soothe anxious patients or family members with a few words or change in tone, thus leading to a better care experience.Student 2Like any experience for me, the patients were always the best part. I surprisingly was able to spend a lot of time talking to patients. Similar to how I have been feeling in the pandemic, my patient’s families made me realize that this is not easy for anyone. I will never forget this experience in my many years to come as a resident, attending and beyond.Student 3Participating in this telemedicine elective allowed me to spend time interacting with patients, learning the intricacies of telemedicine, and continuing to practice my history-taking skills in a novel situation. I truly enjoyed my time with patients. I worked with amazing physicians who demonstrated their flexibility and adaptability through hosting online, video appointments with their patients.Student 4Partaking in the telemedicine elective was a phenomenal experience for a number of reasons. First, it enabled us to help physicians meet an urgent demand and gave us an active role in rapidly developing a much-needed platform. Second, it gave us medical students early exposure into something that I feel is going to be a greater part of our careers as physicians, as the healthcare industry moves in a more virtual-platform-friendly direction. Lastly, it helped us hone our skills in taking patient histories, connecting with patients, and making them feel heard, as well as learning about the unique diagnoses and care plans that supersede patient’s chief complaints.Student 5Telehealth served an integral purpose for all patients who needed to access healthcare during the pandemic. Many physician-patient relationships will continue to be defined by the necessity of face-to-face visits, yet telehealth has established itself as a critical platform for increasing accessibility to health care. 
It was an incredible opportunity to be a part of this healthcare transformation, and it allowed me to remain connected to both my medical school and the local community in its time of need.Student 6Through our courses in medical school, we often discussed different approaches to patient-centered care and how to optimize our services. It was great to finally apply some of these principles to clinical practice, and I now feel comfortable connecting with patients through various platforms.Student 7Through this experience, I learned that much of the progress towards healing begins outside of the hospital, when patients are equipped with medical, social, and financial resources to address their health needs. I am grateful that the Telehealth Elective allowed me to develop virtual relationships with patients and support them during this unprecedented pandemic.Student 8Being able to participate in telehealth was a great experience and gave me an opportunity to understand what virtual patient care entails. In school, we often hear the word telehealth…but we don’t really get a sense of what that means or how it works. We have always exposed to the traditional method of in-person patient care. This elective allowed me to understand what exactly telehealth is, how it is setup, and the pros/cons of this method of patient care.Student 9Another aspect of the Telehealth Elective that was helpful to me was gaining experience in speaking to patients on the phone. Oftentimes this can be more challenging than speaking to patients in person, as there is less information to work with when you are on the phone with patients. Developing the skill of speaking to patients on the phone is something that takes practice and this elective helped me to do that.Student 10An area that I found irritating was that I wanted to be more involved in the care of these patients. Many of the physicians would get the patient in the room and tell me to move on to the next patient. However, one of the physicians allowed me to partake in one of her calls and listen to her counseling and management for the patients. Being able to take part in the call was much more enjoyable than simply rooming the patient and moving on to the next one. …Being confined at home for many months, I was not getting much clinical experience and I craved more.Student 11These patient visits taught me how much of clinical differential and plan can be shaped from the patient’s history alone. However, other visits would have benefited from the ability to use a stethoscope or explain pertinent information in-person. Telehealth was particularly challenging for patients who were unfamiliar with technology, as well as for patients who were less forthcoming with their acute concerns. There is a connection that forms from simply sharing the same space as another human being — it is one that fills the room with compassion and ensures the patient knows that they have the physician’s undivided attention.Student 12This experience highlighted the vast differences in technological knowledge/access to technology and how adding language challenges to this amplified the differences even more. …Nothing can replace the in-person office visit as there are inherent shortcomings to not being physically in the same room as the patient. 
…However, utilizing the telehealth platform was absolutely more beneficial…and allowed for the physician-patient relationship to continue even during these times.\nStudent Excerpts\nReflections also lent insight to the educational values of the elective. Interviewing patients, collecting the history, and documenting the encounter all provided significant educational value for students. Students who supported attending physicians with fewer scheduled patients were able to develop patient management plans and participate extensively during the Doxy.me patient encounter. Of the 34 students for whom reflections were received, 11 (32.4 %) mention debrief sessions with an attending physician after observation of and/or participation in patient-attending encounters. Unfortunately, this statistic does not capture the true frequency of student observation and participation – nor debrief sessions – as the majority of students chose to reflect upon other aspects of their experience.", "Given the large selection of participating specialty and subspecialty attending physicians at RWJMG, students were able to serve a broad spectrum of patients. During the elective, 58 attending physicians from 3 specialties (internal medicine, pediatrics, neurology; inclusive of 17 total subspecialties) participated. The 64 fourth-year medical students serviced 1411 total scheduled appointments. The number of student-initiated patient encounters was high from the outset and increased significantly in subsequent weeks, with the percentage of patients successfully transitioned to a Doxy.me virtual encounter trending higher over time (Fig. 3).\nFig. 3Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue)\nMetrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue)\nSeveral barriers (Fig. 4a, b) to successful telehealth utilization were encountered during the pilot period. Together, 197 (13.9 %) of 1411 total scheduled appointments were unable to be initiated for two primary reasons: patients either asked to reschedule (80, 41.5 %) or simply did not answer the phone (79, 40.9 %). Of the 1214 patient encounters that students were able to initiate, 992 (81.7 %) were successfully transitioned to a Doxy.me video call. Most patients from the remaining 222 encounters specifically requested transition to a traditional phone call (101, 46.5 %) or reported inability to access a compatible smartphone or computer at the time (71, 32.7 %). Despite assistance from students, 40 (18.4 %) patients were not adept enough with technology to enter the Doxy.me virtual waiting room. Student surveys contained valuable workflow feedback, which allowed clinic teams to modify the workflow where necessary to better meet patient needs.\nFig. 4Barriers to Successful Telehealth Utilization. 
Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call", " The pilot employed a workflow enabling students to meaningfully participate in patient care within the telehealth setting. Students expressed increased confidence and comfort with telemedicine technologies and patient interviewing. Many reflections identified barriers to telehealth implementation and access (Table 1).\nThere were variations in encounter quality and teaching opportunities. Attending physicians were able to personalize their requests for student documentation and participation, with some encouraging student participation during the actual patient encounters or including subspecialty-specific review of systems. Although all attending physicians were asked to conduct debrief sessions when possible, time constraints and schedule conflicts reduced the number of attending physician-conducted educational moments. Most debrief sessions focused on the medical knowledge displayed through each student’s documentation.\nSince the pilot, efforts have expanded to include 100 second-year medical students. With these additional pre-clinical participants, the telemedicine elective has serviced thousands of patients from the RWJMG outpatient practices. This expansion demonstrates how the workflow can be adapted to accommodate large volumes of students. Another project has adapted the workflow to identify social determinants that prevent the successful adoption of telehealth. Student volunteers perform similar calls with a greater focus on addressing patients’ needs across multiple clinic sites.\nAs patient volumes stabilize in the outpatient clinics, the elective will increasingly focus on education. As in-person appointments become possible with social distancing protocols, fewer telemedicine visits per attending physician will promote uniformity among students’ experiences and increase educational moments. To ensure student exposure to various patient populations and pathophysiology, several specialty clinics adopted the presented workflow to create specialty-specific telehealth elective opportunities. The Pediatrics Clerkship has also integrated a modified outpatient workflow to supplement clinical experiences.\nAfter a successful four-week outpatient pilot, an inpatient telemedicine initiative was developed based on the Yale iCollaborative resources [8]. This initiative will allow medical students to contribute to inpatient care for patients under isolation. The primary objective of this initiative is to introduce students to telemedicine encounters with patients where appropriate without risk of exposure to COVID-19, while additional objectives require students to participate in rounds and practice case presentations, contribute to patient documentation, and act as patient liaisons after discharge to ensure patients successfully connect with the next phases of their care. 
Successful integration of this inpatient clinical curriculum will improve the capacity necessary for displaced students to engage with inpatients, albeit through a different medium.\nThe American Medical Association (AMA) recently published a playbook of telehealth workflows in support of recent national policy changes to reduce in-person visits and concomitant exposure risks to the coronavirus [9]. The AAMC further recognizes the ability of telehealth to support longitudinal patient care models across the continuum of practice [10]. Together, these AMA and AAMC recommendations establish a framework from which medical schools can build educational telehealth opportunities into their curricula. These objectives ultimately reflect the evolving demand for non-traditional digital healthcare services in periods of both crises and normalcy. Telemedicine workflows like the pilot’s can increase flexibility for clinical educators within this new framework.\nWith the development and implementation of these new telehealth initiatives, clinical learning experiences can be expanded in virtual settings. These novel educational experiences have the potential to make the clinical curriculum more resilient during this global pandemic and beyond.\nLimitations Elective evaluations were only completed by those students who requested elective credit. Of the 64 fourth-year medical students, only 29 completed enough sessions to receive credit. Many of the other students who participated significantly did not require credit and did not submit an evaluation.\nAmong the measured clinical outcomes, data collected about barriers to telehealth implementation are likely skewed. The pilot was conducted while the RWJMG outpatient practices existed in an unprecedented state of flux without their usual automated scheduling systems in place. Instances where students were unable to initiate the patient encounter via phone call were increasingly corrected towards the end of the pilot as these RWJMG systems were reimplemented. Unfortunately, it was not possible to follow up with these patients who were rescheduled to ensure they eventually received care.", "Elective evaluations were only completed by those students who requested elective credit. Of the 64 fourth-year medical students, only 29 completed enough sessions to receive credit. Many of the other students who participated significantly did not require credit and did not submit an evaluation.\nAmong the measured clinical outcomes, data collected about barriers to telehealth implementation are likely skewed. 
The pilot was conducted while the RWJMG outpatient practices existed in an unprecedented state of flux without their usual automated scheduling systems in place. Instances where students were unable to initiate the patient encounter via phone call were increasingly corrected towards the end of the pilot as these RWJMG systems were reimplemented. Unfortunately, it was not possible to follow up with these patients who were rescheduled to ensure they eventually received care.", "During these unprecedented times, medical schools are developing innovative methods of remote educational instruction. The pilot study provides a workflow for displaced clinical learners to engage with patients via telehealth technologies, allowing them to remain clinically active through authentic patient encounters. The workflow was well-received by students and allowed a large number of students to support the outpatient practices of the medical school. With support from governing bodies, telehealth endeavors can remain a permanent feature in medical school curricula." ]
[ null, null, null, null, "results", null, null, "discussion", null, "conclusion" ]
[ "Telemedicine elective", "Remote learning", "Outpatient clinic", "Coronavirus pandemic", "COVID-19" ]
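For readers consuming records like the two arrays above, the section-type labels (with null for unlabeled sections) are positionally aligned with the corresponding section texts elsewhere in the record. Below is a minimal sketch, using hypothetical variable names and placeholder texts, of pairing the two lists and keeping only the labeled sections; it is not part of the published dataset tooling.

```python
# Hypothetical sketch: pair section-type labels with their section texts and keep
# only the labeled ones. Variable names and placeholder texts are invented.
section_types = [None, None, None, None, "results", None, None, "discussion", None, "conclusion"]
section_texts = [f"<text of section {i}>" for i in range(len(section_types))]  # placeholders

labeled_sections = {
    sec_type: text
    for sec_type, text in zip(section_types, section_texts)
    if sec_type in {"results", "discussion", "conclusion"}
}

for name, text in labeled_sections.items():
    print(name, "->", text[:40])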
Background: On March 17th, 2020, the Association of American Medical Colleges (AAMC) issued strong guidance that medical schools suspend clinical rotations during the coronavirus pandemic, effective immediately [1]. A temporary and unified suspension of clinical teaching by all institutions would ensure medical student safety as new information about COVID-19 continued to emerge. Medical schools following this guidance developed innovative curricular modifications to deliver uncompromised education during the ongoing pandemic. This article reports the successful implementation of a new telehealth elective course designed to introduce fourth-year medical students to the use of technologies for telehealth at the Rutgers Robert Wood Johnson Medical School (RWJMS) in response to the crisis. The elective course was designed not only to provide students with a unique educational opportunity, but also to help patients of the Robert Wood Johnson Medical Group (RWJMG) transition rapidly from in-person to remote consultations. The Commonwealth Fund estimated outpatient practices scheduled less than half their regular volume of in-person visits throughout March to mitigate virus transmission [2]. For the numerous practices that struggle to care for their patient population, the ability to extend patient-provider relationships beyond traditional in-person visits is invaluable. Under normal circumstances, the RWJMG outpatient practices care for approximately 15,000 patients each week. Total patient volume decreased nearly 33 % in the last two weeks of March 2020 before remote visits were possible. By the end of April, the outpatient practices scheduled 6800 telemedicine visits per week. This transition was not without challenges, as provider illness and post-exposure quarantine requirements exacerbated an already stressed system during the pandemic’s initial wave. The 2015–2016 Liaison Committee on Medical Education (LCME) Annual Medical School Questionnaire reported heterogenous use of telemedicine in undergraduate medical education by one-half of medical schools surveyed [3]. A 2020 review of telemedicine in undergraduate medical education revealed varied integration of telemedicine into medical school curricula. 71 % of respondents discussed the basics of telemedicine during didactics. Telemedicine featured more practically during standardized patient encounters (59 %), patient encounters (53 %), interprofessional training (40 %), and scholarly projects (29 %) [3]. The pandemic has stimulated a surge in telehealth integration for undergraduate medical education, providing quality education while maintaining social distancing policies [4]. A telehealth workflow was developed in coordination with RWJMG practice managers to facilitate efficient diagnosis, treatment, management, and education virtually. Students participating in an elective course were instrumental during this transition, reducing the workforce burden while gaining valuable exposure to a modern healthcare intervention. The aim of this elective was to provide students with opportunities to utilize telemedicine technologies for authentic patient encounters and enable them to understand telehealth applications and factors affecting the ability of patients to connect remotely. Methods: Setting and objectives Fourth-year medical students were recruited throughout March and April 2020 to pilot the four-week service elective. 
To ensure satisfaction of curricular competencies, several educational objectives were defined for students to: (1) understand the broad applications of telehealth interventions in the outpatient setting, (2) develop fluency with telehealth technologies to support virtual appointments, (3) learn from clinicians and implement techniques to interact with patients during virtual appointments, (4) use telehealth modalities for assessment of clinical status and risk of decompensation, (5) develop a plan of care and counsel patients throughout virtual appointments, and (6) communicate with patients about health literacy and other social determinants of health affecting virtual appointment success. These educational objectives aligned with competencies subsequently published by the AAMC in their Competencies Across the Learning Continuum Series that addresses telehealth in September 2020 [5]. Outpatient workflow Students attended one mandatory training session – conducted via Zoom video conference – where an overview of the teleconsultation workflow (Fig. 1) and the electronic medical record (EMR) were demonstrated, and the Doximity and Doxy.me applications were introduced. Shift schedule logistics, use of interpretation services for non-English speaking patients, and common technology and connectivity issues were also discussed. All training sessions were recorded for students to review later. Fig. 1 Outpatient Workflow for Virtual Telehealth Patient Encounters. Swimlane diagram of the outpatient workflow developed in coordination with RWJMG practice managers. For attending physicians participating in the elective, a Microsoft Excel workbook was populated with their contact information and deidentified appointment times in half-day increments each time they requested student support. Automated emails were generated throughout each week to alert students as the Excel workbook was populated with new attending physicians and their respective appointment times. Students could indicate availability to support attending physicians via a hyperlink provided in these emails. Students were required to contact each respective attending physician and confirm their desire for student support before servicing any patients. 
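The scheduling step described above (an Excel workbook of attending availability plus automated email alerts containing a sign-up hyperlink) can be approximated in a few lines. The sketch below is illustrative only and is not the elective’s actual tooling; the workbook filename, column names, and sign-up link are assumptions.

```python
# Minimal sketch (not the course's actual system): read an availability workbook and
# draft one notification email per attending physician with newly posted half-day slots.
# Assumed columns: "attending", "half_day_slot". Delivery (SMTP, mail merge, etc.) is omitted.
import pandas as pd

def draft_notifications(workbook_path: str, signup_link: str) -> list[str]:
    """Return one plain-text email body per attending physician with open slots."""
    availability = pd.read_excel(workbook_path)  # reading .xlsx requires openpyxl
    drafts = []
    for attending, slots in availability.groupby("attending"):
        slot_list = "\n".join(f"- {s}" for s in slots["half_day_slot"])
        drafts.append(
            f"Subject: New telehealth shifts available with {attending}\n\n"
            f"The following half-day blocks were just posted:\n{slot_list}\n\n"
            f"Indicate your availability here: {signup_link}\n"
        )
    return drafts

if __name__ == "__main__":
    for body in draft_notifications("attending_availability.xlsx", "https://example.org/signup"):
        print(body)
```

In practice the elective used automated emails generated from the shared workbook; this sketch only shows how such a notification step could be scripted.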
Attending physicians indicated whether students could observe or participate in patient encounters – as permitted by time and technology constraints – through subsequent correspondences. Patients scheduling a telemedicine appointment were told to expect a phone call from the office in lieu of standard check-in procedures. It was made clear by office staff that a smartphone or computer was required to complete video visits. On the day of patients’ scheduled appointments, students initiated contact by phone call approximately 30 min before their scheduled appointment time. The dialer feature of the Doximity application was used to protect personal phone numbers, and the subsequent patient encounter was documented in the EMR via remote access [6]. A new document type was created in the EMR for telehealth patient encounters to satisfy documentation requirements. This telehealth note requires information about the visit type (phone call or video call), the method used to connect the patient (phone, Doxy.me, or other), the patient’s location (state), and the attending physician’s location (clinical office, location other than clinical office, home or other non-affiliated location). Before collecting any clinical information, the patient was provided with information about the risks and benefits of telehealth services. Their verbal consent to participate in telehealth services was obtained and students proceeded with the patient encounter thereafter. Vital signs were then obtained from personal home health monitoring equipment if available (e.g., scales, thermometers, blood pressure monitors, etc.). All clinical information including history of current illness, review of systems, medication reconciliation, allergies, and any interval surgical, medical, or social history was entered into the telemedicine note where applicable. Next, students transitioned the patient from the initial phone call to a Doxy.me video call. This HIPAA-compliant telehealth system provides a cloud-based video conferencing platform with a virtual waiting room feature for providers to triage and connect with patients at their appointment time [7]. Students used skills attained from the training sessions to troubleshoot technological difficulties and other issues that may arise during the transition. Students held their preliminary note in the EMR for use by the attending physician once the transition to Doxy.me was successful. Attending physician schedules influenced whether a student joined the subsequent patient-attending consultation or instead exited the patient encounter. Students could also participate via telephone to preserve bandwidth needed for a three-person Doxy.me video conference. In rare cases, the patient-attending consultation was conducted through an alternate HIPAA-compliant video conference platform that also allowed student participation. Debrief sessions were held by attending physicians either after certain patient encounters or after a completed shift to provide feedback and education. To be eligible for course credit, students had to complete a survey each time they logged contributed hours. The survey assessed barriers to successful telehealth utilization. Various appointment and workflow data were collected to enable retrospective analysis and real-time process improvement. One week of elective course credit was granted per 20 h of support contributed. Students wrote a one-page reflection on their experience with telehealth and virtual patient care and completed an elective evaluation form. 
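The documentation requirements listed above for the telehealth note (visit type, connection method, patient location, attending physician location, verbal consent) map naturally onto a small typed structure. This is an illustrative sketch only; the EMR template itself is not code, the class and field names are invented, and the enum values simply mirror the options named in the text.

```python
# Illustrative sketch of the telehealth note's required fields as a typed structure.
# Names are hypothetical; the enum values mirror the options described in the workflow.
from dataclasses import dataclass
from enum import Enum

class VisitType(Enum):
    PHONE_CALL = "phone call"
    VIDEO_CALL = "video call"

class ConnectionMethod(Enum):
    PHONE = "phone"
    DOXY_ME = "Doxy.me"
    OTHER = "other"

class AttendingLocation(Enum):
    CLINICAL_OFFICE = "clinical office"
    OTHER_THAN_CLINICAL_OFFICE = "location other than clinical office"
    HOME_OR_NON_AFFILIATED = "home or other non-affiliated location"

@dataclass
class TelehealthNote:
    visit_type: VisitType
    connection_method: ConnectionMethod
    patient_state: str                     # patient's location, recorded as a U.S. state
    attending_location: AttendingLocation
    verbal_consent_obtained: bool          # consent documented before clinical information is collected

# Example: a video visit successfully transitioned to Doxy.me.
note = TelehealthNote(
    visit_type=VisitType.VIDEO_CALL,
    connection_method=ConnectionMethod.DOXY_ME,
    patient_state="NJ",
    attending_location=AttendingLocation.CLINICAL_OFFICE,
    verbal_consent_obtained=True,
)
print(note)
```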
Results: Educational outcomes Over a four-week period (April 13 to May 8), 64 fourth-year medical students (38.1 % of the graduating class) completed the mandatory training session and logged contributed hours within the program. 29 (45.3 %) students contributed enough hours to qualify for elective credit, with an average of 35.4 h logged (median of 26.8 h). Three students logged over 80 h. Surveys identified many barriers to patient encounter initiation that were used by practice managers for process improvement. Students learned to troubleshoot with patients when appropriate and provided detailed solutions thereafter to ensure future telehealth appointments were successful. Social determinants of health were frequently recognized as barriers to successful telehealth delivery for underserved patients. Technological illiteracy was often cited in association with elderly patients. Despite these challenges, collected evaluation forms revealed students gained substantial experience with telehealth technologies. Many students received practical instruction about various aspects of telehealth and felt they were able to explore virtual patient encounter capabilities simultaneously. Students also felt they could connect with patients, understand patient perspectives, and determine patients’ understanding of their conditions (Fig. 2). Fig. 2 Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding. Students’ reflections provided narrative affirmation of objective data collected on the evaluation forms. Confidence was a recurrent motif, as students honed their abilities to elicit a history and counsel patients via video conference or telephone. Other topics discussed in students’ reflections include navigating technology, heightened awareness of barriers to telehealth use, and telehealth in medical education. Excerpts from students’ reflections are provided in Table 1 to highlight some of their experiences with telehealth. These excerpts are representative of the overwhelmingly positive response to the elective. For balance, some negative remarks were extracted from what otherwise were positive reflections.
Table 1Student ExcerptsStudent 1This experience has been very enlightening regarding the strengths and limitations of telemedicine. Telemedicine’s strength, with the current level of technology, means that almost all patients will be able to connect with their physician at any time or place. Patients are usually more comfortable being interviewed in their own home and are more patient if they need to wait for their physician, as they can generally continue with their daily lives. This experience has been helpful for developing my ability to establish rapport while making my interviewing style more efficient. There have been many instances where I have been able to soothe anxious patients or family members with a few words or change in tone, thus leading to a better care experience.Student 2Like any experience for me, the patients were always the best part. I surprisingly was able to spend a lot of time talking to patients. Similar to how I have been feeling in the pandemic, my patient’s families made me realize that this is not easy for anyone. I will never forget this experience in my many years to come as a resident, attending and beyond.Student 3Participating in this telemedicine elective allowed me to spend time interacting with patients, learning the intricacies of telemedicine, and continuing to practice my history-taking skills in a novel situation. I truly enjoyed my time with patients. I worked with amazing physicians who demonstrated their flexibility and adaptability through hosting online, video appointments with their patients.Student 4Partaking in the telemedicine elective was a phenomenal experience for a number of reasons. First, it enabled us to help physicians meet an urgent demand and gave us an active role in rapidly developing a much-needed platform. Second, it gave us medical students early exposure into something that I feel is going to be a greater part of our careers as physicians, as the healthcare industry moves in a more virtual-platform-friendly direction. Lastly, it helped us hone our skills in taking patient histories, connecting with patients, and making them feel heard, as well as learning about the unique diagnoses and care plans that supersede patient’s chief complaints.Student 5Telehealth served an integral purpose for all patients who needed to access healthcare during the pandemic. Many physician-patient relationships will continue to be defined by the necessity of face-to-face visits, yet telehealth has established itself as a critical platform for increasing accessibility to health care. It was an incredible opportunity to be a part of this healthcare transformation, and it allowed me to remain connected to both my medical school and the local community in its time of need.Student 6Through our courses in medical school, we often discussed different approaches to patient-centered care and how to optimize our services. It was great to finally apply some of these principles to clinical practice, and I now feel comfortable connecting with patients through various platforms.Student 7Through this experience, I learned that much of the progress towards healing begins outside of the hospital, when patients are equipped with medical, social, and financial resources to address their health needs. 
I am grateful that the Telehealth Elective allowed me to develop virtual relationships with patients and support them during this unprecedented pandemic.Student 8Being able to participate in telehealth was a great experience and gave me an opportunity to understand what virtual patient care entails. In school, we often hear the word telehealth…but we don’t really get a sense of what that means or how it works. We have always exposed to the traditional method of in-person patient care. This elective allowed me to understand what exactly telehealth is, how it is setup, and the pros/cons of this method of patient care.Student 9Another aspect of the Telehealth Elective that was helpful to me was gaining experience in speaking to patients on the phone. Oftentimes this can be more challenging than speaking to patients in person, as there is less information to work with when you are on the phone with patients. Developing the skill of speaking to patients on the phone is something that takes practice and this elective helped me to do that.Student 10An area that I found irritating was that I wanted to be more involved in the care of these patients. Many of the physicians would get the patient in the room and tell me to move on to the next patient. However, one of the physicians allowed me to partake in one of her calls and listen to her counseling and management for the patients. Being able to take part in the call was much more enjoyable than simply rooming the patient and moving on to the next one. …Being confined at home for many months, I was not getting much clinical experience and I craved more.Student 11These patient visits taught me how much of clinical differential and plan can be shaped from the patient’s history alone. However, other visits would have benefited from the ability to use a stethoscope or explain pertinent information in-person. Telehealth was particularly challenging for patients who were unfamiliar with technology, as well as for patients who were less forthcoming with their acute concerns. There is a connection that forms from simply sharing the same space as another human being — it is one that fills the room with compassion and ensures the patient knows that they have the physician’s undivided attention.Student 12This experience highlighted the vast differences in technological knowledge/access to technology and how adding language challenges to this amplified the differences even more. …Nothing can replace the in-person office visit as there are inherent shortcomings to not being physically in the same room as the patient. …However, utilizing the telehealth platform was absolutely more beneficial…and allowed for the physician-patient relationship to continue even during these times. Student Excerpts Reflections also lent insight to the educational values of the elective. Interviewing patients, collecting the history, and documenting the encounter all provided significant educational value for students. Students who supported attending physicians with fewer scheduled patients were able to develop patient management plans and participate extensively during the Doxy.me patient encounter. Of the 34 students for whom reflections were received, 11 (32.4 %) mention debrief sessions with an attending physician after observation of and/or participation in patient-attending encounters. 
Unfortunately, this statistic does not capture the true frequency of student observation and participation – nor debrief sessions – as the majority of students chose to reflect upon other aspects of their experience. Over a four-week period (April 13 to May 8), 64 fourth-year medical students (38.1 % of the graduating class) completed the mandatory training session and logged contributed hours within the program. 29 (45.3 %) students contributed enough hours to qualify for elective credit, with an average of 35.4 h logged (median of 26.8 h). Three students logged over 80 h. Surveys identified many barriers to patient encounter initiation that were used by practice managers for process improvement. Students learned to troubleshoot with patients when appropriate and provided detailed solutions thereafter to ensure future telehealth appointments were successful. Social determinants of health were frequently recognized as barriers to successful telehealth delivery for underserved patients. Technological illiteracy was often cited in association with elderly patients. Despite these challenges, collected evaluation forms revealed students gained substantial experience with telehealth technologies. Many students received practical instruction about various aspects of telehealth and felt they were able to explore virtual patient encounter capabilities simultaneously. Students also felt they could connect with patients, understand patient perspectives, and determine patients’ understanding of their conditions (Fig. 2). Fig. 2Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding Students’ reflections provided narrative affirmation of objective data collected on the evaluation forms. Confidence was a recurrent motif, as students honed their abilities to elicit a history and counsel patients via video conference or telephone. Other topics discussed in students’ reflections include navigating technology, heightened awareness of barriers to telehealth use, and telehealth in medical education. Excerpts from students’ reflections are provided in Table 1 to highlight some of their experiences with telehealth. These excerpts are representative of the overwhelmingly positive response to the elective. For balance, some negative remarks were extracted from what otherwise were positive reflections. Table 1Student ExcerptsStudent 1This experience has been very enlightening regarding the strengths and limitations of telemedicine. Telemedicine’s strength, with the current level of technology, means that almost all patients will be able to connect with their physician at any time or place. Patients are usually more comfortable being interviewed in their own home and are more patient if they need to wait for their physician, as they can generally continue with their daily lives. This experience has been helpful for developing my ability to establish rapport while making my interviewing style more efficient. 
There have been many instances where I have been able to soothe anxious patients or family members with a few words or change in tone, thus leading to a better care experience.Student 2Like any experience for me, the patients were always the best part. I surprisingly was able to spend a lot of time talking to patients. Similar to how I have been feeling in the pandemic, my patient’s families made me realize that this is not easy for anyone. I will never forget this experience in my many years to come as a resident, attending and beyond.Student 3Participating in this telemedicine elective allowed me to spend time interacting with patients, learning the intricacies of telemedicine, and continuing to practice my history-taking skills in a novel situation. I truly enjoyed my time with patients. I worked with amazing physicians who demonstrated their flexibility and adaptability through hosting online, video appointments with their patients.Student 4Partaking in the telemedicine elective was a phenomenal experience for a number of reasons. First, it enabled us to help physicians meet an urgent demand and gave us an active role in rapidly developing a much-needed platform. Second, it gave us medical students early exposure into something that I feel is going to be a greater part of our careers as physicians, as the healthcare industry moves in a more virtual-platform-friendly direction. Lastly, it helped us hone our skills in taking patient histories, connecting with patients, and making them feel heard, as well as learning about the unique diagnoses and care plans that supersede patient’s chief complaints.Student 5Telehealth served an integral purpose for all patients who needed to access healthcare during the pandemic. Many physician-patient relationships will continue to be defined by the necessity of face-to-face visits, yet telehealth has established itself as a critical platform for increasing accessibility to health care. It was an incredible opportunity to be a part of this healthcare transformation, and it allowed me to remain connected to both my medical school and the local community in its time of need.Student 6Through our courses in medical school, we often discussed different approaches to patient-centered care and how to optimize our services. It was great to finally apply some of these principles to clinical practice, and I now feel comfortable connecting with patients through various platforms.Student 7Through this experience, I learned that much of the progress towards healing begins outside of the hospital, when patients are equipped with medical, social, and financial resources to address their health needs. I am grateful that the Telehealth Elective allowed me to develop virtual relationships with patients and support them during this unprecedented pandemic.Student 8Being able to participate in telehealth was a great experience and gave me an opportunity to understand what virtual patient care entails. In school, we often hear the word telehealth…but we don’t really get a sense of what that means or how it works. We have always exposed to the traditional method of in-person patient care. This elective allowed me to understand what exactly telehealth is, how it is setup, and the pros/cons of this method of patient care.Student 9Another aspect of the Telehealth Elective that was helpful to me was gaining experience in speaking to patients on the phone. 
Oftentimes this can be more challenging than speaking to patients in person, as there is less information to work with when you are on the phone with patients. Developing the skill of speaking to patients on the phone is something that takes practice and this elective helped me to do that.Student 10An area that I found irritating was that I wanted to be more involved in the care of these patients. Many of the physicians would get the patient in the room and tell me to move on to the next patient. However, one of the physicians allowed me to partake in one of her calls and listen to her counseling and management for the patients. Being able to take part in the call was much more enjoyable than simply rooming the patient and moving on to the next one. …Being confined at home for many months, I was not getting much clinical experience and I craved more.Student 11These patient visits taught me how much of clinical differential and plan can be shaped from the patient’s history alone. However, other visits would have benefited from the ability to use a stethoscope or explain pertinent information in-person. Telehealth was particularly challenging for patients who were unfamiliar with technology, as well as for patients who were less forthcoming with their acute concerns. There is a connection that forms from simply sharing the same space as another human being — it is one that fills the room with compassion and ensures the patient knows that they have the physician’s undivided attention.Student 12This experience highlighted the vast differences in technological knowledge/access to technology and how adding language challenges to this amplified the differences even more. …Nothing can replace the in-person office visit as there are inherent shortcomings to not being physically in the same room as the patient. …However, utilizing the telehealth platform was absolutely more beneficial…and allowed for the physician-patient relationship to continue even during these times. Student Excerpts Reflections also lent insight to the educational values of the elective. Interviewing patients, collecting the history, and documenting the encounter all provided significant educational value for students. Students who supported attending physicians with fewer scheduled patients were able to develop patient management plans and participate extensively during the Doxy.me patient encounter. Of the 34 students for whom reflections were received, 11 (32.4 %) mention debrief sessions with an attending physician after observation of and/or participation in patient-attending encounters. Unfortunately, this statistic does not capture the true frequency of student observation and participation – nor debrief sessions – as the majority of students chose to reflect upon other aspects of their experience. Clinical outcomes Given the large selection of participating specialty and subspecialty attending physicians at RWJMG, students were able to serve a broad spectrum of patients. During the elective, 58 attending physicians from 3 specialties (internal medicine, pediatrics, neurology; inclusive of 17 total subspecialties) participated. The 64 fourth-year medical students serviced 1411 total scheduled appointments. The number of student-initiated patient encounters was high from the outset and increased significantly in subsequent weeks, with the percentage of patients successfully transitioned to a Doxy.me virtual encounter trending higher over time (Fig. 3). Fig. 
3Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue) Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue) Several barriers (Fig. 4a, b) to successful telehealth utilization were encountered during the pilot period. Together, 197 (13.9 %) of 1411 total scheduled appointments were unable to be initiated for two primary reasons: patients either asked to reschedule (80, 41.5 %) or simply did not answer the phone (79, 40.9 %). Of the 1214 patient encounters that students were able to initiate, 992 (81.7 %) were successfully transitioned to a Doxy.me video call. Most patients from the remaining 222 encounters specifically requested transition to a traditional phone call (101, 46.5 %) or reported inability to access a compatible smartphone or computer at the time (71, 32.7 %). Despite assistance from students, 40 (18.4 %) patients were not adept enough with technology to enter the Doxy.me virtual waiting room. Student surveys contained valuable workflow feedback, which allowed clinic teams to modify the workflow where necessary to better meet patient needs. Fig. 4Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call Given the large selection of participating specialty and subspecialty attending physicians at RWJMG, students were able to serve a broad spectrum of patients. During the elective, 58 attending physicians from 3 specialties (internal medicine, pediatrics, neurology; inclusive of 17 total subspecialties) participated. The 64 fourth-year medical students serviced 1411 total scheduled appointments. The number of student-initiated patient encounters was high from the outset and increased significantly in subsequent weeks, with the percentage of patients successfully transitioned to a Doxy.me virtual encounter trending higher over time (Fig. 3). Fig. 3Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue) Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). 
All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue) Several barriers (Fig. 4a, b) to successful telehealth utilization were encountered during the pilot period. Together, 197 (13.9 %) of 1411 total scheduled appointments were unable to be initiated for two primary reasons: patients either asked to reschedule (80, 41.5 %) or simply did not answer the phone (79, 40.9 %). Of the 1214 patient encounters that students were able to initiate, 992 (81.7 %) were successfully transitioned to a Doxy.me video call. Most patients from the remaining 222 encounters specifically requested transition to a traditional phone call (101, 46.5 %) or reported inability to access a compatible smartphone or computer at the time (71, 32.7 %). Despite assistance from students, 40 (18.4 %) patients were not adept enough with technology to enter the Doxy.me virtual waiting room. Student surveys contained valuable workflow feedback, which allowed clinic teams to modify the workflow where necessary to better meet patient needs. Fig. 4Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call Educational outcomes: Over a four-week period (April 13 to May 8), 64 fourth-year medical students (38.1 % of the graduating class) completed the mandatory training session and logged contributed hours within the program. 29 (45.3 %) students contributed enough hours to qualify for elective credit, with an average of 35.4 h logged (median of 26.8 h). Three students logged over 80 h. Surveys identified many barriers to patient encounter initiation that were used by practice managers for process improvement. Students learned to troubleshoot with patients when appropriate and provided detailed solutions thereafter to ensure future telehealth appointments were successful. Social determinants of health were frequently recognized as barriers to successful telehealth delivery for underserved patients. Technological illiteracy was often cited in association with elderly patients. Despite these challenges, collected evaluation forms revealed students gained substantial experience with telehealth technologies. Many students received practical instruction about various aspects of telehealth and felt they were able to explore virtual patient encounter capabilities simultaneously. Students also felt they could connect with patients, understand patient perspectives, and determine patients’ understanding of their conditions (Fig. 2). Fig. 2Metrics of Medical Student Comfort with Patients. Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding Metrics of Medical Student Comfort with Patients. 
Students reported their comfort engaging with patients by indicating their ability to (a) make a personal connection with the patient, (b) explore the patient’s perspective of their illness, (c) determine the accuracy of the patient’s understanding Students’ reflections provided narrative affirmation of objective data collected on the evaluation forms. Confidence was a recurrent motif, as students honed their abilities to elicit a history and counsel patients via video conference or telephone. Other topics discussed in students’ reflections include navigating technology, heightened awareness of barriers to telehealth use, and telehealth in medical education. Excerpts from students’ reflections are provided in Table 1 to highlight some of their experiences with telehealth. These excerpts are representative of the overwhelmingly positive response to the elective. For balance, some negative remarks were extracted from what otherwise were positive reflections. Table 1Student ExcerptsStudent 1This experience has been very enlightening regarding the strengths and limitations of telemedicine. Telemedicine’s strength, with the current level of technology, means that almost all patients will be able to connect with their physician at any time or place. Patients are usually more comfortable being interviewed in their own home and are more patient if they need to wait for their physician, as they can generally continue with their daily lives. This experience has been helpful for developing my ability to establish rapport while making my interviewing style more efficient. There have been many instances where I have been able to soothe anxious patients or family members with a few words or change in tone, thus leading to a better care experience.Student 2Like any experience for me, the patients were always the best part. I surprisingly was able to spend a lot of time talking to patients. Similar to how I have been feeling in the pandemic, my patient’s families made me realize that this is not easy for anyone. I will never forget this experience in my many years to come as a resident, attending and beyond.Student 3Participating in this telemedicine elective allowed me to spend time interacting with patients, learning the intricacies of telemedicine, and continuing to practice my history-taking skills in a novel situation. I truly enjoyed my time with patients. I worked with amazing physicians who demonstrated their flexibility and adaptability through hosting online, video appointments with their patients.Student 4Partaking in the telemedicine elective was a phenomenal experience for a number of reasons. First, it enabled us to help physicians meet an urgent demand and gave us an active role in rapidly developing a much-needed platform. Second, it gave us medical students early exposure into something that I feel is going to be a greater part of our careers as physicians, as the healthcare industry moves in a more virtual-platform-friendly direction. Lastly, it helped us hone our skills in taking patient histories, connecting with patients, and making them feel heard, as well as learning about the unique diagnoses and care plans that supersede patient’s chief complaints.Student 5Telehealth served an integral purpose for all patients who needed to access healthcare during the pandemic. Many physician-patient relationships will continue to be defined by the necessity of face-to-face visits, yet telehealth has established itself as a critical platform for increasing accessibility to health care. 
It was an incredible opportunity to be a part of this healthcare transformation, and it allowed me to remain connected to both my medical school and the local community in its time of need.Student 6Through our courses in medical school, we often discussed different approaches to patient-centered care and how to optimize our services. It was great to finally apply some of these principles to clinical practice, and I now feel comfortable connecting with patients through various platforms.Student 7Through this experience, I learned that much of the progress towards healing begins outside of the hospital, when patients are equipped with medical, social, and financial resources to address their health needs. I am grateful that the Telehealth Elective allowed me to develop virtual relationships with patients and support them during this unprecedented pandemic.Student 8Being able to participate in telehealth was a great experience and gave me an opportunity to understand what virtual patient care entails. In school, we often hear the word telehealth…but we don’t really get a sense of what that means or how it works. We have always exposed to the traditional method of in-person patient care. This elective allowed me to understand what exactly telehealth is, how it is setup, and the pros/cons of this method of patient care.Student 9Another aspect of the Telehealth Elective that was helpful to me was gaining experience in speaking to patients on the phone. Oftentimes this can be more challenging than speaking to patients in person, as there is less information to work with when you are on the phone with patients. Developing the skill of speaking to patients on the phone is something that takes practice and this elective helped me to do that.Student 10An area that I found irritating was that I wanted to be more involved in the care of these patients. Many of the physicians would get the patient in the room and tell me to move on to the next patient. However, one of the physicians allowed me to partake in one of her calls and listen to her counseling and management for the patients. Being able to take part in the call was much more enjoyable than simply rooming the patient and moving on to the next one. …Being confined at home for many months, I was not getting much clinical experience and I craved more.Student 11These patient visits taught me how much of clinical differential and plan can be shaped from the patient’s history alone. However, other visits would have benefited from the ability to use a stethoscope or explain pertinent information in-person. Telehealth was particularly challenging for patients who were unfamiliar with technology, as well as for patients who were less forthcoming with their acute concerns. There is a connection that forms from simply sharing the same space as another human being — it is one that fills the room with compassion and ensures the patient knows that they have the physician’s undivided attention.Student 12This experience highlighted the vast differences in technological knowledge/access to technology and how adding language challenges to this amplified the differences even more. …Nothing can replace the in-person office visit as there are inherent shortcomings to not being physically in the same room as the patient. …However, utilizing the telehealth platform was absolutely more beneficial…and allowed for the physician-patient relationship to continue even during these times. 
Reflections also lent insight to the educational values of the elective. Interviewing patients, collecting the history, and documenting the encounter all provided significant educational value for students. Students who supported attending physicians with fewer scheduled patients were able to develop patient management plans and participate extensively during the Doxy.me patient encounter. Of the 34 students for whom reflections were received, 11 (32.4 %) mention debrief sessions with an attending physician after observation of and/or participation in patient-attending encounters. Unfortunately, this statistic does not capture the true frequency of student observation and participation – nor debrief sessions – as the majority of students chose to reflect upon other aspects of their experience. Clinical outcomes: Given the large selection of participating specialty and subspecialty attending physicians at RWJMG, students were able to serve a broad spectrum of patients. During the elective, 58 attending physicians from 3 specialties (internal medicine, pediatrics, neurology; inclusive of 17 total subspecialties) participated. The 64 fourth-year medical students serviced 1411 total scheduled appointments. The number of student-initiated patient encounters was high from the outset and increased significantly in subsequent weeks, with the percentage of patients successfully transitioned to a Doxy.me virtual encounter trending higher over time (Fig. 3). Fig. 3 Metrics of Medical Student Engagement with Clinical Outpatient Practices. Trend of total scheduled telehealth appointments over a four-week period (April 13 to May 8) for RWJMG specialty clinics (internal medicine, pediatric, and neurology). All patient encounters successfully transitioned to a Doxy.me video call (dark blue) are illustrated as a percentage of initiated patient encounters (light blue). Several barriers (Fig. 4a, b) to successful telehealth utilization were encountered during the pilot period. Together, 197 (13.9 %) of 1411 total scheduled appointments were unable to be initiated for two primary reasons: patients either asked to reschedule (80, 41.5 %) or simply did not answer the phone (79, 40.9 %). Of the 1214 patient encounters that students were able to initiate, 992 (81.7 %) were successfully transitioned to a Doxy.me video call. Most patients from the remaining 222 encounters specifically requested transition to a traditional phone call (101, 46.5 %) or reported inability to access a compatible smartphone or computer at the time (71, 32.7 %). Despite assistance from students, 40 (18.4 %) patients were not adept enough with technology to enter the Doxy.me virtual waiting room. Student surveys contained valuable workflow feedback, which allowed clinic teams to modify the workflow where necessary to better meet patient needs. Fig. 4 Barriers to Successful Telehealth Utilization. Student survey responses that quantify instances where students were unable to (a) initiate the patient encounter and (b) transition initiated patient encounters to a Doxy.me video call.
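The counts above also make the reported rates easy to verify; the minimal Python sketch below (counts copied from the text, everything else illustrative rather than part of the study's analysis) recomputes the initiation and video-transition percentages.

```python
# Recompute the utilization rates reported above from the raw counts.
# Counts are copied from the text; the script itself is only illustrative.

scheduled = 1411                    # total scheduled appointments
not_initiated = 197                 # appointments that could not be initiated
initiated = scheduled - not_initiated          # 1214 encounters
video_transitioned = 992            # encounters moved to a Doxy.me video call

initiation_rate = initiated / scheduled
transition_rate = video_transitioned / initiated

print(f"Initiated: {initiation_rate:.1%} of scheduled appointments")   # ~86.0 %
print(f"Transitioned to video: {transition_rate:.1%} of initiated")    # ~81.7 %
```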
Discussion: The pilot employed a workflow enabling students to meaningfully participate in patient care within the telehealth setting. Students expressed increased confidence and comfort with telemedicine technologies and patient interviewing. Many reflections identified barriers to telehealth implementation and access (Table 1). There were variations in encounter quality and teaching opportunities. Attending physicians were able to personalize their requests for student documentation and participation, with some encouraging student participation during the actual patient encounters or including subspecialty-specific review of systems. Although all attending physicians were asked to conduct debrief sessions when possible, time constraints and schedule conflicts reduced the number of attending physician-conducted educational moments. Most debrief sessions focused on the medical knowledge displayed through each student’s documentation. Since the pilot, efforts have expanded to include 100 second-year medical students. With these additional pre-clinical participants, the telemedicine elective has serviced thousands of patients from the RWJMG outpatient practices. This expansion demonstrates how the workflow can be adapted to accommodate large volumes of students. Another project has adapted the workflow to identify social determinants that prevent the successful adoption of telehealth. Student volunteers perform similar calls with a greater focus on addressing patients’ needs across multiple clinic sites. As patient volumes stabilize in the outpatient clinics, the elective will increasingly focus on education. As in-person appointments become possible with social distancing protocols, fewer telemedicine visits per attending physician will promote uniformity among students’ experiences and increase educational moments. To ensure student exposure to various patient populations and pathophysiology, several specialty clinics adopted the presented workflow to create specialty-specific telehealth elective opportunities. The Pediatrics Clerkship has also integrated a modified outpatient workflow to supplement clinical experiences. After a successful four-week outpatient pilot, an inpatient telemedicine initiative was developed based on the Yale iCollaborative resources [8]. This initiative will allow medical students to contribute to inpatient care for patients under isolation. The primary objective of this initiative is to introduce students to telemedicine encounters with patients where appropriate without risk of exposure to COVID-19, while additional objectives require students to participate in rounds and practice case presentations, contribute to patient documentation, and act as patient liaisons after discharge to ensure patients successfully connect with the next phases of their care. Successful integration of this inpatient clinical curriculum will improve the capacity necessary for displaced students to engage with inpatients, albeit through a different medium. The American Medical Association (AMA) recently published a playbook of telehealth workflows in support of recent national policy changes to reduce in-person visits and concomitant exposure risks to the coronavirus [9]. The AAMC further recognizes the ability of telehealth to support longitudinal patient care models across the continuum of practice [10].
Together, these AMA and AAMC recommendations establish a framework from which medical schools can build educational telehealth opportunities into their curricula. These objectives ultimately reflect the evolving demand for non-traditional digital healthcare services in periods of both crises and normalcy. Telemedicine workflows like the pilot’s can increase flexibility for clinical educators within this new framework. With the development and implementation of these new telehealth initiatives, clinical learning experiences can be expanded in virtual settings. These novel educational experiences have the potential to make the clinical curriculum more resilient during this global pandemic and beyond. Limitations Elective evaluations were only completed by those students who requested elective credit. Of the 64 fourth-year medical students, only 29 completed enough sessions to receive credit. Many of the other students who participated significantly did not require credit and did not submit an evaluation. Among the measured clinical outcomes, data collected about barriers to telehealth implementation are likely skewed. The pilot was conducted while the RWJMG outpatient practices existed in an unprecedented state of flux without their usual automated scheduling systems in place. Instances where students were unable to initiate the patient encounter via phone call were increasingly corrected towards the end of the pilot as these RWJMG systems were reimplemented. Unfortunately, it was not possible to follow up with these patients who were rescheduled to ensure they eventually received care. Limitations: Elective evaluations were only completed by those students who requested elective credit. Of the 64 fourth-year medical students, only 29 completed enough sessions to receive credit. Many of the other students who participated significantly did not require credit and did not submit an evaluation. Among the measured clinical outcomes, data collected about barriers to telehealth implementation are likely skewed. The pilot was conducted while the RWJMG outpatient practices existed in an unprecedented state of flux without their usual automated scheduling systems in place. Instances where students were unable to initiate the patient encounter via phone call were increasingly corrected towards the end of the pilot as these RWJMG systems were reimplemented. Unfortunately, it was not possible to follow up with these patients who were rescheduled to ensure they eventually received care. Conclusions: During these unprecedented times, medical schools are developing innovative methods of remote educational instruction.
The pilot study provides a workflow for displaced clinical learners to engage with patients via telehealth technologies, allowing them to remain clinically active through authentic patient encounters. The workflow was well-received by students and allowed a large number of students to support the outpatient practices of the medical school. With support from governing bodies, telehealth endeavors can remain a permanent feature in medical school curricula.
Background: In response to the COVID-19 pandemic, medical schools suspended clinical rotations. This displacement of medical students from wards has limited experiential learning. Concurrently, outpatient practices are experiencing reduced volumes of in-person visits and are shifting towards virtual healthcare, a transition that comes with its own logistical challenges. This article describes a workflow that enabled medical students to engage in meaningful clinical education while helping an institution's outpatient practices implement remote telemedicine visits. Methods: A 4-week virtual elective was designed to allow clinical learners to participate in virtual telemedicine patient encounters. Students were prepared with EMR training and introduced to a novel workflow that supported healthcare providers in the outpatient setting. Patients were consented to telehealth services before encounters with medical students. All collected clinical information was documented in the EMR, after which students transitioned patients to a virtual Doxy.me video appointment. Surveys were used to evaluate clinical and educational outcomes of students' participation. Elective evaluations and student reflections were also collected. Results: Survey results showed students felt well-prepared to initiate patient encounters. They expressed comfort while engaging with patients virtually during telemedicine appointments. Students identified clinical educational value, citing opportunities to develop patient management plans consistent with in-person experiences. A significant healthcare burden was also alleviated by student involvement. Over 1000 total scheduled appointments were serviced by students who transitioned more than 80 % of patients into virtual attending provider waiting rooms. Conclusions: After piloting this elective with fourth-year students, pre-clerkship students were also recruited to act in a role normally associated with clinical learners (e.g., elicit patient histories, conduct a review of systems, etc.). Furthermore, additional telemedicine electives are being designed so medical students can contribute to patient care without risk of exposure to COVID-19. These efforts will allow students to continue with their clinical education during the pandemic. Medical educators can adopt a similar workflow to suit evolving remote learning needs.
Background: On March 17th, 2020, the Association of American Medical Colleges (AAMC) issued strong guidance that medical schools suspend clinical rotations during the coronavirus pandemic, effective immediately [1]. A temporary and unified suspension of clinical teaching by all institutions would ensure medical student safety as new information about COVID-19 continued to emerge. Medical schools following this guidance developed innovative curricular modifications to deliver uncompromised education during the ongoing pandemic. This article reports the successful implementation of a new telehealth elective course designed to introduce fourth-year medical students to the use of technologies for telehealth at the Rutgers Robert Wood Johnson Medical School (RWJMS) in response to the crisis. The elective course was designed not only to provide students with a unique educational opportunity, but also to help patients of the Robert Wood Johnson Medical Group (RWJMG) transition rapidly from in-person to remote consultations. The Commonwealth Fund estimated outpatient practices scheduled less than half their regular volume of in-person visits throughout March to mitigate virus transmission [2]. For the numerous practices that struggle to care for their patient population, the ability to extend patient-provider relationships beyond traditional in-person visits is invaluable. Under normal circumstances, the RWJMG outpatient practices care for approximately 15,000 patients each week. Total patient volume decreased nearly 33 % in the last two weeks of March 2020 before remote visits were possible. By the end of April, the outpatient practices scheduled 6800 telemedicine visits per week. This transition was not without challenges, as provider illness and post-exposure quarantine requirements exacerbated an already stressed system during the pandemic’s initial wave. The 2015–2016 Liaison Committee on Medical Education (LCME) Annual Medical School Questionnaire reported heterogenous use of telemedicine in undergraduate medical education by one-half of medical schools surveyed [3]. A 2020 review of telemedicine in undergraduate medical education revealed varied integration of telemedicine into medical school curricula. 71 % of respondents discussed the basics of telemedicine during didactics. Telemedicine featured more practically during standardized patient encounters (59 %), patient encounters (53 %), interprofessional training (40 %), and scholarly projects (29 %) [3]. The pandemic has stimulated a surge in telehealth integration for undergraduate medical education, providing quality education while maintaining social distancing policies [4]. A telehealth workflow was developed in coordination with RWJMG practice managers to facilitate efficient diagnosis, treatment, management, and education virtually. Students participating in an elective course were instrumental during this transition, reducing the workforce burden while gaining valuable exposure to a modern healthcare intervention. The aim of this elective was to provide students with opportunities to utilize telemedicine technologies for authentic patient encounters and enable them to understand telehealth applications and factors affecting the ability of patients to connect remotely. Conclusions: During these unprecedented times, medical schools are developing innovative methods of remote educational instruction. 
The pilot study provides a workflow for displaced clinical learners to engage with patients via telehealth technologies, allowing them to remain clinically active through authentic patient encounters. The workflow was well-received by students and allowed a large number of students to support the outpatient practices of the medical school. With support from governing bodies, telehealth endeavors can remain a permanent feature in medical school curricula.
Background: In response to the COVID-19 pandemic, medical schools suspended clinical rotations. This displacement of medical students from wards has limited experiential learning. Concurrently, outpatient practices are experiencing reduced volumes of in-person visits and are shifting towards virtual healthcare, a transition that comes with its own logistical challenges. This article describes a workflow that enabled medical students to engage in meaningful clinical education while helping an institution's outpatient practices implement remote telemedicine visits. Methods: A 4-week virtual elective was designed to allow clinical learners to participate in virtual telemedicine patient encounters. Students were prepared with EMR training and introduced to a novel workflow that supported healthcare providers in the outpatient setting. Patients were consented to telehealth services before encounters with medical students. All collected clinical information was documented in the EMR, after which students transitioned patients to a virtual Doxy.me video appointment. Surveys were used to evaluate clinical and educational outcomes of students' participation. Elective evaluations and student reflections were also collected. Results: Survey results showed students felt well-prepared to initiate patient encounters. They expressed comfort while engaging with patients virtually during telemedicine appointments. Students identified clinical educational value, citing opportunities to develop patient management plans consistent with in-person experiences. A significant healthcare burden was also alleviated by student involvement. Over 1000 total scheduled appointments were serviced by students who transitioned more than 80 % of patients into virtual attending provider waiting rooms. Conclusions: After piloting this elective with fourth-year students, pre-clerkship students were also recruited to act in a role normally associated with clinical learners (e.g., elicit patient histories, conduct a review of systems, etc.). Furthermore, additional telemedicine electives are being designed so medical students can contribute to patient care without risk of exposure to COVID-19. These efforts will allow students to continue with their clinical education during the pandemic. Medical educators can adopt a similar workflow to suit evolving remote learning needs.
11,172
372
[ 528, 1962, 163, 814, 1652, 520, 144 ]
10
[ "patient", "patients", "students", "telehealth", "student", "medical", "elective", "attending", "encounters", "patient encounters" ]
[ "telehealth initiatives clinical", "students telemedicine encounters", "telemedicine medical school", "telehealth setting students", "telehealth student volunteers" ]
null
[CONTENT] Telemedicine elective | Remote learning | Outpatient clinic | Coronavirus pandemic | COVID-19 [SUMMARY]
null
[CONTENT] Telemedicine elective | Remote learning | Outpatient clinic | Coronavirus pandemic | COVID-19 [SUMMARY]
[CONTENT] Telemedicine elective | Remote learning | Outpatient clinic | Coronavirus pandemic | COVID-19 [SUMMARY]
[CONTENT] Telemedicine elective | Remote learning | Outpatient clinic | Coronavirus pandemic | COVID-19 [SUMMARY]
[CONTENT] Telemedicine elective | Remote learning | Outpatient clinic | Coronavirus pandemic | COVID-19 [SUMMARY]
[CONTENT] Ambulatory Care | COVID-19 | Clinical Competence | Curriculum | Education, Distance | Education, Medical, Undergraduate | Humans | Pandemics | Pilot Projects | SARS-CoV-2 | Telemedicine | Workflow [SUMMARY]
null
[CONTENT] Ambulatory Care | COVID-19 | Clinical Competence | Curriculum | Education, Distance | Education, Medical, Undergraduate | Humans | Pandemics | Pilot Projects | SARS-CoV-2 | Telemedicine | Workflow [SUMMARY]
[CONTENT] Ambulatory Care | COVID-19 | Clinical Competence | Curriculum | Education, Distance | Education, Medical, Undergraduate | Humans | Pandemics | Pilot Projects | SARS-CoV-2 | Telemedicine | Workflow [SUMMARY]
[CONTENT] Ambulatory Care | COVID-19 | Clinical Competence | Curriculum | Education, Distance | Education, Medical, Undergraduate | Humans | Pandemics | Pilot Projects | SARS-CoV-2 | Telemedicine | Workflow [SUMMARY]
[CONTENT] Ambulatory Care | COVID-19 | Clinical Competence | Curriculum | Education, Distance | Education, Medical, Undergraduate | Humans | Pandemics | Pilot Projects | SARS-CoV-2 | Telemedicine | Workflow [SUMMARY]
[CONTENT] telehealth initiatives clinical | students telemedicine encounters | telemedicine medical school | telehealth setting students | telehealth student volunteers [SUMMARY]
null
[CONTENT] telehealth initiatives clinical | students telemedicine encounters | telemedicine medical school | telehealth setting students | telehealth student volunteers [SUMMARY]
[CONTENT] telehealth initiatives clinical | students telemedicine encounters | telemedicine medical school | telehealth setting students | telehealth student volunteers [SUMMARY]
[CONTENT] telehealth initiatives clinical | students telemedicine encounters | telemedicine medical school | telehealth setting students | telehealth student volunteers [SUMMARY]
[CONTENT] telehealth initiatives clinical | students telemedicine encounters | telemedicine medical school | telehealth setting students | telehealth student volunteers [SUMMARY]
[CONTENT] patient | patients | students | telehealth | student | medical | elective | attending | encounters | patient encounters [SUMMARY]
null
[CONTENT] patient | patients | students | telehealth | student | medical | elective | attending | encounters | patient encounters [SUMMARY]
[CONTENT] patient | patients | students | telehealth | student | medical | elective | attending | encounters | patient encounters [SUMMARY]
[CONTENT] patient | patients | students | telehealth | student | medical | elective | attending | encounters | patient encounters [SUMMARY]
[CONTENT] patient | patients | students | telehealth | student | medical | elective | attending | encounters | patient encounters [SUMMARY]
[CONTENT] medical | education | telemedicine | undergraduate | undergraduate medical education | undergraduate medical | medical education | pandemic | patient | march [SUMMARY]
null
[CONTENT] patient | patients | student | students | experience | telehealth | able | doxy | reflections | encounters [SUMMARY]
[CONTENT] remain | medical school | school | medical | outpatient practices medical | endeavors remain permanent feature | engage patients telehealth technologies | engage patients telehealth | provides workflow | provides workflow displaced [SUMMARY]
[CONTENT] patient | students | patients | telehealth | student | medical | attending | elective | encounters | virtual [SUMMARY]
[CONTENT] patient | students | patients | telehealth | student | medical | attending | elective | encounters | virtual [SUMMARY]
[CONTENT] COVID-19 ||| ||| ||| [SUMMARY]
null
[CONTENT] ||| ||| ||| ||| more than 80 % [SUMMARY]
[CONTENT] fourth-year ||| COVID-19 ||| ||| [SUMMARY]
[CONTENT] COVID-19 ||| ||| ||| ||| 4-week ||| EMR ||| ||| ||| ||| ||| ||| ||| ||| ||| ||| more than 80 % ||| fourth-year ||| COVID-19 ||| ||| [SUMMARY]
[CONTENT] COVID-19 ||| ||| ||| ||| 4-week ||| EMR ||| ||| ||| ||| ||| ||| ||| ||| ||| ||| more than 80 % ||| fourth-year ||| COVID-19 ||| ||| [SUMMARY]
Neuroprotection and spatial memory enhancement of four herbal mixture extract in HT22 hippocampal cells and a mouse model of focal cerebral ischemia.
26122524
Four traditional Korean medicinal herbs which act in retarding the aging process, Polygonum multiflorum Thunb., Rehmannia glutinosa (Gaertn) Libosch., Polygala tenuifolia Willd., and Acorus gramineus Soland., were prepared by systematic investigation of Dongeuibogam (Treasured Mirror of Eastern Medicine), published in the early 17th century in Korea. This study was performed to evaluate beneficial effects of four herbal mixture extract (PMC-12) on hippocampal neuron and spatial memory.
BACKGROUND
High performance liquid chromatography (HPLC) analysis was performed for standardization of PMC-12. Cell viability, lactate dehydrogenase, flow cytometry, reactive oxygen species (ROS), and Western blot assays were performed in HT22 hippocampal cells and immunohistochemistry and behavioral tests were performed in a mouse model of focal cerebral ischemia in order to observe alterations of hippocampal cell survival and subsequent memory function.
METHODS
In the HPLC analysis, PMC-12 was standardized to contain 3.09% 2,3,5,4'-tetrahydroxystilbene-2-O-β-D-glucoside, 0.35% 3',6-disinapoyl sucrose, and 0.79% catalpol. In HT22 cells, pretreatment with PMC-12 resulted in significantly reduced glutamate-induced apoptotic cell death. Pretreatment with PMC-12 also resulted in suppression of ROS accumulation in connection with cellular Ca(2+) level after exposure to glutamate. Expression levels of phosphorylated p38 mitogen-activated protein kinases (MAPK) and dephosphorylated phosphatidylinositol-3 kinase (PI3K) by glutamate exposure were recovered by pretreatment with either PMC-12 or anti-oxidant N-acetyl-L-cysteine (NAC). Expression levels of mature brain-derived neurotrophic factor (BDNF) and phosphorylated cAMP response element binding protein (CREB) were significantly enhanced by treatment with either PMC-12 or NAC. Combination treatment with PMC-12, NAC, and intracellular Ca(2+) inhibitor BAPTA showed similar expression levels. In a mouse model of focal cerebral ischemia, we observed higher expression of mature BDNF and phosphorylation of CREB in the hippocampus and further confirmed improved spatial memory by treatment with PMC-12.
RESULTS
Our results suggest that PMC-12 mainly exerted protective effects on hippocampal neurons through suppression of Ca(2+)-related ROS accumulation and regulation of signaling pathways of p38 MAPK and PI3K associated with mature BDNF expression and CREB phosphorylation and subsequently enhanced spatial memory.
CONCLUSIONS
[ "Animals", "Brain Ischemia", "Cell Line", "Disease Models, Animal", "Hippocampus", "Mice", "Neuroprotective Agents", "Plant Extracts", "Spatial Memory" ]
4486694
Background
Due to an increase in life expectancy and the elderly population, memory and cognitive impairments including dementia have become a major public health problem [1]. Hippocampal neuronal death is a major factor in the progress of memory impairment in many brain disorders [2, 3]. Prevention of hippocampal neuronal deaths provides a potential new therapeutic strategy to ameliorate memory and cognitive impairment of many neurological disorders. HT22 hippocampal cell line, which lacks a functional glutamate receptor, is valuable for studying molecular mechanism of memory deficits [2, 4]. Exposure of HT22 hippocampal cells to glutamate shows neurotoxicity through oxidative stress rather than N-methyl-D-aspartate receptor-mediated excitotoxicity [5–7]. Non-receptor-mediated oxidative stress involves inhibition of cysteine uptake, alterations of intracellular cysteine homeostasis, glutathione depletion, and ultimately elevation of reactive oxygen species (ROS) activation inducing neuronal cell death [5, 8–10]. Death of hippocampal cell following oxidative stress and accumulation of ROS play a role in learning and memory impairment of brain disorders [11]. Under oxidative neuronal death, the pathways of p38 mitogen-activated protein kinase (MAPK) and phosphatidylinositol-3 kinase (PI3K) play critical roles in control of neuronal death and cell survival caused by glutamate, respectively [6, 12]. Brain-derived neurotrophic factor (BDNF) mediates neuronal survival and neuroplasticity and associated learning and memory, and its signaling is associated with alteration of the main mediators of PI3K pathways [13–15]. Cyclic AMP response element binding protein (CREB) has multiple roles in different brain areas, as well as promotion of cell survival [16, 17] and is involved in memory, learning, and synaptic transmission in the brain [18, 19]. Studies of the neuroprotection have been performed in both HT22 cells and middle cerebral artery occlusion (MCAO)-induced injury [20, 21]. However memory deficits are also frequently noted after stroke. Transient MCAO induce a progressive deficiency in spatial performance related to impaired hippocampal function [22]. Thus shorter durations of ischemia have been used in the experiments that aimed to test impaired spatial learning and memory performance [23]. In traditional literature of Korean/Chinese medicine, research for screening, evaluating citation and attempting practical use of herbs has been conducted for development of therapeutic strategies [24]. We prepared multiherb formulae comprising the roots of Polygonum multiflorum, Rehmannia glutinosa, Polygala tenuifolia, and Acorus gramineus to increase medicinal efficacy by systematic investigation of Dongeuibogam, published by Joon Hur in the early 17th century in Korea. Our aim was to determine the beneficial effects of herbal mixture extract on hippocampal neurons, a susceptible cell important in memory impairment, in HT22 hippocampal cells and the hippocampus with subsequent memory enhancement in a mouse model of focal cerebral ischemia.
null
null
Results
HPLC analysis of PMC-12 HPLC conditions, particularly the mobile phase and its elution program, are important for determination of the compound in the biological matrix. In this study, we found that a mobile phase consisting of acetonitrile and containing H2O can separate THS, DISS, and catalpol (Fig. 1). The HPLC conditions developed in this study produced full peak-to-baseline resolution of the major active THS, DISS, and catalpol present in PMC-12. Based on UV maximal absorption, we detected THS and DISS at 254 nm and catalpol at 203 nm for quantitative analysis. The contents of THS, DISS, and catalpol in PMC-12 were 3.085 ± 0.271 %, 0.352 ± 0.058 %, and 0.785 ± 0.059 %, respectively. Linear calibration curve showed good linear regression (r2 > 0.999) within test ranges; the LOD (S/N = 3) and the LOQ (S/N = 10) were less than 1.5 and 4.5 μg at 254 nm for THS and DISS and at 203 nm for the catalpol (Table 1). Fig. 1 HPLC analysis and quantification of PMC-12. HPLC chromatograms of THS, DISS, and catalpol reference (a) and PMC-12 (b) obtained using a Luna C18 (2) column monitored at 254 nm and eluted with 100 % water to 100 % acetonitrile for 40 min at a flow-rate of 1.0 ml/min.
Table 1 Concentration, calibration curve, regression data, LODs and LOQs for THS, DISS, and catalpol by HPLC
Compound | Wavelength (nm) | Concentration (%) | Calibration curve | r2 | LOD (μg/ml) | LOQ (μg/ml)
THS | 254 | 3.085 ± 0.271 | y = 3284.72x + 2.00 | 0.999 | 1.41 | 4.27
DISS | 254 | 0.352 ± 0.058 | y = 4289.06x + 47.95 | 0.999 | 0.59 | 1.79
Catalpol | 203 | 0.785 ± 0.059 | y = 1253.49x + 31.37 | 0.999 | 0.63 | 1.91
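Table 1 pairs each analyte with a linear calibration curve of the form y = mx + b, so back-calculating a concentration from a measured peak area is a single rearrangement. The short Python sketch below uses the THS parameters from Table 1 with a made-up peak area purely for illustration; it is not part of the study's workflow.

```python
# Convert an HPLC peak area to a concentration using a linear calibration curve:
#   area = slope * concentration + intercept  =>  concentration = (area - intercept) / slope
# Calibration parameters are the THS values from Table 1; the example peak area is invented.
# (LOD and LOQ in Table 1 correspond to signal-to-noise ratios of 3 and 10, respectively.)

def concentration_from_area(area: float, slope: float, intercept: float) -> float:
    """Invert a linear calibration curve to recover the analyte concentration."""
    return (area - intercept) / slope

THS_SLOPE, THS_INTERCEPT = 3284.72, 2.00

example_area = 10_000.0  # hypothetical detector response
conc = concentration_from_area(example_area, THS_SLOPE, THS_INTERCEPT)
print(f"THS concentration: {conc:.3f} (in the units of the calibration standards)")
```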
Pretreatment with PMC-12 reduces glutamate-induced neuronal toxicity in HT22 cells Exposure of cells to glutamate resulted in reduced cell viability of approximately 36.5 % compared with the control. Pretreatment with PMC-12 at a concentration range of 0.01 to 10 μg/ml (ED50 = 0.32 μg/ml PMC-12) resulted in significantly reduced glutamate-induced cytotoxicity in a dose-dependent manner (Fig. 2a). The levels of LDH release showed a significant increase to 77.8 % after exposure to glutamate, while pretreatment with PMC-12 resulted in a marked decrease of glutamate-induced release of LDH (Fig. 2b). These results suggest that pretreatment with PMC-12 exerts a potent neuroprotective effect against oxidative toxicity caused by exposure of HT22 cells to glutamate. Fig. 2 Protective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ### P < 0.001 vs. control; *P < 0.05, **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments.
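An ED50 such as the 0.32 μg/ml quoted above is conventionally obtained by fitting viability data to a sigmoidal dose-response model. The sketch below fits a four-parameter logistic curve with SciPy using invented placeholder viability values, so only the fitting procedure, not the numbers, reflects the text.

```python
# Fit a four-parameter logistic (Hill) dose-response curve and read off the ED50.
# Viability values are invented placeholders; only the procedure is illustrative.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ed50, hill):
    """Four-parameter logistic: response rises from `bottom` to `top` around ed50."""
    return bottom + (top - bottom) / (1.0 + (dose / ed50) ** (-hill))

doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 10.0])        # μg/ml PMC-12
viability = np.array([42.0, 50.0, 61.0, 74.0, 86.0, 95.0])  # % of control (placeholder)

params, _ = curve_fit(four_pl, doses, viability,
                      p0=[40.0, 100.0, 0.3, 1.0], maxfev=10000)
print(f"Estimated ED50: {params[2]:.2f} μg/ml")
```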
Pretreatment with PMC-12 inhibits glutamate-induced apoptotic cell death in HT22 cells We performed flow cytometry analysis using Annexin V/PI staining in order to characterize the types of neuronal death. Concentrations of 0.1, 1, and 10 μg/ml of PMC-12 were selected for testing. After exposure to glutamate, cells were likely to undergo apoptotic cell death rather than necrotic death; however, pretreatment with PMC-12 resulted in a marked decrease in the apoptotic population (Fig. 3). These results suggest that pretreatment with PMC-12 exerts a neuroprotective effect through inhibition of glutamate-induced apoptotic cell death. Fig. 3 Protective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Representative flow cytometric analysis scatter-grams of Annexin V/PI staining (a) and quantitative analysis of the histograms (b and c). ### P < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments.
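Scatter-grams like those in Fig. 3 are usually summarized by gating events into four quadrants on the Annexin V and PI axes (live, early apoptotic, late apoptotic, necrotic). The sketch below shows that gating logic on synthetic data, with arbitrary thresholds standing in for instrument-specific gates; none of it reproduces the study's actual gating.

```python
# Classify flow-cytometry events into quadrants from Annexin V / PI intensities.
# Thresholds and the synthetic events are arbitrary; only the gating logic matters.
import numpy as np

rng = np.random.default_rng(0)
annexin = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)
pi = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)

ANNEXIN_CUT, PI_CUT = 5.0, 5.0   # gate positions (illustrative)

live            = (annexin < ANNEXIN_CUT) & (pi < PI_CUT)
early_apoptotic = (annexin >= ANNEXIN_CUT) & (pi < PI_CUT)
late_apoptotic  = (annexin >= ANNEXIN_CUT) & (pi >= PI_CUT)
necrotic        = (annexin < ANNEXIN_CUT) & (pi >= PI_CUT)

for name, mask in [("live", live), ("early apoptotic", early_apoptotic),
                   ("late apoptotic", late_apoptotic), ("necrotic", necrotic)]:
    print(f"{name}: {mask.mean():.1%}")
```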
Pretreatment with PMC-12 inhibits glutamate-induced production of ROS in HT22 cells Treatment of HT22 cells with glutamate resulted in increased production of ROS. However, pretreatment with PMC-12 resulted in a significant decrease in ROS production, which prevented elevation of ROS level caused by exposure to glutamate (Fig. 4a). We also performed staining with Hoechst 33342 to confirm morphological changes by glutamate-induced oxidative toxicity. Our result showed that PMC-12 protected against apoptotic cell death by production of ROS after exposure to glutamate (Fig. 4b). When cells were treated with 10 μM of intracellular Ca2+ chelator BAPTA-AM and 1.5 mM of extracellular Ca2+ chelator EGTA, both inhibitors caused a significant reduction in the levels of glutamate-induced production of ROS. However, no change in ROS production was observed in cells treated with a combination of Ca2+ chelators and PMC-12 compared to cells treated with PMC-12 alone followed by exposure to glutamate (Fig. 5). These results suggest that PMC-12 suppresses glutamate-induced oxidative stress by blocking ROS production, which may be related to an indirect Ca2+ pathway. Fig. 4 Protective effect of PMC-12 on ROS generation in glutamate-treated HT22 cells. Cells were pretreated with 0.01, 0.1, 1, or 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. The oxidation sensitive fluorescence dye, carboxy-H2DCFDA (20 μM), was used in measurement of ROS levels. Production of ROS was analyzed using a fluorescence plate reader (a) and fluorescence microscope (b). In addition, apoptotic nuclei were observed after staining with Hoechst 33342 for detection of apoptosis morphologically (b). ### P < 0.001 vs. control; **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments. Scale bars = 50 μm. Fig. 5 Protective effects of PMC-12 on cellular Ca2+-related ROS production in HT22 cells. Cells were pretreated with 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were treated with the specific Ca2+ inhibitors, 1.5 mM EGTA or 10 μM BAPTA-AM, for 30 min before addition of PMC-12 or glutamate. ROS production was measured using a fluorescence plate reader and mean fluorescence intensity was expressed as the mean ± SEM of three independent experiments. ### P < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells.
Pretreatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation via p38 MAPK and PI3K in HT22 cells Levels of phosphorylated p38 MAPK and dephosphorylated PI3K were significantly decreased by treatment with either PMC-12 or anti-oxidant NAC compared to glutamate-treated cells. Protein levels of mature BDNF and CREB phosphorylation were significantly increased by treatment with either PMC-12 or NAC (Fig. 6a). When cells were treated with PMC-12, NAC, or intracellular Ca2+ inhibitor BAPTA-AM, combination treatment resulted in markedly reduced levels of phosphorylated p38 MAPK and dephosphorylated PI3K compared to other cells. The combination treatment of cells also resulted in elevation of decreased protein levels of mature BDNF and CREB phosphorylation after exposure to glutamate (Fig. 6b). These results suggest that neuroprotective effects of PMC-12 may be regulated by both p38 MAPK and PI3K signaling with mature BDNF expression and CREB phosphorylation, and these effects may be related to ROS accumulation and Ca2+ influx. Fig. 6 Protective effects of PMC-12 on regulation of intracellular protein kinases in HT22 cells. Cells were pretreated with 1 μg/ml PMC-12 or 1 mM NAC, followed by exposure to 5 mM glutamate (a). In addition, cells were pretreated with 10 μM BAPTA-AM or NAC for 30 min before addition of PMC-12 or glutamate (b). The histogram for Fig. 6b was indicated as the mean ± SEM of three independent experiments (c). Equal amounts of proteins and each sample were subjected to Western blot analysis using the indicated antibodies. Equal protein loading was confirmed by actin expression. ### P < 0.001 vs. normal control; ***P < 0.001 vs. glutamate-treated groups; $ P < 0.05 vs. groups except control and glutamate-treated groups.
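Western blot comparisons like those in Fig. 6 are typically quantified by densitometry, with each band normalized to the actin loading control and then expressed relative to the untreated control lane. The sketch below walks through that normalization with invented intensity values; none of the numbers come from the study.

```python
# Normalize Western blot band intensities to the actin loading control,
# then express each lane relative to the untreated control lane.
# Intensity values are placeholders for illustration only.

lanes = {
    "control":            {"p-p38": 1200.0, "actin": 5000.0},
    "glutamate":          {"p-p38": 3400.0, "actin": 5100.0},
    "glutamate + PMC-12": {"p-p38": 1800.0, "actin": 4950.0},
}

def relative_expression(lanes, target, reference_lane="control"):
    """Loading-correct `target` by actin, then scale to the reference lane."""
    loading_corrected = {name: v[target] / v["actin"] for name, v in lanes.items()}
    ref = loading_corrected[reference_lane]
    return {name: value / ref for name, value in loading_corrected.items()}

for name, fold in relative_expression(lanes, "p-p38").items():
    print(f"{name}: {fold:.2f}-fold vs. control")
```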
Treatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice At 26 days after MCAO, positive neuronal cells of pCREB and mature BDNF in the hippocampal CA1 and dentate gyrus (DG) region were counted after immunofluorescence staining. Post-treatment with PMC-12 followed by MCAO surgery resulted in a significant increase in the number of double-positive cells of pCREB/NeuN or mature BDNF/NeuN in the CA1 and DG region of the ipsilateral hippocampus compared to the MCAO group (Fig. 7). These results suggest a possible association of the protective effects of PMC-12 with neuronal mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice. Fig. 7 Protective effects of PMC-12 on expression of both pCREB and mature BDNF in the hippocampus of focal cerebral ischemia in mice at 26 days after MCAO. Photomicrograph (a-b, d-e) and its histogram for pCREB/NeuN (c) and mature BDNF/NeuN double-positive cells (f) in the DG and CA1 regions of the hippocampus. Total number of pCREB/NeuN and mBDNF/NeuN double-positive cells was significantly increased by administration of PMC-12 in DG and CA1 compared to the MCAO group. ## P < 0.01 and ### P < 0.001 vs. control; *P < 0.05 and **P < 0.01 vs. MCAO mice. Scale bar = 100 μm.
Treatment with PMC-12 ameliorates spatial memory impairment in MCAO mice

Spatial memory was assessed using the Morris water maze test. MCAO mice took longer on average to find the platform than the basal group, whereas PMC-12-administered mice showed significantly shorter escape latencies at both doses from 22 to 25 days after MCAO compared to the vehicle group (Fig. 8). Notably, the low dose (100 mg/kg) of PMC-12 was sufficient to improve the impaired memory function. These results suggest that treatment with PMC-12 has beneficial effects on memory function in a focal cerebral ischemia model.
Fig. 8 Beneficial effects of PMC-12 on spatial memory function in MCAO mice. The Morris water maze test was performed from 22 to 25 days after MCAO. Administration of PMC-12 significantly improved the memory function of MCAO mice during the late phase of the experiment, with no significant difference between the two doses (100 and 500 mg/kg). Mean ± SEM. ### P < 0.001 vs. control; ***P < 0.001 vs. MCAO mice (vehicle)
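Fig. 8 plots group mean ± SEM escape latencies over the final training days. The sketch below, using simulated placeholder latencies rather than the recorded SMART tracking data, shows how such per-day summaries could be tabulated.

```python
import numpy as np

# Hypothetical escape latencies (seconds) for days 22-25 after MCAO, shaped
# (n_mice, n_days) with n = 6 mice per group. Placeholder values, not the
# data plotted in Fig. 8.
rng = np.random.default_rng(1)
latency = {
    "control":           np.clip(rng.normal(20, 5, (6, 4)), 5, 90),
    "MCAO (vehicle)":    np.clip(rng.normal(60, 10, (6, 4)), 5, 90),
    "MCAO + PMC-12 100": np.clip(rng.normal(35, 8, (6, 4)), 5, 90),
    "MCAO + PMC-12 500": np.clip(rng.normal(33, 8, (6, 4)), 5, 90),
}

days = [22, 23, 24, 25]
for group, data in latency.items():
    mean = data.mean(axis=0)
    sem = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
    summary = ", ".join(f"d{d}: {m:.1f}±{s:.1f}" for d, m, s in zip(days, mean, sem))
    print(f"{group:>18}  {summary} s")
```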
Conclusions
PMC-12 protects against hippocampal neuronal cell death by inhibiting Ca2+-related accumulation of ROS, and these effects are regulated through the p38 MAPK and PI3K signaling pathways associated with mature BDNF expression and CREB phosphorylation. PMC-12 may also promote recovery of memory function after cerebral ischemic stroke via mature BDNF and CREB. These results indicate that the multiherb formula PMC-12 has potential as a therapeutic strategy for memory impairment in brain disorders.
[ "Chemicals and antibodies", "Preparation of four herbal mixture extract", "High performance liquid chromatography (HPLC) analysis and quantification", "Cell culture", "Cell viability assay", "Determination of cell cytotoxicity", "Flow cytometric analysis", "ROS measurement", "Nuclear staining with Hoechst 33342", "Western blot analysis", "Focal cerebral ischemia", "Immunofluorescence staining", "Behavioral assessment", "Data analysis", "HPLC analysis of PMC-12", "Pretreatment with PMC-12 reduces glutamate-induced neuronal toxicity in HT22 cells", "Pretreatment with PMC-12 inhibits glutamate-induced apoptotic cell death in HT22 cells", "Pretreatment with PMC-12 inhibits glutamate-induced production of ROS in HT22 cells", "Pretreatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation via p38 MAPK and PI3K in HT22 cells", "Treatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice", "Treatment with PMC-12 ameliorates spatial memory impairment in MCAO mice" ]
[ "L-glutamate, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), N-acetyl-L-cysteine (NAC), and β-actin antibody were purchased from Sigma-Aldrich (St. Louis, MO, USA). BAPTA-AM and EGTA were purchased from Tocris Bioscience (Ellisville, MO, USA). Dulbecco’s modified Eagle’s medium (DMEM), fetal bovine serum (FBS), and other cell culture reagents were purchased from Gibco-Invitrogen (Carlsbad, CA, USA). Antibodies recognizing p38, phospho-p38 (pp38, Thr180/Tyr182), PI3K, and pro-BDNF were supplied by Santa Cruz Biotechnology (Santa Cruz, CA, USA), and CREB, phospho-CREB (pCREB, Ser133), and phospho-PI3K (pPI3K, Tyr458) were supplied by Cell Signaling (Danvers, MA, USA). Antibody recognizing neuronal nuclei (NeuN) was supplied by Millipore Corporation (Billerica, MA, USA), and mature BDNF was supplied by Abcam (Cambridge, MA, USA). Secondary antibodies were supplied by Santa Cruz Biotechnology. A FITC Annexin V apoptosis detection kit was purchased from BD Bioscience (San Diego, CA, USA). A lactate dehydrogenase (LDH) cytotoxicity assay kit was purchased from Promega (Madison, WI, USA), and ROS detection reagent, 5-(and-6)-carboxy-2′,7′-dichlorodihydrofluorescein diacetate (carboxy-H2DCFDA), and Hoechst 33342 was purchased from Invitrogen (Carlsbad, CA, USA).", "The dried roots of Polygonum multiflorum Thunb., Rehmannia glutinosa (Gaertn) Libosch., Polygala tenuifolia Willd., and Acorus gramineus Soland. were purchased from Dongnam Co. (Busan, Korea) and were authenticated by Professor Y.W. Choi, Department of Horticultural Bioscience, College of Natural Resource and Life Science, Pusan National University. A voucher specimen (accession number PMCWSD2.1 ~ 2.4) was deposited at the Plant Drug Research Laboratory of Pusan National University (Miryang, Korea). Dried powdered Polygonum multiflorum (25.5 kg), Rehmannia glutinosa (9.5 kg), Polygala tenuifolia (7.5 kg), and Acorus gramineus roots (7.5 kg) were immersed in 450 L of distilled water and boiled at 120 ± 5 °C for 150 min. The resultant extract was centrifuged (2000 × g for 20 min at 4 °C) and filtered through a 0.2-μm filter. The filtrate was then concentrated in vacuo at 70 ± 5 °C under reduced pressure and then converted into a fine spray-dried powder at a yielding rate of 4.6 % (2.3 kg) in a vacuum drying apparatus. Finally, the solid form of the spray-dried powder was dissolved with dimethyl sulfoxide (DMSO) for use as PMC-12 in experiments.", "For analysis of quality and quantity for PMC-12, sample of 0.5 g dry weight was sonicated in 10 ml MeOH, filtered through a 0.45 μm membrane filter before HPLC analysis. HPLC using G1100 systems (Agilent Technologies, Waldbronn, Germany) was performed on a Luna C18 column (5 μm, 150 mm × 3.0 mm i.d. Phenomenex, Torrance, CA, USA) with a mobile phase gradient of acetonitrile–water (0 to 100) for 35 min. The injection volume was 10 μl of sample and mobile phase flow rate 0.4 ml/min with UV detection at 254 nm for 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside (THS) and 3′,6-disinapoyl sucrose (DISS) and at 203 nm for catalpol. Acquisition and analysis of chromatographic data were performed using Agilent chromatographic Work Station software (Agilent Technologies). Stock solutions of THS, DISS, and catalpol were prepared for quantification of PMC-12. The contents of PMC-12 were determined by regression equations, calculated in the form of y = ax + b, where x and y were peak area and contents of the compound. 
The limits of detection (LOD) and quantification (LOQ) under the current chromatographic conditions were determined at a signal-to-noise ratio of 3 and 10, respectively.", "HT22 cells were cultured in DMEM supplemented with 10 % FBS and 1 % penicillin/streptomycin in a 5 % CO2 humidified incubator at 37 °C. The cells were incubated for 24 h prior to experimental treatments. After incubation, cells were treated with various concentrations of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were pretreated with inhibitors for 30 min before addition of PMC-12 and then exposed to glutamate.", "HT22 cell survival was assessed using a MTT assay. The culture medium was replaced with 0.5 mg/ml MTT solution and then left in a dark place for 4 h at 37 °C. Following incubation, the cells were treated with DMSO in order to dissolve the formazan crystals. Absorbance was determined at 595 nm using a SpectraMax 190 spectrophotometer (Molecular Devices, Sunnyvale, CA, USA). Results were expressed as a percentage of control.", "Released LDH from damaged cells was measured for estimation of cytotoxicity. For induction of maximal cell lysis, treated cells were lysed with 0.9 % Triton X-100 for 45 min at 37 °C. Supernatant samples were transferred to a 96-well enzymatic assay plate and reacted with substrate mix in the dark for 30 min at room temperature. At the end of that time, samples were treated with stop solution and read at 490 nm using a SpectraMax 190 spectrophotometer (Molecular Devices). Data represent the percentage of LDH released relative to controls.", "After treatment, cells were harvested and resuspended in binding buffer at a concentration of 1x104 cells/ml. For analysis of cell death types, 100 μl of the solution was transferred to a flow cytometric tube, followed by incubation with Annexin V-FITC and propidium iodide (PI) in the dark at room temperature for 15 min. Subsequently, 400 μl of binding buffer was added and analysis of the samples was performed using a flow cytometer (FACS Canto™ II; Becton-Dickinson, San Jose, CA, USA).", "HT22 cells were cultured in 96-well white plates at a density of 5x103 cells per well. After adherence, cells were pretreated with PMC-12 for 24 h and then exposed to 5 mM glutamate for 24 h. Treated cells were washed with PBS. Carboxy-H2DCFDA (20 μM) (Invitrogen) was applied to the cells, followed by incubation for 1 h in a 37 °C incubator. Fluorescence was measured using a Mutilabel counter (Perkin Elmer 1420, MA, USA). Accumulation of intracellular ROS was observed and photographed under a fluorescence microscope (Carl Zeiss Imager M1, Carl Zeiss, Inc., Gottingen, Germany). In addition, cells were harvested, resuspended in 1 ml PBS with 20 μM carboxy-H2DCFDA (Invitrogen), and then incubated for 1 h at 37 °C. After washing, cellular fluorescence was measured using a flow cytometer.", "Apoptosis was investigated by staining the cells with Hoechst 33342 (Invitrogen). HT22 cells were washed three times with PBS and then fixed in PBS containing 4 % paraformaldehyde for 25 min at 4 °C. Fixed cells were washed with PBS and stained with Hoechst 33342 (10 μg/ml) for 15 min at room temperature. The cells were washed three times with PBS and mounted using the medium for fluorescence (Vector Laboratories, Inc.) The cells were observed under a fluorescence microscope for nuclei showing typical apoptotic features such as chromatin condensation and fragmentation. 
Photographs were taken at a magnification of X 200.", "The cells were homogenized with lysis buffer [200 mM Tris (pH 8.0), 150 mM NaCl, 2 mM EDTA, 1 mM NaF, 1 % NP40, 1 mM PMSF, 1 mM Na3VO4, protease inhibitor cocktail]. Equal amounts of proteins were separated by 10 or 12 % sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to a nitrocellulose membrane (Whatman GmbH, Dassel, Germany). The membranes were blocked with 5 % skim milk in PBST for 1 h, followed by overnight exposure to appropriate antibodies. Membranes were then incubated with appropriate horseradish peroxidase-conjugated antibodies for 1 h. All specific bands were visualized using an enhanced chemiluminescence system (Pierce Biotech, Rockford, IL, USA) and imaged using an Image Quant LAS-4000 imaging system (GE Healthcare Life Science, Uppsala, Sweden). Results of the Western blot assay reported here are representative of at least three experiments.", "To confirm beneficial effects of PMC-12 on hippocampal cell, we used middle cerebral artery occlusion (MCAO) model. Male C57BL/6 mice (20-25 g) were obtained from Dooyeol Biotech (Seoul, Korea). The mice were housed at 22 °C under alternating 12 h cycles of dark and light, and were fed a commercial diet and allowed tap water ad libitum. All experiments were approved by the Pusan National University Animal Care and Use Committee. Each group consisted of six mice and all treatments were administered under isoflurane (Choongwae, Seoul, Korea) anesthesia, which was provided using a calibrated vaporizer (Midmark VIP3000, Orchad Park, OH, USA).\nFocal cerebral ischemia was induced by occluding the middle cerebral artery (MCA) using the intraluminal filament technique. A fiber-optic probe was affixed to the skull over the middle cerebral artery for measurement of regional cerebral blood flow using a PeriFlux Laser Doppler System 5000 (Perimed, Stockholm, Sweden). Middle cerebral artery occlusion model was induced by a silicon-coated 4-0 monofilament in the internal carotid artery and the monofilament was advanced to occlude the MCA. The filament was withdrawn 30 min after occlusion and reperfusion was confirmed using laser Doppler. Mice in the PMC-12 groups received oral administration daily at the doses of 100 and 500 mg/kg for three weeks after MCAO, while mice in the control and vehicle groups were only given distilled water at the same intervals.", "Mice anesthetized with isoflurane received intracardial perfusion with saline and then 4 % paraformaldehyde in PBS. Brains were removed, post-fixed in the same fixative for 4 h at 4 °C, and immersed in 30 % sucrose for 48 h at 4 °C for cryoprotection. Frozen 20 μm-thick sections were incubated for blocking with a blocking buffer (1X PBS/5 % normal serum/0.3 % Triton X-100) for 1 h at room temperature. The sections were incubated with the following primary antibodies to NeuN (Millipore Corporation), mature BDNF (Abcam), and pCREB (Cell Signaling) overnight in PBS at 4 °C. After washes with PBS, the sections were incubated with the fluorescent secondary antibody (Vector Laboratories, Inc., Burlingame, CA, USA) at room temperature in the dark, respectively, and then washed three times with PBS. Subsequently, slides were mounted in the mounting medium (Vector Laboratories, Inc.) 
and captured using a fluorescence microscope.", "Acquisition training for the Morris water maze was performed on four consecutive days from 10 days to seven days before MCAO (five trials per day) and basal time was measured at six days before MCAO. The tank had a diameter of 100 cm and an altitude of 50 cm. The platform was placed 0.5 cm beneath the surface of the water. Each trial was performed for 90 s or until the mouse arrived on the platform. PMC-12-treated mice received daily oral administration at the doses of 100 and 500 mg/kg for three weeks after MCAO, while mice in the control and vehicle groups were only given distilled water at the same intervals. After final administration, results of the experiment were recorded using SMART 2.5.18 (Panlab S.L.U.).", "All data were expressed as mean ± SEM and analyzed using the SigmaStat statistical program version 11.2 (Systat Software, San Jose, CA, USA). Statistical comparisons were performed using one-way analysis of variance (ANOVA) for repeated measures followed by Tukey’s test of least significant difference. A P-value < 0.05 was considered to indicate a statistically significant result. The median effective dose (ED50) value of PMC-12 (in vitro experiments) was derived from dose–response curve.", "HPLC conditions, particularly the mobile phase and its elution program, are important for determination of the compound in the biological matrix. In this study, we found that a mobile phase consisting of acetonitrile and containing H2O can separate THS, DISS, and catalpol (Fig. 1). The HPLC conditions developed in this study produced full peak-to-baseline resolution of the major active THS, DISS, and catalpol present in PMC-12. Based on UV maximal absorption, we detected THS and DISS at 254 nm and catalpol at 203 nm for quantitative analysis. The contents of THS, DISS, and catalpol in PMC-12 were 3.085 ± 0.271 %, 0.352 ± 0.058 %, and 0.785 ± 0.059 %, respectively. Linear calibration curve showed good linear regression (r2 > 0.999) within test ranges; the LOD (S/N = 3) and the LOQ (S/N = 10) were less than 1.5 and 4.5 μg at 254 nm for THS and DISS and at 203 nm for the catalpol (Table 1).Fig. 1HPLC analysis and quantification of PMC-12. HPLC chromatograms of THS, DISS, and catalpol reference (a) and PMC-12 (b) obtained using a Luna C18 (2) column monitored at 254 nm and eluted with 100 % water to 100 % acetonitrile for 40 at a flow-rate of 1.0 ml/minTable 1Concentration, calibration curve, regression data, LODs and LOQs for THS, DISS, and catalpol by HPLCCompoundsWavelength (nm)Concentration (%)Calibration curver2\nLOD (μg/ml)LOQ (μg/ml)THS2543.085 ± 0.271y = 3284.72x + 2.000.9991.414.27DISS2540.352 ± 0.058y = 4289.06x + 47.950.9990.591.79Catalpol2030.785 ± 0.059y = 1253.49x + 31.370.9990.631.91\nHPLC analysis and quantification of PMC-12. HPLC chromatograms of THS, DISS, and catalpol reference (a) and PMC-12 (b) obtained using a Luna C18 (2) column monitored at 254 nm and eluted with 100 % water to 100 % acetonitrile for 40 at a flow-rate of 1.0 ml/min\nConcentration, calibration curve, regression data, LODs and LOQs for THS, DISS, and catalpol by HPLC", "Exposure of cells to glutamate resulted in reduced cell viability of approximately 36.5 % compared with the control. Pretreatment with PMC-12 at a concentration range of 0.01 to 10 μg/ml (ED50 = 0.32 μg/ml PMC-12) resulted in significantly reduced glutamate-induced cytotoxicity in a dose-dependent manner (Fig. 2a). 
The levels of LDH release showed a significant increase to 77.8 % after exposure to glutamate, while pretreatment with PMC-12 resulted in a marked decrease of glutamate-induced release of LDH (Fig. 2b). These results suggest that pretreatment with PMC-12 exerts a potent neuroprotective effect against oxidative toxicity caused by exposure of HT22 cells to glutamate.Fig. 2Protective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ###\nP < 0.001 vs. control; *P < 0.05, **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments\nProtective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ###\nP < 0.001 vs. control; *P < 0.05, **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments", "We performed flow cytometry analysis using Annexin V/PI staining in order to characterize the types of neuronal death. Concentrations of 0.1, 1, and 10 μg/ml of PMC-12 were selected for testing. After exposure to glutamate, cells were likely to undergo apoptotic cell death rather than necrotic death, however, pretreatment with PMC-12 resulted in a marked decrease in the apoptotic population (Fig. 3). These results suggest that pretreatment with PMC-12 exerts a neuroprotective effect through inhibition of glutamate-induced apoptotic cell death.Fig. 3Protective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Representative flow cytometric analysis scatter-grams of Annexin V/PI staining (a) and quantitative analysis of the histograms (b and c). ###\nP < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments\nProtective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Representative flow cytometric analysis scatter-grams of Annexin V/PI staining (a) and quantitative analysis of the histograms (b and c). ###\nP < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments", "Treatment of HT22 cells with glutamate resulted in increased production of ROS. However, pretreatment with PMC-12 resulted in a significant decrease in ROS production, which prevented elevation of ROS level caused by exposure to glutamate (Fig. 4a). We also performed staining with Hoechst 33342 to confirm morphological changes by glutamate-induced oxidative toxicity. Our result showed that PMC-12 protected against apoptotic cell death by production of ROS after exposure to glutamate (Fig. 4b). When cells were treated with 10 μM of intracellular Ca2+ chelator BAPTA-AM and 1.5 mM of extracellular Ca2+ chelator EGTA, both inhibitors caused a significant reduction in the levels of glutamate-induced production of ROS. 
However, no change in ROS production was observed in cells treated with a combination of Ca2+ chelators and PMC-12 compared to cells treated with PMC-12 alone followed by exposure to glutamate (Fig. 5). These results suggest that PMC-12 suppresses glutamate-induced oxidative stress by blocking ROS production, which may be related to an indirect Ca2+ pathway.Fig. 4Protective effect of PMC-12 on ROS generation in glutamate-treated HT22 cells. Cells were pretreated with 0.01, 0.1, 1, or 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. The oxidation sensitive fluorescence dye, carboxy-H2DCFDA (20 μM), was used in measurement of ROS levels. Production of ROS was analyzed using a fluorescence plate reader (a) and fluorescence microscope (b). In addition, apoptotic nuclei were observed after staining with Hoechst 33342 for detection of apoptosis morphologically (b). ###\nP < 0.001 vs. control; **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments. Scale bars = 50 μmFig. 5Protective effects of PMC-12 on cellular Ca2+-related ROS production in HT22 cells. Cells were pretreated with 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were treated with the specific Ca2+ inhibitors, 1.5 mM EGTA or 10 μM BAPTA-AM, for 30 min before addition of PMC-12 or glutamate. ROS production was measured using a fluorescence plate reader and mean fluorescence intensity was expressed as the mean ± SEM of three independent experiments. ###\nP < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells\nProtective effect of PMC-12 on ROS generation in glutamate-treated HT22 cells. Cells were pretreated with 0.01, 0.1, 1, or 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. The oxidation sensitive fluorescence dye, carboxy-H2DCFDA (20 μM), was used in measurement of ROS levels. Production of ROS was analyzed using a fluorescence plate reader (a) and fluorescence microscope (b). In addition, apoptotic nuclei were observed after staining with Hoechst 33342 for detection of apoptosis morphologically (b). ###\nP < 0.001 vs. control; **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments. Scale bars = 50 μm\nProtective effects of PMC-12 on cellular Ca2+-related ROS production in HT22 cells. Cells were pretreated with 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were treated with the specific Ca2+ inhibitors, 1.5 mM EGTA or 10 μM BAPTA-AM, for 30 min before addition of PMC-12 or glutamate. ROS production was measured using a fluorescence plate reader and mean fluorescence intensity was expressed as the mean ± SEM of three independent experiments. ###\nP < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells", "Levels of phosphorylated p38 MAPK and dephosphorylated PI3K were significantly decreased by treatment with either PMC-12 or anti-oxidant NAC compared to glutamate-treated cells. Protein levels of mature BDNF and CREB phosphorylation were significantly increased by treatment with either PMC-12 or NAC (Fig. 6a). When cells were treated with PMC-12, NAC, or intracellular Ca2+ inhibitor BAPTA-AM, combination treatment resulted in markedly reduced levels of phosphorylated p38 MAPK and dephosphorylated PI3K compared to other cells. 
The combination treatment of cells also resulted in elevation of decreased protein levels of mature BDNF and CREB phosphorylation after exposure to glutamate (Fig. 6b). These results suggest that neuroprotective effects of PMC-12 may be regulated by both p38 MAPK and PI3K signaling with mature BDNF expression and CREB phosphorylation, and these effects may be related to ROS accumulation and Ca2+ influx.Fig. 6Protective effects of PMC-12 on regulation of intracellular protein kinases in HT22 cells. Cells were pretreated with 1 μg/ml PMC-12 or 1 mM NAC, followed by exposure to 5 mM glutamate (a). In addition, cells were pretreated with 10 μM BAPTA-AM or NAC for 30 min before addition of PMC-12 or glutamate (b). The histogram for Fig. 6b was indicated as the mean ± SEM of three independent experiments (c). Equal amounts of proteins and each sample were subjected to Western blot analysis using the indicated antibodies. Equal protein loading was confirmed by actin expression. ###\nP < 0.001 vs. normal control; ***P < 0.001 vs. glutamate-treated groups; $\nP < 0.05 vs. groups except control and glutamate-treated groups\nProtective effects of PMC-12 on regulation of intracellular protein kinases in HT22 cells. Cells were pretreated with 1 μg/ml PMC-12 or 1 mM NAC, followed by exposure to 5 mM glutamate (a). In addition, cells were pretreated with 10 μM BAPTA-AM or NAC for 30 min before addition of PMC-12 or glutamate (b). The histogram for Fig. 6b was indicated as the mean ± SEM of three independent experiments (c). Equal amounts of proteins and each sample were subjected to Western blot analysis using the indicated antibodies. Equal protein loading was confirmed by actin expression. ###\nP < 0.001 vs. normal control; ***P < 0.001 vs. glutamate-treated groups; $\nP < 0.05 vs. groups except control and glutamate-treated groups", "At 26 days after MCAO, positive neuronal cells of pCREB and mature BDNF in the hippocampal CA1 and dentate gyrus (DG) region were counted after immunofluorescence staining. Post-treatment with PMC-12 followed by MCAO surgery resulted in a significant increase in the number of double-positive cells of pCREB/NeuN or mature BDNF/NeuN in the CA1 and DG region of the ipsilateral hippocampus compared to the MCAO group (Fig. 7). These results suggest a possible association of the protective effects of PMC-12 with neuronal mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice.Fig. 7Protective effects of PMC-12 on expression of both pCREB and mature BDNF in the hippocampus of focal cerebral ischemia in mice at 26 days after MCAO. Photomicrograph (a-b, d-e) and its histogram for pCREB/NeuN (c) and mature BDNF/NeuN double-positive cells (f) in the DG and CA1 regions of the hippocampus. Total number of pCREB/NeuN and mBDNF/NeuN double-positive cells was significantly increased by administration of PMC-12 in DG and CA1 compared to the MCAO group. ##\nP < 0.01 and ###\nP < 0.001 vs. control; *P < 0.05 and **P < 0.01 vs. MCAO mice. Scale bar = 100 μm\nProtective effects of PMC-12 on expression of both pCREB and mature BDNF in the hippocampus of focal cerebral ischemia in mice at 26 days after MCAO. Photomicrograph (a-b, d-e) and its histogram for pCREB/NeuN (c) and mature BDNF/NeuN double-positive cells (f) in the DG and CA1 regions of the hippocampus. Total number of pCREB/NeuN and mBDNF/NeuN double-positive cells was significantly increased by administration of PMC-12 in DG and CA1 compared to the MCAO group. ##\nP < 0.01 and ###\nP < 0.001 vs. 
control; *P < 0.05 and **P < 0.01 vs. MCAO mice. Scale bar = 100 μm", "Spatial memory was assessed using the water maze test. MCAO mice took a longer time on average to find the platform than the basal group. However, PMC-12-administered mice attained a significantly lower time at both concentrations from 22 to 25 days after MCAO compared to the vehicle group (Fig. 8). In particular, it has shown that a low dose (100 mg/kg) of PMC-12 is enough for improvement of the damaged memory function. These results suggest that treatment with PMC-12 may induce beneficial effects for improvement of memory function in a focal cerebral ischemia model.Fig. 8Beneficial effects of PMC-12 on the spatial memory function in MCAO mice. Morris water maze test was performed from 22 d to 25 d after MCAO. Administration of PMC-12 resulted in significantly improved memory function of MCAO mice during the late phase of the experiment. There was no significant differences in the results by two different doses (100 and 500 mg/kg) of PMC-12. Mean ± SEM. ###\nP < 0.001 vs. control; ***P < 0.001 vs. MCAO mice (vehicle)\nBeneficial effects of PMC-12 on the spatial memory function in MCAO mice. Morris water maze test was performed from 22 d to 25 d after MCAO. Administration of PMC-12 resulted in significantly improved memory function of MCAO mice during the late phase of the experiment. There was no significant differences in the results by two different doses (100 and 500 mg/kg) of PMC-12. Mean ± SEM. ###\nP < 0.001 vs. control; ***P < 0.001 vs. MCAO mice (vehicle)" ]
[ "Background", "Methods", "Chemicals and antibodies", "Preparation of four herbal mixture extract", "High performance liquid chromatography (HPLC) analysis and quantification", "Cell culture", "Cell viability assay", "Determination of cell cytotoxicity", "Flow cytometric analysis", "ROS measurement", "Nuclear staining with Hoechst 33342", "Western blot analysis", "Focal cerebral ischemia", "Immunofluorescence staining", "Behavioral assessment", "Data analysis", "Results", "HPLC analysis of PMC-12", "Pretreatment with PMC-12 reduces glutamate-induced neuronal toxicity in HT22 cells", "Pretreatment with PMC-12 inhibits glutamate-induced apoptotic cell death in HT22 cells", "Pretreatment with PMC-12 inhibits glutamate-induced production of ROS in HT22 cells", "Pretreatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation via p38 MAPK and PI3K in HT22 cells", "Treatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice", "Treatment with PMC-12 ameliorates spatial memory impairment in MCAO mice", "Discussion", "Conclusions" ]
[ "Due to an increase in life expectancy and the elderly population, memory and cognitive impairments including dementia have become a major public health problem [1]. Hippocampal neuronal death is a major factor in the progress of memory impairment in many brain disorders [2, 3]. Prevention of hippocampal neuronal deaths provides a potential new therapeutic strategy to ameliorate memory and cognitive impairment of many neurological disorders. HT22 hippocampal cell line, which lacks a functional glutamate receptor, is valuable for studying molecular mechanism of memory deficits [2, 4].\nExposure of HT22 hippocampal cells to glutamate shows neurotoxicity through oxidative stress rather than N-methyl-D-aspartate receptor-mediated excitotoxicity [5–7]. Non-receptor-mediated oxidative stress involves inhibition of cysteine uptake, alterations of intracellular cysteine homeostasis, glutathione depletion, and ultimately elevation of reactive oxygen species (ROS) activation inducing neuronal cell death [5, 8–10]. Death of hippocampal cell following oxidative stress and accumulation of ROS play a role in learning and memory impairment of brain disorders [11].\nUnder oxidative neuronal death, the pathways of p38 mitogen-activated protein kinase (MAPK) and phosphatidylinositol-3 kinase (PI3K) play critical roles in control of neuronal death and cell survival caused by glutamate, respectively [6, 12]. Brain-derived neurotrophic factor (BDNF) mediates neuronal survival and neuroplasticity and associated learning and memory, and its signaling is associated with alteration of the main mediators of PI3K pathways [13–15]. Cyclic AMP response element binding protein (CREB) has multiple roles in different brain areas, as well as promotion of cell survival [16, 17] and is involved in memory, learning, and synaptic transmission in the brain [18, 19].\nStudies of the neuroprotection have been performed in both HT22 cells and middle cerebral artery occlusion (MCAO)-induced injury [20, 21]. However memory deficits are also frequently noted after stroke. Transient MCAO induce a progressive deficiency in spatial performance related to impaired hippocampal function [22]. Thus shorter durations of ischemia have been used in the experiments that aimed to test impaired spatial learning and memory performance [23].\nIn traditional literature of Korean/Chinese medicine, research for screening, evaluating citation and attempting practical use of herbs has been conducted for development of therapeutic strategies [24]. We prepared multiherb formulae comprising the roots of Polygonum multiflorum, Rehmannia glutinosa, Polygala tenuifolia, and Acorus gramineus to increase medicinal efficacy by systematic investigation of Dongeuibogam, published by Joon Hur in the early 17th century in Korea. Our aim was to determine the beneficial effects of herbal mixture extract on hippocampal neurons, a susceptible cell important in memory impairment, in HT22 hippocampal cells and the hippocampus with subsequent memory enhancement in a mouse model of focal cerebral ischemia.", " Chemicals and antibodies L-glutamate, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), N-acetyl-L-cysteine (NAC), and β-actin antibody were purchased from Sigma-Aldrich (St. Louis, MO, USA). BAPTA-AM and EGTA were purchased from Tocris Bioscience (Ellisville, MO, USA). Dulbecco’s modified Eagle’s medium (DMEM), fetal bovine serum (FBS), and other cell culture reagents were purchased from Gibco-Invitrogen (Carlsbad, CA, USA). 
Antibodies recognizing p38, phospho-p38 (pp38, Thr180/Tyr182), PI3K, and pro-BDNF were supplied by Santa Cruz Biotechnology (Santa Cruz, CA, USA), and CREB, phospho-CREB (pCREB, Ser133), and phospho-PI3K (pPI3K, Tyr458) were supplied by Cell Signaling (Danvers, MA, USA). Antibody recognizing neuronal nuclei (NeuN) was supplied by Millipore Corporation (Billerica, MA, USA), and mature BDNF was supplied by Abcam (Cambridge, MA, USA). Secondary antibodies were supplied by Santa Cruz Biotechnology. A FITC Annexin V apoptosis detection kit was purchased from BD Bioscience (San Diego, CA, USA). A lactate dehydrogenase (LDH) cytotoxicity assay kit was purchased from Promega (Madison, WI, USA), and ROS detection reagent, 5-(and-6)-carboxy-2′,7′-dichlorodihydrofluorescein diacetate (carboxy-H2DCFDA), and Hoechst 33342 was purchased from Invitrogen (Carlsbad, CA, USA).\nL-glutamate, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), N-acetyl-L-cysteine (NAC), and β-actin antibody were purchased from Sigma-Aldrich (St. Louis, MO, USA). BAPTA-AM and EGTA were purchased from Tocris Bioscience (Ellisville, MO, USA). Dulbecco’s modified Eagle’s medium (DMEM), fetal bovine serum (FBS), and other cell culture reagents were purchased from Gibco-Invitrogen (Carlsbad, CA, USA). Antibodies recognizing p38, phospho-p38 (pp38, Thr180/Tyr182), PI3K, and pro-BDNF were supplied by Santa Cruz Biotechnology (Santa Cruz, CA, USA), and CREB, phospho-CREB (pCREB, Ser133), and phospho-PI3K (pPI3K, Tyr458) were supplied by Cell Signaling (Danvers, MA, USA). Antibody recognizing neuronal nuclei (NeuN) was supplied by Millipore Corporation (Billerica, MA, USA), and mature BDNF was supplied by Abcam (Cambridge, MA, USA). Secondary antibodies were supplied by Santa Cruz Biotechnology. A FITC Annexin V apoptosis detection kit was purchased from BD Bioscience (San Diego, CA, USA). A lactate dehydrogenase (LDH) cytotoxicity assay kit was purchased from Promega (Madison, WI, USA), and ROS detection reagent, 5-(and-6)-carboxy-2′,7′-dichlorodihydrofluorescein diacetate (carboxy-H2DCFDA), and Hoechst 33342 was purchased from Invitrogen (Carlsbad, CA, USA).\n Preparation of four herbal mixture extract The dried roots of Polygonum multiflorum Thunb., Rehmannia glutinosa (Gaertn) Libosch., Polygala tenuifolia Willd., and Acorus gramineus Soland. were purchased from Dongnam Co. (Busan, Korea) and were authenticated by Professor Y.W. Choi, Department of Horticultural Bioscience, College of Natural Resource and Life Science, Pusan National University. A voucher specimen (accession number PMCWSD2.1 ~ 2.4) was deposited at the Plant Drug Research Laboratory of Pusan National University (Miryang, Korea). Dried powdered Polygonum multiflorum (25.5 kg), Rehmannia glutinosa (9.5 kg), Polygala tenuifolia (7.5 kg), and Acorus gramineus roots (7.5 kg) were immersed in 450 L of distilled water and boiled at 120 ± 5 °C for 150 min. The resultant extract was centrifuged (2000 × g for 20 min at 4 °C) and filtered through a 0.2-μm filter. The filtrate was then concentrated in vacuo at 70 ± 5 °C under reduced pressure and then converted into a fine spray-dried powder at a yielding rate of 4.6 % (2.3 kg) in a vacuum drying apparatus. Finally, the solid form of the spray-dried powder was dissolved with dimethyl sulfoxide (DMSO) for use as PMC-12 in experiments.\nThe dried roots of Polygonum multiflorum Thunb., Rehmannia glutinosa (Gaertn) Libosch., Polygala tenuifolia Willd., and Acorus gramineus Soland. were purchased from Dongnam Co. 
(Busan, Korea) and were authenticated by Professor Y.W. Choi, Department of Horticultural Bioscience, College of Natural Resource and Life Science, Pusan National University. A voucher specimen (accession number PMCWSD2.1 ~ 2.4) was deposited at the Plant Drug Research Laboratory of Pusan National University (Miryang, Korea). Dried powdered Polygonum multiflorum (25.5 kg), Rehmannia glutinosa (9.5 kg), Polygala tenuifolia (7.5 kg), and Acorus gramineus roots (7.5 kg) were immersed in 450 L of distilled water and boiled at 120 ± 5 °C for 150 min. The resultant extract was centrifuged (2000 × g for 20 min at 4 °C) and filtered through a 0.2-μm filter. The filtrate was then concentrated in vacuo at 70 ± 5 °C under reduced pressure and then converted into a fine spray-dried powder at a yielding rate of 4.6 % (2.3 kg) in a vacuum drying apparatus. Finally, the solid form of the spray-dried powder was dissolved with dimethyl sulfoxide (DMSO) for use as PMC-12 in experiments.\n High performance liquid chromatography (HPLC) analysis and quantification For analysis of quality and quantity for PMC-12, sample of 0.5 g dry weight was sonicated in 10 ml MeOH, filtered through a 0.45 μm membrane filter before HPLC analysis. HPLC using G1100 systems (Agilent Technologies, Waldbronn, Germany) was performed on a Luna C18 column (5 μm, 150 mm × 3.0 mm i.d. Phenomenex, Torrance, CA, USA) with a mobile phase gradient of acetonitrile–water (0 to 100) for 35 min. The injection volume was 10 μl of sample and mobile phase flow rate 0.4 ml/min with UV detection at 254 nm for 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside (THS) and 3′,6-disinapoyl sucrose (DISS) and at 203 nm for catalpol. Acquisition and analysis of chromatographic data were performed using Agilent chromatographic Work Station software (Agilent Technologies). Stock solutions of THS, DISS, and catalpol were prepared for quantification of PMC-12. The contents of PMC-12 were determined by regression equations, calculated in the form of y = ax + b, where x and y were peak area and contents of the compound. The limits of detection (LOD) and quantification (LOQ) under the current chromatographic conditions were determined at a signal-to-noise ratio of 3 and 10, respectively.\nFor analysis of quality and quantity for PMC-12, sample of 0.5 g dry weight was sonicated in 10 ml MeOH, filtered through a 0.45 μm membrane filter before HPLC analysis. HPLC using G1100 systems (Agilent Technologies, Waldbronn, Germany) was performed on a Luna C18 column (5 μm, 150 mm × 3.0 mm i.d. Phenomenex, Torrance, CA, USA) with a mobile phase gradient of acetonitrile–water (0 to 100) for 35 min. The injection volume was 10 μl of sample and mobile phase flow rate 0.4 ml/min with UV detection at 254 nm for 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside (THS) and 3′,6-disinapoyl sucrose (DISS) and at 203 nm for catalpol. Acquisition and analysis of chromatographic data were performed using Agilent chromatographic Work Station software (Agilent Technologies). Stock solutions of THS, DISS, and catalpol were prepared for quantification of PMC-12. The contents of PMC-12 were determined by regression equations, calculated in the form of y = ax + b, where x and y were peak area and contents of the compound. 
The limits of detection (LOD) and quantification (LOQ) under the current chromatographic conditions were determined at a signal-to-noise ratio of 3 and 10, respectively.\n Cell culture HT22 cells were cultured in DMEM supplemented with 10 % FBS and 1 % penicillin/streptomycin in a 5 % CO2 humidified incubator at 37 °C. The cells were incubated for 24 h prior to experimental treatments. After incubation, cells were treated with various concentrations of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were pretreated with inhibitors for 30 min before addition of PMC-12 and then exposed to glutamate.\nHT22 cells were cultured in DMEM supplemented with 10 % FBS and 1 % penicillin/streptomycin in a 5 % CO2 humidified incubator at 37 °C. The cells were incubated for 24 h prior to experimental treatments. After incubation, cells were treated with various concentrations of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were pretreated with inhibitors for 30 min before addition of PMC-12 and then exposed to glutamate.\n Cell viability assay HT22 cell survival was assessed using a MTT assay. The culture medium was replaced with 0.5 mg/ml MTT solution and then left in a dark place for 4 h at 37 °C. Following incubation, the cells were treated with DMSO in order to dissolve the formazan crystals. Absorbance was determined at 595 nm using a SpectraMax 190 spectrophotometer (Molecular Devices, Sunnyvale, CA, USA). Results were expressed as a percentage of control.\nHT22 cell survival was assessed using a MTT assay. The culture medium was replaced with 0.5 mg/ml MTT solution and then left in a dark place for 4 h at 37 °C. Following incubation, the cells were treated with DMSO in order to dissolve the formazan crystals. Absorbance was determined at 595 nm using a SpectraMax 190 spectrophotometer (Molecular Devices, Sunnyvale, CA, USA). Results were expressed as a percentage of control.\n Determination of cell cytotoxicity Released LDH from damaged cells was measured for estimation of cytotoxicity. For induction of maximal cell lysis, treated cells were lysed with 0.9 % Triton X-100 for 45 min at 37 °C. Supernatant samples were transferred to a 96-well enzymatic assay plate and reacted with substrate mix in the dark for 30 min at room temperature. At the end of that time, samples were treated with stop solution and read at 490 nm using a SpectraMax 190 spectrophotometer (Molecular Devices). Data represent the percentage of LDH released relative to controls.\nReleased LDH from damaged cells was measured for estimation of cytotoxicity. For induction of maximal cell lysis, treated cells were lysed with 0.9 % Triton X-100 for 45 min at 37 °C. Supernatant samples were transferred to a 96-well enzymatic assay plate and reacted with substrate mix in the dark for 30 min at room temperature. At the end of that time, samples were treated with stop solution and read at 490 nm using a SpectraMax 190 spectrophotometer (Molecular Devices). Data represent the percentage of LDH released relative to controls.\n Flow cytometric analysis After treatment, cells were harvested and resuspended in binding buffer at a concentration of 1x104 cells/ml. For analysis of cell death types, 100 μl of the solution was transferred to a flow cytometric tube, followed by incubation with Annexin V-FITC and propidium iodide (PI) in the dark at room temperature for 15 min. 
Subsequently, 400 μl of binding buffer was added and analysis of the samples was performed using a flow cytometer (FACS Canto™ II; Becton-Dickinson, San Jose, CA, USA).\nAfter treatment, cells were harvested and resuspended in binding buffer at a concentration of 1x104 cells/ml. For analysis of cell death types, 100 μl of the solution was transferred to a flow cytometric tube, followed by incubation with Annexin V-FITC and propidium iodide (PI) in the dark at room temperature for 15 min. Subsequently, 400 μl of binding buffer was added and analysis of the samples was performed using a flow cytometer (FACS Canto™ II; Becton-Dickinson, San Jose, CA, USA).\n ROS measurement HT22 cells were cultured in 96-well white plates at a density of 5x103 cells per well. After adherence, cells were pretreated with PMC-12 for 24 h and then exposed to 5 mM glutamate for 24 h. Treated cells were washed with PBS. Carboxy-H2DCFDA (20 μM) (Invitrogen) was applied to the cells, followed by incubation for 1 h in a 37 °C incubator. Fluorescence was measured using a Mutilabel counter (Perkin Elmer 1420, MA, USA). Accumulation of intracellular ROS was observed and photographed under a fluorescence microscope (Carl Zeiss Imager M1, Carl Zeiss, Inc., Gottingen, Germany). In addition, cells were harvested, resuspended in 1 ml PBS with 20 μM carboxy-H2DCFDA (Invitrogen), and then incubated for 1 h at 37 °C. After washing, cellular fluorescence was measured using a flow cytometer.\nHT22 cells were cultured in 96-well white plates at a density of 5x103 cells per well. After adherence, cells were pretreated with PMC-12 for 24 h and then exposed to 5 mM glutamate for 24 h. Treated cells were washed with PBS. Carboxy-H2DCFDA (20 μM) (Invitrogen) was applied to the cells, followed by incubation for 1 h in a 37 °C incubator. Fluorescence was measured using a Mutilabel counter (Perkin Elmer 1420, MA, USA). Accumulation of intracellular ROS was observed and photographed under a fluorescence microscope (Carl Zeiss Imager M1, Carl Zeiss, Inc., Gottingen, Germany). In addition, cells were harvested, resuspended in 1 ml PBS with 20 μM carboxy-H2DCFDA (Invitrogen), and then incubated for 1 h at 37 °C. After washing, cellular fluorescence was measured using a flow cytometer.\n Nuclear staining with Hoechst 33342 Apoptosis was investigated by staining the cells with Hoechst 33342 (Invitrogen). HT22 cells were washed three times with PBS and then fixed in PBS containing 4 % paraformaldehyde for 25 min at 4 °C. Fixed cells were washed with PBS and stained with Hoechst 33342 (10 μg/ml) for 15 min at room temperature. The cells were washed three times with PBS and mounted using the medium for fluorescence (Vector Laboratories, Inc.) The cells were observed under a fluorescence microscope for nuclei showing typical apoptotic features such as chromatin condensation and fragmentation. Photographs were taken at a magnification of X 200.\nApoptosis was investigated by staining the cells with Hoechst 33342 (Invitrogen). HT22 cells were washed three times with PBS and then fixed in PBS containing 4 % paraformaldehyde for 25 min at 4 °C. Fixed cells were washed with PBS and stained with Hoechst 33342 (10 μg/ml) for 15 min at room temperature. The cells were washed three times with PBS and mounted using the medium for fluorescence (Vector Laboratories, Inc.) The cells were observed under a fluorescence microscope for nuclei showing typical apoptotic features such as chromatin condensation and fragmentation. 
Photographs were taken at a magnification of X 200.\n Western blot analysis The cells were homogenized with lysis buffer [200 mM Tris (pH 8.0), 150 mM NaCl, 2 mM EDTA, 1 mM NaF, 1 % NP40, 1 mM PMSF, 1 mM Na3VO4, protease inhibitor cocktail]. Equal amounts of proteins were separated by 10 or 12 % sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to a nitrocellulose membrane (Whatman GmbH, Dassel, Germany). The membranes were blocked with 5 % skim milk in PBST for 1 h, followed by overnight exposure to appropriate antibodies. Membranes were then incubated with appropriate horseradish peroxidase-conjugated antibodies for 1 h. All specific bands were visualized using an enhanced chemiluminescence system (Pierce Biotech, Rockford, IL, USA) and imaged using an Image Quant LAS-4000 imaging system (GE Healthcare Life Science, Uppsala, Sweden). Results of the Western blot assay reported here are representative of at least three experiments.\nThe cells were homogenized with lysis buffer [200 mM Tris (pH 8.0), 150 mM NaCl, 2 mM EDTA, 1 mM NaF, 1 % NP40, 1 mM PMSF, 1 mM Na3VO4, protease inhibitor cocktail]. Equal amounts of proteins were separated by 10 or 12 % sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to a nitrocellulose membrane (Whatman GmbH, Dassel, Germany). The membranes were blocked with 5 % skim milk in PBST for 1 h, followed by overnight exposure to appropriate antibodies. Membranes were then incubated with appropriate horseradish peroxidase-conjugated antibodies for 1 h. All specific bands were visualized using an enhanced chemiluminescence system (Pierce Biotech, Rockford, IL, USA) and imaged using an Image Quant LAS-4000 imaging system (GE Healthcare Life Science, Uppsala, Sweden). Results of the Western blot assay reported here are representative of at least three experiments.\n Focal cerebral ischemia To confirm beneficial effects of PMC-12 on hippocampal cell, we used middle cerebral artery occlusion (MCAO) model. Male C57BL/6 mice (20-25 g) were obtained from Dooyeol Biotech (Seoul, Korea). The mice were housed at 22 °C under alternating 12 h cycles of dark and light, and were fed a commercial diet and allowed tap water ad libitum. All experiments were approved by the Pusan National University Animal Care and Use Committee. Each group consisted of six mice and all treatments were administered under isoflurane (Choongwae, Seoul, Korea) anesthesia, which was provided using a calibrated vaporizer (Midmark VIP3000, Orchad Park, OH, USA).\nFocal cerebral ischemia was induced by occluding the middle cerebral artery (MCA) using the intraluminal filament technique. A fiber-optic probe was affixed to the skull over the middle cerebral artery for measurement of regional cerebral blood flow using a PeriFlux Laser Doppler System 5000 (Perimed, Stockholm, Sweden). Middle cerebral artery occlusion model was induced by a silicon-coated 4-0 monofilament in the internal carotid artery and the monofilament was advanced to occlude the MCA. The filament was withdrawn 30 min after occlusion and reperfusion was confirmed using laser Doppler. Mice in the PMC-12 groups received oral administration daily at the doses of 100 and 500 mg/kg for three weeks after MCAO, while mice in the control and vehicle groups were only given distilled water at the same intervals.\nTo confirm beneficial effects of PMC-12 on hippocampal cell, we used middle cerebral artery occlusion (MCAO) model. 
Male C57BL/6 mice (20-25 g) were obtained from Dooyeol Biotech (Seoul, Korea). The mice were housed at 22 °C under alternating 12 h cycles of dark and light, and were fed a commercial diet and allowed tap water ad libitum. All experiments were approved by the Pusan National University Animal Care and Use Committee. Each group consisted of six mice and all treatments were administered under isoflurane (Choongwae, Seoul, Korea) anesthesia, which was provided using a calibrated vaporizer (Midmark VIP3000, Orchad Park, OH, USA).\nFocal cerebral ischemia was induced by occluding the middle cerebral artery (MCA) using the intraluminal filament technique. A fiber-optic probe was affixed to the skull over the middle cerebral artery for measurement of regional cerebral blood flow using a PeriFlux Laser Doppler System 5000 (Perimed, Stockholm, Sweden). Middle cerebral artery occlusion model was induced by a silicon-coated 4-0 monofilament in the internal carotid artery and the monofilament was advanced to occlude the MCA. The filament was withdrawn 30 min after occlusion and reperfusion was confirmed using laser Doppler. Mice in the PMC-12 groups received oral administration daily at the doses of 100 and 500 mg/kg for three weeks after MCAO, while mice in the control and vehicle groups were only given distilled water at the same intervals.\n Immunofluorescence staining Mice anesthetized with isoflurane received intracardial perfusion with saline and then 4 % paraformaldehyde in PBS. Brains were removed, post-fixed in the same fixative for 4 h at 4 °C, and immersed in 30 % sucrose for 48 h at 4 °C for cryoprotection. Frozen 20 μm-thick sections were incubated for blocking with a blocking buffer (1X PBS/5 % normal serum/0.3 % Triton X-100) for 1 h at room temperature. The sections were incubated with the following primary antibodies to NeuN (Millipore Corporation), mature BDNF (Abcam), and pCREB (Cell Signaling) overnight in PBS at 4 °C. After washes with PBS, the sections were incubated with the fluorescent secondary antibody (Vector Laboratories, Inc., Burlingame, CA, USA) at room temperature in the dark, respectively, and then washed three times with PBS. Subsequently, slides were mounted in the mounting medium (Vector Laboratories, Inc.) and captured using a fluorescence microscope.\nMice anesthetized with isoflurane received intracardial perfusion with saline and then 4 % paraformaldehyde in PBS. Brains were removed, post-fixed in the same fixative for 4 h at 4 °C, and immersed in 30 % sucrose for 48 h at 4 °C for cryoprotection. Frozen 20 μm-thick sections were incubated for blocking with a blocking buffer (1X PBS/5 % normal serum/0.3 % Triton X-100) for 1 h at room temperature. The sections were incubated with the following primary antibodies to NeuN (Millipore Corporation), mature BDNF (Abcam), and pCREB (Cell Signaling) overnight in PBS at 4 °C. After washes with PBS, the sections were incubated with the fluorescent secondary antibody (Vector Laboratories, Inc., Burlingame, CA, USA) at room temperature in the dark, respectively, and then washed three times with PBS. Subsequently, slides were mounted in the mounting medium (Vector Laboratories, Inc.) and captured using a fluorescence microscope.\n Behavioral assessment Acquisition training for the Morris water maze was performed on four consecutive days from 10 days to seven days before MCAO (five trials per day) and basal time was measured at six days before MCAO. The tank had a diameter of 100 cm and an altitude of 50 cm. 
Data analysis
All data were expressed as mean ± SEM and analyzed using the SigmaStat statistical program version 11.2 (Systat Software, San Jose, CA, USA). Statistical comparisons were performed using one-way analysis of variance (ANOVA) for repeated measures followed by Tukey's test of least significant difference. A P-value < 0.05 was considered to indicate a statistically significant result. The median effective dose (ED50) of PMC-12 in the in vitro experiments was derived from the dose–response curve.
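The statistical workflow described above can be reproduced outside SigmaStat. The snippet below is a minimal sketch of a one-way repeated-measures ANOVA followed by Tukey's post hoc comparison in Python; the escape-latency values, group labels, and column names are illustrative assumptions, not the authors' actual dataset.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: one escape latency per mouse per test day.
df = pd.DataFrame({
    "mouse":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "day":     [22, 23, 24] * 4,
    "group":   ["vehicle"] * 6 + ["PMC-12"] * 6,
    "latency": [75, 70, 68, 80, 77, 72, 55, 43, 36, 60, 45, 33],
})

# Repeated-measures ANOVA across test days within each animal.
rm = AnovaRM(df, depvar="latency", subject="mouse", within=["day"]).fit()
print(rm)

# Tukey's post hoc comparison between treatment groups (alpha = 0.05).
print(pairwise_tukeyhsd(df["latency"], df["group"], alpha=0.05))
```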
L-glutamate, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), N-acetyl-L-cysteine (NAC), and β-actin antibody were purchased from Sigma-Aldrich (St. Louis, MO, USA). BAPTA-AM and EGTA were purchased from Tocris Bioscience (Ellisville, MO, USA). Dulbecco's modified Eagle's medium (DMEM), fetal bovine serum (FBS), and other cell culture reagents were purchased from Gibco-Invitrogen (Carlsbad, CA, USA). Antibodies recognizing p38, phospho-p38 (pp38, Thr180/Tyr182), PI3K, and pro-BDNF were supplied by Santa Cruz Biotechnology (Santa Cruz, CA, USA), and antibodies recognizing CREB, phospho-CREB (pCREB, Ser133), and phospho-PI3K (pPI3K, Tyr458) were supplied by Cell Signaling (Danvers, MA, USA). The antibody recognizing neuronal nuclei (NeuN) was supplied by Millipore Corporation (Billerica, MA, USA), and the mature BDNF antibody was supplied by Abcam (Cambridge, MA, USA). Secondary antibodies were supplied by Santa Cruz Biotechnology. A FITC Annexin V apoptosis detection kit was purchased from BD Bioscience (San Diego, CA, USA). A lactate dehydrogenase (LDH) cytotoxicity assay kit was purchased from Promega (Madison, WI, USA), and the ROS detection reagent 5-(and-6)-carboxy-2′,7′-dichlorodihydrofluorescein diacetate (carboxy-H2DCFDA) and Hoechst 33342 were purchased from Invitrogen (Carlsbad, CA, USA).

The dried roots of Polygonum multiflorum Thunb., Rehmannia glutinosa (Gaertn) Libosch., Polygala tenuifolia Willd., and Acorus gramineus Soland. were purchased from Dongnam Co. (Busan, Korea) and were authenticated by Professor Y.W. Choi, Department of Horticultural Bioscience, College of Natural Resource and Life Science, Pusan National University. A voucher specimen (accession number PMCWSD2.1 ~ 2.4) was deposited at the Plant Drug Research Laboratory of Pusan National University (Miryang, Korea). Dried powdered Polygonum multiflorum (25.5 kg), Rehmannia glutinosa (9.5 kg), Polygala tenuifolia (7.5 kg), and Acorus gramineus roots (7.5 kg) were immersed in 450 L of distilled water and boiled at 120 ± 5 °C for 150 min. The resulting extract was centrifuged (2000 × g for 20 min at 4 °C) and filtered through a 0.2-μm filter. The filtrate was concentrated in vacuo at 70 ± 5 °C under reduced pressure and converted into a fine spray-dried powder at a yield of 4.6 % (2.3 kg) in a vacuum drying apparatus. Finally, the spray-dried powder was dissolved in dimethyl sulfoxide (DMSO) for use as PMC-12 in the experiments.

For qualitative and quantitative analysis of PMC-12, a 0.5 g (dry weight) sample was sonicated in 10 ml MeOH and filtered through a 0.45 μm membrane filter before HPLC analysis. HPLC using G1100 systems (Agilent Technologies, Waldbronn, Germany) was performed on a Luna C18 column (5 μm, 150 mm × 3.0 mm i.d.; Phenomenex, Torrance, CA, USA) with a mobile phase gradient of acetonitrile–water (0 to 100 %) over 35 min. The injection volume was 10 μl of sample and the mobile phase flow rate was 0.4 ml/min, with UV detection at 254 nm for 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside (THS) and 3′,6-disinapoyl sucrose (DISS) and at 203 nm for catalpol. Acquisition and analysis of chromatographic data were performed using Agilent chromatographic Work Station software (Agilent Technologies). Stock solutions of THS, DISS, and catalpol were prepared for quantification of PMC-12. The contents of PMC-12 were determined from regression equations of the form y = ax + b, where x and y were the peak area and the content of the compound, respectively. The limits of detection (LOD) and quantification (LOQ) under the current chromatographic conditions were determined at signal-to-noise ratios of 3 and 10, respectively.

HT22 cells were cultured in DMEM supplemented with 10 % FBS and 1 % penicillin/streptomycin in a 5 % CO2 humidified incubator at 37 °C. The cells were incubated for 24 h prior to experimental treatments. After incubation, cells were treated with various concentrations of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were pretreated with inhibitors for 30 min before addition of PMC-12 and then exposed to glutamate.

HT22 cell survival was assessed using the MTT assay. The culture medium was replaced with 0.5 mg/ml MTT solution, and the plates were kept in the dark for 4 h at 37 °C. Following incubation, the cells were treated with DMSO in order to dissolve the formazan crystals. Absorbance was determined at 595 nm using a SpectraMax 190 spectrophotometer (Molecular Devices, Sunnyvale, CA, USA). Results were expressed as a percentage of the control.
Released LDH from damaged cells was measured to estimate cytotoxicity. For induction of maximal cell lysis, treated cells were lysed with 0.9 % Triton X-100 for 45 min at 37 °C. Supernatant samples were transferred to a 96-well enzymatic assay plate and reacted with substrate mix in the dark for 30 min at room temperature. The samples were then treated with stop solution and read at 490 nm using a SpectraMax 190 spectrophotometer (Molecular Devices). Data represent the percentage of LDH released relative to controls.
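Both assays reduce raw plate-reader absorbances to percentages of a reference condition. The sketch below shows one plausible way to do this normalization in Python; the background-subtraction step and the use of Triton-lysed wells as the 100 % LDH reference are assumptions for illustration, not a description of the authors' exact plate layout.

```python
import numpy as np

def percent_of_control(sample_od, control_od, blank_od=0.0):
    """MTT viability: absorbance relative to untreated control wells (%)."""
    return 100.0 * (np.asarray(sample_od) - blank_od) / (np.mean(control_od) - blank_od)

def percent_ldh_release(sample_od, max_lysis_od, blank_od=0.0):
    """LDH cytotoxicity: released LDH relative to fully lysed (Triton-treated) wells (%)."""
    return 100.0 * (np.asarray(sample_od) - blank_od) / (np.mean(max_lysis_od) - blank_od)

# Hypothetical A595 / A490 readings from triplicate wells.
viability = percent_of_control([0.42, 0.45, 0.40], control_od=[1.10, 1.15, 1.08], blank_od=0.05)
cytotox   = percent_ldh_release([0.80, 0.78, 0.82], max_lysis_od=[1.02, 1.00, 1.05], blank_od=0.04)
print(viability.round(1), cytotox.round(1))
```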
After treatment, cells were harvested and resuspended in binding buffer at a concentration of 1 × 10^4 cells/ml. For analysis of cell death types, 100 μl of the suspension was transferred to a flow cytometric tube and incubated with Annexin V-FITC and propidium iodide (PI) in the dark at room temperature for 15 min. Subsequently, 400 μl of binding buffer was added, and the samples were analyzed using a flow cytometer (FACS Canto™ II; Becton-Dickinson, San Jose, CA, USA).
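Annexin V/PI scatter-grams are typically reduced to quadrant counts (live, early apoptotic, late apoptotic, necrotic) by thresholding the two fluorescence channels. The snippet below is a generic illustration of that quadrant logic in Python; the thresholds and event data are hypothetical and do not represent the gating actually applied on the FACS Canto II.

```python
import numpy as np

def quadrant_fractions(annexin, pi, annexin_cut, pi_cut):
    """Classify events by Annexin V-FITC and PI intensity and return fractions (%)."""
    annexin, pi = np.asarray(annexin), np.asarray(pi)
    live            = (annexin <  annexin_cut) & (pi <  pi_cut)
    early_apoptotic = (annexin >= annexin_cut) & (pi <  pi_cut)
    late_apoptotic  = (annexin >= annexin_cut) & (pi >= pi_cut)
    necrotic        = (annexin <  annexin_cut) & (pi >= pi_cut)
    n = float(annexin.size)
    return {name: 100.0 * mask.sum() / n for name, mask in
            [("live", live), ("early apoptotic", early_apoptotic),
             ("late apoptotic", late_apoptotic), ("necrotic", necrotic)]}

# Hypothetical fluorescence intensities for 1000 simulated events.
rng = np.random.default_rng(0)
print(quadrant_fractions(rng.lognormal(2, 1, 1000), rng.lognormal(1.5, 1, 1000),
                         annexin_cut=10.0, pi_cut=8.0))
```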
HT22 cells were cultured in 96-well white plates at a density of 5 × 10^3 cells per well. After adherence, cells were pretreated with PMC-12 for 24 h and then exposed to 5 mM glutamate for 24 h. Treated cells were washed with PBS. Carboxy-H2DCFDA (20 μM) (Invitrogen) was applied to the cells, followed by incubation for 1 h at 37 °C. Fluorescence was measured using a Multilabel counter (Perkin Elmer 1420, MA, USA). Accumulation of intracellular ROS was observed and photographed under a fluorescence microscope (Carl Zeiss Imager M1, Carl Zeiss, Inc., Gottingen, Germany). In addition, cells were harvested, resuspended in 1 ml PBS with 20 μM carboxy-H2DCFDA (Invitrogen), and incubated for 1 h at 37 °C. After washing, cellular fluorescence was measured using a flow cytometer.

Apoptosis was investigated by staining the cells with Hoechst 33342 (Invitrogen). HT22 cells were washed three times with PBS and fixed in PBS containing 4 % paraformaldehyde for 25 min at 4 °C. Fixed cells were washed with PBS and stained with Hoechst 33342 (10 μg/ml) for 15 min at room temperature. The cells were washed three times with PBS and mounted using fluorescence mounting medium (Vector Laboratories, Inc.). The cells were observed under a fluorescence microscope for nuclei showing typical apoptotic features such as chromatin condensation and fragmentation. Photographs were taken at a magnification of ×200.

HPLC analysis of PMC-12
HPLC conditions, particularly the mobile phase and its elution program, are important for determination of the compounds in the biological matrix. In this study, we found that a mobile phase consisting of acetonitrile and water could separate THS, DISS, and catalpol (Fig. 1). The HPLC conditions developed in this study produced full peak-to-baseline resolution of the major active compounds THS, DISS, and catalpol present in PMC-12. Based on UV maximal absorption, we detected THS and DISS at 254 nm and catalpol at 203 nm for quantitative analysis. The contents of THS, DISS, and catalpol in PMC-12 were 3.085 ± 0.271 %, 0.352 ± 0.058 %, and 0.785 ± 0.059 %, respectively. The linear calibration curves showed good linear regression (r2 > 0.999) within the test ranges; the LODs (S/N = 3) and LOQs (S/N = 10) were less than 1.5 and 4.5 μg/ml, respectively, at 254 nm for THS and DISS and at 203 nm for catalpol (Table 1).

Fig. 1. HPLC analysis and quantification of PMC-12. HPLC chromatograms of the THS, DISS, and catalpol reference standards (a) and PMC-12 (b) obtained using a Luna C18 (2) column monitored at 254 nm and eluted with 100 % water to 100 % acetonitrile over 40 min at a flow rate of 1.0 ml/min.

Table 1. Concentration, calibration curve, regression data, LODs and LOQs for THS, DISS, and catalpol by HPLC
Compound | Wavelength (nm) | Concentration (%) | Calibration curve    | r2    | LOD (μg/ml) | LOQ (μg/ml)
THS      | 254             | 3.085 ± 0.271     | y = 3284.72x + 2.00  | 0.999 | 1.41        | 4.27
DISS     | 254             | 0.352 ± 0.058     | y = 4289.06x + 47.95 | 0.999 | 0.59        | 1.79
Catalpol | 203             | 0.785 ± 0.059     | y = 1253.49x + 31.37 | 0.999 | 0.63        | 1.91
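Given the calibration parameters in Table 1, converting a measured peak area into a content value is a direct application of the regression equation y = ax + b (with x the peak area and y the compound content, as defined in the methods). The snippet below is a small sketch of that back-calculation, a least-squares refit of a calibration line, and the S/N-based LOD/LOQ convention; the example standards, peak area, and noise value are invented for illustration only.

```python
import numpy as np

# Calibration parameters from Table 1 (y = a*x + b, where x is peak area and y is content).
CALIBRATION = {
    "THS":      (3284.72, 2.00),
    "DISS":     (4289.06, 47.95),
    "catalpol": (1253.49, 31.37),
}

def content_from_peak_area(compound, peak_area):
    """Back-calculate compound content from an HPLC peak area using the calibration line."""
    a, b = CALIBRATION[compound]
    return a * peak_area + b

def fit_calibration(peak_areas, contents):
    """Re-derive slope, intercept, and r^2 from calibration standards by least squares."""
    a, b = np.polyfit(peak_areas, contents, deg=1)
    predicted = a * np.asarray(peak_areas) + b
    ss_res = np.sum((np.asarray(contents) - predicted) ** 2)
    ss_tot = np.sum((np.asarray(contents) - np.mean(contents)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

def signal_to_noise(peak_height, baseline_noise):
    """A peak is above the LOD at S/N >= 3 and above the LOQ at S/N >= 10."""
    return peak_height / baseline_noise

# Hypothetical standards and sample values, purely for illustration.
a, b, r2 = fit_calibration([0.0002, 0.0005, 0.0010], [2.7, 3.6, 5.3])
print(round(a, 2), round(b, 2), round(r2, 4))
print(content_from_peak_area("THS", peak_area=0.00094))
print(signal_to_noise(peak_height=0.030, baseline_noise=0.002))
```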
Pretreatment with PMC-12 reduces glutamate-induced neuronal toxicity in HT22 cells
Exposure of cells to glutamate reduced cell viability to approximately 36.5 % of the control. Pretreatment with PMC-12 at concentrations of 0.01 to 10 μg/ml (ED50 = 0.32 μg/ml) significantly reduced glutamate-induced cytotoxicity in a dose-dependent manner (Fig. 2a). LDH release increased significantly to 77.8 % after exposure to glutamate, whereas pretreatment with PMC-12 markedly decreased glutamate-induced LDH release (Fig. 2b). These results suggest that pretreatment with PMC-12 exerts a potent neuroprotective effect against the oxidative toxicity caused by exposure of HT22 cells to glutamate.

Fig. 2. Protective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ### P < 0.001 vs. control; * P < 0.05, ** P < 0.01, and *** P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments.
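The ED50 reported above (0.32 μg/ml) is the kind of value obtained by fitting a sigmoidal dose–response model to the viability data. The sketch below fits a four-parameter logistic curve with scipy and reads off the midpoint; the viability numbers are hypothetical and only illustrate the procedure, they are not the measured data behind Fig. 2a.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(dose, bottom, top, ed50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ed50 / dose) ** hill)

# Hypothetical viability (% of control) at the PMC-12 concentrations used (μg/ml).
dose      = np.array([0.01, 0.1, 1.0, 10.0])
viability = np.array([42.0, 58.0, 81.0, 93.0])

params, _ = curve_fit(four_param_logistic, dose, viability,
                      p0=[36.5, 100.0, 0.3, 1.0], maxfev=10000)
bottom, top, ed50, hill = params
print(f"estimated ED50 = {ed50:.2f} μg/ml")
```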
Pretreatment with PMC-12 inhibits glutamate-induced apoptotic cell death in HT22 cells
We performed flow cytometry analysis using Annexin V/PI staining in order to characterize the type of neuronal death. Concentrations of 0.1, 1, and 10 μg/ml of PMC-12 were selected for testing. After exposure to glutamate, cells tended to undergo apoptotic rather than necrotic death; however, pretreatment with PMC-12 resulted in a marked decrease in the apoptotic population (Fig. 3). These results suggest that pretreatment with PMC-12 exerts a neuroprotective effect through inhibition of glutamate-induced apoptotic cell death.

Fig. 3. Protective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Representative flow cytometric scatter-grams of Annexin V/PI staining (a) and quantitative analysis of the histograms (b and c). ### P < 0.001 vs. control; *** P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments.
Pretreatment with PMC-12 inhibits glutamate-induced production of ROS in HT22 cells
Treatment of HT22 cells with glutamate resulted in increased production of ROS. However, pretreatment with PMC-12 significantly decreased ROS production, preventing the elevation of ROS levels caused by exposure to glutamate (Fig. 4a). We also performed staining with Hoechst 33342 to confirm the morphological changes caused by glutamate-induced oxidative toxicity. The results showed that PMC-12 protected against the apoptotic cell death driven by ROS production after exposure to glutamate (Fig. 4b). When cells were treated with 10 μM of the intracellular Ca2+ chelator BAPTA-AM or 1.5 mM of the extracellular Ca2+ chelator EGTA, both chelators caused a significant reduction in glutamate-induced ROS production. However, no change in ROS production was observed in cells treated with a combination of Ca2+ chelators and PMC-12 compared to cells treated with PMC-12 alone followed by exposure to glutamate (Fig. 5). These results suggest that PMC-12 suppresses glutamate-induced oxidative stress by blocking ROS production, which may be related to an indirect Ca2+ pathway.

Fig. 4. Protective effect of PMC-12 on ROS generation in glutamate-treated HT22 cells. Cells were pretreated with 0.01, 0.1, 1, or 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. The oxidation-sensitive fluorescent dye carboxy-H2DCFDA (20 μM) was used to measure ROS levels. Production of ROS was analyzed using a fluorescence plate reader (a) and a fluorescence microscope (b). In addition, apoptotic nuclei were observed after staining with Hoechst 33342 for morphological detection of apoptosis (b). ### P < 0.001 vs. control; ** P < 0.01 and *** P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments. Scale bars = 50 μm.

Fig. 5. Protective effects of PMC-12 on cellular Ca2+-related ROS production in HT22 cells. Cells were pretreated with 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were treated with the specific Ca2+ inhibitors, 1.5 mM EGTA or 10 μM BAPTA-AM, for 30 min before addition of PMC-12 or glutamate. ROS production was measured using a fluorescence plate reader, and mean fluorescence intensity is expressed as the mean ± SEM of three independent experiments. ### P < 0.001 vs. control; *** P < 0.001 vs. glutamate-treated cells.
Pretreatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation via p38 MAPK and PI3K in HT22 cells
Levels of phosphorylated p38 MAPK and of dephosphorylated PI3K were significantly decreased by treatment with either PMC-12 or the antioxidant NAC compared to glutamate-treated cells. Protein levels of mature BDNF and CREB phosphorylation were significantly increased by treatment with either PMC-12 or NAC (Fig. 6a). When cells were treated with PMC-12 together with NAC or the intracellular Ca2+ chelator BAPTA-AM, the combination treatment markedly reduced the levels of phosphorylated p38 MAPK and dephosphorylated PI3K compared to the other groups. The combination treatment also restored the glutamate-induced decrease in mature BDNF protein levels and CREB phosphorylation (Fig. 6b). These results suggest that the neuroprotective effects of PMC-12 may be regulated by both p38 MAPK and PI3K signaling together with mature BDNF expression and CREB phosphorylation, and that these effects may be related to ROS accumulation and Ca2+ influx.

Fig. 6. Protective effects of PMC-12 on regulation of intracellular protein kinases in HT22 cells. Cells were pretreated with 1 μg/ml PMC-12 or 1 mM NAC, followed by exposure to 5 mM glutamate (a). In addition, cells were pretreated with 10 μM BAPTA-AM or NAC for 30 min before addition of PMC-12 or glutamate (b). The histogram for Fig. 6b is shown as the mean ± SEM of three independent experiments (c). Equal amounts of protein from each sample were subjected to Western blot analysis using the indicated antibodies. Equal protein loading was confirmed by actin expression. ### P < 0.001 vs. normal control; *** P < 0.001 vs. glutamate-treated groups; $ P < 0.05 vs. all groups except the control and glutamate-treated groups.
Treatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice
At 26 days after MCAO, neurons positive for pCREB and mature BDNF in the hippocampal CA1 and dentate gyrus (DG) regions were counted after immunofluorescence staining. Post-treatment with PMC-12 after MCAO surgery resulted in a significant increase in the number of pCREB/NeuN and mature BDNF/NeuN double-positive cells in the CA1 and DG regions of the ipsilateral hippocampus compared to the MCAO group (Fig. 7). These results suggest a possible association of the protective effects of PMC-12 with neuronal mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice.

Fig. 7. Protective effects of PMC-12 on expression of both pCREB and mature BDNF in the hippocampus of mice with focal cerebral ischemia at 26 days after MCAO. Photomicrographs (a-b, d-e) and histograms for pCREB/NeuN (c) and mature BDNF/NeuN double-positive cells (f) in the DG and CA1 regions of the hippocampus. The total number of pCREB/NeuN and mBDNF/NeuN double-positive cells was significantly increased by administration of PMC-12 in the DG and CA1 compared to the MCAO group. ## P < 0.01 and ### P < 0.001 vs. control; * P < 0.05 and ** P < 0.01 vs. MCAO mice. Scale bar = 100 μm.
Treatment with PMC-12 ameliorates spatial memory impairment in MCAO mice
Spatial memory was assessed using the Morris water maze test. MCAO mice took a longer time on average to find the platform than the basal group. However, PMC-12-administered mice reached the platform in a significantly shorter time at both doses from 22 to 25 days after MCAO compared to the vehicle group (Fig. 8). In particular, the low dose (100 mg/kg) of PMC-12 was sufficient to improve the impaired memory function. These results suggest that treatment with PMC-12 may have beneficial effects on the recovery of memory function in a focal cerebral ischemia model.

Fig. 8. Beneficial effects of PMC-12 on spatial memory function in MCAO mice. The Morris water maze test was performed from 22 to 25 days after MCAO. Administration of PMC-12 significantly improved the memory function of MCAO mice during the late phase of the experiment. There were no significant differences between the two doses (100 and 500 mg/kg) of PMC-12. Mean ± SEM. ### P < 0.001 vs. control; *** P < 0.001 vs. MCAO mice (vehicle).
control; *P < 0.05 and **P < 0.01 vs. MCAO mice. Scale bar = 100 μm\nProtective effects of PMC-12 on expression of both pCREB and mature BDNF in the hippocampus of focal cerebral ischemia in mice at 26 days after MCAO. Photomicrograph (a-b, d-e) and its histogram for pCREB/NeuN (c) and mature BDNF/NeuN double-positive cells (f) in the DG and CA1 regions of the hippocampus. Total number of pCREB/NeuN and mBDNF/NeuN double-positive cells was significantly increased by administration of PMC-12 in DG and CA1 compared to the MCAO group. ##\nP < 0.01 and ###\nP < 0.001 vs. control; *P < 0.05 and **P < 0.01 vs. MCAO mice. Scale bar = 100 μm", "Spatial memory was assessed using the water maze test. MCAO mice took a longer time on average to find the platform than the basal group. However, PMC-12-administered mice attained a significantly lower time at both concentrations from 22 to 25 days after MCAO compared to the vehicle group (Fig. 8). In particular, it has shown that a low dose (100 mg/kg) of PMC-12 is enough for improvement of the damaged memory function. These results suggest that treatment with PMC-12 may induce beneficial effects for improvement of memory function in a focal cerebral ischemia model.Fig. 8Beneficial effects of PMC-12 on the spatial memory function in MCAO mice. Morris water maze test was performed from 22 d to 25 d after MCAO. Administration of PMC-12 resulted in significantly improved memory function of MCAO mice during the late phase of the experiment. There was no significant differences in the results by two different doses (100 and 500 mg/kg) of PMC-12. Mean ± SEM. ###\nP < 0.001 vs. control; ***P < 0.001 vs. MCAO mice (vehicle)\nBeneficial effects of PMC-12 on the spatial memory function in MCAO mice. Morris water maze test was performed from 22 d to 25 d after MCAO. Administration of PMC-12 resulted in significantly improved memory function of MCAO mice during the late phase of the experiment. There was no significant differences in the results by two different doses (100 and 500 mg/kg) of PMC-12. Mean ± SEM. ###\nP < 0.001 vs. control; ***P < 0.001 vs. MCAO mice (vehicle)", "Traditional medical literature can be used to identify and shortlist herbs and combinations of herbs for further experimental and clinical research [24]. We have used a systematic method for screening and evaluating citations from the traditional Korean medicine literature, Dongeuibogam. In a screening exercise performed using HT22 hippocampal cells for selection of functional herbs on memory impairment, among many herbal candidates, we found that Polygonum multiflorum exhibited prominent neuroprotective effects.\nOur previous results demonstrated that extracts from Polygonum multiflorum exert significant anti-apoptotic effects against glutamate-induced neurotoxicity [25, 26] and protect against cerebral ischemic damage via regulation of endothelial nitric oxide [27]. However, extracts of Polygonum multiflorum alone had a partial blocking effect against neuronal damage.\nTo enhance the potential beneficial effects of Polygonum multiflorum, we analyzed 4,014 herbal prescriptions mentioned in Dongeuibogam and noted 19 multiherb formulae, including the roots of Rehmannia glutinosa, Polygala tenuifolia, and Acorus gramineus. These 19 multiherbs are commonly used in treatment of mental and physical weakness of the elderly.\nWe prepared PMC-12 with Polygonum multiflorum base and above multiherb formulae to enhance medical efficiency. 
In a further screening exercise performed using a focal cerebral ischemia model, PMC-12 reduced infarct size more markedly than Polygonum multiflorum extract alone. Thus, in the current study, we investigated the beneficial effects of the herbal prescription PMC-12 on hippocampal neurons and spatial memory deficits in mice. Exposure to glutamate causes neuronal cell death via non-receptor-mediated oxidative stress in hippocampal HT22 cells [28, 29]. Increased levels of Ca2+ and ROS caused by exposure to glutamate lead to both apoptotic and necrotic cell death [6, 30, 31]. Altered Ca2+ signaling is involved in cell death and can increase ROS production [32–34]. In the current study, treatment with PMC-12 resulted in reduction of apoptotic cell death with suppression of ROS accumulation; PMC-12 may therefore contribute to neuronal survival against glutamate-induced oxidative toxicity related to the cellular content of ROS and Ca2+. However, ROS production was not diminished by PMC-12 in a strictly concentration-dependent manner, and cells remained effectively protected for an additional 24 h after removal of PMC-12. The beneficial effect of PMC-12 on hippocampal cells may therefore not be attributable solely to direct control of ROS accumulation and Ca2+ influx. In addition to its anti-oxidative activity, PMC-12 may act through other protective pathways, including endogenous kinases. Neuroprotective effects under oxidative stress are mediated by regulation of MAPK and PI3K signaling pathways, leading to cell death or survival in a variety of neurodegenerative disorders [35–37]. In particular, p38 MAPK signaling in neuronal death [38, 39] and PI3K activation in neuroprotection [10, 37] play essential roles under oxidative stress conditions. Our results showed that the neuroprotective effects of PMC-12 against glutamate-induced oxidative cell death are regulated by the p38 MAPK and PI3K signaling pathways. CREB, an important transcription factor implicated in control of adaptive neuronal responses, contributes to several critical functions of BDNF-mediated cell survival [40, 41]. Impaired CREB phosphorylation in the hippocampus may be a pathological component of neurodegenerative disorders [42, 43]. BDNF has recently been recognized as a potent modulator capable of regulating a wide repertoire of neuronal functions [44]. Of the two forms of extracellular BDNF, pro- and mature BDNF, mature BDNF is essential for protecting neuronal cells and the neonatal or developing brain from ischemic injury, whereas pro-BDNF may induce neuronal death [44–48]. In accordance with previous studies, treatment with PMC-12 restored the protein levels of mature BDNF and CREB phosphorylation that were reduced by glutamate. These results suggest that the neuroprotective effects of PMC-12 may be regulated by both mature BDNF expression and CREB phosphorylation. Memory improvement is indicative of structural and functional hippocampal plastic changes, including changes in BDNF expression [22]. To confirm the involvement of mature BDNF and CREB activation in the neuroprotection afforded by PMC-12, we performed an in vivo study using a mild ischemic mouse model [22, 23].
Immunohistochemistry after three weeks of treatment with PMC-12 in MCAO mice indicated significantly increased expression of pCREB and mature BDNF in the CA1 and DG regions of the hippocampus, suggesting that PMC-12 may prevent neuronal death via signaling pathways involving both mature BDNF expression and CREB phosphorylation. In addition, we applied a behavioral test, the Morris water maze, which assesses MCAO-induced deficits of learning and memory. A previous report showed that a shorter MCAO duration of 60 min, instead of 2 h, does not result in concomitant sensorimotor deficits in mice [23, 49]. Thus, 30 min of MCAO was used to avoid the influence of sensorimotor deficits on the performance of the mice in our Morris water maze test. Treatment with PMC-12 resulted in significantly shorter times to find the platform from 22 to 25 days after MCAO compared with the vehicle group. These findings suggest that PMC-12 may be a good candidate for recovery of damaged learning and memory function in a focal cerebral ischemia model. Our study did not investigate the main functional components of the multiherb formula PMC-12. However, phenolic constituents are well known as major active components of Polygonum multiflorum [50], and tetrahydroxystilbene-glucoside identified from this herb promotes induction of long-term potentiation, contributing to enhancement of learning and memory in mouse models [51, 52]. In line with these studies, our HPLC analysis also showed that 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside is one of the main components of the multiherb formula PMC-12. However, medicinal herbs have diverse roles within multiherb formulae according to well-known perspectives of traditional Korean medicine. Some herbs target the primary disorder, while others aim to alleviate secondary symptoms, improve absorption, or counteract undesirable effects of other herbs [24, 53]. PMC-12 may thus enhance memory function through its primary neuroprotective effects together with the diverse secondary roles of its component herbs within the multiherb formula.", "PMC-12 protects against hippocampal neuronal cell death via inhibition of Ca2+-related accumulation of ROS, and these effects are mediated through the p38 MAPK and PI3K signaling pathways associated with mature BDNF expression and CREB phosphorylation. PMC-12 may also promote recovery of memory function via mature BDNF and CREB after cerebral ischemic stroke. These results indicate that the multiherb formula PMC-12 has potential as a therapeutic strategy for memory impairment in brain disorders." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion", "conclusion" ]
[ "Hippocampal cell", "Neuroprotection", "Memory", "Polygonum multiflorum" ]
Background: Due to an increase in life expectancy and the elderly population, memory and cognitive impairments including dementia have become a major public health problem [1]. Hippocampal neuronal death is a major factor in the progression of memory impairment in many brain disorders [2, 3]. Prevention of hippocampal neuronal death therefore provides a potential new therapeutic strategy to ameliorate the memory and cognitive impairment of many neurological disorders. The HT22 hippocampal cell line, which lacks a functional glutamate receptor, is valuable for studying the molecular mechanisms of memory deficits [2, 4]. Exposure of HT22 hippocampal cells to glutamate causes neurotoxicity through oxidative stress rather than N-methyl-D-aspartate receptor-mediated excitotoxicity [5–7]. Non-receptor-mediated oxidative stress involves inhibition of cysteine uptake, alterations of intracellular cysteine homeostasis, glutathione depletion, and ultimately elevation of reactive oxygen species (ROS) inducing neuronal cell death [5, 8–10]. Death of hippocampal cells following oxidative stress and accumulation of ROS plays a role in the learning and memory impairment of brain disorders [11]. During oxidative neuronal death, the p38 mitogen-activated protein kinase (MAPK) and phosphatidylinositol-3 kinase (PI3K) pathways play critical roles in the control of glutamate-induced neuronal death and cell survival, respectively [6, 12]. Brain-derived neurotrophic factor (BDNF) mediates neuronal survival, neuroplasticity, and associated learning and memory, and its signaling is associated with alteration of the main mediators of PI3K pathways [13–15]. Cyclic AMP response element binding protein (CREB) has multiple roles in different brain areas, including promotion of cell survival [16, 17], and is involved in memory, learning, and synaptic transmission in the brain [18, 19]. Studies of neuroprotection have been performed in both HT22 cells and middle cerebral artery occlusion (MCAO)-induced injury [20, 21]. However, memory deficits are also frequently noted after stroke. Transient MCAO induces a progressive deficiency in spatial performance related to impaired hippocampal function [22]. Thus, shorter durations of ischemia have been used in experiments that aimed to test impaired spatial learning and memory performance [23]. In the traditional literature of Korean/Chinese medicine, research on screening, evaluating citations, and attempting practical use of herbs has been conducted for the development of therapeutic strategies [24]. We prepared multiherb formulae comprising the roots of Polygonum multiflorum, Rehmannia glutinosa, Polygala tenuifolia, and Acorus gramineus to increase medicinal efficacy by systematic investigation of Dongeuibogam, published by Joon Hur in the early 17th century in Korea. Our aim was to determine the beneficial effects of the herbal mixture extract on hippocampal neurons, a susceptible cell type important in memory impairment, in HT22 hippocampal cells and the hippocampus, with subsequent memory enhancement in a mouse model of focal cerebral ischemia. Methods: Chemicals and antibodies: L-glutamate, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), N-acetyl-L-cysteine (NAC), and β-actin antibody were purchased from Sigma-Aldrich (St. Louis, MO, USA). BAPTA-AM and EGTA were purchased from Tocris Bioscience (Ellisville, MO, USA). Dulbecco’s modified Eagle’s medium (DMEM), fetal bovine serum (FBS), and other cell culture reagents were purchased from Gibco-Invitrogen (Carlsbad, CA, USA).
Antibodies recognizing p38, phospho-p38 (pp38, Thr180/Tyr182), PI3K, and pro-BDNF were supplied by Santa Cruz Biotechnology (Santa Cruz, CA, USA), and CREB, phospho-CREB (pCREB, Ser133), and phospho-PI3K (pPI3K, Tyr458) were supplied by Cell Signaling (Danvers, MA, USA). An antibody recognizing neuronal nuclei (NeuN) was supplied by Millipore Corporation (Billerica, MA, USA), and mature BDNF was supplied by Abcam (Cambridge, MA, USA). Secondary antibodies were supplied by Santa Cruz Biotechnology. A FITC Annexin V apoptosis detection kit was purchased from BD Bioscience (San Diego, CA, USA). A lactate dehydrogenase (LDH) cytotoxicity assay kit was purchased from Promega (Madison, WI, USA), and the ROS detection reagent 5-(and-6)-carboxy-2′,7′-dichlorodihydrofluorescein diacetate (carboxy-H2DCFDA) and Hoechst 33342 were purchased from Invitrogen (Carlsbad, CA, USA). Preparation of four herbal mixture extract: The dried roots of Polygonum multiflorum Thunb., Rehmannia glutinosa (Gaertn) Libosch., Polygala tenuifolia Willd., and Acorus gramineus Soland. were purchased from Dongnam Co. (Busan, Korea) and were authenticated by Professor Y.W. Choi, Department of Horticultural Bioscience, College of Natural Resource and Life Science, Pusan National University. A voucher specimen (accession number PMCWSD2.1 ~ 2.4) was deposited at the Plant Drug Research Laboratory of Pusan National University (Miryang, Korea). Dried powdered Polygonum multiflorum (25.5 kg), Rehmannia glutinosa (9.5 kg), Polygala tenuifolia (7.5 kg), and Acorus gramineus roots (7.5 kg) were immersed in 450 L of distilled water and boiled at 120 ± 5 °C for 150 min. The resultant extract was centrifuged (2000 × g for 20 min at 4 °C) and filtered through a 0.2-μm filter. The filtrate was then concentrated in vacuo at 70 ± 5 °C under reduced pressure and converted into a fine spray-dried powder at a yield of 4.6 % (2.3 kg) in a vacuum drying apparatus. Finally, the solid spray-dried powder was dissolved in dimethyl sulfoxide (DMSO) for use as PMC-12 in experiments.
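As a quick consistency check on the extraction figures above, the reported 4.6 % yield can be reproduced from the stated input masses. The short Python sketch below only illustrates that arithmetic; it is not part of the original protocol.

# Consistency check for the spray-dried powder yield reported above.
# Input masses (kg) of the four dried roots and the recovered powder mass
# are taken directly from the text; nothing here is newly measured.
inputs_kg = {
    "Polygonum multiflorum": 25.5,
    "Rehmannia glutinosa": 9.5,
    "Polygala tenuifolia": 7.5,
    "Acorus gramineus": 7.5,
}
powder_kg = 2.3

total_kg = sum(inputs_kg.values())          # 50.0 kg of dried roots
yield_pct = 100.0 * powder_kg / total_kg    # 2.3 / 50.0 = 4.6 %
print(f"total dried roots: {total_kg} kg, yield: {yield_pct:.1f} %")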
High performance liquid chromatography (HPLC) analysis and quantification: For analysis of the quality and quantity of PMC-12, a 0.5 g dry-weight sample was sonicated in 10 ml MeOH and filtered through a 0.45 μm membrane filter before HPLC analysis. HPLC using G1100 systems (Agilent Technologies, Waldbronn, Germany) was performed on a Luna C18 column (5 μm, 150 mm × 3.0 mm i.d., Phenomenex, Torrance, CA, USA) with a mobile phase gradient of acetonitrile–water (0 to 100) over 35 min. The injection volume was 10 μl of sample and the mobile phase flow rate was 0.4 ml/min, with UV detection at 254 nm for 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside (THS) and 3′,6-disinapoyl sucrose (DISS) and at 203 nm for catalpol. Acquisition and analysis of chromatographic data were performed using Agilent chromatographic Work Station software (Agilent Technologies). Stock solutions of THS, DISS, and catalpol were prepared for quantification of PMC-12. The contents of PMC-12 were determined by regression equations of the form y = ax + b, where x and y are the peak area and the content of the compound, respectively. The limits of detection (LOD) and quantification (LOQ) under the current chromatographic conditions were determined at signal-to-noise ratios of 3 and 10, respectively.
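To make the quantification step above concrete, the sketch below fits and applies a linear calibration of the form y = ax + b with NumPy, following the convention stated in the text (x is the peak area, y the content). The standard peak areas, contents, and noise figures are hypothetical placeholders, not the data measured for PMC-12; the last lines simply restate the S/N = 3 and S/N = 10 criteria for LOD and LOQ.

import numpy as np

# Hypothetical calibration standards for one marker compound (e.g. THS):
# x = peak area (arbitrary units), y = content (%), placeholder values only.
peak_area = np.array([120.0, 240.0, 480.0, 960.0, 1920.0])
content   = np.array([0.05, 0.10, 0.20, 0.40, 0.80])

a, b = np.polyfit(peak_area, content, deg=1)     # slope and intercept of y = ax + b
r2 = np.corrcoef(peak_area, content)[0, 1] ** 2  # linearity check (r^2)

sample_area = 700.0                              # peak area measured for a PMC-12 sample
sample_content = a * sample_area + b             # content estimated from the calibration
print(f"y = {a:.6f}x + {b:.6f}, r2 = {r2:.4f}, estimated content = {sample_content:.3f} %")

# LOD and LOQ follow the signal-to-noise criteria stated above:
# LOD is the injected amount giving S/N = 3, LOQ the amount giving S/N = 10.
baseline_noise, response_per_ug = 0.5, 1.2       # hypothetical noise and detector response
lod = 3 * baseline_noise / response_per_ug
loq = 10 * baseline_noise / response_per_ug
print(f"LOD ~ {lod:.2f} ug, LOQ ~ {loq:.2f} ug")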
Cell culture: HT22 cells were cultured in DMEM supplemented with 10 % FBS and 1 % penicillin/streptomycin in a 5 % CO2 humidified incubator at 37 °C. The cells were incubated for 24 h prior to experimental treatments. After incubation, cells were treated with various concentrations of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were pretreated with inhibitors for 30 min before addition of PMC-12 and then exposed to glutamate. Cell viability assay: HT22 cell survival was assessed using an MTT assay. The culture medium was replaced with 0.5 mg/ml MTT solution and then left in a dark place for 4 h at 37 °C. Following incubation, the cells were treated with DMSO in order to dissolve the formazan crystals. Absorbance was determined at 595 nm using a SpectraMax 190 spectrophotometer (Molecular Devices, Sunnyvale, CA, USA). Results were expressed as a percentage of control. Determination of cell cytotoxicity: Released LDH from damaged cells was measured for estimation of cytotoxicity. For induction of maximal cell lysis, treated cells were lysed with 0.9 % Triton X-100 for 45 min at 37 °C. Supernatant samples were transferred to a 96-well enzymatic assay plate and reacted with substrate mix in the dark for 30 min at room temperature. At the end of that time, samples were treated with stop solution and read at 490 nm using a SpectraMax 190 spectrophotometer (Molecular Devices). Data represent the percentage of LDH released relative to controls. Flow cytometric analysis: After treatment, cells were harvested and resuspended in binding buffer at a concentration of 1 × 10^4 cells/ml. For analysis of cell death types, 100 μl of the solution was transferred to a flow cytometric tube, followed by incubation with Annexin V-FITC and propidium iodide (PI) in the dark at room temperature for 15 min. Subsequently, 400 μl of binding buffer was added and analysis of the samples was performed using a flow cytometer (FACS Canto™ II; Becton-Dickinson, San Jose, CA, USA).
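The MTT and LDH readouts described in this Methods passage are both reported as percentages of a reference condition. The following sketch illustrates that normalization with made-up absorbance values; the blank handling and the exact formula of the commercial LDH kit are assumptions for illustration, not details taken from the original protocol.

import numpy as np

# Hypothetical raw absorbance readings (arbitrary example values).
mtt_blank   = 0.05                              # medium-only background at 595 nm
mtt_control = np.array([1.10, 1.05, 1.15])      # untreated cells
mtt_treated = np.array([0.62, 0.58, 0.66])      # glutamate +/- PMC-12

viability_pct = 100.0 * (mtt_treated - mtt_blank).mean() / (mtt_control - mtt_blank).mean()

# LDH release at 490 nm, expressed relative to maximal lysis (Triton X-100).
ldh_background = 0.08
ldh_sample     = 0.55
ldh_max_lysis  = 0.95
ldh_release_pct = 100.0 * (ldh_sample - ldh_background) / (ldh_max_lysis - ldh_background)

print(f"viability: {viability_pct:.1f} % of control, LDH release: {ldh_release_pct:.1f} % of maximal lysis")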
ROS measurement: HT22 cells were cultured in 96-well white plates at a density of 5 × 10^3 cells per well. After adherence, cells were pretreated with PMC-12 for 24 h and then exposed to 5 mM glutamate for 24 h. Treated cells were washed with PBS. Carboxy-H2DCFDA (20 μM) (Invitrogen) was applied to the cells, followed by incubation for 1 h in a 37 °C incubator. Fluorescence was measured using a multilabel counter (Perkin Elmer 1420, MA, USA). Accumulation of intracellular ROS was observed and photographed under a fluorescence microscope (Carl Zeiss Imager M1, Carl Zeiss, Inc., Gottingen, Germany). In addition, cells were harvested, resuspended in 1 ml PBS with 20 μM carboxy-H2DCFDA (Invitrogen), and then incubated for 1 h at 37 °C. After washing, cellular fluorescence was measured using a flow cytometer. Nuclear staining with Hoechst 33342: Apoptosis was investigated by staining the cells with Hoechst 33342 (Invitrogen). HT22 cells were washed three times with PBS and then fixed in PBS containing 4 % paraformaldehyde for 25 min at 4 °C. Fixed cells were washed with PBS and stained with Hoechst 33342 (10 μg/ml) for 15 min at room temperature. The cells were washed three times with PBS and mounted using the medium for fluorescence (Vector Laboratories, Inc.). The cells were observed under a fluorescence microscope for nuclei showing typical apoptotic features such as chromatin condensation and fragmentation. Photographs were taken at a magnification of ×200. Western blot analysis: The cells were homogenized with lysis buffer [200 mM Tris (pH 8.0), 150 mM NaCl, 2 mM EDTA, 1 mM NaF, 1 % NP40, 1 mM PMSF, 1 mM Na3VO4, protease inhibitor cocktail].
Equal amounts of proteins were separated by 10 or 12 % sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to a nitrocellulose membrane (Whatman GmbH, Dassel, Germany). The membranes were blocked with 5 % skim milk in PBST for 1 h, followed by overnight exposure to appropriate antibodies. Membranes were then incubated with appropriate horseradish peroxidase-conjugated antibodies for 1 h. All specific bands were visualized using an enhanced chemiluminescence system (Pierce Biotech, Rockford, IL, USA) and imaged using an Image Quant LAS-4000 imaging system (GE Healthcare Life Science, Uppsala, Sweden). Results of the Western blot assay reported here are representative of at least three experiments. Focal cerebral ischemia: To confirm the beneficial effects of PMC-12 on hippocampal cells, we used the middle cerebral artery occlusion (MCAO) model. Male C57BL/6 mice (20-25 g) were obtained from Dooyeol Biotech (Seoul, Korea). The mice were housed at 22 °C under alternating 12 h cycles of dark and light, and were fed a commercial diet and allowed tap water ad libitum. All experiments were approved by the Pusan National University Animal Care and Use Committee. Each group consisted of six mice and all treatments were administered under isoflurane (Choongwae, Seoul, Korea) anesthesia, which was provided using a calibrated vaporizer (Midmark VIP3000, Orchad Park, OH, USA). Focal cerebral ischemia was induced by occluding the middle cerebral artery (MCA) using the intraluminal filament technique. A fiber-optic probe was affixed to the skull over the middle cerebral artery for measurement of regional cerebral blood flow using a PeriFlux Laser Doppler System 5000 (Perimed, Stockholm, Sweden). The MCAO model was induced with a silicon-coated 4-0 monofilament inserted into the internal carotid artery, and the monofilament was advanced to occlude the MCA. The filament was withdrawn 30 min after occlusion and reperfusion was confirmed using laser Doppler. Mice in the PMC-12 groups received oral administration daily at doses of 100 and 500 mg/kg for three weeks after MCAO, while mice in the control and vehicle groups were only given distilled water at the same intervals.
Immunofluorescence staining: Mice anesthetized with isoflurane received intracardial perfusion with saline and then 4 % paraformaldehyde in PBS. Brains were removed, post-fixed in the same fixative for 4 h at 4 °C, and immersed in 30 % sucrose for 48 h at 4 °C for cryoprotection. Frozen 20 μm-thick sections were incubated in blocking buffer (1X PBS/5 % normal serum/0.3 % Triton X-100) for 1 h at room temperature. The sections were incubated with primary antibodies to NeuN (Millipore Corporation), mature BDNF (Abcam), and pCREB (Cell Signaling) overnight in PBS at 4 °C. After washes with PBS, the sections were incubated with the corresponding fluorescent secondary antibodies (Vector Laboratories, Inc., Burlingame, CA, USA) at room temperature in the dark and then washed three times with PBS. Subsequently, slides were mounted in mounting medium (Vector Laboratories, Inc.) and captured using a fluorescence microscope. Behavioral assessment: Acquisition training for the Morris water maze was performed on four consecutive days from 10 days to seven days before MCAO (five trials per day), and basal time was measured at six days before MCAO. The tank had a diameter of 100 cm and a height of 50 cm. The platform was placed 0.5 cm beneath the surface of the water. Each trial was performed for 90 s or until the mouse arrived on the platform.
PMC-12-treated mice received daily oral administration at doses of 100 and 500 mg/kg for three weeks after MCAO, while mice in the control and vehicle groups were only given distilled water at the same intervals. After the final administration, results of the experiment were recorded using SMART 2.5.18 (Panlab S.L.U.). Data analysis: All data were expressed as mean ± SEM and analyzed using the SigmaStat statistical program version 11.2 (Systat Software, San Jose, CA, USA). Statistical comparisons were performed using one-way analysis of variance (ANOVA) for repeated measures followed by Tukey’s test of least significant difference. A P-value < 0.05 was considered to indicate a statistically significant result. The median effective dose (ED50) value of PMC-12 (in vitro experiments) was derived from the dose–response curve.
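As a hedged illustration of the statistics described above, the sketch below runs a one-way ANOVA followed by Tukey's post hoc comparison and fits a simple sigmoidal dose-response curve to recover an ED50. It uses SciPy and statsmodels rather than SigmaStat, and all group and dose-response values are invented placeholders, so it mirrors only the analysis workflow, not the actual data.

import numpy as np
from scipy import stats, optimize
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical viability data (% of control) for three groups.
control   = np.array([100.0, 98.0, 102.0, 101.0])
glutamate = np.array([36.0, 38.0, 35.0, 37.0])
pmc12     = np.array([78.0, 82.0, 80.0, 76.0])

# One-way ANOVA followed by Tukey's post hoc test (mirrors the described workflow).
f_stat, p_value = stats.f_oneway(control, glutamate, pmc12)
values = np.concatenate([control, glutamate, pmc12])
groups = (["control"] * 4) + (["glutamate"] * 4) + (["PMC-12"] * 4)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# ED50 from a four-parameter logistic fit to a hypothetical dose-response curve.
def four_pl(dose, bottom, top, ed50, hill):
    return bottom + (top - bottom) / (1.0 + (ed50 / dose) ** hill)

doses     = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])   # ug/ml PMC-12 (placeholder)
responses = np.array([40.0, 45.0, 55.0, 65.0, 75.0, 82.0, 88.0])  # % viability (placeholder)
params, _ = optimize.curve_fit(four_pl, doses, responses,
                               p0=[35.0, 90.0, 0.3, 1.0], maxfev=10000)
print(f"estimated ED50 ~ {params[2]:.2f} ug/ml")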
Results: HPLC analysis of PMC-12: HPLC conditions, particularly the mobile phase and its elution program, are important for determination of the compounds in the biological matrix. In this study, we found that a mobile phase gradient of acetonitrile and water could separate THS, DISS, and catalpol (Fig. 1). The HPLC conditions developed in this study produced full peak-to-baseline resolution of the major active compounds THS, DISS, and catalpol present in PMC-12. Based on UV maximal absorption, we detected THS and DISS at 254 nm and catalpol at 203 nm for quantitative analysis. The contents of THS, DISS, and catalpol in PMC-12 were 3.085 ± 0.271 %, 0.352 ± 0.058 %, and 0.785 ± 0.059 %, respectively. The calibration curves showed good linear regression (r2 > 0.999) within the test ranges; the LOD (S/N = 3) and LOQ (S/N = 10) were less than 1.5 and 4.5 μg, respectively, at 254 nm for THS and DISS and at 203 nm for catalpol (Table 1). Fig. 1 HPLC analysis and quantification of PMC-12.
HPLC chromatograms of THS, DISS, and catalpol reference (a) and PMC-12 (b) obtained using a Luna C18 (2) column monitored at 254 nm and eluted with 100 % water to 100 % acetonitrile for 40 at a flow-rate of 1.0 ml/minTable 1Concentration, calibration curve, regression data, LODs and LOQs for THS, DISS, and catalpol by HPLCCompoundsWavelength (nm)Concentration (%)Calibration curver2 LOD (μg/ml)LOQ (μg/ml)THS2543.085 ± 0.271y = 3284.72x + 2.000.9991.414.27DISS2540.352 ± 0.058y = 4289.06x + 47.950.9990.591.79Catalpol2030.785 ± 0.059y = 1253.49x + 31.370.9990.631.91 HPLC analysis and quantification of PMC-12. HPLC chromatograms of THS, DISS, and catalpol reference (a) and PMC-12 (b) obtained using a Luna C18 (2) column monitored at 254 nm and eluted with 100 % water to 100 % acetonitrile for 40 at a flow-rate of 1.0 ml/min Concentration, calibration curve, regression data, LODs and LOQs for THS, DISS, and catalpol by HPLC Pretreatment with PMC-12 reduces glutamate-induced neuronal toxicity in HT22 cells Exposure of cells to glutamate resulted in reduced cell viability of approximately 36.5 % compared with the control. Pretreatment with PMC-12 at a concentration range of 0.01 to 10 μg/ml (ED50 = 0.32 μg/ml PMC-12) resulted in significantly reduced glutamate-induced cytotoxicity in a dose-dependent manner (Fig. 2a). The levels of LDH release showed a significant increase to 77.8 % after exposure to glutamate, while pretreatment with PMC-12 resulted in a marked decrease of glutamate-induced release of LDH (Fig. 2b). These results suggest that pretreatment with PMC-12 exerts a potent neuroprotective effect against oxidative toxicity caused by exposure of HT22 cells to glutamate.Fig. 2Protective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ### P < 0.001 vs. control; *P < 0.05, **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments Protective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ### P < 0.001 vs. control; *P < 0.05, **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments Exposure of cells to glutamate resulted in reduced cell viability of approximately 36.5 % compared with the control. Pretreatment with PMC-12 at a concentration range of 0.01 to 10 μg/ml (ED50 = 0.32 μg/ml PMC-12) resulted in significantly reduced glutamate-induced cytotoxicity in a dose-dependent manner (Fig. 2a). The levels of LDH release showed a significant increase to 77.8 % after exposure to glutamate, while pretreatment with PMC-12 resulted in a marked decrease of glutamate-induced release of LDH (Fig. 2b). These results suggest that pretreatment with PMC-12 exerts a potent neuroprotective effect against oxidative toxicity caused by exposure of HT22 cells to glutamate.Fig. 2Protective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). 
Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ### P < 0.001 vs. control; *P < 0.05, **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments Protective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ### P < 0.001 vs. control; *P < 0.05, **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments Pretreatment with PMC-12 inhibits glutamate-induced apoptotic cell death in HT22 cells We performed flow cytometry analysis using Annexin V/PI staining in order to characterize the types of neuronal death. Concentrations of 0.1, 1, and 10 μg/ml of PMC-12 were selected for testing. After exposure to glutamate, cells were likely to undergo apoptotic cell death rather than necrotic death, however, pretreatment with PMC-12 resulted in a marked decrease in the apoptotic population (Fig. 3). These results suggest that pretreatment with PMC-12 exerts a neuroprotective effect through inhibition of glutamate-induced apoptotic cell death.Fig. 3Protective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Representative flow cytometric analysis scatter-grams of Annexin V/PI staining (a) and quantitative analysis of the histograms (b and c). ### P < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments Protective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Representative flow cytometric analysis scatter-grams of Annexin V/PI staining (a) and quantitative analysis of the histograms (b and c). ### P < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments We performed flow cytometry analysis using Annexin V/PI staining in order to characterize the types of neuronal death. Concentrations of 0.1, 1, and 10 μg/ml of PMC-12 were selected for testing. After exposure to glutamate, cells were likely to undergo apoptotic cell death rather than necrotic death, however, pretreatment with PMC-12 resulted in a marked decrease in the apoptotic population (Fig. 3). These results suggest that pretreatment with PMC-12 exerts a neuroprotective effect through inhibition of glutamate-induced apoptotic cell death.Fig. 3Protective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Representative flow cytometric analysis scatter-grams of Annexin V/PI staining (a) and quantitative analysis of the histograms (b and c). ### P < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments Protective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. 
Pretreatment with PMC-12 inhibits glutamate-induced production of ROS in HT22 cells
Treatment of HT22 cells with glutamate resulted in increased production of ROS. However, pretreatment with PMC-12 resulted in a significant decrease in ROS production, preventing the elevation of ROS levels caused by exposure to glutamate (Fig. 4a). We also performed staining with Hoechst 33342 to confirm the morphological changes caused by glutamate-induced oxidative toxicity. Our results showed that PMC-12 protected against the apoptotic cell death caused by ROS production after exposure to glutamate (Fig. 4b). When cells were treated with 10 μM of the intracellular Ca2+ chelator BAPTA-AM or 1.5 mM of the extracellular Ca2+ chelator EGTA, both inhibitors caused a significant reduction in glutamate-induced ROS production. However, no change in ROS production was observed in cells treated with a combination of Ca2+ chelators and PMC-12 compared to cells treated with PMC-12 alone followed by exposure to glutamate (Fig. 5). These results suggest that PMC-12 suppresses glutamate-induced oxidative stress by blocking ROS production, which may be related to an indirect Ca2+ pathway.
Fig. 4 Protective effect of PMC-12 on ROS generation in glutamate-treated HT22 cells. Cells were pretreated with 0.01, 0.1, 1, or 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. The oxidation-sensitive fluorescence dye carboxy-H2DCFDA (20 μM) was used for measurement of ROS levels. Production of ROS was analyzed using a fluorescence plate reader (a) and a fluorescence microscope (b). In addition, apoptotic nuclei were observed after staining with Hoechst 33342 for morphological detection of apoptosis (b). ### P < 0.001 vs. control; **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments. Scale bars = 50 μm
Fig. 5 Protective effects of PMC-12 on cellular Ca2+-related ROS production in HT22 cells. Cells were pretreated with 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were treated with the specific Ca2+ inhibitors, 1.5 mM EGTA or 10 μM BAPTA-AM, for 30 min before addition of PMC-12 or glutamate. ROS production was measured using a fluorescence plate reader and mean fluorescence intensity is expressed as the mean ± SEM of three independent experiments. ### P < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells
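To make the plate-reader readout concrete, the sketch below shows one common way DCF fluorescence is expressed as a percentage of the untreated control, with the SEM taken over independent experiments as in Figs. 4 and 5. The readings and group names are hypothetical illustrations, not values from this study.

```python
import numpy as np

# Hypothetical carboxy-H2DCFDA fluorescence readings (arbitrary units) from
# three independent experiments; these numbers are illustrative only.
readings = {
    "control":            [1000, 1100, 1050],
    "glutamate":          [2600, 2450, 2700],
    "glutamate + PMC-12": [1500, 1400, 1450],
}

control_mean = np.mean(readings["control"])

for group, values in readings.items():
    values = np.asarray(values, dtype=float)
    percent = 100.0 * values / control_mean                 # express as % of control
    mean = percent.mean()
    sem = percent.std(ddof=1) / np.sqrt(len(percent))       # SEM over n = 3 experiments
    print(f"{group}: {mean:.1f} ± {sem:.1f} % of control")
```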
Pretreatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation via p38 MAPK and PI3K in HT22 cells
Levels of phosphorylated p38 MAPK and dephosphorylated PI3K were significantly decreased by treatment with either PMC-12 or the anti-oxidant NAC compared to glutamate-treated cells. Protein levels of mature BDNF and CREB phosphorylation were significantly increased by treatment with either PMC-12 or NAC (Fig. 6a). When cells were treated with PMC-12, NAC, or the intracellular Ca2+ inhibitor BAPTA-AM, the combination treatment resulted in markedly reduced levels of phosphorylated p38 MAPK and dephosphorylated PI3K compared to the other groups. The combination treatment also restored the protein levels of mature BDNF and CREB phosphorylation that were decreased after exposure to glutamate (Fig. 6b). These results suggest that the neuroprotective effects of PMC-12 may be regulated by both p38 MAPK and PI3K signaling together with mature BDNF expression and CREB phosphorylation, and that these effects may be related to ROS accumulation and Ca2+ influx.
Fig. 6 Protective effects of PMC-12 on regulation of intracellular protein kinases in HT22 cells. Cells were pretreated with 1 μg/ml PMC-12 or 1 mM NAC, followed by exposure to 5 mM glutamate (a). In addition, cells were pretreated with 10 μM BAPTA-AM or NAC for 30 min before addition of PMC-12 or glutamate (b). The histogram for Fig. 6b is presented as the mean ± SEM of three independent experiments (c). Equal amounts of protein from each sample were subjected to Western blot analysis using the indicated antibodies. Equal protein loading was confirmed by actin expression. ### P < 0.001 vs. normal control; ***P < 0.001 vs. glutamate-treated groups; $ P < 0.05 vs. groups except control and glutamate-treated groups

Treatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice
At 26 days after MCAO, pCREB- and mature BDNF-positive neuronal cells in the hippocampal CA1 and dentate gyrus (DG) regions were counted after immunofluorescence staining. Post-treatment with PMC-12 following MCAO surgery resulted in a significant increase in the number of pCREB/NeuN or mature BDNF/NeuN double-positive cells in the CA1 and DG regions of the ipsilateral hippocampus compared to the MCAO group (Fig. 7). These results suggest a possible association of the protective effects of PMC-12 with neuronal mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice.
Fig. 7 Protective effects of PMC-12 on expression of both pCREB and mature BDNF in the hippocampus of mice with focal cerebral ischemia at 26 days after MCAO. Photomicrographs (a-b, d-e) and histograms for pCREB/NeuN (c) and mature BDNF/NeuN double-positive cells (f) in the DG and CA1 regions of the hippocampus. The total number of pCREB/NeuN and mBDNF/NeuN double-positive cells was significantly increased by administration of PMC-12 in the DG and CA1 compared to the MCAO group. ## P < 0.01 and ### P < 0.001 vs. control; *P < 0.05 and **P < 0.01 vs. MCAO mice. Scale bar = 100 μm
Treatment with PMC-12 ameliorates spatial memory impairment in MCAO mice
Spatial memory was assessed using the water maze test. MCAO mice took a longer time on average to find the platform than the basal group. However, PMC-12-administered mice reached the platform in a significantly shorter time at both doses from 22 to 25 days after MCAO compared to the vehicle group (Fig. 8). In particular, a low dose (100 mg/kg) of PMC-12 was sufficient to improve the impaired memory function. These results suggest that treatment with PMC-12 may have beneficial effects on the recovery of memory function in a focal cerebral ischemia model.
Fig. 8 Beneficial effects of PMC-12 on spatial memory function in MCAO mice. The Morris water maze test was performed from 22 d to 25 d after MCAO. Administration of PMC-12 resulted in significantly improved memory function of MCAO mice during the late phase of the experiment. There were no significant differences between the two doses (100 and 500 mg/kg) of PMC-12. Mean ± SEM. ### P < 0.001 vs. control; ***P < 0.001 vs. MCAO mice (vehicle)

HPLC analysis of PMC-12: HPLC conditions, particularly the mobile phase and its elution program, are important for determination of the compounds in a biological matrix. In this study, we found that a mobile phase consisting of acetonitrile and H2O could separate THS, DISS, and catalpol (Fig. 1). The HPLC conditions developed in this study produced full peak-to-baseline resolution of the major active compounds THS, DISS, and catalpol present in PMC-12. Based on their UV absorption maxima, we detected THS and DISS at 254 nm and catalpol at 203 nm for quantitative analysis. The contents of THS, DISS, and catalpol in PMC-12 were 3.085 ± 0.271 %, 0.352 ± 0.058 %, and 0.785 ± 0.059 %, respectively. The calibration curves showed good linearity (r2 > 0.999) within the test ranges; the LOD (S/N = 3) and LOQ (S/N = 10) were less than 1.5 and 4.5 μg/ml, respectively, at 254 nm for THS and DISS and at 203 nm for catalpol (Table 1).
Fig. 1 HPLC analysis and quantification of PMC-12. HPLC chromatograms of the THS, DISS, and catalpol reference standards (a) and PMC-12 (b) obtained using a Luna C18 (2) column, monitored at 254 nm and eluted with a gradient from 100 % water to 100 % acetonitrile over 40 min at a flow rate of 1.0 ml/min
Table 1 Concentration, calibration curve, regression data, LODs and LOQs for THS, DISS, and catalpol by HPLC
Compound | Wavelength (nm) | Concentration (%) | Calibration curve | r2 | LOD (μg/ml) | LOQ (μg/ml)
THS | 254 | 3.085 ± 0.271 | y = 3284.72x + 2.00 | 0.999 | 1.41 | 4.27
DISS | 254 | 0.352 ± 0.058 | y = 4289.06x + 47.95 | 0.999 | 0.59 | 1.79
Catalpol | 203 | 0.785 ± 0.059 | y = 1253.49x + 31.37 | 0.999 | 0.63 | 1.91
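As an illustration of how calibration equations like those in Table 1 are typically obtained and used, the sketch below fits a linear calibration curve and back-calculates a sample concentration from its peak area. The calibration points and sample area are hypothetical, and the study additionally derived LOD and LOQ from chromatogram signal-to-noise ratios (S/N = 3 and 10), which is not reproduced here.

```python
import numpy as np

# Hypothetical THS calibration data (concentration in ug/ml vs. peak area);
# these numbers are illustrative and are not the study's raw data.
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])           # ug/ml
area = np.array([16500, 33000, 82500, 164500, 328500])    # peak area (arbitrary units)

# Least-squares fit of the linear calibration curve y = slope * x + intercept,
# analogous to the "y = 3284.72x + 2.00" style equations reported in Table 1.
slope, intercept = np.polyfit(conc, area, 1)

# Coefficient of determination r^2 for the fitted line.
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# Back-calculate the concentration of an unknown injection from its peak area.
sample_area = 101_300.0
sample_conc = (sample_area - intercept) / slope

print(f"calibration: y = {slope:.2f}x + {intercept:.2f}, r^2 = {r2:.4f}")
print(f"estimated concentration: {sample_conc:.2f} ug/ml")
```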
Pretreatment with PMC-12 reduces glutamate-induced neuronal toxicity in HT22 cells: Exposure of cells to glutamate reduced cell viability to approximately 36.5 % compared with the control. Pretreatment with PMC-12 at concentrations ranging from 0.01 to 10 μg/ml (ED50 = 0.32 μg/ml) significantly reduced glutamate-induced cytotoxicity in a dose-dependent manner (Fig. 2a). LDH release increased significantly to 77.8 % after exposure to glutamate, whereas pretreatment with PMC-12 markedly decreased the glutamate-induced release of LDH (Fig. 2b). These results suggest that pretreatment with PMC-12 exerts a potent neuroprotective effect against the oxidative toxicity caused by exposure of HT22 cells to glutamate.
Fig. 2 Protective effects of PMC-12 against glutamate-induced cell death in HT22 cells. Cell viability and toxicity were determined by MTT (a) and LDH assay (b). Cells were pretreated with 0.01, 0.1, 1, and 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. ### P < 0.001 vs. control; *P < 0.05, **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments
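For readers who want to see how an ED50 such as the 0.32 μg/ml value above is typically estimated from viability data, a four-parameter logistic fit is sketched below. The dose-viability numbers are hypothetical, and the choice of model and fitting routine is an assumption; the authors do not state which procedure they used.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical MTT viability data (% of untreated control) under 5 mM glutamate
# at increasing PMC-12 concentrations; these numbers are illustrative only.
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])           # ug/ml PMC-12
viability = np.array([40.0, 46.0, 58.0, 70.0, 79.0, 84.0, 86.0])  # % viability

def four_pl(d, bottom, top, ed50, hill):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ed50 / d) ** hill)

# Initial guesses and loose bounds keep the nonlinear fit numerically stable.
p0 = [40.0, 90.0, 0.3, 1.0]
bounds = ([0.0, 0.0, 1e-4, 0.1], [100.0, 150.0, 100.0, 5.0])
params, _ = curve_fit(four_pl, dose, viability, p0=p0, bounds=bounds)
bottom, top, ed50, hill = params

print(f"estimated ED50 = {ed50:.2f} ug/ml (the study reports 0.32 ug/ml)")
```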
Pretreatment with PMC-12 inhibits glutamate-induced apoptotic cell death in HT22 cells: We performed flow cytometry analysis using Annexin V/PI staining in order to characterize the types of neuronal death. Concentrations of 0.1, 1, and 10 μg/ml of PMC-12 were selected for testing. After exposure to glutamate, cells were likely to undergo apoptotic rather than necrotic death; however, pretreatment with PMC-12 resulted in a marked decrease in the apoptotic population (Fig. 3). These results suggest that pretreatment with PMC-12 exerts a neuroprotective effect through inhibition of glutamate-induced apoptotic cell death.
Fig. 3 Protective effect of PMC-12 on types of cell death in HT22 cells. Cells were pretreated with 0.1, 1, or 10 μg/ml PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Representative flow cytometric scatter-grams of Annexin V/PI staining (a) and quantitative analysis of the histograms (b and c). ### P < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments

Pretreatment with PMC-12 inhibits glutamate-induced production of ROS in HT22 cells: Treatment of HT22 cells with glutamate resulted in increased production of ROS. However, pretreatment with PMC-12 resulted in a significant decrease in ROS production, preventing the elevation of ROS levels caused by exposure to glutamate (Fig. 4a). We also performed staining with Hoechst 33342 to confirm the morphological changes caused by glutamate-induced oxidative toxicity. Our results showed that PMC-12 protected against the apoptotic cell death caused by ROS production after exposure to glutamate (Fig. 4b). When cells were treated with 10 μM of the intracellular Ca2+ chelator BAPTA-AM or 1.5 mM of the extracellular Ca2+ chelator EGTA, both inhibitors caused a significant reduction in glutamate-induced ROS production. However, no change in ROS production was observed in cells treated with a combination of Ca2+ chelators and PMC-12 compared to cells treated with PMC-12 alone followed by exposure to glutamate (Fig. 5). These results suggest that PMC-12 suppresses glutamate-induced oxidative stress by blocking ROS production, which may be related to an indirect Ca2+ pathway.
Fig. 4 Protective effect of PMC-12 on ROS generation in glutamate-treated HT22 cells. Cells were pretreated with 0.01, 0.1, 1, or 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. The oxidation-sensitive fluorescence dye carboxy-H2DCFDA (20 μM) was used for measurement of ROS levels. Production of ROS was analyzed using a fluorescence plate reader (a) and a fluorescence microscope (b). In addition, apoptotic nuclei were observed after staining with Hoechst 33342 for morphological detection of apoptosis (b). ### P < 0.001 vs. control; **P < 0.01 and ***P < 0.001 vs. glutamate-treated cells. All data are represented as the mean ± SEM of three independent experiments. Scale bars = 50 μm
Fig. 5 Protective effects of PMC-12 on cellular Ca2+-related ROS production in HT22 cells. Cells were pretreated with 10 μg/ml of PMC-12 for 24 h, followed by exposure to 5 mM glutamate for 24 h. Cells were treated with the specific Ca2+ inhibitors, 1.5 mM EGTA or 10 μM BAPTA-AM, for 30 min before addition of PMC-12 or glutamate. ROS production was measured using a fluorescence plate reader and mean fluorescence intensity is expressed as the mean ± SEM of three independent experiments. ### P < 0.001 vs. control; ***P < 0.001 vs. glutamate-treated cells

Pretreatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation via p38 MAPK and PI3K in HT22 cells: Levels of phosphorylated p38 MAPK and dephosphorylated PI3K were significantly decreased by treatment with either PMC-12 or the anti-oxidant NAC compared to glutamate-treated cells. Protein levels of mature BDNF and CREB phosphorylation were significantly increased by treatment with either PMC-12 or NAC (Fig. 6a). When cells were treated with PMC-12, NAC, or the intracellular Ca2+ inhibitor BAPTA-AM, the combination treatment resulted in markedly reduced levels of phosphorylated p38 MAPK and dephosphorylated PI3K compared to the other groups. The combination treatment also restored the protein levels of mature BDNF and CREB phosphorylation that were decreased after exposure to glutamate (Fig. 6b). These results suggest that the neuroprotective effects of PMC-12 may be regulated by both p38 MAPK and PI3K signaling together with mature BDNF expression and CREB phosphorylation, and that these effects may be related to ROS accumulation and Ca2+ influx.
Fig. 6 Protective effects of PMC-12 on regulation of intracellular protein kinases in HT22 cells. Cells were pretreated with 1 μg/ml PMC-12 or 1 mM NAC, followed by exposure to 5 mM glutamate (a). In addition, cells were pretreated with 10 μM BAPTA-AM or NAC for 30 min before addition of PMC-12 or glutamate (b). The histogram for Fig. 6b is presented as the mean ± SEM of three independent experiments (c). Equal amounts of protein from each sample were subjected to Western blot analysis using the indicated antibodies. Equal protein loading was confirmed by actin expression. ### P < 0.001 vs. normal control; ***P < 0.001 vs. glutamate-treated groups; $ P < 0.05 vs. groups except control and glutamate-treated groups
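Because the Fig. 6 histograms summarize band intensities, a minimal sketch of the usual densitometry arithmetic (target band normalized to the actin loading control and then to the untreated control lane) is given below. All values are hypothetical illustrations and are not the study's measurements.

```python
# Hypothetical densitometry values (arbitrary units) for one blot:
# (target band, actin band) per lane.
bands = {
    "control":            (1.00, 1.00),
    "glutamate":          (0.45, 0.98),
    "glutamate + PMC-12": (0.92, 1.02),
}

# Normalize each target band to its actin loading control, then express the
# actin-corrected value relative to the untreated control lane.
control_ratio = bands["control"][0] / bands["control"][1]
for lane, (target, actin) in bands.items():
    relative = (target / actin) / control_ratio
    print(f"{lane}: {relative:.2f}-fold vs. control")
```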
Treatment with PMC-12 enhances mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice: At 26 days after MCAO, pCREB- and mature BDNF-positive neuronal cells in the hippocampal CA1 and dentate gyrus (DG) regions were counted after immunofluorescence staining. Post-treatment with PMC-12 following MCAO surgery resulted in a significant increase in the number of pCREB/NeuN or mature BDNF/NeuN double-positive cells in the CA1 and DG regions of the ipsilateral hippocampus compared to the MCAO group (Fig. 7). These results suggest a possible association of the protective effects of PMC-12 with neuronal mature BDNF expression and CREB phosphorylation in the hippocampus of MCAO mice.
Fig. 7 Protective effects of PMC-12 on expression of both pCREB and mature BDNF in the hippocampus of mice with focal cerebral ischemia at 26 days after MCAO. Photomicrographs (a-b, d-e) and histograms for pCREB/NeuN (c) and mature BDNF/NeuN double-positive cells (f) in the DG and CA1 regions of the hippocampus. The total number of pCREB/NeuN and mBDNF/NeuN double-positive cells was significantly increased by administration of PMC-12 in the DG and CA1 compared to the MCAO group. ## P < 0.01 and ### P < 0.001 vs. control; *P < 0.05 and **P < 0.01 vs. MCAO mice. Scale bar = 100 μm

Treatment with PMC-12 ameliorates spatial memory impairment in MCAO mice: Spatial memory was assessed using the water maze test. MCAO mice took a longer time on average to find the platform than the basal group. However, PMC-12-administered mice reached the platform in a significantly shorter time at both doses from 22 to 25 days after MCAO compared to the vehicle group (Fig. 8). In particular, a low dose (100 mg/kg) of PMC-12 was sufficient to improve the impaired memory function. These results suggest that treatment with PMC-12 may have beneficial effects on the recovery of memory function in a focal cerebral ischemia model.
Fig. 8 Beneficial effects of PMC-12 on spatial memory function in MCAO mice. The Morris water maze test was performed from 22 d to 25 d after MCAO. Administration of PMC-12 resulted in significantly improved memory function of MCAO mice during the late phase of the experiment. There were no significant differences between the two doses (100 and 500 mg/kg) of PMC-12. Mean ± SEM. ### P < 0.001 vs. control; ***P < 0.001 vs. MCAO mice (vehicle)

Discussion: Traditional medical literature can be used to identify and shortlist herbs and combinations of herbs for further experimental and clinical research [24]. We have used a systematic method for screening and evaluating citations from the traditional Korean medicine literature, Dongeuibogam. In a screening exercise performed using HT22 hippocampal cells for the selection of functional herbs against memory impairment, we found that, among many herbal candidates, Polygonum multiflorum exhibited prominent neuroprotective effects.
Our previous results demonstrated that extracts of Polygonum multiflorum exert significant anti-apoptotic effects against glutamate-induced neurotoxicity [25, 26] and protect against cerebral ischemic damage via regulation of endothelial nitric oxide [27]. However, extracts of Polygonum multiflorum alone had only a partial blocking effect against neuronal damage. To enhance the potential beneficial effects of Polygonum multiflorum, we analyzed 4,014 herbal prescriptions mentioned in Dongeuibogam and noted 19 multiherb formulae, including the roots of Rehmannia glutinosa, Polygala tenuifolia, and Acorus gramineus. These 19 multiherb formulae are commonly used in the treatment of mental and physical weakness in the elderly. We prepared PMC-12 with a Polygonum multiflorum base and the above multiherb formulae to enhance medicinal efficacy. In a further screening exercise performed using a focal cerebral ischemia model, PMC-12 reduced infarct size markedly compared with Polygonum multiflorum extract alone. Thus, in the current study, we investigated the beneficial effects of the herbal prescription PMC-12 on hippocampal neurons and spatial memory deficits in mice. Exposure to glutamate causes neuronal cell death via non-receptor-mediated oxidative stress in hippocampal HT22 cells [28, 29]. Increased levels of Ca2+ and ROS caused by exposure to glutamate lead to both apoptotic and necrotic cell death [6, 30, 31]. Since alteration of Ca2+ signaling is involved in cell death, it can increase ROS production [32–34]. In the current study, treatment with PMC-12 resulted in reduction of apoptotic cell death with suppression of ROS accumulation; PMC-12 may therefore contribute to neuronal survival against glutamate-induced oxidative toxicity related to the cellular content of ROS and Ca2+. However, with PMC-12 treatment, ROS production was not significantly diminished in a concentration-dependent manner, and cells were effectively protected for an additional 24 h after removal of PMC-12. The beneficial effect of PMC-12 on hippocampal cells may therefore not be attributable solely to its direct control of ROS accumulation and Ca2+ influx. Beyond its anti-oxidative activity, PMC-12 may contribute to other protective pathways, including endogenous kinases. Neuroprotective effects under oxidative stress are mediated by regulation of the MAPK and PI3K signaling pathways, leading to cell death or survival in a variety of neurodegenerative disorders [35–37]. In particular, p38 MAPK signaling in neuronal death [38, 39] and PI3K activation in neuroprotection [10, 37] play essential roles under oxidative stress conditions. Our results showed that the neuroprotective effects of PMC-12 against glutamate-induced oxidative cell death are regulated by the p38 MAPK and PI3K signaling pathways. CREB, an important transcription factor implicated in the control of adaptive neuronal responses, contributes to several critical functions of BDNF-mediated cell survival [40, 41]. Impaired CREB phosphorylation in the hippocampus may be a pathological component of neurodegenerative disorders [42, 43]. BDNF has recently been recognized as a potent modulator capable of regulating a wide repertoire of neuronal functions [44]. Of the two forms of extracellular BDNF, pro-BDNF and mature BDNF, mature BDNF is essential for protecting neuronal cells and the neonatal or developing brain from ischemic injury, whereas pro-BDNF may induce neuronal death [44–48].
In accordance with previous studies, treatment with PMC-12 restored the protein levels of mature BDNF and CREB phosphorylation that were reduced by glutamate. These results suggest that the neuroprotective effects of PMC-12 may be regulated by both mature BDNF expression and CREB phosphorylation. Memory improvement is indicative of structural and functional hippocampal plastic changes, including BDNF expression [22]. To confirm the involvement of mature BDNF and CREB activation in the neuroprotection afforded by PMC-12, we performed an in vivo study using a mild ischemic mouse model [22, 23]. Immunohistochemistry after three weeks of treatment with PMC-12 in MCAO mice showed significantly increased expression in the CA1 and DG regions of the hippocampus, suggesting that PMC-12 may prevent neuronal death via both mature BDNF expression and CREB phosphorylation. In addition, we applied a behavioral test, the Morris water maze, which assesses MCAO-induced deficits of learning and memory. A previous report showed that a short MCAO duration of 60 min, instead of 2 h, does not result in concomitant sensorimotor deficits in mice [23, 49]. Thus, 30-min MCAO was conducted to avoid the influence of sensorimotor deficits on the performance of the mice in our Morris water maze test. Treatment with PMC-12 resulted in significantly shorter times to find the platform from 22 to 25 days after MCAO compared with the vehicle group. These findings suggest that PMC-12 may be a good candidate for the recovery of damaged learning and memory function in a focal cerebral ischemia model. Our study did not investigate the main functional components of the multiherb formula PMC-12. However, phenolic constituents are well known as major active components of Polygonum multiflorum [50], and tetrahydroxystilbene glucoside identified from this herb promotes the induction of long-term potentiation, contributing to enhancement of learning and memory in mouse models [51, 52]. In line with these studies, our HPLC analysis also showed that 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside is one of the main components of the multiherb formula PMC-12. However, medicinal herbs have diverse roles within multiherb formulae according to well-known perspectives of traditional Korean medicine. Some herbs target the primary disorder, while others aim to alleviate secondary symptoms, aid absorption, or counter undesirable effects of other herbs [24, 53]. PMC-12 may thus enhance memory function through its primary neuroprotective effects together with the diverse secondary roles of the herbs within the formula.
Conclusions: PMC-12 protects against hippocampal neuronal cell death via inhibition of Ca2+-related accumulation of ROS, and these effects are regulated through the signaling pathways of p38 MAPK and PI3K associated with mature BDNF expression and CREB phosphorylation. PMC-12 may also modulate the recovery of memory function via mature BDNF and CREB against cerebral ischemic stroke. These results show that the multiherb formula PMC-12 has potential as a useful therapeutic strategy for memory impairment in brain disorders.
Background: Four traditional Korean medicinal herbs which act in retarding the aging process, Polygonum multiflorum Thunb., Rehmannia glutinosa (Gaertn) Libosch., Polygala tenuifolia Willd., and Acorus gramineus Soland., were prepared by systematic investigation of Dongeuibogam (Treasured Mirror of Eastern Medicine), published in the early 17th century in Korea. This study was performed to evaluate beneficial effects of four herbal mixture extract (PMC-12) on hippocampal neuron and spatial memory. Methods: High performance liquid chromatography (HPLC) analysis was performed for standardization of PMC-12. Cell viability, lactate dehydrogenase, flow cytometry, reactive oxygen species (ROS), and Western blot assays were performed in HT22 hippocampal cells and immunohistochemistry and behavioral tests were performed in a mouse model of focal cerebral ischemia in order to observe alterations of hippocampal cell survival and subsequent memory function. Results: In the HPLC analysis, PMC-12 was standardized to contain 3.09% 2,3,5,4'-tetrahydroxystilbene-2-O-β-D-glucoside, 0.35% 3',6-disinapoyl sucrose, and 0.79% catalpol. In HT22 cells, pretreatment with PMC-12 resulted in significantly reduced glutamate-induced apoptotic cell death. Pretreatment with PMC-12 also resulted in suppression of ROS accumulation in connection with cellular Ca(2+) level after exposure to glutamate. Expression levels of phosphorylated p38 mitogen-activated protein kinases (MAPK) and dephosphorylated phosphatidylinositol-3 kinase (PI3K) by glutamate exposure were recovered by pretreatment with either PMC-12 or anti-oxidant N-acetyl-L-cysteine (NAC). Expression levels of mature brain-derived neurotrophic factor (BDNF) and phosphorylated cAMP response element binding protein (CREB) were significantly enhanced by treatment with either PMC-12 or NAC. Combination treatment with PMC-12, NAC, and intracellular Ca(2+) inhibitor BAPTA showed similar expression levels. In a mouse model of focal cerebral ischemia, we observed higher expression of mature BDNF and phosphorylation of CREB in the hippocampus and further confirmed improved spatial memory by treatment with PMC-12. Conclusions: Our results suggest that PMC-12 mainly exerted protective effects on hippocampal neurons through suppression of Ca(2+)-related ROS accumulation and regulation of signaling pathways of p38 MAPK and PI3K associated with mature BDNF expression and CREB phosphorylation and subsequently enhanced spatial memory.
Background: Due to an increase in life expectancy and the elderly population, memory and cognitive impairments, including dementia, have become a major public health problem [1]. Hippocampal neuronal death is a major factor in the progression of memory impairment in many brain disorders [2, 3]. Prevention of hippocampal neuronal death provides a potential new therapeutic strategy to ameliorate the memory and cognitive impairment of many neurological disorders. The HT22 hippocampal cell line, which lacks a functional glutamate receptor, is valuable for studying the molecular mechanisms of memory deficits [2, 4]. Exposure of HT22 hippocampal cells to glutamate causes neurotoxicity through oxidative stress rather than N-methyl-D-aspartate receptor-mediated excitotoxicity [5–7]. Non-receptor-mediated oxidative stress involves inhibition of cysteine uptake, alteration of intracellular cysteine homeostasis, glutathione depletion, and ultimately elevation of reactive oxygen species (ROS) inducing neuronal cell death [5, 8–10]. Death of hippocampal cells following oxidative stress and accumulation of ROS play a role in the learning and memory impairment of brain disorders [11]. Under oxidative neuronal death, the p38 mitogen-activated protein kinase (MAPK) and phosphatidylinositol-3 kinase (PI3K) pathways play critical roles in the control of glutamate-induced neuronal death and cell survival, respectively [6, 12]. Brain-derived neurotrophic factor (BDNF) mediates neuronal survival, neuroplasticity, and associated learning and memory, and its signaling is associated with alteration of the main mediators of PI3K pathways [13–15]. Cyclic AMP response element binding protein (CREB) has multiple roles in different brain areas, including promotion of cell survival [16, 17], and is involved in memory, learning, and synaptic transmission in the brain [18, 19]. Studies of neuroprotection have been performed in both HT22 cells and middle cerebral artery occlusion (MCAO)-induced injury [20, 21]. However, memory deficits are also frequently noted after stroke. Transient MCAO induces a progressive deficiency in spatial performance related to impaired hippocampal function [22]. Thus, shorter durations of ischemia have been used in experiments that aimed to test impaired spatial learning and memory performance [23]. In the traditional literature of Korean/Chinese medicine, research on screening herbs, evaluating citations, and attempting their practical use has been conducted for the development of therapeutic strategies [24]. We prepared multiherb formulae comprising the roots of Polygonum multiflorum, Rehmannia glutinosa, Polygala tenuifolia, and Acorus gramineus to increase medicinal efficacy by systematic investigation of Dongeuibogam, published by Joon Hur in the early 17th century in Korea. Our aim was to determine the beneficial effects of the herbal mixture extract on hippocampal neurons, a susceptible cell type important in memory impairment, in HT22 hippocampal cells and in the hippocampus, with subsequent memory enhancement in a mouse model of focal cerebral ischemia. Conclusions: PMC-12 protects against hippocampal neuronal cell death via inhibition of Ca2+-related accumulation of ROS, and these effects are regulated through the signaling pathways of p38 MAPK and PI3K associated with mature BDNF expression and CREB phosphorylation. PMC-12 may also modulate the recovery of memory function via mature BDNF and CREB against cerebral ischemic stroke.
These results show that the multiherb formula PMC-12 has potential as a useful therapeutic strategy for memory impairment in brain disorders.
Background: Four traditional Korean medicinal herbs which act in retarding the aging process, Polygonum multiflorum Thunb., Rehmannia glutinosa (Gaertn) Libosch., Polygala tenuifolia Willd., and Acorus gramineus Soland., were prepared by systematic investigation of Dongeuibogam (Treasured Mirror of Eastern Medicine), published in the early 17th century in Korea. This study was performed to evaluate beneficial effects of four herbal mixture extract (PMC-12) on hippocampal neuron and spatial memory. Methods: High performance liquid chromatography (HPLC) analysis was performed for standardization of PMC-12. Cell viability, lactate dehydrogenase, flow cytometry, reactive oxygen species (ROS), and Western blot assays were performed in HT22 hippocampal cells and immunohistochemistry and behavioral tests were performed in a mouse model of focal cerebral ischemia in order to observe alterations of hippocampal cell survival and subsequent memory function. Results: In the HPLC analysis, PMC-12 was standardized to contain 3.09% 2,3,5,4'-tetrahydroxystilbene-2-O-β-D-glucoside, 0.35% 3',6-disinapoyl sucrose, and 0.79% catalpol. In HT22 cells, pretreatment with PMC-12 resulted in significantly reduced glutamate-induced apoptotic cell death. Pretreatment with PMC-12 also resulted in suppression of ROS accumulation in connection with cellular Ca(2+) level after exposure to glutamate. Expression levels of phosphorylated p38 mitogen-activated protein kinases (MAPK) and dephosphorylated phosphatidylinositol-3 kinase (PI3K) by glutamate exposure were recovered by pretreatment with either PMC-12 or anti-oxidant N-acetyl-L-cysteine (NAC). Expression levels of mature brain-derived neurotrophic factor (BDNF) and phosphorylated cAMP response element binding protein (CREB) were significantly enhanced by treatment with either PMC-12 or NAC. Combination treatment with PMC-12, NAC, and intracellular Ca(2+) inhibitor BAPTA showed similar expression levels. In a mouse model of focal cerebral ischemia, we observed higher expression of mature BDNF and phosphorylation of CREB in the hippocampus and further confirmed improved spatial memory by treatment with PMC-12. Conclusions: Our results suggest that PMC-12 mainly exerted protective effects on hippocampal neurons through suppression of Ca(2+)-related ROS accumulation and regulation of signaling pathways of p38 MAPK and PI3K associated with mature BDNF expression and CREB phosphorylation and subsequently enhanced spatial memory.
18,923
419
[ 288, 256, 260, 93, 89, 107, 105, 176, 120, 189, 282, 195, 148, 98, 441, 382, 331, 802, 496, 411, 323 ]
26
[ "12", "pmc 12", "pmc", "cells", "glutamate", "vs", "mcao", "treated", "001 vs", "001" ]
[ "glutamate induced oxidative", "hippocampal neuronal deaths", "hippocampal neuronal death", "hippocampal cells glutamate", "neurotoxicity oxidative stress" ]
null
[CONTENT] Hippocampal cell | Neuroprotection | Memory | Polygonum multiflorum [SUMMARY]
null
[CONTENT] Hippocampal cell | Neuroprotection | Memory | Polygonum multiflorum [SUMMARY]
[CONTENT] Hippocampal cell | Neuroprotection | Memory | Polygonum multiflorum [SUMMARY]
[CONTENT] Hippocampal cell | Neuroprotection | Memory | Polygonum multiflorum [SUMMARY]
[CONTENT] Hippocampal cell | Neuroprotection | Memory | Polygonum multiflorum [SUMMARY]
[CONTENT] Animals | Brain Ischemia | Cell Line | Disease Models, Animal | Hippocampus | Mice | Neuroprotective Agents | Plant Extracts | Spatial Memory [SUMMARY]
null
[CONTENT] Animals | Brain Ischemia | Cell Line | Disease Models, Animal | Hippocampus | Mice | Neuroprotective Agents | Plant Extracts | Spatial Memory [SUMMARY]
[CONTENT] Animals | Brain Ischemia | Cell Line | Disease Models, Animal | Hippocampus | Mice | Neuroprotective Agents | Plant Extracts | Spatial Memory [SUMMARY]
[CONTENT] Animals | Brain Ischemia | Cell Line | Disease Models, Animal | Hippocampus | Mice | Neuroprotective Agents | Plant Extracts | Spatial Memory [SUMMARY]
[CONTENT] Animals | Brain Ischemia | Cell Line | Disease Models, Animal | Hippocampus | Mice | Neuroprotective Agents | Plant Extracts | Spatial Memory [SUMMARY]
[CONTENT] glutamate induced oxidative | hippocampal neuronal deaths | hippocampal neuronal death | hippocampal cells glutamate | neurotoxicity oxidative stress [SUMMARY]
null
[CONTENT] glutamate induced oxidative | hippocampal neuronal deaths | hippocampal neuronal death | hippocampal cells glutamate | neurotoxicity oxidative stress [SUMMARY]
[CONTENT] glutamate induced oxidative | hippocampal neuronal deaths | hippocampal neuronal death | hippocampal cells glutamate | neurotoxicity oxidative stress [SUMMARY]
[CONTENT] glutamate induced oxidative | hippocampal neuronal deaths | hippocampal neuronal death | hippocampal cells glutamate | neurotoxicity oxidative stress [SUMMARY]
[CONTENT] glutamate induced oxidative | hippocampal neuronal deaths | hippocampal neuronal death | hippocampal cells glutamate | neurotoxicity oxidative stress [SUMMARY]
[CONTENT] 12 | pmc 12 | pmc | cells | glutamate | vs | mcao | treated | 001 vs | 001 [SUMMARY]
null
[CONTENT] 12 | pmc 12 | pmc | cells | glutamate | vs | mcao | treated | 001 vs | 001 [SUMMARY]
[CONTENT] 12 | pmc 12 | pmc | cells | glutamate | vs | mcao | treated | 001 vs | 001 [SUMMARY]
[CONTENT] 12 | pmc 12 | pmc | cells | glutamate | vs | mcao | treated | 001 vs | 001 [SUMMARY]
[CONTENT] 12 | pmc 12 | pmc | cells | glutamate | vs | mcao | treated | 001 vs | 001 [SUMMARY]
[CONTENT] memory | hippocampal | brain | learning | neuronal | impairment | oxidative | death | learning memory | receptor [SUMMARY]
null
[CONTENT] pmc | pmc 12 | glutamate | 12 | cells | vs | 001 | 001 vs | mcao | fig [SUMMARY]
[CONTENT] memory | creb | mature | mature bdnf | pmc 12 | pmc | bdnf | 12 | memory function mature bdnf | strategy memory impairment [SUMMARY]
[CONTENT] cells | pmc | pmc 12 | 12 | glutamate | mcao | vs | mm | mice | pbs [SUMMARY]
[CONTENT] cells | pmc | pmc 12 | 12 | glutamate | mcao | vs | mm | mice | pbs [SUMMARY]
[CONTENT] Four | Korean | Polygonum | Thunb | Rehmannia glutinosa | Gaertn) Libosch | Polygala | Willd | Acorus | Soland | Mirror of Eastern Medicine | the early 17th century | Korea ||| four [SUMMARY]
null
[CONTENT] HPLC | 3.09% ||| 0.35% | 3',6 | 0.79% ||| ||| ROS ||| NAC ||| CREB | NAC ||| NAC | BAPTA ||| CREB [SUMMARY]
[CONTENT] ROS | CREB [SUMMARY]
[CONTENT] Four | Korean | Polygonum | Thunb | Rehmannia glutinosa | Gaertn) Libosch | Polygala | Willd | Acorus | Soland | Mirror of Eastern Medicine | the early 17th century | Korea ||| four ||| ||| ROS ||| ||| HPLC | 3.09% ||| 0.35% | 3',6 | 0.79% ||| ||| ROS ||| NAC ||| CREB | NAC ||| NAC | BAPTA ||| CREB ||| ROS | CREB [SUMMARY]
[CONTENT] Four | Korean | Polygonum | Thunb | Rehmannia glutinosa | Gaertn) Libosch | Polygala | Willd | Acorus | Soland | Mirror of Eastern Medicine | the early 17th century | Korea ||| four ||| ||| ROS ||| ||| HPLC | 3.09% ||| 0.35% | 3',6 | 0.79% ||| ||| ROS ||| NAC ||| CREB | NAC ||| NAC | BAPTA ||| CREB ||| ROS | CREB [SUMMARY]
Additional Drug Resistance in Patients with Multidrug-resistant Tuberculosis in Korea: a Multicenter Study from 2010 to 2019.
34227261
Drug-resistance surveillance (DRS) data provide key information for building an effective treatment regimen in patients with multidrug-resistant tuberculosis (MDR-TB). This study was conducted to investigate the patterns and trends of additional drug resistance in MDR-TB patients in South Korea.
BACKGROUND
Phenotypic drug susceptibility test (DST) results of MDR-TB patients collected from seven hospitals in South Korea from 2010 to 2019 were retrospectively analyzed.
METHODS
In total, 633 patients with MDR-TB were included in the analysis. Of all patients, 361 (57.0%) were new patients. All patients had additional resistance to a median of three anti-TB drugs. The resistance rates of any fluoroquinolone (FQ), linezolid, and cycloserine were 26.2%, 0.0%, and 6.3%, respectively. The proportions of new patients and resistance rates of most anti-TB drugs did not decrease during the study period. The number of additional resistant drugs was significantly higher in FQ-resistant MDR-TB than in FQ-susceptible MDR-TB (median of 9.0 vs. 2.0). Among 26 patients with results of minimum inhibitory concentrations for bedaquiline (BDQ) and delamanid (DLM), one (3.8%) and three (11.5%) patients were considered resistant to BDQ and DLM with interim critical concentrations, respectively. Based on the DST results, 72.4% and 24.8% of patients were eligible for the World Health Organization's longer and shorter MDR-TB treatment regimen, respectively.
RESULTS
The proportions of new patients and rates of additional drug resistance in patients with MDR-TB were high and remain stable in South Korea. A nationwide analysis of DRS data is required to provide effective treatment for MDR-TB patients in South Korea.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Antitubercular Agents", "Child", "Child, Preschool", "Diarylquinolines", "Drug Resistance, Multiple, Bacterial", "Female", "Fluoroquinolones", "Humans", "Infant", "Infant, Newborn", "Male", "Microbial Sensitivity Tests", "Middle Aged", "Mycobacterium tuberculosis", "Republic of Korea", "Retrospective Studies", "Tuberculosis", "Tuberculosis, Multidrug-Resistant", "Young Adult" ]
8258238
INTRODUCTION
Multidrug-resistant tuberculosis (MDR-TB) remains a major public health concern. An estimated 465,000 incident cases of MDR- or rifampicin-resistant (RR)-TB were recorded globally in 2019 and their burden remains stable.1 Treatment outcomes of MDR/RR-TB are still suboptimal; only 57% of MDR/RR-TB patients successfully completed treatment in 2017.1 Treating patients with MDR-TB is challenging due to the less effective and more toxic anti-TB drugs, long duration of therapy, and suboptimal patient adherence.2 Drug susceptibility test (DST) results of MDR-TB patients provide key information for choosing and designing the treatment regimen. Inappropriate regimens not only amplify drug resistance and reduce treatment success of patients, but also increase the risk of transmission of pathogens in the community.34 However, in the absence of individual DST results, recent representative anti-TB drug-resistance surveillance (DRS) data in a province or country are important to guide selection of the regimen.5 Also, as the proportion of MDR/RR-TB patients starting treatment with limited information about the resistance of molecular DST (e.g., Xpert MTB/RIF assay or line probe assay) increases, DRS data are becoming more important for selecting an empirical regimen before full DST results are obtained. Several nationwide South Korean anti-TB DRSs have been employed for TB patients in the public sector between 1994 and 2004.6 Additionally, several hospitals and laboratories from the private sector have reported phenotypic DST results of TB patients.789 Although previous studies have helped clarify the status and trends of MDR-TB in South Korea, detailed information on additional drug resistance in patients with MDR-TB is lacking. The authors of a previous study reported the patterns and trends of additional drug resistance in patients with MDR-TB in South Korea using phenotypic DST results collected from seven hospitals from 2010 to 2014.10 Since then, many changes have been made in the diagnosis and treatment of MDR-TB in South Korea: universal use of molecular DSTs, the introduction and increased use of new and re-purposed anti-TB drugs such as bedaquiline (BDQ), delamanid (DLM), linezolid (LZD) and clofazimine (CFZ), and changes in the MDR-TB treatment guidelines. These changes may have affected the status of drug resistance in MDR-TB patients. Therefore, the present study was conducted to update the patterns and trends of additional drug resistance in patients with MDR-TB in South Korea after 2014.
METHODS
Study design and data collection
This retrospective study was conducted at seven university-affiliated tertiary care hospitals in Busan, Ulsan, and Gyeongsangnam-do Provinces of South Korea. The hospitals that participated in this study were the same as in a previous study.10 Patients who were diagnosed with MDR-TB based on the phenotypic DST results between January 2010 and December 2019 were included in the study. Patients with unknown previous TB treatment histories and previously treated patients with unknown regimens or outcomes of their most recent courses of TB treatment were excluded. Duplicate inter-hospital patients were also excluded by checking each patient's initials and birth date. If a patient had more than one DST result, the earlier result was used. If a patient had DST results from both pulmonary and extra-pulmonary specimens, the results from the pulmonary specimen were selected. Age, sex, history of TB treatment, type of specimen, date of specimen collection, and results of the phenotypic DST were collected from the medical records.

DST
Six hospitals sent Mycobacterium tuberculosis isolates for the phenotypic DST to the Korean Institute of Tuberculosis, which is a supranational reference TB laboratory, and one hospital sent isolates to the Green Cross Laboratory, a commercial reference laboratory. The workflow used and the references for the critical concentrations did not differ between the laboratories. The DST was performed using the absolute concentration method and Lowenstein-Jensen medium. The drugs and their critical concentrations for resistance were as follows: isoniazid (INH), 0.2 and 1.0 μg/mL; rifampin (RIF), 40 μg/mL; ethambutol (EMB), 2.0 μg/mL; rifabutin, 20 μg/mL; ofloxacin (OFX), 2.0 μg/mL; levofloxacin (LFX), 2.0 μg/mL; moxifloxacin (MFX), 2.0 μg/mL; streptomycin (SM), 10 μg/mL; amikacin (AMK), 40 μg/mL; kanamycin (KM), 40 μg/mL; capreomycin (CM), 40 μg/mL; prothionamide (PTO), 40 μg/mL; cycloserine (CS), 30 μg/mL; p-aminosalicylic acid (PAS), 1.0 μg/mL; and LZD, 2.0 μg/mL. Pyrazinamide (PZA) susceptibility was determined using the pyrazinamidase test. Critical concentrations for resistance of AMK, KM, OFX, and MFX were changed during the study period as follows: AMK and KM, 30 μg/mL in January 2014; OFX, 4.0 μg/mL in January 2016; and MFX, 1.0 μg/mL in August 2018. Tests for a higher concentration of INH (1.0 μg/mL as the critical concentration) and for LZD have been available since late 2016.
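The critical-concentration list above is essentially a lookup table, and a minimal sketch of how it might be encoded and applied is given below. This is an illustrative simplification under stated assumptions: it uses only the low INH concentration, ignores the mid-study changes to the AMK, KM, OFX, and MFX cut-offs, and the isolate readings in the example are hypothetical; laboratories additionally apply colony-count criteria when reading the absolute concentration method.

```python
# Critical concentrations (ug/mL) on Lowenstein-Jensen medium, mirroring the text.
CRITICAL_CONCENTRATIONS_UG_ML = {
    "isoniazid": 0.2, "rifampin": 40.0, "ethambutol": 2.0, "rifabutin": 20.0,
    "ofloxacin": 2.0, "levofloxacin": 2.0, "moxifloxacin": 2.0,
    "streptomycin": 10.0, "amikacin": 40.0, "kanamycin": 40.0,
    "capreomycin": 40.0, "prothionamide": 40.0, "cycloserine": 30.0,
    "p-aminosalicylic acid": 1.0, "linezolid": 2.0,
}

def classify(drug, growth_at_critical_concentration):
    """Simplified reading of the absolute concentration method: growth on medium
    containing the critical concentration is interpreted as resistance."""
    if drug not in CRITICAL_CONCENTRATIONS_UG_ML:
        raise ValueError(f"no critical concentration defined for {drug}")
    return "resistant" if growth_at_critical_concentration else "susceptible"

# Hypothetical isolate growing on INH- and RIF-containing media (i.e., MDR-TB)
# but not on levofloxacin-containing medium.
print(classify("isoniazid", True), classify("rifampin", True), classify("levofloxacin", False))
```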
Definitions
MDR-TB was defined as TB that was resistant to at least INH and RIF. Patients were classified based on their history of TB treatment as follows: new patients, who had never been treated for TB or had taken anti-TB drugs for < 1 month; and previously treated patients, who had received 1 month or more of anti-TB drugs in the past.11 Previously treated patients were further classified by the outcomes of their most recent treatment course, such as relapse, treatment after failure, and treatment after loss to follow-up.11 The first-line anti-TB drugs were INH, RIF, EMB, and PZA. SM was considered a first-line drug if it was used to treat drug-susceptible TB; otherwise, it was regarded as a second-line drug. Drug use was defined as the use of an anti-TB drug for 1 month or more in the past.
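The case definitions above translate directly into simple rules over a patient's resistance profile and treatment history. The following is a minimal sketch under those definitions; the field names (resistant_drugs, months_of_prior_treatment, drugs_used) are hypothetical and do not come from the study's data dictionary.

```python
FIRST_LINE = {"INH", "RIF", "EMB", "PZA"}  # SM counts as first-line only when used for drug-susceptible TB

def is_mdr(resistant_drugs: set[str]) -> bool:
    """MDR-TB: resistant to at least both isoniazid and rifampin."""
    return {"INH", "RIF"} <= resistant_drugs

def treatment_history_group(months_of_prior_treatment: float) -> str:
    """'new' = never treated or < 1 month of anti-TB drugs; otherwise previously treated."""
    return "new" if months_of_prior_treatment < 1 else "previously treated"

def prior_regimen_class(drugs_used: set[str], sm_used_for_susceptible_tb: bool = False) -> str:
    """Classify the most recent prior regimen as first-line only or containing second-line drugs."""
    effective_first_line = FIRST_LINE | ({"SM"} if sm_used_for_susceptible_tb else set())
    return "first-line only" if drugs_used <= effective_first_line else "includes second-line"

# Examples
assert is_mdr({"INH", "RIF", "EMB"})
assert treatment_history_group(0) == "new"
assert prior_regimen_class({"INH", "RIF", "EMB", "PZA"}) == "first-line only"
```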
Statistical analysis
Data are presented as medians with interquartile ranges (IQRs) for continuous variables and as numbers with percentages for categorical variables. Continuous variables were compared using the Mann-Whitney U test or Kruskal-Wallis test, and categorical variables were compared using Pearson's χ2 test or Fisher's exact test. The χ2 test for trend was performed to evaluate annual trends in drug resistance and in the proportion of patients eligible for each treatment regimen. All tests were two-tailed, and a P value < 0.05 was considered significant. Statistical analysis was performed using SPSS Statistics, version 22.0 software (SPSS Inc., Chicago, IL, USA).

Ethics statement
The study protocol was reviewed and approved by the Institutional Review Board of Pusan National University Hospital (approval No. H-2005-009-090). The need for informed consent was waived given the observational retrospective nature of the study. The study had no effect on the diagnosis or treatment of patients. Patient information was anonymized and de-identified before analysis.
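The analyses in the Statistical analysis subsection above were run in SPSS. For readers who want to reproduce the same classes of tests with open-source tools, the sketch below shows SciPy equivalents on hypothetical toy data, with a hand-rolled Cochran-Armitage statistic as one common implementation of a χ2 test for trend; the numbers are illustrative only, not study data.

```python
import numpy as np
from scipy import stats

# --- Hypothetical toy data (not study data) ---
ages_group_a = [34, 45, 52, 61, 48]      # e.g., new patients
ages_group_b = [55, 63, 41, 70, 58, 66]  # e.g., previously treated patients

# Continuous variable, two groups: Mann-Whitney U test (use stats.kruskal for >2 groups)
u_stat, p_mwu = stats.mannwhitneyu(ages_group_a, ages_group_b, alternative="two-sided")

# Categorical variable: Pearson chi-square on a 2x2 table; Fisher's exact test
# is preferred when expected counts are small.
table = np.array([[30, 70], [45, 55]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

# Chi-square test for trend (Cochran-Armitage), e.g., annual resistance proportions
def cochran_armitage(successes, totals, scores=None):
    r = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    t = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    p_bar = r.sum() / n.sum()
    num = np.sum(t * (r - n * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(n * t**2) - np.sum(n * t) ** 2 / n.sum())
    z = num / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))  # two-sided P value

z_trend, p_trend = cochran_armitage(successes=[8, 10, 13, 15, 18], totals=[60, 62, 58, 61, 63])
print(f"Mann-Whitney P={p_mwu:.3f}, chi-square P={p_chi2:.3f}, "
      f"Fisher P={p_fisher:.3f}, trend P={p_trend:.3f}")
```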
RESULTS
Patient characteristics
After the criteria defined above were applied, a total of 10,557 culture-confirmed TB patients were screened during the study period. Of them, 633 (6.0%) had MDR-TB and were included in the final analysis. Table 1 lists the baseline characteristics of all MDR-TB patients. Their median age was 50.0 years (IQR, 36.0–63.0) and 64.6% were male. None of the patients had human immunodeficiency virus infection. The age groups of 40–49 and 50–59 years accounted for the largest proportions of patients. With regard to TB treatment history, 57.0% of all patients were new patients, and this proportion did not change during the study period. Among 272 previously treated patients, 212 (77.9%) had been treated with first-line anti-TB drugs and 60 (22.1%) with second-line anti-TB drugs. There were significant differences in the outcomes of the most recent treatment course between patients previously treated with first-line drugs and those previously treated with second-line drugs (P < 0.001): relapse (84.9% vs. 36.7%), treatment after failure (6.6% vs. 33.3%), and treatment after loss to follow-up (8.5% vs. 30.0%).
In Table 1, continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).

Pattern and trend of additional drug resistance
Table 2 lists the rates of additional drug resistance among all MDR-TB patients. Patients had additional resistance to a median of three drugs (IQR, 2.0–7.0), excluding INH, RIF, the higher concentration of INH, and LZD. The resistance rates of the drugs in groups A and B of the World Health Organization (WHO) guidelines were as follows12: any fluoroquinolone ([FQ]; OFX, LFX, or MFX), 26.2%; LZD, 0.0%; and CS, 6.3%. The resistance rates for drugs in group C were: EMB, 63.3%; PZA, 41.4%; SM, 34.6%; PAS, 26.9%; PTO, 16.9%; and AMK, 14.5%. Seventy-two patients (11.4%) had extensively drug-resistant (XDR)-TB based on the previous definition: TB resistant to any FQ and to at least one of three second-line injectable drugs (CM, KM, or AMK), in addition to MDR. The resistance rates of PZA, FQ, KM, and PTO were higher in previously treated patients than in new patients.
During the 10-year study period, the resistance rate of PZA tended to increase, whereas that of PAS tended to decrease. No significant changes in resistance trends were observed for any of the other drugs (Table 3).
In Tables 2 and 3, continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%). Abbreviations: RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, INH = isoniazid, LZD = linezolid, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis, NA = not applicable. Table 2 footnotes: aComparison of new patients, patients previously treated with first-line drugs, and patients previously treated with second-line drugs; bHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); cNumber of resistant patients/total tested patients (%); dMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin); eExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid. Table 3 footnote: aMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin).
Table 4 compares additional drug resistance rates between FQ-susceptible and FQ-resistant MDR-TB. All drugs except SM, the higher concentration of INH, and LZD showed higher rates of resistance in patients with FQ-resistant MDR-TB. The number of additional resistant drugs (excluding INH, RIF, the higher concentration of INH, and LZD) was significantly higher in FQ-resistant MDR-TB than in FQ-susceptible MDR-TB (median of 9.0 vs. 2.0).
In Table 4, continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%); abbreviations are as above, with FQs = fluoroquinolone-susceptible. Table 4 footnotes: aMultidrug-resistant tuberculosis susceptible to all fluoroquinolones (ofloxacin, levofloxacin, and moxifloxacin); bMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin); cComparison of patients with fluoroquinolone-susceptible and fluoroquinolone-resistant multidrug-resistant tuberculosis; dHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); eNumber of resistant patients/total tested patients (%); fExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid.
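The per-drug resistance rates and the (pre-2021) XDR-TB flag summarized above follow mechanically from a patient-by-drug susceptibility table. The pandas sketch below illustrates the idea on a small hypothetical data frame in which each drug column is True when the isolate is resistant; the column names are assumptions, not the study database schema.

```python
import pandas as pd

# Hypothetical patient-by-drug table: True = resistant on phenotypic DST
df = pd.DataFrame({
    "INH": [True, True, True],
    "RIF": [True, True, True],
    "OFX": [False, True, True],
    "LFX": [False, True, False],
    "MFX": [False, True, False],
    "AMK": [False, True, False],
    "KM":  [False, False, False],
    "CM":  [False, True, False],
    "PZA": [True, False, True],
})

FQ = ["OFX", "LFX", "MFX"]
SECOND_LINE_INJECTABLES = ["AMK", "KM", "CM"]

mdr = df["INH"] & df["RIF"]                              # all rows here are MDR by construction
fq_resistant = df[FQ].any(axis=1)                        # resistant to any fluoroquinolone
sli_resistant = df[SECOND_LINE_INJECTABLES].any(axis=1)  # resistant to any second-line injectable

# Pre-2021 XDR definition: MDR + any-FQ resistance + any second-line injectable resistance
xdr = mdr & fq_resistant & sli_resistant

# Per-drug resistance rate among MDR-TB patients (denominator = MDR patients tested)
resistance_rates = df.loc[mdr].mean().mul(100).round(1)
print(resistance_rates)
print(f"XDR-TB: {xdr.sum()}/{mdr.sum()} ({100 * xdr.sum() / mdr.sum():.1f}%)")
```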
MICs of BDQ and DLM
Twenty-six patients had MIC results for BDQ and DLM. Fig. 1 presents the overall MIC distributions. When the interim critical concentration of 0.25 mg/L was applied for BDQ resistance,1314 one patient (3.8%) was resistant to BDQ. When the interim critical concentration of 0.016 mg/L was applied for DLM resistance,13 three patients (11.5%) were resistant to DLM; the results remained unchanged if 0.06 or 0.2 mg/L was applied as the critical concentration for DLM resistance.1516 The BDQ-resistant patient had a history of treatment failure with a BDQ- and CFZ-containing regimen. Among the three patients resistant to DLM, one had never been treated for TB, and the remaining two had histories of TB treatment with first-line anti-TB drugs only.
In Fig. 1, MIC = minimum inhibitory concentration.
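Classifying BDQ and DLM results reduces to comparing the measured MIC from the two-fold dilution series against an interim critical concentration. The sketch below applies the cut-offs cited above (0.25 mg/L for BDQ, 0.016 mg/L for DLM) to hypothetical MIC values; it is not the reference laboratory's procedure, and the exact comparison convention (MIC above the cut-off called resistant) is an assumption made here.

```python
# Interim critical concentrations (mg/L) used above for interpreting broth-based MICs
INTERIM_CC = {"BDQ": 0.25, "DLM": 0.016}

def classify_mic(drug: str, mic_mg_per_l: float) -> str:
    """Call an isolate resistant when its MIC exceeds the interim critical concentration
    (comparison convention assumed for illustration)."""
    return "resistant" if mic_mg_per_l > INTERIM_CC[drug] else "susceptible"

# Two-fold dilution series actually prepared for BDQ (from the Methods): 0.03125-4.0 mg/L
bdq_dilutions = [0.03125 * 2**i for i in range(8)]  # 0.03125, 0.0625, ..., 4.0

# Hypothetical readings
print(classify_mic("BDQ", 0.125))  # susceptible
print(classify_mic("BDQ", 0.5))    # resistant
print(classify_mic("DLM", 0.008))  # susceptible
```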
Eligibility for the treatment regimen
Of all patients, 458 (72.4%) were susceptible to all FQs and CS. Assuming that these patients were also susceptible to BDQ, LZD, and CFZ, they would be eligible for the WHO's longer treatment regimen without drug modification.12 In total, 447 patients (70.6%) were eligible for the WHO's shorter all-oral BDQ-containing regimen after applying the inclusion criterion of “MDR-TB patients who have not been exposed to treatment with second-line drugs for more than 1 month, and in whom resistance to FQs has been excluded.”12 However, if the strict inclusion criterion, “MDR-TB patients without resistance to a medicine in the shorter regimen,” was applied,12 only 157 patients (24.8%) were fully susceptible to all FQs, PTO, EMB, and PZA and thus eligible for the shorter regimen (even if all patients were assumed to be susceptible to both BDQ and CFZ). Fig. 2 presents the annual trends in the proportions of patients eligible for the longer and shorter (with strict inclusion criteria) regimens, respectively. These proportions did not change during the study period.
Fig. 2 footnotes: aPatients susceptible to all fluoroquinolones and cycloserine (assuming all patients are also susceptible to bedaquiline, linezolid, and clofazimine); bPatients fully susceptible to all fluoroquinolones, prothionamide, ethambutol, and pyrazinamide (assuming all patients are also susceptible to bedaquiline and clofazimine).
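The two eligibility tallies above follow directly from each patient's susceptibility profile, under the same assumptions stated in the text (susceptibility to BDQ, LZD, and CFZ assumed when untested). The functions below sketch that logic; they are an illustration of the criteria as summarized here, not an official implementation of the WHO algorithm.

```python
FQ = {"OFX", "LFX", "MFX"}

def eligible_longer_regimen(resistant_drugs: set[str]) -> bool:
    """WHO longer regimen without modification: no resistance to any FQ or to CS
    (BDQ, LZD, and CFZ susceptibility assumed when not tested)."""
    return not (resistant_drugs & FQ) and "CS" not in resistant_drugs

def eligible_shorter_regimen_strict(resistant_drugs: set[str]) -> bool:
    """Shorter all-oral BDQ-containing regimen, strict criterion: no resistance to any
    phenotypically tested medicine in the regimen (FQ, PTO, EMB, PZA);
    BDQ and CFZ susceptibility assumed."""
    return not (resistant_drugs & (FQ | {"PTO", "EMB", "PZA"}))

# Example: an MDR isolate with additional EMB and PZA resistance only
profile = {"INH", "RIF", "EMB", "PZA"}
print(eligible_longer_regimen(profile))          # True
print(eligible_shorter_regimen_strict(profile))  # False
```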
null
null
[ "Study design and data collection", "DST", "Definitions", "Statistical analysis", "Ethics statement", "Patient characteristics", "Pattern and trend of additional drug resistance", "MICs of BDQ and DLM", "Eligibility for the treatment regimen" ]
[ "This retrospective study was conducted at seven university-affiliated tertiary care hospitals in Busan, Ulsan, and Gyeongsangnam-do Provinces of South Korea. The hospitals that participated in this study were the same as in a previous study.10 Patients who were diagnosed with MDR-TB based on the phenotypic DST results between January 2010 and December 2019 were included in the study. Patients with unknown previous TB treatment histories and previously treated patients with unknown regimens or outcomes of their most recent courses of TB treatment were excluded. Duplicate inter-hospital patients were also excluded by checking each patient's initials and birth date. If a patient had more than one DST result, the earlier result was used. If a patient had DST results from both pulmonary and extra-pulmonary specimens, results from the pulmonary specimen were selected. Age, sex, history of TB treatment, type of specimen, date of specimen collection, and results of the phenotypic DST were collected from the medical records.", "Six hospitals sent Mycobacterium tuberculosis isolates for the phenotypic DST to the Korean Institute of Tuberculosis, which is a supranational reference TB laboratory, and one hospital sent isolates to the Green Cross Laboratory, a commercial reference laboratory. The workflow used and the references of the critical concentrations did not differ between the laboratories.\nThe DST was performed using the absolute concentration method and Lowenstein-Jensen medium. The drugs and their critical concentrations for resistance were as follows: isoniazid (INH), 0.2 and 1.0 μg/mL; rifampin (RIF), 40 μg/mL; ethambutol (EMB), 2.0 μg/mL; rifabutin, 20 μg/mL; ofloxacin (OFX), 2.0 μg/mL; levofloxacin (LFX), 2.0 μg/mL; moxifloxacin (MFX), 2.0 μg/mL; streptomycin (SM), 10 μg/mL; amikacin (AMK), 40 μg/mL; kanamycin (KM), 40 μg/mL; capreomycin (CM), 40 μg/mL; prothionamide (PTO), 40 μg/mL; cycloserine (CS), 30 μg/mL; p-aminosalicylic acid (PAS), 1.0 μg/mL; and LZD, 2.0 μg/mL. Pyrazinamide (PZA) susceptibility was determined using the pyrazinamidase test. Critical concentrations for resistance of AMK, KM, OFX, and MFX were changed during the study periods as follows: AMK and KM, 30 μg/mL in January 2014; OFX, 4.0 μg/mL in January 2016; and MFX, 1.0 μg/mL in August 2018. Tests for higher concentrations of INH (1.0 μg/mL as the critical concentration) and LZD have been available since late 2016.\nMinimum inhibitory concentrations (MICs) for BDQ and DLM were measured using 7H9 broth medium and the serial two-fold dilution method. The ranges of the concentrations prepared for BDQ and DLM were 0.03125–4.0 mg/L, and 0.00625–0.8 mg/L, respectively. Tests for the MICs of BDQ and DLM have been available since mid-2018.", "MDR-TB was defined as TB that was resistant to at least INH and RIF. Patients were classified based on a history of TB treatment as follows: new patients, have never been treated for TB or have taken anti-TB drugs for < 1 month; previously treated patients, received 1 month or more of anti-TB drugs in the past.11 Previously treated patients were further classified by the outcomes of their most recent treatment course, such as relapse, treatment after failure, and treatment after loss to follow-up.11 The first-line anti-TB drugs were INH, RIF, EMB, and PZA. SM was considered the first-line drug if it was used to treat drug-susceptible TB; otherwise, it was regarded as a second-line drug. 
Drug use was defined as the use of an anti-TB drug for 1 month or more in the past.", "Data are presented as medians with interquartile ranges (IQRs) for continuous variables, and as numbers with percentages for categorical variables. Continuous variables were compared using the Mann-Whitney U test or Kruskal-Wallis test, and categorical variables were compared using the Pearson's χ2 test or Fisher's exact test. The χ2 test for trends was performed to evaluate annual trends in drug resistance and the proportion of patients eligible for the treatment regimen. All tests were two-tailed, and a P value < 0.05 was considered significant. Statistical analysis was performed using SPSS Statistics, version 22.0 software (SPSS Inc., Chicago, IL, USA).", "The study protocol was reviewed and approved by the Institutional Review Board of Pusan National University Hospital (approval No. H-2005-009-090). The need for informed consent was waived given the observational retrospective nature of the study. Our study had no effect on the diagnosis or treatment of patients. Patient information was anonymized and de-identified before analysis.", "After the criteria defined above were applied, a total of 10,557 culture-confirmed TB patients were screened during the study period. Of them, 633 (6.0%) had MDR-TB and were included in the final analysis. Table 1 lists the baseline characteristics of all MDR-TB patients. Their median age was 50.0 years (IQR, 36.0–63.0) and 64.6% were male. None of the patients had a human immunodeficiency virus infection. The age groups of 40–49 and 50–59 years accounted for the largest proportion of patients. With regard to TB treatment history, 57.0% of all patients were new patients. The proportion of new patients among total patients did not change during the study period. Among 272 previously treated patients, 212 (77.9%) had been treated with first-line anti-TB drugs, and 60 (22.1%) with second-line anti-TB drugs. There were significant differences in the proportions of outcomes of the most recent treatment course between the patients previously treated with first-line drugs and those previously treated with second-line drugs (P < 0.001): relapse (84.9% vs. 36.7%); treatment after failure (6.6% vs. 33.3%); and treatment after loss to follow-up (8.5% vs. 30.0%).\nContinuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).", "Table 2 lists the rates of additional drug resistance among all MDR-TB patients. All patients had additional resistance to a median of three drugs (IQR, 2.0–7.0) (excluding INH, RIF, a higher concentration of INH, and LZD). The resistance rates of the drugs in groups A and B according to the World Health Organization (WHO) guidelines were as follows12: any fluoroquinolone ([FQ]; OFX, LFX, or MFX; 26.2%), LZD (0.0%), and CS (6.3%). The resistance rates for drugs in group C were: EMB (63.3%), PZA (41.4%), SM (34.6%), PAS (26.9%), PTO (16.9%), and AMK (14.5%). Seventy-two patients (11.4%) had extensively drug-resistant (XDR)-TB (based on a previous definition: TB resistant to any FQ and at least one of three second-line injectable drugs [CM, KM, or AMK], in addition to MDR). The resistance rates of PZA, FQ, KM, and PTO were higher in previously treated patients than in new patients. During the 10-year study period, the resistance rate of PZA tended to increase, whereas that of PAS tended to decrease. 
However, no significant changes in the trends in resistance were observed to any of the other drugs (Table 3).\nContinuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).\nRFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, INH = isoniazid, LZD = linezolid, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis, NA = not applicable.\naComparison of new patients, patients previously treated with first-line drugs and patients previously treated with second-line drugs; bHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); cNumber of resistant patients/total tested patients (%); dMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin or moxifloxacin); eExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid.\nRFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis.\naMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin).\nTable 4 compares additional drug resistance rates between FQ-susceptible and FQ-resistant MDR-TB. All drugs except SM, a higher concentration of INH, and LZD had higher rates of resistance in patients with FQ-resistant MDR-TB. The number of additional resistant drugs (excluding INH, RIF, a higher concentration of INH, and LZD) was significantly higher in FQ-resistant MDR-TB than in FQ-susceptible MDR-TB (median of 9.0 vs. 2.0).\nContinuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).\nRFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, INH = isoniazid, LZD = linezolid, FQs = fluoroquinolone-susceptible, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis, NA = not applicable.\naMultidrug-resistant tuberculosis susceptible to all fluoroquinolones (ofloxacin, levofloxacin, and moxifloxacin); bMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin or moxifloxacin); cComparison of patients with fluoroquinolone susceptible and resistant multidrug-resistant tuberculosis; dHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); eNumber of resistant patients/total tested patients (%); fExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid.", "Twenty-six patients had MIC results for BDQ and DLM. Fig. 1 presents the overall MIC distribution of the patients. When an interim critical concentration of 0.25 mg/L was applied for BDQ resistance,1314 one patient (3.8%) was resistant to BDQ. 
When the interim critical concentration of 0.016 mg/L was applied for DLM resistance,13 three patients (11.5%) were resistant to DLM; the results remained unchanged if 0.06 or 0.2 mg/L was applied as the critical concentration of DLM resistance.1516 One BDQ-resistant patient had a history of treatment failure with a BDQ- and CFZ-containing regimen. Among the three patients resistant to DLM, one had never been treated for TB in the past, and the remaining two patients had histories of TB treatments with first-line anti-TB drugs only.\nMIC = minimum inhibitory concentration", "Of all patients, 458 (72.4%) were susceptible to all FQs and CS. Assuming that these patients were also susceptible to BDQ, LZD, and CFZ, they would be eligible for WHO's longer treatment regimen without drug modification.12 In total, 447 patients (70.6%) were eligible for the WHO's shorter all-oral BDQ-containing regimen after applying the inclusion criteria of “MDR-TB patients who have not been exposed to treatment with second-line drugs for more than 1 month, and in whom resistance to FQs has been excluded.”12 However, if the strict inclusion criteria, “MDR-TB patients without resistance to a medicine in the shorter regimen” were applied,12 only 157 patients (24.8%) were fully susceptible to all FQs, PTO, EMB, and PZA, and thus were eligible for the shorter regimen (even if all patients were assumed to be susceptible to both BDQ and CFZ). Fig. 2 presents the annual trends in the proportion of patients eligible for the longer and shorter (with strict inclusion criteria) regimens, respectively. The proportion did not change during the study period.\naPatients susceptible to all fluoroquinolones and cycloserine (assuming all patients are also susceptible to bedaquiline, linezolid, and clofazimine); bPatients fully susceptible to all fluoroquinolones, prothionamide, ethambutol, and pyrazinamide (assuming all patients are also susceptible to bedaquiline and clofazimine)." ]
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study design and data collection", "DST", "Definitions", "Statistical analysis", "Ethics statement", "RESULTS", "Patient characteristics", "Pattern and trend of additional drug resistance", "MICs of BDQ and DLM", "Eligibility for the treatment regimen", "DISCUSSION" ]
[ "Multidrug-resistant tuberculosis (MDR-TB) remains a major public health concern. An estimated 465,000 incident cases of MDR- or rifampicin-resistant (RR)-TB were recorded globally in 2019 and their burden remains stable.1 Treatment outcomes of MDR/RR-TB are still suboptimal; only 57% of MDR/RR-TB patients successfully completed treatment in 2017.1 Treating patients with MDR-TB is challenging due to the less effective and more toxic anti-TB drugs, long duration of therapy, and suboptimal patient adherence.2\nDrug susceptibility test (DST) results of MDR-TB patients provide key information for choosing and designing the treatment regimen. Inappropriate regimens not only amplify drug resistance and reduce treatment success of patients, but also increase the risk of transmission of pathogens in the community.34 However, in the absence of individual DST results, recent representative anti-TB drug-resistance surveillance (DRS) data in a province or country are important to guide selection of the regimen.5 Also, as the proportion of MDR/RR-TB patients starting treatment with limited information about the resistance of molecular DST (e.g., Xpert MTB/RIF assay or line probe assay) increases, DRS data are becoming more important for selecting an empirical regimen before full DST results are obtained.\nSeveral nationwide South Korean anti-TB DRSs have been employed for TB patients in the public sector between 1994 and 2004.6 Additionally, several hospitals and laboratories from the private sector have reported phenotypic DST results of TB patients.789 Although previous studies have helped clarify the status and trends of MDR-TB in South Korea, detailed information on additional drug resistance in patients with MDR-TB is lacking.\nThe authors of a previous study reported the patterns and trends of additional drug resistance in patients with MDR-TB in South Korea using phenotypic DST results collected from seven hospitals from 2010 to 2014.10 Since then, many changes have been made in the diagnosis and treatment of MDR-TB in South Korea: universal use of molecular DSTs, the introduction and increased use of new and re-purposed anti-TB drugs such as bedaquiline (BDQ), delamanid (DLM), linezolid (LZD) and clofazimine (CFZ), and changes in the MDR-TB treatment guidelines. These changes may have affected the status of drug resistance in MDR-TB patients. Therefore, the present study was conducted to update the patterns and trends of additional drug resistance in patients with MDR-TB in South Korea after 2014.", "Study design and data collection This retrospective study was conducted at seven university-affiliated tertiary care hospitals in Busan, Ulsan, and Gyeongsangnam-do Provinces of South Korea. The hospitals that participated in this study were the same as in a previous study.10 Patients who were diagnosed with MDR-TB based on the phenotypic DST results between January 2010 and December 2019 were included in the study. Patients with unknown previous TB treatment histories and previously treated patients with unknown regimens or outcomes of their most recent courses of TB treatment were excluded. Duplicate inter-hospital patients were also excluded by checking each patient's initials and birth date. If a patient had more than one DST result, the earlier result was used. If a patient had DST results from both pulmonary and extra-pulmonary specimens, results from the pulmonary specimen were selected. 
Age, sex, history of TB treatment, type of specimen, date of specimen collection, and results of the phenotypic DST were collected from the medical records.\nThis retrospective study was conducted at seven university-affiliated tertiary care hospitals in Busan, Ulsan, and Gyeongsangnam-do Provinces of South Korea. The hospitals that participated in this study were the same as in a previous study.10 Patients who were diagnosed with MDR-TB based on the phenotypic DST results between January 2010 and December 2019 were included in the study. Patients with unknown previous TB treatment histories and previously treated patients with unknown regimens or outcomes of their most recent courses of TB treatment were excluded. Duplicate inter-hospital patients were also excluded by checking each patient's initials and birth date. If a patient had more than one DST result, the earlier result was used. If a patient had DST results from both pulmonary and extra-pulmonary specimens, results from the pulmonary specimen were selected. Age, sex, history of TB treatment, type of specimen, date of specimen collection, and results of the phenotypic DST were collected from the medical records.\nDST Six hospitals sent Mycobacterium tuberculosis isolates for the phenotypic DST to the Korean Institute of Tuberculosis, which is a supranational reference TB laboratory, and one hospital sent isolates to the Green Cross Laboratory, a commercial reference laboratory. The workflow used and the references of the critical concentrations did not differ between the laboratories.\nThe DST was performed using the absolute concentration method and Lowenstein-Jensen medium. The drugs and their critical concentrations for resistance were as follows: isoniazid (INH), 0.2 and 1.0 μg/mL; rifampin (RIF), 40 μg/mL; ethambutol (EMB), 2.0 μg/mL; rifabutin, 20 μg/mL; ofloxacin (OFX), 2.0 μg/mL; levofloxacin (LFX), 2.0 μg/mL; moxifloxacin (MFX), 2.0 μg/mL; streptomycin (SM), 10 μg/mL; amikacin (AMK), 40 μg/mL; kanamycin (KM), 40 μg/mL; capreomycin (CM), 40 μg/mL; prothionamide (PTO), 40 μg/mL; cycloserine (CS), 30 μg/mL; p-aminosalicylic acid (PAS), 1.0 μg/mL; and LZD, 2.0 μg/mL. Pyrazinamide (PZA) susceptibility was determined using the pyrazinamidase test. Critical concentrations for resistance of AMK, KM, OFX, and MFX were changed during the study periods as follows: AMK and KM, 30 μg/mL in January 2014; OFX, 4.0 μg/mL in January 2016; and MFX, 1.0 μg/mL in August 2018. Tests for higher concentrations of INH (1.0 μg/mL as the critical concentration) and LZD have been available since late 2016.\nMinimum inhibitory concentrations (MICs) for BDQ and DLM were measured using 7H9 broth medium and the serial two-fold dilution method. The ranges of the concentrations prepared for BDQ and DLM were 0.03125–4.0 mg/L, and 0.00625–0.8 mg/L, respectively. Tests for the MICs of BDQ and DLM have been available since mid-2018.\nSix hospitals sent Mycobacterium tuberculosis isolates for the phenotypic DST to the Korean Institute of Tuberculosis, which is a supranational reference TB laboratory, and one hospital sent isolates to the Green Cross Laboratory, a commercial reference laboratory. The workflow used and the references of the critical concentrations did not differ between the laboratories.\nThe DST was performed using the absolute concentration method and Lowenstein-Jensen medium. 
The drugs and their critical concentrations for resistance were as follows: isoniazid (INH), 0.2 and 1.0 μg/mL; rifampin (RIF), 40 μg/mL; ethambutol (EMB), 2.0 μg/mL; rifabutin, 20 μg/mL; ofloxacin (OFX), 2.0 μg/mL; levofloxacin (LFX), 2.0 μg/mL; moxifloxacin (MFX), 2.0 μg/mL; streptomycin (SM), 10 μg/mL; amikacin (AMK), 40 μg/mL; kanamycin (KM), 40 μg/mL; capreomycin (CM), 40 μg/mL; prothionamide (PTO), 40 μg/mL; cycloserine (CS), 30 μg/mL; p-aminosalicylic acid (PAS), 1.0 μg/mL; and LZD, 2.0 μg/mL. Pyrazinamide (PZA) susceptibility was determined using the pyrazinamidase test. Critical concentrations for resistance of AMK, KM, OFX, and MFX were changed during the study periods as follows: AMK and KM, 30 μg/mL in January 2014; OFX, 4.0 μg/mL in January 2016; and MFX, 1.0 μg/mL in August 2018. Tests for higher concentrations of INH (1.0 μg/mL as the critical concentration) and LZD have been available since late 2016.\nMinimum inhibitory concentrations (MICs) for BDQ and DLM were measured using 7H9 broth medium and the serial two-fold dilution method. The ranges of the concentrations prepared for BDQ and DLM were 0.03125–4.0 mg/L, and 0.00625–0.8 mg/L, respectively. Tests for the MICs of BDQ and DLM have been available since mid-2018.\nDefinitions MDR-TB was defined as TB that was resistant to at least INH and RIF. Patients were classified based on a history of TB treatment as follows: new patients, have never been treated for TB or have taken anti-TB drugs for < 1 month; previously treated patients, received 1 month or more of anti-TB drugs in the past.11 Previously treated patients were further classified by the outcomes of their most recent treatment course, such as relapse, treatment after failure, and treatment after loss to follow-up.11 The first-line anti-TB drugs were INH, RIF, EMB, and PZA. SM was considered the first-line drug if it was used to treat drug-susceptible TB; otherwise, it was regarded as a second-line drug. Drug use was defined as the use of an anti-TB drug for 1 month or more in the past.\nMDR-TB was defined as TB that was resistant to at least INH and RIF. Patients were classified based on a history of TB treatment as follows: new patients, have never been treated for TB or have taken anti-TB drugs for < 1 month; previously treated patients, received 1 month or more of anti-TB drugs in the past.11 Previously treated patients were further classified by the outcomes of their most recent treatment course, such as relapse, treatment after failure, and treatment after loss to follow-up.11 The first-line anti-TB drugs were INH, RIF, EMB, and PZA. SM was considered the first-line drug if it was used to treat drug-susceptible TB; otherwise, it was regarded as a second-line drug. Drug use was defined as the use of an anti-TB drug for 1 month or more in the past.\nStatistical analysis Data are presented as medians with interquartile ranges (IQRs) for continuous variables, and as numbers with percentages for categorical variables. Continuous variables were compared using the Mann-Whitney U test or Kruskal-Wallis test, and categorical variables were compared using the Pearson's χ2 test or Fisher's exact test. The χ2 test for trends was performed to evaluate annual trends in drug resistance and the proportion of patients eligible for the treatment regimen. All tests were two-tailed, and a P value < 0.05 was considered significant. 
Statistical analysis was performed using SPSS Statistics, version 22.0 software (SPSS Inc., Chicago, IL, USA).\nData are presented as medians with interquartile ranges (IQRs) for continuous variables, and as numbers with percentages for categorical variables. Continuous variables were compared using the Mann-Whitney U test or Kruskal-Wallis test, and categorical variables were compared using the Pearson's χ2 test or Fisher's exact test. The χ2 test for trends was performed to evaluate annual trends in drug resistance and the proportion of patients eligible for the treatment regimen. All tests were two-tailed, and a P value < 0.05 was considered significant. Statistical analysis was performed using SPSS Statistics, version 22.0 software (SPSS Inc., Chicago, IL, USA).\nEthics statement The study protocol was reviewed and approved by the Institutional Review Board of Pusan National University Hospital (approval No. H-2005-009-090). The need for informed consent was waived given the observational retrospective nature of the study. Our study had no effect on the diagnosis or treatment of patients. Patient information was anonymized and de-identified before analysis.\nThe study protocol was reviewed and approved by the Institutional Review Board of Pusan National University Hospital (approval No. H-2005-009-090). The need for informed consent was waived given the observational retrospective nature of the study. Our study had no effect on the diagnosis or treatment of patients. Patient information was anonymized and de-identified before analysis.", "This retrospective study was conducted at seven university-affiliated tertiary care hospitals in Busan, Ulsan, and Gyeongsangnam-do Provinces of South Korea. The hospitals that participated in this study were the same as in a previous study.10 Patients who were diagnosed with MDR-TB based on the phenotypic DST results between January 2010 and December 2019 were included in the study. Patients with unknown previous TB treatment histories and previously treated patients with unknown regimens or outcomes of their most recent courses of TB treatment were excluded. Duplicate inter-hospital patients were also excluded by checking each patient's initials and birth date. If a patient had more than one DST result, the earlier result was used. If a patient had DST results from both pulmonary and extra-pulmonary specimens, results from the pulmonary specimen were selected. Age, sex, history of TB treatment, type of specimen, date of specimen collection, and results of the phenotypic DST were collected from the medical records.", "Six hospitals sent Mycobacterium tuberculosis isolates for the phenotypic DST to the Korean Institute of Tuberculosis, which is a supranational reference TB laboratory, and one hospital sent isolates to the Green Cross Laboratory, a commercial reference laboratory. The workflow used and the references of the critical concentrations did not differ between the laboratories.\nThe DST was performed using the absolute concentration method and Lowenstein-Jensen medium. 
The drugs and their critical concentrations for resistance were as follows: isoniazid (INH), 0.2 and 1.0 μg/mL; rifampin (RIF), 40 μg/mL; ethambutol (EMB), 2.0 μg/mL; rifabutin, 20 μg/mL; ofloxacin (OFX), 2.0 μg/mL; levofloxacin (LFX), 2.0 μg/mL; moxifloxacin (MFX), 2.0 μg/mL; streptomycin (SM), 10 μg/mL; amikacin (AMK), 40 μg/mL; kanamycin (KM), 40 μg/mL; capreomycin (CM), 40 μg/mL; prothionamide (PTO), 40 μg/mL; cycloserine (CS), 30 μg/mL; p-aminosalicylic acid (PAS), 1.0 μg/mL; and LZD, 2.0 μg/mL. Pyrazinamide (PZA) susceptibility was determined using the pyrazinamidase test. Critical concentrations for resistance of AMK, KM, OFX, and MFX were changed during the study periods as follows: AMK and KM, 30 μg/mL in January 2014; OFX, 4.0 μg/mL in January 2016; and MFX, 1.0 μg/mL in August 2018. Tests for higher concentrations of INH (1.0 μg/mL as the critical concentration) and LZD have been available since late 2016.\nMinimum inhibitory concentrations (MICs) for BDQ and DLM were measured using 7H9 broth medium and the serial two-fold dilution method. The ranges of the concentrations prepared for BDQ and DLM were 0.03125–4.0 mg/L, and 0.00625–0.8 mg/L, respectively. Tests for the MICs of BDQ and DLM have been available since mid-2018.", "MDR-TB was defined as TB that was resistant to at least INH and RIF. Patients were classified based on a history of TB treatment as follows: new patients, have never been treated for TB or have taken anti-TB drugs for < 1 month; previously treated patients, received 1 month or more of anti-TB drugs in the past.11 Previously treated patients were further classified by the outcomes of their most recent treatment course, such as relapse, treatment after failure, and treatment after loss to follow-up.11 The first-line anti-TB drugs were INH, RIF, EMB, and PZA. SM was considered the first-line drug if it was used to treat drug-susceptible TB; otherwise, it was regarded as a second-line drug. Drug use was defined as the use of an anti-TB drug for 1 month or more in the past.", "Data are presented as medians with interquartile ranges (IQRs) for continuous variables, and as numbers with percentages for categorical variables. Continuous variables were compared using the Mann-Whitney U test or Kruskal-Wallis test, and categorical variables were compared using the Pearson's χ2 test or Fisher's exact test. The χ2 test for trends was performed to evaluate annual trends in drug resistance and the proportion of patients eligible for the treatment regimen. All tests were two-tailed, and a P value < 0.05 was considered significant. Statistical analysis was performed using SPSS Statistics, version 22.0 software (SPSS Inc., Chicago, IL, USA).", "The study protocol was reviewed and approved by the Institutional Review Board of Pusan National University Hospital (approval No. H-2005-009-090). The need for informed consent was waived given the observational retrospective nature of the study. Our study had no effect on the diagnosis or treatment of patients. Patient information was anonymized and de-identified before analysis.", "Patient characteristics After the criteria defined above were applied, a total of 10,557 culture-confirmed TB patients were screened during the study period. Of them, 633 (6.0%) had MDR-TB and were included in the final analysis. Table 1 lists the baseline characteristics of all MDR-TB patients. Their median age was 50.0 years (IQR, 36.0–63.0) and 64.6% were male. None of the patients had a human immunodeficiency virus infection. 
The age groups of 40–49 and 50–59 years accounted for the largest proportion of patients. With regard to TB treatment history, 57.0% of all patients were new patients. The proportion of new patients among total patients did not change during the study period. Among 272 previously treated patients, 212 (77.9%) had been treated with first-line anti-TB drugs, and 60 (22.1%) with second-line anti-TB drugs. There were significant differences in the proportions of outcomes of the most recent treatment course between the patients previously treated with first-line drugs and those previously treated with second-line drugs (P < 0.001): relapse (84.9% vs. 36.7%); treatment after failure (6.6% vs. 33.3%); and treatment after loss to follow-up (8.5% vs. 30.0%).\nContinuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).\nAfter the criteria defined above were applied, a total of 10,557 culture-confirmed TB patients were screened during the study period. Of them, 633 (6.0%) had MDR-TB and were included in the final analysis. Table 1 lists the baseline characteristics of all MDR-TB patients. Their median age was 50.0 years (IQR, 36.0–63.0) and 64.6% were male. None of the patients had a human immunodeficiency virus infection. The age groups of 40–49 and 50–59 years accounted for the largest proportion of patients. With regard to TB treatment history, 57.0% of all patients were new patients. The proportion of new patients among total patients did not change during the study period. Among 272 previously treated patients, 212 (77.9%) had been treated with first-line anti-TB drugs, and 60 (22.1%) with second-line anti-TB drugs. There were significant differences in the proportions of outcomes of the most recent treatment course between the patients previously treated with first-line drugs and those previously treated with second-line drugs (P < 0.001): relapse (84.9% vs. 36.7%); treatment after failure (6.6% vs. 33.3%); and treatment after loss to follow-up (8.5% vs. 30.0%).\nContinuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).\nPattern and trend of additional drug resistance Table 2 lists the rates of additional drug resistance among all MDR-TB patients. All patients had additional resistance to a median of three drugs (IQR, 2.0–7.0) (excluding INH, RIF, a higher concentration of INH, and LZD). The resistance rates of the drugs in groups A and B according to the World Health Organization (WHO) guidelines were as follows12: any fluoroquinolone ([FQ]; OFX, LFX, or MFX; 26.2%), LZD (0.0%), and CS (6.3%). The resistance rates for drugs in group C were: EMB (63.3%), PZA (41.4%), SM (34.6%), PAS (26.9%), PTO (16.9%), and AMK (14.5%). Seventy-two patients (11.4%) had extensively drug-resistant (XDR)-TB (based on a previous definition: TB resistant to any FQ and at least one of three second-line injectable drugs [CM, KM, or AMK], in addition to MDR). The resistance rates of PZA, FQ, KM, and PTO were higher in previously treated patients than in new patients. During the 10-year study period, the resistance rate of PZA tended to increase, whereas that of PAS tended to decrease. 
However, no significant changes in the trends in resistance were observed for any of the other drugs (Table 3).
Continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).
RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, INH = isoniazid, LZD = linezolid, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis, NA = not applicable.
aComparison of new patients, patients previously treated with first-line drugs, and patients previously treated with second-line drugs; bHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); cNumber of resistant patients/total tested patients (%); dMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin); eExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid.
RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis.
aMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin).
Table 4 compares additional drug resistance rates between FQ-susceptible and FQ-resistant MDR-TB. All drugs except SM, a higher concentration of INH, and LZD had higher rates of resistance in patients with FQ-resistant MDR-TB. The number of additional resistant drugs (excluding INH, RIF, a higher concentration of INH, and LZD) was significantly higher in FQ-resistant MDR-TB than in FQ-susceptible MDR-TB (median of 9.0 vs. 2.0).
Continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).
RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, INH = isoniazid, LZD = linezolid, FQs = fluoroquinolone-susceptible, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis, NA = not applicable.
aMultidrug-resistant tuberculosis susceptible to all fluoroquinolones (ofloxacin, levofloxacin, and moxifloxacin); bMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin); cComparison of patients with fluoroquinolone-susceptible and fluoroquinolone-resistant multidrug-resistant tuberculosis; dHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); eNumber of resistant patients/total tested patients (%); fExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid.
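The classifications summarized in Tables 2–4 reduce to a few set operations over each patient's phenotypic DST results. The sketch below is illustrative only (it is not the study's analysis code): it assumes results are stored per patient as a dictionary mapping drug names to "R", "S", or None for untested, and it encodes the FQ-resistance and previous XDR-TB definitions quoted above.

```python
# Illustrative sketch; assumes per-patient DST results as {"drug": "R"/"S"/None}.
FQS = {"OFX", "LFX", "MFX"}
INJECTABLES = {"AMK", "KM", "CM"}
# Drugs excluded when counting "additional" resistance, as in Tables 2 and 4.
EXCLUDED = {"INH", "RIF", "INH_high", "LZD"}

def is_fq_resistant(dst):
    """FQ-resistant MDR-TB: resistance to any of OFX, LFX, or MFX."""
    return any(dst.get(d) == "R" for d in FQS)

def is_xdr_previous_definition(dst):
    """Previous XDR-TB definition: MDR plus resistance to any FQ and
    to at least one second-line injectable drug (AMK, KM, or CM)."""
    return is_fq_resistant(dst) and any(dst.get(d) == "R" for d in INJECTABLES)

def additional_resistance_count(dst):
    """Number of additional resistant drugs, excluding INH, RIF,
    the higher concentration of INH, and LZD."""
    return sum(1 for drug, res in dst.items() if drug not in EXCLUDED and res == "R")

def resistance_rate(patients, drug):
    """Resistant / tested; patients not tested for the drug are excluded."""
    tested = [p for p in patients if p.get(drug) in ("R", "S")]
    resistant = sum(1 for p in tested if p[drug] == "R")
    return resistant / len(tested) if tested else float("nan")

# Hypothetical patient used only to show the helpers in action.
example = {"INH": "R", "RIF": "R", "OFX": "R", "LFX": "R", "MFX": None,
           "AMK": "S", "KM": "R", "EMB": "R", "PZA": "S"}
print(is_fq_resistant(example), is_xdr_previous_definition(example),
      additional_resistance_count(example))
```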
MICs of BDQ and DLM
Twenty-six patients had MIC results for BDQ and DLM. Fig. 1 presents the overall MIC distribution of these patients. When an interim critical concentration of 0.25 mg/L was applied for BDQ resistance,1314 one patient (3.8%) was resistant to BDQ. When an interim critical concentration of 0.016 mg/L was applied for DLM resistance,13 three patients (11.5%) were resistant to DLM; the results remained unchanged when 0.06 or 0.2 mg/L was applied as the critical concentration for DLM resistance.1516 The BDQ-resistant patient had a history of treatment failure with a BDQ- and CFZ-containing regimen. Among the three DLM-resistant patients, one had never been treated for TB, and the remaining two had histories of TB treatment with first-line anti-TB drugs only.
MIC = minimum inhibitory concentration
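Reading resistance off the MIC distribution in Fig. 1 is a simple threshold comparison against the interim critical concentrations cited above. The snippet below is a hedged illustration rather than the laboratory's procedure; whether the cut-off is applied as a strict or inclusive inequality is an assumption and should follow the testing laboratory's convention.

```python
# Illustrative only; interim critical concentrations (mg/L) as cited in the text.
INTERIM_CC = {"BDQ": 0.25, "DLM": 0.016}  # alternative DLM cut-offs: 0.06 or 0.2 mg/L

def classify_mic(drug, mic_mg_per_l, critical_concentrations=INTERIM_CC):
    """Return 'R' if the MIC exceeds the interim critical concentration, else 'S'.
    Treating the boundary value as susceptible is an assumption made here."""
    cc = critical_concentrations[drug]
    return "R" if mic_mg_per_l > cc else "S"

# Example with made-up MIC values from the tested dilution ranges.
print(classify_mic("BDQ", 0.5), classify_mic("BDQ", 0.125), classify_mic("DLM", 0.008))
```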
Eligibility for the treatment regimen
Of all patients, 458 (72.4%) were susceptible to all FQs and CS. Assuming that these patients were also susceptible to BDQ, LZD, and CFZ, they would be eligible for the WHO's longer treatment regimen without drug modification.12 In total, 447 patients (70.6%) were eligible for the WHO's shorter all-oral BDQ-containing regimen after applying the inclusion criterion of “MDR-TB patients who have not been exposed to treatment with second-line drugs for more than 1 month, and in whom resistance to FQs has been excluded.”12 However, when the strict inclusion criterion, “MDR-TB patients without resistance to a medicine in the shorter regimen,” was applied,12 only 157 patients (24.8%) were fully susceptible to all FQs, PTO, EMB, and PZA and thus eligible for the shorter regimen (even if all patients were assumed to be susceptible to both BDQ and CFZ). Fig. 2 presents the annual trends in the proportions of patients eligible for the longer and shorter (with strict inclusion criteria) regimens; these proportions did not change during the study period.
aPatients susceptible to all fluoroquinolones and cycloserine (assuming all patients are also susceptible to bedaquiline, linezolid, and clofazimine); bPatients fully susceptible to all fluoroquinolones, prothionamide, ethambutol, and pyrazinamide (assuming all patients are also susceptible to bedaquiline and clofazimine).
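The three eligibility figures above follow directly from per-patient DST results once the paper's operational criteria are written out as predicates. The sketch below is a simplified, assumption-laden rendering of those criteria, not the full WHO decision algorithm: it ignores clinical exclusions (e.g., drug intolerance), treats an untested drug as not confirmed susceptible, and `prior_second_line_over_1_month` is a hypothetical field standing in for the treatment-history variable described in the Methods.

```python
# Simplified encoding of the eligibility rules as used in this paper (illustrative only).
FQS = ("OFX", "LFX", "MFX")

def susceptible_to_all(dst, drugs):
    # An untested drug counts as not confirmed susceptible (an assumption).
    return all(dst.get(d) == "S" for d in drugs)

def eligible_longer_regimen(dst):
    # Susceptible to all FQs and CS; BDQ, LZD, and CFZ susceptibility is assumed.
    return susceptible_to_all(dst, FQS + ("CS",))

def eligible_shorter_regimen_operational(dst, prior_second_line_over_1_month):
    # No second-line exposure for more than 1 month and FQ resistance excluded.
    return (not prior_second_line_over_1_month) and susceptible_to_all(dst, FQS)

def eligible_shorter_regimen_strict(dst):
    # Fully susceptible to all FQs, PTO, EMB, and PZA; BDQ and CFZ assumed susceptible.
    return susceptible_to_all(dst, FQS + ("PTO", "EMB", "PZA"))
```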
DISCUSSION
In the present study, additional drug resistance of MDR-TB patients was high not only for companion drugs but also for FQ, a core second-line anti-TB drug.
The resistance rate did not exhibit a decreasing trend for most anti-TB drugs during the study period. More than half of all MDR-TB patients were classified as new patients, and this proportion did not decrease over time. These findings are not different from those of a previous study,10 suggesting that inappropriate treatment and management of MDR-TB patients is not improving in South Korea.

Because injectables are no longer considered core drugs for treating MDR-TB, the classification of MDR-TB has been simplified to FQ-susceptible or FQ-resistant MDR-TB (the latter is so-called pre-XDR-TB).17 In other words, FQ has increased in importance as a core drug for treating MDR-TB. In our study, the proportion of FQ-resistant MDR-TB among all patients was 26.2%, which is comparable with the global estimate of 20.1%.1 Furthermore, this proportion did not decrease during the study period. FQ resistance in MDR-TB patients may be due to the widespread, uncontrolled use of FQs in the community, because FQs are widely used for various infectious diseases.18 Also, suboptimal TB treatment with an FQ-containing regimen and inappropriate patient management may generate and amplify FQ resistance in MDR-TB patients at the individual level.19

The treatment outcomes of patients with FQ-resistant MDR-TB are poor.2021 One of the main reasons for this is the difficulty of building an effective treatment regimen due to the high rate of resistance to other drugs. In our study, patients with FQ-resistant MDR-TB had additional resistance to a median of nine drugs. To improve the treatment outcomes of patients with FQ-resistant MDR-TB, it is important to provide an effective empirical treatment regimen based on DRS data and the patient's treatment history until full DST results are obtained. Rapid detection of FQ resistance is also essential for a positive outcome in MDR-TB patients. However, a rapid molecular DST for FQ resistance (e.g., the line probe assay for second-line drugs; GenoType MTBDRsl) is not yet widely available in South Korea. The universal use of this rapid test in routine practice is urgently needed. The recently developed Xpert MTB/XDR assay may be promising.

In our study, 72.4% of patients were eligible for the WHO's longer MDR-TB treatment regimen.12 However, this percentage may have been overestimated because only FQ and CS resistance was considered for eligibility. Although resistance to BDQ, LZD, and CFZ was expected to be low because these drugs have not been widely used to treat MDR-TB in South Korea, resistance can occur in the absence of antimicrobial exposure.22 Although standard critical concentrations have not been established, several studies have reported primary resistance to BDQ, DLM, and CFZ using interim critical concentrations.22232425 In our study, among the 26 patients with MIC results for BDQ and DLM, one and three patients were considered to have resistance to BDQ and DLM, respectively. None of the three DLM-resistant patients had previously been treated with a DLM-containing regimen. These results may reflect true primary resistance, but they may also be due to imperfect DST methods or critical concentrations. Reliable and reproducible DST methods should be established, particularly for the new and repurposed anti-TB drugs that are now defined as core drugs.
The wide use of these drugs without the support of accurate DST methods will inevitably increase resistance rates.

The WHO recently updated the guidelines for a shorter MDR-TB treatment regimen in which injectables are replaced with BDQ.12 In our study, only 24.8% of patients were eligible for the shorter regimen under the strict inclusion criteria. The most common reason for exclusion was resistance to PZA or EMB. However, recent studies have suggested that initial resistance to companion drugs in the shorter regimen (e.g., PZA, EMB, or PTO) has no or minimal effect on the treatment outcomes of MDR-TB patients if their isolates are susceptible to the core drugs.2627 Also, the phenotypic DST for several drugs, such as EMB and PTO, may not be reliable and reproducible.12 Further investigations are needed to clarify these issues. Until clear evidence on the inclusion criteria for the shorter regimen is established, an individualized longer regimen based on DST results is a reasonable choice in the South Korean context.

Primary drug resistance is an indicator of the effectiveness of a TB control program. It reflects suboptimal treatment and management of TB patients and the spread of the pathogen under inappropriate infection control. In our study, more than half of all MDR-TB patients were classified as new patients, and this proportion did not decrease over time. In addition to providing an effective treatment regimen for MDR-TB patients, a comprehensive TB control program that encompasses diagnosis, prevention, patient management, and infection control measures is needed to reduce the transmission of this difficult-to-treat pathogen and the number of its victims.

Our study had several limitations. First, our results were based on regional data derived from seven private hospitals and may not represent the general South Korean population. However, according to the 2019 nationwide notification data, 15.3% of all TB cases in South Korea were reported in our province, and 96.9% of all TB cases were reported from the private sector.28 Second, all hospitals that participated in the study were university-affiliated tertiary care hospitals, so referral or selection bias may have been involved. However, the majority of MDR-TB patients in South Korea are transferred to and treated at tertiary care hospitals. Third, critical concentrations for several drugs changed during the study period, and tests for LZD and a higher concentration of INH could not be performed during the entire study period, which may limit the interpretation of the results.

In conclusion, the proportion of new patients and the rates of additional drug resistance in MDR-TB patients in South Korea were high and did not decrease during the 10-year period. Effective treatment of MDR-TB patients in South Korea will require the widespread use of a rapid molecular DST for FQ and the introduction of reliable DST methods for core second-line anti-TB drugs, as well as analyses of nationwide DRS data collected using standardized methods.
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, "discussion" ]
[ "Multidrug-resistant", "Resistance", "South Korea", "Tuberculosis" ]
INTRODUCTION
Multidrug-resistant tuberculosis (MDR-TB) remains a major public health concern. An estimated 465,000 incident cases of MDR- or rifampicin-resistant (RR)-TB were recorded globally in 2019, and their burden remains stable.1 Treatment outcomes of MDR/RR-TB are still suboptimal; only 57% of MDR/RR-TB patients successfully completed treatment in 2017.1 Treating patients with MDR-TB is challenging because of less effective and more toxic anti-TB drugs, the long duration of therapy, and suboptimal patient adherence.2

Drug susceptibility test (DST) results of MDR-TB patients provide key information for choosing and designing the treatment regimen. Inappropriate regimens not only amplify drug resistance and reduce treatment success, but also increase the risk of transmission of pathogens in the community.34 In the absence of individual DST results, recent representative anti-TB drug-resistance surveillance (DRS) data for a province or country are important to guide selection of the regimen.5 Also, as the proportion of MDR/RR-TB patients starting treatment with only the limited resistance information provided by molecular DSTs (e.g., the Xpert MTB/RIF assay or line probe assay) increases, DRS data are becoming more important for selecting an empirical regimen before full DST results are obtained.

Several nationwide anti-TB DRSs were conducted among TB patients in the public sector of South Korea between 1994 and 2004.6 Additionally, several hospitals and laboratories in the private sector have reported phenotypic DST results of TB patients.789 Although previous studies have helped clarify the status and trends of MDR-TB in South Korea, detailed information on additional drug resistance in patients with MDR-TB is lacking. The authors of a previous study reported the patterns and trends of additional drug resistance in patients with MDR-TB in South Korea using phenotypic DST results collected from seven hospitals from 2010 to 2014.10 Since then, many changes have been made in the diagnosis and treatment of MDR-TB in South Korea: the universal use of molecular DSTs, the introduction and increased use of new and repurposed anti-TB drugs such as bedaquiline (BDQ), delamanid (DLM), linezolid (LZD), and clofazimine (CFZ), and changes in the MDR-TB treatment guidelines. These changes may have affected the status of drug resistance in MDR-TB patients. Therefore, the present study was conducted to update the patterns and trends of additional drug resistance in patients with MDR-TB in South Korea after 2014.

METHODS
Study design and data collection
This retrospective study was conducted at seven university-affiliated tertiary care hospitals in Busan, Ulsan, and Gyeongsangnam-do Provinces of South Korea. The hospitals that participated in this study were the same as in a previous study.10 Patients who were diagnosed with MDR-TB based on phenotypic DST results between January 2010 and December 2019 were included in the study. Patients with an unknown previous TB treatment history and previously treated patients with unknown regimens or outcomes of their most recent course of TB treatment were excluded. Duplicate inter-hospital patients were also excluded by checking each patient's initials and birth date. If a patient had more than one DST result, the earlier result was used. If a patient had DST results from both pulmonary and extra-pulmonary specimens, the results from the pulmonary specimen were selected.
Age, sex, history of TB treatment, type of specimen, date of specimen collection, and results of the phenotypic DST were collected from the medical records.
DST
Six hospitals sent Mycobacterium tuberculosis isolates for the phenotypic DST to the Korean Institute of Tuberculosis, which is a supranational reference TB laboratory, and one hospital sent isolates to the Green Cross Laboratory, a commercial reference laboratory. The workflow used and the references for the critical concentrations did not differ between the laboratories. The DST was performed using the absolute concentration method and Lowenstein-Jensen medium. The drugs and their critical concentrations for resistance were as follows: isoniazid (INH), 0.2 and 1.0 μg/mL; rifampin (RIF), 40 μg/mL; ethambutol (EMB), 2.0 μg/mL; rifabutin, 20 μg/mL; ofloxacin (OFX), 2.0 μg/mL; levofloxacin (LFX), 2.0 μg/mL; moxifloxacin (MFX), 2.0 μg/mL; streptomycin (SM), 10 μg/mL; amikacin (AMK), 40 μg/mL; kanamycin (KM), 40 μg/mL; capreomycin (CM), 40 μg/mL; prothionamide (PTO), 40 μg/mL; cycloserine (CS), 30 μg/mL; p-aminosalicylic acid (PAS), 1.0 μg/mL; and LZD, 2.0 μg/mL. Pyrazinamide (PZA) susceptibility was determined using the pyrazinamidase test. Critical concentrations for resistance to AMK, KM, OFX, and MFX were changed during the study period as follows: AMK and KM, 30 μg/mL in January 2014; OFX, 4.0 μg/mL in January 2016; and MFX, 1.0 μg/mL in August 2018. Tests for a higher concentration of INH (1.0 μg/mL as the critical concentration) and for LZD have been available since late 2016. Minimum inhibitory concentrations (MICs) for BDQ and DLM were measured using 7H9 broth medium and the serial two-fold dilution method. The concentration ranges prepared for BDQ and DLM were 0.03125–4.0 mg/L and 0.00625–0.8 mg/L, respectively. Tests for the MICs of BDQ and DLM have been available since mid-2018.
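The critical-concentration scheme above is effectively a small, date-dependent configuration table. The sketch below restates it as data for illustration only; it is not laboratory software, the effective dates are approximated to the first day of the stated month, and representing the higher INH concentration as a separate "INH_high" key is an assumption.

```python
from datetime import date

# Critical concentrations (ug/mL) for the absolute concentration method on
# Lowenstein-Jensen medium, as listed in the Methods (illustrative restatement).
BASE_CC = {
    "INH": 0.2, "INH_high": 1.0, "RIF": 40, "EMB": 2.0, "RFB": 20,
    "OFX": 2.0, "LFX": 2.0, "MFX": 2.0, "SM": 10, "AMK": 40, "KM": 40,
    "CM": 40, "PTO": 40, "CS": 30, "PAS": 1.0, "LZD": 2.0,
}

# Revisions introduced during the study period (effective dates assumed to be
# the first of the stated month).
REVISIONS = [
    (date(2014, 1, 1), {"AMK": 30, "KM": 30}),
    (date(2016, 1, 1), {"OFX": 4.0}),
    (date(2018, 8, 1), {"MFX": 1.0}),
]

def critical_concentration(drug, test_date):
    """Critical concentration in force on a given date. PZA is excluded because
    its susceptibility was determined with the pyrazinamidase test."""
    cc = BASE_CC[drug]
    for effective_from, changes in REVISIONS:
        if test_date >= effective_from and drug in changes:
            cc = changes[drug]
    return cc

# Example: the MFX critical concentration changes from 2.0 to 1.0 ug/mL in August 2018.
print(critical_concentration("MFX", date(2017, 5, 1)),
      critical_concentration("MFX", date(2019, 5, 1)))
```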
Definitions
MDR-TB was defined as TB resistant to at least INH and RIF. Patients were classified based on their history of TB treatment as follows: new patients had never been treated for TB or had taken anti-TB drugs for < 1 month; previously treated patients had received 1 month or more of anti-TB drugs in the past.11 Previously treated patients were further classified by the outcome of their most recent treatment course, such as relapse, treatment after failure, and treatment after loss to follow-up.11 The first-line anti-TB drugs were INH, RIF, EMB, and PZA. SM was considered a first-line drug if it was used to treat drug-susceptible TB; otherwise, it was regarded as a second-line drug. Drug use was defined as the use of an anti-TB drug for 1 month or more in the past.
Statistical analysis
Data are presented as medians with interquartile ranges (IQRs) for continuous variables and as numbers with percentages for categorical variables. Continuous variables were compared using the Mann-Whitney U test or Kruskal-Wallis test, and categorical variables were compared using the Pearson's χ2 test or Fisher's exact test. The χ2 test for trends was performed to evaluate annual trends in drug resistance and in the proportion of patients eligible for the treatment regimens. All tests were two-tailed, and a P value < 0.05 was considered significant. Statistical analysis was performed using SPSS Statistics, version 22.0 software (SPSS Inc., Chicago, IL, USA).
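For readers reproducing this kind of analysis outside SPSS, the group comparisons described above map onto standard routines. The fragment below is a generic illustration, not the authors' code; the age vectors and the 2 x 2 table are made-up stand-ins for the study variables.

```python
import numpy as np
from scipy import stats

# Hypothetical example data standing in for the study variables.
age_group_a = [45, 52, 38, 61, 57, 49]        # e.g., new patients
age_group_b = [50, 63, 41, 66, 55, 70, 59]    # e.g., previously treated patients
counts_2x2 = np.array([[30, 70],              # e.g., resistant vs. susceptible
                       [55, 45]])             # in two treatment-history groups

# Continuous variable, two groups: Mann-Whitney U test.
u_stat, p_mw = stats.mannwhitneyu(age_group_a, age_group_b, alternative="two-sided")

# Categorical variable: Pearson's chi-square, or Fisher's exact test for sparse tables.
chi2, p_chi2, dof, expected = stats.chi2_contingency(counts_2x2, correction=False)
odds_ratio, p_fisher = stats.fisher_exact(counts_2x2)

print(p_mw, p_chi2, p_fisher)
```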
Ethics statement
The study protocol was reviewed and approved by the Institutional Review Board of Pusan National University Hospital (approval No. H-2005-009-090). The need for informed consent was waived given the observational retrospective nature of the study. Our study had no effect on the diagnosis or treatment of patients. Patient information was anonymized and de-identified before analysis.
RESULTS
Patient characteristics
After the criteria defined above were applied, a total of 10,557 culture-confirmed TB patients were screened during the study period. Of them, 633 (6.0%) had MDR-TB and were included in the final analysis. Table 1 lists the baseline characteristics of all MDR-TB patients. Their median age was 50.0 years (IQR, 36.0–63.0) and 64.6% were male. None of the patients had a human immunodeficiency virus infection. The age groups of 40–49 and 50–59 years accounted for the largest proportions of patients. With regard to TB treatment history, 57.0% of all patients were new patients, and this proportion did not change during the study period. Among the 272 previously treated patients, 212 (77.9%) had been treated with first-line anti-TB drugs and 60 (22.1%) with second-line anti-TB drugs.
The outcomes of the most recent treatment course differed significantly between patients previously treated with first-line drugs and those previously treated with second-line drugs (P < 0.001): relapse (84.9% vs. 36.7%), treatment after failure (6.6% vs. 33.3%), and treatment after loss to follow-up (8.5% vs. 30.0%).
Continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).
Pattern and trend of additional drug resistance
Table 2 lists the rates of additional drug resistance among all MDR-TB patients. Patients had additional resistance to a median of three drugs (IQR, 2.0–7.0), excluding INH, RIF, a higher concentration of INH, and LZD. The resistance rates of the drugs in groups A and B of the World Health Organization (WHO) guidelines were as follows12: any fluoroquinolone ([FQ]; OFX, LFX, or MFX), 26.2%; LZD, 0.0%; and CS, 6.3%. The resistance rates of the drugs in group C were: EMB, 63.3%; PZA, 41.4%; SM, 34.6%; PAS, 26.9%; PTO, 16.9%; and AMK, 14.5%. Seventy-two patients (11.4%) had extensively drug-resistant (XDR)-TB according to the previous definition (MDR-TB with additional resistance to any FQ and to at least one of the three second-line injectable drugs [CM, KM, or AMK]). The resistance rates of PZA, FQ, KM, and PTO were higher in previously treated patients than in new patients. During the 10-year study period, the resistance rate of PZA tended to increase, whereas that of PAS tended to decrease. However, no significant changes in the trends in resistance were observed for any of the other drugs (Table 3).
Continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).
RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, INH = isoniazid, LZD = linezolid, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis, NA = not applicable.
aComparison of new patients, patients previously treated with first-line drugs, and patients previously treated with second-line drugs; bHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); cNumber of resistant patients/total tested patients (%); dMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin); eExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid.
RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis.
aMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin).
Table 4 compares additional drug resistance rates between FQ-susceptible and FQ-resistant MDR-TB. All drugs except SM, a higher concentration of INH, and LZD had higher rates of resistance in patients with FQ-resistant MDR-TB. The number of additional resistant drugs (excluding INH, RIF, a higher concentration of INH, and LZD) was significantly higher in FQ-resistant MDR-TB than in FQ-susceptible MDR-TB (median of 9.0 vs. 2.0).
Continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%).
RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, INH = isoniazid, LZD = linezolid, FQs = fluoroquinolone-susceptible, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis, NA = not applicable.
aMultidrug-resistant tuberculosis susceptible to all fluoroquinolones (ofloxacin, levofloxacin, and moxifloxacin); bMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin); cComparison of patients with fluoroquinolone-susceptible and fluoroquinolone-resistant multidrug-resistant tuberculosis; dHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); eNumber of resistant patients/total tested patients (%); fExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid.
MICs of BDQ and DLM
Twenty-six patients had MIC results for BDQ and DLM. Fig. 1 presents the overall MIC distribution of these patients. When an interim critical concentration of 0.25 mg/L was applied for BDQ resistance,1314 one patient (3.8%) was resistant to BDQ. When an interim critical concentration of 0.016 mg/L was applied for DLM resistance,13 three patients (11.5%) were resistant to DLM; the results remained unchanged when 0.06 or 0.2 mg/L was applied as the critical concentration for DLM resistance.1516 The BDQ-resistant patient had a history of treatment failure with a BDQ- and CFZ-containing regimen. Among the three DLM-resistant patients, one had never been treated for TB, and the remaining two had histories of TB treatment with first-line anti-TB drugs only.
MIC = minimum inhibitory concentration
Eligibility for the treatment regimen
Of all patients, 458 (72.4%) were susceptible to all FQs and CS. Assuming that these patients were also susceptible to BDQ, LZD, and CFZ, they would be eligible for the WHO's longer treatment regimen without drug modification.12 In total, 447 patients (70.6%) were eligible for the WHO's shorter all-oral BDQ-containing regimen after applying the inclusion criterion of “MDR-TB patients who have not been exposed to treatment with second-line drugs for more than 1 month, and in whom resistance to FQs has been excluded.”12 However, when the strict inclusion criterion, “MDR-TB patients without resistance to a medicine in the shorter regimen,” was applied,12 only 157 patients (24.8%) were fully susceptible to all FQs, PTO, EMB, and PZA and thus eligible for the shorter regimen (even if all patients were assumed to be susceptible to both BDQ and CFZ). Fig. 2 presents the annual trends in the proportions of patients eligible for the longer and shorter (with strict inclusion criteria) regimens; these proportions did not change during the study period.
aPatients susceptible to all fluoroquinolones and cycloserine (assuming all patients are also susceptible to bedaquiline, linezolid, and clofazimine); bPatients fully susceptible to all fluoroquinolones, prothionamide, ethambutol, and pyrazinamide (assuming all patients are also susceptible to bedaquiline and clofazimine).
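The flat annual trends summarized above and in Fig. 2 rely on the χ2 test for trend described in the Statistical analysis section. The snippet below is a generic illustration of one common form of that test (a Cochran-Armitage-style test for a linear trend in proportions), not the study's SPSS output; the yearly counts and the evenly spaced year scores are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import norm

def chi_square_test_for_trend(events, totals, scores=None):
    """Cochran-Armitage-style test for a linear trend in proportions across
    ordered groups (e.g., eligible patients per calendar year).
    Returns the z statistic and a two-sided P value."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    scores = np.arange(len(events), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    p_bar = events.sum() / totals.sum()
    t = np.sum(scores * (events - totals * p_bar))
    var_t = p_bar * (1 - p_bar) * (np.sum(totals * scores ** 2)
                                   - np.sum(totals * scores) ** 2 / totals.sum())
    z = t / np.sqrt(var_t)
    return z, 2 * norm.sf(abs(z))

# Hypothetical yearly counts of eligible patients out of all MDR-TB patients per year.
eligible = [12, 15, 14, 18, 16, 13, 17, 15, 19, 18]
totals = [60, 65, 62, 70, 64, 58, 66, 61, 68, 59]
print(chi_square_test_for_trend(eligible, totals))
```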
aComparison of new patients, patients previously treated with first-line drugs and patients previously treated with second-line drugs; bHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); cNumber of resistant patients/total tested patients (%); dMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin or moxifloxacin); eExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid. RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis. aMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin, or moxifloxacin). Table 4 compares additional drug resistance rates between FQ-susceptible and FQ-resistant MDR-TB. All drugs except SM, a higher concentration of INH, and LZD had higher rates of resistance in patients with FQ-resistant MDR-TB. The number of additional resistant drugs (excluding INH, RIF, a higher concentration of INH, and LZD) was significantly higher in FQ-resistant MDR-TB than in FQ-susceptible MDR-TB (median of 9.0 vs. 2.0). Continuous variables are presented as medians (interquartile ranges) and categorical variables as numbers (%). RFB = rifabutin, EMB = ethambutol, PZA = pyrazinamide, OFX = ofloxacin, LFX = levofloxacin, MFX = moxifloxacin, SM = streptomycin, AMK = amikacin, KM = kanamycin, CM = capreomycin, PTO = prothionamide, CS = cycloserine, PAS = p-aminosalicylic acid, INH = isoniazid, LZD = linezolid, FQs = fluoroquinolone-susceptible, FQr = fluoroquinolone-resistant, MDR = multidrug-resistant, TB = tuberculosis, NA = not applicable. aMultidrug-resistant tuberculosis susceptible to all fluoroquinolones (ofloxacin, levofloxacin, and moxifloxacin); bMultidrug-resistant tuberculosis resistant to any fluoroquinolone (ofloxacin, levofloxacin or moxifloxacin); cComparison of patients with fluoroquinolone susceptible and resistant multidrug-resistant tuberculosis; dHigher concentration of isoniazid (critical concentration of 1.0 μg/mL); eNumber of resistant patients/total tested patients (%); fExcluding isoniazid, rifampin, a higher concentration of isoniazid, and linezolid. MICs of BDQ and DLM: Twenty-six patients had MIC results for BDQ and DLM. Fig. 1 presents the overall MIC distribution of the patients. When an interim critical concentration of 0.25 mg/L was applied for BDQ resistance,1314 one patient (3.8%) was resistant to BDQ. When the interim critical concentration of 0.016 mg/L was applied for DLM resistance,13 three patients (11.5%) were resistant to DLM; the results remained unchanged if 0.06 or 0.2 mg/L was applied as the critical concentration of DLM resistance.1516 One BDQ-resistant patient had a history of treatment failure with a BDQ- and CFZ-containing regimen. Among the three patients resistant to DLM, one had never been treated for TB in the past, and the remaining two patients had histories of TB treatments with first-line anti-TB drugs only. MIC = minimum inhibitory concentration Eligibility for the treatment regimen: Of all patients, 458 (72.4%) were susceptible to all FQs and CS. 
Assuming that these patients were also susceptible to BDQ, LZD, and CFZ, they would be eligible for WHO's longer treatment regimen without drug modification.12 In total, 447 patients (70.6%) were eligible for the WHO's shorter all-oral BDQ-containing regimen after applying the inclusion criteria of “MDR-TB patients who have not been exposed to treatment with second-line drugs for more than 1 month, and in whom resistance to FQs has been excluded.”12 However, if the strict inclusion criteria, “MDR-TB patients without resistance to a medicine in the shorter regimen” were applied,12 only 157 patients (24.8%) were fully susceptible to all FQs, PTO, EMB, and PZA, and thus were eligible for the shorter regimen (even if all patients were assumed to be susceptible to both BDQ and CFZ). Fig. 2 presents the annual trends in the proportion of patients eligible for the longer and shorter (with strict inclusion criteria) regimens, respectively. The proportion did not change during the study period. aPatients susceptible to all fluoroquinolones and cycloserine (assuming all patients are also susceptible to bedaquiline, linezolid, and clofazimine); bPatients fully susceptible to all fluoroquinolones, prothionamide, ethambutol, and pyrazinamide (assuming all patients are also susceptible to bedaquiline and clofazimine). DISCUSSION: In the present study, additional drug resistance of MDR-TB patients was not only high for companion drugs but also for FQ, a core second-line anti-TB drug. The resistance rate did not exhibit a decreasing trend for most anti-TB drugs during the study period. More than half of all MDR-TB patients were classified as new patients, and this proportion did not decrease over time. These findings are not different from those of a previous study,10 suggesting that inappropriate treatment and management of MDR-TB patients is not improving in South Korea. Because injectables are no longer considered the core drugs to treat MDR-TB, the classification of MDR-TB was simplified to FQ-susceptible or FQ-resistant MDR-TB (the latter is so-called pre-XDR-TB).17 In other words, FQ has increased in importance as a core drug for treating MDR-TB. In our study, the proportion of FQ-resistant MDR-TB among all patients was 26.2%, which is comparable with the global estimate of 20.1%.1 Furthermore, the proportion did not decrease during the study period. FQ-resistance in MDR-TB patients may be due to the widespread uncontrolled use of FQs in the community because FQs are widely used for various infectious diseases.18 Also, suboptimal TB treatment with an FQ-containing regimen and inappropriate patient management may generate and amplify FQ resistance in MDR-TB patients at the individual level.19 The treatment outcomes of patients with FQ-resistant MDR-TB are poor.2021 One of the main reasons for this is the difficulty building an effective treatment regimen due to the high rate of resistance to other drugs. In our study, patients with FQ-resistant MDR-TB had additional resistances to a median of nine drugs. To improve the treatment outcomes of patients with FQ-resistant MDR-TB, it is important to provide an effective empirical treatment regimen based on DRS data and the patient's treatment history until full DST results are obtained. Rapid detection of FQ resistance is also essential for a positive outcome in MDR-TB patients. However, a rapid molecular DST for FQ resistance (e.g., line probe assay for second-line drugs; GenoType MTBDRsl) is not yet widely available in South Korea. 
The universal use of this rapid test in routine practice is urgently needed. The recently developed Xpert MTB/XDR assay may be promising. In our study, 72.4% of patients were eligible for the WHO's longer MDR-TB treatment regimen.12 However, this percentage may have been overestimated because only FQ and CS resistance was considered for eligibility. Although resistance to BDQ, LZD, and CFZ was expected to be low as these drugs have not been widely used to treat MDR-TB in South Korea, resistance can occur in the absence of antimicrobial exposure.22 Although standard critical concentrations have not been established, several studies have reported the primary resistances of BDQ, DLM, and CFZ with interim critical concentrations.22232425 In our study, among the 26 patients with MIC results for BDQ and DLM, one and three patients were considered to have resistance to BDQ and DLM, respectively. None of the three DLM-resistant patients had previously been treated with a DLM-containing regimen. These results may be due to true primary resistance, but they may also be due to incomplete DST methods or critical concentrations. Reliable and reproducible DST methods should be established, particularly for new and repurposed anti-TB drugs that are now defined as core drugs. The wide use of these drugs without the support of accurate DST methods will inevitably increase resistance rates. The WHO recently updated the guidelines for a shorter MDR-TB treatment regimen in which injectables are replaced with BDQ.12 In our study, only 24.8% of patients were eligible for the shorter regimen with strict inclusion criteria. The most common reason for exclusion was resistance to PZA or EMB. However, recent studies have suggested that initial resistance to companion drugs in the shorter regimen (e.g., PZA, EMB, or PTO) has no or minimal effect on treatment outcomes of MDR-TB patients if their isolates are susceptible to core drugs.2627 Also, the phenotypic DST for several drugs, such as EMB and PTO, may not be reliable and reproducible.12 Further investigations are needed to clarify these issues. Until clear evidence of inclusion for the shorter regimen is established, an individualized longer regimen based on DST results is a reasonable choice in the South Korean context. Primary drug resistance is an indicator of the effectiveness of a TB control program. It reflects suboptimal treatment and management of TB patients and the spread of the pathogen under inappropriate infection control. In our study, more than half of all MDR-TB patients were classified as new patients, and this proportion did not decrease over time. In addition to providing an effective treatment regimen for MDR-TB patients, a comprehensive TB control program that encompasses diagnosis, prevention, patient management, and infection control measures is needed to reduce the transmission of and victims of this difficult-to-treat pathogen. Our study had several limitations. First, our results were based on regional data derived from seven private hospitals and may not represent the general South Korean population. However, according to the 2019 nationwide notification data, 15.3% of all TB cases in South Korea were reported in our province, and 96.9% of all TB cases were reported from private sectors.28 Second, all hospitals that participated in the study were university-affiliated tertiary care hospitals, so referral or selection bias may have been involved. 
However, the majority of MDR-TB patients in South Korea are transferred to and treated at tertiary care hospitals. Third, critical concentrations for several drugs changed during the study period, and tests for LZD and higher concentrations of INH could not be performed during the entire study period, which may limit the interpretation of the results. In conclusion, the proportion of new patients and rates of additional drug resistance in MDR-TB patients in South Korea were high and did not decrease during a 10-year period. Effective treatment for MDR-TB patients in South Korea will require the widespread use of a rapid molecular DST for FQ and the introduction of reliable DST methods for core second-line anti-TB drugs, as well as analyses of nationwide DRS data collected using standardized methods.
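To make the eligibility rules and interim MIC breakpoints described above concrete, here is a minimal sketch in Python. The field names, the dictionary encoding of DST results, and the simplified resistance call are assumptions for illustration only; this is not the authors' analysis code, and only the breakpoints quoted in the text (0.25 mg/L for BDQ, 0.016 mg/L for DLM) are taken from the source.

```python
# Illustrative sketch (assumed field names, not the study's software).
# Interim critical concentrations cited in the text (mg/L).
BDQ_CC = 0.25
DLM_CC = 0.016

def mic_resistant(mic_mg_per_l: float, critical_concentration: float) -> bool:
    """Conventionally, call an isolate resistant when its MIC is above the critical concentration."""
    return mic_mg_per_l > critical_concentration

def eligible_longer_regimen(dst: dict) -> bool:
    """Longer regimen without modification: susceptible to all FQs and CS
    (BDQ, LZD, and CFZ susceptibility assumed, as in the study)."""
    return not dst.get("fq_resistant", False) and not dst.get("cs_resistant", False)

def eligible_shorter_regimen_strict(dst: dict) -> bool:
    """Strict shorter-regimen criterion: no resistance to any medicine in the
    regimen (FQs, PTO, EMB, PZA; BDQ and CFZ susceptibility assumed)."""
    return not any(dst.get(key, False) for key in
                   ("fq_resistant", "pto_resistant", "emb_resistant", "pza_resistant"))

# Example with made-up values (not patient data).
patient = {"fq_resistant": False, "cs_resistant": False,
           "pto_resistant": True, "emb_resistant": False, "pza_resistant": True}
print(eligible_longer_regimen(patient))          # True
print(eligible_shorter_regimen_strict(patient))  # False
print(mic_resistant(0.5, BDQ_CC))                # True: MIC above 0.25 mg/L
```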
Background: Drug-resistance surveillance (DRS) data provide key information for building an effective treatment regimen in patients with multidrug-resistant tuberculosis (MDR-TB). This study was conducted to investigate the patterns and trends of additional drug resistance in MDR-TB patients in South Korea. Methods: Phenotypic drug susceptibility test (DST) results of MDR-TB patients collected from seven hospitals in South Korea from 2010 to 2019 were retrospectively analyzed. Results: In total, 633 patients with MDR-TB were included in the analysis. Of all patients, 361 (57.0%) were new patients. All patients had additional resistance to a median of three anti-TB drugs. The resistance rates of any fluoroquinolone (FQ), linezolid, and cycloserine were 26.2%, 0.0%, and 6.3%, respectively. The proportions of new patients and resistance rates of most anti-TB drugs did not decrease during the study period. The number of additional resistant drugs was significantly higher in FQ-resistant MDR-TB than in FQ-susceptible MDR-TB (median of 9.0 vs. 2.0). Among 26 patients with results of minimum inhibitory concentrations for bedaquiline (BDQ) and delamanid (DLM), one (3.8%) and three (11.5%) patients were considered resistant to BDQ and DLM with interim critical concentrations, respectively. Based on the DST results, 72.4% and 24.8% of patients were eligible for the World Health Organization's longer and shorter MDR-TB treatment regimen, respectively. Conclusions: The proportions of new patients and rates of additional drug resistance in patients with MDR-TB were high and remain stable in South Korea. A nationwide analysis of DRS data is required to provide effective treatment for MDR-TB patients in South Korea.
null
null
9,209
345
[ 183, 391, 171, 122, 68, 266, 843, 161, 267 ]
13
[ "patients", "tb", "resistant", "mdr", "resistance", "drugs", "treatment", "mdr tb", "μg", "ml" ]
[ "tb drugs analyses", "tb patients resistance", "resistant tuberculosis susceptible", "multidrug resistant tb", "resistant tuberculosis mdr" ]
null
null
[CONTENT] Multidrug-resistant | Resistance | South Korea | Tuberculosis [SUMMARY]
[CONTENT] Multidrug-resistant | Resistance | South Korea | Tuberculosis [SUMMARY]
[CONTENT] Multidrug-resistant | Resistance | South Korea | Tuberculosis [SUMMARY]
null
[CONTENT] Multidrug-resistant | Resistance | South Korea | Tuberculosis [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antitubercular Agents | Child | Child, Preschool | Diarylquinolines | Drug Resistance, Multiple, Bacterial | Female | Fluoroquinolones | Humans | Infant | Infant, Newborn | Male | Microbial Sensitivity Tests | Middle Aged | Mycobacterium tuberculosis | Republic of Korea | Retrospective Studies | Tuberculosis | Tuberculosis, Multidrug-Resistant | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antitubercular Agents | Child | Child, Preschool | Diarylquinolines | Drug Resistance, Multiple, Bacterial | Female | Fluoroquinolones | Humans | Infant | Infant, Newborn | Male | Microbial Sensitivity Tests | Middle Aged | Mycobacterium tuberculosis | Republic of Korea | Retrospective Studies | Tuberculosis | Tuberculosis, Multidrug-Resistant | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antitubercular Agents | Child | Child, Preschool | Diarylquinolines | Drug Resistance, Multiple, Bacterial | Female | Fluoroquinolones | Humans | Infant | Infant, Newborn | Male | Microbial Sensitivity Tests | Middle Aged | Mycobacterium tuberculosis | Republic of Korea | Retrospective Studies | Tuberculosis | Tuberculosis, Multidrug-Resistant | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antitubercular Agents | Child | Child, Preschool | Diarylquinolines | Drug Resistance, Multiple, Bacterial | Female | Fluoroquinolones | Humans | Infant | Infant, Newborn | Male | Microbial Sensitivity Tests | Middle Aged | Mycobacterium tuberculosis | Republic of Korea | Retrospective Studies | Tuberculosis | Tuberculosis, Multidrug-Resistant | Young Adult [SUMMARY]
null
[CONTENT] tb drugs analyses | tb patients resistance | resistant tuberculosis susceptible | multidrug resistant tb | resistant tuberculosis mdr [SUMMARY]
[CONTENT] tb drugs analyses | tb patients resistance | resistant tuberculosis susceptible | multidrug resistant tb | resistant tuberculosis mdr [SUMMARY]
[CONTENT] tb drugs analyses | tb patients resistance | resistant tuberculosis susceptible | multidrug resistant tb | resistant tuberculosis mdr [SUMMARY]
null
[CONTENT] tb drugs analyses | tb patients resistance | resistant tuberculosis susceptible | multidrug resistant tb | resistant tuberculosis mdr [SUMMARY]
null
[CONTENT] patients | tb | resistant | mdr | resistance | drugs | treatment | mdr tb | μg | ml [SUMMARY]
[CONTENT] patients | tb | resistant | mdr | resistance | drugs | treatment | mdr tb | μg | ml [SUMMARY]
[CONTENT] patients | tb | resistant | mdr | resistance | drugs | treatment | mdr tb | μg | ml [SUMMARY]
null
[CONTENT] patients | tb | resistant | mdr | resistance | drugs | treatment | mdr tb | μg | ml [SUMMARY]
null
[CONTENT] tb | mdr | mdr tb | patients | patients mdr tb | patients mdr | rr tb | rr | dst | tb south korea [SUMMARY]
[CONTENT] ml | μg ml | μg | tb | concentrations | 40 μg ml | 40 μg | dst | test | patients [SUMMARY]
[CONTENT] resistant | patients | tb | concentration | fluoroquinolone | susceptible | resistance | drugs | fq | mdr [SUMMARY]
null
[CONTENT] patients | tb | resistant | ml | μg | μg ml | mdr | treatment | resistance | mdr tb [SUMMARY]
null
[CONTENT] DRS ||| MDR | South Korea [SUMMARY]
[CONTENT] DST | MDR | seven | South Korea | 2010 | 2019 [SUMMARY]
[CONTENT] 633 | MDR ||| 361 | 57.0% ||| three ||| 26.2% | 0.0% | 6.3% ||| ||| 9.0 | 2.0 ||| 26 | BDQ | DLM | 3.8% | three | 11.5% | BDQ | DLM ||| DST | 72.4% | 24.8% | the World Health Organization's [SUMMARY]
null
[CONTENT] DRS ||| MDR | South Korea ||| DST | MDR | seven | South Korea | 2010 | 2019 ||| ||| 633 | MDR ||| 361 | 57.0% ||| three ||| 26.2% | 0.0% | 6.3% ||| ||| 9.0 | 2.0 ||| 26 | BDQ | DLM | 3.8% | three | 11.5% | BDQ | DLM ||| DST | 72.4% | 24.8% | the World Health Organization's ||| MDR | South Korea ||| DRS | MDR | South Korea [SUMMARY]
null
Chronic Experimental Model of TNBS-Induced Colitis to Study Inflammatory Bowel Disease.
35563130
Inflammatory bowel disease (IBD) is a world healthcare problem. In order to evaluate the effect of new pharmacological approaches for IBD, we aim to develop and validate chronic trinitrobenzene sulfonic acid (TNBS)-induced colitis in mice.
BACKGROUND
Experimental colitis was induced by the rectal administration of multiple doses of TNBS in female CD-1 mice. The protocol was performed with six experimental groups, depending on the TNBS administration frequency, and two control groups (sham and ethanol groups).
METHODS
The survival rate was 73.3% in the first three weeks and, from week 4 until the end of the experimental protocol, the mice's survival remained unaltered at 70.9%. Fecal hemoglobin presented a progressive increase until week 4 (5.8 ± 0.3 µmol Hg/g feces, p < 0.0001) compared with the ethanol group, with no statistical differences to week 6. The highest level of tumor necrosis factor-α was observed on week 3; however, after week 4, a slight decrease in tumor necrosis factor-α concentration was verified, and the level was maintained until week 6 (71.3 ± 3.3 pg/mL and 72.7 ± 3.6 pg/mL, respectively).
RESULTS
These findings allowed the verification of a stable pattern of clinical and inflammation signs after week 4, suggesting that the chronic model of TNBS-induced colitis develops in 4 weeks.
CONCLUSIONS
[ "Animals", "Chronic Disease", "Colitis", "Colon", "Disease Models, Animal", "Ethanol", "Female", "Inflammatory Bowel Diseases", "Mice", "Trinitrobenzenesulfonic Acid", "Tumor Necrosis Factor-alpha" ]
9105049
1. Introduction
Inflammatory bowel disease (IBD) comprises Crohn’s disease (CD) and ulcerative colitis (UC) [1]. CD and UC are chronic and relapsing inflammatory conditions of the gastrointestinal tract that have distinct pathological and clinical characteristics. The evolution of IBD follows the advancement of society [2,3]. It is almost a global disease, affecting all ages, including the pediatric population [4]. Several treatments are currently available for the treatment of IBD, though none of them reverses the underlying pathogenic mechanism of this disease [5,6]. Animal models of IBD remain essential for a proper understanding of histopathological change in the gastrointestinal tract and play a key role in the development of new pharmacological approaches [7]. The trinitrobenzene sulfonic acid (TNBS)-induced model, in particular, is a commonly used model of IBD since it is capable of reproducing CD in humans [8,9,10]. Additionally, the TNBS-induced model can mimic the acute and chronic stages of IBD [11,12]. TNBS is a haptenizing agent that stimulates a delayed-type hypersensitivity immune response, driving colitis in susceptible mouse strains [13,14,15,16,17]. The chemical is dissolved in ethanol, enabling the interaction of TNBS with colon tissue proteins. The ethanol breaks the mucosal barrier, allowing TNBS to penetrate into the bowel wall [16]. TNBS administration induces transmural colitis that is driven by a Th1-mediated immune response [13,14,17] that is characterized by the infiltration of CD4 cells, neutrophils, and macrophages into the lamina propria and the secretion of cytokines [14] The most common cytokines are tumor necrosis factor (TNF)-α and interleukin (IL)-12. Since the protocols using the TNBS-induced colitis model are not standardized, a systematic review [18] developed by our research group concludes that the chronic TNBS-induced colitis model can be obtained with multiple TNBS administrations. In the literature, there is no consensus about the induction method, and several original articles have been published with different ways to induce a chronic model of TNBS-induced colitis using different doses, numbers of TNBS administrations, strains, genders, and ages of mice. By this point of view, the main objective of this study is to develop and validate a chronic TNBS-induced colitis in mice in order to evaluate the effect of new pharmacological approaches for IBD. The advantage of this chronic model compared to acute models is that the latter may provide only limited information about the pathogenesis of human IBDs, as the chemical injury to the epithelial barrier leads to self-limiting inflammation rather than chronic disease [19]. Our research group has previously developed preclinical studies in an acute TNBS-induced colitis model [20,21,22]. However, as cited by Bilsborough and colleagues (2021), since IBD is a chronic disease, the development of a standardized and validated induction method for chronic colitis is useful for studying new pharmacological approaches [23]. Our findings allow us to conclude that TNBS-induced chronic colitis should be developed in 4 weeks, providing a chronic intestinal inflammation model.
null
null
2. Results
2.1. Clinical Signs For six weeks, the mice were observed daily for body weight, stool consistency, and morbidity. Between weeks 2 and 4, the mice presented an alteration of intestinal motility that was characterized by soft stools and moderate morbidity. On the other hand, after week 4, mice presented an apparent recovery. No alterations were identified in the control groups. Concerning body weight, all groups demonstrated a very similar curve in the register of body weight during the experiment (Figure 1). Until week 4, the majority of the TNBS groups showed a progressive increase in body weight. After six administrations, the T6 group animals increased 14.2 ± 2.7% from their initial weight. At the end of the experimental period, the ethanol group gained 14.7 ± 2.3% of its initial weight. No significant differences were observed between the groups. In this experimental procedure, the mortality rate is important to regulate the TNBS dose since the TNBS should induce the disease without inducing a high mortality rate. In this way, the survival curve was recorded, and it demonstrates a decrease in the survival rate in the first three weeks to approximately 73.3% (Figure 2). After week 4, the mice’s survival maintained the same value until the end of the experimental protocol, approximately 70.9%.
2.2. Macroscopic Assessment of Colitis Macroscopically, the colons were observed and scored for gross morphological damage, according to Morris et al., 1989 [24]. The maximal damage in the colon was observed with two administrations of TNBS (T2 group), with a mean score of 2.7 ± 0.7 (p < 0.0001 compared with the ethanol group), corresponding to a linear ulcer with inflammation at one site. After week 2, the gross morphology damage decreased and stabilized from week 4. In the T4 group, the attributed score was 0.9 ± 0.1 (p < 0.0001 compared with the T2 group), corresponding only to localized hyperemia without ulcers. The ethanol group presented a score of 0, with no damage registered (Figure 3).
2.3. Biochemical Markers Fecal hemoglobin allows the evaluation of the intensity of the hemorrhagic focus (Figure 4). After week 2, the fecal hemoglobin progressively increased until week 6. A significant increase (p < 0.01) was observed when comparing the fecal hemoglobin of the T3 group (4.0 ± 0.2 µmol Hg/g feces) with the T4 group (5.8 ± 0.3 µmol Hg/g feces). However, when comparing the fecal hemoglobin of the T4 group (5.8 ± 0.3 µmol Hg/g feces) with the T6 group (7.5 ± 0.5 µmol Hg/g feces), no statistical differences were observed. Due to its essential role in intestinal homeostasis, the ALP concentration in the blood was evaluated (Figure 5). In general, the ALP levels observed in the TNBS groups were higher than those observed in the control groups, with a maximum ALP level in the T5 group of 58.5 ± 2.2 U/L (p < 0.0001 compared with the ethanol group). After week 3, the maintenance of ALP values can be observed, with 39.7 ± 2.4 U/L in the T4 group and 44.4 ± 2.3 U/L in the T6 group and no statistical significance.
2.4. Pro and Anti-Inflammatory Cytokine Levels This animal model showed significant production of TNF-α, a pro-inflammatory cytokine, after TNBS administration (Figure 6). All TNBS groups presented higher levels of TNF-α than the control groups, except on week 2, where only a slight increase was observed. The T2 group presented 51.4 ± 2.9 pg/mL of TNF-α, while the ethanol group presented 39.8 ± 1.5 pg/mL, without statistical significance. The highest level of TNF-α was observed on week 3 (87.3 ± 13.1 pg/mL). However, on week 4, the T4 group showed a slight decrease to 71.3 ± 3.3 pg/mL, which indicates maintenance of the values over time when compared with the T6 group (72.7 ± 3.6 pg/mL), without statistical significance being observed. Compared with the ethanol group, the T4 group presented statistically significant differences (p < 0.0001). IL-10 plays a central role in the mucosal immune system, inhibiting pro-inflammatory cytokine synthesis. IL-10 concentrations decreased in week 2. However, serum levels progressively increased until week 6 (Figure 7). Comparing the T3 group with the T6 group, the IL-10 concentration progressed from 44.6 ± 3.7 pg/mL to 68.2 ± 1.8 pg/mL, respectively (p < 0.0001). However, the T4 group (57.3 ± 3.8 pg/mL) had no significant differences from the T6 group (68.2 ± 1.8 pg/mL).
2.5. Histopathological Features The histopathological analysis allowed the evaluation of colonic injury based on inflammatory cell infiltration and tissue damage. The colons presented a severe infiltration of inflammatory cells, foci of ulceration with necrosis, and tissue disruption in the T1, T2, and T3 groups (Figure 8). The inflammatory infiltrates were mostly present in the mucosa and submucosa, but some animals presented transmural infiltrates with extension to the mesentery. The T2 group showed the most severe phenotype. In the T4, T5, and T6 groups, the level of inflammatory cell infiltration and areas of epithelial ulceration and tissue disruption decreased. The colitis severity of each group was calculated by summing individual lesions. The T1 and T4 groups presented the lowest histopathological score, showing inflammation without fibrosis or tissue loss. The T2 group exhibited the most severe score, with tissue loss, epithelial lesion, and inflammation (Figure 9).
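As a rough illustration of how the histopathological severity score reported above is tallied (five sub-scores of 0–4 summed to a maximum of 20, per the scoring scheme detailed in the Methods), the following sketch uses assumed parameter names and made-up sub-scores; it is not the scoring procedure or software used by the study's histopathologist.

```python
# Illustrative sketch (assumed field names, fabricated example values).
from typing import Dict

# Five scored parameters, each graded 0-4 (increasing severity), giving a maximum total of 20.
PARAMETERS = ("tissue_loss_necrosis", "epithelial_lesion", "inflammation",
              "extent_any_lesion", "extent_most_severe_lesion")

def colitis_score(sub_scores: Dict[str, int]) -> int:
    """Sum the five per-parameter sub-scores into a total colitis severity score."""
    for name in PARAMETERS:
        if not 0 <= sub_scores[name] <= 4:
            raise ValueError(f"sub-score out of range for {name}")
    return sum(sub_scores[name] for name in PARAMETERS)

# Example animal with made-up sub-scores (total = 11 of a possible 20).
print(colitis_score({"tissue_loss_necrosis": 2, "epithelial_lesion": 3,
                     "inflammation": 3, "extent_any_lesion": 2,
                     "extent_most_severe_lesion": 1}))
```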
5. Conclusions
A TNBS-induced colitis model was monitored for 6 weeks. A scheme of multiple TNBS administrations was performed since the aim was to achieve a chronic pattern of induced colitis and to identify the week in which the damage becomes chronic. Clinical manifestations of chronic colitis usually peak within 2 weeks and may be followed by partial recovery or death. These results were expected and are compatible with the correct induction of colitis [26]. Our findings allow us to conclude that TNBS-induced chronic colitis should be developed in 4 weeks, providing a chronic intestinal inflammation model. Indeed, the parameters under evaluation, such as clinical manifestations; biochemical markers, including fecal hemoglobin and pro- and anti-inflammatory cytokine levels; and macroscopic evaluation, corroborate that the chronic illness pattern is observed from week 4 after the induction. Some mice died during the early days of the study, possibly because they did not resist the aggravation of the disease in its acute phase. On the other hand, the remaining mice resisted and progressed to the chronic phase of the disease, showing the same symptoms but more lightly.
[ "2.1. Clinical Signs", "2.2. Macroscopic Assessment of Colitis", "2.3. Biochemical Markers", "2.4. Pro and Anti-Inflammatory Cytokine Levels", "2.5. Histopathological Features", "4. Materials and Methods", "4.1. Material", "4.2. Animals ", "4.3. Trinitrobenzene Sulfonic Acid-Induced Colitis", "4.4. Experimental Groups", "4.5. Monitoring of Clinical Signs", "4.6. Macroscopic Assessment of Colitis", "4.7. Biochemical Markers", "4.8. Measurement of Cytokines", "4.9. Histopathological Analysis", "4.10. Statistical Analysis " ]
[ "For six weeks, the mice were observed daily for body weight, stool consistency, and morbidity. Between weeks 2 and 4, the mice presented an alteration of intestinal motility that was characterized by soft stools and moderate morbidity. On the other hand, after week 4, mice presented an apparent recovery. No alterations were identified in the control groups.\nConcerning body weight, all groups demonstrated a very similar curve in the register of body weight during the experiment (Figure 1). Until week 4, the majority of the TNBS groups showed a progressive increase in body weight. After six administrations, the T6 group animals increased 14.2 ± 2.7% from their initial weight. At the end of the experimental period, the ethanol group gained 14.7 ± 2.3% of its initial weight. No significant differences were observed between the groups.\nIn this experimental procedure, the mortality rate is important to regulate the TNBS dose since the TNBS should induce the disease without inducing a high mortality rate. In this way, the survival curve was recorded, and it demonstrates a decrease in the survival rate in the first three weeks to approximately 73.3% (Figure 2). After week 4, the mice’s survival maintained the same value until the end of the experimental protocol, approximately 70.9%.", "Macroscopically, the colons were observed and scored for gross morphological damage, according to Morris et al., 1989 [24]. The maximal damage in the colon was observed with two administrations of TNBS (T2 group), with a mean score of 2.7 ± 0.7 (p < 0.0001 compared with the ethanol group), corresponding to a linear ulcer with inflammation at one site. After week 2, the gross morphology damage decreased and stabilized from week 4. In the T4 group, the attributed score was 0.9 ± 0.1 (p < 0.0001 compared with the T2 group), corresponding only to localized hyperemia without ulcers. The ethanol group presented a score of 0, with no damage registered (Figure 3).", "Fecal hemoglobin allows the evaluation of the intensity of the hemorrhagic focus (Figure 4). After week 2, the fecal hemoglobin progressively increased until week 6. A significant increase (p < 0.01) was observed when comparing the fecal hemoglobin of the T3 group (4.0 ± 0.2 µmol Hg/g feces) with the T4 group (5.8 ± 0.3 µmol Hg/g feces). However, when comparing the fecal hemoglobin of the T4 group (5.8 ± 0.3 µmol Hg/g feces) with the T6 group (7.5 ± 0.5 µmol Hg/g feces), no statistical differences were observed.\nDue to its essential role in intestinal homeostasis, The ALP concentration in the blood was evaluated (Figure 5). In general, the ALP levels observed in the TNBS groups were higher than those observed in the control groups, with a maximum ALP level in the T5 group of 58.5 ± 2.2 U/L (p < 0.0001 compared with the ethanol group). After week 3, the maintenance of ALP values can be observed, with 39.7 ± 2.4 U/L in the T4 group and 44.4 ± 2.3 U/L in the T6 group and no statistical significance. ", "This animal model showed significant production of TNF-α, a pro-inflammatory cytokine, after TNBS administration (Figure 6). All TNBS groups presented higher levels of TNF-α than the control groups, except on week 2, where only a slight increase was observed. The T2 group presented 51.4 ± 2.9 pg/mL of TNF-α, while the ethanol group presented 39.8 ± 1.5 pg/mL, without statistical significance. The highest level of TNF-α was observed on week 3 (87.3 ± 13.1 pg/mL). 
However, on week 4, the T4 group verified a slight decrease to 71.3 ± 3.3 pg/mL, which indicates maintenance of the values over time when compared with the T6 group (72.7 ± 3.6 pg/mL), without statistical significance being observed. Compared with the ethanol group, the T4 group presented statistically significant differences (p < 0.0001).\nIL-10 plays a central role in the mucosal immune system, inhibiting pro-inflammatory cytokine synthesis. IL-10 concentrations decreased in week 2. However, serum levels progressively increased until week 6 (Figure 7). Comparing the T3 group with the T6 group, the IL-10 concentration progressed from 44.6 ± 3.7 pg/mL to 68.2 ± 1.8 pg/mL, respectively (p < 0.0001). However, the T4 group (57.3 ± 3.8 pg/mL) had no significant differences from the T6 group (68.2 ± 1.8 pg/mL).", "The histopathological analysis allowed the evaluation of colonic injury based on inflammatory cell infiltration and tissue damage.\nThe colons presented a severe infiltration of inflammatory cells, foci of ulceration with necrosis, and tissue disruption in the T1, T2, and T3 groups (Figure 8). The inflammatory infiltrates were mostly present in the mucosa and submucosa, but some animals presented transmural infiltrates with extension to the mesentery. The T2 group showed the most severe phenotype. In the T4, T5, and T6 groups, the level of inflammatory cell infiltration and areas of epithelial ulceration and tissue disruption decreased.\nThe colitis severity of each group was calculated by summing individual lesions. The T1 and T4 groups presented the lowest histopathological score, showing inflammation without fibrosis or tissue loss. The T2 group exhibited the most severe score, with tissue loss, epithelial lesion, and inflammation (Figure 9).", "4.1. Material 2,4,6-Trinitrobenzene sulfonic acid (TNBS 5%) was purchased from Sigma-Aldrich Chemical. Ketamine (Ketamidor® 100 mg/mL, Lisbon, Portugal) was purchase from Richter Pharma. Xylazine (Sedoxylan® 20 mg/mL, Lisbon, Portugal) was purchased from Dechra. An ADVIA® kit was purchased from Siemens Healthcare Diagnostics (Madrid, Spain). An ELISA assay kit for TNF-α measurement was obtained from Hycult Biotechnology.\n2,4,6-Trinitrobenzene sulfonic acid (TNBS 5%) was purchased from Sigma-Aldrich Chemical. Ketamine (Ketamidor® 100 mg/mL, Lisbon, Portugal) was purchase from Richter Pharma. Xylazine (Sedoxylan® 20 mg/mL, Lisbon, Portugal) was purchased from Dechra. An ADVIA® kit was purchased from Siemens Healthcare Diagnostics (Madrid, Spain). An ELISA assay kit for TNF-α measurement was obtained from Hycult Biotechnology.\n4.2. Animals Female CD-1 mice, weighing 20–30 g and aged 6–10 weeks, were obtained from the Institute of Hygiene and Tropical Medicine. The animals were housed in standard polypropylene cages with ad libitum access to food and water in the bioterium of the Faculty of Pharmacy of the University of Lisbon. The mice were kept at 18–23 °C and 40–60% humidity in a controlled 12 h light/dark cycle. All animal care and experimental procedures were performed in accordance with the internationally accepted principles for laboratory animal use and care, Directive 2010/63/EU, transposed to the Portuguese legislation by Directive Law 113/2013. The experiment was approved by the institutional animal ethics committee (ORBEA) of the Faculty of Pharmacy of the University of Lisbon, 3/2020.\nFemale CD-1 mice, weighing 20–30 g and aged 6–10 weeks, were obtained from the Institute of Hygiene and Tropical Medicine. 
The animals were housed in standard polypropylene cages with ad libitum access to food and water in the bioterium of the Faculty of Pharmacy of the University of Lisbon. The mice were kept at 18–23 °C and 40–60% humidity in a controlled 12 h light/dark cycle. All animal care and experimental procedures were performed in accordance with the internationally accepted principles for laboratory animal use and care, Directive 2010/63/EU, transposed to the Portuguese legislation by Directive Law 113/2013. The experiment was approved by the institutional animal ethics committee (ORBEA) of the Faculty of Pharmacy of the University of Lisbon, 3/2020.\n4.3. Trinitrobenzene Sulfonic Acid-Induced Colitis The mice were left unfed for 24 h before the induction day. On the induction day (day 0), the mice were anesthetized with an intraperitoneal (IP) injection of ketamine 100 mg/Kg + xylazine 10 mg/Kg (40 µL/mice), and a catheter was carefully inserted into the colon until the tip was 4 cm proximal to the anus. Then, 100 µL of 1% TNBS in 50% ethanol was administered, and the mice were kept in a Trendelenburg position for 1 min. This procedure was repeated weekly, for a period of 5 weeks. On weeks 1, 2, 3, 4, 5, and 6, depending on the experimental TNBS group, the mice were anesthetized, and blood samples were collected by cardiac puncture. The mice were sacrificed by cervical dislocation. The necropsy was initiated with a midline incision into the abdomen. The colon was separated from the surrounding tissues and removed.\nThe mice were left unfed for 24 h before the induction day. On the induction day (day 0), the mice were anesthetized with an intraperitoneal (IP) injection of ketamine 100 mg/Kg + xylazine 10 mg/Kg (40 µL/mice), and a catheter was carefully inserted into the colon until the tip was 4 cm proximal to the anus. Then, 100 µL of 1% TNBS in 50% ethanol was administered, and the mice were kept in a Trendelenburg position for 1 min. This procedure was repeated weekly, for a period of 5 weeks. On weeks 1, 2, 3, 4, 5, and 6, depending on the experimental TNBS group, the mice were anesthetized, and blood samples were collected by cardiac puncture. The mice were sacrificed by cervical dislocation. The necropsy was initiated with a midline incision into the abdomen. The colon was separated from the surrounding tissues and removed.\n4.4. Experimental Groups The mice were categorized into eight groups: six TNBS groups, depending on the number of TNBS administrations, and two control groups. The TNBS groups were: the T1 group (n = 15) that received one TNBS administration at week 0; the T2 group (n = 15) that received two TNBS administrations at weeks 0 and 1; the T3 group (n = 15) that received three TNBS administrations at weeks 0, 1, and 2; the T4 group (n = 15) that received four TNBS administrations at weeks 0, 1, 2, and 3; the T5 group (n = 15) that received five TNBS administrations at weeks 0, 1, 2, 3, and 4; and the T6 group (n = 15) that received six TNBS administrations at weeks 0, 1, 2, 3, 4, and 5. The control groups were the ethanol (E) and sham (S) groups; the E group (n = 15) received 100 µL of 50% ethanol (TNBS vehicle) and the S group (n = 15) received 100 µL of saline solution. \nThe mice were categorized into eight groups: six TNBS groups, depending on the number of TNBS administrations, and two control groups. 
The TNBS groups were: the T1 group (n = 15) that received one TNBS administration at week 0; the T2 group (n = 15) that received two TNBS administrations at weeks 0 and 1; the T3 group (n = 15) that received three TNBS administrations at weeks 0, 1, and 2; the T4 group (n = 15) that received four TNBS administrations at weeks 0, 1, 2, and 3; the T5 group (n = 15) that received five TNBS administrations at weeks 0, 1, 2, 3, and 4; and the T6 group (n = 15) that received six TNBS administrations at weeks 0, 1, 2, 3, 4, and 5. The control groups were the ethanol (E) and sham (S) groups; the E group (n = 15) received 100 µL of 50% ethanol (TNBS vehicle) and the S group (n = 15) received 100 µL of saline solution. \n4.5. Monitoring of Clinical Signs After induction, the animals were observed daily for the monitoring of body weight, stool consistency, morbidity, and mortality.\nAfter induction, the animals were observed daily for the monitoring of body weight, stool consistency, morbidity, and mortality.\n4.6. Macroscopic Assessment of Colitis A macroscopic assessment of TNBS-induced colitis was performed by using the criteria for the scoring of gross morphologic damage, as previously described by Morris et al., 1989 [24]. The score was attributed based on the presence of hyperemia, ulcers, and inflammation and the number of sites with ulceration and/or inflammation and its extension.\nA macroscopic assessment of TNBS-induced colitis was performed by using the criteria for the scoring of gross morphologic damage, as previously described by Morris et al., 1989 [24]. The score was attributed based on the presence of hyperemia, ulcers, and inflammation and the number of sites with ulceration and/or inflammation and its extension.\n4.7. Biochemical Markers Serum from the collected blood samples was separated by centrifugation at 3600 rpm for 15 min. A serum analysis was conducted in order to evaluate alkaline phosphatase (ALP) using an automated clinical chemistry analyzer (ADVIA® 1200, Madrid, Spain). Fecal hemoglobin was evaluated using a quantitative method by immunoturbidimetry (Krom Systems).\nSerum from the collected blood samples was separated by centrifugation at 3600 rpm for 15 min. A serum analysis was conducted in order to evaluate alkaline phosphatase (ALP) using an automated clinical chemistry analyzer (ADVIA® 1200, Madrid, Spain). Fecal hemoglobin was evaluated using a quantitative method by immunoturbidimetry (Krom Systems).\n4.8. Measurement of Cytokines The pro-inflammatory cytokine TNF-α and the anti-inflammatory cytokine IL-10 were measured and expressed in pg/mL. Colonic tissue samples from each animal were weighed and homogenized in phosphate buffer (Ultra-turrax T25, 13,500 rev/min, twice for 30 s). Afterward, samples were centrifuged (15,000 rpm for 15 min at 4 °C). The aliquots of the supernatant were conserved at −20 °C until use. A spectrophotometric measurement of the cytokine level was performed at 450 nm (ELISA kit Quantikine, Hycult Biotechnology, Abingdon, UK).\nThe pro-inflammatory cytokine TNF-α and the anti-inflammatory cytokine IL-10 were measured and expressed in pg/mL. Colonic tissue samples from each animal were weighed and homogenized in phosphate buffer (Ultra-turrax T25, 13,500 rev/min, twice for 30 s). Afterward, samples were centrifuged (15,000 rpm for 15 min at 4 °C). The aliquots of the supernatant were conserved at −20 °C until use. 
A spectrophotometric measurement of the cytokine level was performed at 450 nm (ELISA kit Quantikine, Hycult Biotechnology, Abingdon, UK).\n4.9. Histopathological Analysis The histopathology was carried out by an independent histopathologist from the Gulbenkian Institute of Science who was blinded to the groups. Colon samples were fixed in 10% phosphate-buffered formalin, processed routinely for paraffin embedding, sectioned at 5 µm, and stained with hematoxylin and eosin. To increase the possibility of detecting fibrosis, Masson’s trichrome staining was used. Sections of the distal colon were evaluated based on adapted criteria of Seamons and colleagues (2013) [47]. The histopathological score of lesions was partially scored (0–4 increasing in severity) with some parameters, namely: (1) the presence of tissue loss/necrosis; (2) the severity of the mucosal epithelial lesion; (3) inflammation; (4) extent 1—the percentage of the intestine affected in any manner; and (5) extent 2—the percentage of intestine affected by the most severe lesion. The colitis severity was calculated by summing the individual lesions and the extent scores, promoting a final colitis score (max score = 20).\nThe histopathology was carried out by an independent histopathologist from the Gulbenkian Institute of Science who was blinded to the groups. Colon samples were fixed in 10% phosphate-buffered formalin, processed routinely for paraffin embedding, sectioned at 5 µm, and stained with hematoxylin and eosin. To increase the possibility of detecting fibrosis, Masson’s trichrome staining was used. Sections of the distal colon were evaluated based on adapted criteria of Seamons and colleagues (2013) [47]. The histopathological score of lesions was partially scored (0–4 increasing in severity) with some parameters, namely: (1) the presence of tissue loss/necrosis; (2) the severity of the mucosal epithelial lesion; (3) inflammation; (4) extent 1—the percentage of the intestine affected in any manner; and (5) extent 2—the percentage of intestine affected by the most severe lesion. The colitis severity was calculated by summing the individual lesions and the extent scores, promoting a final colitis score (max score = 20).\n4.10. Statistical Analysis The results were expressed as the mean ± SEM of N observations, where N represents the number of animals analyzed. Data analysis was performed using SPSS software (version 26.0). The results were analyzed by a one-way ANOVA to determine the statistical significance between the TNBS and control groups. For multiple comparisons, Tukey’s post hoc test was used. A p-value of less than 0.05 was considered significant.\nThe results were expressed as the mean ± SEM of N observations, where N represents the number of animals analyzed. Data analysis was performed using SPSS software (version 26.0). The results were analyzed by a one-way ANOVA to determine the statistical significance between the TNBS and control groups. For multiple comparisons, Tukey’s post hoc test was used. A p-value of less than 0.05 was considered significant.", "2,4,6-Trinitrobenzene sulfonic acid (TNBS 5%) was purchased from Sigma-Aldrich Chemical. Ketamine (Ketamidor® 100 mg/mL, Lisbon, Portugal) was purchase from Richter Pharma. Xylazine (Sedoxylan® 20 mg/mL, Lisbon, Portugal) was purchased from Dechra. An ADVIA® kit was purchased from Siemens Healthcare Diagnostics (Madrid, Spain). 
An ELISA assay kit for TNF-α measurement was obtained from Hycult Biotechnology.", "Female CD-1 mice, weighing 20–30 g and aged 6–10 weeks, were obtained from the Institute of Hygiene and Tropical Medicine. The animals were housed in standard polypropylene cages with ad libitum access to food and water in the bioterium of the Faculty of Pharmacy of the University of Lisbon. The mice were kept at 18–23 °C and 40–60% humidity in a controlled 12 h light/dark cycle. All animal care and experimental procedures were performed in accordance with the internationally accepted principles for laboratory animal use and care, Directive 2010/63/EU, transposed to the Portuguese legislation by Directive Law 113/2013. The experiment was approved by the institutional animal ethics committee (ORBEA) of the Faculty of Pharmacy of the University of Lisbon, 3/2020.", "The mice were left unfed for 24 h before the induction day. On the induction day (day 0), the mice were anesthetized with an intraperitoneal (IP) injection of ketamine 100 mg/Kg + xylazine 10 mg/Kg (40 µL/mice), and a catheter was carefully inserted into the colon until the tip was 4 cm proximal to the anus. Then, 100 µL of 1% TNBS in 50% ethanol was administered, and the mice were kept in a Trendelenburg position for 1 min. This procedure was repeated weekly, for a period of 5 weeks. On weeks 1, 2, 3, 4, 5, and 6, depending on the experimental TNBS group, the mice were anesthetized, and blood samples were collected by cardiac puncture. The mice were sacrificed by cervical dislocation. The necropsy was initiated with a midline incision into the abdomen. The colon was separated from the surrounding tissues and removed.", "The mice were categorized into eight groups: six TNBS groups, depending on the number of TNBS administrations, and two control groups. The TNBS groups were: the T1 group (n = 15) that received one TNBS administration at week 0; the T2 group (n = 15) that received two TNBS administrations at weeks 0 and 1; the T3 group (n = 15) that received three TNBS administrations at weeks 0, 1, and 2; the T4 group (n = 15) that received four TNBS administrations at weeks 0, 1, 2, and 3; the T5 group (n = 15) that received five TNBS administrations at weeks 0, 1, 2, 3, and 4; and the T6 group (n = 15) that received six TNBS administrations at weeks 0, 1, 2, 3, 4, and 5. The control groups were the ethanol (E) and sham (S) groups; the E group (n = 15) received 100 µL of 50% ethanol (TNBS vehicle) and the S group (n = 15) received 100 µL of saline solution. ", "After induction, the animals were observed daily for the monitoring of body weight, stool consistency, morbidity, and mortality.", "A macroscopic assessment of TNBS-induced colitis was performed by using the criteria for the scoring of gross morphologic damage, as previously described by Morris et al., 1989 [24]. The score was attributed based on the presence of hyperemia, ulcers, and inflammation and the number of sites with ulceration and/or inflammation and its extension.", "Serum from the collected blood samples was separated by centrifugation at 3600 rpm for 15 min. A serum analysis was conducted in order to evaluate alkaline phosphatase (ALP) using an automated clinical chemistry analyzer (ADVIA® 1200, Madrid, Spain). Fecal hemoglobin was evaluated using a quantitative method by immunoturbidimetry (Krom Systems).", "The pro-inflammatory cytokine TNF-α and the anti-inflammatory cytokine IL-10 were measured and expressed in pg/mL. 
Colonic tissue samples from each animal were weighed and homogenized in phosphate buffer (Ultra-turrax T25, 13,500 rev/min, twice for 30 s). Afterward, samples were centrifuged (15,000 rpm for 15 min at 4 °C). The aliquots of the supernatant were conserved at −20 °C until use. A spectrophotometric measurement of the cytokine level was performed at 450 nm (ELISA kit Quantikine, Hycult Biotechnology, Abingdon, UK).", "The histopathology was carried out by an independent histopathologist from the Gulbenkian Institute of Science who was blinded to the groups. Colon samples were fixed in 10% phosphate-buffered formalin, processed routinely for paraffin embedding, sectioned at 5 µm, and stained with hematoxylin and eosin. To increase the possibility of detecting fibrosis, Masson’s trichrome staining was used. Sections of the distal colon were evaluated based on adapted criteria of Seamons and colleagues (2013) [47]. The histopathological score of lesions was partially scored (0–4 increasing in severity) with some parameters, namely: (1) the presence of tissue loss/necrosis; (2) the severity of the mucosal epithelial lesion; (3) inflammation; (4) extent 1—the percentage of the intestine affected in any manner; and (5) extent 2—the percentage of intestine affected by the most severe lesion. The colitis severity was calculated by summing the individual lesions and the extent scores, promoting a final colitis score (max score = 20).", "The results were expressed as the mean ± SEM of N observations, where N represents the number of animals analyzed. Data analysis was performed using SPSS software (version 26.0). The results were analyzed by a one-way ANOVA to determine the statistical significance between the TNBS and control groups. For multiple comparisons, Tukey’s post hoc test was used. A p-value of less than 0.05 was considered significant." ]
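The statistical workflow described above (one-way ANOVA between the TNBS and control groups, followed by Tukey's post hoc test with significance at p < 0.05) was run in SPSS; the sketch below reproduces the same style of comparison in Python. The group names and all numeric values are fabricated placeholders, not study data.

```python
# Illustrative sketch: one-way ANOVA followed by Tukey's HSD, alpha = 0.05.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
data = {
    "ethanol": rng.normal(40, 5, 15),  # placeholder TNF-alpha values (pg/mL), n = 15 per group
    "T3": rng.normal(87, 13, 15),
    "T4": rng.normal(71, 4, 15),
    "T6": rng.normal(73, 4, 15),
}

# One-way ANOVA across all groups.
f_stat, p_value = f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for pairwise comparisons, reported when the ANOVA is significant.
values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```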
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Results", "2.1. Clinical Signs", "2.2. Macroscopic Assessment of Colitis", "2.3. Biochemical Markers", "2.4. Pro and Anti-Inflammatory Cytokine Levels", "2.5. Histopathological Features", "3. Discussion", "4. Materials and Methods", "4.1. Material", "4.2. Animals ", "4.3. Trinitrobenzene Sulfonic Acid-Induced Colitis", "4.4. Experimental Groups", "4.5. Monitoring of Clinical Signs", "4.6. Macroscopic Assessment of Colitis", "4.7. Biochemical Markers", "4.8. Measurement of Cytokines", "4.9. Histopathological Analysis", "4.10. Statistical Analysis ", "5. Conclusions" ]
[ "Inflammatory bowel disease (IBD) comprises Crohn’s disease (CD) and ulcerative colitis (UC) [1]. CD and UC are chronic and relapsing inflammatory conditions of the gastrointestinal tract that have distinct pathological and clinical characteristics. The evolution of IBD follows the advancement of society [2,3]. It is almost a global disease, affecting all ages, including the pediatric population [4]. Several treatments are currently available for the treatment of IBD, though none of them reverses the underlying pathogenic mechanism of this disease [5,6].\nAnimal models of IBD remain essential for a proper understanding of histopathological change in the gastrointestinal tract and play a key role in the development of new pharmacological approaches [7]. The trinitrobenzene sulfonic acid (TNBS)-induced model, in particular, is a commonly used model of IBD since it is capable of reproducing CD in humans [8,9,10]. Additionally, the TNBS-induced model can mimic the acute and chronic stages of IBD [11,12]. TNBS is a haptenizing agent that stimulates a delayed-type hypersensitivity immune response, driving colitis in susceptible mouse strains [13,14,15,16,17]. The chemical is dissolved in ethanol, enabling the interaction of TNBS with colon tissue proteins. The ethanol breaks the mucosal barrier, allowing TNBS to penetrate into the bowel wall [16]. TNBS administration induces transmural colitis that is driven by a Th1-mediated immune response [13,14,17] that is characterized by the infiltration of CD4 cells, neutrophils, and macrophages into the lamina propria and the secretion of cytokines [14] The most common cytokines are tumor necrosis factor (TNF)-α and interleukin (IL)-12. Since the protocols using the TNBS-induced colitis model are not standardized, a systematic review [18] developed by our research group concludes that the chronic TNBS-induced colitis model can be obtained with multiple TNBS administrations. \nIn the literature, there is no consensus about the induction method, and several original articles have been published with different ways to induce a chronic model of TNBS-induced colitis using different doses, numbers of TNBS administrations, strains, genders, and ages of mice. By this point of view, the main objective of this study is to develop and validate a chronic TNBS-induced colitis in mice in order to evaluate the effect of new pharmacological approaches for IBD. The advantage of this chronic model compared to acute models is that the latter may provide only limited information about the pathogenesis of human IBDs, as the chemical injury to the epithelial barrier leads to self-limiting inflammation rather than chronic disease [19]. Our research group has previously developed preclinical studies in an acute TNBS-induced colitis model [20,21,22]. However, as cited by Bilsborough and colleagues (2021), since IBD is a chronic disease, the development of a standardized and validated induction method for chronic colitis is useful for studying new pharmacological approaches [23]. Our findings allow us to conclude that TNBS-induced chronic colitis should be developed in 4 weeks, providing a chronic intestinal inflammation model.", "2.1. Clinical Signs For six weeks, the mice were observed daily for body weight, stool consistency, and morbidity. Between weeks 2 and 4, the mice presented an alteration of intestinal motility that was characterized by soft stools and moderate morbidity. On the other hand, after week 4, mice presented an apparent recovery. 
No alterations were identified in the control groups.\nConcerning body weight, all groups demonstrated a very similar curve in the register of body weight during the experiment (Figure 1). Until week 4, the majority of the TNBS groups showed a progressive increase in body weight. After six administrations, the T6 group animals increased 14.2 ± 2.7% from their initial weight. At the end of the experimental period, the ethanol group gained 14.7 ± 2.3% of its initial weight. No significant differences were observed between the groups.\nIn this experimental procedure, the mortality rate is important to regulate the TNBS dose since the TNBS should induce the disease without inducing a high mortality rate. In this way, the survival curve was recorded, and it demonstrates a decrease in the survival rate in the first three weeks to approximately 73.3% (Figure 2). After week 4, the mice’s survival maintained the same value until the end of the experimental protocol, approximately 70.9%.\nFor six weeks, the mice were observed daily for body weight, stool consistency, and morbidity. Between weeks 2 and 4, the mice presented an alteration of intestinal motility that was characterized by soft stools and moderate morbidity. On the other hand, after week 4, mice presented an apparent recovery. No alterations were identified in the control groups.\nConcerning body weight, all groups demonstrated a very similar curve in the register of body weight during the experiment (Figure 1). Until week 4, the majority of the TNBS groups showed a progressive increase in body weight. After six administrations, the T6 group animals increased 14.2 ± 2.7% from their initial weight. At the end of the experimental period, the ethanol group gained 14.7 ± 2.3% of its initial weight. No significant differences were observed between the groups.\nIn this experimental procedure, the mortality rate is important to regulate the TNBS dose since the TNBS should induce the disease without inducing a high mortality rate. In this way, the survival curve was recorded, and it demonstrates a decrease in the survival rate in the first three weeks to approximately 73.3% (Figure 2). After week 4, the mice’s survival maintained the same value until the end of the experimental protocol, approximately 70.9%.\n2.2. Macroscopic Assessment of Colitis Macroscopically, the colons were observed and scored for gross morphological damage, according to Morris et al., 1989 [24]. The maximal damage in the colon was observed with two administrations of TNBS (T2 group), with a mean score of 2.7 ± 0.7 (p < 0.0001 compared with the ethanol group), corresponding to a linear ulcer with inflammation at one site. After week 2, the gross morphology damage decreased and stabilized from week 4. In the T4 group, the attributed score was 0.9 ± 0.1 (p < 0.0001 compared with the T2 group), corresponding only to localized hyperemia without ulcers. The ethanol group presented a score of 0, with no damage registered (Figure 3).\nMacroscopically, the colons were observed and scored for gross morphological damage, according to Morris et al., 1989 [24]. The maximal damage in the colon was observed with two administrations of TNBS (T2 group), with a mean score of 2.7 ± 0.7 (p < 0.0001 compared with the ethanol group), corresponding to a linear ulcer with inflammation at one site. After week 2, the gross morphology damage decreased and stabilized from week 4. 
In the T4 group, the attributed score was 0.9 ± 0.1 (p < 0.0001 compared with the T2 group), corresponding only to localized hyperemia without ulcers. The ethanol group presented a score of 0, with no damage registered (Figure 3).\n2.3. Biochemical Markers Fecal hemoglobin allows the evaluation of the intensity of the hemorrhagic focus (Figure 4). After week 2, the fecal hemoglobin progressively increased until week 6. A significant increase (p < 0.01) was observed when comparing the fecal hemoglobin of the T3 group (4.0 ± 0.2 µmol Hg/g feces) with the T4 group (5.8 ± 0.3 µmol Hg/g feces). However, when comparing the fecal hemoglobin of the T4 group (5.8 ± 0.3 µmol Hg/g feces) with the T6 group (7.5 ± 0.5 µmol Hg/g feces), no statistical differences were observed.\nDue to its essential role in intestinal homeostasis, The ALP concentration in the blood was evaluated (Figure 5). In general, the ALP levels observed in the TNBS groups were higher than those observed in the control groups, with a maximum ALP level in the T5 group of 58.5 ± 2.2 U/L (p < 0.0001 compared with the ethanol group). After week 3, the maintenance of ALP values can be observed, with 39.7 ± 2.4 U/L in the T4 group and 44.4 ± 2.3 U/L in the T6 group and no statistical significance. \nFecal hemoglobin allows the evaluation of the intensity of the hemorrhagic focus (Figure 4). After week 2, the fecal hemoglobin progressively increased until week 6. A significant increase (p < 0.01) was observed when comparing the fecal hemoglobin of the T3 group (4.0 ± 0.2 µmol Hg/g feces) with the T4 group (5.8 ± 0.3 µmol Hg/g feces). However, when comparing the fecal hemoglobin of the T4 group (5.8 ± 0.3 µmol Hg/g feces) with the T6 group (7.5 ± 0.5 µmol Hg/g feces), no statistical differences were observed.\nDue to its essential role in intestinal homeostasis, The ALP concentration in the blood was evaluated (Figure 5). In general, the ALP levels observed in the TNBS groups were higher than those observed in the control groups, with a maximum ALP level in the T5 group of 58.5 ± 2.2 U/L (p < 0.0001 compared with the ethanol group). After week 3, the maintenance of ALP values can be observed, with 39.7 ± 2.4 U/L in the T4 group and 44.4 ± 2.3 U/L in the T6 group and no statistical significance. \n2.4. Pro and Anti-Inflammatory Cytokine Levels This animal model showed significant production of TNF-α, a pro-inflammatory cytokine, after TNBS administration (Figure 6). All TNBS groups presented higher levels of TNF-α than the control groups, except on week 2, where only a slight increase was observed. The T2 group presented 51.4 ± 2.9 pg/mL of TNF-α, while the ethanol group presented 39.8 ± 1.5 pg/mL, without statistical significance. The highest level of TNF-α was observed on week 3 (87.3 ± 13.1 pg/mL). However, on week 4, the T4 group verified a slight decrease to 71.3 ± 3.3 pg/mL, which indicates maintenance of the values over time when compared with the T6 group (72.7 ± 3.6 pg/mL), without statistical significance being observed. Compared with the ethanol group, the T4 group presented statistically significant differences (p < 0.0001).\nIL-10 plays a central role in the mucosal immune system, inhibiting pro-inflammatory cytokine synthesis. IL-10 concentrations decreased in week 2. However, serum levels progressively increased until week 6 (Figure 7). Comparing the T3 group with the T6 group, the IL-10 concentration progressed from 44.6 ± 3.7 pg/mL to 68.2 ± 1.8 pg/mL, respectively (p < 0.0001). 
However, the T4 group (57.3 ± 3.8 pg/mL) had no significant differences from the T6 group (68.2 ± 1.8 pg/mL).\nThis animal model showed significant production of TNF-α, a pro-inflammatory cytokine, after TNBS administration (Figure 6). All TNBS groups presented higher levels of TNF-α than the control groups, except on week 2, where only a slight increase was observed. The T2 group presented 51.4 ± 2.9 pg/mL of TNF-α, while the ethanol group presented 39.8 ± 1.5 pg/mL, without statistical significance. The highest level of TNF-α was observed on week 3 (87.3 ± 13.1 pg/mL). However, on week 4, the T4 group verified a slight decrease to 71.3 ± 3.3 pg/mL, which indicates maintenance of the values over time when compared with the T6 group (72.7 ± 3.6 pg/mL), without statistical significance being observed. Compared with the ethanol group, the T4 group presented statistically significant differences (p < 0.0001).\nIL-10 plays a central role in the mucosal immune system, inhibiting pro-inflammatory cytokine synthesis. IL-10 concentrations decreased in week 2. However, serum levels progressively increased until week 6 (Figure 7). Comparing the T3 group with the T6 group, the IL-10 concentration progressed from 44.6 ± 3.7 pg/mL to 68.2 ± 1.8 pg/mL, respectively (p < 0.0001). However, the T4 group (57.3 ± 3.8 pg/mL) had no significant differences from the T6 group (68.2 ± 1.8 pg/mL).\n2.5. Histopathological Features The histopathological analysis allowed the evaluation of colonic injury based on inflammatory cell infiltration and tissue damage.\nThe colons presented a severe infiltration of inflammatory cells, foci of ulceration with necrosis, and tissue disruption in the T1, T2, and T3 groups (Figure 8). The inflammatory infiltrates were mostly present in the mucosa and submucosa, but some animals presented transmural infiltrates with extension to the mesentery. The T2 group showed the most severe phenotype. In the T4, T5, and T6 groups, the level of inflammatory cell infiltration and areas of epithelial ulceration and tissue disruption decreased.\nThe colitis severity of each group was calculated by summing individual lesions. The T1 and T4 groups presented the lowest histopathological score, showing inflammation without fibrosis or tissue loss. The T2 group exhibited the most severe score, with tissue loss, epithelial lesion, and inflammation (Figure 9).\nThe histopathological analysis allowed the evaluation of colonic injury based on inflammatory cell infiltration and tissue damage.\nThe colons presented a severe infiltration of inflammatory cells, foci of ulceration with necrosis, and tissue disruption in the T1, T2, and T3 groups (Figure 8). The inflammatory infiltrates were mostly present in the mucosa and submucosa, but some animals presented transmural infiltrates with extension to the mesentery. The T2 group showed the most severe phenotype. In the T4, T5, and T6 groups, the level of inflammatory cell infiltration and areas of epithelial ulceration and tissue disruption decreased.\nThe colitis severity of each group was calculated by summing individual lesions. The T1 and T4 groups presented the lowest histopathological score, showing inflammation without fibrosis or tissue loss. The T2 group exhibited the most severe score, with tissue loss, epithelial lesion, and inflammation (Figure 9).", "For six weeks, the mice were observed daily for body weight, stool consistency, and morbidity. 
Between weeks 2 and 4, the mice presented an alteration of intestinal motility that was characterized by soft stools and moderate morbidity. On the other hand, after week 4, mice presented an apparent recovery. No alterations were identified in the control groups.\nConcerning body weight, all groups demonstrated a very similar curve in the register of body weight during the experiment (Figure 1). Until week 4, the majority of the TNBS groups showed a progressive increase in body weight. After six administrations, the T6 group animals increased 14.2 ± 2.7% from their initial weight. At the end of the experimental period, the ethanol group gained 14.7 ± 2.3% of its initial weight. No significant differences were observed between the groups.\nIn this experimental procedure, the mortality rate is important to regulate the TNBS dose since the TNBS should induce the disease without inducing a high mortality rate. In this way, the survival curve was recorded, and it demonstrates a decrease in the survival rate in the first three weeks to approximately 73.3% (Figure 2). After week 4, the mice’s survival maintained the same value until the end of the experimental protocol, approximately 70.9%.", "Macroscopically, the colons were observed and scored for gross morphological damage, according to Morris et al., 1989 [24]. The maximal damage in the colon was observed with two administrations of TNBS (T2 group), with a mean score of 2.7 ± 0.7 (p < 0.0001 compared with the ethanol group), corresponding to a linear ulcer with inflammation at one site. After week 2, the gross morphology damage decreased and stabilized from week 4. In the T4 group, the attributed score was 0.9 ± 0.1 (p < 0.0001 compared with the T2 group), corresponding only to localized hyperemia without ulcers. The ethanol group presented a score of 0, with no damage registered (Figure 3).", "Fecal hemoglobin allows the evaluation of the intensity of the hemorrhagic focus (Figure 4). After week 2, the fecal hemoglobin progressively increased until week 6. A significant increase (p < 0.01) was observed when comparing the fecal hemoglobin of the T3 group (4.0 ± 0.2 µmol Hg/g feces) with the T4 group (5.8 ± 0.3 µmol Hg/g feces). However, when comparing the fecal hemoglobin of the T4 group (5.8 ± 0.3 µmol Hg/g feces) with the T6 group (7.5 ± 0.5 µmol Hg/g feces), no statistical differences were observed.\nDue to its essential role in intestinal homeostasis, The ALP concentration in the blood was evaluated (Figure 5). In general, the ALP levels observed in the TNBS groups were higher than those observed in the control groups, with a maximum ALP level in the T5 group of 58.5 ± 2.2 U/L (p < 0.0001 compared with the ethanol group). After week 3, the maintenance of ALP values can be observed, with 39.7 ± 2.4 U/L in the T4 group and 44.4 ± 2.3 U/L in the T6 group and no statistical significance. ", "This animal model showed significant production of TNF-α, a pro-inflammatory cytokine, after TNBS administration (Figure 6). All TNBS groups presented higher levels of TNF-α than the control groups, except on week 2, where only a slight increase was observed. The T2 group presented 51.4 ± 2.9 pg/mL of TNF-α, while the ethanol group presented 39.8 ± 1.5 pg/mL, without statistical significance. The highest level of TNF-α was observed on week 3 (87.3 ± 13.1 pg/mL). 
However, on week 4, the T4 group verified a slight decrease to 71.3 ± 3.3 pg/mL, which indicates maintenance of the values over time when compared with the T6 group (72.7 ± 3.6 pg/mL), without statistical significance being observed. Compared with the ethanol group, the T4 group presented statistically significant differences (p < 0.0001).\nIL-10 plays a central role in the mucosal immune system, inhibiting pro-inflammatory cytokine synthesis. IL-10 concentrations decreased in week 2. However, serum levels progressively increased until week 6 (Figure 7). Comparing the T3 group with the T6 group, the IL-10 concentration progressed from 44.6 ± 3.7 pg/mL to 68.2 ± 1.8 pg/mL, respectively (p < 0.0001). However, the T4 group (57.3 ± 3.8 pg/mL) had no significant differences from the T6 group (68.2 ± 1.8 pg/mL).", "The histopathological analysis allowed the evaluation of colonic injury based on inflammatory cell infiltration and tissue damage.\nThe colons presented a severe infiltration of inflammatory cells, foci of ulceration with necrosis, and tissue disruption in the T1, T2, and T3 groups (Figure 8). The inflammatory infiltrates were mostly present in the mucosa and submucosa, but some animals presented transmural infiltrates with extension to the mesentery. The T2 group showed the most severe phenotype. In the T4, T5, and T6 groups, the level of inflammatory cell infiltration and areas of epithelial ulceration and tissue disruption decreased.\nThe colitis severity of each group was calculated by summing individual lesions. The T1 and T4 groups presented the lowest histopathological score, showing inflammation without fibrosis or tissue loss. The T2 group exhibited the most severe score, with tissue loss, epithelial lesion, and inflammation (Figure 9).", "The TNBS groups presented several clinical manifestations, including alterations in intestinal motility, characterized by soft stools and diarrhea, and moderate morbidity, which is consistent with the literature [25,26]. The ethanol and sham groups remained free of alterations. The peak of the clinical signs occurred two weeks after induction and was followed by a partial recovery [24,27,28]. The fact that the peak was observed in week 2 is compatible with an acute model. Moreover, in the acute model developed by our research group, it was found that the acute lesions occurred 4 to 5 days after induction. After this period, the model becomes chronic. This was also confirmed with this experimental protocol. The model only worsened in the second week because the mice were subjected to a second administration of TNBS and, once again, very aggressive values occurred 4 to 5 days after this induction. However, from the third week, the mice appeared to show resistance and began to recover on some parameters. From week 4 to week 6, the model remained stable, presenting chronic inflammation.\nConcerning body weight monitoring, colitic mice presented a progressive increase in body weight throughout the experimental procedure, with a 10% increase in body weight since the induction day. Mice of the T6 group had a weight gain, after six administrations, consistent with previously reported results [26,29,30,31,32]. However, three weeks after the beginning of the experiment, the TNBS groups showed a decrease in body weight. This decrease may be related to the fragility of the mice following three TNBS administrations. However, a recovery in body weight occurred over the following weeks. 
This may be consistent with the fact that, after the fourth administration, the mice gained resistance to TNBS.\nIn addition, around 73% of the mice survived in the first three weeks of the protocol, followed by a stabilization of the survival rate from week four until the end of the study. These results coincide with the peak of morbidity observed in this study and with the peak of the disease described by some authors [24,28,29]. On the other hand, the survival rate indicates that the dose and frequency of TNBS administrations are optimal to induce the disease while minimizing the mortality rate. A TNBS dose reduction could be considered to reduce mortality; however, the disease induction would be compromised.\nThe peak of colon damage was observed in week 2, corresponding to a linear ulcer with inflammation at one site. This score decreased to 1, corresponding to “localized hyperemia, but no ulcers”, in weeks 4, 5, and 6. On the other hand, the ethanol and sham groups presented a score of 0, with no colonic damage. Once again, we can observe a stabilization of the model from week 4. Consistent with the literature, mice from the TNBS groups are expected to present colons with marked hyperemia and ulcers [33,34,35].\nAfter week 2, the fecal hemoglobin progressively increased until week 6, with a significant increase between the third and the fourth week (p < 0.01). Like the parameters analyzed above, fecal hemoglobin exhibited a stable pattern of TNBS-induced colitis from the fourth administration. These results suggest the presence of hemorrhagic ulcers and are in accordance with the results obtained by our research group in the acute colitis model [20,21,22].\nThe TNBS groups presented the highest values of total serum ALP concentration compared to the control groups, suggesting that the increase in ALP levels observed in the TNBS groups was caused by the induction of intestinal injury with TNBS. These results are consistent with other studies [20,21,22,36]. Increased ALP values were maintained from the fourth week, demonstrating again a stabilization of the model from week 4. Comparing the blood ALP concentrations of the ethanol and sham groups, a slight increase in ALP levels was observed in the ethanol group. The use of ethanol as a TNBS vehicle aims to induce changes in intestinal permeability, enabling the translocation of TNBS into the submucosal layer, causing colon damage [37,38]. The increased values of ALP in the ethanol group confirm this damage.\nTNF-α is a pro-inflammatory cytokine that is produced during the innate immune response of IBD [37]. The literature indicates that increased values of this pro-inflammatory cytokine are related to the pathogenesis of IBD [38,39,40]. Our results confirm this tendency. A significant increase in the levels of TNF-α was observed in the colon. Specifically, this cytokine peaked in the third week. However, in week 4 there was a slight decrease, followed by maintenance until week 6, indicating the possible onset of the chronic phase of this inflammatory disease. In acute inflammation, the values of pro-inflammatory cytokines are very high [20,21,22]. Conversely, in chronic inflammation, the values are only slightly increased. In contrast, anti-inflammatory cytokines act as a brake on this process, preventing an exacerbated response and possibly producing undesirable effects on the inflammation itself and the healing process [41]. 
The presence of IL-10 was measured in parallel to pro-inflammatory cytokines since it plays a central role in the mucosal immune system [42]. A decrease in IL-10 was observed in week 2. However, after this week, the production of IL-10 progressively increased until the sixth week. Normally, when the values of TNF-α increase, IL-10 decreases. However, from the second week, the values of IL-10 increased. This is probably an intrinsic compensatory mechanism that attempts to resolve the inflammation by increasing this anti-inflammatory cytokine. The literature confirms this tendency of the immune system to restore homeostasis, balancing the values of biochemical markers of inflammation [43,44].\nMoreover, the histological examination of the colon showed a severe infiltration of inflammatory cells in the mucosa that was associated with ulceration and tissue disruption. Our results are consistent with the literature, which also reports damage of the mucosal architecture simultaneously with a thickening of the colon wall, ulcers, and extensive inflammatory cell infiltration in the colonic mucosa of colitic mice [27,45,46].", "4.1. Material 2,4,6-Trinitrobenzene sulfonic acid (TNBS 5%) was purchased from Sigma-Aldrich Chemical. Ketamine (Ketamidor® 100 mg/mL, Lisbon, Portugal) was purchased from Richter Pharma. Xylazine (Sedoxylan® 20 mg/mL, Lisbon, Portugal) was purchased from Dechra. An ADVIA® kit was purchased from Siemens Healthcare Diagnostics (Madrid, Spain). An ELISA assay kit for TNF-α measurement was obtained from Hycult Biotechnology.\n4.2. Animals Female CD-1 mice, weighing 20–30 g and aged 6–10 weeks, were obtained from the Institute of Hygiene and Tropical Medicine. The animals were housed in standard polypropylene cages with ad libitum access to food and water in the bioterium of the Faculty of Pharmacy of the University of Lisbon. The mice were kept at 18–23 °C and 40–60% humidity in a controlled 12 h light/dark cycle. All animal care and experimental procedures were performed in accordance with the internationally accepted principles for laboratory animal use and care, Directive 2010/63/EU, transposed to the Portuguese legislation by Directive Law 113/2013. The experiment was approved by the institutional animal ethics committee (ORBEA) of the Faculty of Pharmacy of the University of Lisbon, 3/2020.\n4.3. Trinitrobenzene Sulfonic Acid-Induced Colitis The mice were left unfed for 24 h before the induction day. On the induction day (day 0), the mice were anesthetized with an intraperitoneal (IP) injection of ketamine 100 mg/Kg + xylazine 10 mg/Kg (40 µL/mice), and a catheter was carefully inserted into the colon until the tip was 4 cm proximal to the anus. Then, 100 µL of 1% TNBS in 50% ethanol was administered, and the mice were kept in a Trendelenburg position for 1 min. This procedure was repeated weekly, for a period of 5 weeks. On weeks 1, 2, 3, 4, 5, and 6, depending on the experimental TNBS group, the mice were anesthetized, and blood samples were collected by cardiac puncture. The mice were sacrificed by cervical dislocation. The necropsy was initiated with a midline incision into the abdomen. The colon was separated from the surrounding tissues and removed.\n4.4. Experimental Groups The mice were categorized into eight groups: six TNBS groups, depending on the number of TNBS administrations, and two control groups. The TNBS groups were: the T1 group (n = 15) that received one TNBS administration at week 0; the T2 group (n = 15) that received two TNBS administrations at weeks 0 and 1; the T3 group (n = 15) that received three TNBS administrations at weeks 0, 1, and 2; the T4 group (n = 15) that received four TNBS administrations at weeks 0, 1, 2, and 3; the T5 group (n = 15) that received five TNBS administrations at weeks 0, 1, 2, 3, and 4; and the T6 group (n = 15) that received six TNBS administrations at weeks 0, 1, 2, 3, 4, and 5. The control groups were the ethanol (E) and sham (S) groups; the E group (n = 15) received 100 µL of 50% ethanol (TNBS vehicle) and the S group (n = 15) received 100 µL of saline solution. \n4.5. Monitoring of Clinical Signs After induction, the animals were observed daily for the monitoring of body weight, stool consistency, morbidity, and mortality.\n4.6. Macroscopic Assessment of Colitis A macroscopic assessment of TNBS-induced colitis was performed by using the criteria for the scoring of gross morphologic damage, as previously described by Morris et al., 1989 [24]. The score was attributed based on the presence of hyperemia, ulcers, and inflammation and the number of sites with ulceration and/or inflammation and its extension.\n4.7. Biochemical Markers Serum from the collected blood samples was separated by centrifugation at 3600 rpm for 15 min. A serum analysis was conducted in order to evaluate alkaline phosphatase (ALP) using an automated clinical chemistry analyzer (ADVIA® 1200, Madrid, Spain). Fecal hemoglobin was evaluated using a quantitative method by immunoturbidimetry (Krom Systems).\n4.8. Measurement of Cytokines The pro-inflammatory cytokine TNF-α and the anti-inflammatory cytokine IL-10 were measured and expressed in pg/mL. Colonic tissue samples from each animal were weighed and homogenized in phosphate buffer (Ultra-turrax T25, 13,500 rev/min, twice for 30 s). Afterward, samples were centrifuged (15,000 rpm for 15 min at 4 °C). The aliquots of the supernatant were conserved at −20 °C until use. A spectrophotometric measurement of the cytokine level was performed at 450 nm (ELISA kit Quantikine, Hycult Biotechnology, Abingdon, UK).\n4.9. Histopathological Analysis The histopathology was carried out by an independent histopathologist from the Gulbenkian Institute of Science who was blinded to the groups. Colon samples were fixed in 10% phosphate-buffered formalin, processed routinely for paraffin embedding, sectioned at 5 µm, and stained with hematoxylin and eosin. To increase the possibility of detecting fibrosis, Masson’s trichrome staining was used. Sections of the distal colon were evaluated based on adapted criteria of Seamons and colleagues (2013) [47]. The histopathological lesions were scored (0–4, increasing in severity) for the following parameters: (1) the presence of tissue loss/necrosis; (2) the severity of the mucosal epithelial lesion; (3) inflammation; (4) extent 1—the percentage of the intestine affected in any manner; and (5) extent 2—the percentage of intestine affected by the most severe lesion. The colitis severity was calculated by summing the individual lesions and the extent scores, yielding a final colitis score (max score = 20).\n4.10. Statistical Analysis The results were expressed as the mean ± SEM of N observations, where N represents the number of animals analyzed. Data analysis was performed using SPSS software (version 26.0). The results were analyzed by a one-way ANOVA to determine the statistical significance between the TNBS and control groups. For multiple comparisons, Tukey’s post hoc test was used. A p-value of less than 0.05 was considered significant.", "2,4,6-Trinitrobenzene sulfonic acid (TNBS 5%) was purchased from Sigma-Aldrich Chemical. Ketamine (Ketamidor® 100 mg/mL, Lisbon, Portugal) was purchased from Richter Pharma. Xylazine (Sedoxylan® 20 mg/mL, Lisbon, Portugal) was purchased from Dechra. An ADVIA® kit was purchased from Siemens Healthcare Diagnostics (Madrid, Spain). An ELISA assay kit for TNF-α measurement was obtained from Hycult Biotechnology.", "Female CD-1 mice, weighing 20–30 g and aged 6–10 weeks, were obtained from the Institute of Hygiene and Tropical Medicine. The animals were housed in standard polypropylene cages with ad libitum access to food and water in the bioterium of the Faculty of Pharmacy of the University of Lisbon. The mice were kept at 18–23 °C and 40–60% humidity in a controlled 12 h light/dark cycle. All animal care and experimental procedures were performed in accordance with the internationally accepted principles for laboratory animal use and care, Directive 2010/63/EU, transposed to the Portuguese legislation by Directive Law 113/2013. 
The experiment was approved by the institutional animal ethics committee (ORBEA) of the Faculty of Pharmacy of the University of Lisbon, 3/2020.", "The mice were left unfed for 24 h before the induction day. On the induction day (day 0), the mice were anesthetized with an intraperitoneal (IP) injection of ketamine 100 mg/Kg + xylazine 10 mg/Kg (40 µL/mice), and a catheter was carefully inserted into the colon until the tip was 4 cm proximal to the anus. Then, 100 µL of 1% TNBS in 50% ethanol was administered, and the mice were kept in a Trendelenburg position for 1 min. This procedure was repeated weekly, for a period of 5 weeks. On weeks 1, 2, 3, 4, 5, and 6, depending on the experimental TNBS group, the mice were anesthetized, and blood samples were collected by cardiac puncture. The mice were sacrificed by cervical dislocation. The necropsy was initiated with a midline incision into the abdomen. The colon was separated from the surrounding tissues and removed.", "The mice were categorized into eight groups: six TNBS groups, depending on the number of TNBS administrations, and two control groups. The TNBS groups were: the T1 group (n = 15) that received one TNBS administration at week 0; the T2 group (n = 15) that received two TNBS administrations at weeks 0 and 1; the T3 group (n = 15) that received three TNBS administrations at weeks 0, 1, and 2; the T4 group (n = 15) that received four TNBS administrations at weeks 0, 1, 2, and 3; the T5 group (n = 15) that received five TNBS administrations at weeks 0, 1, 2, 3, and 4; and the T6 group (n = 15) that received six TNBS administrations at weeks 0, 1, 2, 3, 4, and 5. The control groups were the ethanol (E) and sham (S) groups; the E group (n = 15) received 100 µL of 50% ethanol (TNBS vehicle) and the S group (n = 15) received 100 µL of saline solution. ", "After induction, the animals were observed daily for the monitoring of body weight, stool consistency, morbidity, and mortality.", "A macroscopic assessment of TNBS-induced colitis was performed by using the criteria for the scoring of gross morphologic damage, as previously described by Morris et al., 1989 [24]. The score was attributed based on the presence of hyperemia, ulcers, and inflammation and the number of sites with ulceration and/or inflammation and its extension.", "Serum from the collected blood samples was separated by centrifugation at 3600 rpm for 15 min. A serum analysis was conducted in order to evaluate alkaline phosphatase (ALP) using an automated clinical chemistry analyzer (ADVIA® 1200, Madrid, Spain). Fecal hemoglobin was evaluated using a quantitative method by immunoturbidimetry (Krom Systems).", "The pro-inflammatory cytokine TNF-α and the anti-inflammatory cytokine IL-10 were measured and expressed in pg/mL. Colonic tissue samples from each animal were weighed and homogenized in phosphate buffer (Ultra-turrax T25, 13,500 rev/min, twice for 30 s). Afterward, samples were centrifuged (15,000 rpm for 15 min at 4 °C). The aliquots of the supernatant were conserved at −20 °C until use. A spectrophotometric measurement of the cytokine level was performed at 450 nm (ELISA kit Quantikine, Hycult Biotechnology, Abingdon, UK).", "The histopathology was carried out by an independent histopathologist from the Gulbenkian Institute of Science who was blinded to the groups. Colon samples were fixed in 10% phosphate-buffered formalin, processed routinely for paraffin embedding, sectioned at 5 µm, and stained with hematoxylin and eosin. 
To increase the possibility of detecting fibrosis, Masson’s trichrome staining was used. Sections of the distal colon were evaluated based on adapted criteria of Seamons and colleagues (2013) [47]. The histopathological lesions were scored (0–4, increasing in severity) for the following parameters: (1) the presence of tissue loss/necrosis; (2) the severity of the mucosal epithelial lesion; (3) inflammation; (4) extent 1—the percentage of the intestine affected in any manner; and (5) extent 2—the percentage of intestine affected by the most severe lesion. The colitis severity was calculated by summing the individual lesions and the extent scores, yielding a final colitis score (max score = 20).", "The results were expressed as the mean ± SEM of N observations, where N represents the number of animals analyzed. Data analysis was performed using SPSS software (version 26.0). The results were analyzed by a one-way ANOVA to determine the statistical significance between the TNBS and control groups. For multiple comparisons, Tukey’s post hoc test was used. A p-value of less than 0.05 was considered significant.", "A TNBS-induced colitis model was monitored for 6 weeks. A scheme of multiple TNBS administrations was used since the aim was to achieve a chronic pattern of induced colitis and to identify the week in which the damage becomes chronic.\nClinical manifestations of chronic colitis usually peak within 2 weeks and may be followed by partial recovery or death. These results were expected and are compatible with the correct induction of colitis [26]. Our findings allow us to conclude that TNBS-induced chronic colitis should be developed in 4 weeks, providing a chronic intestinal inflammation model. Indeed, the parameters under evaluation, such as clinical manifestations; biochemical markers, including fecal hemoglobin and pro- and anti-inflammatory cytokine levels; and macroscopic evaluation, corroborate that the chronic illness pattern is observed from week 4 after the induction. Some mice died during the early days of the study, possibly because they did not withstand the aggravation of the disease in its acute phase. On the other hand, the remaining mice resisted and progressed to the chronic phase of the disease, showing the same symptoms in a milder form." ]
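The composite histopathological score used above (five parameters, each graded 0–4, summed to a maximum of 20) is simple enough to formalize; the minimal sketch below assumes only that summation and the 0–4 range, with field names invented for illustration (the adapted grading criteria of Seamons and colleagues are not encoded here).

```python
# Minimal sketch of the composite colitis score from Section 4.9:
# five parameters graded 0-4 each, summed to a maximum score of 20.
# Field names are illustrative; the underlying grading criteria
# (adapted from Seamons et al., 2013) are not reproduced here.
from dataclasses import dataclass, astuple

@dataclass
class HistologyScores:
    tissue_loss_necrosis: int       # (1) tissue loss/necrosis, 0-4
    mucosal_epithelial_lesion: int  # (2) mucosal epithelial lesion, 0-4
    inflammation: int               # (3) inflammation, 0-4
    extent_any_lesion: int          # (4) % of intestine affected in any manner, graded 0-4
    extent_most_severe: int         # (5) % affected by the most severe lesion, graded 0-4

    def colitis_score(self) -> int:
        parts = astuple(self)
        if any(not 0 <= p <= 4 for p in parts):
            raise ValueError("each parameter must be graded 0-4")
        return sum(parts)  # maximum possible score = 20

# Hypothetical animal with moderate colitis:
print(HistologyScores(2, 3, 3, 2, 1).colitis_score())  # -> 11
```

A per-group severity could then be summarized as the mean ± SEM of these scores, matching how the other endpoints are reported.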
[ "intro", "results", null, null, null, null, null, "discussion", null, null, null, null, null, null, null, null, null, null, null, "conclusions" ]
[ "inflammatory bowel disease", "TNBS-induced colitis", "chronic animal model" ]
1. Introduction: Inflammatory bowel disease (IBD) comprises Crohn’s disease (CD) and ulcerative colitis (UC) [1]. CD and UC are chronic and relapsing inflammatory conditions of the gastrointestinal tract that have distinct pathological and clinical characteristics. The incidence of IBD has increased with the advancement of society [2,3]. It is almost a global disease, affecting all ages, including the pediatric population [4]. Several treatments are currently available for IBD, though none of them reverses the underlying pathogenic mechanism of this disease [5,6]. Animal models of IBD remain essential for a proper understanding of histopathological change in the gastrointestinal tract and play a key role in the development of new pharmacological approaches [7]. The trinitrobenzene sulfonic acid (TNBS)-induced model, in particular, is a commonly used model of IBD since it is capable of reproducing CD in humans [8,9,10]. Additionally, the TNBS-induced model can mimic the acute and chronic stages of IBD [11,12]. TNBS is a haptenizing agent that stimulates a delayed-type hypersensitivity immune response, driving colitis in susceptible mouse strains [13,14,15,16,17]. The chemical is dissolved in ethanol, enabling the interaction of TNBS with colon tissue proteins. The ethanol breaks the mucosal barrier, allowing TNBS to penetrate into the bowel wall [16]. TNBS administration induces transmural colitis that is driven by a Th1-mediated immune response [13,14,17] that is characterized by the infiltration of CD4 cells, neutrophils, and macrophages into the lamina propria and the secretion of cytokines [14]. The most common cytokines are tumor necrosis factor (TNF)-α and interleukin (IL)-12. Since the protocols using the TNBS-induced colitis model are not standardized, a systematic review [18] conducted by our research group concluded that the chronic TNBS-induced colitis model can be obtained with multiple TNBS administrations. In the literature, there is no consensus about the induction method, and several original articles have been published with different ways to induce a chronic model of TNBS-induced colitis using different doses, numbers of TNBS administrations, strains, genders, and ages of mice. From this point of view, the main objective of this study is to develop and validate a chronic TNBS-induced colitis model in mice in order to evaluate the effect of new pharmacological approaches for IBD. The advantage of this chronic model compared to acute models is that the latter may provide only limited information about the pathogenesis of human IBDs, as the chemical injury to the epithelial barrier leads to self-limiting inflammation rather than chronic disease [19]. Our research group has previously developed preclinical studies in an acute TNBS-induced colitis model [20,21,22]. However, as cited by Bilsborough and colleagues (2021), since IBD is a chronic disease, the development of a standardized and validated induction method for chronic colitis is useful for studying new pharmacological approaches [23]. Our findings allow us to conclude that TNBS-induced chronic colitis should be developed in 4 weeks, providing a chronic intestinal inflammation model. 2. Results: 2.1. Clinical Signs For six weeks, the mice were observed daily for body weight, stool consistency, and morbidity. Between weeks 2 and 4, the mice presented an alteration of intestinal motility that was characterized by soft stools and moderate morbidity. On the other hand, after week 4, mice presented an apparent recovery. No alterations were identified in the control groups. Concerning body weight, all groups demonstrated a very similar curve in the register of body weight during the experiment (Figure 1). Until week 4, the majority of the TNBS groups showed a progressive increase in body weight. After six administrations, the T6 group animals increased 14.2 ± 2.7% from their initial weight. At the end of the experimental period, the ethanol group gained 14.7 ± 2.3% of its initial weight. No significant differences were observed between the groups. In this experimental procedure, the mortality rate is important to regulate the TNBS dose since the TNBS should induce the disease without inducing a high mortality rate. In this way, the survival curve was recorded, and it demonstrates a decrease in the survival rate in the first three weeks to approximately 73.3% (Figure 2). After week 4, the mice’s survival maintained the same value until the end of the experimental protocol, approximately 70.9%. 2.2. Macroscopic Assessment of Colitis Macroscopically, the colons were observed and scored for gross morphological damage, according to Morris et al., 1989 [24]. The maximal damage in the colon was observed with two administrations of TNBS (T2 group), with a mean score of 2.7 ± 0.7 (p < 0.0001 compared with the ethanol group), corresponding to a linear ulcer with inflammation at one site. After week 2, the gross morphology damage decreased and stabilized from week 4. In the T4 group, the attributed score was 0.9 ± 0.1 (p < 0.0001 compared with the T2 group), corresponding only to localized hyperemia without ulcers. The ethanol group presented a score of 0, with no damage registered (Figure 3). 2.3. Biochemical Markers Fecal hemoglobin allows the evaluation of the intensity of the hemorrhagic focus (Figure 4). After week 2, the fecal hemoglobin progressively increased until week 6. A significant increase (p < 0.01) was observed when comparing the fecal hemoglobin of the T3 group (4.0 ± 0.2 µmol Hg/g feces) with the T4 group (5.8 ± 0.3 µmol Hg/g feces). However, when comparing the fecal hemoglobin of the T4 group (5.8 ± 0.3 µmol Hg/g feces) with the T6 group (7.5 ± 0.5 µmol Hg/g feces), no statistical differences were observed. Due to its essential role in intestinal homeostasis, the ALP concentration in the blood was evaluated (Figure 5). In general, the ALP levels observed in the TNBS groups were higher than those observed in the control groups, with a maximum ALP level in the T5 group of 58.5 ± 2.2 U/L (p < 0.0001 compared with the ethanol group). After week 3, the maintenance of ALP values can be observed, with 39.7 ± 2.4 U/L in the T4 group and 44.4 ± 2.3 U/L in the T6 group and no statistical significance. 2.4. Pro and Anti-Inflammatory Cytokine Levels This animal model showed significant production of TNF-α, a pro-inflammatory cytokine, after TNBS administration (Figure 6). All TNBS groups presented higher levels of TNF-α than the control groups, except on week 2, where only a slight increase was observed. The T2 group presented 51.4 ± 2.9 pg/mL of TNF-α, while the ethanol group presented 39.8 ± 1.5 pg/mL, without statistical significance. The highest level of TNF-α was observed on week 3 (87.3 ± 13.1 pg/mL). However, on week 4, the T4 group showed a slight decrease to 71.3 ± 3.3 pg/mL, which indicates maintenance of the values over time when compared with the T6 group (72.7 ± 3.6 pg/mL), without statistical significance being observed. Compared with the ethanol group, the T4 group presented statistically significant differences (p < 0.0001). IL-10 plays a central role in the mucosal immune system, inhibiting pro-inflammatory cytokine synthesis. IL-10 concentrations decreased in week 2. However, serum levels progressively increased until week 6 (Figure 7). Comparing the T3 group with the T6 group, the IL-10 concentration progressed from 44.6 ± 3.7 pg/mL to 68.2 ± 1.8 pg/mL, respectively (p < 0.0001). However, the T4 group (57.3 ± 3.8 pg/mL) had no significant differences from the T6 group (68.2 ± 1.8 pg/mL). 2.5. Histopathological Features The histopathological analysis allowed the evaluation of colonic injury based on inflammatory cell infiltration and tissue damage. The colons presented a severe infiltration of inflammatory cells, foci of ulceration with necrosis, and tissue disruption in the T1, T2, and T3 groups (Figure 8). The inflammatory infiltrates were mostly present in the mucosa and submucosa, but some animals presented transmural infiltrates with extension to the mesentery. The T2 group showed the most severe phenotype. In the T4, T5, and T6 groups, the level of inflammatory cell infiltration and areas of epithelial ulceration and tissue disruption decreased. The colitis severity of each group was calculated by summing individual lesions. The T1 and T4 groups presented the lowest histopathological score, showing inflammation without fibrosis or tissue loss. The T2 group exhibited the most severe score, with tissue loss, epithelial lesion, and inflammation (Figure 9). 2.1. Clinical Signs: For six weeks, the mice were observed daily for body weight, stool consistency, and morbidity. 
Between weeks 2 and 4, the mice presented an alteration of intestinal motility that was characterized by soft stools and moderate morbidity. On the other hand, after week 4, mice presented an apparent recovery. No alterations were identified in the control groups. Concerning body weight, all groups demonstrated a very similar curve in the register of body weight during the experiment (Figure 1). Until week 4, the majority of the TNBS groups showed a progressive increase in body weight. After six administrations, the T6 group animals increased 14.2 ± 2.7% from their initial weight. At the end of the experimental period, the ethanol group gained 14.7 ± 2.3% of its initial weight. No significant differences were observed between the groups. In this experimental procedure, the mortality rate is important to regulate the TNBS dose since the TNBS should induce the disease without inducing a high mortality rate. In this way, the survival curve was recorded, and it demonstrates a decrease in the survival rate in the first three weeks to approximately 73.3% (Figure 2). After week 4, the mice’s survival maintained the same value until the end of the experimental protocol, approximately 70.9%. 2.2. Macroscopic Assessment of Colitis: Macroscopically, the colons were observed and scored for gross morphological damage, according to Morris et al., 1989 [24]. The maximal damage in the colon was observed with two administrations of TNBS (T2 group), with a mean score of 2.7 ± 0.7 (p < 0.0001 compared with the ethanol group), corresponding to a linear ulcer with inflammation at one site. After week 2, the gross morphology damage decreased and stabilized from week 4. In the T4 group, the attributed score was 0.9 ± 0.1 (p < 0.0001 compared with the T2 group), corresponding only to localized hyperemia without ulcers. The ethanol group presented a score of 0, with no damage registered (Figure 3).
3. Discussion: The TNBS groups presented several clinical manifestations, including alterations in intestinal motility, characterized by soft stools and diarrhea, and moderate morbidity, which is consistent with the literature [25,26]. The ethanol and sham groups remained free of alterations. The peak of the clinical signs occurred two weeks after induction and was followed by a partial recovery [24,27,28]. The fact that the peak was observed in week 2 is compatible with an acute model. Moreover, in the acute model developed by our research group, it was found that the acute lesions occurred 4 to 5 days after induction. After this period, the model becomes chronic. This was also confirmed with this experimental protocol. The model only worsened in the second week because the mice were subjected to a second administration of TNBS and, once again, very aggressive values occurred 4 to 5 days after this induction. However, from the third week, the mice appeared to show resistance and began to recover on some parameters. From week 4 to week 6, the model remained stable, presenting chronic inflammation. Concerning body weight monitoring, colitic mice presented a progressive increase in body weight throughout the experimental procedure, with a 10% increase in body weight since the induction day. Mice of the T6 group had a weight gain, after six administrations, consistent with previously reported results [26,29,30,31,32]. However, three weeks after the beginning of the experiment, the TNBS groups showed a decrease in body weight. This decrease may be related to the fragility of the mice following three TNBS administrations. However, a recovery in body weight occurred over the following weeks.
This may be consistent with the fact that after the fourth administration mice gain resistance to TNBS. In addition, around 73% of the mice survived in the first three weeks of the protocol, followed by a stabilization of the survival rate from week four until the end of the study. These results are coincident with the peak of morbidity observed in this study and with the peak of the disease, as described by some authors [24,28,29]. On the other hand, the survival rate indicates that the dose and frequency of TNBS administrations are optimal to induce the disease while minimizing the mortality rate. A TNBS dose reduction could be considered to reduce mortality; however, the disease induction would be compromised. The peak of colon damage was observed in week 2, corresponding to the linear ulcer with inflammation at one site. This score passed to 1 with “localized hyperemia, but no ulcers” in weeks 4, 5, and 6. On the other hand, the ethanol and sham groups present a score of 0, with no colonic damage. Once again, we can observe a stabilization of the model from week 4. Consistent with the literature, mice from the TNBS groups should present the colon with marked hyperemia and ulcers [33,34,35]. After week 2, the fecal hemoglobin progressively increased until week 6, with a significant increase between the third and the fourth week (p < 0.01). As well as the parameters analyzed above, fecal hemoglobin exhibits a stable pattern of TNBS-induced colitis from the fourth administration. These results seem to show the presence of hemorrhagic ulcers and are in accordance with the results obtained by our research group in the acute colitis model [20,21,22]. The TNBS groups presented the highest values of total serum ALP concentration compared to the control groups, suggesting that the increase in ALP levels observed in the TNBS groups was caused by the induction of intestinal injury with TNBS. These results are consistent with other studies [20,21,22,36]. Increased ALP values were maintained from the fourth week, demonstrating again a stabilization of the model from week 4. Comparing the results of the blood concentration of ALP from the ethanol and the sham groups, a slight increase in ALP levels in the ethanol group was observed. The use of ethanol as a TNBS vehicle aims to induce changes in intestinal permeability, enabling the translocation of TNBS into the submucosal layer, causing colon damage [37,38]. The increased values of ALP in the ethanol group confirm this damage. TNF-α is a pro-inflammatory cytokine that is produced during the innate immune response of IBD [37]. The literature indicates that increased values of this pro-inflammatory cytokine are related to the pathogenesis of IBD [38,39,40]. Our results confirm this tendency. A significant increase in the levels of TNF-α was observed in the colon. Specifically, this cytokine recorded a peak in the third week. However, on week 4 there was a slight decrease and maintenance until week 6, indicating the possible onset of the chronic phase of this inflammatory disease. In acute inflammation, the values of pro-inflammatory cytokines are very high [20,21,22]. Conversely, in chronic inflammation, the values are only slightly increased. In contrast, anti-inflammatory cytokines act as a brake on this process, preventing an exacerbated response and possibly producing undesirable effects on the inflammation itself and the healing process [41]. 
The presence of IL-10 was measured in parallel to pro-inflammatory cytokines since it plays a central role in the mucosal immune system [42]. A decrease in IL-10 was observed in week 2. However, after this week, the production of IL-10 progressively increased until the sixth week. Normally, when the values of TNF-α increase, IL-10 decreases. However, from the second week, the values of IL-10 increased. This is probably an intrinsic compensatory mechanism that is trying to resolve the inflammation by increasing this anti-inflammatory cytokine. The literature confirms this tendency of the immune system to restore homeostasis, balancing the values of biochemical markers of inflammation [43,44]. Moreover, the histological examination of the colon showed a severe infiltration of inflammatory cells in the mucosa that was associated with ulceration and tissue disruption. Our results are consistent with the literature, which also reports damage of the mucosal architecture simultaneously with a thickening of the colon wall, ulcers, and extensive inflammatory cell infiltration in the colonic mucosa of colitic mice [27,45,46]. 4. Materials and Methods: 4.1. Material 2,4,6-Trinitrobenzene sulfonic acid (TNBS 5%) was purchased from Sigma-Aldrich Chemical. Ketamine (Ketamidor® 100 mg/mL, Lisbon, Portugal) was purchased from Richter Pharma. Xylazine (Sedoxylan® 20 mg/mL, Lisbon, Portugal) was purchased from Dechra. An ADVIA® kit was purchased from Siemens Healthcare Diagnostics (Madrid, Spain). An ELISA assay kit for TNF-α measurement was obtained from Hycult Biotechnology. 4.2. Animals Female CD-1 mice, weighing 20–30 g and aged 6–10 weeks, were obtained from the Institute of Hygiene and Tropical Medicine. The animals were housed in standard polypropylene cages with ad libitum access to food and water in the bioterium of the Faculty of Pharmacy of the University of Lisbon. The mice were kept at 18–23 °C and 40–60% humidity in a controlled 12 h light/dark cycle. All animal care and experimental procedures were performed in accordance with the internationally accepted principles for laboratory animal use and care, Directive 2010/63/EU, transposed to the Portuguese legislation by Directive Law 113/2013. The experiment was approved by the institutional animal ethics committee (ORBEA) of the Faculty of Pharmacy of the University of Lisbon, 3/2020.
4.3. Trinitrobenzene Sulfonic Acid-Induced Colitis The mice were left unfed for 24 h before the induction day. On the induction day (day 0), the mice were anesthetized with an intraperitoneal (IP) injection of ketamine 100 mg/kg + xylazine 10 mg/kg (40 µL/mouse), and a catheter was carefully inserted into the colon until the tip was 4 cm proximal to the anus. Then, 100 µL of 1% TNBS in 50% ethanol was administered, and the mice were kept in a Trendelenburg position for 1 min. This procedure was repeated weekly, for a period of 5 weeks. On weeks 1, 2, 3, 4, 5, and 6, depending on the experimental TNBS group, the mice were anesthetized, and blood samples were collected by cardiac puncture. The mice were sacrificed by cervical dislocation. The necropsy was initiated with a midline incision into the abdomen. The colon was separated from the surrounding tissues and removed. 4.4. Experimental Groups The mice were categorized into eight groups: six TNBS groups, depending on the number of TNBS administrations, and two control groups. The TNBS groups were: the T1 group (n = 15) that received one TNBS administration at week 0; the T2 group (n = 15) that received two TNBS administrations at weeks 0 and 1; the T3 group (n = 15) that received three TNBS administrations at weeks 0, 1, and 2; the T4 group (n = 15) that received four TNBS administrations at weeks 0, 1, 2, and 3; the T5 group (n = 15) that received five TNBS administrations at weeks 0, 1, 2, 3, and 4; and the T6 group (n = 15) that received six TNBS administrations at weeks 0, 1, 2, 3, 4, and 5. The control groups were the ethanol (E) and sham (S) groups; the E group (n = 15) received 100 µL of 50% ethanol (TNBS vehicle) and the S group (n = 15) received 100 µL of saline solution.
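For bookkeeping in downstream analysis scripts, the dosing scheme described in Section 4.4 can be captured in a small data structure. The sketch below is only an illustrative convenience and not part of the published protocol; the group names, administration weeks, and group sizes are taken from the text, while the helper function and its name are hypothetical.

```python
# Illustrative encoding of the experimental design described in Section 4.4.
# Group names, administration weeks, and group sizes follow the text;
# the helper below is a hypothetical convenience, not part of the protocol.

TNBS_GROUPS = {
    "T1": [0],
    "T2": [0, 1],
    "T3": [0, 1, 2],
    "T4": [0, 1, 2, 3],
    "T5": [0, 1, 2, 3, 4],
    "T6": [0, 1, 2, 3, 4, 5],
}
CONTROL_GROUPS = {"E": "50% ethanol (TNBS vehicle)", "S": "saline solution"}
N_PER_GROUP = 15  # n = 15 animals in every group


def doses_received_by(group: str, week: int) -> int:
    """Number of TNBS administrations a group has received up to a given week."""
    weeks = TNBS_GROUPS.get(group, [])
    return sum(1 for w in weeks if w <= week)


if __name__ == "__main__":
    # e.g., by week 3 the T4 group has received all four administrations
    print(doses_received_by("T4", 3))  # -> 4
```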
4.5. Monitoring of Clinical Signs After induction, the animals were observed daily for the monitoring of body weight, stool consistency, morbidity, and mortality. 4.6. Macroscopic Assessment of Colitis A macroscopic assessment of TNBS-induced colitis was performed by using the criteria for the scoring of gross morphologic damage, as previously described by Morris et al., 1989 [24]. The score was attributed based on the presence of hyperemia, ulcers, and inflammation and the number of sites with ulceration and/or inflammation and its extension. 4.7. Biochemical Markers Serum from the collected blood samples was separated by centrifugation at 3600 rpm for 15 min. A serum analysis was conducted in order to evaluate alkaline phosphatase (ALP) using an automated clinical chemistry analyzer (ADVIA® 1200, Madrid, Spain). Fecal hemoglobin was evaluated using a quantitative method by immunoturbidimetry (Krom Systems). 4.8. Measurement of Cytokines The pro-inflammatory cytokine TNF-α and the anti-inflammatory cytokine IL-10 were measured and expressed in pg/mL. Colonic tissue samples from each animal were weighed and homogenized in phosphate buffer (Ultra-turrax T25, 13,500 rev/min, twice for 30 s). Afterward, samples were centrifuged (15,000 rpm for 15 min at 4 °C). The aliquots of the supernatant were conserved at −20 °C until use. A spectrophotometric measurement of the cytokine level was performed at 450 nm (ELISA kit Quantikine, Hycult Biotechnology, Abingdon, UK). 4.9. Histopathological Analysis The histopathology was carried out by an independent histopathologist from the Gulbenkian Institute of Science who was blinded to the groups. Colon samples were fixed in 10% phosphate-buffered formalin, processed routinely for paraffin embedding, sectioned at 5 µm, and stained with hematoxylin and eosin. To increase the possibility of detecting fibrosis, Masson’s trichrome staining was used. Sections of the distal colon were evaluated based on adapted criteria of Seamons and colleagues (2013) [47].
The histopathological score of lesions was partially scored (0–4 increasing in severity) with some parameters, namely: (1) the presence of tissue loss/necrosis; (2) the severity of the mucosal epithelial lesion; (3) inflammation; (4) extent 1—the percentage of the intestine affected in any manner; and (5) extent 2—the percentage of intestine affected by the most severe lesion. The colitis severity was calculated by summing the individual lesions and the extent scores, promoting a final colitis score (max score = 20). 4.10. Statistical Analysis The results were expressed as the mean ± SEM of N observations, where N represents the number of animals analyzed. Data analysis was performed using SPSS software (version 26.0). The results were analyzed by a one-way ANOVA to determine the statistical significance between the TNBS and control groups. For multiple comparisons, Tukey’s post hoc test was used. A p-value of less than 0.05 was considered significant.
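Section 4.10 specifies a one-way ANOVA followed by Tukey's post hoc test, run in SPSS. For readers who want to reproduce this kind of comparison outside SPSS, a minimal equivalent sketch in Python is shown below; the group labels follow the text, but the measurement arrays, variable names, and the use of SciPy/statsmodels are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch of the analysis described in Section 4.10 (one-way ANOVA with
# Tukey's post hoc test), using SciPy/statsmodels instead of SPSS.
# The numbers below are made-up placeholders, not study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "T4":      np.array([38.1, 41.0, 39.5, 40.2, 38.9]),   # e.g., ALP (U/L), hypothetical
    "T6":      np.array([43.8, 45.1, 44.0, 44.9, 43.6]),
    "ethanol": np.array([30.2, 29.5, 31.0, 30.8, 29.9]),
}

# One-way ANOVA across the groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD for pairwise multiple comparisons (alpha = 0.05, as in the paper)
values = np.concatenate(list(groups.values()))
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```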
5. Conclusions: A TNBS-induced colitis model was monitored for 6 weeks. A scheme of multiple TNBS administrations was performed since the aim was to achieve a chronic pattern of induced colitis and to identify the week in which the damage becomes chronic. Clinical manifestations of chronic colitis usually peak within 2 weeks and may be followed by partial recovery or death. These results were expected and are compatible with the correct induction of colitis [26]. Our findings allow us to conclude that TNBS-induced chronic colitis should be developed in 4 weeks, providing a chronic intestinal inflammation model. Indeed, the parameters under evaluation, such as clinical manifestations; biochemical markers, including fecal hemoglobin and pro- and anti-inflammatory cytokine levels; and macroscopic evaluation, corroborate that the chronic illness pattern is observed from week 4 after the induction. Some mice died during the early days of the study, possibly because they did not resist the aggravation of the disease in its acute phase. On the other hand, the remaining mice resisted and progressed to the chronic phase of the disease, showing the same symptoms but more lightly.
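As a side note on the scoring scheme from Section 4.9 above (five sub-scores of 0–4, summed to a maximum of 20), the composite severity score can be expressed compactly in code. The sketch below is an illustrative reconstruction only; the parameter names follow the text, while the function, its name, and the example values are hypothetical.

```python
# Illustrative sketch of the composite colitis score described in Section 4.9
# (five parameters scored 0-4, summed; max = 20). The function and example
# values are hypothetical, not the published scoring code.

PARAMETERS = (
    "tissue_loss_necrosis",
    "mucosal_epithelial_lesion",
    "inflammation",
    "extent_any_lesion",      # extent 1: % of intestine affected in any manner
    "extent_most_severe",     # extent 2: % of intestine affected by the worst lesion
)


def colitis_score(subscores: dict) -> int:
    """Sum the five 0-4 sub-scores into a final colitis score (0-20)."""
    total = 0
    for name in PARAMETERS:
        value = subscores[name]
        if not 0 <= value <= 4:
            raise ValueError(f"{name} must be scored 0-4, got {value}")
        total += value
    return total


if __name__ == "__main__":
    example = {
        "tissue_loss_necrosis": 2,
        "mucosal_epithelial_lesion": 3,
        "inflammation": 3,
        "extent_any_lesion": 2,
        "extent_most_severe": 1,
    }
    print(colitis_score(example))  # -> 11 (hypothetical animal)
```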
Background: Inflammatory bowel disease (IBD) is a world healthcare problem. In order to evaluate the effect of new pharmacological approaches for IBD, we aim to develop and validate chronic trinitrobenzene sulfonic acid (TNBS)-induced colitis in mice. Methods: Experimental colitis was induced by the rectal administration of multiple doses of TNBS in female CD-1 mice. The protocol was performed with six experimental groups, depending on the TNBS administration frequency, and two control groups (sham and ethanol groups). Results: The survival rate was 73.3% in the first three weeks and, from week 4 until the end of the experimental protocol, the mice's survival remained unaltered at 70.9%. Fecal hemoglobin presented a progressive increase until week 4 (5.8 ± 0.3 µmol Hg/g feces, p &lt; 0.0001) compared with the ethanol group, with no statistical differences to week 6. The highest level of tumor necrosis factor-α was observed on week 3; however, after week 4, a slight decrease in tumor necrosis factor-α concentration was verified, and the level was maintained until week 6 (71.3 ± 3.3 pg/mL and 72.7 ± 3.6 pg/mL, respectively). Conclusions: These findings allowed the verification of a stable pattern of clinical and inflammation signs after week 4, suggesting that the chronic model of TNBS-induced colitis develops in 4 weeks.
1. Introduction: Inflammatory bowel disease (IBD) comprises Crohn’s disease (CD) and ulcerative colitis (UC) [1]. CD and UC are chronic and relapsing inflammatory conditions of the gastrointestinal tract that have distinct pathological and clinical characteristics. The evolution of IBD follows the advancement of society [2,3]. It is almost a global disease, affecting all ages, including the pediatric population [4]. Several treatments are currently available for the treatment of IBD, though none of them reverses the underlying pathogenic mechanism of this disease [5,6]. Animal models of IBD remain essential for a proper understanding of histopathological change in the gastrointestinal tract and play a key role in the development of new pharmacological approaches [7]. The trinitrobenzene sulfonic acid (TNBS)-induced model, in particular, is a commonly used model of IBD since it is capable of reproducing CD in humans [8,9,10]. Additionally, the TNBS-induced model can mimic the acute and chronic stages of IBD [11,12]. TNBS is a haptenizing agent that stimulates a delayed-type hypersensitivity immune response, driving colitis in susceptible mouse strains [13,14,15,16,17]. The chemical is dissolved in ethanol, enabling the interaction of TNBS with colon tissue proteins. The ethanol breaks the mucosal barrier, allowing TNBS to penetrate into the bowel wall [16]. TNBS administration induces transmural colitis that is driven by a Th1-mediated immune response [13,14,17] that is characterized by the infiltration of CD4 cells, neutrophils, and macrophages into the lamina propria and the secretion of cytokines [14] The most common cytokines are tumor necrosis factor (TNF)-α and interleukin (IL)-12. Since the protocols using the TNBS-induced colitis model are not standardized, a systematic review [18] developed by our research group concludes that the chronic TNBS-induced colitis model can be obtained with multiple TNBS administrations. In the literature, there is no consensus about the induction method, and several original articles have been published with different ways to induce a chronic model of TNBS-induced colitis using different doses, numbers of TNBS administrations, strains, genders, and ages of mice. By this point of view, the main objective of this study is to develop and validate a chronic TNBS-induced colitis in mice in order to evaluate the effect of new pharmacological approaches for IBD. The advantage of this chronic model compared to acute models is that the latter may provide only limited information about the pathogenesis of human IBDs, as the chemical injury to the epithelial barrier leads to self-limiting inflammation rather than chronic disease [19]. Our research group has previously developed preclinical studies in an acute TNBS-induced colitis model [20,21,22]. However, as cited by Bilsborough and colleagues (2021), since IBD is a chronic disease, the development of a standardized and validated induction method for chronic colitis is useful for studying new pharmacological approaches [23]. Our findings allow us to conclude that TNBS-induced chronic colitis should be developed in 4 weeks, providing a chronic intestinal inflammation model. 5. Conclusions: A TNBS-induced colitis model was monitored for 6 weeks. A scheme of multiple TNBS administrations was performed since the aim was to achieve a chronic pattern of induced colitis and to identify the week in which the damage becomes chronic. 
Clinical manifestations of chronic colitis usually peak within 2 weeks and may be followed by partial recovery or death. These results were expected and are compatible with the correct induction of colitis [26]. Our findings allow us to conclude that TNBS-induced chronic colitis should be developed in 4 weeks, providing a chronic intestinal inflammation model. Indeed, the parameters under evaluation, such as clinical manifestations; biochemical markers, including fecal hemoglobin and pro- and anti-inflammatory cytokine levels; and macroscopic evaluation, corroborate that the chronic illness pattern is observed from week 4 after the induction. Some mice died during the early days of the study, possibly because they did not resist the aggravation of the disease in its acute phase. On the other hand, the remaining mice resisted and progressed to the chronic phase of the disease, showing the same symptoms but more lightly.
Background: Inflammatory bowel disease (IBD) is a world healthcare problem. In order to evaluate the effect of new pharmacological approaches for IBD, we aim to develop and validate chronic trinitrobenzene sulfonic acid (TNBS)-induced colitis in mice. Methods: Experimental colitis was induced by the rectal administration of multiple doses of TNBS in female CD-1 mice. The protocol was performed with six experimental groups, depending on the TNBS administration frequency, and two control groups (sham and ethanol groups). Results: The survival rate was 73.3% in the first three weeks and, from week 4 until the end of the experimental protocol, the mice's survival remained unaltered at 70.9%. Fecal hemoglobin presented a progressive increase until week 4 (5.8 ± 0.3 µmol Hg/g feces, p &lt; 0.0001) compared with the ethanol group, with no statistical differences to week 6. The highest level of tumor necrosis factor-α was observed on week 3; however, after week 4, a slight decrease in tumor necrosis factor-α concentration was verified, and the level was maintained until week 6 (71.3 ± 3.3 pg/mL and 72.7 ± 3.6 pg/mL, respectively). Conclusions: These findings allowed the verification of a stable pattern of clinical and inflammation signs after week 4, suggesting that the chronic model of TNBS-induced colitis develops in 4 weeks.
8,836
267
[ 246, 137, 227, 279, 166, 2382, 86, 138, 180, 218, 23, 64, 62, 109, 197, 80 ]
20
[ "group", "tnbs", "groups", "week", "mice", "observed", "weeks", "inflammatory", "ml", "colitis" ]
[ "ibd comprises crohn", "pharmacological approaches ibd", "introduction inflammatory bowel", "inflammatory conditions gastrointestinal", "intestinal inflammation model" ]
null
[CONTENT] inflammatory bowel disease | TNBS-induced colitis | chronic animal model [SUMMARY]
null
[CONTENT] inflammatory bowel disease | TNBS-induced colitis | chronic animal model [SUMMARY]
[CONTENT] inflammatory bowel disease | TNBS-induced colitis | chronic animal model [SUMMARY]
[CONTENT] inflammatory bowel disease | TNBS-induced colitis | chronic animal model [SUMMARY]
[CONTENT] inflammatory bowel disease | TNBS-induced colitis | chronic animal model [SUMMARY]
[CONTENT] Animals | Chronic Disease | Colitis | Colon | Disease Models, Animal | Ethanol | Female | Inflammatory Bowel Diseases | Mice | Trinitrobenzenesulfonic Acid | Tumor Necrosis Factor-alpha [SUMMARY]
null
[CONTENT] Animals | Chronic Disease | Colitis | Colon | Disease Models, Animal | Ethanol | Female | Inflammatory Bowel Diseases | Mice | Trinitrobenzenesulfonic Acid | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Animals | Chronic Disease | Colitis | Colon | Disease Models, Animal | Ethanol | Female | Inflammatory Bowel Diseases | Mice | Trinitrobenzenesulfonic Acid | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Animals | Chronic Disease | Colitis | Colon | Disease Models, Animal | Ethanol | Female | Inflammatory Bowel Diseases | Mice | Trinitrobenzenesulfonic Acid | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Animals | Chronic Disease | Colitis | Colon | Disease Models, Animal | Ethanol | Female | Inflammatory Bowel Diseases | Mice | Trinitrobenzenesulfonic Acid | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] ibd comprises crohn | pharmacological approaches ibd | introduction inflammatory bowel | inflammatory conditions gastrointestinal | intestinal inflammation model [SUMMARY]
null
[CONTENT] ibd comprises crohn | pharmacological approaches ibd | introduction inflammatory bowel | inflammatory conditions gastrointestinal | intestinal inflammation model [SUMMARY]
[CONTENT] ibd comprises crohn | pharmacological approaches ibd | introduction inflammatory bowel | inflammatory conditions gastrointestinal | intestinal inflammation model [SUMMARY]
[CONTENT] ibd comprises crohn | pharmacological approaches ibd | introduction inflammatory bowel | inflammatory conditions gastrointestinal | intestinal inflammation model [SUMMARY]
[CONTENT] ibd comprises crohn | pharmacological approaches ibd | introduction inflammatory bowel | inflammatory conditions gastrointestinal | intestinal inflammation model [SUMMARY]
[CONTENT] group | tnbs | groups | week | mice | observed | weeks | inflammatory | ml | colitis [SUMMARY]
null
[CONTENT] group | tnbs | groups | week | mice | observed | weeks | inflammatory | ml | colitis [SUMMARY]
[CONTENT] group | tnbs | groups | week | mice | observed | weeks | inflammatory | ml | colitis [SUMMARY]
[CONTENT] group | tnbs | groups | week | mice | observed | weeks | inflammatory | ml | colitis [SUMMARY]
[CONTENT] group | tnbs | groups | week | mice | observed | weeks | inflammatory | ml | colitis [SUMMARY]
[CONTENT] chronic | ibd | model | tnbs | colitis | tnbs induced | induced | disease | tnbs induced colitis | induced colitis [SUMMARY]
null
[CONTENT] group | week | pg | pg ml | presented | observed | ml | figure | groups | t4 [SUMMARY]
[CONTENT] chronic | colitis | induced | pattern | clinical manifestations | manifestations | phase | chronic colitis | weeks | evaluation [SUMMARY]
[CONTENT] group | tnbs | week | groups | mice | observed | ml | chronic | 15 | weeks [SUMMARY]
[CONTENT] group | tnbs | week | groups | mice | observed | ml | chronic | 15 | weeks [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] 73.3% | the first three weeks | week 4 | 70.9% ||| week 4 | 5.8 | 0.3 | p &lt | 0.0001 | week 6 ||| week 3 | week 4 | week 6 | 71.3 | 3.3 | 72.7 | 3.6 [SUMMARY]
[CONTENT] week 4 | 4 weeks [SUMMARY]
[CONTENT] ||| ||| TNBS ||| six | TNBS | two | sham ||| ||| 73.3% | the first three weeks | week 4 | 70.9% ||| week 4 | 5.8 | 0.3 | p &lt | 0.0001 | week 6 ||| week 3 | week 4 | week 6 | 71.3 | 3.3 | 72.7 | 3.6 ||| week 4 | 4 weeks [SUMMARY]
[CONTENT] ||| ||| TNBS ||| six | TNBS | two | sham ||| ||| 73.3% | the first three weeks | week 4 | 70.9% ||| week 4 | 5.8 | 0.3 | p &lt | 0.0001 | week 6 ||| week 3 | week 4 | week 6 | 71.3 | 3.3 | 72.7 | 3.6 ||| week 4 | 4 weeks [SUMMARY]
Rheumatoid arthritis: travelling biological era a Romanian X-ray population.
20108756
Rheumatoid arthritis (RA) is associated with loss of overall functionality of the locomotion system and it is connected with substantial economic losses.
BACKGROUND
RA cases have been enrolled from the southern and western parts of the country, covering an area of 23 counties.
METHOD
Notably, in comparison with the literature data, Romanian RA patients become work disabled 5.65 +/- 5.99 years after the diagnosis. At the cohort level, retirement in the first year after RA diagnosis is 22.9%. Of those, 13% were treated with biologic DMARDs, while 28.6% were on non-biologic DMARDs. In the oral therapy group, the most prescribed drug is leflunomide (61.2%). RA has an important impact on pain, function and utility, influenced by social factors. Patients' follow-up is often based on hospitalization.
RESULTS
Currently, when the clinician may choose one certain therapy or another, the social influence is still overwhelming at all evaluation levels in RA patients, as well as in the economic impact.
CONCLUSION
[ "Adalimumab", "Adolescent", "Adult", "Age Distribution", "Aged", "Anti-Inflammatory Agents, Non-Steroidal", "Antibodies, Monoclonal", "Antibodies, Monoclonal, Humanized", "Antirheumatic Agents", "Arthritis, Rheumatoid", "Disabled Persons", "Etanercept", "Female", "Humans", "Immunoglobulin G", "Infliximab", "Male", "Middle Aged", "Radiography", "Receptors, Tumor Necrosis Factor", "Romania", "Severity of Illness Index", "Young Adult" ]
3019021
null
null
Patients and methods
The lack of a National RA Register, as well as a National RA Database has imposed a different sample enrollment method. We used two sources. Some of the cases are represented by patients from the Internal Medicine and Rheumatology Department of ‘Dr. Ion Cantacuzino’ Hospital in Bucharest, during the year 2007. They have been drawn out chronologically, according to their time presentation, from the hospital electronic database. Inclusion criteria consisted of RA diagnosis and exclusion criteria, the presence of any malignancy. Through the collaboration with other rheumatologists within the country, other cases have been enrolled from the ambulatory care, in order to cover a larger territorial area: randomly, from their patients' lists, based on the same inclusion and exclusion criteria. The RA diagnosis was established by each specialist (rheumatologist) for each patient apart. The patients – initially 480, recorded with names and addresses – were invited, through a post consent letter, to attend a scientific research. Three series of self report interviews were collected by post mail (at our address, written on the enclosed stamped envelope). The collected interval time was of six months, as following: 0 – 6 – 12 months. The first interview was conducted during November – December 2007, the second one during May – June 2008 and the last one during November –December 2008. Following the first approach, from the initially 480 cases, we collected 206 responders' envelopes (response rate being roughly 50%). These cases were considered eligible for the study; the second and third mail approach was conducted only for these last patients. Each serial evaluation consisted of three different questionnaires: an original one, Health Assessment Questionnaire (HAQ) Disability and Discomfort Scales – simple translation, not being validated in our country yet –and EUROQOL EQ–5D, Romanian version (having the original authors' consent for using it). The collected data (all self reports) were distributed on the following interest categories: demographic (date of birth, age, sex, ethnic origin, marital status, environment of habitat, level of training, average income per month, professional status); co–morbidities: the categories of patients with high blood pressure (HBP), diabetes mellitus (DM), chronic hepatitis (CH), coronary heart disease (CHD), gastro–duodenal ulcer (GDU) – indicating episodes of digestive bleeding, renal failure (RF), asthma (A), osteoporosis (OP), osteoarthritis (OA), thyroid gland disease (T) and others have been noted together with arthroplasty procedures and other surgical interventions; concerning the major disease (RA): General features: diagnosis year, current treatment and its starting time, associated medication (corticotherapy /non steroidal anti–inflammatory drugs–NSAIDs). Functional characteristics: HAQ score, self reporting of pain intensity, disease activity and fatigue on a visual analogue scale of 100 mm (marked only at extremes), number of disabled days for usual activities, number of persons involved in home aid to its frequency. Quality of life characteristics: utility (EQ–5D) and EQ–domains components (RA impact on mobility, self care, usual activities, pain/discomfort, anxiety/ depression), self reporting of the health quality on a visual analogue scale of 100 mm (feeling thermometer), marked each millimeter. 
Features of the economic impact with a time frame of 6 months: the number of sick leave days and hospitalization days, the frequency of sick leave and hospitalizations, the number of medical visits to primary care and to the rheumatologist, appeals to the medical system (regardless of specialty), laboratory checks, the number of X-rays and CT/MRI examinations, reported rehabilitation frequency, and the patient's monthly contribution (out-of-pocket expenses) to the treatment. Data analyses: Geographically, the sample (n = 206) covers 23 counties from the southern and western part of the country (Fig 1). The large territorial distribution of the cases, as well as the statistical normality of the sample (Fig 2), led us to estimate that the sample is representative of the entire population suffering from RA in our country. The data were analyzed in SPSS 10; we used ANOVA and the two independent samples t test for continuous variables; the Chi-square, Kruskal–Wallis and Mann–Whitney tests for non-continuous variables; and bivariate correlations (Pearson and Spearman coefficients). The sample was subdivided according to therapy as follows: a group treated with oral agents (non-biologic DMARD monotherapy and combinations = Group A) and a biological agents group (biologic DMARDs = Group B). Seven cases were excluded from the review: five without remission therapy (the size of this subgroup being too small to be analyzed against the other), and two other cases that had not answered the questions regarding medication and could not be allotted to any group. Fig 1. Cohort territorial distribution. Fig 2. P–P plot for age (n = 206).
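The data analyses above list the tests used in SPSS 10 (ANOVA, independent-samples t test, Chi-square, Kruskal–Wallis, Mann–Whitney, and Pearson/Spearman correlations) for comparing the oral-therapy and biologics subgroups. A minimal sketch of how two of these comparisons could be run with SciPy is shown below; the group labels follow the text, but the arrays, variable names, and values are made-up placeholders, not patient data from the study.

```python
# Minimal sketch of the group comparisons and correlations described in the
# "Data analyses" paragraph (SPSS 10 in the original study). SciPy is used here
# instead, and all values below are made-up placeholders, not patient data.
import numpy as np
from scipy import stats

# Hypothetical HAQ scores for the oral-therapy group (A) and the biologics group (B)
haq_group_a = np.array([1.6, 1.9, 2.1, 1.4, 1.8, 2.0])
haq_group_b = np.array([1.1, 0.9, 1.3, 1.0, 1.2, 0.8])

# Parametric comparison (two independent samples t test)
t_stat, t_p = stats.ttest_ind(haq_group_a, haq_group_b)
# Non-parametric alternative (Mann-Whitney U), as used for non-continuous variables
u_stat, u_p = stats.mannwhitneyu(haq_group_a, haq_group_b, alternative="two-sided")

# Bivariate correlation between HAQ and EQ-5D utility (Spearman coefficient)
utility = np.array([0.45, 0.40, 0.30, 0.55, 0.42, 0.35,
                    0.70, 0.75, 0.62, 0.72, 0.68, 0.80])
haq_all = np.concatenate([haq_group_a, haq_group_b])
rho, rho_p = stats.spearmanr(haq_all, utility)

print(f"t test: t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")
print(f"Spearman rho = {rho:.2f}, p = {rho_p:.4f}")
```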
null
null
null
null
[]
[]
[]
[ "Background", "Rheumatoid arthritis: an up to date", "European Health Systems", "Paper objective", "Patients and methods", "Data analyses", "Results, discussions, conclusions" ]
At the beginning of the third millennium, building on a deeper understanding of the molecular mechanisms responsible for synovitis initiation in rheumatoid arthritis (RA), medical research reached new forms of therapy through the biological agents. After the era of non-biologic DMARDs (disease-modifying antirheumatic drugs), the introduction of biological agents into current medical practice has revolutionized the field of rheumatology. Recently, the evolution of RA was described as a 'potentially reversible/treatable physical disability' [1]. In parallel with the introduction of this new therapy, many clinical studies have shown evidence of its short- and long-term efficacy, as well as its tolerability [5]. Treatment with biologic DMARDs is expensive. However, the better reduction of disease activity and the effect on preserving long-term physical function might be cost-saving to the community, because disease improvement might lead not only to improvement of the quality of life but also to better utilization of health resources (such as hospitalizations) and reduced sick leave and work retirement. International cost-utility analyses have already shown that the extra costs needed to achieve the extra benefits are acceptable, with cost-utility ratios falling between 50,000–60,000 USD per QALY [5]. Is this acceptable in Romania? As long as these kinds of studies have not been performed in our country, the question still needs an answer. Moreover, these studies were performed in Europe and North America, and cost-effectiveness analyses cannot simply be transferred to other countries with a different healthcare and cost system. In developing countries, Romania among them, society does not have enough resources to entirely cover the payment mechanisms for all the patients who would theoretically have an indication for biologics. As a consequence, a centralized settlement program was developed, according to a nationally validated protocol; based on the assessment of these parameters, access to biologics is then decided. Considering these actualities, what does the RA population in Romania look like? What are the rheumatologists prescribing? What is the prescription trend and which factors influence physicians to choose between the treatment options? Costs, benefits? What is the ratio between the therapy forms? These are only a few of the questions that need answers in order to expand the RA picture to other geographical, social and economic areas. These data might be the start for formal cost-effectiveness analyses. Rheumatoid arthritis: an up to date Rheumatoid arthritis (RA) is a systemic chronic inflammatory disease, with fluctuating evolution and unpredictable prognosis [8]; it leads to a severe decline in functional status and quality of life and increases morbidity and mortality [7]. RA induces considerable socio-economic effects [2, 10–13]. It is known that after ten years of disease evolution, roughly half of the patients are work disabled; this brings the loss of productivity to the foreground of the economic impact of RA [2, 9, 10, 14, 15]. The major therapeutic goal in RA is to interfere with the pathogenic pathways of the disease. Stopping the joint destruction process would maintain the quality of life through the prevention of physical disability and premature death. Pharmacological treatment represents the main option.
The group of non-biologic remission agents, generically called DMARDs, consists of: methotrexate (MTX), sulphasalazine (SSZ), leflunomide (LFL), gold salts, hydroxychloroquine, D-penicillamine, cyclophosphamide, azathioprine, and cyclosporine A. MTX is currently the most used remissive agent, being therapeutically considered the 'gold standard' [3]. The introduction of biologics opened new perspectives at a pathogenic level (confirming the implication of immunity elements) and at a clinical practice level, offering the alternative of remission induction for non-responders to non-biologic DMARDs. In Romania, the biological therapy uses anti-TNF-alpha (alpha tumor necrosis factor) monoclonal antibodies (Infliximab, Adalimumab), a soluble receptor for TNF-alpha (Etanercept), and an anti-CD20 antibody (targeting the B lymphocyte surface) (Rituximab). Chronologically, the first one introduced with RA as an indication was Infliximab, in 2000, followed by Etanercept in 2004 (indicated in juvenile idiopathic arthritis and, starting with 2005, in adult RA as well); during 2005, Adalimumab was also introduced. The latest biologic agent adopted in our country for clinical use is Rituximab, in 2008. According to the National Health Insurance House database of The National Committee for the Biological Therapy Approval for RA Patients, during the first trimester of 2006 a number of 1074 patients were on biologics; in the fourth trimester of 2007 the total reached 1500 patients, and in the same trimester of 2008 the total number of patients was 2143 (Table 1). Table 1. Rheumatoid arthritis on biological therapy in Romania.
Chronologically, the first one introduced, having RA \nas indication was Infliximab, in 2000, followed by Etanercept, during\n 2004 (indicated in juvenile idiopathic arthritis and starting with \n 2005 in adult RA as well); during 2005 Adalimumab was also \n introduced. The latest biologic agent adopted in our country for \n clinical use is Rituximab, in 2008.\nFrom the National Health Insurance House database, in The \nNational Committee for the Biological Therapy Approval for RA \nPatients, during the first trimester of 2006, a number of 1074 \npatients were on biologics, in the fourth trimester of 2007 the \ntotal reaching 1500 patients, and in the same trimester of 2008 the \ntotal number of patients was 2143 (\nTable 1). \nRheumatoid arthritis on biological therapy in Romania\n European Health Systems Health systems are constantly changing. There are three main \nhealthcare systems in Europe: ‘The National \nHealthcare System’, ‘The Social Health \nInsurances System’– Bismark and ‘The \nCentralized Healthcare System’– Semashko. The \nmajor differences between these are responsible for the consequences \nof medical practice.\nNHS was at first introduced in England but nowadays it can be \nalso found in Denmark, Italy, Finland, Ireland, Norway, Sweden, \nGreece, Portugal and Spain. The system is financed through general \ntaxes, in controlled by the government and has both a state budget and \na private sector. All citizens have free access to the system, \nthe coverage is general and the state authorities manage the system. \nIn certain cases, the patients pay a part of the cost for some\n medical services. Its major disadvantage consists of long waiting \nlists for certain medical services and a high level of bureaucracy \n[6].\nThe Social Health Insurances System is the most used one and it \nis based on compiling the main elements of the social and\n medical insurances. This system operates in Germany, Austria, \nBelgium, Switzerland, France, Luxembourg and Netherlands. Even if \nthe system offers a broad coverage, a certain part of the \npopulation remains outside the coverage area of the medical services. \nThe system financing is based on the compulsory contributions of \nthe employers and employees [6].\nThe Centralized Healthcare System, introduced in Russia was typical \nfor the Central and Eastern European countries, which are now \ngoing through a transition process to the market economy. The state \nhad full control over the production factors and health facilities \nand services. The doctors were state clerks and there was no \nprivate sector. The medical assistance was free for everyone \nemployed oversized personnel and hospitals; it had no competition and \nit lacked performance [6].\nJust like other former socialist countries, Romania organized \nthe national healthcare system according to Semashko Russian model, \nbased on free access to medical services for every citizen. 
Despite \nthe fact that after the communism fell and reforms in healthcare \nsystem tried to get it closer to the German model, our system still \nfights with true weaknesses: little health expenditure as percentage \nin the GDP/capita (the current allocation is of only 3.2%, \ncompared to the necessary of 8% from GDP); centralized \nallocation of resources; overrated hospital services; \nlack of professional medical equipment and drugs; inequity in medical \nservices delivery across the regions of the country \n[6].\nGiven this picture with ambiguous borders, what is really \nhappening with RA patients who go beyond non–biologic \nDMARDs therapy phase being labeled as non– responsive?\nHealth systems are constantly changing. There are three main \nhealthcare systems in Europe: ‘The National \nHealthcare System’, ‘The Social Health \nInsurances System’– Bismark and ‘The \nCentralized Healthcare System’– Semashko. The \nmajor differences between these are responsible for the consequences \nof medical practice.\nNHS was at first introduced in England but nowadays it can be \nalso found in Denmark, Italy, Finland, Ireland, Norway, Sweden, \nGreece, Portugal and Spain. The system is financed through general \ntaxes, in controlled by the government and has both a state budget and \na private sector. All citizens have free access to the system, \nthe coverage is general and the state authorities manage the system. \nIn certain cases, the patients pay a part of the cost for some\n medical services. Its major disadvantage consists of long waiting \nlists for certain medical services and a high level of bureaucracy \n[6].\nThe Social Health Insurances System is the most used one and it \nis based on compiling the main elements of the social and\n medical insurances. This system operates in Germany, Austria, \nBelgium, Switzerland, France, Luxembourg and Netherlands. Even if \nthe system offers a broad coverage, a certain part of the \npopulation remains outside the coverage area of the medical services. \nThe system financing is based on the compulsory contributions of \nthe employers and employees [6].\nThe Centralized Healthcare System, introduced in Russia was typical \nfor the Central and Eastern European countries, which are now \ngoing through a transition process to the market economy. The state \nhad full control over the production factors and health facilities \nand services. The doctors were state clerks and there was no \nprivate sector. The medical assistance was free for everyone \nemployed oversized personnel and hospitals; it had no competition and \nit lacked performance [6].\nJust like other former socialist countries, Romania organized \nthe national healthcare system according to Semashko Russian model, \nbased on free access to medical services for every citizen. 
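The cost-per-QALY figures quoted in the background above refer to an incremental cost-utility ratio. As a hedged illustration (a standard health-economics definition, not a formula taken from this article), it can be written as:

```latex
% Incremental cost-utility ratio (ICER): extra cost per extra QALY gained.
% Subscripts B and A stand for the biologic strategy and the non-biologic
% DMARD comparator; C denotes mean cost and QALY mean quality-adjusted life years.
\[
\mathrm{ICER} = \frac{C_{B} - C_{A}}{\mathrm{QALY}_{B} - \mathrm{QALY}_{A}}
\]
```

A strategy is usually judged acceptable when this ratio falls below a willingness-to-pay threshold, such as the 50,000-60,000 USD per QALY range cited above.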
"Rheumatoid arthritis (RA) is a systemic chronic inflammatory disease with a fluctuating evolution and an unpredictable prognosis [8]; it leads to a severe decline in functional status and quality of life and increases morbidity and mortality [7]. RA induces considerable socio-economic effects [2, 10-13]. It is known that after ten years of disease evolution roughly half of the patients are work disabled; this brings the loss of productivity to the foreground of the economic impact of RA [2, 9, 10, 14, 15].
The major therapeutic goal in RA is to interfere with the pathogenic pathways of the disease. Stopping the process of joint destruction would maintain quality of life by preventing physical disability and premature death.
Pharmacological treatment represents the main option. The group of non-biologic remission-inducing agents, generically called DMARDs, consists of methotrexate (MTX), sulphasalazine (SSZ), leflunomide (LFL), gold salts, hydroxychloroquine, D-penicillamine, cyclophosphamide, azathioprine and cyclosporine A. MTX is currently the most widely used remission-inducing agent and is considered the therapeutic ‘gold standard’ [3].
The introduction of biologics opened new perspectives at the pathogenic level (confirming the involvement of the immune elements) and in clinical practice, offering an alternative for remission induction in non-responders to non-biologic DMARDs.
In Romania, biological therapy relies on anti-TNF-alpha (tumor necrosis factor alpha) monoclonal antibodies (Infliximab, Adalimumab), a soluble TNF-alpha receptor (Etanercept) and an anti-CD20 antibody directed against the B lymphocyte surface (Rituximab). Chronologically, the first agent introduced with RA as an indication was Infliximab, in 2000, followed by Etanercept in 2004 (initially indicated in juvenile idiopathic arthritis and, starting with 2005, in adult RA as well); Adalimumab was also introduced during 2005. The latest biologic agent adopted in our country for clinical use is Rituximab, in 2008.
According to the National Health Insurance House database of The National Committee for the Biological Therapy Approval for RA Patients, 1074 patients were on biologics during the first trimester of 2006; the total reached 1500 patients in the fourth trimester of 2007 and 2143 patients in the same trimester of 2008 (Table 1).
Rheumatoid arthritis on biological therapy in Romania", "Health systems are constantly changing. There are three main healthcare system models in Europe: ‘The National Healthcare System’ (NHS), ‘The Social Health Insurance System’ (Bismarck) and ‘The Centralized Healthcare System’ (Semashko). The major differences between them account for the differences seen in medical practice.
The NHS was first introduced in England, but nowadays it can also be found in Denmark, Italy, Finland, Ireland, Norway, Sweden, Greece, Portugal and Spain. The system is financed through general taxes, is controlled by the government and has both a state budget and a private sector. All citizens have free access to the system, coverage is universal and the state authorities manage the system. In certain cases, the patients pay part of the cost of some medical services. Its major disadvantages are the long waiting lists for certain medical services and a high level of bureaucracy [6].
The Social Health Insurance System is the most widely used and is based on combining the main elements of social and medical insurance. This system operates in Germany, Austria, Belgium, Switzerland, France, Luxembourg and the Netherlands. Even though the system offers broad coverage, a certain part of the population remains outside the coverage area of the medical services. The financing of the system is based on the compulsory contributions of employers and employees [6].
The Centralized Healthcare System, introduced in Russia, was typical for the Central and Eastern European countries, which are now going through a transition to a market economy. The state had full control over the production factors and over health facilities and services. Doctors were state employees and there was no private sector. Medical assistance was free for everyone; the system employed oversized staff and hospital networks, had no competition and lacked performance [6].
Just like other former socialist countries, Romania organized its national healthcare system according to the Russian Semashko model, based on free access to medical services for every citizen. Although healthcare reforms after the fall of communism tried to bring it closer to the German model, our system still struggles with real weaknesses: low health expenditure as a percentage of GDP (the current allocation is only 3.2%, compared to the necessary 8% of GDP); centralized allocation of resources; overrated hospital services; lack of professional medical equipment and drugs; and inequity in the delivery of medical services across the regions of the country [6].
Given this picture with ambiguous borders, what is really happening with the RA patients who go beyond the non-biologic DMARD therapy phase and are labeled as non-responsive?", "Having designed an observational cross-sectional cohort study of the cost-effectiveness of biological treatment compared to classical DMARDs, with a follow-up period of 12 months, in this paper we set out to describe the clinical and healthcare resource utilization characteristics of a cross-sectional sample of 206 RA patients and to analyze the correlations among them at baseline (December 2007).", "The lack of a National RA Register, as well as of a National RA Database, imposed a different sample enrollment method. We used two sources. Some of the cases are patients of the Internal Medicine and Rheumatology Department of ‘Dr. Ion Cantacuzino’ Hospital in Bucharest during the year 2007. They were drawn chronologically, according to their time of presentation, from the hospital electronic database.
The inclusion criterion was a diagnosis of RA and the exclusion criterion was the presence of any malignancy. Through collaboration with other rheumatologists within the country, further cases were enrolled from ambulatory care in order to cover a larger territorial area: randomly, from their patients' lists, based on the same inclusion and exclusion criteria. The RA diagnosis was established by each specialist (rheumatologist) for each individual patient.
The patients, initially 480, recorded with names and addresses, were invited through a consent letter sent by post to take part in a scientific study. Three series of self-report interviews were collected by post (at our address, written on the enclosed stamped envelope). The collection interval was six months, as follows: 0 - 6 - 12 months. The first interview was conducted during November - December 2007, the second during May - June 2008 and the last during November - December 2008. Following the first approach, we collected 206 responders' envelopes out of the initial 480 cases (a response rate of roughly 50%). These cases were considered eligible for the study; the second and third mail approaches were conducted only for these patients.
Each serial evaluation consisted of three different questionnaires: an original one; the Health Assessment Questionnaire (HAQ) Disability and Discomfort Scales (a simple translation, not yet validated in our country); and the EUROQOL EQ-5D, Romanian version (with the original authors' consent for its use).
The collected data (all self-reported) were distributed into the following categories of interest:
demographic: date of birth, age, sex, ethnic origin, marital status, environment of habitat, level of education, average income per month, professional status;
co-morbidities: the categories of patients with high blood pressure (HBP), diabetes mellitus (DM), chronic hepatitis (CH), coronary heart disease (CHD), gastro-duodenal ulcer (GDU), noting episodes of digestive bleeding, renal failure (RF), asthma (A), osteoporosis (OP), osteoarthritis (OA), thyroid gland disease (T) and others, recorded together with arthroplasty procedures and other surgical interventions;
concerning the major disease (RA):
General features: year of diagnosis, current treatment and its starting time, associated medication (corticotherapy / non-steroidal anti-inflammatory drugs, NSAIDs).
Functional characteristics: HAQ score; self-reported pain intensity, disease activity and fatigue on a 100 mm visual analogue scale (marked only at the extremes); number of days on which usual activities could not be performed; number of persons involved in help at home and its frequency.
Quality of life characteristics: utility (EQ-5D) and the EQ domain components (RA impact on mobility, self care, usual activities, pain/discomfort, anxiety/depression); self-reported quality of the health state on a 100 mm visual analogue scale (feeling thermometer), marked at each millimeter.
Features of the economic impact, over a time frame of 6 months: the number of sick leave days and hospitalization days; the frequency of sick leave and hospitalizations; the number of medical visits to primary care and to the rheumatologist; contacts with the medical system (regardless of specialty); laboratory checks; the number of X-rays and CT/MRI examinations; the reported rehabilitation frequency; and the patient's monthly contribution (out-of-pocket expenses) to the treatment.", "Geographically, the sample (n = 206) covers 23 counties from the Southern and Western part of the country (Fig 1). The large territorial distribution of the cases, as well as the normal distribution of the sample (Fig 2), led us to consider the sample representative of the entire population suffering from RA in our country.
The data were analyzed in SPSS 10; we used ANOVA and the two independent samples t test for continuous variables, the Chi-square, Kruskal-Wallis and Mann-Whitney tests for non-continuous variables, and bivariate correlations (Pearson and Spearman coefficients).
The sample was subdivided according to therapy, as follows: the group treated with oral agents (non-biologic DMARDs, monotherapy and combinations = Group A) and the biological agents group (biologic DMARDs = Group B). Seven cases were excluded from the analysis: five without remission therapy (this subgroup being too small to be compared with the others) and two other cases that did not answer the questions regarding medication and could not be allotted to any group.
Cohort territorial distribution
P-P plot for age (n=206)",
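For illustration only (this sketch is not part of the original article or dataset), the group comparisons and correlations described in the data analysis above, which the authors ran in SPSS 10, could be reproduced along the following lines with pandas and scipy; the file name and the column names (group, haq, retired, education, income) are hypothetical placeholders:

```python
# Hedged sketch of the reported analysis workflow: Welch t test and Mann-Whitney U
# for group A vs. group B comparisons of a continuous variable, chi-square for a
# categorical variable, and a Spearman rank correlation. Assumes a hypothetical
# per-patient CSV with the columns named below.
import pandas as pd
from scipy import stats

df = pd.read_csv("ra_baseline.csv")            # hypothetical baseline file, one row per patient
a = df[df["group"] == "A"]                     # non-biologic DMARDs
b = df[df["group"] == "B"]                     # biologic DMARDs

# Continuous variable (e.g. HAQ score): parametric and non-parametric group comparison
t_stat, t_p = stats.ttest_ind(a["haq"], b["haq"], equal_var=False, nan_policy="omit")
u_stat, u_p = stats.mannwhitneyu(a["haq"].dropna(), b["haq"].dropna())

# Categorical variable (e.g. retired yes/no) across groups: chi-square on the contingency table
chi2, chi_p, dof, expected = stats.chi2_contingency(pd.crosstab(df["group"], df["retired"]))

# Spearman correlation (e.g. education level vs. monthly income)
rho, rho_p = stats.spearmanr(df["education"], df["income"], nan_policy="omit")

print(f"t test p={t_p:.3f}; Mann-Whitney p={u_p:.3f}; chi-square p={chi_p:.3f}; "
      f"Spearman rho={rho:.2f} (p={rho_p:.3f})")
```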
"The sample and subgroup features, spread over the study categories at inclusion, are summarized in Tables 2, 3, 4, 5, 6 and 7.
N.B. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p<0.05; NS = not statistically significant.
Demographic characteristics
Although patient age in group B is significantly lower (Table 2), this difference is not reflected in working status or income. The figures describe an RA population with a mean age of 54.90 ± 12.67 years, theoretically part of the working-age category. In practice, however, two thirds are retired, most cases have a low monthly income (<1000 lei/month for 90.3%), and approximately half have completed only primary education (it seems we are dealing with an RA population that is young, poor and with little formal education). In this framework, there is a homogeneous, significant positive correlation between the level of education and the monthly income in both groups (ρs = 0.645, p < 0.01). These issues outline the social conditions behind the demographic characteristics.
N.B. Results are given as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p<0.05; NS = not statistically significant.
Associated RA morbidities: characteristics
Comorbidity is significantly associated with RA (Table 3). Over half of the patients (57.3%) had three diseases associated with RA, in a population with a mean age of 54.90 ± 12.67 years. Among them, the first three places are occupied by high blood pressure, osteoporosis and coronary heart disease; as already confirmed, cardiovascular diseases independently increase the mortality rate. The significantly higher arthroplasty rate in group B (7.1%, compared to 1.6%; p<0.05) also stands out: it points to a history of more severe disease in the current biologics cases.
N.B. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); c percentages include monotherapy and combinations; group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p<0.05; NS = not statistically significant; NA = not applicable; NSAIDs = non-steroidal anti-inflammatory drugs; MTX = methotrexate; SSZ = sulphasalazine; LFL = leflunomide.
General characteristics of rheumatoid arthritis
The average disease duration is 9.40 ± 8.87 years, with a significant difference between groups in favor of biologics (11.32 ± 8.30 years). In other words, biological agents predominate in older forms of the disease in a younger category of patients. If in group A disease duration increases linearly with age (r = 0.233, p < 0.01), in group B these factors are independent, supporting a broader distribution of the cases along the age axis.
According to the figures (Table 4), two thirds of the patients are on non-biologic DMARDs (¾ monotherapy, ¼ combinations) and one third on biological agents. Interestingly, when assessing the entire sample the most prescribed DMARD seems to be MTX (48.6%), including the MTX prescriptions associated with biologics, while looking only at group A (non-biologic DMARDs) the first place is occupied by LFL (61.2%). The explanation relies on two aspects: on the one hand, the RA population is captured at approximately 10 years of disease evolution, when most patients are beyond the MTX stage, either through inefficacy (secondary resistance) or through the cumulative dose over time; on the other hand, the influence of the various pharmaceutical companies on prescribing a certain drug should not be overlooked. What is interesting about the analyzed population is that no other DMARD preparations were declared, which draws attention to a phase of decline in the use of formerly overused medication (gold salts, hydroxychloroquine etc.), as well as to the selective promotion of products by pharmaceutical companies. In the biologics group, the ‘oldest’ drug is also the most prescribed one: Infliximab (65.7%). This feature follows the nationwide distribution of TNF-blocking agents in ‘The National Committee for the Biological Therapy Approval’ (Table 1), where Infliximab was the most frequent biologic at the end of 2007. We conclude that these figures reflect a plateau stage in the dynamics of the described parameters, as the average duration of the reported treatment is 2.70 ± 2.64 years.
NSAID consumption is high (89%), without differences between groups. Looking at the figures, over ⅓ of the cases require daily NSAIDs (35.6%). Considering the correlation with age and disease duration, NSAID intake increased with disease duration only in group A (ρs = 0.212; p < 0.05). Setting these data against the associated pathology, the risk of adverse events, even fatal ones (through major cardiovascular disorders), is amplified.
Corticotherapy is part of the treatment in 39% of the cases. At sample level, looking at the relationship between corticotherapy and the HAQ disability categories (Fig 3), cortisone therapy is absent in 73% of the cases in the HAQ categories <1.6; starting with HAQ scores > 1.6, corticotherapy is present in 59% (p < 0.01). The ratios differ between subgroups. Group A describes a similar curve, except that the reversal point starts at HAQ values > 2.1. On the contrary, in group B cortisone therapy is present in 81% of the cases belonging to the HAQ category 1.6-2.1; in all the other intervals the majority of the patients do not require cortisone therapy, both for HAQ values <1.6 and for those > 2.1 (75% and 58% of patients, respectively), p < 0.01.
Surprisingly, group B showed a positive correlation (ρs = 0.247, p < 0.05) between corticotherapy and age. It clearly appears that the oldest patients belong mostly to the HAQ interval 1.1-2.1 (meaning moderate and advanced disability); the same area also recorded a peak in cortisone therapy.
At least two conclusions can be drawn: in the oral therapy group, the ‘RA end stage’ of irreversible disability (27.2%) is frequently cortisone dependent; in this group, corticotherapy grows linearly with the degree of disability, in order to maintain a minimum functionality of the locomotor system. On the contrary, in the biologics group cortisone prescription follows the potentially reversible disability intervals (HAQ 1.1-2.1), where case density is also the highest (51.5%).
Corticotherapy in relation with HAQ categories
Is working status related to the consumption of NSAIDs and cortisone? In group B, the retired cases correlate positively with corticotherapy (ρs = 0.260, p < 0.05) and with NSAID intake (ρs = 0.265, p < 0.05), while the correlation is negative for the professionally active cases. This reflects the former severity of the disease in group B: more severe disease (HAQ partly represents the cumulative disease activity, i.e. damage over time) reduces the ability to work and makes it more likely to receive steroids and/or NSAIDs. Considering that the groups show no significant difference in the proportion of professionally active patients, these factors seem to be independent in the oral therapy group; this could also reflect a formerly less severe disease in group A.
In functional terms (Table 5), the average HAQ score at baseline was 1.29 ± 0.80. Looking at the HAQ categories, household activities recorded the most severe score (36.9% of cases), followed by hygiene (17.9%).
Considering 6 categories of severity, each corresponding to a certain range of the HAQ score, we note that in group A 27.2% of the cases are severely and very severely disabled, compared to 15.7% in group B (p < 0.05). Advanced and moderate disability is observed in 51.5% of the cases in group B, compared to 27.1% in group A (p < 0.05) (Fig 4). In other words, the terminal (end-stage) RA phases, which have fewer therapeutic benefits in functional terms, are mostly treated with classical agents, while the potentially reversible stages of RA disability are found especially in the group receiving the more expensive therapy. In a society with limited health resources, this ‘selective affiliation’ of cases is predictable, considering that the socio-economic reinsertion of patients is also a therapeutic target.
Age is a factor that increases the degree of disability as estimated by HAQ (r = 0.417, p < 0.01); the correlation of disability with disease duration is less obvious (r = 0.251, p < 0.05). Along the same lines, we mention the influence of HAQ on retired status (ρs = 0.318, p < 0.01).
The self-administered quantitative VAS scales showed unexpectedly high scores for pain (mean 54.45 ± 24.23), disease activity (mean 55.17 ± 23.12) and fatigue (57.49 ± 24.87). Age is an inflexible characteristic, but its influence on the self-reported assessments is confirmed by the positive correlation with the mentioned variables (r = 0.219, 0.259, 0.272, p < 0.01).
Although approximately ¼ of the cohort (22.8%) belongs to a functionally irreversible RA phase, the assistance needed in everyday life exceeds the expected level: 86.7% of the patients require household help and 58.3% of them report frequent and permanent help.
A single person is involved in the household help in most of the cases (68.8%); 41.7% of the patients use auxiliary objects and 35% aid devices, mostly for the category of usual daily activities (64.6%).
HAQ categories according to therapy groups
N.B. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p<0.05; NS = not statistically significant.
RA functional characteristics
The overall functional impact on daily life materializes in about ten days lost every month because of the inability to perform household tasks (average: 9.64 ± 9.06 days). There is an important and relatively homogeneous positive correlation between the lost days and the self-reported level of pain, fatigue and disease activity (r = 0.650, p < 0.01) in both groups.
The category of working active cases has a certain degree of independence at home. Both groups reveal a negative correlation of these cases with the days lost for usual activities (ρs = -0.255 and -0.351, p < 0.01), while in group A the frequency of help is higher (ρs = -0.307, p < 0.01); for the pensioners the correlation is positive (meaning an increase in non-medical direct costs).
Utility (Table 6), assessed on a scale from 0 to 1, where 1 corresponds to perfect health and 0 to death, averages 0.417 ± 0.337 for the whole sample. Between groups there is a difference in favor of biologics (0.382 ± 0.347 versus 0.452 ± 0.317, p = 0.1), but both figures are low. The most frequently reported score was 0.516, found in one third of the cases. It is interesting that the proportion of those who reported moderate and severe problems in the utility components somewhat stratifies the RA impact on the patients' daily life. Thus, 95.5% reported pain/discomfort, 83% have problems with mobility and usual activities, 74% with self care and 69% have anxiety/depression.
The quality of the health state assessed through VAS showed an average score of 47.39 ± 22.13, with a statistically significant difference between groups in favor of biologics (44.92 ± 22.34 versus 51.36 ± 21.23, p < 0.05).
N.B. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; ** percentages represent the frequency of problems (moderate and severe) in the mentioned category; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p<0.05; NS = not statistically significant.
Utility and quality of life parameters
In both subgroups, increased disability lowers the quality of the health state and the utility (group A: ρs = -0.618 and -0.665, p < 0.01; group B: ρs = -0.707 and -0.552, p < 0.01).
Given the social context of the RA patient in Romania, we consider it appropriate to present the correlations of some social elements with the quality of life components, independently of other factors directly related to RA:
an increasing level of education is associated with the quality of the health state and with utility: r = 0.377 and r = 0.380, p < 0.01;
there is an association between the standard of living determined by a low monthly income and the utility score, as well as the quality of the health state: r = 0.323 and r = 0.364, p < 0.01;
there is an association between a low monthly income and the severity of the utility components: mobility (ρs = -0.186), self care (ρs = -0.302), usual activities (ρs = -0.304), pain/discomfort (ρs = -0.233), anxiety/depression (ρs = -0.216), p < 0.01;
there is an association between inactivity (the retired group) and the severity of the utility components: ρs = 0.224; 0.250; 0.255; 0.167; 0.220, p < 0.01.
In order to reduce the RA impact at the individual, social and economic level, measures supporting and stimulating any working activity, tailored to the functional capacity allowed by the disease, are required.
Analyzing effectiveness strictly in terms of utility and quality of the health state, the biologics class is clearly superior to classical therapy. Extending the analysis to the characteristics of the economic impact, the differences between groups do not show the same behavior (Table 7).
For the professionally active subgroup, labor productivity was evaluated through sick leave and early retirement.
Quantifying the frequency of absenteeism, 20% of the patients reported sick leave in the last 6 months. There is, however, a significant difference in sick leave duration, which was longer in group A (mean 3.58 ± 8.66 days, compared to 0.43 ± 1.08 days in group B, p < 0.05). The analysis of the correlation with hospitalization duration revealed that the two features are independent. As a result, in the DMARDs group the longer sick leave is not a consequence of hospitalization, but probably of outpatient visits to primary care or to the rheumatologist, the only ones who can grant sick leave in ambulatory care.
The loss of labor productivity through early retirement reaches a threshold of 38.5% of the cases (33.7% in group A and 46.9% in group B), at a median duration after the time of RA diagnosis of only 5.65 ± 5.99 years (the average being comparable between groups). Interestingly, in the first year after RA diagnosis 22.9% of the newly diagnosed patients are medically retired (28.6% in group A and 13% in group B). Recall that the literature reports a loss of labor productivity through work incapacity of about 50% in the first 10 years after diagnosis [15].
N.B. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); c: n = 60 (working active subgroup); d: n = 143 (retired subgroup); e: n = 55 (RA retired subgroup); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p<0.05; NS = not statistically significant.
RA economic impact characteristics
What caused the patients to be declared work disabled so early?
No evident correlations (the factors behaved independently) were found between the degree of disability (HAQ) and the time elapsed from the moment of diagnosis to RA-related retirement. On the contrary, there is a strong positive correlation (r = 0.758, p < 0.01) between disease duration and the time elapsed from diagnosis until RA-related retirement. Adding age to this equation brought no additional information. From this perspective, disease duration, not patient age, comes to the foreground of the RA impact on labor productivity.
However, the figures show that a significant proportion of the patients are declared work disabled less than one year after the time of diagnosis. From this perspective, disease duration loses the top position in the final decision on work capacity. Also considering some social factors, we found interesting associations:
the reduction of monthly income is associated with the time from RA diagnosis to medical retirement (r = 0.565, p < 0.01);
the higher the education level, the greater the tendency to remain professionally active (a significant positive correlation in both groups, but more pronounced in group B: ρs = 0.466, p < 0.01).
In conclusion, it seems that in our country the social level of the RA patients also plays a major role in the loss of work productivity. This socio-economic weakness sustains a vicious circle: ‘a small proportion of working active population - insufficient funds allocated to the health system - strong selection and low accessibility of patients to more expensive therapies, even if these have superior efficacy’.
Regarding the follow-up visits to the rheumatologist, ⅓ of the cases reported monthly visits, with statistically significant differences between groups (39% in group A, compared with 21% in group B, p < 0.01); 20% of the cases are monitored at 2-month intervals (11.2% in group A, compared with 35.7% in group B, p < 0.01). From the perspective of the entire cohort, the rhythm of the monitoring visits has no correlation with disability severity (HAQ). Given this aspect, and also considering the economic implications of a medical check, a question arises: what drives the rhythm of follow-up? Looking within groups across disability levels, the patients with more severe disabilities are monitored on a monthly basis; differences occur in the HAQ interval 0.6-2.1. Thus, in group A most cases are monitored monthly, while in group B 2-month visits dominate. In terms of drug prescription, most non-biologic DMARD prescriptions belong to the primary care network. Thus, the monthly rheumatology visits for the lower categories of disability could have two interpretations: on the one hand, they could point to an excess use of healthcare departments (inside and outside the hospital); on the other hand, the patients could have better function precisely because they come to the hospital more frequently. The study design implies one year of follow-up, so the dynamics of the patients' characteristics will support one of these two hypotheses (and will be reported as well). In group B, the 2-month follow-up is probably related to the administration rhythm of Infliximab. At this point, it seems that the rhythm of rheumatology monitoring is determined by the clinician.
41.7% of the cases visit the primary care network monthly and 27% have monthly visits to other specialties, with cardiology at the top.
Regarding laboratory monitoring, approximately ¼ of the cases are tested at 2-month intervals, but with significant differences between groups: 35.7% in group B, compared to 21.4% in group A (p < 0.01). The biologic patient is therefore more closely monitored biologically. These data reflect the physician's option.
Radiological monitoring revealed that about ¼ of the cases had no X-ray control in the last 6 months, with significant differences between groups (32% in group A, 51% in group B, p < 0.01), while roughly half of the cases had between 1 and 3 radiographs.
For 75% of the cases, the monthly patient contribution to the treatment is less than 100 lei, with no significant differences between groups. This level of out-of-pocket expenses represents 10% of the average monthly income for 90% of the patients included.
Concerning the level of direct medical costs, the economic impact shows that the hospitalization rate (reported by half of the cases) is significantly higher in group B (67.1% versus 47.2%, p <0.01). Although there are no significant differences between groups in the length of hospitalization (6.82 days and 4.79 days), it seems that with increasing age only the patients treated with biological agents require more frequent hospitalizations of longer duration (r = 0.529, p < 0.01), eventually amplifying the direct costs.
The frequency and duration of hospitalization are directly related to the degree of disability in group A (ρs = 0.323, p < 0.01), while in group B this holds for the duration of hospitalization, but not for its frequency (ρs = 0.329; p < 0.01).
The higher hospitalization rate in group B is not a discrepancy without an explanation: the reason lies in the large proportion of biologics patients treated with Infliximab, which is administered only through hospital admission.
In conclusion, in our country the rate of hospitalizations is not only a consequence of RA relapse episodes. The current healthcare services, still hospital based and associated with a particular social context, could increase the direct medical costs in cases where hospitalization is not compulsory. This claim requires supporting evidence in monetary units, provided by the data analysis which will be reported soon." ]
[ "Background", "Rheumatoid arthritis: an up to date", "European Health Systems", "Paper objective", "methods", "Data analyses", "Results, discussions, conclusions" ]
[ "rheumatoid arthritis", "biologic and non–biologic DMARD", "disability", "social", "utility" ]
Background: At the beginning of the third millennium, starting with getting thoroughly into the molecular mechanisms responsible for the synovitis initiation in rheumatoid arthritis (RA), medical research has reached new therapy forms, through the biological agents. After the non–biologic DMARDs (disease modifying antirheumatic drugs) era, the introduction of biological agents in the current medical practice has revolutionized the Rheumatology field. Recently, the RA evolution was described as a ‘potentially reversible/treatable physical disability’ [1]. Parallel with this new therapy introduction, many clinical studies have shown its evidence based on short term and long term efficacy, as well as tolerability [5]. Treatment with biologic DMARDs is expensive. However, the better reduction of disease activity and effect on the retirement of a long–term physical function might be cost–saving to the community, because disease improvement might lead to the improvement of the quality of life but also to improved utilization of health resources (such as hospitalizations) and reduced sick leaving and work retirements. In International cost– utility analyses, it has already been shown that the extra costs to achieve the extra benefits are acceptable with cost– utility ratios falling between 50,000–60,000 USD/ 1 QALY [5]. Is this acceptable in Romania? As long as these kinds of studies have not been performed in our country, the question still needs an answer. However, these studies were performed in Europe and North–America and cost–effectiveness analyses cannot simply be transferred to other countries which have a different healthcare and cost system. In developing countries, along with Romania, the society has not enough power to entirely cover the payment mechanisms for all the patients who would theoretically have indication for biologics. As a consequence, a centralized settlement program was developed, according to a nationally validated protocol. Based on the parameters assessment, the access to biologics is then decided. Considering these actualities how does the RA population in Romania looks like? What are the rheumatologists prescribing? What is the prescription trend and which are the factors that influence physicians to choose between the treatment options? Costs, benefits? What is the report between the therapy forms? These are only a few questions which need answers in order to expand the RA picture to other geographical, social and economical areas. These data might be the start for formal cost–effectiveness analyses. Rheumatoid arthritis: an up to date Rheumatoid arthritis (RA) is a systemic chronic inflammatory disease, with fluctuant evolution and unpredictable prognosis [8]; it leads to severe decline in functional status and quality of life and increases morbidity and mortality [7]. RA induces considerable socio–economic effects [2, 10– 13]. It is known that after ten years of disease evolution, roughly half of the patients are work disabled; this brings the loss of productivity in the foreground of the RA economic impact [2, 9, 10, 14, 15]. The major therapeutically goal in RA is to interfere with the disease pathogenic paths. Stopping the joint destruction process would maintain the quality of life, through the prevention of physical disability and premature death. Pharmacological treatment represents the main option. 
The group of non–biologic remission agents, generically called DMARDs, consists of: methotrexate (MTX), sulphasalasine (SSZ), leflunomide (LFL), gold salts, hydroxiclorochine, D– penicillamine, cyclophosfamide, azathioprine, cyclosporine A. MTX is currently the most used remissive agent, being therapeutically considered the ‘gold–standard’ [3]. The biologics introduction opened new perspectives at a pathogenically level (confirming the implications of the immunity elements) and at a clinical practice level, offering the alternative of the remission induction for non responders to non–biologic DMARDs. In Romania, the biological therapy uses anti TNF–alpha (alpha tumor necrosis factor) monoclonal antibody (Infliximab) and soluble receptors for TNF–alpha (Etanercept, Adalimumab) and anti CD20 antibodies (from B lymphocytes surface) (Rituximab). Chronologically, the first one introduced, having RA as indication was Infliximab, in 2000, followed by Etanercept, during 2004 (indicated in juvenile idiopathic arthritis and starting with 2005 in adult RA as well); during 2005 Adalimumab was also introduced. The latest biologic agent adopted in our country for clinical use is Rituximab, in 2008. From the National Health Insurance House database, in The National Committee for the Biological Therapy Approval for RA Patients, during the first trimester of 2006, a number of 1074 patients were on biologics, in the fourth trimester of 2007 the total reaching 1500 patients, and in the same trimester of 2008 the total number of patients was 2143 ( Table 1). Rheumatoid arthritis on biological therapy in Romania Rheumatoid arthritis (RA) is a systemic chronic inflammatory disease, with fluctuant evolution and unpredictable prognosis [8]; it leads to severe decline in functional status and quality of life and increases morbidity and mortality [7]. RA induces considerable socio–economic effects [2, 10– 13]. It is known that after ten years of disease evolution, roughly half of the patients are work disabled; this brings the loss of productivity in the foreground of the RA economic impact [2, 9, 10, 14, 15]. The major therapeutically goal in RA is to interfere with the disease pathogenic paths. Stopping the joint destruction process would maintain the quality of life, through the prevention of physical disability and premature death. Pharmacological treatment represents the main option. The group of non–biologic remission agents, generically called DMARDs, consists of: methotrexate (MTX), sulphasalasine (SSZ), leflunomide (LFL), gold salts, hydroxiclorochine, D– penicillamine, cyclophosfamide, azathioprine, cyclosporine A. MTX is currently the most used remissive agent, being therapeutically considered the ‘gold–standard’ [3]. The biologics introduction opened new perspectives at a pathogenically level (confirming the implications of the immunity elements) and at a clinical practice level, offering the alternative of the remission induction for non responders to non–biologic DMARDs. In Romania, the biological therapy uses anti TNF–alpha (alpha tumor necrosis factor) monoclonal antibody (Infliximab) and soluble receptors for TNF–alpha (Etanercept, Adalimumab) and anti CD20 antibodies (from B lymphocytes surface) (Rituximab). Chronologically, the first one introduced, having RA as indication was Infliximab, in 2000, followed by Etanercept, during 2004 (indicated in juvenile idiopathic arthritis and starting with 2005 in adult RA as well); during 2005 Adalimumab was also introduced. 
The latest biologic agent adopted in our country for clinical use is Rituximab, in 2008. From the National Health Insurance House database, in The National Committee for the Biological Therapy Approval for RA Patients, during the first trimester of 2006, a number of 1074 patients were on biologics, in the fourth trimester of 2007 the total reaching 1500 patients, and in the same trimester of 2008 the total number of patients was 2143 ( Table 1). Rheumatoid arthritis on biological therapy in Romania European Health Systems Health systems are constantly changing. There are three main healthcare systems in Europe: ‘The National Healthcare System’, ‘The Social Health Insurances System’– Bismark and ‘The Centralized Healthcare System’– Semashko. The major differences between these are responsible for the consequences of medical practice. NHS was at first introduced in England but nowadays it can be also found in Denmark, Italy, Finland, Ireland, Norway, Sweden, Greece, Portugal and Spain. The system is financed through general taxes, in controlled by the government and has both a state budget and a private sector. All citizens have free access to the system, the coverage is general and the state authorities manage the system. In certain cases, the patients pay a part of the cost for some medical services. Its major disadvantage consists of long waiting lists for certain medical services and a high level of bureaucracy [6]. The Social Health Insurances System is the most used one and it is based on compiling the main elements of the social and medical insurances. This system operates in Germany, Austria, Belgium, Switzerland, France, Luxembourg and Netherlands. Even if the system offers a broad coverage, a certain part of the population remains outside the coverage area of the medical services. The system financing is based on the compulsory contributions of the employers and employees [6]. The Centralized Healthcare System, introduced in Russia was typical for the Central and Eastern European countries, which are now going through a transition process to the market economy. The state had full control over the production factors and health facilities and services. The doctors were state clerks and there was no private sector. The medical assistance was free for everyone employed oversized personnel and hospitals; it had no competition and it lacked performance [6]. Just like other former socialist countries, Romania organized the national healthcare system according to Semashko Russian model, based on free access to medical services for every citizen. Despite the fact that after the communism fell and reforms in healthcare system tried to get it closer to the German model, our system still fights with true weaknesses: little health expenditure as percentage in the GDP/capita (the current allocation is of only 3.2%, compared to the necessary of 8% from GDP); centralized allocation of resources; overrated hospital services; lack of professional medical equipment and drugs; inequity in medical services delivery across the regions of the country [6]. Given this picture with ambiguous borders, what is really happening with RA patients who go beyond non–biologic DMARDs therapy phase being labeled as non– responsive? Health systems are constantly changing. There are three main healthcare systems in Europe: ‘The National Healthcare System’, ‘The Social Health Insurances System’– Bismark and ‘The Centralized Healthcare System’– Semashko. 
The major differences between these are responsible for the consequences of medical practice. NHS was at first introduced in England but nowadays it can be also found in Denmark, Italy, Finland, Ireland, Norway, Sweden, Greece, Portugal and Spain. The system is financed through general taxes, in controlled by the government and has both a state budget and a private sector. All citizens have free access to the system, the coverage is general and the state authorities manage the system. In certain cases, the patients pay a part of the cost for some medical services. Its major disadvantage consists of long waiting lists for certain medical services and a high level of bureaucracy [6]. The Social Health Insurances System is the most used one and it is based on compiling the main elements of the social and medical insurances. This system operates in Germany, Austria, Belgium, Switzerland, France, Luxembourg and Netherlands. Even if the system offers a broad coverage, a certain part of the population remains outside the coverage area of the medical services. The system financing is based on the compulsory contributions of the employers and employees [6]. The Centralized Healthcare System, introduced in Russia was typical for the Central and Eastern European countries, which are now going through a transition process to the market economy. The state had full control over the production factors and health facilities and services. The doctors were state clerks and there was no private sector. The medical assistance was free for everyone employed oversized personnel and hospitals; it had no competition and it lacked performance [6]. Just like other former socialist countries, Romania organized the national healthcare system according to Semashko Russian model, based on free access to medical services for every citizen. Despite the fact that after the communism fell and reforms in healthcare system tried to get it closer to the German model, our system still fights with true weaknesses: little health expenditure as percentage in the GDP/capita (the current allocation is of only 3.2%, compared to the necessary of 8% from GDP); centralized allocation of resources; overrated hospital services; lack of professional medical equipment and drugs; inequity in medical services delivery across the regions of the country [6]. Given this picture with ambiguous borders, what is really happening with RA patients who go beyond non–biologic DMARDs therapy phase being labeled as non– responsive? Rheumatoid arthritis: an up to date: Rheumatoid arthritis (RA) is a systemic chronic inflammatory disease, with fluctuant evolution and unpredictable prognosis [8]; it leads to severe decline in functional status and quality of life and increases morbidity and mortality [7]. RA induces considerable socio–economic effects [2, 10– 13]. It is known that after ten years of disease evolution, roughly half of the patients are work disabled; this brings the loss of productivity in the foreground of the RA economic impact [2, 9, 10, 14, 15]. The major therapeutically goal in RA is to interfere with the disease pathogenic paths. Stopping the joint destruction process would maintain the quality of life, through the prevention of physical disability and premature death. Pharmacological treatment represents the main option. 
The group of non–biologic remission agents, generically called DMARDs, consists of: methotrexate (MTX), sulphasalasine (SSZ), leflunomide (LFL), gold salts, hydroxiclorochine, D– penicillamine, cyclophosfamide, azathioprine, cyclosporine A. MTX is currently the most used remissive agent, being therapeutically considered the ‘gold–standard’ [3]. The biologics introduction opened new perspectives at a pathogenically level (confirming the implications of the immunity elements) and at a clinical practice level, offering the alternative of the remission induction for non responders to non–biologic DMARDs. In Romania, the biological therapy uses anti TNF–alpha (alpha tumor necrosis factor) monoclonal antibody (Infliximab) and soluble receptors for TNF–alpha (Etanercept, Adalimumab) and anti CD20 antibodies (from B lymphocytes surface) (Rituximab). Chronologically, the first one introduced, having RA as indication was Infliximab, in 2000, followed by Etanercept, during 2004 (indicated in juvenile idiopathic arthritis and starting with 2005 in adult RA as well); during 2005 Adalimumab was also introduced. The latest biologic agent adopted in our country for clinical use is Rituximab, in 2008. From the National Health Insurance House database, in The National Committee for the Biological Therapy Approval for RA Patients, during the first trimester of 2006, a number of 1074 patients were on biologics, in the fourth trimester of 2007 the total reaching 1500 patients, and in the same trimester of 2008 the total number of patients was 2143 ( Table 1). Rheumatoid arthritis on biological therapy in Romania European Health Systems: Health systems are constantly changing. There are three main healthcare systems in Europe: ‘The National Healthcare System’, ‘The Social Health Insurances System’– Bismark and ‘The Centralized Healthcare System’– Semashko. The major differences between these are responsible for the consequences of medical practice. NHS was at first introduced in England but nowadays it can be also found in Denmark, Italy, Finland, Ireland, Norway, Sweden, Greece, Portugal and Spain. The system is financed through general taxes, in controlled by the government and has both a state budget and a private sector. All citizens have free access to the system, the coverage is general and the state authorities manage the system. In certain cases, the patients pay a part of the cost for some medical services. Its major disadvantage consists of long waiting lists for certain medical services and a high level of bureaucracy [6]. The Social Health Insurances System is the most used one and it is based on compiling the main elements of the social and medical insurances. This system operates in Germany, Austria, Belgium, Switzerland, France, Luxembourg and Netherlands. Even if the system offers a broad coverage, a certain part of the population remains outside the coverage area of the medical services. The system financing is based on the compulsory contributions of the employers and employees [6]. The Centralized Healthcare System, introduced in Russia was typical for the Central and Eastern European countries, which are now going through a transition process to the market economy. The state had full control over the production factors and health facilities and services. The doctors were state clerks and there was no private sector. The medical assistance was free for everyone employed oversized personnel and hospitals; it had no competition and it lacked performance [6]. 
Paper objective: Having designed an observational cross-sectional cohort study of the cost-effectiveness of biological treatment compared to classical DMARDs, with a follow-up period of 12 months, in this paper we set out to describe the clinical and healthcare resource utilization characteristics, and to analyze the correlations among them, in a cross-sectional sample of 206 RA patients at baseline (December 2007). Patients and methods: The lack of a National RA Register, as well as of a National RA Database, imposed a different sample enrollment method. We used two sources. Some of the cases are patients from the Internal Medicine and Rheumatology Department of 'Dr. Ion Cantacuzino' Hospital in Bucharest, during the year 2007. They were drawn chronologically, according to their time of presentation, from the hospital electronic database. The inclusion criterion was an RA diagnosis and the exclusion criterion was the presence of any malignancy. Through collaboration with other rheumatologists within the country, other cases were enrolled from ambulatory care in order to cover a larger territorial area: they were selected randomly from the rheumatologists' patient lists, based on the same inclusion and exclusion criteria. The RA diagnosis was established by each specialist (rheumatologist) for each patient separately. The patients, initially 480, recorded with names and addresses, were invited through a consent letter sent by post to take part in a scientific research study. Three series of self-report interviews were collected by post (returned to our address in an enclosed stamped envelope). The collection interval was six months, as follows: 0 - 6 - 12 months. The first interview was conducted during November-December 2007, the second during May-June 2008 and the last during November-December 2008. Following the first approach, from the initial 480 cases we collected 206 responders' envelopes (a response rate of roughly 50%). These cases were considered eligible for the study; the second and third mail approaches were conducted only for these patients. Each serial evaluation consisted of three different questionnaires: an original one; the Health Assessment Questionnaire (HAQ) Disability and Discomfort Scales (a simple translation, not yet validated in our country); and EUROQOL EQ-5D, Romanian version (used with the original authors' consent).
The collected data (all self-reported) were distributed into the following categories of interest: demographic (date of birth, age, sex, ethnic origin, marital status, environment of habitat, level of education, average income per month, professional status); co-morbidities: the categories of patients with high blood pressure (HBP), diabetes mellitus (DM), chronic hepatitis (CH), coronary heart disease (CHD), gastro-duodenal ulcer (GDU) with indication of episodes of digestive bleeding, renal failure (RF), asthma (A), osteoporosis (OP), osteoarthritis (OA), thyroid gland disease (T) and others were noted, together with arthroplasty procedures and other surgical interventions; concerning the major disease (RA): General features: year of diagnosis, current treatment and its starting time, associated medication (corticotherapy / non-steroidal anti-inflammatory drugs, NSAIDs). Functional characteristics: HAQ score, self-reported pain intensity, disease activity and fatigue on a 100-mm visual analogue scale (marked only at the extremes), number of days with disability for usual activities, number of persons involved in home aid and its frequency. Quality of life characteristics: utility (EQ-5D) and EQ domain components (RA impact on mobility, self-care, usual activities, pain/discomfort, anxiety/depression), self-reported quality of the health state on a 100-mm visual analogue scale (feeling thermometer), marked at each millimeter. Characteristics of the economic impact, with a time frame of 6 months: the number of sick leave days and hospitalization days, frequency of sick leave and hospitalizations, number of medical visits to primary care and to the rheumatologist, appeals to the medical system (regardless of specialty), laboratory checks, number of X-rays and CT/MRI examinations, reported rehabilitation frequency, and the patient's monthly contribution (out-of-pocket expenses) to the treatment. Data analyses: Geographically, the sample (n = 206) covers 23 counties from the Southern and Western parts of the country (Fig 1). The large territorial distribution of the cases, as well as the statistical normality of the sample (Fig 2), led us to consider the sample representative of the entire RA population in our country. The data were analyzed in SPSS 10; we used ANOVA and the two independent samples t-test for continuous variables, the Chi-square, Kruskal-Wallis and Mann-Whitney tests for non-continuous variables, and bivariate correlations (Pearson and Spearman coefficients). The sample was subdivided according to therapy as follows: the group treated with oral agents (non-biologic DMARD monotherapy and combinations = Group A) and the biological agents group (biologic DMARDs = Group B). Seven cases were excluded from the review: five without remission therapy (the subgroup being too small to be analyzed against the others), and two other cases that did not answer the questions regarding medication and could not be allotted to any group. Fig 1: Cohort territorial distribution. Fig 2: P-P plot for age (n = 206).
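As a concrete illustration of the comparisons listed in the data analyses above (two-sample t-test, Mann-Whitney, chi-square, Spearman correlation), the following sketch applies the same families of tests with pandas and SciPy. The data file and column names (group, haq, sex, education_level, monthly_income) are hypothetical; this is not the authors' SPSS workflow, only a schematic equivalent.

```python
import pandas as pd
from scipy import stats

# Hypothetical analysis frame: one row per patient, 'group' is 'A' (non-biologic) or 'B' (biologic).
df = pd.read_csv("ra_baseline.csv")  # assumed file name

a = df[df["group"] == "A"]
b = df[df["group"] == "B"]

# Continuous variable (e.g. HAQ score): independent-samples t-test and Mann-Whitney U.
t_stat, t_p = stats.ttest_ind(a["haq"], b["haq"], equal_var=False)
u_stat, u_p = stats.mannwhitneyu(a["haq"], b["haq"], alternative="two-sided")

# Categorical variable (e.g. sex) versus treatment group: chi-square on the contingency table.
chi2, chi_p, dof, _ = stats.chi2_contingency(pd.crosstab(df["sex"], df["group"]))

# Bivariate correlation (e.g. education level vs. monthly income): Spearman's rho.
rho, rho_p = stats.spearmanr(df["education_level"], df["monthly_income"])

print(f"t-test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}, chi2 p={chi_p:.3f}, rho={rho:.2f} (p={rho_p:.3f})")
```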
Results, discussions, conclusions: Sample and subgroup features, spread over the study categories at inclusion, are summarized in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p < 0.05; NS = not statistically significant. Demographic characteristics: Although the patients' age in group B is significantly lower (Table 2), this difference is not mirrored in working activity status and income. The figures describe an RA population with an average age of 54.90 ± 12.67 years, which theoretically belongs to the working-age category. In practice, however, two thirds are retired, most cases have a low monthly income (< 1000 lei/month for 90.3%), and approximately half of them have completed only primary education (it seems we are dealing with an RA population that is relatively young, poor and with only elementary education). In this framework, there is a homogeneous, significant positive correlation between the level of education and the monthly income in both groups (ρs = 0.645, p < 0.01). These issues outline the social conditions behind the demographic characteristics. N.B.
Results are given as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p < 0.05; NS = not statistically significant. Associated RA morbidities: characteristics. Morbidity is significantly associated with RA (Table 3). Over half of the patients (57.3%) had three diseases associated with RA, in a population with an average age of 54.90 ± 12.67 years. Among these, the first three places are occupied by high blood pressure, osteoporosis and coronary heart disease; as already established, cardiovascular diseases independently increase the mortality rate. Also notable is the significantly higher arthroplasty rate in group B (7.1%, compared to 1.6%; p < 0.05): it points to a history of more severe disease among the current biologics cases. N.B. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); c: percentages include monotherapy and combinations; group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p < 0.05; NS = not statistically significant; NA = not applicable; NSAIDs = non-steroidal anti-inflammatory drugs; MTX = methotrexate; SSZ = sulphasalazine; LFL = leflunomide. General characteristics of rheumatoid arthritis: The average disease duration is 9.40 ± 8.87 years, with a significant difference between groups in favor of biologics (11.32 ± 8.30 years). In other words, biological agents predominate in older forms of the disease in a younger category of patients. While in group A disease duration increases linearly with patient age (r = 0.233, p < 0.01), in group B these factors are independent, supporting a broader distribution of cases along the age axis. In figures (Table 4), two thirds of the patients are following non-biologic DMARDs (three quarters monotherapy, one quarter combinations), and one third, biological agents. Interestingly, when assessing the entire sample, the most prescribed DMARD appears to be MTX (48.6%), including MTX prescriptions associated with biologics, while looking only at group A (non-biologic DMARDs), first place is occupied by LFL (61.2%). The explanation relies on two aspects: on the one hand, the RA population is surveyed at approximately 10 years of disease evolution, when most patients are beyond the MTX stage, either through inefficacy (secondary resistance) or through the cumulative dose over time; on the other hand, the influence of various pharmaceutical companies on prescribing a certain drug should not be overlooked. What is interesting about the analyzed population is that no other DMARD preparations were declared, which points both to a decline in the use of formerly overused medication (gold salts, hydroxychloroquine etc.) and to the selective promotion of products by pharmaceutical companies. In the biologics group, the 'oldest' drug is also the most prescribed one: Infliximab (65.7%). This feature follows the nationwide distribution of TNF-blocking agents in the National Committee for the Approval of Biological Therapy (Table 1), where Infliximab was the most frequent biologic at the end of 2007. We conclude that these figures reflect a plateau stage in the dynamics of the described parameters, as the average duration of the reported treatment is 2.70 ± 2.64 years.
NSAID consumption is high (89%), with no differences between groups. Looking at the figures, over one third of the cases require daily NSAIDs (35.6%). Considering the correlation with age and disease duration, NSAID intake increased with disease duration only in group A (ρs = 0.212; p < 0.05). Viewing these data alongside the associated pathology, the risk of adverse events, even fatal ones (through major cardiovascular disorders), is amplified. Corticotherapy is part of the treatment in 39% of the cases. At sample level, looking at the relationship between corticotherapy and HAQ disability categories (Fig 3), cortisone therapy is absent in 73% of the cases with HAQ scores < 1.6; starting with HAQ scores > 1.6, corticotherapy is present in 59% (p < 0.01). The ratios differ between subgroups. Group A describes a similar curve, except that the reversal point of the ratio starts at HAQ values > 2.1. On the contrary, in group B cortisone therapy is present in 81% of the cases belonging to the HAQ category 1.6-2.1; in all other intervals the majority of patients do not require cortisone therapy, both for HAQ values < 1.6 and for those > 2.1 (75% and 58% of patients, respectively), p < 0.01. Surprisingly, group B showed a positive correlation (ρs = 0.247, p < 0.05) between corticotherapy and age. It clearly appears that the oldest patients belong mostly to the HAQ interval 1.1-2.1 (meaning moderate and advanced disability), the same area in which cortisone therapy also peaks. At least two conclusions can be drawn: in the oral therapy group, the 'RA end stage' of irreversible disability (27.2%) is frequently cortisone dependent; in this group, corticotherapy describes a linear growth with the degree of disability, in order to maintain a minimum functionality of the locomotor system. On the contrary, in the biologics group cortisone prescription follows the potentially reversible disability intervals (HAQ 1.1-2.1), where the case density is also the highest (51.5%). Fig 3: Corticotherapy in relation to HAQ categories. Is working activity status related to the consumption of NSAIDs and cortisone? In group B, the retired cases correlate positively with corticotherapy (ρs = 0.260, p < 0.05) and with NSAID intake (ρs = 0.265, p < 0.05), while the correlation is negative for the professionally active cases. This reflects the former severity of the disease in group B: more severe disease (HAQ partly represents the cumulative disease activity and damage over time) reduces the ability to work and makes it more likely that the patient receives steroids and/or NSAIDs. Considering that the groups show no significant differences in the proportion of professionally active cases, it seems that in the oral therapy group these factors are independent. This could also reflect a formerly less severe disease in group A.
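The relationship between corticotherapy and HAQ disability categories described above can be tabulated as in the sketch below. The cut points used here (0.6, 1.1, 1.6, 2.1, 2.6) are an assumption consistent with the intervals quoted in the text, and the file and column names are hypothetical; this is an illustrative sketch, not the study's actual analysis.

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("ra_baseline.csv")  # assumed columns: haq (0-3), corticotherapy (0/1), group

# Assumed six HAQ severity categories matching the intervals mentioned in the text.
bins = [0, 0.6, 1.1, 1.6, 2.1, 2.6, 3.0]
labels = ["0-0.6", "0.6-1.1", "1.1-1.6", "1.6-2.1", "2.1-2.6", "2.6-3"]
df["haq_cat"] = pd.cut(df["haq"], bins=bins, labels=labels, include_lowest=True)

# Proportion of patients on corticotherapy within each HAQ category, per therapy group.
table = pd.crosstab([df["group"], df["haq_cat"]], df["corticotherapy"], normalize="index")
print(table)

# Test whether corticotherapy use differs across HAQ categories (whole sample).
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["haq_cat"], df["corticotherapy"]))
print(f"chi-square p = {p:.4f}")
```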
In functional terms (Table 5), the average HAQ score at baseline was 1.29 ± 0.80. Looking at the HAQ categories, household activities recorded the most severe score (36.9% of cases), followed by hygiene (17.9%). Considering 6 categories of severity, each corresponding to a certain range of HAQ scores, we mention: in group A, 27.2% of cases are severely and very severely disabled, compared to 15.7% in group B (p < 0.05). Advanced and moderate disability is observed in 51.5% of the cases in group B, compared to 27.1% in group A (p < 0.05) (Fig 4). In other words, the terminal (end-stage) phases of RA, which have fewer therapeutic benefits in functional terms, are mostly treated with classical agents, while the potentially reversible stages of RA disability are found especially in the group receiving the more expensive therapy. In a society with limited health resources, this 'selective affiliation' of cases is predictable, considering that the socio-economic reinsertion of patients is also a therapeutic target. Age is a factor that increases the degree of disability as appreciated by HAQ (r = 0.417, p < 0.01); by contrast, the correlation of disability with disease duration is less obvious (r = 0.251, p < 0.05). In the same direction, we mention the influence of HAQ on retired status (ρs = 0.318, p < 0.01). Use of the self-rated quantitative VAS scales showed unexpectedly high scores for pain (mean 54.45 ± 24.23), disease activity (mean 55.17 ± 23.12) and fatigue (57.49 ± 24.87). Age is an inflexible characteristic, but its influence on self-reported assessments is confirmed by the positive correlations with the mentioned variables (r = 0.219, 0.259, 0.272, p < 0.01). Although approximately one quarter of the cohort (22.8%) belongs to a functionally irreversible RA phase, the assistance needed in everyday life exceeds the expected level: 86.7% of patients require household help and 58.3% of them report frequent or permanent help. A single person is involved in household help for most of the cases (68.8%); 41.7% of patients use auxiliary objects and 35% aid devices, mostly for the category of usual daily activities (64.6%). Fig 4: HAQ categories according to therapy groups. N.B. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p < 0.05; NS = not statistically significant. RA functional characteristics: The overall functional impact in daily life is materialized in about ten days lost every month because of the inability to perform household tasks (average: 9.64 ± 9.06 days). There is an important and relatively homogeneous positive correlation between lost days and the self-reported level of pain, fatigue and disease activity (r = 0.650, p < 0.01) in both groups. The category of professionally active cases has a certain degree of independence at home. Both groups reveal a negative correlation of these cases with the days lost for usual activities (ρs = -0.255 and -0.351, p < 0.01), while in group A the negative correlation with the frequency of help is stronger (ρs = -0.307, p < 0.01); the correlation for the pensioners' group is positive (meaning an increase in non-medical direct costs). Utility (Table 6), appreciated on a scale from 0 to 1, where 1 corresponds to perfect quality and 0 to death, averages 0.417 ± 0.337 for the whole sample. Between groups, there is a difference in favor of biologics (0.382 ± 0.347 and 0.452 ± 0.317, p = 0.1), but both figures are low. The most frequently reported score was 0.516, found in one third of cases. It is interesting that the proportion of those who reported moderate and severe problems on the utility components stratifies, to some extent, the impact of RA on the patients' daily life. Thus, 95.5% reported pain/discomfort, 83% have problems with mobility and usual activities, 74% with self-care and 69% have anxiety/depression.
The quality of the health state assessed through VAS showed an average score of 47.39 ± 22.13, with a statistically significant difference between groups in favor of biologics (44.92 ± 22.34 versus 51.36 ± 21.23, p < 0.05). N.B. Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; ** percentages represent the frequency of problems (moderate and severe) in the mentioned category; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p < 0.05; NS = not statistically significant. Utility and quality of life parameters: In both subgroups, increased disability lowers the quality of the health state and utility (group A: ρs = -0.618 and -0.665, p < 0.01; group B: ρs = -0.707 and -0.552, p < 0.01). Given the social context of the RA patient in Romania, we consider it appropriate to present the correlations of some social elements with the quality of life components, independently of other factors directly related to RA: An increasing level of education is associated with the quality of the health state and utility (r = 0.377 and r = 0.380, p < 0.01). There is an association between a low monthly income (and the standard of living it implies) and the utility score, as well as the quality of the health state (r = 0.323 and r = 0.364, p < 0.01). There is an association between low monthly income and the severity of the utility components: mobility (ρs = -0.186), self-care (ρs = -0.302), usual activities (ρs = -0.304), pain/discomfort (ρs = -0.233) and anxiety/depression (ρs = -0.216), all p < 0.01. There is an association between inactivity (the retired group) and the severity of the utility components: ρs = 0.224, 0.250, 0.255, 0.167 and 0.220, all p < 0.01. In order to improve the impact of RA at the individual, social and economic level, measures that support and stimulate any working activity, tailored to the functionality allowed by the disease, are required. Analyzing effectiveness strictly in terms of utility and quality of the health state, the biologics class is clearly superior to classical therapy. Extending the analysis to the characteristics of economic impact, the differences between groups did not show the same behavior (Table 7). Relative to the professionally active subgroup, labor productivity was evaluated through sick leave and early retirement. Quantifying absenteeism, 20% of the patients reported sick leave in the last 6 months. There is, however, a significant difference in sick leave duration, with longer sick leave in group A (mean 3.58 ± 8.66 days, compared to 0.43 ± 1.08 days in group B, p < 0.05). Analysis of the correlation with hospitalization duration revealed that the two features are independent. As a result, in the DMARDs group the longer sick leave is not a consequence of hospitalization, but probably of outpatient visits to primary care or to the rheumatologist, the only physicians who can issue sick leave in ambulatory care. Labor productivity loss through early retirement reaches 38.5% of cases (33.7% in group A and 46.9% in group B), after an average of only 5.65 ± 5.99 years from the time of RA diagnosis (comparable between groups). Interestingly, in the first year after RA diagnosis, 22.9% of the newly diagnosed patients are medically retired (28.6% in group A and 13% in group B). Recall that the literature supports a loss of labor productivity through work incapacity of about 50% in the first 10 years after diagnosis [15]. N.B.
Results are given as mean ± SD for continuous variables and as percentages for non-continuous variables; a + b = 199 (7 cases were excluded after splitting the sample into therapeutic groups); c: n = 60 (professionally active subgroup); d: n = 143 (retired subgroup); e: n = 55 (RA-retired subgroup); group A = non-biologic DMARDs; group B = biologic DMARDs; * level of significance alpha: p < 0.05; NS = not statistically significant. RA economic impact characteristics: What caused the patients to be declared work disabled so early? No evident correlations were found between the degree of disability (HAQ) and the time elapsed from the moment of diagnosis to RA retirement (the factors behave independently). By contrast, there is a strong positive correlation (r = 0.758, p < 0.01) between disease duration and the time elapsed from diagnosis until RA retirement. Adding patient age to this equation provided no additional information. From this perspective, disease duration, not patient age, comes to the foreground of the RA impact on labor productivity. However, the figures show that a significant proportion of patients are declared work disabled less than one year after diagnosis; from this perspective, disease duration loses the top position in the final decision on work capacity. Considering also some social factors, we found interesting associations: the reduction of monthly income is associated with the time from RA diagnosis to ill-health retirement (r = 0.565, p < 0.01); the higher the education level, the greater the tendency to remain professionally active (a significant positive correlation in both groups, but more pronounced in group B: ρs = 0.466, p < 0.01). In conclusion, it seems that in our country the social level of RA patients also plays a major role in the loss of work productivity. This socio-economic weakness sustains a vicious circle: 'a small proportion of work-active population - insufficient funds allocated to the health system - strong selection and low accessibility of patients to more expensive therapies, even those with superior efficacy'. Regarding follow-up visits to the rheumatologist, one third of cases reported monthly visits, with statistically significant differences between groups (39% in group A, compared with 21% in group B, p < 0.01); 20% of cases are monitored at 2-month intervals (11.2% in group A, compared with 35.7% in group B, p < 0.01). At the level of the entire cohort, the rhythm of monitoring visits has no correlation with disability severity (HAQ). Given this aspect, and also considering the economic implications of a medical check-up, a question arises: what determines the rhythm of follow-up? Looking within groups at disability levels, patients with more severe disabilities are monitored on a monthly basis; differences occur for the HAQ interval 0.6-2.1. Thus, in group A most cases are monitored monthly, while in group B 2-month visits dominate. In terms of drug prescription, most non-biologic DMARD prescriptions belong to the primary care network. Thus, monthly rheumatology visits for the lower categories of disability could have two interpretations: on the one hand, they could reflect excess use of healthcare departments (inside and outside the hospital); on the other hand, the patients could have better function precisely because they come to the hospital more frequently. The study design implies 1 year of follow-up, so the dynamics of the patients' characteristics will support one of these two hypotheses (and will be reported as well).
In group B, the two-month follow-up rhythm is probably related to the administration schedule of infliximab. At this point, it seems that the rhythm of rheumatology monitoring is determined by the clinician. 41.7% of cases visit the primary care network monthly and 27% have monthly visits to other specialties, with cardiology at the top. Regarding laboratory monitoring, approximately one quarter of the cases are tested at 2-month intervals, but with significant differences between groups: 35.7% in group B, compared to 21.4% in group A (p < 0.01). The biologics patient is therefore more closely monitored through laboratory tests. These data reflect the physician's choice. Radiological monitoring revealed that about one quarter of the cases had no X-ray examination in the last 6 months, with significant differences between groups (32% in group A, 51% in group B, p < 0.01), while about half of the cases had 1 to 3 radiographs. For 75% of the cases, the monthly patient contribution to treatment is less than 100 lei, with no significant differences between groups. This level of out-of-pocket expenses represents 10% of the average monthly income for 90% of the patients included. Concerning the level of direct medical costs, the economic impact analysis reveals that the hospitalization rate (reported by about half of the cases) is significantly higher in group B (67.1% versus 47.2%, p < 0.01). Although there are no significant differences between groups in the duration of hospitalizations (6.82 days and 4.79 days), it seems that, with increasing age, only patients treated with biological agents require more frequent hospitalizations of longer duration (r = 0.529, p < 0.01), eventually amplifying direct costs. The frequency and duration of hospitalization are directly related to the degree of disability in group A (ρs = 0.323, p < 0.01), while in group B this holds for the duration of hospitalization, but not for its frequency (ρs = 0.329, p < 0.01). The higher hospitalization rate in group B is not an unexplained discrepancy: the reason lies in the large proportion of biologics patients treated with Infliximab, which is administered only through hospital admission. In conclusion, in our country the rate of hospitalizations is not only a consequence of RA relapse episodes. The current healthcare system, still hospital-based and embedded in a particular social context, could increase direct medical costs in cases where hospitalization is not strictly necessary. This claim requires supporting evidence in monetary units, which will be provided by the data analysis to be reported soon.
Background: Rheumatoid arthritis (RA) is associated with loss of overall functionality of the locomotor system and with substantial economic losses. Methods: RA cases were enrolled from the southern and western parts of the country, covering 23 counties. Results: Compared with literature data, Romanian RA patients become work disabled at 5.65 +/- 5.99 years after diagnosis. At cohort level, retirement in the first year after RA diagnosis is 22.9%; of those, 13% were treated with biologic DMARDs and 28.6% with non-biologic DMARDs. In the oral therapy group, the most prescribed drug is leflunomide (61.2%). RA has an important impact on pain, function and utility, influenced by social factors. Patients' follow-up is often based on hospitalization. Conclusions: Currently, when the clinician may choose one therapy or another, the social influence is still overwhelming at all evaluation levels in RA patients, as well as at the level of economic impact.
null
null
9,821
200
[]
7
[ "group", "ra", "cases", "patients", "system", "non", "biologic", "medical", "dmards", "health" ]
[ "arthritis ra medical", "arthritis biological therapy", "revolutionized rheumatology field", "synovitis initiation rheumatoid", "effectiveness analyses rheumatoid" ]
null
null
null
null
null
[CONTENT] rheumatoid arthritis | biologic and non–biologic DMARD | disability | social | utility [SUMMARY]
null
null
[CONTENT] rheumatoid arthritis | biologic and non–biologic DMARD | disability | social | utility [SUMMARY]
null
null
[CONTENT] Adalimumab | Adolescent | Adult | Age Distribution | Aged | Anti-Inflammatory Agents, Non-Steroidal | Antibodies, Monoclonal | Antibodies, Monoclonal, Humanized | Antirheumatic Agents | Arthritis, Rheumatoid | Disabled Persons | Etanercept | Female | Humans | Immunoglobulin G | Infliximab | Male | Middle Aged | Radiography | Receptors, Tumor Necrosis Factor | Romania | Severity of Illness Index | Young Adult [SUMMARY]
null
null
[CONTENT] Adalimumab | Adolescent | Adult | Age Distribution | Aged | Anti-Inflammatory Agents, Non-Steroidal | Antibodies, Monoclonal | Antibodies, Monoclonal, Humanized | Antirheumatic Agents | Arthritis, Rheumatoid | Disabled Persons | Etanercept | Female | Humans | Immunoglobulin G | Infliximab | Male | Middle Aged | Radiography | Receptors, Tumor Necrosis Factor | Romania | Severity of Illness Index | Young Adult [SUMMARY]
null
null
[CONTENT] arthritis ra medical | arthritis biological therapy | revolutionized rheumatology field | synovitis initiation rheumatoid | effectiveness analyses rheumatoid [SUMMARY]
null
null
[CONTENT] arthritis ra medical | arthritis biological therapy | revolutionized rheumatology field | synovitis initiation rheumatoid | effectiveness analyses rheumatoid [SUMMARY]
null
null
[CONTENT] group | ra | cases | patients | system | non | biologic | medical | dmards | health [SUMMARY]
null
null
[CONTENT] group | ra | cases | patients | system | non | biologic | medical | dmards | health [SUMMARY]
null
null
[CONTENT] sample | cases | group | territorial | self | collected | number | 206 | ra | territorial distribution [SUMMARY]
null
null
[CONTENT] group | system | ra | medical | cases | patients | sample | non | healthcare | services [SUMMARY]
null
null
[CONTENT] RA | 23 [SUMMARY]
null
null
[CONTENT] Rheumatoid | RA ||| RA | 23 ||| Romanian | RA | 5.65 | 5.99 years old ||| the first year | RA | 22.9% ||| 13% | 28.6% ||| 61.2% ||| RA ||| ||| RA [SUMMARY]
null
Ambient carbon monoxide and fine particulate matter in relation to preeclampsia and preterm delivery in western Washington State.
21262595
Preterm delivery and preeclampsia are common adverse pregnancy outcomes that have been inconsistently associated with ambient air pollutant exposures.
BACKGROUND
We used data from 3,509 western Washington women who delivered infants between 1996 and 2006. We predicted ambient CO and PM2.5 exposures using regression models based on regional air pollutant monitoring data. Models contained predictor terms for year, month, weather, and land use characteristics. We evaluated several exposure windows, including prepregnancy, early pregnancy, the first two trimesters, the last month, and the last 3 months of pregnancy. Outcomes were identified using abstracted maternal medical record data. Covariate information was obtained from maternal interviews.
METHODS
Predicted periconceptional CO exposure was significantly associated with preeclampsia after adjustment for maternal characteristics and season of conception [adjusted odds ratio (OR) per 0.1 ppm=1.07; 95% confidence interval (CI), 1.02-1.13]. However, further adjustment for year of conception essentially nullified the association (adjusted OR=0.98; 95% CI, 0.91-1.06). Associations between PM2.5 and preeclampsia were nonsignificant and weaker than associations estimated for CO, and neither air pollutant was strongly associated with preterm delivery. Patterns were similar across all exposure windows.
RESULTS
Because both CO concentrations and preeclampsia incidence declined during the study period, secular changes in another preeclampsia risk factor may explain the association observed here. We saw little evidence of other associations with preeclampsia or preterm delivery in this setting.
CONCLUSIONS
[ "Adult", "Air Pollutants", "Carbon Monoxide", "Cohort Studies", "Female", "Humans", "Maternal Exposure", "Particulate Matter", "Pre-Eclampsia", "Pregnancy", "Premature Birth", "Prospective Studies", "Risk Factors", "Surveys and Questionnaires", "Washington", "Young Adult" ]
3114827
null
null
Statistical analysis
We used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. These gradients are roughly equivalent to the increments between deciles for both pollutants. We evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000). We conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension. Finally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. 
For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null.
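To make the modeling described in this section concrete, here is a hedged sketch of fitting such a logistic model with statsmodels and converting coefficients to the reported odds-ratio scales (per 0.1-ppm CO increment and by exposure quartile). The data file, column names and covariate coding are assumptions for illustration, not the authors' actual code or full covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("omega_analysis.csv")  # assumed columns: preeclampsia (0/1), co_ppm, age, bmi, ...

# Continuous exposure: fit on CO in ppm, then rescale the coefficient to a 0.1-ppm increment.
model = smf.logit(
    "preeclampsia ~ co_ppm + age + nulliparous + bmi + C(race) + C(season) + C(conception_year)",
    data=df,
).fit(disp=False)
or_per_01ppm = np.exp(0.1 * model.params["co_ppm"])
ci_low, ci_high = np.exp(0.1 * model.conf_int().loc["co_ppm"])
print(f"OR per 0.1 ppm CO: {or_per_01ppm:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

# Quartile-based exposure: indicator terms for quartiles 2-4 versus the first quartile.
df["co_q"] = pd.qcut(df["co_ppm"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
quartile_model = smf.logit(
    "preeclampsia ~ C(co_q, Treatment('Q1')) + age + nulliparous + bmi + C(race) + C(season)",
    data=df,
).fit(disp=False)
print(np.exp(quartile_model.params.filter(like="co_q")))  # ORs for Q2-Q4 vs. Q1
```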
Results
Median [interquartile range (IQR)] CO and PM2.5 concentrations observed at monitoring sites during the study period were 1.08 (0.80–1.38) ppm and 10.0 (8.3–12.4) μg/m3, respectively [Supplemental Material, Table 1 (doi:10.1289/ehp.1002947)]. Among study participants, median (IQR) predicted CO and PM2.5 exposures in the periconceptional period were 0.80 (0.59–1.02) ppm and 10.1 (8.7–11.4) μg/m3, respectively. By design, peak CO and PM2.5 exposures were higher [median (IQR): peak CO, 1.05 (0.85–1.27) ppm; peak PM2.5, 14.2 (12.6–15.1) μg/m3]. Distributions in the other six windows were similar to those in the periconceptional window: median CO concentrations ranged from 0.79 to 0.80 ppm, and median PM2.5 concentrations ranged from 9.5 to 12.6 μg/m3. The difference between distributions of observed CO concentrations and participants’ predicted exposures was due primarily to the fact that CO monitoring was more pervasive in the earlier years of the study (PSCAA 2007). CO and PM2.5 exposures were moderately correlated: Pairwise correlation coefficients ranged from 0.25 to 0.45 within exposure windows. Table 1 shows distributions of selected participants’ characteristics and periconceptional CO and PM2.5 distributions within categories of those characteristics. Distributions for other exposure windows were similar. Participants were predominantly white, married, well educated, and nonsmokers. Predicted CO exposures were inversely associated with age, educational attainment, and household income and were higher among black or Hispanic women than among white women, unmarried women, and those who reported smoking or secondhand smoke exposure. Predicted CO exposures also declined sharply over the study period and were highest in women conceiving in winter. Although predicted PM2.5 exposures were generally less strongly related to maternal characteristics, they were also highest among black women, women with lower educational attainment or income, and those who conceived in winter. A total of 117 women (3.3%) developed preeclampsia. Table 2 shows unadjusted and adjusted associations between periconceptional air pollutant exposures and preeclampsia. Because adjustment for year of conception strongly influenced the estimated associations, we show adjusted models without (model 2) and with (model 3) inclusion of year as a covariate. Fourth-quartile periconceptional CO concentrations were significantly associated with preeclampsia after adjustment for maternal characteristics and season of conception (fourth vs. first quartile: OR = 2.08; 95% CI, 1.16–3.72). However, this association became nonsignificant and inverted after further adjustment for year of conception (OR = 0.75; 95% CI, 0.42–1.64). Similarly, although each 0.1-ppm increase in CO was significantly associated with preeclampsia after adjustment for maternal characteristics and season of conception (OR = 1.07; 95% CI, 1.02–1.13), the association disappeared after further adjustment for year of conception (OR = 0.98; 95% CI, 0.91–1.06). The association between preeclampsia and periconceptional PM2.5 exposure was nonsignificant and weaker than for CO (fourth vs. first quartile OR, adjusted for maternal characteristics and season of conception, 1.63; 95% CI, 0.79–3.38). This association weakened somewhat after further adjustment for year of conception (OR = 1.41; 95% CI, 0.63–3.18). 
Unadjusted and adjusted associations with CO and PM2.5 in the preconceptional and postconceptional windows and the periconceptional month of peak exposure were similar to those in the periconceptional window (data not shown; tables of these and other secondary analyses are available from the authors upon request). For example, in models of each of these exposure windows, unadjusted for year of conception, ORs for each 0.1-ppm CO increase were all 1.07. The incidence of preterm delivery was 10.5% (369 cases). Predicted air pollutant exposures in the last 3 months of pregnancy were not strongly or significantly associated with preterm delivery, and year of conception did not materially influence estimates of association (Table 3). For instance, fourth- versus first-quartile ORs were 0.88 (95% CI, 0.59–1.31) for CO exposure and 0.81 (95% CI, 0.49–1.34) for PM2.5 exposure in fully adjusted models. ORs for CO and PM2.5 exposures in the first and second trimesters and the last month of pregnancy were similar (data not shown). Multivariable linear regression analysis using gestational age at delivery as the outcome also produced similar results: We found no statistically significant or strong associations with exposures to either air pollutant (data not shown). Analyses of models stratified by maternal age, parity, prepregnancy BMI, employment, and smoking or secondhand smoke exposure did not provide evidence that any of these characteristics modified associations between air pollutant exposures and either outcome (data not shown). CO–preeclampsia associations were robust to exclusion of women with prepregnancy diabetes (n = 105) or hypertension (n = 115; data not shown). We examined the sensitivity of our results to participants’ residential values of environmental characteristics used as predictors in our air pollutant exposure models. A total of 1,930 women (55%) resided in an area at which one or more CO model predictors were outside the range observed at monitoring sites. Proportions for individual predictors were 7.9% for population density, 9.2% for street density, and 47.3% for distance to the nearest major road (only 0.1% lived < 0.2 km from a major road; 47.2% lived > 1.3 km). For 904 participants (25.8%), the street density measure used in the PM2.5 model was higher (24.2%) or lower (1.6%) than the range observed at monitoring sites. Associations between preeclampsia and air pollutant exposures were slightly stronger in the subset of women with no out-of-range model predictors. For example, the OR for a 0.1-ppm increase in periconceptional CO exposure was 1.10 (95% CI, 0.98–1.23) among 1,579 women with no out-of-range model predictors and 1.06 (95% CI, 0.99–1.12) among 1,930 women with one or more out-of-range predictors (ORs were adjusted for maternal characteristics and season but not year). Associations with preterm delivery also were similar between the two subgroups (data not shown). Finally, exclusion of the small proportion of women with predicted CO exposures outside the range of concentrations observed in the area (between 44 and 101 observations for each of the CO exposure windows) did not materially affect our results (data not shown).
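The out-of-range classification used in the sensitivity analysis just described can be sketched as follows: a participant is flagged when any exposure-model predictor measured at her residence falls outside the range observed at the monitoring sites. File names, column names and the predictor list are hypothetical; this is an illustration of the idea, not the study's implementation.

```python
import pandas as pd

participants = pd.read_csv("participant_predictors.csv")  # assumed file
monitor_sites = pd.read_csv("monitor_predictors.csv")     # assumed file

predictors = ["population_density", "street_density_500m", "dist_major_road_km"]

# Observed predictor ranges at the monitoring sites.
ranges = {p: (monitor_sites[p].min(), monitor_sites[p].max()) for p in predictors}

# Flag participants with one or more residential predictor values outside those ranges.
out_of_range = pd.Series(False, index=participants.index)
for p, (lo, hi) in ranges.items():
    out_of_range |= (participants[p] < lo) | (participants[p] > hi)

participants["any_out_of_range"] = out_of_range
print(participants["any_out_of_range"].mean())  # proportion with extrapolated predictors
```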
Conclusions
We found strong positive associations between CO and preeclampsia risk. Because both factors declined during the study period, we cannot exclude the possibility that secular changes in a preeclampsia risk factor unidentified in this study may explain the association. Our results do not provide evidence that PM2.5 and CO strongly influence risks of preterm delivery and preeclampsia among western Washington State women.
[ "Methods", "Study population", "Air pollutant exposure estimation", "Air pollutant predictor variables", "Model fitting procedures", "Exposure estimation", "Outcome measurements" ]
[ "Study population: We conducted a prospective analysis using data from the Omega Study, a pregnancy cohort study previously described in detail (Butler et al. 2004). Participants were women attending prenatal care clinics affiliated with Swedish Medical Center in Seattle and Tacoma General Hospital in Tacoma, Washington. Eligible women initiated prenatal care before 20 weeks of gestation. Women were ineligible if they were < 18 years of age, were non-English speakers, or did not plan to deliver at either hospital. Participants completed an interviewer-administered questionnaire soon after enrollment (mean ± SD gestational age, 15.9 ± 4.8 weeks). This questionnaire was used to collect information on sociodemographic, anthropometric, behavioral, medical, and reproductive characteristics. After delivery, study personnel abstracted information on the pregnancy course and outcome from medical records. Study procedures were approved by the institutional review boards of both hospitals. All participants provided written informed consent. We used data from women recruited between 1996 and 2006. During this period, 4,000 (79%) of 5,063 invited women enrolled in the study. Of these, 79 experienced early pregnancy losses, 68 moved or delivered elsewhere, and 169 were lost to follow-up due to unknown delivery outcome or a missing medical record. Of the 3,675 women who completed the study, we included 3,509 (95%) in this analysis. We excluded 145 participants for whom we were unable to accurately assess exposures: 121 had nongeocodable residential addresses, and 24 lived outside the study area of King, Kitsap, Pierce, and Snohomish counties. These participants were less likely, on average, to be nulliparous than women who remained in the analytic sample (51% vs. 60%) but were otherwise similar. We also excluded 21 additional women who were missing information on one or more covariates chosen a priori for final models, as described below. Distributions of air pollutant exposures did not substantially differ between women with missing and nonmissing covariate information.
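The cohort derivation described above (4,000 enrolled, 3,509 analyzed) amounts to a sequence of exclusion filters. A minimal pandas sketch of that bookkeeping follows; the file name and flag columns are assumptions, not part of the study's data dictionary.

```python
import pandas as pd

cohort = pd.read_csv("omega_enrolled.csv")  # assumed boolean exclusion flags per participant

steps = [
    ("early pregnancy loss", "early_loss"),
    ("moved or delivered elsewhere", "moved"),
    ("lost to follow-up", "lost_followup"),
    ("non-geocodable address or outside study area", "bad_address"),
    ("missing a priori covariates", "missing_covariates"),
]

remaining = cohort
for label, flag in steps:
    n_before = len(remaining)
    remaining = remaining[~remaining[flag]]          # drop participants flagged at this step
    print(f"excluded for {label}: {n_before - len(remaining)}")

print(f"analytic sample: {len(remaining)}")
```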
Air pollutant exposure estimation: We predicted monthly ambient CO and PM2.5 exposures using two multivariable linear regression models that included terms for environmental characteristics, month, and year. We did not examine other criteria pollutants (ozone, nitrogen oxides, sulfur dioxide, and lead) because there were too few monitoring sites to construct exposure models (PSCAA 2007). The CO model has been previously described in detail (Rudra et al. 2010b). Our models were constructed using CO and PM2.5 measurements collected daily between 1996 and 2006 at monitoring sites administered by the PSCAA. The sites were located by the PSCAA using U.S. Environmental Protection Agency (EPA) criteria to ensure a consistent and representative measure of air quality (PSCAA 2007). We collapsed daily measurements into monthly average concentrations, resulting in 890 CO concentrations from 15 sites and 803 PM2.5 concentrations from 12 sites. For more site characteristics and a map of monitoring sites and participants' residences, see Supplemental Material, Table 1, Figure 1 (doi:10.1289/ehp.1002947). Air pollutant predictor variables: We evaluated several characteristics as potential predictors of monthly average pollutant concentrations at the monitoring sites. We mapped and measured characteristics at each site using ArcMap (version 9.2; ESRI, Redlands, CA). We used 2001 traffic count data from the Washington Department of Transportation to estimate annual average traffic volume on the nearest major road (federal or state highway) within circular buffers with radii of 250 and 500 m (Washington State Department of Transportation 2002). We also measured distance to the nearest major road.
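Buffer-based measures like these (traffic volume within 250 and 500 m, distance to the nearest major road, and the street densities described next) are typically derived with GIS overlay operations. Below is a hedged geopandas/shapely sketch of one such measure; the shapefile names, layer contents and projected CRS (UTM zone 10N for western Washington) are assumptions, and this is not the authors' ArcMap workflow.

```python
import geopandas as gpd

# Assumed inputs: a line layer of streets and a point layer of monitoring sites or residences.
streets = gpd.read_file("streets.shp").to_crs(epsg=32610)  # project to meters (UTM 10N)
sites = gpd.read_file("sites.shp").to_crs(epsg=32610)

def street_density(point, radius_m: float) -> float:
    """Street density (km of street per km^2) within a circular buffer around a point."""
    buffer_poly = point.buffer(radius_m)
    clipped = streets.geometry.intersection(buffer_poly)  # street segments inside the buffer
    length_km = clipped.length.sum() / 1_000.0
    area_km2 = buffer_poly.area / 1_000_000.0
    return length_km / area_km2

sites["street_density_500m"] = sites.geometry.apply(lambda pt: street_density(pt, 500))
print(sites[["street_density_500m"]].describe())
```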
We used Census 2000 TIGER line files to estimate street density (kilometers per square kilometer) within 100-, 250-, 500-, and 1,000-m buffers (U.S. Census Bureau 2002). As has been done in previous literature, we chose a priori several buffer sizes to allow examination of multiple spatial scales (e.g., Brauer et al. 2003, 2008; Jerrett et al. 2005). We used Census 2000 measures of population density (persons per square kilometer) and housing density (housing units per square kilometer) within each site's census block group (U.S. Census Bureau 2002). We used monthly averages of daily high and low temperatures and precipitation collected at 31 weather stations (Western Regional Climate Center 2008). We used measurements at the nearest weather station; the average distance between each monitoring site and the nearest station was 6.8 km (range, 0.7-32 km). We used year and month terms to capture secular and seasonal variations in air pollutant concentrations. Model fitting procedures: Using a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant's geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants' monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1-15 to the current month and rounding days 16-31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively.
If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date.\nThe CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. Concentrations were highest in areas with third-quartile street density.\nThe PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density.\nUsing a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant’s geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants’ monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1–15 to the current month and rounding days 16–31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively. If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date.\nThe CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. 
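The split-sample R² reported for this model is a hold-out measure of predictive accuracy. The text does not give the splitting procedure, so the sketch below shows only one plausible implementation (a random half-split of the monitor-month records, using scikit-learn); the column names are invented for illustration.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def split_sample_r2(monitor_months: pd.DataFrame, predictors: list, outcome: str, seed: int = 0) -> float:
    """Fit the exposure model on a random half of the monitor-month records
    and report R^2 on the held-out half."""
    train, test = train_test_split(monitor_months, test_size=0.5, random_state=seed)
    fit = LinearRegression().fit(train[predictors], train[outcome])
    return r2_score(test[outcome], fit.predict(test[predictors]))

# Hypothetical call on a monitor-month table with numeric predictor columns:
# split_sample_r2(monitor_months, ["street_density_500m", "dist_major_road_km", "pop_density"], "co_ppm")
```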
Concentrations were highest in areas with third-quartile street density.\nThe PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density.\n Exposure estimation For this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997.\nFor this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). 
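The window definitions can be read as simple arithmetic on calendar months. The sketch below assembles the preeclampsia and preterm-delivery windows from a per-participant series of predicted monthly concentrations; the data layout (a dict keyed by (year, month) pairs) and the function names are assumptions made only for illustration.

```python
def shift_month(year, month, offset):
    """Return the (year, month) pair that lies `offset` calendar months away."""
    idx = year * 12 + (month - 1) + offset
    return idx // 12, idx % 12 + 1

def preeclampsia_windows(monthly, conception):
    """monthly: dict mapping (year, month) -> predicted concentration;
    conception: (year, month) of the month of conception."""
    peri = [monthly[shift_month(*conception, k)] for k in range(-3, 4)]  # 7 months surrounding conception
    pre = [monthly[shift_month(*conception, k)] for k in range(-3, 0)]   # 3 months before pregnancy
    post = [monthly[shift_month(*conception, k)] for k in range(0, 4)]   # first 4 months of pregnancy
    return {
        "periconceptional": sum(peri) / len(peri),
        "preconceptional": sum(pre) / len(pre),
        "postconceptional": sum(post) / len(post),
        "peak_periconceptional": max(peri),
    }

def preterm_windows(monthly, conception, delivery):
    """Windows used for the preterm-delivery analyses."""
    t1 = [monthly[shift_month(*conception, k)] for k in range(0, 3)]    # months 1-3 of pregnancy
    t2 = [monthly[shift_month(*conception, k)] for k in range(3, 6)]    # months 4-6 of pregnancy
    last3 = [monthly[shift_month(*delivery, k)] for k in range(-2, 1)]  # delivery month and the two before it
    return {
        "first_trimester": sum(t1) / 3,
        "second_trimester": sum(t2) / 3,
        "last_3_months": sum(last3) / 3,
        "delivery_month": monthly[delivery],
    }
```

For the worked example given in the text (conception in April 1997, delivery in December 1997), these functions span January–July 1997 for the periconceptional window and October–December 1997 for the last 3 months of pregnancy.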
These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997.\n Outcome measurements We defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses.\nWe defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses.\n Statistical analysis We used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. 
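A sketch of the kind of logistic model just described, fit with statsmodels on a synthetic stand-in for the analysis file. All column names are invented, the covariates simply mirror the a priori set named in the text, and rescaling the continuous exposure so that one unit equals 0.1 ppm makes the exponentiated coefficient the odds ratio per 0.1-ppm increase in CO.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the one-row-per-pregnancy analysis file (column names invented).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "co_periconceptional": rng.uniform(0.5, 2.5, n),
    "preeclampsia": rng.integers(0, 2, n),
    "maternal_age": rng.normal(32, 4, n),
    "race": rng.choice(["white", "asian", "other"], n),
    "prepregnancy_bmi": rng.normal(24, 4, n),
    "nulliparous": rng.integers(0, 2, n),
    "conception_month": rng.integers(1, 13, n),
    "conception_year": rng.integers(1996, 2007, n),
})

# Primary analysis: exposure categorized by distribution quartiles.
df["co_quartile"] = pd.qcut(df["co_periconceptional"], 4, labels=["q1", "q2", "q3", "q4"])
quartile_fit = smf.logit(
    "preeclampsia ~ C(co_quartile, Treatment('q1')) + maternal_age + C(race) + prepregnancy_bmi"
    " + nulliparous + C(conception_month) + C(conception_year)",
    data=df,
).fit(disp=0)

# Continuous analysis: one unit = 0.1 ppm, so exp(beta) is the OR per 0.1-ppm increase.
df["co_per_tenth_ppm"] = df["co_periconceptional"] / 0.1
continuous_fit = smf.logit(
    "preeclampsia ~ co_per_tenth_ppm + maternal_age + C(race) + prepregnancy_bmi"
    " + nulliparous + C(conception_month) + C(conception_year)",
    data=df,
).fit(disp=0)

ors = pd.concat([np.exp(continuous_fit.params), np.exp(continuous_fit.conf_int())], axis=1)
ors.columns = ["OR", "ci_low", "ci_high"]
print(ors.loc["co_per_tenth_ppm"])
```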
These gradients are roughly equivalent to the increments between deciles for both pollutants.\nWe evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000).\nWe conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension.\nFinally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. 
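The out-of-range flag used in this sensitivity analysis amounts to comparing each participant's predictor values with the minimum and maximum observed at the monitoring sites. A minimal sketch, with hypothetical column names:

```python
import pandas as pd

def flag_extrapolated(participants: pd.DataFrame, monitor_sites: pd.DataFrame, predictors: list) -> pd.Series:
    """True where any predictor value at the participant's address lies outside
    the min-max range observed at the monitoring sites."""
    lo = monitor_sites[predictors].min()
    hi = monitor_sites[predictors].max()
    outside = participants[predictors].lt(lo) | participants[predictors].gt(hi)
    return outside.any(axis=1)

# Toy example using the distances quoted in the text: monitors saw 0.2-1.3 km,
# so a residence 92 km from the nearest major road is flagged.
monitors = pd.DataFrame({"dist_major_road_km": [0.2, 0.6, 1.3]})
homes = pd.DataFrame({"dist_major_road_km": [0.5, 92.0]})
print(flag_extrapolated(homes, monitors, ["dist_major_road_km"]).tolist())  # [False, True]
```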
We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null.\nWe used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. These gradients are roughly equivalent to the increments between deciles for both pollutants.\nWe evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000).\nWe conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension.\nFinally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. 
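Both the change-in-estimate check for confounding and the between-stratum comparison for interaction described above are percent-difference rules. The sketch below is one reading of those rules; the text does not specify the denominator for the percent difference, so the crude (or first-stratum) value is assumed here.

```python
def pct_diff(a: float, b: float) -> float:
    """Percent difference in magnitude, relative to the first value (an assumption)."""
    return abs(a - b) / abs(a) * 100.0

def is_confounder(crude_or: float, adjusted_or: float, threshold: float = 10.0) -> bool:
    """Flag confounding when the adjusted OR differs from the crude OR by more than 10%."""
    return pct_diff(crude_or, adjusted_or) > threshold

def suggests_interaction(stratum1_ors, stratum2_ors, threshold: float = 20.0, min_categories: int = 2) -> bool:
    """Call it interaction when at least two of the paired exposure-category ORs
    differ in magnitude by more than 20% between strata."""
    n_diff = sum(pct_diff(a, b) > threshold for a, b in zip(stratum1_ors, stratum2_ors))
    return n_diff >= min_categories

print(is_confounder(1.00, 1.15))                                         # True (15% change)
print(suggests_interaction([1.0, 1.2, 1.4, 1.6], [1.0, 1.5, 1.8, 1.7]))  # True
```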
For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null.", "We conducted a prospective analysis using data from the Omega Study, a pregnancy cohort study previously described in detail (Butler et al. 2004). Participants were women attending prenatal care clinics affiliated with Swedish Medical Center in Seattle and Tacoma General Hospital in Tacoma, Washington. Eligible women initiated prenatal care before 20 weeks of gestation. Women were ineligible if they were < 18 years of age, were non-English speakers, or did not plan to deliver at either hospital. Participants completed an interviewer-administered questionnaire soon after enrollment (mean ± SD gestational age, 15.9 ± 4.8 weeks). This questionnaire was used to collect information on sociodemographic, anthropometric, behavioral, medical, and reproductive characteristics. After delivery, study personnel abstracted information on the pregnancy course and outcome from medical records. Study procedures were approved by the institutional review boards of both hospitals. All participants provided written informed consent.\nWe used data from women recruited between 1996 and 2006. During this period, 4,000 (79%) of 5,063 invited women enrolled in the study. Of these, 79 experienced early pregnancy losses, 68 moved or delivered elsewhere, and 169 were lost to follow-up due to unknown delivery outcome or a missing medical record. Of the 3,675 women who completed the study, we included 3,509 (95%) in this analysis. We excluded 145 participants for whom we were unable to accurately assess exposures: 121 had nongeocodable residential addresses, and 24 lived outside the study area of King, Kitsap, Pierce, and Snohomish counties. These participants were less likely, on average, to be nulliparous than women who remained in the analytic sample (51% vs. 60%) but were otherwise similar. We also excluded 21 additional women who were missing information on one or more covariates chosen a priori for final models, as described below. Distributions of air pollutant exposures did not substantially differ between women with missing and nonmissing covariate information.", "We predicted monthly ambient CO and PM2.5 exposures using two multivariable linear regression models that included terms for environmental characteristics, month, and year. We did not examine other criteria pollutants (ozone, nitrogen oxides, sulfur dioxide, and lead) because there were too few monitoring sites to construct exposure models (PSCAA 2007). The CO model has been previously described in detail (Rudra et al. 2010b). Our models were constructed using CO and PM2.5 measurements collected daily between 1996 and 2006 at monitoring sites administered by the PSCAA. 
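The general structure of these exposure models (a monthly average concentration regressed on month and year indicators plus site characteristics) can be sketched with an ordinary least-squares fit. The synthetic data, column names, and continuous coding below are for illustration only; the published models were built with the stepwise procedure cited in the text and used categorized versions of several predictors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the monitor-month data: one row per site per calendar month.
rng = np.random.default_rng(1)
n = 400
monitor_months = pd.DataFrame({
    "co_ppm": rng.uniform(0.3, 2.5, n),
    "year": rng.integers(1996, 2007, n),
    "month": rng.integers(1, 13, n),
    "street_density_500m": rng.uniform(2, 20, n),
    "dist_major_road_km": rng.uniform(0.2, 1.3, n),
    "pop_density": rng.uniform(500, 8000, n),
})

exposure_fit = smf.ols(
    "co_ppm ~ C(year) + C(month) + street_density_500m + dist_major_road_km + pop_density",
    data=monitor_months,
).fit()

# The fitted coefficients are then applied to the same characteristics measured at a
# residential address to predict that address's monthly concentration.
new_address = monitor_months.iloc[[0]].copy()
print(float(exposure_fit.predict(new_address).iloc[0]))
```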
The sites were located by the PSCAA using U.S. Environmental Protection Agency (EPA) criteria to ensure a consistent and representative measure of air quality (PSCAA 2007). We collapsed daily measurements into monthly average concentrations, resulting in 890 CO concentrations from 15 sites and 803 PM2.5 concentrations from 12 sites. For more site characteristics and a map of monitoring sites and participants’ residences, see Supplemental Material, Table 1, Figure 1 (doi:10.1289/ehp.1002947).", "We evaluated several characteristics as potential predictors of monthly average pollutant concentrations at the monitoring sites. We mapped and measured characteristics at each site using ArcMap (version 9.2; ESRI, Redlands, CA). We used 2001 traffic count data from the Washington Department of Transportation to estimate annual average traffic volume on the nearest major road (federal or state highway) within circular buffers with radii of 250 and 500 m (Washington State Department of Transportation 2002). We also measured distance to the nearest major road. We used Census 2000 TIGER line files to estimate street density (kilometers per square kilometer) within 100-, 250-, 500-, and 1,000-m buffers (U.S. Census Bureau 2002). As has been done in previous literature, we chose a priori several buffer sizes to allow examination of multiple spatial scales (e.g., Brauer et al. 2003, 2008; Jerrett et al. 2005). We used Census 2000 measures of population density (persons per square kilometer) and housing density (housing units per square kilometer) within each site’s census block group (U.S. Census Bureau 2002). We used monthly averages of daily high and low temperatures and precipitation collected at 31 weather stations (Western Regional Climate Center 2008). We used measurements at the nearest weather station; the average distance between each monitoring site and the nearest station was 6.8 km (range, 0.7–32 km). We used year and month terms to capture secular and seasonal variations in air pollutant concentrations.", "Using a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant’s geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants’ monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1–15 to the current month and rounding days 16–31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively. If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date.\nThe CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). 
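The quartile-based and threshold-based indicator terms listed for the CO model can be built with pandas' binning helpers; the cut points below approximate the published categories, and the site values are made up.

```python
import pandas as pd

sites = pd.DataFrame({
    "street_density_500m": [3.1, 6.0, 7.4, 9.8, 11.3, 12.0, 15.5, 18.2],
    "dist_major_road_m": [50, 90, 120, 250, 400, 800, 1500, 2500],
})

# Quartile-based indicators for street density within 500 m.
sites["street_density_q"] = pd.qcut(sites["street_density_500m"], 4, labels=["q1", "q2", "q3", "q4"])

# Distance to the nearest major road, categorized roughly as < 100, 101-1,000, > 1,000 m.
sites["dist_cat"] = pd.cut(
    sites["dist_major_road_m"],
    bins=[0, 100, 1000, float("inf")],
    labels=["lt_100m", "100_to_1000m", "gt_1000m"],
)

# In a regression these become indicator (dummy) variables, e.g. via pd.get_dummies:
print(pd.get_dummies(sites[["street_density_q", "dist_cat"]], drop_first=True).head())
```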
We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. Concentrations were highest in areas with third-quartile street density.\nThe PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density.", "For this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997.", "We defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). 
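A deliberately simplified sketch of how these outcome definitions might be applied to abstracted record data. The field names are hypothetical, "sustained" hypertension is approximated as at least two elevated readings, and real ascertainment involves clinical review that this check does not capture.

```python
def is_preeclampsia(sbp_readings, dbp_readings, dipstick_results, protein_mg_dl=None) -> bool:
    """Simplified ACOG-style check: sustained pregnancy-induced hypertension
    (>= 140/90 mmHg, approximated as two or more elevated readings) plus proteinuria
    (>= 30 mg/dL, or 1+ on two or more urine dipsticks)."""
    hypertensive = sum(s >= 140 or d >= 90 for s, d in zip(sbp_readings, dbp_readings)) >= 2
    proteinuric = (protein_mg_dl is not None and protein_mg_dl >= 30) or sum(r >= 1 for r in dipstick_results) >= 2
    return hypertensive and proteinuric

def is_preterm(gestational_age_days: int) -> bool:
    """Preterm delivery: fewer than 37 completed weeks of gestation."""
    return gestational_age_days < 37 * 7

print(is_preeclampsia([142, 146], [92, 95], [1, 2]))  # True
print(is_preterm(250))                                # True (35 weeks and 5 days)
```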
We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses." ]
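The exposure models described in this record were fit to monthly averages obtained by collapsing daily monitor measurements; a sketch of that aggregation step with pandas (column names assumed):

```python
import pandas as pd

# Hypothetical daily monitor data: one row per site per day.
daily = pd.DataFrame({
    "site_id": ["A", "A", "A", "B", "B"],
    "date": pd.to_datetime(["1996-01-01", "1996-01-02", "1996-02-01", "1996-01-01", "1996-01-02"]),
    "co_ppm": [1.2, 1.4, 0.9, 0.7, 0.8],
})

monthly = (
    daily
    .assign(year=daily["date"].dt.year, month=daily["date"].dt.month)
    .groupby(["site_id", "year", "month"], as_index=False)["co_ppm"]
    .mean()
    .rename(columns={"co_ppm": "co_ppm_monthly_avg"})
)
print(monthly)
```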
[ "methods", "methods", null, null, "methods", null, null ]
[ "Methods", "Study population", "Air pollutant exposure estimation", "Air pollutant predictor variables", "Model fitting procedures", "Exposure estimation", "Outcome measurements", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ " Study population We conducted a prospective analysis using data from the Omega Study, a pregnancy cohort study previously described in detail (Butler et al. 2004). Participants were women attending prenatal care clinics affiliated with Swedish Medical Center in Seattle and Tacoma General Hospital in Tacoma, Washington. Eligible women initiated prenatal care before 20 weeks of gestation. Women were ineligible if they were < 18 years of age, were non-English speakers, or did not plan to deliver at either hospital. Participants completed an interviewer-administered questionnaire soon after enrollment (mean ± SD gestational age, 15.9 ± 4.8 weeks). This questionnaire was used to collect information on sociodemographic, anthropometric, behavioral, medical, and reproductive characteristics. After delivery, study personnel abstracted information on the pregnancy course and outcome from medical records. Study procedures were approved by the institutional review boards of both hospitals. All participants provided written informed consent.\nWe used data from women recruited between 1996 and 2006. During this period, 4,000 (79%) of 5,063 invited women enrolled in the study. Of these, 79 experienced early pregnancy losses, 68 moved or delivered elsewhere, and 169 were lost to follow-up due to unknown delivery outcome or a missing medical record. Of the 3,675 women who completed the study, we included 3,509 (95%) in this analysis. We excluded 145 participants for whom we were unable to accurately assess exposures: 121 had nongeocodable residential addresses, and 24 lived outside the study area of King, Kitsap, Pierce, and Snohomish counties. These participants were less likely, on average, to be nulliparous than women who remained in the analytic sample (51% vs. 60%) but were otherwise similar. We also excluded 21 additional women who were missing information on one or more covariates chosen a priori for final models, as described below. Distributions of air pollutant exposures did not substantially differ between women with missing and nonmissing covariate information.\nWe conducted a prospective analysis using data from the Omega Study, a pregnancy cohort study previously described in detail (Butler et al. 2004). Participants were women attending prenatal care clinics affiliated with Swedish Medical Center in Seattle and Tacoma General Hospital in Tacoma, Washington. Eligible women initiated prenatal care before 20 weeks of gestation. Women were ineligible if they were < 18 years of age, were non-English speakers, or did not plan to deliver at either hospital. Participants completed an interviewer-administered questionnaire soon after enrollment (mean ± SD gestational age, 15.9 ± 4.8 weeks). This questionnaire was used to collect information on sociodemographic, anthropometric, behavioral, medical, and reproductive characteristics. After delivery, study personnel abstracted information on the pregnancy course and outcome from medical records. Study procedures were approved by the institutional review boards of both hospitals. All participants provided written informed consent.\nWe used data from women recruited between 1996 and 2006. During this period, 4,000 (79%) of 5,063 invited women enrolled in the study. Of these, 79 experienced early pregnancy losses, 68 moved or delivered elsewhere, and 169 were lost to follow-up due to unknown delivery outcome or a missing medical record. Of the 3,675 women who completed the study, we included 3,509 (95%) in this analysis. 
We excluded 145 participants for whom we were unable to accurately assess exposures: 121 had nongeocodable residential addresses, and 24 lived outside the study area of King, Kitsap, Pierce, and Snohomish counties. These participants were less likely, on average, to be nulliparous than women who remained in the analytic sample (51% vs. 60%) but were otherwise similar. We also excluded 21 additional women who were missing information on one or more covariates chosen a priori for final models, as described below. Distributions of air pollutant exposures did not substantially differ between women with missing and nonmissing covariate information.\n Air pollutant exposure estimation We predicted monthly ambient CO and PM2.5 exposures using two multivariable linear regression models that included terms for environmental characteristics, month, and year. We did not examine other criteria pollutants (ozone, nitrogen oxides, sulfur dioxide, and lead) because there were too few monitoring sites to construct exposure models (PSCAA 2007). The CO model has been previously described in detail (Rudra et al. 2010b). Our models were constructed using CO and PM2.5 measurements collected daily between 1996 and 2006 at monitoring sites administered by the PSCAA. The sites were located by the PSCAA using U.S. Environmental Protection Agency (EPA) criteria to ensure a consistent and representative measure of air quality (PSCAA 2007). We collapsed daily measurements into monthly average concentrations, resulting in 890 CO concentrations from 15 sites and 803 PM2.5 concentrations from 12 sites. For more site characteristics and a map of monitoring sites and participants’ residences, see Supplemental Material, Table 1, Figure 1 (doi:10.1289/ehp.1002947).\nWe predicted monthly ambient CO and PM2.5 exposures using two multivariable linear regression models that included terms for environmental characteristics, month, and year. We did not examine other criteria pollutants (ozone, nitrogen oxides, sulfur dioxide, and lead) because there were too few monitoring sites to construct exposure models (PSCAA 2007). The CO model has been previously described in detail (Rudra et al. 2010b). Our models were constructed using CO and PM2.5 measurements collected daily between 1996 and 2006 at monitoring sites administered by the PSCAA. The sites were located by the PSCAA using U.S. Environmental Protection Agency (EPA) criteria to ensure a consistent and representative measure of air quality (PSCAA 2007). We collapsed daily measurements into monthly average concentrations, resulting in 890 CO concentrations from 15 sites and 803 PM2.5 concentrations from 12 sites. For more site characteristics and a map of monitoring sites and participants’ residences, see Supplemental Material, Table 1, Figure 1 (doi:10.1289/ehp.1002947).\n Air pollutant predictor variables We evaluated several characteristics as potential predictors of monthly average pollutant concentrations at the monitoring sites. We mapped and measured characteristics at each site using ArcMap (version 9.2; ESRI, Redlands, CA). We used 2001 traffic count data from the Washington Department of Transportation to estimate annual average traffic volume on the nearest major road (federal or state highway) within circular buffers with radii of 250 and 500 m (Washington State Department of Transportation 2002). We also measured distance to the nearest major road. 
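The study measured these buffer and distance variables in ArcMap; purely as an illustration, the same quantities can be sketched with open-source tools (geopandas/shapely) on layers projected to a metre-based coordinate system. The layer contents, column names, and the single buffer radius are assumptions.

```python
import geopandas as gpd
from shapely.ops import unary_union

def road_metrics(sites: gpd.GeoDataFrame, roads: gpd.GeoDataFrame, buffer_m: float = 500.0) -> gpd.GeoDataFrame:
    """For each monitoring-site point, compute road length (km) inside a circular buffer,
    distance (m) to the nearest road, and street density (km of road per km^2 of buffer).
    Both layers must share a projected, metre-based coordinate reference system."""
    out = sites.copy()
    road_union = unary_union(list(roads.geometry))  # merge all road segments into one geometry
    buffers = sites.geometry.buffer(buffer_m)
    out["road_km_in_buffer"] = [geom.intersection(road_union).length / 1000.0 for geom in buffers]
    out["dist_nearest_road_m"] = sites.geometry.distance(road_union)
    buffer_area_km2 = 3.141592653589793 * (buffer_m / 1000.0) ** 2
    out["street_density_km_per_km2"] = out["road_km_in_buffer"] / buffer_area_km2
    return out

# The study repeated this kind of calculation for 100-, 250-, 500-, and 1,000-m buffers.
```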
We used Census 2000 TIGER line files to estimate street density (kilometers per square kilometer) within 100-, 250-, 500-, and 1,000-m buffers (U.S. Census Bureau 2002). As has been done in previous literature, we chose a priori several buffer sizes to allow examination of multiple spatial scales (e.g., Brauer et al. 2003, 2008; Jerrett et al. 2005). We used Census 2000 measures of population density (persons per square kilometer) and housing density (housing units per square kilometer) within each site’s census block group (U.S. Census Bureau 2002). We used monthly averages of daily high and low temperatures and precipitation collected at 31 weather stations (Western Regional Climate Center 2008). We used measurements at the nearest weather station; the average distance between each monitoring site and the nearest station was 6.8 km (range, 0.7–32 km). We used year and month terms to capture secular and seasonal variations in air pollutant concentrations.\nWe evaluated several characteristics as potential predictors of monthly average pollutant concentrations at the monitoring sites. We mapped and measured characteristics at each site using ArcMap (version 9.2; ESRI, Redlands, CA). We used 2001 traffic count data from the Washington Department of Transportation to estimate annual average traffic volume on the nearest major road (federal or state highway) within circular buffers with radii of 250 and 500 m (Washington State Department of Transportation 2002). We also measured distance to the nearest major road. We used Census 2000 TIGER line files to estimate street density (kilometers per square kilometer) within 100-, 250-, 500-, and 1,000-m buffers (U.S. Census Bureau 2002). As has been done in previous literature, we chose a priori several buffer sizes to allow examination of multiple spatial scales (e.g., Brauer et al. 2003, 2008; Jerrett et al. 2005). We used Census 2000 measures of population density (persons per square kilometer) and housing density (housing units per square kilometer) within each site’s census block group (U.S. Census Bureau 2002). We used monthly averages of daily high and low temperatures and precipitation collected at 31 weather stations (Western Regional Climate Center 2008). We used measurements at the nearest weather station; the average distance between each monitoring site and the nearest station was 6.8 km (range, 0.7–32 km). We used year and month terms to capture secular and seasonal variations in air pollutant concentrations.\n Model fitting procedures Using a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant’s geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants’ monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1–15 to the current month and rounding days 16–31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively. 
If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date.\nThe CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. Concentrations were highest in areas with third-quartile street density.\nThe PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density.\nUsing a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant’s geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants’ monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1–15 to the current month and rounding days 16–31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively. If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date.\nThe CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. 
Concentrations were highest in areas with third-quartile street density.\nThe PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density.\n Exposure estimation For this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997.\nFor this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). 
These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997.\n Outcome measurements We defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses.\nWe defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses.\n Statistical analysis We used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. 
These gradients are roughly equivalent to the increments between deciles for both pollutants.\nWe evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000).\nWe conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension.\nFinally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. 
We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null.\nWe used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. These gradients are roughly equivalent to the increments between deciles for both pollutants.\nWe evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000).\nWe conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension.\nFinally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. 
For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null.", "We conducted a prospective analysis using data from the Omega Study, a pregnancy cohort study previously described in detail (Butler et al. 2004). Participants were women attending prenatal care clinics affiliated with Swedish Medical Center in Seattle and Tacoma General Hospital in Tacoma, Washington. Eligible women initiated prenatal care before 20 weeks of gestation. Women were ineligible if they were < 18 years of age, were non-English speakers, or did not plan to deliver at either hospital. Participants completed an interviewer-administered questionnaire soon after enrollment (mean ± SD gestational age, 15.9 ± 4.8 weeks). This questionnaire was used to collect information on sociodemographic, anthropometric, behavioral, medical, and reproductive characteristics. After delivery, study personnel abstracted information on the pregnancy course and outcome from medical records. Study procedures were approved by the institutional review boards of both hospitals. All participants provided written informed consent.\nWe used data from women recruited between 1996 and 2006. During this period, 4,000 (79%) of 5,063 invited women enrolled in the study. Of these, 79 experienced early pregnancy losses, 68 moved or delivered elsewhere, and 169 were lost to follow-up due to unknown delivery outcome or a missing medical record. Of the 3,675 women who completed the study, we included 3,509 (95%) in this analysis. We excluded 145 participants for whom we were unable to accurately assess exposures: 121 had nongeocodable residential addresses, and 24 lived outside the study area of King, Kitsap, Pierce, and Snohomish counties. These participants were less likely, on average, to be nulliparous than women who remained in the analytic sample (51% vs. 60%) but were otherwise similar. We also excluded 21 additional women who were missing information on one or more covariates chosen a priori for final models, as described below. Distributions of air pollutant exposures did not substantially differ between women with missing and nonmissing covariate information.", "We predicted monthly ambient CO and PM2.5 exposures using two multivariable linear regression models that included terms for environmental characteristics, month, and year. We did not examine other criteria pollutants (ozone, nitrogen oxides, sulfur dioxide, and lead) because there were too few monitoring sites to construct exposure models (PSCAA 2007). The CO model has been previously described in detail (Rudra et al. 2010b). Our models were constructed using CO and PM2.5 measurements collected daily between 1996 and 2006 at monitoring sites administered by the PSCAA. 
The sites were located by the PSCAA using U.S. Environmental Protection Agency (EPA) criteria to ensure a consistent and representative measure of air quality (PSCAA 2007). We collapsed daily measurements into monthly average concentrations, resulting in 890 CO concentrations from 15 sites and 803 PM2.5 concentrations from 12 sites. For more site characteristics and a map of monitoring sites and participants’ residences, see Supplemental Material, Table 1, Figure 1 (doi:10.1289/ehp.1002947).", "We evaluated several characteristics as potential predictors of monthly average pollutant concentrations at the monitoring sites. We mapped and measured characteristics at each site using ArcMap (version 9.2; ESRI, Redlands, CA). We used 2001 traffic count data from the Washington Department of Transportation to estimate annual average traffic volume on the nearest major road (federal or state highway) within circular buffers with radii of 250 and 500 m (Washington State Department of Transportation 2002). We also measured distance to the nearest major road. We used Census 2000 TIGER line files to estimate street density (kilometers per square kilometer) within 100-, 250-, 500-, and 1,000-m buffers (U.S. Census Bureau 2002). As has been done in previous literature, we chose a priori several buffer sizes to allow examination of multiple spatial scales (e.g., Brauer et al. 2003, 2008; Jerrett et al. 2005). We used Census 2000 measures of population density (persons per square kilometer) and housing density (housing units per square kilometer) within each site’s census block group (U.S. Census Bureau 2002). We used monthly averages of daily high and low temperatures and precipitation collected at 31 weather stations (Western Regional Climate Center 2008). We used measurements at the nearest weather station; the average distance between each monitoring site and the nearest station was 6.8 km (range, 0.7–32 km). We used year and month terms to capture secular and seasonal variations in air pollutant concentrations.", "Using a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant’s geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants’ monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1–15 to the current month and rounding days 16–31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively. If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date.\nThe CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). 
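A regression with exactly these terms can be written with the statsmodels formula interface as in the sketch below. This is an illustration rather than the authors' code (the software used for the published model is not stated here), and the site-by-month DataFrame and its column names are assumptions.

```python
import statsmodels.formula.api as smf

def fit_co_model(site_months):
    """Land-use regression for monthly CO, mirroring the terms listed above.

    Assumed columns in `site_months` (one row per monitoring site and month):
      co_ppm            monthly mean CO concentration at the site
      year, month       calendar year and month of the observation
      street_dens_q500  quartile (1-4) of street density within 500 m
      road_dist_cat     distance-to-major-road category (<100, 101-1,000, >1,000 m)
      pop_density       census tract population density (continuous)
    """
    formula = ("co_ppm ~ C(year) + C(month) + C(street_dens_q500)"
               " + C(road_dist_cat) + pop_density")
    return smf.ols(formula, data=site_months).fit()

# fit_co_model(site_months).rsquared would correspond to the 73% of
# variation in CO concentrations reported for the published model.
```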
We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. Concentrations were highest in areas with third-quartile street density.\nThe PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density.", "For this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997.", "We defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). 
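Read literally, these diagnostic criteria translate into a small classification helper like the sketch below. The record layout (per-pregnancy lists of blood pressure readings and dipstick protein grades) is hypothetical, and the handling of "sustained" hypertension is only one reasonable interpretation, not the study's abstraction procedure.

```python
def meets_acog_preeclampsia(bp_readings, dipstick_grades, urine_protein_mg_dl=None):
    """Approximate the ACOG criteria quoted above.

    bp_readings          list of (systolic, diastolic) readings in mmHg
    dipstick_grades      list of integer dipstick protein grades (1 == "1+")
    urine_protein_mg_dl  optional measured urine protein concentration

    Hypothetical data layout; "sustained" hypertension is approximated here
    as two or more readings at or above 140/90 mmHg.
    """
    elevated = [s >= 140 or d >= 90 for s, d in bp_readings]
    sustained_hypertension = sum(elevated) >= 2

    proteinuria = (
        (urine_protein_mg_dl is not None and urine_protein_mg_dl >= 30)
        or sum(g >= 1 for g in dipstick_grades) >= 2
    )
    return sustained_hypertension and proteinuria
```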
We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses.", "We used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. These gradients are roughly equivalent to the increments between deciles for both pollutants.\nWe evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000).\nWe conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension.\nFinally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. 
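The stratum-comparison rule used above to screen for interaction (a difference of more than 20% in the magnitude of at least two category ORs) can be coded roughly as follows. The ORs in the example are placeholders rather than study estimates, and the denominator for the relative difference is an assumption, since the text does not specify one.

```python
def suggests_interaction(stratum_a_ors, stratum_b_ors, rel_diff=0.20, min_flagged=2):
    """Compare corresponding exposure-category ORs between two strata.

    Each argument is a list of category ORs estimated within one stratum
    (e.g., for quartiles of CO exposure). A loose reading of the 20% rule.
    """
    flagged = sum(
        abs(a - b) / min(a, b) > rel_diff
        for a, b in zip(stratum_a_ors, stratum_b_ors)
    )
    return flagged >= min_flagged

# Placeholder example for two parity strata (made-up ORs):
# suggests_interaction([1.1, 1.3, 1.6, 1.05], [1.0, 1.2, 1.5, 1.04])  # -> False
```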
For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null.", "Median [interquartile range (IQR)] CO and PM2.5 concentrations observed at monitoring sites during the study period were 1.08 (0.80–1.38) ppm and 10.0 (8.3–12.4) μg/m3, respectively [Supplemental Material, Table 1 (doi:10.1289/ehp.1002947)]. Among study participants, median (IQR) predicted CO and PM2.5 exposures in the periconceptional period were 0.80 (0.59–1.02) ppm and 10.1 (8.7–11.4) μg/m3, respectively. By design, peak CO and PM2.5 exposures were higher [median (IQR): peak CO, 1.05 (0.85–1.27) ppm; peak PM2.5, 14.2 (12.6–15.1) μg/m3]. Distributions in the other six windows were similar to those in the periconceptional window: median CO concentrations ranged from 0.79 to 0.80 ppm, and median PM2.5 concentrations ranged from 9.5 to 12.6 μg/m3. The difference between distributions of observed CO concentrations and participants’ predicted exposures was due primarily to the fact that CO monitoring was more pervasive in the earlier years of the study (PSCAA 2007). CO and PM2.5 exposures were moderately correlated: Pairwise correlation coefficients ranged from 0.25 to 0.45 within exposure windows.\nTable 1 shows distributions of selected participants’ characteristics and periconceptional CO and PM2.5 distributions within categories of those characteristics. Distributions for other exposure windows were similar. Participants were predominantly white, married, well educated, and nonsmokers. Predicted CO exposures were inversely associated with age, educational attainment, and household income and were higher among black or Hispanic women than among white women, unmarried women, and those who reported smoking or secondhand smoke exposure. Predicted CO exposures also declined sharply over the study period and were highest in women conceiving in winter. Although predicted PM2.5 exposures were generally less strongly related to maternal characteristics, they were also highest among black women, women with lower educational attainment or income, and those who conceived in winter.\nA total of 117 women (3.3%) developed preeclampsia. Table 2 shows unadjusted and adjusted associations between periconceptional air pollutant exposures and preeclampsia. Because adjustment for year of conception strongly influenced the estimated associations, we show adjusted models without (model 2) and with (model 3) inclusion of year as a covariate. Fourth-quartile periconceptional CO concentrations were significantly associated with preeclampsia after adjustment for maternal characteristics and season of conception (fourth vs. first quartile: OR = 2.08; 95% CI, 1.16–3.72). 
However, this association became nonsignificant and inverted after further adjustment for year of conception (OR = 0.75; 95% CI, 0.42–1.64). Similarly, although each 0.1-ppm increase in CO was significantly associated with preeclampsia after adjustment for maternal characteristics and season of conception (OR = 1.07; 95% CI, 1.02–1.13), the association disappeared after further adjustment for year of conception (OR = 0.98; 95% CI, 0.91–1.06). The association between preeclampsia and periconceptional PM2.5 exposure was nonsignificant and weaker than for CO (fourth vs. first quartile OR, adjusted for maternal characteristics and season of conception, 1.63; 95% CI, 0.79–3.38). This association weakened somewhat after further adjustment for year of conception (OR = 1.41; 95% CI, 0.63–3.18). Unadjusted and adjusted associations with CO and PM2.5 in the preconceptional and postconceptional windows and the periconceptional month of peak exposure were similar to those in the periconceptional window (data not shown; tables of these and other secondary analyses are available from the authors upon request). For example, in models of each of these exposure windows, unadjusted for year of conception, ORs for each 0.1-ppm CO increase were all 1.07.\nThe incidence of preterm delivery was 10.5% (369 cases). Predicted air pollutant exposures in the last 3 months of pregnancy were not strongly or significantly associated with preterm delivery, and year of conception did not materially influence estimates of association (Table 3). For instance, fourth- versus first-quartile ORs were 0.88 (95% CI, 0.59–1.31) for CO exposure and 0.81 (95% CI, 0.49–1.34) for PM2.5 exposure in fully adjusted models. ORs for CO and PM2.5 exposures in the first and second trimesters and the last month of pregnancy were similar (data not shown). Multivariable linear regression analysis using gestational age at delivery as the outcome also produced similar results: We found no statistically significant or strong associations with exposures to either air pollutant (data not shown).\nAnalyses of models stratified by maternal age, parity, prepregnancy BMI, employment, and smoking or secondhand smoke exposure did not provide evidence that any of these characteristics modified associations between air pollutant exposures and either outcome (data not shown). CO–preeclampsia associations were robust to exclusion of women with prepregnancy diabetes (n = 105) or hypertension (n = 115; data not shown). We examined the sensitivity of our results to participants’ residential values of environmental characteristics used as predictors in our air pollutant exposure models. A total of 1,930 women (55%) resided in an area at which one or more CO model predictors were outside the range observed at monitoring sites. Proportions for individual predictors were 7.9% for population density, 9.2% for street density, and 47.3% for distance to the nearest major road (only 0.1% lived < 0.2 km from a major road; 47.2% lived > 1.3 km). For 904 participants (25.8%), the street density measure used in the PM2.5 model was higher (24.2%) or lower (1.6%) than the range observed at monitoring sites. Associations between preeclampsia and air pollutant exposures were slightly stronger in the subset of women with no out-of-range model predictors. 
For example, the OR for a 0.1-ppm increase in periconceptional CO exposure was 1.10 (95% CI, 0.98–1.23) among 1,579 women with no out-of-range model predictors and 1.06 (95% CI, 0.99–1.12) among 1,930 women with one or more out-of-range predictors (ORs were adjusted for maternal characteristics and season but not year). Associations with preterm delivery also were similar between the two subgroups (data not shown). Finally, exclusion of the small proportion of women with predicted CO exposures outside the range of concentrations observed in the area (between 44 and 101 observations for each of the CO exposure windows) did not materially affect our results (data not shown).", "We found little evidence of strong associations between either predicted CO or PM2.5 exposure and preterm delivery risk in this setting. Although point estimates suggested increased risk of preeclampsia associated with PM2.5 exposure, CIs were wide. We found statistically significant, positive associations between preeclampsia and CO in all examined exposure windows. However, these associations disappeared after adjustment for year of conception.\nWe debated whether adjustment for year of conception was appropriate when examining the association between CO exposure and preeclampsia. CO concentrations in the four counties studied here declined 70 ppb/year (95% CI, 63–77 ppb/year), on average, between 1996 and 2006. Regional concentrations of PM2.5 and other criteria air pollutants remained stable or declined much less markedly (PSCAA 2007). The incidence of preeclampsia in our cohort declined over time independently of changes in maternal age, race/ethnicity, parity, prepregnancy BMI, and smoking history (adjusted decline in incidence, 5.3 cases/1,000 per year; 95% CI, 3.0–7.6/1,000 per year). No other covariate reported in Table 1 explained the secular decrease in incidence (data not shown). Because we consistently applied ACOG criteria to medical record information rather than relying on physician diagnosis, changes in diagnostic criteria cannot explain the decline in incidence. If regional CO declines were causally related to declines in preeclampsia incidence, adjustment for year of conception would be inappropriate. However, we cannot discount the possibility that secular change in some other risk factor may explain the decline in preeclampsia incidence. Adjustment for calendar year showed that differences in CO exposures among participants who conceived in the same year were not associated with preeclampsia risk. However, year of conception and CO exposures were strongly correlated in this population. More than half (52%) of participants with fourth-quartile periconceptional CO exposure conceived in the first 3 years of the study, whereas 43% of those with first-quartile exposure conceived in the last 2 years. As previously described in detail, the CO model captured the fact that most CO variation in this setting was temporal rather than spatial (Rudra et al. 2010b). The lesser spatial CO variability limited our ability to examine spatial contrasts and may also explain the lack of association after adjustment for year of conception.\nThere have been few studies of preeclampsia in relation to air pollutant exposures. We previously reported suggestive but nonsignificant associations between CO and preeclampsia among the subset of 1,699 Omega Study participants who enrolled between 1996 and 2002 (Rudra and Williams 2006). 
The third- versus first-tertile periconceptional CO-adjusted OR was 1.73 (95% CI, 0.91–3.27), and PM2.5 exposures were not strongly related to preeclampsia. Similarly, Woodruff et al. (2008) reported positive associations between nearest-monitor CO concentrations and preeclampsia in an analysis of approximately 2.3 million California residents delivering between 1996 and 2004 (fourth vs. first quartile: adjusted OR = 1.08; 95% CI, 1.02–1.13; quartile cut-points not reported). They adjusted this estimate of association for year, season, and other characteristics. They did not observe associations with PM2.5 exposures. Van den Hooven et al. (2009) observed no association between residential proximity to traffic (a proxy of air pollutant exposure) and preeclampsia risk in 7,339 Rotterdam residents enrolled in a population-based cohort. Most recently, Wu et al. (2009) reported that traffic-generated PM2.5 exposures predicted by dispersion models were significantly associated with increased preeclampsia risk [fourth vs. first quartile: adjusted OR = 1.42; 95% CI, 1.26–1.59; mean (IQR) PM2.5 exposures, 1.82 (1.35) μg/m3]. Traffic-generated nitrogen oxides were similarly associated (fourth vs. first quartile: adjusted OR = 1.33; 95% CI, 1.18–1.48). They did not examine CO exposures.\nGiven the similarities between preeclampsia and atherosclerotic cardiovascular disease (Kaaja and Greer 2005), many of the hypothesized mechanisms by which air pollutants increase cardiovascular risk such as inflammation, oxidative stress, and endothelial dysfunction (Brook 2008) may also apply to preeclampsia. Furthermore, hypoxia at the fetal–maternal interface secondary to impaired placentation has been hypothesized to cause dissemination of free radicals that trigger preeclampsia in susceptible women (Roberts et al. 2003). Inhaled CO may also induce hypoxia because it displaces oxygen from hemoglobin, forming carboxyhemoglobin (Scherer 2006). Even fairly low maternal carboxyhemoglobin concentrations can reduce oxygen transfer across the placenta (Hsia 1998). We previously reported that maternal blood carboxyhemoglobin in early pregnancy was associated with preeclampsia risk among Omega participants, although only within the subgroup of women with prior live births (> 1% vs. 0.7% carboxyhemoglobin: adjusted OR = 4.1; 95% CI, 1.3–12.9; parity interaction p-value = 0.01) (Rudra et al. 2010a). In contrast, the association we observed in the present study between CO exposure and preeclampsia was not modified by parity.\nThe literature on CO, PM2.5, and preterm delivery provides mixed results and no strong evidence of specific exposure windows (Brauer et al. 2008; Darrow et al. 2009; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007; Wu et al. 2009; Zeka et al. 2008). Two studies of the topic were conducted in Vancouver, British Columbia, an area with an airshed similar to that of the Puget Sound region (Brauer et al. 2008; Liu et al. 2003). Among about 230,000 women delivering between 1987 and 1998, Liu et al. (2003) observed increased preterm delivery associated with each 1.0-ppm increase in CO exposure in the last month of pregnancy (adjusted OR = 1.08; 95% CI, 1.01–1.15) but not the first month (adjusted OR = 0.95; 95% CI, 0.89–1.01). They estimated CO exposure as the average within the study region and did not examine PM2.5. Brauer et al. (2008) used inverse-distance weighting to predict CO and PM2.5 exposures for approximately 70,000 women delivering between 1999 and 2002. 
Each 1-μg/m3 increase in PM2.5 exposure over the entire pregnancy was associated with a 6% increase in the odds of preterm delivery (OR = 1.06; 95% CI, 1.01–1.11). CO exposure was not associated with delivery at < 37 weeks of gestation but was associated with delivery at < 30 weeks of gestation [adjusted OR, per 100-μg/m3 (~ 0.09 ppm) increase, 1.16; 95% CI, 1.01–1.33]. They found no exposure windows of greater or lesser relevance.\nWe used prediction models containing month, year, and land use terms to capture both spatial and temporal variations in CO and PM2.5 exposures. A strength of these models was the opportunity they provided to examine multiple exposure windows for both outcomes. As measured by R2 statistics, these models performed similarly to others described in the literature (Brauer et al. 2003; Clougherty et al. 2008; Henderson et al. 2007; Moore et al. 2007). We relied on local air monitoring data in designing our models rather than collecting data for the purposes of this study. Although this approach is cost-effective and convenient, the monitoring sites may not have been optimally located for the purposes of individual exposure prediction. CO sites, particularly, were often closer to major roads or in denser areas than were many participants’ residences. However, our results were robust to exclusion of women who lived in areas that differed substantially from monitoring sites with respect to model predictors.\nWe previously reported that our CO model–based exposure estimates were moderately correlated with contemporaneously measured whole-blood carboxyhemoglobin concentrations in this study population (Rudra et al. 2010b). There is no known biomarker of PM2.5 exposure, and because of budgetary constraints we were unable to validate our PM2.5 model against independently collected air quality measurements. The R2 for the PM2.5 model was lower than that for the CO model, primarily because year, traffic, and street density measures were less strongly predictive of PM2.5 than of CO. Research has shown that errors in air pollutant exposure estimates such as those used here have both Berkson-like and classical-like components, and that it is theoretically possible that these errors are biased away from the null (Szpiro et al. 2010). However, simulation studies using data from a network of a small number of monitors have found bias toward the null and wider CIs (Kim et al. 2009). Thus, we believe that limitations of our estimation method may have resulted in misclassification of PM2.5 and CO exposures toward the null. The misclassification of PM2.5 exposures may have masked associations with preeclampsia and preterm delivery. Errors and uncertainties in both CO and PM2.5 models may have also arisen from imprecision in geocoding, a limited number of locations (none of which were participants’ locations), rounding of exposure windows to the nearest month (which also hampered our ability to differentiate among exposure windows), and inaccurate land use measures. Because we based exposures on self-reported residential address, errors may have also arisen if participants moved soon before or after the interview; longer exposure windows may have been more greatly affected by this source of misclassification. Although employment status, used as a rough surrogate for time spent at home, did not influence our findings, we were not able to capture exposures experienced at other locations and during commuting. 
Additionally, our models did not capture exposures arising from indoor sources, although we were able to evaluate self-reported secondhand smoke exposure as a potential confounder. If missing nonambient exposures were unrelated to ambient exposures, they are unlikely to have affected our estimates of association (Sheppard et al. 2005).\nMost previous studies of air pollution and pregnancy outcomes have been based on birth registries or hospital databases (Brauer et al. 2008; Darrow et al. 2009; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007; Wilhelm and Ritz 2005; Wu et al. 2009; Zeka et al. 2008), with at least two exceptions (Ritz et al. 2007; van den Hooven et al. 2009). Our ability to rely on data from an epidemiologic cohort was a significant strength, because we were able to evaluate many characteristics that would not have been available from birth records as potential confounders and effect modifiers. Furthermore, abstraction of medical record data to determine outcomes strengthened our study by precluding limitations arising from inaccuracies in gestational age and underreporting of preeclampsia in birth records (Lydon-Rochelle et al. 2005; Martin 2007). However, reliance on cohort data resulted in fewer outcomes and less precise estimates than we would have been able to obtain with a larger regional registry. Because of limited medical record data on preeclampsia severity, we could not evaluate the relationships between preeclampsia severity or gestational age at onset and air pollutant exposures. However, in post hoc analyses we observed that associations were similar between preeclampsia associated with term delivery (n = 72) and preeclampsia associated with preterm delivery (n = 45; data not shown). The Omega Study’s high participation and retention rates reduced the likelihood that selection bias and missing data influenced our estimates of association. Study participants had higher income and educational attainment, on average, than did Washington State pregnant women (Rudra and Williams 2005). Participants were also more likely to be white and non-Hispanic. Therefore, these results may not be generalizable to other demographic groups, particularly those with higher risks of these outcomes.\nAlthough CO concentrations were quite high at the start of the study period (the region attained U.S. EPA compliance in 1996), they declined rapidly. PM2.5 concentrations were moderate compared with other U.S. cities; most of the region was in compliance with the current U.S. EPA standard throughout the study period (PSCAA 2007). These results may not be generalizable to women living in areas with markedly different air pollutant concentrations. Future studies of these outcomes in high-pollutant settings would be useful contributions to the literature.", "We found strong positive associations between CO and preeclampsia risk. Because both factors declined during the study period, we cannot exclude the possibility that secular changes in a preeclampsia risk factor unidentified in this study may explain the association. Our results do not provide evidence that PM2.5 and CO strongly influence risks of preterm delivery and preeclampsia among western Washington State women." ]
[ "methods", "methods", null, null, "methods", null, null, "methods", "results", "discussion", "conclusions" ]
[ "air pollution", "carbon monoxide", "fine particulate matter", "preeclampsia", "pregnancy preterm delivery" ]
Methods: Study population We conducted a prospective analysis using data from the Omega Study, a pregnancy cohort study previously described in detail (Butler et al. 2004). Participants were women attending prenatal care clinics affiliated with Swedish Medical Center in Seattle and Tacoma General Hospital in Tacoma, Washington. Eligible women initiated prenatal care before 20 weeks of gestation. Women were ineligible if they were < 18 years of age, were non-English speakers, or did not plan to deliver at either hospital. Participants completed an interviewer-administered questionnaire soon after enrollment (mean ± SD gestational age, 15.9 ± 4.8 weeks). This questionnaire was used to collect information on sociodemographic, anthropometric, behavioral, medical, and reproductive characteristics. After delivery, study personnel abstracted information on the pregnancy course and outcome from medical records. Study procedures were approved by the institutional review boards of both hospitals. All participants provided written informed consent. We used data from women recruited between 1996 and 2006. During this period, 4,000 (79%) of 5,063 invited women enrolled in the study. Of these, 79 experienced early pregnancy losses, 68 moved or delivered elsewhere, and 169 were lost to follow-up due to unknown delivery outcome or a missing medical record. Of the 3,675 women who completed the study, we included 3,509 (95%) in this analysis. We excluded 145 participants for whom we were unable to accurately assess exposures: 121 had nongeocodable residential addresses, and 24 lived outside the study area of King, Kitsap, Pierce, and Snohomish counties. These participants were less likely, on average, to be nulliparous than women who remained in the analytic sample (51% vs. 60%) but were otherwise similar. We also excluded 21 additional women who were missing information on one or more covariates chosen a priori for final models, as described below. Distributions of air pollutant exposures did not substantially differ between women with missing and nonmissing covariate information. We conducted a prospective analysis using data from the Omega Study, a pregnancy cohort study previously described in detail (Butler et al. 2004). Participants were women attending prenatal care clinics affiliated with Swedish Medical Center in Seattle and Tacoma General Hospital in Tacoma, Washington. Eligible women initiated prenatal care before 20 weeks of gestation. Women were ineligible if they were < 18 years of age, were non-English speakers, or did not plan to deliver at either hospital. Participants completed an interviewer-administered questionnaire soon after enrollment (mean ± SD gestational age, 15.9 ± 4.8 weeks). This questionnaire was used to collect information on sociodemographic, anthropometric, behavioral, medical, and reproductive characteristics. After delivery, study personnel abstracted information on the pregnancy course and outcome from medical records. Study procedures were approved by the institutional review boards of both hospitals. All participants provided written informed consent. We used data from women recruited between 1996 and 2006. During this period, 4,000 (79%) of 5,063 invited women enrolled in the study. Of these, 79 experienced early pregnancy losses, 68 moved or delivered elsewhere, and 169 were lost to follow-up due to unknown delivery outcome or a missing medical record. Of the 3,675 women who completed the study, we included 3,509 (95%) in this analysis. 
We excluded 145 participants for whom we were unable to accurately assess exposures: 121 had nongeocodable residential addresses, and 24 lived outside the study area of King, Kitsap, Pierce, and Snohomish counties. These participants were less likely, on average, to be nulliparous than women who remained in the analytic sample (51% vs. 60%) but were otherwise similar. We also excluded 21 additional women who were missing information on one or more covariates chosen a priori for final models, as described below. Distributions of air pollutant exposures did not substantially differ between women with missing and nonmissing covariate information. Air pollutant exposure estimation We predicted monthly ambient CO and PM2.5 exposures using two multivariable linear regression models that included terms for environmental characteristics, month, and year. We did not examine other criteria pollutants (ozone, nitrogen oxides, sulfur dioxide, and lead) because there were too few monitoring sites to construct exposure models (PSCAA 2007). The CO model has been previously described in detail (Rudra et al. 2010b). Our models were constructed using CO and PM2.5 measurements collected daily between 1996 and 2006 at monitoring sites administered by the PSCAA. The sites were located by the PSCAA using U.S. Environmental Protection Agency (EPA) criteria to ensure a consistent and representative measure of air quality (PSCAA 2007). We collapsed daily measurements into monthly average concentrations, resulting in 890 CO concentrations from 15 sites and 803 PM2.5 concentrations from 12 sites. For more site characteristics and a map of monitoring sites and participants’ residences, see Supplemental Material, Table 1, Figure 1 (doi:10.1289/ehp.1002947). We predicted monthly ambient CO and PM2.5 exposures using two multivariable linear regression models that included terms for environmental characteristics, month, and year. We did not examine other criteria pollutants (ozone, nitrogen oxides, sulfur dioxide, and lead) because there were too few monitoring sites to construct exposure models (PSCAA 2007). The CO model has been previously described in detail (Rudra et al. 2010b). Our models were constructed using CO and PM2.5 measurements collected daily between 1996 and 2006 at monitoring sites administered by the PSCAA. The sites were located by the PSCAA using U.S. Environmental Protection Agency (EPA) criteria to ensure a consistent and representative measure of air quality (PSCAA 2007). We collapsed daily measurements into monthly average concentrations, resulting in 890 CO concentrations from 15 sites and 803 PM2.5 concentrations from 12 sites. For more site characteristics and a map of monitoring sites and participants’ residences, see Supplemental Material, Table 1, Figure 1 (doi:10.1289/ehp.1002947). Air pollutant predictor variables We evaluated several characteristics as potential predictors of monthly average pollutant concentrations at the monitoring sites. We mapped and measured characteristics at each site using ArcMap (version 9.2; ESRI, Redlands, CA). We used 2001 traffic count data from the Washington Department of Transportation to estimate annual average traffic volume on the nearest major road (federal or state highway) within circular buffers with radii of 250 and 500 m (Washington State Department of Transportation 2002). We also measured distance to the nearest major road. 
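The distance and buffer-based measures described in this passage were derived in ArcMap; an equivalent open-source sketch using geopandas is shown below. The file paths and column names are hypothetical, and both layers are assumed to share a projected coordinate system with metre units so that distances and areas are meaningful.

```python
import geopandas as gpd

def site_road_measures(sites_path, roads_path, buffer_m=500):
    """Distance to the nearest major road and street density within a buffer.

    Hypothetical inputs: a point layer of monitoring sites (or residences)
    and a line layer of major roads, both in a metric projected CRS.
    """
    sites = gpd.read_file(sites_path)
    roads = gpd.read_file(roads_path)

    # Straight-line distance (m) from each site to the nearest road feature.
    sites["dist_to_major_road_m"] = sites.geometry.apply(
        lambda point: roads.geometry.distance(point).min()
    )

    # Street density: km of road per square km within the circular buffer.
    densities = []
    for point in sites.geometry:
        buffer_poly = point.buffer(buffer_m)
        clipped = roads.geometry.intersection(buffer_poly)
        road_km = clipped.length.sum() / 1_000.0
        area_km2 = buffer_poly.area / 1_000_000.0
        densities.append(road_km / area_km2)
    sites[f"street_density_{buffer_m}m"] = densities
    return sites
```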
We used Census 2000 TIGER line files to estimate street density (kilometers per square kilometer) within 100-, 250-, 500-, and 1,000-m buffers (U.S. Census Bureau 2002). As has been done in previous literature, we chose a priori several buffer sizes to allow examination of multiple spatial scales (e.g., Brauer et al. 2003, 2008; Jerrett et al. 2005). We used Census 2000 measures of population density (persons per square kilometer) and housing density (housing units per square kilometer) within each site’s census block group (U.S. Census Bureau 2002). We used monthly averages of daily high and low temperatures and precipitation collected at 31 weather stations (Western Regional Climate Center 2008). We used measurements at the nearest weather station; the average distance between each monitoring site and the nearest station was 6.8 km (range, 0.7–32 km). We used year and month terms to capture secular and seasonal variations in air pollutant concentrations. We evaluated several characteristics as potential predictors of monthly average pollutant concentrations at the monitoring sites. We mapped and measured characteristics at each site using ArcMap (version 9.2; ESRI, Redlands, CA). We used 2001 traffic count data from the Washington Department of Transportation to estimate annual average traffic volume on the nearest major road (federal or state highway) within circular buffers with radii of 250 and 500 m (Washington State Department of Transportation 2002). We also measured distance to the nearest major road. We used Census 2000 TIGER line files to estimate street density (kilometers per square kilometer) within 100-, 250-, 500-, and 1,000-m buffers (U.S. Census Bureau 2002). As has been done in previous literature, we chose a priori several buffer sizes to allow examination of multiple spatial scales (e.g., Brauer et al. 2003, 2008; Jerrett et al. 2005). We used Census 2000 measures of population density (persons per square kilometer) and housing density (housing units per square kilometer) within each site’s census block group (U.S. Census Bureau 2002). We used monthly averages of daily high and low temperatures and precipitation collected at 31 weather stations (Western Regional Climate Center 2008). We used measurements at the nearest weather station; the average distance between each monitoring site and the nearest station was 6.8 km (range, 0.7–32 km). We used year and month terms to capture secular and seasonal variations in air pollutant concentrations. Model fitting procedures Using a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant’s geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants’ monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1–15 to the current month and rounding days 16–31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively. 
If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date. The CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. Concentrations were highest in areas with third-quartile street density. The PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density. Using a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant’s geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants’ monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1–15 to the current month and rounding days 16–31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively. If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date. The CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. Concentrations were highest in areas with third-quartile street density. 
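The split-sample statistics quoted above can be produced along the lines of the sketch below. The exact splitting scheme used by the authors is not described here, so the 50/50 random split, the scikit-learn helper, and the site-by-month DataFrame are all assumptions of the illustration.

```python
import numpy as np
import statsmodels.formula.api as smf
from sklearn.model_selection import train_test_split

def split_sample_performance(site_months, formula, seed=0):
    """Split-sample R^2 and RMSE for an exposure-prediction formula.

    `site_months` is an assumed site-by-month DataFrame; `formula` is a
    statsmodels formula such as the CO specification described above.
    Assumes all categorical levels appear in both halves of the split.
    """
    train, test = train_test_split(site_months, test_size=0.5, random_state=seed)
    fit = smf.ols(formula, data=train).fit()

    outcome = formula.split("~")[0].strip()
    observed = test[outcome]
    predicted = fit.predict(test)

    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((observed - predicted) ** 2)))
    return r2, rmse
```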
The PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density. Exposure estimation For this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997. For this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). 
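The window arithmetic defined here, together with the month-rounding rule described earlier (days 1–15 stay in the current month, days 16–31 roll forward), can be made concrete as follows. The per-woman monthly exposure series and its Period index are assumptions of the sketch, not the study's data structure.

```python
import pandas as pd

def round_to_month(date):
    """Days 1-15 stay in the current month; days 16-31 roll to the next month."""
    period = pd.Period(date, freq="M")
    return period if date.day <= 15 else period + 1

def preeclampsia_windows(conception_month, monthly_exposure):
    """Average (or peak) exposure in the four preeclampsia windows above.

    conception_month  pandas Period for the rounded month of conception
    monthly_exposure  Series of predicted exposures indexed by monthly Period
    """
    peri = [conception_month + k for k in range(-3, 4)]   # 7 months around conception
    pre = [conception_month - k for k in (3, 2, 1)]       # 3 months before pregnancy
    post = [conception_month + k for k in range(4)]       # first 4 months of pregnancy

    return {
        "periconceptional": monthly_exposure.loc[peri].mean(),
        "preconceptional": monthly_exposure.loc[pre].mean(),
        "postconceptional": monthly_exposure.loc[post].mean(),
        "peak_periconceptional": monthly_exposure.loc[peri].max(),
    }

# For a conception month of 1997-04, `peri` spans January-July 1997 and
# `post` spans April-July 1997, matching the worked example given in the text.
```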
These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997. Outcome measurements We defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses. We defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses. Statistical analysis We used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. These gradients are roughly equivalent to the increments between deciles for both pollutants. 
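A sketch of the two model forms just described (quartile indicators and a continuous term reported per 0.1-ppm increment) is given below. It assumes a participant-level DataFrame with hypothetical column names and uses statsmodels; the a priori covariate set described in this section (age, race/ethnicity, prepregnancy BMI, nulliparity, season, year) is included for concreteness.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def preeclampsia_or_models(df):
    """Quartile-based and continuous CO exposure models for preeclampsia.

    `df` is an assumed participant-level DataFrame with hypothetical columns:
    preeclampsia (0/1), co_ppm, age, race, bmi, nulliparous, season, year.
    """
    df = df.copy()
    # Lowest quartile (Q1) is the reference category.
    df["co_quartile"] = pd.qcut(df["co_ppm"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

    adjust = "age + C(race) + bmi + nulliparous + C(season) + C(year)"

    quartile_fit = smf.logit(f"preeclampsia ~ C(co_quartile) + {adjust}",
                             data=df).fit(disp=False)
    continuous_fit = smf.logit(f"preeclampsia ~ co_ppm + {adjust}",
                               data=df).fit(disp=False)

    quartile_ors = np.exp(quartile_fit.params.filter(like="co_quartile"))
    or_per_0_1_ppm = np.exp(0.1 * continuous_fit.params["co_ppm"])
    ci_per_0_1_ppm = np.exp(0.1 * continuous_fit.conf_int().loc["co_ppm"])
    return quartile_ors, or_per_0_1_ppm, ci_per_0_1_ppm
```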
We evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000). We conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension. Finally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null. We used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. 
Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. These gradients are roughly equivalent to the increments between deciles for both pollutants. We evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000). We conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension. Finally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). 
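A minimal version of this classification step is sketched below. The DataFrame of per-address predictor values and the dictionary of observed ranges are hypothetical, with the road-distance range taken from the example just given.

```python
import pandas as pd

def out_of_range_flags(addresses, observed_ranges):
    """Flag women whose residential predictor values fall outside the range
    observed at the monitoring sites.

    addresses        assumed DataFrame of predictor values at each residence
    observed_ranges  dict of (low, high) ranges from the monitoring sites,
                     e.g. {"road_dist_km": (0.2, 1.3)} as in the example above
    """
    flags = pd.Series(False, index=addresses.index)
    for column, (low, high) in observed_ranges.items():
        flags |= (addresses[column] < low) | (addresses[column] > high)
    return flags

# Women with any True flag form the "one or more out-of-range predictors"
# subgroup used in the sensitivity analyses.
```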
Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null. Study population: We conducted a prospective analysis using data from the Omega Study, a pregnancy cohort study previously described in detail (Butler et al. 2004). Participants were women attending prenatal care clinics affiliated with Swedish Medical Center in Seattle and Tacoma General Hospital in Tacoma, Washington. Eligible women initiated prenatal care before 20 weeks of gestation. Women were ineligible if they were < 18 years of age, were non-English speakers, or did not plan to deliver at either hospital. Participants completed an interviewer-administered questionnaire soon after enrollment (mean ± SD gestational age, 15.9 ± 4.8 weeks). This questionnaire was used to collect information on sociodemographic, anthropometric, behavioral, medical, and reproductive characteristics. After delivery, study personnel abstracted information on the pregnancy course and outcome from medical records. Study procedures were approved by the institutional review boards of both hospitals. All participants provided written informed consent. We used data from women recruited between 1996 and 2006. During this period, 4,000 (79%) of 5,063 invited women enrolled in the study. Of these, 79 experienced early pregnancy losses, 68 moved or delivered elsewhere, and 169 were lost to follow-up due to unknown delivery outcome or a missing medical record. Of the 3,675 women who completed the study, we included 3,509 (95%) in this analysis. We excluded 145 participants for whom we were unable to accurately assess exposures: 121 had nongeocodable residential addresses, and 24 lived outside the study area of King, Kitsap, Pierce, and Snohomish counties. These participants were less likely, on average, to be nulliparous than women who remained in the analytic sample (51% vs. 60%) but were otherwise similar. We also excluded 21 additional women who were missing information on one or more covariates chosen a priori for final models, as described below. Distributions of air pollutant exposures did not substantially differ between women with missing and nonmissing covariate information. Air pollutant exposure estimation: We predicted monthly ambient CO and PM2.5 exposures using two multivariable linear regression models that included terms for environmental characteristics, month, and year. We did not examine other criteria pollutants (ozone, nitrogen oxides, sulfur dioxide, and lead) because there were too few monitoring sites to construct exposure models (PSCAA 2007). The CO model has been previously described in detail (Rudra et al. 2010b). Our models were constructed using CO and PM2.5 measurements collected daily between 1996 and 2006 at monitoring sites administered by the PSCAA. The sites were located by the PSCAA using U.S. Environmental Protection Agency (EPA) criteria to ensure a consistent and representative measure of air quality (PSCAA 2007). We collapsed daily measurements into monthly average concentrations, resulting in 890 CO concentrations from 15 sites and 803 PM2.5 concentrations from 12 sites. 
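Purely as an illustration of the monthly averaging step mentioned above, daily monitor readings could be collapsed into site-specific monthly means along these lines (a sketch with an assumed input file and column names, not the study's code):

```python
# Sketch: collapse daily monitoring data into monthly average concentrations
# per site. The file name and column names are assumed for illustration.
import pandas as pd

daily = pd.read_csv("daily_co_monitoring.csv", parse_dates=["date"])
daily["month"] = daily["date"].dt.to_period("M")
monthly = (daily.groupby(["site_id", "month"])["co_ppm"]
                .mean()
                .reset_index(name="co_monthly_ppm"))
```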
For more site characteristics and a map of monitoring sites and participants’ residences, see Supplemental Material, Table 1, Figure 1 (doi:10.1289/ehp.1002947). Air pollutant predictor variables: We evaluated several characteristics as potential predictors of monthly average pollutant concentrations at the monitoring sites. We mapped and measured characteristics at each site using ArcMap (version 9.2; ESRI, Redlands, CA). We used 2001 traffic count data from the Washington Department of Transportation to estimate annual average traffic volume on the nearest major road (federal or state highway) within circular buffers with radii of 250 and 500 m (Washington State Department of Transportation 2002). We also measured distance to the nearest major road. We used Census 2000 TIGER line files to estimate street density (kilometers per square kilometer) within 100-, 250-, 500-, and 1,000-m buffers (U.S. Census Bureau 2002). As has been done in previous literature, we chose a priori several buffer sizes to allow examination of multiple spatial scales (e.g., Brauer et al. 2003, 2008; Jerrett et al. 2005). We used Census 2000 measures of population density (persons per square kilometer) and housing density (housing units per square kilometer) within each site’s census block group (U.S. Census Bureau 2002). We used monthly averages of daily high and low temperatures and precipitation collected at 31 weather stations (Western Regional Climate Center 2008). We used measurements at the nearest weather station; the average distance between each monitoring site and the nearest station was 6.8 km (range, 0.7–32 km). We used year and month terms to capture secular and seasonal variations in air pollutant concentrations. Model fitting procedures: Using a stepwise procedure previously described in detail (Rudra et al. 2010b), we fit multivariable linear regression models to quantify the relationship between each environmental characteristic and monthly average CO and PM2.5 concentrations. After determining the final exposure models, we measured the same environmental characteristics at each participant’s geocoded residential address reported during the interview. We used model coefficients and these measurements to predict participants’ monthly air pollutant exposures. We predicted exposures within each calendar month of pregnancy after approximating both the conception and delivery dates to the nearest calendar month, by assigning days 1–15 to the current month and rounding days 16–31 to the next month. We measured date of conception using maternal report of the date of the last menstrual period (LMP) and ultrasound at ≤ 20 weeks of gestation. LMP and ultrasound information were gathered by interview and medical record abstraction, respectively. If both LMP and ultrasound-based dates agreed within 14 days, we used the former. Among 4% of participants with dates differing by > 14 days, we used the ultrasound-based date. The CO model included terms for year and month (indicators), street density within 500 m (quartile-based indicators), distance to the nearest major road (< 100, 101–1,000, > 1,000 m), and census tract population density (continuous). We previously reported that the model explained 73% of variation in CO concentrations (Rudra et al. 2010b). The root mean square error was 0.22 ppm (10% of the range of observed concentrations). The split-sample R2 was 0.71. 
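A sketch of the fit-and-predict step described above is shown below. It assumes a site-month table (`site_months`) and a participant table (`residences`) that already contain the land-use measurements, with hypothetical column names; the actual models were built with the stepwise procedure of Rudra et al. (2010b), so this is only illustrative.

```python
# Sketch (assumed tables and column names): regress monthly CO concentrations
# on year/month indicators plus land-use terms measured at monitoring sites,
# then predict exposures at participants' geocoded residences.
import statsmodels.formula.api as smf

formula = ("co_monthly_ppm ~ C(year) + C(month) + C(street_density_500m_q) "
           "+ C(road_distance_cat) + population_density")
exposure_model = smf.ols(formula, data=site_months).fit()

# `residences` has one row per participant-month with the same predictors,
# coded with the same categories as the monitoring-site data.
residences["co_pred_ppm"] = exposure_model.predict(residences)
```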
CO concentrations were highest in winter, in the earliest years of the study, and in densely populated areas and were inversely related to the distance to the nearest major road. Concentrations were highest in areas with third-quartile street density. The PM2.5 model included terms for year and month (indicators), street density within 250 m (tertile-based indicators), distance to the nearest major road (dichotomized at 2,000 m), and monthly averages of daily high temperatures and precipitation (quartile-based indicators). The model explained 47% of variance in regional PM2.5 concentrations. The split-sample R2 was 0.41, and the root mean square error was 2.5 μg/m3 (10% of the range). PM2.5 concentrations were highest in winter and in months with higher average temperature or precipitation; they were somewhat higher in earlier years but remained stable from 2003 onward. PM2.5 concentrations were inversely related to the distance to the nearest major road and, unexpectedly, to street density. Exposure estimation: For this analysis we defined a priori four exposure windows for each outcome of interest. For preeclampsia, exposure windows were periconceptional (the 7 months surrounding the month of conception), preconceptional (the average of the monthly concentrations in 3 months before pregnancy), postconceptional (the first 4 months of pregnancy, before preeclampsia can be diagnosed), and the peak monthly concentration in the 7-month periconceptional period (the month with the highest average concentration within that period). These windows were chosen to preclude exposures that may have occurred after diagnosis and to allow us to separately examine preconceptional exposures, which we hypothesized could increase maternal systemic oxidative stress or inflammation, and postconceptional exposures, which could adversely affect placentation (Brook 2008; Roberts et al. 2003). We chose a 7-month periconceptional window in order to have a roughly symmetric exposure period surrounding conception and to avoid exposures that could potentially occur after diagnosis. For preterm delivery, we examined average exposures within the first and second trimesters, the last 3 months of pregnancy, and the month of delivery. These windows were chosen to provide as direct comparisons as possible with the prior literature (Brauer et al. 2008; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007). The first- and second-trimester exposure windows were calculated based on the month of conception (months 1–3 and 4–6, respectively). The window based on the last 3 months of pregnancy was calculated by counting back from and including the month of delivery. A woman who conceived in April 1997 and delivered in December 1997 would have the following exposure windows: preconceptional, January–March; postconceptional, April–July; periconceptional, January–July; first trimester, April–June; second trimester, July–September; and third trimester, October–December 1997. Outcome measurements: We defined preeclampsia according to ACOG criteria using data obtained from medical records. The criteria are sustained pregnancy-induced hypertension (≥ 140/90 mmHg) and proteinuria (urine protein concentrations of 30 mg/dL or 1+ on two or more urine dipsticks) (ACOG 2002). We defined preterm delivery as a pregnancy lasting < 37 completed weeks of gestation. In secondary analyses, we examined length of gestation measured in days as an outcome, using the same exposure windows as in our preterm delivery analyses. 
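To make the exposure-window definitions concrete, here is a small sketch (not the study code) that applies the calendar-month approximation described earlier (days 1-15 assigned to the current month, days 16-31 to the next) and derives the windows; run on the worked example in the text (conception April 1997, delivery December 1997), it reproduces the listed windows.

```python
# Sketch: round dates to calendar months and build the exposure windows
# described above. Uses pandas Period arithmetic; dates are illustrative.
import pandas as pd

def to_month(date):
    p = pd.Period(date, freq="M")
    return p + 1 if date.day >= 16 else p  # days 16-31 roll to the next month

def exposure_windows(conception_date, delivery_date):
    c, d = to_month(conception_date), to_month(delivery_date)
    return {
        "preconceptional": (c - 3, c - 1),   # 3 months before pregnancy
        "postconceptional": (c, c + 3),      # first 4 months of pregnancy
        "periconceptional": (c - 3, c + 3),  # 7 months surrounding conception
        "first_trimester": (c, c + 2),
        "second_trimester": (c + 3, c + 5),
        "last_3_months": (d - 2, d),         # counting back from delivery month
    }

# Worked example from the text: conceived April 1997, delivered December 1997.
print(exposure_windows(pd.Timestamp("1997-04-10"), pd.Timestamp("1997-12-10")))
# The peak periconceptional month would be the month with the highest
# predicted concentration within the periconceptional range.
```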
Statistical analysis: We used multivariable logistic regression to model the relationships [odds ratios (ORs) and 95% confidence intervals (CIs)] between each pregnancy outcome and air pollutant exposure. Our alpha (type 1 error) level was 0.05. We used linear regression for models of length of gestation. For our primary analyses, we categorized air pollutant concentrations according to distribution quartiles. We also modeled air pollutant concentrations as continuous exposures. For these analyses, we report ORs associated with a 0.1-ppm increase in CO and a 0.5-μg/m3 increase in PM2.5. These gradients are roughly equivalent to the increments between deciles for both pollutants. We evaluated the following characteristics, reported in interviews, as potential confounders: maternal age, nulliparity (no prior live births), prepregnancy body mass index (BMI), race/ethnicity, education, employment in early pregnancy, marital status, household income, smoking before and during pregnancy, usual hours/week of secondhand smoke exposure in and outside of the home in the year before pregnancy, regular recreational physical activity before and during pregnancy, and histories of asthma, diabetes, and chronic hypertension. We also evaluated year and month of conception as confounders because of temporal changes in both pollutant concentrations and preeclampsia risk (PSCAA 2007; Rudra and Williams 2005). A priori, we chose to include maternal age, race/ethnicity, prepregnancy BMI, and nulliparity as covariates because of their confounding effects in previous analyses of Omega Study data. Season and year were also included in multivariable models because of their influence on preeclampsia ORs. We found no evidence of confounding by other covariates using a 10% difference in magnitude between crude and adjusted ORs as the criterion for confounding. We found no evidence of poor model fit or influential observations after performing the following regression diagnostics: the Hosmer–Lemeshow test, comparison of observed and predicted outcomes within exposure and covariate strata, examination of Pearson and deviance residual plots, and examination of leverage plots (Hosmer and Lemeshow 2000). We conducted several stratified analyses to evaluate the possibility of interaction by characteristics chosen a priori because of their strong relationships with either the outcomes or exposures. We stratified participants according to maternal age (< 35 or ≥ 35 years), parity (no vs. any prior live births), prepregnancy BMI (< 25 or ≥ 25 kg/m2), ever smoking or secondhand smoke exposure (neither vs. either), and employment in early pregnancy (used as a surrogate for time spent at home). We evaluated interaction by comparing ORs across strata; we defined interaction as a difference of > 20% in the magnitude of at least two of four category ORs. We also fitted models after excluding women with self-reported or medical record–indicated histories of prepregnancy diabetes or hypertension. Finally, we conducted sensitivity analyses to examine the impact of extrapolation of the environmental characteristic predictors used in the air pollutant exposure models. For both CO and PM2.5, we classified women according to whether one or more values of the environmental characteristics used in the exposure estimation model were out of the range of values observed at the monitoring sites. 
For example, although the distance from each CO monitoring site to the nearest major road ranged from 0.2 to 1.3 km, the range of values at Omega participants’ addresses was much broader (0.01–92 km). Furthermore, a small proportion of women (≤ 4% for each exposure window) had one or more CO exposure estimates outside the range of the concentrations observed in the region. All predicted PM2.5 exposures fell within the range of observed concentrations. We hypothesized that our exposure estimates might be less accurate for women with one or more extrapolated values and that this inaccuracy might bias our estimates of association toward the null. Results: Median [interquartile range (IQR)] CO and PM2.5 concentrations observed at monitoring sites during the study period were 1.08 (0.80–1.38) ppm and 10.0 (8.3–12.4) μg/m3, respectively [Supplemental Material, Table 1 (doi:10.1289/ehp.1002947)]. Among study participants, median (IQR) predicted CO and PM2.5 exposures in the periconceptional period were 0.80 (0.59–1.02) ppm and 10.1 (8.7–11.4) μg/m3, respectively. By design, peak CO and PM2.5 exposures were higher [median (IQR): peak CO, 1.05 (0.85–1.27) ppm; peak PM2.5, 14.2 (12.6–15.1) μg/m3]. Distributions in the other six windows were similar to those in the periconceptional window: median CO concentrations ranged from 0.79 to 0.80 ppm, and median PM2.5 concentrations ranged from 9.5 to 12.6 μg/m3. The difference between distributions of observed CO concentrations and participants’ predicted exposures was due primarily to the fact that CO monitoring was more pervasive in the earlier years of the study (PSCAA 2007). CO and PM2.5 exposures were moderately correlated: Pairwise correlation coefficients ranged from 0.25 to 0.45 within exposure windows. Table 1 shows distributions of selected participants’ characteristics and periconceptional CO and PM2.5 distributions within categories of those characteristics. Distributions for other exposure windows were similar. Participants were predominantly white, married, well educated, and nonsmokers. Predicted CO exposures were inversely associated with age, educational attainment, and household income and were higher among black or Hispanic women than among white women, unmarried women, and those who reported smoking or secondhand smoke exposure. Predicted CO exposures also declined sharply over the study period and were highest in women conceiving in winter. Although predicted PM2.5 exposures were generally less strongly related to maternal characteristics, they were also highest among black women, women with lower educational attainment or income, and those who conceived in winter. A total of 117 women (3.3%) developed preeclampsia. Table 2 shows unadjusted and adjusted associations between periconceptional air pollutant exposures and preeclampsia. Because adjustment for year of conception strongly influenced the estimated associations, we show adjusted models without (model 2) and with (model 3) inclusion of year as a covariate. Fourth-quartile periconceptional CO concentrations were significantly associated with preeclampsia after adjustment for maternal characteristics and season of conception (fourth vs. first quartile: OR = 2.08; 95% CI, 1.16–3.72). However, this association became nonsignificant and inverted after further adjustment for year of conception (OR = 0.75; 95% CI, 0.42–1.64). 
Similarly, although each 0.1-ppm increase in CO was significantly associated with preeclampsia after adjustment for maternal characteristics and season of conception (OR = 1.07; 95% CI, 1.02–1.13), the association disappeared after further adjustment for year of conception (OR = 0.98; 95% CI, 0.91–1.06). The association between preeclampsia and periconceptional PM2.5 exposure was nonsignificant and weaker than for CO (fourth vs. first quartile OR, adjusted for maternal characteristics and season of conception, 1.63; 95% CI, 0.79–3.38). This association weakened somewhat after further adjustment for year of conception (OR = 1.41; 95% CI, 0.63–3.18). Unadjusted and adjusted associations with CO and PM2.5 in the preconceptional and postconceptional windows and the periconceptional month of peak exposure were similar to those in the periconceptional window (data not shown; tables of these and other secondary analyses are available from the authors upon request). For example, in models of each of these exposure windows, unadjusted for year of conception, ORs for each 0.1-ppm CO increase were all 1.07. The incidence of preterm delivery was 10.5% (369 cases). Predicted air pollutant exposures in the last 3 months of pregnancy were not strongly or significantly associated with preterm delivery, and year of conception did not materially influence estimates of association (Table 3). For instance, fourth- versus first-quartile ORs were 0.88 (95% CI, 0.59–1.31) for CO exposure and 0.81 (95% CI, 0.49–1.34) for PM2.5 exposure in fully adjusted models. ORs for CO and PM2.5 exposures in the first and second trimesters and the last month of pregnancy were similar (data not shown). Multivariable linear regression analysis using gestational age at delivery as the outcome also produced similar results: We found no statistically significant or strong associations with exposures to either air pollutant (data not shown). Analyses of models stratified by maternal age, parity, prepregnancy BMI, employment, and smoking or secondhand smoke exposure did not provide evidence that any of these characteristics modified associations between air pollutant exposures and either outcome (data not shown). CO–preeclampsia associations were robust to exclusion of women with prepregnancy diabetes (n = 105) or hypertension (n = 115; data not shown). We examined the sensitivity of our results to participants’ residential values of environmental characteristics used as predictors in our air pollutant exposure models. A total of 1,930 women (55%) resided in an area at which one or more CO model predictors were outside the range observed at monitoring sites. Proportions for individual predictors were 7.9% for population density, 9.2% for street density, and 47.3% for distance to the nearest major road (only 0.1% lived < 0.2 km from a major road; 47.2% lived > 1.3 km). For 904 participants (25.8%), the street density measure used in the PM2.5 model was higher (24.2%) or lower (1.6%) than the range observed at monitoring sites. Associations between preeclampsia and air pollutant exposures were slightly stronger in the subset of women with no out-of-range model predictors. For example, the OR for a 0.1-ppm increase in periconceptional CO exposure was 1.10 (95% CI, 0.98–1.23) among 1,579 women with no out-of-range model predictors and 1.06 (95% CI, 0.99–1.12) among 1,930 women with one or more out-of-range predictors (ORs were adjusted for maternal characteristics and season but not year). 
Associations with preterm delivery also were similar between the two subgroups (data not shown). Finally, exclusion of the small proportion of women with predicted CO exposures outside the range of concentrations observed in the area (between 44 and 101 observations for each of the CO exposure windows) did not materially affect our results (data not shown). Discussion: We found little evidence of strong associations between either predicted CO or PM2.5 exposure and preterm delivery risk in this setting. Although point estimates suggested increased risk of preeclampsia associated with PM2.5 exposure, CIs were wide. We found statistically significant, positive associations between preeclampsia and CO in all examined exposure windows. However, these associations disappeared after adjustment for year of conception. We debated whether adjustment for year of conception was appropriate when examining the association between CO exposure and preeclampsia. CO concentrations in the four counties studied here declined 70 ppb/year (95% CI, 63–77 ppb/year), on average, between 1996 and 2006. Regional concentrations of PM2.5 and other criteria air pollutants remained stable or declined much less markedly (PSCAA 2007). The incidence of preeclampsia in our cohort declined over time independently of changes in maternal age, race/ethnicity, parity, prepregnancy BMI, and smoking history (adjusted decline in incidence, 5.3 cases/1,000 per year; 95% CI, 3.0–7.6/1,000 per year). No other covariate reported in Table 1 explained the secular decrease in incidence (data not shown). Because we consistently applied ACOG criteria to medical record information rather than relying on physician diagnosis, changes in diagnostic criteria cannot explain the decline in incidence. If regional CO declines were causally related to declines in preeclampsia incidence, adjustment for year of conception would be inappropriate. However, we cannot discount the possibility that secular change in some other risk factor may explain the decline in preeclampsia incidence. Adjustment for calendar year showed that differences in CO exposures among participants who conceived in the same year were not associated with preeclampsia risk. However, year of conception and CO exposures were strongly correlated in this population. More than half (52%) of participants with fourth-quartile periconceptional CO exposure conceived in the first 3 years of the study, whereas 43% of those with first-quartile exposure conceived in the last 2 years. As previously described in detail, the CO model captured the fact that most CO variation in this setting was temporal rather than spatial (Rudra et al. 2010b). The lesser spatial CO variability limited our ability to examine spatial contrasts and may also explain the lack of association after adjustment for year of conception. There have been few studies of preeclampsia in relation to air pollutant exposures. We previously reported suggestive but nonsignificant associations between CO and preeclampsia among the subset of 1,699 Omega Study participants who enrolled between 1996 and 2002 (Rudra and Williams 2006). The third- versus first-tertile periconceptional CO-adjusted OR was 1.73 (95% CI, 0.91–3.27), and PM2.5 exposures were not strongly related to preeclampsia. Similarly, Woodruff et al. (2008) reported positive associations between nearest-monitor CO concentrations and preeclampsia in an analysis of approximately 2.3 million California residents delivering between 1996 and 2004 (fourth vs. 
first quartile: adjusted OR = 1.08; 95% CI, 1.02–1.13; quartile cut-points not reported). They adjusted this estimate of association for year, season, and other characteristics. They did not observe associations with PM2.5 exposures. Van den Hooven et al. (2009) observed no association between residential proximity to traffic (a proxy of air pollutant exposure) and preeclampsia risk in 7,339 Rotterdam residents enrolled in a population-based cohort. Most recently, Wu et al. (2009) reported that traffic-generated PM2.5 exposures predicted by dispersion models were significantly associated with increased preeclampsia risk [fourth vs. first quartile: adjusted OR = 1.42; 95% CI, 1.26–1.59; mean (IQR) PM2.5 exposures, 1.82 (1.35) μg/m3]. Traffic-generated nitrogen oxides were similarly associated (fourth vs. first quartile: adjusted OR = 1.33; 95% CI, 1.18–1.48). They did not examine CO exposures. Given the similarities between preeclampsia and atherosclerotic cardiovascular disease (Kaaja and Greer 2005), many of the hypothesized mechanisms by which air pollutants increase cardiovascular risk such as inflammation, oxidative stress, and endothelial dysfunction (Brook 2008) may also apply to preeclampsia. Furthermore, hypoxia at the fetal–maternal interface secondary to impaired placentation has been hypothesized to cause dissemination of free radicals that trigger preeclampsia in susceptible women (Roberts et al. 2003). Inhaled CO may also induce hypoxia because it displaces oxygen from hemoglobin, forming carboxyhemoglobin (Scherer 2006). Even fairly low maternal carboxyhemoglobin concentrations can reduce oxygen transfer across the placenta (Hsia 1998). We previously reported that maternal blood carboxyhemoglobin in early pregnancy was associated with preeclampsia risk among Omega participants, although only within the subgroup of women with prior live births (> 1% vs. 0.7% carboxyhemoglobin: adjusted OR = 4.1; 95% CI, 1.3–12.9; parity interaction p-value = 0.01) (Rudra et al. 2010a). In contrast, the association we observed in the present study between CO and preeclampsia exposure was not modified by parity. The literature on CO, PM2.5, and preterm delivery provides mixed results and no strong evidence of specific exposure windows (Brauer et al. 2008; Darrow et al. 2009; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007; Wu et al. 2009; Zeka et al. 2008). Two studies of the topic were conducted in Vancouver, British Columbia, an area with an airshed similar to that of the Puget Sound region (Brauer et al. 2008; Liu et al. 2003). Among about 230,000 women delivering between 1987 and 1998, Liu et al. (2003) observed increased preterm delivery associated with each 1.0-ppm increase in CO exposure in the last month of pregnancy (adjusted OR = 1.08; 95% CI, 1.01–1.15) but not the first month (adjusted OR = 0.95; 95% CI, 0.89–1.01).They estimated CO exposure as the average within the study region and did not examine PM2.5. Brauer et al. (2008) used inverse-distance weighting to predict CO and PM2.5 exposures for approximately 70,000 women delivering between 1999 and 2002. Each 1-μg/m3 increase in PM2.5 exposure over the entire pregnancy was associated with a 6% increase in the odds of preterm delivery (OR = 1.06; 95% CI, 1.01–1.11). CO exposure was not associated with delivery at < 37 weeks of gestation but was associated with delivery at < 30 weeks of gestation [adjusted OR, per 100-μg/m3 (~ 0.09 ppm) increase, 1.16; 95% CI, 1.01–1.33]. 
They found no exposure windows of greater or lesser relevance. We used prediction models containing month, year, and land use terms to capture both spatial and temporal variations in CO and PM2.5 exposures. A strength of these models was the opportunity they provided to examine multiple exposure windows for both outcomes. As measured by R2 statistics, these models performed similarly to others described in the literature (Brauer et al. 2003; Clougherty et al. 2008; Henderson et al. 2007; Moore et al. 2007). We relied on local air monitoring data in designing our models rather than collecting data for the purposes of this study. Although this approach is cost-effective and convenient, the monitoring sites may not have been optimally located for the purposes of individual exposure prediction. CO sites, particularly, were often closer to major roads or in denser areas than were many participants’ residences. However, our results were robust to exclusion of women who lived in areas that differed substantially from monitoring sites with respect to model predictors. We previously reported that our CO model–based exposure estimates were moderately correlated with contemporaneously measured whole-blood carboxyhemoglobin concentrations in this study population (Rudra et al. 2010b). There is no known biomarker of PM2.5 exposure, and because of budgetary constraints we were unable to validate our PM2.5 model against independently collected air quality measurements. The R2 for the PM2.5 model was lower than that for the CO model, primarily because year, traffic, and street density measures were less strongly predictive of PM2.5 than of CO. Research has shown that errors in air pollutant exposure estimates such as those used here have both Berkson-like and classical-like components, and that it is theoretically possible that these errors are biased away from the null (Szpiro et al. 2010). However, simulation studies using data from a network of a small number of monitors have found bias toward the null and wider CIs (Kim et al. 2009). Thus, we believe that limitations of our estimation method may have resulted in misclassification of PM2.5 and CO exposures toward the null. The misclassification of PM2.5 exposures may have masked associations with preeclampsia and preterm delivery. Errors and uncertainties in both CO and PM2.5 models may have also arisen from imprecision in geocoding, a limited number of locations (none of which were participants’ locations), rounding of exposure windows to the nearest month (which also hampered our ability to differentiate among exposure windows), and inaccurate land use measures. Because we based exposures on self-reported residential address, errors may have also arisen if participants moved soon before or after the interview; longer exposure windows may have been more greatly affected by this source of misclassification. Although employment status, used as a rough surrogate for time spent at home, did not influence our findings, we were not able to capture exposures experienced at other locations and during commuting. Additionally, our models did not capture exposures arising from indoor sources, although we were able to evaluate self-reported secondhand smoke exposure as a potential confounder. If missing nonambient exposures were unrelated to ambient exposures, they are unlikely to have affected our estimates of association (Sheppard et al. 2005). 
Most previous studies of air pollution and pregnancy outcomes have been based on birth registries or hospital databases (Brauer et al. 2008; Darrow et al. 2009; Liu et al. 2003; Parker et al. 2008; Ritz et al. 2007; Wilhelm and Ritz 2005; Wu et al. 2009; Zeka et al. 2008), with at least two exceptions (Ritz et al. 2007; van den Hooven et al. 2009). Our ability to rely on data from an epidemiologic cohort was a significant strength, because we were able to evaluate many characteristics that would not have been available from birth records as potential confounders and effect modifiers. Furthermore, abstraction of medical record data to determine outcomes strengthened our study by precluding limitations arising from inaccuracies in gestational age and underreporting of preeclampsia in birth records (Lydon-Rochelle et al. 2005; Martin 2007). However, reliance on cohort data resulted in fewer outcomes and less precise estimates than we would have been able to obtain with a larger regional registry. Because of limited medical record data on preeclampsia severity, we could not evaluate the relationships between preeclampsia severity or gestational age at onset and air pollutant exposures. However, in post hoc analyses we observed that associations were similar between preeclampsia associated with term delivery (n = 72) and preeclampsia associated with preterm delivery (n = 45; data not shown). The Omega Study’s high participation and retention rates reduced the likelihood that selection bias and missing data influenced our estimates of association. Study participants had higher income and educational attainment, on average, than did Washington State pregnant women (Rudra and Williams 2005). Participants were also more likely to be white and non-Hispanic. Therefore, these results may not be generalizable to other demographic groups, particularly those with higher risks of these outcomes. Although CO concentrations were quite high at the start of the study period (the region attained U.S. EPA compliance in 1996), they declined rapidly. PM2.5 concentrations were moderate compared with other U.S. cities; most of the region was in compliance with the current U.S. EPA standard throughout the study period (PSCAA 2007). These results may not be generalizable to women living in areas with markedly different air pollutant concentrations. Future studies of these outcomes in high-pollutant settings would be useful contributions to the literature. Conclusions: We found strong positive associations between CO and preeclampsia risk. Because both factors declined during the study period, we cannot exclude the possibility that secular changes in a preeclampsia risk factor unidentified in this study may explain the association. Our results do not provide evidence that PM2.5 and CO strongly influence risks of preterm delivery and preeclampsia among western Washington State women.
Background: Preterm delivery and preeclampsia are common adverse pregnancy outcomes that have been inconsistently associated with ambient air pollutant exposures. Methods: We used data from 3,509 western Washington women who delivered infants between 1996 and 2006. We predicted ambient CO and PM2.5 exposures using regression models based on regional air pollutant monitoring data. Models contained predictor terms for year, month, weather, and land use characteristics. We evaluated several exposure windows, including prepregnancy, early pregnancy, the first two trimesters, the last month, and the last 3 months of pregnancy. Outcomes were identified using abstracted maternal medical record data. Covariate information was obtained from maternal interviews. Results: Predicted periconceptional CO exposure was significantly associated with preeclampsia after adjustment for maternal characteristics and season of conception [adjusted odds ratio (OR) per 0.1 ppm=1.07; 95% confidence interval (CI), 1.02-1.13]. However, further adjustment for year of conception essentially nullified the association (adjusted OR=0.98; 95% CI, 0.91-1.06). Associations between PM2.5 and preeclampsia were nonsignificant and weaker than associations estimated for CO, and neither air pollutant was strongly associated with preterm delivery. Patterns were similar across all exposure windows. Conclusions: Because both CO concentrations and preeclampsia incidence declined during the study period, secular changes in another preeclampsia risk factor may explain the association observed here. We saw little evidence of other associations with preeclampsia or preterm delivery in this setting.
null
null
11,148
276
[ 5028, 368, 187, 282, 492, 352, 96 ]
11
[ "co", "exposure", "concentrations", "exposures", "pm2", "women", "pregnancy", "month", "preeclampsia", "study" ]
[ "women initiated prenatal", "conception maternal report", "pregnancy outcomes based", "study pregnancy cohort", "omega study pregnancy" ]
null
null
null
[CONTENT] air pollution | carbon monoxide | fine particulate matter | preeclampsia | pregnancy preterm delivery [SUMMARY]
[CONTENT] air pollution | carbon monoxide | fine particulate matter | preeclampsia | pregnancy preterm delivery [SUMMARY]
[CONTENT] air pollution | carbon monoxide | fine particulate matter | preeclampsia | pregnancy preterm delivery [SUMMARY]
[CONTENT] air pollution | carbon monoxide | fine particulate matter | preeclampsia | pregnancy preterm delivery [SUMMARY]
null
null
[CONTENT] Adult | Air Pollutants | Carbon Monoxide | Cohort Studies | Female | Humans | Maternal Exposure | Particulate Matter | Pre-Eclampsia | Pregnancy | Premature Birth | Prospective Studies | Risk Factors | Surveys and Questionnaires | Washington | Young Adult [SUMMARY]
[CONTENT] Adult | Air Pollutants | Carbon Monoxide | Cohort Studies | Female | Humans | Maternal Exposure | Particulate Matter | Pre-Eclampsia | Pregnancy | Premature Birth | Prospective Studies | Risk Factors | Surveys and Questionnaires | Washington | Young Adult [SUMMARY]
[CONTENT] Adult | Air Pollutants | Carbon Monoxide | Cohort Studies | Female | Humans | Maternal Exposure | Particulate Matter | Pre-Eclampsia | Pregnancy | Premature Birth | Prospective Studies | Risk Factors | Surveys and Questionnaires | Washington | Young Adult [SUMMARY]
[CONTENT] Adult | Air Pollutants | Carbon Monoxide | Cohort Studies | Female | Humans | Maternal Exposure | Particulate Matter | Pre-Eclampsia | Pregnancy | Premature Birth | Prospective Studies | Risk Factors | Surveys and Questionnaires | Washington | Young Adult [SUMMARY]
null
null
[CONTENT] women initiated prenatal | conception maternal report | pregnancy outcomes based | study pregnancy cohort | omega study pregnancy [SUMMARY]
[CONTENT] women initiated prenatal | conception maternal report | pregnancy outcomes based | study pregnancy cohort | omega study pregnancy [SUMMARY]
[CONTENT] women initiated prenatal | conception maternal report | pregnancy outcomes based | study pregnancy cohort | omega study pregnancy [SUMMARY]
[CONTENT] women initiated prenatal | conception maternal report | pregnancy outcomes based | study pregnancy cohort | omega study pregnancy [SUMMARY]
null
null
[CONTENT] co | exposure | concentrations | exposures | pm2 | women | pregnancy | month | preeclampsia | study [SUMMARY]
[CONTENT] co | exposure | concentrations | exposures | pm2 | women | pregnancy | month | preeclampsia | study [SUMMARY]
[CONTENT] co | exposure | concentrations | exposures | pm2 | women | pregnancy | month | preeclampsia | study [SUMMARY]
[CONTENT] co | exposure | concentrations | exposures | pm2 | women | pregnancy | month | preeclampsia | study [SUMMARY]
null
null
[CONTENT] ors | exposure | analyses | values | pregnancy | prepregnancy | confounding | pollutant | observed | range [SUMMARY]
[CONTENT] co | 95 ci | ci | women | pm2 | exposures | 95 | periconceptional | associations | data shown [SUMMARY]
[CONTENT] preeclampsia | risk | preeclampsia risk | study | co | period exclude possibility | delivery preeclampsia | risk factors declined study | found strong | found strong positive [SUMMARY]
[CONTENT] co | exposure | women | pm2 | concentrations | exposures | preeclampsia | pregnancy | study | month [SUMMARY]
null
null
[CONTENT] 3,509 | Washington | between 1996 and 2006 ||| CO ||| year, month ||| first | two | the last month | the last 3 months ||| ||| Covariate [SUMMARY]
[CONTENT] CO | preeclampsia | season | 0.1 | ppm=1.07 | 95% | CI | 1.02 ||| year | 95% | CI | 0.91-1.06 ||| preeclampsia | CO ||| [SUMMARY]
[CONTENT] preeclampsia incidence ||| preeclampsia [SUMMARY]
[CONTENT] ||| 3,509 | Washington | between 1996 and 2006 ||| CO ||| year, month ||| first | two | the last month | the last 3 months ||| ||| Covariate ||| ||| CO | preeclampsia | season | 0.1 | ppm=1.07 | 95% | CI | 1.02 ||| year | 95% | CI | 0.91-1.06 ||| preeclampsia | CO ||| ||| preeclampsia incidence ||| preeclampsia [SUMMARY]
null
Characteristics and treatment outcomes of HIV infected elderly patients enrolled in Kisii Teaching and Referral Hospital, Kenya.
34394214
A better understanding of the baseline characteristics of elderly people living with HIV/AIDS (PLWHA) is relevant because the world's HIV population is ageing.
BACKGROUND
We retrospectively evaluated temporal trends in CD4 levels, viral load change and baseline demographic characteristics in the electronic records at the hospital, using a mixed error-component model, for 1329 PLWHA attending the clinic between January 2008 and December 2019.
METHODS
Findings showed a significant difference in the comparison between baseline VL and WHO AIDS staging (p=0.026). Overall, VL levels decreased significantly over the period by WHO AIDS staging (p<0.0001). Significant differences were observed by gender (p<0.0001), across age groups (p<0.0001) and by baseline CD4 counts (p=0.003). There were significant differences in WHO staging by CD4 count >200 cells/mm3 (p=0.048) and by residence (p=0.001).
RESULTS
Age, WHO AIDS staging, gender and residence are relevant parameters associated with viral load decline and CD4 count in elderly PLWHA. Noticeable VL suppression was attained, confirming that VL suppression is attainable among PLWHA under clinical care.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "CD4 Lymphocyte Count", "Cross-Sectional Studies", "Female", "HIV Infections", "Hospitals, Teaching", "Humans", "Kenya", "Male", "Middle Aged", "Retrospective Studies", "Treatment Outcome", "Viral Load" ]
8351854
Introduction
Administration of antiretroviral therapy (ART) among human immunodeficiency virus (HIV) infected patients has led to increased survival rates among PLWHA.1 It is projected that by 2040, the number of PLWHA aged ≥50years will have grown by nearly three times in Sub-Saharan Africa (SSA) from an estimated 3.1 million in 2011 to 9.1 million2. With increasing access to ART across populations, there will be a need for long-term ART care.3,4 Previous studies from developed countries have shown distinct characteristics at diagnosis and clinical outcomes among PLWHA aged ≥50years compared to younger PLWHA.5 There are few research findings in SSA on the indicative features of HIV infection in PLWHA aged ≥50years receiving ART.6,7,8 Additionally, the restricted number studies that portray baseline indicative features, immunological response and mortality in elderly PLWHA have a limitation of small sample sizes.9,10 On account of this immediate information gap, furnishing supplementary data on the baseline indicative features of HIV infection among elderly PLWHA in sub-Saharan Africa is necessary. This study reports on the baseline characteristics of elderly PLWHA in Kenya which has not been addressed by previous studies. The purpose of our investigation was to examine baseline VL and CD4 count among people aged ≥50 years at the time of recruitment and identify disparities, if any, by gender, age group, current residence, WHO staging, patient source, and marital status in patients attending HIV/AIDS clinic at KTRH.
null
null
Results
Baseline CD4 counts, VL measurements and demographic characteristics of the study population were summarised (Table 1). Demographic Characteristics, CD4 Count and HIV Viral Load of PLWHA First CD4 and VL counts were defined as the first available value in the database. Comparison of baseline characteristics by gender was determined (Table 2). Statistically significant differences were found in gender and median age (p<0.0001) and in gender and age groups (P<0.0001). Statistically significant differences were found in gender and CD4 categories (p=0.003). Comparisons Of Baseline Characteristics By Gender The p-value represents the comparison of the population baseline characteristics by gender. Comparison of baseline characteristics by baseline CD4 cell count was determined (Table 3). Statistically significant differences were found in baseline CD4 count and WHO staging (P=0.048). Statistically significant differences were also found in baseline CD4 count by gender (P=0.003). Comparison Of Baseline Characteristics By CD4 Count P-value represents the comparison of the population baseline characteristics by CD4 count. Comparison of baseline characteristics by WHO AIDS Staging was determined (Table 4). Statistically significant differences were found in WHO staging by baseline CD4 count categories (P=0.048). Statistically significant differences were found in WHO staging by residence (P=0.001) and WHO staging by baseline VL (P=0.026). Comparison of Baseline Characteristics by WHO AIDS Staging The p-value represents the comparison of populations by WHO AIDS staging. Comparison of the baseline characteristics by VL change was determined (Table 5). Statistically significant differences in changes in VL were found by WHO staging (p<0.0001). Comparison Of Baseline Characteristics by Change in Viral Load
Conclusion and recommendations
This study uncovered unavoidable indicative features (age, WHO AIDS stage, gender and residence) as factors associated with CD4 count and VL decline. Therefore, CD4 count, VL, age, WHO AIDS stage, gender and residence should be utilised in solving the health care challenges associated with elderly PLWHA. Additionally, a noticeable VL suppression was attained during the study period, confirming possible attainment of VL suppression among elderly PLWHA under clinical care.
[ "Study Area", "Study Design and Data Collection", "Demographics", "Ethical considerations", "Statistical analysis" ]
[ "The study was conducted at HIV care clinics of KTRH.", "This study was a cross-sectional retrospective study using secondary electronic data obtained from HIV patients visiting an HIV/AIDS clinic at KTRH. Data on baseline CD4 count, baseline VL, last reported VL, WHO status, age, gender, marital status, the point at which individual patient source and residence was collected retrospectively from the electronic data maintained at KTRH.", "Age was categorised as 50–59/60–69/70–79/≥80 years. Age was determined based on the age of the individual at the time of enrolment to the clinic. The current residence was categorised as urban or rural as previously defined.11 Gender was categorised as male or female while marital status was married/cohabiting, divorced or widowed. Patient Source was categorised as an outpatient, Prevention of mother-to-child transmission (PMTCT), Tuberculosis (TB), inpatient or Voluntary Counselling and Testing (VCT) clinics while WHO AIDS Staging was stage I/II/III/IV.\nCD4 count and viral load: The baseline CD4 cell counts were placed in four categories as previously described,12,13,14 while VLs were classified into two guided by the previous related study.15 All undetectable VL values reported during the study period were replaced with 200 copies/mL following the CDC guideline.16", "University of Eastern Africa, Baraton research ethics committee, Ministry of Health, Ministry of Education and National Commission for Science, Technology and Innovation of Kenya approved this study.", "Descriptive statistics were used to examine baseline CD4 count, VL and demographic characteristics. Continuous measures were compared using the Kruskal-Wallis test. The Wilcoxon signed-rank test was performed to compare first and last VL counts by demographics. The p-values ≤0.05 were considered statistically significant. All analyses were performed using SAS version 9.3." ]
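The analyses above were run in SAS 9.3; purely as an illustrative sketch (not the authors' code), the same comparisons could look roughly like this in Python with scipy, assuming a patient-level DataFrame with hypothetical column names and undetectable viral loads stored as missing values:

```python
# Sketch: recode undetectable VL to 200 copies/mL (per the CDC guideline cited
# above, assuming undetectable results are stored as missing), then compare VL
# across WHO stages (Kruskal-Wallis) and first vs. last VL (Wilcoxon signed-rank).
import pandas as pd
from scipy import stats

df["first_vl"] = df["first_vl"].fillna(200)
df["last_vl"] = df["last_vl"].fillna(200)

groups = [g["last_vl"].values for _, g in df.groupby("who_stage")]
h_stat, p_kw = stats.kruskal(*groups)

w_stat, p_wx = stats.wilcoxon(df["first_vl"], df["last_vl"])
print(f"Kruskal-Wallis p = {p_kw:.4f}; Wilcoxon signed-rank p = {p_wx:.4f}")
```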
[ null, null, null, null, null ]
[ "Introduction", "Materials and Methods", "Study Area", "Study Design and Data Collection", "Demographics", "Ethical considerations", "Statistical analysis", "Results", "Discussion", "Conclusion and recommendations" ]
[ "Administration of antiretroviral therapy (ART) among human immunodeficiency virus (HIV) infected patients has led to increased survival rates among PLWHA.1 It is projected that by 2040, the number of PLWHA aged ≥50years will have grown by nearly three times in Sub-Saharan Africa (SSA) from an estimated 3.1 million in 2011 to 9.1 million2. With increasing access to ART across populations, there will be a need for long-term ART care.3,4 Previous studies from developed countries have shown distinct characteristics at diagnosis and clinical outcomes among PLWHA aged ≥50years compared to younger PLWHA.5\nThere are few research findings in SSA on the indicative features of HIV infection in PLWHA aged ≥50years receiving ART.6,7,8 Additionally, the restricted number studies that portray baseline indicative features, immunological response and mortality in elderly PLWHA have a limitation of small sample sizes.9,10 On account of this immediate information gap, furnishing supplementary data on the baseline indicative features of HIV infection among elderly PLWHA in sub-Saharan Africa is necessary. This study reports on the baseline characteristics of elderly PLWHA in Kenya which has not been addressed by previous studies. The purpose of our investigation was to examine baseline VL and CD4 count among people aged ≥50 years at the time of recruitment and identify disparities, if any, by gender, age group, current residence, WHO staging, patient source, and marital status in patients attending HIV/AIDS clinic at KTRH.", "Study Area The study was conducted at HIV care clinics of KTRH.\nThe study was conducted at HIV care clinics of KTRH.\nStudy Design and Data Collection This study was a cross-sectional retrospective study using secondary electronic data obtained from HIV patients visiting an HIV/AIDS clinic at KTRH. Data on baseline CD4 count, baseline VL, last reported VL, WHO status, age, gender, marital status, the point at which individual patient source and residence was collected retrospectively from the electronic data maintained at from at KTRH.\nThis study was a cross-sectional retrospective study using secondary electronic data obtained from HIV patients visiting an HIV/AIDS clinic at KTRH. Data on baseline CD4 count, baseline VL, last reported VL, WHO status, age, gender, marital status, the point at which individual patient source and residence was collected retrospectively from the electronic data maintained at from at KTRH.\nDemographics Age was categorised as 50–59/60–69/70–79/≥80 years. Age was determined based on the age of the individual at the time of enrolment to the clinic. The current residence was categorised as urban or rural as previously defined.11 Gender was categorised as male or female while marital status was married/cohabiting, divorced or widowed. Patient Source was categorised as an outpatient, Prevention of mother-to-child transmission (PMTCT), Tuberculosis (TB), inpatient or Voluntary Counselling and Testing (VCT) clinics while WHO AIDS Staging was stage I/II/III/IV.\nCD4 count and viral load: The baseline CD4 cell counts were placed in four categories as previously described,12,13,14 while VLs were classified into two guided by the previous related study.15 All undetectable VL values reported during the study period were replaced with 200 copies/mL following the CDC guideline.16\nAge was categorised as 50–59/60–69/70–79/≥80 years. Age was determined based on the age of the individual at the time of enrolment to the clinic. 
The current residence was categorised as urban or rural as previously defined.11 Gender was categorised as male or female while marital status was married/cohabiting, divorced or widowed. Patient Source was categorised as an outpatient, Prevention of mother-to-child transmission (PMTCT), Tuberculosis (TB), inpatient or Voluntary Counselling and Testing (VCT) clinics while WHO AIDS Staging was stage I/II/III/IV.\nCD4 count and viral load: The baseline CD4 cell counts were placed in four categories as previously described,12,13,14 while VLs were classified into two guided by the previous related study.15 All undetectable VL values reported during the study period were replaced with 200 copies/mL following the CDC guideline.16\nEthical considerations University of Eastern Africa, Baraton research ethics committee, Ministry of Health, Ministry of Education and National Commission for Science, Technology and Innovation of Kenya approved this study.\nUniversity of Eastern Africa, Baraton research ethics committee, Ministry of Health, Ministry of Education and National Commission for Science, Technology and Innovation of Kenya approved this study.", "The study was conducted at HIV care clinics of KTRH.", "This study was a cross-sectional retrospective study using secondary electronic data obtained from HIV patients visiting an HIV/AIDS clinic at KTRH. Data on baseline CD4 count, baseline VL, last reported VL, WHO status, age, gender, marital status, the point at which individual patient source and residence was collected retrospectively from the electronic data maintained at from at KTRH.", "Age was categorised as 50–59/60–69/70–79/≥80 years. Age was determined based on the age of the individual at the time of enrolment to the clinic. The current residence was categorised as urban or rural as previously defined.11 Gender was categorised as male or female while marital status was married/cohabiting, divorced or widowed. Patient Source was categorised as an outpatient, Prevention of mother-to-child transmission (PMTCT), Tuberculosis (TB), inpatient or Voluntary Counselling and Testing (VCT) clinics while WHO AIDS Staging was stage I/II/III/IV.\nCD4 count and viral load: The baseline CD4 cell counts were placed in four categories as previously described,12,13,14 while VLs were classified into two guided by the previous related study.15 All undetectable VL values reported during the study period were replaced with 200 copies/mL following the CDC guideline.16", "University of Eastern Africa, Baraton research ethics committee, Ministry of Health, Ministry of Education and National Commission for Science, Technology and Innovation of Kenya approved this study.", "Descriptive statistics were used to examine baseline CD4 count, VL and demographic characteristics. Continuous measures were compared using the Kruskal-Wallis. Wilcoxon signed-rank test was performed to compare first and last, VL counts by demographics. The p-values ≤0.05 were considered statistically significant. All analyses were performed using SAS version 9.3.", "Baseline CD4 counts, VL measurements and demographic characteristics of the study population were summarised (Table 1).\nDemographic Characteristics, CD4 Count and HIV Viral Load of PLWHA\nFirst CD4 and VL counts were defined as the first available value at the database.\nComparison of baseline characteristics by gender was determined (Table 2). Statistically significant differences were found in gender and median age (p<0.0001) and in gender and age groups (P<0.0001). 
Statistically significant differences were found in gender and CD4 categories (p=0.003).\nComparisons Of Baseline Characteristics By Gender\nThe p-value represents the comparison of the population baseline characteristics by gender.\nComparison of baseline characteristics by baseline CD4 cell count was determined (Table 3). Statistically significant differences were found in baseline CD4 count and WHO staging (P=0.048). Statistically significant differences were also found in baseline CD4 count by gender (P=0.003).\nComparison Of Baseline Characteristics By CD4 Count\nP-value represents the comparison of the population baseline characteristics by CD4 count\nComparison of baseline characteristics by WHO AIDS Staging was determined (Table 4). Statistically significant differences were found in WHO staging by baseline CD4 count categories (P=0.048). Statistically significant differences were found in WHO staging by residence (P=0.001) and WHO staging by baseline VL (P=0 .026).\nComparison of Baseline Characteristics by WHO AIDS Staging\nThe p-value represents the comparison of populations by WHO AIDS staging.\nComparison of the baseline characteristics by VL change was determined (Table 5). Statistically significant differences in changes in VL and were found by WHO staging (p<0.0001).\nComparison Of Baseline Characteristics by Change in Viral Load", "We report baseline demographic characteristics, CD4 counts and VL for PLWHA aged ≥ 50years first-time HIV testers who were ART-naive and diagnosed with HIV. Majority of the PLWHA were originally tested at the outpatient clinic. The explanation could be that most people are unwilling to test for HIV and only get a reason to test when visiting health centres where they are recommended to take HIV test and the out-patient clinic is visited by most patients who seek treatment.\nOverall, the HIV epidemic among the elderly who visited KTRH was dominated by adults aged 50–59years whose proportion was higher when compared to the national proportions17 for the same age group. The proportion of females was also higher compared to males which do not mimic the national inclination where the proportion of males is higher than females.17 This observation is because treatment guidelines have changed to emphasise early diagnosis, treatment, and attachment to care with a result that more individuals are receiving ART.12 Findings also show that the highest poportion of PLWHA aged ≥ 50years attending HIV/AIDS clinic are married/cohabiting. A feasible explanation is that by age of 24.8 years most Kenyans are married.18\nIn the stated study period VL declined. Among the reasons for this observation is that the current ART regimens have been improved and are more comfortably endured. It is also plausible that patients with higher VL die earlier and the people who remain have lower VL.12 We observed a higher decline in VL among PL-WHA in WHO AIDS stages III/IV as compared to WHO AIDS stages I/II. Probably this is because many individuals with WHO AIDS stages I/II may have entered the study with lower VLs compared to those at stage III/IV making it difficult to detect any additional VL decline in this group. There were significantly more individuals whose baseline VL was ≤10000copies/mL at recruitment just as there were more individuals in WHO AIDS stages I/II. This again can be explained by a change in treatment guidelines as explained above. When we adjusted VL for age group and WHO AIDS staging, we observed fewer individuals in the age groups 70 years and above. 
This could be explained by natural attrition with an increase in age. Life expectancy in Kenya is 66.7 years.19\nThere was a significantly higher number of females in each CD4 category, the reason for which was not clear. Notwithstanding, some studies suggest that the probability of late testing for HIV is higher for men compared to women.20,21 A higher number of individuals in WHO AIDS stages I/II/III were observed to have rural residence. It is plausible that most people aged ≥50 years could have retired and relocated to rural homes, at least in the Kenyan case.\nThere were some limitations in our study. Data were not available for PLWHA who dropped out of care and those who were not tested. Our study did not include co-morbidities, which impinge on HIV treatment, particularly in PLWHA aged ≥50 years.22", "This study identified indicative features (age, WHO AIDS stage, gender and residence) as factors associated with CD4 count and VL decline. Therefore, CD4 count, VL, age, WHO AIDS stage, gender and residence should be utilised in addressing health care challenges associated with elderly PLWHA. Additionally, noticeable VL suppression was attained during the study period, confirming possible attainment of VL suppression among elderly PLWHA under clinical care." ]
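The data-preparation rules described in the Methods above (age banding into 50–59/60–69/70–79/≥80 years, recoding undetectable viral loads to 200 copies/mL per the CDC guideline, and a two-level VL classification) can be illustrated with a short sketch. This is a hypothetical pandas example, not the authors' code (the original analysis was done in SAS 9.3); the column names and the 10,000 copies/mL split are assumptions, the latter inferred from the ≤10,000 copies/mL grouping mentioned in the Results.

```python
# Illustrative sketch only; column names and the VL cut-point are assumptions.
import pandas as pd

def prepare_baseline(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()

    # Age bands as defined in the Methods: 50-59 / 60-69 / 70-79 / >=80 years.
    out["age_group"] = pd.cut(
        out["age"],
        bins=[50, 60, 70, 80, float("inf")],
        labels=["50-59", "60-69", "70-79", ">=80"],
        right=False,
    )

    # Undetectable viral loads recoded to 200 copies/mL (CDC guideline, per the Methods).
    out.loc[out["vl_undetectable"], "vl_baseline"] = 200

    # Two-level VL classification; the 10,000 copies/mL threshold is an assumption.
    out["vl_category"] = pd.cut(
        out["vl_baseline"],
        bins=[0, 10_000, float("inf")],
        labels=["<=10,000 copies/mL", ">10,000 copies/mL"],
    )
    return out
```

The four-level CD4 categorisation is omitted here because its cut-points are given only by citation in the Methods.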
[ "intro", "materials|methods", null, null, null, null, null, "results", "discussion", "conclusions" ]
[ "HIV infected elderly patients", "Kisii Teaching and Referral Hospital, Kenya" ]
Introduction: Administration of antiretroviral therapy (ART) among human immunodeficiency virus (HIV) infected patients has led to increased survival rates among PLWHA.1 It is projected that by 2040, the number of PLWHA aged ≥50 years will have grown by nearly three times in Sub-Saharan Africa (SSA), from an estimated 3.1 million in 2011 to 9.1 million.2 With increasing access to ART across populations, there will be a need for long-term ART care.3,4 Previous studies from developed countries have shown distinct characteristics at diagnosis and clinical outcomes among PLWHA aged ≥50 years compared to younger PLWHA.5 There are few research findings in SSA on the indicative features of HIV infection in PLWHA aged ≥50 years receiving ART.6,7,8 Additionally, the restricted number of studies that portray baseline indicative features, immunological response and mortality in elderly PLWHA have the limitation of small sample sizes.9,10 On account of this information gap, furnishing supplementary data on the baseline indicative features of HIV infection among elderly PLWHA in sub-Saharan Africa is necessary. This study reports on the baseline characteristics of elderly PLWHA in Kenya, which have not been addressed by previous studies. The purpose of our investigation was to examine baseline VL and CD4 count among people aged ≥50 years at the time of recruitment and identify disparities, if any, by gender, age group, current residence, WHO staging, patient source, and marital status in patients attending the HIV/AIDS clinic at KTRH. Materials and Methods: Study Area: The study was conducted at HIV care clinics of KTRH. Study Design and Data Collection: This study was a cross-sectional retrospective study using secondary electronic data obtained from HIV patients visiting an HIV/AIDS clinic at KTRH. Data on baseline CD4 count, baseline VL, last reported VL, WHO status, age, gender, marital status, patient source and residence were collected retrospectively from the electronic records maintained at KTRH. Demographics: Age was categorised as 50–59/60–69/70–79/≥80 years. Age was determined based on the age of the individual at the time of enrolment to the clinic. The current residence was categorised as urban or rural as previously defined.11 Gender was categorised as male or female while marital status was married/cohabiting, divorced or widowed. Patient Source was categorised as an outpatient, Prevention of mother-to-child transmission (PMTCT), Tuberculosis (TB), inpatient or Voluntary Counselling and Testing (VCT) clinics while WHO AIDS Staging was stage I/II/III/IV. CD4 count and viral load: The baseline CD4 cell counts were placed in four categories as previously described,12,13,14 while VLs were classified into two categories guided by a previous related study.15 All undetectable VL values reported during the study period were replaced with 200 copies/mL following the CDC guideline.16 
Ethical considerations: University of Eastern Africa, Baraton research ethics committee, Ministry of Health, Ministry of Education and National Commission for Science, Technology and Innovation of Kenya approved this study. Study Area: The study was conducted at HIV care clinics of KTRH. Study Design and Data Collection: This study was a cross-sectional retrospective study using secondary electronic data obtained from HIV patients visiting an HIV/AIDS clinic at KTRH. Data on baseline CD4 count, baseline VL, last reported VL, WHO status, age, gender, marital status, patient source and residence were collected retrospectively from the electronic records maintained at KTRH. Demographics: Age was categorised as 50–59/60–69/70–79/≥80 years. Age was determined based on the age of the individual at the time of enrolment to the clinic. The current residence was categorised as urban or rural as previously defined.11 Gender was categorised as male or female while marital status was married/cohabiting, divorced or widowed. Patient Source was categorised as an outpatient, Prevention of mother-to-child transmission (PMTCT), Tuberculosis (TB), inpatient or Voluntary Counselling and Testing (VCT) clinics while WHO AIDS Staging was stage I/II/III/IV. CD4 count and viral load: The baseline CD4 cell counts were placed in four categories as previously described,12,13,14 while VLs were classified into two categories guided by a previous related study.15 All undetectable VL values reported during the study period were replaced with 200 copies/mL following the CDC guideline.16 Ethical considerations: University of Eastern Africa, Baraton research ethics committee, Ministry of Health, Ministry of Education and National Commission for Science, Technology and Innovation of Kenya approved this study. Statistical analysis: Descriptive statistics were used to examine baseline CD4 count, VL and demographic characteristics. Continuous measures were compared using the Kruskal-Wallis test. The Wilcoxon signed-rank test was performed to compare first and last VL counts by demographics. P-values ≤0.05 were considered statistically significant. All analyses were performed using SAS version 9.3. Results: Baseline CD4 counts, VL measurements and demographic characteristics of the study population were summarised (Table 1). Demographic Characteristics, CD4 Count and HIV Viral Load of PLWHA First CD4 and VL counts were defined as the first available value in the database. Comparison of baseline characteristics by gender was determined (Table 2). 
Statistically significant differences were found in gender and median age (p<0.0001) and in gender and age groups (p<0.0001). Statistically significant differences were found in gender and CD4 categories (p=0.003). Comparisons Of Baseline Characteristics By Gender The p-value represents the comparison of the population baseline characteristics by gender. Comparison of baseline characteristics by baseline CD4 cell count was determined (Table 3). Statistically significant differences were found in baseline CD4 count and WHO staging (P=0.048). Statistically significant differences were also found in baseline CD4 count by gender (P=0.003). Comparison Of Baseline Characteristics By CD4 Count The p-value represents the comparison of the population baseline characteristics by CD4 count. Comparison of baseline characteristics by WHO AIDS Staging was determined (Table 4). Statistically significant differences were found in WHO staging by baseline CD4 count categories (P=0.048). Statistically significant differences were found in WHO staging by residence (P=0.001) and WHO staging by baseline VL (P=0.026). Comparison of Baseline Characteristics by WHO AIDS Staging The p-value represents the comparison of populations by WHO AIDS staging. Comparison of the baseline characteristics by VL change was determined (Table 5). Statistically significant differences in changes in VL were found by WHO staging (p<0.0001). Comparison Of Baseline Characteristics by Change in Viral Load Discussion: We report baseline demographic characteristics, CD4 counts and VL for PLWHA aged ≥50 years who were first-time HIV testers, ART-naive and newly diagnosed with HIV. The majority of the PLWHA were originally tested at the outpatient clinic. The explanation could be that most people are unwilling to test for HIV and only get a reason to test when visiting health centres where they are recommended to take an HIV test, and the outpatient clinic is visited by most patients who seek treatment. Overall, the HIV epidemic among the elderly who visited KTRH was dominated by adults aged 50–59 years, whose proportion was higher when compared to the national proportions17 for the same age group. The proportion of females was also higher compared to males, which does not mimic the national pattern, where the proportion of males is higher than that of females.17 This observation is because treatment guidelines have changed to emphasise early diagnosis, treatment and attachment to care, with the result that more individuals are receiving ART.12 Findings also show that the highest proportion of PLWHA aged ≥50 years attending the HIV/AIDS clinic are married/cohabiting. A feasible explanation is that by the age of 24.8 years most Kenyans are married.18 During the study period, VL declined. Among the reasons for this observation is that the current ART regimens have been improved and are better tolerated. It is also plausible that patients with higher VL die earlier and the people who remain have lower VL.12 We observed a higher decline in VL among PLWHA in WHO AIDS stages III/IV as compared to WHO AIDS stages I/II. Probably this is because many individuals with WHO AIDS stages I/II may have entered the study with lower VLs compared to those at stage III/IV, making it difficult to detect any additional VL decline in this group. There were significantly more individuals whose baseline VL was ≤10,000 copies/mL at recruitment, just as there were more individuals in WHO AIDS stages I/II. This again can be explained by a change in treatment guidelines as explained above. 
When we adjusted VL for age group and WHO AIDS staging, we observed fewer individuals in the age groups of 70 years and above. This could be explained by natural attrition with an increase in age. Life expectancy in Kenya is 66.7 years.19 There was a significantly higher number of females in each CD4 category, the reason for which was not clear. Notwithstanding, some studies suggest that the probability of late testing for HIV is higher for men compared to women.20,21 A higher number of individuals in WHO AIDS stages I/II/III were observed to have rural residence. It is plausible that most people aged ≥50 years could have retired and relocated to rural homes, at least in the Kenyan case. There were some limitations in our study. Data were not available for PLWHA who dropped out of care and those who were not tested. Our study did not include co-morbidities, which impinge on HIV treatment, particularly in PLWHA aged ≥50 years.22 Conclusion and recommendations: This study identified indicative features (age, WHO AIDS stage, gender and residence) as factors associated with CD4 count and VL decline. Therefore, CD4 count, VL, age, WHO AIDS stage, gender and residence should be utilised in addressing health care challenges associated with elderly PLWHA. Additionally, noticeable VL suppression was attained during the study period, confirming possible attainment of VL suppression among elderly PLWHA under clinical care.
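The tests named in the Statistical analysis subsection above (Kruskal-Wallis for continuous measures across demographic groups, Wilcoxon signed-rank for the paired first-versus-last VL comparison) were run in SAS 9.3. The sketch below is an illustrative scipy/pandas equivalent, not the authors' code; the data-frame and column names are assumptions.

```python
# Illustrative sketch of the described comparisons; variable names are assumptions.
import pandas as pd
from scipy import stats

def kruskal_by_group(df: pd.DataFrame, value_col: str, group_col: str) -> float:
    """Kruskal-Wallis test of a continuous measure (e.g. baseline CD4) across groups."""
    groups = [g[value_col].dropna() for _, g in df.groupby(group_col)]
    _, p_value = stats.kruskal(*groups)
    return p_value

def wilcoxon_first_last_vl(df: pd.DataFrame) -> float:
    """Wilcoxon signed-rank test comparing first vs last viral load within patients."""
    paired = df[["vl_first", "vl_last"]].dropna()
    _, p_value = stats.wilcoxon(paired["vl_first"], paired["vl_last"])
    return p_value
```

For the per-stratum results reported above (e.g. VL change by WHO AIDS stage), the same paired test would simply be repeated within each stratum.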
Background: A better understanding of the baseline characteristics of elderly people living with HIV/AIDS (PLWHA) is relevant because the world's HIV population is ageing. Methods: We retrospectively evaluated temporal inclinations of CD4 levels, viral load change and baseline demographic characteristics in the electronic records at the hospital using a mixed error-component model for 1329 PLWHA attending clinic between January 2008 and December 2019. Results: Findings showed a significant difference in the comparison between baseline VL and WHO AIDS staging (p=0.026). Overall VL levels decreased over the period significantly by WHO AIDS staging (p<0.0001). Significant difference was observed by gender (p<0.0001), across age groups (p<0.0001) and baseline CD4 counts (p=0.003). There were significant differences in WHO staging by CD4 count >200cell/mm3 (p=0.048) and residence (p=0.001). Conclusions: Age, WHO AIDS staging, gender and residence are relevant parameters associated with viral load decline and CD4 count in elderly PLWHA. A noticeable VL suppression was attained confirming possible attainment of VL suppression among PLWHA under clinical care.
Introduction: Administration of antiretroviral therapy (ART) among human immunodeficiency virus (HIV) infected patients has led to increased survival rates among PLWHA.1 It is projected that by 2040, the number of PLWHA aged ≥50 years will have grown by nearly three times in Sub-Saharan Africa (SSA), from an estimated 3.1 million in 2011 to 9.1 million.2 With increasing access to ART across populations, there will be a need for long-term ART care.3,4 Previous studies from developed countries have shown distinct characteristics at diagnosis and clinical outcomes among PLWHA aged ≥50 years compared to younger PLWHA.5 There are few research findings in SSA on the indicative features of HIV infection in PLWHA aged ≥50 years receiving ART.6,7,8 Additionally, the restricted number of studies that portray baseline indicative features, immunological response and mortality in elderly PLWHA have the limitation of small sample sizes.9,10 On account of this information gap, furnishing supplementary data on the baseline indicative features of HIV infection among elderly PLWHA in sub-Saharan Africa is necessary. This study reports on the baseline characteristics of elderly PLWHA in Kenya, which have not been addressed by previous studies. The purpose of our investigation was to examine baseline VL and CD4 count among people aged ≥50 years at the time of recruitment and identify disparities, if any, by gender, age group, current residence, WHO staging, patient source, and marital status in patients attending the HIV/AIDS clinic at KTRH. Conclusion and recommendations: This study identified indicative features (age, WHO AIDS stage, gender and residence) as factors associated with CD4 count and VL decline. Therefore, CD4 count, VL, age, WHO AIDS stage, gender and residence should be utilised in addressing health care challenges associated with elderly PLWHA. Additionally, noticeable VL suppression was attained during the study period, confirming possible attainment of VL suppression among elderly PLWHA under clinical care.
Background: A better understanding of the baseline characteristics of elderly people living with HIV/AIDS (PLWHA) is relevant because the world's HIV population is ageing. Methods: We retrospectively evaluated temporal inclinations of CD4 levels, viral load change and baseline demographic characteristics in the electronic records at the hospital using a mixed error-component model for 1329 PLWHA attending clinic between January 2008 and December 2019. Results: Findings showed a significant difference in the comparison between baseline VL and WHO AIDS staging (p=0.026). Overall VL levels decreased over the period significantly by WHO AIDS staging (p<0.0001). Significant difference was observed by gender (p<0.0001), across age groups (p<0.0001) and baseline CD4 counts (p=0.003). There were significant differences in WHO staging by CD4 count >200cell/mm3 (p=0.048) and residence (p=0.001). Conclusions: Age, WHO AIDS staging, gender and residence are relevant parameters associated with viral load decline and CD4 count in elderly PLWHA. A noticeable VL suppression was attained confirming possible attainment of VL suppression among PLWHA under clinical care.
2,152
210
[ 11, 71, 158, 32, 62 ]
10
[ "baseline", "study", "vl", "cd4", "age", "hiv", "aids", "count", "plwha", "cd4 count" ]
[ "50years time hiv", "age aids", "obtained hiv patients", "hiv majority plwha", "hiv infection plwha" ]
null
[CONTENT] HIV infected elderly patients | Kisii Teaching and Referral Hospital, Kenya [SUMMARY]
null
[CONTENT] HIV infected elderly patients | Kisii Teaching and Referral Hospital, Kenya [SUMMARY]
[CONTENT] HIV infected elderly patients | Kisii Teaching and Referral Hospital, Kenya [SUMMARY]
[CONTENT] HIV infected elderly patients | Kisii Teaching and Referral Hospital, Kenya [SUMMARY]
[CONTENT] HIV infected elderly patients | Kisii Teaching and Referral Hospital, Kenya [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | CD4 Lymphocyte Count | Cross-Sectional Studies | Female | HIV Infections | Hospitals, Teaching | Humans | Kenya | Male | Middle Aged | Retrospective Studies | Treatment Outcome | Viral Load [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | CD4 Lymphocyte Count | Cross-Sectional Studies | Female | HIV Infections | Hospitals, Teaching | Humans | Kenya | Male | Middle Aged | Retrospective Studies | Treatment Outcome | Viral Load [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | CD4 Lymphocyte Count | Cross-Sectional Studies | Female | HIV Infections | Hospitals, Teaching | Humans | Kenya | Male | Middle Aged | Retrospective Studies | Treatment Outcome | Viral Load [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | CD4 Lymphocyte Count | Cross-Sectional Studies | Female | HIV Infections | Hospitals, Teaching | Humans | Kenya | Male | Middle Aged | Retrospective Studies | Treatment Outcome | Viral Load [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | CD4 Lymphocyte Count | Cross-Sectional Studies | Female | HIV Infections | Hospitals, Teaching | Humans | Kenya | Male | Middle Aged | Retrospective Studies | Treatment Outcome | Viral Load [SUMMARY]
[CONTENT] 50years time hiv | age aids | obtained hiv patients | hiv majority plwha | hiv infection plwha [SUMMARY]
null
[CONTENT] 50years time hiv | age aids | obtained hiv patients | hiv majority plwha | hiv infection plwha [SUMMARY]
[CONTENT] 50years time hiv | age aids | obtained hiv patients | hiv majority plwha | hiv infection plwha [SUMMARY]
[CONTENT] 50years time hiv | age aids | obtained hiv patients | hiv majority plwha | hiv infection plwha [SUMMARY]
[CONTENT] 50years time hiv | age aids | obtained hiv patients | hiv majority plwha | hiv infection plwha [SUMMARY]
[CONTENT] baseline | study | vl | cd4 | age | hiv | aids | count | plwha | cd4 count [SUMMARY]
null
[CONTENT] baseline | study | vl | cd4 | age | hiv | aids | count | plwha | cd4 count [SUMMARY]
[CONTENT] baseline | study | vl | cd4 | age | hiv | aids | count | plwha | cd4 count [SUMMARY]
[CONTENT] baseline | study | vl | cd4 | age | hiv | aids | count | plwha | cd4 count [SUMMARY]
[CONTENT] baseline | study | vl | cd4 | age | hiv | aids | count | plwha | cd4 count [SUMMARY]
[CONTENT] plwha | aged | art | indicative features | features | elderly plwha | aged 50years | indicative | studies | plwha aged 50years [SUMMARY]
null
[CONTENT] comparison | baseline characteristics | characteristics | baseline | found | differences | comparison baseline | comparison baseline characteristics | significant differences | statistically significant differences [SUMMARY]
[CONTENT] suppression | gender residence | stages gender residence | vl suppression | associated | aids stages gender | aids stages gender residence | stages gender | age aids stages gender | age aids [SUMMARY]
[CONTENT] study | vl | hiv | baseline | cd4 | ktrh | plwha | age | categorised | care [SUMMARY]
[CONTENT] study | vl | hiv | baseline | cd4 | ktrh | plwha | age | categorised | care [SUMMARY]
[CONTENT] [SUMMARY]
null
[CONTENT] VL ||| VL ||| ||| WHO ||| 200cell [SUMMARY]
[CONTENT] PLWHA ||| VL | VL | PLWHA [SUMMARY]
[CONTENT] ||| 1329 | between January 2008 and December 2019 ||| ||| VL ||| VL ||| ||| WHO ||| 200cell ||| PLWHA ||| VL | VL | PLWHA [SUMMARY]
[CONTENT] ||| 1329 | between January 2008 and December 2019 ||| ||| VL ||| VL ||| ||| WHO ||| 200cell ||| PLWHA ||| VL | VL | PLWHA [SUMMARY]
Positions and Types of Pterion in Adult Human Skulls: A Preliminary Study.
34703188
Trauma to the skull in the area of the pterion usually causes rupture of the middle meningeal artery, leading to a life-threatening epidural hematoma. The objective of the study is to assess the prevalence of different types of pterion and to determine its location using valuable bony landmarks.
BACKGROUND
On 90 dry adult human skulls of unknown sex, age and nationality the distance of different landmarks from pterion was measured using stainless steel sliding Vernier caliper. The data were analyzed using SPSS version-20 and an independent t-test analysis was implemented. A value of P< 0.05 was considered as statistically significant.
METHODS
A higher occurrence of the sphenoparietal type of pterion, with the absence of the frontotemporal type, was noted. About 23% and 77% of the suture types were found to be unilateral and bilateral, respectively. There was a statistically significant difference between the right and left sides of the skull in the distances from the center of the pterion to the frontozygomatic suture, the root of the zygomatic arch and the inion, and in the central thickness of the pterion.
RESULTS
This study showed that the most prevalent type of pterion is sphenoparietal, and revealed asymmetry in the distances from center of pterion to frontozygomatic suture, root of zygomatic arch and inion, and its central thickness. Such findings could offer worthy information about the type and location of pterion, which could be relevant to anatomists, neurosurgeons, forensic medicine specialist and anthropologists.
CONCLUSION
[ "Adult", "Cranial Sutures", "Ethnicity", "Humans", "Skull", "Zygoma" ]
8512946
Introduction
The pterion is an H-shaped bony neurological landmark found at the junction of the frontal, sphenoid, parietal and the squamous part of temporal bone (1). It is located approximately 4 cm superior to the zygomatic arch and 3.5 cm posterior to the frontozygomatic suture (2). Internally, the pterion is related to various anatomical structures, namely, the anterior division of the middle meningeal vessels, middle cerebral vessels, Sylvain fissure, circle of Willis, insula and Broca's motor speech area (on the left) and optic nerve (3). Any traumatic blow to the pterion presumably causes rupture of the anterior divisions of the middle meningeal vessels causing an epidural haematoma subsequently resulting in compression of cerebral cortex and death unless proper intervention is carried out (4,5). Surgical approach via the pterion has been quoted as the most widely implemented approach for the proper management of intracranial anterior circulation aneurysm. This approach has better advantages over the traditional surgical approach with minor tissue damage, lesser brain retraction, a superior cosmetic results and a shorter duration of surgery (6). According to Murphy (1956), pterion can be categorized into four types, namely, sphenoparietal, frontotemporal, stellate and epipteric suture (7). The sphenoparietal type is the most common suture formed by the articulation of the greater wing of sphenoid bone with parietal bone. The frontotemporal type is a pterional sutural pattern between the frontal and temporal bone. The stellate variety of suture is the site of articulation formed by the fusion of four flat bones, sphenoid, frontal, parietal and temporal bones. The epipteric type of pterion is characterized by the presence of small sutural bones between the sphenoid and parietal bones. The presence of epipteric or wormian (sutural) bone in the area, can possibly lead to wrong radiological diagnosis and clinical management of fracture in the pterion. The presence of sutural bones could possibly complicate surgical interventions involving burr hole surgeries as their extension may lead to orbital penetration (4,8,9). Various evidences demonstrated that the type and location of pterion exhibits significant ethnic, sex and age-related variations (3,4,10–14). Even though such evidences exist, there is no documented information in Ethiopia so far. The present study is aimed to determine the type and location of the pterion using dry adult human skulls obtained from Northwest Ethiopia. The findings of this study could provide baseline information about the type and location of the pterion in the studied population that can be useful for anatomists, neurosurgeons, forensic pathologists and anthropologists. The objective of the present study is to assess the prevalence of different types of pterion and to determine its location using valuable bony landmarks.
null
null
Results
As it is presented in Table 1 and Figure 2, three types of pterion patterns (sphenoparietal, epipteric and stellate) were identified. Sphenoparietal was the most common type with frequency of 152 (84.4%), followed by epipteric 24 (13.3%). Frontotemporal type of pterion was not observed. In all the three identified types of pteria, there was asymmetric distribution. Distribution of types of pteria among skulls obtained in Northwest Ethiopia, 2020 Identified types of the pteria on skulls of unknown sex age and nationality obtained from the anatomical museum in the Department of Human Anatomy, School of Medicine, College of Medicine and Health Sciences, University of Gondar, Ethiopia, 2020. In this study, the average distances between the center of pterion and several clinically and morphologically important structural landmarks were determined using the sliding stainless steel Vernier caliper. After checking for normality of distribution and homogeneity of variance, independent sample t-test was done to determine whether the central distance of the pterion from various bony landmarks differs with the sides of the skull as presented in Table 2. The mean distances from the center of pterion to FZS on the right side was 2.92 ± 0.05 cm and on the left side was 2.75 ± 0.05 cm. The distances were relatively shorter on the left side than on the right side and the difference was statistically significant (P= 0.021). However, the actual difference of the two sides was found to be small (Eta squared= 0.03). Similarly, the average distance from the RZA to the pterion on the right and left sides was 3.55 ± 0.04 cm and 3.30 ± 0.05 cm, respectively, and the difference was statistically significant (P= 0.000) with small actual difference between the two sides (Eta squared= 0.076). Comparison of the mean central distances from pterion (P) to various bony landmarks on the two sides The distance between the center of pterion and inion was found to be 12.52 ± 0.07 cm on the right and 12.73 ± 0.05 cm on the left side being shorter on the right side as compared to the left side and the difference was statistically significant (P<0.05) but the effect of distance to the sides was small (Eta squared= 0.031) (Table 2). The central thickness of pterion was found to be 0.59 ± 0.01 cm and 0.41 ± 0.01 cm on the right and left sides, respectively. The left side of pterion was significantly thinner than the corresponding right sides (P<0.05, Eta squared=0.120) (Table 2).
null
null
[]
[]
[]
[ "Introduction", "Materials and Methods", "Results", "Discussion" ]
[ "The pterion is an H-shaped bony neurological landmark found at the junction of the frontal, sphenoid, parietal and the squamous part of temporal bone (1). It is located approximately 4 cm superior to the zygomatic arch and 3.5 cm posterior to the frontozygomatic suture (2).\nInternally, the pterion is related to various anatomical structures, namely, the anterior division of the middle meningeal vessels, middle cerebral vessels, Sylvain fissure, circle of Willis, insula and Broca's motor speech area (on the left) and optic nerve (3).\nAny traumatic blow to the pterion presumably causes rupture of the anterior divisions of the middle meningeal vessels causing an epidural haematoma subsequently resulting in compression of cerebral cortex and death unless proper intervention is carried out (4,5). Surgical approach via the pterion has been quoted as the most widely implemented approach for the proper management of intracranial anterior circulation aneurysm. This approach has better advantages over the traditional surgical approach with minor tissue damage, lesser brain retraction, a superior cosmetic results and a shorter duration of surgery (6).\nAccording to Murphy (1956), pterion can be categorized into four types, namely, sphenoparietal, frontotemporal, stellate and epipteric suture (7). The sphenoparietal type is the most common suture formed by the articulation of the greater wing of sphenoid bone with parietal bone. The frontotemporal type is a pterional sutural pattern between the frontal and temporal bone. The stellate variety of suture is the site of articulation formed by the fusion of four flat bones, sphenoid, frontal, parietal and temporal bones. The epipteric type of pterion is characterized by the presence of small sutural bones between the sphenoid and parietal bones. The presence of epipteric or wormian (sutural) bone in the area, can possibly lead to wrong radiological diagnosis and clinical management of fracture in the pterion. The presence of sutural bones could possibly complicate surgical interventions involving burr hole surgeries as their extension may lead to orbital penetration (4,8,9).\nVarious evidences demonstrated that the type and location of pterion exhibits significant ethnic, sex and age-related variations (3,4,10–14). Even though such evidences exist, there is no documented information in Ethiopia so far. The present study is aimed to determine the type and location of the pterion using dry adult human skulls obtained from Northwest Ethiopia. The findings of this study could provide baseline information about the type and location of the pterion in the studied population that can be useful for anatomists, neurosurgeons, forensic pathologists and anthropologists.\nThe objective of the present study is to assess the prevalence of different types of pterion and to determine its location using valuable bony landmarks.", "A cross sectional study was conducted on ninety dried and intact adult skulls of unknown sex, age and nationality. The study was conducted from 20th of March to 20th of April, 2020 on dry skulls obtained from the anatomical museum in the Department of Human Anatomy, School of Medicine, College of Medicine and Health Sciences, University of Gondar, Ethiopia. Sutural patterns of the pterion and its distance from selected structural landmarks were assessed macroscopically on both sides. 
The data collection was performed after the ethical clearance and approval obtained from the School of Medicine Ethical committee, University of Gondar (Reference SOM 876/12 dated December 24, 2019). Skulls with any pathological deformities and trauma affecting the measurements, for instance fracture of zygomatic arch were excluded.\nThe sutural patterns of the pterion (sphenoparietal, frontotemporal, stellate and epipteric types) were studied on both sides of each skull using the principles of Murphy classification (7).\nFor the purpose of measurements of distances of various clinically important landmarks from the corresponding pterion, a circle of smallest radius was drawn just at the site of formation of the pterion. The center of the circle was considered as the center of pterion (Figure 1a and b). Distance measurements in centimeter were taken from the center of pterion with a stainless-steel sliding Vernier caliper (Figure 1c) twice and the average was taken as the actual measurement.\nRepresentative picture presenting the distances from the pterion to some special external (A) and internal (B) structural landmarks and measurement from the central distance of pterion to various landmarks of skulls of unknown sex age and nationality obtained from the anatomical museum in the Department of Human Anatomy, School of Medicine, College of Medicine and Health Sciences, University of Gondar, Ethiopia using a Stainless steel Vernier caliper (C)\nThe following distance measurement parameters were taken on the lateral aspects of the skulls from the center of the pterion (P), 1) to the posterolateral aspect of the frontozygomatic suture (FZS), 2) to the root of zygomatic arch (RZA), 3) to the tip of the mastoid process (TMP), 4) to the suprameatal spine (SMS), 5) to the external occipital protuberance (EOP), 6) to the Frankfurt horizontal plane (FHP). Additionally, distance measurements were also taken from the internal aspect of the center of the pterion to the lateral end of the sphenoid ridge on the lesser wing of sphenoid bone (LWS) and to the lateral margin of the optic canal (OC). The thickness at the center of the pterion (TAAC) was also measured.\nStatistical analysis: All the data were analyzed using SPSS version 20 statistical software. A comparison of the mean values between the sides was done using the independent t-test and a P-value less than 0.05 was considered as statistically significant. The data were presented as mean with the corresponding standard error of mean (SEM).", "As it is presented in Table 1 and Figure 2, three types of pterion patterns (sphenoparietal, epipteric and stellate) were identified. Sphenoparietal was the most common type with frequency of 152 (84.4%), followed by epipteric 24 (13.3%). Frontotemporal type of pterion was not observed. In all the three identified types of pteria, there was asymmetric distribution.\nDistribution of types of pteria among skulls obtained in Northwest Ethiopia, 2020\nIdentified types of the pteria on skulls of unknown sex age and nationality obtained from the anatomical museum in the Department of Human Anatomy, School of Medicine, College of Medicine and Health Sciences, University of Gondar, Ethiopia, 2020.\nIn this study, the average distances between the center of pterion and several clinically and morphologically important structural landmarks were determined using the sliding stainless steel Vernier caliper. 
After checking for normality of distribution and homogeneity of variance, an independent-samples t-test was done to determine whether the central distance of the pterion from various bony landmarks differs between the sides of the skull, as presented in Table 2. The mean distance from the center of the pterion to the FZS on the right side was 2.92 ± 0.05 cm and on the left side was 2.75 ± 0.05 cm. The distances were relatively shorter on the left side than on the right side and the difference was statistically significant (P= 0.021). However, the actual difference between the two sides was found to be small (Eta squared= 0.03). Similarly, the average distance from the RZA to the pterion on the right and left sides was 3.55 ± 0.04 cm and 3.30 ± 0.05 cm, respectively, and the difference was statistically significant (P= 0.000) with a small actual difference between the two sides (Eta squared= 0.076).\nComparison of the mean central distances from pterion (P) to various bony landmarks on the two sides\nThe distance between the center of pterion and inion was found to be 12.52 ± 0.07 cm on the right and 12.73 ± 0.05 cm on the left side, being shorter on the right side as compared to the left side, and the difference was statistically significant (P<0.05) but the effect size was small (Eta squared= 0.031) (Table 2). The central thickness of pterion was found to be 0.59 ± 0.01 cm and 0.41 ± 0.01 cm on the right and left sides, respectively. The left side of the pterion was significantly thinner than the corresponding right side (P<0.05, Eta squared=0.120) (Table 2).", "Surgical approach via the pterion has been cited as the most widely implemented approach to the proper management of intracranial anterior circulation aneurysms. This approach has more advantages than the traditional surgical approach, with minor tissue damage, less brain retraction, superior cosmetic results and a shorter duration of surgery (6). Fundamentally, the knowledge of the pterion sutural patterns and its relationship to the bony landmarks is useful for specialists in various fields of the medical professions, particularly for neurosurgeons (17, 18). The main findings of this study are notably the higher occurrence of the sphenoparietal type of pterion with the absence of the frontotemporal type. About 23% and 77% of the suture types are found to be unilateral and bilateral, respectively. Furthermore, the central distances of the pterion to the FZS, RZA and inion, and the thickness at its center, had statistically significant differences between the right and left sides of the skull.\nThe incidence of the different types of pteria among a diversified population of different countries was compared (1, 3, 4, 7, 8, 10–12, 14, 19–23) and illustrated in Table 3.\nComparison of the percentage of pteria types with other populations Type of pterion\nWang et al. (2006) explained the influence of environmental and/or genetic factors on the sutural patterns of pterion (13). A longitudinal study done using 368 Australian skulls reported four different sutural patterns of pterion (7). The incidence of the various types of pterion was 73.23%, 7.75%, 18.34% and 0.68% for sphenoparietal, frontotemporal, epipteric and stellate types, respectively. A study conducted in Turkey using 300 dried human skulls stated sphenoparietal type (96%), frontotemporal (3.7%), epipteric (9%) and stellate type (0.2%) (8). 
They revealed the existence of an epipteric or wormian bone at the pterion may complicate surgical orientation leading to complication during burr hole surgeries like orbital penetration. As it has been reported in another study on 26 dry adult skulls of Turks, 88% were sphenoparietal, 10% were frontotemporal and 2% were epipteric while the stellate type was absent (12). Different studies unanimously, reported that sphenoparietal type of pterion was found to be pre-dominant type of suture (3, 4, 10–12, 17, 20, 24). A study done on 52 dried adult Sri Lankan skulls reported the sphenoparietal type (74.04%) as the most common type followed by epipteric type (21.15%) and frontotemporal type (4.181%). Similarly, they did not find any stellate variety of pterion in their study (19). A Nigerian study also reported sphenoparietal type (77.33%), frontotemporal type (8.3%) and stellate type (5.6%) patterns of pterion without epipteric type (1). A study in India population showed sphenoparietal type (77.33%), epipteric type (21.33%), stellate type (1.34%) but a frontotemporal type of pterion was not seen (23). In another study done on 149 dried Korean skulls, the most frequent type of pterion found was sphenoparietal type (76.5%) followed by the epipteric type (40.3%) without the occurrence of stellate and frontotemporal types (20). In the present study, sphenoparietal (84.4%), epipteric (13.3%) and stellate (2.2%) types of pterion were identified. Of the observed patterns of pterion 23.3% and 76.7% were present unilaterally and bilaterally, respectively but no frontotemporal variety of pterion was observed. The presence or absence of frontotemporal pattern of pterion clearly indicates the contribution of genetic factor to the variation of occurrence ranged from zero in a British seventeenth century cementery to 9.8% in Nigerian crania (25).\nThe mean distance measurements between the center of pterion and different landmarks among different studies conducted elsewhere were compared (1, 3, 9–12, 14, 19, 21, 26, 27) and are depicted in Table 4. A study done on cone beam CR scans of 50 adult craniums and 76 adult dry skulls in New Zealand reported the significant clinical relationship of the anterior division of middle meningeal artery to the center of the pterion in the Frankfurt plane (21). As it was reported forty years ago the pterion is situated 3.0 – 3.5 cm posterior to the FZS (2). The mean distance of the center of the pterion from the posterolateral margin of the FZS in adult dry skulls was 3.1 ± 0.4 cm on the right side and 3.09 ± 0.4 cm on the left side (4). Studies conducted elsewhere reported a mean pterion to FZS distance ranging from 3.0 cm to 3.7 cm (10–12, 14, 27). However, Nigerian (3) and New Zealandian (21) studies stated shorter mean center of pterion to FZS distance on the right and left sides, respectively as (2.74 ± 0.17cm and 2.7 ± 0.06 cm; 2.6 ± 0.4 cm and 2.5 ± 0.4 cm). In the present study, the center of the pterion was found to be 2.92 ± 0.05 cm on the right side and 2.75 ± 0.05 cm on the left side above the posterolateral margin of the FZS (P<0.05, Eta squared= 0.03).\nIt has been reported that the center of the pterion is found to be 3 – 4 cm above the zygomatic arch (2). Similarly, studies conducted in a diversified population demonstrated the mean pterion - RZA distance in as presented in Table 4. 
In this study, the pterion was 3.55 ± 0.04 cm on the right side and 3.30 ± 0.05 cm on the left side superior to the RZA (P<0.05, Eta squared= 0.076).\nThe lesser wing of the sphenoid bone (LWS) is a common site for meningiomas and it can be approached through the pterional surgical technique. In this case, the distance between the internal portion of the pterion and the lateral margin of sphenoid bone is crucial. An Indian study done on 42 dry adult human skulls reported the mean distance between the pterion and LWS as 1.36 ± 0.35 cm on the right side and 1.33 ± 0.22 cm on the left side (14). Kenyans study reported that the distance of the pterion center from the lateral margin of LWS was 1.4 ± 0.33 cm on the right and 1.48 ± 0.32 cm on the left side far from the lateral margin of LWS (11). However, in the present study, the internal portion of pterion center was far from the lateral margin of LWS with a mean distance of 1.69 ± 0.02 cm on the right side and 1.73 ± 0.03 cm on the left side (P>0.05). The discrepancy of the mean distance is probably due to environmental and genetic variabilities.\nPterional approach is useful to reach to the optic canal containing optic nerve (CN II) and ophthalmic artery. In such a case, the distance measurements between the internal aspect of pterion and optic canal (OC) is decisive. Studies from India and Kenya reported that the internal surface of pterion center is far from the OC with a mean measurement of 4.52 ± 0.32 cm and 4.39 ± 0.4 cm on the right side and 4.37 ± 0.23 cm and 4.36 ± 0.4 cm on the left side, respectively (11, 14). In the present study, the mean measurement between the internal portion of pterion center and OC was 3.84 ± 0.01 cm on the right side and 3.80 ± 0.02 cm on the left side.\nA very recent study done in Turkey using 75 dry adult skulls reported the mean measurement between the center of the pterion and the inion to be 13.55 ± 0.62 cm in male and 12.62 ± 0.63 cm in female (10). In another report based on skulls of male subjects of the Byzantine period, the distance between the pterion and the inion was found to be 13.80 ± 0.5 cm on the right side and 13.70 ± 0.40 cm on the left side (28). However, in this study, the mean measurement between pterion and inion was 12.52 ± 0.07 cm and 12.73 ± 0.05 cm on the right and left sides, respectively (P=0.017, Eta squared= 0.031).\nA comparative study done on human skulls from 13th to 20th century using manual measurements revealed that the mean distance between pterion and the tip of mastoid process (TMP) in male subjects as 8.30 ± 0.34 cm on the right side and 8.50 ± 0.26 cm on the left side (28). In a study done in Turkey, the mean distance between pterion and TMP was found to be 8.02 ± 0.6 cm (10). On the other hand, in this study, the mean measurement between the center of the pterion and the TMP was 7.69 ± 0.05 cm on the right side and 7.65 ± 0.05 cm on the left side (P>0.05).\nCimen and collaborators (2019) measured the mean distance between pterion and external acoustic meatus as 5.71 ± 0.77 cm and 5.34 ± 0.36 cm in male and female skulls, respectively (10). In the present study, the measured mean distance between the center of pterion and the supra meatal spine was 4.97 ± 0.04 cm on the right side and 5.06 ± 0.03 cm on the left side (P>0.05). The difference may be due to geographical, genetic or methodological variations.\nA study done in India on 100 dry skulls reported the mean thickness at the center of the pterion to be 0.352±0.145 cm (9). 
In addition, other Asian studies reported different mean thicknesses at the center of the pterion: 0.513±0.167 cm in Thai skulls (26), 0.39 to 0.41 cm in Turks (12), and 0.319±0.085 cm in Korean skulls (29). In this study, the central thickness of the pterion was 0.59 ± 0.01 cm on the right and 0.41 ± 0.01 cm on the left side (P<0.05, Eta squared= 0.12). Clinically, knowledge of the thickness of the pterion is very important for neurosurgeons and could be applied during internal and external neurosurgical fixation procedures.\nIn conclusion, according to the findings of the current study, the sphenoparietal type of suture is the most frequent variety of pterion. The mean distances from the center of the pterion to the FZS, RZA and inion, and the central thickness of the pterion, showed statistically significant differences between the right and left sides of the skulls. The findings of this study may, presumably, be useful for anatomists, neurosurgeons, forensic pathologists and anthropologists in the area of the studied population. Further investigation on skulls of identified sex, age and nationality, particularly Ethiopian skulls, using computed tomography scans, X-rays and dry human skulls is strongly recommended." ]
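The right-versus-left comparisons in this skull series rest on an independent-samples t-test with eta squared as the effect size; eta squared can be derived from the t statistic as t^2 / (t^2 + df), with df = n1 + n2 - 2. The original analysis was done in SPSS version 20; the sketch below is an illustrative Python equivalent with placeholder measurements, not the authors' code.

```python
# Illustrative right-vs-left comparison with an independent t-test and
# an eta-squared effect size (eta^2 = t^2 / (t^2 + df)); data are placeholders.
import numpy as np
from scipy import stats

def compare_sides(right: np.ndarray, left: np.ndarray) -> dict:
    t_stat, p_value = stats.ttest_ind(right, left)   # pooled (equal-variance) t-test
    dof = len(right) + len(left) - 2                  # degrees of freedom for the pooled test
    eta_squared = t_stat**2 / (t_stat**2 + dof)
    return {"t": float(t_stat), "p": float(p_value), "eta_squared": float(eta_squared)}

# Example with made-up pterion-to-FZS distances (cm) from the two sides of the skull:
print(compare_sides(np.array([2.9, 3.0, 2.8, 2.95]), np.array([2.7, 2.8, 2.6, 2.75])))
```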
[ "intro", "materials|methods", "results", "discussion" ]
[ "Frontozygomatic suture", "inion", "Pterion", "sphenoid bone", "zygomatic arch" ]
Introduction: The pterion is an H-shaped bony neurological landmark found at the junction of the frontal, sphenoid, parietal and the squamous part of temporal bone (1). It is located approximately 4 cm superior to the zygomatic arch and 3.5 cm posterior to the frontozygomatic suture (2). Internally, the pterion is related to various anatomical structures, namely, the anterior division of the middle meningeal vessels, middle cerebral vessels, Sylvain fissure, circle of Willis, insula and Broca's motor speech area (on the left) and optic nerve (3). Any traumatic blow to the pterion presumably causes rupture of the anterior divisions of the middle meningeal vessels causing an epidural haematoma subsequently resulting in compression of cerebral cortex and death unless proper intervention is carried out (4,5). Surgical approach via the pterion has been quoted as the most widely implemented approach for the proper management of intracranial anterior circulation aneurysm. This approach has better advantages over the traditional surgical approach with minor tissue damage, lesser brain retraction, a superior cosmetic results and a shorter duration of surgery (6). According to Murphy (1956), pterion can be categorized into four types, namely, sphenoparietal, frontotemporal, stellate and epipteric suture (7). The sphenoparietal type is the most common suture formed by the articulation of the greater wing of sphenoid bone with parietal bone. The frontotemporal type is a pterional sutural pattern between the frontal and temporal bone. The stellate variety of suture is the site of articulation formed by the fusion of four flat bones, sphenoid, frontal, parietal and temporal bones. The epipteric type of pterion is characterized by the presence of small sutural bones between the sphenoid and parietal bones. The presence of epipteric or wormian (sutural) bone in the area, can possibly lead to wrong radiological diagnosis and clinical management of fracture in the pterion. The presence of sutural bones could possibly complicate surgical interventions involving burr hole surgeries as their extension may lead to orbital penetration (4,8,9). Various evidences demonstrated that the type and location of pterion exhibits significant ethnic, sex and age-related variations (3,4,10–14). Even though such evidences exist, there is no documented information in Ethiopia so far. The present study is aimed to determine the type and location of the pterion using dry adult human skulls obtained from Northwest Ethiopia. The findings of this study could provide baseline information about the type and location of the pterion in the studied population that can be useful for anatomists, neurosurgeons, forensic pathologists and anthropologists. The objective of the present study is to assess the prevalence of different types of pterion and to determine its location using valuable bony landmarks. Materials and Methods: A cross sectional study was conducted on ninety dried and intact adult skulls of unknown sex, age and nationality. The study was conducted from 20th of March to 20th of April, 2020 on dry skulls obtained from the anatomical museum in the Department of Human Anatomy, School of Medicine, College of Medicine and Health Sciences, University of Gondar, Ethiopia. Sutural patterns of the pterion and its distance from selected structural landmarks were assessed macroscopically on both sides. 
The data collection was performed after the ethical clearance and approval obtained from the School of Medicine Ethical committee, University of Gondar (Reference SOM 876/12 dated December 24, 2019). Skulls with any pathological deformities and trauma affecting the measurements, for instance fracture of zygomatic arch were excluded. The sutural patterns of the pterion (sphenoparietal, frontotemporal, stellate and epipteric types) were studied on both sides of each skull using the principles of Murphy classification (7). For the purpose of measurements of distances of various clinically important landmarks from the corresponding pterion, a circle of smallest radius was drawn just at the site of formation of the pterion. The center of the circle was considered as the center of pterion (Figure 1a and b). Distance measurements in centimeter were taken from the center of pterion with a stainless-steel sliding Vernier caliper (Figure 1c) twice and the average was taken as the actual measurement. Representative picture presenting the distances from the pterion to some special external (A) and internal (B) structural landmarks and measurement from the central distance of pterion to various landmarks of skulls of unknown sex age and nationality obtained from the anatomical museum in the Department of Human Anatomy, School of Medicine, College of Medicine and Health Sciences, University of Gondar, Ethiopia using a Stainless steel Vernier caliper (C) The following distance measurement parameters were taken on the lateral aspects of the skulls from the center of the pterion (P), 1) to the posterolateral aspect of the frontozygomatic suture (FZS), 2) to the root of zygomatic arch (RZA), 3) to the tip of the mastoid process (TMP), 4) to the suprameatal spine (SMS), 5) to the external occipital protuberance (EOP), 6) to the Frankfurt horizontal plane (FHP). Additionally, distance measurements were also taken from the internal aspect of the center of the pterion to the lateral end of the sphenoid ridge on the lesser wing of sphenoid bone (LWS) and to the lateral margin of the optic canal (OC). The thickness at the center of the pterion (TAAC) was also measured. Statistical analysis: All the data were analyzed using SPSS version 20 statistical software. A comparison of the mean values between the sides was done using the independent t-test and a P-value less than 0.05 was considered as statistically significant. The data were presented as mean with the corresponding standard error of mean (SEM). Results: As it is presented in Table 1 and Figure 2, three types of pterion patterns (sphenoparietal, epipteric and stellate) were identified. Sphenoparietal was the most common type with frequency of 152 (84.4%), followed by epipteric 24 (13.3%). Frontotemporal type of pterion was not observed. In all the three identified types of pteria, there was asymmetric distribution. Distribution of types of pteria among skulls obtained in Northwest Ethiopia, 2020 Identified types of the pteria on skulls of unknown sex age and nationality obtained from the anatomical museum in the Department of Human Anatomy, School of Medicine, College of Medicine and Health Sciences, University of Gondar, Ethiopia, 2020. In this study, the average distances between the center of pterion and several clinically and morphologically important structural landmarks were determined using the sliding stainless steel Vernier caliper. 
After checking for normality of distribution and homogeneity of variance, independent sample t-test was done to determine whether the central distance of the pterion from various bony landmarks differs with the sides of the skull as presented in Table 2. The mean distances from the center of pterion to FZS on the right side was 2.92 ± 0.05 cm and on the left side was 2.75 ± 0.05 cm. The distances were relatively shorter on the left side than on the right side and the difference was statistically significant (P= 0.021). However, the actual difference of the two sides was found to be small (Eta squared= 0.03). Similarly, the average distance from the RZA to the pterion on the right and left sides was 3.55 ± 0.04 cm and 3.30 ± 0.05 cm, respectively, and the difference was statistically significant (P= 0.000) with small actual difference between the two sides (Eta squared= 0.076). Comparison of the mean central distances from pterion (P) to various bony landmarks on the two sides The distance between the center of pterion and inion was found to be 12.52 ± 0.07 cm on the right and 12.73 ± 0.05 cm on the left side being shorter on the right side as compared to the left side and the difference was statistically significant (P<0.05) but the effect of distance to the sides was small (Eta squared= 0.031) (Table 2). The central thickness of pterion was found to be 0.59 ± 0.01 cm and 0.41 ± 0.01 cm on the right and left sides, respectively. The left side of pterion was significantly thinner than the corresponding right sides (P<0.05, Eta squared=0.120) (Table 2). Discussion: Surgical approach via pterion has been cited as the most widely implemented approach to the proper management of intracranial anterior circulation aneurysmus. This approach has more advantages than the traditional surgical approach with minor tissue damage, less brain retraction, a superior cosmetic results and a shorter duration of surgery (6). Fundamentally, the knowledge of the pterion sutural patterns and its relationship to the bony landmarks is useful for specialist in various field of medical professions particularly for neurosurgeons (17, 18). The main findings of this study are notably the higher occurrence of the sphenoparietal type of pterion with the absence of frontotemporal type. About 23% and 77% of the suture types are found to be unilateral and bilateral, respectively. Furthermore, the central distances of the pterion to the FZS, RZA and inion, and the thickness at its center had statistically significant difference between the right and left sides of the skull. The incidence of the different types of pteria among a diversified population of different countries were compared (1, 3, 4, 7, 8, 10–12, 14, 19–23) and illustrated in Table 3. Comparison of the percentage of pteria types with other population Type of pterion Wang et al, (2006) explained the influence of environmental and/or genetic factors on the sutural patterns of pterion (13). A longitudinal study done using 368 Australian skulls reported four different stutural patterns of pterion (7). The incidence of the various types of pterion was 73.23%, 7.75%, 18.34% and 0.68% for sphenoparietal, frontotemporal, epipteric and stellate types, respectively. A study conducted in Turkey using 300 dried human skulls stated sphenoparietal type (96%), frontotemporal (3.7%), epipteric (9%) and stellate type (0.2%) (8). 
They noted that the existence of an epipteric or wormian bone at the pterion may complicate surgical orientation, leading to complications during burr hole surgeries such as orbital penetration. In another study on 26 dry adult Turkish skulls, 88% were sphenoparietal, 10% were frontotemporal and 2% were epipteric, while the stellate type was absent (12). Different studies unanimously reported the sphenoparietal type of pterion as the predominant type of suture (3, 4, 10–12, 17, 20, 24). A study done on 52 dried adult Sri Lankan skulls reported the sphenoparietal type (74.04%) as the most common type, followed by the epipteric type (21.15%) and the frontotemporal type (4.181%); similarly, they did not find any stellate variety of pterion in their study (19). A Nigerian study also reported sphenoparietal (77.33%), frontotemporal (8.3%) and stellate (5.6%) patterns of pterion without the epipteric type (1). A study of an Indian population showed the sphenoparietal type (77.33%), epipteric type (21.33%) and stellate type (1.34%), but the frontotemporal type of pterion was not seen (23). In another study done on 149 dried Korean skulls, the most frequent type of pterion was the sphenoparietal type (76.5%), followed by the epipteric type (40.3%), with no occurrence of the stellate or frontotemporal types (20). In the present study, sphenoparietal (84.4%), epipteric (13.3%) and stellate (2.2%) types of pterion were identified. Of the observed patterns of pterion, 23.3% and 76.7% were present unilaterally and bilaterally, respectively, but no frontotemporal variety of pterion was observed. The presence or absence of the frontotemporal pattern of pterion clearly indicates the contribution of genetic factors, with its reported occurrence ranging from zero in a British seventeenth-century cemetery to 9.8% in Nigerian crania (25). The mean distances between the center of the pterion and different landmarks reported in studies conducted elsewhere were compared (1, 3, 9–12, 14, 19, 21, 26, 27) and are depicted in Table 4. A study done on cone beam CT scans of 50 adult craniums and 76 adult dry skulls in New Zealand reported a clinically significant relationship of the anterior division of the middle meningeal artery to the center of the pterion in the Frankfurt plane (21). As reported forty years ago, the pterion is situated 3.0 – 3.5 cm posterior to the FZS (2). The mean distance of the center of the pterion from the posterolateral margin of the FZS in adult dry skulls was 3.1 ± 0.4 cm on the right side and 3.09 ± 0.4 cm on the left side (4). Studies conducted elsewhere reported a mean pterion to FZS distance ranging from 3.0 cm to 3.7 cm (10–12, 14, 27). However, Nigerian (3) and New Zealand (21) studies reported shorter mean center of pterion to FZS distances on the right and left sides (2.74 ± 0.17 cm and 2.7 ± 0.06 cm; 2.6 ± 0.4 cm and 2.5 ± 0.4 cm, respectively). In the present study, the center of the pterion was found to be 2.92 ± 0.05 cm on the right side and 2.75 ± 0.05 cm on the left side above the posterolateral margin of the FZS (P<0.05, Eta squared = 0.03). It has been reported that the center of the pterion lies 3 – 4 cm above the zygomatic arch (2). Similarly, studies conducted in diverse populations reported the mean pterion to RZA distance as presented in Table 4. In this study, the pterion was 3.55 ± 0.04 cm on the right side and 3.30 ± 0.05 cm on the left side superior to the RZA (P<0.05, Eta squared = 0.076).
The lesser wing of the sphenoid bone (LWS) is a common site for meningiomas, and it can be approached through the pterional surgical technique. In this case, the distance between the internal portion of the pterion and the lateral margin of the sphenoid bone is crucial. An Indian study done on 42 dry adult human skulls reported the mean distance between the pterion and the LWS as 1.36 ± 0.35 cm on the right side and 1.33 ± 0.22 cm on the left side (14). A Kenyan study reported that the distance of the pterion center from the lateral margin of the LWS was 1.4 ± 0.33 cm on the right side and 1.48 ± 0.32 cm on the left side (11). However, in the present study, the internal portion of the pterion center lay farther from the lateral margin of the LWS, with a mean distance of 1.69 ± 0.02 cm on the right side and 1.73 ± 0.03 cm on the left side (P>0.05). The discrepancy in the mean distances is probably due to environmental and genetic variability. The pterional approach is also useful for reaching the optic canal, which contains the optic nerve (CN II) and the ophthalmic artery. In such cases, the distance between the internal aspect of the pterion and the optic canal (OC) is decisive. Studies from India and Kenya reported the distance of the internal surface of the pterion center from the OC as 4.52 ± 0.32 cm and 4.39 ± 0.4 cm on the right side and 4.37 ± 0.23 cm and 4.36 ± 0.4 cm on the left side, respectively (11, 14). In the present study, the mean measurement between the internal portion of the pterion center and the OC was 3.84 ± 0.01 cm on the right side and 3.80 ± 0.02 cm on the left side. A very recent study done in Turkey using 75 dry adult skulls reported the mean measurement between the center of the pterion and the inion to be 13.55 ± 0.62 cm in males and 12.62 ± 0.63 cm in females (10). In another report based on skulls of male subjects of the Byzantine period, the distance between the pterion and the inion was found to be 13.80 ± 0.5 cm on the right side and 13.70 ± 0.40 cm on the left side (28). However, in this study, the mean measurement between the pterion and the inion was 12.52 ± 0.07 cm and 12.73 ± 0.05 cm on the right and left sides, respectively (P=0.017, Eta squared = 0.031). A comparative study done on human skulls from the 13th to the 20th century using manual measurements found the mean distance between the pterion and the tip of the mastoid process (TMP) in male subjects to be 8.30 ± 0.34 cm on the right side and 8.50 ± 0.26 cm on the left side (28). In a study done in Turkey, the mean distance between the pterion and the TMP was found to be 8.02 ± 0.6 cm (10). On the other hand, in this study, the mean measurement between the center of the pterion and the TMP was 7.69 ± 0.05 cm on the right side and 7.65 ± 0.05 cm on the left side (P>0.05). Cimen and collaborators (2019) measured the mean distance between the pterion and the external acoustic meatus as 5.71 ± 0.77 cm and 5.34 ± 0.36 cm in male and female skulls, respectively (10). In the present study, the measured mean distance between the center of the pterion and the suprameatal spine was 4.97 ± 0.04 cm on the right side and 5.06 ± 0.03 cm on the left side (P>0.05). The difference may be due to geographical, genetic or methodological variations. A study done in India on 100 dry skulls reported the mean thickness at the center of the pterion to be 0.352 ± 0.145 cm (9).
In addition, Asian studies reported different mean thicknesses at the center of the pterion: 0.513 ± 0.167 cm in Thai skulls (26), 0.39 to 0.41 cm in Turkish skulls (12), and 0.319 ± 0.085 cm in Korean skulls (29). In this study, the central thickness of the pterion was 0.59 ± 0.01 cm on the right side and 0.41 ± 0.01 cm on the left side (P<0.05, Eta squared = 0.12). Clinically, knowledge of the thickness of the pterion is very important for neurosurgeons and can be applied during internal and external neurosurgical fixation procedures. In conclusion, according to the findings of the current study, the sphenoparietal type of suture is the most frequent variety of pterion. The mean distances between the center of the pterion and the FZS, RZA and inion, and the central thickness of the pterion, differed significantly between the right and left sides of the skulls. The findings of this study may be useful for anatomists, neurosurgeons, forensic pathologists and anthropologists in the area of the studied population. Further investigation on skulls of identified sex, age and nationality, particularly Ethiopian skulls, using computed tomography, X-ray and dry human skulls is strongly recommended.
Background: Trauma to the skull in the area of the pterion usually causes rupture of the middle meningeal artery, leading to life-threatening epidural hematoma. The objective of this study was to assess the prevalence of the different types of pterion and to determine its location using valuable bony landmarks. Methods: On 90 dry adult human skulls of unknown sex, age and nationality, the distances of different landmarks from the pterion were measured using a stainless-steel sliding Vernier caliper. The data were analyzed using SPSS version 20, and an independent t-test analysis was applied. A value of P< 0.05 was considered statistically significant. Results: A higher occurrence of the sphenoparietal type of pterion, with the absence of the frontotemporal type, was noted. About 23% and 77% of the suture types were found to be unilateral and bilateral, respectively. There were statistically significant differences between the right and left sides of the skull in the distances from the center of the pterion to the frontozygomatic suture, the root of the zygomatic arch and the inion, and in the central thickness of the pterion. Conclusions: This study showed that the most prevalent type of pterion is sphenoparietal, and revealed asymmetry in the distances from the center of the pterion to the frontozygomatic suture, root of the zygomatic arch and inion, and in its central thickness. Such findings offer worthwhile information about the type and location of the pterion, which could be relevant to anatomists, neurosurgeons, forensic medicine specialists and anthropologists.
null
null
3,587
266
[]
4
[ "pterion", "cm", "study", "type", "skulls", "center", "distance", "left", "mean", "right" ]
[ "approached pterional surgical", "suture internally pterion", "pterional surgical technique", "pterion shaped bony", "pterion important neurosurgeons" ]
null
null
null
[CONTENT] Frontozygomatic suture | inion | Pterion | sphenoid bone | zygomatic arch [SUMMARY]
null
[CONTENT] Frontozygomatic suture | inion | Pterion | sphenoid bone | zygomatic arch [SUMMARY]
null
[CONTENT] Frontozygomatic suture | inion | Pterion | sphenoid bone | zygomatic arch [SUMMARY]
null
[CONTENT] Adult | Cranial Sutures | Ethnicity | Humans | Skull | Zygoma [SUMMARY]
null
[CONTENT] Adult | Cranial Sutures | Ethnicity | Humans | Skull | Zygoma [SUMMARY]
null
[CONTENT] Adult | Cranial Sutures | Ethnicity | Humans | Skull | Zygoma [SUMMARY]
null
[CONTENT] approached pterional surgical | suture internally pterion | pterional surgical technique | pterion shaped bony | pterion important neurosurgeons [SUMMARY]
null
[CONTENT] approached pterional surgical | suture internally pterion | pterional surgical technique | pterion shaped bony | pterion important neurosurgeons [SUMMARY]
null
[CONTENT] approached pterional surgical | suture internally pterion | pterional surgical technique | pterion shaped bony | pterion important neurosurgeons [SUMMARY]
null
[CONTENT] pterion | cm | study | type | skulls | center | distance | left | mean | right [SUMMARY]
null
[CONTENT] pterion | cm | study | type | skulls | center | distance | left | mean | right [SUMMARY]
null
[CONTENT] pterion | cm | study | type | skulls | center | distance | left | mean | right [SUMMARY]
null
[CONTENT] pterion | bones | parietal | location | type | bone | approach | type location pterion | type location | frontal [SUMMARY]
null
[CONTENT] right | pterion | cm | sides | left | difference | 05 | eta squared | table | 05 cm [SUMMARY]
null
[CONTENT] pterion | cm | type | right | center | distance | left | study | sides | center pterion [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
null
[CONTENT] ||| About 23% and 77% ||| [SUMMARY]
null
[CONTENT] ||| ||| 90 ||| SPSS ||| ||| ||| ||| About 23% and 77% ||| ||| ||| [SUMMARY]
null
Au-Pt Nanoparticle Formulation as a Radiosensitizer for Radiotherapy with Dual Effects.
33469284
Radiotherapy occupies an essential position as one of the most significant approaches for the clinical treatment of cancer. However, X-rays suffer from an inherent shortcoming: a high oxygen enhancement ratio (OER). Radiosensitizers, which enhance the radiosensitivity of tumor cells, provide an alternative to switching from X-rays to proton or heavy-ion radiotherapy.
BACKGROUND
We prepared the Au-Pt nanoparticles (Au-Pt NPs) using a one-step method. The characteristics of the Au-Pt NPs were determined using TEM, HAADF-STEM, elemental mapping images, and DLS. The enhanced radiotherapy was demonstrated in vitro using MTT assays, colony formation assays, fluorescence imaging, and flow cytometric analyses of apoptosis. The biodistribution of the Au-Pt NPs was analyzed using ICP-OES and thermal images. The enhanced radiotherapy was demonstrated in vivo using immunofluorescence images, tumor volume and weight, and hematoxylin & eosin (H&E) staining.
MATERIALS AND METHODS
Polyethylene glycol (PEG)-functionalized nanoparticles composed of the metallic elements Au and Pt were designed to increase synergistic radiosensitivity. Mechanistically, the heavy-metal NPs possess a high X-ray photon capture cross-section and a Compton scattering effect, which increased DNA damage. Furthermore, the Au-Pt NPs exhibited enzyme-mimicking activities by catalyzing the decomposition of endogenous H2O2 to O2 in the solid tumor microenvironment (TME).
RESULTS
Our work provides a systemically administered radiosensitizer that can selectively accumulate in a tumor via the EPR effect and enhances the efficiency of treating cancer with radiotherapy.
CONCLUSION
[ "Animals", "Catalysis", "Cell Line, Tumor", "Cell Survival", "Female", "Gold", "Human Umbilical Vein Endothelial Cells", "Humans", "Metal Nanoparticles", "Mice", "Mice, Inbred BALB C", "Neoplasms", "Platinum", "Radiation-Sensitizing Agents", "Tissue Distribution", "Tumor Microenvironment" ]
7811476
Introduction
Radiation therapy (radiotherapy, RT) is one of the most effective therapies available for the treatment of localized solid cancers.1 During radiotherapy of a malignant tumor, biological macromolecules of the tumor cells directly interact with the radiation to generate free radicals, which cause damage to the DNA such as strand breaks, point mutations, and aberrant DNA cross-linking.2–4 In addition, the radiation causes water molecules in the tissues to ionize and produce free radicals, which in turn interact with the biological macromolecules to cause cell damage and death.5 Theoretically, almost all types of tumors could be effectively controlled if the local radiation dose were high enough.6 However, a high dose of radiation is highly toxic to healthy tissues, causing side effects.7 Therefore, in addition to the new radiation-delivery techniques that have improved the therapeutic effect, highly effective, low-toxicity radiosensitizers are necessary. Improvements in the physical technology, for example brachytherapy, intensity-modulated RT, image-guided RT, and stereotactic RT, have greatly improved radiation therapy.8–11 However, many problems still remain, such as the complex tumor microenvironment containing cancer stem cells, vasculogenesis, and metabolic alterations.12–14 In addition to tumor size, the intrinsic radiosensitivity, cell proliferation, and the extent of hypoxia are important factors for controlling the tumor.15 Hypoxia is one of the most important indicators and is related to resistance to cancer therapy.16 Hypoxia activates the Nrf2 pathway, which leads to anti-oxidant and anti-inflammatory responses with an aggregation of M2 macrophages and MDSCs that protect the tumor.5 Owing to the hypoxic microenvironment of the tumor, the therapeutic efficiency of chemotherapy, radiotherapy and other modalities is reduced. In recent years, different strategies, including physical stimulation (e.g. fractionated irradiation) and chemical reactions, have been developed for relieving tumor hypoxia.17,18 However, it is difficult to prevent the growth of a tumor using monotherapy. Synergistic therapy, with its reduced side effects and enhanced therapeutic effect, is therefore used to overcome the insufficient tumor-suppression effect of monotherapy. Radiosensitizers are molecules or materials with the ability to enhance the radiosensitivity of tumor cells and provide opportunities to overcome the obstacles mentioned above.2 There are many kinds of radiosensitizers, namely small-molecule chemicals, macromolecules, and nanostructures.19 One of the most important mechanisms for increasing radiosensitivity is to change the hypoxic microenvironment via different kinds of radiosensitizers.
Many tumors are resistant to radiation therapy because oxygen is lacking within the tumor due to abnormal or dysfunctional blood vessels.20 This state of hypoxia leads to less DNA damage at the same radiation dose.21 Therefore, oxygen is a well-known radiosensitizer.4 Electron-affinic chemicals and hypoxia-targeting cytotoxins were developed for clinical use because, by virtue of their electron affinity, they can mimic oxygen's capacity for damage fixation.22,23 Macromolecules are able to regulate radiosensitivity by binding to DNA, miRNAs, mRNAs, siRNAs, proteins or peptides.24 Recently, the introduction of nanotechnology has provided momentum and expanded the horizon for the development of radiosensitizers.25 In recent years, heavy-metal nanomaterials with high atomic numbers (Z) have shown promise as radiosensitizers because they can absorb, scatter, and emit radiation energy.26–28 Gold nanomaterials (AuNPs) are excellent candidates because they have tunable morphologies, are easy to modify, have good biological safety, and have exceptional radiation sensitization capabilities.29–32 Furthermore, AuNPs are well suited to drug delivery and cancer therapy.33 In addition, various catalase-like nanomaterials have been developed for catalyzing the decomposition of hydrogen peroxide (H2O2) into oxygen, which alleviates tumor hypoxia and overcomes hypoxia-induced radioresistance.34 Therefore, developing a simple and effective strategy for relieving the hypoxic tumor microenvironment is essential for improving the therapeutic efficacy of radiotherapy. In the present study, we designed a nanoparticle composed of the metallic elements Au and Pt (Au-Pt NPs). The Au-Pt NPs exhibited enzyme-mimicking activities, which resulted in synergistic radiosensitivity for reinforced tumor therapy. The new Au-Pt nanozymes were synthesized in one step at room temperature. The nanoparticles produced had a uniform nanosphere shape and were very stable in physiological solutions. After intravenous administration, the Au-Pt NPs passively targeted the tumor and rapidly accumulated in it because of the enhanced permeability and retention (EPR) effect. Not only do the Au-Pt nanozymes possess a radiosensitizing ability that enhances the energy deposition of the X-rays, but they are also able to catalyze the reaction that converts H2O2 to O2. In addition to attenuating tumor hypoxia, the Au-Pt NPs significantly inhibited tumor growth under 8 Gy of X-ray irradiation compared to the control group. This demonstrated that the Au-Pt NPs improved the therapeutic efficiency of radiotherapy. Therefore, our work developed a novel low-toxicity radiosensitizer for enhanced tumor radiotherapy that works by alleviating the hypoxic tumor microenvironment. Our findings highlight the great potential of applying Au-Pt NPs in the future clinical treatment of cancer.
null
null
null
null
Conclusions
In summary, we developed Au-Pt NPs via a simple method; the particles simultaneously possess metal nanoparticle-mediated radiosensitization and enzyme-like catalytic activities. A general mechanism in nanoparticle-based radiotherapy was uncovered, in that heavy-metal NPs possess a high X-ray photon capture cross-section and a Compton scattering effect, which increased the DNA damage in tumor cells. Furthermore, our Au-Pt NPs increased the decomposition of H2O2 in the TME, improved the hypoxic status of the tumors treated with X-rays, and enhanced the efficacy of radiotherapy. More importantly, in vitro and in vivo data showed that the Au-Pt NPs could effectively inhibit tumor growth with no significant side effects on normal tissues and organs. Therefore, our work developed a novel low-toxicity radiosensitizer for enhanced tumor radiotherapy via attenuation of hypoxia. All completed clinical trials have demonstrated the safety of gold-based nanomaterials.40 In a recent clinical trial, gold-silica nanoshells (GSNs) were designed and used for ablating prostate tumors in patients without any obvious treatment-related side effects.41 Our findings further highlight the great potential of applying Au-Pt NPs in the future clinical treatment of cancers.
[ "Materials and Methods", "Synthesis of Au-Pt NPs", "Determining the Characteristics of the Au-Pt NPs", "Detection of O2", "Cell Line and Animals", "In vitro Cell Experiments", "Pharmacokinetic Parameters and Biodistribution in vivo", "Antitumor Efficacy in vivo", "Results and Discussion", "Synthesis and Characterization", "In vitro Enhanced Radiotherapy", "Pharmacokinetics and Biodistribution", "In vivo Enhanced Radiotherapy", "Conclusions" ]
[ "Synthesis of Au-Pt NPs L-proline 0.0384 g, 20 μL of HAuCl4 and H2PtCl6 solution (1mol/L) were dispersed in 10 mL of deionized water. The pH value of the solution was adjusted to 9 by adding NaOH. Next, ascorbic acid (10 mmol L−1) was added drop by drop. The products were obtained by allowing the solution to react for 20 min at room temperature. Finally, C18PMH-PEG (10 mg) was added to 10 mL of an Au-Pt solution. Au-Pt-PEG-PMH was obtained by stirring the solution for 24 h at room temperature. After 2 h of ultrasonication, the mixture was dried by rotary evaporator. Ten mL of deionized water was added to dissolve the solid which yielded PEGylated Au-Pt nanoparticles (Au-Pt NPs). The Au-Pt nanoparticle that were obtained were further purified through centrifugation at 6000 rpm for 10 min to remove any incompatible nanomaterials. In addition, the Au-Pt NPs were washed three times through ultrafiltration filters with 100 kDa MWCO to remove any unbound PEG.\nL-proline 0.0384 g, 20 μL of HAuCl4 and H2PtCl6 solution (1mol/L) were dispersed in 10 mL of deionized water. The pH value of the solution was adjusted to 9 by adding NaOH. Next, ascorbic acid (10 mmol L−1) was added drop by drop. The products were obtained by allowing the solution to react for 20 min at room temperature. Finally, C18PMH-PEG (10 mg) was added to 10 mL of an Au-Pt solution. Au-Pt-PEG-PMH was obtained by stirring the solution for 24 h at room temperature. After 2 h of ultrasonication, the mixture was dried by rotary evaporator. Ten mL of deionized water was added to dissolve the solid which yielded PEGylated Au-Pt nanoparticles (Au-Pt NPs). The Au-Pt nanoparticle that were obtained were further purified through centrifugation at 6000 rpm for 10 min to remove any incompatible nanomaterials. In addition, the Au-Pt NPs were washed three times through ultrafiltration filters with 100 kDa MWCO to remove any unbound PEG.\nDetermining the Characteristics of the Au-Pt NPs The hydrodynamic diameters and Zeta potentials of the Au-Pt NPs suspended in PBS and 10% FBS were measured by dynamic light scattering (DLS) (Malvern Instruments, UK). The morphology was determined using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), and the characteristics were described from elemental mapping images using a transmission electron microscope (TEM, JEM-1230, Japan).\nThe hydrodynamic diameters and Zeta potentials of the Au-Pt NPs suspended in PBS and 10% FBS were measured by dynamic light scattering (DLS) (Malvern Instruments, UK). The morphology was determined using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), and the characteristics were described from elemental mapping images using a transmission electron microscope (TEM, JEM-1230, Japan).\nDetection of O2 To investigate the efficiency with which oxygen is generated in vitro, the Au-Pt NPs (50 μg/mL), H2O2 (2 mmol/L) were added to 3 mL of PBS (pH 6.75) at room temperature. An oxygen probe (JPBJ-608 portable Dissolved Oxygen Meters, Shanghai REX Instrument Factory) was used to measure the dissolved oxygen concentration every minute for 18 minutes.\nTo investigate the efficiency with which oxygen is generated in vitro, the Au-Pt NPs (50 μg/mL), H2O2 (2 mmol/L) were added to 3 mL of PBS (pH 6.75) at room temperature. 
An oxygen probe (JPBJ-608 portable Dissolved Oxygen Meters, Shanghai REX Instrument Factory) was used to measure the dissolved oxygen concentration every minute for 18 minutes.\nCell Line and Animals Cells of the human vascular endothelial cell line HUEVC and murine breast cancer cell line 4T1 were provided by the central laboratory of Taizhou People’s Hospital (purchased from the Cell Bank, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai) and grown in the recommended cell culture medium under the standard conditions. Female BALB/c mice (6–8 weeks) were maintained according to the protocols approved by the Nantong University Laboratory Animal Center. All animal protocols were in accordance with the National Institute’s Health Guide for the Care and Use of Laboratory Animals.\nCells of the human vascular endothelial cell line HUEVC and murine breast cancer cell line 4T1 were provided by the central laboratory of Taizhou People’s Hospital (purchased from the Cell Bank, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai) and grown in the recommended cell culture medium under the standard conditions. Female BALB/c mice (6–8 weeks) were maintained according to the protocols approved by the Nantong University Laboratory Animal Center. All animal protocols were in accordance with the National Institute’s Health Guide for the Care and Use of Laboratory Animals.\nIn vitro Cell Experiments The cell cytotoxicity was determined by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay (Sigma) following the standard protocol. The surviving fraction (SF) of the clone formation assay was calculated after the cells were irradiated with 0, 2, 4, 6 or 8 Gy X-rays and kept in culture for 10 days. Fluorescence images were obtained of the phosphorylated H2AX (γH2AX) in the 4T1 murine breast cancer cells after irradiation for 24 h and treated with Au-Pt NPs. The radiation source was a RS 2000 with 0.3 mm copper filter, irradiating at 160 kV at 25 mA and a dose rate of 1.203 Gy/min. The fluorescence images were analyzed using Living Imaging software. Live/dead cell discrimination was performed using the Live/Dead Fixable Aqua Dead Cell Stain Kit (Life Technologies) or the Sytox Red Dead Cell Stain (Life Technologies). Cell surface staining was done for 20–30 minutes. Cells were spun down and stained with a 1:200 final concentration of the antibodies in 50% of a 2.4G2 (Fc block) and 50% of a FACS Buffer (PBS + 1% FBS, 2mM EDTA) for 15 min. Flow cytometric analyses were performed to determine the percentage of apoptotic cells. Cells suspended in PBS containing 1% FBS were incubated with anti-mouse antibody against Annexin V-FITC and PI for 20 min at room temperature in the dark and then evaluated using a flow cytometer. All flow cytometry analyses were performed using an LSRFortessa (BD Biosciences) or an LSR-II (BD Biosciences). The flow data were analyzed using FlowJo v.10 (Tree Star, Inc.).\nThe cell cytotoxicity was determined by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay (Sigma) following the standard protocol. The surviving fraction (SF) of the clone formation assay was calculated after the cells were irradiated with 0, 2, 4, 6 or 8 Gy X-rays and kept in culture for 10 days. Fluorescence images were obtained of the phosphorylated H2AX (γH2AX) in the 4T1 murine breast cancer cells after irradiation for 24 h and treated with Au-Pt NPs. 
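As an aside on the colony-formation readout described above, surviving fractions measured at graded doses (0, 2, 4, 6 and 8 Gy here) are commonly summarized with the linear-quadratic model, SF(D) = exp(-(αD + βD²)). The snippet below is a minimal, hypothetical Python sketch of such a fit using SciPy; the surviving-fraction values are placeholders, not data from this study.

```python
# Hedged sketch: fitting the linear-quadratic (LQ) model SF(D) = exp(-(a*D + b*D^2))
# to clonogenic surviving fractions. Values below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

doses = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # Gy
sf    = np.array([1.0, 0.62, 0.30, 0.11, 0.035])      # hypothetical surviving fractions

def lq(dose, alpha, beta):
    return np.exp(-(alpha * dose + beta * dose**2))

(alpha, beta), _ = curve_fit(lq, doses, sf, p0=(0.2, 0.02))
print(f"alpha = {alpha:.3f} 1/Gy, beta = {beta:.4f} 1/Gy^2, alpha/beta = {alpha/beta:.1f} Gy")

# A sensitizer enhancement ratio could then be estimated by comparing the dose
# needed for a given surviving fraction with and without nanoparticles.
```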
The radiation source was a RS 2000 with 0.3 mm copper filter, irradiating at 160 kV at 25 mA and a dose rate of 1.203 Gy/min. The fluorescence images were analyzed using Living Imaging software. Live/dead cell discrimination was performed using the Live/Dead Fixable Aqua Dead Cell Stain Kit (Life Technologies) or the Sytox Red Dead Cell Stain (Life Technologies). Cell surface staining was done for 20–30 minutes. Cells were spun down and stained with a 1:200 final concentration of the antibodies in 50% of a 2.4G2 (Fc block) and 50% of a FACS Buffer (PBS + 1% FBS, 2mM EDTA) for 15 min. Flow cytometric analyses were performed to determine the percentage of apoptotic cells. Cells suspended in PBS containing 1% FBS were incubated with anti-mouse antibody against Annexin V-FITC and PI for 20 min at room temperature in the dark and then evaluated using a flow cytometer. All flow cytometry analyses were performed using an LSRFortessa (BD Biosciences) or an LSR-II (BD Biosciences). The flow data were analyzed using FlowJo v.10 (Tree Star, Inc.).\nPharmacokinetic Parameters and Biodistribution in vivo At different points in time, post injection of the Au-Pt NPs, 20 μL of blood was drawn from the right orbital venous plexus for the Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis. The ICP-OES signals in blood samples were measured using a gamma counter. At 24 h post injection, the mice were sacrificed, and all major organs and tumors were collected. Then, fluorescence of organs and tumors were measured using a gamma counter. In vivo CT imaging, 100 μL of PBS and 100 μL of Au-Pt NPs (20 mg/kg) were injected into the tumor site in situ. Mice were anesthetized and scanned using a Philips CT imager. CT images were obtained from the Philips 256 CT scanning system.\nAt different points in time, post injection of the Au-Pt NPs, 20 μL of blood was drawn from the right orbital venous plexus for the Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis. The ICP-OES signals in blood samples were measured using a gamma counter. At 24 h post injection, the mice were sacrificed, and all major organs and tumors were collected. Then, fluorescence of organs and tumors were measured using a gamma counter. In vivo CT imaging, 100 μL of PBS and 100 μL of Au-Pt NPs (20 mg/kg) were injected into the tumor site in situ. Mice were anesthetized and scanned using a Philips CT imager. CT images were obtained from the Philips 256 CT scanning system.\nAntitumor Efficacy in vivo The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray; (d) Au-Pt NPs + X-ray. When the tumor volume reached 60 mm3 at 24 h post-injection of PBS and Au-Pt NPs (20 mg/kg), the mice were exposed to X-ray (8 Gy). The tumor sizes and weight were recorded every other day after treatment. The mice were defined as dead when the tumor volume exceeded 1000 mm3 (Volume = length*width2/2). These mice were sacrificed on day 16 to collect the tumors for the hypoxia staining and crucial organs for the hematoxylin & eosin (H&E) staining.\nThe tumor tissues were removed for the frozen section. The sections were incubated overnight with an anti-pimonidazole mouse monoclonal antibody (100 times dilution, Hypoxyprobe Inc.). Then, an Alexa Fluo 488 conjugated goat-anti-mouse antibody (100 times dilution, Jackson Inc.) was used as the secondary antibody to combine with the primary antibody. 
All sections were analyzed and imaged by CLSM (OLYMPUS).\nThe 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray; (d) Au-Pt NPs + X-ray. When the tumor volume reached 60 mm3 at 24 h post-injection of PBS and Au-Pt NPs (20 mg/kg), the mice were exposed to X-ray (8 Gy). The tumor sizes and weight were recorded every other day after treatment. The mice were defined as dead when the tumor volume exceeded 1000 mm3 (Volume = length*width2/2). These mice were sacrificed on day 16 to collect the tumors for the hypoxia staining and crucial organs for the hematoxylin & eosin (H&E) staining.\nThe tumor tissues were removed for the frozen section. The sections were incubated overnight with an anti-pimonidazole mouse monoclonal antibody (100 times dilution, Hypoxyprobe Inc.). Then, an Alexa Fluo 488 conjugated goat-anti-mouse antibody (100 times dilution, Jackson Inc.) was used as the secondary antibody to combine with the primary antibody. All sections were analyzed and imaged by CLSM (OLYMPUS).", "L-proline 0.0384 g, 20 μL of HAuCl4 and H2PtCl6 solution (1mol/L) were dispersed in 10 mL of deionized water. The pH value of the solution was adjusted to 9 by adding NaOH. Next, ascorbic acid (10 mmol L−1) was added drop by drop. The products were obtained by allowing the solution to react for 20 min at room temperature. Finally, C18PMH-PEG (10 mg) was added to 10 mL of an Au-Pt solution. Au-Pt-PEG-PMH was obtained by stirring the solution for 24 h at room temperature. After 2 h of ultrasonication, the mixture was dried by rotary evaporator. Ten mL of deionized water was added to dissolve the solid which yielded PEGylated Au-Pt nanoparticles (Au-Pt NPs). The Au-Pt nanoparticle that were obtained were further purified through centrifugation at 6000 rpm for 10 min to remove any incompatible nanomaterials. In addition, the Au-Pt NPs were washed three times through ultrafiltration filters with 100 kDa MWCO to remove any unbound PEG.", "The hydrodynamic diameters and Zeta potentials of the Au-Pt NPs suspended in PBS and 10% FBS were measured by dynamic light scattering (DLS) (Malvern Instruments, UK). The morphology was determined using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), and the characteristics were described from elemental mapping images using a transmission electron microscope (TEM, JEM-1230, Japan).", "To investigate the efficiency with which oxygen is generated in vitro, the Au-Pt NPs (50 μg/mL), H2O2 (2 mmol/L) were added to 3 mL of PBS (pH 6.75) at room temperature. An oxygen probe (JPBJ-608 portable Dissolved Oxygen Meters, Shanghai REX Instrument Factory) was used to measure the dissolved oxygen concentration every minute for 18 minutes.", "Cells of the human vascular endothelial cell line HUEVC and murine breast cancer cell line 4T1 were provided by the central laboratory of Taizhou People’s Hospital (purchased from the Cell Bank, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai) and grown in the recommended cell culture medium under the standard conditions. Female BALB/c mice (6–8 weeks) were maintained according to the protocols approved by the Nantong University Laboratory Animal Center. 
All animal protocols were in accordance with the National Institute’s Health Guide for the Care and Use of Laboratory Animals.", "The cell cytotoxicity was determined by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay (Sigma) following the standard protocol. The surviving fraction (SF) of the clone formation assay was calculated after the cells were irradiated with 0, 2, 4, 6 or 8 Gy X-rays and kept in culture for 10 days. Fluorescence images were obtained of the phosphorylated H2AX (γH2AX) in the 4T1 murine breast cancer cells after irradiation for 24 h and treated with Au-Pt NPs. The radiation source was a RS 2000 with 0.3 mm copper filter, irradiating at 160 kV at 25 mA and a dose rate of 1.203 Gy/min. The fluorescence images were analyzed using Living Imaging software. Live/dead cell discrimination was performed using the Live/Dead Fixable Aqua Dead Cell Stain Kit (Life Technologies) or the Sytox Red Dead Cell Stain (Life Technologies). Cell surface staining was done for 20–30 minutes. Cells were spun down and stained with a 1:200 final concentration of the antibodies in 50% of a 2.4G2 (Fc block) and 50% of a FACS Buffer (PBS + 1% FBS, 2mM EDTA) for 15 min. Flow cytometric analyses were performed to determine the percentage of apoptotic cells. Cells suspended in PBS containing 1% FBS were incubated with anti-mouse antibody against Annexin V-FITC and PI for 20 min at room temperature in the dark and then evaluated using a flow cytometer. All flow cytometry analyses were performed using an LSRFortessa (BD Biosciences) or an LSR-II (BD Biosciences). The flow data were analyzed using FlowJo v.10 (Tree Star, Inc.).", "At different points in time, post injection of the Au-Pt NPs, 20 μL of blood was drawn from the right orbital venous plexus for the Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis. The ICP-OES signals in blood samples were measured using a gamma counter. At 24 h post injection, the mice were sacrificed, and all major organs and tumors were collected. Then, fluorescence of organs and tumors were measured using a gamma counter. In vivo CT imaging, 100 μL of PBS and 100 μL of Au-Pt NPs (20 mg/kg) were injected into the tumor site in situ. Mice were anesthetized and scanned using a Philips CT imager. CT images were obtained from the Philips 256 CT scanning system.", "The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray; (d) Au-Pt NPs + X-ray. When the tumor volume reached 60 mm3 at 24 h post-injection of PBS and Au-Pt NPs (20 mg/kg), the mice were exposed to X-ray (8 Gy). The tumor sizes and weight were recorded every other day after treatment. The mice were defined as dead when the tumor volume exceeded 1000 mm3 (Volume = length*width2/2). These mice were sacrificed on day 16 to collect the tumors for the hypoxia staining and crucial organs for the hematoxylin & eosin (H&E) staining.\nThe tumor tissues were removed for the frozen section. The sections were incubated overnight with an anti-pimonidazole mouse monoclonal antibody (100 times dilution, Hypoxyprobe Inc.). Then, an Alexa Fluo 488 conjugated goat-anti-mouse antibody (100 times dilution, Jackson Inc.) was used as the secondary antibody to combine with the primary antibody. All sections were analyzed and imaged by CLSM (OLYMPUS).", "Synthesis and Characterization The synthesis procedures are shown in Scheme 1. 
The Au-Pt NPs were synthesized through a co-reduction of H2PtCl6 and HAuCl4 by L-ascorbic acid as a chelating agent to slow down the kinetics of the reduction reaction in a one-pot method. To improve the biocompatibility of the Au-Pt NPs, polyethylene glycol (maleic anhydridealt-1-octadecene) (C18-PMH-PEG5k) was used to modify it via stacking. According to the TEM mages (Figure 1A), the Au-Pt NPs were uniformly dispersed and formed spherical aggregations with a diameter of about 150 ± 20 nm. The ultraviolet absorption spectrum showed that there was no characteristic peak of Au-Pt NPs (Figure S1). The energy-dispersive spectroscopy (EDS) elemental mapping images (Figure 1B) showed that the Au and Pt atoms were uniformly distributed in the spherical aggregations. Au and Pt as high atomic number elements, would absorb, scatter, and emit radiation energy to enhance the radiation dose deposition at the interface of the surrounding tissue.35 The prepared Au-Pt NPs maintained uniform dispersion and good stability in 10% fetal bovine serum (FBS), which indicated potential for its biological application (Figure 1C and E). The Zeta potential of Au-Pt (−18.9 mV) and Au-Pt NPs (−21.1 mV) were similar (Figure 1D).\nScheme 1Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.Figure 1The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nSchematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.\nThe schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nMore importantly, we found that Au-Pt NPs catalyzed the reaction of H2O2 to O2. After different concentrations of Au-Pt were added to H2O2 solutions (pH 6.75) the change in the oxygen concentration was measured using a portable dissolved oxygen meter. The oxygen content did not change in the presence of only H2O2. However, the oxygen production rate increased gradually with an increase in the concentration of Au-Pt, which indicated the excellent CAT-like activity of Au-Pt NPs (Figures 1F and S2). Excessive reactive O2 species (such as H2O2 derived by cancer cells) is one of the oxidative stresses within the tumor microenvironment. The triggering off hypoxia stimulates autophagy and glycolysis through HIF1α and NFκB signaling, which promotes tumor growth and metastasis.3,36 Increasing the decomposition of H2O2 in TME by various nanomaterials would increase the therapeutic effect of treating tumors with phototherapy, chemodynamic therapy, and radiotherapy.37,38\nThe synthesis procedures are shown in Scheme 1. 
The Au-Pt NPs were synthesized through a co-reduction of H2PtCl6 and HAuCl4 by L-ascorbic acid as a chelating agent to slow down the kinetics of the reduction reaction in a one-pot method. To improve the biocompatibility of the Au-Pt NPs, polyethylene glycol (maleic anhydridealt-1-octadecene) (C18-PMH-PEG5k) was used to modify it via stacking. According to the TEM mages (Figure 1A), the Au-Pt NPs were uniformly dispersed and formed spherical aggregations with a diameter of about 150 ± 20 nm. The ultraviolet absorption spectrum showed that there was no characteristic peak of Au-Pt NPs (Figure S1). The energy-dispersive spectroscopy (EDS) elemental mapping images (Figure 1B) showed that the Au and Pt atoms were uniformly distributed in the spherical aggregations. Au and Pt as high atomic number elements, would absorb, scatter, and emit radiation energy to enhance the radiation dose deposition at the interface of the surrounding tissue.35 The prepared Au-Pt NPs maintained uniform dispersion and good stability in 10% fetal bovine serum (FBS), which indicated potential for its biological application (Figure 1C and E). The Zeta potential of Au-Pt (−18.9 mV) and Au-Pt NPs (−21.1 mV) were similar (Figure 1D).\nScheme 1Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.Figure 1The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nSchematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.\nThe schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nMore importantly, we found that Au-Pt NPs catalyzed the reaction of H2O2 to O2. After different concentrations of Au-Pt were added to H2O2 solutions (pH 6.75) the change in the oxygen concentration was measured using a portable dissolved oxygen meter. The oxygen content did not change in the presence of only H2O2. However, the oxygen production rate increased gradually with an increase in the concentration of Au-Pt, which indicated the excellent CAT-like activity of Au-Pt NPs (Figures 1F and S2). Excessive reactive O2 species (such as H2O2 derived by cancer cells) is one of the oxidative stresses within the tumor microenvironment. 
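Before moving on from the oxygen-generation data, note that the catalase-like activity described above is usually quantified from the dissolved-oxygen readings recorded every minute. A minimal, hypothetical Python sketch of that reduction is shown below; it simply estimates an initial O2 generation rate by linear regression over the early time points, and the readings are placeholders rather than the study's measurements.

```python
# Hedged sketch: estimating an initial O2 generation rate from dissolved-oxygen
# readings taken once per minute (placeholder values, mg/L).
import numpy as np

t_min = np.arange(0, 19)                                # 0-18 min, one reading per minute
o2    = 6.8 + 1.9 * (1 - np.exp(-t_min / 6.0))          # hypothetical saturating curve

# Initial rate from a straight-line fit over the first 5 minutes.
slope, intercept = np.polyfit(t_min[:6], o2[:6], 1)
print(f"initial O2 generation rate ~ {slope:.2f} mg/L per min")

# Repeating this for several nanoparticle concentrations would show whether the
# rate scales with the amount of catalyst, as reported for the Au-Pt NPs.
```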
The triggering off hypoxia stimulates autophagy and glycolysis through HIF1α and NFκB signaling, which promotes tumor growth and metastasis.3,36 Increasing the decomposition of H2O2 in TME by various nanomaterials would increase the therapeutic effect of treating tumors with phototherapy, chemodynamic therapy, and radiotherapy.37,38\nIn vitro Enhanced Radiotherapy According to the MTT assay, the Au-Pt NPs exhibited negligible toxicity to human umbilical endothelial vein cells (HUEVC) and 4T1 cells at different concentrations of Au-Pt NPs at 24 h after treatment (Figure 2A). Furthermore, X-Rays alone (0–4 Gy) also exhibited limited toxicity to HUEVC cells and 4T1 cells (Figure 2B). On the other hand, the results showed that cells cultured without Au-Pt NPs exhibited excellent viability after irradiation with X-rays. The MTT assay revealed that Au-Pt NPs are an excellent radiosensitizer for X-ray-triggered in vitro cancer RT (Figure 2C). The relative cell viability of the 4T1 cells after irradiation and 24 h after treatment with Au-Pt NPs decreased greatly compared to X-Ray treatment alone. When combined with Au-Pt NPs treatment, the relative cell viability of groups irradiated with X-rays (4 Gy) decreased to 45% (with 100μg/mL Au-Pt NPs) and 22% (with 200μg/mL Au-Pt NPs). The colony formation assay revealed that at the same X-ray irradiation intensity, the clone formation numbers were reduced in the presence of Au-Pt NPs, which demonstrated its enhanced antigrowth effects (Figure 2D). The double-strand DNA breaks were evaluated by γ-H2AX staining because these breaks are considered to be the major cause of X-ray induced cell death.39 Low signals of the γ-H2AX foci were detected for cells incubated with PBS and Au-Pt NPs alone. In contrast, a higher level of DNA damage was observed for cells irradiated with X-rays. However, cells irradiated with X-rays and treated with Au-Pt NPs appeared to have a higher level of DNA damage, which was further evidencing of the radio-sensitizing function of our nanoparticles (Figure 2E). All of these assays indicated that the cooperation of Au-Pt NPs and irradiation with X-rays can inhibit the proliferation of cancer cells.Figure 2Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nRelative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nFurthermore, the pathways of cell death were assessed using live/dead dye and the Annexin V-FITC Apoptosis Detection Kit. The 4T1 cells were stained with live/dead dye as described in the surface marker staining protocol above. The signals of the PI (red) showed a dramatic increase in tumors injected with Au-Pt NPs and irradiated with X-rays compared to the tumors treated with FBS and Au-Pt NPs alone (Figure 3A). 
A flow cytometry-based apoptosis assay was performed to determine whether there is synergy between the Au-Pt NPs and X-rays in affecting apoptosis. The apoptosis/necrosis of the 4T1 cells treated with PBS, and X-rays were about 9.12% and 25%, respectively. The apoptosis/necrosis of the combination therapy group was more than 37%. Our results showed that Au-Pt NPs significantly affect apoptosis of 4T1 cells treated with X-rays compared with the control groups (Figure 3B and C).Figure 3(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\n(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\nAccording to the MTT assay, the Au-Pt NPs exhibited negligible toxicity to human umbilical endothelial vein cells (HUEVC) and 4T1 cells at different concentrations of Au-Pt NPs at 24 h after treatment (Figure 2A). Furthermore, X-Rays alone (0–4 Gy) also exhibited limited toxicity to HUEVC cells and 4T1 cells (Figure 2B). On the other hand, the results showed that cells cultured without Au-Pt NPs exhibited excellent viability after irradiation with X-rays. The MTT assay revealed that Au-Pt NPs are an excellent radiosensitizer for X-ray-triggered in vitro cancer RT (Figure 2C). The relative cell viability of the 4T1 cells after irradiation and 24 h after treatment with Au-Pt NPs decreased greatly compared to X-Ray treatment alone. When combined with Au-Pt NPs treatment, the relative cell viability of groups irradiated with X-rays (4 Gy) decreased to 45% (with 100μg/mL Au-Pt NPs) and 22% (with 200μg/mL Au-Pt NPs). The colony formation assay revealed that at the same X-ray irradiation intensity, the clone formation numbers were reduced in the presence of Au-Pt NPs, which demonstrated its enhanced antigrowth effects (Figure 2D). The double-strand DNA breaks were evaluated by γ-H2AX staining because these breaks are considered to be the major cause of X-ray induced cell death.39 Low signals of the γ-H2AX foci were detected for cells incubated with PBS and Au-Pt NPs alone. In contrast, a higher level of DNA damage was observed for cells irradiated with X-rays. However, cells irradiated with X-rays and treated with Au-Pt NPs appeared to have a higher level of DNA damage, which was further evidencing of the radio-sensitizing function of our nanoparticles (Figure 2E). All of these assays indicated that the cooperation of Au-Pt NPs and irradiation with X-rays can inhibit the proliferation of cancer cells.Figure 2Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nRelative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. 
(E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nFurthermore, the pathways of cell death were assessed using live/dead dye and the Annexin V-FITC Apoptosis Detection Kit. The 4T1 cells were stained with live/dead dye as described in the surface marker staining protocol above. The signals of the PI (red) showed a dramatic increase in tumors injected with Au-Pt NPs and irradiated with X-rays compared to the tumors treated with FBS and Au-Pt NPs alone (Figure 3A). A flow cytometry-based apoptosis assay was performed to determine whether there is synergy between the Au-Pt NPs and X-rays in affecting apoptosis. The apoptosis/necrosis of the 4T1 cells treated with PBS, and X-rays were about 9.12% and 25%, respectively. The apoptosis/necrosis of the combination therapy group was more than 37%. Our results showed that Au-Pt NPs significantly affect apoptosis of 4T1 cells treated with X-rays compared with the control groups (Figure 3B and C).Figure 3(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\n(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\nPharmacokinetics and Biodistribution At different points in time post the intravenous injection (i.v.) of Au-Pt NPs, mouse blood samples were collected and the radioactive counts measured (Figure 4A). The nanoparticles were demonstrated to have a long blood circulation half-life (t1/2 = 0.5 h). The ICP-MS signals were observed in many organs, especially in the liver of mice, because of the clearance of the nanoparticles by the reticuloendothelial system (RES) (Figure 4B). The tumor uptake of the Au-Pt NPs was determined to be 6.5% ID/g at 24 h post injection. We used CT imaging to study the in vivo behaviors of the Au-Pt NPs because they exhibited the ability to be detected with CT fluorescence imaging (Figure S3). Mice bearing 4T1 tumors were thus injected intravenously with Au-Pt NPs, then imaged with a CT imaging system. The time-dependent increase of the CT signals (Figure 4C) was observed in the tumor after the intravenous injection with Au-Pt NPs, which indicated the efficient uptake of Au-Pt NPs by the tumor because of the EPR effect.Figure 4(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\n(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\nAt different points in time post the intravenous injection (i.v.) of Au-Pt NPs, mouse blood samples were collected and the radioactive counts measured (Figure 4A). 
The nanoparticles were demonstrated to have a long blood circulation half-life (t1/2 = 0.5 h). The ICP-MS signals were observed in many organs, especially in the liver of mice, because of the clearance of the nanoparticles by the reticuloendothelial system (RES) (Figure 4B). The tumor uptake of the Au-Pt NPs was determined to be 6.5% ID/g at 24 h post injection. We used CT imaging to study the in vivo behaviors of the Au-Pt NPs because they exhibited the ability to be detected with CT fluorescence imaging (Figure S3). Mice bearing 4T1 tumors were thus injected intravenously with Au-Pt NPs, then imaged with a CT imaging system. The time-dependent increase of the CT signals (Figure 4C) was observed in the tumor after the intravenous injection with Au-Pt NPs, which indicated the efficient uptake of Au-Pt NPs by the tumor because of the EPR effect.Figure 4(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\n(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\nIn vivo Enhanced Radiotherapy Since catalase has the ability to decompose H2O2 to H2O and O2, we assessed the ability of Au-Pt NPs to reduce tumor hypoxia in vivo using an immunofluorescent staining assay (Figure 5A). Four h post injection of different nanoparticles, BALB/c mice bearing CT26 tumors were sacrificed and their tumors were collected for immunofluorescence staining. The signals of the hypoxia-probe (pimonidazole) showed a dramatic decrease in tumors injected with Au-Pt NPs because of the decomposition of H2O2 that generated O2 in the tumor microenvironment by the catalase loaded inside the nanoparticles. By contrast, similar to the untreated tumors, tumors from mice treated with X-rays only exhibited large hypoxic areas.Figure 5(A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P< 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\n(A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P< 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\nInspired by the eminent lethality of cancer cells in vitro because of the Au-Pt NPs, the therapeutic efficacy on tumor-bearing mice was then studied. The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray (8Gy); (d) Au-Pt NPs + X-ray. 
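The blood circulation half-life quoted above (t1/2 = 0.5 h) is typically obtained by fitting an exponential decay to the blood concentration-time points measured by ICP-OES. A minimal, hypothetical Python sketch of a one-compartment fit follows; the sampling times and concentrations are placeholders, not the study's data.

```python
# Hedged sketch: one-compartment fit C(t) = C0 * exp(-k*t) to blood-concentration
# data, giving the circulation half-life t1/2 = ln(2)/k. Placeholder values only.
import numpy as np
from scipy.optimize import curve_fit

t_h = np.array([0.083, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])   # hours post injection
c   = np.array([28.0, 22.0, 14.5, 7.8, 3.9, 1.6, 0.7, 0.2])     # hypothetical %ID/g

def one_compartment(t, c0, k):
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(one_compartment, t_h, c, p0=(30.0, 1.0))
print(f"t1/2 = {np.log(2) / k:.2f} h")
```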
When sacrificed on day 16, the tumors of the mice from the four groups demonstrated that the size of the tumors in the mice treated with Au-Pt NPs was smaller than in any of the other groups (Figure S4). Compared to X-ray treatment alone which resulted in only partly delayed tumor growth (Tumor volume: 459.5±76.09 mm3, Tumor weight: 0.3065±0.049 g), treatment with Au-Pt NPs, at the same level of irradiation with X-rays, resulted in about a twofold increase in the inhibitory effect on the growth of the tumor (Tumor volume: 231.7±63.35 mm3, Tumor weight: 0.1662±0.032 g) (Figure 5B and C). In addition, the mice treated with Au-Pt NPs exhibited no loss of body weight, which indicated that there were almost no short-term side effects (Figure 5D). Moreover, the H&E staining of the main organs from the mice showed no pathological change, which indicated that there was no poisoning of the organisms (Figure S5). In addition, the H&E staining was conducted to investigate any morphological changes and apoptosis of the cancer cells in the different treatment groups. As illustrated in Figure 5E, we found that a large number of morphological changes and apoptosis of the cancer cells were dispersed throughout the whole tumor tissue from mice treated with Au-Pt NPs (Figure 5E). Taken together, the Au-Pt NPs amplified the killing effect of X-rays, which lead to the effective suppression of tumor growth without any distinct damage to surrounding non-tumor tissues.\nSince catalase has the ability to decompose H2O2 to H2O and O2, we assessed the ability of Au-Pt NPs to reduce tumor hypoxia in vivo using an immunofluorescent staining assay (Figure 5A). Four h post injection of different nanoparticles, BALB/c mice bearing CT26 tumors were sacrificed and their tumors were collected for immunofluorescence staining. The signals of the hypoxia-probe (pimonidazole) showed a dramatic decrease in tumors injected with Au-Pt NPs because of the decomposition of H2O2 that generated O2 in the tumor microenvironment by the catalase loaded inside the nanoparticles. By contrast, similar to the untreated tumors, tumors from mice treated with X-rays only exhibited large hypoxic areas.Figure 5(A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P< 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\n(A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P< 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\nInspired by the eminent lethality of cancer cells in vitro because of the Au-Pt NPs, the therapeutic efficacy on tumor-bearing mice was then studied. The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray (8Gy); (d) Au-Pt NPs + X-ray. When sacrificed on day 16, the tumors of the mice from the four groups demonstrated that the size of the tumors in the mice treated with Au-Pt NPs was smaller than in any of the other groups (Figure S4). 
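For readers who want to reproduce the half-life estimate quoted above, blood concentration-time data of this kind are commonly fitted with a mono-exponential (one-compartment) model, C(t) = C0·exp(-kt), with t1/2 = ln 2 / k. The sketch below illustrates that fit in Python; the time points and concentrations are invented placeholders, not the study's ICP-OES measurements.

```python
# Illustrative sketch: estimating a blood-circulation half-life by fitting a
# one-compartment (mono-exponential) decay to blood concentration data.
# The time points and concentrations below are placeholders, not measured data.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.08, 0.25, 0.5, 1, 2, 4, 8, 24])             # h post injection (assumed)
c = np.array([28.0, 22.0, 15.0, 8.0, 3.5, 1.5, 0.8, 0.4])   # %ID/g in blood (assumed)

def mono_exp(t, c0, k):
    """One-compartment elimination: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(mono_exp, t, c, p0=(c[0], 1.0))
t_half = np.log(2) / k
print(f"fitted C0 = {c0:.1f} %ID/g, k = {k:.2f} 1/h, t1/2 = {t_half:.2f} h")
```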
In vivo Enhanced Radiotherapy

Since catalase decomposes H2O2 into H2O and O2, we assessed the ability of Au-Pt NPs to reduce tumor hypoxia in vivo using an immunofluorescent staining assay (Figure 5A). Four hours after the injections, BALB/c mice bearing CT26 tumors were sacrificed and their tumors were collected for immunofluorescence staining. The signal of the hypoxia probe (pimonidazole) decreased dramatically in tumors injected with Au-Pt NPs because the catalase-like activity of the nanoparticles decomposed H2O2 and generated O2 in the tumor microenvironment. By contrast, tumors from mice treated with X-rays only exhibited large hypoxic areas, similar to the untreated tumors.

Figure 5 (A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxic areas were stained with an anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after the various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P < 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).

Inspired by the marked lethality of the Au-Pt NPs toward cancer cells in vitro, we then studied the therapeutic efficacy in tumor-bearing mice. The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray (8 Gy); (d) Au-Pt NPs + X-ray. When the mice were sacrificed on day 16, the tumors of the mice treated with Au-Pt NPs plus X-rays were smaller than those of any other group (Figure S4). Compared with X-ray treatment alone, which only partly delayed tumor growth (tumor volume: 459.5 ± 76.09 mm3, tumor weight: 0.3065 ± 0.049 g), treatment with Au-Pt NPs at the same X-ray dose produced about a twofold increase in the inhibitory effect on tumor growth (tumor volume: 231.7 ± 63.35 mm3, tumor weight: 0.1662 ± 0.032 g) (Figure 5B and C). In addition, the mice treated with Au-Pt NPs exhibited no loss of body weight, indicating almost no short-term side effects (Figure 5D). Moreover, H&E staining of the main organs showed no pathological changes, indicating no systemic toxicity (Figure S5). H&E staining was also used to examine morphological changes and apoptosis of the cancer cells in the different treatment groups; as illustrated in Figure 5E, extensive morphological changes and apoptosis of the cancer cells were dispersed throughout the tumor tissue from mice treated with Au-Pt NPs. Taken together, the Au-Pt NPs amplified the killing effect of X-rays, which led to effective suppression of tumor growth without distinct damage to the surrounding non-tumor tissues.

Conclusions

In summary, we developed Au-Pt NPs via a simple method that simultaneously possess metal-nanoparticle-mediated radio-sensitization and enzyme-like catalytic activity. A general mechanism of nanoparticle-based radiotherapy was demonstrated, in that heavy-metal NPs possess a high X-ray photon capture cross-section and Compton scattering effect, which escalates DNA damage in tumor cells. Furthermore, our Au-Pt NPs increased the decomposition of H2O2 in the TME, improved the anoxic status of the tumors treated with X-rays, and enhanced the efficacy of radiotherapy. More importantly, the in vitro and in vivo data showed that Au-Pt NPs could effectively inhibit tumor growth with no significant side effects on normal tissues and organs. Therefore, our work developed a novel low-toxicity radiosensitizer for enhanced tumor radiotherapy via attenuation of hypoxia. All completed clinical trials have demonstrated the safety of gold-based nanomaterials.40 In a recent clinical trial, gold-silica nanoshells (GSNs) were designed and used for ablating prostate tumors in patients without any obvious treatment-related side effects.41 Our findings further highlight the great potential application of Au-Pt NPs in the future clinical treatment of cancers.
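To make the roughly twofold inhibition quoted in the in vivo section above easy to check, the short script below recomputes the ratios from the reported group means. It is a worked arithmetic check only, using just the mean tumor volumes and weights given for the X-ray-alone and combination groups; error propagation and statistics are omitted.

```python
# Worked check of the tumor-growth comparison reported above (group means only).
xray_volume, combo_volume = 459.5, 231.7      # mm^3 at day 16
xray_weight, combo_weight = 0.3065, 0.1662    # g at day 16

volume_ratio = xray_volume / combo_volume
weight_ratio = xray_weight / combo_weight
print(f"volume ratio (X-ray alone / NPs + X-ray): {volume_ratio:.2f}x")
print(f"weight ratio (X-ray alone / NPs + X-ray): {weight_ratio:.2f}x")
# Both ratios come out close to 2, consistent with the ~twofold stronger
# inhibition attributed to the Au-Pt NPs + X-ray combination.
```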
[ "Introduction", "Materials and Methods", "Synthesis of Au-Pt NPs", "Determining the Characteristics of the Au-Pt NPs", "Detection of O2", "Cell Line and Animals", "In vitro Cell Experiments", "Pharmacokinetic Parameters and Biodistribution in vivo", "Antitumor Efficacy in vivo", "Results and Discussion", "Synthesis and Characterization", "In vitro Enhanced Radiotherapy", "Pharmacokinetics and Biodistribution", "In vivo Enhanced Radiotherapy", "Conclusions" ]
[ "Radiation therapy (radiotherapy, RT) is one of the most effective therapies available for the treatment of localized solid cancers.1 During radiotherapy of a malignant tumor, biological macromolecules of the tumor cells directly interact with the radiation to generate free radicals which cause damage to the DNA, such as strand breaks, point mutations, and aberrant DNA cross-linking.2–4 In addition, the radiation causes water molecules in the tissues to ionize and produce free radicals, which in turn interact with the biological macromolecules to cause cell damage and death.5 Theoretically, almost all types of tumors could be effectively controlled when the local radiation dose is high enough.6 However, a high dose of radiation induces high toxicity to healthy tissues causing side effects.7 Except for the new radiation-delivery techniques that improved the therapeutic effect, highly effective and low-toxicity radiosensitizers are necessary. The improvement of the physical technology, for example, brachytherapy, intensity-modulated RT, image-guided RT, and stereotactic RT has greatly improved radiation therapy.8–11 However, many problems still remain, such as the complex tumor microenvironment containing cancer stem cells, vasculogenesis, and metabolic alterations.12–14\nExcept for tumor size, the intrinsic radio-sensitivity, cell proliferation, and the extent of hypoxia are important factors for controlling the tumor.15 Hypoxia is one of the most important indicators and is related to cancer therapy-resistance.16 Hypoxia activates the Nrf2 pathway, which leads to anti-oxidant and anti-inflammatory responses with an aggregation of M2 macrophages and MDSCs that protect the tumor.5 Owing to the existence of the hypoxic microenvironment of the tumor, the therapeutic efficiency of chemotherapy, radiotherapy and others, are reduced. In recent years, different strategies including physical stimulation (e.g. fractional irradiation) and chemical reaction, have been developed for relieving tumor hypoxia.17,18 However, it is difficult to prevent the growth of a tumor using a monotherapy. Due to the reduced side effects and enhanced therapeutic effect of synergistic therapy, it is used to overcome the insufficient tumor suppression effect of monotherapy.\nRadiosensitizers are molecules/materials with the ability to enhance the radiosensitivity of tumor cells and provide opportunities to overcome the obstacles mentioned above.2 There are many kinds of radiosensitizers, namely small-molecule chemicals, macromolecules, and nanostructures.19 One of the most important mechanisms for increasing radiosensitivity is to change the hypoxic microenvironment via different kinds of radiosensitizers. 
Many tumors are resistant to radiation therapy because they lack oxygen within the tumor due to abnormal or dysfunctional blood vessels.20 This state of hypoxia leads to less DNA damage at the same radiation dose.21 Therefore, oxygen is a well-known radiosensitizer.4 Chemicals and hypoxia-inducing cytotoxins were developed for clinical use because of their ability to fix radiation damage by virtue of their electron affinity.22,23 Macromolecules can regulate radio-sensitivity mechanisms by binding to DNA, miRNAs, mRNAs, siRNAs, proteins, or peptides.24 Recently, the introduction of nanotechnology has provided momentum and expanded the horizon for the development of radiosensitizers.25 In recent years, heavy-metal nanomaterials with high atomic numbers (Z) have shown promise as radiosensitizers because they can absorb, scatter, and emit radiation energy.26–28 Gold nanomaterials (AuNPs) are excellent candidates because they have tunable morphologies, are easy to modify, have good biological safety, and have exceptional radiation sensitization capabilities.29–32 Furthermore, AuNPs are good for drug delivery and cancer therapy.33 In addition, various catalase-like nanomaterials have been developed to catalyze the decomposition of hydrogen peroxide (H2O2) into oxygen, which alleviates tumor hypoxia and overcomes hypoxia-induced radioresistance.34 Therefore, developing a simple and effective strategy for relieving the hypoxic tumor microenvironment is essential for improving the therapeutic efficacy of radiotherapy.

In the present study, we designed a nanoparticle composed of the metallic elements Au and Pt (Au-Pt NPs). The Au-Pt NPs exhibited enzyme-mimicking activities, which resulted in synergistic radiosensitivity for reinforced tumor therapy. The new Au-Pt nanozyme NPs were synthesized in one step at room temperature. The nanoparticles produced had a uniform nanosphere shape and were very stable in physiological solutions. After intravenous administration, the Au-Pt NPs passively targeted the tumor and rapidly accumulated in it because of the enhanced permeability and retention (EPR) effect. The Au-Pt nanozymes not only possess radiosensitizing ability, enhancing the energy deposition of the X-rays, but also catalyze the conversion of H2O2 to O2. In addition to attenuating tumor hypoxia, the Au-Pt NPs significantly inhibited tumor growth under 8 Gy of X-ray irradiation compared with the control group, demonstrating that the Au-Pt NPs improved the therapeutic efficiency of radiotherapy. Therefore, our work developed a novel low-toxicity radiosensitizer for enhanced tumor radiotherapy that acts by alleviating the hypoxic tumor microenvironment. Our findings highlight the great potential of applying Au-Pt NPs in the future clinical treatment of cancer.

Materials and Methods

Synthesis of Au-Pt NPs

L-proline (0.0384 g) and 20 μL of HAuCl4 and H2PtCl6 solution (1 mol/L) were dispersed in 10 mL of deionized water. The pH of the solution was adjusted to 9 by adding NaOH. Next, ascorbic acid (10 mmol L−1) was added drop by drop. The products were obtained by allowing the solution to react for 20 min at room temperature. Finally, C18PMH-PEG (10 mg) was added to 10 mL of the Au-Pt solution, and Au-Pt-PEG-PMH was obtained by stirring the solution for 24 h at room temperature. After 2 h of ultrasonication, the mixture was dried on a rotary evaporator. Ten mL of deionized water was added to dissolve the solid, which yielded PEGylated Au-Pt nanoparticles (Au-Pt NPs). The Au-Pt nanoparticles obtained were further purified by centrifugation at 6000 rpm for 10 min to remove any incompatible nanomaterials. In addition, the Au-Pt NPs were washed three times through ultrafiltration filters with 100 kDa MWCO to remove any unbound PEG.

Determining the Characteristics of the Au-Pt NPs

The hydrodynamic diameters and Zeta potentials of the Au-Pt NPs suspended in PBS and 10% FBS were measured by dynamic light scattering (DLS) (Malvern Instruments, UK). The morphology was determined using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), and the characteristics were described from elemental mapping images using a transmission electron microscope (TEM, JEM-1230, Japan).

Detection of O2

To investigate the efficiency with which oxygen is generated in vitro, Au-Pt NPs (50 μg/mL) and H2O2 (2 mmol/L) were added to 3 mL of PBS (pH 6.75) at room temperature. An oxygen probe (JPBJ-608 portable Dissolved Oxygen Meter, Shanghai REX Instrument Factory) was used to measure the dissolved oxygen concentration every minute for 18 minutes.
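As an aside, dissolved-oxygen readings from such a probe are typically summarized as an initial generation rate. The sketch below shows one way to do that with a simple linear fit over the early time points; the readings are invented placeholders, and only the 1-minute sampling over 18 minutes mirrors the protocol above.

```python
# Illustrative sketch: estimating an initial O2 generation rate from
# dissolved-oxygen readings taken once per minute (values are placeholders).
import numpy as np

minutes = np.arange(0, 19)                                # 0-18 min, one reading per minute
o2_mg_per_l = 6.8 + 0.35 * minutes - 0.006 * minutes**2   # fake probe readings

# The slope of a straight-line fit over the first few minutes approximates
# the initial O2 generation rate (mg L^-1 min^-1).
early = slice(0, 6)
rate, intercept = np.polyfit(minutes[early], o2_mg_per_l[early], 1)
print(f"initial O2 generation rate ~ {rate:.2f} mg/L per min")
```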
Cell Line and Animals

Cells of the human vascular endothelial cell line HUEVC and the murine breast cancer cell line 4T1 were provided by the central laboratory of Taizhou People’s Hospital (purchased from the Cell Bank, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai) and grown in the recommended cell culture medium under standard conditions. Female BALB/c mice (6–8 weeks) were maintained according to the protocols approved by the Nantong University Laboratory Animal Center. All animal protocols were in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals.

In vitro Cell Experiments

The cell cytotoxicity was determined by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay (Sigma) following the standard protocol. The surviving fraction (SF) of the clone formation assay was calculated after the cells were irradiated with 0, 2, 4, 6, or 8 Gy X-rays and kept in culture for 10 days. Fluorescence images of phosphorylated H2AX (γH2AX) were obtained in the 4T1 murine breast cancer cells treated with Au-Pt NPs 24 h after irradiation. The radiation source was an RS 2000 with a 0.3 mm copper filter, operated at 160 kV and 25 mA with a dose rate of 1.203 Gy/min. The fluorescence images were analyzed using Living Imaging software. Live/dead cell discrimination was performed using the Live/Dead Fixable Aqua Dead Cell Stain Kit (Life Technologies) or the Sytox Red Dead Cell Stain (Life Technologies). Cell surface staining was done for 20–30 minutes. Cells were spun down and stained with the antibodies at a 1:200 final dilution in 50% 2.4G2 (Fc block) and 50% FACS buffer (PBS + 1% FBS, 2 mM EDTA) for 15 min. Flow cytometric analyses were performed to determine the percentage of apoptotic cells. Cells suspended in PBS containing 1% FBS were incubated with Annexin V-FITC and PI for 20 min at room temperature in the dark and then evaluated using a flow cytometer. All flow cytometry analyses were performed using an LSRFortessa (BD Biosciences) or an LSR-II (BD Biosciences). The flow data were analyzed using FlowJo v.10 (Tree Star, Inc.).

Pharmacokinetic Parameters and Biodistribution in vivo

At different time points post injection of the Au-Pt NPs, 20 μL of blood was drawn from the right orbital venous plexus for Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis, and the ICP-OES signals of the blood samples were quantified. At 24 h post injection, the mice were sacrificed, and all major organs and tumors were collected; their metal content was likewise determined from the ICP-OES signal. For in vivo CT imaging, 100 μL of PBS or 100 μL of Au-Pt NPs (20 mg/kg) was injected into the tumor site in situ. Mice were anesthetized and scanned using a Philips CT imager; CT images were obtained from the Philips 256 CT scanning system.

Antitumor Efficacy in vivo

The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray; (d) Au-Pt NPs + X-ray. When the tumor volume reached 60 mm3, the mice were injected with PBS or Au-Pt NPs (20 mg/kg) and exposed to X-rays (8 Gy) at 24 h post-injection. The tumor sizes and body weights were recorded every other day after treatment. Mice were defined as dead when the tumor volume exceeded 1000 mm3 (volume = length × width²/2). These mice were sacrificed on day 16 to collect the tumors for hypoxia staining and the crucial organs for hematoxylin & eosin (H&E) staining.

The tumor tissues were removed for frozen sectioning. The sections were incubated overnight with an anti-pimonidazole mouse monoclonal antibody (1:100 dilution, Hypoxyprobe Inc.). Then, an Alexa Fluor 488-conjugated goat-anti-mouse antibody (1:100 dilution, Jackson Inc.) was used as the secondary antibody to bind the primary antibody. All sections were analyzed and imaged by CLSM (OLYMPUS).
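Because the endpoint above hinges on the approximation volume = length × width²/2, a small helper makes the calculation and the 1000 mm3 endpoint check explicit. The caliper values in the example are illustrative only, not measured data.

```python
# Illustrative helper for the tumor-volume formula used above:
# volume = length * width^2 / 2, with the 1000 mm^3 endpoint check.
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Approximate tumor volume from caliper length/width measurements."""
    return length_mm * width_mm ** 2 / 2.0

def reached_endpoint(length_mm: float, width_mm: float, limit_mm3: float = 1000.0) -> bool:
    """True if the estimated volume exceeds the defined endpoint."""
    return tumor_volume_mm3(length_mm, width_mm) > limit_mm3

# Example with made-up caliper readings:
print(tumor_volume_mm3(12.0, 8.0))    # 384.0 mm^3
print(reached_endpoint(16.0, 12.0))   # True (1152 mm^3 > 1000 mm^3)
```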
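One more practical note on the irradiation conditions given in the in vitro cell experiments above: at the quoted dose rate of 1.203 Gy/min, the beam-on time for each prescribed dose follows directly. The short calculation below is a worked arithmetic example, not a statement of the instrument settings actually used.

```python
# Worked arithmetic: beam-on time for a prescribed dose at the dose rate quoted
# for the RS 2000 source above (1.203 Gy/min). Doses follow the 2-8 Gy in vitro
# range and the 8 Gy in vivo prescription described in the Methods.
DOSE_RATE_GY_PER_MIN = 1.203

for dose_gy in (2, 4, 6, 8):
    minutes = dose_gy / DOSE_RATE_GY_PER_MIN
    print(f"{dose_gy} Gy -> {minutes:.2f} min ({minutes * 60:.0f} s) beam-on time")
# 8 Gy works out to roughly 6.65 min at this dose rate.
```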
Au and Pt as high atomic number elements, would absorb, scatter, and emit radiation energy to enhance the radiation dose deposition at the interface of the surrounding tissue.35 The prepared Au-Pt NPs maintained uniform dispersion and good stability in 10% fetal bovine serum (FBS), which indicated potential for its biological application (Figure 1C and E). The Zeta potential of Au-Pt (−18.9 mV) and Au-Pt NPs (−21.1 mV) were similar (Figure 1D).\nScheme 1Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.Figure 1The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nSchematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.\nThe schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nMore importantly, we found that Au-Pt NPs catalyzed the reaction of H2O2 to O2. After different concentrations of Au-Pt were added to H2O2 solutions (pH 6.75) the change in the oxygen concentration was measured using a portable dissolved oxygen meter. The oxygen content did not change in the presence of only H2O2. However, the oxygen production rate increased gradually with an increase in the concentration of Au-Pt, which indicated the excellent CAT-like activity of Au-Pt NPs (Figures 1F and S2). Excessive reactive O2 species (such as H2O2 derived by cancer cells) is one of the oxidative stresses within the tumor microenvironment. The triggering off hypoxia stimulates autophagy and glycolysis through HIF1α and NFκB signaling, which promotes tumor growth and metastasis.3,36 Increasing the decomposition of H2O2 in TME by various nanomaterials would increase the therapeutic effect of treating tumors with phototherapy, chemodynamic therapy, and radiotherapy.37,38\nThe synthesis procedures are shown in Scheme 1. The Au-Pt NPs were synthesized through a co-reduction of H2PtCl6 and HAuCl4 by L-ascorbic acid as a chelating agent to slow down the kinetics of the reduction reaction in a one-pot method. To improve the biocompatibility of the Au-Pt NPs, polyethylene glycol (maleic anhydridealt-1-octadecene) (C18-PMH-PEG5k) was used to modify it via stacking. According to the TEM mages (Figure 1A), the Au-Pt NPs were uniformly dispersed and formed spherical aggregations with a diameter of about 150 ± 20 nm. The ultraviolet absorption spectrum showed that there was no characteristic peak of Au-Pt NPs (Figure S1). The energy-dispersive spectroscopy (EDS) elemental mapping images (Figure 1B) showed that the Au and Pt atoms were uniformly distributed in the spherical aggregations. 
Au and Pt as high atomic number elements, would absorb, scatter, and emit radiation energy to enhance the radiation dose deposition at the interface of the surrounding tissue.35 The prepared Au-Pt NPs maintained uniform dispersion and good stability in 10% fetal bovine serum (FBS), which indicated potential for its biological application (Figure 1C and E). The Zeta potential of Au-Pt (−18.9 mV) and Au-Pt NPs (−21.1 mV) were similar (Figure 1D).\nScheme 1Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.Figure 1The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nSchematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.\nThe schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nMore importantly, we found that Au-Pt NPs catalyzed the reaction of H2O2 to O2. After different concentrations of Au-Pt were added to H2O2 solutions (pH 6.75) the change in the oxygen concentration was measured using a portable dissolved oxygen meter. The oxygen content did not change in the presence of only H2O2. However, the oxygen production rate increased gradually with an increase in the concentration of Au-Pt, which indicated the excellent CAT-like activity of Au-Pt NPs (Figures 1F and S2). Excessive reactive O2 species (such as H2O2 derived by cancer cells) is one of the oxidative stresses within the tumor microenvironment. The triggering off hypoxia stimulates autophagy and glycolysis through HIF1α and NFκB signaling, which promotes tumor growth and metastasis.3,36 Increasing the decomposition of H2O2 in TME by various nanomaterials would increase the therapeutic effect of treating tumors with phototherapy, chemodynamic therapy, and radiotherapy.37,38\nIn vitro Enhanced Radiotherapy According to the MTT assay, the Au-Pt NPs exhibited negligible toxicity to human umbilical endothelial vein cells (HUEVC) and 4T1 cells at different concentrations of Au-Pt NPs at 24 h after treatment (Figure 2A). Furthermore, X-Rays alone (0–4 Gy) also exhibited limited toxicity to HUEVC cells and 4T1 cells (Figure 2B). On the other hand, the results showed that cells cultured without Au-Pt NPs exhibited excellent viability after irradiation with X-rays. The MTT assay revealed that Au-Pt NPs are an excellent radiosensitizer for X-ray-triggered in vitro cancer RT (Figure 2C). The relative cell viability of the 4T1 cells after irradiation and 24 h after treatment with Au-Pt NPs decreased greatly compared to X-Ray treatment alone. 
When combined with Au-Pt NPs treatment, the relative cell viability of groups irradiated with X-rays (4 Gy) decreased to 45% (with 100μg/mL Au-Pt NPs) and 22% (with 200μg/mL Au-Pt NPs). The colony formation assay revealed that at the same X-ray irradiation intensity, the clone formation numbers were reduced in the presence of Au-Pt NPs, which demonstrated its enhanced antigrowth effects (Figure 2D). The double-strand DNA breaks were evaluated by γ-H2AX staining because these breaks are considered to be the major cause of X-ray induced cell death.39 Low signals of the γ-H2AX foci were detected for cells incubated with PBS and Au-Pt NPs alone. In contrast, a higher level of DNA damage was observed for cells irradiated with X-rays. However, cells irradiated with X-rays and treated with Au-Pt NPs appeared to have a higher level of DNA damage, which was further evidencing of the radio-sensitizing function of our nanoparticles (Figure 2E). All of these assays indicated that the cooperation of Au-Pt NPs and irradiation with X-rays can inhibit the proliferation of cancer cells.Figure 2Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nRelative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nFurthermore, the pathways of cell death were assessed using live/dead dye and the Annexin V-FITC Apoptosis Detection Kit. The 4T1 cells were stained with live/dead dye as described in the surface marker staining protocol above. The signals of the PI (red) showed a dramatic increase in tumors injected with Au-Pt NPs and irradiated with X-rays compared to the tumors treated with FBS and Au-Pt NPs alone (Figure 3A). A flow cytometry-based apoptosis assay was performed to determine whether there is synergy between the Au-Pt NPs and X-rays in affecting apoptosis. The apoptosis/necrosis of the 4T1 cells treated with PBS, and X-rays were about 9.12% and 25%, respectively. The apoptosis/necrosis of the combination therapy group was more than 37%. Our results showed that Au-Pt NPs significantly affect apoptosis of 4T1 cells treated with X-rays compared with the control groups (Figure 3B and C).Figure 3(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\n(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). 
(B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\nAccording to the MTT assay, the Au-Pt NPs exhibited negligible toxicity to human umbilical endothelial vein cells (HUEVC) and 4T1 cells at different concentrations of Au-Pt NPs at 24 h after treatment (Figure 2A). Furthermore, X-Rays alone (0–4 Gy) also exhibited limited toxicity to HUEVC cells and 4T1 cells (Figure 2B). On the other hand, the results showed that cells cultured without Au-Pt NPs exhibited excellent viability after irradiation with X-rays. The MTT assay revealed that Au-Pt NPs are an excellent radiosensitizer for X-ray-triggered in vitro cancer RT (Figure 2C). The relative cell viability of the 4T1 cells after irradiation and 24 h after treatment with Au-Pt NPs decreased greatly compared to X-Ray treatment alone. When combined with Au-Pt NPs treatment, the relative cell viability of groups irradiated with X-rays (4 Gy) decreased to 45% (with 100μg/mL Au-Pt NPs) and 22% (with 200μg/mL Au-Pt NPs). The colony formation assay revealed that at the same X-ray irradiation intensity, the clone formation numbers were reduced in the presence of Au-Pt NPs, which demonstrated its enhanced antigrowth effects (Figure 2D). The double-strand DNA breaks were evaluated by γ-H2AX staining because these breaks are considered to be the major cause of X-ray induced cell death.39 Low signals of the γ-H2AX foci were detected for cells incubated with PBS and Au-Pt NPs alone. In contrast, a higher level of DNA damage was observed for cells irradiated with X-rays. However, cells irradiated with X-rays and treated with Au-Pt NPs appeared to have a higher level of DNA damage, which was further evidencing of the radio-sensitizing function of our nanoparticles (Figure 2E). All of these assays indicated that the cooperation of Au-Pt NPs and irradiation with X-rays can inhibit the proliferation of cancer cells.Figure 2Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nRelative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nFurthermore, the pathways of cell death were assessed using live/dead dye and the Annexin V-FITC Apoptosis Detection Kit. The 4T1 cells were stained with live/dead dye as described in the surface marker staining protocol above. The signals of the PI (red) showed a dramatic increase in tumors injected with Au-Pt NPs and irradiated with X-rays compared to the tumors treated with FBS and Au-Pt NPs alone (Figure 3A). A flow cytometry-based apoptosis assay was performed to determine whether there is synergy between the Au-Pt NPs and X-rays in affecting apoptosis. The apoptosis/necrosis of the 4T1 cells treated with PBS, and X-rays were about 9.12% and 25%, respectively. The apoptosis/necrosis of the combination therapy group was more than 37%. 
Our results showed that Au-Pt NPs significantly affect apoptosis of 4T1 cells treated with X-rays compared with the control groups (Figure 3B and C).Figure 3(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\n(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\nPharmacokinetics and Biodistribution At different points in time post the intravenous injection (i.v.) of Au-Pt NPs, mouse blood samples were collected and the radioactive counts measured (Figure 4A). The nanoparticles were demonstrated to have a long blood circulation half-life (t1/2 = 0.5 h). The ICP-MS signals were observed in many organs, especially in the liver of mice, because of the clearance of the nanoparticles by the reticuloendothelial system (RES) (Figure 4B). The tumor uptake of the Au-Pt NPs was determined to be 6.5% ID/g at 24 h post injection. We used CT imaging to study the in vivo behaviors of the Au-Pt NPs because they exhibited the ability to be detected with CT fluorescence imaging (Figure S3). Mice bearing 4T1 tumors were thus injected intravenously with Au-Pt NPs, then imaged with a CT imaging system. The time-dependent increase of the CT signals (Figure 4C) was observed in the tumor after the intravenous injection with Au-Pt NPs, which indicated the efficient uptake of Au-Pt NPs by the tumor because of the EPR effect.Figure 4(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\n(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\nAt different points in time post the intravenous injection (i.v.) of Au-Pt NPs, mouse blood samples were collected and the radioactive counts measured (Figure 4A). The nanoparticles were demonstrated to have a long blood circulation half-life (t1/2 = 0.5 h). The ICP-MS signals were observed in many organs, especially in the liver of mice, because of the clearance of the nanoparticles by the reticuloendothelial system (RES) (Figure 4B). The tumor uptake of the Au-Pt NPs was determined to be 6.5% ID/g at 24 h post injection. We used CT imaging to study the in vivo behaviors of the Au-Pt NPs because they exhibited the ability to be detected with CT fluorescence imaging (Figure S3). Mice bearing 4T1 tumors were thus injected intravenously with Au-Pt NPs, then imaged with a CT imaging system. 
The time-dependent increase of the CT signals (Figure 4C) was observed in the tumor after the intravenous injection with Au-Pt NPs, which indicated the efficient uptake of Au-Pt NPs by the tumor because of the EPR effect.Figure 4(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\n(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\nIn vivo Enhanced Radiotherapy Since catalase has the ability to decompose H2O2 to H2O and O2, we assessed the ability of Au-Pt NPs to reduce tumor hypoxia in vivo using an immunofluorescent staining assay (Figure 5A). Four h post injection of different nanoparticles, BALB/c mice bearing CT26 tumors were sacrificed and their tumors were collected for immunofluorescence staining. The signals of the hypoxia-probe (pimonidazole) showed a dramatic decrease in tumors injected with Au-Pt NPs because of the decomposition of H2O2 that generated O2 in the tumor microenvironment by the catalase loaded inside the nanoparticles. By contrast, similar to the untreated tumors, tumors from mice treated with X-rays only exhibited large hypoxic areas.Figure 5(A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P< 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\n(A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P< 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\nInspired by the eminent lethality of cancer cells in vitro because of the Au-Pt NPs, the therapeutic efficacy on tumor-bearing mice was then studied. The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray (8Gy); (d) Au-Pt NPs + X-ray. When sacrificed on day 16, the tumors of the mice from the four groups demonstrated that the size of the tumors in the mice treated with Au-Pt NPs was smaller than in any of the other groups (Figure S4). Compared to X-ray treatment alone which resulted in only partly delayed tumor growth (Tumor volume: 459.5±76.09 mm3, Tumor weight: 0.3065±0.049 g), treatment with Au-Pt NPs, at the same level of irradiation with X-rays, resulted in about a twofold increase in the inhibitory effect on the growth of the tumor (Tumor volume: 231.7±63.35 mm3, Tumor weight: 0.1662±0.032 g) (Figure 5B and C). 
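As a quick arithmetic check, the "about twofold" comparison follows directly from the reported group means, and the caliper-based tumor-volume formula given in the Methods (V = length × width² / 2) is straightforward to apply. The sketch below does both; the caliper readings at the end are hypothetical.

```python
# Quick check of the "about twofold" claim using the reported group means,
# plus an example of the caliper-based volume formula V = length * width**2 / 2.

v_xray, v_combo = 459.5, 231.7       # mean tumor volumes (mm^3), Figure 5B
w_xray, w_combo = 0.3065, 0.1662     # mean tumor weights (g), Figure 5C

print(f"Volume ratio (X-ray alone / Au-Pt NPs + X-ray): {v_xray / v_combo:.2f}x")
print(f"Weight ratio (X-ray alone / Au-Pt NPs + X-ray): {w_xray / w_combo:.2f}x")

length_mm, width_mm = 10.5, 8.2      # hypothetical caliper readings
print(f"Caliper-derived volume: {length_mm * width_mm**2 / 2:.1f} mm^3")
```

Both ratios land near 2, consistent with the text's "about twofold" wording.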
In addition, the mice treated with Au-Pt NPs exhibited no loss of body weight, which indicated that there were almost no short-term side effects (Figure 5D). Moreover, H&E staining of the main organs from the mice showed no pathological changes, indicating no systemic toxicity (Figure S5). H&E staining was also conducted to investigate morphological changes and apoptosis of the cancer cells in the different treatment groups. As illustrated in Figure 5E, extensive morphological changes and apoptosis of the cancer cells were dispersed throughout the whole tumor tissue from mice treated with Au-Pt NPs. Taken together, the Au-Pt NPs amplified the killing effect of X-rays, which led to the effective suppression of tumor growth without any distinct damage to the surrounding non-tumor tissues.\nSince catalase has the ability to decompose H2O2 to H2O and O2, we assessed the ability of Au-Pt NPs to reduce tumor hypoxia in vivo using an immunofluorescent staining assay (Figure 5A). Four h post injection of different nanoparticles, BALB/c mice bearing CT26 tumors were sacrificed and their tumors were collected for immunofluorescence staining. The signals of the hypoxia probe (pimonidazole) showed a dramatic decrease in tumors injected with Au-Pt NPs because of the decomposition of H2O2, which generated O2 in the tumor microenvironment through the catalase-like activity of the nanoparticles. By contrast, similar to the untreated tumors, tumors from mice treated with X-rays only exhibited large hypoxic areas.\nFigure 5 (A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxic areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after the various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P < 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\nInspired by the marked lethality of the Au-Pt NPs toward cancer cells in vitro, the therapeutic efficacy in tumor-bearing mice was then studied. The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray (8 Gy); (d) Au-Pt NPs + X-ray. When the mice were sacrificed on day 16, the tumors from the four groups demonstrated that the tumors in the mice treated with Au-Pt NPs + X-ray were smaller than in any of the other groups (Figure S4). Compared to X-ray treatment alone, which resulted in only partly delayed tumor growth (tumor volume: 459.5±76.09 mm3, tumor weight: 0.3065±0.049 g), treatment with Au-Pt NPs at the same level of X-ray irradiation resulted in about a twofold increase in the inhibitory effect on tumor growth (tumor volume: 231.7±63.35 mm3, tumor weight: 0.1662±0.032 g) (Figure 5B and C). In addition, the mice treated with Au-Pt NPs exhibited no loss of body weight, which indicated that there were almost no short-term side effects (Figure 5D).
Moreover, H&E staining of the main organs from the mice showed no pathological changes, indicating no systemic toxicity (Figure S5). H&E staining was also conducted to investigate morphological changes and apoptosis of the cancer cells in the different treatment groups. As illustrated in Figure 5E, extensive morphological changes and apoptosis of the cancer cells were dispersed throughout the whole tumor tissue from mice treated with Au-Pt NPs. Taken together, the Au-Pt NPs amplified the killing effect of X-rays, which led to the effective suppression of tumor growth without any distinct damage to the surrounding non-tumor tissues.", "The synthesis procedures are shown in Scheme 1. The Au-Pt NPs were synthesized in a one-pot method through the co-reduction of H2PtCl6 and HAuCl4 by L-ascorbic acid, with a chelating agent used to slow down the kinetics of the reduction reaction. To improve the biocompatibility of the Au-Pt NPs, poly(maleic anhydride-alt-1-octadecene)-polyethylene glycol (C18-PMH-PEG5k) was used to modify them via stacking. According to the TEM images (Figure 1A), the Au-Pt NPs were uniformly dispersed and formed spherical aggregations with a diameter of about 150 ± 20 nm. The ultraviolet absorption spectrum showed that there was no characteristic peak for the Au-Pt NPs (Figure S1). The energy-dispersive spectroscopy (EDS) elemental mapping images (Figure 1B) showed that the Au and Pt atoms were uniformly distributed in the spherical aggregations. Au and Pt, as high atomic number elements, absorb, scatter, and emit radiation energy to enhance the radiation dose deposition at the interface of the surrounding tissue.35 The prepared Au-Pt NPs maintained uniform dispersion and good stability in 10% fetal bovine serum (FBS), which indicated their potential for biological application (Figure 1C and E). The zeta potentials of Au-Pt (−18.9 mV) and Au-Pt NPs (−21.1 mV) were similar (Figure 1D).\nScheme 1 Schematic illustration of the fabrication of Au-Pt NPs and of synergistic radiotherapy employing the tumor microenvironment.\nFigure 1 The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) zeta potential of Au-Pt NPs, (E) hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS at different times, (F) oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75.\nMore importantly, we found that Au-Pt NPs catalyzed the decomposition of H2O2 to O2. After different concentrations of Au-Pt were added to H2O2 solutions (pH 6.75), the change in the oxygen concentration was measured using a portable dissolved oxygen meter. The oxygen content did not change in the presence of H2O2 alone.
However, the oxygen production rate increased gradually with an increase in the concentration of Au-Pt, which indicated the excellent CAT-like activity of Au-Pt NPs (Figures 1F and S2). Excessive reactive O2 species (such as H2O2 derived by cancer cells) is one of the oxidative stresses within the tumor microenvironment. The triggering off hypoxia stimulates autophagy and glycolysis through HIF1α and NFκB signaling, which promotes tumor growth and metastasis.3,36 Increasing the decomposition of H2O2 in TME by various nanomaterials would increase the therapeutic effect of treating tumors with phototherapy, chemodynamic therapy, and radiotherapy.37,38", "According to the MTT assay, the Au-Pt NPs exhibited negligible toxicity to human umbilical endothelial vein cells (HUEVC) and 4T1 cells at different concentrations of Au-Pt NPs at 24 h after treatment (Figure 2A). Furthermore, X-Rays alone (0–4 Gy) also exhibited limited toxicity to HUEVC cells and 4T1 cells (Figure 2B). On the other hand, the results showed that cells cultured without Au-Pt NPs exhibited excellent viability after irradiation with X-rays. The MTT assay revealed that Au-Pt NPs are an excellent radiosensitizer for X-ray-triggered in vitro cancer RT (Figure 2C). The relative cell viability of the 4T1 cells after irradiation and 24 h after treatment with Au-Pt NPs decreased greatly compared to X-Ray treatment alone. When combined with Au-Pt NPs treatment, the relative cell viability of groups irradiated with X-rays (4 Gy) decreased to 45% (with 100μg/mL Au-Pt NPs) and 22% (with 200μg/mL Au-Pt NPs). The colony formation assay revealed that at the same X-ray irradiation intensity, the clone formation numbers were reduced in the presence of Au-Pt NPs, which demonstrated its enhanced antigrowth effects (Figure 2D). The double-strand DNA breaks were evaluated by γ-H2AX staining because these breaks are considered to be the major cause of X-ray induced cell death.39 Low signals of the γ-H2AX foci were detected for cells incubated with PBS and Au-Pt NPs alone. In contrast, a higher level of DNA damage was observed for cells irradiated with X-rays. However, cells irradiated with X-rays and treated with Au-Pt NPs appeared to have a higher level of DNA damage, which was further evidencing of the radio-sensitizing function of our nanoparticles (Figure 2E). All of these assays indicated that the cooperation of Au-Pt NPs and irradiation with X-rays can inhibit the proliferation of cancer cells.Figure 2Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nRelative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs.\nFurthermore, the pathways of cell death were assessed using live/dead dye and the Annexin V-FITC Apoptosis Detection Kit. The 4T1 cells were stained with live/dead dye as described in the surface marker staining protocol above. 
The signals of the PI (red) showed a dramatic increase in tumors injected with Au-Pt NPs and irradiated with X-rays compared to the tumors treated with FBS and Au-Pt NPs alone (Figure 3A). A flow cytometry-based apoptosis assay was performed to determine whether there is synergy between the Au-Pt NPs and X-rays in affecting apoptosis. The apoptosis/necrosis of the 4T1 cells treated with PBS, and X-rays were about 9.12% and 25%, respectively. The apoptosis/necrosis of the combination therapy group was more than 37%. Our results showed that Au-Pt NPs significantly affect apoptosis of 4T1 cells treated with X-rays compared with the control groups (Figure 3B and C).Figure 3(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).\n(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01).", "At different points in time post the intravenous injection (i.v.) of Au-Pt NPs, mouse blood samples were collected and the radioactive counts measured (Figure 4A). The nanoparticles were demonstrated to have a long blood circulation half-life (t1/2 = 0.5 h). The ICP-MS signals were observed in many organs, especially in the liver of mice, because of the clearance of the nanoparticles by the reticuloendothelial system (RES) (Figure 4B). The tumor uptake of the Au-Pt NPs was determined to be 6.5% ID/g at 24 h post injection. We used CT imaging to study the in vivo behaviors of the Au-Pt NPs because they exhibited the ability to be detected with CT fluorescence imaging (Figure S3). Mice bearing 4T1 tumors were thus injected intravenously with Au-Pt NPs, then imaged with a CT imaging system. The time-dependent increase of the CT signals (Figure 4C) was observed in the tumor after the intravenous injection with Au-Pt NPs, which indicated the efficient uptake of Au-Pt NPs by the tumor because of the EPR effect.Figure 4(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.\n(A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor bearing mice with injection of Au-Pt NPs for different times.", "Since catalase has the ability to decompose H2O2 to H2O and O2, we assessed the ability of Au-Pt NPs to reduce tumor hypoxia in vivo using an immunofluorescent staining assay (Figure 5A). Four h post injection of different nanoparticles, BALB/c mice bearing CT26 tumors were sacrificed and their tumors were collected for immunofluorescence staining. 
The signals of the hypoxia-probe (pimonidazole) showed a dramatic decrease in tumors injected with Au-Pt NPs because of the decomposition of H2O2 that generated O2 in the tumor microenvironment by the catalase loaded inside the nanoparticles. By contrast, similar to the untreated tumors, tumors from mice treated with X-rays only exhibited large hypoxic areas.Figure 5(A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P< 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\n(A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P< 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm).\nInspired by the eminent lethality of cancer cells in vitro because of the Au-Pt NPs, the therapeutic efficacy on tumor-bearing mice was then studied. The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray (8Gy); (d) Au-Pt NPs + X-ray. When sacrificed on day 16, the tumors of the mice from the four groups demonstrated that the size of the tumors in the mice treated with Au-Pt NPs was smaller than in any of the other groups (Figure S4). Compared to X-ray treatment alone which resulted in only partly delayed tumor growth (Tumor volume: 459.5±76.09 mm3, Tumor weight: 0.3065±0.049 g), treatment with Au-Pt NPs, at the same level of irradiation with X-rays, resulted in about a twofold increase in the inhibitory effect on the growth of the tumor (Tumor volume: 231.7±63.35 mm3, Tumor weight: 0.1662±0.032 g) (Figure 5B and C). In addition, the mice treated with Au-Pt NPs exhibited no loss of body weight, which indicated that there were almost no short-term side effects (Figure 5D). Moreover, the H&E staining of the main organs from the mice showed no pathological change, which indicated that there was no poisoning of the organisms (Figure S5). In addition, the H&E staining was conducted to investigate any morphological changes and apoptosis of the cancer cells in the different treatment groups. As illustrated in Figure 5E, we found that a large number of morphological changes and apoptosis of the cancer cells were dispersed throughout the whole tumor tissue from mice treated with Au-Pt NPs (Figure 5E). Taken together, the Au-Pt NPs amplified the killing effect of X-rays, which lead to the effective suppression of tumor growth without any distinct damage to surrounding non-tumor tissues.", "In summary, we developed Au-Pt NPs via a simple method that simultaneously possesses metal nanoparticle mediated radio-sensitization, and enzyme catalytic activities. A general mechanism in nanoparticle-based radiotherapy was uncovered in that heavy metal NPs possesses a high X-ray photon capture cross-section and Compton scattering effect which escalated the damage of DNA in tumor cells. Furthermore, our Au-Pt NPs increased the decomposition of H2O2 in TME and improved the anoxic status of the tumors treated with X-rays and enhanced the efficacy of radiotherapy. 
More importantly, in vitro and in vivo data showed that Au-Pt NPs could effectively inhibit tumor growth with no significant side effects on normal tissues and organs. Therefore, our work developed a novel, low-toxicity radiosensitizer for enhanced tumor radiotherapy via attenuation of hypoxia. All completed clinical trials have demonstrated the safety of gold-based nanomaterials.40 In a recent clinical trial, gold-silica nanoshells (GSNs) were designed and used for ablating prostate tumors in patients without any obvious treatment-related side effects.41 Our findings further highlight the great potential of Au-Pt NPs for the future clinical treatment of cancers." ]
[ "intro", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "radiosensitizer", "nano-enzyme", "TME", "radiotherapy" ]
Introduction: Radiation therapy (radiotherapy, RT) is one of the most effective therapies available for the treatment of localized solid cancers.1 During radiotherapy of a malignant tumor, biological macromolecules of the tumor cells directly interact with the radiation to generate free radicals which cause damage to the DNA, such as strand breaks, point mutations, and aberrant DNA cross-linking.2–4 In addition, the radiation causes water molecules in the tissues to ionize and produce free radicals, which in turn interact with the biological macromolecules to cause cell damage and death.5 Theoretically, almost all types of tumors could be effectively controlled when the local radiation dose is high enough.6 However, a high dose of radiation induces high toxicity to healthy tissues causing side effects.7 Except for the new radiation-delivery techniques that improved the therapeutic effect, highly effective and low-toxicity radiosensitizers are necessary. The improvement of the physical technology, for example, brachytherapy, intensity-modulated RT, image-guided RT, and stereotactic RT has greatly improved radiation therapy.8–11 However, many problems still remain, such as the complex tumor microenvironment containing cancer stem cells, vasculogenesis, and metabolic alterations.12–14 Except for tumor size, the intrinsic radio-sensitivity, cell proliferation, and the extent of hypoxia are important factors for controlling the tumor.15 Hypoxia is one of the most important indicators and is related to cancer therapy-resistance.16 Hypoxia activates the Nrf2 pathway, which leads to anti-oxidant and anti-inflammatory responses with an aggregation of M2 macrophages and MDSCs that protect the tumor.5 Owing to the existence of the hypoxic microenvironment of the tumor, the therapeutic efficiency of chemotherapy, radiotherapy and others, are reduced. In recent years, different strategies including physical stimulation (e.g. fractional irradiation) and chemical reaction, have been developed for relieving tumor hypoxia.17,18 However, it is difficult to prevent the growth of a tumor using a monotherapy. Due to the reduced side effects and enhanced therapeutic effect of synergistic therapy, it is used to overcome the insufficient tumor suppression effect of monotherapy. Radiosensitizers are molecules/materials with the ability to enhance the radiosensitivity of tumor cells and provide opportunities to overcome the obstacles mentioned above.2 There are many kinds of radiosensitizers, namely small-molecule chemicals, macromolecules, and nanostructures.19 One of the most important mechanisms for increasing radiosensitivity is to change the hypoxic microenvironment via different kinds of radiosensitizers. 
Many tumors are resistant to radiation therapy because they lack oxygen within the tumor due to abnormal or dysfunctional blood vessels.20 This state of hypoxia leads to less DNA damage at the same radiation dose.21 Therefore, oxygen is a well-known radiosensitizer.4 Chemicals and hypoxia-inducing cytotoxins were developed for clinical use because of their oxygen-producing capacity through a mechanism of damage fixation utilizing their electron affinity.22,23 Macromolecules are able to regulate the radio-sensitivity mechanism by binding with the DNAs, miRNA, mRNAs, siRNAs, proteins or peptides.24 Recently, the introduction of nanotechnology has provided momentum and expanded the horizon for the development of radiosensitizers.25 In recent years, heavy-metal nanomaterials with high atomic numbers (Z) have shown promise as radiosensitizers because they can absorb, scatter, and emit radiation energy.26–28 An excellent candidate is gold nanomaterials (AuNPs) because they have tunable morphologies, they are easy to modify, they have good biological safety, and they have exceptional radiation sensitization capabilities.29–32 Furthermore, AuNPs are good for drug delivery and cancer therapy.33 In addition, various catalase-like nanomaterials have been developed for catalyzing the decomposition of hydrogen peroxide (H2O2) into oxygen, which alleviates tumor hypoxia, and overcomes the hypoxia-induced radioresistance.34 Therefore, to develop a simple and effective strategy for relieving the hypoxic tumor microenvironment is essential to improving the therapeutic efficacy of radiotherapy. In the present study, we designed a nanoparticle composed of the metallic elements Au and Pt (Au-Pt NPs). The Au-Pt NPs exhibited enzyme-mimicking activities, which resulted in synergistic radiosensitivity for reinforced tumor therapy. The new Au-Pt NPs with nanozymes were synthesized in one step at room temperature. The nanoparticles produced had a uniform nanosphere shape and were very stable in physiological solutions. After intravenous administration, the Au-Pt NPs passively targeted the tumor and rapidly accumulated in it because of the enhanced permeability and retention (EPR) effect. Not only do the Au-Pt nanozymes possess a radiosensitivity ability to enhance the energy deposition of the X-rays but are also able to catalyze the reaction that converts H2O2 to O2. In addition to attenuating tumor hypoxia, the Au-Pt NPs significantly inhibited tumor growth under 8 Gy of X-ray irradiation compared to the control group. This demonstrated that the Au-Pt NPs improved the therapeutic efficiency of radiotherapy. Therefore, our work developed a novel radiosensitizer with low toxicity, for enhanced tumor radiotherapy by alleviating the tumor hypoxic microenvironment. Our findings highlight the great potential of applying Au-Pt NPs in the future clinical treatment of cancer. Materials and Methods: Synthesis of Au-Pt NPs L-proline 0.0384 g, 20 μL of HAuCl4 and H2PtCl6 solution (1mol/L) were dispersed in 10 mL of deionized water. The pH value of the solution was adjusted to 9 by adding NaOH. Next, ascorbic acid (10 mmol L−1) was added drop by drop. The products were obtained by allowing the solution to react for 20 min at room temperature. Finally, C18PMH-PEG (10 mg) was added to 10 mL of an Au-Pt solution. Au-Pt-PEG-PMH was obtained by stirring the solution for 24 h at room temperature. After 2 h of ultrasonication, the mixture was dried by rotary evaporator. 
Ten mL of deionized water was added to dissolve the solid which yielded PEGylated Au-Pt nanoparticles (Au-Pt NPs). The Au-Pt nanoparticle that were obtained were further purified through centrifugation at 6000 rpm for 10 min to remove any incompatible nanomaterials. In addition, the Au-Pt NPs were washed three times through ultrafiltration filters with 100 kDa MWCO to remove any unbound PEG. L-proline 0.0384 g, 20 μL of HAuCl4 and H2PtCl6 solution (1mol/L) were dispersed in 10 mL of deionized water. The pH value of the solution was adjusted to 9 by adding NaOH. Next, ascorbic acid (10 mmol L−1) was added drop by drop. The products were obtained by allowing the solution to react for 20 min at room temperature. Finally, C18PMH-PEG (10 mg) was added to 10 mL of an Au-Pt solution. Au-Pt-PEG-PMH was obtained by stirring the solution for 24 h at room temperature. After 2 h of ultrasonication, the mixture was dried by rotary evaporator. Ten mL of deionized water was added to dissolve the solid which yielded PEGylated Au-Pt nanoparticles (Au-Pt NPs). The Au-Pt nanoparticle that were obtained were further purified through centrifugation at 6000 rpm for 10 min to remove any incompatible nanomaterials. In addition, the Au-Pt NPs were washed three times through ultrafiltration filters with 100 kDa MWCO to remove any unbound PEG. Determining the Characteristics of the Au-Pt NPs The hydrodynamic diameters and Zeta potentials of the Au-Pt NPs suspended in PBS and 10% FBS were measured by dynamic light scattering (DLS) (Malvern Instruments, UK). The morphology was determined using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), and the characteristics were described from elemental mapping images using a transmission electron microscope (TEM, JEM-1230, Japan). The hydrodynamic diameters and Zeta potentials of the Au-Pt NPs suspended in PBS and 10% FBS were measured by dynamic light scattering (DLS) (Malvern Instruments, UK). The morphology was determined using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), and the characteristics were described from elemental mapping images using a transmission electron microscope (TEM, JEM-1230, Japan). Detection of O2 To investigate the efficiency with which oxygen is generated in vitro, the Au-Pt NPs (50 μg/mL), H2O2 (2 mmol/L) were added to 3 mL of PBS (pH 6.75) at room temperature. An oxygen probe (JPBJ-608 portable Dissolved Oxygen Meters, Shanghai REX Instrument Factory) was used to measure the dissolved oxygen concentration every minute for 18 minutes. To investigate the efficiency with which oxygen is generated in vitro, the Au-Pt NPs (50 μg/mL), H2O2 (2 mmol/L) were added to 3 mL of PBS (pH 6.75) at room temperature. An oxygen probe (JPBJ-608 portable Dissolved Oxygen Meters, Shanghai REX Instrument Factory) was used to measure the dissolved oxygen concentration every minute for 18 minutes. Cell Line and Animals Cells of the human vascular endothelial cell line HUEVC and murine breast cancer cell line 4T1 were provided by the central laboratory of Taizhou People’s Hospital (purchased from the Cell Bank, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai) and grown in the recommended cell culture medium under the standard conditions. Female BALB/c mice (6–8 weeks) were maintained according to the protocols approved by the Nantong University Laboratory Animal Center. 
All animal protocols were in accordance with the National Institute’s Health Guide for the Care and Use of Laboratory Animals. Cells of the human vascular endothelial cell line HUEVC and murine breast cancer cell line 4T1 were provided by the central laboratory of Taizhou People’s Hospital (purchased from the Cell Bank, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai) and grown in the recommended cell culture medium under the standard conditions. Female BALB/c mice (6–8 weeks) were maintained according to the protocols approved by the Nantong University Laboratory Animal Center. All animal protocols were in accordance with the National Institute’s Health Guide for the Care and Use of Laboratory Animals. In vitro Cell Experiments The cell cytotoxicity was determined by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay (Sigma) following the standard protocol. The surviving fraction (SF) of the clone formation assay was calculated after the cells were irradiated with 0, 2, 4, 6 or 8 Gy X-rays and kept in culture for 10 days. Fluorescence images were obtained of the phosphorylated H2AX (γH2AX) in the 4T1 murine breast cancer cells after irradiation for 24 h and treated with Au-Pt NPs. The radiation source was a RS 2000 with 0.3 mm copper filter, irradiating at 160 kV at 25 mA and a dose rate of 1.203 Gy/min. The fluorescence images were analyzed using Living Imaging software. Live/dead cell discrimination was performed using the Live/Dead Fixable Aqua Dead Cell Stain Kit (Life Technologies) or the Sytox Red Dead Cell Stain (Life Technologies). Cell surface staining was done for 20–30 minutes. Cells were spun down and stained with a 1:200 final concentration of the antibodies in 50% of a 2.4G2 (Fc block) and 50% of a FACS Buffer (PBS + 1% FBS, 2mM EDTA) for 15 min. Flow cytometric analyses were performed to determine the percentage of apoptotic cells. Cells suspended in PBS containing 1% FBS were incubated with anti-mouse antibody against Annexin V-FITC and PI for 20 min at room temperature in the dark and then evaluated using a flow cytometer. All flow cytometry analyses were performed using an LSRFortessa (BD Biosciences) or an LSR-II (BD Biosciences). The flow data were analyzed using FlowJo v.10 (Tree Star, Inc.). The cell cytotoxicity was determined by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay (Sigma) following the standard protocol. The surviving fraction (SF) of the clone formation assay was calculated after the cells were irradiated with 0, 2, 4, 6 or 8 Gy X-rays and kept in culture for 10 days. Fluorescence images were obtained of the phosphorylated H2AX (γH2AX) in the 4T1 murine breast cancer cells after irradiation for 24 h and treated with Au-Pt NPs. The radiation source was a RS 2000 with 0.3 mm copper filter, irradiating at 160 kV at 25 mA and a dose rate of 1.203 Gy/min. The fluorescence images were analyzed using Living Imaging software. Live/dead cell discrimination was performed using the Live/Dead Fixable Aqua Dead Cell Stain Kit (Life Technologies) or the Sytox Red Dead Cell Stain (Life Technologies). Cell surface staining was done for 20–30 minutes. Cells were spun down and stained with a 1:200 final concentration of the antibodies in 50% of a 2.4G2 (Fc block) and 50% of a FACS Buffer (PBS + 1% FBS, 2mM EDTA) for 15 min. Flow cytometric analyses were performed to determine the percentage of apoptotic cells. 
Cells suspended in PBS containing 1% FBS were incubated with anti-mouse antibody against Annexin V-FITC and PI for 20 min at room temperature in the dark and then evaluated using a flow cytometer. All flow cytometry analyses were performed using an LSRFortessa (BD Biosciences) or an LSR-II (BD Biosciences). The flow data were analyzed using FlowJo v.10 (Tree Star, Inc.). Pharmacokinetic Parameters and Biodistribution in vivo At different points in time, post injection of the Au-Pt NPs, 20 μL of blood was drawn from the right orbital venous plexus for the Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis. The ICP-OES signals in blood samples were measured using a gamma counter. At 24 h post injection, the mice were sacrificed, and all major organs and tumors were collected. Then, fluorescence of organs and tumors were measured using a gamma counter. In vivo CT imaging, 100 μL of PBS and 100 μL of Au-Pt NPs (20 mg/kg) were injected into the tumor site in situ. Mice were anesthetized and scanned using a Philips CT imager. CT images were obtained from the Philips 256 CT scanning system. At different points in time, post injection of the Au-Pt NPs, 20 μL of blood was drawn from the right orbital venous plexus for the Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis. The ICP-OES signals in blood samples were measured using a gamma counter. At 24 h post injection, the mice were sacrificed, and all major organs and tumors were collected. Then, fluorescence of organs and tumors were measured using a gamma counter. In vivo CT imaging, 100 μL of PBS and 100 μL of Au-Pt NPs (20 mg/kg) were injected into the tumor site in situ. Mice were anesthetized and scanned using a Philips CT imager. CT images were obtained from the Philips 256 CT scanning system. Antitumor Efficacy in vivo The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray; (d) Au-Pt NPs + X-ray. When the tumor volume reached 60 mm3 at 24 h post-injection of PBS and Au-Pt NPs (20 mg/kg), the mice were exposed to X-ray (8 Gy). The tumor sizes and weight were recorded every other day after treatment. The mice were defined as dead when the tumor volume exceeded 1000 mm3 (Volume = length*width2/2). These mice were sacrificed on day 16 to collect the tumors for the hypoxia staining and crucial organs for the hematoxylin & eosin (H&E) staining. The tumor tissues were removed for the frozen section. The sections were incubated overnight with an anti-pimonidazole mouse monoclonal antibody (100 times dilution, Hypoxyprobe Inc.). Then, an Alexa Fluo 488 conjugated goat-anti-mouse antibody (100 times dilution, Jackson Inc.) was used as the secondary antibody to combine with the primary antibody. All sections were analyzed and imaged by CLSM (OLYMPUS). The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray; (d) Au-Pt NPs + X-ray. When the tumor volume reached 60 mm3 at 24 h post-injection of PBS and Au-Pt NPs (20 mg/kg), the mice were exposed to X-ray (8 Gy). The tumor sizes and weight were recorded every other day after treatment. The mice were defined as dead when the tumor volume exceeded 1000 mm3 (Volume = length*width2/2). These mice were sacrificed on day 16 to collect the tumors for the hypoxia staining and crucial organs for the hematoxylin & eosin (H&E) staining. The tumor tissues were removed for the frozen section. 
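Before the staining details that follow, here is a minimal sketch of the tumor-monitoring bookkeeping just described, with volumes computed as V = length × width² / 2 every other day and animals flagged once the 1000 mm³ endpoint is exceeded; all caliper readings in the example are hypothetical.

```python
# Sketch of the tumor-monitoring rule described above. Caliper readings are
# hypothetical; volumes use V = length * width**2 / 2 and the 1000 mm^3 endpoint.

ENDPOINT_MM3 = 1000.0

def tumor_volume(length_mm, width_mm):
    return length_mm * width_mm ** 2 / 2

# (day, length_mm, width_mm) for one hypothetical animal
measurements = [(0, 5.0, 4.8), (2, 6.1, 5.5), (4, 7.4, 6.3), (6, 9.0, 7.6),
                (8, 10.8, 9.1), (10, 12.5, 10.9), (12, 14.0, 12.2)]

baseline = tumor_volume(*measurements[0][1:])
for day, length, width in measurements:
    v = tumor_volume(length, width)
    flag = "ENDPOINT exceeded" if v > ENDPOINT_MM3 else "ok"
    print(f"day {day:2d}: {v:7.1f} mm^3 ({v / baseline:4.1f}x baseline) {flag}")
```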
The sections were incubated overnight with an anti-pimonidazole mouse monoclonal antibody (100 times dilution, Hypoxyprobe Inc.). Then, an Alexa Fluo 488 conjugated goat-anti-mouse antibody (100 times dilution, Jackson Inc.) was used as the secondary antibody to combine with the primary antibody. All sections were analyzed and imaged by CLSM (OLYMPUS). Synthesis of Au-Pt NPs: L-proline 0.0384 g, 20 μL of HAuCl4 and H2PtCl6 solution (1mol/L) were dispersed in 10 mL of deionized water. The pH value of the solution was adjusted to 9 by adding NaOH. Next, ascorbic acid (10 mmol L−1) was added drop by drop. The products were obtained by allowing the solution to react for 20 min at room temperature. Finally, C18PMH-PEG (10 mg) was added to 10 mL of an Au-Pt solution. Au-Pt-PEG-PMH was obtained by stirring the solution for 24 h at room temperature. After 2 h of ultrasonication, the mixture was dried by rotary evaporator. Ten mL of deionized water was added to dissolve the solid which yielded PEGylated Au-Pt nanoparticles (Au-Pt NPs). The Au-Pt nanoparticle that were obtained were further purified through centrifugation at 6000 rpm for 10 min to remove any incompatible nanomaterials. In addition, the Au-Pt NPs were washed three times through ultrafiltration filters with 100 kDa MWCO to remove any unbound PEG. Determining the Characteristics of the Au-Pt NPs: The hydrodynamic diameters and Zeta potentials of the Au-Pt NPs suspended in PBS and 10% FBS were measured by dynamic light scattering (DLS) (Malvern Instruments, UK). The morphology was determined using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), and the characteristics were described from elemental mapping images using a transmission electron microscope (TEM, JEM-1230, Japan). Detection of O2: To investigate the efficiency with which oxygen is generated in vitro, the Au-Pt NPs (50 μg/mL), H2O2 (2 mmol/L) were added to 3 mL of PBS (pH 6.75) at room temperature. An oxygen probe (JPBJ-608 portable Dissolved Oxygen Meters, Shanghai REX Instrument Factory) was used to measure the dissolved oxygen concentration every minute for 18 minutes. Cell Line and Animals: Cells of the human vascular endothelial cell line HUEVC and murine breast cancer cell line 4T1 were provided by the central laboratory of Taizhou People’s Hospital (purchased from the Cell Bank, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai) and grown in the recommended cell culture medium under the standard conditions. Female BALB/c mice (6–8 weeks) were maintained according to the protocols approved by the Nantong University Laboratory Animal Center. All animal protocols were in accordance with the National Institute’s Health Guide for the Care and Use of Laboratory Animals. In vitro Cell Experiments: The cell cytotoxicity was determined by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay (Sigma) following the standard protocol. The surviving fraction (SF) of the clone formation assay was calculated after the cells were irradiated with 0, 2, 4, 6 or 8 Gy X-rays and kept in culture for 10 days. Fluorescence images were obtained of the phosphorylated H2AX (γH2AX) in the 4T1 murine breast cancer cells after irradiation for 24 h and treated with Au-Pt NPs. The radiation source was a RS 2000 with 0.3 mm copper filter, irradiating at 160 kV at 25 mA and a dose rate of 1.203 Gy/min. The fluorescence images were analyzed using Living Imaging software. 
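A trivial consequence of the dose rate quoted above is the exposure time for each prescribed dose (time = dose / dose rate); for example:

```python
# Exposure time implied by the stated dose rate of 1.203 Gy/min (RS 2000 source).
DOSE_RATE = 1.203  # Gy/min

for dose_gy in (2, 4, 6, 8):
    minutes = dose_gy / DOSE_RATE
    print(f"{dose_gy} Gy -> {minutes:.2f} min ({minutes * 60:.0f} s)")
```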
Live/dead cell discrimination was performed using the Live/Dead Fixable Aqua Dead Cell Stain Kit (Life Technologies) or the Sytox Red Dead Cell Stain (Life Technologies). Cell surface staining was done for 20–30 minutes. Cells were spun down and stained with a 1:200 final concentration of the antibodies in 50% of a 2.4G2 (Fc block) and 50% of a FACS Buffer (PBS + 1% FBS, 2mM EDTA) for 15 min. Flow cytometric analyses were performed to determine the percentage of apoptotic cells. Cells suspended in PBS containing 1% FBS were incubated with anti-mouse antibody against Annexin V-FITC and PI for 20 min at room temperature in the dark and then evaluated using a flow cytometer. All flow cytometry analyses were performed using an LSRFortessa (BD Biosciences) or an LSR-II (BD Biosciences). The flow data were analyzed using FlowJo v.10 (Tree Star, Inc.). Pharmacokinetic Parameters and Biodistribution in vivo: At different points in time, post injection of the Au-Pt NPs, 20 μL of blood was drawn from the right orbital venous plexus for the Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis. The ICP-OES signals in blood samples were measured using a gamma counter. At 24 h post injection, the mice were sacrificed, and all major organs and tumors were collected. Then, fluorescence of organs and tumors were measured using a gamma counter. In vivo CT imaging, 100 μL of PBS and 100 μL of Au-Pt NPs (20 mg/kg) were injected into the tumor site in situ. Mice were anesthetized and scanned using a Philips CT imager. CT images were obtained from the Philips 256 CT scanning system. Antitumor Efficacy in vivo: The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray; (d) Au-Pt NPs + X-ray. When the tumor volume reached 60 mm3 at 24 h post-injection of PBS and Au-Pt NPs (20 mg/kg), the mice were exposed to X-ray (8 Gy). The tumor sizes and weight were recorded every other day after treatment. The mice were defined as dead when the tumor volume exceeded 1000 mm3 (Volume = length*width2/2). These mice were sacrificed on day 16 to collect the tumors for the hypoxia staining and crucial organs for the hematoxylin & eosin (H&E) staining. The tumor tissues were removed for the frozen section. The sections were incubated overnight with an anti-pimonidazole mouse monoclonal antibody (100 times dilution, Hypoxyprobe Inc.). Then, an Alexa Fluo 488 conjugated goat-anti-mouse antibody (100 times dilution, Jackson Inc.) was used as the secondary antibody to combine with the primary antibody. All sections were analyzed and imaged by CLSM (OLYMPUS). Results and Discussion: Synthesis and Characterization The synthesis procedures are shown in Scheme 1. The Au-Pt NPs were synthesized through a co-reduction of H2PtCl6 and HAuCl4 by L-ascorbic acid as a chelating agent to slow down the kinetics of the reduction reaction in a one-pot method. To improve the biocompatibility of the Au-Pt NPs, polyethylene glycol (maleic anhydridealt-1-octadecene) (C18-PMH-PEG5k) was used to modify it via stacking. According to the TEM mages (Figure 1A), the Au-Pt NPs were uniformly dispersed and formed spherical aggregations with a diameter of about 150 ± 20 nm. The ultraviolet absorption spectrum showed that there was no characteristic peak of Au-Pt NPs (Figure S1). The energy-dispersive spectroscopy (EDS) elemental mapping images (Figure 1B) showed that the Au and Pt atoms were uniformly distributed in the spherical aggregations. 
Au and Pt as high atomic number elements, would absorb, scatter, and emit radiation energy to enhance the radiation dose deposition at the interface of the surrounding tissue.35 The prepared Au-Pt NPs maintained uniform dispersion and good stability in 10% fetal bovine serum (FBS), which indicated potential for its biological application (Figure 1C and E). The Zeta potential of Au-Pt (−18.9 mV) and Au-Pt NPs (−21.1 mV) were similar (Figure 1D). Scheme 1Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.Figure 1The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75. Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment. The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75. More importantly, we found that Au-Pt NPs catalyzed the reaction of H2O2 to O2. After different concentrations of Au-Pt were added to H2O2 solutions (pH 6.75) the change in the oxygen concentration was measured using a portable dissolved oxygen meter. The oxygen content did not change in the presence of only H2O2. However, the oxygen production rate increased gradually with an increase in the concentration of Au-Pt, which indicated the excellent CAT-like activity of Au-Pt NPs (Figures 1F and S2). Excessive reactive O2 species (such as H2O2 derived by cancer cells) is one of the oxidative stresses within the tumor microenvironment. The triggering off hypoxia stimulates autophagy and glycolysis through HIF1α and NFκB signaling, which promotes tumor growth and metastasis.3,36 Increasing the decomposition of H2O2 in TME by various nanomaterials would increase the therapeutic effect of treating tumors with phototherapy, chemodynamic therapy, and radiotherapy.37,38 The synthesis procedures are shown in Scheme 1. The Au-Pt NPs were synthesized through a co-reduction of H2PtCl6 and HAuCl4 by L-ascorbic acid as a chelating agent to slow down the kinetics of the reduction reaction in a one-pot method. To improve the biocompatibility of the Au-Pt NPs, polyethylene glycol (maleic anhydridealt-1-octadecene) (C18-PMH-PEG5k) was used to modify it via stacking. According to the TEM mages (Figure 1A), the Au-Pt NPs were uniformly dispersed and formed spherical aggregations with a diameter of about 150 ± 20 nm. The ultraviolet absorption spectrum showed that there was no characteristic peak of Au-Pt NPs (Figure S1). The energy-dispersive spectroscopy (EDS) elemental mapping images (Figure 1B) showed that the Au and Pt atoms were uniformly distributed in the spherical aggregations. 
Au and Pt as high atomic number elements, would absorb, scatter, and emit radiation energy to enhance the radiation dose deposition at the interface of the surrounding tissue.35 The prepared Au-Pt NPs maintained uniform dispersion and good stability in 10% fetal bovine serum (FBS), which indicated potential for its biological application (Figure 1C and E). The Zeta potential of Au-Pt (−18.9 mV) and Au-Pt NPs (−21.1 mV) were similar (Figure 1D). Scheme 1Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment.Figure 1The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75. Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy by employing tumor microenvironment. The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75. More importantly, we found that Au-Pt NPs catalyzed the reaction of H2O2 to O2. After different concentrations of Au-Pt were added to H2O2 solutions (pH 6.75) the change in the oxygen concentration was measured using a portable dissolved oxygen meter. The oxygen content did not change in the presence of only H2O2. However, the oxygen production rate increased gradually with an increase in the concentration of Au-Pt, which indicated the excellent CAT-like activity of Au-Pt NPs (Figures 1F and S2). Excessive reactive O2 species (such as H2O2 derived by cancer cells) is one of the oxidative stresses within the tumor microenvironment. The triggering off hypoxia stimulates autophagy and glycolysis through HIF1α and NFκB signaling, which promotes tumor growth and metastasis.3,36 Increasing the decomposition of H2O2 in TME by various nanomaterials would increase the therapeutic effect of treating tumors with phototherapy, chemodynamic therapy, and radiotherapy.37,38 In vitro Enhanced Radiotherapy According to the MTT assay, the Au-Pt NPs exhibited negligible toxicity to human umbilical endothelial vein cells (HUEVC) and 4T1 cells at different concentrations of Au-Pt NPs at 24 h after treatment (Figure 2A). Furthermore, X-Rays alone (0–4 Gy) also exhibited limited toxicity to HUEVC cells and 4T1 cells (Figure 2B). On the other hand, the results showed that cells cultured without Au-Pt NPs exhibited excellent viability after irradiation with X-rays. The MTT assay revealed that Au-Pt NPs are an excellent radiosensitizer for X-ray-triggered in vitro cancer RT (Figure 2C). The relative cell viability of the 4T1 cells after irradiation and 24 h after treatment with Au-Pt NPs decreased greatly compared to X-Ray treatment alone. 
When combined with Au-Pt NPs treatment, the relative cell viability of groups irradiated with X-rays (4 Gy) decreased to 45% (with 100μg/mL Au-Pt NPs) and 22% (with 200μg/mL Au-Pt NPs). The colony formation assay revealed that at the same X-ray irradiation intensity, the clone formation numbers were reduced in the presence of Au-Pt NPs, which demonstrated its enhanced antigrowth effects (Figure 2D). The double-strand DNA breaks were evaluated by γ-H2AX staining because these breaks are considered to be the major cause of X-ray induced cell death.39 Low signals of the γ-H2AX foci were detected for cells incubated with PBS and Au-Pt NPs alone. In contrast, a higher level of DNA damage was observed for cells irradiated with X-rays. However, cells irradiated with X-rays and treated with Au-Pt NPs appeared to have a higher level of DNA damage, which was further evidencing of the radio-sensitizing function of our nanoparticles (Figure 2E). All of these assays indicated that the cooperation of Au-Pt NPs and irradiation with X-rays can inhibit the proliferation of cancer cells.Figure 2Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. Furthermore, the pathways of cell death were assessed using live/dead dye and the Annexin V-FITC Apoptosis Detection Kit. The 4T1 cells were stained with live/dead dye as described in the surface marker staining protocol above. The signals of the PI (red) showed a dramatic increase in tumors injected with Au-Pt NPs and irradiated with X-rays compared to the tumors treated with FBS and Au-Pt NPs alone (Figure 3A). A flow cytometry-based apoptosis assay was performed to determine whether there is synergy between the Au-Pt NPs and X-rays in affecting apoptosis. The apoptosis/necrosis of the 4T1 cells treated with PBS, and X-rays were about 9.12% and 25%, respectively. The apoptosis/necrosis of the combination therapy group was more than 37%. Our results showed that Au-Pt NPs significantly affect apoptosis of 4T1 cells treated with X-rays compared with the control groups (Figure 3B and C).Figure 3(A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01). (A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01). 
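For readers who want to reproduce the clonogenic analysis summarized in Figure 2D, the sketch below shows the usual surviving-fraction bookkeeping, a linear-quadratic (LQ) fit, and how a sensitizer enhancement ratio could be read off. The seeding numbers and colony counts are hypothetical placeholders, not the study's raw data, so the fitted parameters are illustrative only.

```python
import numpy as np

# Minimal sketch of clonogenic-assay bookkeeping and a linear-quadratic (LQ) fit,
# the standard way survival curves like Figure 2D are summarized. Colony counts
# and seeding numbers are hypothetical; they are not the study's raw data.

doses = np.array([0, 2, 4, 6, 8], dtype=float)                  # Gy
seeded = np.array([200, 200, 400, 1000, 2000], dtype=float)     # cells plated
colonies_xray = np.array([160, 110, 120, 130, 80], dtype=float)
colonies_combo = np.array([160, 70, 48, 30, 10], dtype=float)   # with Au-Pt NPs

def surviving_fraction(colonies, seeded):
    pe = colonies[0] / seeded[0]              # plating efficiency at 0 Gy
    return (colonies / seeded) / pe

def lq_fit(doses, sf):
    # -ln(SF) = alpha*D + beta*D^2 is linear in (alpha, beta)
    d = doses[1:]                             # skip 0 Gy (SF = 1 by definition)
    y = -np.log(sf[1:])
    A = np.column_stack([d, d ** 2])
    alpha, beta = np.linalg.lstsq(A, y, rcond=None)[0]
    return alpha, beta

def dose_for_sf(alpha, beta, target=0.1):
    # solve alpha*D + beta*D^2 = -ln(target) for D > 0
    c = -np.log(target)
    return (-alpha + np.sqrt(alpha ** 2 + 4 * beta * c)) / (2 * beta)

for label, colonies in [("X-ray", colonies_xray), ("X-ray + Au-Pt NPs", colonies_combo)]:
    sf = surviving_fraction(colonies, seeded)
    alpha, beta = lq_fit(doses, sf)
    print(f"{label}: alpha={alpha:.3f}, beta={beta:.4f}, D(SF=0.1)={dose_for_sf(alpha, beta):.2f} Gy")
```

A sensitizer enhancement ratio at 10% survival would then be the ratio of the two D(SF = 0.1) values (about 1.5 with these made-up counts).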
According to the MTT assay, the Au-Pt NPs exhibited negligible toxicity to human umbilical endothelial vein cells (HUEVC) and 4T1 cells at different concentrations of Au-Pt NPs at 24 h after treatment (Figure 2A). Furthermore, X-Rays alone (0–4 Gy) also exhibited limited toxicity to HUEVC cells and 4T1 cells (Figure 2B). On the other hand, the results showed that cells cultured without Au-Pt NPs exhibited excellent viability after irradiation with X-rays. The MTT assay revealed that Au-Pt NPs are an excellent radiosensitizer for X-ray-triggered in vitro cancer RT (Figure 2C). The relative cell viability of the 4T1 cells after irradiation and 24 h after treatment with Au-Pt NPs decreased greatly compared to X-Ray treatment alone. When combined with Au-Pt NPs treatment, the relative cell viability of groups irradiated with X-rays (4 Gy) decreased to 45% (with 100μg/mL Au-Pt NPs) and 22% (with 200μg/mL Au-Pt NPs). The colony formation assay revealed that at the same X-ray irradiation intensity, the clone formation numbers were reduced in the presence of Au-Pt NPs, which demonstrated its enhanced antigrowth effects (Figure 2D). The double-strand DNA breaks were evaluated by γ-H2AX staining because these breaks are considered to be the major cause of X-ray induced cell death.39 Low signals of the γ-H2AX foci were detected for cells incubated with PBS and Au-Pt NPs alone. In contrast, a higher level of DNA damage was observed for cells irradiated with X-rays. However, cells irradiated with X-rays and treated with Au-Pt NPs appeared to have a higher level of DNA damage, which was further evidencing of the radio-sensitizing function of our nanoparticles (Figure 2E). All of these assays indicated that the cooperation of Au-Pt NPs and irradiation with X-rays can inhibit the proliferation of cancer cells.Figure 2Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve was generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. Furthermore, the pathways of cell death were assessed using live/dead dye and the Annexin V-FITC Apoptosis Detection Kit. The 4T1 cells were stained with live/dead dye as described in the surface marker staining protocol above. The signals of the PI (red) showed a dramatic increase in tumors injected with Au-Pt NPs and irradiated with X-rays compared to the tumors treated with FBS and Au-Pt NPs alone (Figure 3A). A flow cytometry-based apoptosis assay was performed to determine whether there is synergy between the Au-Pt NPs and X-rays in affecting apoptosis. The apoptosis/necrosis of the 4T1 cells treated with PBS, and X-rays were about 9.12% and 25%, respectively. The apoptosis/necrosis of the combination therapy group was more than 37%. 
Our results showed that Au-Pt NPs significantly increase the apoptosis of 4T1 cells treated with X-rays compared with the control groups (Figure 3B and C). Figure 3 (A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01). Pharmacokinetics and Biodistribution At different time points after the intravenous (i.v.) injection of Au-Pt NPs, mouse blood samples were collected and the metal content was measured (Figure 4A). The nanoparticles showed a blood circulation half-life of t1/2 = 0.5 h. The ICP signals were observed in many organs, especially the liver of the mice, because of clearance of the nanoparticles by the reticuloendothelial system (RES) (Figure 4B). The tumor uptake of the Au-Pt NPs was determined to be 6.5% ID/g at 24 h post injection. We used CT imaging to study the in vivo behavior of the Au-Pt NPs because they can be detected by CT imaging (Figure S3). Mice bearing 4T1 tumors were thus injected intravenously with Au-Pt NPs and then imaged with a CT imaging system. A time-dependent increase in the CT signal (Figure 4C) was observed in the tumor after intravenous injection of Au-Pt NPs, indicating efficient uptake of Au-Pt NPs by the tumor because of the EPR effect. Figure 4 (A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor-bearing mice with injection of Au-Pt NPs for different times. In vivo Enhanced Radiotherapy Since catalase has the ability to decompose H2O2 to H2O and O2, we assessed the ability of Au-Pt NPs to reduce tumor hypoxia in vivo using an immunofluorescent staining assay (Figure 5A). Four hours after injection of the nanoparticles, BALB/c mice bearing CT26 tumors were sacrificed and their tumors were collected for immunofluorescence staining. The signals of the hypoxia probe (pimonidazole) showed a dramatic decrease in tumors injected with Au-Pt NPs because the catalase-like activity of the nanoparticles decomposed H2O2 and generated O2 in the tumor microenvironment. By contrast, tumors from mice treated with X-rays alone, like untreated tumors, exhibited large hypoxic areas. Figure 5 (A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after the various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P < 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm). Encouraged by the marked in vitro lethality toward cancer cells achieved with the Au-Pt NPs, the therapeutic efficacy in tumor-bearing mice was then studied. The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray (8 Gy); (d) Au-Pt NPs + X-ray. When the mice were sacrificed on day 16, the tumors in the mice treated with Au-Pt NPs plus X-rays were smaller than in any of the other groups (Figure S4). Compared to X-ray treatment alone, which resulted in only partly delayed tumor growth (tumor volume: 459.5±76.09 mm3; tumor weight: 0.3065±0.049 g), treatment with Au-Pt NPs at the same level of X-ray irradiation resulted in about a twofold increase in the inhibitory effect on tumor growth (tumor volume: 231.7±63.35 mm3; tumor weight: 0.1662±0.032 g) (Figure 5B and C).
In addition, the mice treated with Au-Pt NPs exhibited no loss of body weight, indicating that there were almost no short-term side effects (Figure 5D). Moreover, H&E staining of the main organs from the mice showed no pathological changes, indicating no systemic toxicity (Figure S5). H&E staining was also conducted to investigate morphological changes and apoptosis of the cancer cells in the different treatment groups. As illustrated in Figure 5E, a large number of morphological changes and extensive apoptosis of the cancer cells were dispersed throughout the whole tumor tissue from mice treated with Au-Pt NPs. Taken together, the Au-Pt NPs amplified the killing effect of X-rays, which led to effective suppression of tumor growth without any distinct damage to surrounding non-tumor tissues. Synthesis and Characterization: The synthesis procedures are shown in Scheme 1. The Au-Pt NPs were synthesized in a one-pot method through the co-reduction of H2PtCl6 and HAuCl4 by L-ascorbic acid, which also acts as a chelating agent to slow the kinetics of the reduction reaction. To improve the biocompatibility of the Au-Pt NPs, polyethylene glycol-grafted poly(maleic anhydride-alt-1-octadecene) (C18-PMH-PEG5k) was used to modify them via stacking. According to the TEM images (Figure 1A), the Au-Pt NPs were uniformly dispersed and formed spherical aggregations with a diameter of about 150 ± 20 nm. The ultraviolet absorption spectrum showed no characteristic peak for the Au-Pt NPs (Figure S1). The energy-dispersive spectroscopy (EDS) elemental mapping images (Figure 1B) showed that the Au and Pt atoms were uniformly distributed in the spherical aggregations. Au and Pt, as high-atomic-number elements, would absorb, scatter, and emit radiation energy to enhance the radiation dose deposition at the interface with the surrounding tissue.35 The prepared Au-Pt NPs maintained uniform dispersion and good stability in 10% fetal bovine serum (FBS), which indicated potential for their biological application (Figure 1C and E). The zeta potentials of Au-Pt (−18.9 mV) and Au-Pt NPs (−21.1 mV) were similar (Figure 1D). Scheme 1 Schematic illustrations for the fabrication of Au-Pt NPs and synergistic radiotherapy employing the tumor microenvironment. Figure 1 The schematic procedure and characterization of Au-Pt NPs. (A) TEM image, (B) the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and elemental mapping images, (C) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS measured by DLS, (D) Zeta potential of Au-Pt NPs, (E) Hydrodynamic diameters of Au-Pt NPs in 10% FBS and PBS at different times, (F) Oxygen generation in H2O2 solution (3.2 mM) with different concentrations of Au-Pt NPs at pH 6.75. More importantly, we found that Au-Pt NPs catalyzed the decomposition of H2O2 to O2. After different concentrations of Au-Pt NPs were added to H2O2 solutions (pH 6.75), the change in the oxygen concentration was measured using a portable dissolved oxygen meter.
The oxygen content did not change in the presence of H2O2 alone. However, the oxygen production rate increased gradually with increasing concentrations of Au-Pt NPs, which indicated the excellent CAT-like activity of the Au-Pt NPs (Figures 1F and S2). Excess reactive oxygen species (such as H2O2 derived from cancer cells) are one of the oxidative stresses within the tumor microenvironment, and the accompanying hypoxia stimulates autophagy and glycolysis through HIF1α and NFκB signaling, which promotes tumor growth and metastasis.3,36 Increasing the decomposition of H2O2 in the TME by various nanomaterials would therefore increase the therapeutic effect of treating tumors with phototherapy, chemodynamic therapy, and radiotherapy.37,38 In vitro Enhanced Radiotherapy: According to the MTT assay, the Au-Pt NPs exhibited negligible toxicity to human umbilical vein endothelial cells (HUEVC) and 4T1 cells at different concentrations at 24 h after treatment (Figure 2A). Furthermore, X-rays alone (0–4 Gy) also exhibited limited toxicity to HUEVC cells and 4T1 cells (Figure 2B), and cells cultured without Au-Pt NPs retained excellent viability after irradiation with X-rays. In contrast, the MTT assay revealed that Au-Pt NPs act as an effective radiosensitizer for X-ray-triggered in vitro cancer RT (Figure 2C): the relative viability of 4T1 cells measured 24 h after irradiation in the presence of Au-Pt NPs decreased greatly compared to X-ray treatment alone. When combined with Au-Pt NPs treatment, the relative cell viability of groups irradiated with X-rays (4 Gy) decreased to 45% (with 100 μg/mL Au-Pt NPs) and 22% (with 200 μg/mL Au-Pt NPs). The colony formation assay revealed that, at the same X-ray irradiation intensity, the number of clones formed was reduced in the presence of Au-Pt NPs, which demonstrated their enhanced antigrowth effect (Figure 2D). Double-strand DNA breaks were evaluated by γ-H2AX staining because these breaks are considered to be the major cause of X-ray-induced cell death.39 Low signals of γ-H2AX foci were detected for cells incubated with PBS or Au-Pt NPs alone. In contrast, a higher level of DNA damage was observed for cells irradiated with X-rays, and cells irradiated with X-rays and treated with Au-Pt NPs showed an even higher level of DNA damage, further evidencing the radio-sensitizing function of our nanoparticles (Figure 2E). All of these assays indicated that the combination of Au-Pt NPs and X-ray irradiation can inhibit the proliferation of cancer cells. Figure 2 Relative cell viability of HUEVC cells and 4T1 cells treated with Au-Pt NPs (A) and irradiation (B) for 24 h. (C) Relative cell viability of 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. (D) Clonogenic cell survival curve generated for 4T1 cells. (E) Representative immunofluorescence images of phosphorylated H2AX (γH2AX) in 4T1 cells after irradiation for 24 h treated with Au-Pt NPs. Furthermore, the pathways of cell death were assessed using a live/dead dye and the Annexin V-FITC Apoptosis Detection Kit.
The 4T1 cells were stained with live/dead dye as described in the surface marker staining protocol above. The PI (red) signal showed a dramatic increase in cells treated with Au-Pt NPs and irradiated with X-rays compared to cells treated with PBS or Au-Pt NPs alone (Figure 3A). A flow cytometry-based apoptosis assay was performed to determine whether there is synergy between Au-Pt NPs and X-rays in inducing apoptosis. The apoptosis/necrosis rates of 4T1 cells treated with PBS or X-rays alone were about 9.12% and 25%, respectively, whereas the apoptosis/necrosis rate in the combination therapy group was more than 37%. Our results showed that Au-Pt NPs significantly increase the apoptosis of 4T1 cells treated with X-rays compared with the control groups (Figure 3B and C). Figure 3 (A) LIVE/DEAD staining of 4T1 cells by staining with FDA (green) and PI (red) after different treatments (scale bar: 200 μm). (B, C) Flow cytometric analysis of the apoptosis of 4T1 cells by staining with Annexin V-FITC and PI after different treatments (*P < 0.01). Pharmacokinetics and Biodistribution: At different time points after the intravenous (i.v.) injection of Au-Pt NPs, mouse blood samples were collected and the metal content was measured (Figure 4A). The nanoparticles showed a blood circulation half-life of t1/2 = 0.5 h. The ICP signals were observed in many organs, especially the liver of the mice, because of clearance of the nanoparticles by the reticuloendothelial system (RES) (Figure 4B). The tumor uptake of the Au-Pt NPs was determined to be 6.5% ID/g at 24 h post injection. We used CT imaging to study the in vivo behavior of the Au-Pt NPs because they can be detected by CT imaging (Figure S3). Mice bearing 4T1 tumors were thus injected intravenously with Au-Pt NPs and then imaged with a CT imaging system. A time-dependent increase in the CT signal (Figure 4C) was observed in the tumor after intravenous injection of Au-Pt NPs, indicating efficient uptake of Au-Pt NPs by the tumor because of the EPR effect. Figure 4 (A) Inductively Coupled Plasma Optical Emission Spectrometer (ICP-OES) analysis of Au-Pt NPs after tail intravenous injection for different times. (B) Organ biodistribution according to ICP-OES signal (N = 3 animals) with Au-Pt NPs. Data represent mean ± s.d. (C) The thermal images of 4T1 tumor-bearing mice with injection of Au-Pt NPs for different times. In vivo Enhanced Radiotherapy: Since catalase has the ability to decompose H2O2 to H2O and O2, we assessed the ability of Au-Pt NPs to reduce tumor hypoxia in vivo using an immunofluorescent staining assay (Figure 5A). Four hours after injection of the nanoparticles, BALB/c mice bearing CT26 tumors were sacrificed and their tumors were collected for immunofluorescence staining.
The signals of the hypoxia probe (pimonidazole) showed a dramatic decrease in tumors injected with Au-Pt NPs because the catalase-like activity of the nanoparticles decomposed H2O2 and generated O2 in the tumor microenvironment. By contrast, tumors from mice treated with X-rays alone, like untreated tumors, exhibited large hypoxic areas. Figure 5 (A) Representative immunofluorescence images of tumor slices after hypoxia staining (scale bar: 100 μm). The hypoxia areas were stained by anti-pimonidazole antibody (green, FITC). (B) Relative tumor volume after the various treatments indicated. (C) Tumor weight of sacrificed mice at Day 16 (*P < 0.01). (D) Change curves for the body weight of mice. (E) H&E staining of tumor tissues at Day 16 (scale bar: 100 μm). Encouraged by the marked in vitro lethality toward cancer cells achieved with the Au-Pt NPs, the therapeutic efficacy in tumor-bearing mice was then studied. The 4T1 tumor-bearing BALB/c mice were randomly allocated to four groups: (a) control group (PBS); (b) Au-Pt NPs; (c) X-ray (8 Gy); (d) Au-Pt NPs + X-ray. When the mice were sacrificed on day 16, the tumors in the mice treated with Au-Pt NPs plus X-rays were smaller than in any of the other groups (Figure S4). Compared to X-ray treatment alone, which resulted in only partly delayed tumor growth (tumor volume: 459.5±76.09 mm3; tumor weight: 0.3065±0.049 g), treatment with Au-Pt NPs at the same level of X-ray irradiation resulted in about a twofold increase in the inhibitory effect on tumor growth (tumor volume: 231.7±63.35 mm3; tumor weight: 0.1662±0.032 g) (Figure 5B and C). In addition, the mice treated with Au-Pt NPs exhibited no loss of body weight, indicating that there were almost no short-term side effects (Figure 5D). Moreover, H&E staining of the main organs from the mice showed no pathological changes, indicating no systemic toxicity (Figure S5). H&E staining was also conducted to investigate morphological changes and apoptosis of the cancer cells in the different treatment groups. As illustrated in Figure 5E, a large number of morphological changes and extensive apoptosis of the cancer cells were dispersed throughout the whole tumor tissue from mice treated with Au-Pt NPs. Taken together, the Au-Pt NPs amplified the killing effect of X-rays, which led to effective suppression of tumor growth without any distinct damage to surrounding non-tumor tissues. Conclusions: In summary, we developed Au-Pt NPs via a simple method; these nanoparticles simultaneously possess metal nanoparticle-mediated radio-sensitization and enzyme-mimicking catalytic activity. A general mechanism in nanoparticle-based radiotherapy was confirmed: heavy metal NPs possess a high X-ray photon capture cross-section and Compton scattering effect, which escalates the damage to DNA in tumor cells. Furthermore, our Au-Pt NPs increased the decomposition of H2O2 in the TME, alleviated the hypoxic status of the tumors treated with X-rays, and enhanced the efficacy of radiotherapy.
More importantly, in vitro and in vivo data showed that the Au-Pt NPs could effectively inhibit tumor growth with no significant side effects on normal tissues and organs. Therefore, our work provides a novel, low-toxicity radiosensitizer for enhanced tumor radiotherapy via attenuation of hypoxia. All completed clinical trials have demonstrated the safety of gold-based nanomaterials.40 In a recent clinical trial, gold-silica nanoshells (GSNs) were designed and used for ablating prostate tumors in patients without any obvious treatment-related side effects.41 Our findings further highlight the great potential of Au-Pt NPs for the future clinical treatment of cancers.
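As a rough consistency check on the catalase-like oxygen generation described above (H2O2 at 3.2 mM, Figure 1F), the reaction 2 H2O2 -> 2 H2O + O2 sets a stoichiometric ceiling on the O2 that can be produced. The short calculation below assumes complete decomposition in a closed, fully dissolved system, which is an idealization rather than the measured result.

    # Stoichiometric upper bound for O2 from catalase-like decomposition of H2O2
    # (2 H2O2 -> 2 H2O + O2). Assumes complete decomposition and no gas escape.
    h2o2_mM = 3.2                  # starting H2O2 concentration used in Figure 1F
    o2_mM_max = h2o2_mM / 2        # 1 mol O2 per 2 mol H2O2
    o2_mg_per_L = o2_mM_max * 32.0 # molar mass of O2 is 32 g/mol, so mM -> mg/L
    # Note: ~51 mg/L far exceeds typical dissolved-O2 saturation (~8 mg/L at room
    # temperature), so in practice O2 would outgas and a meter reading would plateau.
    print(f"Max O2: {o2_mM_max:.1f} mM (about {o2_mg_per_L:.1f} mg/L)")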
Background: Radiotherapy occupies an essential position as one of the most significant approaches for the clinical treatment of cancer. However, the shortcoming of X-rays, namely the high value of the oxygen enhancement ratio (OER), has not been overcome. Radiosensitizers with the ability to enhance the radiosensitivity of tumor cells provide an alternative to switching from X-rays to proton or heavy-ion radiotherapy. Methods: We prepared the Au-Pt nanoparticles (Au-Pt NPs) using a one-step method. The characteristics of the Au-Pt NPs were determined using TEM, HAADF-STEM, elemental mapping images, and DLS. The enhanced radiotherapy was demonstrated in vitro using MTT assays, colony formation assays, fluorescence imaging, and flow cytometric analyses of apoptosis. The biodistribution of the Au-Pt NPs was analyzed using ICP-OES and thermal images. The enhanced radiotherapy was demonstrated in vivo using immunofluorescence images, tumor volume and weight, and hematoxylin & eosin (H&E) staining. Results: Polyethylene glycol (PEG) functionalized nanoparticles composed of the metallic elements Au and Pt were designed to increase synergistic radiosensitivity. The mechanism demonstrated that heavy metal NPs possess a high X-ray photon capture cross-section and Compton scattering effect, which increased DNA damage. Furthermore, the Au-Pt NPs exhibited enzyme-mimicking activities by catalyzing the decomposition of endogenous H2O2 to O2 in the solid tumor microenvironment (TME). Conclusions: Our work provides a systemically administered radiosensitizer that can selectively reside in a tumor via the EPR effect and enhances the efficiency of treating cancer with radiotherapy.
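For readers unfamiliar with the oxygen enhancement ratio cited in the Background, the standard, study-independent definition compares the doses required for the same biological effect under hypoxic and well-oxygenated conditions; for sparsely ionizing X-rays the OER is typically about 2.5 to 3, which is why relieving tumor hypoxia is expected to boost radiotherapy:

    \mathrm{OER} = \left. \frac{D_{\text{hypoxic}}}{D_{\text{oxygenated}}} \right|_{\text{iso-effect}} \approx 2.5\text{--}3 \ \text{for X-rays}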
Introduction: Radiation therapy (radiotherapy, RT) is one of the most effective therapies available for the treatment of localized solid cancers.1 During radiotherapy of a malignant tumor, biological macromolecules of the tumor cells directly interact with the radiation to generate free radicals, which cause damage to the DNA, such as strand breaks, point mutations, and aberrant DNA cross-linking.2–4 In addition, the radiation causes water molecules in the tissues to ionize and produce free radicals, which in turn interact with the biological macromolecules to cause cell damage and death.5 Theoretically, almost all types of tumors could be effectively controlled if the local radiation dose were high enough.6 However, a high dose of radiation induces severe toxicity in healthy tissues, causing side effects.7 In addition to the new radiation-delivery techniques that have improved the therapeutic effect, highly effective and low-toxicity radiosensitizers are necessary. Improvements in physical technology, for example brachytherapy, intensity-modulated RT, image-guided RT, and stereotactic RT, have greatly improved radiation therapy.8–11 However, many problems still remain, such as the complex tumor microenvironment containing cancer stem cells, vasculogenesis, and metabolic alterations.12–14 In addition to tumor size, the intrinsic radio-sensitivity, cell proliferation, and the extent of hypoxia are important factors for controlling the tumor.15 Hypoxia is one of the most important indicators and is related to cancer therapy resistance.16 Hypoxia activates the Nrf2 pathway, which leads to anti-oxidant and anti-inflammatory responses with an aggregation of M2 macrophages and MDSCs that protect the tumor.5 Owing to the hypoxic microenvironment of the tumor, the therapeutic efficiency of chemotherapy, radiotherapy, and other treatments is reduced. In recent years, different strategies, including physical stimulation (e.g., fractionated irradiation) and chemical reactions, have been developed for relieving tumor hypoxia.17,18 However, it is difficult to prevent the growth of a tumor using a monotherapy. Because of its reduced side effects and enhanced therapeutic effect, synergistic therapy is used to overcome the insufficient tumor suppression achieved with monotherapy. Radiosensitizers are molecules/materials with the ability to enhance the radiosensitivity of tumor cells and provide opportunities to overcome the obstacles mentioned above.2 There are many kinds of radiosensitizers, namely small-molecule chemicals, macromolecules, and nanostructures.19 One of the most important mechanisms for increasing radiosensitivity is to change the hypoxic microenvironment via different kinds of radiosensitizers.
Many tumors are resistant to radiation therapy because they lack oxygen within the tumor due to abnormal or dysfunctional blood vessels.20 This state of hypoxia leads to less DNA damage at the same radiation dose.21 Therefore, oxygen is a well-known radiosensitizer.4 Chemicals and hypoxia-targeting cytotoxins were developed for clinical use because of their capacity to substitute for oxygen through a mechanism of damage fixation utilizing their electron affinity.22,23 Macromolecules are able to regulate radio-sensitivity by binding to DNA, miRNAs, mRNAs, siRNAs, proteins, or peptides.24 Recently, the introduction of nanotechnology has provided momentum and expanded the horizon for the development of radiosensitizers.25 In recent years, heavy-metal nanomaterials with high atomic numbers (Z) have shown promise as radiosensitizers because they can absorb, scatter, and emit radiation energy.26–28 Gold nanomaterials (AuNPs) are excellent candidates because they have tunable morphologies, are easy to modify, have good biological safety, and possess exceptional radiation sensitization capabilities.29–32 Furthermore, AuNPs are well suited to drug delivery and cancer therapy.33 In addition, various catalase-like nanomaterials have been developed for catalyzing the decomposition of hydrogen peroxide (H2O2) into oxygen, which alleviates tumor hypoxia and overcomes hypoxia-induced radioresistance.34 Therefore, developing a simple and effective strategy for relieving the hypoxic tumor microenvironment is essential to improving the therapeutic efficacy of radiotherapy. In the present study, we designed a nanoparticle composed of the metallic elements Au and Pt (Au-Pt NPs). The Au-Pt NPs exhibited enzyme-mimicking activities, which resulted in synergistic radiosensitivity for reinforced tumor therapy. The new Au-Pt nanozymes were synthesized in one step at room temperature. The nanoparticles produced had a uniform nanosphere shape and were very stable in physiological solutions. After intravenous administration, the Au-Pt NPs passively targeted the tumor and rapidly accumulated in it because of the enhanced permeability and retention (EPR) effect. The Au-Pt nanozymes not only possess a radiosensitizing ability to enhance the energy deposition of the X-rays but are also able to catalyze the reaction that converts H2O2 to O2. In addition to attenuating tumor hypoxia, the Au-Pt NPs significantly inhibited tumor growth under 8 Gy of X-ray irradiation compared to the control group. This demonstrated that the Au-Pt NPs improved the therapeutic efficiency of radiotherapy. Therefore, our work developed a novel radiosensitizer with low toxicity for enhanced tumor radiotherapy by alleviating the tumor hypoxic microenvironment. Our findings highlight the great potential of applying Au-Pt NPs in the future clinical treatment of cancer. Conclusions: In summary, we developed Au-Pt NPs via a simple method; these nanoparticles simultaneously possess metal nanoparticle-mediated radio-sensitization and enzyme-mimicking catalytic activity. A general mechanism in nanoparticle-based radiotherapy was confirmed: heavy metal NPs possess a high X-ray photon capture cross-section and Compton scattering effect, which escalates the damage to DNA in tumor cells. Furthermore, our Au-Pt NPs increased the decomposition of H2O2 in the TME, alleviated the hypoxic status of the tumors treated with X-rays, and enhanced the efficacy of radiotherapy.
More importantly, in vitro and in vivo data showed that the Au-Pt NPs could effectively inhibit tumor growth with no significant side effects on normal tissues and organs. Therefore, our work provides a novel, low-toxicity radiosensitizer for enhanced tumor radiotherapy via attenuation of hypoxia. All completed clinical trials have demonstrated the safety of gold-based nanomaterials.40 In a recent clinical trial, gold-silica nanoshells (GSNs) were designed and used for ablating prostate tumors in patients without any obvious treatment-related side effects.41 Our findings further highlight the great potential of Au-Pt NPs for the future clinical treatment of cancers.
Background: Radiotherapy occupies an essential position as one of the most significant approaches for the clinical treatment of cancer. However, the shortcoming of X-rays, namely the high value of the oxygen enhancement ratio (OER), has not been overcome. Radiosensitizers with the ability to enhance the radiosensitivity of tumor cells provide an alternative to switching from X-rays to proton or heavy-ion radiotherapy. Methods: We prepared the Au-Pt nanoparticles (Au-Pt NPs) using a one-step method. The characteristics of the Au-Pt NPs were determined using TEM, HAADF-STEM, elemental mapping images, and DLS. The enhanced radiotherapy was demonstrated in vitro using MTT assays, colony formation assays, fluorescence imaging, and flow cytometric analyses of apoptosis. The biodistribution of the Au-Pt NPs was analyzed using ICP-OES and thermal images. The enhanced radiotherapy was demonstrated in vivo using immunofluorescence images, tumor volume and weight, and hematoxylin & eosin (H&E) staining. Results: Polyethylene glycol (PEG) functionalized nanoparticles composed of the metallic elements Au and Pt were designed to increase synergistic radiosensitivity. The mechanism demonstrated that heavy metal NPs possess a high X-ray photon capture cross-section and Compton scattering effect, which increased DNA damage. Furthermore, the Au-Pt NPs exhibited enzyme-mimicking activities by catalyzing the decomposition of endogenous H2O2 to O2 in the solid tumor microenvironment (TME). Conclusions: Our work provides a systemically administered radiosensitizer that can selectively reside in a tumor via the EPR effect and enhances the efficiency of treating cancer with radiotherapy.
12,803
307
[ 2365, 203, 79, 75, 106, 318, 146, 231, 5365, 706, 883, 376, 707, 218 ]
15
[ "pt", "au", "au pt", "nps", "pt nps", "au pt nps", "tumor", "cells", "figure", "4t1" ]
[ "enhanced tumor radiotherapy", "enhance radiosensitivity tumor", "radiotherapy employing tumor", "monotherapy radiosensitizers molecules", "cancer cells irradiation" ]
null
null
[CONTENT] radiosensitizer | nano-enzyme | TME | radiotherapy [SUMMARY]
null
null
[CONTENT] radiosensitizer | nano-enzyme | TME | radiotherapy [SUMMARY]
[CONTENT] radiosensitizer | nano-enzyme | TME | radiotherapy [SUMMARY]
[CONTENT] radiosensitizer | nano-enzyme | TME | radiotherapy [SUMMARY]
[CONTENT] Animals | Catalysis | Cell Line, Tumor | Cell Survival | Female | Gold | Human Umbilical Vein Endothelial Cells | Humans | Metal Nanoparticles | Mice | Mice, Inbred BALB C | Neoplasms | Platinum | Radiation-Sensitizing Agents | Tissue Distribution | Tumor Microenvironment [SUMMARY]
null
null
[CONTENT] Animals | Catalysis | Cell Line, Tumor | Cell Survival | Female | Gold | Human Umbilical Vein Endothelial Cells | Humans | Metal Nanoparticles | Mice | Mice, Inbred BALB C | Neoplasms | Platinum | Radiation-Sensitizing Agents | Tissue Distribution | Tumor Microenvironment [SUMMARY]
[CONTENT] Animals | Catalysis | Cell Line, Tumor | Cell Survival | Female | Gold | Human Umbilical Vein Endothelial Cells | Humans | Metal Nanoparticles | Mice | Mice, Inbred BALB C | Neoplasms | Platinum | Radiation-Sensitizing Agents | Tissue Distribution | Tumor Microenvironment [SUMMARY]
[CONTENT] Animals | Catalysis | Cell Line, Tumor | Cell Survival | Female | Gold | Human Umbilical Vein Endothelial Cells | Humans | Metal Nanoparticles | Mice | Mice, Inbred BALB C | Neoplasms | Platinum | Radiation-Sensitizing Agents | Tissue Distribution | Tumor Microenvironment [SUMMARY]
[CONTENT] enhanced tumor radiotherapy | enhance radiosensitivity tumor | radiotherapy employing tumor | monotherapy radiosensitizers molecules | cancer cells irradiation [SUMMARY]
null
null
[CONTENT] enhanced tumor radiotherapy | enhance radiosensitivity tumor | radiotherapy employing tumor | monotherapy radiosensitizers molecules | cancer cells irradiation [SUMMARY]
[CONTENT] enhanced tumor radiotherapy | enhance radiosensitivity tumor | radiotherapy employing tumor | monotherapy radiosensitizers molecules | cancer cells irradiation [SUMMARY]
[CONTENT] enhanced tumor radiotherapy | enhance radiosensitivity tumor | radiotherapy employing tumor | monotherapy radiosensitizers molecules | cancer cells irradiation [SUMMARY]
[CONTENT] pt | au | au pt | nps | pt nps | au pt nps | tumor | cells | figure | 4t1 [SUMMARY]
null
null
[CONTENT] pt | au | au pt | nps | pt nps | au pt nps | tumor | cells | figure | 4t1 [SUMMARY]
[CONTENT] pt | au | au pt | nps | pt nps | au pt nps | tumor | cells | figure | 4t1 [SUMMARY]
[CONTENT] pt | au | au pt | nps | pt nps | au pt nps | tumor | cells | figure | 4t1 [SUMMARY]
[CONTENT] tumor | radiation | radiosensitizers | hypoxia | therapy | radiotherapy | macromolecules | radiosensitivity | microenvironment | therapeutic [SUMMARY]
null
null
[CONTENT] clinical | radiotherapy | possesses | gold | metal | developed | nps | based | tumor | enhanced [SUMMARY]
[CONTENT] au | pt | au pt | nps | pt nps | au pt nps | tumor | cells | mice | cell [SUMMARY]
[CONTENT] au | pt | au pt | nps | pt nps | au pt nps | tumor | cells | mice | cell [SUMMARY]
[CONTENT] ||| OER ||| [SUMMARY]
null
null
[CONTENT] EPR [SUMMARY]
[CONTENT] ||| OER ||| ||| one ||| TEM ||| MTT ||| NPs ||| hematoxylin & eosin | H&E ||| ||| ||| Au ||| NPs | Compton ||| O2 ||| EPR [SUMMARY]
[CONTENT] ||| OER ||| ||| one ||| TEM ||| MTT ||| NPs ||| hematoxylin & eosin | H&E ||| ||| ||| Au ||| NPs | Compton ||| O2 ||| EPR [SUMMARY]
The potential benefit of endothelin receptor antagonists' therapy in idiopathic pulmonary fibrosis: A meta-analysis of results from randomized controlled trials.
36221345
Fibrotic diseases take a very heavy toll in terms of morbidity and mortality equal to or even greater than that caused by metastatic cancer. This meta-analysis aimed to evaluate the effect of endothelin receptor antagonists on idiopathic pulmonary fibrosis.
BACKGROUND
A systematic search for clinical trials in the Medline, Google Scholar, Cochrane Library, and PubMed electronic databases was performed. Stata version 12.0 (StataCorp LP, College Station, TX) was used for the statistical analysis.
METHOD
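The exact search string is not reported in this abstract or in the Methods, so the snippet below is only a hypothetical illustration of the kind of Boolean keyword combination described (endothelin receptor antagonist terms combined with idiopathic pulmonary fibrosis terms); the specific terms and truncation are assumptions, and the date limits (1997 to August 2021) would be applied in the database interface.

    # Illustrative Boolean query of the type described in the Methods; the authors'
    # exact search string is not reported, so this term is hypothetical.
    drug_terms = ["endothelin receptor antagonist*", "bosentan", "ambrisentan", "macitentan"]
    disease_terms = ["idiopathic pulmonary fibrosis", "cryptogenic fibrosing alveolitis"]

    query = "({}) AND ({})".format(
        " OR ".join(f'"{t}"' for t in drug_terms),
        " OR ".join(f'"{t}"' for t in disease_terms),
    )
    print(query)  # paste into PubMed/Medline; apply 1997 to August 2021 limits in the interface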
A total of 5 studies, including 1500 participants, were analyzed. Our analysis found no significant difference between the endothelin receptor antagonist and placebo groups regarding lung function, as estimated by both the change in forced vital capacity from baseline and the DLco index. Exercise capacity and serious adverse effects were also taken into consideration; however, there was still no significant difference between the 2 groups.
RESULT
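One quick way to see why these pooled estimates are non-significant is that every reported 95% confidence interval spans the null value. The sketch below back-calculates an approximate z statistic and two-sided p value from a weighted mean difference and its 95% CI under a normal approximation, using the FVC (L) estimate reported later in the Results (WMD 0.028, 95% CI −0.158 to 0.214) as the worked input; this is a generic illustration, not the authors' computation.

    # Back-calculate an approximate z statistic and two-sided p value from a point
    # estimate and its 95% CI (normal approximation). Inputs are the FVC (L) WMD
    # reported in the Results section of this meta-analysis.
    from math import erf, sqrt

    def p_from_ci(estimate, lower, upper):
        se = (upper - lower) / (2 * 1.96)                    # half-width of the 95% CI / 1.96
        z = estimate / se
        p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))      # two-sided normal p value
        return z, p

    z, p = p_from_ci(0.028, -0.158, 0.214)
    print(f"z = {z:.2f}, p = {p:.2f}")   # CI crosses zero, so p > 0.05, consistent with the report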
This meta-analysis provides insufficient evidence to support a benefit of endothelin receptor antagonist administration in participants with idiopathic pulmonary fibrosis.
CONCLUSION
[ "Endothelin Receptor Antagonists", "Humans", "Idiopathic Pulmonary Fibrosis", "Randomized Controlled Trials as Topic", "Respiratory Function Tests", "Vital Capacity" ]
9543018
1. Introduction
Interstitial lung diseases (ILDs) represent a large group of diseases that cause scarring (fibrosis) of the lungs,[1] which causes stiffness, making it difficult to breathe and to get oxygen into the bloodstream.[2] Of the ILDs, idiopathic pulmonary fibrosis, also known as cryptogenic fibrosing alveolitis, is the most common and the most fatal. It is characterized by the aberrant accumulation of epithelial and endothelial damage progressing to fibrosis of the lung parenchyma[3] and by the pathological hallmark of usual interstitial pneumonia.[4] Globally, idiopathic pulmonary fibrosis affects more than 5 million people every year.[5] Even though idiopathic pulmonary fibrosis is considered an uncommon disease, it still places a severe economic and psychological burden on each affected family.[6] The progression of idiopathic pulmonary fibrosis is generally manifested by a step-wise decline in pulmonary function, measured as forced vital capacity (FVC), with worsening dyspnea and a high degree of morbidity.[7,8] Endothelin receptor antagonists are potent vasodilator and antimitotic substances that specifically dilate and remodel the pulmonary arterial system.[9] Bosentan, a specific dual-receptor antagonist of both endothelin (ET) receptor subtypes (ETA and ETB), was first developed for the treatment of pulmonary arterial hypertension[10] and congestive heart failure.[11] By blocking the ET receptors, bosentan has been shown to decrease pulmonary vascular resistance and to retard the pathogenic manifestations in the bleomycin-induced pulmonary fibrosis model,[12] reducing collagen deposition in the lungs. Although many medicines have emerged in recent years to treat this disease, along with increasing knowledge about the underlying mechanisms of idiopathic pulmonary fibrosis, no therapy is yet available that effectively influences its course. Thus, we performed a study to evaluate the efficacy and safety of endothelin receptor antagonists in patients with idiopathic pulmonary fibrosis.
2. Materials and Methods
2.1. Ethical review All analyses were conducted according to available published literature; thus, no ethical approval or patient consent was required. 2.2. Literature search strategy We systematically searched the electronic literature in the Medline, Embase, Cochrane Library, and PubMed databases following the recommended guidelines of the Preferred Reporting Items for Systematic Review and Meta-analysis.[13] All 4 databases were scanned from 1997, when bosentan was first implicated in the pathogenesis of pulmonary fibrosis in a rodent model, until August 2021 for the keywords of endothelin receptor antagonists and idiopathic pulmonary fibrosis in combination with Boolean logic.[12] We also performed a manual search of the references cited in relevant review articles.[14] After the original search, the relevant articles and their references were screened manually by 2 authors. 2.3. Inclusion and exclusion criteria The following predefined inclusion criteria were used: (i) population: participants with idiopathic pulmonary fibrosis; (ii) intervention: patients strictly treated with endothelin receptor antagonists; (iii) comparison intervention: use of endothelin receptor antagonists compared to a placebo group; (iv) outcome measures: one or more of the following clinical outcomes were reported: forced vital capacity (FVC), diffusion capacity of the lung for carbon monoxide (DLco), 6-minute walk distance test (6MWD), and serious adverse events; (v) officially published prospective and retrospective studies in English. The exclusion criteria were as follows: (i) conference or commentary articles and letters; (ii) atypical patients and outcome data; (iii) case reports and case series; (iv) animal studies. 2.4.
Data extraction and outcome measures Two researchers independently recorded the following essential information for each study: the first author, publication year, follow-up years, study design, sampling method, endpoints, and study characteristics including the number of participants, mean age, gender ratio, and country. All disagreements were discussed until a final decision was reached. The outcome measurements were the following: FVC, DLco, 6-minute walk distance test, and incidence of serious adverse events. 2.5. Quality assessment The Cochrane Collaboration tool was applied to evaluate the risk of bias in all included literature. Specifically, each trial was assessed for random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias.[15] 2.6. Statistical analysis The analysis, design, and reporting of the meta-analysis were carried out using Stata version 12.0 (StataCorp LP, College Station, TX). The weighted mean difference (WMD) or odds ratios (ORs) with the corresponding 95% confidence intervals (95% CIs) were used as measures of the treatment effect of the endothelin receptor antagonists. Heterogeneity of the included studies was assessed with the Higgins I-square (I2) statistic. I2 values over 25% and over 75% were considered to indicate moderate and significant heterogeneity, respectively. If I2 was under 25%, the endpoint was considered homogeneous, and we ran the meta-analysis using a fixed-effect model according to the Cochrane Handbook for Systematic Reviews of Interventions.
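The fixed-effect approach described here can be made concrete with a minimal inverse-variance sketch. The per-study means, standard deviations, and sample sizes below are placeholders rather than data from the included trials, and a real analysis would typically use Stata's metan command or an equivalent routine instead of hand-rolled code.

    # Minimal fixed-effect (inverse-variance) pooling of mean differences with
    # Cochran's Q and I^2, mirroring the approach described above. The per-study
    # numbers are placeholders, not data from the included trials.
    import math

    # (mean_trt, sd_trt, n_trt, mean_ctl, sd_ctl, n_ctl) per study -- hypothetical
    studies = [
        (-0.10, 0.45, 80, -0.14, 0.50, 78),
        (-0.08, 0.40, 150, -0.11, 0.42, 148),
        (-0.12, 0.55, 60, -0.10, 0.52, 62),
    ]

    mds, weights = [], []
    for m1, s1, n1, m0, s0, n0 in studies:
        md = m1 - m0
        se = math.sqrt(s1**2 / n1 + s0**2 / n0)   # standard error of the mean difference
        mds.append(md)
        weights.append(1 / se**2)                 # inverse-variance weight

    pooled = sum(w * d for w, d in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, mds))   # Cochran's Q
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0            # I^2 statistic

    print(f"WMD = {pooled:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f}), I^2 = {i2:.0f}%")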
3.1. Search result
A total of 143 records were identified from the databases by the search strategy; the selection process, in accordance with the inclusion and exclusion criteria, is displayed in the following diagram (Fig. 1). After removing duplicate or irrelevant articles, 49 eligible studies remained, of which 11 articles meeting our inclusion criteria were retrieved for full-text evaluation. Finally, 5 studies[16–20] were combined in the present quantitative synthesis. Flowchart of the study selection process.
5. Conclusion
Shuang Li and Chuanhua Yan designed and conceptualized the article. Shuang Li and Wenqiang Xin prepared the figures and tables. All authors significantly contributed to writing the paper and provided important intellectual content.
[ "2.1. Ethical review", "2.2. Literature search strategy", "2.3. Inclusion and exclusion criteria", "2.4. Data extraction and outcome measures", "2.5. Quality assessment", "2.6. Statistical analysis", "3. Results", "3.2. Characteristics of included studies", "3.3. Quality assessment", "3.4. The outcome of the meta-analysis", "3.5. Lung function", "3.6. Exercise capacity and serious adverse effects", "5. Conclusion" ]
[ "All analyses were conducted according to available published literature; thus, no ethical approval or patient consent was required.", "We systematically performed electronic literature from Medline, Embase, Cochrane Library, and PubMed electronic databases following the recommended guidelines of the Preferred Reporting Items for Systematic Review and Meta-analysis.[13] All 4 databases were scanned from 1997 when bosentan was first indicated in the pathogenesis of pulmonary fibrosis in the rodent model until August 2021 for the keywords of endothelin receptor antagonists and idiopathic pulmonary fibrosis in combination with Boolean logic.[12] We also performed a manual search of the references cited in relevant review articles.[14] After original searching, the relevant and their references were searched manually by 2 authors.", "The following predefined inclusion criteria were used: (i) population: participant with idiopathic pulmonary fibrosis; (ii) intervention: patients strictly treated with endothelin receptor antagonists; (iii) comparison intervention: use of endothelin receptor antagonists compared to placebo group; (iv) outcome measures: one or more of the clinical outcomes were reported, forced vital capacity (FVC), diffusion capacity of the lung for carbon monoxide (DLco), 6-minute walk distance test (6MWD), and serious adverse events; (v) official published studies in English prospective and retrospective studies.\nThe exclusion criteria were listed as follows: (i) conference or commentary articles and letters; (ii) atypical patients and outcome data; (iii) case report and case series; (iv) animal observation.", "Two researchers independently provided a detailed record for each study for the following essential information: the first author of the study, publication year, follow-up years, study design, sampling method, endpoints, study characteristics including the number of populations, mean age, gender ratio, and country. All disagreements were discussed and reached the final decision. The outcome measurements were the following: FVC, DLco, 6-minute walk distance test, and incidence of serious adverse events.", "The Cochrane Collaboration tool was applied to evaluate the risk of bias in all involved literature. Specifically, each trial was assessed for random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias.[15]", "The analysis, design, and reporting for meta-analysis were carried out using Stata version 12.0 statistical software (Stata Crop LP, College Station, TX) as statistical software. The weighted mean difference (WMD) or Odds ratios (ORs) with the corresponding 95% confidence intervals (95% CIs) were used as measures of the treatment effect of bosentan. Heterogeneity of the observed studies was accessed with the Higgins I-square (I2) value. I2 over 25% and 75% was considered as moderate heterogeneous or significant heterogeneity, respectively. If I2 was under 25%, the endpoint item was considered to be homogeneous, then we run a meta-analysis by using a fix-effect model according to the Cochrane Handbook for Systematic Reviews of Interventions.", "3.1. Search result A total of 143 records were identified through the database by the search strategy, which is displayed in the following diagram (Fig. 1) in accordance with the inclusion and exclusion criteria. 
After removing duplicated or irrelevant articles, 49 eligible studies remained, of which 11 articles were retrieved for full-text evaluation against our inclusion criteria. Finally, 5 studies[16–20] were combined in the present quantitative synthesis.\nFlowchart of the study selection process.\n3.2. Characteristics of included studies Detailed characteristics of the included studies are summarized in Table 1. A total of 5 randomized controlled trials (RCTs) published between 2008 and 2014 were included, comprising 1500 participants, with sample sizes ranging from 60 to 616. These trials enrolled 518 patients treated with bosentan, 119 with macitentan, and 329 with ambrisentan, all of which are endothelin receptor antagonists, compared with 534 participants in the placebo groups. Mean ages ranged from 63.8 to 66.6 years, the proportion of men varied from 68.0% to 72.7%, and only Corte et al[17] reported the pulmonary arterial pressure of the included participants. In each RCT, endothelin receptor antagonists or placebo were given after the diagnosis of idiopathic pulmonary fibrosis. Nearly all studies (four) assessed forced vital capacity and diffusion capacity of the lungs for carbon monoxide. In addition, King et al,[16] Raghu et al,[19] and Corte et al[17] also considered the 6-minute walk distance test.\nMain characteristics of the randomized controlled trials included in the meta-analysis.\nDLco = diffusion capacity of the lungs for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, PAP = pulmonary arterial pressure, 6MWD = 6-minute walk distance test, NR = not reported, ± = standard deviation.\n3.3. Quality assessment The methodological quality of all included randomized controlled trials was assessed separately by 2 reviewers using the Cochrane Collaboration tool. Most of the included RCTs were of high quality, and most showed a low risk of bias for random sequence generation, blinding of outcome assessment, incomplete outcome data, and selective reporting. The results of the quality assessment of the trials are provided in Table 2.\nCochrane Collaboration tool for quality assessment in all included trials.\nReferences\n1. King T, Behr J, Brown K, du Bois R, Lancaster L, de Andrade J, et al. BUILD-1: a randomized placebo-controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2008;177(1):75–81. doi: 10.1164/rccm.200705-732OC. PubMed PMID: 17901413.\n2. King T, Brown K, Raghu G, du Bois R, Lynch D, Martinez F, et al. BUILD-3: a randomized, controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2011;184(1):92–9. doi: 10.1164/rccm.201011-1874OC. PubMed PMID: 21474646.\n3. Raghu G, Million-Rousseau R, Morganti A, Perchenet L, Behr J. Macitentan for the treatment of idiopathic pulmonary fibrosis: the randomised controlled MUSIC trial. The European respiratory journal. 2013;42(6):1622–32. doi: 10.1183/09031936.00104612. PubMed PMID: 23682110.\n4. Raghu G, Behr J, Brown K, Egan J, Kawut S, Flaherty K, et al. Treatment of idiopathic pulmonary fibrosis with ambrisentan: a parallel, randomized trial. Annals of internal medicine. 2013;158(9):641–9. doi: 10.7326/0003-4819-158-9-201305070-00003. PubMed PMID: 23648946.\n5. Corte T, Keir G, Dimopoulos K, Howard L, Corris P, Parfitt L, et al. Bosentan in pulmonary hypertension associated with fibrotic idiopathic interstitial pneumonia. American journal of respiratory and critical care medicine. 2014;190(2):208–17. doi: 10.1164/rccm.201403-0446OC. PubMed PMID: 24937643.\n3.4. The outcome of the meta-analysis A total of 5 RCTs, including 1500 patients with idiopathic pulmonary fibrosis, were eligible for analysis. The detailed results are shown in Table 3 and listed as follows.\nThe outcomes of this meta-analysis.\nCIs = confidence intervals, DLco = diffusion capacity of the lung for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, ISAE = incidence of serious adverse events, OR = odds ratio, RD = rate difference, WMD = weighted mean difference, 6MWD = 6-minute walk distance.\n3.5. Lung function Four RCTs (1325 patients) reported FVC changes from baseline at 4 to 12 months of follow-up.[17–20] However, there was no difference between the endothelin receptor antagonist and placebo groups (WMD, −2.079; 95% CI, −2.079 to 3.471; P = .463; I2 = 0.0% for FVC % predicted, and WMD, 0.028; 95% CI, −0.158 to 0.214; P = .769; I2 = 0.0% for FVC in liters, Fig. 2). Data on DLco in idiopathic pulmonary fibrosis patients treated with endothelin receptor antagonists were also available in 4 trials (1325 patients).[17–20] The DLco % predicted in the endothelin receptor antagonist group was not significantly lower than in the control group (WMD, −1.334; 95% CI, −4.945 to 2.276; P = .469; I2 = 0.0%, Fig. 3), and the mean change from baseline in DLco (mmol·kPa−1·min−1) did not differ significantly between the 2 groups either (WMD, 0.124; 95% CI, −0.183 to 0.431; P = .427; I2 = 0.0%, Fig. 3).\nForest plot on the assessment of the forced vital capacity.\nForest plot on the assessment of the diffusion capacity of the lung for carbon monoxide.\n3.6. Exercise capacity and serious adverse effects Four studies measured the effectiveness of endothelin receptor antagonists in improving exercise capacity, assessed using the 6-minute walk distance.[16–19] However, there was no difference between the endothelin receptor antagonist group and the control group (WMD, −2.160; 95% CI, −7.996 to 3.677; P = .468). Between-trial heterogeneity was low (I2 = 21.7%), and, similarly, serious adverse events were comparable between the 2 groups (OR, 1.063; 95% CI, 0.669–1.690; P = .796).", "Detailed characteristics of the included studies are summarized in Table 1. A total of 5 randomized controlled trials (RCTs) published between 2008 and 2014 were included, comprising 1500 participants, with sample sizes ranging from 60 to 616. These trials enrolled 518 patients treated with bosentan, 119 with macitentan, and 329 with ambrisentan, all of which are endothelin receptor antagonists, compared with 534 participants in the placebo groups. Mean ages ranged from 63.8 to 66.6 years, the proportion of men varied from 68.0% to 72.7%, and only Corte et al[17] reported the pulmonary arterial pressure of the included participants. In each RCT, endothelin receptor antagonists or placebo were given after the diagnosis of idiopathic pulmonary fibrosis. Nearly all studies (four) assessed forced vital capacity and diffusion capacity of the lungs for carbon monoxide. In addition, King et al,[16] Raghu et al,[19] and Corte et al[17] also considered the 6-minute walk distance test.\nMain characteristics of the randomized controlled trials included in the meta-analysis.\nDLco = diffusion capacity of the lungs for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, PAP = pulmonary arterial pressure, 6MWD = 6-minute walk distance test, NR = not reported, ± = standard deviation.", "The methodological quality of all included randomized controlled trials was assessed separately by 2 reviewers using the Cochrane Collaboration tool. Most of the included RCTs were of high quality, and most showed a low risk of bias for random sequence generation, blinding of outcome assessment, incomplete outcome data, and selective reporting. The results of the quality assessment of the trials are provided in Table 2.\nCochrane Collaboration tool for quality assessment in all included trials.\nReferences\n1. King T, Behr J, Brown K, du Bois R, Lancaster L, de Andrade J, et al. BUILD-1: a randomized placebo-controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2008;177(1):75–81. doi: 10.1164/rccm.200705-732OC. PubMed PMID: 17901413.\n2. King T, Brown K, Raghu G, du Bois R, Lynch D, Martinez F, et al. BUILD-3: a randomized, controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2011;184(1):92–9. doi: 10.1164/rccm.201011-1874OC. PubMed PMID: 21474646.\n3. Raghu G, Million-Rousseau R, Morganti A, Perchenet L, Behr J. Macitentan for the treatment of idiopathic pulmonary fibrosis: the randomised controlled MUSIC trial. The European respiratory journal. 2013;42(6):1622–32. doi: 10.1183/09031936.00104612. PubMed PMID: 23682110.\n4. Raghu G, Behr J, Brown K, Egan J, Kawut S, Flaherty K, et al. Treatment of idiopathic pulmonary fibrosis with ambrisentan: a parallel, randomized trial. Annals of internal medicine. 2013;158(9):641–9. doi: 10.7326/0003-4819-158-9-201305070-00003. PubMed PMID: 23648946.\n5. Corte T, Keir G, Dimopoulos K, Howard L, Corris P, Parfitt L, et al. Bosentan in pulmonary hypertension associated with fibrotic idiopathic interstitial pneumonia. American journal of respiratory and critical care medicine. 2014;190(2):208–17. doi: 10.1164/rccm.201403-0446OC. PubMed PMID: 24937643.", "A total of 5 RCTs, including 1500 patients with idiopathic pulmonary fibrosis, were eligible for analysis. The detailed results are shown in Table 3 and listed as follows.\nThe outcomes of this meta-analysis.\nCIs = confidence intervals, DLco = diffusion capacity of the lung for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, ISAE = incidence of serious adverse events, OR = odds ratio, RD = rate difference, WMD = weighted mean difference, 6MWD = 6-minute walk distance.", "Four RCTs (1325 patients) reported FVC changes from baseline at 4 to 12 months of follow-up.[17–20] However, there was no difference between the endothelin receptor antagonist and placebo groups (WMD, −2.079; 95% CI, −2.079 to 3.471; P = .463; I2 = 0.0% for FVC % predicted, and WMD, 0.028; 95% CI, −0.158 to 0.214; P = .769; I2 = 0.0% for FVC in liters, Fig. 2). Data on DLco in idiopathic pulmonary fibrosis patients treated with endothelin receptor antagonists were also available in 4 trials (1325 patients).[17–20] The DLco % predicted in the endothelin receptor antagonist group was not significantly lower than in the control group (WMD, −1.334; 95% CI, −4.945 to 2.276; P = .469; I2 = 0.0%, Fig. 3), and the mean change from baseline in DLco (mmol·kPa−1·min−1) did not differ significantly between the 2 groups either (WMD, 0.124; 95% CI, −0.183 to 0.431; P = .427; I2 = 0.0%, Fig. 3).\nForest plot on the assessment of the forced vital capacity.\nForest plot on the assessment of the diffusion capacity of the lung for carbon monoxide.", "Four studies measured the effectiveness of endothelin receptor antagonists in improving exercise capacity, assessed using the 6-minute walk distance.[16–19] However, there was no difference between the endothelin receptor antagonist group and the control group (WMD, −2.160; 95% CI, −7.996 to 3.677; P = .468). 
Between-trial heterogeneity was low (I2 = 21.7%), and, similarly, serious adverse events were comparable between the 2 groups (OR, 1.063; 95% CI, 0.669–1.690; P = .796).", "Although endothelin receptor antagonists provide benefits in pulmonary arterial hypertension, this meta-analysis provides insufficient evidence to support that the administration of endothelin receptor antagonists provides a benefit in patients with idiopathic pulmonary fibrosis." ]
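The fixed-effect, inverse-variance pooling with the Higgins I2 statistic described in the statistical-analysis text above was actually carried out in Stata by the authors; the short Python sketch below only illustrates the underlying arithmetic. The per-trial mean differences and standard errors are hypothetical placeholders, not values extracted from the five included RCTs.

```python
# Illustrative fixed-effect (inverse-variance) pooling of weighted mean
# differences, in the spirit of the analysis described above.
# The per-trial numbers below are hypothetical placeholders.
import math

# (mean difference, standard error) for each hypothetical trial
trials = [(-1.8, 2.9), (0.6, 3.4), (-3.1, 4.0), (1.2, 3.7)]

weights = [1.0 / se**2 for _, se in trials]          # inverse-variance weights
pooled = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Cochran's Q and Higgins I^2 for heterogeneity
q = sum(w * (md - pooled) ** 2 for (md, _), w in zip(trials, weights))
df = len(trials) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"WMD = {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f}), I2 = {i2:.1f}%")
```

Under the fixed-effect convention used in the study, an I2 below 25% from this kind of calculation is what justifies the fixed-effect model for a given endpoint.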
[ null, null, null, null, null, null, "results", null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Ethical review", "2.2. Literature search strategy", "2.3. Inclusion and exclusion criteria", "2.4. Data extraction and outcome measures", "2.5. Quality assessment", "2.6. Statistical analysis", "3. Results", "3.1. Search result", "3.2. Characteristics of included studies", "3.3. Quality assessment", "3.4. The outcome of the meta-analysis", "3.5. Lung function", "3.6. Exercise capacity and serious adverse effects", "4. Discussion", "5. Conclusion" ]
[ "Interstitial lung diseases (ILDs) represent a large group of diseases that cause scarring (fibrosis) of the lungs,[1] which causes stiffness, making it difficult to breathe and get oxygen to the bloodstream.[2] Of the ILDs, idiopathic pulmonary fibrosis, also known as cryptogenic fibrosing alveolitis is the most common and fatal. It is characterized by the aberrant accumulation of epithelial and endothelial damage progressing to fibrosis in the lungs parenchyma[3] and the pathological hallmark of usual interstitial pneumonia.[4] Globally, idiopathic pulmonary fibrosis affects more than 5 million people every year.[5] Even though idiopathic pulmonary fibrosis is considered an unusual disease, it still places a severe burden on each family unfortunately getting in this kind of disease, both in economics and psychology.[6] The progression of idiopathic pulmonary fibrosis is generally manifested by a step-wise decline in pulmonary function, with worsening dyspnea and a high degree of morbidity, measured as forced vital capacity (FVC).[7,8] Endothelin receptor antagonists are a type of potent vasodilator and antimitotic substances that specifically dilate and remodel the pulmonary arterial system.[9] Bosentan, a specific dual-receptor antagonist of both endothelin (ET) receptor subtypes (ETA and ETB), was firstly developed for the treatment of pulmonary arterial hypertension[10] and congestive heart failure.[11] By blocking the receptor of ET, bosentan has been shown to decrease pulmonary vascular resistance and retard the pathogenic manifestations in the bleomycin-induced pulmonary fibrosis model,[12] which reduces collagen deposition in the lungs. Despite there is emerging many medicines to cure this disease recent years with increasing knowledge about underlying mechanism of idiopathic pulmonary fibrosis, there is no therapy available to effectively influence this. Thus, we performed a study to evaluate the efficacy and safety of endothelin receptor antagonists in patients with idiopathic pulmonary fibrosis.", "2.1. Ethical review All analyses were conducted according to available published literature; thus, no ethical approval or patient consent was required.\nAll analyses were conducted according to available published literature; thus, no ethical approval or patient consent was required.\n2.2. 
Literature search strategy We systematically performed electronic literature from Medline, Embase, Cochrane Library, and PubMed electronic databases following the recommended guidelines of the Preferred Reporting Items for Systematic Review and Meta-analysis.[13] All 4 databases were scanned from 1997 when bosentan was first indicated in the pathogenesis of pulmonary fibrosis in the rodent model until August 2021 for the keywords of endothelin receptor antagonists and idiopathic pulmonary fibrosis in combination with Boolean logic.[12] We also performed a manual search of the references cited in relevant review articles.[14] After original searching, the relevant and their references were searched manually by 2 authors.\nWe systematically performed electronic literature from Medline, Embase, Cochrane Library, and PubMed electronic databases following the recommended guidelines of the Preferred Reporting Items for Systematic Review and Meta-analysis.[13] All 4 databases were scanned from 1997 when bosentan was first indicated in the pathogenesis of pulmonary fibrosis in the rodent model until August 2021 for the keywords of endothelin receptor antagonists and idiopathic pulmonary fibrosis in combination with Boolean logic.[12] We also performed a manual search of the references cited in relevant review articles.[14] After original searching, the relevant and their references were searched manually by 2 authors.\n2.3. Inclusion and exclusion criteria The following predefined inclusion criteria were used: (i) population: participant with idiopathic pulmonary fibrosis; (ii) intervention: patients strictly treated with endothelin receptor antagonists; (iii) comparison intervention: use of endothelin receptor antagonists compared to placebo group; (iv) outcome measures: one or more of the clinical outcomes were reported, forced vital capacity (FVC), diffusion capacity of the lung for carbon monoxide (DLco), 6-minute walk distance test (6MWD), and serious adverse events; (v) official published studies in English prospective and retrospective studies.\nThe exclusion criteria were listed as follows: (i) conference or commentary articles and letters; (ii) atypical patients and outcome data; (iii) case report and case series; (iv) animal observation.\nThe following predefined inclusion criteria were used: (i) population: participant with idiopathic pulmonary fibrosis; (ii) intervention: patients strictly treated with endothelin receptor antagonists; (iii) comparison intervention: use of endothelin receptor antagonists compared to placebo group; (iv) outcome measures: one or more of the clinical outcomes were reported, forced vital capacity (FVC), diffusion capacity of the lung for carbon monoxide (DLco), 6-minute walk distance test (6MWD), and serious adverse events; (v) official published studies in English prospective and retrospective studies.\nThe exclusion criteria were listed as follows: (i) conference or commentary articles and letters; (ii) atypical patients and outcome data; (iii) case report and case series; (iv) animal observation.\n2.4. Data extraction and outcome measures Two researchers independently provided a detailed record for each study for the following essential information: the first author of the study, publication year, follow-up years, study design, sampling method, endpoints, study characteristics including the number of populations, mean age, gender ratio, and country. All disagreements were discussed and reached the final decision. 
The outcome measurements were the following: FVC, DLco, 6-minute walk distance test, and incidence of serious adverse events.\nTwo researchers independently provided a detailed record for each study for the following essential information: the first author of the study, publication year, follow-up years, study design, sampling method, endpoints, study characteristics including the number of populations, mean age, gender ratio, and country. All disagreements were discussed and reached the final decision. The outcome measurements were the following: FVC, DLco, 6-minute walk distance test, and incidence of serious adverse events.\n2.5. Quality assessment The Cochrane Collaboration tool was applied to evaluate the risk of bias in all involved literature. Specifically, each trial was assessed for random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias.[15]\nThe Cochrane Collaboration tool was applied to evaluate the risk of bias in all involved literature. Specifically, each trial was assessed for random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias.[15]\n2.6. Statistical analysis The analysis, design, and reporting for meta-analysis were carried out using Stata version 12.0 statistical software (Stata Crop LP, College Station, TX) as statistical software. The weighted mean difference (WMD) or Odds ratios (ORs) with the corresponding 95% confidence intervals (95% CIs) were used as measures of the treatment effect of bosentan. Heterogeneity of the observed studies was accessed with the Higgins I-square (I2) value. I2 over 25% and 75% was considered as moderate heterogeneous or significant heterogeneity, respectively. If I2 was under 25%, the endpoint item was considered to be homogeneous, then we run a meta-analysis by using a fix-effect model according to the Cochrane Handbook for Systematic Reviews of Interventions.\nThe analysis, design, and reporting for meta-analysis were carried out using Stata version 12.0 statistical software (Stata Crop LP, College Station, TX) as statistical software. The weighted mean difference (WMD) or Odds ratios (ORs) with the corresponding 95% confidence intervals (95% CIs) were used as measures of the treatment effect of bosentan. Heterogeneity of the observed studies was accessed with the Higgins I-square (I2) value. I2 over 25% and 75% was considered as moderate heterogeneous or significant heterogeneity, respectively. 
If I2 was under 25%, the endpoint item was considered to be homogeneous, then we run a meta-analysis by using a fix-effect model according to the Cochrane Handbook for Systematic Reviews of Interventions.", "All analyses were conducted according to available published literature; thus, no ethical approval or patient consent was required.", "We systematically performed electronic literature from Medline, Embase, Cochrane Library, and PubMed electronic databases following the recommended guidelines of the Preferred Reporting Items for Systematic Review and Meta-analysis.[13] All 4 databases were scanned from 1997 when bosentan was first indicated in the pathogenesis of pulmonary fibrosis in the rodent model until August 2021 for the keywords of endothelin receptor antagonists and idiopathic pulmonary fibrosis in combination with Boolean logic.[12] We also performed a manual search of the references cited in relevant review articles.[14] After original searching, the relevant and their references were searched manually by 2 authors.", "The following predefined inclusion criteria were used: (i) population: participant with idiopathic pulmonary fibrosis; (ii) intervention: patients strictly treated with endothelin receptor antagonists; (iii) comparison intervention: use of endothelin receptor antagonists compared to placebo group; (iv) outcome measures: one or more of the clinical outcomes were reported, forced vital capacity (FVC), diffusion capacity of the lung for carbon monoxide (DLco), 6-minute walk distance test (6MWD), and serious adverse events; (v) official published studies in English prospective and retrospective studies.\nThe exclusion criteria were listed as follows: (i) conference or commentary articles and letters; (ii) atypical patients and outcome data; (iii) case report and case series; (iv) animal observation.", "Two researchers independently provided a detailed record for each study for the following essential information: the first author of the study, publication year, follow-up years, study design, sampling method, endpoints, study characteristics including the number of populations, mean age, gender ratio, and country. All disagreements were discussed and reached the final decision. The outcome measurements were the following: FVC, DLco, 6-minute walk distance test, and incidence of serious adverse events.", "The Cochrane Collaboration tool was applied to evaluate the risk of bias in all involved literature. Specifically, each trial was assessed for random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias.[15]", "The analysis, design, and reporting for meta-analysis were carried out using Stata version 12.0 statistical software (Stata Crop LP, College Station, TX) as statistical software. The weighted mean difference (WMD) or Odds ratios (ORs) with the corresponding 95% confidence intervals (95% CIs) were used as measures of the treatment effect of bosentan. Heterogeneity of the observed studies was accessed with the Higgins I-square (I2) value. I2 over 25% and 75% was considered as moderate heterogeneous or significant heterogeneity, respectively. If I2 was under 25%, the endpoint item was considered to be homogeneous, then we run a meta-analysis by using a fix-effect model according to the Cochrane Handbook for Systematic Reviews of Interventions.", "3.1. 
Search result A total of 143 records were identified through the database by the search strategy, which is displayed in the following diagram (Fig. 1) in accordance with the inclusion and exclusion criteria. After removing duplicated or irrelevant articles, 49 eligible studies were enrolled in this study, of which 11 articles meeting our inclusion criteria were retrieved after evaluating the full text of the remaining articles. Finally, 5 studies[16–20] were combined in the present quantitative synthesis.\nFlowchart of the study selection process.\nA total of 143 records were identified through the database by the search strategy, which is displayed in the following diagram (Fig. 1) in accordance with the inclusion and exclusion criteria. After removing duplicated or irrelevant articles, 49 eligible studies were enrolled in this study, of which 11 articles meeting our inclusion criteria were retrieved after evaluating the full text of the remaining articles. Finally, 5 studies[16–20] were combined in the present quantitative synthesis.\nFlowchart of the study selection process.\n3.2. Characteristics of included studies Detailed characteristics concerning the involved studies are summarized in Table 1. A total of 5 randomized controlled trials (RCTs) were published between 2008 and 2014 including 1500 participants, and the sample size varied from 60 to 616. This study enrolled 518 patients treated with bosentan, 119 patients with macitentan, and 329 with ambrisentan, which is all endothelin receptor antagonists, compared with 534 participants in the placebo group. The average ages were from 63.8 to 66.6 years, the ratios of men varied from 68.0 to 72.7%, and only Corte et al[17] reported pulmonary arterial pressure of the included participants. In each RCT, all endothelin receptor antagonists or placebo were given after the idiopathic pulmonary fibrosis. Nearly all studies (four studies) did research on forced vital capacity and diffusion capacity of the lungs for carbon monoxide. Besides, King et al,[16] Raghu et al[19] and Corte et al[17] also took 6-minute walk distance test in consideration.\nMain characteristics of the randomized controlled trials included in the meat-analysis.\nDLco = diffusion capacity of the lungs for carbon monoxide, ERA = Endothelin receptor antagonist, FVC = forced vital capacity, PAP = pulmonary arterial pressure, 6MWD = 6-minute walk distance test, NR = not reported, ± = standard deviation.\nDetailed characteristics concerning the involved studies are summarized in Table 1. A total of 5 randomized controlled trials (RCTs) were published between 2008 and 2014 including 1500 participants, and the sample size varied from 60 to 616. This study enrolled 518 patients treated with bosentan, 119 patients with macitentan, and 329 with ambrisentan, which is all endothelin receptor antagonists, compared with 534 participants in the placebo group. The average ages were from 63.8 to 66.6 years, the ratios of men varied from 68.0 to 72.7%, and only Corte et al[17] reported pulmonary arterial pressure of the included participants. In each RCT, all endothelin receptor antagonists or placebo were given after the idiopathic pulmonary fibrosis. Nearly all studies (four studies) did research on forced vital capacity and diffusion capacity of the lungs for carbon monoxide. 
Besides, King et al,[16] Raghu et al[19] and Corte et al[17] also took 6-minute walk distance test in consideration.\nMain characteristics of the randomized controlled trials included in the meat-analysis.\nDLco = diffusion capacity of the lungs for carbon monoxide, ERA = Endothelin receptor antagonist, FVC = forced vital capacity, PAP = pulmonary arterial pressure, 6MWD = 6-minute walk distance test, NR = not reported, ± = standard deviation.\n3.3. Quality assessment The research scores of all nonrandomized controlled trials were assessed by 2 reviewers separately evaluating the methodological quality of the included observational studies by Cochrane Collaboration tool. Most of the included RCTs were high quality. Most of them showed a low risk of bias for random sequence generation, blinding of outcome assessment, incomplete outcome data, and selective reporting. The results of the quality assessment of trials are provided in Table 2.\nCochrane Collaboration tool for quality assessment in all included trials.\nReferences\n1.King T, Behr J, Brown K, du Bois R, Lancaster L, de Andrade J, et al BUILD-1: a randomized placebo-controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2008;177(1):75–81. doi: 10.1164/rccm.200705-732OC. PubMed PMID: 17901413.\n2.King T, Brown K, Raghu G, du Bois R, Lynch D, Martinez F, et al BUILD-3: a randomized, controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2011;184(1):92–9. doi: 10.1164/rccm.201011-1874OC. PubMed PMID: 21474646.\n3.Raghu G, Million-Rousseau R, Morganti A, Perchenet L, Behr J. Macitentan for the treatment of idiopathic pulmonary fibrosis: the randomised controlled MUSIC trial. The European respiratory journal. 2013;42(6):1622–32. doi: 10.1183/09031936.00104612. PubMed PMID: 23682110.\n4.Raghu G, Behr J, Brown K, Egan J, Kawut S, Flaherty K, et al. Treatment of idiopathic pulmonary fibrosis with ambrisentan: a parallel, randomized trial. Annals of internal medicine. 2013;158(9):641–9. doi: 10.7326/0003-4819-158-9-201305070-00003. PubMed PMID: 23648946.\n5.Corte T, Keir G, Dimopoulos K, Howard L, Corris P, Parfitt L, et al. Bosentan in pulmonary hypertension associated with fibrotic idiopathic interstitial pneumonia. American journal of respiratory and critical care medicine. 2014;190(2):208–17. doi: 10.1164/rccm.201403-0446OC. PubMed PMID: 24937643.\nThe research scores of all nonrandomized controlled trials were assessed by 2 reviewers separately evaluating the methodological quality of the included observational studies by Cochrane Collaboration tool. Most of the included RCTs were high quality. Most of them showed a low risk of bias for random sequence generation, blinding of outcome assessment, incomplete outcome data, and selective reporting. The results of the quality assessment of trials are provided in Table 2.\nCochrane Collaboration tool for quality assessment in all included trials.\nReferences\n1.King T, Behr J, Brown K, du Bois R, Lancaster L, de Andrade J, et al BUILD-1: a randomized placebo-controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2008;177(1):75–81. doi: 10.1164/rccm.200705-732OC. PubMed PMID: 17901413.\n2.King T, Brown K, Raghu G, du Bois R, Lynch D, Martinez F, et al BUILD-3: a randomized, controlled trial of bosentan in idiopathic pulmonary fibrosis. 
American journal of respiratory and critical care medicine. 2011;184(1):92–9. doi: 10.1164/rccm.201011-1874OC. PubMed PMID: 21474646.\n3.Raghu G, Million-Rousseau R, Morganti A, Perchenet L, Behr J. Macitentan for the treatment of idiopathic pulmonary fibrosis: the randomised controlled MUSIC trial. The European respiratory journal. 2013;42(6):1622–32. doi: 10.1183/09031936.00104612. PubMed PMID: 23682110.\n4.Raghu G, Behr J, Brown K, Egan J, Kawut S, Flaherty K, et al. Treatment of idiopathic pulmonary fibrosis with ambrisentan: a parallel, randomized trial. Annals of internal medicine. 2013;158(9):641–9. doi: 10.7326/0003-4819-158-9-201305070-00003. PubMed PMID: 23648946.\n5.Corte T, Keir G, Dimopoulos K, Howard L, Corris P, Parfitt L, et al. Bosentan in pulmonary hypertension associated with fibrotic idiopathic interstitial pneumonia. American journal of respiratory and critical care medicine. 2014;190(2):208–17. doi: 10.1164/rccm.201403-0446OC. PubMed PMID: 24937643.\n3.4. The outcome of the meta-analysis A total of 5 RCTs were eligible for analysis, with 1500 patients undergoing idiopathic pulmonary fibrosis. The detailed results are shown in Table 3 and listed as follows.\nThe outcomes of this meta-analysis.\nCIs = confidence intervals, DLco = diffusion capacity of the lung for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, ISAE = incidence of serious adverse events, OR = odds ratio, RD = rate difference, WMD = weighted mean difference, 6MWD = 6-minute walk distance.\nA total of 5 RCTs were eligible for analysis, with 1500 patients undergoing idiopathic pulmonary fibrosis. The detailed results are shown in Table 3 and listed as follows.\nThe outcomes of this meta-analysis.\nCIs = confidence intervals, DLco = diffusion capacity of the lung for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, ISAE = incidence of serious adverse events, OR = odds ratio, RD = rate difference, WMD = weighted mean difference, 6MWD = 6-minute walk distance.\n3.5. Lung function Four RCT studies (1325 patients) reported on FVC changes from baseline and at 4 to 12 months follow-up.[17–20] However, there is no difference between the using endothelin receptor antagonists and placebo group (WMD, −2.079; 95% CI, −2.079–3.471; P = .463; I2 = 0.0% for FVC % predicted, and WMD, 0.028; 95% CI, −0.158–0.214; P = .769; I2 = 0.0% for FVC L, Fig. 2). Data regarding the information of Dlco in idiopathic pulmonary fibrosis patients treated with endothelin receptor antagonists were also available in 4 trials (1325 patients).[17–20] The Dlco % predicted percentage in the endothelin receptor antagonists’ group did not show a significant lower tendency than the control group (WMD, −1.334; 95% CI, −4.945–2.276; P = .469; I2 = 0.0%, Fig. 3) and besides, the mean change from baseline in DLco (mmol·kPa − 1·min − 1) did not differ significant between the 2 groups either (WMD, 0.124; 95% CI, −0.183–0.431; P = .427; I2 = 0.0%, Fig. 3).\nForest plot on the assessment of the forced vital capacity.\nForest plot on the assessment of the lung for carbon monoxide.\nFour RCT studies (1325 patients) reported on FVC changes from baseline and at 4 to 12 months follow-up.[17–20] However, there is no difference between the using endothelin receptor antagonists and placebo group (WMD, −2.079; 95% CI, −2.079–3.471; P = .463; I2 = 0.0% for FVC % predicted, and WMD, 0.028; 95% CI, −0.158–0.214; P = .769; I2 = 0.0% for FVC L, Fig. 2). 
Data regarding the information of Dlco in idiopathic pulmonary fibrosis patients treated with endothelin receptor antagonists were also available in 4 trials (1325 patients).[17–20] The Dlco % predicted percentage in the endothelin receptor antagonists’ group did not show a significant lower tendency than the control group (WMD, −1.334; 95% CI, −4.945–2.276; P = .469; I2 = 0.0%, Fig. 3) and besides, the mean change from baseline in DLco (mmol·kPa − 1·min − 1) did not differ significant between the 2 groups either (WMD, 0.124; 95% CI, −0.183–0.431; P = .427; I2 = 0.0%, Fig. 3).\nForest plot on the assessment of the forced vital capacity.\nForest plot on the assessment of the lung for carbon monoxide.\n3.6. Exercise capacity and serious adverse effects Four studies were involved to measure the effectiveness of endothelin receptor antagonists in improving the exercise capacity, measured using the 6-minute walk distance.[16–19] Whereas, there was no difference in the endothelin receptor antagonists group compared with the control group (WMD, −2.160; 95% CI, −7.996–3.677; P = .468). Between-trial heterogeneity was homogeneous (I2 = 21.7%), similarly, serious adverse events were similar between these 2 groups (OR, 1.063; 95% CI, 0.669–1.690; P = .796).\nFour studies were involved to measure the effectiveness of endothelin receptor antagonists in improving the exercise capacity, measured using the 6-minute walk distance.[16–19] Whereas, there was no difference in the endothelin receptor antagonists group compared with the control group (WMD, −2.160; 95% CI, −7.996–3.677; P = .468). Between-trial heterogeneity was homogeneous (I2 = 21.7%), similarly, serious adverse events were similar between these 2 groups (OR, 1.063; 95% CI, 0.669–1.690; P = .796).", "A total of 143 records were identified through the database by the search strategy, which is displayed in the following diagram (Fig. 1) in accordance with the inclusion and exclusion criteria. After removing duplicated or irrelevant articles, 49 eligible studies were enrolled in this study, of which 11 articles meeting our inclusion criteria were retrieved after evaluating the full text of the remaining articles. Finally, 5 studies[16–20] were combined in the present quantitative synthesis.\nFlowchart of the study selection process.", "Detailed characteristics concerning the involved studies are summarized in Table 1. A total of 5 randomized controlled trials (RCTs) were published between 2008 and 2014 including 1500 participants, and the sample size varied from 60 to 616. This study enrolled 518 patients treated with bosentan, 119 patients with macitentan, and 329 with ambrisentan, which is all endothelin receptor antagonists, compared with 534 participants in the placebo group. The average ages were from 63.8 to 66.6 years, the ratios of men varied from 68.0 to 72.7%, and only Corte et al[17] reported pulmonary arterial pressure of the included participants. In each RCT, all endothelin receptor antagonists or placebo were given after the idiopathic pulmonary fibrosis. Nearly all studies (four studies) did research on forced vital capacity and diffusion capacity of the lungs for carbon monoxide. 
In addition, King et al,[16] Raghu et al,[19] and Corte et al[17] also considered the 6-minute walk distance test.\nMain characteristics of the randomized controlled trials included in the meta-analysis.\nDLco = diffusion capacity of the lungs for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, PAP = pulmonary arterial pressure, 6MWD = 6-minute walk distance test, NR = not reported, ± = standard deviation.", "The methodological quality of all included randomized controlled trials was assessed separately by 2 reviewers using the Cochrane Collaboration tool. Most of the included RCTs were of high quality, and most showed a low risk of bias for random sequence generation, blinding of outcome assessment, incomplete outcome data, and selective reporting. The results of the quality assessment of the trials are provided in Table 2.\nCochrane Collaboration tool for quality assessment in all included trials.\nReferences\n1. King T, Behr J, Brown K, du Bois R, Lancaster L, de Andrade J, et al. BUILD-1: a randomized placebo-controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2008;177(1):75–81. doi: 10.1164/rccm.200705-732OC. PubMed PMID: 17901413.\n2. King T, Brown K, Raghu G, du Bois R, Lynch D, Martinez F, et al. BUILD-3: a randomized, controlled trial of bosentan in idiopathic pulmonary fibrosis. American journal of respiratory and critical care medicine. 2011;184(1):92–9. doi: 10.1164/rccm.201011-1874OC. PubMed PMID: 21474646.\n3. Raghu G, Million-Rousseau R, Morganti A, Perchenet L, Behr J. Macitentan for the treatment of idiopathic pulmonary fibrosis: the randomised controlled MUSIC trial. The European respiratory journal. 2013;42(6):1622–32. doi: 10.1183/09031936.00104612. PubMed PMID: 23682110.\n4. Raghu G, Behr J, Brown K, Egan J, Kawut S, Flaherty K, et al. Treatment of idiopathic pulmonary fibrosis with ambrisentan: a parallel, randomized trial. Annals of internal medicine. 2013;158(9):641–9. doi: 10.7326/0003-4819-158-9-201305070-00003. PubMed PMID: 23648946.\n5. Corte T, Keir G, Dimopoulos K, Howard L, Corris P, Parfitt L, et al. Bosentan in pulmonary hypertension associated with fibrotic idiopathic interstitial pneumonia. American journal of respiratory and critical care medicine. 2014;190(2):208–17. doi: 10.1164/rccm.201403-0446OC. PubMed PMID: 24937643.", "A total of 5 RCTs, including 1500 patients with idiopathic pulmonary fibrosis, were eligible for analysis. The detailed results are shown in Table 3 and listed as follows.\nThe outcomes of this meta-analysis.\nCIs = confidence intervals, DLco = diffusion capacity of the lung for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, ISAE = incidence of serious adverse events, OR = odds ratio, RD = rate difference, WMD = weighted mean difference, 6MWD = 6-minute walk distance.", "Four RCTs (1325 patients) reported FVC changes from baseline at 4 to 12 months of follow-up.[17–20] However, there was no difference between the endothelin receptor antagonist and placebo groups (WMD, −2.079; 95% CI, −2.079 to 3.471; P = .463; I2 = 0.0% for FVC % predicted, and WMD, 0.028; 95% CI, −0.158 to 0.214; P = .769; I2 = 0.0% for FVC in liters, Fig. 2). Data on DLco in idiopathic pulmonary fibrosis patients treated with endothelin receptor antagonists were also available in 4 trials (1325 patients).[17–20] The DLco % predicted in the endothelin receptor antagonist group was not significantly lower than in the control group (WMD, −1.334; 95% CI, −4.945 to 2.276; P = .469; I2 = 0.0%, Fig. 3), and the mean change from baseline in DLco (mmol·kPa−1·min−1) did not differ significantly between the 2 groups either (WMD, 0.124; 95% CI, −0.183 to 0.431; P = .427; I2 = 0.0%, Fig. 3).\nForest plot on the assessment of the forced vital capacity.\nForest plot on the assessment of the diffusion capacity of the lung for carbon monoxide.", "Four studies measured the effectiveness of endothelin receptor antagonists in improving exercise capacity, assessed using the 6-minute walk distance.[16–19] However, there was no difference between the endothelin receptor antagonist group and the control group (WMD, −2.160; 95% CI, −7.996 to 3.677; P = .468). Between-trial heterogeneity was low (I2 = 21.7%), and, similarly, serious adverse events were comparable between the 2 groups (OR, 1.063; 95% CI, 0.669–1.690; P = .796).", "Idiopathic pulmonary fibrosis is a chronic fibrosing lung disorder with a high incidence and a worse prognosis than many tumors.[21] Many new studies aim to explore novel treatments to suppress the initiation and progression of pulmonary fibrosis.[22] Despite this, the optimal treatment of the disease remains largely unknown. Pulmonary arterial hypertension commonly accompanies idiopathic pulmonary fibrosis and has a significant negative effect on survival time. Endothelin receptor antagonists, such as ambrisentan and bosentan, have each been shown to be effective compared with placebo in treating pulmonary arterial hypertension.[23] Nevertheless, the exact effect of endothelin receptor antagonists in the treatment of idiopathic pulmonary fibrosis remains controversial; therefore, this study conducted a meta-analysis of the currently available studies to clarify the effect of endothelin receptor antagonists on the treatment of idiopathic pulmonary fibrosis.\nProgress has been made over the past several decades in the widespread use of various medications to slow the loss of lung function in patients with idiopathic pulmonary fibrosis.[3] Gunther et al[24] conducted a study in which 12 idiopathic pulmonary fibrosis patients underwent analysis of gas exchange properties on day 1 before and after the administration of 125 mg bosentan; the results showed no significant improvement in lung function over a 3-month treatment period, and even a tendency toward deterioration of FVC (from 55.73 to 52.18) and DLco (from 36.73 to 33.27) values. In the current study, we assessed lung function by evaluating FVC and DLco in patients with idiopathic pulmonary fibrosis treated with endothelin receptor antagonists, demonstrating that changes in FVC from baseline did not differ significantly between the experimental and control groups. Similarly, data on DLco changes from baseline were analyzed and revealed no significant difference, and the decline in percent-predicted DLco was not significantly lower in the endothelin receptor antagonist group.\nThe improvement of quality of life after drug therapy is also one of the current concerns in the treatment of idiopathic pulmonary fibrosis; therefore, we evaluated 2 items, the mean change in 6-minute walk distance (WMD, −2.160; P = 0.468) and the incidence of serious adverse events (OR, 1.337; P = 0.450). This study thus illustrated that endothelin receptor antagonists did not improve idiopathic pulmonary fibrosis, which is consistent with several previous publications. Raghu et al[25] revealed that bosentan was not superior to placebo in improving 6-minute walking distance. Additionally, Gunther et al demonstrated that the mean 6-minute walking distance was the same before and after treatment with bosentan (320.9 vs 320.9). In contrast, Lee et al[14] performed a meta-analysis and showed that pulmonary arterial hypertension-specific agents such as PDE-5 inhibitors significantly affected quality of life, as measured by the St George's Respiratory Questionnaire (SGRQ) total score. In this study, only 1 article assessed the SGRQ total score and revealed that, up to Month 6, the SGRQ total score in the bosentan group remained almost unchanged, whereas it worsened in the placebo group.[16] Therefore, more high-quality studies are needed to clarify this issue.\nOf note, several limitations were involved in this meta-analysis. First, although all the included studies were RCTs, there were only 6 articles exploring the effect of endothelin receptor antagonists on idiopathic pulmonary fibrosis, and their sample sizes were small. Second, it is crucial to consider heterogeneity when interpreting the results of a meta-analysis; however, the statistical results of this study showed moderate-to-high heterogeneity. Third, although we identified 2 trials that examined lung function, we had to exclude them from our pooled analyses since they reported insufficient outcomes without a value change. Finally, the follow-up periods of the included studies were not identical. Thus, although many meaningful conclusions could be drawn about trends in the effect of endothelin receptor antagonists, further high-quality studies with large sample sizes are necessary to determine the effect of endothelin receptor antagonists.", "Although endothelin receptor antagonists provide benefits in pulmonary arterial hypertension, this meta-analysis provides insufficient evidence to support that the administration of endothelin receptor antagonists provides a benefit in patients with idiopathic pulmonary fibrosis." ]
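For the binary endpoint of serious adverse events, the pooled odds ratio reported above would typically be obtained by inverse-variance weighting on the log-odds scale before exponentiating back. The sketch below illustrates that calculation with hypothetical 2x2 counts that are not taken from the included trials.

```python
# Illustrative fixed-effect pooling of odds ratios on the log scale for a
# binary endpoint such as serious adverse events. Counts are hypothetical.
import math

# (events_treatment, n_treatment, events_control, n_control) per hypothetical trial
counts = [(12, 80, 10, 78), (25, 150, 27, 152), (40, 300, 36, 290)]

log_ors, variances = [], []
for a, n1, c, n2 in counts:
    b, d = n1 - a, n2 - c                      # non-events in each arm
    log_ors.append(math.log((a * d) / (b * c)))
    variances.append(1/a + 1/b + 1/c + 1/d)    # Woolf variance of the log OR

weights = [1.0 / v for v in variances]
pooled_log_or = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
or_, lo, hi = (math.exp(x) for x in
               (pooled_log_or, pooled_log_or - 1.96 * se, pooled_log_or + 1.96 * se))
print(f"Pooled OR = {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```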
[ "intro", "methods", null, null, null, null, null, null, "results", "results", null, null, null, null, null, "discussion", null ]
[ "bosentan", "endothelin receptor antagonists", "idiopathic pulmonary fibrosis", "meta-analysis" ]
1. Introduction: Interstitial lung diseases (ILDs) represent a large group of diseases that cause scarring (fibrosis) of the lungs,[1] which causes stiffness, making it difficult to breathe and get oxygen to the bloodstream.[2] Of the ILDs, idiopathic pulmonary fibrosis, also known as cryptogenic fibrosing alveolitis is the most common and fatal. It is characterized by the aberrant accumulation of epithelial and endothelial damage progressing to fibrosis in the lungs parenchyma[3] and the pathological hallmark of usual interstitial pneumonia.[4] Globally, idiopathic pulmonary fibrosis affects more than 5 million people every year.[5] Even though idiopathic pulmonary fibrosis is considered an unusual disease, it still places a severe burden on each family unfortunately getting in this kind of disease, both in economics and psychology.[6] The progression of idiopathic pulmonary fibrosis is generally manifested by a step-wise decline in pulmonary function, with worsening dyspnea and a high degree of morbidity, measured as forced vital capacity (FVC).[7,8] Endothelin receptor antagonists are a type of potent vasodilator and antimitotic substances that specifically dilate and remodel the pulmonary arterial system.[9] Bosentan, a specific dual-receptor antagonist of both endothelin (ET) receptor subtypes (ETA and ETB), was firstly developed for the treatment of pulmonary arterial hypertension[10] and congestive heart failure.[11] By blocking the receptor of ET, bosentan has been shown to decrease pulmonary vascular resistance and retard the pathogenic manifestations in the bleomycin-induced pulmonary fibrosis model,[12] which reduces collagen deposition in the lungs. Despite there is emerging many medicines to cure this disease recent years with increasing knowledge about underlying mechanism of idiopathic pulmonary fibrosis, there is no therapy available to effectively influence this. Thus, we performed a study to evaluate the efficacy and safety of endothelin receptor antagonists in patients with idiopathic pulmonary fibrosis. 2. Materials and Methods: 2.1. Ethical review All analyses were conducted according to available published literature; thus, no ethical approval or patient consent was required. All analyses were conducted according to available published literature; thus, no ethical approval or patient consent was required. 2.2. Literature search strategy We systematically performed electronic literature from Medline, Embase, Cochrane Library, and PubMed electronic databases following the recommended guidelines of the Preferred Reporting Items for Systematic Review and Meta-analysis.[13] All 4 databases were scanned from 1997 when bosentan was first indicated in the pathogenesis of pulmonary fibrosis in the rodent model until August 2021 for the keywords of endothelin receptor antagonists and idiopathic pulmonary fibrosis in combination with Boolean logic.[12] We also performed a manual search of the references cited in relevant review articles.[14] After original searching, the relevant and their references were searched manually by 2 authors. 
We systematically performed electronic literature from Medline, Embase, Cochrane Library, and PubMed electronic databases following the recommended guidelines of the Preferred Reporting Items for Systematic Review and Meta-analysis.[13] All 4 databases were scanned from 1997 when bosentan was first indicated in the pathogenesis of pulmonary fibrosis in the rodent model until August 2021 for the keywords of endothelin receptor antagonists and idiopathic pulmonary fibrosis in combination with Boolean logic.[12] We also performed a manual search of the references cited in relevant review articles.[14] After original searching, the relevant and their references were searched manually by 2 authors. 2.3. Inclusion and exclusion criteria The following predefined inclusion criteria were used: (i) population: participant with idiopathic pulmonary fibrosis; (ii) intervention: patients strictly treated with endothelin receptor antagonists; (iii) comparison intervention: use of endothelin receptor antagonists compared to placebo group; (iv) outcome measures: one or more of the clinical outcomes were reported, forced vital capacity (FVC), diffusion capacity of the lung for carbon monoxide (DLco), 6-minute walk distance test (6MWD), and serious adverse events; (v) official published studies in English prospective and retrospective studies. The exclusion criteria were listed as follows: (i) conference or commentary articles and letters; (ii) atypical patients and outcome data; (iii) case report and case series; (iv) animal observation. The following predefined inclusion criteria were used: (i) population: participant with idiopathic pulmonary fibrosis; (ii) intervention: patients strictly treated with endothelin receptor antagonists; (iii) comparison intervention: use of endothelin receptor antagonists compared to placebo group; (iv) outcome measures: one or more of the clinical outcomes were reported, forced vital capacity (FVC), diffusion capacity of the lung for carbon monoxide (DLco), 6-minute walk distance test (6MWD), and serious adverse events; (v) official published studies in English prospective and retrospective studies. The exclusion criteria were listed as follows: (i) conference or commentary articles and letters; (ii) atypical patients and outcome data; (iii) case report and case series; (iv) animal observation. 2.4. Data extraction and outcome measures Two researchers independently provided a detailed record for each study for the following essential information: the first author of the study, publication year, follow-up years, study design, sampling method, endpoints, study characteristics including the number of populations, mean age, gender ratio, and country. All disagreements were discussed and reached the final decision. The outcome measurements were the following: FVC, DLco, 6-minute walk distance test, and incidence of serious adverse events. Two researchers independently provided a detailed record for each study for the following essential information: the first author of the study, publication year, follow-up years, study design, sampling method, endpoints, study characteristics including the number of populations, mean age, gender ratio, and country. All disagreements were discussed and reached the final decision. The outcome measurements were the following: FVC, DLco, 6-minute walk distance test, and incidence of serious adverse events. 2.5. Quality assessment The Cochrane Collaboration tool was applied to evaluate the risk of bias in all involved literature. 
Specifically, each trial was assessed for random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias.[15] The Cochrane Collaboration tool was applied to evaluate the risk of bias in all involved literature. Specifically, each trial was assessed for random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias.[15] 2.6. Statistical analysis The analysis, design, and reporting for meta-analysis were carried out using Stata version 12.0 statistical software (Stata Crop LP, College Station, TX) as statistical software. The weighted mean difference (WMD) or Odds ratios (ORs) with the corresponding 95% confidence intervals (95% CIs) were used as measures of the treatment effect of bosentan. Heterogeneity of the observed studies was accessed with the Higgins I-square (I2) value. I2 over 25% and 75% was considered as moderate heterogeneous or significant heterogeneity, respectively. If I2 was under 25%, the endpoint item was considered to be homogeneous, then we run a meta-analysis by using a fix-effect model according to the Cochrane Handbook for Systematic Reviews of Interventions. The analysis, design, and reporting for meta-analysis were carried out using Stata version 12.0 statistical software (Stata Crop LP, College Station, TX) as statistical software. The weighted mean difference (WMD) or Odds ratios (ORs) with the corresponding 95% confidence intervals (95% CIs) were used as measures of the treatment effect of bosentan. Heterogeneity of the observed studies was accessed with the Higgins I-square (I2) value. I2 over 25% and 75% was considered as moderate heterogeneous or significant heterogeneity, respectively. If I2 was under 25%, the endpoint item was considered to be homogeneous, then we run a meta-analysis by using a fix-effect model according to the Cochrane Handbook for Systematic Reviews of Interventions. 2.1. Ethical review: All analyses were conducted according to available published literature; thus, no ethical approval or patient consent was required. 2.2. Literature search strategy: We systematically performed electronic literature from Medline, Embase, Cochrane Library, and PubMed electronic databases following the recommended guidelines of the Preferred Reporting Items for Systematic Review and Meta-analysis.[13] All 4 databases were scanned from 1997 when bosentan was first indicated in the pathogenesis of pulmonary fibrosis in the rodent model until August 2021 for the keywords of endothelin receptor antagonists and idiopathic pulmonary fibrosis in combination with Boolean logic.[12] We also performed a manual search of the references cited in relevant review articles.[14] After original searching, the relevant and their references were searched manually by 2 authors. 2.3. 
3. Results: 3.1. Search result: A total of 143 records were identified from the databases by the search strategy, in accordance with the inclusion and exclusion criteria (Fig. 1). After removing duplicate or irrelevant articles, 49 eligible studies remained; of these, 11 articles meeting our inclusion criteria were retrieved after evaluation of the full text.
Finally, 5 studies[16–20] were combined in the present quantitative synthesis. Flowchart of the study selection process. 3.2. Characteristics of included studies: Detailed characteristics of the included studies are summarized in Table 1. A total of 5 randomized controlled trials (RCTs), published between 2008 and 2014 and including 1500 participants, were analyzed; the sample sizes ranged from 60 to 616. The included trials enrolled 518 patients treated with bosentan, 119 with macitentan, and 329 with ambrisentan, all of which are endothelin receptor antagonists, compared with 534 participants in the placebo groups. The mean ages ranged from 63.8 to 66.6 years, the proportions of men varied from 68.0% to 72.7%, and only Corte et al[17] reported the pulmonary arterial pressure of the included participants. In each RCT, endothelin receptor antagonists or placebo were given after the diagnosis of idiopathic pulmonary fibrosis. Nearly all studies (4 studies) examined forced vital capacity and diffusion capacity of the lungs for carbon monoxide. In addition, King et al,[16] Raghu et al,[19] and Corte et al[17] also assessed the 6-minute walk distance test. Main characteristics of the randomized controlled trials included in the meta-analysis. DLco = diffusion capacity of the lungs for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, PAP = pulmonary arterial pressure, 6MWD = 6-minute walk distance test, NR = not reported, ± = standard deviation. 3.3. Quality assessment: The methodological quality of the included trials was assessed separately by 2 reviewers using the Cochrane Collaboration tool. Most of the included RCTs were of high quality, showing a low risk of bias for random sequence generation, blinding of outcome assessment, incomplete outcome data, and selective reporting. The results of the quality assessment are provided in Table 2. Cochrane Collaboration tool for quality assessment in all included trials.
References: 1. King T, Behr J, Brown K, du Bois R, Lancaster L, de Andrade J, et al. BUILD-1: a randomized placebo-controlled trial of bosentan in idiopathic pulmonary fibrosis. American Journal of Respiratory and Critical Care Medicine. 2008;177(1):75–81. doi: 10.1164/rccm.200705-732OC. PMID: 17901413. 2. King T, Brown K, Raghu G, du Bois R, Lynch D, Martinez F, et al. BUILD-3: a randomized, controlled trial of bosentan in idiopathic pulmonary fibrosis. American Journal of Respiratory and Critical Care Medicine. 2011;184(1):92–9. doi: 10.1164/rccm.201011-1874OC. PMID: 21474646. 3. Raghu G, Million-Rousseau R, Morganti A, Perchenet L, Behr J. Macitentan for the treatment of idiopathic pulmonary fibrosis: the randomised controlled MUSIC trial. The European Respiratory Journal. 2013;42(6):1622–32. doi: 10.1183/09031936.00104612. PMID: 23682110. 4. Raghu G, Behr J, Brown K, Egan J, Kawut S, Flaherty K, et al. Treatment of idiopathic pulmonary fibrosis with ambrisentan: a parallel, randomized trial. Annals of Internal Medicine. 2013;158(9):641–9. doi: 10.7326/0003-4819-158-9-201305070-00003. PMID: 23648946. 5. Corte T, Keir G, Dimopoulos K, Howard L, Corris P, Parfitt L, et al. Bosentan in pulmonary hypertension associated with fibrotic idiopathic interstitial pneumonia. American Journal of Respiratory and Critical Care Medicine. 2014;190(2):208–17. doi: 10.1164/rccm.201403-0446OC. PMID: 24937643.
3.4. The outcome of the meta-analysis: A total of 5 RCTs, comprising 1500 patients with idiopathic pulmonary fibrosis, were eligible for analysis. The detailed results are shown in Table 3 and listed as follows. The outcomes of this meta-analysis. CIs = confidence intervals, DLco = diffusion capacity of the lung for carbon monoxide, ERA = endothelin receptor antagonist, FVC = forced vital capacity, ISAE = incidence of serious adverse events, OR = odds ratio, RD = rate difference, WMD = weighted mean difference, 6MWD = 6-minute walk distance. 3.5. Lung function: Four RCTs (1325 patients) reported FVC changes from baseline at 4 to 12 months of follow-up.[17–20] There was no difference between the endothelin receptor antagonist and placebo groups (WMD, −2.079; 95% CI, −2.079 to 3.471; P = .463; I2 = 0.0% for FVC % predicted; and WMD, 0.028; 95% CI, −0.158 to 0.214; P = .769; I2 = 0.0% for FVC in liters; Fig. 2). Data on DLco in idiopathic pulmonary fibrosis patients treated with endothelin receptor antagonists were also available in 4 trials (1325 patients).[17–20] The DLco % predicted in the endothelin receptor antagonist group did not show a significantly lower tendency than in the control group (WMD, −1.334; 95% CI, −4.945 to 2.276; P = .469; I2 = 0.0%; Fig. 3), and the mean change from baseline in DLco (mmol·kPa−1·min−1) did not differ significantly between the 2 groups either (WMD, 0.124; 95% CI, −0.183 to 0.431; P = .427; I2 = 0.0%; Fig. 3). Forest plot on the assessment of the forced vital capacity. Forest plot on the assessment of the diffusion capacity of the lung for carbon monoxide.
3.6. Exercise capacity and serious adverse effects: Four studies assessed the effectiveness of endothelin receptor antagonists in improving exercise capacity, measured using the 6-minute walk distance.[16–19] However, there was no difference between the endothelin receptor antagonist group and the control group (WMD, −2.160; 95% CI, −7.996 to 3.677; P = .468), and between-trial heterogeneity was low (I2 = 21.7%). Similarly, serious adverse events were comparable between the 2 groups (OR, 1.063; 95% CI, 0.669 to 1.690; P = .796). An illustrative calculation of an odds ratio and its 95% confidence interval from event counts is sketched below.
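As a worked example of how an odds ratio and its confidence interval are derived from 2 x 2 event counts, the sketch below uses hypothetical numbers (not the pooled counts of the included trials); it only illustrates the form of the OR estimates reported in Table 3.

```python
import math

# Hypothetical 2 x 2 table for serious adverse events (illustrative counts,
# not the pooled counts of the included trials):
#                     events   no events
# ERA group             a          b
# placebo group         c          d
a, b, c, d = 30, 170, 28, 172

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
# A 95% CI that contains 1.0 means the difference in event odds is not
# statistically significant at the 5% level, as for the pooled OR above.
```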
4. Discussion: Idiopathic pulmonary fibrosis is a chronic fibrosing lung disorder with a high incidence and a prognosis worse than that of many tumors.[21] Many new studies aim to explore novel treatments that suppress the initiation and progression of pulmonary fibrosis.[22] Despite this, the optimal treatment of the disease remains largely unknown. Pulmonary arterial hypertension commonly accompanies idiopathic pulmonary fibrosis and has a significant negative effect on survival time. Endothelin receptor antagonists, such as ambrisentan and bosentan, have individually been shown to be effective compared with placebo in treating pulmonary arterial hypertension.[23] Nevertheless, the exact effect of endothelin receptor antagonists in the treatment of idiopathic pulmonary fibrosis remains controversial; therefore, this study conducted a meta-analysis of the currently available studies to clarify the effect of endothelin receptor antagonists on the treatment of idiopathic pulmonary fibrosis. Over the past several decades, progress has been made in the widespread use of various medications to slow the loss of lung function in patients with idiopathic pulmonary fibrosis.[3] Gunther et al[24] conducted a study in which 12 idiopathic pulmonary fibrosis patients underwent analysis of gas exchange properties on day 1, before and after the administration of 125 mg bosentan; the results showed no significant improvement in lung function over a 3-month treatment period, and even some tendency toward deterioration of FVC (from 55.73 to 52.18) and DLco (from 36.73 to 33.27) values. In the current study, we assessed the effect on lung function by evaluating FVC and DLco in patients with idiopathic pulmonary fibrosis treated with endothelin receptor antagonists, demonstrating that the change in FVC from baseline did not differ significantly between the experimental and control groups. Similarly, data regarding DLco changes from baseline were analyzed and revealed no significant difference, and the decline in percent-predicted DLco was not significantly lower in the endothelin receptor antagonist group. The improvement of quality of life after drug therapy is also one of the current concerns in the treatment of idiopathic pulmonary fibrosis; therefore, we evaluated 2 items, the mean change in 6-minute walk distance (WMD, −2.160; P = 0.468) and the incidence of serious adverse events (OR, 1.337; P = 0.450). This study thus illustrated that endothelin receptor antagonists did not improve idiopathic pulmonary fibrosis, in line with several previous publications. Raghu et al[25] revealed that bosentan was not superior to placebo in improving 6-minute walk distance. Additionally, Gunther et al demonstrated that the mean 6-minute walk distance was the same before and after treatment with bosentan (320.9 vs 320.9). However, Lee et al[14] performed a meta-analysis and showed that pulmonary arterial hypertension-specific agents such as PDE-5 inhibitors significantly affected quality of life, as measured by the St. George's Respiratory Questionnaire (SGRQ) total score.
In this study, only 1 article assessed the SGRQ total score and revealed that, up to month 6, the SGRQ total score in the bosentan group remained almost unchanged, whereas it worsened in the placebo group.[16] Therefore, more high-quality studies are necessary to clarify this issue. Of note, several limitations are involved in this meta-analysis. First, although all the included studies were RCTs, there were only 6 articles exploring the effect of endothelin receptor antagonists on idiopathic pulmonary fibrosis, and their sample sizes were small. Second, it is crucial to consider heterogeneity when interpreting the results of a meta-analysis; however, some of the statistical results in this study showed moderate-to-high heterogeneity. Third, although we identified 2 trials that examined lung function, we had to exclude them from our pooled analyses because they reported insufficient outcome data without value changes. Finally, the follow-up periods of the included studies were not identical. Thus, although meaningful conclusions can be drawn about trends in the effect of endothelin receptor antagonists, further high-quality, large-sample studies are necessary to determine their effect. 5. Conclusion: Although endothelin receptor antagonists provide benefits in pulmonary arterial hypertension, this meta-analysis provides insufficient evidence to support that endothelin receptor antagonist administration provides a benefit in idiopathic pulmonary fibrosis patients.
Background: Fibrotic diseases take a very heavy toll in terms of morbidity and mortality, equal to or even greater than that caused by metastatic cancer. This meta-analysis aimed to evaluate the effect of endothelin receptor antagonists on idiopathic pulmonary fibrosis. Methods: A systematic search of clinical trials in the Medline, Google Scholar, Cochrane Library, and PubMed electronic databases was performed. Stata version 12.0 (StataCorp LP, College Station, TX) was used as the statistical software. Results: A total of 5 studies, including 1500 participants, were analyzed. Our analysis found no significant difference between the endothelin receptor antagonist and placebo groups regarding lung function, estimated by both the change in forced vital capacity from baseline and the DLco index. Exercise capacity and serious adverse effects were also considered; however, there was still no significant difference between the 2 groups. Conclusions: This meta-analysis provides insufficient evidence to support that endothelin receptor antagonist administration provides a benefit among included participants with idiopathic pulmonary fibrosis.
1. Introduction: Interstitial lung diseases (ILDs) represent a large group of diseases that cause scarring (fibrosis) of the lungs,[1] which causes stiffness, making it difficult to breathe and to get oxygen into the bloodstream.[2] Of the ILDs, idiopathic pulmonary fibrosis, also known as cryptogenic fibrosing alveolitis, is the most common and most fatal. It is characterized by the aberrant accumulation of epithelial and endothelial damage progressing to fibrosis in the lung parenchyma[3] and by the pathological hallmark of usual interstitial pneumonia.[4] Globally, idiopathic pulmonary fibrosis affects more than 5 million people every year.[5] Even though idiopathic pulmonary fibrosis is considered an uncommon disease, it still places a severe economic and psychological burden on each affected family.[6] The progression of idiopathic pulmonary fibrosis generally manifests as a step-wise decline in pulmonary function, measured as forced vital capacity (FVC), with worsening dyspnea and a high degree of morbidity.[7,8] Endothelin receptor antagonists are potent vasodilator and antimitotic substances that specifically dilate and remodel the pulmonary arterial system.[9] Bosentan, a dual-receptor antagonist of both endothelin (ET) receptor subtypes (ETA and ETB), was first developed for the treatment of pulmonary arterial hypertension[10] and congestive heart failure.[11] By blocking the ET receptor, bosentan has been shown to decrease pulmonary vascular resistance and to retard the pathogenic manifestations in the bleomycin-induced pulmonary fibrosis model,[12] reducing collagen deposition in the lungs. Although many medicines to treat this disease have emerged in recent years with increasing knowledge of the underlying mechanisms of idiopathic pulmonary fibrosis, no therapy is yet available that effectively influences its course. Thus, we performed a study to evaluate the efficacy and safety of endothelin receptor antagonists in patients with idiopathic pulmonary fibrosis. 5. Conclusion: Shuang Li and Chuanhua Yan designed and conceptualized the article. Shuang Li and Wenqiang Xin prepared the figures and tables. All authors significantly contributed to writing the paper and provided important intellectual content.
Background: Fibrotic diseases take a very heavy toll in terms of morbidity and mortality, equal to or even greater than that caused by metastatic cancer. This meta-analysis aimed to evaluate the effect of endothelin receptor antagonists on idiopathic pulmonary fibrosis. Methods: A systematic search of clinical trials in the Medline, Google Scholar, Cochrane Library, and PubMed electronic databases was performed. Stata version 12.0 (StataCorp LP, College Station, TX) was used as the statistical software. Results: A total of 5 studies, including 1500 participants, were analyzed. Our analysis found no significant difference between the endothelin receptor antagonist and placebo groups regarding lung function, estimated by both the change in forced vital capacity from baseline and the DLco index. Exercise capacity and serious adverse effects were also considered; however, there was still no significant difference between the 2 groups. Conclusions: This meta-analysis provides insufficient evidence to support that endothelin receptor antagonist administration provides a benefit among included participants with idiopathic pulmonary fibrosis.
6,553
205
[ 21, 107, 154, 91, 55, 147, 2362, 244, 381, 101, 239, 100, 38 ]
17
[ "pulmonary", "receptor", "fibrosis", "endothelin", "endothelin receptor", "pulmonary fibrosis", "idiopathic", "idiopathic pulmonary", "idiopathic pulmonary fibrosis", "antagonists" ]
[ "fibrosis lungs causes", "lung fibrosing disorder", "pulmonary fibrosis affects", "interstitial lung diseases", "idiopathic pulmonary fibrosis" ]
[CONTENT] bosentan | endothelin receptor antagonists | idiopathic pulmonary fibrosis | meta-analysis [SUMMARY]
[CONTENT] bosentan | endothelin receptor antagonists | idiopathic pulmonary fibrosis | meta-analysis [SUMMARY]
[CONTENT] bosentan | endothelin receptor antagonists | idiopathic pulmonary fibrosis | meta-analysis [SUMMARY]
[CONTENT] bosentan | endothelin receptor antagonists | idiopathic pulmonary fibrosis | meta-analysis [SUMMARY]
[CONTENT] bosentan | endothelin receptor antagonists | idiopathic pulmonary fibrosis | meta-analysis [SUMMARY]
[CONTENT] bosentan | endothelin receptor antagonists | idiopathic pulmonary fibrosis | meta-analysis [SUMMARY]
[CONTENT] Endothelin Receptor Antagonists | Humans | Idiopathic Pulmonary Fibrosis | Randomized Controlled Trials as Topic | Respiratory Function Tests | Vital Capacity [SUMMARY]
[CONTENT] Endothelin Receptor Antagonists | Humans | Idiopathic Pulmonary Fibrosis | Randomized Controlled Trials as Topic | Respiratory Function Tests | Vital Capacity [SUMMARY]
[CONTENT] Endothelin Receptor Antagonists | Humans | Idiopathic Pulmonary Fibrosis | Randomized Controlled Trials as Topic | Respiratory Function Tests | Vital Capacity [SUMMARY]
[CONTENT] Endothelin Receptor Antagonists | Humans | Idiopathic Pulmonary Fibrosis | Randomized Controlled Trials as Topic | Respiratory Function Tests | Vital Capacity [SUMMARY]
[CONTENT] Endothelin Receptor Antagonists | Humans | Idiopathic Pulmonary Fibrosis | Randomized Controlled Trials as Topic | Respiratory Function Tests | Vital Capacity [SUMMARY]
[CONTENT] Endothelin Receptor Antagonists | Humans | Idiopathic Pulmonary Fibrosis | Randomized Controlled Trials as Topic | Respiratory Function Tests | Vital Capacity [SUMMARY]
[CONTENT] fibrosis lungs causes | lung fibrosing disorder | pulmonary fibrosis affects | interstitial lung diseases | idiopathic pulmonary fibrosis [SUMMARY]
[CONTENT] fibrosis lungs causes | lung fibrosing disorder | pulmonary fibrosis affects | interstitial lung diseases | idiopathic pulmonary fibrosis [SUMMARY]
[CONTENT] fibrosis lungs causes | lung fibrosing disorder | pulmonary fibrosis affects | interstitial lung diseases | idiopathic pulmonary fibrosis [SUMMARY]
[CONTENT] fibrosis lungs causes | lung fibrosing disorder | pulmonary fibrosis affects | interstitial lung diseases | idiopathic pulmonary fibrosis [SUMMARY]
[CONTENT] fibrosis lungs causes | lung fibrosing disorder | pulmonary fibrosis affects | interstitial lung diseases | idiopathic pulmonary fibrosis [SUMMARY]
[CONTENT] fibrosis lungs causes | lung fibrosing disorder | pulmonary fibrosis affects | interstitial lung diseases | idiopathic pulmonary fibrosis [SUMMARY]
[CONTENT] pulmonary | receptor | fibrosis | endothelin | endothelin receptor | pulmonary fibrosis | idiopathic | idiopathic pulmonary | idiopathic pulmonary fibrosis | antagonists [SUMMARY]
[CONTENT] pulmonary | receptor | fibrosis | endothelin | endothelin receptor | pulmonary fibrosis | idiopathic | idiopathic pulmonary | idiopathic pulmonary fibrosis | antagonists [SUMMARY]
[CONTENT] pulmonary | receptor | fibrosis | endothelin | endothelin receptor | pulmonary fibrosis | idiopathic | idiopathic pulmonary | idiopathic pulmonary fibrosis | antagonists [SUMMARY]
[CONTENT] pulmonary | receptor | fibrosis | endothelin | endothelin receptor | pulmonary fibrosis | idiopathic | idiopathic pulmonary | idiopathic pulmonary fibrosis | antagonists [SUMMARY]
[CONTENT] pulmonary | receptor | fibrosis | endothelin | endothelin receptor | pulmonary fibrosis | idiopathic | idiopathic pulmonary | idiopathic pulmonary fibrosis | antagonists [SUMMARY]
[CONTENT] pulmonary | receptor | fibrosis | endothelin | endothelin receptor | pulmonary fibrosis | idiopathic | idiopathic pulmonary | idiopathic pulmonary fibrosis | antagonists [SUMMARY]
[CONTENT] pulmonary | fibrosis | pulmonary fibrosis | idiopathic | idiopathic pulmonary fibrosis | idiopathic pulmonary | disease | lungs | receptor | diseases [SUMMARY]
[CONTENT] outcome | literature | following | analysis | study | review | i2 | measures | statistical | reporting [SUMMARY]
[CONTENT] articles | inclusion | criteria | study | studies | fig accordance | displayed following diagram fig | displayed following diagram | displayed following | displayed [SUMMARY]
[CONTENT] provides | better benefits pulmonary | better benefits pulmonary arterial | endothelin receptor antagonists providing | receptor antagonists providing better | receptor antagonists providing | antagonists administration | antagonists administration induces | antagonists administration induces provides | provides benefit [SUMMARY]
[CONTENT] pulmonary | receptor | endothelin | endothelin receptor | fibrosis | receptor antagonists | antagonists | endothelin receptor antagonists | pulmonary fibrosis | idiopathic [SUMMARY]
[CONTENT] pulmonary | receptor | endothelin | endothelin receptor | fibrosis | receptor antagonists | antagonists | endothelin receptor antagonists | pulmonary fibrosis | idiopathic [SUMMARY]
[CONTENT] ||| endothelin [SUMMARY]
[CONTENT] Medline | Google Scholar | Cochrane Library | PubMed ||| 12.0 | Stata Crop LP, College Station | TX [SUMMARY]
[CONTENT] 5 | 1500 ||| endothelin | DLco ||| 2 [SUMMARY]
[CONTENT] endothelin [SUMMARY]
[CONTENT] ||| endothelin ||| Medline | Google Scholar | Cochrane Library | PubMed ||| 12.0 | Stata Crop LP, College Station | TX ||| 5 | 1500 ||| endothelin | DLco ||| 2 ||| endothelin [SUMMARY]
[CONTENT] ||| endothelin ||| Medline | Google Scholar | Cochrane Library | PubMed ||| 12.0 | Stata Crop LP, College Station | TX ||| 5 | 1500 ||| endothelin | DLco ||| 2 ||| endothelin [SUMMARY]
Effects of a Catechol-Functionalized Hyaluronic Acid Patch Combined with Human Adipose-Derived Stem Cells in Diabetic Wound Healing.
33807864
Chronic inflammation and impaired neovascularization play critical roles in delayed wound healing in diabetic patients. To overcome the limitations of current diabetic wound (DBW) management interventions, we investigated the effects of a catechol-functionalized hyaluronic acid (HA-CA) patch combined with adipose-derived mesenchymal stem cells (ADSCs) in DBW mouse models.
INTRODUCTION
Diabetes in mice (C57BL/6, male) was induced by streptozotocin (50 mg/kg, >250 mg/dL). Mice were divided into four groups: control (DBW) group, ADSCs group, HA-CA group, and HA-CA + ADSCs group (n = 10 per group). Fluorescently labeled ADSCs (5 × 105 cells/100 µL) were transplanted into healthy tissues at the wound boundary or deposited at the HA-CA patch at the wound site. The wound area was visually examined. Collagen content, granulation tissue thickness and vascularity, cell apoptosis, and re-epithelialization were assessed. Angiogenesis was evaluated by immunohistochemistry, quantitative real-time polymerase chain reaction, and Western blot.
METHODS
DBW size was significantly smaller in the HA-CA + ADSCs group (8% ± 2%) compared with the control (16% ± 5%, p < 0.01) and ADSCs (24% ± 17%, p < 0.05) groups. In mice treated with HA-CA + ADSCs, the epidermis was regenerated, and skin thickness was restored. CD31 and von Willebrand factor-positive vessels were detected in mice treated with HA-CA + ADSCs. The mRNA and protein levels of VEGF, IGF-1, FGF-2, ANG-1, PI3K, and AKT in the HA-CA + ADSCs group were the highest among all groups, although the Spred1 and ERK expression levels remained unchanged.
RESULTS
The combination of HA-CA and ADSCs provided synergistic wound healing effects by maximizing paracrine signaling and angiogenesis via the PI3K/AKT pathway. Therefore, ADSC-loaded HA-CA might represent a novel strategy for the treatment of DBW.
CONCLUSIONS
[ "Adipose Tissue", "Animals", "Bandages", "Diabetes Mellitus, Experimental", "Diabetic Angiopathies", "Female", "Humans", "Hyaluronic Acid", "Male", "Mice", "Stem Cell Transplantation", "Stem Cells", "Wound Healing", "Wounds and Injuries" ]
7961484
1. Introduction
Diabetic wound (DBW) is a broad term describing various pathological conditions that manifest as wounds or ulcers associated with diabetes. Diabetic foot ulcers and other DBWs are common complications in diabetes, occurring in approximately 20% of diabetic patients [1]. DBWs have been associated with hyperglycemia, which often results in diabetic peripheral neuropathy and blockage of peripheral blood vessels, ultimately leading to diabetic foot ulcers [2]. Patients with DBWs often progress rapidly, making treatment challenging and potentially resulting in lower extremity amputation. In addition to diabetic peripheral neuropathy and peripheral vascular disease, several other risk factors of DBW have been identified, including limited joint mobility and foot deformities [2,3]. Chronic inflammation is an important barrier in the treatment of DBW. Inflammation plays a crucial role in wound healing; in DBW, inflammatory responses are delayed, and the wound does not heal. In addition, the balance between collagen production and degradation is disrupted, further impairing the wound healing process [4]. The clinical management of DBW typically involves wound dressing and debridement of necrotic tissues. Numerous recent studies have indicated that modulating extracellular matrix (ECM) synthesis, growth factor release, and vascularization-targeting approaches might be useful in the treatment of DBW. Many clinical trials with mesenchymal stem cells (MSCs) are currently in progress as they are readily accessible and safer than other types of stem cells. MSCs represent a stable source for experiments or clinical treatment because they can be obtained from various tissues [5]. MSCs also exhibit multilineage differentiation potential and exert various immunomodulatory and paracrine effects [6]. Adipose-derived stem cells (ADSCs), which can be isolated easily and in large numbers from adipose tissue through liposuction surgery and have been tested for various clinical applications, exhibit long-term growth in vitro and can differentiate into various cell types upon induction [7]. Moreover, ADSCs are known to exert paracrine effects that promote tissue regeneration [6]. Hyaluronic acid (HA) is a polysaccharide found in various tissues of the human body, including the joints, cartilage, eyes, and skin [8,9]. HA plays a key role in wound healing by regulating cell proliferation, migration, and differentiation, as well as ECM organization and metabolism. High molecular weight HA inhibits the proliferation and migration of most cell types, while low molecular weight forms of HA (<300 kDa) promote cell proliferation and display angiogenic properties [10,11,12]. HA in the molecular weight range of 150–250 kDa has been shown to have beneficial effects on wound healing by enhancing cell–HA interactions through cell-surface receptors for HA, which activates signal transduction pathways essential for cellular migration and proliferation [13,14]. Therefore, HA has been proposed as a biomaterial for DBW treatment [8]. HA hydrogels are widely used in numerous biomedical and pharmaceutical applications due to their inherent biocompatibility, matrix structure similarity, and drug delivery capabilities [15]. However, the application of injectable hydrogels to treat human diseases remains limited. Furthermore, the incorporation of stem cells or growth factors into hydrogels remains technically challenging. 
Catechol-modified HA (HA-CA) hydrogels have higher biocompatibility, better tissue adhesion properties, and provide improved stem cell survival and functionality compared to conventional HA hydrogels [16]. However, their potential use to treat DBW remains largely unexplored. The purpose of this study was to assess the therapeutic effects of an HA-CA patch combined with ADSCs in the treatment of DBW. In particular, we evaluated their effects on tissue regeneration and angiogenesis using a DBW mouse model.
null
null
2. Results
2.1. Wound Area Measurement: The wounds were photographed on days 1, 3, 5, 7, 14, and 21, and the wound areas were quantified using ImageJ software. The change in wound size over time was calculated as the percentage of wound closure for each treatment group, normalized to the initial wound area (a schematic version of this calculation is sketched after the next subsection). Visual inspection indicated a decrease in wound size. Compared with control mice, the wound closure rate on day 3 was significantly higher in the treatment groups (control DBW group, 105% ± 18% vs. ADSCs group, 82% ± 15%; HA-CA group, 71% ± 11% **; HA-CA + ADSCs group, 76% ± 10% *; * p < 0.05, ** p < 0.01). At 14 days, remarkable wound healing was observed in the HA-CA + ADSCs group (control DBW group, 59% ± 8% vs. HA-CA + ADSCs group, 19% ± 4%; p < 0.01; Figure 1A,B). Although the wounds appeared to have healed in all groups by day 21, the wound size was significantly smaller in the HA-CA + ADSCs group than in the control and ADSCs groups (control DBW group, 16% ± 5% **; ADSCs group, 24% ± 17% * vs. HA-CA + ADSCs group, 8% ± 2%; * p < 0.05, ** p < 0.01; Figure 1A,B). 2.2. PKH26-Labeled ADSC Tracing: PKH26-labeled ADSCs (5 × 10^5 cells/100 µL) were injected into healthy subcutaneous tissues at the wound boundary. In the HA-CA + ADSCs group, PKH26-labeled ADSCs were transplanted with the HA-CA patch at the wound site. The mice were sacrificed on day 14 for PKH26-labeled ADSC tracking. Although most ADSCs migrated from the injection site to the wound, only a few ADSCs were observed in the ADSC injection group. In contrast to the ADSCs group, ADSCs were detected in the epidermis, papillary dermis, and reticular dermis at the wound site in the HA-CA + ADSCs group (Figure 1C).
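A minimal sketch of the wound-size normalization described in Section 2.1 is given below. The function name and the example area values are hypothetical, chosen only to illustrate how each day's ImageJ area measurement is expressed as a percentage of the initial wound area; this is not the authors' analysis script.

```python
def remaining_wound_area_percent(area_day_n: float, area_initial: float) -> float:
    """Wound area on a given day expressed as a percentage of the initial
    wound area (the normalization described in Section 2.1)."""
    return 100.0 * area_day_n / area_initial

# Hypothetical ImageJ readouts in mm^2 (not the study's raw measurements)
initial_area = 78.5
day14_area = 15.1
print(f"Day 14 wound size: {remaining_wound_area_percent(day14_area, initial_area):.0f}% of initial area")
```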
2.3. Histopathological Assessment: Wounds were stained with hematoxylin and eosin on postoperative day (POD) 21 for histological observation. Complete re-epithelialization was observed in all groups; however, there were significant differences in epidermis thickness and skin appendages among the groups. In the DBW group, the epidermis consisted of one to three thin epithelial cell layers, and the extent of tissue regeneration in the dermis was low. In the ADSCs, HA-CA, and HA-CA + ADSCs groups, the epidermis was completely regenerated, and epidermal appendages, including sweat glands and hair follicles, were observed. Notably, neovascular-like structures were observed in the dermis layer (Figure 2A). Masson’s trichrome staining revealed marked granulation tissue formation in the HA-CA, ADSCs, and HA-CA + ADSCs groups. Furthermore, these groups showed increased proportions of muscle cells and extensive dermis tissue remodeling (Figure 2B). Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive cells were counted manually in microscopic fields (DBW group, 19 ± 2; ADSCs group, 25 ± 4; HA-CA group, 21 ± 4; HA-CA + ADSCs group, 21 ± 3; Figure 2F). The level of apoptosis was not significantly different among the groups (Figure 2C). In comparison with the DBW group, the skin was significantly thicker in the HA-CA and HA-CA + ADSCs groups (DBW group, 468 ± 34; ADSCs group, 555 ± 52; HA-CA group, 815 ± 90 *; HA-CA + ADSCs group, 900 ± 102 μm *; * p < 0.05). The epidermis was thickest in the HA-CA + ADSCs group (DBW group, 77 ± 4; ADSCs group, 104 ± 16; HA-CA group, 107 ± 17; HA-CA + ADSCs group, 122 ± 7 μm *; * p < 0.05; Figure 2D,E). 2.4. Tissue Neovascularization: To examine wound neovascularization, we performed immunohistochemical staining for the endothelial cell-specific markers cluster of differentiation 31 (CD31) and von Willebrand factor (vWF). Most of the CD31- and vWF-positive vessels were distributed in the lower middle part of the reticular layer of the dermis.
CD31 is expressed in small vessels, mature blood vessels, and immature vascular structures. Most CD31-positive vessels were 10–15 μm in diameter, and none exceeded 20 μm. Compared with the control DBW group, the number of CD31-positive vessels was higher in mice treated with HA-CA and HA-CA + ADSCs (control DBW group, 14 ± 2; ADSCs group, 25 ± 3; HA-CA group, 52 ± 5 *; HA-CA + ADSCs group, 73 ± 10 **; * p < 0.05, ** p < 0.01; Figure 3A,C). The vWF-positive vessels were larger than the CD31-positive vessels, measuring 20–30 μm in diameter, with some exceeding 40 μm. Mice in the HA-CA + ADSCs group had the greatest number of vWF-positive vessels (control DBW group, 5 ± 1; ADSCs group, 6 ± 2; HA-CA group, 9 ± 1; HA-CA + ADSCs group, 16 ± 3 *; * p < 0.05; Figure 3B,D). 2.5. Angiogenesis-Related Gene Expression Profile: To confirm the effects of the different treatments on angiogenesis, we assessed the mRNA levels of various angiogenesis-associated genes, including those encoding vascular endothelial growth factor (VEGF), angiopoietin 1 (Ang-1), insulin-like growth factor 1 (IGF-1), fibroblast growth factor 2 (FGF-2), phosphoinositide 3-kinase (PI3K), protein kinase B (Akt), sprouty-related EVH1 domain-containing 1 (Spred1), and extracellular signal-regulated kinase (ERK). Mice treated with HA-CA + ADSCs had the highest VEGF, IGF-1, and FGF-2 mRNA levels.
Ang-1 was upregulated in all treatment groups compared with the control group but was statistically significant only in the HA-CA + ADSCs group. FGF-2 was expressed at higher levels in the HA-CA and HA-CA + ADSCs treatment groups than in the control group, but treatment with ADSCs alone had no effect on FGF-2 mRNA levels. Although PI3K and Akt were significantly upregulated in mice treated with HA-CA + ADSCs, no changes were observed in the Spred1 or ERK mRNA levels (Figure 4). 2.6. VEGF Expression Analysis by Immunofluorescence: To further confirm the effects of ADSCs and HA-CA on angiogenesis, we performed immunofluorescence staining for VEGF. Compared with the DBW group, in which the number of VEGF-expressing cells was low, the numbers of VEGF-positive cells were significantly higher in the ADSCs and HA-CA groups. In particular, mice treated with HA-CA + ADSCs had the greatest number of VEGF-expressing cells (Figure 4I). VEGF-positive cells were counted manually in microscopic fields and were highest in the HA-CA + ADSCs group (control DBW group, 15 ± 4; ADSCs group, 25 ± 6; HA-CA group, 28 ± 5; HA-CA + ADSCs group, 46 ± 5 **; ** p < 0.01; Figure 4J). 2.7. Angiogenesis-Related Protein Expression: We investigated the expression of various angiogenesis-associated proteins, including VEGFA, FGF-2, Akt, phospho-Akt, ERK1/2, and phospho-ERK1/2. VEGFA and FGF-2 expression was increased in the HA-CA and HA-CA + ADSCs treatment groups compared to the control DBW group (VEGFA, control DBW group, 49 ± 4; ADSCs group, 43 ± 14; HA-CA group, 70 ± 8; HA-CA + ADSCs group, 91 ± 2 *; * p < 0.05; Figure 5A).
FGF-2 was increased in all groups compared to the controls, especially in the HA-CA group (control DBW group, 34 ± 8; ADSCs group, 54 ± 6; HA-CA group, 76 ± 3 **; HA-CA + ADSCs group, 52 ± 2; ** p < 0.01; Figure 5A). With regard to components of the ERK and PI3K signaling pathways, the expression of ERK1/2 and AKT1 was examined. Both the AKT and phospho-AKT (p-AKT) levels were increased in all treatment groups compared to the controls, and the difference was especially great in the HA-CA group (control DBW group, 20 ± 3; ADSCs group, 34 ± 5; HA-CA group, 56 ± 2 **; HA-CA + ADSCs group, 43 ± 2; ** p < 0.01; Figure 5B). The expression of total ERK1/2 was similar in all groups, and phospho-ERK1/2 (p-ERK1/2) was hardly expressed (control DBW group, 32 ± 3; ADSCs group, 28 ± 1; HA-CA group, 29 ± 2; HA-CA + ADSCs group, 27 ± 2; Figure 5B).
null
null
[ "2.1. Wound Area Measurement", "2.2. PKH26-Labeled ADSC Tracing", "2.3. Histopathological Assessment", "2.4. Tissue Neovascularization", "2.5. Angiogenesis-Related Gene Expression Profile", "2.6. VEGF Expression Analysis by Immunofluorescence", "2.7. Angiogenesis-Related Protein Expression", "4. Materials and Methods", "4.1. HA-CA Synthesis", "4.2. ADSC Isolation and Characterization", "4.3. Diabetic Wound Animal Model", "4.4. Histopathological Assessment", "4.5. Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Staining", "4.6. Immunohistochemistry", "4.7. RNA Isolation and qRT-PCR", "4.8. Immunofluorescence Analysis", "4.9. Protein Extraction and Western Blotting", "4.10. Statistical Analysis" ]
[ "The wounds were photographed on days 1, 3, 5, 7, 14, and 21, and the wound areas were quantified using ImageJ software. The change in wound size over time was calculated as the percentage of wound closure for each treatment group compared to the initial area of the wound and normalized relative to the initial area. Visual inspection indicated a decrease in wound size. Compared with control mice, the wound closure rate on day 3 was significantly higher in the treatment groups (control DBW group, 105% ± 18% vs. ADSCs group, 82% ± 15%, HA-CA group, 71% ± 11% **, HA-CA + ADSCs group, 76% ± 10% *; * p < 0.05, ** p < 0.01). At 14 days, remarkable wound healing was observed in the HA-CA + ADSCs group (control DBW group 59% ± 8% vs. HA-CA + ADSCs group 19% ± 4%; p < 0.01; Figure 1A,B). Although the wounds appeared to have healed in all groups on day 21, the wound size was significantly smaller in the HA-CA + ADSCs group than in the control and ADSCs groups (control DBW group, 16% ± 5% **, ADSCs group, 24% ± 17% * vs. HA-CA + ADSCs group, 8% ± 2%; * p < 0.05, ** p < 0.01; Figure 1A,B).", "PKH26-labeled ADSCs (5 × 105 cells/100 µL) were injected into healthy subcutaneous tissues at the wound boundary. In the HA-CA + ADSCs group, PKH26-labeled ADSCs were transplanted with the HA-CA patch at the wound site. The mice were sacrificed on day 14 for PKH26-labeled ADSC tracking. Although most ADSCs migrated from the injection site to the wound, only a few ADSCs were observed in the ADSC injection group. In contrast to the ADSC group, ADSCs were detected in the epidermis, papillary dermis, and reticular dermis at the wound site in the HA-CA + ADSCs group (Figure 1C).", "Wounds were stained with hematoxylin and eosin on postoperative day (POD) 21 for histological observation. Complete re-epithelialization was observed in all groups; however, there were significant differences in epidermis thickness and skin appendages among the groups. In the DBW group, the epidermis consisted of one to three thin epithelial cell layers, and the extent of tissue regeneration at the dermis was low. In the ADSCs, HA-CA, and HA-CA + ADSCs groups, the epidermis was completely regenerated, and epidermal appendages, including sweat glands and hair follicles, were observed. Notably, neovascular-like structures were observed in the dermis layer (Figure 2A). Masson’s trichrome staining revealed marked granulation tissue formation in the HA-CA, ADSCs, and HA-CA + ADSCs groups. Furthermore, these groups showed increased proportions of muscle cells and extensive dermis tissue remodeling (Figure 2B). Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive cells were counted manually in microscopic fields (DBW group, 19 ± 2; ADSCs group, 25 ± 4; HA-CA group, 21 ± 4; HA-CA + ADSCs group, 21 ± 3; Figure 2F). The level of apoptosis was not significantly different among the groups (Figure 2C).\nIn comparison with the DBW group, the skin was significantly thicker in the HA-CA and HA-CA + ADSCs groups (DBW group, 468 ± 34; ADSCs group, 555 ± 52; HA-CA group, 815 ± 90 *; HA-CA + ADSCs group, 900 ± 102 μm *, * p < 0.05). The epidermis was thickest in the HA-CA + ADSCs group (DBW group, 77 ± 4; ADSCs group, 104 ± 16; HA-CA group, 107 ± 17; HA-CA + ADSCs group, 122 ± 7 μm *, * p < 0.05; Figure 2D,E).", "To examine wound neovascularization, we performed immunohistochemical staining for the endothelial cell-specific markers cluster of differentiation 31 (CD31) and von willebrand factor (vWF). 
Most of the CD31- and vWF-positive vessels were distributed in the lower middle part of the reticular layer of the dermis. CD31 is expressed in small vessels, mature blood vessels, and immature vascular structures. Most CD31-positive vessels were 10–15 μm in diameter and none exceeded 20 μm. Compared with the control DBW group, the number of CD31-positive vessels was higher in mice treated with HA-CA and HA-CA + ADSCs (control DBW group, 14 ± 2; ADSCs group, 25 ± 3; HA-CA group, 52 ± 5 *; HA-CA + ADSCs group, 73 ± 10 **; * p < 0.05, ** p < 0.01; Figure 3A,C). The vWF-positive vessels were larger than CD31-positive vessels and were 20–30 μm in diameter, with some exceeding 40 μm. Mice in the HA-CA + ADSCS group had the greatest number of vWF-positive vessels (control DBW group, 5 ± 1; ADSCs group, 6 ± 2; HA-CA group, 9 ± 1; HA-CA + ADSCs group, 16 ± 3 *; * p < 0.05; Figure 3B,D).", "To confirm the effects of the different treatments on angiogenesis, we assessed the mRNA levels of various angiogenesis-associated genes, including those encoding vascular endothelial growth factor (VEGF), angiopoietin 1 (Ang-1), insulin-like growth factor 1 (IGF-1), fibroblast growth factor 2 (FGF-2), phosphoinositide 3-kinase (PI3K), protein kinase B (Akt), sprouty-related EVH1 domain-containing 1 (Spred1), and extracellular signal-regulated kinase (ERK). Mice treated with HA-CA + ADSCs had the highest VEGF, IGF-1, and FGF-2 mRNA levels. Ang-1 was upregulated in all treatment groups compared with the control group but was statistically significant only in the HA-CA + ADSCs group. FGF-2 was expressed at higher levels in the HA-CA and HA-CA + ADSCs treatment groups than in the control group, but treatment with ADSCs alone had no effect on FGF-2 mRNA levels. Although PI3K and Akt were significantly upregulated in mice treated with HA-CA + ADSCs, no changes were observed in the Spred1 or ERK mRNA levels (Figure 4).", "To further confirm the effects of ADSCs and HA-CA on angiogenesis, we performed immunofluorescence staining for VEGF. Compared with the DBW group, in which the number of VEGF-expressing cells was low, the numbers of VEGF-positive cells were significantly higher in the ADSCs and HA-CA groups. In particular, mice treated with HA-CA + ADSCs had the greatest number of VEGF-expressing cells (Figure 4I). VEGF-positive cells were counted manually in microscopic fields and were highest in the HA-CA + ADSCs group (control DBW group, 15 ± 4; ADSCs group, 25 ± 6; HA-CA group, 28 ± 5; HA-CA + ADSCs group, 46 ± 5 **; ** p < 0.01; Figure 4J).", "We investigated the expression of various angiogenesis-associated proteins, including VEGFA, FGF-2, Akt, phospho-Akt, ERK1/2, and phospho-ERK1/2. VEGFA and FGF-2 expression was increased in the HA-CA and HA-CA + ADSCs treatment groups compared to the control DBW group (VEGFA, control DBW group, 49 ± 4; ADSCs group, 43 ± 14; HA-CA group, 70 ± 8; HA-CA + ADSCs group, 91 ± 2 *; * p < 0.05; Figure 5A). FGF-2 was increased in all groups compared to the controls, especially in the HA-CA group (control DBW group, 34 ± 8; ADSCs group, 54 ± 6; HA-CA group, 76 ± 3 **; HA-CA + ADSCs group, 52 ± 2; ** p < 0.01; Figure 5A). With regard to components of the ERK and PI3K signaling pathways, ERK1/2 and AKT1 expression was confirmed. 
Both the AKT and phospho-AKT (p-AKT) levels were increased in all treatment groups compared to the controls, and the difference was especially great in the HA-CA group (control DBW group, 20 ± 3; ADSCs group, 34 ± 5; HA-CA group, 56 ± 2 **; HA-CA + ADSCs group, 43 ± 2; ** p < 0.01; Figure 5B). The expression of total ERK1/2 was similar in all groups, and phospho-ERK1/2 (p-ERK1/2) was hardly expressed (control DBW group, 32 ± 3; ADSCs group, 28 ± 1; HA-CA group, 29 ± 2; HA-CA + ADSCs group, 27 ± 2; Figure 5B).", "4.1. HA-CA Synthesis The HA-CA patch was synthesized by conjugating dopamine hydrochloride (Sigma-Aldrich, St. Louis, MO, USA) to a 200 kDa HA backbone (Lifecore Biomedical, Chaska, MN, USA), as previously described [16,36]. Briefly, HA was dissolved in distilled water at a concentration of 1% (w/v). Subsequently, 1-ethyl-3-(3-dimethyl aminopropyl) carbodiimide (EDC; TCI Co., Japan) and N-hydroxysulfosuccinimide (NHS; Sigma-Aldrich) were added to HA solution at an HA:EDC:NHS molar ratio of 1:1.5:1.5 (pH 5.0). Dopamine hydrochloride was added to the solution at an HA:dopamine hydrochloride molar ratio of 1:1.5, and the solution was incubated for 12 h at room temperature. The solution was then dialyzed using a membrane (Cellu Sep T2, MW cut-off 6–8 kDa; Membrane Filtration Products Inc., Seguin, TX, USA) against pH 5.0 phosphate-buffered saline (PBS; Biosesang, Seongnam, Korea) to remove uncoupled dopamine hydrochloride. The resulting product was poured into Petri dishes in a thin layer and freeze-dried. The synthesis yield of HA-CA conjugate was about 80%. The substitution degree of catechol group to HA backbone was 8.8%, which was determined by measuring the absorbance of HA-CA solution at 280 nm wavelength using an ultraviolet-visible (UV-vis) light spectrophotometer (JASCO Corporation, Tokyo, Japan). To form a hydrogel, lyophilized HA-CA conjugate was dissolved in neutral PBS (Sigma) and mixed with oxidizing solution containing 4.5 mg/mL sodium periodate (NaIO4; Sigma) and 0.4 M sodium hydroxide (NaOH; Sigma).\n
4.2. ADSC Isolation and Characterization Human ADSCs were isolated from three patients who underwent augmentation mammoplasty—the patients did not have inflammation or cancer. This study was approved by Seoul National University Bundang Hospital Institutional Review Board and Ethics Committee and conducted in accordance with the guidelines of the 1975 Declaration of Helsinki (IRB No. B-1702/381-301). Briefly, adipose tissue was washed with PBS and cut into smaller pieces. Enzymatic digestion was performed using 0.075% collagenase type I (Sigma-Aldrich) in a humidified 5% CO2 incubator for 1 h at 37 °C. After neutralization, samples were centrifuged, and supernatants were passed through a 100 µm cell strainer (BD Biosciences, Bedford, MA, USA). The cells were transferred into cell culture flasks with Dulbecco’s modified Eagle’s medium (Welgene, Gyeongsan, Korea) supplemented with 10% fetal bovine serum (Gibco, Carlsbad, CA, USA), 100 U/mL penicillin, and 100 μg/mL streptomycin (Lonza, Walkersville, MD, USA), and cells were maintained at 37 °C in a humidified 5% CO2 incubator. The ADSCs were used between passages 4 and 6 for fluorescence-activated cell sorting and animal experiments. ADSCs were characterized by flow cytometry for the cell-surface markers CD31, CD34, CD44, CD45, CD90, and HLA-DR (BD Biosciences Pharmingen, San Jose, CA, USA). To track the transplanted ADSCs, they were labeled with PKH26 red fluorescent dye (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer’s instructions. Briefly, ADSCs were harvested and resuspended in 1 mL of diluent C solution. Then, 4 µL of PKH26 dye was added followed by incubation for 5 min. Fetal bovine serum (1 mL) was added for quenching for 2 min, followed by washing with PBS.\n
4.3. Diabetic Wound Animal Model C57BL/6 male mice (7 weeks old, 23–26 g) were purchased from ORIENT BIO (Seongnam, Korea) and maintained according to the Association for Assessment and Accreditation of Laboratory Animal Care International system. All animal experiments conformed to the International Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Seoul National University Bundang Hospital (IACUC No. BA1710-234/090-01).\nForty C57BL/6 mice were equally divided into four groups: control diabetic wound (DBW) group, ADSCs group, HA-CA group, and HA-CA + ADSCs group. Diabetes induction was performed by intraperitoneal injection of streptozotocin (STZ; Sigma-Aldrich) at a dose of 150 mg/kg dissolved in citrate buffer (pH 5.5) [37,38]. Blood was drawn from the tail vein, and the glucose level was determined using a glucometer (Accu-Check Performa; Roche, Pleasanton, CA, USA). The blood glucose level and body weights were measured every 3 days. Mice with blood glucose levels >250 mg/dL were considered diabetic. After 4 weeks of STZ administration, mice were anesthetized with 2% isoflurane inhalation. Excisional biopsy wounds on the shaved dorsal regions of the midline extending through the panniculus carnosus were made using a 6 mm punch. ADSCs (5 × 105 cells/100 µL) labeled with PKH26 (Sigma-Aldrich) were transplanted into healthy tissue at the wound boundary, while HA-CA patches were injected at the wound site. The HA-CA patch (6 mm) was placed on the DBW (Figure 7). Wound areas were photographed on days 1, 3, 5, 7, 14, and 21 after ADSC and HA-CA transplantation. We identified the wound margins as whitish, dry, membrane-like structures, and measured the surface area using ImageJ software (version 1.51j8; National Institutes of Health, Bethesda, MD, USA). Changes in wound area over time were expressed as the percentage of the initial wound area.\n
4.4. Histopathological Assessment DBW tissues were fixed in 10% formalin and embedded in paraffin. The tissues were routinely processed and cut into sections 4–5 μm thick. The sections were deparaffinized in xylene at room temperature and stained with hematoxylin and eosin (Cancer Diagnostics Inc., Durham, NC, USA) according to the manufacturer’s instructions. Masson’s trichrome staining (BBC Biochemical, Mount Vernon, WA, USA) was performed in accordance with the manufacturer’s protocol. Briefly, deparaffinized sections were fixed in Bouin’s solution for 1 h at 56 °C and stained with ClearView Iron Hematoxylin working solution for 10 min. Subsequently, tissues were stained with Biebrich scarlet-acid fuchsin solution (2 min), phosphomolybdic-phosphotungstic acid solution (10 min), aniline blue solution (3 min), and 1% acetic acid solution (2 min). ECM, collagen, and other connective tissue elements were stained blue and smooth muscles were stained red. DBW tissue sections were imaged (×100 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH, Jena, Germany).\n
4.5. Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Staining In situ detection of apoptosis was performed by labeling DNA strand breaks in tissue sections using a TUNEL staining kit (Roche Diagnostics, Penzberg, Germany). Briefly, DBW tissue sections were pretreated with proteinase K (20 μg/mL) at 37 °C for 30 min and immediately washed. Subsequently, the DBW tissue sections were incubated with the TUNEL reaction mixture at 37 °C for 60 min. After mounting, sections were imaged under a fluorescence microscope (×200 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH).\n
4.6. Immunohistochemistry Immunohistochemistry was performed using a GBI Polink-2 HRP kit (Golden Bridge International Inc., Bothell, WA, USA). Briefly, the sections were deparaffinized in xylene at room temperature and rehydrated in a graded series of ethanol. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc., West Logan, UT, USA), tissues were incubated in peroxidase blocking reagent for 15 min at room temperature. Subsequently, tissues were incubated with anti-von Willebrand factor (vWF; EMD Millipore, Temecula, CA, USA) and anti-CD31 (Thermo Fisher Scientific, Waltham, MA, USA) primary antibodies (1:50) for 90 min at room temperature, followed by incubation with diaminobenzidine chromogen. After dehydration, the sections were mounted with Histomount (National Diagnostics, Atlanta, GA, USA) and imaged (×400 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH).\n
4.7. RNA Isolation and qRT-PCR RNA was isolated and purified using an RNeasy Plus Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer’s instructions. cDNA synthesis was performed using a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific). Primers for qRT-PCR were obtained from Cosmogenetech (Seoul, Korea), and the primer sequences are shown in Table 1. Reactions were prepared using Power SYBR Green PCR Master Mix (Applied Biosystems, Foster City, CA, USA) according to the manufacturer’s instructions, and were run on a ViiA 7 Real-Time PCR System (Life Technologies Corporation, Carlsbad, CA, USA) using the following cycling conditions: one cycle of denaturation at 95 °C/10 min, followed by 40 two-segment amplification cycles (95 °C/10 min, 60 °C/1 min). All reactions were performed in triplicate.\n
4.8. Immunofluorescence Analysis Skin sections were deparaffinized in xylene and rehydrated in a graded ethanol series. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc.), sections were incubated with 3% bovine serum albumin blocking reagent for 10 min at room temperature. After blocking, sections were incubated with a primary antibody against VEGF (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) followed by incubation with Alexa Fluor 488 anti-rabbit secondary antibody (Biolegend, San Diego, CA, USA). The sections were mounted with 4′,6-diamidino-2-phenylindole (DAPI)-containing mounting medium (Vector Laboratories Inc., Burlingame, CA, USA) and observed under an inverted microscope (Axio Observer 7; Carl Zeiss Microscopy GmbH). VEGF-positive cells were counted manually in five microscopic fields in each stained sample, and the mean value was used for statistical analyses.\n
4.9. Protein Extraction and Western Blotting Proteins were extracted from formalin-fixed paraffin-embedded (FFPE) samples using a Qproteome FFPE Tissue Kit (QIAGEN) according to the manufacturer’s instructions. The protein concentration was determined using Bio-Rad assay reagent (Bio-Rad, Hercules, CA, USA). Briefly, samples with equal concentrations of protein were mixed with 4× sample buffer (GenDEPOT Inc., Barker, TX, USA), heated at 95 °C for 10 min, and separated by 10% sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE). Proteins were then transferred onto polyvinylidene difluoride (PVDF) membranes (Amersham Life Science, Arlington Heights, IL, USA) in Tris-glycine transfer buffer (Invitrogen, Carlsbad, CA, USA). The membranes were blocked for 1 h at room temperature with 5% skim milk in Tris-buffered saline with Tween-20. The membranes were incubated at 4 °C overnight with anti-VEGFA (Abcam, Cambridge, UK), anti-FGF-2 (Santa Cruz Biotechnology, Inc.), anti-AKT1 (Santa Cruz Biotechnology, Inc.), anti-p-AKT1 (Santa Cruz Biotechnology, Inc.), anti-ERK1/2 (Santa Cruz Biotechnology, Inc.), anti-p-ERK (Santa Cruz Biotechnology, Inc.), or anti-β-actin (Santa Cruz Biotechnology, Inc.) primary antibodies, followed by incubation with horseradish peroxidase-conjugated anti-mouse IgG (Cell Signaling Technology, Danvers, MA, USA) or anti-rabbit IgG (Cell Signaling Technology) secondary antibodies as appropriate for 1 h at room temperature. The membranes were washed and then incubated using a West-Q Chemiluminescent Substrate Plus kit (GenDEPOT Inc.). The intensities of the protein bands were determined using Multi-Gauge software (version 3.0; Fuji Photo Film, Tokyo, Japan), and relative densities were expressed as ratios of control values. All reactions were performed in duplicate.\n
4.10. Statistical Analysis Quantitative data were expressed as the mean ± standard deviation. Differences between groups were evaluated by one-way analysis of variance (ANOVA) followed by Dunn’s multiple comparison post hoc test.
All statistical analyses were performed using PRISM v.5.01 (GraphPad Software, Inc., La Jolla, CA, USA) and p < 0.05 was taken to indicate statistical significance.", "The HA-CA patch was synthesized by conjugating dopamine hydrochloride (Sigma-Aldrich, St. Louis, MO, USA) to a 200 kDa HA backbone (Lifecore Biomedical, Chaska, MN, USA), as previously described [16,36]. Briefly, HA was dissolved in distilled water at a concentration of 1% (w/v). Subsequently, 1-ethyl-3-(3-dimethyl aminopropyl) carbodiimide (EDC; TCI Co., Japan) and N-hydroxysulfosuccinimide (NHS; Sigma-Aldrich) were added to HA solution at an HA:EDC:NHS molar ratio of 1:1.5:1.5 (pH 5.0). Dopamine hydrochloride was added to the solution at an HA:dopamine hydrochloride molar ratio of 1:1.5, and the solution was incubated for 12 h at room temperature. The solution was then dialyzed using a membrane (Cellu Sep T2, MW cut-off 6–8 kDa; Membrane Filtration Products Inc., Seguin, TX, USA) against pH 5.0 phosphate-buffered saline (PBS; Biosesang, Seongnam, Korea) to remove uncoupled dopamine hydrochloride. The resulting product was poured into Petri dishes in a thin layer and freeze-dried. The synthesis yield of HA-CA conjugate was about 80%. The substitution degree of catechol group to HA backbone was 8.8%, which was determined by measuring the absorbance of HA-CA solution at 280 nm wavelength using an ultraviolet-visible (UV-vis) light spectrophotometer (JASCO Corporation, Tokyo, Japan). To form a hydrogel, lyophilized HA-CA conjugate was dissolved in neutral PBS (Sigma) and mixed with oxidizing solution containing 4.5 mg/mL sodium periodate (NaIO4; Sigma) and 0.4 M sodium hydroxide (NaOH; Sigma).", "Human ADSCs were isolated from three patients who underwent augmentation mammoplasty—the patients did not have inflammation or cancer. This study was approved by Seoul National University Bundang Hospital Institutional Review Board and Ethics Committee and conducted in accordance with the guidelines of the 1975 Declaration of Helsinki (IRB No. B-1702/381-301). Briefly, adipose tissue was washed with PBS and cut into smaller pieces. Enzymatic digestion was performed using 0.075% collagenase type I (Sigma-Aldrich) in a humidified 5% CO2 incubator for 1 h at 37 °C. After neutralization, samples were centrifuged, and supernatants were passed through a 100 µm cell strainer (BD Biosciences, Bedford, MA, USA). The cells were transferred into cell culture flasks with Dulbecco’s modified Eagle’s medium (Welgene, Gyeongsan, Korea) supplemented with 10% fetal bovine serum (Gibco, Carlsbad, CA, USA), 100 U/mL penicillin, and 100 μg/mL streptomycin (Lonza, Walkersville, MD, USA), and cells were maintained at 37 °C in a humidified 5% CO2 incubator. The ADSCs were used between passages 4 and 6 for fluorescence-activated cell sorting and animal experiments. ADSCs were characterized by flow cytometry for the cell-surface markers CD31, CD34, CD44, CD45, CD90, and HLA-DR (BD Biosciences Pharmingen, San Jose, CA, USA). To track the transplanted ADSCs, they were labeled with PKH26 red fluorescent dye (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer’s instructions. Briefly, ADSCs were harvested and resuspended in 1 mL of diluent C solution. Then, 4 µL of PKH26 dye was added followed by incubation for 5 min. 
Fetal bovine serum (1 mL) was added for quenching for 2 min, followed by washing with PBS.", "C57BL/6 male mice (7 weeks old, 23–26 g) were purchased from ORIENT BIO (Seongnam, Korea) and maintained according to the Association for Assessment and Accreditation of Laboratory Animal Care International system. All animal experiments conformed to the International Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Seoul National University Bundang Hospital (IACUC No. BA1710-234/090-01).\nForty C57BL/6 mice were equally divided into four groups: control diabetic wound (DBW) group, ADSCs group, HA-CA group, and HA-CA + ADSCs group. Diabetes induction was performed by intraperitoneal injection of streptozotocin (STZ; Sigma-Aldrich) at a dose of 150 mg/kg dissolved in citrate buffer (pH 5.5) [37,38]. Blood was drawn from the tail vein, and the glucose level was determined using a glucometer (Accu-Check Performa; Roche, Pleasanton, CA, USA). The blood glucose level and body weights were measured every 3 days. Mice with blood glucose levels >250 mg/dL were considered diabetic. After 4 weeks of STZ administration, mice were anesthetized with 2% isoflurane inhalation. Excisional biopsy wounds on the shaved dorsal regions of the midline extending through the panniculus carnosus were made using a 6 mm punch. ADSCs (5 × 105 cells/100 µL) labeled with PKH26 (Sigma-Aldrich) were transplanted into healthy tissue at the wound boundary, while HA-CA patches were injected at the wound site. The HA-CA patch (6 mm) was placed on the DBW (Figure 7). Wound areas were photographed on days 1, 3, 5, 7, 14, and 21 after ADSC and HA-CA transplantation. We identified the wound margins as whitish, dry, membrane-like structures, and measured the surface area using ImageJ software (version 1.51j8; National Institutes of Health, Bethesda, MD, USA). Changes in wound area over time were expressed as the percentage of the initial wound area.", "DBW tissues were fixed in 10% formalin and embedded in paraffin. The tissues were routinely processed and cut into sections 4–5 μm thick. The sections were deparaffinized in xylene at room temperature and stained with hematoxylin and eosin (Cancer Diagnostics Inc., Durham, NC, USA) according to the manufacturer’s instructions. Masson’s trichrome staining (BBC Biochemical, Mount Vernon, WA, USA) was performed in accordance with the manufacturer’s protocol. Briefly, deparaffinized sections were fixed in Bouin’s solution for 1 h at 56 °C and stained with ClearView Iron Hematoxylin working solution for 10 min. Subsequently, tissues were stained with Biebrich scarlet-acid fuchsin solution (2 min), phosphomolybdic-phosphotungstic acid solution (10 min), aniline blue solution (3 min), and 1% acetic acid solution (2 min). ECM, collagen, and other connective tissue elements were stained blue and smooth muscles were stained red. DBW tissue sections were imaged (×100 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH, Jena, Germany).", "In situ detection of apoptosis was performed by labeling DNA strand breaks in tissue sections using a TUNEL staining kit (Roche Diagnostics, Penzberg, Germany). Briefly, DBW tissue sections were pretreated with proteinase K (20 μg/mL) at 37 °C for 30 min and immediately washed. Subsequently, the DBW tissue sections were incubated with the TUNEL reaction mixture at 37 °C for 60 min. 
After mounting, sections were imaged under a fluorescence microscope (×200 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH).", "Immunohistochemistry was performed using a GBI Polink-2 HRP kit (Golden Bridge International Inc., Bothell, WA, USA). Briefly, the sections were deparaffinized in xylene at room temperature and rehydrated in a graded series of ethanol. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc., West Logan, UT, USA), tissues were incubated in peroxidase blocking reagent for 15 min at room temperature. Subsequently, tissues were incubated with anti-von Willebrand factor (vWF; EMD Millipore, Temecula, CA, USA) and anti-CD31 (Thermo Fisher Scientific, Waltham, MA, USA) primary antibodies (1:50) for 90 min at room temperature, followed by incubation with diaminobenzidine chromogen. After dehydration, the sections were mounted with Histomount (National Diagnostics, Atlanta, GA, USA) and imaged (×400 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH).", "RNA was isolated and purified using an RNeasy Plus Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer’s instructions. cDNA synthesis was performed using a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific). Primers for qRT-PCR were obtained from Cosmogenetech (Seoul, Korea), and the primer sequences are shown in Table 1. Reactions were prepared using Power SYBR Green PCR Master Mix (Applied Biosystems, Foster City, CA, USA) according to the manufacturer’s instructions, and were run on a ViiA 7 Real-Time PCR System (Life Technologies Corporation, Carlsbad, CA, USA) using the following cycling conditions: one cycle of denaturation at 95 °C/10 min, followed by 40 two-segment amplification cycles (95 °C/10 min, 60 °C/1 min). All reactions were performed in triplicate.", "Skin sections were deparaffinized in xylene and rehydrated in a graded ethanol series. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc.), sections were incubated with 3% bovine serum albumin blocking reagent for 10 min at room temperature. After blocking, sections were incubated with a primary antibody against VEGF (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) followed by incubation with Alexa Fluor 488 anti-rabbit secondary antibody (Biolegend, San Diego, CA, USA). The sections were mounted with 4′,6-diamidino-2-phenylindole (DAPI)-containing mounting medium (Vector Laboratories Inc., Burlingame, CA, USA) and observed under an inverted microscope (Axio Observer 7; Carl Zeiss Microscopy GmbH). VEGF-positive cells were counted manually in five microscopic fields in each stained sample, and the mean value was used for statistical analyses.", "Proteins were extracted from formalin-fixed paraffin-embedded (FFPE) samples using a Qproteome FFPE Tissue Kit (QIAGEN) according to the manufacturer’s instructions. The protein concentration was determined using Bio-Rad assay reagent (Bio-Rad, Hercules, CA, USA). Briefly, samples with equal concentrations of protein were mixed with 4× sample buffer (GenDEPOT Inc., Barker, TX, USA), heated at 95 °C for 10 min, and separated by 10% sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE). Proteins were then transferred onto polyvinylidene difluoride (PVDF) membranes (Amersham Life Science, Arlington Heights, IL, USA) in Tris-glycine transfer buffer (Invitrogen, Carlsbad, CA, USA). 
The membranes were blocked for 1 h at room temperature with 5% skim milk in Tris-buffered saline with Tween-20. The membranes were incubated at 4 °C overnight with anti-VEGFA (Abcam, Cambridge, UK), anti-FGF-2 (Santa Cruz Biotechnology, Inc.), anti-AKT1 (Santa Cruz Biotechnology, Inc.), anti-p-AKT1 (Santa Cruz Biotechnology, Inc.), anti-ERK1/2 (Santa Cruz Biotechnology, Inc.), anti-p-ERK (Santa Cruz Biotechnology, Inc.), or anti-β-actin (Santa Cruz Biotechnology, Inc.) primary antibodies, followed by incubation with horseradish peroxidase-conjugated anti-mouse IgG (Cell Signaling Technology, Danvers, MA, USA) or anti-rabbit IgG (Cell Signaling Technology) secondary antibodies as appropriate for 1 h at room temperature. The membranes were washed and then incubated using a West-Q Chemiluminescent Substrate Plus kit (GenDEPOT Inc.). The intensities of the protein bands were determined using Multi-Gauge software (version 3.0; Fuji Photo Film, Tokyo, Japan), and relative densities were expressed as ratios of control values. All reactions were performed in duplicate.", "Quantitative data were expressed as the mean ± standard deviation. Differences between groups were evaluated by one-way analysis of variance (ANOVA) followed by Dunn’s multiple comparison post hoc test. All statistical analyses were performed using PRISM v.5.01 (GraphPad Software, Inc., La Jolla, CA, USA) and p < 0.05 was taken to indicate statistical significance." ]
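Section 4.10 above describes the group comparisons as one-way ANOVA followed by Dunn's multiple comparison post hoc test, run in GraphPad Prism. The sketch below is an illustrative Python stand-in only, using scipy and the third-party scikit-posthocs package; all group values are hypothetical and are not the study's data.

```python
# Illustrative sketch only: one-way ANOVA across the four groups, then Dunn's pairwise
# post hoc comparisons via the third-party scikit-posthocs package. Hypothetical values.
from scipy.stats import f_oneway
import scikit_posthocs as sp

groups = {
    "DBW":           [14, 13, 16, 12, 15],
    "ADSCs":         [25, 23, 27, 24, 26],
    "HA-CA":         [52, 49, 55, 50, 54],
    "HA-CA + ADSCs": [73, 70, 78, 71, 75],
}

# Overall difference among the four groups
f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# Pairwise Dunn's post hoc comparisons (Bonferroni-adjusted); rows/columns follow
# the order of the groups passed in.
print(sp.posthoc_dunn(list(groups.values()), p_adjust="bonferroni"))
```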
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
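The western blotting methods above state that band intensities from duplicate blots were quantified in Multi-Gauge and expressed as ratios of control values. A minimal, hypothetical Python sketch of that normalization step is given below; the numbers are invented for illustration and are not the reported densitometry data.

```python
# Illustrative sketch (hypothetical numbers, not the authors' code): average the duplicate
# band intensities and express the result as a ratio of the control (DBW) value, mirroring
# the western blot quantification described above.
from statistics import mean

def relative_density(treatment_duplicates, control_duplicates):
    """Mean band intensity of a treatment group relative to the control group."""
    return mean(treatment_duplicates) / mean(control_duplicates)

# Example: hypothetical VEGFA band intensities (arbitrary units) for HA-CA + ADSCs vs. DBW
print(round(relative_density([91.0, 90.5], [49.0, 48.2]), 2))
```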
[ "1. Introduction", "2. Results", "2.1. Wound Area Measurement", "2.2. PKH26-Labeled ADSC Tracing", "2.3. Histopathological Assessment", "2.4. Tissue Neovascularization", "2.5. Angiogenesis-Related Gene Expression Profile", "2.6. VEGF Expression Analysis by Immunofluorescence", "2.7. Angiogenesis-Related Protein Expression", "3. Discussion", "4. Materials and Methods", "4.1. HA-CA Synthesis", "4.2. ADSC Isolation and Characterization", "4.3. Diabetic Wound Animal Model", "4.4. Histopathological Assessment", "4.5. Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Staining", "4.6. Immunohistochemistry", "4.7. RNA Isolation and qRT-PCR", "4.8. Immunofluorescence Analysis", "4.9. Protein Extraction and Western Blotting", "4.10. Statistical Analysis" ]
[ "Diabetic wound (DBW) is a broad term describing various pathological conditions that manifest as wounds or ulcers associated with diabetes. Diabetic foot ulcers and other DBWs are common complications in diabetes, occurring in approximately 20% of diabetic patients [1]. DBWs have been associated with hyperglycemia, which often results in diabetic peripheral neuropathy and blockage of peripheral blood vessels, ultimately leading to diabetic foot ulcers [2]. Patients with DBWs often progress rapidly, making treatment challenging and potentially resulting in lower extremity amputation. In addition to diabetic peripheral neuropathy and peripheral vascular disease, several other risk factors of DBW have been identified, including limited joint mobility and foot deformities [2,3]. Chronic inflammation is an important barrier in the treatment of DBW. Inflammation plays a crucial role in wound healing; in DBW, inflammatory responses are delayed, and the wound does not heal. In addition, the balance between collagen production and degradation is disrupted, further impairing the wound healing process [4]. The clinical management of DBW typically involves wound dressing and debridement of necrotic tissues. Numerous recent studies have indicated that modulating extracellular matrix (ECM) synthesis, growth factor release, and vascularization-targeting approaches might be useful in the treatment of DBW.\nMany clinical trials with mesenchymal stem cells (MSCs) are currently in progress as they are readily accessible and safer than other types of stem cells. MSCs represent a stable source for experiments or clinical treatment because they can be obtained from various tissues [5]. MSCs also exhibit multilineage differentiation potential and exert various immunomodulatory and paracrine effects [6]. Adipose-derived stem cells (ADSCs), which can be isolated easily and in large numbers from adipose tissue through liposuction surgery and have been tested for various clinical applications, exhibit long-term growth in vitro and can differentiate into various cell types upon induction [7]. Moreover, ADSCs are known to exert paracrine effects that promote tissue regeneration [6].\nHyaluronic acid (HA) is a polysaccharide found in various tissues of the human body, including the joints, cartilage, eyes, and skin [8,9]. HA plays a key role in wound healing by regulating cell proliferation, migration, and differentiation, as well as ECM organization and metabolism. High molecular weight HA inhibits the proliferation and migration of most cell types, while low molecular weight forms of HA (<300 kDa) promote cell proliferation and display angiogenic properties [10,11,12]. HA in the molecular weight range of 150–250 kDa has been shown to have beneficial effects on wound healing by enhancing cell–HA interactions through cell-surface receptors for HA, which activates signal transduction pathways essential for cellular migration and proliferation [13,14]. Therefore, HA has been proposed as a biomaterial for DBW treatment [8]. HA hydrogels are widely used in numerous biomedical and pharmaceutical applications due to their inherent biocompatibility, matrix structure similarity, and drug delivery capabilities [15]. However, the application of injectable hydrogels to treat human diseases remains limited. Furthermore, the incorporation of stem cells or growth factors into hydrogels remains technically challenging. 
Catechol-modified HA (HA-CA) hydrogels have higher biocompatibility, better tissue adhesion properties, and provide improved stem cell survival and functionality compared to conventional HA hydrogels [16]. However, their potential use to treat DBW remains largely unexplored.\nThe purpose of this study was to assess the therapeutic effects of an HA-CA patch combined with ADSCs in the treatment of DBW. In particular, we evaluated their effects on tissue regeneration and angiogenesis using a DBW mouse model.", "2.1. Wound Area Measurement The wounds were photographed on days 1, 3, 5, 7, 14, and 21, and the wound areas were quantified using ImageJ software. The change in wound size over time was calculated as the percentage of wound closure for each treatment group compared to the initial area of the wound and normalized relative to the initial area. Visual inspection indicated a decrease in wound size. Compared with control mice, the wound closure rate on day 3 was significantly higher in the treatment groups (control DBW group, 105% ± 18% vs. ADSCs group, 82% ± 15%, HA-CA group, 71% ± 11% **, HA-CA + ADSCs group, 76% ± 10% *; * p < 0.05, ** p < 0.01). At 14 days, remarkable wound healing was observed in the HA-CA + ADSCs group (control DBW group 59% ± 8% vs. HA-CA + ADSCs group 19% ± 4%; p < 0.01; Figure 1A,B). Although the wounds appeared to have healed in all groups on day 21, the wound size was significantly smaller in the HA-CA + ADSCs group than in the control and ADSCs groups (control DBW group, 16% ± 5% **, ADSCs group, 24% ± 17% * vs. HA-CA + ADSCs group, 8% ± 2%; * p < 0.05, ** p < 0.01; Figure 1A,B).\nThe wounds were photographed on days 1, 3, 5, 7, 14, and 21, and the wound areas were quantified using ImageJ software. The change in wound size over time was calculated as the percentage of wound closure for each treatment group compared to the initial area of the wound and normalized relative to the initial area. Visual inspection indicated a decrease in wound size. Compared with control mice, the wound closure rate on day 3 was significantly higher in the treatment groups (control DBW group, 105% ± 18% vs. ADSCs group, 82% ± 15%, HA-CA group, 71% ± 11% **, HA-CA + ADSCs group, 76% ± 10% *; * p < 0.05, ** p < 0.01). At 14 days, remarkable wound healing was observed in the HA-CA + ADSCs group (control DBW group 59% ± 8% vs. HA-CA + ADSCs group 19% ± 4%; p < 0.01; Figure 1A,B). Although the wounds appeared to have healed in all groups on day 21, the wound size was significantly smaller in the HA-CA + ADSCs group than in the control and ADSCs groups (control DBW group, 16% ± 5% **, ADSCs group, 24% ± 17% * vs. HA-CA + ADSCs group, 8% ± 2%; * p < 0.05, ** p < 0.01; Figure 1A,B).\n2.2. PKH26-Labeled ADSC Tracing PKH26-labeled ADSCs (5 × 105 cells/100 µL) were injected into healthy subcutaneous tissues at the wound boundary. In the HA-CA + ADSCs group, PKH26-labeled ADSCs were transplanted with the HA-CA patch at the wound site. The mice were sacrificed on day 14 for PKH26-labeled ADSC tracking. Although most ADSCs migrated from the injection site to the wound, only a few ADSCs were observed in the ADSC injection group. In contrast to the ADSC group, ADSCs were detected in the epidermis, papillary dermis, and reticular dermis at the wound site in the HA-CA + ADSCs group (Figure 1C).\nPKH26-labeled ADSCs (5 × 105 cells/100 µL) were injected into healthy subcutaneous tissues at the wound boundary. 
In the HA-CA + ADSCs group, PKH26-labeled ADSCs were transplanted with the HA-CA patch at the wound site. The mice were sacrificed on day 14 for PKH26-labeled ADSC tracking. Although most ADSCs migrated from the injection site to the wound, only a few ADSCs were observed in the ADSC injection group. In contrast to the ADSC group, ADSCs were detected in the epidermis, papillary dermis, and reticular dermis at the wound site in the HA-CA + ADSCs group (Figure 1C).\n2.3. Histopathological Assessment Wounds were stained with hematoxylin and eosin on postoperative day (POD) 21 for histological observation. Complete re-epithelialization was observed in all groups; however, there were significant differences in epidermis thickness and skin appendages among the groups. In the DBW group, the epidermis consisted of one to three thin epithelial cell layers, and the extent of tissue regeneration at the dermis was low. In the ADSCs, HA-CA, and HA-CA + ADSCs groups, the epidermis was completely regenerated, and epidermal appendages, including sweat glands and hair follicles, were observed. Notably, neovascular-like structures were observed in the dermis layer (Figure 2A). Masson’s trichrome staining revealed marked granulation tissue formation in the HA-CA, ADSCs, and HA-CA + ADSCs groups. Furthermore, these groups showed increased proportions of muscle cells and extensive dermis tissue remodeling (Figure 2B). Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive cells were counted manually in microscopic fields (DBW group, 19 ± 2; ADSCs group, 25 ± 4; HA-CA group, 21 ± 4; HA-CA + ADSCs group, 21 ± 3; Figure 2F). The level of apoptosis was not significantly different among the groups (Figure 2C).\nIn comparison with the DBW group, the skin was significantly thicker in the HA-CA and HA-CA + ADSCs groups (DBW group, 468 ± 34; ADSCs group, 555 ± 52; HA-CA group, 815 ± 90 *; HA-CA + ADSCs group, 900 ± 102 μm *, * p < 0.05). The epidermis was thickest in the HA-CA + ADSCs group (DBW group, 77 ± 4; ADSCs group, 104 ± 16; HA-CA group, 107 ± 17; HA-CA + ADSCs group, 122 ± 7 μm *, * p < 0.05; Figure 2D,E).\nWounds were stained with hematoxylin and eosin on postoperative day (POD) 21 for histological observation. Complete re-epithelialization was observed in all groups; however, there were significant differences in epidermis thickness and skin appendages among the groups. In the DBW group, the epidermis consisted of one to three thin epithelial cell layers, and the extent of tissue regeneration at the dermis was low. In the ADSCs, HA-CA, and HA-CA + ADSCs groups, the epidermis was completely regenerated, and epidermal appendages, including sweat glands and hair follicles, were observed. Notably, neovascular-like structures were observed in the dermis layer (Figure 2A). Masson’s trichrome staining revealed marked granulation tissue formation in the HA-CA, ADSCs, and HA-CA + ADSCs groups. Furthermore, these groups showed increased proportions of muscle cells and extensive dermis tissue remodeling (Figure 2B). Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive cells were counted manually in microscopic fields (DBW group, 19 ± 2; ADSCs group, 25 ± 4; HA-CA group, 21 ± 4; HA-CA + ADSCs group, 21 ± 3; Figure 2F). 
The level of apoptosis was not significantly different among the groups (Figure 2C).\nIn comparison with the DBW group, the skin was significantly thicker in the HA-CA and HA-CA + ADSCs groups (DBW group, 468 ± 34; ADSCs group, 555 ± 52; HA-CA group, 815 ± 90 *; HA-CA + ADSCs group, 900 ± 102 μm *, * p < 0.05). The epidermis was thickest in the HA-CA + ADSCs group (DBW group, 77 ± 4; ADSCs group, 104 ± 16; HA-CA group, 107 ± 17; HA-CA + ADSCs group, 122 ± 7 μm *, * p < 0.05; Figure 2D,E).\n2.4. Tissue Neovascularization To examine wound neovascularization, we performed immunohistochemical staining for the endothelial cell-specific markers cluster of differentiation 31 (CD31) and von willebrand factor (vWF). Most of the CD31- and vWF-positive vessels were distributed in the lower middle part of the reticular layer of the dermis. CD31 is expressed in small vessels, mature blood vessels, and immature vascular structures. Most CD31-positive vessels were 10–15 μm in diameter and none exceeded 20 μm. Compared with the control DBW group, the number of CD31-positive vessels was higher in mice treated with HA-CA and HA-CA + ADSCs (control DBW group, 14 ± 2; ADSCs group, 25 ± 3; HA-CA group, 52 ± 5 *; HA-CA + ADSCs group, 73 ± 10 **; * p < 0.05, ** p < 0.01; Figure 3A,C). The vWF-positive vessels were larger than CD31-positive vessels and were 20–30 μm in diameter, with some exceeding 40 μm. Mice in the HA-CA + ADSCS group had the greatest number of vWF-positive vessels (control DBW group, 5 ± 1; ADSCs group, 6 ± 2; HA-CA group, 9 ± 1; HA-CA + ADSCs group, 16 ± 3 *; * p < 0.05; Figure 3B,D).\nTo examine wound neovascularization, we performed immunohistochemical staining for the endothelial cell-specific markers cluster of differentiation 31 (CD31) and von willebrand factor (vWF). Most of the CD31- and vWF-positive vessels were distributed in the lower middle part of the reticular layer of the dermis. CD31 is expressed in small vessels, mature blood vessels, and immature vascular structures. Most CD31-positive vessels were 10–15 μm in diameter and none exceeded 20 μm. Compared with the control DBW group, the number of CD31-positive vessels was higher in mice treated with HA-CA and HA-CA + ADSCs (control DBW group, 14 ± 2; ADSCs group, 25 ± 3; HA-CA group, 52 ± 5 *; HA-CA + ADSCs group, 73 ± 10 **; * p < 0.05, ** p < 0.01; Figure 3A,C). The vWF-positive vessels were larger than CD31-positive vessels and were 20–30 μm in diameter, with some exceeding 40 μm. Mice in the HA-CA + ADSCS group had the greatest number of vWF-positive vessels (control DBW group, 5 ± 1; ADSCs group, 6 ± 2; HA-CA group, 9 ± 1; HA-CA + ADSCs group, 16 ± 3 *; * p < 0.05; Figure 3B,D).\n2.5. Angiogenesis-Related Gene Expression Profile To confirm the effects of the different treatments on angiogenesis, we assessed the mRNA levels of various angiogenesis-associated genes, including those encoding vascular endothelial growth factor (VEGF), angiopoietin 1 (Ang-1), insulin-like growth factor 1 (IGF-1), fibroblast growth factor 2 (FGF-2), phosphoinositide 3-kinase (PI3K), protein kinase B (Akt), sprouty-related EVH1 domain-containing 1 (Spred1), and extracellular signal-regulated kinase (ERK). Mice treated with HA-CA + ADSCs had the highest VEGF, IGF-1, and FGF-2 mRNA levels. Ang-1 was upregulated in all treatment groups compared with the control group but was statistically significant only in the HA-CA + ADSCs group. 
2.5. Angiogenesis-Related Gene Expression Profile

To confirm the effects of the different treatments on angiogenesis, we assessed the mRNA levels of various angiogenesis-associated genes, including those encoding vascular endothelial growth factor (VEGF), angiopoietin 1 (Ang-1), insulin-like growth factor 1 (IGF-1), fibroblast growth factor 2 (FGF-2), phosphoinositide 3-kinase (PI3K), protein kinase B (Akt), sprouty-related EVH1 domain-containing 1 (Spred1), and extracellular signal-regulated kinase (ERK). Mice treated with HA-CA + ADSCs had the highest VEGF, IGF-1, and FGF-2 mRNA levels. Ang-1 was upregulated in all treatment groups compared with the control group, but the difference was statistically significant only in the HA-CA + ADSCs group. FGF-2 was expressed at higher levels in the HA-CA and HA-CA + ADSCs treatment groups than in the control group, whereas treatment with ADSCs alone had no effect on FGF-2 mRNA levels. Although PI3K and Akt were significantly upregulated in mice treated with HA-CA + ADSCs, no changes were observed in the Spred1 or ERK mRNA levels (Figure 4).

2.6. VEGF Expression Analysis by Immunofluorescence

To further confirm the effects of ADSCs and HA-CA on angiogenesis, we performed immunofluorescence staining for VEGF. Compared with the DBW group, in which the number of VEGF-expressing cells was low, the numbers of VEGF-positive cells were significantly higher in the ADSCs and HA-CA groups. In particular, mice treated with HA-CA + ADSCs had the greatest number of VEGF-expressing cells (Figure 4I). VEGF-positive cells were counted manually in microscopic fields and were most numerous in the HA-CA + ADSCs group (control DBW group, 15 ± 4; ADSCs group, 25 ± 6; HA-CA group, 28 ± 5; HA-CA + ADSCs group, 46 ± 5 **; ** p < 0.01; Figure 4J).
2.7. Angiogenesis-Related Protein Expression

We investigated the expression of various angiogenesis-associated proteins, including VEGFA, FGF-2, Akt, phospho-Akt, ERK1/2, and phospho-ERK1/2. VEGFA and FGF-2 expression was increased in the HA-CA and HA-CA + ADSCs treatment groups compared with the control DBW group (VEGFA: control DBW group, 49 ± 4; ADSCs group, 43 ± 14; HA-CA group, 70 ± 8; HA-CA + ADSCs group, 91 ± 2 *; * p < 0.05; Figure 5A). FGF-2 was increased in all groups compared with the controls, especially in the HA-CA group (control DBW group, 34 ± 8; ADSCs group, 54 ± 6; HA-CA group, 76 ± 3 **; HA-CA + ADSCs group, 52 ± 2; ** p < 0.01; Figure 5A). With regard to components of the ERK and PI3K signaling pathways, we examined ERK1/2 and AKT1 expression. Both the AKT and phospho-AKT (p-AKT) levels were increased in all treatment groups compared with the controls, and the difference was greatest in the HA-CA group (control DBW group, 20 ± 3; ADSCs group, 34 ± 5; HA-CA group, 56 ± 2 **; HA-CA + ADSCs group, 43 ± 2; ** p < 0.01; Figure 5B). Total ERK1/2 expression was similar in all groups, and phospho-ERK1/2 (p-ERK1/2) was barely detectable (control DBW group, 32 ± 3; ADSCs group, 28 ± 1; HA-CA group, 29 ± 2; HA-CA + ADSCs group, 27 ± 2; Figure 5B).

3. Discussion

Chronic hyperglycemia is the leading cause of vascular complications related to diabetes. Skin damage poses serious risks to diabetic patients due to the impaired healing abilities of the skin [17]. Diabetic foot ulcers are the most common complications in diabetic patients [18]. DBWs are caused by peripheral neuropathy, vascular dysfunction, and arterial occlusive disease due to persistent hyperglycemia [2,19]. Furthermore, limited joint mobility and foot deformities increase the risk of wounds and ulcers. In addition, hyperglycemia is linked to decreased white blood cell counts and macrophage function, and ischemic and neuropathic dysfunctions can lead to infections and delayed wound healing [18,20]. As current DBW management methods have various limitations, the present study was performed to assess the potential clinical usefulness of combining biomaterials with stem cells.

Several biomaterials and stem cell types have been proposed for the treatment of DBW, and some of these have yielded encouraging results [21,22,23]. Biocompatibility, safety, degradability, and mechanical properties are crucial considerations when developing biomaterials intended for therapeutic use [24]. Woo et al. [22] assessed the effects of silk fibroin chitosan film and ADSCs in a DBW rat model and found that the biomaterial provided a biocompatible scaffold that could be used for stem cell delivery.
Kanitkar et al. [25] reported that a polycaprolactone-gelatin nanofiber matrix exerted promising wound healing effects in a DBW mouse model.

Here, we investigated the effects of an HA-CA patch combined with ADSCs in a DBW mouse model. HA is a naturally occurring polysaccharide and a major component of the ECM. Due to its excellent viscoelastic properties and ability to promote cell migration, HA has emerged as a new therapeutic agent for use in regeneration and wound healing [26]. The diverse functions of HA have attracted considerable interest, a number of HA-derived biomaterials have been developed, and research on wound care using various biomaterials is ongoing [27,28]. In this study, 200 kDa HA was used and modified. To induce crosslinking of HA and increase its retention time at the injury site, a mussel adhesion-inspired catechol group was introduced into the HA backbone. The HA-CA patch maximizes the effects of cell therapy by keeping stem cells attached to the scaffold so that they are not lost upon transplantation. HA-CA patches provide excellent biocompatibility, tissue adhesion, and improved cell survival, maximizing the regenerative ability of stem cells [16]. In this study, we showed that although both HA-CA and HA-CA + ADSCs reduced wound size, the wound healing effects of HA-CA + ADSCs were more potent. We believe that this synergistic effect reflects the fact that the HA-CA patch improved the regenerative ability of the stem cells and maximized their paracrine effects.

The synergistic wound healing effects of the HA-CA patch and ADSCs were confirmed by histopathological analyses. The combination of HA-CA and ADSCs resulted in increased epithelialization, granulation tissue formation, and capillary formation. ADSCs alone or in combination with HA-CA promoted epidermis regeneration, neovascularization, and skin appendage development.

To confirm the effects of HA-CA and ADSCs on neovascularization in DBW, we stained tissues for CD31 and vWF. Although CD31-positive small vessels were observed in mice treated with ADSCs alone or in combination with HA-CA, the number of CD31-expressing vessels was significantly higher in the HA-CA + ADSCs group. Consistent with these observations, mice treated with HA-CA + ADSCs exhibited the greatest number of vWF-positive vessels. Although the therapeutic effects of stem cells in DBW have been reported previously [29], our findings suggest that ADSCs promote wound healing and neovascularization, and that their effects are augmented when combined with HA-CA.

ADSCs secrete various growth factors and cytokines, including VEGF, IGF-1, and FGF, promoting angiogenesis and tissue regeneration [16,30]. In this study, we showed that treatment with HA-CA combined with ADSCs significantly upregulated the expression of VEGF, Ang-1, IGF-1, and FGF-2. Importantly, mice treated with HA-CA + ADSCs exhibited the highest mRNA levels of VEGF and IGF-1. These results confirmed the synergistic neovascularization-promoting effects of HA-CA and ADSCs. Although the PI3K and Akt mRNA levels were significantly increased upon treatment with HA-CA + ADSCs, we found no changes in the expression levels of ERK and Spred1.

To confirm that the angiogenesis-related mRNA expression was consistent at the protein level, proteins were extracted from the paraffin blocks, and VEGF, FGF-2, AKT-1, and ERK expression was examined. The VEGFA and FGF-2 expression levels were increased in the HA-CA and HA-CA + ADSCs groups.
In particular, VEGFA showed the greatest increase in the HA-CA + ADSCs group, whereas FGF-2 showed the greatest increase in the HA-CA group. VEGFA is a potent inducer of angiogenesis and triggers most of the mechanisms involved in the activation and regulation of angiogenesis [31]. FGF-2 promotes epithelialization during skin wound healing and strongly activates fibroblasts as well as other mesoderm-derived cells, including vascular endothelial and smooth muscle cells, osteoblasts, and chondrocytes. In addition, it is known to play a prevalent role in epidermal defect wound models [32]. These differences likely reflect the gradual progression of wound healing, as the animals were sacrificed 3 weeks after HA-CA or ADSC transplantation: in the HA-CA + ADSCs group, angiogenesis was still progressing, whereas in the HA-CA group, dermal cell proliferation appears to have been the main contributor. Total ERK1/2 expression was detected in all groups, but there was little expression of p-ERK1/2 (activated ERK). Thus, angiogenesis was not mediated mainly through the mitogen-activated protein kinase (MAPK) pathway. The levels of AKT and p-AKT expression were increased in all treatment groups in comparison with the control DBW group, suggesting that angiogenesis was mediated through the PI3K/Akt pathway (Figure 6).

The PI3K/Akt pathway promotes endothelial cell proliferation, differentiation, and migration in response to tyrosine kinase growth factor receptors, including the IGF-1 receptor, receptor tyrosine kinases, and VEGFR [33,34]. Moreover, PI3K/Akt signaling increases Bcl-2 levels and decreases Bax levels, promoting cell survival [35]. The results of the present study indicate that the expression levels of various growth factors were higher in the HA-CA + ADSCs group than in the ADSCs-only or HA-CA-only treatment groups. Therefore, by upregulating various growth factors and upstream pathway regulators, HA-CA and ADSCs activate the PI3K/Akt pathway, promoting angiogenesis and tissue regeneration in DBW.

In conclusion, we investigated the effects of the HA-based biomaterial HA-CA and ADSCs in a mouse model of DBW. The combination of HA-CA and ADSCs showed synergistic wound healing effects via acceleration of tissue regeneration and angiogenesis. Additional clinical studies are warranted to confirm the clinical benefits of HA-CA combined with ADSCs in patients with DBW.

4. Materials and Methods

4.1. HA-CA Synthesis

The HA-CA patch was synthesized by conjugating dopamine hydrochloride (Sigma-Aldrich, St. Louis, MO, USA) to a 200 kDa HA backbone (Lifecore Biomedical, Chaska, MN, USA), as previously described [16,36]. Briefly, HA was dissolved in distilled water at a concentration of 1% (w/v). Subsequently, 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC; TCI Co., Japan) and N-hydroxysulfosuccinimide (NHS; Sigma-Aldrich) were added to the HA solution at an HA:EDC:NHS molar ratio of 1:1.5:1.5 (pH 5.0). Dopamine hydrochloride was added to the solution at an HA:dopamine hydrochloride molar ratio of 1:1.5, and the solution was incubated for 12 h at room temperature. The solution was then dialyzed using a membrane (Cellu Sep T2, MW cut-off 6–8 kDa; Membrane Filtration Products Inc., Seguin, TX, USA) against pH 5.0 phosphate-buffered saline (PBS; Biosesang, Seongnam, Korea) to remove uncoupled dopamine hydrochloride. The resulting product was poured into Petri dishes in a thin layer and freeze-dried. The synthesis yield of the HA-CA conjugate was about 80%. The degree of substitution of catechol groups on the HA backbone was 8.8%, as determined by measuring the absorbance of the HA-CA solution at 280 nm using an ultraviolet-visible (UV-vis) spectrophotometer (JASCO Corporation, Tokyo, Japan). To form a hydrogel, the lyophilized HA-CA conjugate was dissolved in neutral PBS (Sigma) and mixed with an oxidizing solution containing 4.5 mg/mL sodium periodate (NaIO4; Sigma) and 0.4 M sodium hydroxide (NaOH; Sigma).
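As a rough guide to the amounts implied by the molar ratios above, the sketch below computes reagent masses for a given mass of HA. It assumes the ratios are taken per HA disaccharide repeat unit (≈401 g/mol for sodium hyaluronate) and that EDC and NHS are used as their hydrochloride and sodium-salt forms, respectively; none of these conventions is stated explicitly in the text, so the numbers are illustrative only.

```python
# Hypothetical reagent calculator for the HA:EDC:NHS (1:1.5:1.5) and
# HA:dopamine-HCl (1:1.5) molar ratios described in Section 4.1.
# Assumption: ratios are per HA disaccharide repeat unit, the usual
# convention for carbodiimide modification of HA (not stated in the paper).

MW_HA_REPEAT = 401.3   # g/mol, sodium hyaluronate disaccharide unit (assumed)
MW_EDC_HCL = 191.7     # g/mol, EDC hydrochloride (assumed salt form)
MW_SULFO_NHS = 217.1   # g/mol, N-hydroxysulfosuccinimide sodium salt (assumed)
MW_DOPA_HCL = 189.6    # g/mol, dopamine hydrochloride

def reagent_masses(ha_mass_g: float) -> dict:
    """Return reagent masses (g) and water volume for a given mass of HA at 1% w/v."""
    mol_repeat = ha_mass_g / MW_HA_REPEAT
    return {
        "EDC (g)": 1.5 * mol_repeat * MW_EDC_HCL,
        "sulfo-NHS (g)": 1.5 * mol_repeat * MW_SULFO_NHS,
        "dopamine-HCl (g)": 1.5 * mol_repeat * MW_DOPA_HCL,
        "water for 1% w/v (mL)": ha_mass_g * 100,
    }

if __name__ == "__main__":
    for name, value in reagent_masses(1.0).items():  # e.g., 1 g of HA
        print(f"{name}: {value:.2f}")
```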
4.2. ADSC Isolation and Characterization

Human ADSCs were isolated from three patients who underwent augmentation mammoplasty; the patients did not have inflammation or cancer. This study was approved by the Seoul National University Bundang Hospital Institutional Review Board and Ethics Committee and conducted in accordance with the guidelines of the 1975 Declaration of Helsinki (IRB No. B-1702/381-301). Briefly, adipose tissue was washed with PBS and cut into smaller pieces. Enzymatic digestion was performed using 0.075% collagenase type I (Sigma-Aldrich) in a humidified 5% CO2 incubator for 1 h at 37 °C. After neutralization, samples were centrifuged, and supernatants were passed through a 100 µm cell strainer (BD Biosciences, Bedford, MA, USA). The cells were transferred into cell culture flasks with Dulbecco’s modified Eagle’s medium (Welgene, Gyeongsan, Korea) supplemented with 10% fetal bovine serum (Gibco, Carlsbad, CA, USA), 100 U/mL penicillin, and 100 μg/mL streptomycin (Lonza, Walkersville, MD, USA), and cells were maintained at 37 °C in a humidified 5% CO2 incubator. The ADSCs were used between passages 4 and 6 for fluorescence-activated cell sorting and animal experiments. ADSCs were characterized by flow cytometry for the cell-surface markers CD31, CD34, CD44, CD45, CD90, and HLA-DR (BD Biosciences Pharmingen, San Jose, CA, USA). To track the transplanted ADSCs, they were labeled with PKH26 red fluorescent dye (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer’s instructions. Briefly, ADSCs were harvested and resuspended in 1 mL of diluent C solution. Then, 4 µL of PKH26 dye was added, followed by incubation for 5 min. Fetal bovine serum (1 mL) was added for quenching for 2 min, followed by washing with PBS.

4.3. Diabetic Wound Animal Model

C57BL/6 male mice (7 weeks old, 23–26 g) were purchased from ORIENT BIO (Seongnam, Korea) and maintained according to the Association for Assessment and Accreditation of Laboratory Animal Care International system. All animal experiments conformed to the International Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Seoul National University Bundang Hospital (IACUC No. BA1710-234/090-01).

Forty C57BL/6 mice were equally divided into four groups: control diabetic wound (DBW) group, ADSCs group, HA-CA group, and HA-CA + ADSCs group. Diabetes was induced by intraperitoneal injection of streptozotocin (STZ; Sigma-Aldrich) at a dose of 150 mg/kg dissolved in citrate buffer (pH 5.5) [37,38]. Blood was drawn from the tail vein, and the glucose level was determined using a glucometer (Accu-Check Performa; Roche, Pleasanton, CA, USA). Blood glucose levels and body weights were measured every 3 days. Mice with blood glucose levels >250 mg/dL were considered diabetic. Four weeks after STZ administration, mice were anesthetized with 2% isoflurane inhalation. Excisional biopsy wounds extending through the panniculus carnosus were made on the shaved dorsal midline using a 6 mm punch. ADSCs (5 × 10⁵ cells/100 µL) labeled with PKH26 (Sigma-Aldrich) were transplanted into healthy tissue at the wound boundary, while HA-CA patches were applied at the wound site; the HA-CA patch (6 mm) was placed on the DBW (Figure 7). Wound areas were photographed on days 1, 3, 5, 7, 14, and 21 after ADSC and HA-CA transplantation. We identified the wound margins as whitish, dry, membrane-like structures and measured the surface area using ImageJ software (version 1.51j8; National Institutes of Health, Bethesda, MD, USA). Changes in wound area over time were expressed as the percentage of the initial wound area.
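The wound-size metric above is simply each ImageJ-measured area expressed as a percentage of the animal's initial wound area; a minimal sketch of that normalization is shown below with hypothetical area values.

```python
# Normalize a time series of ImageJ wound areas to the initial area (day 1 = 100%).
# The example areas are hypothetical placeholders, not measurements from the study.

def percent_of_initial(areas_mm2: list[float]) -> list[float]:
    """Express each wound area as a percentage of the first (initial) area."""
    initial = areas_mm2[0]
    return [100.0 * area / initial for area in areas_mm2]

if __name__ == "__main__":
    days = [1, 3, 5, 7, 14, 21]
    example_areas = [28.3, 22.1, 17.5, 12.0, 5.4, 2.2]  # hypothetical mm^2 values
    for day, pct in zip(days, percent_of_initial(example_areas)):
        print(f"day {day}: {pct:.1f}% of initial area")
```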
4.4. Histopathological Assessment

DBW tissues were fixed in 10% formalin and embedded in paraffin. The tissues were routinely processed and cut into sections 4–5 μm thick. The sections were deparaffinized in xylene at room temperature and stained with hematoxylin and eosin (Cancer Diagnostics Inc., Durham, NC, USA) according to the manufacturer’s instructions. Masson’s trichrome staining (BBC Biochemical, Mount Vernon, WA, USA) was performed in accordance with the manufacturer’s protocol. Briefly, deparaffinized sections were fixed in Bouin’s solution for 1 h at 56 °C and stained with ClearView Iron Hematoxylin working solution for 10 min. Subsequently, tissues were stained with Biebrich scarlet-acid fuchsin solution (2 min), phosphomolybdic-phosphotungstic acid solution (10 min), aniline blue solution (3 min), and 1% acetic acid solution (2 min). ECM, collagen, and other connective tissue elements were stained blue and smooth muscles were stained red. DBW tissue sections were imaged (×100 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH, Jena, Germany).

4.5. Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Staining

In situ detection of apoptosis was performed by labeling DNA strand breaks in tissue sections using a TUNEL staining kit (Roche Diagnostics, Penzberg, Germany). Briefly, DBW tissue sections were pretreated with proteinase K (20 μg/mL) at 37 °C for 30 min and immediately washed. Subsequently, the DBW tissue sections were incubated with the TUNEL reaction mixture at 37 °C for 60 min. After mounting, sections were imaged under a fluorescence microscope (×200 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH).

4.6. Immunohistochemistry

Immunohistochemistry was performed using a GBI Polink-2 HRP kit (Golden Bridge International Inc., Bothell, WA, USA). Briefly, the sections were deparaffinized in xylene at room temperature and rehydrated in a graded series of ethanol. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc., West Logan, UT, USA), tissues were incubated in peroxidase blocking reagent for 15 min at room temperature. Subsequently, tissues were incubated with anti-von Willebrand factor (vWF; EMD Millipore, Temecula, CA, USA) and anti-CD31 (Thermo Fisher Scientific, Waltham, MA, USA) primary antibodies (1:50) for 90 min at room temperature, followed by incubation with diaminobenzidine chromogen. After dehydration, the sections were mounted with Histomount (National Diagnostics, Atlanta, GA, USA) and imaged (×400 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH).
4.7. RNA Isolation and qRT-PCR

RNA was isolated and purified using an RNeasy Plus Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer’s instructions. cDNA synthesis was performed using a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific). Primers for qRT-PCR were obtained from Cosmogenetech (Seoul, Korea), and the primer sequences are shown in Table 1. Reactions were prepared using Power SYBR Green PCR Master Mix (Applied Biosystems, Foster City, CA, USA) according to the manufacturer’s instructions, and were run on a ViiA 7 Real-Time PCR System (Life Technologies Corporation, Carlsbad, CA, USA) using the following cycling conditions: one cycle of denaturation at 95 °C/10 min, followed by 40 two-segment amplification cycles (95 °C/10 min, 60 °C/1 min). All reactions were performed in triplicate.
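The text reports relative mRNA levels from SYBR Green qRT-PCR without spelling out the quantification model; the sketch below assumes the common 2^-ΔΔCt (Livak) approach with a housekeeping gene as reference, as a stand-in rather than the authors' stated method. All Ct values are hypothetical.

```python
# Relative expression by the 2^-ddCt method (assumed, not stated in the text).
# Ct values and the choice of housekeeping gene are placeholders.

def fold_change(ct_target_sample: float, ct_ref_sample: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene in a treated sample vs. the control group."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                  # compare to control group
    return 2.0 ** (-dd_ct)

if __name__ == "__main__":
    # Hypothetical Ct values: VEGF vs. a housekeeping gene,
    # HA-CA + ADSCs group compared with the control DBW group.
    print(f"VEGF fold change: {fold_change(22.1, 17.0, 24.6, 17.2):.2f}")
```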
4.8. Immunofluorescence Analysis

Skin sections were deparaffinized in xylene and rehydrated in a graded ethanol series. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc.), sections were incubated with 3% bovine serum albumin blocking reagent for 10 min at room temperature. After blocking, sections were incubated with a primary antibody against VEGF (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) followed by incubation with Alexa Fluor 488 anti-rabbit secondary antibody (Biolegend, San Diego, CA, USA). The sections were mounted with 4′,6-diamidino-2-phenylindole (DAPI)-containing mounting medium (Vector Laboratories Inc., Burlingame, CA, USA) and observed under an inverted microscope (Axio Observer 7; Carl Zeiss Microscopy GmbH). VEGF-positive cells were counted manually in five microscopic fields in each stained sample, and the mean value was used for statistical analyses.
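The per-sample averaging described above (five fields per stained sample, with the per-sample mean carried into the group statistics) can be summarized in a few lines; the counts used here are hypothetical placeholders, not data from the study.

```python
from statistics import mean, stdev

# Manual-count summary: average the five field counts of each sample, then
# report the group mean +/- standard deviation across samples.

def sample_mean(field_counts: list[int]) -> float:
    """Mean positive-cell count across the five fields of one sample."""
    return mean(field_counts)

def group_summary(per_sample_counts: list[list[int]]) -> tuple[float, float]:
    """Group mean and standard deviation across per-sample means."""
    means = [sample_mean(counts) for counts in per_sample_counts]
    return mean(means), stdev(means)

if __name__ == "__main__":
    # Hypothetical counts: each inner list is one sample's five fields.
    ha_ca_adscs = [[44, 47, 50, 43, 46], [48, 45, 49, 47, 44], [42, 46, 45, 48, 47]]
    m, sd = group_summary(ha_ca_adscs)
    print(f"VEGF-positive cells per field: {m:.1f} ± {sd:.1f}")
```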
4.9. Protein Extraction and Western Blotting

Proteins were extracted from formalin-fixed paraffin-embedded (FFPE) samples using a Qproteome FFPE Tissue Kit (QIAGEN) according to the manufacturer’s instructions. The protein concentration was determined using Bio-Rad assay reagent (Bio-Rad, Hercules, CA, USA). Briefly, samples with equal concentrations of protein were mixed with 4× sample buffer (GenDEPOT Inc., Barker, TX, USA), heated at 95 °C for 10 min, and separated by 10% sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE). Proteins were then transferred onto polyvinylidene difluoride (PVDF) membranes (Amersham Life Science, Arlington Heights, IL, USA) in Tris-glycine transfer buffer (Invitrogen, Carlsbad, CA, USA). The membranes were blocked for 1 h at room temperature with 5% skim milk in Tris-buffered saline with Tween-20. The membranes were incubated at 4 °C overnight with anti-VEGFA (Abcam, Cambridge, UK), anti-FGF-2, anti-AKT1, anti-p-AKT1, anti-ERK1/2, anti-p-ERK, or anti-β-actin (all Santa Cruz Biotechnology, Inc.) primary antibodies, followed by incubation with horseradish peroxidase-conjugated anti-mouse IgG or anti-rabbit IgG secondary antibodies (Cell Signaling Technology, Danvers, MA, USA), as appropriate, for 1 h at room temperature. The membranes were washed and then developed using a West-Q Chemiluminescent Substrate Plus kit (GenDEPOT Inc.). The intensities of the protein bands were determined using Multi-Gauge software (version 3.0; Fuji Photo Film, Tokyo, Japan), and relative densities were expressed as ratios of control values. All reactions were performed in duplicate.
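A minimal sketch of the densitometry ratio described above follows. The β-actin normalization step is an assumption added for illustration (β-actin was blotted, but the text does not spell out the normalization), and all band intensities are hypothetical, not data from the study.

```python
# Relative density: target band normalized to a loading control (assumed
# beta-actin), then expressed as a ratio to the control (DBW) group.

def relative_density(target: float, loading: float,
                     target_ctrl: float, loading_ctrl: float) -> float:
    """Loading-normalized band intensity expressed relative to the control group."""
    return (target / loading) / (target_ctrl / loading_ctrl)

if __name__ == "__main__":
    # Hypothetical Multi-Gauge intensities for VEGFA and beta-actin bands.
    ratio = relative_density(target=1820, loading=950, target_ctrl=980, loading_ctrl=940)
    print(f"VEGFA (HA-CA + ADSCs vs. DBW): {ratio:.2f}")
```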
4.10. Statistical Analysis

Quantitative data were expressed as the mean ± standard deviation. Differences between groups were evaluated by one-way analysis of variance (ANOVA) followed by Dunn’s multiple comparison post hoc test. All statistical analyses were performed using PRISM v.5.01 (GraphPad Software, Inc., La Jolla, CA, USA) and p < 0.05 was taken to indicate statistical significance.
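For readers without Prism, an equivalent comparison can be sketched in Python: one-way ANOVA across the four groups followed by Dunn's post hoc test, here via the scikit-posthocs package as a stand-in for the Prism workflow. The group values below are hypothetical placeholders, not study data.

```python
from scipy import stats
import scikit_posthocs as sp

# One-way ANOVA followed by Dunn's pairwise post hoc test (Bonferroni-adjusted).
# The four lists stand in for per-animal measurements (e.g., skin thickness in um);
# all values are hypothetical.

dbw = [430, 455, 470, 490, 500]
adscs = [510, 540, 560, 575, 590]
ha_ca = [720, 790, 820, 850, 895]
ha_ca_adscs = [800, 870, 910, 950, 970]
groups = [dbw, adscs, ha_ca, ha_ca_adscs]

f_stat, p_anova = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Rows/columns of the returned matrix follow the order of the input groups.
print(sp.posthoc_dunn(groups, p_adjust="bonferroni"))
```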
[ "intro", "results", null, null, null, null, null, null, null, "discussion", null, null, null, null, null, null, null, null, null, null, null ]
[ "diabetic wound", "hyaluronic acid", "biomaterial", "adipose-derived stem cells", "angiogenesis" ]
1. Introduction: Diabetic wound (DBW) is a broad term describing various pathological conditions that manifest as wounds or ulcers associated with diabetes. Diabetic foot ulcers and other DBWs are common complications in diabetes, occurring in approximately 20% of diabetic patients [1]. DBWs have been associated with hyperglycemia, which often results in diabetic peripheral neuropathy and blockage of peripheral blood vessels, ultimately leading to diabetic foot ulcers [2]. Patients with DBWs often progress rapidly, making treatment challenging and potentially resulting in lower extremity amputation. In addition to diabetic peripheral neuropathy and peripheral vascular disease, several other risk factors of DBW have been identified, including limited joint mobility and foot deformities [2,3]. Chronic inflammation is an important barrier in the treatment of DBW. Inflammation plays a crucial role in wound healing; in DBW, inflammatory responses are delayed, and the wound does not heal. In addition, the balance between collagen production and degradation is disrupted, further impairing the wound healing process [4]. The clinical management of DBW typically involves wound dressing and debridement of necrotic tissues. Numerous recent studies have indicated that modulating extracellular matrix (ECM) synthesis, growth factor release, and vascularization-targeting approaches might be useful in the treatment of DBW. Many clinical trials with mesenchymal stem cells (MSCs) are currently in progress as they are readily accessible and safer than other types of stem cells. MSCs represent a stable source for experiments or clinical treatment because they can be obtained from various tissues [5]. MSCs also exhibit multilineage differentiation potential and exert various immunomodulatory and paracrine effects [6]. Adipose-derived stem cells (ADSCs), which can be isolated easily and in large numbers from adipose tissue through liposuction surgery and have been tested for various clinical applications, exhibit long-term growth in vitro and can differentiate into various cell types upon induction [7]. Moreover, ADSCs are known to exert paracrine effects that promote tissue regeneration [6]. Hyaluronic acid (HA) is a polysaccharide found in various tissues of the human body, including the joints, cartilage, eyes, and skin [8,9]. HA plays a key role in wound healing by regulating cell proliferation, migration, and differentiation, as well as ECM organization and metabolism. High molecular weight HA inhibits the proliferation and migration of most cell types, while low molecular weight forms of HA (<300 kDa) promote cell proliferation and display angiogenic properties [10,11,12]. HA in the molecular weight range of 150–250 kDa has been shown to have beneficial effects on wound healing by enhancing cell–HA interactions through cell-surface receptors for HA, which activates signal transduction pathways essential for cellular migration and proliferation [13,14]. Therefore, HA has been proposed as a biomaterial for DBW treatment [8]. HA hydrogels are widely used in numerous biomedical and pharmaceutical applications due to their inherent biocompatibility, matrix structure similarity, and drug delivery capabilities [15]. However, the application of injectable hydrogels to treat human diseases remains limited. Furthermore, the incorporation of stem cells or growth factors into hydrogels remains technically challenging. 
Catechol-modified HA (HA-CA) hydrogels exhibit higher biocompatibility and better tissue adhesion than conventional HA hydrogels and improve stem cell survival and functionality [16]. However, their potential use in the treatment of DBW remains largely unexplored. The purpose of this study was to assess the therapeutic effects of an HA-CA patch combined with ADSCs in the treatment of DBW. In particular, we evaluated their effects on tissue regeneration and angiogenesis using a DBW mouse model.

2. Results

2.1. Wound Area Measurement

The wounds were photographed on days 1, 3, 5, 7, 14, and 21, and the wound areas were quantified using ImageJ software. The change in wound size over time was expressed, for each treatment group, as the remaining wound area as a percentage of the initial wound area. Visual inspection indicated a progressive decrease in wound size. Compared with control mice, wound closure on day 3 was significantly greater in the treatment groups (wound area relative to the initial area: control DBW group, 105% ± 18% vs. ADSCs group, 82% ± 15%; HA-CA group, 71% ± 11% **; HA-CA + ADSCs group, 76% ± 10% *; * p < 0.05, ** p < 0.01). At 14 days, remarkable wound healing was observed in the HA-CA + ADSCs group (control DBW group, 59% ± 8% vs. HA-CA + ADSCs group, 19% ± 4%; p < 0.01; Figure 1A,B). Although the wounds appeared to have healed in all groups by day 21, the wound size was significantly smaller in the HA-CA + ADSCs group than in the control and ADSCs groups (control DBW group, 16% ± 5% **; ADSCs group, 24% ± 17% * vs. HA-CA + ADSCs group, 8% ± 2%; * p < 0.05, ** p < 0.01; Figure 1A,B).
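As an illustration of this normalization step, the following minimal Python sketch computes the remaining wound area on each follow-up day as a percentage of the initial area. It is not part of the original analysis (which used ImageJ measurements), and the area values are hypothetical.

# Remaining wound area as a percentage of the initial area (hypothetical values).
areas_mm2 = {1: 28.3, 3: 22.5, 5: 18.1, 7: 12.9, 14: 5.4, 21: 2.3}  # day -> measured area
initial = areas_mm2[1]

for day, area in sorted(areas_mm2.items()):
    print(f"day {day:>2}: {100.0 * area / initial:5.1f}% of the initial wound area")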
2.2. PKH26-Labeled ADSC Tracing

PKH26-labeled ADSCs (5 × 10⁵ cells/100 µL) were injected into healthy subcutaneous tissue at the wound boundary. In the HA-CA + ADSCs group, PKH26-labeled ADSCs were transplanted with the HA-CA patch at the wound site. The mice were sacrificed on day 14 for PKH26-labeled ADSC tracking. Although most ADSCs migrated from the injection site toward the wound, only a few ADSCs were observed at the wound site in the ADSC injection group. In contrast, in the HA-CA + ADSCs group, ADSCs were detected in the epidermis, papillary dermis, and reticular dermis at the wound site (Figure 1C).

2.3. Histopathological Assessment

Wounds were stained with hematoxylin and eosin on postoperative day (POD) 21 for histological observation. Complete re-epithelialization was observed in all groups; however, there were significant differences in epidermal thickness and skin appendages among the groups. In the DBW group, the epidermis consisted of one to three thin epithelial cell layers, and the extent of tissue regeneration in the dermis was low. In the ADSCs, HA-CA, and HA-CA + ADSCs groups, the epidermis was completely regenerated, and epidermal appendages, including sweat glands and hair follicles, were observed. Notably, neovascular-like structures were observed in the dermal layer (Figure 2A). Masson’s trichrome staining revealed marked granulation tissue formation in the HA-CA, ADSCs, and HA-CA + ADSCs groups. Furthermore, these groups showed increased proportions of muscle cells and extensive dermal tissue remodeling (Figure 2B). Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive cells were counted manually in microscopic fields (DBW group, 19 ± 2; ADSCs group, 25 ± 4; HA-CA group, 21 ± 4; HA-CA + ADSCs group, 21 ± 3; Figure 2F); the level of apoptosis was not significantly different among the groups (Figure 2C). In comparison with the DBW group, the skin was significantly thicker in the HA-CA and HA-CA + ADSCs groups (DBW group, 468 ± 34 μm; ADSCs group, 555 ± 52 μm; HA-CA group, 815 ± 90 μm *; HA-CA + ADSCs group, 900 ± 102 μm *; * p < 0.05). The epidermis was thickest in the HA-CA + ADSCs group (DBW group, 77 ± 4 μm; ADSCs group, 104 ± 16 μm; HA-CA group, 107 ± 17 μm; HA-CA + ADSCs group, 122 ± 7 μm *; * p < 0.05; Figure 2D,E).
2.4. Tissue Neovascularization

To examine wound neovascularization, we performed immunohistochemical staining for the endothelial cell-specific markers cluster of differentiation 31 (CD31) and von Willebrand factor (vWF). Most of the CD31- and vWF-positive vessels were distributed in the lower middle part of the reticular layer of the dermis. CD31 is expressed in small vessels, mature blood vessels, and immature vascular structures. Most CD31-positive vessels were 10–15 μm in diameter, and none exceeded 20 μm. Compared with the control DBW group, the number of CD31-positive vessels was higher in mice treated with HA-CA and HA-CA + ADSCs (control DBW group, 14 ± 2; ADSCs group, 25 ± 3; HA-CA group, 52 ± 5 *; HA-CA + ADSCs group, 73 ± 10 **; * p < 0.05, ** p < 0.01; Figure 3A,C). The vWF-positive vessels were larger than the CD31-positive vessels, measuring 20–30 μm in diameter, with some exceeding 40 μm. Mice in the HA-CA + ADSCs group had the greatest number of vWF-positive vessels (control DBW group, 5 ± 1; ADSCs group, 6 ± 2; HA-CA group, 9 ± 1; HA-CA + ADSCs group, 16 ± 3 *; * p < 0.05; Figure 3B,D).
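The vessel counts in Section 2.4, like the TUNEL counts in Section 2.3, are manual counts per microscopic field summarized as mean ± standard deviation per group. The short Python sketch below illustrates that summary step; the per-field counts are made up for illustration, as the raw per-field data are not reported in the text.

# Summary of manual per-field counts (hypothetical data, not the study's raw counts).
from statistics import mean, stdev

cd31_counts_per_field = {
    "DBW":           [13, 15, 14, 12, 16],
    "ADSCs":         [24, 26, 23, 27, 25],
    "HA-CA":         [50, 55, 49, 53, 52],
    "HA-CA + ADSCs": [70, 78, 65, 80, 72],
}

for group, counts in cd31_counts_per_field.items():
    print(f"{group:>14}: {mean(counts):5.1f} ± {stdev(counts):4.1f} CD31-positive vessels/field")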
2.5. Angiogenesis-Related Gene Expression Profile

To assess the effects of the different treatments on angiogenesis, we measured the mRNA levels of various angiogenesis-associated genes, including those encoding vascular endothelial growth factor (VEGF), angiopoietin 1 (Ang-1), insulin-like growth factor 1 (IGF-1), fibroblast growth factor 2 (FGF-2), phosphoinositide 3-kinase (PI3K), protein kinase B (Akt), sprouty-related EVH1 domain-containing 1 (Spred1), and extracellular signal-regulated kinase (ERK). Mice treated with HA-CA + ADSCs had the highest VEGF, IGF-1, and FGF-2 mRNA levels. Ang-1 was upregulated in all treatment groups compared with the control group, but the difference was statistically significant only in the HA-CA + ADSCs group. FGF-2 was expressed at higher levels in the HA-CA and HA-CA + ADSCs groups than in the control group, whereas treatment with ADSCs alone had no effect on FGF-2 mRNA levels. Although PI3K and Akt were significantly upregulated in mice treated with HA-CA + ADSCs, no changes were observed in Spred1 or ERK mRNA levels (Figure 4).

2.6. VEGF Expression Analysis by Immunofluorescence

To further confirm the effects of ADSCs and HA-CA on angiogenesis, we performed immunofluorescence staining for VEGF. Compared with the DBW group, in which the number of VEGF-expressing cells was low, the numbers of VEGF-positive cells were significantly higher in the ADSCs and HA-CA groups. In particular, mice treated with HA-CA + ADSCs had the greatest number of VEGF-expressing cells (Figure 4I). VEGF-positive cells were counted manually in microscopic fields and were most numerous in the HA-CA + ADSCs group (control DBW group, 15 ± 4; ADSCs group, 25 ± 6; HA-CA group, 28 ± 5; HA-CA + ADSCs group, 46 ± 5 **; ** p < 0.01; Figure 4J).
2.7. Angiogenesis-Related Protein Expression

We investigated the expression of various angiogenesis-associated proteins, including VEGFA, FGF-2, Akt, phospho-Akt, ERK1/2, and phospho-ERK1/2. VEGFA and FGF-2 expression was increased in the HA-CA and HA-CA + ADSCs groups compared to the control DBW group (VEGFA: control DBW group, 49 ± 4; ADSCs group, 43 ± 14; HA-CA group, 70 ± 8; HA-CA + ADSCs group, 91 ± 2 *; * p < 0.05; Figure 5A). FGF-2 was increased in all groups compared to the controls, especially in the HA-CA group (control DBW group, 34 ± 8; ADSCs group, 54 ± 6; HA-CA group, 76 ± 3 **; HA-CA + ADSCs group, 52 ± 2; ** p < 0.01; Figure 5A). With regard to the ERK and PI3K signaling pathways, ERK1/2 and AKT1 expression was examined. Both AKT and phospho-AKT (p-AKT) levels were increased in all treatment groups compared to the controls, with the largest difference in the HA-CA group (control DBW group, 20 ± 3; ADSCs group, 34 ± 5; HA-CA group, 56 ± 2 **; HA-CA + ADSCs group, 43 ± 2; ** p < 0.01; Figure 5B). Total ERK1/2 expression was similar in all groups, and phospho-ERK1/2 (p-ERK1/2) was barely detectable (control DBW group, 32 ± 3; ADSCs group, 28 ± 1; HA-CA group, 29 ± 2; HA-CA + ADSCs group, 27 ± 2; Figure 5B).
3. Discussion

Chronic hyperglycemia is the leading cause of vascular complications related to diabetes. Skin damage poses serious risks to diabetic patients because the healing capacity of their skin is impaired [17]. Diabetic foot ulcers are the most common complications in diabetic patients [18]. DBWs are caused by peripheral neuropathy, vascular dysfunction, and arterial occlusive disease resulting from persistent hyperglycemia [2,19], and limited joint mobility and foot deformities further increase the risk of wounds and ulcers. In addition, hyperglycemia is linked to decreased white blood cell counts and impaired macrophage function, and ischemic and neuropathic dysfunction can lead to infection and delayed wound healing [18,20]. Because current DBW management methods have various limitations, the present study was performed to assess the potential clinical usefulness of combining biomaterials with stem cells. Several biomaterials and stem cell types have been proposed for the treatment of DBW, and some of these have yielded encouraging results [21,22,23].
Biocompatibility, safety, degradability, and mechanical properties are crucial considerations when developing biomaterials intended for therapeutic use [24]. Woo et al. [22] assessed the effects of a silk fibroin chitosan film and ADSCs in a DBW rat model and found that the biomaterial provided a biocompatible scaffold suitable for stem cell delivery. Kanitkar et al. [25] reported that a polycaprolactone-gelatin nanofiber matrix exerted promising wound healing effects in a DBW mouse model. Here, we investigated the effects of an HA-CA patch combined with ADSCs in a DBW mouse model.

HA is a naturally occurring polysaccharide and a major component of the ECM. Owing to its excellent viscoelastic properties and its ability to promote cell migration, HA has emerged as a therapeutic agent for regeneration and wound healing [26]. Interest in the diverse functions of HA has led to the development of numerous HA-derived biomaterials, and various biomaterials are under active investigation for wound care [27,28]. In this study, 200 kDa HA was chemically modified: to enable crosslinking of HA and increase its retention time at the injury site, a mussel adhesion-inspired catechol group was introduced into the HA backbone. The HA-CA patch maximizes the effects of cell therapy by retaining stem cells attached to the scaffold so that they are not lost on transplantation. HA-CA patches provide excellent biocompatibility, tissue adhesion, and improved cell survival, maximizing the regenerative ability of stem cells [16]. In this study, we showed that although both HA-CA and HA-CA + ADSCs reduced wound size, the wound healing effects of HA-CA + ADSCs were more potent. We believe that this synergistic effect reflects the ability of the HA-CA patch to improve the regenerative capacity of the stem cells and maximize their paracrine effects.

The synergistic wound healing effects of the HA-CA patch and ADSCs were confirmed by histopathological analyses. The combination of HA-CA and ADSCs resulted in increased epithelialization, granulation tissue formation, and capillary formation, and ADSCs alone or in combination with HA-CA promoted epidermal regeneration, neovascularization, and skin appendage development. To confirm the effects of HA-CA and ADSCs on neovascularization in DBW, we stained tissues for CD31 and vWF. Although CD31-positive small vessels were observed in mice treated with ADSCs alone or in combination with HA-CA, the number of CD31-expressing vessels was significantly higher in the HA-CA + ADSCs group. Consistent with these observations, mice treated with HA-CA + ADSCs exhibited the greatest number of vWF-positive vessels. Although the therapeutic effects of stem cells in DBW have been reported previously [29], our findings suggest that ADSCs promote wound healing and neovascularization, and that their effects are augmented when combined with HA-CA.

ADSCs secrete various growth factors and cytokines, including VEGF, IGF-1, and FGF, promoting angiogenesis and tissue regeneration [16,30]. In this study, treatment with HA-CA combined with ADSCs significantly upregulated the expression of VEGF, Ang-1, IGF-1, and FGF-2, and mice treated with HA-CA + ADSCs exhibited the highest mRNA levels of VEGF and IGF-1. These results confirmed the synergistic neovascularization-promoting effects of HA-CA and ADSCs.
Although the PI3K and Akt mRNA levels were significantly increased upon treatment with HA-CA + ADSCs, we found no changes in the expression levels of ERK and Spred1. To determine whether the angiogenesis-related mRNA findings were consistent at the protein level, proteins were extracted from the paraffin blocks and VEGF, FGF-2, AKT1, and ERK expression was examined. VEGFA and FGF-2 expression levels were increased in the HA-CA and HA-CA + ADSCs groups; in particular, VEGFA showed the greatest increase in the HA-CA + ADSCs group, whereas FGF-2 showed the greatest increase in the HA-CA group. VEGFA is a potent inducer of angiogenesis and triggers most of the mechanisms involved in the activation and regulation of angiogenesis [31]. FGF-2 promotes epithelialization during skin wound healing and strongly activates fibroblasts as well as other mesoderm-derived cells, including vascular endothelial and smooth muscle cells, osteoblasts, and chondrocytes; it is also known to play a prominent role in epidermal defect wound models [32]. The difference between the two groups may reflect the gradual progression of wound healing, as the animals were sacrificed 3 weeks after HA-CA or ADSC transplantation: in the HA-CA + ADSCs group, angiogenesis appeared to predominate, whereas in the HA-CA group, dermal cell proliferation seemed to be the main contributor. Total ERK1/2 expression was detected in all groups, but there was little expression of p-ERK1/2 (the activated form of ERK); thus, angiogenesis was not mediated mainly through the mitogen-activated protein kinase (MAPK) pathway. The levels of AKT and p-AKT were increased in all treatment groups in comparison with the control DBW group, suggesting that angiogenesis was mediated through the PI3K/Akt pathway (Figure 6). The PI3K/Akt pathway promotes endothelial cell proliferation, differentiation, and migration in response to tyrosine kinase growth factor receptors, including the IGF-1 receptor, receptor tyrosine kinases, and VEGFR [33,34]. Moreover, PI3K/Akt signaling increases Bcl-2 levels and decreases Bax levels, promoting cell survival [35]. The results of the present study indicate that the expression levels of various growth factors were higher in the HA-CA + ADSCs group than in the ADSCs-only or HA-CA-only treatment groups. Therefore, by upregulating various growth factors and upstream pathway regulators, HA-CA and ADSCs activate the PI3K/Akt pathway, promoting angiogenesis and tissue regeneration in DBW.

In conclusion, we investigated the effects of the HA-based biomaterial HA-CA and ADSCs in a mouse model of DBW. The combination of HA-CA and ADSCs showed synergistic wound healing effects via acceleration of tissue regeneration and angiogenesis. Additional clinical studies are warranted to confirm the clinical benefits of HA-CA combined with ADSCs in patients with DBW.

4. Materials and Methods

4.1. HA-CA Synthesis

The HA-CA patch was synthesized by conjugating dopamine hydrochloride (Sigma-Aldrich, St. Louis, MO, USA) to a 200 kDa HA backbone (Lifecore Biomedical, Chaska, MN, USA), as previously described [16,36]. Briefly, HA was dissolved in distilled water at a concentration of 1% (w/v). Subsequently, 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC; TCI Co., Japan) and N-hydroxysulfosuccinimide (NHS; Sigma-Aldrich) were added to the HA solution at an HA:EDC:NHS molar ratio of 1:1.5:1.5 (pH 5.0). Dopamine hydrochloride was then added at an HA:dopamine hydrochloride molar ratio of 1:1.5, and the solution was incubated for 12 h at room temperature. The solution was dialyzed against pH 5.0 phosphate-buffered saline (PBS; Biosesang, Seongnam, Korea) using a membrane (Cellu Sep T2, MW cut-off 6–8 kDa; Membrane Filtration Products Inc., Seguin, TX, USA) to remove uncoupled dopamine hydrochloride. The resulting product was poured into Petri dishes in a thin layer and freeze-dried. The synthesis yield of the HA-CA conjugate was approximately 80%. The degree of substitution of catechol groups on the HA backbone was 8.8%, as determined by measuring the absorbance of the HA-CA solution at 280 nm using an ultraviolet-visible (UV-vis) spectrophotometer (JASCO Corporation, Tokyo, Japan). To form a hydrogel, the lyophilized HA-CA conjugate was dissolved in neutral PBS (Sigma) and mixed with an oxidizing solution containing 4.5 mg/mL sodium periodate (NaIO4; Sigma) and 0.4 M sodium hydroxide (NaOH; Sigma).
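The reagent quantities implied by these molar ratios can be estimated as in the Python sketch below. This is only a back-of-the-envelope illustration, not the authors' protocol: it assumes one carboxyl group per HA disaccharide repeat unit and uses nominal molecular weights, all of which should be checked against supplier data before any practical use.

# Rough reagent estimate for the HA:EDC:NHS (1:1.5:1.5) and HA:dopamine-HCl (1:1.5)
# molar ratios. Moles of HA are counted per disaccharide repeat unit (assumption);
# the molecular weights below are nominal values and should be verified.
HA_REPEAT_UNIT_MW = 401.3   # g/mol, sodium hyaluronate disaccharide (nominal)
EDC_HCL_MW        = 191.7   # g/mol (nominal)
SULFO_NHS_MW      = 217.1   # g/mol (nominal)
DOPAMINE_HCL_MW   = 189.6   # g/mol (nominal)

ha_mass_g = 1.0             # 1% (w/v) HA in 100 mL of distilled water
ha_mol = ha_mass_g / HA_REPEAT_UNIT_MW

for name, ratio, mw in [("EDC-HCl", 1.5, EDC_HCL_MW),
                        ("sulfo-NHS", 1.5, SULFO_NHS_MW),
                        ("dopamine-HCl", 1.5, DOPAMINE_HCL_MW)]:
    print(f"{name:>13}: {ratio * ha_mol * mw:6.3f} g per {ha_mass_g:.1f} g of HA")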
4.2. ADSC Isolation and Characterization

Human ADSCs were isolated from three patients who underwent augmentation mammoplasty; none of the patients had inflammation or cancer. This study was approved by the Seoul National University Bundang Hospital Institutional Review Board and Ethics Committee and conducted in accordance with the guidelines of the 1975 Declaration of Helsinki (IRB No. B-1702/381-301). Briefly, adipose tissue was washed with PBS and cut into small pieces. Enzymatic digestion was performed using 0.075% collagenase type I (Sigma-Aldrich) in a humidified 5% CO2 incubator for 1 h at 37 °C. After neutralization, samples were centrifuged, and supernatants were passed through a 100 µm cell strainer (BD Biosciences, Bedford, MA, USA). The cells were transferred into cell culture flasks with Dulbecco’s modified Eagle’s medium (Welgene, Gyeongsan, Korea) supplemented with 10% fetal bovine serum (Gibco, Carlsbad, CA, USA), 100 U/mL penicillin, and 100 μg/mL streptomycin (Lonza, Walkersville, MD, USA) and maintained at 37 °C in a humidified 5% CO2 incubator. ADSCs were used between passages 4 and 6 for fluorescence-activated cell sorting and animal experiments and were characterized by flow cytometry for the cell-surface markers CD31, CD34, CD44, CD45, CD90, and HLA-DR (BD Biosciences Pharmingen, San Jose, CA, USA). To track the transplanted ADSCs, the cells were labeled with PKH26 red fluorescent dye (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer’s instructions. Briefly, ADSCs were harvested and resuspended in 1 mL of diluent C solution, 4 µL of PKH26 dye was added, and the cells were incubated for 5 min. Fetal bovine serum (1 mL) was added for quenching for 2 min, followed by washing with PBS.
4.3. Diabetic Wound Animal Model

C57BL/6 male mice (7 weeks old, 23–26 g) were purchased from ORIENT BIO (Seongnam, Korea) and maintained according to the Association for Assessment and Accreditation of Laboratory Animal Care International system. All animal experiments conformed to the International Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Seoul National University Bundang Hospital (IACUC No. BA1710-234/090-01). Forty C57BL/6 mice were equally divided into four groups: control diabetic wound (DBW) group, ADSCs group, HA-CA group, and HA-CA + ADSCs group. Diabetes was induced by intraperitoneal injection of streptozotocin (STZ; Sigma-Aldrich) at a dose of 150 mg/kg dissolved in citrate buffer (pH 5.5) [37,38]. Blood was drawn from the tail vein, and the glucose level was determined using a glucometer (Accu-Check Performa; Roche, Pleasanton, CA, USA). Blood glucose levels and body weights were measured every 3 days, and mice with blood glucose levels >250 mg/dL were considered diabetic. Four weeks after STZ administration, mice were anesthetized with 2% isoflurane inhalation, and excisional biopsy wounds extending through the panniculus carnosus were made on the shaved dorsal midline using a 6 mm punch. ADSCs (5 × 10⁵ cells/100 µL) labeled with PKH26 (Sigma-Aldrich) were transplanted into healthy tissue at the wound boundary, while HA-CA patches were injected at the wound site; the HA-CA patch (6 mm) was placed on the DBW (Figure 7). Wound areas were photographed on days 1, 3, 5, 7, 14, and 21 after ADSC and HA-CA transplantation. Wound margins were identified as whitish, dry, membrane-like structures, and the surface area was measured using ImageJ software (version 1.51j8; National Institutes of Health, Bethesda, MD, USA). Changes in wound area over time were expressed as the percentage of the initial wound area.
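As a small illustration of the monitoring rule described above, the sketch below flags an animal as diabetic once any blood glucose reading exceeds 250 mg/dL; the readings are hypothetical.

# Hypothetical tail-vein glucose readings (mg/dL) taken every 3 days after STZ injection.
DIABETIC_THRESHOLD = 250  # mg/dL, as defined in the study

readings = {
    "mouse_01": [180, 265, 310, 355],
    "mouse_02": [170, 190, 210, 205],
    "mouse_03": [200, 280, 330, 390],
}

for mouse, values in readings.items():
    status = "diabetic" if any(v > DIABETIC_THRESHOLD for v in values) else "non-diabetic"
    print(f"{mouse}: {status} (maximum reading {max(values)} mg/dL)")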
4.4. Histopathological Assessment

DBW tissues were fixed in 10% formalin and embedded in paraffin. The tissues were routinely processed and cut into 4–5 μm sections. The sections were deparaffinized in xylene at room temperature and stained with hematoxylin and eosin (Cancer Diagnostics Inc., Durham, NC, USA) according to the manufacturer’s instructions. Masson’s trichrome staining (BBC Biochemical, Mount Vernon, WA, USA) was performed in accordance with the manufacturer’s protocol. Briefly, deparaffinized sections were fixed in Bouin’s solution for 1 h at 56 °C and stained with ClearView Iron Hematoxylin working solution for 10 min. Subsequently, tissues were stained with Biebrich scarlet-acid fuchsin solution (2 min), phosphomolybdic-phosphotungstic acid solution (10 min), aniline blue solution (3 min), and 1% acetic acid solution (2 min). ECM, collagen, and other connective tissue elements stained blue, and smooth muscle stained red. DBW tissue sections were imaged (×100 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH, Jena, Germany).
4.5. Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Staining

In situ detection of apoptosis was performed by labeling DNA strand breaks in tissue sections using a TUNEL staining kit (Roche Diagnostics, Penzberg, Germany). Briefly, DBW tissue sections were pretreated with proteinase K (20 μg/mL) at 37 °C for 30 min and immediately washed. The sections were then incubated with the TUNEL reaction mixture at 37 °C for 60 min. After mounting, sections were imaged under a fluorescence microscope (×200 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH).

4.6. Immunohistochemistry

Immunohistochemistry was performed using a GBI Polink-2 HRP kit (Golden Bridge International Inc., Bothell, WA, USA). Briefly, sections were deparaffinized in xylene at room temperature and rehydrated in a graded ethanol series. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc., West Logan, UT, USA), tissues were incubated in peroxidase blocking reagent for 15 min at room temperature. The tissues were then incubated with anti-von Willebrand factor (vWF; EMD Millipore, Temecula, CA, USA) and anti-CD31 (Thermo Fisher Scientific, Waltham, MA, USA) primary antibodies (1:50) for 90 min at room temperature, followed by incubation with diaminobenzidine chromogen. After dehydration, the sections were mounted with Histomount (National Diagnostics, Atlanta, GA, USA) and imaged (×400 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH).
4.7. RNA Isolation and qRT-PCR

RNA was isolated and purified using an RNeasy Plus Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer’s instructions, and cDNA synthesis was performed using a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific). Primers for qRT-PCR were obtained from Cosmogenetech (Seoul, Korea), and the primer sequences are shown in Table 1. Reactions were prepared using Power SYBR Green PCR Master Mix (Applied Biosystems, Foster City, CA, USA) according to the manufacturer’s instructions and run on a ViiA 7 Real-Time PCR System (Life Technologies Corporation, Carlsbad, CA, USA) using the following cycling conditions: one denaturation cycle at 95 °C for 10 min, followed by 40 two-segment amplification cycles (95 °C/10 min, 60 °C/1 min). All reactions were performed in triplicate.
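The text does not state how relative mRNA levels were calculated from the qRT-PCR data; a common choice is the 2^-ΔΔCt method, which is sketched below under that assumption with hypothetical Ct values and an unspecified reference (housekeeping) gene.

# 2^-ΔΔCt relative quantification (assumed method; Ct values are hypothetical).
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene versus a reference gene (2^-ΔΔCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Example: VEGF in a treated sample relative to the control DBW sample.
print(fold_change(ct_target_treated=24.1, ct_ref_treated=17.8,
                  ct_target_control=26.5, ct_ref_control=17.9))  # about 4.9-fold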
4.8. Immunofluorescence Analysis

Skin sections were deparaffinized in xylene and rehydrated in a graded ethanol series. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc.), sections were incubated with 3% bovine serum albumin blocking reagent for 10 min at room temperature. After blocking, sections were incubated with a primary antibody against VEGF (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA), followed by incubation with an Alexa Fluor 488 anti-rabbit secondary antibody (BioLegend, San Diego, CA, USA). The sections were mounted with 4′,6-diamidino-2-phenylindole (DAPI)-containing mounting medium (Vector Laboratories Inc., Burlingame, CA, USA) and observed under an inverted microscope (Axio Observer 7; Carl Zeiss Microscopy GmbH). VEGF-positive cells were counted manually in five microscopic fields per stained sample, and the mean value was used for statistical analyses.

4.9. Protein Extraction and Western Blotting

Proteins were extracted from formalin-fixed paraffin-embedded (FFPE) samples using a Qproteome FFPE Tissue Kit (QIAGEN) according to the manufacturer’s instructions. Protein concentrations were determined using Bio-Rad assay reagent (Bio-Rad, Hercules, CA, USA). Briefly, samples containing equal amounts of protein were mixed with 4× sample buffer (GenDEPOT Inc., Barker, TX, USA), heated at 95 °C for 10 min, and separated by 10% sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE). Proteins were then transferred onto polyvinylidene difluoride (PVDF) membranes (Amersham Life Science, Arlington Heights, IL, USA) in Tris-glycine transfer buffer (Invitrogen, Carlsbad, CA, USA). The membranes were blocked for 1 h at room temperature with 5% skim milk in Tris-buffered saline with Tween-20 and then incubated at 4 °C overnight with anti-VEGFA (Abcam, Cambridge, UK), anti-FGF-2, anti-AKT1, anti-p-AKT1, anti-ERK1/2, anti-p-ERK, or anti-β-actin (all Santa Cruz Biotechnology, Inc.) primary antibodies, followed by incubation with horseradish peroxidase-conjugated anti-mouse IgG or anti-rabbit IgG secondary antibodies (Cell Signaling Technology, Danvers, MA, USA), as appropriate, for 1 h at room temperature. The membranes were washed and developed using a West-Q Chemiluminescent Substrate Plus kit (GenDEPOT Inc.). The intensities of the protein bands were determined using Multi-Gauge software (version 3.0; Fuji Photo Film, Tokyo, Japan), and relative densities were expressed as ratios of control values. All reactions were performed in duplicate.
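A minimal sketch of the densitometry normalization is given below. The intensity values are hypothetical, and normalization to β-actin is an assumption: β-actin is among the blotted proteins, but the text only states that relative densities were expressed as ratios of control values.

# Band intensities (hypothetical Multi-Gauge readouts): target band and beta-actin band.
raw = {
    "DBW":           (4200.0, 9100.0),
    "ADSCs":         (3900.0, 8800.0),
    "HA-CA":         (6600.0, 9000.0),
    "HA-CA + ADSCs": (8700.0, 9200.0),
}

# Normalize each target band to its loading control, then express relative to the DBW control.
normalized = {group: target / actin for group, (target, actin) in raw.items()}
control = normalized["DBW"]
for group, value in normalized.items():
    print(f"{group:>14}: {value / control:4.2f} x control")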
4.10. Statistical Analysis

Quantitative data were expressed as the mean ± standard deviation. Differences between groups were evaluated by one-way analysis of variance (ANOVA) followed by Dunn’s multiple comparison post hoc test. All statistical analyses were performed using PRISM v.5.01 (GraphPad Software, Inc., La Jolla, CA, USA), and p < 0.05 was taken to indicate statistical significance.
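One possible implementation of this comparison in Python is sketched below, using SciPy’s one-way ANOVA and Dunn’s post hoc test from the scikit-posthocs package (the study itself used GraphPad PRISM); the group data are hypothetical, and both third-party packages must be installed.

# One-way ANOVA across the four groups, followed by Dunn's post hoc test (hypothetical data).
from scipy.stats import f_oneway
import scikit_posthocs as sp

groups = {
    "DBW":           [16, 18, 15, 17, 14],
    "ADSCs":         [24, 26, 22, 25, 27],
    "HA-CA":         [50, 54, 49, 53, 51],
    "HA-CA + ADSCs": [72, 70, 75, 74, 69],
}

f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Pairwise Dunn's comparisons; p < 0.05 is taken to indicate significance.
print(sp.posthoc_dunn(list(groups.values())))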
The substitution degree of catechol group to HA backbone was 8.8%, which was determined by measuring the absorbance of HA-CA solution at 280 nm wavelength using an ultraviolet-visible (UV-vis) light spectrophotometer (JASCO Corporation, Tokyo, Japan). To form a hydrogel, lyophilized HA-CA conjugate was dissolved in neutral PBS (Sigma) and mixed with oxidizing solution containing 4.5 mg/mL sodium periodate (NaIO4; Sigma) and 0.4 M sodium hydroxide (NaOH; Sigma). 4.2. ADSC Isolation and Characterization: Human ADSCs were isolated from three patients who underwent augmentation mammoplasty—the patients did not have inflammation or cancer. This study was approved by Seoul National University Bundang Hospital Institutional Review Board and Ethics Committee and conducted in accordance with the guidelines of the 1975 Declaration of Helsinki (IRB No. B-1702/381-301). Briefly, adipose tissue was washed with PBS and cut into smaller pieces. Enzymatic digestion was performed using 0.075% collagenase type I (Sigma-Aldrich) in a humidified 5% CO2 incubator for 1 h at 37 °C. After neutralization, samples were centrifuged, and supernatants were passed through a 100 µm cell strainer (BD Biosciences, Bedford, MA, USA). The cells were transferred into cell culture flasks with Dulbecco’s modified Eagle’s medium (Welgene, Gyeongsan, Korea) supplemented with 10% fetal bovine serum (Gibco, Carlsbad, CA, USA), 100 U/mL penicillin, and 100 μg/mL streptomycin (Lonza, Walkersville, MD, USA), and cells were maintained at 37 °C in a humidified 5% CO2 incubator. The ADSCs were used between passages 4 and 6 for fluorescence-activated cell sorting and animal experiments. ADSCs were characterized by flow cytometry for the cell-surface markers CD31, CD34, CD44, CD45, CD90, and HLA-DR (BD Biosciences Pharmingen, San Jose, CA, USA). To track the transplanted ADSCs, they were labeled with PKH26 red fluorescent dye (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer’s instructions. Briefly, ADSCs were harvested and resuspended in 1 mL of diluent C solution. Then, 4 µL of PKH26 dye was added followed by incubation for 5 min. Fetal bovine serum (1 mL) was added for quenching for 2 min, followed by washing with PBS. 4.3. Diabetic Wound Animal Model: C57BL/6 male mice (7 weeks old, 23–26 g) were purchased from ORIENT BIO (Seongnam, Korea) and maintained according to the Association for Assessment and Accreditation of Laboratory Animal Care International system. All animal experiments conformed to the International Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Seoul National University Bundang Hospital (IACUC No. BA1710-234/090-01). Forty C57BL/6 mice were equally divided into four groups: control diabetic wound (DBW) group, ADSCs group, HA-CA group, and HA-CA + ADSCs group. Diabetes induction was performed by intraperitoneal injection of streptozotocin (STZ; Sigma-Aldrich) at a dose of 150 mg/kg dissolved in citrate buffer (pH 5.5) [37,38]. Blood was drawn from the tail vein, and the glucose level was determined using a glucometer (Accu-Check Performa; Roche, Pleasanton, CA, USA). The blood glucose level and body weights were measured every 3 days. Mice with blood glucose levels >250 mg/dL were considered diabetic. After 4 weeks of STZ administration, mice were anesthetized with 2% isoflurane inhalation. Excisional biopsy wounds on the shaved dorsal regions of the midline extending through the panniculus carnosus were made using a 6 mm punch. 
ADSCs (5 × 10^5 cells/100 µL) labeled with PKH26 (Sigma-Aldrich) were transplanted into healthy tissue at the wound boundary, while HA-CA patches were injected at the wound site. The HA-CA patch (6 mm) was placed on the DBW (Figure 7). Wound areas were photographed on days 1, 3, 5, 7, 14, and 21 after ADSC and HA-CA transplantation. We identified the wound margins as whitish, dry, membrane-like structures, and measured the surface area using ImageJ software (version 1.51j8; National Institutes of Health, Bethesda, MD, USA). Changes in wound area over time were expressed as the percentage of the initial wound area. 4.4. Histopathological Assessment: DBW tissues were fixed in 10% formalin and embedded in paraffin. The tissues were routinely processed and cut into sections 4–5 μm thick. The sections were deparaffinized in xylene at room temperature and stained with hematoxylin and eosin (Cancer Diagnostics Inc., Durham, NC, USA) according to the manufacturer’s instructions. Masson’s trichrome staining (BBC Biochemical, Mount Vernon, WA, USA) was performed in accordance with the manufacturer’s protocol. Briefly, deparaffinized sections were fixed in Bouin’s solution for 1 h at 56 °C and stained with ClearView Iron Hematoxylin working solution for 10 min. Subsequently, tissues were stained with Biebrich scarlet-acid fuchsin solution (2 min), phosphomolybdic-phosphotungstic acid solution (10 min), aniline blue solution (3 min), and 1% acetic acid solution (2 min). ECM, collagen, and other connective tissue elements were stained blue and smooth muscles were stained red. DBW tissue sections were imaged (×100 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH, Jena, Germany). 4.5. Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Staining: In situ detection of apoptosis was performed by labeling DNA strand breaks in tissue sections using a TUNEL staining kit (Roche Diagnostics, Penzberg, Germany). Briefly, DBW tissue sections were pretreated with proteinase K (20 μg/mL) at 37 °C for 30 min and immediately washed. Subsequently, the DBW tissue sections were incubated with the TUNEL reaction mixture at 37 °C for 60 min. After mounting, sections were imaged under a fluorescence microscope (×200 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH). 4.6. Immunohistochemistry: Immunohistochemistry was performed using a GBI Polink-2 HRP kit (Golden Bridge International Inc., Bothell, WA, USA). Briefly, the sections were deparaffinized in xylene at room temperature and rehydrated in a graded series of ethanol. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc., West Logan, UT, USA), tissues were incubated in peroxidase blocking reagent for 15 min at room temperature. Subsequently, tissues were incubated with anti-von Willebrand factor (vWF; EMD Millipore, Temecula, CA, USA) and anti-CD31 (Thermo Fisher Scientific, Waltham, MA, USA) primary antibodies (1:50) for 90 min at room temperature, followed by incubation with diaminobenzidine chromogen. After dehydration, the sections were mounted with Histomount (National Diagnostics, Atlanta, GA, USA) and imaged (×400 magnification) using Carl Zeiss AxioVision 4 (Carl Zeiss MicroImaging GmbH). 4.7. RNA Isolation and qRT-PCR: RNA was isolated and purified using an RNeasy Plus Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer’s instructions. cDNA synthesis was performed using a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific).
Primers for qRT-PCR were obtained from Cosmogenetech (Seoul, Korea), and the primer sequences are shown in Table 1. Reactions were prepared using Power SYBR Green PCR Master Mix (Applied Biosystems, Foster City, CA, USA) according to the manufacturer’s instructions, and were run on a ViiA 7 Real-Time PCR System (Life Technologies Corporation, Carlsbad, CA, USA) using the following cycling conditions: one cycle of denaturation at 95 °C/10 min, followed by 40 two-segment amplification cycles (95 °C/10 min, 60 °C/1 min). All reactions were performed in triplicate. 4.8. Immunofluorescence Analysis: Skin sections were deparaffinized in xylene and rehydrated in a graded ethanol series. After heat-induced epitope retrieval in citrate buffer, pH 6.0 (Scytek Laboratories, Inc.), sections were incubated with 3% bovine serum albumin blocking reagent for 10 min at room temperature. After blocking, sections were incubated with a primary antibody against VEGF (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) followed by incubation with Alexa Fluor 488 anti-rabbit secondary antibody (Biolegend, San Diego, CA, USA). The sections were mounted with 4′,6-diamidino-2-phenylindole (DAPI)-containing mounting medium (Vector Laboratories Inc., Burlingame, CA, USA) and observed under an inverted microscope (Axio Observer 7; Carl Zeiss Microscopy GmbH). VEGF-positive cells were counted manually in five microscopic fields in each stained sample, and the mean value was used for statistical analyses. 4.9. Protein Extraction and Western Blotting: Proteins were extracted from formalin-fixed paraffin-embedded (FFPE) samples using a Qproteome FFPE Tissue Kit (QIAGEN) according to the manufacturer’s instructions. The protein concentration was determined using Bio-Rad assay reagent (Bio-Rad, Hercules, CA, USA). Briefly, samples with equal concentrations of protein were mixed with 4× sample buffer (GenDEPOT Inc., Barker, TX, USA), heated at 95 °C for 10 min, and separated by 10% sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE). Proteins were then transferred onto polyvinylidene difluoride (PVDF) membranes (Amersham Life Science, Arlington Heights, IL, USA) in Tris-glycine transfer buffer (Invitrogen, Carlsbad, CA, USA). The membranes were blocked for 1 h at room temperature with 5% skim milk in Tris-buffered saline with Tween-20. The membranes were incubated at 4 °C overnight with anti-VEGFA (Abcam, Cambridge, UK), anti-FGF-2 (Santa Cruz Biotechnology, Inc.), anti-AKT1 (Santa Cruz Biotechnology, Inc.), anti-p-AKT1 (Santa Cruz Biotechnology, Inc.), anti-ERK1/2 (Santa Cruz Biotechnology, Inc.), anti-p-ERK (Santa Cruz Biotechnology, Inc.), or anti-β-actin (Santa Cruz Biotechnology, Inc.) primary antibodies, followed by incubation with horseradish peroxidase-conjugated anti-mouse IgG (Cell Signaling Technology, Danvers, MA, USA) or anti-rabbit IgG (Cell Signaling Technology) secondary antibodies as appropriate for 1 h at room temperature. The membranes were washed and then incubated using a West-Q Chemiluminescent Substrate Plus kit (GenDEPOT Inc.). The intensities of the protein bands were determined using Multi-Gauge software (version 3.0; Fuji Photo Film, Tokyo, Japan), and relative densities were expressed as ratios of control values. All reactions were performed in duplicate. 4.10. Statistical Analysis: Quantitative data were expressed as the mean ± standard deviation. 
Differences between groups were evaluated by one-way analysis of variance (ANOVA) followed by Dunn’s multiple comparison post hoc test. All statistical analyses were performed using PRISM v.5.01 (GraphPad Software, Inc., La Jolla, CA, USA) and p < 0.05 was taken to indicate statistical significance.
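Section 4.10 above describes the group comparisons as one-way ANOVA followed by Dunn’s multiple-comparison post hoc test at p < 0.05, run in GraphPad PRISM. The sketch below reproduces the same style of analysis with open-source Python tools purely as an illustration; the group values are placeholders rather than data from the paper, and the use of scipy together with the scikit-posthocs package is an assumption about tooling, not the authors' workflow.

```python
# Illustrative sketch of the Section 4.10 analysis: omnibus one-way ANOVA across
# the four treatment groups, then Dunn's pairwise post hoc test.
# All numbers are placeholders, not measurements from the study.
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # assumed available; provides Dunn's post hoc test

groups = {
    "DBW":         np.array([16.0, 18.2, 14.5, 17.1, 15.3]),
    "ADSCs":       np.array([24.0, 20.5, 26.1, 22.8, 23.4]),
    "HA-CA":       np.array([12.1, 13.4, 11.8, 12.9, 13.0]),
    "HA-CA+ADSCs": np.array([8.2, 7.5, 9.1, 8.8, 7.9]),
}

# Omnibus test across all four groups
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Dunn's pairwise comparisons with Bonferroni adjustment
p_matrix = sp.posthoc_dunn(list(groups.values()), p_adjust="bonferroni")
p_matrix.index = list(groups.keys())
p_matrix.columns = list(groups.keys())
print(p_matrix.round(4))
```

The post hoc step returns a matrix of pairwise adjusted p-values, which is the form in which between-group differences are usually reported alongside the omnibus ANOVA result.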
Background: Chronic inflammation and impaired neovascularization play critical roles in delayed wound healing in diabetic patients. To overcome the limitations of current diabetic wound (DBW) management interventions, we investigated the effects of a catechol-functionalized hyaluronic acid (HA-CA) patch combined with adipose-derived mesenchymal stem cells (ADSCs) in DBW mouse models. Methods: Diabetes in mice (C57BL/6, male) was induced by streptozotocin (50 mg/kg, >250 mg/dL). Mice were divided into four groups: control (DBW) group, ADSCs group, HA-CA group, and HA-CA + ADSCs group (n = 10 per group). Fluorescently labeled ADSCs (5 × 10^5 cells/100 µL) were transplanted into healthy tissues at the wound boundary or deposited at the HA-CA patch at the wound site. The wound area was visually examined. Collagen content, granulation tissue thickness and vascularity, cell apoptosis, and re-epithelialization were assessed. Angiogenesis was evaluated by immunohistochemistry, quantitative real-time polymerase chain reaction, and Western blot. Results: DBW size was significantly smaller in the HA-CA + ADSCs group (8% ± 2%) compared with the control (16% ± 5%, p < 0.01) and ADSCs (24% ± 17%, p < 0.05) groups. In mice treated with HA-CA + ADSCs, the epidermis was regenerated, and skin thickness was restored. CD31 and von Willebrand factor-positive vessels were detected in mice treated with HA-CA + ADSCs. The mRNA and protein levels of VEGF, IGF-1, FGF-2, ANG-1, PIK, and AKT in the HA-CA + ADSCs group were the highest among all groups, although the Spred1 and ERK expression levels remained unchanged. Conclusions: The combination of HA-CA and ADSCs provided synergistic wound healing effects by maximizing paracrine signaling and angiogenesis via the PI3K/AKT pathway. Therefore, ADSC-loaded HA-CA might represent a novel strategy for the treatment of DBW.
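The wound-size percentages quoted in the Results of this abstract (for example, 8% ± 2% of the initial area in the HA-CA + ADSCs group) come from the normalization described in Section 4.3: each ImageJ-traced area is divided by the same animal's initial wound area. A minimal sketch of that bookkeeping is shown below; the CSV layout, column names, and file name are illustrative assumptions rather than outputs of the study.

```python
# Sketch: express each traced wound area as a percentage of that animal's
# initial (earliest measured) area, then summarize per group and day.
# Assumes an ImageJ export with columns mouse_id, group, day, area_mm2
# (all names here are hypothetical).
import pandas as pd

df = pd.read_csv("wound_areas.csv")

# Initial area per animal = area on the earliest measured day
initial = (
    df.sort_values("day")
      .groupby("mouse_id")["area_mm2"]
      .first()
      .rename("initial_area_mm2")
)
df = df.join(initial, on="mouse_id")
df["pct_of_initial"] = 100.0 * df["area_mm2"] / df["initial_area_mm2"]

# Mean ± SD of the remaining wound area for each group at each time point
summary = df.groupby(["group", "day"])["pct_of_initial"].agg(["mean", "std"]).round(1)
print(summary)
```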
null
null
14,567
394
[ 288, 123, 371, 263, 214, 152, 335, 4722, 326, 351, 391, 203, 103, 176, 163, 168, 372, 68 ]
21
[ "ha", "group", "adscs", "ha adscs", "adscs group", "dbw", "usa", "wound", "ha adscs group", "groups" ]
[ "wound healing dbw", "impairing wound healing", "diabetic wound animal", "diabetes diabetic foot", "diabetic foot ulcers" ]
null
null
null
[CONTENT] diabetic wound | hyaluronic acid | biomaterial | adipose-derived stem cells | angiogenesis [SUMMARY]
null
[CONTENT] diabetic wound | hyaluronic acid | biomaterial | adipose-derived stem cells | angiogenesis [SUMMARY]
null
[CONTENT] diabetic wound | hyaluronic acid | biomaterial | adipose-derived stem cells | angiogenesis [SUMMARY]
null
[CONTENT] Adipose Tissue | Animals | Bandages | Diabetes Mellitus, Experimental | Diabetic Angiopathies | Female | Humans | Hyaluronic Acid | Male | Mice | Stem Cell Transplantation | Stem Cells | Wound Healing | Wounds and Injuries [SUMMARY]
null
[CONTENT] Adipose Tissue | Animals | Bandages | Diabetes Mellitus, Experimental | Diabetic Angiopathies | Female | Humans | Hyaluronic Acid | Male | Mice | Stem Cell Transplantation | Stem Cells | Wound Healing | Wounds and Injuries [SUMMARY]
null
[CONTENT] Adipose Tissue | Animals | Bandages | Diabetes Mellitus, Experimental | Diabetic Angiopathies | Female | Humans | Hyaluronic Acid | Male | Mice | Stem Cell Transplantation | Stem Cells | Wound Healing | Wounds and Injuries [SUMMARY]
null
[CONTENT] wound healing dbw | impairing wound healing | diabetic wound animal | diabetes diabetic foot | diabetic foot ulcers [SUMMARY]
null
[CONTENT] wound healing dbw | impairing wound healing | diabetic wound animal | diabetes diabetic foot | diabetic foot ulcers [SUMMARY]
null
[CONTENT] wound healing dbw | impairing wound healing | diabetic wound animal | diabetes diabetic foot | diabetic foot ulcers [SUMMARY]
null
[CONTENT] ha | group | adscs | ha adscs | adscs group | dbw | usa | wound | ha adscs group | groups [SUMMARY]
null
[CONTENT] ha | group | adscs | ha adscs | adscs group | dbw | usa | wound | ha adscs group | groups [SUMMARY]
null
[CONTENT] ha | group | adscs | ha adscs | adscs group | dbw | usa | wound | ha adscs group | groups [SUMMARY]
null
[CONTENT] ha | hydrogels | dbw | stem | diabetic | cell | wound | treatment | stem cells | peripheral [SUMMARY]
null
[CONTENT] group | adscs | ha | adscs group | ha adscs | ha adscs group | dbw group | groups | dbw | figure [SUMMARY]
null
[CONTENT] ha | group | adscs | ha adscs | adscs group | wound | usa | dbw | sections | ha adscs group [SUMMARY]
null
[CONTENT] ||| DBW | DBW [SUMMARY]
null
[CONTENT] DBW | the HA-CA + ADSCs | 8% | 2% | 16% | 5% | ADSCs | 24% | 17% ||| HA-CA + ADSCs | epidermis ||| HA-CA + ADSCs ||| VEGF | IGF-1 | FGF-2 | PIK | AKT | the HA-CA + ADSCs | Spred1 | ERK [SUMMARY]
null
[CONTENT] ||| DBW | DBW ||| 50 mg/kg | 250 mg ||| four | DBW | HA-CA | 10 ||| ADSCs | 5 ||| ||| Collagen ||| ||| DBW | the HA-CA + ADSCs | 8% | 2% | 16% | 5% | ADSCs | 24% | 17% ||| HA-CA + ADSCs | epidermis ||| HA-CA + ADSCs ||| VEGF | IGF-1 | FGF-2 | PIK | AKT | the HA-CA + ADSCs | Spred1 | ERK ||| HA-CA ||| HA-CA | DBW [SUMMARY]
null
Assessment of the effect on blood loss and transfusion requirements when adding a polyethylene glycol sealant to the anastomotic closure of aortic procedures: a case-control analysis of 102 patients undergoing Bentall procedures.
23043723
The use of CoSeal(®), a polyethylene glycol sealant, in cardiac and vascular surgery for prevention of anastomotic bleeding has been subject to prior investigations. We analysed our perioperative data to determine the clinical benefit of using polyethylene glycol sealant to inhibit suture line bleeding in aortic surgery.
BACKGROUND
From January 2004 to June 2006, 124 patients underwent aortic surgical procedures such as full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures. A Bentall procedure was employed in 102 of these patients. In 48 of these, a polyethylene glycol sealant was added to the anastomotic closure of the aortic procedure (sealant group) and the other 54 patients did not have this additive treatment to the suture line (control group).
METHODS
There were no significant between-group differences in the demographic characteristics of the patients undergoing Bentall procedures. Mean EuroSCORES (European System for Cardiac Operative Risk Evaluation) were 13.7 ± 7.7 (sealant group) and 14.4 ± 6.2 (control group), p = NS. The polyethylene glycol sealant group had reduced intraoperative and postoperative transfusion requirements (red blood cells: 761 ± 863 versus 1248 ± 1206 ml, p = 0.02; fresh frozen plasma: 413 ± 532 versus 779 ± 834 ml, p = 0.009); and less postoperative drainage loss (985 ± 972 versus 1709 ± 1302 ml, p = 0.002). A trend towards a lower rate of rethoracotomy was observed in the sealant group (1/48 versus 6/54, p = 0.07) and there was significantly less time spent in the intensive care unit or hospital (both p = 0.03). Based on hypothesis-generating calculations, the resulting economic benefit conferred by shorter intensive care unit and hospital stays, reduced transfusion requirements and a potentially lower rethoracotomy rate is estimated at €1,943 per patient in this data analysis.
RESULTS
The use of this polymeric surgical sealant demonstrated improved intraoperative and postoperative management of anastomotic bleeding in Bentall procedures, leading to reduced postoperative drainage loss, less transfusion requirements, and a trend towards a lower rate of rethoracotomy. Hypothesis-generating calculations indicate that the use of this sealant translates to cost savings. Further studies are warranted to investigate the clinical and economic benefits of CoSeal in a prospective manner.
CONCLUSIONS
[ "Aged", "Aged, 80 and over", "Anastomosis, Surgical", "Aorta", "Blood Loss, Surgical", "Blood Transfusion", "Cardiovascular Surgical Procedures", "Case-Control Studies", "Female", "Humans", "Male", "Middle Aged", "Polyethylene Glycols", "Sutures", "Thoracotomy" ]
3477041
Background
Aortic repair and reconstruction (including aortic arch surgery and combined procedures), despite being major cardiac surgical procedures, are now commonplace. Worldwide, tens of thousands of procedures are performed each year for the treatment of aortic or thoracic aneurysms, occlusions or dissections [1]. One of the main complications with these types of surgical procedures is the intraoperative and postoperative bleeding at the anastomotic suture line. The likelihood of bleeding can be influenced by a range of factors including comorbidities, surgical history, anticoagulation therapies, the type of surgical procedure employed and individual patient risk. This type of bleeding complication presents a major challenge to intraoperative and postoperative haemostasis, and failure to achieve adequate haemostasis can lead to a longer operative time, a greater need for blood transfusion products and a higher risk of postoperative complications and rethoracotomy. These factors and complications are obviously detrimental to the patient and incur significant additional costs. The level of additional costs is dependent upon the need for rethoracotomies, length of stay in the intensive care unit (ICU), overall duration of hospital stay, and the additional use of medical equipment and medical services used to diagnose and resolve the complication. Considering the large number of procedures performed each year, the maintenance of intraoperative and postoperative haemostasis is essential not only for improving patient outcomes, but also for the reduction of these societal and healthcare costs. Increasingly, surgical sealants such as polymeric polyethylene glycol (PEG) are being utilised in cardiac and vascular surgery to control anastomotic bleeding from high-pressure suture lines [2-4]. In several clinical and pre-clinical studies, PEG has been shown to provide rapid and strong sealing while maintaining flexibility and elasticity, and avoiding any disturbance of the wound closure [5,6]. This PEG is a fully synthetic sealant containing no human/animal proteins or glutaraldehyde and it does not induce any adverse tissue responses, exhibits minimal or no toxicity, and resorbs fully within four weeks [3,5]. The objective of this retrospective patient case–control analysis was to assess the effect on blood loss, transfusion requirements, and associated cost savings when adding a PEG sealant to the anastomotic closure of aortic procedures performed using a Bentall procedure.
Methods
Patients and study setting Between January 2004 and June 2006, 124 consecutive patients underwent aortic-related surgical procedures including full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures in the Cardiothoracic Surgery Department at the Klinikum Oldenburg, Oldenburg, Germany. Patients were operated on by any one of six cardiothoracic surgeons using standardised procedures, three of whom routinely added a PEG sealant to the anastomotic closure of aortic procedures with the intent to enhance sealing and reduce blood loss. Of the 124 patients, 102 underwent Bentall procedures and therefore comprised a comparable cohort without procedures that may facilitate bleeding and thus influence between-group comparisons (e.g. deep hypothermic circulatory arrest). These patients were retrospectively divided into two study groups depending on the haemostatic procedure used by the attending surgeon: The sealant group i.e. those that received PEG as a surgical sealant in addition to the suture line (standard procedure plus PEG sealant performed by three of the six surgeons). The control group i.e. those who did not receive any additional treatment to the suture line (standard procedure without the addition of a PEG sealant which was performed by the other three of the six surgeons). Between January 2004 and June 2006, 124 consecutive patients underwent aortic-related surgical procedures including full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures in the Cardiothoracic Surgery Department at the Klinikum Oldenburg, Oldenburg, Germany. Patients were operated on by any one of six cardiothoracic surgeons using standardised procedures, three of whom routinely added a PEG sealant to the anastomotic closure of aortic procedures with the intent to enhance sealing and reduce blood loss. Of the 124 patients, 102 underwent Bentall procedures and therefore comprised a comparable cohort without procedures that may facilitate bleeding and thus influence between-group comparisons (e.g. deep hypothermic circulatory arrest). These patients were retrospectively divided into two study groups depending on the haemostatic procedure used by the attending surgeon: The sealant group i.e. those that received PEG as a surgical sealant in addition to the suture line (standard procedure plus PEG sealant performed by three of the six surgeons). The control group i.e. those who did not receive any additional treatment to the suture line (standard procedure without the addition of a PEG sealant which was performed by the other three of the six surgeons).
Results
Between January 2004 and June 2006, a total of 124 consecutive patients underwent aortic-related surgical procedures at the Department of Cardiothoracic Surgery, Klinikum Oldenburg, Oldenburg, Germany. Of these, 102 patients underwent Bentall procedures and were included in the analysis: 48 (47.1%) received PEG as a surgical sealant in addition to the suture line (sealant group) and 54 (52.9%) did not receive this additive treatment to the suture line (control group). Demographic profiles of the two patient groups were similar and showed no significant differences between the sealant and control groups (Table 1). Demographic and perioperative characteristics of the study population PEG, polyethylene glycol; CABG, Coronary artery bypass graft; CPB, cardiopulmonary bypass; EuroSCORE, European System for Cardiac Operative Risk Evaluation; SD, standard deviation. P-values calculated using the following statistical tests: Student’s t-test, chi-square test, Wilcoxon-Mann-Whitney test, or Fisher’s exact test. All values are presented as mean ± SD or percentages of subgroup populations. Compared with the control group, patients in the sealant group required significantly fewer intraoperative and postoperative transfusions of RBC (mean 761 ± 863 versus 1248 ± 1206 ml, p = 0.02; Table 1) or FFP (413 ± 532 versus 779 ± 834 ml, p = 0.009). All other intraoperative characteristics were similar between groups (urgency, CPB time, aortic clamp time, total operative duration, Table 1). Postoperatively, drainage volumes were significantly reduced in the sealant group (985 ± 972 ml) versus the control group (1709 ± 1302 ml), p = 0.002 (Table 1), as was the duration of ICU stay and the total hospital stay (Table 1; both p = 0.03). In addition, a trend towards a reduced rethoracotomy rate was observed in the sealant group (1/48) versus the control group (6/54; p = 0.07). No adverse events related to the use of PEG sealant were reported during this study. Overall, for the 102 procedures performed in this analysis, per patient cost-savings when adding a PEG sealant to the suture line (sealant group) versus no additive treatment (control group) for the anastomotic closure during aortic surgery were estimated at €1,943 (Table 2). Estimated per-patient cost savings when using PEG sealant for anastomotic closure during aortic surgery RBC, red blood cells (1 unit = 250 ml); FFP, fresh frozen plasma (1 unit = 200 ml); PEG, polyethylene glycol; ICU, intensive care unit. *Operating room (OR)-associated costs only (i.e. anaesthesia, OR and staff time, but excluding extended intensive care or hospital stay). Cost savings based on average reduced procedural requirements when using PEG sealant using typical costs at the Klinikum Oldenburg in the period January 2004 to June 2006.
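The €1,943 per-patient figure reported above is hypothesis-generating arithmetic: mean between-group differences multiplied by the unit costs given in the Methods (RBC €200 per 250 ml unit, FFP €160 per 200 ml unit, ICU €400 per day, ward €150 per day, rethoracotomy €2,000, PEG sealant €237 per 2 ml application). The sketch below lays out that calculation; the ICU- and ward-day differences are placeholders because the per-group means appear only in Table 1, which is not reproduced in this text, so the total is not expected to match the published estimate exactly.

```python
# Sketch of the per-patient cost-saving arithmetic behind the ~EUR 1,943 estimate.
# Unit costs are those stated in the Methods; the ICU- and ward-day differences
# are PLACEHOLDERS (group means are reported only in Table 1).
UNIT_COSTS = {
    "rbc_per_unit": 200.0,         # EUR per 250 ml unit
    "ffp_per_unit": 160.0,         # EUR per 200 ml unit
    "icu_per_day": 400.0,
    "ward_per_day": 150.0,
    "rethoracotomy": 2000.0,       # operative costs only
    "sealant_application": 237.0,  # per 2 ml application
}

# Mean between-group differences (control minus sealant) taken from the text
rbc_diff_ml = 1248 - 761
ffp_diff_ml = 779 - 413
rethoracotomy_rate_diff = 6 / 54 - 1 / 48
icu_days_diff = 1.0    # placeholder
ward_days_diff = 2.0   # placeholder

savings = (
    rbc_diff_ml / 250 * UNIT_COSTS["rbc_per_unit"]
    + ffp_diff_ml / 200 * UNIT_COSTS["ffp_per_unit"]
    + rethoracotomy_rate_diff * UNIT_COSTS["rethoracotomy"]
    + icu_days_diff * UNIT_COSTS["icu_per_day"]
    + ward_days_diff * UNIT_COSTS["ward_per_day"]
    - UNIT_COSTS["sealant_application"]  # cost of the sealant itself
)
print(f"Estimated per-patient saving: EUR {savings:,.0f}")
```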
Conclusion
This retrospective analysis of aortic Bentall procedures in 102 patients over a 30-month period assessed the effect on blood loss and transfusion requirements of adding a PEG sealant to the anastomotic closure. The use of PEG sealant for suture-line reinforcement provided significant benefits on postoperative drainage and transfusion requirements versus sutures alone, with a trend towards a reduced rethoracotomy rate. These benefits may also translate into considerable cost savings. The clinically significant findings reported here warrant confirmation in prospective studies, which should also include the analysis of postoperative hypertension.
[ "Background", "Patients and study setting", "Operative procedure", "Outcomes", "Statistical analysis", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Aortic repair and reconstruction (including aortic arch surgery and combined procedures), despite being major cardiac surgical procedures, are now commonplace. Worldwide, tens of thousands of procedures are performed each year for the treatment of aortic or thoracic aneurysms, occlusions or dissections\n[1].\nOne of the main complications with these types of surgical procedures is the intraoperative and postoperative bleeding at the anastomotic suture line. The likelihood of bleeding can be influenced by a range of factors including comorbidities, surgical history, anticoagulation therapies, the type of surgical procedure employed and individual patient risk. This type of bleeding complication presents a major challenge to intraoperative and postoperative haemostasis, and failure to achieve adequate haemostasis can lead to a longer operative time, a greater need for blood transfusion products and a higher risk of postoperative complications and rethoracotomy. These factors and complications are obviously detrimental to the patient and incur significant additional costs. The level of additional costs is dependent upon the need for rethoracotomies, length of stay in the intensive care unit (ICU), overall duration of hospital stay, and the additional use of medical equipment and medical services used to diagnose and resolve the complication. Considering the large number of procedures performed each year, the maintenance of intraoperative and postoperative haemostasis is essential not only for improving patient outcomes, but also for the reduction of these societal and healthcare costs.\nIncreasingly, surgical sealants such as polymeric polyethylene glycol (PEG) are being utilised in cardiac and vascular surgery to control anastamotic bleeding from high-pressure suture lines\n[2-4]. In several clinical and pre-clinical studies, PEG has been shown to provide rapid and strong sealing while maintaining flexibility and elasticity, and avoiding any disturbance of the wound closure\n[5,6]. This PEG is a fully synthetic sealant containing no human/animal proteins or gluteraldehyde and it does not induce any adverse tissue responses, exhibits minimal or no toxicity, and resorbs fully within four weeks\n[3,5].\nThe objective of this retrospective patient case–control analysis was to assess the effect on blood loss, transfusion requirements, and associated cost savings when adding a PEG sealant to the anastomotic closure of aortic procedures performed using a Bentall procedure.", "Between January 2004 and June 2006, 124 consecutive patients underwent aortic-related surgical procedures including full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures in the Cardiothoracic Surgery Department at the Klinikum Oldenburg, Oldenburg, Germany. Patients were operated on by any one of six cardiothoracic surgeons using standardised procedures, three of whom routinely added a PEG sealant to the anastomotic closure of aortic procedures with the intent to enhance sealing and reduce blood loss. Of the 124 patients, 102 underwent Bentall procedures and therefore comprised a comparable cohort without procedures that may facilitate bleeding and thus influence between-group comparisons (e.g. deep hypothermic circulatory arrest). These patients were retrospectively divided into two study groups depending on the haemostatic procedure used by the attending surgeon:\nThe sealant group i.e. 
those that received PEG as a surgical sealant in addition to the suture line (standard procedure plus PEG sealant performed by three of the six surgeons). The control group i.e. those who did not receive any additional treatment to the suture line (standard procedure without the addition of a PEG sealant which was performed by the other three of the six surgeons).", "Routine preoperative and intraoperative care protocols were followed, and all patients were assessed prior to surgery to ensure that patients had a normal haemoglobin level and no hereditary coagulopathies. In elective surgeries, preoperative aspirin was discontinued one week prior to surgery. Patients only received blood substitution immediately prior to surgery in emergency circumstances. A drop in blood haemoglobin below the normal levels (men: 13.5–17.5 g/dL; women: 12.0–16.0 g/dL) was the trigger for intraoperative transfusion. Aortic repair and reconstruction was carried out using a Bentall procedure utilising a Medtronic Freestyle® stentless root prosthesis (Medtronic Inc. Minneapolis, MN). In summary, once the diseased aortic tissue and valve had been removed, proximal anastomosis was performed using 20–25 single interrupted 4/0 Ethibond® sutures in a single plane. Mobilised coronary buttons were then sewn end-to-side to the corresponding aortic sinus using a continuous 5/0 polypropylene suture. Finally, the cranial end of the bioprosthesis was sewn end-to-end to the aorta with a continuous 4/0 polypropylene suture, completing the root replacement. Due to the short root prosthesis of the Medtronic Freestyle® device used for this procedure, some patients required additional length, which was provided by inserting a vascular tube graft (Vascutek®, Renfrewshire, Scotland). Any further surgery was carried out according to standard procedures; deaeration was performed following clamp removal and protamine was administered upon cessation of bypass.", "The evaluation of the effect of added PEG sealant was based on the retrospective analysis of three key outcome measures:\n1) requirement for transfusion based on volumes (ml) of red blood cells (RBC) and fresh frozen plasma (FFP) within the first 48 hours following surgery;\n2) postoperative drainage volume (ml) within the first 48 hours following surgery; and\n3) rate of rethoracotomy.\nThe same key measures were used to perform a hypothesis-generating calculation of the costs associated with the Bentall procedures employed, either with or without PEG sealant. Indirect economic benefit was retrospectively estimated, with respect to reduced transfusion requirements and rethoracotomy rate, based on the following typical costs at the Klinikum Oldenburg from January 2004 to June 2006: RBC, €200 per unit; FFP, €160 per unit; PEG sealant, €237 per 2 ml application; ICU stay, €400 per day; hospital stay, €150 per day; and rethoracotomy, €2000 per procedure (operative costs only).", "Clinical assessment data were presented as mean ± standard deviation (SD) and population characteristics as a percentage of each subgroup (treatment or control). 
The following statistical tests were used to compare treatment groups: Student’s t-tests (gender, age, weight, European System for Cardiac Operative Risk Evaluation [EuroSCORE], cardiopulmonary bypass [CPB] time, aortic cross clamp time, total operative duration [or time], drainage volume, duration of ICU and total hospital stay), Wilcoxon-Mann-Whitney tests (urgency of operation), chi-square tests (surgical history and Bentall procedure), or a two-proportion z-test (postoperative rethoracotomy). P values of <0.05 were considered to be significant.", "ICU: Intensive Care Unit; PEG: Polyethylene Glycol; RBC: Red Blood Cells; FFP: Fresh Frozen Plasma; CBP: Cardiopulmonary Bypass; CABG: Coronary Artery Bypass Graft; SD: standard deviation; EuroSCORE: European System for Cardiac Operative Risk Evaluation.", "EN has received personal compensation from Baxter Healthcare Corporation, the manufacturer of CoSeal®, for serving as a consultant and speaker. Baxter Healthcare Corporation financed the preparation and submission of this manuscript for publication. MS and OED declare that they have no competing interests.", "EN and OED were responsible for the study concept, study design, performing surgical procedures, data analysis, and reviewing the manuscript. MS participated in the study concept and was responsible for data analysis, performing surgical procedures, and reviewing the manuscript. All authors read and approved the final manuscript." ]
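The Statistical analysis entry above lists the test battery used for the between-group comparisons: Student’s t-tests for continuous variables, a Wilcoxon-Mann-Whitney test for operation urgency, chi-square tests for categorical variables such as surgical history, and a two-proportion z-test for the rethoracotomy rate. The sketch below shows how that battery maps onto standard scipy and statsmodels calls; all input values are illustrative placeholders rather than study data.

```python
# Sketch of the test battery from the Statistical analysis subsection.
# All inputs are illustrative placeholders, not values from the study.
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)

# Continuous variable (e.g. drainage volume, ml): Student's t-test
drainage_sealant = rng.normal(985, 972, size=48).clip(min=0)
drainage_control = rng.normal(1709, 1302, size=54).clip(min=0)
t_stat, p_t = stats.ttest_ind(drainage_sealant, drainage_control)

# Ordinal variable (urgency of operation): Wilcoxon-Mann-Whitney
urgency_sealant = rng.integers(0, 3, size=48)   # 0 elective, 1 urgent, 2 emergent
urgency_control = rng.integers(0, 3, size=54)
u_stat, p_u = stats.mannwhitneyu(urgency_sealant, urgency_control)

# Categorical variable (e.g. surgical history): chi-square on a 2x2 table
table = np.array([[10, 38], [14, 40]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# Rethoracotomy rate (1/48 versus 6/54): two-proportion z-test
z_stat, p_z = proportions_ztest(count=[1, 6], nobs=[48, 54])

print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}, "
      f"chi-square p = {p_chi2:.3f}, two-proportion z p = {p_z:.3f}")
```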
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients and study setting", "Materials", "Operative procedure", "Outcomes", "Statistical analysis", "Results", "Discussion", "Conclusion", "Abbreviations", "Competing interests", "Authors’ contributions", "Supplementary Material" ]
[ "Aortic repair and reconstruction (including aortic arch surgery and combined procedures), despite being major cardiac surgical procedures, are now commonplace. Worldwide, tens of thousands of procedures are performed each year for the treatment of aortic or thoracic aneurysms, occlusions or dissections\n[1].\nOne of the main complications with these types of surgical procedures is the intraoperative and postoperative bleeding at the anastomotic suture line. The likelihood of bleeding can be influenced by a range of factors including comorbidities, surgical history, anticoagulation therapies, the type of surgical procedure employed and individual patient risk. This type of bleeding complication presents a major challenge to intraoperative and postoperative haemostasis, and failure to achieve adequate haemostasis can lead to a longer operative time, a greater need for blood transfusion products and a higher risk of postoperative complications and rethoracotomy. These factors and complications are obviously detrimental to the patient and incur significant additional costs. The level of additional costs is dependent upon the need for rethoracotomies, length of stay in the intensive care unit (ICU), overall duration of hospital stay, and the additional use of medical equipment and medical services used to diagnose and resolve the complication. Considering the large number of procedures performed each year, the maintenance of intraoperative and postoperative haemostasis is essential not only for improving patient outcomes, but also for the reduction of these societal and healthcare costs.\nIncreasingly, surgical sealants such as polymeric polyethylene glycol (PEG) are being utilised in cardiac and vascular surgery to control anastamotic bleeding from high-pressure suture lines\n[2-4]. In several clinical and pre-clinical studies, PEG has been shown to provide rapid and strong sealing while maintaining flexibility and elasticity, and avoiding any disturbance of the wound closure\n[5,6]. This PEG is a fully synthetic sealant containing no human/animal proteins or gluteraldehyde and it does not induce any adverse tissue responses, exhibits minimal or no toxicity, and resorbs fully within four weeks\n[3,5].\nThe objective of this retrospective patient case–control analysis was to assess the effect on blood loss, transfusion requirements, and associated cost savings when adding a PEG sealant to the anastomotic closure of aortic procedures performed using a Bentall procedure.", " Patients and study setting Between January 2004 and June 2006, 124 consecutive patients underwent aortic-related surgical procedures including full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures in the Cardiothoracic Surgery Department at the Klinikum Oldenburg, Oldenburg, Germany. Patients were operated on by any one of six cardiothoracic surgeons using standardised procedures, three of whom routinely added a PEG sealant to the anastomotic closure of aortic procedures with the intent to enhance sealing and reduce blood loss. Of the 124 patients, 102 underwent Bentall procedures and therefore comprised a comparable cohort without procedures that may facilitate bleeding and thus influence between-group comparisons (e.g. deep hypothermic circulatory arrest). These patients were retrospectively divided into two study groups depending on the haemostatic procedure used by the attending surgeon:\nThe sealant group i.e. 
those that received PEG as a surgical sealant in addition to the suture line (standard procedure plus PEG sealant performed by three of the six surgeons). The control group i.e. those who did not receive any additional treatment to the suture line (standard procedure without the addition of a PEG sealant which was performed by the other three of the six surgeons).\nBetween January 2004 and June 2006, 124 consecutive patients underwent aortic-related surgical procedures including full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures in the Cardiothoracic Surgery Department at the Klinikum Oldenburg, Oldenburg, Germany. Patients were operated on by any one of six cardiothoracic surgeons using standardised procedures, three of whom routinely added a PEG sealant to the anastomotic closure of aortic procedures with the intent to enhance sealing and reduce blood loss. Of the 124 patients, 102 underwent Bentall procedures and therefore comprised a comparable cohort without procedures that may facilitate bleeding and thus influence between-group comparisons (e.g. deep hypothermic circulatory arrest). These patients were retrospectively divided into two study groups depending on the haemostatic procedure used by the attending surgeon:\nThe sealant group i.e. those that received PEG as a surgical sealant in addition to the suture line (standard procedure plus PEG sealant performed by three of the six surgeons). The control group i.e. those who did not receive any additional treatment to the suture line (standard procedure without the addition of a PEG sealant which was performed by the other three of the six surgeons).", "Between January 2004 and June 2006, 124 consecutive patients underwent aortic-related surgical procedures including full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures in the Cardiothoracic Surgery Department at the Klinikum Oldenburg, Oldenburg, Germany. Patients were operated on by any one of six cardiothoracic surgeons using standardised procedures, three of whom routinely added a PEG sealant to the anastomotic closure of aortic procedures with the intent to enhance sealing and reduce blood loss. Of the 124 patients, 102 underwent Bentall procedures and therefore comprised a comparable cohort without procedures that may facilitate bleeding and thus influence between-group comparisons (e.g. deep hypothermic circulatory arrest). These patients were retrospectively divided into two study groups depending on the haemostatic procedure used by the attending surgeon:\nThe sealant group i.e. those that received PEG as a surgical sealant in addition to the suture line (standard procedure plus PEG sealant performed by three of the six surgeons). The control group i.e. those who did not receive any additional treatment to the suture line (standard procedure without the addition of a PEG sealant which was performed by the other three of the six surgeons).", "The PEG sealant used in this study was CoSeal® (Baxter Healthcare Corporation, Deerfield, IL, USA), which is composed of two synthetic PEGs, a dilute hydrogen chloride solution and a sodium phosphate/sodium carbonate solution. It is indicated for use in vascular reconstructions to achieve adjunctive haemostasis by mechanically sealing areas of leakage. 
At the time of administration, the mixed PEGs and solutions, which must be used within two hours of reconstitution, generate a biocompatible, strongly adherent hydrogel that forms a cohesive matrix on the tissue within 60 seconds, and fully resorbs over the subsequent weeks\n[7].\nFor patients in the sealant group, the PEG sealant was sprayed in a prophylactic way on clamped vascular prostheses in a thin homogeneous layer over the anastomotic sites; minimal volumes (2–4 ml) were applied to blotted or air-dried surfaces and allowed to stand for at least one minute before unclamping. Application was in accordance with the product instructions for use via a product-specific gas-driven spray device (CoSeal Easy Spray®, Air Enhanced Applicator, Baxter, Deerfield, IL, USA)\n[7]. Additional movie files\n1 and\n2 show the intraoperative application of PEG sealant in more detail (see additional file\n1 and\n2). CoSeal was the only product used for aortic suture-line sealing during the procedure (treatment group).\n Operative procedure Routine preoperative and intraoperative care protocols were followed, and all patients were assessed prior to surgery to ensure that patients had a normal haemoglobin level and no hereditary coagulopathies. In elective surgeries, preoperative aspirin was discontinued one week prior to surgery. Patients only received blood substitution immediately prior to surgery in emergency circumstances. A drop in blood haemoglobin below the normal levels (men: 13.5–17.5 g/dL; women: 12.0–16.0 g/dL) was the trigger for intraoperative transfusion. Aortic repair and reconstruction was carried out using a Bentall procedure utilising a Medtronic Freestyle® stentless root prosthesis (Medtronic Inc. Minneapolis, MN). In summary, once the diseased aortic tissue and valve had been removed, proximal anastomosis was performed using 20–25 single interrupted 4/0 Ethibond® sutures in a single plane. Mobilised coronary buttons were then sewn end-to-side to the corresponding aortic sinus using a continuous 5/0 polypropylene suture. Finally, the cranial end of the bioprosthesis was sewn end-to-end to the aorta with a continuous 4/0 polypropylene suture, completing the root replacement. Due to the short root prosthesis of the Medtronic Freestyle® device used for this procedure, some patients required additional length, which was provided by inserting a vascular tube graft (Vascutek®, Renfrewshire, Scotland). Any further surgery was carried out according to standard procedures; deaeration was performed following clamp removal and protamine was administered upon cessation of bypass.\nRoutine preoperative and intraoperative care protocols were followed, and all patients were assessed prior to surgery to ensure that patients had a normal haemoglobin level and no hereditary coagulopathies. In elective surgeries, preoperative aspirin was discontinued one week prior to surgery. Patients only received blood substitution immediately prior to surgery in emergency circumstances. A drop in blood haemoglobin below the normal levels (men: 13.5–17.5 g/dL; women: 12.0–16.0 g/dL) was the trigger for intraoperative transfusion. Aortic repair and reconstruction was carried out using a Bentall procedure utilising a Medtronic Freestyle® stentless root prosthesis (Medtronic Inc. Minneapolis, MN). In summary, once the diseased aortic tissue and valve had been removed, proximal anastomosis was performed using 20–25 single interrupted 4/0 Ethibond® sutures in a single plane. 
Mobilised coronary buttons were then sewn end-to-side to the corresponding aortic sinus using a continuous 5/0 polypropylene suture. Finally, the cranial end of the bioprosthesis was sewn end-to-end to the aorta with a continuous 4/0 polypropylene suture, completing the root replacement. Due to the short root prosthesis of the Medtronic Freestyle® device used for this procedure, some patients required additional length, which was provided by inserting a vascular tube graft (Vascutek®, Renfrewshire, Scotland). Any further surgery was carried out according to standard procedures; deaeration was performed following clamp removal and protamine was administered upon cessation of bypass.\n Outcomes The evaluation of the effect of added PEG sealant was based on the retrospective analysis of three key outcome measures:\n1) requirement for transfusion based on volumes (ml) of red blood cells (RBC) and fresh frozen plasma (FFP) within the first 48 hours following surgery;\n2) postoperative drainage volume (ml) within the first 48 hours following surgery; and\n3) rate of rethoracotomy.\nThe same key measures were used to perform a hypothesis-generating calculation of the costs associated with the Bentall procedures employed, either with or without PEG sealant. Indirect economic benefit was retrospectively estimated, with respect to reduced transfusion requirements and rethoracotomy rate, based on the following typical costs at the Klinikum Oldenburg from January 2004 to June 2006: RBC, €200 per unit; FFP, €160 per unit; PEG sealant, €237 per 2 ml application; ICU stay, €400 per day; hospital stay, €150 per day; and rethoracotomy, €2000 per procedure (operative costs only).\nThe evaluation of the effect of added PEG sealant was based on the retrospective analysis of three key outcome measures:\n1) requirement for transfusion based on volumes (ml) of red blood cells (RBC) and fresh frozen plasma (FFP) within the first 48 hours following surgery;\n2) postoperative drainage volume (ml) within the first 48 hours following surgery; and\n3) rate of rethoracotomy.\nThe same key measures were used to perform a hypothesis-generating calculation of the costs associated with the Bentall procedures employed, either with or without PEG sealant. Indirect economic benefit was retrospectively estimated, with respect to reduced transfusion requirements and rethoracotomy rate, based on the following typical costs at the Klinikum Oldenburg from January 2004 to June 2006: RBC, €200 per unit; FFP, €160 per unit; PEG sealant, €237 per 2 ml application; ICU stay, €400 per day; hospital stay, €150 per day; and rethoracotomy, €2000 per procedure (operative costs only).\n Statistical analysis Clinical assessment data were presented as mean ± standard deviation (SD) and population characteristics as a percentage of each subgroup (treatment or control). The following statistical tests were used to compare treatment groups: Student’s t-tests (gender, age, weight, European System for Cardiac Operative Risk Evaluation [EuroSCORE], cardiopulmonary bypass [CPB] time, aortic cross clamp time, total operative duration [or time], drainage volume, duration of ICU and total hospital stay), Wilcoxon-Mann-Whitney tests (urgency of operation), chi-square tests (surgical history and Bentall procedure), or a two-proportion z-test (postoperative rethoracotomy). 
P values of <0.05 were considered to be significant.\nClinical assessment data were presented as mean ± standard deviation (SD) and population characteristics as a percentage of each subgroup (treatment or control). The following statistical tests were used to compare treatment groups: Student’s t-tests (gender, age, weight, European System for Cardiac Operative Risk Evaluation [EuroSCORE], cardiopulmonary bypass [CPB] time, aortic cross clamp time, total operative duration [or time], drainage volume, duration of ICU and total hospital stay), Wilcoxon-Mann-Whitney tests (urgency of operation), chi-square tests (surgical history and Bentall procedure), or a two-proportion z-test (postoperative rethoracotomy). P values of <0.05 were considered to be significant.", "Routine preoperative and intraoperative care protocols were followed, and all patients were assessed prior to surgery to ensure that patients had a normal haemoglobin level and no hereditary coagulopathies. In elective surgeries, preoperative aspirin was discontinued one week prior to surgery. Patients only received blood substitution immediately prior to surgery in emergency circumstances. A drop in blood haemoglobin below the normal levels (men: 13.5–17.5 g/dL; women: 12.0–16.0 g/dL) was the trigger for intraoperative transfusion. Aortic repair and reconstruction was carried out using a Bentall procedure utilising a Medtronic Freestyle® stentless root prosthesis (Medtronic Inc. Minneapolis, MN). In summary, once the diseased aortic tissue and valve had been removed, proximal anastomosis was performed using 20–25 single interrupted 4/0 Ethibond® sutures in a single plane. Mobilised coronary buttons were then sewn end-to-side to the corresponding aortic sinus using a continuous 5/0 polypropylene suture. Finally, the cranial end of the bioprosthesis was sewn end-to-end to the aorta with a continuous 4/0 polypropylene suture, completing the root replacement. Due to the short root prosthesis of the Medtronic Freestyle® device used for this procedure, some patients required additional length, which was provided by inserting a vascular tube graft (Vascutek®, Renfrewshire, Scotland). Any further surgery was carried out according to standard procedures; deaeration was performed following clamp removal and protamine was administered upon cessation of bypass.", "The evaluation of the effect of added PEG sealant was based on the retrospective analysis of three key outcome measures:\n1) requirement for transfusion based on volumes (ml) of red blood cells (RBC) and fresh frozen plasma (FFP) within the first 48 hours following surgery;\n2) postoperative drainage volume (ml) within the first 48 hours following surgery; and\n3) rate of rethoracotomy.\nThe same key measures were used to perform a hypothesis-generating calculation of the costs associated with the Bentall procedures employed, either with or without PEG sealant. Indirect economic benefit was retrospectively estimated, with respect to reduced transfusion requirements and rethoracotomy rate, based on the following typical costs at the Klinikum Oldenburg from January 2004 to June 2006: RBC, €200 per unit; FFP, €160 per unit; PEG sealant, €237 per 2 ml application; ICU stay, €400 per day; hospital stay, €150 per day; and rethoracotomy, €2000 per procedure (operative costs only).", "Clinical assessment data were presented as mean ± standard deviation (SD) and population characteristics as a percentage of each subgroup (treatment or control). 
The following statistical tests were used to compare treatment groups: Student’s t-tests (gender, age, weight, European System for Cardiac Operative Risk Evaluation [EuroSCORE], cardiopulmonary bypass [CPB] time, aortic cross clamp time, total operative duration [or time], drainage volume, duration of ICU and total hospital stay), Wilcoxon-Mann-Whitney tests (urgency of operation), chi-square tests (surgical history and Bentall procedure), or a two-proportion z-test (postoperative rethoracotomy). P values of <0.05 were considered to be significant.", "Between January 2004 and June 2006, a total of 124 consecutive patients underwent aortic-related surgical procedures at the Department of Cardiothoracic Surgery, Klinikum Oldenburg, Oldenburg, Germany. Of these, 102 patients underwent Bentall procedures and were included in the analysis: 48 (47.1%) received PEG as a surgical sealant in addition to the suture line (sealant group) and 54 (52.9%) did not receive this additive treatment to the suture line (control group). Demographic profiles of the two patient groups were similar and showed no significant differences between the sealant and control groups (Table\n1).\n\nDemographic and perioperative characteristics\nof the study population\n\nPEG, polyethylene glycol; CABG, Coronary artery bypass graft; CPB, cardiopulmonary bypass; EuroSCORE, European System for Cardiac Operative Risk Evaluation; SD, standard deviation.\nP-values calculated using the following statistical tests: Student’s t-test, chi-square test, Wilcoxon-Mann-Whitney test, or Fisher’s exact test. All values are presented as mean ± SD or percentages of subgroup populations.\nCompared with the control group, patients in the sealant group required significantly fewer intraoperative and postoperative transfusions of RBC (mean 761 ± 863 versus 1248 ± 1206 ml, p = 0.02; Table\n1) or FFP (413 ± 532 versus 779 ± 834 ml, p = 0.009). All other intraoperative characteristics were similar between groups (urgency, CPB time, aortic clamp time, total operative duration, Table\n1).\nPostoperatively, drainage volumes were significantly reduced in the sealant group (985 ± 972 ml) versus the control group (1709 ± 1302 ml), p = 0.002 (Table\n1), as was the duration of ICU stay and the total hospital stay (Table\n1; both p = 0.03). In addition, a trend towards a reduced rethoracotomy rate was observed in the sealant group (1/48) versus the control group (6/54; p = 0.07). No adverse events related to the use of PEG sealant were reported during this study.\nOverall, for the 102 procedures performed in this analysis, per patient cost-savings when adding a PEG sealant to the suture line (sealant group) versus no additive treatment (control group) for the anastomotic closure during aortic surgery were estimated at €1,943 (Table\n2).\n\nEstimated per-patient cost savings\nwhen using PEG sealant\nfor anastomotic closure during\naortic surgery\n\nRBC, red blood cells (1 unit = 250 ml); FFP, fresh frozen plasma (1 unit = 200 ml); PEG, polyethylene glycol; ICU, intensive care unit.\n*Operating room (OR)-associated costs only (i.e. 
anaesthesia, OR and staff time, but excluding extended intensive care or hospital stay).\nCost savings based on average reduced procedural requirements when using PEG sealant using typical costs at the Klinikum Oldenburg in the period January 2004 to June 2006.", "The results of this single centre, retrospective case series demonstrate that adding a PEG sealant to the anastomotic closure of aortic Bentall procedures provides a beneficial effect on blood loss and transfusion requirements. Compared with sutures alone, PEG sealant significantly reduced postoperative drainage loss and transfusion requirements, with no additional adverse events. In addition, a trend towards a reduced rethoracotomy rate versus sutures alone was also observed. These benefits may have translated into substantial cost savings for aortic Bentall procedures over the 30-month study period (January 2004 to June 2006).\nSeveral studies have recommended the use of surgical sealants for treating anastomotic suture lines in patients undergoing aortic reconstructions\n[2-5,8], and the positive findings reported here are supported by other retrospective analyses of sealant use in aortic reconstruction\n[9]. However, the case series reported here provides a more accurate overview of routine clinical practice than a controlled clinical study environment, and therefore the results are highly relevant to practising cardiac surgeons.\nIn addition to the benefits on clinical outcomes reported here, there are also important practical aspects that should be considered. Unlike haemostatic gelatins (with or without the use of thrombin), or haemostatic agents that induce platelet aggregation, the PEG sealant used in this study works independent of the clotting cascade, allowing its use in patients with severely inhibited coagulation. In addition, PEG sealant can and should be applied pre-emptively of any bleeding and requires the application of smaller volumes than other sealants, particularly if the spray applicator is utilised. This is advantageous as the use of smaller volumes of sealant can help to reduce the risk of stenosis, which is particularly important under the ostium of the left coronary artery. While the PEG sealant used provides a seal of great strength via covalent tissue bonds, it still retains flexibility. This allows normal physiological dilation without stiffening, thus avoiding any additional wall stress and weakening of the surrounding tissue\n[10]. The PEG sealant also begins to set in five seconds and is fully polymerised as a hydrogel in one minute\n[5], providing rapid sealing of the prosthesis. This prevents blood loss at unclamping, reducing the risk of further complications related to the extracorporeal circulation.\nCost calculations reported here are hypothesis generating, do not represent a detailed or formal analysis of cost effectiveness and warrant further confirmation in dedicated economic studies. Nevertheless, based on the estimated €1,943 per patient cost saving associated with use of PEG sealant in this analysis, it seems plausible that substantial reductions observed for some of the surgical requirements may have directly translated into procedural cost savings. The cost savings associated with the lower rate of transfusions conferred by the use of PEG sealant reported here add further to the data supporting the economic benefit of this intervention\n[11,12]. 
In particular, the suggested reduction in the rethoracotomy rate, which in itself influences the amount of RBC and FFP required as a result of the complication, results in cost savings from the use of PEG sealant far outweighing the cost of the treatment itself. In addition, the significant reductions in ICU stay and hospital stay associated with the use of PEG sealant would be expected to further reduce healthcare costs. We therefore consider the adjunctive use of the PEG sealant in this analysis of considerable economic benefit.\nAs with all clinical studies, there are limitations that should be considered. This was a retrospective analysis of non-blinded data, recorded at a single study centre. As such it may be open to potential bias. Three of the participating surgeons routinely added a PEG sealant to the anastomotic closure of aortic procedures, whereas the other three did not. Therefore, the results may have been influenced by particular surgeons’ experience and individual techniques. Furthermore, the fact that the data to be recorded for the efficacy and safety outcomes were not pre-defined, but taken out of the existing routine documentation, may also impact the suitability of the measured parameters. With regard to concomitant aspirin use, while it is likely that urgent or emergent cases did not have their aspirin discontinued before surgery and postoperative drainage volumes were increased as a result, the percentages of urgent/emergent cases were similar in the two groups (6/48 [12.5%] versus 8/54 [14.8%], respectively). Postoperative hypertension was not recorded during this study, so comparisons to establish the effectiveness of the PEG sealant in patients with postoperative hypertension were not possible. While the PEG sealant has been demonstrated to withstand supra-physiological pressures of up to 660 ± 150 mmHg\n[6], this is still an important clinical consideration that warrants further investigation in future studies.", "This retrospective analysis of aortic Bentall procedures in 102 patients over a 30-month period assessed the effect on blood loss and transfusion requirements of adding a PEG sealant to the anastomotic closure. The use of PEG sealant for suture-line reinforcement provided significant benefits on postoperative drainage and transfusion requirements versus sutures alone, with a trend towards a reduced rethoracotomy rate. These benefits may also translate into considerable cost savings. The clinically significant findings reported here warrant confirmation in prospective studies, which should also include the analysis of postoperative hypertension.", "ICU: Intensive Care Unit; PEG: Polyethylene Glycol; RBC: Red Blood Cells; FFP: Fresh Frozen Plasma; CBP: Cardiopulmonary Bypass; CABG: Coronary Artery Bypass Graft; SD: standard deviation; EuroSCORE: European System for Cardiac Operative Risk Evaluation.", "EN has received personal compensation from Baxter Healthcare Corporation, the manufacturer of CoSeal®, for serving as a consultant and speaker. Baxter Healthcare Corporation financed the preparation and submission of this manuscript for publication. MS and OED declare that they have no competing interests.", "EN and OED were responsible for the study concept, study design, performing surgical procedures, data analysis, and reviewing the manuscript. MS participated in the study concept and was responsible for data analysis, performing surgical procedures, and reviewing the manuscript. All authors read and approved the final manuscript.", "Video 1. 
Intraoperative application of PEG sealant during an aortic surgical procedure using a spray applicator.\nClick here for file\nVideo 2. Intraoperative application of PEG sealant during an aortic surgical procedure using a tip applicator.\nClick here for file" ]
[ null, "methods", null, "materials", null, null, null, "results", "discussion", "conclusions", null, null, null, "supplementary-material" ]
[ "Aortic repair", "Blood loss", "Transfusion", "Economics", "Surgical sealant" ]
Background: Aortic repair and reconstruction (including aortic arch surgery and combined procedures), despite being major cardiac surgical procedures, are now commonplace. Worldwide, tens of thousands of procedures are performed each year for the treatment of aortic or thoracic aneurysms, occlusions or dissections [1]. One of the main complications with these types of surgical procedures is the intraoperative and postoperative bleeding at the anastomotic suture line. The likelihood of bleeding can be influenced by a range of factors including comorbidities, surgical history, anticoagulation therapies, the type of surgical procedure employed and individual patient risk. This type of bleeding complication presents a major challenge to intraoperative and postoperative haemostasis, and failure to achieve adequate haemostasis can lead to a longer operative time, a greater need for blood transfusion products and a higher risk of postoperative complications and rethoracotomy. These factors and complications are obviously detrimental to the patient and incur significant additional costs. The level of additional costs is dependent upon the need for rethoracotomies, length of stay in the intensive care unit (ICU), overall duration of hospital stay, and the additional use of medical equipment and medical services used to diagnose and resolve the complication. Considering the large number of procedures performed each year, the maintenance of intraoperative and postoperative haemostasis is essential not only for improving patient outcomes, but also for the reduction of these societal and healthcare costs. Increasingly, surgical sealants such as polymeric polyethylene glycol (PEG) are being utilised in cardiac and vascular surgery to control anastamotic bleeding from high-pressure suture lines [2-4]. In several clinical and pre-clinical studies, PEG has been shown to provide rapid and strong sealing while maintaining flexibility and elasticity, and avoiding any disturbance of the wound closure [5,6]. This PEG is a fully synthetic sealant containing no human/animal proteins or gluteraldehyde and it does not induce any adverse tissue responses, exhibits minimal or no toxicity, and resorbs fully within four weeks [3,5]. The objective of this retrospective patient case–control analysis was to assess the effect on blood loss, transfusion requirements, and associated cost savings when adding a PEG sealant to the anastomotic closure of aortic procedures performed using a Bentall procedure. Methods: Patients and study setting Between January 2004 and June 2006, 124 consecutive patients underwent aortic-related surgical procedures including full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures in the Cardiothoracic Surgery Department at the Klinikum Oldenburg, Oldenburg, Germany. Patients were operated on by any one of six cardiothoracic surgeons using standardised procedures, three of whom routinely added a PEG sealant to the anastomotic closure of aortic procedures with the intent to enhance sealing and reduce blood loss. Of the 124 patients, 102 underwent Bentall procedures and therefore comprised a comparable cohort without procedures that may facilitate bleeding and thus influence between-group comparisons (e.g. deep hypothermic circulatory arrest). These patients were retrospectively divided into two study groups depending on the haemostatic procedure used by the attending surgeon: The sealant group i.e. 
those that received PEG as a surgical sealant in addition to the suture line (standard procedure plus PEG sealant performed by three of the six surgeons). The control group i.e. those who did not receive any additional treatment to the suture line (standard procedure without the addition of a PEG sealant which was performed by the other three of the six surgeons). Materials: The PEG sealant used in this study was CoSeal® (Baxter Healthcare Corporation, Deerfield, IL, USA), which is composed of two synthetic PEGs, a dilute hydrogen chloride solution and a sodium phosphate/sodium carbonate solution. It is indicated for use in vascular reconstructions to achieve adjunctive haemostasis by mechanically sealing areas of leakage.
At the time of administration, the mixed PEGs and solutions, which must be used within two hours of reconstitution, generate a biocompatible, strongly adherent hydrogel that forms a cohesive matrix on the tissue within 60 seconds, and fully resorbs over the subsequent weeks [7]. For patients in the sealant group, the PEG sealant was sprayed in a prophylactic way on clamped vascular prostheses in a thin homogeneous layer over the anastomotic sites; minimal volumes (2–4 ml) were applied to blotted or air-dried surfaces and allowed to stand for at least one minute before unclamping. Application was in accordance with the product instructions for use via a product-specific gas-driven spray device (CoSeal Easy Spray®, Air Enhanced Applicator, Baxter, Deerfield, IL, USA) [7]. Additional movie files 1 and 2 show the intraoperative application of PEG sealant in more detail (see additional file 1 and 2). CoSeal was the only product used for aortic suture-line sealing during the procedure (treatment group). Operative procedure Routine preoperative and intraoperative care protocols were followed, and all patients were assessed prior to surgery to ensure that patients had a normal haemoglobin level and no hereditary coagulopathies. In elective surgeries, preoperative aspirin was discontinued one week prior to surgery. Patients only received blood substitution immediately prior to surgery in emergency circumstances. A drop in blood haemoglobin below the normal levels (men: 13.5–17.5 g/dL; women: 12.0–16.0 g/dL) was the trigger for intraoperative transfusion. Aortic repair and reconstruction was carried out using a Bentall procedure utilising a Medtronic Freestyle® stentless root prosthesis (Medtronic Inc. Minneapolis, MN). In summary, once the diseased aortic tissue and valve had been removed, proximal anastomosis was performed using 20–25 single interrupted 4/0 Ethibond® sutures in a single plane. Mobilised coronary buttons were then sewn end-to-side to the corresponding aortic sinus using a continuous 5/0 polypropylene suture. Finally, the cranial end of the bioprosthesis was sewn end-to-end to the aorta with a continuous 4/0 polypropylene suture, completing the root replacement. Due to the short root prosthesis of the Medtronic Freestyle® device used for this procedure, some patients required additional length, which was provided by inserting a vascular tube graft (Vascutek®, Renfrewshire, Scotland). Any further surgery was carried out according to standard procedures; deaeration was performed following clamp removal and protamine was administered upon cessation of bypass. Routine preoperative and intraoperative care protocols were followed, and all patients were assessed prior to surgery to ensure that patients had a normal haemoglobin level and no hereditary coagulopathies. In elective surgeries, preoperative aspirin was discontinued one week prior to surgery. Patients only received blood substitution immediately prior to surgery in emergency circumstances. A drop in blood haemoglobin below the normal levels (men: 13.5–17.5 g/dL; women: 12.0–16.0 g/dL) was the trigger for intraoperative transfusion. Aortic repair and reconstruction was carried out using a Bentall procedure utilising a Medtronic Freestyle® stentless root prosthesis (Medtronic Inc. Minneapolis, MN). In summary, once the diseased aortic tissue and valve had been removed, proximal anastomosis was performed using 20–25 single interrupted 4/0 Ethibond® sutures in a single plane. 
Mobilised coronary buttons were then sewn end-to-side to the corresponding aortic sinus using a continuous 5/0 polypropylene suture. Finally, the cranial end of the bioprosthesis was sewn end-to-end to the aorta with a continuous 4/0 polypropylene suture, completing the root replacement. Due to the short root prosthesis of the Medtronic Freestyle® device used for this procedure, some patients required additional length, which was provided by inserting a vascular tube graft (Vascutek®, Renfrewshire, Scotland). Any further surgery was carried out according to standard procedures; deaeration was performed following clamp removal and protamine was administered upon cessation of bypass. Outcomes The evaluation of the effect of added PEG sealant was based on the retrospective analysis of three key outcome measures: 1) requirement for transfusion based on volumes (ml) of red blood cells (RBC) and fresh frozen plasma (FFP) within the first 48 hours following surgery; 2) postoperative drainage volume (ml) within the first 48 hours following surgery; and 3) rate of rethoracotomy. The same key measures were used to perform a hypothesis-generating calculation of the costs associated with the Bentall procedures employed, either with or without PEG sealant. Indirect economic benefit was retrospectively estimated, with respect to reduced transfusion requirements and rethoracotomy rate, based on the following typical costs at the Klinikum Oldenburg from January 2004 to June 2006: RBC, €200 per unit; FFP, €160 per unit; PEG sealant, €237 per 2 ml application; ICU stay, €400 per day; hospital stay, €150 per day; and rethoracotomy, €2000 per procedure (operative costs only). The evaluation of the effect of added PEG sealant was based on the retrospective analysis of three key outcome measures: 1) requirement for transfusion based on volumes (ml) of red blood cells (RBC) and fresh frozen plasma (FFP) within the first 48 hours following surgery; 2) postoperative drainage volume (ml) within the first 48 hours following surgery; and 3) rate of rethoracotomy. The same key measures were used to perform a hypothesis-generating calculation of the costs associated with the Bentall procedures employed, either with or without PEG sealant. Indirect economic benefit was retrospectively estimated, with respect to reduced transfusion requirements and rethoracotomy rate, based on the following typical costs at the Klinikum Oldenburg from January 2004 to June 2006: RBC, €200 per unit; FFP, €160 per unit; PEG sealant, €237 per 2 ml application; ICU stay, €400 per day; hospital stay, €150 per day; and rethoracotomy, €2000 per procedure (operative costs only). Statistical analysis Clinical assessment data were presented as mean ± standard deviation (SD) and population characteristics as a percentage of each subgroup (treatment or control). The following statistical tests were used to compare treatment groups: Student’s t-tests (gender, age, weight, European System for Cardiac Operative Risk Evaluation [EuroSCORE], cardiopulmonary bypass [CPB] time, aortic cross clamp time, total operative duration [or time], drainage volume, duration of ICU and total hospital stay), Wilcoxon-Mann-Whitney tests (urgency of operation), chi-square tests (surgical history and Bentall procedure), or a two-proportion z-test (postoperative rethoracotomy). P values of <0.05 were considered to be significant. 
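The statistical comparisons described in the paragraph above map onto standard library calls. The paper does not state which statistical package was used, so the following is only a minimal illustrative sketch in Python (SciPy/statsmodels); the group arrays and the 2x2 surgical-history table are placeholders, not study data, while the rethoracotomy counts (1/48 versus 6/54) are taken from the reported results.

```python
# Hypothetical sketch of the group comparisons described above (SciPy/statsmodels).
# The arrays and the 2x2 table below are placeholders, NOT the study data.
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

sealant_drainage = np.array([900.0, 1100.0, 750.0])    # ml, placeholder values
control_drainage = np.array([1600.0, 1800.0, 1700.0])  # ml, placeholder values

# Continuous outcomes (e.g. drainage volume): Student's t-test
t_stat, p_drainage = stats.ttest_ind(sealant_drainage, control_drainage)

# Ordinal outcome (urgency of operation): Wilcoxon-Mann-Whitney test
u_stat, p_urgency = stats.mannwhitneyu([1, 1, 2], [1, 2, 2])

# Categorical outcome (surgical history): chi-square test on a 2x2 table
chi2, p_history, dof, expected = stats.chi2_contingency([[10, 38], [14, 40]])

# Rethoracotomy rate: two-proportion z-test (1/48 versus 6/54 as reported)
z_stat, p_rethoracotomy = proportions_ztest(count=[1, 6], nobs=[48, 54])

print(p_drainage, p_urgency, p_history, p_rethoracotomy)
```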
Clinical assessment data were presented as mean ± standard deviation (SD) and population characteristics as a percentage of each subgroup (treatment or control). The following statistical tests were used to compare treatment groups: Student’s t-tests (gender, age, weight, European System for Cardiac Operative Risk Evaluation [EuroSCORE], cardiopulmonary bypass [CPB] time, aortic cross clamp time, total operative duration [or time], drainage volume, duration of ICU and total hospital stay), Wilcoxon-Mann-Whitney tests (urgency of operation), chi-square tests (surgical history and Bentall procedure), or a two-proportion z-test (postoperative rethoracotomy). P values of <0.05 were considered to be significant. Operative procedure: Routine preoperative and intraoperative care protocols were followed, and all patients were assessed prior to surgery to ensure that patients had a normal haemoglobin level and no hereditary coagulopathies. In elective surgeries, preoperative aspirin was discontinued one week prior to surgery. Patients only received blood substitution immediately prior to surgery in emergency circumstances. A drop in blood haemoglobin below the normal levels (men: 13.5–17.5 g/dL; women: 12.0–16.0 g/dL) was the trigger for intraoperative transfusion. Aortic repair and reconstruction was carried out using a Bentall procedure utilising a Medtronic Freestyle® stentless root prosthesis (Medtronic Inc. Minneapolis, MN). In summary, once the diseased aortic tissue and valve had been removed, proximal anastomosis was performed using 20–25 single interrupted 4/0 Ethibond® sutures in a single plane. Mobilised coronary buttons were then sewn end-to-side to the corresponding aortic sinus using a continuous 5/0 polypropylene suture. Finally, the cranial end of the bioprosthesis was sewn end-to-end to the aorta with a continuous 4/0 polypropylene suture, completing the root replacement. Due to the short root prosthesis of the Medtronic Freestyle® device used for this procedure, some patients required additional length, which was provided by inserting a vascular tube graft (Vascutek®, Renfrewshire, Scotland). Any further surgery was carried out according to standard procedures; deaeration was performed following clamp removal and protamine was administered upon cessation of bypass. Outcomes: The evaluation of the effect of added PEG sealant was based on the retrospective analysis of three key outcome measures: 1) requirement for transfusion based on volumes (ml) of red blood cells (RBC) and fresh frozen plasma (FFP) within the first 48 hours following surgery; 2) postoperative drainage volume (ml) within the first 48 hours following surgery; and 3) rate of rethoracotomy. The same key measures were used to perform a hypothesis-generating calculation of the costs associated with the Bentall procedures employed, either with or without PEG sealant. Indirect economic benefit was retrospectively estimated, with respect to reduced transfusion requirements and rethoracotomy rate, based on the following typical costs at the Klinikum Oldenburg from January 2004 to June 2006: RBC, €200 per unit; FFP, €160 per unit; PEG sealant, €237 per 2 ml application; ICU stay, €400 per day; hospital stay, €150 per day; and rethoracotomy, €2000 per procedure (operative costs only). Statistical analysis: Clinical assessment data were presented as mean ± standard deviation (SD) and population characteristics as a percentage of each subgroup (treatment or control). 
The following statistical tests were used to compare treatment groups: Student’s t-tests (gender, age, weight, European System for Cardiac Operative Risk Evaluation [EuroSCORE], cardiopulmonary bypass [CPB] time, aortic cross clamp time, total operative duration [or time], drainage volume, duration of ICU and total hospital stay), Wilcoxon-Mann-Whitney tests (urgency of operation), chi-square tests (surgical history and Bentall procedure), or a two-proportion z-test (postoperative rethoracotomy). P values of <0.05 were considered to be significant. Results: Between January 2004 and June 2006, a total of 124 consecutive patients underwent aortic-related surgical procedures at the Department of Cardiothoracic Surgery, Klinikum Oldenburg, Oldenburg, Germany. Of these, 102 patients underwent Bentall procedures and were included in the analysis: 48 (47.1%) received PEG as a surgical sealant in addition to the suture line (sealant group) and 54 (52.9%) did not receive this additive treatment to the suture line (control group). Demographic profiles of the two patient groups were similar and showed no significant differences between the sealant and control groups (Table 1). Demographic and perioperative characteristics of the study population PEG, polyethylene glycol; CABG, Coronary artery bypass graft; CPB, cardiopulmonary bypass; EuroSCORE, European System for Cardiac Operative Risk Evaluation; SD, standard deviation. P-values calculated using the following statistical tests: Student’s t-test, chi-square test, Wilcoxon-Mann-Whitney test, or Fisher’s exact test. All values are presented as mean ± SD or percentages of subgroup populations. Compared with the control group, patients in the sealant group required significantly fewer intraoperative and postoperative transfusions of RBC (mean 761 ± 863 versus 1248 ± 1206 ml, p = 0.02; Table 1) or FFP (413 ± 532 versus 779 ± 834 ml, p = 0.009). All other intraoperative characteristics were similar between groups (urgency, CPB time, aortic clamp time, total operative duration, Table 1). Postoperatively, drainage volumes were significantly reduced in the sealant group (985 ± 972 ml) versus the control group (1709 ± 1302 ml), p = 0.002 (Table 1), as was the duration of ICU stay and the total hospital stay (Table 1; both p = 0.03). In addition, a trend towards a reduced rethoracotomy rate was observed in the sealant group (1/48) versus the control group (6/54; p = 0.07). No adverse events related to the use of PEG sealant were reported during this study. Overall, for the 102 procedures performed in this analysis, per patient cost-savings when adding a PEG sealant to the suture line (sealant group) versus no additive treatment (control group) for the anastomotic closure during aortic surgery were estimated at €1,943 (Table 2). Estimated per-patient cost savings when using PEG sealant for anastomotic closure during aortic surgery RBC, red blood cells (1 unit = 250 ml); FFP, fresh frozen plasma (1 unit = 200 ml); PEG, polyethylene glycol; ICU, intensive care unit. *Operating room (OR)-associated costs only (i.e. anaesthesia, OR and staff time, but excluding extended intensive care or hospital stay). Cost savings based on average reduced procedural requirements when using PEG sealant using typical costs at the Klinikum Oldenburg in the period January 2004 to June 2006. 
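The per-patient saving follows from simple arithmetic over the unit costs listed in the Methods (RBC €200 per unit, FFP €160 per unit, PEG sealant €237 per 2 ml, ICU €400 per day, ward €150 per day, rethoracotomy €2,000). The sketch below is illustrative only: the transfusion differences and rethoracotomy rates are taken from the reported means, but the ICU/ward day savings and sealant volume are placeholder parameters, because the exact line items behind the €1,943 figure sit in Table 2, which is not reproduced in the text.

```python
# Illustrative per-patient cost arithmetic using the unit costs stated in the Methods.
# ICU/ward-day savings and sealant volume are placeholders; the exact per-group
# values behind the reported EUR 1,943 saving are in Table 2 (not shown here).

UNIT_COSTS = {
    "rbc_per_unit": 200.0,    # EUR, 1 unit = 250 ml
    "ffp_per_unit": 160.0,    # EUR, 1 unit = 200 ml
    "peg_per_2ml": 237.0,     # EUR per 2 ml application
    "icu_per_day": 400.0,     # EUR
    "ward_per_day": 150.0,    # EUR
    "rethoracotomy": 2000.0,  # EUR, operative costs only
}

def per_patient_saving(rbc_diff_ml, ffp_diff_ml, rethoracotomy_rate_diff,
                       icu_days_saved, ward_days_saved, sealant_ml=2.0):
    """Rough saving estimate: avoided costs minus the cost of the sealant itself."""
    saving = (rbc_diff_ml / 250.0) * UNIT_COSTS["rbc_per_unit"]
    saving += (ffp_diff_ml / 200.0) * UNIT_COSTS["ffp_per_unit"]
    saving += rethoracotomy_rate_diff * UNIT_COSTS["rethoracotomy"]
    saving += icu_days_saved * UNIT_COSTS["icu_per_day"]
    saving += ward_days_saved * UNIT_COSTS["ward_per_day"]
    saving -= (sealant_ml / 2.0) * UNIT_COSTS["peg_per_2ml"]
    return saving

# Transfusion differences from the reported means; day savings are placeholders
# because the text reports significance but not the mean day counts.
print(per_patient_saving(rbc_diff_ml=1248 - 761, ffp_diff_ml=779 - 413,
                         rethoracotomy_rate_diff=6 / 54 - 1 / 48,
                         icu_days_saved=1.0, ward_days_saved=2.0))
```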
Discussion: The results of this single centre, retrospective case series demonstrate that adding a PEG sealant to the anastomotic closure of aortic Bentall procedures provides a beneficial effect on blood loss and transfusion requirements. Compared with sutures alone, PEG sealant significantly reduced postoperative drainage loss and transfusion requirements, with no additional adverse events. In addition, a trend towards a reduced rethoracotomy rate versus sutures alone was also observed. These benefits may have translated into substantial cost savings for aortic Bentall procedures over the 30-month study period (January 2004 to June 2006). Several studies have recommended the use of surgical sealants for treating anastomotic suture lines in patients undergoing aortic reconstructions [2-5,8], and the positive findings reported here are supported by other retrospective analyses of sealant use in aortic reconstruction [9]. However, the case series reported here provides a more accurate overview of routine clinical practice than a controlled clinical study environment, and therefore the results are highly relevant to practising cardiac surgeons. In addition to the benefits on clinical outcomes reported here, there are also important practical aspects that should be considered. Unlike haemostatic gelatins (with or without the use of thrombin), or haemostatic agents that induce platelet aggregation, the PEG sealant used in this study works independent of the clotting cascade, allowing its use in patients with severely inhibited coagulation. In addition, PEG sealant can and should be applied pre-emptively of any bleeding and requires the application of smaller volumes than other sealants, particularly if the spray applicator is utilised. This is advantageous as the use of smaller volumes of sealant can help to reduce the risk of stenosis, which is particularly important under the ostium of the left coronary artery. While the PEG sealant used provides a seal of great strength via covalent tissue bonds, it still retains flexibility. This allows normal physiological dilation without stiffening, thus avoiding any additional wall stress and weakening of the surrounding tissue [10]. The PEG sealant also begins to set in five seconds and is fully polymerised as a hydrogel in one minute [5], providing rapid sealing of the prosthesis. This prevents blood loss at unclamping, reducing the risk of further complications related to the extracorporeal circulation. Cost calculations reported here are hypothesis generating, do not represent a detailed or formal analysis of cost effectiveness and warrant further confirmation in dedicated economic studies. Nevertheless, based on the estimated €1,943 per patient cost saving associated with use of PEG sealant in this analysis, it seems plausible that substantial reductions observed for some of the surgical requirements may have directly translated into procedural cost savings. The cost savings associated with the lower rate of transfusions conferred by the use of PEG sealant reported here add further to the data supporting the economic benefit of this intervention [11,12]. In particular, the suggested reduction in the rethoracotomy rate, which in itself influences the amount of RBC and FFP required as a result of the complication, results in cost savings from the use of PEG sealant far outweighing the cost of the treatment itself. 
In addition, the significant reductions in ICU stay and hospital stay associated with the use of PEG sealant would be expected to further reduce healthcare costs. We therefore consider the adjunctive use of the PEG sealant in this analysis of considerable economic benefit. As with all clinical studies, there are limitations that should be considered. This was a retrospective analysis of non-blinded data, recorded at a single study centre. As such it may be open to potential bias. Three of the participating surgeons routinely added a PEG sealant to the anastomotic closure of aortic procedures, whereas the other three did not. Therefore, the results may have been influenced by particular surgeons’ experience and individual techniques. Furthermore, the fact that the data to be recorded for the efficacy and safety outcomes were not pre-defined, but taken out of the existing routine documentation, may also impact the suitability of the measured parameters. With regard to concomitant aspirin use, while it is likely that urgent or emergent cases did not have their aspirin discontinued before surgery and postoperative drainage volumes were increased as a result, the percentages of urgent/emergent cases were similar in the two groups (6/48 [12.5%] versus 8/54 [14.8%], respectively). Postoperative hypertension was not recorded during this study, so comparisons to establish the effectiveness of the PEG sealant in patients with postoperative hypertension were not possible. While the PEG sealant has been demonstrated to withstand supra-physiological pressures of up to 660 ± 150 mmHg [6], this is still an important clinical consideration that warrants further investigation in future studies. Conclusion: This retrospective analysis of aortic Bentall procedures in 102 patients over a 30-month period assessed the effect on blood loss and transfusion requirements of adding a PEG sealant to the anastomotic closure. The use of PEG sealant for suture-line reinforcement provided significant benefits on postoperative drainage and transfusion requirements versus sutures alone, with a trend towards a reduced rethoracotomy rate. These benefits may also translate into considerable cost savings. The clinically significant findings reported here warrant confirmation in prospective studies, which should also include the analysis of postoperative hypertension. Abbreviations: ICU: Intensive Care Unit; PEG: Polyethylene Glycol; RBC: Red Blood Cells; FFP: Fresh Frozen Plasma; CBP: Cardiopulmonary Bypass; CABG: Coronary Artery Bypass Graft; SD: standard deviation; EuroSCORE: European System for Cardiac Operative Risk Evaluation. Competing interests: EN has received personal compensation from Baxter Healthcare Corporation, the manufacturer of CoSeal®, for serving as a consultant and speaker. Baxter Healthcare Corporation financed the preparation and submission of this manuscript for publication. MS and OED declare that they have no competing interests. Authors’ contributions: EN and OED were responsible for the study concept, study design, performing surgical procedures, data analysis, and reviewing the manuscript. MS participated in the study concept and was responsible for data analysis, performing surgical procedures, and reviewing the manuscript. All authors read and approved the final manuscript. Supplementary Material: Video 1. Intraoperative application of PEG sealant during an aortic surgical procedure using a spray applicator. Click here for file Video 2. 
Intraoperative application of PEG sealant during an aortic surgical procedure using a tip applicator. Click here for file
Background: The use of CoSeal(®), a polyethylene glycol sealant, in cardiac and vascular surgery for prevention of anastomotic bleeding has been subject to prior investigations. We analysed our perioperative data to determine the clinical benefit of using polyethylene glycol sealant to inhibit suture line bleeding in aortic surgery. Methods: From January 2004 to June 2006, 124 patients underwent aortic surgical procedures such as full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures. A Bentall procedure was employed in 102 of these patients. In 48 of these, a polyethylene glycol sealant was added to the anastomotic closure of the aortic procedure (sealant group) and the other 54 patients did not have this additive treatment to the suture line (control group). Results: There were no significant between-group differences in the demographic characteristics of the patients undergoing Bentall procedures. Mean EuroSCORES (European System for Cardiac Operative Risk Evaluation) were 13.7 ± 7.7 (sealant group) and 14.4 ± 6.2 (control group), p = NS. The polyethylene glycol sealant group had reduced intraoperative and postoperative transfusion requirements (red blood cells: 761 ± 863 versus 1248 ± 1206 ml, p = 0.02; fresh frozen plasma: 413 ± 532 versus 779 ± 834 ml, p = 0.009); and less postoperative drainage loss (985 ± 972 versus 1709 ± 1302 ml, p = 0.002). A trend towards a lower rate of rethoracotomy was observed in the sealant group (1/48 versus 6/54, p = 0.07) and there was significantly less time spent in the intensive care unit or hospital (both p = 0.03). Based on hypothesis-generating calculations, the resulting economic benefit conferred by shorter intensive care unit and hospital stays, reduced transfusion requirements and a potentially lower rethoracotomy rate is estimated at €1,943 per patient in this data analysis. Conclusions: The use of this polymeric surgical sealant demonstrated improved intraoperative and postoperative management of anastomotic bleeding in Bentall procedures, leading to reduced postoperative drainage loss, less transfusion requirements, and a trend towards a lower rate of rethoracotomy. Hypothesis-generating calculations indicate that the use of this sealant translates to cost savings. Further studies are warranted to investigate the clinical and economic benefits of CoSeal in a prospective manner.
Background: Aortic repair and reconstruction (including aortic arch surgery and combined procedures), despite being major cardiac surgical procedures, are now commonplace. Worldwide, tens of thousands of procedures are performed each year for the treatment of aortic or thoracic aneurysms, occlusions or dissections [1]. One of the main complications with these types of surgical procedures is the intraoperative and postoperative bleeding at the anastomotic suture line. The likelihood of bleeding can be influenced by a range of factors including comorbidities, surgical history, anticoagulation therapies, the type of surgical procedure employed and individual patient risk. This type of bleeding complication presents a major challenge to intraoperative and postoperative haemostasis, and failure to achieve adequate haemostasis can lead to a longer operative time, a greater need for blood transfusion products and a higher risk of postoperative complications and rethoracotomy. These factors and complications are obviously detrimental to the patient and incur significant additional costs. The level of additional costs is dependent upon the need for rethoracotomies, length of stay in the intensive care unit (ICU), overall duration of hospital stay, and the additional use of medical equipment and medical services used to diagnose and resolve the complication. Considering the large number of procedures performed each year, the maintenance of intraoperative and postoperative haemostasis is essential not only for improving patient outcomes, but also for the reduction of these societal and healthcare costs. Increasingly, surgical sealants such as polymeric polyethylene glycol (PEG) are being utilised in cardiac and vascular surgery to control anastamotic bleeding from high-pressure suture lines [2-4]. In several clinical and pre-clinical studies, PEG has been shown to provide rapid and strong sealing while maintaining flexibility and elasticity, and avoiding any disturbance of the wound closure [5,6]. This PEG is a fully synthetic sealant containing no human/animal proteins or gluteraldehyde and it does not induce any adverse tissue responses, exhibits minimal or no toxicity, and resorbs fully within four weeks [3,5]. The objective of this retrospective patient case–control analysis was to assess the effect on blood loss, transfusion requirements, and associated cost savings when adding a PEG sealant to the anastomotic closure of aortic procedures performed using a Bentall procedure. Conclusion: This retrospective analysis of aortic Bentall procedures in 102 patients over a 30-month period assessed the effect on blood loss and transfusion requirements of adding a PEG sealant to the anastomotic closure. The use of PEG sealant for suture-line reinforcement provided significant benefits on postoperative drainage and transfusion requirements versus sutures alone, with a trend towards a reduced rethoracotomy rate. These benefits may also translate into considerable cost savings. The clinically significant findings reported here warrant confirmation in prospective studies, which should also include the analysis of postoperative hypertension.
Background: The use of CoSeal(®), a polyethylene glycol sealant, in cardiac and vascular surgery for prevention of anastomotic bleeding has been subject to prior investigations. We analysed our perioperative data to determine the clinical benefit of using polyethylene glycol sealant to inhibit suture line bleeding in aortic surgery. Methods: From January 2004 to June 2006, 124 patients underwent aortic surgical procedures such as full root replacements, reconstruction and/or replacement of ascending aorta and aortic arch procedures. A Bentall procedure was employed in 102 of these patients. In 48 of these, a polyethylene glycol sealant was added to the anastomotic closure of the aortic procedure (sealant group) and the other 54 patients did not have this additive treatment to the suture line (control group). Results: There were no significant between-group differences in the demographic characteristics of the patients undergoing Bentall procedures. Mean EuroSCORES (European System for Cardiac Operative Risk Evaluation) were 13.7 ± 7.7 (sealant group) and 14.4 ± 6.2 (control group), p = NS. The polyethylene glycol sealant group had reduced intraoperative and postoperative transfusion requirements (red blood cells: 761 ± 863 versus 1248 ± 1206 ml, p = 0.02; fresh frozen plasma: 413 ± 532 versus 779 ± 834 ml, p = 0.009); and less postoperative drainage loss (985 ± 972 versus 1709 ± 1302 ml, p = 0.002). A trend towards a lower rate of rethoracotomy was observed in the sealant group (1/48 versus 6/54, p = 0.07) and there was significantly less time spent in the intensive care unit or hospital (both p = 0.03). Based on hypothesis-generating calculations, the resulting economic benefit conferred by shorter intensive care unit and hospital stays, reduced transfusion requirements and a potentially lower rethoracotomy rate is estimated at €1,943 per patient in this data analysis. Conclusions: The use of this polymeric surgical sealant demonstrated improved intraoperative and postoperative management of anastomotic bleeding in Bentall procedures, leading to reduced postoperative drainage loss, less transfusion requirements, and a trend towards a lower rate of rethoracotomy. Hypothesis-generating calculations indicate that the use of this sealant translates to cost savings. Further studies are warranted to investigate the clinical and economic benefits of CoSeal in a prospective manner.
5,049
433
[ 421, 215, 269, 201, 144, 50, 49, 56 ]
14
[ "sealant", "peg", "peg sealant", "procedures", "aortic", "patients", "procedure", "surgery", "surgical", "group" ]
[ "postoperative haemostasis", "transfusion aortic", "bleeding anastomotic suture", "transfusion aortic repair", "intraoperative postoperative bleeding" ]
[CONTENT] Aortic repair | Blood loss | Transfusion | Economics | Surgical sealant [SUMMARY]
[CONTENT] Aortic repair | Blood loss | Transfusion | Economics | Surgical sealant [SUMMARY]
[CONTENT] Aortic repair | Blood loss | Transfusion | Economics | Surgical sealant [SUMMARY]
[CONTENT] Aortic repair | Blood loss | Transfusion | Economics | Surgical sealant [SUMMARY]
[CONTENT] Aortic repair | Blood loss | Transfusion | Economics | Surgical sealant [SUMMARY]
[CONTENT] Aortic repair | Blood loss | Transfusion | Economics | Surgical sealant [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Aorta | Blood Loss, Surgical | Blood Transfusion | Cardiovascular Surgical Procedures | Case-Control Studies | Female | Humans | Male | Middle Aged | Polyethylene Glycols | Sutures | Thoracotomy [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Aorta | Blood Loss, Surgical | Blood Transfusion | Cardiovascular Surgical Procedures | Case-Control Studies | Female | Humans | Male | Middle Aged | Polyethylene Glycols | Sutures | Thoracotomy [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Aorta | Blood Loss, Surgical | Blood Transfusion | Cardiovascular Surgical Procedures | Case-Control Studies | Female | Humans | Male | Middle Aged | Polyethylene Glycols | Sutures | Thoracotomy [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Aorta | Blood Loss, Surgical | Blood Transfusion | Cardiovascular Surgical Procedures | Case-Control Studies | Female | Humans | Male | Middle Aged | Polyethylene Glycols | Sutures | Thoracotomy [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Aorta | Blood Loss, Surgical | Blood Transfusion | Cardiovascular Surgical Procedures | Case-Control Studies | Female | Humans | Male | Middle Aged | Polyethylene Glycols | Sutures | Thoracotomy [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Aorta | Blood Loss, Surgical | Blood Transfusion | Cardiovascular Surgical Procedures | Case-Control Studies | Female | Humans | Male | Middle Aged | Polyethylene Glycols | Sutures | Thoracotomy [SUMMARY]
[CONTENT] postoperative haemostasis | transfusion aortic | bleeding anastomotic suture | transfusion aortic repair | intraoperative postoperative bleeding [SUMMARY]
[CONTENT] postoperative haemostasis | transfusion aortic | bleeding anastomotic suture | transfusion aortic repair | intraoperative postoperative bleeding [SUMMARY]
[CONTENT] postoperative haemostasis | transfusion aortic | bleeding anastomotic suture | transfusion aortic repair | intraoperative postoperative bleeding [SUMMARY]
[CONTENT] postoperative haemostasis | transfusion aortic | bleeding anastomotic suture | transfusion aortic repair | intraoperative postoperative bleeding [SUMMARY]
[CONTENT] postoperative haemostasis | transfusion aortic | bleeding anastomotic suture | transfusion aortic repair | intraoperative postoperative bleeding [SUMMARY]
[CONTENT] postoperative haemostasis | transfusion aortic | bleeding anastomotic suture | transfusion aortic repair | intraoperative postoperative bleeding [SUMMARY]
[CONTENT] sealant | peg | peg sealant | procedures | aortic | patients | procedure | surgery | surgical | group [SUMMARY]
[CONTENT] sealant | peg | peg sealant | procedures | aortic | patients | procedure | surgery | surgical | group [SUMMARY]
[CONTENT] sealant | peg | peg sealant | procedures | aortic | patients | procedure | surgery | surgical | group [SUMMARY]
[CONTENT] sealant | peg | peg sealant | procedures | aortic | patients | procedure | surgery | surgical | group [SUMMARY]
[CONTENT] sealant | peg | peg sealant | procedures | aortic | patients | procedure | surgery | surgical | group [SUMMARY]
[CONTENT] sealant | peg | peg sealant | procedures | aortic | patients | procedure | surgery | surgical | group [SUMMARY]
[CONTENT] patient | bleeding | procedures | complications | intraoperative postoperative | procedures performed | haemostasis | surgical | postoperative | need [SUMMARY]
[CONTENT] procedures | patients | sealant | surgeons | group | peg | line standard | line standard procedure | sealant performed surgeons | sealant performed [SUMMARY]
[CONTENT] group | table | sealant | ml | versus | control group | control | sealant group | test | peg [SUMMARY]
[CONTENT] benefits | transfusion requirements | requirements | transfusion | significant | postoperative | analysis | patients 30 | patients 30 month period | patients 30 month [SUMMARY]
[CONTENT] sealant | peg | peg sealant | procedures | aortic | patients | procedure | group | surgical | surgery [SUMMARY]
[CONTENT] sealant | peg | peg sealant | procedures | aortic | patients | procedure | group | surgical | surgery [SUMMARY]
[CONTENT] CoSeal ||| [SUMMARY]
[CONTENT] January 2004 to June 2006 | 124 ||| 102 ||| 48 | 54 | control group [SUMMARY]
[CONTENT] Bentall ||| European System for Cardiac Operative Risk Evaluation | 13.7 | 7.7 | 14.4 | 6.2 | NS ||| 761 | 863 | 1248 ± 1206 | 0.02 | 413 | 532 | 779 | 834 ml | 0.009 | 985 | 972 | 1709 ± | 1302 | 0.002 ||| 1/48 | 6/54 | 0.07 | 0.03 ||| 1,943 [SUMMARY]
[CONTENT] Bentall ||| ||| CoSeal [SUMMARY]
[CONTENT] CoSeal ||| ||| January 2004 to June 2006 | 124 ||| 102 ||| 48 | 54 | control group ||| Bentall ||| European System for Cardiac Operative Risk Evaluation | 13.7 | 7.7 | 14.4 | 6.2 | NS ||| 761 | 863 | 1248 ± 1206 | 0.02 | 413 | 532 | 779 | 834 ml | 0.009 | 985 | 972 | 1709 ± | 1302 | 0.002 ||| 1/48 | 6/54 | 0.07 | 0.03 ||| 1,943 ||| Bentall ||| ||| CoSeal [SUMMARY]
[CONTENT] CoSeal ||| ||| January 2004 to June 2006 | 124 ||| 102 ||| 48 | 54 | control group ||| Bentall ||| European System for Cardiac Operative Risk Evaluation | 13.7 | 7.7 | 14.4 | 6.2 | NS ||| 761 | 863 | 1248 ± 1206 | 0.02 | 413 | 532 | 779 | 834 ml | 0.009 | 985 | 972 | 1709 ± | 1302 | 0.002 ||| 1/48 | 6/54 | 0.07 | 0.03 ||| 1,943 ||| Bentall ||| ||| CoSeal [SUMMARY]
Bone mass density estimation: Archimede's principle versus automatic X-ray histogram and edge detection technique in ovariectomized rats treated with germinated brown rice bioactives.
24187491
Bone mass density is an important parameter used in the estimation of the severity and depth of lesions in osteoporosis. Estimation of bone density using existing methods in experimental models has its advantages as well as drawbacks.
BACKGROUND
In this study, the X-ray histogram edge detection technique was used to estimate the bone mass density in ovariectomized rats treated orally with germinated brown rice (GBR) bioactives, and the results were compared with estimated results obtained using Archimede's principle. New bone cell proliferation was assessed by histology and immunohistochemical reaction using polyclonal nuclear antigen. Additionally, serum alkaline phosphatase activity, serum and bone calcium and zinc concentrations were detected using a chemistry analyzer and atomic absorption spectroscopy. Rats were divided into groups of six as follows: sham (nonovariectomized, nontreated); ovariectomized, nontreated; and ovariectomized and treated with estrogen, or Remifemin®, GBR-phenolics, acylated steryl glucosides, gamma oryzanol, and gamma amino-butyric acid extracted from GBR at different doses.
MATERIALS AND METHODS
Our results indicate a significant increase in alkaline phosphatase activity, serum and bone calcium, and zinc and ash content in the treated groups compared with the ovariectomized nontreated group (P < 0.05). Bone density increased significantly (P < 0.05) in groups treated with estrogen, GBR, Remifemin®, and gamma oryzanol compared to the ovariectomized nontreated group. Histological sections revealed more osteoblasts in the treated groups when compared with the untreated groups. A polyclonal nuclear antigen reaction showing proliferating new cells was observed in groups treated with estrogen, Remifemin®, GBR, acylated steryl glucosides, and gamma oryzanol. There was a good correlation between bone mass densities estimated using Archimede's principle and the edge detection technique between the treated groups (r² = 0.737, P = 0.004).
RESULTS
Our study shows that GBR bioactives increase bone density, which might be via the activation of zinc formation and increased calcium content, and that X-ray edge detection technique is effective in the measurement of bone density and can be employed effectively in this respect.
CONCLUSION
[ "Animals", "Bone Density", "Calcium", "Femur", "Minerals", "Models, Statistical", "Oryza", "Osteoporosis", "Outcome Assessment, Health Care", "Ovariectomy", "Pattern Recognition, Automated", "Phytotherapy", "Plant Extracts", "Rats", "Rats, Sprague-Dawley", "Spectrophotometry, Atomic", "Zinc" ]
3810202
Introduction
Traditionally, dual X-ray absorptiometry and Archimede’s principle are employed in the measurement of bone mass density (BMD) in biomedical research. The use of dual X-ray absorptiometry to measure bone density has become very popular over the years, and although a significant correlation was obtained when comparing BMD using dual X-ray absorptiometry and Archimede’s principle in rats, the precision, accuracy, and sensitivity of this method in bone density determination remains a topic of concern.1 Previous research has revealed that a progressive decline in BMD occurs with estrogen deficiency at menopause, exposing women to a greater risk for the development of osteoporosis.2,3 Postmenopausal osteoporosis is now a worldwide problem, and the conventional treatment of estrogen replacement and other related chemotherapy are reported to be associated with side effects ranging from alterations in physiology to therapy-related cancers.4–6 Fisher et al7 described osteoporotic fracture as an important public health issue with increasing morbidity, mortality, and prevalence. In the United States alone, osteoporosis affects an estimated 10–28 million Americans over the age of 50, with 33 million people, mostly women, having low bone density and close to 1.2 million bone fractures being related to osteoporosis.8,9 In view of this, natural products, especially phytoestrogens, are being explored as an alternative to hormone therapy for the management of osteoporosis and other related bone degenerative diseases. In this study, germinated brown rice (GBR), a rice with increased levels of bioactive compounds compared to polished rice and brown rice, with potential benefits in the management of chronic disease, was administered to rats with the aim of studying its ability to protect bone from osteoporosis. Bone ash weight and calcium and zinc content were quantified, and bone histology and polyclonal nuclear antigen staining were used to detect new bone formation. BMD was estimated using Archimede’s principle and was compared with edge detection X-ray morphometry in order to explore its applicability in measuring bone density in biomedical research.
Statistical analysis
Data are presented as mean ± standard deviation. Differences were determined by one-way analysis of variance (ANOVA) and mean comparison by Tukey–Kramer post hoc test, using JMP10 statistical software (SAS, Cary, NC, USA). Differences were considered significant at P < 0.05.
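The authors ran one-way ANOVA with Tukey–Kramer post hoc comparisons in JMP10; the snippet below reproduces the same test sequence in Python purely for illustration and is not the analysis actually performed. The group values are placeholders, not study measurements.

```python
# Illustrative one-way ANOVA with Tukey post hoc comparisons (SciPy/statsmodels),
# mirroring the JMP10 analysis described above. Values are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sham = np.array([1.52, 1.48, 1.55, 1.50, 1.49, 1.53])  # placeholder BMD values, g/cm3
ovx  = np.array([1.30, 1.28, 1.33, 1.29, 1.31, 1.27])
gbr  = np.array([1.45, 1.43, 1.47, 1.44, 1.46, 1.42])

# One-way ANOVA across treatment groups
f_stat, p_value = stats.f_oneway(sham, ovx, gbr)
print(f_stat, p_value)

# Tukey post hoc comparison of group means at alpha = 0.05
values = np.concatenate([sham, ovx, gbr])
labels = ["sham"] * 6 + ["ovx"] * 6 + ["gbr"] * 6
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```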
Results
The diameter of the femur in the distal ½ in the sham nonovariectomized group, and the ovariectomized groups treated with 0.2 mg/kg estrogen, 200 mg/kg ASG, GBR, or ORZ was significantly higher compared to the ovariectomized, nontreated group (P < 0.05). The diameter of the femur in the distal ¼ in the ovariectomized, nontreated group and groups treated with ORZ 200 mg/kg and estrogen 0.2 mg/kg was significantly higher (P < 0.05) compared to the nontreated group (Table 1). The bone densities of the sham nonovariectomized group and groups treated with estrogen 0.2 mg/kg, and ORZ, ASG, GABA each at 200 mg/kg, and Remifemin® 20 mg/kg were significantly higher compared to the ovariectomized, nontreated group (P < 0.002, P < 0.03, and P < 0.01, respectively). The other treated groups expressed a higher density than the nontreated group although not significantly (P > 0.05) when using Archimede’s principle (Table 1). The X-ray edge detection technique yielded the following percentage increase in bone formation in these groups: ASG 200 mg/kg, 0.38%; estrogen 0.2 mg/kg, 1.55%; GABA 200 mg/kg, 0.02%; ORZ 200 mg/kg, 0.6%; Remifemin® 20 mg/kg, 0.04%; and SHAM, 0.28%. These treatment groups showed an increase in BMD and fell under the X4 scale of classification (Table 2). A significant correlation was observed between densities obtained using Archimede’s principle and those obtained in X3 (bone density values in class 3 grading using X-ray morphometry) (R2 = 0.737, P = 0.004); a nonsignificant correlation was observed between the two methods in groups that showed a significant increase in bone formation (R2 = 0.710, P = 0.179) (Table 3). The weight of the femur bone immediately after sacrifice of rats (wet weight) in the ovariectomized, nontreated group was significantly lower (P < 0.05) than in the sham and other treated groups (Table 4). Bone ash weight was higher in the sham group and groups treated with 0.2 mg/kg estrogen, ASG 200 mg/kg, and ORZ 100 mg/kg and 200 mg/kg, and significantly different (P < 0.05) from that of the ovariectomized, nontreated group and the other treatment groups, as shown in Table 4. Bone calcium content in the ovariectomized, nontreated group was significantly lower (P < 0.05) than that in sham and all ovariectomized treated groups. No significant difference in terms of calcium content was observed in groups treated with estrogen 0.2 mg/kg, GBR 200 mg/kg, ASG 100 mg/kg and 200 mg/kg, ORZ 100 mg/kg and 200 mg/kg, GABA 200 mg/kg, and the sham nonovariectomized group (P < 0.05) (Table 4). Bone zinc concentration was significantly lower in the ovariectomized, nontreated group when compared to the sham and all treated groups (P < 0.05). The concentration increased significantly (P < 0.05) in groups treated with estrogen 0.2 mg/kg, Remifemin® 10 mg/kg, ASG 200 mg/kg, GABA 200 mg/kg, ORZ 100 mg/kg and 200 mg/kg compared to the sham nonovariectomized group, as shown in Table 4. There was significant difference in the levels of serum alkaline phosphatase (ALP) between 2 weeks and 8 weeks after treatment in the group treated with ASG 100 mg/kg compared to the ovariectomized, nontreated group and other treatment groups: P < 0.05. During the fourth week after treatment, a significant difference was observed between sham nonovariectomized rats and the group treated with Remifemin® 10 mg/kg (P < 0.0451) (Table 5). Serum calcium concentrations were significantly different between treated groups in the eighth week only after commencement of treatment (P = 0.0001). 
There was no significant difference between the treated groups in the second and fourth week (P = 0.9390 and P = 0.8948, respectively) (Table 5). Serum zinc concentrations showed a significant increase in the sham nontreated group, ORZ 200 mg/kg, and GBR 200 mg/kg 2 weeks after treatment compared to ovariectomized, nontreated and other treated groups (P < 0.0001) (Table 5). After 4 weeks of treatment, serum zinc concentration in groups treated with GABA 200 mg/kg, estrogen 0.2 mg/kg, and GBR 200 mg/kg were significantly different compared to other treatment groups (P < 0.0001), as shown in Table 5. Zinc concentrations in the nonovariectomized (sham) group and groups treated with GABA 200 mg/kg and estrogen 0.2 mg/kg in the eighth week differed significantly from those of the other treatment groups (P < 0.0001) (Table 5). Bone histology Histologically, the sham nonovariectomized group showed a normal bone cell configuration with more osteocytes and minimal osteoblastic activity compared to ovariectomized, nontreated group (Figure 3A). The ovariectomized, nontreated rats showed vacuolation in the bone marrow, reduced marrow and osteogenic activity, complete absence of osteoblasts, reduced bone density, and marked degenerative changes (Figure 3B), while the group treated with estrogen showed an increase in bone formation with minimal marrow activity and osteoblast entrapment at the margins (Figure 3C). Similarly, the Remifemin®-treated group showed evidence of an increase in new bone formation (Figure 3D). The GBR-treated group showed active proliferation of new bone cells (Figure 3E) and the ASG-treated group showed bone marrow with lymphoblastic cells with some proteinaceous fluid. Osteocytes were entrapped at the margin. Active osteoblasts at the margin were in the process of converting to osteocytes, showing active bone formation and increased bone density (Figure 3F). Hematopoietic cells, mature osteocytes, and osteoblasts were present. The GABA-treated group also showed evidence of active bone formation (Figure 3G), and the ORZ-treated group showed an increase in osteoblastic activity and an increase in new bone formation (Figure 3H). 
Immunohistochemistry Immunohistochemistry of the bone tissue gave weak positive staining, indicating new bone formation in all treated groups; the reaction was more prominent in the groups treated with estrogen, Remifemin®, and ASG (Figure 4C, D and F), while groups treated with GBR, GABA and ORZ showed a milder reaction (Figure 4E, G and H). The sham and OVX-untreated groups both showed very little or no reaction (Figure 4A and B).
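For completeness of the Results, the agreement between the two density estimates reported above (Archimede's principle versus the X-ray classification output, R² = 0.737, P = 0.004) can be checked with a simple linear regression. The sketch below is illustrative only; the paired values are placeholders, not the study measurements.

```python
# Sketch of the correlation check between the two density estimates reported in the
# Results (Archimede's principle versus the X-ray classification output).
# The paired values below are placeholders, NOT the study measurements.
import numpy as np
from scipy import stats

archimedes_density = np.array([1.32, 1.41, 1.38, 1.45, 1.36, 1.43])  # g/cm3, placeholder
xray_class_density = np.array([0.30, 0.42, 0.37, 0.48, 0.33, 0.45])  # X3 fraction, placeholder

slope, intercept, r_value, p_value, stderr = stats.linregress(archimedes_density,
                                                              xray_class_density)
print(f"R^2 = {r_value**2:.3f}, P = {p_value:.3f}")  # the paper reports R^2 = 0.737, P = 0.004
```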
Conclusion
This study indicates that the increase in BMD and osteoprotective effect of GBR bioactives might involve the activation of zinc formation and increased calcium, coupled with an increase in ALP, which together increase osteoblastic activity and subsequently bone formation. However, this proposed mechanistic effect requires further clarification, and more research is needed to address the molecular mechanism of zinc stimulation by GBR bioactives.
[ "Introduction", "Animals, grouping, dosing, and frequency", "Bone mass density and diameter of the femur bone", "Archimede’s principle", "X-ray morphometry", "Automatic classification of bone structure", "Measurement of serum zinc and calcium concentrations", "Bone ash weight, calcium and zinc content", "Histology", "Immunohistochemistry", "Bone histology", "Immunohistochemistry", "Conclusion" ]
[ "Traditionally, dual X-ray absorptiometry and Archimede’s principle are employed in the measurement of bone mass density (BMD) in biomedical research. The use of dual X-ray absorptiometry to measure bone density has become very popular over the years, and although a significant correlation was obtained when comparing BMD using dual X-ray absorptiometry and Archimede’s principle in rats, the precision, accuracy, and sensitivity of this method in bone density determination remains a topic of concern.1\nPrevious research has revealed that a progressive decline in BMD occurs with estrogen deficiency at menopause, exposing women to a greater risk for the development of osteoporosis.2,3 Postmenopausal osteoporosis is now a worldwide problem, and the conventional treatment of estrogen replacement and other related chemotherapy are reported to be associated with side effects ranging from alterations in physiology to therapy-related cancers.4–6 Fisher et al7 described osteoporotic fracture as an important public health issue with increasing morbidity, mortality, and prevalence. In the United States alone, osteoporosis affects an estimated 10–28 million Americans over the age of 50, with 33 million people, mostly women, having low bone density and close to 1.2 million bone fractures being related to osteoporosis.8,9 In view of this, natural products, especially phytoestrogens, are being explored as an alternative to hormone therapy for the management of osteoporosis and other related bone degenerative diseases. In this study, germinated brown rice (GBR), a rice with increased levels of bioactive compounds compared to polished rice and brown rice, with potential benefits in the management of chronic disease, was administered to rats with the aim of studying its ability to protect bone from osteoporosis. Bone ash weight and calcium and zinc content were quantified, and bone histology and polyclonal nuclear antigen staining were used to detect new bone formation. BMD was estimated using Archimede’s principle and was compared with edge detection X-ray morphometry in order to explore its applicability in measuring bone density in biomedical research.", "A total of 78 mature 12-week-old Sprague Dawley rats were procured from the Faculty of Veterinary Medicine, University Putra Malaysia (Selangor, Malaysia). They were acclimatized for 2 weeks before ovariectomy or sham operation. Six rats were assigned to each of the 13 treatment groups: group 1: nonovariectomized, nontreated (sham) negative control; group 2: ovariectomized, nontreated positive control; group 3: ovariectomized and treated with estrogen (0.2 mg/kg); groups 4 and 5: ovariectomized and treated with Remifemin® (10 and 20 mg/kg); groups 6–13: ovariectomized and treated with GBR, acylated steryl glucosides (ASG), gamma amino-butyric acid (GABA) or gamma oryzanol (ORZ) at two different doses (100 and 200 mg/kg). Extracts were administered by oral gavage once a day for 8 weeks. The entire animal study was carried out at the animal facility, Faculty of Medicine and Health Science, University Putra Malaysia. Rats were individually housed in plastic cages in a temperature and humidity controlled air-conditioned room (25°C–30°C) with exposure to a 12/12-hour light/dark cycle. 
The study was carried out according to the guidelines for the use of animals and was approved by the Animal Care and Use Committee (ACUC) of the Faculty of Medicine and Health Sciences, University Putra Malaysia.", "Femurs were isolated and freed from the surrounding muscles, and bone diameters (diameter of the femur in the distal ¼ and diameter of the femur in the distal ½) were determined using a KERNN 150 mm digital caliper (Kern and Sohn, Balingen, Germany). The bone mass density was estimated using both Archimede’s principle and the X-ray edge detection technique.", "The BMD of the left femur was calculated by measuring the mass of the bone in air and measuring the weight again after submerging in a specific volume of distilled water. From these two masses, the density was then calculated using the formula,\nwhere d is density, w is weight, w1 is weight in air, w2 is weight in water, and P is density of distilled water at a given temperature and is expressed as g/cm3.", "The bones were placed on a grid view specimen container (CIRS, Norfolk, VA, USA) at the 3, 6, 9, and 12 o’clock positions, and exposed to X-rays using a Philips Mammo diagnostic 3000 X-ray machine (Philips Healthcare, Andover, MA, USA) at controlled exposures of 62 kVp, 4 mAs.", "We designed an algorithm that had the ability to classify the bone structure automatically as summarized in Figure 1. This algorithm was designed with MATLAB software (MathWorks, Natick, MA, USA) and runs on a PC with a HP Pavilion Notebook dv6 computer (Hewlett-Packard, Palo Alto, CA, USA) with the following specifications: core i5, 2.4 GHz processor speed, 520 M cache. According to the proposed algorithm, the original image obtained from the X-ray machine (Figure 2A) was filtered through a Wiener filter to remove the noise in the image. The filtered image was then divided into different subimages (Figure 2B). The local range filter and local entropy were applied to extract the features of this subimage. The local range filter was used to specify the magnitude of the gradient. Local range filtering was determined by calculating the difference between the maximum and minimum range values of the filtered window. Local range filtering has a short calculation time, as it operates on only a small number of inputs for each output pixel.10 On the other hand, using local entropy will create a textured image wherein the local entropy is related to the variance in the grayscale in the neighborhood. The local entropy is larger for a heterogeneous region but smaller for a homogeneous neighborhood. Accordingly, the transition region will have larger local entropy values than those in nontransition regions of the image.11 The weighted sum (v and w) of the magnitude of the gradient and the local entropy was used to calculate the pixels’ importance and can be defined as follows,\nwhere gra (x, y) and entr (x, y) are the magnitude of the gradient and the local entropy for pixel (x, y), respectively, and v and w are the weight for balancing the local entropy and the magnitude of gradient features, respectively. At this stage, the grayscale subimage was converted to a binary image to obtain an image with two colors: white for the bone and black for the background of the subimage. To enable the classification process, the original grayscale X1–X7 (where X1 denotes lower density and X7 denotes higher density) was converted to RBG (red, blue, green) color for easy identification, as shown in Figure 2C. 
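As a concrete illustration of the preprocessing and feature-extraction steps just described (Wiener filtering, splitting into sub-images, local range and local entropy, their weighted combination, and binarization), a minimal Python sketch follows. It is not the authors' MATLAB implementation: the window size, block size, weights v and w, and the Otsu thresholding used for binarization are all assumptions.

```python
# Illustrative Python sketch of the described pipeline (the original was MATLAB).
# Window size, block size, the weights v and w, and Otsu thresholding are assumed.
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import maximum_filter, minimum_filter
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte, view_as_blocks


def extract_features(xray, window=5, v=0.5, w=0.5, block=32):
    """Return the denoised image, bone mask, per-pixel importance, and sub-image blocks."""
    img = xray.astype(float)

    # 1. Noise removal with an adaptive Wiener filter.
    denoised = wiener(img, mysize=window)

    # 2. Local range filter: max - min within each window (a gradient-magnitude proxy).
    local_range = maximum_filter(denoised, size=window) - minimum_filter(denoised, size=window)

    # 3. Local entropy: high in heterogeneous (transition) regions, low in homogeneous ones.
    norm = (denoised - denoised.min()) / (denoised.max() - denoised.min() + 1e-9)
    local_ent = entropy(img_as_ubyte(norm), disk(window))

    # 4. Weighted pixel importance: imp = v * gradient + w * entropy
    #    (both features rescaled to [0, 1] before weighting).
    gra = local_range / (local_range.max() + 1e-9)
    ent = local_ent / (local_ent.max() + 1e-9)
    importance = v * gra + w * ent

    # 5. Binarize: white = bone, black = background (Otsu threshold assumed).
    bone_mask = denoised > threshold_otsu(denoised)

    # 6. Split into square sub-images for block-wise classification
    #    (edges cropped to a multiple of the block size for simplicity).
    h, wd = denoised.shape
    cropped = denoised[: h - h % block, : wd - wd % block]
    blocks = view_as_blocks(cropped, (block, block))

    return denoised, bone_mask, importance, blocks
```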
The range of intensity in each block on the original grayscale was used as a training set for k-nearest neighbor, which is a powerful method of nonparametric analysis.12 There are three key elements of this approach: (1) a set of labeled objects, (2) a distance or similarity metric to compute distance between objects, and (3) the value of the number of nearest neighbor’s k. To classify an unlabeled object, the distance of this object to the labeled objects is computed, and its k-nearest neighbors are identified; the nearest-neighbor list is thus obtained, the test object is classified based on the majority class of its nearest neighbors, and the class labels of these nearest neighbors are then used to determine the class label of the object.12–14 The fractional percentage for the bone was calculated by taking each pixel in the bone and classifying it with k-nearest neighbor to specify each pixel’s structure according to the classification range (X1–X7). Figure 2D–F show the segmentation process, the edge detection method, and classification, respectively.", "Blood was obtained from rats at 2, 4, and 8 weeks by cardiac puncture under anesthesia. Serum was collected by centrifugation at 3,000 rpm for 15 minutes and stored at −20°C. Zinc and calcium concentrations were analyzed using Randox analytical kits (Randox, Crumlin, UK) on a Selectra XL automated chemistry analyzer (Vita Scientific, Dieren, The Netherlands) according to the manufacturer’s instructions.", "Bone calcium was determined by atomic absorption spectrophotometry (S series GE712405 v1.30; Thermo Fisher Scientific, Waltham, MA, USA) using a procedure reported by Harrison et al,15 with some modifications. Briefly, bone samples were dried at 100°C for 24 hours before they were ashed in a muffle furnace at 550°C–600°C for another 24 hours and the ash weighed. The ash was pulverized into powder and hydrolyzed with 6 M HCl. The hydrolysate was then used to determine the calcium content at a wavelength of 422.7 nm, lamp current 100%, and a band pass and measurement time of 0.5 nm and 4 seconds, respectively. Zinc was determined at a wavelength of 213.9 nm, lamp current 75%, and a band pass and measurement time of 0.2 nm and 4 minutes, respectively. Both calcium and zinc concentrations were expressed in mg/L.", "Bones were fixed in 10% neutral formalin for 3 days, during which the formalin was changed every 24 hours. The bones were transferred to 80% formic acid for decalcification for 1 week, then embedded in paraffin and cut into longitudinal sections of 5 μm thickness. The sections were stained with hematoxylin and eosin and later viewed under a light microscope.", "Immunohistochemistry to detect the presence of a nuclear antigen was performed following a previously reported procedure,16 with some modifications. Bone sections were mounted on gelatin-coated glass slides, deparaffinized in three changes of xylol, dehydrated in graded alcohol, and then washed with distilled water. The sections were placed in 10 mM citrate buffer, pH 6.0, for 10 minutes at 50 W in a microwave, then cooled at room temperature for 5 minutes. Nonspecific binding was covered using 5% bovine serum albumin (BSA). Sections were then incubated using hydrogen peroxide (3%) for 30 minutes to block endogenous peroxidase activity and were then washed in phosphate buffered saline containing 0.2% solution of Tween-20 and distilled water. 
PC10 monoclonal antibody (DakoCytomation, Copenhagen, Denmark) was used as the primary antibody for 1 hour at a ratio of 1:200 then rinsed in phosphate buffered saline and reacted with polyclonal rabbit anti-mouse secondary antibody for 10 minutes at room temperature. The peroxidase reactions were developed in 3,3 diaminobenzidine in chromagen solution (DakoCytomation, Copenhagen, Denmark), counterstained with methylene blue for 2 minutes, and finally, the sections were cleared in xylene and coverslipped for examination under the light microscope. Images were captured at strategic locations on the slide using an image analyzer (Analysis LS Research) attached to the microscope (Olympus BX51, Japan).", "Histologically, the sham nonovariectomized group showed a normal bone cell configuration with more osteocytes and minimal osteoblastic activity compared to ovariectomized, nontreated group (Figure 3A). The ovariectomized, nontreated rats showed vacuolation in the bone marrow, reduced marrow and osteogenic activity, complete absence of osteoblasts, reduced bone density, and marked degenerative changes (Figure 3B), while the group treated with estrogen showed an increase in bone formation with minimal marrow activity and osteoblast entrapment at the margins (Figure 3C). Similarly, the Remifemin®-treated group showed evidence of an increase in new bone formation (Figure 3D). The GBR-treated group showed active proliferation of new bone cells (Figure 3E) and the ASG-treated group showed bone marrow with lymphoblastic cells with some proteinaceous fluid. Osteocytes were entrapped at the margin. Active osteoblasts at the margin were in the process of converting to osteocytes, showing active bone formation and increased bone density (Figure 3F). Hematopoietic cells, mature osteocytes, and osteoblasts were present. The GABA-treated group also showed evidence of active bone formation (Figure 3G), and the ORZ-treated group showed an increase in osteoblastic activity and an increase in new bone formation (Figure 3H).", "Immunohistochemistry of the bone tissue gave weak positive staining, indicating new bone formation in all treated groups, and the reaction was more prominent in the groups treated with estrogen, remifemin, and ASG (Figure 4C, D and F). While groups treated with GBR, GABA and ORZ showed a milder reaction (Figure 4E, G and H). The sham and OVX-untreated groups both showed a very little or no reaction (Figure 4A and B).", "This study indicates that the increase in BMD and osteoprotective effect of GBR bioactives might involve the activation of zinc formation and increased calcium, coupled with an increase in ALP, which together increase osteoblastic activity and subsequently bone formation. However, this proposed mechanistic effect requires further clarification, and more research is needed to address the molecular mechanism of zinc stimulation by GBR bioactives." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Animals, grouping, dosing, and frequency", "Bone mass density and diameter of the femur bone", "Archimede’s principle", "X-ray morphometry", "Automatic classification of bone structure", "Measurement of serum zinc and calcium concentrations", "Bone ash weight, calcium and zinc content", "Histology", "Immunohistochemistry", "Statistical analysis", "Results", "Bone histology", "Immunohistochemistry", "Discussion", "Conclusion" ]
[ "Traditionally, dual X-ray absorptiometry and Archimede’s principle are employed in the measurement of bone mass density (BMD) in biomedical research. The use of dual X-ray absorptiometry to measure bone density has become very popular over the years, and although a significant correlation was obtained when comparing BMD using dual X-ray absorptiometry and Archimede’s principle in rats, the precision, accuracy, and sensitivity of this method in bone density determination remains a topic of concern.1\nPrevious research has revealed that a progressive decline in BMD occurs with estrogen deficiency at menopause, exposing women to a greater risk for the development of osteoporosis.2,3 Postmenopausal osteoporosis is now a worldwide problem, and the conventional treatment of estrogen replacement and other related chemotherapy are reported to be associated with side effects ranging from alterations in physiology to therapy-related cancers.4–6 Fisher et al7 described osteoporotic fracture as an important public health issue with increasing morbidity, mortality, and prevalence. In the United States alone, osteoporosis affects an estimated 10–28 million Americans over the age of 50, with 33 million people, mostly women, having low bone density and close to 1.2 million bone fractures being related to osteoporosis.8,9 In view of this, natural products, especially phytoestrogens, are being explored as an alternative to hormone therapy for the management of osteoporosis and other related bone degenerative diseases. In this study, germinated brown rice (GBR), a rice with increased levels of bioactive compounds compared to polished rice and brown rice, with potential benefits in the management of chronic disease, was administered to rats with the aim of studying its ability to protect bone from osteoporosis. Bone ash weight and calcium and zinc content were quantified, and bone histology and polyclonal nuclear antigen staining were used to detect new bone formation. BMD was estimated using Archimede’s principle and was compared with edge detection X-ray morphometry in order to explore its applicability in measuring bone density in biomedical research.", "BERNAS Rice Company (Sri Tiram Jaya, Malaysia) supplied the brown rice. Cimicifuga racemosa (Remifemin® 20 mg/tab) was procured from Schaper and Brümmer GmbH (Salzgitter, Germany). Conjugated estrogen (Premarin® 0.625 mg/tab) was obtained from Wyeth Medica Ireland (Newbridge, Ireland). Xylazine HCl 20 mg/mL and ketamine HCl were obtained from Troy Laboratories (Smithfield, Australia).\n Animals, grouping, dosing, and frequency A total of 78 mature 12-week-old Sprague Dawley rats were procured from the Faculty of Veterinary Medicine, University Putra Malaysia (Selangor, Malaysia). They were acclimatized for 2 weeks before ovariectomy or sham operation. Six rats were assigned to each of the 13 treatment groups: group 1: nonovariectomized, nontreated (sham) negative control; group 2: ovariectomized, nontreated positive control; group 3: ovariectomized and treated with estrogen (0.2 mg/kg); groups 4 and 5: ovariectomized and treated with Remifemin® (10 and 20 mg/kg); groups 6–13: ovariectomized and treated with GBR, acylated steryl glucosides (ASG), gamma amino-butyric acid (GABA) or gamma oryzanol (ORZ) at two different doses (100 and 200 mg/kg). Extracts were administered by oral gavage once a day for 8 weeks. The entire animal study was carried out at the animal facility, Faculty of Medicine and Health Science, University Putra Malaysia. 
Rats were individually housed in plastic cages in a temperature and humidity controlled air-conditioned room (25°C–30°C) with exposure to a 12/12-hour light/dark cycle. The study was carried out according to the guidelines for the use of animals and was approved by the Animal Care and Use Committee (ACUC) of the Faculty of Medicine and Health Sciences, University Putra Malaysia.\nA total of 78 mature 12-week-old Sprague Dawley rats were procured from the Faculty of Veterinary Medicine, University Putra Malaysia (Selangor, Malaysia). They were acclimatized for 2 weeks before ovariectomy or sham operation. Six rats were assigned to each of the 13 treatment groups: group 1: nonovariectomized, nontreated (sham) negative control; group 2: ovariectomized, nontreated positive control; group 3: ovariectomized and treated with estrogen (0.2 mg/kg); groups 4 and 5: ovariectomized and treated with Remifemin® (10 and 20 mg/kg); groups 6–13: ovariectomized and treated with GBR, acylated steryl glucosides (ASG), gamma amino-butyric acid (GABA) or gamma oryzanol (ORZ) at two different doses (100 and 200 mg/kg). Extracts were administered by oral gavage once a day for 8 weeks. The entire animal study was carried out at the animal facility, Faculty of Medicine and Health Science, University Putra Malaysia. Rats were individually housed in plastic cages in a temperature and humidity controlled air-conditioned room (25°C–30°C) with exposure to a 12/12-hour light/dark cycle. The study was carried out according to the guidelines for the use of animals and was approved by the Animal Care and Use Committee (ACUC) of the Faculty of Medicine and Health Sciences, University Putra Malaysia.\n Bone mass density and diameter of the femur bone Femurs were isolated and freed from the surrounding muscles, and bone diameters (diameter of the femur in the distal ¼ and diameter of the femur in the distal ½) were determined using a KERNN 150 mm digital caliper (Kern and Sohn, Balingen, Germany). The bone mass density was estimated using both Archimede’s principle and the X-ray edge detection technique.\nFemurs were isolated and freed from the surrounding muscles, and bone diameters (diameter of the femur in the distal ¼ and diameter of the femur in the distal ½) were determined using a KERNN 150 mm digital caliper (Kern and Sohn, Balingen, Germany). The bone mass density was estimated using both Archimede’s principle and the X-ray edge detection technique.\n Archimede’s principle The BMD of the left femur was calculated by measuring the mass of the bone in air and measuring the weight again after submerging in a specific volume of distilled water. From these two masses, the density was then calculated using the formula,\nwhere d is density, w is weight, w1 is weight in air, w2 is weight in water, and P is density of distilled water at a given temperature and is expressed as g/cm3.\nThe BMD of the left femur was calculated by measuring the mass of the bone in air and measuring the weight again after submerging in a specific volume of distilled water. 
From these two masses, the density was then calculated using the formula,\nwhere d is density, w is weight, w1 is weight in air, w2 is weight in water, and P is density of distilled water at a given temperature and is expressed as g/cm3.\n X-ray morphometry The bones were placed on a grid view specimen container (CIRS, Norfolk, VA, USA) at the 3, 6, 9, and 12 o’clock positions, and exposed to X-rays using a Philips Mammo diagnostic 3000 X-ray machine (Philips Healthcare, Andover, MA, USA) at controlled exposures of 62 kVp, 4 mAs.\nThe bones were placed on a grid view specimen container (CIRS, Norfolk, VA, USA) at the 3, 6, 9, and 12 o’clock positions, and exposed to X-rays using a Philips Mammo diagnostic 3000 X-ray machine (Philips Healthcare, Andover, MA, USA) at controlled exposures of 62 kVp, 4 mAs.\n Automatic classification of bone structure We designed an algorithm that had the ability to classify the bone structure automatically as summarized in Figure 1. This algorithm was designed with MATLAB software (MathWorks, Natick, MA, USA) and runs on a PC with a HP Pavilion Notebook dv6 computer (Hewlett-Packard, Palo Alto, CA, USA) with the following specifications: core i5, 2.4 GHz processor speed, 520 M cache. According to the proposed algorithm, the original image obtained from the X-ray machine (Figure 2A) was filtered through a Wiener filter to remove the noise in the image. The filtered image was then divided into different subimages (Figure 2B). The local range filter and local entropy were applied to extract the features of this subimage. The local range filter was used to specify the magnitude of the gradient. Local range filtering was determined by calculating the difference between the maximum and minimum range values of the filtered window. Local range filtering has a short calculation time, as it operates on only a small number of inputs for each output pixel.10 On the other hand, using local entropy will create a textured image wherein the local entropy is related to the variance in the grayscale in the neighborhood. The local entropy is larger for a heterogeneous region but smaller for a homogeneous neighborhood. Accordingly, the transition region will have larger local entropy values than those in nontransition regions of the image.11 The weighted sum (v and w) of the magnitude of the gradient and the local entropy was used to calculate the pixels’ importance and can be defined as follows,\nwhere gra (x, y) and entr (x, y) are the magnitude of the gradient and the local entropy for pixel (x, y), respectively, and v and w are the weight for balancing the local entropy and the magnitude of gradient features, respectively. At this stage, the grayscale subimage was converted to a binary image to obtain an image with two colors: white for the bone and black for the background of the subimage. To enable the classification process, the original grayscale X1–X7 (where X1 denotes lower density and X7 denotes higher density) was converted to RBG (red, blue, green) color for easy identification, as shown in Figure 2C. The range of intensity in each block on the original grayscale was used as a training set for k-nearest neighbor, which is a powerful method of nonparametric analysis.12 There are three key elements of this approach: (1) a set of labeled objects, (2) a distance or similarity metric to compute distance between objects, and (3) the value of the number of nearest neighbor’s k. 
To classify an unlabeled object, the distance of this object to the labeled objects is computed, and its k-nearest neighbors are identified; the nearest-neighbor list is thus obtained, the test object is classified based on the majority class of its nearest neighbors, and the class labels of these nearest neighbors are then used to determine the class label of the object.12–14 The fractional percentage for the bone was calculated by taking each pixel in the bone and classifying it with k-nearest neighbor to specify each pixel’s structure according to the classification range (X1–X7). Figure 2D–F show the segmentation process, the edge detection method, and classification, respectively.\nWe designed an algorithm that had the ability to classify the bone structure automatically as summarized in Figure 1. This algorithm was designed with MATLAB software (MathWorks, Natick, MA, USA) and runs on a PC with a HP Pavilion Notebook dv6 computer (Hewlett-Packard, Palo Alto, CA, USA) with the following specifications: core i5, 2.4 GHz processor speed, 520 M cache. According to the proposed algorithm, the original image obtained from the X-ray machine (Figure 2A) was filtered through a Wiener filter to remove the noise in the image. The filtered image was then divided into different subimages (Figure 2B). The local range filter and local entropy were applied to extract the features of this subimage. The local range filter was used to specify the magnitude of the gradient. Local range filtering was determined by calculating the difference between the maximum and minimum range values of the filtered window. Local range filtering has a short calculation time, as it operates on only a small number of inputs for each output pixel.10 On the other hand, using local entropy will create a textured image wherein the local entropy is related to the variance in the grayscale in the neighborhood. The local entropy is larger for a heterogeneous region but smaller for a homogeneous neighborhood. Accordingly, the transition region will have larger local entropy values than those in nontransition regions of the image.11 The weighted sum (v and w) of the magnitude of the gradient and the local entropy was used to calculate the pixels’ importance and can be defined as follows,\nwhere gra (x, y) and entr (x, y) are the magnitude of the gradient and the local entropy for pixel (x, y), respectively, and v and w are the weight for balancing the local entropy and the magnitude of gradient features, respectively. At this stage, the grayscale subimage was converted to a binary image to obtain an image with two colors: white for the bone and black for the background of the subimage. To enable the classification process, the original grayscale X1–X7 (where X1 denotes lower density and X7 denotes higher density) was converted to RBG (red, blue, green) color for easy identification, as shown in Figure 2C. The range of intensity in each block on the original grayscale was used as a training set for k-nearest neighbor, which is a powerful method of nonparametric analysis.12 There are three key elements of this approach: (1) a set of labeled objects, (2) a distance or similarity metric to compute distance between objects, and (3) the value of the number of nearest neighbor’s k. 
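To make this classification step concrete, here is a minimal sketch of block-wise k-nearest neighbor over intensity features using scikit-learn; the paper does not name a library, and the reference intensities for X1 to X7, the value of k, and the choice of raw grayscale value as the feature are placeholders, not values taken from the study.

```python
# Illustrative k-NN classification of grayscale values into classes
# X1 (lowest density) ... X7 (highest density). Reference intensities, k,
# and the single-intensity feature are assumed for the sake of the example.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def fit_block_classifier(reference_intensities, labels, k=3):
    """reference_intensities: grayscale values representative of each class;
    labels: matching integers 1..7 for X1..X7."""
    knn = KNeighborsClassifier(n_neighbors=k)   # majority vote among the k nearest
    knn.fit(np.asarray(reference_intensities, dtype=float).reshape(-1, 1),
            np.asarray(labels))
    return knn


def classify_pixels(knn, bone_pixel_values):
    """Assign each bone pixel to a class and return the fraction (%) per class."""
    preds = knn.predict(np.asarray(bone_pixel_values, dtype=float).reshape(-1, 1))
    counts = np.bincount(preds, minlength=8)[1:]           # counts for classes 1..7
    return preds, 100.0 * counts / max(counts.sum(), 1)


# Hypothetical usage: one reference intensity per class, then three test pixels.
knn = fit_block_classifier([20, 60, 100, 140, 180, 220, 250], [1, 2, 3, 4, 5, 6, 7])
classes, fractions = classify_pixels(knn, [35, 150, 240])
print(classes)     # predicted class label (1-7) for each pixel
print(fractions)   # percentage of pixels falling in each of X1..X7
```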
To classify an unlabeled object, the distance of this object to the labeled objects is computed, and its k-nearest neighbors are identified; the nearest-neighbor list is thus obtained, the test object is classified based on the majority class of its nearest neighbors, and the class labels of these nearest neighbors are then used to determine the class label of the object.12–14 The fractional percentage for the bone was calculated by taking each pixel in the bone and classifying it with k-nearest neighbor to specify each pixel’s structure according to the classification range (X1–X7). Figure 2D–F show the segmentation process, the edge detection method, and classification, respectively.\n Measurement of serum zinc and calcium concentrations Blood was obtained from rats at 2, 4, and 8 weeks by cardiac puncture under anesthesia. Serum was collected by centrifugation at 3,000 rpm for 15 minutes and stored at −20°C. Zinc and calcium concentrations were analyzed using Randox analytical kits (Randox, Crumlin, UK) on a Selectra XL automated chemistry analyzer (Vita Scientific, Dieren, The Netherlands) according to the manufacturer’s instructions.\nBlood was obtained from rats at 2, 4, and 8 weeks by cardiac puncture under anesthesia. Serum was collected by centrifugation at 3,000 rpm for 15 minutes and stored at −20°C. Zinc and calcium concentrations were analyzed using Randox analytical kits (Randox, Crumlin, UK) on a Selectra XL automated chemistry analyzer (Vita Scientific, Dieren, The Netherlands) according to the manufacturer’s instructions.\n Bone ash weight, calcium and zinc content Bone calcium was determined by atomic absorption spectrophotometry (S series GE712405 v1.30; Thermo Fisher Scientific, Waltham, MA, USA) using a procedure reported by Harrison et al,15 with some modifications. Briefly, bone samples were dried at 100°C for 24 hours before they were ashed in a muffle furnace at 550°C–600°C for another 24 hours and the ash weighed. The ash was pulverized into powder and hydrolyzed with 6 M HCl. The hydrolysate was then used to determine the calcium content at a wavelength of 422.7 nm, lamp current 100%, and a band pass and measurement time of 0.5 nm and 4 seconds, respectively. Zinc was determined at a wavelength of 213.9 nm, lamp current 75%, and a band pass and measurement time of 0.2 nm and 4 minutes, respectively. Both calcium and zinc concentrations were expressed in mg/L.\nBone calcium was determined by atomic absorption spectrophotometry (S series GE712405 v1.30; Thermo Fisher Scientific, Waltham, MA, USA) using a procedure reported by Harrison et al,15 with some modifications. Briefly, bone samples were dried at 100°C for 24 hours before they were ashed in a muffle furnace at 550°C–600°C for another 24 hours and the ash weighed. The ash was pulverized into powder and hydrolyzed with 6 M HCl. The hydrolysate was then used to determine the calcium content at a wavelength of 422.7 nm, lamp current 100%, and a band pass and measurement time of 0.5 nm and 4 seconds, respectively. Zinc was determined at a wavelength of 213.9 nm, lamp current 75%, and a band pass and measurement time of 0.2 nm and 4 minutes, respectively. Both calcium and zinc concentrations were expressed in mg/L.\n Histology Bones were fixed in 10% neutral formalin for 3 days, during which the formalin was changed every 24 hours. The bones were transferred to 80% formic acid for decalcification for 1 week, then embedded in paraffin and cut into longitudinal sections of 5 μm thickness. 
The sections were stained with hematoxylin and eosin and later viewed under a light microscope.\nBones were fixed in 10% neutral formalin for 3 days, during which the formalin was changed every 24 hours. The bones were transferred to 80% formic acid for decalcification for 1 week, then embedded in paraffin and cut into longitudinal sections of 5 μm thickness. The sections were stained with hematoxylin and eosin and later viewed under a light microscope.\n Immunohistochemistry Immunohistochemistry to detect the presence of a nuclear antigen was performed following a previously reported procedure,16 with some modifications. Bone sections were mounted on gelatin-coated glass slides, deparaffinized in three changes of xylol, dehydrated in graded alcohol, and then washed with distilled water. The sections were placed in 10 mM citrate buffer, pH 6.0, for 10 minutes at 50 W in a microwave, then cooled at room temperature for 5 minutes. Nonspecific binding was covered using 5% bovine serum albumin (BSA). Sections were then incubated using hydrogen peroxide (3%) for 30 minutes to block endogenous peroxidase activity and were then washed in phosphate buffered saline containing 0.2% solution of Tween-20 and distilled water. PC10 monoclonal antibody (DakoCytomation, Copenhagen, Denmark) was used as the primary antibody for 1 hour at a ratio of 1:200 then rinsed in phosphate buffered saline and reacted with polyclonal rabbit anti-mouse secondary antibody for 10 minutes at room temperature. The peroxidase reactions were developed in 3,3 diaminobenzidine in chromagen solution (DakoCytomation, Copenhagen, Denmark), counterstained with methylene blue for 2 minutes, and finally, the sections were cleared in xylene and coverslipped for examination under the light microscope. Images were captured at strategic locations on the slide using an image analyzer (Analysis LS Research) attached to the microscope (Olympus BX51, Japan).\nImmunohistochemistry to detect the presence of a nuclear antigen was performed following a previously reported procedure,16 with some modifications. Bone sections were mounted on gelatin-coated glass slides, deparaffinized in three changes of xylol, dehydrated in graded alcohol, and then washed with distilled water. The sections were placed in 10 mM citrate buffer, pH 6.0, for 10 minutes at 50 W in a microwave, then cooled at room temperature for 5 minutes. Nonspecific binding was covered using 5% bovine serum albumin (BSA). Sections were then incubated using hydrogen peroxide (3%) for 30 minutes to block endogenous peroxidase activity and were then washed in phosphate buffered saline containing 0.2% solution of Tween-20 and distilled water. PC10 monoclonal antibody (DakoCytomation, Copenhagen, Denmark) was used as the primary antibody for 1 hour at a ratio of 1:200 then rinsed in phosphate buffered saline and reacted with polyclonal rabbit anti-mouse secondary antibody for 10 minutes at room temperature. The peroxidase reactions were developed in 3,3 diaminobenzidine in chromagen solution (DakoCytomation, Copenhagen, Denmark), counterstained with methylene blue for 2 minutes, and finally, the sections were cleared in xylene and coverslipped for examination under the light microscope. Images were captured at strategic locations on the slide using an image analyzer (Analysis LS Research) attached to the microscope (Olympus BX51, Japan).\n Statistical analysis Data are presented as mean ± standard deviation. 
Differences were determined by one-way analysis of variance (ANOVA) and mean comparison by Tukey–Kramer post hoc test, using JMP10 statistical software (SAS, Cary, NC, USA). Differences were considered significant at P < 0.05.\nData are presented as mean ± standard deviation. Differences were determined by one-way analysis of variance (ANOVA) and mean comparison by Tukey–Kramer post hoc test, using JMP10 statistical software (SAS, Cary, NC, USA). Differences were considered significant at P < 0.05.", "A total of 78 mature 12-week-old Sprague Dawley rats were procured from the Faculty of Veterinary Medicine, University Putra Malaysia (Selangor, Malaysia). They were acclimatized for 2 weeks before ovariectomy or sham operation. Six rats were assigned to each of the 13 treatment groups: group 1: nonovariectomized, nontreated (sham) negative control; group 2: ovariectomized, nontreated positive control; group 3: ovariectomized and treated with estrogen (0.2 mg/kg); groups 4 and 5: ovariectomized and treated with Remifemin® (10 and 20 mg/kg); groups 6–13: ovariectomized and treated with GBR, acylated steryl glucosides (ASG), gamma amino-butyric acid (GABA) or gamma oryzanol (ORZ) at two different doses (100 and 200 mg/kg). Extracts were administered by oral gavage once a day for 8 weeks. The entire animal study was carried out at the animal facility, Faculty of Medicine and Health Science, University Putra Malaysia. Rats were individually housed in plastic cages in a temperature and humidity controlled air-conditioned room (25°C–30°C) with exposure to a 12/12-hour light/dark cycle. The study was carried out according to the guidelines for the use of animals and was approved by the Animal Care and Use Committee (ACUC) of the Faculty of Medicine and Health Sciences, University Putra Malaysia.", "Femurs were isolated and freed from the surrounding muscles, and bone diameters (diameter of the femur in the distal ¼ and diameter of the femur in the distal ½) were determined using a KERNN 150 mm digital caliper (Kern and Sohn, Balingen, Germany). The bone mass density was estimated using both Archimede’s principle and the X-ray edge detection technique.", "The BMD of the left femur was calculated by measuring the mass of the bone in air and measuring the weight again after submerging in a specific volume of distilled water. From these two masses, the density was then calculated using the formula,\nwhere d is density, w is weight, w1 is weight in air, w2 is weight in water, and P is density of distilled water at a given temperature and is expressed as g/cm3.", "The bones were placed on a grid view specimen container (CIRS, Norfolk, VA, USA) at the 3, 6, 9, and 12 o’clock positions, and exposed to X-rays using a Philips Mammo diagnostic 3000 X-ray machine (Philips Healthcare, Andover, MA, USA) at controlled exposures of 62 kVp, 4 mAs.", "We designed an algorithm that had the ability to classify the bone structure automatically as summarized in Figure 1. This algorithm was designed with MATLAB software (MathWorks, Natick, MA, USA) and runs on a PC with a HP Pavilion Notebook dv6 computer (Hewlett-Packard, Palo Alto, CA, USA) with the following specifications: core i5, 2.4 GHz processor speed, 520 M cache. According to the proposed algorithm, the original image obtained from the X-ray machine (Figure 2A) was filtered through a Wiener filter to remove the noise in the image. The filtered image was then divided into different subimages (Figure 2B). 
The local range filter and local entropy were applied to extract the features of this subimage. The local range filter was used to specify the magnitude of the gradient. Local range filtering was determined by calculating the difference between the maximum and minimum range values of the filtered window. Local range filtering has a short calculation time, as it operates on only a small number of inputs for each output pixel.10 On the other hand, using local entropy will create a textured image wherein the local entropy is related to the variance in the grayscale in the neighborhood. The local entropy is larger for a heterogeneous region but smaller for a homogeneous neighborhood. Accordingly, the transition region will have larger local entropy values than those in nontransition regions of the image.11 The weighted sum (v and w) of the magnitude of the gradient and the local entropy was used to calculate the pixels’ importance and can be defined as follows,\nwhere gra (x, y) and entr (x, y) are the magnitude of the gradient and the local entropy for pixel (x, y), respectively, and v and w are the weight for balancing the local entropy and the magnitude of gradient features, respectively. At this stage, the grayscale subimage was converted to a binary image to obtain an image with two colors: white for the bone and black for the background of the subimage. To enable the classification process, the original grayscale X1–X7 (where X1 denotes lower density and X7 denotes higher density) was converted to RBG (red, blue, green) color for easy identification, as shown in Figure 2C. The range of intensity in each block on the original grayscale was used as a training set for k-nearest neighbor, which is a powerful method of nonparametric analysis.12 There are three key elements of this approach: (1) a set of labeled objects, (2) a distance or similarity metric to compute distance between objects, and (3) the value of the number of nearest neighbor’s k. To classify an unlabeled object, the distance of this object to the labeled objects is computed, and its k-nearest neighbors are identified; the nearest-neighbor list is thus obtained, the test object is classified based on the majority class of its nearest neighbors, and the class labels of these nearest neighbors are then used to determine the class label of the object.12–14 The fractional percentage for the bone was calculated by taking each pixel in the bone and classifying it with k-nearest neighbor to specify each pixel’s structure according to the classification range (X1–X7). Figure 2D–F show the segmentation process, the edge detection method, and classification, respectively.", "Blood was obtained from rats at 2, 4, and 8 weeks by cardiac puncture under anesthesia. Serum was collected by centrifugation at 3,000 rpm for 15 minutes and stored at −20°C. Zinc and calcium concentrations were analyzed using Randox analytical kits (Randox, Crumlin, UK) on a Selectra XL automated chemistry analyzer (Vita Scientific, Dieren, The Netherlands) according to the manufacturer’s instructions.", "Bone calcium was determined by atomic absorption spectrophotometry (S series GE712405 v1.30; Thermo Fisher Scientific, Waltham, MA, USA) using a procedure reported by Harrison et al,15 with some modifications. Briefly, bone samples were dried at 100°C for 24 hours before they were ashed in a muffle furnace at 550°C–600°C for another 24 hours and the ash weighed. The ash was pulverized into powder and hydrolyzed with 6 M HCl. 
The hydrolysate was then used to determine the calcium content at a wavelength of 422.7 nm, lamp current 100%, and a band pass and measurement time of 0.5 nm and 4 seconds, respectively. Zinc was determined at a wavelength of 213.9 nm, lamp current 75%, and a band pass and measurement time of 0.2 nm and 4 minutes, respectively. Both calcium and zinc concentrations were expressed in mg/L.", "Bones were fixed in 10% neutral formalin for 3 days, during which the formalin was changed every 24 hours. The bones were transferred to 80% formic acid for decalcification for 1 week, then embedded in paraffin and cut into longitudinal sections of 5 μm thickness. The sections were stained with hematoxylin and eosin and later viewed under a light microscope.", "Immunohistochemistry to detect the presence of a nuclear antigen was performed following a previously reported procedure,16 with some modifications. Bone sections were mounted on gelatin-coated glass slides, deparaffinized in three changes of xylol, dehydrated in graded alcohol, and then washed with distilled water. The sections were placed in 10 mM citrate buffer, pH 6.0, for 10 minutes at 50 W in a microwave, then cooled at room temperature for 5 minutes. Nonspecific binding was covered using 5% bovine serum albumin (BSA). Sections were then incubated using hydrogen peroxide (3%) for 30 minutes to block endogenous peroxidase activity and were then washed in phosphate buffered saline containing 0.2% solution of Tween-20 and distilled water. PC10 monoclonal antibody (DakoCytomation, Copenhagen, Denmark) was used as the primary antibody for 1 hour at a ratio of 1:200 then rinsed in phosphate buffered saline and reacted with polyclonal rabbit anti-mouse secondary antibody for 10 minutes at room temperature. The peroxidase reactions were developed in 3,3 diaminobenzidine in chromagen solution (DakoCytomation, Copenhagen, Denmark), counterstained with methylene blue for 2 minutes, and finally, the sections were cleared in xylene and coverslipped for examination under the light microscope. Images were captured at strategic locations on the slide using an image analyzer (Analysis LS Research) attached to the microscope (Olympus BX51, Japan).", "Data are presented as mean ± standard deviation. Differences were determined by one-way analysis of variance (ANOVA) and mean comparison by Tukey–Kramer post hoc test, using JMP10 statistical software (SAS, Cary, NC, USA). Differences were considered significant at P < 0.05.", "The diameter of the femur in the distal ½ in the sham nonovariectomized group, and the ovariectomized groups treated with 0.2 mg/kg estrogen, 200 mg/kg ASG, GBR, or ORZ was significantly higher compared to the ovariectomized, nontreated group (P < 0.05). The diameter of the femur in the distal ¼ in the ovariectomized, nontreated group and groups treated with ORZ 200 mg/kg and estrogen 0.2 mg/kg was significantly higher (P < 0.05) compared to the nontreated group (Table 1).\nThe bone densities of the sham nonovariectomized group and groups treated with estrogen 0.2 mg/kg, and ORZ, ASG, GABA each at 200 mg/kg, and Remifemin® 20 mg/kg were significantly higher compared to the ovariectomized, nontreated group (P < 0.002, P < 0.03, and P < 0.01, respectively). The other treated groups expressed a higher density than the nontreated group although not significantly (P > 0.05) when using Archimede’s principle (Table 1). 
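The group comparisons reported in this section follow the one-way ANOVA with Tukey–Kramer post hoc test described under statistical analysis, which was run in JMP10. As an illustration only, an equivalent comparison could be sketched in Python as below; the BMD values used are made up for the example and are not the study's data.

```python
# Illustrative one-way ANOVA + Tukey-type post hoc comparison (the study used JMP10).
# The BMD values below are invented for the example and are not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "sham":    [1.42, 1.39, 1.45, 1.41, 1.44, 1.40],   # hypothetical g/cm^3 values
    "ovx":     [1.21, 1.18, 1.23, 1.19, 1.22, 1.20],
    "asg_200": [1.38, 1.35, 1.40, 1.37, 1.39, 1.36],
}

# Overall test for any difference among the group means.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4g}")

# Pairwise post hoc comparisons (Tukey HSD, which handles unequal group sizes).
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```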
The X-ray edge detection technique yielded the following percentage increase in bone formation in these groups: ASG 200 mg/kg, 0.38%; estrogen 0.2 mg/kg, 1.55%; GABA 200 mg/kg, 0.02%; ORZ 200 mg/kg, 0.6%; Remifemin® 20 mg/kg, 0.04%; and SHAM, 0.28%. These treatment groups showed an increase in BMD and fell under the X4 scale of classification (Table 2). A significant correlation was observed between densities obtained using Archimede’s principle and those obtained in X3 (bone density values in class 3 grading using X-ray morphometry) (R2 = 0.737, P = 0.004); a nonsignificant correlation was observed between the two methods in groups that showed a significant increase in bone formation (R2 = 0.710, P = 0.179) (Table 3).\nThe weight of the femur bone immediately after sacrifice of rats (wet weight) in the ovariectomized, nontreated group was significantly lower (P < 0.05) than in the sham and other treated groups (Table 4). Bone ash weight was higher in the sham group and groups treated with 0.2 mg/kg estrogen, ASG 200 mg/kg, and ORZ 100 mg/kg and 200 mg/kg, and significantly different (P < 0.05) from that of the ovariectomized, nontreated group and the other treatment groups, as shown in Table 4.\nBone calcium content in the ovariectomized, nontreated group was significantly lower (P < 0.05) than that in sham and all ovariectomized treated groups. No significant difference in terms of calcium content was observed in groups treated with estrogen 0.2 mg/kg, GBR 200 mg/kg, ASG 100 mg/kg and 200 mg/kg, ORZ 100 mg/kg and 200 mg/kg, GABA 200 mg/kg, and the sham nonovariectomized group (P < 0.05) (Table 4).\nBone zinc concentration was significantly lower in the ovariectomized, nontreated group when compared to the sham and all treated groups (P < 0.05). The concentration increased significantly (P < 0.05) in groups treated with estrogen 0.2 mg/kg, Remifemin® 10 mg/kg, ASG 200 mg/kg, GABA 200 mg/kg, ORZ 100 mg/kg and 200 mg/kg compared to the sham nonovariectomized group, as shown in Table 4.\nThere was significant difference in the levels of serum alkaline phosphatase (ALP) between 2 weeks and 8 weeks after treatment in the group treated with ASG 100 mg/kg compared to the ovariectomized, nontreated group and other treatment groups: P < 0.05. During the fourth week after treatment, a significant difference was observed between sham nonovariectomized rats and the group treated with Remifemin® 10 mg/kg (P < 0.0451) (Table 5).\nSerum calcium concentrations were significantly different between treated groups in the eighth week only after commencement of treatment (P = 0.0001). There was no significant difference between the treated groups in the second and fourth week (P = 0.9390 and P = 0.8948, respectively) (Table 5).\nSerum zinc concentrations showed a significant increase in the sham nontreated group, ORZ 200 mg/kg, and GBR 200 mg/kg 2 weeks after treatment compared to ovariectomized, nontreated and other treated groups (P < 0.0001) (Table 5). After 4 weeks of treatment, serum zinc concentration in groups treated with GABA 200 mg/kg, estrogen 0.2 mg/kg, and GBR 200 mg/kg were significantly different compared to other treatment groups (P < 0.0001), as shown in Table 5. 
Zinc concentrations in the nonovariectomized (sham) group and groups treated with GABA 200 mg/kg and estrogen 0.2 mg/kg in the eighth week differed significantly from those of the other treatment groups (P < 0.0001) (Table 5).\n Bone histology Histologically, the sham nonovariectomized group showed a normal bone cell configuration with more osteocytes and minimal osteoblastic activity compared to ovariectomized, nontreated group (Figure 3A). The ovariectomized, nontreated rats showed vacuolation in the bone marrow, reduced marrow and osteogenic activity, complete absence of osteoblasts, reduced bone density, and marked degenerative changes (Figure 3B), while the group treated with estrogen showed an increase in bone formation with minimal marrow activity and osteoblast entrapment at the margins (Figure 3C). Similarly, the Remifemin®-treated group showed evidence of an increase in new bone formation (Figure 3D). The GBR-treated group showed active proliferation of new bone cells (Figure 3E) and the ASG-treated group showed bone marrow with lymphoblastic cells with some proteinaceous fluid. Osteocytes were entrapped at the margin. Active osteoblasts at the margin were in the process of converting to osteocytes, showing active bone formation and increased bone density (Figure 3F). Hematopoietic cells, mature osteocytes, and osteoblasts were present. The GABA-treated group also showed evidence of active bone formation (Figure 3G), and the ORZ-treated group showed an increase in osteoblastic activity and an increase in new bone formation (Figure 3H).\nHistologically, the sham nonovariectomized group showed a normal bone cell configuration with more osteocytes and minimal osteoblastic activity compared to ovariectomized, nontreated group (Figure 3A). The ovariectomized, nontreated rats showed vacuolation in the bone marrow, reduced marrow and osteogenic activity, complete absence of osteoblasts, reduced bone density, and marked degenerative changes (Figure 3B), while the group treated with estrogen showed an increase in bone formation with minimal marrow activity and osteoblast entrapment at the margins (Figure 3C). Similarly, the Remifemin®-treated group showed evidence of an increase in new bone formation (Figure 3D). The GBR-treated group showed active proliferation of new bone cells (Figure 3E) and the ASG-treated group showed bone marrow with lymphoblastic cells with some proteinaceous fluid. Osteocytes were entrapped at the margin. Active osteoblasts at the margin were in the process of converting to osteocytes, showing active bone formation and increased bone density (Figure 3F). Hematopoietic cells, mature osteocytes, and osteoblasts were present. The GABA-treated group also showed evidence of active bone formation (Figure 3G), and the ORZ-treated group showed an increase in osteoblastic activity and an increase in new bone formation (Figure 3H).\n Immunohistochemistry Immunohistochemistry of the bone tissue gave weak positive staining, indicating new bone formation in all treated groups, and the reaction was more prominent in the groups treated with estrogen, remifemin, and ASG (Figure 4C, D and F). While groups treated with GBR, GABA and ORZ showed a milder reaction (Figure 4E, G and H). 
The sham and OVX-untreated groups both showed very little or no reaction (Figure 4A and B).\nImmunohistochemistry of the bone tissue gave weak positive staining, indicating new bone formation in all treated groups, and the reaction was more prominent in the groups treated with estrogen, Remifemin®, and ASG (Figure 4C, D and F), while groups treated with GBR, GABA, and ORZ showed a milder reaction (Figure 4E, G and H). The sham and OVX-untreated groups both showed very little or no reaction (Figure 4A and B).", "Histologically, the sham nonovariectomized group showed a normal bone cell configuration with more osteocytes and minimal osteoblastic activity compared to the ovariectomized, nontreated group (Figure 3A). The ovariectomized, nontreated rats showed vacuolation in the bone marrow, reduced marrow and osteogenic activity, complete absence of osteoblasts, reduced bone density, and marked degenerative changes (Figure 3B), while the group treated with estrogen showed an increase in bone formation with minimal marrow activity and osteoblast entrapment at the margins (Figure 3C). Similarly, the Remifemin®-treated group showed evidence of an increase in new bone formation (Figure 3D). The GBR-treated group showed active proliferation of new bone cells (Figure 3E), and the ASG-treated group showed bone marrow containing lymphoblastic cells and some proteinaceous fluid. Osteocytes were entrapped at the margin. Active osteoblasts at the margin were in the process of converting to osteocytes, showing active bone formation and increased bone density (Figure 3F). Hematopoietic cells, mature osteocytes, and osteoblasts were present. The GABA-treated group also showed evidence of active bone formation (Figure 3G), and the ORZ-treated group showed an increase in osteoblastic activity and an increase in new bone formation (Figure 3H).", "Immunohistochemistry of the bone tissue gave weak positive staining, indicating new bone formation in all treated groups, and the reaction was more prominent in the groups treated with estrogen, Remifemin®, and ASG (Figure 4C, D and F), while groups treated with GBR, GABA, and ORZ showed a milder reaction (Figure 4E, G and H). The sham and OVX-untreated groups both showed very little or no reaction (Figure 4A and B).", "It was evident in this study that GBR bioactives increased bone density. To the best of our knowledge, this is the first report on the effects of GBR on BMD and on the application of the X-ray edge detection technique in the measurement of bone density.\nOur results using the edge detection technique quantified the percentage increase in bone formation for the individual treatments, and the results were highly correlated with those obtained using the standard Archimede’s principle.\nThe ovariectomized, nontreated rats showed a decrease in BMD. Ovariectomy is known to stimulate oxidative stress and interfere with the antioxidant system in rats.17 This leads to an increase in the level of oxidative stress markers such as hydrogen peroxide, which is highly indicative of bone loss in estrogen shortage,18 and also tumor necrosis factor alpha, which is generated as a result of the low level of thiol antioxidants within bone cells.19–21 Our data show that GBR bioactives, specifically ASG, GABA, and ORZ, increased the BMD to a level almost the same as that of the sham nonovariectomized group. 
GABA and ORZ from GBR have been shown to upregulate genes related to bone formation in ovariectomized rats.22 Serum ALP activity increased in the ovariectomized, nontreated group and the other treatment groups, especially during the fourth week after ovariectomy. This might be due to an increase in the rate of bone metabolism shortly after ovariectomy and to an increase in osteogenic activity in the treated groups. An increase in ALP was also reported in ovariectomized rats treated with soy isoflavones.23,24 ALP is known as a prominent marker of bone formation.25 Bone remodeling involves both bone formation and resorption in a coupling effect,26 which might explain the increase in the activity of ALP in ovariectomized rats. Serum and bone calcium levels, bone wet weight, and the ash content of the bone decreased in the ovariectomized, nontreated group due to a decrease in estrogenic activity, which in turn affected the bone mass and composition, calcium content, and zinc concentration. Treatment with ORZ, GABA, and ASG at 200 mg/kg restored these parameters to levels equal to or higher than those in the sham nonovariectomized group. This gives a clear indication that these bioactives are fully involved in osteogenesis. The decrease in zinc concentration observed in ovariectomized, nontreated rats has also been reported by other researchers under the low-estrogen conditions of menopausal women and in ovariectomized rats.27,28 This decrease in zinc concentration is due to oxidative stress, and zinc is known to play a major regulatory role in the action of antioxidant enzymes.29 Treatment for 8 weeks with GBR bioactives increased the level of zinc in bone tissue to a level higher than that in the control sham nonovariectomized rats and groups treated with ORZ, ASG, estrogen, and Remifemin®. It has been reported that zinc is a potent inhibitor of osteoclastogenesis and an osteoblast stimulator in vitro.30,31 Zinc is known to stimulate protein synthesis, and its role in bone formation and in preserving bone mass is greater than that of bone-regulating hormones.32 Histology and the polyclonal nuclear antigen reactions showed evidence of active bone formation, with new bone cells in their active proliferation stage.", "This study indicates that the increase in BMD and osteoprotective effect of GBR bioactives might involve the activation of zinc formation and increased calcium, coupled with an increase in ALP, which together increase osteoblastic activity and subsequently bone formation. However, this proposed mechanistic effect requires further clarification, and more research is needed to address the molecular mechanism of zinc stimulation by GBR bioactives." ]
[ null, "materials|methods", null, null, null, null, null, null, null, null, null, "methods", "results", null, null, "discussion", null ]
[ "Archimede’s principle", "atomic absorption spectrophotometry", "X-ray edge detection technique", "bone mass density", "germinated brown rice bioactives" ]
Introduction: Traditionally, dual X-ray absorptiometry and Archimede’s principle are employed in the measurement of bone mass density (BMD) in biomedical research. The use of dual X-ray absorptiometry to measure bone density has become very popular over the years, and although a significant correlation was obtained when comparing BMD using dual X-ray absorptiometry and Archimede’s principle in rats, the precision, accuracy, and sensitivity of this method in bone density determination remains a topic of concern.1 Previous research has revealed that a progressive decline in BMD occurs with estrogen deficiency at menopause, exposing women to a greater risk for the development of osteoporosis.2,3 Postmenopausal osteoporosis is now a worldwide problem, and the conventional treatment of estrogen replacement and other related chemotherapy are reported to be associated with side effects ranging from alterations in physiology to therapy-related cancers.4–6 Fisher et al7 described osteoporotic fracture as an important public health issue with increasing morbidity, mortality, and prevalence. In the United States alone, osteoporosis affects an estimated 10–28 million Americans over the age of 50, with 33 million people, mostly women, having low bone density and close to 1.2 million bone fractures being related to osteoporosis.8,9 In view of this, natural products, especially phytoestrogens, are being explored as an alternative to hormone therapy for the management of osteoporosis and other related bone degenerative diseases. In this study, germinated brown rice (GBR), a rice with increased levels of bioactive compounds compared to polished rice and brown rice, with potential benefits in the management of chronic disease, was administered to rats with the aim of studying its ability to protect bone from osteoporosis. Bone ash weight and calcium and zinc content were quantified, and bone histology and polyclonal nuclear antigen staining were used to detect new bone formation. BMD was estimated using Archimede’s principle and was compared with edge detection X-ray morphometry in order to explore its applicability in measuring bone density in biomedical research. Materials and methods: BERNAS Rice Company (Sri Tiram Jaya, Malaysia) supplied the brown rice. Cimicifuga racemosa (Remifemin® 20 mg/tab) was procured from Schaper and Brümmer GmbH (Salzgitter, Germany). Conjugated estrogen (Premarin® 0.625 mg/tab) was obtained from Wyeth Medica Ireland (Newbridge, Ireland). Xylazine HCl 20 mg/mL and ketamine HCl were obtained from Troy Laboratories (Smithfield, Australia). Animals, grouping, dosing, and frequency A total of 78 mature 12-week-old Sprague Dawley rats were procured from the Faculty of Veterinary Medicine, University Putra Malaysia (Selangor, Malaysia). They were acclimatized for 2 weeks before ovariectomy or sham operation. Six rats were assigned to each of the 13 treatment groups: group 1: nonovariectomized, nontreated (sham) negative control; group 2: ovariectomized, nontreated positive control; group 3: ovariectomized and treated with estrogen (0.2 mg/kg); groups 4 and 5: ovariectomized and treated with Remifemin® (10 and 20 mg/kg); groups 6–13: ovariectomized and treated with GBR, acylated steryl glucosides (ASG), gamma amino-butyric acid (GABA) or gamma oryzanol (ORZ) at two different doses (100 and 200 mg/kg). Extracts were administered by oral gavage once a day for 8 weeks. 
The entire animal study was carried out at the animal facility, Faculty of Medicine and Health Science, University Putra Malaysia. Rats were individually housed in plastic cages in a temperature- and humidity-controlled air-conditioned room (25°C–30°C) with exposure to a 12/12-hour light/dark cycle. The study was carried out according to the guidelines for the use of animals and was approved by the Animal Care and Use Committee (ACUC) of the Faculty of Medicine and Health Sciences, University Putra Malaysia. Bone mass density and diameter of the femur bone Femurs were isolated and freed from the surrounding muscles, and bone diameters (diameter of the femur in the distal ¼ and diameter of the femur in the distal ½) were determined using a KERN 150 mm digital caliper (Kern and Sohn, Balingen, Germany). The bone mass density was estimated using both Archimede’s principle and the X-ray edge detection technique. Archimede’s principle The BMD of the left femur was calculated by measuring the mass of the bone in air and measuring the weight again after submerging it in a specific volume of distilled water. From these two masses, the density was then calculated using the formula sketched below, where d is density, w is weight, w1 is weight in air, w2 is weight in water, and P is the density of distilled water at a given temperature; density is expressed as g/cm3.
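The density formula itself appears to have been dropped during text extraction. A minimal LaTeX reconstruction, assuming the standard Archimedes buoyancy relation implied by the variable definitions above (w1 = weight in air, w2 = weight in water, P = density of distilled water), is:
% Hypothetical reconstruction of the omitted bone-density formula;
% the symbols follow the definitions given in the text.
\[
  d \;=\; \frac{w_{1}}{w_{1}-w_{2}} \times P
\]
% With w1 and w2 measured in grams and P in g/cm^3, d comes out in g/cm^3,
% matching the units reported in the study.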
X-ray morphometry The bones were placed on a grid view specimen container (CIRS, Norfolk, VA, USA) at the 3, 6, 9, and 12 o’clock positions, and exposed to X-rays using a Philips Mammo diagnostic 3000 X-ray machine (Philips Healthcare, Andover, MA, USA) at controlled exposures of 62 kVp, 4 mAs. Automatic classification of bone structure We designed an algorithm able to classify the bone structure automatically, as summarized in Figure 1. The algorithm was implemented in MATLAB (MathWorks, Natick, MA, USA) and run on an HP Pavilion dv6 notebook PC (Hewlett-Packard, Palo Alto, CA, USA) with the following specifications: Core i5, 2.4 GHz processor speed, 520 M cache. According to the proposed algorithm, the original image obtained from the X-ray machine (Figure 2A) was first filtered with a Wiener filter to remove noise. The filtered image was then divided into different subimages (Figure 2B). A local range filter and local entropy were applied to extract the features of each subimage. The local range filter was used to specify the magnitude of the gradient: local range filtering was determined by calculating the difference between the maximum and minimum values within the filter window, and it has a short calculation time, as it operates on only a small number of inputs for each output pixel.10 Local entropy, on the other hand, creates a textured image in which the local entropy reflects the variance of the grayscale in the neighborhood; it is larger for a heterogeneous region and smaller for a homogeneous neighborhood, so transition regions have larger local entropy values than nontransition regions of the image.11 The weighted sum (with weights v and w) of the gradient magnitude and the local entropy was used to calculate each pixel’s importance and can be defined as in the sketch after this paragraph, where gra (x, y) and entr (x, y) are the magnitude of the gradient and the local entropy for pixel (x, y), respectively, and v and w are the weights balancing the local entropy and the gradient magnitude features, respectively. At this stage, the grayscale subimage was converted to a binary image with two colors: white for the bone and black for the background of the subimage. To enable the classification process, the original grayscale classes X1–X7 (where X1 denotes lower density and X7 denotes higher density) were converted to RGB (red, green, blue) color for easy identification, as shown in Figure 2C. The range of intensity in each block on the original grayscale was used as a training set for k-nearest neighbor, a powerful nonparametric classification method.12 There are three key elements of this approach: (1) a set of labeled objects, (2) a distance or similarity metric to compute the distance between objects, and (3) the value of k, the number of nearest neighbors.
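The weighted-sum expression referred to above is not reproduced in the source text. A hedged reconstruction from the stated definitions (gra and entr as the gradient magnitude and local entropy of a pixel, v and w as their balancing weights) would be:
% Reconstruction based only on the variable definitions in the text;
% the actual weights v and w used in the study are not reported here.
\[
  I(x, y) \;=\; v \cdot gra(x, y) \;+\; w \cdot entr(x, y)
\]
where I(x, y) denotes the importance assigned to pixel (x, y).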
To classify an unlabeled object, its distance to the labeled objects is computed and its k nearest neighbors are identified; the class labels of these neighbors are then used to assign the test object to the majority class among them.12–14 The fractional percentage for the bone was calculated by taking each pixel in the bone and classifying it with k-nearest neighbor to specify each pixel’s structure according to the classification range (X1–X7). Figure 2D–F show the segmentation process, the edge detection method, and the classification, respectively. A worked sketch of this pixel-wise classification step is given after this paragraph.
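As an illustration only, here is a minimal Python sketch of the pixel-wise k-nearest-neighbour classification described above. The feature extraction (local range as a gradient-magnitude proxy, local entropy) and the seven density classes X1–X7 follow the text, but the window size, k, the weights, and the training intensities are assumptions; the study itself used a MATLAB implementation.

import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from skimage.filters.rank import entropy
from skimage.morphology import square
from sklearn.neighbors import KNeighborsClassifier

def pixel_importance(img, win=9, v=0.5, w=0.5):
    # Feature/segmentation stage described in the text: weighted sum of the
    # local range (gradient proxy) and the local entropy, per pixel.
    # img is assumed to be a float image scaled to [0, 1].
    local_range = maximum_filter(img, size=win) - minimum_filter(img, size=win)
    local_ent = entropy((img * 255).astype(np.uint8), square(win))
    return v * local_range + w * local_ent

# Hypothetical training set: one representative grayscale intensity per class,
# X1 (lower density) through X7 (higher density). In the study, the intensity
# range of each grading block served as the training set.
train_X = np.array([[0.10], [0.25], [0.40], [0.55], [0.70], [0.85], [0.95]])
train_y = np.array(["X1", "X2", "X3", "X4", "X5", "X6", "X7"])

knn = KNeighborsClassifier(n_neighbors=3)   # k = 3 is an arbitrary choice here
knn.fit(train_X, train_y)

def classify_bone(img, bone_mask):
    # Classify every bone pixel (bone_mask == True) by its intensity and
    # return the fractional percentage falling in each class X1-X7.
    pixels = img[bone_mask].reshape(-1, 1)
    labels = knn.predict(pixels)
    classes, counts = np.unique(labels, return_counts=True)
    return dict(zip(classes, counts / counts.sum()))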
Measurement of serum zinc and calcium concentrations Blood was obtained from rats at 2, 4, and 8 weeks by cardiac puncture under anesthesia. Serum was collected by centrifugation at 3,000 rpm for 15 minutes and stored at −20°C. Zinc and calcium concentrations were analyzed using Randox analytical kits (Randox, Crumlin, UK) on a Selectra XL automated chemistry analyzer (Vita Scientific, Dieren, The Netherlands) according to the manufacturer’s instructions. Bone ash weight, calcium and zinc content Bone calcium was determined by atomic absorption spectrophotometry (S series GE712405 v1.30; Thermo Fisher Scientific, Waltham, MA, USA) using a procedure reported by Harrison et al,15 with some modifications. Briefly, bone samples were dried at 100°C for 24 hours before they were ashed in a muffle furnace at 550°C–600°C for another 24 hours and the ash weighed. The ash was pulverized into powder and hydrolyzed with 6 M HCl. The hydrolysate was then used to determine the calcium content at a wavelength of 422.7 nm, lamp current 100%, and a band pass and measurement time of 0.5 nm and 4 seconds, respectively. Zinc was determined at a wavelength of 213.9 nm, lamp current 75%, and a band pass and measurement time of 0.2 nm and 4 minutes, respectively. Both calcium and zinc concentrations were expressed in mg/L. Histology Bones were fixed in 10% neutral formalin for 3 days, during which the formalin was changed every 24 hours. The bones were transferred to 80% formic acid for decalcification for 1 week, then embedded in paraffin and cut into longitudinal sections of 5 μm thickness.
The sections were stained with hematoxylin and eosin and later viewed under a light microscope. Immunohistochemistry Immunohistochemistry to detect the presence of a nuclear antigen was performed following a previously reported procedure,16 with some modifications. Bone sections were mounted on gelatin-coated glass slides, deparaffinized in three changes of xylol, dehydrated in graded alcohol, and then washed with distilled water. The sections were placed in 10 mM citrate buffer, pH 6.0, for 10 minutes at 50 W in a microwave, then cooled at room temperature for 5 minutes. Nonspecific binding was blocked using 5% bovine serum albumin (BSA). Sections were then incubated with hydrogen peroxide (3%) for 30 minutes to block endogenous peroxidase activity and were then washed in phosphate buffered saline containing 0.2% Tween-20 and in distilled water. PC10 monoclonal antibody (DakoCytomation, Copenhagen, Denmark) was used as the primary antibody for 1 hour at a dilution of 1:200, then rinsed in phosphate buffered saline and reacted with a polyclonal rabbit anti-mouse secondary antibody for 10 minutes at room temperature. The peroxidase reactions were developed with 3,3′-diaminobenzidine chromogen solution (DakoCytomation, Copenhagen, Denmark), counterstained with methylene blue for 2 minutes, and finally, the sections were cleared in xylene and coverslipped for examination under the light microscope. Images were captured at strategic locations on the slide using an image analyzer (Analysis LS Research) attached to the microscope (Olympus BX51, Japan). Statistical analysis Data are presented as mean ± standard deviation.
Differences were determined by one-way analysis of variance (ANOVA) and mean comparison by Tukey–Kramer post hoc test, using JMP10 statistical software (SAS, Cary, NC, USA). Differences were considered significant at P < 0.05.
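For readers reproducing the statistical analysis described above outside JMP, a minimal Python sketch of a one-way ANOVA followed by Tukey-style pairwise comparisons is shown below; this is not the software used in the study, and the group names and values are illustrative only. statsmodels' pairwise_tukeyhsd applies the Tukey–Kramer adjustment when group sizes are unequal.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative bone-density values (g/cm3) for three of the thirteen groups.
sham = np.array([1.32, 1.29, 1.35, 1.31, 1.30, 1.33])
ovx_untreated = np.array([1.18, 1.15, 1.20, 1.17, 1.16, 1.19])
gbr_200 = np.array([1.27, 1.25, 1.29, 1.26, 1.28, 1.24])

# One-way ANOVA across the groups
f_stat, p_value = f_oneway(sham, ovx_untreated, gbr_200)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Tukey post hoc comparison at alpha = 0.05
values = np.concatenate([sham, ovx_untreated, gbr_200])
groups = ["sham"] * 6 + ["OVX"] * 6 + ["GBR200"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))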
Results: The diameter of the femur in the distal ½ in the sham nonovariectomized group, and the ovariectomized groups treated with 0.2 mg/kg estrogen, 200 mg/kg ASG, GBR, or ORZ was significantly higher compared to the ovariectomized, nontreated group (P < 0.05). The diameter of the femur in the distal ¼ in the ovariectomized, nontreated group and groups treated with ORZ 200 mg/kg and estrogen 0.2 mg/kg was significantly higher (P < 0.05) compared to the nontreated group (Table 1).
The bone densities of the sham nonovariectomized group and groups treated with estrogen 0.2 mg/kg, and ORZ, ASG, GABA each at 200 mg/kg, and Remifemin® 20 mg/kg were significantly higher compared to the ovariectomized, nontreated group (P < 0.002, P < 0.03, and P < 0.01, respectively). The other treated groups expressed a higher density than the nontreated group although not significantly (P > 0.05) when using Archimede’s principle (Table 1). The X-ray edge detection technique yielded the following percentage increase in bone formation in these groups: ASG 200 mg/kg, 0.38%; estrogen 0.2 mg/kg, 1.55%; GABA 200 mg/kg, 0.02%; ORZ 200 mg/kg, 0.6%; Remifemin® 20 mg/kg, 0.04%; and SHAM, 0.28%. These treatment groups showed an increase in BMD and fell under the X4 scale of classification (Table 2). A significant correlation was observed between densities obtained using Archimede’s principle and those obtained in X3 (bone density values in class 3 grading using X-ray morphometry) (R2 = 0.737, P = 0.004); a nonsignificant correlation was observed between the two methods in groups that showed a significant increase in bone formation (R2 = 0.710, P = 0.179) (Table 3). The weight of the femur bone immediately after sacrifice of rats (wet weight) in the ovariectomized, nontreated group was significantly lower (P < 0.05) than in the sham and other treated groups (Table 4). Bone ash weight was higher in the sham group and groups treated with 0.2 mg/kg estrogen, ASG 200 mg/kg, and ORZ 100 mg/kg and 200 mg/kg, and significantly different (P < 0.05) from that of the ovariectomized, nontreated group and the other treatment groups, as shown in Table 4. Bone calcium content in the ovariectomized, nontreated group was significantly lower (P < 0.05) than that in sham and all ovariectomized treated groups. No significant difference in terms of calcium content was observed in groups treated with estrogen 0.2 mg/kg, GBR 200 mg/kg, ASG 100 mg/kg and 200 mg/kg, ORZ 100 mg/kg and 200 mg/kg, GABA 200 mg/kg, and the sham nonovariectomized group (P < 0.05) (Table 4). Bone zinc concentration was significantly lower in the ovariectomized, nontreated group when compared to the sham and all treated groups (P < 0.05). The concentration increased significantly (P < 0.05) in groups treated with estrogen 0.2 mg/kg, Remifemin® 10 mg/kg, ASG 200 mg/kg, GABA 200 mg/kg, ORZ 100 mg/kg and 200 mg/kg compared to the sham nonovariectomized group, as shown in Table 4. There was significant difference in the levels of serum alkaline phosphatase (ALP) between 2 weeks and 8 weeks after treatment in the group treated with ASG 100 mg/kg compared to the ovariectomized, nontreated group and other treatment groups: P < 0.05. During the fourth week after treatment, a significant difference was observed between sham nonovariectomized rats and the group treated with Remifemin® 10 mg/kg (P < 0.0451) (Table 5). Serum calcium concentrations were significantly different between treated groups in the eighth week only after commencement of treatment (P = 0.0001). There was no significant difference between the treated groups in the second and fourth week (P = 0.9390 and P = 0.8948, respectively) (Table 5). Serum zinc concentrations showed a significant increase in the sham nontreated group, ORZ 200 mg/kg, and GBR 200 mg/kg 2 weeks after treatment compared to ovariectomized, nontreated and other treated groups (P < 0.0001) (Table 5). 
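A brief note on how the R² and P values relating the two density estimates reported above could be obtained: a simple linear regression of one measure on the other, as sketched below with purely hypothetical paired values (the study's own data are not reproduced here).

import numpy as np
from scipy.stats import linregress

# Hypothetical paired group-level estimates: Archimedes density vs X3-class fraction.
archimedes_density = np.array([1.18, 1.22, 1.25, 1.27, 1.30, 1.31, 1.33])
x3_fraction = np.array([0.34, 0.31, 0.30, 0.28, 0.26, 0.25, 0.24])

result = linregress(archimedes_density, x3_fraction)
print(f"R^2 = {result.rvalue**2:.3f}, P = {result.pvalue:.3f}")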
After 4 weeks of treatment, serum zinc concentrations in groups treated with GABA 200 mg/kg, estrogen 0.2 mg/kg, and GBR 200 mg/kg were significantly different compared to other treatment groups (P < 0.0001), as shown in Table 5. Zinc concentrations in the nonovariectomized (sham) group and groups treated with GABA 200 mg/kg and estrogen 0.2 mg/kg in the eighth week differed significantly from those of the other treatment groups (P < 0.0001) (Table 5). Bone histology Histologically, the sham nonovariectomized group showed a normal bone cell configuration with more osteocytes and minimal osteoblastic activity compared to the ovariectomized, nontreated group (Figure 3A). The ovariectomized, nontreated rats showed vacuolation in the bone marrow, reduced marrow and osteogenic activity, complete absence of osteoblasts, reduced bone density, and marked degenerative changes (Figure 3B), while the group treated with estrogen showed an increase in bone formation with minimal marrow activity and osteoblast entrapment at the margins (Figure 3C). Similarly, the Remifemin®-treated group showed evidence of an increase in new bone formation (Figure 3D). The GBR-treated group showed active proliferation of new bone cells (Figure 3E), and the ASG-treated group showed bone marrow with lymphoblastic cells and some proteinaceous fluid. Osteocytes were entrapped at the margin. Active osteoblasts at the margin were in the process of converting to osteocytes, showing active bone formation and increased bone density (Figure 3F). Hematopoietic cells, mature osteocytes, and osteoblasts were present. The GABA-treated group also showed evidence of active bone formation (Figure 3G), and the ORZ-treated group showed an increase in osteoblastic activity and an increase in new bone formation (Figure 3H). Immunohistochemistry Immunohistochemistry of the bone tissue gave weak positive staining, indicating new bone formation in all treated groups; the reaction was more prominent in the groups treated with estrogen, Remifemin®, and ASG (Figure 4C, D and F), while groups treated with GBR, GABA, and ORZ showed a milder reaction (Figure 4E, G and H).
The sham and OVX-untreated groups both showed very little or no reaction (Figure 4A and B). Discussion: It was evident in this study that GBR bioactives increased bone density. To the best of our knowledge, this is the first report on the effects of GBR on BMD and on the application of the X-ray edge detection technique in the measurement of bone density. Our results using the edge detection technique quantified the percentage increase in bone formation in the individual treatments, and the results were highly correlated with those obtained using the standard Archimede’s principle. The ovariectomized, nontreated rats showed a decrease in BMD. Ovariectomy is known to stimulate oxidative stress and interfere with the antioxidant system in rats.17 This leads to an increase in the level of oxidative stress markers such as hydrogen peroxide, which is highly indicative of bone loss in estrogen shortage,18 and also tumor necrosis factor alpha, which is generated as a result of the low level of thiol antioxidants within bone cells.19–21 Our data show that GBR bioactives, specifically ASG, GABA, and ORZ, increased the BMD to a level almost the same as that of the sham nonovariectomized group.
GABA and ORZ from GBR have been shown to upregulate genes related to bone formation in ovariectomized rats.22 Serum ALP activity increased in the ovariectomized, nontreated group and other treatment groups especially during the fourth week after ovariectomy. This might be due to an increase in the rate of bone metabolism, especially a few days after ovariectomy, and an increase in osteogenic activity in the treated groups. An increase in ALP was also reported in ovariectomized rats treated with soy isoflavones.23,24 ALP is known as a prominent marker of bone formation.25 Bone remodeling involves both bone formation and resumption in a coupling effect,26 which might explain the increase in the activity of ALP in ovariectomized rats. Serum and bone calcium level, bone wet weight, and the ash content of the bone decreased in the ovariectomized, nontreated group due to a decrease in estrogenic activity, which in turn affected the bone mass and constituency, calcium content, and zinc concentration. Treatment with ORZ, GABA, and ASG at 200 mg/kg restored the values of these parameters to the same level or higher than in the sham nonovariectomized group. This gives a clear indication that these bioactives are fully involved in osteogenesis. The decrease in zinc concentration observed in ovariectomized, nontreated rats has also been reported by other researchers in low estrogenic conditions of menopausal women and in ovariectomized rats.27,28 This decrease in zinc concentration is due to oxidative stress, and zinc is known to play a major regulatory role in the action of antioxidant enzymes.29 Treatment for 8 weeks with GBR bioactives increased the level of zinc in bone tissue to a level higher than that in the control sham nonovariectomized rats and groups treated with ORZ, ASG, estrogen, and Remifemin®. It has been reported that zinc is a potent inhibitor of osteoclastogenesis and is an osteoblast stimulator in vitro.30,31 Zinc is known to stimulate protein synthesis, and its role in bone formation and in preserving bone mass is greater than that of bone regulating hormones.32 Histology and the polyclonal nuclear antigen reactions showed evidence of active bone formation with new bone cells in their active proliferation stage. Conclusion: This study indicates that the increase in BMD and osteoprotective effect of GBR bioactives might involve the activation of zinc formation and increased calcium, coupled with an increase in ALP, which together increase osteoblastic activity and subsequently bone formation. However, this proposed mechanistic effect requires further clarification, and more research is needed to address the molecular mechanism of zinc stimulation by GBR bioactives.
Background: Bone mass density is an important parameter used in the estimation of the severity and depth of lesions in osteoporosis. Estimation of bone density using existing methods in experimental models has its advantages as well as drawbacks. Methods: In this study, the X-ray histogram edge detection technique was used to estimate the bone mass density in ovariectomized rats treated orally with germinated brown rice (GBR) bioactives, and the results were compared with estimated results obtained using Archimede's principle. New bone cell proliferation was assessed by histology and immunohistochemical reaction using polyclonal nuclear antigen. Additionally, serum alkaline phosphatase activity, serum and bone calcium and zinc concentrations were detected using a chemistry analyzer and atomic absorption spectroscopy. Rats were divided into groups of six as follows: sham (nonovariectomized, nontreated); ovariectomized, nontreated; and ovariectomized and treated with estrogen, or Remifemin®, GBR-phenolics, acylated steryl glucosides, gamma oryzanol, and gamma amino-butyric acid extracted from GBR at different doses. Results: Our results indicate a significant increase in alkaline phosphatase activity, serum and bone calcium, and zinc and ash content in the treated groups compared with the ovariectomized nontreated group (P < 0.05). Bone density increased significantly (P < 0.05) in groups treated with estrogen, GBR, Remifemin®, and gamma oryzanol compared to the ovariectomized nontreated group. Histological sections revealed more osteoblasts in the treated groups when compared with the untreated groups. A polyclonal nuclear antigen reaction showing proliferating new cells was observed in groups treated with estrogen, Remifemin®, GBR, acylated steryl glucosides, and gamma oryzanol. There was a good correlation between bone mass densities estimated using Archimede's principle and the edge detection technique between the treated groups (r (2) = 0.737, P = 0.004). Conclusions: Our study shows that GBR bioactives increase bone density, which might be via the activation of zinc formation and increased calcium content, and that X-ray edge detection technique is effective in the measurement of bone density and can be employed effectively in this respect.
Introduction: Traditionally, dual X-ray absorptiometry and Archimede’s principle are employed in the measurement of bone mass density (BMD) in biomedical research. The use of dual X-ray absorptiometry to measure bone density has become very popular over the years, and although a significant correlation was obtained when comparing BMD using dual X-ray absorptiometry and Archimede’s principle in rats, the precision, accuracy, and sensitivity of this method in bone density determination remains a topic of concern.1 Previous research has revealed that a progressive decline in BMD occurs with estrogen deficiency at menopause, exposing women to a greater risk for the development of osteoporosis.2,3 Postmenopausal osteoporosis is now a worldwide problem, and the conventional treatment of estrogen replacement and other related chemotherapy are reported to be associated with side effects ranging from alterations in physiology to therapy-related cancers.4–6 Fisher et al7 described osteoporotic fracture as an important public health issue with increasing morbidity, mortality, and prevalence. In the United States alone, osteoporosis affects an estimated 10–28 million Americans over the age of 50, with 33 million people, mostly women, having low bone density and close to 1.2 million bone fractures being related to osteoporosis.8,9 In view of this, natural products, especially phytoestrogens, are being explored as an alternative to hormone therapy for the management of osteoporosis and other related bone degenerative diseases. In this study, germinated brown rice (GBR), a rice with increased levels of bioactive compounds compared to polished rice and brown rice, with potential benefits in the management of chronic disease, was administered to rats with the aim of studying its ability to protect bone from osteoporosis. Bone ash weight and calcium and zinc content were quantified, and bone histology and polyclonal nuclear antigen staining were used to detect new bone formation. BMD was estimated using Archimede’s principle and was compared with edge detection X-ray morphometry in order to explore its applicability in measuring bone density in biomedical research. Conclusion: This study indicates that the increase in BMD and osteoprotective effect of GBR bioactives might involve the activation of zinc formation and increased calcium, coupled with an increase in ALP, which together increase osteoblastic activity and subsequently bone formation. However, this proposed mechanistic effect requires further clarification, and more research is needed to address the molecular mechanism of zinc stimulation by GBR bioactives.
Background: Bone mass density is an important parameter used in the estimation of the severity and depth of lesions in osteoporosis. Estimation of bone density using existing methods in experimental models has its advantages as well as drawbacks. Methods: In this study, the X-ray histogram edge detection technique was used to estimate the bone mass density in ovariectomized rats treated orally with germinated brown rice (GBR) bioactives, and the results were compared with estimated results obtained using Archimede's principle. New bone cell proliferation was assessed by histology and immunohistochemical reaction using polyclonal nuclear antigen. Additionally, serum alkaline phosphatase activity, serum and bone calcium and zinc concentrations were detected using a chemistry analyzer and atomic absorption spectroscopy. Rats were divided into groups of six as follows: sham (nonovariectomized, nontreated); ovariectomized, nontreated; and ovariectomized and treated with estrogen, or Remifemin®, GBR-phenolics, acylated steryl glucosides, gamma oryzanol, and gamma amino-butyric acid extracted from GBR at different doses. Results: Our results indicate a significant increase in alkaline phosphatase activity, serum and bone calcium, and zinc and ash content in the treated groups compared with the ovariectomized nontreated group (P < 0.05). Bone density increased significantly (P < 0.05) in groups treated with estrogen, GBR, Remifemin®, and gamma oryzanol compared to the ovariectomized nontreated group. Histological sections revealed more osteoblasts in the treated groups when compared with the untreated groups. A polyclonal nuclear antigen reaction showing proliferating new cells was observed in groups treated with estrogen, Remifemin®, GBR, acylated steryl glucosides, and gamma oryzanol. There was a good correlation between bone mass densities estimated using Archimede's principle and the edge detection technique between the treated groups (r (2) = 0.737, P = 0.004). Conclusions: Our study shows that GBR bioactives increase bone density, which might be via the activation of zinc formation and increased calcium content, and that X-ray edge detection technique is effective in the measurement of bone density and can be employed effectively in this respect.
8,447
397
[ 362, 266, 71, 87, 68, 638, 77, 165, 67, 254, 234, 89, 69 ]
17
[ "bone", "treated", "group", "mg", "mg kg", "figure", "kg", "groups", "ovariectomized", "local" ]
[ "osteoporosis postmenopausal", "osteoporosis worldwide", "osteoporosis view", "bone loss estrogen", "osteoporosis affects estimated" ]
[CONTENT] Archimede’s principle | atomic absorption spectrophotometry | X-ray edge detection technique | bone mass density | germinated brown rice bioactives [SUMMARY]
[CONTENT] Archimede’s principle | atomic absorption spectrophotometry | X-ray edge detection technique | bone mass density | germinated brown rice bioactives [SUMMARY]
[CONTENT] Archimede’s principle | atomic absorption spectrophotometry | X-ray edge detection technique | bone mass density | germinated brown rice bioactives [SUMMARY]
[CONTENT] Archimede’s principle | atomic absorption spectrophotometry | X-ray edge detection technique | bone mass density | germinated brown rice bioactives [SUMMARY]
[CONTENT] Archimede’s principle | atomic absorption spectrophotometry | X-ray edge detection technique | bone mass density | germinated brown rice bioactives [SUMMARY]
[CONTENT] Archimede’s principle | atomic absorption spectrophotometry | X-ray edge detection technique | bone mass density | germinated brown rice bioactives [SUMMARY]
[CONTENT] Animals | Bone Density | Calcium | Femur | Minerals | Models, Statistical | Oryza | Osteoporosis | Outcome Assessment, Health Care | Ovariectomy | Pattern Recognition, Automated | Phytotherapy | Plant Extracts | Rats | Rats, Sprague-Dawley | Spectrophotometry, Atomic | Zinc [SUMMARY]
[CONTENT] Animals | Bone Density | Calcium | Femur | Minerals | Models, Statistical | Oryza | Osteoporosis | Outcome Assessment, Health Care | Ovariectomy | Pattern Recognition, Automated | Phytotherapy | Plant Extracts | Rats | Rats, Sprague-Dawley | Spectrophotometry, Atomic | Zinc [SUMMARY]
[CONTENT] Animals | Bone Density | Calcium | Femur | Minerals | Models, Statistical | Oryza | Osteoporosis | Outcome Assessment, Health Care | Ovariectomy | Pattern Recognition, Automated | Phytotherapy | Plant Extracts | Rats | Rats, Sprague-Dawley | Spectrophotometry, Atomic | Zinc [SUMMARY]
[CONTENT] Animals | Bone Density | Calcium | Femur | Minerals | Models, Statistical | Oryza | Osteoporosis | Outcome Assessment, Health Care | Ovariectomy | Pattern Recognition, Automated | Phytotherapy | Plant Extracts | Rats | Rats, Sprague-Dawley | Spectrophotometry, Atomic | Zinc [SUMMARY]
[CONTENT] Animals | Bone Density | Calcium | Femur | Minerals | Models, Statistical | Oryza | Osteoporosis | Outcome Assessment, Health Care | Ovariectomy | Pattern Recognition, Automated | Phytotherapy | Plant Extracts | Rats | Rats, Sprague-Dawley | Spectrophotometry, Atomic | Zinc [SUMMARY]
[CONTENT] Animals | Bone Density | Calcium | Femur | Minerals | Models, Statistical | Oryza | Osteoporosis | Outcome Assessment, Health Care | Ovariectomy | Pattern Recognition, Automated | Phytotherapy | Plant Extracts | Rats | Rats, Sprague-Dawley | Spectrophotometry, Atomic | Zinc [SUMMARY]
[CONTENT] osteoporosis postmenopausal | osteoporosis worldwide | osteoporosis view | bone loss estrogen | osteoporosis affects estimated [SUMMARY]
[CONTENT] osteoporosis postmenopausal | osteoporosis worldwide | osteoporosis view | bone loss estrogen | osteoporosis affects estimated [SUMMARY]
[CONTENT] osteoporosis postmenopausal | osteoporosis worldwide | osteoporosis view | bone loss estrogen | osteoporosis affects estimated [SUMMARY]
[CONTENT] osteoporosis postmenopausal | osteoporosis worldwide | osteoporosis view | bone loss estrogen | osteoporosis affects estimated [SUMMARY]
[CONTENT] osteoporosis postmenopausal | osteoporosis worldwide | osteoporosis view | bone loss estrogen | osteoporosis affects estimated [SUMMARY]
[CONTENT] osteoporosis postmenopausal | osteoporosis worldwide | osteoporosis view | bone loss estrogen | osteoporosis affects estimated [SUMMARY]
[CONTENT] bone | treated | group | mg | mg kg | figure | kg | groups | ovariectomized | local [SUMMARY]
[CONTENT] bone | treated | group | mg | mg kg | figure | kg | groups | ovariectomized | local [SUMMARY]
[CONTENT] bone | treated | group | mg | mg kg | figure | kg | groups | ovariectomized | local [SUMMARY]
[CONTENT] bone | treated | group | mg | mg kg | figure | kg | groups | ovariectomized | local [SUMMARY]
[CONTENT] bone | treated | group | mg | mg kg | figure | kg | groups | ovariectomized | local [SUMMARY]
[CONTENT] bone | treated | group | mg | mg kg | figure | kg | groups | ovariectomized | local [SUMMARY]
[CONTENT] osteoporosis | bone | rice | dual | dual ray absorptiometry | absorptiometry | million | ray absorptiometry | dual ray | bone density [SUMMARY]
[CONTENT] mean | differences | statistical | statistical software sas cary | statistical software sas | statistical software | presented | presented mean | presented mean standard | presented mean standard deviation [SUMMARY]
[CONTENT] mg kg | kg | mg | group | treated | groups | showed | bone | figure | 200 mg [SUMMARY]
[CONTENT] increase | gbr bioactives | effect | bioactives | formation | zinc | gbr | mechanism zinc stimulation | mechanism zinc | mechanism [SUMMARY]
[CONTENT] bone | treated | figure | group | groups | showed | increase | density | formation | local [SUMMARY]
[CONTENT] bone | treated | figure | group | groups | showed | increase | density | formation | local [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Archimede ||| ||| ||| six | sham | Remifemin® | GBR-phenolics | GBR [SUMMARY]
[CONTENT] ||| estrogen | GBR ||| ||| Remifemin® | GBR ||| Archimede | 2 | 0.737 | 0.004 [SUMMARY]
[CONTENT] GBR [SUMMARY]
[CONTENT] ||| ||| Archimede ||| ||| ||| six | sham | Remifemin® | GBR-phenolics | GBR ||| ||| ||| estrogen | GBR ||| ||| Remifemin® | GBR ||| Archimede | 2 | 0.737 | 0.004 ||| GBR [SUMMARY]
[CONTENT] ||| ||| Archimede ||| ||| ||| six | sham | Remifemin® | GBR-phenolics | GBR ||| ||| ||| estrogen | GBR ||| ||| Remifemin® | GBR ||| Archimede | 2 | 0.737 | 0.004 ||| GBR [SUMMARY]
Risk of obstructive sleep apnea and quality of sleep among adults with type 2 diabetes mellitus in a sub-Saharan Africa city.
35251458
diabetes mellitus (DM) and Obstructive Sleep Apnea (OSA) are two major and interconnected non-communicable diseases. Both negatively impact on sleep quality. This study aimed to determine among persons with type 2 DM, the proportions at high risk of OSA and of self-reported poor sleep quality along with their associated-factors in Parakou city, Benin.
INTRODUCTION
this was a cross-sectional prospective study of 100% (n=383) outpatient adults with type 2 DM, conducted between April and August 2019 in the three top centres managing diabetic persons in Parakou city. They were interviewed, examined and investigated using capillary fasting blood glucose tests. The STOP-Bang Questionnaire (SBQ) was used to determine the risk of OSA.
METHODS
overall, their mean age was 57.37 (11.45) years. They were 61.62% (n=236) females and 38.38% (n=147) males. Sleep duration was insufficient in 26.89% (n=103). Nocturia was reported in 49.35% (n=189). The risk of OSA was high in 14.10% (n=54), intermediate in 24.80% (n=95) and low in 61.10% (n=234). Friedman Position Tongue Grade 3 (Adjusted Odds Ratio, aOR=2.48; 95%CI=1.11 - 5.55; p=0.025) and 4 (aOR=4.65; 95%CI=1.26 - 15.90; p=0.015) were independently associated with a high risk of OSA. The prevalence of reported poor sleep quality was 27.42% (n=105). Female gender (aOR=2.08; 95%CI=1.18-3.83; p=0.014), diabetic foot (aOR=5.07; 95%CI=1.15-23.63; p=0.031), nocturia (aOR=1.96; 95%CI=1.18-3.29; p=0.010), tiredness (aOR=2.77; 95%CI=1.26-6.23; p=0.012) and a high risk of OSA (aOR=3.31; 95%CI=1.28-8.93; p=0.015) were independently associated with a greater risk of reported poor sleep quality.
RESULTS
in Parakou, the proportions of patients with type 2 DM at increased risk of OSA and with poor quality of sleep are relatively high. There is need for better systematic screening of OSA in persons with DM.
CONCLUSION
[ "Adult", "Cross-Sectional Studies", "Diabetes Mellitus, Type 2", "Female", "Humans", "Male", "Middle Aged", "Prospective Studies", "Sleep", "Sleep Apnea, Obstructive", "Sleep Quality", "Surveys and Questionnaires" ]
8856968
Introduction
Increasing urbanization, sedentary lifestyles, population ageing, changes in feeding, behavioural habits and sleep disruptions have prompted the emergence of non-communicable diseases as a global public health threat. According to the World Health Organization (WHO), non-communicable diseases are nowadays the leading causes of death in the world (1), 85% of these occurring in low- and middle-income countries [1]. Diabetes mellitus (DM) is one of the most common non-communicable diseases, and type 2 diabetes (T2DM) is by far the most frequent, accounting for 90% of all cases. Globally, the prevalence of DM has doubled from 4.7% to 8.5% between 1980 and 2014, and is the seventh leading cause of death [2]. In sub-Saharan Africa, it affects about 19 million people [3]. Similarly, Obstructive Sleep Apnea (OSA), the most common sleep disorder, is another non-communicable disease also on the increase globally. The prevalence of OSA is estimated to have increased from 4% in men and 2% in women in 1993 to 12.5% in men and 5.9% in women in 2016 [4, 5]. The condition, however, remains largely under-diagnosed and under-treated, especially in sub-Saharan Africa. Several reports in the literature mention numerous complex interrelationships between the two diseases. OSA has been associated with insulin resistance, glucose intolerance and T2DM independently of obesity, with this association increasing with the severity of the sleep disorder [6, 7]. Chronic intermittent hypoxia and sleep disruption resulting from OSA are thought to play a major role in glucose metabolism disturbances [6, 7]. On the other hand, the mechanisms underlining OSA among diabetic patients are less well understood, but may involve an autonomic neuropathy according to some authors (8). Overall, the prevalence of T2DM among those with OSA varies from 5% to 30%, while the prevalence of OSA among persons with diabetes ranges from 58% to 86% [7]. These observations prompted the International Diabetes Federation to recommend in 2008 that patients with one condition should be systematically screened for the other condition [8]. Furthermore, there is evidence of a vicious cycle between both diseases and sleep quality. OSA, by fragmenting the sleep architecture, alters its quality. Among persons with diabetes, there is evidence that poor sleep quality may disrupt glycaemic control [9]. Therefore, diabetic persons with untreated OSA have a high risk of poor sleep quality, chronic hyperglycaemia and its related-complications such as cardiovascular disease. Most studies on these issues between DM, OSA and sleep quality have been conducted in high resources countries. The aim of the present study was to report the experience from a developing setting and to determine among persons with T2DM living in Parakou city in Benin, the numbers and proportions of those with a high risk of OSA and of those who reported a poor sleep quality, as well as their associated factors.
Methods
Study design and period: this was a cross-sectional prospective study conducted between April 11 and August 5, 2019. Setting: Benin, a developing country located in West Africa, has not been spared by the increasing prevalence of DM worldwide. In 2008, DM prevalence was 2.6% [10]. In 2015, the prevalence of fasting hyperglycaemia (≥6.1 mmol/l) or of DM on medication was 12.4% [11]. In a study from Cotonou, the economic capital of the country, the prevalence of OSA found after recording patients with ventilatory polygraphy was 53.2% [12]. Parakou, the third major city of the country, is located in Borgou, the region with the highest prevalence of impaired fasting glycaemia in the country at 12.4% [10]. The prevalence of DM in Parakou was 7.9% [10]. The study was carried out in three (2 public and 1 private) of the top-level health centres delivering care to diabetic persons in Parakou city. Study population: the study population was selected from all outpatients with T2DM who were seen during the study period in one of these three facilities. The inclusion criteria were being aged ≥18 years and giving informed consent. Patients were not included if they worked at night, had psychiatric disorders, or were pregnant (for females) at the time of the survey. The sample size was calculated using the Schwartz formula N = ε²pq/i², with N the sample size, ε the standard normal value for a 95% confidence level, equal to 1.96, p the prevalence of OSA among diabetic persons in Cotonou, equal to 53.2% [12], q = 1 - p = 0.468, and i the accepted margin of error, equal to 5%. The resulting sample size was 383 patients. A systematic sampling of all outpatients with T2DM who met the inclusion criteria was performed. Data collection: a structured interview with each patient was conducted by a final-year medical student, using a pre-tested and corrected questionnaire. Information on socio-demographic characteristics, DM history, treatment and complications, co-morbidities, lifestyle, sleep patterns and OSA-suggestive symptoms was sought. Anthropometric parameters, neck and waist circumferences, as well as vital signs were measured. Neck circumference was measured with the head in a neutral position, halfway between the tip of the chin and the supra-sternal notch. Waist circumference was measured in a standing position and in gentle exhalation, halfway between the lower costal margin and the iliac crest. These measurements were followed by an Ear Nose Throat (ENT) and oral cavity examination, with a special focus on the presence of any morphological abnormalities. The Snoring, Tiredness, Observed apnea, blood Pressure, Body mass index, Age, Neck circumference and male Gender (STOP-Bang) questionnaire (SBQ) was administered and a score was derived to assess the risk of OSA [13]. The clinical assessment was completed with a measurement of capillary fasting glycaemia using an Accu-Chek Mobile glucometer on the survey date. Diagnostic criteria: in this study, a high risk of OSA corresponded to any person classified as “high risk” on the SBQ. Self-reported poor sleep quality was considered present if a patient responded “yes” to the following question: in the last month, did you wake up in the morning at least 4 days a week with the feeling of still being tired or of not having had a restful sleep? Details on the other diagnostic criteria used in this study are shown in Table 1 [3, 13-18].
Table 1: the diagnostic criteria used in this study (HbA1c = Glycated Haemoglobin; OSA = Obstructive Sleep Apnea; BMI = Body Mass Index). Statistical analysis: all data were collected and double-entered into EpiData Entry Client v4.4.3.1 (EpiData Association, Odense, Denmark). The data were analysed using R x64 version 3.6.0. Percentages were used to describe categorical variables. Means (with standard deviation) and medians (with interquartile range) were used to describe continuous variables, as appropriate. Factors associated with a high risk of OSA and with poor sleep quality were investigated using univariate analysis followed by multiple logistic regression. Among potential risk factors, a special focus was placed on morphological ENT abnormalities, whose presence was expected to increase the risk of OSA. Sex, age, body mass index and hypertension were not entered into the model for a high risk of OSA, as they are already components of the SBQ. To investigate potential factors associated with reported poor sleep quality, DM complications, symptoms and the risk of OSA received special attention. The level of significance was set at 5%. Ethical considerations: this study was conducted with the agreement of the Local Ethics Committee for Biomedical Research of the University of Parakou. Informed consent was obtained from all persons with T2DM prior to their inclusion in the study.
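As a check on the sample-size calculation described above, the Schwartz formula can be evaluated directly with the stated inputs (ε = 1.96, p = 0.532, q = 0.468, i = 0.05); this worked example uses only the figures already reported in the Methods:

```latex
N \;=\; \frac{\varepsilon^{2}\, p\, q}{i^{2}}
  \;=\; \frac{(1.96)^{2} \times 0.532 \times 0.468}{(0.05)^{2}}
  \;=\; \frac{3.8416 \times 0.2490}{0.0025}
  \;\approx\; 382.6 \;\Longrightarrow\; N = 383
```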
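The risk categorisation and the regression modelling described in the statistical analysis can also be sketched in code. The study itself used R 3.6.0; the snippet below is only an illustrative Python re-expression of the same steps. The data file, column names, and the STOP-Bang cut-offs (taken from the commonly published version of the questionnaire, since the content of Table 1 is not reproduced here) are assumptions, not details taken from the article.

```python
# Illustrative sketch only: STOP-Bang scoring and a multivariate logistic
# regression yielding adjusted odds ratios, assuming hypothetical column names.
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf

SBQ_ITEMS = ["snoring", "tiredness", "observed_apnea", "high_bp",
             "bmi_over_35", "age_over_50", "neck_over_40cm", "male"]

def sbq_risk(row) -> str:
    """Categorise OSA risk from the 8 yes/no STOP-Bang items (1 = yes)."""
    score = sum(int(row[item]) for item in SBQ_ITEMS)
    if score <= 2:
        return "low"
    if score <= 4:
        return "intermediate"
    return "high"

df = pd.read_csv("parakou_t2dm.csv")  # hypothetical data file
df["osa_risk"] = df.apply(sbq_risk, axis=1)
df["high_risk_osa"] = (df["osa_risk"] == "high").astype(int)

# poor_sleep_quality: 1 if the patient answered "yes" to the single screening
# question described in the diagnostic criteria, else 0.
model = smf.logit(
    "poor_sleep_quality ~ C(sex) + diabetic_foot + nocturia + tiredness_day"
    " + high_risk_osa",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals.
aor = np.exp(model.params)
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor.rename("aOR"), ci], axis=1))
```

Exponentiating the logistic coefficients in this way yields adjusted odds ratios comparable in form to those reported in the article's tables.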
Results
Participants' characteristics: the mean age of the 383 diabetic persons enrolled in the study was 57.37 (11.45) years. They were 61.62% (n=236) females and 38.38% (n=147) males. The average follow-up time from the date of DM diagnosis was 7.45 (6.22) years, ranging from 3 months to 38 years. Of the persons with DM, 77.81% (n=298) were treated with oral antidiabetic drugs. The most commonly reported complication was diabetic neuropathy, in 34.73% (n=133). Of the associated comorbidities, obesity and hypertension were the most commonly diagnosed, in 86.95% (n=333) and 62.14% (n=238) respectively (Table 2). On the date of the survey, the mean fasting blood glucose was 1.58 (0.83) g/l, ranging from 0.53 to 5 g/l. Of all the study participants, 8.62% (n=33) had an HbA1c test, of whom 39.39% (n=13) were found to be poorly controlled. Table 2: treatment, comorbidities, complications and lifestyles of persons with type 2 diabetes mellitus, Parakou, Benin: April-August 2019 (N=383) (footnote: having smoked or drunk alcohol more than one year previously). Sleep duration and events during sleep: overall, the mean duration of sleep for all participants was 7.39 (1.39) hours, ranging from 4 to 10.99 hours. Sleep duration was insufficient in 26.89% (n=103), normal in 65.01% (n=249) and too long in 8.09% (n=31). There were 8.09% (n=31) patients who complained of frequent nightmares, and 49.35% (n=189) who complained of more than one nocturnal awakening. OSA-related findings: amongst symptoms suggestive of OSA, nocturia was reported in 49.35% (n=189) of persons, excessive daytime sleepiness in 34.73% (n=133), tiredness in 23.24% (n=89), severe daily snoring in 21.15% (n=81), apnea during sleep in 11.49% (n=44) and lack of concentration in 2.61% (n=10). Based on the SBQ, the risk of OSA was high in 14.10% (n=54), intermediate in 24.80% (n=95) and low in 61.10% (n=234). Diabetic persons with a high risk of OSA slept for significantly shorter periods than those with a low or intermediate risk (6.96 ± 1.20 hours vs 7.46 ± 1.41 hours; p=0.011). The frequency of nocturia was comparable in both subgroups (53.70% vs 48.63%; p=0.490). After multivariate analysis, Friedman Tongue Position (FTP) Grade 3 (adjusted Odds Ratio, aOR=2.48; 95%CI=1.11 - 5.55; p=0.025) and Grade 4 (aOR=4.65; 95%CI=1.26 - 15.90; p=0.015) were associated with a high risk of OSA (Table 3). Table 3: morphologic abnormalities associated with a high risk of obstructive sleep apnea among persons with type 2 diabetes mellitus, Parakou, Benin: April-August 2019 (N=383) (cOR = Crude Odds Ratio; aOR = Adjusted Odds Ratio; 95%CI = 95% Confidence Interval). Poor sleep quality related findings: 27.42% (n=105) of participants reported poor sleep quality. After multivariate analysis, factors associated with a greater risk of reported poor sleep quality were being female (aOR=2.08; 95%CI=1.18-3.83; p=0.014), having a diabetic foot complication (aOR=5.07; 95%CI=1.15-23.63; p=0.031), nocturia (aOR=1.96; 95%CI=1.18-3.29; p=0.010), tiredness (aOR=2.77; 95%CI=1.26-6.23; p=0.012) and a high risk of OSA (aOR=3.31; 95%CI=1.28-8.93; p=0.015) (Table 4). Table 4: factors associated with reported poor sleep quality among persons with T2DM, Parakou, Benin: April-August 2019 (N=383) (cOR = Crude Odds Ratio; aOR = Adjusted Odds Ratio; 95%CI = 95% Confidence Interval; OSA = Obstructive Sleep Apnea; SBQ = STOP-Bang Questionnaire).
Conclusion
Obstructive sleep apnea symptoms were frequent among persons with T2DM in Parakou city, Benin. One in seven participants had a high risk of OSA, and this was associated with an FTP grade 3 or 4. Poor sleep quality was reported by three out of ten diabetic patients and was more frequent in women and in the presence of nocturia, tiredness, a high risk of OSA and diabetic foot. In sub-Saharan African cities like Parakou, there is a need for better systematic screening for OSA in persons with DM, by making sophisticated confirmatory technology more widely available so that more appropriate treatment can be offered and glycaemic control improved. What is known about this topic: both diabetes mellitus and obstructive sleep apnea are non-communicable diseases that are interconnected, and the diagnosis of one of these conditions may prompt screening for the other; sleep quality is poor for many diabetic patients. What this study adds: this study reports one of the rare experiences in assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention; the Friedman Tongue Position is a simple oral examination that may help identify those with a high risk of OSA, who might therefore be given priority for sleep recording; some OSA-suggestive symptoms such as nocturia and tiredness were found to be independently associated with poor sleep quality and should be systematically asked about during routine consultations with diabetic patients.
[ "\nWhat is known about this topic\n", "\nWhat this study adds\n" ]
[ "\n\nBoth diabetes mellitus and obstructive sleep apnea are two non-communicable diseases that are interconnected ; and the diagnosis of one of these conditions may prompt the screening of the second one;\nSleep quality is poor for many diabetic patients.\n\n\nBoth diabetes mellitus and obstructive sleep apnea are two non-communicable diseases that are interconnected ; and the diagnosis of one of these conditions may prompt the screening of the second one;\n\nSleep quality is poor for many diabetic patients.", "\n\nThis study reports one of the rare experiences in assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention;\n\nFriedman Position Tongue is a simple oral examination that may contribute to identify those with a high risk of OSA, and therefore might be given priority during sleep recording;\nSome obstructive sleep apnea suggestive symptoms such as nocturia and tiredness are found independently associated with poor sleep quality and should be systematically asked diabetic patients during routine consultation.\n\n\nThis study reports one of the rare experiences in assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention;\n\n\nFriedman Position Tongue is a simple oral examination that may contribute to identify those with a high risk of OSA, and therefore might be given priority during sleep recording;\n\nSome obstructive sleep apnea suggestive symptoms such as nocturia and tiredness are found independently associated with poor sleep quality and should be systematically asked diabetic patients during routine consultation." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "\nWhat is known about this topic\n", "\nWhat this study adds\n" ]
[ "Increasing urbanization, sedentary lifestyles, population ageing, changes in feeding, behavioural habits and sleep disruptions have prompted the emergence of non-communicable diseases as a global public health threat. According to the World Health Organization (WHO), non-communicable diseases are nowadays the leading causes of death in the world (1), 85% of these occurring in low- and middle-income countries [1]. Diabetes mellitus (DM) is one of the most common non-communicable diseases, and type 2 diabetes (T2DM) is by far the most frequent, accounting for 90% of all cases. Globally, the prevalence of DM has doubled from 4.7% to 8.5% between 1980 and 2014, and is the seventh leading cause of death [2]. In sub-Saharan Africa, it affects about 19 million people [3]. Similarly, Obstructive Sleep Apnea (OSA), the most common sleep disorder, is another non-communicable disease also on the increase globally. The prevalence of OSA is estimated to have increased from 4% in men and 2% in women in 1993 to 12.5% in men and 5.9% in women in 2016 [4, 5]. The condition, however, remains largely under-diagnosed and under-treated, especially in sub-Saharan Africa.\nSeveral reports in the literature mention numerous complex interrelationships between the two diseases. OSA has been associated with insulin resistance, glucose intolerance and T2DM independently of obesity, with this association increasing with the severity of the sleep disorder [6, 7]. Chronic intermittent hypoxia and sleep disruption resulting from OSA are thought to play a major role in glucose metabolism disturbances [6, 7]. On the other hand, the mechanisms underlining OSA among diabetic patients are less well understood, but may involve an autonomic neuropathy according to some authors (8). Overall, the prevalence of T2DM among those with OSA varies from 5% to 30%, while the prevalence of OSA among persons with diabetes ranges from 58% to 86% [7]. These observations prompted the International Diabetes Federation to recommend in 2008 that patients with one condition should be systematically screened for the other condition [8].\nFurthermore, there is evidence of a vicious cycle between both diseases and sleep quality. OSA, by fragmenting the sleep architecture, alters its quality. Among persons with diabetes, there is evidence that poor sleep quality may disrupt glycaemic control [9]. Therefore, diabetic persons with untreated OSA have a high risk of poor sleep quality, chronic hyperglycaemia and its related-complications such as cardiovascular disease. Most studies on these issues between DM, OSA and sleep quality have been conducted in high resources countries. The aim of the present study was to report the experience from a developing setting and to determine among persons with T2DM living in Parakou city in Benin, the numbers and proportions of those with a high risk of OSA and of those who reported a poor sleep quality, as well as their associated factors.", "Study design period: this was a cross-sectional prospective study that was conducted between April 11 and August 5, 2019.\nSetting: Benin, a developing country located in West Africa, has not been spared by the increasing prevalence of DM worldwide. In 2008, DM prevalence was 2.6% [10]. In 2015, the prevalence of fasting hyperglycaemia (≥6.1 mmol/l) or persons with DM on medication was 12.4% [11]. 
From a Cotonou study, the economic capital city of the country, the prevalence of OSA found after recording patients with ventilator polygraphy was 53.2% [12].\nParakou, the third major city of the country, is located in Borgou, the region with the highest prevalence of impaired fasting glycaemia in the country at 12.4% [10]. The prevalence of DM in Parakou was 7.9% [10]. The study was carried out in three (2 public and 1 private) of the top-level health centres delivering care to diabetic persons in Parakou city.\nStudy population: the study population was selected from all outpatients with T2DM who were consulted during the study period in one of these three facilities. The inclusion criteria were being aged ≥18 years and giving informed consent. Non-inclusion criteria were: i) not working at night; ii) not having psychiatric disorders; and iii) not being pregnant (for females) at the time of the survey. The sample size was calculated using the Schwartz formula N= ℇ2pq/i2, with N the sample size, ℇ the 95% confidence level equal to 1.96, p the prevalence of OSA among diabetic persons in Cotonou equal to 53.2% [12], q that is 1 - p being equal to 0.468 and i the accepted margin of error being equal to 5%. The sample size was 383 patients. A systematic sampling of all outpatients with T2DM who met the inclusion criteria was performed.\nData collection: a structured interview with the patient was led by a medical student who was at the end of her medical training, using a pre-test and corrected questionnaire. Information on socio-demographic characteristics, DM history, treatment and complications, co-morbidities, lifestyle, sleep patterns, and OSA-suggestive symptoms were sought. Anthropometric parameters, neck and waist circumferences, as well as vital signs were measured. Neck circumference was measured with the head being in a neutral position, halfway between the tip of the chin and the supra-sternal hollow. Waist circumference was measured in a standing position and in gentle exhalation, halfway between the lower costal margin and the iliac crest. These measurements were followed by an Ear Nose Throat (ENT) and oral cavity examination, with a special focus on the presence of any morphological abnormalities. The snoring, tiredness, observed apnea, blood pressure, body mass index, age, neck circumference, and male gender (STOP-Bang) questionnaire (SBQ) was used and a score was derived to assess the risk of OSA [13]. The clinical assessment was completed with a measurement of capillary fasting glycaemia using an Accu-check mobile glucometer at the survey date.\nDiagnostic criteria: in this study, a high risk of OSA corresponded to any person who was recorded as “high risk” at SBQ. A self-reported poor sleep quality was considered if any patient responded “yes” to the following question: in the last month, do you wake up in the morning at least 4 days in a week with the feeling of being still tired or not having had a restful sleep? Details on other diagnostic criteria used in this study are shown in Table 1[3, 13-18].\nthe diagnostic criteria used in this study\nHbA1c=Glycated Haemoglobin; OSA= Obstructive Sleep Apnea; BMI=Body Mass Index\nStatistical analysis: all data were collected and double entered into EpiData entry Client v.4.4.3.1 (EpiData Association, Odense, Denmark). The data were analysed using the software Rx64 3.6.0. Percentages were determined to describe categorical variables. 
Means (with standard deviation) and medians (with interquartile range) were used to describe continuous variables appropriately. Factors associated with a high risk of OSA and a poor sleep quality were investigated using univariate analysis followed by a multiple logistic regression. Among potential risk factors, there has been a special focus on morphological ENT abnormalities, the presence of which would increase the risk of OSA. The following factors, namely sex, age, body mass index, hypertension were not included to determine factors associated with high risk of OSA, as they were already part of the SBQ. To investigate potential factors associated with reported poor sleep quality, DM complications, symptoms and the risk of OSA deserved special attention. Levels of significance were set at 5%.\nEthical considerations: this study was conducted with the agreement of the Local Ethics Committee for Biomedical Research of the University of Parakou. Informed consent from all persons with T2DM was obtained prior to their inclusion in the study.", "Participants´ characteristics: the mean age of the 383 diabetic persons who were enrolled in the study was 57.37 (11.45) years. They were 61.62% (n=236) females and 38.38% (n=147) males. The average follow-up time from the diagnosis date of the DM was 7.45 years (6.22), ranging from 3 months to 38 years. Of the persons with DM, 77.81% (n=298) were treated with oral antidiabetic drugs. The most commonly reported complication was diabetic neuropathy in 34.73% (n=133). Of associated comorbidities, obesity and hypertension were the most commonly diagnosed in 86.95% (n=333) and 62.14% (n=238) respectively (Table 2). On the date of the survey, the mean fasting blood glucose was 1.58 (0.83) g/l, ranging between 0.53 - 5 g/l. Of all the study participants, 8.62% (n=33) had a HbA1c test, of whom 39.39% (n=13) were found to be poorly controlled.\ntreatment, comorbidities, complications and lifestyles of persons with type 2 diabetes mellitus, Parakou, Benin: April - August 2019 (N=383)\nHaving smoked or drank alcohol more than one year ago\nSleep duration and events during sleeping: overall, the mean duration of sleep for all participants was 7.39 (1.39) hours, ranging from 4 to 10.99 hours. Of these persons, the sleep duration was insufficient in 26.89% (n=103), normal in 65.01% (n=249) and too long in 8.09% (n=31). There were 8.09% (n=31) patients who complained of frequent nightmares, and 49.35% (n=189) who complained of more than one nocturnal awakening.\nOSA-related findings: amongst symptoms suggestive of OSA, nocturia was reported in 49.35% (n=189) persons, excessive daytime sleepiness in 34.73% (n=133), tiredness in 23.24% (n=89), severe daily snoring in 21.15% (n=81), apnea during sleep in 11.49% (n=44) and lack of concentration in 02.61% (n=10). Based on the SBQ, the risk of OSA was high in 14.10% (n=54), intermediate in 24.80% (n=95) and low in 61.10% (n=234). Diabetic persons with a high risk of OSA slept for significantly shorter periods compared with those who had a low or intermediate risk (6,96 ± 1.20 hours vs 7,46 ± 1.41 hours; p=0.011). The frequency of nocturia was comparable in both subgroups (53.70% vs 48.63%; p=0.490). 
After multivariate analysis, FTP Grade 3 (adjusted Odds Ratio, aOR=2.48; 95%CI=1.11 - 5.55; p=0.025) and 4 (aOR=4.65; 95%CI=1.26 - 15.90; p=0.015) were associated with a high risk of OSA (Table 3).\nmorphologic abnormalities associated with a high risk of obstructive sleep apnea among persons with type 2 diabetes mellitus, Parakou, Benin: April-August 2019 (N=383)\ncOR= Crude Odds Ratio; aOR= Adjusted Odds Ratio; 95%CI= 95% Confidence interval\nPoor sleep quality related findings: there were 27.42% (n=105) of participants who reported poor sleep quality. After multivariate analysis, factors associated with a greater risk of reported poor sleep quality were being female (aOR=2.08; 95%CI=1.18-3.83; p=0.014), or having a diabetic foot complication (aOR=5.07; 95%CI=1.15-23.63; p=0.031), nocturia (aOR=1.96; 95%CI=1.18-3.29; p=0.010), tiredness (aOR=2.77; 95%CI=1.26-6.23; p=0.012) and a high risk of OSA (aOR=3.31; 95%CI=1.28-8.93; p=0.015) (Table 4).\nfactors associated with a reported poor sleep quality among persons with T2DM, Parakou, Benin: April-August 2019 (N=383)\ncOR= Crude Odds Ratio; aOR= Adjusted Odds Ratio; 95%CI= 95% Confidence interval; OSA = Obstructive Sleep Apnea; SBQ = STOP-Bang Questionnaire", "This is one of the few studies from sub-Saharan Africa that has attempted to document sleep quality and the risk of OSA in this vulnerable group of persons with diabetes. With the high prevalence of DM in Borgou region and Parakou city, it was important to document the burden of an associated non-communicable disease, namely OSA, that would potentially increase the risk of cardiovascular complications. Some findings from the study deserve consideration.\nFirst, we were surprised by the reported frequency of OSA symptoms in study participants. Nearly half of the participants suffered from nocturia, one third had excessive daytime sleepiness, and one patient in five reported tiredness and severe snoring. In general health settings, these symptoms are not usually systematically sought for during routine consultation, and they are not always spontaneously listed by patients themselves. Local recommendations have therefore been made that practitioners systematically ask about these symptoms during routine consultation.\nSecond, nearly one patient in seven (14.10%) was found at high risk of OSA in Parakou city. This prevalence was similar to that reported from Saudi Arabia by Kalakattawi et al. who also applied the same SBQ to determine the risk of OSA [19]. However, when comparing with the other rare clinically-based studies from sub-Saharan Africa, the reported prevalence of OSA was much higher. From two studies in Nigeria that included outpatients in tertiary hospitals, the authors reported a high risk of OSA of 27% and 49.5% in 2014 and 2020 respectively [20, 21]. In another study carried out in Kenya, Sokwalla et al. reported a prevalence of 44% [22]. The differences observed may be explained by the use of another clinical score, the Berlin questionnaire, to determine the risk of OSA [20-22]. In the literature, comparisons between the different clinically validated scores to screen for OSA found a higher sensitivity for the SBQ compared with the Berlin questionnaire, with sensitivity increasing with the severity of the disease [23]. Furthermore, the observed prevalence in Parakou contrasts with published research from developed countries. 
In these studies, sleep events were more often recorded using polygraphy or polysomnography, regardless of the clinical score; and lack of this technology has been a limitation of the current study. OSA was diagnosed in more than half of diabetic persons using this technology [7, 24]. These findings are consistent with that found in Cotonou (53.2%) after having recorded the sleep pattern of 79 T2DM patients, thanks to an initiative implemented in a private institution [12]. All these reports highlight the importance of acquiring more sophisticated technology for diagnosing non-communicable diseases, even though communicable diseases remain more highly prevalent in sub-Saharan African settings. We need continued advocacy directed at decision-makers with the end goal of being able to systematically record sleeping patterns in diabetic persons in the public sector and therefore allowing appropriate therapies to be more widely available.\nThird, we hypothesized that morphological abnormalities, if any, would increase the risk OSA in patients with T2DM. This has been confirmed by the findings, with an increasing risk of OSA in the presence of a FTP Grade 3 or 4, and this is in line with findings in the general population [18]. In practice, such simple physical assessments could be used to identify those who require urgent sleep pattern recording for diagnostic confirmation.\nFourth, up to three diabetic persons in ten (27.42%) reported poor sleep quality. In the literature, approximately one third of diabetic persons suffer from sleep disorders and poor sleep quality [9]. This quite frequent complaint among diabetic persons is probably due to several reasons that include concerns about the disease itself, requirements for self-management, other social life issues, and sleep disorders including OSA which play an important role. In other settings, the situation is even worse. The proportion of diabetic persons with poor sleep quality was nearly double in Kenya at 53% [22]. There is an urgent need to address factors associated with poor sleep quality. One of these, diabetic foot, is a chronic DM complication, the prevention of which requires good glycaemic control. As is also reported from elsewhere, nocturia disturbs the pattern of sleep and this may cause poor sleep quality [9]. In our context, nocturia may represent an OSA symptom, as it may also be a manifestation of poor glycaemic control with resulting polyuria. The comparable proportion of diabetic persons complaining of nocturia in both high and low-risk groups for OSA tends to reinforce the last assertion. This is further supported by the fact that up to 40% of the study participants who had a successful HbA1c test did not have their DM controlled. Another factor independently associated with poor sleep quality was a high risk of OSA at SBQ, through sleep fragmentation. Unsurprisingly, tiredness in the day, resulting from a non-restorative sleep at night, was associated with a greater risk of poor sleep quality. Finally, women with their usual higher workload in our context, including household and professional activities, childcare, and who, additionally, are the first to wake up, more often reported a non-restorative sleep than men.", "Obstructive sleep apnea symptoms were frequent among persons with T2DM in Parakou city, Benin. One in seven participants had a high risk of OSA and this was associated with a FTP grade 3 or 4. 
The prevalence of poor sleep quality was recorded in three out of ten diabetic patients and was increased in women and in the presence of nocturia, tiredness, a high risk of OSA and diabetic foot. In sub-Saharan Africa cities like Parakou, there is need for better systematic screening of OSA in persons with DM, by making sophisticated confirmatory technology more widely available so that more appropriate treatment can be offered and for improving glycaemic control.\n \nWhat is known about this topic\n \n\nBoth diabetes mellitus and obstructive sleep apnea are two non-communicable diseases that are interconnected ; and the diagnosis of one of these conditions may prompt the screening of the second one;\nSleep quality is poor for many diabetic patients.\n \nWhat this study adds\n \n\nThis study reports one of the rare experiences in assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention;\nFriedman Position Tongue is a simple oral examination that may contribute to identify those with a high risk of OSA, and therefore might be given priority during sleep recording;\nSome obstructive sleep apnea suggestive symptoms such as nocturia and tiredness are found independently associated with poor sleep quality and should be systematically asked diabetic patients during routine consultation.", "\n\nBoth diabetes mellitus and obstructive sleep apnea are two non-communicable diseases that are interconnected ; and the diagnosis of one of these conditions may prompt the screening of the second one;\nSleep quality is poor for many diabetic patients.", "\n\nThis study reports one of the rare experiences in assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention;\nFriedman Position Tongue is a simple oral examination that may contribute to identify those with a high risk of OSA, and therefore might be given priority during sleep recording;\nSome obstructive sleep apnea suggestive symptoms such as nocturia and tiredness are found independently associated with poor sleep quality and should be systematically asked diabetic patients during routine consultation." ]
[ "intro", "methods", "results", "discussion", "conclusions", null, null ]
[ "Type 2 diabetes mellitus", "obstructive sleep apnea", "sleep quality", "Benin" ]
Introduction: Increasing urbanization, sedentary lifestyles, population ageing, changes in feeding, behavioural habits and sleep disruptions have prompted the emergence of non-communicable diseases as a global public health threat. According to the World Health Organization (WHO), non-communicable diseases are nowadays the leading causes of death in the world (1), 85% of these occurring in low- and middle-income countries [1]. Diabetes mellitus (DM) is one of the most common non-communicable diseases, and type 2 diabetes (T2DM) is by far the most frequent, accounting for 90% of all cases. Globally, the prevalence of DM has doubled from 4.7% to 8.5% between 1980 and 2014, and is the seventh leading cause of death [2]. In sub-Saharan Africa, it affects about 19 million people [3]. Similarly, Obstructive Sleep Apnea (OSA), the most common sleep disorder, is another non-communicable disease also on the increase globally. The prevalence of OSA is estimated to have increased from 4% in men and 2% in women in 1993 to 12.5% in men and 5.9% in women in 2016 [4, 5]. The condition, however, remains largely under-diagnosed and under-treated, especially in sub-Saharan Africa. Several reports in the literature mention numerous complex interrelationships between the two diseases. OSA has been associated with insulin resistance, glucose intolerance and T2DM independently of obesity, with this association increasing with the severity of the sleep disorder [6, 7]. Chronic intermittent hypoxia and sleep disruption resulting from OSA are thought to play a major role in glucose metabolism disturbances [6, 7]. On the other hand, the mechanisms underlining OSA among diabetic patients are less well understood, but may involve an autonomic neuropathy according to some authors (8). Overall, the prevalence of T2DM among those with OSA varies from 5% to 30%, while the prevalence of OSA among persons with diabetes ranges from 58% to 86% [7]. These observations prompted the International Diabetes Federation to recommend in 2008 that patients with one condition should be systematically screened for the other condition [8]. Furthermore, there is evidence of a vicious cycle between both diseases and sleep quality. OSA, by fragmenting the sleep architecture, alters its quality. Among persons with diabetes, there is evidence that poor sleep quality may disrupt glycaemic control [9]. Therefore, diabetic persons with untreated OSA have a high risk of poor sleep quality, chronic hyperglycaemia and its related-complications such as cardiovascular disease. Most studies on these issues between DM, OSA and sleep quality have been conducted in high resources countries. The aim of the present study was to report the experience from a developing setting and to determine among persons with T2DM living in Parakou city in Benin, the numbers and proportions of those with a high risk of OSA and of those who reported a poor sleep quality, as well as their associated factors. Methods: Study design period: this was a cross-sectional prospective study that was conducted between April 11 and August 5, 2019. Setting: Benin, a developing country located in West Africa, has not been spared by the increasing prevalence of DM worldwide. In 2008, DM prevalence was 2.6% [10]. In 2015, the prevalence of fasting hyperglycaemia (≥6.1 mmol/l) or persons with DM on medication was 12.4% [11]. From a Cotonou study, the economic capital city of the country, the prevalence of OSA found after recording patients with ventilator polygraphy was 53.2% [12]. 
Parakou, the third major city of the country, is located in Borgou, the region with the highest prevalence of impaired fasting glycaemia in the country at 12.4% [10]. The prevalence of DM in Parakou was 7.9% [10]. The study was carried out in three (2 public and 1 private) of the top-level health centres delivering care to diabetic persons in Parakou city. Study population: the study population was selected from all outpatients with T2DM who were consulted during the study period in one of these three facilities. The inclusion criteria were being aged ≥18 years and giving informed consent. Non-inclusion criteria were: i) not working at night; ii) not having psychiatric disorders; and iii) not being pregnant (for females) at the time of the survey. The sample size was calculated using the Schwartz formula N= ℇ2pq/i2, with N the sample size, ℇ the 95% confidence level equal to 1.96, p the prevalence of OSA among diabetic persons in Cotonou equal to 53.2% [12], q that is 1 - p being equal to 0.468 and i the accepted margin of error being equal to 5%. The sample size was 383 patients. A systematic sampling of all outpatients with T2DM who met the inclusion criteria was performed. Data collection: a structured interview with the patient was led by a medical student who was at the end of her medical training, using a pre-test and corrected questionnaire. Information on socio-demographic characteristics, DM history, treatment and complications, co-morbidities, lifestyle, sleep patterns, and OSA-suggestive symptoms were sought. Anthropometric parameters, neck and waist circumferences, as well as vital signs were measured. Neck circumference was measured with the head being in a neutral position, halfway between the tip of the chin and the supra-sternal hollow. Waist circumference was measured in a standing position and in gentle exhalation, halfway between the lower costal margin and the iliac crest. These measurements were followed by an Ear Nose Throat (ENT) and oral cavity examination, with a special focus on the presence of any morphological abnormalities. The snoring, tiredness, observed apnea, blood pressure, body mass index, age, neck circumference, and male gender (STOP-Bang) questionnaire (SBQ) was used and a score was derived to assess the risk of OSA [13]. The clinical assessment was completed with a measurement of capillary fasting glycaemia using an Accu-check mobile glucometer at the survey date. Diagnostic criteria: in this study, a high risk of OSA corresponded to any person who was recorded as “high risk” at SBQ. A self-reported poor sleep quality was considered if any patient responded “yes” to the following question: in the last month, do you wake up in the morning at least 4 days in a week with the feeling of being still tired or not having had a restful sleep? Details on other diagnostic criteria used in this study are shown in Table 1[3, 13-18]. the diagnostic criteria used in this study HbA1c=Glycated Haemoglobin; OSA= Obstructive Sleep Apnea; BMI=Body Mass Index Statistical analysis: all data were collected and double entered into EpiData entry Client v.4.4.3.1 (EpiData Association, Odense, Denmark). The data were analysed using the software Rx64 3.6.0. Percentages were determined to describe categorical variables. Means (with standard deviation) and medians (with interquartile range) were used to describe continuous variables appropriately. 
Factors associated with a high risk of OSA and a poor sleep quality were investigated using univariate analysis followed by a multiple logistic regression. Among potential risk factors, there has been a special focus on morphological ENT abnormalities, the presence of which would increase the risk of OSA. The following factors, namely sex, age, body mass index, hypertension were not included to determine factors associated with high risk of OSA, as they were already part of the SBQ. To investigate potential factors associated with reported poor sleep quality, DM complications, symptoms and the risk of OSA deserved special attention. Levels of significance were set at 5%. Ethical considerations: this study was conducted with the agreement of the Local Ethics Committee for Biomedical Research of the University of Parakou. Informed consent from all persons with T2DM was obtained prior to their inclusion in the study. Results: Participants´ characteristics: the mean age of the 383 diabetic persons who were enrolled in the study was 57.37 (11.45) years. They were 61.62% (n=236) females and 38.38% (n=147) males. The average follow-up time from the diagnosis date of the DM was 7.45 years (6.22), ranging from 3 months to 38 years. Of the persons with DM, 77.81% (n=298) were treated with oral antidiabetic drugs. The most commonly reported complication was diabetic neuropathy in 34.73% (n=133). Of associated comorbidities, obesity and hypertension were the most commonly diagnosed in 86.95% (n=333) and 62.14% (n=238) respectively (Table 2). On the date of the survey, the mean fasting blood glucose was 1.58 (0.83) g/l, ranging between 0.53 - 5 g/l. Of all the study participants, 8.62% (n=33) had a HbA1c test, of whom 39.39% (n=13) were found to be poorly controlled. treatment, comorbidities, complications and lifestyles of persons with type 2 diabetes mellitus, Parakou, Benin: April - August 2019 (N=383) Having smoked or drank alcohol more than one year ago Sleep duration and events during sleeping: overall, the mean duration of sleep for all participants was 7.39 (1.39) hours, ranging from 4 to 10.99 hours. Of these persons, the sleep duration was insufficient in 26.89% (n=103), normal in 65.01% (n=249) and too long in 8.09% (n=31). There were 8.09% (n=31) patients who complained of frequent nightmares, and 49.35% (n=189) who complained of more than one nocturnal awakening. OSA-related findings: amongst symptoms suggestive of OSA, nocturia was reported in 49.35% (n=189) persons, excessive daytime sleepiness in 34.73% (n=133), tiredness in 23.24% (n=89), severe daily snoring in 21.15% (n=81), apnea during sleep in 11.49% (n=44) and lack of concentration in 02.61% (n=10). Based on the SBQ, the risk of OSA was high in 14.10% (n=54), intermediate in 24.80% (n=95) and low in 61.10% (n=234). Diabetic persons with a high risk of OSA slept for significantly shorter periods compared with those who had a low or intermediate risk (6,96 ± 1.20 hours vs 7,46 ± 1.41 hours; p=0.011). The frequency of nocturia was comparable in both subgroups (53.70% vs 48.63%; p=0.490). After multivariate analysis, FTP Grade 3 (adjusted Odds Ratio, aOR=2.48; 95%CI=1.11 - 5.55; p=0.025) and 4 (aOR=4.65; 95%CI=1.26 - 15.90; p=0.015) were associated with a high risk of OSA (Table 3). 
morphologic abnormalities associated with a high risk of obstructive sleep apnea among persons with type 2 diabetes mellitus, Parakou, Benin: April-August 2019 (N=383) cOR= Crude Odds Ratio; aOR= Adjusted Odds Ratio; 95%CI= 95% Confidence interval Poor sleep quality related findings: there were 27.42% (n=105) of participants who reported poor sleep quality. After multivariate analysis, factors associated with a greater risk of reported poor sleep quality were being female (aOR=2.08; 95%CI=1.18-3.83; p=0.014), or having a diabetic foot complication (aOR=5.07; 95%CI=1.15-23.63; p=0.031), nocturia (aOR=1.96; 95%CI=1.18-3.29; p=0.010), tiredness (aOR=2.77; 95%CI=1.26-6.23; p=0.012) and a high risk of OSA (aOR=3.31; 95%CI=1.28-8.93; p=0.015) (Table 4). factors associated with a reported poor sleep quality among persons with T2DM, Parakou, Benin: April-August 2019 (N=383) cOR= Crude Odds Ratio; aOR= Adjusted Odds Ratio; 95%CI= 95% Confidence interval; OSA = Obstructive Sleep Apnea; SBQ = STOP-Bang Questionnaire Discussion: This is one of the few studies from sub-Saharan Africa that has attempted to document sleep quality and the risk of OSA in this vulnerable group of persons with diabetes. With the high prevalence of DM in Borgou region and Parakou city, it was important to document the burden of an associated non-communicable disease, namely OSA, that would potentially increase the risk of cardiovascular complications. Some findings from the study deserve consideration. First, we were surprised by the reported frequency of OSA symptoms in study participants. Nearly half of the participants suffered from nocturia, one third had excessive daytime sleepiness, and one patient in five reported tiredness and severe snoring. In general health settings, these symptoms are not usually systematically sought for during routine consultation, and they are not always spontaneously listed by patients themselves. Local recommendations have therefore been made that practitioners systematically ask about these symptoms during routine consultation. Second, nearly one patient in seven (14.10%) was found at high risk of OSA in Parakou city. This prevalence was similar to that reported from Saudi Arabia by Kalakattawi et al. who also applied the same SBQ to determine the risk of OSA [19]. However, when comparing with the other rare clinically-based studies from sub-Saharan Africa, the reported prevalence of OSA was much higher. From two studies in Nigeria that included outpatients in tertiary hospitals, the authors reported a high risk of OSA of 27% and 49.5% in 2014 and 2020 respectively [20, 21]. In another study carried out in Kenya, Sokwalla et al. reported a prevalence of 44% [22]. The differences observed may be explained by the use of another clinical score, the Berlin questionnaire, to determine the risk of OSA [20-22]. In the literature, comparisons between the different clinically validated scores to screen for OSA found a higher sensitivity for the SBQ compared with the Berlin questionnaire, with sensitivity increasing with the severity of the disease [23]. Furthermore, the observed prevalence in Parakou contrasts with published research from developed countries. In these studies, sleep events were more often recorded using polygraphy or polysomnography, regardless of the clinical score; and lack of this technology has been a limitation of the current study. OSA was diagnosed in more than half of diabetic persons using this technology [7, 24]. 
These findings are consistent with that found in Cotonou (53.2%) after having recorded the sleep pattern of 79 T2DM patients, thanks to an initiative implemented in a private institution [12]. All these reports highlight the importance of acquiring more sophisticated technology for diagnosing non-communicable diseases, even though communicable diseases remain more highly prevalent in sub-Saharan African settings. We need continued advocacy directed at decision-makers with the end goal of being able to systematically record sleeping patterns in diabetic persons in the public sector and therefore allowing appropriate therapies to be more widely available. Third, we hypothesized that morphological abnormalities, if any, would increase the risk OSA in patients with T2DM. This has been confirmed by the findings, with an increasing risk of OSA in the presence of a FTP Grade 3 or 4, and this is in line with findings in the general population [18]. In practice, such simple physical assessments could be used to identify those who require urgent sleep pattern recording for diagnostic confirmation. Fourth, up to three diabetic persons in ten (27.42%) reported poor sleep quality. In the literature, approximately one third of diabetic persons suffer from sleep disorders and poor sleep quality [9]. This quite frequent complaint among diabetic persons is probably due to several reasons that include concerns about the disease itself, requirements for self-management, other social life issues, and sleep disorders including OSA which play an important role. In other settings, the situation is even worse. The proportion of diabetic persons with poor sleep quality was nearly double in Kenya at 53% [22]. There is an urgent need to address factors associated with poor sleep quality. One of these, diabetic foot, is a chronic DM complication, the prevention of which requires good glycaemic control. As is also reported from elsewhere, nocturia disturbs the pattern of sleep and this may cause poor sleep quality [9]. In our context, nocturia may represent an OSA symptom, as it may also be a manifestation of poor glycaemic control with resulting polyuria. The comparable proportion of diabetic persons complaining of nocturia in both high and low-risk groups for OSA tends to reinforce the last assertion. This is further supported by the fact that up to 40% of the study participants who had a successful HbA1c test did not have their DM controlled. Another factor independently associated with poor sleep quality was a high risk of OSA at SBQ, through sleep fragmentation. Unsurprisingly, tiredness in the day, resulting from a non-restorative sleep at night, was associated with a greater risk of poor sleep quality. Finally, women with their usual higher workload in our context, including household and professional activities, childcare, and who, additionally, are the first to wake up, more often reported a non-restorative sleep than men. Conclusion: Obstructive sleep apnea symptoms were frequent among persons with T2DM in Parakou city, Benin. One in seven participants had a high risk of OSA and this was associated with a FTP grade 3 or 4. The prevalence of poor sleep quality was recorded in three out of ten diabetic patients and was increased in women and in the presence of nocturia, tiredness, a high risk of OSA and diabetic foot. 
In sub-Saharan Africa cities like Parakou, there is need for better systematic screening of OSA in persons with DM, by making sophisticated confirmatory technology more widely available so that more appropriate treatment can be offered and for improving glycaemic control. What is known about this topic Both diabetes mellitus and obstructive sleep apnea are two non-communicable diseases that are interconnected ; and the diagnosis of one of these conditions may prompt the screening of the second one; Sleep quality is poor for many diabetic patients. Both diabetes mellitus and obstructive sleep apnea are two non-communicable diseases that are interconnected ; and the diagnosis of one of these conditions may prompt the screening of the second one; Sleep quality is poor for many diabetic patients. Both diabetes mellitus and obstructive sleep apnea are two non-communicable diseases that are interconnected ; and the diagnosis of one of these conditions may prompt the screening of the second one; Sleep quality is poor for many diabetic patients. Both diabetes mellitus and obstructive sleep apnea are two non-communicable diseases that are interconnected ; and the diagnosis of one of these conditions may prompt the screening of the second one; Sleep quality is poor for many diabetic patients. What this study adds This study reports one of the rare experiences in assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention; Friedman Position Tongue is a simple oral examination that may contribute to identify those with a high risk of OSA, and therefore might be given priority during sleep recording; Some obstructive sleep apnea suggestive symptoms such as nocturia and tiredness are found independently associated with poor sleep quality and should be systematically asked diabetic patients during routine consultation. This study reports one of the rare experiences in assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention; Friedman Position Tongue is a simple oral examination that may contribute to identify those with a high risk of OSA, and therefore might be given priority during sleep recording; Some obstructive sleep apnea suggestive symptoms such as nocturia and tiredness are found independently associated with poor sleep quality and should be systematically asked diabetic patients during routine consultation. This study reports one of the rare experiences in assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention; Friedman Position Tongue is a simple oral examination that may contribute to identify those with a high risk of OSA, and therefore might be given priority during sleep recording; Some obstructive sleep apnea suggestive symptoms such as nocturia and tiredness are found independently associated with poor sleep quality and should be systematically asked diabetic patients during routine consultation. 
Background: diabetes mellitus (DM) and Obstructive Sleep Apnea (OSA) are two major and interconnected non-communicable diseases. Both negatively impact on sleep quality. This study aimed to determine among persons with type 2 DM, the proportions at high risk of OSA and of self-reported poor sleep quality along with their associated-factors in Parakou city, Benin. Methods: this was a cross-sectional prospective study of 100% (n=383) outpatient adults with type 2 DM, conducted between April and August 2019 in the three top centres managing diabetic persons in Parakou city. They were interviewed, examined and investigated using capillary fasting blood glucose tests. The STOP-Bang Questionnaire (SBQ) was used to determine the risk of OSA. Results: overall, their mean age was 57.37 (11.45) years. They were 61.62% (n=236) females and 38.38% (n=147) males. Sleep duration was insufficient in 26.89% (n=103). Nocturia was reported in 49.35% (n=189). The risk of OSA was high in 14.10% (n=54), intermediate in 24.80% (n=95) and low in 61.10% (n=234). Friedman Position Tongue Grade 3 (Adjusted Odds Ratio, aOR=2.48; 95%CI=1.11 - 5.55; p=0.025) and 4 (aOR=4.65; 95%CI=1.26 - 15.90; p=0.015) were independently associated with a high risk of OSA. The prevalence of reported poor sleep quality was 27.42% (n=105). Female gender (aOR=2.08; 95%CI=1.18-3.83; p=0.014), diabetic foot (aOR=5.07; 95%CI=1.15-23.63; p=0.031), nocturia (aOR=1.96; 95%CI=1.18-3.29; p=0.010), tiredness (aOR=2.77; 95%CI=1.26-6.23; p=0.012) and a high risk of OSA (aOR=3.31; 95%CI=1.28-8.93; p=0.015) were independently associated with a greater risk of reported poor sleep quality. Conclusions: in Parakou, the proportions of patients with type 2 DM at increased risk of OSA and with poor quality of sleep are relatively high. There is need for better systematic screening of OSA in persons with DM.
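A note on the screening instrument referenced above: the abstract reports low, intermediate and high SBQ risk tiers without spelling out the scoring. The following minimal Python sketch is illustrative only; it assumes the commonly published eight-item STOP-Bang questionnaire and the usual cut-offs (0-2 low, 3-4 intermediate, 5-8 high), which are not necessarily the exact operational definitions used in this study.

# Minimal sketch of STOP-Bang scoring (assumed standard items and cut-offs,
# not necessarily the study's exact operational definitions).
STOP_BANG_ITEMS = [
    "snoring",         # loud snoring
    "tiredness",       # daytime tiredness or fatigue
    "observed_apnea",  # breathing pauses observed by someone else
    "pressure",        # treated or diagnosed hypertension
    "bmi_over_35",     # body mass index > 35 kg/m2
    "age_over_50",     # age > 50 years
    "neck_over_40cm",  # neck circumference > 40 cm
    "male",            # male sex
]

def stop_bang_risk(answers: dict) -> tuple:
    """Return (score, risk category) from a dict of yes/no answers."""
    score = sum(bool(answers.get(item, False)) for item in STOP_BANG_ITEMS)
    if score <= 2:
        category = "low"
    elif score <= 4:
        category = "intermediate"
    else:
        category = "high"
    return score, category

# Hypothetical participant: snores, is tired, is hypertensive and over 50.
print(stop_bang_risk({"snoring": True, "tiredness": True,
                      "pressure": True, "age_over_50": True}))
# -> (4, 'intermediate')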
Introduction: Increasing urbanization, sedentary lifestyles, population ageing, changes in dietary and behavioural habits, and sleep disruption have prompted the emergence of non-communicable diseases as a global public health threat. According to the World Health Organization (WHO), non-communicable diseases are now the leading causes of death in the world, with 85% of these deaths occurring in low- and middle-income countries [1]. Diabetes mellitus (DM) is one of the most common non-communicable diseases, and type 2 diabetes (T2DM) is by far the most frequent form, accounting for 90% of all cases. Globally, the prevalence of DM nearly doubled from 4.7% to 8.5% between 1980 and 2014, and the disease is the seventh leading cause of death [2]. In sub-Saharan Africa, it affects about 19 million people [3]. Similarly, Obstructive Sleep Apnea (OSA), the most common sleep disorder, is another non-communicable disease also on the increase globally. The prevalence of OSA is estimated to have increased from 4% in men and 2% in women in 1993 to 12.5% in men and 5.9% in women in 2016 [4, 5]. The condition, however, remains largely under-diagnosed and under-treated, especially in sub-Saharan Africa. Several reports in the literature mention numerous complex interrelationships between the two diseases. OSA has been associated with insulin resistance, glucose intolerance and T2DM independently of obesity, with this association increasing with the severity of the sleep disorder [6, 7]. Chronic intermittent hypoxia and sleep disruption resulting from OSA are thought to play a major role in glucose metabolism disturbances [6, 7]. On the other hand, the mechanisms underlying OSA among diabetic patients are less well understood, but may involve an autonomic neuropathy according to some authors [8]. Overall, the prevalence of T2DM among those with OSA varies from 5% to 30%, while the prevalence of OSA among persons with diabetes ranges from 58% to 86% [7]. These observations prompted the International Diabetes Federation to recommend in 2008 that patients with one condition should be systematically screened for the other condition [8]. Furthermore, there is evidence of a vicious cycle between both diseases and sleep quality. OSA, by fragmenting the sleep architecture, alters its quality. Among persons with diabetes, there is evidence that poor sleep quality may disrupt glycaemic control [9]. Therefore, diabetic persons with untreated OSA have a high risk of poor sleep quality, chronic hyperglycaemia and its related complications such as cardiovascular disease. Most studies on the interrelationships between DM, OSA and sleep quality have been conducted in high-resource countries. The aim of the present study was to report the experience from a developing setting and to determine, among persons with T2DM living in Parakou city in Benin, the numbers and proportions of those with a high risk of OSA and of those who reported poor sleep quality, as well as their associated factors. Conclusion: Obstructive sleep apnea symptoms were frequent among persons with T2DM in Parakou city, Benin. One in seven participants had a high risk of OSA, and this was associated with a FTP grade 3 or 4. Poor sleep quality was reported by about three out of ten diabetic patients and was more frequent in women and in the presence of nocturia, tiredness, a high risk of OSA and diabetic foot. 
In sub-Saharan African cities like Parakou, there is a need for better systematic screening of OSA in persons with DM, by making sophisticated confirmatory technology more widely available so that more appropriate treatment can be offered and glycaemic control improved. What is known about this topic: Diabetes mellitus and obstructive sleep apnea are two interconnected non-communicable diseases, and the diagnosis of one of these conditions may prompt screening for the other; sleep quality is poor for many diabetic patients. What this study adds: This study reports one of the rare experiences of assessing the risk of obstructive sleep apnea and the burden of poor sleep among patients with type 2 diabetes mellitus in a developing setting, where communicable diseases continue to receive most attention; the Friedman Position Tongue is a simple oral examination that may help identify those with a high risk of OSA, who might therefore be given priority for sleep recording; some symptoms suggestive of obstructive sleep apnea, such as nocturia and tiredness, were found to be independently associated with poor sleep quality and should be systematically asked about during routine consultations with diabetic patients. 
Background: diabetes mellitus (DM) and Obstructive Sleep Apnea (OSA) are two major and interconnected non-communicable diseases. Both negatively impact on sleep quality. This study aimed to determine among persons with type 2 DM, the proportions at high risk of OSA and of self-reported poor sleep quality along with their associated-factors in Parakou city, Benin. Methods: this was a cross-sectional prospective study of 100% (n=383) outpatient adults with type 2 DM, conducted between April and August 2019 in the three top centres managing diabetic persons in Parakou city. They were interviewed, examined and investigated using capillary fasting blood glucose tests. The STOP-Bang Questionnaire (SBQ) was used to determine the risk of OSA. Results: overall, their mean age was 57.37 (11.45) years. They were 61.62% (n=236) females and 38.38% (n=147) males. Sleep duration was insufficient in 26.89% (n=103). Nocturia was reported in 49.35% (n=189). The risk of OSA was high in 14.10% (n=54), intermediate in 24.80% (n=95) and low in 61.10% (n=234). Friedman Position Tongue Grade 3 (Adjusted Odds Ratio, aOR=2.48; 95%CI=1.11 - 5.55; p=0.025) and 4 (aOR=4.65; 95%CI=1.26 - 15.90; p=0.015) were independently associated with a high risk of OSA. The prevalence of reported poor sleep quality was 27.42% (n=105). Female gender (aOR=2.08; 95%CI=1.18-3.83; p=0.014), diabetic foot (aOR=5.07; 95%CI=1.15-23.63; p=0.031), nocturia (aOR=1.96; 95%CI=1.18-3.29; p=0.010), tiredness (aOR=2.77; 95%CI=1.26-6.23; p=0.012) and a high risk of OSA (aOR=3.31; 95%CI=1.28-8.93; p=0.015) were independently associated with a greater risk of reported poor sleep quality. Conclusions: in Parakou, the proportions of patients with type 2 DM at increased risk of OSA and with poor quality of sleep are relatively high. There is need for better systematic screening of OSA in persons with DM.
4,345
397
[ 90, 214 ]
7
[ "sleep", "osa", "risk", "poor", "quality", "sleep quality", "poor sleep", "diabetic", "study", "risk osa" ]
[ "diseases type diabetes", "countries diabetes", "morbidities lifestyle sleep", "diabetes mellitus obstructive", "prevalence poor sleep" ]
[CONTENT] Type 2 diabetes mellitus | obstructive sleep apnea | sleep quality | Benin [SUMMARY]
[CONTENT] Type 2 diabetes mellitus | obstructive sleep apnea | sleep quality | Benin [SUMMARY]
[CONTENT] Type 2 diabetes mellitus | obstructive sleep apnea | sleep quality | Benin [SUMMARY]
[CONTENT] Type 2 diabetes mellitus | obstructive sleep apnea | sleep quality | Benin [SUMMARY]
[CONTENT] Type 2 diabetes mellitus | obstructive sleep apnea | sleep quality | Benin [SUMMARY]
[CONTENT] Type 2 diabetes mellitus | obstructive sleep apnea | sleep quality | Benin [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Middle Aged | Prospective Studies | Sleep | Sleep Apnea, Obstructive | Sleep Quality | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Middle Aged | Prospective Studies | Sleep | Sleep Apnea, Obstructive | Sleep Quality | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Middle Aged | Prospective Studies | Sleep | Sleep Apnea, Obstructive | Sleep Quality | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Middle Aged | Prospective Studies | Sleep | Sleep Apnea, Obstructive | Sleep Quality | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Middle Aged | Prospective Studies | Sleep | Sleep Apnea, Obstructive | Sleep Quality | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Middle Aged | Prospective Studies | Sleep | Sleep Apnea, Obstructive | Sleep Quality | Surveys and Questionnaires [SUMMARY]
[CONTENT] diseases type diabetes | countries diabetes | morbidities lifestyle sleep | diabetes mellitus obstructive | prevalence poor sleep [SUMMARY]
[CONTENT] diseases type diabetes | countries diabetes | morbidities lifestyle sleep | diabetes mellitus obstructive | prevalence poor sleep [SUMMARY]
[CONTENT] diseases type diabetes | countries diabetes | morbidities lifestyle sleep | diabetes mellitus obstructive | prevalence poor sleep [SUMMARY]
[CONTENT] diseases type diabetes | countries diabetes | morbidities lifestyle sleep | diabetes mellitus obstructive | prevalence poor sleep [SUMMARY]
[CONTENT] diseases type diabetes | countries diabetes | morbidities lifestyle sleep | diabetes mellitus obstructive | prevalence poor sleep [SUMMARY]
[CONTENT] diseases type diabetes | countries diabetes | morbidities lifestyle sleep | diabetes mellitus obstructive | prevalence poor sleep [SUMMARY]
[CONTENT] sleep | osa | risk | poor | quality | sleep quality | poor sleep | diabetic | study | risk osa [SUMMARY]
[CONTENT] sleep | osa | risk | poor | quality | sleep quality | poor sleep | diabetic | study | risk osa [SUMMARY]
[CONTENT] sleep | osa | risk | poor | quality | sleep quality | poor sleep | diabetic | study | risk osa [SUMMARY]
[CONTENT] sleep | osa | risk | poor | quality | sleep quality | poor sleep | diabetic | study | risk osa [SUMMARY]
[CONTENT] sleep | osa | risk | poor | quality | sleep quality | poor sleep | diabetic | study | risk osa [SUMMARY]
[CONTENT] sleep | osa | risk | poor | quality | sleep quality | poor sleep | diabetic | study | risk osa [SUMMARY]
[CONTENT] osa | sleep | condition | diseases | quality | prevalence | non communicable | diabetes | communicable | non [SUMMARY]
[CONTENT] criteria | study | osa | prevalence | country | equal | inclusion | risk | dm | factors [SUMMARY]
[CONTENT] 95 | aor | ci | 95 ci | ratio | odds ratio | odds | persons | sleep | hours [SUMMARY]
[CONTENT] sleep | obstructive sleep | apnea | obstructive | obstructive sleep apnea | sleep apnea | diabetic patients | poor | patients | risk [SUMMARY]
[CONTENT] sleep | osa | risk | poor | quality | sleep quality | poor sleep | apnea | diabetic | persons [SUMMARY]
[CONTENT] sleep | osa | risk | poor | quality | sleep quality | poor sleep | apnea | diabetic | persons [SUMMARY]
[CONTENT] Sleep Apnea | two ||| ||| Parakou | Benin [SUMMARY]
[CONTENT] 100% | between April and August 2019 | three | Parakou ||| ||| OSA [SUMMARY]
[CONTENT] 57.37 | 11.45) years ||| 61.62% | 38.38% ||| 26.89% ||| Nocturia | 49.35% ||| OSA | 14.10% | n=54 | 24.80% | 61.10% ||| Friedman Position Tongue Grade 3 | 95%CI=1.11 - 5.55 | 4 | 95%CI=1.26 - 15.90 | OSA ||| 27.42% ||| 95%CI=1.18 | 95%CI=1.15-23.63 | nocturia | aOR=1.96 | 95%CI=1.18-3.29 | 95%CI=1.26-6.23 | 95%CI=1.28-8.93 [SUMMARY]
[CONTENT] Parakou ||| DM [SUMMARY]
[CONTENT] Sleep Apnea | two ||| ||| Parakou | Benin ||| 100% | between April and August 2019 | three | Parakou ||| ||| OSA ||| 57.37 | 11.45) years ||| 61.62% | 38.38% ||| 26.89% ||| Nocturia | 49.35% ||| OSA | 14.10% | n=54 | 24.80% | 61.10% ||| Friedman Position Tongue Grade 3 | 95%CI=1.11 - 5.55 | 4 | 95%CI=1.26 - 15.90 | OSA ||| 27.42% ||| 95%CI=1.18 | 95%CI=1.15-23.63 | nocturia | aOR=1.96 | 95%CI=1.18-3.29 | 95%CI=1.26-6.23 | 95%CI=1.28-8.93 ||| Parakou ||| DM [SUMMARY]
[CONTENT] Sleep Apnea | two ||| ||| Parakou | Benin ||| 100% | between April and August 2019 | three | Parakou ||| ||| OSA ||| 57.37 | 11.45) years ||| 61.62% | 38.38% ||| 26.89% ||| Nocturia | 49.35% ||| OSA | 14.10% | n=54 | 24.80% | 61.10% ||| Friedman Position Tongue Grade 3 | 95%CI=1.11 - 5.55 | 4 | 95%CI=1.26 - 15.90 | OSA ||| 27.42% ||| 95%CI=1.18 | 95%CI=1.15-23.63 | nocturia | aOR=1.96 | 95%CI=1.18-3.29 | 95%CI=1.26-6.23 | 95%CI=1.28-8.93 ||| Parakou ||| DM [SUMMARY]
Association of Physical Activity and Sedentary Behavior with Colorectal Cancer Risk in Moroccan Adults: A Large-Scale, Population-Based Case-Control Study.
35763624
Physical activity has been associated with a lower risk of colorectal cancer (CRC) in studies mainly conducted in high-income countries, while sedentary behavior has been suggested to increase CRC risk. In this study, we aimed to investigate the role of physical activity and sedentary behavior in CRC risk in the Moroccan population.
BACKGROUND
A case-control study was conducted involving 1516 case-control pairs, matched on age, sex and center, in five university hospital centers. A structured questionnaire was used to collect information on socio-demographics, lifestyle habits, family history of CRC, and non-steroidal anti-inflammatory drug (NSAID) use. Information on physical activity and sedentary behavior was collected with the Global Physical Activity Questionnaire (GPAQ). For each activity domain (work, household, and recreational activities), a metabolic equivalent (MET) score was calculated using GPAQ recommendations. Conditional logistic regression models were used to assess the association between physical activity, sedentary behavior and the risk of overall CRC, colon cancer, and rectal cancer, taking into account other CRC risk factors.
METHODS
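The methods summary above mentions that GPAQ responses were converted to MET values and that activity was later grouped into intensity bands (the cut-offs of <600, 600-3,000 and ≥3,000 MET-minutes/week are given in the full methods text of this record). The Python sketch below illustrates that conversion under the assumption that the standard GPAQ weights of 4 METs for moderate and 8 METs for vigorous activity were used; the study's exact implementation is not reproduced here.

# Illustrative GPAQ-style conversion to MET-minutes/week and intensity bands.
# MET weights (4 moderate, 8 vigorous) follow the GPAQ analysis guide; the
# cut-offs mirror those stated in the methods text (<600, 600-3000, >=3000).
def met_minutes_per_week(activities):
    """activities: list of dicts with keys 'days_per_week',
    'minutes_per_day' and 'vigorous' (bool)."""
    total = 0.0
    for a in activities:
        met = 8.0 if a["vigorous"] else 4.0
        total += met * a["days_per_week"] * a["minutes_per_day"]
    return total

def intensity_category(met_min_week):
    if met_min_week < 600:
        return "low"
    if met_min_week < 3000:
        return "moderate"
    return "vigorous"

# Example: 3 days x 30 min of moderate work activity plus
# 2 days x 20 min of vigorous recreational activity.
example = [
    {"days_per_week": 3, "minutes_per_day": 30, "vigorous": False},
    {"days_per_week": 2, "minutes_per_day": 20, "vigorous": True},
]
total = met_minutes_per_week(example)      # 4*90 + 8*40 = 680.0
print(total, intensity_category(total))    # 680.0 moderate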
A high level of physical activity was associated with a lower risk of rectal cancer, colon cancer, and overall CRC; the adjusted odds ratios (ORa) for the highest versus the lowest level of activity were 0.67 (95% CI: 0.54-0.82), 0.77 (95% CI: 0.62-0.96), and 0.72 (95% CI: 0.62-0.83), respectively. In contrast, sedentary behavior was positively associated with rectal cancer risk (ORa=1.19, 95% CI: 1.01-1.40), but was unrelated to colon cancer risk (ORa=1.02, 95% CI: 0.87-1.20).
RESULTS
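The adjusted odds ratios and 95% confidence intervals in the results above come from conditional logistic regression models. As a reminder of how such figures are derived from a fitted model, the short sketch below converts a log-odds coefficient and its standard error into an odds ratio with a Wald-type 95% CI; the coefficient and standard error shown are made-up illustrative values of similar magnitude, not the study's fitted estimates.

# OR = exp(beta); 95% Wald CI = exp(beta +/- 1.96 * SE).
# The beta and SE below are invented for illustration only.
import math

def odds_ratio_ci(beta, se, z=1.96):
    point = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return point, lower, upper

beta, se = -0.40, 0.105   # hypothetical "high vs. low physical activity" term
point, lower, upper = odds_ratio_ci(beta, se)
print(f"OR = {point:.2f}, 95% CI: {lower:.2f}-{upper:.2f}")
# -> OR = 0.67, 95% CI: 0.55-0.82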
We found an inverse association between physical activity and CRC risk in the Moroccan population, and a positive association between sedentary behavior and rectal cancer risk. Considering that one-third of the total population studied had a sedentary lifestyle, these results may be used to improve public health strategies suited to the Moroccan population.
CONCLUSION
[ "Adult", "Case-Control Studies", "Colonic Neoplasms", "Exercise", "Humans", "Rectal Neoplasms", "Sedentary Behavior" ]
9587816
Introduction
Colorectal cancer (CRC) is the third most commonly diagnosed cancer worldwide, with 1.8 million cases, and the second in terms of mortality, with 881,000 deaths (Ferlay et al., 2018; Bray et al., 2018). There is wide geographical variation in the distribution of CRC incidence around the world (Bray et al., 2018; BW and CP). In 2018, incidence rates were generally 2- to 3-fold higher in high-income countries compared with low- and middle-income countries (LMIC) (Bray et al., 2018). However, differences in mortality rates between these two regions are smaller, largely because CRC patients living in LMIC are diagnosed at a later stage (Bray et al., 2018). In Morocco, for example, CRC incidence showed an increasing trend between 2008 and 2012, with age-standardized incidence rates rising from 3.8 to 8.4 per 100,000 in men and from 2.6 to 7.4 per 100,000 in women (Fondation Lalla Salma Prévention et Traitement des Cancers). It is well known that CRC risk is influenced by both genetic and environmental factors (Kuipers et al., 2015). A range of modifiable factors, including smoking, alcohol intake, food consumption (e.g. red and processed meat, fibre, calcium) and obesity, can influence the risk of developing CRC (Eaglehouse et al., 2017; Abar et al., 2018). According to the World Cancer Research Fund/American Institute for Cancer Research, the evidence on the association between all types of physical activity (occupational, household, transport and recreational) and reduced CRC risk has been classified as convincing (World Cancer Research Fund/ Institute for Cancer Research 2018). Based on observational epidemiological evidence, the decrease in risk associated with regular physical activity is estimated at 25-30% when comparing the most active with the least active participants in these studies (Chao et al., 2004; Friedenreich et al., 2006; Wolin et al., 2007, 2009). Sedentary behavior is positively associated with an increased risk of colorectal cancer, independently of physical activity (PA) (Kerr et al., 2017). This evidence is based mainly on studies conducted among Western populations. However, less information is available on the association between physical activity and CRC risk in developing countries. Several studies have investigated the associations between physical activity, dietary habits and health outcomes in Morocco, but these mainly focused on Moroccan teenagers and adolescents (López et al., 2012; Hamrani et al., 2015; El-ammari et al., 2017). Morocco is a fast-growing country, experiencing an important epidemiological and nutritional transition (Belahsen, 2014; Ronto et al., 2018). Urbanization and economic growth have been identified as the main determinants of reduced physical activity levels in this population, where 24% of women and 9% of men were classified in the lowest physical activity group (Najdi et al., 2011). To obtain evidence on a link between physical activity and CRC in the Moroccan context, we studied the association between physical activity, sedentary behavior and CRC risk in a population-based case-control study.
null
null
Results
Table 1 presents the general socio-demographic characteristics and potential confounders for this case-control study. Compared to controls, the mean age of cases was slightly higher (56.45 ± 13.95 years vs. 55.50 ± 13.70). Marital status and residency were similarly distributed between cases and controls. Cases were more likely than controls to be smokers and to have a family history of CRC. Concerning CRC anatomical location, 50.2% of cases had colon cancer and 49.8% had rectal cancer. Table 2 shows the distribution of physical activity intensity in MET-minutes/week and of sedentary behavior among CRC cases and controls. More than a quarter of both female and male cases were in the low physical activity category and were sedentary. In addition, more than one-third of cases and controls (women and men) had a sedentary lifestyle. Table 3 shows the crude and adjusted odds ratios for CRC risk by physical activity intensity and sedentary behavior. Compared with low physical activity intensity, moderate and high levels of physical activity were associated with a reduced risk of rectal cancer and overall CRC in both the crude and multivariable models. For rectal cancer, compared with low physical activity intensity, the estimated risks for moderate and high physical activity intensity were ORa=0.72 (95% CI: 0.61-0.85) and ORa=0.67 (95% CI: 0.54-0.82), respectively (p-trend<0.001). For overall CRC, compared with low physical activity intensity, the estimated risks for moderate and high physical activity intensity were ORa=0.80 (95% CI: 0.71-0.90) and ORa=0.72 (95% CI: 0.62-0.83), respectively (p-trend<0.001). For colon cancer, an inverse association was found for high activity, but it did not reach the significance threshold (ORa=0.77, 95% CI: 0.62-0.96, p-trend=0.07). We found a borderline significant interaction between BMI and PA for overall CRC (p-interaction = 0.05); however, inverse associations were found for both the low and high BMI strata (Table 3). No significant interactions between BMI and PA were observed for colon cancer (p-interaction = 0.27) or rectal cancer (p-interaction = 0.12). The inverse association we found for physical activity and CRC risk was consistent across low and high BMI groups (supplementary material, Tables 3 and 4). No significant association was observed between sedentary behavior and CRC risk, except for rectal cancer, for which sedentary behavior was positively associated (ORa=1.19, 95% CI: 1.01-1.40). Table 4 shows crude and adjusted ORs and 95% CIs for physical activity intensity and sedentary behavior by anatomical location of the tumour (colon or rectum), for men and women separately. Before and after adjustment for confounding factors, moderate and high levels of physical activity were associated with a reduced risk of overall CRC in men; the adjusted ORs were 0.83 (95% CI: 0.69-0.99) and 0.69 (95% CI: 0.56-0.85), respectively (p-trend<0.001). Similar relationships were found in women for overall CRC: the adjusted OR was 0.76 (95% CI: 0.65-0.89) for moderate PA and 0.75 (95% CI: 0.60-0.93) for high PA (p-trend<0.001, p-heterogeneity=0.001). For colon cancer, in both men and women, an inverse association was found for moderate and high PA, but it did not reach the significance threshold (p-heterogeneity=0.07). 
For rectal cancer, in women, moderate and high levels of physical activity were associated with a reduced risk (ORa=0.72, 95% CI: 0.61-0.85 and ORa=0.67, 95% CI: 0.54-0.82, respectively; p-trend<0.001). In men, an inverse association was found only for high levels of PA (ORa=0.66, 95% CI: 0.50-0.88; p-trend<0.02; p-heterogeneity=0.001). For sedentary behavior, a positive association was limited to rectal cancer (ORa=1.28, 95% CI: 1.02-1.61) but not colon cancer, and only in men (p-heterogeneity=0.001). In addition, no association was found between sedentary behavior and the risks of overall CRC and colon cancer in either men or women. Adjusted Odds Ratio for Physical Activity Intensity and Sedentary Behavior in CRC Cases and Controls by Anatomical Location of the CRC (Colon or Rectum) (N=2906). *Crude Odds Ratio (ORc); crude model analysis; ¥Adjusted Odds Ratio (ORa); multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, ex-smoker and current smoker), non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), family history of colorectal cancer (yes or no). Adjusted Odds Ratio for Physical Activity Intensity and Sedentary Behavior in CRC Cases and Controls by Sex Stratification (N=2906). *Crude Odds Ratio (ORc); crude model analysis; ¥Adjusted Odds Ratio (ORa); multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, ex-smoker and current smoker), alcohol consumption (yes, no), non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), family history of colorectal cancer (yes or no); +Adjusted Odds Ratio (ORa) for women; multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), family history of colorectal cancer (yes or no). Interaction by BMI for the Association between PA and CRC Overall, Colon and Rectal Cancer (N=2906). Adjusted Odds Ratio for Physical Activity Intensity in CRC Cases and Controls by BMI Stratification (N=2906). ¥Adjusted Odds Ratio (ORa); multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, ex-smoker and current smoker), non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), family history of colorectal cancer (yes or no).
null
null
[ "Author Contribution Statement" ]
[ "The study idea and its design were conceived by ZH and KE. The analyses and the interpretation of the data were led by the same doctors. KR conceived the study idea, its design, and led the analyses and interpretation of the data and supervised the drafting. IH, NM, MG, MK, MD, HAB, BA and AA participated in the conception and the design of the study. AM, BK and IMZ contributed to the conception of the study, and the collection of data. All authors have read and approved the manuscript." ]
[ null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion", "Author Contribution Statement" ]
[ "Colorectal cancer is the third most commonly diagnosed cancer worldwide with 1.8 million cases, and the second in terms of mortality with 881,000 deaths (Ferlay et al., 2018; Bray et al., 2018). There is a wide geographical variation in the distribution of the CRC incidence around the world (Bray et al., 2018; BW and CP). In 2018, incidence rates were generally from 2 to 3-fold higher in high income countries compared with low and middle income countries (LMIC) (Bray et al., 2018). However, differences in mortality rates between these two regions are smaller, specifically because CRC patients living in LMIC discover their disease at a later stage (Bray et al., 2018). In Morocco, for example, from 2008 to 2012, there is increasing tendency in the CRC incidence with age standardized increases in incidence rates of 3.8 to 8.4 and from 2.6 to 7.4 per 100,000 in men and women, respectively (Fondation Lalla Salma Prévention et Traitement des Cancers).\nIt is well known that CRC risk is influenced by both, genetic and environmental factors (Kuipers et al., 2015). A range of modifiable factors including smoking, alcohol intake, food consumption (e.g. red and processed meat, fibre, calcium) and obesity can influence the risk of developing CRC (Eaglehouse et al., 2017; Abar et al., 2018). According to the World Cancer Research Fund/American Institute for Cancer Research, the evidence on the association between all types of physical activity (occupational, household, transport and recreational) and reduced CRC risk has been classified as convincing (World Cancer Research Fund/ Institute for Cancer Research 2018).\nBased on observational epidemiological evidence, the decrease in the risk associated to steady physical activity is estimated to be 25–30%, when comparing the most active to least active participants in these studies (Chao et al., 2004; Friedenreich et al., 2006; Wolin et al., 2007, 2009). Sedentary behavior is positively and independently of PA associated with an increased risk of colorectal cancer (Kerr et al., 2017). This evidence was based mainly on studies conducted among western populations. However, less information is available on the association between physical activity and CRC risk in developing countries.\nSeveral studies have investigated the associations between physical activity, dietary habits and health outcomes in Morocco, but these mainly focused on Moroccan teenagers and adolescents (López et al., 2012; Hamrani et al., 2015; El-ammari et al., 2017). Morocco is a fast-growing country, experiencing an important epidemiological and nutritional transition (Belahsen, 2014; Ronto et al., 2018). Urbanization and economical growth have been identified as the main determinants of reduced physical activity levels among this population where 24% of women and 9% of men were classified in the lowest physically active group (Najdi et al., 2011). \nTo obtain evidence on a link between physical activity and CRC in the Moroccan context, we studied the association between physical activity and sedentary behavior and CRC risk in a population-based case control study.", "\nStudy population \n\nA case control study was conducted from September 2009 until February 2017 in five Moroccan University hospitals located in Rabat, Casablanca, Oujda, Fez, and Marrakech (Najdi et al. 2011).\n\nInclusion criteria for cases and controls \n\nDetails of this study have been reported elsewhere(Fondation Lalla Salma Prévention et Traitement des Cancers). 
Briefly, only newly diagnosed CRC cases were recruited in this study. Controls were randomly selected from outpatients accompanying other patients and visitors that were healthy disease-free and recruited in the same time-window as cases. Cases and controls were individually matched on age (± 5 years), sex, and center.\nOther eligibility criteria included the following: aged at least 18 years old, had not received any treatment (radiotherapy, chemotherapy and hormone-therapy), psychiatric problems, diabetes mellitus, and cardiovascular diseases, and an ability to communicate and carry out the interview. \n\nParticipation rate\n\nIn this study, the participation rate was 97.5% (1,516/1,555) for cases and 75.8% (1,516/2,000) for controls.\n\nEthical procedure \n\nThe protocol of this study has been approved by the ethics Committee at the University of Fez. Before starting the study, all participants provided written informed consent to participate.\n\nData collection\n\nTrained interviewers collected data using a structured questionnaire. Information on sociodemographic characteristics, clinical, lifestyle and dietary data were collected via face-to-face interview. Socio-demographic data included age, sex, center, residency, marital status, educational level, and monthly income. Clinical data included the type and the stage of cancer, family history specially related to CRC (first and second-degree relatives), and the use of non-steroidal anti-inflammatory drugs (NSAID).\nTo estimate the intensity of physical activity, it is necessary to take into account the frequency, time, intensity, and type of physical activity (work, home and recreational activities) (World Health Organization). The Global Physical Activity Questionnaire (GPAQ) was used to collect this detailed information on physical activity and the Metabolic Equivalent of Task (MET) was calculated for each participant according to GPAQ guideline (World Health Organization). The physical activity intensity was obtained by dividing the METs into three categories: low intensity (<600 MET-minutes per week), moderate intensity (600–3,000 MET-minutes per week), and vigorous intensity (≥3,000 MET-minutes per week). Information about sedentary behavior was collected as time spent during a typical sitting or reclining per day. A sedentary person was defined as spending more than 4 hours in a sitting or lying position, excluding time spent a sleep (Sigmundová et al., 2015).\nInformation on alcohol consumption was collected and classified into never or current consumers, smoking status was defined according to the International Union Against Tuberculosis and Lung Disease guide (never, current and ex-smokers) (Slama et al., 2008).\nA validated food frequency questionnaire (FFQ) was used for dietary assessment (El Kinany et al., 2018). This FFQ included 255 items to estimate food intake in the Moroccan population. Food consumption frequencies were divided into 8 categories: (never, 1-3 times/month, once a week, 2-4 times/week, 5-6 times/week, once a day, 2-3 times/day, and equal or more than 4 times/day). \nAnthropometric measurements, including weight and height, were extracted from medical records. Body mass index (BMI; kg/m2) was calculated as the ratio of the weight divided by the square of height in meters. BMI was classified using cut-off points recommended by WHO [29]: underweight [16–18.5[ kg/m2, normal [18.5–25[ kg/m2, overweight [25–30[ kg/m2), obesity for BMI ≥30 kg/m2. 
For BMI subgroups, risk estimates for all subgroups were taken and classified in “low” and “high” BMI groups. In general, “low” BMI groups represented those in the” Underweight”(BMI<24.9 kg/m2) or “normal” (BMI<25 kg/m2) range of BMI. Effect estimates that were classified as “high” BMI generally represented those in the “overweight” (25 ≤BMI < 30 kg/m2) or “obese” (BMI ≥ 30 kg/m2) ranges of BMI. \n\nStatistical analysis \n\nExclusion criteria prior to commencing the analyses included: participants with unspecified primitive cancer (n=7), cases with old biopsies (6 cases), participants with missing dietary data (n=10), duplicate records (n=2), unmatched records (n=8) and participants with the lowest and highest 1% of the distribution of the ratio between energy intake and energy requirement (n=30).\nDescriptive analyses were conducted using frequencies for categorical variables and means ± standard deviation (SD) for continuous variables. To assess the difference between cases and controls, we used the Mc-Nemar test for categorical variables and a t-test for matched samples for analyzing continuous variables. A description of the study population was published previously (El Kinany et al., 2019).\nConditional logistic regression models were utilised to evaluate the associations between the intensity of physical activity and sedentary behavior and CRC risk. The adjusted odds ratio (ORa) and 95% confidence interval (95% CI) were estimated taking into account relevant confounders: age (years), residency (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), BMI categories (normal, underweight, overweight and obesity), smoking status (never smoker, ex-smoker and current smoker), alcohol (yes, no), family history of CRC (yes, no), sedentary behavior (yes, no), NSAID use (yes, no), intake of red and processed meat (continuous, g/day), fiber (continuous, g/day), calcium (continuous, g/day), and total energy intake (continuous, kcal/day). As body size/adiposity is potentially on the causal pathway linking physical activity and sedentary activities with colorectal cancer, we also performed all models with and without adjustment for BMI categories. Further adjustment for previous screening of colorectal cancer resulted in virtually unchanged risk estimates, so this variable was not involved in the multivariable models (supplementary material- Tables 1 and 2). Trend tests across physical activity categories were calculated by entering the categorical exposure variables into the models as continuous variables. In addition to overall CRC, analyses were also undertaken for colon cancer and rectal cancer for sexes combined and for men and women separately. Heterogeneity of associations by sex and across anatomical cancer subsites was assessed by calculating X2 statistics. We examined effect modification by BMI, tests of interaction were based on a Wald test of the interaction term. The interaction with BMI was not statistically significant for colon and rectal cancer. Only for overall CRC, we found a borderline significant interaction between BMI and PA. ", "\nTable 1 presents the general socio-demographic characteristics and potential confounders for this case-control study. Compared to controls, the mean age of cases was slightly higher (56.45 ± 13.95 years vs. 55.50 ± 13.70). Marital status and residency were similarly distributed between cases and controls. 
When cases and controls were compared, cases were more likely to be smokers and to have a higher occurrence of family history of CRC. Concerning CRC anatomical location, 50.2% cases had colon cancer and 49.8% had rectal cancer.\n\nTable 2 shows the distribution of physical activity intensity in MET-minutes/week and sedentary behavior among CRC cases and controls. More than a quarter of the female and the men cases ranked in the low physical activity category and had a sedentary behavior. In addition, more than one-third of cases and controls (women and men) had a sedentary lifestyle.\n\nTable 3 shows the crude and the adjusted Odds Ratios for CRC risk and intensity of physical activity and sedentary behavior. Moderate and higher levels of physical activity comparing to the low physical activity intensity were associated with reduced risk of rectal cancer and CRC overall in the crude and the multivariable models. For rectal cancer, comparing to the low physical activity intensity, the estimated risks for moderate and high physical activity intensity were respectively ORa=0.72, 95% CI: 0.61-0.85 and ORa=0.67, 95% CI: 0.54-0.82, p-trend<0.001 respectively. For CRC overall, comparing to the low physical activity intensity the estimated risks for moderate and high physical activity intensity were ORa=0.80, 95% CI: 0.71-0.90 and ORa=0.72, 95% CI: 0.62-0.83, p-trend<0.001 respectively. For colon cancer, an inverse association was found for high activity, but it did not reach the significance threshold (ORa=0.77, 95% CI: 0.62-0.96, p-trend=0.07). We found a borderline significant interaction between BMI and PA for overall CRC (p-interaction = 0.05); however, inverse associations were found for both the low and high BMI strata (Table 3). No significant interactions between BMI and PA were found for colon (p-interaction = 0.27) and rectal cancer (p-interaction = 0.12) were observed. The inverse association we found for physical activity and CRC risk was consistent across low and high BMI groups. (supplementary material - table 3 and 4).\nNo significant association was observed between sedentary behavior and CRC risk; except for rectal cancer for which sedentary behavior was positively associated (ORa=1.19, 95% CI: 1.01-1.40). \n\nTable 4 shows crude and adjusted ORs and 95% CIs by anatomical location of the tumour (colon or rectum) by for physical activity intensity and sedentary behavior for men and women separately. Before and after adjustment for confounding factors, moderate and higher levels of physical activity were associated with reduced risk of overall CRC in men, the adjusted OR were 0.83 (95% CI: 0.69-0.99) and 0.69 (95% CI: 0.56-0.85) p-trend<0.001 respectively. Similar relationships were found in women for overall CRC; the adjusted OR=0.76, 95% CI: 0.65-0.89 for moderate PA and OR=0.75, 95% CI: 0.60-0.93 for high PA, p-trend<0.001, P-heterogeneities=0.001. For colon cancer, in both men and women, an inverse association was found for moderate and high PA, but it did not reach the significance threshold (P-heterogeneities=0.07).\nFor rectal cancer, in women, moderate and higher levels of physical activity were associated with reduced risk of rectal cancer (ORa=0.72, 95% CI: 0.61-0.85) and (ORa=0.67, 95% CI: 0.54-0.82), p-trend<0.001 respectively. 
In men, an inverse association was only found for high levels of PA, ORa=0.66, 95% CI: 0.50-0.88, p-trend<0.02, P-heterogeneities=0.001.\nFor sedentary behavior, a positive association was limited to rectal cancer ORa=1.28, 95% CI: 1.02-1.61 but not colon cancer, and only for men (P-heterogeneity=0.001). In addition, no association was found between sedentary behavior and risks of overall CRC and colon cancer for both men and women.\nAdjusted Odds Ratio for Physical Activity Intensity and Sedentary Behavior in CRC Cases and Controls by Anatomical Location of the CRC (Colon or Rectum) (N=2906).\n\n*Crude Odds Ratio (ORc); Crude model analysis adjusted; ¥Adjusted Odds Ratio (ORa); Multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, Ex-smoker and current smoker), Non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), family history of colorectal cancer(yes or no). \nAdjusted Odds Ratio for Physical Activity Intensity and Sedentary Behavior in CRC Cases and Controls by Sex Stratification) (N=2906)\n\n*Crude Odds Ratio (ORc); Crude model analysis; ¥Adjusted Odds Ratio (ORa); Multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, Ex-smoker and current smoker), alcohol consumption (yes, no), Non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), family history of colorectal cancer(yes or no); +Adjusted Odds Ratio (ORa) for women; Multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), Non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), family history of colorectal cancer(yes or no).\nAn Interaction by BMI for the Association between PA and CRC Overall, Colon and Rectal Cancer (N=2906)\nAdjusted Odds Ratio for Physical Activity Intensity in CRC Cases and Controls by BMI Stratification (N=2906)\n\n¥Adjusted Odds Ratio (ORa); Multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, Ex-smoker and current smoker), Non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), family history of colorectal cancer(yes or no).", "In this CRC case-control study carried out in Morocco, we examined the association between physical activity, sedentary behavior and CRC risk. We found that a high level of physical activity was associated with reduced risks of colon cancer, rectal cancer and overall CRC. 
For sedentary behavior, a positive association was found for rectal cancer, but not for overall CRC and colon cancer.\nWe found an inverse association between moderate and high intensity activity and overall CRC risk among men and women. Similar inverse associations between higher levels of physical activity and CRC risk have been reported by multiple other studies (Wolin and Tuchman, 2011; Golshiri et al., 2016; Ghafari et al., 2016). The inverse association we found for physical activity and CRC risk was consistent across low and high BMI groups. \nWe found that colon cancer was inversely associated with high intensity of physical activity. Multiple studies showed an inverse association between colon cancer risk and physical activity (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund, 1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018). For sexes combined, we found an inverse association for physical activity and colon cancer risk, with similar magnitudes found for these inverse associations for men and women separately. Similar results were found by a large pooled analysis that also reported inverse associations between physical activity and colon cancer for both, men and women (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund ,1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018). In this current study, moderate and high intensity activity was inversely associated with rectal cancer risk for both men and women and sexes combined. Our findings are in agreement with the study published by Moore et al., ??? that showed a decrease in the risk of CRC and rectal cancer in individuals with vigorous physical activity (Lee et al., 1991; Slattery et al., 2003). However, the World Cancer Research Fund/American Institute for Cancer Research did not find any association between physical activity and rectal cancer risk (Lee et al., 1991; Thune and Lund, 1996; Robsahm et al., 2013). A cohort study in the Netherlands suggests that non-occupational physical activity was associated with rectal cancer in women (Simons et al., 2013a).\nEmerging evidence suggests that sedentary behavior may be a risk factor for CRC, independent of PA (Lynch, 2010; Simons et al., 2013; Schmid and Leitzmann, 2014; Cao et al., 2015; Kerr et al., 2017). In a meta-analysis, sedentary behavior as specified by time spent watching TV, occupational sitting time, and total sitting time was associated with a 54%, 24%, and 24% increased risk of colon cancer, respectively (Schmid and Leitzmann, 2014). In a more recent prospective analysis in the UK Biobank (Morris et al., 2018), greater television watching time, but not time spent on a computer, was associated with higher colon cancer risk; with no associations found for rectal cancer risk. In the current study, rather than domain-specific activities, sedentary behavior was defined as the time spent during a typical day sitting or reclining. Consequently, we were unable to assess how specific sedentary behaviors were associated with CRC risk. Substantial challenges also remain to translate the current understanding of the impact of sedentary behavior on CRC risk into interventions with possible clinical impact. Subjective measures based on self-reported information are prone to measurement error, whereas more objective measures are costly and lacking information on specific domains of sedentary behavior (Healy et al., 2011). 
Additional studies are needed to examine the possible role of sedentary behavior in colorectal cancer development.\nThe main mechanisms that could explain the potentially beneficial effect of physical activity on the risk of CRC are associated to its effects on weight and adiposity (mainly abdominal) and favorable effects on circulating levels of insulin and insulin-like growth factor 1 (IGF-1) which promote cellular proliferation (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund 1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018).The physiological mechanisms of movement from sitting to standing may improve several functions of the human body like: glucose regulation that will be achieved by increasing insulin sensitivity and non–insulin-dependent glucose in muscles during regular physical activity (Short, 2013). Physical activity can also lower colorectal cancer risk by stimulating digestion and reducing transit time through the intestine thus reducing the time of exposure of the colonic mucosa and fecal contents to food-borne carcinogens (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund, 1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018). In addition, being active can mask mitochondrial aging in the muscle and increases blood flow, adrenergic signaling, and shear stress that enhances vascular homeostasis of the endothelium (Brierley et al., 1996; Olufsen et al., 2005; Pagan et al., 2018), all of which have the potential to regulate tumor growth and tumor metabolism (Brierley et al., 1996; Kerr et al., 2017). Lower sedentary behaviors have also been associated with lower insulin levels and lower inflammation, they convincingly increase the risk of weight gain, overweight and obesity (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund, 1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018).\nThe urban population in Morocco has increased from 29% in 1960 to 62.4% in 2018 (El Rhazi et al., 2020) while undergoing an economical transition, characterized by increasing industrialization and accompanied by an increased sedentary lifestyle and decreased physical activity. Leisure (walking and cycling) and labor activities have been replaced by mechanized activities (Batnitzky, 2008). Moroccan rural residents are more likely to participate in all forms of physical activity (at work, play sports, transport, etc.) than urban residents, only 14.2% of rural residents did not meet WHO recommendations (El-ammari et al., 2017). Generally, in the Arab world, the prevalence of physical activity was higher in men than in women, due to cultural and social factors and restrictions on external exercises, especially for women. This was confirmed by prevalence results showing that the lowest physically active class was higher in women (24%) compared to men (9%) among the Moroccan adult population in 2011 (Najdi et al., 2011). The sedentary problem further increased due to changes in adolescent behaviors. The prevalence of physical inactivity was 79.5% and sedentary behavior was 36.5% among Moroccan adolescents in 2017 (El-ammari et al., 2017). It is women who are more likely to have sedentary behaviors 26% compared to men 16.1% (El-ammari et al., 2017). The urbanization and the globalization are the principal determinants of low physical activity among Moroccan adults. 
Low levels of physical activity and increased sedentary behavior have been associated with increasing health risks, calling for appropriate interventions and a political and educational framework to combat the pandemic of sedentary behavior among children, adolescents, and adults in Morocco. Increasing the frequency and the active time in school-based sports and enhancing public awareness about the healthy lifestyle may reduce the prevalence of physical inactivity. As highlighted in the World Health Organization Global action plan on physical activity 2018–2030, all stakeholders should support the strengthening of the evidence and data systems, particularly in LMICs (World Health Organization).\nThis study had several strengths. This study is among the first to investigate the associations between intensity of physical activity, sedentary behavior and CRC risk in North Africa in such a large sample. The relatively large number of the participants permitted analyses by sex and across colorectal subsites. Further, the detailed information on the exposure collected from participants enabled us to carefully adjust for known colorectal cancer risk factors.\nPotential limitations of this study are the complexity of the physical activity and sedentary quantifications. Physical activity measurements were obtained based on the GPAQ, for which the questions focused on the study year, without considering physical activity changes during the life course. In addition, underestimations of the physical activity levels are probable and may especially be an issue among housewives (El-ammari et al., 2017).\nTo conclude, we found an inverse association between intensity of physical activity and CRC risk in the Moroccan population, and a positive association between sedentary behavior and rectal cancer risk. Considering one-third of the study population had a sedentary lifestyle, these results can be used to establish public health strategies adapted to the Moroccan population.\n\nAbbreviations\n\nBMI: Body Mass Index.\nCI: Confident Interval.\nCRC: Colorectal cancer.\nFFQ: Food Frequency Questionnaire.\nIARC: International Agency for Research on Cancer.\nGPAQ: Global Physical Activity Questionnaire.\nLMIC: Low and Middle Income Countries.\nMET: Metabolic Equivalent Task.\nNSAID: Non-Steroidal Anti-Inflammatory Drugs.\nSD: Standard Deviation.\nWCRF/AICR: World Cancer Research Fund/American Institute for Cancer Research.\nIARC disclaimer:\nWhere authors are identified as personnel of the International Agency for Research on Cancer / World Health Organization, the authors alone are responsible for the views expressed in this article and they do not necessarily represent the decisions, policy or views of the International Agency for Research on Cancer / World Health Organization.", "The study idea and its design were conceived by ZH and KE. The analyses and the interpretation of the data were led by the same doctors. KR conceived the study idea, its design, and led the analyses and interpretation of the data and supervised the drafting. IH, NM, MG, MK, MD, HAB, BA and AA participated in the conception and the design of the study. AM, BK and IMZ contributed to the conception of the study, and the collection of data. All authors have read and approved the manuscript." ]
[ "intro", "materials|methods", "results", "discussion", null ]
[ "Physical activity", "sedentary behaviour", "colorectal cancer risk", "Morocco" ]
Introduction: Colorectal cancer is the third most commonly diagnosed cancer worldwide with 1.8 million cases, and the second in terms of mortality with 881,000 deaths (Ferlay et al., 2018; Bray et al., 2018). There is wide geographical variation in the distribution of CRC incidence around the world (Bray et al., 2018; BW and CP). In 2018, incidence rates were generally 2- to 3-fold higher in high-income countries than in low and middle income countries (LMIC) (Bray et al., 2018). However, differences in mortality rates between these two regions are smaller, largely because CRC patients living in LMIC are diagnosed at a later stage (Bray et al., 2018). In Morocco, for example, CRC incidence increased from 2008 to 2012, with age-standardized incidence rates rising from 3.8 to 8.4 per 100,000 in men and from 2.6 to 7.4 per 100,000 in women (Fondation Lalla Salma Prévention et Traitement des Cancers). It is well known that CRC risk is influenced by both genetic and environmental factors (Kuipers et al., 2015). A range of modifiable factors including smoking, alcohol intake, food consumption (e.g. red and processed meat, fibre, calcium) and obesity can influence the risk of developing CRC (Eaglehouse et al., 2017; Abar et al., 2018). According to the World Cancer Research Fund/American Institute for Cancer Research, the evidence on the association between all types of physical activity (occupational, household, transport and recreational) and reduced CRC risk has been classified as convincing (World Cancer Research Fund/American Institute for Cancer Research, 2018). Based on observational epidemiological evidence, the decrease in risk associated with regular physical activity is estimated at 25–30% when comparing the most active with the least active participants in these studies (Chao et al., 2004; Friedenreich et al., 2006; Wolin et al., 2007, 2009). Sedentary behavior is positively associated with an increased risk of colorectal cancer, independently of physical activity (Kerr et al., 2017). This evidence is based mainly on studies conducted among western populations, and less information is available on the association between physical activity and CRC risk in developing countries. Several studies have investigated the associations between physical activity, dietary habits and health outcomes in Morocco, but these mainly focused on Moroccan teenagers and adolescents (López et al., 2012; Hamrani et al., 2015; El-ammari et al., 2017). Morocco is a fast-growing country, experiencing an important epidemiological and nutritional transition (Belahsen, 2014; Ronto et al., 2018). Urbanization and economic growth have been identified as the main determinants of reduced physical activity levels in this population, in which 24% of women and 9% of men were classified in the lowest physical activity group (Najdi et al., 2011). To obtain evidence on a link between physical activity and CRC in the Moroccan context, we studied the association between physical activity, sedentary behavior and CRC risk in a population-based case-control study. Materials and Methods: Study population A case-control study was conducted from September 2009 until February 2017 in five Moroccan university hospitals located in Rabat, Casablanca, Oujda, Fez, and Marrakech (Najdi et al., 2011). Inclusion criteria for cases and controls Details of this study have been reported elsewhere (Fondation Lalla Salma Prévention et Traitement des Cancers). 
Briefly, only newly diagnosed CRC cases were recruited in this study. Controls were randomly selected from healthy, disease-free outpatients accompanying other patients and from visitors, recruited in the same time-window as the cases. Cases and controls were individually matched on age (± 5 years), sex, and center. Other eligibility criteria included the following: being at least 18 years old; not having received any treatment (radiotherapy, chemotherapy or hormone therapy); being free of psychiatric problems, diabetes mellitus and cardiovascular disease; and being able to communicate and complete the interview. Participation rate In this study, the participation rate was 97.5% (1,516/1,555) for cases and 75.8% (1,516/2,000) for controls. Ethical procedure The protocol of this study was approved by the Ethics Committee of the University of Fez. Before starting the study, all participants provided written informed consent to participate. Data collection Trained interviewers collected data using a structured questionnaire. Information on sociodemographic characteristics and clinical, lifestyle and dietary data were collected via face-to-face interview. Socio-demographic data included age, sex, center, residency, marital status, educational level, and monthly income. Clinical data included the type and the stage of cancer, family history, especially of CRC (first- and second-degree relatives), and the use of non-steroidal anti-inflammatory drugs (NSAID). Estimating the intensity of physical activity requires taking into account the frequency, duration, intensity, and type of physical activity (work, household and recreational activities) (World Health Organization). The Global Physical Activity Questionnaire (GPAQ) was used to collect this detailed information on physical activity, and the Metabolic Equivalent of Task (MET) was calculated for each participant according to the GPAQ guidelines (World Health Organization). Physical activity intensity was obtained by dividing the METs into three categories: low intensity (<600 MET-minutes per week), moderate intensity (600–3,000 MET-minutes per week), and vigorous intensity (≥3,000 MET-minutes per week). Information about sedentary behavior was collected as the time spent sitting or reclining during a typical day. A sedentary person was defined as one spending more than 4 hours per day in a sitting or lying position, excluding time spent asleep (Sigmundová et al., 2015). Information on alcohol consumption was collected and classified into never or current consumers; smoking status was defined according to the International Union Against Tuberculosis and Lung Disease guide (never, current and ex-smokers) (Slama et al., 2008). A validated food frequency questionnaire (FFQ) was used for dietary assessment (El Kinany et al., 2018). This FFQ included 255 items to estimate food intake in the Moroccan population. Food consumption frequencies were divided into eight categories: never, 1-3 times/month, once a week, 2-4 times/week, 5-6 times/week, once a day, 2-3 times/day, and 4 or more times/day. Anthropometric measurements, including weight and height, were extracted from medical records. Body mass index (BMI; kg/m2) was calculated as weight in kilograms divided by the square of height in meters. BMI was classified using cut-off points recommended by WHO [29]: underweight [16–18.5[ kg/m2, normal [18.5–25[ kg/m2, overweight [25–30[ kg/m2, and obesity for BMI ≥30 kg/m2. 
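As a quick illustration of the classification rules just described (the GPAQ MET-minutes/week cut-offs, the more-than-4-hours/day sedentary definition, and the WHO BMI categories), a minimal Python sketch is given below. This is not the authors' analysis code, which is not provided; the function and variable names are illustrative only.

```python
# Illustrative sketch of the exposure classifications described above (not the study's code).

def activity_intensity(met_min_per_week: float) -> str:
    """GPAQ-based intensity: low <600, moderate 600-3,000, vigorous >=3,000 MET-min/week."""
    if met_min_per_week < 600:
        return "low"
    if met_min_per_week < 3000:
        return "moderate"
    return "vigorous"


def is_sedentary(sitting_hours_per_day: float) -> bool:
    """Sedentary = more than 4 hours/day spent sitting or reclining, excluding sleep."""
    return sitting_hours_per_day > 4


def bmi_category(weight_kg: float, height_m: float) -> str:
    """WHO BMI categories used in the study."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"


print(activity_intensity(2400), is_sedentary(5), bmi_category(82, 1.70))
# -> moderate True overweight
```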
For the BMI subgroup analyses, risk estimates were classified into "low" and "high" BMI groups. The "low" BMI group comprised participants in the underweight or normal range of BMI (BMI < 25 kg/m2), and the "high" BMI group comprised those in the overweight (25 ≤ BMI < 30 kg/m2) or obese (BMI ≥ 30 kg/m2) ranges. Statistical analysis Exclusion criteria applied prior to commencing the analyses were: participants with an unspecified primary cancer (n=7), cases with old biopsies (n=6), participants with missing dietary data (n=10), duplicate records (n=2), unmatched records (n=8), and participants in the lowest and highest 1% of the distribution of the ratio between energy intake and energy requirement (n=30). Descriptive analyses were conducted using frequencies for categorical variables and means ± standard deviation (SD) for continuous variables. To assess differences between cases and controls, we used the McNemar test for categorical variables and a paired t-test for continuous variables. A description of the study population was published previously (El Kinany et al., 2019). Conditional logistic regression models were used to evaluate the associations between intensity of physical activity, sedentary behavior and CRC risk. The adjusted odds ratio (ORa) and 95% confidence interval (95% CI) were estimated taking into account relevant confounders: age (years), residency (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), BMI categories (normal, underweight, overweight and obesity), smoking status (never smoker, ex-smoker and current smoker), alcohol (yes, no), family history of CRC (yes, no), sedentary behavior (yes, no), NSAID use (yes, no), intake of red and processed meat (continuous, g/day), fiber (continuous, g/day), calcium (continuous, g/day), and total energy intake (continuous, kcal/day). As body size/adiposity is potentially on the causal pathway linking physical activity and sedentary activities with colorectal cancer, we also performed all models with and without adjustment for BMI categories. Further adjustment for previous screening for colorectal cancer resulted in virtually unchanged risk estimates, so this variable was not included in the multivariable models (supplementary material, Tables 1 and 2). Trend tests across physical activity categories were calculated by entering the categorical exposure variables into the models as continuous variables. In addition to overall CRC, analyses were also undertaken for colon cancer and rectal cancer, for sexes combined and for men and women separately. Heterogeneity of associations by sex and across anatomical cancer subsites was assessed by calculating χ2 statistics. We examined effect modification by BMI; tests of interaction were based on a Wald test of the interaction term. The interaction with BMI was not statistically significant for colon or rectal cancer; only for overall CRC did we find a borderline significant interaction between BMI and PA. Results: Table 1 presents the general socio-demographic characteristics and potential confounders for this case-control study. Compared to controls, the mean age of cases was slightly higher (56.45 ± 13.95 vs. 55.50 ± 13.70 years). Marital status and residency were similarly distributed between cases and controls. 
When cases and controls were compared, cases were more likely to be smokers and to have a family history of CRC. Concerning CRC anatomical location, 50.2% of cases had colon cancer and 49.8% had rectal cancer. Table 2 shows the distribution of physical activity intensity in MET-minutes/week and sedentary behavior among CRC cases and controls. More than a quarter of both female and male cases were in the low physical activity category and had sedentary behavior. In addition, more than one-third of cases and controls (women and men) had a sedentary lifestyle. Table 3 shows the crude and adjusted odds ratios for CRC risk by intensity of physical activity and sedentary behavior. Moderate and high levels of physical activity, compared with low physical activity intensity, were associated with reduced risk of rectal cancer and overall CRC in both the crude and the multivariable models. For rectal cancer, compared with low physical activity intensity, the estimated risks for moderate and high physical activity intensity were ORa=0.72, 95% CI: 0.61-0.85 and ORa=0.67, 95% CI: 0.54-0.82, respectively (p-trend<0.001). For overall CRC, compared with low physical activity intensity, the estimated risks for moderate and high physical activity intensity were ORa=0.80, 95% CI: 0.71-0.90 and ORa=0.72, 95% CI: 0.62-0.83, respectively (p-trend<0.001). For colon cancer, an inverse association was found for high activity, but it did not reach the significance threshold (ORa=0.77, 95% CI: 0.62-0.96, p-trend=0.07). We found a borderline significant interaction between BMI and PA for overall CRC (p-interaction = 0.05); however, inverse associations were found for both the low and high BMI strata (Table 3). No significant interactions between BMI and PA were observed for colon (p-interaction = 0.27) or rectal cancer (p-interaction = 0.12). The inverse association we found between physical activity and CRC risk was consistent across low and high BMI groups (supplementary material, Tables 3 and 4). No significant association was observed between sedentary behavior and CRC risk, except for rectal cancer, for which sedentary behavior was positively associated (ORa=1.19, 95% CI: 1.01-1.40). Table 4 shows crude and adjusted ORs and 95% CIs by anatomical location of the tumour (colon or rectum) for physical activity intensity and sedentary behavior in men and women separately. Before and after adjustment for confounding factors, moderate and high levels of physical activity were associated with reduced risk of overall CRC in men; the adjusted ORs were 0.83 (95% CI: 0.69-0.99) and 0.69 (95% CI: 0.56-0.85), respectively (p-trend<0.001). Similar relationships were found in women for overall CRC: the adjusted OR was 0.76 (95% CI: 0.65-0.89) for moderate PA and 0.75 (95% CI: 0.60-0.93) for high PA (p-trend<0.001, p-heterogeneity=0.001). For colon cancer, in both men and women, an inverse association was found for moderate and high PA, but it did not reach the significance threshold (p-heterogeneity=0.07). For rectal cancer, in women, moderate and high levels of physical activity were associated with reduced risk (ORa=0.72, 95% CI: 0.61-0.85 and ORa=0.67, 95% CI: 0.54-0.82, respectively; p-trend<0.001). In men, an inverse association was only found for high levels of PA (ORa=0.66, 95% CI: 0.50-0.88, p-trend<0.02, p-heterogeneity=0.001). 
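For readers less familiar with how adjusted odds ratios such as those reported above are read off a conditional logistic regression, the short Python sketch below shows the generic relationship OR = exp(β) and 95% CI = exp(β ± 1.96·SE). It is purely illustrative: the β and SE are back-calculated from one reported estimate (ORa=0.72, 95% CI: 0.61-0.85) rather than taken from the study's model output.

```python
# Illustrative only: how an odds ratio and its 95% CI relate to a log-odds coefficient.
import math


def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Return (OR, lower, upper) for a log-odds coefficient and its standard error."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)


# Back-calculate beta and SE from the reported ORa=0.72 (95% CI: 0.61-0.85) for demonstration.
beta = math.log(0.72)
se = (math.log(0.85) - math.log(0.61)) / (2 * 1.96)

or_, lo, hi = odds_ratio_ci(beta, se)
print(f"ORa = {or_:.2f} (95% CI: {lo:.2f}-{hi:.2f})")  # ORa = 0.72 (95% CI: 0.61-0.85)
```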
For sedentary behavior, a positive association was limited to rectal cancer (ORa=1.28, 95% CI: 1.02-1.61) but not colon cancer, and was found only in men (p-heterogeneity=0.001). In addition, no association was found between sedentary behavior and the risks of overall CRC and colon cancer in either men or women. Adjusted Odds Ratio for Physical Activity Intensity and Sedentary Behavior in CRC Cases and Controls by Anatomical Location of the CRC (Colon or Rectum) (N=2906). *Crude Odds Ratio (ORc), crude model analysis; ¥Adjusted Odds Ratio (ORa), multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, ex-smoker and current smoker), non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), and family history of colorectal cancer (yes or no). Adjusted Odds Ratio for Physical Activity Intensity and Sedentary Behavior in CRC Cases and Controls by Sex Stratification (N=2906). *Crude Odds Ratio (ORc), crude model analysis; ¥Adjusted Odds Ratio (ORa), multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, ex-smoker and current smoker), alcohol consumption (yes, no), non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), and family history of colorectal cancer (yes or no); +Adjusted Odds Ratio (ORa) for women, multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), and family history of colorectal cancer (yes or no). Interaction by BMI for the Association between PA and Overall CRC, Colon and Rectal Cancer (N=2906). Adjusted Odds Ratio for Physical Activity Intensity in CRC Cases and Controls by BMI Stratification (N=2906). ¥Adjusted Odds Ratio (ORa), multivariable model: conditional logistic regression using age in years, residence (urban, rural), education level (illiterate, primary, secondary, higher), monthly income (low, medium, high), smoking status (never smoker, ex-smoker and current smoker), non-steroidal anti-inflammatory drugs (yes or no), total energy intake (continuous), intakes of red processed meat and dietary fiber (both continuous), calcium (continuous), and family history of colorectal cancer (yes or no). Discussion: In this CRC case-control study carried out in Morocco, we examined the association between physical activity, sedentary behavior and CRC risk. We found that a high level of physical activity was associated with reduced risks of colon cancer, rectal cancer and overall CRC. For sedentary behavior, a positive association was found for rectal cancer, but not for overall CRC or colon cancer. We found an inverse association between moderate and high intensity activity and overall CRC risk among both men and women. 
Similar inverse associations between higher levels of physical activity and CRC risk have been reported by multiple other studies (Wolin and Tuchman, 2011; Golshiri et al., 2016; Ghafari et al., 2016). The inverse association we found for physical activity and CRC risk was consistent across low and high BMI groups. We found that colon cancer was inversely associated with high intensity of physical activity. Multiple studies showed an inverse association between colon cancer risk and physical activity (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund, 1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018). For sexes combined, we found an inverse association for physical activity and colon cancer risk, with similar magnitudes found for these inverse associations for men and women separately. Similar results were found by a large pooled analysis that also reported inverse associations between physical activity and colon cancer for both, men and women (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund ,1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018). In this current study, moderate and high intensity activity was inversely associated with rectal cancer risk for both men and women and sexes combined. Our findings are in agreement with the study published by Moore et al., ??? that showed a decrease in the risk of CRC and rectal cancer in individuals with vigorous physical activity (Lee et al., 1991; Slattery et al., 2003). However, the World Cancer Research Fund/American Institute for Cancer Research did not find any association between physical activity and rectal cancer risk (Lee et al., 1991; Thune and Lund, 1996; Robsahm et al., 2013). A cohort study in the Netherlands suggests that non-occupational physical activity was associated with rectal cancer in women (Simons et al., 2013a). Emerging evidence suggests that sedentary behavior may be a risk factor for CRC, independent of PA (Lynch, 2010; Simons et al., 2013; Schmid and Leitzmann, 2014; Cao et al., 2015; Kerr et al., 2017). In a meta-analysis, sedentary behavior as specified by time spent watching TV, occupational sitting time, and total sitting time was associated with a 54%, 24%, and 24% increased risk of colon cancer, respectively (Schmid and Leitzmann, 2014). In a more recent prospective analysis in the UK Biobank (Morris et al., 2018), greater television watching time, but not time spent on a computer, was associated with higher colon cancer risk; with no associations found for rectal cancer risk. In the current study, rather than domain-specific activities, sedentary behavior was defined as the time spent during a typical day sitting or reclining. Consequently, we were unable to assess how specific sedentary behaviors were associated with CRC risk. Substantial challenges also remain to translate the current understanding of the impact of sedentary behavior on CRC risk into interventions with possible clinical impact. Subjective measures based on self-reported information are prone to measurement error, whereas more objective measures are costly and lacking information on specific domains of sedentary behavior (Healy et al., 2011). Additional studies are needed to examine the possible role of sedentary behavior in colorectal cancer development. 
The main mechanisms that could explain the potentially beneficial effect of physical activity on CRC risk relate to its effects on weight and adiposity (mainly abdominal) and its favorable effects on circulating levels of insulin and insulin-like growth factor 1 (IGF-1), which promote cellular proliferation (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund, 1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018). Moving from sitting to standing, and regular physical activity more generally, may improve several functions of the human body, notably glucose regulation through increased insulin sensitivity and non–insulin-dependent glucose uptake in muscle (Short, 2013). Physical activity can also lower colorectal cancer risk by stimulating digestion and reducing transit time through the intestine, thereby reducing the exposure of the colonic mucosa and fecal contents to food-borne carcinogens (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund, 1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018). In addition, being active can mask mitochondrial aging in the muscle and increase blood flow, adrenergic signaling, and shear stress, which enhance vascular homeostasis of the endothelium (Brierley et al., 1996; Olufsen et al., 2005; Pagan et al., 2018), all of which have the potential to regulate tumor growth and tumor metabolism (Brierley et al., 1996; Kerr et al., 2017). Lower levels of sedentary behavior have also been associated with lower insulin levels and lower inflammation, whereas sedentary behaviors convincingly increase the risk of weight gain, overweight and obesity (Gerhardsson et al., 1988; Giovannucci et al., 1995; Thune and Lund, 1996; Colditz et al., 1997; Lee, 2003; Morris et al., 2018). The urban population in Morocco has increased from 29% in 1960 to 62.4% in 2018 (El Rhazi et al., 2020), while the country has undergone an economic transition characterized by increasing industrialization and accompanied by an increasingly sedentary lifestyle and decreased physical activity. Leisure activities (walking and cycling) and manual labor have been replaced by mechanized activities (Batnitzky, 2008). Moroccan rural residents are more likely to participate in all forms of physical activity (work, sport, transport, etc.) than urban residents; only 14.2% of rural residents did not meet WHO recommendations (El-ammari et al., 2017). Generally, in the Arab world, the prevalence of physical activity is higher in men than in women, owing to cultural and social factors and restrictions on outdoor exercise, especially for women. This is supported by prevalence data showing that the proportion in the lowest physical activity class was higher in women (24%) than in men (9%) in the Moroccan adult population in 2011 (Najdi et al., 2011). The problem of sedentariness has been further aggravated by changes in adolescent behavior: the prevalence of physical inactivity was 79.5% and that of sedentary behavior 36.5% among Moroccan adolescents in 2017 (El-ammari et al., 2017). Women are also more likely to show sedentary behaviors (26%) than men (16.1%) (El-ammari et al., 2017). Urbanization and globalization are the principal determinants of low physical activity among Moroccan adults. 
Low levels of physical activity and increased sedentary behavior have been associated with increasing health risks, calling for appropriate interventions and a political and educational framework to combat the pandemic of sedentary behavior among children, adolescents, and adults in Morocco. Increasing the frequency of, and active time in, school-based sports and enhancing public awareness of healthy lifestyles may reduce the prevalence of physical inactivity. As highlighted in the World Health Organization Global action plan on physical activity 2018–2030, all stakeholders should support the strengthening of evidence and data systems, particularly in LMICs (World Health Organization). This study has several strengths. It is among the first to investigate the associations between intensity of physical activity, sedentary behavior and CRC risk in North Africa in such a large sample. The relatively large number of participants permitted analyses by sex and across colorectal subsites. Further, the detailed information on exposure collected from participants enabled us to carefully adjust for known colorectal cancer risk factors. Potential limitations of this study relate to the difficulty of quantifying physical activity and sedentary behavior. Physical activity was measured with the GPAQ, whose questions focus on the year of the study and do not capture changes in physical activity over the life course. In addition, underestimation of physical activity levels is probable and may especially be an issue among housewives (El-ammari et al., 2017). To conclude, we found an inverse association between intensity of physical activity and CRC risk in the Moroccan population, and a positive association between sedentary behavior and rectal cancer risk. Considering that one-third of the study population had a sedentary lifestyle, these results can be used to establish public health strategies adapted to the Moroccan population. Abbreviations BMI: Body Mass Index. CI: Confidence Interval. CRC: Colorectal cancer. FFQ: Food Frequency Questionnaire. IARC: International Agency for Research on Cancer. GPAQ: Global Physical Activity Questionnaire. LMIC: Low and Middle Income Countries. MET: Metabolic Equivalent Task. NSAID: Non-Steroidal Anti-Inflammatory Drugs. SD: Standard Deviation. WCRF/AICR: World Cancer Research Fund/American Institute for Cancer Research. IARC disclaimer: Where authors are identified as personnel of the International Agency for Research on Cancer / World Health Organization, the authors alone are responsible for the views expressed in this article and they do not necessarily represent the decisions, policy or views of the International Agency for Research on Cancer / World Health Organization. Author Contribution Statement: The study idea and its design were conceived by ZH and KE, who also led the analyses and the interpretation of the data. KR conceived the study idea and design, led the analyses and interpretation of the data, and supervised the drafting. IH, NM, MG, MK, MD, HAB, BA and AA participated in the conception and the design of the study. AM, BK and IMZ contributed to the conception of the study and the collection of data. All authors have read and approved the manuscript.
Background: Physical activity has been associated with a lower risk of colorectal cancer in studies mainly conducted in high-income countries, while sedentary behavior has been suggested to increase CRC risk. In this study, we aimed to investigate the role of physical activity and sedentary behavior on CRC risk in the Moroccan population. Methods: A case-control study was conducted involving 1516 case-control pairs, matched on age, sex and center in five university hospital centers. A structured questionnaire was used to collect information on socio-demographics, lifestyle habits, family history of CRC, and non-steroidal anti-inflammatory drug (NSAID) use. Information on physical activity and sedentary behavior were collected by the Global Physical Activity Questionnaire (GPAQ). For each activity (work, household, and recreational activities), a metabolic equivalent (MET) was calculated using GPAQ recommendations. Conditional logistic regression models were used to assess the association between physical activity, sedentary behavior and the risk of overall CRC, colon cancer, and rectal cancer taking into account other CRC risk factors. Results: High level of physical activity was associated with lower risk of rectal cancer, colon cancer, and overall CRC, the adjusted odds ratios (ORa) for the highest versus the lowest level of activity were 0.67 (95% CI: 0.54-0.82), 0.77 (95% CI: 0.62-0.96), and 0.72 (95% CI: 0.62-0.83), respectively. In contrast, sedentary behavior was positively associated with rectal cancer risk (ORa=1.19, 95% CI: 1.01-1.40), but was unrelated to colon cancer risk (ORa=1.02, 95% CI: 0.87-1.20). Conclusions: We found an inverse association between physical activity and CRC risk in the Moroccan population, and a positive association between sedentary behavior and rectal cancer risk. Considering that one-third of the total population studied had a sedentary lifestyle, these results may be used to improve strategies of public health suitable for Moroccan population.
null
null
5,366
390
[ 105 ]
5
[ "activity", "physical", "cancer", "physical activity", "crc", "sedentary", "risk", "sedentary behavior", "behavior", "bmi" ]
[ "cancer risk associations", "cancer risk similar", "increased risk colorectal", "lower colorectal cancer", "crc colorectal cancer" ]
null
null
null
[CONTENT] Physical activity | sedentary behaviour | colorectal cancer risk | Morocco [SUMMARY]
null
[CONTENT] Physical activity | sedentary behaviour | colorectal cancer risk | Morocco [SUMMARY]
null
[CONTENT] Physical activity | sedentary behaviour | colorectal cancer risk | Morocco [SUMMARY]
null
[CONTENT] Adult | Case-Control Studies | Colonic Neoplasms | Exercise | Humans | Rectal Neoplasms | Sedentary Behavior [SUMMARY]
null
[CONTENT] Adult | Case-Control Studies | Colonic Neoplasms | Exercise | Humans | Rectal Neoplasms | Sedentary Behavior [SUMMARY]
null
[CONTENT] Adult | Case-Control Studies | Colonic Neoplasms | Exercise | Humans | Rectal Neoplasms | Sedentary Behavior [SUMMARY]
null
[CONTENT] cancer risk associations | cancer risk similar | increased risk colorectal | lower colorectal cancer | crc colorectal cancer [SUMMARY]
null
[CONTENT] cancer risk associations | cancer risk similar | increased risk colorectal | lower colorectal cancer | crc colorectal cancer [SUMMARY]
null
[CONTENT] cancer risk associations | cancer risk similar | increased risk colorectal | lower colorectal cancer | crc colorectal cancer [SUMMARY]
null
[CONTENT] activity | physical | cancer | physical activity | crc | sedentary | risk | sedentary behavior | behavior | bmi [SUMMARY]
null
[CONTENT] activity | physical | cancer | physical activity | crc | sedentary | risk | sedentary behavior | behavior | bmi [SUMMARY]
null
[CONTENT] activity | physical | cancer | physical activity | crc | sedentary | risk | sedentary behavior | behavior | bmi [SUMMARY]
null
[CONTENT] 2018 | crc | bray 2018 | bray | incidence | physical activity | physical | cancer | risk | activity [SUMMARY]
null
[CONTENT] 95 | 95 ci | ora | cancer | continuous | adjusted | crc | activity | ci | physical activity [SUMMARY]
null
[CONTENT] activity | physical | cancer | physical activity | crc | risk | sedentary | study | bmi | 2018 [SUMMARY]
null
[CONTENT] CRC ||| CRC | Moroccan [SUMMARY]
null
[CONTENT] CRC | 0.67 | 95% | CI | 0.54-0.82 | 0.77 | 95% | CI | 0.62-0.96 | 0.72 | 95% | CI | 0.62-0.83 ||| 95% | CI | 1.01 | 95% | CI | 0.87-1.20 [SUMMARY]
null
[CONTENT] CRC ||| CRC | Moroccan ||| 1516 | five ||| CRC | NSAID ||| the Global Physical Activity Questionnaire | GPAQ ||| MET | GPAQ ||| CRC | CRC ||| CRC | 0.67 | 95% | CI | 0.54-0.82 | 0.77 | 95% | CI | 0.62-0.96 | 0.72 | 95% | CI | 0.62-0.83 ||| 95% | CI | 1.01 | 95% | CI | 0.87-1.20 ||| CRC | Moroccan ||| one-third | Moroccan [SUMMARY]
null
The role of coronary artery collaterals in the preservation of left ventricular function: a study to address a long-standing controversy.
28470330
The functional significance of coronary artery collateral (CAC) vasculature in humans has been debated for decades and this has been compounded by the lack of a standard, systematic, objective method of grading and documenting CAC flow in man. CACs serve as alternative conduits for blood in obstructive coronary artery disease. This study aimed to evaluate the impact of CACs on left ventricular function in the presence of total coronary arterial occlusion.
INTRODUCTION
The study group included the coronary angiographic records of 97 patients (mean age: 59 ± 8 years). CACs were graded from 0-3 based on the collateral connection between the donor and recipient arteries. Left ventricular function was computed from the ventriculogram and expressed as ejection fraction (EF).
METHODS
The mean EF of the patients with grades 0, 1, 2 and 3 CACs were calculated as 50.4, 47, 60.5 and 70%, respectively. A significant difference was recorded in the mean EF calculated for the different CAC grades (p = 0.001). There was a significant positive correlation (p < 0.001; r = 0.478) between the mean EF and the CAC grades.
RESULTS
The patients with better coronary collateral grades had a higher mean EF. Therefore, as the grade of CACs increased, there was an improvement in their ability to preserve left ventricular function.
CONCLUSION
[ "Aged", "Collateral Circulation", "Coronary Angiography", "Coronary Circulation", "Coronary Occlusion", "Coronary Vessels", "Female", "Humans", "Male", "Middle Aged", "Severity of Illness Index", "Stroke Volume", "Ventricular Function, Left" ]
5488059
Introduction
Controversy has existed for decades regarding the functional significance of coronary artery collaterals (CACs) in humans,1 and this has been compounded by the lack of a standard systematic method for determining CAC flow in man.2,3 These CACs are reported to have a protective effect on myocardial perfusion and contractile function, and to prevent left ventricular (LV) aneurysm formation in the presence of severe coronary artery obstruction.4,5 The presence of collateral vessels may play a major role in determining whether the patient will develop symptoms of myocardial ischaemia and vulnerability of the myocardium to myocardial infarction.6 The use of coronary angiography allows correlation of the extent of development of CACs with the severity of coronary arterial disease.7,8 The presence of functional collateral vessels can be of important prognostic value and can also assist in determining the need for intervention and the type of interventional procedure to be performed.9,10 Indeed, the presence of well-developed coronary collateral vasculature and flow has been correlated with the absence of ischaemic symptoms in patients with established coronary artery disease (CAD).11 Meier and colleagues,12 in an analysis of previous studies on the effect of coronary collaterals on mortality, reported that well-developed collaterals reduced the mortality rate in the order of 35%. The presence of adequately developed collateral supply limits the degree of myocardial necrosis during myocardial infarction.8,13 The area at risk of myocardial infarction is inversely related to the collateral supply to that region, and therefore becomes zero in the presence of well-developed functional collaterals.14-16 In cases of unsuccessful intra-coronary thrombolytic therapy after the onset of symptoms in acute myocardial infarction, the improvement in LV function and wall motion in the infarct region have been associated with the presence of collateral flow to the region perfused by the obstructed vessel.5,17 However, some reports have cast doubt on the value of CACs. Banerjee reported that the presence of CACs had no protective role on the incidence of LV aneurysm formation following myocardial infarction.18 Ilia et al.19 also reported that there was no correlation between the characteristics of CACs and the presence or absence of LV systolic abnormality in patients with significant CAD. Furthermore, Turgut et al.20 stated that coronary collaterals did not have a protective role on preservation of LV function in the presence of severe left anterior descending artery stenosis. Meier et al. also reported that the development of good CACs increased the risk of restenosis after percutaneous coronary intervention.21 In view of these controversies, our study was undertaken to evaluate the effect of CACs on LV function in the setting of a totally occluded coronary artery demonstrated on angiogram.
Methods
The study group was selected from the reviewed angiographic records of 2 029 consecutive patients (mean age: 59 ± 12 years) who had had coronary catheterisation performed by interventional cardiologists for symptoms suggestive of CAD. In order to assess the effect of CACs on LV function in the presence of total occlusion of the coronary artery, only those coronary angiograms that had LV function assessed by ventriculography were selected for analysis. Ninety-seven such patients with total occlusion of a coronary artery and LV functional assessment were included in the analysed angiograms. The mean EF and the different grades of CACs in these patients were determined. The angiograms were obtained from the cardiac catheterisation laboratories of hospitals within the private sector in the eThekwini municipality region of KwaZulu-Natal, South Africa. Ethical approval (ethics number BE 196/13) for the study was obtained from the University of KwaZulu-Natal Biomedical Research Ethics Committee. Coronary arteriography was performed via the percutaneous transfemoral approach by injecting a radio-opaque contrast agent into the coronary blood vessels, and the images were taken using X-ray fluoroscopy. These images were recorded on digital media in DICOM (Digital Imaging and Communication in Medicine) format and stored in the cardiac catheterisation laboratories. The relationship between the location of the atherosclerotic lesions and the CAC grades were examined in the angiograms that met the inclusion criteria. In addition, the relationship between the location of the atherosclerotic lesions and the mean EF was also evaluated. The location of atherosclerotic lesion was determined by dividing the coronary arteries into proximal, middle and distal regions. The Rentrop grading system22 is the most widely used grading system for coronary collaterals and is employed by many researchers. However, most patients are graded Rentrop 2 or 3 in chronic total coronary occlusion.23 The grading of the coronary collaterals in the present study was based on the grading system used by Werner et al.,24 with the addition of a grade for absent CACs. This system centered on defining the collateral connection between the donor and the recipient arteries. Therefore, in this study, the coronary collaterals were graded as: grade 0 for absent collateralisation, where there were no demonstrable CACs to the distal region of the obstructed vessel (Fig. 1); grade 1 for poor collateralisation, where there were CACs showing no continuous connection between the donor and recipient arteries (Fig. 2); grade 2 for good collateralisation, where there were continuous threadlike connections between the donor and recipient arteries; and grade 3 for excellent collateralisation, where there were continuous prominent connections with side branches between the donor and recipient arteries (Fig. 3). Coronary angiogram in the right anterior oblique view (caudal angulation) showing obstruction of the diagonal branch of the left anterior descending (LAD) artery (red ring) without collateral vessels to the distal segment of the obstructed vessel. LCA, left coronary artery; D, diagonal; Cx, circumflex; OM, obtuse marginal artery. Coronary angiogram in the right anterior oblique view showing the filling of the posterior descending artery of an obstructed right coronary artery (RCA) by grade 1 collateral vessel (red arrows) originating from the septal branch of the left anterior descending (LAD) artery. 
LCA, left coronary artery; Cx, circumflex; SB, septal branch; PDA, posterior descending artery. Coronary angiogram in the right anterior oblique view (caudal angulation) showing obstruction of the circumflex (Cx) branch (red ring) with the filling of the obtuse marginal (OM) branch by grade 3 collateral vessel (red arrows) originating from the diagonal branch of the left anterior descending (LAD) artery. D, diagonal. Data were analysed with the Statistical Package for the Social Sciences (SPSS) version 21 for Windows (IBM SPSS, NY, USA). A p-value < 0.05 was considered statistically significant.
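As an illustration of the grading scheme and outcome measure described above, the following minimal Python sketch encodes the four collateral grades and the standard ejection-fraction formula, EF = (EDV − ESV) / EDV × 100. The grade descriptions paraphrase the Werner-based system used in this study; the EF formula is the conventional definition and is assumed here rather than quoted from the paper.

```python
# Illustrative sketch only; grade wording paraphrases the scheme above,
# and the EF formula is the standard definition, not taken from the paper.

CAC_GRADES = {
    0: "absent: no demonstrable collaterals to the distal segment of the occluded vessel",
    1: "poor: collaterals without a continuous donor-recipient connection",
    2: "good: continuous, threadlike donor-recipient connections",
    3: "excellent: continuous, prominent connections with side branches",
}


def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return (edv_ml - esv_ml) / edv_ml * 100


print(CAC_GRADES[3])
print(f"EF = {ejection_fraction(120, 48):.0f}%")  # EF = 60%
```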
Results
The mean age of the patients with coronary artery occlusion who had LV function assessed by ventriculography was 59 ± 8 years. The patients consisted of 25.8% females and 74.2% males (Table 1). The grades of the CACs were as follows: absent (15.4%), poor (15.4%), good (36.9%) and excellent (32.3%). The morphological properties of the coronary arterial tree in the analysed angiograms are shown in Table 1. The grades of the collateral pathways with regard to the location of atherosclerotic obstruction were evaluated. They were recorded as 15.9, 9.1, 34.1 and 40.9% in the proximal region of the coronary arteries for absent, poor, good and excellent CACs, respectively. The grades of the collateral pathways with obstruction of the middle region were recorded as 16.2, 16.2, 37.8 and 29.7% for absent, poor, good and excellent CACs, respectively. The grades of the collateral pathways with obstruction of the distal region were recorded as 18.8, 18.8, 37.5 and 25% for absent, poor, good and excellent CACs, respectively. There was no significant difference in the grades of CACs between the different regions of obstruction (p = 0.87) (Table 2). The mean EF of the patients with proximal, middle and distal location of atherosclerotic lesions was 63.3, 57.8 and 57.5%, respectively. This indicated that the best mean EF was recorded in the patients with proximally located atherosclerotic lesions. However, analysis of variance (ANOVA) showed that there was no significant difference in the mean EF calculated for the different locations of atherosclerotic lesions (p = 0.33) (Table 3). SD, standard deviation. The mean EF of the patients with absent, poor, good and excellent CACs was calculated as 50.4, 47, 60.5 and 70%, respectively. ANOVA showed a significant difference in the mean EF calculated for the different CAC grades in the patients (p < 0.001) (Table 4). SD, standard deviation. A post hoc test was performed to determine the significance of the differences in mean EF calculated for each grade of CAC. There were significant differences between the mean EF calculated for patients with absent and excellent CACs (p = 0.004), and between the mean EF for poor and excellent CACs (p < 0.001). In addition, there was also a significant difference between the mean EF calculated for patients with poor and good CACs (p < 0.05). The mean EF of the patients was also correlated with the CAC grades. In assessing the correlation between the mean EF and the CAC grades, a Spearman's correlation analysis was performed. This revealed a positive correlation coefficient (r = 0.478) that was significant (p < 0.001) between the mean EF of the patients and the CAC grades. This showed that the patients with better CAC grade had a higher mean EF.
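To make the statistical comparisons in this Results section concrete, the sketch below runs the same type of analysis (a one-way ANOVA of EF across the four CAC grades and a Spearman correlation between grade and EF) on synthetic data loosely centred on the reported group means. It uses SciPy rather than SPSS, and none of the numbers it prints are the study's results.

```python
# Illustrative re-creation of the analysis pattern on synthetic data (not the study dataset).
import numpy as np
from scipy.stats import f_oneway, spearmanr

rng = np.random.default_rng(0)
# Hypothetical EF samples (%) per collateral grade, roughly centred on the reported means.
ef_by_grade = {
    0: rng.normal(50, 10, 15),
    1: rng.normal(47, 10, 15),
    2: rng.normal(60, 10, 36),
    3: rng.normal(70, 10, 31),
}

# One-way ANOVA: does mean EF differ across CAC grades?
f_stat, p_anova = f_oneway(*ef_by_grade.values())

# Spearman correlation between CAC grade and EF.
grades = np.concatenate([[g] * len(v) for g, v in ef_by_grade.items()])
efs = np.concatenate(list(ef_by_grade.values()))
rho, p_spearman = spearmanr(grades, efs)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Spearman: rho = {rho:.3f}, p = {p_spearman:.4f}")
```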
Conclusion
The location of atherosclerotic lesion had no significant effect on the prevalence of CAC grades and the resultant LV function. However, with the development of well-functioning coronary collaterals, there was a significant improvement in the ability of these collaterals to preserve LV function.
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion" ]
[ "Controversy has existed for decades regarding the functional significance of coronary artery collaterals (CACs) in humans,1 and this has been compounded by the lack of a standard systematic method for determining CAC flow in man.2,3 These CACs are reported to have a protective effect on myocardial perfusion and contractile function, and to prevent left ventricular (LV) aneurysm formation in the presence of severe coronary artery obstruction.4,5 The presence of collateral vessels may play a major role in determining whether the patient will develop symptoms of myocardial ischaemia and vulnerability of the myocardium to myocardial infarction.6\nThe use of coronary angiography allows correlation of the extent of development of CACs with the severity of coronary arterial disease.7,8 The presence of functional collateral vessels can be of important prognostic value and can also assist in determining the need for intervention and the type of interventional procedure to be performed.9,10 Indeed, the presence of well-developed coronary collateral vasculature and flow has been correlated with the absence of ischaemic symptoms in patients with established coronary artery disease (CAD).11\nMeier and colleagues,12 in an analysis of previous studies on the effect of coronary collaterals on mortality, reported that well-developed collaterals reduced the mortality rate in the order of 35%. The presence of adequately developed collateral supply limits the degree of myocardial necrosis during myocardial infarction.8,13 The area at risk of myocardial infarction is inversely related to the collateral supply to that region, and therefore becomes zero in the presence of well-developed functional collaterals.14-16 In cases of unsuccessful intra-coronary thrombolytic therapy after the onset of symptoms in acute myocardial infarction, the improvement in LV function and wall motion in the infarct region have been associated with the presence of collateral flow to the region perfused by the obstructed vessel.5,17\nHowever, some reports have cast doubt on the value of CACs. Banerjee reported that the presence of CACs had no protective role on the incidence of LV aneurysm formation following myocardial infarction.18 Ilia et al.19 also reported that there was no correlation between the characteristics of CACs and the presence or absence of LV systolic abnormality in patients with significant CAD. Furthermore, Turgut et al.20 stated that coronary collaterals did not have a protective role on preservation of LV function in the presence of severe left anterior descending artery stenosis. Meier et al. also reported that the development of good CACs increased the risk of restenosis after percutaneous coronary intervention.21 In view of these controversies, our study was undertaken to evaluate the effect of CACs on LV function in the setting of a totally occluded coronary artery demonstrated on angiogram.", "The study group was selected from the reviewed angiographic records of 2 029 consecutive patients (mean age: 59 ± 12 years) who had had coronary catheterisation performed by interventional cardiologists for symptoms suggestive of CAD. In order to assess the effect of CACs on LV function in the presence of total occlusion of the coronary artery, only those coronary angiograms that had LV function assessed by ventriculography were selected for analysis.\nNinety-seven such patients with total occlusion of a coronary artery and LV functional assessment were included in the analysed angiograms. 
The mean EF and the different grades of CACs in these patients were determined.\nThe angiograms were obtained from the cardiac catheterisation laboratories of hospitals within the private sector in the eThekwini municipality region of KwaZulu-Natal, South Africa. Ethical approval (ethics number BE 196/13) for the study was obtained from the University of KwaZulu-Natal Biomedical Research Ethics Committee.\nCoronary arteriography was performed via the percutaneous transfemoral approach by injecting a radio-opaque contrast agent into the coronary blood vessels, and the images were taken using X-ray fluoroscopy. These images were recorded on digital media in DICOM (Digital Imaging and Communication in Medicine) format and stored in the cardiac catheterisation laboratories.\nThe relationship between the location of the atherosclerotic lesions and the CAC grades were examined in the angiograms that met the inclusion criteria. In addition, the relationship between the location of the atherosclerotic lesions and the mean EF was also evaluated. The location of atherosclerotic lesion was determined by dividing the coronary arteries into proximal, middle and distal regions.\nThe Rentrop grading system22 is the most widely used grading system for coronary collaterals and is employed by many researchers. However, most patients are graded Rentrop 2 or 3 in chronic total coronary occlusion.23\nThe grading of the coronary collaterals in the present study was based on the grading system used by Werner et al.,24 with the addition of a grade for absent CACs. This system centered on defining the collateral connection between the donor and the recipient arteries. Therefore, in this study, the coronary collaterals were graded as: grade 0 for absent collateralisation, where there were no demonstrable CACs to the distal region of the obstructed vessel (Fig. 1); grade 1 for poor collateralisation, where there were CACs showing no continuous connection between the donor and recipient arteries (Fig. 2); grade 2 for good collateralisation, where there were continuous threadlike connections between the donor and recipient arteries; and grade 3 for excellent collateralisation, where there were continuous prominent connections with side branches between the donor and recipient arteries (Fig. 3).\nCoronary angiogram in the right anterior oblique view (caudal angulation) showing obstruction of the diagonal branch of the left anterior descending (LAD) artery (red ring) without collateral vessels to the distal segment of the obstructed vessel. LCA, left coronary artery; D, diagonal; Cx, circumflex; OM, obtuse marginal artery.\nCoronary angiogram in the right anterior oblique view showing the filling of the posterior descending artery of an obstructed right coronary artery (RCA) by grade 1 collateral vessel (red arrows) originating from the septal branch of the left anterior descending (LAD) artery. LCA, left coronary artery; Cx, circumflex; SB, septal branch; PDA, posterior descending artery.\nCoronary angiogram in the right anterior oblique view (caudal angulation) showing obstruction of the circumflex (Cx) branch (red ring) with the filling of the obtuse marginal (OM) branch by grade 3 collateral vessel (red arrows) originating from the diagonal branch of the left anterior descending (LAD) artery. D, diagonal.\nData were analysed with the Statistical Package for the Social Sciences (SPSS) version 21 for Windows (IBM SPSS, NY, USA). 
A p-value < 0.05 was considered statistically significant.", "The mean age of the patients with coronary artery occlusion who had LV function assessed by ventriculography was 59 ± 8 years. The patients consisted of 25.8% females and 74.2% males (Table 1). The grades of the CACs were as follows: absent (15.4%), poor (15.4%), good (36.9%) and excellent (32.3%). The morphological properties of the coronary arterial tree in the analysed angiograms are shown in Table 1.\nThe grades of the collateral pathways with regard to the location of atherosclerotic obstruction were evaluated. They were recorded as 15.9, 9.1, 34.1 and 40.9% in the proximal region of the coronary arteries for absent, poor, good and excellent CACs, respectively. The grades of the collateral pathways with obstruction of the middle region were recorded as 16.2, 16.2, 37.8 and 29.7% for absent, poor, good and excellent CACs, respectively. The grades of the collateral pathways with the obstruction of the distal region were recorded as 18.8, 18.8, 37.5 and 25% for absent, poor, good and excellent CACs, respectively. There was no significant difference in the grades of CACs between the different regions of obstruction (p = 0.87) (2Table 2).\nThe mean EF of the patients with proximal, middle and distal location of atherosclerotic lesions was 63.3, 57.8 and 57.5%, respectively. This indicated that the best mean EF was recorded in the patients with proximally located atherosclerotic lesions. However, analysis of variance (ANOVA) showed that there was no significant difference in the mean EF calculated for the different locations of atherosclerotic lesions (p = 0.33) (Table 3).\nSD, standard deviation.\nThe mean EF of the patients with absent, poor, good and excellent CACs was calculated as 50.4, 47, 60.5 and 70%, respectively. ANOVA showed a significant difference in the mean EF calculated for the different CAC grades in the patients (p < 0.001) (Table 4).\nSD, standard deviation.\nA post hoc test was performed to determine the significance of the differences in mean EF calculated for each grade of CAC. There were significant differences between the mean EF calculated for patients with absent and excellent CACs (p = 0.004), and between the mean EF for poor and excellent CACs (p < 0.001). In addition, there was also a significant difference between the mean EF calculated for patients with poor and good CACs (p < 0.05).\nThe mean EF of the patients was also correlated with the CAC grades. In assessing the correlation between the mean EF and the CAC grades, a Spearman’s correlation analysis was performed. This revealed a positive correlation coefficient (r = 0.478) that was significant (p < 0.001) between the mean EF of the patients and the CAC grades. 
This showed that the patients with better CAC grade had a higher mean EF.", "Coronary collateral arteries have their origin from the same embryonic precursor as the native coronary arteries during embryogenesis; therefore the foundation of these collateral arterial networks is laid down during embryonic life and is present in the newborn.9,25 The normal human heart contains interconnecting channels,26 hence, coronary collateral pathways are present in both normal and diseased hearts.21 These channels exist as microvessels whose function is not clear and is not demonstrable angiographically when coronary circulation is normal or mildly obstructed.11,26\nFunctional collaterals were suggested to have developed from hypertrophic evolution of the vessels present in the normal heart.6 This evolutionary process is triggered by myocardial ischaemia and/or an increase in the pressure gradient in the collateral network.8,25,27 Due to this pressure gradient, there is an increase in the volume of blood propelled through these channels. They progressively dilate and are eventually angiographically visible as coronary collateral channels.26 The pressure gradient also results in an increased fluid shear stress in the vessel.28 This fluid shear stress is a primary morphogenic physical factor that determines the size of the developing collateral vessel.25\nIn the present study, the best developed CACs were recorded in those patients who had proximally located lesions (40.9%). Excellent collaterals were found in 29.7 and 25% of middle and distally located lesions, respectively. The more proximally located the lesion, the higher the pressure gradient between the normal (collateral-donating) coronary artery and the obstructed (collateral-receiving) vessel. In addition, the more proximal the lesion was situated, the greater the mass of ‘at risk’ ischaemic myocardium.29 Therefore, the highest prevalence of excellent collaterals in patients with proximally located lesions in the present study may have resulted from the combination of these factors (increased pressure gradient and myocardial ischaemia). Consequently, this results in an increased stimulus for collateral vessel formation.\nIt is apparent from the literature reviewed that there are no reports on the relationship between the situation of the lesion and LV function. In the present study, the mean EF calculated for the patients with proximally located lesions was the highest (63.3%) compared to mean EF for the middle (57.8%) and distally (57.5%) located lesions (Table 3). However, the current study did not find any significant difference in the prevalence of CACs with regard to the location of atherosclerotic lesion and the resultant preservation of LV function.\nThere are conflicting reports with regard to the functional importance of coronary collateral arteries. Sheehan et al.30 examined global left ventricular ejection fraction (LVEF) in patients with acute myocardial infarction before treatment and at discharge. They reported that global LVEF increased in patients with CACs but was the same in patients without coronary collaterals.\nHabib et al.31 divided patients who failed to canalise at 90 minutes after administration of a thrombolytic agent, into two groups (with and without collaterals) and reported that global LVEF was significantly greater in patients with CACs at hospital discharge. 
On the contrary, Wackers et al.32 found no difference in the global LVEF in patients with and without CACs.\nThere is yet another supposed negative effect of coronary collaterals, namely coronary ‘steal’. This occurs either when the pressure in the donor vessel is suddenly low or when there is higher resistance in the collateral pathway.33 Therefore, it results in the flow of blood from the region of the collateral-receiving vessel to the collateral-donating vessel. However, patients with poorly developed CACs are more prone to coronary steal than those with well-developed CACs.33\nTo our knowledge, this study is the first attempt at establishing a relationship between the different grades of CACs and LVEF in the presence of total coronary arterial obstruction. There was a significant difference (p < 0.001) in the mean EF calculated for the different grades of CACs. In addition, a post hoc test showed a significant difference in the mean EF between excellent and absent collaterals (p = 0.004) and excellent and poor collaterals (p < 0.001). Therefore, the development of excellent collaterals has a significant supportive effect in the preservation of LV function compared to patients with absent or poor collateralisation.\nThere was also a significant positive correlation between CAC grades and mean EF calculated for the different CAC grades. Our study corroborated the findings of Sheehan et al.30 and Habib et al.,31 that the presence of excellent and well-developed CACs had a significant role in the preservation of LV function.\nIn addition, the present study showed that, as the grades of the CACs increased, there was an improvement in the ability of these collaterals to preserve LV function. Consequently, LV myocardial perfusion was greater in patients with well-developed CACs where the native artery was totally occluded, and resulted in better preservation of LV function even in the face of an acute coronary event.34 To date, the significance of collateral circulation in coronary bypass surgery has not yet been fully investigated. However, it has been reported that the collateral circulation is favourable for the successful construction of coronary artery bypass grafts.26\nFrom the results of this study, it can be seen that the presence of well-developed CACs should be considered in decision making in the management of patients with coronary arterial obstruction. Where LV function is adequately preserved by coronary collaterals in asymptomatic patients, a strong case can be made for no intervention. Anecdotally, most cardiac practitioners would be aware of patients with total coronary arterial obstruction who have been leading a normal life, and even engaging in high-intensity sport without symptoms. Therefore, the significance of the coronary collateral arteries should not be underestimated, as identification of the CACs is relevant in clinical decision making.35\nThe limitations of the current study include the absence of clinical records, which made it impossible to determine the patients with risk factors and co-morbid conditions, such as diabetes mellitus and hypertension, which may also have influenced collateral vessel development. This would have enhanced the study; however, the aim of this study was to evaluate the functional importance of coronary collaterals on LV function, which was achieved by analysing the angiographic records.", "The location of atherosclerotic lesion had no significant effect on the prevalence of CAC grades and the resultant LV function. 
However, with the development of well-functioning coronary collaterals, there was a significant improvement in the ability of these collaterals to preserve LV function." ]
[ "introduction", "methods", "results", "discussion", "conclusion" ]
[ "coronary artery obstruction", "coronary collateral artery", "ventricular function" ]
Introduction: Controversy has existed for decades regarding the functional significance of coronary artery collaterals (CACs) in humans,1 and this has been compounded by the lack of a standard systematic method for determining CAC flow in man.2,3 These CACs are reported to have a protective effect on myocardial perfusion and contractile function, and to prevent left ventricular (LV) aneurysm formation in the presence of severe coronary artery obstruction.4,5 The presence of collateral vessels may play a major role in determining whether the patient will develop symptoms of myocardial ischaemia and vulnerability of the myocardium to myocardial infarction.6 The use of coronary angiography allows correlation of the extent of development of CACs with the severity of coronary arterial disease.7,8 The presence of functional collateral vessels can be of important prognostic value and can also assist in determining the need for intervention and the type of interventional procedure to be performed.9,10 Indeed, the presence of well-developed coronary collateral vasculature and flow has been correlated with the absence of ischaemic symptoms in patients with established coronary artery disease (CAD).11 Meier and colleagues,12 in an analysis of previous studies on the effect of coronary collaterals on mortality, reported that well-developed collaterals reduced the mortality rate in the order of 35%. The presence of adequately developed collateral supply limits the degree of myocardial necrosis during myocardial infarction.8,13 The area at risk of myocardial infarction is inversely related to the collateral supply to that region, and therefore becomes zero in the presence of well-developed functional collaterals.14-16 In cases of unsuccessful intra-coronary thrombolytic therapy after the onset of symptoms in acute myocardial infarction, the improvement in LV function and wall motion in the infarct region have been associated with the presence of collateral flow to the region perfused by the obstructed vessel.5,17 However, some reports have cast doubt on the value of CACs. Banerjee reported that the presence of CACs had no protective role on the incidence of LV aneurysm formation following myocardial infarction.18 Ilia et al.19 also reported that there was no correlation between the characteristics of CACs and the presence or absence of LV systolic abnormality in patients with significant CAD. Furthermore, Turgut et al.20 stated that coronary collaterals did not have a protective role on preservation of LV function in the presence of severe left anterior descending artery stenosis. Meier et al. also reported that the development of good CACs increased the risk of restenosis after percutaneous coronary intervention.21 In view of these controversies, our study was undertaken to evaluate the effect of CACs on LV function in the setting of a totally occluded coronary artery demonstrated on angiogram. Methods: The study group was selected from the reviewed angiographic records of 2 029 consecutive patients (mean age: 59 ± 12 years) who had had coronary catheterisation performed by interventional cardiologists for symptoms suggestive of CAD. In order to assess the effect of CACs on LV function in the presence of total occlusion of the coronary artery, only those coronary angiograms that had LV function assessed by ventriculography were selected for analysis. Ninety-seven such patients with total occlusion of a coronary artery and LV functional assessment were included in the analysed angiograms. 
The mean EF and the different grades of CACs in these patients were determined. The angiograms were obtained from the cardiac catheterisation laboratories of hospitals within the private sector in the eThekwini municipality region of KwaZulu-Natal, South Africa. Ethical approval (ethics number BE 196/13) for the study was obtained from the University of KwaZulu-Natal Biomedical Research Ethics Committee. Coronary arteriography was performed via the percutaneous transfemoral approach by injecting a radio-opaque contrast agent into the coronary blood vessels, and the images were taken using X-ray fluoroscopy. These images were recorded on digital media in DICOM (Digital Imaging and Communication in Medicine) format and stored in the cardiac catheterisation laboratories. The relationship between the location of the atherosclerotic lesions and the CAC grades was examined in the angiograms that met the inclusion criteria. In addition, the relationship between the location of the atherosclerotic lesions and the mean EF was also evaluated. The location of atherosclerotic lesion was determined by dividing the coronary arteries into proximal, middle and distal regions. The Rentrop grading system22 is the most widely used grading system for coronary collaterals and is employed by many researchers. However, most patients are graded Rentrop 2 or 3 in chronic total coronary occlusion.23 The grading of the coronary collaterals in the present study was based on the grading system used by Werner et al.,24 with the addition of a grade for absent CACs. This system centred on defining the collateral connection between the donor and the recipient arteries. Therefore, in this study, the coronary collaterals were graded as: grade 0 for absent collateralisation, where there were no demonstrable CACs to the distal region of the obstructed vessel (Fig. 1); grade 1 for poor collateralisation, where there were CACs showing no continuous connection between the donor and recipient arteries (Fig. 2); grade 2 for good collateralisation, where there were continuous threadlike connections between the donor and recipient arteries; and grade 3 for excellent collateralisation, where there were continuous prominent connections with side branches between the donor and recipient arteries (Fig. 3). Fig. 1: Coronary angiogram in the right anterior oblique view (caudal angulation) showing obstruction of the diagonal branch of the left anterior descending (LAD) artery (red ring) without collateral vessels to the distal segment of the obstructed vessel. LCA, left coronary artery; D, diagonal; Cx, circumflex; OM, obtuse marginal artery. Fig. 2: Coronary angiogram in the right anterior oblique view showing the filling of the posterior descending artery of an obstructed right coronary artery (RCA) by grade 1 collateral vessel (red arrows) originating from the septal branch of the left anterior descending (LAD) artery. LCA, left coronary artery; Cx, circumflex; SB, septal branch; PDA, posterior descending artery. Fig. 3: Coronary angiogram in the right anterior oblique view (caudal angulation) showing obstruction of the circumflex (Cx) branch (red ring) with the filling of the obtuse marginal (OM) branch by grade 3 collateral vessel (red arrows) originating from the diagonal branch of the left anterior descending (LAD) artery. D, diagonal. Data were analysed with the Statistical Package for the Social Sciences (SPSS) version 21 for Windows (IBM SPSS, NY, USA). A p-value < 0.05 was considered statistically significant. 
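For readers who wish to tabulate angiographic readings programmatically, the four-level grading described above can be written down as a simple lookup. The short Python sketch below is purely illustrative and is not part of the study's workflow (grading was performed visually on the angiograms); the structure and function name are assumptions made for the example.

```python
# Illustrative sketch only -- the study graded collaterals visually on the
# angiograms. This simply encodes the four-level scheme (Werner-based grading
# with an added grade for absent collaterals) as a lookup for tabulation.

CAC_GRADES = {
    0: "absent: no demonstrable collaterals to the distal segment of the obstructed vessel",
    1: "poor: collaterals with no continuous donor-recipient connection",
    2: "good: continuous threadlike connection between donor and recipient arteries",
    3: "excellent: continuous prominent connection with side branches",
}

def describe_cac_grade(grade: int) -> str:
    """Return the descriptive label for a collateral grade (0-3)."""
    if grade not in CAC_GRADES:
        raise ValueError(f"CAC grade must be 0-3, got {grade}")
    return CAC_GRADES[grade]

if __name__ == "__main__":
    for g in sorted(CAC_GRADES):
        print(f"Grade {g}: {describe_cac_grade(g)}")
```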
Results: The mean age of the patients with coronary artery occlusion who had LV function assessed by ventriculography was 59 ± 8 years. The patients consisted of 25.8% females and 74.2% males (Table 1). The grades of the CACs were as follows: absent (15.4%), poor (15.4%), good (36.9%) and excellent (32.3%). The morphological properties of the coronary arterial tree in the analysed angiograms are shown in Table 1. The grades of the collateral pathways with regard to the location of atherosclerotic obstruction were evaluated. They were recorded as 15.9, 9.1, 34.1 and 40.9% in the proximal region of the coronary arteries for absent, poor, good and excellent CACs, respectively. The grades of the collateral pathways with obstruction of the middle region were recorded as 16.2, 16.2, 37.8 and 29.7% for absent, poor, good and excellent CACs, respectively. The grades of the collateral pathways with obstruction of the distal region were recorded as 18.8, 18.8, 37.5 and 25% for absent, poor, good and excellent CACs, respectively. There was no significant difference in the grades of CACs between the different regions of obstruction (p = 0.87) (Table 2). The mean EF of the patients with proximal, middle and distal location of atherosclerotic lesions was 63.3, 57.8 and 57.5%, respectively. This indicated that the best mean EF was recorded in the patients with proximally located atherosclerotic lesions. However, analysis of variance (ANOVA) showed that there was no significant difference in the mean EF calculated for the different locations of atherosclerotic lesions (p = 0.33) (Table 3). The mean EF of the patients with absent, poor, good and excellent CACs was calculated as 50.4, 47, 60.5 and 70%, respectively. ANOVA showed a significant difference in the mean EF calculated for the different CAC grades in the patients (p < 0.001) (Table 4). A post hoc test was performed to determine the significance of the differences in mean EF calculated for each grade of CAC. There were significant differences between the mean EF calculated for patients with absent and excellent CACs (p = 0.004), and between the mean EF for poor and excellent CACs (p < 0.001). In addition, there was also a significant difference between the mean EF calculated for patients with poor and good CACs (p < 0.05). The mean EF of the patients was also correlated with the CAC grades. In assessing the correlation between the mean EF and the CAC grades, a Spearman’s correlation analysis was performed. This revealed a positive correlation coefficient (r = 0.478) that was significant (p < 0.001) between the mean EF of the patients and the CAC grades. This showed that the patients with better CAC grade had a higher mean EF. 
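The comparisons reported above (one-way ANOVA across CAC grades, pairwise post hoc comparisons, and Spearman's correlation between grade and EF) were run in SPSS. A minimal Python/scipy sketch of the same kind of analysis is shown below; the EF values are hypothetical placeholders, since the per-patient data are not reproduced in the article, and the unadjusted pairwise t-test used as the post hoc comparison is an assumption (the article does not name the post hoc test employed).

```python
# Minimal sketch, assuming per-patient ejection fractions grouped by CAC grade.
# The numbers are hypothetical placeholders; the study's analysis was run in
# SPSS and the raw data are not reproduced here.
from scipy import stats

ef_by_grade = {
    0: [48, 52, 50, 51],   # absent    (hypothetical EF values, %)
    1: [45, 49, 47, 47],   # poor
    2: [59, 62, 60, 61],   # good
    3: [68, 71, 70, 71],   # excellent
}

# One-way ANOVA: does mean EF differ across the four collateral grades?
f_stat, p_anova = stats.f_oneway(*ef_by_grade.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Spearman's rank correlation between grade and EF, pooled per patient.
grades = [g for g, efs in ef_by_grade.items() for _ in efs]
efs = [ef for values in ef_by_grade.values() for ef in values]
rho, p_spearman = stats.spearmanr(grades, efs)
print(f"Spearman: r = {rho:.3f}, p = {p_spearman:.4f}")

# One pairwise post hoc comparison (absent vs excellent); the choice of test
# is an assumption and the p-value here is unadjusted for multiplicity.
t_stat, p_pair = stats.ttest_ind(ef_by_grade[0], ef_by_grade[3])
print(f"absent vs excellent: t = {t_stat:.2f}, p (unadjusted) = {p_pair:.4f}")
```

When all pairwise grade comparisons are of interest, a Tukey-style procedure (for example scipy.stats.tukey_hsd) would be another reasonable post hoc choice; the sketch above only illustrates the general approach.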
Discussion: Coronary collateral arteries have their origin from the same embryonic precursor as the native coronary arteries during embryogenesis; therefore the foundation of these collateral arterial networks is laid down during embryonic life and is present in the newborn.9,25 The normal human heart contains interconnecting channels,26 hence, coronary collateral pathways are present in both normal and diseased hearts.21 These channels exist as microvessels whose function is not clear and is not demonstrable angiographically when coronary circulation is normal or mildly obstructed.11,26 Functional collaterals were suggested to have developed from hypertrophic evolution of the vessels present in the normal heart.6 This evolutionary process is triggered by myocardial ischaemia and/or an increase in the pressure gradient in the collateral network.8,25,27 Due to this pressure gradient, there is an increase in the volume of blood propelled through these channels. They progressively dilate and are eventually angiographically visible as coronary collateral channels.26 The pressure gradient also results in an increased fluid shear stress in the vessel.28 This fluid shear stress is a primary morphogenic physical factor that determines the size of the developing collateral vessel.25 In the present study, the best developed CACs were recorded in those patients who had proximally located lesions (40.9%). Excellent collaterals were found in 29.7 and 25% of middle and distally located lesions, respectively. The more proximally located the lesion, the higher the pressure gradient between the normal (collateral-donating) coronary artery and the obstructed (collateral-receiving) vessel. In addition, the more proximal the lesion was situated, the greater the mass of ‘at risk’ ischaemic myocardium.29 Therefore, the highest prevalence of excellent collaterals in patients with proximally located lesions in the present study may have resulted from the combination of these factors (increased pressure gradient and myocardial ischaemia). Consequently, this results in an increased stimulus for collateral vessel formation. It is apparent from the literature reviewed that there are no reports on the relationship between the situation of the lesion and LV function. In the present study, the mean EF calculated for the patients with proximally located lesions was the highest (63.3%) compared to mean EF for the middle (57.8%) and distally (57.5%) located lesions (Table 3). However, the current study did not find any significant difference in the prevalence of CACs with regard to the location of atherosclerotic lesion and the resultant preservation of LV function. There are conflicting reports with regard to the functional importance of coronary collateral arteries. Sheehan et al.30 examined global left ventricular ejection fraction (LVEF) in patients with acute myocardial infarction before treatment and at discharge. They reported that global LVEF increased in patients with CACs but was the same in patients without coronary collaterals. Habib et al.31 divided patients who failed to canalise at 90 minutes after administration of a thrombolytic agent, into two groups (with and without collaterals) and reported that global LVEF was significantly greater in patients with CACs at hospital discharge. On the contrary, Wackers et al.32 found no difference in the global LVEF in patients with and without CACs. There is yet another supposed negative effect of coronary collaterals, namely coronary ‘steal’. 
This occurs either when the pressure in the donor vessel is suddenly low or when there is higher resistance in the collateral pathway.33 Therefore, it results in the flow of blood from the region of the collateral-receiving vessel to the collateral-donating vessel. However, patients with poorly developed CACs are more prone to coronary steal than those with well-developed CACs.33 To our knowledge, this study is the first attempt at establishing a relationship between the different grades of CACs and LVEF in the presence of total coronary arterial obstruction. There was a significant difference (p < 0.001) in the mean EF calculated for the different grades of CACs. In addition, a post hoc test showed a significant difference in the mean EF between excellent and absent collaterals (p = 0.004) and excellent and poor collaterals (p < 0.001). Therefore, the development of excellent collaterals has a significant supportive effect in the preservation of LV function compared to patients with absent or poor collateralisation. There was also a significant positive correlation between CAC grades and mean EF calculated for the different CAC grades. Our study corroborated the findings of Sheehan et al.30 and Habib et al.,31 that the presence of excellent and well-developed CACs had a significant role in the preservation of LV function. In addition, the present study showed that, as the grades of the CACs increased, there was an improvement in the ability of these collaterals to preserve LV function. Consequently, LV myocardial perfusion was greater in patients with well-developed CACs where the native artery was totally occluded, and resulted in better preservation of LV function even in the face of an acute coronary event.34 To date, the significance of collateral circulation in coronary bypass surgery has not yet been fully investigated. However, it has been reported that the collateral circulation is favourable for the successful construction of coronary artery bypass grafts.26 From the results of this study, it can be seen that the presence of well-developed CACs should be considered in decision making in the management of patients with coronary arterial obstruction. Where LV function is adequately preserved by coronary collaterals in asymptomatic patients, a strong case can be made for no intervention. Anecdotally, most cardiac practitioners would be aware of patients with total coronary arterial obstruction who have been leading a normal life, and even engaging in high-intensity sport without symptoms. Therefore, the significance of the coronary collateral arteries should not be underestimated, as identification of the CACs is relevant in clinical decision making.35 The limitations of the current study include the absence of clinical records, which made it impossible to determine the patients with risk factors and co-morbid conditions, such as diabetes mellitus and hypertension, which may also have influenced collateral vessel development. This would have enhanced the study; however, the aim of this study was to evaluate the functional importance of coronary collaterals on LV function, which was achieved by analysing the angiographic records. Conclusion: The location of atherosclerotic lesion had no significant effect on the prevalence of CAC grades and the resultant LV function. However, with the development of well-functioning coronary collaterals, there was a significant improvement in the ability of these collaterals to preserve LV function.
Background: The functional significance of coronary artery collateral (CAC) vasculature in humans has been debated for decades, and this has been compounded by the lack of a standard, systematic, objective method of grading and documenting CAC flow in man. CACs serve as alternative conduits for blood in obstructive coronary artery disease. This study aimed to evaluate the impact of CACs on left ventricular function in the presence of total coronary arterial occlusion. Methods: The study group included the coronary angiographic records of 97 patients (mean age: 59 ± 8 years). CACs were graded from 0 to 3 based on the collateral connection between the donor and recipient arteries. Left ventricular function was computed from the ventriculogram and expressed as ejection fraction (EF). Results: The mean EF of the patients with grades 0, 1, 2 and 3 CACs was calculated as 50.4, 47, 60.5 and 70%, respectively. A significant difference was recorded in the mean EF calculated for the different CAC grades (p = 0.001). There was a significant positive correlation (p < 0.001; r = 0.478) between the mean EF and the CAC grades. Conclusions: The patients with better coronary collateral grades had a higher mean EF. Therefore, as the grade of CACs increased, there was an improvement in their ability to preserve left ventricular function.
Introduction: Controversy has existed for decades regarding the functional significance of coronary artery collaterals (CACs) in humans,1 and this has been compounded by the lack of a standard systematic method for determining CAC flow in man.2,3 These CACs are reported to have a protective effect on myocardial perfusion and contractile function, and to prevent left ventricular (LV) aneurysm formation in the presence of severe coronary artery obstruction.4,5 The presence of collateral vessels may play a major role in determining whether the patient will develop symptoms of myocardial ischaemia and vulnerability of the myocardium to myocardial infarction.6 The use of coronary angiography allows correlation of the extent of development of CACs with the severity of coronary arterial disease.7,8 The presence of functional collateral vessels can be of important prognostic value and can also assist in determining the need for intervention and the type of interventional procedure to be performed.9,10 Indeed, the presence of well-developed coronary collateral vasculature and flow has been correlated with the absence of ischaemic symptoms in patients with established coronary artery disease (CAD).11 Meier and colleagues,12 in an analysis of previous studies on the effect of coronary collaterals on mortality, reported that well-developed collaterals reduced the mortality rate in the order of 35%. The presence of adequately developed collateral supply limits the degree of myocardial necrosis during myocardial infarction.8,13 The area at risk of myocardial infarction is inversely related to the collateral supply to that region, and therefore becomes zero in the presence of well-developed functional collaterals.14-16 In cases of unsuccessful intra-coronary thrombolytic therapy after the onset of symptoms in acute myocardial infarction, the improvement in LV function and wall motion in the infarct region have been associated with the presence of collateral flow to the region perfused by the obstructed vessel.5,17 However, some reports have cast doubt on the value of CACs. Banerjee reported that the presence of CACs had no protective role on the incidence of LV aneurysm formation following myocardial infarction.18 Ilia et al.19 also reported that there was no correlation between the characteristics of CACs and the presence or absence of LV systolic abnormality in patients with significant CAD. Furthermore, Turgut et al.20 stated that coronary collaterals did not have a protective role on preservation of LV function in the presence of severe left anterior descending artery stenosis. Meier et al. also reported that the development of good CACs increased the risk of restenosis after percutaneous coronary intervention.21 In view of these controversies, our study was undertaken to evaluate the effect of CACs on LV function in the setting of a totally occluded coronary artery demonstrated on angiogram. Conclusion: The location of atherosclerotic lesion had no significant effect on the prevalence of CAC grades and the resultant LV function. However, with the development of well-functioning coronary collaterals, there was a significant improvement in the ability of these collaterals to preserve LV function.
Background: The functional significance of coronary artery collateral (CAC) vasculature in humans has been debated for decades, and this has been compounded by the lack of a standard, systematic, objective method of grading and documenting CAC flow in man. CACs serve as alternative conduits for blood in obstructive coronary artery disease. This study aimed to evaluate the impact of CACs on left ventricular function in the presence of total coronary arterial occlusion. Methods: The study group included the coronary angiographic records of 97 patients (mean age: 59 ± 8 years). CACs were graded from 0 to 3 based on the collateral connection between the donor and recipient arteries. Left ventricular function was computed from the ventriculogram and expressed as ejection fraction (EF). Results: The mean EF of the patients with grades 0, 1, 2 and 3 CACs was calculated as 50.4, 47, 60.5 and 70%, respectively. A significant difference was recorded in the mean EF calculated for the different CAC grades (p = 0.001). There was a significant positive correlation (p < 0.001; r = 0.478) between the mean EF and the CAC grades. Conclusions: The patients with better coronary collateral grades had a higher mean EF. Therefore, as the grade of CACs increased, there was an improvement in their ability to preserve left ventricular function.
2,966
260
[]
5
[ "coronary", "cacs", "patients", "collateral", "mean", "collaterals", "lv", "ef", "mean ef", "artery" ]
[ "coronary collaterals significant", "effect coronary collaterals", "function coronary collaterals", "coronary collateral vasculature", "collateral circulation coronary" ]
[CONTENT] coronary artery obstruction | coronary collateral artery | ventricular function [SUMMARY]
[CONTENT] coronary artery obstruction | coronary collateral artery | ventricular function [SUMMARY]
[CONTENT] coronary artery obstruction | coronary collateral artery | ventricular function [SUMMARY]
[CONTENT] coronary artery obstruction | coronary collateral artery | ventricular function [SUMMARY]
[CONTENT] coronary artery obstruction | coronary collateral artery | ventricular function [SUMMARY]
[CONTENT] coronary artery obstruction | coronary collateral artery | ventricular function [SUMMARY]
[CONTENT] Aged | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Coronary Vessels | Female | Humans | Male | Middle Aged | Severity of Illness Index | Stroke Volume | Ventricular Function, Left [SUMMARY]
[CONTENT] Aged | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Coronary Vessels | Female | Humans | Male | Middle Aged | Severity of Illness Index | Stroke Volume | Ventricular Function, Left [SUMMARY]
[CONTENT] Aged | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Coronary Vessels | Female | Humans | Male | Middle Aged | Severity of Illness Index | Stroke Volume | Ventricular Function, Left [SUMMARY]
[CONTENT] Aged | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Coronary Vessels | Female | Humans | Male | Middle Aged | Severity of Illness Index | Stroke Volume | Ventricular Function, Left [SUMMARY]
[CONTENT] Aged | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Coronary Vessels | Female | Humans | Male | Middle Aged | Severity of Illness Index | Stroke Volume | Ventricular Function, Left [SUMMARY]
[CONTENT] Aged | Collateral Circulation | Coronary Angiography | Coronary Circulation | Coronary Occlusion | Coronary Vessels | Female | Humans | Male | Middle Aged | Severity of Illness Index | Stroke Volume | Ventricular Function, Left [SUMMARY]
[CONTENT] coronary collaterals significant | effect coronary collaterals | function coronary collaterals | coronary collateral vasculature | collateral circulation coronary [SUMMARY]
[CONTENT] coronary collaterals significant | effect coronary collaterals | function coronary collaterals | coronary collateral vasculature | collateral circulation coronary [SUMMARY]
[CONTENT] coronary collaterals significant | effect coronary collaterals | function coronary collaterals | coronary collateral vasculature | collateral circulation coronary [SUMMARY]
[CONTENT] coronary collaterals significant | effect coronary collaterals | function coronary collaterals | coronary collateral vasculature | collateral circulation coronary [SUMMARY]
[CONTENT] coronary collaterals significant | effect coronary collaterals | function coronary collaterals | coronary collateral vasculature | collateral circulation coronary [SUMMARY]
[CONTENT] coronary collaterals significant | effect coronary collaterals | function coronary collaterals | coronary collateral vasculature | collateral circulation coronary [SUMMARY]
[CONTENT] coronary | cacs | patients | collateral | mean | collaterals | lv | ef | mean ef | artery [SUMMARY]
[CONTENT] coronary | cacs | patients | collateral | mean | collaterals | lv | ef | mean ef | artery [SUMMARY]
[CONTENT] coronary | cacs | patients | collateral | mean | collaterals | lv | ef | mean ef | artery [SUMMARY]
[CONTENT] coronary | cacs | patients | collateral | mean | collaterals | lv | ef | mean ef | artery [SUMMARY]
[CONTENT] coronary | cacs | patients | collateral | mean | collaterals | lv | ef | mean ef | artery [SUMMARY]
[CONTENT] coronary | cacs | patients | collateral | mean | collaterals | lv | ef | mean ef | artery [SUMMARY]
[CONTENT] presence | myocardial | coronary | cacs | myocardial infarction | infarction | reported | collateral | developed | determining [SUMMARY]
[CONTENT] coronary | artery | branch | grade | anterior | descending | recipient | recipient arteries | grading | diagonal [SUMMARY]
[CONTENT] mean | ef | mean ef | patients | excellent cacs | cacs | grades | poor good | calculated | excellent [SUMMARY]
[CONTENT] collaterals | prevalence cac | resultant lv function | grades resultant lv | grades resultant lv function | atherosclerotic lesion significant effect | atherosclerotic lesion significant | function development functioning coronary | function development functioning | function development [SUMMARY]
[CONTENT] coronary | cacs | patients | collaterals | collateral | mean | lv | ef | mean ef | function [SUMMARY]
[CONTENT] coronary | cacs | patients | collaterals | collateral | mean | lv | ef | mean ef | function [SUMMARY]
[CONTENT] CAC | decades | CAC ||| ||| [SUMMARY]
[CONTENT] 97 | 59 ± | 8 years ||| 0-3 ||| [SUMMARY]
[CONTENT] 0 | 1 | 2 | 3 | 50.4 | 47 | 60.5 | 70% ||| CAC | 0.001 ||| 0.478 | EF | CAC [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] CAC | decades | CAC ||| ||| ||| 97 | 59 ± | 8 years ||| 0-3 ||| ||| ||| 0 | 1 | 2 | 3 | 50.4 | 47 | 60.5 | 70% ||| CAC | 0.001 ||| 0.478 | EF | CAC ||| ||| [SUMMARY]
[CONTENT] CAC | decades | CAC ||| ||| ||| 97 | 59 ± | 8 years ||| 0-3 ||| ||| ||| 0 | 1 | 2 | 3 | 50.4 | 47 | 60.5 | 70% ||| CAC | 0.001 ||| 0.478 | EF | CAC ||| ||| [SUMMARY]